Patents Examined by David F Dunphy
-
Patent number: 11741368
Abstract: In one aspect, hierarchical image segmentation is applied to an image formed of a plurality of pixels, by classifying the pixels according to a hierarchical classification scheme, in which at least some of those pixels are classified by a parent level classifier in relation to a set of parent classes, each of which is associated with a subset of child classes, and each of those pixels is also classified by at least one child level classifier in relation to one of the subsets of child classes, wherein each of the parent classes corresponds to a category of visible structure, and each of the subset of child classes associated with it corresponds to a different type of visible structure within that category.
Type: Grant
Filed: June 6, 2019
Date of Patent: August 29, 2023
Assignee: Five AI Limited
Inventors: John Redford, Sina Samangooei
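The parent/child scheme described in this abstract can be illustrated with a minimal sketch. The class hierarchy, classifier functions, and threshold values below are hypothetical placeholders, not details taken from the patent.

```python
# Each parent class (a category of visible structure) maps to a subset
# of child classes (types of structure within that category).
HIERARCHY = {
    "road": ["asphalt", "lane_marking"],
    "vegetation": ["grass", "tree"],
}

def classify_pixel(pixel, parent_clf, child_clfs):
    """Classify one pixel: parent level first, then the matching child-level classifier."""
    parent = parent_clf(pixel)           # e.g. "road"
    child = child_clfs[parent](pixel)    # e.g. "asphalt"
    return parent, child

# Toy stand-in classifiers keyed on a single intensity value.
parent_clf = lambda p: "road" if p < 128 else "vegetation"
child_clfs = {
    "road": lambda p: "asphalt" if p < 64 else "lane_marking",
    "vegetation": lambda p: "grass" if p < 200 else "tree",
}

print(classify_pixel(40, parent_clf, child_clfs))   # ('road', 'asphalt')
```

In a real segmentation network each classifier would be a learned model head rather than a threshold, but the two-stage dispatch is the same.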
-
Patent number: 11734827
Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame with the frame segmentation overlain on it, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
Type: Grant
Filed: May 11, 2021
Date of Patent: August 22, 2023
Assignee: Matterport, Inc.
Inventor: Gary Bradski
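The predict-correct-retrain loop in this abstract can be sketched abstractly. Everything below is illustrative: the "network" is a single threshold parameter and the update rule is invented for the example, not taken from the patent.

```python
def refine_on_frame(frame, segment, train, get_correction, iterations=3):
    """Iterate: predict a segmentation, collect the user's correction, retrain."""
    for _ in range(iterations):
        mask = segment(frame)                      # current segmentation of the frame
        correction = get_correction(frame, mask)   # user marks errors on the overlay
        train(frame, correction)                   # update weights toward the correction
    return segment(frame)

# Toy stand-in: the "network" is one threshold nudged toward a user target of 10.
state = {"threshold": 0}
segment = lambda frame: [x > state["threshold"] for x in frame]
get_correction = lambda frame, mask: 10   # user feedback: threshold should be 10

def train(frame, correction):
    state["threshold"] += (correction - state["threshold"]) // 2

final = refine_on_frame([5, 12, 20], segment, train, get_correction)
```

Deliberate overtraining on one scene is the point: the network only needs to generalize across that scene's frames, not to unseen footage.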
-
Patent number: 11727585
Abstract: Provided is an information processing device including an acquisition unit that acquires a first captured image, a second captured image, and a distance to a subject, and a derivation unit that derives an imaging position distance which is a distance between the first imaging position and the second imaging position, on the basis of a plurality of pixel coordinates for specifying a plurality of pixels of more than three pixels which are present in the same planar region as an emission position irradiated with the directional light beam on the real space and correspond to the position on the real space in each of the first captured image and the second captured image which are acquired by the acquisition unit, emission position coordinates which are derived on the basis of the distance acquired by the acquisition unit, a focal length of an imaging lens, and dimensions of imaging pixels.
Type: Grant
Filed: June 24, 2021
Date of Patent: August 15, 2023
Assignee: FUJIFILM CORPORATION
Inventor: Tomonori Masuda
-
Patent number: 11723344
Abstract: Aspects of this disclosure include a system for providing non-contact, computer-vision based monitoring of the health and pollination activity of a beehive. The system may include a camera positioned proximate to a beehive. The camera may include an onboard processor that analyzes video of the beehive captured by the camera and calculates an activity value that estimates the number of bees moving about the beehive. The calculated activity values may be uploaded to a server where they can be accessed via a user device. The user device may allow the user to display interactive plots of the activity values over a variety of time bases. The disclosed beehive monitoring system relies on relatively low-cost hardware and requires neither modification to the hive nor special constraints on the placement of the camera.
Type: Grant
Filed: January 17, 2022
Date of Patent: August 15, 2023
Assignee: KELTRONIX, INC.
Inventors: Kelton Temby, Jonathan Simpson
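One common way to estimate motion activity from video, sketched here, is frame differencing: count pixels whose intensity changes noticeably between consecutive frames as a proxy for the number of bees in motion. The threshold, the 1-D frame layout, and this particular metric are assumptions for illustration, not the patent's method.

```python
def activity_value(prev_frame, frame, threshold=20):
    """Count pixels whose intensity changed by more than `threshold` between frames."""
    return sum(
        1
        for a, b in zip(prev_frame, frame)
        if abs(a - b) > threshold
    )

# Two toy 1-D "frames": three pixels moved noticeably, two stayed still.
prev_frame = [10, 10, 200, 200, 50]
frame      = [90, 10, 100, 240, 55]
print(activity_value(prev_frame, frame))  # 3
```

Running such a metric on the camera's onboard processor keeps the upload small: only scalar activity values per time window need to reach the server, not video.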
-
Patent number: 11721130
Abstract: The present disclosure relates to a weakly supervised video activity detection method and system based on iterative learning. The method includes: extracting spatial-temporal features of a video that contains actions; constructing a neural network model group; training a first neural network model according to the class label of the video, a class activation sequence output by the first neural network model, and a video feature output by the first neural network model; training the next neural network model according to the class label of the video, a pseudo temporal label output by the current neural network model, a class activation sequence output by the next neural network model, and a video feature output by the next neural network model; and performing action detection on the test video according to the neural network model corresponding to the highest detection accuracy value.
Type: Grant
Filed: September 16, 2020
Date of Patent: August 8, 2023
Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Yan Song, Rong Zou, Xiangbo Shu
-
Patent number: 11720648
Abstract: A deep learning machine includes a classification unit having a labeling criterion and configured to label input data according to the labeling criterion, a conversion unit configured to integerize input data labeled as a first type requiring integerization among the input data labeled by the classification unit, a first learning data unit configured to receive the input data of the first type integerized through the conversion unit and to infer output data, and a second learning data unit configured to receive input data labeled as a second type requiring no integerization and to infer the output data.
Type: Grant
Filed: August 9, 2021
Date of Patent: August 8, 2023
Assignee: HYUNDAI MOBIS CO., LTD.
Inventor: Hyuk Lee
-
Patent number: 11721102
Abstract: A method of identifying fixing in a tennis match includes collecting one or more metrics related to a player in the tennis match using one or more computing devices, comparing the collected one or more metrics to one or more standards, and determining, based on the comparison using an algorithm that will identify a pattern or recurrence of unusual metrics, whether the player has deliberately lost one or more points in the tennis match.
Type: Grant
Filed: January 31, 2020
Date of Patent: August 8, 2023
Inventor: Fredric Goldstein
-
Patent number: 11715286
Abstract: Disclosed is a method for recognizing a marine object based on hyperspectral data, including collecting target hyperspectral data; preprocessing the target hyperspectral data; and detecting and identifying an object included in the target hyperspectral data based on a marine object detection and identification model, trained through learning of the detection and identification of the marine object. According to the present invention, the preprocessing and processing of the hyperspectral data collected in real time according to a communication state may be performed in the sky or on the ground.
Type: Grant
Filed: August 24, 2021
Date of Patent: August 1, 2023
Assignee: KOREA INSTITUTE OF OCEAN SCIENCE & TECHNOLOGY
Inventors: Dongmin Seo, Sangwoo Oh
-
Patent number: 11710255
Abstract: An object identification and collection method is disclosed. The method includes receiving a pick-up path that identifies a route in which to guide an object-collection system over a target geographical area to pick up objects, determining a current location of the object-collection system relative to the pick-up path, and guiding the object-collection system along the pick-up path over the target geographical area based on the current location. The method further includes capturing images in a direction of movement of the object-collection system along the pick-up path, identifying a target object in the images, tracking movement of the target object through the images, determining that the target object is within range of an object picker assembly on the object-collection system based on the tracked movement of the target object, and instructing the object picker assembly to pick up the target object.
Type: Grant
Filed: July 21, 2021
Date of Patent: July 25, 2023
Assignee: TerraClear Inc.
Inventors: Brent Ronald Frei, Dwight Galen McMaster, Michael Racine, Jacobus du Preez, William David Dimmit, Isabelle Butterfield, Clifford Holmgren, Dafydd Daniel Rhys-Jones, Thayne Kollmorgen, Vivek Ullal Nayak
-
Patent number: 11710305
Abstract: Described herein are systems, methods, and non-transitory computer readable media for validating or rejecting automated detections of an entity being tracked within an environment in order to generate a track representative of a travel path of the entity within the environment. The automated detections of the entity may be generated by an artificial intelligence (AI) algorithm. The track may represent a travel path of the tracked entity across a set of image frames. The track may contain one or more tracklets, where each tracklet includes a set of validated detections of the entity across a subset of the set of image frames and excludes any rejected detections of the entity. Each tracklet may also contain one or more user-provided detections in scenarios in which the tracked entity is observed or otherwise known to be present in an image frame but automated detection of the entity did not occur.
Type: Grant
Filed: November 9, 2021
Date of Patent: July 25, 2023
Assignee: Palantir Technologies Inc.
Inventors: Leah Anderson, Mark Montoya, Andrew Elder, Alisa Le, Ezra Zigmond, Jocelyn Rivero
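The tracklet notion in this abstract can be sketched as a simple merge: keep automated detections the user validated, drop rejected ones, and splice in user-provided detections for frames where automation missed the entity. The data layout (frame-indexed boxes) is an illustrative assumption.

```python
def build_tracklet(detections, verdicts, user_detections=None):
    """Keep automated detections marked valid; merge in user-provided frame fixes."""
    tracklet = {
        frame: box
        for (frame, box), ok in zip(detections, verdicts)
        if ok  # rejected automated detections are excluded
    }
    tracklet.update(user_detections or {})  # frames where the AI missed the entity
    return dict(sorted(tracklet.items()))

auto = [(0, (5, 5)), (1, (6, 5)), (2, (99, 99)), (3, (8, 6))]
verdicts = [True, True, False, True]   # the frame-2 detection was a false positive
tracklet = build_tracklet(auto, verdicts, user_detections={2: (7, 5)})
```

The result is a gap-free run of per-frame positions, which is what makes the tracklet usable as a segment of the entity's travel path.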
-
Patent number: 11704814
Abstract: In various examples, an adaptive eye tracking machine learning model engine ("adaptive-model engine") for an eye tracking system is described. The adaptive-model engine may include an eye tracking or gaze tracking development pipeline ("adaptive-model training pipeline") that supports collecting data, training, optimizing, and deploying an adaptive eye tracking model that is a customized eye tracking model based on a set of features of an identified deployment environment. The adaptive-model engine supports ensembling the adaptive eye tracking model, which may be trained on gaze vector estimation in surround environments and ensembled based on a plurality of eye tracking variant models and a plurality of facial landmark neural network metrics.
Type: Grant
Filed: May 13, 2021
Date of Patent: July 18, 2023
Assignee: NVIDIA Corporation
Inventors: Nuri Murat Arar, Niranjan Avadhanam, Hairong Jiang, Nishant Puri, Rajath Shetty, Shagan Sah
-
Patent number: 11704810
Abstract: Systems and techniques for detecting a crop related row from an image are described herein. An image that includes several rows, the rows including crop rows and furrows, can be obtained. The image can be segmented to produce a set of image segments. A filter can be shifted across respective segments of the set of image segments to get a set of positions. A line can be fit to members of the set of positions, the line representing a crop row or furrow.
Type: Grant
Filed: July 20, 2021
Date of Patent: July 18, 2023
Assignee: Raven Industries, Inc.
Inventors: Yuri Sneyders, John D. Preheim, Jeffrey Allen Van Roekel
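The pipeline in this abstract (segment, shift a filter, fit a line) can be sketched concretely. The matched filter and the ordinary least-squares fit below are illustrative choices, not necessarily the filter or fitting method the patent uses.

```python
def best_position(row, filt):
    """Slide `filt` across one image row; return the offset with the largest response."""
    responses = [
        sum(f * row[i + j] for j, f in enumerate(filt))
        for i in range(len(row) - len(filt) + 1)
    ]
    return max(range(len(responses)), key=responses.__getitem__)

def fit_line(points):
    """Ordinary least-squares fit y = m*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

# Toy image: each segment is one row with a bright (crop) band drifting right.
segments = [[0, 9, 0, 0, 0], [0, 0, 9, 0, 0], [0, 0, 0, 9, 0]]
positions = [(y, best_position(row, [1])) for y, row in enumerate(segments)]
slope, intercept = fit_line(positions)  # slope 1.0, intercept 1.0
```

The fitted line's slope then directly encodes how the crop row runs across the image, which is what a guidance system needs for steering.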
-
Patent number: 11698955
Abstract: Some implementations provide input-triggered user verification. This may involve triggering a user verification (e.g., capture of an image, sound, fingerprint, etc.) to verify a user's identity based on input (e.g., typing) received at the device. Triggering the user verification based on receiving input may help ensure that the image, sound, fingerprint, etc. is captured at a time when the user is close to the device, touching the fingerprint sensor, and/or in view of the camera during the capturing. Some implementations provide user verification based on a user identification of a previously selected image. This may involve using an inmate-selected picture or other image to recover a forgotten alphanumeric reference. Some implementations of the invention disclosed herein provide user verification based on a computer-vision identification of a wearable identification tag. This may involve using an image of the user's identification tag worn on the user's wrist to verify the user's identity.
Type: Grant
Filed: December 14, 2021
Date of Patent: July 11, 2023
Assignee: Confinement Telephony Technology, LLC
Inventors: Timothy Edwin Pabon, John Vincent Townsend, III, Robert James Deglman, Rick Allen Lubbehusen
-
Method for detecting cantor under intra-class occlusion based on cross-scale layered feature fusion
Patent number: 11694428
Abstract: Disclosed is a method for detecting Ophiocephalus argus cantor under intra-class occlusion based on cross-scale layered feature fusion, including image collecting, image processing and network modeling, where collected images are labeled, image sizes are adjusted to obtain input images, and the input images are input into an object detection network, integrated by convolution and inserted into cross-scale layered feature fusion modules. The method is characterized by: dividing all features input into the cross-scale layered feature fusion modules into n layers composed of s feature mapping subsets, fusing the features of each feature mapping subset with those of the other feature mapping subsets, and connecting them; carrying out a convolution operation and outputting a training result; adjusting network parameters by a loss function to obtain parameters for a network model; and inputting final output candidate boxes into a non-maximum suppression module to screen correct prediction boxes, so that a prediction result is obtained.
Type: Grant
Filed: March 15, 2023
Date of Patent: July 4, 2023
Assignee: Ludong University
Inventors: Jun Yue, Yifei Zhang, Qing Wang, Zhenbo Li, Guangjie Kou, Jun Zhang, Shixiang Jia, Ning Li
-
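The abstract's final step, screening candidate boxes with non-maximum suppression (NMS), is a standard detection post-process. This is a generic NMS sketch, not code from the patent: keep the highest-scoring box, drop boxes that overlap it above an IoU threshold, and repeat.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of the boxes kept after non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        # Keep a box only if it does not heavily overlap an already-kept box.
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

NMS matters especially under intra-class occlusion, the setting this patent targets: overlapping fish produce overlapping candidate boxes, and the IoU threshold decides which survive as distinct detections.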
Patent number: 11694426
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for identifying traffic control features based on telemetry patterns within digital image representations of vehicle telemetry information. The disclosed systems can generate a digital image representation based on collected telemetry information to represent the frequency of different speed-location combinations for transportation vehicles passing through a traffic area. The disclosed systems can also apply a convolutional neural network to analyze the digital image representation and generate a predicted classification of a type of traffic control feature that corresponds to the digital image representation of vehicle telemetry information. The disclosed systems further train the convolutional neural network to determine traffic control features based on training data.
Type: Grant
Filed: April 27, 2021
Date of Patent: July 4, 2023
Assignee: Lyft, Inc.
Inventors: Deeksha Goyal, Han Suk Kim, James Kevin Murphy, Albert Yuen
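The "digital image representation" here is essentially a 2-D histogram: bucket telemetry samples by (location, speed) and count frequencies, producing a grid a CNN can classify. The bin counts and value ranges below are assumptions for illustration, not parameters from the patent.

```python
def telemetry_image(samples, loc_bins=4, speed_bins=4,
                    max_loc=100.0, max_speed=20.0):
    """Build a loc_bins x speed_bins count grid from (location, speed) samples."""
    grid = [[0] * speed_bins for _ in range(loc_bins)]
    for loc, speed in samples:
        li = min(int(loc / max_loc * loc_bins), loc_bins - 1)
        si = min(int(speed / max_speed * speed_bins), speed_bins - 1)
        grid[li][si] += 1
    return grid

# Vehicles slowing to near zero as they approach the feature: a stop-sign-like
# pattern appears as counts marching toward the low-speed, low-distance corner.
samples = [(90, 15), (60, 10), (30, 4), (5, 0), (5, 0)]
image = telemetry_image(samples)
```

A stop sign, a traffic light, and an uncontrolled intersection each leave a visually distinct pattern in such a grid, which is why an image classifier is a natural fit.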
-
Patent number: 11693921
Abstract: A method of data preparation for artificial intelligence models includes receiving data characterizing a first plurality of images. The method further includes annotating a first subset of images of the first plurality of images based at least in part on a first user input to generate an annotated first subset of images. The annotating includes labelling one or more features of the first subset of images. The method also includes generating, by a training code, an annotation code, the training code configured to receive the annotated first subset of images as input and output the annotation code. The training code and the annotation code include computer-executable instructions. The method also includes annotating, by the annotation code, a second subset of images of the first plurality of images to generate an annotated second subset of images, wherein the annotating includes labelling one or more features of the second subset of images.
Type: Grant
Filed: December 10, 2020
Date of Patent: July 4, 2023
Assignee: Baker Hughes Holdings LLC
Inventors: Xiaoqing Ge, Dustin Michael Sharber, Jeffrey Potts, Braden Starcher
-
Patent number: 11688163
Abstract: A target recognition method and device based on a MASK RCNN network model are disclosed. The method comprises: determining a multi-stage network as a basic network; selecting at least one intermediate layer capable of extracting a feature map from the basic network, and inputting respectively a feature map output by the intermediate layer and a feature map output by an end layer of the basic network to corresponding MASK RCNN recognition networks to construct a network model based on the MASK RCNN, wherein the feature map output by the intermediate layer and the feature map output by the end layer have different sizes; training the MASK RCNN recognition networks with a data set, continuing until a preset training end condition is satisfied; and recognizing the target using the MASK RCNN recognition networks after they are trained. This solution is very suitable for small target recognition of a flying UAV.
Type: Grant
Filed: October 24, 2020
Date of Patent: June 27, 2023
Assignee: GOERTEK INC.
Inventor: Xiufeng Song
-
Patent number: 11676239
Abstract: Embodiments described herein include software, firmware, and hardware logic that provide techniques to perform arithmetic on sparse data via a systolic processing unit. Embodiments described herein provide techniques to skip computational operations for zero-filled matrices and sub-matrices. Embodiments additionally provide techniques to maintain data compression through to a processing unit. Embodiments additionally provide an architecture for a sparse aware logic unit.
Type: Grant
Filed: June 3, 2021
Date of Patent: June 13, 2023
Assignee: Intel Corporation
Inventors: Joydeep Ray, Scott Janus, Varghese George, Subramaniam Maiyuran, Altug Koker, Abhishek Appu, Prasoonkumar Surti, Vasanth Ranganathan, Andrei Valentin, Ashutosh Garg, Yoav Harel, Arthur Hunter, Jr., SungYe Kim, Mike Macpherson, Elmoustapha Ould-Ahmed-Vall, William Sadler, Lakshminarayanan Striramassarma, Vikranth Vemulapalli
-
Patent number: 11669593
Abstract: Systems and methods for training image processing models for vehicle data collection by image analysis are provided. An example method involves accessing an image of a field of interest in a vehicle captured by a camera in the vehicle; providing a user interface to display the image, receive input that defines a region of interest in the image that is expected to convey vehicle information, and receive input that assigns a label to the region of interest that associates the region of interest with an image processing model that is to be trained to extract a type of vehicle information from the region of interest; and contributing the image, labelled with the region of interest and the label associating the region of interest with the image processing model, to a training data library to train the image processing model.
Type: Grant
Filed: March 18, 2021
Date of Patent: June 6, 2023
Assignee: Geotab Inc.
Inventors: Thomas Arthur Walli, William John Ballantyne, Javed Siddique, Amir Antoun Renne Sayegh
-
Patent number: 11663816
Abstract: Provided is an apparatus for classifying an attribute of an image object, including: a first memory configured to store target object images that are indexed; a second memory configured to store target object images that are un-indexed; and an object attribute classification module configured to perform learning on the un-indexed target object images to construct a classifier for classifying a detailed attribute of a target object, and to finely adjust the classifier on the basis of the indexed target object images.
Type: Grant
Filed: February 12, 2021
Date of Patent: May 30, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeun Woo Lee, Sung Chan Oh