Patents Examined by Nathan J Bloom
-
Patent number: 12327392
Abstract: A method for image segmentation includes (a) clustering, based upon k-means clustering, pixels of an image into first clusters, (b) outputting a cluster map of the first clusters, (c) re-clustering the pixels into a new plurality of non-disjoint pixel-clusters, and (d) classifying the non-disjoint pixel-clusters in categories, according to a user-indicated classification. Another method for image segmentation includes (a) forming a graph with each node of the graph corresponding to a first respective non-disjoint pixel-cluster of the image and connected to each terminal of the graph and to all other nodes corresponding to other respective non-disjoint pixel-clusters that, in the image, are within a neighborhood of the first respective non-disjoint pixel-cluster, (b) setting weights of connections of the graph according to a user-indicated classification in categories respectively associated with the terminals, and (c) segmenting the image into the categories by cutting the graph based upon the weights.
Type: Grant
Filed: December 2, 2020
Date of Patent: June 10, 2025
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Amirhossein Khalilian-Gourtani, Neeraj J. Gadgil, Guan-Ming Su
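The first method's opening steps, clustering image pixels with k-means and emitting a cluster map, can be illustrated with a minimal sketch. The synthetic image, the number of clusters, and the use of scikit-learn's KMeans are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic RGB image (H x W x 3); a real pipeline would load an actual image.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3)).astype(np.float32)

# Step (a): cluster pixels by color with k-means.
pixels = image.reshape(-1, 3)
k = 4  # assumed number of first clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)

# Step (b): output a cluster map with the same spatial layout as the image.
cluster_map = labels.reshape(image.shape[:2])
print(cluster_map.shape, np.unique(cluster_map))
```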
-
Patent number: 12299076
Abstract: Quality associated with an interpretation of data captured as unstructured data can be determined. Attributes can be identified within the unstructured data automatically. Subsequently, sentiment associated with each of the attributes can be determined based on the unstructured data. Correctness of the unstructured data, and thus the interpretation, can be assessed based on a comparison of the attribute and associated sentiment with structured data. A quality score can be generated that captures the quality of the data interpretation in terms of correctness as well as results of another analysis, including completeness, among others. Comparison of the quality score to a threshold can dictate whether or not the interpretation is subject to further review.
Type: Grant
Filed: December 8, 2022
Date of Patent: May 13, 2025
Assignee: Wells Fargo Bank, N.A.
Inventors: Pranshu Sharma, Srimoyee Duttagupta, Naveen Gururaja Yeri, Hemalatha AC, Dipan Banerjee, Alan On Yau, Michelle Sunna Nowe, Manesh Saini, Hasan Adem Yilmaz
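A rough sketch of the scoring idea: attributes and sentiments pulled from unstructured text are checked against structured records, and a combined score is compared to a review threshold. The attribute names, the sentiment mapping, the weights, and the 0.8 threshold are all assumptions made for illustration.

```python
# Hypothetical attributes/sentiments extracted from unstructured text,
# and a structured record used as the reference for correctness.
extracted = {"interest_rate": "negative", "customer_service": "positive"}
structured = {"interest_rate_change": +0.5, "service_rating": 4.5}

def expected_sentiment(attr, record):
    # Assumed mapping from structured fields to the sentiment they imply.
    if attr == "interest_rate":
        return "negative" if record["interest_rate_change"] > 0 else "positive"
    if attr == "customer_service":
        return "positive" if record["service_rating"] >= 4.0 else "negative"
    return None

matches = [expected_sentiment(a, structured) == s for a, s in extracted.items()]
correctness = sum(matches) / len(matches)
completeness = len(extracted) / 3          # assume 3 attributes were expected
quality_score = 0.7 * correctness + 0.3 * completeness

needs_review = quality_score < 0.8          # threshold dictating further review
print(round(quality_score, 2), needs_review)
```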
-
Patent number: 12299975
Abstract: The systems and methods described herein provide improvements to mapping project sites by using ground and/or aerial imagery. A system can receive one or more ground images of site locations within a site area and metadata associated with each of the one or more ground images. The metadata can include GPS coordinates that correspond to the one or more ground images. The system can determine that aerial imagery represents at least the one or more site locations and determine a relative position of each of the one or more site locations within the aerial imagery. Such a determination may be based on the metadata (e.g., on the GPS coordinates). The system can then generate data to display on a user interface one or more features, such as a map, an indication of the site area, and one or more indicators.
Type: Grant
Filed: January 19, 2024
Date of Patent: May 13, 2025
Inventor: Anthony M. Brunetti
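The positioning step can be sketched as mapping a ground photo's GPS coordinates to a relative position inside a geo-referenced aerial image. The geographic bounds, the sample coordinates, and the simple linear mapping are assumptions; a real implementation would handle the map projection properly.

```python
# Sketch of placing ground images on an aerial image using GPS metadata.
aerial_bounds = {"lat_min": 40.000, "lat_max": 40.010,
                 "lon_min": -105.010, "lon_max": -105.000}
aerial_size = (2000, 2000)  # (width_px, height_px) of the aerial image

def gps_to_pixel(lat, lon, bounds, size):
    """Relative position of a GPS coordinate inside the aerial image."""
    x = (lon - bounds["lon_min"]) / (bounds["lon_max"] - bounds["lon_min"])
    y = (bounds["lat_max"] - lat) / (bounds["lat_max"] - bounds["lat_min"])
    return int(x * size[0]), int(y * size[1])

ground_images = [{"file": "site_entrance.jpg", "lat": 40.0042, "lon": -105.0071}]
for img in ground_images:
    px, py = gps_to_pixel(img["lat"], img["lon"], aerial_bounds, aerial_size)
    print(img["file"], "->", (px, py))  # indicator position on the site map
```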
-
Patent number: 12292494
Abstract: Among the various aspects of the present disclosure is the provision of methods and systems for segmenting images and expediting a contouring process for MRI-guided adaptive radiotherapy (MR-IGART) comprising applying a convolutional neural network (CNN), wherein the CNN accurately segments organs (e.g., the liver, kidneys, stomach, bowel, or duodenum) in 3D MR images.
Type: Grant
Filed: July 30, 2019
Date of Patent: May 6, 2025
Assignee: Washington University
Inventors: Deshan Yang, Yabo Fu
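The core idea, a CNN mapping a 3D MR volume to per-voxel organ labels, can be sketched with a toy PyTorch network. The architecture, channel counts, and number of classes are assumptions and do not reflect the network described in the patent.

```python
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    """Toy volumetric segmentation net: MR volume in, per-voxel class scores out."""
    def __init__(self, in_ch=1, n_classes=6):  # e.g. background + 5 organs (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)  # (batch, n_classes, D, H, W) logits

model = TinySeg3D()
volume = torch.randn(1, 1, 32, 64, 64)   # synthetic 3D MR volume
contours = model(volume).argmax(dim=1)   # per-voxel organ labels
print(contours.shape)
```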
-
Patent number: 12283102
Abstract: A computing system identifies broadcast video data for a game. The computing system generates tracking data for the game from the broadcast video data using computer vision techniques. The tracking data includes coordinates of players during the game. The computing system generates optical character recognition data for the game from the broadcast video data by applying one or more optical character recognition techniques to each frame of a plurality of frames to extract score and time information from a scoreboard displayed in each frame. The computing system detects a plurality of events that occurred in the game by applying one or more machine learning techniques to the tracking data. The computing system receives play-by-play data for the game. The computing system generates enriched tracking data. The generating includes merging the play-by-play data with one or more of the tracking data, the optical character recognition data, and the plurality of events.
Type: Grant
Filed: January 9, 2024
Date of Patent: April 22, 2025
Assignee: Stats LLC
Inventors: Alex Ottenwess, Matthew Scott, Ken Rhodes, Patrick Joseph Lucey
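The enrichment step can be sketched as merging play-by-play events with OCR-derived game-clock readings and detected events by matching timestamps. The field names, sample values, and the one-second matching tolerance are assumptions for illustration.

```python
ocr_frames = [  # per-frame score/clock extracted from the scoreboard (fabricated)
    {"frame": 1200, "clock": 712.0, "score": (54, 50)},
    {"frame": 1230, "clock": 711.0, "score": (54, 50)},
]
play_by_play = [{"clock": 711.4, "event": "3PT_MADE", "player": "A. Smith"}]
detected_events = [{"frame": 1228, "event": "SHOT"}]

def enrich(pbp, frames, events, tol=1.0):
    merged = []
    for play in pbp:
        # Closest scoreboard reading by game clock.
        frame = min(frames, key=lambda f: abs(f["clock"] - play["clock"]))
        if abs(frame["clock"] - play["clock"]) <= tol:
            nearby = [e for e in events if abs(e["frame"] - frame["frame"]) <= 30]
            merged.append({**play, **frame, "detected": nearby})
    return merged

print(enrich(play_by_play, ocr_frames, detected_events))
```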
-
Patent number: 12277752
Abstract: Systems and methods for selecting data for training a machine learning model using active learning are disclosed. The methods include receiving a plurality of unlabeled sensor data logs corresponding to surroundings of an autonomous vehicle and identifying one or more trends associated with a training dataset comprising a plurality of labeled data logs. The methods also include selecting a subset of the plurality of unlabeled sensor data logs that have an importance score greater than a threshold, the importance score being determined based on the one or more trends. The subset of the plurality of unlabeled sensor data logs is used for updating the machine learning model to generate an updated model.
Type: Grant
Filed: August 3, 2023
Date of Patent: April 15, 2025
Assignee: Volkswagen Group of America Investments, LLC
Inventors: Jelena Frtunikj, Daniel Alfonsetti
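One way to picture trend-based selection: logs from scenarios that are under-represented in the labeled training set score higher, and only logs above a threshold are sent for labeling. The scenario tags, the scoring rule, and the 0.5 threshold are illustrative assumptions, not the patent's scoring method.

```python
from collections import Counter

# Fabricated scenario tags for labeled logs and candidate unlabeled logs.
labeled_scenarios = ["highway", "highway", "urban", "highway", "parking"]
unlabeled_logs = [
    {"id": "log_001", "scenario": "urban"},
    {"id": "log_002", "scenario": "highway"},
    {"id": "log_003", "scenario": "night_rain"},
]

counts = Counter(labeled_scenarios)
total = sum(counts.values())

def importance(log):
    # Higher score for scenarios under-represented in the training dataset.
    return 1.0 - counts.get(log["scenario"], 0) / total

threshold = 0.5
selected = [log for log in unlabeled_logs if importance(log) > threshold]
print([log["id"] for log in selected])  # subset used to update the model
```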
-
Patent number: 12254668
Abstract: Devices and techniques are generally described for object matching in image data. In various examples, first image data and second image data may be received. A first feature map representing the first image data and a second feature map representing the second image data may be generated. The first feature map and second feature map may be combined and a first and second segmentation mask may be generated using the combined feature map. The first segmentation mask may be used to filter the first feature map to generate a filtered representation. The second segmentation mask may be used to filter the second feature map to generate a filtered representation. A determination may be made that a first object depicted in the first image data corresponds to a second object depicted in the second image data using the filtered representations.
Type: Grant
Filed: March 30, 2022
Date of Patent: March 18, 2025
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Ioana-Sabina Stoian, Alin-Ionut Popa, Ionut Catalin Sandu, Daniel Voinea
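The matching step can be sketched in NumPy: each feature map is filtered by its segmentation mask, pooled into a vector, and the two vectors are compared. The shapes, the pooling choice, and the 0.9 similarity threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature maps (C x H x W) and segmentation masks (H x W) for two images.
feat1, feat2 = rng.random((32, 16, 16)), rng.random((32, 16, 16))
mask1, mask2 = (rng.random((16, 16)) > 0.5), (rng.random((16, 16)) > 0.5)

def masked_embedding(feat, mask):
    """Filter the feature map with the mask and pool it into a unit vector."""
    weights = mask.astype(np.float32)
    pooled = (feat * weights).sum(axis=(1, 2)) / (weights.sum() + 1e-8)
    return pooled / (np.linalg.norm(pooled) + 1e-8)

similarity = float(masked_embedding(feat1, mask1) @ masked_embedding(feat2, mask2))
same_object = similarity > 0.9  # assumed decision threshold
print(round(similarity, 3), same_object)
```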
-
Patent number: 12236683
Abstract: Video frames from a video are compressed into a single image or a single data structure that represents a unique visual flowprint or visual signature for a given activity being modeled from the video frames. The flowprint comprises a computed summary of the original pixel values associated with the video frames within the single image and the flowprint is specific to movements occurring within the video frames that are associated with the given activity. In an embodiment, the flowprint is provided as input to a machine-learning algorithm to allow the algorithm to perform object tracking and monitoring from the flowprint rather than from the video frames of the video, which substantially reduces processor load and memory utilization on a device that executes the algorithm and substantially improves the responsiveness of the algorithm.
Type: Grant
Filed: December 3, 2021
Date of Patent: February 25, 2025
Assignee: NCR Voyix Corporation
Inventors: Joshua Migdal, Vikram Srinivasan
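A minimal sketch of collapsing a clip into one summary image: accumulate per-pixel motion (absolute frame differences) across the clip and normalize. The synthetic frames and the choice of summary statistic are assumptions, not the patented flowprint construction.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(30, 120, 160), dtype=np.uint8)  # T x H x W grayscale

diffs = np.abs(np.diff(frames.astype(np.int16), axis=0))   # motion between frames
flowprint = diffs.sum(axis=0).astype(np.float32)
flowprint = (255 * flowprint / (flowprint.max() + 1e-8)).astype(np.uint8)

print(flowprint.shape)  # single H x W image summarizing movement in the clip
# `flowprint` (not the 30 raw frames) would then be fed to a downstream model.
```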
-
Patent number: 12219113
Abstract: A method and apparatus with image correction is provided. A processor-implemented method includes generating, using a neural network model provided an input image, an illumination map including illumination values dependent on respective color casts by one or more illuminants individually affecting each pixel of the input image, and generating a white-adjusted image by removing at least a portion of the color casts from the input image using the generated illumination map.
Type: Grant
Filed: August 16, 2021
Date of Patent: February 4, 2025
Assignees: Samsung Electronics Co., Ltd., Industry-Academic Cooperation Foundation, Yonsei University
Inventors: Nahyup Kang, Seon Joo Kim, DongYoung Kim, JinWoo Kim, Yeonkyung Lee, Byung In Yoo, Hyong Euk Lee
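The correction step itself can be sketched with NumPy: divide the captured image by a per-pixel illumination map to remove the color cast. The synthetic scene and the fabricated illumination map stand in for the neural network's prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene and a per-pixel illumination map (H x W x 3), fabricated here
# so the white-adjustment step can be shown in isolation.
image = rng.random((8, 8, 3)).astype(np.float32)
illumination = np.stack([
    np.full((8, 8), 1.0),   # R cast
    np.full((8, 8), 0.8),   # G cast
    np.full((8, 8), 0.6),   # B cast (warm illuminant)
], axis=-1)

casted = np.clip(image * illumination, 0.0, 1.0)                     # image as captured
white_adjusted = np.clip(casted / (illumination + 1e-8), 0.0, 1.0)   # remove casts

print(np.allclose(white_adjusted, image, atol=1e-3))
```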
-
Patent number: 12216112
Abstract: In one aspect, a method to detect white blood cells and/or white blood cell subtypes from non-invasive capillary videos is featured. The method includes acquiring a first plurality of images of a region of interest including one or more capillaries of a predetermined area of a human subject from non-invasive capillary videos captured with an optical device, processing the first plurality of images to determine one or more optical absorption gaps located in said capillary, and annotating the first plurality of images with an indication of any optical absorption gap detected in the first plurality of images.
Type: Grant
Filed: November 14, 2023
Date of Patent: February 4, 2025
Assignee: Leuko Labs, Inc.
Inventors: Carlos Castro Gonzalez, Ian Butterworth, Aurelien Bourquard, Alvaro Sanchez Ferro
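A rough sketch of the gap-detection idea: along a capillary, red blood cells absorb strongly (dark signal), so a passing white cell leaves a brighter stretch. The synthetic intensity profile and the thresholding rule below are illustrative assumptions, not the patented processing.

```python
import numpy as np

rng = np.random.default_rng(0)
profile = 0.3 + 0.02 * rng.standard_normal(200)   # dark baseline along the capillary
profile[80:95] = 0.7                              # injected bright "absorption gap"

threshold = profile.mean() + 3 * profile.std()    # assumed gap threshold
bright = profile > threshold

# Group consecutive bright samples into gap segments for annotation.
gaps, start = [], None
for i, b in enumerate(bright):
    if b and start is None:
        start = i
    elif not b and start is not None:
        gaps.append((start, i))
        start = None
if start is not None:
    gaps.append((start, len(bright)))

print(gaps)  # e.g. [(80, 95)] -> positions annotated as optical absorption gaps
```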
-
Patent number: 12211253
Abstract: A system and method of automatic product attribute recognition receive training images having bounding boxes associated with one or more products in the training images, receive attribute values for each of the one or more products in the training images, and train a first convolutional neural network (CNN) model to generate bounding boxes for and identify each of the one or more products with the training images until the accuracy of the first CNN model is above a first predetermined threshold. The system and method further train a second CNN model for each of the products associated with the cropped images until the second CNN generates attribute values for the one or more attributes with an accuracy above a second predetermined threshold, and automatically recognize the one or more attributes for a new product image by presenting the product image to the first and second CNN models.
Type: Grant
Filed: April 26, 2021
Date of Patent: January 28, 2025
Assignee: Blue Yonder Group, Inc.
Inventors: Ramakrishna Perla, Arun Raj Parwana Adiraju, Vineet Chaudhary
-
Patent number: 12205360
Abstract: A computing system generates a training data set for training a prediction model to detect defects present in a target surface of a target specimen and trains the prediction model to detect defects present in the target surface of the target specimen based on the training data set. The computing system generates the training data set by identifying a set of images for training the prediction model, the set of images comprising a first subset of images. A deep learning network generates a second subset of images for subsequent labelling based on the set of images comprising the first subset of images. The deep learning network generates a third subset of images for labelling based on the set of images comprising the first subset of images and the labeled second subset of images. The computing system continues the process until a threshold number of labeled images is generated.
Type: Grant
Filed: August 15, 2022
Date of Patent: January 21, 2025
Assignee: Nanotronics Imaging, Inc.
Inventors: Tonislav Ivanov, Denis Babeshko, Vadim Pinskiy, Matthew C. Putman, Andrew Sundstrom
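The iterative loop can be sketched simply: a model scores the remaining unlabeled images, proposes the next subset for labeling, and the process repeats until a target number of labeled images exists. The fabricated uncertainty scores, batch size, and threshold are assumptions standing in for the deep learning network's proposals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of candidate images, represented only by a fabricated per-image
# "uncertainty" that stands in for the deep learning network's output.
pool = {f"img_{i:03d}": rng.random() for i in range(50)}

labeled = dict(list(pool.items())[:5])   # first subset: seed labels
for name in labeled:
    pool.pop(name)

target, batch = 20, 5                    # assumed threshold and round size
while len(labeled) < target and pool:
    # The network (re)scores the remaining pool given what is already labeled;
    # here the fabricated scores are reused and the most uncertain images win.
    proposed = sorted(pool, key=pool.get, reverse=True)[:batch]
    for name in proposed:                # next subset sent for labeling
        labeled[name] = pool.pop(name)

print(len(labeled), "labeled images in the training data set")
```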
-
Patent number: 12204686
Abstract: A method, an apparatus, and a system for video communications include: transmitting, from a first apparatus using a network, a first video stream of a first user to a second apparatus of a second user, wherein the first user is in video communication with the second user; receiving, from the second apparatus using the network, a second video stream of the second user; determining, by a processor, a reaction of the second user to an area of interest in the first video stream using the second video stream; and updating, in response to the reaction of the second user to the area of interest in the first video stream, a parameter for encoding the area of interest in the first video stream at the first apparatus.
Type: Grant
Filed: March 25, 2021
Date of Patent: January 21, 2025
Assignee: Agora Lab, Inc.
Inventors: Sheng Zhong, Yue Feng
-
Patent number: 12190629
Abstract: Provided is a computer-implemented deep-learning-based method for extracting minutiae from a latent friction ridge image. The method comprises training a minutiae extraction model through a deep-learning network with ground truth latent friction ridge images as training samples, and inputting a latent friction ridge image into the minutiae extraction model to extract minutiae of the latent friction ridge image, wherein the model outputs locations and directions for the extracted minutiae. The deep-learning network includes a base network configured to generate a minutiae feature map from a latent friction ridge image, a Region Proposal Network (RPN) configured to propose minutiae locations and directions from the minutiae feature map, and a Region-Based Convolutional Neural Network (RCNN) configured to definitively decide minutiae locations and directions from the RPN's proposal.
Type: Grant
Filed: October 14, 2021
Date of Patent: January 7, 2025
Assignee: THALES DIS FRANCE SAS
Inventors: Songtao Li, Amit Pandey
-
Patent number: 12189719
Abstract: One embodiment of the present invention sets forth a technique for evaluating labeled data. The technique includes selecting, from a set of labels for a data sample, a subset of the labels representing non-outliers in a distribution of values in the set of labels. The technique also includes aggregating the subset of the labels into a benchmark for the data sample. The technique further includes generating, based on a comparison between the benchmark and an additional label, a benchmark score associated with the data sample, and generating a set of performance metrics for labeling the data sample based on the benchmark score.
Type: Grant
Filed: January 6, 2022
Date of Patent: January 7, 2025
Assignee: Scale AI, Inc.
Inventors: Nathaniel John Herman, Akshat Bubna, Alexandr Wang, Shariq Shahab Hashme, Samuel J. Clearman, Liren Tu, Jeffrey Zhihong Li, James Lennon
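A minimal NumPy sketch of the evaluation flow: drop outlier labels, aggregate the rest into a benchmark, and score an additional label against it. The z-score outlier rule, mean aggregation, and error-based score are illustrative assumptions.

```python
import numpy as np

labels = np.array([4.9, 5.1, 5.0, 5.2, 9.0])   # values from several labelers
new_label = 5.6                                # additional label to evaluate

# Keep non-outliers (within 1.5 standard deviations of the median here).
z = np.abs(labels - np.median(labels)) / (labels.std() + 1e-8)
inliers = labels[z < 1.5]

benchmark = inliers.mean()                                  # aggregated benchmark
benchmark_score = 1.0 / (1.0 + abs(new_label - benchmark))  # 1.0 = perfect match

print(round(benchmark, 2), round(benchmark_score, 2))
```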
-
Patent number: 12159386
Abstract: A characteristic included in data is determined with relatively high precision. An inspection system according to one aspect of the present invention acquires a plurality of learning data sets respectively including a combination of image data and correct answer data, and sets a difficulty level of determination for each of the learning data sets in accordance with a degree to which a result which is obtained by determining the acceptability of a product in the image data of each of the learning data sets by a first discriminator conforms to a correct answer indicated by the correct answer data. In addition, the inspection system constructs a second discriminator that determines the acceptability of the product by executing stepwise machine learning in which the learning data sets are utilized in ascending order of the set difficulty level.
Type: Grant
Filed: March 13, 2019
Date of Patent: December 3, 2024
Assignee: OMRON Corporation
Inventor: Yoshihisa Ijiri
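The curriculum idea can be sketched as follows: score each learning data set by how far a first discriminator's output is from the correct answer, then train a second classifier in stages from easy to hard. The toy data, the difficulty measure, and the use of scikit-learn's SGDClassifier with partial_fit are assumptions, not the inspection system's actual models.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy acceptability data: features of product images and correct answers (0/1).
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(int)

# Stand-in for the first discriminator's conformity to the correct answer;
# low conformity means a harder sample.
conformity = 1.0 / (1.0 + np.exp(-3 * X[:, 0] * (2 * y - 1)))
difficulty = 1.0 - conformity

# Train the second discriminator stepwise, in ascending order of difficulty.
order = np.argsort(difficulty)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])
for i, stage in enumerate(np.array_split(order, 4)):   # easy -> hard stages
    model.partial_fit(X[stage], y[stage], classes=classes if i == 0 else None)

print("training accuracy:", round(model.score(X, y), 3))
```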
-
Patent number: 12136257
Abstract: A learning device includes a class classification learning unit that learns class classification of a classification target by using a loss function in which a loss is calculated to become smaller as a magnitude of a difference between a function value obtained by inputting a log-likelihood ratio to a function having a finite value range and a constant associated with a correct answer to the class classification of the classification target becomes smaller, the log-likelihood ratio being the logarithm of a ratio between the likelihood that the classification target belongs to a first class and the likelihood that the classification target belongs to a second class.
Type: Grant
Filed: May 11, 2020
Date of Patent: November 5, 2024
Assignee: NEC CORPORATION
Inventors: Akinori Ebihara, Taiki Miyagawa
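The kind of loss the abstract describes can be written out concretely: squash the log-likelihood ratio through a bounded function and penalize its distance from a class-dependent constant. The choice of tanh as the finite-range function, the constants +1/-1, and the squared penalty are illustrative assumptions.

```python
import numpy as np

def llr_loss(p_first, p_second, correct_is_first):
    llr = np.log(p_first) - np.log(p_second)        # log-likelihood ratio
    bounded = np.tanh(llr)                          # function with a finite value range (-1, 1)
    target = 1.0 if correct_is_first else -1.0      # constant tied to the correct answer
    return (bounded - target) ** 2                  # smaller as the difference shrinks

# Confidently correct vs. confidently wrong for a first-class target.
print(round(llr_loss(0.95, 0.05, True), 4))   # small loss
print(round(llr_loss(0.05, 0.95, True), 4))   # large loss
```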
-
Patent number: 12126940
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a structure configured to store items. The image sensor generates angled-view images of the items stored on the structure. A tracking subsystem determines that a person has interacted with the structure and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the structure. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the item was removed from the structure, the first item is assigned to the person.
Type: Grant
Filed: June 2, 2021
Date of Patent: October 22, 2024
Assignee: 7-ELEVEN, INC.
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Patent number: 12125265
Abstract: A method for training a locally interpretable model includes obtaining a set of training samples and training a black-box model using the set of training samples. The method also includes generating, using the trained black-box model and the set of training samples, a set of auxiliary training samples and training a baseline interpretable model using the set of auxiliary training samples. The method also includes training, using the set of auxiliary training samples and the baseline interpretable model, an instance-wise weight estimator model. For each auxiliary training sample in the set of auxiliary training samples, the method also includes determining, using the trained instance-wise weight estimator model, a selection probability for the auxiliary training sample. The method also includes selecting, based on the selection probabilities, a subset of auxiliary training samples and training the locally interpretable model using the subset of auxiliary training samples.
Type: Grant
Filed: June 29, 2022
Date of Patent: October 22, 2024
Assignee: GOOGLE LLC
Inventors: Sercan Omer Arik, Jinsung Yoon, Tomas Jon Pfister
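A scikit-learn sketch of the general flow: fit a black-box model, build auxiliary samples from its predictions, assign per-instance selection probabilities, select a subset, and fit an interpretable model on it. Here a simple locality weight around a query point stands in for the learned instance-wise weight estimator; the models, the locality-based probabilities, and the selection rule are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training samples and a black-box model.
X = rng.standard_normal((300, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Auxiliary training samples: inputs paired with the black-box's predictions.
y_aux = black_box.predict(X)

# Stand-in for the instance-wise weight estimator: selection probability decays
# with distance from the point we want to explain locally.
x_query = X[0]
dist = np.linalg.norm(X - x_query, axis=1)
selection_prob = np.exp(-dist)

# Select a subset of auxiliary samples and fit the locally interpretable model.
selected = selection_prob > np.quantile(selection_prob, 0.7)
local_model = LinearRegression().fit(X[selected], y_aux[selected],
                                     sample_weight=selection_prob[selected])

print("local prediction:", round(float(local_model.predict(x_query[None])[0]), 3))
print("black-box prediction:", round(float(black_box.predict(x_query[None])[0]), 3))
```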
-
Patent number: 12112516
Abstract: A non-intrusive detection method for detecting at least one pop-up window button of the pop-up window includes the following steps: retrieving a screen image on a display device; comparing the screen image with a preset screen image and generating a differential image area according to the screen image and the preset screen image; determining the differential image area as the pop-up window when the differential image area is greater than an image area threshold value; selecting a plurality of contour lengths of the pop-up window matching up with a contour length threshold value by Canny edge detector; and analyzing the contour lengths according to the Douglas-Peucker algorithm and an amount of endpoints to generate a contour edge corresponding to the pop-up window button.
Type: Grant
Filed: October 11, 2021
Date of Patent: October 8, 2024
Assignee: ADLINK Technology Inc.
Inventors: Chien-Chung Lin, Wei-Jyun Tu, Yu-Yen Chen
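The detection chain can be sketched with OpenCV: difference the current screen against a reference, threshold the changed area, extract contours from Canny edges, and simplify them with approxPolyDP (the Douglas-Peucker algorithm) to find rectangular, button-like shapes. The synthetic screens, the area threshold, the contour-length cutoff, and the epsilon value are assumptions, not the patented parameters.

```python
import cv2
import numpy as np

# Synthetic "screens": a blank desktop and the same desktop with a pop-up
# window containing one button.
preset = np.full((480, 640), 40, dtype=np.uint8)
screen = preset.copy()
cv2.rectangle(screen, (200, 150), (440, 330), 200, -1)   # pop-up window
cv2.rectangle(screen, (290, 280), (350, 310), 120, -1)   # button inside it

diff = cv2.absdiff(screen, preset)
changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
if changed > 5000:                                       # area threshold -> pop-up present
    edges = cv2.Canny(screen, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        if cv2.arcLength(cnt, True) < 100:               # skip contours that are too short
            continue
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:                             # four endpoints -> button-like edge
            print("candidate rectangle:", cv2.boundingRect(approx))
```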