Patents Examined by Manav Seth
-
Patent number: 12288384
Abstract: An apparatus and method for a machine learning engine for domain generalization which trains a vision transformer neural network using a training dataset including at least two domains for diagnosis of a medical condition. Image patches and class tokens are processed through a sequence of feature extraction transformer blocks to obtain a predicted class token. In parallel, intermediate class tokens are extracted as outputs of each of the feature extraction transformer blocks, where each transformer block is a sub-model. One sub-model is randomly sampled from the sub-models to obtain a sampled intermediate class token. The intermediate class token is used to make a sub-model prediction. The vision transformer neural network is optimized based on a difference between the predicted class token and the sub-model prediction. Inferencing is performed for a target medical image in a target domain that is different from the at least two domains.
Type: Grant
Filed: December 19, 2022
Date of Patent: April 29, 2025
Assignee: Mohamed bin Zayed University of Artificial Intelligence
Inventors: Maryam Sultana, Muhammad Muzammal Naseer, Muhammad Haris Khan, Salman Khan, Fahad Shahbaz Khan
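A minimal, hypothetical sketch (not the patented implementation) of the general idea in the abstract, assuming PyTorch: each transformer block's class token acts as a sub-model, one is sampled at random, and its prediction is trained to agree with the final prediction through a shared head. All dimensions and names are invented for illustration.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViT(nn.Module):
    def __init__(self, dim=64, depth=6, num_classes=2, num_patches=196):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
             for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)   # shared classification head

    def forward(self, patch_tokens):
        b = patch_tokens.size(0)
        x = torch.cat([self.cls.expand(b, -1, -1), patch_tokens], dim=1) + self.pos
        intermediate_cls = []
        for blk in self.blocks:            # each block is treated as a sub-model
            x = blk(x)
            intermediate_cls.append(x[:, 0])
        return x[:, 0], intermediate_cls

model = TinyViT()
patches = torch.randn(8, 196, 64)          # pre-embedded image patches (dummy data)
final_cls, inter_cls = model(patches)
sampled = random.choice(inter_cls[:-1])    # randomly sampled intermediate class token
# consistency loss between the sub-model prediction and the final prediction
loss = F.kl_div(F.log_softmax(model.head(sampled), dim=-1),
                F.softmax(model.head(final_cls).detach(), dim=-1),
                reduction="batchmean")
loss.backward()
```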
-
Patent number: 12288323
Abstract: A method and system for treating a patient is described. The method includes: (i) receiving a sample from a patient, the sample including one or more cancer cells; (ii) obtaining, using an imaging device, one or more images of the cancer cells; (iii) processing, using an imaging processor, the one or more images to extract one or more image coefficients; (iv) mapping, using a trained classifier, the one or more image coefficients to a cancer cell type; (v) identifying, based on mapping the one or more image coefficients to a cancer cell type, one or more cancer cell types in the sample; (vi) identifying, based on the identified one or more cancer cell types in the sample, a course of treatment specific to the one or more cancer cell types; and (vii) treating the patient using the identified course of treatment.
Type: Grant
Filed: October 15, 2019
Date of Patent: April 29, 2025
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Shrutin Ulmann, Prasad Raghotham Venkat
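A hypothetical sketch of the classification-to-treatment mapping described in steps (iv) through (vi), using scikit-learn as a stand-in for the trained classifier. The coefficients, labels, and treatment table are placeholders, not clinical guidance or the patented method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))        # image coefficients (e.g. wavelet/DCT), dummy
y_train = rng.integers(0, 3, size=200)      # cell-type labels 0..2, dummy
clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

TREATMENTS = {0: "regimen A", 1: "regimen B", 2: "regimen C"}   # illustrative only

def recommend(coefficients: np.ndarray) -> str:
    cell_type = int(clf.predict(coefficients.reshape(1, -1))[0])
    return TREATMENTS[cell_type]            # course of treatment for the identified type

print(recommend(rng.normal(size=16)))
```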
-
Patent number: 12277701
Abstract: A computer implemented method for reading a test region of an assay, the method comprising: (i) providing digital image data of a first assay; (ii) inputting the digital image data into a trained convolutional neural network configured to output a first probability, based on the input digital image data, that a first region of pixels of the digital image data corresponds to a first test region of the assay; (iii) if the first probability is at or above a first predetermined threshold, accepting the first region of pixels as a first region of interest associated with the first test region; and (iv) estimating an intensity value of a portion of the first test region in the first region of interest.
Type: Grant
Filed: April 22, 2020
Date of Patent: April 15, 2025
Assignee: FORSITE DIAGNOSTICS LIMITED
Inventors: Neeraj Adsul, Marcin Tokarzewski
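A minimal sketch of steps (iii) and (iv), assuming the CNN's probability for a candidate region has already been computed; the threshold, region format, and intensity estimate are invented placeholders, not the patented model.

```python
import numpy as np

THRESHOLD = 0.9   # assumed predetermined threshold

def read_test_region(image: np.ndarray, region: tuple, region_prob: float):
    """region is (row, col, height, width); region_prob is the CNN's output."""
    if region_prob < THRESHOLD:
        return None                      # region rejected: not accepted as a test region
    r, c, h, w = region
    roi = image[r:r + h, c:c + w]        # accepted region of interest
    return float(roi.mean())             # simple intensity estimate for the test strip

image = np.random.rand(480, 640)
print(read_test_region(image, (200, 300, 40, 80), region_prob=0.97))
```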
-
Patent number: 12272148
Abstract: Approaches presented herein provide for semantic data matching, as may be useful for selecting data from a large unlabeled dataset to train a neural network. For an object detection use case, such a process can identify images within an unlabeled set even when an object of interest represents a relatively small portion of an image or there are many other objects in the image. A query image can be processed to extract image features or feature maps from only one or more regions of interest in that image, as may correspond to objects of interest. These features are compared with images in an unlabeled dataset, with similarity scores being calculated between the features of the region(s) of interest and individual images in the unlabeled set. One or more highest scored images can be selected as training images showing objects that are semantically similar to the object in the query image.
Type: Grant
Filed: April 9, 2021
Date of Patent: April 8, 2025
Assignee: Nvidia Corporation
Inventors: Donna Roy, Suraj Kothawade, Elmar Haussmann, Jose Manuel Alvarez Lopez, Michele Fenzi, Christoph Angerer
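A sketch of the scoring-and-selection step, assuming features for the query's region of interest and for each unlabeled image have already been extracted by some backbone (the feature dimensions and counts are dummies):

```python
import torch
import torch.nn.functional as F

roi_feat = torch.randn(256)                   # feature vector of the query's region of interest
unlabeled_feats = torch.randn(10_000, 256)    # one pooled feature per unlabeled image

# similarity score between the ROI features and each unlabeled image
scores = F.cosine_similarity(unlabeled_feats, roi_feat.unsqueeze(0), dim=1)
topk = torch.topk(scores, k=100)
selected_indices = topk.indices               # highest-scored images become training candidates
print(selected_indices[:10])
```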
-
Patent number: 12271442
Abstract: Methods and computing apparatus are provided for training AI/ML systems and use of such systems for performing image analysis so that the damaged parts of a physical structure can be identified accurately and efficiently. According to one embodiment, a method includes selecting an AI/ML system of a particular type; and training the AI/ML system using a dataset comprising one or more auto-labeled images. The auto-labeling was performed using the selected AI/ML system configured using a parts-identification model. The configuration of the trained AI/ML system is output as an improved parts-identification model.
Type: Grant
Filed: July 30, 2021
Date of Patent: April 8, 2025
Assignee: Genpact USA, Inc.
Inventors: Krishna Dev Oruganty, Siva Tian, Amit Arora, Edmond John Schneider
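A heavily simplified self-training sketch of the auto-labeling idea (not Genpact's system): a model trained on a small labeled set labels the unlabeled samples, and retraining on the combined set plays the role of producing the improved parts-identification model. The data and classifier are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_lab, y_lab = rng.normal(size=(50, 8)), rng.integers(0, 2, 50)   # small labeled set (dummy)
X_unlab = rng.normal(size=(500, 8))                               # unlabeled images (dummy)

model = LogisticRegression().fit(X_lab, y_lab)     # initial parts-identification model
pseudo = model.predict(X_unlab)                    # auto-labeled images
model = LogisticRegression().fit(                  # retrain on labeled + auto-labeled data
    np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo]))
```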
-
Patent number: 12271684
Abstract: Provided are methods for automated verification of annotated sensor data, which can include receiving annotated image data associated with an image, wherein the annotated image data comprises an annotation associated with an object within the image, determining an error with the annotation based at least in part on a comparison of the annotation with annotation criteria data associated with criteria for at least one annotation, determining a priority level of the error, and routing the annotation to a destination based at least in part on the priority level of the error. Systems and computer program products are also provided.
Type: Grant
Filed: October 8, 2021
Date of Patent: April 8, 2025
Assignee: Motional AD LLC
Inventors: Kok Seang Tan, Holger Caesar, Yiluan Guo, Oscar Beijbom
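A hypothetical sketch of the flow in the abstract: an annotation is checked against criteria, an error priority is assigned, and the annotation is routed. The criteria, priorities, and destinations are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str
    box: tuple          # (x_min, y_min, x_max, y_max) in pixels

CRITERIA = {"min_box_area": 100, "allowed_labels": {"car", "pedestrian", "cyclist"}}

def verify_and_route(ann: Annotation) -> str:
    x0, y0, x1, y1 = ann.box
    area = max(0, x1 - x0) * max(0, y1 - y0)
    if ann.label not in CRITERIA["allowed_labels"]:
        priority = "high"                  # wrong class treated as a severe error
    elif area < CRITERIA["min_box_area"]:
        priority = "low"                   # undersized box treated as a minor error
    else:
        return "accepted"                  # no error found against the criteria
    return {"high": "manual_review_queue", "low": "auto_fix_queue"}[priority]

print(verify_and_route(Annotation("car", (10, 10, 15, 15))))
```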
-
Patent number: 12260568
Abstract: An optimal automatic mapping method between a real image and a thermal image in a body heat tester, and the body heat tester using the method. The real image from the real imaging camera has a wider angle of view than the thermal image from the thermal imaging camera, to maximize the use of thermal imaging without omission of thermal imaging pixels in a thermal inspection device using an infrared imaging device. The body heat tester comprises a thermal imaging camera, a real imaging camera and a data processing unit. The data processing unit matches the thermal and real images, obtains the reconstructed real image matched with the thermal image by stretching or shortening the top, bottom, left, and right of the real image based on the thermal image, and detects the body heat (temperature) of the subject using the thermal image and the reconstructed real image.
Type: Grant
Filed: November 23, 2021
Date of Patent: March 25, 2025
Assignee: MESH CO., LTD.
Inventors: Jung Hoon Lee, Joo Sung Lee, Hyun Chul Ko
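A rough sketch of the mapping idea with OpenCV: the wider-angle real image is cropped and rescaled so its pixels line up with the thermal frame. The crop margins are assumed calibration values, not values from the patent.

```python
import cv2
import numpy as np

real = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)   # wide-angle RGB frame (dummy)
thermal = np.random.rand(288, 384).astype(np.float32)               # thermal frame (dummy)

top, bottom, left, right = 120, 60, 260, 220       # assumed calibration margins
cropped = real[top:1080 - bottom, left:1920 - right]
reconstructed = cv2.resize(cropped, (thermal.shape[1], thermal.shape[0]))

# A subject located in `reconstructed` now indexes the same pixel coordinates in
# `thermal`, so the body temperature can be read from the thermal frame directly.
```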
-
Patent number: 12254700
Abstract: A system comprising at least one hardware processor for updating a cloud-based road anomaly database is disclosed, wherein the system receives information from a vehicle regarding an object detected by one or more sensors of the vehicle, the information may include a location and/or a size of the detected object; compares the location and/or the size of the first detected object against data stored in a road feature database regarding the first detected object; determines, based on comparing the location and/or the size, an accuracy score associated with the first detected object and the second detected object; and updates the cloud-based road anomaly database with the received information for the second detected object based on the associated accuracy score being higher than at least one of a threshold accuracy score or an accuracy score associated with corresponding object information stored in the cloud-based road anomaly database.
Type: Grant
Filed: November 4, 2021
Date of Patent: March 18, 2025
Assignee: Hitachi Astemo, Ltd.
Inventors: Xiaoliang Zhu, Subrata Kumar Kundu, Paul Liu
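A sketch of the update rule described at the end of the abstract: a new detection replaces the stored record only when its accuracy score exceeds either a fixed threshold or the score already stored for that object. The data structures are invented placeholders.

```python
THRESHOLD = 0.8                     # assumed threshold accuracy score
road_db = {}                        # anomaly_id -> {"location", "size", "score"}

def maybe_update(anomaly_id, location, size, score):
    stored = road_db.get(anomaly_id)
    if score > THRESHOLD or (stored and score > stored["score"]):
        road_db[anomaly_id] = {"location": location, "size": size, "score": score}
        return True                 # cloud database updated with the new detection
    return False                    # detection discarded: score too low

print(maybe_update("pothole-17", (47.6, -122.3), 0.4, score=0.85))
```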
-
Patent number: 12256078
Abstract: A predetermined context variable is assigned, according to a device, system, or method, to a first bin of a bin sequence obtained by binarizing an adaptive orthogonal transform identifier indicating a mode of adaptive orthogonal transform in image encoding, and context encoding is performed for the first bin of the bin sequence. Furthermore, a predetermined context variable is assigned to a first bin of a bin sequence obtained by binarizing an adaptive orthogonal transform identifier indicating a mode of inverse adaptive orthogonal transform in image decoding, and context decoding is performed for the first bin of the bin sequence.
Type: Grant
Filed: February 2, 2024
Date of Patent: March 18, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Takeshi Tsukuba
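A toy sketch of the encoding-side idea only: the transform identifier is binarized (truncated unary here, as an assumption), the first bin gets a dedicated context variable for context coding, and the remaining bins are bypass-coded. The `context_encode` and `bypass_encode` callables are hypothetical stand-ins, and this is a simplification rather than the actual codec syntax.

```python
FIRST_BIN_CONTEXT = 0               # predetermined context variable index (assumed)

def binarize_truncated_unary(value: int, max_value: int) -> list:
    return [1] * value + ([0] if value < max_value else [])

def encode_identifier(identifier, max_value, context_encode, bypass_encode):
    bins = binarize_truncated_unary(identifier, max_value)
    if not bins:
        return
    context_encode(bins[0], FIRST_BIN_CONTEXT)   # context coding for the first bin
    for b in bins[1:]:
        bypass_encode(b)                         # remaining bins bypass-coded
```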
-
Patent number: 12236587
Abstract: Examples herein include methods, systems, and computer program products for utilizing neural networks in ultrasound systems. The methods include processor(s) of a computing device identifying a neural network for implementation on the computing device to generate, based on ultrasound data, inferences and confidence levels for the inferences, the computing device being communicatively coupled via a computing network to an ultrasound machine configured to generate the ultrasound data. The processor(s) implements the neural network on the computing device, including configuring the neural network to generate an inference and a confidence level for at least one image of the images. The processor(s) obtains the ultrasound data including images from the ultrasound machine. The processor(s) determines, for the at least one image, an accuracy of the inference and the confidence level. The processor(s) automatically reconfigures the neural network to increase the accuracy based on the determining the accuracy.
Type: Grant
Filed: February 14, 2022
Date of Patent: February 25, 2025
Assignee: FUJIFILM SONOSITE, INC.
Inventors: Davin Dhatt, Thomas Duffy, Adam Pely, Christopher White, Andrew Lundberg, Paul Danset, Craig Chamberlain, Diku Mandavia
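A minimal sketch of the monitoring loop implied by the abstract: each inference carries a confidence level, running accuracy is checked against reference reads, and a drop triggers a reconfiguration step. The callables and threshold are hypothetical, not the FUJIFILM implementation.

```python
def monitor(frames, infer, reference_labels, reconfigure, min_accuracy=0.9):
    accuracy, correct = 1.0, 0
    for i, frame in enumerate(frames):
        label, confidence = infer(frame)            # inference + confidence level per image
        correct += int(label == reference_labels[i])
        accuracy = correct / (i + 1)                # accuracy of the inferences so far
        if accuracy < min_accuracy:
            reconfigure()                           # e.g. swap weights or adjust thresholds
    return accuracy
```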
-
Patent number: 12229976
Abstract: The present disclosure relates generally to evaluating the surfaces of a building. The present disclosure relates more particularly to a method of characterizing a surface texture of a building surface. The method includes illuminating a first area of the building surface from a single direction and capturing an image of the first area using a camera while the first area is illuminated. The first image includes a first group of digital pixel values. The method further includes calculating a first set of values that characterize a first surface texture of the first area based on a first group of digital pixel values of the image, and comparing the first set of values to a second set of values that characterize a second surface texture, so as to produce a comparator value.
Type: Grant
Filed: June 26, 2023
Date of Patent: February 18, 2025
Assignee: CertainTeed Gypsum, Inc.
Inventors: Rachel Z. Pytel, Sidath S. Wijesooriya, Simon Mazoyer, Brice Dubost
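An illustrative sketch of how a "set of values" characterizing a texture might be computed from the pixel values of a side-lit image and compared between two areas. The chosen statistics are assumptions for illustration, not the patented characterization.

```python
import numpy as np

def texture_values(gray_area: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(gray_area.astype(float))
    return np.array([gray_area.std(),             # contrast of the shadowed texture
                     np.mean(np.hypot(gx, gy)),   # mean gradient magnitude (roughness)
                     gray_area.mean()])           # overall brightness

area_a = np.random.rand(256, 256)                 # dummy images of two illuminated areas
area_b = np.random.rand(256, 256)
comparator = float(np.linalg.norm(texture_values(area_a) - texture_values(area_b)))
print(comparator)                                 # single comparator value for the two textures
```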
-
Patent number: 12211314
Abstract: Provided is a control method including obtaining a first image; performing face recognition and gesture recognition on the first image; turning on a gesture control function when a first target face is recognized from the first image and a first target gesture is recognized from the first image; and returning to the act of obtaining the first image when the first target face is not recognized from the first image or the first target gesture is not recognized from the first image.
Type: Grant
Filed: January 26, 2021
Date of Patent: January 28, 2025
Assignee: BOE Technology Group Co., Ltd.
Inventors: Honghong Jia, Fengshuo Hu, Jingru Wang
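A control-flow sketch of the method: the gesture control function turns on only when both the target face and the target gesture are recognized in the same frame; otherwise the loop returns to grabbing the next image. The recognizer callables are hypothetical stand-ins.

```python
def control_loop(get_frame, recognize_face, recognize_gesture, enable_gesture_control):
    while True:
        frame = get_frame()                       # obtain the first image
        if frame is None:
            break
        face_ok = recognize_face(frame)           # first target face recognized?
        gesture_ok = recognize_gesture(frame)     # first target gesture recognized?
        if face_ok and gesture_ok:
            enable_gesture_control()              # turn on the gesture control function
            break
        # otherwise return to the act of obtaining the image (next iteration)
```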
-
Patent number: 12205308
Abstract: An image processing apparatus includes circuitry that estimates a first geometric transformation parameter aligning first output image data with the original image data, the first output image data being acquired by reading a first output result, and a second geometric transformation parameter aligning second output image data with the original image data, the second output image data being acquired by reading a second output result. The circuitry associates, based on the first and second geometric transformation parameters, combinations of color components of the first and second output image data, corresponding to pixels of the original image data, to generate pixel value association data, and determines, based on the pixel value association data, a mapping for estimating color drift between the first output image data and the second output image data from the original image data. The original image data is subjected to color conversion based on the mapping.
Type: Grant
Filed: July 9, 2021
Date of Patent: January 21, 2025
Assignee: RICOH COMPANY, LTD.
Inventor: Tomoyasu Aizaki
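A simplified sketch of the color-drift mapping: once the two output images are aligned with the original, each original pixel's color is paired with the colors the two outputs produced there, and a linear mapping is fit to predict the drift from the original color. The synthetic data and affine least-squares fit are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((5000, 3))                  # aligned RGB samples from the original image
out1 = original * 0.95 + 0.02 + rng.normal(0, 0.01, original.shape)   # first output result
out2 = original * 0.90 + 0.05 + rng.normal(0, 0.01, original.shape)   # second output result

drift = out2 - out1                               # color drift between the two outputs
A = np.hstack([original, np.ones((len(original), 1))])   # affine design matrix
mapping, *_ = np.linalg.lstsq(A, drift, rcond=None)      # original color -> predicted drift

predicted_drift = np.array([[0.5, 0.5, 0.5, 1.0]]) @ mapping
print(predicted_drift)                            # drift estimate used for color conversion
```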
-
Patent number: 12205271
Abstract: A rotating platform having a plurality of stalls is provided, each of the stalls being configured to house a respective animal during milking. The stalls are separated from one another by delimiting structures. A camera registers three-dimensional image data of the rotating platform within a field of view. A controller receives the image data that has been registered while the rotating platform completes at least one full revolution around its rotation axis. The controller processes the image data to derive a set of key features of the rotating platform, and then stores the set of key features in a data storage, which is configured to make the set of key features available for use at a later point in time.
Type: Grant
Filed: May 6, 2020
Date of Patent: January 21, 2025
Assignee: DeLaval Holding AB
Inventor: Erik Oscarsson
-
Patent number: 12205292
Abstract: Systems, methods and apparatus for semantic segmentation of 3D point clouds using deep neural networks. The deep neural network generally has two primary subsystems: a multi-branch cascaded subnetwork that includes an encoder and a decoder, and is configured to receive a sparse 3D point cloud, and capture and fuse spatial feature information in the sparse 3D point cloud at multiple scales and multiple hierarchical levels; and a spatial feature transformer subnetwork that is configured to transform the cascaded features generated by the multi-branch cascaded subnetwork and fuse these scaled features using a shared decoder attention framework to assist in the prediction of semantic classes for the sparse 3D point cloud.
Type: Grant
Filed: July 16, 2021
Date of Patent: January 21, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Ran Cheng, Ryan Razani, Bingbing Liu
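A heavily simplified, hypothetical stand-in for the multi-branch idea in PyTorch: per-point features are encoded by two branches, fused, and decoded into per-point semantic class scores. This is an illustration of the general pattern only, not the patented architecture.

```python
import torch
import torch.nn as nn

class TwoBranchSegNet(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.branch_fine = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.branch_coarse = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 64))
        self.decoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, points):                 # points: (N, 3) sparse point cloud
        fused = torch.cat([self.branch_fine(points), self.branch_coarse(points)], dim=-1)
        return self.decoder(fused)             # (N, num_classes) per-point semantic logits

logits = TwoBranchSegNet()(torch.randn(4096, 3))
print(logits.shape)
```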
-
Patent number: 12198395
Abstract: Disclosed are methods, systems, and apparatus for object localization in video. A method includes obtaining a reference image of an object; generating, from the reference image, homographic adapted images showing the object at various locations with various orientations; determining interest points from the homographic adapted images; determining locations of an object center in the homographic adapted images relative to the interest points; obtaining a sample image of the object; identifying matched pairs of interest points, each matched pair including an interest point from the homographic adapted images and a matching interest point in the sample image; and determining a location of the object in the sample image based on the locations of the object center in the homographic adapted images relative to the matched pairs. The method includes generating a homography matrix; and projecting the reference image of the object to the sample image using the homography matrix.
Type: Grant
Filed: December 21, 2021
Date of Patent: January 14, 2025
Assignee: ObjectVideo Labs, LLC
Inventors: Sima Taheri, Gang Qian, Allison Beach
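A sketch of the matching and projection steps using OpenCV's ORB keypoints and RANSAC homography as a stand-in for the abstract's learned interest points: matched pairs give a homography that projects the reference object's center into the sample image. The image paths are assumed.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)     # assumed file paths
sample = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(sample, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # homography matrix

h, w = ref.shape
center = np.float32([[[w / 2, h / 2]]])
print(cv2.perspectiveTransform(center, H))   # object center projected into the sample image
```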
-
Patent number: 12190634
Abstract: Techniques for automated facial measurement are provided. A set of coordinates for a set of landmarks on a face of a user are extracted by processing an image using a machine learning model. An orientation of the face of the user is determined. It is determined that impedance conditions are not present in the images, and a reference distance on the face of the user is estimated based on the image, where the image depicts the user facing towards the imaging sensor. A nose depth of the user is estimated based on a second image based at least in part on the reference distance, where the second image depicts the user facing at an angle relative to the imaging sensor. A facial mask is selected for the user based on the nose depth.
Type: Grant
Filed: September 27, 2021
Date of Patent: January 7, 2025
Assignees: ResMed Corp, ResMed Pty Ltd, Reciprocal Labs Corporation, ResMed Halifax ULC
Inventors: Leah J. Morrell, Michael C. Hogg, Paul A. Dickens, Harsh C. Parikh, Mark W. Potter, Matthieu Cormier, Dyllon T. Moseychuck
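A geometry-only sketch of the measurement idea: the frontal image yields a pixel-to-millimetre scale from an assumed average interpupillary distance, and the angled view yields a nose depth from the nose tip's offset from the facial midline. The landmark coordinates, the 63 mm average, and the trigonometry are assumptions, not the patented procedure.

```python
import math

AVG_IPD_MM = 63.0                                 # assumed average interpupillary distance

def mm_per_pixel(left_eye, right_eye):
    ipd_px = math.dist(left_eye, right_eye)
    return AVG_IPD_MM / ipd_px                    # reference scale from the frontal image

def nose_depth_mm(nose_tip_px_offset, scale, face_angle_deg):
    # offset of the nose tip from the facial midline in the angled image
    return (nose_tip_px_offset * scale) / math.sin(math.radians(face_angle_deg))

scale = mm_per_pixel((420, 310), (540, 312))      # frontal landmarks in pixels (dummy)
print(nose_depth_mm(nose_tip_px_offset=35, scale=scale, face_angle_deg=45))
```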
-
Patent number: 12190581
Abstract: Techniques are described for automated operations related to analyzing visual data from images captured in rooms of a building and optionally additional captured data about the rooms to assess room layout and other usability information for the building's rooms and optionally for the overall building, and subsequently using the assessed usability information in one or more further automated manners, such as to improve navigation of the building. The automated operations may include identifying one or more objects in each of the rooms to assess, evaluating one or more target attributes of each object, assessing usability of each object using its target attributes' evaluations and each room using its objects' assessments and other room information with respect to an indicated purpose, and combining the assessments of multiple rooms in a building and other building information to assess usability of the building with respect to its indicated purpose.
Type: Grant
Filed: September 11, 2023
Date of Patent: January 7, 2025
Assignee: MFTB Holdco, Inc.
Inventors: Viktoriya Stoeva, Sing Bing Kang, Naji Khosravan, Lambert E. Wixson
-
Patent number: 12175799
Abstract: Methods and systems for adaptive, template-independent handwriting extraction from images using machine learning models and without manual localization or review. For example, the system may receive an input image, wherein the input image comprises native printed content and handwritten content. The system may process the input image with a model to generate an output image, wherein the output image comprises extracted handwritten content based on the native handwritten content. The system may process the output image to digitally recognize the extracted handwritten content. The system may generate a digital representation of the input image, wherein the digital representation comprises the native printed content and the digitally recognized extracted handwritten content.
Type: Grant
Filed: July 9, 2021
Date of Patent: December 24, 2024
Assignee: THE BANK OF NEW YORK MELLON
Inventors: Houssem Chatbri, Bethany Kok
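A pipeline sketch matching the abstract's stages, with stand-ins: a trained extraction model (the hypothetical `handwriting_model` callable) produces an image containing only the handwritten strokes, and pytesseract digitizes them. This is an assumed composition, not the bank's implementation.

```python
import numpy as np
import pytesseract
from PIL import Image

def digitize(input_image: Image.Image, handwriting_model) -> dict:
    page = np.asarray(input_image.convert("L"))
    handwriting_only = handwriting_model(page)            # output image: extracted handwriting
    text = pytesseract.image_to_string(Image.fromarray(handwriting_only))
    return {"printed_page": input_image,                  # native printed content retained
            "recognized_handwriting": text}               # digitally recognized handwriting
```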
-
Patent number: 12175616
Abstract: An image positioning system capable of real-time positioning compensation includes a 3D marking device, a photographing device, a 3D scanning device, a beam splitter, and a processing unit. The 3D marking device has a polyhedral cube. The beam splitter is configured to cause the photographing device and the 3D scanning device to capture an image of and scan the 3D marking device respectively from the same field of view. The processing unit is configured to calculate image data and 3D scanning data generated respectively by the photographing device and the 3D scanning device to obtain a positioning compensation amount and perform positioning compensation.
Type: Grant
Filed: July 28, 2021
Date of Patent: December 24, 2024
Assignee: METAL INDUSTRIES RESEARCH & DEVELOPMENT CENTRE
Inventors: Chieh-Hua Chen, Po-Chi Hu, Chin-Chung Lin, Wen-Hui Huang, Yan-Ting Chen
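A minimal numeric sketch of the compensation step: the marker position estimated from the camera image and from the 3D scan (shared field of view via the beam splitter) are compared, and their difference is applied as the compensation amount. The numbers are dummies.

```python
import numpy as np

pose_from_image = np.array([102.4, 55.1, 300.2])    # marker position (mm) from the photograph
pose_from_scan = np.array([102.9, 54.8, 301.0])     # marker position (mm) from the 3D scan
compensation = pose_from_scan - pose_from_image     # positioning compensation amount
corrected = pose_from_image + compensation          # compensated position
print(compensation, corrected)
```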