Patents Examined by Manav Seth
  • Patent number: 12288384
    Abstract: An apparatus and method for a machine learning engine for domain generalization which trains a vision transformer neural network using a training dataset including at least two domains for diagnosis of a medical condition. Image patches and class tokens are processed through a sequence of feature extraction transformer blocks to obtain a predicted class token. In parallel, intermediate class tokens are extracted as outputs of each of the feature extraction transformer blocks, where each transformer block is a sub-model. One sub-model is randomly sampled from the sub-models to obtain a sampled intermediate class token. The intermediate class token is used to make a sub-model prediction. The vision transformer neural network is optimized based on a difference between the predicted class token and the sub-model prediction. Inferencing is performed for a target medical image in a target domain that is different from the at least two domains.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: April 29, 2025
    Assignee: Mohamed bin Zayed University of Artificial Intelligence
    Inventors: Maryam Sultana, Muhammad Muzammal Naseer, Muhammad Haris Khan, Salman Khan, Fahad Shahbaz Khan
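    Code sketch: A minimal PyTorch illustration of the training idea in the abstract above: the final class-token prediction is regularized against a prediction made from one randomly sampled intermediate block ("sub-model"). The block count, shared head, and KL consistency term are illustrative assumptions, not the patented implementation.
      import random
      import torch
      import torch.nn as nn

      embed_dim, num_classes, num_blocks = 64, 2, 6
      # Feature extraction transformer blocks (stand-ins for the patented architecture).
      blocks = nn.ModuleList([nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
                              for _ in range(num_blocks)])
      head = nn.Linear(embed_dim, num_classes)        # shared classification head (assumption)

      def forward_with_submodel(tokens):
          """tokens: (batch, 1 + num_patches, dim); index 0 is the class token."""
          intermediates = []
          for blk in blocks:
              tokens = blk(tokens)
              intermediates.append(tokens[:, 0])      # intermediate class token of this block
          final_logits = head(tokens[:, 0])           # prediction from the final class token
          sampled = random.choice(intermediates)      # randomly sample one sub-model output
          return final_logits, head(sampled)          # full-model and sub-model predictions

      x = torch.rand(8, 1 + 196, embed_dim)           # class token + 14x14 image patches
      final_logits, sub_logits = forward_with_submodel(x)
      # Consistency term: penalize disagreement between the full model and the sub-model.
      consistency = nn.functional.kl_div(sub_logits.log_softmax(-1),
                                         final_logits.softmax(-1), reduction="batchmean")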
  • Patent number: 12288323
    Abstract: A method and system for treating a patient is described. The method includes: (i) receiving a sample from a patient, the sample including one or more cancer cells; (ii) obtaining, using an imaging device, one or more images of the cancer cells; (iii) processing, using an imaging processor, the one or more images to extract one or more image coefficients; (iv) mapping, using a trained classifier, the one or more image coefficients to a cancer cell type; (v) identifying, based on mapping the one or more image coefficients to a cancer cell type, one or more cancer cell types in the sample; (vi) identifying, based on the identified one or more cancer cell types in the sample, a course of treatment specific to the one or more cancer cell types; and (vii) treating the patient using the identified course of treatment.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: April 29, 2025
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Shrutin Ulmann, Prasad Raghotham Venkat
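    Code sketch: A toy illustration of steps (iii)-(vi): simple image coefficients are extracted, a trained classifier maps them to a cell type, and a type-specific course of treatment is looked up. The coefficients, the random-forest classifier, and the treatment table are placeholders, not the patented pipeline.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def image_coefficients(img):
          # Placeholder "image coefficients": coarse intensity statistics of the cell image.
          return np.array([img.mean(), img.std(), np.median(img), img.max() - img.min()])

      # Train on labeled example images (random stand-ins for two cancer cell types).
      train_imgs = [np.random.rand(64, 64) * s for s in (0.4, 0.4, 1.0, 1.0)]
      train_types = ["type_A", "type_A", "type_B", "type_B"]
      clf = RandomForestClassifier(n_estimators=50, random_state=0)
      clf.fit([image_coefficients(im) for im in train_imgs], train_types)

      treatments = {"type_A": "regimen 1", "type_B": "regimen 2"}   # hypothetical mapping

      sample_cell_image = np.random.rand(64, 64)
      cell_type = clf.predict([image_coefficients(sample_cell_image)])[0]
      print(cell_type, "->", treatments[cell_type])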
  • Patent number: 12277701
    Abstract: A computer implemented method for reading a test region of an assay, the method comprising: (i) providing digital image data of a first assay; (ii) inputting the digital image data into a trained convolutional neural network configured to output a first probability, based on the input digital image data, that a first region of pixels of the digital image data corresponds to a first test region of the assay; (iii) if the first probability is at or above a first predetermined threshold, accepting the first region of pixels as a first region of interest associated with the first test region; and (iv) estimating an intensity value of a portion of the first test region in the first region of interest.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: April 15, 2025
    Assignee: FORSITE DIAGNOSTICS LIMITED
    Inventors: Neeraj Adsul, Marcin Tokarzewski
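    Code sketch: A schematic version of steps (ii)-(iv): a stubbed CNN scores candidate pixel regions, a region is accepted as the region of interest only if its probability clears a threshold, and an intensity value is then estimated inside it. The scoring stub and the 0.9 threshold are assumptions.
      import numpy as np

      def cnn_region_probability(region_pixels):
          # Stand-in for the trained convolutional neural network's output probability.
          return float(region_pixels.mean() > 0.45)

      def read_test_region(image, candidate_regions, threshold=0.9):
          for region_slice in candidate_regions:            # candidate regions of pixels
              region = image[region_slice]
              if cnn_region_probability(region) >= threshold:
                  roi = region                              # accepted first region of interest
                  intensity = float(roi.mean())             # estimated intensity of the test band
                  return roi, intensity
          return None, None

      digital_image = np.random.rand(200, 600)
      candidates = [np.s_[80:120, 250:300], np.s_[80:120, 350:400]]
      roi, intensity = read_test_region(digital_image, candidates)
      print(intensity)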
  • Patent number: 12272148
    Abstract: Approaches presented herein provide for semantic data matching, as may be useful for selecting data from a large unlabeled dataset to train a neural network. For an object detection use case, such a process can identify images within an unlabeled set even when an object of interest represents a relatively small portion of an image or there are many other objects in the image. A query image can be processed to extract image features or feature maps from only one or more regions of interest in that image, as may correspond to objects of interest. These features are compared with images in an unlabeled dataset, with similarity scores being calculated between the features of the region(s) of interest and individual images in the unlabeled set. One or more highest scored images can be selected as training images showing objects that are semantically similar to the object in the query image.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: April 8, 2025
    Assignee: Nvidia Corporation
    Inventors: Donna Roy, Suraj Kothawade, Elmar Haussmann, Jose Manuel Alvarez Lopez, Michele Fenzi, Christoph Angerer
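    Code sketch: A compact sketch of the selection step: features pooled from the query image's region of interest are compared (cosine similarity) against features of each unlabeled image, and the highest-scoring images are chosen as training candidates. The pooled-statistics "feature extractor" is a stand-in for whatever backbone produces the real feature maps.
      import numpy as np

      def roi_features(image, roi):                      # roi = (y0, y1, x0, x1)
          y0, y1, x0, x1 = roi
          patch = image[y0:y1, x0:x1]
          return patch.reshape(-1, 16).mean(axis=0)      # toy pooled feature vector

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

      query_image = np.random.rand(128, 128)
      q_feat = roi_features(query_image, (32, 64, 32, 64))    # features from the ROI only

      unlabeled_set = [np.random.rand(128, 128) for _ in range(100)]
      scores = [cosine(q_feat, img.reshape(-1, 16).mean(axis=0)) for img in unlabeled_set]
      top_k = np.argsort(scores)[::-1][:5]               # most semantically similar images
      print(top_k)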
  • Patent number: 12271442
    Abstract: Methods and computing apparatus are provided for training AI/ML systems and for using such systems to perform image analysis so that the damaged parts of a physical structure can be identified accurately and efficiently. According to one embodiment, a method includes selecting an AI/ML system of a particular type; and training the AI/ML system using a dataset comprising one or more auto-labeled images. The auto-labeling was performed using the selected AI/ML system configured using a parts-identification model. The configuration of the trained AI/ML system is output as an improved parts-identification model.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: April 8, 2025
    Assignee: Genpact USA, Inc.
    Inventors: Krishna Dev Oruganty, Siva Tian, Amit Arora, Edmond John Schneider
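    Code sketch: A high-level sketch of the auto-label-then-retrain flow: an existing parts-identification model labels raw images, the selected AI/ML system is trained on those auto-labeled images, and the resulting configuration is exported as the improved model. All functions and file names here are placeholders.
      def auto_label(model, images):
          # Each raw image is paired with labels predicted by the current parts model.
          return [(img, model(img)) for img in images]

      def train(system_type, labeled_dataset):
          # Stand-in for fitting the selected AI/ML system on the auto-labeled images.
          return lambda img: ["bumper", "left door"]

      current_parts_model = lambda img: ["bumper", "left door"]     # existing model (stub)
      raw_images = ["damage_001.jpg", "damage_002.jpg"]

      auto_labeled = auto_label(current_parts_model, raw_images)
      improved_model = train("object-detector", auto_labeled)
      improved_config = {"type": "object-detector", "weights": "improved_parts_model.bin"}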
  • Patent number: 12271684
    Abstract: Provided are methods for automated verification of annotated sensor data, which can include receiving annotated image data associated with an image, wherein the annotated image data comprises an annotation associated with an object within the image, determining an error with the annotation based at least in part on a comparison of the annotation with annotation criteria data associated with criteria for at least one annotation, determining a priority level of the error, and routing the annotation to a destination based at least in part on the priority level of the error. Systems and computer program products are also provided.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: April 8, 2025
    Assignee: Motional AD LLC
    Inventors: Kok Seang Tan, Holger Caesar, Yiluan Guo, Oscar Beijbom
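    Code sketch: A minimal sketch of the verification flow: an annotation is checked against annotation-criteria data, any error found is assigned a priority level, and the annotation is routed to a destination selected by that priority. The specific criteria, priorities, and queues are illustrative assumptions.
      def find_error(annotation, criteria):
          if annotation["area"] < criteria["min_area"]:
              return "box_too_small"
          if annotation["label"] not in criteria["allowed_labels"]:
              return "unknown_label"
          return None

      def priority_of(error):
          return {"unknown_label": "high", "box_too_small": "low"}.get(error)

      def route(annotation, priority):
          destinations = {"high": "manual_review_queue", "low": "auto_fix_queue", None: "accepted"}
          return destinations[priority]

      criteria = {"min_area": 100, "allowed_labels": {"car", "pedestrian", "cyclist"}}
      annotation = {"label": "scooter", "area": 450}
      print(route(annotation, priority_of(find_error(annotation, criteria))))   # manual_review_queue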
  • Patent number: 12260568
    Abstract: An optimal automatic mapping method between a real image and a thermal image in a body heat tester, and the body heat tester using the method. The real image from the real imaging camera has a wider angle of view than the thermal image from the thermal imaging camera, to maximize the use of thermal imaging without omission of thermal imaging pixels in a thermal inspection device using an infrared imaging device. The body heat tester comprises a thermal imaging camera, a real imaging camera and a data processing unit. The data processing unit matches the thermal and real images, obtains the reconstructed real image matched with the thermal image by stretching or shortening the top, bottom, left, and right of the real image based on the thermal image, and detects the body heat (temperature) of the subject using the thermal image and the reconstructed real image.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: March 25, 2025
    Assignee: MESH CO., LTD.
    Inventors: Jung Hoon Lee, Joo Sung Lee, Hyun Chul Ko
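    Code sketch: A simplified view of the reconstruction step: the wider-angle real image is cropped to the part that overlaps the thermal camera's field of view and stretched/shortened (here with an OpenCV resize) onto the thermal image's grid, so a subject located in the reconstructed real image can be read out at the same pixel in the thermal image. The crop offsets are hypothetical calibration values.
      import numpy as np
      import cv2

      thermal = np.random.rand(120, 160).astype(np.float32)        # thermal image (H, W)
      real = np.random.randint(0, 255, (720, 1280, 3), np.uint8)   # wider-angle real image

      # Hypothetical calibration: the thermal field of view maps to this real-image region.
      top, bottom, left, right = 90, 650, 180, 1120
      overlap = real[top:bottom, left:right]

      # Reconstructed real image: stretched/shortened to the thermal resolution.
      reconstructed = cv2.resize(overlap, (thermal.shape[1], thermal.shape[0]))

      # A subject detected at (row, col) in the reconstructed real image is read out at the
      # same (row, col) of the thermal image to get the body heat (temperature).
      row, col = 60, 80
      temperature = thermal[row, col]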
  • Patent number: 12254700
    Abstract: A system comprising at least one hardware processor for updating a cloud-based road anomaly database is disclosed, wherein the system receives information from a vehicle regarding an object detected by one or more sensors of the vehicle, wherein the information may include a location and/or a size of the detected object; compares the location and/or the size of the first detected object against data stored in a road feature database regarding the first detected object; determines, based on comparing the location and/or the size, an accuracy score associated with the first detected object and the second detected object; and updates the cloud-based road anomaly database with the received information for the second detected object based on the associated accuracy score being higher than at least one of a threshold accuracy score or an accuracy score associated with corresponding object information stored in the cloud-based road anomaly database.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: March 18, 2025
    Assignee: Hitachi Astemo, Ltd.
    Inventors: Xiaoliang Zhu, Subrata Kumar Kundu, Paul Liu
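    Code sketch: A schematic version of the update rule: the vehicle's report is compared with the stored road-feature data, an accuracy score is computed from the location/size agreement, and the cloud database entry is updated only if that score beats a threshold or the score already stored. The scoring formula and threshold are assumptions.
      import math

      def accuracy_score(report, stored):
          dist = math.dist(report["location"], stored["location"])
          size_err = abs(report["size"] - stored["size"]) / max(stored["size"], 1e-6)
          return 1.0 / (1.0 + dist + size_err)          # higher = better agreement

      def maybe_update(db, key, report, stored, threshold=0.5):
          score = accuracy_score(report, stored)
          existing = db.get(key, {}).get("score", 0.0)
          if score >= threshold or score > existing:
              db[key] = {"info": report, "score": score}

      cloud_db = {}
      stored_feature = {"location": (10.0, 5.0), "size": 1.2}
      vehicle_report = {"location": (10.3, 5.1), "size": 1.1}
      maybe_update(cloud_db, "pothole_17", vehicle_report, stored_feature)
      print(cloud_db)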
  • Patent number: 12256078
    Abstract: A predetermined context variable is assigned, according to a device, system, or method, to a first bin of a bin sequence obtained by binarizing an adaptive orthogonal transform identifier indicating a mode of adaptive orthogonal transform in image encoding and context encoding is performed for the first bin of the bin sequence. Furthermore, a predetermined context variable is assigned to a first bin of a bin sequence obtained by binarizing an adaptive orthogonal transform identifier indicating a mode of inverse adaptive orthogonal transform in image decoding and context decoding is performed for the first bin of the bin sequence.
    Type: Grant
    Filed: February 2, 2024
    Date of Patent: March 18, 2025
    Assignee: SONY GROUP CORPORATION
    Inventor: Takeshi Tsukuba
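    Code sketch: A simplified illustration of the encoding side: the adaptive orthogonal transform identifier is binarized into a bin sequence, the first bin is coded with a dedicated (predetermined) context variable, and the remaining bins are treated as bypass bins. The truncated-unary binarization and the context index are illustrative, not the codec's normative definition.
      FIRST_BIN_CTX = 0                      # predetermined context variable for bin 0

      def binarize_transform_id(idx, max_idx=4):
          # Truncated-unary style binarization: idx ones followed by a terminating zero.
          bins = [1] * idx
          if idx < max_idx:
              bins.append(0)
          return bins

      def code_bins(bins):
          coded = []
          for i, b in enumerate(bins):
              if i == 0:
                  coded.append(("context", FIRST_BIN_CTX, b))   # context-coded first bin
              else:
                  coded.append(("bypass", None, b))             # remaining bins bypass-coded
          return coded

      print(code_bins(binarize_transform_id(2)))
      # [('context', 0, 1), ('bypass', None, 1), ('bypass', None, 0)]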
  • Patent number: 12236587
    Abstract: Examples herein include methods, systems, and computer program products for utilizing neural networks in ultrasound systems. The methods include processor(s) of a computing device identifying a neural network for implementation on the computing device to generate, based on ultrasound data, inferences and confidence levels for the inferences, the computing device being communicatively coupled via a computing network to an ultrasound machine configured to generate the ultrasound data. The processor(s) implements the neural network on the computing device, including configuring the neural network to generate an inference and a confidence level for at least one image of the images. The processor(s) obtains the ultrasound data including images from the ultrasound machine. The processor(s) determines, for the at least one image, an accuracy of the inference and the confidence level. The processor(s) automatically reconfigures the neural network to increase the accuracy based on the determining the accuracy.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: February 25, 2025
    Assignee: FUJIFILM SONOSITE, INC.
    Inventors: Davin Dhatt, Thomas Duffy, Adam Pely, Christopher White, Andrew Lundberg, Paul Danset, Craig Chamberlain, Diku Mandavia
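    Code sketch: A minimal sketch of the monitoring loop: the deployed network produces an inference and a confidence level per ultrasound image, accuracy is checked against reference labels, and the network is reconfigured when accuracy falls below a target. The network stub and the "gain" knob stand in for the real model and its reconfiguration.
      def run_network(config, image):
          # Stand-in for the neural network: returns (inference, confidence level).
          return "cardiac_view_A", min(1.0, 0.72 * config["gain"])

      def evaluate(config, images, reference_labels):
          correct = sum(run_network(config, img)[0] == label
                        for img, label in zip(images, reference_labels))
          return correct / len(images)

      config = {"gain": 1.0}
      images, labels = ["frame_1", "frame_2"], ["cardiac_view_A", "cardiac_view_B"]

      accuracy = evaluate(config, images, labels)
      if accuracy < 0.9:
          config["gain"] *= 1.1        # placeholder for automatic reconfiguration
          accuracy = evaluate(config, images, labels)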
  • Patent number: 12229976
    Abstract: The present disclosure relates generally to evaluating the surfaces of a building. The present disclosure relates more particularly to a method of characterizing a surface texture of a building surface. The method includes illuminating a first area of the building surface from a single direction and capturing a first image of the first area using a camera while the first area is illuminated. The first image includes a first group of digital pixel values. The method further includes calculating a first set of values that characterize a first surface texture of the first area based on the first group of digital pixel values of the first image, and comparing the first set of values to a second set of values that characterize a second surface texture, so as to produce a comparator value.
    Type: Grant
    Filed: June 26, 2023
    Date of Patent: February 18, 2025
    Assignee: CertainTeed Gypsum, Inc.
    Inventors: Rachel Z. Pytel, Sidath S. Wijesooriya, Simon Mazoyer, Brice Dubost
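    Code sketch: A toy version of the characterization step: a set of values summarizing the surface texture is computed from the pixel values of the directionally lit image, and two such sets are compared to produce a single comparator value. The gradient/contrast statistics and the Euclidean distance are assumptions, not the patented metric.
      import numpy as np

      def texture_values(pixels):
          # Directional lighting turns surface relief into intensity variation, so simple
          # contrast and gradient statistics serve as a texture signature here.
          gy, gx = np.gradient(pixels.astype(float))
          return np.array([pixels.std(), np.abs(gx).mean(), np.abs(gy).mean()])

      def comparator(values_a, values_b):
          return float(np.linalg.norm(values_a - values_b))     # 0 means matching texture

      first_area = np.random.rand(256, 256)        # image of the first illuminated area
      reference = np.random.rand(256, 256)         # image of a reference surface
      print(comparator(texture_values(first_area), texture_values(reference)))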
  • Patent number: 12211314
    Abstract: Provided is a control method including obtaining a first image; performing face recognition and gesture recognition on the first image; turning on a gesture control function when a first target face is recognized from the first image and a first target gesture is recognized from the first image; and returning to the act of obtaining the first image when the first target face is not recognized from the first image or the first target gesture is not recognized from the first image.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: January 28, 2025
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Honghong Jia, Fengshuo Hu, Jingru Wang
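    Code sketch: A small sketch of the control flow described above: gesture control is turned on only when both the first target face and the first target gesture are recognized in the same image; otherwise the loop returns to obtaining the next image. The recognizers are stubs.
      def recognize_face(frame):      # returns an identity string or None
          return frame.get("face")

      def recognize_gesture(frame):   # returns a gesture name or None
          return frame.get("gesture")

      def control_loop(frames, target_face="owner", target_gesture="palm"):
          gesture_control_on = False
          for frame in frames:                                  # obtain the first image
              if (recognize_face(frame) == target_face
                      and recognize_gesture(frame) == target_gesture):
                  gesture_control_on = True                     # turn on gesture control
                  break
              # otherwise: fall through and obtain the next image
          return gesture_control_on

      frames = [{"face": "guest", "gesture": "palm"},
                {"face": "owner", "gesture": "palm"}]
      print(control_loop(frames))     # True on the second frame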
  • Patent number: 12205308
    Abstract: An image processing apparatus includes circuitry that estimates a first geometric transformation parameter aligning first output image data with the original image data, the first output image data being acquired by reading a first output result, and a second geometric transformation parameter aligning second output image data with the original image data, the second output image data being acquired by reading a second output result. The circuitry associates, based on the first and second geometric transformation parameters, combinations of color components of the first and second output image data, corresponding to pixels of the original image data, to generate pixel value association data, and determines, based on the pixel value association data, a mapping for estimating color drift between the first output image data and the second output image data from the original image data. The original image data is subjected to color conversion based on the mapping.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: January 21, 2025
    Assignee: RICOH COMPANY, LTD.
    Inventor: Tomoyasu Aizaki
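    Code sketch: A condensed sketch of the color-drift idea: with the two printed-and-read outputs already geometrically aligned to the original, corresponding pixel colors are paired up and a mapping (here a per-output linear least-squares fit) is estimated so the original can be color-converted toward each output. The alignment is assumed done; the linear model is an illustrative choice.
      import numpy as np

      rng = np.random.default_rng(0)
      original = rng.random((64, 64, 3))
      output_1 = np.clip(original * 0.9 + 0.05, 0, 1)     # first output image data (aligned)
      output_2 = np.clip(original * 1.1 - 0.03, 0, 1)     # second output image data (aligned)

      # Pixel value association data: original color paired with each output's color.
      src = original.reshape(-1, 3)
      src_h = np.hstack([src, np.ones((src.shape[0], 1))])       # affine term

      # Least-squares color mappings from the original to each output.
      map_1, *_ = np.linalg.lstsq(src_h, output_1.reshape(-1, 3), rcond=None)
      map_2, *_ = np.linalg.lstsq(src_h, output_2.reshape(-1, 3), rcond=None)

      # Estimated color drift between the two outputs, expressed from the original's colors.
      drift = (src_h @ map_1) - (src_h @ map_2)
      print(np.abs(drift).mean())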
  • Patent number: 12205271
    Abstract: A rotating platform having a plurality of stalls is provided, each of the stalls being configured to house a respective animal during milking. The stalls are separated from one another by delimiting structures. A camera registers three-dimensional image data of the rotating platform within a field of view. A controller receives the image data that has been registered while the rotating platform completes at least one full revolution around its rotation axis. The controller processes the image data to derive a set of key features of the rotating platform, and then stores the set of key features in a data storage, which is configured to make the set of key features available for use at a later point in time.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: January 21, 2025
    Assignee: DeLaval Holding AB
    Inventor: Erik Oscarsson
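    Code sketch: A bare-bones data-flow sketch: 3D image frames registered over one full revolution are reduced to a set of key features of the platform (here just the platform angles at which delimiting structures are seen, as a stand-in) and stored for use at a later point in time.
      import json

      def derive_key_features(frames):
          # Placeholder derivation: record the platform angle of each frame in which a
          # delimiting structure (stall divider) was detected.
          return sorted(f["angle"] for f in frames if f["divider_detected"])

      frames = [{"angle": a, "divider_detected": a % 45 == 0} for a in range(0, 360, 5)]
      key_features = derive_key_features(frames)

      with open("platform_key_features.json", "w") as fh:     # data storage for later use
          json.dump({"divider_angles": key_features}, fh)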
  • Patent number: 12205292
    Abstract: Systems, methods and apparatus for semantic segmentation of 3D point clouds using deep neural networks. The deep neural network generally has two primary subsystems: a multi-branch cascaded subnetwork that includes an encoder and a decoder, and is configured to receive a sparse 3D point cloud, and capture and fuse spatial feature information in the sparse 3D point cloud at multiple scales and multiple hierarchical levels; and a spatial feature transformer subnetwork that is configured to transform the cascaded features generated by the multi-branch cascaded subnetwork and fuse these scaled features using a shared decoder attention framework to assist in the prediction of semantic classes for the sparse 3D point cloud.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: January 21, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ran Cheng, Ryan Razani, Bingbing Liu
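    Code sketch: A schematic PyTorch stand-in for the multi-scale idea: each branch sees the point coordinates quantized at a different resolution, the branch features are fused, and a shared decoder predicts per-point semantic class logits. This is not the patented cascaded architecture or its attention-based fusion.
      import torch
      import torch.nn as nn

      class MultiScaleSegNet(nn.Module):
          def __init__(self, num_classes=10, scales=(1.0, 0.5, 0.25)):
              super().__init__()
              self.scales = scales
              # One encoder branch per scale; each is a small point-wise MLP.
              self.branches = nn.ModuleList([
                  nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 32))
                  for _ in scales])
              # Shared decoder fuses the concatenated multi-scale features.
              self.decoder = nn.Sequential(
                  nn.Linear(32 * len(scales), 64), nn.ReLU(), nn.Linear(64, num_classes))

          def forward(self, points):                     # points: (N, 3) sparse point cloud
              feats = []
              for s, branch in zip(self.scales, self.branches):
                  coarse = torch.round(points / s) * s   # snap coordinates to an s-sized grid
                  feats.append(branch(coarse))
              fused = torch.cat(feats, dim=-1)           # fuse spatial features across scales
              return self.decoder(fused)                 # per-point semantic class logits

      logits = MultiScaleSegNet()(torch.rand(1024, 3))
      print(logits.shape)                                # torch.Size([1024, 10])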
  • Patent number: 12198395
    Abstract: Disclosed are methods, systems, and apparatus for object localization in video. A method includes obtaining a reference image of an object; generating, from the reference image, homographic adapted images showing the object at various locations with various orientations; determining interest points from the homographic adapted images; determining locations of an object center in the homographic adapted images relative to the interest points; obtaining a sample image of the object; identifying matched pairs of interest points, each matched pair including an interest point from the homographic adapted images and a matching interest point in the sample image; and determining a location of the object in the sample image based on the locations of the object center in the homographic adapted images relative to the matched pairs. The method includes generating a homography matrix; and projecting the reference image of the object to the sample image using the homography matrix.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 14, 2025
    Assignee: ObjectVideo Labs, LLC
    Inventors: Sima Taheri, Gang Qian, Allison Beach
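    Code sketch: A small numpy sketch of the voting idea: each interest point from the homographic adapted images carries an offset to the object center; when it is matched to an interest point in the sample image, that offset becomes a vote for the center's location in the sample, and the median of the votes localizes the object. Descriptor matching and homography estimation are assumed done upstream.
      import numpy as np

      # Reference interest points and their stored offsets to the object center, as
      # determined from the homographic adapted images.
      ref_points = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 44.0]])
      center_offsets = np.array([[15.0, 10.0], [-15.0, 7.0], [0.0, -22.0]])

      # Matched interest-point locations in the sample image (same order as ref_points).
      sample_points = np.array([[110.0, 212.0], [141.0, 214.0], [124.0, 245.0]])

      votes = sample_points + center_offsets          # each matched pair votes for the center
      estimated_center = np.median(votes, axis=0)
      print(estimated_center)                         # object location in the sample image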
  • Patent number: 12190634
    Abstract: Techniques for automated facial measurement are provided. A set of coordinates for a set of landmarks on a face of a user are extracted by processing an image using a machine learning model. An orientation of the face of the user is determined. It is determined that impedance conditions are not present in the images, and a reference distance on the face of the user is estimated based on the image, where the image depicts the user facing towards the imaging sensor. A nose depth of the user is estimated based on a second image based at least in part on the reference distance, where the second image depicts the user facing at an angle relative to the imaging sensor. A facial mask is selected for the user based on the nose depth.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: January 7, 2025
    Assignees: ResMed Corp, ResMed Pty Ltd, Reciprocal Labs Corporation, ResMed Halifax ULC
    Inventors: Leah J. Morrell, Michael C. Hogg, Paul A. Dickens, Harsh C. Parikh, Mark W. Potter, Matthieu Cormier, Dyllon T. Moseychuck
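    Code sketch: A rough sketch of the measurement chain: a pixel reference span from the frontal-image landmarks is converted to millimetres using an assumed physical reference distance, that scale turns a pixel span in the angled image into a nose-depth estimate, and a mask size is chosen from a table. The landmark choices, the 63 mm reference, and the size table are illustrative assumptions only.
      import math

      # Frontal image: assume the inter-pupil landmark pair as the pixel reference span,
      # with an assumed physical reference distance of 63 mm.
      left_pupil, right_pupil = (210.0, 300.0), (390.0, 302.0)
      reference_mm, reference_px = 63.0, math.dist(left_pupil, right_pupil)
      mm_per_px = reference_mm / reference_px

      # Angled image: the nose-tip to nose-base landmark span acts as the depth measurement.
      nose_tip, nose_base = (330.0, 380.0), (302.0, 384.0)
      nose_depth_mm = math.dist(nose_tip, nose_base) * mm_per_px

      sizes = [("small", 24.0), ("medium", 30.0), ("large", float("inf"))]   # hypothetical table
      mask = next(name for name, limit in sizes if nose_depth_mm <= limit)
      print(round(nose_depth_mm, 1), mask)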
  • Patent number: 12190581
    Abstract: Techniques are described for automated operations related to analyzing visual data from images captured in rooms of a building and optionally additional captured data about the rooms to assess room layout and other usability information for the building's rooms and optionally for the overall building, and to subsequently using the assessed usability information in one or more further automated manners, such as to improve navigation of the building. The automated operations may include identifying one or more objects in each of the rooms to assess, evaluating one or more target attributes of each object, assessing usability of each object using its target attributes' evaluations and each room using its objects' assessment and other room information with respect to an indicated purpose, and combining the assessments of multiple rooms in a building and other building information to assess usability of the building with respect to its indicated purpose.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: January 7, 2025
    Assignee: MFTB Holdco, Inc.
    Inventors: Viktoriya Stoeva, Sing Bing Kang, Naji Khosravan, Lambert E. Wixson
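    Code sketch: A bare-bones version of the aggregation described above: objects are scored from their target-attribute evaluations, room scores combine the object scores with other room information, and a building score combines the rooms with respect to the indicated purpose. The equal weighting is purely illustrative.
      def object_score(attribute_evaluations):
          return sum(attribute_evaluations.values()) / len(attribute_evaluations)

      def room_score(objects, other_room_info=0.0):
          return sum(object_score(o) for o in objects) / len(objects) + other_room_info

      def building_score(room_scores):
          return sum(room_scores.values()) / len(room_scores)

      kitchen = [{"counter_space": 0.8, "layout": 0.7}, {"appliance_age": 0.5, "layout": 0.9}]
      bedroom = [{"window_light": 0.9, "layout": 0.6}]
      rooms = {"kitchen": room_score(kitchen), "bedroom": room_score(bedroom, other_room_info=0.05)}
      print(building_score(rooms))    # usability of the building for its indicated purpose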
  • Patent number: 12175799
    Abstract: Methods and systems for adaptive, template-independent handwriting extraction from images using machine learning models and without manual localization or review. For example, the system may receive an input image, wherein the input image comprises native printed content and handwritten content. The system may process the input image with a model to generate an output image, wherein the output image comprises extracted handwritten content based on the native handwritten content. The system may process the output image to digitally recognize the extracted handwritten content. The system may generate a digital representation of the input image, wherein the digital representation comprises the native printed content and the digitally recognized extracted handwritten content.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: December 24, 2024
    Assignee: THE BANK OF NEW YORK MELLON
    Inventors: Houssem Chatbri, Bethany Kok
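    Code sketch: A schematic pipeline matching the steps above: a stubbed extraction model produces an output image containing only the handwritten strokes, a recognition step digitizes that extracted content, and the result is merged with the printed content into one digital representation. The thresholding and the recognizer are placeholders, not the trained models described in the patent.
      import numpy as np

      def extract_handwriting(input_image):
          # Stand-in for the template-independent extraction model: keep mid-intensity
          # strokes, blank out everything else (thresholds are arbitrary here).
          return np.where((input_image > 60) & (input_image < 180), input_image, 255)

      def recognize_handwriting(handwriting_image):
          return "John Smith"                      # placeholder for the handwriting recognizer

      def digitize(input_image, printed_text):
          output_image = extract_handwriting(input_image)
          handwritten_text = recognize_handwriting(output_image)
          return {"printed": printed_text, "handwritten": handwritten_text}

      scanned_form = np.random.randint(0, 256, (1100, 850), dtype=np.uint8)
      print(digitize(scanned_form, printed_text="Account holder name: ______"))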
  • Patent number: 12175616
    Abstract: An image positioning system capable of real-time positioning compensation includes a 3D marking device, a photographing device, a 3D scanning device, a beam splitter, and a processing unit. The 3D marking device has a polyhedral cube. The beam splitter is configured to cause the photographing device and the 3D scanning device to capture an image of and scan the 3D marking device respectively from the same field of view. The processing unit is configured to calculate image data and 3D scanning data generated respectively by the photographing device and the 3D scanning device to obtain a positioning compensation amount and perform positioning compensation.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: December 24, 2024
    Assignee: METAL INDUSTRIES RESEARCH & DEVELOPMENT CENTRE
    Inventors: Chieh-Hua Chen, Po-Chi Hu, Chin-Chung Lin, Wen-Hui Huang, Yan-Ting Chen
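    Code sketch: A simplified numeric illustration of the compensation step: the 3D marking device's reference point is located both in the image data and in the 3D scanning data (the beam splitter gives both devices the same field of view), and the offset between the two estimates is taken as the positioning compensation amount. The coordinates are made up.
      import numpy as np

      marker_from_image = np.array([12.40, -3.10, 250.0])   # mm, estimated from image data
      marker_from_scan = np.array([12.55, -3.02, 249.6])    # mm, estimated from 3D scan data

      compensation = marker_from_scan - marker_from_image   # positioning compensation amount
      corrected_target = np.array([100.0, 20.0, 250.0]) + compensation
      print(compensation, corrected_target)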