Patents Examined by Van D Huynh
  • Patent number: 11967049
    Abstract: The present disclosure describes multi-stage image editing techniques to improve detail and accuracy in edited images. An input image including a target region to be edited and an edit parameter specifying a modification to the target region are received. A parsing map of the input image is generated. A latent representation of the parsing map is generated. An edit is applied to the latent representation of the parsing map based on the edit parameter. The edited latent representation is input to a neural network to generate a modified parsing map including the target region with a shape change according to the edit parameter. Based on the input image and the modified parsing map, a masked image corresponding to the shape change is generated. Based on the masked image, a neural network is used to generate an edited image with the modification to the target region.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: April 23, 2024
    Assignee: Adobe Inc.
    Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Haoliang Wang, YoungJoong Kwon
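    A minimal sketch of the multi-stage flow that the abstract of 11,967,049 describes: edit a latent of a parsing map, regenerate the map, derive a mask from the shape change, then synthesize the edited image. All callables (`parse`, `encode`, `decode`, `mask_from_maps`, `inpaint`) are hypothetical stand-ins, not the patented implementation.

    ```python
    import numpy as np

    def edit_image(image, edit_param, parse, encode, decode, mask_from_maps, inpaint):
        """Sketch of a multi-stage edit; every callable is an assumed stand-in."""
        parsing_map = parse(image)                 # parsing map of the input image
        latent = encode(parsing_map)               # latent representation of the map
        edited_latent = latent + edit_param        # apply the edit in latent space (illustrative)
        modified_map = decode(edited_latent)       # modified parsing map with the shape change
        mask = mask_from_maps(parsing_map, modified_map)  # region corresponding to the change
        return inpaint(image, mask, modified_map)  # second network fills the masked region

    # Toy usage with trivial stand-ins on a random "image".
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    out = edit_image(
        img, edit_param=0.1,
        parse=lambda x: (x.mean(axis=-1) > 0.5).astype(float),
        encode=lambda m: m.mean(),
        decode=lambda z: np.full((64, 64), z > 0.5, dtype=float),
        mask_from_maps=lambda a, b: (a != b).astype(float),
        inpaint=lambda x, m, mm: x * (1 - m[..., None]) + m[..., None] * 0.5,
    )
    print(out.shape)
    ```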
  • Patent number: 11967080
    Abstract: A system is provided for object localization in image data. The system includes an object localization framework comprising a plurality of object localization processes. The system is configured to receive an image comprising unannotated image data having at least one object in the image, access a first object localization process of the plurality of object localization processes, determine first bounding box information for the image using the first object localization process, wherein the first bounding box information comprises at least one first bounding box annotating at least a first portion of the at least one object in the image, and receive first feedback regarding the first bounding box information determined by the first object localization process. The system is further configured to persist the image with the first bounding box information or access a second object localization process based on the first feedback.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: April 23, 2024
    Assignee: Salesforce, Inc.
    Inventors: Joy Mustafi, Lakshya Kumar, Rajdeep Singh Dua
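    An illustrative sketch of the cascade the abstract of 11,967,080 describes: try one localization process, collect feedback on its bounding boxes, and either persist the result or fall through to the next process. The function names and the accept/reject rule are assumptions.

    ```python
    from typing import Callable, List, Optional, Tuple

    Box = Tuple[int, int, int, int]  # (x, y, width, height)

    def localize_with_feedback(
        image,
        processes: List[Callable[[object], List[Box]]],
        feedback: Callable[[List[Box]], bool],
    ) -> Optional[List[Box]]:
        """Try each localization process in turn; persist the first accepted result."""
        for process in processes:
            boxes = process(image)          # bounding boxes annotating the object(s)
            if feedback(boxes):             # e.g. a reviewer or heuristic accepts the boxes
                return boxes                # persist the image with this bounding box info
        return None                         # no process produced an accepted result

    # Toy usage: the first process is rejected, the second is accepted.
    boxes = localize_with_feedback(
        image=None,
        processes=[lambda img: [], lambda img: [(10, 10, 40, 40)]],
        feedback=lambda bs: len(bs) > 0,
    )
    print(boxes)
    ```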
  • Patent number: 11967164
    Abstract: Systems, methods and computer program products for detecting objects using a multi-detector are disclosed, according to various embodiments. In one aspect, a computer-implemented method includes defining analysis profiles, where each analysis profile: corresponds to one of a plurality of detectors, and comprises: a unique set of analysis parameters and/or a unique detection algorithm. The method further includes analyzing image data in accordance with the analysis profiles; selecting an optimum analysis result based on confidence scores associated with different analysis results; and detecting objects within the optimum analysis result. According to additional aspects, the analysis parameters may define different subregions of a digital image to be analyzed; a composite analysis result may be generated based on analysis of the different subregions by different detectors; and the optimum analysis result may be based on the composite analysis result.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: April 23, 2024
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen M. Thompson, Jan W. Amtrup
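    A sketch of selecting an optimum analysis result across detectors by confidence score, as the abstract of 11,967,164 describes at a high level. The profile fields (a subregion plus a detection callable) and the selection rule are assumptions for illustration.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    import numpy as np

    @dataclass
    class AnalysisProfile:
        name: str
        subregion: Tuple[slice, slice]                       # analysis parameters: subregion to analyze
        detect: Callable[[np.ndarray], Tuple[list, float]]   # detection algorithm -> (detections, confidence)

    def analyze(image: np.ndarray, profiles: List[AnalysisProfile]):
        results = []
        for p in profiles:
            region = image[p.subregion]                 # apply this profile's analysis parameters
            detections, confidence = p.detect(region)   # run this profile's detection algorithm
            results.append((p.name, detections, confidence))
        return max(results, key=lambda r: r[2])         # optimum result by confidence score

    img = np.zeros((100, 100))
    profiles = [
        AnalysisProfile("top half", (slice(0, 50), slice(None)), lambda r: ([], 0.4)),
        AnalysisProfile("bottom half", (slice(50, 100), slice(None)), lambda r: ([(5, 5, 20, 20)], 0.9)),
    ]
    print(analyze(img, profiles))
    ```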
  • Patent number: 11960570
    Abstract: A multi-level contrastive training strategy for training a neural network relies on image pairs (no other labels) to learn semantic correspondences at the image level and region or pixel level. The neural network is trained using contrasting image pairs including different objects and corresponding image pairs including different views of the same object. Conceptually, contrastive training pulls corresponding image pairs closer and pushes contrasting image pairs apart. An image-level contrastive loss is computed from the outputs (predictions) of the neural network and used to update parameters (weights) of the neural network via backpropagation. The neural network is also trained via pixel-level contrastive learning using only image pairs. Pixel-level contrastive learning receives an image pair, where each image includes an object in a particular category.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Taihong Xiao, Sifei Liu, Shalini De Mello, Zhiding Yu, Jan Kautz
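    A minimal PyTorch sketch of an image-level contrastive loss over embedding pairs: corresponding pairs (two views of the same object) are pulled together and contrasting pairs (different objects) pushed apart. The InfoNCE-style loss form and the temperature are assumptions, not the patented training strategy.

    ```python
    import torch
    import torch.nn.functional as F

    def image_level_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, temperature: float = 0.1):
        """emb_a[i] and emb_b[i] embed a corresponding image pair; other rows are contrasting pairs."""
        a = F.normalize(emb_a, dim=1)
        b = F.normalize(emb_b, dim=1)
        logits = a @ b.t() / temperature            # similarity of every a-row with every b-row
        targets = torch.arange(a.size(0))           # corresponding pairs sit on the diagonal
        return F.cross_entropy(logits, targets)     # pulls pairs together, pushes others apart

    emb_a, emb_b = torch.randn(8, 128), torch.randn(8, 128)
    print(image_level_contrastive_loss(emb_a, emb_b).item())
    ```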
  • Patent number: 11961245
    Abstract: Provided is a method for performing image guidance, including: acquiring a 3D magnetic resonance (MR) image of a target individual, wherein at least one region range of an object of interest is marked in the 3D MR image; acquiring a reference 3D image of the target individual, wherein the reference 3D image is a 3D-reconstructed computed tomography (CT) image; performing a 3D-3D registration on the 3D MR image and the reference 3D image, so as to mark each region range of the object of interest in the reference 3D image; and performing image guidance with the reference 3D image adopted to characterize an initial position state.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: April 16, 2024
    Assignee: Our United Corporation
    Inventors: Zhongya Wang, Daliang Li, Hao Yan, Jiuliang Li
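    A sketch of one step implied by the abstract above: once a 3D-3D registration has produced an affine transform (assumed given here), a region marked in the MR volume can be resampled into the reference CT frame. This uses scipy resampling for illustration only and does not represent any particular registration tool or the patented method.

    ```python
    import numpy as np
    from scipy.ndimage import affine_transform

    def propagate_region(mr_region_mask: np.ndarray, ct_shape, affine: np.ndarray, offset: np.ndarray):
        """Resample a binary MR region mask into CT voxel space (nearest-neighbour keeps it binary)."""
        warped = affine_transform(
            mr_region_mask.astype(float), matrix=affine, offset=offset,
            output_shape=ct_shape, order=0,
        )
        return warped > 0.5

    mr_mask = np.zeros((32, 32, 32))
    mr_mask[10:20, 10:20, 10:20] = 1          # region range marked in the MR image
    ct_mask = propagate_region(mr_mask, (32, 32, 32), np.eye(3), np.array([2.0, 0.0, 0.0]))
    print(int(ct_mask.sum()))
    ```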
  • Patent number: 11948398
    Abstract: A face recognition system, a face recognition method, and a storage medium that can perform face matching smoothly in a short time are provided. The face recognition system includes: a face detection unit that detects a face image from an image including an authentication subject as a detected face image; a storage unit that stores identification information identifying the authentication subject and a registered face image of the authentication subject in association with each other; and a face matching unit that, in response to acquisition of the identification information identifying the authentication subject, matches, against the registered face image corresponding to the acquired identification information, the detected face image detected by the face detection unit from an image captured before the acquisition.
    Type: Grant
    Filed: March 14, 2023
    Date of Patent: April 2, 2024
    Assignee: NEC CORPORATION
    Inventors: Noriaki Hayase, Hiroshi Tezuka
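    A sketch of the ordering described in the abstract of 11,948,398: faces are detected and cached from frames captured before the identification information is acquired, so matching can run immediately once the ID arrives. The detection and matching callables, the cache policy, and the class name are assumptions.

    ```python
    from collections import deque

    class FaceGate:
        def __init__(self, detect_face, match_faces, registry, cache_size=10):
            self.detect_face = detect_face         # frame -> detected face image (or None)
            self.match_faces = match_faces         # (detected, registered) -> bool
            self.registry = registry               # identification info -> registered face image
            self.cache = deque(maxlen=cache_size)  # most recent detected faces

        def on_frame(self, frame):
            face = self.detect_face(frame)         # runs continuously, before any ID is known
            if face is not None:
                self.cache.append(face)

        def on_id_acquired(self, subject_id):
            registered = self.registry.get(subject_id)
            if registered is None:
                return False
            # Match against faces already detected from earlier frames.
            return any(self.match_faces(f, registered) for f in self.cache)

    gate = FaceGate(
        detect_face=lambda frame: frame,           # toy: every frame "is" a face
        match_faces=lambda a, b: a == b,
        registry={"user42": "face-42"},
    )
    gate.on_frame("face-42")
    print(gate.on_id_acquired("user42"))           # True: matching used the pre-captured detection
    ```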
  • Patent number: 11948677
    Abstract: Systems and techniques that facilitate hybrid unsupervised and supervised image segmentation are provided. In various embodiments, a system can access a computed tomography (CT) image depicting an anatomical structure. In various aspects, the system can generate, via an unsupervised modeling technique, at least one class probability mask of the anatomical structure based on the CT image. In various instances, the system can generate, via a deep-learning model, an image segmentation based on the CT image and based on the at least one class probability mask.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: April 2, 2024
    Assignee: GE PRECISION HEALTHCARE LLC
    Inventors: Soumya Ghose, Jhimli Mitra, Peter M Edic, Prem Venugopal, Jed Douglas Pack
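    A sketch of the hybrid idea above: an unsupervised model produces per-class probability masks from the CT volume, and those masks are stacked with the CT as extra input channels for a deep-learning segmenter. The Gaussian mixture over voxel intensities and the channel stacking are illustrative assumptions, not the patented pipeline.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def class_probability_masks(ct: np.ndarray, n_classes: int = 3) -> np.ndarray:
        gmm = GaussianMixture(n_components=n_classes, random_state=0)
        gmm.fit(ct.reshape(-1, 1))                       # unsupervised fit on voxel intensities
        probs = gmm.predict_proba(ct.reshape(-1, 1))     # per-voxel class probabilities
        return probs.reshape(*ct.shape, n_classes)

    ct = np.random.default_rng(0).normal(size=(16, 16, 16))
    masks = class_probability_masks(ct)
    # Deep model input: the CT image plus the class probability masks as channels.
    model_input = np.concatenate([ct[..., None], masks], axis=-1)
    print(model_input.shape)  # (16, 16, 16, 4)
    ```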
  • Patent number: 11931128
    Abstract: Disclosed are methods and digital tools for deriving tooth condition information for a patient's teeth, for populating a digital dental chart with derived tooth condition information, and for generating an electronic data record containing such information.
    Type: Grant
    Filed: March 2, 2023
    Date of Patent: March 19, 2024
    Assignee: 3SHAPE A/S
    Inventors: Mike Van Der Poel, Rune Fisker, Karl-Josef Hollenbeck
  • Patent number: 11931149
    Abstract: Described herein is a method of using functional MRI (fMRI) for determining brain activation patterns for a group of subjects in response to different odor-elicited feelings (i.e., conscious emotions). A method for preparing a perfume by using the method of the present invention and a consumer product including said perfume are also described herein.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 19, 2024
    Assignee: FIRMENICH SA
    Inventors: Aline Pichon, Patrik Vuilleumier, Sylvain Delplanque, David Sander, Isabelle Cayeux, Christelle Porcherot, Maria Inés Velazco, Christian Margot
  • Patent number: 11935237
    Abstract: A method for interpreting an input image by a computing device operated by at least one processor is provided. The method comprises: storing an artificial intelligence (AI) model that is trained to classify a lesion detected in the input image as suspicious or non-suspicious and, under a condition of being suspicious, to classify the lesion as malignant or benign-hard, the latter representing that the lesion is suspicious but determined to be benign; receiving an analysis target image; by using the AI model, obtaining a classification class of a target lesion detected in the analysis target image and, when the classification class is suspicious, obtaining at least one of a probability of being suspicious, a probability of being benign-hard, and a probability of being malignant for the target lesion; and outputting an interpretation result including at least one probability obtained for the target lesion.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: March 19, 2024
    Assignee: LUNIT INC.
    Inventors: Hyo-Eun Kim, Hyeonseob Nam
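    A sketch of the two-level class structure in the abstract above: a lesion is first classified suspicious vs non-suspicious, and a suspicious lesion is further split into malignant vs benign-hard, with the associated probabilities reported. The probability composition and threshold shown here are assumptions for illustration only.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Interpretation:
        classification: str
        p_suspicious: Optional[float] = None
        p_malignant: Optional[float] = None
        p_benign_hard: Optional[float] = None

    def interpret(p_suspicious: float, p_malignant_given_suspicious: float,
                  threshold: float = 0.5) -> Interpretation:
        if p_suspicious < threshold:
            return Interpretation("non-suspicious")
        # Suspicious lesions are further split into malignant vs benign-hard.
        p_mal = p_suspicious * p_malignant_given_suspicious
        p_bh = p_suspicious * (1.0 - p_malignant_given_suspicious)
        label = "malignant" if p_mal >= p_bh else "benign-hard"
        return Interpretation(label, p_suspicious, p_mal, p_bh)

    print(interpret(0.8, 0.3))  # suspicious, but more likely benign-hard
    ```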
  • Patent number: 11925182
    Abstract: An apparatus for detecting surface characteristics on food objects conveyed by a conveyor has a first imaging device for capturing two-dimensional image data (2D) and a second imaging device for capturing three-dimensional image data (3D) of a food object. An image processing unit is configured to utilize either the 2D or 3D image data in determining whether a potential defect is present on the surface of the food object. The image processing unit determines a surface position of a potential defect, and utilizes the 2D or 3D image data in determining whether an actual defect is present on the surface of the food object at the indicated surface position. The apparatus has an output unit for outputting defect related data in case both of the 2D and the 3D image data indicate that an actual defect is present on the surface of the food object at the indicated surface position.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: March 12, 2024
    Assignee: MAREL SALMON A/S
    Inventor: Carsten Krog
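    A sketch of the 2D/3D agreement rule described above: a defect is only reported when both the 2D and the 3D image data indicate a defect at roughly the same surface position. The distance tolerance and the input format are assumptions.

    ```python
    from math import dist
    from typing import List, Tuple

    Point = Tuple[float, float]  # surface position on the food object

    def confirmed_defects(defects_2d: List[Point], defects_3d: List[Point],
                          tolerance_mm: float = 5.0) -> List[Point]:
        confirmed = []
        for p2 in defects_2d:
            # Keep the defect only if some 3D detection agrees at this surface position.
            if any(dist(p2, p3) <= tolerance_mm for p3 in defects_3d):
                confirmed.append(p2)
        return confirmed

    print(confirmed_defects([(10.0, 20.0), (50.0, 50.0)], [(11.0, 21.0)]))
    ```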
  • Patent number: 11922629
    Abstract: Various example embodiments are described in which an anisotropic encoder-decoder convolutional neural network architecture is employed to process multiparametric magnetic resonance images for the generation of cancer prediction maps. In some example embodiments, a simplified anisotropic encoder-decoder convolutional neural network architecture may include an encoder portion that is deeper than a decoder portion. In some example embodiments, simplified network architectures may be combined with test-time-augmentation in order to facilitate training and testing with a minimal number of test subjects.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: March 5, 2024
    Assignee: NOVA SCOTIA HEALTH AUTHORITY
    Inventors: Alessandro Guida, David Hoar, Peter Lee, Steve Patterson, Sharon Clarke, Chris Bowen
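    A minimal PyTorch sketch of an encoder-decoder CNN whose encoder is deeper than its decoder, in the spirit of the abstract above. The channel counts, the number of stages, and the use of plain 2D (rather than anisotropic 3D) convolutions are assumptions for brevity.

    ```python
    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

    class ShallowDecoderNet(nn.Module):
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            # Four encoder stages with downsampling ...
            self.encoder = nn.Sequential(
                block(in_ch, 16), nn.MaxPool2d(2),
                block(16, 32), nn.MaxPool2d(2),
                block(32, 64), nn.MaxPool2d(2),
                block(64, 128), nn.MaxPool2d(2),
            )
            # ... but only two decoder stages: the decoder is shallower than the encoder.
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False), block(128, 32),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False), block(32, 16),
                nn.Conv2d(16, n_classes, 1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    print(ShallowDecoderNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
    ```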
  • Patent number: 11911129
    Abstract: A trained deep learning network for determining a cardiac phase in magnetic resonance imaging is provided. In an embodiment, the trained deep learning network includes an input layer, an output layer, and a number of hidden layers between the input layer and the output layer, the layers processing input data entered into the input layer. In an embodiment, the deep learning network is designed and trained to output, from the entered input data, a probability or some other label of a certain cardiac phase at a certain time. A method for determining a cardiac phase in magnetic resonance imaging, a related device, a training method for the deep learning network, a control device, and a related magnetic resonance imaging system are also disclosed.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: February 27, 2024
    Assignee: Siemens Healthineers AG
    Inventors: Elisabeth Hoppe, Jens Wetzl, Seung Su Yoon
  • Patent number: 11915414
    Abstract: A discrimination unit discriminates a disease region included in a medical image. The discrimination unit has a first neural network, consisting of a plurality of processing layers, which discriminates a first medical feature in the medical image, and a second neural network, also consisting of a plurality of processing layers, which discriminates a second medical feature associated with the first medical feature in the medical image. A feature quantity output from a processing layer in the middle of the second neural network, or an output result of the second neural network, is input to the first neural network.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 27, 2024
    Assignee: FUJIFILM Corporation
    Inventor: Mizuki Takei
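    A PyTorch sketch of the coupling described above: an intermediate feature from a second network (discriminating an associated medical feature) is fed into a first network that discriminates the primary feature. The layer sizes and the concatenation point are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class SecondNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.early = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
            self.late = nn.Sequential(nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, x):
            mid = self.early(x)              # feature quantity from a middle processing layer
            return self.late(mid), mid

    class FirstNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.head = nn.Sequential(nn.Conv2d(1 + 8, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, image, second_feature):
            # The first network consumes the image together with the second network's feature.
            return self.head(torch.cat([image, second_feature], dim=1))

    img = torch.randn(1, 1, 64, 64)
    second, first = SecondNet(), FirstNet()
    out2, mid_feature = second(img)
    out1 = first(img, mid_feature)
    print(out1.shape, out2.shape)
    ```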
  • Patent number: 11908169
    Abstract: A method of compressing meshes using a projection-based approach, leveraging and expanding the tools and syntax generated for projection-based volumetric content compression, is described. The mesh is segmented into surface patches, with the difference that the segments follow the connectivity of the mesh. The dense mesh compression utilizes 3D surface patches to represent connected triangles on a mesh surface and groups of vertices to represent triangles not captured by surface projection. Each surface patch (or 3D patch) is projected to a 2D patch, whereby for the mesh, the triangle surface sampling is similar to a common rasterization approach. For each patch, position and connectivity of the projected vertices are kept. The sampled surface resembles a point cloud and is coded with the same approach used for point cloud compression. The list of vertices and connectivity per patch is encoded, and the data is sent with the coded point cloud data.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: February 20, 2024
    Assignee: Sony Group Corporation
    Inventor: Danillo Graziosi
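    A sketch of one ingredient of the abstract above: projecting a 3D surface patch to a 2D patch while keeping the per-patch vertex positions and connectivity. The projection rule here (drop the axis most aligned with a toy normal estimate) is an illustrative assumption, not the codec's actual projection.

    ```python
    import numpy as np

    def project_patch(vertices: np.ndarray, triangles: np.ndarray):
        """vertices: (N, 3) float positions, triangles: (M, 3) int indices into vertices."""
        # Toy normal estimate from the first triangle's edge cross product.
        v0, v1, v2 = vertices[triangles[0]]
        normal = np.cross(v1 - v0, v2 - v0)
        drop_axis = int(np.argmax(np.abs(normal)))   # axis the patch is "facing"
        keep = [a for a in range(3) if a != drop_axis]
        projected_2d = vertices[:, keep]             # 2D patch of projected vertex positions
        return projected_2d, triangles               # connectivity is kept per patch

    verts = np.array([[0, 0, 0], [1, 0, 0.1], [0, 1, 0.1], [1, 1, 0]], dtype=float)
    tris = np.array([[0, 1, 2], [1, 3, 2]])
    uv, conn = project_patch(verts, tris)
    print(uv.shape, conn.shape)  # (4, 2) (2, 3)
    ```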
  • Patent number: 11908140
    Abstract: Disclosed is a method and system for identifying a protein domain based on a protein three-dimensional structure image. According to the present application, the protein domain is identified based on structural similarity, which effectively avoids the identification errors and omissions caused by protein multi-sequence alignment errors when sequence consistency is not high. A point cloud segmentation model based on a dynamic graph convolutional neural network is constructed, and by integrating global structural features and local structural features, segmentation of the protein domain and acquisition of its semantic labels can be completed at the same time.
    Type: Grant
    Filed: August 2, 2023
    Date of Patent: February 20, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Jingsong Li, Jing Ma, Yu Wang
  • Patent number: 11887298
    Abstract: One embodiment provides an apparatus for fluorescence lifetime imaging (FLI). The apparatus includes a deep neural network (DNN). The DNN includes a first convolutional layer, a plurality of intermediate layers and an output layer. The first convolutional layer is configured to receive FLI input data. Each intermediate layer is configured to receive a respective intermediate input corresponding to an output of a respective prior layer. Each intermediate layer is further configured to provide a respective intermediate output related to the received respective intermediate input. The output layer is configured to provide estimated FLI output data corresponding to the received FLI input data. The DNN is trained using synthetic data.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: January 30, 2024
    Assignee: Rensselaer Polytechnic Institute
    Inventors: Jason Tyler Smith, Ruoyang Yao, Xavier Intes, Pingkun Yan, Marien Ochoa-Mendoza
  • Patent number: 11886975
    Abstract: Provided herein, in certain embodiments, are methods of generating models to predict prospective pathology scores of test subjects having a pathology. Related systems and computer program products are also provided.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: January 30, 2024
    Assignee: THE JOHNS HOPKINS UNIVERSITY
    Inventors: Yong Du, Kevin H. Leung, Martin Gilbert Pomper
  • Patent number: 11880962
    Abstract: Methods and systems for synthesizing contrast images from a quantitative acquisition are disclosed. An exemplary method includes performing a quantification scan, using a trained deep neural network to synthesize a contrast image from the quantification scan, and outputting the contrast image synthesized by the trained deep neural network. In another exemplary method, an operator can identify a target contrast type for the synthesized contrast image. A trained discriminator and classifier module determines whether the synthesized contrast image is of realistic image quality and whether the synthesized contrast image matches the target contrast type.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: January 23, 2024
    Assignees: GENERAL ELECTRIC COMPANY, THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
    Inventors: Suchandrima Banerjee, Enhao Gong, Greg Zaharchuk, John M. Pauly
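    A sketch of the acceptance check described in the abstract of 11,880,962: a synthesized contrast image is kept only if a discriminator finds it realistic and a classifier assigns it the operator-selected target contrast type. All three models are stand-in callables, and the threshold is an assumption.

    ```python
    def accept_synthesized(quant_scan, target_contrast, synthesize, discriminate, classify,
                           realism_threshold=0.5):
        image = synthesize(quant_scan, target_contrast)       # trained network synthesizes the contrast
        is_realistic = discriminate(image) >= realism_threshold
        matches_target = classify(image) == target_contrast   # e.g. "T1w", "T2w", "FLAIR"
        return image if (is_realistic and matches_target) else None

    result = accept_synthesized(
        quant_scan="quant-maps", target_contrast="T2w",
        synthesize=lambda scan, c: f"synthetic-{c}",
        discriminate=lambda img: 0.9,
        classify=lambda img: "T2w",
    )
    print(result)  # synthetic-T2w
    ```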
  • Patent number: 11875496
    Abstract: A method for dimension estimation based on duplication identification. In some embodiments, the method includes receiving a set of images of an object. The method then includes detecting, using a first machine learning system trained to perform image segmentation, a first image segmentation representing a damage of the object on a first image and a second image segmentation representing a damage of the object on a second image. The method further includes determining, using a second machine learning system trained to perform dimension estimation, a first dimension for the damage represented by the first image segmentation and a second dimension for the damage represented by the second image segmentation. The method includes determining whether the first and second image segmentations represent a same damage. If these image segmentations represent the same damage, the method intelligently combines the first and second dimensions to obtain a final dimension for the damage.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: January 16, 2024
    Assignee: Genpact Luxembourg S.à r.l. II
    Inventors: Abhilash Nvs, Ankit Sati, Payanshi Jain, Koundinya K. Nvss, Rajat Katiyar, Mohiuddin Khan, Chirag Jain, Sreekanth Menon
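    A sketch of the final step in the abstract above: decide whether two damage segmentations (here reduced to axis-aligned boxes assumed to be in a common reference frame) represent the same damage, and if so combine their estimated dimensions. The IoU test and the averaging rule are illustrative assumptions, not the patented combination logic.

    ```python
    def iou(a, b):
        """Intersection over union of two (x1, y1, x2, y2) boxes."""
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    def combine_dimensions(seg1, dim1, seg2, dim2, iou_threshold=0.5):
        if iou(seg1, seg2) >= iou_threshold:      # the two segmentations represent the same damage
            return (dim1 + dim2) / 2.0            # combine into a single final dimension
        return None                               # different damages: keep both estimates

    print(combine_dimensions((0, 0, 10, 10), 12.0, (1, 1, 11, 11), 14.0))  # 13.0
    ```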