Patents Examined by Andrew M Moyer
  • Patent number: 12379440
    Abstract: Systems and methods for reconstruction for a medical imaging system. An adapter is used to adapt scan data so that different quantities of repetitions or directions may be used to train and implement a single multichannel backbone network.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: August 5, 2025
    Assignee: Siemens Healthineers AG
    Inventors: Simon Arberet, Marcel Dominik Nickel, Thomas Benkert, Mahmoud Mostapha, Mariappan S. Nadar
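The adapter idea above — mapping a variable number of repetitions or directions onto a fixed backbone input — can be sketched as below. The group-averaging rule, the function name `adapt_channels`, and the channel counts are illustrative assumptions; the patent's adapter operates on scan data and is not specified here.

```python
def adapt_channels(channels, target):
    """Map a variable number of acquisition channels (repetitions or
    directions) onto the fixed channel count a single multichannel
    backbone expects, by averaging evenly split groups -- an
    illustrative stand-in for the adapter described in the abstract."""
    n = len(channels)
    groups = [channels[i * n // target:(i + 1) * n // target] for i in range(target)]
    return [sum(g) / len(g) for g in groups]

# Six repetitions adapted down to a 3-channel backbone input.
adapted = adapt_channels([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], target=3)
```

The same function handles any repetition count, which is the point of training one backbone for differing acquisitions.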
  • Patent number: 12361572
    Abstract: A system and method of automatic image view alignment for camera-based road condition detection on a vehicle. The method includes transforming a fisheye image into a non-distorted subject image, comparing the subject image with a reference image, aligning the subject image with the reference image, and analyzing the aligned subject image to detect and identify road conditions in real time as the vehicle is in operation. The subject image is aligned with the reference image by determining a distance (d) between predetermined feature points of the subject and reference images, estimating a pitch of a projection center based on the distance d, and generating an aligned subject image by applying a rectification transformation on the fisheye image by relocating a center of projection of the fisheye image by the pitch angle.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: July 15, 2025
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Qingrong Zhao, Farui Peng, Bakhtiar B. Litkouhi
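The alignment step — distance d between matched feature points, then a pitch estimate — can be sketched as follows. The pinhole small-angle model, the focal length, and the name `estimate_pitch` are assumptions for illustration, not the patented rectification:

```python
import math

def estimate_pitch(subject_pts, reference_pts, focal_length_px):
    """Estimate the pitch offset of the projection center from the mean
    vertical distance (d) between matched feature points, assuming a
    simple pinhole small-angle model."""
    d = sum(s[1] - r[1] for s, r in zip(subject_pts, reference_pts)) / len(subject_pts)
    pitch_rad = math.atan2(d, focal_length_px)
    return d, pitch_rad

# Matched feature points (x, y) in pixels: the subject image sits 10 px lower.
subject = [(100, 210), (300, 212), (500, 208)]
reference = [(100, 200), (300, 202), (500, 198)]
d, pitch = estimate_pitch(subject, reference, focal_length_px=800.0)
```

The resulting pitch would then parameterize the rectification transform applied to the fisheye image.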
  • Patent number: 12361531
    Abstract: There is provided a method of automated defects' classification, and a system thereof. The method comprises obtaining data informative of a set of defects' physical attributes usable to distinguish between defects of different classes among the plurality of classes; training a first machine learning model to generate, for the given defect, a multi-label output vector informative of values of the physical attributes, thereby generating for the given defect a multi-label descriptor; and using the trained first machine learning model to generate multi-label descriptors of the defects in the specimen. The method can further comprise obtaining data informative of multi-label data sets, each data set being uniquely indicative of a respective class of the plurality of classes and comprising a unique set of values of the physical attributes; and classifying defects in the specimen by matching respectively generated multi-label descriptors of the defects to the multi-label data sets.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: July 15, 2025
    Assignee: Applied Materials Israel Ltd.
    Inventors: Ohad Shaubi, Boaz Cohen, Kirill Savchenko, Ore Shtalrid
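The classification-by-matching step can be sketched as below: each class owns a unique set of physical-attribute values, and a generated multi-label descriptor is matched to the closest set. The Hamming-distance rule and the attribute/class names are illustrative assumptions:

```python
def classify_defect(descriptor, class_codebook):
    """Match a multi-label attribute descriptor to the class whose unique
    value set is closest (minimum Hamming distance)."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(class_codebook, key=lambda cls: hamming(descriptor, class_codebook[cls]))

# Each class is uniquely defined by a set of physical-attribute values.
codebook = {
    "particle": (1, 0, 1, 0),
    "scratch":  (0, 1, 0, 0),
    "residue":  (1, 0, 0, 1),
}
label = classify_defect((0, 1, 1, 0), codebook)
```

An exact match gives distance zero; a noisy descriptor still resolves to the nearest class set.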
  • Patent number: 12358416
    Abstract: A method for illuminating vehicle surroundings of a motor vehicle that comprises an illumination device and a detection device, wherein the illumination device is set up to illuminate at least part of a solid angle region of the vehicle surroundings with different illumination patterns, in particular with visible light, wherein the illumination patterns each predefine illumination intensities for different solid angle subregions of the solid angle region, comprising the steps of: illuminating the vehicle surroundings with a first of the illumination patterns by means of the illumination device, detecting a light pattern that results from the illumination of the vehicle surroundings with the first illumination pattern by means of the detection device, selecting a second of the illumination patterns on the basis of the detected light pattern, and illuminating the vehicle surroundings with the second illumination pattern by means of the illumination device.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: July 15, 2025
    Assignee: AUDI AG
    Inventors: Valentin Schmidt, Tilman Armbruster, Johannes Reim, Marcel Debelec
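The closed loop above — illuminate with a first pattern, detect the resulting light pattern, then select a second pattern from it — can be sketched as below. Dimming saturated subregions is a hypothetical selection rule; the abstract only requires that the choice depend on the detected light pattern:

```python
def select_next_pattern(first_pattern, detected_intensities, saturation=0.9):
    """Pick the next illumination pattern by dimming solid-angle subregions
    whose detected return saturates (illustrative rule)."""
    return [
        p * 0.5 if d >= saturation else p
        for p, d in zip(first_pattern, detected_intensities)
    ]

first = [1.0, 1.0, 1.0, 1.0]        # per-subregion illumination intensities
detected = [0.95, 0.4, 0.92, 0.3]   # measured returns; 0 and 2 saturate
second = select_next_pattern(first, detected)
```

Iterating this loop adapts the headlight pattern to reflective signs or oncoming traffic without a fixed schedule.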
  • Patent number: 12354411
    Abstract: A method and an apparatus that includes performing a material alteration detection process: sampling the document images based at least in part on account information indicated on the document images; performing image pre-processing on the sampled document images; determining a document type for each image of the sampled document images; for handwritten documents, analyzing the document image using a machine learning (ML) algorithm trained to detect material alterations on handwritten documents; for printed documents, analyzing the document image using a ML algorithm trained to detect material alterations on printed documents; and outputting a fraud probability representation for each analyzed document image; and a signature forgery detection process: obtaining past signatures corresponding to the document images; performing signature image pre-processing; authenticating each signature using the past signatures and a ML algorithm trained to match signatures; outputting a similarity measure; and adjusting the
    Type: Grant
    Filed: October 15, 2024
    Date of Patent: July 8, 2025
    Assignee: Morgan Stanley Services Group Inc.
    Inventors: Atul Mittal, Sonu Agarwal, Joe Manjiyil, Neeta Pande, Sandeep Ramesh
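The routing step — one model for handwritten documents, another for printed ones — can be sketched as below. The `models` dict and its scoring lambdas are hypothetical stand-ins for the trained ML detectors, and the feature names are invented for illustration:

```python
def detect_material_alteration(doc_image, doc_type, models):
    """Route a sampled document image to the ML model trained for its
    document type and return a fraud probability."""
    return models[doc_type](doc_image)

# Hypothetical stand-ins for the two trained alteration detectors.
models = {
    "handwritten": lambda img: 0.9 if img["stroke_breaks"] > 3 else 0.1,
    "printed":     lambda img: 0.8 if img["font_mismatch"] else 0.05,
}
p = detect_material_alteration({"stroke_breaks": 5}, "handwritten", models)
```

The separate signature-forgery branch would produce a similarity measure the same way, against the stored past signatures.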
  • Patent number: 12354268
    Abstract: Method, executed by a computer, for identifying a coronary sinus of a patient, comprising: receiving a 3D image of a body region of the patient; extracting 2D axial images of the 3D image taken along respective axial planes, 2D sagittal images of the 3D image taken along respective sagittal planes, and 2D coronal images of the 3D image taken along respective coronal planes; applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map; generating, based on the 2D probability maps, a 3D mask of the coronary sinus of the patient.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: July 8, 2025
    Assignee: XSPLINE S.P.A.
    Inventors: Mikhail Chmelevskii Petrovich, Aleksandr Sinitca, Anastasiia Fadeeva, Werner Rainer
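The final step — generating a 3D mask from the three per-plane probability maps — can be sketched as below. Voxel-wise averaging with a 0.5 threshold is an illustrative fusion rule; the abstract does not fix how the maps are combined, and the maps here are assumed already resampled to a common [z][y][x] grid:

```python
def fuse_probability_maps(maps, threshold=0.5):
    """Average voxel-wise probabilities from the axial, sagittal and
    coronal networks and threshold to obtain a binary 3D mask."""
    Z, Y, X = len(maps[0]), len(maps[0][0]), len(maps[0][0][0])
    return [[[1 if sum(m[z][y][x] for m in maps) / len(maps) >= threshold else 0
              for x in range(X)] for y in range(Y)] for z in range(Z)]

# One z-slice of 2x2 voxels from each of the three plane networks.
axial    = [[[0.9, 0.2], [0.8, 0.1]]]
sagittal = [[[0.7, 0.3], [0.9, 0.2]]]
coronal  = [[[0.8, 0.1], [0.7, 0.4]]]
mask = fuse_probability_maps([axial, sagittal, coronal])
```

Combining three orthogonal 2D views this way is cheaper than a full 3D network while still using context from every plane.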
  • Patent number: 12354351
    Abstract: There is provided a method of refining a configuration for analyzing video. The method includes deploying the configuration to at least one device positioned to capture video of a scene; receiving data from the at least one device; using the data to automatically refine the configuration; and deploying a refined configuration to the at least one device. There is also provided a method for automatically generating a configuration for analyzing video. The method includes deploying at least one device without an existing configuration; running at least one computer vision algorithm to detect vehicles and assign labels; receiving data from the at least one device; automatically generating a configuration; and deploying the configuration to the at least one device.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: July 8, 2025
    Assignee: Miovision Technologies Incorporated
    Inventors: Justin Alexander Eichel, Chu Qing Hu, Fatemeh Mohammadi, David Martin Swart
  • Patent number: 12347124
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Matheus Gadelha, Radomir Mech
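The density-guided sampling step can be sketched as below: pixels with higher density values (near depth discontinuities in the abstract's disparity-based scheme) are drawn more often, so the later tessellation places more vertices there. The weighted draw via `random.choices` is an illustrative stand-in for the patented sampler:

```python
import random

def sample_points(density, num_samples, seed=0):
    """Sample pixel coordinates with probability proportional to the
    per-pixel density values (derived from estimated disparity)."""
    rng = random.Random(seed)
    coords = [(x, y) for y, row in enumerate(density) for x, _ in enumerate(row)]
    weights = [density[y][x] for x, y in coords]
    return rng.choices(coords, weights=weights, k=num_samples)

density = [
    [0.1, 0.1, 0.1],
    [0.1, 5.0, 0.1],   # high density near a depth discontinuity
    [0.1, 0.1, 0.1],
]
pts = sample_points(density, num_samples=4)
```

The sampled points then seed the tessellation that becomes the editable 3D mesh.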
  • Patent number: 12300036
    Abstract: A deep neural network can provide output from a selected biometric analysis task that is one of a plurality of biometric analysis tasks based on an image. The selected biometric analysis task can be performed in a deep neural network that includes a common feature extraction neural network, a plurality of biometric task-specific neural networks, a plurality of segmentation mask neural networks and an expert pooling neural network that perform the plurality of biometric analysis tasks by inputting the image to the common feature extraction network to determine latent variables. The latent variables can be input to the plurality of biometric task-specific neural networks to determine a plurality of biometric analysis task outputs. The latent variables can be input to a segmentation neural network to determine a facial feature segmentation output. The facial feature segmentation output can be output to a plurality of segmentation mask neural networks.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: May 13, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Ali Hassani, Hafiz Malik, Rafi Ud Daula Refat, Zaid El Shair
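The shared-backbone design — one common feature extractor producing latent variables consumed by several task-specific heads — can be sketched as below. The toy extractor, the two task heads, and their thresholds are placeholders, not the networks in the patent:

```python
def shared_feature_extractor(image):
    """Stand-in for the common feature-extraction network: latent variables."""
    return [sum(image) / len(image), max(image) - min(image)]

# Hypothetical task-specific heads that consume the shared latent variables.
TASK_HEADS = {
    "face_id":    lambda z: "match" if z[0] > 0.5 else "no_match",
    "drowsiness": lambda z: "alert" if z[1] < 0.6 else "drowsy",
}

def run_biometric_tasks(image, selected_task):
    """Route one image through the shared backbone, then through the head
    for the selected biometric analysis task."""
    latent = shared_feature_extractor(image)
    return TASK_HEADS[selected_task](latent)

result = run_biometric_tasks([0.2, 0.9, 0.7, 0.8], "face_id")
```

Running every head over one set of latent variables is what lets a single image serve multiple biometric analysis tasks.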
  • Patent number: 12299033
    Abstract: The present disclosure describes techniques of generating videos. The techniques comprise acquiring a target video frame among a plurality of frames of a target video; acquiring at least one comment file corresponding to the target video frame, wherein the at least one comment file comprises a plurality of pieces of comment data; determining a mask file corresponding to the target video frame; determining a display coordinate of each of the plurality of pieces of comment data in the target video frame; determining whether each of the plurality of pieces of comment data is hidden or rendered into the target video frame based on the mask file and the display coordinate of each piece of comment data; and generating a new frame corresponding to the target frame, wherein the new frame comprises at least one subset of the plurality of pieces of comment data embedded in the target frame.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: May 13, 2025
    Assignee: SHANGHAI BILIBILI TECHNOLOGY CO., LTD.
    Inventors: Ran Tang, Yi Wang, Long Zheng, Jun He
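The hide-or-render decision can be sketched as below: a comment is kept only if its display coordinate falls outside the masked foreground region of the frame. The `mask[y][x] == 1` foreground convention and the data shapes are illustrative assumptions:

```python
def render_comments(mask, comments):
    """Keep only comments whose display coordinate falls outside the
    masked region; mask[y][x] == 1 marks pixels comments must not cover."""
    return [text for text, (x, y) in comments if mask[y][x] == 0]

# 1 marks a foreground subject in the middle of a 3x4 frame.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
comments = [("nice!", (0, 0)), ("wow", (1, 1)), ("lol", (3, 2))]
visible = render_comments(mask, comments)   # "wow" overlaps the mask
```

The surviving comments are then embedded into the new frame, so scrolling comments never cover the subject.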
  • Patent number: 12288422
    Abstract: A method of living body detection includes generating a detection interface in response to a receipt of a living body detection request for a verification of a detection object. The detection interface includes a first region with a viewing target for the detection object to track, a position of the first region in the detection interface changes during a detection time according to a first sequence of a position change of the first region. The method also includes receiving a video stream that is captured during the detection time, determining a second sequence of a sight line change of the detection object based on the video stream, and determining that the detection object is a living body at least partially based on the second sequence of the sight line change of the detection object matching the first sequence of the position change of the first region.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: April 29, 2025
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jia Meng, Xinyao Wang, Shouhong Ding, Jilin Li
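The core liveness check — does the sight-line sequence recovered from the video match the prompted position-change sequence — can be sketched as below. The step-wise equality test and the 0.8 match ratio are illustrative assumptions, not the patented matcher:

```python
def is_living_body(position_seq, gaze_seq, min_match=0.8):
    """Declare a living body when enough steps of the recovered sight-line
    sequence agree with the prompted region-position sequence."""
    matches = sum(p == g for p, g in zip(position_seq, gaze_seq))
    return matches / len(position_seq) >= min_match

positions = ["left", "up", "right", "down", "left"]   # prompted sequence
gaze      = ["left", "up", "right", "down", "right"]  # recovered from video
alive = is_living_body(positions, gaze)   # 4 of 5 steps agree
```

Because the position sequence is generated fresh per request, a replayed video cannot match it.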
  • Patent number: 12283121
    Abstract: The present specification describes a computer-implemented method. According to the method, user-specific tags are generated for a virtual reality (VR) object displayed within a VR environment. The user-specific tags are generated based on an interaction of a first user with the VR object. Role-based access rights are assigned to the user-specific tags. A role of a second user accessing the VR environment is determined and the user-specific tags are presented to the second user, alongside the VR object, based on a comparison of the role of the second user and the role-based access rights.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: April 22, 2025
    Assignee: International Business Machines Corporation
    Inventors: Vinod A. Valecha, Partho Ghosh, Saurabh Yadav, Amrita Maitra
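The presentation step — compare the second user's role against each tag's role-based access rights — reduces to a filter. The field names and role strings below are illustrative assumptions:

```python
def visible_tags(tags, viewer_role):
    """Present only the user-specific tags whose role-based access rights
    include the viewing user's role."""
    return [t["text"] for t in tags if viewer_role in t["allowed_roles"]]

tags = [
    {"text": "fragile joint", "allowed_roles": {"engineer", "admin"}},
    {"text": "client note",   "allowed_roles": {"sales"}},
]
shown = visible_tags(tags, "engineer")
```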
  • Patent number: 12272137
    Abstract: A target object detection method and apparatus are provided. The target object detection method and apparatus are applicable to fields such as artificial intelligence, object tracking, object detection, and image processing. An object is detected from a frame image of a video including a plurality of frame images based on a target template set including one or more target templates.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 8, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jingtao Xu, Yiwei Chen, Changbeom Park, Hyunjeong Lee, Byung In Yoo, Jaejoon Han, Qiang Wang, Jiaqian Yu
  • Patent number: 12266209
    Abstract: A system to generate an image classifier and test it nearly instantaneously is described herein. Image embeddings generated by an image fingerprinting model are indexed and an associated approximate nearest neighbors (ANN) model is generated. The embeddings in the index are clustered and the clusters are labeled. Users can provide just a few images to add to the index as a labeled cluster. The ANN model is trained to receive an image embedding as input and return a score and label of the most similar identified embedding. The label may be applied if the score exceeds a threshold value. The image classifier can be tested efficiently using Leave One Out Cross Validation (“LOOCV”) to provide near-instantaneous quality indications of the image classifier to the user. Near-instantaneous indications of outliers in the provided images can also be provided to the user using a distance to the centroid calculation.
    Type: Grant
    Filed: February 26, 2024
    Date of Patent: April 1, 2025
    Assignee: Netskope, Inc.
    Inventors: Jason B. Bryslawskyj, Yi Zhang, Emanoel Daryoush, Ari Azarafrooz, Wayne Xin, Yihua Liao, Niranjan Koduri
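The lookup-and-threshold flow and the LOOCV test can be sketched as below. A brute-force nearest-neighbor search stands in for the ANN model, and the distance-based score in (0, 1] is an illustrative choice:

```python
def nearest(embedding, index):
    """Brute-force stand-in for the ANN lookup: return (label, score) of
    the most similar indexed embedding."""
    def score(a, b):
        return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))
    label, emb = max(index, key=lambda item: score(embedding, item[1]))
    return label, score(embedding, emb)

def loocv_accuracy(index, threshold=0.5):
    """Leave One Out Cross Validation: classify each indexed embedding
    against all the others; count correct, above-threshold labels."""
    correct = 0
    for i, (label, emb) in enumerate(index):
        pred, s = nearest(emb, index[:i] + index[i + 1:])
        correct += (pred == label and s >= threshold)
    return correct / len(index)

index = [("cat", (0.0, 0.1)), ("cat", (0.1, 0.0)),
         ("dog", (1.0, 1.0)), ("dog", (0.9, 1.1))]
acc = loocv_accuracy(index)
```

LOOCV needs no held-out set, which is why the quality indication can be near-instantaneous after the user supplies only a few images.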
  • Patent number: 12258552
    Abstract: Apparatus, systems and methods for the adaptive passage of a culture of cells and apparatus and methods for dissociating cell colonies are described. The systems may include an imaging module, a pipette module, a handling module, and/or a stage module. Coordinated operation of the modules, optionally in an automated manner, is effected by at least one processor based on one or more characteristics of the culture of cells calculated from one or more images captured at more than one time point. A first apparatus for adaptive passage of a culture of cells includes an imaging module and at least one processor, which apparatus may be included in the systems or used in the methods. A second apparatus for dissociating cell colonies, which may also be included in the systems or used in the methods, includes impact bumper(s) collidable with impact bracket(s) to transmit a dissociative force to a culture of cells.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: March 25, 2025
    Assignee: STEMCELL Technologies Canada Inc.
    Inventors: Emanuel Nazareth, Eric Jervis, Martin O'Keane, Tia Sojonky, Mark Romanish
  • Patent number: 12236635
    Abstract: This application provides a digital person training method and system, and a digital person driving system. According to the method, human-body pose estimation data in training data is extracted, and the human-body pose estimation data is input into an optimized pose estimation network to obtain human-body pose optimization data. Generation losses of position optimization data and acceleration optimization data in the human-body pose optimization data are calculated based on a loss function of the optimized pose estimation network, so as to minimize errors between position estimation data and acceleration estimation data and a real value. In this way, the optimized pose estimation network is driven to update a network parameter to obtain an optimal driving model that is based on the optimized pose estimation network. The errors between the position estimation data and the acceleration estimation data and the real value are minimized.
    Type: Grant
    Filed: August 19, 2024
    Date of Patent: February 25, 2025
    Inventors: Huapeng Sima, Hao Jiang, Hongwei Fan, Qixun Qu, Jiabin Li, Jintai Luan
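The combined position-and-acceleration generation loss can be sketched as below. MSE terms, a second finite difference for acceleration, and the 0.5 weight are illustrative assumptions standing in for the patent's loss function:

```python
def pose_loss(pred_positions, true_positions):
    """Combined generation loss: MSE on joint positions plus MSE on the
    acceleration (second finite difference) of the trajectory."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    def accel(seq):
        return [seq[i + 1] - 2 * seq[i] + seq[i - 1] for i in range(1, len(seq) - 1)]
    pos_term = mse(pred_positions, true_positions)
    acc_term = mse(accel(pred_positions), accel(true_positions))
    return pos_term + 0.5 * acc_term

# A 1-D joint trajectory over 5 frames.
loss = pose_loss([0.0, 1.1, 2.0, 2.9, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0])
```

Penalizing acceleration error as well as position error is what suppresses jitter in the driven digital person.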
  • Patent number: 12223693
    Abstract: An object detection method includes: obtaining a video to be detected; preprocessing the video to be detected to obtain an image to be detected; inputting the image to be detected into an object detection network; extracting, by the object detection network, a feature map of the image to be detected; performing, by the object detection network, an object prediction on the extracted feature map to obtain a position of an object in the image to be detected and a confidence degree corresponding to the position; and generating a marked object video according to the position of the object in the image to be detected, the confidence degree corresponding to the position, and the video to be detected.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: February 11, 2025
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Chunshan Zu
  • Patent number: 12217724
    Abstract: A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels, which are being selected using a reference grayscale display function that is based on perceptual non-linearity of human vision adapted at different light levels to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
    Type: Grant
    Filed: December 18, 2023
    Date of Patent: February 4, 2025
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Jon Scott Miller, Scott Daly, Mahdi Nezamabadi, Robin Atkins
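The transcoding step reduces to a lookup through the code mapping, as sketched below. The mapping values are hypothetical; in the patent the mapping is derived from the reference grayscale display function and the device's own code-to-gray-level behavior:

```python
def transcode(reference_codes, code_mapping):
    """Transcode reference encoded image data into device-specific image
    data via the code mapping (a plain lookup table here)."""
    return [code_mapping[c] for c in reference_codes]

# Hypothetical 2-bit mapping: reference code value -> device-specific code value.
code_mapping = {0: 0, 1: 40, 2: 110, 3: 255}
device_codes = transcode([0, 2, 3, 1, 2], code_mapping)
```

Keeping the perceptually motivated reference encoding separate from device code values is what lets one encoded stream drive displays with very different gray-level behavior.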
  • Patent number: 12217503
    Abstract: The provided is a tidal flat extraction method based on hyperspectral data. The method includes: acquiring and preprocessing hyperspectral data; calculating a constraint condition; calculating, according to the constraint condition, a normalized difference tidal flat index (NDTFI), generating a sample index statistical box chart, and determining a tidal flat extraction threshold according to the sample index statistical box chart; and removing a misclassified pixel based on a coastal buffer zone and a high spatial resolution image, thereby achieving final extraction of a tidal flat. The method has the following beneficial effects. The method is an effective supplement to the existing tidal flat extraction methods. The method adopts an easily realized process, improves the accuracy of tidal flat extraction, reflects the real spatial distribution of the tidal flat, and provides a scientific basis for tidal flat management and protection.
    Type: Grant
    Filed: August 19, 2024
    Date of Patent: February 4, 2025
    Assignee: NINGBO UNIVERSITY
    Inventors: Gang Yang, Weiwei Sun, Chunchen Shao, Xiangchao Meng, Tian Feng
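The index-and-threshold flow can be sketched as below. The generic normalized-difference form and taking the lower quartile as the box-chart threshold are illustrative assumptions; the actual NDTFI band combination and threshold rule are defined in the patent:

```python
def normalized_difference(band_a, band_b):
    """Generic per-pixel normalized-difference index, (a - b) / (a + b)."""
    return [(a - b) / (a + b) for a, b in zip(band_a, band_b)]

def box_threshold(samples):
    """Derive an extraction threshold from sample statistics as a box
    chart would: here, the lower quartile."""
    s = sorted(samples)
    return s[len(s) // 4]

band_a = [0.60, 0.55, 0.20, 0.70, 0.40]
band_b = [0.20, 0.25, 0.60, 0.10, 0.40]
ndtfi = normalized_difference(band_a, band_b)
threshold = box_threshold(ndtfi)
tidal_flat = [v >= threshold for v in ndtfi]
```

Pixels passing the threshold would still be screened against the coastal buffer zone and a high-resolution image to drop misclassifications.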
  • Patent number: 12211243
    Abstract: A system and method of combining different sensor data for more reliable diagnosis information on a portable mobile device includes using a camera for imaging a region of interest of a subject to obtain an image signal, a microphone for capturing acoustic information from the subject, and one or more processors. The one or more processors can be configured to spectrally analyze the image signal, estimate a first vital-sign of the subject corresponding to a diagnosis using the image signal, analyze the acoustic information, estimate a second vital-sign of the subject corresponding to the diagnosis using the acoustic information, and combine the first vital-sign with the second vital-sign to provide a diagnostic with a higher confidence level.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: January 28, 2025
    Assignee: XGenesis
    Inventor: Todd Darling
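The fusion step — combining the camera-derived and microphone-derived estimates of the same vital sign — can be sketched as below. Confidence weighting and the agreement-bonus rule for the fused confidence are illustrative assumptions, not the patented combination:

```python
def fuse_vital_signs(camera_est, camera_conf, audio_est, audio_conf):
    """Combine two single-sensor estimates of one vital sign by confidence
    weighting; the fused confidence adds an agreement bonus to the larger
    single-sensor confidence (illustrative rule)."""
    w = camera_conf + audio_conf
    fused = (camera_est * camera_conf + audio_est * audio_conf) / w
    agreement = 1.0 - abs(camera_est - audio_est) / max(camera_est, audio_est)
    fused_conf = min(1.0, max(camera_conf, audio_conf) + 0.1 * agreement)
    return fused, fused_conf

# Respiration rate (breaths/min) from imaging vs. from breath sounds.
rate, conf = fuse_vital_signs(16.0, 0.6, 18.0, 0.4)
```

When the two modalities agree, the fused confidence exceeds either sensor alone, which is the stated goal of combining them.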