Patents Examined by Vaisali Rao Koppolu
  • Patent number: 12236712
    Abstract: A facial expression recognition method includes extracting a first feature from color information of pixels in a first image, and extracting a second feature of facial key points from the first image. The method further includes combining the first feature and the second feature, to obtain a fused feature, and determining, by processing circuitry of an electronic device, a first expression.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: February 25, 2025
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Yanbo Fan, Yong Zhang, Le Li, Baoyuan Wu, Zhifeng Li, Wei Liu
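The feature-fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: the specific features (per-channel color means, flattened key-point coordinates) and concatenation-based fusion are assumptions for the sketch.

```python
import numpy as np

def color_feature(image):
    """First feature: per-channel mean intensity of the pixels (a toy choice)."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def keypoint_feature(keypoints):
    """Second feature: flattened (x, y) facial key-point coordinates."""
    return np.asarray(keypoints, dtype=float).ravel()

def fuse(f1, f2):
    """Combine the two features into a fused feature by concatenation."""
    return np.concatenate([f1, f2])

image = np.zeros((4, 4, 3))      # toy RGB image
image[..., 0] = 1.0              # red channel set to 1
keypoints = [(1, 1), (2, 3)]     # two toy facial key points

fused = fuse(color_feature(image), keypoint_feature(keypoints))
```

An expression classifier would then operate on `fused`; the patent leaves the fusion and classification details to the claims.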
  • Patent number: 12236625
    Abstract: The present disclosure relates generally to image processing, and more particularly to techniques for structured illumination and reconstruction of three-dimensional (3D) images. Disclosed herein is a method to jointly learn structured illumination and reconstruction, parameterized by a diffractive optical element and a neural network in an end-to-end fashion. The disclosed approach has a differentiable image formation model for active stereo, relying on both wave and geometric optics, and a trinocular reconstruction network. The jointly optimized pattern, dubbed “Polka Lines,” together with the reconstruction network, enables accurate active-stereo depth estimates across imaging conditions. The disclosed method is validated in simulation and used with an experimental prototype, and several variants of the Polka Lines patterns specialized to the illumination conditions are demonstrated.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: February 25, 2025
    Assignee: The Trustees of Princeton University
    Inventors: Seung-Hwan Baek, Felix Heide
  • Patent number: 12229918
    Abstract: Provided are an electronic device for sharpening an image and an operation method thereof. A method, performed by the electronic device, of sharpening an image includes: obtaining an image generated by a camera of the electronic device; obtaining a first sharpening kernel for enhancing sharpness of the image, wherein the first sharpening kernel is data including a plurality of weights to be applied to pixels in the image, the data being of a lower resolution than the image; determining coordinates corresponding to some weights indicating representative values of the first sharpening kernel from among the plurality of weights in the first sharpening kernel; generating a second sharpening kernel by selecting the weights corresponding to the determined coordinates; and obtaining a sharpened image by applying the second sharpening kernel to the image.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: February 18, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pawel Kies, Radoslaw Chmielewski
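The kernel-reduction idea in this abstract (keeping only coordinates of representative weights from the first sharpening kernel) can be sketched as below. The notion of "representative values" is not specified in the abstract; this sketch assumes the k largest-magnitude weights.

```python
import numpy as np

def select_representative(kernel, k):
    """Build a second sharpening kernel that keeps only the k
    largest-magnitude weights of the first kernel (an assumed
    definition of 'representative values')."""
    flat = np.argsort(np.abs(kernel).ravel())[::-1][:k]
    coords = np.unravel_index(flat, kernel.shape)
    second = np.zeros_like(kernel)
    second[coords] = kernel[coords]
    return second

# classic 3x3 sharpen kernel standing in for the low-resolution first kernel
first = np.array([[ 0.0, -1.0,  0.0],
                  [-1.0,  5.0, -1.0],
                  [ 0.0, -1.0,  0.0]])
second = select_representative(first, k=3)
```

Applying `second` instead of `first` reduces the number of per-pixel multiply-accumulates while preserving the dominant sharpening weights.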
  • Patent number: 12223569
    Abstract: A computer-implemented method of an embodiment is for automatically estimating and/or correcting an error due to artifacts in a multi-energy computed tomography result dataset relating to at least one target material. In an embodiment, the method includes determining at least one first subregion of the imaging region, which is free from the target material and contains at least one, in particular exactly one, second material with known material-specific energy dependence of x-ray attenuation; for each first subregion, comparing the image values of the energy dataset for each voxel, taking into account the known energy dependence, to determine deviation values indicative of artifacts; and for at least a part of the at least one remaining second subregion of the imaging region, calculating estimated deviation values by interpolating and/or extrapolating from the determined deviation values in the first subregion, the estimated deviation values being used as estimated error due to artifacts.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: February 11, 2025
    Assignee: SIEMENS HEALTHINEERS AG
    Inventors: Bernhard Schmidt, Katharine Grant, Thomas Flohr
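The interpolation step of this method — deviation values measured only in the first (target-material-free) subregion, then interpolated/extrapolated into the remaining subregion — can be sketched on a 1-D voxel profile. The profile and deviation values are invented for illustration.

```python
import numpy as np

# 1-D toy profile through the imaging region: deviation values (artifact
# estimates) are determined only at first-subregion voxels, where a single
# material of known energy-dependent attenuation is present.
positions_known = np.array([0.0, 2.0, 5.0, 9.0])   # first-subregion voxels
deviation_known = np.array([0.1, 0.3, 0.2, 0.0])

positions_all = np.arange(10.0)                    # whole profile
deviation_est = np.interp(positions_all, positions_known, deviation_known)

def corrected(values):
    """Subtract the estimated artifact error from the measured image values."""
    return values - deviation_est
```

In the actual method the interpolation runs over 3-D voxel grids; `np.interp` here is a 1-D stand-in.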
  • Patent number: 12217505
    Abstract: Disclosed are an image-based indoor positioning service system and method. A service server includes a communication unit configured to receive a captured image of a node set in an indoor map, and a location estimation model generation unit configured to learn the captured image of the node received through the communication unit, segment the learned captured image to obtain objects, and selectively activate the objects in the learned image to generate a location estimation model.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: February 4, 2025
    Assignee: DABEEO, INC.
    Inventor: Ju Hum Park
  • Patent number: 12210586
    Abstract: A model may be trained on a training dataset, e.g., for medical image processing or medical signal processing tasks. Systems and computer-implemented methods are provided for associating a population descriptor with the trained model and using the population descriptor to determine whether records to which the model is to be applied conform to the population descriptor. The population descriptor characterizes a distribution of the one or more characteristic features over the training dataset, with the characteristic features characterizing the training record and/or a model output provided when the trained model is applied to the training record. For instance, the model may be applied only to records conforming to the population descriptor, or model outputs of applying the model to non-conforming records may be flagged as possibly untrustworthy.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: January 28, 2025
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Rolf Jurgen Weese, Hans-Aloys Wischmann
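A minimal sketch of the conformance check described above, assuming the population descriptor is simply a per-feature mean and standard deviation over the training set (the patent's descriptor may be richer).

```python
import numpy as np

def population_descriptor(train_features):
    """Characterize the training distribution by per-feature mean and
    standard deviation (a simple stand-in for the patent's descriptor)."""
    f = np.asarray(train_features, float)
    return f.mean(axis=0), f.std(axis=0)

def conforms(record, descriptor, k=3.0):
    """A record conforms if every feature lies within k standard
    deviations of the training mean."""
    mean, std = descriptor
    return bool(np.all(np.abs(np.asarray(record, float) - mean) <= k * std))

train = np.random.default_rng(1).normal(0.0, 1.0, size=(1000, 4))
desc = population_descriptor(train)
```

Records failing `conforms` would, per the abstract, either be rejected or have their model outputs flagged as possibly untrustworthy.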
  • Patent number: 12211225
    Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 28, 2025
    Assignee: ADOBE INC.
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
  • Patent number: 12205305
    Abstract: The present disclosure is related to systems and methods for noise reduction. The method includes obtaining a current frame including a plurality of first pixels. The method includes determining an interframe difference between each first pixel in the current frame and a corresponding pixel in a previous frame obtained prior to the current frame. The method includes generating a denoised frame by performing a first noise reduction operation on the current frame. The method includes determining an intraframe difference for each second pixel in the denoised frame. The method includes generating a target frame by performing a second noise reduction operation on the denoised frame.
    Type: Grant
    Filed: January 29, 2022
    Date of Patent: January 21, 2025
    Assignee: ZHEJIANG PIXFRA TECHNOLOGY CO., LTD.
    Inventors: Chenghan Ai, Keqiang Yu, Song Wang, Xiaomu Liu, Hainian Sun
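The two-stage pipeline in this abstract (interframe-difference-driven temporal denoising followed by intraframe spatial denoising) can be sketched as below. The threshold gating, blend weight, and box filter are illustrative choices, not the claimed operations.

```python
import numpy as np

def temporal_denoise(current, previous, thresh=10.0, alpha=0.5):
    """First noise reduction: blend with the previous frame only where
    the interframe difference is small (static content); large
    differences are treated as real motion and left untouched."""
    diff = np.abs(current - previous)
    blended = alpha * current + (1 - alpha) * previous
    return np.where(diff < thresh, blended, current)

def spatial_denoise(frame):
    """Second noise reduction: 3x3 local mean as a stand-in for the
    intraframe-difference-driven operation."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + frame.shape[0],
                          1 + dx:1 + dx + frame.shape[1]]
    return out / 9.0

prev = np.full((4, 4), 100.0)
curr = prev.copy()
curr[0, 0] = 104.0    # small flicker -> averaged toward previous frame
curr[2, 2] = 200.0    # real change   -> preserved by the temporal stage

denoised = temporal_denoise(curr, prev)
target = spatial_denoise(denoised)
```

The motion-aware gate is what keeps temporal averaging from smearing moving content, which is why the spatial pass runs second.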
  • Patent number: 12205284
    Abstract: A console includes a CPU that acquires an image to be processed, which is an object to be subjected to a support process (a diagnosis support process or an imaging support process), and selects the support process to be applied to the image from a plurality of support processes on the basis of a part of a subject included in the image to be processed.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 21, 2025
    Assignee: FUJIFILM CORPORATION
    Inventors: Koji Taninai, Hiromu Hayashi, Akihito Bettoyashiki, Takeyasu Kobayashi, Kazuhiro Makino
  • Patent number: 12175650
    Abstract: A method includes obtaining an image data set that depicts semiconductor components, and applying a hierarchical bricking to the image data set. In this case, the bricking includes a plurality of bricks on a plurality of hierarchical levels. The bricks on different hierarchical levels have different image element sizes of corresponding image elements.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: December 24, 2024
    Assignee: Carl Zeiss SMT GmbH
    Inventors: Jens Timo Neumann, Abhilash Srikantha, Christian Wojek, Thomas Korb
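The hierarchical bricking described above — the same image data set tiled at several hierarchical levels, with a different image-element (brick) size per level — can be sketched in a few lines. The brick sizes and divisibility assumption are illustrative.

```python
import numpy as np

def brick(image, size):
    """Split a 2-D image into non-overlapping size x size bricks
    (image dimensions assumed divisible by size for this sketch)."""
    h, w = image.shape
    return (image.reshape(h // size, size, w // size, size)
                 .swapaxes(1, 2)
                 .reshape(-1, size, size))

def hierarchical_bricking(image, sizes=(8, 4, 2)):
    """One set of bricks per hierarchical level; coarser levels use
    larger image-element sizes, as in the abstract."""
    return {s: brick(image, s) for s in sizes}

levels = hierarchical_bricking(np.zeros((8, 8)))
```

For real semiconductor-inspection volumes the bricking would be 3-D and out-of-core; this 2-D in-memory version only shows the level/brick-size relationship.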
  • Patent number: 12175771
    Abstract: A control system for a vehicle includes a camera mounted on the vehicle and configured to take an image of an occupant of the vehicle, and an anti-droplet protective equipment providing device mounted on the vehicle and configured to provide anti-droplet protective equipment to the occupant, a determination unit configured to determine whether the occupant is wearing the anti-droplet protective equipment based on the image of the occupant taken by the camera, and a provision control unit configured to provide the anti-droplet protective equipment to the occupant with the anti-droplet protective equipment providing device when the determination unit determines that the occupant is not wearing the anti-droplet protective equipment.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: December 24, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Ryota Tomizawa, Shozo Takaba, Ayako Shimizu, Hojung Jung, Daisuke Sato, Yasuhiro Kobatake
  • Patent number: 12169975
    Abstract: This specification relates to reconstructing three-dimensional (3D) scenes from two-dimensional (2D) images using a neural network. According to a first aspect of this specification, there is described a method for creating a three-dimensional reconstruction of a scene with multiple objects from a single two-dimensional image, the method comprising: receiving a single two-dimensional image; identifying all objects in the image to be reconstructed and identifying the type of said objects; estimating a three-dimensional representation of each identified object; estimating a three-dimensional plane physically supporting all three-dimensional objects; and positioning all three-dimensional objects in space relative to the supporting plane.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: December 17, 2024
    Assignee: SNAP INC.
    Inventors: Riza Alp Guler, Georgios Papandreou, Iason Kokkinos
  • Patent number: 12169916
    Abstract: An inpainting method includes obtaining an image including an object having a delicate shape and identifying a target region within the image, where the target region is adjacent to the object. The method also includes using a first mask to separate the image into a number of semantic categories and aggregating neighboring contexts for the target region based on the semantic categories. The method further includes restoring, based on the aggregated contexts, textures in the target region without affecting the delicate shape of the object. In addition, the method includes displaying a refined image including the restored textures in the target region and the object.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: December 17, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wenbo Li, Hongxia Jin
  • Patent number: 12165291
    Abstract: A method for the simultaneous imaging of different physical properties of an examined medium from multi-physics datasets and for digital enhancement and restoration of multiple multidimensional digital images is described. The method introduces nonnegative joint entropy determined as a joint weighted average of the logarithm of the corresponding density of the model parameters and/or images and/or their attributes. The joint entropy measures are introduced as additional constraints, and their minimization results in enforcing of the order and consistency between the different model parameters and/or multiple images and/or their transforms. The method does not require a priori knowledge about specific physical, analytical, empirical or statistical relationships between the different model parameters and/or multiple images and their attributes, nor does the method require a priori knowledge about specific geometric or structural relationships between different model parameters, images, and/or their attributes.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: December 10, 2024
    Assignee: TECHNOIMAGING, LLC
    Inventor: Michael S. Zhdanov
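The joint entropy measure central to this method can be estimated from a joint histogram of two co-registered parameter fields; minimizing it favors consistent (mutually ordered) models, exactly because dependent fields concentrate the joint distribution. The histogram-based estimator below is a standard sketch, not the patent's specific weighted formulation.

```python
import numpy as np

def joint_entropy(a, b, bins=8):
    """Nonnegative joint entropy of two co-registered fields, estimated
    from their joint histogram: H = -sum p * log(p)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
h_related = joint_entropy(x, 2 * x)                     # dependent fields
h_unrelated = joint_entropy(x, rng.normal(size=1000))   # independent fields
```

The dependent pair yields lower joint entropy than the independent pair, which is the property the method exploits as a constraint.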
  • Patent number: 12158925
    Abstract: A method for an online mapping system includes localizing a location of an ego vehicle relative to an offline feature map. The method also includes querying surrounding features of the ego vehicle based on the offline feature map. The method further includes generating a probabilistic map regarding the surrounding features of the ego vehicle queried from the offline feature map.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: December 3, 2024
    Inventors: Shunsho Kaku, Jeffrey Michael Walls, Ryan Wolcott
  • Patent number: 12159378
    Abstract: Disclosed is a high-contrast minimum variance imaging method based on deep learning. To address the poor contrast of traditional minimum variance imaging, a deep neural network is applied to suppress off-axis scattering signals in the channel data received by an ultrasonic transducer; combining the network with minimum variance beamforming yields an ultrasonic image with higher contrast while maintaining the resolution of the minimum variance method. Compared with traditional minimum variance imaging, after the apodization weights are calculated, the channel data is first processed by the deep neural network and then weighted and stacked to obtain the pixel value of each target imaging point, thereby forming a complete ultrasonic image.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: December 3, 2024
    Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
    Inventors: Junying Chen, Renxin Zhuang
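The minimum-variance weighting and weighted stacking can be sketched as follows. The diagonal loading, the all-ones steering vector (channels assumed pre-time-aligned), and the pass-through standing in for the trained network are all assumptions of this sketch.

```python
import numpy as np

def mv_weights(channel_data, eps=1e-3):
    """Minimum-variance apodization: w = R^-1 a / (a^H R^-1 a), with
    a = ones for pre-steered channels and eps as diagonal loading."""
    n = channel_data.shape[0]
    R = np.cov(channel_data) + eps * np.eye(n)
    a = np.ones(n)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a @ Ri_a)

def suppress_offaxis(channel_data):
    """Stand-in for the deep neural network that suppresses off-axis
    scattering in the channel data; here just a pass-through."""
    return channel_data

channels = np.random.default_rng(0).normal(size=(8, 64))  # 8 channels, 64 samples
w = mv_weights(suppress_offaxis(channels))
pixel = w @ channels                                      # weighted stacking
```

By construction the weights sum to one, so the beamformer passes the on-axis signal at unit gain while minimizing output variance.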
  • Patent number: 12136271
    Abstract: Aspects of the disclosure relate to controlling a vehicle. For instance, using a camera, a first camera image including a first object may be captured. A first bounding box for the first object and a distance to the first object may be identified. A second camera image including a second object may be captured. A second bounding box for the second object and a distance to the second object may be identified. Whether the first object is the second object may be determined using a plurality of models: a first model to compare visual similarity of the two bounding boxes, a second model to compare a three-dimensional location based on the distance to the first object and a three-dimensional location based on the distance to the second object, and a third model to compare results from the first and second models. The vehicle may be controlled in an autonomous driving mode based on a result of the third model.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: November 5, 2024
    Assignee: Waymo LLC
    Inventors: Ruichi Yu, Kang Li, Tao Han, Robert Cosgriff, Henrik Kretzschmar
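The three-model association scheme can be sketched with toy stand-ins: cosine similarity of appearance embeddings for the first model, a distance-decay score for the second, and a simple average for the third (the patent uses learned models throughout; every function here is an illustrative assumption).

```python
import numpy as np

def visual_similarity(emb1, emb2):
    """First model: cosine similarity of appearance embeddings of the
    two bounding-box crops (embeddings are a stand-in here)."""
    return float(emb1 @ emb2 / (np.linalg.norm(emb1) * np.linalg.norm(emb2)))

def location_similarity(p1, p2, scale=1.0):
    """Second model: similarity decaying with the distance between the
    two camera-derived 3-D object locations."""
    return float(np.exp(-np.linalg.norm(np.asarray(p1) - np.asarray(p2)) / scale))

def same_object(emb1, emb2, p1, p2, thresh=0.5):
    """Third model: combines the two scores; here a plain average,
    whereas the patent compares them with a learned model."""
    score = 0.5 * (visual_similarity(emb1, emb2) + location_similarity(p1, p2))
    return score > thresh

match = same_object(np.array([1.0, 0.0]), np.array([1.0, 0.1]),
                    (10.0, 2.0, 0.5), (10.1, 2.0, 0.5))
```

Similar appearance plus nearly identical 3-D position yields a match, which in the claimed system would feed the autonomous-driving control decision.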
  • Patent number: 12136200
    Abstract: To replace text in a digital video image sequence, a system will process frames of the sequence to: define a region of interest (ROI) with original text in each of the frames; use the ROIs to select a reference frame from the sequence; select a target frame from the sequence; determine a transform function between the ROI of the reference frame and the ROI of the target frame; replace the original text in the ROI of the reference frame with replacement text to yield a modified reference frame ROI; and use the transform function to transform the modified reference frame ROI to a modified target frame ROI in which the original text is replaced with the replacement text. The system will then insert the modified target frame ROI into the target frame to produce a modified target frame. This process may repeat for other target frames of the sequence.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: November 5, 2024
    Assignee: CareAR Holdings LLC
    Inventors: Vijay Kumar Baikampady Gopalkrishna, Raja Bala
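The transform-function step — mapping the reference-frame ROI onto the target-frame ROI so the edited region can be pasted back — can be sketched with a least-squares affine fit on ROI corner correspondences. The affine model and corner points are assumptions; the patent does not commit to a particular transform family.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping reference-frame ROI
    points to the corresponding target-frame ROI points."""
    src = np.asarray(src, float)
    A = np.hstack([src, np.ones((len(src), 1))])          # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return params                                         # 3x2 parameter matrix

def apply_affine(params, pts):
    """Transform points (e.g., pixels of the modified reference ROI)."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

ref_roi = [(0, 0), (100, 0), (100, 30), (0, 30)]   # ROI corners, reference frame
tgt_roi = [(5, 2), (105, 2), (105, 32), (5, 32)]   # same ROI, target frame (shifted)

T = fit_affine(ref_roi, tgt_roi)
moved = apply_affine(T, [(50, 15)])
```

In practice a projective (homography) transform would handle perspective changes between frames; the affine fit keeps the sketch short.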
  • Patent number: 12130237
    Abstract: A method for calibrating a camera of a mobile device for detecting an analyte in a sample. An image of an object is captured using the camera with an illumination source turned on. A first area is determined in the image which is affected by direct reflection of light originating from the illumination source and reflected by the object. A second area which does not substantially overlap with the first area is determined as a target area of a test strip. Also disclosed is a detection method in which a sample is applied to a test strip and a visual indication is provided to position the test strip relative to the camera to thereby locate the test field of the strip in the target area. An image of the test field is captured using the camera while the illumination source is turned on, and analyte concentration is determined from the image.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: October 29, 2024
    Assignee: Roche Diabetes Care, Inc.
    Inventors: Timo Klein, Max Berg
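The two-area logic — find the direct-reflection (glare) area, then place the target area so it does not overlap it — can be sketched with a brightness threshold and a window scan. The threshold and window size are invented for illustration.

```python
import numpy as np

def direct_reflection_mask(image, thresh=240):
    """First area: pixels saturated by direct reflection of the
    illumination source (simple brightness threshold as a sketch)."""
    return image >= thresh

def pick_target_area(image, size=2, thresh=240):
    """Second area: scan for a size x size window that does not overlap
    the reflection mask; returns its top-left corner, or None."""
    mask = direct_reflection_mask(image, thresh)
    h, w = image.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            if not mask[y:y + size, x:x + size].any():
                return (y, x)
    return None

img = np.full((4, 4), 100)
img[0, :2] = 255                 # glare spot from the illumination source
corner = pick_target_area(img)
```

The app would then guide the user to place the test field at `corner`, keeping the analyte measurement clear of specular glare.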
  • Patent number: 12106606
    Abstract: The embodiments of the present disclosure disclose a method for determining the direction of gaze. A specific implementation of the method includes: obtaining a face or eye image of a target subject, and establishing a feature extraction network; using an adversarial training method to optimize the feature extraction network, implicitly removing the gaze-irrelevant features extracted by the network, so that it extracts gaze-related features from the face or eye image; and determining the target gaze direction based on the gaze-related features. This implementation separates the gaze-irrelevant features contained in the image features from the gaze-related features, so that the image features contain only the gaze-related features, further improving the accuracy and stability of the determined direction of gaze.
    Type: Grant
    Filed: December 24, 2021
    Date of Patent: October 1, 2024
    Assignee: Beihang University
    Inventors: Feng Lu, Yihua Cheng