Patents Examined by Chan S. Park
  • Patent number: 11978245
    Abstract: The present disclosure discloses a method and apparatus for generating an image. A specific embodiment of the method comprises: acquiring at least two frames of facial images extracted from a target video; and inputting the at least two frames of facial images into a pre-trained generative model to generate a single facial image. The generative model updates a model parameter using a loss function in a training process, and the loss function is determined based on a probability of the generated single facial image being a real facial image and a similarity between the generated single facial image and a standard facial image. According to this embodiment, the authenticity of the single facial image generated by the generative model may be enhanced, thereby improving the quality of facial images obtained from the video.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: May 7, 2024
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Tao He, Gang Zhang, Jingtuo Liu
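The two-term loss described in the abstract above (an adversarial term from the probability of the generated face being real, plus a similarity term against a standard face) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the function name, the log/MSE choices, and the weights are assumptions:

```python
import math

def generator_loss(p_real, generated, standard, w_adv=1.0, w_sim=1.0):
    """Illustrative combined loss for the generative model.

    p_real    -- discriminator's probability that the generated face is real
    generated -- generated facial image as a flat list of pixel values
    standard  -- standard (reference) facial image, same shape
    """
    # Adversarial term: small when the discriminator believes the image is real.
    adv = -math.log(max(p_real, 1e-12))
    # Similarity term: mean squared error to the standard facial image.
    sim = sum((g - s) ** 2 for g, s in zip(generated, standard)) / len(generated)
    return w_adv * adv + w_sim * sim
```

Minimizing this loss pushes the generator toward images the discriminator accepts as real while keeping them close to the standard facial image.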
  • Patent number: 11977607
    Abstract: Disclosed are a CAM-based weakly supervised object localization device and method. The device includes: a feature map extractor configured to extract a feature map of a last convolutional layer in a convolutional neural network (CNN) in a process of applying an image to the CNN; a weight vector binarization unit configured to first binarize a weight vector of a linear layer in a process of sequentially applying the feature map to a pooling layer that generates a feature vector and the linear layer that generates a class label; a feature map binarization unit configured to second binarize the feature map based on the first binarized weight vector; and a class activation map generation unit configured to generate a class activation map for object localization based on the second binarized feature map.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: May 7, 2024
    Assignee: UIF (UNIVERSITY INDUSTRY FOUNDATION), YONSEI UNIVERSITY
    Inventors: Hye Ran Byun, Sanghuk Lee, Cheolhyun Mun, Pilhyeon Lee, Jewook Lee
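The two-stage binarization in the abstract above (first the linear-layer weight vector, then the feature maps selected by it) can be sketched in a few lines. This is a minimal sketch under assumed thresholds; the function name and the normalize-then-threshold step are illustrative choices, not the patent's exact procedure:

```python
import numpy as np

def binarized_cam(feature_maps, weights, w_thresh=0.0, f_thresh=0.5):
    """Sketch of a CAM built from binarized weights and feature maps.

    feature_maps -- array of shape (C, H, W): last conv-layer feature maps
    weights      -- array of shape (C,): linear-layer weights for one class
    """
    # 1) First binarization: keep only channels whose weight exceeds a threshold.
    w_bin = (weights > w_thresh).astype(feature_maps.dtype)
    # 2) Second binarization: normalize each channel to [0, 1], then threshold.
    norm = feature_maps / (feature_maps.max(axis=(1, 2), keepdims=True) + 1e-12)
    f_bin = (norm > f_thresh).astype(feature_maps.dtype)
    # 3) Class activation map: sum the binarized maps of the selected channels.
    return (w_bin[:, None, None] * f_bin).sum(axis=0)
```

The resulting map highlights spatial locations supported by positively weighted channels, which can then be thresholded into a localization box.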
  • Patent number: 11964762
    Abstract: Subject matter regards generating a 3D point cloud and registering the 3D point cloud to the surface of the Earth (sometimes called “geo-locating”). A method can include capturing, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of the object, registering the overlapping subsections to each other, and geo-locating the registered overlapping subsections.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 23, 2024
    Assignee: Raytheon Company
    Inventors: Torsten A. Staab, Steven B. Seida, Jody D. Verret, Richard W. Ely, Stephen J. Raif
  • Patent number: 11968372
    Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications is provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depth as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge-adaptive interpolation, to reconstruct pixel arrays in stages (e.g., columns then rows) from reference elemental images.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: April 23, 2024
    Assignee: Avalon Holographics Inc.
    Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer
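The layered decomposition in the abstract above divides the scene into depth layers whose thickness grows with distance from the display plane. A minimal sketch of such a partition, assuming a simple geometric growth factor (a hypothetical knob; the patent's plenoptic sampling scheme determines the real boundaries):

```python
def layer_boundaries(near, far, n_layers, growth=2.0):
    """Partition the depth range [near, far] into n_layers layers whose
    thickness grows geometrically with distance from the display plane.
    Returns the n_layers + 1 boundary depths.
    """
    # Thicknesses form a geometric series: t, t*growth, t*growth^2, ...
    total = sum(growth ** i for i in range(n_layers))
    unit = (far - near) / total
    bounds, depth = [near], near
    for i in range(n_layers):
        depth += unit * growth ** i
        bounds.append(depth)
    return bounds
```

For example, splitting a depth range of 0 to 7 into three layers with growth 2 yields layers of thickness 1, 2, and 4: thin layers near the display, thick layers far from it.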
  • Patent number: 11948303
    Abstract: A method and apparatus for objective assessment of images captured from a human gastrointestinal (GI) tract are disclosed. According to this method, one or more images captured using an endoscope while the endoscope is inside the human GI tract are received, and the images are checked for specific target objects. When one or more specific target objects are detected in the images, the areas of the specific target objects in the images are determined, and an objective assessment score is derived based on those areas across a substantial number of the images. The step of detecting the specific target objects is performed using an artificial intelligence process.
    Type: Grant
    Filed: June 19, 2021
    Date of Patent: April 2, 2024
    Assignee: CapsoVision Inc.
    Inventors: Kang-Huai Wang, Chenyu Wu, Gordon C. Wilson
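Once per-image target-object areas are available, deriving a score from them can be as simple as an area-fraction average. The sketch below is purely illustrative; the patent does not specify the scoring formula, and the function name and mean-fraction choice are assumptions:

```python
def assessment_score(frame_areas, frame_size, min_frames=1):
    """Illustrative objective assessment score from per-image areas.

    frame_areas -- list of total detected target-object area (pixels) per image
    frame_size  -- total pixels per image
    min_frames  -- minimum number of images required for a meaningful score
    """
    if len(frame_areas) < min_frames:
        raise ValueError("not enough images for a reliable score")
    # Score: mean fraction of each image covered by the target objects.
    return sum(a / frame_size for a in frame_areas) / len(frame_areas)
```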
  • Patent number: 11934488
    Abstract: The present disclosure provides a method and system for constructing a digital rock, and relates to the technical field of digital rocks. According to the method, a three-dimensional (3D) digital rock image that can reflect real rock information is obtained using an image scanning technology, and the image is preprocessed to obtain a digital rock training image for training a generative adversarial network (GAN). The trained GAN is stored to obtain a digital rock construction model. The stored digital rock construction model can be directly used to quickly construct a target digital rock image. This not only greatly reduces computational costs, but also reduces costs and time consumption for obtaining high-resolution sample images. In addition, the constructed target digital rock image can also reflect real rock information.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: March 19, 2024
    Assignee: China University of Petroleum (East China)
    Inventors: Yongfei Yang, Fugui Liu, Jun Yao, Huaisen Song, Kai Zhang, Lei Zhang, Hai Sun, Wenhui Song, Yuanbo Wang, Bozhao Xu
  • Patent number: 11908233
    Abstract: A system, method, and apparatus for generating a normalization of a single two-dimensional image of an unconstrained human face. The system receives the single two-dimensional image of the unconstrained human face, generates an undistorted face based on the unconstrained human face by removing perspective distortion from the unconstrained human face via a perspective undistortion network, generates an evenly lit face based on the undistorted face by normalizing lighting of the undistorted face via a lighting translation network, and generates a frontalized and neutralized expression face based on the evenly lit face via an expression neutralization network.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: February 20, 2024
    Assignee: Pinscreen, Inc.
    Inventors: Koki Nagano, Huiwen Luo, Zejian Wang, Jaewoo Seo, Liwen Hu, Lingyu Wei, Hao Li
  • Patent number: 11908100
    Abstract: Artefacts in a sequence of image frames may be reduced or eliminated through modification of an input image frame to match another image frame in the sequence, such as by geometrically warping the input frame to generate a corrected image frame whose field of view matches that of another frame in the sequence. The warping may be performed based on a model generated from data regarding the multi-sensor device. The disparity between image frames may be modeled based on image captures from the first and second image sensors for scenes at varying depths. The model may be used to predict disparity values for captured images, and those predicted disparity values may be used to reduce artefacts resulting from image sensor switching. The predicted disparity values may be used in image conditions that produce erroneous actual disparity values.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: February 20, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Shizhong Liu, Jincheng Huang, Weiliang Liu, Jianfeng Ren, Sahul Madanayakanahalli Phaniraj Venkatesh
  • Patent number: 11908194
    Abstract: A modular tracking system is described, comprising a network of independent tracking units optionally accompanied by a LIDAR scanner and/or one or more elevated cameras. Tracking units combine panoramic and zoomed cameras to imitate the working principle of the human eye. Markerless computer vision algorithms are executed directly on the units and provide feedback to a motorized mirror placed in front of the zoomed camera to keep tracked objects/people in its field of view. Microphones are used to detect and localize sound events. Inferences from the different sensors are fused in real time to reconstruct high-level events and a full skeleton representation for each participant.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: February 20, 2024
    Assignee: New York University
    Inventors: Yurii S. Piadyk, Carlos Augusto Dietrich, Claudio T Silva
  • Patent number: 11891740
    Abstract: A laundry appliance includes a basket rotatably mounted within a cabinet and defining a chamber configured for receiving a load of clothes, a water supply valve for regulating a flow of water into the chamber, and a camera assembly mounted within the cabinet in view of the chamber. A controller is configured to determine that the water supply valve is open to permit the flow of water into the chamber, identify an anticipated fog condition within the chamber based at least in part on the water supply valve being open, obtain one or more images of the chamber using the camera assembly, analyze the one or more images of the chamber to determine an actual fog condition in the chamber, and implement a responsive action, e.g., providing a user notification, if the actual fog condition is different than the anticipated fog condition.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: February 6, 2024
    Assignee: Haier US Appliance Solutions, Inc.
    Inventors: Je Kwon Yoon, Seung-Yeong Park, JaeHyo Lee
  • Patent number: 11887290
    Abstract: An electronic component evaluation method of evaluating a state of an electronic component includes acquiring, with respect to at least one terminal, reference point information including at least one of position information and first height information of a plurality of corresponding reference points on the terminal, the information being acquired from imaging data obtained by image-capturing the electronic component, which includes a component body and a plurality of terminals attached to the component body, and determining a state according to a shape of the electronic component based on a plurality of pieces of the reference point information.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: January 30, 2024
    Inventors: Daichi Gemba, Junji Morita
  • Patent number: 11881035
    Abstract: A calibration device for calibrating detection processing of a line of sight of a user can perform first calibration processing, in which an instruction is given to look at each of a plurality of positions by moving the face, and second calibration processing, in which an instruction is given to look at each of the plurality of positions without moving the face. The calibration device executes both the first calibration processing and the second calibration processing when it is determined that an eye of the user can be detected, and executes the first calibration processing while ceasing to execute the second calibration processing when it is determined that an eye of the user cannot be detected.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: January 23, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Yuji Yasui
  • Patent number: 11880988
    Abstract: The present disclosure relates to an image registration method and a model training method thereof. The image registration method comprises obtaining a reference image and a floating image to be registered, performing image preprocessing on the reference image and the floating image, performing non-rigid registration on the preprocessed reference image and floating image to obtain a registration result image, and outputting the registration result image. The image preprocessing comprises performing, on the reference image and the floating image, coarse-to-fine rigid registration based on iterative closest point registration and mutual information registration. The non-rigid registration uses a combination of a correlation coefficient and a mean squared error between the reference image and the registration result image as a loss function.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: January 23, 2024
    Assignee: GE Precision Healthcare LLC
    Inventors: Chen Zhang, Zhoushe Zhao, Yingbin Nie
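The non-rigid registration loss in the abstract above combines a correlation coefficient with a mean squared error between the reference image and the registration result. A minimal sketch of such a loss, assuming a Pearson correlation and a hypothetical `alpha` weighting (the patent does not state how the two terms are balanced):

```python
import numpy as np

def registration_loss(reference, result, alpha=0.5):
    """Illustrative loss combining correlation and MSE for registration."""
    ref = reference.ravel().astype(float)
    res = result.ravel().astype(float)
    # Pearson correlation coefficient: 1.0 for a perfectly registered result.
    cc = np.corrcoef(ref, res)[0, 1]
    mse = np.mean((ref - res) ** 2)
    # Minimizing the loss maximizes correlation and minimizes squared error.
    return alpha * (1.0 - cc) + (1.0 - alpha) * mse
```

Note that the correlation term is invariant to global intensity shifts (useful across modalities or exposures), while the MSE term penalizes them, so combining the two trades robustness against exactness.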
  • Patent number: 11880979
    Abstract: A method with video segmentation may include: acquiring, over time, a video sequence including a plurality of image frames, the plurality of image frames including a second image frame corresponding to a time t of the video sequence and a first image frame corresponding to a time t−1 before the time t; extracting a second feature vector from the second image frame; generating second hidden state information corresponding to the second image frame, based on first hidden state information corresponding to the first image frame and second fusion information in which the second feature vector is fused with information related to the second image frame stored in a memory; generating a second segmentation mask corresponding to the second image frame, based on an output vector corresponding to the second hidden state information; and outputting the second segmentation mask.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: January 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungin Park, Seokhwan Jang
  • Patent number: 11875521
    Abstract: A method for self-supervised depth and ego-motion estimation is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes generating a self-occlusion mask by manually segmenting self-occluded areas of images captured by the multi-camera rig of the ego vehicle. The method further includes multiplying the multi-camera photometric loss with the self-occlusion mask to form a self-occlusion masked photometric loss. The method also includes training a depth estimation model and an ego-motion estimation model according to the self-occlusion masked photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the depth estimation model and the ego-motion estimation model.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: January 16, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Rares Andrei Ambrus, Adrien David Gaidon, Igor Vasiljevic, Gregory Shakhnarovich
  • Patent number: 11872016
    Abstract: A computing system for wound tracking is disclosed herein. A server computing device receives a first image of a wound of a patient captured by a first camera. Subsequently, the server computing device receives a message generated by a computing device, the message indicating that a second camera of the computing device is to capture a second image of the wound. Responsive to receiving the message, the server computing device causes data to be transmitted to the computing device, the data based in part upon the first image. The data causes the computing device to present a semi-transparent overlay to a view of the wound on a display as perceived through a lens of the second camera, the semi-transparent overlay indicative of the first image. The computing device captures the second image via the second camera and causes the second image to be received by the server computing device.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: January 16, 2024
    Assignee: Allscripts Software, LLC
    Inventors: Joshua Brown, David Windell
  • Patent number: 11830162
    Abstract: An image processing apparatus includes a low-resolution image generating circuit configured to generate a low-resolution image including a second pixel corresponding to first pixels based on an input image including the first pixels, and an edge preserving smoothing circuit configured to generate a reliability of the second pixel based on characteristics of values of the first pixels and perform edge preserving smoothing on the input image using a value of the second pixel of which a reflection ratio is adjusted, based on the reliability of the second pixel.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: November 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyoungseok Ko, Sol Namkung, Ildo Kim
  • Patent number: 11822600
    Abstract: Systems, methods, devices, media, and computer readable instructions are described for local image tagging in a resource constrained environment. One embodiment involves processing image data using a deep convolutional neural network (DCNN) comprising at least a first subgraph and a second subgraph, the first subgraph comprising at least a first layer and a second layer; processing the image data using at least the first layer of the first subgraph to generate first intermediate output data; processing, by the mobile device, the first intermediate output data using at least the second layer of the first subgraph to generate first subgraph output data; and, in response to a determination that each layer reliant on the first intermediate output data has completed processing, deleting the first intermediate output data from the mobile device. Additional embodiments involve convolving entire pixel resolutions of the image data against kernels in different layers of the DCNN.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 21, 2023
    Assignee: Snap Inc.
    Inventors: Xiaoyu Wang, Ning Xu, Ning Zhang, Vitor R. Carvalho, Jia Li
  • Patent number: 11823357
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
  • Patent number: 11810329
    Abstract: Methods and systems for determining a surface color of a target surface under an environment with an environmental light source. A plurality of images of the target surface are captured as the target surface is illuminated with a variable intensity, constant color light source and a constant intensity, constant color environmental light source, wherein the intensity of the light source on the target surface is varied by a known amount between the capturing of the images. A color feature tensor, independent of the environmental light source, is extracted from the image data, and used to infer a surface color of the target surface.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: November 7, 2023
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yuanhao Yu, Shuhao Li, Juwei Lu, Jin Tang
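The key observation behind the abstract above is that if the source intensity changes by a known amount while the environmental light stays constant, differencing two captures cancels the environmental term and isolates the surface's response to the controlled source. A minimal sketch of that cancellation under a linear imaging model (observed = surface_color × source_intensity + environment); the patent's actual method extracts a learned color feature tensor, and the function name is hypothetical:

```python
def surface_color_feature(img_low, img_high, intensity_delta):
    """Cancel constant environmental light by differencing two captures.

    img_low, img_high -- per-pixel RGB tuples under low/high source intensity
    intensity_delta   -- known source-intensity increase between the captures
    """
    # With observed = color * intensity + environment, the difference of the
    # two captures removes the environment term, leaving color * delta.
    return [
        tuple((h - l) / intensity_delta for l, h in zip(p_lo, p_hi))
        for p_lo, p_hi in zip(img_low, img_high)
    ]
```

For a pixel with true surface color (0.1, 0.2, 0.3) and a constant environmental offset, the difference of the two captures divided by the known intensity change recovers the surface color regardless of the offset.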