Patents Examined by Utpal D Shah
  • Patent number: 11120249
    Abstract: The present disclosure provides a computer-aided cell segmentation method for determining the cellular Nuclear-to-Cytoplasmic ratio, which comprises acts of obtaining a cytological image using a non-invasive in vivo biopsy technique; performing a nuclei segmentation process to identify a position and a contour of each of the identified nuclei in the cytological image; performing a cytoplasmic segmentation process with an improved active contour model to obtain a cytoplasmic region for each identified nucleus; and determining a cellular Nuclear-to-Cytoplasmic ratio based on the obtained nuclear and cytoplasmic regions.
    Type: Grant
    Filed: December 22, 2019
    Date of Patent: September 14, 2021
    Assignee: National Cheng Kung University
    Inventors: Gwo Giun Lee, Yi-Hsuan Chou, Chen-Han Sung
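The final step of the abstract, computing the ratio from the segmented regions, can be sketched as follows (the `nc_ratio` helper and boolean-mask inputs are illustrative assumptions, not the patent's actual algorithm):

```python
import numpy as np

def nc_ratio(nucleus_mask, cytoplasm_mask):
    # Nuclear-to-cytoplasmic area ratio from two boolean masks; the
    # patent derives these masks via nuclei segmentation and an
    # improved active contour model, assumed already given here.
    nuclear_area = int(np.count_nonzero(nucleus_mask))
    # Cytoplasmic area excludes the nuclear pixels it encloses.
    cyto_area = int(np.count_nonzero(cytoplasm_mask & ~nucleus_mask))
    if cyto_area == 0:
        raise ValueError("empty cytoplasmic region")
    return nuclear_area / cyto_area

# Toy cell: a 2x2 nucleus inside a 4x4 cell region.
cell = np.zeros((6, 6), dtype=bool)
cell[1:5, 1:5] = True  # cell region covers 16 pixels
nuc = np.zeros((6, 6), dtype=bool)
nuc[2:4, 2:4] = True   # nucleus covers 4 of them
ratio = nc_ratio(nuc, cell)  # 4 / (16 - 4)
```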
  • Patent number: 11112639
    Abstract: A method for sensing a biometric object using an electronic device includes the steps of: (a) emitting a sensing light from a backlight unit upon the biometric object contacting a sensing region on a display surface, and allowing the sensing light to pass through a color filter unit and then reach and be reflected by the biometric object to return as a reflected light; and (b) controlling arrangement of liquid crystal molecules located in a first region of a liquid crystal layer to define a first light path, and allowing the reflected light having predetermined wavelengths to pass through the color filter unit and the first light path to reach and be detected by an optical sensing unit.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: September 7, 2021
    Assignee: MACROBLOCK, INC.
    Inventor: Yi-Sheng Lin
  • Patent number: 11113573
    Abstract: A method of generating training data for a deep learning network is provided.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: September 7, 2021
    Assignee: Superb AI Co., Ltd.
    Inventor: Kye-Hyeon Kim
  • Patent number: 11113511
    Abstract: A makeup evaluation system according to an embodiment of the present invention includes a mobile terminal for photographing a facial image and transmitting the photographed facial image to a makeup server, and the makeup server for storing makeup score data and, when receiving the facial image from the mobile terminal, detecting at least one face region in the facial image, calculating a makeup score for each of the detected face regions on the basis of the makeup score data, and transmitting the calculated makeup score to the mobile terminal.
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: September 7, 2021
    Assignee: LG HOUSEHOLD & HEALTH CARE LTD.
    Inventors: Sang E Kim, Do Hyuk Kwon, Do Sik Hwang, Tae Seong Kim, Doo Hyun Park, Ki Hun Bang, Tae Joon Eo, Yo Han Jun, Se Won Hwang
  • Patent number: 11100373
    Abstract: A system and methods are provided in which an artificial intelligence inference module identifies targeted information in large-scale unlabeled data, wherein the artificial intelligence inference module autonomously learns hierarchical representations from large-scale unlabeled data and continually self-improves from self-labeled data points using a teacher model trained to detect known targets from combined inputs of a small hand labeled curated dataset prepared by a domain expert together with self-generated intermediate and global context features derived from the unlabeled dataset by unsupervised and self-supervised processes. The trained teacher model processes further unlabeled data to self-generate new weakly-supervised training samples that are self-refined and self-corrected, without human supervision, and then used as inputs to a noisy student model trained in a semi-supervised learning process on a combination of the teacher model training set and new weakly-supervised training samples.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: August 24, 2021
    Assignee: DocBot, Inc.
    Inventors: Peter Crosby, James Requa
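The teacher-to-student self-labeling loop described above can be illustrated with a toy pseudo-labeling pass (the confidence-threshold policy and the 1-D teacher are assumptions standing in for the patent's self-refinement and self-correction steps):

```python
def pseudo_label(teacher, unlabeled, confidence=0.9):
    # Self-generate weakly-supervised samples: keep only predictions
    # the teacher is confident about; the kept pairs would join the
    # hand-labeled set to train the noisy student model.
    kept = []
    for x in unlabeled:
        label, score = teacher(x)
        if score >= confidence:
            kept.append((x, label))
    return kept

# Toy teacher: a 1-D threshold classifier whose confidence grows
# with distance from the decision boundary at 0.5.
def teacher(x):
    return int(x > 0.5), min(1.0, abs(x - 0.5) * 2)

unlabeled = [0.02, 0.45, 0.55, 0.98]
new_samples = pseudo_label(teacher, unlabeled)
# only the far-from-boundary points survive as training samples
```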
  • Patent number: 11093032
    Abstract: A sight line direction estimation device includes: an imaging unit that captures an image of a head portion including an eyeball of a person; an apparent sight line direction derivation unit that derives an apparent sight line direction connecting a center position of the eyeball and an apparent pupil center position on a surface of the eyeball corresponding to a pupil center position, based on the image captured by the imaging unit; and a sight line direction estimation unit that estimates an actual sight line direction of the person based on a predetermined corresponding relationship between actual and apparent sight line directions and on the apparent sight line direction derived by the apparent sight line direction derivation unit.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: August 17, 2021
    Assignee: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Shinichi Kojima, Takashi Kato
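The derivation step, taking the apparent sight line as the vector from the eyeball center to the apparent pupil center, reduces to simple geometry; the fixed angular offset below is a hypothetical stand-in for the patent's predetermined corresponding relationship:

```python
import math

def apparent_sight_line(eyeball_center, pupil_center):
    # Unit vector from the eyeball center toward the apparent pupil
    # center on the eyeball surface (2-D sketch of the derivation).
    dx = pupil_center[0] - eyeball_center[0]
    dy = pupil_center[1] - eyeball_center[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm)

def estimate_actual(direction, offset_rad=0.05):
    # Hypothetical corresponding relationship: a fixed angular offset
    # between apparent and actual directions, e.g. from calibration.
    angle = math.atan2(direction[1], direction[0]) + offset_rad
    return (math.cos(angle), math.sin(angle))

d = apparent_sight_line((0.0, 0.0), (3.0, 4.0))  # -> (0.6, 0.8)
actual = estimate_actual(d)
```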
  • Patent number: 11087142
    Abstract: Systems and methods for recognizing fine-grained objects are provided. The system divides unlabeled training data from a target domain into two or more target subdomains using an attribute annotation. The system ranks the target subdomains based on a similarity to the source domain. The system applies multiple domain discriminators between each of the target subdomains and a mixture of the source domain and preceding target domains. The system recognizes, using the multiple domain discriminators for the target domain, fine-grained objects.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: August 10, 2021
    Inventors: Yi-Hsuan Tsai, Manmohan Chandraker, Shuyang Dai, Kihyuk Sohn
  • Patent number: 11068684
    Abstract: Provided is a fingerprint authentication sensor module having a simple configuration, high resolution, and a high authentication rate. A fingerprint authentication device includes a cover glass on which a finger is to be placed and a fingerprint authentication sensor module placed under the cover glass. The fingerprint authentication sensor module includes an image forming unit, a first glass portion placed under the image forming unit, and an image sensor placed under the first glass portion. The image forming unit includes an array of a plurality of microlenses and light-shielding portions that surround each of the plurality of microlenses and that limit light entering the array of the plurality of microlenses.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: July 20, 2021
    Assignee: MICROMETRICS TECHNOLOGIES PTE. LTD.
    Inventor: Hiroshi Ishibe
  • Patent number: 11068692
    Abstract: An under-screen image capturing device and electronic equipment are provided, including a nonopaque cover plate, a light source module, and a photosensor module arranged sequentially from top to bottom. The nonopaque cover plate is provided with a nonopaque area, the light source module includes a plurality of light sources arranged in an array, and the photosensor module includes a plurality of discrete photosensors. The light emitted from each light source toward the nonopaque area is reflected by the nonopaque cover plate and received by one photosensor in the photosensor module.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: July 20, 2021
    Inventor: Fengjun Gu
  • Patent number: 11055517
    Abstract: A non-contact human input method captures an image in front of a non-contact human input interface of the non-contact human input system, extracts a figure from the image, and determines whether a user corresponding to the figure is in an input state according to a posture of the figure. The method ignores actions of the user when the user is not in the input state, and receives a voice input or a posture input from the user when the user is in the input state. A non-contact human input system includes a displaying device, at least one camera, and a processor electrically connected to the displaying device and the at least one camera. The at least one camera captures an image of the user, and the processor implements the above non-contact human input method.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: July 6, 2021
    Assignee: QISDA CORPORATION
    Inventors: Tsung-Hsun Wu, Min-Hsiung Huang
  • Patent number: 11049094
    Abstract: The disclosure relates, e.g., to image processing technology including device to device communication. One claim recites an apparatus for device to device communication using displayed imagery, said apparatus comprising: a camera for capturing a plurality of image frames, the plurality of image frames representing a plurality of graphics displayed on a display screen of a mobile device, in which each of the graphics comprises an output from an erasure code generator, in which the erasure code generator produces a plurality of outputs corresponding to a payload; means for decoding outputs from the plurality of graphics; and means for constructing the payload from decoded outputs; and means for carrying out an action based on a constructed payload. A great variety of other features, arrangements and claims are also detailed.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: June 29, 2021
    Assignee: Digimarc Corporation
    Inventor: Tomas Filler
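The frame-level erasure coding can be illustrated with a minimal XOR-parity scheme (a stand-in for Digimarc's actual erasure code generator): any k of the k+1 displayed graphics suffice to rebuild the payload, so the capturing device tolerates a dropped frame.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_frames(payload, k):
    # Split the payload into k data chunks plus one XOR parity chunk;
    # each chunk would be rendered as one displayed graphic.
    size = len(payload) // k
    chunks = [payload[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [reduce(xor, chunks)]  # frame index k is parity

def decode_frames(captured, k):
    # captured: {frame_index: chunk}; index k holds the parity frame.
    missing = [i for i in range(k) if i not in captured]
    if missing:  # XOR parity tolerates the loss of one data frame
        captured[missing[0]] = reduce(xor, captured.values())
    return b"".join(captured[i] for i in range(k))

frames = encode_frames(b"HELLOWORLD", 2)  # 2 data chunks + parity
lost_one = {0: frames[0], 2: frames[2]}   # frame 1 never captured
payload = decode_frames(lost_one, 2)
```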
  • Patent number: 11042779
    Abstract: Described is a system for automatically generating images that satisfy specific image properties. Using a code parser component, a tensor expression intermediate representation (IR) of a deep neural network code is produced. A specification describing a set of image properties is parsed in a fixed formal syntax. The tensor expression IR and the specification is input into a rewriting and analysis engine. The rewriting and analysis engine queries an external solver to obtain pixel values satisfying the specification. When pixel values satisfying the specification can be found in a fixed time period, the rewriting and analysis engine combines the pixel values into an image that satisfies the specification and outputs the image.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: June 22, 2021
    Assignee: HRL Laboratories, LLC
    Inventors: Michael A. Warren, Pape Sylla
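The query-the-solver-within-a-fixed-time idea can be sketched with a brute-force stand-in for the external solver (the predicate-style specification and the coarse pixel domain are assumptions for illustration, not the patent's formal syntax):

```python
from itertools import product

def solve_pixels(spec, n_pixels, domain=range(0, 256, 51), budget=100000):
    # Toy stand-in for the external solver: exhaustively search a
    # coarse pixel domain for values satisfying the specification,
    # giving up after a fixed budget (the "fixed time period").
    for tried, pixels in enumerate(product(domain, repeat=n_pixels)):
        if tried >= budget:
            return None  # no satisfying assignment found in time
        if spec(pixels):
            return list(pixels)
    return None

# The formal specification is modeled here as a plain predicate:
# a 3-pixel image whose first pixel is dark but whose mean is bright.
spec = lambda p: p[0] < 60 and sum(p) / len(p) > 128
image = solve_pixels(spec, 3)
```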
  • Patent number: 11042995
    Abstract: A system includes at least one dynamically configurable tracking device that receives a video stream and is adapted for detection and automatic tracking of at least one target by analysis of the video stream; a calculator of a metric performance value starting from a target tracking result supplied by the tracking device; a configuration parameter corrector of the tracking device as a function of the metric performance value; and a dynamic configurator of the tracking device that applies the corrected configuration parameter. The system further includes a supplementary reference tracking device receiving at least one portion of the video stream, and the calculator calculates the metric performance value from a comparison, on the video stream portion, of the target tracking result supplied by the tracking device with a reference tracking result supplied by the supplementary reference tracking device.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: June 22, 2021
    Assignee: BULL SAS
    Inventors: Florent Trolat, Sophie Guegan-Marat
  • Patent number: 11044450
    Abstract: Techniques are described for white balancing an input image by determining a color transformation for the input image based on color transformations that have been computed for training images whose color characteristics are similar to those of the input image. Techniques are also described for generating a training dataset comprising color information for a plurality of training images and color transformation information for the plurality of training images. The color information in the training dataset is searched to identify a subset of training images that are most similar in color to the input image. The color transformation for the input image is then computed by combining color transformation information for the identified training images. The contribution of the color transformation information for any given training image to the combination can be weighted based on the degree of color similarity between the input image and the training image.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: June 22, 2021
    Assignees: Adobe Inc., York University
    Inventors: Mahmoud Afifi, Michael Brown, Brian Price, Scott Cohen
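The weighted combination of training-image transforms can be sketched as follows (the histogram features, inverse-distance weights, and k-nearest selection are illustrative choices, not the patented weighting):

```python
import numpy as np

def combine_transforms(input_hist, train_hists, train_mats, k=2):
    # Rank training images by color similarity (histogram distance),
    # then blend their 3x3 color transforms with inverse-distance
    # weights, so more color-similar images contribute more.
    dists = np.linalg.norm(train_hists - input_hist, axis=1)
    nearest = np.argsort(dists)[:k]
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    return np.tensordot(w, train_mats[nearest], axes=1)

# Toy data: two training "color histograms" and their transforms.
train_hists = np.array([[1.0, 0.0], [0.0, 1.0]])
train_mats = np.stack([np.eye(3), 2.0 * np.eye(3)])
M = combine_transforms(np.array([1.0, 0.0]), train_hists, train_mats)
# M is dominated by the transform of the closely matching image
```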
  • Patent number: 11037109
    Abstract: Machine-readable storage media have instructions stored therein that, when executed by a processor of a mobile device, configure the mobile device to capture a check image for funds to be deposited into a recipient account. The mobile device is configured to display a request to a user of the mobile device to provide one or more portions of a MICR line for the received check image and to receive user inputs from the user specifying the one or more portions of the MICR line. The mobile device is further configured to transmit a message to a bank account computer system associated with the recipient account, the message including data specifying the one or more portions of the MICR line.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: June 15, 2021
    Assignee: Wells Fargo Bank, N.A.
    Inventors: David Joel Sherman, Nishant Usapkar, Katie Knight, Ranjit S. Pradhan
  • Patent number: 11030489
    Abstract: A method for auto-labeling images by using a class-agnostic refinement module is provided. The method includes steps of: (a) an auto-labeling device inputting the images into a coverage controlling module, to thereby allow the coverage controlling module to label objects on the images and thus to output first labeling data including first object region data and first class data; (b) the auto-labeling device inputting the images and the first object region data into the class-agnostic refinement module, to thereby allow the class-agnostic refinement module to label the objects on the images and thus to generate second object region data, and allowing the class-agnostic refinement module to align the first object region data and the second object region data to thereby output refined object region data; and (c) the auto-labeling device generating second labeling data including the first class data and the refined object region data.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: June 8, 2021
    Assignee: SUPERB AI CO., LTD.
    Inventors: Kye-Hyeon Kim, Jung Kwon Lee
  • Patent number: 11023702
    Abstract: There is provided a fingerprint sensing module comprising a fingerprint sensor device having a sensing array arranged on a first side of the device, the sensing array comprising an array of fingerprint sensing elements. The fingerprint sensor device also comprises connection pads for connecting the fingerprint sensor device to external circuitry. The fingerprint sensing module further comprises a fingerprint sensor device cover structure arranged to cover the fingerprint sensor device, the cover structure having a first side configured to be touched by a finger, thereby forming a sensing surface of the sensing module, and a second side facing the sensing array, wherein the cover structure comprises conductive traces for electrically connecting the fingerprint sensor module to external circuitry, and wherein a surface area of the cover structure is larger than a surface area of the sensor device.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: June 1, 2021
    Assignee: Fingerprint Cards AB
    Inventors: Nils Lundberg, Zhimin Mo, Mats Slottner
  • Patent number: 11024027
    Abstract: Systems and methods for generating synthesized images are provided. An input medical image patch, a segmentation mask, a vector of appearance related parameters, and manipulable properties are received. A synthesized medical image patch including a synthesized nodule is generated based on the input medical image patch, the segmentation mask, the vector of appearance related parameters, and the manipulable properties using a trained object synthesis network. The synthesized nodule is synthesized according to the manipulable properties. The synthesized medical image patch is output.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: June 1, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Siqi Liu, Eli Gibson, Sasa Grbic, Zhoubing Xu, Arnaud Arindra Adiyoso, Bogdan Georgescu, Dorin Comaniciu
  • Patent number: 11023779
    Abstract: A method for training an auto labeling device performing automatic verification using uncertainty of labels is provided. The method includes steps of: a learning device (a) (i) inputting unlabeled training images into (i-1) an object detection network to generate bounding boxes for training and (i-2) a convolution network to generate feature maps for training, and (ii) allowing an ROI pooling layer to generate pooled feature maps for training and inputting the pooled feature maps for training into a deconvolution network to generate segmentation masks for training; and (b) (i) inputting the pooled feature maps for training into at least one of (i-1) a first classifier to generate first per-pixel class scores for training and first mask uncertainty scores for training and (i-2) a second classifier to generate second per-pixel class scores for training and second mask uncertainty scores for training and (ii) training the first classifier or the second classifier.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: June 1, 2021
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye-Hyeon Kim
  • Patent number: 11023780
    Abstract: A method for training an auto labeling device capable of performing automatic verification by using uncertainty scores of labels is provided. The method includes steps of: a learning device (a) inputting unlabeled training images into a trained object detection network and a trained convolution network to generate bounding boxes for training and feature maps for training; and (b) (i) instructing an ROI pooling layer to generate pooled feature maps for training, (ii) at least one of (ii-1) inputting the pooled feature maps for training into a first classifier to generate first class scores for training and first box uncertainty scores for training, and (ii-2) inputting the pooled feature maps for training into a second classifier to generate second class scores for training and second box uncertainty scores for training, and (iii) training one of the first classifier using first class losses and the second classifier using second class losses.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: June 1, 2021
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye-Hyeon Kim
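The uncertainty-score-driven verification shared by these auto-labeling patents can be illustrated with a simple entropy measure (the thresholding policy below is a hypothetical stand-in for the classifiers' box and mask uncertainty scores):

```python
import math

def entropy_uncertainty(class_scores):
    # Shannon entropy of a box's class distribution, used here as a
    # stand-in for the classifier's uncertainty score.
    return -sum(p * math.log(p) for p in class_scores if p > 0)

def needs_verification(boxes, threshold=0.5):
    # Automatic verification pass: route only high-uncertainty labels
    # to a human reviewer and accept the rest automatically.
    return [i for i, scores in enumerate(boxes)
            if entropy_uncertainty(scores) > threshold]

boxes = [
    [0.98, 0.01, 0.01],  # confident -> auto-accepted
    [0.40, 0.35, 0.25],  # ambiguous -> flagged for review
]
flagged = needs_verification(boxes)
```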