Patents Examined by Wesley J Tucker
  • Patent number: 11302045
    Abstract: An image processing apparatus comprises an image obtaining unit that obtains a captured image, an information obtaining unit that obtains analysis data recorded in correspondence with the captured image and including flag information indicating whether an object present in the captured image is a masking target, a detecting unit that detects objects from the captured image, and a mask processing unit that generates an image in which an object, among the objects detected from the captured image, which is indicated as the masking target by the flag information, is masked.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: April 12, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kan Ito
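    A minimal illustrative sketch (not Canon's implementation) of the masking step described in the abstract above, assuming a hypothetical analysis-data layout in which each detected object is recorded with a bounding box and a flag indicating whether it is a masking target:
    ```python
    import numpy as np

    def mask_flagged_objects(image, analysis_data):
        """Return a copy of `image` with every object flagged as a masking target filled in.

        `analysis_data` is assumed to be a list of records such as
        {"bbox": (x, y, w, h), "is_masking_target": True} recorded with the captured image.
        """
        masked = image.copy()
        for obj in analysis_data:
            if obj.get("is_masking_target"):
                x, y, w, h = obj["bbox"]
                masked[y:y + h, x:x + w] = 0  # solid fill; a blur or mosaic would also work
        return masked

    # Example: one flagged object is masked, one is left untouched.
    frame = np.full((240, 320, 3), 255, dtype=np.uint8)
    data = [{"bbox": (10, 20, 50, 80), "is_masking_target": True},
            {"bbox": (200, 30, 40, 60), "is_masking_target": False}]
    out = mask_flagged_objects(frame, data)
    ```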
  • Patent number: 11295550
    Abstract: An image processing method comprises: acquiring an actual image of a specified target from a video stream collected by a camera; identifying, from the actual image, an area of the face of the specified target not shielded by the VR HMD and an area shielded by the VR HMD, and acquiring first facial image data corresponding to the area not shielded; obtaining second facial image data matching the first facial image data according to the first facial image data and a preset facial expression model, wherein the second facial image data correspond to the area shielded; and fusing the first facial image data and the second facial image data to generate a composite image. An image processing device comprises a first acquiring unit, an identifying unit, a second acquiring unit and a generating unit, and is for performing the steps of the method described above.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: April 5, 2022
    Inventors: Tianrong Dai, Yuge Zhu, Dachuan Zhao, Xiang Chen
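    A minimal sketch of the fusion step from the abstract above, assuming the expression model has already produced an image for the shielded area and that a binary mask of the HMD region is available; the array layout and names are illustrative, not the patented method:
    ```python
    import numpy as np

    def fuse_face_regions(visible, predicted, hmd_mask):
        """Composite unshielded camera pixels with model-predicted pixels for the shielded area.

        `visible`   - camera image of the face, shape (H, W, 3)
        `predicted` - image of the same size produced by the facial expression model
        `hmd_mask`  - mask of shape (H, W): 1.0 where the VR HMD shields the face, 0.0 elsewhere
        """
        m = np.asarray(hmd_mask, dtype=np.float32)[..., None]   # broadcast over color channels
        composite = visible.astype(np.float32) * (1.0 - m) + predicted.astype(np.float32) * m
        return composite.clip(0, 255).astype(np.uint8)
    ```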
  • Patent number: 11295116
    Abstract: A collation system of the present invention includes imaging means for acquiring a captured image of a pre-passage side area with respect to each of gates arranged in parallel with each other, and collation means for performing a collation process on the captured image of the pre-passage side area for each of the gates, between a previously registered target and a target included in the captured image. The collation means performs the collation process on the basis of a target in the captured image corresponding to one of the gates and a target in the captured image corresponding to another one of the gates.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: April 5, 2022
    Assignee: NEC CORPORATION
    Inventors: Taketo Kochi, Kenji Saito
  • Patent number: 11288891
    Abstract: In a method for operating a fingerprint sensor comprising a plurality of ultrasonic transducers, a first subset of ultrasonic transducers of the fingerprint sensor are activated, the first subset of ultrasonic transducers for detecting interaction between an object and the fingerprint sensor. Subsequent to detecting interaction between an object and the fingerprint sensor, a second subset of ultrasonic transducers of the fingerprint sensor are activated, the second subset of ultrasonic transducers for determining whether the object is a human finger, wherein the second subset of ultrasonic transducers comprises a greater number of ultrasonic transducers than the first subset of ultrasonic transducers.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: March 29, 2022
    Assignee: InvenSense, Inc.
    Inventors: James Christian Salvia, Hao-Yen Tang, Michael H. Perrott, Bruno W. Garlepp, Etienne De Foras
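    A sketch of the two-stage activation policy described above: a small transducer subset watches for contact, and only then is a larger subset activated to decide whether the object is a finger. The driver hooks below are hypothetical placeholders, not InvenSense APIs:
    ```python
    import random

    # Hypothetical driver hooks; real hardware would expose equivalents via its firmware API.
    def activate(transducer_ids):               # drive the listed ultrasonic transducers
        pass

    def interaction_detected(transducer_ids):   # True when the small subset senses contact
        return random.random() < 0.1

    def looks_like_finger(transducer_ids):      # classify using the larger subset's richer signal
        return random.random() < 0.9

    FIRST_SUBSET = list(range(0, 4))            # few transducers: low-power presence detection
    SECOND_SUBSET = list(range(0, 32))          # larger subset: enough coverage to reject non-fingers

    def sense_once():
        """Return True only when an object touches the sensor and is judged to be a human finger."""
        activate(FIRST_SUBSET)
        if not interaction_detected(FIRST_SUBSET):
            return False
        activate(SECOND_SUBSET)                 # escalate only after contact is detected
        return looks_like_finger(SECOND_SUBSET)
    ```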
  • Patent number: 11281909
    Abstract: Exemplary embodiments of the present disclosure are directed towards a system for analyzing graffiti content, tracking graffiti vandals who executed graffiti on different surfaces, and reporting graffiti vandal reports to law enforcement agents or public works departments. The system comprises: a computing device configured to allow a user to capture graffiti executed on a plurality of surfaces and upload the captured graffiti to a graffiti analyzing and tracking module, which parses the graffiti content into data points and analyzes, reconfigures, and reports the data points to clearly reveal trends in categories on the computing device; and a database configured to store information about graffiti crimes and locations and to allow the user to allocate resources. The computing device enables the graffiti analyzing and tracking module to display graffiti vandals and to map the graffiti content to graffiti vandals that have appeared in graffiti renderings from the database.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: March 22, 2022
    Inventor: Timothy Kephart
  • Patent number: 11275970
    Abstract: The invention provides systems and methods for generating device-specific artificial neural network (ANN) models for distribution across user devices. Sample datasets are collected from devices in a particular environment or use case and include predictions by device-specific ANN models executing on the user devices. The received datasets are used with existing datasets and stored ANN models to generate updated device-specific ANN models from each of the stored instances of the device ANN models based on the training data.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: March 15, 2022
    Assignee: Xailient
    Inventors: Lars Oleson, Shivanthan Yohanandan, Ryan Mccrea, Deepa Lakshmi Chandrasekharan, Sabina Pokhrel, Yousef Rabi, Zhenhua Zhang, Priyadharshini Devanand, Bernardo Rodeiro Croll, James J. Meyer
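    A rough sketch of the per-device update loop described above, with a stub standing in for the actual training step; the data layout and function names are assumptions, not Xailient's pipeline:
    ```python
    def retrain(model_weights, dataset):
        """Stand-in for an actual training run; here it just records how much data was used."""
        return {"base": model_weights, "trained_on": len(dataset)}

    def update_device_models(stored_models, existing_data, received_samples):
        """Produce an updated ANN model per device from its stored model and combined datasets.

        `received_samples[device_id]` holds new samples (including on-device predictions)
        collected from that device's environment or use case.
        """
        updated = {}
        for device_id, weights in stored_models.items():
            combined = existing_data.get(device_id, []) + received_samples.get(device_id, [])
            updated[device_id] = retrain(weights, combined)
        return updated
    ```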
  • Patent number: 11263293
    Abstract: Methods, structures and computer program products for digital sample rate conversion are presented. An input digital sample with a first frequency is converted to an output sample with a second frequency. A sample rate conversion circuit is provided which implements an enhanced transposed Farrow structure that enables an optimised trade-off between noise levels and computational complexity. Each output sample is derived by convolution of a continuous time interpolation kernel with a continuous time step function representing the input sample stream. In a sample rate conversion structure, there is a trade-off between the quality and the computational complexity. The quality is defined as a ratio between the (wanted) signal power and the (unwanted) noise power. The computational complexity may be defined as the average number of arithmetic operations that are required to generate one output sample. A higher computational complexity will generally lead to a higher power consumption and a larger footprint.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: March 1, 2022
    Assignee: Dialog Semiconductor B.V.
    Inventor: Wessel Lubberhuizen
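    The patent claims an enhanced transposed Farrow converter; the sketch below is only a first-order (linear interpolation) resampler showing the phase-accumulator idea behind fractional rate conversion, not the claimed circuit:
    ```python
    import numpy as np

    def resample_linear(x, fs_in, fs_out):
        """Convert `x` from rate fs_in to fs_out using first-order polynomial interpolation.

        Each output sample is taken at the fractional input position n * fs_in / fs_out and
        interpolated between the two neighbouring input samples. Higher-order Farrow kernels
        trade more arithmetic per output sample for lower interpolation noise.
        """
        step = fs_in / fs_out
        positions = np.arange(0, len(x) - 1, step)
        idx = positions.astype(int)
        mu = positions - idx                      # fractional interval in [0, 1)
        return (1.0 - mu) * x[idx] + mu * x[idx + 1]

    # Example: resample a 1 kHz tone from 48 kHz to 44.1 kHz.
    t = np.arange(480) / 48000.0
    y = resample_linear(np.sin(2 * np.pi * 1000 * t), 48000.0, 44100.0)
    ```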
  • Patent number: 11256907
    Abstract: Described herein is a system and techniques for classification of subjects within image information. In some embodiments, a set of subjects may be identified within image data obtained at two different points in time. For each of the subjects in the set of subjects, facial landmark relationships may be assessed at the two different points in time to determine a difference in facial expression. That difference may be compared to a threshold value. Additionally, contours of each of the subjects in the set of subjects may be assessed at the two different points in time to determine a difference in body position. That difference may be compared to a different threshold value. Each of the subjects in the set of subjects may then be classified based on the comparison between the differences and the threshold values.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 22, 2022
    Assignee: Adobe Inc.
    Inventors: Sourabh Gupta, Saurabh Gupta, Ajay Bedi
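    A minimal sketch of the two comparisons the abstract above describes: a landmark-based expression change and a contour-based body-position change, each tested against its own threshold. The thresholds, the assumption that points are in correspondence across time, and the output labels are illustrative:
    ```python
    import numpy as np

    def classify_subject(landmarks_t0, landmarks_t1, contour_t0, contour_t1,
                         expr_threshold=5.0, pose_threshold=20.0):
        """Classify a subject from changes between two points in time.

        `landmarks_*` are (N, 2) arrays of facial landmark coordinates and
        `contour_*`   are (M, 2) arrays of body-contour points, assumed to be in
        point-to-point correspondence between the two capture times.
        """
        expr_change = np.mean(np.linalg.norm(np.asarray(landmarks_t1, float)
                                             - np.asarray(landmarks_t0, float), axis=1))
        pose_change = np.mean(np.linalg.norm(np.asarray(contour_t1, float)
                                             - np.asarray(contour_t0, float), axis=1))
        if expr_change > expr_threshold or pose_change > pose_threshold:
            return "active"     # expression or body position changed noticeably
        return "static"         # both differences stayed under their thresholds
    ```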
  • Patent number: 11256924
    Abstract: Systems and methods for identifying and associating contextual metadata across related media.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: February 22, 2022
    Assignee: OPENTV, INC.
    Inventors: Nicholas Daniel Doerring, John Michael Teixeira, Steven J. Szymanski, Claes Georg Andersson
  • Patent number: 11250242
    Abstract: A user terminal according to an embodiment of the present invention includes a capturing device for capturing a face image of a user, and an eye tracking unit for, on the basis of a configured rule, acquiring, from the face image, a vector representing the direction that the face of the user is facing, and a pupil image of the user, and performing eye tracking of the user by inputting the face image, the vector and the pupil image into a configured deep learning model.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: February 15, 2022
    Assignee: VisualCamp Co., Ltd.
    Inventors: Yoon Chan Seok, Tae Hee Lee
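    A toy multi-input model of the kind the abstract describes: a face-image branch, a pupil-image branch, and the face-direction vector are combined and regressed to a gaze point. The architecture and sizes are assumptions (written with PyTorch), not the patented model:
    ```python
    import torch
    import torch.nn as nn

    class GazeNet(nn.Module):
        """Face image + pupil image + face-direction vector -> estimated gaze point (x, y)."""
        def __init__(self):
            super().__init__()
            def branch():  # tiny CNN used for both image inputs
                return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.face_branch = branch()
            self.pupil_branch = branch()
            self.head = nn.Sequential(nn.Linear(16 + 16 + 3, 32), nn.ReLU(), nn.Linear(32, 2))

        def forward(self, face_img, pupil_img, direction_vec):
            feats = torch.cat([self.face_branch(face_img),
                               self.pupil_branch(pupil_img),
                               direction_vec], dim=1)
            return self.head(feats)

    model = GazeNet()
    gaze = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 32, 32), torch.randn(1, 3))
    ```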
  • Patent number: 11243305
    Abstract: A method, system and computer program product for intelligent tracking and transformation between interconnected sensor devices of mixed type is disclosed. Metadata derived from image data from a camera is compared to different metadata derived from radar data from a radar device to determine whether an object in a Field of View (FOV) of one of the camera and the radar device is an identified object that was previously in the FOV of the other of the camera and the radar device.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: February 8, 2022
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Shervin Sabripour, John B Preston, Bert Van Der Zaag, Patrick D Koskan
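    A minimal sketch of the cross-sensor comparison above: metadata derived from the camera track is scored against metadata derived from the radar track, and a close match means the object now in one FOV is the object previously identified in the other. The metadata fields and tolerances are assumptions:
    ```python
    import math

    def is_same_object(camera_meta, radar_meta,
                       max_distance_m=3.0, max_speed_diff=1.5, max_time_gap_s=5.0):
        """Decide whether a camera-tracked object matches a radar-tracked object.

        Each metadata dict is assumed to hold a ground-plane position (x, y) in metres,
        a speed in m/s, and a timestamp in seconds.
        """
        dx = camera_meta["x"] - radar_meta["x"]
        dy = camera_meta["y"] - radar_meta["y"]
        close_enough = math.hypot(dx, dy) <= max_distance_m
        similar_speed = abs(camera_meta["speed"] - radar_meta["speed"]) <= max_speed_diff
        recent = abs(camera_meta["timestamp"] - radar_meta["timestamp"]) <= max_time_gap_s
        return close_enough and similar_speed and recent

    same = is_same_object({"x": 10.2, "y": 4.1, "speed": 1.2, "timestamp": 100.0},
                          {"x": 11.0, "y": 4.5, "speed": 1.0, "timestamp": 101.5})
    ```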
  • Patent number: 11244145
    Abstract: An information processing apparatus includes an estimating unit that estimates a normal on at least a part of a face of an object in a real space on the basis of polarization information corresponding to a detection result of each of a plurality of beams of polarized light acquired by a polarization sensor and having different polarization directions and a control unit that controls output of notification information for guiding a change in a position in the real space according to an estimation result of the normal.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: February 8, 2022
    Assignee: SONY CORPORATION
    Inventors: Masashi Eshima, Akihiko Kaino, Daiki Yamanaka
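    The normal estimate above builds on per-pixel polarization measurements. The sketch below uses the standard four-angle Stokes computation (polarizer at 0, 45, 90, 135 degrees) to recover the angle and degree of linear polarization, which constrain the surface-normal azimuth and zenith respectively; this is a generic computation, not necessarily Sony's method:
    ```python
    import numpy as np

    def linear_polarization(i0, i45, i90, i135):
        """Per-pixel degree and angle of linear polarization from four polarizer orientations."""
        i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity (Stokes S0)
        s1 = i0 - i90                            # Stokes S1
        s2 = i45 - i135                          # Stokes S2
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)   # degree of linear polarization
        aolp = 0.5 * np.arctan2(s2, s1)                            # angle of linear polarization, radians
        return dolp, aolp
    ```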
  • Patent number: 11238611
    Abstract: Provided is a process for generating specifications for lenses of eyewear based on locations of extents of the eyewear determined through a pupil location determination process. Some embodiments capture an image and determine, using computer vision image recognition functionality, the pupil locations of a human's eyes based on the captured image depicting the human wearing eyewear.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: February 1, 2022
    Assignee: Electric Avenue Software, Inc.
    Inventors: David Barton, Ethan D. Joffe
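    One generic way to locate pupils with off-the-shelf OpenCV pieces, sketched below: a Haar cascade finds the eye regions and the darkest point of each blurred eye patch is taken as the pupil centre. This is an illustrative approach, not the patented process:
    ```python
    import cv2

    def find_pupils(image_bgr):
        """Return approximate (x, y) pupil locations in image coordinates."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
        pupils = []
        for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            roi = cv2.GaussianBlur(gray[y:y + h, x:x + w], (9, 9), 0)
            _, _, min_loc, _ = cv2.minMaxLoc(roi)       # darkest point ~ pupil centre
            pupils.append((x + min_loc[0], y + min_loc[1]))
        return pupils
    ```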
  • Patent number: 11216647
    Abstract: Technology for devices, systems, techniques and processes to provide anti-spoofing features for facial identification with enhanced security against facial spoofing devices or techniques, by using optical sensing and other sensing mechanisms to explore certain unique characteristics of a live person's face that are absent from most spoofing devices made of artificial materials or are difficult to replicate, including optical sensing based on unique optical absorption or reflection features of biological parts of a person's face.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: January 4, 2022
    Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
    Inventors: Yi He, Bo Pi
  • Patent number: 11205298
    Abstract: There is provided a method for creating a voxel occupancy model. The voxel occupancy model is representative of a region of space which can be described using a three-dimensional voxel array. The region of space contains at least part of an object. The method comprises receiving first image data, the first image data being representative of a first view of the at least part of an object and comprising first image location data, and receiving second image data, the second image data being representative of a second view of the at least part of an object and comprising second image location data. The method also comprises determining a first descriptor, the first descriptor describing a property of a projection of a first voxel of the voxel array in the first image data, and determining a second descriptor, the second descriptor describing a property of a projection of the first voxel in the second image data.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: December 21, 2021
    Assignee: BLUE VISION LABS UK LIMITED
    Inventors: Peter Ondruska, Lukas Platinsky
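    A minimal sketch of the per-voxel descriptor comparison above: the voxel centre is projected into each view with a pinhole model and a simple descriptor (here, the pixel colour) is sampled at each projection; agreement across views supports occupancy. The camera model and descriptor choice are assumptions, not the patented definitions:
    ```python
    import numpy as np

    def project(point_world, K, R, t):
        """Project a 3-D point into pixel coordinates with a pinhole camera (K, R, t)."""
        u, v, w = K @ (R @ point_world + t)
        return np.array([u / w, v / w])

    def voxel_descriptors(voxel_center, views):
        """Sample a colour descriptor for one voxel in every view.

        `views` is a list of dicts with keys "image" (H, W, 3), "K", "R", "t".
        Returns one descriptor per view (None if the projection falls outside the image).
        """
        descriptors = []
        for view in views:
            u, v = project(voxel_center, view["K"], view["R"], view["t"])
            h, w = view["image"].shape[:2]
            if 0 <= int(v) < h and 0 <= int(u) < w:
                descriptors.append(view["image"][int(v), int(u)].astype(float))
            else:
                descriptors.append(None)
        return descriptors

    def descriptors_agree(d1, d2, tol=30.0):
        """A voxel is an occupancy candidate when its descriptors in two views match closely."""
        return d1 is not None and d2 is not None and np.linalg.norm(d1 - d2) < tol
    ```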
  • Patent number: 11195301
    Abstract: There is provided a method of computing head pose of a subject in a 2D image, comprising: analyzing the image to identify a first nostril, a second nostril, a medial eye substructure of a first and a second eye, and a lateral eye substructure of the first and the second eye, computing a median nostril point between the first and second nostrils, computing a horizontal line that connects the lateral eye substructures of the first and second eyes, computing a vertical axis line that is orthogonal to the horizontal line and passes through a median eye point between the medial eye substructures of the first and second eyes, computing an offset distance between the vertical axis line and the median nostril point, and computing an indication of an estimated yaw angle of the head pose of the subject based on the offset distance.
    Type: Grant
    Filed: July 26, 2020
    Date of Patent: December 7, 2021
    Assignee: NEC Corporation Of America
    Inventor: Tsvi Lev
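    A minimal geometric sketch of the construction described above, assuming the 2-D landmark coordinates are already available; mapping the normalized offset to a yaw angle with a fixed gain is an assumption, not the patented calibration:
    ```python
    import numpy as np

    def estimate_yaw(nostrils, lateral_eye_pts, medial_eye_pts, gain_deg=60.0):
        """Estimate head yaw (degrees) from 2-D facial landmarks.

        `nostrils`        - (2, 2) array: the first and second nostril centres
        `lateral_eye_pts` - (2, 2) array: the outer (lateral) corner of each eye
        `medial_eye_pts`  - (2, 2) array: the inner (medial) corner of each eye
        """
        nostrils = np.asarray(nostrils, float)
        lateral = np.asarray(lateral_eye_pts, float)
        medial = np.asarray(medial_eye_pts, float)

        median_nostril = nostrils.mean(axis=0)           # point between the nostrils
        horizontal = lateral[1] - lateral[0]             # line joining the lateral eye corners
        horizontal /= np.linalg.norm(horizontal)
        median_eye = medial.mean(axis=0)                 # point between the medial eye corners

        # Signed distance of the median nostril point from the vertical axis (the line through
        # `median_eye` orthogonal to the horizontal line) equals the component of
        # (median_nostril - median_eye) along the horizontal direction.
        offset = float(np.dot(median_nostril - median_eye, horizontal))

        # Normalise by the lateral-corner span for scale invariance, then map to an angle.
        eye_span = float(np.linalg.norm(lateral[1] - lateral[0]))
        return gain_deg * offset / eye_span
    ```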
  • Patent number: 11195307
    Abstract: Aspects of the present invention include an apparatus including a memory storing instructions, and a control unit configured to execute the instructions to detect an image of an object of interest within an image of real space, detect an orientation of the real space object image with respect to a real space user perspective, generate a modified image comprising an image of a modified object, corresponding to the real space object, such that an orientation of the modified object image corresponds to a desired user perspective, and display the modified image.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: December 7, 2021
    Assignee: SONY CORPORATION
    Inventors: Shunichi Kasahara, Ken Miyashita
  • Patent number: 11188771
    Abstract: The present disclosure provides a living-body detection method for a face. The method includes: when an object to be detected is not illuminated by a reference light source, performing image collection on an area to be detected of the object to be detected to obtain a first image; performing illumination on the object to be detected by utilizing the reference light source, and performing image collection on the area to be detected of the object to be detected to obtain a second image; and judging whether there is a bright spot reflected by a cornea of the object to be detected according to a difference between the first image and the second image to generate a judgment result, and determining whether the object to be detected is a non-living body according to the judgment result.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: November 30, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Caihong Ma
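    A minimal sketch of the corneal-glint check above: subtract the unlit frame from the lit frame and look for a small, strong bright spot inside the eye region. The eye-box input and the thresholds are illustrative values, not Baidu's:
    ```python
    import numpy as np

    def has_corneal_glint(unlit_gray, lit_gray, eye_box,
                          brightness_jump=60, min_pixels=3, max_pixels=200):
        """Return True when switching on the reference light produces a compact glint in the eye box.

        `eye_box` is (x, y, w, h) in pixel coordinates of the eye region; a spoof shown on a
        flat print or screen generally does not produce a compact corneal reflection.
        """
        x, y, w, h = eye_box
        diff = lit_gray.astype(np.int16) - unlit_gray.astype(np.int16)
        spot = diff[y:y + h, x:x + w] > brightness_jump
        n_bright = int(spot.sum())
        return min_pixels <= n_bright <= max_pixels
    ```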
  • Patent number: 11176668
    Abstract: Reaction values are calculated that indicate, for each of a plurality of identification results representing states of cells or tissues, which areas of the input image the judgment is based on; differences between the reaction values are then obtained to calculate the degree to which the cells or tissues in an area of interest are likely to be in each state. Each state of cells or tissues can thereby be visualized at an accurate position in the area of interest, enabling diagnostic assistance to pathologists.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: November 16, 2021
    Assignee: Hitachi High-Tech Corporation
    Inventors: Yasuki Kakishita, Hideharu Hattori, Kenko Uchida, Sadamitsu Aso, Toshinari Sakurai
  • Patent number: 11170254
    Abstract: A method for synthetic data generation and analysis including: determining a set of parameter values; generating a scene based on the parameter values; rendering a synthetic image of the scene; and generating a synthetic dataset including a set of synthetic images.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: November 9, 2021
    Inventors: Carl Magnus Wrenninge, Carl Jonas Magnus Unger
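    A minimal sketch of the generate-and-render loop described above: sample a parameter set, build a scene from it, render the scene, and collect the results into a synthetic dataset. The parameter space, scene construction, and renderer are stubbed placeholders, not the patented pipeline:
    ```python
    import random

    def sample_parameters():
        """Draw one set of scene parameters (placeholder parameter space)."""
        return {"object_count": random.randint(1, 5),
                "sun_elevation_deg": random.uniform(10, 80),
                "camera_height_m": random.uniform(1.0, 3.0)}

    def generate_scene(params):
        """Stand-in for scene construction from the sampled parameter values."""
        return {"params": params, "objects": [f"obj_{i}" for i in range(params["object_count"])]}

    def render(scene):
        """Stand-in for the renderer; a real pipeline would return an image plus labels."""
        return {"image": f"render of {len(scene['objects'])} objects", "labels": scene["objects"]}

    def generate_dataset(n_images):
        return [render(generate_scene(sample_parameters())) for _ in range(n_images)]

    dataset = generate_dataset(4)
    ```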