Patents Examined by Santiago Garcia
  • Patent number: 11961250
    Abstract: A light-field image generation system including a shape information acquisition server that acquires shape information indicating a three-dimensional shape of an object, and an image generation server that is provided with a shape reconstruction unit that reconstructs the three-dimensional shape of the object as a virtual three-dimensional shape in a virtual space based on the shape information and a light-field image generation unit that generates a light-field image of the virtual three-dimensional shape at a predetermined viewing point in the virtual space.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: April 16, 2024
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventor: Tetsuro Morimoto
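A minimal sketch of the kind of processing patent 11961250 describes: a point cloud stands in for the reconstructed virtual three-dimensional shape, and a small grid of sub-views is rendered around a predetermined viewing point to form a light-field image. The pinhole projection, grid layout, and all sizes are illustrative assumptions, not the patent's implementation.
```python
import numpy as np

def project_view(points, cam_pos, focal=400.0, size=(128, 128)):
    """Project the virtual 3-D shape (a point cloud here) onto a pinhole
    camera placed at cam_pos and looking along +z."""
    img = np.zeros(size, dtype=np.uint8)
    rel = points - cam_pos                       # camera-centred coordinates
    rel = rel[rel[:, 2] > 0.1]                   # keep points in front of the camera
    u = (focal * rel[:, 0] / rel[:, 2] + size[1] / 2).astype(int)
    v = (focal * rel[:, 1] / rel[:, 2] + size[0] / 2).astype(int)
    ok = (u >= 0) & (u < size[1]) & (v >= 0) & (v < size[0])
    img[v[ok], u[ok]] = 255                      # splat the visible points
    return img

def light_field(points, grid=4, baseline=0.05, distance=2.0):
    """One simple notion of a light-field image: a grid x grid array of
    sub-views captured from viewpoints around the predetermined viewing point."""
    views = [project_view(points, np.array([(gx - grid / 2) * baseline,
                                            (gy - grid / 2) * baseline,
                                            -distance]))
             for gy in range(grid) for gx in range(grid)]
    return np.stack(views).reshape(grid, grid, *views[0].shape)

cloud = np.random.randn(5000, 3) * 0.3     # stand-in for the reconstructed shape
print(light_field(cloud).shape)            # (4, 4, 128, 128)
```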
  • Patent number: 11960035
    Abstract: Systems and methods for encoding radiofrequency (RF) data, e.g., electrical signals, by a microbeamformer are disclosed herein. The microbeamformer may use a pseudo-random sampling pattern (700) to sum samples of the RF data stored in a plurality of memory cells. The memory cells may be included in a delay line of the microbeamformer in some examples. The summed samples may form an encoded signal transmitted to a decoder, which reconstructs the original RF data from the encoded signal. The decoder may use knowledge of the pseudo-random sampling pattern to reconstruct the original data in some examples.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 16, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Gregory Tsiang Ely, Faik Can Meral, Jean-Luc Francois-Marie Robert
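A minimal NumPy sketch of the summation-based encoding described in patent 11960035: each encoded value sums a pseudo-random subset of the RF samples held in the delay-line memory cells, and a decoder that knows the pattern reconstructs the data. The sizes, the 0/1 pattern, and the least-squares decoder are assumptions for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_encoded = 128, 48                 # memory-cell samples vs. encoded sums

# Pseudo-random sampling pattern: each row selects the samples summed together.
pattern = rng.integers(0, 2, size=(n_encoded, n_samples)).astype(float)

rf = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_samples))   # toy RF trace
encoded = pattern @ rf                                       # microbeamformer output

# The decoder knows the pattern; a least-squares inverse stands in for the
# patent's reconstruction (a practical decoder would exploit signal structure,
# e.g., sparsity, to recover the trace itself).
reconstructed, *_ = np.linalg.lstsq(pattern, encoded, rcond=None)
print(np.allclose(pattern @ reconstructed, encoded))         # consistent with the encoding
```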
  • Patent number: 11954941
    Abstract: One example method includes accessing an appearance history of a person, the appearance history including information concerning an appearance of the person at a particular time, generating, based on the appearance history, a forecast that comprises a probability that the person will appear again at some future point in time, determining that the probability meets or exceeds a threshold, and updating a high probability group database to include a facial image of a face of the person.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: April 9, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Avitan Gefen, Amihai Savir
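A toy Python sketch of the flow in patent 11954941: forecast a probability of reappearance from the appearance history, compare it with a threshold, and update the high-probability group database with the person's facial image. The frequency-based forecast, the 7-day window, and the dict database are assumptions, not the patent's model.
```python
from datetime import datetime, timedelta

def appearance_probability(history, now, window_days=7):
    """Toy forecast: fraction of the last window_days days on which the person
    appeared; the patent's forecasting model is not specified here."""
    recent = {t.date() for t in history if now - t <= timedelta(days=window_days)}
    return len(recent) / window_days

def update_high_probability_group(person_id, face_image, history, db,
                                  now=None, threshold=0.5):
    """If the forecast probability meets or exceeds the threshold, add the
    person's facial image to the high-probability-group database (a dict here)."""
    now = now or datetime.now()
    p = appearance_probability(history, now)
    if p >= threshold:
        db[person_id] = face_image
    return p

db = {}
history = [datetime.now() - timedelta(days=d) for d in (0, 1, 2, 4)]
print(update_high_probability_group("person-42", b"<jpeg bytes>", history, db))
```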
  • Patent number: 11954917
    Abstract: The present disclosure provides a robust method of segmenting abnormal objects in complex autonomous driving scenes and a system thereof, and specifically relates to the technical field of image segmentation systems. The system includes: a segmentation module, configured to transmit an obtained input image to the segmentation network to obtain a segmentation prediction image, and then quantify the uncertainty of the segmentation prediction by calculating two different discrete metrics; a synthesis module, configured to match a generated data distribution with the data distribution of the input image by utilizing a conditional generative adversarial network; a difference module, configured to model and calculate the input image, a generated image, the semantic feature map and the uncertainty feature map based on an encoder, a fusion module and a decoder, to generate the segmentation prediction images for the abnormal objects; a model training module; and an integrated prediction module.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: April 9, 2024
    Assignee: Shandong Kailin Environmental Protection Equipment Co., Ltd.
    Inventors: Shouen Pang, Jichong Yang, Xiaoming Xi, Yang Ning, Longsheng Xu, Shixi Pang, Zhenxing Sun
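A short NumPy sketch of the uncertainty-quantification step in patent 11954917: computing two per-pixel uncertainty metrics from the segmentation prediction. Entropy and top-2 margin are illustrative choices; the abstract only says "two different discrete metrics."
```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_maps(logits):
    """Two per-pixel uncertainty metrics for a segmentation prediction."""
    probs = softmax(logits)                              # (H, W, C)
    entropy = -(probs * np.log(probs + 1e-12)).sum(-1)   # high = uncertain
    top2 = np.sort(probs, axis=-1)[..., -2:]
    margin = 1.0 - (top2[..., 1] - top2[..., 0])         # small class gap = uncertain
    return entropy, margin

logits = np.random.randn(64, 64, 19)      # 19-class street-scene prediction (toy)
entropy_map, margin_map = uncertainty_maps(logits)
print(entropy_map.shape, margin_map.shape)
```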
  • Patent number: 11955272
    Abstract: A method for generating an object detector based on deep learning capable of detecting an extended object class is provided. The method relates to generating the object detector based on deep learning so that both object classes that have already been trained and additional object classes can be detected. According to the method, it is possible to generate, at low cost and in a short time, the training data set necessary for training an object detector capable of detecting the extended object class, and further to generate the object detector itself at low cost and in a short time.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 9, 2024
    Assignee: SUPERB AI CO., LTD.
    Inventor: Kye Hyeon Kim
  • Patent number: 11944261
    Abstract: An electronic endoscope system includes an electronic endoscope, a processor that includes an evaluation unit, and a monitor. The evaluation unit includes an image evaluation value calculation unit that calculates, for each of a plurality of images of the living tissue, an image evaluation value indicating the intensity of a lesion in the living tissue, and a lesion evaluation unit that, for each of a plurality of sections into which a region of the organ is divided using information on the imaging position at which each image was captured, calculates a representative evaluation value from the image evaluation values of the images corresponding to that section and evaluates the extent of the lesion in the depth direction inside the organ using the representative evaluation values.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: April 2, 2024
    Assignee: HOYA CORPORATION
    Inventors: Ryohei Koizumi, Yousuke Ikemoto, Takao Makino
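A small Python sketch of the section-wise evaluation in patent 11944261: per-image lesion evaluation values are grouped by organ section, derived from each frame's imaging position, and a representative value is computed per section. Using the mean, five sections, and positions normalised to [0, 1] are assumptions for illustration.
```python
from collections import defaultdict
from statistics import mean

def section_evaluation(frames, n_sections=5, representative=mean):
    """Group image evaluation values by organ section and compute a
    representative evaluation value for each section."""
    per_section = defaultdict(list)
    for position, evaluation_value in frames:
        section = min(int(position * n_sections), n_sections - 1)
        per_section[section].append(evaluation_value)
    return {s: representative(v) for s, v in sorted(per_section.items())}

# (imaging position along the organ's depth direction, image evaluation value)
frames = [(0.05, 0.2), (0.12, 0.4), (0.45, 0.9), (0.52, 0.7), (0.93, 0.1)]
print(section_evaluation(frames))   # {0: 0.3, 2: 0.8, 4: 0.1}
```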
  • Patent number: 11948297
    Abstract: A racially unbiased mammogram analyzer includes an interface for receiving mammograms, a processor for extracting features of mammograms of the general population, and a processor for extracting features of mammograms of a specific race. In one embodiment, the general-population mammogram features are represented by the middle layers of a CNN and the race-specific features are represented by the end layer of the CNN. In one embodiment, the race-specific layers of the CNN change dynamically according to an explicitly given race indication. In one embodiment, the race-specific layers of the CNN change dynamically according to the race indication given by a race indication processor. In one embodiment, the race indications are computed by a network of parallel variational autoencoder networks. In one embodiment, the race indicator computes race-specific information for the CNN and is provided by variational autoencoders.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: April 2, 2024
    Assignee: MedCognetics, Inc.
    Inventors: Timothy Cogan, Richard Stubblefield, Lakshman Tamil
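A hedged PyTorch sketch of the architecture idea in patent 11948297: shared convolutional middle layers represent general-population features, while the end layer is selected dynamically from group-specific heads according to a race indication. Layer sizes, the two-group setup, and passing the indicator as an argument are assumptions, not the patent's design.
```python
import torch
import torch.nn as nn

class RaceAwareAnalyzer(nn.Module):
    def __init__(self, groups=("group_a", "group_b")):
        super().__init__()
        # Shared middle layers: general-population mammogram features.
        self.shared = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, 64), nn.ReLU(),
        )
        # Race-specific end layers, swapped in dynamically per the indication.
        self.heads = nn.ModuleDict({g: nn.Linear(64, 2) for g in groups})

    def forward(self, x, group):
        return self.heads[group](self.shared(x))

model = RaceAwareAnalyzer()
mammogram = torch.randn(1, 1, 128, 128)
logits = model(mammogram, group="group_a")   # the indication could instead come
print(logits.shape)                          # from a variational-autoencoder model
```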
  • Patent number: 11941357
    Abstract: Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing text similarity determination. Certain embodiments of the present invention utilize systems, methods, and computer program products that perform text similarity determination by using at least one of Word Mover's Similarity measures, Relaxed Word Mover's Similarity measures, and Related Relaxed Word Mover's Similarity measures.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: March 26, 2024
    Assignee: OPTUM TECHNOLOGY, INC.
    Inventors: Suman Roy, Amit Kumar, Sourabh Kumar Bhattacharjee, Shashi Kumar, William Scott Paka, Tanmoy Chakraborty
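A small NumPy sketch of the Relaxed Word Mover's measure family referenced in patent 11941357: each word in one document moves all of its weight to its nearest word in the other document, and the larger of the two relaxed costs lower-bounds the full Word Mover's Distance. The toy embeddings and the distance-to-similarity conversion are assumptions; the patent's exact measures are not reproduced.
```python
import numpy as np

def relaxed_wm_distance(weights_a, weights_b, emb_a, emb_b):
    """Relaxed Word Mover's Distance between two bag-of-words documents."""
    cost = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    a_to_b = (weights_a * cost.min(axis=1)).sum()   # each A word to its nearest B word
    b_to_a = (weights_b * cost.min(axis=0)).sum()   # and vice versa
    return max(a_to_b, b_to_a)

def relaxed_wm_similarity(*args):
    """One common way to turn the distance into a similarity in (0, 1]."""
    return 1.0 / (1.0 + relaxed_wm_distance(*args))

# Toy example: a 3-word and a 2-word document with random 50-d word embeddings.
rng = np.random.default_rng(1)
emb_a, emb_b = rng.normal(size=(3, 50)), rng.normal(size=(2, 50))
w_a, w_b = np.full(3, 1 / 3), np.full(2, 1 / 2)     # normalised word weights
print(relaxed_wm_similarity(w_a, w_b, emb_a, emb_b))
```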
  • Patent number: 11935388
    Abstract: The present invention discloses a method to deliver a reminder message. The method includes a step of triggering delivery of the reminder message upon detecting or sensing an event or activity that requires a reminder message, to prevent a person from forgetting or losing a personal item. In an exemplary embodiment, the step of sensing the event or activity includes detecting or sensing an activity of the person preparing to leave a place for a next destination.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: March 19, 2024
    Inventor: Bo-In Lin
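A minimal sketch of the trigger flow in patent 11935388: when an activity such as preparing to leave is sensed, a reminder is delivered if a personal item would be left behind. The activity label, item lists, and return value are assumptions; the sensing itself is outside this sketch.
```python
def on_activity(activity, carried_items, required_items):
    """Trigger a reminder message on a reminder-required event or activity."""
    if activity == "preparing_to_leave":
        missing = set(required_items) - set(carried_items)
        if missing:
            return f"Reminder: don't forget {', '.join(sorted(missing))}"
    return None

print(on_activity("preparing_to_leave", ["phone"], ["phone", "keys", "wallet"]))
```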
  • Patent number: 11935296
    Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: March 19, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Young Moon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
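A NumPy sketch of the three-stage pipeline in patent 11935296: chunk-level features are extracted from the video chunk sequence, filtered, and then classified into an action class. Mean pooling over fixed-size chunks, threshold-based filtering, and a linear classifier are assumptions standing in for the patent's learned modules.
```python
import numpy as np

def chunk_features(frames, chunk_size=8):
    """Chunk-level features: pool per-frame features over fixed-size chunks."""
    n_chunks = len(frames) // chunk_size
    return np.stack([frames[i * chunk_size:(i + 1) * chunk_size].mean(axis=0)
                     for i in range(n_chunks)])

def filter_chunks(chunk_feats, relevance_scores, threshold=0.5):
    """Filtering unit: suppress chunks judged irrelevant to the ongoing action."""
    mask = (relevance_scores >= threshold).astype(float)[:, None]
    return chunk_feats * mask

def classify(chunk_feats, weights, bias):
    """Action classification on the filtered chunk-level features."""
    return (chunk_feats @ weights + bias).argmax(axis=-1)

rng = np.random.default_rng(0)
frames = rng.normal(size=(64, 256))                 # 64 streamed frames, 256-d features
chunks = chunk_features(frames)                     # (8, 256)
filtered = filter_chunks(chunks, rng.random(len(chunks)))
print(classify(filtered, rng.normal(size=(256, 10)), np.zeros(10)))
```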
  • Patent number: 11935307
    Abstract: A vehicle control automatically distinguishes between a moving body and a stationary body, reduces the user's operation steps, and reduces burdens to shorten the time needed for a parking process. Obstruction points are grouped so as to be divided between obstructions, the coloring of moving and stationary bodies is changed for each obstruction, and it is determined whether there is an obstruction for which the coloring has not been changed. If there is such an obstruction, whether there is license plate information and whether the obstruction is a moving body or a stationary body are determined, and a moving body is changed to red and a stationary body to blue. A display device displays the obstruction information distinguished between stationary and moving objects, and a message to the user such as "obstruction stored" notifies the user that the distinction of the obstruction types is complete.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: March 19, 2024
    Assignee: HITACHI AUTOMOTIVE SYSTEMS, LTD.
    Inventors: Koichiro Ozaki, Takashi Tsutsui
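A toy Python sketch of the coloring pass described in patent 11935307: grouped obstructions that have not yet been colored are classified as moving or stationary and colored red or blue accordingly. The dict-based obstruction records and the motion flag are assumptions; license-plate handling and the display step are omitted.
```python
def color_obstructions(obstructions):
    """Color moving bodies red and stationary bodies blue, skipping
    obstructions whose coloring has already been changed."""
    for obs in obstructions:
        if obs.get("color") is not None:
            continue                           # coloring already changed
        obs["color"] = "red" if obs.get("is_moving", False) else "blue"
    return obstructions

obstructions = [
    {"id": 1, "points": [(1.0, 2.0), (1.2, 2.1)], "is_moving": True,
     "license_plate": "ABC-123", "color": None},
    {"id": 2, "points": [(5.0, 0.5)], "is_moving": False, "color": None},
]
for obs in color_obstructions(obstructions):
    print(obs["id"], obs["color"])             # 1 red, 2 blue -> "obstruction stored"
```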
  • Patent number: 11926319
    Abstract: A driving monitoring device (100) decides an applicable collision pattern, that is, the collision pattern that applies to a case where a vehicle (200) collides with a mobile object, based on a velocity vector of the vehicle, a velocity vector of the mobile object, and so on. Subsequently, the driving monitoring device calculates a time until collision, which is the time taken until the vehicle collides with the mobile object, in the applicable collision pattern. Then, the driving monitoring device calculates a danger level of an accident in which the vehicle collides with the mobile object, based on the applicable collision pattern and the time until collision.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: March 12, 2024
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Haruo Nakata, Masahiko Tanimoto, Yosuke Ishiwatari, Masahiro Abukawa
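A small NumPy sketch of the quantities used in patent 11926319: a time until collision derived from the two velocity vectors and a danger level that grows as that time shrinks. The closest-approach formula and the per-pattern weight are illustrative assumptions; the patent's collision patterns themselves are not modelled.
```python
import numpy as np

def time_until_collision(p_vehicle, v_vehicle, p_mobile, v_mobile):
    """Time of closest approach between the vehicle and the mobile object,
    from their positions and velocity vectors."""
    dp, dv = p_mobile - p_vehicle, v_mobile - v_vehicle
    if np.dot(dv, dv) < 1e-9:
        return np.inf                       # no relative motion
    return max(-np.dot(dp, dv) / np.dot(dv, dv), 0.0)

def danger_level(ttc, pattern_weight=1.0):
    """Illustrative danger level: shorter time until collision means higher
    danger, scaled by a weight for the applicable collision pattern."""
    return pattern_weight / (1.0 + ttc)

p_v, v_v = np.array([0.0, 0.0]), np.array([10.0, 0.0])    # vehicle heading east at 10 m/s
p_m, v_m = np.array([50.0, 5.0]), np.array([0.0, -1.0])   # crossing mobile object
ttc = time_until_collision(p_v, v_v, p_m, v_m)
print(round(ttc, 2), round(danger_level(ttc, pattern_weight=2.0), 3))   # 5.0 0.333
```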
  • Patent number: 11921819
    Abstract: A defense method against adversarial examples based on feature remapping includes the following steps: building a feature remapping model composed of a significant feature generation model, a nonsignificant feature generation model, and a shared discriminant model; the significant feature generation model is used to generate significant features, the nonsignificant feature generation model is used to generate nonsignificant features, and the shared discriminant model is used to discriminate whether the generated significant and nonsignificant features are true or fake.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: March 5, 2024
    Assignee: ZHEJIANG UNIVERSITY OF TECHNOLOGY
    Inventors: Jinyin Chen, Haibin Zheng, Longyuan Zhang, Xueke Wang
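A hedged PyTorch sketch of the three components named in patent 11921819: one generator for significant features, one for nonsignificant features, and a shared discriminator judging whether generated features are true or fake. Network sizes and the MLP form are assumptions; training losses are omitted.
```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

feat_dim = 256
significant_gen = mlp(feat_dim, feat_dim)        # generates significant features
nonsignificant_gen = mlp(feat_dim, feat_dim)     # generates nonsignificant features
shared_discriminator = nn.Sequential(            # judges true vs. fake features
    nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

features = torch.randn(4, feat_dim)              # features of a (possibly adversarial) input
p_real_sig = shared_discriminator(significant_gen(features))
p_real_nonsig = shared_discriminator(nonsignificant_gen(features))
print(p_real_sig.shape, p_real_nonsig.shape)     # torch.Size([4, 1]) each
```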
  • Patent number: 11922727
    Abstract: Disclosed herein are methods, apparatus, and systems for iris recognition. A method includes acquiring at least two angularly differentiated iris images from a subject needing access, processing each of the at least two angularly differentiated iris images to generate at least one boundary delineated image from one of the at least two angularly differentiated iris images, applying image comparative analysis to the at least two angularly differentiated iris images to generate a boundary delineated image when the processing fails to produce the at least one boundary delineated image, segmenting and encoding one of the at least one boundary delineated image or the boundary delineated image to generate at least one iris template, matching the at least one iris template against an enrolled iris, and accepting the subject for access processing when the at least one iris template matches the enrolled iris.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: March 5, 2024
    Assignee: Princeton Identity
    Inventors: John Timothy Green, David Alan Ackerman, Jean-Michel Florent
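A NumPy sketch of the final matching/acceptance step in patent 11922727: binary iris templates are compared with a fractional Hamming distance, and the subject is accepted if any probe template matches the enrolled iris. The 0.32 threshold is a commonly cited value and an assumption here; segmentation and encoding are not shown.
```python
import numpy as np

def hamming_distance(template_a, template_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between binary iris templates; masks can
    exclude occluded bits."""
    valid = np.ones_like(template_a, dtype=bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    return np.count_nonzero(template_a[valid] != template_b[valid]) / valid.sum()

def accept_for_access(probe_templates, enrolled_template, threshold=0.32):
    """Accept the subject if any angularly differentiated probe template
    matches the enrolled iris."""
    return any(hamming_distance(t, enrolled_template) < threshold
               for t in probe_templates)

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048).astype(bool)
genuine = enrolled.copy()
genuine[:100] ^= True                                  # ~5% bit flips, same iris
impostor = rng.integers(0, 2, 2048).astype(bool)
print(accept_for_access([genuine], enrolled), accept_for_access([impostor], enrolled))
```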
  • Patent number: 11910784
    Abstract: An animal management system has one or more imaging devices, and a computing device coupled to the one or more imaging devices for receiving one or more images captured by the one or more imaging devices, processing at least one image using an artificial intelligence (AI) pipeline for: (i) detecting and locating in the image one or more animals, (ii) for each detected animal: (a) generating at least one section of the detected animal, (b) determining a plurality of key points in each section, (c) generating an embedding for each section based on the plurality of key points in the section, and (d) combining the embeddings for generating an identification of the detected animal with a confidence score. Key points and bounding boxes may also have associated confidence scores.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: February 27, 2024
    Inventors: Jeffrey Shmigelsky, Mocha Shmigelsky, Madison Lovett, Philip Cho
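A toy NumPy sketch of steps (b)–(d) of the pipeline in patent 11910784: an embedding per body section is built from that section's key points, the section embeddings are combined, and the closest enrolled animal is returned with a confidence score. The centred-keypoint embedding, concatenation, and distance-based confidence are assumptions standing in for the patent's learned models.
```python
import numpy as np

def section_embedding(keypoints):
    """Embed one body section from its key points (centre and flatten;
    a learned embedding network would be used in practice)."""
    pts = np.asarray(keypoints, dtype=float)
    return (pts - pts.mean(axis=0)).ravel()

def identify(detected_sections, gallery):
    """Combine per-section embeddings and return the closest known animal
    together with a confidence score."""
    query = np.concatenate([section_embedding(k) for k in detected_sections])
    dists = {animal: np.linalg.norm(query - emb) for animal, emb in gallery.items()}
    best = min(dists, key=dists.get)
    return best, 1.0 / (1.0 + dists[best])

head = [(0.1, 0.2), (0.3, 0.2), (0.2, 0.4)]               # key points, head section
body = [(0.5, 0.5), (0.9, 0.5), (0.7, 0.8), (0.6, 0.3)]   # key points, body section
query_sections = [head, body]

rng = np.random.default_rng(0)
dim = sum(len(s) * 2 for s in query_sections)
gallery = {"cow-17": rng.normal(size=dim), "cow-23": rng.normal(size=dim)}
print(identify(query_sections, gallery))
```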
  • Patent number: 11915415
    Abstract: Embodiments of this application include an image processing method and apparatus, a non-transitory computer-readable storage medium, and an electronic device. In the image processing method, a to-be-predicted medical image is input into a multi-task deep convolutional neural network model. The multi-task deep convolutional neural network model includes an image input layer, a shared layer, and n parallel task output layers. One or more lesion property prediction results of the to-be-predicted medical image are output through one or more of the n task output layers. The multi-task deep convolutional neural network model is trained with n types of medical image training sets, n being a positive integer greater than or equal to 2.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: February 27, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hong Shang, Zhongqian Sun, Xinghui Fu, Wei Yang
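A hedged PyTorch sketch of the structure named in patent 11915415: an image input, a shared layer, and n parallel task output layers, one lesion-property prediction per task. The layer sizes, n = 3, and the class counts are illustrative assumptions.
```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_tasks=3, classes_per_task=(2, 2, 4)):
        super().__init__()
        # Shared layer applied to every input image.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
        )
        # n parallel task output layers.
        self.task_heads = nn.ModuleList(
            [nn.Linear(64, c) for c in classes_per_task[:n_tasks]])

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.task_heads]   # one prediction per task

model = MultiTaskNet()
image = torch.randn(1, 3, 224, 224)                    # to-be-predicted medical image
for task_id, logits in enumerate(model(image)):
    print(task_id, logits.shape)
```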
  • Patent number: 11897453
    Abstract: An automated parking system for a vehicle includes a camera configured to obtain images of objects proximate the vehicle, and a controller configured to review the obtained images to classify an environment proximate the vehicle, determine a type of parking lot associated with the classified environment, and initiate an automated parking function of the vehicle corresponding to the determined type of parking lot.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: February 13, 2024
    Assignee: Continental Automotive Systems, Inc.
    Inventors: Julien Ip, Eduardo Jose Ramirez Llanos, Xin Yu, Kyle Carpenter
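A minimal sketch of the decision chain in patent 11897453: the classified environment is mapped to a parking-lot type, which selects the automated parking function to initiate. The environment labels, lot types, and function names are hypothetical placeholders.
```python
# Hypothetical mapping from classified environment to parking-lot type.
PARKING_LOT_TYPE = {"shopping_mall": "perpendicular", "city_street": "parallel",
                    "parking_garage": "perpendicular"}

def initiate_parking(classified_environment):
    """Pick the parking-lot type for the classified environment and return the
    automated parking function to initiate."""
    lot_type = PARKING_LOT_TYPE.get(classified_environment, "parallel")
    return f"start_{lot_type}_parking"

print(initiate_parking("city_street"))   # start_parallel_parking
```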
  • Patent number: 11890900
    Abstract: The transmitter is arranged in a tire attached to a wheel and configured to transmit data to a receiver. The transmitter includes an obtaining section configured to obtain a detection result of the sensor, a generating section configured to generate the data including the detection result of the sensor, a transmitting section configured to transmit the data generated by the generating section, and an organic power generation element that is a power source of the transmitter. The organic power generation element is configured to generate power through a chemical reaction with organic matter contained in a fuel solution accommodated in the tire.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: February 6, 2024
    Assignee: PACIFIC INDUSTRIAL CO., LTD.
    Inventors: Akira Momose, Yasuhisa Tsujita
  • Patent number: 11894125
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a fundus image processing machine learning model that is configured to process one or more fundus images captured by a fundus camera to generate a predicted label. One of the methods includes generating training data, comprising: receiving sets of one or more training fundus images captured by a fundus camera; receiving, for each of the sets, a ground truth label assigned to a different image of the eye of the patient corresponding to the set, the different image having been captured using a different imaging modality; and generating, for each set of training fundus images, a training example that includes the set of training fundus images in association with the ground truth label assigned to the different image of the patient's eye; and training the machine learning model on the training examples in the training data.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: February 6, 2024
    Assignee: Google LLC
    Inventors: Lily Hao Yi Peng, Dale R. Webster, Avinash Vaidyanathan Varadarajan, Pinal Bavishi
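A short Python sketch of the training-data generation step in patent 11894125: each set of fundus images is paired with the ground-truth label assigned to a different image of the same patient's eye captured with a different modality. The dict layout, patient keys, and label names are assumptions for illustration.
```python
def build_training_examples(fundus_sets, cross_modality_labels):
    """Pair each set of training fundus images with the ground-truth label
    graded on an image from a different imaging modality."""
    examples = []
    for patient_id, fundus_images in fundus_sets.items():
        label = cross_modality_labels.get(patient_id)
        if label is None:
            continue                      # no cross-modality ground truth available
        examples.append({"images": fundus_images, "label": label})
    return examples

fundus_sets = {"patient-1": ["p1_left.png", "p1_right.png"],
               "patient-2": ["p2_left.png"]}
cross_modality_labels = {"patient-1": "referable",      # e.g., graded on OCT, not fundus
                         "patient-2": "non_referable"}
print(len(build_training_examples(fundus_sets, cross_modality_labels)))   # 2
```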
  • Patent number: 11887349
    Abstract: An image acquisition unit (2) acquires an image obtained by capturing at least eyes of a target person. An object specifying unit (4) specifies an object in a line-of-sight direction of the target person using the acquired image. An incident light amount calculation unit (6) calculates an incident light amount representing an amount of light incident on the eyes of the target person using the acquired image. A reference pupil size determination unit (8) determines a reference pupil size based on the calculated incident light amount. A pupil size calculation unit (10) calculates a pupil size of the target person using the acquired image. An interest determination unit (12) determines an interest of the target person in the object by comparing the determined reference pupil size with the calculated pupil size.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: January 30, 2024
    Assignee: NEC CORPORATION
    Inventors: Masato Tsukada, Hiroshi Imai, Chisato Funayama, Yuka Ogino, Ryuichi Akashi, Keiichi Chono, Emi Inui, Yasuhiko Yoshida, Hiroshi Yamada, Shoji Yachida, Takashi Shibata
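A toy Python sketch of the comparison at the heart of patent 11887349: a reference pupil size is determined from the incident light amount, and interest in the object on the line of sight is judged by comparing the measured pupil size against that reference. The light-to-size curve and the dilation margin are assumptions standing in for the patent's determination units.
```python
def reference_pupil_size(incident_light):
    """Reference pupil diameter (mm) for a given incident light amount
    (a simple decreasing curve used only for illustration)."""
    return 2.0 + 6.0 / (1.0 + incident_light)

def interest_in_object(measured_pupil_mm, incident_light, dilation_margin=0.5):
    """Judge interest: the measured pupil is clearly larger than the
    light-based reference size, i.e., dilation the light does not explain."""
    return measured_pupil_mm - reference_pupil_size(incident_light) > dilation_margin

print(interest_in_object(measured_pupil_mm=5.8, incident_light=1.0))   # True
print(interest_in_object(measured_pupil_mm=4.0, incident_light=1.0))   # False
```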