Patents Examined by Leon-Viet Q. Nguyen
  • Patent number: 11466984
    Abstract: An ECU includes a memory containing computer executable instructions for monitoring the condition of a ground engaging tool, and a processor coupled to the memory and configured to execute the computer executable instructions, which, when executed, cause the processor to: acquire an image of the ground engaging tool; evaluate the image using an algorithm that compares the acquired image to a database of existing images to determine damage to, the amount of wear of, or the absence of the ground engaging tool; and grade the quality of the acquired image.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: October 11, 2022
    Assignee: Caterpillar Inc.
    Inventors: John Michael Plouzek, Mitchell Chase Vlaminck, Nolan S. Finch
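The abstract above describes comparing an acquired image of a ground engaging tool against a database of reference images and grading the capture quality. Below is a minimal illustrative sketch of those two ideas in Python; the reference labels, the mean-absolute-difference comparison, and the Laplacian-variance sharpness grade are assumptions, not the patented algorithm.

```python
# Illustrative sketch only: compare an acquired ground-engaging-tool image
# against labeled reference images and grade the capture by a crude sharpness
# score. Labels, metrics, and thresholds are hypothetical.
import numpy as np

def grade_image_quality(image: np.ndarray) -> float:
    """Crude sharpness grade: variance of a discrete Laplacian of the image."""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
           np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return float(lap.var())

def closest_reference(image: np.ndarray, references: dict[str, np.ndarray]) -> str:
    """Return the label ('worn', 'damaged', 'missing', ...) of the closest reference."""
    return min(references, key=lambda k: np.abs(image - references[k]).mean())

ref = {"worn": np.zeros((4, 4)), "intact": np.ones((4, 4))}
print(closest_reference(np.full((4, 4), 0.9), ref))  # -> "intact"
```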
  • Patent number: 11468698
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
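The abstract above describes cameras reporting per-pixel candidate actors with confidences, and a server weighing those reports by view quality to select one actor. The sketch below illustrates only that server-side weighting idea; the report format, quality weights, and example values are hypothetical.

```python
# Illustrative sketch of the server-side aggregation idea: each camera reports
# candidate actors with a confidence, the server weights the votes by an
# image-quality score per camera and picks the top actor. Names and weighting
# choices are hypothetical.
from collections import defaultdict

def select_actor(camera_reports):
    """camera_reports: list of (image_quality, [(actor_id, confidence), ...])."""
    scores = defaultdict(float)
    for quality, candidates in camera_reports:
        for actor_id, confidence in candidates:
            scores[actor_id] += quality * confidence
    return max(scores, key=scores.get)

# Example: two cameras, the second with a better view of the event.
print(select_actor([(0.4, [("A", 0.7), ("B", 0.3)]),
                    (0.9, [("B", 0.8)])]))  # -> "B"
```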
  • Patent number: 11468663
    Abstract: There is provided a method for generating a personalized Head Related Transfer Function (HRTF). The method can include capturing an image of an ear using a portable device, auto-scaling the captured image to determine physical geometries of the ear and obtaining a personalized HRTF based on the determined physical geometries of the ear.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: October 11, 2022
    Assignee: Creative Technology Ltd
    Inventors: Teck Chee Lee, Christopher Tjiongan, Desmond Hii, Geith Mark Benjamin Leslie
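The abstract above outlines a three-step pipeline: capture an ear image, auto-scale it to physical geometry, and obtain a personalized HRTF from that geometry. The following sketch illustrates the general flow under simple assumptions (scaling against a reference object of known size and a lookup table keyed by ear length); it is not the patented method.

```python
# Illustrative sketch of the pipeline in the abstract: auto-scale an ear image
# to physical units using a reference of known size, then pick the closest
# entry from a (hypothetical) table of pre-computed HRTFs keyed by ear length.
def pixels_to_mm(pixel_length: float, ref_pixels: float, ref_mm: float) -> float:
    """Convert a measured pixel length to millimetres using a known reference."""
    return pixel_length * ref_mm / ref_pixels

def pick_hrtf(ear_length_mm: float, hrtf_table: dict[float, str]) -> str:
    """Return the HRTF entry whose key is closest to the measured ear length."""
    return hrtf_table[min(hrtf_table, key=lambda k: abs(k - ear_length_mm))]

hrtf_table = {55.0: "hrtf_small.sofa", 62.0: "hrtf_medium.sofa", 68.0: "hrtf_large.sofa"}
ear_mm = pixels_to_mm(pixel_length=310, ref_pixels=500, ref_mm=100)  # -> 62.0 mm
print(pick_hrtf(ear_mm, hrtf_table))  # -> "hrtf_medium.sofa"
```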
  • Patent number: 11468681
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the sets of vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Jaechul Kim, Kushagra Srivastava, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11462010
    Abstract: The various embodiments of the present invention relate to an electronic apparatus and a method for controlling the same. The electronic apparatus according to the present invention comprises a display, a communication module, and a processor electrically connected to the display and the communication module, wherein the processor is configured to recognize an object detected in an image displayed on the display, to classify one or more elements contained in the object and display them on the display, and to search, by means of the communication module, for an image similar to the detected object and display it on the display.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: October 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Boram Lee, Jung-Kun Lee, Yeseul Hong, Jieun Lee
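The abstract above includes searching for an image similar to a detected object. As a rough illustration of such a similarity search, the sketch below compares a simple color-histogram feature against a small gallery; the feature choice, gallery, and file names are hypothetical stand-ins for the networked search described.

```python
# Illustrative sketch only: summarise the detected object by a colour histogram
# and return the most similar gallery image by Euclidean distance.
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """image: float RGB array in [0, 1] with shape (H, W, 3)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 1), (0, 1), (0, 1)))
    return hist.ravel() / hist.sum()

def most_similar(query: np.ndarray, gallery: dict[str, np.ndarray]) -> str:
    q = colour_histogram(query)
    return min(gallery, key=lambda k: np.linalg.norm(colour_histogram(gallery[k]) - q))

rng = np.random.default_rng(0)
gallery = {"shoe.png": rng.random((32, 32, 3)), "bag.png": rng.random((32, 32, 3))}
print(most_similar(rng.random((32, 32, 3)), gallery))
```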
  • Patent number: 11443832
    Abstract: The present disclosure provides methods, systems, and computer program products that use deep learning models to classify candidate mutations detected in sequencing data, particularly suboptimal sequencing data. The methods, systems, and programs provide for increased efficiency, accuracy, and speed in identifying mutations from a wide range of sequencing data.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: September 13, 2022
    Assignee: NVIDIA Corporation
    Inventors: Johnny Israeli, Avantika Lal, Michael Vella, Nikolai Yakovenko, Zhen Hu
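The abstract above describes classifying candidate mutations detected in sequencing data with deep learning models. The toy sketch below only illustrates the idea of scoring a candidate from pileup-derived features; the logistic model and its weights are invented stand-ins, not the patented networks.

```python
# Illustrative sketch only: a toy classifier that scores candidate mutations
# from simple pileup-derived features (read depth, fraction of reads supporting
# the variant, mean base quality). The weights are made up.
import math

def score_candidate(depth: int, alt_fraction: float, mean_base_quality: float) -> float:
    """Return a probability-like score that the candidate is a true mutation."""
    z = -4.0 + 0.02 * depth + 6.0 * alt_fraction + 0.05 * mean_base_quality
    return 1.0 / (1.0 + math.exp(-z))

print(round(score_candidate(depth=80, alt_fraction=0.45, mean_base_quality=32), 3))
```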
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
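The abstract above describes a two-phase training process: first train a light mask decoder on low dynamic range images, then adjust its parameters on high dynamic range images to obtain the intensity decoder. The sketch below mimics that phase structure with a toy linear regressor and synthetic data; it is not the patented network or training procedure.

```python
# Illustrative sketch of the two-phase idea: fit a simple regressor on
# low-dynamic-range targets first, then continue fitting the same parameters on
# high-dynamic-range targets. The linear model and data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def fit(weights, features, targets, lr=0.01, steps=200):
    """Plain gradient descent on a least-squares objective."""
    for _ in range(steps):
        grad = features.T @ (features @ weights - targets) / len(targets)
        weights -= lr * grad
    return weights

features = rng.normal(size=(256, 8))
ldr_targets = np.clip(features @ rng.normal(size=8), 0, 1)   # phase-1 targets
hdr_targets = np.exp(features @ rng.normal(size=8) * 0.1)    # phase-2 targets

w = np.zeros(8)
w = fit(w, features, ldr_targets)   # phase 1: "light mask decoder"
w = fit(w, features, hdr_targets)   # phase 2: fine-tune into "intensity decoder"
```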
  • Patent number: 11443521
    Abstract: A method is provided for evaluating images and, in particular, for evaluating correspondence hypotheses between images. The method includes (i) providing a hypothesis matrix of correspondence hypotheses between first and second images, each given as a corresponding image matrix, (ii) evaluating the hypothesis matrix and conditionally verifying the image correspondence hypotheses, and (iii) providing verified image correspondence hypotheses in a correspondence matrix of image correspondences as the evaluation result. The hypothesis matrix is evaluated, for at least one component of the correspondence hypotheses, by forming and evaluating a histogram of that component's values over the elements of the hypothesis matrix.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: September 13, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Fernando Ortiz Cuesta, Stephan Simon, Arne Zender
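The abstract above describes evaluating a hypothesis matrix by building a histogram over one component of the correspondence hypotheses. The sketch below illustrates one plausible reading of that step, keeping hypotheses whose component falls in well-populated histogram bins; the bin width, threshold, and test data are assumptions.

```python
# Illustrative sketch only: histogram one component of the correspondence
# hypotheses (here a horizontal displacement) and keep hypotheses that fall
# into sufficiently populated bins. Bin width and threshold are hypothetical.
import numpy as np

def verify_hypotheses(dx: np.ndarray, bin_width: float = 1.0, min_count: int = 50) -> np.ndarray:
    """dx: per-element horizontal-displacement hypotheses. Returns a boolean mask."""
    bins = np.floor(dx / bin_width).astype(int)
    shifted = bins - bins.min()
    counts = np.bincount(shifted.ravel())
    return counts[shifted] >= min_count

rng = np.random.default_rng(0)
dx = np.where(rng.random((100, 100)) < 0.9, 3.2, rng.uniform(-20, 20, (100, 100)))
print(verify_hypotheses(dx).mean())  # most hypotheses near 3.2 survive
```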
  • Patent number: 11443451
    Abstract: A vehicular positioning system utilizes multiple optical cameras having contiguous fields of view to read coded markers with pre-determined positions, determining the position of a vehicle inside a structure with a high degree of accuracy. The vehicle positioning system provides for the direct installation and use of a positioning apparatus on a vehicle, using a limited number of coded markers to determine the vehicle's position to within millimeter-level accuracy.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: September 13, 2022
    Assignee: Topcon Positioning Systems, Inc.
    Inventors: Dmitry Vitalievich Tatarnikov, Leonid Valerianovich Edelman, Aleksandr Aleksandrovich Pimenov, Michail Nikolaevich Smirnov, Nikolay Aleksandrovich Penkrat
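The abstract above describes determining a vehicle's position inside a structure from coded markers with pre-determined positions. The sketch below shows the simplest form of that idea, assuming the markers' world coordinates are known, their positions are measured in the vehicle frame, and the heading is already aligned; real systems also estimate orientation, and the numbers here are made up.

```python
# Illustrative sketch only: with coded markers at known world coordinates and
# the same markers measured in the vehicle frame (heading assumed aligned for
# brevity), the vehicle position is the average offset between the two sets.
import numpy as np

marker_world = np.array([[10.0, 2.0], [14.0, 2.0], [14.0, 6.0]])          # known map (m)
marker_measured = np.array([[4.98, -1.01], [9.0, -1.0], [9.02, 2.99]])    # vehicle frame (m)

vehicle_position = (marker_world - marker_measured).mean(axis=0)
print(vehicle_position)  # approx [5.0, 3.0]
```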
  • Patent number: 11443141
    Abstract: A method, system and computer-usable medium are disclosed for tracking selected points in a series of images to determine the motions a subject makes to perform an action, in order to train a system such as a machine or robot. A series of images is received depicting incremental steps of the subject performing the action. Selected points that are useful for tracking the subject performing the action are identified. Datasets of points used to train a model are mapped, and the model is trained using the mapped datasets of points.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: September 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Justin Eyster, Al Chakra, Aniruddh Jhavar, Patrick Morrison, Gagandeep Rajpal
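The abstract above describes mapping datasets of tracked points and training a model on them. The sketch below illustrates only the mapping step, turning per-frame keypoint positions into consecutive-pose training pairs; the keypoint names and coordinates are made up.

```python
# Illustrative sketch of the "map datasets of points" step: tracked keypoint
# positions from a series of images become (current pose, next pose) training
# pairs describing the incremental motion of the demonstrated action.
def map_point_datasets(frames_of_points):
    """frames_of_points: list of dicts {keypoint_name: (x, y)} per frame."""
    return [(current, following)
            for current, following in zip(frames_of_points, frames_of_points[1:])]

frames = [{"wrist": (10, 40), "elbow": (30, 60)},
          {"wrist": (12, 38), "elbow": (30, 59)},
          {"wrist": (15, 35), "elbow": (31, 58)}]
print(len(map_point_datasets(frames)))  # 2 training pairs
```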
  • Patent number: 11430263
    Abstract: A method is described that includes receiving raw image data corresponding to a series of raw images, and processing the raw image data with an encoder of a processing device to generate encoded data. The encoder is characterized by an input/output transformation that substantially mimics the input/output transformation of at least one retinal cell of a vertebrate retina. The method also includes processing the encoded data to generate dimension reduced encoded data by applying a dimension reduction algorithm to the encoded data. The dimension reduction algorithm is configured to compress an amount of information contained in the encoded data. An apparatus and system usable with such a method are also described.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 30, 2022
    Assignee: CORNELL UNIVERSITY
    Inventor: Sheila Nirenberg
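The abstract above describes a retina-mimicking encoder followed by a dimension reduction algorithm that compresses the encoded data. The sketch below uses a crude stand-in encoder (rectified frame differences) and PCA via an SVD to illustrate the two stages; neither is the patented retinal encoder.

```python
# Illustrative sketch: a stand-in "retina-like" encoder (rectified temporal
# differences of frames) followed by dimension reduction via PCA computed with
# an SVD. Shapes and the nonlinearity are hypothetical.
import numpy as np

def encode(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W). Returns (T-1, H*W) rectified frame differences."""
    diffs = np.maximum(np.diff(frames, axis=0), 0.0)
    return diffs.reshape(len(diffs), -1)

def reduce_dimension(encoded: np.ndarray, k: int = 16) -> np.ndarray:
    """Project the encoded data onto its top-k principal components."""
    centered = encoded - encoded.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

frames = np.random.default_rng(1).random((30, 32, 32))
compact = reduce_dimension(encode(frames))
print(compact.shape)  # (29, 16)
```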
  • Patent number: 11430205
    Abstract: A method and an apparatus for detecting a salient object in an image include: separately performing convolution processing corresponding to at least two convolutional layers on a to-be-processed image to obtain at least two first feature maps of the to-be-processed image; performing superposition processing on the at least two first feature maps included in a superposition set among at least two sets to obtain at least two second feature maps of the to-be-processed image, where the at least two sets are in a one-to-one correspondence with the at least two second feature maps and a resolution of a first feature map included in the superposition set is lower than or equal to a resolution of the second feature map corresponding to the superposition set; and splicing the at least two second feature maps to obtain a saliency map.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 30, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qibin Hou, Mingming Cheng, Wei Bai, Xunyi Zhou
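The abstract above describes superposing first feature maps of different resolutions into second feature maps and splicing those to obtain a saliency map. The sketch below illustrates that combination scheme with random stand-in maps, nearest-neighbour upsampling, and stacking; it is not the patented network.

```python
# Illustrative sketch only: lower-resolution feature maps are upsampled and
# superposed onto a higher-resolution map, and the resulting "second" maps are
# spliced together. The maps below are random stand-ins.
import numpy as np

def upsample(feature_map: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling by an integer factor."""
    return np.kron(feature_map, np.ones((factor, factor)))

rng = np.random.default_rng(0)
f_full = rng.random((64, 64))      # first feature map at full resolution
f_half = rng.random((32, 32))      # first feature map at half resolution

second_a = f_full + upsample(f_half, 2)    # superposition set -> one second map
second_b = upsample(f_half, 2)             # another set with a single member
spliced = np.stack([second_a, second_b])   # "spliced" maps, shape (2, 64, 64)
print(spliced.shape)
```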
  • Patent number: 11416705
    Abstract: An image acquisition unit acquires image data in which an image of a normal monitoring target is captured. An image processing unit generates a plurality of duplicate image data pieces by applying, to the image data, different image processing that changes the color tone within a range not exceeding the normal range of the monitoring target. A learning unit trains a model to output a value used for determining normality of the monitoring target from image data in which an image of the monitoring target is captured, using the plurality of duplicate image data pieces as training data.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: August 16, 2022
    Assignee: MITSUBISHI HEAVY INDUSTRIES, LTD.
    Inventors: Masumi Nomura, Koki Tateishi, Motoshi Takasu
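The abstract above describes generating duplicate images by color-tone changes that stay within the monitoring target's normal range. The sketch below illustrates that augmentation with bounded per-channel gains; the ±5% bound and copy count are assumptions.

```python
# Illustrative sketch of the augmentation described in the abstract: duplicate
# images are generated by small, bounded per-channel gain changes so the copies
# stay within a "normal" appearance range. The bound is hypothetical.
import numpy as np

def duplicate_with_color_shift(image: np.ndarray, n_copies: int = 8,
                               max_shift: float = 0.05, seed: int = 0) -> np.ndarray:
    """image: float array in [0, 1] with shape (H, W, 3). Returns (n_copies, H, W, 3)."""
    rng = np.random.default_rng(seed)
    gains = rng.uniform(1 - max_shift, 1 + max_shift, size=(n_copies, 1, 1, 3))
    return np.clip(image[None] * gains, 0.0, 1.0)

copies = duplicate_with_color_shift(np.random.default_rng(1).random((64, 64, 3)))
print(copies.shape)  # (8, 64, 64, 3)
```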
  • Patent number: 11408878
    Abstract: A method, computer system, and a computer program product for lifecycle prediction is provided. The present invention may include identifying a unit of a product that has been selected from a store bin. The present invention may include retrieving data from at least one connected Internet of Things (IoT) device and a connected infrared scanner. The present invention may include predicting a lifecycle of the identified unit. The present invention may include displaying the lifecycle on the at least one connected IoT device or the infrared scanner. The present invention may include determining whether the unit is added to a cart. The present invention may lastly include pushing the collected data to a point of sale for generating data at checkout.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: Raghuveer Prasad Nagar, Jagadesh Ramaswamy Hulugundi, Srikant Vitta
  • Patent number: 11410458
    Abstract: A method and device for face identification, and a mobile terminal and a storage medium are provided. The method includes: (101) an image sensor is controlled to perform imaging; (102) imaging data obtained by the image sensor through the imaging is acquired; and (103) liveness detection is performed on an imaging object based on the imaging data.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: August 9, 2022
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Haitao Zhou, Fangfang Hui, Ziqing Guo, Xiao Tan
  • Patent number: 11410325
    Abstract: An electronic apparatus and method for configuration of an audio reproduction system is provided. The electronic apparatus captures a set of stereo images of the listening environment and identifies a plurality of objects, including a display device and a plurality of audio devices, in the set of stereo images. The electronic apparatus estimates first location information of the plurality of audio devices and second location information of the display device. Based on the first location information and the second location information, the electronic apparatus identifies a layout of the plurality of audio devices. The electronic apparatus receives an audio signal from each audio device and determines a distance between each audio device of the plurality of audio devices and a user location based on the received audio signal. The electronic apparatus determines an anomaly in connection of at least one audio device and generates connection information based on the determined anomaly.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: August 9, 2022
    Assignee: SONY CORPORATION
    Inventors: Sharanappagouda Patil, Kaneaki Fujishita
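The abstract above includes determining the distance between each audio device and the user location from a received audio signal. The sketch below illustrates one common way such a distance can be estimated, from the arrival delay of a known test signal found by cross-correlation; the sample rate, signals, and delay are made-up stand-ins, not the patented procedure.

```python
# Illustrative sketch only: estimate speaker-to-listener distance from the
# arrival delay of a test signal, found by cross-correlating the received
# signal with the reference.
import numpy as np

def distance_from_delay(reference, received, sample_rate=48_000, speed_of_sound=343.0):
    """Return the distance implied by the peak cross-correlation lag."""
    corr = np.correlate(received, reference, mode="full")
    delay_samples = corr.argmax() - (len(reference) - 1)
    return max(delay_samples, 0) / sample_rate * speed_of_sound

rng = np.random.default_rng(0)
ref = rng.normal(size=4800)
recv = np.concatenate([np.zeros(420), ref]) + 0.01 * rng.normal(size=5220)
print(round(distance_from_delay(ref, recv), 2))  # ~3.0 m (420 samples at 48 kHz)
```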
  • Patent number: 11403813
    Abstract: Systems and methods for generating a three-dimensional (3D) model of a user's dental arch based on two-dimensional (2D) images include a model training system that receives data packets of a training set. Each data packet may include data corresponding to training images of a respective dental arch and a 3D training model of the respective dental arch. The model training system identifies, for a data packet, correlation points between the one or more training images and the 3D training model of the respective dental arch. The model training system generates a machine learning model using the correlation points for the data packets of the training set. A model generation system receives one or more images of a dental arch. The model generation system generates a 3D model of the dental arch by applying the images of the dental arch to the machine learning model.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 2, 2022
    Assignee: SDC U.S. SMILEPAY SPV
    Inventors: Jordan Katzman, Christopher Yancey, Tim Wucher
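The abstract above describes training data packets that pair 2D training images of a dental arch with a 3D training model, linked by correlation points. The sketch below only illustrates a plausible data layout for such packets; all field names and values are hypothetical.

```python
# Illustrative sketch of a possible training-data structure: each data packet
# pairs 2D training images of one dental arch with its 3D model, plus
# correlation points linking image pixels to model vertices.
from dataclasses import dataclass, field

@dataclass
class CorrelationPoint:
    image_index: int      # which training image the point was found in
    pixel_xy: tuple       # (x, y) location in that image
    vertex_id: int        # corresponding vertex in the 3D training model

@dataclass
class DataPacket:
    images: list          # 2D training images of one dental arch
    model_vertices: list  # vertices of the 3D training model
    correlation_points: list = field(default_factory=list)

packet = DataPacket(images=["arch_front.png", "arch_left.png"],
                    model_vertices=[(0.0, 0.0, 0.0), (1.2, 0.4, 0.1)])
packet.correlation_points.append(CorrelationPoint(0, (412, 230), 1))
print(len(packet.correlation_points))  # 1
```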
  • Patent number: 11397872
    Abstract: A method of expanding a visual learning database in a computer by teaching the computer includes providing a series of training images to the computer, wherein each series includes three images, with each image falling within a unique image domain and with each image domain representing a possible combination of a first attribute and a second attribute: a first image domain includes the first attribute and the second attribute in a first state (X=0, Y=0), a second image domain includes the first attribute in a second state and the second attribute in the first state (X=1, Y=0), and a third image domain includes the first attribute in the first state and the second attribute in the second state (X=0, Y=1).
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: July 26, 2022
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Yunye Gong, Ziyan Wu, Jan Ernst
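The abstract above defines three image domains as combinations of two attribute states. The sketch below illustrates that triple structure as a small lookup keyed by (X, Y); the example attribute interpretation and file names are hypothetical.

```python
# Illustrative sketch of the training-series layout in the abstract: one image
# per attribute-state combination (X=0,Y=0), (X=1,Y=0), and (X=0,Y=1).
def make_training_series(base_image, with_first_attribute, with_second_attribute):
    return {
        (0, 0): base_image,              # both attributes in their first state
        (1, 0): with_first_attribute,    # first attribute in its second state
        (0, 1): with_second_attribute,   # second attribute in its second state
    }

series = make_training_series("day_no_rain.png", "night_no_rain.png", "day_rain.png")
print(sorted(series))  # [(0, 0), (0, 1), (1, 0)]
```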
  • Patent number: 11380134
    Abstract: The disclosure provides a method and device for determining a parameter for a gaze tracking device. The method includes: acquiring an image to be detected and a characteristic matrix of the image to be detected; processing the image to be detected according to a preset model to obtain a first vector and a second vector of the image to be detected; and determining a parameter of the image to be detected according to the first vector and the second vector, the parameter of the image to be detected including at least one of: a position of a glint in the image to be detected and a gaze direction of eyes in the image to be detected.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: July 5, 2022
    Assignee: BEIJING 7INVENSUN TECHNOLOGY CO., LTD.
    Inventors: Wei Liu, Jian Wang, Dongchun Ren, Fengmei Nie, Xiaohu Gong
  • Patent number: 11367205
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: June 21, 2022
    Assignee: Snap Inc.
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
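The abstract above describes combining per-scale feature data with an attention map that softly distributes emphasis over scales, via multiplication and summation. The sketch below illustrates that weighted-sum step with random stand-in maps; it is not the patented feature net or attention net.

```python
# Illustrative sketch of the combination step: per-scale feature maps are
# weighted by a per-pixel soft distribution over scales and summed into a
# single dense feature map. The maps below are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
scales, height, width, channels = 3, 32, 32, 8

features = rng.random((scales, height, width, channels))   # stand-in feature net outputs
logits = rng.random((scales, height, width))                # stand-in attention net output
attention = np.exp(logits) / np.exp(logits).sum(axis=0)     # softmax over scales

dense = (features * attention[..., None]).sum(axis=0)       # shape (32, 32, 8)
print(dense.shape)
```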