Patents Examined by Ian L Lemieux
  • Patent number: 11042787
    Abstract: This disclosure describes a system for automatically updating item image information stored in an item images data store and used for processing captured images to identify items represented in those images. In one implementation, once an identity of an item has been verified, captured images of that item are associated with the item and stored in the item images data store. As a result, the item images data store is updated each time an image of the item is captured and the identity of the item is verified.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: June 22, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Jon Robert Ducrou, Joseph Xavier, Ramanathan Palaniappan, Michel Leonard Goldstein, Michael Lee Brundage
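The update flow the abstract describes can be sketched in a few lines of Python; the dictionary-backed store and its field names below are illustrative assumptions, not the patented implementation:

```python
def update_item_images(data_store, item_id, image, identity_verified):
    """Associate a captured image with an item only once the item's
    identity has been verified; the store grows with each verified capture."""
    if identity_verified:
        data_store.setdefault(item_id, []).append(image)
    return data_store
```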
  • Patent number: 11037293
    Abstract: A cell observation system includes an imaging element that acquires images of the inside of a culture container, in which cells are cultured, over time; a computer configured to quantitatively analyze the culture state of the cells on the basis of each of the acquired images and to statistically analyze the quantitatively analyzed data; and a display that displays, in a manner allowing comparison, the statistical analysis results obtained by the computer for the culture container over a plurality of subculture periods.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: June 15, 2021
    Assignee: OLYMPUS CORPORATION
    Inventors: Naohiro Ariga, Shintaro Takahashi, Yohei Tanikawa, Shinichi Takimoto
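One way to read the "quantitative then statistical" pipeline is per-period summary statistics; in this sketch the dict layout and the cell-count metric are assumptions, chosen so the subculture periods can be compared side by side:

```python
import statistics

def subculture_statistics(measurements):
    """measurements: subculture period -> list of per-image quantitative
    values (e.g. cell counts over time). Returns (mean, stdev) per period
    so the periods can be displayed for comparison."""
    return {period: (statistics.mean(values), statistics.pstdev(values))
            for period, values in measurements.items()}
```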
  • Patent number: 11036975
    Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image, and, to predict one or more human poses in accordance with the predicted one or more human features. The trained neural network can be an end-to-end trained, single stage deep neural network. An action is performed based on the predicted one or more human poses. For example, the human pose(s) can be displayed as an overlay on the received image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 15, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Noranart Vesdapunt, Baoyuan Wang, Ying Jin, Pierrick Arsenault
  • Patent number: 11030478
    Abstract: A system and method for determining a correspondence map between a first and second image by determining a set of correspondence vectors for each pixel in the first image and selecting a correspondence vector from the set of correspondence vectors based on a cost value.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: June 8, 2021
    Assignee: Compound Eye, Inc.
    Inventors: Jason Devitt, Haoyang Wang, Konstantin Azarov
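A minimal NumPy sketch of the selection step described above (the array shapes are assumptions): for each pixel, keep the candidate correspondence vector whose cost is lowest:

```python
import numpy as np

def select_correspondences(candidates, costs):
    """candidates: (H, W, K, 2) candidate correspondence vectors per pixel;
    costs: (H, W, K) matching costs. Returns the (H, W, 2) correspondence
    map built from each pixel's lowest-cost candidate."""
    best = np.argmin(costs, axis=-1)          # (H, W) index of cheapest candidate
    rows, cols = np.indices(best.shape)
    return candidates[rows, cols, best]
```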
  • Patent number: 11025891
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: June 1, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Patent number: 11017533
    Abstract: The system is configured to: receive at least one digital image of a tissue sample of a patient; analyze the at least one received image to identify tumor cells in a region of the image; analyze the image to identify FAP+ areas in the region, each FAP+ area being a pixel blob representing one or more cells expressing the fibroblast activation protein ("FAP"); analyze the image to identify distances between the identified tumor cells and their respective nearest FAP+ areas; compute a proximity measure as a function of the identified distances; process the proximity measure with a classifier to generate a classification result indicating whether the patient's tumor can be treated by a drug or drug component that binds to FAP; and output the classification result.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: May 25, 2021
    Assignee: HOFFMANN-LA ROCHE INC.
    Inventors: Fabien Gaire, Oliver Grimm, Hadassah Sumum Sade, Suzana Vega Harring
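The distance and proximity-measure steps can be sketched with NumPy; the 2-D coordinate layout and the median as the aggregation function are assumptions here, since the abstract only says "a function of the identified distances":

```python
import numpy as np

def proximity_measure(tumor_cells, fap_centers):
    """Distance from each tumor cell to its nearest FAP+ area centre,
    aggregated (in this sketch) by the median."""
    tumor = np.asarray(tumor_cells, dtype=float)   # (N, 2) cell coordinates
    fap = np.asarray(fap_centers, dtype=float)     # (M, 2) FAP+ blob centres
    dists = np.linalg.norm(tumor[:, None, :] - fap[None, :, :], axis=-1)
    return float(np.median(dists.min(axis=1)))     # nearest-FAP distance per cell
```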
  • Patent number: 11010872
    Abstract: Techniques related to generating a fine super resolution image from a low resolution image including a person wearing a predetermined uniform are discussed. Such techniques include applying a pretrained convolutional neural network to a stacked image including image channels from a coarse super resolution image, label data corresponding to the coarse super resolution image from available labels relevant to the uniform, and pose data corresponding to the person to determine the fine super resolution image.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Yuri Shpalensky, Tzach Ashkenazi, Maria Bortman, Roland Mishaev
  • Patent number: 11003917
    Abstract: A method for monitoring a patient (22a) within a medical monitoring area (100) by means of a monitoring system (200) with a depth camera device (210). The method includes the following steps: generating a point cloud (30) of the monitoring area (100) with the monitoring system (200); analyzing the point cloud (30) for detecting predefined objects (20), especially persons (22); determining a location of at least one detected object (20) in the monitoring area (100); and comparing the determined location of the at least one detected object (20) with at least one predefined value (40) for the location of this detected object (20).
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: May 11, 2021
    Assignee: Drägerwerk AG & Co. KGaA
    Inventors: Frank Franz, Stefan Schlichting
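The final comparison step can be as simple as checking a detected location against a predefined region; this sketch assumes a radius-around-a-reference-point model (e.g. the centre of the patient's bed), which is one reading of "at least one predefined value for the location":

```python
import numpy as np

def location_within_bounds(location, reference, max_distance):
    """Compare a detected object's location with a predefined value:
    true while the object stays within max_distance of the reference point."""
    offset = np.asarray(location, float) - np.asarray(reference, float)
    return bool(np.linalg.norm(offset) <= max_distance)
```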
  • Patent number: 11003896
    Abstract: Aspects of the current disclosure include systems and methods for identifying an entity in a query image by comparing the query image with digital images in a database. In one or more embodiments, a query feature may be extracted from the query image and a set of candidate features may be extracted from a set of images in the database. In one or more embodiments, the distances between the query feature and the candidate features are calculated. A feature, which includes a set of shortest distances among the calculated distances and a distribution of the set of shortest distances, may be generated. In one or more embodiments, the feature is input to a trained model to determine whether the entity in the query image is the same as the entity associated with one of the set of shortest distances.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 11, 2021
    Assignee: Stripe, Inc.
    Inventors: Pranav Dandekar, Ashish Goel, Peter Lofgren, Matthew Fisher
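A sketch of the distance-feature construction; Euclidean distance, k=3, and mean/standard deviation as the "distribution" summary are assumptions, since the abstract does not pin these down:

```python
import numpy as np

def shortest_distance_feature(query_feature, candidate_features, k=3):
    """Distances from the query feature to every candidate feature,
    keeping the k shortest plus summary statistics of their distribution;
    the result would be fed to the trained model."""
    dists = np.linalg.norm(np.asarray(candidate_features) - query_feature, axis=1)
    shortest = np.sort(dists)[:k]
    return np.concatenate([shortest, [shortest.mean(), shortest.std()]])
```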
  • Patent number: 10997707
    Abstract: An improved leak detection system for oil and gas pipelines and similar infrastructure, including submerged or buried structures. The system includes an aerial- or space-based platform with GPS and Attitude Determination and Control System (ADCS) capability, the platform connected to a hyperspectral imaging sensor, and a processor and memory configured with a vegetative index such that chemical and hydrocarbon leaks are detected within regulator-approved time limits based on changes in the vegetative index.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: May 4, 2021
    Assignee: Orbital Sidekick, Inc.
    Inventors: Daniel L. Katz, Tushar Prabhakar
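A common vegetative index is NDVI; this sketch uses it purely as an illustration (the abstract does not name a specific index, and the drop threshold is an assumption), flagging a possible leak when vegetation health declines between two acquisitions:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / (nir + red)

def possible_leak(nir_before, red_before, nir_after, red_after, drop=0.1):
    """Flag a pixel when its vegetative index falls by more than `drop`,
    a proxy for vegetation stressed by a chemical or hydrocarbon leak."""
    return ndvi(nir_before, red_before) - ndvi(nir_after, red_after) > drop
```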
  • Patent number: 10991141
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: April 27, 2021
    Assignee: ADOBE INC.
    Inventors: Abhishek Shah, Andaleeb Fatima
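The base-frame selection reduces to combining the two per-frame scores; this sketch assumes a simple weighted sum, since the abstract does not specify how the comprehensive score is formed:

```python
def select_base_frame(emotion_scores, eye_scores, emotion_weight=0.5):
    """Comprehensive score per frame as a weighted sum of the emotional
    alignment score and the eye score; the best-scoring frame is the base."""
    combined = [emotion_weight * e + (1 - emotion_weight) * y
                for e, y in zip(emotion_scores, eye_scores)]
    return max(range(len(combined)), key=combined.__getitem__)
```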
  • Patent number: 10977810
    Abstract: Implementations generally relate to determining camera motion. In one implementation, a method includes capturing a first image and a second image of a physical scene with a camera in respective first and second positions. The method further includes determining first and second image points from the respective first and second images. The method further includes determining first and second directions of gravity relative to the camera in the respective first and second positions. The method further includes determining a motion of the camera between the capturing of the first image and the capturing of the second image, wherein the determining of the motion of the camera is based at least in part on the first image points, the second image points, the first direction of gravity, and the second direction of gravity.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: April 13, 2021
    Inventors: Erik Murphy-Chutorian, Nicholas Butko
  • Patent number: 10970536
    Abstract: Systems and methods for assessing similarity of documents are provided. Embodiments of the systems and methods include extracting a reference document text from a reference document, extracting an archived document text from an archived document, and quantifying the reference document and the archived document. The systems and methods may also include determining a document similarity value of the quantified reference document and the archived document. Determining the document similarity value includes calculating a set of vector similarity values for a set of combinations of a reference document text vector and an archived document text vector, and calculating the document similarity value as a sum of the set of vector similarity values.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 6, 2021
    Assignee: Open Text Corporation
    Inventors: Jeroen Mattijs van Rotterdam, Michael T Mohen, Chao Chen, Kun Zhao
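The similarity computation can be sketched as a sum of pairwise similarities over the two documents' text vectors; cosine similarity is an assumption here, as the abstract does not name the vector similarity used:

```python
import numpy as np

def document_similarity(ref_vectors, arch_vectors):
    """Sum of cosine similarities over all (reference, archived) vector
    combinations, one vector per text chunk."""
    total = 0.0
    for r in ref_vectors:
        for a in arch_vectors:
            total += float(np.dot(r, a) / (np.linalg.norm(r) * np.linalg.norm(a)))
    return total
```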
  • Patent number: 10970820
    Abstract: In a method for super resolution imaging, the method includes: receiving, by a processor, a low resolution image; generating, by the processor, an intermediate high resolution image having an improved resolution compared to the low resolution image; generating, by the processor, a final high resolution image based on the intermediate high resolution image and the low resolution image; and transmitting, by the processor, the final high resolution image to a display device for display thereby.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 6, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mostafa El-Khamy, Jungwon Lee, Haoyu Ren
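One classical way to make a final image depend on both the intermediate image and the low-resolution input is back-projection; in this sketch, nearest-neighbour up/down-sampling stands in for the learned stages, and the correction enforces that the final image downsamples back to the input:

```python
import numpy as np

def upsample(img, scale=2):
    """Nearest-neighbour upsampling, standing in for a learned stage."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def downsample(img, scale=2):
    return img[::scale, ::scale]

def super_resolve(low, scale=2):
    intermediate = upsample(low, scale)                 # improved resolution
    residual = low - downsample(intermediate, scale)    # disagreement with input
    return intermediate + upsample(residual, scale)     # corrected final image
```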
  • Patent number: 10964036
    Abstract: The present disclosure relates to a method and a system for detecting an anomaly within a biological tissue. A first image of the biological tissue is obtained, the first image containing light at a first wavelength. A second image of the biological tissue is obtained, the second image containing light at a second wavelength. A texture analysis of the biological tissue is performed using spatial information of the first and second images. The texture analysis is resolved over the first and second wavelengths.
    Type: Grant
    Filed: October 19, 2017
    Date of Patent: March 30, 2021
    Assignee: OPTINA DIAGNOSTICS, INC.
    Inventors: Jean Philippe Sylvestre, David Lapointe, Claudia Chevrefils, Reza Jafari
  • Patent number: 10964004
    Abstract: The present invention is an automated optical inspection method using deep learning, comprising the steps of: providing a plurality of paired image combinations, wherein each said paired image combination includes at least one defect-free image and at least one defect-containing image corresponding to the defect-free image; providing a convolutional neural network to start a training mode of the convolutional neural network; inputting the plurality of paired image combinations into the convolutional neural network, and adjusting a weight of at least one fully connected layer of the convolutional neural network through backpropagation to complete the training mode of the convolutional neural network; and performing an optical inspection process using the trained convolutional neural network.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: March 30, 2021
    Assignee: UTECHZONE CO., LTD.
    Inventors: Chih-Heng Fang, Chia-Liang Lu, Ming-Tang Hsu, Arulmurugan Ambikapathi, Chien-Chung Lin
  • Patent number: 10956757
    Abstract: An image processing device includes: a road surface detecting section to detect a road surface region from an input image based on a shot image obtained by shooting with a camera; a time-series verifying section to perform time-series verification to verify a result of detection of the road surface region in the input image in a time-series manner; a detection region selecting section to set a detection region for detection of an object in the input image according to the result of detection of the road surface region by the road surface detecting section and a result of the time-series verification by the time-series verifying section; and a detecting section to detect the object in the detection region.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: March 23, 2021
    Assignee: Clarion Co., Ltd.
    Inventors: Yasuhiro Akiyama, Yasusi Kanada, Koichi Hamada
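Time-series verification can be sketched as majority voting over recent per-frame road-surface masks; the boolean-mask representation and the agreement fraction are assumptions:

```python
import numpy as np

def time_series_verify(recent_masks, agreement=0.6):
    """recent_masks: list of boolean road-surface masks from recent frames.
    A pixel is verified as road surface when it was detected in at least
    `agreement` of those frames."""
    stack = np.stack(recent_masks).astype(float)
    return stack.mean(axis=0) >= agreement
```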
  • Patent number: 10949674
    Abstract: An apparatus for video summarization using semantic information is described herein. The apparatus includes a controller, a scoring mechanism, and a summarizer. The controller is to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity. The scoring mechanism is to calculate a score for each frame of each activity, wherein the score is based on a plurality of objects in each frame. The summarizer is to summarize the activity segments based on the score for each frame.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Myung Hwangbo, Krishna Kumar Singh, Teahyung Lee, Omesh Tickoo
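The scoring and summarization steps can be sketched as follows; representing frames as lists of detected object labels with a per-label weight map is an assumption:

```python
def frame_score(frame_objects, weights):
    """Score a frame from the objects detected in it."""
    return sum(weights.get(obj, 0.0) for obj in frame_objects)

def summarize(activity_segments, weights, top_k=1):
    """Keep the top_k best-scoring frames from each activity segment."""
    summary = []
    for segment in activity_segments:
        ranked = sorted(segment, key=lambda f: frame_score(f, weights), reverse=True)
        summary.extend(ranked[:top_k])
    return summary
```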
  • Patent number: 10949707
    Abstract: An approach is provided for determining a feature correspondence based on camera geometry. The approach, for example, involves determining a first labeled or detected pixel location in a first image and a second labeled or detected pixel location in a second image. The approach also involves computing a first ray from a first camera position of the first image through the first labeled or detected pixel location. The approach further involves computing a second ray from a second camera position of the second image through the second labeled or detected pixel location. The approach further involves computing a closeness value of the first ray and the second ray. The approach further involves providing an output indicating the feature correspondence between the first labeled or detected pixel location and the second labeled or detected pixel location based on determining that the closeness value is within a threshold value.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: March 16, 2021
    Assignee: HERE Global B.V.
    Inventors: Anish Mittal, David Lawlor, Krishna Balakrishnan, Zhanwei Chen
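The closeness value between the two rays can be sketched as the minimum distance between the lines they span; treating the rays as infinite lines is a simplifying assumption:

```python
import numpy as np

def ray_closeness(origin1, dir1, origin2, dir2):
    """Minimum distance between two rays, given camera origins and unit
    direction vectors through the labeled or detected pixel locations."""
    n = np.cross(dir1, dir2)
    offset = np.asarray(origin2, float) - np.asarray(origin1, float)
    if np.linalg.norm(n) < 1e-9:                        # parallel rays
        return float(np.linalg.norm(np.cross(offset, dir1)))
    return float(abs(np.dot(offset, n)) / np.linalg.norm(n))

def corresponds(closeness, threshold=0.05):
    """Declare a feature correspondence when the rays nearly intersect."""
    return closeness <= threshold
```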
  • Patent number: 10943157
    Abstract: A pattern recognition method for immunofluorescence images in autoantibody identification is disclosed. The method includes the following steps: inputting a plurality of original cell immunofluorescence images; running a plurality of convolutional neural networks on a processor, the convolutional neural networks including a convolution layer, a pooling layer, and an inception layer for capturing a plurality of convolution features; conducting a judgment process to obtain the proportions of the antinuclear antibody morphological patterns; and outputting the recognition results.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: March 9, 2021
    Assignee: CHANG GUNG MEMORIAL HOSPITAL, LINKOU
    Inventors: Chang-Fu Kuo, Chi-Hung Lin, Yi-Ling Chen, Meng-Jiun Chiou