Patents Examined by Amara Abdi
  • Patent number: 10220782
    Abstract: An image analysis apparatus is applied to an on-vehicle camera that captures an image of a predetermined monitoring region oriented in a predetermined direction referenced to a vehicle, to analyze an image captured by the on-vehicle camera. The image analysis apparatus includes (i) a storage section that stores a feature quantity of a monitoring region image obtained when an image of the monitoring region is captured by the on-vehicle camera, (ii) an extraction section that acquires an image captured by the on-vehicle camera and extracts a feature quantity of the image captured by the on-vehicle camera, and (iii) a notification section that compares the feature quantity of the image captured by the on-vehicle camera against the feature quantity of the monitoring region image to determine whether the on-vehicle camera is mounted in an abnormal position, and notifies the result of the determination.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: March 5, 2019
    Assignee: DENSO CORPORATION
    Inventor: Shusaku Shigemura
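    A minimal illustrative sketch of the idea in the abstract above (not part of the patent): a grayscale histogram stands in for the unspecified "feature quantity", and an arbitrary distance threshold flags an abnormal mounting position.
```python
# Sketch, assuming a histogram feature and an illustrative threshold.
import numpy as np

def extract_feature(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized intensity histogram of the monitoring-region image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def is_mount_abnormal(stored_feature: np.ndarray,
                      current_frame: np.ndarray,
                      threshold: float = 0.3) -> bool:
    """Flag an abnormal camera position when the features diverge too much."""
    current_feature = extract_feature(current_frame)
    distance = np.abs(stored_feature - current_feature).sum()  # L1 distance
    return distance > threshold

# Example: stored reference feature vs. a newly captured frame
reference = extract_feature(np.random.randint(0, 256, (480, 640), dtype=np.uint8))
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print("abnormal mount detected:", is_mount_abnormal(reference, frame))
```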
  • Patent number: 10210629
    Abstract: A detection area is set in a three-dimensional space in which a subject exists. When an actual hand enters the detection area, coordinate points (white and black dots) represented by pixels making up a silhouette of the hand in a depth image enter the detection area. In the detection area, a reference vector is set that indicates the direction the hand should face relative to the shoulder, which serves as a reference point. Then, for each coordinate point, the inner product between the vector from the reference point to that point and the reference vector is calculated, and the inner products are compared. The positions of the coordinate points whose inner products rank highest are taken as the positions of the tips of the hand.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: February 19, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Akio Ohba, Hiroyuki Segawa, Tetsugo Inada, Hidehiko Ogasawara, Hirofumi Okamoto
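    A minimal illustrative sketch of the ranking step described in the abstract above (not part of the patent); the point cloud, reference point, and reference vector are placeholder values.
```python
# Sketch: rank silhouette points by the inner product of (point - reference_point)
# with a reference vector, and take the highest-ranked points as hand-tip positions.
import numpy as np

def hand_tip_candidates(points: np.ndarray,
                        reference_point: np.ndarray,
                        reference_vector: np.ndarray,
                        top_k: int = 5) -> np.ndarray:
    """points: (N, 3) coordinates of silhouette pixels inside the detection area."""
    vectors = points - reference_point   # vectors from the shoulder to each point
    scores = vectors @ reference_vector  # inner product with the facing direction
    order = np.argsort(scores)[::-1]     # highest inner products first
    return points[order[:top_k]]

# Example: shoulder at the origin, hand expected to point along +z
pts = np.random.uniform(-0.2, 0.8, size=(200, 3))
tips = hand_tip_candidates(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(tips)
```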
  • Patent number: 10204445
    Abstract: An information processing apparatus includes: an input unit that inputs an image of real space captured by an image capturing apparatus; a measurement value input unit that inputs a measurement value regarding a position and orientation of the image capturing apparatus measured by a sensor attached to the image capturing apparatus; a position and orientation derivation unit that, based on three-dimensional information of a feature in the real space and the input image, derives a position and orientation of the image capturing apparatus; a determination unit that, based on the measurement value and the position and orientation of the image capturing apparatus derived by the position and orientation derivation unit, makes a determination as to whether derivation of the position and orientation of the image capturing apparatus performed by the position and orientation derivation unit has failed; and an output unit that outputs a result provided by the determination unit.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: February 12, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Daisuke Kotake, Keisuke Tateno
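    A minimal illustrative sketch of the failure determination described in the abstract above (not part of the patent): the derived pose is compared against the sensor measurement, with illustrative tolerances.
```python
# Sketch: declare that vision-based pose derivation has failed when it disagrees
# too strongly with the sensor measurement attached to the camera.
import numpy as np

def derivation_failed(derived_position: np.ndarray, derived_rotation: np.ndarray,
                      measured_position: np.ndarray, measured_rotation: np.ndarray,
                      pos_tol: float = 0.05, rot_tol_deg: float = 5.0) -> bool:
    """Rotations are 3x3 matrices; positions are 3-vectors (meters)."""
    pos_err = np.linalg.norm(derived_position - measured_position)
    # Angle of the relative rotation between the two orientations
    relative = derived_rotation @ measured_rotation.T
    cos_angle = np.clip((np.trace(relative) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    return pos_err > pos_tol or rot_err_deg > rot_tol_deg

print(derivation_failed(np.array([0.0, 0.0, 1.0]), np.eye(3),
                        np.array([0.0, 0.0, 1.2]), np.eye(3)))  # True: 0.2 m off
```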
  • Patent number: 10198648
    Abstract: The present disclosure relates to advanced image signal processing technology including: i) rapid localization for machine-readable indicia including, e.g., 1-D and 2-D barcodes; and ii) barcode reading and decoders.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: February 5, 2019
    Assignee: Digimarc Corporation
    Inventors: Brett A. Bradley, Ajith M. Kamath, Tomas Filler, Vojtech Holub
  • Patent number: 10198642
    Abstract: A method for a motor vehicle provided with a camera includes: providing, by the camera, an image representing surroundings of the motor vehicle; detecting at least one line of vehicles in the image; detecting at least one driving lane based on the at least one detected line of vehicles; detecting a state of at least one driving direction display in the image; and detecting a lane topology for the at least one detected driving lane, based on the state of the at least one driving direction display.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: February 5, 2019
    Assignee: Continental Automotive GmbH
    Inventor: Abdelkarim Belhoula
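    A minimal illustrative sketch of the last step described in the abstract above (not part of the patent): mapping the detected direction-indicator states within one detected line of vehicles to a coarse lane-topology label. The mapping rule is an assumption for illustration.
```python
# Sketch: infer a coarse lane topology from indicator states in one detected lane.
from collections import Counter

def lane_topology(indicator_states: list[str]) -> str:
    """indicator_states: per-vehicle indicator state ('left', 'right', or 'off')."""
    counts = Counter(indicator_states)
    if counts["left"] > len(indicator_states) / 2:
        return "left-turn lane"
    if counts["right"] > len(indicator_states) / 2:
        return "right-turn lane"
    return "through lane"

# One detected driving lane in which most vehicles signal left
print(lane_topology(["left", "left", "off", "left"]))  # -> left-turn lane
```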
  • Patent number: 10192111
    Abstract: Aspects of the subject disclosure may include, for example, a method comprising obtaining, by a processing system including a processor, first and second models for a structure of an object, based respectively on ground-level and aerial observations of the object. Model parameters are determined for a three-dimensional (3D) third model of the object based on the first and second models; the determining comprises a transfer learning procedure. Data representing observations of the object is captured at an airborne unmanned aircraft system (UAS) operating at an altitude between that of the ground-level observations and the aerial observations. The method also comprises dynamically adjusting the third model in accordance with the operating altitude of the UAS; updating the adjusted third model in accordance with the data; and determining a 3D representation of the structure of the object, based on the updated adjusted third model. Other embodiments are disclosed.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: January 29, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Raghuraman Gopalan
  • Patent number: 10185897
    Abstract: In the event that a moving body (e.g. a person, a car, etc.) is outfitted with a video camera or with a camera-equipped device (e.g. a tablet or a mobile phone), the system described in one aspect is able to understand the motion of the moving body by analyzing the video frame sequence captured by the camera. This means that the system can categorize the motion of the body carrying the camera into one of several types (e.g., is this a person walking? is this a person running?), understand the nature of the moving body holding the camera-equipped device (is this a car? is this a person?), and even identify the moving body (which car? which person?).
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: January 22, 2019
    Assignee: IRIDA LABS S.A.
    Inventors: Ilias Theodorakopoulos, Nikos Fragoulis
  • Patent number: 10185874
    Abstract: A digital imaging system and method for searching for expressions that appear on a microform medium, the system having a computer including a processor and an input device, and a digital microform imaging apparatus having an area sensor generating a digital microform image of the microform medium. The computer is configured to receive a search expression from the input device, create an expression template representing a shape of the search expression, and search the digital microform image for instances of the expression template.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: January 22, 2019
    Assignee: E-IMAGE DATA CORPORATION
    Inventors: Todd A. Kahle, Grant Taylor
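    A minimal illustrative sketch of the search described in the abstract above (not part of the patent): the expression is rendered as an image template and located by normalized cross-correlation; the font, template size, threshold, and file name are placeholder assumptions.
```python
# Sketch: render the search expression as a template and find it in the scanned page.
import cv2
import numpy as np

def find_expression(page: np.ndarray, expression: str, threshold: float = 0.8):
    """page: grayscale image of the digitized microform frame."""
    # Render the expression as an image template (the shape of the search text)
    template = np.full((40, 20 * max(len(expression), 1)), 255, dtype=np.uint8)
    cv2.putText(template, expression, (5, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, 0, 2)
    scores = cv2.matchTemplate(page, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))  # top-left corners of matches

page_img = cv2.imread("microform_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if page_img is not None:
    print(find_expression(page_img, "invoice"))
```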
  • Patent number: 10176405
    Abstract: This disclosure relates to improved vehicle re-identification techniques. The techniques described herein utilize artificial intelligence (AI) and machine learning functions to re-identify vehicles across multiple cameras. Vehicle re-identification can be performed using an image of the vehicle that is captured from any single viewpoint. Attention maps may be generated that identify regions of the vehicle that include visual patterns that overlap between the viewpoint of the captured image and one or more additional viewpoints. The attention maps are used to generate a multi-view representation of the vehicle that provides a global view of the vehicle across multiple viewpoints. The multi-view representation of the vehicle can then be compared to previously captured image data to perform vehicle re-identification.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: January 8, 2019
    Assignee: INCEPTION INSTITUTE OF ARTIFICIAL INTELLIGENCE
    Inventors: Yi Zhou, Ling Shao
  • Patent number: 10165168
    Abstract: Ambiguous portions of an image which have fewer photons of a reflected light signal detected than required to determine depth can be classified as being dark (i.e., reflecting too few photons to derive depth) and/or far (i.e., beyond a range of a camera) based at least in part on expected depth and reflectivity values. Expected depth and reflectivity values for the ambiguous portions of the image may be determined by analyzing a model of an environment created by previously obtained images and depth and reflectivity values. The expected depth and reflectivity values may be compared to calibrated values for a depth sensing system to classify the ambiguous portions of the image as either dark or far based on the actual photon count detected for the image.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: December 25, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael John Schoenberg, Michael Bleyer, Christopher S. Messer, Denis Demandolx
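    A minimal illustrative sketch of the classification described in the abstract above (not part of the patent): pixels with too few returned photons are labeled "far" or "dark" from expected depth and reflectivity; the calibration constants are illustrative.
```python
# Sketch: classify ambiguous (low photon count) pixels as "dark" or "far" using
# expected depth and reflectivity taken from a previously built environment model.
import numpy as np

MAX_SENSOR_RANGE_M = 8.0          # calibrated maximum range of the depth camera
MIN_EXPECTED_REFLECTIVITY = 0.15  # below this, few photons return even in range

def classify_ambiguous(expected_depth: np.ndarray,
                       expected_reflectivity: np.ndarray,
                       photon_count: np.ndarray,
                       min_photons: int = 20) -> np.ndarray:
    """Returns an array of labels: 'ok', 'far', or 'dark' per pixel."""
    labels = np.full(expected_depth.shape, "ok", dtype=object)
    ambiguous = photon_count < min_photons
    far = ambiguous & (expected_depth > MAX_SENSOR_RANGE_M)
    dark = ambiguous & ~far & (expected_reflectivity < MIN_EXPECTED_REFLECTIVITY)
    labels[far] = "far"
    labels[dark] = "dark"
    return labels

labels = classify_ambiguous(np.array([[2.0, 12.0]]), np.array([[0.05, 0.6]]),
                            np.array([[3, 4]]))
print(labels)  # [['dark' 'far']]
```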
  • Patent number: 10154624
    Abstract: In an approach, hyperspectral and/or multispectral remote sensing images are automatically analyzed by a nitrogen analysis subsystem to estimate the value of nitrogen variables of crops or other plant life located within the images. For example, the nitrogen analysis subsystem may contain a data collector module, a function generator module, and a nitrogen estimator module. The data collector module prepares training data which is used by the function generator module to train a mapping function. The mapping function is then used by the nitrogen estimator module to estimate the values of nitrogen variables for a new remote sensing image that is not included in the training set. The values may then be reported and/or used to determine an optimal amount of fertilizer to add to a field of crops to promote plant growth.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: December 18, 2018
    Assignee: The Climate Corporation
    Inventors: Wei Guan, Ying Xu
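    A minimal illustrative sketch of the train-then-estimate flow described in the abstract above (not part of the patent): a linear least-squares fit stands in for the patent's unspecified mapping function, and the band data is synthetic.
```python
# Sketch: fit a mapping from spectral band values to measured nitrogen on
# training data, then apply it per pixel to a new remote sensing image.
import numpy as np

def train_mapping(band_values: np.ndarray, nitrogen: np.ndarray) -> np.ndarray:
    """band_values: (samples, bands); nitrogen: (samples,). Returns weights."""
    X = np.hstack([band_values, np.ones((band_values.shape[0], 1))])  # add bias term
    weights, *_ = np.linalg.lstsq(X, nitrogen, rcond=None)
    return weights

def estimate_nitrogen(image_bands: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """image_bands: (rows, cols, bands). Returns per-pixel nitrogen estimates."""
    rows, cols, bands = image_bands.shape
    X = np.hstack([image_bands.reshape(-1, bands), np.ones((rows * cols, 1))])
    return (X @ weights).reshape(rows, cols)

w = train_mapping(np.random.rand(100, 5), np.random.rand(100))
print(estimate_nitrogen(np.random.rand(4, 4, 5), w).shape)  # (4, 4)
```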
  • Patent number: 10147187
    Abstract: A DR radiography lung contour extraction method based on a fully convolutional network, which includes the steps: establish the fully convolutional network structure for lung contour segmentation; conduct off-line training of the weighting parameters of the fully convolutional network; read the DR image and the weighting parameters of the fully convolutional network; input the DR image into the fully convolutional network and, through layer-by-layer feedforward, output the segmentation results of the image at the network terminal; and establish the lung contour in accordance with the segmentation results.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: December 4, 2018
    Assignee: SICHUAN UNIVERSITY
    Inventors: Junfeng Wang, Peng Tang, Fan Li, Yihua Du, Yulin Ji, Zongan Liang
  • Patent number: 10146992
    Abstract: An image processing apparatus includes an acquisition unit configured to acquire results of analysis processing for a plurality of images; a designation unit configured to designate a type of a target to be detected from an image; and a determination unit configured to determine, among the plurality of images, an image used for detection processing of the detection target designated by the designation unit based on the type of the detection target designated by the designation unit and the result of the analysis processing.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: December 4, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yuichi Tsunematsu
  • Patent number: 10146153
    Abstract: An image processing apparatus includes a memory device that stores a set of instructions and at least one processor that executes the set of instructions to set a sampling condition under which a pixel of an image is sampled based on information indicating at least a number of bits of a pixel of the image, to sample a pixel of the image based on a set sampling condition, and to analyze the image based on sampled pixel data. When a number of bits of a pixel of the image is greater than or equal to a predetermined number of bits, a sampling condition is set so that a sampling interval becomes greater than that in a case when a number of bits of a pixel of the image is less than the predetermined number of bits.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: December 4, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Tateki Narita
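    A minimal illustrative sketch of the sampling rule described in the abstract above (not part of the patent); the interval values and bit-depth threshold are illustrative.
```python
# Sketch: choose a coarser sampling interval when the pixel bit depth reaches a
# predetermined number of bits, then analyze only the sampled pixels.
import numpy as np

def sampling_interval(bits_per_pixel: int, bit_threshold: int = 10) -> int:
    # Higher bit depth carries more information per pixel, so sample more sparsely
    return 4 if bits_per_pixel >= bit_threshold else 2

def sample_pixels(image: np.ndarray, bits_per_pixel: int) -> np.ndarray:
    step = sampling_interval(bits_per_pixel)
    return image[::step, ::step]

img8 = np.zeros((100, 100), dtype=np.uint8)
img12 = np.zeros((100, 100), dtype=np.uint16)  # e.g. 12-bit data in a 16-bit container
print(sample_pixels(img8, 8).shape, sample_pixels(img12, 12).shape)  # (50, 50) (25, 25)
```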
  • Patent number: 10147179
    Abstract: Provided is a technology for recovery from an abnormality of an action of a worker. An action instruction apparatus includes: a standard operating procedure storing unit configured to store, for each operation step, output information from a predetermined sensor relating to a standard action of a worker; an operation step identifying unit configured to acquire output information from a sensor and to compare the acquired output information with the standard action to identify an operation step being performed; an operation abnormality detecting unit configured to acquire output information from a sensor relating to an operation step subsequent to the operation step being performed by the worker to detect an operation abnormality when the acquired output information differs from the output information in the operation step subsequent to the operation step being performed; and a recovery action instruction generating unit configured to generate an operation instruction detail for recovery.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: December 4, 2018
    Assignee: HITACHI, LTD.
    Inventors: Ryusuke Kimura, Kei Imazawa, Takaharu Matsui
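    A minimal illustrative sketch of the step identification and abnormality check described in the abstract above (not part of the patent): the step names, sensor signatures, and tolerance are placeholder assumptions.
```python
# Sketch: identify the operation step whose stored standard sensor signature is
# closest to the current output, and flag an abnormality when the next
# observation deviates too far from the signature of the following step.
import numpy as np

STANDARD_STEPS = {                      # per-step standard sensor signatures
    "pick_part":  np.array([1.0, 0.2]),
    "place_part": np.array([0.3, 0.9]),
    "tighten":    np.array([0.8, 0.8]),
}
STEP_ORDER = ["pick_part", "place_part", "tighten"]

def identify_step(sensor_output: np.ndarray) -> str:
    return min(STANDARD_STEPS,
               key=lambda s: np.linalg.norm(sensor_output - STANDARD_STEPS[s]))

def next_step_abnormal(current_step: str, sensor_output: np.ndarray,
                       tol: float = 0.4) -> bool:
    idx = STEP_ORDER.index(current_step)
    if idx + 1 >= len(STEP_ORDER):
        return False
    expected = STANDARD_STEPS[STEP_ORDER[idx + 1]]
    return np.linalg.norm(sensor_output - expected) > tol

step = identify_step(np.array([0.95, 0.25]))                  # -> "pick_part"
print(step, next_step_abnormal(step, np.array([0.9, 0.1])))   # deviates from "place_part"
```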
  • Patent number: 10140826
    Abstract: A surveillance system including a surveillance server and at least one network camera is provided. The surveillance server includes: a communication interface configured to communicate with a network camera; and a processor configured to determine an event based on at least one image received from the network camera during a first period, determine an activation time of the network camera based on the event, and transmit an event reaction request including information about the activation time to the network camera during a second period after the first period.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: November 27, 2018
    Assignee: HANWHA AEROSPACE CO., LTD.
    Inventors: Chan Ki Jeon, Min Suk Sung, Joon Sung Lee
  • Patent number: 10140503
    Abstract: The subject tracking apparatus comprises: a first registering unit configured to register a partial area as a template indicative of a subject in one image of supplied images; a first matching unit configured to estimate a subject area by collating a partial area in newly supplied images with the template registered by the first registering unit; a second registering unit configured to register a histogram generated based on a pixel value of a partial area indicative of the subject in one image of supplied images; a second matching unit configured to estimate a subject area by collating a histogram of a partial area in newly supplied images with the histogram registered by the second registering unit; and a tracking area determination unit configured to determine a tracking area based on estimation results by the first matching unit and the second matching unit.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: November 27, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Ryosuke Tsuji
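    A minimal illustrative sketch of the two-estimate tracking described in the abstract above (not part of the patent): a registered template and a registered histogram each propose a subject area, and the fusion rule (keep the higher-scoring estimate) is an illustrative simplification.
```python
# Sketch: estimate the subject area with template matching and with histogram
# comparison over candidate windows, then keep whichever estimate scores higher.
import cv2
import numpy as np

def roi_hist(patch: np.ndarray) -> np.ndarray:
    hist = cv2.calcHist([patch], [0], None, [32], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def track(frame: np.ndarray, template: np.ndarray, ref_hist: np.ndarray):
    h, w = template.shape[:2]
    # Estimate 1: template matching
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, t_score, _, t_loc = cv2.minMaxLoc(scores)
    # Estimate 2: histogram matching over a coarse grid of candidate windows
    best_h_score, h_loc = -1.0, (0, 0)
    for y in range(0, frame.shape[0] - h, 16):
        for x in range(0, frame.shape[1] - w, 16):
            score = cv2.compareHist(ref_hist, roi_hist(frame[y:y + h, x:x + w]),
                                    cv2.HISTCMP_CORREL)
            if score > best_h_score:
                best_h_score, h_loc = score, (x, y)
    # Tracking area: the estimate with the stronger score
    return t_loc if t_score >= best_h_score else h_loc

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
template = frame[60:100, 80:120].copy()
print(track(frame, template, roi_hist(template)))  # expected near (80, 60)
```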
  • Patent number: 10130323
    Abstract: The invention provides a method and apparatus for classifying a region of interest in imaging data, the method comprising: calculating a feature vector for at least one region of interest in the imaging data, said feature vector including features of a first modality; projecting the feature vector for the at least one region of interest in the imaging data using a decision function to generate a classification, wherein the decision function is based on classified feature vectors including features of a first modality and features of a second modality; estimating the confidence of the classification if the feature vector is enhanced with features of the second modality.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: November 20, 2018
    Assignee: Delineo Diagnostics, Inc.
    Inventors: Scott Anderson Middlebrooks, Henricus Wilhelm van der Heijden
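    A minimal illustrative sketch loosely following the abstract above (not part of the patent): a decision function trained on two-modality features classifies a first-modality feature vector, and the confidence of an enhanced classification is estimated by filling in second-modality features from training statistics. All values and the confidence heuristic are assumptions.
```python
# Sketch: classify with a linear decision function over [modality-1 | modality-2]
# features and estimate confidence if modality-2 features were added.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=6)                # 3 modality-1 weights + 3 modality-2 weights
bias = 0.1
train_m2 = rng.normal(size=(50, 3))   # modality-2 features of the training set

def classify(m1_features: np.ndarray) -> int:
    """Classify using only modality-1 features (modality-2 part set to zero)."""
    score = w[:3] @ m1_features + bias
    return int(score > 0)

def confidence_if_enhanced(m1_features: np.ndarray) -> float:
    """Fraction of plausible modality-2 completions that agree on one label."""
    scores = w[:3] @ m1_features + train_m2 @ w[3:] + bias
    labels = scores > 0
    return max(labels.mean(), 1 - labels.mean())

x1 = rng.normal(size=3)
print(classify(x1), confidence_if_enhanced(x1))
```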
  • Patent number: 10133961
    Abstract: Aspects of the subject disclosure may include, for example, a method for determining a first set of features in first images of first media content, generating a similarity score by processing the first set of features with a favorability model derived by identifying generative features and discriminative features of second media content that is favored by a viewer, and providing the similarity score to a network for predicting a response by the viewer to the first media content. Other embodiments are disclosed.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: November 20, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Raghuraman Gopalan
  • Patent number: 10120543
    Abstract: An unmanned image capture system captures images of a field or work area using a first, spectral image capture system and a second video image capture system. Crop location data that is indicative of the location of crop plants within the field, is obtained. Evaluation zones in the image data generated by the first image capture system are identified based on the crop location data. Crop plants within the evaluation zones are then identified, analyzed to generate a corresponding emergence metric, and linked to a corresponding video image generated by the second image capture system.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: November 6, 2018
    Assignee: Deere & Company
    Inventors: Ramanathan Sugumaran, Marc Lemoine, Federico Pardina-Malbran
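    A minimal illustrative sketch of the evaluation-zone analysis described in the abstract above (not part of the patent): zones are placed around known crop locations, scored with a simple NDVI-based emergence metric, and linked to a video frame. The zone size, threshold, and frame-linking rule are assumptions.
```python
# Sketch: score emergence inside evaluation zones of a spectral image and link
# each zone to the corresponding video frame.
import numpy as np

def emergence_report(nir: np.ndarray, red: np.ndarray,
                     crop_locations: list[tuple[int, int]],
                     frame_index_for_row, zone: int = 5, ndvi_min: float = 0.3):
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    report = []
    for row, col in crop_locations:            # evaluation zone around each crop position
        patch = ndvi[max(row - zone, 0):row + zone, max(col - zone, 0):col + zone]
        emergence = float((patch > ndvi_min).mean())   # fraction of vegetated pixels
        report.append({"location": (row, col),
                       "emergence": emergence,
                       "video_frame": frame_index_for_row(row)})
    return report

nir_band = np.random.rand(100, 100)
red_band = np.random.rand(100, 100)
print(emergence_report(nir_band, red_band, [(20, 30), (60, 70)],
                       frame_index_for_row=lambda r: r // 10))
```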