Patents Examined by Mike Rahmjoo
  • Patent number: 9349163
    Abstract: A method and apparatus for removing noise from a depth image may include a noise estimating unit to estimate noise of the depth image using an amplitude image, a super-pixel generating unit to generate a planar super-pixel based on depth information of the depth image and the estimated noise, and a noise removing unit to remove noise of the depth image using depth information of the depth image and depth information of the super-pixel.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: May 24, 2016
    Inventors: Ouk Choi, Byong Min Kang, Kee Chang Lee
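A minimal sketch of the idea in the abstract above: if depth-sensor noise is assumed to scale inversely with signal amplitude (a common time-of-flight model, not stated in the abstract), noisy pixels can be pulled toward the depth of the planar super-pixel they belong to. The function name, the noise model, and the blending rule are illustrative assumptions, not the patented method.

```python
import numpy as np

def denoise_depth(depth, amplitude, superpixel_depth, k=1.0):
    """Blend each pixel's depth toward its super-pixel's planar depth.

    Noise is assumed inversely proportional to signal amplitude, so
    noisier (low-amplitude) pixels lean more heavily on the plane.
    """
    noise = k / np.maximum(amplitude, 1e-6)   # estimated per-pixel noise
    w = noise / (noise + 1.0)                 # blending weight in [0, 1)
    return (1.0 - w) * depth + w * superpixel_depth
```

High-amplitude pixels keep essentially their measured depth, while low-amplitude pixels are snapped almost entirely onto the super-pixel plane.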
  • Patent number: 9202110
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: December 1, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Steward Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9195876
    Abstract: A system for oil storage tank monitoring, comprising an extraction module and an analysis module, wherein the extraction module determines information from an oil storage tank image and the analysis module performs operations on the extracted information, such as determining measurements or values of oil storage tanks.
    Type: Grant
    Filed: December 3, 2013
    Date of Patent: November 24, 2015
    Inventor: Mark Tabb
  • Patent number: 9196032
    Abstract: A process is provided for estimating a three-dimensional (3D) radiance field of a combustion process in an enclosure. An on-line intensity-temperature calibration is performed based on an association between an intensity of an image pixel and an actual temperature associated with a selected region in the enclosure. The intensity of the corresponding image is transformed to a radiance image based on settings of an image-capturing device and the on-line calibration. A registration and alignment estimation of the image is performed based on positional information of the enclosure. The radiance image is aligned based on the registration estimation. The 3D radiance field having voxels of the enclosure is estimated based on a two-dimensional to 3D transforming of the aligned radiance images.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: November 24, 2015
    Inventors: Kurt Kraus, Kwong Wing Au, Matthew Martin, Sharath Venkatesha, Stefano Bietto
  • Patent number: 9189682
    Abstract: Facial recognition algorithms may identify the faces of one or more people in a digital image. Multiple types of communication may be available for the different people in the digital image. A user interface may be presented indicating recognized faces along with the available forms of communication for the corresponding person. An indication of the total number of people available to be communicated with using each form of communication may be presented. The user may have the option to choose one or more forms of communication, causing the digital image to be sent to the recipients using the selected forms of communication. An individual may have provided information for facial recognition of the individual to a service. Based on the information, the service may recognize that the individual is in an uploaded picture and send the digital image to the user account of the individual.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: November 17, 2015
    Assignee: Apple Inc.
    Inventors: Richard H. Salvador, Steve G. Salvador
  • Patent number: 9189707
    Abstract: Annotating and classifying an image based on a user context includes determining a location data of an object captured in an image, determining an attribute data of the object, obtaining sensor data from sensors that are associated with the location data based on the attribute data, determining a recommended user context from one or more predefined user contexts based on a comparison of the location data, the attribute data, and the sensor data with location data, attribute data, and sensor data of one or more images associated with the one or more predefined user contexts, determining a recommended class of the captured image based on the recommended user context, selecting one or more annotation data from the location data, the attribute data, and the sensor data based on the recommended class or the recommended user context, and annotating the image with the one or more annotation data.
    Type: Grant
    Filed: February 24, 2014
    Date of Patent: November 17, 2015
    Assignee: LLC
    Inventor: Stephen J. Brown
  • Patent number: 9183443
    Abstract: Disclosed are systems and methods for configuring a vision detector, wherein a training image is obtained from a production line operating in continuous motion so as to provide conditions substantially identical to those that will apply during actual manufacturing and inspection of objects. A training image can be obtained without any need for a trigger signal, whether or not the vision detector might use such a signal for inspecting the objects. Further disclosed are systems and methods for testing a vision detector by selecting, storing, and displaying a limited number of images from a production run, where those images correspond to objects likely to represent incorrect decisions.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: November 10, 2015
    Assignee: Cognex Technology and Investment LLC
    Inventors: Andrew Eames, Brian Mirtich, William Silver
  • Patent number: 9183276
    Abstract: According to one embodiment, an electronic device generates first and second index information, the first index information including codes of character strings corresponding to strokes, the second index information including characteristic quantities of strokes. The device executes at least one of a first search and a second search, according to a character string likelihood of first strokes, which serve as a search key. The first search is performed by using the second index information and characteristic quantities of the first strokes. The second search is performed by using the first index information and a code of a character string corresponding to the first strokes.
    Type: Grant
    Filed: August 13, 2013
    Date of Patent: November 10, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Takehiro Ogawa, Atsushi Kakemura
  • Patent number: 9177194
    Abstract: A face is detected and identified in a digital image. A weight is calculated and assigned to the detected face based on characteristics of the face and social media connections between the person identified from the face and a target viewer of the digital image. One or more image effects are applied to the digital image to visually distinguish the detected face from other parts of the digital image and/or in relation to other faces detected in the image.
    Type: Grant
    Filed: January 29, 2014
    Date of Patent: November 3, 2015
    Assignees: Sony Corporation, Sony Mobile Communications AB
    Inventor: Henrik Sundström
  • Patent number: 9171227
    Abstract: Provided is an apparatus and method for extracting feature information of an image using a scale-invariant feature transform (SIFT) algorithm. The apparatus may include a first interface configured to generate one or more tile images from a first source image stored in a particular memory, such as a high-capacity short-term memory, and a feature information extractor configured to receive the generated one or more tile images and to respectively extract feature information from each of the one or more input tile images, where the first interface may be configured to generate the one or more tile images by selectively dividing the first source image into the one or more tile images based on a horizontal resolution of the first source image.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: October 27, 2015
    Inventors: Yong Min Tai, Young Su Moon, Jung Uk Cho, Joon Hyuk Cha, Hyun Sang Park, Shi Hwa Lee
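The tile-generation step in the abstract above can be sketched as follows; the width limit, overlap amount, and function name are hypothetical, chosen only to illustrate selectively dividing a source image based on its horizontal resolution.

```python
import numpy as np

def make_tiles(image, max_width=2048, overlap=32):
    """Split an image into vertical strips when its horizontal resolution
    exceeds a processing limit; narrower images pass through whole.
    Overlap avoids losing features that straddle a tile boundary."""
    h, w = image.shape[:2]
    if w <= max_width:
        return [image]
    tiles, x = [], 0
    while x < w:
        end = min(x + max_width, w)
        tiles.append(image[:, max(0, x - overlap):end])
        x = end
    return tiles
```

A feature extractor can then run on each tile independently, which keeps the working set of any single extraction within a fixed memory budget.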
  • Patent number: 9152867
    Abstract: Embodiments include methods, devices, software, and systems for identifying a person based on relatively permanent pigmented or vascular skin mark (RPPVSM) patterns in images. Locations of RPPVSMs in different images of people are point matched, and a correspondence probability that the point matched RPPVSMs are from different people is calculated. Other embodiments are also described and claimed.
    Type: Grant
    Filed: June 10, 2014
    Date of Patent: October 6, 2015
    Assignees: Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center, Nanyang Technological University
    Inventors: Noah Ames Craft, Wai Kin Adams Kong, Arfika Nurhudatiana
  • Patent number: 9141865
    Abstract: The invention provides a method of using machine vision to recognize text and symbols, and more particularly traffic signs.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: September 22, 2015
    Assignee: Itseez, Inc.
    Inventor: Victor Erukhimov
  • Patent number: 9141844
    Abstract: The system includes a 3D feature detection module and a 3D recognition module. The 3D feature detection module processes a 3D surface map of a biometric object, wherein the 3D surface map includes a plurality of 3D coordinates. The 3D feature detection module determines whether one or more types of 3D features are present in the 3D surface map and generates 3D feature data including 3D coordinates and feature types for the detected features. The 3D recognition module compares the 3D feature data with biometric data sets for identified persons. The 3D recognition module determines a match between the 3D feature data and one of the biometric data sets when a confidence value exceeds a threshold.
    Type: Grant
    Filed: March 4, 2013
    Date of Patent: September 22, 2015
    Inventors: Veeraganesh Yalla, Colby B. Boles, Raymond Charles Daley, Robert Kyle Fleming
  • Patent number: 9143659
    Abstract: Body art, such as a tattoo, is integrated or extended onto clothing. Clothing patterns may be integrated or extended onto body art, such as temporary tattoos. In one embodiment, a computer analyzes an image of a body with a tattoo and generates an image suitable for application to clothing. When applied to clothing, the image displays the portion of the tattoo that is covered by the clothing, or may extend the appearance of the tattoo from the skin to the adjacent clothing.
    Type: Grant
    Filed: January 8, 2013
    Date of Patent: September 22, 2015
    Inventor: Gary Shuster
  • Patent number: 9135519
    Abstract: According to an aspect of the present invention, there is provided a pattern matching method of detecting an image of a detection target from a search image, comprising: obtaining a reference image of the detection target; generating a model edge image on the basis of the reference image; generating an edge extraction domain, specified as a portion where the model edge image can exist, by overlaying a plurality of the model edge images obtained by at least one of rotating the model edge image within a predetermined range around a rotation center of the model edge image and translating the model edge image within a predetermined range; and performing pattern matching between the model edge image and the search edge image.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: September 15, 2015
    Inventor: Hiroshi Kitajima
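As an illustration of the edge extraction domain described above, the following sketch overlays rotated copies of a model edge point set about its centroid and records every pixel the model can occupy. The point-set representation, parameter names, and rotation-only variant are assumptions made for illustration, not the patented algorithm.

```python
import numpy as np

def edge_extraction_domain(edge_points, angle_range_deg, step_deg, shape):
    """Mark every pixel a model edge point can reach when the model is
    rotated about its centroid within +/- angle_range_deg; the union of
    those positions forms the edge extraction domain."""
    pts = np.asarray(edge_points, dtype=float)
    center = pts.mean(axis=0)
    domain = np.zeros(shape, dtype=bool)
    for deg in np.arange(-angle_range_deg, angle_range_deg + step_deg, step_deg):
        t = np.deg2rad(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        moved = (pts - center) @ rot.T + center   # rotate each edge point
        for x, y in np.rint(moved).astype(int):
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                domain[y, x] = True
    return domain
```

Restricting edge extraction in the search image to this domain avoids processing pixels where the model can never appear.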
  • Patent number: 9122925
    Abstract: The present disclosure is directed towards methods and systems for capturing artifact-free biometric images of an eye. The eye may be in motion and in the presence of partially-reflective eyewear. The method may include acquiring, by a first sensor, a first image of an eye while the eye is illuminated by a first illuminator. The first image may include a region of interest. The first sensor may be disposed at a fixed displacement from the first illuminator and a second sensor. The second sensor may acquire, within a predetermined period of time from the acquisition of the first image, a second image of the eye. The second image may include the region of interest. An image processor may determine if at least one of the first and second images include artifacts arising from one or both of the first illuminator and eyewear, within the region of interest.
    Type: Grant
    Filed: January 15, 2015
    Date of Patent: September 1, 2015
    Assignee: EyeLock, Inc.
    Inventor: Keith J. Hanna
  • Patent number: 9122957
    Abstract: An image processing apparatus includes a first acquiring unit that acquires an image to be processed; a setting unit that sets multiple partial image areas in the image to be processed; a second acquiring unit that acquires a first classification result indicating a possibility that an object of a specific kind is included in each of the multiple partial image areas; and a generating unit that generates a second classification result indicating a possibility that the object of the specific kind is included in the image to be processed on the basis of the first classification result of each of the multiple partial image areas.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: September 1, 2015
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Noriji Kato
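One simple way to turn the per-area first classification results described above into a whole-image second classification result is a noisy-OR combination: the object is present in the image unless it is absent from every partial area. This particular combining rule is an illustrative assumption, not necessarily the patented generating unit.

```python
import numpy as np

def aggregate_patch_scores(patch_scores):
    """Combine per-patch probabilities that an object of a specific kind
    appears in each partial image area into a whole-image probability
    via noisy-OR: 1 - product of per-patch absence probabilities."""
    p = np.asarray(patch_scores, dtype=float)
    return 1.0 - np.prod(1.0 - p)
```

For example, two patches each scoring 0.5 yield a whole-image score of 0.75.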
  • Patent number: 9122958
    Abstract: Object recognition systems, methods, and devices are provided. Candidate objects may be detected. The candidate objects may be verified as depicting objects of a predetermined object type with verification tests that are based on comparisons with reference images known to include such objects and/or based on context of the candidate objects. The object recognition system may identify images in a social networking service that may include objects of a predetermined type.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: September 1, 2015
    Assignee: Social Sweepster, LLC
    Inventors: Tod Joseph Curtis, Thomas Ryan McGrath, Kenneth Edward Jagacinski Schweickert
  • Patent number: 9117137
    Abstract: Inputs of a plurality of images constituting a group of images of items regarded as non-defective are accepted and stored in advance, and a defect threshold for detecting a defective portion of an inspection object is set based on the plurality of stored images. A defect amount to be compared with a determination threshold for making a non-defective/defective determination on the inspection object is calculated for each of the plurality of stored images based on the set defect threshold, and whether or not each of the calculated defect amounts is an outlier is tested by use of at least one of a parametric technique and a non-parametric technique. Outlier information specifying an image whose defect amount has been found to be an outlier is displayed and output.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: August 25, 2015
    Assignee: Keyence Corporation
    Inventors: Naoya Uchiyama, Hidetoshi Morimoto
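The parametric and non-parametric outlier tests mentioned in the abstract above can be sketched as a z-score test combined with Tukey's IQR fences; the thresholds and function name here are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def outlier_flags(defect_amounts, z_thresh=3.0, iqr_k=1.5):
    """Flag defect amounts that are outliers under a parametric test
    (z-score against the sample mean) or a non-parametric test
    (Tukey's interquartile-range fences)."""
    x = np.asarray(defect_amounts, dtype=float)
    z = np.abs(x - x.mean()) / (x.std() + 1e-12)        # parametric
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1                                        # non-parametric
    nonparam = (x < q1 - iqr_k * iqr) | (x > q3 + iqr_k * iqr)
    return (z > z_thresh) | nonparam
```

Images whose defect amount is flagged can then be surfaced to the operator as candidates to exclude from the non-defective reference group.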
  • Patent number: 9117108
    Abstract: A method for controlling the authorization of a person to access a secure area, particularly a cockpit of a passenger aircraft, is provided. According to the method, an access control apparatus for detecting a set of biometric features is provided, which apparatus can be enabled by entering a predetermined access code. The access code is transferred by the person to the access control apparatus. The access control apparatus detects a set of biometric features of the person transferring the access code. The set of biometric features of the person are saved. Access for the person for a predetermined time period is subsequently enabled. Solely verifying the set of biometric features of the person seeking access allows access to be enabled again for the person during the predetermined time period.
    Type: Grant
    Filed: March 6, 2014
    Date of Patent: August 25, 2015
    Assignee: Diehl Aerospace GmbH
    Inventors: Olaf Kammer, Lothar Trunk, Jörg Waffenschmidt