Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 8675921
    Abstract: A distance measurement system and method are provided. The distance measurement method first projects a light beam with a speckle pattern onto reference planes and an object so that the reference planes and a surface of the object each have an image of the speckle pattern, the speckle pattern having a plurality of speckles. Next, images of the speckle pattern reflected by the reference planes are captured to generate reference image information, and an image of the speckle pattern reflected by the surface of the object is captured to generate object image information. A processing module, which may be implemented in software, compares the object image information with the reference image information to obtain several similarity scores. If the highest similarity score is greater than a threshold value, the processing module identifies the corresponding reference plane, thereby computing the position of the object.
    Type: Grant
    Filed: September 2, 2011
    Date of Patent: March 18, 2014
    Assignee: Pixart Imaging Inc.
    Inventors: Shu-Sian Yang, Ren-Hau Gu, Hsin-Chia Chen, Sen-Huang Huang
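
A minimal sketch of the comparison step described in patent 8675921 above, assuming grayscale NumPy images and using normalized cross-correlation as the similarity score; the threshold and the reference depths are illustrative values, not figures from the patent.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def estimate_position(object_img, reference_imgs, reference_depths, threshold=0.6):
    """Return the depth of the best-matching reference plane, or None if no
    similarity score exceeds the threshold (an assumed parameter)."""
    scores = [similarity(object_img, ref) for ref in reference_imgs]
    best = int(np.argmax(scores))
    if scores[best] > threshold:
        return reference_depths[best]
    return None

# Toy usage with random speckle-like reference images at depths 10, 20 and 30 cm.
rng = np.random.default_rng(0)
refs = [rng.random((64, 64)) for _ in range(3)]
obj = refs[1] + 0.05 * rng.random((64, 64))   # object lies near the 20 cm plane
print(estimate_position(obj, refs, [10, 20, 30]))
```
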
  • Patent number: 8675958
    Abstract: A photographic subject determination method includes: a binarization step of creating a plurality of binarized images of a subject image, based upon color information or luminance information in the subject image; an evaluation value calculation step of, for each of the plurality of binarized images, calculating an evaluation value that is used for specifying at least one of a position, a size, and a shape of a photographic subject within the subject image; and a photographic subject specification step of specifying at least one of the position, the size, and the shape of a photographic subject within the subject image, based upon the evaluation value.
    Type: Grant
    Filed: March 11, 2011
    Date of Patent: March 18, 2014
    Assignee: Nikon Corporation
    Inventor: Hiroyuki Abe
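
The binarization and evaluation steps of patent 8675958 can be pictured roughly as below. This is a sketch under assumptions: the luminance thresholds are arbitrary, and the evaluation value is a toy bounding-box fill ratio rather than the measure the patent actually uses.

```python
import numpy as np

def binarize_stack(gray: np.ndarray, thresholds):
    """Create one binary image per luminance threshold (binarization step)."""
    return [(gray >= t) for t in thresholds]

def evaluation_value(mask: np.ndarray) -> float:
    """Toy evaluation value: how well the foreground fills its bounding box.
    A compact blob scores high; scattered noise scores low."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return len(xs) / float(h * w)

def locate_subject(gray: np.ndarray, thresholds=(64, 96, 128, 160, 192)):
    """Pick the binarized image with the best evaluation value and report the
    subject's position (centroid) and size (bounding box)."""
    masks = binarize_stack(gray, thresholds)
    scores = [evaluation_value(m) for m in masks]
    best = masks[int(np.argmax(scores))]
    ys, xs = np.nonzero(best)
    return (float(ys.mean()), float(xs.mean())), \
           (int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1))

gray = np.zeros((100, 100), dtype=np.uint8)
gray[30:60, 40:70] = 200                      # bright subject on a dark background
print(locate_subject(gray))
```
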
  • Patent number: 8670590
    Abstract: An image processing device for improving the accuracy of optical flow calculation when an optical flow is calculated in a window unit. An image processing device for calculating an optical flow on the basis of image information within a window for a processing target using a plurality of images captured at different times includes position acquisition means which acquires position information of the processing target and setting means which sets a size of a window for calculating an optical flow on the basis of the position information acquired by the position acquisition means.
    Type: Grant
    Filed: July 30, 2009
    Date of Patent: March 11, 2014
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Naohide Uchida
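
One plausible (assumed) way to realize the window-size setting of patent 8670590 is to map the processing target's distance to a window size, since nearer targets move more pixels per frame; every constant below is illustrative.

```python
def flow_window_size(distance_m: float,
                     near_m: float = 5.0, far_m: float = 50.0,
                     min_px: int = 8, max_px: int = 64) -> int:
    """Choose an optical-flow window size from the target's distance.
    Closer targets show larger per-frame motion, so they get a larger window.
    The linear mapping and all constants here are illustrative assumptions."""
    d = min(max(distance_m, near_m), far_m)
    scale = (far_m - d) / (far_m - near_m)          # 1.0 when near, 0.0 when far
    return int(round(min_px + scale * (max_px - min_px)))

for d in (5, 15, 30, 50):
    print(d, "m ->", flow_window_size(d), "px window")
```
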
  • Publication number: 20140064565
    Abstract: A scale (20) has a plurality of patterns so as to spatially modulate an energy distribution, and the scale includes a first pattern having a first modulation period in a moving direction, and a second pattern having a second modulation period different from the first modulation period in the moving direction, a relative phase between the first pattern and the second pattern changes in accordance with a direction perpendicular to the moving direction, each of the first pattern and the second pattern is configured by including a reflective portion (26) that reflects light and a non-reflective portion (25) that does not reflect the light, and a width of the reflective portion (26) in the moving direction at a first position is different from the width at a second position different from the first position along the direction perpendicular to the moving direction.
    Type: Application
    Filed: August 30, 2013
    Publication date: March 6, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Chihiro Nagura
  • Patent number: 8666118
    Abstract: Controlling an image element associated with a distance from an energy measuring device, in a reflected energy measurement system involves producing at least one signal for controlling a common visible characteristic of the image element, in response to a plurality of signals representing respective intensities of reflected energy at respective different frequencies, measured at the energy measuring device at a time corresponding to the distance.
    Type: Grant
    Filed: May 20, 2009
    Date of Patent: March 4, 2014
    Assignee: Imagenex Technology Corp.
    Inventors: Craig Calder Lindholm, Yingchun Lu
  • Patent number: 8666145
    Abstract: A system and method for identifying a region of interest in a digital image. First and second images of a scene may be obtained from respective first and second points of view. Following the acquisition of a first image from a first point of view, a subsequent image may be automatically acquired upon determining that a second point of view has been reached. Based on two or more images of a scene, a background object may be removed from an image to produce an image that only includes a foreground object or a region of interest.
    Type: Grant
    Filed: September 7, 2011
    Date of Patent: March 4, 2014
    Assignee: Superfish Ltd.
    Inventors: Michael Chertok, Adi Pinhas
  • Patent number: 8660312
    Abstract: Embodiments of the present invention relate to a method for computing depth sectioning of an object using a quantitative differential interference contrast device having a wavefront sensor with one or more structured apertures, a light detector and a transparent layer between the structured apertures and the light detector. The method comprises receiving light, by the light detector, through the one or more structured apertures. The method also measures the amplitude of an image wavefront, and measures the phase gradient in two orthogonal directions of the image wavefront based on the light. The method can then reconstruct the image wavefront using the amplitude and phase gradient. The method can then propagate the reconstructed wavefront to a first plane intersecting an object at a first depth. In one embodiment, the method propagates the reconstructed wavefront to additional planes and generates a three-dimensional image based on the propagated wavefronts.
    Type: Grant
    Filed: January 21, 2010
    Date of Patent: February 25, 2014
    Assignee: California Institute of Technology
    Inventors: Xiquan Cui, Changhuei Yang
  • Patent number: 8655024
    Abstract: A displacement detection method includes the steps of: acquiring an image frame; calculating a characteristic index of the image frame; maintaining the image frame when the characteristic index is larger than a threshold value; and adding a fixed pattern to the image frame when the characteristic index is smaller than the threshold value. A displacement detection device is also provided.
    Type: Grant
    Filed: March 9, 2011
    Date of Patent: February 18, 2014
    Assignee: Pixart Imaging Inc.
    Inventors: Chun Wei Chen, Hsin Chia Chen
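
A sketch of the decision rule in patent 8655024, with an assumed characteristic index (gradient energy) and an assumed fixed pattern; only the keep-or-augment logic follows the abstract.

```python
import numpy as np

def characteristic_index(frame: np.ndarray) -> float:
    """Toy characteristic index: gradient energy of the frame. A featureless
    surface yields a low value, a textured surface a high one."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def prepare_frame(frame: np.ndarray, fixed_pattern: np.ndarray, threshold: float):
    """Keep the frame as-is when it carries enough texture for displacement
    tracking; otherwise superimpose a known fixed pattern so displacement can
    still be estimated (threshold and pattern strength are assumed values)."""
    if characteristic_index(frame) > threshold:
        return frame
    return frame + fixed_pattern

rng = np.random.default_rng(1)
pattern = (rng.random((32, 32)) > 0.5) * 8.0         # known dot pattern
flat = np.full((32, 32), 100.0)                      # featureless surface
textured = flat + rng.normal(0, 10, (32, 32))
print(characteristic_index(flat), characteristic_index(textured))
print(np.array_equal(prepare_frame(textured, pattern, 5.0), textured))
```
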
  • Patent number: 8655025
    Abstract: Provided is a data analysis device for automatically detecting a step on the ground based on point cloud data representing a three-dimensional shape of a feature surface. A space subject to analysis is divided into a plurality of subspaces. A boundary search unit (22) searches for a boundary formed by the step on a horizontal plane for each of the subspaces. The boundary search unit (22) searches for a step neighborhood area having a predetermined width, in which the points projected on the horizontal plane accumulate to an amount equal to or more than a criterion set in advance and the cloud of points has a difference in height equal to or more than a step threshold set in advance, and searches for a directional line along the distribution of the cloud of points belonging to the step neighborhood area on the horizontal plane as the boundary.
    Type: Grant
    Filed: January 26, 2012
    Date of Patent: February 18, 2014
    Assignee: Pasco Corporation
    Inventors: Shizuo Manabe, Ikuo Kitagawa
  • Publication number: 20140044314
    Abstract: A method for all-in-focus image reconstruction and depth map generation in an imaging device is provided that includes capturing a multi-focus image by the imaging device, partitioning the multi-focus image into a plurality of blocks, determining, for each block of the plurality of blocks, a best inverse multi-focus point spread function (PSF) for reconstructing original image intensity values in the block, wherein the best inverse multi-focus PSF is selected from a plurality of predetermined inverse multi-focus PSFs stored in a memory of the imaging device, and applying to each block of the plurality of blocks the best inverse multi-focus PSF determined for the block to reconstruct the all-in-focus image.
    Type: Application
    Filed: August 12, 2013
    Publication date: February 13, 2014
    Applicant: Texas Instruments Incorporated
    Inventor: Osman Gokhan Sezer
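
The per-block selection in publication 20140064565 can be illustrated as follows, with assumed stand-ins: inverse PSFs are modeled as small convolution kernels applied via FFT, and a gradient-energy sharpness score replaces whatever reconstruction criterion the application actually specifies.

```python
import numpy as np

def apply_kernel(block: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a block with a kernel via FFT (circular convolution is
    acceptable for this illustration)."""
    return np.real(np.fft.ifft2(np.fft.fft2(block) * np.fft.fft2(kernel, block.shape)))

def sharpness(block: np.ndarray) -> float:
    gy, gx = np.gradient(block)
    return float(np.mean(gx**2 + gy**2))

def reconstruct_all_in_focus(image: np.ndarray, inverse_psfs, block=16):
    """For every block, apply each stored inverse multi-focus PSF and keep the
    result with the highest sharpness (a stand-in for the selection criterion)."""
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y+block, x:x+block].astype(float)
            candidates = [apply_kernel(tile, k) for k in inverse_psfs]
            out[y:y+block, x:x+block] = max(candidates, key=sharpness)
    return out

img = np.tile(np.linspace(0, 1, 64), (64, 1))
delta = np.zeros((3, 3)); delta[0, 0] = 1.0          # identity kernel
print(reconstruct_all_in_focus(img, [delta]).shape)
```
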
  • Patent number: 8649565
    Abstract: Described is a system for automatic object localization based on visual simultaneous localization and mapping (SLAM) and cognitive swarm recognition. The system is configured to detect a set of location data corresponding to a current location of a sensor positioned on a platform. A map model of an environment surrounding the sensor is generated based on an input image from the sensor and the location data. In a desired aspect, a cognitive swarm object detection module is used to search for and detect an object of interest. The three-dimensional location of the object of interest relative to the platform is then estimated based on the map model and the location data regarding the sensor. The system described allows for real-time, continuous three-dimensional location updating for moving objects of interest from a mobile platform. A computer-implemented method and computer program product are also described.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: February 11, 2014
    Assignee: HRL Laboratories, LLC
    Inventors: Kyungnam Kim, Michael Daily
  • Patent number: 8649567
    Abstract: In various embodiments, old flood maps may be compared to new flood maps to determine which areas of the flood map have changed. These changed areas may be correlated against geographic area descriptions that are within changed areas of the flood map. The changed areas may also be analyzed to determine whether each area has had a change in status (e.g., from a high risk flood zone to a non-high risk flood zone or vice versa) or a change in zone within a status (e.g., from one flood zone to another flood zone). The information on type of change (or no change) may be used to populate a database that includes geographic area description identifiers. In some embodiments, detection of certain types of changes may initiate a manual comparison of the old and new flood maps to verify the change.
    Type: Grant
    Filed: November 17, 2006
    Date of Patent: February 11, 2014
    Assignee: Corelogic Solutions, LLC
    Inventor: David R. Maltby, II
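
The change classification in patent 8649567 reduces to comparing the old and new zone codes of a geographic area. The sketch below assumes FEMA-style zone codes and an illustrative high-risk subset.

```python
HIGH_RISK = {"A", "AE", "AH", "AO", "V", "VE"}   # high-risk flood zone codes (illustrative subset)

def classify_change(old_zone: str, new_zone: str) -> str:
    """Classify how an area's flood designation changed between an old and a
    new flood map, following the categories named in the abstract above."""
    if old_zone == new_zone:
        return "no change"
    old_high = old_zone in HIGH_RISK
    new_high = new_zone in HIGH_RISK
    if old_high != new_high:
        return "status change"        # e.g. high-risk zone <-> non-high-risk zone
    return "zone change"              # different zone within the same status

for pair in [("X", "X"), ("X", "AE"), ("AE", "AH")]:
    print(pair, "->", classify_change(*pair))
```
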
  • Patent number: 8649593
    Abstract: An image processing apparatus includes a projective transformation unit that performs projective transformation on left and right images captured from different points of view, a projective transformation parameter generating unit that generates a projective transformation parameter used by the projective transformation unit by receiving feature point information regarding the left and right images, a stereo matching unit that performs stereo matching using left and right projective transformation images subjected to projective transformation, and a matching error minimization control unit that computes image rotation angle information regarding the left and right projective transformation images and correspondence information of an error evaluation value of the stereo matching.
    Type: Grant
    Filed: May 25, 2011
    Date of Patent: February 11, 2014
    Assignee: Sony Corporation
    Inventor: Yoshihiro Myokan
  • Patent number: 8644560
    Abstract: An image processing apparatus includes a depth image obtaining unit configured to obtain a depth image including information on distances from an image-capturing position to a subject in a two-dimensional image to be captured; a local tip portion detection unit configured to detect a portion of the subject at a depth and a position close to the image-capturing position as a local tip portion; a projecting portion detection unit configured to detect the local tip portion as a projecting portion in a case where, when each of the blocks is set as a block of interest, the local tip portion of the block of interest becomes the local tip portion closest to the image-capturing position within an area formed of the plurality of blocks adjacent to the block of interest; and a tracking unit configured to continuously track the position of the projecting portion.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: February 4, 2014
    Assignee: Sony Corporation
    Inventor: Yoshihiro Myokan
  • Patent number: 8645099
    Abstract: A depth estimation apparatus and method are provided. The depth estimation method includes grouping a plurality of frame signals generated by a depth pixel into a plurality of frame signal groups which are used to estimate a depth to an object without a depth estimation error caused by an omission of a frame signal, the grouping of the plurality of frame signals being based on whether an omitted frame signal exists in the plurality of frame signals and on a continuous pattern of the plurality of frame signals; and estimating the depth to the object using each of the plurality of frame signal groups.
    Type: Grant
    Filed: February 11, 2011
    Date of Patent: February 4, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong Ki Min, Young Gu Jin
  • Publication number: 20140029805
    Abstract: When reading calibration chevrons during mark-on-belt (MOB) sensor timing calibration, cyan portions or legs of printed chevrons are detected in order to determine a timing window offset adjustment. Depending on which of the six cyan legs on the left side of the chevrons are detected, a determination can be made regarding whether the window needs to be started earlier or later. If only the first two cyan legs on the left side of the chevron are detected, then the MOB sensor timing window is beginning (and ending) too early and an appropriate adjustment can be made to cause the timing window to initiate later. If only the last two cyan legs on the left side of the chevron are detected, then the MOB sensor timing window is beginning (and ending) too late, and appropriate adjustment can be made to cause the timing window to initiate earlier.
    Type: Application
    Filed: July 24, 2012
    Publication date: January 30, 2014
    Applicant: XEROX CORPORATION
    Inventor: James P. Calamita
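
The adjustment logic of publication 20140029805, written out as a sketch; the leg indexing and the return strings are assumptions, and only the too-early/too-late rule follows the abstract.

```python
def timing_window_adjustment(detected_legs):
    """Decide how to shift the MOB sensor timing window from which of the six
    cyan legs on the left side of a chevron were detected. Leg indices 0-5 run
    in the order the legs pass the sensor."""
    detected = set(detected_legs)
    if detected == {0, 1}:             # only the first two cyan legs seen
        return "start window later"    # window opens (and closes) too early
    if detected == {4, 5}:             # only the last two cyan legs seen
        return "start window earlier"  # window opens (and closes) too late
    if detected == set(range(6)):
        return "no adjustment"
    return "inconclusive"

print(timing_window_adjustment([0, 1]))
print(timing_window_adjustment(range(6)))
```
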
  • Publication number: 20140029806
    Abstract: In an object searching apparatus for searching through a database of objects, an image pickup unit repeatedly shoots a subject with the optical axis moved to obtain plural pieces of image data. A distance from the image pickup unit to the subject is calculated based on the plural pieces of image data, and a main object of the subject is clipped from the obtained image data. A calculating unit calculates a real size of the main object of the subject based on a size of the clipped main object on the image data, the calculated distance from the image pickup unit to the subject and a focal length of the image pickup unit. A searching unit accesses the database to search for the sort (category) of the main object of the subject, using the calculated real size of the main object.
    Type: Application
    Filed: June 25, 2013
    Publication date: January 30, 2014
    Inventors: Michihiro NIHEI, Kazuhisa MATSUNAGA, Masayuki HIROHAMA, Kouichi NAKAGOME
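
The real-size calculation in publication 20140029806 follows the pinhole-camera relation; the sketch below assumes the object's size is first converted from pixels to millimetres on the sensor using an assumed pixel pitch.

```python
def real_object_size(size_on_sensor_mm: float, distance_mm: float,
                     focal_length_mm: float) -> float:
    """Pinhole-camera relation used for the real-size estimate:
    real size = size on the sensor x (distance to subject / focal length)."""
    return size_on_sensor_mm * distance_mm / focal_length_mm

def size_on_sensor(size_px: float, pixel_pitch_mm: float) -> float:
    """Convert a clipped object's size in pixels to millimetres on the sensor
    (the pixel pitch is an assumed camera parameter)."""
    return size_px * pixel_pitch_mm

# A 500 px object, 1.5 um pixel pitch, 4 mm lens, subject 2 m away:
mm_on_sensor = size_on_sensor(500, 0.0015)
print(real_object_size(mm_on_sensor, 2000.0, 4.0), "mm")   # -> 375.0 mm
```
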
  • Patent number: 8638983
    Abstract: A digital image processing apparatus and tracking method are provided to rapidly and accurately track a subject location in video images. The apparatus searches for a target image that is most similar to a reference image in a current frame image in which each pixel has luminance data and other data, the reference image being smaller than the current frame image, and includes a similarity calculator for calculating a degree of similarity between the reference image and each of a plurality of matching images that have the same size as the reference image and are portions of the current frame image; and a target image determination unit for determining one of the plurality of matching images as the target image using the degree of similarity obtained by the similarity calculator. The similarity calculator calculates the degree of similarity by applying greater weight to the other data than to the luminance data.
    Type: Grant
    Filed: March 16, 2010
    Date of Patent: January 28, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Soon-keun Chang, Eun-sun Ahn
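
A sketch of the weighted matching in patent 8638983, assuming YCbCr patches and arbitrary weights that favor the chroma channels over luminance; an exhaustive window scan stands in for whatever search strategy the apparatus uses.

```python
import numpy as np

def weighted_similarity(ref: np.ndarray, candidate: np.ndarray,
                        w_luma: float = 0.3, w_chroma: float = 0.7) -> float:
    """Similarity between a reference patch and a same-sized candidate patch,
    with the non-luminance channels weighted more heavily than luminance
    (the Y, Cb, Cr layout and the weights are assumptions). Higher is better."""
    diff = np.abs(ref.astype(float) - candidate.astype(float))
    luma_cost = diff[..., 0].mean()
    chroma_cost = diff[..., 1:].mean()
    return -(w_luma * luma_cost + w_chroma * chroma_cost)

def track_target(ref: np.ndarray, frame: np.ndarray):
    """Scan the frame and return the top-left corner of the best-matching window."""
    rh, rw, _ = ref.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(frame.shape[0] - rh + 1):
        for x in range(frame.shape[1] - rw + 1):
            s = weighted_similarity(ref, frame[y:y+rh, x:x+rw])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos
```
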
  • Patent number: 8639021
    Abstract: An apparatus and method capable of calculating a coordinate transformation parameter without having to utilize a rig are provided. The apparatus and method extract a first feature point based on a plane of first data, project the first feature point onto second data and then extract a second feature point from a part of the second data onto which the first feature point is projected. Then, calibration is performed based on the extracted feature points. Therefore, it is possible to perform calibration immediately as necessary without having to utilize a separate device such as a rig.
    Type: Grant
    Filed: December 7, 2010
    Date of Patent: January 28, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Jo Kim, Dong-Ryeol Park
  • Publication number: 20140023240
    Abstract: A system receives an iris image and segments the iris region. The segmented iris region is mapped to a unit disk and partitioned into local iris regions (or sectors) as a function of the radius and angle. The system calculates localized Zernike moments for a plurality of regions of the unit disk. The localized Zernike moment includes a projection of the local iris region into a space of Zernike polynomial orthogonal basis functions. The system generates an iris feature set from the localized Zernike moments for each partitioned region, excluding the regions that are compromised by occlusion. The iris features are weighted based on the conditions of blur, gaze and occlusion of the iris region. A probe iris image is then matched to a plurality of iris images in a database based on the distance of its feature set to the corresponding plurality of iris feature sets.
    Type: Application
    Filed: July 19, 2012
    Publication date: January 23, 2014
    Applicant: Honeywell International Inc.
    Inventors: Sharath Venkatesha, Saad J. Bedros, Jan Jelinek
  • Patent number: 8634594
    Abstract: A computerized system for displaying and making measurements based upon captured oblique images. The system includes a computer system executing image display and analysis software reading an oblique image having corresponding geo-location data and a data table storing ground plane data, the ground plane data comprising a plurality of facets within an area depicted within the oblique image, the facets having a plurality of elevation data that conforms to at least a portion of terrain depicted within the oblique image; wherein the computer system displays the oblique image, receives a starting point and an end point selected by the user, where one or both points may be above the terrain, and calculates a height difference between the starting and end points dependent upon the geo-location data and the elevation data of a facet of the ground plane data.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: January 21, 2014
    Assignee: Pictometry International Corp.
    Inventors: Stephen L. Schultz, Frank D. Giuffrida, Robert L. Gray, Charles Mondello
  • Publication number: 20140016827
    Abstract: According to an embodiment, an image processing device includes a generator, a determinator, and a processor. The generator is configured to generate a refocused image focused at a predetermined distance from a plurality of unit images for which points on an object are imaged at different positions according to distances between an imaging unit and the positions of the points on the object by the imaging unit. The determinator is configured to determine sampling information including pairs of positions of pixels of the plurality of unit images in the refocused image and pixel values of the pixels. The processor is configured to perform resolution enhancement on a predetermined region including a first position indicated by the sampling information of the refocused image according to an intensity corresponding to a focusing degree of a pixel corresponding to the first position.
    Type: Application
    Filed: July 9, 2013
    Publication date: January 16, 2014
    Inventors: Takuma YAMAMOTO, Yasunori TAGUCHI, Toshiyuki ONO, Nobuyuki MATSUMOTO
  • Patent number: 8630456
    Abstract: The aim is to carry out satisfactory object recognition in a short time. An object recognition method in accordance with an exemplary aspect of the present invention is an object recognition method for recognizing a target object by using a preliminarily-created object model. The object recognition method generates a range image of an observed scene, detects interest points from the range image, extracts first features, the first features being features of an area containing the interest points, carries out a matching process between the first features and second features, the second features being features of an area in the range image of the object model, calculates a transformation matrix based on a result of the matching process, the transformation matrix being for projecting the second features onto a coordinate system of the observed scene, and recognizes the target object with respect to the object model based on the transformation matrix.
    Type: Grant
    Filed: May 12, 2009
    Date of Patent: January 14, 2014
    Assignees: Toyota Jidosha Kabushiki Kaisha, Albert-Ludwigs University Freiburg
    Inventors: Yoshiaki Asahara, Takashi Yamamoto, Mark Van Loock, Bastian Steder, Giorgio Grisetti, Wolfram Burgard
  • Patent number: 8625854
    Abstract: A hand-held mobile 3D scanner (10) for scanning a scene. The scanner (10) comprises a range sensor (11) that is arranged to sense the location of surface points in the scene relative to the scanner (10) and generate representative location information, a texture sensor (12) that is arranged to sense the texture of each surface point in the scan of the scene and generate representative texture information, and a position and orientation sensor (13) that is arranged to sense the position and orientation of the scanner (10) during the scan of the scene and generate representative position and orientation information. A control system (14) is also provided that is arranged to receive the information from each of the sensors and generate data representing the scan of the scene.
    Type: Grant
    Filed: September 8, 2006
    Date of Patent: January 7, 2014
    Assignee: Industrial Research Limited
    Inventors: Robert Jan Valkenburg, David William Penman, Johann August Schoonees, Nawar Sami Alwesh, George Terry Palmer
  • Patent number: 8625850
    Abstract: There are provided an environment recognition device and an environment recognition method. The device obtains position information of a target portion in a detection area, including a relative distance from a subject vehicle; groups continuous target portions into a target object of which position differences in a width direction vertical to an advancing direction of the vehicle and in a depth direction parallel to the advancing direction fall within a first distance; determines that the target object is a candidate of a wall, when the target portions forming the target object form a tilt surface tilting at a predetermined angle or more with respect to a plane vertical to the advancing direction; and determines that the continuous wall candidates of which position differences in the width and depth directions among the wall candidates fall within a second predetermined distance longer than the first predetermined distance are a wall.
    Type: Grant
    Filed: May 16, 2012
    Date of Patent: January 7, 2014
    Assignee: Fuji Jukogyo Kabushiki Kaisha
    Inventor: Seisuke Kasaoki
  • Patent number: 8625855
    Abstract: A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
    Type: Grant
    Filed: February 7, 2013
    Date of Patent: January 7, 2014
    Assignee: Edge 3 Technologies LLC
    Inventor: Tarek El Dokor
  • Patent number: 8620023
    Abstract: A method and apparatus for detecting objects. An object detector associated with a platform and configured to detect a number of objects is used to monitor for the number of objects. In response to detecting the number of objects, a number of distances to the number of objects detected by the object detector are measured using a distance measurement system. A number of geographic locations for the number of objects is identified using the number of distances, a location of the platform, and an orientation of the distance measurement system.
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: December 31, 2013
    Assignee: The Boeing Company
    Inventor: Leonard A. Plotke
  • Patent number: 8620065
    Abstract: Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three-dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other types of movement of the sample relative to the detector.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: December 31, 2013
    Assignee: The Regents of the University of Colorado
    Inventors: Rafael Piestun, Sean Albert Quirin
  • Publication number: 20130343612
    Abstract: Among other things, one or more techniques and/or systems are disclosed for identifying an area of interest comprising a desired object in imagery (e.g., so an image comprising the desired object may be altered in some manner). A determination can be made as to whether a capture event occurs within a proximity mask, where an object is not likely to be out of range if an image of the object is captured from within the proximity mask. For an image captured within the proximity mask, a determination can be made as to whether capture event imagery metadata for the image overlaps a footprint mask for the desired object. If so, the image may be regarded as comprising a discernible view of at least some of the desired object and is thus identified as an area of interest (e.g., that may be modified to accommodate privacy concerns, for example).
    Type: Application
    Filed: June 22, 2012
    Publication date: December 26, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Jeremy Thomas Buch, Charles Frankel, Cody Keawe Yancey
  • Publication number: 20130343613
    Abstract: Methods for distance determination, as used, for example, in parking assistance systems, are described. With the vehicle at a standstill, the method involves detecting a first predefined event that occurs in connection with a pitching motion of the vehicle and, based on the detection of the first event, activating the camera to record a first and a second image of the vehicle's environment with a time reference to the pitching motion. The method also includes processing the first and the second image in order to determine a distance to the object from the displacement of the object in the field of view of the camera that has taken place, in response to the pitching motion, between the points in time at which the first and second images were recorded.
    Type: Application
    Filed: December 1, 2011
    Publication date: December 26, 2013
    Inventors: Thomas Heger, Michael Helmle
  • Patent number: 8615128
    Abstract: For the purpose of 3D scanning the surface of an object by optical double triangulation using the phase-shifting method, more particularly for dental purposes, at least two 3D scans of the same object (1) are carried out at different triangulation angles (θ1, θ2), the first angle of which is known and the second angle of which is known at least approximately. For each pixel (Bi) of the phase related image (φ1(x,y)), a wave number (wz(xi,yi)) is determined using the second phase related image, the integral portion of which is equal to the order (n) of the uniqueness range (E1) in which the respective pixel (Bi) is located. The wave number (wz(x,y)) is optimized, at least for a random sample of m pixels (Bi), by minimizing the non-integral portion of the wave number (wz(xi,yi) − [wz(xi,yi)]).
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: December 24, 2013
    Assignee: Sirona Dental Systems GmbH
    Inventors: Axel Schwotzer, Konrad Klein
  • Publication number: 20130336539
    Abstract: According to an embodiment, a position estimation device includes first and second obtaining units, first and second calculators, and an estimating unit. The first obtaining unit is configured to obtain first data about a size and a position of an object in a first image. The second obtaining unit is configured to obtain second data about a distance to or a position of the object in a second image. The first calculator is configured to calculate, based on the first and second data, weights for first and second actual sizes of the object estimated respectively from the first and second data. The second calculator is configured to calculate a third actual size of the object by using the first and second actual sizes and the weights. The estimating unit is configured to estimate a three-dimensional position of the object by using the third actual size.
    Type: Application
    Filed: December 21, 2012
    Publication date: December 19, 2013
    Inventors: Ryusuke Hirai, Kenichi Shimoyama, Takeshi Mita
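
The weighted fusion in publication 20130336539, sketched with assumed weights and an assumed pinhole relation for turning the fused actual size back into a three-dimensional distance.

```python
def fused_actual_size(size_from_image: float, size_from_range: float,
                      w_image: float, w_range: float) -> float:
    """Combine the two actual-size estimates (one from the 2-D detection, one
    from the range/position data) with their weights; the weights here are
    assumed to be non-negative and need not sum to one."""
    total = w_image + w_range
    return (w_image * size_from_image + w_range * size_from_range) / total

def estimate_depth(real_size: float, size_px: float, focal_px: float) -> float:
    """Back out the distance to the object from its fused real size using the
    pinhole relation distance = focal length x real size / size in pixels."""
    return focal_px * real_size / size_px

fused = fused_actual_size(1.70, 1.60, w_image=0.8, w_range=0.2)   # metres
print(fused, estimate_depth(fused, size_px=120, focal_px=800), "m")
```
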
  • Patent number: 8611591
    Abstract: Described herein are tracking algorithm modifications to handle occlusions when processing a video stream including multiple image frames. Specifically, system and methods for handling both partial and full occlusions while tracking moving and non-moving targets are described. The occlusion handling embodiments described herein may be appropriate for a visual tracking system with supplementary range information.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: December 17, 2013
    Assignee: 21 CT, Inc.
    Inventors: Thayne R. Coffman, Ronald C. Larcom
  • Patent number: 8611641
    Abstract: An apparatus for detecting disparity is described.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: December 17, 2013
    Assignee: Sony Corporation
    Inventor: Hideki Ando
  • Patent number: 8611592
    Abstract: Methods, systems, and apparatus are presented for associating a point of interest with a captured image. In one aspect, metadata associated with a digital image can be accessed, the metadata identifying an image capture location. Further, a depth of field corresponding to the digital image can be determined and one or more points of interest can be identified that are located within the determined depth of field. Additionally, one of the one or more identified points of interest can be selected as an image subject and the metadata associated with the digital image can be edited to include data identifying the selected point of interest.
    Type: Grant
    Filed: August 25, 2010
    Date of Patent: December 17, 2013
    Assignee: Apple Inc.
    Inventors: Alexander David Wallace, Tim Cherna, Eric Hanson, Nikhil Bhatt
  • Patent number: 8611610
    Abstract: A method and apparatus for determining a distance between an optical apparatus and an object by considering a measured nonlinear waveform, as opposed to a mathematically ideal waveform. The method and apparatus may accurately calculate distance information without being affected by a type of waveform projected onto the object and may not require an expensive light source or a light modulator for generating a light with little distortion and nonlinearity. Further, since the method may be able to use a general light source, a general light modulator, and a general optical apparatus, additional costs do not arise. Furthermore, a lookup table, in which previously calculated distance information is stored, may be used, and thus the amount of computation required to be performed to calculate the distance is small, thereby allowing for quick calculation of the distance information in real time.
    Type: Grant
    Filed: July 16, 2010
    Date of Patent: December 17, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yong-hwa Park, Jang-woo You
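
A sketch of lookup-table ranging in the spirit of patent 8611610: distances are read from a precomputed table of measured (nonlinear) phase values by interpolation. The table contents here are fabricated placeholders standing in for a real calibration.

```python
import numpy as np

# Hypothetical calibration: measured (nonlinear) phase values and the distances
# at which they were recorded. In the apparatus such a table would be built
# from the measured waveform rather than from an ideal sinusoid.
PHASE_LUT = np.linspace(0.0, 2 * np.pi, 32)          # measured phase samples
DISTANCE_LUT = np.linspace(0.0, 7.5, 32) ** 1.02     # corresponding distances (m)

def distance_from_phase(measured_phase: float) -> float:
    """Look up the distance for a measured phase value, interpolating between
    the two nearest calibration entries (a minimal sketch of LUT-based ranging)."""
    return float(np.interp(measured_phase, PHASE_LUT, DISTANCE_LUT))

print(round(distance_from_phase(np.pi), 3), "m")
```
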
  • Patent number: 8611607
    Abstract: Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: December 17, 2013
    Assignee: Microsoft Corporation
    Inventors: Matthew Bronder, Oliver Williams, Ryan Geiss, Andrew Fitzgibbon, Jamie Shotton
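
The condensation step of patent 8611607, sketched with a plain 4-connected flood fill and a toy confidence that grows with cluster size and mean probability; the thresholds are assumptions.

```python
import numpy as np

def clusters(prob: np.ndarray, min_p: float = 0.1):
    """Group spatially connected pixels whose probability exceeds min_p
    (4-connected flood fill; min_p is an assumed cut-off)."""
    seen = np.zeros(prob.shape, dtype=bool)
    for sy, sx in zip(*np.nonzero(prob > min_p)):
        if seen[sy, sx]:
            continue
        stack, members = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            members.append((y, x))
            for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                if (0 <= ny < prob.shape[0] and 0 <= nx < prob.shape[1]
                        and not seen[ny, nx] and prob[ny, nx] > min_p):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        yield members

def centroids_with_confidence(prob: np.ndarray):
    """Condense a per-pixel probability map into probability-weighted centroids,
    each with a toy confidence based on cluster size and mean probability."""
    out = []
    for members in clusters(prob):
        ys, xs = zip(*members)
        ps = prob[list(ys), list(xs)]
        cy, cx = np.average(ys, weights=ps), np.average(xs, weights=ps)
        confidence = float(ps.mean()) * min(1.0, len(members) / 50.0)
        out.append(((float(cy), float(cx)), confidence))
    return out

p = np.zeros((20, 20)); p[5:9, 5:9] = 0.9
print(centroids_with_confidence(p))
```
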
  • Publication number: 20130329962
    Abstract: A displacement detection device includes a light source, an image sensor and a processing unit. The light source is configured to illuminate a work surface. The image sensor is configured to capture reflected light from the work surface and to output an image frame. The processing unit is configured to select a window of interest in the image frame having a maximum image feature and to calculate a displacement of the displacement detection device according to the window of interest.
    Type: Application
    Filed: April 8, 2013
    Publication date: December 12, 2013
    Applicant: PixArt Imaging Inc.
    Inventors: Ming-Tsan KAO, Ren-Hau GU, Yu-Hao HUANG
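
A sketch of the window-of-interest selection in publication 20130329962, using local contrast as an assumed image feature and an assumed window size and stride.

```python
import numpy as np

def image_feature(window: np.ndarray) -> float:
    """Toy image-feature measure: local contrast (standard deviation)."""
    return float(window.std())

def select_window_of_interest(frame: np.ndarray, size: int = 16):
    """Slide a window over the frame and return the top-left corner and content
    of the window with the maximum image feature; displacement would then be
    computed from this window only. Size and stride are assumptions."""
    best, best_pos = -1.0, (0, 0)
    for y in range(0, frame.shape[0] - size + 1, size // 2):
        for x in range(0, frame.shape[1] - size + 1, size // 2):
            f = image_feature(frame[y:y+size, x:x+size])
            if f > best:
                best, best_pos = f, (y, x)
    y, x = best_pos
    return best_pos, frame[y:y+size, x:x+size]
```
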
  • Publication number: 20130329963
    Abstract: A storage medium provided by the present invention holds non-transitory processing software for computing a position of an object in a distance measurement system, the execution of the processing software comprising: receiving a plurality of reference image information contained in an image with a speckle pattern, wherein the image is projected from a light beam onto a plurality of reference flat surfaces located at different positions, and the speckle pattern contains a plurality of speckles; receiving object image information contained in an image with the speckle pattern which is projected from the light beam onto an object; obtaining a plurality of comparison results by comparing the plurality of reference image information with the object image information; and computing the position of the object by performing an interpolation operation on the plurality of comparison results.
    Type: Application
    Filed: August 15, 2013
    Publication date: December 12, 2013
    Applicant: PixArt Imaging Inc.
    Inventors: SHU-SIAN YANG, HSIN-CHIA CHEN, REN-HAU GU, SEN-HUANG HUANG
  • Patent number: 8606480
    Abstract: A vehicle travel amount estimation device includes a camera, a taken image storing unit, a compensated image storing unit, a travel amount calculation unit, a vehicle speed and gyro sensor, and a travel amount determination unit. The taken image storing unit and the compensated image storing unit store images taken by the camera. The travel amount calculation unit calculates the amount of travel based on two stored images. The sensors detect the amount of travel of the vehicle. The travel amount determination unit is configured to compare a first amount of travel calculated by the travel amount calculation unit with a second amount of travel detected by the vehicle speed sensor or the like in order to determine the first amount of travel to be the amount of travel of the vehicle when the difference between the first amount and the second amount is smaller than a predetermined value.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: December 10, 2013
    Assignee: Alpine Electronics, Inc.
    Inventors: Takayuki Watanabe, Daishi Mori, Kenji Shida
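
The consistency check of patent 8606480, sketched below. The abstract only specifies accepting the camera-derived estimate when it agrees with the sensor estimate; falling back to the sensor value otherwise is an assumption made here, as is the tolerance.

```python
def confirmed_travel(image_based_m: float, sensor_based_m: float,
                     max_difference_m: float = 0.2) -> float:
    """Accept the camera-derived travel amount only when it agrees with the
    vehicle-speed/gyro estimate to within a predetermined value (the 0.2 m
    tolerance is an assumed figure); otherwise use the sensor value."""
    if abs(image_based_m - sensor_based_m) < max_difference_m:
        return image_based_m
    return sensor_based_m

print(confirmed_travel(1.32, 1.40))   # close enough -> 1.32
print(confirmed_travel(1.32, 2.60))   # implausible  -> 2.60
```
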
  • Publication number: 20130322694
    Abstract: A method and system for calculating an energy efficient route is disclosed. A route calculation application calculates one or more routes from an origin to a destination. For each of the routes, the route calculation application uses impedance factor data associated with each segment in the route. The impedance factor is calculated using probe data when the probe data is available for a road segment. When probe data is unavailable, the impedance factor is calculated using machine learning techniques that analyze the results of the impedance factor classifications for road segments having probe data.
    Type: Application
    Filed: August 8, 2013
    Publication date: December 5, 2013
    Applicant: Navteq B.V.
    Inventors: Praveen J. Arcot, Justin M. Spelbrink, Finn A. Swingley, Matthew G. Lindsay
  • Patent number: 8600619
    Abstract: An approach is provided for custom zooming of geographic representation. A custom zooming application determines an input specifying a level of zoom for rendering a geographic representation presented at a device, the geographic representation including a plurality of objects. The custom zooming application determines respective degrees of relevance of the plurality of objects based, at least in part, on the device, a user of the device, related context information, or a combination thereof. The custom zooming application determines to render one or more of the plurality of objects with at least one different level of visibility with respect to other ones of the plurality of objects based, at least in part, on the respective degrees of relevance, the level of zoom, or a combination thereof.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: December 3, 2013
    Assignee: Nokia Corporation
    Inventors: Elizabeth Bales, Timothy Youngjin Sohn
  • Patent number: 8600110
    Abstract: A system for detecting intruding viewers of a display and responding to an intrusion by editing content. The system includes an electronic media display, a sensor, and a processing circuit. The processing circuit is configured to obtain information from the sensor, determine a visibility envelope of the electronic media display device, analyze the information from the sensor to determine a presence of an intruder within the visibility envelope, distinguish the intruder from an authorized user, and edit any displayed content.
    Type: Grant
    Filed: September 17, 2012
    Date of Patent: December 3, 2013
    Assignee: Elwha LLC
    Inventors: Alistair K. Chan, William D. Duncan, William Gates, Daniel A. Gerrity, Paul Holman, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, Nathan P. Myhrvold, Keith D. Rosema, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 8594370
    Abstract: A range map of a visual scene is generated by a stereo vision and associated image processing system, and is filtered to remove objects beyond a region of interest and for which a collision is not possible, and to remove an associated road surface. Objects clustered in range bins are separated by segmentation. A composite range map is generated using principal components analysis and processed with a connected-components sieve filter. Objects are identified using one or more of a harmonic profile and other features by an object recognition processor using a combination of inclusive, exclusive and harmonic networks to generate a classification metric.
    Type: Grant
    Filed: July 26, 2005
    Date of Patent: November 26, 2013
    Assignee: Automotive Systems Laboratory, Inc.
    Inventors: Gregory G. Schamp, Owen A. Davies, James C. Demro
  • Patent number: 8594417
    Abstract: Systems and methods for inspecting anodes, and smelting management based thereon, are provided. In one embodiment, a system includes an imaging device configured to obtain images of at least one anode assembly, an image processor configured to produce imaging data based on the images, and a data analyzer configured to produce anode characteristic data based on the imaging data. In one embodiment, a method includes the steps of obtaining at least one image of at least a portion of an anode assembly, producing imaging data based on the at least one image, and deriving anode characteristic data based, at least in part, on the imaging data.
    Type: Grant
    Filed: November 27, 2007
    Date of Patent: November 26, 2013
    Assignee: ALCOA Inc.
    Inventors: Jean-Pierre Gagné, Gilles Dufour
  • Patent number: 8594438
    Abstract: A method for the identification of objects in a predetermined target area involves recording a first and a second height profile of the target area, wherein the two height profiles are recorded at a predeterminable time interval. A height difference profile is determined from the first and the second height profile. The height difference profile is subdivided in equidistant horizontal height sections. The positions of the centroids of the surface areas enclosed by the respective contour lines of the horizontal height sections are calculated and the determined height difference profile and the calculated centroids of the surface areas are supplied to a system for classifying objects.
    Type: Grant
    Filed: January 19, 2010
    Date of Patent: November 26, 2013
    Assignee: EADS Deutschland GmbH
    Inventor: Manfred Hiebl
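
A sketch of the height-difference sectioning in patent 8594438, where the area enclosed by each contour line is approximated by the pixels at or above that section height; the section step is an assumed value.

```python
import numpy as np

def height_difference_profile(height_t0: np.ndarray, height_t1: np.ndarray) -> np.ndarray:
    """Difference of two height profiles of the target area recorded at a
    predetermined time interval."""
    return height_t1 - height_t0

def section_centroids(diff: np.ndarray, step: float = 0.5):
    """Slice the height-difference profile into equidistant horizontal sections
    and return the centroid of the area enclosed by each section's contour,
    approximated here by the pixels at or above the section height. The 0.5 m
    step is an assumed value; the centroids would feed an object classifier."""
    centroids = []
    for level in np.arange(step, diff.max() + step, step):
        ys, xs = np.nonzero(diff >= level)
        if len(xs):
            centroids.append((float(level), float(ys.mean()), float(xs.mean())))
    return centroids

d = np.zeros((50, 50))
d[20:30, 20:30] = 2.0                       # a 2 m tall object appeared between scans
print(section_centroids(height_difference_profile(np.zeros((50, 50)), d)))
```
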
  • Publication number: 20130307966
    Abstract: A depth measurement apparatus calculates depth information on a subject in an image by using a plurality of images having different blurs taken under different imaging parameters, and includes a region segmentation unit that segments at least one of the images into regions based on an image feature amount, wherein in each of the regions pixels are presumed to be substantially equal in depth to the subject, and a depth calculation unit that calculates a depth for each region resulting from the segmentation by the region segmentation unit and serving as a processing target region for depth calculation, and sets the calculated depth as the depth of the processing target region.
    Type: Application
    Filed: April 24, 2013
    Publication date: November 21, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Satoru Komatsu
  • Publication number: 20130308826
    Abstract: A technique that enables an image distortion caused on a pseudo image of an object to be reduced is provided. In order to achieve the object, an image processor includes a first obtaining section for obtaining a base image, a second obtaining section for obtaining first pieces of distance information, a first generating section for generating second pieces of distance information by executing a process for reducing dispersion of the first pieces of distance information, and a second generating section for generating a pseudo image constituting a stereoscopic image. The first generating section executes the reducing process so that strength for reducing the dispersion of the first pieces of distance information in a second direction crossing a first direction on an original distance image relating to the first pieces of distance information is stronger than strength for reducing the dispersion in the first direction on the original distance image.
    Type: Application
    Filed: January 27, 2012
    Publication date: November 21, 2013
    Applicant: KONICA MINOLTA, INC.
    Inventor: Motohiro Asano
  • Patent number: 8588471
    Abstract: A mapping method is provided. The environment is scanned to obtain depth information of environmental obstacles. The image of the environment is captured to generate an image plane. The depth information of environmental obstacles is projected onto the image plane, so as to obtain projection positions. At least one feature vector is calculated from a predetermined range around each projection position. The environmental obstacle depth information and the environmental feature vector are merged to generate a sub-map at a certain time point. Sub-maps at all time points are combined to generate a map. In addition, a localization method using the map is also provided.
    Type: Grant
    Filed: February 4, 2010
    Date of Patent: November 19, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Hsiang-Wen Hsieh, Hung-Hsiu Yu, Yu-Kuen Tsai, Wei-Han Wang, Chin-Chia Wu
  • Patent number: 8588515
    Abstract: A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-view image.
    Type: Grant
    Filed: January 28, 2010
    Date of Patent: November 19, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gun Bang, Gi-Mun Um, Eun-Young Chang, Taeone Kim, Nam-Ho Hur, Jin-Woong Kim, Soo-In Lee