Range Or Distance Measuring Patents (Class 382/106)
  • Publication number: 20130308826
    Abstract: A technique is provided that enables image distortion arising in a pseudo image of an object to be reduced. To this end, an image processor includes a first obtaining section for obtaining a base image, a second obtaining section for obtaining first pieces of distance information, a first generating section for generating second pieces of distance information by executing a process for reducing dispersion of the first pieces of distance information, and a second generating section for generating a pseudo image constituting a stereoscopic image. The first generating section executes the reducing process so that the dispersion of the first pieces of distance information is reduced more strongly in a second direction, which crosses a first direction on the original distance image relating to the first pieces of distance information, than in the first direction.
    Type: Application
    Filed: January 27, 2012
    Publication date: November 21, 2013
    Applicant: KONICA MINOLTA, INC.
    Inventor: Motohiro Asano
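    The dispersion-reduction step described in the abstract above amounts to smoothing the distance image more strongly along one image direction than the other. A minimal sketch of such anisotropic smoothing, assuming a NumPy distance image and using a Gaussian filter purely as an example (the abstract does not name a specific filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reduce_dispersion(distance_image: np.ndarray,
                      sigma_first: float = 1.0,
                      sigma_second: float = 3.0) -> np.ndarray:
    """Smooth a distance image anisotropically.

    The second (row) direction is smoothed more strongly than the first
    (column) direction, i.e. sigma_second > sigma_first.
    """
    # gaussian_filter takes one sigma per axis: (rows, columns).
    return gaussian_filter(distance_image, sigma=(sigma_second, sigma_first))

# Example: a noisy synthetic distance image.
rng = np.random.default_rng(0)
raw = 5.0 + 0.2 * rng.standard_normal((240, 320))
smoothed = reduce_dispersion(raw)
print(raw.std(), smoothed.std())   # dispersion is reduced after filtering
```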
  • Patent number: 8588471
    Abstract: A mapping method is provided. The environment is scanned to obtain depth information of environmental obstacles. The image of the environment is captured to generate an image plane. The depth information of environmental obstacles is projected onto the image plane, so as to obtain projection positions. At least one feature vector is calculated from a predetermined range around each projection position. The environmental obstacle depth information and the environmental feature vector are merged to generate a sub-map at a certain time point. Sub-maps at all time points are combined to generate a map. In addition, a localization method using the map is also provided.
    Type: Grant
    Filed: February 4, 2010
    Date of Patent: November 19, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Hsiang-Wen Hsieh, Hung-Hsiu Yu, Yu-Kuen Tsai, Wei-Han Wang, Chin-Chia Wu
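    The projection step in the abstract above is a standard pinhole projection of the scanned obstacle points into the camera image, followed by a feature computed from a window around each projected point. A rough sketch under a pinhole model; the intrinsics (fx, fy, cx, cy), patch size, and the normalized-patch feature are assumptions for illustration, not values from the patent:

```python
import numpy as np

def project_points(points_cam: np.ndarray, fx: float, fy: float,
                   cx: float, cy: float) -> np.ndarray:
    """Project Nx3 points (camera frame, Z forward) onto the image plane."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.stack([u, v], axis=1)

def patch_feature(image: np.ndarray, u: float, v: float, half: int = 8):
    """A simple feature vector: the normalized patch around a projection."""
    r, c = int(round(v)), int(round(u))
    patch = image[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    patch = patch.astype(float)
    return (patch - patch.mean()) / (patch.std() + 1e-9)

# Obstacle points from the range scanner (camera frame) and a grayscale image.
points = np.array([[0.5, 0.1, 4.0], [-1.0, 0.2, 6.0]])
image = np.zeros((480, 640))
uv = project_points(points, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
features = [patch_feature(image, u, v).ravel() for u, v in uv]
```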
  • Patent number: 8588515
    Abstract: A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-view image.
    Type: Grant
    Filed: January 28, 2010
    Date of Patent: November 19, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gun Bang, Gi-Mun Um, Eun-Young Chang, Taeone Kim, Nam-Ho Hur, Jin-Woong Kim, Soo-In Lee
  • Patent number: 8588480
    Abstract: A method for generating a density image of an observation zone over a given time interval, in which a plurality of images of the observation zone is acquired and, for each acquired image, the following steps are carried out: a) detection of zones of pixels standing out from the fixed background of the image; b) detection of individuals; c) for each individual detected, determination of the elementary surface areas occupied by that individual; and d) incrementation of a level of intensity of the elementary surface areas thus determined in the density image.
    Type: Grant
    Filed: February 12, 2009
    Date of Patent: November 19, 2013
    Assignee: CLIRIS
    Inventors: Alexandre Zeller, Alexandre Revue
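    Step d) of the abstract above is an accumulation: each detected individual's footprint increments the density image. A compact sketch of that accumulation, assuming boolean foreground masks are already available per frame (the background and individual detection of steps a) and b) is outside the scope of this sketch, and the minimum blob area is an assumption):

```python
import numpy as np
from scipy.ndimage import label

def accumulate_density(foreground_masks, min_area: int = 50) -> np.ndarray:
    """Build a density image from a sequence of boolean foreground masks.

    Each connected component large enough to be an individual increments
    the density image over the pixels (elementary surface areas) it covers.
    """
    density = np.zeros(foreground_masks[0].shape, dtype=np.int32)
    for mask in foreground_masks:
        labels, n = label(mask)                      # b) candidate individuals
        for i in range(1, n + 1):
            blob = labels == i
            if blob.sum() >= min_area:               # c) area occupied
                density[blob] += 1                   # d) increment intensity
    return density
```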
  • Patent number: 8588465
    Abstract: A method of tracking a target includes classifying a pixel having a pixel address with one or more pixel cases. The pixel is classified based on one or more observed or synthesized values. An example of an observed value for a pixel address includes an observed depth value obtained from a depth camera. Examples of synthesized values for a pixel address include a synthesized depth value calculated by rasterizing a model of the target; one or more body-part indices estimating a body part corresponding to that pixel address; and one or more player indices estimating a target corresponding to that pixel address. One or more force vectors are calculated for the pixel based on the pixel case, and the force vector is mapped to one or more force-receiving locations of the model representing the target to adjust the model representing the target into an adjusted pose.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: November 19, 2013
    Assignee: Microsoft Corporation
    Inventor: Ryan M. Geiss
  • Publication number: 20130300860
    Abstract: Provided is a depth measurement apparatus that measures depth information on a subject by using a plurality of images taken under different imaging parameters and having different blurs, the depth measurement apparatus including a spatial frequency determination unit that determines a spatial frequency band from a spatial frequency present in at least one of the plurality of images, an image comparison unit that compares the plurality of images by using the component of the spatial frequency band of the plurality of images and outputs a depth dependence value dependent on the depth of the subject, and a depth calculation unit that calculates the depth from the depth dependence value and the spatial frequency band.
    Type: Application
    Filed: May 2, 2013
    Publication date: November 14, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Satoru Komatsu
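    One way to read the comparison step in the abstract above is as a ratio of energies in a chosen spatial-frequency band of the two differently blurred images; that ratio changes with defocus and hence with subject depth. A hedged illustration of that reading only; the band limits are assumptions, and the calibration mapping from the depth dependence value to metric depth is left out:

```python
import numpy as np

def band_energy(image: np.ndarray, f_lo: float, f_hi: float) -> float:
    """Energy of the image spectrum inside an annular frequency band."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.hypot(fy, fx)
    band = (radius >= f_lo) & (radius < f_hi)
    return float(np.sum(np.abs(F[band]) ** 2))

def depth_dependence_value(img_a, img_b, f_lo=0.05, f_hi=0.15) -> float:
    """Compare two differently blurred images inside the selected band."""
    return band_energy(img_a, f_lo, f_hi) / (band_energy(img_b, f_lo, f_hi) + 1e-12)

# Mapping this value to metric depth would require a calibration of the
# camera's defocus behaviour, which the apparatus computes separately.
```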
  • Patent number: 8581997
    Abstract: Portable communication devices transmit digital images and their location information to a central server. If a particular location is often photographed it can be designated as a hot spot. Thereafter, if a communication device is currently transmitting from within a vicinity of the hot spot, based on the location data received from the communication device, notification data can be transmitted to the communication device for notifying the user of the hot spot. The notification data can include directional information for the user to access on the communication device for enabling the user to find the hot spot.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: November 12, 2013
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: Tomi Lahcanski, Dustin L. Winters
  • Patent number: 8582803
    Abstract: Determination of human behavior from an alignment of data streams includes acquiring visual image primitives from a video input comprising visual information relevant to a human activity. The primitives are temporally aligned to an optimally hypothesized sequence of primitives transformed from a sequence of transactions as a function of a distance metric between the observed primitive sequence and the transformed primitive sequence. More particularly, transforming includes comparing the distance metric costs and choosing and performing the lowest-cost option among temporally matching the observed primitives to one or more transactions, deleting a primitive, or associating a primitive with a pseudo transaction marker. Accordingly, alerts are issued based on analysis of the transformation of primitives.
    Type: Grant
    Filed: October 15, 2010
    Date of Patent: November 12, 2013
    Assignee: International Business Machines Corporation
    Inventors: Lei Ding, Quanfu Fan, Prasad Gabbur, Arun Hampapur, Sachiko Miyazawa, Sharathchandra U. Pankanti
  • Patent number: 8582823
    Abstract: An image processing apparatus includes: a motion detection unit detecting a motion of a subject to be evaluated by using an image of the subject to be evaluated; a correlation calculation unit calculating a temporal change correlation between motion amounts of a plurality of portions of the subject to be evaluated, by using a motion vector indicating the motion of the subject to be evaluated, which is detected by the motion detection unit; and an evaluation value calculation unit calculating an evaluation value to evaluate cooperativity of the motion of the subject to be evaluated, by using the correlation calculated by the correlation calculation unit.
    Type: Grant
    Filed: October 7, 2011
    Date of Patent: November 12, 2013
    Assignee: Sony Corporation
    Inventors: Takeshi Kunihiro, Tomohiro Hayakawa, Masashi Uchida, Eriko Matsui
  • Patent number: 8582820
    Abstract: A method uses an image capture device to identify object range information, and includes: providing an image capture device having an image sensor, a coded aperture, and a lens; and using the image capture device to capture a digital image of the scene from light passing through the lens and the coded aperture, the scene having a plurality of objects. The method further includes: dividing the digital image into a set of blocks; assigning a point spread function (psf) value to each of the blocks; combining contiguous blocks in accordance with their psf values; producing a set of blur parameters based upon the psf values of the combined blocks and the psf values of the remaining blocks; producing a set of deblurred images based upon the captured image and each of the blur parameters; and using the set of deblurred images to determine the range information for the objects in the scene.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: November 12, 2013
    Assignee: Apple Inc.
    Inventors: Paul James Kane, Sen Wang
  • Publication number: 20130293679
    Abstract: A method for processing data includes receiving a depth map of a scene containing at least an upper body of a humanoid form. The depth map is processed so as to identify a head and at least one arm of the humanoid form in the depth map. Based on the identified head and at least one arm, and without reference to a lower body of the humanoid form, an upper-body pose, including at least three-dimensional (3D) coordinates of shoulder joints of the humanoid form, is extracted from the depth map.
    Type: Application
    Filed: May 2, 2012
    Publication date: November 7, 2013
    Applicant: PRIMESENSE LTD.
    Inventors: Amiad Gurman, Oren Tropp
  • Patent number: 8577608
    Abstract: A method essentially comprising a step of storing plots in plot meshes, each plot mesh receiving plots sent thereto by detector means and retaining in memory at least the altitude of the highest plot and the number of plots neighboring said highest plot in said mesh; the altitude and the number of neighboring plots are updated each time a new plot is sent to the plot mesh. A step of rejecting plot meshes presenting a number of neighboring plots that is less than a predetermined rejection threshold value, the rejection threshold value being a function of the position of the mesh relative to the detector means. A step of preparing the local terrain elevation database from the plot meshes (M(i,j)) that are not rejected.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: November 5, 2013
    Assignee: Eurocopter
    Inventors: Marianne Gillet, Francois-Xavier Filias, Richard Pire
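    A small data-structure sketch of the mesh bookkeeping described in the abstract above. It is only a sketch under loud assumptions: "neighboring" is treated loosely as plots within an altitude tolerance of the current highest plot, and the distance-dependent rejection threshold is an invented example, since the abstract gives neither the tolerance nor the threshold function:

```python
import math
from collections import defaultdict

class PlotMesh:
    """One terrain cell: highest plot altitude and its neighbour count."""
    def __init__(self):
        self.max_altitude = -math.inf
        self.neighbour_count = 0

    def add_plot(self, altitude: float, tolerance: float = 0.5):
        # Assumption: a "neighbouring" plot is one within `tolerance` of the
        # highest plot's altitude; the patent's exact criterion may differ.
        if altitude > self.max_altitude + tolerance:
            self.max_altitude = altitude
            self.neighbour_count = 1          # new highest plot, restart count
        elif altitude > self.max_altitude - tolerance:
            self.neighbour_count += 1         # plot neighbouring the highest

def rejection_threshold(i: int, j: int) -> int:
    # Assumption: farther meshes receive fewer plots, so require fewer neighbours.
    distance = math.hypot(i, j)
    return max(2, int(10 - distance // 50))

meshes = defaultdict(PlotMesh)
# ... meshes[(i, j)].add_plot(z) for every plot produced by the detector ...
elevation_db = {ij: m.max_altitude for ij, m in meshes.items()
                if m.neighbour_count >= rejection_threshold(*ij)}
```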
  • Patent number: 8577595
    Abstract: A device, system, and method for generating location and path-map data for displaying a location and path-map are disclosed.
    Type: Grant
    Filed: July 17, 2008
    Date of Patent: November 5, 2013
    Assignee: Memsic, Inc.
    Inventors: Yang Zhao, Jin Liang
  • Patent number: 8577126
    Abstract: A method for facilitating cooperation between humans and remote vehicles comprises creating image data, detecting humans within the image data, extracting gesture information from the image data, mapping the gesture information to a remote vehicle behavior, and activating the remote vehicle behavior. Alternatively, voice commands can be used to activate the remote vehicle behavior.
    Type: Grant
    Filed: April 11, 2008
    Date of Patent: November 5, 2013
    Assignee: iRobot Corporation
    Inventors: Christopher Vernon Jones, Odest Chadwicke Jenkins, Matthew M. Loper
  • Patent number: 8577084
    Abstract: A visual target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses and receiving an observed depth image of the human target from a source. The observed depth image is compared to the model. A refine-z force vector is then applied to one or more force-receiving locations of the model to move a portion of the model towards a corresponding portion of the observed depth image if that portion of the model is Z-shifted from that corresponding portion of the observed depth image.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: November 5, 2013
    Assignee: Microsoft Corporation
    Inventor: Ryan M. Geiss
  • Patent number: 8577085
    Abstract: A target tracking method includes modeling the target in a first frame with a first frame iteration of a machine-readable model and receiving an observed depth image of a second frame of a scene including the target. The first frame iteration of the machine-readable model is then adjusted into a second frame iteration of the machine-readable model based on the observed depth image of the second frame.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: November 5, 2013
    Assignee: Microsoft Corporation
    Inventor: Ryan M. Geiss
  • Patent number: 8577089
    Abstract: Provided is a depth image unfolding apparatus and method that may remove a depth fold from a depth image to restore a three-dimensional (3D) image. The depth image unfolding apparatus may include an input unit to receive inputted multiple depth images with respect to the same scene, the multiple depth images being photographed based on different modulation frequencies of a fixed photographing device, a depth fold estimator to estimate a number of depth folds based on a distance between multiple three-dimensional (3D) points of multiple pixels indicating the same location of the scene in the multiple depth images, and an output unit to output the multiple depth images from which depth folds are removed based on the estimated number of depth folds.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: November 5, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ouk Choi, Hwa Sup Lim, Kee Chang Lee, Seung Kyu Lee
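    Depth folds arise because a time-of-flight camera measures range modulo its unambiguous range c/(2f). With two modulation frequencies of the same scene, the fold counts can be estimated by finding the combination that makes the two unfolded depths agree. A simplified per-pixel sketch of that standard two-frequency idea (the patent works on distances between 3D points across multiple images; this shows only the underlying modulo principle):

```python
import itertools
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def unfold_depth(d1: float, d2: float, f1: float, f2: float,
                 max_folds: int = 5) -> float:
    """Estimate unfolded depth from two folded measurements d1, d2 (metres)
    taken at modulation frequencies f1, f2 (Hz)."""
    r1, r2 = C / (2 * f1), C / (2 * f2)        # unambiguous ranges
    best = None
    for k1, k2 in itertools.product(range(max_folds), repeat=2):
        cand1, cand2 = d1 + k1 * r1, d2 + k2 * r2
        err = abs(cand1 - cand2)
        if best is None or err < best[0]:
            best = (err, 0.5 * (cand1 + cand2))
    return best[1]

# A point 11.2 m away seen at 20 MHz (range ~7.49 m) and 16 MHz (range ~9.37 m):
true_d = 11.2
f1, f2 = 20e6, 16e6
d1 = true_d % (C / (2 * f1))
d2 = true_d % (C / (2 * f2))
print(unfold_depth(d1, d2, f1, f2))   # ~11.2
```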
  • Patent number: 8575527
    Abstract: A vehicle including electro-optic (EO) imaging has a vehicle body having an outer surface including a front portion and a side portion, wherein the side portion includes a plurality of portholes. A propulsion source is within the vehicle body for moving the vehicle. A fixed EO imaging system having a field-of-regard (FOR) includes a plurality of fixed EO imaging sub-systems arrayed within the vehicle body. The fixed EO imaging sub-systems each have a different field-of-view (FOV) for providing a portion of the FOR and include a camera affixed within the vehicle body and an optical window secured to one of the portholes for transmitting electromagnetic radiation received from one of the portions of the FOR to the camera, wherein the cameras each generate image data representing one of the portions of the FOR therefrom. A processor is coupled to receive the image data from the plurality of fixed EO imaging sub-systems for combining the image data to provide composite image data spanning the FOR.
    Type: Grant
    Filed: November 10, 2010
    Date of Patent: November 5, 2013
    Assignee: Lockheed Martin Corporation
    Inventor: James A. Fry
  • Publication number: 20130287262
    Abstract: A method (20) is described for optically measuring the three-dimensional location of one or more wires W, in a group of wires W1-Wn, such as overhead power cables in an electric rail system. A first step (22) comprises obtaining stereoscopic image data for each of the wires W from the first and second spaced-apart stereoscopic camera pairs 10a and 10b, which lie in the common plane P1. At step (24), image data from the first and second stereoscopic camera pairs 10a and 10b is processed to identify each of the wires W in the region of interest (12). At step (26), a determination is made of the location in 3D space of selected identified wires W using image data from one of the cameras C1 or C2 in the first camera pair 10a and one of the cameras C3 or C4 in the second camera pair 10b.
    Type: Application
    Filed: January 19, 2011
    Publication date: October 31, 2013
    Inventor: Ian Stewart Blair
  • Patent number: 8571265
    Abstract: It is an object to measure the position of a feature around a road. An image memory unit stores images in which the neighborhood of the road is captured. Further, a three-dimensional point cloud model memory unit 709 stores, as a road surface shape model, a point cloud showing three-dimensional coordinates obtained by laser measurement carried out simultaneously with the image-capturing of the images. Using an image point inputting unit 342, a pixel on a feature of a measurement target is specified by a user as a measurement image point. A neighborhood extracting unit 171 extracts, from the point cloud, a point which is located adjacent to the measurement image point and superimposed on the feature of the measurement target. A feature position calculating unit 174 outputs the three-dimensional coordinates shown by the extracted point as the three-dimensional coordinates of the feature of the measurement target.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: October 29, 2013
    Assignees: Mitsubishi Electric Corporation, Waseda University
    Inventors: Junichi Takiguchi, Naoyuki Kajiwara, Yoshihiro Shima, Ryujiro Kurosaki, Takumi Hashizume
  • Publication number: 20130278493
    Abstract: A gesture control method includes steps of capturing at least one image; detecting whether there is a face in the at least one image; if there is a face in the at least one image, detecting whether there is a hand in the at least one image; if there is a hand in the at least one image, identifying a gesture performed by the hand and identifying a relative distance or a relative moving speed between the hand and the face; and executing a predetermined function in a display screen according to the gesture and the relative distance or according to the gesture and the relative moving speed.
    Type: Application
    Filed: September 5, 2012
    Publication date: October 24, 2013
    Inventors: Shou-Te Wei, Chia-Te Chou, Hsun-Chih Tsao, Chih-Pin Liao
  • Publication number: 20130278755
    Abstract: Provided is a method of spatially referencing a plurality of images captured from a plurality of different locations within an indoor space by determining the location from which the plurality of images were captured. The method may include obtaining a plurality of distance-referenced panoramas of an indoor space. The distance-referenced panoramas may each include a plurality of distance-referenced images each captured from one position in the indoor space and at a different azimuth from the other distance-referenced images, a plurality of distance measurements, and orientation indicators each indicative of the azimuth of the corresponding one of the distance-referenced images. The method may further include determining the location of each of the distance-referenced panoramas based on the plurality of distance measurements and the orientation indicators and associating in memory the determined locations with the plurality of distance-referenced images captured from the determined location.
    Type: Application
    Filed: March 4, 2013
    Publication date: October 24, 2013
    Inventors: Alexander T. Starns, Arjun Raman, Gadi Royz
  • Publication number: 20130279760
    Abstract: Disclosed herein is a location correction apparatus and method. The location correction apparatus includes a reference object search unit for searching a pre-stored geographic object database (DB) for one or more objects corresponding to objects included in a captured image and setting a reference object to be used to correct a location among the one or more objects that have been found. A reference point extraction unit sets reference points from the set reference object. A location determination unit obtains an actual distance between the reference points and calculates a location using the actual distance, a distance between the reference points included in the captured image, and metadata of the captured image. Therefore, the present invention can improve positioning accuracy and can be applied to high-quality location-based services or space information services thanks to the improved accuracy.
    Type: Application
    Filed: December 4, 2012
    Publication date: October 24, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seong-Ho Lee, Jae-Chul Kim, Yoon-Seop Chang
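    The distance step in the abstract above follows from the pinhole relation: if two reference points are a known real-world distance X apart and appear x pixels apart in the image, the camera-to-object distance is roughly f·X/x for a focal length f in pixels. A small, hedged illustration; the focal length and numbers are invented for the example, and the fronto-parallel assumption is a simplification:

```python
def camera_distance(real_separation_m: float,
                    pixel_separation: float,
                    focal_length_px: float) -> float:
    """Approximate camera-to-object distance from two reference points,
    assuming the points are roughly fronto-parallel to the camera."""
    return focal_length_px * real_separation_m / pixel_separation

# Two building corners known (from the geographic object DB) to be 12 m apart
# appear 480 px apart in an image taken with a 1000 px focal length:
print(camera_distance(12.0, 480.0, 1000.0))   # 25.0 m from the facade
```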
  • Patent number: 8564710
    Abstract: A digital camera includes: a display section displaying a photographing image obtained by photographing a subject; a signal processing and control section reading out information related to the subject and information related to surroundings of the subject from a database on the basis of position information and compass direction information about the digital camera; and a display control section selecting the information related to the subject according to the lens condition and performing control so as to display the selected information on the display section by superimposing the selected information on the photographing image.
    Type: Grant
    Filed: September 16, 2011
    Date of Patent: October 22, 2013
    Assignee: Olympus Imaging Corp.
    Inventors: Osamu Nonaka, Manabu Ichikawa, Koichi Nakata
  • Patent number: 8564656
    Abstract: The invention relates to a method for recognizing surface characteristics of metallurgical products, especially continuously cast products and rolled products. According to said method, a defined section of the product surface (12, 12′) is irradiated by at least two radiation sources of different wavelengths, from different directions, and the irradiated surface section is optoelectronically detected. Three light sources (21, 22, 23) are oriented towards the product surface (12, 12′), as radiation sources, under the same angle (a), the positions thereof being in three planes (E1, E2, E3) forming a 120° angle and being perpendicular to the product surface (12, 12′). In this way, instructive information about metallurgical products can be determined and stored in a very short space of time such that the products can be determined in a perfectly identified manner for the reprocessing, in terms of the surface quality or surface structure thereof.
    Type: Grant
    Filed: March 19, 2008
    Date of Patent: October 22, 2013
    Assignee: SMS Concast AG
    Inventor: Tobias Rauber
  • Patent number: 8565476
    Abstract: A target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses. The machine-readable model includes a plurality of joints, including one or more magnetism joints, and each joint has a three-dimensional world space position. The method further includes receiving an observed depth image of the human target from a source. The observed depth image includes a plurality of observed pixels. A magnetism body part is assigned to one or more of the plurality of observed pixels, and a magnetism joint position is estimated based on world space positions of the one or more observed pixels assigned the magnetism body part. A joint of the machine-readable model is then shifted toward the magnetism joint position.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: October 22, 2013
    Assignee: Microsoft Corporation
    Inventor: Ryan M. Geiss
  • Patent number: 8565477
    Abstract: A target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses and receiving an observed depth image of the human target from a source. One or more push force vectors are applied to one or more force-receiving locations of the model to push the model in an XY plane towards a silhouette of the human target in the observed depth image when portions of the model are shifted away from the silhouette of the human target in the observed depth image. One or more pull force vectors are applied to one or more force-receiving locations of the model to pull the model in an XY plane towards the silhouette of the human target in the observed depth image when portions of the observed depth image are shifted away from the silhouette of the model.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: October 22, 2013
    Assignee: Microsoft Corporation
    Inventor: Ryan M. Geiss
  • Patent number: 8565487
    Abstract: A system for determining a kinetic parameter of an object, the system includes: (a) a target, that includes a calibration facilitating pattern and which is connected to the object, so that a movement of the target correlates with a movement of the object; (b) an optical source, which is adapted to illuminate the target; (c) an optical sensor, adapted to generate detection signals in response to light received from the target; and (d) a processor, adapted to determine a calibration parameter and the kinetic parameter in response to the detection signals and to detection signals reference information.
    Type: Grant
    Filed: February 6, 2008
    Date of Patent: October 22, 2013
    Assignee: MediTouch Ltd.
    Inventors: Ziv Kuniz, Giora Ein-Zvi, Avraham Feazadeh, Motti Haridim
  • Publication number: 20130272581
    Abstract: A method and apparatus for determining a position and attitude of at least one camera by calculating and extracting estimated rotation and translation values from an estimated fundamental matrix based on information from at least a first and second 2D image. Variable substitution is utilized to strengthen derivatives and provide a more rapid convergence. A solution is provided for solving position and orientation from correlated point features in images using a method that solves for both rotation and translation simultaneously.
    Type: Application
    Filed: October 1, 2010
    Publication date: October 17, 2013
    Applicant: SAAB AB
    Inventor: Anders Modén
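    For context, the classical way to obtain the fundamental matrix that the abstract above starts from is the normalized eight-point algorithm: stack one linear constraint per point correspondence, solve by SVD, and enforce rank 2. A compact sketch of that textbook estimator only; it is not SAAB's specific parameterization, variable substitution, or the simultaneous rotation/translation solve claimed in the patent:

```python
import numpy as np

def normalize(pts):
    """Shift to centroid and scale so mean distance from origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - centroid, axis=1))
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def fundamental_matrix(pts1: np.ndarray, pts2: np.ndarray) -> np.ndarray:
    """Eight-point estimate of F with x2^T F x1 = 0; pts are Nx2, N >= 8."""
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt    # enforce rank 2
    return T2.T @ F @ T1                      # undo normalization
```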
  • Patent number: 8559675
    Abstract: A driving support device includes an image-capturer that captures side rear images of a vehicle, a distance measurer that measures a distance between the vehicle and another vehicle, and a vehicle detector that detects the other vehicle in the captured images. A superimposed image generator calculates a distance in units of the vehicle length, based on information concerning the vehicle length stored in a vehicle length information storage and the distance to the other vehicle detected by the distance measurer, and generates a superimposed image based on the calculated distance. A display image generator synthesizes the generated superimposed image on a side peripheral image including the captured side rear images, and displays the image on a display installed in a position at which the driver's field of front vision is not obstructed.
    Type: Grant
    Filed: April 15, 2010
    Date of Patent: October 15, 2013
    Assignee: Panasonic Corporation
    Inventors: Tateaki Nogami, Yuki Waki, Kazuhiko Iwai
  • Patent number: 8559674
    Abstract: A moving state estimating device (100) includes a feature point candidate extracting part (10) configured to extract feature point candidates in an image taken at a first time point by an imaging sensor (2) mounted on a vehicle, a positional relation obtaining part (11) configured to obtain a positional relation of an actual position of each feature point candidate relative to the vehicle, a feature point selecting part (12) configured to select a feature point candidate as a feature point based on the positional relation of the feature point candidate, a corresponding point extracting part (13) configured to extract a corresponding point in an image taken by the imaging sensor (2) at a second time point corresponding to the selected feature point, and a moving state estimating part configured to estimate a moving state of the vehicle from the first time point to the second time point based on a coordinate of the selected feature point and a coordinate of the corresponding point.
    Type: Grant
    Filed: December 24, 2008
    Date of Patent: October 15, 2013
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Naohide Uchida, Tatsuya Shiraishi
  • Patent number: 8554462
    Abstract: In a method for controlling an unmanned aerial vehicle (UAV), a digital image is obtained by an image capturing device of the UAV. The method detects an object in the digital image, determines a distance between the detected object and the UAV, and obtains a flight direction of the UAV if the distance is less than a preset value. The method further calculates a relative position and a relative angle between the detected object and the UAV, determines a flight limiting range of the UAV according to the relative position and the relative angle, and controls the flight direction of the UAV according to the flight limiting range.
    Type: Grant
    Filed: January 13, 2012
    Date of Patent: October 8, 2013
    Assignee: Hon Hai Precision Industry Co., Ltd.
    Inventors: Hou-Hsien Lee, Chang-Jung Lee, Chih-Ping Lo
  • Patent number: 8553946
    Abstract: A method and apparatus are described for rendering a display image generated from digital map information. The method includes the steps of: determining elevation information (20a) from the digital map information; determining display scale information (28) for the display image; and determining (22) a shading value to apply to a pixel in the display image, and applying the shading value to the respective pixel in the display image, to generate a display (24) that represents elevation information by pixel shading. The shading value varies as a function of the elevation information and the display scale information, whereby the display is generated to represent elevation information by pixel shading that varies with display scale.
    Type: Grant
    Filed: January 9, 2008
    Date of Patent: October 8, 2013
    Assignee: Tomtom International B.V.
    Inventor: Alexandru Serbanescu
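    A standard Lambertian hillshade computation, with the patent's scale dependence sketched as a weight on the shading term. The specific weighting function, light direction, and cell size are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def hillshade(elevation: np.ndarray, display_scale: float,
              azimuth_deg: float = 315.0, altitude_deg: float = 45.0,
              cellsize: float = 30.0) -> np.ndarray:
    """Lambertian hillshade in [0, 1], attenuated at small display scales."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(elevation, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    shade = np.clip(shade, 0.0, 1.0)
    # Assumed scale dependence: treat display_scale as a 0..1 zoom factor and
    # fade the relief out as the map zooms out (1.0 == flat, unshaded).
    weight = np.clip(display_scale, 0.0, 1.0)
    return weight * shade + (1.0 - weight) * 1.0
```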
  • Patent number: 8553932
    Abstract: The optical device for determining the position and orientation of an object comprises a fixed part that includes a laser beam projector with sequential scan and a projection centre (O), defining the centre of a reference frame (R) in space. The projector emits into a zone comprising at least four sensors fixed to the said object, the four sensors having a known disposition on the object. The instants at which each of the sensors provides an electrical pulse determine the angular directions of the said sensors in the reference frame, the four straight lines passing through the origin and through each of the sensors intercepting an image plane of the fixed part at four projected points. The positions in the image plane of the mappings of the four points determine a geometric shape making it possible to calculate the position and the orientation of the object in space.
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: October 8, 2013
    Assignee: Thales
    Inventors: Bruno Barbier, Siegfried Rouzes
  • Patent number: 8553942
    Abstract: One or more systems, devices, and/or methods for emphasizing objects in an image, such as a panoramic image, are disclosed. For example, a method includes receiving a depthmap generated from an optical distancing system, wherein the depthmap includes position data and depth data for each of a plurality of points. The optical distancing system measures physical data. The depthmap is overlaid on the panoramic image according to the position data. Data is received that indicates a location on the panoramic image and, accordingly, a first point of the plurality of points that is associated with the location. The depth data of the first point is compared to depth data of surrounding points to identify an area on the panoramic image corresponding to a subset of the surrounding points. The panoramic image is altered with a graphical effect that indicates the location.
    Type: Grant
    Filed: October 21, 2011
    Date of Patent: October 8, 2013
    Assignee: Navteq B.V.
    Inventor: James D. Lynch
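    The neighbourhood comparison in the abstract above can be pictured as a region grow in the overlaid depthmap: starting from the indicated location, keep the contiguous pixels whose depth stays close to the depth at that location, and apply the graphical effect to that area. A minimal sketch with an assumed depth tolerance; the actual comparison rule in the patent may differ:

```python
import numpy as np
from scipy.ndimage import label

def object_region(depthmap: np.ndarray, row: int, col: int,
                  tolerance: float = 0.5) -> np.ndarray:
    """Boolean mask of the contiguous area whose depth is within `tolerance`
    metres of the depth at the selected panorama location."""
    d0 = depthmap[row, col]
    close = np.abs(depthmap - d0) <= tolerance
    labels, _ = label(close)                 # connected components of 'close'
    return labels == labels[row, col]        # keep the component under the click

# The returned mask marks the pixels to alter with the graphical effect.
```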
  • Publication number: 20130259306
    Abstract: An exemplary automatic revolving door control method includes obtaining a preset number of successive images captured by a camera. The images include distance information, obtained by TOF technology, of the objects captured in the images. The method creates successive 3D scene models. Next, the method determines whether one or more persons appear in the created successive 3D scene models. The method further includes determining a foremost person of the one or more persons as the person being monitored, and determines whether the moving direction of the person being monitored is toward the entrance. The method determines the distance moved by the person being monitored between the two created 3D scene models. The method determines the moving time taken for the calculated moved distance, and further determines the moving speed of the person being monitored, so as to rotate the automatic revolving door at a speed matching that of the person being monitored.
    Type: Application
    Filed: March 15, 2013
    Publication date: October 3, 2013
    Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: HOU-HSIEN LEE, CHANG-JUNG LEE, CHIH-PING LO
  • Publication number: 20130259315
    Abstract: Methods for generating stereoscopic views from monoscopic endoscope images and systems using the same are provided. First, monoscopic images are obtained by capturing images of organs in an operating field via an endoscope. A background depth map of the operating field is obtained for each image. An instrument depth map corresponding to an instrument is obtained for each image, wherein the instrument is inserted in a patient's body cavity. The background depth maps and the instrument depth maps are merged to generate an integrated depth map for each image. Stereoscopic views are generated according to the monoscopic images and the integrated depth maps.
    Type: Application
    Filed: December 8, 2010
    Publication date: October 3, 2013
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Ludovic Angot, Kai-Che Liu, Wei-Jia Huang
  • Patent number: 8548685
    Abstract: A method of providing range measurements for use with a vehicle, the method comprising the steps of: a) visually sensing (2) the area adjacent the vehicle to produce visual sense data (22); b) range sensing (26) objects around the vehicle to produce range sense data; c) combining the visual sense data and the range sense data to produce, with respect to the vehicle, an estimate of ranges to the objects around the vehicle (28). The estimate of ranges to the objects around the vehicle may be displayed (29) to a driver.
    Type: Grant
    Filed: December 1, 2004
    Date of Patent: October 1, 2013
    Assignee: Zorg Industries Pty Ltd.
    Inventor: Tarik Hammadou
  • Patent number: 8547397
    Abstract: A method for processing data, in particular an image representative of an observation zone in which is situated an object arranged with respect to a reference plane.
    Type: Grant
    Filed: March 23, 2010
    Date of Patent: October 1, 2013
    Assignee: France Telecom
    Inventor: Olivier Gachignard
  • Patent number: 8548229
    Abstract: A method for detecting objects, wherein two images of a surrounding (1) are taken and a disparity image is determined by means of stereo image processing, wherein a depth map of the surrounding (1) is determined from the determined disparities, wherein a free space delimiting line (2) is identified, delimiting an unobstructed region of the surrounding (1), wherein outside and along the free space delimiting line (2) the depth map is segmented into segments (3) of a suitable width formed by pixels of the same or similar distance to an image plane, and wherein a height of each segment (3) is estimated as part of an object (4.1 to 4.6) located outside of the unobstructed region, such that each segment (3) is characterized by the two-dimensional position of its base (for example the distance and angle to the longitudinal axis of the vehicle) and its height.
    Type: Grant
    Filed: February 4, 2010
    Date of Patent: October 1, 2013
    Assignee: Daimler AG
    Inventors: Hernan Badino, Uwe Franke
  • Patent number: 8548270
    Abstract: Techniques are provided for determining depth to objects. A depth image may be determined based on two light intensity images. This technique may compensate for differences in reflectivity of objects in the field of view. However, there may be some misalignment between pixels in the two light intensity images. An iterative process may be used to relax a requirement for an exact match between the light intensity images. The iterative process may involve modifying one of the light intensity images based on a smoothed version of a depth image that is generated from the two light intensity images. Then, new values may be determined for the depth image based on the modified image and the other light intensity image. Thus, pixel misalignment between the two light intensity images may be compensated.
    Type: Grant
    Filed: October 4, 2010
    Date of Patent: October 1, 2013
    Assignee: Microsoft Corporation
    Inventors: Sagi Katz, Avishai Adler
  • Patent number: 8548226
    Abstract: In stereo matching based on standard area matching, in order to suppress a decrease in matching accuracy, it is effective to adaptively change a matching area in accordance with the local properties of an image. However, this requires high calculation costs.
    Type: Grant
    Filed: February 24, 2010
    Date of Patent: October 1, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Morihiko Sakano, Mirai Higuchi, Takuya Naka, Takeshi Shima, Shoji Muramatsu
  • Publication number: 20130251209
    Abstract: Embodiments of the invention relate to an image processing apparatus and method of a black box system for vehicles, which can simplify an analysis stage without causing any Doppler effect by directly analyzing an image of a camera basically mounted to the black box for vehicles, and which includes a unit for detecting an accident risk before a sudden braking operation and occurrence of an accident. The image processing apparatus includes: a subject distance change detector which analyzes a size change of a subject present in an image captured by a camera to detect a distance change between the camera and the subject; a light source analyzer which analyzes a light source present in the image; an image divider which divides the image into plural sections to apply a differently weighted accident-risk level value to each of the divided sections; and an alarm unit for generating an alarm corresponding to an accident-risk situation in the divided sections.
    Type: Application
    Filed: March 20, 2013
    Publication date: September 26, 2013
    Applicant: CORE LOGIC INC.
    Inventor: Byungho KIM
  • Patent number: 8542875
    Abstract: A system for complexity reduction in images that combines visual-attention-based detection of the most probable regions for object presence with a perspective-view-based reduced scale search. The visual attention concept used here relies on the gradient and contrast of an image: a pixel meeting certain criteria for gradient or contrast values may be further processed for object presence, and limiting image processing to such regions may reduce the complexity of digitized images. Post-processing the outcome with morphological operations such as dilation and erosion may help retain some of the missed object pixels in the resultant image. Typically, image blocks at different scales are searched for object presence; the reduced-scale search removes certain scales during the search. Because object size in the image varies with its location, searching only the scales within a given scale range may lead to higher chances of detecting object presence.
    Type: Grant
    Filed: September 17, 2010
    Date of Patent: September 24, 2013
    Assignee: Honeywell International Inc.
    Inventor: Lalitha M. Eswara
  • Patent number: 8543254
    Abstract: A vehicular imaging system for determining roadway width includes an image sensor for capturing images and an image processor for receiving the captured images. The image processor determines roadway width by identifying roadway marker signs and oncoming traffic in processed images captured by the image sensor and determining the number of lanes, vehicle location on the roadway based on the roadway size and/or width and location of oncoming traffic.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: September 24, 2013
    Assignee: Gentex Corporation
    Inventors: Jeremy A. Schut, David M. Falb
  • Patent number: 8542884
    Abstract: In various embodiments, old flood maps may be compared to new flood maps to determine which areas of the flood map have changed. These changed areas may be correlated against geographic area descriptions that are within changed areas of the flood map. The changed areas may also be analyzed to determine whether each area has had a change in status (e.g., from a high risk flood zone to a non-high risk flood zone or vice versa) or a change in zone within a status (e.g., from one flood zone to another flood zone). The information on type of change (or no change) may be used to populate a database that includes geographic area description identifiers. In some embodiments, detection of certain types of changes may initiate a manual comparison of the old and new flood maps to verify the change.
    Type: Grant
    Filed: November 17, 2006
    Date of Patent: September 24, 2013
    Assignee: Corelogic Solutions, LLC
    Inventor: David R. Maltby, II
  • Patent number: 8542872
    Abstract: A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used.
    Type: Grant
    Filed: July 3, 2008
    Date of Patent: September 24, 2013
    Assignee: Pivotal Vision, LLC
    Inventors: John R. Gornick, Craig B. Moksnes
  • Patent number: 8542910
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.
    Type: Grant
    Filed: February 2, 2012
    Date of Patent: September 24, 2013
    Assignee: Microsoft Corporation
    Inventors: Tommer Leyvand, Johnny Lee, Szymon Stachniak, Craig Peeper, Shao Liu
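    A rough sketch of the downsampling step described in the abstract above: convert the depth image to 3D points, bucket them into a voxel grid, and drop background pixels beyond a depth cut-off. The intrinsics, voxel size, and simple depth-threshold background rule are assumptions for illustration, not the patent's background-removal method:

```python
import numpy as np

def depth_to_voxels(depth: np.ndarray, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                    voxel_size: float = 0.05, max_depth: float = 3.5):
    """Return the set of occupied voxel indices for a depth image (metres)."""
    rows, cols = np.indices(depth.shape)
    valid = (depth > 0) & (depth < max_depth)      # drop background / invalid
    z = depth[valid]
    x = (cols[valid] - cx) * z / fx
    y = (rows[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    voxels = np.unique(np.floor(points / voxel_size).astype(int), axis=0)
    return voxels                                  # one entry per occupied voxel
```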
  • Publication number: 20130243261
    Abstract: When it is determined that the type of a physical body in real space corresponding to an image portion is a crossing pedestrian, a distance calculating unit 13 performs a first distance calculating process that calculates the distance between a vehicle 1 and the physical body from the height of the image portion, on the basis of a correlative relationship, set on the assumption of a pedestrian height, between the distance from the vehicle 1 and the height of the image portion. When it is determined that the type of the physical body is not a crossing pedestrian, the distance calculating unit 13 performs a second distance calculating process which calculates the distance between the physical body and the vehicle on the basis of a change in size of the image portions of the physical body extracted from time-series captured images.
    Type: Application
    Filed: June 22, 2011
    Publication date: September 19, 2013
    Applicant: HONDA MOTOR CO., LTD.
    Inventors: Kodai Matsuda, Nobuharu Nagaoka, Makoto Aimura
  • Patent number: 8538085
    Abstract: A distance measurement system provided by the present invention comprises a light source module for projecting a light beam having a speckle pattern onto a plurality of reference flat surfaces and an object which are located at different position points, so as to show an image of the speckle pattern on each of the reference flat surfaces and the object. The speckle pattern contains a plurality of speckles. The invention generates a plurality of pieces of reference image information by capturing the image of the speckle pattern on each of the plurality of reference flat surfaces and generates object image information by capturing the image of the speckle pattern on the object. The invention further generates a plurality of comparison results by comparing the object image information with the plurality of pieces of reference image information, and computes the position of the object by performing an interpolation operation on the plurality of comparison results.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: September 17, 2013
    Assignee: PixArt Imaging Inc.
    Inventors: Shu-Sian Yang, Hsin-Chia Chen, Ren-Hau Gu, Sen-Huang Huang
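    The interpolation at the end of the abstract above can be pictured as follows: score the object's speckle image against each reference image taken at a known distance, then interpolate between the best-matching references to place the object between calibration planes. A hedged sketch using normalized cross-correlation as the comparison score and simple linear interpolation; both choices are assumptions, not the patent's specified comparison:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized speckle images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def object_distance(object_img, reference_imgs, reference_distances):
    """Interpolate the object distance from its similarity to the references."""
    scores = np.array([ncc(object_img, ref) for ref in reference_imgs])
    k = int(np.argmax(scores))
    # Linear interpolation between the best reference and its better neighbour.
    j = k + 1 if (k + 1 < len(scores) and
                  (k == 0 or scores[k + 1] >= scores[k - 1])) else k - 1
    w = scores[j] / (scores[k] + scores[j] + 1e-12)
    return (1 - w) * reference_distances[k] + w * reference_distances[j]
```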