Multiple Cameras Patents (Class 348/47)
  • Publication number: 20140267631
    Abstract: A method for reducing power consumption of a 3D image capture system includes capturing 3D image data with the 3D image capture system while the 3D image capture system is in a first power state, detecting a power state change trigger, and switching from the first power state to a second power state based on the power state change trigger, wherein the 3D image capture system consumes less power in the second power state than in the first power state.
    Type: Application
    Filed: March 17, 2014
    Publication date: September 18, 2014
    Applicant: Occipital, Inc.
    Inventors: Jeffrey Powers, Patrick O'Keefe
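A minimal sketch of the kind of power-state switching the abstract above describes, assuming two illustrative states and made-up trigger names ("scene_static", "low_battery", "resume"); the actual triggers and states in the application are not specified here.

```python
from enum import Enum

class PowerState(Enum):
    FULL = "full"          # first power state: full-rate 3D capture
    REDUCED = "reduced"    # second power state: consumes less power

class CaptureSystem:
    def __init__(self):
        self.state = PowerState.FULL

    def on_trigger(self, trigger):
        # A power state change trigger (e.g. low battery, static scene) moves the
        # system into the lower-power state; a "resume" trigger restores full capture.
        if trigger in {"low_battery", "scene_static"}:
            self.state = PowerState.REDUCED
        elif trigger == "resume":
            self.state = PowerState.FULL

system = CaptureSystem()
system.on_trigger("scene_static")
print(system.state)  # PowerState.REDUCED
```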
  • Publication number: 20140267632
    Abstract: Stereoscopic instruments for viewing stereoscopic images of objects at a range of magnifications are described. The stereoscopic instruments are arranged to provide an optical beam comprising light received from an object over a given angular range, and to split the optical beam into left and right optical beams each traversing a respective optical path. The left and right optical paths each transmit a sub-beam over a respective angular range, the respective angular ranges thereby defining a first angular relationship between the respective sub-beams. Each optical path comprises a first angle adjustment means for adjusting the first angular relationship. Some embodiments also include a means for transmitting images formed from the left and right sub-beams, the images having a second angular relationship related to a vergence angle at which a user's eyes view the object.
    Type: Application
    Filed: May 29, 2014
    Publication date: September 18, 2014
    Inventor: John WARD
  • Publication number: 20140267628
    Abstract: A method and a device for estimating the coefficient of friction by a 3D camera. The 3D camera records at least one image of the vehicle's surroundings. The image data of the 3D camera is used to produce a height profile of the road surface in the entire space ahead of the vehicle. Based on the height profile, the local coefficient of friction of the road surface that is to be expected in the space ahead of the vehicle is estimated.
    Type: Application
    Filed: February 10, 2012
    Publication date: September 18, 2014
    Applicant: CONTI TEMIC MICROELECTRONIC GmbH
    Inventors: Martin Randler, Stefan Heinrich
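A toy illustration of estimating a local friction coefficient from a height profile of the road surface ahead, in the spirit of the abstract above; the roughness-to-friction mapping and its thresholds are invented for illustration and are not taken from the application.

```python
import numpy as np

def estimate_friction(height_profile, cell_size_m=0.1):
    """Toy mapping from a road-surface height profile (metres) to a local
    friction-coefficient estimate per range cell. The thresholds and values
    below are illustrative only, not those of the patent application."""
    # Local roughness: standard deviation of heights across each row of cells.
    roughness = np.std(height_profile, axis=1)
    mu = np.where(roughness < 0.002, 0.4,      # very smooth -> possibly wet or icy
         np.where(roughness < 0.010, 0.8,      # typical asphalt
                  0.6))                        # coarse gravel / cobbles
    return mu

profile = np.random.normal(0.0, 0.005, size=(20, 50))  # 20 range rows x 50 lateral cells
print(estimate_friction(profile))
```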
  • Publication number: 20140267629
    Abstract: A method for determining 3D coordinates of points on a surface of an object by providing a 3D coordinate measurement device attached to a moveable apparatus that is coupled to a position sensing mechanism, all coupled to a processor, projecting a pattern of light onto the surface to determine a first set of 3D coordinates of points on the surface, determining susceptibility of the object to multipath interference by projecting and reflecting rays from the measured 3D coordinates of the points, moving the moveable apparatus under processor control to change the relative position of the device and the object, and projecting the pattern of light onto the surface to determine a second set of 3D coordinates.
    Type: Application
    Filed: December 23, 2013
    Publication date: September 18, 2014
    Applicant: FARO Technologies, Inc.
    Inventors: Yazid Tohme, Robert E. Bridges
  • Publication number: 20140267627
    Abstract: In a computer-implemented method and system for capturing the condition of a structure, the structure is scanned with a three-dimensional (3D) scanner. The 3D scanner generates 3D data. A point cloud or 3D model is constructed from the 3D data. The point cloud or 3D model is then analyzed to determine the condition of the structure.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: James M. Freeman, Roger D. Schmidgall, Patrick Harold Boyer, Nicholas U. Christopulos, Jonathan D. Maurer, Nathan Lee Tofte, Jackie O. Jordan, II
  • Patent number: 8836781
    Abstract: Technology is provided for a system and method of providing surrounding information of a vehicle that accurately calculates the positions of obstacles around the vehicle. The system includes a plurality of image acquisition units installed in the vehicle at a preset interval, an image acquisition unit selector which selects at least two image acquisition units of the plurality of image acquisition units and receives image data from the selected image acquisition units, and a control unit which recognizes an obstacle from the image data received from the image acquisition units, calculates a position of the obstacle, and controls the image acquisition unit selector to select the at least two image acquisition units of the plurality of image acquisition units according to vehicle speed information.
    Type: Grant
    Filed: December 12, 2011
    Date of Patent: September 16, 2014
    Assignee: Hyundai Motor Company
    Inventors: Jae Pil Hwang, Sung Bo Sim, Eui Yoon Chung
  • Patent number: 8836768
    Abstract: User wearable eye glasses include a pair of two-dimensional cameras that optically acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact. Glasses systems intelligently signal process and map acquired optical information to rapidly ascertain a sparse (x,y,z) set of locations adequate to identify user gestures. The displayed imagery can be created by glasses systems and presented with a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, but augmented with imagery showing internet provided tags identifying and/or providing information as to viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
    Type: Grant
    Filed: August 23, 2013
    Date of Patent: September 16, 2014
    Assignee: Aquifi, Inc.
    Inventors: Abbas Rafii, Tony Zuccarino
  • Patent number: 8836767
    Abstract: An imaging apparatus and an imaging method, wherein a focus location direction discriminator estimates a direction toward a focus location based on two contrast evaluation values corresponding to two imaging optical systems having different image focusing locations. Therefore, the imaging apparatus and the imaging method can directly start an AF operation at current image focusing locations of two imaging optical systems, without setting the image focusing locations to fixed initial locations in a focusing operation of the imaging optical systems. As a result, AF operation time and power consumption are reduced without failing to find a direction of image focusing location.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: September 16, 2014
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Yuki Endo
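The focus location direction discriminator described in the abstract above can be pictured as comparing two contrast evaluation values from optical systems whose focus positions are slightly offset; this is a hedged sketch of that idea, not the patented discriminator.

```python
def focus_direction(contrast_near, contrast_far):
    """Given contrast evaluation values from two optical systems whose image
    focusing locations are offset (one slightly nearer, one slightly farther),
    infer which way the focus should move (illustrative decision rule)."""
    if contrast_far > contrast_near:
        return "move_focus_farther"
    if contrast_near > contrast_far:
        return "move_focus_nearer"
    return "in_focus"

print(focus_direction(contrast_near=0.31, contrast_far=0.47))  # move_focus_farther
```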
  • Publication number: 20140253691
    Abstract: A system which identifies the position and shape of an object in 3D space includes a housing having a base portion and a body portion, the base portion including electrical contacts mating with a lighting receptacle. A camera, an image analyzer and power conditioning circuitry are within the housing. The image analyzer, coupled to the camera for receipt of camera image data, is configured to capture at least one image of the object and to generate object data indicative of the position and shape of the object in 3D space. The power conditioning circuitry converts power from the lighting receptacle to power suitable for the system. The object data can be used to computationally construct a representation of the object. Some examples include a database containing a library of object templates, the image analyzer being configured to match the 3D representation to one of the templates.
    Type: Application
    Filed: March 5, 2014
    Publication date: September 11, 2014
    Applicant: LEAP MOTION, INC.
    Inventor: David Holz
  • Publication number: 20140253693
    Abstract: An information processing apparatus that acquires first image data captured by a first image capturing unit; acquires second image data captured by a second image capturing unit; controls a display to operate in a first mode in which the first and second images are simultaneously displayed; controls the display to operate in a second mode in which a relationship of the second image with the first image is indicated; and selects between the first and second modes based on a predetermined condition.
    Type: Application
    Filed: October 19, 2012
    Publication date: September 11, 2014
    Applicant: SONY CORPORATION
    Inventor: Yasuhito Shikata
  • Publication number: 20140253692
    Abstract: Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data.
    Type: Application
    Filed: May 19, 2014
    Publication date: September 11, 2014
    Applicant: Microsoft Corporation
    Inventors: Andrew David Wilson, Hrvoje Benko
  • Patent number: 8830318
    Abstract: An on-vehicle three-dimensional video system is provided for a vehicle and a method is provided for monitoring a surrounding environment of a vehicle. The on-vehicle three-dimensional video system includes, but is not limited to cameras, a display screen, a control module, and a power supply device. The cameras are provided in pairs for filming the surrounding environment of the vehicle from different angles, and the display screen is able to bring about a three-dimensional video effect according to pairs of video signals from the cameras. With the on-vehicle three-dimensional video system, a realistic three-dimensional output of the surrounding environment of the vehicle is realized on the display screen so that the driver can clearly know about the precise relative position of a corresponding portion of the vehicle with respect to the surrounding environment.
    Type: Grant
    Filed: October 22, 2010
    Date of Patent: September 9, 2014
    Assignee: GM Global Technology Operations LLC
    Inventors: Peter G. Diehl, Sam Yang, Li Shen, Huan Lu
  • Patent number: 8830304
    Abstract: An information processing apparatus, which provides images for stereoscopic viewing by synthesizing images obtained by capturing an image of real space by a main image sensing device and sub image sensing device to a virtual image, measures the position and orientation of the main image sensing device, calculates the position and orientation of the sub image sensing device based on inter-image capturing device position and orientation held in a holding unit and the measured position and orientation of the main image sensing device. Then the information processing apparatus calculates an error using the measured position and orientation of the main image sensing device, the calculated position and orientation of the sub image sensing device, and held intrinsic parameters of the main image sensing device and sub image sensing device. The information processing apparatus calibrates the held information based on the calculated error.
    Type: Grant
    Filed: April 21, 2010
    Date of Patent: September 9, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Sonoko Miyatani, Kenji Morita, Masakazu Fujiki
  • Patent number: 8823821
    Abstract: Multiview videos are acquired by overlapping cameras. Side information is used to synthesize multiview videos. A reference picture list is maintained for current frames of the multiview videos; the reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list with a skip mode and a direct mode, whereby the side information is inferred from the synthesized reference picture. In addition, the skip and merge modes for single view video coding are modified to support multiview video coding by generating a motion vector prediction list by also considering neighboring blocks that are associated with synthesized reference pictures.
    Type: Grant
    Filed: July 9, 2012
    Date of Patent: September 2, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dong Tian, Feng Zou, Anthony Vetro
  • Patent number: 8823775
    Abstract: An embodiment of the invention may provide a portable, inexpensive two-view 3D stereo vision imaging system, which acquires a 3D surface model and dimensions of an object. The system may comprise front-side and back-side stereo imagers which each have a projector and at least two digital cameras to image the object from different perspectives. An embodiment may include a method for reconstructing an image of a human body from the data of a two-view body scanner by obtaining a front scan image data point set and a back scan image data point set. A smooth body image may be gained by processing the data point sets using the following steps: (1) data resampling; (2) initial mesh generation; (3) mesh simplification; and (4) mesh subdivision and optimization.
    Type: Grant
    Filed: April 30, 2010
    Date of Patent: September 2, 2014
    Assignee: Board of Regents, The University of Texas System
    Inventors: Bugao Xu, Wurong Yu
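Step (1) of the pipeline listed in the abstract above, data resampling, might look like the following sketch on merged front and back point sets; the grid size, the use of NumPy, and the random point sets are illustrative, and steps (2) through (4) are only indicated.

```python
import numpy as np

def resample(points, grid=0.01):
    """Step (1): snap scan points to a regular grid and keep one point per cell,
    which evens out the sampling density of the raw scan (illustrative)."""
    keys = np.round(points / grid).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

# Front and back scans as (N, 3) point sets; real scanner output would replace these.
front = np.random.rand(5000, 3)
back = np.random.rand(5000, 3) + np.array([0.0, 0.0, -0.2])
merged = np.vstack([resample(front), resample(back)])
print(merged.shape)
# Steps (2)-(4) -- initial mesh generation, mesh simplification, and mesh
# subdivision/optimization -- would follow, e.g. with a meshing library.
```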
  • Patent number: 8823777
    Abstract: Systems and methods may provide for determining a one-dimensional (1D) disparity between a plurality of rectified images, and extracting depth information from the plurality of rectified images based at least in part on the 1D disparity. In one example, the 1D disparity is in the horizontal direction and the images are rectified with respect to one another in the vertical direction.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: September 2, 2014
    Assignee: Intel Corporation
    Inventors: Ziv Aviv, Omri Govrin
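The benefit of images rectified in the vertical direction is that matching becomes a purely horizontal, one-dimensional search, after which depth follows from the standard rectified-stereo relation; the numbers below are illustrative.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: depth = f * B / d. With rows aligned,
    the disparity d is found by a one-dimensional horizontal search."""
    return focal_px * baseline_m / disparity_px

# e.g. a 12-pixel disparity with an 800-pixel focal length and a 6 cm baseline
print(depth_from_disparity(12, 800, 0.06))  # 4.0 metres
```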
  • Patent number: 8823776
    Abstract: A method that includes capturing depth information associated with a first field of view of a depth camera. The depth information is represented by a first plurality of depth pixels. The method also includes capturing color information associated with a second field of view of a video camera that substantially overlaps with the first field of view of the depth camera. The color information is represented by a second plurality of color pixels. The method further includes enhancing color information represented by at least one color pixel of the second plurality of color pixels to generate an enhanced image. The enhanced image adjusts an exposure characteristic of the color information captured by the video camera. The at least one color pixel is enhanced based on depth information represented by at least one corresponding depth pixel of the first plurality of depth pixels.
    Type: Grant
    Filed: May 20, 2010
    Date of Patent: September 2, 2014
    Assignee: Cisco Technology, Inc.
    Inventors: Dihong Tian, J. William Mauchly, Joseph T. Friel
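A hedged sketch of per-pixel exposure enhancement guided by a registered depth map, in the spirit of the abstract above; the near-subject threshold and gain are arbitrary choices, not values from the patent.

```python
import numpy as np

def enhance(color, depth, near_m=1.5, gain=1.6):
    """Brighten colour pixels whose corresponding depth pixels are nearer than
    near_m, approximating a per-pixel adjustment of an exposure characteristic.
    The gain and threshold are illustrative."""
    mask = depth < near_m                      # pixels belonging to a near subject
    out = color.astype(np.float32)
    out[mask] *= gain                          # lift exposure only on those pixels
    return np.clip(out, 0, 255).astype(np.uint8)

color = np.full((4, 4, 3), 80, dtype=np.uint8)
depth = np.full((4, 4), 3.0)
depth[1:3, 1:3] = 1.0                          # a near subject in the centre
print(enhance(color, depth)[1, 1])             # [128 128 128]
```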
  • Publication number: 20140240464
    Abstract: An electronic device (100) includes a depth sensor (120), a first imaging camera (114, 116), and a controller (802). The depth sensor (120) includes a modulated light projector (119) to project a modulated light pattern (500). The first imaging camera (114, 116) is to capture at least a reflection of the modulated light pattern (500). The controller (802) is to selectively modify (1004) at least one of a frequency, an intensity, and a duration of projections of the modulated light pattern by the modulated light projector responsive to at least one trigger event (1002). The trigger event can include, for example, a change (1092) in ambient light incident on the electronic device, detection (1094) of motion of the electronic device, or a determination (1096) that the electronic device has encountered a previously-unencountered environment.
    Type: Application
    Filed: February 28, 2013
    Publication date: August 28, 2014
    Applicant: Motorola Mobility LLC
    Inventor: Johnny Lee
  • Publication number: 20140240465
    Abstract: [Object] To preferably set a focus distance and a convergence distance. [Solving Means] A three-dimensional image pickup apparatus 100 is provided with a left lens optical system 121L and a right lens optical system 121R including a pair of right and left image pickup lenses disposed at a predetermined inter-axial distance. Further provided are a focus ring that adjusts the focus of the left lens optical system 121L and the right lens optical system 121R, and a control circuit that adjusts a convergence distance from the image pickup lenses to a convergence point at which the optical axes of the pair of right and left image pickup lenses intersect. The control circuit adds an offset distance to the focus distance and adjusts the convergence distance with the offset distance set as a distance from the focus point to the convergence point to be set.
    Type: Application
    Filed: March 15, 2012
    Publication date: August 28, 2014
    Applicant: Sony Corporation
    Inventors: Masamiki Kawase, Hiromi Hoshino
  • Publication number: 20140240468
    Abstract: A frame-sequential multiwavelength imaging system comprises a wavelength switching device for producing a repeated series of different wavelength profiles, a detector for detecting the dynamic scene and a signal processing unit for synthesizing a dynamic multiwavelength image of the dynamic scene. The signal processing unit may comprise at least one input device, at least one logic device and at least one output device. The system can be used in a method to perform multiwavelength imaging of a dynamic scene, typically for surgical purposes.
    Type: Application
    Filed: September 27, 2012
    Publication date: August 28, 2014
    Inventor: Gilbert D. Feke
  • Publication number: 20140240467
    Abstract: An image processing system comprises an image processor configured to identify one or more potentially defective pixels associated with at least one depth artifact in a first image, and to apply a super resolution technique utilizing a second image to reconstruct depth information of the one or more potentially defective pixels. Application of the super resolution technique produces a third image having the reconstructed depth information. The first image may comprise a depth image and the third image may comprise a depth image corresponding generally to the first image but with the depth artifact substantially eliminated. An additional super resolution technique may be applied utilizing a fourth image. Application of the additional super resolution technique produces a fifth image having increased spatial resolution relative to the third image.
    Type: Application
    Filed: May 17, 2013
    Publication date: August 28, 2014
    Applicant: LSI Corporation
    Inventors: Alexander A. Petyushko, Alexander B. Kholodenko, Ivan L. Mazurenko, Denis V. Parfenov, Dmitry N. Babin
  • Publication number: 20140240466
    Abstract: The technology disclosed relates to adjusting the monitored field of view of a camera and/or a view of a virtual scene from a point of view of a virtual camera based on the distance between tracked objects. For example, if the user's hand is being tracked for gestures, the closer the hand gets to another object, the tighter the frame can become—i.e., the more the camera can zoom in so that the hand and the other object occupy most of the frame. The camera can also be reoriented so that the hand and the other object remain in the center of the field of view. The distance between two objects in a camera's field of view can be determined and a parameter of a motion-capture system adjusted based thereon. In particular, the pan and/or zoom levels of the camera may be adjusted in accordance with the distance.
    Type: Application
    Filed: February 21, 2014
    Publication date: August 28, 2014
    Applicant: Leap Motion, Inc.
    Inventor: David HOLZ
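One way to realize the behaviour described in the abstract above is a simple mapping from the separation between the tracked objects to a zoom level; the linear mapping and its bounds below are assumptions for illustration, not the disclosed method.

```python
def zoom_for_distance(d_m, d_min=0.05, d_max=1.0, zoom_min=1.0, zoom_max=4.0):
    """Map the distance between the tracked hand and another object to a zoom
    level: the closer the objects, the tighter the frame (illustrative linear map)."""
    d = min(max(d_m, d_min), d_max)
    t = (d - d_min) / (d_max - d_min)          # 0 when nearly touching, 1 when far apart
    return zoom_max - t * (zoom_max - zoom_min)

print(round(zoom_for_distance(0.05), 2))  # 4.0 (fully zoomed in)
print(round(zoom_for_distance(1.00), 2))  # 1.0 (wide view)
```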
  • Patent number: 8817078
    Abstract: There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method comprising receiving input data from a plurality of clients for modifying a virtual environment presented using the virtual rendering system, obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in the virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera showing the modifying of the virtual environment, rendering the composite render by processing the feeds, and outputting the composite render to the display.
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: August 26, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Gay, Aaron Thiel
  • Patent number: 8817066
    Abstract: A mirror assembly adapted for use in a panoramic imaging system for capturing a panoramic image includes a mirror for optically coupling to a fisheye lens having a first field of view, the mirror configured for reflecting an image of a second field of view through the fisheye lens. A housing has a first end and a second end, the mirror being secured proximate to the first end, and the second end having an engagement portion for securing the mirror assembly to the panoramic imaging system. When the mirror assembly is secured to the panoramic imaging system, the mirror is optically coupled to the fisheye lens and a detector for capturing a first portion of the panoramic image and a second portion of the panoramic image, the first portion of the panoramic image having a portion overlapping the second portion of the panoramic image.
    Type: Grant
    Filed: April 22, 2011
    Date of Patent: August 26, 2014
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Gurunandan Krishnan, Shree K. Nayar
  • Publication number: 20140232830
    Abstract: In order to provide a stereoscopic imaging apparatus that can acquire, in a case in which a brightness difference is large, parallax information containing a parallax image in all image capturing frames, provided are an image acquisition unit that acquires a first image and a second image different from the first image in exposure time, a gain and offset correction unit 102 that corrects brightness of one of the acquired first and second images, a parallax calculation unit 103 that calculates parallax from the image corrected by the brightness correction unit and the other image and outputs a parallax image and parallax information, and a combination image generation unit 105 that combines the acquired first and second images together to generate a combination image and outputs the generated combination image.
    Type: Application
    Filed: October 1, 2012
    Publication date: August 21, 2014
    Applicant: Hitachi Automotive Systems, Ltd.
    Inventor: Atsushi Ichige
  • Publication number: 20140232828
    Abstract: A system and methods for monitoring work processes. One example computer-implemented method includes recording a three-dimensional work trajectory. The work trajectory comprises a representation of the actual motion of one or more markers. The method further includes comparing the work trajectory to a work template. The work template comprises a representation of the desired motion of the one or more markers. The method further includes sending a success indication to a display when the work trajectory is similar to the work template and sending a failure indication to the display when the work trajectory is dissimilar to the work template.
    Type: Application
    Filed: February 20, 2013
    Publication date: August 21, 2014
    Applicant: Toyota Motor Engineering & Manufacturing North America, Inc
    Inventor: Toyota Motor Engineering & Manufacturing North America, Inc
  • Publication number: 20140232829
    Abstract: The present invention relates to an image processing system for pointing or making an augmented reality image by selectively capturing a left or right image of a stereo image of a 3D display, and detecting a mark from the captured image.
    Type: Application
    Filed: September 29, 2012
    Publication date: August 21, 2014
    Inventor: Moon Key LEE
  • Patent number: 8810633
    Abstract: Disclosed is a method of image coding for joint decoding of images from different viewpoints using distributed coding techniques. The method receives a first set of features (205) and error correction bits (203) corresponding to a first image (201) obtained at a first viewpoint (122) and a second set of features (425) from a second image (254, 415) corresponding to a second viewpoint (124). An approximation (437) of said first image (201) at said first viewpoint (122) is determined (432, 434, 436) based on the first and second sets of features (205, 425) and the second image at the second viewpoint. A reliability measure (445) of the approximation of the first image is then determined (450) by joint decoding (438) the approximation (437) using the error correction bits (203). The approximation of the first image is then refined iteratively (460, 438) based on the reliability measure (445) and image information (448) derived from the joint decoding.
    Type: Grant
    Filed: November 29, 2010
    Date of Patent: August 19, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ka-Ming Leung, Zhonghua Ma
  • Patent number: 8810632
    Abstract: A communication network comprising a collaborative photography group including a plurality of cameras having synchronized photographing times, is provided. The plurality of cameras may share location information, direction angle information, and image information generated by photographing an object, and generate a three-dimensional (3D) image of the object.
    Type: Grant
    Filed: October 1, 2010
    Date of Patent: August 19, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chan Soo Hwang, Yoon Chae Cheong, Do Kyoon Kim, Kee Chang Lee, Ouk Choi
  • Publication number: 20140225993
    Abstract: Methods and apparatus for splitting light received from at least one subject into at least first and second components of light, converting the first component of light into a first electrical signal representing a base image of the at least one subject, dispersing the second component of light into at least a right component of light and a left component of light; converting the right component of light into a second electrical signal representing a right detection image at a first angle; and converting the left component of light into a third electrical signal representing a left detection image at a second angle different from the first angle. Additionally, the right detection image may be used to transform the base image into a right parallax image, and the left detection image may be used to transform the base image into a left parallax image.
    Type: Application
    Filed: October 1, 2012
    Publication date: August 14, 2014
    Applicant: Sony Corporation
    Inventors: Yoshihiko Kuroki, Eiji Otani
  • Publication number: 20140225990
    Abstract: In one aspect, an image processing method for processing images is provided, comprising the steps of: obtaining, from an optical sensor, at least two images; determining an image warping function at least partially compensating a distortion present in one of the images; applying the determined image warping function to the image including the distortion; and calculating by a processing unit, and outputting, a depth and/or disparity image from the at least two images.
    Type: Application
    Filed: January 10, 2014
    Publication date: August 14, 2014
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Nils EINECKE, Julian EGGERT
  • Publication number: 20140225992
    Abstract: In a minimally invasive surgical system, an image capture unit includes a prism assembly and sensor assembly. The prism assembly includes a beam splitter, while the sensor assembly includes coplanar image capture sensors. Each of the coplanar image capture sensors has a common front end optical structure, e.g., the optical structure distal to the image capture unit is the same for each of the sensors. A controller enhances images acquired by the coplanar image capture sensors. The enhanced images may include (a) visible images with enhanced feature definition, in which a particular feature in the scene is emphasized to the operator of minimally invasive surgical system; (b) images having increased image apparent resolution; (c) images having increased dynamic range; (d) images displayed in a way based on a pixel color component vector having three or more color components; and (e) images having extended depth of field.
    Type: Application
    Filed: April 22, 2014
    Publication date: August 14, 2014
    Applicant: Intuitive Surgical Operations, Inc.
    Inventor: IAN McDOWALL
  • Publication number: 20140225991
    Abstract: An image capturing apparatus and a method for obtaining a depth information of field thereof are provided. The image capturing apparatus includes a first image capturer, a second image capturer and a controller. The first image capturer performs an image capturing operation according to a plurality of focal lengths to respectively obtain a plurality of zoom images. The second image capturer performs an image capturing operation according to one focal length to obtain a fixed-focus image. The controller is coupled to the first and second image capturers, and the controller generates a plurality of depth information of field respectively corresponding to the focal lengths of the zoom images according to an image difference of the fixed-focus image and the zoom image.
    Type: Application
    Filed: February 14, 2014
    Publication date: August 14, 2014
    Applicant: HTC CORPORATION
    Inventors: Chun-Ta Lin, Fu-Cheng Fan
  • Publication number: 20140225989
    Abstract: A method of aligning the optical axes of two cameras, wherein the cameras are parts of a camera apparatus and the two cameras can be used for producing 3D films, wherein plural steps are automatically run through. The invention also relates to a method of controlling a camera rig comprising two cameras for producing 3D films according to the table of values obtained by such a method, wherein during shooting upon zooming at least one camera is automatically moved via camera motors. The invention also relates to a camera rig comprising two cameras that are preferably juxtaposed or superimposed, having a control which realizes a method as explained.
    Type: Application
    Filed: March 8, 2012
    Publication date: August 14, 2014
    Inventors: Christian Wieland, Robert Siegl
  • Patent number: 8803951
    Abstract: There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method for integrating a virtual rendering system and a video capture system for outputting a composite render to a display, the method comprising obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in a virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera, rendering the composite render by processing the feeds, and outputting the composite render to the display.
    Type: Grant
    Filed: January 4, 2010
    Date of Patent: August 12, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Gay, Aaron Thiel
  • Patent number: 8803943
    Abstract: The present disclosure uses at least three cameras to monitor even a large-scale area. Displacement and strain are measured in a fast, convenient and effective way. The present disclosure offers advantages in whole-field measurement, long working distance and convenience.
    Type: Grant
    Filed: September 21, 2011
    Date of Patent: August 12, 2014
    Assignee: National Applied Research Laboratories
    Inventors: Chi-Hung Huang, Yung-Hsiang Chen, Wei-Chung Wang, Tai-Shan Liao
  • Patent number: 8803952
    Abstract: A depth-mapping method comprises exposing first and second detectors oriented along different optical axes to light dispersed from a scene, and furnishing an output responsive to a depth coordinate of a locus of the scene. The output increases with an increasing first amount of light received by the first detector during a first period, and decreases with an increasing second amount of light received by the second detector during a second period different than the first.
    Type: Grant
    Filed: December 20, 2010
    Date of Patent: August 12, 2014
    Assignee: Microsoft Corporation
    Inventors: Sagi Katz, Avishai Adler, Giora Yahav, John Tardif
  • Publication number: 20140218483
    Abstract: Disclosed are an object positioning method and an object positioning device based on object detection results of plural stereo cameras. The method comprises a step of obtaining, while each of the plural stereo cameras continuously carries out tracking and detection with respect to each of the objects, positional information of the corresponding object; a step of generating, based on the positional information of the corresponding object, a trajectory of the corresponding object; and a step of carrying out a merging process with respect to the trajectories generated corresponding to the plural stereo cameras so as to determine at least one object position.
    Type: Application
    Filed: January 27, 2014
    Publication date: August 7, 2014
    Inventors: Xin WANG, Shengyin FAN
  • Publication number: 20140218484
    Abstract: A stereoscopic image pickup apparatus includes a plurality of image pickup units configured to acquire a plurality of images at each viewpoint by photographing an object from a plurality of different viewpoints, a measuring unit configured to measure an object distance which is a distance between the plurality of image pickup units and the object, a calculation unit configured to calculate an effective range within which the plurality of images can be viewed stereoscopically with parallax, a control unit configured to control a focal length of the plurality of image pickup units so that the object distance and the focal length are within the effective range, and an image expanding and reduction unit configured to expand each image region of the plurality of images according to the control of the focal length by the control unit.
    Type: Application
    Filed: January 27, 2014
    Publication date: August 7, 2014
    Applicant: Canon Kabushiki Kaisha
    Inventor: Koji Iwashita
  • Publication number: 20140218482
    Abstract: The use of widely separated and coordinated cameras allows trains to recognize obstructions and calculate distance to them to a degree which enables them to react quickly and brake early enough to avoid accidents. This applies to hazards such as fallen trees, stalled cars, people, and other trains on the rails. The system can also apply to crossings, enabling them to see approaching trains and gauge their distance, velocity, and deceleration, so that they can be shut down early and alarms sounded immediately. These systems are autonomous, using software which allows trains to know exactly where they are and at what speed they are travelling independently of external signals, including GPS, allowing a measure of safety beyond normal communications. These systems can also work in the infra-red, allowing compensation for fog and rain.
    Type: Application
    Filed: February 5, 2013
    Publication date: August 7, 2014
    Inventor: John H. Prince
  • Publication number: 20140218485
    Abstract: A disparity calculation apparatus for a stereo camera implements ranging of an object that includes consecutive similar patterns. In stereo matching, if a plurality of corresponding point candidates are present in a sum of absolute differences or similar evaluation value distribution for a target point, an evaluation value map is generated by superimposing an evaluation value distribution of a target point, for which a plurality of corresponding points are determined to be present, and an evaluation value distribution of each other target point present in a peripheral area of that target point. The shape of an object is represented in real space around a target point for which a plurality of corresponding points are determined to be present. The true distance of a railing that extends in a straight line is determined by extracting a line segment with the strongest linearity in the evaluation value map.
    Type: Application
    Filed: April 10, 2014
    Publication date: August 7, 2014
    Applicant: PANASONIC CORPORATION
    Inventors: Takuya NANRI, Hirofumi NISHIMURA
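A simplified reading of the disambiguation described in the abstract above: compute a sum-of-absolute-differences cost distribution per target point, and when several disparities are near-minimal (repetitive structure such as a railing), superimpose the distributions of neighbouring target points into one evaluation map before choosing the minimum. The window size, ambiguity threshold, and neighbour offsets are illustrative, and the line-segment extraction step of the application is not modelled.

```python
import numpy as np

def sad_costs(left, right, x, window=5, max_d=16):
    """SAD matching cost at left-image column x for each candidate disparity."""
    half = window // 2
    ref = left[x - half:x + half + 1]
    return np.array([np.abs(ref - right[x - d - half:x - d + half + 1]).sum()
                     for d in range(max_d)])

def estimate_disparity(left, right, x, neighbours=range(-6, 7, 3)):
    """If several disparities are near-minimal at the target point, superimpose
    the cost distributions of neighbouring columns and take the minimum of the
    accumulated evaluation map (simplified reading of the method)."""
    costs = sad_costs(left, right, x)
    if np.sum(costs <= costs.min() + 1.0) > 1:        # ambiguous target point
        costs = sum(sad_costs(left, right, x + off) for off in neighbours)
    return int(np.argmin(costs))

rng = np.random.default_rng(0)
left = rng.random(200) * 100.0
right = np.roll(left, -6)                              # synthetic shift of 6 pixels
print(estimate_disparity(left, right, x=100))          # recovers the 6-pixel disparity
```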
  • Patent number: 8797387
    Abstract: A self calibrating stereo camera includes first and second spatial transform engines for directly receiving first and second images, respectively, of an object. The first and second spatial transform engines are coupled to a stereo display for displaying a fused object in stereo. A calibration module is coupled to the first and second spatial transform engines for aligning the first and second images, prior to display to a viewer. First and second point extracting modules receive the first and second images, respectively, for extracting interest points from each image. A matching points module is coupled to the first and second point extracting modules for matching the interest points extracted by the first and second point extracting modules. The calibration module determines alignment error between the first and second images, in response to the interest point matches calculated by the matching points module.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: August 5, 2014
    Assignee: Aptina Imaging Corporation
    Inventors: Anthony R. Huggett, Graham Kirsch
  • Publication number: 20140210950
    Abstract: Described are systems and methods for measuring objects using stereoscopic imaging. After determining keypoints within a set of stereoscopic images, a user may select a desired object within an imaged scene to be measured. Using depth map information and information about the boundary of the selected object, the desired measurement may be calculated and displayed to the user on a display device. Tracking of the object in three dimensions and continuous updating of the measurement of a selected object may also be performed as the object or the imaging device is moved.
    Type: Application
    Filed: January 31, 2013
    Publication date: July 31, 2014
    Applicant: QUALCOMM INCORPORATED
    Inventors: Kalin Mitkov Atanassov, Vikas Ramachandra, James Wilson Nash, Sergiu Radu Goma
  • Patent number: 8791987
    Abstract: A portable electronic device with 3D image capture capability and an image difference control method thereof are disclosed. The portable electronic device comprises a first and a second image capture module, a subject distance estimator and an image difference control mechanism. The first and second image capture modules are operative to capture a first image and a second image, respectively, to form a 3D image. Before image capturing, the subject distance estimator estimates a subject distance indicating how far a subject to be captured is, and the image difference control mechanism adjusts a distance between the first and second image capture modules based on the subject distance by moving at least one of the first and second image capture modules. In this manner, the image difference between the first and second images is properly controlled to perfectly form the 3D image.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: July 29, 2014
    Assignee: HTC Corporation
    Inventor: Chia-Chu Ho
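The distance adjustment between the two capture modules described above can be sketched as choosing a baseline roughly proportional to the estimated subject distance, clamped to the mechanism's travel; the 1/30 rule of thumb and the limits below are assumptions for illustration, not values from the patent.

```python
def baseline_for_subject(distance_m, target_parallax_ratio=1 / 30,
                         min_mm=20.0, max_mm=77.0):
    """Choose the spacing between the two capture modules so that the resulting
    image difference stays comfortable: roughly proportional to subject distance,
    clamped to the mechanical travel (illustrative policy)."""
    baseline_mm = distance_m * 1000.0 * target_parallax_ratio
    return max(min_mm, min(max_mm, baseline_mm))

for d in (0.5, 1.5, 5.0):
    print(d, "m ->", round(baseline_for_subject(d), 1), "mm")
```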
  • Publication number: 20140204181
    Abstract: The present invention concerns the field of stereoscopic photography devices and, more precisely, calibration thereof. The invention proposes a method for calibrating a stereoscopic photography device that calculates a set of correction parameters for the images. These parameters are classed according to an order of importance. The first-order correction parameters are estimated first, then the second-order correction parameters. Advantageously, the first-order parameters are refined by taking account of the estimation values of the second-order parameters. Advantageously, a measurement of the relevance of the scene is carried out before the actual calibration.
    Type: Application
    Filed: February 22, 2012
    Publication date: July 24, 2014
    Applicant: MOBICLIP
    Inventors: Alexandre Delattre, Jérôme Larrieu
  • Publication number: 20140204182
    Abstract: Apparatus and methods disclosed herein operate to monitor times of receipt of start-of-frame indications associated with frames received from multiple image sensors at a video controller. Time differences between the times of receipt of the frames are calculated. Embodiments herein alter one or more frame period determining parameter values associated with the image sensors if the time differences equal or exceed frame synchronization hysteresis threshold values. Parameter values are adjusted positively and/or negatively to decrease the time differences. The parameter values may be reset at each image sensor when the time differences become less than the frame synchronization hysteresis threshold value as additional frames are received at the video controller.
    Type: Application
    Filed: March 25, 2014
    Publication date: July 24, 2014
    Applicant: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Gregory R. Hewes, Fred W. Ware
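A hedged sketch of the hysteresis-based synchronization loop described above: compare start-of-frame timestamps from two sensors and nudge one sensor's frame-period parameter only when the difference reaches the threshold. The timestamps, threshold, step size, and the choice of vertical blanking as the adjusted parameter are illustrative assumptions.

```python
def adjust_frame_period(sof_times_a_us, sof_times_b_us, period_a_us,
                        hysteresis_us=100, step_us=20):
    """Compare the latest start-of-frame timestamps of two sensors; if they differ
    by at least the hysteresis threshold, nudge sensor A's frame period (e.g. via
    its vertical blanking) to pull the sensors back into step."""
    diff = sof_times_a_us[-1] - sof_times_b_us[-1]
    if abs(diff) < hysteresis_us:
        return period_a_us                     # within hysteresis: leave the period alone
    return period_a_us - step_us if diff > 0 else period_a_us + step_us

# Sensor A's frame arrived 250 us after sensor B's, so A's period is shortened.
print(adjust_frame_period([33_250], [33_000], period_a_us=33_333))  # 33313
```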
  • Patent number: 8786718
    Abstract: There is provided an image processing apparatus comprising: an acquisition unit configured to acquire captured images captured by a plurality of image capturing units for capturing an object from different viewpoints; a specifying unit configured to specify a defective image from the plurality of captured images; a determination unit configured to determine a weight for each captured image based on a position of the image capturing unit that has captured the defective image specified by the specifying unit; and a synthesis unit configured to generate a synthesized image by weighting and synthesizing the plurality of captured images based on the weights determined by the determination unit.
    Type: Grant
    Filed: January 23, 2012
    Date of Patent: July 22, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Keiichi Sawada
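The weighting-and-synthesis step described above can be pictured as a weighted average over the multi-viewpoint captures in which a defective view receives little or no weight; the example weights are arbitrary, and the patent's determination of weights from the position of the capturing unit is not modelled.

```python
import numpy as np

def synthesize(images, weights):
    """Weighted average of the multi-viewpoint captures; a defective view with a
    small (or zero) weight contributes little to the synthesized image."""
    w = np.asarray(weights, dtype=np.float32)
    w /= w.sum()
    return np.tensordot(w, np.stack(images).astype(np.float32), axes=1)

views = [np.full((2, 2), v, dtype=np.uint8) for v in (100, 104, 250)]  # third view blown out
weights = [1.0, 1.0, 0.0]            # the defective capture is weighted down to zero
print(synthesize(views, weights))    # values of 102, ignoring the defective view
```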
  • Patent number: 8786671
    Abstract: A driving recorder system and a coordinate positioning method thereof. The system comprises a curved image lens, an operation module, a processing module, a display module and a storage module. The curved image lens captures the curved image of the surrounding areas thereof. The operation module restores the curved image into a restored image. The processing module receives the restored image and adds time data to the restored image. The display module displays the restored image and the time data. The storage module stores the restored image and the time data.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: July 22, 2014
    Assignee: dadny, Inc.
    Inventors: Daniel Shih, Mao Hui Wu
  • Patent number: 8786681
    Abstract: A method performed by one or more processors includes: receiving model data defining a three-dimensional scene; rendering the three-dimensional scene into a primary view image showing the three-dimensional scene from a view of a primary camera; and generating, for each of at least some pixels in the primary view image, a disparity value that defines a disparity between a location of the pixel in the primary view image and an indicated location of the pixel in a secondary view image showing the three-dimensional scene from a view of a secondary camera.
    Type: Grant
    Filed: July 5, 2011
    Date of Patent: July 22, 2014
    Assignee: Lucasfilm Entertainment Company, Ltd.
    Inventors: Patrick N. P. Conran, Douglas Moore, Jason Billington
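For parallel cameras, the per-pixel disparity between the primary and secondary views follows directly from the pixel's depth in the rendered scene; this is a minimal sketch of that relation, assuming parallel view axes, which the patent itself does not require.

```python
def pixel_disparity(depth, focal_px, interocular):
    """Horizontal offset, in pixels, between a rendered pixel's location in the
    primary-camera image and in the secondary-camera image, for parallel cameras:
    disparity = f * b / Z (illustrative of the per-pixel disparity values generated)."""
    return focal_px * interocular / depth

# Nearer geometry shifts more between the two views than distant geometry.
print(pixel_disparity(2.0, focal_px=1000, interocular=0.065))   # 32.5 px
print(pixel_disparity(20.0, focal_px=1000, interocular=0.065))  # 3.25 px
```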
  • Patent number: 8787654
    Abstract: A system and method for measuring the potential eyestrain felt by audiences while watching a 3D presentation, e.g., a stereoscopic motion picture, are provided. The eyestrain measurement system and method of the present disclosure is based on the measurement of disparity (or depth) and disparity transition of stereoscopic images of the 3D presentation. The system and method of the present disclosure provide for acquiring a first image and a second image from a first segment, estimating disparity of at least one point in the first image with at least one corresponding point in the second image, estimating disparity transition of a sequence of first and second images, and determining potential eyestrain felt while viewing the 3D presentation based on the disparity and the disparity transition of the sequence of the first and second images.
    Type: Grant
    Filed: May 12, 2008
    Date of Patent: July 22, 2014
    Assignee: Thomson Licensing
    Inventors: Dong-Qing Zhang, Ana B. Benitez