More Than Two Cameras Patents (Class 348/48)
  • Publication number: 20130176403
    Abstract: An omnidirectional stereoscopic camera and microphone system consisting of one or more left and right eye camera and microphone pairs positioned relative to each other such that omnidirectional playback or a live video feed with omnidirectional acoustic depth perception can be achieved. A user or users can select a direction in which to gaze and listen, and share the experience visually and audibly through the system as if physically present. The sensor system orientation is tracked by a compass and/or other orientation sensors, enabling users to maintain gaze direction independent of sensor system orientation changes.
    Type: Application
    Filed: January 30, 2012
    Publication date: July 11, 2013
    Inventor: Kenneth Varga
  • Patent number: 8483273
    Abstract: The method includes obtaining identification information representing a multi-view video data stream, obtaining initialization information of a reference picture list for a random access slice based on the identification information, initializing the reference picture list using the initialization information, obtaining modification information for the initialized reference picture list, determining an assignment modification value for modifying an inter-view reference index in the initialized reference picture list according to the modification information, and modifying the initialized reference picture list for inter-view prediction using the determined assignment modification value.
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: July 9, 2013
    Assignee: LG Electronics Inc.
    Inventors: Byeong Moon Jeon, Seung Wook Park, Yong Joon Jeon, Yeon Kwan Koo
  • Publication number: 20130141546
    Abstract: An apparatus capable of improving the estimation accuracy of information on a subject, including the distance to the subject, is provided. According to an environment recognition apparatus 1 of the present invention, a first cost function is defined as a decreasing function of an object point distance Z. Thus, the longer the object point distance Z, the lower the first cost assigned to the pixel concerned. This reduces the contribution to the total cost C of the first cost of pixels likely to have a large measurement or estimation error in the object point distance Z. Thereby, the estimation accuracy of a plane parameter q representing the surface position and posture of the subject is improved. (A simplified sketch of this distance-weighted cost follows the entry.)
    Type: Application
    Filed: December 4, 2012
    Publication date: June 6, 2013
    Applicants: TOKYO INSTITUTE OF TECHNOLOGY, HONDA MOTOR CO., LTD.
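The abstract above weights each pixel's cost by a decreasing function of its object point distance Z so that far, noisier points contribute less to the total cost C. Below is a minimal Python sketch of that idea, assuming a hypothetical 1/(1+Z) weighting and squared per-pixel residuals; the patent does not specify these exact forms.

```python
import numpy as np

def distance_weighted_total_cost(residuals, distances):
    """Total cost where each pixel's cost is scaled by a decreasing
    function of its object point distance Z, so pixels that are far
    away (and thus likely to carry larger range error) contribute less.

    residuals : per-pixel fitting errors against a candidate plane
    distances : per-pixel object point distances Z
    """
    weights = 1.0 / (1.0 + distances)          # decreasing function of Z (assumed form)
    per_pixel_cost = weights * residuals ** 2  # "first cost" of each pixel
    return per_pixel_cost.sum()                # total cost C

# toy example: the far pixel with a large residual barely moves the total cost
residuals = np.array([0.1, 0.1, 2.0])
distances = np.array([1.0, 2.0, 50.0])
print(distance_weighted_total_cost(residuals, distances))
```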
  • Publication number: 20130141547
    Abstract: An image processing apparatus acquires camera images captured by cameras mounted in a vehicle, specifies a position of each pixel of an image lower than a horizontal line among the acquired camera images on the bottom surface of a virtual projection plane, which is an infinitely large hemisphere with a planar bottom surface, specifies a position of each pixel of an image higher than the horizontal line on the hemisphere surface of the virtual projection plane, specifies a position of each pixel of each camera image specified on the virtual projection plane on a stereoscopic projection plane, specifies each position on an image frame corresponding to the position of the pixel of each camera image specified on the stereoscopic projection plane based on a predetermined point-of-view position, and renders the value of the pixel of the corresponding camera image at each specified position. (A rough geometric sketch of the plane/hemisphere split follows the entry.)
    Type: Application
    Filed: January 29, 2013
    Publication date: June 6, 2013
    Applicant: FUJITSU LIMITED
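The entry above splits each camera ray at the horizon: rays below it are mapped to the planar bottom of the virtual projection plane, rays above it to the hemispherical surface. The following rough sketch illustrates that split under assumed values (camera height, and a large but finite hemisphere radius standing in for the "infinite" one); it is not the patent's projection math.

```python
import numpy as np

CAM_HEIGHT = 1.5     # camera height above the ground plane (illustrative)
HEMI_RADIUS = 100.0  # radius standing in for the "infinitely large" hemisphere

def project_pixel(ray_dir):
    """Map one camera ray onto the virtual projection plane: rays pointing
    below the horizon hit the planar bottom (y = 0), rays at or above the
    horizon hit the hemispherical surface of radius HEMI_RADIUS."""
    cam = np.array([0.0, CAM_HEIGHT, 0.0])
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)

    if d[1] < 0.0:                       # below the horizon -> bottom surface
        t = -cam[1] / d[1]
        return cam + t * d
    # at or above the horizon -> hemisphere |p| = HEMI_RADIUS
    b = 2.0 * cam.dot(d)
    c = cam.dot(cam) - HEMI_RADIUS ** 2
    t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0
    return cam + t * d

print(project_pixel([0.0, -0.2, 1.0]))   # lands on the ground plane
print(project_pixel([0.0, 0.2, 1.0]))    # lands on the hemisphere
```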
  • Publication number: 20130135446
    Abstract: An exemplary street view creating method includes obtaining images captured by at least three cameras in close proximity. The method then extracts the distance information from the obtained images. Next, the method determines images captured by cameras in different orientations and at different precise locations. The method further creates virtual 3D models based on the determined images and the extracted distance information. Then, the method determines any overlapping portion between any two original images. The method aligns any portions of synchronous images which are determined as common or overlapping.
    Type: Application
    Filed: December 17, 2011
    Publication date: May 30, 2013
    Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: HOU-HSIEN LEE, CHANG-JUNG LEE, CHIH-PING LO
  • Patent number: 8451324
    Abstract: Disclosed are a 3D image display apparatus and method. The 3D image display apparatus may adjust a number of viewpoints of a 3D image, distance between viewpoints, and other parameters through varying a display pattern of a viewing zone generating unit and a distance between an image display unit and the viewing zone generating unit. Accordingly, the 3D image display apparatus may effectively express the 3D image suitable for various viewing circumstances.
    Type: Grant
    Filed: March 9, 2009
    Date of Patent: May 28, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ju Yong Park, Gee Young Sung, Du-Sik Park, Dong Kyung Nam, Yun-Tae Kim
  • Patent number: 8442383
    Abstract: An image capturing apparatus selects one of the image capturing conditions to be used as a reference condition when the total image capturing time of one frame under each condition is longer than one frame period at a predetermined frame rate; it captures images at the predetermined frame rate under the reference condition and at a lower frame rate under the other conditions. A playback apparatus detects motion between frames of the moving image captured under the reference condition when the image capturing condition of the playback moving image is not the reference condition, and generates an interpolation frame for interpolating between frames of the playback moving image based on the detected motion. (A toy motion-compensated interpolation sketch follows the entry.)
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: May 14, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yoshihisa Furumoto
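The entry above interpolates frames for the lower-rate streams from motion detected in the reference-condition stream. Here is a toy sketch of motion-compensated midpoint interpolation with a single global motion vector; a real implementation would use per-block or per-pixel motion, which this sketch does not attempt.

```python
import numpy as np

def interpolate_midpoint(prev_frame, next_frame, motion):
    """Create a frame halfway between prev_frame and next_frame by shifting
    each source half-way along the detected motion vector and averaging.

    motion : (dy, dx) global displacement from prev_frame to next_frame,
             assumed to have been estimated from the reference stream.
    """
    dy, dx = motion
    fwd = np.roll(prev_frame, (dy // 2, dx // 2), axis=(0, 1))                  # shift forward half-way
    bwd = np.roll(next_frame, (-(dy - dy // 2), -(dx - dx // 2)), axis=(0, 1))  # shift back half-way
    return ((fwd.astype(np.float32) + bwd.astype(np.float32)) / 2).astype(prev_frame.dtype)

# toy single-channel frames: an object moves 4 pixels to the right
prev_frame = np.zeros((4, 8), dtype=np.uint8); prev_frame[:, 0] = 255
next_frame = np.zeros((4, 8), dtype=np.uint8); next_frame[:, 4] = 255
print(interpolate_midpoint(prev_frame, next_frame, motion=(0, 4)))  # object at column 2
```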
  • Publication number: 20130107015
    Abstract: An image capture device includes: first and second shooting sections, each of which is configured to shoot an image of a subject; a disparity calculating section configured to generate a depth map based on first and second images that have been shot by the first and second shooting sections, respectively; and an image generating section configured to generate, based on the depth map and the first image, a third image that has as high a resolution as the first image and that forms part of a 3D image. The image generating section is controlled to determine whether or not to generate the third image based on at least one of the states of the first and second images, a zoom power during shooting, and the tilt of the image capture device during shooting.
    Type: Application
    Filed: December 14, 2012
    Publication date: May 2, 2013
    Applicant: Panasonic Corporation
  • Patent number: 8432435
    Abstract: A catadioptric camera creates image light fields from a 3D scene by creating ray images defined as 2D arrays of ray-structure picture-elements (ray-xels). Each ray-xel captures light intensity, mirror-reflection location, and mirror-incident light ray direction. A 3D image is then rendered from the ray images by combining the corresponding ray-xels.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: April 30, 2013
    Assignee: Seiko Epson Corporation
    Inventors: Yuanyuan Ding, Jing Xiao
  • Publication number: 20130100256
    Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map. (A minimal disparity-to-depth sketch follows the entry.)
    Type: Application
    Filed: October 21, 2011
    Publication date: April 25, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
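The entry above computes a disparity map between synchronized IR views of a projected dot pattern and derives a depth map from it. A minimal sketch of the disparity-to-depth conversion for a rectified pair, with illustrative focal length and baseline values:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) from a rectified stereo pair into a
    depth map (meters) using depth = f * B / d; zero disparity maps to inf."""
    depth = np.full(disparity.shape, np.inf, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# illustrative numbers: 600 px focal length, 10 cm baseline
disp = np.array([[0.0, 3.0], [6.0, 12.0]])
print(disparity_to_depth(disp, focal_px=600.0, baseline_m=0.10))
# -> inf, 20 m, 10 m, 5 m
```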
  • Patent number: 8427746
    Abstract: The present invention discloses a stereoscopic image display system and a method of controlling the same. An eye tracking module locates the current 3D spatial positions of the viewer's eyes and generates information on the current 3D spatial positions of both the left and right eyes. A control module controls a display device that can alter the direction of the output light, and outputs images on the display device in time-multiplexed mode. The light containing the left eye image is output to the position of the left eye rather than the right eye at one time point, and the light containing the right eye image is output to the position of the right eye rather than the left eye at another time point, so that a stereoscopic image is perceived according to the parallax theory. The present invention enlarges the viewing range of the stereoscopic image and achieves a better stereoscopic visual experience for viewers.
    Type: Grant
    Filed: July 16, 2010
    Date of Patent: April 23, 2013
    Assignee: Infovision Optoelectronics (Kunshan) Co. Ltd.
    Inventor: Bingyu Si
  • Patent number: 8427531
    Abstract: A stereoscopic image display apparatus according to an embodiment includes: a display device including a display panel with pixels, and an optical plate controlling light rays emitted from the pixels; a camera provided in the display device; a face tracking unit determining whether a viewer exists in front of the display device based on an image picked up by the camera and, if the viewer exists, sampling and detecting the distance from the display device to the viewer and the position of the viewer; a memory storing the position of the viewer sampled and detected by the face tracking unit; and an image display control unit estimating the position of the viewer based on the position stored in the memory and driving and controlling the display panel based on the estimated position when the face tracking unit does not recognize that the viewer exists.
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: April 23, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kazuhiro Takashima, Kiyoshi Hoshino
  • Publication number: 20130095920
    Abstract: Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map. (A sketch of the depth-map-to-point-cloud step follows the entry.)
    Type: Application
    Filed: October 13, 2011
    Publication date: April 18, 2013
    Applicant: Microsoft Corporation
    Inventors: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
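The entry above generates a depth map, converts it to a point cloud, meshes it, and texture-maps the mesh. The sketch below covers only the depth-to-point-cloud step, assuming a pinhole camera with the principal point at the image center (intrinsics are illustrative):

```python
import numpy as np

def depth_to_point_cloud(depth, focal_px):
    """Back-project each depth pixel into a 3D point (X, Y, Z) in the camera
    frame using the pinhole model, with the principal point at the center."""
    h, w = depth.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(np.float64)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)                 # a flat wall 2 m away
cloud = depth_to_point_cloud(depth, focal_px=500.0)
print(cloud.shape, cloud[0])                 # (16, 3) points
```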
  • Patent number: 8421845
    Abstract: A stereoscopic image generating apparatus is constituted by: an image obtaining section, for obtaining a plurality of image groups, each constituted by a plurality of images for generating stereoscopic images and obtained by photography of subjects from different viewpoints; and an image layout section, for generating a stereoscopic layout image, in which a single image group selected from among the plurality of image groups as an image of interest is arranged at a predetermined position on a display screen in a stereoscopically viewable manner, and selected images from among the image groups other than the single image group are arranged such that they appear to be inclined and facing toward the predetermined position on the display screen at at least one of the right and left sides of the predetermined position.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: April 16, 2013
    Assignee: FUJIFILM Corporation
    Inventors: Eiji Ishiyama, Mikio Watanabe
  • Patent number: 8416284
    Abstract: A stereoscopic image capturing apparatus includes a first image acquisition unit including a first image formation lens unit forming an image of an object, and a first image sensor having a plurality of capturing pixels to receive the image formed by the first image formation lens unit, and a second image acquisition unit including a second image formation lens unit forming an image of the object, a first lens array unit having a plurality of lenses to receive the image formed by the second image formation lens unit, and a second image sensor having a plurality of capturing pixels to receive the formed image through the first lens array unit. The second image acquisition unit is disposed at a distance in a horizontal direction from the first image acquisition unit when viewed from the object.
    Type: Grant
    Filed: August 4, 2009
    Date of Patent: April 9, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Yoshiharu Momonoi, Yuki Iwanaka
  • Patent number: 8411744
    Abstract: The present invention provides a method of decoding a video signal. The method includes the steps of obtaining view information of a picture from the video signal and generating information for reference picture management using the view information.
    Type: Grant
    Filed: March 30, 2007
    Date of Patent: April 2, 2013
    Assignee: LG Electronics Inc.
    Inventors: Byeong Moon Jeon, Seung Wook Park, Yong Joon Jeon, Yeon Kwan Koo
  • Patent number: 8409913
    Abstract: A semiconductor device includes a semiconductor substrate having at least one surface provided with a semiconductor element, wherein the semiconductor substrate includes a region of a first conductivity type, the region being formed in a surface layer portion of the semiconductor substrate; a first diffusion region of a second conductivity type, the first diffusion region having a first impurity concentration and being formed in the surface layer portion, and a pn junction being formed between the first diffusion region and the region of the first conductivity type; and a first metal silicide film formed on part of a portion of the surface corresponding to the first diffusion region.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: April 2, 2013
    Assignee: Fujitsu Semiconductor Limited
    Inventor: Masaya Katayama
  • Patent number: 8395659
    Abstract: Systems and methods for identifying moving objects from received images are disclosed. Images are received from a stereo image capture device and from a moving image capture device. Distances from the stereo image capture device to points in a captured image are stored in a stereo distance map, and distances from the moving image capture device are determined from pairs of images captured by the moving image capture device and stored in a moving distance map. Stereo disparities are determined from distances in the stereo distance map, and motion disparities are determined from distances in the moving distance map. A scale is generated from the stereo disparities and the motion disparities. The scale is applied to the motion disparities, and the scaled motion disparities and the stereo disparities are used to identify pixels in an image associated with a moving object. (A simplified scale-and-compare sketch follows the entry.)
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: March 12, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventor: Morimichi Nishigaki
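The entry above derives a scale relating motion disparities to stereo disparities and flags pixels where the scaled values disagree as belonging to moving objects. A heavily simplified sketch of that comparison; the median-ratio scale estimate and the threshold are illustrative choices, not taken from the patent:

```python
import numpy as np

def find_moving_pixels(stereo_disp, motion_disp, threshold=0.5):
    """Estimate a single scale relating motion disparities to stereo
    disparities (median of per-pixel ratios, so moving outliers have little
    influence), apply it, and flag pixels where the scaled motion disparity
    still disagrees with the stereo disparity."""
    valid = (stereo_disp > 0) & (motion_disp > 0)
    scale = np.median(stereo_disp[valid] / motion_disp[valid])
    scaled = scale * motion_disp
    return valid & (np.abs(scaled - stereo_disp) > threshold)

stereo = np.array([[2.0, 4.0, 6.0, 3.0]])
motion = np.array([[1.0, 2.0, 3.0, 6.0]])   # last pixel: inconsistent -> moving
print(find_moving_pixels(stereo, motion))    # -> [[False False False  True]]
```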
  • Publication number: 20130044181
    Abstract: Embodiments of the present invention disclose a system and method for multi-viewpoint video capture. According to one embodiment, the system includes a camera housing for accommodating both a first multi-imager set and a second multi-imager set, with each multi-imager set including a plurality of optical cameras having different viewpoint directions and configured to produce a source image. Furthermore, each camera in the first multi-imager set has a corresponding camera in the second multi-imager set facing in approximately the same viewpoint direction. The first multi-imager set is positioned laterally adjacent to the second multi-imager set such that lines joining the centers of projection of corresponding cameras in the first multi-imager set and second multi-imager set are approximately parallel.
    Type: Application
    Filed: May 14, 2010
    Publication date: February 21, 2013
    Inventors: Henry Harlyn Baker, Henry Sang, JR., Nelson Liang An Chang
  • Patent number: 8369406
    Abstract: Provided are an apparatus and method for predictive coding/decoding for improving a compression rate of multiview video using one or two additional reference frame buffers. The predictive encoding apparatus includes: a multiview reference picture providing unit for providing a reference picture for predictive encoding according to temporal and spatial GOP structure information; a prediction unit for creating a vector by predicting which part of the reference picture inputted from the multiview reference picture providing unit is referred to by a picture to be currently encoded; a transforming and quantizing unit for obtaining a difference signal between the predicted signal inputted from the prediction unit and a picture signal to be currently encoded, transforming the obtained difference signal, quantizing the transformed signal, and compressing the quantized signal; and an entropy encoding unit for encoding the quantized signal and the vectors according to a predetermined scheme and outputting the encoded signal.
    Type: Grant
    Filed: July 18, 2006
    Date of Patent: February 5, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Dae-Hee Kim, Nam-Ho Hur, Soo-In Lee, Yung-Lyul Lee, Jong-Ryul Kim, Suk-Hee Cho
  • Patent number: 8370873
    Abstract: A method of operation of three dimensional (3D) stereoscopic television consistent with certain implementations involves turning on or installing a set of 3D glasses on a viewer to cause the set of 3D glasses to enter an active operational mode; and at the 3D glasses, emitting a signal to the television that causes the television to switch from a 2D display mode to a 3D display mode. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
    Type: Grant
    Filed: March 9, 2010
    Date of Patent: February 5, 2013
    Assignee: Sony Corporation
    Inventor: Peter Rae Shintani
  • Publication number: 20130027523
    Abstract: The embodiments of the present invention relate to a method and a processor for representing a 3D scene. In the method, one 3D component of the 3D scene to be represented, captured at at least three different views (v1, v2, v3), is projected to a predefined view (vF). A value associated with each projected view regarding the 3D component is then determined, and consistency among the projected views regarding the 3D component is detected. Moreover, a consistency value regarding the 3D component is determined based on the determined values associated with the respective projected views, and the determined values are replaced by the determined consistency value on at least one of the three projected 3D components.
    Type: Application
    Filed: November 24, 2010
    Publication date: January 31, 2013
    Applicant: Telefonaktiebolaget L M Ericsson (PUBL)
    Inventors: Ivana Girdzijauskas, Markus Flierl, Apostolos Georgakis, Pravin Kumar Rana, Thomas Rusert
  • Publication number: 20130002827
    Abstract: An apparatus and method for capturing a light field geometry using a multi-view camera may refine the light field geometry, which varies depending on the light within images acquired from a plurality of cameras with different viewpoints, and may restore a three-dimensional (3D) image.
    Type: Application
    Filed: May 30, 2012
    Publication date: January 3, 2013
    Applicant: Samsung Electronics Co., LTD.
    Inventors: Seung Kyu Lee, Do Kyoon Kim, Hyun Jung Shim
  • Publication number: 20130002794
    Abstract: A system that incorporates teachings of the present disclosure may, for example, receive a request for a telepresence seat at an event, obtain media content comprising event images of the event captured by an event camera system, receive images captured by a camera system at a user location, and provide the media content and video content representative of the images to a processor for presentation at a display device utilizing a telepresence configuration that simulates the first and second users being present at the event, where the providing of the first and second video content establishes a communication session between the first and second users. Other embodiments are disclosed.
    Type: Application
    Filed: June 30, 2011
    Publication date: January 3, 2013
    Applicant: AT&T Intellectual Property I, LP
    Inventors: Tara Hines, Andrea Basso, Aleksey Ivanov, Jeffrey Mikan, Nadia Morris
  • Patent number: 8334893
    Abstract: A method for combining range information with an optical image is provided. The method includes capturing a first optical image of a scene with an optical camera, the first optical image comprising a plurality of pixels. Additionally, range information of the scene is captured with a ranging device. Range values are then determined for at least a portion of the plurality of pixels of the first optical image based on the range information. The range values and the optical image are combined to produce a 3-dimensional (3D) point cloud. A second optical image of the scene, from a different perspective than the first optical image, is produced based on the 3D point cloud. (A compact unproject-and-reproject sketch follows the entry.)
    Type: Grant
    Filed: November 7, 2008
    Date of Patent: December 18, 2012
    Assignee: Honeywell International Inc.
    Inventor: Randolph G. Hartman
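The entry above fuses per-pixel range values with an optical image into a 3D point cloud and renders the cloud from a new perspective. A compact sketch of the two core steps, unprojection and reprojection, assuming pinhole intrinsics and a pure translation between viewpoints (all values illustrative):

```python
import numpy as np

def colored_cloud(gray_image, range_map, focal_px):
    """Combine an optical image with per-pixel range values into a 3D point
    cloud with one intensity per point (pinhole model, centered principal point)."""
    h, w = range_map.shape
    v, u = np.mgrid[0:h, 0:w]
    z = range_map.astype(np.float64)
    x = (u - (w - 1) / 2.0) * z / focal_px
    y = (v - (h - 1) / 2.0) * z / focal_px
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points, gray_image.reshape(-1)

def reproject(points, focal_px, translation):
    """Project the cloud into a second camera translated (no rotation, for
    brevity) from the first; returns pixel offsets from the principal point."""
    p = points - np.asarray(translation, dtype=float)
    return np.stack([focal_px * p[:, 0] / p[:, 2],
                     focal_px * p[:, 1] / p[:, 2]], axis=-1)

range_map = np.full((2, 2), 5.0)                   # toy 2x2 range image: wall 5 m away
gray = np.arange(4, dtype=np.uint8).reshape(2, 2)  # toy intensity image
points, intensities = colored_cloud(gray, range_map, focal_px=400.0)
print(reproject(points, focal_px=400.0, translation=[0.5, 0.0, 0.0]))
```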
  • Patent number: 8334895
    Abstract: Apparatus and systems, as well as methods and articles, may operate to capture a portion of an omniscopic or omni-stereo image using one or more image capture media. The media may be located substantially perpendicular to a converging ray originating at a viewpoint on an inter-ocular circle and having a convergence angle between zero and ninety degrees from a parallel viewpoint baseline position that includes a non-converging ray originating at the viewpoint. The media may also be located so as to be substantially perpendicular to a non-converging ray originating at a first viewpoint at a first endpoint of a diameter defining an inter-ocular circle, wherein the origin of the non-converging ray gravitates toward the center of the inter-ocular circle as spherical imagery is acquired.
    Type: Grant
    Filed: January 18, 2011
    Date of Patent: December 18, 2012
    Assignee: Micoy Corporation
    Inventor: Trent N. Grover
  • Publication number: 20120314037
    Abstract: The present invention relates to a system and method for providing 3D imaging. The system comprises: a) two or more cameras for generating stereoscopic information only for selected regions in a common or coincident field of view captured simultaneously by both cameras, in order to provide distance measurement information for object(s) located in said selected regions; b) at least one 3DLR module for providing a 3D image, wherein said two cameras and said 3DLR module are positioned in such a way that, at least partially, they are able to capture similar or coincident field of view information; and c) a processing unit for generating said 3D imaging according to said stereoscopic information and said 3D image by using image processing algorithm(s) in order to provide 3D imaging information in real time.
    Type: Application
    Filed: February 22, 2011
    Publication date: December 13, 2012
    Applicant: Ben-Gurion University of the Negev
    Inventors: Youval Nehmadi, Hugo Guterman
  • Patent number: 8319826
    Abstract: A three-dimensional image communication terminal enables communication with a sense of being present at a place and a sense of reality by using a natural, highly robust three-dimensional image. A three-dimensional image communication terminal includes a three-dimensional image input section, a transmitting section that transmits an input image to a communication partner after image processing, a three-dimensional image display section which displays on a monitor a human image or an object image that was shot, and a telephone calling section which receives three-dimensional image information from a partner and communicates with the other end by voice. The three-dimensional image display section includes an integral-photography-type horizontal/vertical parallax display device.
    Type: Grant
    Filed: January 12, 2010
    Date of Patent: November 27, 2012
    Assignee: Panasonic Corporation
    Inventor: Takashi Kubara
  • Patent number: 8319828
    Abstract: Provided is a highly efficient 2D-3D switchable display device. The image display device includes a display unit forming an image, and a switching visual field separation unit switching the image formed by the display unit into a 2D image or a 3D image, and comprising a lens array for separating a visual field and a liquid crystal lens controlled so as to have a refractive power offsetting or reinforcing a refractive power of the lens array.
    Type: Grant
    Filed: September 28, 2007
    Date of Patent: November 27, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dae-sik Kim, Sergey Shestak, Kyung-hoon Cha
  • Patent number: 8310557
    Abstract: The present invention relates to camera arrangements with backlighting detection. The camera arrangements are capable of simultaneously capturing real scene data from various viewpoints. This data may include illumination data impinging the scene. The illumination data may then be utilized to alter the apparent illumination of a second image, either real or virtual, which is to be superimposed over the real scene so that the illumination across the entire superimposed scene is consistent. The camera arrangements may utilize combinations of umbilical cables and light tubes to expand or contract the field of capture. The camera arrangements may also include in-line signal processing of the data output.
    Type: Grant
    Filed: July 2, 2007
    Date of Patent: November 13, 2012
    Inventor: Peter R. Rogina
  • Patent number: 8305430
    Abstract: A visual odometry system and method for a fixed or known calibration of an arbitrary number of cameras in monocular configuration is provided. Images collected from each of the cameras in this distributed aperture system have negligible or absolutely no overlap. The relative pose and configuration of the cameras with respect to each other are assumed to be known and provide a means for determining the three-dimensional poses of all the cameras constrained in any given single camera pose. The cameras may be arranged in different configurations for different applications and are made suitable for mounting on a vehicle or person undergoing general motion. A complete parallel architecture is provided in conjunction with the implementation of the visual odometry method, so that real-time processing can be achieved on a multi-CPU system.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: November 6, 2012
    Assignee: SRI International
    Inventors: Taragay Oskiper, John Fields, Rakesh Kumar
  • Patent number: 8305428
    Abstract: A stereo video shooting and viewing device includes: a body, having two groups of eyepieces spaced apart from each other by a certain distance corresponding to a distance between two human eyes; two micro display screens, disposed on front ends of the eyepieces; two digital camera lenses, disposed on an outer side of the body, spaced apart from each other by a certain distance corresponding to the distance between two human eyes, and used for synchronously capturing images with a visual angle difference corresponding to that of the human eyes; and a main control unit (MCU), connected to the two micro display screens and the two digital camera lenses, and used for processing the images synchronously captured by the two digital camera lenses and image signals received from exterior, and displaying the images on the two micro display screens separately.
    Type: Grant
    Filed: May 20, 2008
    Date of Patent: November 6, 2012
    Assignee: Inlife-Handnet Co., Ltd
    Inventor: Chao Hu
  • Patent number: 8300087
    Abstract: A sequential pattern comprising contiguous black frames inserted between left and right 3D video and/or graphics frames may be displayed on an LCD display. The pattern may comprise two or three contiguous left frames followed by contiguous black frames, followed by two or three contiguous right frames followed by contiguous black frames. The left and/or right frames may comprise interpolated frames and/or may be displayed in ascending order. The contiguous black frames are displayed longer than the liquid crystal response time. 3D shutter glasses are synchronized with the black frames. A left lens transmits light when left frames followed by contiguous black frames are displayed, and a right lens transmits light when right frames followed by contiguous black frames are displayed. A 3D pair of 24 Hz frames or two 3D pairs of 60 Hz frames per pattern are displayed on a 240 Hz display. (A small pattern-generator sketch follows the entry.)
    Type: Grant
    Filed: October 23, 2009
    Date of Patent: October 30, 2012
    Assignee: Broadcom Corporation
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov, Sunkwang Hong
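The entry above interleaves runs of left frames, black frames, right frames, and black frames, keeping the corresponding shutter lens open through each left or right run and the black frames that follow it. A small sketch that generates one such pattern and a simplified shutter schedule, using the two-left / two-black / two-right / two-black variant mentioned in the abstract:

```python
from itertools import cycle, islice

def frame_pattern(left_run=2, black_run=2, right_run=2):
    """One period of the sequential pattern: left frames, black frames,
    right frames, black frames."""
    return (["L"] * left_run + ["B"] * black_run +
            ["R"] * right_run + ["B"] * black_run)

def glasses_schedule(frames):
    """Pair each displayed frame with the lens that is open: the left lens
    stays open through the left frames and the black frames that follow
    them, and likewise for the right lens (a simplified shutter model)."""
    open_lens, schedule = "left", []
    for f in frames:
        if f in ("L", "R"):
            open_lens = "left" if f == "L" else "right"
        schedule.append((f, open_lens))
    return schedule

period = frame_pattern()                      # 8 frames of a 240 Hz stream
print(list(islice(cycle(period), 16)))        # two full periods
print(glasses_schedule(period))
```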
  • Patent number: 8299504
    Abstract: A two-dimensional, temporally modulated electromagnetic wavefield, preferably in the ultraviolet, visible or infrared spectral range, can be locally detected and demodulated with one or more sensing elements. Each sensing element consists of a resistive, transparent electrode (E) on top of an insulated layer (O) that is produced over a semiconducting substrate whose surface is electrically kept in depletion. The electrode (E) is connected with two or more contacts (C1; C2) to a number of clock voltages that are operated synchronously with the frequency of the modulated wavefield. In the electrode and in the semiconducting substrate lateral electric fields are created that separate and transport photogenerated charge pairs in the semiconductor to respective diffusions (D1; D2) close to the contacts (C1; C2).
    Type: Grant
    Filed: January 19, 2009
    Date of Patent: October 30, 2012
    Assignee: MESA Imaging AG
    Inventor: Peter Seitz
  • Publication number: 20120268571
    Abstract: A multiview face capture system may acquire detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints. A lighting system may illuminate a face with polarized light from multiple directions. The light may be polarized substantially parallel to a reference axis during a parallel polarization mode of operation and substantially perpendicular to the reference axis during a perpendicular polarization mode of operation. Multiple cameras may each capture an image of the face along a materially different optical axis and have a linear polarizer configured to polarize light traveling along its optical axis in a direction that is substantially parallel to the reference axis. A controller may cause each of the cameras to capture an image of the face while the lighting system is in the parallel polarization mode of operation and again while the lighting system is in the perpendicular polarization mode of operation.
    Type: Application
    Filed: April 18, 2012
    Publication date: October 25, 2012
    Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: PAUL E. DEBEVEC, ABHIJEET GHOSH, GRAHAM FYFFE
  • Patent number: 8270762
    Abstract: A method of calibrating the intensity with which light is emitted through the light-emission face of an optical fiber light line assembly employs (i) a data processing system with computer memory, (ii) a camera communicatively linked to the data processing system, and (iii) a signal-responsive scoring device communicatively linked to the data processing system. Light is introduced into the assembly, an image of the emitting face is captured, and corresponding image data is stored in computer memory. The image data is segmented into plural image-data segments corresponding to physical sub-regions of the imaged face. The image-data segments are algorithmically analyzed to identify the sub-region that emits inputted light with the lowest intensity, and emission-intensity data associated with that sub-region is identified as reference emission data. (A stripped-down segmentation sketch follows the entry.)
    Type: Grant
    Filed: July 18, 2009
    Date of Patent: September 18, 2012
    Assignee: Schott Corporation
    Inventor: Robert E. Abel
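The entry above segments the image of the emitting face into sub-regions and takes the dimmest sub-region as the reference emission. A stripped-down sketch using a square grid of segments; the grid shape and the test image are illustrative, not prescribed by the patent:

```python
import numpy as np

def dimmest_segment(face_image, grid=(4, 4)):
    """Split the captured image of the light-emission face into a grid of
    sub-regions, compute each sub-region's mean intensity, and return the
    indices and mean intensity of the dimmest one (the reference emission)."""
    h, w = face_image.shape
    rows, cols = grid
    best = None
    for r in range(rows):
        for c in range(cols):
            seg = face_image[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            m = float(seg.mean())
            if best is None or m < best[2]:
                best = (r, c, m)
    return best

rng = np.random.default_rng(0)
image = rng.uniform(100, 200, size=(64, 64))
image[48:, 48:] *= 0.5                        # make one corner dimmer
print(dimmest_segment(image))                 # -> (3, 3, ...)
```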
  • Patent number: 8269818
    Abstract: The systems and methods described herein include, among other things, a technique for calibrating the outputs of multiple sensors, such as CCD devices, that have overlapping fields of view and mapping the pixels of those outputs to the pixels of a display screen by means of a lookup table so that a user can see a selected field of view within the larger fields of view that are seen by the sensors.
    Type: Grant
    Filed: August 13, 2007
    Date of Patent: September 18, 2012
    Assignee: Tenebraex Corporation
    Inventor: Peter W. J. Jones
  • Patent number: 8259160
    Abstract: The present invention provides a method for generating free viewpoint video image in three-dimensional movement capable of synthesizing the free viewpoint video image from a viewpoint which looks down on an object from above. This method includes a process of taking multi-viewpoint video images using a plurality of cameras located on an identical plane and a camera not located on the identical plane, a process of generating video image at a viewpoint having the same azimuth as a desired viewpoint and located on the plane from the multi-viewpoint video images of the cameras on the plane, and a process of generating video image at the desired viewpoint from video image of the camera not located on the plane and video image at the viewpoint having the same azimuth as the desired viewpoint and located on the plane.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: September 4, 2012
    Assignee: KDDI Corporation
    Inventors: Akio Ishikawa, Shigeyuki Sakazawa, Atsushi Koike
  • Publication number: 20120206568
    Abstract: A computer-readable storage medium can be configured to store code to trigger, when a computing device is in a stereoscopic mode, generation of a stereoscopic image based on a first image captured using a first image capture device in a first portion of a computing device and based on a second image captured using a second image capture device in a second portion of the computing device. The code including code to trigger, when in a multi-image mode, concurrent display of at least a portion of a third image captured using the first image capture device in a first region of a display and at least a portion of a fourth image captured using the second image capture device in a second region of the display mutually exclusive from the first region of the display. The code including code to receive an indicator configured to trigger the computing device to change between the stereoscopic mode and the multi-image mode.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: GOOGLE INC.
    Inventor: Amy Han
  • Patent number: 8243122
    Abstract: The present invention provides a method of generating a virtual viewpoint video image when the virtual viewpoint position is not located on a plane where a camera is disposed. In an environment in which a plurality of cameras having a horizontal optical axis are disposed in a real zone (for example, on the circumference) which surrounds an object, a video image of an arbitrary viewpoint on the circumference is generated. Further, by synthesizing video images photographed by a camera, a free viewpoint video image is generated from a virtual viewpoint (a viewpoint from a high or low position) where no camera is placed. In a method of achieving this, a travel distance of a display position is calculated by a local region synthesizing portion, and this travel distance is reflected in the free viewpoint video image of a local region.
    Type: Grant
    Filed: July 30, 2008
    Date of Patent: August 14, 2012
    Assignee: KDDI Corporation
    Inventors: Akio Ishikawa, Shigeyuki Sakazawa, Atsushi Koike
  • Patent number: 8244542
    Abstract: A method, article of manufacture, and apparatus for monitoring a location having a plurality of audio sensors and video sensors are disclosed. In an embodiment, this comprises receiving auditory data, comparing a portion of the auditory data to a lexicon comprising a plurality of keywords to determine if there is a match to a keyword from the lexicon, and if a match is found, selecting at least one video sensor to monitor an area to be monitored. Video data from the video sensor is archived with the auditory data and metadata. The video sensor is selected by determining video sensors associated with the areas to be monitored. A lookup table is used to determine the association. Cartesian coordinates may be used to determine positions of components and their areas of coverage. (A bare-bones keyword-dispatch sketch follows the entry.)
    Type: Grant
    Filed: March 31, 2005
    Date of Patent: August 14, 2012
    Assignee: EMC Corporation
    Inventors: Christopher Hercules Claudatos, William Dale Andruss, Richard Urmston, John Louis Acott
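The entry above matches recognized words against a keyword lexicon and uses a lookup table to select the video sensors covering the associated area. A bare-bones sketch of that dispatch logic; the lexicon, coverage table, and camera names are invented for illustration:

```python
# hypothetical lexicon and camera-coverage lookup table, for illustration only
LEXICON = {"help": "lobby", "fire": "kitchen", "alarm": "entrance"}
CAMERAS_BY_AREA = {
    "lobby": ["cam-01", "cam-02"],
    "kitchen": ["cam-05"],
    "entrance": ["cam-03", "cam-04"],
}

def select_video_sensors(transcript_words):
    """Compare recognized words against the keyword lexicon; on a match,
    look up the video sensors that cover the associated area."""
    selected = []
    for word in transcript_words:
        area = LEXICON.get(word.lower())
        if area:
            selected.extend(CAMERAS_BY_AREA.get(area, []))
    return sorted(set(selected))

print(select_video_sensors(["someone", "yelled", "FIRE", "near", "the", "stove"]))
# -> ['cam-05']
```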
  • Patent number: 8243123
    Abstract: Additional cameras, optionally in conjunction with markers or projectors, capture three-dimensional information about the environment and characters of a filmed scene. This data is later used to convert, generally as a post-production process under highly automated computer control, or as a post broadcast process, a relatively high quality two-dimensional image stream to three-dimensional or stereoscopic, generally binocular, format.
    Type: Grant
    Filed: February 2, 2006
    Date of Patent: August 14, 2012
    Inventors: David M. Geshwind, Anne C. Avellone
  • Patent number: 8237791
    Abstract: Feeds from cameras are better visualized by superimposing images based on the feeds onto a two- or three-dimensional virtual map. For example, a traffic camera feed can be aligned with a roadway included in the map. Multiple videos can be aligned with roadways in the map and can also be aligned in time.
    Type: Grant
    Filed: March 19, 2008
    Date of Patent: August 7, 2012
    Assignee: Microsoft Corporation
    Inventors: Billy Chen, Eyal Ofek
  • Patent number: 8233032
    Abstract: A method for generating and displaying a three-dimensional image viewable from different angles includes the steps of generating a plurality of images of a three-dimensional object from a plurality of angles. Each image is displayed from an angle corresponding to the generated angle on a display surface of a screen, typically having a plurality of display surfaces. The screen is rotated such that each viewable angle of each image is displayed at least twenty-four times per second so as to appear constant.
    Type: Grant
    Filed: June 9, 2009
    Date of Patent: July 31, 2012
    Inventor: Bartholomew Garibaldi Yukich
  • Patent number: 8228372
    Abstract: A digital video editing and playback system and methods of editing and playing back digital video are provided. The system includes a video processor adapted to receive video segments from multiple sources. The video segments include synchronization information. The video processor includes software instructions adapted to be executed by the video processor. The software instructions are adapted to evaluate the synchronization information from various video segments and to form associations between video segments from different sources that correspond to a common event. (A toy timestamp-grouping sketch follows the entry.)
    Type: Grant
    Filed: January 5, 2007
    Date of Patent: July 24, 2012
    Assignee: Agile Sports Technologies, Inc.
    Inventor: Christopher Griffin
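The entry above evaluates synchronization information to associate video segments from different sources that cover a common event. A toy sketch that groups segments by overlapping timestamps; the data model and tolerance are illustrative, not the patent's scheme:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    source: str
    start: float   # seconds since a shared clock
    end: float

def associate_by_overlap(segments, tolerance=1.0):
    """Group segments whose time ranges overlap (within a tolerance),
    treating each group as covering one common event."""
    events = []
    for seg in sorted(segments, key=lambda s: s.start):
        for event in events:
            if seg.start <= max(s.end for s in event) + tolerance:
                event.append(seg)
                break
        else:
            events.append([seg])
    return events

clips = [Segment("sideline-cam", 0.0, 12.0), Segment("endzone-cam", 1.5, 11.0),
         Segment("sideline-cam", 60.0, 72.0), Segment("endzone-cam", 61.0, 70.0)]
for event in associate_by_overlap(clips):
    print([(s.source, s.start) for s in event])
```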
  • Patent number: 8222996
    Abstract: An embodiment of the present invention provides a radio frequency identification (RFID) tag, comprising at least one light emitting diode (LED) that is controlled by the RFID's logic and powered by the RFID's power harvesting circuit, wherein the RFID tag is capable of being interrogated by an RFID reader and reporting its unique identification number by RF backscatter and/or controlling the illumination state of the at least one LED.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: July 17, 2012
    Assignee: Intel Corporation
    Inventors: Joshua R. Smith, Daniel Yeager, Ali Rahimi
  • Patent number: 8217993
    Abstract: Apparatus to capture three-dimensional images of a subject selected from the group consisting of an animate object, an inanimate object, a human, an animal, a biological mass or a portion of a subject comprises a plurality of image capturing device modules, each module comprising a plurality of image-capturing devices; and a rigid support structure supporting the plurality of image capturing device modules to define a space wherein said subject may be disposed. The rigid support structure supports all of the modules in predetermined relationship to each other and to the space. The rigid support structure further supports the modules in positions such that each module is positioned to capture a group of first images of a corresponding surface portion of the subject disposed within the space such that each group of first images captured by the corresponding module captures a substantially different surface portion of the subject disposed within the space.
    Type: Grant
    Filed: March 20, 2009
    Date of Patent: July 10, 2012
    Assignee: Cranial Technologies, Inc.
    Inventors: Jerold N. Luisi, Timothy R Littlefield, Jeanne K Pomatto-Hertz
  • Publication number: 20120154519
    Abstract: A chassis assembly is disclosed including a chassis and plurality of image sensors fixedly mounted to the chassis. The number of image sensors may vary, but in one example, there are three image sensors, arranged in an equilateral triangle within the chassis. Each image sensor includes a camera, which may be a video camera, and a catadioptric mirror. The mirror in each image sensor is fixedly mounted with respect to the camera via a stem and a collar for mounting the mirror to the chassis.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Habib Zargarpour, Alex Garden, Ben Vaught, Michael Rondinelli
  • Publication number: 20120154548
    Abstract: Methods are disclosed for capturing image data from three or more image sensors, and for processing the captured image data into left views of a panorama taken from each image sensor and right views of the panorama taken from each image sensor. The left views are combined and used as the left perspective of the panorama, and the right views are combined and used as the right perspective of the panorama, in the stereoscopic view.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Habib Zargarpour, Alex Garden, Ben Vaught, Sing Bing Kang, Michael Rondinelli
  • Patent number: 8179431
    Abstract: A compound eye photographing apparatus including: a plurality of photographing units for photographing a subject at a plurality of photographing positions to obtain a plurality of images of the subject; a subject detection unit for detecting a predetermined subject from a base image which is one of the plurality of images; a subject information generation unit for generating subject information which includes information of the position and size of the predetermined subject in the base image; a photographing information generation unit for generating photographing information which includes information of the baseline length, convergence angle, focal length, and zoom magnification of each of the plurality of photographing units at the time of photographing, and a determination unit for determining whether or not the predetermined subject detected from the base image is included in another image other than the base image and outputting the determination result.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: May 15, 2012
    Assignee: Fujifilm Corporation
    Inventor: Tomonori Masuda