Picture Signal Generators (epo) Patents (Class 348/E13.074)
  • Publication number: 20120275667
    Abstract: An apparatus and a method for generating a rectified image. First pixel information corresponding to a first image is received from a first imager. Second pixel information corresponding to a second image is received from a second imager. A plurality of facial feature points of a portrait in each of the first and second images are identified. A fundamental matrix is generated based on the identified facial feature points. An essential matrix is generated based on the fundamental matrix. Rotational and translational information corresponding to the first and second imagers is generated based on the essential matrix. The rotational and translational information is applied to at least one of the first and second images to generate at least one rectified image. (See the sketch after this entry.)
    Type: Application
    Filed: August 25, 2011
    Publication date: November 1, 2012
    Applicant: APTINA IMAGING CORPORATION
    Inventor: CHENG LU
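    Sketch: a minimal Python/OpenCV outline of the pipeline described above, assuming the facial landmarks are already detected and matched (at least eight pairs) and the imagers' intrinsics are known; the function and variable names are illustrative, not the patent's own implementation.

      import cv2
      import numpy as np

      def rectify_pair(img1, img2, pts1, pts2, K1, K2, dist1=None, dist2=None):
          """pts1/pts2: (N, 2) arrays of matched facial landmarks, N >= 8.
          K1/K2: 3x3 intrinsic matrices of the first and second imagers."""
          dist1 = np.zeros(5) if dist1 is None else dist1
          dist2 = np.zeros(5) if dist2 is None else dist2
          size = (img1.shape[1], img1.shape[0])

          # Fundamental matrix from the matched facial feature points.
          F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

          # Essential matrix from the fundamental matrix and the intrinsics.
          E = K2.T @ F @ K1

          # Rotational and translational information between the two imagers.
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)

          # Apply that information as rectifying remaps to both views.
          R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K1, dist1, K2, dist2, size, R, t)
          m1x, m1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, size, cv2.CV_32FC1)
          m2x, m2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, size, cv2.CV_32FC1)
          return (cv2.remap(img1, m1x, m1y, cv2.INTER_LINEAR),
                  cv2.remap(img2, m2x, m2y, cv2.INTER_LINEAR))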
  • Publication number: 20120268574
    Abstract: An imager integrated circuit intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the circuit, the circuit being configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, each of the N images corresponding to light rays directed by a portion of the optical system which is different from those directing the rays corresponding to the N-1 other images, including: N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N associated images, means interposed between each of the N subsets of pixels and the inlet face of the circuit, and configured to pass the rays corresponding to the image associated with said subset of pixels and block the other rays.
    Type: Application
    Filed: March 28, 2012
    Publication date: October 25, 2012
    Applicant: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventor: Pierre GIDON
  • Publication number: 20120270600
    Abstract: This invention relates to a case for portable, handheld electronic devices such as those manufactured by Apple Inc., including the iPod touch, iPad and/or iPhone. The case supplements functional properties that are not present on a particular model of such devices by providing an integrated speaker and/or microphone and/or at least one camera, in addition to providing the usual protective and/or cushioning aspects of an ordinary case.
    Type: Application
    Filed: April 25, 2012
    Publication date: October 25, 2012
    Inventor: STEVE TERRY ZELSON
  • Publication number: 20120268572
    Abstract: A 3D video camera is provided. The 3D video camera includes a first camera lens for providing a first sensing signal, a second camera lens for providing a second sensing signal, and an image processing unit for receiving the first sensing signal and the second sensing signal to generate a first eye image and a first comparison image to accordingly generate 3D depth information.
    Type: Application
    Filed: July 26, 2011
    Publication date: October 25, 2012
    Applicant: MSTAR SEMICONDUCTOR, INC.
    Inventor: Kun-Nan Cheng
  • Publication number: 20120268573
    Abstract: An imaging system for the fluorescence-optical visualization of a two-dimensional or three-dimensional object is provided. The imaging system comprises an illumination unit, which is designed and provided for emitting optical radiation in a predetermined wavelength range in order to illuminate the object and excite a fluorescent substance contained in the object, and a capturing unit, which is designed and provided for capturing an optical signal from the region of the object and for splitting the optical signal into a fluorescence signal having a first wavelength range and a signal of visible light having a second wavelength range. The optical capturing unit has an optoelectronic converter having a plurality of partial regions and serving for converting the fluorescence signal into a first electronic data signal and the signal of visible light into a second electronic data signal.
    Type: Application
    Filed: June 8, 2010
    Publication date: October 25, 2012
    Applicant: W.O.M. WORLD OF MEDICINE AG
    Inventors: Karl-Heinz Günter Schönborn, Andreas Bembenek, Jörn Ole Becker, Martin Bock, Andreas Lutz
  • Publication number: 20120268570
    Abstract: A digital cinematographic and projection process provides 3D stereoscopic imagery that is not adversely affected by the standard frame rate of 24 frames per second, the convention in the motion picture industry worldwide. A method for photographing and projecting moving images in three dimensions includes recording a moving image with a first and a second camera simultaneously and interleaving a plurality of frames recorded by the first camera with a plurality of frames recorded by the second camera. The step of interleaving includes retaining odd-numbered frames recorded by the first camera and deleting the even-numbered frames, retaining even-numbered frames recorded by the second camera and deleting the odd-numbered frames, and creating an image sequence by alternating the retained frames from the first and second cameras. (See the sketch after this entry.)
    Type: Application
    Filed: December 23, 2010
    Publication date: October 25, 2012
    Applicant: TRUMBULL VENTURES LLC
    Inventor: Douglas Trumbull
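    Sketch: the interleaving step in plain Python, assuming each camera's frames are given as a list in temporal order (frame numbering is 1-based, as in the abstract); names are illustrative.

      def interleave(frames_cam1, frames_cam2):
          # Retain the odd-numbered frames (1, 3, 5, ...) from the first camera
          # and the even-numbered frames (2, 4, 6, ...) from the second camera.
          odd_from_cam1 = frames_cam1[0::2]
          even_from_cam2 = frames_cam2[1::2]
          # Alternate the retained frames to build the output image sequence.
          sequence = []
          for a, b in zip(odd_from_cam1, even_from_cam2):
              sequence.extend([a, b])
          return sequence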
  • Publication number: 20120268569
    Abstract: An aspect of the invention provides a composite camera system that comprises a first camera including a first imaging unit; a second camera including a second imaging unit; a mount unit configured to detachably mount thereon the first camera and the second camera, wherein scenes captured by the first imaging unit and second imaging unit in a mounted state coincide with each other in vertical position; and a creation unit configured to create a three-dimensional image on the basis of images representing the scenes captured by the first imaging unit and the second imaging unit in the mounted state.
    Type: Application
    Filed: April 16, 2012
    Publication date: October 25, 2012
    Applicant: SANYO ELECTRIC CO., LTD.
    Inventor: Mitsuaki KUROKAWA
  • Publication number: 20120268566
    Abstract: A three-dimensional color image sensor includes color pixels and depth pixels therein. A semiconductor substrate is provided with a depth region therein, which extends adjacent a surface of the semiconductor substrate. A two-dimensional array of spaced-apart color regions is provided within the depth region. Each of the color regions includes a plurality of different color pixels therein (e.g., red, blue and green pixels) and each of the color pixels within each of the spaced-apart color regions is spaced apart from all other color pixels within other color regions.
    Type: Application
    Filed: April 19, 2012
    Publication date: October 25, 2012
    Inventors: Won-Joo Kim, Yoon-Dong Park, Hyoung-Soo Ko
  • Publication number: 20120268571
    Abstract: A multiview face capture system may acquire detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints. A lighting system may illuminate a face with polarized light from multiple directions. The light may be polarized substantially parallel to a reference axis during a parallel polarization mode of operation and substantially perpendicular to the reference axis during a perpendicular polarization mode of operation. Multiple cameras may each capture an image of the face along a materially different optical axis and have a linear polarizer configured to polarize light traveling along its optical axis in a direction that is substantially parallel to the reference axis. A controller may cause each of the cameras to capture an image of the face while the lighting system is in the parallel polarization mode of operation and again while the lighting system is in the perpendicular polarization mode of operation.
    Type: Application
    Filed: April 18, 2012
    Publication date: October 25, 2012
    Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: PAUL E. DEBEVEC, ABHIJEET GHOSH, GRAHAM FYFFE
  • Publication number: 20120268568
    Abstract: A mobile computing device comprising one or more sensors and an electronic display is disclosed herein. The one or more sensors are adapted to determine a distance between the mobile computing device and a mobile computing device user, and are also adapted to determine a position of the mobile computing device relative to the mobile computing device user. The electronic display is adapted to modify visual content on the electronic display relative to a change in at least one of the distance between the mobile computing device and the mobile computing device user, and the position of the mobile computing device relative to the mobile computing device user.
    Type: Application
    Filed: April 19, 2011
    Publication date: October 25, 2012
    Applicant: QUALCOMM INNOVATION CENTER, INC.
    Inventors: Xintian Li, Xuerui Zhang
  • Publication number: 20120262549
    Abstract: A method of generating a predictive picture quality rating is provided. In general, a disparity measurement is made of a three-dimensional image by comparing left and right sub-components of the three-dimensional image. Then the left and right sub-components of the three-dimensional image are combined (fused) into a two-dimensional image, using data from the disparity measurement for the combination. A predictive quality measurement is then generated based on the two-dimensional image, further including quality information from the comparison of the original three-dimensional image's sub-components. (See the sketch after this entry.)
    Type: Application
    Filed: April 15, 2011
    Publication date: October 18, 2012
    Applicant: TEKTRONIX, INC.
    Inventor: KEVIN M. FERGUSON
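    Sketch: one loose Python/OpenCV reading of the rating pipeline, assuming 8-bit grayscale left/right views; the block matcher, the fusion rule and the scoring terms are stand-ins chosen for illustration, not Tektronix's method.

      import cv2
      import numpy as np

      def predictive_quality(left, right):
          # 1) Disparity measurement between the left and right sub-components.
          disp = cv2.StereoBM_create(numDisparities=64, blockSize=15) \
                    .compute(left, right).astype(np.float32) / 16.0

          # 2) Fuse the pair into a 2-D image, shifting right-image samples
          #    onto the left-image grid according to the measured disparity.
          h, w = left.shape
          xs = np.clip(np.arange(w, dtype=np.float32)[None, :] - disp, 0, w - 1)
          ys = np.repeat(np.arange(h, dtype=np.float32)[:, None], w, axis=1)
          right_warped = cv2.remap(right, xs, ys, cv2.INTER_LINEAR)
          fused = (left.astype(np.float32) + right_warped.astype(np.float32)) / 2

          # 3) Score the fused 2-D image and keep a disparity statistic as the
          #    extra quality information about the original 3-D comparison.
          residual = np.abs(left.astype(np.float32) - right_warped)
          score_2d = 1.0 / (1.0 + residual.mean())
          valid = disp[disp > 0]
          disparity_spread = float(valid.std()) if valid.size else 0.0
          return score_2d, disparity_spread, fused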
  • Publication number: 20120262551
    Abstract: The 3D image capture device includes a light-transmitting section with n transmitting areas (where n is an integer and n≥2) that have different transmission wavelength ranges and each of which transmits a light ray falling within a first wavelength range, a solid-state image sensor that includes a photosensitive cell array having a number of unit blocks, and a signal processing section that processes the output signal of the image sensor. Each unit block includes n photosensitive cells, including a first photosensitive cell that outputs a signal representing the quantity of the light ray falling within the first wavelength range. The signal processing section generates at least two image data with parallax by using a signal obtained by multiplying a signal supplied from the first photosensitive cell by a first coefficient, which is a real number that is equal to or greater than zero but less than one.
    Type: Application
    Filed: August 19, 2011
    Publication date: October 18, 2012
    Applicant: PANASONIC CORPORATION
    Inventors: Yasunori Ishii, Masao Hiramoto
  • Publication number: 20120262548
    Abstract: An endoscopic apparatus is provided. The endoscopic apparatus includes a light projection unit configured to selectively project patterned light onto a body part, an imaging unit configured to capture an image of the body part on which shadows corresponding to predefined portions are formed due to the patterned light, and an image processing unit configured to generate an image showing depth information of the body part based on sizes of the shadows formed on the body part. Certain predefined portions of an emission surface of the patterned light may be blocked in a pattern.
    Type: Application
    Filed: October 17, 2011
    Publication date: October 18, 2012
    Inventors: Wonhee Choe, Jae-guyn Lim, Seong-deok Lee
  • Publication number: 20120262550
    Abstract: Measuring three surface sets on an object surface with a measurement device and scanner, each surface set being 3D coordinates of a point on the object surface. The method includes: the device sending a first light beam to the first retroreflector and receiving a second light beam from the first retroreflector, the second light beam being a portion of the first light beam, a scanner processor and a device processor jointly configured to determine the surface sets; selecting the source light pattern and projecting it onto the object to produce the object light pattern; imaging the object light pattern onto a photosensitive array to obtain the image light pattern; obtaining the pixel digital values for the image light pattern; measuring the translational and orientational sets with the device; determining the surface sets corresponding to three non-collinear pattern elements; and saving the surface sets.
    Type: Application
    Filed: April 11, 2012
    Publication date: October 18, 2012
    Applicant: FARO TECHNOLOGIES, INC.
    Inventor: Robert E. Bridges
  • Publication number: 20120262552
    Abstract: A video sharing system is described to annotate and navigate tourist videos. An example video sharing system enables non-linear browsing of multiple videos and enriches the browsing experience with contextual and geographic information.
    Type: Application
    Filed: December 17, 2010
    Publication date: October 18, 2012
    Applicant: Microsoft Corporation
    Inventors: Bo Zhang, Ying-Qing Xu, Bill (Billy) P. Chen, Eyal Ofek, Baining Guo
  • Publication number: 20120262554
    Abstract: A method of remotely viewing a video from a viewpoint selected by the viewer from a continuous segment, including: receiving a recording of a video of a subject recorded using a first depth video camera that records a video comprising a sequence of picture frames and additionally records a depth value for pixels of the picture frames; receiving a recording of a video of the subject recorded using a standard video camera or a second depth video camera positioned to record a video at a viewpoint that differs from the viewpoint of the depth video camera; using the recordings to render a viewable video from the selected viewpoint; and displaying the rendered viewable video to the viewer.
    Type: Application
    Filed: June 12, 2012
    Publication date: October 18, 2012
    Applicant: Technion Research and Development Foundation Ltd.
    Inventors: Craig Gotsman, Alexander Bogomjakov
  • Publication number: 20120262553
    Abstract: A depth image acquiring device is provided, which includes at least one projecting device and at least one image sensing device. The projecting device projects a projection pattern to an object. The image sensing device senses a real image. In addition, the projecting device also serves as a virtual image sensing device. The depth image acquiring device generates a disparity image by matching three sets of dual-images formed by two real images and one virtual image, and generates a depth image according to the disparity image. In addition, the depth image acquiring device also generates a depth image by matching two real images, or a virtual image and a real image without verification.
    Type: Application
    Filed: April 10, 2012
    Publication date: October 18, 2012
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chia-Chen Chen, Wen-Shiou Luo, Tung-Fa Liou, Yu-Tang Li, Chang-Sheng Chu, Shuang-Chao Chung
  • Publication number: 20120257016
    Abstract: In a three-dimensional modeling apparatus, an image obtaining section obtains image sets picked up by a stereoscopic camera. A generating section generates three-dimensional models. A three-dimensional model selecting section selects, from among the generated three-dimensional models, a first three-dimensional model and a second three-dimensional model to be superimposed on the first three-dimensional model. An extracting section extracts first and second feature points from the selected first and second three-dimensional models. A feature-point selecting section selects, from the extracted first and second feature points, feature points having a closer distance to the stereoscopic camera. A parameter obtaining section obtains a transformation parameter for transforming a coordinate of the second three-dimensional model into a coordinate system of the first three-dimensional model. A transforming section transforms the coordinate of the second three-dimensional model into the coordinate system of the first three-dimensional model. (See the sketch after this entry.)
    Type: Application
    Filed: April 5, 2012
    Publication date: October 11, 2012
    Applicant: CASIO COMPUTER CO., LTD.
    Inventors: Mitsuyasu NAKAJIMA, Takashi Yamaha
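    Sketch: the coordinate-transformation step in Python/NumPy, assuming the two models' feature points have already been extracted and matched pairwise; the camera-distance filter and the generic Kabsch rigid-alignment estimate are illustrative, not CASIO's specific parameter computation.

      import numpy as np

      def nearest_indices(depths, keep_ratio=0.5):
          # Indices of the feature points with a closer distance to the camera.
          k = max(3, int(len(depths) * keep_ratio))
          return np.argsort(depths)[:k]

      def transformation_parameters(second_pts, first_pts):
          """Estimate R, t so that R @ p + t maps second-model points (N, 3)
          into the coordinate system of the matched first-model points."""
          src_c = second_pts - second_pts.mean(axis=0)
          dst_c = first_pts - first_pts.mean(axis=0)
          U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                      # guard against reflections
          t = first_pts.mean(axis=0) - R @ second_pts.mean(axis=0)
          return R, t

      def transform(points, R, t):
          # Express second-model coordinates in the first model's system.
          return points @ R.T + t

      # Usage (matched pairs second_pts/first_pts, per-point camera depths):
      #   idx = nearest_indices(depths)
      #   R, t = transformation_parameters(second_pts[idx], first_pts[idx])
      #   aligned = transform(second_pts, R, t)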
  • Publication number: 20120257022
    Abstract: An imaging apparatus includes: an imaging section converting image light impinging thereon through a lens device into an electrical imaging signal; an imaging process section processing the imaging signal output by the imaging section; an output section converting the imaging signal processed by the imaging process section into an image signal in a predetermined format and outputting the image signal; a synchronization terminal for connection with another imaging apparatus; and an imaging control section controlling imaging timing in synchronism with the other imaging apparatus and putting the lens device in the same state of control as that of the other imaging apparatus when communication with the other imaging apparatus can be performed through the synchronization terminal.
    Type: Application
    Filed: March 30, 2012
    Publication date: October 11, 2012
    Inventors: Hidekazu SUTO, Masamiki Kawase, Fumio Sekiya
  • Publication number: 20120257024
    Abstract: Disclosed is a stereoscopic imaging device which answers the need for system compatibility over the entire range from long-distance (telescopic) imaging to close-up imaging, and which can faithfully reproduce stereoscopic images on the display side without adjustment. To that end, a stereoscopic imaging device is disclosed in which the imaging lens optical axes (φL, φR) in an imaging unit provided with imaging lenses and imaging elements (S) are arranged so as to be laterally parallel, and the distance between the optical axes (DL) is set to the interpupillary distance (B) of a human. One reference window (Wref) is defined as a virtual view frame in the image viewfield of said imaging unit.
    Type: Application
    Filed: November 25, 2010
    Publication date: October 11, 2012
    Inventor: Minoru Inaba
  • Publication number: 20120257023
    Abstract: To provide a control system of a stereo imaging device that is able to obtain a stereo image, which can be viewed stereoscopically, even when the relative-position relationship of a pair of imaging units is unknown. A control system of a stereo imaging device includes a pair of imaging units, an error calculation unit and a control unit. The pair of imaging units has a digital or analog degree of freedom so as to be able to control at least yaw, pitch, roll and zoom factor, and can capture video frames using imaging elements. The error calculation unit uses the images captured by the pair of imaging units to calculate a rotation error and zoom error of each imaging unit on the basis of a difference from a predetermined standard convergence model of each imaging unit.
    Type: Application
    Filed: June 20, 2012
    Publication date: October 11, 2012
    Applicant: BI2-Vision CO.
    Inventors: Xiaolin Zhang, Zining Zhen
  • Publication number: 20120257017
    Abstract: Noncontact coordinate measurement. With a 3D image recording unit, a first three-dimensional image of a first area section of the object surface is electronically recorded in a first position and first orientation, the first three-dimensional image being composed of a multiplicity of first pixels, each of which is associated with a piece of depth information. First 3D image coordinates in an image coordinate system are associated with the first pixels. The first position and first orientation of the 3D image recording unit in the object coordinate system are determined by a measuring apparatus coupled to the object coordinate system by means of an optical reference stereocamera measuring system. First 3D object coordinates in the object coordinate system are associated with the first pixels from knowledge of the first 3D image coordinates and of the first position and first orientation of the 3D image recording unit.
    Type: Application
    Filed: May 17, 2012
    Publication date: October 11, 2012
    Applicant: LEICA GEOSYSTEMS AG
    Inventors: Bo PETTERSSON, Knut SIERCKS, Benedikt ZEBHAUSER
  • Publication number: 20120257019
    Abstract: [Object] To maintain perspective consistency with individual objects in an image when displaying captions (caption units) based on an ARIB method in a superimposed manner. [Solution] Pieces of caption data of individual caption units are inserted as pieces of caption sentence data (caption codes) of a caption sentence data group into a caption data stream. A data unit of extended display control (data unit parameter=0x4F) for transmitting display control information is newly defined. In a PES stream of a caption data group, disparity information is inserted into a data unit for transmitting display control information, thereby associating caption sentence data (caption sentence information) with disparity information. On the receiver side, appropriate disparity can be given to caption units that are to be superimposed on a left-eye image and a right-eye image.
    Type: Application
    Filed: October 14, 2011
    Publication date: October 11, 2012
    Inventor: Ikuo Tsukagoshi
  • Publication number: 20120257020
    Abstract: Techniques are provided for determining distance to an object in a depth camera's field of view. The techniques may include raster scanning light over the object and detecting reflected light from the object. One or more distances to the object may be determined based on the reflected image. A 3D mapping of the object may be generated. The distance(s) to the object may be determined based on times-of-flight between transmitting the light from a light source in the camera and receiving the reflected image from the object. Raster scanning the light may include raster scanning a pattern into the field of view. Determining the distance(s) to the object may include determining spatial differences between a reflected image of the pattern that is received at the camera and a reference pattern. (See the sketch after this entry.)
    Type: Application
    Filed: June 19, 2012
    Publication date: October 11, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Dawson Yee, John Lutian
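    Sketch: the time-of-flight relation the abstract relies on, as a small Python helper; the 10 ns example value is illustrative.

      SPEED_OF_LIGHT = 299_792_458.0  # metres per second

      def distance_from_time_of_flight(round_trip_seconds):
          # Light travels to the object and back, so halve the path length.
          return SPEED_OF_LIGHT * round_trip_seconds / 2.0

      # A 10 ns round trip corresponds to roughly 1.5 m.
      print(distance_from_time_of_flight(10e-9))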
  • Publication number: 20120257018
    Abstract: Provided is a stereoscopic display device provided with a stereoscopic display panel and a display controller, the stereoscopic display panel including a lenticular lens, a color filter substrate, a TFT substrate, etc. Unit pixels arranged in a horizontal direction parallel to the direction in which both eyes of the viewer are arranged are alternately used as left-eye pixels and right-eye pixels. The display controller determines, according to temperature information from a temperature sensor, the contraction/expansion of the lens and, via a stereoscopic image generating module, generates 3D image data for driving the display panel in which the amount of disparity in a specific disparity direction is corrected on the basis of parameter information defined by an effective linear expansion coefficient inherent in the stereoscopic display panel, or the like, and the magnitude of the temperature, thereby ensuring a predetermined stereoscopic visual recognition range even when the lens is contracted or expanded.
    Type: Application
    Filed: December 3, 2010
    Publication date: October 11, 2012
    Applicant: NLT Technologies, Ltd.
    Inventors: Koji Shigemura, Di Wu, Michiaki Sakamoto
  • Publication number: 20120257021
    Abstract: A game apparatus includes a CPU, and in a mode of examining a flower based on a captured image, the CPU activates two outward cameras to allow a user to image a flower. The CPU obtains color information, shape information and a size of the imaged flower from the captured image. A shape category is obtained from the shape information, and the search data included in a search database is filtered by that shape category. Then, by comparing the color information, shape information and size of the imaged flower with the filtered search data, a score for the degree of approximation of the color information and scores for the degree of matching of the shape information and size, etc. are obtained. Images of candidate flowers are then presented in descending order of score (similarity level).
    Type: Application
    Filed: August 8, 2011
    Publication date: October 11, 2012
    Applicant: NINTENDO CO., LTD.
    Inventors: Satoshi Kira, Kentaro Nishimura, Shinya Saito, Ken-Ichi Minegishi, Takamitsu Tsuji, Naoshi Suzue
  • Publication number: 20120249745
    Abstract: It is proposed that, on the assumption that the surrounding area forms a known topography, a representation is produced from a form of the topography, the camera position relative to the topography and the image in the form of a virtual representation of the view from an observation point which is at a distance from the camera position. This makes it possible to select an advantageous perspective of objects which are imaged in the image, thus making it possible for an operator to easily identify the position of the objects relative to the camera.
    Type: Application
    Filed: May 18, 2012
    Publication date: October 4, 2012
    Applicant: DIEHL BGT DEFENCE GMBH & CO. KG
    Inventors: FLORIAN GAGEL, JÖRG KUSHAUER
  • Publication number: 20120249732
    Abstract: An image processing apparatus includes a receiving module, first and second processors, a depth information generation module, and an image generation module. The receiving module receives a moving image including images. The first and second processors decode the moving image inputted from the receiving module and generate decoded data for each of the images. The depth information generation module generates, based on the decoded data generated by the first processor, depth information concerning the decoded data. If an image associated with the decoded data generated by the second processor is subsequent to an image associated with the depth information generated by the depth information generation module, the image generation module generates a parallax image by using the decoded data generated by the second processor and depth information concerning an image subsequent to the image associated with the generated depth information.
    Type: Application
    Filed: November 30, 2011
    Publication date: October 4, 2012
    Inventor: Fumitoshi MIZUTANI
  • Publication number: 20120249746
    Abstract: A set of tools in a media composition system for stereoscopic video provides visualizations of the perceived depth field in video clips, including depth maps, depth histograms, time-based depth histogram ribbons and curves displayed in association with a media timeline, and multi-panel displays including views of clips temporally adjacent to a clip being edited. Temporal changes in perceived depth that may cause viewer discomfort are automatically detected, and when they exceed a predetermined threshold, the editor is alerted. Depth grading tools facilitate matching depths in an outgoing clip to those in an incoming clip. Depth grading can be performed automatically upon detection of excessively large or rapid perceived depth changes.
    Type: Application
    Filed: March 28, 2011
    Publication date: October 4, 2012
    Inventors: Katherine H. Cornog, Shailendra Mathur, Stephen McNeill
  • Publication number: 20120249752
    Abstract: An imaging apparatus includes: a first lens group disposed on a subject side of a diaphragm in the vicinity of which two polarizers that polarize light from a subject are disposed, the polarizers being first and second polarizers whose polarization directions are perpendicular to each other; a second lens group disposed on the side of the diaphragm where an imaging device is present, over a photodetection surface of which third and fourth polarizers are disposed, the third and fourth polarizers having polarization directions parallel to the polarization direction of the first and second polarizers, respectively; and an image processor that produces stereoscopic images based on image data produced by converting light incident on the imaging device through the first lens group and the second lens group. The second lens group has positive refracting power, and characteristics of the first lens group and the second lens group satisfy certain conditions.
    Type: Application
    Filed: March 13, 2012
    Publication date: October 4, 2012
    Applicant: Sony Corporation
    Inventor: Tomohiko Baba
  • Publication number: 20120253596
    Abstract: A driver assistance system includes a stereo vision system that includes at least one camera and at least one sensor disposed on or in a vehicle; a roadside marker detection unit configured to receive stereo image data from the stereo vision system, and to detect roadside markers from the stereo image data; and a road path estimation unit configured to estimate a road path in a direction of travel of the vehicle, based on the roadside markers detected by the roadside marker detection unit.
    Type: Application
    Filed: March 22, 2012
    Publication date: October 4, 2012
    Inventors: Faroog IBRAHIM, Michael J. Higgins-Luthman
  • Publication number: 20120249743
    Abstract: A method highlights a depth-of-field (DOF) region of an image and performs additional image processing by using the DOF region. The method includes: obtaining a first pattern image and a second pattern image that are captured by emitting light according to different patterns from an illumination device; detecting a DOF region by using the first pattern image and the second pattern image; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first pattern image and the second pattern image. (See the sketch after this entry.)
    Type: Application
    Filed: April 2, 2012
    Publication date: October 4, 2012
    Applicant: Korea Institute of Science and Technology
    Inventors: Jaewon KIM, Ig Jae KIM, Sang Chul AHN
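    Sketch: a hedged Python/OpenCV reading of the weighting step, assuming two grayscale pattern images of the same scene; using local contrast of their difference as the DOF detector is an assumption made for illustration, not the patented detector.

      import cv2
      import numpy as np

      def highlight_dof(pattern1, pattern2, boost=0.6):
          p1 = pattern1.astype(np.float32)
          p2 = pattern2.astype(np.float32)

          # In-focus regions keep the projected pattern sharp, so local
          # contrast of the difference image stands in for DOF detection.
          diff = np.abs(p1 - p2)
          local = cv2.blur(diff, (15, 15))
          weights = local / (local.max() + 1e-6)       # weights in [0, 1]

          # Apply the weights to the combined image to emphasize the DOF region.
          combined = (p1 + p2) / 2.0
          highlighted = combined * (1.0 + boost * weights)
          return np.clip(highlighted, 0, 255).astype(np.uint8), weights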
  • Publication number: 20120249748
    Abstract: A stereoscopic image pickup apparatus includes an objective optical system of an afocal optical system, which includes two or more lens groups that form a subject as a real image or a virtual image and that are disposed on the same optical axis; a plurality of image pickup optical systems that allow a plurality of subject light beams, which are emitted from different paths of the objective optical system, to be imaged as independent images, respectively, by a plurality of independent lens groups; and a plurality of image pickup devices that are provided in correspondence with the plurality of image pickup optical systems, and that convert the images, which are imaged by the plurality of image pickup optical systems, to image signals.
    Type: Application
    Filed: March 23, 2012
    Publication date: October 4, 2012
    Inventor: Hidetoshi NAGANO
  • Publication number: 20120249739
    Abstract: Provided is a system and method for scanning a target area, including capturing images from onboard a platform for use in producing one or more stereoscopic views. A first set of at least two image sequences of at least two images each, covering the target area or a subsection thereof is captured. As the platform continues to move forward, at least one other set of images covering the same target area or subsection thereof is captured. At least one captured image from each of at least two of the sets may be used in producing a stereoscopic view.
    Type: Application
    Filed: November 21, 2011
    Publication date: October 4, 2012
    Applicant: ELTA SYSTEMS LTD.
    Inventors: Victor GOSTYNSKI, Dror LUBIN
  • Publication number: 20120249747
    Abstract: Systems and methods may provide for determining a one-dimensional (1D) disparity between a plurality of rectified images, and extracting depth information from the plurality of rectified images based at least in part on the 1D disparity. In one example, the 1D disparity is in the horizontal direction and the images are rectified with respect to one another in the vertical direction. (See the sketch after this entry.)
    Type: Application
    Filed: March 30, 2011
    Publication date: October 4, 2012
    Inventors: Ziv Aviv, Omri Govrin
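    Sketch: a brute-force Python illustration of horizontal-only (1D) matching on a rectified grayscale pair, followed by the standard disparity-to-depth relation Z = f·B/d; the block size, search range and SAD cost are illustrative choices, not the patented method.

      import numpy as np

      def scanline_disparity(left, right, max_disp=64, block=7):
          # For rectified images, matches are searched only along the same row.
          h, w = left.shape
          half = block // 2
          L, R = left.astype(np.float32), right.astype(np.float32)
          disp = np.zeros((h, w), dtype=np.float32)
          for y in range(half, h - half):
              for x in range(half + max_disp, w - half):
                  patch = L[y-half:y+half+1, x-half:x+half+1]
                  costs = [np.abs(patch - R[y-half:y+half+1,
                                            x-d-half:x-d+half+1]).sum()
                           for d in range(max_disp)]
                  disp[y, x] = float(np.argmin(costs))   # 1D (horizontal) disparity
          return disp

      def depth_from_disparity(disp, focal_px, baseline_m):
          # Pinhole relation for a rectified pair: Z = f * B / d.
          return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)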
  • Publication number: 20120249740
    Abstract: A three-dimensional image sensor may include a light source module configured to emit at least one light to an object, a sensing circuit configured to polarize a received light that represents the at least one light reflected from the object and configured to convert the polarized light to electrical signals, and a control unit configured to control the light source module and sensing circuit. A camera may include a receiving lens; a sensor module configured to generate depth data, the depth data including depth information of objects based on a received light from the objects; an engine unit configured to generate a depth map of the objects based on the depth data, configured to segment the objects in the depth map, and configured to generate a control signal for controlling the receiving lens based on the segmented objects; and a motor unit configured to control focusing of the receiving lens.
    Type: Application
    Filed: March 28, 2012
    Publication date: October 4, 2012
    Inventors: Tae-Yon LEE, Joon-Ho LEE, Yoon-Dong PARK, Kyoung-Ho HA, Yong-Jei LEE, Kwang-Hyuk BAE, Kyu-Min KYUNG, Tae-Chan KIM
  • Publication number: 20120249749
    Abstract: A method of optical sensing comprising generating optical data associated with an object of interest, generating non-optical data associated with the object of interest, and analyzing the optical and non-optical data.
    Type: Application
    Filed: March 30, 2012
    Publication date: October 4, 2012
    Applicant: ATS AUTOMATION TOOLING SYSTEMS INC.
    Inventors: Jason STAVNITZKY, Ian CAMERON
  • Publication number: 20120249750
    Abstract: A variety of implementations are described. At least one implementation modifies one or more images from a stereo-image pair in order to produce a new image pair that has a different disparity map. The new disparity map satisfies a quality condition that the disparity of the original image pair did not. In one particular implementation, a first image and a second image that form a stereo image pair are accessed. A disparity map is generated for a set of features from the first image that are matched to features in the second image. The set of features is less than all features in the first image. A quality measure is determined based on disparity values in the disparity map. The first image is modified, in response to the determined quality measure, such that disparity for the set of features in the first image is also modified.
    Type: Application
    Filed: December 13, 2010
    Publication date: October 4, 2012
    Applicant: THOMSON LICENSING
    Inventors: Izzat Izzat, Feng Li
  • Publication number: 20120249751
    Abstract: At least one implementation determines whether two cameras are in parallel or are converging, based on an automated analysis of images from the cameras. One particular implementation determines the disparity of a foreground point and a background point. If the signs of the two disparities are the same, then the particular implementation decides that the cameras are in parallel. Otherwise, the particular implementation decides that the two cameras are converging. More generally, various implementations access a first image and a second image that form a stereo image pair. Multiple features are selected that exist in the first image and in the second image. An indicator of depth is determined for each of the multiple features. It is determined whether the first camera and the second camera were arranged in a parallel arrangement or a converging arrangement based on the values of the determined depth indicators. (See the sketch after this entry.)
    Type: Application
    Filed: December 10, 2010
    Publication date: October 4, 2012
    Applicant: THOMSON LICENSING
    Inventors: Tao Zhang, Dong Tian
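    Sketch: the foreground/background sign test from the abstract as a tiny Python function; taking disparity as x_left − x_right for a matched feature point is an assumed sign convention.

      def camera_arrangement(foreground_disparity, background_disparity):
          # With parallel cameras all disparities share one sign; with
          # converging cameras, points in front of and behind the convergence
          # plane take opposite signs.
          if foreground_disparity * background_disparity > 0:
              return "parallel"
          return "converging"

      # Example: foreground at +12 px, background at -3 px -> converging.
      print(camera_arrangement(12.0, -3.0))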
  • Publication number: 20120249738
    Abstract: A depth camera computing device is provided, including a depth camera and a data-holding subsystem holding instructions executable by a logic subsystem. The instructions are configured to receive a raw image from the depth camera, convert the raw image into a processed image according to a weighting function, and output the processed image. The weighting function is configured to vary test light intensity information generated by the depth camera from a native image collected by the depth camera from a calibration scene toward calibration light intensity information of a reference image collected by a high-precision test source from the calibration scene. (See the sketch after this entry.)
    Type: Application
    Filed: March 29, 2011
    Publication date: October 4, 2012
    Applicant: MICROSOFT CORPORATION
    Inventor: Guy Gilboa
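    Sketch: one plausible Python form of the weighting function, assuming the native image and the high-precision reference image were both captured from the same calibration scene; a simple per-pixel gain is an assumption for illustration, not necessarily the patented formulation.

      import numpy as np

      def calibration_weights(native, reference, eps=1e-6):
          # Per-pixel gains that pull the depth camera's native light
          # intensity response toward the reference intensity information.
          return reference.astype(np.float32) / (native.astype(np.float32) + eps)

      def process(raw, weights):
          # Convert a raw image into the processed image using the weights.
          return np.clip(raw.astype(np.float32) * weights, 0, 255).astype(np.uint8)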
  • Publication number: 20120249744
    Abstract: An imaging module includes a matrix of detector elements formed on a single semiconductor substrate and configured to output electrical signals in response to optical radiation that is incident on the detector elements. A filter layer is disposed over the detector elements and includes multiple filter zones overlying different, respective, convex regions of the matrix and having different, respective passbands.
    Type: Application
    Filed: April 3, 2012
    Publication date: October 4, 2012
    Applicant: PRIMESENSE LTD.
    Inventors: Benny Pesach, Erez Sali, Alexander Shpunt
  • Publication number: 20120249742
    Abstract: The invention relates to visualizing freeform surfaces, like NURBS surfaces, from three-dimensional construction data. Virtual beams from a virtual camera are sent out of a virtual image plane into a scene having at least one object and at least one freeform surface. Lighting values are calculated for each point where a beam intersects the freeform surface. The lighting values are then attributed to the pixels associated with the different points of intersection. The freeform surface is defined by two parameters (u, v), and related equations define all points of the surface of the freeform surface. The subdivision of the freeform surface for determining the intersections with the beams based on the two parameters (u, v) is regular, so that the surface fragments form meshes of a two-dimensional grid of the freeform surface in the parameter space.
    Type: Application
    Filed: March 30, 2012
    Publication date: October 4, 2012
    Inventor: Oliver Abert
  • Publication number: 20120249753
    Abstract: A capturing method for a plurality of images with different view-angles and a capturing system using the same are provided. The capturing method for the images with different view-angles includes the following steps. An appearance image of an object is captured by an image capturing unit at a capturing angle. A light reflection area of the appearance image is detected by a detecting unit, and a dimension characteristic of the light reflection area is analyzed by the same. Whether the dimension characteristic of the light reflection area is larger than a first predetermined value is determined. If the dimension characteristic of the light reflection area is larger than the first predetermined value, then the capturing angle is adjusted within a first adjusting range. After the step of adjusting the capturing angle within the first adjusting range is performed, the step of capturing the appearance image is performed again. (See the sketch after this entry.)
    Type: Application
    Filed: February 10, 2012
    Publication date: October 4, 2012
    Inventors: Ya-Hui Tsai, Kuo-Tang Huang, Chin-Kuei Chang, Chun-Lung Chang
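    Sketch: the capture/adjust loop in Python/OpenCV, assuming hypothetical callbacks standing in for the image capturing unit and the angle adjustment, 8-bit grayscale images, and glare detected as near-saturated pixels; the thresholds are illustrative.

      import cv2
      import numpy as np

      def capture_without_glare(capture_fn, set_angle_fn, angles, max_area=2000):
          """capture_fn() returns the appearance image at the current angle;
          set_angle_fn(angle) re-orients the capturing unit; 'angles' spans
          the first adjusting range."""
          img, angle = None, None
          for angle in angles:
              set_angle_fn(angle)
              img = capture_fn()
              # Dimension characteristic of the light reflection area:
              # here, the count of near-saturated pixels.
              _, glare = cv2.threshold(img, 245, 255, cv2.THRESH_BINARY)
              if int(np.count_nonzero(glare)) <= max_area:
                  return img, angle          # reflection area small enough
          return img, angle                  # fall back to the last capture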
  • Publication number: 20120249741
    Abstract: A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving an input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface.
    Type: Application
    Filed: March 29, 2012
    Publication date: October 4, 2012
    Inventors: Giuliano MACIOCCI, Andrew J. EVERITT, Paul MABBUTT, David T. BERRY
  • Publication number: 20120242793
    Abstract: Disclosed are a display device and a method of controlling the same. The display device and the method of controlling the same include a camera capturing a gesture made by a user, a display displaying a stereoscopic image, and a controller controlling presentation of the stereoscopic image in response to a distance between the gesture and the stereoscopic image in a virtual space and an approach direction of the gesture with respect to the stereoscopic image. Accordingly, the presentation of the stereoscopic image can be controlled in response to a distance and an approach direction with respect to the stereoscopic image.
    Type: Application
    Filed: March 21, 2011
    Publication date: September 27, 2012
    Inventors: Soungmin Im, Sangki Kim
  • Publication number: 20120242797
    Abstract: According to one embodiment, a video displaying apparatus includes a separator, a generator and a controller. The separator is configured to separate a video signal for 3D video display into first and second video signals. The generator is configured to generate a first video frame in which a frame of the first video signal is displayed in a first area on a screen, to generate a second video frame in which a frame of the first or second video signal is displayed in a second area different from the first area, to generate a third video frame similar to the first video frame, and to generate a fourth video frame in which a frame of the second or first video signal is displayed in the second area. The controller is configured to sequentially display the first to fourth video frames in this order.
    Type: Application
    Filed: November 18, 2011
    Publication date: September 27, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Kiyomi WATANABE
  • Publication number: 20120242800
    Abstract: Disclosed herein are systems and methods for capturing, detecting and recognizing gestures and mapping them into commands that allow one or many users to interact with electronic games or any electronic device interfaces. Gesture recognition methods, apparatus and systems are disclosed from which application developers can incorporate gesture-to-character inputs into their gaming, learning or similar applications. Also disclosed herein are systems and methods for receiving 3D data reflecting hand, finger or other body-part movements of a user, and determining from that data whether the user has performed gesture commands for controlling electronic devices or computer applications such as games.
    Type: Application
    Filed: March 23, 2012
    Publication date: September 27, 2012
    Inventors: Dan Ionescu, Bogdan Ionescu, Shahidul M. Islam, Cristian Gadea, Viorel Suse
  • Publication number: 20120242796
    Abstract: A Depth Map (DM) can be utilized for many parameter settings involving cameras, camcorders and other devices. Parameters set on the imaging device include the zoom setting, aperture setting and shutter speed setting.
    Type: Application
    Filed: March 25, 2011
    Publication date: September 27, 2012
    Applicant: SONY CORPORATION
    Inventors: Florian Ciurea, Gazi Ali, Alexander Berestov, Chuen-Chien Lee
  • Publication number: 20120242804
    Abstract: According to one embodiment, an image processing apparatus includes an image capturing unit and a timing adjustment unit. The image capturing unit captures a first image and a second image. The timing adjustment unit adjusts the frame timing of the first image and the frame timing of the second image captured by the image capturing unit. The timing adjustment unit makes it possible to delay the frame timing of the second image relative to the frame timing of the first image.
    Type: Application
    Filed: January 30, 2012
    Publication date: September 27, 2012
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Takayuki OGASAHARA
  • Publication number: 20120242787
    Abstract: A monitoring camera for generating a 3-dimensional (3D) image and a method of generating a 3D image using the same are provided. The monitoring camera includes: an imaging unit that is configured to laterally rotate and photograph an object to generate at least two images; and a controller that captures overlapping portions of images generated by the imaging unit, and generates a 3-dimensional (3D) image based on the overlapping portions.
    Type: Application
    Filed: March 23, 2012
    Publication date: September 27, 2012
    Applicant: SAMSUNG TECHWIN CO., LTD.
    Inventor: Jae-yoon OH