Having Two 2d Image Pickup Sensors Representing The Interocular Distance (epo) Patents (Class 348/E13.014)
  • Patent number: 9106923
    Abstract: A three-dimensional (3D) video compressing method and apparatus is disclosed. The 3D video compressing apparatus determines whether motion exists between consecutive frames when the depth of a multi-view video is estimated, performs depth estimation when motion exists, and compresses the 3D video by using a color video motion vector as a depth video motion vector.
    Type: Grant
    Filed: July 19, 2010
    Date of Patent: August 11, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaejoon Lee, Du-Sik Park, Ho Cheon Wey, Il Soon Lim, Seok Lee, Jin Young Lee
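The core idea above, reusing the color stream's motion vectors for the depth stream when motion is present and skipping depth estimation when it is not, can be illustrated with a minimal sketch. The block size, SAD-based no-motion threshold, and exhaustive search below are illustrative assumptions, not the patented method.

```python
import numpy as np

def block_motion_vector(prev, cur, y, x, bs=8, search=4):
    """Exhaustive block matching within +/-search pixels; returns (dy, dx) into prev."""
    block = cur[y:y + bs, x:x + bs].astype(int)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > prev.shape[0] or xx + bs > prev.shape[1]:
                continue
            sad = np.abs(block - prev[yy:yy + bs, xx:xx + bs].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def predict_depth(prev_color, cur_color, prev_depth, bs=8, motion_thresh=200):
    """Predict the current depth frame block by block, reusing color motion vectors."""
    h, w = cur_color.shape
    pred = np.zeros_like(prev_depth)
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            sad0 = np.abs(cur_color[y:y + bs, x:x + bs].astype(int)
                          - prev_color[y:y + bs, x:x + bs].astype(int)).sum()
            if sad0 < motion_thresh:
                dy, dx = 0, 0   # no significant motion: copy the co-located depth block
            else:
                dy, dx = block_motion_vector(prev_color, cur_color, y, x, bs)
            pred[y:y + bs, x:x + bs] = prev_depth[y + dy:y + dy + bs, x + dx:x + dx + bs]
    return pred

# Toy frames: an 8x8 bright square moves 4 pixels right between color frames.
prev_c = np.zeros((64, 64)); prev_c[16:24, 16:24] = 255
cur_c = np.zeros((64, 64)); cur_c[16:24, 20:28] = 255
prev_d = np.full((64, 64), 100.0); prev_d[16:24, 16:24] = 40.0
print(predict_depth(prev_c, cur_c, prev_d).min())  # nearer depth (40.0) follows the moving square
```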
  • Patent number: 8648357
    Abstract: A radiation-emitting device includes a first active semiconductor layer embodied for the emission of electromagnetic radiation and for direct contact with connection electrodes, and a second active semiconductor layer embodied for the emission of electromagnetic radiation and for direct contact with connection electrodes. The first active semiconductor layer and the second active semiconductor layer are arranged in a manner stacked one above another.
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: February 11, 2014
    Assignee: OSRAM Opto Semiconductors GmbH
    Inventor: Siegfried Herrmann
  • Publication number: 20130258062
    Abstract: Provided is a method for generating a 3D stereoscopic image, which includes: generating at least one 3D mesh surface by applying 2D depth map information to a 2D planar image; generating at least one 3D solid object by applying a 3D template model to the 2D planar image; arranging the 3D mesh surface and the 3D solid object on a 3D space and fixing a viewpoint; providing an interface so that cubic effects of the 3D mesh surface and the 3D solid object are correctable on the 3D space, and correcting the cubic effects of the 3D mesh surface and the 3D solid object according to a control value input through the interface; and obtaining a 3D solid image by photographing the corrected 3D mesh surface and 3D solid object with at least two cameras.
    Type: Application
    Filed: June 5, 2012
    Publication date: October 3, 2013
    Applicant: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Junyong NOH, Sangwoo LEE, Young-Hui KIM
  • Patent number: 8520061
    Abstract: A method of zero-D dimming backlights for 3D or multi-view displays using right and left image data. The method receives right and left pixel luminance values for the right and left image data, and remaps the right and left pixel luminance values using a factor based upon an average luminance value or a luminance percentile value, or a modified factor that further includes the absolute difference between left and right pixel luminance values. The factor or modified factor is selectively used to remap particular right and left pixel luminance values based upon a disparity consideration. The method results in power savings by dimming the backlight without perceptible or substantial loss in display brightness.
    Type: Grant
    Filed: December 14, 2009
    Date of Patent: August 27, 2013
    Assignee: 3M Innovative Properties Company
    Inventors: Martin J. Vos, Glenn E. Casner
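A minimal sketch of the global ("0-D") dimming idea described above: derive a single backlight factor from a luminance percentile of the left and right images, boost pixel values by the inverse factor, and widen the factor where the left and right luminances disagree. The function name, percentile choice, and disparity weighting are assumptions for illustration, not 3M's algorithm.

```python
import numpy as np

def dim_backlight(left, right, percentile=95.0, disparity_weight=True):
    """Return (backlight_factor, left_out, right_out); luminances are in [0, 1].

    Percentile-based global dimming: the backlight is scaled by a factor taken
    from the brighter of the two views, and pixel values are boosted by the
    inverse factor so perceived brightness is roughly preserved.
    """
    # Factor from a high luminance percentile of both views (avoids clipping most pixels).
    factor = max(np.percentile(left, percentile), np.percentile(right, percentile))
    factor = float(np.clip(factor, 1e-3, 1.0))

    if disparity_weight:
        # Where left and right differ strongly, be more conservative (raise the factor).
        diff = np.abs(left - right)
        factor = float(np.clip(factor + diff.mean(), factor, 1.0))

    left_out = np.clip(left / factor, 0.0, 1.0)
    right_out = np.clip(right / factor, 0.0, 1.0)
    return factor, left_out, right_out

# Example: a mostly dark stereo pair lets the backlight drop well below full power.
rng = np.random.default_rng(0)
L = rng.uniform(0.0, 0.4, (480, 640))
R = np.clip(L + rng.normal(0, 0.01, L.shape), 0, 1)
bl, Lo, Ro = dim_backlight(L, R)
print(f"backlight at {bl:.2f} of full power")
```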
  • Publication number: 20130182079
    Abstract: An object's position and/or motion in three-dimensional space can be captured. For example, a silhouette of an object as seen from a vantage point can be used to define tangent lines to the object in various planes (“slices”). From the tangent lines, the cross section of the object is approximated using a simple closed curve (e.g., an ellipse). Alternatively, locations of points on an object's surface in a particular slice can also be determined directly, and the object's cross-section in the slice can be approximated by fitting a simple closed curve to the points. Positions and cross sections determined for different slices can be correlated to construct a 3D model of the object, including its position and shape. A succession of images can be analyzed to capture motion of the object.
    Type: Application
    Filed: March 7, 2012
    Publication date: July 18, 2013
    Applicant: OCUSPEC
    Inventor: David Holz
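The alternative mentioned in the abstract, fitting a simple closed curve to surface points found in one slice, can be sketched with a least-squares circle fit (a circle standing in for the ellipse, which needs a slightly longer derivation). This is a generic Kasa fit shown for illustration, not code from the application.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) fit of a circle to 2-D points in one slice.
    Solves x^2 + y^2 + D*x + E*y + F = 0 and returns (cx, cy, r)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Noisy samples on a circle of radius 2 centred at (1, -3).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.stack([1 + 2 * np.cos(t), -3 + 2 * np.sin(t)], axis=1)
pts += np.random.default_rng(1).normal(0, 0.02, pts.shape)
print(fit_circle(pts))   # approximately (1.0, -3.0, 2.0)
```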
  • Publication number: 20130093854
    Abstract: A three dimensional shape measurement apparatus comprising: a projection unit configured to perform a projection operation to a measurement area; a photographing unit configured to photograph a target object in the measurement area undergoing the projection operation; and a measurement unit configured to measure a three dimensional shape of the target object based on the photographed image, wherein the measurement area includes a measurement reference surface serving as a reference for a focus position of a photographing optical system of the photographing unit, and is defined based on a projection range of the projection unit and a photographing range of the photographing unit, and the focus position is set deeper than a position of the measurement reference surface when observed from the photographing unit.
    Type: Application
    Filed: October 1, 2012
    Publication date: April 18, 2013
    Applicant: CANON KABUSHIKI KAISHA
  • Publication number: 20130083168
    Abstract: There is provided a calibration apparatus for a camera module capable of calibrating the difference in optical characteristics between left and right images of a binocular camera module in real time by capturing images of a plurality of rotating test boards. The calibration apparatus of a camera module includes: a test unit including two or more mutually connected test boards, the test boards having images captured by a camera module and rotating at a pre-set angle; and a calibration unit receiving the images of the test boards captured by the camera module and calibrating optical characteristics thereof.
    Type: Application
    Filed: December 29, 2011
    Publication date: April 4, 2013
    Applicant: SAMSUNG ELECTRO-MECHANICS CO., LTD.
    Inventors: Joo Hyun KIM, Jagarlamudi Veera Venkata PRASAD, Nagaraj AVINASH, Soon Seok KANG
  • Publication number: 20130033583
    Abstract: An image display device and controlling method thereof are disclosed, by which a 3D stereoscopic image is more comfortably viewed in consideration of the viewing environment. In a mobile terminal including a display unit configured to variably display a 2D (2-dimensional) image and a 3D stereoscopic image of a parallax barrier type, the present invention includes determining a play mode of a source image including a left eye image and a right eye image for implementation of the 3D stereoscopic image, if the determined play mode is a 3D play mode, determining a brightness of the source image, and enhancing an output brightness of the source image in accordance with the determined brightness of the source image.
    Type: Application
    Filed: June 27, 2012
    Publication date: February 7, 2013
    Applicant: LG ELECTRONICS INC.
    Inventors: Namsu LEE, Hyunghoon OH
  • Publication number: 20130027516
    Abstract: Various configurations of cameras and camera elements such as CCDs or the like are disclosed for use in three-dimensional imaging of interior spaces based upon distance measurements.
    Type: Application
    Filed: July 27, 2012
    Publication date: January 31, 2013
    Inventors: Douglas P. Hart, Federico Frigerio, Douglas M. Johnston, Manas C. Menon, Daniel Vlasic
  • Publication number: 20130021443
    Abstract: A camera system including: a substrate having a coding pattern printed thereon and a handheld digital camera device. The camera device includes: a digital camera unit having a first image sensor for capturing images and a color display for displaying captured images to a user; an integral processor configured for: controlling operation of the first image sensor and color display; decoding an imaged coding pattern printed on a substrate, the printed coding pattern employing Reed-Solomon encoding; and performing an action in the handheld digital camera device based on the decoded coding pattern. The decoding includes the steps of: detecting target structures defining the extent of the data area; determining the data area using the detected target structures; and Reed-Solomon decoding the coding pattern contained in the determined data area.
    Type: Application
    Filed: September 15, 2012
    Publication date: January 24, 2013
    Inventor: Kia Silverbrook
  • Publication number: 20130016187
    Abstract: A method and apparatus for reducing convergence-accommodation conflict. The method includes estimating disparities between images from different lenses, analyzing the estimated disparities, selecting a point of convergence, determining the amount of shift corresponding to the selected convergence point, and adjusting the disparity to maintain its value below a threshold.
    Type: Application
    Filed: July 16, 2012
    Publication date: January 17, 2013
    Applicant: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Buyue Zhang, Aziz Umit Batur
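A minimal sketch of the convergence step described above, assuming per-feature horizontal disparities have already been estimated: converge on the median disparity, then verify that the residual disparities stay within a threshold. Function and parameter names are hypothetical.

```python
import numpy as np

def adjust_convergence(disparities, max_abs_disparity=20):
    """Given per-feature horizontal disparities in pixels, pick a convergence point
    and return (shift, residual disparities, within-budget flag).

    Converging on the median disparity zeroes the disparity of the chosen point;
    if the remaining disparities exceed the budget, further measures (cropping,
    reducing the baseline) would be needed and are not shown here.
    """
    d = np.asarray(disparities, dtype=float)
    shift = np.median(d)                  # chosen point of convergence
    residual = d - shift                  # disparities after shifting one view
    ok = np.abs(residual).max() <= max_abs_disparity
    return shift, residual, ok

shift, residual, ok = adjust_convergence([5, 12, 30, 42, 18])
print(shift, residual, ok)   # 18.0 [-13. -6. 12. 24. 0.] False -> budget exceeded
```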
  • Publication number: 20120314035
    Abstract: A human-perspective stereoscopic camera is an apparatus that is used to capture stereoscopic viewing material that is from the perspective of one person's eyes. The apparatus mainly comprises a left telescopic tube, a right telescopic tube, a fixed pivoting mount, a lateral movement mechanism, and a platform with a channel as a support structure. The left telescopic tube and the right telescopic tube each comprise a first cylinder, a second cylinder, and an optics assembly. The first cylinder and the second cylinder are the means for the telescopic movement. The optics assembly comprises a lens, a digital single-lens reflex camera, a camera mount, a plate, and a pole.
    Type: Application
    Filed: June 13, 2012
    Publication date: December 13, 2012
    Inventor: John Peter Hall
  • Publication number: 20120293631
    Abstract: Example methods are disclosed for generating an image of a rodent in an arena. They include generating two pictures of the rodent with two spaced-apart cameras, where both cameras capture the rodent and/or the arena from above, and generating a three-dimensional vertical profile of the rodent from the pictures. An example method also includes storing the vertical profile as an image of the rodent.
    Type: Application
    Filed: May 18, 2012
    Publication date: November 22, 2012
    Inventors: Stephan Schwarz, Christian Gutzen
  • Publication number: 20120287248
    Abstract: A specimen for measuring a material under multiple strains and strain rates. The specimen including a body having first and second ends and a gage region disposed between the first and second ends, wherein the body has a central, longitudinal axis passing through the first and second ends. The gage region includes a first gage section and a second gage section, wherein the first gage section defines a first cross-sectional area that is defined by a first plane that extends through the first gage section and is perpendicular to the central, longitudinal axis. The second gage section defines a second cross-sectional area that is defined by a second plane that extends through the second gage section and is perpendicular to the central, longitudinal axis and wherein the first cross-sectional area is different in size than the second cross-sectional area.
    Type: Application
    Filed: May 11, 2012
    Publication date: November 15, 2012
    Inventors: Donald L. Erdman III, Vlastimil Kunc, Srdjan Simunovic, Yanli Wang
  • Publication number: 20120268570
    Abstract: A digital cinematographic and projection process that provides a means of 3D stereoscopic imagery that is not adversely affected by the standard frame rate of 24 frames per second, as is the convention in the motion picture industry worldwide. A method for photographing and projecting moving images in three dimensions includes recording a moving image with a first and a second camera simultaneously and interleaving a plurality of frames recorded by the first camera with a plurality of frames recorded by the second camera. The step of interleaving includes retaining odd numbered frames recorded by the first camera and deleting the even numbered frames, retaining even numbered frames recorded by the second camera and deleting the odd numbered frames, and creating an image sequence by alternating the retained images from the first and second camera.
    Type: Application
    Filed: December 23, 2010
    Publication date: October 25, 2012
    Applicant: TRUMBULL VENTURES LLC
    Inventor: Douglas Trumbull
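The interleaving rule in the abstract is simple enough to state directly in code: keep odd-numbered frames from the first camera, even-numbered frames from the second, and alternate them. A sketch with 1-based frame numbering assumed:

```python
def interleave_stereo(cam1_frames, cam2_frames):
    """Keep odd-numbered frames from camera 1 and even-numbered frames from
    camera 2 (1-based numbering) and alternate them into one sequence."""
    out = []
    for i, (f1, f2) in enumerate(zip(cam1_frames, cam2_frames), start=1):
        out.append(f1 if i % 2 == 1 else f2)
    return out

# Frames labelled by (camera, index) to show the resulting pattern.
cam1 = [("L", i) for i in range(1, 9)]
cam2 = [("R", i) for i in range(1, 9)]
print(interleave_stereo(cam1, cam2))
# [('L', 1), ('R', 2), ('L', 3), ('R', 4), ...]
```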
  • Publication number: 20120257022
    Abstract: An imaging apparatus includes: an imaging section converting image light impinging thereon through a lens device into an electrical imaging signal; an imaging process section processing the imaging signal output by the imaging section; an output section converting the imaging signal processed by the imaging process section into an image signal in a predetermined format and outputting the image signal; a synchronization terminal for connection with another imaging apparatus; and an imaging control section controlling imaging at a timing synchronized with the other imaging apparatus and putting the lens device in the same state of control as that of the other imaging apparatus when communication with the other imaging apparatus can be performed through the synchronization terminal.
    Type: Application
    Filed: March 30, 2012
    Publication date: October 11, 2012
    Inventors: Hidekazu SUTO, Masamiki Kawase, Fumio Sekiya
  • Publication number: 20120236127
    Abstract: A processor configured to: receive respective image data, representative of images of the same subject scene, from two or more image capture sources spaced apart at a particular predetermined distance; identify corresponding features from the respective image data; determine the change in position of the identified features represented in the respective image data; and identify the depth-order of the identified features according to their determined relative change in position, to allow for depth-order display of the identified features.
    Type: Application
    Filed: December 4, 2009
    Publication date: September 20, 2012
    Applicant: NOKIA CORPORATION
    Inventors: Pasi Ojala, Radu Ciprian Bilcu
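A minimal sketch of depth ordering from the relative shift of matched features, assuming parallel cameras so that a larger horizontal shift means a nearer feature. Feature matching itself is taken as given; all names are hypothetical.

```python
def depth_order(features_left, features_right):
    """Given matched feature positions (x, y) in two views captured a known
    baseline apart, order features from nearest to farthest by the magnitude
    of their horizontal shift (larger shift = nearer, for parallel cameras)."""
    shifts = []
    for name in features_left:
        dx = features_left[name][0] - features_right[name][0]
        shifts.append((abs(dx), name))
    return [name for _, name in sorted(shifts, reverse=True)]

left = {"cup": (320, 200), "lamp": (500, 120), "wall": (100, 300)}
right = {"cup": (290, 200), "lamp": (488, 120), "wall": (97, 300)}
print(depth_order(left, right))   # ['cup', 'lamp', 'wall']
```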
  • Publication number: 20120236126
    Abstract: A conventional stereo image pickup device forms an appropriate stereo image by changing the stereo base in accordance with the distance to a subject, but fails to select an appropriate focal length or stereo base when it uses only the subject distance without obtaining the subject size. A stereo image pickup device (1000) estimates the size of a subject in accordance with the imaging mode set as in a typical two-dimensional camera, estimates the subject distance using the estimated subject size and the focal length, calculates and sets the stereo base that achieves an optimum disparity at that subject distance, and aligns the two imaging units (the first imaging unit (103) and the second imaging unit (104)). As a result, the stereo image pickup device forms a stereo image having an appropriate stereoscopic effect.
    Type: Application
    Filed: November 18, 2010
    Publication date: September 20, 2012
    Inventors: Kenjiro Tsuda, Hiroaki Shimazaki, Tatsuro Juri, Hiromichi Ono
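The geometry behind this abstract can be sketched with a pinhole model: estimate the subject distance from an assumed subject size and the focal length, then choose the stereo base that yields a target on-sensor disparity at that distance. The numbers and function names are illustrative assumptions, not the device's actual computation.

```python
def estimate_distance(real_height_m, image_height_mm, focal_length_mm):
    """Pinhole model: distance = focal length * real size / image size (result in metres)."""
    return focal_length_mm * (real_height_m * 1000.0) / image_height_mm / 1000.0

def stereo_base_for_disparity(distance_m, focal_length_mm, target_disparity_mm):
    """On-sensor disparity is roughly f * B / Z, so B = d * Z / f (result in millimetres)."""
    return target_disparity_mm * (distance_m * 1000.0) / focal_length_mm

# Portrait mode: assume a ~1.7 m tall subject filling ~20 mm of the sensor at f = 50 mm.
Z = estimate_distance(real_height_m=1.7, image_height_mm=20.0, focal_length_mm=50.0)
B = stereo_base_for_disparity(Z, focal_length_mm=50.0, target_disparity_mm=1.0)
print(f"subject distance ~{Z:.1f} m, stereo base ~{B:.0f} mm")   # ~4.2 m, ~85 mm
```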
  • Publication number: 20120194649
    Abstract: A system for three-dimensional visualization of an object in a scattering medium includes a sensor for receiving light from the object in the scattering medium and a computing device coupled to the sensor and receiving a plurality of elemental images of the object from the sensor. The computing device causes the elemental images to be magnified through a virtual pin-hole array to create an overlapping pattern of magnified elemental images. The computing device also averages overlapping portions of the elemental images to form an integrated image.
    Type: Application
    Filed: July 30, 2010
    Publication date: August 2, 2012
    Applicant: UNIVERSITY OF CONNECTICUT
    Inventors: Bahram Javidi, Inkyu Moon, Robert T. Schulein, Myungjin Cho, Cuong Manh Do
  • Publication number: 20120188345
    Abstract: A method for streaming and viewing a user's video and audio experiences includes removably mounting an image recording device in close proximity to a user's eyes such that the image recording device is operable to record the user's visual and audio experiences. A real time signal is created by the image recording device, including at least one of video footage, still images, or audio captured by the image recording device. The real time signal is streamed from the image recording device to a server using at least one communications network. The real time signal is transmitted from the server to a remote communication device for producing an output that is perceptible to a viewer so that said viewer can experience the user's environment and the user's relationship to the environment.
    Type: Application
    Filed: January 25, 2012
    Publication date: July 26, 2012
    Applicant: PAIRASIGHT, INC.
    Inventor: Christopher A. Salow
  • Publication number: 20120188346
    Abstract: A monitoring device for monitoring an entry or exit area of an access opening of a vehicle to a building component having an optical camera system including a first camera for providing a first image and a second camera for providing a second image, so that a stereo image can be generated from the first and the second image. The monitoring device also includes an analysis unit, such that at least one object in the entry or exit area can be detected from the stereo image and/or a position of the building component relative to at least one part of the vehicle can be determined from the stereo image for monitoring an entry or exit area.
    Type: Application
    Filed: August 17, 2010
    Publication date: July 26, 2012
    Applicant: KNORR-BREMSE SYSTEME FÜR SCHIENENFAHRZEUGE GMBH
    Inventor: Andreas Schnabl
  • Publication number: 20120188335
    Abstract: A plurality of video input units generate video frames and provide shooting characteristics. A 3D video frame generator creates a 3D video frame by combining a plurality of video frames, which are provided from the plurality of video input units, respectively, and provides 3D video frame composition information indicating a composition type of the plurality of video frames included in the 3D video frame, and resolution control information indicating adjustment/non-adjustment of resolutions of the video frames. A 3D video frame encoder outputs an encoded 3D video stream by encoding the 3D video frame provided from the 3D video frame generator. A composition information checker checks 3D video composition information including the shooting information, the 3D video frame composition information, and the resolution control information. A 3D video data generator generates 3D video data by combining the 3D video composition information and the encoded 3D video stream.
    Type: Application
    Filed: January 18, 2012
    Publication date: July 26, 2012
    Inventors: Gun III LEE, Kwang-Cheol Choi, Jae-Yeon Song, Seo-Young Hwang
  • Publication number: 20120182399
    Abstract: A method and arrangement for estimating 3D models in a street environment using a stereo sensor technique. The sensors are arranged in at least one pair mounted on a bracket, with each pair of sensors positioned in a common plane. The sensors of each pair are positioned based upon contrast information such that low levels of contrast in an image plane are avoided. The pairs of sensors are positioned relative to an essentially horizontal plane of the bracket such that the sensors of a pair are horizontally spaced from each other, with one sensor above the horizontal plane of the bracket and the other below it.
    Type: Application
    Filed: June 30, 2009
    Publication date: July 19, 2012
    Applicant: SAAB AB
    Inventors: Ingmar Andersson, Folke Isaksson, Leif Haglund, Johan Borg, Johan Bejeryd
  • Publication number: 20120182398
    Abstract: Provided are an apparatus for obtaining status information of a crystalline lens and equipment including the same. The apparatus generates a reference light and directs the reference light to be perpendicularly incident to the crystalline lens. At least one light receiving unit that is disposed beyond a visual field of the eyeball is configured to directly receive scattered lights generated when the reference light incident from the light source unit to the crystalline lens is scattered against the crystalline lens. The apparatus calculates thickness information of the crystalline lens using information about the scattered lights received by the at least one light receiving unit.
    Type: Application
    Filed: December 21, 2011
    Publication date: July 19, 2012
    Inventors: Dae-Shik Seo, Byoung-Yong Kim
  • Publication number: 20120162385
    Abstract: Disclosed herein are an apparatus and method for acquiring 3D depth information. The apparatus includes a pattern projection unit, an image acquisition unit, and an operation unit. The pattern projection unit projects light, radiated by an infrared light source, into a space in a form of a pattern. The image acquisition unit acquires an image corresponding to the pattern using at least one camera. The operation unit extracts a pattern from the image, analyzes results of the extraction, and calculates information about a 3D distance between objects existing in the space.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 28, 2012
    Inventors: Ji-Young PARK, Jae-Ho Lee, Seung-Ki Hong, Hee-Kwon Kim, Seung-Woo Nam
  • Publication number: 20120162386
    Abstract: Disclosed herein are an apparatus and method for correcting an error in a stereoscopic image. The apparatus includes two cameras, a region extraction unit, a phase difference calculation unit, an error value extraction unit, and an error analysis unit. The cameras capture the left and right images forming the stereoscopic image. The region extraction unit extracts the main regions of interest from each of the images. The phase difference calculation unit calculates the difference in phase between the main regions of interest of the left image and those of the right image. The error value extraction unit extracts the value of a camera disposition error, corresponding to the locations of the two cameras, based on the difference in phase. The error analysis unit determines whether the value of the camera disposition error is within a set range, and corrects the error in the stereoscopic image.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 28, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hye-Sun KIM, Yun-Ji BAN
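A minimal sketch of the error check described above, assuming the corresponding regions of interest have already been matched and reduced to (x, y) centres: measure the mean vertical offset between left and right ROIs and test it against a set range. The function names and the simple correction step are hypothetical.

```python
def check_alignment_error(left_rois, right_rois, allowed_vertical_px=2.0):
    """Estimate a camera-disposition error as the mean vertical offset between
    corresponding ROI centres and report whether it falls within the allowed range."""
    offsets = [ly - ry for (_, ly), (_, ry) in zip(left_rois, right_rois)]
    error = sum(offsets) / len(offsets)
    return error, abs(error) <= allowed_vertical_px

def correct_right_rois(right_rois, error):
    """Shift the right image's ROI centres vertically by the measured error."""
    return [(x, y + error) for (x, y) in right_rois]

err, ok = check_alignment_error([(100, 50), (200, 80)], [(90, 46), (188, 77)])
print(err, ok)   # 3.5 False -> correction needed
```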
  • Publication number: 20120149432
    Abstract: Methods are disclosed for storing on a storage or memory medium, retrieving, and displaying multiple images in a registered manner, the images having been recorded concurrently. The images may comprise at least two video programs. A camera system for recording multiple concurrent images is also disclosed. Lenses and corresponding image sensors are calibrated to have associated settings for recording multiple images that are substantially registered. A registered image may be displayed on a single display or on multiple displays. A camera for recording and displaying registered multiple images may be part of a mobile phone.
    Type: Application
    Filed: February 20, 2012
    Publication date: June 14, 2012
    Inventor: Peter Lablans
  • Publication number: 20120140037
    Abstract: A motionless adaptive focus stereoscopic scene capture apparatus employing tunable liquid crystal lenses is provided. The apparatus includes at least two image sensors preferably fabricated as a monolithic stereo image capture component and at least two corresponding tunable liquid crystal lenses preferably fabricated as a monolithic focus adjustment component. Using a variable focus tunable liquid crystal lens at each aperture stop provides constant magnification focus control. Controlled spatial variance of a spatially variant electric field applied to the liquid crystal of each tunable liquid crystal lens provides optical axis shift enabling registration between stereo images. A controller implements coupled auto-focusing methods employing multiple focus scores derived from at least two camera image sensors and providing multiple tunable liquid crystal lens drive signals for synchronous focus acquisition of a three dimensional scene.
    Type: Application
    Filed: December 5, 2011
    Publication date: June 7, 2012
    Applicant: LENSVECTOR, INC.
    Inventors: Tigran GALSTIAN, Peter P. CLARK, Suresh VENKATRAMAN
  • Publication number: 20120127277
    Abstract: Embodiments of the invention may include a scanning device to scan three-dimensional objects. The scanning device may generate a three-dimensional model. The scanning device may also generate a texture map for the three-dimensional model. Techniques utilized to generate the model or texture map may include tracking scanner position, generating depth maps of the object, and generating a composite image of the surface of the object.
    Type: Application
    Filed: January 30, 2012
    Publication date: May 24, 2012
    Applicant: NextPat Limited
    Inventors: Mark S. Knighton, Peter J. DeLaurentis, William D. McKinley, David S. Agabra
  • Publication number: 20120086783
    Abstract: A method and apparatus is disclosed for scanning a body. The system comprises a processor and a range camera capable of capturing at least a first set of depth images of the body rotated to 0 degrees and at least a second set of depth images of the body rotated to x degrees, wherein x is >0 degrees, and x<360 degrees. A first set of computer instructions executable on the processor is capable of calculating a first set of three dimensional points from the first set of depth images and a second set of three dimensional points from the second set of depth images. A second set of computer instructions executable on the processor is capable of rotating and translating the first and second set of three dimensional points into a final set of three dimensional points. A third set of computer instructions executable on the processor is capable of creating a three dimensional mesh from the final set of three dimensional points.
    Type: Application
    Filed: June 13, 2011
    Publication date: April 12, 2012
    Inventor: Raj Sareen
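A minimal sketch of the rotate-and-merge step, assuming the body turns about a known vertical axis by a known angle x: rotate the second point set back by x degrees and concatenate it with the first. The axis convention (y up) and the toy data are assumptions for illustration.

```python
import numpy as np

def merge_rotated_scans(points_0deg, points_xdeg, x_degrees, center=(0.0, 0.0)):
    """Rotate the second (N, 3) point set back by x degrees about a vertical (y)
    axis through `center` (x, z), then concatenate it with the first set."""
    theta = np.radians(-x_degrees)               # undo the body's rotation
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    cx, cz = center
    offset = np.array([cx, 0.0, cz])
    aligned = (points_xdeg - offset) @ R.T + offset
    return np.vstack([points_0deg, aligned])

# Toy data: the same half-ring of points, seen at 0 degrees and after a 90-degree turn.
t = np.linspace(0, np.pi, 50)
ring = np.stack([np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)   # view at 0 degrees
Ry90 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
view90 = ring @ Ry90.T                                              # same ring after a 90-degree turn
cloud = merge_rotated_scans(ring, view90, x_degrees=90)
print(cloud.shape, np.allclose(cloud[:50], cloud[50:]))             # (100, 3) True
```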
  • Publication number: 20120002008
    Abstract: The current invention presented here is an apparatus for securely managed recording, transformation, storage, and projection of digital or quantum media images and video by a server transforming laser light into a standalone or reflected audio video spatial point targeted display in free space, or reflected surface, for virtual or non virtual viewing and listening. The method and mechanism presented allows any individual or groups to view and listen to content in any space managed by an Illumination Transformer Audio Video Manager Interactive Server Transmitter (ITAVMIST) server, GSense H3DVARVP-IL or high resolution devices, goggles, glasses or without devices, and be able to view, video and listen to sound, in binaural stereo and 2D 3D video.
    Type: Application
    Filed: July 4, 2010
    Publication date: January 5, 2012
    Inventors: David Valin, Alex Socolof
  • Publication number: 20110304704
    Abstract: An imaging apparatus includes an audio/video capture mechanism having an audio/video processing unit, an image synchronization unit, a first imaging unit, a second imaging unit and a sound input/output unit; and an output mechanism movably installed on a side of the audio/video capture mechanism, and including a display unit installed on a surface of the output mechanism and coupled to the audio/video capture mechanism, and the second imaging unit is installed on another surface of the output mechanism, such that when the imaging apparatus is used for taking a picture, the first and second imaging units are used for capturing images from a same viewing angle, and the image synchronization unit is provided for synchronously combining the images to form a three-dimensional (3D) image, so as to achieve the effects of capturing and playing the 3D image.
    Type: Application
    Filed: June 9, 2010
    Publication date: December 15, 2011
    Applicant: DIGILIFE TECHNOLOGIES CO., LTD.
    Inventor: Chen-Ping Yang
  • Publication number: 20110279652
    Abstract: The invention presents a method for comparing the similarity between image patches comprising the steps of receiving from at least two sources at least two image patches, wherein each source supplies an image patch, comparing the received image patches by extracting a number of corresponding subpart pairs from each image patch, calculating a normalized local similarity score between all corresponding subpart pairs, calculating a total matching score by integrating the local similarity scores of all corresponding subpart pairs, using the total matching score as an indicator of image patch similarity, and determining corresponding similar image patches based on the total matching score.
    Type: Application
    Filed: May 6, 2011
    Publication date: November 17, 2011
    Applicant: HONDA RESEARCH INSTITUTE EUROPE GMBH
    Inventors: Julian EGGERT, Nils EINECKE
  • Publication number: 20110090307
    Abstract: The invention relates to a method for live construction of a video sequence comprising a modelled 3D object, the method comprising the following steps: pre-calculating data representative of at least one first image of a three-dimensional environment and a first item of associated depth information, live calculation of: data representative of at least one second image representing said modelled object on which is mapped a current image of a live video stream, and a second item of depth information associated with said at least one second image, and composing live said sequence by combining said at least one first image and said at least one second image according to said first and second items of depth information.
    Type: Application
    Filed: June 29, 2009
    Publication date: April 21, 2011
    Inventors: Jean-Eudes Marvie, Gerard Briand
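The depth-based combination in the last step of the abstract can be sketched as a per-pixel z-test: for each pixel, keep the image whose associated depth is nearer to the camera. This generic depth compositing is an illustration, not the application's pipeline.

```python
import numpy as np

def depth_composite(image_a, depth_a, image_b, depth_b):
    """Per-pixel composition: keep the pixel whose depth is nearer to the camera."""
    nearer_a = depth_a <= depth_b
    out = np.where(nearer_a[..., None], image_a, image_b)
    out_depth = np.where(nearer_a, depth_a, depth_b)
    return out, out_depth

# Pre-rendered background (far) vs. a live object that is nearer only in the centre.
h, w = 4, 6
bg = np.zeros((h, w, 3)); bg_depth = np.full((h, w), 10.0)
fg = np.ones((h, w, 3));  fg_depth = np.full((h, w), 20.0); fg_depth[1:3, 2:4] = 2.0
img, depth = depth_composite(bg, bg_depth, fg, fg_depth)
print(img[..., 0])   # 1s only where the live object is nearer than the background
```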
  • Patent number: 7917332
    Abstract: A system and method of controlling a sensor to sense one target from a plurality of targets includes predicting states of the targets. A set of probability distributions is generated. Each probability distribution in the set represents a setting or settings of at least one control parameter of the sensor. An expected information gain value for each control parameter in the set is calculated. The information gain value represents an expected quality of a measurement of one of the targets taken by the sensor if controlled according to the control parameter, based on the predicted state of the target. Updating the set of probability distributions takes place to identify the sensor control parameters that maximise the expected information gain value. The sensor is then controlled in accordance with the maximising control parameters.
    Type: Grant
    Filed: November 12, 2007
    Date of Patent: March 29, 2011
    Assignee: BAE Systems PLC
    Inventors: Antony Waldock, David Nicholson
  • Publication number: 20080018731
    Abstract: A stereoscopic parameter embedding apparatus comprising: a video image input unit operable to input a plurality of pieces of video image data to be processed sequentially; a parameter input unit operable to input stereoscopic parameters for converting a video image into a stereoscopic image, each of which is respectively associated with each of the plurality of video image data; a converter operable to convert each of the input stereoscopic parameters into binary data; and an embedding unit operable to embed bar-code image data corresponding to the binary data in each of the plurality of pieces of video image data.
    Type: Application
    Filed: February 18, 2005
    Publication date: January 24, 2008
    Inventor: Kazunari Era