Stereoscopic Image Signal Generation (epo) Patents (Class 348/E13.003)

  • Patent number: 9473762
    Abstract: A 3D camera (10) in accordance with the stereoscopic principle for detecting depth maps (52) of a monitored zone (12) is set forth which has at least two camera modules (14a-b), each having an image sensor (16a-b), in mutually offset perspectives for taking two-dimensional starting images (42), as well as a stereoscopic unit (28) configured to apply a stereoscopic algorithm for generating a depth map (46, 50), in which mutually associated part regions are recognized in two two-dimensional images taken from offset perspectives within a disparity zone and their distance is calculated with reference to the disparity.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: October 18, 2016
    Assignee: SICK AG
    Inventors: Volker Zierke, Matthias Heinz
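The disparity-to-depth step this abstract describes follows the standard stereo relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch, assuming illustrative focal length, baseline, and disparity values not taken from the patent:

```python
def disparity_to_depth(disparity_map, focal_length_px, baseline_m):
    """Convert a disparity map to a depth map via Z = f * B / d.

    Pixels with zero disparity (no stereo match) get depth None.
    """
    return [
        [focal_length_px * baseline_m / d if d > 0 else None for d in row]
        for row in disparity_map
    ]

# Example: 2x2 disparity map (pixels), f = 100 px, B = 0.5 m
depths = disparity_to_depth([[25.0, 10.0], [0.0, 50.0]], 100.0, 0.5)
```

Larger disparities map to nearer depths, which is why the abstract's "disparity zone" bounds the depth range the camera can report.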
  • Patent number: 9001115
    Abstract: A method of using a computer to generate virtual autostereoscopic images from a three-dimensional digital data set is disclosed. The method includes establishing a first point of view and field of view of a subject volume including a region of interest. The method includes reading at least one scene parameter associated with the field of view of the subject volume. The method includes determining a second point of view offset some distance and along some vector from the first point of view based on a value derived from at least one scene parameter. The second point of view at least partially overlaps the first field of view. The first and second points of view each create a view plane with a view orthogonal to the subject volume.
    Type: Grant
    Filed: January 21, 2010
    Date of Patent: April 7, 2015
    Assignee: Vision III Imaging, Inc.
    Inventors: Christopher A. Mayhew, Craig M. Mayhew
  • Patent number: 8977037
    Abstract: Disclosed herein are methods and systems for creating stereoscopic images. A left-eye view image for a stereoscopic image and an imperfect right-eye view image may be received. A smooth optical flow may be generated from the received image representing the left-eye view to the imperfect image representing the right-eye view to produce a first candidate right-eye view. Objects are identified in the imperfect image, and merged into the first candidate right-eye view image to create a right-eye view image. A stereoscopic image is created from the left-eye view image and right-eye view image.
    Type: Grant
    Filed: August 31, 2012
    Date of Patent: March 10, 2015
    Assignee: Google Inc.
    Inventors: Hui Fang, Ming Yin
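The flow-based view synthesis in this abstract can be illustrated with a toy backward warp over one scanline; estimating the smooth optical flow itself is not shown, and the function name and integer flow values are assumptions for illustration only:

```python
def warp_with_flow(row, flow):
    """Backward-warp a 1-D scanline by a per-pixel horizontal flow:
    output[x] samples the input at x + flow[x], clamped to the bounds."""
    w = len(row)
    return [row[min(max(x + flow[x], 0), w - 1)] for x in range(w)]

# A uniform flow of +1 shifts content one pixel to the left in the output.
candidate_right = warp_with_flow([10, 20, 30, 40], [1, 1, 1, 1])
```

The patent's merge step would then paste separately identified objects over such a warped candidate to repair regions where the flow is unreliable.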
  • Patent number: 8866884
    Abstract: An image processing apparatus includes an image input unit that inputs a two-dimensional image signal, a depth information output unit that inputs or generates depth information of image areas constituting the two-dimensional image signal, an image conversion unit that receives the image signal and the depth information from the image input unit and the depth information output unit, and generates and outputs a left eye image and a right eye image for realizing binocular stereoscopic vision, and an image output unit that outputs the left and right eye images. The image conversion unit extracts a spatial feature value of the input image signal, and performs an image conversion process including an emphasis process applying the feature value and the depth information with respect to the input image signal, thereby generating at least one of the left eye image and the right eye image.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: October 21, 2014
    Assignee: Sony Corporation
    Inventors: Atsushi Ito, Toshio Yamazaki, Seiji Kobayashi
  • Publication number: 20140285619
    Abstract: One exemplary embodiment involves identifying a plane defined by a plurality of three-dimensional (3D) track points rendered on a two-dimensional (2D) display, wherein the 3D track points are rendered at a plurality of corresponding locations of a video frame. The embodiment also involves displaying a target marker at the plane defined by the 3D track points to allow for visualization of the plane, wherein the target marker is displayed at an angle that corresponds with an angle of the plane. Additionally, the embodiment involves inserting a 3D object at a location in the plane defined by the 3D track points to be embedded into the video frame. The location of the 3D object is based at least in part on the target marker.
    Type: Application
    Filed: June 25, 2012
    Publication date: September 25, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: James Acquavella, David Simons, Daniel M. Wilk
  • Patent number: 8803948
    Abstract: A broadcast receiver and a 3D subtitle data processing method thereof are disclosed. A method for processing three dimensional (3D) subtitle data includes receiving, by a receiver, a broadcast signal including 3D subtitle data, extracting, by an extracting unit, subtitle display information for a base view and extended subtitle display information for an extended view from the 3D subtitle data, and controlling, by a controller, a 3D subtitle display using the subtitle display information for the base view and the extended subtitle display information for the extended view.
    Type: Grant
    Filed: December 1, 2009
    Date of Patent: August 12, 2014
    Assignee: LG Electronics Inc.
    Inventors: Jong Yeul Suh, Jin Pil Kim, Ho Taek Hong
  • Patent number: 8610707
    Abstract: A three-dimensional (3D) imaging system and method are disclosed. A motion estimation and motion compensation (MEMC) unit performs the motion estimation (ME) and the motion compensation (MC) on a reference frame and a current frame, thereby generating an MEMC generated frame and motion vector (MV) information. A depth controller partitions the reference frame into multiple regions according to the MV information, thereby generating depth information. A scene-mode depth generator generates a depth map according to the reference frame and the depth information. A depth-image-based rendering (DIBR) unit generates a scene-mode generated frame according to the reference frame and the depth map.
    Type: Grant
    Filed: September 3, 2010
    Date of Patent: December 17, 2013
    Assignee: Himax Technologies Ltd.
    Inventor: Ying-Ru Chen
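Depth-image-based rendering (DIBR) of the kind this abstract names shifts each pixel by a disparity proportional to its depth, leaving holes where no source pixel lands. A minimal single-scanline sketch; the normalized depth convention and `max_disp` parameter are illustrative assumptions, not the patented pipeline:

```python
def dibr_shift(scanline, depth, max_disp):
    """DIBR for one scanline: move each pixel left by a disparity
    proportional to its normalized depth (1.0 = nearest).
    Positions no source pixel lands on remain holes (None)."""
    out = [None] * len(scanline)
    for x, (p, z) in enumerate(zip(scanline, depth)):
        nx = x - round(z * max_disp)  # nearer pixels get larger disparity
        if 0 <= nx < len(out):
            out[nx] = p
    return out
```

A real renderer also fills the resulting holes (inpainting or background extrapolation), which this sketch omits.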
  • Publication number: 20130162766
    Abstract: Provided are a computer program product, system, and method for processing a source video stream of frames of images providing a first type of output format to generate a supplemental video stream, wherein the source video stream and the supplemental video stream are processed by a video processor to produce an output video stream having a second type of output format. The source video stream has a plurality of frames of digital images comprising an array of pixels for the first type of output format. The color values in the frames of the source video stream are transformed to different color values to produce a modified video stream. The frames in the modified video stream are overlaid onto the corresponding frames in the source video stream with an opacity value less than 100% to produce frames in the supplemental video stream.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 27, 2013
    Applicant: 2DINTO3D LLC
    Inventor: Oren Samuel Cohen
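The overlay step above, modified frames composited onto source frames with opacity below 100%, is ordinary alpha compositing. A per-pixel sketch, assuming 8-bit RGB tuples and a single global opacity value:

```python
def overlay(base, top, opacity):
    """Composite `top` over `base` with a global opacity in [0, 1]:
    out = opacity * top + (1 - opacity) * base, per channel."""
    return tuple(round(opacity * t + (1 - opacity) * b)
                 for t, b in zip(top, base))

# 25% opacity pulls each channel a quarter of the way toward the top layer.
blended = overlay((100, 100, 100), (200, 200, 200), 0.25)
```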
  • Publication number: 20130113879
    Abstract: A method and system for transmitting and viewing video content is described. In one aspect a plurality of versions of 3D video content may be generated. Each version of the 3D video content may include a different viewing depth profile for the 3D video content. Data representative of a viewing distance between a viewer of 3D video content and a device may be determined. Based upon the received data, a particular version of the 3D video content of the plurality of versions having a viewing depth profile corresponding to the viewing distance may be determined.
    Type: Application
    Filed: November 4, 2011
    Publication date: May 9, 2013
    Applicant: COMCAST CABLE COMMUNICATIONS, LLC
    Inventor: Michael Chen
  • Publication number: 20130106995
    Abstract: Disclosed herein are a display apparatus for a vehicle and a method of controlling the same. The display apparatus includes: a user interface; a sensing unit providing approaching person information; a display unit; and a controlling unit controlling the display screen of the display unit based on an input through the user interface and changing and providing usable menus according to an approaching person based on the approaching person information provided through the sensing unit when the movement of the vehicle is sensed by the sensing unit.
    Type: Application
    Filed: January 16, 2012
    Publication date: May 2, 2013
    Applicant: SAMSUNG ELECTRO-MECHANICS CO., LTD.
    Inventors: Hae Jin Jeon, In Taek Song
  • Publication number: 20130106997
    Abstract: A portable terminal for generating and reproducing stereoscopic data is provided. More particularly, an apparatus and method for providing stereoscopic audio by applying a sense of distance to audio data by the use of subject information of image data when generating the stereoscopic data are provided. An apparatus for generating stereoscopic data in the portable terminal includes an image processor for applying a stereoscopic effect to image data by acquiring the image data for generating the stereoscopic data via a plurality of cameras, and for recognizing subject motion information of the image data. An audio processor applies a stereoscopic effect to audio data in accordance with the subject motion information ascertained from video data after acquiring audio data for generating the stereoscopic data.
    Type: Application
    Filed: September 12, 2012
    Publication date: May 2, 2013
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-Hyun KIM, Kyung-Seok OH, Kyoung-Ho BANG, In-Yong CHOI
  • Publication number: 20130050426
    Abstract: A method for extending the dynamic range of a depth map by deriving depth information from a synthesized image of a plurality of images captured at different light intensity levels and/or captured over different sensor integration times is described. In some embodiments, an initial image of an environment is captured while the environment is illuminated with light of a first light intensity. One or more subsequent images are subsequently captured while the environment is illuminated with light of one or more different light intensities. The one or more different light intensities may be dynamically configured based on a degree of pixel saturation associated with previously captured images. The initial image and the one or more subsequent images may be synthesized into a synthesized image by applying high dynamic range imaging techniques.
    Type: Application
    Filed: August 30, 2011
    Publication date: February 28, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Sam M. Sarmast, Donald L. Doolittle
  • Publication number: 20130050421
    Abstract: A control method of an image processing apparatus, the method including: receiving a two-dimensional (2D) video signal containing a plurality of 2D video frames; offsetting an object in a first frame among the plurality of 2D video frames so as to generate a three-dimensional (3D) video frame corresponding to the 2D video frame; and generating the 3D video frame corresponding to the first frame by compensating a hole area of pixel data generated by the offsetting of the object in the first frame based on a preset reference frame.
    Type: Application
    Filed: June 29, 2012
    Publication date: February 28, 2013
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Mi-yeon LEE
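The hole compensation in the last step of this abstract can be sketched as copying the co-located pixel from the preset reference frame wherever the object offset left a hole. This is a one-scanline simplification with holes marked as None, not the patented method itself:

```python
def fill_holes(offset_frame, reference_frame):
    """Replace hole pixels (None) produced by offsetting an object with
    the co-located pixels of a preset reference frame."""
    return [r if p is None else p
            for p, r in zip(offset_frame, reference_frame)]

repaired = fill_holes([1, None, 3], [9, 8, 7])
```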
  • Publication number: 20130038696
    Abstract: A catadioptric camera creates images of light fields from a 3D scene by creating ray images, defined as 2D arrays of ray-structure picture elements (ray-xels). Each ray-xel captures light intensity, mirror-reflection location, and mirror-incident light-ray direction. A 3D image is then rendered from the ray images by combining the corresponding ray-xels.
    Type: Application
    Filed: August 10, 2011
    Publication date: February 14, 2013
    Inventors: Yuanyuan Ding, Jing Xiao
  • Publication number: 20130021438
    Abstract: The 3D video processing unit combines video feeds from two unsynchronized video sources, such as left and right video cameras, in real-time, to generate a 3D image for display on a video monitor. The processing unit can also optionally receive video data from a third video source and use that data to generate a background image visible on all or a selected portion or portions of the video monitor. An alpha data generator inspects the video data held within respective buffer circuits associated with the left and right channels and generates an alpha data value for each pixel. These alpha data values are used within an alpha blending mixer to control whether a pixel is displayed or suppressed. Synchronization of the unsynchronized video sources occurs within the processing unit after alpha data values have been generated for each left and right channels.
    Type: Application
    Filed: March 30, 2011
    Publication date: January 24, 2013
    Applicant: DESIGN & TEST TECHNOLOGY, INC.
    Inventor: Lawrence J. Tucker
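The alpha-controlled display-or-suppress mixing described above can be sketched as per-row alpha selection between the left and right channels, e.g. for a line-interleaved 3D display; the interleaving pattern here is an assumed example, not necessarily the generator's actual rule:

```python
def line_interleave_alphas(height):
    """Alpha value per row: 1 selects the left channel (even rows),
    0 selects the right channel (odd rows)."""
    return [1 - (y % 2) for y in range(height)]

def mix(left_px, right_px, alpha):
    """Alpha of 1 displays the left pixel and suppresses the right;
    0 does the opposite."""
    return left_px if alpha else right_px
```

Because each pixel's fate is decided by its alpha value alone, the two feeds only need to be synchronized after alpha generation, as the abstract notes.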
  • Publication number: 20130010060
    Abstract: An Instant Messaging (IM) client and a method for implementing 3D (three-dimensional) video communication are disclosed. When it is determined that a local video capture device supports 3D video capturing and an opposite side requests to start a 3D video, the 3D video capturing is started. After performing coding on the captured 3D video stream according to a preset parameter, the coded 3D video stream is sent. A receiver receives and decodes the coded 3D video stream to display the 3D video.
    Type: Application
    Filed: September 12, 2012
    Publication date: January 10, 2013
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Jing LV
  • Publication number: 20120293619
    Abstract: The invention relates to a method for generating a three-dimensional (3D) video signal to enable simultaneous display of a 3D primary video signal and a secondary video signal on a 3D display, the 3D primary video signal comprising a base video signal and a subsidiary signal enabling 3D display, and the method comprising the steps of providing as the secondary video signal a two-dimensional (2D) secondary video signal, and formatting the base video signal, the subsidiary signal and the 2D secondary video signal to generate the 3D video signal.
    Type: Application
    Filed: December 10, 2010
    Publication date: November 22, 2012
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
    Inventors: Philip Steven Newton, Wiebe De Haan, Dennis Daniel Robert Jozef Bolio
  • Publication number: 20120293626
    Abstract: Disclosed herein is a 3D distance measurement system. The 3D distance measurement system includes an image projection device for projecting a pattern image including one or more patterns on a target object, and an image acquisition device for acquiring a projected pattern image, analyzing the projected pattern image using the patterns, and then reconstructing a 3D image. Each of the patterns includes one or more preset identification factors so that the patterns can be uniquely recognized, and each of the identification factors is one of a point, a line, and a surface, or a combination of two or more of a point, a line, and a surface. The 3D distance measurement system is advantageous in that it reconstructs a 3D image using a single pattern image, thus greatly improving processing speed and the utilization of a storage space and enabling a 3D image to be accurately reconstructed.
    Type: Application
    Filed: May 17, 2012
    Publication date: November 22, 2012
    Applicant: IN-G Co., Ltd.
    Inventors: Suk-Han LEE, Dae-Sik KIM, Yeon-Soo KIM
  • Publication number: 20120287233
    Abstract: A method for personalized video depth adjustment includes receiving a video frame, obtaining a frame depth map based on the video frame, and determining content genre of the video frame by classifying content of the video frame into one or more categories. The method also includes identifying a user viewing the video frame, retrieving depth preference information for the user from a user database, and deriving depth adjustment parameters based on the content genre and the depth preference information for the user. The method further includes adjusting the frame depth map based on the depth adjustment parameters, and providing a 3D video frame for display at a real-time playback rate on a user device of the user. The 3D video frame is generated based on the adjusted frame depth map.
    Type: Application
    Filed: December 29, 2009
    Publication date: November 15, 2012
    Inventors: Haohong Wang, Glenn Adler
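Applying derived depth adjustment parameters to a frame depth map can be as simple as a clamped gain/offset transform; the gain-and-offset form below is a hypothetical stand-in for whatever mapping the method actually derives from content genre and user preference:

```python
def adjust_depth(depth_map, gain, offset):
    """Scale and shift a normalized (0..1) frame depth map by adjustment
    parameters, clamping the result back into [0, 1]."""
    return [[min(max(gain * z + offset, 0.0), 1.0) for z in row]
            for row in depth_map]

# Doubling the gain exaggerates depth; values past 1.0 are clamped.
stronger = adjust_depth([[0.25, 0.75]], 2.0, 0.0)
```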
  • Publication number: 20120281064
    Abstract: A method and apparatus are disclosed that provide simplified control over image configuration from virtually any capture device and allow that image to be recorded and/or projected or displayed on any monitor. This universality enables general ease of use and uncouples the capture device from expensive system support, thus providing a method to more efficiently utilize resources.
    Type: Application
    Filed: May 3, 2011
    Publication date: November 8, 2012
    Applicants: Citynet LLC, 3D Surgical Solutions
    Inventors: Ray Hamilton Holloway, Christopher John Borg, Clifton Bradley Parker, Clifton Earl Parker
  • Publication number: 20120236107
    Abstract: A system and method for manipulating images in a videoconferencing session provides users with a 3-D-like view of one or more presented sites, without the need for 3-D equipment. A plurality of cameras may record a room at a transmitting endpoint, and the receiving endpoint may select one of the received video streams based upon a point of view of a conferee at the receiving endpoint. The conferee at the receiving endpoint will thus experience a 3-D-like view of the presented site.
    Type: Application
    Filed: May 11, 2011
    Publication date: September 20, 2012
    Applicant: Polycom, Inc.
    Inventor: Yaakov Moshe Rosenberg
  • Publication number: 20120154528
    Abstract: According to one embodiment, an image processing device includes a motion detector and a depth generator. The motion detector is configured to detect a motion vector of a video signal. The depth generator is configured to generate depth data of the video signal based on the motion vector. The depth generator is configured to generate the depth data even when the video signal is a still image.
    Type: Application
    Filed: June 30, 2011
    Publication date: June 21, 2012
    Inventors: Ryo Hidaka, Akihiro Oue
  • Publication number: 20120133732
    Abstract: A method for performing video display control within a video display system includes: dynamically utilizing two of a plurality of buffers as on-screen buffers for three-dimensional (3D) frames, wherein the plurality of buffers is positioned within the video display system; and during utilizing any of the two of the plurality of buffers as an on-screen buffer, dynamically utilizing at least one other buffer of the plurality of buffers as at least one off-screen buffer for at least one 3D frame. An associated video processing circuit and an associated video display system are also provided. In particular, the video processing circuit is positioned within the video display system, where the video processing circuit operates according to the method.
    Type: Application
    Filed: November 26, 2010
    Publication date: May 31, 2012
    Inventors: Guoping Li, Chin-Jung Yang, Geng Li, Te-Chi Hsiao
  • Publication number: 20120106921
    Abstract: An encoding method is provided, according to which video streams obtained by compression-coding original images are contained in one transport stream. The video streams contained in the transport stream include a video stream that constitutes 2D video and video streams that constitute 3D video. When containing such video streams in the transport stream, a descriptor specifying the video streams constituting the 3D video is contained in a PMT (Program Map Table) of the transport stream.
    Type: Application
    Filed: October 25, 2011
    Publication date: May 3, 2012
    Inventors: Taiji Sasaki, Takahiro Nishi, Toru Kawaguchi
  • Publication number: 20120086773
    Abstract: A method and apparatus for providing a three-dimensional (3D) image is provided. In the method, first-viewpoint image data and second-viewpoint image data that provide a 3D image are obtained. Time information that represents points of time that the first-viewpoint image data and the second-viewpoint image data are to be processed is produced, based on relation information to indicate that the first-viewpoint image data and the second-viewpoint image data are a pair of pieces of image data. The first-viewpoint image data, the second-viewpoint image data, and the time information are transmitted. The first-viewpoint image data is transmitted in a first stream, and the second-viewpoint image data is transmitted in a second stream.
    Type: Application
    Filed: June 15, 2011
    Publication date: April 12, 2012
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hong-seok PARK, Jae-jun LEE, Houng-sog MIN, Dae-jong LEE
  • Publication number: 20120075418
    Abstract: Disclosed are a video processing apparatus, a content providing server, and control methods thereof. The video processing apparatus includes: a receiver which receives a two-dimensional (2D) video signal; a communication unit which communicates with a content providing server providing a supplementary video signal for a three-dimensional (3D) video signal corresponding to the 2D video signal; a signal processor which processes the 2D video signal and the supplementary video signal; and a controller which controls the communication unit to receive the supplementary video signal corresponding to the received 2D video signal from the content providing server, and the signal processor to generate the 3D video signal based on the received supplementary video signal and the received 2D video signal. Accordingly, it is possible to generate and reproduce the 3D video signal corresponding to the 2D video signal.
    Type: Application
    Filed: April 27, 2011
    Publication date: March 29, 2012
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jung-su KIM, Keum-yong OH, Da-na JUNG
  • Publication number: 20120050469
    Abstract: An image processing device of the present invention includes a characteristic value extracting unit which receives an input of video data forming a plurality of three-dimensional images and extracts a characteristic value indicating a position of each three-dimensional image in a depth direction, and a characteristic value correcting unit which corrects the characteristic value to make the stereoscopic effects of the plurality of three-dimensional images uniform.
    Type: Application
    Filed: August 18, 2011
    Publication date: March 1, 2012
    Applicant: SONY CORPORATION
    Inventor: Masaaki Takesue
  • Publication number: 20120044329
    Abstract: High dynamic range imaging includes creating a high dynamic range image file for a subject image captured from a first angle. The high dynamic range image file is created by an image capturing device and is stored on the image capturing device. High dynamic range imaging also includes receiving, at the image capturing device over a communications network, a second high dynamic range image file for the subject image captured from a second angle. The second high dynamic range image file is subject to an authorization requirement. High dynamic range imaging further includes creating, by the image capturing device, a composite high dynamic range image file by combining elements of both of the high dynamic range image files, and generating, by the first image capturing device, a three-dimensional high dynamic range image from the composite high dynamic range image file.
    Type: Application
    Filed: October 31, 2011
    Publication date: February 23, 2012
    Applicant: AT&T INTELLECTUAL PROPERTY 1, L.P.
    Inventor: Steven Tischer
  • Publication number: 20120019616
    Abstract: A 3D image capturing and playing device includes: a 3D image capturing module, an audio capturing module, a control unit, and a video playing module. The 3D image capturing module, which includes a first image capturer, a second image capturer, and a frame composer, captures a 3D image of an external object and converts the 3D image into a digital image signal. The video playing module includes a display converter, a DLP projection engine, a liquid crystal display (LCD), and a synchronous signal emitter. The 3D image capturing and playing device not only has the two horizontally aligned image capturers for capturing the 3D image, but also is built-in with the DLP projection engine and the LCD for projecting or directly displaying the 3D image. Hence, the device is self-contained and easy to operate, thereby meeting the consumer need of taking, playing, and projecting 3D images.
    Type: Application
    Filed: July 26, 2010
    Publication date: January 26, 2012
    Inventor: Wei Hong CHEN
  • Publication number: 20120013711
    Abstract: Provided is a method for generating a 3D representation of a scene initially represented by a first video stream captured by a certain camera at a first set of viewing configurations. The method includes providing video streams compatible with capturing the scene by cameras, and generating an integrated video stream enabling three-dimensional display of the scene by integration of two video streams. The method includes calculating parameters characterizing a viewing configuration by analysis of elements having known geometrical parameters. The scene may be a sport scene comprising a playing field, a group of on-field objects, and a group of background objects. The method includes segmenting a frame into those portions, separately associating each portion with the different viewing configuration, and merging them into a single frame.
    Type: Application
    Filed: April 7, 2010
    Publication date: January 19, 2012
    Applicant: Stergen Hi-Tech Ltd.
    Inventors: Michael Tamir, Itzhak Wilf
  • Publication number: 20110292175
    Abstract: A broadcast receiver and a 3D subtitle data processing method thereof are disclosed. A method for processing three dimensional (3D) subtitle data includes receiving, by a receiver, a broadcast signal including 3D subtitle data, extracting, by an extracting unit, subtitle display information for a base view and extended subtitle display information for an extended view from the 3D subtitle data, and controlling, by a controller, a 3D subtitle display using the subtitle display information for the base view and the extended subtitle display information for the extended view.
    Type: Application
    Filed: December 1, 2009
    Publication date: December 1, 2011
    Inventors: Jong Yeul Suh, Jin Pil Kim, Ho Taek Hong
  • Publication number: 20110292171
    Abstract: Provided are systems and methods for managing distribution of three-dimensional visual media to a plurality of displays. For example, there is a content server for managing distribution of three-dimensional (3D) visual media to a client over a network, where the client has a display. The content server comprises a server processor configured to determine a plurality of supported 3D modes of the client, enable one of the plurality of supported 3D modes, and provide the 3D visual media to the client for presentation on the display of the client.
    Type: Application
    Filed: April 6, 2011
    Publication date: December 1, 2011
    Applicant: BROADCOM CORPORATION
    Inventors: Stephen Ray Palm, William John Fassl
  • Publication number: 20110273532
    Abstract: An apparatus of transmitting stereoscopic image data includes an image data output unit configured to output stereoscopic image data including left-eye image data and right-eye image data about a certain program; and an image data transmitting unit configured to transmit a transport stream including the stereoscopic image data about the certain program output from the image data output unit. The image data transmitting unit incorporates identification information indicating whether disparity data is transmitted in the transport stream. The disparity data is used to add disparity to superimposed information to be superimposed on an image generated from the left-eye image data and the right-eye image data.
    Type: Application
    Filed: April 21, 2011
    Publication date: November 10, 2011
    Applicant: Sony Corporation
    Inventors: Naohisa Kitazato, Ikuo Tsukagoshi
  • Publication number: 20110273538
    Abstract: A light beam reflected from an ocular fundus 1a is split into a pair of right and left light beams by a two-aperture stop 31 disposed in a position conjugate with an anterior ocular segment 1b of an eye 1 to be examined. The pair of right and left ocular fundus images having a parallax is formed as intermediate images from the split light beams at the position of a photographic mask 43. The optical path of the pair of ocular fundus images formed as intermediate images is split by a pair of optical path splitting lenses 51, 52 disposed in a position substantially conjugate with the two-aperture stop. One ocular fundus image is re-formed on half of an imaging plane 53a of an imaging element 53, and the other ocular fundus image is re-formed separately on the other half of the imaging plane. With such a structure, two ocular fundus images for three-dimensional viewing can be efficiently obtained without using a prism for splitting the optical path.
    Type: Application
    Filed: August 26, 2009
    Publication date: November 10, 2011
    Inventor: Takayoshi Suzuki
  • Publication number: 20110267427
    Abstract: New systems and methods are hereby provided that inherently and naturally resolve the challenges of synthesizing coordinated inputs from multiple cameras. For example, a multi-sensor mediator may collect the input data from multiple sensors, and generate a composite signal that encodes the combined data from the different sensors. The multi-sensor mediator may then relay the composite signal to a sensor controller, as if the signal were coming from a single sensor. A computing device that receives the input from the sensor controller may then generate an output based on the composite signal, which may include processing the composite signal to combine the separate signals from the different sensors, such as to provide a stereo image output, for example. The multi-sensor mediator makes such an output possible by ensuring coordinated input and processing of the input from the multiple sensors, for example.
    Type: Application
    Filed: July 12, 2011
    Publication date: November 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Roy Chun Beng Goh, Raymond Xue, Thomas C. Oliver, Stephen C. Cooper
  • Publication number: 20110255775
    Abstract: Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images.
    Type: Application
    Filed: May 25, 2011
    Publication date: October 20, 2011
    Applicant: 3DMEDIA CORPORATION
    Inventors: Michael McNamer, Marshall Robers, Tassos Markas, Jason Paul Hurst, Jon Boyette
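The abstract does not specify how the corrected still pair is combined into the final three-dimensional image; one common and easily sketched output format (an assumption here, not the patent's stated method) is a red/cyan anaglyph, which takes the red channel from the left still and the green/blue channels from the right still.

```python
# Illustrative anaglyph combiner: images are lists of rows of (R, G, B)
# tuples. The left image supplies red; the right image supplies green/blue.

def make_anaglyph(left_rgb, right_rgb):
    """Combine a corrected stereo pair into a single red/cyan anaglyph."""
    return [[(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_rgb, right_rgb)]
```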
  • Publication number: 20110228049
    Abstract: Apparatus for presenting a stereoscopic image to a viewer, the apparatus comprising: a stereoscopic video system comprising: first and second image sensors for acquiring, respectively, first and second images of a scene; a display system for displaying the first image to the first eye of the viewer and the second image to the second eye of the viewer; and parallax adjusting means for adjusting the parallax between the first and second images.
    Type: Application
    Filed: March 14, 2011
    Publication date: September 22, 2011
    Inventors: Yuri Kazakevich, John E. Kennedy
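Parallax adjustment between the two views is, at its simplest, a horizontal shift of one eye's image relative to the other. The sketch below (illustrative names; the patent's adjusting means may be optical rather than digital) shifts a single scanline, padding the vacated pixels with a fill value.

```python
# Illustrative parallax adjustment: shift one eye's scanline horizontally.
# A positive shift moves content right; negative moves it left.

def adjust_parallax(row, shift, fill=0):
    """Return the scanline shifted by `shift` pixels, padded with `fill`."""
    n = len(row)
    if shift >= 0:
        return [fill] * shift + row[:n - shift]
    return row[-shift:] + [fill] * (-shift)
```

Applying opposite shifts to the left and right images moves the apparent convergence plane toward or away from the viewer.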
  • Publication number: 20110164109
    Abstract: Motion picture scenes to be colorized/depth enhanced (2D->3D) are broken into separate elements: backgrounds/sets and motion/onscreen-action. Background and motion elements are combined into a composite frame which becomes a visual reference database that includes data for all frame offsets used later for the computer controlled application of masks within a sequence of frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting/reshaping. Colors and/or depths are automatically applied to masks throughout a scene from the composite background and to motion objects.
    Type: Application
    Filed: December 22, 2010
    Publication date: July 7, 2011
    Inventors: Tony BALDRIDGE, Barry Sandrew
  • Publication number: 20110157304
    Abstract: Natural 3D images are created on a standard television screen or a movie screen which can be viewed without requiring the use of special glasses or viewing devices. For television, the incoming 2D signal and a delayed 2D signal are supplied simultaneously to the input of the television receiver to produce a 3D image on the screen. For motion pictures, the film and a staggered film portion separated by a predetermined number of frames are fed simultaneously and in opposite directions through a film gate of a projector and moved intermittently therein in a predetermined sequence to produce a 3D image on the motion picture screen.
    Type: Application
    Filed: June 30, 2010
    Publication date: June 30, 2011
    Inventor: Sam Savoca
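The television path above pairs the incoming 2D signal with a delayed copy of itself. The delay element can be sketched as a fixed-length frame buffer (a hypothetical digital analogue of the described technique; the names are not from the patent):

```python
# Illustrative fixed frame delay: push frames in, get back the frame from
# `delay` frames earlier (None until the buffer is primed). Pairing each
# current frame with its delayed counterpart yields the two-signal input
# the abstract describes.

from collections import deque

class FrameDelay:
    def __init__(self, delay):
        self.buf = deque(maxlen=delay + 1)

    def push(self, frame):
        """Add a frame; return the frame from `delay` frames ago, if any."""
        self.buf.append(frame)
        return self.buf[0] if len(self.buf) == self.buf.maxlen else None
```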
  • Publication number: 20110157309
    Abstract: Systems, methods and apparatuses are described herein for encoding a plurality of video frame sequences, wherein each video frame sequence corresponds to a different perspective view of the same subject matter. In accordance with various embodiments, the encoding is performed in a hierarchical manner that leverages referencing between frames of different ones of the video frame sequences (so-called “external referencing”), but that also allows for encoded representations of only a subset of the video frame sequences to be provided when less than all sequences are required to support a particular viewing mode of a display system that is capable of displaying the subject matter in a two-dimensional mode, a three-dimensional mode, or a multi-view three-dimensional mode.
    Type: Application
    Filed: December 30, 2010
    Publication date: June 30, 2011
    Applicant: BROADCOM CORPORATION
    Inventors: James D. Bennett, Jeyhan Karaoguz
  • Publication number: 20110157308
    Abstract: A three-dimensional image reproducing apparatus includes: a data source unit operable to output stream data of 3D contents; a data generating unit operable to output a 3D video signal of the 3D contents and an audio signal of the 3D contents on the basis of the stream data; and a data transmission interface including a 2D video signal generating unit operable to generate a predetermined 2D video signal. The data transmission interface is operable to: receive the audio signal of the 3D contents; convert the audio signal of the 3D contents and the predetermined 2D video signal to a signal conforming to a 2D video transmission format; and output the converted signal to an external device.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 30, 2011
    Applicant: PANASONIC CORPORATION
    Inventor: EIJI MANSHO
  • Publication number: 20110149039
    Abstract: The present invention relates to a device and a method for allowing a user to reproduce a new 3-D scene from one or more existing 2-D video frames. The present invention provides a device and a method for automating post-image processing that otherwise requires a lot of manual work, thereby making it possible to produce and edit a new 3-D representation from existing 2-D video.
    Type: Application
    Filed: December 16, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jae Hwan KIM, Jae Hean Kim, Jin Ho Kim, Il Kwon Jeong
  • Publication number: 20110126160
    Abstract: A method of providing a three-dimensional (3D) image and a 3D display apparatus applying the same are provided. If a predetermined instruction is input in 2D mode, display mode is changed to 3D mode. A predetermined format is applied to an incoming image, and the resultant image is displayed in 3D mode. If the predetermined instruction is input again in 3D mode, another format is applied to the incoming image and the resultant image is displayed. As a result, a viewer can conveniently select a 3D image format of the incoming image.
    Type: Application
    Filed: October 13, 2010
    Publication date: May 26, 2011
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-youn HAN, Chang-seog KO
  • Publication number: 20110122229
    Abstract: The invention relates to an endoscopic imaging system for observation of an operative site (2) inside a volume (1) located inside the body of an animal, comprising: Image capture means (3) intended to provide data on the operative site (2), Processing means (4) to process data derived from the image capture means, Display means (5) displaying data processed by the processing means (4), characterized in that: The image capture means (3) comprise at least two image capture devices that are independent and mobile relative to one another, the image capture devices being intended to be positioned inside the volume (1), and in that: The processing means (4) comprise computing means to obtain three-dimensional data on the observed operative site (2) from data provided by the image capture devices, and means to reconstruct a three-dimensional image of a region of interest of the operative site from the three-dimensional data obtained.
    Type: Application
    Filed: August 20, 2008
    Publication date: May 26, 2011
    Applicant: UNIVERSITE JOSEPH FOURIER - GRENOBLE 1
    Inventors: Philippe Cinquin, Sandrine Voros, Christophe Boschet, Celine Fouard, Alexandre Moreau-Gaudry
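The core geometric step of obtaining three-dimensional data from two capture devices reduces, in the simplest rectified pinhole-camera case, to triangulation: depth Z = f · B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched point. A minimal sketch of that special case (the patent's independent, mobile cameras require a more general formulation):

```python
# Illustrative stereo triangulation for a rectified camera pair.

def triangulate_depth(focal_px, baseline_mm, x_left, x_right):
    """Depth of a matched point: Z = f * B / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_mm / disparity
```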
  • Publication number: 20110090309
    Abstract: An image processing apparatus includes a receiving unit configured to receive an encoded stream, an image capture type, and image capturing order information, the encoded stream being produced by encoding image data of multi-viewpoint images including images from multiple viewpoints that form a stereoscopic image, the image capture type indicating that the multi-viewpoint images have been captured at different timings, the image capturing order information indicating an image capturing order in which the multi-viewpoint images have been captured; a decoding unit configured to decode the encoded stream received by the receiving unit to generate image data; and a control unit configured to control a display apparatus to display multi-viewpoint images corresponding to the image data generated by the decoding unit in the same order as the image capturing order in accordance with the image capture type and image capturing order information received by the receiving unit.
    Type: Application
    Filed: September 27, 2010
    Publication date: April 21, 2011
    Applicant: SONY CORPORATION
    Inventors: Teruhiko SUZUKI, Yoshitomo Takahashi, Takuya Kitamura
  • Publication number: 20110090311
    Abstract: A video communication method, device, and system are provided, which relate to the field of video communications. They address two problems: that the scenes of the two communicating parties currently require special arrangement to improve the sense of reality, and that scene contents cannot be displayed in a 3D video mode. With the provided method, the scenes of the two parties of communication do not need special arrangement to improve the sense of reality for users, and the scene contents can be displayed in a 3D video mode. The video communication method, device, and system are applicable to video communication such as common video chat, video telephony, and video conferencing.
    Type: Application
    Filed: December 17, 2010
    Publication date: April 21, 2011
    Inventors: Ping Fang, Chen Liu, Yuan Liu
  • Publication number: 20110050848
    Abstract: Tools are described for preparing digital dental models for use in dental restoration production processes, along with associated systems and methods. Dental modeling is improved by supplementing views of three-dimensional models with still images of the modeled subject matter. Video data acquired during a scan of the model provides a source of still images that can be displayed alongside a rendered three-dimensional model, and the two views (model and still image) may be synchronized to provide a common perspective of the model's subject matter. This approach provides useful visual information for disambiguating surface features of the model during processing steps such as marking a margin of a prepared tooth surface for a restoration. Interactive modeling tools may be similarly enhanced.
    Type: Application
    Filed: June 27, 2008
    Publication date: March 3, 2011
    Inventors: Janos Rohaly, Robert N. Nazzal, Edward K. Tekeian, Ilya A. Kriveshko, Eric B. Paley
  • Publication number: 20110037830
    Abstract: The present invention provides a technique for receiving one or more view signals, each containing information about multiple input images, and forming a combined stereoscopic image signal based at least partly on a characterization of the display's sub-pixel configuration. This allows any sub-pixel configuration to be controlled by a couple of parameters that can be changed depending on the display on which the combined stereoscopic image signal is to be shown.
    Type: Application
    Filed: April 24, 2008
    Publication date: February 17, 2011
    Applicant: NOKIA CORPORATION
    Inventor: Lachlan Pockett
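The parameterized sub-pixel control described above can be sketched as follows. The two parameters chosen here (a phase offset and a repetition period) are an illustrative assumption about what "a couple of parameters" might be; the patent does not name them.

```python
# Illustrative sub-pixel interleaving for an autostereoscopic display:
# each sub-pixel column is assigned to the left or right view based on a
# simple (offset, period) description of the display's layout.

def interleave_subpixels(left, right, offset, period):
    """Build one output scanline from two views using (offset, period)."""
    out = []
    for x in range(len(left)):
        src = left if ((x + offset) % period) < period // 2 else right
        out.append(src[x])
    return out
```

Changing only `offset` and `period` retargets the same combined signal to a display with a different sub-pixel arrangement, which is the flexibility the abstract claims.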
  • Publication number: 20110001793
    Abstract: It is possible to perform three-dimensional shape measurement with easy processing, regardless of whether an object is moving or not. An image capturing unit (103) captures a captured image (I) including both a real image (I2) of the object (113R) and a mirror (101). A light amount changing unit (63a) changes a light amount of a virtual image (I1). An image separating unit (captured image separating unit 104) specifies, as a virtual image (Ib1), an image in a region having a different light amount (R1), in a captured image (Ia) in which the light amount is changed and a captured image (Ib) in which the light amount is not changed, and specifies an image in a region having the same light amount (R2) as a real image (Ib2). A three dimensional shape is reconstructed from the real image and so on that are specified.
    Type: Application
    Filed: July 10, 2009
    Publication date: January 6, 2011
    Inventors: Takaaki Moriyama, Akira Uesaki, Tadashi Yoshida, Yudai Ishibashi
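The separation step above can be sketched directly: compare the capture taken with the changed light amount against the unchanged capture, and classify pixels that differ as belonging to the virtual (mirror) image and pixels that match as belonging to the real image. The function name and mask representation are illustrative assumptions.

```python
# Illustrative image separation: pixels whose intensity changed between the
# two captures belong to the virtual image; unchanged pixels are real.

def separate_virtual_real(frame_a, frame_b):
    """Return (virtual_pixels, real_pixels) as lists of (row, col) indices."""
    virtual, real = [], []
    for i, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for j, (a, b) in enumerate(zip(row_a, row_b)):
            (virtual if a != b else real).append((i, j))
    return virtual, real
```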
  • Publication number: 20100328436
    Abstract: Methods and systems for anonymized video analysis are described. In one embodiment, a first silhouette image of a person in a living unit may be accessed. The first silhouette image may be based on a first video signal recorded by a first video camera. A second silhouette image of the person in the living unit may be accessed. The second silhouette image may be of a different view of the person than the first silhouette image. The second silhouette image may be based on a second video signal recorded by a second video camera. A three-dimensional model of the person in voxel space may be generated based on the first silhouette image, the second silhouette image, and viewing conditions of the first video camera and the second video camera. In some embodiments, information on falls, gait parameters, and other movements of the person in the living unit is determined. Additional methods and systems are disclosed.
    Type: Application
    Filed: June 1, 2010
    Publication date: December 30, 2010
    Applicant: THE CURATORS OF THE UNIVERSITY OF MISSOURI
    Inventors: Marjorie Skubic, James M. Keller, Fang Wang, Derek T. Anderson, Erik Edward Stone, Robert H. Luke, III, Tanvi Banerjee, Marilyn J. Rantz
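Generating a voxel model from multiple silhouettes is the classic visual-hull construction: a voxel survives only if its projection falls inside every camera's silhouette. A minimal sketch (orthographic projections and set-based silhouettes are simplifying assumptions; the patent uses calibrated camera viewing conditions):

```python
# Illustrative visual hull: carve a voxel grid against per-view silhouettes.
# `views` is a list of (project_fn, silhouette_set) pairs, where project_fn
# maps a voxel to that camera's image coordinates.

def visual_hull(voxels, views):
    """Keep only voxels whose projection lies inside every silhouette."""
    return [v for v in voxels
            if all(project(v) in sil for project, sil in views)]
```

With two orthographic views (front and side), the surviving voxels are exactly those consistent with both silhouettes.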