Processing Stereoscopic Image Signals (epo) Patents (Class 348/E13.064)
-
Patent number: 12250461
Abstract: An image controller, an image processing system, and an image correcting method are provided. A first controller obtains a first image from an image capturing apparatus. The first controller converts the first image into a second image according to a converting operation. The converting operation includes deformation correction, and the deformation correction is used to correct deformation of one or more target objects in the first image. A second controller detects the target object in the second image to generate a detected result. The first controller corrects the converting operation according to the detected result. The visual experience may thus be improved.
Type: Grant | Filed: June 7, 2022 | Date of Patent: March 11, 2025 | Assignee: GENESYS LOGIC, INC. | Inventors: Shi-Ming Hu, Hsueh-Te Chao
-
Patent number: 12210094
Abstract: A medical imaging apparatus comprises processing circuitry configured to: receive three-dimensional flow data, wherein the three-dimensional flow data comprises data acquired by medical imaging of a subject; perform a first intensity projection to process first flow data corresponding to a first region in the three-dimensional flow data having a first direction of flow, thereby obtaining a first color; perform a second, independent intensity projection to process second flow data corresponding to a second region in the three-dimensional flow data having a second direction of flow which is different from the first direction of flow, thereby obtaining a second color; combine the first color and the second color to obtain a combined color; and generate volume rendering image data based on the combined color.
Type: Grant | Filed: August 4, 2022 | Date of Patent: January 28, 2025 | Assignee: CANON MEDICAL SYSTEMS CORPORATION | Inventor: Magnus Wahrenberg
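The two independent intensity projections described above can be sketched in miniature: along one viewing ray, flow in one direction feeds a maximum-intensity projection for one color channel, flow in the opposite direction feeds a second, independent projection for the other channel, and the two results are combined into one color. This toy (the name `render_ray` and the per-ray sample layout are invented here) only illustrates the idea; the apparatus's actual projection and color mapping are not specified in the abstract.

```python
def render_ray(flow_samples):
    """Signed flow samples along one viewing ray. Positive flow feeds a
    maximum-intensity projection for the first color channel; negative
    flow feeds an independent projection for the second channel. The
    pair is the combined two-channel color for this pixel."""
    fwd = max((s for s in flow_samples if s > 0), default=0.0)
    rev = max((-s for s in flow_samples if s < 0), default=0.0)
    return (fwd, rev)

# Forward and reverse flow mixed along one ray:
color = render_ray([0.2, -0.5, 0.9, -0.1])
```

Because each projection ignores the other direction's samples, a vessel with bidirectional flow contributes to both channels independently before the colors are combined.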
-
Patent number: 12174297
Abstract: A distance measurement device according to the present disclosure includes: a light detection unit that receives light from a subject; a depth calculation section that calculates depth information of the subject on the basis of an output of the light detection unit; and an artifact removal section that divides an image into respective segments on the basis of the depth information, validates a segment of the respective segments in which a number of pixels exceeds a predetermined threshold, and invalidates a segment in which the number of pixels is less than or equal to the predetermined threshold.
Type: Grant | Filed: July 20, 2020 | Date of Patent: December 24, 2024 | Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION | Inventor: Kazuki Ohashi
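The segment-validation rule above (keep segments whose pixel count exceeds a threshold, invalidate smaller ones as likely artifacts) can be sketched as follows. As a simplifying assumption, "segmentation" is approximated here by grouping pixels that share a quantized depth value; the real device presumably uses a richer spatial segmentation.

```python
def validate_segments(depth, threshold):
    """Return a mask that is True where a pixel's depth-segment has
    more than `threshold` pixels, False where the segment is small
    enough to be invalidated as an artifact."""
    counts = {}
    for row in depth:
        for d in row:
            counts[d] = counts.get(d, 0) + 1
    return [[counts[d] > threshold for d in row] for row in depth]

# A 3x4 toy depth map: a large segment (depth 1), a small one (depth 9),
# and a single stray pixel (depth 2).
depth_map = [
    [1, 1, 1, 9],
    [1, 1, 1, 9],
    [1, 1, 1, 2],
]
mask = validate_segments(depth_map, threshold=2)
```

Only the nine-pixel segment survives; the two-pixel and one-pixel segments fall at or below the threshold and are invalidated.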
-
Patent number: 12177600
Abstract: Aspects of the present disclosure involve a system for providing virtual experiences. The system accesses an image depicting a person and one or more camera parameters representing a viewpoint associated with a camera used to capture the image. The system extracts a portion of the image comprising the depiction of the person. The system processes, by a neural radiance field (NeRF) machine learning model, the one or more camera parameters to render an estimated depiction of a scene from the viewpoint associated with the camera used to capture the image. The system combines the portion of the image comprising the depiction of the person with the estimated depiction of the scene to generate an output image and causes the output image to be presented on a client device.
Type: Grant | Filed: June 29, 2022 | Date of Patent: December 24, 2024 | Assignee: Snap Inc. | Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Ma'ayan Mishin Shuvi
-
Patent number: 12134347
Abstract: A lamp system improves ease of use by outputting an image without distortion when one image is divided and output from left and right lamps respectively. The lamp system includes a left lamp installed on a left side of a moving object in one direction to output an allocated first image, a right lamp installed on a right side of the moving object in one direction to output an allocated second image, and a control unit that receives original image information from the outside and allocates the first image and the second image to the left lamp and the right lamp, respectively, based on the original image, while changing and allocating an overlapping area, that is, the portion where the first image and the second image overlap.
Type: Grant | Filed: December 15, 2022 | Date of Patent: November 5, 2024 | Assignee: HYUNDAI MOBIS CO., LTD. | Inventor: Myeong Je Kim
-
Patent number: 12022052
Abstract: An apparatus for three-dimensional (3D) reconstruction includes: an event trigger module that determines whether to perform a 3D reconstruction, a motion estimation module that obtains motion information, and a reconstruction module that receives a first front image having a first view point and a second front image having a second view point, and obtains 3D coordinate values of a camera coordinate system based on the first front image and the second front image. Here, each of the first front image and the second front image includes planes, and each of the planes is perpendicular to the ground and includes feature points.
Type: Grant | Filed: June 10, 2021 | Date of Patent: June 25, 2024 | Assignee: Samsung Electronics Co., Ltd. | Inventors: Jaewoo Lee, Sangjun Lee, Wonju Lee
-
Patent number: 11962906
Abstract: An image processing system may include a first image sensor, a second image sensor, and an image processing device. The image processing device may be configured to obtain a first image and a second image by respectively processing first image data and second image data. The image processing device may output an image based on the first image when a zoom factor of the output image is lower than a first reference value, generate a correction image by correcting locations of second reference coordinates of the second image based on first reference coordinates of the first image when the zoom factor of the output image is between the first reference value and a second reference value, and output an image based on the second image when the zoom factor exceeds the second reference value.
Type: Grant | Filed: November 29, 2021 | Date of Patent: April 16, 2024 | Assignee: Samsung Electronics Co., Ltd. | Inventors: Dong-Jin Park, Jeehong Lee, Seongyeong Jeong, Duck-soo Kim
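The zoom-dependent source selection in this abstract is a three-way policy. A minimal sketch, assuming the function name, the return labels, and the inclusive/exclusive treatment of the boundaries (none of which the abstract specifies):

```python
def select_output(zoom, ref1, ref2):
    """Pick which sensor's image drives the output:
    below ref1           -> first (e.g. wide) sensor's image;
    between ref1 and ref2 -> second image, location-corrected against
                             the first image's reference coordinates;
    above ref2           -> second (e.g. tele) sensor's image."""
    if zoom < ref1:
        return "first-image"
    if zoom <= ref2:
        return "second-image-corrected-to-first"
    return "second-image"

choices = [select_output(z, 1.0, 2.0) for z in (0.8, 1.5, 3.0)]
```

The middle band is what smooths the handover: correcting the second image's coordinates against the first avoids a visible jump when the source switches.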
-
Patent number: 11953665
Abstract: An imaging apparatus includes a first reflection optical system and a second reflection optical system having mutually different optical axes, each of the first and second reflection optical systems including a plurality of reflecting surfaces; a first imaging portion configured to receive imaging light reflected by the first reflection optical system; a second imaging portion configured to receive imaging light reflected by the second reflection optical system; a first member; a second member; and a frame. A part of the plurality of reflecting surfaces of the first reflection optical system are reflecting surfaces provided on the frame. Among the plurality of reflecting surfaces of the first reflection optical system, a final-stage reflecting surface configured to reflect the imaging light toward the first imaging portion is a first reflecting surface formed on a surface of the first member.
Type: Grant | Filed: September 29, 2020 | Date of Patent: April 9, 2024 | Assignee: Canon Kabushiki Kaisha | Inventors: Naoto Fuse, Chiaki Inoue, Ichiro Kanazashi, Atsushi Takata, Kazuhiro Kochi, Kouga Okada
-
Patent number: 11949844
Abstract: Embodiments of the present invention relate to the technical field of image processing, and disclose an image data processing method and apparatus, an image processing chip, and an aircraft. The method comprises: receiving K channels of image data; performing splitting processing on the L channels of second image data in the K channels of image data to obtain M channels of third image data; performing format conversion processing on the (N−M) channels of first image data and the M channels of third image data to obtain a color image in a preset format; and performing image processing on a gray part component of the color image in the preset format to obtain a depth map. The method can better meet the requirements of multi-channel image data processing.
Type: Grant | Filed: September 30, 2021 | Date of Patent: April 2, 2024 | Assignee: AUTEL ROBOTICS CO., LTD. | Inventor: Zhaozao Li
-
Patent number: 11929102
Abstract: A decoding system decodes a video stream, which is encoded video information. The decoding system includes a decoder that acquires the video stream and generates decoded video information, a maximum luminance information acquirer that acquires, from the video stream, maximum luminance information indicating the maximum luminance of the video stream, and an outputter that outputs the decoded video information along with the maximum luminance information. In a case where the video stream includes a base video stream and an enhanced video stream, the decoder generates base video information by decoding the base video stream and enhanced video information by decoding the enhanced video stream, and generates the decoded video information based on the base video information and the enhanced video information; the outputter then outputs the decoded video information along with the maximum luminance information.
Type: Grant | Filed: December 20, 2021 | Date of Patent: March 12, 2024 | Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA | Inventors: Hiroshi Yahata, Tadamasa Toma, Tomoki Ogawa
-
Patent number: 11776205
Abstract: One or more image and/or depth cameras capture images and/or depths of a physical environment over time. A computer system processes the images to create a static 3-dimensional (3D) model representing stationary structure and a dynamic 3D model representing moving or moveable objects within the environment. The system visually overlays the dynamic 3D model over the static 3D model in a user interface. Through the user interface, a user can create virtual spatial interaction sensors, each of which is defined by a volume of space within the environment. A virtual spatial interaction sensor can be triggered, based on analysis of the dynamic 3D model by the computer system, whenever a moveable object within the environment intersects the defined volume of the sensor. Times and durations of intersections can be logged and used for process refinement.
Type: Grant | Filed: June 9, 2021 | Date of Patent: October 3, 2023 | Inventors: James Keat Hobin, Valentin Heun
-
Patent number: 11776148
Abstract: Computing the height of a building is performed by inputting a pair of two-dimensional (2-D) aerial images of a city along with their metadata. Using the metadata, a three-dimensional (3-D) vector from each image toward the location of the camera when each image was taken is determined. A plurality of pairs of corresponding image points from the images are computed; in each pair, an image point of one image identifies the same physical point on the building as the second image point of the second image. Next, the images are superimposed, and for each pair of image points the intersection of the 3-D vector of the first image originating at the first image point with the 3-D vector of the second image originating at the second image point is determined. Each intersection is a 3-D position, and the height is determined from the median of these 3-D positions.
Type: Grant | Filed: February 8, 2023 | Date of Patent: October 3, 2023 | Assignee: Blackshark.ai GmbH | Inventors: Christian Poglitsch, Thomas Holzmann, Stefan Habenschuss, Christian Pirchheim, Shabab Bazrafkan
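The geometry in this abstract is classic two-view triangulation: two 3-D rays, one per image, rarely meet exactly, so one takes the least-squares closest point between them, then the median height over all point pairs. A sketch under stated assumptions: the function names and the sample rays below are invented for illustration, and height is read off as the z coordinate.

```python
import statistics

def ray_midpoint(p1, d1, p2, d2):
    """Least-squares closest point between two 3-D rays p + t*d."""
    dot = lambda u, v: sum(u[i] * v[i] for i in range(3))
    w0 = [p1[i] - p2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p1[i] + t * d1[i] for i in range(3)]
    q2 = [p2[i] + s * d2[i] for i in range(3)]
    return [(q1[i] + q2[i]) / 2 for i in range(3)]

def building_height(point_pairs):
    """point_pairs: ((p1, d1), (p2, d2)) ray pairs, one per pair of
    corresponding image points. Height = median z of intersections."""
    zs = [ray_midpoint(p1, d1, p2, d2)[2]
          for (p1, d1), (p2, d2) in point_pairs]
    return statistics.median(zs)

# Three ray pairs whose intersections sit at z = 10, 12, 11:
up = (0.0, 0.0, 1.0)
pairs = [
    (((0.0, 0.0, 0.0), up), ((1.0, 0.0, 10.0), (-1.0, 0.0, 0.0))),
    (((0.0, 0.0, 0.0), up), ((1.0, 0.0, 12.0), (-1.0, 0.0, 0.0))),
    (((0.0, 0.0, 0.0), up), ((1.0, 0.0, 11.0), (-1.0, 0.0, 0.0))),
]
height = building_height(pairs)
```

Taking the median rather than the mean is what makes the estimate robust to a few badly matched point pairs.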
-
Patent number: 11769465
Abstract: A computing system includes a storage device and processing circuitry. The processing circuitry is configured to obtain an image frame that comprises a plurality of pixels that form a pixel array. Additionally, the processing circuitry is configured to determine that a region of the image frame belongs to a trigger content type. Based on determining that the region of the image frame belongs to the trigger content type, the processing circuitry is configured to modify the region of the image frame to adjust a luminance of pixels of the region based in part on an ambient light level in a viewing area of the user, and to output, for display by a display device in the viewing area of the user, a version of the image frame that contains the modified region.
Type: Grant | Filed: November 5, 2021 | Date of Patent: September 26, 2023 | Assignee: Optum, Inc. | Inventors: Jon Kevin Muse, Gregory J. Boss, Komal Khatri
-
Patent number: 11727659
Abstract: A method for processing a three-dimensional (3D) image includes acquiring a frame of a color image and a frame of a depth image, and generating a frame by combining the acquired frame of the color image with the acquired frame of the depth image. The generating of the frame includes combining a line of the color image with a corresponding line of the depth image.
Type: Grant | Filed: February 24, 2022 | Date of Patent: August 15, 2023 | Assignee: SAMSUNG ELECTRONICS CO., LTD. | Inventors: Hojung Kim, Yongkyu Kim, Hoon Song, Hongseok Lee
-
Patent number: 11687714
Abstract: Disclosed are computer-implemented methods and systems for generating text descriptive of digital images, comprising: using a machine learning model to pre-process an image to generate initial text descriptive of the image; adjusting one or more inferences of the machine learning model, the inferences biasing the machine learning model away from associating negative words with the image; using the machine learning model comprising the adjusted inferences to post-process the image to generate updated text descriptive of the image; and processing the generated updated text outputted by the machine learning model to fine-tune the updated text descriptive of the image.
Type: Grant | Filed: August 20, 2020 | Date of Patent: June 27, 2023 | Assignee: Adobe Inc. | Inventors: Pranav Aggarwal, Di Pu, Daniel ReMine, Ajinkya Kale
-
Patent number: 11679302
Abstract: A training system operates to provide an online training system that includes an instructional portion, a performance portion, and a results report. A trainer can create training modules and/or training programs and then launch a training program in which one or more users, utilizing a user system, can participate. A training program consists of one or more training modules. Each training module is configured to first run in a demonstration mode, in which the operator streams a demonstration of the physical activity required by the training module to one or more user systems. Users can thus view the demonstration of the physical activity on the user systems. The training module then enters a trial mode, in which each of the participants engages in performing the physical activity. The user systems present a video/graphical display to show various performance criteria of the training module.
Type: Grant | Filed: April 29, 2022 | Date of Patent: June 20, 2023 | Assignee: DribbleUp, Inc. | Inventors: Eric Forkosh, Marc Forkosh, Ben Paster
-
Patent number: 11470297
Abstract: A method for automatic selection of viewpoint characteristics and trajectories in volumetric video presentations includes: receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints; identifying a set of desired viewpoint characteristics for a volumetric video traversal of the scene; determining a trajectory through the plurality of video streams that is consistent with the set of desired viewpoint characteristics; rendering a volumetric video traversal that follows the trajectory, wherein the rendering comprises compositing the plurality of video streams; and publishing the volumetric video traversal for viewing on a user endpoint device.
Type: Grant | Filed: May 17, 2021 | Date of Patent: October 11, 2022 | Assignee: AT&T Intellectual Property I, L.P. | Inventors: David Crawford Gibbon, Tan Xu, Zhu Liu, Behzad Shahraray, Eric Zavesky
-
Patent number: 9727784
Abstract: A system for vector extraction comprising a vector extraction engine, stored and operating on a network-connected computing device, that loads raster images from a database stored and operating on a network-connected computing device, identifies features in the raster images, and computes a vector based on the features; and methods for feature and vector extraction.
Type: Grant | Filed: June 3, 2015 | Date of Patent: August 8, 2017 | Assignee: DIGITALGLOBE, INC. | Inventors: Jacek Grodecki, Seth Malitz, Josh Nolting
-
Patent number: 8982184
Abstract: A method for compensating for cross-talk in a 3-D projector-camera system having a controller, including a processor and system memory, and at least two channels, includes the steps of calibrating the projector-camera system, computing cross-talk factors applicable to the projector-camera system, and correcting new image data for cross-talk based upon the computed cross-talk factors. The system is calibrated by sequentially projecting and capturing, with a camera, a calibration image for each channel, to capture cross-talk between the channels. The controller can compute the cross-talk factors based upon the captured calibration images.
Type: Grant | Filed: December 8, 2009 | Date of Patent: March 17, 2015 | Assignee: Hewlett-Packard Development Company, L.P. | Inventors: Niranjan Damera-Venkata, Nelson Liang An Chang
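The correction step above can be sketched for a two-channel system. The abstract does not give the cross-talk model, so a linear leakage model is assumed here (the usual choice): each channel's observed value is its intended value plus a factor times the other channel's value, and the calibration step estimates those factors by projecting a known image on one channel at a time. Correcting new image data is then just inverting the 2x2 system.

```python
def correct_crosstalk(obs_l, obs_r, k_lr, k_rl):
    """Invert an assumed linear leakage model:
         obs_l = L + k_rl * R   (right channel leaks into left)
         obs_r = R + k_lr * L   (left channel leaks into right)
    k_lr and k_rl are the cross-talk factors from calibration.
    Returns the values (L, R) to drive each channel with so that the
    viewer perceives the intended image after leakage."""
    det = 1.0 - k_lr * k_rl
    L = (obs_l - k_rl * obs_r) / det
    R = (obs_r - k_lr * obs_l) / det
    return L, R

# Forward-simulate leakage on one pixel, then undo it:
true_l, true_r, k_lr, k_rl = 0.8, 0.2, 0.10, 0.05
leaked_l = true_l + k_rl * true_r
leaked_r = true_r + k_lr * true_l
rec_l, rec_r = correct_crosstalk(leaked_l, leaked_r, k_lr, k_rl)
```

Recovering the original values from the leaked ones is exactly the "correcting new image data" step; in practice the factors may vary per pixel or per color, which only changes where the 2x2 solve happens.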
-
Patent number: 8976229
Abstract: An image analysis apparatus for processing a 3D video signal comprising successive pairs of images representing different respective views of a scene, to generate an image depth indicator, comprises: a correlator configured to correlate image areas in one of the pair of images with image areas in the other of the pair of images so as to detect displacements of corresponding image areas between the two images; a graphics generator configured to generate a graphical representation of the distribution of the displacements, with respect to a range of possible displacement values, across the pair of images; and a display generator for generating for display the graphical representation in respect of a current pair of images and in respect of a plurality of preceding pairs of images, so as to provide a time-based representation of variations in the distribution of the displacements.
Type: Grant | Filed: April 6, 2012 | Date of Patent: March 10, 2015 | Assignee: Sony Corporation | Inventors: Sarah Elizabeth Witt, Richard Jared Cooper
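The correlator-plus-histogram idea above can be sketched in one dimension: correlate fixed-width blocks of the left row against horizontally shifted positions in the right row, keep the best-matching shift per block, and histogram the shifts. The block matcher here uses sum-of-absolute-differences as a stand-in; the patent's correlator and its graphical representation are not specified at this level of detail.

```python
def best_shift(l_row, r_row, start, width, max_d):
    """Horizontal shift minimising sum-of-absolute-differences
    for the block l_row[start:start+width]."""
    best, best_err = 0, float("inf")
    for d in range(-max_d, max_d + 1):
        if start + d < 0 or start + d + width > len(r_row):
            continue
        err = sum(abs(l_row[start + i] - r_row[start + d + i])
                  for i in range(width))
        if err < best_err:
            best, best_err = d, err
    return best

def disparity_histogram(l_row, r_row, width, max_d):
    """Histogram the winning displacement of each block: a toy version
    of the correlator feeding the graphics generator."""
    hist = {}
    for start in range(0, len(l_row) - width + 1, width):
        d = best_shift(l_row, r_row, start, width, max_d)
        hist[d] = hist.get(d, 0) + 1
    return hist

# Right row is the left row shifted one pixel to the right:
hist = disparity_histogram([0, 0, 5, 9, 5, 0, 0, 0, 0, 0],
                           [0, 0, 0, 5, 9, 5, 0, 0, 0, 0],
                           width=5, max_d=2)
```

Stacking such histograms frame after frame gives the time-based representation the display generator describes: a drift in the distribution's peak over time signals a change in scene depth.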
-
Patent number: 8947503
Abstract: A video processing system may receive a first frame comprising pixel data for a first 3-D view of an image, which may be referred to as first 3-D view pixel data, and receive a second frame comprising pixel data for a second 3-D view of the image, which may be referred to as second 3-D view pixel data. The system may generate a multi-view frame comprising the first 3-D view pixel data and the second 3-D view pixel data. The system may make a decision for performing processing of the image, wherein the decision is generated based on one or both of the first 3-D view pixel data and the second 3-D view pixel data, and may process the multi-view frame based on the decision. The image processing operation may comprise, for example, deinterlacing, filtering, and cadence processing such as 3:2 pulldown.
Type: Grant | Filed: December 8, 2010 | Date of Patent: February 3, 2015 | Assignee: Broadcom Corporation | Inventors: Darren Neuman, Jason Herrick, Christopher Payson, Qinghua Zhao
-
Publication number: 20140104380
Abstract: Techniques are disclosed for rendering images. The techniques include receiving an input image associated with a source space, the input image comprising a plurality of source pixels, and applying an adaptive transformation to a source pixel, where the adaptive transformation maps the source pixel to a target space associated with an output image comprising a plurality of target pixels. The techniques further include determining a target pixel affected by the source pixel based on the adaptive transformation, and writing the transformed source pixel into a location in the output image associated with the target pixel.
Type: Application | Filed: October 17, 2012 | Publication date: April 17, 2014 | Applicant: DISNEY ENTERPRISES, INC. | Inventors: Pierre Greisen, Simon Heinzle, Michael Schaffner, Aljosa Aleksej Andrej Smolic
-
Publication number: 20140022340
Abstract: In one example, a method includes identifying a pixel in an image frame that is a candidate for causing crosstalk between the image frame and a corresponding image frame in a multiview image system. The method further includes, for a pixel identified as a candidate for causing crosstalk, applying crosstalk correction to the pixel, and applying a location-based adjustment to the pixel, wherein the location-based adjustment is based at least in part on which of two or more portions of the image frame the pixel is in.
Type: Application | Filed: July 18, 2012 | Publication date: January 23, 2014 | Applicant: QUALCOMM INCORPORATED | Inventors: Gokce Dane, Vasudev Bhaskaran
-
Publication number: 20140009575
Abstract: A method and apparatus for processing a video data stream. The video data stream is received from a video camera system and comprises a plurality of images of a scene. A plurality of image pairs is selected from the plurality of images; in each image pair, a first image of a first area in the scene overlaps a second image of a second area in the scene. Each image pair is adjusted to form a plurality of adjusted image pairs, which are configured to provide a perception of depth for the scene when presented as a video.
Type: Application | Filed: September 11, 2009 | Publication date: January 9, 2014 | Applicant: The Boeing Company | Inventors: Dewey Rush Houck, II, Andrew T. Perlik, III, Brian Joseph Griglak, William A. Cooney, Michael Gordon Jackson
-
Publication number: 20140009576
Abstract: In one embodiment, the method includes receiving at least one tile of a current frame of video data and determining whether the tile is a static tile or a dynamic tile based on the current frame and a corresponding tile in an earlier frame. Pixels of a static tile (one that has not changed from the earlier frame) are partitioned into at least one bin, the number of bins being greater than the number of color values that were permitted for the corresponding tile in the earlier frame. Pixels of a changed tile (one that has changed from the earlier frame) are partitioned into at least one bin, the number of bins being less than the number of color values that were permitted for the corresponding tile in the earlier frame.
Type: Application | Filed: July 5, 2012 | Publication date: January 9, 2014 | Applicant: ALCATEL-LUCENT USA INC. | Inventors: Ilija Hadzic, Hans Woithe, Martin Carroll
-
Publication number: 20140002591
Abstract: Embodiments of apparatuses, systems, and methods for temporal hole filling are described. Specifically, an embodiment of the present invention may include a depth-based hole filling process that includes a background modeling technique (SGM). Beneficially, in such an embodiment, holes in the synthesized view may be filled effectively and efficiently.
Type: Application | Filed: June 29, 2012 | Publication date: January 2, 2014 | Applicant: Hong Kong Applied Science and Technology Research Institute Co., Ltd. | Inventors: Sun Wenxiu, Oscar C. Au, Lingfeng Xu, Yujun Li, Wei Hu, Lu Wang
-
Publication number: 20130321575
Abstract: A "Dynamic High Definition Bubble Framework" allows local clients to display and navigate free viewpoint video (FVV) of complex multi-resolution and multi-viewpoint scenes while reducing the computational overhead and bandwidth needed for rendering and/or transmitting the FVV. Generally, the FVV is presented to the user as a broad area viewed from some distance away. Then, as the user zooms in or changes viewpoints, one or more areas of the overall scene are provided in higher definition or fidelity. Rather than capturing and providing high definition everywhere (at high computational and bandwidth cost), the framework captures one or more "bubbles", or volumetric regions, in higher definition in locations where it is believed the user will be most interested. This information is provided to the client to allow individual clients to navigate and zoom different regions of the FVV during playback without losing fidelity or resolution in the zoomed areas.
Type: Application | Filed: August 30, 2012 | Publication date: December 5, 2013 | Applicant: MICROSOFT CORPORATION | Inventors: Adam Kirk, Neil Fishman, Don Gillett, Patrick Sweeney, Kanchan Mitra, David Eraker
-
Publication number: 20130242045
Abstract: In a video signal processor, a difference value calculation section calculates a difference value between video signals at two or more different points of view, which have been input to a video signal input section, at each predetermined timing. A compensation value calculation section holds the difference value between the video signals, obtained at the predetermined timing, at each predetermined time interval, and calculates a compensation value with reduced compensation variations along a time axis using the held difference value. A video signal compensation section compensates the video signals using the obtained compensation value. Furthermore, the compensation value calculation section also calculates the compensation value with reduced compensation variations along the time axis based on information at imaging, which has been obtained by an information input section at each predetermined timing.
Type: Application | Filed: November 5, 2012 | Publication date: September 19, 2013 | Applicant: PANASONIC CORPORATION | Inventor: Hisako Chiaki
-
Publication number: 20130215220
Abstract: A method for forming a stereoscopic video of a scene from first and second input digital videos captured using respective first and second digital video cameras, wherein the first and second input digital videos include overlapping scene content and overlapping time durations. The method includes determining camera positions for each frame of the first and second input digital videos, and determining first-eye and second-eye viewpoints for each frame of the stereoscopic video. First-eye and second-eye images are formed for each frame of the stereoscopic video responsive to the corresponding video frames in the first and second input digital videos and the associated camera positions.
Type: Application | Filed: February 21, 2012 | Publication date: August 22, 2013 | Inventors: Sen Wang, Kevin Edward Spaulding
-
Patent number: 8493440
Abstract: A switchable autostereoscopic display device comprises a display panel having an array of display pixels for producing a display, the display pixels being arranged in rows and columns, and an imaging arrangement for directing the output from different pixels to different spatial positions to enable a stereoscopic image to be viewed. The imaging arrangement is electrically switchable between at least three modes comprising a 2D mode and two 3D modes, and comprises an electrically configurable graded index lens array. The display can be switched between a number of modes to enable it to be adapted, or to adapt itself, to the image content to be displayed and/or the display device orientation.
Type: Grant | Filed: December 11, 2006 | Date of Patent: July 23, 2013 | Assignee: Koninklijke Philips N.V. | Inventors: Marcellinus Petrus Carolus Michael Krijn, Gerardus Petrus Karman, Willem Lubertus Ijzerman, Oscar Henrikus Willemsen
-
Publication number: 20130176388
Abstract: The invention relates to three-dimensional video imaging, in which at least a left view and a right view of a moving scene are produced and a map of disparities is produced for all pixels of the successive images of a video sequence.
Type: Application | Filed: February 17, 2012 | Publication date: July 11, 2013 | Inventors: Guillaume Boisson, Paul Kerbiriou, Valter Drazic
-
Publication number: 20130169748
Abstract: A system and method for adjusting the perceived depth of stereoscopic images are provided. The system includes a disparity estimator, a disparity processor, and a warping engine. The disparity estimator is configured to receive a stereoscopic image, estimate disparities in the stereoscopic image, and generate an estimator signal comprising the estimated disparities. The disparity processor is configured to receive the estimator signal from the disparity estimator and a depth control signal that is generated based on a user input, and to generate a processor signal based on the estimator signal and the depth control signal. The warping engine is configured to receive the processor signal and to generate an adjusted stereoscopic image by warping the processor signal based on a model.
Type: Application | Filed: December 30, 2011 | Publication date: July 4, 2013 | Applicant: STMicroelectronics (CANADA), Inc. | Inventor: Eduardo R. Corral-Soto
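The estimator-processor-warper pipeline above reduces, at its simplest, to scaling each pixel's disparity by a user depth control and moving pixels by the result. A one-row, nearest-pixel toy, assuming the function name and the forward-warp scheme (the publication's warping engine operates on whole images and a model, and its interpolation is not specified here):

```python
def warp_row(row, disparities, depth_gain):
    """Forward-warp one image row: each pixel moves horizontally by
    depth_gain * its estimated disparity. depth_gain plays the role of
    the user's depth control signal; 1.0 keeps the perceived depth,
    values above/below exaggerate or flatten it."""
    out = [0] * len(row)
    for x, (value, disp) in enumerate(zip(row, disparities)):
        tx = x + int(round(depth_gain * disp))
        if 0 <= tx < len(out):
            out[tx] = value  # pixels warped off-row are dropped
    return out

warped = warp_row([1, 2, 3, 4], [0, 0, 1, 1], depth_gain=1.0)
```

Note the hole at the vacated position and the pixel lost off the edge: real warping engines spend most of their effort on exactly these occlusion and hole-filling cases.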
-
Publication number: 20130113887
Abstract: An apparatus and method for measuring 3-dimensional (3D) interocular crosstalk is disclosed. A light sensor detects the luminance of a stereoscopic image displayed on a display and outputs a luminance value indicating the detected luminance. A controller calculates 3D interocular crosstalk based on a gray difference and a residual luminance ratio.
Type: Application | Filed: November 9, 2012 | Publication date: May 9, 2013 | Applicant: LG ELECTRONICS INC. | Inventor: LG ELECTRONICS INC.
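For context, the quantity being measured is usually defined as the luminance that leaks to the eye that should see black, relative to the intended signal, with both terms corrected for the display's black level. Whether this matches the publication's gray-difference and residual-luminance-ratio formulation is an assumption; the sketch below shows only the commonly used 3D crosstalk ratio.

```python
def interocular_crosstalk(leak_lum, signal_lum, black_lum):
    """Common 3D crosstalk definition (an assumption, not necessarily
    the publication's exact formula):
        (luminance leaking to the blocked eye - black level)
        / (intended signal luminance - black level)."""
    return (leak_lum - black_lum) / (signal_lum - black_lum)

# Sensor readings in cd/m^2: 12 leaks through, 102 intended, 2 black.
xt = interocular_crosstalk(leak_lum=12.0, signal_lum=102.0, black_lum=2.0)
```

Subtracting the black level matters on real panels: without it, a display with poor native contrast would appear to have crosstalk even with perfect eye separation.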
-
Publication number: 20130083163
Abstract: A method for encoding a multi-view frame in a video encoder is provided that includes computing a depth quality sensitivity measure for a multi-view coding block in the multi-view frame, computing a depth-based perceptual quantization scale for a 2D coding block of the multi-view coding block, wherein the depth-based perceptual quantization scale is based on the depth quality sensitivity measure and a base quantization scale for the 2D frame including the 2D coding block, and encoding the 2D coding block using the depth-based perceptual quantization scale.
Type: Application | Filed: September 28, 2012 | Publication date: April 4, 2013 | Applicant: TEXAS INSTRUMENTS INCORPORATED | Inventor: Texas Instruments Incorporated
-
Patent number: 8411133
Abstract: A mobile terminal and panoramic photographing method for the same are provided. The panoramic photographing method includes displaying a preview image upon selection of a panoramic mode, successively capturing a first partial image and a second partial image in response to input of a shooting start signal, setting a photographing direction through a comparison between the first partial image and the second partial image, and producing a panoramic image in the set photographing direction. As a result, the user does not have to set the photographing direction manually to capture a panoramic image using a mobile terminal.
Type: Grant | Filed: September 16, 2008 | Date of Patent: April 2, 2013 | Assignee: Samsung Electronics Co., Ltd. | Inventors: Nam Jin Kim, Seung Hyun Cho
-
Publication number: 20130070049
Abstract: A system and method for converting two-dimensional video to three-dimensional video includes a processor having an input for receiving two-dimensional image data and an output for outputting three-dimensional image data to a display. The processor is configured to receive two-dimensional image data and to segment a specific object in the two-dimensional image data, based on variations in brightness and sharpness, to identify and locate the specific object. The processor is also configured to adjust the depth value of the specific object over the period of time as the size of the specific object changes in each of the two-dimensional images.
Type: Application | Filed: September 15, 2011 | Publication date: March 21, 2013 | Applicant: BROADCOM CORPORATION | Inventors: Hyeong-Seok Victor Ha, Jason Chui-Hsun Yang
-
Publication number: 20130057644Abstract: Techniques are disclosed for generating autostereoscopic video content. A multiscopic video frame is received that includes a first image and a second image. The first and second images are analyzed to determine a set of image characteristics. A mapping function is determined based on the set of image characteristics. At least a third image is generated based on the mapping function and added to the multiscopic video frame.Type: ApplicationFiled: August 31, 2012Publication date: March 7, 2013Applicant: DISNEY ENTERPRISES, INC.Inventors: Nikolce Stefanoski, Aljoscha Smolic, Manuel Lang, Miquel À. Farré, Alexander Hornung, Pedro Christian Espinosa Fricke, Oliver Wang
-
Publication number: 20130050420Abstract: An exemplary image processing method includes obtaining disparity information, and generating output image data by performing an image processing operation upon input image data according to the disparity information. An exemplary image processing apparatus includes a disparity information acquisition circuit and an image processing circuit. The disparity information acquisition circuit is arranged for obtaining disparity information. The image processing circuit is coupled to the disparity information acquisition circuit, and arranged for generating output image data by processing input image data according to the disparity information.Type: ApplicationFiled: March 18, 2012Publication date: February 28, 2013Inventors: Ding-Yun Chen, Cheng-Tsai Ho, Chi-Cheng Ju
-
Publication number: 20130050412Abstract: A viewpoint position setting unit sets, when a stereoscopic video image is provided that includes a left-eye parallax image and a right-eye parallax image having a predetermined parallax distribution and that is observed from a given viewpoint, a virtual viewpoint for observing a subject from another viewpoint. A parallax image generation unit generates a left-eye and a right-eye parallax image for providing a desired parallax distribution that are obtained when an observation is made from the virtual viewpoint, by shifting the image cut-out position of at least one of the left-eye or the right-eye parallax image. When the shift amount at the image cut-out position changes due to a position change of the virtual viewpoint, the parallax image generation unit generates a parallax image while changing the shift amount in stages from a shift amount obtained before the change to a shift amount obtained after the change.Type: ApplicationFiled: August 8, 2012Publication date: February 28, 2013Applicant: SONY COMPUTER ENTERTAINMENT INC.Inventors: Takayuki Shinohara, Hidehiko Morisada, Aritoki Kawai
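The staged shift-amount change described above can be sketched as a simple linear interpolation over a number of frames; the function name, linear ramp, and stage count are illustrative assumptions:

```python
def staged_shifts(old_shift, new_shift, stages):
    """Interpolate the cut-out shift amount in stages, so the parallax
    change caused by a virtual-viewpoint move is applied gradually
    rather than as one abrupt jump between frames."""
    if stages < 1:
        raise ValueError("need at least one stage")
    step = (new_shift - old_shift) / stages
    return [old_shift + step * i for i in range(1, stages + 1)]

# Moving the cut-out position from 0 to 8 pixels over 4 frames yields
# intermediate shifts of 2, 4, 6 and finally 8 pixels.
ramp = staged_shifts(0, 8, 4)
```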
-
Publication number: 20130050415Abstract: In one embodiment, a method of handling a data frame in a video transmitter device comprises receiving a two-dimensional image frame having a first number of lines and a first number of columns, receiving a depth map associated with the two-dimensional image frame, the depth map having a second number of lines and a second number of columns, scaling down the two-dimensional image frame and the depth map to obtain a second two-dimensional image frame and a second depth map of smaller sizes, assembling the second two-dimensional image frame with the second depth map into a data frame, and transmitting the data frame from a video transmitter device to a video receiver device. In other embodiments, video transmitter and receiver devices are also described.Type: ApplicationFiled: August 30, 2011Publication date: February 28, 2013Applicant: HIMAX TECHNOLOGIES LIMITEDInventor: Tzung-Ren WANG
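The scale-and-assemble step described above can be sketched as a nearest-neighbour downscale followed by side-by-side packing. The side-by-side layout and the resampling method are illustrative assumptions; the abstract does not specify either:

```python
def pack_frame(image, depth, out_w, out_h):
    """Scale a 2D image and its depth map down and pack them side by
    side into one data frame of out_h lines and 2 * out_w columns,
    ready for transmission as a single frame."""
    def scale(src, w, h):
        # Nearest-neighbour downscale of a list-of-rows raster.
        sh, sw = len(src), len(src[0])
        return [[src[r * sh // h][c * sw // w] for c in range(w)]
                for r in range(h)]
    img_s = scale(image, out_w, out_h)
    dep_s = scale(depth, out_w, out_h)
    # Each output line carries the image half followed by the depth half.
    return [img_s[r] + dep_s[r] for r in range(out_h)]
```

The receiver can split each line back into its image and depth halves and upscale both, which is what makes the format transparent to links that only carry ordinary 2D frames.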
-
Publication number: 20130050422Abstract: System and method for video processing. At least one overdrive (OD) look-up table (LUT) is provided, where the at least one OD LUT is dependent on input video levels and at least one parameter indicative of at least one attribute of the system or a user of the system. Video levels for a plurality of pixels for an image are received, as well as the at least one parameter. Overdriven video levels are generated via the at least one OD LUT based on the video levels and the at least one parameter. The overdriven video levels are provided to a display device for display of the image. The reception of video levels and at least one parameter, the generation of overdriven video levels, and the provision of overdriven video levels, may be repeated one or more times in an iterative manner to display a sequence of images.Type: ApplicationFiled: August 29, 2012Publication date: February 28, 2013Inventor: Mark F. Flynn
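The parameter-dependent overdrive lookup described above can be sketched by blending two LUTs with a normalized parameter. Using exactly two tables and linear blending is an illustrative assumption; the abstract only requires that the LUT depend on input levels and at least one system or user attribute:

```python
def overdrive(prev_level, target_level, lut_a, lut_b, blend):
    """Blend two overdrive LUTs by a parameter normalised to [0, 1]
    (e.g. panel temperature or a user viewing mode) and return the
    overdriven drive level for a previous -> target transition.

    lut_x[prev][target] holds the boosted level that makes the pixel
    reach `target_level` within one frame despite slow LC response.
    """
    assert 0.0 <= blend <= 1.0
    a = lut_a[prev_level][target_level]
    b = lut_b[prev_level][target_level]
    return round(a * (1.0 - blend) + b * blend)
```

In a real pipeline this lookup runs per pixel per frame, which is why hardware implementations keep the tables coarse and interpolate between entries.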
-
Publication number: 20130050426Abstract: A method for extending the dynamic range of a depth map by deriving depth information from a synthesized image of a plurality of images captured at different light intensity levels and/or captured over different sensor integration times is described. In some embodiments, an initial image of an environment is captured while the environment is illuminated with light of a first light intensity. One or more subsequent images are subsequently captured while the environment is illuminated with light of one or more different light intensities. The one or more different light intensities may be dynamically configured based on a degree of pixel saturation associated with previously captured images. The initial image and the one or more subsequent images may be synthesized into a synthesized image by applying high dynamic range imaging techniques.Type: ApplicationFiled: August 30, 2011Publication date: February 28, 2013Applicant: MICROSOFT CORPORATIONInventors: Sam M. Sarmast, Donald L. Doolittle
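The saturation-driven intensity adjustment and per-pixel synthesis described above can be sketched on 1-D pixel lists. The 8-bit saturation value, the 5% threshold, the halving factor, and the pick-the-unsaturated-pixel merge are all illustrative assumptions standing in for the HDR techniques the abstract mentions:

```python
SATURATED = 255  # assumed 8-bit sensor ceiling

def next_intensity(current_intensity, pixels, high_frac=0.05, factor=0.5):
    """Lower the illumination for the next capture when too many pixels
    saturated in the previous one; otherwise keep it unchanged."""
    frac = sum(p >= SATURATED for p in pixels) / len(pixels)
    return current_intensity * factor if frac > high_frac else current_intensity

def synthesize(bright, dim, gain):
    """Per-pixel merge: use the brightly lit capture where it is
    unsaturated, otherwise fall back to the dimly lit capture rescaled
    by its exposure gain, extending the usable depth-sensing range."""
    return [b if b < SATURATED else min(SATURATED, d * gain)
            for b, d in zip(bright, dim)]
```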
-
Publication number: 20130047186Abstract: A custom interface depth may be provided. A content stream, such as a three-dimensional television signal, comprising a plurality of video planes may be displayed. In response to receiving a request to adjust a depth of at least one of the video planes, the display depth of the requested video plane may be adjusted relative to at least one other video plane. The depth of a video plane containing a scaled version of the three-dimensional television signal may be adjusted relative to a video plane displaying an electronic program guide.Type: ApplicationFiled: August 18, 2011Publication date: February 21, 2013Applicant: Cisco Technology, Inc.Inventors: James Alan Strothman, James Michael Blackmon
-
Publication number: 20130038688Abstract: Continuous Adjustable Pulfrich Filter Spectacles are provided with lenses with continuously changeable optical densities, so that viewing of 2D movies is optimized for visualization in natural 3D. Method and means are disclosed for the continuous Adjustable Pulfrich Filter Spectacles to perform two independent optimizations to achieve optimized 3Deeps visual effects on 2D movies. First they compute the optical density setting of the lenses for optimal viewing of 2D movies as 3D. Then they continuously render the lenses of the spectacles to these optical densities optimized for characteristics of the electro-optical material from which the lenses of the spectacles are fabricated. The invention works for both 3DTV and 3D Cinema theater viewing.Type: ApplicationFiled: October 1, 2012Publication date: February 14, 2013Inventors: Kenneth Martin JACOBS, Ronald Steven KARPF
-
Publication number: 20130027511Abstract: An onboard environment recognition system is provided that is capable of preventing, with a reduced processing load, erroneous recognition caused by light from the headlights of surrounding vehicles.Type: ApplicationFiled: July 26, 2012Publication date: January 31, 2013Applicant: Hitachi, Ltd.Inventors: Masayuki TAKEMURA, Shoji Muramatsu, Takeshi Shima, Masao Sakata
-
Publication number: 20130027513Abstract: A system for adjusting the perceived depth of 3D content in response to a viewer input control signal. The system comprises: 1) a content source providing an input left stereoscopic image and an input right stereoscopic image; 2) a disparity estimator to receive the input left and right stereoscopic images, detect disparities between the input left and right stereoscopic images, and generate a disparities array; and 3) processing circuitry to fill in occlusion areas associated with the disparities array and apply a scale factor to the detected disparities to thereby generate a scaled disparities array. The system further comprises a warping engine to receive the scaled disparities array and generate an output left stereoscopic image and an output right stereoscopic image. The output left and right stereoscopic images have a different perceived depth than the input left and right stereoscopic images.Type: ApplicationFiled: July 24, 2012Publication date: January 31, 2013Applicant: STMicroelectronics (Canada), Inc.Inventor: Eduardo R. Corral-Soto
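The scale-then-warp pipeline described above can be sketched on one scanline. The integer forward warp and last-writer-wins conflict handling are illustrative simplifications; a real warping engine also fills the occlusion holes, as the abstract notes:

```python
def scale_disparities(disparities, scale):
    """Apply a viewer-chosen scale factor to a disparity array; values
    near 0 (the screen plane) stay put while larger disparities move
    proportionally, changing the perceived depth of the scene."""
    return [[d * scale for d in row] for row in disparities]

def warp_row(row, disp_row):
    """Naive forward warp of one image row by per-pixel disparity.

    Pixels are shifted by their (rounded) disparity; collisions are
    resolved last-writer-wins, and unfilled positions remain None --
    these are the occlusion holes a production system must inpaint.
    """
    out = [None] * len(row)
    for x, (p, d) in enumerate(zip(row, disp_row)):
        t = x + int(round(d))
        if 0 <= t < len(out):
            out[t] = p
    return out
```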
-
Publication number: 20130027514Abstract: A method of generating a broadcasting bitstream for a digital caption broadcast, the method including: receiving video data in which a video including at least one of a two-dimensional (2D) video and a three-dimensional (3D) video is encoded; determining caption data for reproduction in conjunction with the video, and 3D caption converting information including information relating to a converting speed of an offset for reproducing the caption data as a 3D caption; and outputting a bitstream for a digital broadcast by multiplexing the received video data, the determined caption data, and the determined 3D caption converting information.Type: ApplicationFiled: April 14, 2011Publication date: January 31, 2013Applicant: SAMSUNG ELECTRONICS CO., LTD.Inventors: Bong-je Cho, Yong-tae Kim, Yong-seok Jang, Dae-jong Lee, Jae-seung Kim, Ju-hee Seo
-
Publication number: 20130010069Abstract: From a bit stream, at least the following are decoded: a stereoscopic image of first and second views; a maximum positive disparity between the first and second views; and a minimum negative disparity between the first and second views. In response to the maximum positive disparity violating a limit on positive disparity, a convergence plane of the stereoscopic image is adjusted to comply with the limit on positive disparity. In response to the minimum negative disparity violating a limit on negative disparity, the convergence plane is adjusted to comply with the limit on negative disparity.Type: ApplicationFiled: June 25, 2012Publication date: January 10, 2013Applicant: TEXAS INSTRUMENTS INCORPORATEDInventors: Veeramanikandan Raju, Wei Hong, Minhua Zhou
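The convergence-plane adjustment described above can be sketched as computing one uniform horizontal shift, since shifting the two views relative to each other adds a constant to every disparity. The function name and the raise-on-impossible behavior are illustrative assumptions:

```python
def convergence_shift(min_disp, max_disp, neg_limit, pos_limit):
    """Compute a uniform shift, added to every disparity by cropping or
    shifting the decoded views, that brings the disparity range
    [min_disp, max_disp] inside the display's comfort limits.

    Returns 0 when the stream already complies; raises when the range
    itself is wider than the allowed window, which no single shift of
    the convergence plane can fix.
    """
    if max_disp - min_disp > pos_limit - neg_limit:
        raise ValueError("disparity range exceeds what shifting can fix")
    if max_disp > pos_limit:
        return pos_limit - max_disp   # pull the convergence plane back
    if min_disp < neg_limit:
        return neg_limit - min_disp   # push the convergence plane forward
    return 0
```

Decoding the maximum and minimum disparities directly from the bit stream, as the abstract describes, is what lets the receiver make this check without scanning the images themselves.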
-
Publication number: 20130010062Abstract: A method and system for preparing subtitles for use in a stereoscopic presentation are described. The method allows a subtitle to be displayed without being truncated or masked by comparing the subtitle's initial footprint with an image display area. If any portion of the initial footprint lies outside the image display area, the subtitle is adjusted according to adjustment information, which includes at least one of: a scale factor, a translation amount and a disparity change, so that the adjusted subtitle lies completely within the image display area. Furthermore, the disparity of the subtitle can be adjusted by taking into account the disparities of one or more objects in an underlying image to be displayed with the subtitle.Type: ApplicationFiled: April 1, 2011Publication date: January 10, 2013Inventor: William Gibbens Redmann
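The footprint check described above can be sketched for the translation case alone; handling scale factors and disparity changes, which the abstract also allows, is omitted, and the bounding-box representation is an illustrative assumption:

```python
def fit_subtitle(footprint, display):
    """Translate a subtitle's bounding box (x0, y0, x1, y1) so it lies
    entirely inside the display area, preventing truncation or masking.

    Assumes the subtitle fits within the display; a full implementation
    would also consider a scale factor and a disparity change.
    """
    x0, y0, x1, y1 = footprint
    dx0, dy0, dx1, dy1 = display
    dx = max(dx0 - x0, 0) + min(dx1 - x1, 0)
    dy = max(dy0 - y0, 0) + min(dy1 - y1, 0)
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

# A subtitle hanging 10 px off the left edge is nudged right by 10 px:
adjusted = fit_subtitle((-10, 0, 90, 20), (0, 0, 100, 50))
```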
-
Publication number: 20130010059Abstract: An apparatus and a method for performing a parallax control of a left image and a right image applied to display of a stereoscopic image are provided. The apparatus includes a left image transforming unit for generating a left image-transformed image by changing a phase of an image signal of a left image which is to be presented to a left eye in a right direction or a left direction and a right image transforming unit for generating a right image-transformed image by changing a phase of an image signal of a right image which is to be presented to a right eye in the left direction or the right direction. For example, each image transforming unit generates a differential signal by applying, to an input image, differential filter coefficients of coefficient series of opposite characteristics, and generates a parallax-controlled transformed signal using combining processing in which the differential signal or a non-linear signal of this differential signal is added to an original image signal.Type: ApplicationFiled: January 5, 2011Publication date: January 10, 2013Applicant: Sony CorporationInventor: Seiji Kobayashi
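The differential-signal idea described above can be sketched with a first-order approximation: adding a scaled derivative to a scanline approximates a small horizontal phase shift, since f(x + k) ≈ f(x) + k·f'(x). Applying opposite-sign k to the left-eye and right-eye signals then yields opposite shifts, i.e. parallax. The central-difference filter and edge clamping are illustrative assumptions, and the abstract's non-linear variant is omitted:

```python
def phase_shift(signal, k):
    """Approximate a small horizontal phase shift of a 1-D scanline by
    adding a scaled central-difference derivative to the original
    signal; use +k for one eye and -k for the other to create parallax."""
    n = len(signal)
    out = []
    for x in range(n):
        left = signal[max(x - 1, 0)]     # clamp at the row edges
        right = signal[min(x + 1, n - 1)]
        out.append(signal[x] + k * (right - left) / 2.0)
    return out
```

On a linear ramp the interior samples shift by exactly k, which matches the first-order approximation; real images deviate where the derivative varies quickly.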