FORMATTING 3D CONTENT FOR LOW FRAME-RATE DISPLAYS

- CIRCA3D, LLC

Converting three-dimensional (3D) video content for use by a variety of display devices of different capabilities involves decoding a 3D video signal into two frame buffers. One implementation includes receiving an input 3D video signal that comprises video frames having image data for viewing by a first eye and image data for viewing by a second eye. The implementation includes generating first and second frame buffers from the input 3D signal. The first frame buffer can store a complete first image for viewing by the first eye, while the second frame buffer can store a complete second image for viewing by the second eye. The implementation can further comprise encoding the complete first image into a first video frame and encoding the complete second image into a second video frame of an output 3D video signal.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a U.S. National Stage Application corresponding to PCT Patent Application No. PCT/US2011/027175, filed Mar. 4, 2011, which claims priority to U.S. Provisional Application No. 61/416,708, filed Nov. 23, 2010, entitled “3D VIDEO CONVERTER.” The present application is also a continuation-in-part of: PCT Patent Application No. PCT/US2011/025262, filed Feb. 17, 2011, entitled “BLANKING INTER-FRAME TRANSITIONS OF A 3D SIGNAL;” PCT Patent Application No. PCT/US2011/027933, filed Mar. 10, 2011, entitled “DISPLAYING 3D CONTENT ON LOW FRAME-RATE DISPLAYS;” PCT Patent Application No. PCT/US2011/027981, filed Mar. 10, 2011, entitled “SHUTTERING THE DISPLAY OF INTER-FRAME TRANSITIONS;” PCT Patent Application No. PCT/US2011/032549, filed Apr. 14, 2011, entitled “ADAPTIVE 3-D SHUTTERING DEVICES;” and PCT Patent Application No. PCT/US2011/031115, filed Apr. 4, 2011, entitled “DEVICE FOR DISPLAYING 3D CONTENT ON LOW FRAME-RATE DISPLAYS.” The entire content of each of the foregoing applications is incorporated by reference herein.

BACKGROUND

1. The Field of the Invention

This invention relates to systems, methods, and computer program products related to conversion and presentation of three-dimensional video content.

2. Background and Relevant Art

Three-dimensional (3D) display technology involves presenting two-dimensional images in such a manner that the images appear to the human brain to be three-dimensional. The process typically involves presenting “left” image data to the left eye, and “right” image data to the right eye. When received, the brain perceives this data as a 3D image. 3D display technology generally incorporates the use of a filtering or blanking device, such as glasses, which filter displayed image data to the correct eye. Filtering devices can be passive, meaning that image data is filtered passively (e.g., by color code or by polarization), or active, meaning that the image data is filtered actively (e.g., by shuttering).

Traditional display devices, such as computer monitors, television sets, and portable display devices, have been either incapable of producing suitable image data for 3D viewing, or have produced an inferior 3D viewing experience using known devices and processes. For instance, viewing 3D content on traditional display devices generally results in blurry images and/or images that have “ghosting” effects, both of which may cause headache, discomfort, and even nausea in the viewer. This is true even for display devices that incorporate more recent display technologies, such as Liquid Crystal Display (LCD), Plasma, Light Emitting Diode (LED), and Organic Light Emitting Diode (OLED) displays.

Recently, 3D display devices specifically designed for displaying 3D content have become increasingly popular. These 3D display devices are generally used in connection with active filtering devices (e.g., shuttering glasses) to produce 3D image quality not previously available from traditional display devices. These 3D display devices, however, are relatively expensive when compared to traditional display devices.

Furthermore, producers of 3D content and manufacturers of 3D display devices have developed a variety of 3D encoding formats for encoding 3D image data. Unfortunately, particular 3D display devices use corresponding specific 3D encoding formats. As a result, traditional display devices generally are not compatible with 3D content produced for 3D display devices. Thus, consumers are faced with purchasing expensive 3D display devices, even when they may already have traditional display devices available. Furthermore, if a consumer decides to purchase a specialized 3D display device, they are limited to displaying 3D content specifically designed for that particular 3D display device. Accordingly, there are a number of considerations to be made regarding the display of 3D content.

BRIEF SUMMARY OF THE INVENTION

One or more implementations of the present invention provide systems, methods, and computer program products configured to enable users to view three-dimensional (3D) content encoded in a variety of 3D encoding formats on a broad range of display devices. One or more implementations can convert received 3D input content into 3D output content that is customized for a destination display device. Even when viewed on traditional display devices, the 3D output content can provide a visual quality that can match or even exceed the visual quality of 3D content viewed on specialized 3D display devices. Accordingly, implementations of the present invention can alleviate or eliminate the need to purchase a 3D display device by allowing a viewer to view 3D content encoded in virtually any 3D encoding format on traditional display devices.

For example, one or more implementations can include a method of formatting three-dimensional video content for display on a low-frame rate display device by decoding three-dimensional video content into two frame buffers. The method can involve receiving an input 3D video signal. The input 3D video signal can include video frames having image data for viewing by a first eye and image data for viewing by a second eye. The method can also involve generating a first frame buffer from the input three-dimensional video signal that includes a complete first image for viewing by the first eye. In addition, the method can involve generating a second frame buffer from the input three-dimensional video signal that includes a complete second image for viewing by the second eye. Furthermore, the method can involve generating an output three-dimensional video signal by encoding the complete first image from the first frame buffer into a first video frame and encoding the complete second image from the second frame buffer into a second video frame.

In addition, a method of decoding a video frame into two frame buffers to generate a three-dimensional video signal capable of display on a low-frame rate display device can involve receiving an input video frame of a 3D video signal. The input video frame including a first image for viewing by a first eye and second image for viewing by a second eye. Additionally, the method can involve decoding the first image of the input video frame into a first frame buffer. The method can also involve decoding the second image of the input video frame into a second frame buffer. Furthermore, the method can involve encoding a first output video frame from the first frame buffer. The first output video frame can include image data from only the first image. In addition, the method can involve encoding a second output video frame from the second frame buffer. The second output video frame can include image data from only the second image.

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

Additional features and advantages of exemplary implementations of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary implementations. The features and advantages of such implementations may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary implementations as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 illustrates a schematic diagram of a conversion system for decoding three-dimensional content in accordance with one or more implementations of the present invention;

FIG. 2 illustrates flow diagrams demonstrating decoding various three-dimensional encoding formats into first and second frame buffers in accordance with one or more implementations of the present invention;

FIG. 3 illustrates a schematic diagram of a system for use in formatting and displaying a three-dimensional video signal in accordance with one or more implementations of the present invention;

FIG. 4 illustrates a schematic diagram of the video processing device in accordance with one or more implementations of the present invention;

FIG. 5 illustrates a schematic diagram of shuttering the display of 3D video content in response to a blanking signal in accordance with one or more implementations of the present invention;

FIG. 6 illustrates a flowchart of a series of acts in a method in accordance with an implementation of the present invention of formatting three-dimensional video content for display on a low-frame rate display device by decoding three-dimensional video content into two frame buffers; and

FIG. 7 illustrates a flowchart of a series of acts in a method in accordance with an implementation of the present invention of decoding a video frame into two frame buffers to generate a three-dimensional video signal capable of display on a low-frame rate display device.

DETAILED DESCRIPTION

One or more implementations of the present invention provide systems, methods, and computer program products configured to enable users to view three-dimensional (3D) content encoded in a variety of 3D encoding formats on a broad range of display devices. One or more implementations can convert received 3D input content into 3D output content that is customized for a destination display device. Even when viewed on traditional display devices, the 3D output content can provide a visual quality that can match or even exceed the visual quality of 3D content viewed on specialized 3D display devices. Accordingly, implementations of the present invention can alleviate or eliminate the need to purchase a 3D display device by allowing a viewer to view 3D content encoded in virtually any 3D encoding format on traditional display devices.

Specialized 3D display devices attempt to provide an enhanced 3D viewing experience by modifying physical characteristics of the display device and by employing customized 3D encoding formats. For example, specialized 3D display devices can include physical modifications that increase the frame rate (i.e., the number of unique video frames the display device can draw in a given amount of time) compared to traditional or low-frame rate display devices. Additionally, specialized 3D display devices can also reduce the frame overlap interval (i.e., the period of time that elapses when transitioning between two frames) compared to traditional or low-frame rate display devices. When displaying customized 3D encoding formats, these physical enhancements can reduce potential side-effects of 3D viewing (e.g., motion blurring or ghosting).

One or more implementations of the present invention provide for converting incoming 3D encoding formats, such as those used by specialized 3D display devices, into outgoing 3D content tailored for display by a particular display device. One or more implementations can include generating two frame buffers from incoming 3D content. Each frame buffer can include complete image data for a corresponding eye. Additionally, one or more implementations can also include generating output 3D content from these frame buffers, based on physical characteristics of the destination display device. Thus, one or more implementations of the present invention can allow viewing of 3D content on a broad range of display devices, including display devices that would otherwise be incompatible with the 3D content, because the output 3D content is tailored to the destination display device.

Additionally, one or more implementations can include generating a blanking signal in conjunction with the output 3D video signal that blanks some or all of the frame overlap interval of the destination display device from the user's view. Blanking the frame overlap interval can reduce or eliminate common side-effects of 3D viewing (e.g., motion blurring or ghosting) without modifying the destination display device (e.g., increasing the frame rate). Thus, one or more implementations allow for viewing of 3D content on devices not specifically designed for 3D content display, such as devices with lower frame rates and longer frame overlap intervals.

FIG. 1, for example, illustrates a schematic diagram of a conversion system 100 for decoding 3D content from a variety of 3D encoding formats in accordance with one or more implementations of the present invention. FIG. 1 illustrates that the conversion system 100 can receive input 3D content 102 and can generate output 3D content 104. As illustrated, input 3D content 102 can comprise one or more video frames 114a, 114b that encode 3D image data using one or more of a variety of 3D encoding formats. As discussed more fully in connection with FIG. 2, the input 3D content 102 can use the one or more video frames 114a, 114b to encode left eye image data and right eye image data using any of a variety of techniques.

FIG. 1 also illustrates that the conversion system 100 can include a decoder 106 and frame buffers 110, 112. The decoder 106 can detect the 3D encoding format of input 3D content 102. The decoder 106 can then decode left eye image data into one frame buffer (e.g., frame buffer 110) and decode right eye image data into the other buffer (e.g., frame buffer 112). As illustrated more fully in FIG. 2, decoding can involve using corresponding frame buffers to construct complete left eye image data and complete right eye image data. For instance, decoding may involve decoding a plurality of video frames to construct a complete image, which is stored in a frame buffer. Of course, decoding may involve decoding one video frame to construct complete image data for each frame buffer, or decoding a single video frame to construct complete image data for both frame buffers.

Additionally, FIG. 1 illustrates that the conversion system 100 can include an encoder 108 that generates output 3D content 104. After the decoder 106 constructs complete left and right eye image data in corresponding frame buffers 110, 112, the encoder 108 can encode this image data into output 3D content 104. In one or more implementations, the encoder 108 encodes the output 3D content 104 in a “flip frame” encoding format, which comprises alternating left image data video frames 116 and right image data video frames 118.
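
By way of illustration only, the following Python sketch models this decode-and-encode flow. The function names, the `decode_fn` parameter, and the use of arrays as frame buffers are assumptions made for the example, not elements of the disclosed implementations:

```python
def convert(input_stream, decode_fn):
    """Decode each unit of the input 3D signal into two frame buffers and
    encode an alternating ("flip frame") output sequence: L, R, L, R, ...

    decode_fn is a format-specific decoder (examples appear with FIG. 2
    below) that returns a (left_image, right_image) pair.
    """
    output_frames = []
    for unit in input_stream:                  # one or more input video frames
        left_buf, right_buf = decode_fn(unit)  # fill frame buffers 110 and 112
        output_frames.append(left_buf)         # left image data video frame 116
        output_frames.append(right_buf)        # right image data video frame 118
    return output_frames
```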

The conversion system 100 can also include a detection module 120. The detection module 120 can identify or determine the display and/or physical characteristics of the display device to which the output 3D content 104 is to be transmitted. For example, the detection module 120 can receive input from a user identifying the display and/or physical characteristics of the display device. Alternatively, upon connection to the display device, the detection module 120 can determine the display and/or physical characteristics of the display device. Based upon the determined display and/or physical characteristics of the display device, the encoder 108 and/or the decoder 106 can tailor the output 3D content 104 specifically for the display device.

For instance, in one or more implementations, encoder 108 can adjust the frame size (e.g., number of vertical lines of resolution) of frames in output 3D content 104 for view on the destination display device (and based on the input from the detection module 120). Additionally or alternatively, encoder 108 can generate progressive or interlaced video frames in output 3D content 104. Progressive video frames include complete image data for the encoded image, while interlaced video frames contain only partial image data for the encoded image (e.g., odd lines or even lines). Furthermore, encoder 108 can also generate a frame rate customized for view on the destination display device, which may involve upscaling or downscaling the frame rate of input 3D content 102.

In one or more implementations, upscaling the frame rate can involve introducing redundant frames into the output 3D content 104. For instance, the encoder 108 can generate output 3D content 104 that includes a left image data video frame 116 followed by a right image data video frame 118. The encoder 108 can then repeat this sequence by sending the same left image data video frame 116 followed by the same right image data video frame 118. One will appreciate in light of the disclosure herein that repeating the same image data can double the frame rate. For example, if input 3D content 102 has a frame rate of 60 Hz (i.e., decoder 106 decodes 60 complete frames of image data per second), then introducing the same image data twice can result in output 3D content 104 having an upscaled frame rate of 120 Hz.
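
A minimal sketch of this doubling, assuming the left/right image pairs have already been decoded into frame buffers (the function name and the `repeat` parameter are illustrative):

```python
def upscale_frame_rate(pairs, repeat=2):
    """Repeat each decoded (left, right) pair `repeat` times. Sixty decoded
    pairs per second with repeat=2 yields a 120 Hz flip-frame output:
    L1, R1, L1, R1, L2, R2, L2, R2, ...
    """
    output = []
    for left, right in pairs:
        for _ in range(repeat):
            output.extend([left, right])
    return output
```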

Downscaling, on the other hand, can involve reducing the number of frames in output 3D content 104. This may be useful when displaying output 3D content 104 on older display devices that may not be optimal for, or capable of, displaying higher frame rates. Downscaling can involve omitting some frames of left and right image data stored in frame buffers 110 and 112. Downscaling can also involve detecting differences between sequential video frames and generating new frames that capture these differences. The system 100 can generate the new frames at a lower rate than the original frames, thereby reducing the frame rate.
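
The frame-omission variant of downscaling might look as follows; this simple decimation is only a sketch, and the difference-detection approach described above would require motion analysis beyond it:

```python
def downscale_frame_rate(pairs, keep_every=2):
    """Keep one (left, right) pair out of every `keep_every`, halving the
    output frame rate when keep_every=2; the remaining pairs are dropped."""
    return [pair for index, pair in enumerate(pairs) if index % keep_every == 0]
```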

One will appreciate in light of the disclosure herein that the decoder 106 and the encoder 108 can comprise hardware, software, or a combination thereof. In one or more implementations the decoder 106 and/or the encoder 108 can comprise one or more constituent hardware decoders and/or encoders, which each decode and/or encode various 3D encoding formats. In one or more other implementations, the decoder 106 and/or the encoder 108 can implement software-based decoders and/or encoders. For example, the decoder 106 and/or the encoder 108 can include one or more Field Programmable Gate Arrays (FPGAs).

The decoder 106 and the encoder 108 can each be capable of encoding and/or decoding both analog and digital content. Thus, the conversion system 100 can convert input 3D content 102 that is digital into output 3D content 104 that is analog. Alternatively, the conversion system 100 can convert input 3D content 102 that is analog into output 3D content 104 that is digital. Of course, the conversion system 100 can also convert digital content to digital content, or analog content to analog content.

Turning now to FIG. 2, a plurality of flow diagrams illustrating 3D encoding formats and resulting frame buffer data in accordance with one or more implementations is shown. In light of the disclosure herein, one will appreciate that the illustrated 3D encoding formats represent but a few of the available 3D encoding formats, and do not limit the 3D encoding formats that the system 100 can decode. In view of these examples, one of ordinary skill in the art could ascertain how to decode other 3D encoding formats into separate frame buffers. Furthermore, in one or more implementations, multiple 3D encoding formats and decoding techniques can be combined.

In one or more implementations, the input 3D content 102 (FIG. 1) encodes image data as a plurality of interlaced frames. For example, FIG. 2 illustrates that the one or more video frames 114 (FIG. 1) can include interlaced frames 114a, 114b. Each interlaced frame 114a, 114b can contain only part of the image data for a complete image. For example, one frame 114a can encode odd numbered lines of image data, while another frame 114b can encode even numbered lines of image data. Decoding these frames 114a, 114b can involve decoding each of the interlaced frames and combining the decoded data into a single corresponding frame buffer to construct a complete image. In the illustrated case, decoding involves decoding two interlaced frames 114a, 114b into one frame buffer 110 to construct image data for one eye, and decoding two interlaced frames 114c, 114d into another frame buffer 112 to construct image data for the other eye. Of course, other combinations of interlacing techniques and frame sequences are possible.
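
Reconstructing a complete image from two interlaced fields can be sketched with array slicing. The numpy arrays and the field ordering are assumptions for illustration:

```python
import numpy as np

def weave_fields(even_field, odd_field):
    """Combine two interlaced fields into one complete image in a frame
    buffer. even_field holds lines 0, 2, 4, ... and odd_field holds lines
    1, 3, 5, ...; both have shape (height // 2, width)."""
    height = even_field.shape[0] + odd_field.shape[0]
    frame_buffer = np.empty((height, even_field.shape[1]), dtype=even_field.dtype)
    frame_buffer[0::2] = even_field
    frame_buffer[1::2] = odd_field
    return frame_buffer

# Per the example above (field order assumed), frame buffer 110 could be
# weave_fields(frame_114b, frame_114a), and frame buffer 112 could be
# built the same way from frames 114c and 114d.
```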

Flow diagrams 204 and 206 illustrate 3D encoding formats utilizing spatial compression, which encodes a plurality of images within a single video frame. For example, using “over under” spatial compression in frame 114e, image data for each eye is encoded using the upper and lower halves of the frame. Similarly, using “side by side” spatial compression in frame 114f, image data for each eye is encoded using the left and right halves of the frame 114f. In these contexts, decoding can include extracting image data for both eyes from a single frame, and storing image data for each single eye in corresponding frame buffers 110, 112. One will appreciate that the decoding process can involve resizing and/or altering the aspect ratio of each image. Additionally, the decoding process can involve constructing an image with lower resolution than the spatial frame, etc.
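
The two spatial decompression cases reduce to array slicing. This sketch assumes numpy-style image arrays and ignores the resizing and aspect-ratio adjustments mentioned above:

```python
def split_over_under(frame):
    """Over-under decompression (e.g., frame 114e): the top half of the
    frame goes to one frame buffer, the bottom half to the other."""
    half = frame.shape[0] // 2
    return frame[:half], frame[half:]

def split_side_by_side(frame):
    """Side-by-side decompression (e.g., frame 114f): the left half of the
    frame goes to one frame buffer, the right half to the other."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]
```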

A third flow diagram illustrates yet another 3D encoding format, one that utilizes interleaving (e.g., frame 114g). Similar to “over under” and “side by side” encoding, interleaving incorporates image data for both eyes in a single frame. Instead of separating image data on different halves of the frame, however, interleaving intermixes the image data. Interleaving can include interleaving odd lines of one image with even lines of another image, interleaving odd columns of one image with even columns of another image, interleaving opposite checkerboard patterns, etc. In this context, decoding can also include extracting image data for both eyes from a single frame, and storing image data for each single eye in corresponding frame buffers 110, 112.
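
A line-interleaved or column-interleaved frame can be separated the same way, again assuming numpy-style arrays; a checkerboard pattern would require a per-pixel mask rather than the simple slicing shown here:

```python
def split_line_interleaved(frame):
    """Separate a line-interleaved frame: even lines belong to one eye,
    odd lines to the other. Each result has half the vertical resolution
    and may be rescaled afterward, as noted above."""
    return frame[0::2], frame[1::2]

def split_column_interleaved(frame):
    """Column-interleaved variant: even columns vs. odd columns."""
    return frame[:, 0::2], frame[:, 1::2]
```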

Thus, the conversion system 100 can receive input 3D content 102 that uses a variety of encoding techniques (e.g., spatial compression, interleaving, etc.) and generate output 3D content 104 that is capable of being displayed by a traditional display device (i.e., a low frame rate device). As illustrated in FIG. 2, in one or more implementations the conversion system 100 can receive a single video frame (e.g., video frame 114e, 114f, or 114g) that includes left and right image data. The conversion system 100 can then convert that combined frame 114e, 114f, 114g to separate video frames 116, 118, each encoding only left or only right image data. Such a conversion technique can enable the viewing of the output 3D content 104 on traditional display devices not capable of parsing or displaying the combined frame 114e, 114f, 114g. Furthermore, by combining these conversion techniques with an “inter-frame blanking” signal (discussed more fully hereinafter), the 3D content is viewable even on traditional display devices having low frame rates.

Turning now to FIG. 3, a schematic diagram of a system 300 for use in decoding a three-dimensional video signal into two frame buffers, in accordance with one or more implementations is shown. FIG. 3 illustrates that the system 300 can include a video processing device 302, one or more blanking devices 304, and a display device 306. These devices can be separate or combined. For instance, in one or more implementations the video processing device 302 and the display device 306 are separate units, while in one or more other implementations these devices form a single unit.

In one or more implementations, the video processing device 302 receives a 3D video signal from a media device. The media device can comprise any number of devices capable of transmitting a 3D video signal to the video processing device 302. For example, FIG. 3 illustrates that the media device can comprise a streaming source 308 (e.g., a satellite box, cable box, internet), a gaming device (e.g., XBOX 310, PLAYSTATION 316), a player device (e.g., Blu-Ray player 312, DVD player 314), etc. In one or more implementations, the media device can receive 3D video content from optical media, such as a DVD or Blu-Ray disc 318. In alternative implementations, the media device can receive the 3D video content via the Internet, a cable provider, or a satellite dish.

Of course, the video processing device 302 can, itself, include one or more media devices. Thus, the video processing device 302 can be directly connected to a satellite dish, a cable provider, the Internet, etc. Additionally or alternatively, the video processing device 302 can be a player device that receives 3D video content directly from optical media. In one or more implementations, the video processing device 302 can also comprise a gaming device.

In any event, the video processing device 302 receives input 3D content 102 and converts the input 3D content 102 into output 3D content 104 suitable for the display device 306. The video processing device 302 also transmits the output 3D content 104 to the display device 306 for display to one or more users. Prior to or concurrently with sending the output 3D content 104, the video processing device 302 can also transmit a blanking signal to the blanking devices 304. As instructed by the blanking signal, the blanking devices 304 can synchronously blank portions of the displayed output 3D content 104 from view to create the illusion that two-dimensional images are 3D. The blanking signal is discussed more fully herein below in connection with FIG. 5.

The video processing device 302 can communicate with the display device 306 and the blanking device(s) 304 in any appropriate manner. For instance, an appropriate wired mechanism, such as High Definition Media Interface (HDMI), component, composite, coaxial, network, optical, and the like, can couple the video processing device 302 and the display device 306 together. Additionally, or alternatively, an appropriate wireless mechanism, such as BLUETOOTH, Wi-Fi, etc., can couple the video processing device 302 and the display device 306 together. Likewise, any appropriate wired or wireless mechanism (e.g., BLUETOOTH, infrared, etc.) can couple the video processing device 302 and the blanking device(s) 304 together.

One will appreciate that the video processing device 302 can generate any appropriate output signal comprising output 3D content 104. For example, when the video processing device 302 and the display device 306 are coupled via a digital mechanism (e.g., HDMI), the video processing device 302 can generate a digital signal that includes the output 3D content 104. On the other hand, when the video processing device 302 and the display device 306 are coupled via an analog mechanism (e.g., component, composite or coaxial), the video processing device 302 can generate an analog signal that includes the output 3D content 104.

One will appreciate in view of the disclosure herein that the video processing device 302 can take any of a variety of forms. For example, the video processing device 302 may be a set-top box or other customized computing system. The video processing device 302 may also be a general purpose computing system (e.g., a laptop computer, a desktop computer, a tablet computer, etc.). Alternatively, the video processing device 302 can be a special purpose computing system (e.g., a gaming console, a set-top box, etc.) that has been adapted to implement one or more disclosed features.

The display device 306 can be any one of a broad range of display devices that incorporate a variety of display technologies, both current and future (e.g., Cathode Ray, Plasma, LCD, LED, OLED). Furthermore, the display device 306 can take any of a number of forms, such as a television set, a computer display (e.g., desktop computer monitor, laptop computer display, tablet computer display), a handheld display (e.g., cellular telephone, PDA, handheld gaming device, handheld multimedia device), or any other appropriate form. While the display device 306 can be a display device designed specifically for displaying 3D content, display device 306 can also be a more traditional display device.

The blanking device(s) 304 can be any blanking device(s) configured to interoperate with video processing device 302 and to respond to one or more blanking instructions received via a blanking signal. In one or more implementations, the blanking device(s) 304 comprise shuttering glasses that include one or more shuttering components that selectively block a user's view of the display device 306. In one or more implementations, the blanking device(s) 304 are capable of selectively blanking a left eye view, a right eye view, and a view from both eyes. The blanking device(s) 304 can also refrain from blanking any part of the user's view. When used in connection with an appropriate blanking signal synchronized with displayed 3D content, the blanking device(s) 304 can provide the illusion that two-dimensional content is 3D.

In one or more implementations, the blanking device(s) 304 comprise shuttering components that include one or more liquid crystal layers. The liquid crystal layers can have the property of becoming opaque (or substantially opaque) when voltage is applied (or, alternatively, when voltage is removed). Otherwise, the liquid crystal layers can have the property of being transparent (or substantially transparent) when voltage is removed (or, alternatively, when voltage is applied). Thus, the blanking device(s) 304 can apply or remove voltage from the shuttering components to block the user's view, as instructed by the blanking signal.

FIG. 4 illustrates a schematic diagram of the video processing device 302 in accordance with one or more implementations. As illustrated, video processing device 302 can include a plurality of constituent components, including a video receiver component 402, a video transmitter component 404, a processing component 406, and a blanking signal transmitter component 408. Of course, the video processing device 302 can include any number of additional components. The video processing device 302 can also include fewer components than those illustrated.

The video receiver component 402 can receive a 3D video signal from any appropriate source, such as a media device. The video transmitter component 404 can send the 3D video signal to the display device 306, either in its original form or in a modified form. One will appreciate that these components can be combined as a single component, or can even be eliminated altogether (e.g., when the video processing device 302 is integrated with the display device 306). The video receiver component 402 can receive a plurality of 3D video signal formats, and the video transmitter component 404 can transmit a plurality of 3D video signal formats, including a universal 3D video signal format.

The processing component 406 can process the received input 3D content. Processing component 406 can include any number of constituent components. In one or more implementations, for example, the processing component 406 comprises conversion system 100. Thus, the processing component 406 can include the decoder 106, the encoder 108, and the frame buffers 110, 112. Using conversion system 100, the processing component 406 can convert the received input 3D content into output 3D content that is customized for the display device 306.

In one or more implementations, converting the received 3D video signal into the output 3D content can include decoding (e.g., with the decoder 106) the received 3D video signal into two frame buffers (e.g., frame buffers 110, 112), in the manner discussed herein above with reference to FIGS. 1 and 2. This can include decoding right eye image data into one frame buffer and decoding left eye image data into the other frame buffer. After decoding the image data into the two frame buffers, the conversion can also include encoding (e.g., with the encoder 108) the stored left and right image data into output 3D content (e.g., output 3D content 104). In one or more implementations, the output 3D content comprises a “flip frame” encoding format. When encoding in the “flip frame” format, the encoder generates an alternating sequence of “left” and “right” video frames, each encoding corresponding left eye image data and right eye image data from the frame buffers.

Of course, the “flip frame” encoding format can include any appropriate sequence of frames. For instance, when upscaling the frame rate, the sequence may be L1, R1, L1, R1, L2, R2, etc., where L1 and R1 correspond to first left and right images, and where L2 and R2 correspond to second left and right images. Alternatively, when encoding for a display device 306 that receives interlaced video, the sequence may be L, L, R, R, etc., where each left frame encodes only a portion of the left eye image data and where each right frame encodes only a portion of the right eye image data. Of course, one will appreciate in light of this disclosure that other sequences are also possible.
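
These orderings might be generated as follows. The flags and the even/odd field split are illustrative assumptions, and the arrays are assumed to support numpy-style slicing:

```python
def flip_frame_sequence(pairs, upscale=False, interlaced=False):
    """Emit one of the flip-frame orderings described above.

    upscale:    L1, R1, L1, R1, L2, R2, ...  (doubled frame rate)
    interlaced: L, L, R, R, ... where each frame carries one field
    default:    L1, R1, L2, R2, ...
    """
    output = []
    for left, right in pairs:
        if upscale:
            output.extend([left, right, left, right])
        elif interlaced:
            # Even field then odd field for each eye.
            output.extend([left[0::2], left[1::2], right[0::2], right[1::2]])
        else:
            output.extend([left, right])
    return output
```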

In one or more implementations, the processing component 406 also generates a blanking signal which corresponds to the generated output 3D content 104. The blanking signal can also take physical characteristics (e.g., frame rate and/or frame overlap interval) of the display device 306 into account. The blanking signal can include one or more blanking instructions that direct the blanking device(s) 304 to synchronously blank a viewer's left and right eyes to produce the illusion that two-dimensional content is 3D. The blanking instructions in the blanking signal also correspond with displayed video frames of output 3D content. The blanking instructions in the blanking signal can also include one or more blanking instructions which blank both eyes during transition periods in which the display device 306 transitions between displaying two video frames. The timing of these “inter-frame blanking” instructions can be based on the frame rate and/or frame overlap interval of the display device 306.
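
One possible way to derive such timing from the display's frame rate and frame overlap interval is sketched below. All parameter names and the instruction vocabulary are assumptions for the example, not the disclosed signal format:

```python
def blanking_schedule(frame_rate_hz, frame_overlap_s, num_frames):
    """Return (start_time_s, duration_s, instruction) triples: while an
    output frame is stable, blank the opposite eye; during the frame
    overlap interval between frames, blank both eyes."""
    frame_period = 1.0 / frame_rate_hz
    stable = frame_period - frame_overlap_s
    schedule = []
    for i in range(num_frames):
        # Flip-frame output alternates left, right, left, right, ...
        instruction = "blank_right" if i % 2 == 0 else "blank_left"
        start = i * frame_period
        schedule.append((start, stable, instruction))
        schedule.append((start + stable, frame_overlap_s, "blank_both"))
    return schedule
```

For a 60 Hz display with an assumed 4 ms overlap, `blanking_schedule(60, 0.004, 4)` yields alternating single-eye blanking windows of roughly 12.7 ms separated by 4 ms both-eye blanks.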

The blanking signal transmitter 408 can transmit the generated inter-frame blanking signal to one or more blanking devices (e.g., blanking device(s) 304). As discussed herein above, the blanking signal transmitter 408 can transmit wirelessly (e.g., BLUETOOTH or infrared) or over a wired connection. As also discussed herein above, the blanking signal transmitter 408 can employ any number of protocols, analog or digital. In some circumstances, the blanking signal transmitter 408 is incorporated with the video processing device 302 or even with the display device 306, while in other instances the blanking signal transmitter 408 is a separate device (e.g., a separate USB device).

FIG. 5 illustrates a schematic diagram of a method of shuttering the display of 3D video content in response to a blanking signal, in accordance with one or more implementations. FIG. 5 illustrates that displaying 3D content according to one or more implementations can include at least three different display states 502, 504, 506. For example, when the display device 306 displays “left eye content” 510 in state 502, the video processing device 302 can send a blanking signal to the blanking device 304 including one or more data packets 511. The data packet 511 can include instructions to use shuttering component 520 to blank the viewer's right eye view of the display device 306. Thus, upon receipt of data packet 511, the blanking device 304 can blank or occlude the viewer's right eye view of the display device 306 using shuttering component 520.

Similarly, when the display device 306 displays “right eye content” 514 in state 506, the video processing device 302 can send a blanking signal to the blanking device 304 including one or more data packets 515. The data packet 515 can include instructions to use shuttering component 518 to blank the viewer's left eye view of the display device 306. Thus, upon receipt of data packet 515, the blanking device 304 can blank or occlude the viewer's left eye view of the display device 306 using shuttering component 518.

These two blanking states alone can provide the illusion that two-dimensional images are actually 3D. Thus, the customized blanking signal, when combined with customized output 3D content can enable the display of 3D content on a broad range of display devices, even when those devices are not designed for the specific input 3D encoding format.

One or more implementations of the present invention, however, provide for a third “inter-frame blanking” state which blanks all or part of the transition period that occurs during the transition between two video frames. The transition period can occur due to physical limitations of the display device 306 (e.g., lower frame rate and/or long frame overlap) that cause the display device 306 to concurrently display portions of two transitioning images for a prolonged period of time. State 504, for example, illustrates a portion of a frame overlap time interval. During this time, the display device 306 displays an “inter-frame overlap” 512, in which the display device 306 concurrently displays portions of the two video frames (e.g., “left eye content” 510 and “right eye content” 514).

The video processing device 302 can send a blanking signal to the blanking device 304 including one or more data packets 513. The data packet 513 can include instructions to use shuttering components 518, 520 to blank the viewer's entire view of the display device 306. Thus, upon receipt of data packet 513, the blanking device 304 can blank or occlude the viewer's right and left eye views of the display device 306 using shuttering components 518, 520. In this way, during the inter-frame overlap, the blanking device 304 can concurrently use both shuttering components 518, 520 to blank portions of the viewer's view of the display device 306 corresponding to both eyes.
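
The three display states can be modeled as a simple lookup from blanking instruction to shutter states. The instruction names carry over from the scheduling sketch above and are assumptions, not the disclosed packet format:

```python
def apply_blanking_packet(instruction):
    """Map a blanking instruction to (left_opaque, right_opaque) shutter
    states; an opaque shutter blanks that eye's view.

    blank_right -> left eye content on screen   (state 502, packet 511)
    blank_both  -> inter-frame overlap          (state 504, packet 513)
    blank_left  -> right eye content on screen  (state 506, packet 515)
    """
    states = {
        "blank_right": (False, True),
        "blank_both": (True, True),
        "blank_left": (True, False),
        "open": (False, False),  # no blanking
    }
    return states[instruction]
```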

By blanking both eyes during a frame overlap time interval, the blanking device 304 can prevent the user from viewing all or part of the inter-frame overlap 512 during the frame overlap interval. This “inter-frame blanking” can enhance the clarity of the perceived 3D image by reducing or eliminating undesirable effects such as motion blurring and ghosting that can commonly occur on display devices that have lower frame rates and/or longer frame overlap intervals. Thus, inter-frame blanking techniques, when combined with the customized output 3D content, can also enhance the quality of the viewed 3D content, even when that content is viewed on display devices that are not specifically designed for 3D content display (i.e., low frame rate devices).

Accordingly, FIGS. 1-5 provide a number of components and mechanisms for converting a plurality of 3D encoding formats into a universal 3D encoding format by decoding input 3D content into two separate frame buffers and encoding output 3D content from the separate frame buffers. The output 3D content can be a universal 3D encoding format which can be viewed in connection with a blanking device controlled by a blanking signal generated in connection with the output 3D content. Thus, one or more disclosed implementations allow for viewing of 3D content on a broad range of display devices, even when that content is not originally encoded for viewing on those devices.

Additionally, implementations of the present invention can also be described in terms of flowcharts comprising one or more acts in a method for accomplishing a particular result. Along these lines, FIGS. 6-7 illustrate flowcharts of computerized methods of decoding 3D content into two frame buffers. For example, FIG. 6 illustrates a flowchart of a method of decoding a three-dimensional video signal into two frame buffers. Similarly, FIG. 7 illustrates a flowchart of a method of decoding a video frame into two frame buffers to generate a three-dimensional video signal. The acts of FIGS. 6 and 7 are described herein below with respect to the schematics, diagrams, devices and components shown in FIGS. 1-5.

For example, FIG. 6 shows that a method of decoding a 3D video signal into two frame buffers can comprise an act 602 of receiving a 3D video signal. Act 602 can include receiving an input 3D video signal, the input 3D video signal comprising one or more video frames including image data for viewing by a first eye and image data for viewing by a second eye. For example, the act can include the video processing device 302 receiving input 3D content 102 that includes one or more video frames 114 encoding left and right image data. In connection with receiving the input 3D video signal, act 602 can also include determining an encoding type of the input 3D video signal. In some instances, for example, a plurality of video frames encode the left and right image data (e.g., video frames 114a, 114b, 114c, 114d), while in other instances a single video frame encodes the left and right image data (e.g., video frames 114e, 114f, 114g).

FIG. 6 also shows that the method can comprise act 604 of generating a first frame buffer and act 606 of generating a second frame buffer. Act 604 can include generating a first frame buffer from the input 3D video signal that includes a complete first image for viewing by the first eye. Similarly, act 606 can include generating a second frame buffer from the input 3D video signal that includes a complete second image for viewing by the second eye. For example, acts 604 and 606 can involve the decoder 106 decoding first image data for viewing by a first eye into the first frame buffer 110, and the decoder 106 decoding second image data for viewing by a second eye into the second frame buffer 112.

Generating complete images can include decoding a plurality of video frames for each image. For instance, generating the first frame buffer 110 and generating the second frame buffer 112 can include decoding first image data and second image data from a plurality of interlaced video frames (e.g., interlaced video frames 114a, 114b, 114c, 114d). Of course, generating the first frame buffer 110 and generating the second frame buffer 112 can also include decoding first image data from a single progressive video frame and decoding second image data from a different progressive video frame. Furthermore, generating the first frame buffer 110 and generating the second frame buffer 112 can include decoding the complete first image and the complete second image from a single spatially-compressed video frame. For example, the single spatially-compressed video frame may be encoded using over-under (e.g., video frame 114e), side-by-side (e.g., video frame 114f), or interleave (e.g., video frame 114g) techniques.

In addition, FIG. 6 shows that the method can comprise an act 608 of generating an output 3D video signal. Act 608 can include generating an output 3D video signal by encoding the complete first image from the first frame buffer into a first video frame and encoding the complete second image from the second frame buffer into a second video frame. For example, the act can include the encoder 108 encoding image data from the first and second frame buffers 110, 112 into output 3D content 104. Encoding can include determining physical characteristics of the destination display device 306, and customizing the output 3D content 104 based on these physical characteristics.

For instance, when the destination display device accepts interlaced frames, encoding the complete first image from the first frame buffer (e.g., frame buffer 110) into the first video frame 116 can include encoding the first image into a first plurality of interlaced video frames that include the first video frame. Similarly, encoding the complete second image from the second frame buffer (e.g., frame buffer 112) into the second video frame 118 can include encoding the second image into a second plurality of interlaced video frames that include the second video frame. Alternatively, when the destination display device accepts progressive frames, encoding can include encoding the complete first image into a progressive video frame that comprises the first video frame and encoding the complete second image into a different progressive video frame that comprises the second video frame. Additionally, generating the output 3D video signal can include upscaling or downscaling a frame rate of the output 3D video signal, as compared to the input 3D video signal.
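
Encoding a complete image from a frame buffer into interlaced output fields is, in this sketch, simply the inverse of the weave operation shown earlier; the function name and even-then-odd field order are assumptions:

```python
def encode_interlaced(image):
    """Split a complete image from a frame buffer into two output fields
    (even lines, then odd lines) for a display that accepts interlaced
    video; a progressive display would receive `image` as one frame."""
    return image[0::2], image[1::2]
```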

Of course, the method can include any number of additional acts. For example, the method can include an act of sending the output 3D video signal to the display device 306. Sending the output 3D video signal can include formatting the output 3D video signal for the display device 306 (e.g., as HDMI, composite, component, etc.). Additionally, the method can include an act of transmitting a blanking signal to one or more blanking devices (e.g., blanking devices 304). As discussed herein above, the blanking signal can instruct the one or more blanking devices 304 to blank the entire display of the output three-dimensional video signal during at least a portion of the display of a transition between a portion of the complete first image and a portion of the complete second image.

In addition to the foregoing, FIG. 7 illustrates a method of decoding a video frame into two frame buffers to generate a 3D video signal. The method can comprise an act 702 of receiving an input video frame. Act 702 can include receiving an input video frame of a 3D video signal, the input video frame including a first image for viewing by a first eye and a second image for viewing by a second eye. For example, the input video frame can encode a complete first image and a complete second image (e.g., input video frames 114e, 114f). In this instance, the input video frame can use a spatial compression technique, including at least one of over-under or side-by-side. Alternatively, the input video frame can encode only a portion of the first image and the second image, as can be the case if the input video frame uses interlacing (e.g., input video frames 114a, 114b, 114c, 114d).

FIG. 7 also shows that the method can comprise an act 704 of decoding a first image from the input video frame. Act 704 can include decoding the first image of the input video frame into a first frame buffer. For example, the act can include the decoder 106 decoding first image data for viewing by a first eye into the first frame buffer (e.g., frame buffer 110). In particular, the decoder 106 can parse the first image intended for viewing with the first eye from the input video frame 114 and store only image data from the first image in the first frame buffer 110.

FIG. 7 also shows that the method can comprise an act 706 of decoding a second image from the input video frame. Act 706 can include decoding the second image of the input video frame into a second frame buffer. For example, the act can include the decoder 106 decoding second image data for viewing by a second eye into the second frame buffer (e.g., frame buffer 112). In particular, the decoder 106 can parse the second image intended for viewing with the second eye from the input video frame 114 and store only image data from the second image in the second frame buffer 112.

In addition, FIG. 7 shows that the method can comprise an act 708 of encoding a first output video frame. Act 708 can include encoding a first output video frame from the first frame buffer. The first output video frame can include image data from only the first image. For example, act 708 can involve encoder 108 extracting image data of the first image from the first frame buffer 110. The encoder 108 can then encode the image data from the first image into a first video frame 116 of an output 3D video signal 104.

In addition, FIG. 7 shows that the method can comprise an act 710 of encoding a second output video frame. Act 710 can include encoding a second output video frame from the second frame buffer. The second output video frame can include image data from only the second image. For example, act 710 can involve encoder 108 extracting image data of the second image from the second frame buffer 112. The encoder 108 can then encode the image data from the second image into a second video frame 118 of an output 3D video signal 104.

Of course, the method can include any number of additional acts. For example, the method can include one or more acts of transmitting the first output video frame and the second output video frame to a display device 306. The display device 306 can be a low frame rate device not specifically designed to display 3D content. The method can include transmitting the first output video frame and the second output video frame to the display device 306 through an HDMI, component, composite, coaxial, network, or optical interface.

Additionally, the method can include generating and transmitting a blanking signal, which includes generating and transmitting a plurality of blanking instructions to one or more blanking devices (e.g., blanking devices 304). For example, a first blanking instruction can direct the one or more blanking devices 304 to blank at least a portion of the first eye's view of a display device during display of the second image (e.g., state 502), while a second blanking instruction can direct the one or more blanking devices 304 to blank at least a portion of the second eye's view of the display device during display of the first image (e.g., state 506).

Furthermore, a third blanking instruction can direct the one or more blanking devices 304 to blank both the first and second eye's view of the display device during display of at least a portion of the transition between display of the first image and the second image (e.g., state 504). A blanking device can then use these blanking instructions to provide the illusion of 3D content display, and to provide high-quality 3D content display even on lower frame rate devices.

Accordingly, FIGS. 1-7 provide a number of components and mechanisms for decoding 3D video content into two frame buffers and for generating output 3D content. One or more disclosed implementations allow for viewing of 3D content on a broad range of display devices, including devices that may have lower frame rates and longer frame overlap intervals, or that are not otherwise specifically designed for displaying 3D content.

Implementations of the present invention can comprise special purpose or general-purpose computing systems. Computing systems may, for example, be handheld devices, appliances, laptop computers, desktop computers, mainframes, distributed computing systems, or even devices that have not conventionally been considered computing systems, such as DVD players, Blu-Ray players, gaming systems, and video converters. In this description and in the claims, the term “computing system” is defined broadly as including any device or system (or combination thereof) that includes at least one physical and tangible processor, and a physical and tangible memory capable of having thereon computer-executable instructions that may be executed by the processor.

The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems. In its most basic configuration, a computing system typically includes at least one processing unit and memory. The memory may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory and/or storage capability may be distributed as well. As used herein, the term “module” or “component” can refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads).

Implementations of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.

Computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.

A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.

Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.

Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.

Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. At a computer system, the computer system including one or more processors and a memory, a method of formatting three-dimensional video content for display on a low frame-rate display device by decoding three-dimensional video content into two frame buffers, the method comprising the acts of:

receiving an input three-dimensional video signal, the input three-dimensional video signal comprising one or more video frames including image data for viewing by a first eye and image data for viewing by a second eye;
generating a first frame buffer from the input three-dimensional video signal that includes a complete first image for viewing by the first eye;
generating a second frame buffer from the input three-dimensional video signal that includes a complete second image for viewing by the second eye; and
generating an output three-dimensional video signal by encoding the complete first image from the first frame buffer into a first video frame and encoding the complete second image from the second frame buffer into a second video frame.
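By way of illustration only, the following Python sketch models one possible embodiment of the method of claim 1. A video frame is represented as a list of pixel rows, and the example assumes, purely for illustration, that each input frame packs the complete first image in its top half and the complete second image in its bottom half; the function names are illustrative and do not appear in the specification.

def decode_frame_packed(frame):
    # Illustrative decoder: assumes the input frame packs the complete
    # first image in its top half and the complete second image in its
    # bottom half (an assumption of this example, not of the claim).
    half = len(frame) // 2
    return frame[:half], frame[half:]

def format_for_display(input_signal):
    # For each input frame: generate the two frame buffers, then encode
    # each complete image into its own output video frame.
    output_signal = []
    for frame in input_signal:
        first_buffer, second_buffer = decode_frame_packed(frame)
        output_signal.append(first_buffer)   # first video frame (first eye only)
        output_signal.append(second_buffer)  # second video frame (second eye only)
    return output_signal

# One four-row input frame yields two two-row output frames, one per eye.
print(format_for_display([["L0", "L1", "R0", "R1"]]))
# [['L0', 'L1'], ['R0', 'R1']]

As the example shows, each input frame yields two output video frames, one containing only the complete first image and one containing only the complete second image.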

2. The method of claim 1, further comprising transmitting the output three-dimensional video signal to a display device.

3. The method of claim 1, wherein generating the first frame buffer and generating the second frame buffer comprises decoding first image data and second image data from a plurality of interlaced video frames.
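The following sketch illustrates one way the decoding of claim 3 might proceed, under the assumption, made here only for illustration since interlaced 3D layouts vary, that each interlaced input frame carries one field (every other line) of one eye's image, with fields alternating between the first and second eyes:

def decode_interlaced(fields, height):
    # Accumulate lines into the two frame buffers until each complete
    # image has been assembled from its two fields.
    first = [None] * height   # first frame buffer
    second = [None] * height  # second frame buffer
    for index, field in enumerate(fields):
        target = first if index % 2 == 0 else second   # which eye's buffer
        start = 0 if (index // 2) % 2 == 0 else 1      # top or bottom field
        for line, row in zip(range(start, height, 2), field):
            target[line] = row
    return first, second

# Four fields (first-eye top, second-eye top, first-eye bottom,
# second-eye bottom) complete both four-line frame buffers.
fields = [["L0", "L2"], ["R0", "R2"], ["L1", "L3"], ["R1", "R3"]]
print(decode_interlaced(fields, 4))
# (['L0', 'L1', 'L2', 'L3'], ['R0', 'R1', 'R2', 'R3'])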

4. The method of claim 1, wherein:

generating the first frame buffer comprises decoding first image data from a single progressive video frame; and
generating the second frame buffer comprises decoding second image data from a different progressive video frame.

5. The method of claim 1, wherein generating the first frame buffer and generating the second frame buffer comprises decoding the complete first image and the complete second image from a single spatially-compressed video frame.
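A minimal sketch of the single-frame decoding of claim 5, covering the two spatial packings later named in claim 16 (over-under and side-by-side); the mode strings are the example's own convention:

def decode_spatial(frame, mode):
    # Recover both complete images from one spatially-compressed frame.
    if mode == "over-under":
        half = len(frame) // 2
        return frame[:half], frame[half:]   # top half, bottom half
    if mode == "side-by-side":
        half = len(frame[0]) // 2
        first = [row[:half] for row in frame]    # left half of every row
        second = [row[half:] for row in frame]   # right half of every row
        return first, second
    raise ValueError("unknown spatial packing: " + mode)

# Side-by-side: the left half of every row belongs to the first image.
print(decode_spatial(["LLRR", "LLRR"], "side-by-side"))
# (['LL', 'LL'], ['RR', 'RR'])

In practice, each recovered half-resolution image would typically be scaled back to full resolution before being encoded into an output video frame.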

6. The method of claim 1, further comprising determining an encoding type of the input three-dimensional video signal.

7. The method of claim 1, wherein generating the first frame buffer and generating the second frame buffer comprises decoding the input three-dimensional video signal with a composite decoder that decodes a plurality of three-dimensional encoding types.

8. The method of claim 1, wherein:

encoding the complete first image from the first frame buffer comprises encoding the complete first image into a first plurality of interlaced video frames that include the first video frame; and
encoding the complete second image from the second frame buffer comprises encoding the complete second image into a second plurality of interlaced video frames that include the second video frame.

9. The method of claim 1, wherein:

encoding the complete first image from the first frame buffer comprises encoding the first image into a progressive video frame that comprises the first video frame; and
encoding the complete second image from the second frame buffer comprises encoding the second image into a different progressive video frame that comprises the second video frame.

10. The method of claim 1, further comprising:

sending the output three-dimensional video signal to a display device; and
transmitting a blanking signal to one or more blanking devices, the blanking signal instructing the one or more blanking devices to blank the entire display of the output three-dimensional video signal during at least a portion of the display of a transition between a portion of the complete first image and a portion of the complete second image.

11. The method of claim 1, wherein generating the output three-dimensional video signal comprises at least one of upscaling or downscaling a frame rate of the output three-dimensional video signal compared to a frame rate of the input three-dimensional video signal.
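One simple way to realize the frame-rate conversion of claim 11 (and the frame-duplication approach of claim 14 below) is to repeat or drop output frames. The sketch assumes integer rate ratios; a real converter might interpolate instead:

def rescale_frame_rate(frames, repeat=1, keep_every=1):
    # Upscale by repeating each output frame `repeat` times; downscale
    # by keeping only every `keep_every`-th frame.
    upscaled = [f for f in frames for _ in range(repeat)]
    return upscaled[::keep_every]

# 24 Hz doubled to 48 Hz: each output frame is emitted twice.
print(rescale_frame_rate(["F1", "F2"], repeat=2))
# ['F1', 'F1', 'F2', 'F2']
# 60 Hz halved to 30 Hz: every other output frame is dropped.
print(rescale_frame_rate(["F1", "F2", "F3", "F4"], keep_every=2))
# ['F1', 'F3']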

12. At a computer system, the computer system including one or more processors and a memory, a method of decoding a video frame into two frame buffers to generate a three-dimensional video signal capable of display on a low frame-rate display device, the method comprising the acts of:

receiving an input video frame of a three-dimensional video signal, the input video frame including a first image for viewing by a first eye and a second image for viewing by a second eye;
decoding the first image of the input video frame into a first frame buffer;
decoding the second image of the input video frame into a second frame buffer;
encoding a first output video frame from the first frame buffer, the first output video frame including image data from only the first image; and
encoding a second output video frame from the second frame buffer, the second output video frame including image data from only the second image.

13. The method of claim 12, further comprising determining a three-dimensional encoding type of the input video frame.

14. The method of claim 12, further comprising generating an output three-dimensional video signal with an upscaled frame rate by including duplicate output video frames of the first output video frame and the second output video frame.

15. The method of claim 12, further comprising:

generating a blanking signal that includes a plurality of blanking instructions, including: a first blanking instruction directing one or more blanking devices to blank at least a portion of the first eye's view of a display device during display of the first output video frame, a second blanking instruction directing the one or more blanking devices to blank at least a portion of the second eye's view of the display device during display of the second output video frame, and a third blanking instruction directing the one or more blanking devices to blank both the first and second eyes' view of the display device during display of at least a portion of a transition between display of the first output video frame and the second output video frame; and
transmitting the blanking signal to the one or more blanking devices.
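A sketch of one possible blanking-signal generator consistent with claim 15 follows; the (start time, eyes to blank) tuple format and the timing parameters are assumptions of the example, not requirements of the claim:

def build_blanking_signal(frame_period, transition_time):
    # One display cycle of (start_time_in_seconds, eyes_to_blank) pairs.
    return [
        (0.0, "second eye"),                            # first output frame shown
        (frame_period, "both eyes"),                    # inter-frame transition
        (frame_period + transition_time, "first eye"),  # second output frame shown
    ]

# A 60 Hz panel (about 16.7 ms per frame) with a 4 ms transition.
for instruction in build_blanking_signal(1 / 60, 0.004):
    print(instruction)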

16. The method of claim 12, wherein the input video frame encodes both the first image and the second image using a spatial compression technique, including at least one of over-under or side-by-side.

17. The method of claim 12, further comprising transmitting the first output video frame and the second output video frame to a display device.

18. The method of claim 12, wherein decoding the first image of the input video frame into a first frame buffer includes parsing the first image from the input video frame and storing only image data from the first image in the first frame buffer.

19. The method of claim 18, wherein decoding the second image of the input video frame into a second frame buffer includes parsing the second image from the input video frame and storing only image data from the second image in the second frame buffer.

20. A computer program product for implementing a method for formatting three-dimensional video content for display on a low frame-rate display device by decoding three-dimensional video content into two frame buffers, the computer program product for use at a computer system, the computer program product comprising computer-executable instructions that, when executed by the computer system, cause one or more processors of the computer system to perform a method comprising the acts of:

receiving an input three-dimensional video signal, the input three-dimensional video signal comprising one or more video frames including image data for viewing by a first eye and image data for viewing by a second eye;
generating a first frame buffer from the input three-dimensional video signal that includes a complete first image for viewing by the first eye;
generating a second frame buffer from the input three-dimensional video signal that includes a complete second image for viewing by the second eye; and
generating an output three-dimensional video signal by encoding the complete first image from the first frame buffer into a first video frame and encoding the complete second image from the second frame buffer into a second video frame.
Patent History
Publication number: 20120140032
Type: Application
Filed: Mar 4, 2011
Publication Date: Jun 7, 2012
Applicant: CIRCA3D, LLC (Salt Lake City, UT)
Inventor: Timothy A. Tabor (West Jordan, UT)
Application Number: 13/378,649
Classifications
Current U.S. Class: Signal Formatting (348/43); Coding Or Decoding Stereoscopic Image Signals (EPO) (348/E13.062)
International Classification: H04N 13/00 (20060101);