METHOD AND APPARATUS FOR PROCESSING 3D VIDEO IMAGE

- Samsung Electronics

A 3D video image processing method including: acquiring three-dimensional (3D) format information of a video image generated from video data to determine a 3D format of the video image; generating, from a first graphic image, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information, the first graphic image being generated from graphic data; and overlaying the video image with the second graphic image.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 61/075,184, filed on Jun. 24, 2008 in the U.S. Patent and Trademark Office, and the benefit of Korean Patent Application No. 10-2008-0091268, filed on Sep. 17, 2008 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

Aspects of the present invention relate to a method and an apparatus to process a three-dimensional (3D) video image, and more particularly, to a method and an apparatus to process a 3D video image to be output with subtitles or a menu.

2. Description of the Related Art

Three-dimensional (3D) video technology has spread widely with the development of digital technology. 3D video technology, which gives a two-dimensional (2D) image depth information to represent a more realistic image, has been applied to various fields, including communications, gaming, medical services, and broadcasting services.

The human eyes are separated from each other by a predetermined distance in the horizontal direction, so that the left eye and the right eye see a 2D image differently, which is referred to as binocular disparity. The human brain combines the two different 2D images to generate a 3D image having depth and reality. Methods of generating 3D images using binocular disparity include a method using glasses and glasses-free methods using a device such as a lenticular lens, a parallax barrier, or parallax illumination.

SUMMARY OF THE INVENTION

Aspects of the present invention provide a 3D video image processing method and apparatus to output a video image in a three-dimensional (3D) format with a graphic image such as subtitles or a menu.

According to an aspect of the present invention, there is provided a video image processing method including: acquiring 3D format information of a video image generated from video data to determine a 3D format of the video image; generating, from a first graphic image generated from graphic data different from the video data, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information; and overlaying the video image with the second graphic image.

According to an aspect of the present invention, when the 3D format is a side-by-side format, the generating of the second graphic image may include horizontally scaling down the frame of the first graphic image to generate sub frames, and generating the second graphic image having a frame including two sub frames arranged in the horizontal direction.

According to an aspect of the present invention, when the 3D format is a top-and-down format, the generating of the second graphic image may include vertically scaling down the frame of the first graphic image to generate sub frames, and generating the second graphic image having a frame including two sub frames arranged in the vertical direction.

According to an aspect of the present invention, the acquiring of the 3D format information may include: extracting an identifier representing whether the video image is a 3D format image from the video data; and acquiring the 3D format information from the video data when the video image is a 3D format image using the identifier.

According to an aspect of the present invention, the video image processing method may further include splitting the video image overlaid with the second graphic image into a left-eye image and a right-eye image and outputting the left-eye image and the right-eye image.

According to an aspect of the present invention, the graphic data may include presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu.

According to an aspect of the present invention, when the graphic data includes both the presentation graphic data and the interactive graphic data, the generating of the second graphic image may include generating a second interactive graphic image using a first interactive graphic image generated from the interactive graphic data and the 3D format information, and generating a second presentation graphic image using a first presentation graphic image generated from the presentation graphic data and the 3D format information.

According to an aspect of the present invention, the overlaying of the video image with the second graphic image may include overlaying the video image overlaid with the second presentation graphic image with the second interactive graphic image.

According to another aspect of the present invention, there is provided a video image processing method including: acquiring 3D format information of a video image generated from video data; converting the video image to a 3D interlaced format using the 3D format information; and overlaying the 3D interlaced format video image with a graphic image generated from graphic data.

According to an aspect of the present invention, when the video image is a top-and-down format including an upper image and a lower image, the converting of the format of the video image to the interlaced format may include alternately arranging odd-numbered horizontal lines of the upper image and even-numbered horizontal lines of the lower image or alternately arranging even-numbered horizontal lines of the upper image and odd-numbered horizontal lines of the lower image to convert the video image to the interlaced format video image.

According to an aspect of the present invention, when the video image is a side-by-side format including a left image and a right image, the converting of the format of the video image to the interlaced format may include alternately arranging odd-numbered vertical lines of the left image and even-numbered vertical lines of the right image or alternately arranging even-numbered vertical lines of the left image and odd-numbered vertical lines of the right image to convert the video image to the interlaced format video image.

According to an aspect of the present invention, the video image processing method may further include splitting the video image overlaid with the graphic image into a left-eye image and a right-eye image and outputting the left-eye image and the right-eye image.

According to another aspect of the present invention, there is provided a video image processing apparatus including: a video data decoder to decode video data to generate a video image; a graphic data decoder to decode graphic data to generate a first graphic image; a second graphic image generator to extract 3D format information of the video image from the video data to determine a 3D format of the video image, and to generate, from the first graphic image, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information; and a blender to overlay the video image with the second graphic image.

According to yet another aspect of the present invention, there is provided a video image processing apparatus including: a video data decoder to decode video data to generate a video image; a graphic data decoder to decode graphic data to generate a graphic image; a format converter to extract 3D format information of the video image from the video data and to convert the video image to a 3D interlaced format video image using the 3D format information; and a blender to overlay the 3D interlaced format video image with the graphic image.

According to still another aspect of the present invention, there is provided a computer readable recording medium storing a program to execute a video image processing method, the method including: acquiring 3D format information of a video image generated from video data to determine a 3D format of the video image; generating, from a first graphic image generated from graphic data, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information; and overlaying the video image with the second graphic image.

According to another aspect of the present invention, there is provided a computer readable recording medium storing a program to execute a video image processing method, the method including: acquiring 3D format information of a video image generated from video data; converting the video image to a 3D interlaced format video image; and overlaying the 3D interlaced format video image with a graphic image generated from graphic data.

According to an aspect of the present invention, there is provided a video image processing method of a video image processing apparatus, the video image processing method including: acquiring, by the video image processing apparatus, three-dimensional (3D) format information of a video image generated from video data; and overlaying, by the video image processing apparatus, the video image with a graphic image according to the acquired 3D format information, wherein the graphic image is generated from graphic data that is distinct from the video image.

According to an aspect of the present invention, there is provided a computer-readable recording medium implemented by a video image processing apparatus, the computer-readable recording medium including: video data comprising a video image and three-dimensional (3D) format information used by the video image processing apparatus to overlay the video image with a graphic image in a 3D format, wherein the graphic image is generated from graphic data that is distinct from the video image.

Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a block diagram of a video image processing apparatus according to an embodiment of the present invention;

FIGS. 2A to 2E illustrate examples of video images and graphic images generated by the video image processing apparatus illustrated in FIG. 1;

FIG. 3 is a block diagram of a video image processing apparatus according to another embodiment of the present invention;

FIGS. 4A to 4E illustrate examples of video images and graphic images generated by the video image processing apparatus illustrated in FIG. 3;

FIG. 5 is a flowchart of a video image processing method according to an embodiment of the present invention; and

FIG. 6 is a flowchart of a video image processing method according to another embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.

FIG. 1 is a block diagram of a video image processing apparatus 100 according to an embodiment of the present invention, and FIGS. 2A to 2E illustrate video images and graphic images generated by the video image processing apparatus 100 illustrated in FIG. 1. Referring to FIG. 1, the video image processing apparatus 100 includes a video data decoder 110, a graphic data decoder 120, a second graphic image generator 130, a video image buffer 140, a graphic image buffer 150, and a blender 160. While the video image processing apparatus 100 does not include an output device 200 in FIG. 1, it is understood that, according to other embodiments, the video image processing apparatus 100 may include the output device 200. The video image processing apparatus 100 may be a television, a computer, a mobile device, a set-top box, a gaming system, etc. The output device 200 may be a cathode ray tube display device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, goggles, etc. Moreover, while not required, each of the units 110, 120, 130, 140, 150, 160 can be one or more processors or processing elements on one or more chips or integrated circuits.

A demultiplexer (not shown) demultiplexes an input signal in the form of a bit stream, transmits video data IN1 to the video data decoder 110 and transmits graphic data IN2 to the graphic data decoder 120. The video data decoder 110 decodes the video data IN1 to generate a video image. The video image can be a 2D format image or a 3D format image. FIG. 2A illustrates 3D format video images decoded by the video data decoder 110. As illustrated in FIG. 2A, in the 3D format image, left and right images to be simultaneously displayed are allocated to a single frame. While not required in all aspects, the video image processing apparatus 100 may include a drive to read a disc including the video data and the graphic data, or can be connected to a separate drive.

A 3D image display apparatus sequentially displays an image for the left eye and an image for the right eye to reproduce a 3D image. A viewer perceives an image as playing without pause when the display apparatus plays the image at a minimum frame rate of 60 Hz per eye. As the 3D image is generated when the images input through the left and right eyes are combined, the display apparatus therefore outputs the 3D image at a minimum frame rate of 120 Hz in order for the viewer to perceive the 3D image as playing without pause. That is, the viewer recognizes the 3D image when left and right images are sequentially displayed at least every 1/120 seconds.

While a display apparatus supports 120 Hz, a player that decodes video data may not have an output terminal supporting 120 Hz. For example, the player may output frame images and transmit the frame images to the display apparatus at 60 Hz (i.e., every 1/60 seconds). Thus, to generate left and right images every 1/120 seconds through the display apparatus using the frame images transmitted from the player every 1/60 seconds, both left and right images must be included in a frame of a video image. According to aspects of the present invention, the video image processing apparatus 100 corresponds to the player and the output device 200 corresponds to the display apparatus. Accordingly, when the video image processing apparatus 100 transmits frame images of a video image to the output device 200 at a frame rate of 60 Hz, the video image has a 3D format including both left and right images in order for the output device 200 to output the video image at a frame rate of 120 Hz. The 3D format includes a top-and-down format, a side-by-side format, and an interlaced format.

A video image in the top-and-down format is illustrated in the upper part of FIG. 2A. In the top-and-down format, a left-eye image and a right-eye image, which are sub-sampled in the vertical direction, are respectively allocated to upper and lower parts of a frame. A video image in the side-by-side format is illustrated in the lower part of FIG. 2A. In the side-by-side format, a left-eye image and a right-eye image, which are sub-sampled in the horizontal direction, are respectively allocated to left and right parts of a frame. The video data decoder 110 transmits the video image to the video image buffer 140.

The graphic data decoder 120 decodes the graphic data received from the demultiplexer to generate a first graphic image. The graphic data may include presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu. When the graphic data includes both the presentation graphic data and the interactive graphic data, the graphic data decoder 120 generates a first presentation graphic image using the presentation graphic data and generates a first interactive graphic image using the interactive graphic data.

FIG. 2B illustrates the first graphic image decoded by the graphic data decoder 120. When the video image is a 2D format image, the first graphic image is overlaid on the 2D format video image to provide subtitles or a menu with respect to the video image. However, when the video image is a 3D format image as illustrated in FIG. 2A, the first graphic image as illustrated in FIG. 2B does not have a format suitable for the 3D format video image. Thus, a proper subtitle or menu is not output when the first graphic image is overlaid on the 3D format video image. Accordingly, when the video image has a 3D format, a graphic image in a format suitable for the 3D format video image is required.

The second graphic image generator 130 extracts an identifier representing whether the video image is a 2D format image or a 3D format image from a header of the video data. The second graphic image generator 130 sends the first graphic image to the graphic image buffer 150 when the second graphic image generator 130 determines that the video image is a 2D format image using the identifier. Conversely, when the second graphic image generator 130 determines that the video image is a 3D format image, the second graphic image generator 130 acquires 3D format information representing a format type of the 3D video image from the header of the video data. The second graphic image generator 130 decodes the 3D format information to recognize the 3D format type of the video image and generates the second graphic image using the 3D format type. While the second graphic image generator 130 is described above to determine whether the video image is a 2D format image or a 3D format image, and to determine the 3D format type, it is understood that aspects of the present invention are not limited thereto. For example, according to other aspects, although a controller is not shown in FIG. 1, a controller may be included in the video image processing apparatus 100 to determine whether the video image is a 2D format image or a 3D format image, acquire the 3D format information when the video image is a 3D format image, and/or notify the second graphic image generator 130 of the 3D format information and/or the 3D format type.
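
By way of a non-limiting illustration only, the extraction of the identifier and the 3D format information might resemble the following Python sketch. The header layout, field offsets, and format codes shown are hypothetical assumptions for illustration, as the actual syntax of the video data is not specified in this description.

```python
# Hypothetical header layout, for illustration only; the actual syntax of the
# video data is not defined in this description.
#   byte 0: identifier (0 = 2D format image, 1 = 3D format image)
#   byte 1: 3D format code (0 = top-and-down, 1 = side-by-side, 2 = interlaced)
FORMAT_NAMES = {0: "top-and-down", 1: "side-by-side", 2: "interlaced"}

def parse_3d_format_info(header: bytes):
    """Return (is_3d, format_name) from the assumed header layout."""
    is_3d = header[0] == 1
    if not is_3d:
        return False, None          # 2D image: the first graphic image is used as-is
    return True, FORMAT_NAMES.get(header[1], "unknown")

# Example: a header flagging a side-by-side 3D video image.
print(parse_3d_format_info(bytes([1, 1])))   # (True, 'side-by-side')
```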

The second graphic image generator 130 generates the second graphic image suitable for the 3D format type of the video image using the first graphic image and the 3D format information. When the video image has a top-and-down format (as illustrated in the upper portion of FIG. 2A), the second graphic image generator 130 vertically samples the frame of the first graphic image to generate sub frames having a reduced vertical size. The second graphic image generator 130 generates two sub frames having a reduced size and vertically arranges the two sub frames to suit the top-and-down format of the video image to generate a single frame including the two sub frames. The upper part of FIG. 2C illustrates the second graphic image generated using the first graphic image illustrated in the upper part of FIG. 2B when the video image has the top-and-down format as illustrated in the upper part of FIG. 2A.

When the video image has a side-by-side format (as illustrated in the lower part of FIG. 2A), the second graphic image generator 130 horizontally samples the frame of the first graphic image to generate sub frames having a reduced horizontal size. The second graphic image generator 130 generates two sub frames and horizontally arranges the two sub frames to suit the side-by-side format to generate a single frame including the two sub frames. The lower part of FIG. 2C illustrates the second graphic image generated using the first graphic image illustrated in the lower part of FIG. 2B when the video image has the side-by-side format as illustrated in the lower part of FIG. 2A.
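
By way of a non-limiting illustration, the generation of the second graphic image for the two formats might be sketched as below, assuming frames are numpy arrays of shape (height, width, channels); keeping every other row or column stands in for whatever scaling filter an actual implementation would apply.

```python
import numpy as np

def make_second_graphic(first_graphic: np.ndarray, fmt: str) -> np.ndarray:
    """Build a second graphic image matching the 3D format of the video image."""
    if fmt == "top-and-down":
        # Vertically scale down (here: keep every other row), then arrange the
        # two sub frames one above the other.
        sub = first_graphic[::2, :, :]
        return np.vstack([sub, sub])
    if fmt == "side-by-side":
        # Horizontally scale down (keep every other column), then arrange the
        # two sub frames next to each other.
        sub = first_graphic[:, ::2, :]
        return np.hstack([sub, sub])
    return first_graphic            # 2D format: use the first graphic image as-is

# Example with a 1080x1920 RGBA graphic frame.
graphic = np.zeros((1080, 1920, 4), dtype=np.uint8)
print(make_second_graphic(graphic, "top-and-down").shape)    # (1080, 1920, 4)
print(make_second_graphic(graphic, "side-by-side").shape)    # (1080, 1920, 4)
```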

When the graphic data includes both the presentation graphic data providing subtitles and the interactive graphic data providing a menu, the second graphic image generator 130 generates a second presentation graphic image using the first presentation graphic image and generates a second interactive graphic image using the first interactive graphic image. The second graphic image generator 130 transmits the generated second graphic image (including the second presentation graphic image and/or the second interactive graphic image) to the graphic image buffer 150.

The video image processing apparatus 100 includes a system time clock (STC) counter (not shown). The video image processing apparatus 100 decodes and outputs the video image according to the STC counter. The video image buffer 140 and the graphic image buffer 150 temporarily store the video image and the second graphic image, respectively, and transmit the video image and the second graphic image to the blender 160 when the STC corresponds to a presentation time stamp (PTS).
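
A simplified, illustrative sketch of this buffering behavior follows; the class and method names, and the use of integer timestamps, are assumptions rather than part of the description.

```python
from collections import deque

class PresentationBuffer:
    """Holds (PTS, frame) pairs and releases a frame once the STC reaches its PTS."""

    def __init__(self):
        self._queue = deque()

    def push(self, pts: int, frame) -> None:
        self._queue.append((pts, frame))

    def pop_ready(self, stc: int):
        """Return the frame whose PTS has been reached, or None if nothing is due."""
        if self._queue and self._queue[0][0] <= stc:
            return self._queue.popleft()[1]
        return None

# The video image buffer and the graphic image buffer are driven the same way.
video_buffer, graphic_buffer = PresentationBuffer(), PresentationBuffer()
video_buffer.push(pts=9000, frame="decoded video frame")
graphic_buffer.push(pts=9000, frame="second graphic frame")
stc = 9000                          # the system time clock has reached the PTS
print(video_buffer.pop_ready(stc), graphic_buffer.pop_ready(stc))
```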

The blender 160 overlays the video image with the second graphic image and transmits the video image overlaid with the second graphic image to the output device 200. FIG. 2D illustrates video images overlaid with the second graphic images illustrated in FIG. 2C by the blender 160. The upper part of FIG. 2D illustrates a video image overlaid with the second graphic image converted to suit the top-and-down format when the video image has the top-and-down format. The lower part of FIG. 2D illustrates a video image overlaid with the second graphic image converted to suit the side-by-side format when the video image has the side-by-side format.

When the graphic data includes both the presentation graphic data providing subtitles and the interactive graphic data providing a menu, the blender 160 overlays the video image with the second presentation graphic image, and overlays the video image overlaid with the second presentation graphic image with the second interactive graphic image in sequence. That is, the blender 160 blends the video image with subtitles first, and then blends the video image blended with the subtitles with a menu. However, it is understood that in other aspects, the blender 160 blends the video image with a menu first, and then blends the video image blended with the menu with subtitles.
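
A minimal sketch of this blending order is given below, assuming RGBA graphic planes composited over an RGB video plane with straight alpha; the compositing arithmetic itself is an ordinary assumption and is not taken from the description.

```python
import numpy as np

def overlay(base_rgb: np.ndarray, graphic_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend one graphic plane over the video plane."""
    alpha = graphic_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = graphic_rgba[..., :3] * alpha + base_rgb * (1.0 - alpha)
    return blended.astype(np.uint8)

def blend(video_rgb, second_presentation_rgba, second_interactive_rgba):
    # Subtitles first, then the menu on top of the subtitled video image.
    composed = overlay(video_rgb, second_presentation_rgba)
    return overlay(composed, second_interactive_rgba)

# Example with 1080x1920 frames (RGB video, RGBA graphics).
video = np.zeros((1080, 1920, 3), dtype=np.uint8)
subtitles = np.zeros((1080, 1920, 4), dtype=np.uint8)
menu = np.zeros((1080, 1920, 4), dtype=np.uint8)
print(blend(video, subtitles, menu).shape)   # (1080, 1920, 3)
```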

The output device 200 outputs the image received from the blender 160 as a 3D image OUT1. The output device 200 separates left and right images included in a single 3D format image of the video frame from each other to generate a left-eye image and a right-eye image. The output device 200 alternately displays the left-eye image and the right-eye image at least every 1/120 seconds. The output image OUT1 can be received at a receiving unit through which a user sees an output screen, such as goggles, through wired and/or wireless protocols. FIG. 2E illustrates the left-eye image and the right-eye image output from the output device 200. Alternatively, while not required in all aspects, the video image processing apparatus 100 may transmit the image received from the blender 160 to an external device, or may record the image on a storage medium. For example, the video image processing apparatus 100 may include a drive to record the image on a disc (such as a DVD, Blu-ray, etc.) directly, or can be connected to a separate drive.
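
By way of illustration only, the split performed by the output device might look like the following, again with numpy frames; which half corresponds to the left eye and which to the right is assumed here, and the alternate display of the two images is left to the display hardware.

```python
import numpy as np

def split_left_right(frame: np.ndarray, fmt: str):
    """Separate the left-eye and right-eye images packed in one 3D-format frame."""
    h, w = frame.shape[:2]
    if fmt == "top-and-down":
        return frame[: h // 2], frame[h // 2 :]        # assumed: upper = left eye
    if fmt == "side-by-side":
        return frame[:, : w // 2], frame[:, w // 2 :]  # assumed: left half = left eye
    raise ValueError("unsupported 3D format: " + fmt)

# The output device would then display the two images alternately, each within 1/120 s.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left, right = split_left_right(frame, "side-by-side")
print(left.shape, right.shape)                         # (1080, 960, 3) (1080, 960, 3)
```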

As described above, according to aspects of the present invention, when the video image is a 3D format image, a graphic image suitable for the 3D video image format is generated. Furthermore, the 3D video image is suitably overlaid with the graphic image and output with the graphic image.

FIG. 3 is a block diagram of a video image processing apparatus 300 according to another embodiment of the present invention, and FIGS. 4A to 4E illustrate video images and graphic images generated by the video image processing apparatus illustrated in FIG. 3. Referring to FIG. 3, the video image processing apparatus 300 includes a video data decoder 310, a graphic data decoder 320, a format converter 330, a video image buffer 340, a graphic image buffer 350, and a blender 360. While the video image processing apparatus 300 is independent of an output device 400 in FIG. 3, it is understood that aspects of the present invention are not limited thereto, and the video image processing apparatus 300 includes the output device 400 in other aspects. The video image processing apparatus 300 may be a television, a computer, a mobile device, a set-top box, a gaming system, etc. The output device 400 may be a cathode ray tube display device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, etc. Moreover, while not required, each of the units 310, 320, 330, 340, 350, 360 can be one or more processors or processing elements on one or more chips or integrated circuits.

The video data decoder 310 decodes video data IN3 to generate a video image. The video data decoder 310 transmits the video image to the format converter 330. The format converter 330 determines whether the video image is a 2D format image or a 3D format image and acquires 3D format information of the video image when the video image is a 3D format image. The format converter 330 determines whether the 3D video image is an interlaced format using the 3D format information of the 3D video image. The interlaced format samples a left-eye image and a right-eye image by ½ at a predetermined interval in the vertical or horizontal direction such that the left-eye and right-eye images are alternately located in the vertical or horizontal direction to generate a 3D image.

FIG. 4A illustrates 3D format video images decoded by the video data decoder 310. As described above with reference to FIG. 2A, in a single 3D format image, left and right images to be simultaneously displayed are allocated. The upper part of FIG. 4A illustrates a top-and-down format video image and the lower part of FIG. 4A illustrates a side-by-side format video image.

The graphic data decoder 320 decodes graphic data IN4 to generate a graphic image providing subtitles and/or a menu. The graphic data IN4 may include presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu. When the graphic data includes both the presentation graphic data and the interactive graphic data, the graphic data decoder 320 decodes the presentation graphic data to generate a presentation graphic image providing subtitles and decodes the interactive graphic data to generate an interactive graphic image providing a menu. FIG. 4B illustrates graphic images generated by the graphic data decoder 320. The graphic data decoder 320 sends the generated graphic image(s) to the graphic image buffer 350.

As illustrated in FIG. 4B, the graphic image has a format suitable for a 2D format video image. Thus, proper subtitles or a proper menu cannot be output when the video image is overlaid with the graphic image if the video image is in the top-and-down format or the side-by-side format as illustrated in FIG. 4A.

Accordingly, the format converter 330 extracts an identifier representing whether the video image is a 2D format image or a 3D format image from the header of the video data. When the format converter 330 determines that the video image is a 2D format image using the identifier, the format converter 330 transmits the 2D format video image to the video image buffer 340. When the format converter 330 determines that the video image is a 3D format image, the format converter 330 acquires 3D format information representing the format type of the 3D video image from the header of the video data. The format converter 330 sends the video image to the video image buffer 340 when the video image is a 3D interlaced format video image. While the format converter 330 is described above to determine whether the video image is a 2D format image or a 3D format image, and to determine the 3D format type, it is understood that aspects of the present invention are not limited thereto. For example, according to other aspects, although a controller is not shown in FIG. 3, the video image processing apparatus 300 may include a controller to determine whether the video image is a 2D format image or a 3D format image, acquire format information of the video image, and/or notify the format converter 330 of the format information when the video image is a 3D format image.

The format converter 330 converts the format of the video image to the interlaced format when the video image is not a 2D image and is not a 3D interlaced format video image. For example, when the video image has a top-and-down format, the format converter 330 alternately arranges odd-numbered horizontal lines of the upper part of the video image and even-numbered horizontal lines of the lower part of the video image or alternately arranges even-numbered horizontal lines of the upper part of the video image and odd-numbered horizontal lines of the lower part of the video image to convert the top-and-down format video image into the interlaced format video image. The upper part of FIG. 4C illustrates the interlaced format video image converted by the format converter 330 when the decoded video image has a top-and-down format. Similarly, when the video image has a side-by-side format, the format converter 330 can alternately arrange even-numbered vertical lines of the left part of the video image and odd-numbered vertical lines of the right part of the video image or alternately arrange odd-numbered vertical lines of the left part of the video image and even-numbered vertical lines of the right part of the video image to convert the side-by-side format video image into the interlaced format video image. The lower part of FIG. 4C illustrates the interlaced format video image converted by the format converter 330 when the decoded video image has a side-by-side format. Accordingly, the format converter 330 transmits the 3D interlaced format video image to the video image buffer 340.
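
The line rearrangement above can be sketched as follows (illustrative only). The claim-style language admits more than one reading; this sketch places the lines of the upper (or left) image on the odd-numbered output lines and the lines of the lower (or right) image on the even-numbered output lines, counting lines from 1, so that each eye remains sub-sampled by 1/2 as described for the interlaced format.

```python
import numpy as np

def to_interlaced(frame: np.ndarray, fmt: str) -> np.ndarray:
    """Convert a top-and-down or side-by-side frame to a line-interleaved frame.

    Lines are counted from 1, so the odd-numbered output lines are indices
    0, 2, 4, ... and the even-numbered output lines are indices 1, 3, 5, ...
    """
    h, w = frame.shape[:2]
    out = np.empty_like(frame)
    if fmt == "top-and-down":
        out[0::2] = frame[: h // 2]        # upper image -> odd-numbered horizontal lines
        out[1::2] = frame[h // 2 :]        # lower image -> even-numbered horizontal lines
    elif fmt == "side-by-side":
        out[:, 0::2] = frame[:, : w // 2]  # left image -> odd-numbered vertical lines
        out[:, 1::2] = frame[:, w // 2 :]  # right image -> even-numbered vertical lines
    else:
        return frame                       # already interlaced (or 2D): pass through
    return out

# Example: a 1080x1920 top-and-down frame stays 1080x1920 after interleaving.
tb = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(to_interlaced(tb, "top-and-down").shape)   # (1080, 1920, 3)
```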

The video image buffer 340 and the graphic image buffer 350 temporarily store the video image and the graphic image, respectively, and transmit the video image and the graphic image to the blender 360 when the STC corresponds to the PTS, as described above with reference to the video image processing apparatus 100 of FIG. 1. The blender 360 overlays the video image with the graphic image and transmits the video image overlaid with the graphic image to the output device 400. When the graphic data includes both the presentation graphic data providing subtitles and the interactive graphic data providing a menu, the blender 360 overlays the video image with the presentation graphic image, and overlays the video image overlaid with the presentation graphic image with the interactive graphic image in sequence. However, it is understood that in other aspects, the blender 360 overlays the video image with the interactive graphic image first, and then overlays the video image overlaid with the interactive graphic image with the presentation graphic image. FIG. 4D illustrates video images overlaid with the graphic images by the blender 360.

The output device 400 outputs the video image received from the blender 360 as a 3D image OUT2. The output image OUT2 can be received at a receiving unit through which a user sees an output screen, such as goggles, through wired and/or wireless protocols. The output device 400 splits the video image overlaid with the graphic image into a left-eye image and a right-eye image and outputs the left-eye image and the right-eye image. The output device 400 separates the left and right images included in a single video frame to generate the left-eye image and the right-eye image and alternately displays the left-eye image and the right-eye image at least every 1/120 seconds. FIG. 4E illustrates left-eye images and right-eye images output from the output device 400. Alternatively, while not required in all aspects, the video image processing apparatus 300 may transmit the image received from the blender 360 to an external device, or may record the image on a storage medium. For example, the video image processing apparatus 300 may include a drive to record the image on a disc (such as a DVD, Blu-ray, etc.) directly, or can be connected to a separate drive.

As described above, aspects of the present invention convert the format of a 3D video image to an interlaced format when the 3D video image is not the interlaced format, overlay the video image with subtitles and/or a menu, and output the video image overlaid with the subtitles and/or the menu.

FIG. 5 is a flow chart of a video image processing method according to an embodiment of the present invention. Referring to FIG. 5, a video image processing apparatus acquires 3D format information of a video image in operation 510. For example, the video image processing apparatus extracts an identifier representing whether the video image is a 3D format image from a header of the video data and obtains the 3D format information from the video data when the video image is the 3D format image. Furthermore, the video image processing apparatus decodes graphic data to generate a first graphic image. The video image processing apparatus generates a second graphic image suitable for the format of the video image using the first graphic image and the acquired 3D format information when the video image is the 3D format image in operation 520. For example, when the video image has a side-by-side format, the video image processing apparatus scales down the frame of the first graphic image in the horizontal direction to generate sub frames and arranges the sub frames in the horizontal direction to generate the second graphic image having two sub frames arranged in the horizontal direction. When the video image has a top-and-down format, the video image processing apparatus scales down the frame of the first graphic image in the vertical direction to generate sub frames and arranges the sub frames in the vertical direction to generate the second graphic image including two sub frames arranged in the vertical direction.

The video image processing apparatus overlays the video image with the generated second graphic image in operation 530. When the graphic data includes both presentation graphic data providing subtitles and interactive graphic data providing a menu, the video image processing apparatus overlays the video image with a second presentation graphic image first, and then overlays the video image overlaid with the second presentation graphic image with a second interactive graphic image. Moreover, in some aspects, the video image processing apparatus splits the video image overlaid with the second graphic image into a left-eye image and a right-eye image and outputs the left-eye image and the right-eye image.
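
Tying operations 510 through 530 together, a compact and purely illustrative driver might look like the sketch below; it inlines simplified versions of the header parsing, graphic scaling, and overlay from the earlier sketches, and every helper and field shown is an assumption rather than part of the method itself.

```python
import numpy as np

def process_video_frame(video_rgb, first_graphic_rgba, header: bytes):
    """Operations 510-530: read 3D format info, adapt the graphic image, overlay it."""
    formats = {0: "top-and-down", 1: "side-by-side"}
    fmt = formats.get(header[1]) if header[0] == 1 else None   # operation 510

    graphic = first_graphic_rgba                               # operation 520
    if fmt == "top-and-down":
        sub = graphic[::2]
        graphic = np.vstack([sub, sub])
    elif fmt == "side-by-side":
        sub = graphic[:, ::2]
        graphic = np.hstack([sub, sub])

    alpha = graphic[..., 3:4].astype(np.float32) / 255.0       # operation 530
    return (graphic[..., :3] * alpha + video_rgb * (1.0 - alpha)).astype(np.uint8)

video = np.zeros((1080, 1920, 3), dtype=np.uint8)
graphic = np.zeros((1080, 1920, 4), dtype=np.uint8)
print(process_video_frame(video, graphic, bytes([1, 0])).shape)   # (1080, 1920, 3)
```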

FIG. 6 is a flow chart of a video image processing method of a video image processing apparatus according to another embodiment of the present invention. The video image processing apparatus decodes video data to generate a video image and decodes graphic data to generate a graphic image. Furthermore, the video image processing apparatus extracts an identifier representing whether the video image is a 2D format image or a 3D format image from the video data. Referring to FIG. 6, when the video image is the 3D format image, the video image processing apparatus acquires video format information in operation 610. The video image processing apparatus determines whether the video image is an interlaced format in operation 620, and overlays the video image with the graphic image when the video image is the interlaced format in operation 640. When the video image is not the interlaced format (operation 620), the video image processing apparatus converts the format of the video image into the interlaced format in operation 630. Specifically, when the video image is a top-and-down format, the video image processing apparatus alternately arranges odd-numbered horizontal lines of the upper part of the video image and even-numbered horizontal lines of the lower part of the video image or alternately arranges even-numbered horizontal lines of the upper part of the video image and odd-numbered horizontal lines of the lower part of the video image to convert the video image to an interlaced format video image. Similarly, when the video image is a side-by-side format, the video image processing apparatus alternately arranges odd-numbered vertical lines of the left part of the video image and even-numbered vertical lines of the right part of the video image or alternately arranges even-numbered vertical lines of the left part of the video image and odd-numbered vertical lines of the right part of the video image to convert the video image to the interlaced format video image. The video image processing apparatus overlays the converted interlaced format video image with the graphic image in operation 640. In some aspects, the video image processing apparatus splits the video image overlaid with the graphic image into a left-eye image and a right-eye image and alternately outputs the left-eye image and the right-eye image at least every 1/120 seconds.
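
A similarly compact, illustrative sketch of operations 610 through 640 follows, under the same assumptions about the hypothetical header layout and numpy frames, with the line interleaving inlined from the earlier sketch.

```python
import numpy as np

def process_video_frame_interlaced(video_rgb, graphic_rgba, header: bytes):
    """Operations 610-640: get format info, interlace if needed, overlay the graphic."""
    formats = {0: "top-and-down", 1: "side-by-side", 2: "interlaced"}
    fmt = formats.get(header[1]) if header[0] == 1 else None    # operation 610

    frame = video_rgb
    if fmt in ("top-and-down", "side-by-side"):                 # operations 620-630
        h, w = frame.shape[:2]
        out = np.empty_like(frame)
        if fmt == "top-and-down":
            out[0::2], out[1::2] = frame[: h // 2], frame[h // 2 :]
        else:
            out[:, 0::2], out[:, 1::2] = frame[:, : w // 2], frame[:, w // 2 :]
        frame = out

    alpha = graphic_rgba[..., 3:4].astype(np.float32) / 255.0   # operation 640
    return (graphic_rgba[..., :3] * alpha + frame * (1.0 - alpha)).astype(np.uint8)

video = np.zeros((1080, 1920, 3), dtype=np.uint8)
graphic = np.zeros((1080, 1920, 4), dtype=np.uint8)
print(process_video_frame_interlaced(video, graphic, bytes([1, 1])).shape)  # (1080, 1920, 3)
```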

While not restricted thereto, aspects of the present invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Aspects of the present invention may also be realized as a data signal embodied in a carrier wave and comprising a program readable by a computer and transmittable over the Internet. Moreover, while not required in all aspects, one or more units of the video image processing apparatus 100 can include a processor or microprocessor executing a computer program stored in a computer-readable medium, such as a local storage.

Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims

1. A video image processing method of a video image processing apparatus, the video image processing method comprising:

acquiring, by the video image processing apparatus, three-dimensional (3D) format information of a video image generated from video data to determine a 3D format of the video image;
generating, by the video image processing apparatus, from a first graphic image, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information, the first graphic image being generated from graphic data that is distinct from the video image; and
overlaying, by the video image processing apparatus, the video image with the second graphic image.

2. The video image processing method as claimed in claim 1, wherein the generating of the second graphic image comprises:

when the determined 3D format is a side-by-side format, horizontally scaling down a frame of the first graphic image to generate two sub frames; and
generating the second graphic image having a frame including the generated two sub frames arranged in a horizontal direction.

3. The video image processing method as claimed in claim 1, wherein the generating of the second graphic image comprises:

when the determined 3D format is a top-and-down format, vertically scaling down a frame of the first graphic image to generate two sub frames; and
generating the second graphic image having a frame including the generated two sub frames arranged in a vertical direction.

4. The video image processing method as claimed in claim 1, wherein the acquiring of the 3D format information comprises:

extracting an identifier representing whether the video image is a 3D format image from the video data; and
acquiring the 3D format information from the video data when the video image is the 3D format image using the identifier.

5. The video image processing method as claimed in claim 1, further comprising:

splitting the video image overlaid with the second graphic image into a left-eye image and a right-eye image; and
outputting the left-eye image and the right-eye image.

6. The video image processing method as claimed in claim 1, further comprising:

transmitting the video image overlaid with the second graphic image to an external device to be output as a left-eye image and a right-eye image.

7. The video image processing method as claimed in claim 1, wherein:

the graphic data includes presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu;
when the graphic data includes the presentation graphic data and the interactive graphic data, the generating of the second graphic image comprises: generating a second interactive graphic image using a first interactive graphic image generated from the interactive graphic data and the 3D format information, and generating a second presentation graphic image using a first presentation graphic image generated from the presentation graphic data and the 3D format information; and
the overlaying of the video image with the second graphic image comprises: overlaying the video image with the second presentation graphic image, and overlaying the video image overlaid with the second presentation graphic image with the second interactive graphic image.

8. The video image processing method as claimed in claim 1, wherein:

the graphic data includes presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu; and
when the graphic data includes the presentation graphic data and the interactive graphic data, the generating of the second graphic image comprises: generating a second interactive graphic image using a first interactive graphic image generated from the interactive graphic data and the 3D format information, and generating a second presentation graphic image using a first presentation graphic image generated from the presentation graphic data and the 3D format information; and
the overlaying of the video image with the second graphic image comprises: overlaying the video image with the second interactive graphic image, and overlaying the video image overlaid with the second interactive graphic image with the second presentation graphic image.

9. The video image processing method as claimed in claim 1, wherein the generating of the second graphic image comprises:

scaling down a frame of the first graphic image to generate two sub frames; and
generating the second graphic image having a frame including the generated two sub frames,
wherein the two sub frames comprise a first sub frame corresponding to a left-eye image and a second sub frame corresponding to a right-eye image.

10. A video image processing method of a video image processing apparatus, the video image processing method comprising:

acquiring, by the video image processing apparatus, three-dimensional (3D) format information of a video image generated from video data;
converting, by the video image processing apparatus, the video image to a 3D interlaced format video image using the acquired 3D format information; and
overlaying, by the video image processing apparatus, the 3D interlaced format video image with a graphic image generated from graphic data that is distinct from the video image.

11. The video image processing method as claimed in claim 10, wherein the converting of the video image to the 3D interlaced format video image comprises:

when the video image is a top-and-down format including an upper image and a lower image, alternately arranging odd-numbered horizontal lines of the upper image and even-numbered horizontal lines of the lower image or alternately arranging even-numbered horizontal lines of the upper image and odd-numbered horizontal lines of the lower image to convert the video image to the 3D interlaced format video image.

12. The video image processing method as claimed in claim 10, wherein the converting of the video image to the 3D interlaced format video image comprises:

when the video image is a side-by-side format including a left image and a right image, alternately arranging odd-numbered vertical lines of the left image and even-numbered vertical lines of the right image or alternately arranging even-numbered vertical lines of the left image and odd-numbered vertical lines of the right image to convert the video image to the 3D interlaced format video image.

13. The video image processing method as claimed in claim 10, further comprising:

splitting the video image overlaid with the graphic image into a left-eye image and a right-eye image; and
outputting the left-eye image and the right-eye image.

14. The video image processing method as claimed in claim 10, further comprising:

transmitting the video image overlaid with the graphic image to an external device to be output as a left-eye image and a right-eye image.

15. A video image processing apparatus comprising:

a video data decoder to decode video data to generate a video image;
a graphic data decoder to decode graphic data to generate a first graphic image;
a second graphic image generator to extract three-dimensional (3D) format information of the video image from the video data to determine a 3D format of the video image, and to generate, from the first graphic image, a second graphic image corresponding to the determined 3D format of the video image using the 3D format information; and
a blender to overlay the video image with the second graphic image.

16. The video image processing apparatus as claimed in claim 15, wherein when the determined 3D format is a side-by-side format, the second graphic image generator horizontally scales down a frame of the first graphic image to generate two sub frames, and generates the second graphic image including the generated two sub frames arranged in a horizontal direction.

17. The video image processing apparatus as claimed in claim 15, wherein when the determined 3D format is a top-and-down format, the second graphic image generator vertically scales down a frame of the first graphic image to generate two sub frames, and generates the second graphic image including the generated two sub frames arranged in a vertical direction.

18. The video image processing apparatus as claimed in claim 15, wherein the second graphic image generator extracts an identifier representing whether the video image is a 3D format image from the video data and acquires the 3D format information from the video data when the video image is the 3D format image using the identifier.

19. The video image processing apparatus as claimed in claim 15, further comprising an output unit to split the video image overlaid with the second graphic image into a left-eye image and a right-eye image and to output the left-eye image and the right-eye image.

20. The video image processing apparatus as claimed in claim 15, further comprising an output unit to transmit the video image overlaid with the second graphic image to an external device to be output as a left-eye image and a right-eye image.

21. The video image processing apparatus as claimed in claim 15, wherein:

the graphic data includes presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu;
when the graphic data includes the presentation graphic data and the interactive graphic data, the second graphic image generator generates a second interactive graphic image using a first interactive graphic image generated from the interactive graphic data and the 3D format information, and generates a second presentation graphic image using a first presentation graphic image generated from the presentation graphic data and the 3D format information; and
the blender overlays the video image with the second presentation graphic image, and overlays the video image overlaid with the second presentation graphic image with the second interactive graphic image.

22. The video image processing apparatus as claimed in claim 15, wherein:

the graphic data includes presentation graphic data to provide subtitles and/or interactive graphic data to provide a menu;
when the graphic data includes the presentation graphic data and the interactive graphic data, the second graphic image generator generates a second interactive graphic image using a first interactive graphic image generated from the interactive graphic data and the 3D format information, and generates a second presentation graphic image using a first presentation graphic image generated from the presentation graphic data and the 3D format information; and
the blender overlays the video image with the second interactive graphic image, and overlays the video image overlaid with the second interactive graphic image with the second presentation graphic image.

23. The video image processing apparatus as claimed in claim 15, wherein:

the second graphic image generator scales down a frame of the first graphic image to generate two sub frames, and generates the second graphic image having a frame including the generated two sub frames; and
the two sub frames comprise a first sub frame corresponding to a left-eye image and a second sub frame corresponding to a right-eye image.

24. A video image processing apparatus comprising:

a video data decoder to decode video data to generate a video image;
a graphic data decoder to decode graphic data to generate a graphic image;
a format converter to extract three-dimensional (3D) format information of the video image from the video data and to convert the video image to a 3D interlaced format video image using the extracted 3D format information; and
a blender to overlay the 3D interlaced format video image with the graphic image.

25. The video image processing apparatus as claimed in claim 24, wherein when the video image is a top-and-down format including an upper image and a lower image, the format converter alternately arranges odd-numbered horizontal lines of the upper image and even-numbered horizontal lines of the lower image or alternately arranges even-numbered horizontal lines of the upper image and odd-numbered horizontal lines of the lower image to convert the video image to the 3D interlaced format video image.

26. The video image processing apparatus as claimed in claim 24, wherein when the video image is a side-by-side format including a left image and a right image, the format converter alternately arranges odd-numbered vertical lines of the left image and even-numbered vertical lines of the right image or alternately arranges even-numbered vertical lines of the left image and odd-numbered vertical lines of the right image to convert the video image to the 3D interlaced format video image.

27. The video image processing apparatus as claimed in claim 24, further comprising an output unit to split the video image overlaid with the graphic image into a left-eye image and a right-eye image and to output the left-eye image and the right-eye image.

28. The video image processing apparatus as claimed in claim 24, further comprising an output unit to transmit the video image overlaid with the graphic image to an external device to be output as a left-eye image and a right-eye image.

29. A computer readable recording medium storing a program to execute the method of claim 1 and implemented by the video image processing apparatus.

30. A computer readable recording medium storing a program to execute the method of claim 10 and implemented by the video image processing apparatus.

31. A computer-readable recording medium implemented by a video image processing apparatus, the computer-readable recording medium comprising:

video data comprising a video image and three-dimensional (3D) format information used by the video image processing apparatus to overlay the video image with a graphic image in a 3D format,
wherein the graphic image is generated from graphic data that is distinct from the video image.
Patent History
Publication number: 20090315979
Type: Application
Filed: Jun 23, 2009
Publication Date: Dec 24, 2009
Applicant: Samsung Electronics Co., Ltd. (Suwon-si)
Inventors: Kil-soo Jung (Osan-si), Hyun-kwon Chung (Seoul), Dae-jong Lee (Suwon-si)
Application Number: 12/489,726
Classifications
Current U.S. Class: Signal Formatting (348/43); Three-dimension (345/419); Stereoscopic (348/42); Scaling (345/660); Stereoscopic Television Systems; Details Thereof (epo) (348/E13.001)
International Classification: H04N 13/00 (20060101); G06T 15/00 (20060101);