METHODS AND APPARATUSES FOR PROCESSING AND DISPLAYING IMAGE

- Samsung Electronics

A method of processing images, the method including determining whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in a disk by using metadata for disk management, and converting two-dimensional (2D) images in the predetermined title to three-dimensional (3D) images by using the metadata for generating the depth map if the metadata for generating the depth map corresponding to the predetermined title exists.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of Korean Application No. 10-2008-0105485, filed Oct. 27, 2008, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

Aspects of the present invention relate to methods and apparatuses for processing and displaying images, and more particularly, to methods and apparatuses for processing and displaying images that determine whether three-dimensional (3D) image conversion is possible for each of a plurality of titles recorded in a disk by using metadata for disk management, and that control an image displaying apparatus according to a result of the determination.

2. Description of the Related Art

Developments in digital technologies include development of three-dimensional (3D) image technologies. 3D image technology is a technology for displaying more realistic images by adding depth information to two-dimensional (2D) images.

Since the human eyes are spaced apart from each other, a 2D image viewed by the left eye is different from a 2D image viewed by the right eye. This phenomenon is known as binocular disparity. The human brain combines the two different 2D images to generate a 3D image having perspective that corresponds to reality.

There are two types of 3D image technology: technology for generating video data as 3D images, and technology for generating 3D images from video data generated as 2D images. These two technologies are being researched and developed together.

SUMMARY OF THE INVENTION

Aspects of the present invention provide a method for determining whether images in titles recorded in a disk can be converted to 3D images for each of a plurality of titles by using metadata for disk management, generating graphic information indicating whether images displayed on a screen are 2D images or 3D images, and automatically changing an image displaying mode of an image displaying device.

Aspects of the present invention provide a method of processing images, the method including determining whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in a disk by using metadata for disk management, and converting two-dimensional (2D) images of the predetermined title to three-dimensional (3D) images by using the metadata for generating the depth map if metadata for generating the depth map corresponding to the predetermined title exists.

According to another aspect of the present invention, the conversion of the 2D images to the 3D images includes extracting depth information regarding a frame included in the predetermined title from the metadata for generating a depth map, generating a depth map by using the depth information, and generating a left image and a right image corresponding to the frame by using the depth map and the frame. Furthermore, the metadata for disk management may include information regarding a location of the metadata for generating a depth map corresponding to the predetermined title.

According to another aspect of the present invention, the conversion of the 2D images to the 3D images may include receiving information, from an image displaying device, indicating whether the image displaying device can only display 2D images or can display both 2D images and 3D images, and converting 2D images in the predetermined title to 3D images if the image displaying device can display both 2D images and 3D images. Furthermore, the method may further include extracting a disk identifier from the disk, transmitting the disk identifier to a server via a communication network, and downloading metadata for disk management corresponding to the disk identifier from the server.

According to another aspect of the present invention, the method may further include generating graphic information indicating that the 2D images in the predetermined title are converted to the 3D images and overlaying the graphic information on the 3D images. Furthermore, the method may, if an image displaying mode of the image displaying device is a 2D image displaying mode, further include generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 3D image displaying mode and transmitting the mode switching control signal to the image displaying device. Furthermore, the method may, if metadata for generating a depth map corresponding to the predetermined title does not exist, further include generating graphic information indicating that images in the predetermined title are 2D images and overlaying the graphic information on the 2D images.

According to another aspect of the present invention, the method may, if an image displaying mode of the image displaying device is a 3D image displaying mode, further include generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 2D image displaying mode and transmitting the mode switching control signal to the image displaying device. Furthermore, the conversion of the 2D images to the 3D images may include extracting, from the metadata for generating a depth map, shot information for classifying video frames included in the predetermined title into shots, and converting frames classified as a predetermined shot from among the video frames to 3D images by using the shot information, wherein the shot information classifies into a same group a series of frames from which the composition of a current frame can be predicted based on the composition of a previous frame.

According to another aspect of the present invention, the metadata for generating a depth map may include shot type information indicating whether frames in a shot are to be displayed as 2D images or 3D images for each shot, and the conversion of the frames in the predetermined shot to the 3D images may include converting frames in the predetermined shot to the 3D images by using shot type information corresponding to the predetermined shot.

Additionally, aspects of the present invention provide a method of transmitting metadata for disk management, performed by a server communicating with an image processing device via a communication network, the method including receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device, searching for the metadata for disk management corresponding to the predetermined disk by using a disk identifier of the disk, and transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device, wherein the metadata for disk management corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if the metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.

Additionally, aspects of the present invention provide an image processing device including a disk management metadata processing unit for determining whether metadata for generating a depth map corresponding to a predetermined title exists for a plurality of titles in the disk by using metadata for disk management, a unit for decoding metadata for generating a depth map, and a three dimensional (3D) image converting unit for converting two dimensional (2D) images in the predetermined title to 3D images by using the metadata for generating a depth map corresponding to the predetermined title if the metadata for generating a depth map corresponding to the predetermined title exists.

Additionally, aspects of the present invention provide a server communicating with an image processing device via a communication network, the server including a transmitting/receiving unit receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device and transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device in response to the request, a disk management metadata storage unit storing metadata for disk management corresponding to a disk, and a disk management metadata searching unit searching for the metadata for disk management corresponding to the predetermined disk by using a disk identifier of the disk, wherein the disk management metadata corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.

Additionally, aspects of the present invention provide a computer readable recording medium having recorded thereon a method of processing images, the method including determining whether metadata for generating a depth map corresponding to a predetermined title exists for a plurality of titles recorded in a disk by using metadata for disk management, and if metadata for generating a depth map corresponding to the predetermined title exists, converting two-dimensional (2D) images in the predetermined title to three-dimensional (3D) images by using the metadata for generating a depth map.

Additionally, aspects of the present invention provide a method by which a server, which communicates with an image processing device via a communication network, transmits metadata for disk management, the method including receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device, searching for the metadata for disk management corresponding to the predetermined disk by using a disk identifier of the disk, and transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device, wherein the metadata for disk management corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.

Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a diagram illustrating metadata regarding video data, according to an embodiment of the present invention;

FIG. 2 is a diagram showing metadata for disk management;

FIGS. 3A and 3B are diagrams for describing depth information included in the metadata for generating a depth map, that is, the metadata shown in FIG. 1, wherein FIG. 3A is a diagram for describing a depth provided to an image, and FIG. 3B is a diagram for describing a depth provided to an image when the image is viewed from a side;

FIG. 4 is a diagram of an image processing system for describing methods of processing and displaying images, according to an embodiment of the present invention;

FIG. 5 is a block diagram of the image processing device of FIG. 4;

FIG. 6 is a block diagram of the 3D image converting unit of FIG. 5 in closer detail;

FIG. 7 is a module view of a server transmitting metadata for disk management, according to an embodiment of the present invention;

FIGS. 8A and 8B are diagrams showing an image, on which graphic information is overlaid, output by an image processing device; and

FIG. 9 is a flowchart for describing a method of displaying images, according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.

Hereinafter, the present invention will be described in detail by explaining exemplary embodiments of the invention with reference to the attached drawings.

FIG. 1 is a diagram illustrating metadata regarding video data, according to an embodiment of the present invention. Metadata, according to aspects of the present invention, includes metadata for disk management and metadata for generating a depth map.

Generally, a plurality of titles may be recorded in a single disk. For example, a first playback title, a top menu title, and n movie titles may be recorded in a single disk. Among these titles, the first playback title is the title read initially when the disk is loaded into an image processing apparatus (not shown), and metadata for generating a depth map is not necessary therein. Furthermore, the top menu title is a title for providing menus corresponding to the movie titles, and metadata for generating a depth map may not be necessary therein either. However, metadata for generating depth maps corresponding to the n movie titles may be required to play back the movie titles as 3D images.

Generally, metadata for generating a single depth map is generated with respect to a single title. Thus, in the case where a plurality of titles are recorded in a disk, there may be either metadata for generating a depth map corresponding to a single title or a plurality of pieces of metadata for respectively generating depth maps corresponding to the plurality of titles. Therefore, in the example above, the number of pieces of metadata for generating depth maps, with respect to the n movie titles, may be n.

Metadata for disk management is data for managing the metadata for generating depth maps corresponding to a plurality of titles recorded in a disk, and includes disk identification information and information for managing the metadata for generating the depth maps. Metadata for disk management will be described below with reference to FIG. 2.

Metadata for generating a depth map includes information for converting video data frames to 3D images. Video data is formed of a series of frames, and thus metadata for generating a depth map includes information regarding the frames. The information regarding frames includes information for classifying frames according to a predetermined standard. When a group of a series of frames with similarities is referred to as a unit, frames of video data may be classified into a plurality of units. In the present invention, metadata for generating a depth map includes information for classifying frames of video data into predetermined units.

In the present invention, when compositions of frames are similar, such that the composition of a current frame can be predicted based on the composition of the previous frame, the series of frames with similar compositions will be referred to as a shot. Metadata for generating a depth map includes shot information for classifying frames of video data into shots. When the composition of frames changes significantly, such that the composition of a current frame is different from that of the previous frame, the current frame and the previous frame are classified into different shots.

Although not shown in FIG. 1, shot information may include information regarding times at which a shot begins and ends. A time at which a shot begins is a time point at which the first frame from among frames classified as a predetermined shot is displayed, and a time at which a shot ends is a time point at which the last frame from among the frames is displayed.

Metadata for generating a depth map may include shot type information corresponding to frames classified into one shot. Shot type information indicates whether the frames classified into one shot are to be displayed as 2D images or as 3D images.

If shot type information instructs frames classified as a predetermined shot to be displayed as 3D images, metadata for generating a depth map may further include information required for converting the frames to 3D images.
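
The metadata structure described above can be summarized with a short sketch. The following Python dataclasses are purely illustrative: the names (Shot, DepthMapMetadata, shot_type, and so on) are assumptions made for exposition and do not reflect an actual on-disk syntax.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Shot:
    start_time: float                  # display time of the first frame in the shot
    end_time: float                    # display time of the last frame in the shot
    shot_type: str                     # "2D" or "3D": how the shot's frames are displayed
    depth_info: Optional[dict] = None  # present only when shot_type == "3D"

@dataclass
class DepthMapMetadata:
    """Metadata for generating a depth map; one instance per title."""
    title_id: int
    shots: list = field(default_factory=list)  # Shot entries covering the title's frames
```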

To provide a 3D effect to 2D images, a sensation of depth should be given to the 2D images. When a person views a screen, an image displayed on the screen is projected to both eyes of the person. The change in the position of the image when viewed from the different locations of the left eye and the right eye is referred to as parallax. Parallax can be positive parallax, zero parallax, or negative parallax. Positive parallax refers to the case where the image is formed inside the screen and the parallax is equal to or smaller than the distance between the eyes. As the positive parallax increases, the 3D effect, in which the image appears to be located inside the surface of the screen, increases.

Zero parallax refers to the case in which an image is projected flat on a screen two-dimensionally. In the case of zero parallax, an image is displayed flat on a screen, and thus a user cannot perceive the 3D effect. Negative parallax refers to the case in which an image is projected in front of a screen. Negative parallax occurs when lines of sight of both eyes intersect each other and gives a 3D effect as if an object protrudes from the screen.

To generate 3D images by providing depth to 2D images, aspects of the present invention generate depth maps corresponding to frames. Thus, metadata for generating a depth map includes depth information, which is information for providing depth to frames. The depth information is used to convert 2D images to 3D images by providing depth to frames, and includes depth information for the background and depth information for objects.

The images of a frame include a background image and images of objects other than the background. Depth information for the background is information for providing depth to the background image. Providing depth to the background image means providing depth according to the composition of the background, such as its location or structure.

Frames may have various compositions, and thus the depth information for the background, which is included in metadata for generating a depth map, includes composition type information indicating the composition of the background of the frames. Furthermore, the depth information for the background may include coordinate values of backgrounds, depth values corresponding to the coordinate values, and panel position values.

The coordinate values of the backgrounds refer to coordinate values of the backgrounds in frames of 2D images. The depth values refer to the degree of depth to be provided to images, and metadata for generating a depth map includes depth values to be provided to each of a plurality of coordinates in frames of 2D images. The panel position values refer to a position on the screen at which images are formed.

Depth information for objects is information used to generate a depth map regarding objects, such as people or buildings, other than the backgrounds. Depth information for objects includes information regarding the times of displaying the objects and object region information. The times of displaying the objects refer to the time points of displaying frames in which the objects appear. The object region information is information for indicating the regions occupied by the objects, and may include information regarding coordinates to indicate the regions occupied by the objects, wherein the information regarding coordinates includes the coordinates at which the objects and the backgrounds meet. If required, a mask in which the regions of the objects are indicated may be used as the object region information. Descriptions of the depth information for the background and the depth information for objects will be provided below in closer detail with reference to FIGS. 3 through 6.
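
The depth information fields enumerated above may be pictured, under the same caveat, with an illustrative sketch; all field names are hypothetical and do not correspond to a defined metadata format.

```python
from dataclasses import dataclass, field

@dataclass
class BackgroundDepthInfo:
    composition_type: str   # indicates the composition (e.g. structure) of the background
    coordinates: list       # (x, y) coordinate values of the background in the 2D frame
    depth_values: list      # depth value (0..255) assigned to each coordinate
    panel_position: int     # depth value at which an image is formed on the screen

@dataclass
class ObjectDepthInfo:
    display_times: tuple    # time points of frames in which the object appears
    region_coordinates: list = field(default_factory=list)  # where object and background meet
    mask: bytes = b""       # optional mask marking the region occupied by the object
```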

According to an embodiment of the present invention, metadata regarding video data includes metadata for disk management and metadata for generating a depth map.

FIG. 2 is a diagram showing metadata for disk management. Referring to FIG. 2, the metadata for disk management includes disk identification information. The disk identification information is information for indicating with which disk the metadata for disk management is associated.

The metadata for disk management includes information indicating whether metadata for generating a depth map exists for each title and information indicating locations of metadata for generating a depth map corresponding to titles having metadata for generating a depth map.

Referring to FIG. 2, in a case where five titles are recorded in a predetermined disk, the metadata for disk management includes information indicating whether metadata for generating a depth map exists for each of the five titles. In the example of FIG. 2, metadata for generating a depth map does not exist for the first, second, and fifth titles, whereas metadata for generating a depth map exists for the third and fourth titles. In this case, the metadata for disk management further includes information indicating the locations of the metadata for generating a depth map to be applied to the third and fourth titles. In other words, the metadata for disk management may further include information indicating which metadata for generating a depth map, from among the metadata applicable to a predetermined disk, is to be applied to which titles.
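
The FIG. 2 example can be expressed as a simple lookup, shown below as a hedged sketch; the location strings are hypothetical placeholders, not an actual on-disk layout.

```python
depth_metadata_locations = {
    1: None,                 # first title: no metadata for generating a depth map
    2: None,                 # second title: no metadata for generating a depth map
    3: "depth/title3.meta",  # third title: location of its depth-map metadata
    4: "depth/title4.meta",  # fourth title: location of its depth-map metadata
    5: None,                 # fifth title: no metadata for generating a depth map
}

def can_convert_to_3d(title_id: int) -> bool:
    """A title can be converted to 3D only when depth-map metadata exists for it."""
    return depth_metadata_locations.get(title_id) is not None

assert can_convert_to_3d(3) and not can_convert_to_3d(5)
```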

FIGS. 3A and 3B are diagrams for describing depth information included in the metadata for generating a depth map, that is, the metadata shown in FIG. 1. FIG. 3A is a diagram for describing a depth provided to an image, and FIG. 3B is a diagram for describing a depth provided to an image when the image is viewed from a side.

In the present invention, a depth is provided to a 2D flat frame by using depth information. Referring to FIGS. 3A and 3B, the X-axis direction, which is a direction parallel to a line of sight of a user, indicates the degree of depth of a frame. A depth value refers to the degree of depth of an image, and the depth value in the present invention may be one of 256 values, that is, from 0 to 255. As the depth value approaches zero, the depth of an image increases, and thus the image appears farther from a viewer. In contrast, as the depth value approaches 255, the image appears closer to a viewer.

The panel position refers to the location of the screen at which images are formed, and the panel position value refers to the depth value of an image when parallax is zero, that is, when the image is formed on the screen. As shown in FIGS. 3A and 3B, the panel position value may be one depth value from among 0 to 255. When the panel position value is 255, all images in a frame have either the same depth value as the screen or a depth value less than that of the screen. Thus, the images are formed away from a viewer, that is, inside the screen; in other words, the images in the frame have zero or positive parallax. When the panel position value is 0, all images in a frame have either the same depth value as the screen or a depth value greater than that of the screen. Thus, the images are formed in front of the screen; in other words, the images in the frame have zero or negative parallax.
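
The relation between depth values, the panel position value, and the sign of the parallax can be captured in a few lines. This is a minimal sketch of the convention described above, not part of the claimed method.

```python
def parallax_sign(depth_value: int, panel_position: int) -> str:
    """Classify the parallax of an image point from its depth value (0..255)."""
    if depth_value == panel_position:
        return "zero"      # image formed exactly on the screen surface
    if depth_value < panel_position:
        return "positive"  # image formed inside (behind) the screen
    return "negative"      # image formed in front of the screen

# With a panel position value of 255, every depth value yields zero or positive
# parallax; with a panel position value of 0, zero or negative, as described above.
assert all(parallax_sign(d, 255) in ("zero", "positive") for d in range(256))
assert all(parallax_sign(d, 0) in ("zero", "negative") for d in range(256))
```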

An object, such as a person or a building, stands parallel to the surface of the screen. As shown in FIG. 3B, the depth value of an object is the same as that of the portion of the background at which the object and the background contact each other. On certain occasions, the depth value of an object may be the same as a panel position value. The depth values of an object are constant in a direction parallel to the surface of the screen.

FIG. 4 is a diagram of an image processing system for describing methods of processing and displaying images, according to an embodiment of the present invention.

Referring to FIG. 4, the image processing system according to the present embodiment includes a server 100, an image processing device 200, and an image displaying device 300.

The image processing device 200 is a device for decoding video data, generating 2D video images, and either converting the 2D video images to 3D images by using metadata for disk management and transmitting the 3D images to the image displaying device 300 or transmitting the 2D video images to the image displaying device 300 without conversion. The image processing device 200 may be a DVD player, a set-top box, or other similar devices.

The image displaying device 300 is a device for displaying images transmitted from the image processing device 200 on a screen, and may be a monitor, a TV, or other similar devices.

The image processing device 200 and the image displaying device 300 are shown as individual devices in FIG. 4. However, the image processing device 200 may include a display unit, which is a unit for performing functions of the image displaying device 300, so that the image processing device 200 may also include the image displaying device 300 to display images on a screen.

The image processing device 200 is connected to the server 100 via a communication network. The communication network may be a wired communication network and/or a wireless communication network.

The server 100 may be operated by a content provider such as a broadcast station or a general content generating company. The server 100 stores content such as audio data, video data, text data, and metadata regarding the audio data, the video data, and the text data.

When a user turns on the image processing device 200 by using a user interface such as a remote control device (not shown), the image processing device 200 receives, from the image displaying device 300, information indicating whether the image displaying device 300 can display only 2D images or can also display 3D images. In the case where the image displaying device 300 can display 3D images, the image processing device 200 extracts disk identification information from a loaded disk, transmits the disk identification information to the server 100, and requests metadata corresponding to the disk identification information.

The server 100 determines whether metadata corresponding to a predetermined disk is stored in the server 100 by using the disk identification information transmitted from the image processing device 200, and, if the metadata is stored in the server 100, transmits the metadata to the image processing device 200. The image processing device 200 may associate the metadata downloaded from the server 100 with the corresponding disk identification information and store the metadata in a predetermined location within the image processing device 200.
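
The exchange between the image processing device 200 and the server 100 might look like the following sketch, assuming a hypothetical HTTP endpoint; the actual transport and message format are not specified by this description.

```python
import json
import urllib.error
import urllib.request
from typing import Optional

def download_disk_metadata(server_url: str, disk_id: str) -> Optional[dict]:
    """Send the disk identifier to the server and download the matching
    metadata, or return None when the server stores no metadata for the disk."""
    url = f"{server_url}/metadata?disk_id={disk_id}"  # hypothetical endpoint
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no metadata corresponding to this disk identifier
            return None
        raise
```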

On certain occasions, metadata may be recorded in a disk in which video data is stored. The metadata may be recorded in one or more of a lead-in area, a user data area, and a lead-out area of a disk. Furthermore, metadata for disk management and metadata for generating a depth map may be stored separately. In other words, metadata for disk management may be stored in the server 100 and metadata for generating a depth map may be stored in a disk; however, aspects of the present invention are not limited thereto.

The image processing device 200 generates 2D images by decoding titles recorded in a disk. The image processing device 200 uses metadata for disk management either downloaded from the server 100 or extracted from a disk to determine whether metadata for generating depth maps for each of a plurality of titles recorded in a loaded disk exists.

If it is determined that metadata for generating a depth map corresponding to a predetermined title exists, the image processing device 200 searches for metadata for generating a depth map corresponding to the predetermined title by using information regarding a location of the metadata for generating a depth map.

The image processing device 200 converts 2D images of a predetermined title to 3D images by using metadata for generating a depth map corresponding to the predetermined title. The image processing device 200 extracts depth information for the background and depth information for the objects corresponding to frames included in a predetermined title from metadata for generating a depth map, and generates depth maps for each of the background and the objects by using the extracted depth information. The image processing device 200 generates a complete depth map by combining the depth map for the background and the depth maps for the objects, and generates left images and right images, that is, 3D images corresponding to 2D images by using the complete depth map and the 2D images.
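
A minimal depth-image-based rendering sketch illustrates how left and right images can be derived from a frame and its completed depth map. The shift scale, the rounding, and the unfilled occlusion holes are simplifying assumptions, not the method as claimed.

```python
import numpy as np

def render_stereo(frame: np.ndarray, depth: np.ndarray, panel_position: int,
                  scale: float = 0.05):
    """frame: H x W x 3 2D image; depth: H x W completed depth map (0..255).
    Returns (left_image, right_image). Occlusion holes are left unfilled."""
    height, width = depth.shape
    # Signed parallax in pixels: positive behind the screen, negative in front.
    shift = ((panel_position - depth.astype(np.int32)) * scale).astype(np.int32)
    columns = np.arange(width)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    for y in range(height):
        lx = np.clip(columns + shift[y] // 2, 0, width - 1)  # left-eye pixel positions
        rx = np.clip(columns - shift[y] // 2, 0, width - 1)  # right-eye pixel positions
        left[y, lx] = frame[y]
        right[y, rx] = frame[y]
    return left, right
```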

Before transmitting the 3D images to the image displaying device 300, the image processing device 200 may generate graphic information to indicate that the images transmitted to the image displaying device 300 are 3D images. The graphic information may be emoticons, texts, images, etc. The image processing device 200 overlays generated graphic information on the 3D images and transmits the 3D images to the image displaying device 300.

The image displaying device 300 displays video images, on which graphic information is overlaid, as 3D images. A user can recognize that images currently displayed by the image displaying device 300 are 3D images by using graphic information overlaid thereon. In the case where the image displaying mode of the image displaying device 300 is a 2D image displaying mode, a user may change the image displaying mode of the image displaying device 300 by using a remote control device (not shown), or the like, to display 3D images.

On certain occasions, the image processing device 200 may automatically change the image displaying mode of the image displaying device 300. The image processing device 200 determines whether the image displaying mode of the image displaying device 300 is set to the 2D image displaying mode or the 3D image displaying mode either by receiving information regarding the image displaying mode of the image displaying device 300 or by using information indicating whether images transmitted to the image displaying device 300 prior to a current image are 2D images or 3D images.

In the case where the image displaying mode of the image displaying device 300 is the 2D image displaying mode, the image processing device 200 may generate a control signal switching the image displaying device 300 to the 3D image displaying mode and transmit the control signal to the image displaying device 300. Thus, the image displaying device 300 can display 3D images.

If the image processing device 200 determines that metadata for generating a depth map corresponding to a predetermined title does not exist, decoded 2D images are transmitted to the image displaying device 300 without conversion. Before transmitting the 2D images to the image displaying device 300, the image processing device 200 generates graphic information to indicate that the images transmitted to the image displaying device 300 are 2D images, overlays the generated graphic information on the 2D images, and transmits the 2D images to the image displaying device 300.

The image displaying device 300 displays video images on which graphic information indicating that the video images are 2D images is overlaid. A user may recognize that the images currently displayed by the image displaying device 300 are 2D images based on the graphic information overlaid on the video images. In the case where the image displaying mode of the image displaying device 300 is the 3D image displaying mode, a user may switch the image displaying device 300 to display 2D images by using a remote control device (not shown), or the like. Furthermore, as described above, the image processing device 200 may automatically recognize and change the image displaying mode of the image displaying device 300. In the case where the image displaying device 300 is configured in the 3D image displaying mode, the image processing device 200 may generate a control signal instructing the image displaying device 300 to switch to the 2D image displaying mode and transmit the control signal to the image displaying device 300 such that the image displaying device 300 can display 2D images.

The image displaying device 300 sequentially displays a left-eye image and a right-eye image on a screen. A user perceives that images are continuously and seamlessly displayed when the images are displayed at a frame rate of at least 60 Hz per eye. Thus, the image displaying device 300 must display images at a frame rate of at least 120 Hz for a user to perceive the images input via the left and right eyes together as 3D images. Accordingly, the image displaying device 300 alternately displays left-eye frames and right-eye frames every 1/120th of a second.

In the case where the image processing device 200 and the image displaying device 300 support high definition multimedia interface (HDMI), the image processing device 200 and the image displaying device 300 may transmit and receive data via HDMI. HDMI is an uncompressed digital video/audio interface standard, and provides an interface between devices supporting HDMI. HDMI includes three communication channels: a transition minimized differential signaling (TMDS) channel, a display data channel (DDC), and a consumer electronics control (CEC) channel.

First, a case in which the image processing device 200 and the image displaying device 300 transmit and receive data using the TMDS channel will be described. TMDS on HDMI carries video, audio, and auxiliary data via one of three modes called the video data period, the data island period, and the control period. During the video data period, the pixels of an active video line are transmitted. During the data island period (which occurs during the horizontal and vertical blanking intervals), audio and auxiliary data are transmitted within a series of packets. The control period occurs between video data and data island periods. During the control period, the image processing device 200 may transmit a mode switching control signal instructing the image displaying device 300 to switch to either the 3D image displaying mode or the 2D image displaying mode.

Next, a case in which the image processing device 200 and the image displaying device 300 transmit a mode switching control signal using the CEC line will be described. The CEC line carries control data for controlling devices connected via HDMI.

Control data transmitted via the CEC line may include information indicating that transmitted data is control data regarding mode switching, information instructing the image displaying device 300 to be switched to the 3D image displaying mode or the 2D image displaying mode, and an address of the image displaying device 300 which will receive the control data.
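
An illustrative packing of such control data is sketched below; the byte values and layout are hypothetical and are not opcodes of the actual CEC specification.

```python
import struct

MODE_SWITCH = 0x01           # marks the data as mode-switching control data
MODE_2D, MODE_3D = 0x00, 0x01

def build_mode_switch_message(display_address: int, to_3d: bool) -> bytes:
    """Pack the destination address, the control-data type, and the target
    image displaying mode into a single three-byte frame."""
    mode = MODE_3D if to_3d else MODE_2D
    return struct.pack("BBB", display_address, MODE_SWITCH, mode)

message = build_mode_switch_message(display_address=0x00, to_3d=True)
assert message == b"\x00\x01\x01"
```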

According to an embodiment of the present invention, the image processing device 200 may generate graphic information indicating whether images to be displayed are 2D images or 3D images by using metadata for disk management, overlay the graphic information on the images, generate a control signal for switching the image displaying mode of the image displaying device 300 according to the current image displaying mode of the image displaying device 300, and transmit the control signal to the image displaying device 300. Thus, the image displaying mode of the image displaying device 300 may be automatically switched even if a user does not manually switch the image displaying mode of the image displaying device 300.

FIG. 5 is a block diagram of the image processing device 200 of FIG. 4, according to an embodiment of the present invention. Referring to FIG. 5, the image processing device 200 according to the present embodiment includes a video data decoding unit 210, a disk management metadata processing unit 220, a unit 230 for decoding metadata for generating a depth map, a 3D image converting unit 240, a video image buffer 250, a graphic information processing unit 260, a graphic information buffer 270, and a blender 280. Although not shown in FIG. 5, the image processing device 200 may further include a communication unit for exchanging data with the external server 100 via a communication network, and a local storage unit for storing data downloaded via the communication unit. Furthermore, although not shown in FIG. 5, the image processing device 200 may include a system time clock (STC) counter. The image processing device 200 decodes and outputs data according to the STC counter.

The video data decoding unit 210 reads either video data from a disk or video data downloaded and stored in the local storage unit and decodes the video data.

When a disk is loaded into the image processing device 200, the disk management metadata processing unit 220 reads metadata from the disk. In the case where metadata is not stored in the disk, the disk management metadata processing unit 220 extracts disk identification information from the disk and transmits the disk identification information to the server 100 via the communication unit (not shown). The image processing device 200 may receive metadata regarding a predetermined disk and store the metadata in the local storage unit (not shown), indexed by the disk identification information.

The disk management metadata processing unit 220 determines whether metadata for generating a depth map corresponding to video data of a predetermined title is included for each title in the disk by using metadata for disk management.

When there is no metadata for generating a depth map corresponding to video data of a predetermined title, the 3D image converting unit 240 transmits 2D images decoded by the video data decoding unit 210 to the video image buffer 250 without conversion. The disk management metadata processing unit 220 controls the graphic information processing unit 260 to generate graphic information indicating that the images are 2D images. The graphic information processing unit 260 transmits the generated graphic information to the graphic information buffer 270.

The disk management metadata processing unit 220 determines whether the image displaying mode of the image displaying device 300 is the 3D image displaying mode or the 2D image displaying mode. If it is determined that the image displaying device 300 is configured in the 3D image displaying mode, the disk management metadata processing unit 220 generates a mode switching control signal instructing the image displaying device 300 to be switched to the 2D image displaying mode and transmits the mode switching control signal to the image displaying device 300.

The video image buffer 250 and the graphic information buffer 270 temporarily store video images and graphic information, respectively. When the STC is equal to a presentation time stamp (PTS), the video image buffer 250 and the graphic information buffer 270 transmit the video images and the graphic information to the blender 280. The blender 280 overlays the graphic information on the video images and transmits the result to the image displaying device 300.
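
The STC/PTS gating performed by the two buffers can be sketched as follows; the function and queue representation are illustrative assumptions, not an actual player API.

```python
def release_due_frames(stc: int, buffered: list) -> list:
    """buffered: list of (pts, payload) pairs; removes and returns the
    payloads whose presentation time stamp has been reached by the STC."""
    due = [payload for pts, payload in buffered if pts <= stc]
    buffered[:] = [(pts, payload) for pts, payload in buffered if pts > stc]
    return due

queue = [(100, "frame-1"), (200, "frame-2")]
assert release_due_frames(100, queue) == ["frame-1"] and queue == [(200, "frame-2")]
```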

In the case where metadata for generating a depth map corresponding to video data of a predetermined title exists, the disk management metadata processing unit 220 determines the location of the metadata for generating a depth map corresponding to the video data of the predetermined title. The disk management metadata processing unit 220 controls the unit 230 for decoding metadata for generating a depth map to decode the metadata for generating a depth map corresponding to the video data of the predetermined title. The unit 230 decodes the metadata for generating a depth map and extracts depth information.

The 3D image converting unit 240 uses the depth information extracted by the unit 230 to generate a depth map corresponding to the 2D images decoded by the video data decoding unit 210, and converts the 2D images to 3D images by using the depth map.

The disk management metadata processing unit 220 controls the graphic information processing unit 260 to generate graphic information indicating that the images are 3D images. The graphic information processing unit 260 transmits the generated graphic information to the graphic information buffer 270.

The disk management metadata processing unit 220 determines whether the image displaying mode of the image displaying device 300 is the 3D image displaying mode or the 2D image displaying mode. If it is determined that the image displaying device 300 is configured in the 2D image displaying mode, the disk management metadata processing unit 220 generates a mode switching control signal instructing the image displaying device 300 to be switched to the 3D image displaying mode.

The image processing device 200 transmits the images, overlaid with the graphic information indicating that the images are 3D images, together with the mode switching control signal, to the image displaying device 300.

Additionally, a predetermined title may include both frames to be displayed as 2D images and frames to be displayed as 3D images. In this case, metadata for generating a depth map corresponding to the predetermined title includes depth information only with respect to the frames to be displayed as 3D images.

The unit 230 for decoding metadata for generating a depth map decodes the metadata for generating a depth map and extracts shot information regarding frames of video data classified into a predetermined shot. The unit 230 uses the shot type information to determine whether the frames classified into the predetermined shot are to be displayed as 2D images or 3D images. When it is necessary to convert the frames classified into the predetermined shot to 3D images, the unit 230 extracts depth information regarding the frames and transmits the depth information to the 3D image converting unit 240.

In this case, the disk management metadata processing unit 220 controls the graphic information processing unit 260 to generate graphic information indicating whether the images to be displayed are 2D images or 3D images, generates a mode switching control signal for changing the image displaying mode of the image displaying device 300, and transmits them to the image displaying device 300.

According to an embodiment of the present invention, a user can recognize whether images currently displayed are 2D images or 3D images by using graphic information displayed together with the images. Furthermore, the image displaying mode of the image displaying device 300 may be automatically switched even if a user does not manually change the image displaying mode.

FIG. 6 is a block diagram of the 3D image converting unit 240 of FIG. 5 in closer detail, according to an embodiment of the present invention. Referring to FIG. 6, the 3D image converting unit 240 includes a background depth map generating unit 610, an object depth map generating unit 620, a filtering unit 630, a depth map buffering unit 640, and a stereo rendering unit 650.

The background depth map generating unit 610 receives composition type information of a background, coordinate values of the background, depth values of the background corresponding to the coordinate values, and panel position values, which are included in the depth information for the background, from the unit 230 for decoding metadata for generating a depth map, and generates a depth map corresponding to the background by using the received values. The background depth map generating unit 610 transmits the generated background depth map to the filtering unit 630.

The object depth map generating unit 620 receives object identification information and object type information, which are included in depth information for an object, from the unit 230 and generates a depth map corresponding to the object. In the case where the object identification information is information regarding a mask, the object depth map generating unit 620 receives the mask to be applied to a corresponding frame and generates a depth map corresponding to the object by using the mask. The object depth map generating unit 620 transmits the depth map to the filtering unit 630.

The filtering unit 630 applies filters to the depth map corresponding to the background and the depth map corresponding to an object. The depth map corresponding to the object has a depth value corresponding to the surface of a screen. The filtering unit 630 may apply filters to the object to provide a 3D effect to the object having depth values corresponding to the surface of the screen. In the case where the depth map corresponding to the background is a flat surface, that is, in the case where all depth values of the background are equal to a panel position value, filters may be applied thereto to provide a 3D effect to the background.

The depth map buffering unit 640 temporarily stores the depth map corresponding to the background transmitted from the filtering unit 630. When a depth map corresponding to an object is generated, the depth map buffering unit 640 combines the depth map corresponding to the background and the depth map corresponding to the object and updates the depth map corresponding to a frame. In the case where there is a plurality of objects, the depth map buffering unit 640 updates the depth map by sequentially overlaying the depth maps corresponding to the plurality of objects. When the depth map is completed, the depth map buffering unit 640 transmits the completed depth map to the stereo rendering unit 650.
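
The sequential overlay performed by the depth map buffering unit 640 can be sketched as follows, assuming a boolean mask per object as the region information; the representation is illustrative.

```python
import numpy as np

def combine_depth_maps(background: np.ndarray, objects: list) -> np.ndarray:
    """background: H x W background depth map; objects: (mask, depth_map)
    pairs overlaid sequentially, so later objects occlude earlier ones."""
    combined = background.copy()
    for mask, object_depth in objects:
        combined[mask] = object_depth[mask]  # replace depth inside the object region
    return combined
```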

The stereo rendering unit 650 generates a left-eye image and a right-eye image by using the video images received from the video data decoding unit 210 and the depth map received from the depth map buffering unit 640, and generates a 3D format image including both the left-eye image and the right-eye image. Examples of 3D formats include a top-and-bottom format, a side-by-side format, and an interlaced format. The stereo rendering unit 650 transmits the 3D format image to the image displaying device 300.
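
Minimal packings for the named 3D formats are sketched below. The halving of resolution by simple decimation is an illustrative shortcut, since the description does not specify how each format is constructed.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Halve the horizontal resolution of each view and place them side by side.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def pack_top_and_bottom(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Halve the vertical resolution of each view and stack them vertically.
    return np.concatenate([left[::2, :], right[::2, :]], axis=0)

def pack_interlaced(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Even lines from the left view, odd lines from the right view.
    packed = left.copy()
    packed[1::2] = right[1::2]
    return packed
```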

FIG. 7 is a module view of a server transmitting metadata for disk management, according to an embodiment of the present invention. Referring to FIG. 7, the server 100 includes a plurality of application modules 740 including a transmitting/receiving processing module 741, a disk management metadata storage module 743, and a disk management metadata searching module 745. The transmitting/receiving processing module 741 processes communication with the image processing device 200, and the disk management metadata storage module 743 stores and manages metadata for disk management, depth information for the background, video data, and other similar data and information.

The disk management metadata searching module 745 uses the disk identification information transmitted from the image processing device 200 to search for the metadata for disk management corresponding to the requested disk identification information.

The overall configuration of the server 100 will be described below by describing the application modules thereof. The server 100 may use various operating systems (OS) as a system OS. The OS provides high level commands to an application program interface (API) 701 to control the operation of each of the application modules 740. The server 100 includes a high level command processing unit 710, which identifies the corresponding application modules 740 based on the high level commands provided by the API 701, decodes the high level commands, and provides the decoded high level commands to a corresponding module of an application module control unit 720.

The application module control unit 720 controls the operations of the application modules 740 according to a command provided by the high level command processing unit 710. In other words, the high level command processing unit 710 determines whether an application module 740 corresponding to a high level command provided via the API 701 exists. If the corresponding application module 740 exists, the high level command processing unit 710 decodes the high level command so that it may be understood by the application module 740 and either transmits the decoded high level command to a corresponding mapping unit or controls message transmission. Therefore, the application module control unit 720 includes mapping units 721, 725, and 729 and interface units 723, 727, and 731 corresponding to the transmitting/receiving processing module 741, the disk management metadata storage module 743, and the disk management metadata searching module 745, respectively.

The transmitting/receiving processing module mapping unit 721 receives a high level command for performing communication with the image processing device 200, maps the high level command to a device-level command that the transmitting/receiving processing module 741 can process, and provides the mapped command to the transmitting/receiving processing module 741 via the transmitting/receiving processing module interface unit 723.

The disk management metadata storage module mapping unit 725 and the disk management metadata storage module interface unit 727 handle the storage of metadata for disk management. The disk management metadata storage module mapping unit 725 receives a high level command for using the disk management metadata storage module 743 from the high level command processing unit 710, maps the high level command to a device-level command, and provides the mapped command to the disk management metadata storage module 743 via the disk management metadata storage module interface unit 727.

The disk management metadata searching module 745 searches for metadata for disk management requested by a user. The disk management metadata searching module mapping unit 729 receives a high level command applied via the high level command processing unit 710 and maps the high level command to a device-level command recognizable by the disk management metadata searching module 745. The device-level command is provided to the disk management metadata searching module 745 via the disk management metadata searching module interface unit 731.

FIGS. 8A and 8B illustrate images, on which graphic information is overlaid, displayed by the image displaying device 300.

Examples of methods whereby the image displaying device 300 displays images as 3D images include a method of displaying 3D images by using goggles synchronized with the image displaying device 300. In this case, a user can view 3D images by wearing the goggles. Methods of embodying 3D images without using goggles include a method of displaying images such that 3D images can only be viewed at a predetermined point, also known as a sweet spot, by using a display device including a lenticular lens, a parallax barrier, parallax illumination, etc. In this case, a user can view 3D images at a predetermined sweet spot.

FIG. 8A illustrates graphic information displayed corresponding to a method of displaying 3D images using goggles, and FIG. 8B illustrates graphic information displayed corresponding to a method not using goggles.

In a method in which the image displaying device 300 displays 3D images on a screen in synchronization with goggles, the image processing device 200 generates goggle-shaped graphic information, overlays the graphic information on an image, and transmits the image to the image displaying device 300. FIG. 8A shows an image overlaid with graphic information indicating whether the image is a 2D image or a 3D image. The left image of FIG. 8A illustrates goggle-shaped graphic information overlaid on an image. The goggle-shaped graphic information indicates that the image displayed by the image displaying device 300 is a 3D image. Thus, a user can recognize that the currently displayed image is a 3D image from the goggle-shaped graphic information, and can view the image three-dimensionally by wearing goggles. The right image of FIG. 8A illustrates crossed-out goggle-shaped graphic information overlaid on an image. The crossed-out goggle-shaped graphic information indicates that the image displayed by the image displaying device 300 is a 2D image. Thus, a user can recognize that the currently displayed image is a 2D image from the crossed-out goggle-shaped graphic information, and can view the image without wearing goggles.

In the case of a method in which the image displaying device 300 displays 3D images on a screen such that a user can view the images three-dimensionally at a predetermined sweet spot, the image processing device 200 generates graphic information indicating that a user has to be at the predetermined sweet spot, overlays the graphic information on an image, and transmits the image to the image displaying device 300. The left image of FIG. 8B shows upright figure-shaped graphic information overlaid on an image. The upright figure-shaped graphic information indicates that the image displayed by the image displaying device 300 is a 3D image. Thus, a user can recognize that the image displayed by the image displaying device 300 is a 3D image, and can view the image three-dimensionally at the predetermined sweet spot. The right image of FIG. 8B shows horizontally aligned figure-shaped graphic information overlaid on an image. The horizontally aligned figure-shaped graphic information indicates that the image displayed by the image displaying device 300 is a 2D image. Thus, a user can recognize that the image displayed by the image displaying device 300 is a 2D image, and can view the image not only at the predetermined sweet spot but also at other spots.

FIG. 9 is a flowchart for describing a method of displaying images, according to an embodiment of the present invention. Referring to FIG. 9, the image processing device 200 generates 2D images by decoding video data, and determines whether the image displaying device 300 is a device capable of displaying both 2D images and 3D images (operation 910). If the image displaying device 300 is capable of displaying 2D images only, the image processing device 200 transmits 2D images to the image displaying device 300 (operation 980).

If the image displaying device 300 is capable of displaying both 2D images and 3D images, the image processing device 200 extracts a unique identifier of a disk from the disk and transmits the unique identifier to the server 100 (operation 920). The image processing device 200 downloads metadata for disk management corresponding to the unique identifier from the server 100 (operation 930). The image processing device 200 uses the metadata for disk management to determine whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in the disk (operation 940).

If metadata for generating a depth map corresponding to a predetermined title does not exist, the image processing device 200 generates graphic information indicating that images in the title are 2D images, overlays the graphic information on the 2D images, and transmits the images to the image displaying device 300 (operation 990). If images in a previous title transmitted to the image displaying device 300 are 3D images, the image processing device 200 generates a mode switching control signal for changing the image displaying mode of the image displaying device 300 to the 2D image displaying mode, and transmits the mode switching control signal to the image displaying device 300 (operation 1000).

If metadata for generating a depth map corresponding to a predetermined title exists, the image processing device 200 converts the 2D images to 3D images by using the metadata for generating a depth map (operation 950). The image processing device 200 generates graphic information indicating that the converted images are 3D images, overlays the graphic information on the 3D images, and transmits the images to the image displaying device 300 (operation 960). If images in a previous title transmitted to the image displaying device 300 are 2D images, the image processing device 200 generates a mode switching control signal for changing the image displaying mode of the image displaying device 300 to the 3D image displaying mode, and transmits the mode switching control signal to the image displaying device 300 (operation 970).
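Taken together, operations 910 through 1000 form a per-title dispatch: check the display's capability, check for depth-map-generation metadata, convert and tag the frames accordingly, and emit a mode switching control signal whenever the image displaying mode must change. The self-contained sketch below condenses that flow; every class and helper in it is a placeholder invented for illustration (convert_to_3d stands in for the depth-map-based generation of left and right images), not an interface defined by the patent.

from dataclasses import dataclass

@dataclass
class Display:
    supports_3d: bool
    mode: str = "2D"  # current image displaying mode

    def send(self, frames):
        pass  # stands in for transmission to the image displaying device

    def send_mode_switch(self, new_mode):
        self.mode = new_mode  # mode switching control signal

def convert_to_3d(frames, depth_meta):
    return frames  # placeholder for depth-map-based 3D conversion

def overlay_2d_indicator(frames):
    return frames  # placeholder for the 2D indicator overlay

def overlay_3d_indicator(frames):
    return frames  # placeholder for the 3D indicator overlay

def process_title(frames_2d, depth_meta, display):
    """depth_meta is None when no depth-map-generation metadata exists."""
    if not display.supports_3d:          # operation 910
        display.send(frames_2d)          # operation 980
        return
    if depth_meta is None:               # operation 940
        display.send(overlay_2d_indicator(frames_2d))     # operation 990
        if display.mode == "3D":         # operation 1000
            display.send_mode_switch("2D")
    else:
        frames_3d = convert_to_3d(frames_2d, depth_meta)  # operation 950
        display.send(overlay_3d_indicator(frames_3d))     # operation 960
        if display.mode == "2D":         # operation 970
            display.send_mode_switch("3D")

Calling process_title once per title keeps the indicator overlay and the image displaying mode consistent with whether conversion actually occurred.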

According to aspects of the present invention, it can be determined whether images in titles recorded in a disk can be converted to 3D images for each of a plurality of titles by using metadata for disk management. According to a result of the determination, information indicating whether images displayed on a screen are 2D images or 3D images can be generated, and an image displaying mode of an image displaying device can be automatically changed.

Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims

1. A method of processing images, the method comprising:

determining whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in a disk by using metadata for disk management; and
converting two-dimensional (2D) images of the predetermined title to three-dimensional (3D) images by using the metadata for generating the depth map if the metadata for generating the depth map corresponding to the predetermined title exists.

2. The method of claim 1, wherein the converting of the 2D images to the 3D images comprises:

extracting depth information regarding a frame included in the predetermined title from the metadata for generating a depth map;
generating a depth map by using the depth information; and
generating a left image and a right image corresponding to the frame by using the depth map and the frame.

3. The method of claim 1, wherein the metadata for disk management comprises information regarding a location of the metadata for generating a depth map corresponding to the predetermined title.

4. The method of claim 1, wherein the converting of the 2D images to the 3D images comprises:

receiving information, from an image displaying device, indicating whether the image displaying device can only display 2D images or can display both 2D images and 3D images; and
converting 2D images in the predetermined title to 3D images, if the image displaying device can display both 2D images and 3D images.

5. The method of claim 1, further comprising:

extracting a disk identifier from the disk;
transmitting the disk identifier to a server via a communication network; and
downloading metadata for disk management corresponding to the disk identifier from the server.

6. The method of claim 1, further comprising:

generating graphic information indicating that the 2D images in the predetermined title are converted to the 3D images; and
overlaying the graphic information on the 3D images.

7. The method of claim 6, further comprising, if an image displaying mode of the image displaying device is a 2D image displaying mode:

generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 3D image displaying mode; and
transmitting the mode switching control signal to the image displaying device.

8. The method of claim 1, further comprising, if metadata for generating a depth map corresponding to the predetermined title does not exist:

generating graphic information indicating that images in the predetermined title are 2D images; and
overlaying the graphic information on the 2D images.

9. The method of claim 8, further comprising, if an image displaying mode of the image displaying device is a 3D image displaying mode:

generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 2D image displaying mode; and
transmitting the mode switching control signal to the image displaying device.

10. The method of claim 1, wherein the converting of the 2D images to the 3D images comprises:

extracting shot information from the metadata for generating a depth map, wherein the shot information is for classifying video frames included in the predetermined title into shots; and
converting frames classified as a predetermined shot from among video frames included in the predetermined title to 3D images by using the shot information,
wherein the shot information is information for classifying, into a same group, a series of frames from which the composition of a current frame can be predicted based on the composition of a previous frame.

11. The method of claim 10, wherein the metadata for generating a depth map comprises shot type information indicating whether frames in a shot are to be displayed as 2D images or 3D images for each shot, and

the converting of the frames in the predetermined shot to the 3D images comprises converting frames in the predetermined shot to the 3D images by using shot type information corresponding to the predetermined shot.

12. The method of claim 10, wherein the converting of the 2D images in the predetermined title to the 3D images comprises:

extracting depth information corresponding to frames in the predetermined title from the metadata for generating a depth map;
generating a depth map by using the depth information; and
generating a left image and a right image corresponding to each frame by using the depth map and the corresponding frame.

13. The method of claim 10, wherein the metadata for disk management comprises information regarding a location of the metadata for generating a depth map corresponding to the predetermined title.

14. The method of claim 10, wherein the converting of the frames in the predetermined shot to 3D images comprises:

receiving information, from an image displaying device, indicating whether the image displaying device can only display 2D images or can display both 2D images and 3D images; and
converting the frames in the predetermined shot to 3D images, if the image displaying device can display both 2D images and 3D images.

15. The method of claim 10, further comprising:

extracting a disk identifier from the disk;
transmitting the disk identifier to a server via a communication network; and
downloading metadata for disk management corresponding to the disk identifier from the server.

16. The method of claim 10, further comprising:

generating graphic information indicating that the 2D images of frames in the predetermined shot are converted to 3D images; and
overlaying the graphic information on the 3D images.

17. The method of claim 16, further comprising, if an image displaying mode of the image displaying device is a 2D image displaying mode:

generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 3D image displaying mode; and
transmitting the mode switching control signal to the image displaying device.

18. The method of claim 10, further comprising:

when metadata for generating a depth map with respect to the predetermined title does not exist, generating graphic information indicating that images in the predetermined title are 2D images; and
overlaying the graphic information on the 2D images.

19. The method of claim 18, further comprising, if an image displaying mode of the image displaying device is a 3D image displaying mode:

generating a mode switching control signal for changing the image displaying mode of the image displaying device to a 2D image displaying mode; and
transmitting the mode switching control signal to the image displaying device.

20. A method of transmitting metadata for disk management, performed by a server communicating with an image processing device via a communication network, the method comprising:

receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device;
searching for the metadata for disk management corresponding to the predetermined disk by using a disk identifier of the disk; and
transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device,
wherein the metadata for disk management corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if the metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.

21. An image processing device comprising:

a disk management metadata processing unit determining whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in a disk by using metadata for disk management;
a unit for decoding metadata for generating a depth map; and
a three dimensional (3D) image converting unit converting two dimensional (2D) images in the predetermined title to 3D images by using the metadata for generating a depth map corresponding to the predetermined title if the metadata for generating a depth map corresponding to the predetermined title exists.

22. The image processing device of claim 21, wherein the 3D image converting unit generates a depth map by using depth information corresponding to a frame in the predetermined title, wherein the depth information is extracted from the metadata for the generating of the depth map, and generates a left image and a right image corresponding to the frame by using the depth map and the frame.

23. The image processing device of claim 21, wherein the metadata for disk management comprises information regarding a location of the metadata for generating a depth map corresponding to the predetermined title.

24. The image processing device of claim 21, wherein the disk management metadata processing unit receives, from an image displaying device, information indicating whether the image displaying device can only display 2D images or can display both 2D images and 3D images, and

if the image displaying device can display both 2D images and 3D images, the 3D image converting unit converts 2D images in the predetermined title to 3D images.

25. The image processing device of claim 21, further comprising a communication unit exchanging data with a server via a communication network,

wherein the disk management metadata processing unit extracts a disk identifier from the disk, and
the communication unit transmits the disk identifier to the server and downloads metadata for disk management corresponding to the disk identifier from the server.

26. The image processing device of claim 21, further comprising:

a graphic information generating unit generating graphic information indicating that 2D images in the predetermined title are converted to 3D images; and
a blender overlaying the graphic information on the 3D images.

27. The image processing device of claim 26, wherein, if an image displaying mode of the image displaying device is a 2D image displaying mode, the disk management metadata processing unit generates a mode switching control signal for changing the image displaying mode of the image displaying device to a 3D image displaying mode.

28. The image processing device of claim 21, further comprising:

a graphic information generating unit generating graphic information indicating that images in the predetermined title are 2D images, if metadata for generating a depth map corresponding to the predetermined title does not exist; and
a blender overlaying the graphic information on the 2D images.

29. The image processing device of claim 28, wherein, if an image displaying mode of the image displaying device is a 3D image displaying mode, the disk management metadata processing unit generates a mode switching control signal for changing the image displaying mode of the image displaying device to a 2D image displaying mode.

30. The image processing device of claim 21, wherein the 3D image converting unit extracts shot information from the metadata for generating a depth map, the shot information being for classifying video frames in the predetermined title into shots, and converts frames in a predetermined shot from among the video frames in the predetermined title to 3D images by using the shot information, wherein the shot information is information for classifying, into a same group, a series of frames from which the composition of a current frame can be predicted based on that of a previous frame.

31. The image processing device of claim 30, wherein the metadata for generating a depth map comprises shot type information indicating whether frames in a shot are to be displayed as 2D images or 3D images for each shot, and

wherein the 3D image converting unit converts frames in the predetermined shot to the 3D images by using shot type information corresponding to the predetermined shot.

32. The image processing device of claim 30, wherein the 3D image converting unit generates a depth map by using depth information corresponding to frames in the predetermined title, wherein the depth information is extracted from the metadata for generating a depth map, and generates a left image and a right image corresponding to each frame by using the depth map and the corresponding frame.

33. The image processing device of claim 30, wherein the metadata for disk management comprises information regarding a location of the metadata for generating a depth map corresponding to the predetermined title.

34. The image processing device of claim 30, wherein the disk management metadata processing unit receives, from an image displaying device, information indicating whether the image displaying device can only display 2D images or can display both 2D images and 3D images, and

if the image displaying device can display both 2D images and 3D images, the 3D image converting unit converts the frames in the predetermined shot to 3D images.

35. The image processing device of claim 30, further comprising a communication unit for communicating with a server via a communication network,

wherein the disk management metadata processing unit extracts a disk identifier from the disk, and
the communication unit transmits the disk identifier to the server via the communication network and downloads metadata for disk management corresponding to the disk identifier from the server.

36. The image processing device of claim 30, further comprising:

a graphic information generating unit for generating graphic information indicating that 2D images in the predetermined shot are converted to 3D images; and
a blender for overlaying the graphic information on the 3D images.

37. The image processing device of claim 36, wherein, if an image displaying mode of an image displaying device is a 2D image displaying mode, the disk management metadata processing unit generates a mode switching control signal for changing the image displaying mode of the image displaying device to a 3D image displaying mode.

38. The image processing device of claim 30, further comprising:

a graphic information generating unit for generating graphic information indicating that images in the predetermined title are 2D images, if metadata for generating a depth map corresponding to the predetermined title does not exist; and
a blender for overlaying the graphic information on the 2D images.

39. The image processing device of claim 38, wherein, if an image displaying mode of an image displaying device is a 3D image displaying mode, the disk management metadata processing unit generates a mode switching control signal for changing the image displaying mode of the image displaying device to a 2D image displaying mode.

40. A server communicating with an image processing device via a communication network, the server comprising:

a transmitting/receiving unit receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device and transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device in response to the request;
a disk management metadata storage unit for storing metadata for disk management corresponding to a disk; and
a disk management metadata searching unit searching for the disk management metadata corresponding to the predetermined disk by using a disk identifier of the disk,
wherein the disk management metadata corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.

41. A computer readable recording medium having recorded thereon a program for executing a method of processing images, the method comprising:

determining whether metadata for generating a depth map corresponding to a predetermined title exists for each of a plurality of titles recorded in a disk by using metadata for disk management; and
if metadata for generating a depth map corresponding to the predetermined title exists, converting two-dimensional (2D) images in the predetermined title to three-dimensional (3D) images by using the metadata for generating a depth map.

42. A method by which a server, which communicates with an image processing device via a communication network, transmits metadata for disk management, the method comprising:

receiving a request for metadata for disk management corresponding to a predetermined disk from the image processing device;
searching for the metadata for disk management corresponding to the predetermined disk by using a disk identifier of the disk; and
transmitting the metadata for disk management corresponding to the predetermined disk to the image processing device,
wherein the metadata for disk management corresponding to the disk indicates whether metadata for generating a depth map corresponding to each of a plurality of titles recorded in the disk exists and, if metadata for generating a depth map exists, indicates a location of the metadata for generating a depth map corresponding to the titles.
Patent History
Publication number: 20100103168
Type: Application
Filed: Sep 22, 2009
Publication Date: Apr 29, 2010
Applicant: Samsung Electronics Co., Ltd (Suwon-si)
Inventors: Kil-soo JUNG (Osan-si), Sung-wook PARK (Seoul), Hyun-Kwon CHUNG (Seoul), Dae-jong LEE (Suwon-si)
Application Number: 12/564,201
Classifications
Current U.S. Class: Three-dimension (345/419)
International Classification: G06T 15/00 (20060101);