CONTENT PLAYING METHOD AND APPARATUS
A method and apparatus for realistically playing content stimulating the sight and hearing senses of a user corresponding to a location of the user, by determining a first location of the user, mapping a content space displayed on a display unit to correspond with an actual space in which the user is positioned based on the first determined location, determining a virtual viewpoint in the content space corresponding to a second location of the user, and playing content corresponding to the determined virtual viewpoint.
This application claims priority under 35 U.S.C. §119(a) to Korean Patent Application Nos. 10-2011-0070959 and 10-2011-0114883, which were filed in the Korean Intellectual Property Office on Jul. 18, 2011, and Nov. 7, 2011, respectively, the entire disclosure of each of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a content playing method and apparatus, and more particularly, to a method for playing content corresponding to a location of a user and an apparatus thereof.
2. Description of the Related Art
In recent years, the demand for Three-Dimensional (3D) image technology has increased, and with the more common use of digital broadcasting, stereoscopic image use in 3D TVs and 3D information terminals is being actively researched. In general, a stereoscopic image implemented through 3D technology is formed by the stereoscopic vision principle of the two eyes. Because the two eyes are spaced apart from each other by approximately 65 mm, binocular parallax acts as a main factor of depth. When the left and right eyes view different stereoscopic images, the two different stereoscopic images are transferred to the brain through the retinas, and the brain combines the two different stereoscopic images such that the user experiences the depth of the stereoscopic image. However, while a 3D TV is capable of showing a 3D image having a fixed viewpoint regardless of the location of a user, the TV cannot provide a realistic image in which the user feels present in the displayed scene.
SUMMARY OF THE INVENTION
Accordingly, the present invention has been made to solve the above-mentioned problems occurring in the prior art, and the present invention provides a method for playing realistic content stimulating at least one of the senses of a user, corresponding to a location of the user, and an apparatus thereof.
According to an aspect of the present invention, there is provided a content playing method including determining a first location of a user; mapping a content space displayed on a display unit to correspond with an actual space in which the user is present based on the first determined location; determining a virtual viewpoint in the content space corresponding to a second location of the user; and playing content corresponding to the determined virtual viewpoint.
According to another aspect of the present invention, there is provided a content playing apparatus including a content collecting unit for collecting content stimulating senses of a user; a content processor for performing a processing operation such that content input from the content collecting unit is played; a content playing unit for playing content input from the content collecting unit; a sensor for collecting information associated with a location of the user such that the content is played corresponding to the location of the user; and a controller for determining a virtual viewpoint in a virtual content space corresponding to the location of the user based on received information from the sensor and controlling such that content corresponding to the determined virtual viewpoint is played.
The above and other aspects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings.
Hereinafter, various embodiments of the content playing method and apparatus according to the present invention are described in detail with reference to the accompanying drawings. The same reference numbers are used throughout the drawings to refer to the same or similar elements. Detailed description of well-known functions and structures is omitted to avoid obscuring the subject matter of the present invention.
As used herein, the term “content” refers to anything that stimulates the senses of a user, such as the sense of sight, the sense of hearing, and the sense of touch. For example, the content may be an image, light, voice, or wind. Further, realistic type playback refers to playing content corresponding to a location of the user. That is, the user may experience the same content differently at different locations. For example, when the content is a car displayed on a screen, a front surface or a side surface of the car is viewed by the user depending on the user's location. The content playing method and apparatus are applicable to an electronic device having a function of playing content stimulating the senses of the user. Specifically, the content playing method and apparatus are applicable to a notebook computer, a desktop PC, a tablet PC, a smart phone, a High Definition TeleVision (HDTV), a smart TV, a 3-Dimensional (3D) TV, an Internet Protocol Television (IPTV), a stereoscopic sound system, a theater system, a home theater, a home network system, and the like.
The content playing method and apparatus provide a function for tracking location variation of the user, and a function for realistically playing content corresponding to the tracked location of the user. The content playing method and apparatus according to an embodiment of the present invention may provide a function that receives content, such as images, from a content provider through a Local Area Network (LAN), wireless LAN, or 3rd Generation (3G) or 4th Generation (4G) wireless communication network, stores the received images in a database, and plays them in real time. The images may include a stereoscopic image. The stereoscopic image may be a 3D movie, 3D animation, or 3D computer graphics. Further, the stereoscopic image may be multimedia combined with a stereoscopic sound.
The input unit 110 may include a plurality of input keys and function keys for receiving input of numeral or character information, and for setting various functions. The function keys may include arrow keys, side keys, and hot keys set to perform predetermined functions. Further, the input unit 110 creates and transfers a key event associated with user settings and function control of the content playing apparatus 100. The key event may include a power on/off event, a volume control event, a screen on/off event, etc. The controller 190 controls the foregoing elements in response to the key event.
The remote controller 120 creates various key events for operating the content playing apparatus 100, converts the created key event into a wireless signal, and transmits the wireless signal to the remote controller receiver 125. Specifically, the remote controller 120 of the present invention may create a start event for requesting realistic type playback and a termination event for terminating the realistic type playback. As described above, realistic type playback is defined as playing content corresponding to a location of the user. The remote controller receiver 125 converts a received wireless signal into the original key event, and transfers the original key event to the controller 190.
The sensor 130 collects information associated with the location of the user such that the location of the user may be tracked, and transfers the collected information to the controller 190. Specifically, the sensor 130 may be implemented by an image sensor or an optical sensor for sensing light of a predetermined wavelength, such as infrared rays. Moreover, the sensor 130 converts a sensed physical quantity into an electric signal, and an Analog to Digital Converter (ADC) converts the electric signal into data and transfers the data to the controller 190.
The content collecting unit 140 performs a function for collecting content stimulating the senses of the user. Specifically, the content collecting unit 140 performs a function for collecting images and sounds from a network or a peripheral device. That is, the content collecting unit 140 may include a broadcasting receiver 141 and an Internet communication unit 142. Specifically, the broadcasting receiver 141 selects one from a plurality of broadcasting channels, and demodulates a broadcasting signal of the selected broadcasting channel into the original broadcasting content. The Internet communication unit 142 includes a wired modem or a wireless modem for receiving various information for home shopping, home banking, on-line gaming, and MP3 use, and additional information related thereto. The Internet communication unit 142 may include a mobile communication module (e.g., a 3G mobile communication module, a 3.5G mobile communication module, or a 4G mobile communication module) and a near distance communication module (e.g., a Wi-Fi module).
The content processor 150 performs a processing function to play content from the content collecting unit 140. Specifically, the content processor 150 classifies input content into a stereoscopic image and a stereoscopic sound. The content processor 150 may include a sound processor for decoding and outputting the classified stereoscopic sound to a sound output unit 161, and an image processor for decoding the classified stereoscopic image into a left image and a right image and outputting the left image and the right image to an image display unit 162. Further, the content processor 150 may compress and transfer input content to the controller 190 under the control of the controller 190. Accordingly, the controller 190 transfers the compressed content to the memory 170. Specifically, the sound processor may control a direction or a distance of a stereoscopic sound according to the location of the user. In other words, the sound processor may change a type of the sound output from the sound output unit 161 according to the location of the user, or change the volume according to a type of the sound. The image processor may control brightness, stereoscopic sensation, and depth according to the location of the user.
The content playing unit 160 performs a function for playing content processed by the content processor 150. The content playing unit 160 may include a sound output unit 161 and an image display unit 162. The sound output unit 161 outputs a decoded stereoscopic sound, and includes a plurality of speakers, for example, 5.1 channel speakers. The image display unit 162 displays a stereoscopic image with depth, as if the stereoscopic image actually exists in a three-dimensional space between the image display unit 162 and the user, through a display unit for displaying a stereoscopic image and a 3D implementing unit for allowing the user to experience depth with respect to a displayed stereoscopic image. The display unit may be implemented as a Liquid Crystal Display (LCD), Organic Light Emitting Diodes (OLED), or Active Matrix Organic Light Emitting Diodes (AMOLED). The 3D implementing unit is a structural element, stacked on the display unit, that causes different images to be recognized by the left and right eyes. Generally, 3D implementing schemes are divided into a glasses scheme and a glasses-free scheme. The glasses scheme includes a color filter scheme, a polarizing filter scheme, and a shutter glasses scheme. The glasses-free scheme includes a lenticular lens scheme and a parallax barrier scheme. Because the 3D implementing schemes are known in the art, a detailed description thereof is omitted.
The memory 170 stores programs and data necessary for an operation of the content playing apparatus 100. The memory 170 may be configured by a volatile storage medium, a nonvolatile storage medium, or a combination thereof. The volatile storage medium includes a semiconductor memory such as RAM, DRAM, or SRAM, and the non-volatile storage medium may include a hard disk. Further, the memory 170 may be divided into a data area and a program area. Specifically, the data area of the memory 170 may store data created by the controller 190 according to use of the content playing apparatus 100, and may store content compressed in a predetermined format provided from the controller 190. The program area of the memory 170 may store an Operating System (OS) for booting the content playing apparatus 100 and operating respective elements, and applications for supporting various user functions, for example, a web browser for accessing an Internet server, an MP3 user function for playing sound sources, an image output function for playing photographs, and a moving image playback function. Specifically, the program area of the present invention may store a realistic type playback program. The realistic type playback program may include a routine for determining an initial location of a user, a routine for mapping an actual space to a content space based on the initial location, a routine for tracking location variation, a routine for determining a virtual viewpoint in the content space corresponding to the location of the user, and a routine for playing content corresponding to the virtual viewpoint in the content space. The initial location is defined as a reference value for mapping a content space to an actual space. The actual space is a 3D space in which the user and the display unit are located. The content space is a virtual space in which content displayed through the display unit exists.
Further, the virtual viewpoint is defined as a viewpoint of the user in a content space mapped with an actual space.
The interface unit 180 performs a function for connecting the content playing apparatus 100 with a peripheral device in a wired or wireless scheme. The interface unit 180 may include a ZigBee® module, a Wi-Fi module, or a Bluetooth® module. Specifically, the interface unit 180 may receive a control signal for realistic type playback from the controller 190 and transfer it to a peripheral device. That is, the controller 190 may control the peripheral device through the interface unit 180. The peripheral device may be a home network device, a stereoscopic sound device, a lamp, an air conditioner, or a heater. In other words, the controller 190 may control the peripheral device to play content stimulating the senses of the user, for example, the senses of touch, sight, and smell.
The controller 190 may control an overall operation of the content playing apparatus 100, and signal flow between internal structural elements of the content playing apparatus 100. Further, the controller 190 may control power supply from a battery to internal elements. Moreover, the controller 190 may execute various applications stored in the program area. Specifically, in the present invention, if a start event for realistic type playback is sensed, the controller 190 may execute the foregoing realistic type playback program. That is, if the realistic type playback program is executed, the controller 190 determines an initial location of the user and tracks location variation of the user. Further, the controller 190 maps a content space to an actual space based on the initial location, determines a virtual viewpoint corresponding to the tracked location, and controls the content processor 150 to play content corresponding to the virtual viewpoint. Furthermore, the controller 190 may control a peripheral device through the interface unit 180 to play content corresponding to the virtual viewpoint. The realistic type playback function of the controller 190 will now be described in detail.
The initiator 520 determines an initial location of the user, which is a reference value for mapping a content space to an actual space. That is, if a start event for realistic type playback is sensed, the initiator 520 determines coordinates input from the tracker 510 as an initial location of the user. Specifically, if the location of the user 303 does not change beyond a preset error after watching of the content starts, the initiator 520 may determine the location of the user 303 as the initial location. Further, if a predetermined key value is input from the remote controller 120, the initiator 520 may determine the location of the user at the input time of the predetermined key value as the initial location. Further, if a start event is input from the tracker 510, the initiator 520 may determine the location of the user at the input time of the start event as the initial location. To do this, the tracker 510 may detect a predetermined gesture of the user, for example, an action of lowering a hand after lifting it, using a template matching technique. If the predetermined gesture is detected, the tracker 510 informs the initiator 520 of the detection.
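The initiator's stability rule, determining the initial location once the tracked coordinates stay within a preset error for a preset time, can be sketched as follows. This is an illustrative sketch only; the function name, error bound, and hold count are assumptions, not values given in the description:

```python
import math

def find_initial_location(samples, error=0.05, hold=5):
    """Return the first tracked location that stays within `error` metres
    of its predecessor for `hold` consecutive samples, or None.

    samples : sequence of (x, y, z) user coordinates from the tracker
    """
    stable = 0
    for prev, cur in zip(samples, samples[1:]):
        dist = math.dist(prev, cur)  # Euclidean distance between samples
        stable = stable + 1 if dist <= error else 0
        if stable >= hold:
            return cur  # user has settled: this is the initial location
    return None
```

A user who walks to the sofa and sits still would produce a run of near-identical samples, and the last of them becomes the reference value for the space mapping.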
==X axis conversion==
FW = X_Ratio * DSW,  X_Ratio = FW / DSW
==Y axis conversion==
FH = Y_Ratio * DSH,  Y_Ratio = FH / DSH
==Z axis conversion==
FL = Z_Ratio * WD at time t0,  Z_Ratio = FL / WD at t0  Equation (1)
Further, the space mapper 530 may apply an adjustment to the mapping, as in Equation (2):
==X axis conversion==
FW=X_Ratio*(DSW+X_Adjustment)
X_Ratio=FW/(DSW+X_Adjustment)
==Y axis conversion==
FH=Y_Ratio*(DSH+Y_Adjustment)
Y_Ratio=FH/(DSH+Y_Adjustment)
==Z axis conversion==
FL=Z_Ratio*(WD at t0+Z_Adjustment)
Z_Ratio=FL/(WD at t0+Z_Adjustment) Equation (2)
DSW, DSH, and WD are values depending on the size of the display unit and the actual space. As illustrated in Equation (2), the space mapper 530 may add or subtract a predetermined adjustment to or from the foregoing values to extend or shorten the actual space mapped to the content space. In other words, the space mapper 530 may control the size of displayed content using the adjustment. The space mapper 530 may receive the adjustment from the remote controller 120 through the remote controller receiver 125 at any time, before or during realistic type playback.
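The scaling factors of Equations (1) and (2) can be condensed into one small helper. A minimal sketch, assuming the quantities named in the description (FW, FH, FL, DSW, DSH, WD) are available as plain numbers; the function and parameter names themselves are illustrative:

```python
def compute_ratios(fw, fh, fl, dsw, dsh, wd_t0,
                   x_adj=0.0, y_adj=0.0, z_adj=0.0):
    """Map the actual space to the content space (Equations (1) and (2)).

    fw, fh, fl : Focal Width, Focal Height, Focal Length of the content space
    dsw, dsh   : Display Screen Width and Height
    wd_t0      : Watching Distance at time t0 (the initial location)
    *_adj      : optional adjustments that extend or shorten the actual
                 space mapped to the content space (Equation (2))
    """
    x_ratio = fw / (dsw + x_adj)
    y_ratio = fh / (dsh + y_adj)
    z_ratio = fl / (wd_t0 + z_adj)
    return x_ratio, y_ratio, z_ratio
```

For example, a 0.8 m wide, 0.45 m tall screen watched from 2 m, mapped to a 1.6 m by 0.9 m focus surface at a 2 m focal length, gives ratios (2.0, 2.0, 1.0): each metre the user moves sideways moves the virtual viewpoint two metres in the content space.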
Δxu = xu,t+1 − xu,t,  Δxc = X_Ratio * Δxu,  xc,t+1 = xc,t + Δxc
Δyu = yu,t+1 − yu,t,  Δyc = Y_Ratio * Δyu,  yc,t+1 = yc,t + Δyc
Δzu = zu,t+1 − zu,t,  Δzc = Z_Ratio * Δzu,  zc,t+1 = zc,t + Δzc  Equation (3)
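Equation (3) amounts to scaling the user's displacement on each axis by the corresponding ratio and adding it to the virtual camera's coordinates. A minimal sketch under that reading (the names are illustrative; the description defines only the symbols):

```python
def update_virtual_viewpoint(cam, user_prev, user_now, ratios):
    """Move the virtual camera by the scaled user displacement (Equation (3)).

    cam       : (xc, yc, zc) current virtual-camera coordinates at time t
    user_prev : (xu, yu, zu) user location at time t
    user_now  : (xu, yu, zu) user location at time t+1
    ratios    : (X_Ratio, Y_Ratio, Z_Ratio) from the space mapper
    """
    return tuple(c + r * (now - prev)  # e.g. xc,t+1 = xc,t + X_Ratio * Δxu
                 for c, r, prev, now in zip(cam, ratios, user_prev, user_now))
```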
A content processing controller 550 receives a virtual viewpoint, namely, coordinates xc,t+1, yc,t+1, zc,t+1 of a virtual camera, from the virtual viewpoint determinator 540. Further, the content processing controller 550 may receive an angle adjustment of the virtual viewpoint, namely, an angle control value θ of the virtual camera, from the virtual viewpoint determinator 540. Moreover, the content processing controller 550 controls the content processor 150 based on the received information to adjust brightness, stereoscopic sensation, and depth of a stereoscopic image.
The content processing controller 550 also controls the content processor 150 to adjust a direction and a distance of a stereoscopic sound.
The foregoing content playing apparatus 100 may further include elements that are not mentioned above, such as a camera, a microphone, and a GPS receiving module. Because such structural elements vary widely according to the convergence trend of digital devices, not every possible element can be listed. However, the content playing apparatus 100 may include structural elements equivalent to the foregoing structural elements. Further, in the content playing apparatus 100 of the present invention, specific elements may be excluded from the foregoing arrangements or substituted by other elements according to the provided form. This will be easily understood by those skilled in the art.
That is, the user may be stimulated with a touch sense, and may experience being present in a real content space.
The space mapper 530 maps a content space to an actual space based on the determined initial location in step 1302. The virtual viewpoint determinator 540 computes location variation amounts (Δxu, Δyu, Δzu) of the user in step 1303. Next, the virtual viewpoint determinator 540 compares the computed location variation amounts of the user with a preset Minimum Transition Threshold (MTT) in step 1304. When, as a result of the comparison of step 1304, the computed location variation amount of the user is greater than the MTT, the process proceeds to step 1305. The virtual viewpoint determinator 540 determines a virtual viewpoint, namely, locations xc,t+1, yc,t+1, and zc,t+1 of a virtual camera, using Equation (3), and transfers it to the content processing controller 550 in step 1305.
The content processing controller 550 controls the content processor 150 based on the received virtual viewpoint in step 1306. That is, the content processing controller 550 controls the content processor 150 to play content corresponding to the received virtual viewpoint. Further, the content processing controller 550 controls the content processor 150 based on the received virtual viewpoint to adjust a direction or a distance of a stereoscopic sound in step 1306. The content processing controller 550 may also control a peripheral device, such as a home network device, based on the virtual viewpoint to adjust an intensity of wind, temperature, humidity, or brightness.
Next, the content processing controller 550 determines whether a termination event of realistic type playback is sensed in step 1307. If the termination event of realistic type playback is sensed at step 1307, a process for realistic playback is terminated. Conversely, if the termination event is not sensed, the process returns to step 1303.
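Steps 1303 through 1307 form a simple tracking loop: compute the user's displacement, compare it with the MTT, update the virtual viewpoint when the displacement is large enough, and repeat until a termination event. The sketch below is an assumed reading of that loop; the callbacks for the tracker, the content processing controller, and the termination check, as well as the MTT value, are illustrative placeholders:

```python
import math

def playback_loop(get_user_location, apply_viewpoint, is_terminated,
                  ratios, cam, mtt=0.02):
    """Run the realistic-playback loop of steps 1303-1307.

    get_user_location : callable returning the current (x, y, z) of the user
    apply_viewpoint   : callable applying a new virtual-camera position
    is_terminated     : callable returning True when playback should stop
    ratios            : (X_Ratio, Y_Ratio, Z_Ratio) from the space mapper
    cam               : initial virtual-camera coordinates
    mtt               : Minimum Transition Threshold (metres, assumed value)
    """
    prev = get_user_location()
    while not is_terminated():                           # step 1307
        now = get_user_location()                        # step 1303
        delta = [n - p for n, p in zip(now, prev)]
        if math.sqrt(sum(d * d for d in delta)) > mtt:   # step 1304
            cam = tuple(c + r * d                        # step 1305
                        for c, r, d in zip(cam, ratios, delta))
            apply_viewpoint(cam)                         # step 1306
            prev = now  # displacements below the MTT keep accumulating
    return cam
```

Keeping `prev` unchanged for sub-threshold moves means small jitters accumulate until they exceed the MTT, rather than being discarded outright.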
The content playing method according to an embodiment of the present invention as described above may be implemented in an executable program command form by various computer means and recorded in a computer-readable recording medium. The computer-readable recording medium may include a program command, a data file, and a data structure individually or in combination. The program command recorded in the recording medium may be specially designed or configured for the present invention, or may be known to and usable by a person having ordinary skill in the computer software field.
The computer-readable recording medium includes Magnetic Media such as a hard disk, floppy disk, or magnetic tape, Optical Media such as a Compact Disc Read Only Memory (CD-ROM) or Digital Versatile Disc (DVD), Magneto-Optical Media such as a floptical disk, and a hardware device, such as ROM, RAM, or flash memory, that stores and executes program commands. Further, the program command includes a machine language code created by a compiler and a high-level language code executable by a computer using an interpreter. The foregoing hardware device may be configured to be operated as at least one software module to perform an operation of the present invention, and vice versa.
A content playing method and an apparatus thereof according to the present invention have the effect of realistically playing content stimulating the senses of the user.
Although various embodiments of the present invention have been described in detail herein, many variations and modifications may be made without departing from the spirit and scope of the present invention, as defined by the appended claims.
Claims
1. A content playing method, the method comprising:
- determining a first location of a user;
- mapping a content space displayed on a display unit to correspond with an actual space in which the user is positioned based on the first determined location;
- determining a virtual viewpoint in the content space corresponding to a second location of the user; and
- playing content corresponding to the determined virtual viewpoint.
2. The method of claim 1, wherein playing content comprises displaying an image based on the determined virtual viewpoint.
3. The method of claim 2, wherein displaying an image comprises controlling at least one of brightness, stereoscopic sensation, and depth of the image to display the controlled image.
4. The method of claim 1, wherein playing content comprises outputting a sound based on the determined virtual viewpoint.
5. The method of claim 4, wherein outputting a sound comprises controlling at least one of a direction sense and a distance sense to output the controlled sound.
6. The method of claim 1, wherein the played content includes an image and a sound for stimulating at least a sight sense and a hearing sense of the user.
7. The method of claim 1, wherein determining a virtual viewpoint comprises:
- computing an amount of location variation of the user; and
- determining the virtual viewpoint corresponding to a changed location of the user when the computed location variation amount is greater than a preset Minimum Transition Threshold (MTT).
8. The method of claim 1, wherein mapping a content space comprises computing a coordinate transformation value between a coordinate system of the actual space and a coordinate system of the content space.
9. The method of claim 8, wherein determining a virtual viewpoint comprises determining the virtual viewpoint corresponding to the second location of the user using the coordinate transformation value.
10. The method of claim 8, wherein mapping a content space comprises:
- setting the coordinate system of the actual space using a horizontal length Display Screen Width (DSW) of a screen of the display unit, a vertical length Display Screen Height (DSH) of the screen, and a straight distance Watching Distance (WD) between the screen and the user;
- setting the coordinate system of the content space using a horizontal length Focal Width (FW) of a focus surface and a vertical length Focal Height (FH) of the focus surface, and a straight distance Focal Length (FL) between the focus surface and the virtual viewpoint; and
- computing the coordinate transformation value using the first location when the first location is determined.
11. The method of claim 10, wherein setting the coordinate system of the actual space comprises mapping the actual space, reduced or enlarged, to the content space by adding or subtracting an adjustment to or from at least one of the horizontal length, the vertical length, and the straight distance.
12. The method of claim 1, further comprising:
- computing an amount of location variation of the user based on the first location of the user;
- applying the computed location variation amount to a trigonometric function to compute an angle variation amount;
- rotating a direction of the virtual viewpoint by the computed angle variation amount; and
- playing content corresponding to the virtual viewpoint the direction of which is rotated.
13. The method of claim 1, wherein determining a virtual viewpoint comprises:
- tracking a location of the user; and
- determining a fixed location of the user as a first location of the mapping when the location of the user is fixed within a preset error for a preset time as a result of the tracking.
14. The method of claim 1, wherein determining a virtual viewpoint comprises determining the tracked location as the first location of the mapping when a start event for realistic type playback is sensed while tracking a location of the user.
15. The method of claim 1, wherein determining a virtual viewpoint comprises determining the tracked location as the first location of the mapping when a predetermined gesture of the user is sensed while tracking a location of the user.
16. A content playing apparatus, comprising:
- a content collecting unit for collecting content stimulating senses of a user;
- a content processor for performing a processing operation such that content input from the content collecting unit is played;
- a content playing unit for playing content input from the content collecting unit;
- a sensor for collecting information associated with a location of the user such that the content is played corresponding to the location of the user; and
- a controller for determining a virtual viewpoint in a virtual content space corresponding to the location of the user based on received information from the sensor and controlling such that content corresponding to the determined virtual viewpoint is played.
17. The apparatus of claim 16, wherein the controller comprises:
- an initiator for determining a first location of a user based on the received information from the sensor;
- a space mapper for mapping a content space displayed on a display unit to an actual space in which the user is positioned based on the first determined location;
- a virtual viewpoint determinator for determining a virtual viewpoint in the content space corresponding to a second location of the user; and
- a content processing controller for controlling the content processor to play content corresponding to the determined virtual viewpoint.
18. The apparatus of claim 17, wherein the content playing unit comprises an image display unit for displaying an image based on the determined virtual viewpoint.
19. The apparatus of claim 18, wherein the image display unit displays an image controlling at least one of brightness, stereoscopic sensation, and depth of the image to display the controlled image.
20. The apparatus of claim 17, wherein the content playing unit outputs a sound based on the determined virtual viewpoint.
21. The apparatus of claim 20, wherein the sound output unit controls at least one of a direction sense and a distance sense to output the controlled sound.
22. The apparatus of claim 16, wherein the content playing unit plays an image and a sound for stimulating at least a sight sense and a hearing sense of the user.
23. The apparatus of claim 17, wherein the virtual viewpoint determinator computes a location variation amount of the user based on the first location, applies the computed location variation amount to a trigonometric function to compute an angle variation amount, rotates a direction of the virtual viewpoint by the computed angle variation amount, and controls such that content corresponding to a virtual viewpoint the direction of which is rotated is played.
24. The apparatus of claim 17, wherein the controller further comprises a tracker for tracking the location of the user using the received information from the sensor, and the initiator determines a fixed location of the user as a first location of the mapping when the location of the user is fixed within a preset error for a preset time during location tracking of the user.
25. The apparatus of claim 17, wherein the controller further comprises a tracker for tracking the location of the user using the received information from the sensor, and the initiator determines the tracked location of the user as a first location of the mapping when a start event for realistic playback is sensed during location tracking of the user.
26. The apparatus of claim 17, wherein the controller further comprises a tracker for tracking the location of the user using the received information from the sensor, and the initiator determines the tracked location of the user as a first location of the mapping when a preset gesture of the user is sensed during location tracking of the user.
27. The apparatus of claim 17, further comprising an interface unit connecting with a peripheral device having a content playing function,
- wherein the content processing controller controls the peripheral device through the interface unit to play content corresponding to the virtual viewpoint.
Type: Application
Filed: Jan 23, 2012
Publication Date: Jan 24, 2013
Applicant: Samsung Electronics Co., Ltd. (Suwon-si)
Inventors: Sang Keun JUNG (Suwon-si), Hyun Cheol Park (Suwon-si), Moon Sik Jeong (Seongnam-si), Kyung Sun Cho (Seoul)
Application Number: 13/356,220
International Classification: A63F 9/24 (20060101);