PACKED I-FRAMES

Systems and methods for enabling playback control functions of a media player are disclosed. For example, a user of a client device receiving streaming playback of a video stream may perform rewind and fast forward control functions. The client device may implement these playback control functions by retrieving an enhanced playback segment. Using the enhanced playback segment, the media player may display selected frames at a predetermined interval while maintaining a visual cadence that is pleasing to a viewer. In the described embodiments, a client device may render a video stream, receive a command to control a fast forward or rewind playback mode for the video stream, and retrieve, from a distribution server or associated edge cache, one or more enhanced playback segments adapted to implement the user command.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application is a non-provisional application of U.S. Provisional Application Ser. No. 62/005,803, filed May 30, 2014, the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

The present invention generally relates to streaming digital media and, in particular, to the organization of coded video data among transmission segments to support enhanced playback functions for streaming media.

Streaming digital media from a remote server to a client device over a communication network is an increasingly popular way for retrieving and viewing various digital media, which may include audio and video streams. Within a media player, users are accustomed to traditional playback control functions, such as rewind and fast forward. To implement these playback control functions for digital media, I-frames are selected at predetermined intervals depending on the fast forward or rewind speed.

However, implementing playback control functions for streaming digital media presents difficulties that are not encountered for their locally-stored counterparts. For example, when implementing fast forward functions, upcoming frames may not yet be stored at a client device. As a result, the client device must retrieve frames from the remote server as part of the streaming operation. The I-frames used for fast forward and rewind functions are intermixed with too many frames of other types to be downloaded efficiently. In addition, characteristics of typical network connections can present further difficulties in retrieving these frames in a manner compatible with achieving desirable qualities of the control functions. For example, it is typically desirable to perform rewind and fast forward functions by displaying frames selected at a predetermined interval in order to achieve a visual cadence that is pleasing to a viewer (e.g., not too choppy). However, retrieving frames over a communications network often involves variable and unpredictable transmission latency. Such variability and unpredictability have, thus far, caused processes dependent upon those retrieved frames, such as the implementation of playback control functions, to be unreliable.

In light of the drawbacks discussed above, the inventors have developed improved systems and methods for implementing playback control functions for streaming media.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a simplified block diagram of an example media distribution system suitable for use with the present invention.

FIG. 2 illustrates an example media player application hosted on a client device according to an example embodiment.

FIG. 3 illustrates a simplified user interface of a media player that implements media stream playback control functions according to an example embodiment.

FIG. 4 illustrates a video stream suitable for use with the present invention.

FIG. 5 illustrates the composition of an enhanced playback segment according to an example embodiment.

FIGS. 6A-6E illustrate various video frame display timelines during normal speed forward and other control playback modes of a media stream according to example embodiments.

FIG. 7 illustrates a method for implementing a playback control function according to an example embodiment.

FIG. 8 illustrates a simplified schematic view of an example client device according to an example embodiment.

DETAILED DESCRIPTION

Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Wherever possible, like reference numbers will be used for like elements.

Systems and methods for enabling playback control functions of a media player are disclosed. For example, a user of a client device receiving a video stream may perform rewind and fast forward control functions. The client device may implement these playback control functions by retrieving a dedicated type of segment, referred to herein as a packed I-frame segment, that supports these functions. Using the packed I-frame segment, the media player may display frames selected at a predetermined interval while maintaining a visual cadence that is pleasing to a viewer. In addition, the use of a packed I-frame segment alleviates transmission problems associated with existing communication systems.

In the described embodiments, a client device may, during a normal playback mode, retrieve from a media source a plurality of transmission segments that contain coded video data of the media asset, the coded video data representing frames of asset content at a first temporal spacing, and, during an enhanced playback mode, retrieve from the media source at least one transmission segment that contains coded video data of the media asset, the coded video data representing frames of asset content at a second temporal spacing larger than the first, wherein all frames of the transmission segment are coded by intra-coding.
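
By way of illustration only, the following Python sketch outlines how a client might choose between normal playback segments and packed I-frame segments depending on the active playback mode; the names (PlaybackMode, next_segment_url) and the segment record fields are hypothetical and are not drawn from this disclosure.

```python
from enum import Enum, auto

class PlaybackMode(Enum):
    NORMAL = auto()
    ENHANCED = auto()   # fast forward or rewind

def next_segment_url(stream_entry, mode, position_sec):
    """Pick the next segment to download for the current playback mode.

    `stream_entry` is assumed to describe one stream of the asset and to list
    both its normal playback segments and its packed I-frame segments, each
    with a URL, a start time, and a duration (in seconds of content).
    """
    segments = (stream_entry["normal_segments"]            # I-, P- and B-coded frames
                if mode is PlaybackMode.NORMAL
                else stream_entry["packed_iframe_segments"])  # intra-coded frames only
    for seg in segments:
        if seg["start"] <= position_sec < seg["start"] + seg["duration"]:
            return seg["url"]
    return None
```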

FIG. 1 is a simplified block diagram of an example media distribution system 100 suitable for use with the present invention.

The system 100 may include a distribution server 110 and a client device 120 connected via a communication network 130. The distribution server 110 may include a storage system 115 that may store a variety of media streams str1-str3 (e.g., music, movies, television shows, advertisements, etc.) for download by the client device 120. The distribution server 110 may transmit media streams str1-str3 to the client device 120 via the network 130 in response to client requests. For example, the streaming media may be pre-stored at the distribution server 110, as media assets 117. In another example, “live streamed” data may be stored at the distribution server 110 on a real-time basis.

One or more media assets 117 may be stored within the storage system 115. Media assets 117 may be transmitted as a plurality of streams (e.g., str1-str3), each stream having a plurality of coded media segments (e.g., segments 1.1-1.n, 2.1-2.n, 3.1-3.n). Each media stream (e.g., str1) may be coded at a respective bitrate, frame rate, and/or frame size. In the illustrated embodiment, each of the plurality of segments 1.1-1.n, 2.1-2.n, 3.1-3.n may include media content of a predetermined duration (e.g., six seconds). In some instances, the distribution server 110 may store multiple copies of a particular media stream, especially video media streams. The example illustrated in FIG. 1 depicts three coded video streams str1-str3 that are coded at varying bitrates (e.g., 4 Mb/s, 2 Mb/s and 500 Kb/s).

In addition, each of the plurality of segments 1.1-1.n, 2.1-2.n, 3.1-3.n may contain frames coded by a predetermined protocol. For example, video data in each segment 1.1-1.n, 2.1-2.n, 3.1-3.n may be coded according to ITU-T H.265 (commonly “HEVC”), H.264, H.263 or other standard or proprietary protocols. Coded video data typically is coded according to predictive coding techniques that exploit spatial and/or temporal redundancy in a source video sequence. Accordingly, frames of a source video sequence may be coded according to intra-prediction techniques (I-coding) or inter-prediction techniques (often, P- and/or B-coding). I-coded frames may be decoded without reference to other frames in the coded video data, but P-coded and B-coded frames cannot be decoded without reference to other frames. P- and B-coding often achieve lower bit rates for individual frames and, therefore, coded video data often has a higher percentage of P- and B-coded frames than I-coded frames. For convenience, these segments are called “normal playback segments” herein.

Embodiments of the present invention may provide, within a distribution server 110, a type of segment to support enhanced playback control functions, called enhanced transmission or “packed I-frame” segments, for convenience. An enhanced transmission segment may store I-coded frames of a media stream spaced at temporal intervals that correspond to the enhanced playback mode being supported. Thus, in an example where segments 1.1-1.n of a media stream str1 are coded at 30 frames per second with various distributions of I-, P- and B-coded frames, a packed I-frame segment IS1 may support fast forward and rewind playback modes. In this example, the packed I-frame segment IS1 consists of sampled I-frames originating from stream str1.

In some implementations, the normal playback segments, by convention, may include I-frames as the first coded frame in each segment and include other I-coded frames at regular intervals. For example, the distribution server 110 may operate according to a segmentation policy that requires two I-coded frames to appear in every segment and that those I-coded frames appear at regularly-spaced time intervals in display order.

Packed I-frame segments IS1-IS3 may be generated in a variety of ways. For example, I-frames may be copied from normal playback segments 1.1-1.n, 2.1-2.n, 3.1-3.n to the packed I-frame segments IS1-IS3, respectively. In this example, playback of I-frames is used to implement playback control functions. As a result, cache utilization inside the communication network is improved and unpredictable latencies are reduced. In another example, normal playback segments 1.1-1.n, 2.1-2.n, 3.1-3.n may be re-encoded in order to produce I-frames more frequently. In this example, only frames being re-encoded as I-frames may be processed. By rendering I-frames at shorter intervals, playback control functions may be provided with a more pleasing cadence to the user. In addition, this approach avoids copying of I-frames from normal playback segments 1.1-1.n, 2.1-2.n, 3.1-3.n.
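
As an illustration of the first approach above (copying I-frames out of the normal playback segments), the following sketch gathers intra-coded frames and groups them into packed I-frame segments; the frame records and the 45-frame grouping are assumptions made for the example only.

```python
def build_packed_iframe_segments(normal_segments, frames_per_packed_segment=45):
    """Collect the I-coded frames of the normal segments into packed segments.

    `normal_segments` is assumed to be a list of dicts of the form
    {"frames": [{"type": "I" | "P" | "B", "data": ...}, ...]}.
    """
    iframes = [
        frame
        for segment in normal_segments
        for frame in segment["frames"]
        if frame["type"] == "I"          # keep only intra-coded frames
    ]
    # Split the gathered I-frames into standalone packed I-frame segments.
    return [
        {"frames": iframes[i:i + frames_per_packed_segment]}
        for i in range(0, len(iframes), frames_per_packed_segment)
    ]

# The second approach described above (re-encoding at a reduced frame rate so
# that every retained frame is intra-coded) would replace the filtering step
# with a call into an encoder and is not shown here.
```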

The distribution server 110 also may store a manifest file 116 that, for each media asset 117, provides configuration information for associated streams str1-str3 (e.g., bitrate, size, etc.) and, for each stream, identifies coded video segments and packed I-frame segments that correspond to the stream. The manifest file 116 may also correlate segments of coded video with corresponding segments having varying bitrates. Manifest file 116 may identify segments by a network location resource such as a URL.
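
A hypothetical, heavily simplified manifest for a single media asset might look like the following; all field names, times, and URLs are illustrative assumptions and do not reflect any particular manifest format. Each entry of the "streams" list has the per-stream shape assumed by the earlier next_segment_url sketch.

```python
# Illustrative manifest data for one media asset (asset 117); values assumed.
manifest = {
    "asset_id": "asset-117",
    "streams": [
        {
            "bitrate_bps": 4_000_000,   # e.g., str1
            "normal_segments": [
                {"url": "str1/seg_1.1", "start": 0.0, "duration": 6.0},
                {"url": "str1/seg_1.2", "start": 6.0, "duration": 6.0},
            ],
            "packed_iframe_segments": [
                {"url": "str1/IS1", "start": 0.0, "duration": 12.0},
            ],
        },
        {
            "bitrate_bps": 500_000,     # e.g., str3
            "normal_segments": [
                {"url": "str3/seg_3.1", "start": 0.0, "duration": 6.0},
                {"url": "str3/seg_3.2", "start": 6.0, "duration": 6.0},
            ],
            "packed_iframe_segments": [
                {"url": "str3/IS3", "start": 0.0, "duration": 12.0},
            ],
        },
    ],
}
```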

The client device 120 may be any electronic device. The client device may include a media player adapted to download streaming media from the distribution server 110. The distribution server 110 may transmit media to the client device 120 via channel 140 and communication network 130. The client device 120 decodes the downloaded segments such that they may be rendered for playback.

Although the client device 120 is illustrated as a tablet computer in FIG. 1, client devices may be provided as a variety of computing platforms, including smartphones, personal computers, laptop computers, media players, other servers, and/or dedicated video conferencing equipment. The network 130 represents any number of networks that convey coded video data among the distribution server 110 and the client device 120, including, for example, wireline and/or wireless communication networks. A communication network 130 may exchange data in circuit-switched and/or packet-switched channels. Representative networks include telecommunications networks, local area networks, wide area networks and/or the Internet. For the purposes of the present discussion, the architecture and topology of the network 130 are immaterial to the operation of the present invention unless discussed hereinbelow.

FIG. 1 illustrates a simplified implementation of media distribution system 100. The example architecture depicted in FIG. 1 may be expanded to accommodate multiple distribution servers, edge caches associated with the distribution servers, content servers, coding servers, client devices, communications networks, etc. In some implementations, some servers may be provided as dedicated media coding servers and other servers as dedicated media transmission servers.

FIG. 2 illustrates an example media player application hosted on a client device according to an example embodiment.

The media player application 200 may include program instructions that are executable by a processor of the client device to provide a user interface and media stream control functions to a user of the client device. The media player application 200 may include a media playback component 201, configured to provide the user interface and associated functionality of the media player that enables the user to select and perform playback control functions of media streams. The media player application 200 may also include a media input/output (I/O) component 202, configured to provide media stream requests to and receive media streams from the media distribution server 110.
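
For illustration only, the two components described above might be organized as follows; the class and method names are hypothetical and not part of this disclosure.

```python
class MediaPlaybackComponent:
    """User interface and playback control functions (component 201)."""
    def on_control_input(self, command):
        pass  # e.g., play, pause, stop, rewind, fast forward

class MediaIOComponent:
    """Media stream requests to, and responses from, the distribution server (202)."""
    def request_segment(self, url):
        pass  # issue a request for the segment at `url`

class MediaPlayerApplication:
    """Aggregates the two components of media player application 200."""
    def __init__(self):
        self.playback = MediaPlaybackComponent()   # component 201
        self.io = MediaIOComponent()               # component 202
```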

FIG. 3 illustrates a simplified user interface 300 of a media player that implements media stream playback control functions according to an example embodiment.

The user interface 300 can include a video display portion 301 to display a video or other visual components of a media stream, and playback control buttons 302 to receive media stream playback control inputs from the user, including one or more of a normal speed forward playback control, a stop playback control, a pause playback control, a rewind playback control, and a fast forward playback control.

FIG. 4 illustrates a video stream 400 suitable for use with the present invention. As shown in FIG. 4, the example video stream 400 may be comprised of a plurality of normal playback segments 410.1-410.n. Each of these segments 410.1-410.n may further include a plurality of frames 420.1-420.n. A variety of frame types are typically assembled to construct a segment, including I-frames (intra-coded), P-frames (predicted) and/or B-frames (bi-predictive).

During coding, a coding server (not shown) may assign to each frame a certain frame type, which can affect the coding techniques that are applied to the respective frame. For example, frames often are assigned as one of the following frame types. An I-frame generally operates as a standalone frame that is coded and decoded without using any other frame in the sequence as a source of prediction. As an I-frame may contain information for rendering an entire image, an I-frame is typically larger than other frame types. By contrast, P-frames and B-frames generally include references to other frames. A P-frame is one that is coded and decoded using a single previously-coded frame in the sequence as a candidate source of prediction, and a B-frame is one that is coded and decoded using a pair of earlier-coded frames in the sequence as candidate sources of prediction.
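
The decode dependencies described above can be summarized in a short, purely illustrative sketch; the table and function below are assumptions made for exposition, not part of any coding standard.

```python
# Number of other frames each frame type uses as a prediction source when it
# is decoded; illustrative only.
PREDICTION_SOURCES = {
    "I": 0,  # standalone: decodable without reference to any other frame
    "P": 1,  # uses a single previously-coded frame
    "B": 2,  # uses a pair of earlier-coded frames
}

def decodable_in_isolation(frame_type: str) -> bool:
    """Only intra-coded frames can be decoded on their own, which is why the
    enhanced playback segments described here contain I-frames exclusively."""
    return PREDICTION_SOURCES[frame_type] == 0
```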

The example media stream 400 may be utilized to perform a normal speed forward playback of video media. In addition, the example media stream 400 may also be utilized to perform playback control functions according to conventional techniques or according to embodiments of the present invention.

FIG. 5 illustrates the composition of an enhanced playback segment according to an example embodiment. As shown in FIG. 5, the example video stream 500 may be comprised of a plurality of segments 510.1-510.n. Each of these individual segments 510.1-510.n may further include a plurality of frames 520.1-520.n including I-, P-, and/or B-frames.

In addition, an enhanced playback segment 530 is composed of I-frames from normal playback segments 510.1-510.n. The enhanced playback segment 530 may be a standalone media segment that includes I-frames from normal playback segments 510.1-510.n. For example, an enhanced playback segment may be generated for a predetermined number of media segments. Alternatively, an enhanced playback segment may be generated to store a predetermined number of I-frames. In either example, multiple enhanced playback segments may be used to separately store I-frames of a stream 500.

Consider an example where each of the normal playback segments 510.1-510.n may include approximately 180 frames, of which only a few (e.g., 2 or 3) are I-frames, such that an I-frame is rendered every 2-4 seconds during normal playback. As I-frames may be substantially larger than P- and B-frames, each enhanced playback segment 530 may include fewer frames than normal playback segments 510.1-510.n. For example, an enhanced playback segment may include 45, 60, or 90 frames.

The use of an enhanced playback segment 530 may cause the rendering of an I-frame every half second to every eighth of a second or faster. For example, when implementing fast forward and rewind control functions at 16×, the rendering of eight I-frames per second produces a visual cadence that is pleasing to a viewer. In another example, when implementing fast forward and rewind control functions at 4×, rendering only two I-frames per second may appear too fragmented to some viewers, in which case I-frames coded at shorter intervals, as described above, may be used to restore a pleasing cadence.
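
The cadence figures in the preceding paragraphs follow from simple arithmetic, sketched below for illustration; the I-frame spacings are expressed in content time, and the function and the third example line (denser 0.5-second spacing) are assumptions consistent with the description above.

```python
def displayed_iframes_per_second(playback_speed: float, iframe_spacing_sec: float) -> float:
    """I-frames shown per second of wall-clock time when a trick-play mode
    renders every I-frame while advancing through content at `playback_speed`."""
    # Each wall-clock second covers `playback_speed` seconds of content, and the
    # content contains one I-frame per `iframe_spacing_sec` seconds.
    return playback_speed / iframe_spacing_sec

print(displayed_iframes_per_second(16, 2.0))   # 8.0 at 16x with an I-frame every 2 s of content
print(displayed_iframes_per_second(4, 2.0))    # 2.0 at 4x with the same spacing (choppier)
print(displayed_iframes_per_second(4, 0.5))    # 8.0 at 4x if I-frames are coded every 0.5 s
```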

FIGS. 6A-6E illustrate various video frame display timelines during normal speed forward and other control playback modes of a media stream according to example embodiments.

FIG. 6A depicts an embodiment of a timeline of display of video frames by the media player program during a normal speed forward playback of a video component of a media stream that can be produced utilizing media segment files (e.g., 410.1-410.n or 510.1-510.n). In the depicted timeline, the normal speed playback includes displaying each frame I1, P1, B1, I2, P2, B2, I3, P3, B3, . . . of the video stream in order and at a predetermined rate of display. For example, each frame I1, P1, B1, I2, P2, B2, I3, P3, B3, . . . may be displayed for 1/30th of a second during normal playback mode.

FIGS. 6B-6E depict embodiments of a timeline of display of video frames by the media player program during other control playback modes that can be produced by utilizing enhanced playback segments (e.g., 530). During other control playback modes, frames may still be displayed for 1/30th of a second, but the interval between selected frames may be varied. For example, every I-frame or every 10th I-frame may be displayed.

FIG. 6B depicts an embodiment of a relatively slower speed fast forward playback mode that includes displaying each I-frame I1, I2, I3, I4 . . . of the video in a forward order. FIG. 6C depicts an embodiment of a relatively higher speed fast forward playback mode that includes displaying a subset of I-frames I10, I20, I30, I40 . . . of the video in a forward order. In the example of FIG. 6C, every 10th I-frame may be displayed. FIG. 6D depicts an embodiment of a relatively slower speed rewind playback mode that includes displaying only each I-frame I80, I79, I78, I77 . . . of the video stream in a reverse order. FIG. 6E depicts an embodiment of a relatively higher speed rewind playback mode that includes displaying a subset of the I-frames I80, I70, I60, I50 . . . of the video in a reverse order. In the example of FIG. 6E, every 10th I-frame may be displayed.
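
The four trick-play patterns of FIGS. 6B-6E can be expressed as a single selection rule, sketched below for illustration; the generator and its parameters are hypothetical.

```python
def trickplay_sequence(iframes, step=1, reverse=False):
    """Yield I-frames for a fast forward (forward) or rewind (reverse) mode.

    step=1  -> every I-frame      (slower trick-play speed, FIGS. 6B and 6D)
    step=10 -> every 10th I-frame (higher trick-play speed, FIGS. 6C and 6E)
    """
    ordered = reversed(iframes) if reverse else iframes
    for index, frame in enumerate(ordered):
        if index % step == 0:
            # Each selected frame is displayed for a fixed interval, e.g., 1/30 s.
            yield frame
```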

Using packed I-frames, the media streaming system can provide improved techniques for requesting video frames for control playback functions of a media player. The embodiments reduce or eliminate problems associated with conventional systems, such as delays associated with the variable or unpredictable transmission of numerous media segments in the communication network.

FIG. 7 illustrates a method 700 for implementing a playback control function according to an example embodiment.

At 701, a media asset may be rendered by a client device. Rendering generally includes buffering and decoding the coded segments of the media asset so that the resulting decoded frames may be displayed by a media player.

At 702, the client device may receive a user command indicating a playback control function, including one or more of a normal speed forward playback control, a stop playback control, a pause playback control, a rewind playback control, and a fast forward playback control.

Next, at 703, the media player determines whether the user-selected command includes one of a fast forward or rewind playback control. If so, the media player may request one or more packed I-frame segments that may be utilized to implement the user-selected control command, at 704. Otherwise, the media player may request additional media segments if the user command denotes a normal speed forward playback control. Additional segments may not be needed if the user selects a stop or pause command. Lastly, at 706, the media player implements the user-selected control.
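
For illustration only, the decision made at 703-706 might be expressed as follows; the command strings and helper methods (request_packed_iframe_segments, request_normal_segments, apply_control) are hypothetical and are not part of the disclosed method.

```python
def handle_playback_command(player, command):
    """Dispatch a user playback control command (703-706 of FIG. 7)."""
    if command in ("fast_forward", "rewind"):
        # 704: fetch packed I-frame segments to drive the trick-play mode.
        segments = player.request_packed_iframe_segments(command)
    elif command == "play":
        # Normal speed forward playback continues from normal playback segments.
        segments = player.request_normal_segments()
    else:
        # Stop / pause: no additional segments are needed.
        segments = None
    # 706: carry out the user-selected control with whatever was retrieved.
    player.apply_control(command, segments)
```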

FIG. 8 is a simplified schematic view of a client device 800 according to an example embodiment.

Client device 800 may include a processing system and control circuitry 802, memory 804, communications circuitry 806, input/output components 808, and display assembly 810. The client device 800 may also include a bus 803 that may provide a data transfer path for transferring data and/or power to, from, or between various other components of client device 800.

Processing system 802 may control the operation of components within client device 800. For example, processing system 802 may receive input signals from an input component 808 and/or drive output signals to display assembly 810. In another example, processing system 802 may execute instructions for one or more applications APP0-APPN, including media streaming applications, stored in memory 804.

Memory 804 stores the operating system OS of the client device 800 as well as one or more applications (APP0, APP1 . . . APPN). Included among the applications may be a streaming application service. Memory 804 may include one or more storage mediums, including for example, a hard-drive, flash memory, permanent memory such as read-only memory (“ROM”), semi-permanent memory such as random access memory (“RAM”), any other suitable type of storage component, or any combination thereof. Memory 804 may include cache memory, which may be one or more different types of memory used for temporarily storing data for electronic device applications. Memory 804 may store media streams (e.g., music, image, and video files), software, firmware, preference information (e.g., media playback preferences), wireless connection information, subscription information (e.g., information that tracks podcasts, television shows, or other media a user subscribes to), etc.

Communications circuitry 806 may be provided to enable the client device 800 to communicate with one or more other electronic devices or servers using any suitable communications protocol. For example, communications circuitry 806 may support Wi-Fi (e.g., an 802.11 protocol), Ethernet, Bluetooth, high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, transmission control protocol/internet protocol (“TCP/IP”), hypertext transfer protocol (“HTTP”), real-time transport protocol (“RTP”), real-time streaming protocol (“RTSP”), and other standardized or proprietary communications protocols, or combinations thereof.

In some embodiments, one or more components of electronic device 800 may be combined or omitted. Moreover, electronic device 800 may include additional components not depicted in FIG. 8. Client device 800 may be any stationary or portable electronic device, including tablet computers, smartphones, laptop computers, personal computers, set-top boxes, wearable electronic devices, and other consumer electronic products designed to decode and render coded media streams.

Electronic device 800 may also include one or more output components 808 that may render information (e.g., audio and video) to a user of device 800. An output component of client device 800 may take various forms, including, but not limited to, audio speakers, headphones, visual displays via display assembly 810, etc.

Display assembly 810 may include any suitable type of display or interface for presenting visible information to a user of client device 800. In some embodiments, display assembly 810 may include an embedded or coupled display. Display assembly 810 may include, for example, a touch screen, a liquid crystal display (“LCD”), a light emitting diode (“LED”) display, an organic light-emitting diode (“OLED”) display, or any other suitable type of display.

The foregoing discussion has described operation of the embodiments of the present invention in the context of distribution servers, client devices, content servers, and coding servers. Commonly, these components are provided as electronic devices. They can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors. Alternatively, they can be embodied in computer programs that execute on personal computers, notebook computers or computer servers. Similarly, decoders can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors, or they can be embodied in computer programs that execute on personal computers, notebook computers or computer servers. Decoders commonly are packaged in consumer electronics devices, such as gaming systems, DVD players, portable media players and the like and they also can be packaged in consumer software applications such as video games, browser-based media players and the like. And, of course, these components may be provided as hybrid systems that distribute functionality across dedicated hardware components and programmed general purpose processors as desired.

It will be apparent to those skilled in the art that various modifications and variations can be made in the systems and methods of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims

1. A method of rendering a media asset at a media player, comprising:

during a normal playback mode, retrieving from a media source a plurality of transmission segments that contain coded video data of the media asset, the coded video data representing frames of asset content at a first temporal spacing;
during an enhanced playback mode, retrieving from the media source at least one transmission segment that contains coded video data of the media asset, the coded video data representing frames of asset content at a second temporal spacing larger than the first, wherein all frames of the transmission segment are coded by intra-coding;
decoding the coded video data of the retrieved segments; and
rendering the decoded video data on a display of the media player.

2. (canceled)

3. (canceled)

4. The method of claim 1, wherein the first transmission segment downloaded during the enhanced playback mode is identified based on a portion of the media asset that was being rendered in normal playback mode when the enhanced playback mode was engaged.

5. The method of claim 1, further comprising

toggling between the normal playback mode and the enhanced playback mode in response to operator commands,
wherein, on each toggling between playback modes, a next transmission segment to be retrieved is identified based on a portion of the media asset that was being rendered when the toggling occurred.

6. The method of claim 1, wherein the second temporal spacing is at least double the first temporal spacing.

7. (canceled)

8. (canceled)

9. The method of claim 1, wherein all frames of the transmission segment are selected from the plurality of transmission segments.

10. The method of claim 1, wherein the transmission segment is generated by re-encoding the plurality of transmission segments using only intra-coding at a reduced frame rate.

11. A non-transitory computer readable medium storing a media streaming application having playback control functions, the media streaming application executable by at least one processing system, the media streaming application comprising instructions for:

during a normal playback mode, retrieving from a media source a plurality of transmission segments that contain coded video data of the media asset, the coded video data representing frames of asset content at a first temporal spacing;
during an enhanced playback mode, retrieving from the media source at least one transmission segment that contains coded video data of the media asset, the coded video data representing frames of asset content at a second temporal spacing larger than the first, wherein all frames of the transmission segment are coded by intra-coding;
decoding the coded video data of the retrieved segments; and
rendering the decoded video data on a display of the media player.

12. The non-transitory computer readable medium of claim 11, wherein the enhanced playback mode is initiated in response to a user command to enter a fast forward playback mode.

13. The non-transitory computer readable medium of claim 11, wherein the enhanced playback mode is initiated in response to a user command to enter a rewind playback mode.

14. The non-transitory computer readable medium of claim 11, wherein the first transmission segment downloaded during the enhanced playback mode is identified based on a portion of the media asset that was being rendered in normal playback mode when the enhanced playback mode was engaged.

15. The non-transitory computer readable medium of claim 11, further comprising

toggling between the normal playback mode and the enhanced playback mode in response to operator commands,
wherein, on each toggling between playback modes, a next transmission segment to be retrieved is identified based on a portion of the media asset that was being rendered when the toggling occurred.

16. (canceled)

17. (canceled)

18. (canceled)

19. The non-transitory computer readable medium of claim 11, wherein all frames of the transmission segment are selected from the plurality of transmission segments.

20. The non-transitory computer readable medium of claim 11, wherein the transmission segment is generated by re-encoding the plurality of transmission segments using only intra-coding at a reduced frame rate.

21. An electronic device comprising:

a processing system;
memory storing one or more programs for execution by the processing system, the one or more programs including instructions for:
during a normal playback mode, retrieving from a media source a plurality of transmission segments that contain coded video data of the media asset, the coded video data representing frames of asset content at a first temporal spacing;
during an enhanced playback mode, retrieving from the media source at least one transmission segment that contains coded video data of the media asset, the coded video data representing frames of asset content at a second temporal spacing larger than the first, wherein all frames of the transmission segment are coded by intra-coding;
decoding the coded video data of the retrieved segments; and
rendering the decoded video data on a display of the media player.

22. (canceled)

23. (canceled)

24. The electronic device of claim 21, wherein the first transmission segment downloaded during the enhanced playback mode is identified based on a portion of the media asset that was being rendered in normal playback mode when the enhanced playback mode was engaged.

25. The electronic device of claim 21, further comprising

toggling between the normal playback mode and the enhanced playback mode in response to operator commands,
wherein, on each toggling between playback modes, a next transmission segment to be retrieved is identified based on a portion of the media asset that was being rendered when the toggling occurred.

26. (canceled)

27. The electronic device of claim 21, wherein the second temporal spacing is at least four times the first temporal spacing.

28. The electronic device of claim 21, wherein the second temporal spacing is at least eight times the first temporal spacing.

29. The electronic device of claim 21, wherein all frames of the transmission segment are selected from the plurality of transmission segments.

30. The electronic device of claim 21, wherein the transmission segment is generated by re-encoding the plurality of transmission segments using only intra-coding at a reduced frame rate.

31. (canceled)

Patent History
Publication number: 20150350622
Type: Application
Filed: Sep 30, 2014
Publication Date: Dec 3, 2015
Inventors: Roger N. Pantos (Cupertino, CA), Zhenheng Li (San Jose, CA)
Application Number: 14/501,941
Classifications
International Classification: H04N 9/804 (20060101); H04N 21/6587 (20060101);