CENTRAL CONTROLLER TO MANAGE NETWORK RESOURCES ACROSS A GROUP OF PLAYBACK DEVICES TO CONTROL STREAMING VIDEO QUALITY ACROSS THE GROUP OF PLAYBACK DEVICES

Methods and apparatuses to manage network resources across a group of playback devices that share the same resources to control video quality across the group of playback devices are provided. A central controller collects information about a plurality of playback devices wherein the playback devices share a network resource. The central controller also collects information regarding the network resource. The central controller allocates the network resource to deliver one or more requested video segments to one or more playback devices based on the information collected from the plurality of playback devices and the information collected regarding the network resource. By providing a central controller that is aware of the available network resources and the needs of the playback devices, the network resource can be managed in a way that improves the quality of experience for end users across all the playback devices while maximizing the efficiency of the network resources.

Description
CROSS REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application Ser. No. 61/488,525, which was filed on May 20, 2011, and is incorporated herein by reference in its entirety.

TECHNICAL FIELD

This disclosure relates to a central controller to manage network resources across a group of playback devices to control streaming video quality across the group of playback devices.

BACKGROUND

A broadband communications system, for example, can be used to deliver high-definition digital entertainment such as video to subscriber premises. For example, FIG. 1 illustrates a high-level broadband communications system 100 for distributing videos from various video sources to various playback devices over a networked environment.

As shown in FIG. 1, a video 105 stored on an Internet Protocol (IP) server 110, for example, can be delivered from the IP server 110 to a destination IP playback device 180 (such as an IP television, mobile phone, tablet, netbook, or notebook computer) over an IP network 120 and then, for example, a cable-based access network 150. The video 105 stored on the IP server 110 typically is processed using either an MPEG-2 or an MPEG-4 codec to produce an MPEG transport stream. The MPEG transport stream then is IP-encapsulated, and the IP-encapsulated MPEG video (“IP/MPEG video”) 115 is transported (for example, progressively or streamed) over the IP network 120 to a cable headend 130 and ultimately to the IP playback device 180.

In the headend 130, IP/MPEG video 115′ representing the IP/MPEG video 115 can be received by a cable modem termination system (CMTS) 132 device and converted to a QAM signal 135 representing the video (“QAM/IP/MPEG signal 135”), for example, for delivery over the cable-based access network 150. The QAM/IP/MPEG signal 135 can then be combined with signals for other services (e.g., voice, non-video high speed data) to produce a combined signal 137. The combined signal 137, which includes the QAM/IP/MPEG signal 135, then can be transmitted over the cable-based network 150 to a subscriber premise. The cable network 150 can take the form of an all-coax, all-fiber, or hybrid fiber/coax (HFC) network, for example.

At the subscriber premise, a combined signal 137′ representing the combined signal 137 can be received by a cable modem (CM) or gateway (GW) device 170 and the IP/MPEG video 115′ representing the IP/MPEG video 115 can be delivered (wirelessly and/or wired) to the IP playback device 180 to process and display the video 105′ representing video 105. Other services (such as high speed data, for example) included in the combined signal 137′ can be delivered to other CPE devices (such as a personal computer 175).

As another example, a video 142 stored on an IP server 140 at the headend 130 can be delivered over the cable-based network 150 to, for example, another subscriber premise where it is received by an IP television 165, for example, via a CM/GW 160. Similar to video 105 stored on the IP server 110, video 142 can be processed using either an MPEG-2 or an MPEG-4 codec to produce an MPEG transport stream. The MPEG transport stream then is IP-encapsulated, and the IP-encapsulated MPEG video (“IP/MPEG video”) 157 can be processed for delivery over the cable-based network 150 by CMTS 132 in the headend 130 to produce a QAM/IP/MPEG signal 139. The QAM/IP/MPEG signal 139 can then be combined with signals from other services (e.g., QAM/IP/MPEG signal 135) to produce a combined signal such as combined signal 137. The combined signal 137, which includes the QAM/IP/MPEG signal 139, then can be transmitted over the cable-based network 150 to a subscriber premise.

At the subscriber premise, such as the subscriber premise having the IP playback device 165, for example, the combined signal 137″ representing the combined signal 137 can be received by CM/GW 160 and the IP/MPEG video 157′ representing the IP/MPEG video 157 can be delivered to the IP playback device 165 to process and display the video 142′ representing video 142.

Increasingly, videos are delivered to an IP playback device over a networked environment using the Hypertext Transfer Protocol (HTTP) via a series of HTTP request and response messages (“segmented HTTP transport (SHT) method”). For example, a video (e.g., video 105, 142) can be delivered to an IP playback device (e.g., device 180, 165) using HTTP by first partitioning the video file into a series of short video segments, where each video segment can be placed on a server (e.g., server 110, 140) and identified by a unique Uniform Resource Locator (URL).

Each video segment typically includes 2 to 10 seconds of the video; however, a video segment can be longer or shorter than this range. An index file that contains information regarding how to retrieve the available video segments for a video is stored on the server and identified by a URL. The index file can include the respective URLs for the video segments or information to construct such URLs.
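In a hedged sketch, the information an index file carries could be represented as follows. The field names and server URL are illustrative stand-ins, not a real manifest format:

```python
# Illustrative representation of an index file for one video: the segment
# duration and the list of segment URLs. A real index file could instead
# carry a URL template from which the client constructs these URLs.
INDEX = {
    "segment_duration_s": 10,
    "segments": [f"http://server.example/video/seg{i}.ts" for i in range(3)],
}

def segment_url(index, n):
    """Return the URL of the n-th video segment listed in the index."""
    return index["segments"][n]
```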

To play the video, software (“a client”) on an IP playback device first retrieves the index file from the server and then sequentially retrieves video segments from the server using the appropriate URLs. The IP playback device then sequentially plays the video segments on the integrated screen of the IP playback device or on a separately connected display.

More specifically, to play the video, the client can connect to the server and submit an HTTP request message (e.g., an HTTP GET request) to retrieve the index file for the video. The client can connect to the server by creating a Transmission Control Protocol (TCP) connection to port 80 of the server. The server then can send an HTTP response message to the client containing the index file for the desired video. Based on the information in the index file, the client can submit a series of HTTP requests to the server to retrieve the video segments needed to fill the video play out buffer. Initially, the HTTP requests are submitted to the server at a rate faster than the actual play out. Typically, once the playback buffer in the client has reached a minimum target size, the client then sequentially submits HTTP request messages at the rate of the actual play out (for example, every 2-10 seconds) to maintain the number of available video segments in the playback buffer at a pre-defined level.
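The client behavior just described can be sketched as follows. The buffer target, segment duration, and the `fetch_index`/`fetch_segment` stand-ins are hypothetical; a real client would perform the corresponding HTTP GET exchanges over a TCP connection to the server:

```python
# Simplified sketch of segmented HTTP transport (SHT) client start-up:
# retrieve the index, then request segments faster than real time until
# the play-out buffer reaches its minimum target depth.
TARGET_BUFFER_S = 30      # minimum buffered play-out time before steady state
SEGMENT_DURATION_S = 10   # seconds of video per segment (typically 2-10 s)

def fetch_index(index_url):
    """Stand-in for the HTTP GET of the index file; returns segment URLs."""
    return [f"{index_url}/seg{i}.ts" for i in range(6)]

def fetch_segment(url):
    """Stand-in for the HTTP GET of one video segment."""
    return f"payload:{url}"

def start_playback(index_url):
    segment_urls = fetch_index(index_url)
    buffer = []
    # Initial fill: request segments as fast as possible until the target
    # depth is reached; afterwards, requests continue at the play-out rate.
    while len(buffer) * SEGMENT_DURATION_S < TARGET_BUFFER_S and segment_urls:
        buffer.append(fetch_segment(segment_urls.pop(0)))
    return buffer, segment_urls
```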

To support adaptive streaming, the server can store different versions of a video at different bit rates so that a client can download portions of the video at different bit rates as network conditions change. In some implementations, for example, the server stores the video segments at different bit rates, and the index file includes links to alternate index files for the video at the different bit rate streams. The client can switch to an alternate index file at any time during the streaming of the video as conditions warrant, resulting in increased or decreased bit rate utilization on the access network.
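One way a client might choose among the advertised bit rates is sketched below. The 80% headroom factor is an illustrative heuristic, not something the text prescribes:

```python
# Hedged sketch of client-side bit rate selection for adaptive streaming.
def select_bitrate(available_bps, measured_throughput_bps, headroom=0.8):
    """Return the highest advertised bit rate that fits within a safety
    margin of the recently measured download throughput."""
    usable = measured_throughput_bps * headroom
    candidates = [r for r in sorted(available_bps) if r <= usable]
    # Fall back to the lowest rate when even that exceeds the measurement.
    return candidates[-1] if candidates else min(available_bps)
```

After each segment download, the client could re-measure throughput and, if `select_bitrate` returns a different rate, switch to the alternate index file for that rate.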

In other implementations, for example, instead of storing, for each video segment, separate files for the different bit rate versions of that segment, the server can store one file for each bit rate using, for example, the MPEG-4 Part 14 (ISO/IEC 14496-14) (“MP4”) file format or MPEG-2 transport stream (ISO/IEC 13818-1) (“MPEG2TS”) file format. Each MP4 or MPEG2TS file, which corresponds to the video at a particular bit rate, includes multiple video segments. The index file includes a list of the available bit rates for the video and the list of video segments for the video. To play a video, the client sequentially requests video segments of the video at a particular bit rate. When the server receives the request, it extracts the MP4 or MPEG2TS video segment from the MP4 or MPEG2TS file corresponding to the requested bit rate and sends the requested MP4 or MPEG2TS video segment to the client.
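The server-side extraction step can be sketched as below. The segment map and byte offsets are illustrative assumptions; in a real MP4 or MPEG2TS file this layout would come from the container metadata:

```python
# Hedged sketch of extracting one segment from a single per-bit-rate file.
# SEGMENT_MAP maps bit rate -> list of (byte offset, byte length) per segment.
SEGMENT_MAP = {
    1_000_000: [(0, 4), (4, 4), (8, 4)],  # toy sizes for illustration
}

def extract_segment(file_bytes, bitrate_bps, seg_index):
    """Slice the requested segment out of the single per-bit-rate file."""
    offset, length = SEGMENT_MAP[bitrate_bps][seg_index]
    return file_bytes[offset:offset + length]
```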

End users increasingly desire to receive and watch videos on IP playback devices such as mobile devices including mobile phones, tablets, netbooks, or notebook computers. However, the existing SHT methods for delivering videos are implemented independently for each playback device. That is, each playback device (e.g., IP playback device 180, 165) independently selects and requests a certain video quality (e.g., bit rate) for itself without consideration for the needs of other playback devices that share the same network resource(s). The system 100 (for example, the CMTS 132) attempts to deliver the requested video segments at the requested quality levels to the best of its ability. However, there may not be enough available resources to fulfill all the requests. In such an instance, the system 100 may allocate less than the requested bandwidth to some playback device (e.g., IP playback device 180), while other playback devices (e.g., IP playback device 165) are not fully utilizing the bandwidth allotted to them.

Accordingly, in a bandwidth-constrained environment, existing SHT methods implemented by one or more playback devices can cause video quality degradation for one or more other playback devices. For example, if one playback device wants to play a new video or fast forward or rewind a video, the SHT method will attempt to quickly load or re-load the playback device's playback buffer to allow the video to start or resume by requesting video segments at a higher bit rate. This type of unmanaged increase in bandwidth use from one or more playback devices can affect the ability of other playback devices to receive data thereby resulting in video quality degradation for these other playback devices. To avoid this result, existing solutions over-provision bandwidth for each subscriber premise. However, these solutions are inefficient and can be ineffective in reducing video quality degradation across playback devices.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a high-level broadband communications system for distributing videos from various video sources to various playback devices over a networked environment.

FIG. 2 is a block diagram illustrating an example system for managing network resources across a group of playback devices to control overall video quality across the group of playback devices.

FIG. 3 is a block diagram illustrating an example broadband communications device operable to manage network resources across a group of playback devices to control overall video quality across the group of playback devices.

DETAILED DESCRIPTION

Various implementations of this disclosure provide apparatuses and methods to manage network resources across a group of playback devices that share the same resources to control video quality across the group of playback devices. Thus, instead of attempting to transfer a requested video segment at a requested bit rate to a requesting playback device to the best of its ability without any consideration for what impact the transfer will have on other playback devices, implementations of this disclosure use a central controller to manage network resources across the group of playback devices to control video quality across the group of playback devices.

Although this disclosure makes reference to a cable-based system and components thereof, this disclosure is not intended to be limited to a cable-based system. It should be understood that the concepts disclosed herein can be applied to any access network and components thereof including any wired or wireless network or a combination of the foregoing. For example, the concepts disclosed herein can be applied to a telephone network, which can include a twisted pair (wire) communications medium, using various Digital Subscriber Line (xDSL) technologies.

FIG. 2 illustrates an example system 200 for managing network resources (e.g., bandwidth) across a group of playback devices 195 to control video quality across the group of playback devices 195. The group of playback devices 195 can include playback devices that share the same resources to be managed.

In some implementations, to control video quality across a group of playback devices 195, a central controller 190 can collect information about the group of playback devices 195 and collect information regarding the network resources (e.g., downstream bandwidth). The central controller 190 then can use this information to allocate resources across the group of playback devices 195 to maintain a certain video quality across the group of playback devices. For example, the central controller 190 can determine at what data rate and/or priority level to deliver a requested video segment to a playback device in the group of playback devices 195. The central controller 190 can communicate a resource allocation decision to one or more network elements where the network element will implement the allocation decision during the processing of a video request from a playback device.

The central controller 190 can receive from one or more of the playback devices any information to aid the central controller 190 in making resource allocation decisions, such as pertinent characteristics of the playback device, the past and present state of the playback device, and the playback device's immediate needs. For example, the central controller 190 can receive from each of one or more of the playback devices information regarding the playback device (e.g., name, type, version); the play-out video buffer (e.g., default depth/length, maximum allowed depth/length, maximum depth/length, current depth/length); the most recently downloaded video segment (e.g., transfer rate, length (e.g., in bytes), and duration (e.g., in milliseconds)); the quality levels (e.g., bit rates) available to the playback device; the quality level currently in use by the playback device (in bps); and the subscription level of the playback device. The central controller 190 also can receive information informing the central controller 190 when the playback device seeks to switch bit rates. In some implementations, the central controller 190 also can receive information informing the central controller 190 of a trick play request (e.g., a pause request, a fast-forward request, a rewind request, or a slow motion request). The central controller 190 also can receive the playback device's estimation of the available inbound bandwidth; the percentage of past dropped frames; information concerning the location of the playback device on the network; the interval between transmission of the last video segment request and arrival of the response that contains the first byte of the requested segment; the maximum playback device screen resolution; the content play-out time from the beginning of the session; and the segment sequence number of the currently played segment.

A playback device can transmit the above information, along with a unique identification number to identify the playback device, to the central controller 190 by any means. For example, in some implementations, a playback device can include the above information with an HTTP request message to receive a video segment. One of ordinary skill in the art would know how to include such information in a request message at the playback device. For example, the information can be included as a parameter in an HTTP GET or POST command or included in an HTTP cookie. This disclosure is not limited to any particular method of including such information in a request message. This disclosure can apply to any existing or later developed method for including such information in a request message. As another example, in some implementations, a playback device can transmit the above information to the central controller 190 using the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP) on top of the Internet Protocol (IP).
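One of the transports mentioned above, carrying state information as parameters of the HTTP GET for the next segment, can be sketched as follows. The field names and URL are hypothetical; a cookie-based or TCP/UDP report would serve equally well:

```python
from urllib.parse import urlencode

# Hedged sketch: encode playback-state telemetry as query parameters on a
# segment request, so the central controller can observe it in transit.
def build_segment_request(base_url, segment_name, state):
    params = {
        "device_id": state["device_id"],      # unique playback device id
        "buffer_ms": state["buffer_ms"],      # current play-out buffer depth
        "bitrate_bps": state["bitrate_bps"],  # quality level currently in use
        "play_state": state["play_state"],    # e.g., play, pause, start-up
    }
    return f"{base_url}/{segment_name}?{urlencode(params)}"
```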

The central controller 190 can collect information regarding the network resources (e.g., downstream bandwidth) from other network elements such as the CMTS 132. One of ordinary skill in the art would know how to determine available network resources. This disclosure is not limited to any particular method of determining network resources and can apply to any existing or later developed method for determining network resources.

Based on the information received, the central controller 190 can allocate network resources dynamically and/or based on pre-defined rules to achieve a desired result. For example, the central controller 190 can allocate network resources to minimize the degradation of video quality to one or more playback devices. For example, during periods of congestion, the central controller 190 can allocate network resources such that those playback devices that are using a relatively constant bandwidth (e.g., those playback devices that are playing back videos and not using any trick play features or in a start-up state) continue to be allocated the bandwidth needed to play back the videos without any video quality degradation; those playback devices that are in a non-steady state (e.g., those playback devices in a trick play mode or start-up state) are then allocated the remaining bandwidth based on, for example, pre-defined rules to achieve a desired result. As another example, the central controller 190 can allocate network resources to normalize buffer fill ratios across playback devices to prevent a playback device from disproportionately using network resources.
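The congestion policy described above can be sketched as follows: devices in a steady play state keep their current rate, while devices in start-up or trick play split whatever bandwidth remains. The equal split is just one of many possible pre-defined rules:

```python
# Hedged sketch of a central-controller allocation rule during congestion.
def allocate(total_bps, devices):
    """devices maps device id -> (play_state, current_rate_bps);
    returns a map of device id -> allocated bps."""
    # Steady-state devices keep the bandwidth they are already using.
    steady = {d: rate for d, (state, rate) in devices.items()
              if state == "play"}
    # Non-steady devices (start-up, trick play) share the remainder equally.
    others = [d for d, (state, _) in devices.items() if state != "play"]
    remaining = max(total_bps - sum(steady.values()), 0)
    share = remaining // len(others) if others else 0
    allocation = dict(steady)
    for d in others:
        allocation[d] = share
    return allocation
```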

The central controller 190 can provide the resource allocation decisions to the CMTS 132, for example, which can then deliver the requested video segments to the requesting playback devices based on the resource allocation decisions.

By providing a central controller 190 that is aware of the available network resources and the needs of the playback devices, the network resources can be managed in a way that improves the quality of experience for end users across all the playback devices while maximizing the efficiency of the network resources (e.g., bandwidth usage).

FIG. 3 illustrates an example central controller 190 operable to manage network resources across a group of playback devices to control overall video quality across the group of playback devices. The central controller 190 can include a processor 310, a memory 320, a removable data storage unit 330, and an input/output device 340. Each of the components 310, 320, 330, and 340 can, for example, be interconnected using a system bus 350. In some implementations, the central controller 190 can include one or more interconnected boards, where each board comprises components 310, 320, 330, and 340. The processor 310 is capable of processing instructions for execution within the central controller 190. For example, the processor 310 can be capable of processing instructions for allocating network resources dynamically and/or based on pre-defined rules to achieve a desired result. In some implementations, the processor 310 is a single-threaded processor. In other implementations, the processor 310 is a multi-threaded processor. The processor 310 is capable of processing instructions stored in the memory 320 or on the storage device 330.

The memory 320 stores information within the central controller 190. For example, memory 320 can store information received from one or more of the playback devices to aid the central controller 190 in making resource allocation decisions and information regarding the network resources. In some implementations, memory 320 can store pre-defined rules regarding network resource allocation. In some implementations, the memory 320 is a computer-readable medium. In other implementations, the memory 320 is a volatile memory unit. In still other implementations, the memory 320 is a non-volatile memory unit.

In some implementations, the storage device 330 is capable of providing mass storage for the central controller 190. In one implementation, the storage device 330 is a computer-readable medium. For example, the storage device 330 can store pre-defined rules regarding network resource allocation. In some implementations, the storage device 330 can store information from the group of playback devices and information regarding the network resources. In some implementations, the storage device 330 is not removable. In various different implementations, the storage device 330 can, for example, include a hard disk device, an optical disk device, flash memory or some other large capacity storage device.

The input/output device 340 provides input/output operations for the central controller 190. In one implementation, the input/output device 340 can include one or more of a WAN/LAN network interface, such as, for example, an IP network interface device (e.g., an Ethernet card); a cellular network interface; a serial communication device (e.g., an RS-232 port); and/or a wireless interface device (e.g., an 802.11 card). In another implementation, the input/output device 340 can include driver devices configured to receive input data and send output data to other input/output devices, as well as sending communications to, and receiving communications from, various networks.

Implementations of the device of this disclosure, and components thereof, can be realized by instructions that upon execution cause one or more processing devices to carry out the processes and functions described above. Such instructions can, for example, comprise interpreted instructions, such as script instructions, e.g., JavaScript or ECMAScript instructions, or executable code, or other instructions stored in a computer readable medium.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output, thereby tying the process to a particular machine (e.g., a machine programmed to perform the processes described herein). The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

To provide for interaction with a user, implementations of the subject matter described in this specification can be operable to interface with a set-top-box (STB); an advanced television; or some other computing device that is integrated with or connected to (directly or indirectly) a display, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user. To provide for input by a user to the computer, implementations of the subject matter described in this specification further can be operable to interface with a keyboard, a pointing device (e.g., a mouse or a trackball), and/or a remote control device.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular implementations of particular inventions. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular implementations of the subject matter described in this specification have been described. Other implementations are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results, unless expressly noted otherwise. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.


Claims

1. A method for managing network resources across a group of playback devices, comprising:

collecting information about a plurality of playback devices wherein the plurality of playback devices share a network resource;
collecting information regarding the network resource;
receiving one or more video segment request messages from one or more of the playback devices, respectively; and
allocating the network resource to respond to the one or more requested video segments based on the information collected about the plurality of playback devices and the information collected regarding the network resource.

2. The method of claim 1 wherein the network resource is downstream bandwidth.

3. The method of claim 1 wherein collecting information about a plurality of playback devices comprises collecting information regarding the play state of each of the plurality of playback devices.

4. The method of claim 3 wherein collecting information about a plurality of playback devices comprises collecting information from the plurality of playback devices.

5. The method of claim 3 wherein the play state is one of the following states: a play state, a pause state, a fast forward state, a rewind state, a seek state, and a start-up state.

6. The method of claim 1 wherein collecting information regarding the network resource comprises collecting information regarding the amount of downstream bandwidth available.

7. The method of claim 1 wherein allocating the network resource to respond to the one or more requested video segments based on the information collected about the plurality of playback devices and the information collected regarding the network resource comprises allocating the network resource based on the play states of the requesting playback devices.

8. The method of claim 7 wherein playback devices in a play state are allocated more of the network resource than playback devices in a trick mode state.

9. The method of claim 7 wherein the network resource is allocated to maintain the video quality of one or more playback devices in a play state.

10. The method of claim 7 wherein the network resource is downstream bandwidth.

11. The method of claim 7 wherein the network resource is allocated to prevent one or more playback devices from disproportionately using network resources.

12. The method of claim 1 further comprising communicating the resource allocation decision to a network element.

13. The method of claim 12 wherein the network element is a CMTS.

14. The method of claim 12 wherein the network element is a server.

15. The method of claim 1 further comprising delivering the one or more requested video segments to the one or more playback devices based on the resource allocation determination.

16. A central controller for managing network resources across a group of playback devices, comprising:

one or more storage devices for storing information collected about a plurality of playback devices wherein the plurality of playback devices share a network resource and storing information regarding the network resource;
a processor configured to determine network resource allocations to respond to one or more video segment request messages from one or more of the playback devices, respectively, based on the information collected about the playback devices and the information collected regarding the network resource.

17. The central controller of claim 16 wherein the information collected about the plurality of playback devices comprises information regarding the play state of each of the plurality of playback devices.

18. The central controller of claim 16 wherein the information collected regarding the network resource comprises information regarding the amount of downstream bandwidth available.

19. A system for managing network resources across a group of playback devices, comprising:

means for collecting information about a plurality of playback devices wherein the plurality of playback devices share a network resource;
means for collecting information regarding the network resource;
means for receiving one or more video segment request messages from one or more of the playback devices, respectively; and
means for allocating the network resource to respond to the one or more requested video segments based on the information collected about the plurality of playback devices and the information collected regarding the network resource.

20. The system of claim 19 further comprising means for delivering the one or more requested video segments to the one or more playback devices based on the resource allocation determination.

Patent History
Publication number: 20120297430
Type: Application
Filed: Nov 16, 2011
Publication Date: Nov 22, 2012
Inventors: Marcin Morgos (Warsaw), Marek Bugajski (Norcross, GA), Stephen Kraiman (Doylestown, PA)
Application Number: 13/297,389
Classifications
Current U.S. Class: Vcr-like Function (725/88); Channel Or Bandwidth Allocation (725/95)
International Classification: H04N 21/21 (20110101);