STREAMING VIDEO USING ERASURE ENCODING

- Vudu, Inc.

A system, apparatus and method for presenting a video over a network using erasure codes are described. According to one system and method, the network has nodes; portions of a video are encoded as encoded portions each having sections; the sections for each encoded portion are distributed among segments; and the segments are distributed among the nodes. The system includes an apparatus having a network interface coupled to the network; a control system coupled to the network interface and configured to initiate a video request and communicate with a subset of the nodes to receive a subset of the segments; and a decoder coupled to the network interface and configured to decode a subset of the sections for each of the encoded portions to generate the portions of the video. A presentation device coupled to the apparatus presents the portions of the video.

Description
BACKGROUND

1. Field of the Invention

This invention relates generally to the field of video systems. More particularly, the invention relates to a system, apparatus and method for streaming videos.

2. Description of the Related Art

Video on Demand (VOD) systems allow users to request and view videos over a network.

In some VOD systems, a user requests a video using a box (e.g., a set-top receiver) connected through a network to a server farm. In response to a video request, the server farm “streams” the selected video to the box. The video is presented to the user while the rest of the video is being downloaded. As the number of boxes in the network increases, the bandwidth capacity required at the server farm increases.

In other VOD systems, a video library is distributed among the boxes of multiple users. When a user requests a video using their box, the request is serviced over the network by one of the boxes that store the requested video. This peer-to-peer network distributes the bandwidth requirements across the boxes on the network. As the number of boxes in the network increases, the quantity of videos requested increases but the bandwidth capacity also increases since there are more boxes to service the video requests.

However, the capacity to serve a particular video is limited, since a predetermined number of copies of that video are distributed among a fixed number of boxes. Each box has limited bandwidth capacity to serve the video(s) stored in that box. Thus, the peer-to-peer configuration restricts the flexibility to concurrently deliver quantities of particular videos.

In other VOD systems, the entire video is downloaded to the box before it is presented on the user's video system. By lengthening the period over which the video is downloaded, the bandwidth requirements can be reduced whether using a centralized server or a peer-to-peer network. However, the longer download period increases the delay from the time the user requests a video to when that user can watch the requested video.

What is needed is a VOD system that allows a video to be watched soon after the request. What is also needed is a VOD system that allows more flexibility in terms of capacity to concurrently serve particular videos to multiple users. What is further needed is a VOD system that can scale in terms of the number of boxes being served while limiting the increase in bandwidth requirements.

SUMMARY

A system for presenting a video is described. According to one embodiment, portions of a video are encoded into encoded portions each having sections, the sections for each encoded portion being distributed among segments, the segments being distributed among the nodes of a network. The system includes an apparatus including a network interface coupled to the network; a control system coupled to the network interface and configured to initiate a video request and communicate with a subset of the nodes to receive a subset of the segments; and a decoder coupled to the network interface and configured to decode a subset of the sections for each of the encoded portions to generate the portions of the video. A presentation device coupled to the apparatus is configured to present the portions of the video.

An apparatus for processing a video is described including a network interface; a control system coupled to the network interface and configured to send a video request and communicate with a set of nodes; a decoder coupled to the network interface and configured to decode a subset of a set of sections for each of a set of encoded portions to generate a plurality of portions of the video, the subset of the set of sections for each of the set of encoded portions being assembled from a subset of a plurality of segments; and a video interface coupled to the decoder and configured to transmit the plurality of portions of the video.

A method of processing a video over a network is described including encoding each of a plurality of portions of the video to generate a plurality of encoded portions each having a plurality of sections; distributing each plurality of sections among a plurality of segments; distributing the plurality of segments among nodes of the network; requesting the video; identifying a plurality of nodes storing a subset of the plurality of segments; receiving a subset of the plurality of sections for each of the plurality of encoded portions from the plurality of nodes; and decoding each subset of the plurality of sections to generate the plurality of portions of the video.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:

FIG. 1 illustrates a system for streaming a video according to one embodiment of the invention.

FIG. 2 shows a block diagram of one embodiment of an encoding system of the invention.

FIG. 3 shows a block diagram of one embodiment of a decoding system of the invention.

FIG. 4 shows a block diagram of a decoding system using primary segments and a backup segment according to one embodiment of the invention.

FIG. 5 shows a block diagram of one embodiment of a decoding system using different subsets of the segments for some encoded portions.

FIG. 6 shows a block diagram of the distribution of primary and secondary segments according to one embodiment of the invention.

FIG. 7 illustrates a system using primary and secondary segments for streaming a video according to one embodiment of the invention.

FIG. 8 is a flow chart that illustrates a method of streaming a video according to one embodiment of the invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Described below is a system and method for streaming video. Throughout the description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are not shown in the figures to avoid obscuring the underlying principles of the present invention.

Embodiments of the invention may be implemented in a video on demand (VOD) system as illustrated generally in FIG. 1. The system includes nodes coupled through a network 160 to a server 100 that has a video lookup database 110. In one embodiment, the server 100 is operated by a VOD service provider.

Users of the video on demand service access the service through the network 160. A node 130 has a control system 131, a memory 133 and a decoder 134 coupled through a network interface 132 to the network 160. The node 130 has a video interface 135 coupled to the decoder 134 and coupled through a video connector 138 to a presentation device 139. In some embodiments, the node 130 is a box supplied by the VOD service provider. In other embodiments, the node 130 is a personal computer.

A user can request a video A, for example, through the control system 131. The user can submit that request using a remote control or a keyboard, for example. In response, the control system 131 initiates a video request through the network interface 132 onto the network 160. The server 100 is configured to respond to video requests from the nodes on the network 160. When a request is received, the server 100 accesses the video lookup database 110 to determine which of the nodes on the network 160 store segments associated with video A and are available to serve the segments to the node 130. In one embodiment, the server 100 allocates the nodes that are least likely to be needed for a subsequent video request based on, for example, the popularity of videos associated with the other segments stored on each node.
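The allocation step can be pictured with a short sketch. This is not the patent's implementation; the data model (a lookup dictionary keyed by node id) and the popularity heuristic are illustrative assumptions. The server filters for nodes that hold a segment of the requested video and are currently free, then prefers nodes whose other stored segments belong to the least popular videos.

```python
# Illustrative sketch of the server-side allocation described above.
# The data structures and popularity heuristic are assumptions, not the
# patent's implementation.
def allocate_nodes(video_id, lookup_db, popularity, needed=4):
    """Return up to `needed` available nodes holding distinct segments of video_id."""
    candidates = []
    for node_id, info in lookup_db.items():
        if video_id in info["segments"] and not info["busy"]:
            # Cost: how likely this node is needed soon for its *other* videos.
            cost = sum(popularity.get(v, 0) for v in info["segments"] if v != video_id)
            candidates.append((cost, node_id))
    candidates.sort()  # nodes least likely to be needed elsewhere come first
    chosen, seen_segments = [], set()
    for _, node_id in candidates:
        seg = lookup_db[node_id]["segments"][video_id]
        if seg not in seen_segments:          # distinct segments only
            chosen.append(node_id)
            seen_segments.add(seg)
        if len(chosen) == needed:
            break
    return chosen
```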

As an example, the video A is associated with six segments: a segment 151, a segment 152, a segment 153, a segment 154, a segment 155 and a segment 156. In one embodiment, any four of the six segments can be used to generate the video A. The relationship between the videos and the segments is described in more detail with reference to subsequent figures.

For example, the server 100 can allocate a node 121, a node 122, a node 123 and a node 124 in response to the request for the video A from the node 130. The node 121 has a memory 191 that is coupled to the network 160 through a network interface 181. The memory 191 stores the segment 151 corresponding to the video A and a segment 159 corresponding to a video C. A node 122 has a memory 192 that is coupled to the network 160 through a network interface 182. The memory 192 stores the segment 152 corresponding to the video A and a segment 160 corresponding to the video C. A node 123 has a memory 193 that is coupled to the network 160 through a network interface 183. The memory 193 stores the segment 153 corresponding to the video A and a segment 161 corresponding to the video B. A node 124 has a memory 194 that is coupled to the network 160 through a network interface 184. The memory 194 stores the segment 154 corresponding to the video A and a segment 162 corresponding to the video B.

After receiving the allocation of the nodes from the server 100, the network interface 132 communicates with the network interface 181 to receive the segment 151, the network interface 132 communicates with the network interface 182 to receive the segment 152, the network interface 132 communicates with the network interface 183 to receive the segment 153 and the network interface 132 communicates with the network interface 184 to receive the segment 154.

In one embodiment, the download bandwidth of the node 130 is at least four times the upload bandwidth of a typical node in the network 160. Thus, the network interface 132 can download all four segments from the network 160 at the combined rate that the network interface 181, the network interface 182, the network interface 183 and the network interface 184 can upload those four segments to the network 160. In one embodiment, the number of segments is chosen such that the concurrent download of all necessary segments is fast enough to enable playback of the video in real time. In one embodiment, multiple segments can be downloaded from the same node if that node has enough upstream bandwidth to concurrently upload the multiple segments at the required rate.
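The sizing rule in this paragraph can be made concrete with a small calculation (the numbers are illustrative assumptions, not from the patent): the number of segments streamed in parallel must be at least the video bitrate divided by a single node's upload rate, which is also why the requester's download bandwidth needs to be roughly that multiple of a node's upload bandwidth.

```python
import math

# Illustrative sizing check (assumed numbers, not from the patent):
video_bitrate_mbps = 4.0      # playback rate of the encoded video
node_upload_mbps = 1.0        # upload capacity of a typical serving node
node_download_mbps = 6.0      # download capacity of the requesting node

# Minimum number of concurrent segment streams for real-time playback.
segments_needed = math.ceil(video_bitrate_mbps / node_upload_mbps)   # -> 4

# The requesting node must be able to absorb all of them at once.
assert segments_needed * node_upload_mbps <= node_download_mbps
print(segments_needed)
```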

The decoder 134 receives the four segments and generates the video according to methods described with reference to subsequent figures. A video interface 135 is coupled to receive the video from the decoder 134 and transmit the video through a video connector 138 to a presentation device 139. In one embodiment, the presentation device 139 is a television. Alternatively, the presentation device 139 is another device capable of audiovisual representation of the video. In some embodiments, the presentation device 139 is only capable of presenting video without sound. In other embodiments, the presentation device 139 is only capable of presenting sound without video.

Another user has a node 140 including a control system 141, a memory 143 and a decoder 144 coupled through a network interface 142 to the network 160. The node 140 has a video interface 145 coupled to the decoder 144 and coupled through a video connector 148 to a presentation device 149.

A user can request, for example, a video B through the control system 141. In response, the control system 141 initiates a video request through the network interface 142 onto the network 160. When a request is received, the server 100 accesses the video lookup database 110 to determine which of the nodes on the network 160 store segments associated with the video B and are available to serve the segments to the node 140.

The video B is associated with six segments: a segment 161, a segment 162, a segment 163, a segment 164, a segment 157 and a segment 158. In one embodiment, any four of the six segments can be used to generate the video B. If the request for video B happens while the node 123 and the node 124 are still serving the segments for video A as described above, the node 123 and the node 124 are temporarily unavailable to serve the segment 161 and the segment 162, respectively, for video B.

In response to the request for video B, the server 100 can allocate a node 125, a node 126, a node 127 and a node 128. The node 125 has a memory 195 that is coupled to the network 160 through a network interface 185. The memory 195 stores the segment 155 corresponding to the video A and the segment 163 corresponding to the video B. The node 126 has a memory 196 that is coupled to the network 160 through a network interface 186. The memory 196 stores a segment 156 corresponding to the video A and the segment 164 corresponding to the video B. A node 127 has a memory 197 that is coupled to the network 160 through a network interface 187. The memory 197 stores a segment 157 corresponding to the video B and the segment 165 corresponding to the video C. A node 128 has a memory 198 that is coupled to the network 160 through a network interface 188. The memory 198 stores a segment 158 corresponding to the video B and the segment 166 corresponding to the video C.

After receiving the allocation of the nodes from the server 100, the network interface 142 requests the segments from the nodes. The network interface 142 communicates with the network interface 185 to receive the segment 163, communicates with the network interface 186 to receive the segment 164, communicates with the network interface 187 to receive the segment 157 and communicates with the network interface 188 to receive the segment 158.

The decoder 144 receives the four segments and decodes the four segments to generate the video according to methods described with reference to subsequent figures. A video interface 145 is coupled to receive the video from the decoder 144 and transmit the video through a video connector 148 to a presentation device 149. The presentation device 149 can be a device capable of audiovisual representation of the video.

Nodes 121-128 are illustrated to emphasize components used to serve segments in response to a video request according to the example video requests described above. In some embodiments, each of these nodes has a control system, decoder and video interface to provide the functionality described with reference to the node 130, for example. In some embodiments, the node 130 is configured to provide similar functionality as described with reference to the node 121, for example.

The example illustrated uses six independent segments. In other embodiments, a different number of segments may be used. In some embodiments, the same segment may be duplicated on multiple nodes to allow for more concurrent requests for the same video. In this example, any four of the six independent segments can be used to generate the video. In other embodiments, a different number of independent segments may be required. It will be apparent to one skilled in the art that duplicate segments cannot be counted toward the minimum number of distinct segments needed to generate the video.

Because only some of the segments are required to generate the video, there is considerable flexibility in selecting combinations of nodes that can generate the video. As the number of segments increases, so does the number of combinations of segments that can be used to generate the video.

FIG. 2 is a block diagram illustrating one embodiment of an encoding system of the present invention.

A video 70 is the source of the video for the encoding process. The process is not limited to any particular video format. In some embodiments, the video 70 format can be Digital Video (DV) Encoder Type 1, DV Encoder Type 2, MPEG-1 (Moving Picture Experts Group format 1), MPEG-2, MPEG-4, or Real Video.

In one embodiment, the video 70 is processed as a sequence of time slices. The time slices are blocks of audiovisual data each including several portions. Two time slices are shown for illustrative purposes. A time slice 61 includes a portion 201, a portion 202, a portion 203 and a portion 204. A time slice 62 includes a portion 205, a portion 206, a portion 207 and a portion 208.

In one embodiment, each portion is encoded using an erasure code, such as a Hamming code, Reed-Solomon code, or Tornado code. An erasure code transforms data of n sections into encoded data with more than n sections such that the original data can be generated from any subset of the sections of the encoded data that include the minimum number of sections for that code. Strict subsets of the encoded data are any combination of less than all the sections of the encoded data. Rateless erasure codes can transform data of n sections into encoded data of an arbitrary number of sections. Erasure codes can also provide an error correcting function allowing the original data to be recovered despite a limited number of bit errors within the sections of the encoded message. In another embodiment, the erasure code used can be the identity function, i.e., the encoded segments are exactly identical to the original segments.
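As a concrete, runnable illustration of the any-four-of-six property used throughout the examples (not the patent's code; production systems typically use Reed-Solomon or Tornado codes over byte-oriented fields such as GF(2^8)), the sketch below encodes a four-symbol portion into six sections over the small prime field GF(7) and recovers the portion from any four of them.

```python
# Minimal (6, 4) erasure-code sketch over the prime field GF(7), for
# illustration only: a portion is 4 symbols in 0..6, encoding evaluates the
# degree-3 polynomial they define at 6 distinct points, and any 4 of the 6
# resulting sections recover the portion by Lagrange interpolation.
P = 7  # field modulus; real codes use GF(2^8) so symbols can be bytes

def encode_portion(portion):
    """portion: 4 symbols in 0..6 -> 6 sections, each an (x, y) pair."""
    assert len(portion) == 4 and all(0 <= s < P for s in portion)
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(portion)) % P)
            for x in range(1, 7)]

def decode_portion(sections):
    """sections: any 4 distinct (x, y) sections -> the original 4 symbols."""
    assert len(sections) == 4
    coeffs = [0, 0, 0, 0]
    for j, (xj, yj) in enumerate(sections):
        basis = [1]          # coefficients of prod_{m != j} (x - x_m)
        denom = 1
        for m, (xm, _) in enumerate(sections):
            if m == j:
                continue
            nxt = [0] * (len(basis) + 1)   # multiply basis by (x - x_m)
            for i, b in enumerate(basis):
                nxt[i] = (nxt[i] - xm * b) % P
                nxt[i + 1] = (nxt[i + 1] + b) % P
            basis = nxt
            denom = (denom * (xj - xm)) % P
        scale = yj * pow(denom, P - 2, P) % P   # division via Fermat inverse
        for i in range(4):
            coeffs[i] = (coeffs[i] + scale * basis[i]) % P
    return coeffs

portion = [3, 1, 4, 1]
sections = encode_portion(portion)
assert decode_portion(sections[2:]) == portion                   # sections 3-6
assert decode_portion(sections[:2] + sections[4:]) == portion    # sections 1, 2, 5, 6
```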

In the illustrated example, each portion is encoded into encoded portions having six blocks of which any four blocks can be used to generate the portion. The portion 201 is encoded into an encoded portion 31 having a section 211, a section 212, a section 213, a section 214, a section 215 and a section 216. The portion 202 is encoded into an encoded portion 32 having a section 221, a section 222, a section 223, a section 224, a section 225 and a section 226. The portion 203 is encoded into an encoded portion 33 having a section 231, a section 232, a section 233, a section 234, a section 235 and a section 236. The portion 204 is encoded into an encoded portion 34 having a section 241, a section 242, a section 243, a section 244, a section 245 and a section 246.

The portion 205 is encoded into an encoded portion 35 having a section 251, a section 252, a section 253, a section 254, a section 255 and a section 256. The portion 206 is encoded into an encoded portion 36 having a section 261, a section 262, a section 263, a section 264, a section 265 and a section 266. The portion 207 is encoded into an encoded portion 37 having a section 271, a section 272, a section 273, a section 274, a section 275 and a section 276. The portion 208 is encoded into an encoded portion 38 having a section 281, a section 282, a section 283, a section 284, a section 285 and a section 286.

A segment 21 is generated by assembling the section 211, the section 221, the section 231, the section 241, the section 251, the section 261, the section 271 and the section 281. A segment 22 is generated by assembling the section 212, the section 222, the section 232, the section 242, the section 252, the section 262, the section 272 and the section 282. A segment 23 is generated by assembling the section 213, the section 223, the section 233, the section 243, the section 253, the section 263, the section 273 and the section 283. A segment 24 is generated by assembling the section 214, the section 224, the section 234, the section 244, the section 254, the section 264, the section 274 and the section 284. A segment 25 is generated by assembling the section 215, the section 225, the section 235, the section 245, the section 255, the section 265, the section 275 and the section 285. A segment 26 is generated by assembling the section 216, the section 226, the section 236, the section 246, the section 256, the section 266, the section 276 and the section 286.
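Put differently, a segment is a vertical stripe through the encoded portions: segment j holds the j-th section of every encoded portion, in portion order. A short sketch of that assembly step (a layout assumed from FIG. 2, not the patent's code):

```python
# Assemble segments from encoded portions: segment j collects the j-th section
# of every encoded portion in order. Layout assumed from FIG. 2.
def build_segments(encoded_portions, num_segments=6):
    """encoded_portions: list of section lists (one per portion) -> 6 segments."""
    return [[sections[j] for sections in encoded_portions]
            for j in range(num_segments)]
```

With the (6, 4) sketch above, `build_segments([encode_portion(p) for p in portions])` yields six segments, where the p-th entry of segment j is the j-th section of encoded portion p.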

In this illustration, two time slices are shown, each with four portions. However, the same technique can be applied to an arbitrary number of time slices using more or fewer portions. Furthermore, the encoding process may generate more or fewer sections according to well-known methods.

The segments are distributed over a network 50 to be stored among a set of nodes. In one embodiment, these nodes are configured as described with reference to the block diagram of the node 130 shown in FIG. 1. The segment 21 is distributed to a node 41. The segment 22 is distributed to a node 42. The segment 23 is distributed to a node 43. The segment 24 is distributed to a node 44. The segment 25 is distributed to a node 45 and the segment 26 is distributed to a node 46.

FIG. 3 is a block diagram illustrating one embodiment of a decoding system of the present invention.

In one embodiment, each portion is decoded using an erasure code. Any four of the six segments can be used to generate the video 70. A node 40 requests the video and four nodes are assigned to deliver the segments. In this example, the node 42, the node 44, the node 45 and the node 46 transmit the segment 22, the segment 24, the segment 25 and the segment 26, respectively, over the network 50 to the node 40.

The section 212, the section 214, the section 215 and the section 216 of the encoded portion 31 are decoded to generate the portion 201. The section 222, the section 224, the section 225 and the section 226 of the encoded portion 32 are decoded to generate the portion 202. The section 232, the section 234, the section 235 and the section 236 of the encoded portion 33 are decoded to generate the portion 203. The section 242, the section 244, the section 245 and the section 246 of the encoded portion 34 are decoded to generate the portion 204. The portion 201, the portion 202, the portion 203 and the portion 204 are assembled to generate the time slice 61.

The section 252, the section 254, the section 255 and the section 256 of the encoded portion 35 are decoded to generate the portion 205. The section 262, the section 264, the section 265 and the section 266 of the encoded portion 36 are decoded to generate the portion 206. The section 272, the section 274, the section 275 and the section 276 of the encoded portion 37 are decoded to generate the portion 207. The section 282, the section 284, the section 285 and the section 286 of the encoded portion 38 are decoded to generate the portion 208. The portion 205, the portion 206, the portion 207 and the portion 208 are assembled to generate the time slice 62.
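Continuing the earlier sketches (hypothetical helpers, not the patent's code), reconstructing a time slice only needs the corresponding entries from any four received segments:

```python
# Rebuild the portions of one time slice from any four received segments,
# reusing decode_portion (and the segment layout of build_segments) from the
# earlier illustrative sketches.
def rebuild_time_slice(received_segments, first_portion, portions_per_slice=4):
    """received_segments: any 4 of the 6 segments; returns the decoded portions."""
    slice_portions = []
    for p in range(first_portion, first_portion + portions_per_slice):
        sections = [segment[p] for segment in received_segments[:4]]
        slice_portions.append(decode_portion(sections))
    return slice_portions
```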

FIG. 4 is a block diagram illustrating another embodiment of a decoding system of the present invention.

In one embodiment, a set of primary segments 90 and a backup segment 91 are allocated in response to a video request. The backup segment 91 can be used to substitute for one of the primary segments 90 in response to a failure, for example, of a node transmitting one of the primary segments 90. In other embodiments, additional backup segments can be allocated to a video request.

In this example, any four of the six segments can be used to generate the video 70. The node 40 requests a video. The set of primary segments 90 allocated to the video request include the segment 22, the segment 24, the segment 25 and a segment 26. The backup segment 91 allocated to the video request is the segment 21. The node 42, the node 44, the node 45 and the node 46 transmit the segment 22, the segment 24, the segment 25 and the segment 26, respectively, over the network 50 to the node 40.

The section 212, the section 214, the section 215 and the section 216 of the encoded portion 31 are decoded to generate the portion 201. The section 222, the section 224, the section 225 and the section 226 of the encoded portion 32 are decoded to generate the portion 202. The section 232, the section 234, the section 235 and the section 236 of the encoded portion 33 are decoded to generate the portion 203. The section 242, the section 244, the section 245 and the section 246 of the encoded portion 34 are decoded to generate the portion 204. The portion 201, the portion 202, the portion 203 and the portion 204 are assembled to generate the time slice 61.

In this example, the backup segment 91 replaces the segment 22 after time slice 61 is processed. The backup segment can be used instead of one of the primary segments when, for example, the node transmitting one of the primary segments 90 malfunctions, is disconnected from the network, or loses power. The node 42 stops transmitting the segment 22. In response, the node 41 starts transmitting the segment 21 over the network 50 to the node 40.

The section 251, the section 254, the section 255 and the section 256 of the encoded portion 35 are decoded to generate the portion 205. The section 261, the section 264, the section 265 and the section 266 of the encoded portion 36 are decoded to generate the portion 206. The section 271, the section 274, the section 275 and the section 276 of the encoded portion 37 are decoded to generate the portion 207. The section 281, the section 284, the section 285 and the section 286 of the encoded portion 38 are decoded to generate the portion 208. The portion 205, the portion 206, the portion 207 and the portion 208 are assembled to generate the time slice 62.
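A hedged sketch of the failover policy in FIG. 4 (the source-tracking structure, with assumed `.alive` and `.segment` attributes, is purely illustrative): each time slice is decoded from whichever four sources are currently healthy, and the backup segment is substituted when a primary source stops responding.

```python
# Illustrative failover loop for FIG. 4 (data structures are assumptions):
# each source exposes .alive and .segment; when a primary source fails, the
# backup segment's source is substituted for the remaining time slices.
def stream_with_backup(primary_sources, backup_source, num_slices, per_slice=4):
    active = list(primary_sources)                  # e.g. segments 22, 24, 25, 26
    for s in range(num_slices):
        live = [src for src in active if src.alive]
        if len(live) < per_slice and backup_source is not None:
            live.append(backup_source)              # e.g. segment 21 steps in
            active, backup_source = live, None
        segments = [src.segment for src in live[:per_slice]]
        yield rebuild_time_slice(segments, first_portion=s * per_slice)
```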

FIG. 5 is a block diagram illustrating another embodiment of a decoding process of the present invention.

The node 40 requests a video. In this example, a set of segments is allocated to the video request and the subsets of the set of segments used for at least two of the portions in a time slice are different. In one embodiment, any four of the six segments can be used to generate the video 70. In this example, the node 41, the node 42, the node 44, the node 45 and the node 46 transmit sections from the segment 21, the segment 22, the segment 24, the segment 25 and the segment 26, respectively, over the network 50 to the node 40.

The section 212, the section 214, the section 215 and the section 216 of the encoded portion 31 are decoded to generate the portion 201. The section 222, the section 224, the section 225 and the section 226 of the encoded portion 32 are decoded to generate the portion 202. The section 231, the section 234, the section 235 and the section 236 of the encoded portion 33 are decoded to generate the portion 203. The section 241, the section 244, the section 245 and the section 246 of the encoded portion 34 are decoded to generate the portion 204. The portion 201, the portion 202, the portion 203 and the portion 204 are assembled to generate the time slice 61.

The section 252, the section 254, the section 255 and the section 256 of the encoded portion 35 are decoded to generate the portion 205. The section 261, the section 264, the section 265 and the section 266 of the encoded portion 36 are decoded to generate the portion 206. The section 272, the section 274, the section 275 and the section 276 of the encoded portion 37 are decoded to generate the portion 207. The section 281, the section 284, the section 285 and the section 286 of the encoded portion 38 are decoded to generate the portion 208. The portion 205, the portion 206, the portion 207 and the portion 208 are assembled to generate the time slice 62.

FIG. 6 shows a block diagram of the distribution of primary and secondary segments according to one embodiment of the invention.

In one embodiment, each of the segments associated with a video is split into a primary segment and a secondary segment. The primary segment corresponds to the first half of the playback duration of the video. The secondary segment corresponds to the second half of the playback duration of the video.

A segment 600 includes a primary segment 601 and a secondary segment 602. A segment 610 includes the primary segment 611 and a secondary segment 612. A segment 620 includes a primary segment 621 and a secondary segment 622. A segment 630 includes a primary segment 631 and a secondary segment 632. A segment 640 includes a primary segment 641 and a secondary segment 642. A segment 650 includes a primary segment 651 and a secondary segment 652.

The primary segment 601, the primary segment 611, the primary segment 621, the primary segment 631, the primary segment 641 and the primary segment 651 are distributed over a network 680 to a set of nodes 671. The secondary segment 602, the secondary segment 612, the secondary segment 622, the secondary segment 632, the secondary segment 642 and the secondary segment 652 are distributed over the network 680 to a set of nodes 672. In one embodiment, the set of nodes 671 is not disjoint from the set of nodes 672.
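A minimal sketch of the split and distribution step (assuming each segment's sections are stored in playback order, a layout inferred from FIG. 6 rather than stated code): each whole-video segment is cut at the midpoint of the playback order, and the halves go to the two node sets.

```python
# Split each whole-video segment into a primary half and a secondary half,
# assuming its sections are ordered by playback time (layout assumed from FIG. 6).
def split_segment(segment):
    mid = len(segment) // 2
    return segment[:mid], segment[mid:]   # (primary, secondary)

def split_all(segments):
    pairs = [split_segment(seg) for seg in segments]
    primaries = [p for p, _ in pairs]     # distributed to the set of nodes 671
    secondaries = [s for _, s in pairs]   # distributed to the set of nodes 672
    return primaries, secondaries
```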

Nodes requesting a streaming video corresponding to the first half of the video are assigned nodes from the set of nodes 671. Nodes requesting a streaming video corresponding to the second half of the video are assigned nodes from the set of nodes 672. As a node nears the end of the first half of the video, it is assigned nodes from the set of nodes 672 so that it can continue with the second half. This assignment is made in advance to ensure that there is no interruption in playback as a node transitions from the first half to the second half.

The division of segments into primary and secondary segments can, in the best case, double the peak capacity of concurrent streams of that video for the same average storage used per node. Observe that, at any given point in time, half the nodes receiving a particular video could be streaming the first half of the video from the set of nodes 671 and half the nodes receiving that video could be streaming the second half of that video from the set of nodes 672. Since different nodes are streaming each half of the video, the peak capacity of concurrent streams of that video doubles as compared to the case in which the whole video is served by the same node.

Approximately the same storage capacity is used, allowing for some overhead, since twice as many nodes each store half of the video as compared to the case where the whole video is served by the same node. Since less space is taken for each video stored on a particular node, more video selections can be stored on a node having a given memory capacity. This allows for more flexibility in the allocation of nodes in response to video requests.
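The doubling argument can be checked with a quick back-of-the-envelope calculation (the numbers are illustrative assumptions): because a node serving a half-video segment is occupied for only half the playback time, the same pool of serving nodes turns over twice as fast.

```python
# Illustrative capacity comparison (assumed numbers, not from the patent).
playback_minutes = 120
serving_nodes_per_stream = 4        # segments streamed in parallel per viewer

# Whole-video layout: each serving node is tied up for the full playback time.
whole_node_minutes_per_stream = serving_nodes_per_stream * playback_minutes        # 480

# Split layout: a serving node is tied up for only half the playback time,
# and the two halves are served by different nodes.
split_node_minutes_per_stream = serving_nodes_per_stream * (playback_minutes / 2)  # 240

# Node-minutes per stream halve, so a fixed pool of serving nodes can sustain
# roughly twice as many concurrent streams of the same video.
print(whole_node_minutes_per_stream / split_node_minutes_per_stream)  # 2.0
```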

In other embodiments, the video is broken into three or more periods and segments corresponding to each of the periods are distributed among three or more sets of nodes. In some embodiments, breaking the video into more periods can further increase flexibility in the allocation of nodes in response to video requests and the peak capacity of concurrent streams of a particular video.

Embodiments of the invention can be implemented in a VOD system using primary and secondary segments as illustrated generally in FIG. 7. The system includes nodes coupled through a network 760 to a server 700 that has a video lookup database 710.

Users of the video on demand service access the service through the network 760. A node 730 has a control system 731, a memory 733 and a decoder 734 coupled through a network interface 732 to the network 760. The node 730 has a video interface 735 coupled to the decoder 734 and coupled through a video connector 738 to a presentation device 739.

For example, a user can request a video A through the control system 731. In response, the control system 731 initiates a video request for the first period of the video A through the network interface 732 onto the network 760. The server 700 is configured to respond to video requests from the nodes on the network 760. When a request is received, the server 700 accesses the video lookup database 710 to determine which of the nodes on the network 760 store segments associated with the first period of the video A and are available to serve the segments to the node 730.

The server 700 can allocate a node 721, a node 722, a node 723 and a node 724, for example, to serve the first period of the video. The node 721 has a memory 791 that is coupled to the network 760 through a network interface 781. The memory 791 stores the segment 751 corresponding to the first period of the video A and a segment 759 corresponding to the first period of the video B. A node 722 has a memory 792 that is coupled to the network 760 through a network interface 782. The memory 792 stores the segment 752 corresponding to the first period of the video A and a segment 760 corresponding to the first period of the video B. A node 723 has a memory 793 that is coupled to the network 760 through a network interface 783. The memory 793 stores the segment 753 corresponding to the first period of the video A and a segment 761 corresponding to the second period of the video B. A node 724 has a memory 794 that is coupled to the network 760 through a network interface 784. The memory 794 stores the segment 754 corresponding to the first period of the video A and a segment 762 corresponding to the first period of the video B.

After receiving the allocation of the nodes from the server 700, the network interface 732 communicates with the network interface 781 to receive the segment 751, communicates with the network interface 782 to receive the segment 752, communicates with the network interface 783 to receive the segment 753 and communicates with the network interface 784 to receive the segment 754.

The decoder 734 receives the four segments and generates the video according to methods described with reference to other figures. A video interface 735 is coupled to receive the video from the decoder 734 and transmit the video through a video connector 738 to a presentation device 739. In one embodiment, the presentation device 739 is a television. Alternatively, the presentation device 739 is another device capable of audiovisual representation of the video.

When streaming the first period of video A is almost complete, the control system 731 initiates a video request for the second period of video A through the network interface 732 onto the network 760. The server 700 accesses the video lookup database 710 to determine which of the nodes on the network 760 store segments associated with the second period of the video A and are available to serve the segments to the node 730.

The server 700 can allocate a node 725, a node 726, a node 727 and a node 728, for example, to serve the second period of the video. The node 725 has a memory 795 that is coupled to the network 760 through a network interface 785. The memory 795 stores the segment 755 corresponding to the second period of the video A and a segment 763 corresponding to the first period of the video B. A node 726 has a memory 796 that is coupled to the network 760 through a network interface 786. The memory 796 stores the segment 756 corresponding to the second period of the video A and a segment 764 corresponding to the first period of the video B. A node 727 has a memory 797 that is coupled to the network 760 through a network interface 787. The memory 797 stores the segment 757 corresponding to the second period of the video A and a segment 765 corresponding to the second period of the video B. A node 728 has a memory 798 that is coupled to the network 760 through a network interface 788. The memory 798 stores the segment 758 corresponding to the second period of the video A and a segment 766 corresponding to the second period of the video B.

After receiving the allocation of the nodes from the server 700, the network interface 732 communicates with the network interface 785 to receive the segment 755, communicates with the network interface 786 to receive the segment 756, communicates with the network interface 787 to receive the segment 757 and communicates with the network interface 788 to receive the segment 758. The nodes 721-724 are now released to be available for allocation to stream another video as the newly allocated nodes stream the second period of the video A to the node 730.

The decoder 734 receives the four segments and generates the video. The video is transmitted through the video interface 735 and through the video connector 738 to the presentation device 739.

Other users of the VOD service can concurrently access the service through the network 760. A node 740 has a control system 741, a memory 743 and a decoder 744 coupled through a network interface 742 to the network 760. The node 740 has a video interface 745 coupled to the decoder 744 and coupled through a video connector 748 to a presentation device 749.

The user can request the video A, for example, through the control system 741. In response, the control system 741 initiates a video request through the network interface 742 onto the network 760. When a request is received, the server 700 accesses the video lookup database 710 to determine which of the nodes on the network 760 store segments associated with the first period of the video A and are available to serve the segments to the node 740. For example, the server 700 can allocate the node 721, the node 722, the node 723 and the node 724 to respond to the request for the first period of the video A from the node 740.

After receiving the allocation of the nodes from the server 700, the network interface 742 communicates with the network interface 781 to receive the segment 751, communicates with the network interface 782 to receive the segment 752, communicates with the network interface 783 to receive the segment 753 and communicates with the network interface 784 to receive the segment 754. The decoder 744 receives the four segments and generates the video according to methods described with reference to other figures. A video interface 745 is coupled to receive the video from the decoder 744 and transmit the video through a video connector 748 to a presentation device 749.

FIG. 8 is a flow chart that illustrates one embodiment of a method of the present invention.

In step 800, the video portions are encoded to generate encoded portions each having multiple sections. In one embodiment, the encoding process is performed using an erasure code, such as a Hamming code, a Reed-Solomon code or a Tornado code.

In step 810, the sections of each of the encoded portions are distributed among a set of segments.

In step 820, the segments are distributed among nodes on a network. In some embodiments, the encoding process is performed as illustrated with reference to FIG. 2.

In step 830 a video is requested. In one embodiment, the node requests the video in response to a user request submitted through a remote control or keyboard.

In step 840 the nodes storing a subset of the segments are identified. In one embodiment, a server accesses a video lookup database to determine which of the nodes on the network store segments associated with the requested video and are available to serve the segments. In one embodiment, the server allocates the nodes that are least likely to be needed for a subsequent video request based on, for example, the popularity of videos associated with the other segments stored on each node.

In step 850, a subset of the sections for each of the encoded portions are received.

In step 860, the subset of the sections for each of the encoded portions is decoded to generate portions of the video. In some embodiments, the decoding process is performed as illustrated with reference to FIGS. 3, 4 and 5.

In step 870, the portions of the video are presented. In one embodiment, the video is displayed on a television or other device capable of audiovisual representation. In some embodiments, the video is presented without sound. In other embodiments, the video is presented without images.

In step 880, it is determined whether another time slice is available. If yes, step 850 is performed to process subsequent time slices. If no, the flow chart is completed.
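Tying the steps together, the flow of FIG. 8 can be sketched end to end using the hypothetical helpers introduced in the earlier sketches (encode_portion, build_segments, rebuild_time_slice); node communication, allocation and presentation are reduced to simple stand-ins, so this is an illustration of the sequence of steps rather than the patent's implementation.

```python
# End-to-end sketch of FIG. 8 using the earlier illustrative helpers.
def stream_video(portions, present, per_slice=4):
    # Steps 800-820: encode portions, stripe sections into segments, distribute.
    encoded = [encode_portion(p) for p in portions]
    segments = build_segments(encoded)            # would be spread across nodes
    # Steps 830-840: request the video; here any 4 segments stand in for the
    # nodes allocated by the server.
    received = segments[:4]
    # Steps 850-880: receive, decode and present one time slice at a time.
    for first in range(0, len(portions), per_slice):
        count = min(per_slice, len(portions) - first)
        present(rebuild_time_slice(received, first_portion=first,
                                   portions_per_slice=count))

stream_video([[3, 1, 4, 1], [5, 2, 6, 0]], present=print)
```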

Claims

1. A method for distributing media comprising:

dividing at least a slice of a video file into a plurality of portions;
encoding each portion of the plurality of portions to generate encodings including a plurality of sections;
defining segments each including a section from each encoding of the plurality of encodings;
distributing each segment to a different node of a first group of nodes;
receiving, by a server, a request for the video file from a first node; and
transmitting, by the server, an assignment of the first group to the first node.

2. The method of claim 1, wherein the at least a slice of the video file is a first slice of the video file, the video file comprising a second slice, the segments being first segments, the method comprising:

transmitting second segments encoding the second slice to a second group of nodes; and
transmitting an assignment to the second group of nodes to the first node just before playback of the first segment is completed.

3. The method of claim 2, wherein the first and second slice have approximately the same playback duration.

4. The method of claim 1, wherein a number of the segments is such that the cumulative data transmission rate of the first group of nodes to the first node is at least as fast as the playback rate of the video file.

5. The method of claim 1, wherein the number of segments is approximately equal to a ratio of a download bandwidth and an upload bandwidth for a network coupling the server, first node, and the first group of nodes.

6. The method of claim 1, wherein encoding each portion of the plurality of portions to generate encodings including a plurality of sections further comprises encrypting each portion of the plurality of portions.

7. The method of claim 1, wherein encoding each portion of the plurality of portions to generate encodings including a plurality of sections comprises performing erasure encoding of each portion of the plurality of portions.

8. The method of claim 1, wherein encoding each portion of the plurality of portions to generate encodings including a plurality of sections comprises performing erasure encoding such that less than all of the plurality of sections of each encoding are required to recreate a corresponding portion of the plurality of portions.

9. The method of claim 1, further comprising:

receiving, by the first node, the assignment of the first group of nodes;
requesting, by the first node, the segments from the first group of nodes;
receiving, by the first node, the segments from the first group of nodes;
decoding, by the first node, the plurality of portions from the plurality of sections included in the segments;
displaying, by the first node, video according to the plurality of portions.

10. The method of claim 1, wherein the segments include primary segments and at least one backup segment.

11. A system for distributing media comprising one or more processors and one or more memory devices operably coupled to the one or more processors, the one or more memory devices storing executable data effective to cause the one or more processors to:

divide at least a slice of a video file into a plurality of portions;
encode each portion of the plurality of portions to generate encodings including a plurality of sections;
define segments each including a section from each encoding of the plurality of encodings;
distribute each segment to a different node of a first group of nodes;
receive, by a server, a request for the video file from a first node; and
transmit, by the server, an assignment of the first group to the first node.

12. The system of claim 11, wherein the at least a slice of the video file is a first slice of the video file, the video file comprising a second slice, the segments being first segments, the executable data being further effective to cause the one or more processors to:

transmit second segments encoding the second slice to a second group of nodes; and
transmit an assignment to the second group of nodes to the first node just before playback of the first segment is completed.

13. The system of claim 12, wherein the first and second slice have approximately the same playback duration.

14. The system of claim 11, wherein a number of the segments is such that the cumulative data transmission rate of the first group of nodes to the first node is at least as fast as the playback rate of the video file.

15. The system of claim 11, wherein the number of segments is approximately equal to a ratio of a download bandwidth and an upload bandwidth for a network coupling the server, first node, and the first group of nodes.

16. The system of claim 11, wherein the executable data is further effective to encode each portion of the plurality of portions to generate encodings including a plurality of sections by encrypting each portion of the plurality of portions.

17. The system of claim 11, wherein the executable data is further effective to encode each portion of the plurality of portions to generate encodings including a plurality of sections by performing erasure encoding.

18. The system of claim 11, wherein the executable data is further effective to encode each portion of the plurality of portions to generate encodings including a plurality of sections by performing erasure encoding such that less than all of the plurality of sections of each encoding are required to recreate a corresponding portion of the plurality of portions.

19. The system of claim 11, wherein the first node is operable to:

receive the assignment of the first group of nodes;
request the segments from the first group of nodes;
receive the segments from the first group of nodes;
decode the plurality of portions from the plurality of sections included in the segments; and
display video according to the plurality of portions.

20. The system of claim 11, wherein the segments include primary segments and at least one backup segment.

Patent History
Publication number: 20130276040
Type: Application
Filed: Sep 21, 2012
Publication Date: Oct 17, 2013
Applicant: Vudu, Inc. (Santa Clara, CA)
Inventor: Prasanna Ganesan (Menlo Park, CA)
Application Number: 13/624,676
Classifications
Current U.S. Class: Control Process (725/93)
International Classification: H04N 21/258 (20060101);