Parallel Video Streaming

The system and method include a clone-of-a-clone server that receives cloned copies of high definition video streams. The clone-of-a-clone server generates low definition video streams from the high definition video streams and streams parallel video streams, comprising both the high definition and low definition video streams, to user computing devices. The clone-of-a-clone server synchronizes the parallel video streams that are sent to each user computing device. The user computing device displays the received low definition video streams, and a user selects one or more of the low definition video streams to be displayed in high definition on the user computing device.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/385,605, filed Sep. 9, 2016, which is incorporated herein by reference.

TECHNICAL FIELD

The subject application teaches embodiments that relate generally to streaming audio and video for sports venues, and specifically to video and audio capture, processing, and streaming of sporting events and practices.

BACKGROUND

Professional broadcasters capture live action events at sporting venues and broadcast live or recorded video to subscribers and television viewing audiences. When sporting events are broadcast, viewers generally are limited to viewing an event through the viewpoint of a single camera selected by producers from one or more cameras that capture the sporting event. Most practices and some pre-season games are not broadcast, and minor league games, club level events, and high school sporting events are rarely broadcast or recorded at all. Cameras used by broadcasters are typically large, complicated devices designed for professional camera personnel and include high resolution image capturing elements and expensive lenses with variable zoom. Cameras are typically mounted on tripods, slung from wires above sporting events, or attached to weight-bearing harnesses strapped to camera personnel who position themselves near the action taking place on the field. The cost of cameras and the expertise required to operate them create a barrier for new entrants to the market, local small-market producers, schools, and individuals wanting to create audio and video of sporting events, either for their own use or for monetizing their work through third-party subscriptions. Broadcasters can offset the costs of obtaining, maintaining, and operating cameras, editing systems, and other broadcasting expenses through marketing and/or subscription revenues from their larger base of advertisers and/or consumers. The present disclosure presents new modalities for streaming audio and video from sporting venues to viewers.

SUMMARY

A method includes receiving cloned copies of a number of high definition video streams by a clone-of-a-clone server and streaming parallel video streams to a user computing device, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams. The low definition video streams can use the common intermediate format, or CIF, nominally at 320×240 pixels. The high definition video streams can use the 1080p resolution of 1920×1080 pixels. The low definition and high definition video streams are substantially identical videos but have different spatial resolutions. The method can include generating a low definition video stream from each high definition video stream by the clone-of-a-clone server. The method can include synchronizing frames of the parallel video streams that are sent to the user computing device. The method can include receiving the parallel streams on the user computing device from the clone-of-a-clone server, displaying each of the low definition video streams on the user computing device, and displaying a selected high definition video stream on the user computing device. The method can include receiving a user selection of one of the low definition video streams on the user computing device, where the high definition video stream that is displayed is based on the user selection. The method can include receiving a second user selection of a second low definition video stream and switching from the displayed high definition video stream to a second high definition video stream associated with the second user selection. The switching is performed substantially seamlessly from the first high definition video stream to the second high definition video stream. Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.

A system includes a clone-of-a-clone server that is configured to receive a number of high definition video streams and stream parallel video streams to one or more user computing devices, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams. The low definition video streams can use the common intermediate format, or CIF, nominally at 320×240 pixels. The high definition video streams can use the 1080p resolution of 1920×1080 pixels. The low definition video streams can be CIF, VGA, 4CIF, or D1 resolution, while the high definition video streams can be 720p, 1 Megapixel, or 1080p. Other resolutions and video encoding standards can be used. The system can include a number of cameras that are configured to stream high definition video streams and a clone server configured to receive the streaming video from the cameras and clone the streaming video onto the clone-of-a-clone server. The clone-of-a-clone server can generate a low definition video stream from each of the high definition video streams that the clone-of-a-clone server receives. The clone-of-a-clone server can synchronize frames of the parallel video streams that are streamed to the user computing device. The system can include a user computing device that is configured to receive the parallel high definition and low definition video streams. The user computing device is configured to display each of the low definition video streams and a selected high definition video stream. The user computing device can be configured to receive a user selection of one of the displayed low definition video streams, display an associated high definition video stream, receive a second user selection, and seamlessly switch to displaying the high definition video stream associated with the second user selection. Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.

A system includes a clone-of-a-clone server that is configured to receive a number of cloned high definition video streams, generate low definition video streams from the high definition video streams, and selectively stream parallel high and low definition video streams to a user computing device. The clone-of-a-clone server synchronizes the parallel video streams so as to enable seamless switching on the display of the user computing device between different high definition video streams. The low definition video streams can use the common intermediate format, or CIF, nominally at 320×240 pixels. The high definition video streams can use the 1080p resolution of 1920×1080 pixels.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of an audio/video system for sporting venues according to an embodiment of the disclosure.

FIG. 2 is a diagram of an impact-resistant camera housing according to an embodiment of the disclosure.

FIG. 3 is a diagram of a sports helmet with integrated audio/video system according to an embodiment of the disclosure.

FIG. 4 is a diagram of example audio/video and network components according to an embodiment of the disclosure.

FIG. 5 is a flowchart of example operations for networking audio/video components according to an embodiment of the disclosure.

FIG. 6 is a flow diagram of example data connections according to an embodiment of the disclosure.

FIG. 7 is a diagram of an example screen for selecting from multiple audio and video feeds according to an embodiment of the disclosure.

FIG. 8 is a flowchart of example operations for custom content creation according to an embodiment of the disclosure.

FIG. 9 is a diagram of components of an example computing device configured for audio/video operations according to an embodiment of the disclosure.

FIG. 10 is a functional block diagram of example modules of an audio/video streaming system.

FIG. 11 is a diagram of example video resolutions.

FIG. 12 is a diagram of an example clone streaming system for parallel streams.

DETAILED DESCRIPTION

The systems and methods disclosed herein are described in detail by way of examples and with reference to the figures. It will be appreciated that modifications to disclosed and described examples, arrangements, configurations, components, elements, apparatuses, devices, methods, systems, etc. can suitably be made and may be desired for a specific application. In this disclosure, any identification of specific techniques, arrangements, etc. is either related to a specific example presented or is merely a general description of such a technique, arrangement, etc. Identifications of specific details or examples are not intended to be, and should not be, construed as mandatory or limiting unless specifically designated as such.

The systems and methods disclosed herein describe various aspects of real-time video for sporting venues. Although the disclosed system and method are described below with regard to one or more computing devices and in particular mobile computing devices, the system and method can be used with any suitable computing device including but not limited to mobile phones, smart phones, pad computing devices, laptops, personal computers, desktops, servers, embedded controllers, and so forth, among other various possibilities.

Turning to FIG. 1, an audio/video system 100 for sporting venues is presented. The system 100 includes one or more audio/video streaming devices illustrated as cameras 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13. For example, cameras 1, 11, 12, and 13 can be fixed cameras in an arena, cameras 2, 5, 6, and 9 can be movable cameras that follow players or the action in the arena, camera 4 can be a camera ideally positioned to point at a scoreboard, cameras 3, 7, and 8 can be helmet cameras mounted to the helmets of certain players, and camera 10 can be a pair of helmet cameras configured to provide a 3D virtual reality view from a player's perspective, such as a goalie's view.

The devices can be cameras, microphones, wireless cameras, wireless microphones, helmet cams, and so forth. Wired communications can be provided over Ethernet, for example using UDP or TCP protocols as would be understood in the art. In a configuration, a wired microphone can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. Typically, wired microphones are analog devices that are connected via cables to a head end unit; long cables require sufficient electrical insulation to avoid interference and substantial gauge wire, which makes them expensive and heavy. Even with quality electrical insulation and properly gauged wire, purely analog solutions are subject to attenuation losses and noise, affecting the signal-to-noise ratio of the signal received at the head end unit. By immediately converting the signal from the analog transducer into a digital signal, the digital signal can be carried on less expensive, longer cables without the attenuation losses and degraded signal-to-noise ratio of a purely analog system. Power over Ethernet (PoE) advantageously can be used both to provide power to devices and to provide a wired communications medium for the devices. Wireless communications can be effected using Wi-Fi or other wireless protocols, including but not limited to Bluetooth or Li-Fi.

The system 100 can include a private network, shown as intranet 110, configured for data communications between the devices and a streaming system 120. The streaming system 120 is configured to support audio and video streams from the devices, and convert them as required, as described below in greater detail. The streaming system 120 can include storage 130 for storing the audio and video streams. The streaming system 120 can allow users 150 to stream audio and video from the devices or from storage 130.

Turning now to FIG. 2, an example impact resistant camera housing 200 is presented. The camera housing 200 is configured to withstand vibrations, shocks, and impact to a camera mounted within the camera housing 200. For example, in a hockey arena it is possible for cameras to come into contact with a flying hockey puck, or be impacted by a hockey stick or a player. The camera housing 200 can protect the camera from the impact, and also ensure that parts from a damaged camera, such as glass or electronics, do not end up on spectators or players or on the ice where sharp or heavy pieces might cause injury.

The camera housing 200 is structurally configured to protect the camera while allowing connection to electrical components such as cables or wires for power and data communications. The camera housing 200 can also be configured to provide clean air for the camera, and remove heat dissipated by the camera.

The camera housing 200 comprises a dome assembly that attaches at one end of a drum 201. The dome assembly comprises a transparent dome cover 203 and a retainer ring 204. The dome assembly can be coupled to the drum 201 using complementary threading, screws, nuts, bolts, washers, (not shown) and the like as would be understood in the art.

A camera can be mounted inside the camera housing 200, for example on a support structure having support members (not shown) that contact the interior wall of the drum 201. The support members can be configured to dampen vibrations as would be understood in the art. An example support structure can be a disk that rests against pliable dampeners that act as support members and that seat the disk along a cross section of the drum 201. A camera can be mounted to the disk, for example using screws or other suitable fasteners.

The drum 201 can include threaded holes 202A, 202B to attach camera angle travel limiters inside the drum 201, thereby limiting the camera rotation to a predetermined angle. Camera angle travel limiters restrict the camera rotation angle to prevent the camera from becoming damaged during rotation, or to ensure that the camera is always pointed at a certain area of the arena. For example, it may be desirable to use angle travel limiters to ensure that a camera cannot be pointed at spectators accidentally. In a configuration, the threaded holes 202A, 202B do not penetrate the drum 201 and are accessible only from the inside of the drum 201.

A mounting cover comprises retainer ring 205 and cover plate 206. Cover plate 206 can include collar 207 configured to accept a support rod 210 that connects to a support structure 211 and mounting plate 212. The mounting plate 212 can be attached to a structure in the arena such as a wall, ceiling, support beam, and so forth. A quick link 208 can be used as a backup failsafe to further anchor the camera housing 200 to a wall or support structure, for example using metal wire or rope. This can be used to ensure that the camera housing 200 does not fall onto spectators, players, or the arena if the mounting plate 212 were to become detached for any reason. The support rod 210 can be hollow, providing a passage for electrical components such as wires, cables, and so forth. The cover plate 206 can include threaded screw holes 209A, 209B, 209C, 209D for connecting the cover plate 206 and ring 205 to the drum 201. In a configuration, long screws can be used that pass through the drum 201 and also connect the dome assembly to the drum 201.

Referring now to FIG. 3, a helmet 300 that includes a helmet cam is presented. The helmet 300 can include one or more cameras 302 and/or microphones 304. The camera 302 can use a standard definition or high definition frame size and frame rate, such as 720p, 1080i, 1080p, 2K, or 4K at 30 frames per second (fps), 60 fps, or 120 fps, or lower frame rates. A helmet cam for providing a 3D virtual reality video feed can include two spatially separated cameras 302 as would be understood in the art. In a configuration, the microphone 304 can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. In a configuration, the camera 302 and microphone 304 can be a single unit. The camera 302 and microphone 304 are in communication with an embedded controller 306. The embedded controller 306 can include custom designed electronics, for example a chip or microcontroller with a Wi-Fi or other antenna. In a configuration, the embedded controller 306 can include a modified smartphone. In one such configuration, the camera element and microphone element from a smartphone can be separated from the modified smartphone and used as camera 302 and microphone 304. The embedded controller 306 can stream one or more video or audio streams from the camera 302 and/or microphone 304. Data communications from the embedded controller 306 can include Wi-Fi.

Referring now to FIG. 4, example audio/video and network components 400 are presented. A microphone 450, for example a wired microphone configured to be placed near the glass surrounding a hockey rink, can be connected to a proxy server 410 via an Ethernet cable such as a CAT 6 cable. The communications protocol between the proxy server 410 and microphone 450 can be USB over Ethernet, among other possible protocols as would be understood in the art. The Ethernet cable can provide power to the microphone 450. In another configuration, the microphone can use a wireless network such as Wi-Fi or Li-Fi.

An IP camera 460, for example an IP camera configured to be placed inside of the impact resistant camera housing 200 of FIG. 2, can be connected to a PoE switch 430 using a CAT 6 cable. The PoE switch 430 can provide power to the IP camera 460. The communications protocol between the proxy server 410 and IP camera 460 can be RTSP or real-time streaming protocol, among other possible protocols as would be understood in the art.

A wireless helmet camera 470, for example as described in helmet 300 of FIG. 3, can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. Wi-Fi router 440 can be connected to the proxy server 410 via PoE switch 430 or by a direct connection to the proxy server 410. The communications protocol between the proxy server 410 and wireless helmet camera 470 can be RTSP (i.e., real-time streaming protocol), H.264, or H.265, among other possible protocols as would be understood in the art.

Similarly, a wireless microphone 480 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. The proxy server 410 can receive digitized audio, for example an MP3 stream, by establishing a connection with the wireless microphone, for example using hypertext transfer protocol, or HTTP. Other communication protocols could also be used as would be understood in the art.
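
By way of illustration only, the following is a minimal client sketch of pulling such a stream, assuming the Python requests library; the device URL and file sink are illustrative assumptions, not details from the disclosure.

    import requests

    # Pull the digitized MP3 stream from a wireless microphone over HTTP, as
    # the proxy server might; the device URL and file sink are assumptions.
    with requests.get("http://10.0.0.41/stream.mp3", stream=True, timeout=10) as resp:
        resp.raise_for_status()
        with open("mic_480.mp3", "wb") as sink:
            for chunk in resp.iter_content(chunk_size=4096):
                sink.write(chunk)  # archive while the connection stays up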

The proxy server 410 receives audio and video streams from microphones 450, 480 and cameras 460, 470. The proxy server 410 can store the streams to a memory, such as data store 420, for archiving or temporary storage. In a configuration, the proxy server 410 and data store 420 reside in the same hardware. In a configuration, the proxy server 410 can convert each video or audio stream to one or more common formats, sampling or compression rates, and frame sizes. For example, the proxy server 410 can receive a video stream and convert it to a standard H.264 or MPEG video stream prior to storing it in data store 420. In a configuration, the proxy server 410 can store two or more different video streams derived from the same received video stream. For example, the proxy server 410 can convert a received video stream into a small thumbnail-sized video stream and a full size video stream. In an embodiment, two or more proxy servers can be used; for example, a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see, for example, FIG. 12 and the associated description).
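
The cloning step can be illustrated with a minimal sketch; the socket transport and buffer size are assumptions, as the disclosure does not fix a transport.

    import socket

    def clone_stream(src: socket.socket, sinks: list) -> None:
        # Fan one received stream out, byte for byte, to several sinks, for
        # example a second (clone-of-a-clone) proxy server and an archive
        # writer; each sink receives an identical clone of the stream.
        while chunk := src.recv(65536):
            for sink in sinks:
                sink.sendall(chunk)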

Referring now to FIG. 5, example operations for networking wireless audio and video devices are presented. Operation commences at start block 500 labeled “START” and proceeds to process block 502.

In process block 502, the wireless device is powered on. Processing continues to process block 504.

In process block 504, the wireless device detects a Wi-Fi network. The wireless device can be preconfigured to connect to a specific Wi-Fi network by name, or service set identifier (SSID). The Wi-Fi network may be configured not to broadcast the SSID, for example to prevent the wireless network from being visible on spectators' mobile devices in the arena. In this configuration, the wireless device may detect the Wi-Fi network by querying for the Wi-Fi network using the preconfigured SSID. Processing continues to decision block 506.

In decision block 506, if the wireless device has previously received an IP address, then processing continues to process block 514, otherwise processing continues to process block 508.

In process block 508, the wireless device requests an IP address using the dynamic host configuration protocol or DHCP. Processing continues to process block 510.

In process block 510, a DHCP server receives the DHCP request from the wireless device and provides an IP address to the wireless device. The DHCP server reserves a fixed IP address for each wireless device. Advantageously, reserving a fixed IP address for each wireless device facilitates determining which video or audio feed belongs to each wireless device. A fixed or reserved IP address simplifies the process of allowing multiple users to receive video feeds from specific wireless devices, as players have helmet cams that may disconnect and reconnect to the Wi-Fi network as they move about the arena during game play. Without fixed or reserved IP addresses, the IP addresses of helmet cams could change during game play, forcing live streams to disconnect and reconnect. Processing continues to process block 512.

In process block 512, the wireless device receives the IP address from the DHCP server. Processing continues to process block 514.

In process block 514, the wireless device streams audio and/or video to the proxy server using the configured IP address. Processing continues to decision block 516.

In decision block 516, if the connection to the wireless device drops, then processing continues to decision block 518, otherwise processing continues back to process block 514 to continue streaming the audio and/or video.

In decision block 518, if the connection has dropped due to a power off event or a signal to end streaming, then processing terminates at end block 520, otherwise processing continues back to process block 504 to attempt to reconnect to the Wi-Fi network.
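
The flow of FIG. 5 can be condensed into a short device-side sketch; the helper functions are hypothetical stand-ins for platform networking calls, not part of the disclosure.

    import time

    def join_wifi(ssid: str) -> None:
        # Hypothetical stand-in for the platform's Wi-Fi join call; a hidden
        # SSID must be probed for by name since it is not broadcast.
        pass

    def request_dhcp_lease() -> bool:
        # Hypothetical stand-in for the DHCP exchange; the server answers
        # with the fixed IP address reserved for this device.
        return True

    def run_device(ssid: str, stream_once) -> None:
        has_ip = False
        while True:
            join_wifi(ssid)                    # process block 504
            if not has_ip:                     # decision block 506
                has_ip = request_dhcp_lease()  # process blocks 508-512
            reason = stream_once()             # process block 514, until drop
            if reason in ("power_off", "end_streaming"):
                return                         # end block 520
            time.sleep(1)                      # retry from process block 504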

Referring now to FIG. 6, example data connections are illustrated for an embodiment of the audio/video system 600. In an arena 602, such as a hockey arena, a sporting venue, or an entertainment venue in general, one or more fixed or moveable cameras 604, helmet cams 606, and microphones 608 are in data communication with a proxy server 612 through data communications equipment represented by wireless hub 610. The proxy server 612 provides one or more ports through which video and audio data streams can be accessed by users 630, either in real-time or through viewing stored data streams. A firewall 614, such as a specially configured router or dedicated piece of data communications equipment, prevents unauthorized users 630 from accessing data streams from the proxy server 612.

In an embodiment, users 630 first access a website system 620 which provides authentication information for accessing the data streams through the firewall. Authenticated users 630 connect through the firewall to the proxy server 612, and selected data streams are obtained from the proxy server 612 and presented on the screens of the users 630. In another embodiment, the website system 620 is able to connect through the firewall 614 to the proxy server 612, which streams to the website system 620. Users 630 that are authenticated on the website system 620 receive data streams that pass through the website system 620 from the proxy server 612. In another embodiment, two or more proxy servers can be used; for example, a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see, for example, FIG. 12 and the associated description).

Multiple end users 630 can simultaneously use the audio/video system 600. The audio/video system 600 can simultaneously support multiple events occurring in different venues. The audio/video system 600 can allow users 630 to create their own customized streams. For example, a first end user 632 can view different live streams from the audio/video system 600 during a particular sporting event. A second end user 634 can generate a customized stream based on a current live stream, or stored data streams of a previous sporting event. A third end user 636 can stream the customized stream of the second end user 634. Each end user 630 can use a different kind of computing device, for example a mobile device such as a smartphone or tablet, a laptop, a desktop, and so forth. For example, the first end user 632 can be streaming to a mobile computing device using a dedicated application or app that has been downloaded to that device. The second end user 634 can be using a high end workstation with a fast Internet connection for editing and generating their customized stream. The third end user 636 can be using an Internet browser and clicking a link to access the customized stream of the second end user 634. In a configuration, the bit rate, frame rate, and frame size of the video and audio streams can be optimized for the type of end user computing device and connection speed.

Referring also to FIG. 7, an example screen 700 for selecting from multiple audio and video feeds is presented. The screen 700 includes thumbnail views 710 from each of the cameras and microphones. Some thumbnail views 710 may not include audio or video, either because the feed does not include audio or video, or due to a lost connection. Some thumbnail views, such as thumbnail view 10, may include a left and right view, allowing a user with a 3D viewing device connected to their video device to view a sporting event as a virtual reality experience from one or more of the players' perspectives.

The user can select from one or more of the thumbnail views 710, for example by clicking on a particular thumbnail view 710 or dragging a thumbnail view to a focus window 720. The currently selected video is presented in a focus window 720 that typically is larger than the thumbnail views. Clicking a camera icon associated with each thumbnail view 710 allows a user to select whether video, audio, or both are to be presented to the user, for example via the focus window 720. A user can select video from one device and audio from another device. In an embodiment, the user can customize the screen 700, for example to reorganize the order or size of the thumbnail views 710, or to have two or more focus windows. Different user controls and window arrangements can be presented to the user as would be understood in the art. For example, in one configuration the focus window 720 can be selected by the user and clicked to toggle between full screen and the illustrated split screen that includes both the focus window 720 and the thumbnail views 710. In another configuration, clicking on the focus window 720 will cycle between a group of selected thumbnail views 710. This can be particularly useful to a user viewing the event using VR or 3D viewing devices.

Referring now to FIG. 8, example operations of a system for creating custom content are presented. Users and/or the streaming system itself can choose which devices to display in the focus window or focus windows. Other users can be invited to view the custom created content. Operation commences at start block 800 labeled “START” and proceeds to process block 802.

In process block 802, the streaming system receives streams from devices such as cameras and microphones. Processing continues to process block 804.

In process block 804, the streaming system streams one or more device streams to users 808, for example through the selection screen 700 of FIG. 7. At any time, users 808 can join a live stream of a sporting event or view a saved stream in process block 806. Processing continues to decision block 810.

In decision block 810, if the streaming system is configured to auto-select the focus window, then processing continues to process block 812, otherwise processing continues to process block 814.

In process block 812, the streaming system selects a feature that is used to determine the focus window. For example, the streaming system can select the feature to be the camera where the puck is located, or the microphone that is loudest. The selected feature can change dynamically during the game or practice. For example, the selected feature can be the penalty box subsequent to determining that an official has blown a whistle and the clock has been stopped, or the scoreboard after a change to a score on the scoreboard, or a particular player when that player enters the ice in the arena. In this mode, the streaming system attempts to select devices to present the best user experience of the sporting event. Processing continues to process block 818 where the streaming system determines the focus window based on the selected feature.

In decision block 814, if a user manually selects a feature to use as the selected feature, then processing continues to process block 816 to receive the user selection, otherwise processing continues to decision block 820.

In process block 816, the streaming system receives a selection of a feature to use for selecting the focus window from the available devices. For example, a user who is a scout may desire to follow one particular athlete, and thus use the streaming system in a scouting mode. The scout may select as the feature a jersey number of the particular athlete, in which case the streaming system in process block 818 will determine which camera shows the athlete's jersey number best. In another example, an avid fan of a particular player may desire to have that player as the focus of attention while still watching the game in progress, in which case the system could select the camera that displays both the player and the puck the majority of the time, while the selected audio device could be the microphone in the player's helmet or the audio device closest to the player. Processing continues to process block 818.

In process block 818, the streaming system determines the focus window from the available cameras and microphones. The streaming system can track players on the ice, or on other playing surfaces for other sports, and use player position and motion data to determine the best camera and microphone to use in the focus window. The streaming system can use the selected feature from process block 812 and/or process block 816 in determining the best device to display in the focus window. The streaming system can determine when a particular device is not streaming, or has a connection issue, and switch to the next best device. Processing continues to decision block 820.

In decision block 820, if a user selects a particular device to use in the focus window, for example to override a device selected by the streaming system in process block 818, then processing continues to process block 822, otherwise processing continues to decision block 824.

In process block 822, the streaming system changes the focus window to the user selected device or devices. Processing continues to decision block 824.

In decision block 824, if the user adds user-created content to the content stream, then processing continues to process block 826, otherwise processing continues to decision block 828.

In process block 826, a user adds user-created content to the content stream. For example, the user may have a microphone connected to their computing device and can add live commentary, such as player analysis or real-time play-by-play announcing such as is performed by professional announcers and commentators. In another example, sophisticated users can include user-created video such as replay clips or on-screen annotation. Processing continues to decision block 828.

In decision block 828, if the stream is offered to users, then processing continues to process block 830, otherwise processing continues to decision block 834.

In process block 830, a custom stream can be saved. In one configuration, metadata is saved that includes time-stamped tracking of which device(s) were selected for the focus window(s). In this way, the custom stream can be recreated as needed from saved video streams. In another configuration, a new stream can be saved separately for each custom created stream. In another configuration, the original sources or streams can be saved for a configurable period of time, and then purged at a particular expiration date to recover storage space. Similarly, custom streams can be saved and stored for a period of time before being purged. For example, a single custom stream created by the streaming system might be stored indefinitely, while the remaining streams are purged. Processing continues to process block 832.
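
The time-stamped metadata configuration can be illustrated with a minimal sketch; the record fields and file name are assumptions rather than a format from the disclosure.

    import json

    # One record per focus-window change; the custom stream can later be
    # recreated by replaying these selections against the saved source
    # streams. Field names and values are illustrative only.
    focus_metadata = [
        {"t": 0.0, "focus": "camera_1", "audio": "mic_2"},
        {"t": 73.4, "focus": "camera_5", "audio": "mic_2"},
        {"t": 1312.9, "focus": "camera_10", "audio": "helmet_10"},
    ]

    with open("custom_stream.json", "w") as f:
        json.dump(focus_metadata, f, indent=2)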

In process block 832, users can be invited to view a custom stream. For example, a stream automatically generated by the streaming system can be shown on a schedule of available live or saved games for viewing by users. The streaming system can also include user-created custom streams in the schedule, and allow other users to rate user-created streams. In another example, a user that creates custom content can generate a link to their custom stream that can be forwarded to other users, for example through social media. For example, a link can be placed on a FACEBOOK page, a clip and link uploaded to the user's INSTAGRAM or TWITTER account, or a link can be emailed to potentially interested parties, for example using an email list and advertisement. Other uses of social media, either currently extant or yet to be developed, can be utilized as would be understood by one of skill in the art. Processing continues to decision block 834.

In decision block 834, if the sports event is determined to be over or if the saved stream has concluded, then processing terminates at end block 836, otherwise processing continues back to process block 804 to continue streaming content to users.

The costs of creating audio-video content are substantially reduced by allowing users, or the streaming system itself, to determine which video and audio stream to use as the focus window(s), especially when compared to the costs incurred by professional broadcast services such as the major television networks. Further, the use of relatively inexpensive cameras, microphones, and networking equipment allows that equipment to be more or less permanently placed in a sporting venue and used for whatever events occur in the venue, whether they are sporting events, entertainment events, or other events. This opens the opportunity to allow streaming of practices, pre-season games, minor-league games, club-level events, and even high-school events to interested parties. In effect, the present system democratizes the capture, production, and distribution of content from all levels of sporting venues.

Referring now to FIG. 9, an example computing device 900 is presented. Example computing devices 900 can be servers, desktop systems, mobile computing devices, embedded controllers, wireless cams and microphones, and so forth. Included are one or more processors, such as that illustrated by processor 904. Each processor is suitably associated with non-volatile memory, such as read only memory (ROM) 910 and random access memory (RAM) 912, via a data bus 914.

Processor 904 is also in data communication with a storage interface 916 for reading or writing to a data storage system 918, suitably comprised of a hard disk, memory or solid-state disk, or any other suitable data storage as will be appreciated by one of ordinary skill in the art.

Processor 904 is also in data communication with a network interface controller (NIC) 930, which provides a data path to any suitable wired or physical network connection via physical network interface 934, or to any suitable wireless data connection via wireless network interface 938 or cellular interface 936, such as one or more of the networks detailed above.

Processor 904 is also in data communication with an input/output (I/O) interface 940 which provides data communication with devices such as a microphone 946 or camera 948 or user peripherals, such as a touchscreen display 944, keyboard, or mouse or any other suitable user interface. It will be understood that functional units are suitably comprised of intelligent units, including any suitable hardware or software platform.

Referring now to FIG. 10, presented are example software modules of an embodiment of the website system of FIG. 6. A user interface module 1002 serves web pages to users and administrators that provide a graphical user interface for logging into the system, viewing camera and microphone locations, viewing calendars of upcoming sporting events and archived streams, selecting sporting events or recorded streams to view, receiving video and audio streams from the proxy server through the firewall, customizing the user's thumbnail and focus window views, and interacting with the system in general. User accounts, configuration data, calendar information, stream information, and other data can be stored in a database 1010 or other suitable memory. A scheduler engine 1004 can schedule recordings of sporting events by the proxy server.

An analytics engine 1006 can analyze video and audio streams. For example, the analytics engine 1006 can determine when a video or audio feed has disconnected, switch a user's focus window to another available stream, and switch back once the video or audio feed reconnects. Similarly, the analytics engine 1006 can monitor video or audio streams and either black out some or all of a stream in real-time, or switch the focus window to a different stream. The analytics engine 1006 can be used to detect objectionable language in audio, or objectionable images in a video feed, for example nudity, political messages, unauthorized advertising, excessive violence, and so forth. In a configuration, the analytics engine 1006 can be rules-based or use heuristics or other suitable analytics to perform an analysis of one or more streams. In a configuration, the analytics engine 1006 can receive a copy of the streams from a proxy server or the clone-of-a-clone system of FIG. 12, and the analytics engine 1006 can be executing on any suitable system as would be understood in the art.
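
The failover behavior can be sketched briefly; the stream names and caller-supplied fallback order are assumptions.

    def choose_focus(preferred: str, live: set, fallback_order: list) -> str:
        # Keep the user's preferred stream while its feed is live; otherwise
        # fall back to the next live stream. Calling again after the
        # preferred feed reconnects switches the focus window back.
        if preferred in live:
            return preferred
        return next(s for s in fallback_order if s in live)

    # camera_3 has dropped, so the focus window falls back to camera_5.
    assert choose_focus("camera_3", {"camera_1", "camera_5"},
                        ["camera_5", "camera_1"]) == "camera_5"
    # Once camera_3 reconnects, the same call restores it.
    assert choose_focus("camera_3", {"camera_3", "camera_5"},
                        ["camera_5"]) == "camera_3"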

The analytics engine 1006 can also track selected features for determining which stream to use in a focus window. For example, when the website system is being used by a user that is a scout, or if the system is set to use a scout mode, an individual player can be tracked in multiple video streams, for example by jersey number. The analytics engine 1006 can determine the optimal video and audio streams to use to track the selected player or feature being tracked.

The analytics engine 1006 can also perform analysis of helmet cam video and/or audio, for example to track where a player is looking or to determine how the player is moving the helmet. The analytics engine 1006 can determine if rapid helmet movements are suggestive of violent impacts which could cause concussions. The analytics engine 1006 can monitor a helmet cam for video and/or audio that might indicate a concussion, injury, or exhaustion of the player. For example, the analytics engine 1006 can detect movements of the helmet that are atypical for the player, such as looking down more often, looking up, not turning the head in one particular direction, not following the puck (or a ball as might be used in other sporting events), delaying in following the puck or action of the game, not looking where other players are looking, and so forth. In a configuration, a player's typical pattern of helmet movements can be analyzed and saved for reference and comparison. In a configuration, the analytics engine 1006 can send an alert to a coach or medical professional via a text message, email, or other suitable alert, for example using the user interface 1002.

A tracking engine 1008 can track one or more players' movements in the arena. The tracking engine 1008 can turn a player's movements into vector data, or any other suitable position data. The tracking engine 1008 can work in conjunction with the analytics engine 1006. For example, the tracking engine 1008 can provide player position or vector data to the analytics engine 1006 that is used to determine which camera and audio feed to use in the focus window(s). In a configuration, each player can be analyzed to create a digital representation of that player. Example data that can be determined include position, speed, direction, acceleration, deceleration, linearity, non-linearity, circularity, time, and other measurements as would be understood in the art. In a configuration, the tracking engine 1008 and analytics engine 1006 can determine the correct camera frame to provide to a user based on the player data. For example, the system can sum all of the vectors or kinetic energy for each frame and/or camera stream and switch the focus window to a particular camera stream based on that calculation.
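
The vector summation can be sketched as follows; the per-camera data layout and the unit-mass, squared-speed proxy for kinetic energy are assumptions.

    def pick_focus_camera(per_camera_vectors: dict) -> str:
        # Sum a kinetic-energy proxy (squared speed, unit mass) over the
        # player velocity vectors visible in each camera's current frame,
        # then pick the camera with the most action for the focus window.
        def energy(vectors):
            return sum(vx * vx + vy * vy for vx, vy in vectors)
        return max(per_camera_vectors, key=lambda cam: energy(per_camera_vectors[cam]))

    # camera_2 sees the fastest skating, so it wins the focus window.
    streams = {"camera_1": [(0.5, 0.1)],
               "camera_2": [(3.2, 1.1), (2.8, -0.4)]}
    assert pick_focus_camera(streams) == "camera_2"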

In a configuration, tracking data can be combined with video data to provide a visual representation of players' movements during practice or a game. Similarly, tracking data and/or analytics data can be combined with video and/or audio data to provide player performance information to coaches, scouts, and interested viewers and fans.

In a configuration, the tracking engine 1008 can receive position data from helmet cams, for example position data derived from GPS or radio signal triangulation. Tracking and analytics data can be stored in the database 1010 or any other suitable memory.

Referring to FIG. 11, example video resolutions are presented. Standard resolutions can include common intermediate format or CIF at 352×240 or 352×288 pixels, VGA at 640×480 pixels, and 4CIF/D1 at 704×480, 704×576, or 720×480 pixels. High definition resolutions can include 720p at 1280×720 pixels, 1 Megapixel at 1280×1024 pixels, and 1080p at 1920×1080 pixels. Ultra high resolution formats are also contemplated, for example QHD at 2560×1440 pixels, UHD or 4K at 3840×2160 pixels, and so forth. Standard resolution can also include QCIF at 176×120 or 176×144 pixels. Streaming video can be interlaced or progressive scan as appropriate for the resolution. Audio can similarly be encoded, for example as 19.2 kb/s PCM, 9.6 kb/s ADPCM, MP3, or any other suitable encoding or compression as would be understood in the art. The disclosed resolutions are presented as non-limiting examples only. Other suitable resolutions can also be used as would be understood in the art.

Referring now to FIG. 12, an example streaming system 1200 is presented. In a venue, such as arena 1214, a plurality of cameras 1202 are configured to stream video across one or more local network connections 1204 to a clone server 1206. The cameras 1202, such as camera 1 through camera n as illustrated, can be configured to stream a high definition video stream, such as 1080p at 1920×1080 pixels. In an embodiment, one or more cameras 1202 can be configured to stream both a low definition video stream, such as CIF nominally at 320×240 pixels, and a high definition video stream. In a configuration, different cameras 1202 can stream in different resolutions. For example, camera 1 could stream in 1080p, while camera 2 streams in 4K and camera n streams at 1 Megapixel resolution.

Each camera 1202 streams across a local network connection 1204, such as a LAN, WiFi, LiFi, Power over Ethernet, or any other suitable network for example as described with respect to the devices of FIG. 1. The clone server 1206 receives each of the streams from the cameras 1202. The clone server 1206 can store each of the streams from each of the cameras 1202. In a configuration, the streams are stored temporarily, or ephemerally, before being streamed to one or more clone-of-a-clone servers 1210 and/or to cloud storage 1213. In another configuration, the clone server 1206 can store each stream for a longer period of time, for example as permanent storage.

The clone server 1206 is in network communication, for example using a VPN or virtual private network, with one or more clone-of-a-clone servers 1210 through firewall 1207, which can be a suitable router or other suitable network element. The clone server 1206 clones the live video streams 1208 from the cameras 1202 onto the clone-of-a-clone server 1210. Each clone-of-a-clone server 1210 receives live video streams 1208 associated with each of the cameras 1202. In an embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from a subset of all of the available cameras 1202 associated with the clone server 1206. In another embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from multiple clone servers 1206 and associated cameras 1202. The clone-of-a-clone servers 1210 can be anywhere in the network, for example in the cloud 1216 as shown, at an ISP or Internet Service Provider, in a colocation premises, in the arena 1214 or any other suitable place. The clone-of-a-clone server 1210 can be hosted by a service company that provides high speed cloud hosting services, such as AMAZON, as would be understood in the art.

The clone server 1206 also sends recorded video streams 1209 to cloud storage 1213. Cloud storage 1213 can include network servers, redundant network storage hosted by third party companies, and other suitable cloud storage as would be understood in the art. In a configuration, the recorded video streams 1209 can include live video streams.

Advantageously, the clone server 1206, the clone-of-a-clone server 1210, and cloud storage 1213 allow the system architecture to easily scale to support any number of cameras 1202 and users. The clone server 1206 aggregates video streams from multiple cameras 1202. Additional clone servers 1206 can be used to accommodate more cameras 1202 as needed. Each clone-of-a-clone server 1210 receives cloned video streams from one or more clone servers 1206 and supports forwarding video streams to multiple users. Additional clone-of-a-clone servers 1210 can be used to accommodate more users when needed. Cloud storage 1213 can be scaled as necessary to support automated recording of live video streams and playback of video streams by users. A web server 1211 can provide front end web services for users to interact with the system and gain access to the live video streams and recorded video from the clone-of-a-clone servers 1210 and cloud storage 1213.

Clone-of-a-clone servers 1210 can be configured to perform other services, for example archiving video, providing user video editing functions, and so forth. In an embodiment, one or more cameras 1202 stream only a single stream of video, for example a single high definition 1080p stream. In this embodiment, the clone-of-a-clone server 1210 receives a clone of each high definition stream from the clone server 1206 and the clone-of-a-clone server 1210 creates an additional low definition video stream such as a CIF stream based on the received high definition stream. Alternatively, the clone server 1206 receives the high definition stream and creates the additional low definition video stream such as a CIF stream based on the high definition stream received from the cameras 1202. In yet another embodiment, the clone server 1206 receives a single stream from some cameras 1202 and multiple streams from other cameras 1202; the clone server 1206 or the clone-of-a-clone server 1210 generates a second stream, for example a second low definition stream, for cameras 1202 that only provide a single stream.
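
The downscaling step can be illustrated with a minimal transcoding sketch, assuming the ffmpeg tool is available; the URLs, CIF-like target size, and encoder flags are assumptions, as the disclosure does not prescribe a tool.

    import subprocess

    def derive_low_def(hd_source: str, low_def_out: str) -> subprocess.Popen:
        # Derive the parallel low definition stream from a cloned high
        # definition stream when a camera provides only a single feed.
        return subprocess.Popen([
            "ffmpeg", "-i", hd_source,      # cloned 1080p input
            "-vf", "scale=352:240",         # downscale to a CIF-sized frame
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "copy",                 # parallel streams can share audio
            "-f", "mpegts", low_def_out,
        ])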

A consumer, for example a user or business located in a consumer premises 1218 such as a home or business office, uses a computing device 1220 that establishes a network connection, for example over the Internet 1212, with the web server 1211. The user interacts with the web server 1211 to view live video streams or recorded video from the clone-of-a-clone servers 1210 or cloud storage 1213.

The computing device 1220 can be a personal computer, a laptop, a tablet device, a smartphone, a smart TV, a video game device, a television set top box, or any other suitable computing device as would be known in the art. The computing device receives parallel streams from each of the cameras 1202, or parallel streams from a subset of the cameras 1202 over the network connection. For example, as illustrated in FIG. 12, the computing device 1220 receives both a low definition CIF stream and a high definition HD stream as parallel streams from each of the cameras 1202 over a network connection via the Internet 1212.

The computing device 1220 can be configured to display the received streams in any suitable or desired configuration or format. For example, the computing device 1220 can run software that displays multiple streams from the cameras 1202 in low definition in smaller preview windows 1222 and a high definition stream of one of the cameras 1202 in a large focus window 1224. A user can select any one of the smaller preview windows 1222 to display the high definition stream of the selected camera 1202 in the large focus window 1224. In an embodiment, there can be two or more large focus windows 1224, each of which can display a different selected stream. Advantageously, because the computing device receives both a low definition stream and a high definition stream associated with each of the cameras 1202, little or no delay is perceived by the user as the user switches between streams from different cameras 1202 in the large focus window 1224. Also advantageously, because the low definition streams are displayed in smaller preview windows 1222, the user does not perceive that those streams are presented in low resolution because of the small size of the smaller preview windows 1222.

Both the low definition stream and the high definition stream from each camera can be synchronized, such that the start of each frame of video in both streams is in sync. This advantageously allows the picture displayed in both the smaller preview window 1222 and the large focus window 1224 to be in perfect sync, preventing the user from perceiving temporal differences. Also, the parallel streams from each of the cameras 1202 can be in sync so that the picture in the large focus window 1224 can smoothly switch between high definition streams from different cameras 1202 without displaying partial frames or experiencing temporal delays during a switch between video sources.
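
Frame-boundary-aligned switching can be sketched briefly; representing the shared frame boundaries as a list of presentation timestamps is an assumption.

    def next_switch_pts(frame_boundaries: list, now_pts: float) -> float:
        # Because the parallel streams are frame-synchronized, a source
        # switch takes effect at the first shared frame boundary after the
        # user's selection, so no partial frame is ever displayed.
        return min(pts for pts in frame_boundaries if pts > now_pts)

    # Frames start every 1/30 s in all synchronized streams; a switch
    # requested at t = 0.634 s lands on the shared boundary at t = 2/3 s.
    boundaries = [i / 30 for i in range(60)]
    print(next_switch_pts(boundaries, 0.634))  # 0.666...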

In a configuration, the smaller preview windows 1222 can have the same pixel resolution as the pixel size of the low definition streams from cameras 1202. This can reduce the computation load on the computing device 1220, which does not have to remap the pixels of the low definition streams into a different pixel size for the smaller preview windows 1222. Similarly, the pixel size of the high definition stream can be the same as the pixel size of the large focus window 1224. In other configurations, the smaller preview windows 1222 or large focus windows 1224 can have different pixel sizes than the low definition streams or high definition streams, respectively, and the computing device 1220 can remap the streams onto the screen as would be understood in the art. In an embodiment, the computing device 1220 can receive the low definition streams and the high definition streams in a desired resolution and/or frame rate from the clone-of-a-clone server 1210.

Advantageously, the streaming system 1200 presented herein provides the user with a seamless visual experience as the user switches between the different views from each of the cameras 1202. Although the streaming system 1200 sends both high definition and low definition streams for each camera 1202 to the user, video compression can be used to reduce the overall bandwidth required. For example, video streams can be compressed using compression algorithms such as MP4, H.264, H.265 or other forms of compression as would be understood in the art. In an embodiment, the low definition and high definition streams can share a common audio stream to further reduce bandwidth. In an embodiment, the low definition streams and high definition streams can be separately streamed in distinct network connections to the computing device 1220. In an embodiment, streams can be combined into a single network connection.
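
A back-of-the-envelope bandwidth estimate makes the trade-off concrete; the per-stream bitrates below are assumptions, since the disclosure specifies none.

    # Assumed compressed bitrates: ~5 Mb/s for 1080p, ~0.3 Mb/s for a
    # CIF-sized preview, and one shared 128 kb/s audio track per camera.
    cameras = 13
    hd_mbps, ld_mbps, audio_mbps = 5.0, 0.3, 0.128
    total = cameras * (hd_mbps + ld_mbps + audio_mbps)
    print(f"{total:.1f} Mb/s to one viewing device")  # about 70.6 Mb/s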

While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the spirit and scope of the inventions.

Claims

1. A method, comprising:

receiving, by a clone-of-a-clone server, cloned copies of a plurality of high definition video streams; and
streaming, by the clone-of-a-clone server, parallel video streams to a user computing device,
wherein the parallel video streams comprise the high definition video streams and a plurality of low definition video streams based on each of the high definition video streams.

2. The method of claim 1, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 320 by 240 pixels, and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.

3. The method of claim 1, wherein each low definition video stream and associated high definition video stream are substantially identical videos that differ primarily by spatial resolution.

4. The method of claim 1, further comprising:

generating, by the clone-of-a-clone server, each low definition video stream from an associated high definition video stream.

5. The method of claim 1, further comprising:

synchronizing frames of the parallel video streams sent to the user computing device.

6. The method of claim 1, further comprising:

receiving, by the user computing device, a plurality of parallel streams from the clone-of-a-clone server;
displaying, by the user computing device, each low definition video stream from the parallel video streams; and
displaying, by the user computing device, a selected high definition video stream from the parallel video streams.

7. The method of claim 6, further comprising:

receiving, by the user computing device, a user selection of one of the displayed low definition video streams, and
wherein the selected high definition video stream is based on the user selection.

8. The method of claim 7, further comprising:

receiving, by the user computing device, a second user selection of a second low definition video stream; and
switching, by the user computing device, from displaying the high definition video stream to displaying a second high definition video stream based on the second user selection, and
wherein the switching is performed substantially seamlessly between the high definition video stream and the second high definition video stream.

9. The method of claim 6, wherein each of the low definition video streams is displayed on the user computing device in a low resolution small window and wherein the selected high definition video stream is displayed in a high resolution large window.

10. A system, comprising:

a clone-of-a-clone server configured to receive a plurality of high definition video streams, and stream parallel video streams to a user computing device, wherein the parallel video streams comprise the high definition video streams and a plurality of low definition video streams based on each of the high definition video streams.

11. The system of claim 10, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 320 by 240 pixels and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.

12. The system of claim 10, wherein each low definition video stream is selected from the group consisting of CIF, VGA, 4CIF, and D1, and wherein each high definition video stream is selected from the group consisting of 720p, 1 Megapixel, and 1080p.

13. The system of claim 10, further comprising:

a plurality of cameras each configured to stream video comprising a high definition video stream; and
a clone server configured to receive streaming video from a camera and clone the streaming video onto the clone-of-a-clone server.

14. The system of claim 10, wherein the clone-of-a-clone server is further configured to generate a low definition video stream from each of the received high definition video streams.

15. The system of claim 10, wherein the clone-of-a-clone server is configured to synchronize frames of the parallel video streams streamed to the user computing device.

16. The system of claim 10, further comprising:

a user computing device configured to receive the parallel video streams, display each low definition video stream from the parallel video streams, and display a selected high definition video stream from the parallel video streams.

17. The system of claim 16, wherein the user computing device is further configured to

receive a first user selection of one of the displayed low definition video streams,
display a high definition video stream associated with the first user selection,
receive a second user selection associated with a second displayed low definition video stream, and
switch from displaying the high definition video stream to displaying a second high definition video stream associated with the second user selection,
wherein the switch is performed substantially seamlessly between the high definition video stream and the second high definition video stream.

18. The system of claim 17, wherein each of the low definition video streams is displayed on the user computing device in a low resolution small window and wherein a selected high definition video stream is displayed in a high resolution large window.

19. A system, comprising:

a clone-of-a-clone server configured to receive a plurality of cloned high definition video streams, generate a low definition video stream from each high definition video stream, and selectively stream parallel video streams to a plurality of user computing devices each configured to display a plurality of low definition video streams and at least one selected high definition video stream, and
wherein each of the parallel video streams comprises one of the high definition video streams and an associated low definition video stream, and
wherein the clone-of-a-clone server is further configured to synchronize the parallel video streams to enable seamless switching between the display of a first selected high definition video stream and the display of a second selected high definition video stream on the user computing device.

20. The system of claim 19, wherein each low definition video stream has a common intermediate format (CIF) resolution of 320 by 240 pixels and wherein each high definition video stream has a 1080p resolution of 1920 by 1080 pixels.

Patent History
Publication number: 20180077437
Type: Application
Filed: Feb 15, 2017
Publication Date: Mar 15, 2018
Inventors: Barrie Hansen (The Woodlands, TX), Rio Wing (The Woodlands, TX)
Application Number: 15/434,003
Classifications
International Classification: H04N 21/2343 (20060101); H04N 21/2187 (20060101); H04N 21/239 (20060101); H04N 21/431 (20060101); H04N 21/2365 (20060101);