SYSTEMS AND METHODS TO OVERLAY REMOTE AND LOCAL VIDEO FEEDS
According to some embodiments, a studio video feed, including a studio subject, may be received from a studio video camera. A remote video feed, including a remote subject, may be received from a remote video camera. The remote video feed may include, for example, the remote subject positioned in front of a solid-colored background. The remote subject may be overlaid into the studio video feed to produce a composite video signal, and at least one of the studio video feed and the remote video feed may be automatically adjusted to create an impression that the studio subject and the remote subject occupy a shared physical space.
The present invention relates to systems and methods to combine remote and local video signals. Some embodiments relate to systems and methods for efficiently overlaying remote and local video feeds.
BACKGROUND
A broadcast program might simultaneously include video information from two different physical locations. For example, the program might include videos of both (1) an interviewer (e.g., a program host located in a broadcast studio) and (2) a subject who is being interviewed (e.g., located at a sports stadium remote from the broadcast studio). Typically, each video is displayed in a separate box on the broadcast display. For example, a first box might display the face of the interviewer (e.g., and the first box might be labeled “ESPN® Studios”) while a second box might display the face of the subject who is being interviewed (and the second box might be labeled “Fenway Park”). In some cases, a “split-screen” display might be provided (e.g., with the left half displaying a studio video feed and the right half displaying a remote video feed). Such approaches, however, reinforce the impression that the interviewer and subject are not occupying a shared physical space, which can distract and/or disorient viewers.
Applicants have recognized that there is a need for methods, systems, apparatus, means and computer program products to efficiently overlay remote and local video feeds.
To help avoid such a result, some embodiments provide a method such as the following.
At 202, a local video feed is received from a local video camera, the local video feed including a local subject. As used herein, the phrase “video feed” may refer to any signal conveying information about a moving image, such as a High Definition-Serial Data Interface (“HD-SDI”) signal transmitted in accordance with the Society of Motion Picture and Television Engineers 292M standard. Although HD signals may be described in some examples presented herein, note that embodiments may be associated with any other type of video feed, including a standard broadcast feed and/or a 3D image feed.
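By way of a non-limiting sketch, the following Python fragment shows how frames of such a video feed might be ingested for processing. The device index and the use of OpenCV are assumptions made for illustration only; an actual HD-SDI feed would arrive through a broadcast capture card and its vendor driver.

```python
# Minimal sketch of ingesting frames from a video feed. The device index and
# the OpenCV capture backend are assumptions; a real HD-SDI signal would be
# delivered by a capture card and its vendor driver.
import cv2

capture = cv2.VideoCapture(0)  # hypothetical device index for a capture card

while capture.isOpened():
    ok, frame = capture.read()  # frame: height x width x 3 BGR pixel array
    if not ok:
        break
    # ... hand the frame to the adjustment/overlay pipeline ...
capture.release()
```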
At 204, a remote video feed is received from a remote video camera, the remote video feed including a remote subject. The remote video feed might comprise, for example, an HD-SDI signal received through a fiber cable and/or a satellite transmission. According to some embodiments, the remote subject is situated in front of a solid-colored background (e.g., a “greenscreen”).
Note that the local and remote video cameras may be any device capable of generating a video feed, such as a Vinten® studio (or outside) broadcast camera with a pan and tilt head. According to some embodiments, at least one of the local video camera and the remote video camera is an “instrumented” video camera adapted to provide substantially real-time information about dynamic adjustments being made to the instrumented video camera. As used herein, the phrase “dynamic adjustments” might refer to, for example, a panning motion, a tilting motion, a focal change, and/or a zooming adjustment being made to a video camera (e.g., zooming the camera in or out).
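As a non-limiting sketch, such dynamic adjustment information might be modeled as a small record. The field names and the comma-separated packet format below are hypothetical, since actual pan-and-tilt heads use vendor-specific protocols.

```python
# Hypothetical model of the telemetry an instrumented camera might report.
from dataclasses import dataclass

@dataclass
class CameraAdjustment:
    pan_deg: float   # panning motion, in degrees from a reference heading
    tilt_deg: float  # tilting motion, in degrees from horizontal
    focus: float     # normalized focal setting, 0.0 (near) to 1.0 (far)
    zoom: float      # zoom factor, 1.0 = widest shot

def parse_packet(packet: str) -> CameraAdjustment:
    """Parse an assumed comma-separated telemetry packet, e.g. "12.5,-3.0,0.4,2.0"."""
    pan, tilt, focus, zoom = (float(v) for v in packet.split(","))
    return CameraAdjustment(pan, tilt, focus, zoom)
```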
At 206, the remote video feed and the local video feed are overlaid to produce a composite video signal, wherein at least one of the local video feed and the remote video feed is automatically adjusted to create an impression that the local subject and the remote subject occupy a shared physical space. For example, the remote video feed might be automatically adjusted based on dynamic adjustments being made to the local video feed (e.g., the local camera might be slowly panning across a studio set). As a result of the automatic adjustment, the overlaid video feeds may create the impression that the remote subject is sitting next to the local subject in a broadcast studio. As another example, the overlaid video feeds might instead create the impression that the local subject is standing next to the remote subject at a baseball stadium.
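One simple way such an automatic adjustment might be realized is sketched below: the remote frame is shifted and scaled to counter the local camera's pan and zoom. The pixels-per-degree constant is an assumed calibration value; a production system would derive it from the lens and encoder data.

```python
# Sketch of compensating the remote frame for the local camera's pan and zoom.
import cv2
import numpy as np

PIXELS_PER_DEGREE = 35.0  # assumed calibration: horizontal pixels per degree of pan

def adjust_remote_frame(remote_frame, local_pan_deg, local_zoom):
    h, w = remote_frame.shape[:2]
    dx = -local_pan_deg * PIXELS_PER_DEGREE  # shift opposite to the pan direction
    s = local_zoom                           # scale in step with the local zoom
    # Affine matrix: scale about the frame center, then translate horizontally.
    matrix = np.float32([
        [s, 0.0, (1.0 - s) * w / 2.0 + dx],
        [0.0, s, (1.0 - s) * h / 2.0],
    ])
    return cv2.warpAffine(remote_frame, matrix, (w, h))
```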
Similarly, the system 300 may include a first remote video camera 320 aimed at a remote subject 322 (e.g., a guest standing in front of a greenscreen) from a first angle. The first remote video camera 320 might comprise, for example, a locked-off camera that transmits a remote video HD-SDI feed directly to the first PC 330 over a fiber or satellite connection. Note that the first PC 330 might be co-located with the first local video camera 310 or the first remote video camera 320 or may instead be implemented at an entirely different location.
The first PC 330 may automatically adjust the received remote video HD-SDI feed based on information about dynamic adjustments received from the first local video camera 310 (e.g., the image of the guest may be adjusted when the studio camera is tilted). As a result, the output of the first PC 330 may represent a tracked remote video foreground over greenscreen video signal that may be provided to a first overlay engine 340.
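The “tracked” behavior described above might be sketched as a perspective warp: the remote frame is mapped onto a quadrilateral whose corners follow the studio camera's motion. The corner coordinates here are placeholders; in practice they would be recomputed every frame from the tracking data.

```python
# Sketch of mapping the remote video onto a tracked plane in the studio shot.
import cv2
import numpy as np

def project_onto_plane(remote_frame, plane_corners, out_size):
    """Warp the remote frame into a tracked quadrilateral.

    plane_corners: 4x2 array of (x, y) pixel positions, ordered TL, TR, BR, BL,
                   assumed to be updated per frame from camera tracking data.
    out_size: (width, height) of the studio frame.
    """
    h, w = remote_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    homography = cv2.getPerspectiveTransform(src, np.float32(plane_corners))
    return cv2.warpPerspective(remote_frame, homography, out_size)
```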
The first overlay engine 340 may also receive a local video HD-SDI feed (including the studio background) directly from the first local video camera 310. The first overlay engine 340 may then combine the two received video feeds to generate an output video feed that creates an impression that the local subject and the remote subject occupy a shared physical space. Note that according to some embodiments, the first PC 330 and the first overlay engine 340 may comprise a single device.
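A minimal sketch of the keying step such an overlay engine might perform appears below, assuming the remote subject stands in front of a greenscreen. The HSV thresholds are assumptions that would be tuned to the actual lighting on set, and both frames are assumed to share the same dimensions.

```python
# Minimal chroma-key composite: greenscreen foreground over a studio background.
import cv2
import numpy as np

GREEN_LOW = np.array([40, 80, 80])     # assumed lower HSV bound for the screen
GREEN_HIGH = np.array([80, 255, 255])  # assumed upper HSV bound

def composite(remote_fg, studio_bg):
    hsv = cv2.cvtColor(remote_fg, cv2.COLOR_BGR2HSV)
    screen = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)  # 255 where the screen shows
    subject = cv2.bitwise_not(screen)                 # 255 where the subject shows
    fg = cv2.bitwise_and(remote_fg, remote_fg, mask=subject)
    bg = cv2.bitwise_and(studio_bg, studio_bg, mask=screen)
    return cv2.add(fg, bg)
```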
The system 300 may also include, according to some embodiments, a second local video camera 311 aimed at the local subject 312 from a second angle (different from the first angle). The second local video camera 311 might comprise, for example, another instrumented hard camera that can be dynamically adjusted (e.g., via pan and/or tilt motions). The second local video camera 311 might provide information about such dynamic adjustments directly to a second PC 350 via a serial interface and/or linked fiber transceivers. The second PC 350 might also be executing a rendering application, such as the Brainstorm eStudio® 3D real-time graphics software package. Note that according to some embodiments, the first and second PCs 330, 350 may comprise a single device.
Referring again to the system 300, the output of the second PC 350 may likewise represent a tracked remote video foreground signal that may be provided to a second overlay engine 360.
The second overlay engine 360 may also receive a local video HD-SDI feed (including the studio background) directly from the second local video camera 311. The second overlay engine 360 may then combine the two received video feeds to generate an output video feed that creates an impression that the local subject and the remote subject occupy a shared physical space. The two composite outputs from the first and second overlay engines 340, 360 might be routed to a patch panel 370 (and either of the two angles might be selected for broadcast by an operator). According to some embodiments, the system 300 further includes a virtual operator station 380 that may facilitate interactions between an operator and the two PCs 330, 350.
The system 300 may therefore provide an ability to have remote guests/talent seamlessly immersed in a studio environment (or vice versa). For example, a remote guest might appear to be sitting in the studio location alongside a studio host in the same camera shot.
According to some embodiments, the locked-off remote interview feed (over greenscreen) is fed from the first remote video camera 320 to the Brainstorm application executing at the first PC 330 as an HD-SDI live input, which may be mapped to a tracked plane in a virtual environment. The tracked plane of video may then be keyed over the encoded and delayed studio camera shot from the first local video camera 310 (e.g., equipped with an encoded jib associated with a virtual setup) by a switcher using a chroma keyer to complete the effect. Note that the operator of the first remote video camera 320 may provide to the rendering software information about the distance from his or her camera to the subject and/or help calibrate the field of view (e.g., the width of the shot at the remote subject's distance).
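The field-of-view calibration mentioned above reduces to simple trigonometry: given the camera-to-subject distance and the width of the shot at that distance, the horizontal field of view is 2·atan((width/2)/distance). A worked sketch:

```python
# Worked sketch of the field-of-view calibration:
#   FOV = 2 * atan((shot width / 2) / distance)
import math

def horizontal_fov_degrees(shot_width_m: float, distance_m: float) -> float:
    return math.degrees(2.0 * math.atan((shot_width_m / 2.0) / distance_m))

# Example: a shot 3 m wide measured 4 m from the camera implies roughly a
# 41 degree horizontal field of view.
print(round(horizontal_fov_degrees(3.0, 4.0), 1))  # 41.1
```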
The processor 510 is also in communication with an input device 540. The input device 540 may comprise, for example, a keyboard, a mouse, or computer media reader. Such an input device 540 may be used, for example, to enter information about a remote and/or studio camera set-up. The processor 510 is also in communication with an output device 550. The output device 550 may comprise, for example, a display screen or printer. Such an output device 550 may be used, for example, to provide information about a remote and/or studio camera set-up to an operator.
The processor 510 is also in communication with a storage device 530. The storage device 530 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., hard disk drives), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices.
The storage device 530 stores a rendering engine application 535 for controlling the processor 510. The processor 510 performs instructions of the application 535, and thereby operates in accordance with any embodiments of the present invention described herein. For example, the processor 510 may receive dynamic adjustment information from a local video camera associated with a local subject. The processor 510 may also receive a remote video feed, from a remote video camera, the remote video feed including a remote subject. The processor 510 may then automatically adjust the remote video feed based on the dynamic adjustment information to create an impression that the local subject and the remote subject occupy a shared physical space. The processor 510 may then transmit the adjusted remote video feed to an overlay engine via the communication devices 520.
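Tying these steps together, the rendering engine's main loop might look like the following non-limiting sketch. The telemetry_source, remote_feed, and overlay_link objects are hypothetical stand-ins for the serial link, the HD-SDI input, and the communication devices 520, and adjust_remote_frame is the pan/zoom compensation sketched earlier.

```python
# Sketch of a rendering engine loop: receive telemetry, adjust, transmit.
def rendering_engine_loop(telemetry_source, remote_feed, overlay_link):
    while True:
        adjustment = telemetry_source.latest()  # dynamic adjustment information
        frame = remote_feed.read_frame()        # remote subject over greenscreen
        if frame is None:
            break
        adjusted = adjust_remote_frame(frame, adjustment.pan_deg, adjustment.zoom)
        overlay_link.send(adjusted)             # on to the overlay engine
```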
As used herein, information may be “received” by or “transmitted” to, for example: (i) the rendering engine 500 from other devices; or (ii) a software application or module within rendering engine 500 from another software application, module, or any other source.
As shown in the figure, the system 700 may include a studio video camera 710 aimed at a studio host 712.
The system 700 may also include a remote video camera 720 aimed at the remote guest 722 (e.g., standing in front of a greenscreen). The remote video camera 720 might comprise, for example, a locked-off camera that transmits a remote video HD-SDI feed directly to the rendering engine 730 over a fiber or satellite connection (e.g., a video/audio interview uplink).
The rendering engine 730 may automatically adjust the received remote video HD-SDI feed based on information about dynamic adjustments received from the studio video camera 710 (e.g., the image of the guest 722 may be adjusted when the studio camera pans from left to right). As a result, the output of the rendering engine 730 may represent a tracked remote video foreground over greenscreen video signal that may be provided to an overlay engine 740.
The overlay engine 740 may also receive a studio video HD-SDI feed (including the studio background) directly from the studio video camera 710 (e.g., which includes a host 712). The overlay engine 740 may then combine the two received video feeds to generate a combined output video feed that creates an impression that the studio subject and the remote subject occupy a shared physical space. With such an arrangement, studio subjects may be immersed into a remote environment (or remote subjects may be placed into the studio environment) with significant flexibility when producing interviews and analysis associated with a remote site.
Some embodiments described herein utilize capabilities of an encoded studio camera system (with or without the addition of a greenscreen area at the remote site or in studio). Note that a camera and lens identical to those in the studio (e.g., non-encoded or motion-controlled) could be used at the remote site. In some cases, camera data may be synchronized between sites via fiber or network connections. Thus, various combinations of equipment may provide different levels of immersion and/or interaction.
For example, in some cases no greenscreen might be used at either the studio or the remote site. In this case, both cameras may shoot subjects over the actual backgrounds and the two shots may be blended using virtual rendering software along with in-studio camera tracking. This may allow a camera shot of the studio in which the studio subject is on camera and, when the camera pans to the left or right, the remote set appears to be physically present next to the studio set.
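One way to sketch that pan-to-reveal behavior: treat the remote shot as a virtual extension sitting to the right of the studio shot, and slide a viewport across the pair as the camera pans. The pixels-per-degree constant is, again, an assumed calibration value, and the two frames are assumed to be the same size.

```python
# Sketch of revealing the remote set as the studio camera pans to the right.
import numpy as np

PIXELS_PER_DEGREE = 35.0  # assumed calibration value

def blended_view(studio_frame, remote_frame, pan_deg):
    """Return the viewport over [studio | remote] implied by the pan angle."""
    h, w = studio_frame.shape[:2]
    panorama = np.hstack([studio_frame, remote_frame])  # remote set to the right
    max_pan = w / PIXELS_PER_DEGREE                     # pan that fully reveals it
    offset = int(min(max(pan_deg, 0.0), max_pan) * PIXELS_PER_DEGREE)
    return panorama[:, offset:offset + w]
```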
As another example, a greenscreen may be used at the remote site but not at the local site. This may provide the added ability to place the remote subject “virtually” into the studio set. That is, it might appear as if the remote subject were actually standing in the local studio (and possibly next to the actual local subject). As still another example, the local studio set may have a partial greenscreen area such that the studio subject could walk from the actual physical set into the remote environment on the same camera shot. Note that embodiments described herein may require little or no added remote hardware and no additional personnel, while substantially increasing the immersive options.
The following illustrates various additional embodiments of the invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
Although a single local subject and a single remote subject have been described in some of the examples presented herein, note that any number of subjects may be blended in accordance with the present invention. Similarly, although a single local site and a single remote site have been described herein as examples, note that embodiments could blend together any number of locations. For example, an impression could be created that a first football player located in New York and a second football player located in Chicago are standing next to a studio host located in Connecticut.
Moreover, although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with embodiments of the present invention (e.g., some of the information associated with the databases and engines described herein may be split, combined, and/or handled by external systems). Further note that embodiments may be associated with any number of different types of broadcast programs (e.g., sports, news, and weather programs).
The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.
Claims
1. A method comprising:
- receiving a studio video feed, from a studio video camera, the studio video feed including a studio subject;
- receiving a remote video feed, from a remote video camera, the remote video feed including a remote subject; and
- overlaying the remote video feed and the studio video feed to produce a composite video signal, wherein at least one of the studio video feed and the remote video feed is automatically adjusted, based on dynamic adjustments being made to the other video feed, to create an impression that the studio subject and the remote subject occupy a shared physical space.
2. The method of claim 1, wherein at least one of the studio video camera and the remote video camera comprises an instrumented camera adapted to provide substantially real-time information about dynamic adjustments made to the instrumented camera.
3. The method of claim 2, wherein the dynamic adjustments are associated with at least one of: (i) a panning motion, (ii) a tilting motion, (iii) a focal change, or (iv) a zooming adjustment.
4. The method of claim 2, wherein the remote video feed is received via a high definition serial digital interface signal.
5. The method of claim 4, wherein the high definition serial digital interface signal is received via at least one of: (i) a fiber cable or (ii) a satellite transmission.
6. The method of claim 2, wherein the remote video feed is automatically adjusted by a real-time rendering platform, based on the dynamic adjustments made to an instrumented studio video camera, to create the impression that the studio subject and the remote subject occupy the shared physical space.
7. The method of claim 6, wherein the impression created is that the remote subject is present in the studio subject's physical space.
8. The method of claim 7, wherein the remote video feed includes the remote subject in front of a solid-colored background.
9. The method of claim 6, wherein the impression created is that the studio subject is present in the remote subject's physical space.
10. A system, comprising:
- a studio instrumented video camera outputting (i) a studio video feed including a studio subject and (ii) data associated with dynamic adjustments;
- a remote video camera outputting a remote video feed including a remote subject;
- a rendering engine receiving the data from the studio instrumented video camera and the remote video feed from the remote video camera and generating an adjusted remote video signal; and
- an overlay engine receiving the adjusted remote video signal and the studio video feed and generating a combined output to create an impression that the studio subject and the remote subject occupy a shared physical space.
11. The system of claim 10, wherein the system includes a plurality of studio video cameras and paired remote video cameras, each pair being combined by a separate overlay engine.
12. The system of claim 11, wherein the dynamic adjustments are associated with at least one of: (i) a panning motion, (ii) a tilting motion, (iii) a focal change, or (iv) a zooming adjustment.
13. The system of claim 12, wherein the data associated with dynamic adjustments is provided from the studio video camera to the rendering engine via a serial signal transmitted via fiber transceivers.
14. The system of claim 13, wherein the impression created is that the remote subject is present in the studio subject's physical space.
15. The system of claim 13, wherein the impression created is that the studio subject is present in the remote subject's physical space.
16. A computer-readable medium storing instructions adapted to be executed by a processor to perform a method, the method comprising:
- receiving dynamic adjustment information from a studio video camera associated with a studio subject;
- receiving a remote video feed, from a remote video camera, the remote video feed including a remote subject;
- automatically adjusting the remote video feed based on the dynamic adjustment information to create an impression that the studio subject and the remote subject occupy a shared physical space; and
- transmitting the adjusted remote video feed to an overlay engine.
17. The medium of claim 16, wherein the dynamic adjustment information is received from an instrumented camera adapted to provide substantially real-time information about dynamic adjustments made to the instrumented camera.
18. The medium of claim 17, wherein the dynamic adjustments are associated with at least one of: (i) a panning motion, (ii) a tilting motion, (iii) a focal change, or (iv) a zooming adjustment.
19. The medium of claim 18, wherein the remote video feed is automatically adjusted by a real-time rendering platform, based on the information about the dynamic adjustments made to the instrumented camera, to create the impression that the studio subject and the remote subject occupy the shared physical space.
20. The medium of claim 19, wherein the impression created is one of: (i) that the remote subject is present in the studio subject's physical space, or (ii) that the studio subject is present in the remote subject's physical space.
Type: Application
Filed: Jul 1, 2010
Publication Date: Jan 5, 2012
Inventors: Michael F. Gay (Burbank, CA), Anthony Bailey (Burbank, CA)
Application Number: 12/828,859
International Classification: H04N 9/74 (20060101); H04N 5/228 (20060101);