APPARATUS AND METHOD FOR CONCURRENT VIDEO VIEWING WITH USER-ADDED REALTIME CONTENT
The present principles generally relate to video processing and viewing, and particularly, to concurrent viewing of a video with other users and processing of user-added, real-time content. The present principles provide capabilities to create a shared video viewing experience which merges concurrent video watching with user-provided real-time commenting and content. Users watching the same content at the same time may overlay graphical elements on the shared video to communicate with other concurrent viewers of the video. These graphical elements are annotations used to communicate with another viewer, or among a group of viewers, and are overlaid onto the video itself in real time during an interactive session, as though the users are in concurrent conversations.
The present principles generally relate to video processing and viewing, and particularly, to concurrent viewing of a video with other users and processing of user-added, real-time content.
BACKGROUND INFORMATION
This section is intended to introduce a reader to various aspects of art, which may be related to various aspects of the present principles that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
More and more consumers are shifting from viewing televisions with traditional broadcast and cable services to watching and/or downloading Internet video via a broadband or Wi-Fi connection. The traditional broadcast and cable services do not allow an easy way for a user to interact with other viewers who are also watching the same programs, since the communication is only one way, from the broadcasters to the televisions.
More and more consumers are also sharing their videos online using websites such as YouTube™. YouTube™ allows users to post their video content to be watched by other users. YouTube™ also provides a tool to allow a video poster to provide static annotations on the video, created before it is posted on the website. The annotation is static in the sense that it is permanently affixed to the posted video and the content cannot be changed dynamically in real time, or at all. In addition, YouTube™'s annotation feature is not available for live streaming services provided by YouTube™. Therefore, there is no user interactivity between people currently watching the same video in real time.
SUMMARY
The present principles recognize that people watching a video concurrently in real time at different locations may want to have a shared viewing experience with e.g., their friends or family. The present principles further recognize that in today's environment, such a feature is not readily available or a user may have to use a second screen in order to use a separate texting or messaging application to talk about the video they are watching together on the primary screen.
Accordingly, the present principles provide capabilities to create a shared video viewing experience which merges concurrent video watching with user-provided real-time commenting and content. For example, users watching the same content at the same time may overlay graphical elements on the shared video to communicate with their friends. Hence, according to the present principles, someone may put a “thumbs up” or a “smiley” sticker or emoji directly on a video scene they like. They may also put, e.g., a speech bubble on one of the characters in the video to make a joke. These sticker annotations are used to communicate with another viewer, or among a group of viewers, and are overlaid onto the video itself in real time during an interactive session as though the users are in concurrent conversations.
Accordingly, a first electronic device is presented for communicating with a second electronic device, the second electronic device being at a remote location and displaying a video, the first electronic device comprising: a display device configured to display the video concurrently with the second electronic device; a user interface device configured to select a first communication item at the first electronic device and to overlay the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and a processor configured to provide information on the overlaid selected first communication item for displaying the first communication item overlaid onto the video at the second electronic device.
In another exemplary embodiment, a method performed by a first electronic device is presented for communicating with a second electronic device, the second electronic device being at a remote location and displaying a video, the method comprising: displaying concurrently the video on a display device of the first electronic device; selecting a first communication item at the first electronic device; overlaying the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and providing information on the selected first communication item for displaying the first communication item overlaid onto the video at the second electronic device.
In another exemplary embodiment, a computer program product stored in non-transitory computer-readable storage media for a first electronic device is presented for communicating with a second electronic device, the second electronic device being at a remote location and displaying a video, comprising computer-executable instructions for: displaying concurrently the video on a display device of the first electronic device; selecting a first communication item at the first electronic device; overlaying the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and providing information on the selected first communication item for displaying the first communication item overlaid onto the video at the second electronic device.
The above-mentioned and other features and advantages of the present principles, and the manner of attaining them, will become more apparent and the invention will be better understood by reference to the following description of embodiments of the present principles taken in conjunction with the accompanying drawings, wherein:
The examples set out herein illustrate exemplary embodiments of the present principles. Such examples are not to be construed as limiting the scope of the invention in any manner.
DETAILED DESCRIPTION
The present principles allow a viewer to mix user-provided communication items including customizable graphical items such as stickers or emoji icons, or conversation texts onto a shared video in a time and spatially relevant way to provide a novel communication mechanism. While watching the same video concurrently, one user may add an item such as a sticker onto the video at a certain timestamp and in a spatial location (spatial location may mean pixel position within a video frame or specific objects such as an actor or a chair in the video that may move in a scene). The other remotely located video devices in the same interactive session of the video viewing/conversation would receive the metadata of the inserted items and render the items as needed on the video.
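As an illustrative sketch of the metadata described above (the field names, types, and session identifier here are assumptions for illustration, not taken from the disclosure), an inserted item's metadata might carry an item identifier, a media timestamp, and a spatial anchor, serialized for transmission to the other devices in the session:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class ItemMetadata:
    """Hypothetical metadata record for a user-inserted communication item."""
    item_id: int                    # identifies the sticker/emoji from a predefined set
    session_id: str                 # the interactive viewing session
    timestamp_ms: int               # position in the video when the item was inserted
    x: float                        # normalized horizontal position within the frame
    y: float                        # normalized vertical position within the frame
    object_id: Optional[str] = None # optional link to a tracked object in the scene

# Serialize for transmission to the other devices in the same session
meta = ItemMetadata(item_id=363, session_id="abc", timestamp_ms=75_000, x=0.4, y=0.25)
payload = json.dumps(asdict(meta))
```

Normalized coordinates (rather than raw pixels) are one way to keep the spatial anchor meaningful across devices with different display resolutions.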
In one exemplary embodiment, the inserted item may persist for a given duration, or disappear once the other viewer sees it or removes it. People at the remote locations who are watching the same video concurrently may respond to an inserted item by adding another user-added item, or moving or deleting the original item. For one exemplary embodiment, there may be a predetermined set of available items for easy access for annotations—e.g., to allow a drag and drop of the user-selected items while watching the video. Accordingly, the present principles allow for a new and advantageous form of communication between concurrent video viewers, thereby creating an enhanced shared viewing experience. The present principles also provide user communication onto the video itself and thus eliminate the need to have a separate chat or texting window, or a separate user device. The user-provided communication items may be used to convey in real time, emotions, feelings, thoughts, speech, and so on.
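The persistence behavior described above could be modeled as follows (a minimal sketch under assumed names; the disclosure does not specify an implementation): each item records its insertion time and is removed either after a configured duration or when a viewer dismisses it.

```python
from typing import Dict, List

class OverlayStore:
    """Tracks overlaid items; items expire after a duration or on dismissal."""
    def __init__(self, duration_ms: int = 5_000):
        self.duration_ms = duration_ms
        self.items: Dict[int, int] = {}  # item key -> insertion time (ms)

    def insert(self, item_key: int, now_ms: int) -> None:
        self.items[item_key] = now_ms

    def dismiss(self, item_key: int) -> None:
        # The other viewer saw the item, or a viewer deleted it
        self.items.pop(item_key, None)

    def visible(self, now_ms: int) -> List[int]:
        # Items persist for the configured duration, then disappear
        return [k for k, t in self.items.items() if now_ms - t < self.duration_ms]

store = OverlayStore(duration_ms=5_000)
store.insert(1, now_ms=0)
store.insert(2, now_ms=1_000)
store.dismiss(2)  # the remote viewer removed this item
```

After the dismissal, only item 1 remains visible, and it too disappears once its duration elapses.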
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
Various exemplary user devices 260-1 to 260-n in
User devices 260-1 to 260-n shown in
An exemplary user device 260-1 in
Device 260-1 may also comprise a display 291 which is driven by a display driver/bus component 287 under the control of processor 265 via a display bus 288 as shown in
In addition, exemplary device 260-1 in
Exemplary device 260-1 also comprises a memory 285 which may represent both a transitory memory such as RAM, and a non-transitory memory such as a ROM, a hard drive and/or a flash memory, for processing and storing different files and information as necessary, including computer program products and software (e.g., as represented by a flow chart diagram of
User devices 260-1 to 260-n in
Video content being concurrently accessed by the user devices 260-1 to 260-n is provided, e.g., by web server 205 of
In addition, server 205 is connected to network 250 through a communication interface 220 for communicating with other servers or web sites (not shown) and one or more user devices 260-1 to 260-n, as shown in
Returning to
At step 120 of
At step 130 of
As shown in
Also shown in
At step 140 of
At step 150 of
According to the present principles, in one exemplary embodiment, the information about the overlaid selected first communication item comprises metadata on content of the overlaid selected first communication item, and location of the overlaid selected first communication item on the video. The content may be, for example, an item identification number such as e.g., 363, which may be used to identify the particular emoji 363 from the plurality of pre-provided items 361-368 in area 320 of screen 300 as shown in the example of
In one exemplary embodiment, the metadata information regarding the overlaid selected first communication item 363′ is sent to the content provider such as content server 205 shown in
As described previously in connection with
Accordingly, in one exemplary embodiment, the information comprising metadata regarding the overlaid selected first communication item 363′ provided by the first electronic device 260-1 shown in
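A server-side relay of the kind suggested here could be sketched as follows (names, callbacks, and the in-memory registry are assumptions for illustration; a real content server such as server 205 would use its own networking stack): the server forwards an item's metadata to every device registered in the same interactive session except the sender.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class SessionRelay:
    """Hypothetical relay: forwards item metadata to the other devices
    registered in the same interactive viewing session."""
    def __init__(self) -> None:
        self.sessions: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def join(self, session_id: str, deliver: Callable[[dict], None]) -> None:
        self.sessions[session_id].append(deliver)

    def publish(self, session_id: str, sender: Callable[[dict], None],
                metadata: dict) -> None:
        # Every device in the session except the sender receives the metadata
        for deliver in self.sessions[session_id]:
            if deliver is not sender:
                deliver(metadata)

# Two devices join the same session; one publishes an overlaid item
received_a: List[dict] = []
received_b: List[dict] = []
deliver_a = received_a.append
deliver_b = received_b.append
relay = SessionRelay()
relay.join("s1", deliver_a)
relay.join("s1", deliver_b)
relay.publish("s1", sender=deliver_a, metadata={"item_id": 363})
```

Only the second device receives the metadata, which it would then use to render the item overlaid onto its own copy of the video.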
At step 160 of
As already described above in connection with steps 120-150, the sticker 267′ is similarly selected at a second electronic device, moved and overlaid by a remote user accordingly. The corresponding metadata information representing the content and location of the selected sticker 267′ is also sent to the content server 205 of
In another exemplary embodiment in accordance with the present principles, at step 170 and as illustrated in
The object may be, e.g., a person such as an actor or a thing such as a chair 380 shown in
In addition, once a selected communication item such as a text bubble 368′ shown in
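One way the linked item could follow a moving object (a sketch under assumed names; the disclosure does not specify how per-frame object positions are obtained) is to recompute the item's position each frame from object-tracking metadata, which could be embedded in the video itself, keeping a fixed offset from the tracked object:

```python
from typing import Dict, Tuple

def item_position(object_tracks: Dict[str, Dict[int, Tuple[float, float]]],
                  object_id: str, frame: int,
                  offset: Tuple[float, float]) -> Tuple[float, float]:
    """Place a linked item at a fixed offset from a tracked object's position.

    object_tracks maps object_id -> {frame_number: (x, y)} in normalized
    coordinates; per-frame positions are assumed to come from metadata
    carried with the video.
    """
    ox, oy = object_tracks[object_id][frame]
    return (ox + offset[0], oy + offset[1])

# Hypothetical track for a chair that moves between two frames
tracks = {"chair_380": {0: (0.5, 0.5), 1: (0.625, 0.5)}}
# A text bubble anchored slightly above the chair follows it as it moves
```

Because the offset is applied to the object's current position each frame, the overlaid item moves with the object it is linked to rather than staying at its original pixel location.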
At step 180 of
In accordance with the present principles,
Therefore, in accordance with the present principles, as illustrated in
In addition, although an exemplary embodiment has been described above mainly with a content being provided by a streaming server 205 in
While several embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the present embodiments. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings herein are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereof, the embodiments disclosed may be practiced otherwise than as specifically described and claimed. The present embodiments are directed to each individual feature, system, article, material and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials and/or methods, if such features, systems, articles, materials and/or methods are not mutually inconsistent, is included within the scope of the present embodiments.
Claims
1. A first electronic device for communicating with a second electronic device, the first electronic device comprising:
- a display device driver configured to cause to display a video, the video also displayed via the second electronic device;
- a user interface device configured to select a first communication item at the first electronic device and to overlay the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and
- a processor configured to provide information on the overlaid selected first communication item for causing to display the first communication item overlaid onto the video via the second electronic device;
- wherein the information on the overlaid selected first communication item comprises metadata on content of the overlaid selected first communication item, and wherein the user interface device is further configured to select an object on the video for linking the selected first communication item with the selected object on the video.
2. The first electronic device of claim 1 wherein the processor is further configured to cause to display a second communication item with the displayed video, wherein the second communication item is overlaid on the video by the second electronic device during the interactive session.
3. The first electronic device of claim 2 wherein the processor is further configured to remove the overlaid selected first communication item from the displayed video.
4. The first electronic device of claim 1 wherein the selected first communication item is displayed on the video at the second electronic device for a given duration.
5. The first electronic device of claim 1 wherein the first communication item is a graphical item.
6. (canceled)
7. (canceled)
8. The first electronic device of claim 1, wherein the selected object on the video is identified by metadata contained in the video.
9. The first electronic device of claim 8 wherein the metadata of the overlaid selected first communication item further comprises information for linking the selected first communication item with the selected object on the video.
10. The first electronic device of claim 5 wherein the graphical item is an emoji.
11. The first electronic device of claim 1 wherein the selected first communication item comprises text representing a conversation during the interactive session.
12. A method performed by a first electronic device for communicating with a second electronic device, the method comprising:
- causing to display a video on a display device driven by the first electronic device, the video also displayed via the second electronic device;
- selecting a first communication item at the first electronic device;
- overlaying the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and
- providing information on the selected first communication item for displaying the first communication item overlaid onto the video at the second electronic device;
- wherein the information on the overlaid selected first communication item comprises metadata on content of the overlaid selected first communication item, and wherein the method further comprises selecting an object on the video for linking the selected first communication item with the selected object on the video.
13. The method of claim 12 further comprising displaying a second communication item with the video on the display device, wherein the second communication item is overlaid on the video by the second electronic device during the interactive session.
14. The method of claim 13 further comprising removing the overlaid selected first communication item from the video on the display device.
15. The method of claim 13 wherein the selected first communication item is displayed on the video at the second electronic device for a given duration.
16. The method of claim 12 wherein the first communication item is a graphical item.
17. (canceled)
18. (canceled)
19. The method of claim 12 wherein the selected object on the video is identified by metadata contained in the video.
20. The method of claim 19 wherein the metadata of the overlaid selected first communication item further comprises information for linking the selected first communication item with the selected object on the video.
21. (canceled)
22. The method of claim 12 wherein the selected first communication item comprises text representing a conversation during the interactive session.
23. A computer program product stored in non-transitory computer-readable storage media for a first electronic device for communicating with a second electronic device, the computer program product comprising computer-executable instructions for:
- causing to display a video on a display device driven by the first electronic device, the video also displayed via the second electronic device;
- selecting a first communication item at the first electronic device;
- overlaying the selected first communication item onto the video at the first electronic device, the first item being overlaid onto the video during an interactive session between the first electronic device and the second electronic device; and
- providing information on the selected first communication item for displaying the first communication item overlaid onto the video at the second electronic device;
- wherein the information on the overlaid selected first communication item comprises metadata on content of the overlaid selected first communication item, and wherein the instructions provide for selection of an object on the video for linking the selected first communication item with the selected object on the video.
24. The first electronic device of claim 1 wherein the video is a streaming video selected by the first electronic device and the second electronic device.
25. The method of claim 12 wherein the video is a streaming video selected by the first electronic device and the second electronic device.
Type: Application
Filed: Nov 10, 2015
Publication Date: Aug 6, 2020
Inventors: Kent LYONS (Mountain View, CA), Jean C. BOLOT (Los Altos, CA), Caroline HANSSON (Oerebro), Amit DATTA (Pittsburgh, PA), Snigdha PANIGRAHI (Stanford, CA), Rashish TANDON (Austin, TX), Wenling SHANG (Ann Arbor, MI), Naveen GOELA (Berkeley, CA)
Application Number: 15/774,485