APPARATUS AND METHOD FOR MANIPULATING AN OBJECT INSERTED INTO VIDEO CONTENT

The subject matter discloses a method of manipulating an object inserted into computerized content, comprising receiving input related to manipulation of the object, determining the manipulation to be applied on the object, determining the display of the object according to the determined manipulation, and displaying the object after it is manipulated. The input may be received from the user or from metadata related to the video content, which may also be a single image. The subject matter also discloses a system for implementing and determining the manipulation applied on an object inserted into an image or video content.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present invention claims priority to the filing date of provisional patent application Ser. No. 61/065,703, titled "In-video advertising real estate," filed Feb. 13, 2008, the contents of which are hereby incorporated by reference herein.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to insertion of objects into video content in general, and to manipulation of such objects in particular.

2. Discussion of the Related Art

The use of online video content has developed significantly during the past five years. Such video content may be received from adaptive websites, such as YouTube, or from other web pages, such as websites that provide news or entertainment content, online educational content and the like. Video content may also be received during video conferences, from live video streams, from web cameras and the like.

Objects known in the art are inserted into the video content and displayed in a static mode, for example added at the lower end of the frame. Such objects are inserted after the video is processed, and the video content does not change its properties. The objects may provide the user with additional content, such as commercials, related links, news, messages and the like. Since the objects are static, they do not attract enough attention, and the user is likely to ignore them and focus on the video content itself. However, the provider of the objects wishes that the user focus on the object's content, not only on the video content. Some of the objects are named pre-rolls, mid-rolls and post-rolls, tickers, overlays and the like. Their main disadvantage is that these objects are intrusive and do not fit contextually into the video content.

There is a long-felt need to attract a user watching online video content to the object displayed in addition to the video, in order to increase the visibility of the inserted object and the attractiveness of the video content. By increasing the visibility of the object, the value of the commercial content represented by the object, such as an advertisement, is improved.

SUMMARY OF THE PRESENT INVENTION

It is an object of the subject matter to disclose a method of manipulating an object inserted into computerized content, comprising: receiving input related to manipulation of the object; determining the manipulation to be applied on the object; determining the display of the object according to the determined manipulation; and displaying the object after it is manipulated.

In some embodiments, the computerized content is an image. In some embodiments, the computerized content is video. In some embodiments, the input is received from a user. In some embodiments, the input is received as metadata related to the content.

In some embodiments, the method further comprises a step of detecting interaction between a user provided with the computerized content and the manipulated object. In some embodiments, the method further comprises a step of providing an analysis based on the detected interaction.

It is another object of the subject matter to disclose a computer program product embodied on one or more computer-usable media for performing a computer process comprising: receiving input related to manipulation of an object inserted into computerized content; determining the manipulation to be applied on the object; determining the display of the object according to the determined manipulation; and displaying the object after it is manipulated.

It is another object of the subject matter to disclose a system for manipulating an object inserted into computerized content, comprising a manipulation module for receiving input related to manipulation of the object and determining the manipulation to be applied on the object; a rendering module for determining the display of the computerized content with the manipulated object.

In some embodiments of the system, the computerized content is video. In some embodiments, the system further comprises a frame-based metadata storage for sending the rendering module metadata related to the display of the object in addition to the video.

In some embodiments, the system further comprises an input device for receiving user input such that the manipulation is determined as a function of the user input. In some embodiments, the system further comprises a video event dispatcher for tracking an event in the video such that the manipulation is determined as a function of the event.

BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary non-limiting embodiments of the disclosed subject matter will be described, with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are designated by the same numerals or letters.

FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter;

FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter;

FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter; and,

FIG. 4 shows a flow for implementing the method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

One technical problem dealt with in the disclosed subject matter is enabling interactivity of objects inserted into video content. Interactivity of such objects increases the attractiveness of the video content, as well as the attractiveness of the objects themselves. As a result, the value of interactive objects within video content is increased, especially when the object or the video content contains commercial content.

One technical solution discloses a system that comprises a receiving module for receiving information used to determine a manipulation applied on an object inserted into video content. The information may be received from a user or from another computerized entity, for example the distributor of the video content. The system also comprises a determination module for determining the manipulation applied on the object. Such manipulation may be changing the location or size of the object, generating sound feedback to be executed by the object, and the like. The manipulation may be a function of the content of the video. The system of the disclosed subject matter may also comprise a rendering module for determining the display of the object, or of the entire video content, after the manipulation is determined. For example, determining the display takes into consideration the location of the camera in the video content, the location of specific elements in the frames, such as figures, and the like. The rendering module may redraw the object, determine the shadow cast by the modified object, and the like. The manipulated object may then be displayed on a display device of the user.
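
By way of illustration only, the division of labor among the modules described above might be expressed as interfaces along the following lines. This is a minimal TypeScript sketch; none of these identifiers appear in the disclosure and all of them are hypothetical.

```typescript
// Hypothetical interfaces sketching the module boundaries described above.

interface ManipulationInput {
  source: "user" | "metadata";   // who or what triggered the manipulation
  kind: string;                  // e.g. "resize", "move", "sound"
  payload?: unknown;             // source-specific details
}

interface Manipulation {
  targetObjectId: string;
  scale?: number;                      // relative size change
  position?: { x: number; y: number }; // new location, if any
  soundUrl?: string;                   // sound feedback to execute
}

interface ReceivingModule {
  onInput(handler: (input: ManipulationInput) => void): void;
}

interface DeterminationModule {
  determine(input: ManipulationInput): Manipulation | null;
}

interface RenderingModule {
  // Decides how the frame looks once the manipulation is applied,
  // e.g. redrawing the object and recomputing its shadow.
  render(manipulation: Manipulation): void;
}
```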

The disclosed subject matter relates to objects inserted into video content, as well as objects inserted into images, text or other visual entities displayed on a computerized device.

FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter. Computerized environment 100 comprises a user's device 120 that receives content from a communication server 150. The communication server 150 may transmit the content to the user's device 120 via a network 110. The communication server 150 may be a group of servers or computers providing the content to the user's device 120. In some cases, the communication server 150 is a web server and the content comprises video data, video metadata, properties related to manipulating and inserting objects into the video data, events within the video and the like. In other embodiments, the communication server 150 may be a server that handles instant messaging applications or video conferences, such as ICQ, MSN Messenger and the like, in which video is transmitted bi-directionally. The user's device 120 may be a personal computer, a television, or a wireless device such as a mobile phone, a Personal Digital Assistant (PDA) and the like. The user's device 120 communicates with or comprises a display device 115 used for displaying the video transmitted from the communication server 150. The user's device 120 further comprises an input device, such as a pointing device 125, a keyboard 128, a touch screen (not shown) or other input devices desired by a person skilled in the art. Such an input device enables the user to interact with the video content or with the object inserted into the video content, for example by pointing at the object and pressing a key. In some exemplary embodiments, the user's device 120 incorporates a computerized application used for converting the data received from the communication server 150 into the data displayed on the user's device 120 or on the display device 115. Such a computerized application may be a media player, such as Windows Media Player, Adobe Media Player and the like. The video may be displayed on a specific region 130 within the display device 115.

In accordance with one exemplary embodiment of the disclosed subject matter, user input received at the user's device 120 via the input devices manipulates the overlay object. For example, when the user hovers over or points at the object, the object increases by a predetermined proportion, such as 5 percent. In other alternative examples, the user can change the location of the object, or change display parameters of the object such as color, luminance and the like. The user's input is received by a receiver (not shown) within or connected to the user's device 120. Such a receiver may be a hardware or software module, and forwards the user's input to a processing module that manipulates the object according to the user's input and to a predetermined set of rules. Such rules may be stored in a storage device within the user's device, or within the communication server 150. In yet other alternative examples, the user may click or otherwise select the object and, as a result, the video player may stop, pause, fast-forward, seek, rewind the video and the like. Additionally, clicking on an object may pause the video and display a second object or additional content, such as a window or bubble displaying additional information and/or drawings, figures, images, text, video and the like.
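
By way of illustration only, the hover behavior described above might be implemented as follows. This TypeScript sketch assumes the 5-percent proportion mentioned in the example; the type names and the centered-growth choice are hypothetical, not part of the disclosure.

```typescript
// A minimal sketch: pointing at the inserted object enlarges it by a
// fixed proportion, per a predetermined rule.

type Rect = { x: number; y: number; width: number; height: number };

const HOVER_SCALE = 1.05; // "increases by a predetermined proportion, such as 5 percent"

function contains(rect: Rect, px: number, py: number): boolean {
  return px >= rect.x && px <= rect.x + rect.width &&
         py >= rect.y && py <= rect.y + rect.height;
}

// Called for every pointer-move event forwarded by the receiver.
function onPointerMove(objectRect: Rect, px: number, py: number): Rect {
  if (!contains(objectRect, px, py)) return objectRect;
  // Grow around the center so the object stays anchored while hovered.
  const dw = objectRect.width * (HOVER_SCALE - 1);
  const dh = objectRect.height * (HOVER_SCALE - 1);
  return {
    x: objectRect.x - dw / 2,
    y: objectRect.y - dh / 2,
    width: objectRect.width + dw,
    height: objectRect.height + dh,
  };
}
```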

In accordance with other exemplary embodiments of the disclosed subject matter, manipulation of the object is performed according to the content of the video. The manipulation may be determined as a function of metadata of the video content received by the user's device 120, for example the sound volume level of the video content. Some of the analysis may be done before transmitting the video content to the user's device 120, and some analysis may be performed in runtime. For example, the volume level can be analyzed in runtime, while detecting specific objects or figures in the video is more likely to be performed before the video content is transmitted to the user's device 120, for example in the communication server 150. In an alternative embodiment, another server (not shown) may receive the video content from the communication server, and add the objects to the video after analyzing said video content. In yet another exemplary embodiment, another server may select the object to be added to the video and send an indication to the user's device 120 to add the object. Such a selection may be performed in accordance with predetermined parameters, rules and configurations. The selection may be done in accordance with demographic information, the user's history such as viewing history, location, video content and the like.
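
By way of illustration only, a metadata-driven rule of the kind described above, using the volume-level example, might look like the following TypeScript sketch. The thresholds, field names and scale factors are all hypothetical.

```typescript
// A sketch of a metadata-driven rule: the louder the current frame,
// the more the inserted object is emphasized.

interface FrameMetadata {
  frameIndex: number;
  volumeLevel: number; // normalized 0..1, analyzed in runtime
}

function scaleForVolume(meta: FrameMetadata): number {
  // Map volume to a modest size emphasis; quiet frames leave the object as-is.
  if (meta.volumeLevel > 0.8) return 1.2;
  if (meta.volumeLevel > 0.5) return 1.1;
  return 1.0;
}
```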

FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter. Computerized module 200 comprises an I/O module 220 for receiving input from a user that relates to interacting with an object added to video content as an overlay. Such input may be hovering, pointing, clicking, touching the display device to interact with the object, pressing a key on a keyboard, vocal input using a microphone and the like. Such an I/O module 220 is likely to reside on the user's device 120 of FIG. 1, receive the user's input and send said input to a manipulation server 235 to determine the manipulation applied to the object according to the user's input. The I/O module 220 may receive manipulations from sources other than the user watching the video content, such as an RSS feed from a website, a computerized clock, an additional application and the like. In some exemplary embodiments, a lack of input from the I/O module 220 may initiate a manipulation by the manipulation server 235, such as illuminating the object, or displaying an additional object calling the user to interact with the object.
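
By way of illustration only, the "lack of input" behavior mentioned above might be realized with an idle timer, as in the following TypeScript sketch. The timeout value and all identifiers are hypothetical.

```typescript
// A sketch: if the user has not interacted for a while, a manipulation is
// initiated to call the user's attention to the object.

const IDLE_MS = 10_000; // hypothetical idle threshold

let idleTimer: ReturnType<typeof setTimeout> | undefined;

// Called whenever the I/O module forwards any user input; resets the timer.
function onAnyUserInput(triggerIdleManipulation: () => void): void {
  if (idleTimer !== undefined) clearTimeout(idleTimer);
  idleTimer = setTimeout(triggerIdleManipulation, IDLE_MS);
}
```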

The manipulation server 235 may also be connected to a video event dispatcher 210 that tracks events in the video content transmitted to the user's device. The events tracked by the video event dispatcher 210 may also affect the manipulation selected by the manipulation server 235. For example, an object may be manipulated to follow a point of interest in the video, such as a bouncing ball. The video event dispatcher 210 may reside in the communication server 150, or in another server that analyzes the video content before said video content is transmitted to the user's device 120 of FIG. 1. The video event dispatcher 210 may comprise software or hardware applications to detect changes in the video content, such as the location of objects in different video frames, shadowing, blocking of view by an obstacle, sound data, a new scene, and the like. The video event dispatcher 210 may be connected to a process video module 215 or to a storage containing preprocessed data of the video content. Such preprocessed data provides the video event dispatcher with specific information concerning events, for example a specific frame, a specific event and the like. Such preprocessed data is used when the video event dispatcher 210 dispatches a command to one or more manipulation servers, such as the manipulation server 235, which determines a manipulation to be applied on the inserted object at a specific frame. The video event dispatcher 210 is also connected to the timeline of the video data when displayed on the user's device, to provide indications at a precise time segment. In some exemplary embodiments of the disclosed subject matter, the video event dispatcher 210 receives the metadata from the preprocessed video content, analyzes the metadata and issues notifications to the manipulation server 235 to provide a manipulation at a predefined time or frame.
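
By way of illustration only, the dispatcher role described above might be sketched in TypeScript as follows: preprocessed events are keyed to frames, and when playback reaches a matching frame, registered manipulation servers are notified. All names are hypothetical.

```typescript
// A sketch of a frame-keyed event dispatcher driven by the player timeline.

interface VideoEvent {
  frame: number;       // frame at which the event occurs
  description: string; // e.g. "new scene", "ball at point of interest"
}

type ManipulationListener = (event: VideoEvent) => void;

class VideoEventDispatcher {
  private listeners: ManipulationListener[] = [];

  constructor(private events: VideoEvent[]) {} // preprocessed data

  // Manipulation servers subscribe to be notified of events.
  subscribe(listener: ManipulationListener): void {
    this.listeners.push(listener);
  }

  // Called from the player's timeline on every frame advance.
  onFrame(frame: number): void {
    for (const event of this.events) {
      if (event.frame === frame) {
        this.listeners.forEach((notify) => notify(event));
      }
    }
  }
}
```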

In accordance with some exemplary embodiments of the disclosed subject matter, the manipulation server 235 receives data according to which a manipulation is determined. Such data may be sent from the I/O module 220, from the video event dispatcher 210, from the communication server 150 of FIG. 1, from another source of video content, from a publisher that wishes to add an object to the video content and the like. The manipulation server 235 comprises or communicates with an object behavior storage 230 that stores data concerning manipulations. Such data may be manipulation options, sets of rules, technological requirements for performing manipulations, cases in which a manipulation cannot be provided, a minimal time for applying a manipulation on an object or video content and the like. In some cases, the user's device 120 of FIG. 1 may be limited in processing abilities such that some manipulations cannot be performed even if determined by the manipulation server 235. In some exemplary embodiments, the manipulation server 235 may take into account the processing abilities and other resources of the user's device 120 of FIG. 1 when determining a manipulation. In some other cases, the user may wish to change the object's location to an unauthorized location, for example the location of a show presenter that is required to appear on the display device 115 of FIG. 1. Such rules may be stored in the object behavior storage 230.
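
By way of illustration only, rules from the object behavior storage might veto a requested manipulation as in the following TypeScript sketch: an upper bound on resizing, a device-capability requirement, and forbidden regions such as the area where a show presenter must remain visible. All field names and rules are hypothetical.

```typescript
// A sketch of rule checks against a requested manipulation.

interface Region { x: number; y: number; width: number; height: number }

interface BehaviorRules {
  maxScale: number;              // upper bound on resizing
  requiresStrongDevice: boolean; // some manipulations need more resources
  forbiddenRegions: Region[];    // locations the object may not move into
}

function inRegion(r: Region, px: number, py: number): boolean {
  return px >= r.x && px <= r.x + r.width && py >= r.y && py <= r.y + r.height;
}

function isAllowed(
  requested: { x: number; y: number; scale: number },
  rules: BehaviorRules,
  deviceIsStrong: boolean
): boolean {
  if (requested.scale > rules.maxScale) return false;
  if (rules.requiresStrongDevice && !deviceIsStrong) return false;
  // Reject moves whose target point lands in an unauthorized location.
  return !rules.forbiddenRegions.some((r) => inRegion(r, requested.x, requested.y));
}
```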

The manipulation server 235 is connected to a rendering module 250 and transmits the determined manipulation to the rendering module 250. The rendering module 250 determines the display of the content once the manipulation is applied on the object. For example, the rendering module 250 determines the angle from which the object is displayed. Further, the rendering module 250 may determine to modify or limit the manipulation determined by the manipulation server 235. For example, when the user wishes to raise a part of the object beyond a predefined height, and such a height is determined by the manipulation server 235, the rendering module 250 may determine to limit the manipulation to a predefined height. Additionally, the rendering module 250 may define the frame displayed to the user, in terms of either video content, a single image or the like. The rendering module 250 may also determine the shadow cast by the manipulated object, for example increasing the shadow when the object's size is increased, or changing the shadow's location. The rendering module 250 may further determine the shadows cast on the manipulated object. The rendering module 250 may change the transparency or opacity level according to the location of at least a portion of the object after it is manipulated. The rendering module 250 may generate or draw at least a portion of the object to execute the manipulation, for example draw a facial expression of the object, determined according to the context of the video content. The rendering module 250 may further determine to display only a portion of the manipulated object, for example in case the object's visibility is partially blocked by an obstacle.
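
By way of illustration only, two of the rendering decisions above — clamping a requested height to a predefined limit, and growing the shadow together with the object — might be sketched in TypeScript as follows. The limit, field names and proportionality are hypothetical.

```typescript
// A sketch of clamping a manipulation and scaling the shadow with the object.

const MAX_HEIGHT = 240; // predefined height limit, illustrative only

interface RenderState {
  height: number;      // current object height in pixels
  shadowScale: number; // shadow size relative to the object
}

function applyHeightChange(state: RenderState, requestedHeight: number): RenderState {
  // The rendering module may limit the manipulation determined upstream.
  const height = Math.min(requestedHeight, MAX_HEIGHT);
  return {
    height,
    // "increasing the shadow when the object's size is increased"
    shadowScale: state.shadowScale * (height / state.height),
  };
}
```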

The rendering module 250 may be connected to a frame-based metadata (FBM) storage 240. The FBM storage 240 comprises data related to the video content itself, such as the camera angle provided in a specific frame in the video content, the median or average gray scale value of a specific frame, the appearance of a specific character or entity in the video content, atmosphere, points of interest in the video content, events in a scene and the like. Indication of such data enables the rendering module 250 to display the manipulated object in a more precise manner, which is more attractive to the user, and improves the influence of a commercial object within video content.
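
By way of illustration only, a per-frame metadata record and its lookup, of the sort the rendering module might consult, could be sketched in TypeScript as follows. The field names are hypothetical.

```typescript
// A sketch of frame-based metadata storage keyed by frame number.

interface FrameBasedMetadata {
  cameraAngleDegrees: number;
  averageGrayScale: number;                 // median/average gray value of the frame
  pointsOfInterest: { x: number; y: number }[];
  charactersPresent: string[];              // entities appearing in this frame
}

class FbmStorage {
  private byFrame = new Map<number, FrameBasedMetadata>();

  set(frame: number, meta: FrameBasedMetadata): void {
    this.byFrame.set(frame, meta);
  }

  // Returns the metadata for a frame, if it was preprocessed.
  get(frame: number): FrameBasedMetadata | undefined {
    return this.byFrame.get(frame);
  }
}
```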

Once the manipulated object is displayed on the user's device, the I/O module 220 may detect the user's behavior actions concerning the object. Such behavior actions may be hovering with a pointing device, such as a mouse, over the specific location on the display device where the object is displayed. Another exemplary behavior action may be pressing a link connected to the object. The I/O module 220 may send the detected behavior actions to another entity that analyzes said actions and provides statistical analysis. The statistical analysis also refers to changes in the size and location of the object, to interactions with specific portions of the object, to preferred manipulations in specific regions, ages, times of day and the like.
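
By way of illustration only, the interaction record forwarded for statistical analysis might look like the following TypeScript sketch. The field names, the endpoint and the transport are hypothetical.

```typescript
// A sketch of reporting a detected behavior action to an analyzing entity.

interface InteractionEvent {
  objectId: string;
  action: "hover" | "click" | "link-press";
  objectRegion?: string; // which portion of the object was touched
  timestamp: number;     // when the interaction happened
  objectSize: { width: number; height: number };
}

function reportInteraction(event: InteractionEvent, endpoint: string): void {
  // Fire-and-forget delivery; failures are swallowed so that
  // analytics never interferes with playback.
  fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  }).catch(() => { /* intentionally ignored */ });
}
```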

The computerized module 200 and other elements disclosed in the subject matter detect, handle, and analyze manipulations and instructions using applications that preferably comprise software or hardware components. Such software components may be written in any programming language such as C, C#, C++, Java, VB, VB.Net, or the like. Such components may be developed under any development environment, such as Visual Studio.Net, Eclipse or the like. Communication between the elements disclosed above may be performed via the internet, or via another communication medium, such as a telephone network, satellite, physical or wireless channels, and other media desired by a person skilled in the art.

The elements of the disclosed subject matter may be downloadable or installable on the user's device as an extension to a media player already installed on the user's device. As such, the elements comprise an interface to communicate with other portions of the media player already installed on the user's device. Alternatively, the elements may be downloaded as part of a new media player, not as an add-on to an existing media player.

FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter. FIGS. 3A and 3B show a display device displaying an object being manipulated according to the user's input. FIG. 3A shows a display device 322 having a display region 324. Said display region 324 may be a region where an image is displayed, or a region used by a media player to provide video content. An object is displayed at the display region. The object of the exemplary embodiment comprises an ice cream cone 326 and ice cream 328. The object is inserted into an image or into video content provided to a user's device (such as 120 of FIG. 1).

In the example disclosed in FIG. 3B, the user desires to interact with the object. FIG. 3B shows a display device 302 and a display region 304, generally equivalent to elements 322 and 324 of FIG. 3A. The display region 304 displays ice cream 308 and an ice cream cone 306. The interaction disclosed in FIG. 3B relates to increasing the size of the ice cream (328 of FIG. 3A). In accordance with the example disclosed in FIG. 3B, the user points at the ice cream 308 using a pointing device (not shown), such as a mouse. A pointer 310 related to the pointing device (not shown) points at the ice cream 308. As a result, the size of the ice cream 308 increases, for example by 25 percent. The I/O module 220 may detect the user's pointing at the ice cream 308, which is part of the object inserted into the video content or image. The manipulation server 235 determines the manipulation performed on the ice cream 308 or on the entire object. For example, it may determine to enlarge the ice cream 308 without changing its location, which was also possible according to the user's input.

FIGS. 3C and 3D show a display device displaying an object manipulated according to the context of the video content or the content of the image to which the object is inserted, according to some exemplary embodiments of the disclosed subject matter. FIG. 3C shows a display device 342, a display region 344 and two objects displayed within the display region 344. In the disclosed example, the first object 346 is a person, and the second object 348 is a telephone. The first object 346 is part of the content provided by the content server (such as 150 of FIG. 1) while the second object 348 is added to the original content and can be manipulated.

FIG. 3D shows the manipulation applied on the second object added to the original content. FIG. 3D discloses a display device 362, a display region 364, a first object 366 and a second object 368. The first object 366 and the second object 368 are generally equivalent to elements 346 and 348 of FIG. 3C. The second object 368 is manipulated according to the context of the video content displayed in the display region 364. For example, when a specific sound tone is provided at a specific frame or group of frames in the video content, the second object 368 is manipulated in a way that makes it seem as if the phone is ringing. Such manipulation increases the attractiveness of the second object 368 and enables interaction between the user and the video content. Further, such manipulation improves the visibility of the second object 368 to the user, and as a result, increases the value of the content provided along with the second object.

FIG. 4 shows a flow diagram of a method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter. In step 402, the video content is processed before being transmitted to the user's device. Such processing includes identifying events in which a manipulation may be applied on an inserted object, identifying frames in which a scene begins, identifying changes in the audio data and the like. Such preprocessed data is likely to be transmitted to the user's device in addition to the video content. In step 405, the user's input is received by the user's device. Such input may be provided using a mouse, a keyboard, a touch screen and the like. The detection in step 405 may be the result of a command or message from the user watching the content, from the source of the content, from a computer engine that generates such indications in a random manner, and the like. In case the detection's origin is user input, the computerized entity that detects the indication may send a notification to another module that input from the user has been detected. In step 410, a computerized entity detects an indication to apply a manipulation on an object inserted into content displayed to a user. Such content may be video content, an animated image or any other content desired by a person skilled in the art. The content may also be text. It will be noted that the object may be seamlessly inserted into the content, such as by being in line with a perspective of the content, such as video content. Additionally, the object may be displayed as an overlay over the content, such as a ticker being presented over video content. In case the indication is from the source of the content, it is likely that the indication is sent to an adaptive module in the media player that displays the content, to provide a specific manipulation at a given moment, or that a predefined event takes place at a specific frame or sequence of frames.
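
By way of illustration only, the detection of an indication in step 410, with its several possible origins, might be sketched in TypeScript as follows. The names and the probability used for the random engine are hypothetical.

```typescript
// A sketch of step 410: an indication may originate from the user, from the
// source of the content, or from an engine generating indications at random.

type IndicationOrigin = "user" | "content-source" | "random-engine";

interface Indication {
  origin: IndicationOrigin;
  frame?: number; // set when the content source ties the indication to a frame
}

function detectIndication(userActed: boolean, sourceFrame?: number): Indication | null {
  if (userActed) return { origin: "user" };
  if (sourceFrame !== undefined) return { origin: "content-source", frame: sourceFrame };
  // "a computer engine that generates such indications in a random manner"
  return Math.random() < 0.01 ? { origin: "random-engine" } : null;
}
```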

In step 415, the manipulation to be applied on the object added to the content is determined. The determination may be a function of the source of the indication, for example a user or a computerized source. The manipulation may be a function of the number of objects inserted into the content, or the number of objects visible to the user in different content units, for example inserting a first object into video content and a second object into an image, while each object is manipulated autonomously. The manipulation may change the object's parameters, such as size, location, texture, facial expression, level of speech, accent, outfit and the like. Further, the manipulation may change the display of the content; for example, the manipulation may pause video content. In some exemplary embodiments, the manipulation may replace the inserted object with a different object. Determining the manipulation is likely to be performed in the user's device.
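
By way of illustration only, choosing a manipulation as a function of the indication's source, per step 415, might be sketched as follows. The branch logic is illustrative only, not prescriptive.

```typescript
// A sketch of step 415: the manipulation depends on the indication's source.

type ChosenManipulation =
  | { kind: "resize"; factor: number }
  | { kind: "move"; x: number; y: number }
  | { kind: "pause-video" }
  | { kind: "replace"; newObjectId: string };

function determineManipulation(origin: "user" | "content-source"): ChosenManipulation {
  if (origin === "user") {
    // User-driven input tends toward direct visual feedback on the object.
    return { kind: "resize", factor: 1.25 };
  }
  // A content-driven indication may change the display of the content itself,
  // e.g. pausing the video as mentioned above.
  return { kind: "pause-video" };
}
```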

In step 420, a computerized entity determines the display of the manipulated object. Such determination takes into consideration the content displayed to the user, for example the location of other elements in the frame or the image. Determining the display of the object may require drawing or designing at least part of the manipulated object, for example in case the shape of the object is modified as a result of the manipulation. Determination of the display may also comprise determining other elements of the content into which the object is inserted, or pausing the video, in case the content provided to the user is a sequence of video frames. Determination of the display may comprise determining shadows within the content and/or over the object, the transparency level, the location and size of elements in the content, limits of the manipulation and the like. Such determination may be performed by the rendering module 250 of FIG. 2, by an extension to a media player, by an extension to a browser or to an instant messaging application and the like.

In step 430, the manipulated object is displayed. As noted above, the object may be injected or otherwise inserted into video content, an animated image, text, and the like. When more than one object is inserted into the content, the computerized module determines the object to apply the manipulation on. Further, the system of the disclosed subject matter may comprise a Z-order module for determining which object to display in front of other objects, as sketched below.
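
By way of illustration only, the Z-order decision might amount to drawing objects back to front, as in the following TypeScript sketch. The names are hypothetical.

```typescript
// A sketch of Z-ordering: the object with the highest z value is drawn last,
// so it appears in front of the others.

interface InsertedObject {
  id: string;
  z: number; // larger z is drawn in front of smaller z
}

function drawOrder(objects: InsertedObject[]): InsertedObject[] {
  // Ascending sort: the front-most object is drawn last.
  return [...objects].sort((a, b) => a.z - b.z);
}
```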

In step 440, the user interacts with the manipulated object. Such interaction may be by pressing a key, moving a pointing device, touching the screen, opening a link, clicking on the object, speaking into a microphone and the like. The computerized module 200 of FIG. 2 detects such interactions, especially using the I/O module 220. In step 445, the interactions with the objects are analyzed. Such analysis may be performed in the user's device, or by an adaptive server after the interactions are transmitted from the user's device. The analysis of interaction between the user and the manipulated object allows more than just analysis of links pressed by the user; it also covers, for example, the time during which the user interacts with the object, preferred manipulations, and the like.

While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but only by the claims that follow.

Claims

1. A method of manipulating an object inserted into computerized content, comprising:

receiving input related to manipulation of the object;
determining the manipulation to be applied on the object;
determining the display of the object according to the determined manipulation;
displaying the object after it is manipulated.

2. The method according to claim 1, wherein the computerized content is an image.

3. The method according to claim 1, wherein the computerized content is video.

4. The method according to claim 1, wherein the input is received from a user.

5. The method according to claim 1, wherein the input is received as metadata related to the content.

6. The method according to claim 1, further comprising a step of detecting interaction between a user provided with the computerized content and the manipulated object.

7. The method according to claim 6, further comprising a step of providing an analysis based on the detected interaction.

8. The method according to claim 7, further comprising a step of storing the analysis.

9. A computer program product embodied on one or more computer-usable media for performing a computer process comprising:

receiving input related to manipulation of an object inserted into computerized content;
determining the manipulation to be applied on the object;
determining the display of the object according to the determined manipulation;
displaying the object after it is manipulated.

10. A system for manipulating an object inserted into computerized content, comprising:

a manipulation module for receiving input related to manipulation of the object and determining the manipulation to be applied on the object;
a rendering module for determining the display of the computerized content with the manipulated object.

11. The system according to claim 10, wherein the computerized content is video.

12. The system according to claim 11, further comprising a frame-based metadata storage for sending the rendering module metadata related to the display of the object in addition to the video.

13. The system according to claim 11, further comprising an input device for receiving user input such that the manipulation is determined as a function of the user input.

14. The system according to claim 11, further comprising a video event dispatcher for tracking an event in the video such that the manipulation is determined as a function of the event.

15. The system according to claim 14, wherein the video event dispatcher issues a notification to the manipulation module to provide a manipulation.

Patent History
Publication number: 20110001758
Type: Application
Filed: Feb 12, 2009
Publication Date: Jan 6, 2011
Inventors: Tal Chalozin (New York, NY), Izhak Zvi Netter (Givaataim)
Application Number: 12/867,253
Classifications
Current U.S. Class: Graphic Manipulation (object Processing Or Display Attributes) (345/619)
International Classification: G09G 5/00 (20060101);