APPARATUS AND METHOD FOR MANIPULATING AN OBJECT INSERTED TO VIDEO CONTENT
The subject matter discloses a method of manipulating an object inserted into computerized content, comprising receiving input related to manipulation of the object, determining the manipulation to be applied on the object, determining the display of the object according to the determined manipulation, and displaying the manipulated object. The input may be received from the user or from metadata related to the video content, which may also be a single image. The subject matter also discloses a system for determining and performing the manipulation applied on an object inserted into an image or into video content.
The present invention claims priority of the filing date of provisional patent application Ser. No. 61/065,703 titled In-video advertising real estate, filed Feb. 13, 2008, the contents of which are hereby incorporated by reference herein.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to insertion of objects into video content in general, and to manipulation of such objects in particular.
2. Discussion of the Related Art
The use of online video content has grown significantly during the past five years. Such video content may be received from dedicated video websites, such as YouTube, or from other web pages, such as websites that provide news or entertainment content, online educational content and the like. Video content may also be received during video conferences, live video streams, from web cameras and the like.
Objects known in the art are inserted into the video content and displayed in a static mode, for example added at the lower end of the frame. Such objects are inserted after the video is processed, and the video content does not change its properties. The objects may provide the user with additional content, such as commercials, related links, news, messages and the like. Since the objects are static, they do not attract much attention, and the user is likely to ignore them and focus on the video content itself. However, the provider of the objects wishes the user to focus on the object's content, not only on the video content. Some of these objects are known as pre-rolls, mid-rolls and post-rolls, tickers, overlays and the like. Their main disadvantage is that they are intrusive and do not fit the video content contextually.
There is a long-felt need to attract a user watching online video content to the object displayed in addition to the video, in order to increase the visibility of the inserted object and the attractiveness of the video content. By increasing the visibility of the object, the value of the commercial content represented by the object, such as an advertisement, is improved.
SUMMARY OF THE PRESENT INVENTION
It is an object of the subject matter to disclose a method of manipulating an object inserted into computerized content, comprising: receiving input related to manipulation of the object; determining the manipulation to be applied on the object; determining the display of the object according to the determined manipulation; and displaying the manipulated object.
In some embodiments, the computerized content is an image. In some embodiments, the computerized content is video. In some embodiments, the input is received from a user. In some embodiments, the input is received as metadata related to the content.
In some embodiments, the method further comprises a step of detecting interaction between a user provided with the computerized content and the manipulated object. In some embodiments, the method further comprises a step of providing an analysis based on the detected interaction.
It is another object of the subject matter to disclose a computer program product embodied on one or more computer-usable media for performing a computer process comprising: receiving input related to manipulation of an object inserted into computerized content; determining the manipulation to be applied on the object; determining the display of the object according to the determined manipulation; and displaying the manipulated object.
It is another object of the subject matter to disclose a system for manipulating an object inserted into computerized content, comprising a manipulation module for receiving input related to manipulation of the object and determining the manipulation to be applied on the object; a rendering module for determining the display of the computerized content with the manipulated object.
In some embodiments of the system, the computerized content is video. In some embodiments, the system further comprises a frame-based metadata storage for sending the rendering module metadata related to the display of the object in addition to the video.
In some embodiments, the system further comprises an input device for receiving user input such that the manipulation is determined as a function of the user input. In some embodiments, the system further comprises a video event dispatcher for tracking an event in the video such that the manipulation is determined as a function of the events.
Exemplary non-limiting embodiments of the disclosed subject matter will be described, with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are designated by the same numerals or letters.
One technical problem dealt with in the disclosed subject matter is to enable interactivity of objects inserted into video content. Interactivity of such objects increases the attractiveness of the video content, as well as the attractiveness of the objects themselves. As a result, the value of interactive objects within video content is increased, especially when the object or the video content contains commercial content.
One technical solution discloses a system that comprises a receiving module for receiving information used to determine a manipulation applied on an object inserted into video content. The information may be received from a user or from another computerized entity, for example the distributor of the video content. The system also comprises a determination module for determining the manipulation applied on the object. Such manipulation may be changing the location or size of the object, generating sound feedback to be executed by the object and the like. The manipulation may be a function of the content of the video. The system of the disclosed subject matter may also comprise a rendering module for determining the display of the object or the display of the entire video content after the manipulation is determined. For example, determining the display takes into consideration the location of the camera in the video content and the location of specific elements in the frames, such as figures and the like. The rendering module may redraw the object, determine the shadow cast by the modified object and the like. The manipulated object may then be displayed on a display device of the user.
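The receive/determine/render pipeline described above can be sketched as follows. This is a minimal illustration only; the class and method names are assumptions of this sketch and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class OverlayObject:
    # an object overlaid on the video content
    x: int
    y: int
    width: int
    height: int

class ManipulationPipeline:
    """Sketch of the receiving, determination and rendering modules."""

    def receive(self, source, payload):
        # input may arrive from the viewer or from another computerized
        # entity, such as the distributor of the video content
        return {"source": source, "payload": payload}

    def determine(self, event):
        # map the received input to a concrete manipulation,
        # e.g. enlarge the object when the viewer hovers over it
        if event["payload"].get("action") == "hover":
            return {"scale": 1.05}
        return {}

    def render(self, obj, manipulation):
        # apply the determined manipulation before the frame is drawn
        scale = manipulation.get("scale", 1.0)
        obj.width = round(obj.width * scale)
        obj.height = round(obj.height * scale)
        return obj
```

In use, the three stages run in sequence for each input event, mirroring the module chain in the text.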
The disclosed subject matter relates to objects inserted into video content, as well as objects inserted into images, text or other visual entities displayed on a computerized device.
In accordance with one exemplary embodiment of the disclosed subject matter, user's input that is received at the user's device 120 via the input devices manipulates the overlay object. For example, when the user hovers over or points at the object, the object increases by a predetermined proportion, such as 5 percent. In other alternative examples, the user can change the location of the object, or change display parameters of the object such as color, luminance and the like. The user's input is received by a receiver (not shown) within or connected to the user's device 120. Such receiver may be a hardware or software module and forwards the user's input to a processing module that manipulates the object according to the user's input and to a predetermined set of rules. Such rules may be stored in a storage device within the user's device, or within the communication server 150. In yet other alternative examples, the user may click or otherwise select the object and as a result, the video player may stop, pause, fast-forward, seek, rewind the video and the like. Additionally, clicking on an object may pause the video and display a second object or additional content, such as a window or bubble displaying additional information and/or drawings, figures, images, text, video and the like.
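The rule-driven handling of user input described above might look like the following sketch. The rule table and the helper name are hypothetical; the 5-percent hover enlargement and the click-to-pause behavior follow the examples in the text.

```python
# hypothetical rule table; per the text, such rules may be stored on the
# user's device or on the communication server 150
RULES = {
    "hover": {"scale": 1.05},
    "click": {"player": "pause", "show": "info_bubble"},
}

def apply_user_input(obj, action, rules=RULES):
    """Apply the rule matching a user action to an overlay object."""
    rule = rules.get(action, {})
    effects = []
    if "scale" in rule:
        # e.g. hovering enlarges the object by 5 percent
        obj["width"] = round(obj["width"] * rule["scale"])
        obj["height"] = round(obj["height"] * rule["scale"])
        effects.append("resized")
    if rule.get("player"):
        # e.g. clicking the object pauses the video player
        effects.append("player_" + rule["player"])
    if rule.get("show"):
        # e.g. clicking may open a bubble with additional content
        effects.append("opened_" + rule["show"])
    return effects
```

Unrecognized actions fall through with no effect, so the rule table can be extended without changing the handler.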
In accordance with other exemplary embodiments of the disclosed subject matter, manipulation of the object is performed according to the content of the video. The manipulation may be determined as a function of metadata of the video content received by the user's device 120, for example the sound volume level of the video content. Some of the analysis may be done before transmitting the video content to the user's device 120, and some analysis may be performed in runtime. For example, volume level can be analyzed in runtime, while detecting specific objects or figures in the video is more likely to be performed before the video content is transmitted to the user's device 120, for example in the video server 150. In an alternative embodiment, another server (not shown) may receive the video content from the video server, and add the objects to the video after analyzing said video content. In yet another exemplary embodiment, another server may select the object to be added to the video and send an indication to the user's device 120 to add the object. Such a selection may be performed in accordance with predetermined parameters, rules and configurations. The selection may be done in accordance with demographic information, user's history such as viewing history, location, video content and the like.
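The metadata-driven case above can be illustrated with a small function that derives a manipulation from per-frame metadata such as the sound volume level. The field name and scaling factor are assumptions of this sketch.

```python
def manipulation_from_metadata(frame_meta):
    """Derive a manipulation from metadata of the current frame."""
    # "volume" is assumed normalized to the range 0..1; a louder frame
    # produces a larger object, up to 20 percent growth at full volume
    volume = frame_meta.get("volume", 0.0)
    clamped = min(max(volume, 0.0), 1.0)
    return {"scale": 1.0 + 0.2 * clamped}
```

The same shape of function could key off any other runtime-analyzable metadata, such as scene brightness.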
The manipulation server 235 may also be connected to a video event dispatcher 210 that tracks events in the video content transmitted to the user's device. The events tracked by the video event dispatcher 210 may also affect the manipulation selected by the manipulation server 235. For example, an object may be manipulated to follow a point of interest in the video, such as a bouncing ball. The video event dispatcher 210 may reside in the communication server 150, or in another server that analyzes the video content before said video content is transmitted to the user's device 120.
In accordance with some exemplary embodiments of the disclosed subject matter, the manipulation server 235 receives data according to which a manipulation is determined. Such data may be sent from the I/O module 220, from the video event dispatcher 210, or from the communication server 150.
The manipulation server 235 is connected to a rendering module 250 and transmits the determined manipulation to the rendering module 250. The rendering module 250 determines the display of the content once the manipulation is applied on the object. For example, the rendering module 250 determines the angle from which the object is displayed. Further, the rendering module 250 may determine to modify or limit the manipulation determined by the manipulation server 235. For example, when the user wishes to raise a part of the object beyond a predefined height, and such height is determined by the manipulation server 235, the rendering module 250 may determine to limit the manipulation to a predefined height. Additionally, the rendering module 250 may define the frame displayed to the user, in terms of either video content, a single image or the like. The rendering module 250 may also determine the shadow cast by the manipulated object, for example increasing the shadow when the object's size is increased, or changing the shadow's location. The rendering module 250 may further determine the shadows cast on the manipulated object. The rendering module 250 may change the transparency or opacity level according to the location of at least a portion of the object after it is manipulated. The rendering module 250 may generate or draw at least a portion of the object to execute the manipulation, for example draw a facial expression of the object, determined according to the context of the video content. The rendering module 250 may further determine to display only a portion of the manipulated object, for example in case the object's visibility is partially blocked by an obstacle.
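The limiting behavior of the rendering module, and the shadow growing with the object, can be sketched as follows. The specific height limit and shadow ratio are illustrative assumptions, not values from the disclosure.

```python
MAX_HEIGHT = 200  # predefined height limit enforced by the renderer

def render_object(obj, manipulation, max_height=MAX_HEIGHT):
    """Apply a manipulation, limiting it and recomputing the shadow."""
    requested = obj["height"] * manipulation.get("scale", 1.0)
    # the rendering module may limit the manipulation determined upstream
    height = min(requested, max_height)
    # the shadow cast by the object grows with the object itself
    shadow_length = round(height * 0.3)
    return {
        "height": round(height),
        "shadow_length": shadow_length,
        "limited": requested > max_height,
    }
```

The `limited` flag lets a caller tell whether the renderer overrode the manipulation server's decision.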
The rendering module 250 may be connected to a frame-based metadata (FBM) storage 240. The FBM storage 240 comprises data related to the video content itself, such as the camera angle in a specific frame, the median or average gray-scale value of a specific frame, the appearance of a specific character or entity in the video content, atmosphere, points of interest in the video content, events in a scene and the like. Indication of such data enables the rendering module 250 to display the manipulated object in a more precise manner, which is more attractive to the user, and improves the influence of a commercial object within video content.
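A minimal sketch of a frame-based metadata store keyed by frame index follows. The API (put/get by frame index) is an assumption of this sketch, not the disclosed implementation.

```python
class FrameBasedMetadataStorage:
    """Per-frame metadata: camera angle, gray-scale value, points of
    interest, events in a scene, and the like."""

    def __init__(self):
        self._by_frame = {}

    def put(self, frame_index, metadata):
        # store a copy so later mutation of the caller's dict is harmless
        self._by_frame[frame_index] = dict(metadata)

    def get(self, frame_index):
        # the rendering module queries this for each displayed frame;
        # frames without metadata yield an empty record
        return self._by_frame.get(frame_index, {})
```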
Once the manipulated object is displayed on the user's device, the I/O module 220 may detect the user's behavior actions concerning the object. Such a behavior action may be hovering a pointing device, such as a mouse, over the specific location on the display device where the object is displayed. Another exemplary behavior action may be clicking a link connected to the object. The I/O module 220 may send the detected behavior actions to another entity that analyzes said actions and provides statistical analysis. The statistical analysis may also refer to changes in the size and location of the object, interaction with specific portions of the object, preferred manipulations in specific regions, age groups, times of day and the like.
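The collection and aggregation of behavior actions could be sketched as below; the event names and the counting scheme are assumptions chosen for illustration.

```python
from collections import Counter

class InteractionTracker:
    """Collects behavior actions for later statistical analysis."""

    def __init__(self):
        self._events = []

    def record(self, object_id, action):
        # e.g. action = "hover" or "click_link", reported by the I/O module
        self._events.append((object_id, action))

    def summary(self):
        # counts per (object, action) pair: the raw material for the
        # statistical analysis described above
        return Counter(self._events)
```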
The computerized module 200 and other elements disclosed in the subject matter detect, handle, and analyze manipulations and instructions using applications that preferably comprise software or hardware components. Such software components may be written in any programming language, such as C, C#, C++, Java, VB, VB.Net, or the like, and may be developed under any development environment, such as Visual Studio.Net, Eclipse or the like. Communication between the elements disclosed above may be performed via the internet, or via other communication media, such as a telephone network, satellite, physical or wireless channels, and other media known to a person skilled in the art.
The elements of the disclosed subject matter may be downloadable or installable on the user's device as an extension to a media player already installed on the user's device. As such, the elements comprise an interface to communicate with other portions of the media player already installed on the user's device. Alternatively, the elements may be downloaded as part of a new media player, not as an add-on to an existing media player.
In the example disclosed in the figures, the method comprises the following steps.
In step 415, the manipulation to be applied on the object added to the content is determined. The determination may be a function of the source of the indication, for example a user or a computerized source. The manipulation may be a function of the number of objects inserted into the content, or the number of objects visible to the user in different content units, for example inserting a first object into video content and a second object into an image, while each object is manipulated autonomously. The manipulation may change the object's parameters, such as size, location, texture, facial expression, level of speech, accent, outfit and the like. Further, the manipulation may change the display of the content; for example, the manipulation may pause video content. In some exemplary embodiments, the manipulation may replace the inserted object with a different object. Determining the manipulation is likely to be performed in the user's device.
In step 420, a computerized entity determines the display of the manipulated object. Such determination takes into consideration the content displayed to the user, for example the location of other elements in the frame or the image. Determining the display of the object may require drawing or designing at least part of the manipulated object, for example in case the shape of the object is modified as a result of the manipulation. Determination of the display may also comprise determining other elements of the content into which the object is inserted, or pausing the video, in case the content provided to the user is a sequence of video frames. Determination of the display may comprise determining shadow within the content and/or over the object, transparency level, location and size of elements in the content, limits of the manipulation and the like. Such determination may be performed by the rendering module 250.
In step 430, the manipulated object is displayed. As noted above, the object may be injected or otherwise inserted into video content, an animated image, text, and the like. When more than one object is inserted into the content, the computerized module determines the object on which to apply the manipulation. Further, the system of the disclosed subject matter may comprise a Z-order module for determining which object to display in front of other objects.
In step 440, the user interacts with the manipulated object. Such interaction may be by pressing a key, moving a pointing device, touching the screen, opening a link, clicking on the object, speaking into a microphone and the like. The computerized module 200 may detect and process such interaction.
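Steps 415 through 440 can be tied together in one short sketch. The step mapping follows the text; every function body here is an illustrative assumption rather than the disclosed implementation.

```python
def determine_manipulation(indication):
    # step 415: map the received indication to a manipulation
    return {"scale": 1.1} if indication == "user_hover" else {}

def determine_display(obj, manipulation):
    # step 420: compute how the manipulated object is shown
    scale = manipulation.get("scale", 1.0)
    return {**obj, "width": round(obj["width"] * scale)}

def display(obj):
    # step 430: hand the object to the player for drawing
    return f"object at ({obj['x']},{obj['y']}), width {obj['width']}"

def handle_interaction(obj, event):
    # step 440: record the user's interaction with the object
    return {"object_width": obj["width"], "event": event}

shown = determine_display({"x": 10, "y": 20, "width": 100},
                          determine_manipulation("user_hover"))
```

Each step stays a separate function, matching the flow in which determination, rendering, display and interaction handling may run on different components.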
While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but only by the claims that follow.
Claims
1. A method of manipulating an object inserted into computerized content, comprising:
- receiving input related to manipulation of the object;
- determining the manipulation to be applied on the object;
- determining the display of the object according to the determined manipulation;
- displaying the object after it is manipulated.
2. The method according to claim 1, wherein the computerized content is an image.
3. The method according to claim 1, wherein the computerized content is video.
4. The method according to claim 1, wherein the input is received from a user.
5. The method according to claim 1, wherein the input is received as metadata related to the content.
6. The method according to claim 1, further comprising a step of detecting interaction between a user provided with the computerized content and the manipulated object.
7. The method according to claim 6, further comprising a step of providing an analysis based on the detected interaction.
8. The method according to claim 7, further comprising a step of storing the analysis.
9. A computer program product embodied on one or more computer-usable media for performing a computer process comprising:
- receiving input related to manipulation of an object inserted into computerized content;
- determining the manipulation to be applied on the object;
- determining the display of the object according to the determined manipulation;
- displaying the object after it is manipulated.
10. A system for manipulating an object inserted into computerized content, comprising
- a manipulation module for receiving input related to manipulation of the object and determining the manipulation to be applied on the object;
- a rendering module for determining the display of the computerized content with the manipulated object.
11. The system according to claim 10, wherein the computerized content is video.
12. The system according to claim 11, further comprising a frame-based metadata storage for sending the rendering module metadata related to the display of the object in addition to the video.
13. The system according to claim 11, further comprising an input device for receiving user input such that the manipulation is determined as a function of the user input.
14. The system according to claim 11, further comprising a video event dispatcher for tracking an event in the video such that the manipulation is determined as a function of the event.
15. The system according to claim 14, wherein the video event dispatcher issues a notification to the manipulation module to provide a manipulation.
Type: Application
Filed: Feb 12, 2009
Publication Date: Jan 6, 2011
Inventors: Tal Chalozin (New York, NY), Izhak Zvi Netter (Givaataim)
Application Number: 12/867,253
International Classification: G09G 5/00 (20060101);