VIDEO EFFECT USING MOVEMENT WITHIN AN IMAGE
A video effect is created that provides an experience to a viewer of freezing time during an event that is the subject of a video presentation, investigating the event during that frozen moment in time, and (optionally) resuming the action of the event. During that frozen moment in time, the video can move around the scene of the event and/or zoom in (or out) to better highlight an aspect of the event. In one embodiment, there will be a transition from video captured by a broadcast camera (or another camera) to a high resolution still image, movement around the high resolution still image, and a transition from the high resolution still image back to video from the broadcast camera (or another camera).
The remarkable, often astonishing, physical skills and feats of great athletes draw millions of people every day to follow sports that range from the power of football to the grace of figure skating, from the speed of ice hockey to the precision of golf. Sports fans are captivated by the ability of a basketball player to soar to the rafters, of hockey players to make the perfect pass, of football teams to exploit the weaknesses of a defense. In televising these events, broadcasters have deployed a varied repertoire of technologies, ranging from slow-motion replay to lipstick-sized cameras mounted on helmets, to highlight these extraordinary talents for viewers.
One aspect of televising sporting events that has been limiting is the field of view and resolution of the broadcast cameras. When a camera is depicting one portion of an event, viewers will miss action occurring at other portions of the event. For example, a camera showing video of a quarterback in a football game is not likely to show the athleticism of the players downfield preparing to receive a pass, or the options the quarterback has when deciding where to throw the ball. Similarly, the resolution of many video cameras does not allow sufficient zooming in during a replay to show viewers aspects of the event that were not clear in the original video.
SUMMARY OF THE INVENTION
A video effect is created that provides an experience to a viewer of freezing time during an event that is the subject of a video presentation, investigating the event during that frozen moment in time, and (optionally) resuming the action of the event. During that frozen moment in time, the view can move around the scene of the event and/or zoom in to better highlight an aspect of the event. In one embodiment, there will be a transition from video captured by a broadcast camera (or another camera) to a high resolution still image, movement around the high resolution still image, and a transition from the high resolution still image back to video from the broadcast camera (or another camera).
One embodiment includes identifying a particular video image and a corresponding base image, identifying one or more sub-images in the base image, and creating a new video that includes a transition from the particular video image to the base image and includes movement in the base image to the one or more sub-images. In some implementations, the particular video image depicts a portion of the base image. Because the base image and the particular video image may come from different cameras, the particular video image, while depicting a portion of the base image, will not be exactly the same as that portion of the base image on a pixel by pixel basis. Rather, they will visually show the same subject in a reasonably similar manner.
One embodiment includes identifying a first sub-image in a base image, identifying a portion of the base image that corresponds to a particular video image, and creating a video that transitions from the video image to the portion of the base image and includes movement in the base image from the portion of the base image to the one or more sub-images.
One embodiment includes receiving an electronic base image, receiving an indication of a set of two or more target images in the electronic base image, creating a set of electronic intervening images in the electronic base image that are between the target images, and creating a video from the target images and the intervening images.
One embodiment includes receiving a higher resolution video sequence in the vicinity of the camera, transmitting a normal resolution (standard definition or high definition) video sequence live, with each image being a sub-image of one of the images in the higher resolution video sequence, identifying a particular normal resolution video image, and creating a new video that includes a transition from the particular video image to the base image and includes movement in the base image to one or more sub-images. The particular video image depicts a portion of the base image.
The present invention can be accomplished using hardware, software, or a combination of both hardware and software. The software used for the present invention is stored on one or more processor readable storage media including hard disk drives, CD-ROMs, DVDs, optical disks, floppy disks, tape drives, RAM, ROM or other suitable storage devices. The software can be used to program one or more processors to perform the various methods described herein. The one or more processors are in communication with the storage media and one or more communication interfaces. In alternative embodiments, some or all of the software can be replaced by dedicated hardware including custom integrated circuits, gate arrays, FPGAs, PLDs, and special purpose computers.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter.
A video effect is created that provides an experience to a viewer of freezing time during an event that is the subject of a video presentation, investigating the event during that frozen moment in time, and (optionally) resuming the action of the event. During that frozen moment in time, the video can move around the scene of the event and/or zoom in (or out) to better highlight an aspect of the event.
One embodiment of the system will use a high resolution digital camera co-located with a broadcast video camera. The digital camera will be locked down within 5 feet of the broadcast camera (mounted to a rail, a wall, or the broadcast camera itself) so that it sees the entire (or a larger portion of the) field of play. The digital camera will have an associated computer located a short distance from it (at the broadcast camera position). From this location the video/image/data will be transmitted to a production center (e.g., a production truck) over fiber, coax or another transmission medium. The production center will house a work area for an operator, one computing system, and a storage system.
In one embodiment, when something interesting occurs during an event, the operator will push a button (or otherwise actuate a different input device), causing the system to “freeze” its buffer of base images as well as the video buffer of video images (high definition or standard definition) from the associated broadcast camera. At this point the operator will use the user interface to select one of the base images from the digital camera for the moment in time to be frozen. Upon selection of the base image, the frame of broadcast video closest in time will be identified from the video buffer. This will be the break point in the video stream from which the video effect will transition into the frozen world (and, optionally, back).
To build the video effect, the operator will identify the location and framing of one or more points of interest (sub-images) in that base image, thereby, creating a script for the video effect. The script identifies sub-images in the base image, the order of the sub-images, and the zoom level to show each sub-image. Once the script is created, the processing system will create/render the new video and save it for broadcast or other purposes. The newly created video will include the broadcast video that leads into the frozen shot (a predetermined lead-in duration) followed by a transition to the corresponding framing in the base image, followed by a smooth move to each of the sub-images identified by the operator (with a preset pause at each one), and finally a smooth transition back to the original framing (where the video will transition back to the broadcast video to resume action).
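As an illustrative sketch (not part of the original disclosure, with class and field names invented for illustration), the script described above can be represented as an ordered list of sub-image framings plus the framing that matches the broadcast frame:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SubImage:
    """One point of interest (sub-image) in the base image."""
    center: Tuple[float, float]   # center of the framing, in base-image pixels
    zoom: float                   # zoom level for this framing
    pause_seconds: float = 1.0    # preset pause at this point of interest

@dataclass
class EffectScript:
    """Script for the video effect: base image, framings, and order."""
    base_image_id: str            # which frozen base image to use
    broadcast_framing: SubImage   # framing matching the broadcast frame
    sub_images: List[SubImage] = field(default_factory=list)

    def playback_order(self) -> List[SubImage]:
        # The effect enters at the broadcast framing, visits each
        # sub-image in order, then returns to the broadcast framing
        # so the video can transition back to the live broadcast.
        return ([self.broadcast_framing]
                + self.sub_images
                + [self.broadcast_framing])
```

The renderer would then interpolate smooth moves between consecutive framings in `playback_order()` and insert the preset pauses.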
Camera location 10 includes a digital still camera 30 and a broadcast video camera 32. Digital still camera 30 is in communication with camera interface 34. In some embodiments, digital still camera 30 is mounted on the lens of (or otherwise attached to) broadcast video camera 32. In other embodiments, digital still camera 30 can be located near video camera 32. For example, digital camera 30 can be rigidly mounted (i.e., not able to pan or tilt) on a wall, platform, handrail, etc. very close to broadcast video camera 32. One example of a suitable digital still camera is the Canon 1DsMKII. However, other digital still cameras can also be used. In one embodiment, each of the images captured by digital still camera 30 comprises eight megapixels. In other embodiments, the digital pictures can be sixteen megapixels or a different resolution. Broadcast camera 32 can be any suitable broadcast camera known in the art.
In other embodiments, instead of using a digital still camera, camera 30 can be a digital video camera which outputs video images at a fixed interval. In yet another embodiment, camera 30 can be another type of sensor that can provide electronic images to computer 40. In one example, camera 30 can be an analog broadcast camera and camera interface 34 can convert the analog video signals to digital video images or other types of electronic images.
In other embodiments, instead of using a digital still camera, camera 30 can be omitted and camera 32 can be a very high resolution camera connected to camera interface 34 (as well as to computer 40). In some embodiments, camera interface 34 has large recording capability.
Production center 20 includes a computer 40. In one embodiment, computer 40 includes dual 2.8 GHz P4 Xeon processors in communication with 4 GB of RAM, an NVIDIA Quadro FX 4000 video card, an Aja HD video card (in other embodiments, an SD video card can be used in addition to or instead of the HD card) and an Ethernet card. In another embodiment, computer 40 can include a Super Micro PD5GE motherboard, 4 GB DDR2 RAM, Intel PD840 3.2 GHz processor, NVIDIA Quadro FX 4000, Aja Xena HD card, 250 GB hard drive, Ethernet card and Lite-On 16× DVD+RW. Other computer configurations can also be used. Broadcast video camera 32 sends its images to the video card (or other interface) on computer 40. Computer 40 stores the received video as electronic digital video images. In one embodiment, broadcast video camera 32 sends 30 frames of video per second. In other embodiments, 60 frames per second can be sent or another rate can be used. Each of the frames will be stored in a video buffer on a hard disk of computer 40. In one embodiment, the video buffer holds 15 seconds of HD video or 30 seconds of SD video. Other sizes for the video buffer can also be used.
Computer 40 includes a monitor 42 and keyboard/mouse 44, all of which are used to implement the user interface. Camera 32 is connected to monitor 60 which displays the video received from camera 32. Computer 40 is also connected to a monitor 62 for showing the output video. The output video is also provided to a video recorder 64. From the video recorder, the output video can be provided for broadcasting as part of the television production of the event being captured by broadcast video camera 32.
Camera interface 34 is used to control digital camera 30 and receive images from digital camera 30. Camera interface 34 sends the digital images to computer 40 via either a fiber data line 46 or a coaxial data line 48 (or other transmission media such as wireless, parallel twisted pairs, etc.). In some embodiments, both the fiber and the coaxial lines will be used. In other embodiments, only one of the lines will be used. The fiber optic line from camera interface 34 is connected to a converter 50, which converts fiber data to Ethernet and provides the information to an Ethernet card for computer 40. Data sent on coaxial line 48 is sent through a filter (e.g., a Humbucker filter) which filters out 60-cycle noise. The output of the filter is sent to a cable modem which provides the data to the Ethernet card for computer 40.
In another embodiment, an additional computer can be used at the production center 20. This additional computer will act as a video server, receiving the video from camera 32 and storing all of the video for an event on a hard disk drive or other storage medium. This video server will synchronize its time with computer 40 via an Ethernet connection. Each frame (or field or other unit) of video stored will be provided with a time stamp by the video server. Similarly, computer 40 will add the offset between computer 40 and the video server to the data for the still images corresponding in time. When a video effect is being created, as described below, computer 40 will access the necessary frames of video from the video server. In such a case, there will not be the same time limitation for deciding whether to create a video effect as described below.
I/O interface 120 communicates with computer 102 via a USB port on computer 102. In one embodiment, I/O interface 120 is an OnTrak ADU200. I/O interface 120 is connected to the flash sync connection for camera 30 and the power connection for camera 30 so that computer 102, via I/O interface 120, can turn the camera on and off, provide power to the camera, and detect when the flash is commanded on the camera. In one embodiment, every time the camera takes a picture the flash is used (or commanded). In another embodiment, rather than detecting flash commands, the camera will provide an indication that the shutter opened and closed, or another indication that a picture was taken. That indication is provided to I/O interface 120 and communicated to computer 102. When computer 102 receives an indication that a picture was taken, computer 102 will note the time the picture was taken. Computer 102 will then wait for the camera to send the image via the FireWire connection to computer 102. Computer 102 will package the image, a timestamp indicating the time the image was taken, and the size of the image into a data structure. In one embodiment, the image is stored as a .jpg file. In other embodiments, other file formats can be used. That data structure is then communicated to computer 40 via either fiber line 46 or coaxial line 48 (or other means).
Computer 40 will receive the digital still images from camera 30 and the video images from camera 32. At any point during an event, a user may decide that something interesting has happened (or is about to happen) and request the system to create a video effect. At that time, the buffers storing the digital images and the video image will be frozen, the user will choose a digital still image as the base image for the effect, computer 40 will determine which video image most nearly corresponds to the chosen base image, and the user will be provided with the opportunity to create a script for the video effect. In response to the script, computer 40 will create the video effect and build a new video which includes the video effect, a transition from the video stream from camera 32 to the video effect, and a transition back from the video effect to the camera stream. The created video effect includes video depicting virtual camera movement within the frozen base image.
In one embodiment, computer 102 of camera interface 34 and computer 40 at production center 20 will synchronize their clocks so the timestamps added to the images will be relevant to both computers. In one embodiment, computer 102 will send its time to computer 40 five times a second (or at another interval). Each instance that computer 102 sends its time to computer 40, computer 40 will note the offset between the time from computer 102 and the time on the clock local for computer 40. Thus, computer 40 will maintain an offset between the two different clocks. In other embodiments, other intervals or methods can be used for synchronizing the two clocks.
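A minimal sketch of this offset bookkeeping, assuming the simple report-and-offset scheme described above (class and method names are illustrative, not from the source):

```python
import time

class ClockOffsetTracker:
    """Maintains the offset between a remote computer's clock and the
    local clock so timestamps from either machine can be compared."""

    def __init__(self, now=time.time):
        self._now = now        # injectable clock source, for testing
        self.offset = 0.0      # remote_time - local_time

    def on_remote_time(self, remote_time):
        # Called each time the remote computer reports its clock
        # (e.g., five times per second in the embodiment above).
        self.offset = remote_time - self._now()

    def to_local(self, remote_timestamp):
        # Convert a timestamp taken on the remote clock (e.g., a still
        # image's timestamp) to the local clock's timeline.
        return remote_timestamp - self.offset
```

Computer 40 would call `to_local()` on each still image's timestamp before searching the video buffer for the closest broadcast frame.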
As described above, computer 40 will receive video images from camera 32 and still images from camera interface 34.
Computer 40 includes a user interface which provides various controls for the operator as well as various video outputs. In one embodiment, the user interface will display the video from broadcast camera 32 and a list of still images received from camera 30. Each still image received from camera 30 is identified in the user interface by the name of its file, the time of the image, the amount of video in the video buffer with timestamps before the timestamp for the still image, and the amount of video in the video buffer with timestamps after the timestamp for the still image. Once the video buffer is updated so that there is no video in the video buffer from before the timestamp for a still image, that still image cannot be used as a base image for creating the video effect (in one embodiment); therefore, that still image can then be removed from the user interface. In some embodiments, all broadcast video frames can be stored and thus still images will not be removed from the user interface. A user can click on any one of the file names in the list of still images and the user interface will display that still image.
The user interface includes a set of controls to manage various parameters of camera 30. The user interface also includes a set of controls to manage various parameters for creating the videos. For example, there are parameters that indicate the length of the leader video, the length of the trailer video, the maximum amount of zoom in the video effect, the mid-point zoom between sub-images, the pan, tilt, and zoom rates of the video effect between sub-images, and the pause time at each of the sub-images. Other controls and parameters can also be included. Most of these parameters are defaults, minimums, maximums, etc. The video effect can accelerate and decelerate up to the maximum pan, tilt, and zoom rates (perhaps never reaching them). The goal is to make the moves look natural: the effect should emulate a camera operator and give the viewer context.
The user interface also includes a set of controls that the operator uses to create a video effect. For example, there is a “Freeze” button. When an operator sees an interesting scene in the event for which the operator wants to create the video effect, the operator will push the “Freeze” button. In response to the “Freeze” button, the computer 40 will freeze the video buffer so that no more video frames will be added to the video buffer or removed from the video buffer. In some embodiments, the system will continue to collect video until the trailing buffer is filled. The operator will then be provided with the ability to create a script. After a script is created, the user can preview the video effect and, if satisfied, the user can instruct computer 40 to create the final output video.
As described above, digital camera 30 is co-located with broadcast camera 32 so that they will capture images of the same scene/event; however, the still camera will typically capture a much larger field of view of the scene/event. In one embodiment, digital camera 30 has a higher resolution than broadcast camera 32 and has a much wider field of view. In those embodiments the image from broadcast camera 32 is likely to represent a portion of the image received from digital camera 30. In many situations, the image from broadcast camera 32 will entirely fit inside the image from digital camera 30. In some embodiments, the image from broadcast camera 32 will be similar to, or appear the same as, the image from digital camera 30. It is not likely (although possible) that they are exactly the same on a pixel by pixel basis. In some instances, the image from the broadcast camera will partially overlap with the image from the digital camera and part of the image from the broadcast camera will not be depicted in the image from the digital camera. In any of these cases, the image from broadcast camera 32 still depicts a portion of the image from digital camera 30.
In step 318, computer 40 will map the video image to the base image. As described above, in one embodiment the video image depicts a portion of the base image. Note that the video image and the portion of the base image may not match on a pixel by pixel basis due to differences in alignment, distortion, parallax, resolution, etc. Step 318 includes determining a mapping between the video image and a portion of the base image.
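One simple way to represent such a mapping, ignoring the distortion and parallax noted above, is a scale-and-offset transform from video-image coordinates to base-image coordinates. This is an illustrative sketch with assumed names, not the disclosed implementation, which could equally use a full homography:

```python
class FrameMapping:
    """Maps pixel coordinates in the video image to the base image,
    modeled as a uniform scale plus an offset."""

    def __init__(self, offset_x: float, offset_y: float, scale: float):
        self.offset_x = offset_x  # base-image x of the video frame's origin
        self.offset_y = offset_y  # base-image y of the video frame's origin
        self.scale = scale        # base-image pixels per video-image pixel

    def to_base(self, x: float, y: float):
        """Map one video-image pixel into base-image coordinates."""
        return (self.offset_x + x * self.scale,
                self.offset_y + y * self.scale)

    def video_rect_in_base(self, width: int, height: int):
        """Rectangle of the base image covered by the full video frame."""
        x0, y0 = self.to_base(0, 0)
        x1, y1 = self.to_base(width, height)
        return (x0, y0, x1, y1)
```

The rectangle returned by `video_rect_in_base()` would serve as the starting and ending framing when the effect transitions into and out of the frozen base image.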
In step 320, a color correction process is performed. Computer 40 attempts to match the color balance in the video image and the base image. In one embodiment, the system will create a histogram (or data representing a histogram, since an actual histogram is not graphically plotted) of all the colors used in the video image and a histogram of all the colors used in the region of the still image that maps to the video image. The histograms plot color ranges versus number of pixels. The two histograms will be compared, and one of the images will have its colors re-mapped so that the histograms more closely agree. In other embodiments, other means for performing color correction can be used. The object of the color correction is for the images to look as similar as possible. While the images may be of the same scene, the two cameras (30 and 32) may capture images with slight color variations. Thus, step 320 is used to make the images look as similar as possible so the transition between images when building the video will be less perceptible.
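The histogram comparison and re-mapping can be sketched for a single channel as classic histogram matching via cumulative distributions. Names are illustrative, and the disclosed system may use a different re-mapping method:

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall at each intensity level."""
    h = [0] * levels
    for p in pixels:
        h[p] += 1
    return h

def cdf(hist):
    """Cumulative distribution of a histogram, normalized to [0, 1]."""
    total = sum(hist)
    out, running = [], 0
    for count in hist:
        running += count
        out.append(running / total)
    return out

def match_channel(source, reference, levels=256):
    """Remap source pixel values so their distribution tracks reference."""
    src_cdf = cdf(histogram(source, levels))
    ref_cdf = cdf(histogram(reference, levels))
    # For each source level, find the reference level whose cumulative
    # probability first reaches the source level's cumulative probability.
    lut, j = [], 0
    for i in range(levels):
        while j < levels - 1 and ref_cdf[j] < src_cdf[i]:
            j += 1
        lut.append(j)
    return [lut[p] for p in source]
```

Applied per color channel, this pulls the two images' color balances together so the transition between them is less perceptible.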
In step 322, the user can create a script moving (and sizing) the field of view (FOV) within the base image. In some embodiments, the user can also rotate, warp, or change the FOV in other ways. In one embodiment, the user will choose a set of sub-images within the base image, the order in which the sub-images will be presented, and the zoom level for each sub-image (as well as pan-tilt-zoom rates, pause times, etc.). This information will define the script. In step 324, computer 40 will build the video effect based on the script. In step 326, computer 40 will build a video which includes the video effect, leader video, trailer video and (optionally) transition effects, all of which are discussed below. Note that the leader and trailer are optional. In step 328, the video created in step 326 is stored. In step 330, that stored video can be played for the user, recorded, or broadcast.
In another embodiment, the distance between center of frames is spaced along a spline curve such that the apparent pan and tilt change in a manner that tries to mimic natural camera movement. That is, a smooth camera movement is simulated using a function of time to pixels and field of view, and that function can be a spline curve.
In one embodiment, the camera motion through the base image is defined as follows:
- 1. Given 2 defined sub-frames K0 and K1;
- 2. Determine center intermediate frame KC;
- 3. KC pos=½*(K0 pos+K1 pos) or evaluate cubic Bezier spline;
- 4. KC Width determined as follows:
- Factor: user defined
- Dist: distance between K0 pos and K1 pos
- MidW=½*(K0 Width+K1 Width)
- Ratio=Factor*Dist/MidW
- KC Width=Ratio*MidW
- 5. Build 2 cubic Bezier splines: one to smooth/interpolate center points and one to smoothly vary width.
In one embodiment, when computing the size of the intermediate frames between sub-frames, the system tests the computed “Ratio” against a user configurable number (e.g., 1.3 or another value appropriate for the particular implementation). If the “ratio” is found to be less than this threshold, then the system does not apply the ratio (or defines it to be 1.0). One of the effects of this is that it keeps the system from artificially zooming out when the move between sub-frames is primarily a zoom to begin with.
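The computation of the center intermediate frame KC in steps 1-5, together with the ratio test just described, can be sketched as follows. The cubic Bezier smoothing of step 5 is omitted; "factor" and the threshold are the user-configurable values, and the function name is illustrative:

```python
import math

def center_frame(k0_pos, k0_width, k1_pos, k1_width,
                 factor=1.0, ratio_threshold=1.3):
    """Compute the center intermediate frame KC between sub-frames K0, K1.
    Positions are (x, y) centers in base-image pixels; widths are FOV widths."""
    # Step 3: midpoint of the two sub-frame centers (spline evaluation
    # is the alternative noted in the text).
    kc_pos = ((k0_pos[0] + k1_pos[0]) / 2.0,
              (k0_pos[1] + k1_pos[1]) / 2.0)
    # Step 4: width grows with the distance of the move, so long pans
    # zoom out at the midpoint for context.
    dist = math.hypot(k1_pos[0] - k0_pos[0], k1_pos[1] - k0_pos[1])
    mid_w = (k0_width + k1_width) / 2.0
    ratio = factor * dist / mid_w
    # Ratio test: when the move is primarily a zoom (small distance),
    # do not artificially zoom out at the midpoint.
    if ratio < ratio_threshold:
        ratio = 1.0
    kc_width = ratio * mid_w
    return kc_pos, kc_width
```

Step 5 would then fit one cubic Bezier spline through the centers (K0, KC, K1) and another through the widths to produce the intervening frames.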
The process for creating the script may be accomplished using a pointing device and selecting sub-frames, defining a curve, or using other means of describing the general path, timing, sizing etc. for the FOV as it traverses the base image. In one embodiment, automation could be implemented to produce a script that mimics the actions of a live camera operator. This might be accomplished by computing acceleration and deceleration parameters related to the translation of the field of view, and/or by “zooming” out and then back in while translating between two sub-frames. Such motions offer greater perspective of the move and the event for the viewer, and feel more natural as they mimic the moves that are typical of a professional camera operator.
In step 610, the intervening images are rendered based on the centers of frames, widths and orientations. In step 612, the system determines whether there are any more pairs of sub-images to consider. If not, the process is completed and the video effect is saved in step 616. If there are more pairs of sub-images to consider, then the next pair of sub-images is accessed in step 614 and the process loops back to step 602. After the video image and the first sub-image are considered, the first sub-image and the second sub-image are accessed, and so on. In some embodiments, there may be only one sub-image in addition to the video image. In other embodiments, there can be many sub-images.
In one implementation, the video effect includes the exact sub-images identified by the user. In other implementations, the video effect can use sub-images that are close to, but not exactly the same as, the sub-images chosen by the operator.
In one embodiment, the video effect can be created in real time by an operator. In such an embodiment, there would be no need for a script.
Another embodiment of a video created using the technology described herein is depicted in the figures.
In one embodiment, camera interface 34 will send thumbnails of the images prior to sending the actual digital images. This way, the user interface can display the thumbnails before receiving images to give the user an earlier look at potential video effects to build.
In one embodiment, a digital video camera can be used to implement camera 30 instead of a digital still camera. In one embodiment, a high resolution digital video camera can be used to replace both digital still camera 30 and broadcast camera 32. The digital video camera would have a very wide field of view. A portion of that field of view (a standard definition image or high definition image) is cut out and used for broadcast. The portion that is cut out can be used as the video images described above. Note that the portion cut out can be any of various numbers of pixels and then mapped to the desired format. The original “complete” image from the digital video camera will act as the base image described above. Because the broadcast image is cut out from the base image, the matching step is already performed.
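Because the broadcast frame in this embodiment is a known rectangle of the full high-resolution frame, the cut-out itself can be sketched as a simple crop (illustrative only; a real system would also convert the crop to the desired broadcast format). Here an image is represented as a nested list of pixel rows:

```python
def cut_out(full_frame, x, y, width, height):
    """Extract the broadcast sub-image from the full-resolution frame.

    full_frame: image as a list of rows, each a list of pixel values.
    (x, y): top-left corner of the cut-out in full-frame coordinates.
    """
    # Because the cut-out rectangle is known, the mapping from the
    # broadcast (video) image back to the base image is already defined.
    return [row[x:x + width] for row in full_frame[y:y + height]]
```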
In another embodiment, instead of using a high resolution digital still camera, multiple digital still cameras can be used and those images can be stitched together to create a high resolution still image for the base image.
The video effect described herein allows a user to see portions of the base image that are not visible in the video image. In addition, because the base image is at a higher resolution than the video image, the video effect can zoom in on the base image with much greater clarity than it could on the video image (this may also be true because the system can use a faster shutter speed on the still camera).
In one embodiment, the video effect described above is created during a live event, but not in real time. That is, the video effect is created after the action happens but is broadcasted during the live event.
The foregoing detailed description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.
Claims
1. A method of creating a video effect, comprising:
- identifying a particular video image and a corresponding base image, said particular video image depicts a portion of said base image; and
- creating a new video that includes a transition from said particular video image to said base image and includes movement in said base image.
2. A method according to claim 1, further comprising:
- accessing an identification of a camera path in said base image, said movement in said base image is indicated by said camera path.
3. A method according to claim 1, wherein:
- said particular video image depicts a first event captured from a first camera;
- said base image is a digital still image that depicts said first event and is captured from a second camera;
- said base image and said particular video image are not exactly identical; and
- said base image and said particular video image are in an electronic format.
4. A method according to claim 1, wherein:
- said transition includes a camera shutter effect.
5. A method according to claim 1, wherein:
- said transition does not include any special effects.
6. A method according to claim 1, wherein:
- said new video includes a transition from said base image to said particular video image.
7. A method according to claim 1, wherein:
- said identifying said particular video image includes choosing said particular video image from a group of video images; and
- said new video includes a transition from said base image to one of said group of video images.
8. A method according to claim 7, wherein:
- said new video includes leader video and trailer video;
- said leader video includes said particular video image and one or more other video images of said group of video images; and
- said trailer video includes said particular video image and one or more additional video images of said group of video images.
9. A method according to claim 1, wherein:
- said particular video image depicts a first event captured from a first camera; and
- said base image depicts said first event captured from a second camera.
10. A method according to claim 1, further comprising:
- identifying one or more sub-images in said base image, said new video includes movement in said base image to said one or more sub-images.
11. A method according to claim 10, wherein:
- said movement in said base image includes movement from a first sub-image to a second sub-image while changing an appearance of a zoom level.
12. A method according to claim 10, wherein:
- said identifying one or more sub-images in said base image includes creating a script (or other description) for said new video; and
- said script identifies said sub-images, zoom levels for said sub-images and an order for said sub-images.
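The script recited in claims 2 and 12 can be illustrated with a minimal data structure. This is only a sketch of one way such a script might record sub-images, zoom levels, and an order; the names (`SubImage`, `CameraScript`) are hypothetical and not part of the claims.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubImage:
    """A rectangular region of interest within the base image."""
    x: int          # left edge, in base-image pixels
    y: int          # top edge, in base-image pixels
    width: int      # region width; height follows from the output aspect ratio
    zoom: float     # apparent zoom level when this region is shown

@dataclass
class CameraScript:
    """Ordered list of sub-images defining a camera path through the base image."""
    stops: List[SubImage] = field(default_factory=list)

    def add_stop(self, x, y, width, zoom):
        self.stops.append(SubImage(x, y, width, zoom))

    def order(self):
        """The order in which sub-images are visited is simply list order."""
        return list(range(len(self.stops)))

# Example: move from a wide view of the play to a tight view of one player.
script = CameraScript()
script.add_stop(0, 0, 4000, 1.0)       # full-width view of the base image
script.add_stop(2500, 800, 800, 5.0)   # tight sub-image on a downfield player
```

The list order doubles as the visit order, satisfying the "order for said sub-images" element without a separate field.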
13. A method according to claim 10, wherein:
- said method further includes mapping said particular video image to said portion of said base image;
- said identifying one or more sub-images in said base image includes creating a script for said new video, said script identifies said sub-images and an order for said sub-images;
- said particular video image is an electronic image depicting a first event captured from a first camera;
- said base image is an electronic image, not identical to said particular video image, depicting said first event captured from a second camera.
14. A method according to claim 1, wherein:
- said particular video image depicts a first event;
- said base image depicts said first event; and
- said movement in said base image includes depicting a portion of said first event not visible in said particular video image.
15. A method according to claim 1, wherein:
- said movement in said base image includes zooming in on said portion of said base image.
16. A method according to claim 1, wherein said identifying said particular image and said corresponding base image comprises:
- receiving a plurality of images and time codes for said plurality of images, said plurality of images includes said base image, said time codes include a time code corresponding to said base image;
- receiving said group of video images and storing said group of video images in a data store;
- receiving a selection of said base image; and
- using said time code corresponding to said base image to identify said particular video image.
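One way to read claim 16 is that each base (still) image arrives tagged with a time code, and the stored video frame closest in time to the selected still is chosen as the particular video image. A minimal sketch under that reading, with hypothetical frame records:

```python
def find_matching_frame(video_frames, base_time_code):
    """Pick the stored video frame whose time code is closest to the
    time code of the selected base (still) image."""
    return min(video_frames, key=lambda f: abs(f["time"] - base_time_code))

# Hypothetical data store of video frames keyed by time code (seconds):
# three seconds of 30 fps video starting at t = 10.0 s.
frames = [{"id": i, "time": 10.0 + i / 30.0} for i in range(90)]
match = find_matching_frame(frames, base_time_code=11.51)  # nearest frame is at 11.5 s
```

A real implementation would index the data store by time rather than scan it, but the nearest-time-code selection is the point being illustrated.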
17. A method according to claim 1, further comprising:
- mapping said particular video image to said portion of said base image.
18. A method according to claim 17, further comprising:
- correcting color on at least one of said base image or said particular video image to increase similarity between said portion of said base image and said particular video image.
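The color correction of claim 18 could be as simple as matching per-channel statistics between the video image and the corresponding portion of the base image. A sketch of one standard technique (mean/standard-deviation matching), offered as an assumption rather than the method actually used:

```python
def match_channel(src, ref_mean, ref_std):
    """Shift and scale one color channel of `src` (a flat list of values)
    so its mean and spread match the reference statistics."""
    n = len(src)
    mean = sum(src) / n
    var = sum((v - mean) ** 2 for v in src) / n
    std = var ** 0.5 or 1.0   # guard against a flat (zero-variance) channel
    return [ref_mean + (v - mean) * (ref_std / std) for v in src]

# Example: pull a dark video channel toward the base image's statistics.
video_channel = [10, 20, 30, 40]                        # mean 25, std ~11.18
corrected = match_channel(video_channel, ref_mean=100, ref_std=22.36)
```

Running each of the three color channels through such a mapping increases the similarity between the two images, which is the stated goal of the claim.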
19. A method of creating a video effect, comprising:
- identifying a portion of a base image that corresponds to a particular video image;
- identifying a first sub-image in said base image; and
- creating a video that transitions from said particular video image to said portion of said base image and includes movement in said base image from said portion of said base image to said first sub-image.
20. A method according to claim 19, wherein:
- said portion of said base image that corresponds to said particular video image includes less than all of said base image.
21. A method according to claim 19, wherein:
- said particular video image depicts a first event captured from a first camera;
- said base image depicts said first event captured from a second camera;
- said video includes leader video and trailer video;
- said leader video includes said particular video image and one or more other video images of said first event captured from said first camera; and
- said trailer video includes said particular video image and one or more additional video images of said first event captured from said first camera.
22. A method according to claim 19, wherein:
- said particular video image depicts a first event captured from a first camera;
- said base image depicts said first event captured from a second camera; and
- said base image is a digital image.
23. A method according to claim 19, wherein:
- said particular video image depicts a first event;
- said base image depicts said first event;
- said movement in said base image includes depicting a portion of said first event not visible in said particular video image.
24. A method according to claim 19, wherein:
- said identifying a first sub-image includes creating a script;
- said script identifies said first sub-image and one or more additional sub-images;
- said script identifies zoom levels for said sub-images and an order for said sub-images; and
- said creating a video is performed according to said script.
25. A method of creating a video effect, comprising:
- receiving a first image captured from a first sensor, said first image depicts a first event;
- receiving a base image captured from a second sensor, said base image depicts said first event, said first image fits within said base image; and
- creating a video that transitions from said first image to said base image.
26. A method according to claim 25, further comprising:
- identifying a sub-image in said base image, said video includes movement in said base image to said sub-image.
27. A method according to claim 25, wherein:
- said video includes movement in said base image.
28. A method according to claim 25, wherein:
- said first image is a video image; and
- said base image is a digital still image.
29. A method according to claim 25, wherein:
- said receiving a first image includes receiving many video images;
- said creating a video includes choosing said first image from said many video images and mapping said first image to a portion of said base image; and
- said video transitions from said first image to said portion of said base image.
30. A method of creating a video effect, comprising:
- receiving an electronic base image;
- receiving an indication of a set of two or more target images in said electronic base image;
- creating a set of electronic intervening images in said electronic base image that are between said target images; and
- creating a video from said target images and said intervening images.
31. A method according to claim 30, wherein:
- said creating said set of electronic intervening images is performed automatically.
32. A method according to claim 30, wherein:
- said receiving said indication of said set of two or more target images includes creating a script;
- said script identifies an order of said target images; and
- said video is created according to said script.
33. A method according to claim 30, wherein:
- said intervening images vary in zoom appearance.
34. A method according to claim 30, further comprising:
- receiving one or more indications of width of said target images; and
- varying widths of said intervening images based on said one or more indications of width of said target images.
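Claims 30 through 34 describe generating intervening images between target regions of the base image, with widths (and hence apparent zoom) varying along the way. A minimal sketch that linearly interpolates crop rectangles between two targets; the `(x, y, width)` rectangle format is a hypothetical convention, not from the claims:

```python
def intervening_rects(start, end, steps):
    """Linearly interpolate (x, y, width) crop rectangles between two
    target regions of the base image, producing `steps` in-between crops.
    Varying the width gives the appearance of a changing zoom level."""
    rects = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)   # fractional position between the two targets
        rects.append(tuple(round(a + (b - a) * t) for a, b in zip(start, end)))
    return rects

# Move from a wide crop to a tight crop via 3 intervening images.
between = intervening_rects(start=(0, 0, 4000), end=(2000, 1000, 1000), steps=3)
```

Because the function needs only the two target rectangles and a step count, the intervening images can be created automatically, as claim 31 recites; easing curves could replace the linear ramp without changing the structure.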
35. A system for creating a video effect, comprising:
- a communication interface, said communication interface is in communication with one or more sources of images;
- a storage system; and
- one or more processors in communication with said storage system and said communication interface, said one or more processors receive base images, receive video images, identify a particular video image and corresponding base image, match said particular video image to a portion of said corresponding base image, and create a video that transitions from said particular video image to said portion of said corresponding base image and includes movement in said corresponding base image.
36. A system according to claim 35, further comprising:
- a video camera in communication with said communication interface;
- a camera interface in communication with said communication interface;
- a digital camera in communication with said camera interface, said video camera and said digital camera are said sources of images, said camera interface controls said digital camera and receives base images from said digital camera, said camera interface packages said base images with time data and sends said base images to said communication interface; and
- a trigger in communication with said digital camera.
37. A system according to claim 35, wherein:
- said identifying said particular video image and corresponding base image includes receiving an indication of said corresponding base image and identifying said particular video image based on timing data associated with said corresponding base image.
38. A system according to claim 35, wherein:
- said one or more processors receive an indication of a first sub-image in said corresponding base image; and
- said movement in said corresponding base image includes movement toward said first sub-image.
39. A system according to claim 35, wherein:
- said one or more processors receive an indication of a set of sub-images in said corresponding base image; and
- said movement in said corresponding base image includes movement between said sub-images.
40. A system according to claim 35, wherein:
- said one or more processors receive an indication of a set of sub-images in said corresponding base image and zoom levels for said sub-images; and
- said movement in said corresponding base image includes movement between said sub-images with varying zoom levels during said movement, said varying zoom levels are based on said zoom levels for said sub-images.
41. A system according to claim 35, wherein:
- said particular video image depicts a first event;
- said particular video image is captured from a first sensor;
- said corresponding base image depicts said first event; and
- said corresponding base image is captured from a second sensor.
42. A system according to claim 41, wherein:
- said one or more processors receive an indication of a first sub-image in said corresponding base image;
- said first sub-image depicts a portion of said first event not depicted in said particular video image; and
- said movement in said corresponding base image includes movement to said first sub-image.
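Claims 35 through 42 describe the effect at the system level: the processors receive video and base images, identify the matching pair by time code, and assemble a new video with transitions and movement. A hypothetical end-to-end sketch of those processor steps; all names and the frame-tuple format are illustrative only:

```python
def build_effect_video(video_images, selected_time, camera_path):
    """Sketch of the claim 35 pipeline: pick the video frame nearest the
    base image's time code, then emit a frame sequence that transitions to
    the still and moves along a camera path of sub-image crop regions."""
    # Identify the particular video image by time code (as in claim 37).
    particular = min(video_images, key=lambda f: abs(f["time"] - selected_time))
    # Assemble the new video: leader, transition in, movement, transition out, trailer.
    frames = [("video", particular["id"])]
    frames.append(("transition", "video->still"))
    frames.extend(("still", region) for region in camera_path)
    frames.append(("transition", "still->video"))
    frames.append(("video", particular["id"]))
    return frames

clip = build_effect_video(
    [{"id": i, "time": i / 30.0} for i in range(60)],  # 2 s of 30 fps video
    selected_time=1.0,
    camera_path=[(0, 0, 4000), (1500, 750, 1750)],      # wide, then tight crop
)
```

A real system would render pixels for each entry rather than tuples, but the sequence mirrors the claimed structure: transition from video to still, movement within the still, and return to video.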
Type: Application
Filed: Aug 25, 2006
Publication Date: Feb 28, 2008
Patent Grant number: 8456526
Applicant: SPORTVISION, INC. (Chicago, IL)
Inventors: James R. Gloudemans (San Mateo, CA), Walter Hsiao (Mountain View, CA), John LaCroix (Oakland, CA), Richard H. Cavallaro (Mountain View, CA), Marvin S. White (San Carlos, CA)
Application Number: 11/467,467
International Classification: H04N 5/262 (20060101);