IMMERSIVE CAPTURE AND REVIEW
In an embodiment, a system includes an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view, and the camera module is configured to comprehensively capture a target space. The system further includes a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space. Aspects herein can also relate to methods for capturing immersions, systems and methods for providing immersions, and systems and methods for viewing and controlling immersions.
This patent application is a continuation of U.S. patent application Ser. No. 15/613,704, filed Jun. 5, 2017, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 62/346,234 filed Jun. 6, 2016, both of which are incorporated herein by reference in their entirety.
TECHNICAL FIELD
The subject innovation generally relates to capturing and providing immersive media experiences. The subject innovation more specifically concerns allowing users to view remote locations in a non-linear and self-driven manner.
BACKGROUND
Video and other media are used to allow entities to view or otherwise experience remote environments. However, this media has generally been limiting in a variety of ways. Moving video images are generally constrained to a linear path as recorded and do not permit substantial user interaction to drive the content. Still frame photographs can be used to provide additional control (e.g., with directional controls to move to an adjacent location) but are also limited to the views taken by the photographer.
SUMMARY
In an embodiment, a system includes an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view, and the camera module is configured to comprehensively capture a target space. The system further includes a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space.
In an embodiment, a system includes an immersive video generation module configured to seamlessly combine a comprehensive capture of a target space to a travelable comprehensive immersion. The immersive video generation module is configured to receive at least one image from each of a plurality of cameras at a first location, continuously stitch the at least one image from each of the plurality of cameras at the first location to produce a first location immersion, receive at least one image from the plurality of cameras at a second location, continuously stitch the at least one image from each of the plurality of cameras at the second location to produce a second location immersion, and continuously stitch the first location immersion and the second location immersion to create a travelable comprehensive immersion.
In an embodiment, a method includes providing an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites. Each of the plurality of cameras includes a partially-overlapping field of view, and the camera module is configured to comprehensively capture a target space. The method also includes providing a chassis operatively coupled with the camera module, the chassis configured to smoothly maneuver the camera module comprehensively through the target space, and recording at least one image from each of the plurality of cameras to record a comprehensive capture of the target space. The method also includes, simultaneously while recording, smoothly maneuvering the camera module through the target space.
In an embodiment, a method includes receiving at least one image from each of a plurality of cameras at a first location, continuously stitching the at least one image from each of the plurality of cameras at the first location to produce a first location immersion, receiving at least one image from the plurality of cameras at a second location, continuously stitching the at least one image from each of the plurality of cameras at the second location to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create a travelable comprehensive immersion.
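The two-stage stitching described above — per-location stitching followed by location-to-location stitching — can be sketched in outline. The following Python sketch is illustrative only, not the claimed implementation: the `stitch_location` and `stitch_immersion` names are hypothetical, and a real system would blend overlapping pixel data (e.g., with a panorama stitcher) rather than merely collect image identifiers.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LocationImmersion:
    location: Tuple[float, float, float]  # (x, y, z) capture position
    panorama: List[str]                   # stitched surround view; real pixel data in practice

def stitch_location(location, images):
    """Combine the per-camera images taken at one location into a single
    surround view (a real implementation would blend the partially
    overlapping fields of view)."""
    return LocationImmersion(location=location, panorama=list(images))

def stitch_immersion(location_immersions):
    """Link per-location immersions, ordered by capture position, into one
    travelable comprehensive immersion."""
    return sorted(location_immersions, key=lambda li: li.location)

first = stitch_location((0.0, 0.0, 1.5), ["cam1.jpg", "cam2.jpg"])
second = stitch_location((1.0, 0.0, 1.5), ["cam1.jpg", "cam2.jpg"])
immersion = stitch_immersion([second, first])
```

The design point carried over from the disclosure is that the immersion is a spatially ordered set of stitched views, not a timeline.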
In an embodiment, a system includes an immersion engine configured to access a travelable comprehensive immersion. The immersion engine controls maneuver and view through the travelable comprehensive immersion based on user input. The system also includes a display configured to display the travelable comprehensive immersion as provided by the immersion engine and a control configured to provide the user input to the immersion engine.
In an embodiment, a method includes receiving a travelable comprehensive immersion, displaying an initial viewer state of the travelable comprehensive immersion, receiving user input related to the travelable comprehensive immersion, and displaying a subsequent viewer state of the travelable comprehensive immersion based on the user input. The subsequent viewer state differs from the initial viewer state in at least one of viewer position or viewer orientation.
These and other embodiments will be described in greater detail below.
The invention may take physical form in certain parts and arrangements of parts, an embodiment of which will be described in detail in the specification and illustrated in the accompanying drawings which form a part hereof, and wherein:
Aspects herein generally relate to systems and methods for comprehensively capturing a target space or environment, as well as displaying or providing comprehensive captures of target spaces or environments. These travelable comprehensive immersions provide a unique experience for each user because they can be explored continuously in three dimensions using control input. They have no start, end, timeline, or path, and are based on actual recorded media of the target space as opposed to a digital model. Direction, movement, speed, elevation, location, viewing angle, and so forth are all placed in the user's hands with no duration or predetermined time element.
As used herein, a target space can be any space or environment, including both indoor and outdoor public or private spaces. A target space is comprehensively captured after a camera module maneuvers the target space while recording. Maneuvering the target space can include movement in all three dimensions, and in various embodiments may include traveling a linear path through the space, traveling multiple paths through the space, traveling a gridded path or series of gridded paths through the space, traveling a curved path or series of curved paths through the space, traveling diagonals of the space, following a human-walked path through the space, et cetera. Maneuvering the target space can include traveling along or near walls or boundaries of the target space, and in some embodiments may then involve logically segmenting the space therein into sections, grids, curves, et cetera, based either on the dimensions of the target space or on predefined intervals. In embodiments, maneuver can include a third, vertical dimension in addition to the area (e.g., floor or ground) covered, and the camera module can be held in a two-dimensional location while multiple vertical views are collected, or the comprehensive maneuver can occur following the same or different two-dimensional paths at different heights. The camera module records photographs or video of the space, either continuously or according to a capture rate/interval, to provide combinable immersive views continuously or at discrete points for the entire maneuver. Comprehensively capturing a target space can also include maneuvering to or around focal points to provide still further views or other enhanced images of items of interest within the space.
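One of the gridded-path maneuvers described above can be illustrated with a simple serpentine (boustrophedon) coverage generator. This Python sketch is a hypothetical simplification — the `grid_path` name, the rectangular floor assumption, and the fixed interval are assumptions for illustration, not the disclosed control logic.

```python
def grid_path(width, depth, interval):
    """Generate a serpentine coverage path over a rectangular floor area,
    visiting capture points spaced by the given interval.
    Returns a list of (x, y) waypoints; alternating rows reverse
    direction so the module never retraces a row."""
    xs = [i * interval for i in range(int(width // interval) + 1)]
    ys = [j * interval for j in range(int(depth // interval) + 1)]
    path = []
    for row, y in enumerate(ys):
        cols = xs if row % 2 == 0 else list(reversed(xs))
        path.extend((x, y) for x in cols)
    return path

# A 3 m x 2 m floor captured at 1 m intervals yields a 4 x 3 grid of waypoints.
waypoints = grid_path(width=3.0, depth=2.0, interval=1.0)
```

The same waypoint list could be repeated at multiple heights to realize the vertical dimension of the maneuver described above.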
As used herein, “smoothly maneuver” means to maneuver in a fashion not substantially subject to bumps, shaking, or other disruption modifying the intended path and orientation of the camera module therethrough. When camera modules are smoothly maneuvered, image quality is improved both in individual views and during stitching of different individual views into adjacent views.
When a target space is comprehensively captured through smooth maneuver, all images can be combined to produce a travelable comprehensive immersion. The travelable comprehensive immersion can be a file or group of files containing images and/or video of the target space combined in a manner that allows viewing of, movement through, and exploration of the target space in a non-linear and non-programmed manner. Because the space is “rebuilt” virtually—the camera module captures surrounding views in a variety of locations—the location and orientation of a viewer using the travelable comprehensive immersion can be modified in a substantially continuous manner, allowing movement to anywhere in the space and different viewing angles at any such point. In embodiments, these capabilities can be subject to a capture rate or interval, where discrete locations (e.g., 1 inch, 6 inches, 1 foot, 3 feet, 6 feet, and any other distance) are captured with interval gaps there between.
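Where capture occurs at discrete locations with interval gaps, one plausible way to serve a continuously moving viewer is to snap the requested position to the nearest recorded point. The sketch below is an assumption about how such a lookup might work (the `nearest_capture` name is hypothetical); the disclosure itself does not prescribe this method.

```python
import math

def nearest_capture(viewer_pos, capture_locations):
    """Snap a requested viewer position to the nearest discretely captured
    location, so the displayed view always comes from a recorded point
    despite the interval gaps between captures."""
    return min(capture_locations, key=lambda p: math.dist(viewer_pos, p))

# Captures spaced at 1-foot intervals along one axis.
captures = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
view_point = nearest_capture((0.8, 0.1), captures)
```

Smaller capture intervals (e.g., 1 inch versus 6 feet) make this snapping less perceptible, which is the trade-off the interval choice controls.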
In the specification and claims, reference will be made to a number of terms that have the following meanings. The singular forms “a”, “an” and “the” include plural referents unless the context clearly dictates otherwise. Approximating language, as used herein throughout the specification and claims, may be applied to modify a quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term such as “about” is not to be limited to the precise value specified. In some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Moreover, unless specifically stated otherwise, a use of the terms “first,” “second,” etc., does not denote an order or importance, but rather the terms “first,” “second,” etc., are used to distinguish one element from another.
As used herein, the terms “may” and “may be” indicate a possibility of an occurrence within a set of circumstances; a possession of a specified property, characteristic or function; and/or qualify another verb by expressing one or more of an ability, capability, or possibility associated with the qualified verb. Accordingly, usage of “may” and “may be” indicates that a modified term is apparently appropriate, capable, or suitable for an indicated capacity, function, or usage, while taking into account that in some circumstances the modified term may sometimes not be appropriate, capable, or suitable. For example, in some circumstances an event or capacity can be expected, while in other circumstances the event or capacity cannot occur—this distinction is captured by the terms “may” and “may be.”
Turning to the figures,
Limitations of the viewing techniques of
In the illustrated embodiment, the camera module includes six cameras, with five mounted to provide a 360-degree panoramic view around the camera module and one mounted atop to allow upward viewing. In embodiments, the cameras may be mounted at angles to modify the field of view. For example, the panoramic series of cameras can include a slight downward tilt to reduce field of view overlap with the sixth camera directed upward, thereby maximizing the amount of unique image data in each immersive image constructed from individual camera images. The camera module(s) illustrated herein are provided for purposes of example only, and do not limit other possible camera module arrangements. In embodiments, other numbers of cameras can be utilized, and camera angles other than those pictured (e.g., downward, between top and side cameras, et cetera) can be employed without departing from the scope or spirit of the innovation.
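The six-camera arrangement described above — five cameras spaced evenly around a 360-degree panorama with a slight downward tilt, plus one aimed upward — can be expressed as a table of yaw/pitch orientations. In this Python sketch the function name and the specific tilt value (-10 degrees) are illustrative assumptions; the disclosure specifies only "a slight downward tilt."

```python
def camera_orientations(n_side=5, tilt_deg=-10.0):
    """Yaw/pitch (in degrees) for a module with n_side cameras spaced
    evenly around a full panorama, each tilted slightly downward to
    reduce overlap with the top camera, plus one camera aimed straight up."""
    yaw_step = 360.0 / n_side
    rig = [(i * yaw_step, tilt_deg) for i in range(n_side)]
    rig.append((0.0, 90.0))  # top camera: pitch +90 = straight up
    return rig

orientations = camera_orientations()
```

With five side cameras the yaw step is 72 degrees, so adjacent fields of view overlap only partially, maximizing unique image data per constructed immersive image.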
The cameras can provide images collected to temporary or persistent storage, or directly to an immersive video generation module for production of an immersive video of the target space. The cameras can utilize any wired or wireless means of communication and/or powering.
As partially shown in
Control of immersive capture vehicles can be manual, automatic, or combinations thereof. Accordingly, the immersive capture vehicle includes at least a vehicle logic module capable of managing maneuver of the immersive capture vehicle (e.g., direction and speed) by controlling its propulsion mechanisms. The vehicle logic module can be operatively coupled or include a communication module (e.g., to send and receive information), storage and/or a general or application-specific processor (e.g., storing data for use controlling movement, calculating paths of movement, modifying vehicle operation, and so forth), sensor modules (e.g., for collecting data about vehicle operation, for collecting data about the environment), and others.
In embodiments where control is automated, the logic module can receive information about a target space before beginning or discover information about the target space (e.g., using the sensor module) before or during comprehensive capture of the target space. Techniques by which the logic module can automatically capture spaces or capture spaces based on user input are discussed further below. In embodiments, a logic module can include a location module, which can utilize one or more location techniques such as a global positioning system, a triangulation technique, or other techniques providing an absolute location, or techniques for discovering a relative location at a distance (e.g., radar, sonar, laser, infrared). Logic can be provided to prevent collisions in the target space while immersive media is being collected.
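For illustration only, the collision-prevention logic mentioned above can be reduced to a point-versus-obstacle proximity check. This is a hypothetical simplification (the `clear_of_obstacles` name and the safety radius are assumptions); an actual vehicle logic module would reason over sensor-derived space geometry.

```python
import math

def clear_of_obstacles(point, obstacles, radius=0.5):
    """Return True if no sensed obstacle lies within the safety radius of
    a candidate capture point; the path module could drop or reroute
    around points that fail this check."""
    return all(math.dist(point, ob) > radius for ob in obstacles)

# An obstacle 0.2 units away makes this candidate point unusable.
safe = clear_of_obstacles((1.0, 1.0), [(3.0, 3.0), (1.2, 1.0)])
```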
In an embodiment, an immersive capture vehicle can be a robot. In an embodiment, an immersive capture vehicle can be a self-balancing automated device.
Physical interfaces can include various aspects to improve ergonomics. For example, the physical interface and/or chassis can be pivot-able, extended or retracted, or otherwise adjustable to provide for ergonomic carriage facilitating smooth maneuver of the chassis and camera module. Where a person walks the system, smooth maneuver may or may not include substantially level or stable maneuver of the camera module, but may instead mimic human motion for a walking experience when viewed. Alternatively, a person can stabilize the human interface but be conveyed on another vehicle (e.g., rolling chair as in
As will be appreciated, the arrangements illustrated in
In particular embodiments such as those of
Embodiments such as that of, e.g.,
When displaying the immersion, a travelable comprehensive immersion can be received (e.g., from storage and/or an immersive video generation module). An initial viewer state of the travelable comprehensive immersion is displayed (e.g., entryway, initial location programmed into immersion, initial location selected by user). User input can then be received related to the travelable comprehensive immersion. Based on the user input, a subsequent viewer state can be displayed.
The subsequent viewer state can differ from the initial viewer state in at least one of viewer position (e.g., location within the target space) or viewer orientation (e.g., viewing direction at a location within the target space). Additional changes provided in subsequent state(s) can include environmental changes not based on user input, such as moving water, motion of the sun, curtains moving due to open window, et cetera. In this regard, the environment of the target space can be dynamic.
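The viewer-state transition just described — a subsequent state differing from the initial state in position, orientation, or both — can be sketched as a pure state-update function. The `ViewerState` fields and `apply_input` signature below are illustrative assumptions, not the claimed immersion engine.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ViewerState:
    x: float
    y: float
    heading_deg: float  # viewing direction at the current position

def apply_input(state, move=(0.0, 0.0), turn_deg=0.0):
    """Produce the subsequent viewer state from user input: a position
    delta, an orientation delta, or both may change, while environmental
    changes (sun, water, curtains) occur independently of this input."""
    return replace(state,
                   x=state.x + move[0],
                   y=state.y + move[1],
                   heading_deg=(state.heading_deg + turn_deg) % 360.0)

initial = ViewerState(0.0, 0.0, 0.0)
later = apply_input(initial, move=(1.0, 0.0), turn_deg=90.0)
```

Keeping the state immutable makes it straightforward to display any prior viewer state again, consistent with the non-linear, non-programmed navigation described above.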
Immersions can be provided using remote storage. In an embodiment, an immersion is provided on a software-as-a-service basis or from a cloud hosting site. Billing can be based on the number of times an immersion is accessed, the time spent in an immersion, and so forth. Recording and/or viewing technology can also be provided as a service, and both viewing applications and immersion content can be provisioned wholly remotely.
As suggested by
In an alternative embodiment, supplemental content can be provided to a target space where the user is present in the target space and using a transparent or translucent virtual reality headset. In this fashion, a supplemental content module acts in a standalone manner to show virtual items in the space or provide information about virtual or real items in the space visible through the virtual reality headset providing superimposition.
In an embodiment, a group of cables connected to individual cameras, mobile devices, et cetera can connect into a mobile computer or other computing device. The lenses can be arranged in, e.g., an octahedron. This arrangement is intended to minimize space between lenses and to orient the respective fields of view so as to avoid difficulties reconciling parallax. The distance between lenses and processing and/or storage equipment can vary from zero to 30 feet or more. For example, with a drone carrying onboard computing elements, the distance between the lens arrangement and computing elements can be zero or minimal. For VR camera rigs, the distance can be 3 to 10 feet. And for remote security cameras, sporting event cameras, concerts, et cetera, the distance can be greater than 10 feet. These are only examples, and various other arrangements using wired or wireless components at distance can be employed.
In embodiments, computing elements disposed at a distance from a lens or lenses may be larger or more power-intensive than those which could be integrated into a mobile element, or may be sized such that close proximity to the camera lenses is impossible without obstructing the wide view(s). For example, a tiny lens or group of lenses can be provided in an enclosure courtside at a basketball game to capture the entire game without blocking spectator views of the game. The footprint presented to both other spectators (or viewing apparatuses) and to the lens field of view is reduced by tethering (via wired or wireless means) to offset the larger components. In this fashion, neither the visual data collected nor the image quality/processing need suffer on behalf of the other. Storage, processing, and power can be located distal to the lens or lenses to support high resolution, rapid stitching, and other processing while minimizing camera system footprint.
In embodiments using small cameras with
The rig's chassis (through which wired tethers can be threaded) can be mounted atop a self-balancing vehicle as described herein. The completed apparatus allows for rapid, steady, programmable, unmanned image capture, including high definition video, with little or no footprint or artifact left on the captured image data. The system can also include components and logic for post-production, or provide captured image data to other systems for such. The self-balancing vehicle can be provided with gimbal stabilizers and self-guiding software to produce steady, zero-footprint shots (requiring no nadir). Due to the stability and high quality, removal of undesirable video imperfections such as ghosting and blurring is made simpler, less-intensive, and more accurate. Hardware and/or other components for such use can be provided in the vehicle or rig, or be accomplished remote thereto.
Aspects herein can use high-definition or ultra-high-definition resolution cameras. Further technologies leveraged can include global positioning systems and other techniques. Location techniques can also employ cellular or network-based location, triangulation, radar, sonar, infrared or laser techniques, and image analysis or processing to discern distances and location information from images collected.
Aerial or waterborne drones (or similar devices) can be utilized in various embodiments as an immersive capture vehicle. In embodiments, two or more vehicles (which can be any combination of land-based, aerial, or marine) can be used simultaneously in a coordinated fashion to comprehensively capture a target space with greater speed or to capture views from locations and orientations which cannot be provided by a single device. Multiple devices can follow the same track in two dimensions at different heights, or different paths at the same or different heights. Multiple vehicles can be locationally “anchored” to one another for alignment or offset to aid in coordination, and one or both may include independent navigation systems to aid in location control.
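The locational "anchoring" of one vehicle to another described above amounts to maintaining a fixed offset from a leader's position. The following Python sketch is a hypothetical illustration (the `anchored_position` name and three-axis offset representation are assumptions), e.g., a follower flying the same ground track at a different height.

```python
def anchored_position(leader_pos, offset):
    """Position of a follower vehicle locationally anchored to a leader at
    a fixed (dx, dy, dz) offset; a zero ground offset with a nonzero dz
    gives the same two-dimensional track at a different height."""
    return tuple(l + o for l, o in zip(leader_pos, offset))

follower = anchored_position((4.0, 2.0, 1.0), (0.0, 0.0, 1.5))
```

Each vehicle may additionally run an independent navigation system, with the anchor serving only as a coordination aid.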
Combining the various images can prevent blind spots in the views created. A continuous, single, uncut scene of the target space is provided in both static and moving manners. Fluid travel in any direction of the space, up to its boundaries, can be provided.
As noted above, features of interest or “hotpoints” can be emphasized in immersions by providing supplemental content, related audio content, particular views, or other aspects. Such aspects can be a window with a view, a vista or patio, a fireplace, a water feature, et cetera.
The environment of immersions can change, such as providing a 24-hour lighting cycle based on sun and/or weather.
The immersion permits users to control the interest, pace, and length of a tour or other remote viewing. The viewing can be sped up or slowed down at user desire.
Static cameras can be integrated with movable camera modules to provide additional views or reference views which can be used to aid in navigation or to provide specific visual information to users.
While aspects herein relating to recording and providing immersions generally concern track-less, free movement by the user, in embodiments movable cameras or virtual viewing can travel along pre-programmed tracks while still using other aspects of the innovation.
In embodiments, an immersion can be edited to show the inclusion or exclusion of items and/or changes to the target space such as removal of a wall or other renovation. In such embodiments, the non-changed portions of the immersion remain recorded media of the actual space, while modelling can be leveraged to integrate changes to the actual space to realistically display the modifications of the target space. Where a target space includes partitions which are removed through editing (e.g., knock out a wall), actual collected media of both sides can be stitched with only the space occupied by the removed wall being a model or virtualization of the space. Augmented reality technology can be leveraged as well.
Controls can include user interfaces that allow jumping to different portions of an immersion, speed controls (e.g., fast forward and/or rewind based on movement or viewing orientation), mute button, drone view button (in relevant embodiments or where the drone view is distinguishable from the main immersive view), still capture button, time lapse (to pause environment or other activity and view freeze), view angle controls, location or position controls, view outside target space (e.g., view of building from outside or above), and so forth.
Features such as allowing virtual reality goggles to share power with a phone (e.g., either charging the other) can be provided.
The number of cameras can vary based on particular camera modules. Cost, field of view, resolution, lens size, and other considerations can be considered to customize a camera module or camera modules for a particular use.
Example services provided with aspects herein are solo target space (e.g., apartment, home, or commercial unit) tours, guided tours, three-dimensional and 360-degree floorplans provided by augmented reality technology, websites or other network resources for hosting such (e.g., one per space or multiple spaces at a single hub), applications to aid in contracting, purchasing, payment, et cetera, related to immersions or supplemental content, and so forth.
In embodiments, immersive media can be used for training purposes. For example, individual cameras or camera modules located around a sports field can collect combinable media related to action on the sports field. In a specific example, the motion, delivery, speed, and movement of a pitch can be recorded from various angles, enabling an immersed batter to practice against a particular opponent pitcher.
This written description uses examples to disclose the invention, including the best mode, and also to enable one of ordinary skill in the art to practice the invention, including making and using devices or systems and performing incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to one of ordinary skill in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differentiate from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims
1. A system, comprising:
- an immersive camera module configured to capture a target space at discrete locations of the target space spaced apart from one another by distance intervals predetermined by a desired capture rate based on a target resolution; and
- an immersive video generation module configured to seamlessly combine the capture of the target space to a travelable comprehensive immersion, wherein seamlessly combining the capture of the target space includes continuously stitching at least one image from the immersive camera module at a first one of the discrete locations of the target space to produce a first location immersion, continuously stitching at least one image from the immersive camera module at a second one of the discrete locations of the target space to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create a travelable comprehensive immersion including a synthesized view of the target space from a location at which the immersive camera module is not present.
2. The system of claim 1, wherein the immersive camera module includes a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites.
3. The system of claim 2, wherein the plurality of cameras are mounted to the immersive camera module such that the immersive camera module is configured to capture a 360-degree panoramic view of the target space, and wherein at least one of the plurality of cameras is mounted atop the immersive camera module to capture an upward view of the target space.
4. The system of claim 1, wherein the travelable comprehensive immersion further includes one or more virtual items superimposed into the target space and supplemental content providing information relating to the one or more virtual items superimposed into the target space.
5. The system of claim 4, wherein the supplemental content is selected from the group consisting of an additional view of the one or more items, information for purchasing the one or more items, a link to the one or more items, and a feature of interest with respect to the one or more items.
6. The system of claim 1, further comprising a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the immersive camera module through the target space between the discrete locations of the target space.
7. The system of claim 6, further comprising:
- an immersive capture vehicle; and
- an immersive capture vehicle controller configured to control movement of the immersive capture vehicle,
- wherein the chassis is operatively coupled to the immersive capture vehicle, and wherein the immersive capture vehicle is configured to smoothly maneuver the chassis and the immersive camera module through the target space between the discrete locations of the target space.
8. The system of claim 7, further comprising:
- a sensor module which collects space geometry and obstacle data related to the target space.
9. The system of claim 8, wherein the immersive capture vehicle is configured to maneuver about obstacles based on the space geometry and the obstacle data.
10. The system of claim 8, further comprising:
- a modeling module configured to generate a model of the target space based on the space geometry and the obstacle data; and
- a path module configured to generate path instructions for the immersive capture vehicle controller, wherein the path instructions avoid obstacles and facilitate capturing the target space based on the model.
11. The system of claim 6, further comprising a physical interface operatively coupled to the chassis, wherein the physical interface is configured to facilitate smooth maneuver of the chassis and the immersive camera module through the target space.
12. The system of claim 6, further comprising:
- an adjustment module of the chassis;
- a shock-absorbing module of the chassis configured to stabilize the immersive camera module; and
- a pivot-plumb component of the chassis configured to stabilize the immersive camera module.
13. A method, comprising:
- providing an immersive camera module configured to capture a target space at discrete locations of the target space spaced apart from one another by distance intervals predetermined by a desired capture rate based on a target resolution;
- recording a first image via the immersive camera module at a first one of the discrete locations of the target space;
- recording a second image via the immersive camera module at a second one of the discrete locations of the target space offset from the first one of the discrete locations of the target space; and
- simultaneously while recording, smoothly maneuvering the immersive camera module through the target space between the discrete locations of the target space; and
- continuously stitching the first and the second images to create a travelable comprehensive immersion configured to seamlessly combine the capture of the target space at the discrete locations of the target space, the travelable comprehensive immersion including a synthesized view of a third location of the target space different from each of the first and second ones of the discrete locations of the target space, wherein the immersive camera module is not present at the third location of the target space or configured to record images at the third location of the target space.
14. The method of claim 13, further comprising:
- providing an immersive camera module including a camera mounting block having a plurality of camera mounting sites and a plurality of cameras mounted to the plurality of camera mounting sites; and
- providing a chassis operatively coupled with the immersive camera module, the chassis configured to smoothly maneuver the immersive camera module through the target space.
15. The method of claim 14, further comprising:
- providing a vehicle; and
- providing a vehicle controller,
- wherein the chassis is mounted to the vehicle, and wherein the vehicle is configured to smoothly maneuver the chassis through the target space between the discrete locations of the target space.
16. The method of claim 13, further comprising generating a path through the target space prior to recording and maneuvering.
17. The method of claim 16, further comprising:
- providing a sensor module configured to collect space geometry and obstacle data within the target space; and
- generating a model of the target space based on the space geometry and the obstacle data, wherein the path is based on the model.
18. The method of claim 13, further comprising:
- outputting the travelable comprehensive immersion including the synthesized view of the third location to a client device; and
- navigating the travelable comprehensive immersion on the client device.
19. A system, comprising:
- an immersion engine configured to access a travelable comprehensive immersion of a target space, the travelable comprehensive immersion being modified to remove a wall identified in the target space such that unmodified portions of the travelable comprehensive immersion include recorded media of the target space and a modified portion of the travelable comprehensive immersion displays a portion of the wall identified in the target space as being removed therefrom,
- wherein the travelable comprehensive immersion is based on continuously stitching at least one image at a first location of the target space proximate a first side of the wall identified in the target space to produce a first location immersion, continuously stitching at least one image from a second location of the target space proximate a second side of the wall identified in the target space opposite the first side to produce a second location immersion, and continuously stitching the first location immersion and the second location immersion to create the modified portion of the travelable comprehensive immersion.
20. The system of claim 19, further comprising:
- a display configured to display the travelable comprehensive immersion as provided by the immersion engine, wherein the immersion engine is configured to control maneuver and view through the travelable comprehensive immersion based on user input; and
- a control configured to provide the user input to the immersion engine.
Type: Application
Filed: Nov 19, 2021
Publication Date: Mar 10, 2022
Inventor: Bryan COLIN (Short Hills, NJ)
Application Number: 17/531,040