SYSTEMS, DEVICES, AND METHODS FOR AUGMENTED REALITY
Methods, devices, and systems are disclosed for providing augmented realities including trails or paths for navigating a real world space. Methods, devices, and systems are also disclosed for providing augmented realities for other forms of navigation guidance or tracking assistance.
This application claims the benefit of U.S. Provisional Patent Application No. 62/658,904, filed Apr. 17, 2018, the complete contents of which are herein incorporated by reference.
FIELD OF THE INVENTION
The invention generally relates to augmented reality and, in particular, to forms of navigation guidance and tracking assistance using augmented reality representations.
BACKGROUND
Navigational tools have advanced considerably from days when paper maps were the only option for mobile persons to plan trips or draw routes taking into account a series of different locations. Portable electronic devices have grown into a mass consumer technology complete with “GPS” (Global Positioning System) technology. Single-purpose instruments for selecting road directions proliferated under brands like Garmin®. Multipurpose instruments like smartphones have since proliferated to provide consumers digital maps with driving direction features. Though multipurpose devices like smartphones may perform other tasks besides giving driving directions, the navigation applications of smartphones offer essentially the same features and user experience as single-purpose GPS navigation devices they have come to displace. Little has changed and improvements can feel marginal.
Between 2010 and 2020, virtual reality (VR) and augmented reality (AR) devices and technology rose substantially in use and availability among ordinary consumers and some businesses. However, in many instances such devices are limited to home use in contexts such as video game entertainment. VR and AR remain underdeveloped as meaningful navigation guidance and tracking assistance tools.
SUMMARY
A target such as a human or vehicle may travel through the real world, and a user of an exemplary system or device is provided an ability to track that target and receive visual AR content which portrays a path the target leaves behind.
At a high level, the AR portrayal of a target's movement may take the appearance of a trail of “breadcrumbs,” an analogy to the familiar nursery story of children leaving a trail of breadcrumbs as a means for memorializing a path with some perpetuity. AR content of a breadcrumbs-type may entail the creation of specific augmentations which trace paths or trails along which one or more targets previously traveled.
At a high level, the AR portrayal of a target's movement may take the appearance of a trail of fallen “dominos,” an analogy to the familiar tile-based game in which a path of dominos falls over among an initial playing table full of standing dominos. AR content of a dominos-type may entail showing many augmentations at the outset and removing augmentations to signify that a target has visited the locations corresponding to those augmentations.
In both categories of embodiments—breadcrumbs-type and dominos-type—augmentations may be changed (e.g., altered in appearance) instead of being outright created from nonexistence or outright removed from existence.
A target's particular parameter that is tracked may differ among embodiments. For example, one or more of location, movement, speed, latitude, longitude, altitude, depth, and proximity may be tracked, among other parameters. Changes in one or more of such parameters may be tracked.
Proximity of a target to a particular real world location may be tracked such that the value of the proximity at multiple sample times is recorded in, by, or with a virtual object associated with the particular real world location. The proximity values thus stored may then be retrieved at some substantially concomitant time or, alternatively, some future time for the purpose of changing augmentations of an AR output based on the changes in the stored proximity values. Proximity is often assessed between a mobile target and a stationary reference location. However, other types of references (besides a reference location) may be used in some embodiments.
According to an aspect of some exemplary embodiments, methods, devices, or systems generate augmented reality (AR) output involving proximity-based creation, destruction, and/or modification of AR content. In some embodiments, the AR content is further affected by a target's view (e.g., viewing direction or frustum) at locations or over time. In some embodiments, the AR content is further affected by a user's view (e.g., viewing direction or frustum) at locations or over time.
By virtue of being created, removed, or otherwise changed, augmentations may have a temporary nature. According to an aspect of some exemplary embodiments, a specialized system for storage and retrieval of information is provided which facilitates the creation, removal, and modification of either or both breadcrumbs-type and dominos-type augmentations. According to such an exemplary system, a virtual world may be provided which is modeled after the real world.
According to some exemplary embodiments, the storage of locations which collectively define a path or trail is performed using a 3D virtual model of a real world space. In this disclosure, the terms “virtual model” and “virtual world” may be used interchangeably. An exemplary 3D virtual model has virtual locations which are configured to correspond with real world locations. In other words, the 3D virtual model includes a virtual landscape modeled after the real world landscape. Real world geography, locations, landscapes, landmarks, structures, and the like, natural or man-made, may be reproduced within the virtual world in like sizes, proportions, relative positions, and arrangements as in the real world. For example, a 3D virtual model of New York City would in fact resemble New York City in many respects, with matching general geography and landmarks. Within the virtual world, virtual objects may be created (e.g., instantiated) at virtual locations. Since a virtual location corresponds with a real world location, a virtual object at a given virtual location becomes associated with a particular real world location that corresponds with the given virtual location. Data stored by or with the virtual object is also inherently associated with the particular real world location. In some cases a single virtual object may be added as means for storing information for more than one location.
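The storage scheme just described can be sketched in a few lines of code. The following is a minimal illustration only, not a definitive implementation of the disclosed system; the class names (VirtualObject, VirtualModel), the grid-based keying of locations, and the grid size are assumptions chosen for brevity.

```python
# Minimal sketch of a 3D virtual model whose virtual locations mirror real world
# locations. Names (VirtualObject, VirtualModel) are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, Tuple

# A virtual location is keyed here by (latitude, longitude, altitude) snapped to a
# grid; a production system would more likely use a proper spatial index instead.
GridKey = Tuple[float, float, float]

@dataclass
class VirtualObject:
    """Invisible data store tied to one real world location."""
    location: Tuple[float, float, float]          # lat, lon, alt of the associated spot
    data: Dict = field(default_factory=dict)      # e.g., visit times, proximity samples

@dataclass
class VirtualModel:
    """Virtual world modeled after the real world landscape."""
    grid_size: float = 0.0001                     # roughly 10 m of latitude per grid cell
    objects: Dict[GridKey, VirtualObject] = field(default_factory=dict)

    def _key(self, lat: float, lon: float, alt: float = 0.0) -> GridKey:
        g = self.grid_size
        return (round(lat / g) * g, round(lon / g) * g, round(alt, 1))

    def object_at(self, lat: float, lon: float, alt: float = 0.0) -> VirtualObject:
        """Return the virtual object for a real world location, creating it if needed."""
        key = self._key(lat, lon, alt)
        if key not in self.objects:
            self.objects[key] = VirtualObject(location=key)
        return self.objects[key]

# Usage: data stored with the object is inherently associated with the location.
model = VirtualModel()
obj = model.object_at(40.7580, -73.9855)          # a spot in New York City
obj.data["last_visit"] = "2018-04-17T12:00:00Z"
```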
A virtual object stored in, with, or with reference to a virtual model may not inherently take a particular state as far as sensory modalities are concerned. For example, a virtual object may not have a particular appearance. Indeed, a virtual object may have no appearance at all, and in essence be “invisible” to an unaided human eye. By contrast, an augmentation is by definition perceptible according to one or more sensory modalities. That is, an augmentation may be seen, heard, touched, smelled, and/or tasted. An augmentation may be regarded as the “face” of a virtual object, in which case data stored in, by, or with the virtual object is used to determine what the augmentation portrays or signifies to a user looking upon that “face”.
According to one aspect of some embodiments, AR content is included in an AR output at multiple locations which were at some point in time in close proximity to a tracked real world object like a person or mobile electronic device. Wherever the tracked object went over some time period of interest, locations proximal to the tracked object at various points in time may be marked in the AR output with some augmentation, e.g., a virtual sign post. Proximity information may be stored for portrayal in AR content at some future time, in which case the proximity information may be stored (e.g., with a timestamp) using a virtual object and retrieved from the virtual object at the time of AR production. As the proximity of a given location and the tracked object changes, e.g. the tracked object moves away, the augmentation may be modified in the AR output (and/or in the virtual object) based on the changing proximity. Virtual objects may be used to keep a record of proximity over time, e.g., with different proximity values each having a different timestamp.
As an illustrative example, augmentations such as virtual sign posts may be associated with real world locations. Conceptually this relationship between augmentations and real world locations may be analogized to mile markers on a highway, boundary pegs or boundary posts used by property surveyors, or signs marking street addresses. All such real objects designate a real world physical location at which they exist. In contrast to these real world sign posts, however, virtual posts presented to a user in an AR output may convey not just an identity of a location, but also signify that a tracked target was near or at the location identified by the sign post. In other words, while real world objects like mile markers are strictly location-based, virtual objects and their augmentations according to some exemplary embodiments may be both location-based and proximity-based. AR content may be added, removed, or otherwise modified at specific non-mobile real world locations in dependence on the proximity of a mobile real world object (a target) with respect to those non-mobile real world locations.
For example, a sign post augmentation may be displayed in AR output for every location a mobile device visits. As time elapses since the mobile device's last visit, the sign post augmentation may change appearance (e.g., fade or change color) to signify the passage of time since the mobile device's last visit. As another example, as the distance grows between the mobile device and the location of an augmentation, the augmentation may change (e.g., change size or shape) in dependence on the changing proximity distance.
The preceding sign post examples are breadcrumbs-type AR. By contrast, according to the dominos-type AR, AR content may be removed from an AR output at each location which is or has been in close proximity to a tracked real world object since some selected start date/time. In other words, some embodiments may involve producing AR content in which the presence of augmentations (or the presence of augmentations of first particular appearance and not some other second particular appearance) signifies that the location has not been visited by a tracked target. Loosely analogous is the classic arcade game, Pac-Man, by which virtual yellow dots are removed from locations which Pac-Man visits. In Pac-Man, the presence of a yellow dot signifies that the location has not been visited by Pac-Man. The absence of an augmentation in the AR content (or the use of alternative appearances to the augmentation) may signify that the location has in fact been visited by the tracked target within some preceding window of time. In Pac-Man, the absence of a yellow dot signifies that the location was already visited by Pac-Man. Other variations may exist in other embodiments.
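A rough sketch of the dominos-type selection rule described above follows. The function name, the data layout (a mapping from location identifiers to last-visit times), and the eight-hour window are hypothetical; the point is simply that an augmentation remains only for locations the tracked target has not recently visited.

```python
# Dominos-type selection: an augmentation is shown only for locations the tracked
# target has NOT visited within a chosen window. Hypothetical, simplified sketch.
from datetime import datetime, timedelta
from typing import Dict, List, Optional

def unvisited_locations(
    last_visit: Dict[str, Optional[datetime]],   # location id -> last visit time (or None)
    now: datetime,
    window: timedelta = timedelta(hours=8),
) -> List[str]:
    """Return ids of locations that still warrant a 'standing domino' augmentation."""
    remaining = []
    for loc_id, visited_at in last_visit.items():
        if visited_at is None or now - visited_at > window:
            remaining.append(loc_id)             # not visited recently: keep the augmentation
    return remaining

# Example: Loc A was just cleared, Loc B never was.
now = datetime(2018, 4, 17, 12, 0)
visits = {"Loc A": now - timedelta(minutes=5), "Loc B": None}
print(unvisited_locations(visits, now))          # -> ['Loc B']
```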
According to an aspect of some exemplary embodiments, augmented realities are provided in which a user is supplied AR content containing virtual “breadcrumbs” which mark a path (i.e., trail) to follow in a real world view.
According to another aspect of some exemplary embodiments, augmented realities are provided in which a user is supplied AR content containing virtual “dominos” which differentiate real world physical locations which have been visited from real world physical locations which have not been visited.
According to another aspect of some exemplary embodiments, virtual trails are generated using virtual augmentations to a real world view, where the virtual trails are created in response to a tracked target (e.g., a mobile electronic device) moving through a physical landscape. As the device moves from a first location to a second location to a third location and so forth, virtual objects are added to or updated within a 3D virtual model of that real world landscape at virtual world locations matching the real world locations. In effect, the mobile electronic device drops “breadcrumbs” in the form of virtual objects along the route navigated by the mobile electronic device. The virtual objects are thereafter usable to generate augmentations which allow a user to visually retrace the path of the tracked target that left the “breadcrumbs”.
Exemplary embodiments of the invention may involve a wide array of applications. AR trails of a breadcrumbs-type provided by exemplary methods and systems may provide guidance to hikers, bikers, skiers, and other outdoorsmen when they are lost or disoriented. AR trails may be provided for use by law enforcement personnel (e.g., police officers) when, for example, chasing a suspect or investigating (e.g., recreating) past events. Responding officers arriving at the initial scene of a crime may be provided with AR trails following an officer already in pursuit of a suspect. AR trails may be provided for replaying training scenarios. AR trails may be provided to mark a path to a particular target, destination, or other user.
AR trails of a dominos-type also have a variety of possible applications. For example, in the area of public safety, a dominos-type method and its AR output may easily highlight areas that have or have not been searched. As another example, in the area of wireless networks or surveying, the method and its AR output may easily highlight areas that have or have not been measured. As yet another example, in a military context, the method and its AR output may assist in the clearing of minefields. Locations at which an initial AR augmentation has been removed, and which therefore no longer show the augmentation, have been cleared, whereas locations at which AR augmentations are still visible should still be treated as unsafe.
Advantages of exemplary embodiments are multifold. Exemplary AR content may provide a relatively passive form of guidance that is neither intrusive nor excessively distracting of a user's attention. For example, a simple path formed by visual cues or markers dotting a path of “breadcrumbs” provides a user simple and intuitive visual guidance without excessive distraction. Furthermore, in some embodiments, AR trails may easily display many markers within a user's viewing frustum without a risk of overwhelming, inundating, or confusing the user. A user may be provided with a readily understood visual basis for assessing not only the most immediate movement required but also movements in the future which may be in visual range.
For convenience of discussion, that with which the target is compared may be referred to as a reference. The target may be a person, an object, a location, or some other thing (typically but not necessarily tangible, and typically but not necessarily having recognizable boundaries). The reference may be a person, an object, a location, or some other thing (typically but not necessarily tangible, and typically but not necessarily having recognizable boundaries). A target may be a vehicle, a device such as a mobile electronic device (e.g., a mobile phone, wearable, laptop, smartwatch, etc.), an animal, a person of a particular type (e.g., a criminal suspect, a law enforcement officer, a soldier, a civilian, a child, etc.), a user, some other thing which may move from time to time, a plurality of any of these, and/or a combination of any of these.
Proximity may be defined, characterized, or otherwise assessed in one or more of a variety of forms. At a high level, proximity entails how close together or how far apart two items are. Proximity may be a constant in the event a target and reference both maintain fixed positions within a common frame of reference. Proximity changes when either the target or the reference moves with respect to the other. For convenience of discussion, examples herein tend to describe proximity changes on the assumption that the reference is fixed (location is constant) and the target is mobile and has changed location at least once in some time window of interest. This is a non-limiting scenario used for exemplary illustration.
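One common way to assess proximity between a mobile target and a fixed reference is a distance threshold on geographic coordinates. The sketch below uses the haversine great-circle distance and an assumed 30 meter range; the function names and threshold are illustrative, not prescribed by this disclosure.

```python
# Proximity as great-circle distance between a mobile target and a fixed reference
# location, compared against a range threshold. Threshold value is an assumption.
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Distance in meters between two latitude/longitude points."""
    r = 6371000.0                                  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_proximal(target: tuple, reference: tuple, range_m: float = 30.0) -> bool:
    """True when the target falls within the reference location's proximity range."""
    return haversine_m(*target, *reference) <= range_m

print(is_proximal((40.75800, -73.98550), (40.75810, -73.98545)))   # ~12 m apart -> True
```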
In
References (like reference locations Loc A and Loc B) are associated with virtual objects. Said differently, virtual objects are associated with the references. In
Storing information in virtual objects offers certain advantages. One advantage is the option of permanency. Central to many embodiments of the invention are changes which are made to augmentations to reflect changes in real world conditions, e.g., the movement of a real world target. As a result, augmentations may be temporary, even fleeting. Augmentations may come and go, and change so dramatically in appearance or by other sensory modality that information formerly portrayed by the augmentation is all but lost in an updated state of the augmentation. By contrast, virtual objects may persist where augmentations do not. In effect, virtual objects may provide a constant backbone for a variety of different and changing augmentations. Virtual objects may serve as data stores comprising a compilation of information for a respective real world location. An augmentation associated with a particular virtual object may be based upon just a fraction of the data maintained by the virtual object. That data which is not drawn upon may be preserved virtually, permitting it to remain available for a future change to the augmentation.
Virtual objects 111 and 112 have locations within the virtual model which match the real world locations with which the virtual objects are associated (here, Loc A and Loc B, respectively).
By definition virtual objects are entities of virtual reality and/or augmented reality which may or may not be “visible” or otherwise perceptible to a human (e.g., audible, tactile, etc.). At the time of writing this disclosure, the most common augmentations in AR content are visual and/or audial, and many of the illustrative examples will describe visual and/or audial augmentations. It should be understood, however, that additional or other sensory modalities may be employed in the practice of the invention (augmentations may be one or more of visual, audial, tactile, gustatory, and olfactory). An “augmentation” is a sensory output of an AR or VR system by which a virtual object is expressed to a human. In the example of
If the target (here, device 102 or, indirectly, person 101) comes within the proximity range of a location (Loc A or Loc B), an augmentation is created or modified as a result of the change in proximity. At time T1, however, device 102 is not at Loc A or Loc B. Accordingly, an AR output may not show any augmentation at Loc A or Loc B. Of course, augmentations which are unrelated to this method of tracking proximity of a target may still be displayed.
Between T1 and T2 person 101 and device 102 moved. As a result, at time T2, device 102 falls within range A-A and is therefore at Loc A. A system detects this new proximity state (the proximity state at T2) and, as a result, the AR output is modified. An augmentation 121 (here, what appears as a darkly shaded rectangular post) now appears in the AR output at Loc A. The change in the tracked target's proximity to Loc A between T1 and T2 results in the change of AR content for Loc A. Whereas no augmentation existed at T1, an augmentation 121 now exists at T2. The augmentation 121 is a perceptible output corresponding with the virtual and otherwise imperceptible virtual object 111.
Between T2 and T3 person 101 and device 102 move again. At time T3, the targets are neither at Loc A nor at Loc B. The change in proximity to Loc A (e.g., the change from the user being proximal to Loc A at T2 to the user no longer being proximal to Loc A at T3) has been detected by the system. In response to this detected change, the system changes the AR output to modify the appearance of the augmentation 121 displayed at Loc A. Now the AR output shows augmentation 131 at Loc A. Augmentation 131 is a modification of augmentation 121 and, as apparent from
Between T3 and T4 person 101 and device 102 move again. As shown in
The preceding paragraph focused on the proximity change of the target with respect to reference, Loc B. Between T3 and T4 other proximity states are changing as well, with respect to other references. In this limited example, the other reference is Loc A. From
In
In
In
In
The path (or paths) involved in method 500 may be any real world path, such as along or through bridges, roads, and buildings, or such as across country through a field or woods. A tracked target (such as a flying vehicle like an airplane, or a person using such a vehicle) may be capable of flight, and a tracked target (such as a submersible vehicle like a submarine) may be capable of diving. In these cases, among others, paths may especially involve altitude or depth considerations. In the alternative to receiving locations from a known origin, the system may receive a plurality of locations the origin of which is not necessarily known and/or consequential. In any case, the plurality of locations may describe a path used or useable to physically traverse a real world landscape. For instance, the locations may each be a fixed number of meters apart (1 meter, 10 meters, 20 meters, etc.) from an adjacent location in the same set and collectively trace out a path from a starting location to a destination location.
A target's location or proximity to one or more references may be tracked at different resolutions depending on the embodiment. In
The resolution of an AR trail formed by one or more augmentations may also vary, in part due to the resolution of the definition of locations, and in part due to other factors. As a very simple example, in
Methods like that illustrated by
Thus far both breadcrumbs-type methods and dominos-type methods have been described in such a way that proximity of a target to a reference is the main determinant, perhaps the only determinant, of whether an augmentation is added, removed, or otherwise changed. This need not be the case in all embodiments. Another condition or trigger which may lead to a change in AR content is a target's view. In particular, it may not only matter where a target physically travels, but where the target's eyesight “travels”. In short, augmentations of an AR output may be changed based on one or more changes in proximity of a tracked target to a reference and/or the target's view of a reference.
The significance of the view criteria for determining whether to change an augmentation is illustrated well by the scenario of law enforcement or security personnel clearing locations while on patrol. A patrol may be conducted with a patrol car, from which an officer inside takes visual stock of the car's surroundings but does not physically leave the roadways or drive down every roadway (e.g., say a short alley connected to a main roadway). In this case, the physical presence of an officer in the alley may not be of particular criticality provided the officer's line of sight is able to reach the alley from one end. Thus, in an AR output, if the patrolling officer drives past the opening of an alleyway without directing his view into the alley, an augmentation corresponding to the alley may not be changed. By contrast, if the patrolling officer makes the same drive but directs his view into the alley, an augmentation corresponding to the alley may be changed or removed.
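The view criterion can be approximated by testing whether a reference lies within the target's horizontal field of view and sight range. The sketch below uses a flat local coordinate frame, assumed field-of-view and range values, and ignores occlusion; all names and numbers are hypothetical.

```python
# Coarse line-of-sight test: is a reference location inside the observer's horizontal
# field of view and within sight range? Flat (local x/y meters) approximation with
# illustrative parameter values; a real system would also test occlusion.
import math

def in_view(observer_xy: tuple, heading_deg: float, reference_xy: tuple,
            fov_deg: float = 90.0, max_range_m: float = 100.0) -> bool:
    dx = reference_xy[0] - observer_xy[0]
    dy = reference_xy[1] - observer_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0       # 0 deg = +y ("north")
    offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # signed angle off heading
    return abs(offset) <= fov_deg / 2.0

# Officer drives past the alley mouth: the alley is cleared only if it was looked at.
alley = (20.0, 5.0)
print(in_view((0.0, 0.0), heading_deg=0.0, reference_xy=alley))    # looking north: False
print(in_view((0.0, 0.0), heading_deg=76.0, reference_xy=alley))   # looking toward it: True
```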
Another criterion which may be assessed to determine changes to augmentations is whether a particular action is taken by a target when at a reference location. For example, the Consumer Electronics Show (CES) is a highly anticipated annual tradeshow (at least as of the writing of this disclosure). The CES, like many tradeshows, may involve booths at which respective companies showcase their products. A reporter at such an event must take particular actions in connection with her occupation—capture photographs or video content, conduct interviews, or read informational postings, for example. The proximity of a reporter (the target in this example) to a particular booth (a reference) may be of some interest to assess the reporter's coverage, but the actions taken by the reporter when at a booth are of particular consequence in their own right. For such contexts where not just locational proximity is of interest but also actions taken at locations, embodiments may be configured so that AR content is updated to signify what actions have been performed, or what actions have not been performed from some predetermined list. An action which may be signified by an augmentation may be any of a variety of actions. For example, the action may be taking a picture, sending an email, transmitting a signal (e.g., making a phone call), receiving a signal (e.g., receiving a phone call), activating a device, deactivating a device, etc.
The visual appearance of an augmentation may be configured to signify to a user an aspect of time. For example, the appearance may signify how old the augmentation is since it was created or since it was changed. The appearance may signify a date and/or time at which the augmentation was created or previously changed. For example, visual appearance of one or more augmentations may change opacity (e.g., fade) or change size (e.g., shrink) as time elapses, or as the augmentations age. A color, shade, or opacity of a visual augmentation may be configured to signify the amount of time since a target was at each augmentation's associated location. The visual appearance of the augmentations may indicate, for example, when the augmentation was created, how old the augmentation is, and/or how close or far the target is from the augmentation in real time.
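As a minimal sketch of such a time-based appearance rule, the opacity of an augmentation might fade linearly with the time elapsed since the target's visit. The one-hour fade period, the floor value, and the function name are assumptions for illustration.

```python
# Age-based appearance rule: opacity fades linearly as time since the target's visit
# grows, until the augmentation vanishes. Period and floor values are assumptions.
from datetime import datetime, timedelta

def opacity_for_age(visited_at: datetime, now: datetime,
                    fade_period: timedelta = timedelta(hours=1),
                    floor: float = 0.0) -> float:
    """1.0 right after the visit, shrinking toward `floor` as the augmentation ages."""
    age = (now - visited_at) / fade_period       # fraction of the fade period elapsed
    return max(floor, 1.0 - min(age, 1.0))

now = datetime(2018, 4, 17, 12, 0)
print(opacity_for_age(now - timedelta(minutes=15), now))   # 0.75: fairly fresh trail
print(opacity_for_age(now - timedelta(hours=2), now))      # 0.0: visit too old to show
```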
The visual appearance of an augmentation may signify an aspect of speed. For example, if a tracked target passes a real location for which an augmentation is provided, the augmentation may visually signify the speed with which the tracked target passed the location. The visual appearance may also or alternatively give directional information. For example, the augmentation may include or signify a vector (e.g., arrow) which signifies the direction the target was moving when passing the location.
The visual appearance of an augmentation may signify the proximity of the associated real world location with some other real world location. For example, individual augmentations may indicate the measure of distance between the associated real world location and an end location, such as the end of a trail to which the augmentation contributes a visual path of “breadcrumbs”.
In breadcrumb-type embodiments, multiple trails may be displayed simultaneously in AR content. In such cases, the augmentations of respective trails may be provided with different visual appearances to differentiate one trail from another. In dominos-type embodiments, multiple trails may be displayed simultaneously in AR content. The different targets which give different trails may or may not be differentiated in the AR output. For example, if law enforcement officers are clearing an area, identifying the particular officer who clears an area may not be important, in which case the AR content may be configured to show only that locations are cleared (by any officer) without encumbering the AR content with information conveying who cleared each location.
The visual appearance of an augmentation may be configured to signify dwell time. Specifically, an aspect of the augmentation's appearance may correlate with the amount of time a tracked target spent at the location corresponding with that augmentation. As one example, augmentations showing a trail of a tracked target may be portrayed as drops of liquid left behind by the target, not unlike the manner in which a person soaked by rain leaves a trail of drops in their wake as they pass through a hallway. If the target remains at a location for a protracted period, the augmentation may comprise a pool of droplets, and a size of the puddle may qualitatively show the duration of time the target dwelled at the associated location. Other visual representations of dwell time, either qualitative or quantitative, may be used depending on the embodiment.
In some cases, the identity of the target may be unknown or deliberately hidden from a user. A trail left by the target may nonetheless be presented. In some embodiments augmentations may be configured to signify to a user an identity of one or more targets and/or other users. A trail of augmentations may, for instance, comprise information which identifies a person or object which left the trail. As an example, this feature may be used by personnel who must verify, corroborate, or otherwise check the work of other personnel. In the dominos case, if an inferior is required to visit particular locations as part of a patrol, a change in augmentations to indicate the inferior visited (versus no one visiting or someone who is not the inferior visiting) provides a straightforward means for the superior to ascertain the performance and completeness of the inferior's patrol.
The AR content may be determined or changed based on location or proximities of a second tracked object (besides the target). In particular, a user's location may be tracked and the AR content supplied to the user changed based on the user's location. For example, only one augmentation “crumb” may be made visible at a time. When a user reaches that augmentation (i.e., reaches the location associated with the augmentation), the next augmentation in the trail becomes visible. As another example, the next augmentation in a sequence may have a unique identifier that all others do not. For instance, the next augmentation may be blinking while the remaining augmentations are still. As the user's location changes while the trail is followed, which augmentation is blinking is updated and changed.
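A minimal sketch of this one-crumb-at-a-time behavior follows, assuming a local x/y coordinate frame in meters and a 10 meter arrival radius; the names and values are hypothetical.

```python
# "One crumb at a time" guidance: only the next augmentation in the trail is shown (or
# marked as blinking), and the pointer advances as the user reaches each location.
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]                       # local x/y in meters for simplicity

def next_crumb(trail: List[Point], user: Point,
               arrival_radius_m: float = 10.0) -> Optional[Point]:
    """Return the first trail location the user has not yet reached."""
    for crumb in trail:
        if math.dist(crumb, user) > arrival_radius_m:
            return crumb                          # this one blinks; later crumbs stay dim
    return None                                   # trail complete

trail = [(0.0, 0.0), (0.0, 50.0), (40.0, 50.0)]
print(next_crumb(trail, user=(1.0, 2.0)))         # -> (0.0, 50.0): first crumb already reached
```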
In most embodiments a target (or targets, as the case may be) moves through real three-dimensional space in the real world. The real world locations a target visits affect virtual objects and corresponding augmentations which are associated with the real world locations through which or past which the target actually moves. The same is not necessarily true of users. While users may also move through real three-dimensional space, users may also take advantage of the systems and methods herein from very different locations or vantages. For example, a user may be at a remote location from a target yet be supplied VR or AR content which shows a breadcrumbs-type or dominos-type trail for the target. A user may be entirely stationary, e.g., positioned at a desktop computer at a command center, in a squad car, in a data center, etc. Of course, the ability to provide AR content to a user in the same real world setting through which a target has previously passed is particularly advantageous. However, not all embodiments necessarily conform to this modality of content output.
Different users may be provided different augmentations, even in cases where the augmentations are based on the same virtual objects. For example, different information or different kinds of information may be shared with different users. Users may be assigned or attributed different clearance levels (e.g., security levels), and augmentations selectively chosen for output to a particular user based on that user's clearance level.
Significant time may or may not elapse between i) a time when a target visits, passes through, or passes by a location and thereby triggers a change in a virtual object and/or augmentation for that location, and ii) a time when a user consumes the VR or AR content generated at least partially based on the target's visit or passing. The date/times (i) and (ii) may be both substantially in real time. The date/times (i) and (ii) may be substantially delayed, e.g., minutes, hours, a day or more, a week or more, or a month or more. Indeed, some embodiments may be employed for visually recreating events of the past. In a criminal investigation, for example, investigators may be supplied VR or AR content which permits them to visually observe the movement or path taken by a suspect at some time in the past that is of consequence to the investigation. As previously discussed, though individual augmentations may be relatively fleeting, the use of virtual objects as a backbone for producing augmentations provides a basis for more perpetual storage without loss of trail information tied to particular real world locations. This modality of storing and retrieving information is both efficient and robust.
Several of the features described above will now be illustrated with an example involving law enforcement officers (LEOs) pursuing a criminal suspect. Assume a criminal flees a crime scene on foot. The criminal is the tracked target. His movement is tracked, at least to some extent, by one or more of a helicopter keeping track of him from the air, a mobile device carried by the criminal the position of which has been estimated by cell tower triangulation or a GPS signal, and street cameras recording the criminal briefly as the criminal passes the respective cameras. As the fleeing criminal takes some path through streets, alleys, buildings, etc., he (briefly) visits real world locations such as a street corner, an intersection, a postal office, a particular point identifiable with GPS coordinates, etc. Visiting a location does not necessarily require any more than a moment at the location. One or more of these locations is associated with a virtual object, and the information for such virtual object is updated to indicate information about the criminal's visit, e.g., whether he visited or not, when he visited, whether he visited for more than a moment, how long he dwelled if he dwelled, how fast he was going when he visited, which direction he was traveling when he visited, etc.

A law enforcement officer (LEO) on the ground is in pursuit of the criminal and is following his trail. AR content is supplied to the LEO to assist his ability to understand what path the criminal took. Augmentations at some of the aforementioned locations are created, with possible visual appearance characteristics to signify some of the information previously listed, to inform the LEO. The real time location of the officers may be tracked so that their proximity to the virtual objects is known. As a result, the LEO may be provided AR content which only contains augmentations within a certain proximity of the LEO. As a result, the LEO is not inundated with the AR content for the criminal's whole path, but receives only those augmentations to which he is nearest. Which augmentations are provided may be updated as the LEO's location and proximities change, keeping the AR content relevant to the LEO in real time.

Meanwhile a command center may be provided with another form of AR content. At the command center personnel may be working from stationary computers. The computers may display real world content, such as footage from the helicopter or footage from the street cameras, with virtual augmentations showing the breadcrumb-type path created by the fleeing criminal. To the command center, both the criminal and the LEO are possible targets of interest. Thus, the virtual content served to the command center for display may include augmentations which also trace a path the LEO has taken and/or is in the act of taking. In this way the command center is provided the ability to monitor both the fleeing suspect and the LEO who is on foot chasing the suspect. The information concerning the suspect's path and the LEO's path may both be stored using virtual objects in a virtual model. Sometime later, say one month, a criminal prosecutor or investigator may access the stored information. AR content is provided to the prosecutor or investigator which recreates the content provided to the command center the night of the crime. The temporal gap between the events which formed the trails and the serving of AR content is made possible by the supporting virtual model and storage/retrieval system.
“User,” as used herein, is an entity which employs a method, device, or system of the invention. A user may be a human, multiple humans, or some other entity. A user may be, for example, a person intended to consume AR content generated in accordance with a method or variant of a method disclosed herein. A user may be a person in pursuit of a mobile target. A user may be the target, such as would be the case when a person wishes to retrace his or her steps and would benefit from AR showing where he or she previously visited, or what he or she did at visited locations, or how long he or she dwelled at visited locations, etc.
In
In the row labeled “VIRTUAL WORLD”, everything is virtual. The virtual world involves data characterizing the real world and which, when suitably processed, permits an output such as what is shown in the figures. Virtual objects 611, 621, 631, and 641 may or may not take a visual form in the virtual world. In
The row labeled “AR1” shows augmented reality content for a first user. The row labeled “AR2” shows augmented reality content for a second user. Real world content is depicted in these rows using broken lines for ease of distinguishing real and virtual to a reader of this disclosure. In actual implementations, real content and virtual content may be indistinguishable, as may be desired for highly realistic AR.
Returning to
The exemplary process 500 describes how paths may originally be formed and stored as well as how paths may be retrieved for display in an output. At block 501, a plurality of real world locations are received that collectively describe a path used or useable to physically traverse a real world landscape. For example, this receiving step may comprise receiving the plurality of real world locations from a mobile electronic device as a result of the mobile electronic device physically visiting the respective real world locations. The mobile device may periodically transmit or store its location as it is moved by a user through a real world geographic space.
At block 502, the locations from block 501 are stored for subsequent retrieval and use. If the locations of interest are already associated with virtual objects, information stored by the virtual object may be updated.
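Blocks 501 and 502 might be sketched as follows, assuming locations are reported by the mobile device as latitude/longitude pairs and stored in a simple in-memory structure. The class and field names are hypothetical; a deployed system could instead persist this information in virtual objects of the 3D virtual model described earlier.

```python
# Sketch of blocks 501-502: receive a stream of real world locations reported by a
# mobile device and store them, keyed by location, for later retrieval.
from datetime import datetime
from typing import Dict, List, Tuple

LatLon = Tuple[float, float]

class PathStore:
    def __init__(self) -> None:
        self.visits: Dict[LatLon, List[datetime]] = {}    # location -> visit timestamps
        self.order: List[LatLon] = []                     # preserves the path sequence

    def record(self, location: LatLon, when: datetime) -> None:
        """Block 502: store (or update) the information for a reported location."""
        self.visits.setdefault(location, []).append(when)
        if not self.order or self.order[-1] != location:
            self.order.append(location)

    def path(self) -> List[LatLon]:
        """Retrieve the stored locations in the order they were traversed."""
        return list(self.order)

# Block 501: locations arrive as the device periodically reports its position.
store = PathStore()
store.record((40.7580, -73.9855), datetime(2018, 4, 17, 12, 0))
store.record((40.7585, -73.9850), datetime(2018, 4, 17, 12, 1))
print(store.path())
```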
In
Blocks 503 to 505 of
At block 503, a real world frustum is determined. This real world frustum is regarded as the user's viewing frustum, and may correspond with the viewing frustum of a camera or cameras of an AR device which captures real world image data describing the user's real world surroundings. A real world frustum may be determined based on one or more of, for example, a present location (e.g., of the AR device), a field of view (e.g., of the AR device's camera), an orientation (e.g., of the AR device's camera), a position (e.g., of the AR device or camera), a pose (i.e., a combination of position and orientation), and assumptions about the near and far field limits (e.g., predetermined values for near and far field limits).
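A sketch of block 503 follows, assembling a frustum description from the device's pose, camera field of view, and assumed near and far limits. The dataclass fields and the default values are illustrative assumptions, not prescribed parameters.

```python
# Sketch of block 503: build a viewing frustum description from the AR device's pose
# and camera parameters. Field names and default near/far limits are assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Frustum:
    position: Tuple[float, float, float]   # device location (e.g., local x, y, z in meters)
    heading_deg: float                     # horizontal orientation of the camera
    pitch_deg: float                       # vertical orientation of the camera
    horizontal_fov_deg: float              # camera field of view
    vertical_fov_deg: float
    near_m: float = 0.5                    # assumed near-field limit
    far_m: float = 200.0                   # assumed far-field limit

def determine_frustum(position, heading_deg, pitch_deg,
                      horizontal_fov_deg=66.0, vertical_fov_deg=50.0) -> Frustum:
    """Block 503: combine pose, field of view, and near/far assumptions into a frustum."""
    return Frustum(position, heading_deg, pitch_deg, horizontal_fov_deg, vertical_fov_deg)

frustum = determine_frustum((0.0, 0.0, 1.6), heading_deg=90.0, pitch_deg=0.0)
print(frustum.far_m)
```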
At block 504, the determined real world frustum is applied to the virtual world of the 3D virtual model. Essentially, the real world frustum is used to set the viewing frustum within the virtual world. Virtual objects which are inside the (now virtual) viewing frustum are found as candidates for augmentation. Virtual objects lying entirely outside the viewing frustum are not candidates for augmentation.
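Block 504 can be sketched as a culling pass over the virtual objects, keeping only those inside the applied frustum as candidates. The sketch below simplifies the frustum to a two-dimensional range-and-angle test; the names, coordinates, and default parameters are hypothetical.

```python
# Sketch of block 504: apply the (now virtual) viewing frustum to the virtual model and
# keep only virtual objects inside it as augmentation candidates.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def cull_candidates(objects: Dict[str, Point], eye: Point, heading_deg: float,
                    fov_deg: float = 66.0, near_m: float = 0.5,
                    far_m: float = 200.0) -> List[str]:
    candidates = []
    for obj_id, (x, y) in objects.items():
        dx, dy = x - eye[0], y - eye[1]
        dist = math.hypot(dx, dy)
        if not (near_m <= dist <= far_m):
            continue                                               # outside near/far limits
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(offset) <= fov_deg / 2.0:
            candidates.append(obj_id)                              # inside the frustum
    return candidates

virtual_objects = {"611": (5.0, 40.0), "621": (60.0, 5.0), "631": (-30.0, -80.0)}
print(cull_candidates(virtual_objects, eye=(0.0, 0.0), heading_deg=0.0))   # -> ['611']
```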
At block 505, a selection of augmentations based on the virtual object candidates occurs. This selection may involve one or more criteria including, for example, user option selections and the relationships between different virtual objects. In particular, virtual object candidates are selected which collectively describe a path or trail from the AR user's location (or approximate location) to some other location of interest, generally referred to as a destination location.
In
At block 506, a signal is initiated to direct or control the augmented reality output of an output device. The output device may simply be the original AR device for which the viewing frustum was previously determined. Depending on where the signal originates, it may be transmitted over a network such as one or more wireless networks and/or the Internet. In this way, processing related to process 500 may be performed on one or more remote computers (e.g., servers) of one or more cloud networks, with output still being served to an end user on a network connected AR device. Alternatively, a single end-user device may be configured to perform much or all of process 500, in which case the signal initiated at block 506 may be initiated by a processor of the device and transmitted over a hardware connection to an output element such as a display (e.g., digital screen).
At block 507, the augmented reality is ultimately output to the user. Here, the signal of block 506 is used by an output device such as a head mounted display (HMD) or a digital display to show the augmentations together with real world content. The augmentations may include visual augmentations which are superimposed on the real world view. Significantly, the visual augmentations may form a visual trail or path configured for a user to follow.
Note the perspective from which this content may be shown to a user may vary. As depicted for a reader of this disclosure, both AR1 and AR2 use the same third-person elevated perspective as used for depicting the real world and virtual world. In general, AR outputs according to exemplary embodiments may take any of a variety of perspectives, including third-person, first-person, top-down, aerial, elevated, others, or some combination of these.
AR1 shows AR content which is based on changes in proximity of the truck with respect to real world locations. Thus, in the case of AR1, the truck is treated as a tracked target. Note that in the case of AR1, steps involving viewing frustums have not been illustrated. AR1 is a breadcrumbs-type AR in which the tracked target—the truck—appears to leave a trail of “crumbs” in the form of location-specific augmentations. The augmentations are configured in both position and appearance based on information from their corresponding respective virtual objects. In this example, the closer an augmentation is to the tracked target, the larger the augmentation. Conversely, the further an augmentation is from the tracked target, the smaller the augmentation. This is but one illustrative example.
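A minimal sketch of this distance-dependent sizing rule follows; the scale bounds and distance thresholds are assumptions for illustration.

```python
# Appearance rule from the AR1 example: augmentation size shrinks with the distance
# between its location and the tracked target. Scale bounds are assumptions.
import math

def scale_for_distance(augmentation_xy: tuple, target_xy: tuple,
                       full_size_within_m: float = 10.0,
                       vanish_beyond_m: float = 200.0) -> float:
    """1.0 when the target is near the augmentation, tapering toward 0.1 as it recedes."""
    d = math.dist(augmentation_xy, target_xy)
    if d <= full_size_within_m:
        return 1.0
    if d >= vanish_beyond_m:
        return 0.1
    frac = (d - full_size_within_m) / (vanish_beyond_m - full_size_within_m)
    return 1.0 - 0.9 * frac

print(scale_for_distance((0.0, 0.0), (5.0, 0.0)))      # 1.0: crumb right next to the truck
print(scale_for_distance((0.0, 0.0), (150.0, 0.0)))    # ~0.34: older crumb, farther back
```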
AR2 is AR content generated on the basis of the car 651 being both a tracked target and a user to whom the AR content may be provided as an AR output. (Specifically, the user may be the car or its operator, for example.) AR2 is AR content which is based on changes in proximity of the car 651 with respect to reference locations associated with virtual objects 611, 621, 631, and 641. Note that the virtual objects 611, 621, 631, and 641 were nevertheless created on the basis of the truck 610 being a tracked target. This merely illustrates that multiple targets may be tracked, for similar or for different reasons, to ultimately yield AR content for a particular embodiment.
The location of car 651 is indicated in the virtual world as a dark circle to distinguish it from the “x” indicia for locations of the truck 610. As discussed above, which of the virtual objects are candidates for use in generating the AR content varies depending on the applied frustum 653 or 653′. As was discussed above, virtual objects 611 and 621 were selected at time T5 (
Thus far in the disclosure, attention has been focused on methods or processes which allow for augmented realities involving navigation, trail forming, trail following, area clearing, and the like. Such exemplary processes are generally carried out by some combination of hardware, software, and firmware, either in a particular electronics device or by a system of electronic devices.
An “output device”, as used herein, may be a device capable of providing at least visual, audio, audiovisual, or tactile output to a user such that the user can perceive the output using his or her senses (e.g., eyes and/or ears). In many embodiments, an output device will comprise at least one display, at least one speaker, or some combination of display(s) and speaker(s). A suitable display (i.e., display device) is a screen of a mobile electronic device (e.g., phone, smartphone, GPS device, laptop, tablet, smartwatch, etc.). Another suitable output device is a head-mounted display (HMD). In some embodiments, the display device is a see-through HMD. In such cases the display device passively permits viewing of the real world without reproducing details of a captured real world image feed on a screen. In a see-through HMD, it is generally only the augmentations that are actively shown or output by the device. Visual augmentations are in any case superimposed on the direct view of the real world environment, without necessarily involving the display of any of the original video input to the system. In fact, for systems which do not use the video input to detect image data, the system may include one or more HMDs that have no camera at all, relying entirely on other sensors (e.g., GPS, gyro, compass) to determine the relevant augmentations, and displaying them on otherwise transparent glasses or visors. Output devices and viewing devices may include or be accompanied by input devices (e.g., buttons, touchscreens, menus, keyboards, data ports, etc.) for receiving user inputs.
An image of a real world view of a geographic space may be captured using one or more cameras.
A real world image or view may include (e.g., if from a city's street intersection camera for instance) HUD displays of date and time, or even could have augmentations in it from another augmented reality system that is providing video to a system based on the present disclosure. Input to one or more processors herein which is described as an image of a real world view may also or alternatively include one or more images which are not of a real world view. In general an augmented reality system need only have some portion of its input that is real. In some embodiments this may be a relatively small portion. Augmented reality systems may be used to modify the augmentations of other augmented reality systems in more complex applications, e.g., a system comprises distributed independent augmentation engines which make use of each other's output.
The data from the camera(s) 804 and collected by the other sensors (e.g., 806, 807, 808, 809, 810, and/or 811) is received by one or more processors 805. The camera data describes an image (or images) of a real world view of the geographic space in the vicinity of the camera and, in some but not all embodiments, in the vicinity of a user. In this example, the camera 804 and the display 814 are part of the same unitary electronic device 801, and the geographic space is also in the vicinity of the output device, display 814. The camera 804 and the electronic device 801 that includes the camera 804 may be regarded as the viewing device. Viewing devices may include various types (but not necessarily all types) of cameras, mobile electronic devices, mobile phones, tablets, portable computers, wearable technology, and the like. If the electronic device 801 were a head-mounted display (HMD), the HMD would be characterizable as a viewing device, too. An HMD that has no cameras, such as some see-through HMDs, may still qualify as a viewing device. A lens or pair of lenses of the see-through head-mounted display also qualifies as a viewing device.
A user may be able to view and benefit from what is shown by an output device, e.g., display 814, in real time. The real world view captured by the camera may be from the viewpoint of a human user as if the user were situated in the space (e.g., sitting, standing, walking, driving, biking, etc.). In many but not all embodiments, the user is situated in the space. A display is but one type of output device usable for providing augmentations. Displays, speakers, and vibratory devices are different examples of output devices usable in embodiments of the invention for providing augmentation outputs to a user detectable with their senses. In some embodiments a viewing device and an output device are the same device or part of the same device. For instance, an HMD may be accurately characterized as both a viewing device and an output device, as may a mobile phone or tablet that has both a camera and a display screen. Alternatively, viewing devices and output devices may be separate devices arranged at completely separate locations. A camera and sensors which are part of a viewing device collecting data about a real world view may be at a first location, and an output device like a display and/or speaker which provides augmentations with a reproduction of the real world view may be at a second and separate location at some distance apart from the first location.
The one or more processors 805 are configured to process the data from the one or more cameras 804, as well as other data like data from sensors 806, 807, 808, 809, 810, and/or 811, in order to generate an output useable by an output device to present an augmented reality to a user. In some embodiments, the image and/or sensor data from the cameras/sensors is sent over a network 703 (e.g., the Internet) to one or more remote servers comprising some of the one or more processors that perform processing of the data before augmentations are provided to an output device for outputting to a user.
Exemplary systems, devices, and methods according to some exemplary embodiments provide augmented reality outputs comprising visual augmentations which collectively describe a path used or useable to traverse a real world landscape. For illustrative purposes,
Some embodiments of the present invention may be a system, a device, a method, and/or a computer program product. A system, device, or computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention, e.g., processes or parts of processes or a combination of processes described herein.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Processes described herein, or steps thereof, may be embodied in computer readable program instructions which may be paired with or downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions and in various combinations.
These computer readable program instructions may be provided to one or more processors of one or more general purpose computers, special purpose computers, or other programmable data processing apparatuses to produce a machine or system, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
While the invention has been described herein in connection with exemplary embodiments and features, one skilled in the art will recognize that the invention is not limited by the disclosure and that various changes and modifications may be made without departing from the scope of the invention as defined by the appended claims.
Claims
1. A computer-implemented method of augmented reality (AR), comprising
- associating a plurality of virtual objects with real world locations;
- tracking proximity of a real world target with respect to one or more of the real world locations associated with the plurality of virtual objects; and
- changing one or more augmentations of AR content for an AR output based on one or more changes in the proximity of the tracked target to the real world locations associated with the plurality of virtual objects,
- wherein an augmentation changed in the changing step corresponds with one of the plurality of virtual objects.
2. The method of claim 1, further comprising
- outputting to a user AR content comprising one or more augmentations which trace a path taken by the real world target.
3. The method of claim 1, further comprising
- outputting to a user AR content comprising one or more augmentations the presence of which signifies that the tracked target has not visited one or more real world locations within some predetermined time period.
4. The method of claim 1, wherein the changing step changes an appearance of a given virtual object based on one or more of the following:
- whether the target visited the real world location associated with the given virtual object;
- time elapsed since the target visited the real world location associated with the given virtual object;
- an amount of time the target spent at the real world location associated with the given virtual object;
- an identity of the target;
- whether a user visited the real world location associated with the given virtual object since the target visited the real world location associated with the given virtual object;
- whether the real world target's view included the real world location associated with the given virtual object; and
- a real time distance between the real world location associated with the given virtual object and a real time location of the target.
5. The method of claim 1, wherein the target is a person, a vehicle, or a mobile electronic device.
6. The method of claim 1, further comprising a step of creating the plurality of virtual objects in a virtual model.
7. A system of augmented reality (AR), comprising
- one or more processors configured to execute computer readable program instructions which, when executed, cause the one or more processors to perform associating a plurality of virtual objects with real world locations; tracking proximity of a real world target with respect to one or more of the real world locations associated with the plurality of virtual objects; and changing one or more augmentations of AR content for an AR output based on one or more changes in the proximity of the tracked target to the real world locations associated with the plurality of virtual objects, wherein an augmentation changed in the changing step corresponds with one of the plurality of virtual objects; and
- at least one output device for outputting the AR content to a user.
8. The system of claim 7, wherein the one or more processors are caused by the computer readable program instructions to make one or more augmentations of the AR output trace a path taken by the real world target.
9. The system of claim 7, wherein the one or more processors are caused by the computer readable program instructions to make a presence of one or more augmentations in the AR output signify that the tracked target has not visited one or more real world locations within some predetermined time period.
10. The system of claim 7, wherein the changing step changes an appearance of a given virtual object based on one or more of the following:
- whether the target visited the real world location associated with the given virtual object;
- time elapsed since the target visited the real world location associated with the given virtual object;
- an amount of time the target spent at the real world location associated with the given virtual object;
- an identity of the target;
- whether a user visited the real world location associated with the given virtual object since the target visited the real world location associated with the given virtual object;
- whether the real world target's view included the real world location associated with the given virtual object; and
- a real time distance between the real world location associated with the given virtual object and a real time location of the target.
11. The system of claim 7, wherein the target is a person, a vehicle, or a mobile electronic device.
12. The system of claim 7, further comprising a virtual model for containing the plurality of virtual objects.
13. A computer readable program product comprising a non-transitory computer readable medium with computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform
- associating a plurality of virtual objects with real world locations;
- tracking proximity of a real world target with respect to one or more of the real world locations associated with the plurality of virtual objects; and
- changing one or more augmentations of AR content for an AR output based on one or more changes in the proximity of the tracked target to the real world locations associated with the plurality of virtual objects,
- wherein an augmentation changed in the changing step corresponds with one of the plurality of virtual objects.
14. The computer readable program product of claim 13, further comprising computer readable instructions which cause the one or more processors to perform
- outputting to a user AR content comprising one or more augmentations which trace a path taken by the real world target.
15. The computer readable program product of claim 13, further comprising computer readable instructions which cause the one or more processors to perform
- outputting to a user AR content comprising one or more augmentations, wherein the presence of augmentations signifies that the tracked target has not visited one or more real world locations within some predetermined time period.
16. The computer readable program product of claim 13, wherein the changing step changes an appearance of a given virtual object based on one or more of the following:
- whether the target visited the real world location associated with the given virtual object;
- time elapsed since the target visited the real world location associated with the given virtual object;
- an amount of time the target spent at the real world location associated with the given virtual object;
- an identity of the target;
- whether a user visited the real world location associated with the given virtual object since the target visited the real world location associated with the given virtual object;
- whether the real world target's view included the real world location associated with the given virtual object; and
- a real time distance between the real world location associated with the given virtual object and a real time location of the target.
17. The computer readable program product of claim 13, wherein the target is a person, a vehicle, or a mobile electronic device.
18. The computer readable program product of claim 13, further comprising computer readable instructions which cause the one or more processors to perform
- creating the plurality of virtual objects in a virtual model.
19-30. (canceled)
31. The method of claim 1, further comprising
- providing a 3D virtual model having virtual locations configured to correspond with the real world locations, wherein the plurality of virtual objects are in the 3D virtual model;
- as the real world target moves from a first real world location to a second real world location, adding or updating one or more virtual objects within the 3D virtual model at virtual world locations matching the first and second real world locations.
32. The system of claim 7, wherein the one or more processors are caused by the computer readable program instructions to
- provide a 3D virtual model having virtual locations configured to correspond with the real world locations, wherein the plurality of virtual objects are in the 3D virtual model;
- as the real world target moves from a first real world location to a second real world location, add or update one or more virtual objects within the 3D virtual model at virtual world locations matching the first and second real world locations.
33. The computer readable program product of claim 13, further comprising computer readable instructions which cause the one or more processors to perform
- providing a 3D virtual model having virtual locations configured to correspond with the real world locations, wherein the plurality of virtual objects are in the 3D virtual model;
- as the real world target moves from a first real world location to a second real world location, adding or updating one or more virtual objects within the 3D virtual model at virtual world locations matching the first and second real world locations.
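By way of a further non-limiting illustration, the following Python sketch corresponds loosely to the virtual model updates recited in claims 31-33: a 3D virtual model whose virtual locations are configured to correspond with real world locations, with virtual objects added or updated as the target moves from a first real world location to a second. The class name, the simple scaling used to map real coordinates to virtual coordinates, and the example coordinates are assumptions for illustration only.

```python
# Illustrative sketch only; VirtualModel3D, to_virtual, and add_or_update are
# hypothetical names, not taken from the disclosure.
from dataclasses import dataclass, field


@dataclass
class VirtualModel3D:
    """A 3D virtual model whose virtual locations correspond with real world locations."""
    scale: float = 1.0                            # assumed real-to-virtual scale factor
    objects: dict = field(default_factory=dict)   # virtual location -> augmentation state

    def to_virtual(self, real_location):
        # Map a real world (x, y, z) location, here in meters, to the matching
        # virtual world location.
        return tuple(round(coord * self.scale, 3) for coord in real_location)

    def add_or_update(self, real_location, state="visited"):
        # Add a virtual object at the matching virtual location, or update its
        # state if an object is already present there.
        self.objects[self.to_virtual(real_location)] = state


# As the target moves from a first real world location to a second, virtual
# objects are added or updated at the matching virtual world locations.
model = VirtualModel3D()
first_location = (0.0, 0.0, 0.0)    # hypothetical coordinates, in meters
second_location = (5.0, 2.0, 0.0)
for location in (first_location, second_location):
    model.add_or_update(location)
print(model.objects)   # two entries, one per visited location
```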