Augmented reality-based system and method to show the location of personnel and sensors inside occluded structures and provide increased situation awareness

An augmented reality system provides enhanced situational information to personnel located within an environment. A tracking system obtains viewpoint information corresponding to a real-time view of said environment. A processing system receives information from one or more sensors, where the information includes sensor location information and status information about the environment and personnel therein. The processing system generates graphics using the sensor location information and the viewpoint information, and the graphics include visual representations of said status information. A display at a supervisor station outside of said environment displays the generated graphics such that they are superimposed on the real-time view.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is related to the following co-pending U.S. patent applications, the entire contents of each of which are incorporated herein by reference:

    1. U.S. application Ser. No. 11/441,241 entitled “System and Method to Display Maintenance and Operation Instructions of an Apparatus Using Augmented Reality,” filed May 26, 2006;
    2. U.S. application Ser. No. 11/______ entitled “Augmented Reality-Based System and Method Providing Status and Control of Unmanned Vehicles,” filed Mar. 8, 2007; and
    3. U.S. application Ser. No. 11/516,545 entitled “Method and System for Geo-Referencing and Visualization of Detected Contaminants,” filed Sep. 7, 2006.

FIELD OF THE DISCLOSURE

This disclosure relates to showing the location of, and relevant data about, personnel and sensors within an environment on top of a view of that environment.

BRIEF DESCRIPTION OF THE DRAWING

The following description, given with respect to the attached drawings, may be better understood with reference to the non-limiting examples shown in the drawings, wherein:

FIG. 1: Prior art indoor personnel information system using only radio communication;

FIG. 2: Prior art indoor personnel information system incorporating a map display;

FIG. 3: Prior art indoor personnel location information system incorporating a personnel location sensor;

FIG. 4: Exemplary indoor personnel location information system;

FIG. 5: Exemplary indoor personnel location information system incorporating an optical see-through display;

FIG. 6: Exemplary indoor personnel location information system incorporating a video display with a co-located camera;

FIG. 7: Exemplary indoor personnel location information system incorporating a video display with the camera and display at different locations;

FIG. 8: Exemplary indoor personnel location information system incorporating video displays for both the user and a first responder;

FIG. 9: Exemplary view of information superimposed on top of a view of an environment from the perspective of a responder outside of the environment;

FIG. 10: Exemplary indoor personnel location system; and

FIG. 11: Exemplary view of information superimposed on top of a view of an environment from the perspective of a responder within the environment.

INTRODUCTION

It has long been desirable to provide enhanced situational awareness to first responders. For example, providing first responders with more information about their surrounding environment could improve rescue operations. Prior art devices have attempted to provide enhanced situational awareness to first responders by combining a virtual representation of an environment (e.g. a map or 3D representation of a building) with status information received from first responders, and relying on a user to interpret the relevance of the combined information and communicate it to the first responders.

FIG. 1 illustrates one of the simplest ways prior art systems provide information to first responders. In FIG. 1, first responder 100 and user 102 communicate across communication channel 106 using respective communication devices 104a and 104b. Communication devices 104a and 104b are typically radio transceivers, and communication channel 106 is the physical medium through which the communication devices are linked. In the case where communication devices 104a and 104b are radio transceivers, communication channel 106 is simply the air. User 102 is located some distance away from responder 100 and has a perspective of the first responder's surrounding environment 108 that allows user 102 to provide responder 100 with information about environment 108 that is not immediately available to responder 100.

FIG. 2 illustrates a prior art system that enhances the prior art system of FIG. 1 by incorporating a computer 110 that provides a map 112 which can be viewed by user 102 on a display 114. Map 112 provides the user 102 with more information about environment 108. This information can be communicated by the user 102 to the first responder 100. Map 112 is typically a static 2D or 3D representation of a portion of environment 108.

FIG. 3 illustrates a prior art system that enhances the prior art system of FIG. 2 by equipping first responder 100 with a sensor 116 that allows the first responder's location to be monitored. This allows computer 110 to plot the first responder's location on the map 112. Thus, display 114 provides the user 102 with the first responder's location superimposed on map 112.

The prior art systems illustrated in FIGS. 1 and 2 do not combine a dynamic representation of an environment with information received from first responders. As a consequence, user 102 typically must create a mental picture of the location of each first responder 100 with respect to the environment 108 using communications received from each responder or team of responders. Even if a map or 3D display of a virtual view of the environment is used, as in the system illustrated in FIG. 3, this view is not aligned with the real environment 108 and therefore requires mental integration to relate received information to the environment 108. Further, given the large number of buildings in a metropolitan area, it is rare that a map or a 3D model will be available for every building.

The following example provides an illustration of exemplary prior art systems. In an event where a one-story building is on fire, firefighters (i.e. first responders) arrive and enter the building. As they move around, they let the captain (i.e. user) know roughly where they are in the building (e.g. “I am entering the North-East corner”). The captain can use a map of the building to plot the locations of the firefighters in the building, or a more modern system might automatically plot the locations on a map, given sensors able to sense the locations of the responders in the building. The captain can then communicate information to the firefighters about their locations based on the map and/or the captain's view of the building. Alternatively, the captain might keep his view of the building for his own use without communicating information to the firefighters. In this example, dynamic information about the building (e.g. which parts are on fire and/or collapsing) is not combined with information received from the firefighters (e.g. locations). That is, the captain must look to the map to determine where the firefighters are located, look to the building to see which parts are on fire, and integrate both types of information to determine whether any firefighters are in danger. Only after such integration can the captain communicate to the firefighters whether they are in danger.

The system described herein uses augmented reality to show information received from first responders on top of a live view of the first responders' environment. By placing information received from first responders on top of a dynamic, real-time view of an environment, the system can provide an enhanced representation of the environment. This enhanced representation can be used to provide enhanced situational awareness for first responders. For example, in the scenario described above, if part of the building can be seen becoming weak under fire, the captain will immediately be able to determine whether any of the firefighters are in danger by looking at the information superimposed on the dynamic view of the environment. The captain can then instruct the person at that location to leave that area of the building. Further, if a responder is down, it is also possible to see the responder's location with respect to the actual building, which can be useful in determining the best way to reach the responder given the current conditions of the building. The system can also show the locations and values of sensors placed within the environment superimposed on top of a real-time view of the environment. For example, when a temperature sensor is dropped by a firefighter, the sensor's own tracking system (or the last location of the firefighter at the time he dropped the sensor) provides the location of the sensor. By showing data coming from the sensor on top of a real-time view of the environment, the captain can directly relate the sensor reading to a location in the environment.

THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS

The situation awareness system described herein makes use of Augmented Reality (AR) technology to provide the necessary view to the user. AR is like Virtual Reality, but instead of using completely artificial images (e.g. maps or 3D models), AR superimposes 3D graphics on a view of the real world. A simple, familiar example of AR is the yellow first-down line superimposed on televised football games. An example of an AR system that can be employed is one described in the examples of U.S. application Ser. No. 11/441,241 in combination with the present disclosure.

An AR visualization system comprises: a spatial database, a graphical computer, a viewpoint tracking device, and a display device.

The working principle of an Augmented Reality system is described below. A display device that displays dynamic images corresponding to a user's view is tracked. That is, the display's position and orientation are measured by a viewpoint tracking device. A spatial database and a graphical computer associate information with a real world environment. Associated information is superimposed on top of the dynamic display image in accordance with the display's position and orientation, thereby creating an augmented image.
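By way of illustration only, this working principle can be sketched as a simple render loop. The objects below (tracker, database, renderer, display) and their methods are hypothetical placeholders introduced here for clarity; they are not part of the disclosed system.

```python
# Minimal sketch of the AR working principle described above; all object and
# method names are hypothetical placeholders, not part of the disclosure.

def ar_frame_loop(tracker, database, renderer, display):
    """Produce one augmented image per iteration."""
    while display.is_active():
        # Measure the display's current position and orientation.
        pose = tracker.read_pose()
        # Retrieve the information the spatial database associates with the
        # real-world environment near this viewpoint.
        items = database.query_visible(pose)
        # Render graphics aligned with the measured viewpoint.
        overlay = renderer.draw(items, pose)
        # Superimpose the graphics on the view of the real world.
        display.show_augmented(overlay)
```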

FIG. 4 shows an exemplary embodiment of an AR system used to provide a responder 100 with enhanced situational awareness.

Computer 110 collects information from sensors 116a worn by first responder 100 and sensors 116b placed throughout surrounding environment 108. Sensors 116a include sensors that allow a first responder's location to be monitored and can include sensors that provide information about the state of the first responder's health (e.g. temperature, heart rate, etc.), the first responder's equipment (e.g. capacity and/or power level of equipment), and conditions within the first responder's immediate proximity (e.g. temperature, air content). Sensors 116b can include any sensors that can gather information about the conditions of the environment 108. Examples of such sensors 116b include temperature sensors, radiation sensors, smoke detectors, gas sensors, wind sensors, pressure sensors, humidity sensors and the like. It should be noted that although FIG. 4 shows a single first responder 100, such a representation is not intended to be limiting, and any number of first responders 100 could be located within the environment 108. Sensors 116a and 116b communicate with computer 110 using a communication medium similar to communication channel 106.
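As a hedged illustration of the kind of record such sensors might report to computer 110, consider the following sketch; every field name here is an assumption introduced for clarity and is not specified in the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorReport:
    """Hypothetical status record from a sensor 116a (worn) or 116b (placed)."""
    sensor_id: str                        # unique sensor identifier
    position: Tuple[float, float, float]  # location in environment coordinates
    kind: str                             # e.g. "temperature", "heart_rate", "oxygen_tank"
    value: float                          # current reading
    worn_by: Optional[str] = None         # responder ID if worn (sensor 116a)

# Example: a temperature sensor dropped inside the building.
report = SensorReport("t-07", (12.5, 3.0, 1.1), "temperature", 435.0)
```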

Computer 110 updates database 118 with the information received from sensors 116a and 116b, and database 118 stores that information. Database 118 may additionally contain model information about the environment 108, such as a 3D model of a building. Model information may be used to provide advanced functionality, but is not necessary for the basic system implementation. Graphical computer 110 continuously renders information from the database 118, thereby showing each first responder's location within the environment 108 and generating graphics from the current information received from sensors 116a and 116b. Because a sensor does not move once it is installed by a firefighter, each sensor need not have its own tracking device; the location of the firefighter at the moment the sensor was dropped (or activated) can instead be used as the location of the sensor, as sketched below.
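The drop-location rule just described can be sketched as follows. The database methods shown are hypothetical; the only point illustrated is that a stationary sensor inherits the responder's last tracked position at the moment it is dropped or activated.

```python
# Hypothetical database API; illustrates the drop-location rule only.

def register_dropped_sensor(database, sensor_id, responder_id):
    # A sensor does not move once installed, so the responder's position at
    # drop (or activation) time serves as the sensor's permanent location.
    drop_position = database.last_known_position(responder_id)
    database.set_sensor_location(sensor_id, drop_position)

def update_sensor_reading(database, report):
    # Store the latest value so graphical computer 110 can render it.
    database.store_reading(report.sensor_id, report.value)
```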

Computer 110 also receives information about the viewpoint of the display device 124 captured by the tracking device 122. Computer 110 takes information from database 118 and tracking information about the viewpoint of the display device 124 and renders current information from sensors 116a and 116b in relation to the current view of the display device 124 using a common 3D projection process. By measuring in real time the position and orientation of the display 124 (i.e. determining the user's viewpoint), it is possible to align information rendered from the spatial database 118 with the corresponding viewpoint.
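The disclosure does not prescribe the projection in detail; the following is a minimal sketch assuming a pinhole camera model, where the tracked pose of display 124 supplies the view position and orientation.

```python
import numpy as np

def project_point(world_point, view_position, view_rotation, focal_px, center_px):
    """Project a 3D point (e.g. a sensor location from database 118) into 2D
    display coordinates, given the tracked viewpoint of display 124.

    view_rotation: 3x3 matrix mapping world coordinates into the display's
    camera frame, as measured by tracking device 122. Pinhole-model sketch;
    not the disclosure's prescribed method.
    """
    # Express the point in the display's coordinate frame.
    p_cam = view_rotation @ (np.asarray(world_point, float) - np.asarray(view_position, float))
    if p_cam[2] <= 0.0:
        return None  # behind the viewpoint; nothing to draw
    # Perspective divide, then scale and offset to pixel coordinates.
    u = center_px[0] + focal_px * p_cam[0] / p_cam[2]
    v = center_px[1] + focal_px * p_cam[1] / p_cam[2]
    return (u, v)
```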

The display device 124 is able to show the image generated by the graphical computer 110 superimposed on a view of the surrounding environment 108 as “seen” by or through the display device 124. Thus, user 102 has a global perspective of environment 108 with information superimposed thereon and is able to use this enhanced global perspective to communicate information to first responder 100, thereby efficiently providing first responder 100 with information about environment 108 that would not otherwise be available to first responder 100.

FIG. 5 shows display device 124 implemented with an optical see-through display. Optical see-through displays show the image generated by the graphical computer 110 superimposed on a view of the surrounding environment 108 by using an optical beam splitter that lets through half of the light coming from the environment 108 in front and reflects half of the light coming from a display 124 showing the image generated by the graphical computer 110, in effect combining the real world environment 108 and the graphics. See-through displays are typically in the form of goggles worn by the user 102, but could also be a head-up display as used in fighter jets.

FIG. 6 shows the display device 124 implemented with a video see-through display. Video see-through displays show the image generated by the graphical computer 110 superimposed on a view of environment 108 by using a video camera 126 to capture video of environment 108 and showing that video on the display 124 after the image from the graphical computer 110 has been overlaid on top of it by video rendering device 128. In the case of a video see-through display, the camera capturing the view of the real world environment 108 and the display showing this video can be co-located in a single display device as shown in FIG. 6 or placed at different locations as shown in FIG. 7. Video displays can be implemented using various types of display technologies and can be located anywhere in proximity to user 102. In the firefighter example, display 124 could be a screen inside a truck, or a tablet computer or PDA outside the truck.
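A minimal sketch of the compositing step performed by a video rendering device such as 128 follows, assuming the graphical computer produces an RGBA overlay; the implementation shown is illustrative only, not the disclosed one.

```python
import numpy as np

def composite(camera_frame, overlay_rgba):
    """Alpha-blend an RGBA graphics overlay onto an RGB camera frame.

    Illustrative sketch; the disclosure does not specify this implementation.
    """
    rgb = overlay_rgba[..., :3].astype(np.float32)
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    frame = camera_frame.astype(np.float32)
    # Where alpha is 1 the graphics replace the video; where 0 the video shows.
    blended = alpha * rgb + (1.0 - alpha) * frame
    return blended.astype(np.uint8)
```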

The three exemplary configurations described above (optical see-through, co-located camera and display, and camera and display at different locations) are provided to aid understanding of the implementation of an AR system and are not intended to be limiting. Any AR system that is able to superimpose graphics that appear attached to the real world could be used.

FIG. 8 is an exemplary embodiment in which user 102 and first responder 100 each have displays 124. This allows first responder 100 to receive the augmented video displayed on the user's 102 video see-through display. In the case where there are multiple first responders 100, each responder 100 could receive video generated from any AR system used by another responder 100. Multiple video sources can be provided to user 102 and each first responder 100 in any known manner, e.g. split screen, multiple displays, switching sources, etc. It should be noted that a responder can receive video on a display in implementations where the user display and camera are not co-located, as in FIG. 7.

It should be noted that the elements shown in FIGS. 4-8 can be combined in any number of ways when appropriate (e.g. tracking device 122 and computer 110 can be combined within the same physical device). Further, the elements shown can be distinct physical devices that communicate with each other in any appropriate manner. For example, sensors 116a and 116b can communicate with computer 110 via radio communications, across a network using network protocols, or using any other appropriate method of electronic communications.

FIG. 9 illustrates the concept of superimposing information on top of a real world exterior building view using the example of firefighters inside a burning building. In this example, the view of the building could be provided by a video camera, which could be mounted on a truck near the building, held by a cameraman, or mounted on the captain in such a way that it represents the captain's view. The image could also be generated by using an optical see-through display. The image in FIG. 9 provides a perspective of the environment from outside the actual environment. As shown, the locations of responders called “John” and “Joe” are superimposed on top of the real-life view of the building. It should be noted that although John and Joe are represented by the same symbol (i.e. a cross), such a representation is not intended to be limiting, and each responder could be represented by a unique symbol.

Also displayed next to John's and Joe's names is information regarding the status of each. In this example, the percentage represents the level of oxygen that each has in his oxygen tank. Here John's oxygen tank is 80% full and Joe's tank is 90% full. This can provide the captain with an idea of how much time John and Joe have to operate inside the building. Avatars can alternatively be used to represent the first responders or any of the information received from them. There are numerous known avatars used in the electronic gaming art which could be incorporated into the system. Further, graphical information can represent the combined status of John and Joe, e.g. an indicator that represents a combined oxygen level. Alternatively, both could be shown using an aggregated symbol (representing a team of responders operating close by) to reduce display clutter.
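One plausible aggregation rule, offered here only as an assumption since the disclosure does not specify one, is to show the most constraining oxygen level among the responders in a team:

```python
def team_oxygen_indicator(levels_pct):
    """Worst-case (minimum) oxygen percentage for a team of nearby responders.

    Assumed aggregation rule; the disclosure does not prescribe one.
    """
    return min(levels_pct)

# John at 80% and Joe at 90% would appear as a single 80% team symbol.
assert team_oxygen_indicator([80, 90]) == 80
```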

Shown above the representations of John and Joe is data coming from sensors that have been dropped inside the building. In this exemplary embodiment, the sensors are temperature sensors dropped somewhere in the burning building. One such sensor is supplying a temperature reading of 435 degrees, as shown. Other types of sensors and additional temperature sensors can be placed throughout the building.

Although the principles of the exemplary system are illustrated using the example of a situational-awareness system for firefighters, exemplary systems can also be implemented in the following applications: a system showing “blue force” (friendly) locations in military operations in urban environments; a system showing the locations of workers inside a building; a system used by security personnel showing the location of an alarm sensor that has been triggered; and a system used by maintenance personnel to show the location of, and data about, a sensor or a set of sensors in a plant or building.

FIG. 10 illustrates the principle of a first responder 100 having an augmented view. The system in FIG. 10 is not implemented any differently from the systems in FIGS. 4-8. The difference is the position of the person with the augmented view of the environment 108. In FIGS. 4-8, this person (user 102) is outside of the environment 108. In FIG. 10, this person (first responder 100) is inside the environment. When a first responder 100 has an augmented view, information from sensors is superimposed on the responder's own view of the environment, which is distinct from the user's view. An example of such a view is shown in FIG. 11, in which information such as who is in the room and where they are, as well as values coming from sensors that have been placed in the environment, is superimposed on the firefighter's view of the room.

In this case, where first responders 100 see graphics superimposed on their individual views, first responders 100 might use a helmet-mounted, wrist-mounted, or PDA/tablet display to see the information aligned with the real world environment 108. This display 124 would show the same information, such as the locations of and data about responders 100 and sensors 116b, or any other useful information. If a responder 100 needs assistance, it now becomes easy for other responders to come to help because they can see where the responder 100 is with respect to the environment 108 and can see how to get to the responder 100 while avoiding obstacles.

While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims

1. A method of providing enhanced situational information to personnel located within an environment comprising the steps of:

receiving information from one or more sensors located within the environment, where the information includes sensor location information and status information about the environment and personnel therein;
obtaining viewpoint information corresponding to an exterior real-time view of said environment;
generating graphics using said sensor location information and viewpoint information, wherein the graphics include visual representations of the status information; and
displaying the generated graphics on a display at a supervisor station that is outside of said environment such that the generated graphics are superimposed on the real-time view.

2. The method of claim 1, further comprising the step of:

communicating situational information from the supervisor station to said personnel based on information visually represented on the display.

3. The method of claim 1, wherein said personnel located within the environment are occluded from the exterior real-time view.

4. The method of claim 1, wherein said personnel are firefighters and said environment is a building.

5. The method of claim 1, wherein said personnel are soldiers and said environment is a combat zone.

6. The method of claim 2, wherein the step of communicating situational information includes:

sending video shown on the display at the supervisor station to said personnel located within the environment.

7. The method of claim 1, wherein said visual representations include symbols representing said personnel.

8. An augmented reality system for providing enhanced situational information to personnel located within an environment comprising:

one or more sensors located within the environment;
a tracking system that obtains viewpoint information corresponding to a real-time view of said environment;
a processing system that receives information from said one or more sensors, where the information includes sensor location information and status information about the environment and personnel therein, and generates graphics using said sensor location information and said viewpoint information, wherein the graphics include visual representations of said status information; and
a display that displays the generated graphics at a supervisor station that is outside of said environment such that the generated graphics are superimposed on the real-time view.

9. The augmented reality system of claim 8, further comprising:

a communications device that communicates situational information from the supervisor station to said personnel based on information visually represented on the display.

10. The augmented reality system of claim 8, wherein said personnel are firefighters and said environment is a building.

11. The augmented reality system of claim 8, wherein said personnel are soldiers and said environment is a combat zone.

12. The augmented reality system of claim 9, wherein the situational information communicated to said personnel located within the environment includes video shown on the display at the supervisor station.

13. The augmented reality system of claim 8, wherein said visual representations include symbols representing said personnel.

14. A method of providing enhanced situational information to personnel located within an environment comprising the steps of:

receiving information from one or more sensors located within the environment, where the information includes sensor location information and status information about the environment and personnel therein;
obtaining viewpoint information corresponding to a real-time view of said environment;
generating graphics using said sensor location information and viewpoint information, wherein the graphics include visual representations of the status information; and
displaying the generated graphics on a display such that the generated graphics are superimposed on the real-time view.

15. The method of claim 14, wherein said personnel are firefighters and said environment is a building.

16. The method of claim 14, wherein said personnel are soldiers and said environment is a combat zone.

17. A method of providing enhanced situational information about an environment comprising the steps of:

receiving information from one or more sensors located within the environment, where the information includes sensor location information and status information about the environment;
obtaining viewpoint information corresponding to a real-time view of said environment;
generating graphics using said sensor location information and viewpoint information, wherein the graphics include visual representations of the status information; and
displaying the generated graphics on a display such that the generated graphics are superimposed on the real-time view.

18. The method of claim 17, wherein said environment is a building, said one or more sensors correspond with one or more alarms in the building, and said status information indicates whether an alarm has been triggered.

Patent History
Publication number: 20080218331
Type: Application
Filed: Mar 8, 2007
Publication Date: Sep 11, 2008
Applicant: ITT Manufacturing Enterprises, Inc. (Wilmington, DE)
Inventor: Yohan Baillot (Reston, VA)
Application Number: 11/715,338
Classifications
Current U.S. Class: Plural Diverse Conditions (340/521)
International Classification: G08B 19/00 (20060101);