VIRTUAL ENVIRONMENT CAPTURE
A system and method for capturing and modeling an environment is described. A user may direct a rangefinder within an environment to capture data points and correlate the data points with predefined objects. One may also capture images of the environment to be used in the modeling environment. Using the present system, one may easily and quickly capture an environment using relatively few data points.
This application is a divisional of U.S. patent application Ser. No. 10/441,121, entitled “Virtual Environment Capture”, filed May 20, 2003, which claims priority to U.S. Provisional Application Ser. No. 60/432,009, entitled “Virtual Environment Capture System,” filed Dec. 10, 2002, whose contents are expressly incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
Aspects of the present invention relate to computer modeling of environments. More specifically, aspects of the present invention relate to accurately capturing virtual environments.
2. Description of Related Art
A need exists for three-dimensional models of interior environments. From the gaming community to the military, renderings of building interiors based on actual buildings are being sought. For example, applications are being developed to provide training at the individual human level to facilitate improved military operations and capabilities as well as situational awareness. The training of individuals improves with a greater level of realism during the training exercise. For instance, benefits gained from mission rehearsal increase with the realism of the environment in which the mission is rehearsed.
Numerous applications exist for modeling outdoor terrain including satellite mapping and traditional surveying techniques. However, interior modeling techniques have been constrained to using pre-existing computer aided design drawings, building layout drawings, and a person's recollection of an environment. These modeling approaches may rely on out-of-date information (for example, blueprints of a 50-year-old building). Further, these modeling approaches fail to accurately define furniture types and furniture placement. In some situations, one may need to orient oneself based on the furniture and wall coverings of a room. Without this information, one may become lost and have to spend valuable time reorienting oneself with the environment. Further, one may want to know what is movable and what is not movable. Accordingly, realistic interiors are needed for simulation and training purposes.
Traditional methods exist for rendering building interiors, including the use of computer aided design (CAD) drawings or laser scanning for the development of 3-D building models. However, as above, these approaches have not been helpful when attempting to interact with a rendered environment in real time. In some cases, the overload of information has made interaction with these environments difficult. For instance, original CAD drawings are not always current, and laser scanning requires significant post processing. In particular, laser scanning about a point in space generates a cloud of data points that needs to be processed to eliminate redundant points and convert the remainder into easily-handled polygons. Further, laser scanning from one position is sub-optimal, as points in a room are generally occluded by furniture. Occlusion of these points makes subsequent image processing difficult, as a computer would need to determine whether non-planar points represent furniture or a curve in the wall of a room. To accurately determine the walls and contents of a room, multiple passes may be needed to attempt to eliminate blind spots caused by furniture and other objects. Further, these techniques do not capture data with knowledge of what is being captured.
Some techniques have been used to capture photographs of building interiors. For example, spherical photographic images have been used to capture photographic information surrounding a point in space in a room. While these photographic images may be assembled into a video stream representing a path through the room, deviation from the predefined path is not possible.
Accordingly, an improved system is needed for capturing and modeling interior spaces.
BRIEF SUMMARY
Aspects of the present invention are directed to solving at least one of the issues identified above, thereby providing an enhanced image capture and modeling system for creating virtual environments from building interiors. In one aspect, a user is provided with a system for capturing photorealistic images of a building's interior and associating the images with specific data points. The data points may be obtained through use of a rangefinder. In some aspects, an imaging system may be attached to the rangefinder, thereby capturing both data points and photorealistic images at the same time. The user may use the combination of the rangefinder, the imaging system, and a system that monitors the vector from a position in the room to the rangefinder and the orientation of the rangefinder to determine distances from a central location to various objects about the user. Because of the user's interaction and selection of specific points on objects, objects may be selected and modeled using few data points. In other aspects, other sensors may be used in addition to or in place of the rangefinder and the imaging system. Information generated from the sensors may be combined into a virtual environment corresponding to the actual scanned environment. The rangefinder may include a laser rangefinder or other type of rangefinder. These and other aspects are described in greater detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
Aspects of the present invention are illustrated by way of example, and not by way of limitation, in the accompanying figures.
Aspects of the present invention permit rapid capture and modeling of interior environments. Instead of using clouds of data points, which can require significant post processing to determine individual objects and construct a virtual environment, aspects of the present invention relate to capturing specific points in an actual environment and coordinating the point capture with predefined objects. The coordination with objects simplifies or eliminates post processing requirements.
The following description is divided into headings to assist a user in understanding aspects of the present invention. The headings include: modeling; data capture hardware; data gathering processes; data synthesis software; and applications of modeled environments.
Modeling
Interiors of buildings may be used by different entities. For example, architects may use virtual environments during construction of a building so as to properly define locations of walls, HVAC systems, and pillars. Architects may also use virtual environments to experiment with remodeling spaces, including moving walls and windows. Real estate agents and interior designers may use virtual environments for providing tours to prospective buyers and for suggesting modifications to existing spaces. Police and military units may further use virtual models of interior spaces for rehearsing delicate missions, including hostage rescues, in real time. For example, if a unit needed to rescue hostages from an embassy and the embassy was modeled, the unit would be able to rehearse in real time, from a variety of different angles, how different missions might be accomplished. Also, using a virtual rendering of a building allows experimentation into blowing holes in the walls to permit entry into adjoining spaces, as well as determining potential angles from which enemy combatants may be firing. Further, the modeled environment may be used in distributed simulations in which two or more users or groups of users may interact within the environment. Further, the modeled environment may be used for military training.
To capture and model interior space, the system uses a combination of hardware and software to capture and model an environment.
The following provides a brief overview of how, in one example, one may model a room, with reference to the steps shown in the accompanying figure.
The selection of the object from the object library or object database may include a list of points that need to be obtained to locate a virtual instance of the object so that it corresponds to the actual location of the object in the actual room. This may be done by navigating a user interface (using a mouse or voice commands) to have the system retrieve the virtual object from the object library or database. The contents of the object library may be established based on information from a customer requesting the modeled environment. For example, the customer may specify that the environment to be captured may be an office (for which an object library of office furniture is obtained), a residential house (for which an object library of typical home furnishings is obtained), an industrial complex (for which industrial machines and other industrial objects are obtained), and the like. In other situations, a generic library of objects may be used.
To ease capturing of the points, the points may be gathered in a predefined order as shown in step 12. The order of the points to be gathered may be explicitly shown in the object from the object library or database. Once the system knows which object will be identified next by the user, the user manually guides a rangefinder to illuminate spots on the actual object in the room. The rangefinder described herein may be any suitable rangefinder including a laser rangefinder, a sonar-based rangefinder, a system that determines the position of a probe (wired or wirelessly, a ruler, touch probe, GPS, ultra-wideband radio positioning, for instance), and the like. For purposes of simplicity, the rangefinder is described as a laser rangefinder.
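By way of illustration only, the following Python sketch shows one way an object-library entry carrying such an ordered list of capture points might be represented; the class, field names, file paths, and sample entries are assumptions of this example and are not part of the described system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectTemplate:
    """Illustrative object-library entry: a virtual object plus the ordered
    list of real-world points a user is prompted to capture to place it."""
    name: str                               # e.g. "office_chair" (assumed name)
    mesh_file: str                          # path to the 3-D model for this object (assumed)
    # Ordered capture prompts shown to the user; the order matters because
    # (in this sketch) the first point locates, the second orients, and the
    # third scales the object instance.
    capture_points: List[str] = field(default_factory=list)

# A hypothetical library keyed by a spoken or menu-selected object name.
OBJECT_LIBRARY = {
    "chair": ObjectTemplate("chair", "models/chair.obj",
                            ["seat front-left corner", "seat front-right corner"]),
    "desk":  ObjectTemplate("desk", "models/desk.obj",
                            ["top near-left corner", "top near-right corner",
                             "top far-right corner"]),
}
```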
The laser from the laser range finder may be visible light or may be invisible to the unaided eye. When the user has positioned the illuminated spot on a desired position of the object, the user then indicates to the system (manually, verbally, or the like) that the current distance from the illuminated spot on the object to the data capture unit should be read as shown in step 13. Because the system determines the vector to the data capture unit and the angle of the laser rangefinder, the vector from the data capture unit to an illuminated spot may be determined. The location of the captured spot may be used as a three dimensional reference point for the virtual object from the object library.
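As an illustrative sketch of this computation, the following converts a measured range and the rangefinder's orientation into a Cartesian point relative to the data capture unit; the axis convention and function name are assumptions of this example.

```python
import math

def spot_position(range_m: float, yaw_rad: float, pitch_rad: float):
    """Convert a measured range and the rangefinder's yaw/pitch into a
    Cartesian point relative to the data capture unit.

    Axis convention (an assumption of this sketch): x forward, y left, z up;
    yaw is rotation about z, pitch is elevation above the horizontal plane.
    """
    horizontal = range_m * math.cos(pitch_rad)   # projection onto the floor plane
    x = horizontal * math.cos(yaw_rad)
    y = horizontal * math.sin(yaw_rad)
    z = range_m * math.sin(pitch_rad)
    return (x, y, z)

# Example: a spot 4.2 m away, 30 degrees to the left, 10 degrees above level.
print(spot_position(4.2, math.radians(30), math.radians(10)))
```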
Next, in step 14, the virtual object may be placed in the modeled environment. Next, in step 15, another point may be captured. In step 16, the instance of the object may be oriented in the modeled environment. In step 17, another point may be captured and, in step 18, the instance of the object scaled with the third point. It is appreciated that, alternatively, the placement of the object instance into the virtual environment may occur only after the scale of the object has been determined in step 18 rather than in step 13.
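A minimal sketch of how three captured points might drive the placement, orientation, and scaling of steps 14-18 is shown below; treating the first point as the anchor, the second as the heading reference, and the third as the scale reference is an assumption of this example rather than a requirement of the described system.

```python
import math

def place_orient_scale(p1, p2, p3, template_span: float):
    """Derive a pose for a library object from three captured points.

    Assumptions of this sketch: p1 anchors the object, the p1->p2 direction
    gives its yaw in the floor plane, and the p1->p3 distance is compared
    with the template's known span (template_span) to give a uniform scale.
    """
    position = p1
    yaw = math.atan2(p2[1] - p1[1], p2[0] - p1[0])   # heading in the floor plane
    measured_span = math.dist(p1, p3)                # Euclidean distance (Python 3.8+)
    scale = measured_span / template_span
    return {"position": position, "yaw": yaw, "scale": scale}

pose = place_orient_scale((1.0, 2.0, 0.0), (1.0, 3.0, 0.0), (2.5, 2.0, 0.0),
                          template_span=1.5)
print(pose)   # e.g. {'position': (1.0, 2.0, 0.0), 'yaw': 1.57..., 'scale': 1.0}
```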
Next, a user directs a laser rangefinder from location A 25 to illuminate spots 1 and 2, sequentially, on chair 24 (which is chair 20 but being illuminated by the laser rangefinder). The user may press a button on the laser rangefinder, may press a button on a mouse, or may speak into a headset to have the system accept the current location illuminated by the laser rangefinder as a location for chair 24. Additional points may be used to further orient and/or scale each chair.
The two locations 1 and 2 scanned from chair 24 are used to orient an instance of the chair in virtual environment 26. The virtual instance of the chair 28 is registered by the two locations 1 and 2 from position A 27 corresponding to position A 25 in the actual room. One position of the two positions 1 and 2 may be used to locate the chair 28 and the other of the two positions 1 and 2 used to specify the angle of the chair 28. It is appreciated that some objects may only be positioned in an environment. For example, one may need only to specify that a plant exists on a desk, rather than determine the orientation and size of the plant. Here, the size of the plant may be predefined in the library of objects.
The following describes in greater detail the system used to create the modeled environment.
Data Capture Hardware
The data capture hardware may be separated into two systems: a mobile data capture unit and a data assembly station.
Data assembly station 108 then creates the modeled environment 103. The data assembly station 108 may use the raw data from the sensors or may use created objects that were created at the mobile data capture unit 104. The data assembly station may also perform checks on the information received from the mobile data capture unit 104 to ensure that objects do not overlap or occupy the same physical space. Further, multiple data capture units (three, for example, are shown here) 104, 112, and 113 may be used to capture information for the data assembly station 108. The data assembly station 108 may coordinate and integrate information from the multiple data capture units 104, 112, and 113. Further, the data assembly station 108 may organize the mobile data capture units 104, 112, and 113 to capture an environment by, for instance, parsing an environment into sub-environments for each mobile data capture unit to handle independently. Also, the data assembly station 108 may coordinate the mobile data capture units 104, 112, and 113 to capture an environment with the mobile data capture units repeatedly scanning rooms, and then integrate the information into a single dataset. Using information from multiple data capture units may help minimize errors by averaging the models from each unit. The integration of data at the data assembly station 108 from one or more mobile data capture units may include a set of software tools that check and address issues with modeled geometry as well as images. For instance, the data assembly station 108 may determine that a room exists between the coverage areas from two mobile data capture units and instruct one or more of the mobile data capture units to capture that room. The data assembly station 108 may include conversion or export to a target runtime environment.
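As one illustrative example of such a consistency check, the following sketch tests whether two objects reported by different mobile data capture units occupy overlapping axis-aligned bounding boxes; the function and sample data are assumptions of this example, and a practical system might use oriented boxes or mesh-level tests.

```python
def boxes_overlap(a_min, a_max, b_min, b_max) -> bool:
    """Return True if two axis-aligned bounding boxes intersect.

    A minimal sketch of the kind of overlap check the data assembly station
    might run on objects reported by different mobile data capture units.
    """
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Two desks reported by different mobile data capture units (hypothetical values):
desk_a = ((0.0, 0.0, 0.0), (1.6, 0.8, 0.75))
desk_b = ((1.5, 0.5, 0.0), (3.1, 1.3, 0.75))
print(boxes_overlap(*desk_a, *desk_b))   # True -> flag for operator review
```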
The mobile data capture unit 104 and the data assembly station 108 may be combined or may be separate from each other. If separate, the mobile data capture unit 104 may collect and initially test the collected data to ensure completeness of capture of an initial environment (for example, all the walls of a room). The data assembly station 108 may assemble data from the mobile data capture unit 104, build a run-time format database and test the assembled data to ensure an environment has been accurately captured.
Testing data among multiple mobile data capture units may occur at multiple levels. First, each mobile data capture unit may capture data and test the captured data for completeness. The data assembly station 108 collects data from the mobile data capture units. The received data is integrated and tested. Problem areas may be communicated back to the operators of the mobile data capture units (wired or wirelessly) so that they may resample, fix, or sample additional data. Data integration software may be running on the data assembly station (for instance, Multi-Gen Creator by Computer Associates, Inc. and OTB-Recompile may be running). It is appreciated that other software may be run in conjunction with or in place of the listed software on the data assembly station.
The testing may be performed at any level. One advantage of performing testing at the mobile data capture units is that it provides the operators with feedback indicating which areas of captured data need to be corrected.
The objects created at the mobile data capture unit may have been created with the guidance of a user, who guided the laser rangefinder 105 about the environment and captured points of the environment, or they may be inferred from the capture order or from previously captured knowledge of an environment. For example, one may position a door frame in a first room. Moving into the second room, the door frame may be placed in the second room based on 1) the locations of the rooms next to each other, 2) a predefined order in which rooms were to be captured, and/or 3) a determination that the walls are in alignment and the door frame is common to both rooms. The captured points may then be instantiated as objects from a predefined object library 109. These object models from object library 109 may be selected through user interface 111 as controlled by processor 110.
In one aspect, the laser rangefinder 105 may be separate from the camera 106. In another aspect, the laser rangefinder 105 and the camera 106 may be mounted together and aligned so that the camera 106 will see the image surrounding a spot illuminated by the laser rangefinder 105. The camera 106 can be a digital camera that captures a visual image. The camera 106 may also be or include an infrared/thermal camera so as to see the contents of and what is located behind a wall. Further, camera 106 may also include a night vision camera to accurately capture night vision images.
The laser rangefinder 105 determines distance between an object illuminated by its laser spot and itself. Using articulated arm 202, one may determine the position of the illuminated spot compared to tripod 201. With tripod 201 fixed at a position in a room, the laser rangefinder 105 may be moved about on articulated arm 202 and have all distances from illuminated objects to tripod 201 determined. These distances may be temporarily stored in portable computer 203. Additionally or alternatively, the portable computer may transmit received information over connection 204 to data assembly station 108. Connection 204 may be a wired or wireless connection.
Articulated arm 202 may be a multi-jointed arm that includes sensors throughout the arm, where the sensors determine the angular offset of each segment of the arm. The combination of segments may be modeled as a transform matrix and applied to the information from the laser rangefinder 105. The result provides the location of laser rangefinder 105 and camera 106 relative to tripod 201, as well as their pitch, yaw, and roll. Multi-jointed arms are available as the Microscribe arms from Immersion.com of San Jose, Calif. and from Faro Technologies of Florida. The specific arm 202 to be used may be chosen based on the precision required in the modeling of an environment. While the laser rangefinder 105 may not benefit from a determination of roll, the camera 106 benefits in that the images it captures may be normalized based on the amount of roll experienced by the camera.
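A minimal sketch of this transform chain is shown below; it assumes each joint reports a single angle and each segment has a known length, which is a simplification of a real multi-axis arm, and the function names and values are illustrative only.

```python
import numpy as np

def joint_transform(angle_rad: float, segment_len: float) -> np.ndarray:
    """Homogeneous transform of one arm segment: rotate about the joint's z
    axis, then translate along the rotated segment (a simplifying assumption;
    a real arm would also encode each joint's axis orientation)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0, segment_len * c],
                     [s,  c, 0, segment_len * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def world_point(joint_angles, segment_lengths, range_m):
    """Chain every segment's transform from the tripod base to the rangefinder
    head, then push the measured range out along the head's local x axis."""
    pose = np.eye(4)
    for angle, length in zip(joint_angles, segment_lengths):
        pose = pose @ joint_transform(angle, length)
    spot_local = np.array([range_m, 0.0, 0.0, 1.0])   # illuminated spot in the head frame
    return (pose @ spot_local)[:3]

# Hypothetical three-segment arm reading and a 5 m range measurement.
print(world_point([0.3, -0.2, 0.1], [0.4, 0.4, 0.2], 5.0))
```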
Camera 106 may include a digital camera. Any digital camera may be used, with the resulting resolution of the camera affecting the clarity of resultant modeled environments. The images from camera 106 may be mapped onto the surfaces of objects to provide a more realistic version of the object.
Further, articulated arm 202 may be replaced by a handheld laser and camera combination. The handheld combination may include a location determining device (including GPS, differential GPS, time multiplexed ultra wideband, and other location determining systems). The handheld unit may transmit its location relative to the tripod 201 or another location, or may associate its position with the data points captured with the laser rangefinder. By associating its position with the information from the laser rangefinder, a system modeling the environment would be able to use the points of the environment themselves, rather than using an encoder arm. For instance, one may use GPS or ultra wideband radio location systems to generate location information without having to be physically connected to the mobile data capture unit 104. Further, different range finding techniques, including physical measurements with a probe or the like, may be used to determine points in the environment.
In one example, the mobile data capture unit 104 may be controlled by a keyboard and/or a mouse. Alternatively, mobile data capture unit 104 may also include a headset 205 including a microphone 206 for receiving voice commands from an operator. The operator may use the microphone 206 to select and instruct the portable computer 203 regarding an object being illuminated by laser rangefinder 105 and captured by camera 106.
It is noted that the camera 106 does not need to be attached to the laser rangefinder 105. While attaching the two devices eliminates processing to correlate a spot from the laser rangefinder 105 with an image captured by camera 106, one may instead separate the laser rangefinder from the camera, have the camera view an illuminated object, and apply an offset matrix relating the camera 106 and the laser rangefinder 105 to correlate the camera image with the location of the illuminated object.
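By way of illustration, the following sketch applies such an offset matrix to express an illuminated spot in the camera's frame; the matrices and numeric values shown are assumptions standing in for a one-time calibration and are not part of the described system.

```python
import numpy as np

def point_in_camera_frame(rangefinder_pose: np.ndarray,
                          camera_offset: np.ndarray,
                          spot_in_rangefinder: np.ndarray) -> np.ndarray:
    """Express an illuminated spot in the camera's frame when the camera is
    mounted apart from the rangefinder.

    rangefinder_pose:    4x4 world pose of the rangefinder head.
    camera_offset:       4x4 fixed transform from rangefinder frame to camera
                         frame (the "offset matrix"; assumed from calibration).
    spot_in_rangefinder: homogeneous [x, y, z, 1] point measured by the rangefinder.
    """
    camera_pose = rangefinder_pose @ camera_offset           # camera's world pose
    return np.linalg.inv(camera_pose) @ (rangefinder_pose @ spot_in_rangefinder)

rangefinder_pose = np.eye(4)
camera_offset = np.eye(4)
camera_offset[:3, 3] = [0.0, -0.1, 0.05]    # camera 10 cm to the side, 5 cm up (assumed)
spot = np.array([5.0, 0.0, 0.0, 1.0])
print(point_in_camera_frame(rangefinder_pose, camera_offset, spot))   # [5.0, 0.1, -0.05, 1.0]
```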
Data Gathering Processes
In one aspect, the user may follow a predefined path or series of locations on an object and have a processor match an instance of an object to the obtained locations. Alternatively, the user may obtain the locations in a random order and have the processor attempt to orient the instance of the object to fit the obtained locations.
Camera 106 may be aligned with laser rangefinder 105 so that the spot illuminated by laser rangefinder 105 falls roughly in the center of the field of view of camera 106. Image 603′I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603′. Also, image 603″I is the image captured by camera 106 when the laser rangefinder 105 is illuminating spot 603″. Because the locations are captured in an order specified by the instance of the picture, the dimensions of the picture may be determined. The instance of the picture may also include an indication of what visual information is relevant to it. For location 603′, the relevant image information from image 603′I is the bottom right quadrant of the image. For location 603″, the relevant image information from image 603″I is the top left quadrant of the image. These two quadrants of the images may be correlated until an overlap is found (shown here as region 603O). The two images 603′I and 603″I may be merged using this overlapping region 603O as a guide. Image portions lying outside the picture may be eliminated. Further, the remaining image may be further parsed, cropped, skewed, stretched, and otherwise manipulated to rectify it in association with a modeled version of picture 603. This may be necessary if the angle at which picture 603 was captured resulted in a foreshortened image of the picture 603.
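A brute-force sketch of this overlap search is shown below; it slides a small patch taken from one image over the other and keeps the offset with the smallest sum of squared differences. The patch size and the synthetic images are assumptions of this example, and a practical system might use feature matching or phase correlation instead.

```python
import numpy as np

def find_overlap_offset(img_a: np.ndarray, img_b: np.ndarray, patch: int = 32):
    """Locate where the top-left patch of img_b appears inside img_a.

    Illustrative quadrant-correlation idea: take a small patch from the region
    of img_b expected to overlap, slide it over img_a, and return the offset
    with the smallest sum of squared differences.
    """
    template = img_b[:patch, :patch].astype(float)
    best, best_off = np.inf, (0, 0)
    h, w = img_a.shape
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            diff = img_a[r:r + patch, c:c + patch].astype(float) - template
            score = float(np.sum(diff * diff))
            if score < best:
                best, best_off = score, (r, c)
    return best_off   # where img_b's origin aligns within img_a

# Synthetic example: img_b overlaps img_a starting at row 40, column 60.
img_a = np.random.rand(100, 120)
img_b = np.empty((100, 120))
img_b[:60, :60] = img_a[40:, 60:]           # shared (overlapping) region
img_b[60:, :] = np.random.rand(40, 120)
img_b[:60, 60:] = np.random.rand(60, 60)
print(find_overlap_offset(img_a, img_b))    # -> (40, 60)
```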
In an alternative approach, one may concentrate on corners of the room to locate walls, ceilings, and floors. For instance, one may locate spots 715-717 to determine one corner and locate spots 718-720 to determine another corner. Locating corners as opposed to walls per se provides the benefit of being able to locate corners relatively easily when walls are occluded by multiple objects. Alternatively, one may locate walls separately and have the locations of corners determined by data assembly station 108 as described above. This alternative approach eliminates issues of corners being occluded by furniture and other objects. Further, one may specify points 715-717 and have the rangefinder repeatedly scan between the points, thereby constructing a number of points from which the planes of the walls and ceilings may be determined. Further, the intersecting edge between the plane of the ceiling and those of the walls may be determined from the multiple scans between the points by obtaining the maximum/minimum values of the points and using these values to determine the location of the edge. This technique may be used for other objects or intersections of objects as well. This technique uses some degree of automation to speed up data capture, thereby capturing knowledge of the environment at a higher level.
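As an illustrative sketch, the points scanned between user-specified spots could be reduced to wall and ceiling planes with a least-squares fit, and the direction of the edge between two planes taken from their normals; the functions below are assumptions of this example rather than the specific computation used by the described system.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through a set of scanned 3-D points.

    points is an (N, 3) array; returns (centroid, unit normal). The singular
    vector with the smallest singular value of the centered points is the normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_intersection_direction(n1: np.ndarray, n2: np.ndarray) -> np.ndarray:
    """Direction of the edge where two planes (e.g. ceiling and wall) meet."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

# Noisy samples scanned across a (nearly) horizontal ceiling at height 2.5 m:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 4, 200), rng.uniform(0, 3, 200),
                       2.5 + rng.normal(0, 0.005, 200)])
centroid, normal = fit_plane(pts)
print(centroid, normal)   # normal close to (0, 0, +/-1)
```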
A user may move between multiple rooms to capture the environments of each room.
From position 904, a user locates doorframe 908 and determines its location with respect to position 904. Next, the user moves to position 905 and determines the location of doorframe 908 from the perspective of position 905. Next, the user determines the location of doorframe 909 from the perspective of position 905, then moves to position 906 and again determines the location of doorframe 909 from the perspective of position 906. Finally, the user locates doorframe 910 from both of positions 906 and 907. Using this stepwise approach, the locations of the mobile data capture unit as it moves between positions 904-907 may be determined.
This stepwise approach is illustrated in the accompanying figure.
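A minimal sketch of this position chaining is shown below, under the simplifying assumption that successive capture positions are levelled and share a heading so that only a translation separates them; the coordinates are illustrative only.

```python
import numpy as np

def next_station_position(prev_station: np.ndarray,
                          landmark_from_prev: np.ndarray,
                          landmark_from_next: np.ndarray) -> np.ndarray:
    """Chain capture positions through a shared landmark (e.g. a doorframe).

    Sketch assumption: both stations are levelled and share a heading, so the
    landmark measured in each station's local frame differs only by the
    stations' translation. The landmark's world position is the same from
    either station, which pins down where the next station stands.
    """
    landmark_world = prev_station + landmark_from_prev
    return landmark_world - landmark_from_next

station_904 = np.array([0.0, 0.0, 0.0])
doorframe_from_904 = np.array([3.0, 1.0, 0.0])     # doorframe 908 as seen from position 904
doorframe_from_905 = np.array([-1.0, 2.0, 0.0])    # same doorframe as seen from position 905
station_905 = next_station_position(station_904, doorframe_from_904, doorframe_from_905)
print(station_905)   # -> [ 4. -1.  0.]
```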
Data Synthesis Software
Region 1105 shows a user interface similar to that of user interface 21 described above.
Region 1107 displays a variety of objects to be selected. For example, various types of office desks may be shown for placement as the desk in the modeled environment.
Finally, shown in broken line is step 1206, where testing may be performed. Various tests may be performed to attempt to complete the captured data of an environment. For instance, one may test to see if a captured environment encloses 3D space. If not, then the missing pieces may be obtained. One may test to see if all polygons face the interior of a room. While one may specify that both sides of polygons are to be mapped with graphics, a savings of run-time processing power is achieved when only the visible sides of polygons are mapped. Testing the polygons permits one to ensure that the sides of the polygons that are to have graphics are facing inward toward the inside of a room. One may test to ensure that polygons share vertices. The polygons may be adjusted so that vertices match. One may also test whether distinct rooms share the same rendered space (for instance, to determine if rooms intersect), whether excessive gaps exist between walls, whether walls are of a constant thickness (where specified), and whether floors and ceilings are at the same level (where specified). The specification of these items may occur before, during, and/or after data capture. In one example, the height of the ceiling may be specified when a project is first started, along with other requirements including maximum error tolerances and the like.
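Two of these tests are sketched below for illustration: a check that a polygon's normal faces the room interior and a simple vertex weld so that adjoining polygons share vertices. The winding convention, tolerance, and sample geometry are assumptions of this example.

```python
import numpy as np

def normal_faces_interior(triangle: np.ndarray, room_centroid: np.ndarray) -> bool:
    """Check that a triangle's normal points toward the inside of the room.

    triangle is a (3, 3) array of vertices; the winding/normal convention is
    an assumption of this sketch.
    """
    v0, v1, v2 = triangle
    normal = np.cross(v1 - v0, v2 - v0)
    to_interior = room_centroid - triangle.mean(axis=0)
    return float(np.dot(normal, to_interior)) > 0.0

def weld_vertices(vertices: np.ndarray, tol: float = 1e-3) -> np.ndarray:
    """Snap nearly coincident vertices together so adjoining polygons share them."""
    welded = vertices.copy()
    for i in range(len(welded)):
        for j in range(i + 1, len(welded)):
            if np.linalg.norm(welded[i] - welded[j]) < tol:
                welded[j] = welded[i]
    return welded

# A wall triangle wound so its normal points into the room:
wall = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 2.5], [4.0, 0.0, 0.0]])
print(normal_faces_interior(wall, room_centroid=np.array([2.0, 1.5, 1.25])))   # -> True
```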
Applications of Modeled Environments
Revenue may be generated from the ability to exchange information as shown in the accompanying figure.
To establish the specific requirements, tolerances, type of object library to use, and the like, a customer may be interviewed to determine the extent of the modeling requirements. For instance, a single house may only need a single modeler. However, a complex may require two or more mobile data capture units. Further, these mobile data capture units may have redundant capabilities or may have complementary data capturing capabilities (for instance, one mobile data capture unit may have a visual camera associated with a rangefinder and a second mobile data capture unit may have a thermal camera associated with its rangefinder). The requirements may also specify the accuracy needed for the project. If, for example, a project requires a certain accuracy and a mobile data capture unit exceeds the predefined accuracy tolerance, then the mobile data capture unit may be alerted to this issue and instructed to reacquire data points to conform to the accuracy tolerance. Further, the data assembly station may coordinate mobile data capture units to work together as a team to capture environments and not duplicate efforts. To this end, the data assembly station may assign tasks to the mobile data capture units and monitor their completion of each task.
Aspects of the present invention have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure.
Claims
1. A process for reducing errors in a rendered environment comprising the steps of:
- modeling a first room from a first position within said first room;
- modeling at least a second room connected to said first room from a second position;
- returning to said first position within said first room and obtaining distance information from said first position;
- determining an error value associated with the environment based on the distance information from said first position; and,
- sequentially subtracting the error value from the modeling of said first and said at least second room.
2. The process according to claim 1, further comprising:
- modeling at least a third room connected to said second room.
3. The process according to claim 1, wherein the modeling of the first room or the at least a second room includes a process for constructing a virtual object based on actual environment data comprising:
- receiving an identification of an object to be created;
- receiving at least one point to be associated with said object, said at least one point identified by a rangefinder;
- instantiating said object based on said at least one point.
4. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
- obtaining additional points; and
- modifying said instantiated object based on said additional points.
5. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
- instructing a user which points of said object a user is to capture with said rangefinder.
6. The process according to claim 5, wherein said points are from locations on a physical object in a room to be modeled.
7. The process according to claim 3, wherein the process for constructing a virtual object based on actual environment data further comprises:
- capturing image information with a camera associated with said rangefinder; and
- adding the image information to said object.
8. The process according to claim 3, wherein said rangefinder is a laser rangefinder.
9. A modeling and correction system configured to reduce errors in a rendered environment comprising:
- a contextual data capture unit configured to receive environment data from a first position and at least a second position, said data capture unit including a rangefinder for measuring specific points selectable by a user;
- a data assembly station configured to receive information from said data capture unit, and further having computer-readable instructions on a computer-readable medium that when executed model an environment based on environment data captured by said data capture unit, wherein the modeling comprises: after receiving the environment data from the first position and at least the second position, receiving a second reading of environment data from the first position; determining an error value associated with the environment based on the second reading from said first position; and sequentially subtracting the error value from the modeling of said first position and said at least second position.
10. The system according to claim 9, wherein the first position is in a first room and the at least second position is in a second room connected to the first room.
11. The system according to claim 10, wherein the contextual data capture unit is configured to receive environment data from at least a third position.
12. The system according to claim 9, wherein the computer-executable instructions when executed further comprise a process to construct a virtual object based on actual environment data comprising:
- receiving an identification of an object to be created;
- receiving at least one point to be associated with said object, said at least one point identified by a rangefinder; and
- instantiating said object based on said at least one point.
13. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
- obtaining additional points; and
- modifying said instantiated object based on said additional points.
14. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
- instructing a user which points of said object a user is to capture with said rangefinder.
15. The system according to claim 14, wherein said points are from locations on a physical object in a room to be modeled.
16. The system according to claim 12, wherein the process for constructing a virtual object based on actual environment data further comprises:
- capturing image information with a camera associated with said rangefinder; and
- adding the image information to said object.
17. The system according to claim 9, wherein said rangefinder is a laser rangefinder.
Type: Application
Filed: Jan 18, 2007
Publication Date: May 24, 2007
Inventors: Matthew Kraus (Orlando, FL), Benito Graniela (Orlando, FL), Mary Pigora (Orlando, FL)
Application Number: 11/624,593
International Classification: G06K 9/00 (20060101);