DISPLAYING THREE-DIMENSIONAL VIRTUAL OBJECTS BASED ON FIELD OF VIEW
Examples are disclosed that relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
This application is a continuation of U.S. patent application Ser. No. 15/299,247, filed Oct. 20, 2016, which claims priority to U.S. Provisional Application Ser. No. 62/311,324, filed on Mar. 21, 2016, the entire contents of each of which are hereby incorporated herein by reference for all purposes.
BACKGROUND
Mixed reality display systems, such as head-mounted display systems, may be configured to present virtual imagery superimposed over a view of a real world background to provide an immersive visual experience.
SUMMARY
Examples are disclosed herein that relate to displaying three-dimensional virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Another example provides a display device comprising a camera, a display, a logic subsystem, and a storage subsystem comprising instructions that are executable by the logic subsystem to acquire image data imaging an environment via the camera, from the image data detect a surface within the environment, receive a user input requesting display of a three-dimensional virtual object, display the three-dimensional virtual object via the display, receive a user input moving a position of the three-dimensional virtual object, detect the three-dimensional virtual object being moved to within a threshold distance of the surface, display the three-dimensional virtual object to appear as being positioned on the surface, and constrain movement of the three-dimensional virtual object to being along the surface.
Yet another example provides a display device comprising a depth camera, a display, a logic subsystem, and a storage subsystem comprising instructions that are executable by the logic subsystem to acquire image data of an environment and monitor the environment via the depth camera, detect the presence of a physical hand in the environment, and in response, automatically display a menu via the display.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Augmented reality display devices may present virtual objects superimposed over a real world environment.
Accordingly, examples are disclosed herein that relate to modifying three-dimensional virtual objects to fit within the field of view of an augmented reality display device. The disclosed examples allow a display device to automatically scale a three-dimensional virtual object when suitable, and also to determine when modifying a three-dimensional virtual object would not be suitable and thus display the three-dimensional virtual object without modification. The disclosed examples further provide for positioning a three-dimensional virtual object appropriately with regard to the real world, such as automatically positioning a three-dimensional virtual object to appear as being positioned on a real world surface. Examples are also disclosed relating to automatically displaying a virtual user interface, such as a menu, based on the detected presence of a user's hand in image data acquired by an augmented reality display device, such as a head-mounted display device (HMD) 104 worn by a user 102 in an environment 100.
The HMD 104 includes one or more outward-facing image sensors configured to acquire image data of the environment 100. Examples of such image sensors include, but are not limited to, depth sensor systems (e.g. time-of-flight, structured light camera(s), and/or stereo camera arrangements), and two-dimensional image sensors (e.g. RGB and/or grayscale sensors). Such image sensor(s) may be configured to detect images in visible, infrared and/or other suitable wavelength range(s). The acquired image data may be utilized to obtain a three-dimensional representation of the environment 100 for use in displaying and positioning three-dimensional virtual objects appropriately. As a non-limiting example, the HMD 104 may be configured to obtain a three-dimensional surface reconstruction mesh of the environment 100 as constructed from acquired depth data. As another example, the HMD 104 may retrieve a previously constructed, stored three-dimensional representation of the environment from a local storage subsystem residing on the HMD, or from a remote computing device, based upon a current location of the HMD.
The HMD 104 may obtain content for display from any suitable source, such as from a remote server over a network, from one or more peer computing devices (e.g. peer HMDs), or from local storage. Likewise, the HMD may display any suitable type of content including but not limited to virtual object models representing three-dimensional virtual objects. A three-dimensional virtual object may be displayed in a variety of ways. For example, a three-dimensional virtual object may be displayed in a world-locked view relative to the real world environment 100. The term “world-locked” as used herein signifies that the three-dimensional virtual object is displayed as positionally fixed relative to real world objects (although this position may be user-adjusted in some examples). This may allow a user to move within the environment 100 to view a displayed three-dimensional virtual object from different perspectives, for example, as if the user were walking around a real object. A three-dimensional virtual object also may be displayed in a “device-locked” view, such that its position is fixed relative to the HMD display.
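To make the distinction concrete, the following Python sketch (hypothetical helper names; nothing here is taken from the disclosure) computes a render pose for each mode: a world-locked object keeps its pose fixed in world coordinates and is re-transformed by the inverse head pose each frame, whereas a device-locked object keeps a constant offset in view space.

```python
import numpy as np

def render_pose(obj_pose: np.ndarray, head_pose_world: np.ndarray,
                world_locked: bool) -> np.ndarray:
    """Return the object's 4x4 pose in view space for the current frame."""
    if world_locked:
        # World-locked: the object's pose is fixed in world coordinates, so
        # its view-space pose is recomputed from the head pose every frame.
        return np.linalg.inv(head_pose_world) @ obj_pose
    # Device-locked: the object's pose is a constant offset in view space
    # and therefore ignores head motion entirely.
    return obj_pose

head = np.eye(4); head[:3, 3] = [0.0, 1.6, 0.0]   # head 1.6 m above the floor
obj = np.eye(4);  obj[:3, 3] = [0.0, 1.0, -2.0]   # object fixed in the world
print(render_pose(obj, head, world_locked=True)[:3, 3])   # [0., -0.6, -2.]
```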
Some three-dimensional virtual objects may include dimensional information, e.g. metadata regarding dimensions at which the three-dimensional virtual object is to be displayed relative to the real-world environment. The dimensional information may specify any suitable scale characteristics (e.g. dimension(s), volume(s), aspect ratio(s), scale(s), orientation(s), position(s), etc.), and may take any suitable form. For example, the dimensional information may include units specifying dimensions of the three-dimensional virtual object, or may comprise unitless values that are given units by a computer program used to display/view the three-dimensional virtual object. The dimensional information may be obtained from metadata for the three-dimensional virtual object, a data table, a database, or any other suitable location.
Where a three-dimensional virtual object is too large to fit entirely within the HMD field of view 106, the HMD 104 may modify the three-dimensional virtual object to fit fully within the HMD field of view 106 so that a user may view the three-dimensional virtual object in its entirety.
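As a rough illustration of such a fit test, the sketch below compares the angular extent of an object's bounding box at its display distance against an assumed horizontal and vertical field of view, and returns a scale factor that shrinks the object just enough to fit. The function name, margin, and FOV figures are illustrative assumptions, not values from the disclosure.

```python
import math

def fit_scale(bbox_w, bbox_h, distance, fov_h_deg=30.0, fov_v_deg=17.5,
              margin=0.9):
    """Return a scale factor <= 1 that shrinks the object's bounding box to
    fit within the field of view at the given apparent distance (meters)."""
    # Largest width and height visible in the frustum at this distance,
    # with a small margin so the object does not touch the view edges.
    max_w = 2.0 * distance * math.tan(math.radians(fov_h_deg) / 2.0) * margin
    max_h = 2.0 * distance * math.tan(math.radians(fov_v_deg) / 2.0) * margin
    overflow = max(bbox_w / max_w, bbox_h / max_h)
    return 1.0 if overflow <= 1.0 else 1.0 / overflow

# A 3 m x 2 m object at 2 m overflows the assumed FOV, so it is scaled down.
print(fit_scale(3.0, 2.0, 2.0))   # ~0.28
```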
The relative aspect ratios of the three-dimensional virtual object 200 and the HMD field of view 106 may vary depending upon an apparent distance from the HMD 104 at which the three-dimensional virtual object 200 is to appear when displayed. Thus, in some examples, the HMD 104 may be configured to position the three-dimensional virtual object 200 at a preselected virtual distance from the HMD 104 for comparing the dimensional information of the three-dimensional virtual object 200 and the HMD field of view 106.
As mentioned above, the aspect ratio of the three-dimensional virtual object 200 that is used for the comparison may represent the aspect ratio from a particular perspective (e.g. a front perspective), or a combination of the largest dimension in each coordinate direction. Thus, in some examples the three-dimensional virtual object 200 may be rotated and compared to the HMD field of view 106 at a plurality of different viewing angles to determine a largest dimension in each coordinate direction. This may be performed for a single axis (e.g. a direction of gravity), or along each coordinate axis. The HMD 104 may then modify the three-dimensional virtual object 200 based upon a largest dimension in each coordinate direction to obtain the modified three-dimensional virtual object 204.
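A minimal sketch of this multi-angle comparison, assuming rotation about a single gravity-aligned axis: the bounding-box corners are rotated through a set of yaw angles, and the largest axis-aligned extent observed in each coordinate direction is retained for the subsequent fit computation (the helper name and step count are illustrative).

```python
import numpy as np

def max_extents_over_yaw(corners: np.ndarray, steps: int = 36) -> np.ndarray:
    """Rotate bounding-box corners about the gravity (y) axis and return the
    largest axis-aligned extent observed in each coordinate direction."""
    best = np.zeros(3)
    for k in range(steps):
        a = 2.0 * np.pi * k / steps
        rot_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                          [ 0.0,       1.0, 0.0      ],
                          [-np.sin(a), 0.0, np.cos(a)]])
        pts = corners @ rot_y.T
        best = np.maximum(best, pts.max(axis=0) - pts.min(axis=0))
    return best

# Corners of a 2 m x 1 m x 0.5 m box; yaw rotation grows the worst-case
# width and depth while the height stays fixed.
corners = np.array([[x, y, z] for x in (0, 2) for y in (0, 1) for z in (0, 0.5)])
print(max_extents_over_yaw(corners))
```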
Other suitable methods of modifying the three-dimensional virtual object 200 to fit within the HMD field of view 106 may be utilized. As another example, an apparent distance of the displayed three-dimensional virtual object 200 from the user 102 may be varied while keeping the dimensions fixed until the three-dimensional virtual object 200 fits within the HMD field of view 106. In this sense, the scale is changed by changing the apparent distance from the user at which the virtual object 200 is displayed. Yet another example includes scaling and/or resizing the three-dimensional virtual object 200 to a predetermined scale/size and displaying it at a predetermined distance from the user 102. In other examples, other suitable reference points (e.g. a center point) of the three-dimensional virtual object 200 may be utilized for positioning the three-dimensional virtual object 200. Further, the user 102 may modify a scale/size of the three-dimensional virtual object 200 via user input(s).
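The distance-based alternative can be sketched in closed form: with the object's dimensions held fixed, solve for the nearest apparent distance at which its bounding box subtends no more than the field of view (again with assumed names, FOV defaults, and margin).

```python
import math

def fitting_distance(bbox_w, bbox_h, fov_h_deg=30.0, fov_v_deg=17.5,
                     margin=0.9):
    """Nearest apparent distance (meters) at which a bounding box of the
    given width/height fits entirely within the field of view, unscaled."""
    d_w = bbox_w / (2.0 * math.tan(math.radians(fov_h_deg) / 2.0) * margin)
    d_h = bbox_h / (2.0 * math.tan(math.radians(fov_v_deg) / 2.0) * margin)
    return max(d_w, d_h)   # the tighter axis dictates the distance

print(fitting_distance(3.0, 2.0))   # ~7.2 m for the assumed FOV
```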
In some instances, it may be desired not to modify a three-dimensional virtual object for display, such as where it is desired to display a three-dimensional virtual object in a true-to-size scale relative to the real world. As such, dimensional information provided to the HMD 104 may further specify whether a three-dimensional virtual object is not to be modified for display, e.g. to remain true-to-size. Such information may take the form of metadata (e.g. a flag) that is included in the three-dimensional virtual object data file, a user-controllable setting, or any other suitable form. The flag may be set at development time by an author of the model, or may be a user-adjustable parameter. As an example, a three-dimensional virtual object representing a piece of furniture may be displayed true-to-size with respect to the real world environment 100, rather than modified to fit in the HMD field of view 106.
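A minimal sketch of how such a flag might be honored, assuming a hypothetical `true_to_size` key in the object's metadata (the key name is illustrative, not from the disclosure):

```python
def display_scale(metadata: dict, fit_scale_value: float) -> float:
    """Return 1.0 (display unmodified, true-to-size) when the assumed
    'true_to_size' flag is set; otherwise apply the computed fit scale."""
    if metadata.get("true_to_size", False):
        return 1.0
    return fit_scale_value

print(display_scale({"true_to_size": True}, 0.28))   # 1.0 - shown true-to-size
print(display_scale({}, 0.28))                       # 0.28 - scaled to fit
```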
In some examples, a user may manipulate or otherwise interact with three-dimensional virtual objects that are displayed.
As the user 102 moves the three-dimensional virtual object 400, the three-dimensional virtual object 400 may be displayed such that it appears to “snap to” a surface within the environment 100. As such, the HMD 104 may be configured to detect one or more surfaces in the environment 100 via image data acquired from outward-facing camera(s). Non-limiting examples of methods to detect geometric planes in the three-dimensional representation include the use of algorithms such as linear least squares or random sample consensus (RANSAC) algorithms.
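For illustration, a bare-bones RANSAC plane fit over sampled points (e.g. vertices of the surface-reconstruction mesh) might look like the following; the iteration count and inlier tolerance are assumed values.

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.02):
    """Fit a plane to a 3D point set with RANSAC: sample 3 points, form a
    plane, count inliers within `tol` meters, keep the best consensus.

    Returns (unit normal, point on plane, inlier mask)."""
    rng = np.random.default_rng(0)
    best_count, best = -1, (None, None, None)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:     # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((points - p0) @ n) < tol
        if mask.sum() > best_count:
            best_count, best = mask.sum(), (n, p0, mask)
    return best

# Noisy points near the floor plane y = 0.
pts = np.column_stack([np.random.uniform(-2, 2, 500),
                       np.random.normal(0, 0.01, 500),
                       np.random.uniform(-2, 2, 500)])
normal, origin, inliers = ransac_plane(pts)
print(normal, inliers.mean())   # normal ~ (0, +/-1, 0), most points inliers
```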
For example, as the user 102 moves the three-dimensional virtual object 400 to within a threshold distance of the floor, the HMD 104 may display visual feedback 404 related to the distance of the three-dimensional virtual object 400 from the floor, such as a change in an appearance of the floor proximate to the position of the three-dimensional virtual object 400.
The HMD 104 may be configured to cease display of the visual feedback 404 as the user moves the three-dimensional virtual object 400 away from the floor or other surface. On the other hand, when the three-dimensional virtual object 400 is released within the threshold distance (or upon any other suitable user input), the HMD 104 may automatically reposition the three-dimensional virtual object 400 so that it is in apparent contact with the floor. This automatic repositioning may take the form of an animated movement to illustrate the snap effect. Further, subsequent movement of the three-dimensional virtual object 400 via user input may be constrained to being along the floor. The HMD 104 may further apply collision and/or occlusion logic to display the three-dimensional virtual object 400 with regard to other virtual objects and/or real objects as the user moves the three-dimensional virtual object 400 along the floor.
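The snap-and-constrain behavior reduces to two small vector operations, sketched below under an assumed snap threshold: project the object onto the plane when it comes within the threshold, then strip the out-of-plane component from subsequent movement inputs so the object slides along the surface.

```python
import numpy as np

SNAP_THRESHOLD = 0.05   # meters; an assumed value, not given in the disclosure

def snap_to_surface(pos, plane_n, plane_p):
    """Project the object's position onto the plane (apparent contact)."""
    return pos - float((pos - plane_p) @ plane_n) * plane_n

def constrained_move(pos, delta, plane_n):
    """Remove the out-of-plane component of a movement input so the
    object slides along the surface once it has snapped."""
    return pos + (delta - float(delta @ plane_n) * plane_n)

floor_n, floor_p = np.array([0.0, 1.0, 0.0]), np.zeros(3)
pos = np.array([0.0, 0.03, 1.0])                  # 3 cm above the floor
if abs(float((pos - floor_p) @ floor_n)) < SNAP_THRESHOLD:
    pos = snap_to_surface(pos, floor_n, floor_p)  # snaps to y = 0
pos = constrained_move(pos, np.array([0.1, 0.2, 0.0]), floor_n)
print(pos)   # [0.1, 0., 1.] - the vertical component was discarded
```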
In some examples, the HMD 104 may automatically determine a position within the environment in which to display the three-dimensional virtual object 400. For example, the HMD 104 may select as a display position a surface having a size and/or shape that can fit the three-dimensional virtual object, an unoccupied surface as opposed to a cluttered surface, and/or a surface type that is compatible with a virtual object type or characteristic (e.g. a wall for display of a virtual poster). It will be understood that a three-dimensional virtual object may appear to “snap to” and be constrained to move along any other suitable surface and/or feature of the environment than the floor.
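One way to express such automatic placement is a simple scoring heuristic over candidate surfaces; the criteria, weights, and type-compatibility table below are all illustrative assumptions.

```python
COMPATIBLE = {"poster": ("wall",), "couch": ("floor",), "lamp": ("table", "floor")}

def score_surface(surface: dict, footprint: tuple, obj_type: str) -> int:
    """Score a candidate surface: does the footprint fit, is it unoccupied,
    and does its kind suit the object type? Higher is better."""
    fits = surface["width"] >= footprint[0] and surface["depth"] >= footprint[1]
    type_ok = surface["kind"] in COMPATIBLE.get(obj_type, ())
    return 2 * fits + (not surface["cluttered"]) + 2 * type_ok

surfaces = [
    {"kind": "floor", "width": 4.0, "depth": 3.0, "cluttered": False},
    {"kind": "table", "width": 1.2, "depth": 0.8, "cluttered": True},
]
best = max(surfaces, key=lambda s: score_surface(s, (1.8, 0.9), "couch"))
print(best["kind"])   # -> floor
```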
The display device may compare the dimensional information in any suitable manner, examples of which are described above.
Continuing, the method 600 further may include, at 624, receiving a user input requesting display of a second three-dimensional virtual object, and at 626, determining that the second three-dimensional virtual object is not to be modified compared to the field of view of the display device. This may be determined, for example, by checking a status of a flag associated with the second three-dimensional virtual object, at 628, by receiving a user input requesting the second three-dimensional virtual object to not be modified, at 630, or in any other suitable manner. The method 600 then includes, at 632, displaying the second three-dimensional virtual object without modifying the second three-dimensional virtual object. As described above, this may allow selected three-dimensional virtual objects to appear within the environment true-to-size. In some examples, such a flag also may indicate not to allow the second three-dimensional virtual model to be moved or rotated.
The head-mounted display device 900 further includes an additional see-through optical component 906.
The augmented reality display system 1000 may further include a gaze detection subsystem 1010 configured to detect a gaze of a user for detecting user input interacting with displayed virtual lists and objects, for example when the augmented reality display system 1000 is implemented as a head-mounted display system, as mentioned above. The gaze detection subsystem 1010 may be configured to determine gaze directions of each of a user's eyes in any suitable manner. In this example, the gaze detection subsystem 1010 comprises one or more glint sources 1012, such as infrared light sources configured to cause a glint of light to reflect from each eyeball of a user, and one or more image sensor(s) 1014, such as inward-facing sensors, configured to capture an image of each eyeball of the user. Changes in glints from the user's eyeballs and/or a location of a user's pupil as determined from image data gathered via the image sensor(s) 1014 may be used to determine a direction in which to project gaze lines from the user's eyes. Further, a location at which gaze lines projected from the user's eyes intersect the environment may be used to determine an object at which the user is gazing (e.g. a displayed virtual object and/or real background object). The gaze detection subsystem 1010 may have any suitable number and arrangement of light sources and image sensors. In other examples, the gaze detection subsystem 1010 may be omitted.
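Once a gaze direction has been estimated, finding the gazed-at location reduces to a ray intersection test; a minimal sketch for a planar target follows (a hypothetical helper, assuming the gaze ray origin and direction are already known in world coordinates).

```python
import numpy as np

def gaze_target(eye_pos, gaze_dir, plane_n, plane_p):
    """Intersect a gaze ray with a plane; returns the gazed-at point, or
    None when the ray is parallel to the plane or points away from it."""
    denom = float(gaze_dir @ plane_n)
    if abs(denom) < 1e-9:
        return None
    t = float((plane_p - eye_pos) @ plane_n) / denom
    return eye_pos + t * gaze_dir if t > 0 else None

eye = np.array([0.0, 1.6, 0.0])          # eye position in world coordinates
direction = np.array([0.0, -0.5, -1.0])  # estimated gaze direction
print(gaze_target(eye, direction, np.array([0.0, 1.0, 0.0]), np.zeros(3)))
# -> [0. 0. -3.2]: the point on the floor the user is looking at
```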
The augmented reality display system 1000 also may include additional sensors. For example, the augmented reality display system 1000 may include non-imaging sensor(s) 1016, examples of which may include but are not limited to an accelerometer, a gyroscopic sensor, a global positioning system (GPS) sensor, and an inertial measurement unit (IMU). Such sensor(s) may help to determine the position, location, and/or orientation of the augmented reality display system 1000 within the environment, which may help provide accurate 3D mapping of the real-world environment for use in displaying three-dimensional virtual objects appropriately in an augmented reality setting.
Motion sensors, as well as the microphone(s) 1008 and the gaze detection subsystem 1010, also may be employed as user input devices, such that a user may interact with the augmented reality display system 1000 via gestures of the eye, neck and/or head, as well as via verbal commands.
The augmented reality display system 1000 further includes one or more speaker(s) 1018, for example to provide audio outputs to a user for user interactions. The augmented reality display system 1000 further includes a controller 1020 having a logic subsystem 1022 and a storage subsystem 1024 in communication with the sensors, the gaze detection subsystem 1010, the display subsystem 1004, and/or other components. The storage subsystem 1024 comprises instructions stored thereon that are executable by the logic subsystem 1022, for example, to receive and interpret inputs from the sensors, to identify location and movements of a user, to identify real objects in an augmented reality field of view and present augmented reality imagery therefor, to detect objects located outside a field of view of the user, and to present indications of positional information associated with objects located outside the field of view of the user, among other tasks.
The logic subsystem 1022 includes one or more physical devices configured to execute instructions. For example, the logic subsystem 1022 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic subsystem 1022 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem 1022 may include one or more hardware or firmware logic subsystems configured to execute hardware or firmware instructions. Processors of the logic subsystem 1022 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem 1022 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem 1022 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
The storage subsystem 1024 includes one or more physical devices configured to hold instructions executable by the logic subsystem 1022 to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage subsystem 1024 may be transformed—e.g., to hold different data.
The storage subsystem 1024 may include removable and/or built-in devices. The storage subsystem 1024 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage subsystem 1024 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that the storage subsystem 1024 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
Aspects of the logic subsystem 1022 and the storage subsystem 1024 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The display subsystem 1004 may be used to present a visual representation of data held by the storage subsystem 1024. This visual representation may take the form of three-dimensional virtual objects, a graphical user interface (GUI) comprising a menu, and/or other graphical user interface elements. As the herein described methods and processes change the data held by the storage subsystem 1024, and thus transform the state of the storage subsystem, the state of the see-through display subsystem 1004 may likewise be transformed to visually represent changes in the underlying data. The display subsystem 1004 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with the logic subsystem 1022 and/or the storage subsystem 1024 in a shared enclosure, or such display devices may be peripheral display devices.
The communication subsystem 1026 may be configured to communicatively couple the augmented reality display system 1000 with one or more other computing devices. The communication subsystem 1026 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem 1026 may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem 1026 may allow the augmented reality display system 1000 to send and/or receive data to and/or from other devices via a network such as the Internet.
It will be appreciated that the depicted augmented reality display system 1000 is described for the purpose of example, and is not meant to be limiting. It is to be further understood that the augmented reality display system 1000 may include additional and/or alternative sensors, cameras, microphones, input devices, output devices, etc. than those shown without departing from the scope of this disclosure. For example, the display system 1000 may be implemented as a virtual reality display system rather than an augmented reality system. Additionally, the physical configuration of a display device and its various sensors and subcomponents may take a variety of different forms without departing from the scope of this disclosure. Further, it will be understood that the methods and processes described herein may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer program product. Such computer program products may be executable locally on the augmented reality display system 1000 or other suitable display system, or may be executable remotely on a computing system in communication with the augmented reality display system 1000.
Another example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display. Where the three-dimensional virtual object is a first three-dimensional virtual object, the method may additionally or alternatively include receiving a user input requesting display of a second three-dimensional virtual object, determining that the second three-dimensional virtual object is not to be modified and displaying the second three-dimensional virtual object without modifying the second three-dimensional virtual object. Determining that the second three-dimensional virtual object is not to be modified may additionally or alternatively include checking a status of a flag associated with the second three-dimensional virtual object. Determining that the second three-dimensional virtual object is not to be modified may additionally or alternatively include receiving a user input requesting the second three-dimensional virtual object to not be modified. The method may additionally or alternatively include not permitting the second three-dimensional virtual model to be moved or rotated based upon determining that the second three-dimensional virtual object is not to be modified. Comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view may additionally or alternatively include comparing an aspect ratio of a bounding box defined around the three-dimensional virtual object to an aspect ratio of the field of view. Comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view may additionally or alternatively include positioning the three-dimensional virtual object at a preselected virtual distance from the display device and comparing based upon the preselected virtual distance. Positioning the three-dimensional virtual object may additionally or alternatively include positioning a nearest location of the three-dimensional virtual object at the preselected virtual distance from the display device. Comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view further may additionally or alternatively include receiving a user input positioning the three-dimensional virtual object at a virtual distance from the display device, and comparing based upon the virtual distance. Comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view may additionally or alternatively include rotating the three-dimensional virtual object around an axis and comparing aspect ratios at multiple axial positions, and wherein modifying further comprises modifying the three-dimensional virtual object based upon a greatest aspect ratio of the aspect ratios at the multiple axial positions. 
The method may additionally or alternatively include obtaining the dimensional information from metadata for the three-dimensional virtual object. The method may additionally or alternatively include obtaining the dimensional information from one or more of a data table and a database.
Another example provides a display device, comprising a camera, a display, a logic subsystem, and a storage subsystem comprising instructions that are executable by the logic subsystem to acquire image data imaging an environment via the camera, from the image data, detect a surface within the environment, receive a user input requesting display of a three-dimensional virtual object, display the three-dimensional virtual object via the display, receive a user input moving a position of the three-dimensional virtual object, detect the three-dimensional virtual object being moved to within a threshold distance of the surface, display the three-dimensional virtual object to appear as being positioned on the surface, and constrain movement of the three-dimensional virtual object to being along the surface. The instructions may be additionally or alternatively executable to display the three-dimensional virtual object to appear as being positioned on the surface when a user input moving the three-dimensional virtual object is completed within the threshold distance of the surface. The instructions may be additionally or alternatively executable to, prior to displaying the three-dimensional object to appear as being positioned on the surface, display visual feedback related to a distance of the three-dimensional virtual object from the surface. The instructions may be additionally or alternatively executable to display the visual feedback by displaying a change in an appearance of the surface proximate to a position of the three-dimensional virtual object.
Another example provides a display device comprising a depth camera, a display, a logic subsystem, and a storage subsystem comprising instructions that are executable by the logic subsystem to acquire image data of an environment and monitor the environment via the depth camera, detect the presence of a physical hand in the environment, and in response, automatically display a menu via the display. The instructions may be additionally or alternatively executable to detect that the physical hand is no longer present in the environment, and cease display of the menu. The instructions may be additionally or alternatively executable to display the menu as world-locked. The instructions may be additionally or alternatively executable to receive a user input made via the physical hand interacting with the menu.
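As a sketch of the hand-triggered menu behavior described above, with the detector result and the menu represented by stand-in stubs (none of this reflects an actual device API):

```python
class Menu:
    """Stand-in for the device's menu UI (an assumed interface)."""
    def __init__(self):
        self.visible = False
    def show(self, world_locked: bool = True):
        self.visible = True   # e.g. displayed world-locked near the hand
    def hide(self):
        self.visible = False

def update_menu(hand_present: bool, menu: Menu) -> None:
    """Per-frame policy: show the menu while a hand is detected in the
    monitored environment; hide it once the hand is no longer present."""
    if hand_present and not menu.visible:
        menu.show(world_locked=True)
    elif not hand_present and menu.visible:
        menu.hide()

menu = Menu()
for hand_present in (False, True, True, False):   # simulated detections
    update_menu(hand_present, menu)
    print(menu.visible)   # False, True, True, False
```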
The configurations and/or approaches described herein are exemplary in nature, and these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed. The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims
1. A display device comprising:
- a camera;
- a display;
- a logic subsystem; and
- a storage subsystem comprising instructions that are executable by the logic subsystem to: acquire via the camera image data imaging an environment, from the image data, detect a surface within the environment, receive a user input requesting display of a three-dimensional virtual object, display the three-dimensional virtual object via the display as a stereoscopic virtual object, receive a user input moving a position of the three-dimensional virtual object, detect the three-dimensional virtual object being moved to within a threshold distance of the surface, display a change in an appearance of the surface proximate to a position of the three-dimensional virtual object, display the three-dimensional virtual object to appear as being positioned on the surface, and constrain movement of the three-dimensional virtual object to being along the surface.
2. The display device of claim 1, wherein the instructions are further executable to display the three-dimensional virtual object to appear as being positioned on the surface when a user input moving the three-dimensional virtual object is completed within the threshold distance of the surface.
3. The display device of claim 2, wherein the instructions are executable to display an animated movement of the three-dimensional virtual object toward the surface before displaying the three-dimensional virtual object to appear as being positioned on the surface.
4. The display device of claim 1, wherein the instructions are further executable to, prior to displaying the three-dimensional object to appear as being positioned on the surface, display visual feedback related to a distance of the three-dimensional virtual object from the surface.
5. The display device of claim 1, wherein the instructions are further executable to display the change in the appearance of the surface proximate to the position of the three-dimensional virtual object by displaying a change in one or more of a color, a texture, a pattern, an outline, and a shading of the surface.
6. The display device of claim 1, wherein the instructions are executable to receive a user input moving a position of the three-dimensional virtual object by detecting one or more of a speech input, a gesture input, a touch input, and an eye gaze input.
7. The display device of claim 1, wherein the instructions are further executable to output one or more of audio and haptic feedback in response to detecting that the three-dimensional virtual object is moved to within a threshold distance of the surface.
8. The display device of claim 1, wherein the instructions are further executable to apply collision and occlusion logic while displaying the three-dimensional virtual object on the surface.
9. On a display device comprising a camera and a display, a method comprising:
- acquiring via the camera image data imaging an environment;
- from the image data, detecting a surface within the environment;
- receiving a user input requesting display of a three-dimensional virtual object;
- displaying the three-dimensional virtual object via the display as a stereoscopic virtual object;
- receiving a user input moving a position of the three-dimensional virtual object;
- detecting the three-dimensional virtual object being moved to within a threshold distance of the surface;
- displaying a change in an appearance of the surface proximate to a position of the three-dimensional virtual object;
- displaying the three-dimensional virtual object to appear as being positioned on the surface; and
- constraining movement of the three-dimensional virtual object to being along the surface.
10. The method of claim 9, further comprising displaying the three-dimensional virtual object to appear as being positioned on the surface when a user input moving the three-dimensional virtual object is completed within the threshold distance of the surface.
11. The method of claim 9, further comprising displaying an animated movement of the three-dimensional virtual object toward the surface before displaying the three-dimensional virtual object to appear as being positioned on the surface.
12. The method of claim 9, further comprising, prior to displaying the three-dimensional object to appear as being positioned on the surface, displaying visual feedback related to a distance of the three-dimensional virtual object from the surface.
13. The method of claim 9, wherein displaying the change in the appearance of the surface proximate to the position of the three-dimensional virtual object comprises displaying a change in one or more of a color, a texture, a pattern, an outline, and a shading of the surface.
14. The method of claim 9, wherein receiving a user input moving a position of the three-dimensional virtual object comprises detecting one or more of a speech input, a gesture input, a touch input, and an eye gaze input.
15. The method of claim 9, further comprising outputting one or more of audio and haptic feedback in response to detecting that the three-dimensional virtual object is moved to within a threshold distance of the surface.
16. The method of claim 9, further comprising applying collision and occlusion logic while displaying the three-dimensional virtual object on the surface.
17. An augmented reality display device comprising:
- a depth camera;
- a see-through display;
- a logic subsystem; and
- a storage subsystem comprising instructions that are executable by the logic subsystem to acquire image data of an environment and monitor the environment via the depth camera, in the absence of a physical hand in the environment as determined from the image data, not display a menu, detect the presence of a physical hand in the environment based on the image data acquired by the depth camera, and in response, automatically display the menu as an augmented reality image via the see-through display.
18. The augmented reality display device of claim 17, wherein the instructions are further executable to detect that the physical hand is no longer present in the environment, and cease display of the menu.
19. The augmented reality display device of claim 17, wherein the instructions are further executable to display the menu as world-locked.
20. The augmented reality display device of claim 17, wherein the instructions are further executable to receive a user input made via the physical hand interacting with the menu.
Type: Application
Filed: Nov 16, 2018
Publication Date: Apr 4, 2019
Applicant: Microsoft Technology Licensing, LLC (Redmond, WA)
Inventors: Megan Ann Lindsay (Kirkland, WA), Michael Scavezze (Bellevue, WA), Aaron Daniel Krauss (Snoqualmie, WA), Michael Thomas (Redmond, WA), Richard Wifall (Sammamish, WA), Jeffrey David Smith (Duvall, WA), Cameron Brown (Bellevue, WA), Charlene Jeune (Redmond, WA), Cheyne Rory Quin Mathey-Owens (Seattle, WA)
Application Number: 16/193,108