Electronic device that displays virtual objects
An electronic device may include one or more sensors and one or more displays. The display may be configured to display virtual objects of various types. The electronic device may receive a request to display a first type of virtual object that has a location defined relative to a location corresponding to the electronic device or a user of the electronic device. In response to the request to display the first type of virtual object, the electronic device may determine a depth of an object that is aligned with a target display direction for the virtual object, determine an apparent depth at which to display the virtual object based on at least the depth of the object, and display, via the one or more displays, the virtual object at the apparent depth.
This application claims priority to U.S. provisional patent application No. 63/348,897, filed Jun. 3, 2022, which is hereby incorporated by reference herein in its entirety.
BACKGROUND

This disclosure relates generally to electronic devices and, more particularly, to electronic devices with displays that display virtual objects.
Some electronic devices include displays that present images close to a user's eyes. For example, extended reality headsets may include displays with optical elements that allow users to view images from the displays.
Devices such as these can be challenging to design. If care is not taken, viewing images on the displays may be less comfortable than desired for the user.
SUMMARY

An electronic device may include one or more sensors, one or more displays, one or more processors, and memory storing instructions configured to be executed by the one or more processors. The instructions may include instructions for receiving a request to display a virtual object and, in accordance with a determination that the virtual object is a first type of virtual object that has a location defined relative to a location corresponding to the electronic device or a user of the electronic device, determining, via the one or more sensors, a depth of a physical object, determining an apparent depth at which to display the virtual object based on at least the depth of the physical object, and displaying, via the one or more displays, the virtual object at the apparent depth.
Head-mounted devices may display different types of extended reality content for a user. The head-mounted device may display a virtual object that is perceived at an apparent depth within the physical environment of the user. Virtual objects may sometimes be displayed at fixed locations relative to the physical environment of the user. For example, consider an example where a user's physical environment includes a table. A virtual object may be displayed for the user such that the virtual object appears to be resting on the table. As the user moves their head and otherwise interacts with the XR environment, the virtual object remains at the same, fixed position on the table (e.g., as if the virtual object were another physical object in the XR environment). This type of content may be referred to as world-locked content (because the position of the virtual object is fixed relative to the physical environment of the user).
Other virtual objects may be displayed at locations that are defined relative to the head-mounted device or a user of the head-mounted device. First, consider the example of virtual objects that are displayed at locations that are defined relative to the head-mounted device. As the head-mounted device moves (e.g., with the rotation of the user's head), the virtual object remains in a fixed position relative to the head-mounted device. For example, the virtual object may be displayed in the front and center of the head-mounted device (e.g., in the center of the device's or user's field-of-view) at a particular distance. As the user moves their head left and right, their view of their physical environment changes accordingly. However, the virtual object may remain fixed in the center of the device's or user's field of view at the particular distance as the user moves their head (assuming gaze direction remains constant). This type of content may be referred to as head-locked content. The head-locked content is fixed in a given position relative to the head-mounted device (and therefore the user's head which is supporting the head-mounted device). The head-locked content may not be adjusted based on a user's gaze direction. In other words, if the user's head position remains constant and their gaze is directed away from the head-locked content, the head-locked content will remain in the same apparent position.
Second, consider the example of virtual objects that are displayed at locations that are defined relative to a portion of the user of the head-mounted device (e.g., relative to the user's torso). This type of content may be referred to as body-locked content. For example, a virtual object may be displayed in front and to the left of a user's body (e.g., at a location defined by a distance and an angular offset from a forward-facing direction of the user's torso), regardless of which direction the user's head is facing. If the user's body is facing a first direction, the virtual object will be displayed in front and to the left of the user's body. While facing the first direction, the virtual object may remain at the same, fixed position relative to the user's body in the XR environment despite the user rotating their head left and right (to look towards and away from the virtual object). However, the virtual object may move within the device's or user's field of view in response to the user rotating their head. If the user turns around and their body faces a second direction that is the opposite of the first direction, the virtual object will be repositioned within the XR environment such that it is still displayed in front and to the left of the user's body. While facing the second direction, the virtual object may remain at the same, fixed position relative to the user's body in the XR environment despite the user rotating their head left and right (to look towards and away from the virtual object).
In the aforementioned example, body-locked content is displayed at a fixed position/orientation relative to the user's body even as the user's body rotates. For example, the virtual object may be displayed at a fixed distance in front of the user's body. If the user is facing north, the virtual object is in front of the user's body (to the north) by the fixed distance. If the user rotates and is facing south, the virtual object is in front of the user's body (to the south) by the fixed distance.
Alternatively, the distance offset between the body-locked content and the user may be fixed relative to the user whereas the orientation of the body-locked content may remain fixed relative to the physical environment. For example, the virtual object may be displayed in front of the user's body at a fixed distance from the user as the user faces north. If the user rotates and is facing south, the virtual object remains to the north of the user's body at the fixed distance from the user's body.
Body-locked content may also be configured to always remain gravity or horizon aligned, such that head and/or body changes in the roll orientation would not cause the body-locked content to move within the XR environment. Translational movement may cause the body-locked content to be repositioned within the XR environment to maintain the fixed distance from the user. Subsequent descriptions of body-locked content may include both of the aforementioned types of body-locked content.
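The differences among these content types come down to which frame of reference a virtual object's position is resolved against. The following is a minimal Swift sketch of that basic distinction (covering only the first body-locked variant described above); the Vector3, Pose, and Anchoring types are hypothetical and purely illustrative, not part of any actual device software.

```swift
/// Hypothetical minimal types used only for illustration.
struct Vector3 {
    var x, y, z: Double

    static func + (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z)
    }
    static func * (a: Vector3, s: Double) -> Vector3 {
        Vector3(x: a.x * s, y: a.y * s, z: a.z * s)
    }
}

/// A pose is a position plus a unit forward direction.
struct Pose {
    var position: Vector3
    var forward: Vector3
}

/// How a virtual object's location is defined.
enum Anchoring {
    case worldLocked(Vector3)           // fixed location in the three-dimensional environment
    case headLocked(distance: Double)   // fixed distance in front of the head-mounted device
    case bodyLocked(distance: Double)   // fixed distance in front of the user's torso
}

/// Resolve where the object should appear given the current head and body poses.
func resolvePosition(of anchoring: Anchoring, headPose: Pose, bodyPose: Pose) -> Vector3 {
    switch anchoring {
    case .worldLocked(let position):
        return position                                         // ignores head and body motion
    case .headLocked(let distance):
        return headPose.position + headPose.forward * distance  // follows head rotation
    case .bodyLocked(let distance):
        return bodyPose.position + bodyPose.forward * distance  // follows the torso, not the head
    }
}
```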
To improve user comfort in certain scenarios, head-locked and/or body-locked content may be displayed at an apparent depth that matches the depth of an object (e.g., a virtual or physical object) in an XR environment. The head-mounted device may include one or more sensors that determine the depth of a physical object in a physical environment. Based at least on the depth of the object, the head-mounted device may determine an apparent depth for the virtual object and display the virtual object at the apparent depth. The apparent depth of the virtual object may be repeatedly updated to continuously match the depths of objects in the XR environment.
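The depth-matching behavior described above can be thought of as a small update step that runs repeatedly. The sketch below is illustrative only; the DepthSensing and VirtualObjectDisplaying interfaces are hypothetical stand-ins for whatever sensing and rendering the device actually uses.

```swift
/// Hypothetical interfaces standing in for the device's real sensor and display stacks.
protocol DepthSensing {
    /// Depth (in meters) of the nearest object aligned with the given display direction, if any.
    func nearestAlignedDepth(alongAngle angle: Double) -> Double?
}

protocol VirtualObjectDisplaying {
    func display(objectID: String, atAngle angle: Double, apparentDepth: Double)
}

/// One iteration of the depth-matching update, intended to run repeatedly so the apparent
/// depth keeps tracking whatever object is currently aligned with the content.
func updateLockedContent(objectID: String,
                         displayAngle: Double,
                         defaultDepth: Double,          // used when nothing is aligned
                         sensor: DepthSensing,
                         display: VirtualObjectDisplaying) {
    let apparentDepth = sensor.nearestAlignedDepth(alongAngle: displayAngle) ?? defaultDepth
    display.display(objectID: objectID, atAngle: displayAngle, apparentDepth: apparentDepth)
}
```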
System 10 may be a head-mounted device (e.g., head-mounted device 10) that includes one or more displays for presenting virtual objects to a user.
The operation of system 10 may be controlled using control circuitry 16. Control circuitry 16 may be configured to perform operations in system 10 using hardware (e.g., dedicated hardware or circuitry), firmware and/or software. Software code for performing operations in system 10 and other data is stored on non-transitory computer readable storage media (e.g., tangible computer readable storage media) in control circuitry 16. The software code may sometimes be referred to as software, data, program instructions, instructions, or code. The non-transitory computer readable storage media (sometimes referred to generally as memory) may include non-volatile memory such as non-volatile random-access memory (NVRAM), one or more hard drives (e.g., magnetic drives or solid state drives), one or more removable flash drives or other removable media, or the like. Software stored on the non-transitory computer readable storage media may be executed on the processing circuitry of control circuitry 16. The processing circuitry may include application-specific integrated circuits with processing circuitry, one or more microprocessors, digital signal processors, graphics processing units, a central processing unit (CPU) or other processing circuitry.
System 10 may include input-output circuitry such as input-output devices 12. Input-output devices 12 may be used to allow data to be received by system 10 from external equipment (e.g., a tethered computer, a portable device such as a handheld device or laptop computer, or other electrical equipment) and to allow a user to provide head-mounted device 10 with user input. Input-output devices 12 may also be used to gather information on the environment in which system 10 (e.g., head-mounted device 10) is operating. Output components in devices 12 may allow system 10 to provide a user with output and may be used to communicate with external electrical equipment. Input-output devices 12 may include sensors and other components 18 (e.g., image sensors for gathering images of real-world objects that are optionally digitally merged with virtual objects on a display in system 10, accelerometers, depth sensors, light sensors, haptic output devices, speakers, batteries, wireless communications circuits for communicating between system 10 and external electronic equipment, etc.).
Display modules 20A may be liquid crystal displays, organic light-emitting diode displays, laser-based displays, or displays of other types. Optical systems 20B may form lenses that allow a viewer (see, e.g., a viewer's eyes at eye box 24) to view images on display(s) 20. There may be two optical systems 20B (e.g., for forming left and right lenses) associated with respective left and right eyes of the user. A single display 20 may produce images for both eyes or a pair of displays 20 may be used to display images. In configurations with multiple displays (e.g., left and right eye displays), the focal length and positions of the lenses formed by system 20B may be selected so that any gap present between the displays will not be visible to a user (e.g., so that the images of the left and right displays overlap or merge seamlessly).
If desired, optical system 20B may contain components (e.g., an optical combiner, etc.) to allow real-world image light from real-world images or objects 28 to be combined optically with virtual (computer-generated) images such as virtual images in image light 38. In this type of system, a user of system 10 may view both real-world content and computer-generated content that is overlaid on top of the real-world content. Camera-based systems may also be used in device 10 (e.g., in an arrangement in which a camera captures real-world images of object 28 and this content is digitally merged with virtual content at optical system 20B).
System 10 may, if desired, include wireless circuitry and/or other circuitry to support communications with a computer or other external equipment (e.g., a computer that supplies display 20 with image content). During operation, control circuitry 16 may supply image content to display 20. The content may be remotely received (e.g., from a computer or other content source coupled to system 10) and/or may be generated by control circuitry 16 (e.g., text, other computer-generated content, etc.). The content that is supplied to display 20 by control circuitry 16 may be viewed by a viewer at eye box 24.
To improve user comfort when viewing head-locked content, the head-locked content may have an apparent depth that matches the depth of a physical object aligned with the head-locked content.
A sensor 18 (see the sensors and other components 18 described above) may determine a depth 52-1 of a physical object 42-1 that is aligned with the head-locked content. Apparent depth 50 of the head-locked content may then be set to match depth 52-1 of physical object 42-1.
The example of apparent depth 50 being set to match the depth 52-1 of physical object 42-1 is merely illustrative. The apparent depth may instead start at a default value (such as a maximum allowable depth) that is adjusted only when that default is greater than the distance to the nearest physical object.
Additionally, apparent depth 50 may be set to match the depth of other virtual objects in the XR environment. For example, a world-locked virtual object may be present in the XR environment at a given apparent depth. Apparent depth 50 may sometimes be set to match the apparent depth of that world-locked virtual object. In this case, the sensor is not required to determine the depth of the world-locked virtual object (since this object is displayed by the head-mounted device and the depth of the world-locked virtual object is therefore already known to the head-mounted device). However, the adjustments of the apparent depth for a head-locked (or body-locked) virtual object may otherwise be the same when aligned with another virtual object as when aligned with a physical object.
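As a sketch of that selection step, the snippet below treats physical objects (whose depth must be sensed) and already-displayed virtual objects (whose depth is known to the device) uniformly, and combines them with a default maximum depth. The AlignedObject type and the combining rule are illustrative assumptions, not the patent's required implementation.

```swift
/// A candidate object aligned with the locked content's display direction. A physical
/// object's depth comes from a sensor; a virtual object's depth is already known.
enum AlignedObject {
    case physical(sensedDepth: Double)
    case virtual(knownDepth: Double)

    var depth: Double {
        switch self {
        case .physical(let d), .virtual(let d):
            return d
        }
    }
}

/// Start from a default (e.g., maximum allowable) depth and pull the content closer only
/// when an aligned object is nearer than that default.
func targetDepth(defaultDepth: Double, alignedObjects: [AlignedObject]) -> Double {
    guard let nearest = alignedObjects.map({ $0.depth }).min() else {
        return defaultDepth
    }
    return min(defaultDepth, nearest)
}
```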
The user's view of the XR environment changes as the user moves their head.
The angle 54 that characterizes the apparent location 44 of the virtual object remains fixed as the user moves their head. However, the depth 50 that characterizes the apparent location 44 of the virtual object may be updated based on the depth of the closest physical object aligned with the virtual object.
In some situations, the depth of a closest physical (or virtual) object may be used to adjust the apparent depth of a virtual object even when there is no overlap between the virtual object and the closest physical (or virtual) object. For example, if the closest physical (or virtual) object is within a threshold distance or angle of the virtual object, the apparent depth of the virtual object may be set to equal the depth of the closest physical (or virtual) object. This may enable a viewer to easily look from one to the other.
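A sketch of that proximity rule follows; the angular-separation measure and the 10 degree threshold are assumptions chosen only for illustration.

```swift
/// Decide which depth should drive a head-locked virtual object when the nearest object
/// does not overlap it but is close by in the field of view. Angles are in degrees.
func depthToAdopt(virtualObjectAngle: Double,
                  nearestObjectAngle: Double,
                  nearestObjectDepth: Double,
                  fallbackDepth: Double,
                  angularThreshold: Double = 10.0) -> Double {
    let separation = abs(virtualObjectAngle - nearestObjectAngle)
    // Within the threshold, match the nearby object's depth so the viewer can comfortably
    // glance between the virtual object and the nearby object without refocusing.
    return separation <= angularThreshold ? nearestObjectDepth : fallbackDepth
}
```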
By adjusting the depth of the head-locked content to match the depth of physical objects in the XR environment, the user may have improved comfort when viewing the head-locked content.
Head-locked content is explicitly described in connection with the examples above. However, the same depth-matching techniques may also be applied to body-locked content.
One or more sensors 18 in head-mounted device 10 may be configured to determine the depth of physical objects in the XR environment. The one or more sensors 18 may include an image sensor with an array of imaging pixels (e.g., that sense red, blue, and green visible light) that is configured to capture images of the user's physical environment. Machine learning algorithms may be applied to the captured images from the image sensor to determine the depth of various physical objects in the physical environment. As another example, the one or more sensors may include gaze detection sensors. The gaze detection sensors may be able to determine the degree of convergence of a user's eyes. High convergence may be indicative of a physical object that is close to the user (e.g., a short depth) whereas low convergence may be indicative of a physical object that is far away from the user (e.g., a far depth). The convergence determined by the gaze detection sensor(s) may therefore be used to estimate the depth of physical objects in the physical environment. As another example, the one or more sensors may include a stereo camera (with two or more lenses and image sensors for capturing three-dimensional images). As yet another example, the one or more sensors may include a depth sensor. The depth sensor may be a pixelated depth sensor (e.g., that is configured to measure multiple depths across the physical environment) or a point sensor (that is configured to measure a single depth in the physical environment). When a point sensor is used, the point sensor may be aligned with the known display direction of the head-locked content in head-mounted device 10 so that the point sensor measures depth along the direction in which the head-locked content is displayed.
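As one illustration of the gaze-based approach mentioned above, the convergence (vergence) angle of the two eyes can be converted to an approximate fixation depth with simple trigonometry. The sketch below assumes the eyes fixate a point on the midline between them and uses a nominal interpupillary distance; real gaze data would be noisier and would typically require filtering.

```swift
import Foundation

/// Estimate fixation depth from the convergence angle of the two eyes.
/// - Parameters:
///   - vergenceDegrees: full angle between the left- and right-eye gaze directions.
///   - interpupillaryDistance: distance between the eyes in meters (about 0.063 m on average).
func estimatedDepth(vergenceDegrees: Double,
                    interpupillaryDistance: Double = 0.063) -> Double {
    let halfAngle = (vergenceDegrees / 2) * .pi / 180
    guard halfAngle > 0 else { return .infinity }   // parallel gaze: effectively at infinity
    // Each eye is IPD/2 off the midline, so tan(halfAngle) = (IPD/2) / depth.
    return (interpupillaryDistance / 2) / tan(halfAngle)
}
```

With the 0.063 meter interpupillary distance assumed here, a vergence angle of about 3.6 degrees corresponds to a fixation depth of roughly 1 meter, consistent with high convergence indicating short depths.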
The optical system 20B in head-mounted device 10 may have an associated target viewing zone for virtual objects displayed using the head-mounted device. The user may view virtual objects with an acceptable level of comfort when the virtual objects are displayed within the target viewing zone.
For example, the target viewing zone may be bounded by a minimum allowable apparent depth and a maximum allowable apparent depth for the virtual object.
As shown and discussed above, the apparent size of the virtual object may remain constant as the apparent depth of the virtual object changes.
In addition to maintaining a constant apparent size with varying apparent depths, the alignment of the images used to display the virtual object may be adjusted with varying apparent depths. First and second displays may display first and second images that are viewed by first and second eyes of the user to perceive the virtual object. At closer apparent depths, the first and second images may be separated by a smaller distance than at farther apparent depths.
Changes to the apparent depth and/or alignment of the images used to display the virtual object may be performed gradually across a transition period. This transition period may simulate the performance of the human eye and produce a more natural viewing experience for the user. The transition period may have a duration of at least 5 milliseconds, at least 50 milliseconds, at least 100 milliseconds, at least 200 milliseconds, at least 300 milliseconds, at least 500 milliseconds, less than 500 milliseconds, less than 1 second, less than 300 milliseconds, between 200 milliseconds and 400 milliseconds, between 250 milliseconds and 350 milliseconds, between 50 milliseconds and 1 second, etc. The alignment and/or apparent depth of the virtual object may change gradually throughout the transition period.
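A sketch of such a gradual transition is below; the roughly 300 millisecond duration and the ease-in/ease-out curve are illustrative choices within the ranges described above.

```swift
import Foundation

/// Interpolate the apparent depth from its current value to a new target over a transition
/// period, rather than jumping instantaneously.
struct DepthTransition {
    var startDepth: Double
    var endDepth: Double
    var duration: TimeInterval = 0.3   // e.g., roughly 300 ms, within the ranges described above

    /// Apparent depth at `elapsed` seconds into the transition (smoothstep ease-in/ease-out).
    func depth(atElapsed elapsed: TimeInterval) -> Double {
        let t = max(0.0, min(1.0, elapsed / duration))
        let eased = t * t * (3 - 2 * t)
        return startDepth + (endDepth - startDepth) * eased
    }
}

// Example: move from an apparent depth of 2.0 m to 0.8 m and sample the transition halfway.
let transition = DepthTransition(startDepth: 2.0, endDepth: 0.8)
print(transition.depth(atElapsed: 0.15))
```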
At block 102, the control circuitry may receive a request to display a virtual object. The virtual object may be a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a world-locked virtual object (where the position of the virtual object is fixed relative to the physical environment of the user), a head-locked virtual object (where the virtual object remains in a fixed position relative to the head-mounted device when the head-mounted device moves), or a body-locked virtual object (where the virtual object remains in a fixed position relative to a portion of the user of the head-mounted device). As specific examples, the virtual object may be a notification to the user that includes text. This type of virtual object may be a head-locked virtual object. The virtual object may alternatively be a simulation of a physical object such as a cube. This type of virtual object may be a world-locked virtual object.
At block 104, the control circuitry may determine if the virtual object is a first type of virtual object that has a location defined relative to a location corresponding to the electronic device (e.g., head-locked content) or a user of the electronic device (e.g., body-locked content) or if the virtual object is a second type of virtual object that has a location defined relative to a static location within a coordinate system of a three-dimensional environment (e.g., world-locked content). For example, the control circuitry may receive a request to display a user notification that includes text. The control circuitry may determine that this virtual object is the first type of virtual object (e.g., a head-locked virtual object). Alternatively, the control circuitry may receive a request to display a simulation of the cube and determine that the virtual object is the second type of virtual object (e.g., a world-locked virtual object). The three-dimensional environment with the coordinate system may be an XR environment that represents a virtual environment or the physical environment surrounding the user of the head-mounted device.
In response to determining that the virtual object is the second type of virtual object (e.g., a world-locked virtual object), the method may proceed to block 106. At block 106, the virtual object may be displayed (e.g., using display 20) using the location as the apparent location for the virtual object. The world-locked virtual object may remain fixed at the location relative to the three-dimensional environment as the user moves their head, body, and/or gaze. For example, the simulation of the cube (discussed above) may be displayed at a fixed location relative to the physical environment (e.g., on a table) as the user moves their head.
In response to determining that the virtual object is the first type of virtual object (e.g., a head-locked virtual object or a body-locked virtual object), the method may proceed to block 108. At block 108, control circuitry 16 may use at least one sensor 18 to determine a depth of a physical object in the physical environment of the user. The at least one sensor may include a camera configured to capture images of surroundings of the electronic device (that are subsequently analyzed by the control circuitry to determine the depth of the physical object), a LIDAR sensor, a depth sensor, and/or a stereo camera. The physical object may be the nearest physical object to the electronic device in a given direction relative to the electronic device. The virtual object may be displayed at an apparent depth corresponding to the determined depth and in the given direction relative to the electronic device. In other words, the sensor is configured to determine the depth of the closest physical object that is aligned with the intended display direction of the virtual object. For example, consider the example above where the virtual object is a user notification that includes text. The sensor may determine the depth of a nearest physical object (e.g., a wall) to the head-mounted device that is aligned with the user notification.
The example in block 108 of determining the depth to a nearest physical object is merely illustrative. As previously mentioned, the control circuitry may determine the depth of the nearest object in the XR environment, whether the nearest object is a physical object (e.g., a wall as mentioned above) or a virtual object (e.g., a simulation of a three-dimensional object).
In some examples, at block 108, the depth of a physical or virtual object may only be relied upon if the object is determined to satisfy one or more criteria, such as the object being larger than a threshold size, the object occupying a threshold field of view of the user or electronic device, the object being a particular type of object (e.g., a wall, display of an electronic device, etc.), or the like. The size of the object may be determined using known properties of the object, a depth sensor, a camera in system 10, or any other desired sensor(s). In other words, the control circuitry 16 may determine the depth of the closest object that is both aligned with the intended display direction of the virtual object and that is larger than a threshold size. For example, the depth sensor may detect a first physical object that is at a first depth. However, the first physical object may be smaller than the threshold size and therefore the first depth is not relied upon for subsequent processing. The depth sensor may also detect a second physical object that is at a second depth that is greater than the first depth. The second physical object may be larger than the threshold size and therefore the second depth is used for subsequent processing.
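The snippet below sketches that filtering step; the DetectedObject structure, its size estimate, and the 0.2 meter threshold are assumptions for illustration only.

```swift
/// A detected object aligned with the virtual object's display direction.
struct DetectedObject {
    var depth: Double            // meters from the device
    var approximateSize: Double  // meters, e.g., largest dimension estimated from sensor data
}

/// Depth of the nearest aligned object that is large enough to matter. Objects below
/// `minimumSize` are ignored so that, for example, a thin cable in front of a wall does not
/// pull the virtual object forward.
func depthOfNearestQualifyingObject(_ candidates: [DetectedObject],
                                    minimumSize: Double = 0.2) -> Double? {
    return candidates
        .filter { $0.approximateSize >= minimumSize }
        .map { $0.depth }
        .min()
}
```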
Next, at block 110, the control circuitry may determine an apparent depth at which to display the virtual object based on at least the depth of the physical object. The head-mounted device may have an associated minimum allowable apparent depth and maximum allowable apparent depth. When the depth of the physical object is greater than or equal to the minimum allowable apparent depth and less than or equal to the maximum allowable apparent depth, the apparent depth of the virtual object may be set to equal the depth of the physical object. When the depth of the physical object is greater than the maximum allowable apparent depth, the apparent depth of the virtual object may be set to equal the maximum allowable apparent depth. When the depth of the physical object is less than the minimum allowable apparent depth, the apparent depth of the virtual object may be set to equal the minimum allowable apparent depth. When the depth of the physical object is less than the minimum allowable apparent depth or greater than the maximum allowable apparent depth, the virtual object may be adjusted to mitigate viewer discomfort. Possible adjustments include changing the size of the virtual object (e.g., making the virtual object smaller), changing the opacity of the virtual object (e.g., fading out the virtual object), displaying a warning or other discomfort indicator (e.g., a red dot, warning text, etc.) instead of or in addition to the virtual object, applying a visual effect on or around the virtual object, etc.
Consider the example above where the nearest object to the head-mounted device is a wall in the physical environment. If the depth of the wall is greater than or equal to the minimum allowable apparent depth and less than or equal to the maximum allowable apparent depth, the apparent depth of the user notification may be set to equal the depth of the wall. If the depth of the wall is greater than the maximum allowable apparent depth, the apparent depth of the user notification may be set to equal the maximum allowable apparent depth. When the depth of the wall is less than the minimum allowable apparent depth, the apparent depth of the user notification may be set to equal the minimum allowable apparent depth. When the depth of the wall is less than the minimum allowable apparent depth or greater than the maximum allowable apparent depth, the user notification may be adjusted to mitigate viewer discomfort. Possible adjustments include changing the size of the user notification (e.g., making the user notification smaller), changing the opacity of the user notification (e.g., fading out the user notification), displaying a warning or other discomfort indicator (e.g., a red dot, warning text, etc.) instead of or in addition to the user notification, applying a visual effect on or around the user notification, etc.
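A sketch of that clamping logic follows; flagging a single comfort mitigation whenever clamping occurs is an illustrative simplification of the options listed above (shrinking, fading, warnings, and so on).

```swift
/// Result of mapping a measured object depth onto a displayable apparent depth.
struct DepthDecision {
    var apparentDepth: Double
    var applyDiscomfortMitigation: Bool   // e.g., shrink, fade, or warn, as described above
}

/// Clamp the measured depth into the device's allowable range. If clamping was necessary,
/// also flag the virtual object for a comfort mitigation such as reduced opacity.
func decideApparentDepth(objectDepth: Double,
                         minimumAllowable: Double,
                         maximumAllowable: Double) -> DepthDecision {
    if objectDepth < minimumAllowable {
        return DepthDecision(apparentDepth: minimumAllowable, applyDiscomfortMitigation: true)
    } else if objectDepth > maximumAllowable {
        return DepthDecision(apparentDepth: maximumAllowable, applyDiscomfortMitigation: true)
    }
    return DepthDecision(apparentDepth: objectDepth, applyDiscomfortMitigation: false)
}
```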
Generally, the display direction of head-locked and body-locked virtual objects is fixed (relative to the head-mounted device and user's body, respectively). However, there are some cases where the angle (e.g., angle 54) may be adjusted at block 110. Angle 54 may be adjusted to prevent the virtual object from overlapping two physical objects of varying depths. Consider a scenario where the left half of the virtual object overlaps a first physical object at a first depth and the right half of the virtual object overlaps a second physical object at a second depth that is different than the first depth. The angle may be shifted such that the virtual object entirely overlaps either the first physical object or the second physical object.
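As a sketch of that adjustment, assume the device can estimate the depths overlapped by the left and right halves of the content; shifting by half the content's angular width and favoring the nearer object are illustrative choices, not requirements of the described method.

```swift
/// If the two halves of the virtual object overlap surfaces at noticeably different depths,
/// nudge the display angle so the whole object overlaps a single surface.
/// Angles are in degrees; a positive shift moves the object toward its right half.
func adjustedDisplayAngle(currentAngle: Double,
                          leftHalfDepth: Double,
                          rightHalfDepth: Double,
                          objectAngularWidth: Double,
                          depthMismatchTolerance: Double = 0.1) -> Double {
    guard abs(leftHalfDepth - rightHalfDepth) > depthMismatchTolerance else {
        return currentAngle                 // depths are close enough; no shift needed
    }
    // Shift by half the object's angular width toward the nearer surface so the object
    // ends up entirely over one of the two objects.
    let shift = objectAngularWidth / 2
    return leftHalfDepth < rightHalfDepth ? currentAngle - shift : currentAngle + shift
}
```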
After determining the apparent location (including the apparent depth) at which to display the virtual object at block 110, the virtual object (e.g., the user notification discussed above) may be displayed at the apparent depth at block 112. Blocks 108-112 may be performed repeatedly for virtual objects of the first type. In this way, if the user rotates their head and the depth of the physical object aligned with the virtual object changes, the apparent depth of the virtual object may be continuously adjusted to match the depth of the aligned physical object. Blocks 108-112 (e.g., repeatedly determining the depth of the physical object and repeatedly determining the apparent depth) may be repeated at a frequency that is greater than 1 Hz, greater than 2 Hz, greater than 4 Hz, greater than 10 Hz, greater than 30 Hz, greater than 60 Hz, less than 60 Hz, less than 30 Hz, less than 10 Hz, less than 5 Hz, between 2 Hz and 10 Hz, etc.
Repeatedly determining the apparent depth may include changing the apparent depth from a first apparent depth to a second apparent depth that is greater than the first apparent depth. The apparent size of the virtual object may remain constant while the apparent depth changes.
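A sketch of holding the apparent size constant is below: to subtend the same visual angle, the rendered linear size of the object scales in proportion to its apparent depth.

```swift
/// Linear size (e.g., width in meters) at which the virtual object should be rendered so
/// that it subtends the same visual angle when its apparent depth changes.
func sizeForConstantApparentSize(referenceSize: Double,
                                 referenceDepth: Double,
                                 newDepth: Double) -> Double {
    // The visual angle is 2 * atan(size / (2 * depth)); keeping size proportional to depth
    // keeps that angle, and therefore the apparent size, constant.
    return referenceSize * (newDepth / referenceDepth)
}
```

Under this relationship, an object rendered 0.2 meters wide at a 1 meter apparent depth would be rendered 0.6 meters wide at a 3 meter apparent depth and would appear unchanged to the viewer.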
The apparent depth and/or alignment of the virtual object may be updated gradually during a transition period. The transition period may have a duration of at least 200 milliseconds or another desired duration.
Out of an abundance of caution, it is noted that to the extent that any implementation of this technology involves the use of personally identifiable information, implementers should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
Claims
1. An electronic device comprising:
- one or more sensors;
- one or more displays;
- one or more processors; and
- memory storing instructions configured to be executed by the one or more processors, the instructions for: receiving a request to display a virtual object; and in accordance with a determination that the virtual object is a first type of virtual object that has a location defined relative to a location corresponding to the electronic device or a user of the electronic device: determining, via the one or more sensors, a depth of a physical object; determining an apparent depth at which to display the virtual object based on at least the depth of the physical object; and displaying, via the one or more displays, the virtual object at the apparent depth.
2. The electronic device defined in claim 1, wherein the instructions further comprise instructions for:
- repeatedly determining, via the one or more sensors, the depth of the physical object; and
- repeatedly determining the apparent depth based on the determined depths of the physical object, wherein repeatedly determining the apparent depth comprises changing the apparent depth from a first apparent depth to a second apparent depth that is different than the first apparent depth.
3. The electronic device defined in claim 2, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
4. The electronic device defined in claim 1, wherein the one or more displays has an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein the determining the apparent depth at which to display the virtual object comprises:
- determining the apparent depth of the virtual object to be the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth; and
- determining the apparent depth of the virtual object to be the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
5. The electronic device defined in claim 1, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
6. The electronic device defined in claim 1, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device that satisfies a size criterion and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
7. The electronic device defined in claim 1, wherein the one or more sensors comprises a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
8. The electronic device defined in claim 1, wherein the instructions further comprise instructions for:
- in accordance with a determination that the virtual object is a second type of virtual object that has an additional location defined relative to a static location within a coordinate system of a three-dimensional environment: displaying, via the one or more displays, the virtual object at the additional location.
9. A method of operating an electronic device that comprises one or more sensors and one or more displays, the method comprising:
- receiving a request to display a virtual object; and
- in accordance with a determination that the virtual object is a first type of virtual object that has a location defined relative to a location corresponding to the electronic device or a user of the electronic device: determining, via the one or more sensors, a depth of a physical object; determining an apparent depth at which to display the virtual object based on at least the depth of the physical object; and displaying, via the one or more displays, the virtual object at the apparent depth.
10. The method defined in claim 9, further comprising:
- repeatedly determining, via the one or more sensors, the depth of the physical object; and
- repeatedly determining the apparent depth based on the determined depths of the physical object, wherein repeatedly determining the apparent depth comprises changing the apparent depth from a first apparent depth to a second apparent depth that is different than the first apparent depth.
11. The method defined in claim 10, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
12. The method defined in claim 9, wherein the one or more displays has an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein the determining the apparent depth at which to display the virtual object comprises:
- determining the apparent depth of the virtual object to be the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth; and
- determining the apparent depth of the virtual object to be the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
13. The method defined in claim 9, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
14. The method defined in claim 9, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device that satisfies a size criterion, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
15. The method defined in claim 9, wherein the one or more sensors comprises a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
16. The method defined in claim 9, wherein the method further comprises:
- in accordance with a determination that the virtual object is a second type of virtual object that has an additional location defined relative to a static location within a coordinate system of a three-dimensional environment: displaying, via the one or more displays, the virtual object at the additional location.
17. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device that comprises one or more sensors and one or more displays, the one or more programs including instructions for:
- receiving a request to display a virtual object; and
- in accordance with a determination that the virtual object is a first type of virtual object that has a location defined relative to a location corresponding to the electronic device or a user of the electronic device: determining, via the one or more sensors, a depth of a physical object; determining an apparent depth at which to display the virtual object based on at least the depth of the physical object; and displaying, via the one or more displays, the virtual object at the apparent depth.
18. The non-transitory computer-readable storage medium defined in claim 17, wherein the instructions further comprise instructions for:
- repeatedly determining, via the one or more sensors, the depth of the physical object; and
- repeatedly determining the apparent depth based on the determined depths of the physical object, wherein repeatedly determining the apparent depth comprises changing the apparent depth from a first apparent depth to a second apparent depth that is different than the first apparent depth.
19. The non-transitory computer-readable storage medium defined in claim 18, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
20. The non-transitory computer-readable storage medium defined in claim 17, wherein the one or more displays has an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein the determining the apparent depth at which to display the virtual object comprises:
- determining the apparent depth of the virtual object to be the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth; and
- determining the apparent depth of the virtual object to be the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
21. The non-transitory computer-readable storage medium defined in claim 17, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
22. The non-transitory computer-readable storage medium defined in claim 17, wherein determining, via the one or more sensors, the depth of the physical object comprises determining a depth of a nearest physical object to the electronic device in a given direction relative to the electronic device that satisfies a size criterion, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
23. The non-transitory computer-readable storage medium defined in claim 17, wherein the one or more sensors comprises a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
24. The non-transitory computer-readable storage medium defined in claim 17, wherein the instructions further comprise instructions for:
- in accordance with a determination that the virtual object is a second type of virtual object that has an additional location defined relative to a static location within a coordinate system of a three-dimensional environment: displaying, via the one or more displays, the virtual object at the additional location.
References Cited

Patent or Publication Number | Date | Inventor(s)
9367960 | June 14, 2016 | Poulos et al.
10754496 | August 25, 2020 | Kiemele et al. |
11049328 | June 29, 2021 | Powderly et al. |
11112863 | September 7, 2021 | Miller |
11205308 | December 21, 2021 | Chen |
20150312561 | October 29, 2015 | Hoof et al. |
20190172262 | June 6, 2019 | McHugh et al. |
20190362557 | November 28, 2019 | Lacey |
20200371673 | November 26, 2020 | Faulkner |
Type: Grant
Filed: Apr 4, 2023
Date of Patent: Aug 6, 2024
Patent Publication Number: 20230396752
Assignee: Apple Inc. (Cupertino, CA)
Inventor: Paulo R Jansen dos Reis (San Jose, CA)
Primary Examiner: Afroza Chowdhury
Application Number: 18/295,353