OPERATION OF HEAD MOUNTED DEVICE FROM EYE DATA
A method of operating a head mounted device includes capturing eye data with one or more sensors of the head mounted device. The one or more sensors are configured to sense an eyebox region. The eye data may include images. Operations of the head mounted device are adjusted in response to the eye data.
This application claims priority to U.S. non-provisional application Ser. No. 17/468,578 filed Sep. 7, 2021, which is hereby incorporated by reference.
TECHNICAL FIELD
This disclosure relates generally to optics, and in particular to head mounted devices.
BACKGROUND INFORMATION
A head mounted device is a wearable electronic device, typically worn on the head of a user. Head mounted devices may include one or more electronic components for use in a variety of applications, such as gaming, aviation, engineering, medicine, entertainment, activity tracking, and so on. Head mounted devices may include a display to present virtual images to a wearer of the head mounted device. When a head mounted device includes a display, it may be referred to as a head mounted display. Head mounted devices may have user inputs so that a user can control one or more operations of the head mounted device.
Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Embodiments of operating a head mounted device in response to eye data are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In some implementations of the disclosure, the term “near-eye” may be defined as including an element that is configured to be placed within 50 mm of an eye of a user while a near-eye device is being utilized. Therefore, a “near-eye optical element” or a “near-eye system” would include one or more elements configured to be placed within 50 mm of the eye of the user.
In aspects of this disclosure, visible light may be defined as having a wavelength range of approximately 380 nm-700 nm. Non-visible light may be defined as light having wavelengths that are outside the visible light range, such as ultraviolet light and infrared light. Infrared light having a wavelength range of approximately 700 nm-1 mm includes near-infrared light. In aspects of this disclosure, near-infrared light may be defined as having a wavelength range of approximately 700 nm-1.6 μm.
In aspects of this disclosure, the term “transparent” may be defined as having greater than 90% transmission of light. In some aspects, the term “transparent” may be defined as a material having greater than 90% transmission of visible light.
Implementations of devices, systems, and methods of operating a head mounted device in response to eye data are disclosed herein. Eye data of a user of a head mounted device may be captured by sensors of the head mounted device. The sensors may include image sensors, photodiodes, micro-electro-mechanical systems (MEMS) mirrors, ultrasound, or LIDAR units, for example. The eye data may include one or more images of the eye, a position of the eye, a measurement of the eye (e.g. pupil size), and/or a measurement of the eye over time (e.g. speed of pupil dilation). The eye data may also include a movement of eyebrows, movement of eyelids, and/or facial micro gestures. Other examples of eye data will be disclosed in more detail below.
In an implementation of the disclosure, a transparency of a lens of a head mounted device is adjusted in response to eye data from an eyebox region of the head mounted device. By way of example, eye data may indicate that a user is squinting, and a transparency of a lens of the head mounted device may be adjusted (e.g. darkened or lightened) in response to the eye data so that the user is more comfortable viewing scene light from the outside world. If the head mounted device includes a display (e.g. an augmented reality head mounted display), reducing the transparency of the lens of the head mounted device may also allow the user to view virtual images generated by the display. The transparency of the lens may also be adjusted in response to an ambient light reading by a photodetector of the head mounted device. In some implementations, the transparency of the lens may be adjusted according to lens transparency settings previously selected by the user in similar environmental conditions (e.g. similar previous ambient light measurements). In some implementations, the transparency of the lens may be adjusted according to a predetermined lens transparency value. The predetermined lens transparency value may be derived from crowd-sourced or aggregate eye data corresponding to a particular ambient light measurement value, for example.
In an implementation of the disclosure, a display brightness of a head mounted display (e.g. an augmented reality head mounted device) is adjusted in response to eye data. By way of example, eye data may indicate that a user is squinting, and a display brightness of the head mounted device may be adjusted (e.g. dimmed or brightened) in response to the eye data so that the user is more comfortable viewing the display. The brightness of the display may also be adjusted in response to an ambient light reading by a photodetector of the head mounted device. In some implementations, the brightness of the display may be adjusted according to a display brightness value previously selected by the user in similar environmental conditions (e.g. similar previous ambient light measurements). In some implementations, the brightness of the display may be adjusted according to a predetermined display brightness value. The predetermined display brightness value may be derived from crowd-sourced or aggregate eye data corresponding to a particular ambient light measurement value, for example.
In an implementation of the disclosure, a volume of an audio output of a head mounted display (e.g. an augmented reality head mounted device) is adjusted in response to eye data. By way of example, eye data may indicate that a user is uncomfortable with a particular audio output level and/or a change in an audio output level. Therefore, a volume of an audio output may be adjusted (e.g. volume up or volume down) in response to the eye data so that the user is more comfortable with the audio level. The volume of the audio output may also be adjusted in response to an ambient noise measurement taken by a microphone of the head mounted device. A user's comfort with a particular audio output level may depend on the noise level in the user's environment. For example, the user may be comfortable with a higher audio output level in a high-noise environment (e.g. an airplane), whereas the user may be uncomfortable with the same higher audio output level in a low-noise environment (e.g. a library). These and other embodiments are described in more detail below.
In addition to image sensors, various other sensors of head mounted device 100 may be configured to capture eye data. Ultrasound or LIDAR chips may be configured in frame 102 to detect a position of an eye of the user by detecting the position of the cornea of the eye, for example. Discrete photodiodes included in frame 102 or optical elements 110A and/or 110B may also be used to detect a position of the eye of the user. Discrete photodiodes may be used to detect “glints” of light reflecting off of the eye, for example. Eye data generated by various sensors may not necessarily be considered “images” of the eye.
When head mounted device 100 includes a display, it may be considered a head mounted display. Head mounted device 100 may be considered an augmented reality (AR) head mounted display.
Illumination layer 130A is shown as including a plurality of in-field illuminators 126. In-field illuminators 126 are described as “in-field” because they are in a field of view (FOV) of a user of the head mounted device 100. In-field illuminators 126 may be in the same FOV in which a user views a display of the head mounted device 100, in an embodiment. In-field illuminators 126 may be in the same FOV in which a user views an external environment of the head mounted device 100 via scene light 191 propagating through near-eye optical elements 110. Scene light 191 is from the external environment of head mounted device 100. While in-field illuminators 126 may introduce minor occlusions into the near-eye optical element 110A, the in-field illuminators 126, as well as their corresponding electrical routing, may be so small as to be unnoticeable or insignificant to a wearer of head mounted device 100. In some implementations, illuminators 126 are not in-field. Rather, illuminators 126 could be out-of-field in some implementations.
Optically transparent layer 120A is shown as being disposed between the illumination layer 130A and the eyeward side 109 of the near-eye optical element 110A. The optically transparent layer 120A may receive the infrared illumination light emitted by the illumination layer 130A and pass the infrared illumination light to illuminate the eye of the user. As mentioned above, the optically transparent layer 120A may also be transparent to visible light, such as scene light 191 received from the environment and/or image light 141 received from the display layer 140A. In some examples, the optically transparent layer 120A has a curvature for focusing light (e.g., display light and/or scene light) to the eye of the user. Thus, the optically transparent layer 120A, in some examples, may be referred to as a lens. In some aspects, the optically transparent layer 120A has a thickness and/or curvature that corresponds to the specifications of a user. In other words, the optically transparent layer 120A may be a prescription lens. However, in other examples, the optically transparent layer 120A may be a non-prescription lens.
Display layer 440 presents virtual images in image light 441 to an eyebox region 201 for viewing by an eye 203. Processing logic 470 is configured to drive virtual images onto display layer 440 to present image light 441 to eyebox region 201. Processing logic 470 is also configured to adjust a brightness of display layer 440. In some implementations, adjusting a display brightness of display layer 440 includes adjusting the intensity of one or more light sources of display layer 440. All or a portion of display layer 440 may be transparent or semi-transparent to allow scene light 456 from an external environment to become incident on eye 203 so that a user can view their external environment in addition to viewing virtual images presented in image light 441.
Transparency modulator layer 450 may be configured to change its transparency to modulate the intensity of scene light 456 that propagates to the eye 203 of a user. Processing logic 470 may be configured to drive an analog or digital signal onto transparency modulator layer 450 in order to modulate the transparency of transparency modulator layer 450. In an example implementation, transparency modulator layer 450 includes liquid crystals where the alignment of the liquid crystals is adjusted in response to a drive signal from processing logic 470 to modulate the transparency of transparency modulator layer 450. Other suitable technologies that allow for electronically controlled dimming of transparency modulator 450 may be included in transparency modulator 450.
Illumination layer 430 includes light sources 426 configured to illuminate an eyebox region 201 with infrared illumination light 427. Illumination layer 430 may include a transparent refractive material that functions as a substrate for light sources 426. Infrared illumination light 427 may be near-infrared illumination light. Camera 477 is configured to directly image eye 203, in the illustrated example.
Camera 477 may include a complementary metal-oxide semiconductor (CMOS) image sensor, in some implementations. An infrared filter that passes a narrow-band infrared wavelength may be placed over the image sensor so that it is sensitive to the narrow-band infrared wavelength while rejecting visible light and wavelengths outside the narrow band. Infrared light sources (e.g. light sources 426) such as infrared LEDs or infrared VCSELs that emit the narrow-band wavelength may be oriented to illuminate eye 203 with the narrow-band infrared wavelength. Camera 477 may capture eye-tracking images of eyebox region 201. Eyebox region 201 may include eye 203 as well as surrounding features in an ocular area such as eyebrows, eyelids, eye lines, etc. Processing logic 470 may initiate one or more image captures with camera 477, and camera 477 may provide eye-tracking images 479 to processing logic 470. Processing logic 470 may perform image processing to determine the size and/or position of various features of the eyebox region 201, such as the features described above.
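For illustration only, the following Python sketch shows one way image processing might estimate a pupil position and size from a single infrared eye-tracking image (e.g. an image 479). The thresholding approach, the function name estimate_pupil, and the dark_threshold value are assumptions of this sketch rather than part of the disclosed implementation.

```python
import numpy as np

def estimate_pupil(eye_image: np.ndarray, dark_threshold: int = 40):
    """Estimate pupil center and diameter (in pixels) from a grayscale
    eye-tracking image by segmenting its darkest region."""
    mask = eye_image < dark_threshold  # the pupil appears dark under infrared illumination
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    # Approximate the pupil as a circle: area = pi * (diameter / 2) ** 2
    diameter = 2.0 * np.sqrt(mask.sum() / np.pi)
    return rows.mean(), cols.mean(), diameter
```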
Processing logic 470 is communicatively coupled to microphone 488 of head mounted device 400, in the example implementation.
In operation, transparency modulator layer 450 may be driven to various transparency values by processing logic 470 in response to various eye data and ambient light measurements 429. By way of example, a pupil diameter of an eye may indicate that scene light 456 is brighter than the user prefers. Other measurements of an ocular region (e.g. dimension of eyelids, sclera, number of lines in corner region 263, etc.) of the user may indicate the user is squinting and that scene light 456 may be brighter than the user prefers. Thus, a transparency of transparency modulator layer 450 may be driven to a transparency that makes the user more comfortable with the intensity of scene light 459 that propagates through transparency modulator layer 450. The transparency of transparency modulator layer 450 may be modulated to various levels between 10% transparent and 90% transparent, in response to the eye data and the ambient light measurement, for example.
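For illustration only, the following Python sketch shows one way eye data and an ambient light measurement might be combined into a transparency value within the 10%-90% range described above. The function name, normalization constants, and weights are assumptions of this sketch, not a disclosed algorithm.

```python
def choose_transparency(pupil_diameter_mm: float,
                        ambient_light_lux: float,
                        squint_score: float = 0.0) -> float:
    """Map eye data and an ambient light measurement to a lens transparency
    between 0.10 and 0.90.

    A small pupil and a high squint score under bright ambient light suggest
    scene light is brighter than the user prefers, so transparency is reduced.
    """
    # Normalize inputs into rough 0..1 "brightness discomfort" signals (illustrative values).
    pupil_signal = max(0.0, min(1.0, (4.0 - pupil_diameter_mm) / 3.0))  # 4 mm or larger -> 0
    light_signal = max(0.0, min(1.0, ambient_light_lux / 10000.0))      # ~10,000 lux daylight -> 1
    discomfort = 0.5 * pupil_signal + 0.3 * light_signal + 0.2 * squint_score
    # Interpolate between the 90% (comfortable) and 10% (very bright) endpoints.
    transparency = 0.90 - 0.80 * discomfort
    return max(0.10, min(0.90, transparency))
```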
In process block 505, eye data is captured with one or more sensors of a head mounted device. The one or more sensors are configured to sense the eyebox region. The sensors may include one or more photodiodes, image sensors, or any other suitable sensor to capture eye data. As described previously, the eye data may include images of an eyebox region, for example. The eye data may include positions of various features of an ocular area of a user in the eyebox region.
In process block 510, an ambient light measurement is initiated with a photodetector (e.g. ambient light sensor 423) of the head mounted device. The ambient light measurement (e.g. ambient light measurement 429) may be initiated during a same time period as the eye data is captured.
In process block 515, a transparency of a lens of the head mounted device is adjusted in response to the eye data and the ambient light measurement. The “lens” of a head mounted device may be an optical element (e.g. optical element 110 or 410) that the user views the world through. In other words, scene light (e.g. scene light 456) from an external environment of the head mounted device may propagate through the lens prior to becoming incident on an eye 203. For example, processing logic 470 may drive transparency modulator layer 450 to adjust a transparency of optical element 410 in response to eye-tracking images 479 and ambient light measurement 429.
In some implementations, the eye data includes at least one of a pupil size of an eye, speed of pupil dilation of the eye, a gaze direction of the eye, or eye-movement data (e.g. number of saccades in a given time period).
Eye data may include one or more images (e.g. image(s) 479) of eye 203 or the ocular region of a user that occupies eyebox region 201, for example. In an implementation, process 500 further includes performing image processing on the one or more images of the eye to determine a heart rate of a user of the head mounted device. The heart rate of the user may be determined by pupil size over a time period, for example. Adjusting the transparency of the lens may be based at least in part on the heart rate of the user.
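For illustration only, the following Python sketch shows one way a heart rate might be estimated from a pupil-size time series, assuming pulse-related oscillations are detectable in the signal. The function name and the frequency band are assumptions of this sketch.

```python
import numpy as np

def heart_rate_from_pupil(pupil_sizes: np.ndarray, sample_rate_hz: float) -> float:
    """Estimate heart rate (beats per minute) from a pupil-size time series."""
    detrended = pupil_sizes - np.mean(pupil_sizes)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / sample_rate_hz)
    # Restrict the search to a plausible heart-rate band (0.7-3.3 Hz, about 42-200 bpm).
    band = (freqs >= 0.7) & (freqs <= 3.3)
    if not band.any():
        return float("nan")
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0
```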
In an implementation, process 500 further includes performing image processing on the one or more images of the eye to determine at least one of movement of eyebrows, movement of eyelids, or facial muscle micro gestures. Adjusting the transparency of the lens may be based at least in part on the movement of eyebrows, movement of eyelids, and/or facial muscle micro gestures.
In an implementation of process 500, adjusting the transparency of the lens of the head mounted device in response to the eye data and the ambient light measurement includes associating the eye data and the ambient light measurement with previous eye data paired with a previous ambient light measurement, and adjusting the transparency of the lens to a lens transparency previously selected by the user. The previously selected lens transparency corresponds to the previous eye data paired with the previous ambient light measurement. The previous eye data and the previous ambient light measurement were captured by the head mounted device during a same time period. In this way, lens transparency values previously selected by the user can be driven onto transparency modulator layer 450 so that the transparency of optical element 410 is personalized to the user's prior selections.
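For illustration only, the following Python sketch shows one way current eye data and a current ambient light measurement might be matched to the most similar previously stored pair so that the corresponding previously selected lens transparency can be applied. The feature-vector representation, the function name, and the distance weighting are assumptions of this sketch.

```python
import numpy as np

def personalized_transparency(current_eye_vec: np.ndarray,
                              current_ambient: float,
                              previous_eye_vecs: np.ndarray,
                              previous_ambients: np.ndarray,
                              selected_transparencies: np.ndarray) -> float:
    """Return the lens transparency the user previously selected under the
    most similar (eye data, ambient light) conditions.

    previous_eye_vecs: shape (n, k) eye-data feature vectors (e.g. pupil size, squint score).
    previous_ambients: shape (n,) ambient light measurements.
    selected_transparencies: shape (n,) transparency values the user chose.
    """
    eye_dist = np.linalg.norm(previous_eye_vecs - current_eye_vec, axis=1)
    ambient_dist = np.abs(previous_ambients - current_ambient)
    # Combine the two distances; the relative weighting is an illustrative choice.
    score = eye_dist + 0.001 * ambient_dist
    best = int(np.argmin(score))
    return float(selected_transparencies[best])
```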
To illustrate, memory 575 may store previously selected lens transparency values, each corresponding to previous eye data paired with a previous ambient light measurement.
Second previous eye data 522 is paired with second previous ambient light measurement 523. Second previously selected lens transparency value 521 corresponds with the pairing of data 522 and 523. A user of a head mounted device may have selected second previously selected lens transparency value 521 when (or immediately after) the second previous ambient light measurement 523 and second previous eye data 522 were collected by a head mounted device.
Memory 575 may include integer n number of previously selected lens transparency values corresponding to previous eye data paired with previous ambient light measurements. Thus, given an ambient light measurement (e.g. 429) and eye data, a personalized transparency value (previously selected by the user during similar ambient light conditions matched to similar eye data) can be driven onto transparency modulator layer 450 to adjust the intensity of scene light 459.
In an implementation of process 500, adjusting the transparency of the lens of the head mounted device in response to the eye data and the ambient light measurement includes adjusting the transparency of the lens of the head mounted device to a predetermined lens transparency value associated with aggregate eye data corresponding to the ambient light measurement. In this way, transparency values determined from testing, crowd-sourced data, or averaged user preferences can be driven onto transparency modulator layer 450 to drive the transparency of optical element 410 to predetermined transparency values found to be comfortable under similar ambient light conditions and eye data.
To illustrate, memory 576 may store predetermined lens transparency values, each corresponding to aggregate eye data paired with a previous ambient light measurement.
Second aggregate eye data 527 is paired with second previous ambient light measurement 528. Second predetermined lens transparency value 526 corresponds with the pairing of data 527 and 528. Second predetermined lens transparency value 526 may represent an average or aggregate user setting recorded when (or immediately after) second previous ambient light measurement 528 and second aggregate eye data 527 were collected by a head mounted device.
Memory 576 may include integer n number of predetermined lens transparency values corresponding to aggregate eye data paired with previous ambient light measurements. Thus, given an ambient light measurement (e.g. 429) and aggregate eye data for that particular ambient light measurement, a predetermined lens transparency value known to be suitable for the ambient light measurement and eye data can be driven onto transparency modulator layer 450 to adjust the intensity of scene light 459.
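For illustration only, the following Python sketch shows one way a predetermined lens transparency value might be looked up from a table derived offline from aggregate eye data, keyed by ambient light measurement. The table breakpoints, values, and names are assumptions of this sketch.

```python
# Illustrative table: ambient light (lux) breakpoints -> predetermined lens transparency,
# derived offline from aggregate eye data (values are assumptions, not disclosed data).
PREDETERMINED_TRANSPARENCY = [
    (500.0, 0.90),    # indoor lighting
    (2000.0, 0.70),   # overcast outdoors
    (10000.0, 0.40),  # daylight
    (50000.0, 0.15),  # direct sun
]

def predetermined_transparency(ambient_light_lux: float) -> float:
    """Look up the predetermined lens transparency value for an ambient light
    measurement using the aggregate table above."""
    for breakpoint_lux, transparency in PREDETERMINED_TRANSPARENCY:
        if ambient_light_lux <= breakpoint_lux:
            return transparency
    # Brighter than the last breakpoint: use the darkest setting.
    return PREDETERMINED_TRANSPARENCY[-1][1]
```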
In process block 605, eye data is captured with one or more sensors of a head mounted device. The one or more sensors are configured to sense the eyebox region. The sensors may include one or more photodiodes, image sensors, or any other suitable sensor to capture eye data. As described previously, the eye data may include images of an eyebox region, for example. The eye data may include positions of various features of an ocular area of a user in the eyebox region.
In process block 610, an ambient light measurement is initiated with a photodetector (e.g. ambient light sensor 423) of the head mounted device. The ambient light measurement (e.g. ambient light measurement 429) may be initiated during a same time period as the eye data is captured.
In process block 615, a brightness of a display of the head mounted device is adjusted in response to the eye data and the ambient light measurement. For example, processing logic 470 may adjust a brightness of display layer 440 in response to eye-tracking images 479 and ambient light measurement 429.
In some implementations, the eye data includes at least one of a pupil size of an eye, speed of pupil dilation of the eye, a gaze direction of the eye, or eye-movement data (e.g. number of saccades in a given time period).
Eye data may include one or more images (e.g. image(s) 479) of eye 203 or the ocular region of a user that occupies eyebox region 201, for example. In an implementation, process 600 further includes performing image processing on the one or more images of the eye to determine a heart rate of a user of the head mounted device. The heart rate of the user may be determined by pupil size over a time period, for example. Adjusting the brightness of the display may be based at least in part on the heart rate of the user.
In an implementation, process 600 further includes performing image processing on the one or more images of the eye to determine at least one of movement of eyebrows, movement of eyelids, or facial muscle micro gestures. Adjusting the brightness of the display may be based at least in part on the movement of eyebrows, movement of eyelids, and/or facial muscle micro gestures.
In an implementation of process 600, adjusting the brightness of the display in response to the eye data and the ambient light measurement includes associating the eye data and the ambient light measurement with previous eye data paired with a previous ambient light measurement, and adjusting the brightness of the display to a display brightness previously selected by the user. The previously selected display brightness corresponds to the previous eye data paired with the previous ambient light measurement. The previous eye data and the previous ambient light measurement were captured by the head mounted device during a same time period. In this way, display brightness values previously selected by the user can be driven onto display layer 440 so that the brightness of the display is personalized to the user's prior selections.
To illustrate, memory 675 may store previously selected display brightness values, each corresponding to previous eye data paired with a previous ambient light measurement.
Second previous eye data 622 is paired with second previous ambient light measurement 623. Second previously selected display brightness value 621 corresponds with the pairing of data 622 and 623. A user of a head mounted device may have selected second previously selected display brightness value 621 when (or immediately after) the second previous ambient light measurement 623 and second previous eye data 622 were collected by a head mounted device.
Memory 675 may include integer n number of previously selected display brightness values corresponding to previous eye data paired with previous ambient light measurements. Thus, given an ambient light measurement (e.g. 429) and eye data, a personalized display brightness value (previously selected by the user during similar ambient light conditions matched to similar eye data) can be driven onto display layer 440 to adjust a brightness of image light 441.
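For illustration only, the following Python sketch shows one way a personalized display brightness might be interpolated from brightness values the user previously selected at nearby ambient light levels; matching against previous eye data is omitted for brevity. The function name and the use of linear interpolation are assumptions of this sketch.

```python
import numpy as np

def personalized_brightness(current_ambient: float,
                            previous_ambients: np.ndarray,
                            selected_brightness: np.ndarray) -> float:
    """Interpolate a display brightness value from brightness settings the user
    previously selected at nearby ambient light levels."""
    order = np.argsort(previous_ambients)
    ambients = previous_ambients[order]
    brightness = selected_brightness[order]
    # np.interp clamps to the endpoint values outside the recorded range.
    return float(np.interp(current_ambient, ambients, brightness))
```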
In an implementation of process 600, adjusting the brightness of the display of the head mounted device in response to the eye data and the ambient light measurement includes adjusting the brightness of the display to a predetermined display brightness associated with aggregate eye data corresponding to the ambient light measurement. In this way, display brightness values determined from testing, crowd-sourced data, or averaged user preferences can be driven onto display layer 440 to drive display layer 440 to brightness levels that are comfortable under similar ambient light conditions and eye data.
To illustrate, memory 676 may store predetermined display brightness values, each corresponding to aggregate eye data paired with a previous ambient light measurement.
Second aggregate eye data 627 is paired with second previous ambient light measurement 628. Second predetermined display brightness value 626 corresponds with the pairing of data 627 and 628. Second predetermined display brightness value 626 may represent an average or aggregate user setting recorded when (or immediately after) second previous ambient light measurement 628 and second aggregate eye data 627 were collected by a head mounted device.
Memory 676 may include integer n number of predetermined display brightness values corresponding to aggregate eye data paired with previous ambient light measurements. Thus, given an ambient light measurement (e.g. 429) and aggregate eye data for that particular ambient light measurement, a predetermined display brightness value known to be suitable for the ambient light measurement and eye data can be driven onto display layer 440 to adjust a brightness of image light 441.
In process block 705, eye data is captured with one or more sensors of a head mounted device. The one or more sensors are configured to sense the eyebox region. The sensors may include one or more photodiodes, image sensors, or any other suitable sensor to capture eye data. As described previously, the eye data may include images of an eyebox region, for example. The eye data may include positions of various features of an ocular area of a user in the eyebox region. Eye data may include at least one of a pupil size of an eye, speed of pupil dilation of the eye, a gaze direction of the eye, or eye-movement data, for example. The eye data may include a number of saccades in a time period.
In process block 710, a volume of an audio output of the head mounted device is adjusted in response to the eye data. Adjusting the volume of the audio output may include adjusting a sound magnitude of speakers of the head mounted device where the speakers are oriented to provide sound to ears of a user of the head mounted device.
In an implementation, process 700 further includes initiating an ambient noise measurement with a microphone of the head mounted device. In this implementation, adjusting the volume of the audio output of the head mounted device is in response to the eye data and to the ambient noise measurement.
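For illustration only, the following Python sketch shows one way an audio output volume might be nudged in response to an eye-data-derived discomfort score and an ambient noise measurement. The function name, the discomfort score, and the numeric constants are assumptions of this sketch.

```python
def adjust_volume(current_volume: float,
                  discomfort_score: float,
                  ambient_noise_db: float) -> float:
    """Adjust an audio output volume (0.0-1.0) from a discomfort score derived
    from eye data (0 = comfortable, 1 = very uncomfortable) and an ambient
    noise measurement in dB SPL.

    In a noisy environment the comfortable volume ceiling is higher; in a quiet
    environment the same output level is more likely to be uncomfortable.
    """
    # Allow a higher ceiling as ambient noise rises (constants are illustrative).
    ceiling = min(1.0, 0.4 + ambient_noise_db / 120.0)
    if discomfort_score > 0.5:
        current_volume -= 0.1 * discomfort_score  # user appears uncomfortable: back off
    else:
        current_volume = min(current_volume, ceiling)
    return max(0.0, min(ceiling, current_volume))
```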
Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
The term “processing logic” (e.g. 470) in this disclosure may include one or more processors, microprocessors, multi-core processors, application-specific integrated circuits (ASICs), and/or field-programmable gate arrays (FPGAs) to execute operations disclosed herein. In some embodiments, memories (not illustrated) are integrated into the processing logic to store instructions to execute operations and/or store data. Processing logic may also include analog or digital circuitry to perform the operations in accordance with embodiments of the disclosure.
A “memory” or “memories” described in this disclosure may include one or more volatile or non-volatile memory architectures. The “memory” or “memories” may be removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Example memory technologies may include RAM, ROM, EEPROM, flash memory, CD-ROM, digital versatile disks (DVD), high-definition multimedia/data storage disks, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
Network may include any network or network system such as, but not limited to, the following: a peer-to-peer network; a Local Area Network (LAN); a Wide Area Network (WAN); a public network, such as the Internet; a private network; a cellular network; a wireless network; a wired network; a wireless and wired combination network; and a satellite network.
Communication channels may include or be routed through one or more wired or wireless communications utilizing IEEE 802.11 protocols, Bluetooth, SPI (Serial Peripheral Interface), I2C (Inter-Integrated Circuit), USB (Universal Serial Bus), CAN (Controller Area Network), cellular data protocols (e.g. 3G, 4G, LTE, 5G), optical communication networks, Internet Service Providers (ISPs), a peer-to-peer network, a Local Area Network (LAN), a Wide Area Network (WAN), a public network (e.g. “the Internet”), a private network, a satellite network, or otherwise.
A computing device may include a desktop computer, a laptop computer, a tablet, a phablet, a smartphone, a feature phone, a server computer, or otherwise. A server computer may be located remotely in a data center or be stored locally.
The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or otherwise.
A tangible non-transitory machine-readable storage medium includes any mechanism that provides (i.e., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine-readable storage medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
Claims
1. A method of operating a head mounted device, the method comprising:
- capturing, with an imaging module of the head mounted device, one or more images of an eye in an eyebox region of the head mounted device;
- performing image processing on the one or more images of the eye to determine a heart rate of a user of the head mounted device; and
- adjusting a transparency of a lens of the head mounted device based at least in part on the heart rate of the user.
2. The method of claim 1 further comprising:
- initiating an ambient light measurement with a photodetector of the head mounted device, wherein adjusting the transparency of the lens of the head mounted device is also based on the ambient light measurement.
3. The method of claim 2, wherein adjusting the transparency of the lens of the head mounted device in response to the heart rate and the ambient light measurement includes adjusting the transparency of the lens to a predetermined lens transparency value associated with aggregate eye data corresponding to the ambient light measurement.
4. The method of claim 2, wherein the photodetector includes a camera.
5. The method of claim 1, wherein the imaging module includes a MEMS mirror-based laser system.
6. The method of claim 1, wherein the imaging module includes a camera.
7. The method of claim 1, wherein the performing image processing includes determining a pupil size of the eye over a time period.
8. A method of operating a head mounted device, the method comprising:
- capturing eye data with one or more sensors of the head mounted device, wherein the one or more sensors are configured to sense an eyebox region; and
- adjusting a volume of an audio output of the head mounted device, wherein adjusting the volume of the audio output of the head mounted device is in response to the eye data and an ambient noise measurement.
9. The method of claim 8, wherein the eye data includes at least one of a pupil size of an eye, speed of pupil dilation of the eye, a gaze direction of the eye, or eye-movement data.
10. The method of claim 8, wherein the eye data includes one or more images of an eye.
11. The method of claim 10 further comprising:
- performing image processing on the one or more images of the eye to determine a heart rate of a user of the head mounted device.
12. The method of claim 10 further comprising:
- performing image processing on the one or more images of the eye to determine at least one of movement of eyebrows, movement of eyelids, or facial muscle micro gestures, and wherein adjusting the volume of the audio output of the head mounted device is based at least in part on the movement of eyebrows, the movement of eyelids, or the facial muscle micro gestures.
13. The method of claim 8, wherein the eye data includes a number of saccades in a time period.
14. The method of claim 8, wherein the one or more sensors of the head mounted device includes one or more photodiodes.
15. The method of claim 8, wherein the ambient noise measurement is generated by one or more microphones of the head mounted device.
16. The method of claim 8, wherein the volume of the audio output is adjusted higher in response to a higher audio output level of the ambient noise measurement taken in a high-noise environment.
17. The method of claim 8, wherein the volume of the audio output is adjusted lower in response to a lower audio output level of the ambient noise measurement taken in a low-noise environment.
18. A head mounted device comprising:
- a camera configured to capture one or more images of an eye in an eyebox region of the head mounted device;
- processing logic configured to receive the one or more images from the camera, wherein the processing logic is also configured to perform image processing on the one or more images of the eye to determine at least one of movement of eyebrows, movement of eyelids, or facial muscle micro gestures; and
- a lens, wherein the processing logic is also configured to adjust a transparency of the lens based at least in part on the movement of eyebrows, the movement of eyelids, or the facial muscle micro gestures.
19. The head mounted device of claim 18 further comprising:
- a photodetector configured to generate an ambient light measurement, wherein the processing logic is further configured to adjust the transparency of the lens of the head mounted device based on the ambient light measurement and the heart rate of the user.
20. The head mounted device of claim 19, wherein the photodetector includes an image sensor.
Type: Application
Filed: Jun 20, 2023
Publication Date: Oct 19, 2023
Inventors: Sebastian Sztuk (Virum), Salvael Ortega Estrada (Cedar Park, TX)
Application Number: 18/212,041