NEAR-EYE LIGHT-FIELD DISPLAY SYSTEM
A near-eye light field display for use with a head mounted display unit with enhanced resolution and color depth. A display for each eye is connected to one or more actuators to scan each display, increasing the resolution of each display by a factor proportional to the number of scan points utilized. In this way, the resolution of near-eye light field displays is enhanced without increasing the size of the displays.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/152,893, filed Apr. 26, 2015, which is hereby incorporated herein by reference in its entirety.
TECHNICAL FIELD
The disclosed technology relates generally to near-eye displays, and more particularly, some embodiments relate to near-eye systems having light-field displays.
DESCRIPTION OF THE RELATED ART
Head-mounted displays (“HMDs”) are generally configured such that one or more displays are placed directly in front of a person's eyes. HMDs have been utilized in various applications, including gaming, simulation, and military uses. Traditionally, HMDs have comprised heads-up displays, wherein the user focuses on the display in front of the eyes, as images are traditionally displayed on a two-dimensional (“2D”) surface. Optics are used to make the display(s) appear farther away than they actually are, in order to allow a suitable display size to be utilized so close to the human eye. Despite the use of optics, however, HMDs generally have low resolution because of trade-offs related to the overall weight and form factor of the HMD, as well as pixel pitch.
BRIEF SUMMARY OF EMBODIMENTS
According to various embodiments of the disclosed technology, a head mounted display for generating light field representations is provided. The head mounted display comprises an array of lenses (comprising a plurality of light field lenses) positioned opposite and parallel to an array of displays (comprising a plurality of light sources). The array of lenses may be configured to capture light rays from one or more light sources of the array of displays to generate a near-eye light field representation. The head mounted display may include an exterior housing configured to support the edge of the array of lenses, and an interior housing configured to support the array of displays disposed on a surface of the interior housing. In some embodiments, the exterior housing and the interior housing may be positioned such that the distance between the array of lenses and the array of displays remains fixed. In other embodiments, a vertical motion actuator may be disposed between the interior housing and the exterior housing such that one housing may be moved vertically relative to the other to increase or reduce the distance between the two arrays.
Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosed technology. The summary is not intended to limit the scope of any inventions described herein, which are defined solely by the claims attached hereto.
The technology disclosed herein, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments of the disclosed technology. These drawings are provided to facilitate the reader's understanding of the disclosed technology and shall not be considered limiting of the breadth, scope, or applicability thereof. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
The figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the disclosed technology be limited only by the claims and the equivalents thereof.
DETAILED DESCRIPTION OF THE EMBODIMENTS
As discussed above, HMDs generally employ one or more displays placed in front of the human eye. 2D images are shown on the displays, and the eye focuses on the display itself. In order to provide a clear, focused image, optics placed between the eye and the display make the display appear farther away than it actually is. In this way, the eye is capable of focusing beyond the space occupied by the display.
HMDs are generally either too large, have limited resolution, or a combination of both. This is due to the distance between pixels in the display, or pixel pitch. When there is sufficient distance between the display and the eye, pixel pitch does not impact resolution to a great extent, as the space between pixels is not as noticeable. However, HMDs place displays near the eye, making pixel pitch an important limiting factor related to resolution. In order to increase resolution, larger displays are necessary to increase the number of pixels in the display. Larger displays require larger optics to create the illusion of space between the eye and the display.
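To make the pixel-pitch trade-off above concrete, the sketch below computes the angular size of a single pixel at different viewing distances. The pitch and distances used are illustrative assumptions, not values from this disclosure:

```python
import math

def pixel_angle_arcmin(pitch_mm, distance_mm):
    """Angular size of one pixel, in arcminutes, viewed from distance_mm.

    The eye resolves roughly one arcminute, so pixels subtending more
    than that (and the gaps between them) become individually visible.
    """
    return math.degrees(2 * math.atan(pitch_mm / (2 * distance_mm))) * 60

# The same 0.05 mm pitch is below the eye's resolution at a desktop
# viewing distance but well above it at a near-eye distance.
desktop = pixel_angle_arcmin(0.05, 500.0)   # roughly 0.34 arcmin
near_eye = pixel_angle_arcmin(0.05, 50.0)   # roughly 3.4 arcmin
```

This is why the same panel that looks smooth on a desk shows visible pixel structure when placed near the eye, absent either a finer pitch or the scanning approach described below.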
Traditionally, the display in a HMD is a 2D surface projecting a 2D image to each eye. Some HMDs utilize waveguides in an attempt to simulate 3D images. Waveguides, however, are complex, requiring precise design and manufacture to avoid errors in beam angle and beam divergence.
One solution that provides true three-dimensional images is the use of near-eye light field displays. Similar to light field cameras, a near-eye light field display creates a representation of light as it crosses a plane that provides information not only relating to the intensity of the light, but also to the direction of the light rays. Traditional 2D displays only provide information regarding the intensity of light on a plane. Waveguides with diffractive optical elements may be used to synthesize light rays on a plane, but this approach is complex, requiring precise design and manufacture to avoid errors in light ray angle and divergence.
Embodiments of the technology disclosed herein are directed toward systems and methods for near-eye light field displays. More particularly, the various embodiments of the technology disclosed herein relate to near-eye light field displays providing enhanced resolution and color depth compared to conventional near-eye light field displays. As described in greater detail below, embodiments of the technology disclosed herein enable near-eye display systems with true light-field representations, providing true 3D imaging with greater resolution, and without the need for complex waveguides. By scanning a light source array, such as an LED or OLED display, while controlling the intensity of the light source array in synchronization with the scan pattern, the impact of pixel pitch is reduced, resulting in increased resolution without the need for larger displays or optics. In some embodiments, the intensity modulation of the pixels is achieved by turning the pixels on and off for a time duration that is dependent on the desired intensity. For example, if higher intensity is desired on a red pixel than on a blue pixel, the red pixel would be lit up longer than the blue pixel. In some embodiments, the intensity modulation of the pixels is achieved by adjusting the current or voltage to the light emitter in the pixel.
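The duty-cycle form of intensity modulation described above can be sketched as follows. The frame period and bit depth are illustrative assumptions, not values specified in this disclosure:

```python
def on_time_us(intensity, frame_period_us=16667, bit_depth=8):
    """Return how long (in microseconds) to keep a pixel lit within one
    frame so its time-averaged brightness matches `intensity`.

    intensity: desired level, 0 .. (2**bit_depth - 1)
    frame_period_us: one frame duration (illustrative 60 Hz default)
    """
    max_level = 2 ** bit_depth - 1
    if not 0 <= intensity <= max_level:
        raise ValueError("intensity out of range")
    return frame_period_us * intensity / max_level

# As in the red/blue example above: a pixel driven at a higher level
# stays lit for a longer fraction of the frame.
red_on = on_time_us(200)
blue_on = on_time_us(50)
```

In a scanned display, this on-time would further be synchronized with the scan position, so each sub-pixel location receives its own duty cycle.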
Moreover, the distance between the light source array and an array of lenses may be adjusted during the retention time of the human eye. In this way, the rays from one or more lenses in the array of lenses provide the depth cues for an image within the field of view of the HMD. In some embodiments, the Z-direction motion may be achieved by one or more vertical actuators, separate from the actuators utilized for lateral movement of the light source displays. In some embodiments, said vertical actuators may be used to move a lens. In various embodiments, one or more actuators may be utilized which are capable of both lateral (in-plane) and vertical (out-of-plane) movement of the light source displays, without the need for separate, particularized actuators for each type of movement. In this manner, true light-field display is possible, without the need for the use of light-field cameras or continually changing the focus of image capture cameras and piecing the images together into a representation. Non-limiting examples of such vertical actuators and dual-plane (in-plane & out-of-plane) actuators include actuators disclosed in co-pending U.S. patent application Ser. No. 15/089,276, filed Apr. 1, 2016, the disclosure of which is herein incorporated by reference in its entirety.
By employing embodiments of the systems and methods described below, it is possible to reduce the size and/or enhance the resolution and color of traditional HMDs, or convert a traditional HMD into a light field display.
In other words, by utilizing a near-eye light field display, the eyes can “look through” the display and focus on a virtual image 130 beyond the light source (display) 110. Note, however, that divergence of the rays of light is provided by the distance between the light source array 110 and the array of lenses 120. If the spacing between the light source array 110 and the array of lenses 120 is decreased, the divergence of the rays of light will increase, and the virtual image 130 will appear to be closer. Conversely, if the spacing between the light source array 110 and the array of lenses 120 is increased, the divergence of the rays of light will decrease, and the virtual image 130 will appear to be further away. Accordingly, a light field display may be constructed from a traditional HMD by providing a means of adjusting the focus distance between the light source array 110 and the array of lenses 120 according to the desired apparent distance of the virtual image 130. This adjustment in focus distance changes the direction of the rays as required for a light field display. In various embodiments, the focus adjustment is done within the retention time of the eye while different portions of the light source array are lit up so that different portions of the virtual image 130 appear at different distances. The focus adjustment is done when the focus of the eye changes in some embodiments, as can happen when the user looks at objects that are closer or further away.
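The relationship above between lens-to-display spacing and apparent image distance follows from the thin-lens model. A minimal numeric sketch, with an illustrative focal length that is not taken from this disclosure:

```python
def virtual_image_distance(spacing_mm, focal_mm):
    """Distance of the virtual image behind the lens (thin-lens model)
    when the display sits `spacing_mm` inside the focal length.

    Valid for spacing_mm < focal_mm (the magnifier regime); the image
    recedes toward infinity as the spacing approaches the focal length.
    """
    if spacing_mm >= focal_mm:
        raise ValueError("display must sit inside the focal length")
    return focal_mm * spacing_mm / (focal_mm - spacing_mm)

# With an illustrative 10 mm focal length: decreasing the spacing pulls
# the virtual image closer; increasing it pushes the image farther away.
near = virtual_image_distance(5.0, 10.0)   # 10.0 mm
far = virtual_image_distance(9.0, 10.0)    # 90.0 mm
```

This matches the qualitative behavior stated above: smaller spacing yields greater ray divergence and a nearer virtual image 130, and vice versa.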
When the virtual image 130 is within the field of view (FOV) of multiple lenses within the array of lenses 120, the rays entering the eye 140 representing the virtual image 130 may come from more than one lens.
As illustrated in
As illustrated in
Cameras 310a, 310b may be disposed on the HMD 300 in various embodiments. The cameras 310a, 310b are configured to capture the user's FOV. Although shown as being disposed such that the cameras 310a, 310b are positioned on the side of the user's head, other embodiments may have cameras disposed elsewhere on the basic near-eye display 300. In some embodiments, the cameras 310a, 310b may be disposed on the top and/or the bottom of the imaging display systems 320a, 320b, respectively. In some embodiments, the HMD 300 may include multiple cameras per imaging display system 320a, 320b, respectively. The cameras 310a, 310b may comprise various different types of image sensors. Non-limiting examples of image sensors that may serve as cameras 310a, 310b include: video cameras; light-field cameras; infrared (IR) cameras; cameras designed for low light; wide dynamic range cameras; high speed cameras; or thermal imaging sensors; among others. In various embodiments, the basic near-eye display 300 may include a combination of the above-identified image sensors to provide a variety of imaging data to the subject, whether all at once or in different operational modes.
The cameras 310a, 310b and the imaging display systems 320a, 320b may be combined within a housing 330. The housing 330 enables the imaging display systems 320a, 320b to be positioned in front of the user's eyes 140. In various embodiments, the housing 330 may be configured as a pair of eyeglasses, with the imaging display systems 320a, 320b positioned where the glass lenses would generally be positioned. In some embodiments, the housing 330 may be configured to wrap around the eyes 140 to prevent any outside light from entering the HMD 300. A variety of components may be included in the housing 330 to maintain the positioning of the HMD 300 on the user's head. Various embodiments may include nasal supports to allow the HMD 300 to rest on the user's nose (not pictured). Various embodiments may include interpupillary distance (IPD) adjustment so that the distance between one display system 320a and the second display system 320b may be adjusted to substantially match the distance between the user's eyes. In some embodiments, the housing 330 may include ear supports to rest on the user's ears (not pictured). The housing 330 may wrap around the user's head, similar to swimming or welding goggles. The housing 330 may include a webbing structure to support the HMD 300 by resting across the skull of the subject, similar to the supporting webbing structure of hard hats. The supports of the housing 330 may include an adjustable strap to allow the HMD 300 to be modified to fit correctly on a user's head.
The example near-eye light field system 400 of
The images from the one or more cameras 410 may be fed into processor unit 420 to compute a true light field representation of the actual image at different depths based on the captured images. For example, where a traditional camera is used, the images from the one or more cameras 410 may comprise a series of pictures at different focal depths. To create the three-dimensional actual image, the different captured images are processed to provide depth to the actual image. The computed light field is used to compute when the light source array (such as light source array 110 discussed with respect to
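The disclosure does not specify how depth is extracted from the series of pictures at different focal depths; one common approach is a depth-from-focus sweep, sketched below with a simple gradient-based sharpness measure. The function name and the sharpness metric are illustrative assumptions:

```python
def depth_from_focus(stack, depths):
    """Pick, per pixel, the capture depth whose image is locally sharpest.

    stack:  list of 2-D images (lists of rows of intensities), each taken
            at a different focal depth.
    depths: the corresponding focal distances.
    Sharpness is approximated by the squared horizontal gradient, an
    illustrative choice; real systems use richer focus measures.
    """
    rows, cols = len(stack[0]), len(stack[0][0])
    depth_map = [[depths[0]] * cols for _ in range(rows)]
    best = [[-1.0] * cols for _ in range(rows)]
    for img, d in zip(stack, depths):
        for r in range(rows):
            for c in range(cols - 1):
                sharp = (img[r][c + 1] - img[r][c]) ** 2
                if sharp > best[r][c]:
                    best[r][c] = sharp
                    depth_map[r][c] = d
    return depth_map
```

The resulting per-pixel depth map is the kind of intermediate the processor unit could use when deciding at which focal position each portion of the light source array should be illuminated.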
In some embodiments, the near-eye light field system 400 may include one or more gyroscopes or accelerometers 430, providing information representative of the particular position of the user's head. Furthermore, the images captured by the one or more cameras 410 may be processed to determine the motion of the user's head, as well. This information may be fed into the processor unit 420 to utilize in computing the light field to account for changes in the position of the user's head.
The near-eye light field system 400 may include one or more imaging display systems 470. The imaging display system 470 may be similar to the imaging display systems 320a, 320b discussed with respect to
As illustrated, the example imaging display system 500 includes a source display 510, actuator 520, light field lens 560, processor unit 540, and actuator control components 530. Non-limiting examples of a source display 510 include an LED, OLED, LCD, plasma, or other electronic visual display technologies. The processor unit 540 may be connected to the source display 510 to control the illumination of the pixels of the source display 510, similar to the processor unit 420 discussed with respect to
In some embodiments the processor unit 540 may include one or more of a microprocessor, memory, a field programmable gate array (FPGA), and/or display and drive electronics. A light field lens 560 is disposed between the source display 510 and the user's eye (not pictured). In some embodiments, the light field lens 560 is composed of multiple lenses arranged along the optical axis in order to improve the optical performance as compared with a single lens.
As discussed above, embodiments of the technology disclosed herein enable enhanced resolution without the need for larger displays. This helps to reduce the overall cost, size, and weight of HMDs and near-eye displays. As illustrated in
To enhance the resolution of the source display 510, the one or more actuators 520 scan the source display 510, increasing the spatial resolution of images, i.e., how closely lines can be resolved in an image. In terms of pixels, the greater the number of pixels per inch (“ppi”), the clearer the image that may be resolved.
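The resolution gain from scanning can be sketched as follows, assuming a uniform square grid of scan positions within one pixel pitch. The grid geometry is an assumption; the disclosure states only that the gain is proportional to the number of scan points:

```python
def effective_ppi(base_ppi, scan_points):
    """Effective pixels-per-inch after scanning the display through
    `scan_points` positions within one pixel pitch.

    Assuming an n x n square grid of positions, the addressable sample
    count grows by scan_points, so the linear resolution (ppi) grows
    by the square root of scan_points.
    """
    return base_ppi * scan_points ** 0.5

# e.g. a 500 ppi panel scanned over a 2 x 2 grid of positions behaves,
# to first order, like a 1000 ppi panel of the same physical size.
```

This is the sense in which resolution is enhanced without enlarging the display or its optics: the extra samples come from actuator motion rather than additional physical pixels.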
As illustrated in
Another benefit of scanning the display is the ability to include different sensors within the display without sacrificing resolution.
In various embodiments, the light field lenses are designed to capture as much light as possible, while maintaining a resolution better than that of the human eye. In some embodiments, each light field lens may be designed with an aperture between 5 and 10 mm. Light field lenses with various aperture sizes may be combined into a single lens array in some embodiments. The focal length of each light field lens may be between 7 and 20 mm. This focal length is for the light field lens itself, and does not take into account chromatic aberration within each lens. Chromatic aberration results in light of different colors having different focal lengths. To account for chromatic aberration in each light field lens in various embodiments, each colored pixel may be turned on during scanning at different focal positions, thereby ensuring that the different colored light impacts the eye at the same focal point. In other embodiments, standard techniques for minimizing chromatic aberration may be used, such as but not limited to doublets and diffraction gratings. Where LEDs or another Lambertian emitter (distributed source) is utilized, a microlens may be disposed on top of the light source to account for the distributed nature of the light.
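The per-color focal scheduling described above might be sketched as follows. The per-color focal lengths here are hypothetical placeholders, not measured values from any lens in this disclosure:

```python
# Hypothetical per-color focal lengths (mm) for one light field lens;
# real values depend on the lens material and design.
FOCAL_MM = {"red": 10.2, "green": 10.0, "blue": 9.8}

def focal_offsets(reference="green"):
    """Axial offset (mm) at which to light each color channel during the
    out-of-plane scan, relative to the reference color, so that all
    colors come to focus at the same point for the eye.
    """
    ref = FOCAL_MM[reference]
    return {color: f - ref for color, f in FOCAL_MM.items()}
```

During scanning, the red sub-pixels would be lit when the display sits at the red offset, the blue sub-pixels at the blue offset, and so on, compensating the dispersion in time rather than with extra optics.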
As discussed above, in some embodiments the light field lenses may be incorporated into an array of lenses that is disposed between the light source displays and the eye.
The array of lenses illustrated in
As discussed above, the curved array of lenses discussed with respect to
The outside edges of the array of lenses 1101 may be connected to an exterior platform 1104, while the array of displays 1102 may be connected to an interior platform 1105 in various embodiments. In this manner, the near-eye display system 1100 may be modularly constructed, with the array of lenses portion constructed separately from the array of displays portion and combined after fabrication. The exterior platform 1104 and/or the interior platform 1105 may further be configured to house additional components of the near-eye display system 1100, including but not limited to: control computer or processing components; cameras; memories; or motion sensors, such as gyroscopes, accelerometers, or other motion sensors; or a combination thereof. The exterior platform 1104 and interior platform 1105 may be created through injection molding or press molding, and may comprise many different materials, including but not limited to plastic.
In some embodiments, the light source displays of the array of displays 1102 may be disposed on an actuator configured to provide both in-plane scanning, as well as out-of-plane motion. In-plane motion refers to motion within the same horizontal plane as the actuator, while out-of-plane motion refers to motion in the vertical direction above or below the actuator. In this way, a light field display may be generated through scanning alone of the light source displays of the array of displays 1102. In some embodiments, only a light source display located in the center of the array of displays 1102 may be disposed on an actuator capable of both in-plane and out-of-plane scanning. In other embodiments, the light source displays surrounding and abutting the central light source display of the array of displays 1102 may be disposed on actuators capable of in-plane and out-of-plane motion, while all the exterior light source displays are disposed on stationary or in-plane only actuators. In some embodiments, the out-of-plane motion may be determined based on one or more position sensors disposed on the array of displays 1102, similar to the sensors discussed with respect to
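A combined in-plane/out-of-plane scan of the kind described above could generate actuator positions as sketched below. The step counts, strides, and raster ordering are illustrative assumptions:

```python
def scan_positions(in_plane_steps, depth_steps, pitch_um, depth_um):
    """Generate (x, y, z) actuator positions combining an in-plane
    raster over one pixel pitch with out-of-plane (vertical) steps.

    in_plane_steps: positions per axis within one pixel pitch
    depth_steps:    number of out-of-plane focal positions
    pitch_um:       pixel pitch covered by the in-plane raster (um)
    depth_um:       total out-of-plane travel (um)
    """
    positions = []
    for z in range(depth_steps):
        for y in range(in_plane_steps):
            for x in range(in_plane_steps):
                positions.append((x * pitch_um / in_plane_steps,
                                  y * pitch_um / in_plane_steps,
                                  z * depth_um / max(depth_steps - 1, 1)))
    return positions

# A 2 x 2 in-plane raster at 3 depths yields 12 scan points per cycle:
# resolution enhancement in-plane, depth cues out-of-plane.
```

One full cycle of these positions, completed within the retention time of the eye, is what allows a single small display to present both an enhanced-resolution image and multiple apparent depths.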
In some embodiments, the out-of-plane motion may be provided by moving the array of displays relative to the array of lenses, or vice versa.
Moreover, the near-eye display system 1200 further illustrates an array of displays 1202 where only the central light source displays of the array of displays 1202 are disposed on actuators. The outside light source displays of the array of displays are disposed directly on the rigid circuit board in the illustrated embodiment.
The focus of each camera 1302, 1304 may be controlled based on the focus of the user's eyes. The basic light field system 1300 may include eye focus sensors 1308, 1310 disposed within a display in front of the left eye and right eye, respectively. Each eye focus sensor 1308, 1310 may include one or more focus sensors in various embodiments. In some embodiments, the eye focus sensors 1308, 1310 may be disposed in the spaces between the pixels of a left display 1312 and a right display 1314. The eye focus sensors 1308, 1310 may be used to determine where a user's eyes are focused. The information from the eye focus sensors 1308, 1310 may be fed into a focus correction module 1316. The focus correction module 1316 may determine the correct focus based on the point where the user's eyes are focused, and provide this information to a display focus control 1318. The display focus control 1318 may provide this information to the camera focus control 1306. The camera focus control 1306 may utilize the focus information from the display focus control 1318 to set the focus of each camera 1302, 1304. The vision of a user with eye focus problems (myopia or hyperopia, i.e., nearsightedness or farsightedness) can be corrected by setting the focus of the cameras to a different depth than the focus of the display. In some embodiments, the cameras 1302, 1304 may be one or more of a light field camera, a standard camera, an infrared camera, or some other image sensor, or a combination thereof. For example, in some embodiments the cameras 1302, 1304 may comprise a standard camera and an infrared camera, enabling the basic light field system 1300 to provide both a normal view and an infrared view to the user.
The display focus control 1318 may also utilize the desired focus from the focus correction module 1316 to set the focus of the displays 1312, 1314, to the focus of each eye.
Once the cameras 1302, 1304 are set to the desired focus, the cameras 1302, 1304 may capture the scene within the field of view of each camera 1302, 1304. The images from each camera 1302, 1304 may be processed by a processor unit 1320, 1322. As illustrated, each camera 1302, 1304 has its own processor unit 1320, 1322, respectively. In some embodiments, a single processor unit may be employed for both cameras 1302, 1304. The processor unit 1320, 1322 may process the images from each camera 1302, 1304 in a similar fashion as described above with respect to
Although illustrated as separate components, aspects of the basic light field system 1300 may be implemented in a single component. For example, the focus correction module 1316, the display focus control 1318, and the camera focus control 1306 may be implemented in software and executed by a processor, such as processor unit 1320, 1322.
At 1420, a desired focus is determined. The desired focus is determined based on the measured eye focus from 1410. The desired focus may be different from the eye focus if the user has focus problems. For example, if the user has myopia (nearsightedness), the desired focus is further away than the measured eye focus. The desired focus may also be determined from the position of the eye (such as a close focus when the user is looking down), from the position of the eye with respect to the image (such as the same focus as a certain object in the scene), or from some other measurement of the eye. Based on the desired focus, the camera focus may be set to the desired focus at 1430. In various embodiments, the camera focus may be set equal to the desired focus. In other embodiments, the camera focus may be set to a focus close to, but not equal to, the desired focus. In such embodiments, the camera focus may be set as close as possible based on the type of camera employed in the embodiment.
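One way to model the focus correction at 1420 is an additive adjustment in diopters (the reciprocal of distance in meters). This additive model is a simplifying assumption for illustration, not the disclosure's stated method:

```python
def desired_focus_m(measured_focus_m, prescription_diopters=0.0):
    """Map the measured eye-focus distance to a desired camera focus
    distance, compensating a refractive error given in diopters.

    Negative prescriptions (myopia) push the desired focus farther away
    than the measured eye focus, matching the example above.
    """
    measured_d = 1.0 / measured_focus_m        # distance -> diopters
    desired_d = measured_d + prescription_diopters
    if desired_d <= 0:
        return float("inf")                    # at or beyond optical infinity
    return 1.0 / desired_d                     # diopters -> distance
```

For instance, a myopic user (hypothetical -1 D prescription) whose eyes are focused at 0.5 m would have the cameras focused at 1.0 m, i.e. further away, as the myopia example above describes.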
At 1440, the cameras capture images of objects within the field of view of the cameras. In some embodiments, the field of view of the cameras may be larger than the displayed field of view to enable some ability to quickly update the display when there is rapid head movement, without the need for capturing a new image.
At 1450, each display is set to the eye focus. In some embodiments, the eye focus is the same as the desired focus. In other embodiments, the desired focus is derived from the eye focus identified at 1410. In some embodiments, the displays may be set to the eye focus before setting the camera focus at 1430, after 1430 but before the camera captures images at 1440, or simultaneous to the actions at 1430 and/or 1440.
At 1460, the images are displayed to each eye. The images are displayed to each eye via the respective display. In some embodiments, the images may be processed by a processor unit prior to being displayed, similar to the processing discussed above with respect to
IMU 1524 may include one or more gyroscopes, accelerometers, or other motion or orientation sensors, or a combination thereof. The components comprising the IMU 1524 may track the motion of a user's head, and provide that information to the processor and memory 1520. In this way, the position of augmented objects or images may be adjusted based on the user's head movements. Augmentation is a way of enhancing the user's experience of the scene within the field of view by providing additional information on objects within the field of view, or even adding computer-generated objects to the field of view.
Although included within the example process 1600, both 1650 and 1660 need not be performed every time. In some embodiments, only adding objects at 1650 will occur. In other embodiments, only enhancing of the images at 1660 will be performed. In various embodiments, both 1650 and 1660 will be performed. Setting the displays to the focus of the eyes at 1670 and displaying the pictures at 1680 may be similar to the setting 1450 and displaying 1460 actions discussed with respect to
As used herein, the term component might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the technology disclosed herein. As used herein, a component might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a component. In implementation, the various components described herein might be implemented as discrete components or the functions and features described can be shared in part or in total among one or more components. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared components in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate components, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.
Where components of the technology are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing component capable of carrying out the functionality described with respect thereto. One such example computing component is shown in
Referring now to
Computing component 1700 might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 1704. Processor 1704 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. In the illustrated example, processor 1704 is connected to a bus 1702, although any communication medium can be used to facilitate interaction with other components of computing component 1700 or to communicate externally.
Computing component 1700 might also include one or more memory components, simply referred to herein as main memory 1708. Main memory 1708, preferably random access memory (RAM) or other dynamic memory, might be used for storing information and instructions to be executed by processor 1704. Main memory 1708 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1704. Computing component 1700 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 1702 for storing static information and instructions for processor 1704.
The computing component 1700 might also include one or more various forms of information storage mechanism 1710, which might include, for example, a media drive 1712 and a storage unit interface 1720. The media drive 1712 might include a drive or other mechanism to support fixed or removable storage media 1714. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided. Accordingly, storage media 1714 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 1712. As these examples illustrate, the storage media 1714 can include a computer usable storage medium having stored therein computer software or data.
In alternative embodiments, information storage mechanism 1710 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing component 1700. Such instrumentalities might include, for example, a fixed or removable storage unit 1722 and an interface 1720. Examples of such storage units 1722 and interfaces 1720 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 1722 and interfaces 1720 that allow software and data to be transferred from the storage unit 1722 to computing component 1700.
Computing component 1700 might also include a communications interface 1724. Communications interface 1724 might be used to allow software and data to be transferred between computing component 1700 and external devices. Examples of communications interface 1724 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 1724 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1724. These signals might be provided to communications interface 1724 via a channel 1728. This channel 1728 might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as, for example, memory 1708, storage unit 1722, media 1714, and channel 1728. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing component 1700 to perform features or functions of the disclosed technology as discussed herein.
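By way of a purely illustrative, non-limiting example of such computer program code, the scan patterns described herein (for instance, a Lissajous curve traced by the actuator carrying the light source) might be generated as in the following sketch. All function and parameter names here are hypothetical and are not part of the claimed subject matter; the amplitudes and frequencies shown are arbitrary example values.

```python
import math

def lissajous_scan_points(num_points, amp_x, amp_y,
                          freq_x=3, freq_y=2, phase=math.pi / 2):
    """Generate (x, y) actuator offsets tracing a Lissajous curve.

    amp_x / amp_y bound the lateral displacement of the light source;
    freq_x / freq_y are the oscillation frequencies along each axis.
    """
    points = []
    for i in range(num_points):
        t = 2 * math.pi * i / num_points
        # Classic Lissajous parameterization: two sinusoids with a
        # fixed phase offset between the axes.
        x = amp_x * math.sin(freq_x * t + phase)
        y = amp_y * math.sin(freq_y * t)
        points.append((x, y))
    return points

# Example: 100 sub-pixel scan points within a +/-0.5 displacement budget.
pts = lissajous_scan_points(100, amp_x=0.5, amp_y=0.5)
```

A processor unit synchronizing pixel illumination with such a pattern would step through these points, displacing the light source and illuminating the appropriate pixels at each point.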
While various embodiments of the disclosed technology have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosed technology, which is done to aid in understanding the features and functionality that can be included in the disclosed technology. The disclosed technology is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be used to implement the desired features of the technology disclosed herein. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.
Although the disclosed technology is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed technology, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the technology disclosed herein should not be limited by any of the above-described exemplary embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “component” does not imply that the elements or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various elements of a component, whether control logic or other elements, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.
Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.
CLAIMS
1. A head mounted display, comprising:
- an array of lenses comprising a plurality of light field lenses;
- an array of displays comprising a plurality of display components, each display component comprising a light source disposed on a circuit board, at least one display component including a light source disposed on an actuator;
- an exterior housing supporting the array of lenses, the exterior housing connected to an outside edge of the array of lenses;
- an interior housing supporting the array of displays, the array of displays disposed on a top surface of the interior housing;
- wherein the array of lenses is disposed at a fixed distance from the array of displays, and each light field lens of the plurality of light field lenses is parallel to at least one display component of the array of displays; and
- wherein an actuator control component is communicatively coupled to the array of displays and a processor unit, and configured to move the at least one light source disposed on the actuator in accordance with a scan pattern.
2. The head mounted display of claim 1, wherein the processor unit is configured to synchronize the illumination of a plurality of pixels of the light source with the scan pattern.
3. The head mounted display of claim 1, wherein the light source comprises an OLED.
4. The head mounted display of claim 1, wherein the light source comprises one of: an LED; an LCD; a plasma display.
5. The head mounted display of claim 1, wherein the scan pattern comprises a raster scan pattern configured to scan the at least one actuator in-plane.
6. The head mounted display of claim 1, wherein the scan pattern results in a Lissajous curve.
7. The head mounted display of claim 1, further comprising one or more cameras housed in the exterior housing, each camera having a field of view encompassing a portion of a view of a user, and wherein the processor unit is further configured to compute a light field representation.
8. The head mounted display of claim 7, wherein the one or more cameras comprise one or more light field cameras.
9. The head mounted display of claim 7, wherein the one or more cameras comprise a motorized focus.
10. The head mounted display of claim 1, each display component comprising one or more focus sensors disposed on a surface of the display component between a plurality of pixels of the light source on the surface of the display component.
11. The head mounted display of claim 1, wherein the scan pattern comprises a depth scan pattern configured to scan the at least one actuator in the Z-axis.
12. The head mounted display of claim 1, wherein an opaque mask is disposed at a transition point between each light field lens of the array of lenses, wherein the transition point comprises a point where an edge of a first light field lens meets an edge of a second light field lens.
13. A head mounted display, comprising:
- a light field lens;
- a display component comprising a light source disposed on an actuator;
- an exterior housing supporting the light field lens;
- an interior housing supporting the display component disposed on a top surface of the interior housing;
- wherein the light field lens is disposed opposite the display component, and parallel to the display component;
- a vertical motion actuator disposed between the interior housing and the exterior housing such that, when activated, the interior housing moves in a vertical direction relative to the exterior housing to increase or decrease the distance between the light field lens and the display component; and
- wherein an actuator control component is communicatively coupled to the display component and a processor unit, and configured to move the light source disposed on the actuator laterally and the vertical motion actuator vertically in accordance with a scan pattern.
14. The head mounted display of claim 13, wherein the processor unit is configured to synchronize the illumination of a plurality of pixels of the light source with the scan pattern.
15. The head mounted display of claim 13, wherein the light source comprises an OLED.
16. The head mounted display of claim 13, wherein the light source comprises one of: an LED; an LCD; a plasma display.
17. The head mounted display of claim 13, wherein the scan pattern comprises a raster scan pattern configured to scan the actuator in-plane.
18. The head mounted display of claim 13, wherein the scan pattern results in a Lissajous curve.
19. The head mounted display of claim 13, further comprising one or more cameras housed in the exterior housing, each camera having a field of view encompassing a portion of a view of a user, and wherein the processor unit is further configured to compute a light field representation.
20. The head mounted display of claim 19, wherein the one or more cameras comprise one or more light field cameras.
21. The head mounted display of claim 19, wherein the one or more cameras comprise a motorized focus.
22. The head mounted display of claim 13, the display component comprising one or more focus sensors disposed on a surface of the display component between a plurality of pixels of the light source on the surface of the display component.
23. The head mounted display of claim 13, wherein the scan pattern comprises a depth scan pattern configured to scan the vertical motion actuator in the Z-axis.
24. The head mounted display of claim 13, further comprising an array of lenses comprising a plurality of light field lenses, an array of displays comprising a plurality of display components, at least one display component including a light source disposed on an actuator.
25. The head mounted display of claim 24, wherein an opaque mask is disposed at a transition point between each light field lens of the array of lenses, wherein the transition point comprises a point where an edge of a first light field lens meets an edge of a second light field lens.
Type: Application
Filed: Apr 1, 2016
Publication Date: Oct 27, 2016
Inventor: ROMAN GUTIERREZ (Arcadia, CA)
Application Number: 15/089,308