SYSTEM FOR USING AUGMENTED REALITY FOR VISION
A computer-implemented system and method for an augmented reality system for vision, operable and upgradeable on different portable, wearable and self-contained hardware platforms that is easily adjustable by the user. The system comprises at least: a frame; one or more than one display lens attached to the frame; one or more than one camera operably connected to the frame; communications means for transferring power and data to the hardware platform connected to the frame; an input device for controlling the hardware platform; one or more than one processor; and a storage. The system executes instructions to automatically detect the capabilities of the hardware, loads and installs applications for the user based on the detected hardware, and changes the magnification and contrast used to view a selected application, where the contrast adjustment uses filters specifically designed to enhance the viewing experience of the visually impaired.
The present invention is in the technical field of augmented reality, and more particularly to a method for using augmented reality for vision operable on different hardware platforms that is easily adjustable by the user.
BACKGROUND
The current estimated number of visually impaired people in the world is 285 million; of those, 246 million have low vision. There are an estimated 60 million people in the United States and Europe with low vision. Due to an aging populace, vision loss rates will almost double by 2030. This is an average of 27.25 thousand per million population. Current products for treating people with a visual impairment have not kept pace with mainstream technology. The implementation of Augmented Reality (AR) and Virtual Reality (VR) systems has not occurred outside of gaming. There are qualities in AR and VR technology that can assist people who suffer from low vision, and other uses that can move AR and VR systems from their current novelty use to a life-changing system.
Others have attempted to provide some assistance using proprietary systems, such as, for example, U.S. Pat. No. 8,135,227 to Esight Corporation, titled “Apparatus and method for augmenting sight.” Disadvantageously, the Esight patent requires extensive involvement of medical and ophthalmological personnel to: determine “the locations of retinal damage in an eye of the patient” and obtain “an image of a scene viewed by the patient” and then “mapping, using a processor, the image to a display in such a way to avoid the locations of retinal damage when the display is viewed by the patient.” This process, while workable, does not take into consideration the time and effort required to accomplish this task. Further, the Esight invention must be recalibrated every time the patient's eyesight changes. This non-adjustable, time-consuming method is unsuited to seniors or others with low vision, as they must continually have the system adjusted by professionals.
Along with the Esight system noted above, other available systems and methods have failed to develop deliverable markets. Each product attempts to solve a singular vision issue with complicated technology. To date, many virtual and augmented reality wearable systems developed have not lived up to expectations or have simply failed to deliver any results.
These factors have stalled wide implementation of AR/VR. Limitations in portability, weight and size have prevented current low vision technology solutions from supporting these collaborative efforts. Current products are bulky, which is why few eye doctors dispense them to patients or end users. The currently available platforms are not internet enabled, and applications cannot be added. Most comprise both proprietary hardware and software in an attempt to “lock” users into a particular platform ecosystem. The products also require a technician to go to the user's home to install the system. Many require the user to go to a doctor and have the system calibrated to what the doctor believes is the best view for them. With few exceptions, currently available technology is not portable and cannot be used outside the home, thereby limiting the usefulness of the system.
Additionally, the use of AR/VR in the surgical and dental space has been limited to a single magnification setting; the current devices cannot record, take pictures, email, or link the surgery back to electronic medical records. The currently available surgical and dental loupes are also heavy, which creates neck strain for the doctor.
Therefore, there is a need for an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user.
These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying figures where:
The present invention overcomes the limitations of the prior art by providing an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user. A minimal system comprises a frame, one or more than one display lens attached to the frame, one or more than one camera operably attached to the frame, wired, wireless or both wired and wireless communications means for transferring power, data or both power and data to the hardware platform connected to the frame, a wired, wireless or both wired and wireless input device for controlling the hardware platform operably connected to the frame, one or more than one processor for executing instructions connected to the frame, and a storage attached to the processor for storing instructions, applications and data connected to the frame. The instructions operable on the processor comprise powering the system, loading instructions from a storage into one or more than one processor, displaying information about the system to the user, displaying a selection menu to the user, accepting inputs from the user adjusting the image displayed, and executing instructions for a selected menu item. The system further comprises a battery connected to the frame for powering the hardware platform, a microphone operably connected to the frame, one or more than one audio output device connected to the frame, and a remote control operably connected to the processor for transmitting commands. The system further comprises one or more than one removable lens located in front of the one or more than one camera for capturing images. The lenses can also be of various magnification levels to assist the camera for patients with magnification needs that exceed the hardware provided.
There is also provided a method for using an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user. The method comprises the steps of first placing an augmented reality system for vision on a user's head and in front of the user's eyes. Then powering the system. Next, loading instructions from a storage into one or more than one processor. Then, displaying information about the system to the user. Next, displaying a selection menu to the user. Then, accepting inputs from the user adjusting the image displayed. Next, executing instructions for a selected menu item. Finally, repeating the steps above until the system is powered off.
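By way of illustration only, the power-on, menu and input loop described in the method above can be sketched as follows. This is a minimal sketch under assumed names; `load_instructions`, `run_system` and the event strings are hypothetical and are not part of the claimed system:

```python
# Minimal sketch of the claimed startup-and-menu loop, assuming a simple
# scripted event model. All names here are illustrative only.

def load_instructions(storage):
    # Step: loading instructions from a storage into the processor.
    return storage.get("instructions", [])

def run_system(storage, inputs):
    """Run the menu loop over a scripted sequence of user inputs,
    returning a log of the steps performed."""
    log = []
    load_instructions(storage)
    log.append("system info displayed")   # displaying information about the system
    log.append("menu displayed")          # displaying a selection menu
    for event in inputs:                  # accepting inputs from the user
        if event == "power_off":
            break                         # loop repeats until powered off
        log.append(f"executed: {event}")  # executing instructions for the item
        log.append("menu displayed")      # redisplay the menu and repeat
    return log
```

The loop mirrors the final method step: the display/accept/execute sequence repeats until a power-off input arrives.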
There is also provided a computer-implemented method for an augmented reality system for vision, operable and upgradeable on different portable, wearable and self-contained hardware platforms that is easily adjustable by the user, comprising executing on one or more than one processor. First, by providing a portable, wearable and self-contained hardware platform. Where the hardware comprises at least a frame, one or more than one display lenses attached to the frame, one or more than one camera operably connected to the frame, wired, wireless or both wired and wireless communications means for transferring power, data or both power and data to the hardware platform connected to the frame, a wired, wireless or both wired and wireless input device for controlling the hardware platform operably connected to the frame, one or more than one processor for executing instructions connected to the frame, and a storage attached to the processor for storing instructions, applications and data connected to the frame. Then, executing instructions on the one or more than one processor to automatically detect the hardware capabilities of the portable, wearable and self-contained hardware platform. Next, loading and installing instructions for a user onto the portable, wearable and self-contained hardware platform. Then, displaying available applications to the user based on the detected hardware available on the portable, wearable and self-contained hardware platform. Next, selecting an available application. Then, retrieving the selected application from a storage connected to the one or more than one processor. Next, executing the retrieved instructions for the selected application. Then, changing the magnification to view the selected application. Next, changing the contrast of the selected application using the track pad, the remote control or a voice command. The step of selecting an available application may be performed using a track pad, a remote control or a voice command.
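The capability-detection and application-menu steps above can be sketched as follows. The capability names, probe fields and application requirements below are assumptions for illustration only, not values disclosed in this application:

```python
# Sketch of hardware-capability detection driving the application menu.
# The platform is modeled as a plain dict; all field names are hypothetical.

def detect_capabilities(platform):
    """Return the set of capabilities the platform reports."""
    probes = {
        "camera": platform.get("camera_count", 0) > 0,
        "microphone": platform.get("has_microphone", False),
        "trackpad": platform.get("has_trackpad", False),
    }
    return {name for name, present in probes.items() if present}

# Each application lists the capabilities it requires (illustrative).
APP_REQUIREMENTS = {
    "magnifier": {"camera"},
    "text_reader": {"camera", "microphone"},
    "voice_control": {"microphone"},
}

def available_applications(platform):
    """Display only the applications the detected hardware can support."""
    caps = detect_capabilities(platform)
    return sorted(app for app, needs in APP_REQUIREMENTS.items() if needs <= caps)
```

A platform with a camera but no microphone would, under this sketch, be offered only the magnifier, matching the method's step of displaying applications based on the detected hardware.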
The method further comprises auto-focusing the one or more than one display lenses, where the one or more than one display lenses are set to continuous auto focus but locked at set distances determined by magnification settings. Adjusting a magnification of the one or more than one display lenses. Selecting and overlaying contrast filters specifically designed to enhance the viewing experience of the visually impaired above a threshold. Enabling hands-free control using various voice frequencies and algorithms set to commands and key phrases that control the portable, wearable and self-contained hardware platform. Executing instructions located on the portable, wearable and self-contained hardware platform for reading back text, signs or any printed material. Recording multimedia information from the portable, wearable and self-contained hardware platform. Displaying an onboard gallery of the stored multimedia information. Electronically changing the lens attached to the one or more than one camera to take a 50 degree electronically magnified image at distances of greater than 50 feet and editing the field of view down to 30 degrees, creating a clear image. The magnification can be adjusted to, but not limited to, 0× magnification (or no magnification) up to at least 12× magnification, depending on the available hardware. The step of reading back text, signs or any printed material is operable to a distance of at least 10 feet from the one or more than one camera within the system without using any external services. The multimedia information is stored to the storage attached to the one or more than one processor for later download. Additionally, the multimedia information is transmitted wired, wirelessly or both wired and wirelessly to a secure server or electronic medical record.
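The threshold-based contrast overlay described above can be sketched as a simple contrast stretch about a threshold value. This is one possible filter of the kind described, not the specific filter of the invention; the default `threshold` and `gain` values are illustrative assumptions:

```python
import numpy as np

def contrast_threshold_filter(image, threshold=128, gain=1.8):
    """Illustrative high-contrast filter for low-vision viewing:
    stretch pixel values away from a threshold by a gain factor,
    then clip to the displayable 8-bit range. The defaults are
    assumed values, not parameters from this disclosure."""
    img = image.astype(np.float32)
    out = (img - threshold) * gain + threshold  # stretch about the threshold
    return np.clip(out, 0, 255).astype(np.uint8)
```

Pixels darker than the threshold are pushed darker and brighter pixels are pushed brighter, which is one common way such a filter can raise the apparent contrast of text and edges for a low-vision user.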
DETAILED DESCRIPTION OF THE INVENTION
The present invention overcomes the limitations of the prior art by providing an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user. Almost all the available AR/VR platforms today are used for gaming and virtual tours. However, in the vision field, there are only a handful of systems that attempt to improve the lives of people with macular degeneration and other vision ailments. These systems are limited in their approach and not user friendly. Most require constant recalibration and visits to a technician or a doctor to “re-adjust” the device so that it can be useful to the user again. However, this leads to many complaints by the users because vision changes on a daily basis. Using the present invention, constant re-adjustments are made by the user to get the best view for them at that time under changing conditions. For example, going to a movie will require several vision adjustments that cannot be accomplished by currently available devices. However, these adjustments are easily done, on the fly, by using the present invention. The cost of the proprietary hardware is also prohibitive to many lower income and disabled individuals. The present invention is hardware agnostic, thereby allowing it to be used on any system meeting the minimum requirements. The present invention lowers the entry point, is portable between devices and is customizable by the user, thereby overcoming all the limitations of the prior art.
All dimensions specified in this disclosure are by way of example only and are not intended to be limiting. Further, the proportions shown in these Figures are not necessarily to scale. As will be understood by those with skill in the art with reference to this disclosure, the actual dimensions and proportions of any system or part of a system disclosed in this disclosure will be determined by its intended use.
Methods and systems that implement the embodiments of the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention. Reference in the specification to “one embodiment” or “an embodiment” is intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least an embodiment of the invention. The appearances of the phrase “in one embodiment” or “an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. In addition, the first digit of each reference number indicates the figure where the element first appears.
As used in this disclosure, except where the context requires otherwise, the term “comprise” and variations of the term, such as “comprising”, “comprises” and “comprised” are not intended to exclude other additives, components, integers or steps.
In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. Well-known circuits, structures and techniques may not be shown in detail in order not to obscure the embodiments. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail.
Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. The flowcharts and block diagrams in the figures can illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments disclosed. In this regard, each block in the flowchart or block diagrams can represent a module, segment, or portion of code, that can comprise one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. Additionally, each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Moreover, a storage may represent one or more systems for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory systems and/or other non-transitory machine readable mediums for storing information. The term “machine readable medium” includes, but is not limited to portable or fixed storage systems, optical storage systems, wireless channels and various other non-transitory mediums capable of storing, comprising, containing, executing or carrying instruction(s) and/or data.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). One or more than one processor may perform the necessary tasks in series, distributed, concurrently or in parallel. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or a combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted through a suitable means including memory sharing, message passing, token passing, network transmission, etc. and are also referred to as an interface, where the interface is the point of interaction with software, or computer hardware, or with peripheral systems.
In the following description, certain terminology is used to describe certain features of one or more embodiments of the invention.
Various embodiments provide an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user. One embodiment of the present invention provides a system for using augmented reality for vision. In another embodiment, there is provided a method for using the system. The system and method will now be disclosed in detail.
Referring now to
Also included in the frame 101 is one or more than one processor 108 and a storage (not shown). The one or more than one processor 108 and the storage are sized for the appropriate usage of the system 100. A typical system 100 would preferably have at least a quad-core processor, 4 gigabytes of RAM and 4 gigabytes of storage located in the frame 101. External expansion ports, as known in the industry, are also possible additions to the frame 101.
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
More expensive telehealth systems 900 can be reserved for operating rooms, dental offices and other areas not prone to excessive stresses and where a medical loupe might be used, so that x-ray, CAT scan, MRI or other required information is available immediately by voice command to the medical professional. For example, multiple surgeons working on a single patient can easily access the information and share information relevant to the patient all without using their hands. Specific medical information, or little-known procedures, can be displayed to the medical personnel on demand. Other medical specialists can also observe the procedures and assist. Additionally, student training can be more easily accomplished as the students would have an actual point of view of the procedure in real time.
Referring now to
Referring now to
Referring now to
The advantages over the competition are the following:
1. Auto-focus set to continuous, but locked at set distances determined by magnification settings.
2. Variable magnification, inclusive of but not limited to 0× (or no magnification) up to 12×.
3. Contrast or augmented reality threshold filters specifically designed to enhance the viewing experience of the visually impaired.
4. Voice In/Voice Out: a large variance of voice frequencies and algorithms set to commands and key phrases that control the smartglasses, enabling hands-free control.
5. TTS/OCR: the augmented reality software ecosystem has the ability to read back text, signs or any printed material up to, but not limited to, 10 feet from the device.
6. Voice-controlled recording or picture taking: for the visually impaired and/or surgical and dental loupes, the software can record or take pictures via voice commands, then store them to an on-board gallery or email them to a secure server or electronic medical record.
7. On-board gallery: the software ecosystem contains within it a secure gallery that can store images and recordings.
8. 2× lens (hardware): the lens has the ability, but is not limited to, taking a 50 degree electronically magnified image at distances of greater than 50 feet and cutting the field of view down to 30 degrees, in turn creating a clear image. The lens can also be affixed to or removed from the smart glasses.
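The field-of-view crop described in item 8 implies an effective linear magnification that can be estimated with a simple pinhole-camera model. The sketch below is an illustrative calculation only; the disclosure does not state this formula, and the pinhole assumption is ours:

```python
import math

def fov_crop_zoom(full_fov_deg, cropped_fov_deg):
    """Effective linear magnification obtained by cropping an image with
    a full horizontal field of view to a narrower one, under a pinhole
    camera model: zoom = tan(full/2) / tan(cropped/2)."""
    return (math.tan(math.radians(full_fov_deg / 2))
            / math.tan(math.radians(cropped_fov_deg / 2)))
```

Under this model, cropping the described 50 degree capture down to a 30 degree field of view yields roughly a 1.7× additional linear magnification on top of the lens's optical 2×.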
Referring now to
Referring now to
Additionally, the system 100 further comprises instructions to perform the following:
1. Auto-focusing the one or more than one display lenses 104 and 106, set to continuous auto focus but locked at set distances determined by magnification settings;
2. Adjusting the variable magnification to, but not limited to, 0× (or no magnification) up to at least 12×, depending on the available hardware;
3. Using contrast filters specifically designed to enhance the viewing experience of the visually impaired above a threshold;
4. Voice In/Voice Out: using various voice frequencies and algorithms set to commands and key phrases that control the system 100, enabling hands-free control;
5. TTS/OCR: executing instructions located on the system 100 to read back text, signs or any printed material up to, but not limited to, 10 feet from the device without using any external services;
6. Voice-controlled recording or picture taking for the visually impaired and/or surgical and dental loupes, where the recordings or pictures are then stored to an on-board gallery or transmitted to a secure server or electronic medical record;
7. Maintaining a secure on-board gallery that can store images and recordings for future download; and
8. Changing the lens attached to the one or more than one camera 102 to take a 50 degree electronically magnified image at distances of greater than 50 feet and cut the field of view down to 30 degrees, in turn creating a clear image. The lens can also be affixed to or removed from the smart glasses.
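The hands-free, key-phrase control described in item 4 can be sketched as a simple phrase-to-action dispatch over the system state. The command phrases, state fields and 0×-12× clamp shown are illustrative assumptions layered on the disclosed magnification range, not the actual recognition algorithm:

```python
# Illustrative key-phrase dispatch for hands-free control. The system
# state is modeled as a plain dict; phrases and handlers are hypothetical.

COMMANDS = {
    "zoom in": lambda s: {**s, "magnification": min(s["magnification"] + 1, 12)},
    "zoom out": lambda s: {**s, "magnification": max(s["magnification"] - 1, 0)},
    "take picture": lambda s: {**s, "gallery": s["gallery"] + ["photo"]},
}

def handle_phrase(phrase, state):
    """Match a recognized phrase against the known key phrases and
    apply the first matching action; unrecognized phrases are ignored."""
    for key, action in COMMANDS.items():
        if key in phrase.lower():
            return action(state)
    return state
```

Clamping the magnification between 0× and 12× reflects the hardware-dependent range stated elsewhere in this disclosure; the "take picture" handler models item 6's voice-controlled capture into the on-board gallery.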
What has been described is a new and improved system and method for a system for using augmented reality for vision, overcoming the limitations and disadvantages inherent in the related art.
Although the present invention has been described with a degree of particularity, it is understood that the present disclosure has been made by way of example and that other versions are possible. As various changes could be made in the above description without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be illustrative and not used in a limiting sense. The spirit and scope of the appended claims should not be limited to the description of the preferred versions contained in this disclosure.
All features disclosed in the specification, including the claims, abstracts, and drawings, and all the steps in any method or process disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in the specification, including the claims, abstract, and drawings, can be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Any element in a claim that does not explicitly state “means” for performing a specified function or “step” for performing a specified function should not be interpreted as a “means” or “step” clause as specified in 35 U.S.C. § 112.
Claims
1. An augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user, comprising:
- a) a frame;
- b) one or more than one display lens attached to the frame;
- c) one or more than one camera operably attached to the frame;
- d) wired, wireless or both wired and wireless communications means for transferring power, data or both power and data to the hardware platform connected to the frame;
- e) a wired, wireless or both wired and wireless input device for controlling the hardware platform operably connected to the frame;
- f) one or more than one processor for executing instructions connected to the frame, where the instructions comprise: 1) powering the system; 2) loading instructions from a storage into one or more than one processor; 3) displaying information about the system to the user; 4) displaying a selection menu to the user; 5) accepting inputs from the user adjusting the image displayed; 6) executing instructions for a selected menu item; 7) recording multimedia information from the portable, wearable and self-contained hardware platform; 8) displaying an onboard gallery of the stored multimedia information; and 9) electronically changing the lens attached to the one or more than one camera to take at least a 50 degree electronically magnified image at distances of greater than 50 feet and editing the field of view down to 30 degrees, creating a clear image; and
- g) a storage attached to the processor for storing instructions, applications and data connected to the frame.
2. The system of claim 1, further comprising:
- a) a battery connected to the frame for powering the hardware platform;
- b) a microphone operably connected to the frame;
- c) one or more than one audio output device connected to the frame; and
- d) a remote control operably connected to the processor for transmitting commands.
3. The system of claim 2, further comprising one or more than one removable lens located in front of the one or more than one camera for capturing images.
4. A method for using an augmented reality system for vision, operable and upgradeable on different hardware platforms that is easily adjustable by the user, the method comprising the steps of:
- a) placing an augmented reality system for vision on a user's head and in front of the user's eyes;
- b) powering the system;
- c) loading instructions from a storage into one or more than one processor;
- d) displaying information about the system to the user;
- e) displaying a selection menu to the user;
- f) accepting inputs from the user adjusting the image displayed;
- g) executing instructions for a selected menu item; and
- h) repeating steps d) through g) until the system is powered off.
5. A computer-implemented method for an augmented reality system for vision, operable and upgradeable on different portable, wearable and self-contained hardware platforms that is easily adjustable by the user, comprising executing on one or more than one processor the steps of:
- a) providing a portable, wearable and self-contained hardware platform, where the hardware platform comprises at least: 1) a frame; 2) one or more than one display lenses attached to the frame; 3) one or more than one camera operably connected to the frame; 4) wired, wireless or both wired and wireless communications means for transferring power, data or both power and data to the hardware platform connected to the frame; 5) a wired, wireless or both wired and wireless input device for controlling the hardware platform operably connected to the frame; 6) one or more than one processor for executing instructions connected to the frame; and 7) a storage attached to the processor for storing instructions, applications and data connected to the frame;
- b) executing instructions on the one or more than one processor to automatically detect the hardware capabilities of the portable, wearable and self-contained hardware platform;
- c) loading and installing instructions for a user onto the portable, wearable and self-contained hardware platform;
- d) displaying available applications to the user based on the detected hardware available on the portable, wearable and self-contained hardware platform;
- e) selecting an available application;
- f) retrieving the selected application from a storage connected to the one or more than one processor;
- g) executing the retrieved instructions for the selected application;
- h) changing the magnification to view the selected application;
- i) changing the contrast of the selected application using the track pad, the remote control or by voice command;
- j) recording multimedia information from the portable, wearable and self-contained hardware platform;
- k) displaying an onboard gallery of the stored multimedia information;
- l) electronically changing the lens attached to the one or more than one camera to take a 50 degree electronically magnified image at distances of greater than 50 feet and editing the field of view down to 30 degrees, creating a clear image.
6. The method of claim 5, where the step of selecting an available application is performed using a track pad.
7. The method of claim 5, where the step of selecting an available application is performed using a remote control.
8. The method of claim 5, where the step of selecting an available application is performed using a voice command.
9. The method of claim 5, further comprising the steps of:
- a) auto-focusing the one or more than one display lenses, where the one or more than one display lenses are set to continuous auto focus but locked at set distances determined by magnification settings;
- b) adjusting a magnification of the one or more than one display lenses;
- c) selecting and overlaying contrast filters specifically designed to enhance the viewing experience of the visually impaired above a threshold;
- d) enabling hands free control using various voice frequencies and algorithms set to commands and key phrases that control the portable, wearable and self-contained hardware platform; and
- e) executing instructions located on the portable, wearable and self-contained hardware platform for reading back text, signs or any printed material.
10. The method of claim 9, where the magnification can be adjusted from 0× magnification up to at least 12× magnification, depending on the available hardware.
11. The method of claim 9, where the step of reading back text, signs or any printed material is operable to a distance of at least 10 feet from the one or more than one camera within the system without using any external services.
12. The method of claim 9, where the multimedia information is stored to the storage attached to the one or more than one processor for later download.
13. The method of claim 9, where the multimedia information is transmitted wired, wirelessly or both wired and wirelessly to a secure server or electronic medical record.
Type: Application
Filed: Aug 20, 2019
Publication Date: Dec 12, 2019
Inventor: Mark Greget (Newport Beach, CA)
Application Number: 16/545,320