HYBRID AUGMENTED REALITY MULTIMODAL OPERATION NEURAL INTEGRATION ENVIRONMENT
A method of controlling a device relative to one or more objects in an environment of a user employing the device may include receiving a volitional input from the user indicative of a task to be performed relative to an object with the device, receiving object targeting information associated with interaction between the device and the object where the object targeting information is presented in an augmented reality context, integrating the volitional input with the object targeting information to determine a control command to direct the device to interact with the object, and providing the control command to the device.
This application is a continuation of prior-filed, co-pending U.S. Nonprovisional application Ser. No. 14/275,029 filed on May 12, 2014, which claims priority to and the benefit of U.S. Provisional Application Ser. No. 61/822,635 filed on May 13, 2013, now expired, the entire contents of which are hereby incorporated herein by reference.
STATEMENT OF GOVERNMENTAL INTEREST
This invention was made with government support under contract number 90045078 awarded by the National Institutes of Health (NIH). The government has certain rights in the invention.
TECHNICAL FIELD
Example embodiments generally relate to assistive devices and, more particularly, relate to a human rehabilitation/assistive device that hybridizes computer automation and human volitional control to perform everyday Activities of Daily Living (ADL) tasks.
BACKGROUND
Prosthetic devices are an example of assistive devices that have continued to evolve over time to improve their functional capabilities and aesthetic appearance. In relation to improving the functional capabilities of such devices, one area in which improvement is desired relates to the use of brain-machine interfaces (BMIs). BMIs attempt to provide a direct communication link between the brain and the prosthetic device to assist with sensory-motor functions. However, current BMIs have not achieved widespread clinical use due to their general inability to provide paralyzed patients with reliable control of prosthetic devices to perform everyday tasks.
Some robotic prosthetic devices such as modular prosthetic limbs (MPLs) are now capable of performing a wide range of dexterous tasks. However, current BMIs tend to require daily training and a significant amount of cognitive effort to enable low-level kinematic control of multiple degrees of freedom. Accordingly, improved BMIs may be desirable.
BRIEF SUMMARY OF SOME EXAMPLES
Accordingly, some example embodiments may enable the provision of a BMI system that utilizes hybrid input, shared control, and intelligent robotics to improve robotic limb control or control of other assistive devices. For example, some embodiments may enable users to visually identify an object and imagine reaching for the object to initiate a semi-autonomous reach and grasp of the object by a highly dexterous modular prosthetic limb. Physiological input signals may include eye tracking for object selection and detection of electrocorticographic (ECoG) neural responses for reach intent. System components for shared control and intelligent robotics may utilize an infrared sensor for object segmentation and semi-autonomous robotic limb control for low-level motor task planning. However, example embodiments may also be used to control other assistive devices such as, for example, wheelchairs or other household devices.
In one example embodiment, a method of controlling a device relative to one or more objects in an environment of a user employing the device is provided. The method may include receiving a volitional input from the user indicative of a task to be performed relative to an object with the device, receiving object targeting information associated with interaction between the device and the object where the object targeting information is presented in an augmented reality context, integrating the volitional input with the object targeting information to determine a control command to direct the device to interact with the object, and providing the control command to the device.
In another example embodiment, a device control unit including processing circuitry configured to control a device relative to one or more objects in an environment of a user employing the device is provided. The processing circuitry may be configured for receiving a volitional input from the user indicative of a task to be performed relative to an object with the device, receiving object targeting information associated with interaction between the device and the object where the object targeting information is presented in an augmented reality context, integrating the volitional input with the object targeting information to determine a control command to direct the device to interact with the object, and providing the control command to the device.
In accordance with another example embodiment, a system for control of a device relative to one or more objects in an environment of a user employing the device is provided. The system may include a volitional input unit, a task control unit, a targeting unit, an eye tracking unit, a machine vision unit, an integration unit and a device controller. The volitional input unit may be configured to generate trigger signals for communication to a task control unit. The trigger signals may be indicative of a task to be performed relative to an object with the device. The targeting unit may be configured to interface with an eye tracking unit and a machine vision unit to generate object targeting information associated with interaction between the device and the object. The object targeting information may be presented in an augmented reality context. The integration unit may be configured to integrate the volitional input with the object targeting information to determine a control command to direct the device to interact with the object. The device controller may be configured to receive the control command and interactively communicate with the device for closed loop control of the device based on the control command.
Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale.
Some example embodiments now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all example embodiments are shown. Indeed, the examples described and pictured herein should not be construed as being limiting as to the scope, applicability or configuration of the present disclosure. Rather, these example embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
Some example embodiments may enable the provision of a relatively lightweight hardware structure for controlling an assistive device such as, for example, a prosthetic device. Such a structure may employ a relatively small number of components that can be provided in a wearable package to provide robust control in an augmented reality environment or context. Accordingly, the control of the device afforded to the wearer and the comfort of the wearer may be enhanced. Example embodiments may be helpful when practiced with prosthetic devices, wheelchairs, household devices or other assistive devices that include grasping capabilities or other functions that would benefit from fine motor control. However, it should be appreciated that some example embodiments may alternatively be practiced in connection with other devices as well. Thus, although an example will primarily be described in a context where the user is a patient and the device is a prosthetic device, other users may also employ other devices consistent with example embodiments.
The volitional input unit 20 and the task control unit 30 may cooperate to enable volitional inputs to be provided to initiate, modulate and discontinue automated prosthetic movements. In some cases, the volitional input unit 20 and the task control unit 30 may cooperate to generate a request for a task to be performed based on volitional inputs, queue a selected task through a context menu, and enable the queued tasks to be performed. While eye tracking may also be used to initiate and discontinue tasks, volitional inputs may work in combination with the eye tracking to provide an intuitive mechanism by which users can continuously direct the prosthetic device (e.g., MPL 84) in real time. The direction of the device may include direction and modulation of a number of actions, such as the speed of movement of the device and the closing of its grasp. CPC and ECoG volitional control may be employed to initiate a grasping sequence on an object detected via machine vision.
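For illustration only, and not as part of the disclosed embodiment, the initiate/modulate/discontinue behavior described above could be organized as a small task state machine along the following lines; the names used here (TaskState, TaskControl, apply_trigger) and the trigger encodings are hypothetical.

```python
from enum import Enum, auto


class TaskState(Enum):
    IDLE = auto()
    QUEUED = auto()
    EXECUTING = auto()


class TaskControl:
    """Minimal sketch of volitional trigger handling for queued device tasks."""

    def __init__(self):
        self.state = TaskState.IDLE
        self.task = None
        self.speed_scale = 1.0  # modulation of movement speed

    def queue_task(self, task_name):
        # A task selected from the context menu is queued for execution.
        self.task = task_name
        self.state = TaskState.QUEUED

    def apply_trigger(self, trigger):
        # Volitional triggers initiate, modulate, or discontinue the queued task.
        if trigger == "initiate" and self.state is TaskState.QUEUED:
            self.state = TaskState.EXECUTING
        elif trigger == "discontinue":
            self.state = TaskState.IDLE
            self.task = None
        elif trigger.startswith("speed:") and self.state is TaskState.EXECUTING:
            self.speed_scale = max(0.0, min(1.0, float(trigger.split(":")[1])))
        return self.state


# Example usage: queue a grasp task, start it, then slow it down.
control = TaskControl()
control.queue_task("grasp_cup")
control.apply_trigger("initiate")
control.apply_trigger("speed:0.5")
```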
In some cases, volitional inputs may further include voice commands that can be integrated for directing tasks and prompting object recognition and task identification modules to identify and then cue predefined tasks. As an example, if a user verbally requests a task of pouring milk into a glass, the system 10 may perform machine vision-aided searches for a container of milk and a glass. If matches can be found in the workspace, then a preview of the proposed task execution may be provided on a display (e.g., monitor 62) so that the patient 12 can accept the proposed plan or override the proposal and define a new trajectory or new object of interest. Additionally or alternatively, the patient 12 may be enabled to provide CPC, BMI or voice commands as volitional inputs to intuitively initiate the planned execution.
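As a further hedged illustration, matching a verbally requested task (such as pouring milk into a glass) against machine-vision detections might proceed roughly as follows; the object labels, dictionary layout, and function name are assumptions introduced for this sketch.

```python
def match_task_objects(required_labels, detections):
    """Return the detected object matching each label required by the task, or None.

    `detections` is assumed to be a list of dicts with 'label' and 'position' keys
    produced by an object recognition step.
    """
    matches = {}
    for label in required_labels:
        found = next((d for d in detections if d["label"] == label), None)
        if found is None:
            return None  # object not in the workspace; task cannot be previewed
        matches[label] = found
    return matches


detections = [
    {"label": "milk_container", "position": (0.42, -0.10, 0.05)},
    {"label": "glass", "position": (0.55, 0.08, 0.02)},
]

plan = match_task_objects(["milk_container", "glass"], detections)
if plan is not None:
    print("Preview proposed pour trajectory for user acceptance:", plan)
else:
    print("Requested objects not found; prompt the user to redefine the task")
```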
In an example embodiment, the integration unit 40 may be configured to receive control information that is integrated with the volitional inputs from an eye tracking and machine vision assembly. In this regard, an eye tracking unit 50 may be provided along with a machine vision unit 60 to provide augmented reality visualizations to the patient 12. The augmented reality visualizations may be provided via a monitor 62 that forms a part of or is otherwise in communication with the machine vision unit 60 and is visible to the patient 12.
In some cases, the monitor 62 could be provided in a pair of goggles or glasses, for example, as a transparent heads up display and, in some cases, also a machine vision element for detecting objects 82 in an environment 80 of the patient 12. The eye tracking unit 50 may interface with the monitor 62 and the patient 12 to determine where on the monitor 62 the patient 12 is looking to generate eye tracking data for communication to a targeting unit 70. The targeting unit 70 may also receive environmental topographical map or video data from the machine vision unit 60 and utilize locational information associated with objects 82 and/or an MPL 84 within the environment 80 surrounding the patient 12. MPL location, map data (which may include object shape, orientation, position and color) and/or an eye tracking solution may therefore be integrated by the targeting unit 70 to determine such information as targeted object shape, orientation, and position, which may be referred to generally as targeting information.
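A minimal sketch of how an eye tracking solution and machine vision map data might be fused into targeting information is given below; the pixel-distance selection rule, field names, and threshold are assumptions rather than the disclosed implementation.

```python
import math


def select_target(gaze_xy, objects, max_pixel_error=60.0):
    """Pick the mapped object whose on-screen centroid lies closest to the gaze point.

    `objects` is assumed to be a list of dicts with 'centroid_px' (monitor pixels),
    'shape', 'orientation', and 'position' fields supplied by the machine vision unit.
    """
    best, best_dist = None, float("inf")
    for obj in objects:
        dx = obj["centroid_px"][0] - gaze_xy[0]
        dy = obj["centroid_px"][1] - gaze_xy[1]
        dist = math.hypot(dx, dy)
        if dist < best_dist:
            best, best_dist = obj, dist
    if best is None or best_dist > max_pixel_error:
        return None  # gaze is not aligned with any detected object
    # Targeting information handed onward for command generation.
    return {"shape": best["shape"], "orientation": best["orientation"], "position": best["position"]}
```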
Accordingly, the monitor 62 may provide a tool for overlaying graphic visualizations with information and live user menus for the patient 12. As such, the monitor 62 may provide an augmented reality environment with menus that provide various modes and methods of interaction for the patient 12 with the MPL 84 and objects 82 in the environment. The displayed information may inform the patient 12 in real time of the status of the MPL 84. The displayed information may also inform the patient 12 of available tasks or options for controlling the MPL 84 to interface with detected objects 82.
In addition to providing a real-time eye-tracking and machine vision capability that aids in detecting objects of interest to the patient 12, the monitor 62, within the context of a glasses or goggles environment, may identify the orientation and location of the patient 12 relative to objects 82. Inertial and positioning sensors may be incorporated into the system 10 to enable the orientation and location of the patient 12 and/or objects 82 to be determined. Additionally, the glasses or goggles may employ wireless sensor technology for communication with other system 10 components so that, for example, raw sensor data or other information may be streamed and processed in real time.
The eye tracking unit 50 may be configured to align the measured user gaze location of the patient 12 with both machine vision detected objects in the environment 80 and presented context menus on the monitor 62. Thus, direct input may be provided for task control (e.g., to the integration unit 40) for high-level user control that includes task identification (alignment with detected objects) and task selection, initiation, modulation and cessation (from context menus).
Machine vision and image processing may be employed by the machine vision unit 60 to facilitate real-time control of the MPL 84 and real-time object position determination relative to the MPL 84. Object shape and orientation information may also be determined so that, for example, strategies for approaching and grasping objects can be determined. Eye tracking may be integrated via the eye tracking unit 50 to update the monitor 62 with proposed or possible tasks. Trajectory and grasp planning may also continually be updated while tasks are being executed.
In some embodiments, the machine vision unit 60 may include sensors that can acquire both a 3D point cloud and 2D red-green-blue (RGB) raw image data of the environment. This image data may be directly streamed to the integration unit 40 (which may employ a control unit or control box) where image processing and/or segmentation may be accomplished. The image processing may include algorithms for segmenting object surfaces and extracting known features for object recognition purposes. Image processing and object recognition may be accomplished via corresponding modules in the integration unit 40, and the modules could employ open source or other available software libraries such as, for example, the Point Cloud Library (PCL), the Robot Operating System (ROS), and OpenCV. Libraries such as the examples mentioned above may be used to scale and convert images to different formats, to perform histogram calculations, to perform feature extraction, and/or to perform color-based segmentation as well as 2D/3D object recognition. The libraries may also provide a software framework for implementation with a variety of machine vision sensors. Additionally or alternatively, low-cost commercial machine vision sensor technology may be employed to generate accurate 3D point clouds over long distances and with mapping resolutions that complement the expected object sizes encountered in ADL tasks.
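As one hedged illustration of the color-based segmentation step, OpenCV (one of the libraries named above) can be used on the 2D RGB stream roughly as follows; the HSV thresholds, area cutoff, and camera index are placeholders, not values from the disclosure.

```python
import cv2
import numpy as np


def segment_by_color(bgr_image, hsv_low=(20, 80, 80), hsv_high=(35, 255, 255)):
    """Return contours of regions falling inside an HSV color range.

    The thresholds are arbitrary placeholders; a deployed system would tune them
    (or rely on trained 2D/3D recognition) for the objects expected in ADL tasks.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 500]


# Example: segment a single frame from a camera (index 0 is a placeholder).
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    regions = segment_by_color(frame)
    print(f"Found {len(regions)} candidate object regions")
capture.release()
```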
The integration unit 40 may be configured to receive targeting information (e.g., object shape information) along with the volitional inputs and integrate such information to generate control signals for an MPL controller 90. In some embodiments, the grasp trigger signal generated based on volitional inputs may be integrated with grasp information generated by the targeting unit 70 relating to various grasp types and characteristics (e.g., pinch, power, etc.) to generate reduced order control (ROC) grasp commands. Similarly, the integration unit 40 may be configured to receive the reach trigger signals associated with volitional inputs along with targeting information (from the targeting unit 70), including endpoint orientation information and endpoint position information, to generate accurate endpoint command signals. The endpoint command signals and the ROC grasp commands may combine to form MPL command signals that are provided to the MPL controller 90.
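The combination of trigger signals and targeting information into MPL command signals might be sketched as follows; the class and field names (GraspCommand, EndpointCommand, MplCommand) are hypothetical stand-ins for the ROC grasp and endpoint command structures described above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class GraspCommand:            # stand-in for a reduced order control (ROC) grasp command
    grasp_type: str            # e.g., "pinch" or "power"
    closure: float             # 0.0 (open) to 1.0 (fully closed)


@dataclass
class EndpointCommand:
    position: Tuple[float, float, float]      # target endpoint position (x, y, z)
    orientation: Tuple[float, float, float]   # target endpoint orientation (roll, pitch, yaw)


@dataclass
class MplCommand:
    grasp: GraspCommand
    endpoint: Optional[EndpointCommand]


def integrate(grasp_trigger, reach_trigger, targeting):
    """Combine volitional triggers with targeting information into one MPL command."""
    grasp = GraspCommand(
        grasp_type=targeting["grasp_type"],
        closure=1.0 if grasp_trigger else 0.0,
    )
    endpoint = (
        EndpointCommand(position=targeting["position"], orientation=targeting["orientation"])
        if reach_trigger
        else None
    )
    return MplCommand(grasp=grasp, endpoint=endpoint)
```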
The MPL controller 90 may interface with the MPL 84 to issue MPL motion commands and to receive feedback and other information related to MPL percepts, joint angles, endpoint position and/or the like. The MPL controller 90 may provide closed loop control and employ inverse kinematics to interface with the MPL 84 based on the MPL command signals provided by the integration unit 40.
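The closed-loop behavior of the controller can be illustrated, under stated assumptions, with a simple proportional endpoint update; the disclosed controller uses inverse kinematics and limb feedback, which the sketch below deliberately omits, and the gain and step limit are arbitrary.

```python
import numpy as np


def endpoint_step(current_pos, target_pos, gain=0.2, max_step=0.02):
    """One proportional closed-loop step toward the commanded endpoint position.

    `current_pos` would come from limb feedback (e.g., joint angles passed through
    forward kinematics); an inverse kinematics solver would then convert the updated
    endpoint into joint commands, which is omitted here.
    """
    current = np.asarray(current_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    step = gain * (target - current)
    norm = np.linalg.norm(step)
    if norm > max_step:                 # limit endpoint speed per control cycle
        step = step * (max_step / norm)
    return current + step


# Example closed-loop iteration toward a commanded endpoint.
pos = (0.0, 0.0, 0.0)
for _ in range(5):
    pos = endpoint_step(pos, (0.3, 0.1, 0.2))
```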
In an example embodiment, the integration unit 40 may be embodied as control or processing circuitry (as described in greater detail below).
Utilizing the various components or units of the system 10 described above, computer automation may thus be hybridized with the volitional control of the patient 12 to direct the MPL 84 in performing everyday ADL tasks.
An example embodiment of the invention will now be described with reference to the accompanying drawings.
Referring now to the drawings, an apparatus for practicing an example embodiment (e.g., an apparatus embodying the integration unit 40) may include processing circuitry 150 that includes a processor 152 and a storage device 154, and that is in communication with a user interface 160 and a device interface 162.
The user interface 160 may be in communication with the processing circuitry 150 to receive an indication of a user input at the user interface 160 and/or to provide an audible, visual, mechanical or other output to the user. As such, the user interface 160 may include, for example, a keyboard, a mouse, a joystick, a display, a touch screen, a microphone, a speaker, a cell phone, or other input/output mechanisms. In embodiments where the apparatus is embodied at a server or other network entity, the user interface 160 may be limited or even eliminated in some cases. Alternatively, as indicated above, the user interface 160 may be remotely located.
The device interface 162 may include one or more interface mechanisms for enabling communication with other devices and/or networks. In some cases, the device interface 162 may be any means such as a device or circuitry embodied in either hardware, software, or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device or module in communication with the processing circuitry 150. In this regard, the device interface 162 may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network and/or a communication modem or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB), Ethernet or other methods. In situations where the device interface 162 communicates with a network, the network may be any of various examples of wireless or wired communication networks such as, for example, data networks like a Local Area Network (LAN), a Metropolitan Area Network (MAN), and/or a Wide Area Network (WAN), such as the Internet.
In an example embodiment, the storage device 154 may include one or more non-transitory storage or memory devices such as, for example, volatile and/or non-volatile memory that may be either fixed or removable. The storage device 154 may be configured to store information, data, applications, instructions or the like for enabling the apparatus to carry out various functions in accordance with example embodiments of the present invention. For example, the storage device 154 could be configured to buffer input data for processing by the processor 152. Additionally or alternatively, the storage device 154 could be configured to store instructions for execution by the processor 152. As yet another alternative, the storage device 154 may include one of a plurality of databases that may store a variety of files, contents or data sets. Among the contents of the storage device 154, applications may be stored for execution by the processor 152 in order to carry out the functionality associated with each respective application.
The processor 152 may be embodied in a number of different ways. For example, the processor 152 may be embodied as various processing means such as a microprocessor or other processing element, a coprocessor, a controller or various other computing or processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a hardware accelerator, or the like. In an example embodiment, the processor 152 may be configured to execute instructions stored in the storage device 154 or otherwise accessible to the processor 152. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 152 may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to embodiments of the present invention while configured accordingly. Thus, for example, when the processor 152 is embodied as an ASIC, FPGA or the like, the processor 152 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor 152 is embodied as an executor of software instructions, the instructions may specifically configure the processor 152 to perform the operations described herein.
In an example embodiment, the processor 152 (or the processing circuitry 150) may be embodied as, include or otherwise control the integration unit 40, which may be any means such as a device or circuitry operating in accordance with software or otherwise embodied in hardware or a combination of hardware and software (e.g., processor 152 operating under software control, the processor 152 embodied as an ASIC or FPGA specifically configured to perform the operations described herein, or a combination thereof) thereby configuring the device or circuitry to perform the corresponding functions of the integration unit 40 as described below.
The device interface 162 may enable the integration unit 40 to communicate with and/or control various other units 180, which may include the task control unit 30, the targeting unit 70, the MPL controller 90, and/or any other units of
From a technical perspective, the integration unit 40 described above may be used to support some or all of the operations described herein. As such, the platform described above may be used to facilitate the implementation of the operations described in reference to the flowchart below.
Accordingly, blocks of the flowchart support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowchart, and combinations of blocks in the flowchart, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
In this regard, a method of controlling a prosthetic device relative to one or more objects in an environment of a patient employing the prosthetic device according to one embodiment of the invention, as shown in the flowchart, may include receiving a volitional input from the patient indicative of a task to be performed relative to an object with the prosthetic device, receiving object targeting information associated with interaction between the prosthetic device and the object, integrating the volitional input with the object targeting information to determine a control command to direct the prosthetic device to interact with the object, and providing the control command to the prosthetic device.
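Purely as a reading aid, the four operations of the method could be tied together in a single control cycle as sketched below; every function name is a placeholder for the corresponding unit described above, not an element of the claimed method.

```python
def run_control_cycle(volitional_source, targeting_source, integrator, device):
    """One pass of the method: receive inputs, integrate them, and command the device.

    The four callables are hypothetical stand-ins for the volitional input unit,
    targeting unit, integration unit, and device controller described above.
    """
    volitional_input = volitional_source()   # e.g., an ECoG-derived reach or grasp trigger
    targeting_info = targeting_source()      # e.g., a gaze-selected object pose and shape
    if volitional_input is None or targeting_info is None:
        return None                          # nothing to do this cycle
    command = integrator(volitional_input, targeting_info)
    device(command)                          # closed-loop execution by the device controller
    return command
```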
In an example embodiment, an apparatus for performing the method described above may comprise a processor (e.g., the processor 152) or processing circuitry configured to perform some or each of the operations described above.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe exemplary embodiments in the context of certain exemplary combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. In cases where advantages, benefits or solutions to problems are described herein, it should be appreciated that such advantages, benefits and/or solutions may be applicable to some example embodiments, but not necessarily all example embodiments. Thus, any advantages, benefits or solutions described herein should not be thought of as being critical, required or essential to all embodiments or to that which is claimed herein. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims
1. A method of controlling a device relative to an environment, the method comprising:
- receiving a volitional input from a user indicative of a task to be performed relative to an object with the device;
- receiving object targeting information associated with performing the task by the device, wherein the object targeting information comprises eye tracking information of the user that is indicative of a position of the object in the environment;
- generating a control command based on a combination of the volitional input and the object targeting information, wherein the control command is configured to direct the device to interact with the object; and
- providing the control command to the device.
2. The method of claim 1, wherein the receiving the volitional input comprises receiving brain-machine interface command inputs from the user.
3. The method of claim 1, wherein the object targeting information further comprises at least one of a size, shape, or orientation of the object, or information indicative of a color, texture, or inertia of the object.
4. The method of claim 1, wherein to interact with the object is based on the task, the position of the object, and at least one of a size, shape, or orientation of the object.
5. The method of claim 1, wherein receiving the object targeting information further comprises receiving real-time feedback and updating the control command based on the real-time feedback.
6. The method of claim 1, wherein the eye tracking information is based on an alignment of a measured gaze location of the user's eye with the object.
7. The method of claim 1, wherein the receiving the object targeting information comprises receiving the eye tracking information via goggles or glasses worn by the user.
8. The method of claim 1, wherein the environment is a virtual or augmented reality environment and the object is a virtual object in the virtual or augmented reality environment.
9. The method of claim 8, wherein the virtual object is a plurality of menu options, and the virtual object is displayed in the virtual or augmented reality environment.
10. The method of claim 9, wherein to interact with the virtual object is to select one of the plurality of menu options.
11. The method of claim 1, wherein the environment is an environment of the user and the object is a physical object in the environment of the user.
12. The method of claim 1, wherein the device is a computer, laptop, or mobile computing device.
13. The method of claim 1, further comprising:
- receiving a machine vision input for the object;
- identifying the object targeting information of the object from the machine vision input; and
- recognizing the object based on the identified object targeting information.
14. The method of claim 13, wherein the generating the control command is further based on the recognized object.
15. A computer system comprising:
- a controller, wherein the controller is configured to: receive a volitional input from a user indicative of a task to be performed relative to an object in an environment; receive object targeting information associated with performing the task by the computer system, wherein the object targeting information comprises eye tracking information of the user that is indicative of a position of the object in the environment; generate a control command based on a combination of the volitional input and the object targeting information, wherein the control command is configured to direct the computer system to interact with the object; and provide the control command to the computer system.
16. The computer system of claim 15, wherein the controller is further configured to receive brain-machine interface command inputs from the user.
17. The computer system of claim 15, wherein the object targeting information further comprises at least one of a size, shape, or orientation of the object, or information indicative of a color, texture, or inertia of the object.
18. The computer system of claim 15, wherein to interact with the object is based on the task, the position of the object, and at least one of a size, shape, or orientation of the object.
19. The computer system of claim 15, wherein the controller is further configured to receive real-time feedback and to update the control command based on the real-time feedback.
20. The computer system of claim 15, wherein the eye tracking information is based on an alignment of a measured gaze location of the user's eye with the object.
21. The computer system of claim 15, wherein the controller is further configured to receive the eye tracking information via goggles or glasses worn by the user.
22. The computer system of claim 15, wherein
- the object is a virtual object and the environment is a virtual or augmented reality environment, and
- the computer system further comprises a display configured to display the virtual object in the virtual or augmented reality environment.
23. The computer system of claim 22, wherein the virtual object is a plurality of menu options.
24. The computer system of claim 23, wherein to interact with the virtual object is to select one of the plurality of menu options.
25. The computer system of claim 15, wherein
- the object is a physical object detected by machine vision,
- the environment is an environment of the user, and
- the computer system further comprises a machine vision unit configured to detect the physical object in the environment of the user.
26. The computer system of claim 25, wherein the controller is further configured to:
- receive a machine vision input for the object from the machine vision unit;
- identify the object targeting information of the object from the machine vision input; and
- recognize the object based on the identified object targeting information.
27. The computer system of claim 26, wherein the control command is further based on the recognized object.
Type: Application
Filed: Dec 18, 2018
Publication Date: May 16, 2019
Inventors: Kapil D. Katyal (Chevy Chase, MD), Brock A. Wester (Baltimore, MD), Matthew S. Johannes (Catonsville, MD), Timothy G. McGee (Columbia, MD), Andrew J. Harris (Columbia, MD), Matthew Fifer (Baltimore, MD), Guy Hotson (Baltimore, MD), David McMullen (Holmdel, NJ), Robert S. Armiger (Catonsville, MD), R. Jacob Vogelstein (Bethesda, MD), Nathan E. Crone (Baltimore, MD)
Application Number: 16/223,150