Patents by Inventor Gregory D. Hager
Gregory D. Hager has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240038097
Abstract: A system quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; outputting the clinical skill of the user.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: The Johns Hopkins University
Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
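The collect/compare/quantify/output pipeline in the abstract above can be sketched as a simple trajectory scorer. This is a hypothetical illustration, not the patented method: the function names, the fixed-length trajectories, and the mean-Euclidean-distance metric are all assumptions (a real system would align motions, e.g. with dynamic time warping or statistical models over recorded instrument data):

```python
def skill_score(trial, expert_trials):
    """Score a trial by its mean distance to a set of expert trials.

    trial: list of (x, y) tool-tip positions sampled over the task.
    expert_trials: list of such trajectories, all the same length here
    for simplicity. Lower scores mean the trial is closer to expert motion.
    """
    def distance(a, b):
        # Mean per-sample Euclidean distance between two trajectories.
        return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return sum(distance(trial, e) for e in expert_trials) / len(expert_trials)

def classify(score, expert_threshold=0.5):
    # Quantify and output the clinical skill as a coarse label.
    return "expert-like" if score <= expert_threshold else "novice-like"
```

A trial identical to an expert demonstration scores 0.0 and classifies as expert-like; the threshold value is arbitrary here.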
-
Publication number: 20220184803
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
Type: Application
Filed: March 4, 2022
Publication date: June 16, 2022
Applicant: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager
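The simulate/train/program-back loop described above can be sketched minimally: a virtual robot is built from the real robot's parameters, the user teaches it poses (with feedback when a pose is invalid), and the training result is exported as a program for the real robot. All names and the joint-limit parameterization are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualRobot:
    """Simulated stand-in built from a real robot's parameters."""
    joint_limits: list                       # (lo, hi) per joint, from the real robot
    waypoints: list = field(default_factory=list)

    def teach(self, pose):
        # User input: record a pose only if it respects the joint limits.
        if all(lo <= q <= hi for q, (lo, hi) in zip(pose, self.joint_limits)):
            self.waypoints.append(pose)
            return True
        return False                          # feedback to the user: rejected

def program_real_robot(virtual):
    # Transfer the training result back to the real-world robot
    # as an ordered motion program.
    return list(virtual.waypoints)
```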
-
Patent number: 11279022
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
Type: Grant
Filed: June 6, 2019
Date of Patent: March 22, 2022
Assignee: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager
-
Patent number: 10850392
Abstract: A system and method for automatically computing desirable palm grasp configurations of an object by a robotic hand is disclosed. A description of the object's surface is obtained. A grasp configuration of a robotic hand comprising a palm and one or more fingers is selected, which specifies a hand configuration and joint variables of the hand, and a plurality of contact points on the object's surface for the palm and fingers. The coefficient of friction at each of the contact points is determined, and a description of one or more external wrenches that may be applied to the object is acquired. The ability of the robotic hand to hold the object against the external wrenches in the selected configuration is then computed. In some embodiments, a plurality of grasp configurations may be compared to determine which are the most desirable. In other embodiments, the space of hand configurations, or the smaller space of palm contact configurations, may be searched to find the most desirable grasp configurations.
Type: Grant
Filed: April 12, 2005
Date of Patent: December 1, 2020
Assignee: Strider Labs, Inc.
Inventors: Gregory D. Hager, Eliot Leonard Wegbreit
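The core computation the abstract describes, checking whether friction-limited contact forces can hold an object against an external wrench, can be sketched in a planar (2D, forces-only) toy version. This is an assumption-laden illustration, not the patented algorithm: it ignores torques and the palm, and uses Caratheodory's theorem (a 2D vector lies in a convex cone iff it is a nonnegative combination of some pair of generators) to test whether the friction-cone edges can cancel the external force:

```python
import math

def cone_edges(normal, mu):
    """Friction-cone edge directions for a 2D contact normal (unit vector)."""
    a = math.atan(mu)               # cone half-angle from the friction coefficient
    c, s = math.cos(a), math.sin(a)
    nx, ny = normal
    return [(c * nx - s * ny, s * nx + c * ny),
            (c * nx + s * ny, -s * nx + c * ny)]

def can_resist(contacts, mu, external_force, eps=1e-9):
    """Can forces inside the contacts' friction cones cancel external_force?

    contacts: list of inward contact normals (unit 2D vectors).
    """
    target = (-external_force[0], -external_force[1])   # net force to generate
    edges = [e for n in contacts for e in cone_edges(n, mu)]
    for ux, uy in edges:
        for wx, wy in edges:
            det = ux * wy - uy * wx
            if abs(det) < eps:
                continue
            # Solve c1*u + c2*w = target; feasible if both weights nonnegative.
            c1 = (target[0] * wy - target[1] * wx) / det
            c2 = (ux * target[1] - uy * target[0]) / det
            if c1 >= -eps and c2 >= -eps:
                return True
    return False
```

For example, two opposing side contacts with mu = 0.5 can support an object against gravity, while a single contact cannot resist a force pushing directly along its normal away from the surface.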
-
Patent number: 10807237
Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
Type: Grant
Filed: June 14, 2018
Date of Patent: October 20, 2020
Assignee: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
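The capability/information-kernel/capability-element chain in the abstract above maps naturally onto closures. The sketch below is a hypothetical reading, not the patented framework: the class, the example "sanding" capability, and the parameter names are all invented for illustration. A generic capability is specialized by binding task parameters (the kernel), yielding an element that drives the robot and still accepts runtime overrides:

```python
class RobotCapability:
    """Generic model of a robot functionality (e.g. 'move_to', 'grasp')."""
    def __init__(self, name, action):
        self.name = name
        self.action = action        # callable implementing the functionality

    def specialize(self, kernel):
        """Bind task-related parameters (the 'information kernel') to the
        capability, yielding a capability element that drives the robot."""
        def element(**runtime_input):
            params = dict(kernel)
            params.update(runtime_input)     # user feedback can override
            return self.action(**params)
        return element

# Hypothetical surface-finishing capability specialized for one workpiece.
def move_tool(surface, speed):
    return f"moving over {surface} at {speed} mm/s"

sanding = RobotCapability("sand", move_tool).specialize(
    {"surface": "panel-A", "speed": 20})
```

Calling `sanding()` runs the task with the kernel's parameters; `sanding(speed=5)` shows the same element adapted by user input without rebuilding the capability.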
-
Publication number: 20200273261
Abstract: A system and method for constructing a 3D scene model comprising 3D objects and representing a scene, based upon a prior 3D scene model. The method comprises the steps of acquiring an image of the scene; initializing the computed 3D scene model to the prior 3D scene model; and modifying the computed 3D scene model to be consistent with the image. The step of modifying the computed 3D scene model consists of the sub-steps of comparing data of the image with objects of the 3D scene model, resulting in associated data and unassociated data; using the unassociated data to compute new objects that are not in the 3D scene model and adding the new objects to the 3D scene model; and using the associated data to detect objects in the prior 3D scene model that are absent and removing the absent objects from the 3D scene model. The present invention may also be used to construct multiple alternative 3D scene models.
Type: Application
Filed: May 9, 2020
Publication date: August 27, 2020
Inventors: Eliot Leonard Wegbreit, Gregory D. Hager
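The associate/add/remove sub-steps above can be sketched with points standing in for image data and objects. This is a deliberately simplified illustration of the update pattern, not the patented method: the dictionary scene representation, the fixed association radius, and the nearest-object rule are all assumptions:

```python
def update_scene(prior_scene, observations, radius=0.5):
    """Update a 3D scene model from one image's observations.

    prior_scene: dict name -> (x, y, z) object position.
    observations: list of (x, y, z) points detected in the new image.
    Observed points near a prior object are 'associated' with it; leftover
    points become new objects; prior objects with no associated data are
    treated as absent and removed.
    """
    scene = dict(prior_scene)
    associated = set()
    unassociated = []
    for p in observations:
        match = next((name for name, q in scene.items()
                      if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2),
                     None)
        if match:
            associated.add(match)
        else:
            unassociated.append(p)
    # Remove objects the image says are absent.
    for name in list(scene):
        if name not in associated:
            del scene[name]
    # Add new objects computed from unassociated data.
    for i, p in enumerate(unassociated):
        scene[f"new_{i}"] = p
    return scene
```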
-
Patent number: 10650608
Abstract: A system and method for constructing a 3D scene model comprising 3D objects and representing a scene, based upon a prior 3D scene model. The method comprises the steps of acquiring an image of the scene; initializing the computed 3D scene model to the prior 3D scene model; and modifying the computed 3D scene model to be consistent with the image. The step of modifying the computed 3D scene model consists of the sub-steps of comparing data of the image with objects of the 3D scene model, resulting in associated data and unassociated data; using the unassociated data to compute new objects that are not in the 3D scene model and adding the new objects to the 3D scene model; and using the associated data to detect objects in the prior 3D scene model that are absent and removing the absent objects from the 3D scene model. The present invention may also be used to construct multiple alternative 3D scene models.
Type: Grant
Filed: October 8, 2008
Date of Patent: May 12, 2020
Assignee: Strider Labs, Inc.
Inventors: Eliot Leonard Wegbreit, Gregory D. Hager
-
Publication number: 20190283248
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
Type: Application
Filed: June 6, 2019
Publication date: September 19, 2019
Applicant: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager
-
Patent number: 10350751
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
Type: Grant
Filed: April 5, 2017
Date of Patent: July 16, 2019
Assignee: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager
-
Patent number: 10188281
Abstract: An observation system for viewing light-sensitive tissue includes an illumination system configured to illuminate the light-sensitive tissue, an imaging system configured to image at least a portion of the light-sensitive tissue upon being illuminated by the illumination system, and an image display system in communication with the imaging system to display an image of the portion of the light-sensitive tissue. The illumination system is configured to illuminate the light-sensitive tissue with a reduced amount of light within a preselected wavelength range compared to multispectral illumination light, and the image of the portion of the light-sensitive tissue is compensated for the reduced amount of light within the preselected wavelength range to approximate an image of the light-sensitive tissue under the multispectral illumination.
Type: Grant
Filed: March 30, 2016
Date of Patent: January 29, 2019
Assignee: The Johns Hopkins University
Inventors: Russell H. Taylor, Seth D. Billings, Peter L. Gehlbach, Gregory D. Hager, James T. Handa, Jin U. Kang, Balazs P. Vagvolgyi, Raphael Sznitman, Zachary Pezzementi
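The compensation step above, restoring the appearance of an image captured under deliberately reduced illumination in one band, can be sketched as a per-channel gain. This is a toy illustration under strong assumptions (RGB channels standing in for spectral bands, a known per-channel attenuation, simple clipping), not the patented compensation:

```python
def compensate(pixel, attenuation):
    """Approximate the multispectral appearance of a pixel captured under
    reduced illumination in one band.

    pixel: (r, g, b) values in [0, 255] as captured.
    attenuation: per-channel fraction of normal illumination actually used,
    e.g. (1.0, 1.0, 0.2) if blue light was cut to 20% to protect the tissue.
    Each channel is rescaled by the inverse attenuation and clipped.
    """
    return tuple(min(255, round(v / a)) for v, a in zip(pixel, attenuation))
```

A pixel captured as (100, 100, 20) under 20% blue illumination compensates to (100, 100, 100), approximating its appearance under full multispectral light.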
-
Publication number: 20180290303
Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
Type: Application
Filed: June 14, 2018
Publication date: October 11, 2018
Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
-
Publication number: 20180253994
Abstract: A system quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; outputting the clinical skill of the user.
Type: Application
Filed: May 4, 2018
Publication date: September 6, 2018
Applicant: The Johns Hopkins University
Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
-
Patent number: 10022870
Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
Type: Grant
Filed: February 3, 2017
Date of Patent: July 17, 2018
Assignee: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
-
Patent number: 10008129
Abstract: A system quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; outputting the clinical skill of the user.
Type: Grant
Filed: April 19, 2017
Date of Patent: June 26, 2018
Assignee: The Johns Hopkins University
Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
-
Patent number: 9978141
Abstract: Systems and methods for image guidance, which may include an image processing unit, cameras, and handheld real-time imaging components or handheld displays, wherein the cameras observe visual features on patients, tools, or other components, and wherein said features allow camera or object positions to be determined relative to secondary image data or a reference position, and wherein the image processing unit is configured to dynamically register observations with secondary image data, and to compute enhanced images based on combinations of one or more of secondary image data, positioning data, and real-time imaging data.
Type: Grant
Filed: August 3, 2016
Date of Patent: May 22, 2018
Assignee: Clear Guide Medical, Inc.
Inventors: Philipp J. Stolka, Ehsan Basafa, Pezhman Foroughi, Gregory D. Hager, Emad M. Boctor
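The registration step described above, relating camera observations of visual features to secondary image data, comes down to estimating a rigid transform from point correspondences. The sketch below is a planar least-squares version of the standard Kabsch/Procrustes alignment, included only to illustrate the idea; the patented system works with 3D data and its own registration pipeline:

```python
import math

def register_2d(camera_pts, ct_pts):
    """Estimate the rigid transform (theta, tx, ty) mapping marker positions
    seen by the camera onto their known positions in secondary (e.g. CT)
    image coordinates, by least squares over the correspondences.
    """
    n = len(camera_pts)
    cx = sum(p[0] for p in camera_pts) / n    # camera-frame centroid
    cy = sum(p[1] for p in camera_pts) / n
    dx = sum(p[0] for p in ct_pts) / n        # CT-frame centroid
    dy = sum(p[1] for p in ct_pts) / n
    # Correlation terms of the centered point sets give the rotation.
    s = c = 0.0
    for (ax, ay), (bx, by) in zip(camera_pts, ct_pts):
        ax, ay = ax - cx, ay - cy
        bx, by = bx - dx, by - dy
        c += ax * bx + ay * by
        s += ax * by - ay * bx
    theta = math.atan2(s, c)
    # Translation aligns the rotated camera centroid with the CT centroid.
    tx = dx - (cx * math.cos(theta) - cy * math.sin(theta))
    ty = dy - (cx * math.sin(theta) + cy * math.cos(theta))
    return theta, tx, ty
```

Given three markers related by a 90-degree rotation plus a translation of (1, 2), the function recovers exactly that transform.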
-
Patent number: 9814392
Abstract: A visual tracking and annotation system for surgical intervention includes an image acquisition and display system arranged to obtain image streams of a surgical region of interest and of a surgical instrument proximate the surgical region of interest and to display acquired images to a user; a tracking system configured to track the surgical instrument relative to the surgical region of interest; a data storage system in communication with the image acquisition and display system and the tracking system; and a data processing system in communication with the data storage system, the image acquisition and display system and the tracking system. The data processing system is configured to annotate images displayed to the user in response to an input signal from the user.
Type: Grant
Filed: November 1, 2010
Date of Patent: November 14, 2017
Assignee: The Johns Hopkins University
Inventors: Marcin A. Balicki, Russell H. Taylor, Gregory D. Hager, Peter L. Gehlbach, James T. Handa, Rajesh Kumar
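The annotate-on-input behavior above can be sketched as a small overlay component: on an input signal, the tracked instrument-tip position is stored with a label, and every subsequent displayed frame is annotated at the stored positions. The class, the pixel-coordinate representation, and the "pedal press" framing are illustrative assumptions, not the patented system:

```python
class AnnotationOverlay:
    """Pin user annotations to tracked positions in the surgical view."""

    def __init__(self):
        self.annotations = []            # (position, label) pairs

    def on_input(self, tracked_tip, label):
        # Input signal (e.g. a pedal press): store the instrument tip's
        # current tracked position together with the user's label.
        self.annotations.append((tracked_tip, label))

    def annotate_frame(self, frame_id):
        # Produce the annotations to draw on one displayed frame.
        return [(frame_id, pos, label) for pos, label in self.annotations]
```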
-
Publication number: 20170221385
Abstract: A system and method for quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; outputting the clinical skill of the user.
Type: Application
Filed: April 19, 2017
Publication date: August 3, 2017
Applicant: The Johns Hopkins University
Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
-
Publication number: 20170203438
Abstract: System and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
Type: Application
Filed: April 5, 2017
Publication date: July 20, 2017
Applicant: The Johns Hopkins University
Inventors: Kelleher Guerin, Gregory D. Hager
-
Patent number: 9691290
Abstract: Systems for quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; outputting the clinical skill of the user.
Type: Grant
Filed: October 7, 2015
Date of Patent: June 27, 2017
Assignee: The Johns Hopkins University
Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
-
Publication number: 20170144304
Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
Type: Application
Filed: February 3, 2017
Publication date: May 25, 2017
Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel