Patents by Inventor Gregory D. Hager

Gregory D. Hager has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240038097
    Abstract: A system and method for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task done by the user using a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Application
    Filed: October 10, 2023
    Publication date: February 1, 2024
    Applicant: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
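    Illustrative sketch: the abstract does not disclose how the comparison is performed; the following minimal Python sketch scores one trial against reference trials of a similar task using a nearest-neighbor distance over fixed-length kinematic traces (the names, features, and scoring formula are hypothetical, not taken from the patent):

      import numpy as np

      def trace_distance(a, b):
          # mean Euclidean distance between two kinematic traces, truncated
          # to the shorter trial (each row: one time sample of features)
          n = min(len(a), len(b))
          return float(np.linalg.norm(a[:n] - b[:n], axis=1).mean())

      def quantify_skill(user_trace, reference_traces):
          # compare one trial of a task against reference trials of a
          # similar task; closer to the nearest reference -> higher score
          d = min(trace_distance(user_trace, r) for r in reference_traces)
          return 1.0 / (1.0 + d)  # map distance into a (0, 1] skill score

      rng = np.random.default_rng(0)
      references = [rng.normal(0.0, 0.1, (100, 6)) for _ in range(5)]
      trial = rng.normal(0.0, 0.5, (100, 6))  # 100 samples x 6 features
      print(f"skill score: {quantify_skill(trial, references):.3f}")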
  • Publication number: 20220184803
    Abstract: Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
    Type: Application
    Filed: March 4, 2022
    Publication date: June 16, 2022
    Applicant: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager
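    Illustrative sketch: a minimal Python rendering of the simulate-train-program loop the abstract describes, with a virtual robot built from real-robot parameters, state feedback on each user move, and replay of taught waypoints onto the real robot (the classes, joint names, and `send` interface are hypothetical):

      from dataclasses import dataclass, field

      @dataclass
      class RobotParams:
          # parameters read from the real-world robot
          joint_limits: dict  # joint name -> (lo, hi) in radians

      @dataclass
      class VirtualRobot:
          # simulated twin of the real robot inside the VR environment
          params: RobotParams
          pose: dict = field(default_factory=dict)
          taught: list = field(default_factory=list)

          def move(self, joint, angle):
              lo, hi = self.params.joint_limits[joint]
              self.pose[joint] = max(lo, min(hi, angle))  # respect limits
              return self.pose[joint]  # current-state feedback to the user

          def record_waypoint(self):
              self.taught.append(dict(self.pose))  # the user trains the robot

      def program_real_robot(send, virtual):
          # program the real-world robot from the virtual-robot training
          for waypoint in virtual.taught:
              send(waypoint)

      vr = VirtualRobot(RobotParams({"shoulder": (-1.5, 1.5), "elbow": (0.0, 2.5)}))
      vr.move("shoulder", 2.0)  # clamped to the real joint limit, 1.5
      vr.record_waypoint()
      program_real_robot(print, vr)  # `print` stands in for a robot interface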
  • Patent number: 11279022
    Abstract: Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: March 22, 2022
    Assignee: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager
  • Patent number: 10850392
    Abstract: A system and method for automatically computing desirable palm grasp configurations of an object by a robotic hand is disclosed. A description of the object's surface is obtained. A grasp configuration of a robotic hand comprising a palm and one or more fingers is selected, which specifies a hand configuration and joint variables of the hand, and a plurality of contact points on the object's surface for the palm and fingers. The coefficient of friction at each of the contact points is determined, and a description of one or more external wrenches that may be applied to the object is acquired. The ability of the robotic hand to hold the object against the external wrenches in the selected configuration is then computed. In some embodiments, a plurality of grasp configurations may be compared to determine which are the most desirable. In other embodiments, the space of hand configurations, or the smaller space of palm contact configurations, may be searched to find the most desirable grasp configurations.
    Type: Grant
    Filed: April 12, 2005
    Date of Patent: December 1, 2020
    Assignee: Strider Labs, Inc.
    Inventors: Gregory D. Hager, Eliot Leonard Wegbreit
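    Illustrative sketch: the patented computation is not disclosed in the abstract; one standard way to test whether a grasp can hold an object against an external wrench is to discretize each contact's Coulomb friction cone and solve a feasibility linear program, sketched below in Python (assumes NumPy and SciPy; the names and example geometry are hypothetical):

      import numpy as np
      from scipy.optimize import linprog

      def cone_edges(normal, mu, k=8):
          # approximate a Coulomb friction cone by k edge directions
          n = normal / np.linalg.norm(normal)
          t = np.cross(n, [1.0, 0.0, 0.0])
          if np.linalg.norm(t) < 1e-8:
              t = np.cross(n, [0.0, 1.0, 0.0])
          t /= np.linalg.norm(t)
          b = np.cross(n, t)
          angles = 2.0 * np.pi * np.arange(k) / k
          return [n + mu * (np.cos(a) * t + np.sin(a) * b) for a in angles]

      def can_resist(points, normals, mus, wrench):
          # True if nonnegative forces inside the friction cones balance the
          # external wrench: find x >= 0 with W x = -wrench
          cols = []
          for p, n, mu in zip(points, normals, mus):
              for d in cone_edges(np.asarray(n, float), mu):
                  cols.append(np.concatenate([d, np.cross(p, d)]))  # [force; torque]
          W = np.column_stack(cols)
          res = linprog(np.zeros(W.shape[1]), A_eq=W,
                        b_eq=-np.asarray(wrench, float),
                        bounds=[(0, None)] * W.shape[1], method="highs")
          return res.success

      # antipodal pinch on opposite faces of a cube, gravity along -z
      points = [np.array([0.5, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])]
      normals = [[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
      print(can_resist(points, normals, [0.5, 0.5], [0, 0, -9.8, 0, 0, 0]))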
  • Patent number: 10807237
    Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: October 20, 2020
    Assignee: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
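    Illustrative sketch: a minimal Python rendering of the capability / information-kernel / capability-element structure named in the abstract; the class and parameter names are hypothetical, chosen only to mirror the abstract's vocabulary:

      from dataclasses import dataclass
      from typing import Any, Callable, Dict

      @dataclass(frozen=True)
      class InformationKernel:
          # encapsulates the task-related parameters for one task
          params: Dict[str, Any]

      class RobotCapability:
          # models one piece of robot functionality, e.g. a motion primitive
          def __init__(self, name: str, action: Callable[..., None]):
              self.name, self.action = name, action

          def specialize(self, kernel: InformationKernel):
              return CapabilityElement(self, kernel)

      class CapabilityElement:
          # an instance of a specialized capability that controls the
          # robot's functionality with the kernel's parameters bound in
          def __init__(self, capability, kernel):
              self.capability, self.kernel = capability, kernel

          def execute(self, **runtime_inputs):
              self.capability.action(**self.kernel.params, **runtime_inputs)

      def constrained_move(axis, speed, target):  # hypothetical primitive
          print(f"moving to {target} along {axis} at {speed} m/s")

      move = RobotCapability("constrained_move", constrained_move)
      element = move.specialize(InformationKernel({"axis": "z", "speed": 0.05}))
      element.execute(target=(0.4, 0.1, 0.2))  # runtime input from the user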
  • Publication number: 20200273261
    Abstract: A system and method for constructing a 3D scene model comprising 3D objects and representing a scene, based upon a prior 3D scene model. The method comprises the steps of acquiring an image of the scene; initializing the computed 3D scene model to the prior 3D scene model; and modifying the computed 3D scene model to be consistent with the image. The step of modifying the computed 3D scene model consists of the sub-steps of comparing data of the image with objects of the 3D scene model, resulting in associated data and unassociated data; using the unassociated data to compute new objects that are not in the 3D scene model and adding the new objects to the 3D scene model; and using the associated data to detect objects in the prior 3D scene model that are absent and removing the absent objects from the 3D scene model. The present invention may also be used to construct multiple alternative 3D scene models.
    Type: Application
    Filed: May 9, 2020
    Publication date: August 27, 2020
    Inventors: Eliot Leonard Wegbreit, Gregory D. Hager
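    Illustrative sketch: one update step of the associate / add / remove cycle the abstract describes, reduced in Python to object centroids and a distance gate; it assumes every prior object lies in the camera's view, so a model object with no associated data counts as absent (the names and thresholds are hypothetical):

      import numpy as np

      def update_scene_model(prior_objects, detections, gate=0.25):
          model = [dict(o) for o in prior_objects]  # initialize to the prior
          associated, unassociated = set(), []
          for d in detections:
              dists = [np.linalg.norm(np.asarray(d) - o["pos"]) for o in model]
              j = int(np.argmin(dists)) if dists else -1
              if j >= 0 and dists[j] < gate:
                  associated.add(j)       # data consistent with object j
              else:
                  unassociated.append(d)  # evidence of an object not in the model
          # remove absent objects, then add new objects from unassociated data
          model = [o for i, o in enumerate(model) if i in associated]
          model += [{"pos": np.asarray(d, float)} for d in unassociated]
          return model

      prior = [{"pos": np.array([0.0, 0.0, 1.0])}, {"pos": np.array([2.0, 0.0, 1.0])}]
      seen = [[0.05, 0.0, 1.0], [5.0, 1.0, 0.5]]  # 1st kept, 2nd absent, 1 new
      print(update_scene_model(prior, seen))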
  • Patent number: 10650608
    Abstract: A system and method for constructing a 3D scene model comprising 3D objects and representing a scene, based upon a prior 3D scene model. The method comprises the steps of acquiring an image of the scene; initializing the computed 3D scene model to the prior 3D scene model; and modifying the computed 3D scene model to be consistent with the image. The step of modifying the computed 3D scene model consists of the sub-steps of comparing data of the image with objects of the 3D scene model, resulting in associated data and unassociated data; using the unassociated data to compute new objects that are not in the 3D scene model and adding the new objects to the 3D scene model; and using the associated data to detect objects in the prior 3D scene model that are absent and removing the absent objects from the 3D scene model. The present invention may also be used to construct multiple alternative 3D scene models.
    Type: Grant
    Filed: October 8, 2008
    Date of Patent: May 12, 2020
    Assignee: Strider Labs, Inc.
    Inventors: Eliot Leonard Wegbreit, Gregory D. Hager
  • Publication number: 20190283248
    Abstract: Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
    Type: Application
    Filed: June 6, 2019
    Publication date: September 19, 2019
    Applicant: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager
  • Patent number: 10350751
    Abstract: Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: July 16, 2019
    Assignee: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager
  • Patent number: 10188281
    Abstract: An observation system for viewing light-sensitive tissue includes an illumination system configured to illuminate the light-sensitive tissue, an imaging system configured to image at least a portion of the light-sensitive tissue upon being illuminated by the illumination system, and an image display system in communication with the imaging system to display an image of the portion of the light-sensitive tissue. The illumination system is configured to illuminate the light-sensitive tissue with a reduced amount of light within a preselected wavelength range compared to multispectral illumination light, and the image of the portion of the light-sensitive tissue is compensated for the reduced amount of light within the preselected wavelength range to approximate an image of the light-sensitive tissue under the multispectral illumination.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: January 29, 2019
    Assignee: The Johns Hopkins University
    Inventors: Russell H. Taylor, Seth D. Billings, Peter L. Gehlbach, Gregory D. Hager, James T. Handa, Jin U. Kang, Balazs P. Vagvolgyi, Raphael Sznitman, Zachary Pezzementi
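    Illustrative sketch: the compensation step in Python, reduced to a single attenuated color band and a linear sensor model; the abstract does not specify the wavelength range or the compensation math, so the blue channel and the simple per-channel gain below are assumptions:

      import numpy as np

      def compensate(image_rgb, attenuation=0.2):
          # restore an image captured under illumination whose blue band was
          # reduced to `attenuation` of normal, approximating the image that
          # full multispectral illumination would have produced
          out = image_rgb.astype(float)
          out[..., 2] /= attenuation  # gain the attenuated band back up
          return np.clip(out, 0.0, 1.0)

      frame = np.random.default_rng(1).uniform(0.0, 1.0, (4, 4, 3))
      captured = frame.copy()
      captured[..., 2] *= 0.2  # simulate the reduced blue illumination
      print(np.allclose(compensate(captured), frame))  # True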
  • Publication number: 20180290303
    Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
    Type: Application
    Filed: June 14, 2018
    Publication date: October 11, 2018
    Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
  • Publication number: 20180253994
    Abstract: A system and method for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task done by the user using a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Application
    Filed: May 4, 2018
    Publication date: September 6, 2018
    Applicant: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Patent number: 10022870
    Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: July 17, 2018
    Assignee: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel
  • Patent number: 10008129
    Abstract: A system and method for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task done by the user using a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: June 26, 2018
    Assignee: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Patent number: 9978141
    Abstract: Systems and methods for image guidance, which may include an image processing unit, cameras, and handheld real-time imaging components or handheld displays, wherein the cameras observe visual features on patients, tools, or other components, and wherein said features allow camera or object positions to be determined relative to secondary image data or a reference position, and wherein the image processing unit is configured to dynamically register observations with secondary image data, and to compute enhanced images based on combinations of one or more of secondary image data, positioning data, and real-time imaging data.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: May 22, 2018
    Assignee: Clear Guide Medical, Inc.
    Inventors: Philipp J. Stolka, Ehsan Basafa, Pezhman Foroughi, Gregory D. Hager, Emad M. Boctor
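    Illustrative sketch: a common core of such registration is a least-squares rigid transform between visual features observed by the camera and the same features localized in secondary image data (e.g., a CT volume); below is a minimal Kabsch/SVD version in Python, with hypothetical names and toy marker coordinates (the patent's actual registration method is not disclosed in the abstract):

      import numpy as np

      def rigid_register(cam_pts, ct_pts):
          # least-squares rotation R and translation t with R @ cam + t ~ ct
          A, B = np.asarray(cam_pts, float), np.asarray(ct_pts, float)
          ca, cb = A.mean(axis=0), B.mean(axis=0)
          U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
          d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, cb - R @ ca

      # markers seen by the camera, and the same markers in the CT volume
      cam = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
      ct = [[10, 0, 0], [10, 1, 0], [9, 0, 0], [10, 0, 1]]
      R, t = rigid_register(cam, ct)
      print(np.max(np.abs(np.asarray(cam) @ R.T + t - ct)))  # ~0 residual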
  • Patent number: 9814392
    Abstract: A visual tracking and annotation system for surgical intervention includes an image acquisition and display system arranged to obtain image streams of a surgical region of interest and of a surgical instrument proximate the surgical region of interest and to display acquired images to a user; a tracking system configured to track the surgical instrument relative to the surgical region of interest; a data storage system in communication with the image acquisition and display system and the tracking system; and a data processing system in communication with the data storage system, the image acquisition and display system and the tracking system. The data processing system is configured to annotate images displayed to the user in response to an input signal from the user.
    Type: Grant
    Filed: November 1, 2010
    Date of Patent: November 14, 2017
    Assignee: The Johns Hopkins University
    Inventors: Marcin A. Balicki, Russell H. Taylor, Gregory D. Hager, Peter L. Gehlbach, James T. Handa, Rajesh Kumar
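    Illustrative sketch: the core bookkeeping in Python: annotations dropped at the tracked instrument tip on a user's input signal, stored in region-of-interest coordinates, and re-projected into each displayed frame so they stay attached to the anatomy (the names and the registration function are hypothetical):

      from dataclasses import dataclass, field

      @dataclass
      class Annotation:
          label: str
          pos: tuple  # anchored in region-of-interest coordinates

      @dataclass
      class TrackingAnnotator:
          notes: list = field(default_factory=list)

          def annotate(self, label, tracked_tip_pos):
              # user input signal: drop a note where the tracked tip is
              self.notes.append(Annotation(label, tracked_tip_pos))

          def render(self, roi_to_image):
              # project stored annotations into the current displayed frame
              return [(n.label, roi_to_image(n.pos)) for n in self.notes]

      annotator = TrackingAnnotator()
      annotator.annotate("vessel", (0.32, 0.54))  # tip pose from the tracker
      shift = lambda p: (p[0] + 0.10, p[1])  # current frame's registration
      print(annotator.render(shift))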
  • Publication number: 20170221385
    Abstract: A system and method for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task done by the user using a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Application
    Filed: April 19, 2017
    Publication date: August 3, 2017
    Applicant: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Publication number: 20170203438
    Abstract: Systems and methods to create an immersive virtual environment using a virtual reality system that receives parameters corresponding to a real-world robot. The real-world robot may be simulated to create a virtual robot based on the received parameters. The immersive virtual environment may be transmitted to a user. The user may supply input and interact with the virtual robot. Feedback such as the current state of the virtual robot or the real-world robot may be provided to the user. The user may train the virtual robot. The real-world robot may be programmed based on the virtual robot training.
    Type: Application
    Filed: April 5, 2017
    Publication date: July 20, 2017
    Applicant: The Johns Hopkins University
    Inventors: Kelleher Guerin, Gregory D. Hager
  • Patent number: 9691290
    Abstract: Systems for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task done by the user using a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: June 27, 2017
    Assignee: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Publication number: 20170144304
    Abstract: Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters.
    Type: Application
    Filed: February 3, 2017
    Publication date: May 25, 2017
    Inventors: Kelleher Guerin, Gregory D. Hager, Sebastian Riedel