Patents by Inventor Nassir Navab

Nassir Navab has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928838
    Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: March 12, 2024
    Assignee: The Johns Hopkins University
    Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
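The patent record above does not spell out the solver, but one standard way to turn 3D-3D point correspondences like these (points on the tracked real object matched to points on the displayed virtual object) into a display-to-world transform is a least-squares rigid alignment (Kabsch/Umeyama). The sketch below is a generic illustration of that step, not the patented calibration procedure; the example points are fabricated.

```python
import numpy as np

def rigid_align(world_pts, display_pts):
    """Least-squares rigid transform (R, t) mapping world_pts onto display_pts.

    world_pts, display_pts: (N, 3) arrays of corresponding 3-D points.
    Returns R (3x3 rotation) and t (3-vector) such that
    display ~= R @ world + t (Kabsch/Umeyama method).
    """
    cw = world_pts.mean(axis=0)
    cd = display_pts.mean(axis=0)
    H = (world_pts - cw).T @ (display_pts - cd)    # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cw
    return R, t

# Fabricated example: points recorded on the real object (tracker frame) and
# their designated counterparts on the virtual object (display frame).
world_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                      [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
display_pts = world_pts @ R_true.T + np.array([0.02, 0.01, 0.30])
R, t = rigid_align(world_pts, display_pts)
```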
  • Patent number: 11861062
    Abstract: A calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a virtual image having at least one feature. The calibration platform may determine, based on information relating to a gaze of a user wearing the OST-HMD, that the user performed a voluntary eye blink to indicate that the at least one feature of the virtual image appears to the user to be aligned with at least one point on the three-dimensional real-world object. The calibration platform may record an alignment measurement based on a position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink. Accordingly, the alignment measurement may be used to generate a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in a display space of the OST-HMD.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: January 2, 2024
    Assignee: The Johns Hopkins University
    Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
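The mapping this abstract describes, from 3D points in the real-world coordinate system to points in the OST-HMD display space, is commonly estimated as a 3x4 projection matrix with a direct linear transform, as in SPAAM-style OST-HMD calibration. The snippet below sketches that generic estimation step under the assumption that the blink-confirmed correspondences have already been collected; it is illustrative, not the patent's specific algorithm.

```python
import numpy as np

def estimate_projection(points_3d, points_2d):
    """Direct Linear Transform: fit a 3x4 matrix P with x ~ P @ [X, Y, Z, 1].

    points_3d: (N, 3) real-world points recorded at each blink-confirmed alignment.
    points_2d: (N, 2) display-space coordinates of the virtual feature.
    Needs at least 6 non-degenerate correspondences.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)                 # right singular vector of smallest singular value
    return P / np.linalg.norm(P[2, :3])      # fix the arbitrary scale

def project(P, point_3d):
    """Map a 3-D real-world point into display space with the fitted matrix."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```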
  • Publication number: 20230274517
    Abstract: A method for aligning the positions and orientations of a real object and a virtual object in real space, the virtual object corresponding to a virtual replica of the real object, the method comprising visualizing at least one alignment feature superimposed on or replacing a representation of the virtual object in a field of view containing the real object, wherein the alignment feature is indicative of a position and orientation of the virtual object in real space, and wherein the at least one alignment feature complements a shape and/or surface pattern of the real object, such that the alignment feature and the real object form a composite object with complementing patterns and/or shapes in the field of view, when the real object and the virtual object are aligned.
    Type: Application
    Filed: July 6, 2021
    Publication date: August 31, 2023
    Inventors: Nassir Navab, Alejandro Martin Gomez
  • Publication number: 20230080133
    Abstract: A computer-implemented method of estimating a 6D pose and shape of one or more objects from a 2D image comprises the steps of: detecting, within the 2D image, one or more 2D regions of interest, each 2D region of interest containing a corresponding object among the one or more objects; cropping out a corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; concatenating the corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; and inferring, for each 2D region of interest, a 4D quaternion describing a rotation of the corresponding object in the 3D rotation group, a 2D centroid, which is a projection of a 3D translation of the corresponding object onto a plane of the 2D image given a camera matrix associated with the 2D image, a distance from a viewpoint of the 2D image to the corresponding object, a size, and a class-specific latent shape vector of the corresponding object.
    Type: Application
    Filed: February 21, 2020
    Publication date: March 16, 2023
    Inventors: Sven Meier, Norimasa Kobori, Luca Minciullo, Kei Yoshikawa, Fabian Manhardt, Manuel Nickel, Nassir Navab
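The final stage of the abstract, assembling a full 6D pose from the inferred quaternion, 2D centroid, and distance given the camera matrix, amounts to back-projecting the centroid through the camera and scaling the resulting ray by the distance. The following is a hedged sketch of just that assembly step; the network that produces these quantities is assumed to exist and is not shown.

```python
import numpy as np

def quat_to_rotmat(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def recover_pose(quaternion, centroid_2d, distance, K):
    """Assemble rotation and translation from the inferred quantities.

    centroid_2d: (u, v) projection of the object's 3-D translation onto the image.
    distance: distance from the image viewpoint to the object.
    K: 3x3 camera matrix associated with the 2-D image.
    """
    ray = np.linalg.inv(K) @ np.array([centroid_2d[0], centroid_2d[1], 1.0])
    t = distance * ray / np.linalg.norm(ray)     # 3-D translation along the back-projected ray
    R = quat_to_rotmat(quaternion)               # 3-D rotation
    return R, t
```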
  • Publication number: 20230057389
    Abstract: A computer-implemented method for determining the refractive power of an intraocular lens to be inserted is presented. The method includes generating first training data for a machine learning system on the basis of a first physical model for a refractive power for an intraocular lens and training the machine learning system by means of the first training data generated, for the purposes of forming a first learning model for determining the refractive power. Furthermore, the method includes training the machine learning system, which was trained using the first training data, using clinical ophthalmological training data for forming a second learning model for determining the refractive power and providing ophthalmological data of a patient and an expected position of the intraocular lens to be inserted. Moreover, the method includes predicting the refractive power of the intraocular lens to be inserted by means of the trained machine learning system and the second learning model.
    Type: Application
    Filed: January 21, 2021
    Publication date: February 23, 2023
    Applicant: Carl Zeiss Meditec AG
    Inventors: Hendrik Burwinkel, Holger Matz, Stefan Saur, Christoph Hauger, Nassir Navab
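The two-stage training described here, pre-training on data synthesized from a physical IOL model and then fine-tuning on clinical ophthalmological data, can be sketched as below. The simplified SRK-style formula, the feature choice (axial length, corneal power), and the scikit-learn regressor are illustrative assumptions, not the patented model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stage 1: synthesize training data from a simplified physical IOL formula
# (SRK-style linear model shown here purely for illustration).
A_CONST = 118.4
axial_length = rng.uniform(21.0, 26.0, size=5000)        # mm
corneal_power = rng.uniform(40.0, 47.0, size=5000)       # dioptres
iol_power = A_CONST - 2.5 * axial_length - 0.9 * corneal_power
X_synth = np.column_stack([axial_length, corneal_power])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                     warm_start=True, random_state=0)
model.fit(X_synth, iol_power)                  # first learning model (physical-model data)

# Stage 2: fine-tune on a much smaller clinical data set; warm_start keeps the
# weights learned from the physical model and refines them.
X_clinical = rng.uniform([21.0, 40.0], [26.0, 47.0], size=(200, 2))
y_clinical = (A_CONST - 2.5 * X_clinical[:, 0] - 0.9 * X_clinical[:, 1]
              + rng.normal(0.0, 0.25, size=200))          # simulated clinical outcomes
model.set_params(max_iter=100)
model.fit(X_clinical, y_clinical)              # second learning model (clinical data)

# Predict the refractive power for a new patient (axial length, corneal power).
print(model.predict([[23.5, 43.0]]))
```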
  • Publication number: 20220366598
    Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
    Type: Application
    Filed: July 8, 2022
    Publication date: November 17, 2022
    Applicant: The Johns Hopkins University
    Inventors: Ehsan AZIMI, Long QIAN, Peter KAZANZIDES, Nassir NAVAB
  • Patent number: 11430203
    Abstract: A computer-implemented method for registering low dimensional images with a high dimensional image includes receiving a high dimensional image of a region of interest and simulating synthetic low dimensional images of the region of interest, from a number of poses of a virtual low dimensional imaging device, from the high dimensional image. The method determines positions of landmarks within the low dimensional images by applying a first learning algorithm to the low dimensional images and back-projecting the positions of the determined landmarks into the high dimensional image space, to thereby obtain the positions of the landmarks in the high dimensional image. The positions of landmarks within low dimensional images acquired from an imaging device are determined by applying the first or a second learning algorithm to the low dimensional images. The low dimensional images are registered with the high dimensional image based on the positions of the landmarks.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: August 30, 2022
    Assignee: MAXER Endoscopy GmbH
    Inventors: Nassir Navab, Matthias Grimm, Javier Esteban, Wojciech Konrad Karcz
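Once landmarks have been located both in the high-dimensional volume (via simulated views and back-projection) and in the acquired low-dimensional images, the registration reduces to a pose estimate from 2D-3D correspondences. Below is a minimal sketch of that final step using OpenCV's PnP solver, assuming the learned landmark detector has already produced the inputs; it is an illustration, not the patented pipeline.

```python
import numpy as np
import cv2

def register_low_to_high(landmarks_3d, landmarks_2d, K):
    """Estimate the imaging-device pose relating 2-D landmarks to 3-D landmarks.

    landmarks_3d: (N, 3) landmark positions in the high-dimensional volume,
                  obtained from the back-projected detections in simulated views.
    landmarks_2d: (N, 2) positions of the same landmarks detected by the learned
                  model in an acquired low-dimensional image.
    K: 3x3 intrinsic matrix of the low-dimensional imaging device.
    """
    ok, rvec, tvec = cv2.solvePnP(
        landmarks_3d.astype(np.float64),
        landmarks_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP registration failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation: volume frame -> imaging-device frame
    return R, tvec.reshape(3)
```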
  • Patent number: 11386572
    Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: July 12, 2022
    Assignee: The Johns Hopkins University
    Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
  • Patent number: 11369440
    Abstract: A system for modelling a portion of a patient includes a processing unit, a manipulator, a sensor, and a display device. The processing unit is configured to receive patient data and to process the patient data to generate a model of the portion of the patient. The sensor is configured to capture manipulator data. The processing unit is configured to receive the manipulator data from the sensor and to process the manipulator data to determine a position of the manipulator, an orientation of the manipulator, or both. The display device is configured to display the model on or in the manipulator.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 28, 2022
    Assignee: THE JOHNS HOPKINS UNIVERSITY
    Inventors: Bernhard Fuerst, Greg M. Osgood, Nassir Navab, Alexander Winkler
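This abstract describes a data-flow architecture rather than an algorithm; the core geometric step, displaying the model "on or in" the manipulator, is a composition of the estimated manipulator pose with a fixed model offset. A minimal illustrative sketch follows; the transform names are assumptions, not terms from the patent.

```python
import numpy as np

def model_render_pose(T_world_from_manipulator, T_manipulator_from_model):
    """4x4 pose at which the display device renders the patient model.

    T_world_from_manipulator: manipulator pose (position and/or orientation)
        estimated by the processing unit from the sensor's manipulator data.
    T_manipulator_from_model: fixed offset anchoring the model on or in the
        manipulator (identity if the model is centered on the manipulator).
    """
    return T_world_from_manipulator @ T_manipulator_from_model

# Example: manipulator tracked 40 cm in front of the sensor, model centered on it.
T_manipulator = np.eye(4)
T_manipulator[:3, 3] = [0.1, 0.0, 0.4]
render_pose = model_render_pose(T_manipulator, np.eye(4))
```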
  • Patent number: 11367226
    Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: June 21, 2022
    Assignee: THE JOHNS HOPKINS UNIVERSITY
    Inventors: Nassir Navab, Javad Fotouhi
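The reason two viewpoints are used is that alignment observed in a single view leaves depth along that view's optical axis unconstrained. The sketch below shows one way a per-view alignment residual could be scored; it assumes landmark detections on the real object are available, which is an assumption for illustration and not part of the method as stated above.

```python
import numpy as np

def view_alignment_error(P_view, virtual_pts_3d, real_pts_2d):
    """Mean reprojection error (pixels) of the virtual object in one AR view.

    P_view: 3x4 projection matrix of that viewpoint's camera.
    virtual_pts_3d: (N, 3) landmark points on the virtual object (world frame).
    real_pts_2d: (N, 2) the same landmarks observed on the real object in the image.
    """
    homog = np.hstack([virtual_pts_3d, np.ones((len(virtual_pts_3d), 1))])
    proj = homog @ P_view.T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.linalg.norm(proj - real_pts_2d, axis=1).mean())

def aligned(P1, P2, virtual_pts_3d, pts_view1, pts_view2, tol_px=3.0):
    """Declare alignment only when the residual is small in BOTH viewpoints."""
    return (view_alignment_error(P1, virtual_pts_3d, pts_view1) < tol_px and
            view_alignment_error(P2, virtual_pts_3d, pts_view2) < tol_px)
```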
  • Publication number: 20220139532
    Abstract: A device may receive, from an imaging device, a two-dimensional image of a patient being operated on by a user, where the two-dimensional image captures a portion of the patient, and where the portion of the patient is provided between a focal point of an imaging source of the imaging device and a detector plane of the imaging device. The device may translate the two-dimensional image along a frustum of the imaging source, and may generate one or more images in a three-dimensional space based on translating the two-dimensional image along the frustum of the imaging source. The device may provide the one or more images in the three-dimensional space to an augmented reality device associated with the user.
    Type: Application
    Filed: February 24, 2020
    Publication date: May 5, 2022
    Applicant: The Johns Hopkins University
    Inventors: Mohammadjavad FOTOUHIGHAZVINI, Mathias UNBERATH, Nassir NAVAB
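Translating the 2D image along the frustum of the imaging source amounts to interpolating its corners between the focal point and the detector plane; because the frustum is a perspective cone, the image shrinks linearly as it moves toward the source. A small illustrative sketch, with fabricated geometry values:

```python
import numpy as np

def image_plane_at(source, detector_corners, alpha):
    """Corners of the 2-D image translated along the imaging frustum.

    source: (3,) focal point of the imaging source.
    detector_corners: (4, 3) corners of the detector plane in 3-D space.
    alpha: position along the frustum, 0 = at the source, 1 = at the detector.
    """
    source = np.asarray(source, dtype=float)
    return source + alpha * (np.asarray(detector_corners, dtype=float) - source)

# A stack of virtual image planes between source and detector, e.g. for
# display in an augmented reality device (alpha values chosen arbitrarily).
source = np.array([0.0, 0.0, 0.0])
detector = np.array([[-0.2, -0.2, 1.0], [0.2, -0.2, 1.0],
                     [0.2, 0.2, 1.0], [-0.2, 0.2, 1.0]])
planes = [image_plane_at(source, detector, a) for a in (0.25, 0.5, 0.75)]
```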
  • Publication number: 20210378750
    Abstract: Described herein are systems, methods, and techniques for spatially-aware displays for computer-assisted interventions. A Fixed View Frustum technique renders computer images on the display using a perspective based on a virtual camera having a field-of-view facing the display and automatically updates the virtual position of the virtual camera in response to adjusting the pose of the display. A Dynamic Mirror View Frustum technique renders computer images on the display using a perspective based on a field-of-view of a virtual camera that has a virtual position behind the display device. The virtual position of the virtual camera is dynamically updated in response to movement of a user's viewpoint located in front of the display device. Slice visualization techniques are also described herein for use with the Fixed View Frustum and Dynamic Mirror View Frustum techniques.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 9, 2021
    Applicant: Stryker Leibinger GmbH & Co. KG
    Inventors: Nassir Navab, Alexander Winkler
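For the Dynamic Mirror View Frustum technique, the virtual camera position behind the display can be obtained by reflecting the tracked user viewpoint across the display plane, re-evaluated whenever the viewpoint moves. The sketch below shows that reflection; the function and parameter names are assumptions, not the publication's terminology.

```python
import numpy as np

def mirrored_camera_position(eye_pos, display_point, display_normal):
    """Reflect the user's viewpoint across the display plane.

    eye_pos: (3,) tracked position of the user's viewpoint in front of the display.
    display_point: (3,) any point on the display plane.
    display_normal: (3,) normal of the display plane.
    Returns the virtual camera position behind the display device.
    """
    eye_pos = np.asarray(eye_pos, dtype=float)
    n = np.asarray(display_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(eye_pos - np.asarray(display_point, dtype=float), n)
    return eye_pos - 2.0 * d * n     # mirror image of the eye across the plane
```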
  • Publication number: 20210272328
    Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
    Type: Application
    Filed: February 23, 2021
    Publication date: September 2, 2021
    Inventors: Nassir Navab, Mohammadjavad Fotouhighazvini
  • Patent number: 11045090
    Abstract: A medical imaging apparatus for combined X-ray and optical visualization is provided. It comprises: an X-ray detector positioned above a patient; an X-ray source positioned below a patient; a control device; and a camera setup adapted to deliver an optical stereoscopic or 3D image. Thereby, the camera setup is positioned adjacent to the X-ray detector above the patient, and the control device is adapted to calculate an optical 2D image or a 3D surface from the data delivered by the camera setup, that optical 2D image or 3D surface having a virtual viewpoint similar to the viewpoint of the X-ray source. It is further adapted to superimpose an X-ray image acquired by the X-ray detector and the optical 2D image or 3D surface in order to achieve an augmented optical/X-ray image.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: June 29, 2021
    Assignee: Technische Universität München
    Inventor: Nassir Navab
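Once the optical 2D image has been rendered from a virtual viewpoint matching the X-ray source, the remaining augmentation step is an image-space overlay. Below is a minimal sketch using OpenCV, under the assumption that the optical-to-X-ray pixel mapping is already available as a homography (in the apparatus above it would come from the calibrated stereoscopic/3D camera setup).

```python
import cv2
import numpy as np

def fuse_xray_optical(xray_img, optical_img, H, alpha=0.5):
    """Overlay an optical image, warped to the X-ray viewpoint, onto an X-ray image.

    H: 3x3 homography mapping optical pixels to X-ray pixels (assumed given).
    alpha: blending weight of the optical layer in the augmented image.
    Assumes both images share dtype and number of channels.
    """
    h, w = xray_img.shape[:2]
    optical_warped = cv2.warpPerspective(optical_img, H, (w, h))
    return cv2.addWeighted(xray_img, 1.0 - alpha, optical_warped, alpha, 0.0)
```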
  • Publication number: 20210142508
    Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
    Type: Application
    Filed: January 31, 2019
    Publication date: May 13, 2021
    Applicant: The Johns Hopkins University
    Inventors: Ehsan AZIMI, Long QIAN, Peter KAZANZIDES, Nassir NAVAB
  • Publication number: 20210121243
    Abstract: A system for modelling a portion of a patient includes a processing unit, a manipulator, a sensor, and a display device. The processing unit is configured to receive patient data and to process the patient data to generate a model of the portion of the patient. The sensor is configured to capture manipulator data. The processing unit is configured to receive the manipulator data from the sensor and to process the manipulator data to determine a position of the manipulator, an orientation of the manipulator, or both. The display device is configured to display the model on or in the manipulator.
    Type: Application
    Filed: March 30, 2018
    Publication date: April 29, 2021
    Inventors: Bernhard FUERST, Greg M. OSGOOD, Nassir NAVAB, Alexander WINKLER
  • Publication number: 20210103753
    Abstract: A computer-implemented method for registering low dimensional images with a high dimensional image includes receiving a high dimensional image of a region of interest and simulating synthetic low dimensional images of the region of interest, from a number of poses of a virtual low dimensional imaging device, from the high dimensional image. The method determines positions of landmarks within the low dimensional images by applying a first learning algorithm to the low dimensional images and back-projecting the positions of the determined landmarks into the high dimensional image space, to thereby obtain the positions of the landmarks in the high dimensional image. The positions of landmarks within low dimensional images acquired from an imaging device are determined by applying the first or a second learning algorithm to the low dimensional images. The low dimensional images are registered with the high dimensional image based on the positions of the landmarks.
    Type: Application
    Filed: October 2, 2020
    Publication date: April 8, 2021
    Applicant: Maxer Endoscopy GmbH
    Inventors: Nassir Navab, Matthias Grimm, Javier Esteban, Wojciech Konrad Karcz
  • Publication number: 20200363867
    Abstract: A calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a virtual image having at least one feature. The calibration platform may determine, based on information relating to a gaze of a user wearing the OST-HMD, that the user performed a voluntary eye blink to indicate that the at least one feature of the virtual image appears to the user to be aligned with at least one point on the three-dimensional real-world object. The calibration platform may record an alignment measurement based on a position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink. Accordingly, the alignment measurement may be used to generate a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in a display space of the OST-HMD.
    Type: Application
    Filed: January 31, 2019
    Publication date: November 19, 2020
    Applicant: The Johns Hopkins University
    Inventors: Ehsan AZIMI, Long QIAN, Peter KAZANZIDES, Nassir NAVAB
  • Publication number: 20200275988
    Abstract: The present invention is directed to a system and method for image to world registration for medical reality applications, using a world spatial map. This invention is a system and method to link any point in a fluoroscopic image to its corresponding position in the visual world using spatial mapping with a head mounted display (HMD) (world tracking). On a projectional fluoroscopic 2D image, any point on the image can be thought of as representing a line that is perpendicular to the plane of the image that intersects that point. The point itself could lie at any position in space along this line, located between the X-Ray source and the detector. With the aid of the HMD, a virtual line is displayed in the visual field of the user.
    Type: Application
    Filed: October 2, 2018
    Publication date: September 3, 2020
    Inventors: Alex A. Johnson, Kevin Yu, Sebastian Andress, Mohammadjavad Fotouhighazvini, Greg M. Osgood, Nassir Navab, Mathias Unberath
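Geometrically, the virtual line for a chosen fluoroscopy pixel runs from the X-ray source (focal point) to that pixel's 3D location on the detector, with both endpoints expressed in the HMD's world spatial map. The sketch below illustrates that back-projection; the frame names and detector parameterization are assumptions made for the example.

```python
import numpy as np

def pixel_to_world_line(pixel_uv, T_world_from_detector, detector_origin,
                        detector_u_axis, detector_v_axis, source_in_detector):
    """3-D line segment, in HMD world coordinates, represented by one image pixel.

    pixel_uv: (u, v) pixel position already converted to metric detector units.
    detector_origin, detector_u_axis, detector_v_axis: detector plane geometry
        expressed in the detector frame.
    source_in_detector: X-ray source (focal point) in the detector frame.
    T_world_from_detector: 4x4 transform from the detector frame into the world
        spatial map maintained by the HMD (e.g. obtained by tracking the C-arm).
    """
    u, v = pixel_uv
    point_on_detector = (np.asarray(detector_origin, dtype=float)
                         + u * np.asarray(detector_u_axis, dtype=float)
                         + v * np.asarray(detector_v_axis, dtype=float))

    def to_world(p):
        return (T_world_from_detector @ np.append(p, 1.0))[:3]

    # The two endpoints define the virtual line displayed in the user's visual field.
    return to_world(np.asarray(source_in_detector, dtype=float)), to_world(point_on_detector)
```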
  • Patent number: 10755433
    Abstract: A method and system for scanning an object using an RGB-D sensor, the method includes: performing a plurality of elementary scans of the object using an RGB-D sensor and visual odometry, each elementary scan delivering a plurality of key frames associated with a pose of the sensor with respect to the object, and each elementary scan being associated with a position of the object; for each elementary scan, elaborating a three-dimensional model of the object using the plurality of key frames and poses of the scan; and merging each three-dimensional model into a merged three-dimensional model of the object.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: August 25, 2020
    Assignee: TOYOTA MOTOR EUROPE
    Inventors: Zbigniew Wasik, Wadim Kehl, Nassir Navab
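Merging the per-scan three-dimensional models requires each model to be brought into a common object frame before the points are fused. Below is a minimal numpy sketch of that merge step, assuming the per-scan poses are already known (e.g. derived from the recorded object positions or a separate registration); it is an illustration, not the patented pipeline.

```python
import numpy as np

def merge_elementary_scans(scans):
    """Fuse per-scan models into one point cloud in the object frame.

    scans: list of (model_points, T_object_from_scan) pairs, where model_points
    is an (N, 3) array built from that scan's key frames and T_object_from_scan
    is the 4x4 pose relating the scan's coordinate frame to the object frame.
    """
    merged = []
    for points, T in scans:
        homog = np.hstack([points, np.ones((len(points), 1))])
        merged.append((homog @ T.T)[:, :3])      # transform into the object frame
    return np.vstack(merged)
```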