Patents by Inventor Nassir Navab
Nassir Navab has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250118011
Abstract: The present invention relates to a system for visualizing OCT signals, having a display means designed for the time-resolved display of image data and a control unit, the control unit being configured to receive a time-resolved OCT signal of a selected field of view of a sample from an OCT system, to ascertain, on the basis of the OCT signal, a time-resolved OCT image having at least one object and having a virtual surface, with a reflection of the at least one object in the OCT image being ascertained on the virtual surface, and to control the display means to display the time-resolved OCT image on the display means. The present invention also relates to a corresponding method for visualizing OCT signals.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Inventors: Nassir NAVAB, Michael SOMMERSPERGER, Shervin DEHGHANI
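
The reflection described in this abstract is, at its core, a mirroring of object geometry across a virtual surface. The minimal Python sketch below shows the underlying plane-reflection geometry; the function name, the planar-surface assumption, and the point-cloud representation are illustrative assumptions, not details of the patented system.

```python
import numpy as np

def reflect_points_across_plane(points, plane_point, plane_normal):
    """Mirror 3D points across a virtual plane (illustrative geometry only).

    points:       (N, 3) array of object coordinates in the OCT volume
    plane_point:  (3,) a point on the virtual surface
    plane_normal: (3,) normal of the virtual surface (need not be unit length)
    """
    points = np.asarray(points, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                      # unit normal
    d = (points - plane_point) @ n                 # signed distance of each point
    return points - 2.0 * d[:, None] * n           # mirrored coordinates

# Example: an instrument tip 0.4 mm above a virtual surface at z = 0
tip = np.array([[1.0, 2.0, 0.4]])
print(reflect_points_across_plane(tip, plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))
# -> [[ 1.   2.  -0.4]]  (the reflection rendered "below" the virtual surface)
```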
-
Publication number: 20250117181
Abstract: A method for generating a sound output is based on an interaction with a data set. The data set comprises a plurality of data points. Each data point stores one or more data features. An interaction is obtained with at least a part of the data points and a sound model is used to generate a sound output based on the interaction. The sound model maps at least one of the one or more data features to one or more acoustic properties of the sound output as a function of the interaction. The data features can be one of a spatial feature, a time feature, a physical property, and a data label. The acoustic properties can be one of pitch, pulsing frequency, duty cycle, loudness, and tone colour. The method can be used to validate labels assigned to ground truth input data and to train a machine learning algorithm.
Type: Application
Filed: October 3, 2024
Publication date: April 10, 2025
Inventors: Nassir Navab, Sasan Matinfar, Mehrdad Salehi, Shervin Dehghani, Navid Navab
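
As a concrete illustration of such a feature-to-sound mapping, the sketch below maps one normalized data feature to the pitch of a short sine tone. The linear mapping, the frequency range, and the parameter names are assumptions made for illustration, not the mapping claimed in the application.

```python
import numpy as np

def feature_to_tone(feature_value, f_min=220.0, f_max=880.0,
                    duration_s=0.25, sample_rate=44100):
    """Map a normalized data feature in [0, 1] to the pitch of a short sine tone."""
    value = float(np.clip(feature_value, 0.0, 1.0))
    frequency = f_min + value * (f_max - f_min)        # higher feature value -> higher pitch
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return 0.5 * np.sin(2.0 * np.pi * frequency * t)   # mono audio samples in [-0.5, 0.5]

# Example interaction: sonify the normalized intensity of a touched data point
samples = feature_to_tone(feature_value=0.7)
print(samples.shape)
```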
-
Publication number: 20250012908
Abstract: A method for generating an acoustic calibration signal for adjusting a tool in a plurality of dimensions relying on acoustic feedback. Each dimension of the plurality of dimensions corresponds to a respective degree of freedom of the tool. The method comprises generating the acoustic calibration signal having an acoustic property for each dimension of the plurality of dimensions, wherein each of the acoustic properties varies towards a corresponding predetermined value when the tool is being adjusted in the corresponding dimension towards a corresponding predetermined target adjustment. A related method and a related alignment system for adjusting a tool in a plurality of dimensions are also provided.
Type: Application
Filed: November 18, 2022
Publication date: January 9, 2025
Inventors: Nassir NAVAB, Sasan MATINFAR, Mehrdad SALEHI
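
The key idea is that each acoustic property converges to a known target value as the tool's adjustment error in the corresponding dimension goes to zero. The sketch below maps three per-dimension errors to three acoustic properties; the specific properties, targets, and ranges are illustrative design choices, not values taken from the application.

```python
import numpy as np

# Target values the listener "tunes" each acoustic property towards (illustrative only)
TARGETS = {"pitch_hz": 440.0, "pulse_hz": 8.0, "duty_cycle": 0.5}
RANGES  = {"pitch_hz": 220.0, "pulse_hz": 6.0, "duty_cycle": 0.4}

def calibration_signal_parameters(errors):
    """Map per-dimension adjustment errors (normalized to [-1, 1]) to acoustic
    properties that reach their target values exactly when each error is zero."""
    params = {}
    for (name, target), err in zip(TARGETS.items(), errors):
        err = float(np.clip(err, -1.0, 1.0))
        params[name] = target + RANGES[name] * err    # equals the target when err == 0
    return params

# Example: tool almost aligned in dimension 1, still offset in dimensions 2 and 3
print(calibration_signal_parameters([0.02, -0.6, 0.3]))
```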
-
Patent number: 12175610
Abstract: A method for aligning the positions and orientations of a real object and a virtual object in real space, the virtual object corresponding to a virtual replica of the real object, the method comprising visualizing at least one alignment feature superimposed on or replacing a representation of the virtual object in a field of view containing the real object, wherein the alignment feature is indicative of a position and orientation of the virtual object in real space, and wherein the at least one alignment feature complements a shape and/or surface pattern of the real object, such that the alignment feature and the real object form a composite object with complementing patterns and/or shapes in the field of view, when the real object and the virtual object are aligned.
Type: Grant
Filed: July 6, 2021
Date of Patent: December 24, 2024
Assignee: TECHNISCHE UNIVERSITÄT MÜNCHEN
Inventors: Nassir Navab, Alejandro Martin Gomez
-
Patent number: 12165760
Abstract: A device may receive, from an imaging device, a two-dimensional image of a patient being operated on by a user, where the two-dimensional image captures a portion of the patient, and where the portion of the patient is provided between a focal point of an imaging source of the imaging device and a detector plane of the imaging device. The device may translate the two-dimensional image along a frustum of the imaging source, and may generate one or more images in a three-dimensional space based on translating the two-dimensional image along the frustum of the imaging source. The device may provide the one or more images in the three-dimensional space to an augmented reality device associated with the user.
Type: Grant
Filed: February 24, 2020
Date of Patent: December 10, 2024
Assignee: The Johns Hopkins University
Inventors: Javad Fotouhi, Mathias Unberath, Nassir Navab
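
Translating the detector-plane image along the source frustum amounts to placing scaled copies of the image at intermediate depths between the focal point and the detector (similar triangles). The sketch below computes those scale factors and plane corners; the variable names and the simple pinhole geometry are assumptions for illustration.

```python
import numpy as np

def frustum_planes(image_w_mm, image_h_mm, source_to_detector_mm, depths_mm):
    """Place scaled copies of a detector-plane image at several depths along the
    source frustum, with the X-ray source at the origin and the detector at
    z = source_to_detector_mm. Returns (scale, 3D corners) per depth."""
    planes = []
    half_w, half_h = image_w_mm / 2.0, image_h_mm / 2.0
    for z in depths_mm:
        s = z / source_to_detector_mm                   # similar-triangles scale factor
        corners = np.array([[-half_w * s, -half_h * s, z],
                            [ half_w * s, -half_h * s, z],
                            [ half_w * s,  half_h * s, z],
                            [-half_w * s,  half_h * s, z]])
        planes.append((s, corners))
    return planes

# Example: three virtual image planes between the source and a detector 1000 mm away
for scale, corners in frustum_planes(300, 300, 1000, depths_mm=[400, 700, 1000]):
    print(scale, corners[0])
```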
-
Publication number: 20240193854
Abstract: The present invention relates to a system for visualizing OCT signals, comprising a display means designed for the time-resolved display of image data and a control unit, the control unit being configured to receive a time-resolved OCT signal of a selected field of view of a sample from an OCT system, to ascertain a time-resolved OCT image with a virtual shadow on the basis of the OCT signal, with the virtual shadow being generated in object-specific fashion on at least one area of the OCT image by a virtual irradiation of at least one object of the OCT image by means of a virtual light source, and to control the display means to display the time-resolved OCT image on the display means. The present invention also relates to a corresponding method for visualizing OCT signals.
Type: Application
Filed: December 7, 2023
Publication date: June 13, 2024
Inventors: Nassir NAVAB, Michael SOMMERSPERGER
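
A virtual drop shadow of this kind can be sketched by projecting object points along a virtual light direction onto a surface. The snippet below assumes a single directional light and a planar surface purely for illustration; it is not the shading model of the application.

```python
import numpy as np

def drop_shadow_on_plane(points, light_dir, plane_z):
    """Project object points along a virtual light direction onto the plane
    z = plane_z, yielding the footprint of an object-specific drop shadow."""
    points = np.asarray(points, float)
    d = np.asarray(light_dir, float)
    t = (plane_z - points[:, 2]) / d[2]            # ray parameter where each point hits the plane
    return points + t[:, None] * d                  # shadow points, all with z == plane_z

# Example: an instrument tip 0.5 mm above the surface, light arriving at a slant from above
print(drop_shadow_on_plane([[0.0, 0.0, 0.5]], light_dir=[0.2, 0.0, -1.0], plane_z=0.0))
# -> [[0.1 0.  0. ]]
```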
-
Patent number: 11928838
Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
Type: Grant
Filed: July 8, 2022
Date of Patent: March 12, 2024
Assignee: The Johns Hopkins University
Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
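
Once corresponding points on the real-world object and the virtual object have been recorded, the alignment between the two coordinate systems can be recovered as a rigid transform. The sketch below uses the standard SVD-based (Kabsch) least-squares solution; the abstract describes recording the correspondences, and this particular solver is an assumption chosen for illustration.

```python
import numpy as np

def rigid_transform_from_points(real_pts, virtual_pts):
    """Least-squares rigid transform (R, t) mapping virtual points onto real points."""
    real_pts, virtual_pts = np.asarray(real_pts, float), np.asarray(virtual_pts, float)
    c_real, c_virt = real_pts.mean(axis=0), virtual_pts.mean(axis=0)
    H = (virtual_pts - c_virt).T @ (real_pts - c_real)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])      # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_real - R @ c_virt
    return R, t

# Example: three corresponding fiducial points (virtual replica offset by 10 mm and 5 mm)
real = [[0, 0, 0], [100, 0, 0], [0, 100, 0]]
virt = [[10, 5, 0], [110, 5, 0], [10, 105, 0]]
R, t = rigid_transform_from_points(real, virt)
print(np.round(t, 3))   # -> approximately [-10.  -5.   0.]
```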
-
Patent number: 11861062
Abstract: A calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a virtual image having at least one feature. The calibration platform may determine, based on information relating to a gaze of a user wearing the OST-HMD, that the user performed a voluntary eye blink to indicate that the at least one feature of the virtual image appears to the user to be aligned with at least one point on the three-dimensional real-world object. The calibration platform may record an alignment measurement based on a position of the at least one point on the three-dimensional real-world object in a real-world coordinate system based on a time when the user performed the voluntary eye blink. Accordingly, the alignment measurement may be used to generate a function providing a mapping between three-dimensional points in the real-world coordinate system and corresponding points in a display space of the OST-HMD.
Type: Grant
Filed: January 31, 2019
Date of Patent: January 2, 2024
Assignee: The Johns Hopkins University
Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
-
Publication number: 20230274517
Abstract: A method for aligning the positions and orientations of a real object and a virtual object in real space, the virtual object corresponding to a virtual replica of the real object, the method comprising visualizing at least one alignment feature superimposed on or replacing a representation of the virtual object in a field of view containing the real object, wherein the alignment feature is indicative of a position and orientation of the virtual object in real space, and wherein the at least one alignment feature complements a shape and/or surface pattern of the real object, such that the alignment feature and the real object form a composite object with complementing patterns and/or shapes in the field of view, when the real object and the virtual object are aligned.
Type: Application
Filed: July 6, 2021
Publication date: August 31, 2023
Inventors: Nassir Navab, Alejandro Martin Gomez
-
Publication number: 20230080133
Abstract: A computer-implemented method of estimating a 6D pose and shape of one or more objects from a 2D image comprises the steps of: detecting, within the 2D image, one or more 2D regions of interest, each 2D region of interest containing a corresponding object among the one or more objects; cropping out a corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; concatenating the corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; and inferring, for each 2D region of interest, a 4D quaternion describing a rotation of the corresponding object in the 3D rotation group, a 2D centroid, which is a projection of a 3D translation of the corresponding object onto a plane of the 2D image given a camera matrix associated to the 2D image, a distance from a viewpoint of the 2D image to the corresponding object, a size, and a class-specific latent shape vector of the corresponding object.
Type: Application
Filed: February 21, 2020
Publication date: March 16, 2023
Inventors: Sven Meier, Norimasa Kobori, Luca Minciullo, Kei Yoshikawa, Fabian Manhardt, Manuel Nickel, Nassir Navab
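
The inferred quantities (quaternion, 2D centroid, distance, camera matrix) are enough to assemble a full 6D pose. The sketch below shows that assembly step; treating "distance" as length along the back-projected centroid ray, and all variable names, are assumptions made for this illustration.

```python
import numpy as np

def pose_from_network_outputs(quat_wxyz, centroid_uv, distance, K):
    """Assemble a 4x4 object pose from a unit quaternion, the projected 2D
    centroid, the object distance, and the camera matrix K."""
    w, x, y, z = np.asarray(quat_wxyz, float) / np.linalg.norm(quat_wxyz)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    ray = np.linalg.inv(K) @ np.array([centroid_uv[0], centroid_uv[1], 1.0])
    t = distance * ray / np.linalg.norm(ray)          # 3D translation of the object
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = R, t
    return pose

# Example with an identity rotation and a hypothetical camera matrix
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
print(pose_from_network_outputs([1, 0, 0, 0], centroid_uv=(350, 250), distance=1.2, K=K))
```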
-
Publication number: 20230057389
Abstract: A computer-implemented method for determining the refractive power of an intraocular lens to be inserted is presented. The method includes generating first training data for a machine learning system on the basis of a first physical model for a refractive power for an intraocular lens and training the machine learning system by means of the first training data generated, for the purposes of forming a first learning model for determining the refractive power. Furthermore, the method includes training the machine learning system, which was trained using the first training data, using clinical ophthalmological training data for forming a second learning model for determining the refractive power and providing ophthalmological data of a patient and an expected position of the intraocular lens to be inserted. Moreover, the method includes predicting the refractive power of the intraocular lens to be inserted by means of the trained machine learning system and the second learning model.
Type: Application
Filed: January 21, 2021
Publication date: February 23, 2023
Applicant: Carl Zeiss Meditec AG
Inventors: Hendrik Burwinkel, Holger Matz, Stefan Saur, Christoph Hauger, Nassir Navab
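
The two-stage idea (pretrain on data generated from a physical model, then refine on clinical data) can be sketched with a small regressor. Below, the classic SRK regression formula P = A - 2.5·AL - 0.9·K is used only as a stand-in for the "first physical model", and the clinical data are placeholder arrays; neither choice comes from the application.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stage 1: synthetic training data from a simple physical/regression model (stand-in: SRK)
A_CONST = 118.4
axial_len = rng.uniform(21.0, 26.0, 2000)          # axial length [mm]
corneal_k = rng.uniform(40.0, 47.0, 2000)          # mean keratometry [D]
X_syn = np.column_stack([axial_len, corneal_k])
y_syn = A_CONST - 2.5 * axial_len - 0.9 * corneal_k

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     warm_start=True, random_state=0)
model.fit(X_syn, y_syn)                             # first learning model

# Stage 2: continue training on (much smaller) clinical data -> second learning model.
# X_clin / y_clin are placeholders standing in for real clinical records.
X_clin = X_syn[:50] + rng.normal(0, 0.1, (50, 2))
y_clin = y_syn[:50] + rng.normal(0, 0.25, 50)
model.fit(X_clin, y_clin)                           # warm_start=True refines existing weights

print(model.predict([[23.5, 43.0]]))                # predicted IOL power [D]
```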
-
Publication number: 20220366598
Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
Type: Application
Filed: July 8, 2022
Publication date: November 17, 2022
Applicant: The Johns Hopkins University
Inventors: Ehsan AZIMI, Long QIAN, Peter KAZANZIDES, Nassir NAVAB
-
Patent number: 11430203
Abstract: A computer-implemented method for registering low dimensional images with a high dimensional image includes receiving a high dimensional image of a region of interest and simulating synthetic low dimensional images of the region of interest from a number of poses of a virtual low dimensional imaging device, from the high dimensional image. The method determines positions of landmarks within the low dimensional images by applying a first learning algorithm to the low dimensional images and back projecting of the positions of the determined landmarks into the high dimensional image space, to thereby obtain the positions of the landmarks in the high dimensional image. The positions of landmarks within low dimensional images acquired from an imaging device are determined by applying the first or a second learning algorithm to the low dimensional images. The low dimensional images are registered with the high dimensional image based on the positions of the landmarks.
Type: Grant
Filed: October 2, 2020
Date of Patent: August 30, 2022
Assignee: MAXER Endoscopy GmbH
Inventors: Nassir Navab, Matthias Grimm, Javier Esteban, Wojciech Konrad Karcz
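
The back-projection step can be sketched with basic pinhole geometry: a 2D landmark detected in a simulated view is lifted into the 3D (high dimensional) image space using the known pose of the virtual imaging device. The pinhole model, the known-depth assumption, and the variable names below are illustrative assumptions, not details of the patent.

```python
import numpy as np

def backproject_landmark(pixel_uv, depth, K, cam_to_volume):
    """Back-project a 2D landmark into the 3D (high dimensional) image space.

    pixel_uv:      landmark position in the simulated low dimensional image
    depth:         landmark depth along the viewing ray (known for simulated views)
    K:             3x3 intrinsics of the virtual low dimensional imaging device
    cam_to_volume: 4x4 pose of that virtual device in volume coordinates
    """
    ray_cam = np.linalg.inv(K) @ np.array([pixel_uv[0], pixel_uv[1], 1.0])
    point_cam = depth * ray_cam / ray_cam[2]            # landmark in device coordinates
    point_hom = cam_to_volume @ np.append(point_cam, 1.0)
    return point_hom[:3]                                 # landmark in 3D image coordinates

# Example: identity device pose, landmark at the principal point, 80 mm depth
K = np.array([[500.0, 0, 256], [0, 500.0, 256], [0, 0, 1]])
print(backproject_landmark((256, 256), depth=80.0, K=K, cam_to_volume=np.eye(4)))
# -> [ 0.  0. 80.]
```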
-
Patent number: 11386572
Abstract: A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object based on the current pose of the three-dimensional real-world object, based on an indication that the plurality of points on the three-dimensional real-world object respectively corresponds with a plurality of points on the three-dimensional virtual object.
Type: Grant
Filed: January 31, 2019
Date of Patent: July 12, 2022
Assignee: The Johns Hopkins University
Inventors: Ehsan Azimi, Long Qian, Peter Kazanzides, Nassir Navab
-
Patent number: 11369440
Abstract: A system for modelling a portion of a patient includes a processing unit, a manipulator, a sensor, and a display device. The processing unit is configured to receive patient data and to process the patient data to generate a model of the portion of the patient. The sensor is configured to capture manipulator data. The processing unit is configured to receive the manipulator data from the sensor and to process the manipulator data to determine a position of the manipulator, an orientation of the manipulator, or both. The display device is configured to display the model on or in the manipulator.
Type: Grant
Filed: March 30, 2018
Date of Patent: June 28, 2022
Assignee: THE JOHNS HOPKINS UNIVERSITY
Inventors: Bernhard Fuerst, Greg M. Osgood, Nassir Navab, Alexander Winkler
-
Patent number: 11367226
Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
Type: Grant
Filed: February 23, 2021
Date of Patent: June 21, 2022
Assignee: THE JOHNS HOPKINS UNIVERSITY
Inventors: Nassir Navab, Javad Fotouhi
-
Publication number: 20220139532
Abstract: A device may receive, from an imaging device, a two-dimensional image of a patient being operated on by a user, where the two-dimensional image captures a portion of the patient, and where the portion of the patient is provided between a focal point of an imaging source of the imaging device and a detector plane of the imaging device. The device may translate the two-dimensional image along a frustum of the imaging source, and may generate one or more images in a three-dimensional space based on translating the two-dimensional image along the frustum of the imaging source. The device may provide the one or more images in the three-dimensional space to an augmented reality device associated with the user.
Type: Application
Filed: February 24, 2020
Publication date: May 5, 2022
Applicant: The Johns Hopkins University
Inventors: Mohammadjavad FOTOUHIGHAZVINI, Mathias UNBERATH, Nassir NAVAB
-
Publication number: 20210378750
Abstract: Described herein are systems, methods, and techniques for spatially-aware displays for computer-assisted interventions. A Fixed View Frustum technique renders computer images on the display using a perspective based on a virtual camera having a field-of-view facing the display and automatically updates the virtual position of the virtual camera in response to adjusting the pose of the display. A Dynamic Mirror View Frustum technique renders computer images on the display using a perspective based on a field-of-view of a virtual camera that has a virtual position behind the display device. The virtual position of the virtual camera is dynamically updated in response to movement of a user's viewpoint located in front of the display device. Slice visualization techniques are also described herein for use with the Fixed View Frustum and Dynamic Mirror View Frustum techniques.
Type: Application
Filed: June 8, 2021
Publication date: December 9, 2021
Applicant: Stryker Leibinger GmbH & Co. KG
Inventors: Nassir Navab, Alexander Winkler
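
For the Fixed View Frustum idea, one way to couple the virtual camera to the display pose is to place the camera a fixed distance in front of the tracked display and aim it back at the display centre, so moving the display moves the camera with it. The look-at construction below is a minimal sketch under that assumption; the standoff distance, up-vector handling, and function name are illustrative, not taken from the application.

```python
import numpy as np

def fixed_view_frustum_camera(display_center, display_normal, standoff=0.6, up=(0, 0, 1)):
    """Build a world-to-camera (view) matrix for a virtual camera facing a tracked display."""
    n = np.asarray(display_normal, float)
    n = n / np.linalg.norm(n)
    eye = np.asarray(display_center, float) + standoff * n       # virtual camera position
    forward = -n                                                  # look back towards the display
    right = np.cross(forward, up); right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# Example: display centred at (1.0, 0.0, 1.2) m, facing along -y
print(fixed_view_frustum_camera(display_center=[1.0, 0.0, 1.2], display_normal=[0, -1, 0]))
```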
-
Publication number: 20210272328
Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
Type: Application
Filed: February 23, 2021
Publication date: September 2, 2021
Inventors: Nassir Navab, Mohammadjavad Fotouhighazvini
-
Patent number: 11045090
Abstract: A medical imaging apparatus for combined X-ray and optical visualization is provided. It comprises: an X-ray detector positioned above a patient; an X-ray source positioned below a patient; a control device; and a camera setup adapted to deliver an optical stereoscopic or 3D image. Thereby, the camera setup is positioned adjacent to the X-ray detector above the patient, and the control device is adapted to calculate an optical 2D image or a 3D surface from the data delivered by the camera setup, that optical 2D image or 3D surface having a virtual viewpoint similar to the viewpoint of the X-ray source. It is further adapted to superimpose an X-ray image acquired by the X-ray detector and the optical 2D image or 3D surface in order to achieve an augmented optical/X-ray image.
Type: Grant
Filed: September 28, 2016
Date of Patent: June 29, 2021
Assignee: Technische Universität München
Inventor: Nassir Navab
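
The final superimposition step, once the optical image has been rendered from (approximately) the X-ray source's viewpoint, reduces to compositing two registered images. The sketch below uses a simple alpha blend with synthetic buffers; it illustrates the compositing idea only and is not the patented fusion method.

```python
import numpy as np

def augment_optical_with_xray(optical_rgb, xray_gray, alpha=0.5):
    """Blend an X-ray image into an optical image that shares (approximately) the
    same virtual viewpoint and resolution, producing an augmented optical/X-ray view."""
    optical = optical_rgb.astype(float) / 255.0
    xray = (xray_gray.astype(float) / 255.0)[..., None].repeat(3, axis=-1)
    blended = (1.0 - alpha) * optical + alpha * xray
    return (blended * 255.0).astype(np.uint8)

# Example with synthetic, already-registered image buffers of matching size
optical = np.full((480, 640, 3), 128, dtype=np.uint8)
xray = np.zeros((480, 640), dtype=np.uint8)
print(augment_optical_with_xray(optical, xray).shape)     # (480, 640, 3)
```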