Patents by Inventor Mona Fathollahi

Mona Fathollahi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11883245
    Abstract: Embodiments described herein provide a surgical duration estimation system for continuously predicting real-time remaining surgical duration (RSD) of a live surgical session of a given surgical procedure based on a real-time endoscope video of the live surgical session. In one aspect, the process receives a current frame of the endoscope video at a current time of the live surgical session, wherein the current time is among a sequence of prediction time points for making continuous RSD predictions during the live surgical session. The process next randomly samples N-1 additional frames of the endoscope video corresponding to the elapsed portion of the live surgical session between the beginning of the endoscope video corresponding to the beginning of the live surgical session and the current frame corresponding to the current time. The process then combines the N-1 randomly sampled frames and the current frame in the temporal order to obtain a set of N frames.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: January 30, 2024
    Assignee: VERB SURGICAL INC.
    Inventors: Mona Fathollahi Ghezelghieh, Jocelyn Elaine Barker, Pablo Eduardo Garcia Kilroy
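    The frame-sampling step this abstract describes can be sketched in a few lines. This is a minimal illustration only, with `sample_frames` a hypothetical helper name rather than the patented implementation; it assumes frames are indexed from 0:

    ```python
    import random

    def sample_frames(current_index, n, rng=None):
        """Randomly sample n-1 frame indices from the elapsed portion of
        the video [0, current_index), then append the current frame,
        keeping temporal order -- yielding a set of n frames overall."""
        rng = rng or random.Random()
        if current_index < n - 1:
            raise ValueError("not enough elapsed frames to sample from")
        sampled = rng.sample(range(current_index), n - 1)
        return sorted(sampled) + [current_index]

    # e.g. pick 7 earlier frames plus the current frame at index 1000
    frames = sample_frames(1000, 8)
    ```

    Sampling uniformly over the whole elapsed portion, rather than only recent frames, is what lets every prediction point see context from the full session so far.
    
    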
  • Patent number: 11775788
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 3, 2023
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
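    The registration step above can be sketched under a simplifying assumption: the geometric reference object is an axis-aligned square of known physical size, so its top-left corner gives the origin and its edge the scale. The function name and calibration setup are illustrative, not the patented method:

    ```python
    def register_features(square_px, square_size, feature_points_px):
        """Register arbitrary feature points against a geometric reference
        object seen in the same image: an axis-aligned square of known
        physical size. The square's top-left corner defines the origin and
        its edge length the scale, giving each feature point a measure in
        the derived coordinate system."""
        (x0, y0), (x1, _) = square_px  # top-left, top-right corners in pixels
        scale = square_size / (x1 - x0)  # physical units per pixel
        if len(feature_points_px) < 4:
            raise ValueError("need at least four non-colinear feature points")
        return [((x - x0) * scale, (y - y0) * scale)
                for x, y in feature_points_px]

    # a 10-unit square imaged at pixels (100,100)-(300,100), plus 4 features
    measures = register_features(((100, 100), (300, 100)), 10.0,
                                 [(100, 100), (300, 100), (300, 300), (120, 240)])
    ```

    The saved measures are what later lets the registered feature stand in for a conventional fiducial marker.
    
    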
  • Publication number: 20230298336
    Abstract: Disclosed are various systems and techniques for performing video-based surgeon technical-skill assessments and classifications. In one aspect, a process for classifying a surgeon's technical skill in performing a surgery is disclosed. During operation, the process receives a tool-motion track comprising a sequence of detected tool motions of a surgeon performing a surgery with a surgical tool. The process then generates a sequence of multi-channel feature matrices to mathematically represent the tool-motion track. Next, the process performs a one-dimensional (1D) convolution operation on the sequence of multi-channel feature matrices to generate a sequence of context-aware multi-channel feature representations of the tool-motion track.
    Type: Application
    Filed: March 20, 2023
    Publication date: September 21, 2023
    Inventors: Mona FATHOLLAHI GHEZELGHIEH, Mohammad Hasan SARHAN, Jocelyn BARKER, Lela DIMONTE
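    The 1D convolution step can be illustrated on a single channel; a real system would convolve every channel of the multi-channel feature matrices, but the mechanics are the same. This is a sketch, not the claimed implementation:

    ```python
    def conv1d(sequence, kernel):
        """Slide a temporal kernel over a sequence of per-step feature
        values (one channel here for brevity), producing context-aware
        features: each output mixes a time step with its neighbours."""
        k = len(kernel)
        pad = k // 2
        padded = [0.0] * pad + list(sequence) + [0.0] * pad
        return [sum(kernel[j] * padded[i + j] for j in range(k))
                for i in range(len(sequence))]

    # a smoothing kernel turns raw tool-speed samples into context-aware ones
    smoothed = conv1d([0.0, 1.0, 0.0, 1.0], [0.25, 0.5, 0.25])
    ```

    Because each output element depends on its temporal neighbours, the resulting representation is "context-aware" in the sense the abstract uses.
    
    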
  • Publication number: 20230177703
    Abstract: Disclosed are various systems and techniques for tracking surgical tools in a surgical video. In one aspect, the system begins by receiving one or more established tracks for one or more previously-detected surgical tools in the surgical video. The system then processes a current frame of the surgical video to detect one or more objects using a first deep-learning model. Next, for each detected object in the one or more detected objects, the system further performs the following steps to assign the detected object to the right track: (1) computing a semantic similarity between the detected object and each of the one or more established tracks; (2) computing a spatial similarity between the detected object and the latest predicted location for each of the one or more established tracks; and (3) attempting to assign the detected object to one of the one or more established tracks based on the computed semantic similarity and spatial similarity.
    Type: Application
    Filed: December 8, 2022
    Publication date: June 8, 2023
    Inventors: Mona FATHOLLAHI GHEZELGHIEH, Jocelyn BARKER
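    The three-step assignment can be sketched as a scoring function. The similarity measures here (cosine for appearance, distance decay for location) and the weighting scheme are illustrative assumptions, not the disclosed method:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two 2-D appearance feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    def assign_detection(det_feat, det_xy, tracks, w_sem=0.5, w_spa=0.5,
                         min_score=0.5):
        """Score each established track by a weighted sum of semantic
        similarity (appearance features) and spatial similarity (decaying
        with distance to the track's latest predicted location); assign
        the detection to the best track above min_score, else None."""
        best_id, best = None, min_score
        for tid, (feat, xy) in tracks.items():
            sem = cosine(det_feat, feat)
            spa = 1.0 / (1.0 + math.dist(det_xy, xy))
            score = w_sem * sem + w_spa * spa
            if score > best:
                best_id, best = tid, score
        return best_id

    tracks = {"grasper": ([1.0, 0.0], (10.0, 10.0)),
              "scissors": ([0.0, 1.0], (200.0, 50.0))}
    match = assign_detection([0.9, 0.1], (12.0, 11.0), tracks)
    ```

    Combining the two cues is what keeps a detection from being assigned to a visually similar tool on the far side of the frame, or to a nearby but visually different one.
    
    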
  • Publication number: 20220296334
    Abstract: Embodiments described herein provide a surgical duration estimation system for continuously predicting real-time remaining surgical duration (RSD) of a live surgical session of a given surgical procedure based on a real-time endoscope video of the live surgical session. In one aspect, the process receives a current frame of the endoscope video at a current time of the live surgical session, wherein the current time is among a sequence of prediction time points for making continuous RSD predictions during the live surgical session. The process next randomly samples N-1 additional frames of the endoscope video corresponding to the elapsed portion of the live surgical session between the beginning of the endoscope video corresponding to the beginning of the live surgical session and the current frame corresponding to the current time. The process then combines the N-1 randomly sampled frames and the current frame in the temporal order to obtain a set of N frames.
    Type: Application
    Filed: March 22, 2021
    Publication date: September 22, 2022
    Inventors: Mona Fathollahi Ghezelghieh, Jocelyn Elaine Barker, Pablo Eduardo Garcia Kilroy
  • Patent number: 11379992
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. One method includes associating a first primary element from a first frame with a background tag, associating a second primary element from the first frame with a subject tag, generating a background texture using the first primary element, generating a foreground texture using the second primary element, and combining the background texture and the foreground texture into a synthesized frame. The method also includes training a segmentation network using the background tag, the subject tag, and the synthesized frame.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: July 5, 2022
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Prasanna Krishnasamy, Mona Fathollahi, Michael Tetelman
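    The texture-combination step can be sketched as a masked composite over pixel grids. This toy version uses scalar pixels and a hypothetical `synthesize_frame` helper; it is not the patented pipeline:

    ```python
    def synthesize_frame(background, foreground, mask):
        """Compose a synthesized training frame: take the foreground
        texture wherever the subject mask is set, and the background
        texture elsewhere. Labels for the synthesized frame come for
        free, since the mask itself is the ground-truth segmentation."""
        return [[fg if m else bg
                 for bg, fg, m in zip(brow, frow, mrow)]
                for brow, frow, mrow in zip(background, foreground, mask)]

    bg = [[0, 0], [0, 0]]
    fg = [[9, 9], [9, 9]]
    mask = [[1, 0], [0, 1]]
    frame = synthesize_frame(bg, fg, mask)
    ```

    The appeal of this kind of synthesis is that every composited frame arrives with an exact segmentation label, sidestepping manual annotation.
    
    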
  • Publication number: 20220058414
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Application
    Filed: April 30, 2021
    Publication date: February 24, 2022
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Patent number: 11189031
    Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: November 30, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
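    The selection-weighted delta in the more specific method can be sketched per pixel. The uniform `boost` factor applied inside the user's selection is an illustrative assumption, not the claimed weighting scheme:

    ```python
    def weighted_delta(pred, truth, selection, boost=4.0):
        """Compute a per-pixel delta between a predicted segmentation and
        its ground truth, up-weighting pixels inside the user's spatially
        indicative selection so training focuses on the flagged region."""
        total = 0.0
        for prow, trow, srow in zip(pred, truth, selection):
            for p, t, s in zip(prow, trow, srow):
                err = abs(p - t)
                total += err * (boost if s else 1.0)
        return total

    pred = [[1, 0], [0, 0]]
    truth = [[1, 1], [0, 1]]
    sel = [[0, 1], [0, 0]]  # the user clicked the top-right mistake
    delta = weighted_delta(pred, truth, sel)
    ```

    This is importance sampling in the sense the abstract uses it: the user's click redistributes training emphasis toward the errors that matter.
    
    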
  • Patent number: 11080884
    Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: August 3, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
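    The point-tracking node in the output layer can be caricatured as a linear map from an image encoding to coordinates; the internal weights are what training adjusts. This stand-in ignores the real network's depth and is purely illustrative:

    ```python
    def point_tracking_head(encoding, weights, bias):
        """A minimal stand-in for the output layer's point-tracking node:
        a linear map from an image encoding to (x, y) coordinates of the
        tracked point. The internal weights encode information about the
        locale or object the network was trained on."""
        return tuple(sum(w * e for w, e in zip(wrow, encoding)) + b
                     for wrow, b in zip(weights, bias))

    # identity-like weights simply read coordinates out of the encoding
    xy = point_tracking_head([0.4, 0.7], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
    ```
    
    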
  • Patent number: 10997448
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: May 4, 2021
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20200364900
    Abstract: Systems and methods for point marking using virtual fiducial elements are disclosed. An example method includes placing a set of fiducial elements in a locale or on an object and capturing a set of calibration images using an imager. The set of fiducial elements is fully represented in the set of calibration images. The method also includes generating a three-dimensional geometric model of the set of fiducial elements using the set of calibration images. The method also includes capturing a run time image of the locale or object. The run time image does not include a selected fiducial element, from the set of fiducial elements, which was removed from a location in the locale or on the object prior to capturing the run time image. The method concludes with identifying the location relative to the run time image using the run time image and the three-dimensional geometric model.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
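    The run-time localization step can be sketched under a strong simplifying assumption: the transform between the three-dimensional model and the run-time image is a pure 2D translation, estimated from the fiducials still visible. The real method recovers a full geometric relationship; this is only a toy:

    ```python
    def locate_removed_fiducial(model, observed, missing_id):
        """Infer where a removed fiducial would appear in a run-time image:
        estimate the translation between model coordinates and observed
        image coordinates from the fiducials still visible (a pure
        translation, for this sketch), then apply it to the missing
        element's model position."""
        dx = sum(observed[k][0] - model[k][0] for k in observed) / len(observed)
        dy = sum(observed[k][1] - model[k][1] for k in observed) / len(observed)
        mx, my = model[missing_id]
        return (mx + dx, my + dy)

    model = {"a": (0.0, 0.0), "b": (10.0, 0.0), "c": (5.0, 5.0)}
    observed = {"a": (2.0, 3.0), "b": (12.0, 3.0)}  # "c" was removed
    where = locate_removed_fiducial(model, observed, "c")
    ```

    This is what makes the fiducial "virtual": its physical marker can be taken away, yet its location remains recoverable from the model and the markers that remain.
    
    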
  • Publication number: 20200364482
    Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20200364895
    Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20200364521
    Abstract: Trained networks configured to detect fiducial elements in encodings of images and associated methods are disclosed. One method includes instantiating a trained network with a set of internal weights which encode information regarding a class of fiducial elements, applying an encoding of an image to the trained network where the image includes a fiducial element from the class of fiducial elements, generating an output of the trained network based on the set of internal weights of the network and the encoding of the image, and providing a position for at least one fiducial element in the image based on the output. Methods of training such networks are also disclosed.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
  • Publication number: 20200364873
    Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
  • Publication number: 20200364878
    Abstract: Systems and methods for frame and scene segmentation are disclosed herein. One method includes associating a first primary element from a first frame with a background tag, associating a second primary element from the first frame with a subject tag, generating a background texture using the first primary element, generating a foreground texture using the second primary element, and combining the background texture and the foreground texture into a synthesized frame. The method also includes training a segmentation network using the background tag, the subject tag, and the synthesized frame.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 19, 2020
    Applicant: Matterport, Inc.
    Inventors: Gary Bradski, Prasanna Krishnasamy, Mona Fathollahi, Michael Tetelman
  • Patent number: 10708525
    Abstract: Techniques and systems are provided for processing one or more low light images. For example, a short exposure image associated with one or more shutter speeds can be obtained. A long exposure image is also obtained, which is captured using a slower shutter speed than the one or more shutter speeds associated with the short exposure image. An output image can be generated by mapping color information from the long exposure image to the short exposure image.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: July 7, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Reza Pourreza Shahri, Mona Fathollahi Ghezelghieh, Ramin Rezaiifar, Donna Roy
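    One plausible reading of "mapping color information" is to keep the luminance of the short-exposure pixel (sharp, but with noisy color) while borrowing the chrominance of the long-exposure pixel. The luma-gain approach below is an assumed interpretation for illustration, not the patented mapping:

    ```python
    def map_color(short_px, long_px):
        """Merge a low-light exposure pair: preserve the luminance of the
        short-exposure pixel and the colour balance of the long-exposure
        pixel by rescaling the long-exposure channels to the short
        exposure's luma. Pixels are (R, G, B) with 0-255 channels."""
        def luma(p):
            r, g, b = p
            return 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 weights
        ls, ll = luma(short_px), luma(long_px)
        gain = ls / ll if ll else 0.0
        return tuple(min(255.0, c * gain) for c in long_px)

    # the short frame captured the brightness; the long frame, the colour
    out = map_color((120, 120, 120), (200, 60, 30))
    ```

    Absent clipping, the output keeps the short exposure's brightness exactly while inheriting the long exposure's hue, which is the trade the abstract describes.
    
    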
  • Publication number: 20200068151
    Abstract: Techniques and systems are provided for processing one or more low light images. For example, a short exposure image associated with one or more shutter speeds can be obtained. A long exposure image is also obtained, which is captured using a slower shutter speed than the one or more shutter speeds associated with the short exposure image. An output image can be generated by mapping color information from the long exposure image to the short exposure image.
    Type: Application
    Filed: August 27, 2018
    Publication date: February 27, 2020
    Inventors: Reza POURREZA SHAHRI, Mona FATHOLLAHI GHEZELGHIEH, Ramin REZAIIFAR, Donna ROY