Patents by Inventor Austin Reiter

Austin Reiter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11832969
    Abstract: An embodiment according to the present invention includes a method for a machine-learning based approach to the formation of ultrasound and photoacoustic images. The machine-learning approach is used to reduce or remove artifacts to create a new type of high-contrast, high-resolution, artifact-free image. The method of the present invention uses convolutional neural networks (CNNs) to determine target locations to replace the geometry-based beamforming that is currently used. The approach is extendable to any application where beamforming is required, such as radar or seismography.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: December 5, 2023
    Assignee: The Johns Hopkins University
    Inventors: Muyinatu Bell, Austin Reiter
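The abstract above contrasts learned beamforming against the geometry-based delay-and-sum method it is meant to replace. The sketch below is illustrative only, not the patented method: it implements classic delay-and-sum alignment of sensor channels plus a bare 1D convolution (the building block a CNN would stack), with all function names and the synthetic data being assumptions of this example.

```python
import numpy as np

def delay_and_sum(channel_data, delays):
    """Geometry-based beamforming: shift each sensor channel by its
    geometric delay so echoes from the target align, then average.
    This is the step the patent proposes replacing with a CNN."""
    n_channels, _ = channel_data.shape
    out = np.zeros(channel_data.shape[1])
    for ch, d in enumerate(delays):
        out += np.roll(channel_data[ch], -d)  # undo the per-channel delay
    return out / n_channels

def conv1d(x, kernel):
    """Minimal 1D convolution, the elementary operation a convolutional
    network would apply to raw channel data instead of fixed geometry."""
    return np.convolve(x, kernel, mode="same")
```

With a synthetic pulse arriving one sample later on each successive channel, `delay_and_sum` with matching delays produces a single aligned peak at the true target sample.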
  • Patent number: 11721354
    Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: August 8, 2023
    Assignee: Snap Inc.
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
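The final sentence of the abstract describes combining the selected beamformer signals in proportion to how much of each tile falls inside the zoom area. A minimal sketch of that combination step, assuming axis-aligned rectangular tiles and treating area overlap as the "proportion" (the function names and rectangle convention are this example's assumptions, not the claims'):

```python
import numpy as np

def overlap_fraction(tile, zoom):
    """Fraction of the tile's area lying inside the zoom rectangle.
    Rectangles are (x0, y0, x1, y1)."""
    w = max(0.0, min(tile[2], zoom[2]) - max(tile[0], zoom[0]))
    h = max(0.0, min(tile[3], zoom[3]) - max(tile[1], zoom[1]))
    area = (tile[2] - tile[0]) * (tile[3] - tile[1])
    return (w * h) / area

def target_enhanced_signal(tiles, beams, zoom):
    """Identify tiles intersecting the zoom area, select their
    beamformer signals, and combine them weighted by overlap."""
    weights = np.array([overlap_fraction(t, zoom) for t in tiles])
    selected = weights > 0                      # tiles at least partly in the zoom area
    w = weights[selected] / weights[selected].sum()
    return (w[:, None] * np.asarray(beams)[selected]).sum(axis=0)
```

A zoom window straddling two tiles equally, for instance, yields an even mix of their two beamformer signals.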
  • Patent number: 11497417
    Abstract: An embodiment in accordance with the present invention includes a technology to continuously measure patient mobility automatically, using sensors that capture color and depth images along with algorithms that process the data and analyze the activities of the patients and providers to assess the highest level of mobility of the patient. An algorithm according to the present invention employs the following five steps: 1) analyze individual images to locate the regions containing every person in the scene (Person Localization), 2) for each person region, assign an identity to distinguish ‘patient’ vs. ‘not patient’ (Patient Identification), 3) determine the pose of the patient, with the help of contextual information (Patient Pose Classification and Context Detection), 4) measure the degree of motion of the patient (Motion Analysis), and 5) infer the highest mobility level of the patient using the combination of pose and motion characteristics (Mobility Classification).
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: November 15, 2022
    Assignee: The Johns Hopkins University
    Inventors: Suchi Saria, Andy Jinhua Ma, Austin Reiter
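The abstract enumerates a five-step pipeline from person localization through mobility classification. The skeleton below is a toy sketch of how the last stages might compose, with hand-written rules standing in for the learned models the patent actually describes; every label, threshold, and function name here is hypothetical.

```python
def classify_mobility(pose, motion):
    """Step 5 (Mobility Classification): fuse the detected pose with a
    motion score to infer the highest mobility level. Rule-based stand-in
    for the patent's learned classifier."""
    if pose == "upright":
        return "walking" if motion > 0.5 else "standing"
    if pose == "seated":
        return "sitting"
    return "lying"

def assess_frame(people):
    """Steps 1-4 condensed: given person regions already localized and
    identified (steps 1-2), pick the one labelled 'patient' and classify
    mobility from its pose (step 3) and motion score (step 4)."""
    patient = next(p for p in people if p["identity"] == "patient")
    return classify_mobility(patient["pose"], patient["motion"])
```

For example, a frame containing a moving provider and a seated patient should report the patient's level, "sitting", ignoring the provider.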
  • Publication number: 20220108713
    Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
    Type: Application
    Filed: September 14, 2021
    Publication date: April 7, 2022
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
  • Patent number: 11189298
    Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: November 30, 2021
    Assignee: Snap Inc.
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
  • Patent number: 11152012
    Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: October 19, 2021
    Assignee: Snap Inc.
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
  • Publication number: 20210217432
    Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
    Type: Application
    Filed: August 30, 2019
    Publication date: July 15, 2021
    Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
  • Patent number: 10368720
    Abstract: A system for stereo reconstruction from a monoscopic endoscope includes an image pick-up element at a distal end thereof and a working channel defined by a body of the monoscopic endoscope. The system comprises a light patterning component configured to be disposed within the working channel such that a light emitting end of the light patterning component will be fixed with a defined relative distance from the distal end of the image pick-up element. The system also includes a data processor adapted to be in communication with the image pick-up element. The light patterning component forms a pattern of light that is projected onto a region of interest. The data processor receives an image signal of the region of interest that includes the pattern, and determines a distance from the endoscope to the region of interest based on the image signal and based on the defined relative distance.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: August 6, 2019
    Assignee: The Johns Hopkins University
    Inventors: Kevin C. Olds, Tae Soo Kim, Russell H. Taylor, Austin Reiter
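The claim hinges on the light emitter being fixed at a known distance from the image pick-up element, which makes depth recoverable by triangulation. As a rough illustration of that geometry (a standard similar-triangles relation, assumed here rather than taken from the claims), the distance to the surface falls out of the baseline, focal length, and the projected pattern's shift in the image:

```python
def depth_from_pattern(baseline_mm, focal_px, disparity_px):
    """Triangulate distance to the region of interest: with the light
    emitter a known baseline from the camera, the projected pattern's
    image shift (disparity) is inversely proportional to depth."""
    if disparity_px <= 0:
        raise ValueError("pattern not displaced; depth undefined")
    return baseline_mm * focal_px / disparity_px
```

With a 5 mm baseline, a 400 px focal length, and a 20 px observed shift, the surface would sit 100 mm from the endoscope tip.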
  • Publication number: 20190231231
    Abstract: An embodiment in accordance with the present invention includes a technology to continuously measure patient mobility automatically, using sensors that capture color and depth images along with algorithms that process the data and analyze the activities of the patients and providers to assess the highest level of mobility of the patient. An algorithm according to the present invention employs the following five steps: 1) analyze individual images to locate the regions containing every person in the scene (Person Localization), 2) for each person region, assign an identity to distinguish ‘patient’ vs. ‘not patient’ (Patient Identification), 3) determine the pose of the patient, with the help of contextual information (Patient Pose Classification and Context Detection), 4) measure the degree of motion of the patient (Motion Analysis), and 5) infer the highest mobility level of the patient using the combination of pose and motion characteristics (Mobility Classification).
    Type: Application
    Filed: October 4, 2017
    Publication date: August 1, 2019
    Inventors: Suchi Saria, Andy Jinhua Ma, Austin Reiter
  • Publication number: 20180177461
    Abstract: An embodiment according to the present invention includes a method for a machine-learning based approach to the formation of ultrasound and photoacoustic images. The machine-learning approach is used to reduce or remove artifacts to create a new type of high-contrast, high-resolution, artifact-free image. The method of the present invention uses convolutional neural networks (CNNs) to determine target locations to replace the geometry-based beamforming that is currently used. The approach is extendable to any application where beamforming is required, such as radar or seismography.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 28, 2018
    Inventors: Muyinatu Bell, Austin Reiter
  • Patent number: 9418442
    Abstract: A system and method for tracking a surgical implement in a patient can have an imaging system configured to obtain sequential images of the patient, and an image recognition system coupled to the imaging system and configured to identify the surgical implement in individual images. The image recognition system can be configured to identify the surgical implement relative to the patient in one of the images based, at least in part, on an identification of the surgical implement in at least one preceding one of the sequential images, and a probabilistic analysis of individual sections of the one of the images, the sections being selected by the image recognition system based on a position of the surgical implement in the patient as identified in the at least one preceding one of the images.
    Type: Grant
    Filed: July 19, 2012
    Date of Patent: August 16, 2016
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Austin Reiter, Peter K. Allen
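The abstract describes scoring image sections chosen around the implement's previously identified position and picking the most probable one. A minimal sketch of that search-and-score loop, assuming a grid of candidate positions and a caller-supplied likelihood function (the window shape and argmax selection are this example's simplifications, not the patent's probabilistic model):

```python
import numpy as np

def track_step(frame, prev_pos, search_radius, score_fn):
    """One tracking update: evaluate candidate sections in a window
    around the previous position and return the highest-scoring one."""
    best, best_score = prev_pos, -np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            pos = (prev_pos[0] + dy, prev_pos[1] + dx)
            s = score_fn(frame, pos)  # likelihood the implement is here
            if s > best_score:
                best, best_score = pos, s
    return best
```

Run per frame, each update seeds the next search window, which is how the identification in one image informs the identification in the following one.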
  • Publication number: 20160143509
    Abstract: According to some embodiments of the present invention, a system for stereo reconstruction from a monoscopic endoscope is provided. The monoscopic endoscope comprising an image pick-up element at a distal end thereof and a working channel defined by a body of the monoscopic endoscope. The working channel provides a port at the distal end of the monoscopic endoscope. The system for stereo reconstruction comprises a light patterning component configured to be disposed within the working channel of the monoscopic endoscope such that a light emitting end of the light patterning component will be fixed with a defined relative distance from the distal end of the image pick-up element. The system for stereo reconstruction also includes a data processor adapted to be in communication with the image pick-up element. The light patterning component forms a pattern of light that is projected onto a region of interest.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 26, 2016
    Applicant: The Johns Hopkins University
    Inventors: Kevin C. Olds, Tae Soo Kim, Russell H. Taylor, Austin Reiter
  • Publication number: 20150297313
    Abstract: Appearance learning systems, methods and computer products for three-dimensional markerless tracking of robotic surgical tools. An appearance learning approach is provided that is used to detect and track surgical robotic tools in laparoscopic sequences. By training a robust visual feature descriptor on low-level landmark features, a framework is built for fusing robot kinematics and 3D visual observations to track surgical tools over long periods of time across various types of environments. Three-dimensional tracking is enabled on multiple tools of multiple types with different overall appearances. The presently disclosed subject matter is applicable to surgical robot systems such as the da Vinci® surgical robot in both ex vivo and in vivo environments.
    Type: Application
    Filed: December 13, 2013
    Publication date: October 22, 2015
    Applicant: The Trustees of Columbia University in the City of New York
    Inventors: Austin Reiter, Peter K. Allen
  • Publication number: 20140341424
    Abstract: A system and method for tracking a surgical implement in a patient can have an imaging system configured to obtain sequential images of the patient, and an image recognition system coupled to the imaging system and configured to identify the surgical implement in individual images. The image recognition system can be configured to identify the surgical implement relative to the patient in one of the images based, at least in part, on an identification of the surgical implement in at least one preceding one of the sequential images, and a probabilistic analysis of individual sections of the one of the images, the sections being selected by the image recognition system based on a position of the surgical implement in the patient as identified in the at least one preceding one of the images.
    Type: Application
    Filed: July 19, 2012
    Publication date: November 20, 2014
    Applicant: The Trustees of Columbia University in the City of New York
    Inventors: Austin Reiter, Peter K. Allen
  • Publication number: 20140336461
    Abstract: A Surgical Structured Light (SSL) system is disclosed that provides real-time, dynamic 3D visual information of the surgical environment, allowing registration of pre- and intra-operative imaging, online metric measurements of tissue, and improved navigation and safety within the surgical field.
    Type: Application
    Filed: July 25, 2014
    Publication date: November 13, 2014
    Inventors: Austin Reiter, Peter K. Allen
  • Publication number: 20130046137
    Abstract: A surgical instrument has a distal end portion with an outer surface with an outer radius. One or more image capture elements are movably mounted in the distal end portion. In a first state, the one or more image capture elements are un-deployed. In the first state, a surface having an aperture of at least one of the one or more image capture elements is enclosed within the outer surface of the surgical instrument so that the surface having the aperture does not extend beyond the outer surface. In a second state, the one or more image capture elements are deployed. In the second state the surface having the aperture of the at least one of the one or more image capture elements extends beyond the outer surface.
    Type: Application
    Filed: August 15, 2011
    Publication date: February 21, 2013
    Applicant: Intuitive Surgical Operations, Inc.
    Inventors: Tao Zhao, Simon P. DiMaio, David W. Bailey, Amy E. Kerdok, Gregory W. Dachs, II, Stephen J. Blumenkranz, Austin Reiter, Christopher J. Hasser