Patents by Inventor Austin Reiter
Austin Reiter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11832969
Abstract: An embodiment according to the present invention includes a method for a machine-learning based approach to the formation of ultrasound and photoacoustic images. The machine-learning approach is used to reduce or remove artifacts to create a new type of high-contrast, high-resolution, artifact-free image. The method of the present invention uses convolutional neural networks (CNNs) to determine target locations to replace the geometry-based beamforming that is currently used. The approach is extendable to any application where beamforming is required, such as radar or seismography.
Type: Grant
Filed: December 22, 2017
Date of Patent: December 5, 2023
Assignee: The Johns Hopkins University
Inventors: Muyinatu Bell, Austin Reiter
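For context, the "geometry-based beamforming that is currently used", which the patented CNN approach replaces, is conventionally delay-and-sum beamforming. The sketch below is illustrative only; all names and parameter values are assumptions, not taken from the patent.

```python
import numpy as np

# Sketch of conventional geometry-based delay-and-sum (DAS) beamforming,
# the step the patented CNN approach replaces.

def das_beamform(channel_data, element_x, pixel_x, pixel_z, c=1540.0, fs=40e6):
    """Delay-and-sum one image pixel from per-element RF channel data.

    channel_data:     (n_elements, n_samples) received RF traces
    element_x:        (n_elements,) lateral element positions in meters
    pixel_x, pixel_z: pixel location in meters (z is depth)
    c:                speed of sound (m/s); fs: sampling rate (Hz)
    """
    # Round-trip time: transmit to pixel depth, then back to each element.
    rx_dist = np.sqrt((element_x - pixel_x) ** 2 + pixel_z ** 2)
    delays = (pixel_z + rx_dist) / c                 # seconds, per element
    idx = np.round(delays * fs).astype(int)          # sample indices
    idx = np.clip(idx, 0, channel_data.shape[1] - 1)
    # Sum the coherently delayed samples across the aperture.
    return channel_data[np.arange(len(element_x)), idx].sum()
```

A CNN-based beamformer as described in the abstract would instead learn the mapping from raw channel data to target locations, rather than relying on this fixed geometric delay model.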
-
Patent number: 11721354
Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
Type: Grant
Filed: September 14, 2021
Date of Patent: August 8, 2023
Assignee: Snap Inc.
Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
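The abstract's final step (determining proportions of identified tiles relative to the zoom area and combining signals accordingly) can be sketched as follows. The rectangle-overlap helper and all names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Illustrative sketch: weight each tile's beamformer signal by the
# fraction of the zoom area it covers, then sum the weighted signals.

def overlap_fraction(tile, zoom):
    """Fraction of the zoom rectangle covered by a tile.
    Rectangles are (x0, y0, x1, y1)."""
    w = max(0.0, min(tile[2], zoom[2]) - max(tile[0], zoom[0]))
    h = max(0.0, min(tile[3], zoom[3]) - max(tile[1], zoom[1]))
    zoom_area = (zoom[2] - zoom[0]) * (zoom[3] - zoom[1])
    return (w * h) / zoom_area

def target_enhanced_signal(beams, tiles, zoom):
    """Combine per-tile beamformer signals in proportion to their overlap
    with the zoom area; tiles with no overlap are excluded."""
    weights = np.array([overlap_fraction(t, zoom) for t in tiles])
    keep = weights > 0            # tiles with at least a portion in the zoom area
    return (weights[keep, None] * np.asarray(beams)[keep]).sum(axis=0)
```

With a zoom area straddling two tiles equally, the result is an even mix of the two corresponding beamformer signals.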
-
Patent number: 11497417
Abstract: An embodiment in accordance with the present invention includes a technology to continuously measure patient mobility automatically, using sensors that capture color and depth images along with algorithms that process the data and analyze the activities of the patients and providers to assess the highest level of mobility of the patient. An algorithm according to the present invention employs the following five steps:
1) analyze individual images to locate the regions containing every person in the scene (Person Localization),
2) for each person region, assign an identity to distinguish ‘patient’ vs. ‘not patient’ (Patient Identification),
3) determine the pose of the patient, with the help of contextual information (Patient Pose Classification and Context Detection),
4) measure the degree of motion of the patient (Motion Analysis), and
5) infer the highest mobility level of the patient using the combination of pose and motion characteristics (Mobility Classification).
Type: Grant
Filed: October 4, 2017
Date of Patent: November 15, 2022
Assignee: The Johns Hopkins University
Inventors: Suchi Saria, Andy Jinhua Ma, Austin Reiter
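The five steps above compose into a per-frame then per-sequence pipeline. The skeleton below only shows the data flow; each stage is passed in as a callable because the real system uses learned models on color and depth images. All function and parameter names are assumptions.

```python
# Skeleton of the five-step mobility-assessment pipeline from the abstract.
# Stages are injected as callables; the stage order follows the abstract.

def assess_mobility(frames, localize, identify, classify_pose,
                    analyze_motion, classify_mobility):
    """Run steps 1-5 over a sequence of color/depth frames."""
    per_frame = []
    for img in frames:
        regions = localize(img)                                   # 1) Person Localization
        patients = [r for r in regions if identify(r) == "patient"]  # 2) Patient Identification
        per_frame.append([(classify_pose(r), r) for r in patients])  # 3) Pose Classification
    motion = analyze_motion(per_frame)                            # 4) Motion Analysis
    return classify_mobility(per_frame, motion)                   # 5) Mobility Classification
```

The key structural point is that steps 1-3 operate on individual images while steps 4-5 aggregate over the whole sequence.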
-
Publication number: 20220108713
Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
Type: Application
Filed: September 14, 2021
Publication date: April 7, 2022
Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
-
Patent number: 11189298
Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
Type: Grant
Filed: August 30, 2019
Date of Patent: November 30, 2021
Assignee: Snap Inc.
Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
-
Patent number: 11152012
Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
Type: Grant
Filed: August 30, 2019
Date of Patent: October 19, 2021
Assignee: Snap Inc.
Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
-
Publication number: 20210217432
Abstract: Method of performing acoustic zooming starts with microphones capturing acoustic signals associated with video content. Beamformers generate beamformer signals using the acoustic signals. Beamformer signals correspond respectively to tiles of video content. Each of the beamformers is respectively directed to a center of each of the tiles. Target enhanced signal is generated using beamformer signals. Target enhanced signal is associated with a zoom area of video content. Target enhanced signal is generated by identifying the tiles respectively having at least portions that are included in the zoom area, selecting beamformer signals corresponding to identified tiles, and combining selected beamformer signals to generate target enhanced signal. Combining selected beamformer signals may include determining proportions for each of the identified tiles in relation to the zoom area and combining selected beamformer signals based on the proportions to generate the target enhanced signal.
Type: Application
Filed: August 30, 2019
Publication date: July 15, 2021
Inventors: Changxi Zheng, Arun Asokan Nair, Austin Reiter, Shree K. Nayar
-
Patent number: 10368720
Abstract: A system for stereo reconstruction uses a monoscopic endoscope that includes an image pick-up element at its distal end and a working channel defined by the endoscope body. The system comprises a light patterning component configured to be disposed within the working channel such that a light emitting end of the light patterning component will be fixed with a defined relative distance from the distal end of the image pick-up element. The system also includes a data processor adapted to be in communication with the image pick-up element. The light patterning component forms a pattern of light that is projected onto a region of interest. The data processor receives an image signal of the region of interest that includes the pattern, and determines a distance from the endoscope to the region of interest based on the image signal and based on the defined relative distance.
Type: Grant
Filed: November 20, 2014
Date of Patent: August 6, 2019
Assignee: The Johns Hopkins University
Inventors: Kevin C. Olds, Tae Soo Kim, Russell H. Taylor, Austin Reiter
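Why the "defined relative distance" between emitter and camera matters: it is the triangulation baseline from which depth follows. The pinhole-model sketch below is an illustrative assumption, not the patent's implementation; all names and values are hypothetical.

```python
# Illustrative triangulation sketch: with the light-emitting end fixed at a
# known baseline from the image pick-up element, the depth of a projected
# pattern point follows from its observed image shift.

def depth_from_pattern(disparity_px, baseline_m, focal_px):
    """Depth of one projected pattern point for a camera/emitter pair.

    disparity_px: shift (pixels) of the observed pattern dot from where it
                  would appear at infinite depth
    baseline_m:   fixed distance between emitter and image pick-up element
    focal_px:     camera focal length expressed in pixels
    """
    if disparity_px <= 0:
        raise ValueError("dot at or beyond its infinite-depth position")
    # Similar triangles: depth = f * b / d.
    return focal_px * baseline_m / disparity_px
```

Larger disparities correspond to nearer tissue, which is why fixing the emitter-to-camera distance inside the working channel is essential for metric reconstruction.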
-
Publication number: 20190231231
Abstract: An embodiment in accordance with the present invention includes a technology to continuously measure patient mobility automatically, using sensors that capture color and depth images along with algorithms that process the data and analyze the activities of the patients and providers to assess the highest level of mobility of the patient. An algorithm according to the present invention employs the following five steps:
1) analyze individual images to locate the regions containing every person in the scene (Person Localization),
2) for each person region, assign an identity to distinguish ‘patient’ vs. ‘not patient’ (Patient Identification),
3) determine the pose of the patient, with the help of contextual information (Patient Pose Classification and Context Detection),
4) measure the degree of motion of the patient (Motion Analysis), and
5) infer the highest mobility level of the patient using the combination of pose and motion characteristics (Mobility Classification).
Type: Application
Filed: October 4, 2017
Publication date: August 1, 2019
Inventors: Suchi Saria, Andy Jinhua Ma, Austin Reiter
-
Publication number: 20180177461
Abstract: An embodiment according to the present invention includes a method for a machine-learning based approach to the formation of ultrasound and photoacoustic images. The machine-learning approach is used to reduce or remove artifacts to create a new type of high-contrast, high-resolution, artifact-free image. The method of the present invention uses convolutional neural networks (CNNs) to determine target locations to replace the geometry-based beamforming that is currently used. The approach is extendable to any application where beamforming is required, such as radar or seismography.
Type: Application
Filed: December 22, 2017
Publication date: June 28, 2018
Inventors: Muyinatu Bell, Austin Reiter
-
Patent number: 9418442
Abstract: A system and method for tracking a surgical implement in a patient can have an imaging system configured to obtain sequential images of the patient, and an image recognition system coupled to the imaging system and configured to identify the surgical implement in individual images. The image recognition system can be configured to identify the surgical implement relative to the patient in one of the images based, at least in part, on an identification of the surgical implement in at least one preceding one of the sequential images, and a probabilistic analysis of individual sections of the one of the images, the sections being selected by the image recognition system based on a position of the surgical implement in the patient as identified in the at least one preceding one of the images.
Type: Grant
Filed: July 19, 2012
Date of Patent: August 16, 2016
Assignee: The Trustees of Columbia University in the City of New York
Inventors: Austin Reiter, Peter K. Allen
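The abstract describes scoring candidate image sections near the implement's previously identified position. A minimal sketch of that idea, combining an appearance likelihood with a motion prior centered on the previous position, is shown below; the scoring functions and parameter values are illustrative assumptions only.

```python
import math

# Sketch: pick the candidate image section most likely to contain the
# implement, given its position in the preceding image. The Gaussian
# motion prior and the multiplicative combination are assumptions.

def best_section(sections, prev_pos, appearance_score, sigma=20.0):
    """Choose the most likely section for the implement.

    sections:         (x, y) centers of candidate sections near prev_pos
    prev_pos:         implement position identified in the preceding image
    appearance_score: maps a section center to a likelihood in [0, 1]
    sigma:            expected frame-to-frame motion, in pixels
    """
    def motion_prior(pos):
        d2 = (pos[0] - prev_pos[0]) ** 2 + (pos[1] - prev_pos[1]) ** 2
        return math.exp(-d2 / (2.0 * sigma ** 2))
    # Posterior-style score: appearance likelihood x motion prior.
    return max(sections, key=lambda s: appearance_score(s) * motion_prior(s))
```

Restricting candidates to sections near the previous position both cuts computation and suppresses distant false positives.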
-
Publication number: 20160143509
Abstract: According to some embodiments of the present invention, a system for stereo reconstruction from a monoscopic endoscope is provided. The monoscopic endoscope comprises an image pick-up element at a distal end thereof and a working channel defined by a body of the monoscopic endoscope. The working channel provides a port at the distal end of the monoscopic endoscope. The system for stereo reconstruction comprises a light patterning component configured to be disposed within the working channel of the monoscopic endoscope such that a light emitting end of the light patterning component will be fixed with a defined relative distance from the distal end of the image pick-up element. The system for stereo reconstruction also includes a data processor adapted to be in communication with the image pick-up element. The light patterning component forms a pattern of light that is projected onto a region of interest.
Type: Application
Filed: November 20, 2014
Publication date: May 26, 2016
Applicant: The Johns Hopkins University
Inventors: Kevin C. Olds, Tae Soo Kim, Russell H. Taylor, Austin Reiter
-
Publication number: 20150297313
Abstract: Appearance learning systems, methods and computer products for three-dimensional markerless tracking of robotic surgical tools. An appearance learning approach is provided that is used to detect and track surgical robotic tools in laparoscopic sequences. By training a robust visual feature descriptor on low-level landmark features, a framework is built for fusing robot kinematics and 3D visual observations to track surgical tools over long periods of time across various types of environments. Three-dimensional tracking is enabled on multiple tools of multiple types with different overall appearances. The presently disclosed subject matter is applicable to surgical robot systems such as the da Vinci® surgical robot in both ex vivo and in vivo environments.
Type: Application
Filed: December 13, 2013
Publication date: October 22, 2015
Applicant: The Trustees of Columbia University in the City of New York
Inventors: Austin Reiter, Peter K. Allen
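The core of "fusing robot kinematics and 3D visual observations" is correcting a drift-prone kinematic pose estimate with a visually detected tool position. The constant-gain correction below is a deliberately simplified stand-in for the publication's tracking framework; the gain value and names are assumptions.

```python
import numpy as np

# Simplified sketch of kinematics/vision fusion for 3D tool tracking:
# nudge the kinematics-predicted position toward the visual observation.

def fuse_pose(kinematic_xyz, visual_xyz, gain=0.6):
    """Correct a kinematics-predicted 3D tool position with a visual fix.

    kinematic_xyz: position predicted from joint encoders (drifts over time)
    visual_xyz:    3D position triangulated from detected landmark features
    gain:          trust placed in the visual observation, in [0, 1]
    """
    k = np.asarray(kinematic_xyz, dtype=float)
    v = np.asarray(visual_xyz, dtype=float)
    return k + gain * (v - k)   # move the estimate toward the observation
```

Kinematics alone accumulates calibration error over long sequences, which is why periodic visual correction enables tracking "over long periods of time across various types of environments."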
-
Publication number: 20140341424
Abstract: A system and method for tracking a surgical implement in a patient can have an imaging system configured to obtain sequential images of the patient, and an image recognition system coupled to the imaging system and configured to identify the surgical implement in individual images. The image recognition system can be configured to identify the surgical implement relative to the patient in one of the images based, at least in part, on an identification of the surgical implement in at least one preceding one of the sequential images, and a probabilistic analysis of individual sections of the one of the images, the sections being selected by the image recognition system based on a position of the surgical implement in the patient as identified in the at least one preceding one of the images.
Type: Application
Filed: July 19, 2012
Publication date: November 20, 2014
Applicant: The Trustees of Columbia University in the City of New York
Inventors: Austin Reiter, Peter K. Allen
-
Publication number: 20140336461
Abstract: A Surgical Structured Light (SSL) system is disclosed that provides real-time, dynamic 3D visual information of the surgical environment, allowing registration of pre- and intra-operative imaging, online metric measurements of tissue, and improved navigation and safety within the surgical field.
Type: Application
Filed: July 25, 2014
Publication date: November 13, 2014
Inventors: Austin Reiter, Peter K. Allen
-
Publication number: 20130046137
Abstract: A surgical instrument has a distal end portion with an outer surface with an outer radius. One or more image capture elements are movably mounted in the distal end portion. In a first state, the one or more image capture elements are un-deployed. In the first state, a surface having an aperture of at least one of the one or more image capture elements is enclosed within the outer surface of the surgical instrument so that the surface having the aperture does not extend beyond the outer surface. In a second state, the one or more image capture elements are deployed. In the second state the surface having the aperture of the at least one of the one or more image capture elements extends beyond the outer surface.
Type: Application
Filed: August 15, 2011
Publication date: February 21, 2013
Applicant: Intuitive Surgical Operations, Inc.
Inventors: Tao Zhao, Simon P. DiMaio, David W. Bailey, Amy E. Kerdok, Gregory W. Dachs, II, Stephen J. Blumenkranz, Austin Reiter, Christopher J. Hasser