Patents by Inventor Larry H. Matthies

Larry H. Matthies has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11461912
Abstract: A method and system provide for temporal fusion of depth maps in an image space representation. A series of depth maps is acquired from one or more depth sensors at a first time. A first Gaussian mixture model (GMM) is initialized using one of the series of depth maps. A second depth map is obtained from the depth sensors at a second time. An estimate of the motion of the depth sensors, from the first time to the second time, is received. A predictive GMM at the second time is created based on a transform of the first GMM and the estimate of the motion. The predictive GMM is updated based on the second depth map.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: October 4, 2022
    Assignee: California Institute of Technology
    Inventors: Larry H. Matthies, Cevahir Cigla
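The predict-then-update loop in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: it assumes a single Gaussian component per pixel, reduces sensor motion to a uniform shift along the optical axis, and uses made-up variance parameters.

```python
import numpy as np

def init_gmm(depth_map, meas_var=0.01):
    """Initialize a one-component-per-pixel Gaussian model from a depth map."""
    return {"mean": depth_map.astype(float),
            "var": np.full(depth_map.shape, meas_var),
            "weight": np.ones(depth_map.shape)}

def predict(gmm, depth_shift, process_var=0.001):
    """Predict the model at the next time step given an estimate of sensor
    motion (simplified here to a uniform shift along the optical axis)."""
    return {"mean": gmm["mean"] - depth_shift,
            "var": gmm["var"] + process_var,   # motion adds uncertainty
            "weight": gmm["weight"]}

def update(gmm, depth_map, meas_var=0.01):
    """Fuse a new depth map into the predicted model (per-pixel Kalman-style
    update of each Gaussian component)."""
    gain = gmm["var"] / (gmm["var"] + meas_var)
    mean = gmm["mean"] + gain * (depth_map - gmm["mean"])
    var = (1.0 - gain) * gmm["var"]
    return {"mean": mean, "var": var, "weight": gmm["weight"]}
```

Each fused frame pulls the per-pixel mean toward the new measurement and shrinks its variance, which is the sense in which temporal fusion denoises the depth stream.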
  • Patent number: 10665115
Abstract: A method, device, framework, and system provide the ability to control an unmanned aerial vehicle (UAV) to avoid obstacle collision. Range data of a real-world scene is acquired using range sensors (that provide depth data to visible objects). The range data is combined into an egospace representation (consisting of pixels in egospace). An apparent size of each of the visible objects is expanded based on a dimension of the UAV. An assigned destination in the real-world scene based on world space is received and transformed into egospace coordinates in egospace. A trackable path from the UAV to the assigned destination through egospace that avoids collision with the visible objects (based on the expanded apparent sizes of each of the visible objects) is generated. Inputs that control the UAV to follow the trackable path are identified.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: May 26, 2020
    Assignee: California Institute of Technology
    Inventors: Anthony T. S. Fragoso, Larry H. Matthies, Roland Brockers, Richard M. Murray
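The obstacle-expansion step described in the abstract can be sketched on a 2D occupancy grid standing in for egospace. This is an illustrative simplification, not the patented pipeline: the grid, the circular dilation by a pixel radius, and the straight-line path check are all assumptions made for the sketch.

```python
import numpy as np

def expand_obstacles(occupied, radius):
    """Dilate an egospace occupancy grid by the vehicle's radius (in pixels),
    so the planner can treat the vehicle as a single point."""
    h, w = occupied.shape
    out = occupied.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy == 0 and dx == 0) or dy * dy + dx * dx > radius * radius:
                continue
            # shift the grid by (dy, dx) and OR it into the result
            src = occupied[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
            out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] |= src
    return out

def path_clear(expanded, start, goal, samples=100):
    """Check a straight-line candidate path against the expanded obstacles."""
    ys = np.linspace(start[0], goal[0], samples).round().astype(int)
    xs = np.linspace(start[1], goal[1], samples).round().astype(int)
    return not expanded[ys, xs].any()
```

After expansion, any path whose sampled pixels stay in free space keeps the real (finite-sized) vehicle clear of the original obstacles.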
  • Publication number: 20180322646
Abstract: A method and system provide for temporal fusion of depth maps in an image space representation. A series of depth maps is acquired from one or more depth sensors at a first time. A first Gaussian mixture model (GMM) is initialized using one of the series of depth maps. A second depth map is obtained from the depth sensors at a second time. An estimate of the motion of the depth sensors, from the first time to the second time, is received. A predictive GMM at the second time is created based on a transform of the first GMM and the estimate of the motion. The predictive GMM is updated based on the second depth map.
    Type: Application
    Filed: July 19, 2018
    Publication date: November 8, 2018
    Applicant: California Institute of Technology
    Inventors: Larry H. Matthies, Cevahir Cigla
  • Publication number: 20170193830
Abstract: A method, device, framework, and system provide the ability to control an unmanned aerial vehicle (UAV) to avoid obstacle collision. Range data of a real-world scene is acquired using range sensors (that provide depth data to visible objects). The range data is combined into an egospace representation (consisting of pixels in egospace). An apparent size of each of the visible objects is expanded based on a dimension of the UAV. An assigned destination in the real-world scene based on world space is received and transformed into egospace coordinates in egospace. A trackable path from the UAV to the assigned destination through egospace that avoids collision with the visible objects (based on the expanded apparent sizes of each of the visible objects) is generated. Inputs that control the UAV to follow the trackable path are identified.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 6, 2017
    Applicant: California Institute of Technology
    Inventors: Anthony T. S. Fragoso, Larry H. Matthies, Roland Brockers, Richard M. Murray
  • Patent number: 9460353
    Abstract: Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.
    Type: Grant
    Filed: September 16, 2011
    Date of Patent: October 4, 2016
    Assignee: California Institute of Technology
    Inventors: Arturo L. Rankin, Larry H. Matthies, Paolo Bellutta
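The abstract's "characteristics indicative of the presence of water" can be illustrated with two common cues: still water reflects the sky, so candidate pixels match the sky's color, and its surface is smooth, so local texture is low. This sketch is a guess at plausible cues, not the patented detector; the thresholds, the 3x3 texture window, and the sky-color input are all assumptions.

```python
import numpy as np

def local_std(gray, k=1):
    """Standard deviation over a (2k+1)x(2k+1) neighborhood -- a crude
    texture measure."""
    h, w = gray.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = gray[max(y - k, 0):y + k + 1,
                             max(x - k, 0):x + k + 1].std()
    return out

def detect_water_candidates(rgb, sky_color, color_tol=40.0, texture_tol=3.0):
    """Flag pixels whose color is close to the sky color (reflection cue)
    and whose local texture is low (smooth-surface cue)."""
    color_dist = np.linalg.norm(
        rgb.astype(float) - np.asarray(sky_color, float), axis=-1)
    texture = local_std(rgb.mean(axis=-1))
    return (color_dist < color_tol) & (texture < texture_tol)
```

A real system would fuse several such cues with the 3D range data (e.g., stereo dropouts and reflections of terrain, not just sky), which is what makes the combined sensor approach in the patent more robust than any single heuristic.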
  • Publication number: 20120070071
    Abstract: Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.
    Type: Application
    Filed: September 16, 2011
    Publication date: March 22, 2012
    Applicant: California Institute of Technology
    Inventors: Arturo L. Rankin, Larry H. Matthies, Paolo Bellutta
  • Patent number: 5179441
Abstract: Apparatus and methods for a near real-time stereo vision system for use with a robotic vehicle comprise two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60×64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done aboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
    Type: Grant
    Filed: December 18, 1991
    Date of Patent: January 12, 1993
    Assignee: The United States of America as represented by the Administrator of the National Aeronautics and Space Administration
    Inventors: Charles H. Anderson, Larry H. Matthies
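The core matching step named in the abstract (bandpass filtering followed by correlation along the scanline) can be sketched as follows. This is a toy illustration, not the patented system: a box-blur difference stands in for a Laplacian pyramid level, and windowed sum-of-squared-differences stands in for the full least-squares correlation with Bayesian confidence estimation.

```python
import numpy as np

def bandpass(img, k=2):
    """Crude bandpass filter: image minus a box-blurred copy (a stand-in for
    one level of a Laplacian pyramid)."""
    h, w = img.shape
    blur = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            blur[y, x] = img[max(y - k, 0):y + k + 1,
                             max(x - k, 0):x + k + 1].mean()
    return img - blur

def disparity_map(left, right, max_d=8, win=2):
    """For each pixel, find the horizontal shift minimizing windowed SSD
    between rectified left/right images (least-squares style matching)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(max_d + win, w - win):
            lw = left[y - win:y + win + 1, x - win:x + win + 1]
            best, best_d = np.inf, 0
            for d in range(max_d + 1):
                rw = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                ssd = ((lw - rw) ** 2).sum()
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

Running the matcher on a coarse pyramid level (such as the 60×64 level mentioned in the abstract) keeps the search window small, which is what made near real-time rates feasible on early-1990s hardware.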