Patents by Inventor Suya You

Suya You has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9619691
    Abstract: A method of detecting objects in three-dimensional (3D) point clouds and detecting differences between 3D point clouds and the objects therein is disclosed. The method includes receiving a first scene 3D point cloud and a second scene 3D point cloud, which include first and second target objects, respectively; aligning the two point clouds; detecting the first and second target objects from their respective point clouds; comparing the detected first target object with the detected second target object; and identifying, based on the comparison, one or more differences between the two detected objects. Further aspects relate to detecting changes of target objects within scenes of multiple 3D point clouds.
    Type: Grant
    Filed: March 6, 2015
    Date of Patent: April 11, 2017
    Assignees: University of Southern California, Chevron U.S.A. Inc.
    Inventors: Guan Pang, Jing Huang, Amir Anvar, Michael Brandon Casey, Christopher Lee Fisher, Suya You, Ulrich Neumann
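
The comparison step in the abstract above, flagging points of one aligned cloud that lack a counterpart in the other, can be illustrated with a short NumPy sketch. This is a toy illustration, not the patented method: the function names and the 0.05 tolerance are assumptions, the alignment and object-detection stages are omitted, and the brute-force nearest-neighbor search stands in for a proper spatial index.

```python
import numpy as np

def nearest_neighbor_distances(a, b):
    """For each point in a, the distance to its nearest neighbor in b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (len(a), len(b))
    return d.min(axis=1)

def detect_changes(cloud_a, cloud_b, threshold=0.05):
    """Flag points of each aligned cloud that have no close counterpart in the other."""
    removed = cloud_a[nearest_neighbor_distances(cloud_a, cloud_b) > threshold]
    added = cloud_b[nearest_neighbor_distances(cloud_b, cloud_a) > threshold]
    return removed, added

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene1 = rng.uniform(0.0, 1.0, size=(500, 3))
    # Second scan: 50 points replaced by points elsewhere in space.
    scene2 = np.vstack([scene1[:450], rng.uniform(2.0, 3.0, size=(50, 3))])
    removed, added = detect_changes(scene1, scene2)
    print(len(removed), "points missing,", len(added), "points new")
```

In practice a spatial index such as a k-d tree would replace the quadratic distance matrix.
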
  • Patent number: 9472022
    Abstract: A scene point cloud is processed and an inverse-function solution is computed to identify its source objects. A primitive extraction process and a part matching process are used to compute the inverse-function solution. The extraction process estimates models and parameters based on evidence of cylinder and planar geometry in the scene. The matching process matches clusters of 3D points to models of parts from a library. A selected part and its associated polygon model are used to represent the point cluster. Iterations of the extraction and matching processes complete a 3D model for a complex scene made up of planes, cylinders, and complex parts from the parts library. Connecting regions between primitives and/or parts are processed to determine their existence and type. Constraints may be used to ensure a connected model and alignment of its components.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 18, 2016
    Assignee: University of Southern California
    Inventors: Ulrich Neumann, Suya You, Rongqi Qiu, Guan Pang, Jing Huang, Luciano Nocera
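
A minimal sketch of extracting planar evidence from a scene cloud, in the spirit of the primitive extraction process described above. RANSAC plane fitting is a standard stand-in, not necessarily the patent's estimator; all function names and tolerances here are assumptions, and cylinder fitting and the part-matching iteration are omitted.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns (unit normal, centroid)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid  # row for the smallest singular value is the normal

def ransac_plane(points, n_iters=200, tol=0.02, seed=0):
    """Find the plane supported by the most points within distance tol."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        normal, origin = fit_plane(points[rng.choice(len(points), 3, replace=False)])
        inliers = np.abs((points - origin) @ normal) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_plane(points[best]), best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    plane = np.c_[rng.uniform(-1, 1, (200, 2)), 0.005 * rng.normal(size=200)]
    clutter = rng.uniform(-1, 1, (50, 3))
    (normal, _), inliers = ransac_plane(np.vstack([plane, clutter]))
    print(inliers.sum(), "inliers; normal ~", np.round(normal, 2))
```

A cylinder hypothesis would add an axis direction and radius to the estimated parameters, following the same propose-and-score pattern.
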
  • Publication number: 20150254499
    Abstract: A method of detecting objects in three-dimensional (3D) point clouds and detecting differences between 3D point clouds and the objects therein is disclosed. The method includes receiving a first scene 3D point cloud and a second scene 3D point cloud, which include first and second target objects, respectively; aligning the two point clouds; detecting the first and second target objects from their respective point clouds; comparing the detected first target object with the detected second target object; and identifying, based on the comparison, one or more differences between the two detected objects. Further aspects relate to detecting changes of target objects within scenes of multiple 3D point clouds.
    Type: Application
    Filed: March 6, 2015
    Publication date: September 10, 2015
    Applicants: CHEVRON U.S.A. INC., UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Guan PANG, Jing HUANG, Amir ANVAR, Michael Brandon CASEY, Christopher Lee FISHER, Suya YOU, Ulrich NEUMANN
  • Patent number: 9098773
    Abstract: A system and method of detecting one or more objects in a three-dimensional point cloud scene are provided. The method includes receiving a three-dimensional point cloud scene, the three-dimensional point cloud scene comprising a plurality of points; classifying at least a portion of the plurality of points in the three-dimensional point cloud into two or more categories by applying a classifying-oriented three-dimensional local descriptor and a learning-based classifier; extracting from the three-dimensional point cloud scene one or more clusters of points utilizing the two or more categories by applying at least one of segmenting and clustering; and matching the extracted clusters with objects within a library by applying a matching-oriented three-dimensional local descriptor.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: August 4, 2015
    Assignees: CHEVRON U.S.A. INC., UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Jing Huang, Suya You, Amir Anvar, Christopher Lee Fisher
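
A sketch of per-point classification with a 3D local descriptor, in the spirit of the abstract above. It substitutes a common eigenvalue-based neighborhood descriptor (linearity, planarity, sphericity) and a rule-based stand-in for the learning-based classifier; none of these names or choices are taken from the patent itself.

```python
import numpy as np

def local_shape_descriptor(points, idx, k=20):
    """Eigenvalue descriptor of the k-neighborhood of points[idx]:
    (linearity, planarity, sphericity) of the local covariance."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    evals = np.linalg.eigvalsh(np.cov((nbrs - nbrs.mean(axis=0)).T))
    l1, l2, l3 = np.sort(evals)[::-1]  # descending
    l1 = max(l1, 1e-12)                # guard against degenerate neighborhoods
    return np.array([(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1])

def classify_points(points, k=20):
    """Rule-based stand-in for a learned classifier: pick the dominant component."""
    labels = ["linear", "planar", "scatter"]
    return [labels[int(np.argmax(local_shape_descriptor(points, i, k)))]
            for i in range(len(points))]

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 60)
    cable = np.c_[t, np.zeros(60), np.zeros(60)]  # line-like structure
    print(set(classify_points(cable, k=10)))      # expected: {'linear'}
```
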
  • Publication number: 20150003723
    Abstract: A system and method of detecting one or more objects in a three-dimensional point cloud scene are provided. The method includes receiving a three-dimensional point cloud scene, the three-dimensional point cloud scene comprising a plurality of points; classifying at least a portion of the plurality of points in the three-dimensional point cloud into two or more categories by applying a classifying-oriented three-dimensional local descriptor and a learning-based classifier; extracting from the three-dimensional point cloud scene one or more clusters of points utilizing the two or more categories by applying at least one of segmenting and clustering; and matching the extracted clusters with objects within a library by applying a matching-oriented three-dimensional local descriptor.
    Type: Application
    Filed: June 27, 2013
    Publication date: January 1, 2015
    Inventors: Jing HUANG, Suya YOU, Amir ANVAR, Christopher Lee FISHER
  • Publication number: 20140098094
    Abstract: A scene point cloud is processed and an inverse-function solution is computed to identify its source objects. A primitive extraction process and a part matching process are used to compute the inverse-function solution. The extraction process estimates models and parameters based on evidence of cylinder and planar geometry in the scene. The matching process matches clusters of 3D points to models of parts from a library. A selected part and its associated polygon model are used to represent the point cluster. Iterations of the extraction and matching processes complete a 3D model for a complex scene made up of planes, cylinders, and complex parts from the parts library. Connecting regions between primitives and/or parts are processed to determine their existence and type. Constraints may be used to ensure a connected model and alignment of its components.
    Type: Application
    Filed: March 15, 2013
    Publication date: April 10, 2014
    Inventors: Ulrich NEUMANN, Suya YOU, Rongqi QIU, Guan PANG, Jing HUANG, Luciano NOCERA
  • Patent number: 8406532
    Abstract: A method and system of matching features in a pair of images using line signatures. The method includes determining a first similarity measure between a first line signature in a first image in the pair of images and a second line signature in a second image in the pair of images; determining a second similarity measure between the first line signature in the first image and a third line signature in the second image; comparing the first similarity measure with a first threshold value; comparing a difference between the first similarity and the second similarity with a second threshold value; and if the first similarity measure is greater than the first threshold value and the difference between the first similarity and the second similarity is greater than the second threshold value, the first line signature and the second line signature produce a match.
    Type: Grant
    Filed: June 17, 2009
    Date of Patent: March 26, 2013
    Assignee: Chevron U.S.A. Inc.
    Inventors: Lu Wang, Ulrich Neumann, Suya You
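
The two-threshold decision rule above (an absolute similarity bound plus a required margin over the runner-up) maps almost directly to code. In the sketch below, line signatures are represented as plain feature vectors and cosine similarity stands in for the patent's line-signature similarity measure; t_abs and t_gap are hypothetical parameters.

```python
import numpy as np

def match_line_signatures(sigs_a, sigs_b, t_abs=0.8, t_gap=0.1):
    """Accept the best match for each signature in A only if its similarity
    beats t_abs AND exceeds the runner-up by at least t_gap (needs >= 2 candidates)."""
    matches = []
    norms_b = np.linalg.norm(sigs_b, axis=1)
    for i, s in enumerate(sigs_a):
        sims = sigs_b @ s / (norms_b * np.linalg.norm(s))  # cosine similarity
        order = np.argsort(sims)[::-1]
        best, runner_up = sims[order[0]], sims[order[1]]
        if best > t_abs and best - runner_up > t_gap:
            matches.append((i, int(order[0])))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sigs_a = rng.normal(size=(5, 16))
    sigs_b = rng.normal(size=(8, 16))
    sigs_b[3] = sigs_a[0] + 0.01 * rng.normal(size=16)  # plant one true match
    print(match_line_signatures(sigs_a, sigs_b))        # should recover (0, 3)
```

The margin test is what makes the matching conservative: an ambiguous signature with two similar candidates is rejected even if both similarities are high.
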
  • Publication number: 20100322522
    Abstract: A method and system of matching features in a pair of images using line signatures. The method includes determining a first similarity measure between a first line signature in a first image in the pair of images and a second line signature in a second image in the pair of images; determining a second similarity measure between the first line signature in the first image and a third line signature in the second image; comparing the first similarity measure with a first threshold value; comparing a difference between the first similarity and the second similarity with a second threshold value; and if the first similarity measure is greater than the first threshold value and the difference between the first similarity and the second similarity is greater than the second threshold value, the first line signature and the second line signature produce a match.
    Type: Application
    Filed: June 17, 2009
    Publication date: December 23, 2010
    Applicant: Chevron U.S.A. Inc.
    Inventors: Lu Wang, Ulrich Neumann, Suya You
  • Publication number: 20100092093
    Abstract: In a feature matching method for recognizing an object in two-dimensional or three-dimensional image data, features at which a predetermined attribute of the image data takes a local maximum and/or minimum are detected, and features lying along edges and line contours are excluded from the detected features. Thereafter, the remaining features are allocated to a plane, a subset of them is selected using local information, and feature matching is performed on the selected features.
    Type: Application
    Filed: August 12, 2009
    Publication date: April 15, 2010
    Applicant: OLYMPUS CORPORATION
    Inventors: Yuichiro AKATSUKA, Takao SHIBASAKI, Yukihito FURUHASHI, Kazuo ONO, Ulrich NEUMANN, Suya YOU
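
Excluding features that lie along edges and line contours is commonly done by rejecting points where the local Hessian has one dominant principal curvature. The SIFT-style ratio test below is offered as an analogy, not as the patent's exclusion criterion; the function name and the threshold r are assumed values.

```python
import numpy as np

def hessian_edge_test(img, y, x, r=10.0):
    """SIFT-style edge rejection: keep a feature at (y, x) only if the ratio of
    the Hessian's principal curvatures is below r (requires an interior pixel)."""
    dxx = img[y, x + 1] - 2.0 * img[y, x] + img[y, x - 1]
    dyy = img[y + 1, x] - 2.0 * img[y, x] + img[y - 1, x]
    dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    return det > 0 and tr * tr / det < (r + 1.0) ** 2 / r

if __name__ == "__main__":
    peak = np.zeros((9, 9))
    peak[4, 4] = 1.0    # isolated extremum: curvature in both directions, keep
    ridge = np.zeros((9, 9))
    ridge[:, 4] = 1.0   # line contour: curvature in one direction only, reject
    print(hessian_edge_test(peak, 4, 4), hessian_edge_test(ridge, 4, 4))
    # -> True False
```
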
  • Patent number: 7583275
    Abstract: Systems and techniques to implement augmented virtual environments. In one implementation, the technique includes: generating a three dimensional (3D) model of an environment from range sensor information representing a height field for the environment, tracking orientation information of image sensors in the environment with respect to the 3D model in real time, projecting real-time video from the image sensors onto the 3D model based on the tracked orientation information, and visualizing the 3D model with the projected real-time video. Generating the 3D model can involve parametric fitting of geometric primitives to the range sensor information. The technique can also include: identifying in real time a region in motion with respect to a background image in real-time video, the background image being a single distribution background dynamically modeled from a time average of the real-time video, and placing a surface that corresponds to the moving region in the 3D model.
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: September 1, 2009
    Assignee: University of Southern California
    Inventors: Ulrich Neumann, Suya You
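
The "single distribution background dynamically modeled from a time average" described above suggests an exponentially weighted running mean; a minimal sketch of that moving-region step follows. The 3D modeling, tracking, and video projection stages are out of scope here, and the class name, alpha, and threshold are hypothetical values.

```python
import numpy as np

class RunningBackground:
    """Single-distribution background: an exponentially weighted time average
    of the video; pixels that deviate from it are flagged as moving."""
    def __init__(self, first_frame, alpha=0.05, threshold=25.0):
        self.mean = first_frame.astype(float)
        self.alpha = alpha
        self.threshold = threshold

    def update(self, frame):
        """Return a boolean motion mask, then fold the frame into the average."""
        mask = np.abs(frame.astype(float) - self.mean) > self.threshold
        self.mean = (1.0 - self.alpha) * self.mean + self.alpha * frame
        return mask

if __name__ == "__main__":
    bg = RunningBackground(np.zeros((4, 4)))
    frame = np.zeros((4, 4))
    frame[1, 1] = 100.0                # one pixel changes between frames
    print(bg.update(frame).nonzero())  # -> (array([1]), array([1]))
```
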
  • Publication number: 20080278582
    Abstract: Methods, systems, and apparatus, including medium-encoded computer program products, for managing video bandwidth over a network connecting one or more cameras and one or more client video display stations. In one aspect, a system includes a data communication network, cameras coupled with the network, arranged in different locations, and operable to provide video imagery of the different locations via the network, one or more video fusion clients operable to display the video imagery of the different locations received via the network, one or more camera manager components operable to manage transmission of the video imagery from the cameras over the network based on client-side information, and one or more client manager components operable to define the client-side information based on display parameters of the one or more video fusion clients.
    Type: Application
    Filed: May 6, 2008
    Publication date: November 13, 2008
    Applicant: SENTINEL AVE LLC
    Inventors: Tat Leung Chung, Ulrich Neumann, Suya You
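
One reading of the camera-manager/client-manager split above is that per-camera transmission parameters are derived from what clients actually display. The sketch below is purely hypothetical: ClientView, plan_stream_widths, and the width-capping policy are inventions of this illustration, not the patent's protocol.

```python
from dataclasses import dataclass

@dataclass
class ClientView:
    camera_id: str
    display_width: int  # pixels the client actually renders

def plan_stream_widths(views, native_width=1920):
    """Send each camera at the largest width any client displays,
    capped at the camera's native width."""
    plan = {}
    for v in views:
        plan[v.camera_id] = min(native_width,
                                max(plan.get(v.camera_id, 0), v.display_width))
    return plan

if __name__ == "__main__":
    views = [ClientView("cam1", 640), ClientView("cam1", 320),
             ClientView("cam2", 1280)]
    print(plan_stream_widths(views))  # {'cam1': 640, 'cam2': 1280}
```

The point of such client-side information is that a camera never transmits more pixels than any client can show, which is what saves bandwidth on the shared network.
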
  • Patent number: 6765569
    Abstract: An augmented reality tool makes use of autocalibrated features for rendering annotations into images of a scene as a camera moves about relative to the scene. The autocalibrated features are used for positioning the annotations and for recovery of tracking, correspondences and camera pose. An improved method of autocalibration for autocalibrating structured sets of point features together is also described. The augmented reality tool makes use of manual, semi-automatic and automatic methods employing autocalibrated features.
    Type: Grant
    Filed: September 25, 2001
    Date of Patent: July 20, 2004
    Assignee: University of Southern California
    Inventors: Ulrich Neumann, Suya You
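
At render time, positioning an annotation from an autocalibrated 3D feature reduces to projecting the feature's location through the current camera pose. The pinhole projection below illustrates only that step; autocalibration and pose recovery themselves are not shown, and the intrinsics f, cx, cy are assumed values.

```python
import numpy as np

def project(point_3d, R, t, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D annotation anchor into the image for a
    camera with rotation R, translation t, and assumed intrinsics."""
    p = R @ point_3d + t  # world coordinates -> camera coordinates
    return np.array([f * p[0] / p[2] + cx,
                     f * p[1] / p[2] + cy])

if __name__ == "__main__":
    # Annotation anchored to an autocalibrated feature 3 m in front of the camera.
    anchor = np.array([0.1, 0.2, 3.0])
    uv = project(anchor, np.eye(3), np.zeros(3))
    print(uv)  # pixel location at which to draw the annotation
```
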
  • Publication number: 20040105573
    Abstract: Systems and techniques to implement augmented virtual environments. In one implementation, the technique includes: generating a three dimensional (3D) model of an environment from range sensor information representing a height field for the environment, tracking orientation information of image sensors in the environment with respect to the 3D model in real time, projecting real-time video from the image sensors onto the 3D model based on the tracked orientation information, and visualizing the 3D model with the projected real-time video. Generating the 3D model can involve parametric fitting of geometric primitives to the range sensor information. The technique can also include: identifying in real time a region in motion with respect to a background image in real-time video, the background image being a single distribution background dynamically modeled from a time average of the real-time video, and placing a surface that corresponds to the moving region in the 3D model.
    Type: Application
    Filed: September 30, 2003
    Publication date: June 3, 2004
    Inventors: Ulrich Neumann, Suya You
  • Publication number: 20020191862
    Abstract: An augmented reality tool makes use of autocalibrated features for rendering annotations into images of a scene as a camera moves about relative to the scene. The autocalibrated features are used for positioning the annotations and for recovery of tracking, correspondences and camera pose. An improved method of autocalibration for autocalibrating structured sets of point features together is also described. The augmented reality tool makes use of manual, semi-automatic and automatic methods employing autocalibrated features.
    Type: Application
    Filed: September 25, 2001
    Publication date: December 19, 2002
    Inventors: Ulrich Neumann, Suya You