Patents by Inventor Huapeng Su

Huapeng Su has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230290090
    Abstract: Embodiments herein may relate to generating, based on a three-dimensional (3D) graphical representation of a 3D space, a two-dimensional (2D) image that includes respective indications of respective locations of one or more objects in the 3D space. The 2D image may then be displayed to a user, who provides user input related to selection of an object of the one or more objects. The graphical representation of the object in the 2D image may then be altered based on the user input. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: June 27, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
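    The workflow this abstract describes can be sketched in a few lines. The following is a minimal illustration only, not the patented implementation: 3D object locations are mapped to 2D marker positions, a user's click selects the nearest marker, and that object's indication is toggled. All names, coordinates, and parameters here are invented for the example.

    ```python
    # Hypothetical sketch: project 3D object locations to a top-down 2D
    # image, then select and highlight the object nearest a user's click.

    def to_pixel(x, z, scale=10, origin=(50, 50)):
        """Map a 3D (x, z) ground-plane position to 2D pixel coordinates."""
        return (origin[0] + int(x * scale), origin[1] + int(z * scale))

    def select_object(objects, click, max_dist=15):
        """Return the id of the object whose 2D marker is nearest the click,
        or None if no marker is within max_dist pixels."""
        best, best_d = None, float("inf")
        for oid, (x, _, z) in objects.items():
            px, py = to_pixel(x, z)
            d = ((px - click[0]) ** 2 + (py - click[1]) ** 2) ** 0.5
            if d < best_d and d <= max_dist:
                best, best_d = oid, d
        return best

    # Hypothetical objects with 3D (x, y, z) positions in the scanned space.
    objects = {"sofa": (1.0, 0.0, 2.0), "table": (3.0, 0.0, 1.0)}
    highlighted = {oid: False for oid in objects}

    picked = select_object(objects, click=(60, 70))
    if picked is not None:
        highlighted[picked] = not highlighted[picked]  # alter its indication
    ```

    A real system would render actual markers and redraw the altered graphical representation; this sketch only captures the select-then-alter loop the abstract claims.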
  • Publication number: 20230290068
    Abstract: A mesh model of a 3D space is provided with improved accuracy based on user inputs. In one aspect, a triangle face of the mesh is divided into three smaller triangle faces based on a user-selected point in a 3D space. A user can select the point on a display screen, for example, where a corresponding vertex in the mesh is a point in the mesh which is intersected by a ray cast from the selected point. This process can be repeated to provide new vertices in the mesh model which more accurately represent an object in the 3D space and therefore allow a more accurate measurement of the size or area of the object. For example, the user might select four points to identify a rectangular object.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
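    The one-into-three subdivision step described above is simple to sketch. This is an illustrative toy, not the claimed method: given a triangle face (a, b, c) and an interior point (e.g. found by ray casting from a screen tap), the face is replaced by three faces sharing a new vertex at that point. The data layout (vertex list plus index triples) is an assumption for the example.

    ```python
    # Hypothetical sketch: split one triangle face of a mesh into three
    # smaller faces at a user-selected interior point.

    def subdivide_triangle(vertices, faces, face_idx, point):
        """Replace faces[face_idx] = (a, b, c) with three faces that share
        a new vertex placed at `point`. Mutates both lists in place."""
        a, b, c = faces[face_idx]
        p = len(vertices)
        vertices.append(point)          # new vertex in the mesh
        faces[face_idx] = (a, b, p)     # reuse the old face slot
        faces.append((b, c, p))
        faces.append((c, a, p))
        return vertices, faces

    verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    faces = [(0, 1, 2)]
    subdivide_triangle(verts, faces, 0, (0.3, 0.3, 0.0))
    # the mesh now has 4 vertices and 3 faces
    ```

    Repeating this for each user-selected point (four for a rectangular object, per the abstract's example) accumulates vertices that trace the object's true outline.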
  • Publication number: 20230290069
    Abstract: A mesh model of a 3D space is modified based on semantic segmentation data to more accurately represent boundaries of an object in the 3D space. In one aspect, semantic segmentation images define one or more boundaries of the object. The semantic segmentation images are projected to a 3D mesh representation of the 3D space, and the 3D mesh representation is updated based on the one or more boundaries in the projected semantic segmentation image. In another aspect, the 3D mesh representation is updated based on one or more boundaries defined by the semantic segmentation images as applied to a point cloud of the 3D space.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
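    The projection step in this abstract (mapping semantic-segmentation labels onto mesh geometry) can be illustrated with a pinhole-camera sketch. This is not the patented method; the intrinsics, label image, and vertex coordinates below are all invented, and a real pipeline would handle occlusion and fuse labels across many frames.

    ```python
    # Hypothetical sketch: project mesh vertices into a semantic
    # segmentation image and tag each vertex with the label at its pixel.
    import numpy as np

    def label_vertices(vertices, seg_image, fx, fy, cx, cy):
        """vertices: (N, 3) points in camera coordinates (z > 0).
        seg_image: (H, W) integer label map. Returns an (N,) label array;
        -1 marks vertices that fall behind the camera or off the image."""
        labels = np.full(len(vertices), -1, dtype=int)
        h, w = seg_image.shape
        for i, (x, y, z) in enumerate(vertices):
            if z <= 0:
                continue
            u = int(round(fx * x / z + cx))   # pinhole projection
            v = int(round(fy * y / z + cy))
            if 0 <= u < w and 0 <= v < h:
                labels[i] = seg_image[v, u]
        return labels

    seg = np.zeros((4, 4), dtype=int)
    seg[:, 2:] = 1                          # right half labeled "object"
    verts = np.array([[-0.5, 0.0, 2.0],     # projects into the left half
                      [0.5, 0.0, 2.0]])     # projects into the right half
    labels = label_vertices(verts, seg, fx=4.0, fy=4.0, cx=2.0, cy=2.0)
    ```

    Once vertices carry labels, faces straddling a label boundary can be refined or snapped so the mesh tracks the object's true edges, which is the update the abstract describes.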
  • Publication number: 20230290062
    Abstract: Artificial neural networks (ANN) may be trained to output estimated floor plans from 3D spaces that would be challenging or impossible for existing techniques to estimate. In embodiments, an ANN may be trained using a supervised approach where top-down views of 3D meshes or point clouds are provided to the ANN as input, with ground truth floor plans provided as output for comparison. A suitably large training set may be used to fully train the ANN on challenging scenarios such as open loop scans and/or unusual geometries. The trained ANN may then be used to accurately estimate floor plans for such 3D spaces. Other embodiments are described.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
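    The supervised setup described above (top-down views in, ground-truth floor plans out) can be sketched without any deep-learning framework. This is a stand-in, not the claimed system: a single logistic layer in NumPy replaces the ANN, and the "top-down view"/"floor plan" pairs are synthetic toy data where the plan is simply thresholded occupancy.

    ```python
    # Hypothetical sketch of supervised floor-plan training: input is a
    # flattened top-down occupancy grid, target is a flattened binary
    # floor-plan mask, and a logistic layer is trained with plain SGD.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_pair():
        """Toy pair: the 'floor plan' is just occupancy > 0.5."""
        x = rng.random((8, 8))
        y = (x > 0.5).astype(float)
        return x.ravel(), y.ravel()

    n = 64
    W = np.zeros((n, n))
    b = np.zeros(n)

    for step in range(2000):                      # SGD on cross-entropy
        x, y = make_pair()
        p = 1.0 / (1.0 + np.exp(-(W @ x + b)))    # sigmoid prediction
        grad = p - y                              # dBCE/dlogits
        W -= 0.1 * np.outer(grad, x)
        b -= 0.1 * grad

    accs = []
    for _ in range(20):                           # held-out evaluation
        x, y = make_pair()
        pred = (1.0 / (1.0 + np.exp(-(W @ x + b))) > 0.5).astype(float)
        accs.append((pred == y).mean())
    accuracy = float(np.mean(accs))
    ```

    The abstract's point carries over: with enough input/ground-truth pairs, the model learns the mapping directly, so open-loop scans and unusual geometries that defeat rule-based estimators are just more training data.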
  • Publication number: 20230196670
    Abstract: Embodiments include systems and methods for generating 2D and 3D layouts from a physical 3D space captured by a capturing device, the layouts having an identical scale to the physical 3D space, and estimating measurements of the physical 3D space from the layouts. The capturing device captures a point cloud or 3D mesh of the physical 3D space, from which one or more planes are identified. These one or more planes can then be used to create a virtual 3D reconstruction of the captured 3D space. In other embodiments, one plane may be identified as a floor plane, and features from the point cloud or 3D mesh that are above the floor plane may be projected onto the floor plane to create a top-down view and 2D layout. Other embodiments are described.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Inventor: Huapeng Su
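    The floor-plane projection in this abstract can be sketched concisely. This is an illustration under stated assumptions, not the claimed system: the floor is taken as the lowest height in the cloud (a real system would fit a plane robustly), points above it are projected to an (x, z) occupancy grid, and the footprint extents come out in the same scale as the captured space. The point cloud and cell size are invented.

    ```python
    # Hypothetical sketch: project points above the floor plane onto it
    # to form a top-down 2D layout, then measure the footprint.
    import numpy as np

    def top_down_layout(points, cell=0.5):
        """points: (N, 3) array with y as height, in meters. Returns the
        floor height, an occupancy grid over (x, z), and the footprint
        extents in meters (same scale as the physical space)."""
        floor_y = points[:, 1].min()               # simplest floor estimate
        above = points[points[:, 1] > floor_y + 1e-6]
        xz = above[:, [0, 2]]                      # project onto the floor
        mins = xz.min(axis=0)
        idx = ((xz - mins) / cell).astype(int)     # rasterize to grid cells
        grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
        grid[idx[:, 0], idx[:, 1]] = 1
        extents = xz.max(axis=0) - mins
        return floor_y, grid, extents

    pts = np.array([
        [0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 0.0, 3.0],  # floor points
        [0.0, 2.5, 0.0], [4.0, 2.5, 3.0],                   # wall tops
    ])
    floor_y, grid, extents = top_down_layout(pts)
    ```

    Because no normalization is applied to the projected coordinates, distances measured on the 2D layout (here a 4 m by 3 m footprint) equal distances in the physical space, which is the identical-scale property the abstract emphasizes.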
  • Patent number: 10891488
    Abstract: Described is a system for visual activity recognition. In operation, the system detects a set of objects of interest (OI) in video data and determines an object classification for each object in the set of OI, the set including at least one OI. A corresponding activity track is formed for each object in the set of OI by tracking each object across frames. Using a feature extractor, the system determines a corresponding feature in the video data for each OI, which is then used to determine a corresponding initial activity classification for each OI. One or more OI are then detected in each activity track via foveation, with the initial object detection and foveated object detection thereafter being appended into a new detected-objects list. Finally, a final classification is provided for each activity track using the new detected-objects list and filtering the initial activity classification results using contextual logic.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: January 12, 2021
    Assignee: HRL Laboratories, LLC
    Inventors: Deepak Khosla, Ryan M. Uhlenbrock, Huapeng Su, Yang Chen
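    The pipeline shape in this abstract (per-track detections, foveated re-detections appended into one list, contextual filtering of the initial activity label) can be sketched in plain Python. The classes, labels, and the contextual rule below are all illustrative stand-ins, not the patented logic.

    ```python
    # Hypothetical sketch: merge initial and foveated detections per
    # activity track, then filter the initial classification with a
    # simple contextual rule.
    from dataclasses import dataclass, field

    @dataclass
    class ActivityTrack:
        object_class: str
        initial_activity: str
        detections: list = field(default_factory=list)  # initial detections
        foveated: list = field(default_factory=list)    # foveated re-detections

        def merged_detections(self):
            """Append initial and foveated detections into one list."""
            return self.detections + self.foveated

    def final_classification(track):
        """Toy contextual rule: 'loading' only makes sense with a vehicle
        among the merged detections; otherwise fall back to 'carrying'."""
        merged = track.merged_detections()
        if track.initial_activity == "loading" and "vehicle" not in merged:
            return "carrying"
        return track.initial_activity

    track = ActivityTrack("person", "loading",
                          detections=["person", "box"],
                          foveated=["box"])
    result = final_classification(track)   # no vehicle in context
    ```

    The design point the abstract makes is that foveation recovers small or distant objects the first detector pass missed, so the contextual filter operates on a more complete detected-objects list than the initial classification did.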
  • Publication number: 20190251358
    Abstract: Described is a system for visual activity recognition. In operation, the system detects a set of objects of interest (OI) in video data and determines an object classification for each object in the set of OI, the set including at least one OI. A corresponding activity track is formed for each object in the set of OI by tracking each object across frames. Using a feature extractor, the system determines a corresponding feature in the video data for each OI, which is then used to determine a corresponding initial activity classification for each OI. One or more OI are then detected in each activity track via foveation, with the initial object detection and foveated object detection thereafter being appended into a new detected-objects list. Finally, a final classification is provided for each activity track using the new detected-objects list and filtering the initial activity classification results using contextual logic.
    Type: Application
    Filed: January 14, 2019
    Publication date: August 15, 2019
    Inventors: Deepak Khosla, Ryan M. Uhlenbrock, Huapeng Su, Yang Chen