Patents by Inventor Zhiliu Yang

Zhiliu Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230099521
    Abstract: A system for generating a semantic 3D map that includes at least one image capture device capable of capturing and transmitting digital frames of images; a temporal and unified panoptic segmentation module programmed, structured, and/or configured to receive the frames of images from the at least one image capture device and integrate a heuristic panoptic label fusion module with a loss function of a neural network to realize end-to-end panoptic segmentation; a geometric segmentation module programmed, structured, and/or configured to receive the frames of images from the at least one image capture device and to discover previously unseen scene elements, wherein at every frame it generates a set of closed 2D regions and a set of corresponding 3D segments from a depth image; a segmentation refinement module programmed, structured, and/or configured to refine geometric labels using panoptic labels; and a 3D volumetric integration module programmed, structured, and/or configured to directly register each pixel of
    Type: Application
    Filed: September 28, 2022
    Publication date: March 30, 2023
    Applicant: CLARKSON UNIVERSITY
    Inventors: Zhiliu Yang, Chen Liu
  • Publication number: 20220075068
    Abstract: A tightly coupled fusion approach that dynamically consumes light detection and ranging (LiDAR) and sonar data to generate reliable and scalable indoor maps for autonomous robot navigation. The approach may be used for the ubiquitous deployment of indoor robots that require the availability of affordable, reliable, and scalable indoor maps. A key feature of the approach is the utilization of a fusion mechanism that works in three stages: a first LiDAR scan matching stage efficiently generates initial key localization poses; a second optimization stage eliminates errors accumulated from the previous stage and guarantees that accurate large-scale maps can be generated; and a final revisit scan fusion stage effectively fuses the LiDAR map and the sonar map to generate a highly accurate representation of the indoor environment.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 10, 2022
    Applicant: CLARKSON UNIVERSITY
    Inventors: Chen Liu, Zhiliu Yang, Shaoshan Liu
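
The segmentation refinement module in publication 20230099521 (refining geometric labels using panoptic labels) can be illustrated with a minimal sketch. This is not the patented method itself, whose details are not given in the abstract; it only shows one plausible refinement rule, assigning each geometric segment the majority panoptic label among its pixels. The function name and the toy data are assumptions for illustration.

```python
from collections import Counter

def refine_labels(geom_segments, panoptic_labels):
    """Hypothetical refinement rule: relabel each geometric segment with
    the most frequent panoptic label observed among its pixels."""
    votes = {}
    for seg, lab in zip(geom_segments, panoptic_labels):
        votes.setdefault(seg, Counter())[lab] += 1
    # Majority vote per geometric segment.
    seg_to_label = {s: c.most_common(1)[0][0] for s, c in votes.items()}
    return [seg_to_label[s] for s in geom_segments]

# Toy demo: two geometric segments with noisy per-pixel panoptic labels.
segments = [0, 0, 0, 1, 1, 1]
panoptic = ["chair", "chair", "table", "floor", "floor", "chair"]
refined = refine_labels(segments, panoptic)
# → ["chair", "chair", "chair", "floor", "floor", "floor"]
```

The vote smooths per-pixel label noise while preserving the geometric segment boundaries, which is the general intent the abstract describes.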
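
The three-stage fusion mechanism in publication 20220075068 can be sketched as a pipeline. Each stage below is a toy stand-in, not the patented algorithm: scan matching is reduced to 2-D centroid alignment (a real system would use ICP or correlative matching), optimization to a linear distribution of a loop-closure residual (rather than pose-graph least squares), and revisit fusion to a fixed-weight blend of occupancy grids. All function names, weights, and data are assumptions.

```python
import numpy as np

def scan_match(prev_scan, curr_scan):
    """Stage 1: initial relative pose between consecutive scans.
    Toy stand-in: align the 2-D centroids of the two point sets."""
    return curr_scan.mean(axis=0) - prev_scan.mean(axis=0)

def optimize_poses(poses, closure_error):
    """Stage 2: remove drift accumulated by stage 1.
    Toy stand-in: spread one loop-closure residual linearly over the
    trajectory instead of solving a pose-graph optimization."""
    n = len(poses)
    return [p - closure_error * (i / (n - 1)) for i, p in enumerate(poses)]

def fuse_maps(lidar_grid, sonar_grid, w_lidar=0.7):
    """Stage 3: revisit fusion of LiDAR and sonar occupancy grids.
    Toy stand-in: fixed-weight blend of occupancy probabilities."""
    return w_lidar * lidar_grid + (1.0 - w_lidar) * sonar_grid

# Toy demo of the three stages chained together.
scan_a = np.array([[0.0, 0.0], [1.0, 0.0]])
scan_b = scan_a + np.array([0.5, 0.2])           # robot moved by (0.5, 0.2)
delta = scan_match(scan_a, scan_b)               # stage 1 pose increment

poses = [np.zeros(2), delta, 2 * delta]          # dead-reckoned trajectory
drift = np.array([0.1, 0.0])                     # observed loop-closure error
poses = optimize_poses(poses, drift)             # stage 2 correction

lidar = np.array([[0.9, 0.1], [0.1, 0.9]])       # occupancy probabilities
sonar = np.array([[0.7, 0.3], [0.3, 0.7]])
fused = fuse_maps(lidar, sonar)                  # stage 3 fused map
```

The point of the staging, per the abstract, is that cheap initial poses from scan matching are corrected globally before the two sensor maps are merged, so that fusion happens on a trajectory without accumulated drift.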