Patents by Inventor Yingying SHI

Yingying SHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11970246
    Abstract: A ship cabin loading capacity measurement method and apparatus, comprising: acquiring point cloud measurement data of a ship cabin; optimizing the point cloud measurement data according to a predetermined point cloud data processing rule to generate optimized ship cabin point cloud data; and calculating the ship cabin point cloud data with a predetermined loading capacity calculation rule to obtain ship cabin loading capacity data. According to the ship cabin loading capacity measurement method of the present invention, the point cloud measurement data can be acquired by a lidar and processed with the predetermined point cloud data processing rule and calculation rule; because these rules can be deployed in a computer device in advance, the loading capacity of a ship cabin can be obtained quickly and precisely once the point cloud measurement data are acquired. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: April 30, 2024
    Assignee: Zhoushan Institute of Calibration and Testing for Quality and Technology Supervision
    Inventors: Huadong Hao, Cunjun Li, Xianlei Chen, Haolei Shi, Ze'nan Wu, Junxue Chen, Zhengqian Shen, Yingying Wang, Huizhong Xu
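    A minimal sketch of the general idea described in this abstract: estimating cabin volume from lidar point cloud data by filtering outliers and integrating depth over a planar grid. The outlier filter, cell size, and rim reference height are illustrative assumptions, not the patent's specified processing or calculation rules.

    ```python
    import numpy as np

    def denoise(points: np.ndarray, k_sigma: float = 3.0) -> np.ndarray:
        """Drop points whose height deviates more than k_sigma standard deviations
        from the mean height (a simple stand-in for the 'point cloud processing rule')."""
        z = points[:, 2]
        keep = np.abs(z - z.mean()) < k_sigma * z.std()
        return points[keep]

    def cabin_capacity(points: np.ndarray, cell: float = 0.25, rim_z: float = 0.0) -> float:
        """Approximate loading capacity (m^3) by integrating the depth of the cabin
        surface below the rim over a regular XY grid: each occupied cell contributes
        cell_area * mean_depth."""
        pts = denoise(points)
        ix = np.floor(pts[:, 0] / cell).astype(int)
        iy = np.floor(pts[:, 1] / cell).astype(int)
        volume = 0.0
        for key in set(zip(ix, iy)):
            mask = (ix == key[0]) & (iy == key[1])
            depth = rim_z - pts[mask, 2].mean()  # depth of the measured surface below the rim
            if depth > 0:
                volume += cell * cell * depth
        return volume
    ```

    A finer grid trades computation time for accuracy; the patent's actual computation rule may differ.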
  • Publication number: 20240130637
    Abstract: System and method for sensing movement, such as chest movement, of each of a plurality of target test subjects by: transmitting, at each of N transmitting antennas (TXs) of a phased multiple-input multiple-output (phased-MIMO) radar, a common frequency-modulated continuous-wave (FMCW) signal in each of a plurality of time-division multiplexing (TDM) slots, each TDM slot having a respective weight selected in accordance with a transmit steering vector configured to cause coherent summation of the transmitted signal in a desired direction θ0 toward at least one target; receiving target-reflected energy associated with the transmitted FMCW signals at a virtual array formed by stacking the signals from P TDM slots received via M receiving antennas (RXs) of the phased-MIMO radar; and processing an output of the virtual array to extract the signal received from the desired direction θ0, thereby determining target movement in the desired direction θ0. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 18, 2023
    Publication date: April 25, 2024
    Applicant: Rutgers, The State University of New Jersey
    Inventors: Athina Petropulu, Yingying Chen, Zhaoyi Xu, Chung-Tse Michael Wu, Cong Shi, Tianfang Zhang, Shuping Li, Yichao Yuan
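    A minimal sketch of the virtual-array step described in this abstract: stacking the P TDM slots received on M antennas and coherently combining toward θ0. Half-wavelength uniform linear arrays and Kronecker-structured steering are assumptions for illustration; the patent's exact weighting and array geometry may differ.

    ```python
    import numpy as np

    def steering_vector(n_elems: int, theta: float) -> np.ndarray:
        """Half-wavelength ULA steering vector for angle theta (radians)."""
        n = np.arange(n_elems)
        return np.exp(1j * np.pi * n * np.sin(theta))

    def extract_direction(slot_data: np.ndarray, theta0: float) -> np.ndarray:
        """slot_data: (P, M, S) complex samples from P TDM slots, M RX antennas,
        and S fast-time samples. Returns the fast-time signal beamformed toward theta0."""
        P, M, S = slot_data.shape
        virtual = slot_data.reshape(P * M, S)   # stack the P slots into a P*M virtual array
        a_tx = steering_vector(P, theta0)       # per-slot (transmit-side) phase progression
        a_rx = steering_vector(M, theta0)       # receive-side steering
        w = np.kron(a_tx, a_rx)                 # virtual-array steering vector
        return w.conj() @ virtual / (P * M)     # coherent combination toward theta0
    ```

    Chest movement would then be read from the slow-time phase of the beamformed range bin containing the subject, which is outside the scope of this sketch.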
  • Patent number: 11928957
    Abstract: An audio-visual haptic signal reconstruction method first uses a large-scale audio-visual database stored in a central cloud to learn knowledge and transfers it to an edge node; the edge node then combines a received audio-visual signal with the knowledge from the central cloud, fully mining the semantic correlation and consistency between modalities; finally, the semantic features of the obtained audio and video signals are fused and input to a haptic generation network, thereby reconstructing the haptic signal. The method effectively addresses the problems that the number of audio and video signals in a multi-modal dataset is insufficient and that semantic tags cannot be added to all audio-visual signals in a training dataset by manual annotation. The semantic associations between heterogeneous data of different modalities are also better mined, and the heterogeneity gap between modalities is eliminated. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: March 12, 2024
    Assignee: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Xin Wei, Liang Zhou, Yingying Shi, Zhe Zhang, Siqi Zhang
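    A minimal sketch of the fusion-and-generation step described in this abstract: encoding audio and video semantic features, fusing them, and decoding a haptic signal. The layer sizes, concatenation-based fusion, and overall architecture are illustrative assumptions; the patent does not disclose its network design here.

    ```python
    import torch
    import torch.nn as nn

    class HapticGenerator(nn.Module):
        def __init__(self, audio_dim=128, video_dim=256, hidden=256, haptic_len=512):
            super().__init__()
            self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
            self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
            # fused semantic features drive the haptic generation network
            self.decoder = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, haptic_len),
            )

        def forward(self, audio_feat, video_feat):
            a = self.audio_enc(audio_feat)
            v = self.video_enc(video_feat)
            return self.decoder(torch.cat([a, v], dim=-1))  # reconstructed haptic waveform
    ```

    In the workflow the abstract describes, such a generator would run at the edge node, with its encoders initialized from knowledge learned on the central-cloud audio-visual database.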
  • Publication number: 20230290234
    Abstract: An audio-visual haptic signal reconstruction method first uses a large-scale audio-visual database stored in a central cloud to learn knowledge and transfers it to an edge node; the edge node then combines a received audio-visual signal with the knowledge from the central cloud, fully mining the semantic correlation and consistency between modalities; finally, the semantic features of the obtained audio and video signals are fused and input to a haptic generation network, thereby reconstructing the haptic signal. The method effectively addresses the problems that the number of audio and video signals in a multi-modal dataset is insufficient and that semantic tags cannot be added to all audio-visual signals in a training dataset by manual annotation. The semantic associations between heterogeneous data of different modalities are also better mined, and the heterogeneity gap between modalities is eliminated.
    Type: Application
    Filed: July 1, 2022
    Publication date: September 14, 2023
    Inventors: Xin WEI, Liang ZHOU, Yingying SHI, Zhe ZHANG, Siqi ZHANG