Patents by Inventor Siyuan Qiao

Siyuan Qiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12282857
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
    Type: Grant
    Filed: September 27, 2024
    Date of Patent: April 22, 2025
    Assignee: Google LLC
    Inventors: Siyuan Qiao, Chenxi Liu, Jiahui Yu, Yonghui Wu
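The abstract above describes contrastive learning modified with a relative margin that adjusts each training pair's contribution. As an illustrative sketch only (the patented relative-margin rule is not detailed in the abstract; the margin-on-positives policy below is an assumption), an InfoNCE-style loss where a margin is subtracted from the positive-pair logits might look like:

```python
import numpy as np

def contrastive_loss_with_margin(sim, margin=0.1):
    """InfoNCE-style contrastive loss over an N x N similarity matrix
    whose diagonal entries are the positive pairs. Subtracting a margin
    from the positive logits changes each pair's contribution to the
    optimization (sketch only; not the patented formulation)."""
    n = sim.shape[0]
    logits = sim.copy()
    logits[np.arange(n), np.arange(n)] -= margin  # margin on positives
    # numerically stable softmax cross-entropy with diagonal targets
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

A larger margin makes the positives harder to classify, so the loss (and its gradient signal on those pairs) grows.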
  • Publication number: 20250111235
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
    Type: Application
    Filed: September 27, 2024
    Publication date: April 3, 2025
    Inventors: Siyuan Qiao, Chenxi Liu, Jiahui Yu, Yonghui Wu
  • Publication number: 20250029424
    Abstract: A method includes obtaining dual-pixel image data that represents an object and includes a first sub-image and a second sub-image, and generating (i) a first feature map based on the first sub-image and (ii) a second feature map based on the second sub-image. The method also includes generating a correlation volume by determining, for each respective offset of a plurality of offsets between the first feature map and the second feature map, pixel-wise similarities between (i) the first feature map and (ii) the second feature map offset from the first feature map by the respective offset. The method further includes determining, by an anti-spoofing model and based on the correlation volume, a spoofing value indicative of a likelihood that the object represented by the dual-pixel image data is being spoofed.
    Type: Application
    Filed: April 1, 2022
    Publication date: January 23, 2025
    Inventors: Siyuan Qiao, Wen-Sheng Chu
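The correlation volume described above compares two dual-pixel feature maps at a set of offsets. A minimal sketch of that computation (using a circular shift as a simple stand-in for whatever padding the real method uses, and a channel dot product as the pixel-wise similarity) could be:

```python
import numpy as np

def correlation_volume(f1, f2, offsets):
    """For each offset, shift f2 horizontally and compute the pixel-wise
    channel dot product with f1.
    f1, f2: (H, W, C) feature maps. Returns (len(offsets), H, W).
    Circular shift and dot-product similarity are illustrative choices."""
    vol = np.zeros((len(offsets),) + f1.shape[:2])
    for i, d in enumerate(offsets):
        shifted = np.roll(f2, d, axis=1)  # offset f2 relative to f1
        vol[i] = (f1 * shifted).sum(axis=-1)  # per-pixel similarity
    return vol
```

For identical inputs, the zero-offset slice carries the largest total correlation, which is the cue a downstream anti-spoofing model can exploit.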
  • Patent number: 12079725
Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: September 3, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
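The idea above is that moving down-sampling layers earlier shrinks feature maps sooner and cuts compute. A toy sketch of that trade-off (the greedy shift-earlier policy and all sizes below are hypothetical, not the patented placement algorithm):

```python
def conv_flops(hw, cin, cout, k=3):
    # multiply-accumulates for one k x k conv on an hw x hw feature map
    return hw * hw * cin * cout * k * k

def network_flops(downsample_at, num_layers=8, input_hw=64, channels=32):
    """Total conv FLOPs when the spatial size halves at each layer
    index in `downsample_at`. Earlier down-sampling is cheaper."""
    hw, flops = input_hw, 0
    for layer in range(num_layers):
        if layer in downsample_at:
            hw //= 2
        flops += conv_flops(hw, channels, channels)
    return flops

def place_downsampling(budget, num_downsamples=3, num_layers=8):
    """Greedy sketch: start with the latest (most accurate) placement
    and shift the down-sampling block earlier until the FLOPs estimate
    fits the budget."""
    start = num_layers - num_downsamples
    while start > 0 and network_flops(set(range(start, start + num_downsamples))) > budget:
        start -= 1
    return list(range(start, start + num_downsamples))
```

With a generous budget the down-sampling layers stay late; with a tight one they migrate to the front of the network.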
  • Patent number: 11790234
Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: October 17, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
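The abstract above partitions neurons by activation scale and then blocks connections between reborn and survived neurons. A minimal sketch of those two steps (the dictionary-based connectivity mask is an illustrative representation, not the patented mechanism):

```python
def partition_neurons(scales, threshold):
    """Split neuron indices into survived (scale >= threshold) and
    dead (scale < threshold) groups."""
    survived = [i for i, s in enumerate(scales) if s >= threshold]
    dead = [i for i, s in enumerate(scales) if s < threshold]
    return survived, dead

def rebirth_connectivity(survived, reborn):
    """Connectivity mask over all neurons: reborn (previously dead)
    neurons may not connect to survived neurons in either direction,
    so the reborn neurons train independently."""
    surv_set, reborn_set = set(survived), set(reborn)
    mask = {}
    for i in survived + reborn:
        for j in survived + reborn:
            crosses = (i in reborn_set) != (j in reborn_set)
            mask[(i, j)] = not crosses  # allow only within-group links
    return mask
```

Within-group connections stay enabled; every survived-reborn pair is masked off.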
  • Publication number: 20230281824
    Abstract: Methods, systems, and apparatus for generating a panoptic segmentation label for a sensor data sample. In one aspect, a system comprises one or more computers configured to obtain a sensor data sample characterizing a scene in an environment. The one or more computers obtain a 3D bounding box annotation at each time point for a point cloud characterizing the scene at the time point. The one or more computers obtain, for each camera image and each time point, annotation data identifying object instances depicted in the camera image, and the one or more computers generate a panoptic segmentation label for the sensor data sample characterizing the scene in the environment.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 7, 2023
    Inventors: Jieru Mei, Hang Yan, Liang-Chieh Chen, Siyuan Qiao, Yukun Zhu, Alex Zihao Zhu, Xinchen Yan, Henrik Kretzschmar
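The final step above fuses per-pixel semantic and instance annotations into one panoptic label. One common encoding (class ID times a divisor plus instance ID, as used by standard panoptic benchmarks; whether this patent uses it is an assumption) can be sketched as:

```python
def panoptic_label(semantic, instance, label_divisor=1000):
    """Combine per-pixel semantic class IDs and instance IDs into a
    single panoptic ID per pixel: class * label_divisor + instance."""
    return [[s * label_divisor + i for s, i in zip(srow, irow)]
            for srow, irow in zip(semantic, instance)]
```

The divisor keeps class and instance recoverable by integer division and modulo.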
  • Publication number: 20230105994
Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Application
    Filed: December 9, 2022
    Publication date: April 6, 2023
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
  • Patent number: 11551093
Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
  • Publication number: 20210232927
Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 29, 2021
    Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
  • Publication number: 20210073644
    Abstract: A machine learning model compression system and related techniques are described herein. The machine learning model compression system can intelligently remove certain parameters of a machine learning model, without introducing a loss in performance of the machine learning model. Various parameters of a machine learning model can be removed during compression of the machine learning model, such as one or more channels of a single-branch or multi-branch neural network, one or more branches of a multi-branch neural network, certain weights of a channel of a single-branch or multi-branch neural network, and/or other parameters. In some cases, compression is performed only on certain selected layers or branches of the machine learning model. Candidate filters from the selected layers or branches can be removed from the machine learning model in a way that preserves local features of the machine learning model.
    Type: Application
    Filed: September 6, 2019
    Publication date: March 11, 2021
    Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
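The compression described above removes candidate filters from selected layers. As a minimal sketch, using a simple L1-magnitude criterion as a stand-in for the patent's feature-preserving filter selection (the criterion and `keep_ratio` parameter are illustrative assumptions):

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Rank the output channels of a conv weight tensor
    (out, in, kh, kw) by L1 norm and keep the strongest fraction.
    Returns the pruned weights and the kept channel indices."""
    norms = np.abs(weights).sum(axis=(1, 2, 3))  # one score per filter
    k = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(norms)[-k:])  # top-k filters, original order
    return weights[keep], keep
```

The same routine would be applied only to the layers or branches selected for compression, leaving the rest of the model untouched.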
  • Publication number: 20200234128
Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Application
    Filed: January 22, 2019
    Publication date: July 23, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang