Patents by Inventor Siyuan Qiao
Siyuan Qiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12282857
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
Type: Grant
Filed: September 27, 2024
Date of Patent: April 22, 2025
Assignee: Google LLC
Inventors: Siyuan Qiao, Chenxi Liu, Jiahui Yu, Yonghui Wu
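The relative-margin idea in the abstract can be illustrated with a small sketch. This is not the patented method, just one plausible reading in NumPy: a margin proportional to a positive pair's own similarity is subtracted from that pair's logit before the contrastive cross-entropy, which adjusts how much each pair contributes to the loss. All function and parameter names here are hypothetical.

```python
import numpy as np

def relative_margin_contrastive_loss(sim, labels, margin=0.2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss with a relative margin.

    sim: (N, N) similarity matrix between two embedding views.
    labels: (N,) index of the positive column for each row.
    The margin applied to each positive logit scales with the pair's
    own similarity, so each pair's contribution is adjusted individually.
    """
    logits = sim / temperature  # division makes a fresh array; sim is untouched
    rows = np.arange(sim.shape[0])
    # Relative margin: penalty on the positive logit grows with similarity.
    logits[rows, labels] -= margin * sim[rows, labels] / temperature
    # Standard numerically stable cross-entropy over the adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

With `margin=0.0` this reduces to a plain temperature-scaled contrastive loss; increasing the margin raises the loss most for pairs that are already similar.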
-
Publication number: 20250111235
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks through contrastive learning. In particular, the contrastive learning is modified to use a relative margin to adjust a training pair's contribution to optimization.
Type: Application
Filed: September 27, 2024
Publication date: April 3, 2025
Inventors: Siyuan Qiao, Chenxi Liu, Jiahui Yu, Yonghui Wu
-
Publication number: 20250029424
Abstract: A method includes obtaining dual-pixel image data that represents an object and includes a first sub-image and a second sub-image, and generating (i) a first feature map based on the first sub-image and (ii) a second feature map based on the second sub-image. The method also includes generating a correlation volume by determining, for each respective offset of a plurality of offsets between the first feature map and the second feature map, pixel-wise similarities between (i) the first feature map and (ii) the second feature map offset from the first feature map by the respective offset. The method further includes determining, by an anti-spoofing model and based on the correlation volume, a spoofing value indicative of a likelihood that the object represented by the dual-pixel image data is being spoofed.
Type: Application
Filed: April 1, 2022
Publication date: January 23, 2025
Inventors: Siyuan Qiao, Wen-Sheng Chu
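The correlation-volume construction described above can be sketched directly. A hedged NumPy illustration follows; the particular offset set, the use of horizontal shifts, and the channel-wise dot product as the similarity are assumptions made for the example, not the claimed implementation.

```python
import numpy as np

def correlation_volume(f1, f2, offsets=(-2, -1, 0, 1, 2)):
    """Build a correlation volume between two (H, W, C) feature maps.

    For each offset d, shift f2 by d pixels along the width axis and take
    the channel-wise dot product with f1 at every pixel, yielding an
    (H, W, len(offsets)) volume of pixel-wise similarities.
    """
    h, w, _ = f1.shape
    vol = np.zeros((h, w, len(offsets)), dtype=f1.dtype)
    for i, d in enumerate(offsets):
        shifted = np.roll(f2, shift=d, axis=1)  # circular shift of columns
        vol[:, :, i] = (f1 * shifted).sum(axis=-1)  # per-pixel similarity
    return vol
```

An anti-spoofing model would then consume this (H, W, len(offsets)) volume as input; at zero offset the slice is simply the per-pixel dot product of the two feature maps.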
-
Patent number: 12079725
Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
Type: Grant
Filed: January 24, 2020
Date of Patent: September 3, 2024
Assignee: Adobe Inc.
Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
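As an illustration of placing down-sampling layers under a compute budget, here is a hedged sketch: a toy FLOPs estimate for a plain CNN, plus a greedy search that shifts stride-2 layers earlier until the estimate fits the budget. Both the cost model and the search strategy are assumptions for the example, not the patented procedure.

```python
def total_flops(num_layers, ds_positions, in_hw=32, channels=16, k=3):
    """Estimate multiply-accumulates for a plain CNN where layers whose
    index is in ds_positions halve the spatial resolution first."""
    h = w = in_hw
    flops = 0
    for layer in range(num_layers):
        if layer in ds_positions:
            h, w = h // 2, w // 2
        flops += h * w * channels * channels * k * k  # k x k conv cost
    return flops

def place_downsampling(num_layers, num_ds, budget, **kw):
    """Greedy placement: start with down-sampling as late as possible
    (highest cost) and shift the earliest movable down-sampling layer one
    position earlier until the budget is met.  Returns the chosen layer
    indices, or None if even the cheapest layout exceeds the budget."""
    positions = list(range(num_layers - num_ds, num_layers))
    while total_flops(num_layers, set(positions), **kw) > budget:
        moved = False
        for i in range(len(positions)):
            lower = positions[i - 1] + 1 if i > 0 else 0
            if positions[i] > lower:  # room to move this one earlier
                positions[i] -= 1
                moved = True
                break
        if not moved:
            return None
    return positions
```

Moving down-sampling earlier shrinks every subsequent feature map, which is why the loop monotonically reduces the FLOPs estimate.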
-
Patent number: 11790234
Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
Type: Grant
Filed: December 9, 2022
Date of Patent: October 17, 2023
Assignee: Adobe Inc.
Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
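The dead/survived split and the connectivity restriction on reborn neurons can be sketched in a few lines. This is a minimal NumPy illustration, assuming per-neuron activation scales are given and that "preventing connections" is realized as a binary weight mask; both are assumptions for the example, not the patented mechanism.

```python
import numpy as np

def split_dead_neurons(activation_scales, threshold=1e-2):
    """Partition neuron indices by activation scale: below the threshold
    a neuron is 'dead', otherwise it 'survived'."""
    scales = np.asarray(activation_scales)
    dead = np.flatnonzero(scales < threshold)
    survived = np.flatnonzero(scales >= threshold)
    return dead, survived

def rebirth_mask(dead, survived, num_neurons):
    """(num_neurons, num_neurons) connectivity mask for a layer's weights:
    reborn (formerly dead) neurons are cut off from survived neurons in
    both directions while they retrain; all other connections stay on."""
    mask = np.ones((num_neurons, num_neurons))
    for d in dead:
        for s in survived:
            mask[d, s] = 0.0  # reborn neuron cannot read a survived one
            mask[s, d] = 0.0  # survived neuron cannot read a reborn one
    return mask
```

During training the mask would be multiplied element-wise into the layer's weight matrix so gradients cannot re-create the blocked connections.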
-
Publication number: 20230281824
Abstract: Methods, systems, and apparatus for generating a panoptic segmentation label for a sensor data sample. In one aspect, a system comprises one or more computers configured to obtain a sensor data sample characterizing a scene in an environment. The one or more computers obtain a 3D bounding box annotation at each time point for a point cloud characterizing the scene at the time point. The one or more computers obtain, for each camera image and each time point, annotation data identifying object instances depicted in the camera image, and the one or more computers generate a panoptic segmentation label for the sensor data sample characterizing the scene in the environment.
Type: Application
Filed: March 7, 2023
Publication date: September 7, 2023
Inventors: Jieru Mei, Hang Yan, Liang-Chieh Chen, Siyuan Qiao, Yukun Zhu, Alex Zihao Zhu, Xinchen Yan, Henrik Kretzschmar
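The abstract does not specify the label format, but panoptic labels are commonly encoded by fusing a per-pixel semantic class ID and an instance ID into a single integer map. A hedged sketch of that common encoding follows; the divisor and the encode/decode pair are illustrative conventions, not taken from the application.

```python
import numpy as np

def panoptic_label(semantic, instance, divisor=1000):
    """Fuse per-pixel semantic class IDs and instance IDs into one
    panoptic label map using the common class * divisor + instance encoding."""
    return semantic.astype(np.int64) * divisor + instance.astype(np.int64)

def decode_panoptic(panoptic, divisor=1000):
    """Recover (semantic, instance) maps from a fused panoptic label map."""
    return panoptic // divisor, panoptic % divisor
```

The round trip is lossless as long as every instance ID stays below the divisor.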
-
Publication number: 20230105994
Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
Type: Application
Filed: December 9, 2022
Publication date: April 6, 2023
Applicant: Adobe Inc.
Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
-
Patent number: 11551093
Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
Type: Grant
Filed: January 22, 2019
Date of Patent: January 10, 2023
Assignee: Adobe Inc.
Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
-
Publication number: 20210232927
Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
Type: Application
Filed: January 24, 2020
Publication date: July 29, 2021
Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
-
Publication number: 20210073644
Abstract: A machine learning model compression system and related techniques are described herein. The machine learning model compression system can intelligently remove certain parameters of a machine learning model, without introducing a loss in performance of the machine learning model. Various parameters of a machine learning model can be removed during compression of the machine learning model, such as one or more channels of a single-branch or multi-branch neural network, one or more branches of a multi-branch neural network, certain weights of a channel of a single-branch or multi-branch neural network, and/or other parameters. In some cases, compression is performed only on certain selected layers or branches of the machine learning model. Candidate filters from the selected layers or branches can be removed from the machine learning model in a way that preserves local features of the machine learning model.
Type: Application
Filed: September 6, 2019
Publication date: March 11, 2021
Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
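Filter-level pruning of the kind described (removing candidate filters only from selected layers) is often driven by a per-filter importance score. Below is a hedged sketch using L1-norm ranking, a standard heuristic that is only an assumption here, not the system's actual selection criterion.

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.75):
    """Rank the filters of one conv layer by L1 norm and keep the strongest.

    weights: (out_channels, in_channels, kh, kw) array for a selected layer.
    Returns the sorted indices of the filters to keep, so downstream layers
    can be re-wired to the surviving channels.
    """
    # L1 norm of each output filter as its importance score.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.argsort(norms)[::-1][:num_keep]  # strongest filters first
    return np.sort(keep)
```

Applying this only to chosen layers or branches mirrors the selective compression the abstract describes; unselected layers keep all their filters.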
-
Publication number: 20200234128
Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
Type: Application
Filed: January 22, 2019
Publication date: July 23, 2020
Applicant: Adobe Inc.
Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang