Patents by Inventor Shikun Liu
Shikun Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240265690
Abstract: A vision-language model learns skills and domain knowledge via distinct and separate task-specific neural networks, referred to as experts. Each expert is independently optimized for a specific task, facilitating the use of domain-specific data and architectures that are not feasible with a single large neural network trained for multiple tasks. The vision-language model is implemented as an ensemble of pre-trained experts and is more efficiently trained compared with a single large neural network. During training, the vision-language model integrates specialized skills and domain knowledge, rather than trying to simultaneously learn multiple tasks, resulting in effective multi-modal learning.
Type: Application
Filed: December 19, 2023
Publication date: August 8, 2024
Inventors: Animashree Anandkumar, Linxi Fan, Zhiding Yu, Chaowei Xiao, Shikun Liu
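The ensemble-of-experts design described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: the expert functions, the softmax gating rule, and all names are assumptions introduced for the example. The key point it shows is that the experts stay frozen while only a small fusion step is trainable.

```python
import numpy as np

def caption_expert(image_feat):
    # Stand-in for a frozen, independently pre-trained captioning expert.
    return np.tanh(image_feat @ np.full((4, 8), 0.1))

def detection_expert(image_feat):
    # Stand-in for a frozen, independently pre-trained detection expert.
    return np.tanh(image_feat @ np.full((4, 8), -0.05))

def fuse(expert_outputs, weights):
    # Only this small set of gating weights would be optimized during
    # training; the experts themselves are never updated.
    stacked = np.stack(expert_outputs)            # (n_experts, batch, dim)
    w = np.exp(weights) / np.exp(weights).sum()   # softmax gating
    return (w[:, None, None] * stacked).sum(axis=0)

image_feat = np.ones((2, 4))                      # batch of 2 dummy features
fused = fuse([caption_expert(image_feat), detection_expert(image_feat)],
             weights=np.array([0.0, 0.0]))
print(fused.shape)                                # one fused embedding per input
```

Because each expert is trained separately on its own domain data, only the tiny gating vector has to be learned jointly, which is what makes the ensemble cheaper to train than one monolithic multi-task network.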
-
Patent number: 11983632
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
Type: Grant
Filed: April 28, 2023
Date of Patent: May 14, 2024
Assignee: Adobe Inc.
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
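Progressive channel pruning of the kind the abstract above describes can be sketched in a few lines. This is a hedged illustration: it assumes the common heuristic of ranking channels by L1 weight magnitude, which is one plausible criterion, not necessarily the one claimed in the patent.

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """Keep the fraction of output channels with the largest L1 norm."""
    norms = np.abs(weight).sum(axis=1)             # one importance score per channel
    n_keep = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))                       # a layer with 8 output channels

# Progressively shrink the layer across "training iterations", rather than
# pruning everything at once: 8 -> 6 -> 4 -> 3 channels.
for step, ratio in enumerate([0.75, 0.75, 0.75], start=1):
    w, kept = prune_channels(w, ratio)
    print(f"step {step}: {w.shape[0]} channels remain")
```

Pruning a small fraction at each iteration, with continued training in between, is what lets the network recover accuracy between pruning steps instead of losing it in one large cut.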
-
Publication number: 20240005598
Abstract: A method comprising obtaining image data captured by a camera device. The image data represents an observation of at least part of an environment. A camera pose estimate associated with the observation is obtained. Rendered image data is generated based on the camera pose estimate and a model of the environment for generating a three-dimensional representation of the at least part of the environment. The rendered image data is representative of at least one rendered image portion corresponding to the at least part of the environment. The method includes evaluating a loss function based on the image data and the rendered image data, thereby generating a loss. At least the camera pose estimate and the model are jointly optimised based on the loss, thereby generating an update to the camera pose estimate, and an update to the model.
Type: Application
Filed: September 18, 2023
Publication date: January 4, 2024
Inventors: Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew Davison
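The core loop in the abstract above, rendering from a pose estimate and a scene model, comparing against the captured image, and updating both from the same loss, can be shown with a toy stand-in. The "renderer" here is deliberately trivial (a pixel at coordinate x renders as model_gain * x + pose_shift), and all variable names are illustrative assumptions, not the patent's formulation.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
true_gain, true_shift = 2.0, 0.3
observed = true_gain * x + true_shift            # image data from the camera

model_gain, pose_shift = 1.0, 0.0                # initial model and pose estimates
lr = 0.1
for _ in range(2000):
    rendered = model_gain * x + pose_shift       # rendered image data
    residual = rendered - observed
    # Gradients of the L2 photometric loss with respect to BOTH variables,
    # so the model and the pose estimate are optimised jointly:
    model_gain -= lr * 2 * (residual * x).mean()
    pose_shift -= lr * 2 * residual.mean()

print(round(model_gain, 3), round(pose_shift, 3))  # ~2.0 and ~0.3
```

The point of the sketch is the joint update: a single rendering loss drives corrections to the camera pose estimate and to the environment model in the same step, rather than alternating between separately tuned tracking and mapping stages.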
-
Publication number: 20240005597
Abstract: A method comprising obtaining image data captured by a camera device. The image data represents an observation of an environment. A two-dimensional representation of at least part of the environment is obtained using a model of the environment. The method includes evaluating a difference between the two-dimensional representation and at least part of the observation. The at least part of the observation is of the at least part of the environment represented by the two-dimensional representation. Based on the difference, a portion of the image data is selected for optimising the model. The portion of the image data represents a portion of the observation of the environment. The method comprises optimising the model using the portion of the image data.
Type: Application
Filed: September 18, 2023
Publication date: January 4, 2024
Inventors: Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew Davison
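The selection step in the abstract above can be sketched with a simple rule: split the observation into regions, score each region by how much the model's current rendering disagrees with it, and keep only the highest-error regions for optimisation. The region split and the mean-squared-error score are assumptions made for this example.

```python
import numpy as np

def select_regions(observation, rendering, n_regions, top_k):
    """Return indices of the top_k regions where the model's rendering
    differs most from the observation (largest mean squared error)."""
    obs_regions = np.array_split(observation, n_regions)
    ren_regions = np.array_split(rendering, n_regions)
    errors = [np.mean((o - r) ** 2) for o, r in zip(obs_regions, ren_regions)]
    return np.argsort(errors)[::-1][:top_k]

obs = np.zeros(100)
obs[40:60] = 1.0             # the observation disagrees only in the middle
ren = np.zeros(100)          # the model's current (blank) rendering
picked = select_regions(obs, ren, n_regions=10, top_k=2)
print(sorted(picked))        # only the mismatched middle regions are selected
```

Spending optimisation effort only on the selected portion, instead of on every pixel of every frame, is what makes this kind of difference-guided selection cheaper than uniform updates.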
-
Publication number: 20230259778
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
Type: Application
Filed: April 28, 2023
Publication date: August 17, 2023
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
-
Patent number: 11710042
Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
Type: Grant
Filed: February 5, 2020
Date of Patent: July 25, 2023
Assignee: Adobe Inc.
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
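One way a learnable scaling factor can mediate between two sampling rates, in the spirit of the abstract above, is a "soft" sampling layer: a scalar blends the outputs of two candidate rates, is trained jointly with the network weights, and its converged value then decides which rate the final architecture uses. The sigmoid blending rule below is an illustrative guess, not the patent's exact mechanism.

```python
import numpy as np

def downsample(x, rate):
    return x[::rate]

def soft_sample(x, alpha):
    # sigmoid(alpha) weights rate-1 sampling against rate-2 sampling;
    # the coarser branch is upsampled back to a common length so the
    # two candidates can be mixed continuously.
    g = 1.0 / (1.0 + np.exp(-alpha))
    fine = downsample(x, 1)
    coarse = np.repeat(downsample(x, 2), 2)[: len(fine)]
    return g * fine + (1.0 - g) * coarse

x = np.arange(8, dtype=float)
out = soft_sample(x, alpha=10.0)   # large alpha: layer behaves like rate 1
print(np.allclose(out, x, atol=1e-3))
```

Because alpha is an ordinary differentiable parameter, it can receive gradients from the task loss alongside the network weights, so the sampling rate is effectively learned from data rather than fixed by hand.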
-
Patent number: 11663481
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
Type: Grant
Filed: February 24, 2020
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
-
Publication number: 20210264278
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
Type: Application
Filed: February 24, 2020
Publication date: August 26, 2021
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
-
Publication number: 20210241111
Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
Type: Application
Filed: February 5, 2020
Publication date: August 5, 2021
Inventors: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi