Patents by Inventor Chuyuan FU

Chuyuan FU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11845190
    Abstract: Implementations are provided for increasing realism of robot simulation by injecting noise into various aspects of the robot simulation. In various implementations, a three-dimensional (3D) environment may be simulated and may include a simulated robot controlled by an external robot controller. Joint command(s) issued by the robot controller and/or simulated sensor data passed to the robot controller may be intercepted. Noise may be injected into the joint command(s) to generate noisy commands. Additionally or alternatively, noise may be injected into the simulated sensor data to generate noisy sensor data. Joint(s) of the simulated robot may be operated in the simulated 3D environment based on the one or more noisy commands. Additionally or alternatively, the noisy sensor data may be provided to the robot controller to cause the robot controller to generate joint commands to control the simulated robot in the simulated 3D environment.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon, Chuyuan Fu, Wenlong Lu
  • Patent number: 11833661
Abstract: Utilization of past dynamics sample(s) that reflect past contact physics information in training and/or utilizing a neural network model. The neural network model represents a learned value function (e.g., a Q-value function) that, when trained, can be used in selecting a sequence of robotic actions to implement in robotic manipulation (e.g., pushing) of an object by a robot. In various implementations, a past dynamics sample for an episode of robotic manipulation can include at least two past images from the episode, as well as one or more past force sensor readings that temporally correspond to the past images from the episode.
    Type: Grant
    Filed: October 31, 2021
    Date of Patent: December 5, 2023
    Assignee: GOOGLE LLC
    Inventors: Zhuo Xu, Wenhao Yu, Alexander Herzog, Wenlong Lu, Chuyuan Fu, Yunfei Bai, C. Karen Liu, Daniel Ho
  • Publication number: 20220134546
Abstract: Utilization of past dynamics sample(s) that reflect past contact physics information in training and/or utilizing a neural network model. The neural network model represents a learned value function (e.g., a Q-value function) that, when trained, can be used in selecting a sequence of robotic actions to implement in robotic manipulation (e.g., pushing) of an object by a robot. In various implementations, a past dynamics sample for an episode of robotic manipulation can include at least two past images from the episode, as well as one or more past force sensor readings that temporally correspond to the past images from the episode.
    Type: Application
    Filed: October 31, 2021
    Publication date: May 5, 2022
    Inventors: Zhuo Xu, Wenhao Yu, Alexander Herzog, Wenlong Lu, Chuyuan Fu, Yunfei Bai, C. Karen Liu, Daniel Ho
  • Publication number: 20200302672
    Abstract: A method of rendering an animated object includes: (1) determining momentums of a plurality of particles of the object as sums of polynomials; (2) transferring the momentums of the particles of the object to a grid including a plurality of grid nodes; (3) updating momentums of the grid nodes based on the transferred momentums of the particles; (4) transferring the updated momentums of the grid nodes to the particles of the object; (5) updating positions of the particles based on the updated momentums of the grid nodes; and (6) outputting a visualization of the object based on the updated positions of the particles of the object.
    Type: Application
    Filed: October 8, 2018
    Publication date: September 24, 2020
    Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Joseph M. TERAN, Chenfanfu JIANG, Theodore F. GAST, Chuyuan FU, Qi GUO
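
The noise-injection idea described in the abstract of patent 11845190 can be sketched roughly as follows. This is a minimal illustration only, not the patented implementation: the `inject_noise` helper, the zero-mean Gaussian noise model, and the fixed seed are assumptions for the sketch; the abstract does not specify the noise distribution or where in the simulator loop the interception occurs.

```python
import random

def inject_noise(values, stddev=0.01, rng=None):
    """Add zero-mean Gaussian noise to each scalar in a command or sensor vector."""
    rng = rng or random.Random(0)  # seeded here only to keep the sketch reproducible
    return [v + rng.gauss(0.0, stddev) for v in values]

# Intercept a joint command, perturb it, and forward the noisy command to the
# simulated robot (the simulator loop itself is not shown here).
joint_command = [0.5, -0.2, 1.1]
noisy_command = inject_noise(joint_command, stddev=0.05)

# The same transform can be applied to simulated sensor readings before they
# are returned to the external robot controller.
sensor_reading = [9.81, 0.0, 0.0]
noisy_reading = inject_noise(sensor_reading, stddev=0.02)
```

Per the abstract, either path alone (noisy commands or noisy sensor data) or both together can be used to make the simulation less idealized.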
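
The action-selection step described in the abstracts of patent 11833661 and publication 20220134546 (scoring candidate actions with a learned value function conditioned on past images and force readings) might look like the following sketch. The `select_action` helper and the toy Q-function are hypothetical stand-ins; the real system uses a trained neural network, which is not reproduced here.

```python
def select_action(q_value, candidate_actions, past_images, past_forces):
    """Pick the candidate action that maximizes a learned Q-value conditioned on
    a past dynamics sample (past images plus temporally matched force readings)."""
    return max(candidate_actions, key=lambda a: q_value(past_images, past_forces, a))

# Toy stand-in for the learned Q-function: it prefers push directions aligned
# with the most recent contact force (a hypothetical heuristic, not the patent's model).
def toy_q(past_images, past_forces, action):
    fx, fy = past_forces[-1]
    ax, ay = action
    return fx * ax + fy * ay

actions = [(1, 0), (0, 1), (-1, 0)]
best = select_action(
    toy_q,
    actions,
    past_images=[None, None],                 # placeholder for two past frames
    past_forces=[(0.0, 0.0), (2.0, 0.5)],     # two matching force readings
)
# → (1, 0), the push most aligned with the last force reading
```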
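
Steps (2) through (5) of the rendering method in publication 20200302672 (transferring particle momentum to a grid, updating grid momenta, transferring back, and advecting particles) can be sketched as a bare 1D particle-in-cell step. This sketch uses simple linear weights and plain velocities rather than the polynomial momentum representation of step (1), so it is an assumption-laden simplification, not the published method.

```python
def p2g(particles, grid_size, dx):
    """Transfer particle mass and momentum to the two nearest grid nodes
    using linear interpolation weights."""
    mass = [0.0] * grid_size
    momentum = [0.0] * grid_size
    for x, v, m in particles:  # each particle: position, velocity, mass
        i = int(x / dx)
        w = 1.0 - (x / dx - i)  # weight toward the left node
        for node, weight in ((i, w), (i + 1, 1.0 - w)):
            if 0 <= node < grid_size:
                mass[node] += weight * m
                momentum[node] += weight * m * v
    return mass, momentum

def g2p_and_advect(particles, mass, momentum, dx, dt, gravity=-9.8):
    """Update grid momenta (here with gravity only), transfer grid velocities
    back to particles, and advect particle positions."""
    vel = [(p + dt * gravity * m) / m if m > 0 else 0.0
           for p, m in zip(momentum, mass)]
    updated = []
    for x, _, m in particles:
        i = int(x / dx)
        w = 1.0 - (x / dx - i)
        v = w * vel[i] + (1.0 - w) * vel[min(i + 1, len(vel) - 1)]
        updated.append((x + dt * v, v, m))
    return updated
```

Step (6) of the abstract, outputting a visualization from the updated particle positions, would consume the list returned by `g2p_and_advect` each frame.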