Patents by Inventor Matthew Bennice
Matthew Bennice has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20250058460
  Abstract: Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
  Type: Application
  Filed: November 4, 2024
  Publication date: February 20, 2025
  Inventors: Matthew Bennice, Paul Bechard
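The coupling described above — one simulated environment routing per-avatar sensor data out to external controllers and routing joint commands back in — can be sketched as follows. The class, the proportional controller, and the scalar object "pose" are illustrative assumptions, not the patented implementation:

```python
class RobotAvatar:
    """One simulated robot avatar inside the shared virtual environment."""

    def __init__(self, num_joints=3):
        self.joint_positions = [0.0] * num_joints

    def sense(self, object_pose):
        # Sensor data from this avatar's perspective: per-joint offset to the object.
        return [object_pose - p for p in self.joint_positions]

    def actuate(self, joint_commands):
        # Apply the external controller's joint commands to this avatar's joints.
        self.joint_positions = [p + c for p, c in
                                zip(self.joint_positions, joint_commands)]


def proportional_controller(sensor_data, gain=0.5):
    # Stand-in for an external robot controller: step each joint toward the object.
    return [gain * s for s in sensor_data]


def simulate_step(avatars, controllers, object_pose):
    # One simulation tick: sensor data flows out to each controller,
    # and the returned joint commands flow back to the matching avatar.
    for avatar, controller in zip(avatars, controllers):
        avatar.actuate(controller(avatar.sense(object_pose)))


# Two independently controlled avatars sharing one virtual environment.
avatars = [RobotAvatar(), RobotAvatar()]
controllers = [proportional_controller, proportional_controller]
for _ in range(10):
    simulate_step(avatars, controllers, object_pose=1.0)
```

Because both avatars converge on the same interactive object, the episode yields the kind of multi-robot interaction data the abstract targets for training examples.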
- Patent number: 12226920
  Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
  Type: Grant
  Filed: August 11, 2023
  Date of Patent: February 18, 2025
  Assignee: GOOGLE LLC
  Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
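The intervention loop in that abstract can be illustrated with a toy sketch in which "predicting failure" is reduced to a confidence threshold; the threshold, the toy policy, and the safe default action are all assumptions for illustration:

```python
def run_episode(policy, human, states, threshold=0.5):
    """Run one task, prompting the human whenever failure is predicted."""
    corrections = []  # (state, human_action) pairs kept for refining the policy
    actions = []
    for state in states:
        action, confidence = policy(state)
        if confidence < threshold:
            # Policy predicts it will fail here: prompt the human to intervene.
            action = human(state)
            corrections.append((state, action))
        actions.append(action)
    return actions, corrections


# Toy policy: confident (0.9) on non-negative states, unsure (0.1) otherwise.
policy = lambda s: (s * 2, 0.9 if s >= 0 else 0.1)
human = lambda s: 0  # the human demonstrates a safe default action
actions, corrections = run_episode(policy, human, [1, -1, 2])
```

The collected `corrections` play the role of the human-intervention data used to refine the policy after the episode.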
- Patent number: 12214507
  Abstract: Implementations are provided for increasing realism of robot simulation by injecting noise into various aspects of the robot simulation. In various implementations, a three-dimensional (3D) environment may be simulated and may include a simulated robot controlled by an external robot controller. Joint command(s) issued by the robot controller and/or simulated sensor data passed to the robot controller may be intercepted. Noise may be injected into the joint command(s) to generate noisy commands. Additionally or alternatively, noise may be injected into the simulated sensor data to generate noisy sensor data. Joint(s) of the simulated robot may be operated in the simulated 3D environment based on the one or more noisy commands. Additionally or alternatively, the noisy sensor data may be provided to the robot controller to cause the robot controller to generate joint commands to control the simulated robot in the simulated 3D environment.
  Type: Grant
  Filed: October 19, 2023
  Date of Patent: February 4, 2025
  Assignee: GOOGLE LLC
  Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon, Chuyuan Fu, Wenlong Lu
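A minimal sketch of the interception-and-injection idea, assuming zero-mean Gaussian noise and per-channel standard deviations chosen by us (the abstract does not specify a noise model):

```python
import random


def inject_noise(values, stddev, rng):
    # Perturb each intercepted value with zero-mean Gaussian noise.
    return [v + rng.gauss(0.0, stddev) for v in values]


rng = random.Random(0)  # seeded for reproducibility of the sketch

# Intercepted traffic between the external controller and the simulator.
joint_commands = [0.1, -0.2, 0.3]
sensor_data = [1.0, 2.0]

# Noisy versions are what the simulator actuates / the controller receives.
noisy_commands = inject_noise(joint_commands, stddev=0.01, rng=rng)
noisy_sensors = inject_noise(sensor_data, stddev=0.05, rng=rng)
```

Operating the simulated robot on `noisy_commands`, and feeding the controller `noisy_sensors`, approximates the imperfect actuation and sensing of real hardware.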
- Patent number: 12202140
  Abstract: Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
  Type: Grant
  Filed: October 12, 2023
  Date of Patent: January 21, 2025
  Assignee: GOOGLE LLC
  Inventors: Matthew Bennice, Paul Bechard
- Patent number: 12168296
  Abstract: Implementations are provided for generating a plurality of simulated training instances based on a recorded user-directed robot control episode, and training one or more robot control policies based on such training instances. In various implementations, a three-dimensional environment may be simulated and may include a robot controlled by an external robot controller. A user may operate the robot controller to control the robot in the simulated 3D environment to perform one or more robotic tasks. The user-directed robot control episode, including responses of the external robot controller and the simulated robot to user commands and/or the virtual environment, can be captured. Features of the captured user-directed robot control episode can be altered in order to generate a plurality of training instances. One or more robot control policies can then be trained based on the plurality of training instances.
  Type: Grant
  Filed: September 1, 2021
  Date of Patent: December 17, 2024
  Assignee: GOOGLE LLC
  Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon
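The "alter features of one captured episode to mint many training instances" step resembles domain randomization. A sketch, assuming the altered feature is a scalar object pose and the jitter range is our invention:

```python
import random


def make_training_instances(episode, num_instances, rng):
    """Derive many training instances from one recorded control episode."""
    instances = []
    for _ in range(num_instances):
        # Alter a feature of the captured episode, e.g. jitter the object pose.
        altered = {
            "object_pose": episode["object_pose"] + rng.uniform(-0.05, 0.05),
            "actions": list(episode["actions"]),  # user commands kept as recorded
        }
        instances.append(altered)
    return instances


# One captured user-directed episode becomes four training instances.
episode = {"object_pose": 0.5, "actions": [0.1, 0.2]}
instances = make_training_instances(episode, num_instances=4,
                                    rng=random.Random(1))
```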
- Publication number: 20240253215
  Abstract: Implementations are provided for training a robot control policy for controlling a robot. During a first training phase, the robot control policy is trained using a first set of training data that includes (i) training data generated based on simulated operation of the robot in a first fidelity simulation, and (ii) training data generated based on simulated operation of the robot in a second fidelity simulation, wherein the second fidelity is greater than the first fidelity. When one or more criteria for commencing a second training phase are satisfied, the robot control policy is further trained using a second set of training data that also includes training data generated based on simulated operation of the robot in the first and second fidelity simulations, with a lower ratio between them than in the first set of training data.
  Type: Application
  Filed: January 31, 2023
  Publication date: August 1, 2024
  Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon, Jiayi Lin
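The two-phase mixing schedule can be made concrete with a sketch; the specific fractions (80% low fidelity in phase one, 30% in phase two) are illustrative assumptions, not values from the publication:

```python
def mix(low, high, low_fraction, size):
    # Build a training set of `size` examples with the given low:high split.
    n_low = round(size * low_fraction)
    return low[:n_low] + high[:size - n_low]


low = ["lo"] * 100   # examples from the cheap, low-fidelity simulation
high = ["hi"] * 100  # examples from the expensive, high-fidelity simulation

# Phase one leans on low fidelity; phase two lowers that ratio.
phase1 = mix(low, high, low_fraction=0.8, size=10)
phase2 = mix(low, high, low_fraction=0.3, size=10)
```

The switch from `phase1` to `phase2` would be triggered by the "criteria for commencing a second training phase" mentioned in the abstract, e.g. a plateau in the policy's simulated success rate.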
- Publication number: 20240190004
  Abstract: Active utilization of a robotic simulator in control of one or more real world robots. A simulated environment of the robotic simulator can be configured to reflect a real world environment in which a real robot is currently disposed, or will be disposed. The robotic simulator can then be used to determine a sequence of robotic actions for use by the real world robot(s) in performing at least part of a robotic task. The sequence of robotic actions can be applied, to a simulated robot of the robotic simulator, to generate a sequence of anticipated simulated state data instances. The real robot can be controlled to implement the sequence of robotic actions. The implementation of one or more of the robotic actions can be contingent on a real state data instance having at least a threshold degree of similarity to a corresponding one of the anticipated simulated state data instances.
  Type: Application
  Filed: February 20, 2024
  Publication date: June 13, 2024
  Inventors: Yunfei Bai, Tigran Gasparian, Brent Austin, Andreas Christiansen, Matthew Bennice, Paul Bechard
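The contingent-execution idea — keep executing the simulator's plan only while the real state stays close to the anticipated simulated state — can be sketched with scalar states and a simple absolute-difference similarity check (both simplifications are ours):

```python
def execute_with_checks(actions, anticipated_states, observe, threshold):
    """Execute planned actions while reality tracks the simulation."""
    executed = []
    for action, anticipated in zip(actions, anticipated_states):
        real_state = observe(action)
        if abs(real_state - anticipated) > threshold:
            break  # real world diverged from the simulation: stop (or replan)
        executed.append(action)
    return executed


# Toy run: observed states match the plan until drift appears on step three.
plan = ["reach", "grasp", "lift"]
anticipated = [1.0, 2.0, 3.0]
observations = iter([1.02, 1.97, 3.6])
executed = execute_with_checks(plan, anticipated,
                               lambda a: next(observations), threshold=0.1)
```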
- Patent number: 11938638
  Abstract: Active utilization of a robotic simulator in control of one or more real world robots. A simulated environment of the robotic simulator can be configured to reflect a real world environment in which a real robot is currently disposed, or will be disposed. The robotic simulator can then be used to determine a sequence of robotic actions for use by the real world robot(s) in performing at least part of a robotic task. The sequence of robotic actions can be applied, to a simulated robot of the robotic simulator, to generate a sequence of anticipated simulated state data instances. The real robot can be controlled to implement the sequence of robotic actions. The implementation of one or more of the robotic actions can be contingent on a real state data instance having at least a threshold degree of similarity to a corresponding one of the anticipated simulated state data instances.
  Type: Grant
  Filed: June 3, 2021
  Date of Patent: March 26, 2024
  Assignee: GOOGLE LLC
  Inventors: Yunfei Bai, Tigran Gasparian, Brent Austin, Andreas Christiansen, Matthew Bennice, Paul Bechard
- Publication number: 20240058954
  Abstract: Implementations are provided for training robot control policies using augmented reality (AR) sensor data comprising physical sensor data injected with virtual objects. In various implementations, physical pose(s) of physical sensor(s) of a physical robot operating in a physical environment may be determined. Virtual pose(s) of virtual object(s) in the physical environment may also be determined. Based on the physical poses and virtual poses, the virtual object(s) may be injected into sensor data generated by the one or more physical sensors to generate AR sensor data. The physical robot may be operated in the physical environment based on the AR sensor data and a robot control policy. The robot control policy may be trained based on virtual interactions between the physical robot and the one or more virtual objects.
  Type: Application
  Filed: August 18, 2022
  Publication date: February 22, 2024
  Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon, Jiayi Lin
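One way to picture the injection step is compositing a virtual object into a physical depth frame wherever the object would be the nearer surface. A one-dimensional "depth image" keeps the sketch tiny; the representation is our assumption:

```python
def inject_virtual_object(physical_depths, object_pixels, object_depth):
    """Composite a virtual object into a physical depth frame (AR sensor data)."""
    ar = list(physical_depths)
    for px in object_pixels:
        # The virtual object occludes the real scene wherever it is nearer.
        ar[px] = min(ar[px], object_depth)
    return ar


# Real sensor frame: a flat background 5 m away; virtual object at 2 m
# covering pixels 1-2, as determined from the physical and virtual poses.
physical = [5.0, 5.0, 5.0, 5.0]
ar_frame = inject_virtual_object(physical, object_pixels=[1, 2],
                                 object_depth=2.0)
```

The policy then consumes `ar_frame` instead of `physical`, so the robot can "interact" with objects that were never physically present.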
- Publication number: 20240033904
  Abstract: Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
  Type: Application
  Filed: October 12, 2023
  Publication date: February 1, 2024
  Inventors: Matthew Bennice, Paul Bechard
- Patent number: 11845190
  Abstract: Implementations are provided for increasing realism of robot simulation by injecting noise into various aspects of the robot simulation. In various implementations, a three-dimensional (3D) environment may be simulated and may include a simulated robot controlled by an external robot controller. Joint command(s) issued by the robot controller and/or simulated sensor data passed to the robot controller may be intercepted. Noise may be injected into the joint command(s) to generate noisy commands. Additionally or alternatively, noise may be injected into the simulated sensor data to generate noisy sensor data. Joint(s) of the simulated robot may be operated in the simulated 3D environment based on the one or more noisy commands. Additionally or alternatively, the noisy sensor data may be provided to the robot controller to cause the robot controller to generate joint commands to control the simulated robot in the simulated 3D environment.
  Type: Grant
  Filed: June 2, 2021
  Date of Patent: December 19, 2023
  Assignee: GOOGLE LLC
  Inventors: Matthew Bennice, Paul Bechard, Joséphine Simon, Chuyuan Fu, Wenlong Lu
- Publication number: 20230381970
  Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
  Type: Application
  Filed: August 11, 2023
  Publication date: November 30, 2023
  Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
- Patent number: 11813748
  Abstract: Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
  Type: Grant
  Filed: October 13, 2020
  Date of Patent: November 14, 2023
  Assignee: GOOGLE LLC
  Inventors: Matthew Bennice, Paul Bechard
- Patent number: 11772272
  Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
  Type: Grant
  Filed: March 16, 2021
  Date of Patent: October 3, 2023
  Assignee: GOOGLE LLC
  Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
- Patent number: 11654550
  Abstract: Implementations are described herein for single iteration, multiple permutation robot simulation. In various implementations, one or more poses of a simulated object may be determined across one or more virtual environments. A plurality of simulated robots may be operated across the one or more virtual environments. For each simulated robot of the plurality of simulated robots, a camera transformation may be determined based on respective poses of the simulated robot and simulated object in the particular virtual environment. The camera transformation may be applied to the simulated object in the particular virtual environment of the one or more virtual environments in which the simulated robot operates. Based on the camera transformation, simulated vision data may be rendered that depicts the simulated object from a perspective of the simulated robot. Each of the plurality of simulated robots may be operated based on corresponding simulated vision data.
  Type: Grant
  Filed: November 13, 2020
  Date of Patent: May 23, 2023
  Assignee: X DEVELOPMENT LLC
  Inventors: Paul Bechard, Matthew Bennice
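The per-robot camera transformation can be sketched in two dimensions with translation-only transforms: one shared object pose, many robots, each receiving the object expressed in its own viewing frame. Real camera transforms would also include rotation; this reduction is ours:

```python
def camera_transform(robot_pose, object_pose):
    # Express the object's position relative to this robot's viewpoint
    # (translation only; a full transform would also rotate).
    return (object_pose[0] - robot_pose[0], object_pose[1] - robot_pose[1])


# One object pose, simulated once; three robots, each with its own view.
object_pose = (4.0, 2.0)
robot_poses = [(0.0, 0.0), (4.0, 0.0), (2.0, 2.0)]
views = [camera_transform(r, object_pose) for r in robot_poses]
```

This is the "single iteration, multiple permutation" economy: the object is posed once, and only the cheap per-robot transform and render are repeated.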
- Publication number: 20220297303
  Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
  Type: Application
  Filed: March 16, 2021
  Publication date: September 22, 2022
  Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
- Publication number: 20220288782
  Abstract: Implementations are provided for controlling a plurality of simulated robots in a virtual environment using a single robot controller. In various implementations, a three-dimensional (3D) environment may be simulated that includes a plurality of simulated robots controlled by a single robot controller. Multiple instances of an interactive object may be rendered in the simulated 3D environment. Each instance of the interactive object may have a simulated physical characteristic, such as a pose, that is unique among the multiple instances of the interactive object. A common set of joint commands may be received from the single robot controller. The common set of joint commands may be issued to each of the plurality of simulated robots. For each simulated robot of the plurality of simulated robots, the common set of joint commands may cause actuation of one or more joints of the simulated robot to interact with a respective instance of the interactive object in the simulated 3D environment.
  Type: Application
  Filed: March 10, 2021
  Publication date: September 15, 2022
  Inventors: Matthew Bennice, Paul Bechard
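This is the mirror image of the multi-controller arrangement above: one controller, many robots. A sketch of the broadcast, with dict-based robots and unique initial joint poses standing in for the unique object instances (both representations are our assumptions):

```python
def broadcast(common_commands, robots):
    # Issue the single controller's common joint-command set to every robot.
    for robot in robots:
        robot["joints"] = [j + c for j, c in
                           zip(robot["joints"], common_commands)]


# Two simulated robots starting from unique poses, driven by one controller.
robots = [{"joints": [0.0, 0.0]}, {"joints": [0.5, -0.5]}]
broadcast([0.1, 0.2], robots)
```

Because each robot starts from (and faces) a unique configuration, the identical command set still yields varied interactions, multiplying the data collected per controller step.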
- Publication number: 20220203535
  Abstract: Active utilization of a robotic simulator in control of one or more real world robots. A simulated environment of the robotic simulator can be configured to reflect a real world environment in which a real robot is currently disposed, or will be disposed. The robotic simulator can then be used to determine a sequence of robotic actions for use by the real world robot(s) in performing at least part of a robotic task. The sequence of robotic actions can be applied, to a simulated robot of the robotic simulator, to generate a sequence of anticipated simulated state data instances. The real robot can be controlled to implement the sequence of robotic actions. The implementation of one or more of the robotic actions can be contingent on a real state data instance having at least a threshold degree of similarity to a corresponding one of the anticipated simulated state data instances.
  Type: Application
  Filed: June 3, 2021
  Publication date: June 30, 2022
  Inventors: Yunfei Bai, Tigran Gasparian, Brent Austin, Andreas Christiansen, Matthew Bennice, Paul Bechard
- Publication number: 20220111517
  Abstract: Implementations are provided for operably coupling multiple robot controllers to a single virtual environment, e.g., to generate training examples for training machine learning model(s). In various implementations, a virtual environment may be simulated that includes an interactive object and a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are external from the virtual environment. Sensor data generated from a perspective of each robot avatar of the plurality of robot avatars may be provided to a corresponding robot controller. Joint commands that cause actuation of one or more joints of each robot avatar may be received from the corresponding robot controller. Joint(s) of each robot avatar may be actuated pursuant to corresponding joint commands. The actuating may cause two or more of the robot avatars to act upon the interactive object in the virtual environment.
  Type: Application
  Filed: October 13, 2020
  Publication date: April 14, 2022
  Inventors: Matthew Bennice, Paul Bechard
- Publication number: 20220061677
  Abstract: A phone may be used to conduct physiological measurements such as heart rate, respiration rate, and arterial oxygen saturation level measurements. A mobile app may be installed on a user's portable electronic device, and may direct the user to place a part of the user's body onto a user-facing optical detector such as a camera. The portable electronic device may transmit at least two light signals to the body part using the portable electronic device's screen as an emission source. Reflections of the light signals are recorded by the optical detector. Based on the reflected light signals, the portable electronic device may determine the absorption of different light frequencies and the physiological parameter values.
  Type: Application
  Filed: July 21, 2021
  Publication date: March 3, 2022
  Inventors: Matthew Bennice, Anupama Thubagere Jagadeesh, David Andre
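Comparing the absorption of two reflected light signals is the basis of the classic ratio-of-ratios pulse-oximetry computation, which the abstract's two-wavelength scheme resembles. A hedged sketch: the AC/DC extraction is crude (peak-to-peak over mean), the sample data is synthetic, and the linear coefficients are illustrative, not calibrated clinical values:

```python
def ac_dc_ratio(signal):
    # Pulsatile (AC) component over steady (DC) component of one light signal.
    dc = sum(signal) / len(signal)
    ac = max(signal) - min(signal)
    return ac / dc


def ratio_of_ratios(first_signal, second_signal):
    # Compare relative absorption of the two wavelengths.
    return ac_dc_ratio(first_signal) / ac_dc_ratio(second_signal)


def estimate_spo2(r, a=110.0, b=25.0):
    # Common linear empirical form SpO2 ~ a - b*R; a and b need calibration
    # per device, so these defaults are placeholders only.
    return a - b * r


# Synthetic reflected-intensity samples for two screen-emitted wavelengths.
red = [1.0, 1.2, 1.0, 1.2]
second = [1.0, 1.1, 1.0, 1.1]
r = ratio_of_ratios(red, second)
spo2 = estimate_spo2(r)
```

Heart rate would come from the periodicity of the same reflected signal (e.g. peak counting over a known sample rate) rather than from the ratio.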