Patents by Inventor Peter Pastor

Peter Pastor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230368261
    Abstract: A vehicle matching system and method to improve the auto loan process for a borrower and a seller of vehicles, particularly a vehicle dealership. Specifically, the system finds all vehicles in the dealership's inventory database, or in a provided third-party marketplace, that are compatible with an auto loan approval, and maximizes deals by structuring them in the most profitable manner for the dealership.
    Type: Application
    Filed: May 11, 2022
    Publication date: November 16, 2023
    Inventors: Chris Avery, Arthur Lim, Peter Pastor, Ted Lam
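
The entry above describes matching a dealership's vehicle inventory against an auto loan approval and structuring the resulting deals to maximize dealer profit. The sketch below illustrates one way such a matching-and-ranking step could look; the field names, payment formula, and profit proxy are assumptions for illustration, not details from the filing.

```python
from dataclasses import dataclass


@dataclass
class LoanApproval:
    max_amount: float    # maximum amount financed
    max_payment: float   # maximum monthly payment the borrower is approved for
    term_months: int
    apr: float           # annual percentage rate, e.g. 0.07


@dataclass
class Vehicle:
    vin: str
    price: float
    dealer_cost: float   # dealership's cost basis for the vehicle


def monthly_payment(principal: float, apr: float, term_months: int) -> float:
    """Standard amortized payment for a fixed-rate loan."""
    r = apr / 12.0
    if r == 0:
        return principal / term_months
    return principal * r / (1.0 - (1.0 + r) ** -term_months)


def match_inventory(inventory: list[Vehicle], approval: LoanApproval) -> list[Vehicle]:
    """Return vehicles compatible with the approval, most profitable deals first."""
    compatible = [
        v for v in inventory
        if v.price <= approval.max_amount
        and monthly_payment(v.price, approval.apr, approval.term_months) <= approval.max_payment
    ]
    # "Most profitable manner for the dealership" is approximated here by gross margin.
    return sorted(compatible, key=lambda v: v.price - v.dealer_cost, reverse=True)
```
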
  • Publication number: 20230368314
    Abstract: A virtual dining system in which at least two diners may eat and communicate with each other from at least two remote locations. The system includes stations that permit a diner to be seen and spoken to through audio and video connections, simulating another diner's presence.
    Type: Application
    Filed: April 26, 2023
    Publication date: November 16, 2023
    Inventor: Peter Pastor
  • Patent number: 11717959
    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
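
Patent 11717959 above trains a grasp neural network, a semantic neural network, and a joint deep network using both grasp losses and semantic losses. The following is a minimal, speculative sketch of that two-loss training setup in PyTorch; the layer sizes, feature dimensions, and loss weighting are assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn


class SemanticGraspingModel(nn.Module):
    """Joint deep network with a grasp head and a semantic head."""

    def __init__(self, image_feat_dim=120, motion_dim=8, hidden=64, num_classes=10):
        super().__init__()
        # Joint network: a shared trunk over image features and a candidate motion command.
        self.joint = nn.Sequential(
            nn.Linear(image_feat_dim + motion_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.grasp_head = nn.Linear(hidden, 1)               # will the motion produce a successful grasp?
        self.semantic_head = nn.Linear(hidden, num_classes)  # which object class would be grasped?

    def forward(self, image_feats, motion_cmd):
        z = self.joint(torch.cat([image_feats, motion_cmd], dim=-1))
        return self.grasp_head(z).squeeze(-1), self.semantic_head(z)


def training_step(model, optimizer, batch, semantic_weight=1.0):
    """One update driven by both a grasp loss and a semantic loss."""
    grasp_logit, class_logits = model(batch["image_feats"], batch["motion_cmd"])
    grasp_loss = nn.functional.binary_cross_entropy_with_logits(grasp_logit, batch["grasp_success"])
    semantic_loss = nn.functional.cross_entropy(class_logits, batch["object_class"])
    loss = grasp_loss + semantic_weight * semantic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```
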
  • Patent number: 11571809
    Abstract: Techniques are described herein for robotic control using value distributions. In various implementations, as part of performing a robotic task, state data associated with the robot in an environment may be generated based at least in part on vision data captured by a vision component of the robot. A plurality of candidate actions may be sampled, e.g., from a continuous action space. A trained critic neural network model that represents a learned value function may be used to process a plurality of state-action pairs to generate a corresponding plurality of value distributions. Each state-action pair may include the state data and one of the plurality of sampled candidate actions. The state-action pair corresponding to the value distribution that satisfies one or more criteria may be selected from the plurality of state-action pairs. The robot may then be controlled to implement the sampled candidate action of the selected state-action pair.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: February 7, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor Sampedro, Mrinal Kalakrishnan
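
Patent 11571809 above selects among sampled candidate actions by comparing the value distributions a trained critic assigns to each state-action pair. Below is a minimal sketch of that selection step, assuming a categorical (atom-based) value distribution and a simple expected-value criterion; the critic interface and atom support are illustrative assumptions.

```python
import numpy as np


def select_action(critic, state, action_sampler, num_candidates=64,
                  atoms=np.linspace(-1.0, 1.0, 51)):
    """Score sampled candidate actions with a distributional critic and pick one.

    `critic(state, action)` is assumed to return a probability vector over `atoms`,
    i.e. a categorical value distribution. The criterion used here is the
    distribution's mean; a risk-averse criterion (e.g. a low quantile) would be
    another way to satisfy the "one or more criteria" in the abstract.
    """
    candidates = [action_sampler() for _ in range(num_candidates)]  # from continuous action space
    best_action, best_score = None, -np.inf
    for action in candidates:
        value_distribution = critic(state, action)        # shape (len(atoms),), sums to 1
        expected_value = float(np.dot(value_distribution, atoms))
        if expected_value > best_score:
            best_action, best_score = action, expected_value
    return best_action
```
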
  • Patent number: 11565401
    Abstract: Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: January 31, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Peter Pastor Sampedro, Mrinal Kalakrishnan, Ali Yahya Valdovinos, Adrian Li, Kurt Konolige, Vincent Dureau
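
Patent 11565401 above (and the related entries further down) describe a service that takes a request containing robot instructions and/or environmental parameters, operates a fleet of robots accordingly, stores the data they generate, and returns stored or derived data. A minimal hypothetical sketch of that flow follows; the `Robot` methods and request fields are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class CollectionRequest:
    robot_instructions: dict | None = None   # e.g. the task program each robot should run
    environment_params: dict | None = None   # e.g. object sets, lighting, workspace layout
    return_raw_data: bool = True


@dataclass
class DataStore:
    episodes: list = field(default_factory=list)

    def add(self, episode):
        self.episodes.append(episode)


def handle_request(request: CollectionRequest, robots: list, store: DataStore):
    """Operate each robot per the request, store the generated data, and respond."""
    for robot in robots:
        if request.environment_params:
            robot.configure_environment(request.environment_params)  # assumed robot API
        episode = robot.run(request.robot_instructions)              # assumed robot API
        store.add(episode)
    if request.return_raw_data:
        return store.episodes
    # Otherwise return additional data generated from the stored data (e.g. a summary).
    return {"num_episodes": len(store.episodes)}
```
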
  • Patent number: 11548145
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: January 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
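
Patent 11548145 above (echoed by several entries below) describes servoing a grasping end effector by repeatedly scoring candidate motion commands with a trained deep network and deciding when to issue the grasp itself. The loop below is a schematic sketch of that idea; the `grasp_net` and `robot` interfaces and the thresholding rule are illustrative assumptions.

```python
def servo_to_grasp(grasp_net, robot, max_steps=20, grasp_threshold=0.9, num_candidates=32):
    """Iteratively execute the motion command the network scores highest, then grasp."""
    for _ in range(max_steps):
        image = robot.capture_image()                                  # assumed robot API
        candidates = [robot.sample_motion_command() for _ in range(num_candidates)]
        # Predicted probability that each candidate motion leads to a successful grasp.
        scores = [grasp_net.predict(image, cmd) for cmd in candidates]
        best_score = max(scores)
        best_cmd = candidates[scores.index(best_score)]
        # If closing the gripper now looks nearly as good as moving further, grasp.
        stay_score = grasp_net.predict(image, robot.null_motion_command())
        if stay_score >= grasp_threshold * best_score:
            return robot.close_gripper()
        robot.execute_motion(best_cmd)                                 # otherwise keep servoing
    return robot.close_gripper()
```
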
  • Patent number: 11325252
    Abstract: Deep machine learning methods and apparatus related to the manipulation of an object by an end effector of a robot are described herein. Some implementations relate to training an action prediction network to predict a probability density over candidate actions that would result in successful grasps by the end effector, given an input image. Some implementations are directed to utilization of an action prediction network to visually servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: May 10, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Adrian Li, Peter Pastor Sampedro, Mengyuan Yan, Mrinal Kalakrishnan
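
Patent 11325252 above predicts, from an input image, a probability density over candidate grasp actions. One common way to represent such a density is a mixture density network; the sketch below uses that representation purely as an assumption, with invented feature and action dimensions, not as the patented network.

```python
import torch
import torch.nn as nn


class ActionDensityNetwork(nn.Module):
    """Maps image features to a Gaussian mixture over grasp actions (e.g. x, y, z, yaw)."""

    def __init__(self, feat_dim=128, action_dim=4, num_components=5):
        super().__init__()
        self.num_components, self.action_dim = num_components, action_dim
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.mixture_logits = nn.Linear(256, num_components)
        self.means = nn.Linear(256, num_components * action_dim)
        self.log_stds = nn.Linear(256, num_components * action_dim)

    def forward(self, image_feats):
        h = self.trunk(image_feats)
        weights = torch.softmax(self.mixture_logits(h), dim=-1)
        means = self.means(h).view(-1, self.num_components, self.action_dim)
        stds = self.log_stds(h).view(-1, self.num_components, self.action_dim).exp()
        return weights, means, stds

    def sample_actions(self, image_feats, num_samples=16):
        """Draw candidate grasp actions from the predicted density (e.g. for visual servoing)."""
        weights, means, stds = self.forward(image_feats)
        comps = torch.multinomial(weights, num_samples, replacement=True)
        idx = comps.unsqueeze(-1).expand(-1, -1, self.action_dim)
        mu = torch.gather(means, 1, idx)
        sigma = torch.gather(stds, 1, idx)
        return mu + sigma * torch.randn_like(mu)
```
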
  • Publication number: 20210237266
    Abstract: Using large-scale reinforcement learning to train a policy model that can be utilized by a robot in performing a robotic task in which the robot interacts with one or more environmental objects. In various implementations, off-policy deep reinforcement learning is used to train the policy model, and the off-policy deep reinforcement learning is based on self-supervised data collection. The policy model can be a neural network model. The reinforcement learning used in training the neural network model utilizes a continuous-action variant of Q-learning. Through techniques disclosed herein, implementations can learn policies that generalize effectively to previously unseen objects, previously unseen environments, etc.
    Type: Application
    Filed: June 14, 2019
    Publication date: August 5, 2021
    Inventors: Dmitry Kalashnikov, Alexander Irpan, Peter Pastor Sampedro, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Sergey Levine
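
Publication 20210237266 above trains its policy with off-policy deep reinforcement learning using a continuous-action variant of Q-learning, in which the discrete argmax over actions is replaced by a sampling-based optimizer. A minimal sketch of that action maximization with the cross-entropy method follows; the Q-function interface and hyperparameters are assumptions.

```python
import numpy as np


def cem_argmax_q(q_function, state, action_dim, iterations=3, population=64, elite_frac=0.1):
    """Approximate argmax_a Q(state, a) over a continuous action space with CEM.

    `q_function(state, actions)` is assumed to return one scalar Q-value per row
    of `actions`. This stands in for the exact argmax used in standard Q-learning.
    """
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    num_elite = max(1, int(population * elite_frac))
    for _ in range(iterations):
        actions = np.random.randn(population, action_dim) * std + mean
        q_values = np.asarray(q_function(state, actions))
        elite = actions[np.argsort(q_values)[-num_elite:]]   # keep the highest-valued actions
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean


def td_target(reward, done, next_state, q_function, action_dim, gamma=0.9):
    """Bellman target using the CEM maximizer in place of a discrete max over actions."""
    best_next_action = cem_argmax_q(q_function, next_state, action_dim)
    next_q = float(q_function(next_state, best_next_action[None, :])[0])
    return reward + (0.0 if done else gamma * next_q)
```
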
  • Patent number: 11045949
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 29, 2021
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
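
Patent 11045949 above predicts, for each candidate motion, both a grasp-success measure and a measure of whether the grasped object would have the desired semantic feature(s). The short sketch below shows one plausible way those two measures could be combined when servoing toward an object of a requested class; the model interface and scoring rule are assumptions.

```python
def pick_semantic_grasp(semantic_grasp_model, image, candidate_motions, target_class):
    """Choose the candidate motion that best trades off grasp success and class match."""
    best_motion, best_score = None, float("-inf")
    for motion in candidate_motions:
        # Assumed interface: P(successful grasp) and a per-class probability vector.
        p_grasp, class_probs = semantic_grasp_model.predict(image, motion)
        # Probability that the grasp succeeds AND picks up an object of the desired class.
        score = p_grasp * class_probs[target_class]
        if score > best_score:
            best_motion, best_score = motion, score
    return best_motion, best_score
```
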
  • Publication number: 20210162590
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Application
    Filed: February 10, 2021
    Publication date: June 3, 2021
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
  • Patent number: 10981270
    Abstract: Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: April 20, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Peter Pastor Sampedro, Mrinal Kalakrishnan, Ali Yahya Valdovinos, Adrian Li, Kurt Konolige, Vincent Dureau
  • Patent number: 10946515
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: March 16, 2021
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
  • Publication number: 20200338722
    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: June 28, 2018
    Publication date: October 29, 2020
    Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
  • Publication number: 20200215686
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Patent number: 10639792
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: May 5, 2020
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Publication number: 20200086483
    Abstract: Deep machine learning methods and apparatus related to the manipulation of an object by an end effector of a robot are described herein. Some implementations relate to training an action prediction network to predict a probability density over candidate actions that would result in successful grasps by the end effector, given an input image. Some implementations are directed to utilization of an action prediction network to visually servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector.
    Type: Application
    Filed: September 13, 2019
    Publication date: March 19, 2020
    Inventors: Adrian Li, Peter Pastor Sampedro, Mengyuan Yan, Mrinal Kalakrishnan
  • Patent number: 10427296
    Abstract: Methods and apparatus related to receiving a request that includes robot instructions and/or environmental parameters, operating each of a plurality of robots based on the robot instructions and/or in an environment configured based on the environmental parameters, and storing data generated by the robots during the operating. In some implementations, at least part of the stored data that is generated by the robots is provided in response to the request and/or additional data that is generated based on the stored data is provided in response to the request.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: October 1, 2019
    Assignee: X DEVELOPMENT LLC
    Inventors: Peter Pastor Sampedro, Mrinal Kalakrishnan, Ali Yahya Valdovinos, Adrian Li, Kurt Konolige, Vincent Dureau
  • Publication number: 20190283245
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Application
    Filed: December 27, 2018
    Publication date: September 19, 2019
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
  • Patent number: 10207402
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: February 19, 2019
    Assignee: GOOGLE LLC
    Inventors: Sergey Levine, Peter Pastor Sampedro, Alex Krizhevsky
  • Patent number: 10131053
    Abstract: Methods and apparatus related to robot collision avoidance. One method may include: receiving robot instructions to be performed by a robot; at each of a plurality of control cycles of processor(s) of the robot: receiving trajectories to be implemented by actuators of the robot, wherein the trajectories define motion states for the actuators of the robot during the control cycle or a next control cycle, and wherein the trajectories are generated based on the robot instructions; determining, based on a current motion state of the actuators and the trajectories to be implemented, whether implementation of the trajectories by the actuators prevents any collision avoidance trajectory from being achieved; and selectively providing the trajectories or collision avoidance trajectories for operating the actuators of the robot during the control cycle or the next control cycle depending on a result of the determining.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: November 20, 2018
    Assignee: X DEVELOPMENT LLC
    Inventors: Peter Pastor Sampedro, Umashankar Nagarajan
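
Patent 10131053 above checks, at every control cycle, whether implementing the commanded trajectories would still leave a collision-avoidance trajectory achievable, and substitutes avoidance trajectories if it would not. The sketch below illustrates that per-cycle decision; the `dynamics` and `collision_checker` helpers are assumptions, not APIs from the patent.

```python
def control_cycle_step(robot_state, commanded_trajectories, dynamics, collision_checker):
    """Pass through the commanded trajectories only if a safe escape remains achievable."""
    # Predict the motion state the actuators would reach by executing the commands.
    predicted_state = dynamics.propagate(robot_state, commanded_trajectories)   # assumed API
    # From that predicted state, compute a collision-avoidance trajectory,
    # e.g. a maximal-deceleration stop for every actuator.
    avoidance = dynamics.stopping_trajectory(predicted_state)                   # assumed API
    if collision_checker.in_collision(avoidance):                               # assumed API
        # Implementing the commands would prevent any collision-avoidance trajectory
        # from being achieved: fall back to stopping from the current state instead.
        return dynamics.stopping_trajectory(robot_state)
    return commanded_trajectories
```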