Patents by Inventor Jonathan Tremblay

Jonathan Tremblay has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11375176
    Abstract: When a 2D image is projected from a 3D scene, the viewpoint of the objects in the image, relative to the camera, must be determined. Since the image itself does not contain sufficient information to determine the viewpoint of the various objects it depicts, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on a per-object-category basis, but they must first be trained with numerous manually created examples. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimates for a new object category. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
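    Illustrative sketch (not part of the patent): the few-shot idea above can be pictured as fitting a fresh viewpoint-regression head on a handful of labeled images of a new category, on top of a frozen feature extractor. The sketch assumes PyTorch; every name and hyperparameter is a hypothetical stand-in, not the patented method.
    ```python
    import torch
    import torch.nn as nn

    class ViewpointHead(nn.Module):
        """Regress (azimuth, elevation) from precomputed image features."""
        def __init__(self, feat_dim=512):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 2),  # (azimuth, elevation) in radians
            )

        def forward(self, feats):
            return self.mlp(feats)

    def adapt_to_new_category(support_feats, support_views, steps=100):
        """Fit a fresh head on just a few examples of an unseen category."""
        head = ViewpointHead(support_feats.shape[1])
        opt = torch.optim.Adam(head.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(head(support_feats), support_views)
            loss.backward()
            opt.step()
        return head

    # Five support images of a new category: stand-in features and labels.
    feats = torch.randn(5, 512)          # would come from a frozen backbone
    views = torch.rand(5, 2) * 3.14159   # stand-in viewpoint annotations
    head = adapt_to_new_category(feats, views)
    ```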
  • Publication number: 20220134537
    Abstract: Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint-space coordinates of a robot by simulating a plurality of robots at various joint angles and determining the positions of their respective manipulators in task space. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: February 16, 2021
    Publication date: May 5, 2022
    Inventors: Visak Chadalavada Vijay Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stanley Thomas Birchfield
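    Illustrative sketch (not part of the application): the training recipe described above, simulating a robot at random joint angles and learning the task-space-to-joint-space map, can be shown in miniature with a 2-link planar arm in PyTorch. The arm model and all names are invented for illustration.
    ```python
    import torch
    import torch.nn as nn

    def fk(q, l1=1.0, l2=0.7):
        """Forward kinematics of a toy 2-link planar arm (the 'simulator')."""
        x = l1 * torch.cos(q[:, 0]) + l2 * torch.cos(q[:, 0] + q[:, 1])
        y = l1 * torch.sin(q[:, 0]) + l2 * torch.sin(q[:, 0] + q[:, 1])
        return torch.stack([x, y], dim=1)

    # Simulate the robot at many random joint angles; record task-space points.
    q_train = torch.rand(10000, 2) * 3.14159
    x_train = fk(q_train)

    # Learn the inverse map: task-space coordinates -> joint angles.
    net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        # Supervise through FK so any joint solution reaching the target is fine.
        loss = nn.functional.mse_loss(fk(net(x_train)), x_train)
        loss.backward()
        opt.step()
    ```
    Supervising through the forward-kinematics model, rather than regressing the sampled joint angles directly, sidesteps the fact that several joint configurations can reach the same task-space point.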
  • Publication number: 20220126445
    Abstract: Apparatuses, systems, and techniques are described that solve task and motion planning problems. In at least one embodiment, a task and motion planning problem is modeled using a geometric scene graph that records positions and orientations of objects within a playfield, and a symbolic scene graph that represents the states of objects within the context of a task to be solved. In at least one embodiment, task planning is performed using the symbolic scene graph, and motion planning is performed using the geometric scene graph. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: October 28, 2020
    Publication date: April 28, 2022
    Inventors: Yuke Zhu, Yifeng Zhu, Stanley Thomas Birchfield, Jonathan Tremblay
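    Illustrative sketch (hypothetical, not from the application): the two graphs can be pictured as a metric store plus a predicate store derived from it. The grounding rule below is a toy; the filing does not specify one.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class GeometricNode:
        """Metric state of one object: position and orientation quaternion."""
        name: str
        position: tuple                       # (x, y, z) in meters
        orientation: tuple = (0.0, 0.0, 0.0, 1.0)

    @dataclass
    class SceneGraphs:
        geometric: dict = field(default_factory=dict)  # name -> GeometricNode
        symbolic: set = field(default_factory=set)     # e.g. ("on", "cube", "table")

        def derive_symbolic(self, eps=0.02):
            """Toy grounding: derive 'on' predicates from object heights."""
            self.symbolic.clear()
            for a in self.geometric.values():
                for b in self.geometric.values():
                    if a is not b and 0 < a.position[2] - b.position[2] < 0.1 + eps:
                        self.symbolic.add(("on", a.name, b.name))

    scene = SceneGraphs()
    scene.geometric["table"] = GeometricNode("table", (0.0, 0.0, 0.0))
    scene.geometric["cube"] = GeometricNode("cube", (0.1, 0.0, 0.05))
    scene.derive_symbolic()
    # Task planning searches the symbolic graph; motion planning checks the
    # resulting actions for feasibility against the geometric graph.
    ```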
  • Publication number: 20220068024
    Abstract: One or more images (e.g., images taken from one or more cameras) may be received, where each of the one or more images may depict a two-dimensional (2D) view of a three-dimensional (3D) scene. Additionally, the one or more images may be utilized to determine a 3D representation of the scene. This representation may help an entity navigate an environment represented by the 3D scene. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: February 22, 2021
    Publication date: March 3, 2022
    Inventors: Yunzhi Lin, Jonathan Tremblay, Stephen Walter Tyree, Stanley Thomas Birchfield
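    The abstract leaves the form of the 3D representation open. One common way to lift 2D observations into 3D, shown below purely as an illustration (with made-up intrinsics), is to back-project a depth image into a camera-frame point cloud:
    ```python
    import numpy as np

    def depth_to_pointcloud(depth, fx, fy, cx, cy):
        """Back-project a depth image (meters) into camera-frame 3D points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels

    # Synthetic 4x4 depth image, hypothetical pinhole intrinsics.
    cloud = depth_to_pointcloud(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
    ```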
  • Publication number: 20220044075
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labelled, which is very time-consuming. Instead of manually labelling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: October 21, 2021
    Publication date: February 10, 2022
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
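    Illustrative sketch (a toy, not the patented pipeline): the key property of domain randomization is that labels come for free because the generator itself places the object. The stub below composes a randomly sized and colored "object" onto a random background and records its bounding box; a real pipeline would render textured 3D models with randomized pose, lighting, and distractors.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_randomized_sample(bg=(256, 256)):
        """One auto-labeled training sample: random background, random object."""
        img = rng.integers(0, 256, (*bg, 3), dtype=np.uint8)   # random background
        w, h = rng.integers(20, 80, 2)                         # random object size
        x = rng.integers(0, bg[1] - w)                         # random placement
        y = rng.integers(0, bg[0] - h)
        img[y:y + h, x:x + w] = rng.integers(0, 256, 3)        # random color
        label = {"bbox": (int(x), int(y), int(w), int(h)), "class": "object"}
        return img, label                                      # label is free

    dataset = [make_randomized_sample() for _ in range(1000)]
    ```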
  • Publication number: 20210390653
    Abstract: Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: August 26, 2021
    Publication date: December 16, 2021
    Inventors: Jonathan Tremblay, Stan Birchfield, Stephen Tyree, Thang To, Jan Kautz, Artem Molchanov
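    Illustrative sketch (stubs only): the three-network pipeline above, perception to plan generation to execution with a verification step in between, has roughly this control flow. None of these functions correspond to the actual networks; the percept and action formats are invented.
    ```python
    def perceive(observation):
        """Perception-network stand-in: infer percepts (object relationships)."""
        return [("on", "red_cube", "blue_cube")]        # hard-coded demo percept

    def generate_plan(percepts):
        """Plan-network stand-in: one action per observed relationship."""
        return [("place", obj, target) for _, obj, target in percepts]

    def execute(plan):
        """Execution-network stand-in: turn plan steps into robot commands."""
        for action, obj, target in plan:
            print(f"{action}: {obj} -> {target}")

    plan = generate_plan(perceive(observation=None))
    plan_verified = True   # in the described system, a human reviews/corrects here
    if plan_verified:
        execute(plan)
    ```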
  • Patent number: 11182649
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labelled, which is very time-consuming. Instead of manually labelling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Patent number: 11074717
    Abstract: An object detection neural network receives an input image including an object and generates belief maps for vertices of a bounding volume that encloses the object. The belief maps are used, along with three-dimensional (3D) coordinates defining the bounding volume, to compute the pose of the object in 3D space during post-processing. When multiple objects are present in the image, the object detection neural network may also generate vector fields for the vertices. A vector field comprises vectors pointing from the vertex to a centroid of the object enclosed by the bounding volume defined by the vertex. The object detection neural network may be trained using images of computer-generated objects rendered in 3D scenes (e.g., photorealistic synthetic data). Automatically labelled training datasets may be easily constructed using the photorealistic synthetic data. The object detection neural network may be trained for object detection using only the photorealistic synthetic data. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: July 27, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Thang Hong To, Stanley Thomas Birchfield
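    The post-processing step mentioned above, recovering the 6-DoF pose from per-vertex belief maps plus the known 3D bounding volume, can be illustrated with OpenCV's Perspective-n-Point solver. This is a hedged sketch: the belief maps are fabricated by projecting the cuboid at a known pose (no network is run here), and the intrinsics and cuboid size are made up.
    ```python
    import numpy as np
    import cv2

    def peaks(belief_maps):
        """Take the argmax of each vertex belief map as that vertex's pixel."""
        out = []
        for m in belief_maps:
            y, x = np.unravel_index(np.argmax(m), m.shape)
            out.append([x, y])
        return np.array(out, dtype=np.float64)

    # The 8 cuboid vertices in the object frame (meters), and camera intrinsics.
    s = 0.05
    cuboid = np.array([[dx, dy, dz] for dx in (-s, s)
                                    for dy in (-s, s)
                                    for dz in (-s, s)], dtype=np.float64)
    K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])

    # Fabricate "network output": one belief map per vertex, peaked at the
    # projection of that vertex under a known ground-truth pose.
    rvec_gt, tvec_gt = np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 0.5])
    proj, _ = cv2.projectPoints(cuboid, rvec_gt, tvec_gt, K, None)
    belief_maps = np.zeros((8, 480, 640))
    for m, (x, y) in zip(belief_maps, proj.reshape(-1, 2)):
        m[int(round(y)), int(round(x))] = 1.0

    ok, rvec, tvec = cv2.solvePnP(cuboid, peaks(belief_maps), K, None)
    # rvec/tvec: the object's 6-DoF pose in the camera frame.
    ```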
  • Publication number: 20210146531
    Abstract: A robot is controlled using a combination of model-based and model-free control methods. In some examples, the model-based method uses a physical model of the environment around the robot to guide the robot. The physical model is oriented using a perception system such as a camera. Characteristics of the perception system may be used to determine an uncertainty for the model. Based at least in part on this uncertainty, the system transitions from the model-based method to a model-free method in which, in some embodiments, information provided directly from the perception system is used to direct the robot without reliance on the physical model. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: February 3, 2020
    Publication date: May 20, 2021
    Inventors: Jonathan Tremblay, Dieter Fox, Michelle Lee, Carlos Florensa, Nathan Donald Ratliff, Animesh Garg, Fabio Tozeto Ramos
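    Illustrative sketch (interfaces invented): the uncertainty-gated handoff described above amounts to choosing, at each control step, between a controller that plans against the physical model and a policy that acts on raw perception.
    ```python
    def select_action(state_estimate, uncertainty, raw_obs,
                      model_based_ctrl, model_free_policy, threshold=0.1):
        """Trust the model while perception-derived uncertainty is low;
        hand off to the model-free policy once it grows too large."""
        if uncertainty < threshold:
            return model_based_ctrl(state_estimate)   # plan against the model
        return model_free_policy(raw_obs)             # act on raw perception

    # Toy stand-ins for the two controllers.
    mb = lambda s: ("move_toward", s)
    mf = lambda o: ("servo_on_image", o)
    print(select_action("peg_pose", 0.02, "rgb_frame", mb, mf))  # model-based
    print(select_action("peg_pose", 0.35, "rgb_frame", mb, mf))  # model-free
    ```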
  • Publication number: 20210125036
    Abstract: Apparatuses, systems, and techniques to determine the orientation of objects in an image. In at least one embodiment, images are processed using a neural network trained to determine the orientation of an object.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 29, 2021
    Inventors: Jonathan Tremblay, Ming-Yu Liu, Dieter Fox, Philip Ammirato
  • Publication number: 20210125052
    Abstract: Apparatuses, systems, and techniques to perform a grasp of an object using an articulated robotic hand equipped with one or more tactile sensors. In at least one embodiment, a machine-learned model trained in simulation to grasp a cuboid using signals received from tactile sensors is applied to grasping objects of various shapes in a real-world environment. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Inventors: Jonathan Tremblay, Visak Chadalavada Vijay Kumar, Tucker Hermans
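    Purely as an illustration of the interface such a learned controller might expose (the filing does not disclose an architecture), here is a hypothetical PyTorch policy mapping one frame of tactile readings to small finger-joint adjustments:
    ```python
    import torch
    import torch.nn as nn

    class TactileGraspPolicy(nn.Module):
        """Map tactile pad readings to joint-angle deltas for the fingers."""
        def __init__(self, n_sensors=10, n_joints=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_sensors, 64), nn.ReLU(),
                nn.Linear(64, n_joints), nn.Tanh(),   # bounded outputs
            )

        def forward(self, tactile):
            return 0.05 * self.net(tactile)           # small, safe steps (rad)

    policy = TactileGraspPolicy()
    deltas = policy(torch.rand(1, 10))                # stand-in sensor frame
    ```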
  • Publication number: 20210118166
    Abstract: Apparatuses, systems, and techniques are presented to determine a pose of an object. In at least one embodiment, a network is trained to predict a pose of an autonomous object based, at least in part, on only one image of the autonomous object.
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Jonathan Tremblay, Stan Birchfield, Timothy Lee
  • Publication number: 20210097346
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labelled, which is very time-consuming. Instead of manually labelling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Application
    Filed: December 11, 2020
    Publication date: April 1, 2021
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Patent number: 10867214
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labelled, which is very time-consuming. Instead of manually labelling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: December 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Publication number: 20200311855
    Abstract: Pose estimation generally refers to a computer vision technique that determines the pose of some object, usually with respect to a particular camera. Pose estimation has many applications but is particularly useful in the context of robotic manipulation systems. To date, robotic manipulation systems have required a camera to be installed on the robot itself (i.e., a camera-in-hand) for capturing images of the object and/or a camera external to the robot for capturing images of the object. Unfortunately, the camera-in-hand has a limited field of view for capturing objects, whereas the external camera, which may have a greater field of view, requires costly calibration each time the camera is even slightly moved. Similar issues apply when estimating the pose of any object with respect to another object (either of which may be moving or not). The present disclosure avoids these issues and provides object-to-object pose estimation from a single image. (See the illustrative sketch at the end of this entry.)
    Type: Application
    Filed: June 15, 2020
    Publication date: October 1, 2020
    Inventors: Jonathan Tremblay, Stephen Walter Tyree, Stanley Thomas Birchfield
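    The object-to-object formulation can be illustrated with plain transform algebra: if both objects' poses are estimated in the same camera frame from one image, their relative pose follows without any extrinsic calibration. A minimal NumPy sketch with made-up poses:
    ```python
    import numpy as np

    def homogeneous(R, t):
        """Pack a rotation matrix and translation into a 4x4 transform."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    def relative_pose(T_cam_a, T_cam_b):
        """Pose of object b in object a's frame; the camera frame cancels."""
        return np.linalg.inv(T_cam_a) @ T_cam_b

    # Both objects 0.5 m in front of the camera, b is 0.2 m to the right of a.
    T_cam_a = homogeneous(np.eye(3), [0.0, 0.0, 0.5])
    T_cam_b = homogeneous(np.eye(3), [0.2, 0.0, 0.5])
    print(relative_pose(T_cam_a, T_cam_b)[:3, 3])     # -> [0.2 0.  0. ]
    ```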
  • Publication number: 20200252600
    Abstract: When a 2D image is projected from a 3D scene, the viewpoint of the objects in the image, relative to the camera, must be determined. Since the image itself does not contain sufficient information to determine the viewpoint of the various objects it depicts, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on a per-object-category basis, but they must first be trained with numerous manually created examples. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimates for a new object category.
    Type: Application
    Filed: February 3, 2020
    Publication date: August 6, 2020
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
  • Publication number: 20200061811
    Abstract: In at least one embodiment, under the control of a robotic control system, a gripper on a robot is positioned to grasp a 3-dimensional object. In at least one embodiment, the relative position of the object and the gripper is determined, at least in part, by using a camera mounted on the gripper.
    Type: Application
    Filed: August 23, 2019
    Publication date: February 27, 2020
    Inventors: Shariq Iqbal, Jonathan Tremblay, Thang Hong To, Jia Cheng, Erik Leitch, Duncan J. McKay, Stanley Thomas Birchfield
  • Publication number: 20190355150
    Abstract: An object detection neural network receives an input image including an object and generates belief maps for vertices of a bounding volume that encloses the object. The belief maps are used, along with three-dimensional (3D) coordinates defining the bounding volume, to compute the pose of the object in 3D space during post-processing. When multiple objects are present in the image, the object detection neural network may also generate vector fields for the vertices. A vector field comprises vectors pointing from the vertex to a centroid of the object enclosed by the bounding volume defined by the vertex. The object detection neural network may be trained using images of computer-generated objects rendered in 3D scenes (e.g., photorealistic synthetic data). Automatically labelled training datasets may be easily constructed using the photorealistic synthetic data. The object detection neural network may be trained for object detection using only the photorealistic synthetic data.
    Type: Application
    Filed: May 7, 2019
    Publication date: November 21, 2019
    Inventors: Jonathan Tremblay, Thang Hong To, Stanley Thomas Birchfield
  • Publication number: 20190251397
    Abstract: Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labelled, which is very time-consuming. Instead of manually labelling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labelling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
    Type: Application
    Filed: January 24, 2019
    Publication date: August 15, 2019
    Inventors: Jonathan Tremblay, Aayush Prakash, Mark A. Brophy, Varun Jampani, Cem Anil, Stanley Thomas Birchfield, Thang Hong To, David Jesus Acuna Marrero
  • Publication number: 20190228495
    Abstract: Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
    Type: Application
    Filed: January 23, 2019
    Publication date: July 25, 2019
    Inventors: Jonathan Tremblay, Stan Birchfield, Stephen Tyree, Thang To, Jan Kautz, Artem Molchanov