Patents by Inventor Stephen Tyree
Stephen Tyree has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250104277
Abstract: One embodiment of a method for determining object poses includes receiving first sensor data and second sensor data, where the first sensor data is associated with a first modality, and the second sensor data is associated with a second modality that is different from the first modality, and performing one or more iterative operations to determine a pose of an object based on one or more comparisons of (i) one or more renderings of a three-dimensional (3D) representation of the object in the first modality with the first sensor data, and (ii) one or more renderings of the 3D representation of the object in the second modality with the second sensor data.
Type: Application
Filed: March 18, 2024
Publication date: March 27, 2025
Inventors: Jonathan Tremblay, Stanley Birchfield, Valts Blukis, Balakumar Sundaralingam, Stephen Tyree, Bowen Wen
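As a rough illustration of the iterative render-and-compare idea this abstract describes, the sketch below refines a pose by scoring candidate perturbations against sensor data in two modalities (say, RGB and depth). This is a minimal hill-climbing sketch under assumed interfaces; the `render_rgb` and `render_depth` callables, the 6-parameter pose vector, and the squared-error scoring are placeholders, not the patented method.

```python
import numpy as np

def refine_pose(pose, rgb_obs, depth_obs, render_rgb, render_depth,
                iters=50, step=0.01, w_rgb=0.5, w_depth=0.5):
    """Hill-climbing pose refinement against two sensor modalities.

    pose: NumPy array of 6 pose parameters (translation + rotation).
    render_rgb / render_depth: user-supplied functions that render the
    object's 3D representation at a given pose in each modality.
    """
    def score(p):
        # Compare renderings of the 3D model with each modality's data.
        err_rgb = np.mean((render_rgb(p) - rgb_obs) ** 2)
        err_depth = np.mean((render_depth(p) - depth_obs) ** 2)
        return w_rgb * err_rgb + w_depth * err_depth

    best, best_err = pose.copy(), score(pose)
    for _ in range(iters):
        # Try a small random perturbation of the current pose estimate.
        cand = best + np.random.normal(scale=step, size=6)
        err = score(cand)
        if err < best_err:  # keep candidates that reduce the combined
            best, best_err = cand, err  # two-modality rendering error
    return best
```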
-
Patent number: 11989642
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Grant
Filed: September 26, 2022
Date of Patent: May 21, 2024
Assignee: NVIDIA Corporation
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
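A minimal PyTorch sketch of the encoding scheme the abstract outlines: an LSTM over each object's history yields a per-object state feature, a bi-directional LSTM over the set of encoded objects yields a spatial feature, and their concatenation feeds separate lateral and longitudinal maneuver heads. All layer sizes, the observation format, and the class counts are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class ManeuverPredictor(nn.Module):
    """Illustrative encoder: state feature per object, spatial feature
    across objects (bi-directional LSTM), then maneuver classifiers."""

    def __init__(self, obs_dim=4, hidden=64, n_lat=3, n_lon=2):
        super().__init__()
        # Encodes each object's observed trajectory history.
        self.state_enc = nn.LSTM(obs_dim, hidden, batch_first=True)
        # Bi-directional pass over the set of encoded neighbors.
        self.spatial_enc = nn.LSTM(hidden, hidden, batch_first=True,
                                   bidirectional=True)
        self.lateral_head = nn.Linear(hidden + 2 * hidden, n_lat)
        self.longitudinal_head = nn.Linear(hidden + 2 * hidden, n_lon)

    def forward(self, histories):
        # histories: (num_objects, timesteps, obs_dim)
        _, (h, _) = self.state_enc(histories)
        state = h[-1]                    # (num_objects, hidden)
        spatial, _ = self.spatial_enc(state.unsqueeze(0))
        spatial = spatial.squeeze(0)     # (num_objects, 2 * hidden)
        feat = torch.cat([state, spatial], dim=-1)
        return self.lateral_head(feat), self.longitudinal_head(feat)

# Example: 5 tracked objects, 20 timesteps of (x, y, vx, vy) each.
model = ManeuverPredictor()
lat_logits, lon_logits = model(torch.randn(5, 20, 4))
```

The predicted maneuver classes would then condition a trajectory decoder that produces the future locations the abstract mentions; that decoder is omitted here.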
-
Publication number: 20240127075
Abstract: Machine learning is a process that learns a model from a given dataset, where the model can then be used to make a prediction about new data. In order to reduce the costs associated with collecting and labeling real world datasets for use in training the model, computer processes can synthetically generate datasets which simulate real world data. The present disclosure improves the effectiveness of such synthetic datasets for training machine learning models used in real world applications, in particular by generating a synthetic dataset that is specifically targeted to a specified downstream task (e.g. a particular computer vision task, a particular natural language processing task, etc.).
Type: Application
Filed: June 21, 2023
Publication date: April 18, 2024
Applicant: NVIDIA Corporation
Inventors: Shalini De Mello, Christian Jacobsen, Xunlei Wu, Stephen Tyree, Alice Li, Wonmin Byeon, Shangru Li
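One way to picture the task-targeted generation loop the abstract describes, as a hedged sketch only: generation parameters are tuned by how well a model trained on the resulting synthetic data scores on the actual downstream task. The `generate`, `train`, and `evaluate` callables and the random-search tuning are stand-ins, not the disclosed method.

```python
import random

def optimize_generator(gen_params, generate, train, evaluate,
                       rounds=10, step=0.1):
    """Random-search tuning of synthetic-data generation parameters
    against a downstream-task validation score (higher is better)."""
    def downstream_score(params):
        dataset = generate(params)  # synthesize a labeled dataset
        model = train(dataset)      # train on synthetic data only
        return evaluate(model)      # score on the real downstream task

    best, best_score = dict(gen_params), downstream_score(gen_params)
    for _ in range(rounds):
        # Perturb each generation knob (e.g. lighting, pose ranges).
        cand = {k: v + random.uniform(-step, step) for k, v in best.items()}
        score = downstream_score(cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```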
-
Patent number: 11941719
Abstract: Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
Type: Grant
Filed: January 23, 2019
Date of Patent: March 26, 2024
Assignee: NVIDIA Corporation
Inventors: Jonathan Tremblay, Stan Birchfield, Stephen Tyree, Thang To, Jan Kautz, Artem Molchanov
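The abstract's three-network pipeline (perception, plan generation, execution) can be sketched as a thin orchestration layer. The function below is illustrative scaffolding under assumed interfaces; the network callables, the `robot.execute` method, and the human-verification hook are all placeholders.

```python
def learn_task_from_demonstration(demo_data, perception_net, plan_net,
                                  execution_net, robot, verify=None):
    """Perception infers object relationships (percepts) from a
    demonstration; a plan network turns percepts into actions; after
    optional verification, an execution network drives the robot."""
    percepts = perception_net(demo_data)  # observed object relationships
    plan = plan_net(percepts)             # one action per relationship
    if verify is not None:
        plan = verify(plan)               # manual/second-demo correction
    for action in plan:
        # The execution network maps each planned action to low-level
        # instructions for this (or another) robot.
        robot.execute(execution_net(action))
```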
-
Publication number: 20240066710
Abstract: One embodiment of a method for controlling a robot includes generating a representation of spatial occupancy within an environment based on a plurality of red, green, blue (RGB) images of the environment, determining one or more actions for the robot based on the representation of spatial occupancy and a goal, and causing the robot to perform at least a portion of a movement based on the one or more actions.
Type: Application
Filed: February 13, 2023
Publication date: February 29, 2024
Inventors: Balakumar Sundaralingam, Stanley Birchfield, Zhenggang Tang, Jonathan Tremblay, Stephen Tyree, Bowen Wen, Ye Yuan, Charles Loop
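A toy sketch of the second half of that loop: given an occupancy representation (assumed here to be a 2D grid already built elsewhere from the RGB images) and a goal, pick the robot's next step. The greedy 4-neighbor motion model is an assumption for illustration, not the claimed method.

```python
import numpy as np

def next_action(occupancy, pos, goal):
    """Greedy step on a 2D occupancy grid: among free 4-neighbors of
    pos, pick the one closest to goal. occupancy[r, c] == 1 is blocked."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    best, best_d = None, float("inf")
    for dr, dc in moves:
        r, c = pos[0] + dr, pos[1] + dc
        if (0 <= r < occupancy.shape[0] and 0 <= c < occupancy.shape[1]
                and occupancy[r, c] == 0):
            d = np.hypot(r - goal[0], c - goal[1])
            if d < best_d:
                best, best_d = (dr, dc), d
    return best  # None if the robot is boxed in

grid = np.zeros((10, 10), dtype=int)
grid[4, 2:8] = 1  # an obstacle wall inferred from the images
print(next_action(grid, (2, 5), (8, 5)))
```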
-
Publication number: 20230088912
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Application
Filed: September 26, 2022
Publication date: March 23, 2023
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
-
Patent number: 11514293
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Grant
Filed: September 9, 2019
Date of Patent: November 29, 2022
Assignee: NVIDIA Corporation
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
-
Publication number: 20210390653
Abstract: Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
Type: Application
Filed: August 26, 2021
Publication date: December 16, 2021
Inventors: Jonathan Tremblay, Stan Birchfield, Stephen Tyree, Thang To, Jan Kautz, Artem Molchanov
-
Publication number: 20210124353
Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternative levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project out the options along the paths of the decision tree including the sequences. A value function is used to generate a value for each considered sequence, or path, and a path having a highest value is selected for use in determining how to navigate the object.
Type: Application
Filed: January 4, 2021
Publication date: April 29, 2021
Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
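A compact sketch of the alternating-level tree search the abstract outlines, written as a small expectimax: levels of our candidate actions alternate with probability-weighted levels of other actors' predicted reactions, and a value function scores each sequence so the highest-value first action is returned. The callables, depth, and probability model are assumed placeholders; the patent's learned components would supply `reactions_fn` and `value_fn`.

```python
def best_action(state, actions_fn, reactions_fn, step_fn, value_fn, depth=3):
    """Expectimax over alternating levels of our actions and other
    actors' probable reactions, scored by a value function."""
    def q(s, action, d):
        # Expectation over predicted reactions (reaction, probability).
        return sum(p * v(step_fn(s, action, r), d - 1)
                   for r, p in reactions_fn(s, action))

    def v(s, d):
        if d == 0:
            return value_fn(s)  # value of the sequence ending here
        # Our turn: assume we pick the best available action.
        return max((q(s, a, d) for a in actions_fn(s)),
                   default=value_fn(s))

    # Return the first action on the highest-value path.
    return max(actions_fn(state), key=lambda a: q(state, a, depth))
```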
-
Publication number: 20200249674
Abstract: Sensors measure information about actors or other objects near an object, such as a vehicle or robot, to be maneuvered. Sensor data is used to determine a sequence of possible actions for the maneuverable object to achieve a determined goal. For each possible action to be considered, one or more probable reactions of the nearby actors or objects are determined. This can take the form of a decision tree in some embodiments, with alternative levels of nodes corresponding to possible actions of the present object and probable reactive actions of one or more other vehicles or actors. Machine learning can be used to determine the probabilities, as well as to project out the options along the paths of the decision tree including the sequences. A value function is used to generate a value for each considered sequence, or path, and a path having a highest value is selected for use in determining how to navigate the object.
Type: Application
Filed: February 5, 2019
Publication date: August 6, 2020
Inventors: Bill Dally, Stephen Tyree, Iuri Frosio, Alejandro Troccoli
-
Publication number: 20200082248
Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
Type: Application
Filed: September 9, 2019
Publication date: March 12, 2020
Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
-
Publication number: 20190228495
Abstract: Various embodiments enable a robot, or other autonomous or semi-autonomous device or system, to receive data involving the performance of a task in the physical world. The data can be provided as input to a perception network to infer a set of percepts about the task, which can correspond to relationships between objects observed during the performance. The percepts can be provided as input to a plan generation network, which can infer a set of actions as part of a plan. Each action can correspond to one of the observed relationships. The plan can be reviewed and any corrections made, either manually or through another demonstration of the task. Once the plan is verified as correct, the plan (and any related data) can be provided as input to an execution network that can infer instructions to cause the robot, and/or another robot, to perform the task.
Type: Application
Filed: January 23, 2019
Publication date: July 25, 2019
Inventors: Jonathan Tremblay, Stan Birchfield, Stephen Tyree, Thang To, Jan Kautz, Artem Molchanov