Patents by Inventor Balakumar Sundaralingam
Balakumar Sundaralingam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12318935
Abstract: One embodiment of a method for controlling a robot includes receiving sensor data associated with an environment that includes an object; applying a machine learning model to a portion of the sensor data associated with the object and one or more trajectories of motion of the robot to determine one or more path lengths of the one or more trajectories; generating a new trajectory of motion of the robot based on the one or more trajectories and the one or more path lengths; and causing the robot to perform one or more movements based on the new trajectory.
Type: Grant
Filed: July 1, 2022
Date of Patent: June 3, 2025
Assignee: NVIDIA CORPORATION
Inventors: Adithyavairavan Murali, Balakumar Sundaralingam, Yun-Chun Chen, Dieter Fox, Animesh Garg
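The loop this abstract describes (score each candidate trajectory by predicted path length, then derive a new trajectory from the candidates) can be sketched in plain Python. Geometric path length stands in for the patent's learned model here, and every name and value is illustrative:

```python
import math
import random

def path_length(traj):
    """Euclidean length of a piecewise-linear trajectory (list of 2-D points)."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def refine_trajectory(candidates, noise=0.05, rng=None):
    """Keep the candidate with the shortest path length (in the patent, a
    model-predicted length) and jitter its interior waypoints to propose a
    new trajectory; endpoints stay fixed."""
    rng = rng or random.Random(0)
    best = min(candidates, key=path_length)
    new = [best[0]] + [(x + rng.uniform(-noise, noise),
                        y + rng.uniform(-noise, noise))
                       for x, y in best[1:-1]] + [best[-1]]
    return new if path_length(new) < path_length(best) else best

straight = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
detour   = [(0.0, 0.0), (0.5, 1.5), (1.0, 1.0)]
result = refine_trajectory([straight, detour])
print(path_length(result) <= path_length(detour))  # the shorter family wins
```

The endpoint-preserving jitter is one simple way to "generate a new trajectory based on the one or more trajectories"; the actual patent leaves the generation mechanism far more general.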
-
Publication number: 20250104277
Abstract: One embodiment of a method for determining object poses includes receiving first sensor data and second sensor data, where the first sensor data is associated with a first modality, and the second sensor data is associated with a second modality that is different from the first modality, and performing one or more iterative operations to determine a pose of an object based on one or more comparisons of (i) one or more renderings of a three-dimensional (3D) representation of the object in the first modality with the first sensor data, and (ii) one or more renderings of the 3D representation of the object in the second modality with the second sensor data.
Type: Application
Filed: March 18, 2024
Publication date: March 27, 2025
Inventors: Jonathan TREMBLAY, Stanley BIRCHFIELD, Valts BLUKIS, Balakumar SUNDARALINGAM, Stephen TYREE, Bowen WEN
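The iterative render-and-compare idea can be shown with a scalar toy problem: two "modalities" are two different functions of a pose parameter, and the pose is updated until both renderings match their observations. This is only a stand-in for rendering a full 3D model against RGB and depth data; the functions and constants below are invented:

```python
def refine_pose(observed_rgb, observed_depth, render_rgb, render_depth,
                pose0=0.0, steps=50, lr=0.05):
    """Minimize the combined squared mismatch between renderings and
    observations in both modalities, via finite-difference gradient steps
    (a stand-in for a differentiable renderer)."""
    def cost(p):
        return ((render_rgb(p) - observed_rgb) ** 2
                + (render_depth(p) - observed_depth) ** 2)
    pose, eps = pose0, 1e-4
    for _ in range(steps):
        grad = (cost(pose + eps) - cost(pose - eps)) / (2 * eps)
        pose -= lr * grad
    return pose

# Toy "renderers": each modality is a different function of the pose.
true_pose = 1.5
pose = refine_pose(observed_rgb=2 * true_pose, observed_depth=true_pose + 1,
                   render_rgb=lambda p: 2 * p, render_depth=lambda p: p + 1)
print(round(pose, 3))
```

Using both modalities in one cost is the point: either residual alone would also pin down this toy pose, but in practice the two modalities constrain different aspects of a real 6-DoF pose.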
-
Publication number: 20250083309
Abstract: In various examples, systems and methods are disclosed relating to geometric fabrics for accelerated policy learning and sim-to-real transfer in robotics systems, platforms, and/or applications. For example, a system can provide an input indicative of a goal pose for a robot to a model to cause the model to generate an output, the output representing a plurality of points along a path for movement of the robot to the goal pose; and generate one or more control signals for operation of the robot based at least on the plurality of points along the path and a policy corresponding to one or more criteria for the operation of the robot. In examples, the system can provide the one or more control signals to the robot to cause the robot to move toward the goal pose.
Type: Application
Filed: April 25, 2024
Publication date: March 13, 2025
Applicant: NVIDIA Corporation
Inventors: Nathan Donald RATLIFF, Karl VAN WYK, Ankur HANDA, Viktor MAKOVIICHUK, Yijie GUO, Jie XU, Tyler LUM, Balakumar SUNDARALINGAM, Jingzhou LIU
-
Publication number: 20240338598
Abstract: One embodiment of a method for generating simulation data to train a machine learning model includes generating a plurality of simulation environments based on a user input, and for each simulation environment included in the plurality of simulation environments: generating a plurality of tasks for a robot to perform within the simulation environment, performing one or more operations to determine a plurality of robot trajectories for performing the plurality of tasks, and generating simulation data for training a machine learning model by performing one or more operations to simulate the robot moving within the simulation environment according to the plurality of trajectories.
Type: Application
Filed: March 15, 2024
Publication date: October 10, 2024
Inventors: Caelan Reed GARRETT, Fabio TOZETO RAMOS, Iretiayo AKINOLA, Alperen DEGIRMENCI, Clemens EPPNER, Dieter FOX, Tucker Ryer HERMANS, Ajay Uday MANDLEKAR, Arsalan MOUSAVIAN, Yashraj Shyam NARANG, Rowland Wilde O'FLAHERTY, Balakumar SUNDARALINGAM, Wei YANG
-
Patent number: 12017352
Abstract: Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint space coordinates of a robot by simulating a plurality of robots at various joint angles, and determining the position of their respective manipulators in task space.
Type: Grant
Filed: February 16, 2021
Date of Patent: June 25, 2024
Assignee: NVIDIA CORPORATION
Inventors: Visak Chadalavada Vijay Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stanley Thomas Birchfield
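Generating training pairs by simulating a robot at many joint angles, as this abstract describes, looks roughly like the following for a hypothetical 2-link planar arm; the forward-kinematics outputs become the inputs of the learned inverse map (the neural-network training itself is omitted):

```python
import math
import random

L1, L2 = 1.0, 1.0  # link lengths of a toy 2-link planar arm (illustrative)

def forward_kinematics(q1, q2):
    """End-effector (x, y) position for joint angles q1, q2."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def make_ik_dataset(n, seed=0):
    """Sample joint angles, run FK, and store (task-space -> joint-space)
    pairs -- the supervision a neural IK model would regress on."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        q1 = rng.uniform(-math.pi, math.pi)
        q2 = rng.uniform(-math.pi, math.pi)
        data.append((forward_kinematics(q1, q2), (q1, q2)))
    return data

dataset = make_ik_dataset(1000)
print(len(dataset))  # 1000 (position, joint-angle) training pairs
```

Sampling in joint space and inverting the pairing sidesteps the one-to-many ambiguity of analytic IK: the network simply learns whichever joint solution generated each sampled position.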
-
Publication number: 20240131706
Abstract: Apparatuses, systems, and techniques to perform collision-free motion generation (e.g., to operate a real-world or virtual robot). In at least one embodiment, at least a portion of the collision-free motion generation is performed in parallel.
Type: Application
Filed: May 22, 2023
Publication date: April 25, 2024
Inventors: Balakumar Sundaralingam, Siva Kumar Sastry Hari, Adam Harper Fishman, Caelan Reed Garrett, Alexander James Millane, Elena Oleynikova, Ankur Handa, Fabio Tozeto Ramos, Nathan Donald Ratliff, Karl Van Wyk, Dieter Fox
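A minimal sketch of parallel collision-free motion generation: score many candidate trajectories concurrently against an obstacle, then keep the shortest collision-free one. A thread pool stands in for the GPU-scale parallelism such systems actually use, and the obstacle, cost, and trajectories are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
import math

OBSTACLE = (0.5, 0.5, 0.2)  # sphere: (cx, cy, radius), illustrative

def collision_cost(traj):
    """Penetration depth summed over waypoints (0.0 means collision-free)."""
    cx, cy, r = OBSTACLE
    return sum(max(0.0, r - math.dist((cx, cy), p)) for p in traj)

def best_collision_free(candidates):
    """Evaluate all candidates in parallel, discard colliding ones, and
    return the shortest survivor."""
    with ThreadPoolExecutor() as pool:
        costs = list(pool.map(collision_cost, candidates))
    free = [t for t, c in zip(candidates, costs) if c == 0.0]
    return min(free, key=lambda t: sum(math.dist(a, b) for a, b in zip(t, t[1:])))

through = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]  # passes through the obstacle
around  = [(0.0, 0.0), (0.5, 0.9), (1.0, 1.0)]  # detours above it
print(best_collision_free([through, around]) is around)
```

Because each candidate's cost is independent of the others, the evaluation is embarrassingly parallel, which is what makes batched hardware a natural fit for this step.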
-
Publication number: 20240095527
Abstract: Systems and techniques are described related to training one or more machine learning models for use in control of a robot. In at least one embodiment, one or more machine learning models are trained based at least on simulations of the robot and renderings of such simulations—which may be performed using one or more ray tracing algorithms, operations, or techniques.
Type: Application
Filed: August 10, 2023
Publication date: March 21, 2024
Inventors: Ankur HANDA, Gavriel STATE, Arthur David ALLSHIRE, Dieter FOX, Jean-Francois Victor LAFLECHE, Jingzhou LIU, Viktor MAKOVIICHUK, Yashraj Shyam NARANG, Aleksei Vladimirovich PETRENKO, Ritvik SINGH, Balakumar SUNDARALINGAM, Karl VAN WYK, Alexander ZHURKEVICH
-
Publication number: 20240066710
Abstract: One embodiment of a method for controlling a robot includes generating a representation of spatial occupancy within an environment based on a plurality of red, green, blue (RGB) images of the environment, determining one or more actions for the robot based on the representation of spatial occupancy and a goal, and causing the robot to perform at least a portion of a movement based on the one or more actions.
Type: Application
Filed: February 13, 2023
Publication date: February 29, 2024
Inventors: Balakumar SUNDARALINGAM, Stanley BIRCHFIELD, Zhenggang TANG, Jonathan TREMBLAY, Stephen TYREE, Bowen WEN, Ye YUAN, Charles LOOP
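A toy version of this pipeline: build an occupancy grid from observed obstacle points (standing in for the occupancy representation the patent derives from RGB images) and pick the next action greedily toward a goal. All coordinates and grid sizes are illustrative:

```python
import math

def build_occupancy(points, size=4):
    """Mark grid cells containing observed obstacle points."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        grid[int(y)][int(x)] = 1
    return grid

def next_action(grid, pos, goal):
    """Greedy step: move to the free neighboring cell closest to the goal."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    options = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves]
    free = [(x, y) for x, y in options
            if 0 <= x < len(grid[0]) and 0 <= y < len(grid) and not grid[y][x]]
    return min(free, key=lambda c: math.dist(c, goal))

grid = build_occupancy([(1.0, 0.0)])       # one obstacle blocks cell (1, 0)
step = next_action(grid, pos=(0, 0), goal=(3, 3))
print(step)
```

With the direct route blocked, the greedy step detours through the free neighbor, which is the essence of "actions based on the representation of spatial occupancy and a goal."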
-
Publication number: 20230294277
Abstract: Approaches presented herein provide for predictive control of a robot or automated assembly in performing a specific task. A task to be performed may depend on the location and orientation of the robot performing that task. A predictive control system can determine a state of a physical environment at each of a series of time steps, and can select an appropriate location and orientation at each of those time steps. At individual time steps, an optimization process can determine a sequence of future motions or accelerations to be taken that comply with one or more constraints on that motion. For example, at individual time steps, a respective action in the sequence may be performed, then another motion sequence predicted for a next time step, which can help drive robot motion based upon predicted future motion and allow for quick reactions.
Type: Application
Filed: June 30, 2022
Publication date: September 21, 2023
Inventors: Wei Yang, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Yu-Wei Chao, Dieter Fox, Iretiayo Akinola
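The receding-horizon pattern this abstract describes (optimize a short action sequence, execute only its first action, then replan at the next step) can be sketched with 1-D toy dynamics and a coarse grid of candidate actions; everything here is a simplified stand-in for the patent's constrained optimization:

```python
def rollout_cost(state, actions, goal):
    """Cost of an action sequence under toy 1-D dynamics (position += action)."""
    cost = 0.0
    for a in actions:
        state += a
        cost += (state - goal) ** 2   # penalize distance to the goal
    return cost

def mpc_step(state, goal, horizon=5, grid=21):
    """One predictive-control step: search constant-action candidates over a
    short horizon and return only the FIRST action of the best sequence."""
    candidates = [[-0.2 + 0.4 * i / (grid - 1)] * horizon for i in range(grid)]
    best = min(candidates, key=lambda seq: rollout_cost(state, seq, goal))
    return best[0]

state, goal = 0.0, 1.0
for _ in range(20):                    # receding-horizon loop: act, then replan
    state += mpc_step(state, goal)
print(abs(state - goal) < 0.05)
```

Executing only the first action and re-optimizing each step is what gives predictive control its "quick reactions": any disturbance to the state is absorbed by the next replanning pass.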
-
Publication number: 20230294276
Abstract: Approaches presented herein provide for simulation of human motion for human-robot interactions, such as may involve a handover of an object. Motion capture can be performed for a hand grasping and moving an object to a location and orientation appropriate for a handover, without a need for a robot to be present or an actual handover to occur. This motion data can be used to separately model the hand and the object for use in a handover simulation, where a component such as a physics engine may be used to ensure realistic modeling of the motion or behavior. During a simulation, a robot control model or algorithm can predict an optimal location and orientation to grasp an object, and an optimal path to move to that location and orientation, using a control model or algorithm trained, based at least in part, using the motion models for the hand and object.
Type: Application
Filed: December 30, 2022
Publication date: September 21, 2023
Inventors: Yu-Wei Chao, Yu Xiang, Wei Yang, Dieter Fox, Chris Paxton, Balakumar Sundaralingam, Maya Cakmak
-
Patent number: 11745347
Abstract: Candidate grasping models of a deformable object are applied to generate a simulation of a response of the deformable object to the grasping model. From the simulation, grasp performance metrics for stress, deformation controllability, and instability of the response to the grasping model are obtained, and the grasp performance metrics are correlated with robotic grasp features.
Type: Grant
Filed: March 19, 2021
Date of Patent: September 5, 2023
Assignee: NVIDIA CORP.
Inventors: Isabella Huang, Yashraj Shyam Narang, Clemens Eppner, Balakumar Sundaralingam, Miles Macklin, Tucker Ryer Hermans, Dieter Fox
-
Publication number: 20230271330
Abstract: Approaches presented herein provide for a framework to integrate human provided feedback in natural language to update a robot planning cost or value. The natural language feedback may be modeled as a cost or value associated with completing a task assigned to the robot. This cost or value may then be added to an initial task cost or value to update one or more actions to be performed by the robot. The framework can be applied to both real work and simulated environments where the robot may receive instructions, in natural language, that either provide a goal, modify an existing goal, or provide constraints to actions to achieve an existing goal.
Type: Application
Filed: November 15, 2022
Publication date: August 31, 2023
Inventors: Balakumar Sundaralingam, Pratyusha Sharma, Christopher Jason Paxton, Valts Blukis, Tucker Hermans, Dieter Fox
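The core mechanism here is cost addition, which can be shown in a few lines: a hypothetical parsed instruction such as "stay below 0.5" becomes a penalty term added to the base task cost (the language parsing itself is out of scope, and all names and weights are invented):

```python
def task_cost(plan):
    """Base planning cost: here, just total travel of a 1-D waypoint plan."""
    return sum(abs(b - a) for a, b in zip(plan, plan[1:]))

def feedback_cost(plan, limit=0.5, weight=10.0):
    """Penalty from a hypothetical parsed instruction 'stay below 0.5':
    charge for any waypoint above the limit."""
    return weight * sum(max(0.0, w - limit) for w in plan)

def total_cost(plan):
    """Initial task cost plus the language-derived cost, as the abstract describes."""
    return task_cost(plan) + feedback_cost(plan)

high = [0.0, 0.8, 0.0]   # violates "stay below 0.5"
low  = [0.0, 0.4, 0.0]   # complies
print(total_cost(low) < total_cost(high))
```

Because the feedback enters only as an added term, the same planner can absorb new goals, goal modifications, or constraints without being retrained.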
-
Publication number: 20230256595
Abstract: One embodiment of a method for controlling a robot includes receiving sensor data associated with an environment that includes an object; applying a machine learning model to a portion of the sensor data associated with the object and one or more trajectories of motion of the robot to determine one or more path lengths of the one or more trajectories; generating a new trajectory of motion of the robot based on the one or more trajectories and the one or more path lengths; and causing the robot to perform one or more movements based on the new trajectory.
Type: Application
Filed: July 1, 2022
Publication date: August 17, 2023
Inventors: Adithyavairavan MURALI, Balakumar SUNDARALINGAM, Yun-Chun CHEN, Dieter FOX, Animesh GARG
-
Publication number: 20230145208
Abstract: Apparatuses, systems, and techniques to train a machine learning model. In at least one embodiment, a first machine learning model is trained to infer a concept based on first information, training data is labeled using the first machine learning model, and a second machine learning model is trained to infer the concept using the labeled training data.
Type: Application
Filed: November 7, 2022
Publication date: May 11, 2023
Inventors: Andreea Bobu, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Wei Yang, Yu-Wei Chao, Dieter Fox
-
Publication number: 20220318459
Abstract: Apparatuses, systems, and techniques to model a tactile force sensor. In at least one embodiment, output of tactile sensor is predicted from a modeled force and shape imposed on the sensor. In at least one embodiment, a shape of the surface of the tactile sensor is determined based at least in part on electrical signals received from the sensor.
Type: Application
Filed: March 25, 2021
Publication date: October 6, 2022
Inventors: Yashraj Shyam Narang, Balakumar Sundaralingam, Karl Van Wyk, Arsalan Mousavian, Miles Macklin, Dieter Fox
-
Publication number: 20220297297
Abstract: Candidate grasping models of a deformable object are applied to generate a simulation of a response of the deformable object to the grasping model. From the simulation, grasp performance metrics for stress, deformation controllability, and instability of the response to the grasping model are obtained, and the grasp performance metrics are correlated with robotic grasp features.
Type: Application
Filed: March 19, 2021
Publication date: September 22, 2022
Applicant: NVIDIA Corp.
Inventors: Isabella Huang, Yashraj Shyam Narang, Clemens Eppner, Balakumar Sundaralingam, Miles Macklin, Tucker Ryer Hermans, Dieter Fox
-
Publication number: 20220134537
Abstract: Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint space coordinates of a robot by simulating a plurality of robots at various joint angles, and determining the position of their respective manipulators in task space.
Type: Application
Filed: February 16, 2021
Publication date: May 5, 2022
Inventors: Visak Chadalavada Vijay Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stanley Thomas Birchfield
-
Publication number: 20200301510
Abstract: A computer system generates a tactile force model for a tactile force sensor by performing a number of calibration tasks. In various embodiments, the calibration tasks include pressing the tactile force sensor while the tactile force sensor is attached to a pressure gauge, interacting with a ball, and pushing an object along a planar surface. Data collected from these calibration tasks is used to train a neural network. The resulting tactile force model allows the computer system to convert signals received from the tactile force sensor into a force magnitude and direction with greater accuracy than conventional methods. In an embodiment, force on the tactile force sensor is inferred by interacting with an object, determining the motion of the object, and estimating the forces on the object based on a physical model of the object.
Type: Application
Filed: March 19, 2019
Publication date: September 24, 2020
Inventors: Stan Birchfield, Byron Boots, Dieter Fox, Ankur Handa, Nathan Ratliff, Balakumar Sundaralingam, Alexander Lambert
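The calibration idea (collect raw sensor readings against a known applied force, then fit a model mapping one to the other) can be sketched with a linear least-squares fit in place of the patent's neural network; the calibration values below are made up, as if read off a pressure gauge:

```python
def fit_linear(signals, forces):
    """Least-squares fit of force ~= a * signal + b from calibration pairs
    (a line is the minimal stand-in for a learned sensor model)."""
    n = len(signals)
    sx, sy = sum(signals), sum(forces)
    sxx = sum(s * s for s in signals)
    sxy = sum(s * f for s, f in zip(signals, forces))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration data: sensor signal vs. gauge-measured force.
signals = [0.0, 1.0, 2.0, 3.0, 4.0]
forces  = [0.1, 2.1, 3.9, 6.1, 8.0]   # roughly force = 2 * signal
a, b = fit_linear(signals, forces)
print(round(a, 1))  # fitted slope, close to 2.0
```

Once fitted, the model runs in the other direction at inference time: a new signal is converted to an estimated force via `a * signal + b`, which is the conversion step the abstract describes.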