Patents by Inventor Balakumar Sundaralingam

Balakumar Sundaralingam is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12318935
    Abstract: One embodiment of a method for controlling a robot includes receiving sensor data associated with an environment that includes an object; applying a machine learning model to a portion of the sensor data associated with the object and one or more trajectories of motion of the robot to determine one or more path lengths of the one or more trajectories; generating a new trajectory of motion of the robot based on the one or more trajectories and the one or more path lengths; and causing the robot to perform one or more movements based on the new trajectory.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: June 3, 2025
    Assignee: NVIDIA Corporation
    Inventors: Adithyavairavan Murali, Balakumar Sundaralingam, Yun-Chun Chen, Dieter Fox, Animesh Garg
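The loop in the abstract above (score candidate trajectories by predicted path length, then generate a new trajectory from them) can be sketched as follows. This is a minimal illustration, not the patented method: `predict_path_length` stands in for the machine learning model (here it just measures geometric length rather than inferring it from sensor data), and the length-weighted blend is an assumed example of generating a new trajectory from candidates.

```python
import numpy as np

def predict_path_length(trajectory):
    # Stand-in for the learned model: just the geometric length of the
    # trajectory. In the patent, a machine learning model infers this
    # from sensor data associated with the object.
    segments = np.diff(trajectory, axis=0)
    return float(np.linalg.norm(segments, axis=1).sum())

def select_and_refine(candidates):
    # Score each candidate trajectory, then form a new trajectory as a
    # blend of the candidates weighted toward shorter predicted paths.
    lengths = [predict_path_length(t) for t in candidates]
    weights = np.exp(-np.array(lengths))
    weights /= weights.sum()
    new_traj = np.einsum("i,ijk->jk", weights, np.stack(candidates))
    return new_traj, lengths
```

Because the weights sum to one, waypoints shared by all candidates (such as common start and end points) are preserved in the blended trajectory.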
  • Publication number: 20250104277
    Abstract: One embodiment of a method for determining object poses includes receiving first sensor data and second sensor data, where the first sensor data is associated with a first modality, and the second sensor data is associated with a second modality that is different from the first modality, and performing one or more iterative operations to determine a pose of an object based on one or more comparisons of (i) one or more renderings of a three-dimensional (3D) representation of the object in the first modality with the first sensor data, and (ii) one or more renderings of the 3D representation of the object in the second modality with the second sensor data.
    Type: Application
    Filed: March 18, 2024
    Publication date: March 27, 2025
    Inventors: Jonathan Tremblay, Stanley Birchfield, Valts Blukis, Balakumar Sundaralingam, Stephen Tyree, Bowen Wen
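The iterative render-and-compare idea above can be sketched in a toy 2D form. Everything here is an illustrative assumption: `render` is a trivial stand-in for rendering a 3D representation, the two "modalities" are simulated as differently scaled observations of the same geometry, and an annealed random search replaces whatever optimizer an actual system would use.

```python
import numpy as np

rng = np.random.default_rng(0)
model_points = rng.normal(size=(20, 2))   # stand-in for the 3D representation
true_pose = np.array([1.5, -0.5])         # pose to recover (2D translation)

def render(points, pose):
    # Trivial "rendering": place the model at the given pose.
    return points + pose

# Observations of the object in two different modalities.
obs_a = render(model_points, true_pose)          # e.g. a depth-like channel
obs_b = render(model_points, true_pose) * 2.0    # e.g. a scaled RGB-like channel

def residual(pose):
    # Compare renderings of the model at `pose` against both modalities.
    ra = np.linalg.norm(render(model_points, pose) - obs_a)
    rb = np.linalg.norm(render(model_points, pose) * 2.0 - obs_b)
    return ra + rb

def refine(pose, iters=200, step=0.5):
    # Iterative operations: propose a nearby pose, keep it if the combined
    # multimodal comparison improves, and shrink the search radius.
    best, best_err = pose.copy(), residual(pose)
    for _ in range(iters):
        cand = best + rng.normal(scale=step, size=2)
        err = residual(cand)
        if err < best_err:
            best, best_err = cand, err
            step *= 0.95
    return best, best_err

est, err = refine(np.zeros(2))
```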
  • Publication number: 20250083309
    Abstract: In various examples, systems and methods are disclosed relating to geometric fabrics for accelerated policy learning and sim-to-real transfer in robotics systems, platforms, and/or applications. For example, a system can provide an input indicative of a goal pose for a robot to a model to cause the model to generate an output, the output representing a plurality of points along a path for movement of the robot to the goal pose; and generate one or more control signals for operation of the robot based at least on the plurality of points along the path and a policy corresponding to one or more criteria for the operation of the robot. In examples, the system can provide the one or more control signals to the robot to cause the robot to move toward the goal pose.
    Type: Application
    Filed: April 25, 2024
    Publication date: March 13, 2025
    Applicant: NVIDIA Corporation
    Inventors: Nathan Donald Ratliff, Karl Van Wyk, Ankur Handa, Viktor Makoviichuk, Yijie Guo, Jie Xu, Tyler Lum, Balakumar Sundaralingam, Jingzhou Liu
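The two stages this abstract describes, a model that emits points along a path toward a goal pose and a policy that turns them into constrained control signals, can be sketched as below. The linear-interpolation "model" and the speed-limit policy are illustrative stand-ins, not the geometric-fabrics machinery of the publication.

```python
import numpy as np

def plan_waypoints(start, goal, n=5):
    # Stand-in for the model's output: n points interpolated toward the goal.
    return np.linspace(start, goal, n)

def waypoints_to_controls(waypoints, dt=0.1, max_speed=1.0):
    # Policy criterion: each commanded velocity is clipped to a speed limit
    # before being sent to the robot.
    controls = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        v = (b - a) / dt
        speed = np.linalg.norm(v)
        if speed > max_speed:
            v = v * (max_speed / speed)
        controls.append(v)
    return np.array(controls)
```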
  • Publication number: 20240338598
    Abstract: One embodiment of a method for generating simulation data to train a machine learning model includes generating a plurality of simulation environments based on a user input, and for each simulation environment included in the plurality of simulation environments: generating a plurality of tasks for a robot to perform within the simulation environment, performing one or more operations to determine a plurality of robot trajectories for performing the plurality of tasks, and generating simulation data for training a machine learning model by performing one or more operations to simulate the robot moving within the simulation environment according to the plurality of trajectories.
    Type: Application
    Filed: March 15, 2024
    Publication date: October 10, 2024
    Inventors: Caelan Reed Garrett, Fabio Tozeto Ramos, Iretiayo Akinola, Alperen Degirmenci, Clemens Eppner, Dieter Fox, Tucker Ryer Hermans, Ajay Uday Mandlekar, Arsalan Mousavian, Yashraj Shyam Narang, Rowland Wilde O'Flaherty, Balakumar Sundaralingam, Wei Yang
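The data-generation pipeline in this abstract (environments from user input, then tasks per environment, then trajectories per task, then simulation records) can be outlined with trivial stand-ins. All of the helpers below are hypothetical simplifications; the publication's environments, planners, and simulator are far richer.

```python
import random

def generate_environments(user_input, n_envs=3, seed=0):
    # Stand-in: each environment is a handful of randomized object positions.
    rng = random.Random(seed)
    return [{"name": f"{user_input}-{i}",
             "objects": [(rng.random(), rng.random()) for _ in range(4)]}
            for i in range(n_envs)]

def generate_tasks(env, n_tasks=2):
    # Stand-in tasks: reach each of the first n object positions.
    return [{"goal": pos} for pos in env["objects"][:n_tasks]]

def plan_trajectory(task, steps=5):
    # Straight-line "trajectory" from the origin to the task goal.
    gx, gy = task["goal"]
    return [(gx * t / (steps - 1), gy * t / (steps - 1)) for t in range(steps)]

def build_dataset(user_input):
    # One training record per (environment, task) pair.
    records = []
    for env in generate_environments(user_input):
        for task in generate_tasks(env):
            records.append({"env": env["name"], "goal": task["goal"],
                            "trajectory": plan_trajectory(task)})
    return records
```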
  • Patent number: 12017352
    Abstract: Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint space coordinates of a robot by simulating a plurality of robots at various joint angles, and determining the position of their respective manipulators in task space.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: June 25, 2024
    Assignee: NVIDIA Corporation
    Inventors: Visak Chadalavada Vijay Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stanley Thomas Birchfield
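The approach in this patent, learning the task-space-to-joint-space mapping from many simulated robot configurations, can be illustrated with a planar 2-link arm. A nearest-neighbour lookup over simulated forward-kinematics samples stands in for the trained neural network; the link lengths and sample counts are arbitrary.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # link lengths of an illustrative planar 2-link arm

def forward_kinematics(q):
    # End-effector position in task space for joint angles q = (q1, q2).
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Simulate many joint configurations and record their task-space positions,
# as the patent does to build training data.
rng = np.random.default_rng(1)
joints = rng.uniform(-np.pi, np.pi, size=(5000, 2))
positions = np.array([forward_kinematics(q) for q in joints])

def predict_joints(target):
    # Nearest-neighbour stand-in for the trained network: return the joint
    # angles whose simulated end-effector lies closest to `target`.
    idx = np.argmin(np.linalg.norm(positions - target, axis=1))
    return joints[idx]
```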
  • Publication number: 20240131706
    Abstract: Apparatuses, systems, and techniques to perform collision-free motion generation (e.g., to operate a real-world or virtual robot). In at least one embodiment, at least a portion of the collision-free motion generation is performed in parallel.
    Type: Application
    Filed: May 22, 2023
    Publication date: April 25, 2024
    Inventors: Balakumar Sundaralingam, Siva Kumar Sastry Hari, Adam Harper Fishman, Caelan Reed Garrett, Alexander James Millane, Elena Oleynikova, Ankur Handa, Fabio Tozeto Ramos, Nathan Donald Ratliff, Karl Van Wyk, Dieter Fox
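A batched sketch of collision-free motion generation: many candidate trajectories are costed at once with vectorized NumPy, standing in for the parallel evaluation the publication describes. The single-sphere obstacle model and the scoring rule are illustrative assumptions, not the actual algorithm.

```python
import numpy as np

obstacle, radius = np.array([0.5, 0.5]), 0.3

def batch_collision_cost(trajectories):
    # trajectories: (B, T, 2). The whole batch is evaluated in one shot;
    # this vectorization stands in for performing motion generation in parallel.
    d = np.linalg.norm(trajectories - obstacle, axis=-1)   # (B, T)
    return np.maximum(0.0, radius - d).sum(axis=-1)        # (B,) penetration

def plan(start, goal, batch=256, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    line = np.linspace(start, goal, steps)                 # (T, 2)
    noise = rng.normal(scale=0.3, size=(batch, steps, 2))
    noise[:, 0] = noise[:, -1] = 0.0                       # keep endpoints fixed
    candidates = np.concatenate([line[None], line + noise])
    costs = batch_collision_cost(candidates)
    lengths = np.linalg.norm(np.diff(candidates, axis=1), axis=-1).sum(axis=-1)
    # Heavily penalize collisions, then prefer the shortest trajectory.
    best = np.argmin(costs * 1e3 + lengths)
    return candidates[best], float(costs[best])
```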
  • Publication number: 20240095527
    Abstract: Systems and techniques are described related to training one or more machine learning models for use in control of a robot. In at least one embodiment, one or more machine learning models are trained based at least on simulations of the robot and renderings of such simulations—which may be performed using one or more ray tracing algorithms, operations, or techniques.
    Type: Application
    Filed: August 10, 2023
    Publication date: March 21, 2024
    Inventors: Ankur Handa, Gavriel State, Arthur David Allshire, Dieter Fox, Jean-Francois Victor Lafleche, Jingzhou Liu, Viktor Makoviichuk, Yashraj Shyam Narang, Aleksei Vladimirovich Petrenko, Ritvik Singh, Balakumar Sundaralingam, Karl Van Wyk, Alexander Zhurkevich
  • Publication number: 20240066710
    Abstract: One embodiment of a method for controlling a robot includes generating a representation of spatial occupancy within an environment based on a plurality of red, green, blue (RGB) images of the environment, determining one or more actions for the robot based on the representation of spatial occupancy and a goal, and causing the robot to perform at least a portion of a movement based on the one or more actions.
    Type: Application
    Filed: February 13, 2023
    Publication date: February 29, 2024
    Inventors: Balakumar Sundaralingam, Stanley Birchfield, Zhenggang Tang, Jonathan Tremblay, Stephen Tyree, Bowen Wen, Ye Yuan, Charles Loop
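The two steps of this abstract, building a spatial-occupancy representation and choosing actions from it toward a goal, can be sketched on a grid. Deriving occupancy from actual RGB images is the hard part the application addresses; here `build_occupancy` just rasterizes given points, and the greedy `next_action` is an assumed toy policy.

```python
import numpy as np

def build_occupancy(points, size=5):
    # Stand-in for occupancy inferred from RGB images: mark the grid cells
    # that contain any observed point (2D here for brevity).
    grid = np.zeros((size, size), dtype=bool)
    for x, y in points:
        grid[int(x), int(y)] = True
    return grid

def next_action(grid, pos, goal):
    # Greedy step: among 4-connected moves, pick the free cell that
    # most reduces the distance to the goal; stay put if none does.
    best, best_d = pos, np.hypot(*(np.subtract(pos, goal)))
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = pos[0] + dx, pos[1] + dy
        in_bounds = 0 <= nx < grid.shape[0] and 0 <= ny < grid.shape[1]
        if in_bounds and not grid[nx, ny]:
            d = np.hypot(nx - goal[0], ny - goal[1])
            if d < best_d:
                best, best_d = (nx, ny), d
    return best
```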
  • Publication number: 20230294277
    Abstract: Approaches presented herein provide for predictive control of a robot or automated assembly in performing a specific task. A task to be performed may depend on the location and orientation of the robot performing that task. A predictive control system can determine a state of a physical environment at each of a series of time steps, and can select an appropriate location and orientation at each of those time steps. At individual time steps, an optimization process can determine a sequence of future motions or accelerations to be taken that comply with one or more constraints on that motion. For example, at individual time steps, a respective action in the sequence may be performed, then another motion sequence predicted for a next time step, which can help drive robot motion based upon predicted future motion and allow for quick reactions.
    Type: Application
    Filed: June 30, 2022
    Publication date: September 21, 2023
    Inventors: Wei Yang, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Yu-Wei Chao, Dieter Fox, Iretiayo Akinola
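The receding-horizon loop described here can be sketched for a 1D point mass: at each time step, sample acceleration sequences that comply with a constraint, keep only the best sequence's first action, then replan from the new state. The sampling optimizer, cost terms, and constants are illustrative assumptions, not the patent's optimization process.

```python
import numpy as np

def rollout(x, v, accels, dt=0.1):
    # Simulate a 1D point mass through a sequence of accelerations.
    for a in accels:
        v += a * dt
        x += v * dt
    return x, v

def mpc_step(x, v, goal, rng, horizon=5, samples=64, a_max=2.0):
    # Sample sequences that respect the |a| <= a_max constraint, score each
    # by terminal distance to the goal plus a small velocity penalty, and
    # return only the FIRST action of the best one (receding horizon).
    seqs = rng.uniform(-a_max, a_max, size=(samples, horizon))
    costs = []
    for s in seqs:
        xe, ve = rollout(x, v, s)
        costs.append(abs(xe - goal) + 0.1 * abs(ve))
    return seqs[int(np.argmin(costs))][0]

def run(goal=1.0, steps=10, seed=0):
    # At each time step: predict a sequence, execute its first action, replan.
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = mpc_step(x, v, goal, rng)
        x, v = rollout(x, v, [a])
    return x
```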
  • Publication number: 20230294276
    Abstract: Approaches presented herein provide for simulation of human motion for human-robot interactions, such as may involve a handover of an object. Motion capture can be performed for a hand grasping and moving an object to a location and orientation appropriate for a handover, without a need for a robot to be present or an actual handover to occur. This motion data can be used to separately model the hand and the object for use in a handover simulation, where a component such as a physics engine may be used to ensure realistic modeling of the motion or behavior. During a simulation, a robot control model or algorithm can predict an optimal location and orientation to grasp an object, and an optimal path to move to that location and orientation, using a control model or algorithm trained, based at least in part, using the motion models for the hand and object.
    Type: Application
    Filed: December 30, 2022
    Publication date: September 21, 2023
    Inventors: Yu-Wei Chao, Yu Xiang, Wei Yang, Dieter Fox, Chris Paxton, Balakumar Sundaralingam, Maya Cakmak
  • Patent number: 11745347
    Abstract: Candidate grasping models of a deformable object are applied to generate a simulation of a response of the deformable object to the grasping model. From the simulation, grasp performance metrics for stress, deformation controllability, and instability of the response to the grasping model are obtained, and the grasp performance metrics are correlated with robotic grasp features.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: September 5, 2023
    Assignee: NVIDIA Corp.
    Inventors: Isabella Huang, Yashraj Shyam Narang, Clemens Eppner, Balakumar Sundaralingam, Miles Macklin, Tucker Ryer Hermans, Dieter Fox
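The three grasp performance metrics named in the abstract (stress, deformation controllability, instability) can be computed from simulated quantities roughly as below. The exact definitions here are assumed proxies; the patent does not specify these formulas.

```python
import numpy as np

def grasp_metrics(stress_field, rest_shape, grasped_shape, pose_trace):
    # Assumed proxies for the three metrics:
    #   stress       - peak material stress in the simulated object
    #   deformation  - mean vertex displacement under the grasp
    #   instability  - spread of the object's pose over the simulation
    peak_stress = float(np.max(stress_field))
    deformation = float(np.mean(np.linalg.norm(grasped_shape - rest_shape,
                                               axis=1)))
    instability = float(np.std(pose_trace))
    return {"stress": peak_stress, "deformation": deformation,
            "instability": instability}
```

In the patent's pipeline, metrics like these would then be correlated with robotic grasp features across many candidate grasping models.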
  • Publication number: 20230271330
    Abstract: Approaches presented herein provide for a framework to integrate human-provided feedback in natural language to update a robot planning cost or value. The natural language feedback may be modeled as a cost or value associated with completing a task assigned to the robot. This cost or value may then be added to an initial task cost or value to update one or more actions to be performed by the robot. The framework can be applied to both real-world and simulated environments where the robot may receive instructions, in natural language, that either provide a goal, modify an existing goal, or provide constraints to actions to achieve an existing goal.
    Type: Application
    Filed: November 15, 2022
    Publication date: August 31, 2023
    Inventors: Balakumar Sundaralingam, Pratyusha Sharma, Christopher Jason Paxton, Valts Blukis, Tucker Hermans, Dieter Fox
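The framework's central idea, adding a language-derived cost to the initial task cost, can be sketched as follows. The keyword matching is purely illustrative: the publication models the feedback with learned components, not hand-written rules, and the state fields used here are hypothetical.

```python
def base_cost(state):
    # Initial task cost: distance of the end effector to the goal position.
    return abs(state["ee_x"] - state["goal_x"])

def language_cost(feedback):
    # Hypothetical mapping from natural-language feedback to a cost term.
    if "slow" in feedback:
        return lambda s: 2.0 * abs(s["speed"])      # "move slower"
    if "higher" in feedback:
        return lambda s: max(0.0, 0.5 - s["ee_z"])  # "lift it higher"
    return lambda s: 0.0

def total_cost(state, feedback):
    # The language-derived cost is simply added to the initial task cost.
    return base_cost(state) + language_cost(feedback)(state)
```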
  • Publication number: 20230256595
    Abstract: One embodiment of a method for controlling a robot includes receiving sensor data associated with an environment that includes an object; applying a machine learning model to a portion of the sensor data associated with the object and one or more trajectories of motion of the robot to determine one or more path lengths of the one or more trajectories; generating a new trajectory of motion of the robot based on the one or more trajectories and the one or more path lengths; and causing the robot to perform one or more movements based on the new trajectory.
    Type: Application
    Filed: July 1, 2022
    Publication date: August 17, 2023
    Inventors: Adithyavairavan Murali, Balakumar Sundaralingam, Yun-Chun Chen, Dieter Fox, Animesh Garg
  • Publication number: 20230145208
    Abstract: Apparatuses, systems, and techniques to train a machine learning model. In at least one embodiment, a first machine learning model is trained to infer a concept based on first information, training data is labeled using the first machine learning model, and a second machine learning model is trained to infer the concept using the labeled training data.
    Type: Application
    Filed: November 7, 2022
    Publication date: May 11, 2023
    Inventors: Andreea Bobu, Balakumar Sundaralingam, Christopher Jason Paxton, Maya Cakmak, Wei Yang, Yu-Wei Chao, Dieter Fox
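The teacher-student scheme in this abstract (train a first model on labeled data, label more data with it, train a second model on those labels) can be demonstrated with a deliberately tiny 1D threshold classifier standing in for both machine learning models:

```python
def train_threshold(xs, ys):
    # Fit a 1D threshold classifier: predict 1 when x >= t.
    best_t, best_acc = 0.0, -1.0
    for t in sorted(xs):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Step 1: the first model learns the concept from a small labeled set.
labeled_x = [0.1, 0.2, 0.8, 0.9]
labeled_y = [0, 0, 1, 1]
teacher_t = train_threshold(labeled_x, labeled_y)

# Step 2: that model labels a larger unlabeled pool.
pool = [0.05, 0.3, 0.6, 0.7, 0.95]
pseudo_labels = [int(x >= teacher_t) for x in pool]

# Step 3: the second model is trained on the pseudo-labeled data.
student_t = train_threshold(pool, pseudo_labels)
```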
  • Publication number: 20220318459
    Abstract: Apparatuses, systems, and techniques to model a tactile force sensor. In at least one embodiment, the output of a tactile sensor is predicted from a modeled force and shape imposed on the sensor. In at least one embodiment, a shape of the surface of the tactile sensor is determined based at least in part on electrical signals received from the sensor.
    Type: Application
    Filed: March 25, 2021
    Publication date: October 6, 2022
    Inventors: Yashraj Shyam Narang, Balakumar Sundaralingam, Karl Van Wyk, Arsalan Mousavian, Miles Macklin, Dieter Fox
  • Publication number: 20220297297
    Abstract: Candidate grasping models of a deformable object are applied to generate a simulation of a response of the deformable object to the grasping model. From the simulation, grasp performance metrics for stress, deformation controllability, and instability of the response to the grasping model are obtained, and the grasp performance metrics are correlated with robotic grasp features.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 22, 2022
    Applicant: NVIDIA Corp.
    Inventors: Isabella Huang, Yashraj Shyam Narang, Clemens Eppner, Balakumar Sundaralingam, Miles Macklin, Tucker Ryer Hermans, Dieter Fox
  • Publication number: 20220134537
    Abstract: Apparatuses, systems, and techniques to map coordinates in task space to a set of joint angles of an articulated robot. In at least one embodiment, a neural network is trained to map task-space coordinates to joint space coordinates of a robot by simulating a plurality of robots at various joint angles, and determining the position of their respective manipulators in task space.
    Type: Application
    Filed: February 16, 2021
    Publication date: May 5, 2022
    Inventors: Visak Chadalavada Vijay Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stanley Thomas Birchfield
  • Publication number: 20200301510
    Abstract: A computer system generates a tactile force model for a tactile force sensor by performing a number of calibration tasks. In various embodiments, the calibration tasks include pressing the tactile force sensor while the tactile force sensor is attached to a pressure gauge, interacting with a ball, and pushing an object along a planar surface. Data collected from these calibration tasks is used to train a neural network. The resulting tactile force model allows the computer system to convert signals received from the tactile force sensor into a force magnitude and direction with greater accuracy than conventional methods. In an embodiment, force on the tactile force sensor is inferred by interacting with an object, determining the motion of the object, and estimating the forces on the object based on a physical model of the object.
    Type: Application
    Filed: March 19, 2019
    Publication date: September 24, 2020
    Inventors: Stan Birchfield, Byron Boots, Dieter Fox, Ankur Handa, Nathan Ratliff, Balakumar Sundaralingam, Alexander Lambert
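One branch of this abstract infers force on the sensor from the observed motion of an object and a physical model of that object. A minimal version uses F = m·a over finite-differenced positions; the real system trains a neural network on data from the calibration tasks, so this is only an assumed illustration of the physical-model step.

```python
import numpy as np

def estimate_force(positions, dt, mass):
    # Infer the net force on an object from its observed 1D motion using a
    # simple physical model: differentiate position twice, then F = m * a.
    positions = np.asarray(positions, dtype=float)
    vel = np.gradient(positions, dt)   # finite-difference velocity
    acc = np.gradient(vel, dt)         # finite-difference acceleration
    return mass * acc
```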