Patents by Inventor Ankur Handa
Ankur Handa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240131706
Abstract: Apparatuses, systems, and techniques to perform collision-free motion generation (e.g., to operate a real-world or virtual robot). In at least one embodiment, at least a portion of the collision-free motion generation is performed in parallel.
Type: Application
Filed: May 22, 2023
Publication date: April 25, 2024
Inventors: Balakumar Sundaralingam, Siva Kumar Sastry Hari, Adam Harper Fishman, Caelan Reed Garrett, Alexander James Millane, Elena Oleynikova, Ankur Handa, Fabio Tozeto Ramos, Nathan Donald Ratliff, Karl Van Wyk, Dieter Fox
-
Publication number: 20240100694
Abstract: Systems and techniques to control a robot are described herein. In at least one embodiment, a machine learning model for controlling a robot is trained based at least on one or more population-based training operations or one or more reinforcement learning operations. Once trained, the machine learning model can be deployed and used to control a robot to perform a task.
Type: Application
Filed: June 7, 2023
Publication date: March 28, 2024
Inventors: Ankur HANDA, Gavriel STATE, Arthur David ALLSHIRE, Victor MAKOVIICHUK, Aleksei Vladimirovich PETRENKO
-
Publication number: 20240095527
Abstract: Systems and techniques are described related to training one or more machine learning models for use in control of a robot. In at least one embodiment, one or more machine learning models are trained based at least on simulations of the robot and renderings of such simulations, which may be performed using one or more ray tracing algorithms, operations, or techniques.
Type: Application
Filed: August 10, 2023
Publication date: March 21, 2024
Inventors: Ankur HANDA, Gavriel STATE, Arthur David ALLSHIRE, Dieter FOX, Jean-Francois Victor LAFLECHE, Jingzhou LIU, Viktor MAKOVIICHUK, Yashraj Shyam NARANG, Aleksei Vladimirovich PETRENKO, Ritvik SINGH, Balakumar SUNDARALINGAM, Karl VAN WYK, Alexander ZHURKEVICH
-
Publication number: 20230405820
Abstract: Apparatuses, systems, and techniques to generate a predicted outcome of an object resulting from a robotic component applying a force. In at least one embodiment, a predicted outcome of an object resulting from a robotic component applying a force is generated based on, for example, a neural network.
Type: Application
Filed: June 12, 2023
Publication date: December 21, 2023
Inventors: Isabella Huang, Yashraj Narang, Tucker Ryer Hermans, Fabio Tozeto Ramos, Ankur Handa, Miles Andrew Macklin, Dieter Fox
-
Publication number: 20230321822
Abstract: One embodiment of a method for controlling a robot includes performing a plurality of simulations of a robot interacting with one or more objects represented by one or more signed distance functions (SDFs), where performing the plurality of simulations comprises reducing a number of contacts between the one or more objects that are being simulated, and updating one or more parameters of a machine learning model based on the plurality of simulations to generate a trained machine learning model.
Type: Application
Filed: December 2, 2022
Publication date: October 12, 2023
Inventors: Yashraj Shyam NARANG, Kier STOREY, Iretiayo AKINOLA, Dieter FOX, Kelly GUO, Ankur HANDA, Fengyun LU, Miles MACKLIN, Adam MORAVANSZKY, Philipp REIST, Gavriel STATE, Lukasz WAWRZYNIAK
-
Publication number: 20230191596
Abstract: A technique for training a neural network, including generating a plurality of input vectors based on a first plurality of task demonstrations associated with a first robot performing a first task in a simulated environment, wherein each input vector included in the plurality of input vectors specifies a sequence of poses of an end-effector of the first robot, and training the neural network to generate a plurality of output vectors based on the plurality of input vectors. Another technique for generating a task demonstration, including generating a simulated environment that includes a robot and at least one object, causing the robot to at least partially perform a task associated with the at least one object within the simulated environment based on a first output vector generated by a trained neural network, and recording demonstration data of the robot at least partially performing the task within the simulated environment.
Type: Application
Filed: March 15, 2022
Publication date: June 22, 2023
Inventors: Ankur HANDA, Iretiayo AKINOLA, Dieter FOX, Yashraj Shyam NARANG
-
Publication number: 20230191605
Abstract: A technique for training a neural network, including generating a plurality of input vectors based on a first plurality of task demonstrations associated with a first robot performing a first task in a simulated environment, wherein each input vector included in the plurality of input vectors specifies a sequence of poses of an end-effector of the first robot, and training the neural network to generate a plurality of output vectors based on the plurality of input vectors. Another technique for generating a task demonstration, including generating a simulated environment that includes a robot and at least one object, causing the robot to at least partially perform a task associated with the at least one object within the simulated environment based on a first output vector generated by a trained neural network, and recording demonstration data of the robot at least partially performing the task within the simulated environment.
Type: Application
Filed: March 15, 2022
Publication date: June 22, 2023
Inventors: Ankur HANDA, Iretiayo AKINOLA, Dieter FOX, Yashraj Shyam NARANG
-
Publication number: 20230169329
Abstract: Systems and methods related to incorporating uncertain inputs into a neural network are described herein. A distribution is obtained and processed by a Reproducing Kernel Hilbert Space (RKHS) module to generate an embedding that represents the distribution. The features of the embedding may correspond to a number of Random Fourier Features (RFFs). The embedding can be added to additional features to form an aggregate input for the neural network. The neural network then processes the aggregate input to generate an output based on, at least in part, the embedding of the distribution. In some embodiments, a simulation can be run to generate a distribution for a feature, where each simulator instance generates a different sample for the feature over a plurality of time steps of the simulation. In some embodiments, the output neural network can be used to control robotic systems, vehicles, or other systems.
Type: Application
Filed: December 1, 2021
Publication date: June 1, 2023
Inventors: Fabio Tozeto Ramos, Rika Antonova, Ankur Handa, Dieter Fox
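The RFF embedding this abstract describes can be sketched as below. This is a generic Random Fourier Features approximation of an RBF kernel mean embedding; the feature count, bandwidth, and sampling scheme are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def rff_embedding(samples, num_features=64, bandwidth=1.0, seed=0):
    """Embed an empirical distribution (array of samples) into a
    fixed-length feature vector using Random Fourier Features that
    approximate an RBF kernel mean embedding."""
    samples = np.atleast_2d(samples)          # (n_samples, dim)
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel
    W = rng.normal(0.0, 1.0 / bandwidth, size=(dim, num_features))
    b = rng.uniform(0.0, 2 * np.pi, size=num_features)
    # Average the feature map over samples -> kernel mean estimate
    phi = np.sqrt(2.0 / num_features) * np.cos(samples @ W + b)
    return phi.mean(axis=0)                   # (num_features,)

# Samples for one uncertain feature, e.g. from parallel simulator runs:
dist_samples = np.random.default_rng(1).normal(0.0, 0.1, size=(128, 3))
embedding = rff_embedding(dist_samples, num_features=32)
# Concatenate with other (deterministic) features as the network input:
aggregate_input = np.concatenate([embedding, np.zeros(8)])
```

The fixed-length embedding is what lets a distribution of arbitrary sample count feed a standard neural network input layer.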
-
Publication number: 20210122045
Abstract: Apparatuses, systems, and techniques are described that estimate the pose of an object while the object is being manipulated by a robotic appendage. In at least one embodiment, a sample-based optimization algorithm tracks in-hand object poses during manipulation via contact feedback and a GPU-accelerated robotic simulation is developed. In at least one embodiment, parallel simulations concurrently model object pose changes that may be caused by complex contact dynamics. In at least one embodiment, the optimization algorithm tunes simulation parameters during object pose tracking to further improve tracking performance. In various embodiments, real-world contact sensing may be improved by utilizing vision in-the-loop.
Type: Application
Filed: April 30, 2020
Publication date: April 29, 2021
Inventors: Ankur Handa, Karl Van Wyk, Viktor Makoviichuk, Dieter Fox
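The sample-based optimization here resembles a derivative-free search over candidate poses, each scored against observed feedback (in the patent, via parallel contact simulations). A toy 1-D sketch under those assumptions; the cost function, sampling noise, and elite fraction are illustrative, not the patented algorithm:

```python
import numpy as np

def sample_based_tracker(cost, init_pose, iters=20, num_samples=64,
                         sigma=0.5, elite_frac=0.25, seed=0):
    """Derivative-free pose estimation: perturb the current estimate,
    score each candidate (e.g., simulated vs. observed contact
    signals), and re-center on the best-scoring samples."""
    rng = np.random.default_rng(seed)
    pose = init_pose
    n_elite = max(1, int(num_samples * elite_frac))
    for _ in range(iters):
        candidates = pose + rng.normal(0.0, sigma, size=num_samples)
        scores = np.array([cost(c) for c in candidates])
        elites = candidates[np.argsort(scores)[:n_elite]]
        pose = elites.mean()
        sigma *= 0.9  # anneal the search radius
    return pose

# Toy cost: distance from a hidden true pose parameter.
true_pose = 1.3
estimate = sample_based_tracker(lambda p: abs(p - true_pose),
                                init_pose=0.0)
```

Because each candidate is scored independently, the inner loop parallelizes naturally, which is where GPU-accelerated simulation pays off.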
-
Publication number: 20210086364
Abstract: A human pilot controls a robotic arm and gripper by simulating a set of desired motions with the human hand. In at least one embodiment, one or more images of the pilot's hand are captured and analyzed to determine a set of hand poses. In at least one embodiment, the set of hand poses is translated to a corresponding set of robotic-gripper poses. In at least one embodiment, a set of motions is determined that perform the set of robotic-gripper poses, and the robot is directed to perform the set of motions.
Type: Application
Filed: July 17, 2020
Publication date: March 25, 2021
Inventors: Ankur Handa, Karl Van Wyk, Wei Yang, Yu-Wei Chao, Dieter Fox, Qian Wan
-
Patent number: 10915731
Abstract: Certain examples described herein enable semantically-labelled representations of a three-dimensional (3D) space to be generated from video data. In described examples, a 3D representation is a surface element or 'surfel' representation, where the geometry of the space is modelled using a plurality of surfaces that are defined within a 3D co-ordinate system. Object-label probability values for spatial elements of frames of video data may be determined using a two-dimensional image classifier. Surface elements that correspond to the spatial elements are identified based on a projection of the surface element representation using an estimated pose for a frame. Object-label probability values for the surface elements are then updated based on the object-label probability values for corresponding spatial elements. This results in a semantically-labelled 3D surface element representation of objects present in the video data.
Type: Grant
Filed: December 20, 2018
Date of Patent: February 9, 2021
Assignee: Imperial College Innovations Limited
Inventors: John Brendan Mccormac, Ankur Handa, Andrew Davison, Stefan Leutenegger
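The per-surfel label update described above amounts to recursively fusing 2D classifier probabilities into each surfel's running label distribution. A minimal sketch of one such fusion step, assuming a multiply-and-renormalize Bayesian update (the class count and association of pixels to surfels are illustrative):

```python
import numpy as np

def update_surfel_labels(surfel_probs, pixel_probs):
    """Fuse per-pixel object-label probabilities from a 2D classifier
    into running per-surfel label distributions.

    surfel_probs: (num_surfels, num_classes) current distributions
    pixel_probs:  (num_surfels, num_classes) classifier output for the
                  pixels each surfel projects to in the current frame
    """
    fused = surfel_probs * pixel_probs       # recursive Bayesian update
    return fused / fused.sum(axis=1, keepdims=True)

# One surfel, three object classes, starting from a uniform prior:
prior = np.full((1, 3), 1.0 / 3.0)
frame_obs = np.array([[0.7, 0.2, 0.1]])
posterior = update_surfel_labels(prior, frame_obs)
# Repeated consistent observations sharpen the distribution:
posterior = update_surfel_labels(posterior, frame_obs)
```

Each new frame's classifier output thus refines the 3D semantic map rather than overwriting it.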
-
Publication number: 20200306960
Abstract: A machine-learning control system is trained to perform a task using a simulation. The simulation is governed by parameters that, in various embodiments, are not precisely known. In an embodiment, the parameters are specified with an initial value and expected range. After training on the simulation, the machine-learning control system attempts to perform the task in the real world. In an embodiment, the results of the attempt are compared to the expected results of the simulation, and the parameters that govern the simulation are adjusted so that the simulated result matches the real-world attempt. In an embodiment, the machine-learning control system is retrained on the updated simulation. In an embodiment, as additional real-world attempts are made, the simulation parameters are refined and the control system is retrained until the simulation is accurate and the control system is able to successfully perform the task in the real world.
Type: Application
Filed: April 1, 2019
Publication date: October 1, 2020
Inventors: Ankur Handa, Viktor Makoviichuk, Miles Macklin, Nathan Ratliff, Dieter Fox, Yevgen Chebotar, Jan Issac
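The parameter-refinement loop in this abstract can be illustrated with a toy example; the simulator, error signal, and proportional update rule below are illustrative assumptions rather than the patented method:

```python
def refine_sim_parameter(real_result, simulate, param, lr=0.5, iters=50):
    """Adjust a scalar simulation parameter so the simulated result
    matches an observed real-world result (a simple proportional
    update, standing in for the patent's adjustment step)."""
    for _ in range(iters):
        sim_result = simulate(param)
        error = real_result - sim_result
        param += lr * error  # nudge the parameter toward agreement
    return param

# Toy "simulator": result is linear in an unknown friction parameter.
simulate = lambda friction: 2.0 * friction
real_result = 1.0                  # observed from a real-world attempt
friction = refine_sim_parameter(real_result, simulate, param=0.0)
# simulate(friction) now matches real_result; the control system can
# then be retrained on the updated simulation.
```

Each real-world attempt supplies a new `real_result`, so the simulation and the retrained controller improve together over successive iterations.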
-
Publication number: 20200301510
Abstract: A computer system generates a tactile force model for a tactile force sensor by performing a number of calibration tasks. In various embodiments, the calibration tasks include pressing the tactile force sensor while the tactile force sensor is attached to a pressure gauge, interacting with a ball, and pushing an object along a planar surface. Data collected from these calibration tasks is used to train a neural network. The resulting tactile force model allows the computer system to convert signals received from the tactile force sensor into a force magnitude and direction with greater accuracy than conventional methods. In an embodiment, force on the tactile force sensor is inferred by interacting with an object, determining the motion of the object, and estimating the forces on the object based on a physical model of the object.
Type: Application
Filed: March 19, 2019
Publication date: September 24, 2020
Inventors: Stan Birchfield, Byron Boots, Dieter Fox, Ankur Handa, Nathan Ratliff, Balakumar Sundaralingam, Alexander Lambert
-
Publication number: 20190147220
Abstract: Certain examples described herein enable semantically-labelled representations of a three-dimensional (3D) space to be generated from video data. In described examples, a 3D representation is a surface element or 'surfel' representation, where the geometry of the space is modelled using a plurality of surfaces that are defined within a 3D co-ordinate system. Object-label probability values for spatial elements of frames of video data may be determined using a two-dimensional image classifier. Surface elements that correspond to the spatial elements are identified based on a projection of the surface element representation using an estimated pose for a frame. Object-label probability values for the surface elements are then updated based on the object-label probability values for corresponding spatial elements. This results in a semantically-labelled 3D surface element representation of objects present in the video data.
Type: Application
Filed: December 20, 2018
Publication date: May 16, 2019
Inventors: John Brendan MCCORMAC, Ankur HANDA, Andrew DAVISON, Stefan LEUTENEGGER
-
Patent number: 9507751
Abstract: Embodiments of the invention provide systems and methods for managing seed data in a computing system (e.g., middleware computing system). A disclosed server computer may include a processor and a memory coupled with and readable by the processor and storing therein a set of instructions which, when executed by the processor, cause the processor to perform a method. The method may include obtaining first input data via a graphical interface. The first input data indicates a first memory storage location of seed data. The seed data comprises data to initialize an application for operation. The method further includes accessing the seed data from the first memory storage location based on the first input data. The method includes storing data based on the seed data to a second memory storage location.
Type: Grant
Filed: October 24, 2013
Date of Patent: November 29, 2016
Assignee: Oracle International Corporation
Inventors: Ankur Handa, Xiaojun Chen, Venkat Srinivas, Ramrajesh Ravichandran, Jagdesh Veerapandian, Vijay Pandya
-
Publication number: 20150081832
Abstract: Embodiments of the invention provide systems and methods for managing seed data in a computing system (e.g., middleware computing system). A disclosed server computer may include a processor and a memory coupled with and readable by the processor and storing therein a set of instructions which, when executed by the processor, cause the processor to perform a method. The method may include obtaining first input data via a graphical interface. The first input data indicates a first memory storage location of seed data. The seed data comprises data to initialize an application for operation. The method further includes accessing the seed data from the first memory storage location based on the first input data. The method includes storing data based on the seed data to a second memory storage location.
Type: Application
Filed: October 24, 2013
Publication date: March 19, 2015
Applicant: Oracle International Corporation
Inventors: Ankur Handa, Xiaojun Chen, Venkat Srinivas, Ramrajesh Ravichandran, Jagdesh Veerapandian, Vijay Pandya