Patents by Inventor Umashankar Nagarajan
Umashankar Nagarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11691277
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
Type: Grant
Filed: July 19, 2021
Date of Patent: July 4, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Umashankar Nagarajan, Bianca Homberg
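The selection scheme this abstract describes can be sketched in a few lines. The candidate values, object features, and scoring function below are hypothetical stand-ins (the patent's actual machine learning models are not public); the sketch only illustrates the structure: score every candidate strategy for the observed object and execute the highest-scoring one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GraspStrategy:
    """One candidate strategy: a group of values that shape a grasp attempt."""
    direction: str   # e.g. "top" or "side"
    grasp_type: str  # e.g. "pinch" or "power"
    force: float     # grasp force (hypothetical units)

CANDIDATES = [
    GraspStrategy("top", "pinch", 5.0),
    GraspStrategy("top", "power", 20.0),
    GraspStrategy("side", "power", 15.0),
]

def predicted_success(strategy: GraspStrategy, object_features: dict) -> float:
    # Stand-in for the learned model: score a candidate for this object.
    score = 0.5
    if object_features.get("flat_on_table") and strategy.direction == "top":
        score += 0.3
    if object_features.get("fragile") and strategy.grasp_type == "pinch":
        score += 0.2
    return score

def select_strategy(object_features: dict) -> GraspStrategy:
    # Pick the candidate the model scores highest for this object.
    return max(CANDIDATES, key=lambda s: predicted_success(s, object_features))

best = select_strategy({"flat_on_table": True, "fragile": True})
```

In this toy scoring, a flat, fragile object selects the top-down pinch candidate.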
-
Publication number: 20230154015
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image, based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on the relative transform, and performing the task associated with the set of parameterized behaviors.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
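The core step in this abstract, estimating a relative transform from matched pixel-descriptor points, can be illustrated with a pure-Python 2D analogue. The patent operates on points of 3D space; this sketch solves the simpler 2D rigid-alignment problem (a Kabsch-style fit: center both matched point sets, recover the rotation from the cross-covariance, then the translation), which is an assumption for illustration only.

```python
import math

def relative_transform_2d(task_pts, teach_pts):
    """Estimate the rigid transform (rotation theta, translation tx, ty)
    mapping matched teaching-image points onto task-image points."""
    n = len(task_pts)
    cx_a = sum(p[0] for p in teach_pts) / n
    cy_a = sum(p[1] for p in teach_pts) / n
    cx_b = sum(p[0] for p in task_pts) / n
    cy_b = sum(p[1] for p in task_pts) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(teach_pts, task_pts):
        ax, ay = ax - cx_a, ay - cy_a
        bx, by = bx - cx_b, by - cy_b
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)   # best-fit rotation
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_b - (c * cx_a - s * cy_a)          # translation after rotating
    ty = cy_b - (s * cx_a + c * cy_a)
    return theta, tx, ty

# Teaching points rotated 90 degrees and shifted by (2, 3) in the task image:
theta, tx, ty = relative_transform_2d([(2, 3), (2, 4), (1, 3)],
                                      [(0, 0), (1, 0), (0, 1)])
```

The recovered transform can then drive the parameter update of the taught behaviors.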
-
Patent number: 11580724
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Grant
Filed: September 13, 2019
Date of Patent: February 14, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
-
Publication number: 20210347040
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
Type: Application
Filed: July 19, 2021
Publication date: November 11, 2021
Inventors: Umashankar Nagarajan, Bianca Homberg
-
Patent number: 11097418
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
Type: Grant
Filed: January 4, 2018
Date of Patent: August 24, 2021
Assignee: X DEVELOPMENT LLC
Inventors: Umashankar Nagarajan, Bianca Homberg
-
Patent number: 10981272
Abstract: Methods, systems, and apparatus, including computer-readable media, for robot grasp learning. In some implementations, grasp data describing grasp attempts by robots is received. A set of the grasp attempts that represent unsuccessful grasp attempts is identified. Based on the set of grasp attempts representing unsuccessful grasp attempts, a grasp model is trained using sensor data for the unsuccessful grasp attempts. After training the grasp model, a performance level of the trained grasp model is verified based on one or more simulations of grasp attempts. In response to verifying the performance level of the trained grasp model, the trained grasp model is provided to one or more robots.
Type: Grant
Filed: December 18, 2017
Date of Patent: April 20, 2021
Assignee: X Development LLC
Inventors: Umashankar Nagarajan, Devesh Yamparala
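The pipeline this abstract outlines (collect failures, train, verify in simulation, then deploy) can be sketched minimally. The log format, field names, and verification threshold below are hypothetical, and the training step itself is elided; only the gating structure is shown.

```python
def build_failure_set(grasp_logs):
    """Keep sensor data from unsuccessful attempts; per the abstract,
    these drive training of the grasp model. Log format is hypothetical."""
    return [a["sensor_data"] for a in grasp_logs if not a["success"]]

def verify_before_deploy(model, simulate_grasp, trials=200, threshold=0.8):
    """After training, check the model's performance level across simulated
    grasp attempts before providing it to robots."""
    successes = sum(1 for _ in range(trials) if simulate_grasp(model))
    return successes / trials >= threshold

logs = [
    {"sensor_data": "scan_a", "success": True},
    {"sensor_data": "scan_b", "success": False},
]
negatives = build_failure_set(logs)
```

Only a model that clears the simulated success-rate threshold would be pushed to the fleet.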
-
Patent number: 10955811
Abstract: Techniques described herein relate to using reduced-dimensionality embeddings generated from robot sensor data to identify predetermined semantic labels that guide robot interaction with objects. In various implementations, sensor data that includes data indicative of an object observed in an environment in which the robot operates is obtained from one or more sensors of a robot. The sensor data may be processed utilizing a first trained machine learning model to generate a first embedded feature vector that maps the data indicative of the object to an embedding space. Nearest neighbor(s) of the first embedded feature vector may be identified in the embedding space. Semantic label(s) may be identified based on the nearest neighbor(s). A given grasp option may be selected from enumerated grasp options previously associated with the semantic label(s). The robot may be operated to interact with the object based on the pose and using the given grasp option.
Type: Grant
Filed: July 17, 2020
Date of Patent: March 23, 2021
Assignee: X DEVELOPMENT LLC
Inventor: Umashankar Nagarajan
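The nearest-neighbor lookup at the heart of this family of patents can be sketched directly. The catalog entries, two-dimensional embeddings, labels, and grasp-option names below are all hypothetical (real embeddings would be high-dimensional vectors produced by the trained model); the sketch shows only the retrieval step: rank stored embeddings by distance and return the labels and grasp options of the closest ones.

```python
import math

# Hypothetical catalog: embedded feature vectors of previously seen objects,
# each paired with a semantic label and the grasp options for that label.
CATALOG = [
    ((0.9, 0.1), "mug",   ["top_pinch", "handle_pinch"]),
    ((0.1, 0.9), "plate", ["side_pinch"]),
    ((0.8, 0.2), "cup",   ["top_pinch"]),
]

def nearest_labels(embedding, k=1):
    """Return the semantic labels and grasp options of the k catalog
    entries whose embeddings lie closest to the query embedding."""
    ranked = sorted(CATALOG, key=lambda entry: math.dist(entry[0], embedding))
    return [(label, options) for _, label, options in ranked[:k]]

# An object embedded near (0.88, 0.12) falls closest to the "mug" entry.
matches = nearest_labels((0.88, 0.12), k=1)
```

A grasp option would then be selected from the options enumerated for the matched label.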
-
Publication number: 20210023707
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Publication number: 20200348642
Abstract: Techniques described herein relate to using reduced-dimensionality embeddings generated from robot sensor data to identify predetermined semantic labels that guide robot interaction with objects. In various implementations, sensor data that includes data indicative of an object observed in an environment in which the robot operates is obtained from one or more sensors of a robot. The sensor data may be processed utilizing a first trained machine learning model to generate a first embedded feature vector that maps the data indicative of the object to an embedding space. Nearest neighbor(s) of the first embedded feature vector may be identified in the embedding space. Semantic label(s) may be identified based on the nearest neighbor(s). A given grasp option may be selected from enumerated grasp options previously associated with the semantic label(s). The robot may be operated to interact with the object based on the pose and using the given grasp option.
Type: Application
Filed: July 17, 2020
Publication date: November 5, 2020
Inventor: Umashankar Nagarajan
-
Patent number: 10754318
Abstract: Techniques described herein relate to using reduced-dimensionality embeddings generated from robot sensor data to identify predetermined semantic labels that guide robot interaction with objects. In various implementations, sensor data obtained from one or more sensors of a robot includes data indicative of an object observed in an environment in which the robot operates. The sensor data is processed utilizing a first trained machine learning model to generate a first embedded feature vector that maps the data indicative of the object to an embedding space. Nearest neighbor(s) of the first embedded feature vector is identified in the embedding space. Semantic label(s) are identified based on the nearest neighbor(s). A given grasp option is selected from enumerated grasp options previously associated with the semantic label(s). The robot is operated to interact with the object based on the pose and using the given grasp option.
Type: Grant
Filed: December 21, 2017
Date of Patent: August 25, 2020
Assignee: X DEVELOPMENT LLC
Inventor: Umashankar Nagarajan
-
Publication number: 20190248003
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
Type: Application
Filed: January 4, 2018
Publication date: August 15, 2019
Inventors: Umashankar Nagarajan, Bianca Homberg
-
Publication number: 20190196436
Abstract: Techniques described herein relate to using reduced-dimensionality embeddings generated from robot sensor data to identify predetermined semantic labels that guide robot interaction with objects. In various implementations, sensor data that includes data indicative of an object observed in an environment in which the robot operates is obtained from one or more sensors of a robot. The sensor data may be processed utilizing a first trained machine learning model to generate a first embedded feature vector that maps the data indicative of the object to an embedding space. Nearest neighbor(s) of the first embedded feature vector may be identified in the embedding space. Semantic label(s) may be identified based on the nearest neighbor(s). A given grasp option may be selected from enumerated grasp options previously associated with the semantic label(s). The robot may be operated to interact with the object based on the pose and using the given grasp option.
Type: Application
Filed: December 21, 2017
Publication date: June 27, 2019
Inventor: Umashankar Nagarajan
-
Patent number: 10131053
Abstract: Methods and apparatus related to robot collision avoidance. One method may include: receiving robot instructions to be performed by a robot; at each of a plurality of control cycles of processor(s) of the robot: receiving trajectories to be implemented by actuators of the robot, wherein the trajectories define motion states for the actuators of the robot during the control cycle or a next control cycle, and wherein the trajectories are generated based on the robot instructions; determining, based on a current motion state of the actuators and the trajectories to be implemented, whether implementation of the trajectories by the actuators prevents any collision avoidance trajectory from being achieved; and selectively providing the trajectories or collision avoidance trajectories for operating the actuators of the robot during the control cycle or the next control cycle depending on a result of the determining.
Type: Grant
Filed: September 14, 2016
Date of Patent: November 20, 2018
Assignee: X DEVELOPMENT LLC
Inventors: Peter Pastor Sampedro, Umashankar Nagarajan
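The per-cycle gating logic this abstract describes can be illustrated with a one-dimensional toy: accept the planned command only if, after executing it for one cycle, a collision-avoidance (full-stop) trajectory would still be achievable; otherwise substitute braking now. Real robots apply this test jointly across many actuators; the single-axis kinematics and the braking-distance formula v²/(2a) here are an illustrative simplification.

```python
def can_stop_in_time(velocity, distance_to_obstacle, max_decel):
    """Braking distance at max deceleration must fit inside the
    remaining distance to the obstacle: v^2 / (2 * a) <= d."""
    return velocity ** 2 / (2.0 * max_decel) <= distance_to_obstacle

def choose_velocity_command(proposed_v, distance_to_obstacle, max_decel, dt):
    """One control cycle: keep the proposed command only if a collision
    avoidance trajectory remains achievable afterwards; otherwise brake."""
    travelled = proposed_v * dt                 # distance used this cycle
    remaining = distance_to_obstacle - travelled
    if remaining > 0 and can_stop_in_time(proposed_v, remaining, max_decel):
        return proposed_v                       # safe: execute planned motion
    return max(0.0, proposed_v - max_decel * dt)  # avoidance: brake now
```

Far from an obstacle the planned command passes through unchanged; close to one, the braking command takes over.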
-
Patent number: 10105847
Abstract: Methods, apparatus, systems, and computer-readable media are provided for detecting a geometric change in a robot's configuration and taking responsive action in instances where the geometric change is likely to impact operation of the robot. In various implementations, a geometric model of a robot in a selected pose may be obtained. Image data of the actual robot in the selected pose may also be obtained. The image data may be compared to the geometric model to detect a geometric difference between the geometric model and the actual robot. Output may be provided that is indicative of the geometric difference between the geometric model and the actual robot.
Type: Grant
Filed: June 8, 2016
Date of Patent: October 23, 2018
Assignee: X DEVELOPMENT LLC
Inventors: Craig Latimer, Umashankar Nagarajan
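The model-versus-observation comparison in this abstract reduces to a simple check once features have been extracted from the image data. The feature names, 3-D coordinates, and tolerance below are hypothetical; the sketch compares expected feature positions from the geometric model (in a selected pose) against observed positions and reports those displaced beyond tolerance.

```python
import math

def detect_geometric_change(model_points, observed_points, tolerance=0.01):
    """Report named robot features whose observed position deviates from
    the geometric model's expected position by more than `tolerance`
    (same length units on both sides, e.g. meters)."""
    changed = []
    for name, expected in model_points.items():
        observed = observed_points.get(name)
        if observed is not None and math.dist(expected, observed) > tolerance:
            changed.append(name)
    return changed

model = {"wrist": (0.0, 0.0, 0.5), "elbow": (0.0, 0.0, 0.3)}
observed = {"wrist": (0.0, 0.05, 0.5), "elbow": (0.0, 0.0, 0.3)}
drifted = detect_geometric_change(model, observed)
```

A non-empty result would trigger the responsive action (e.g. recalibration) the abstract mentions.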
-
Patent number: 10016332
Abstract: The control method for lower-limb assistive exoskeletons assists human movement by producing a desired dynamic response on the human leg. Wearing the exoskeleton replaces the leg's natural admittance with the equivalent admittance of the coupled system formed by the leg and the exoskeleton. The control goal is to make the leg obey an admittance model defined by target values of natural frequency, resonant peak magnitude and zero-frequency response. The control achieves these objectives via positive feedback of the leg's angular position and angular acceleration. The method achieves simultaneous performance and robust stability through a constrained optimization that maximizes the system's gain margins while ensuring the desired location of its dominant poles.
Type: Grant
Filed: December 5, 2017
Date of Patent: July 10, 2018
Assignee: HONDA MOTOR CO., LTD.
Inventors: Gabriel Aguirre-Ollinger, Umashankar Nagarajan, Ambarish Goswami
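The three target quantities named in this abstract (natural frequency, zero-frequency response, resonant peak magnitude) are standard properties of a second-order admittance. Assuming, for illustration only, a second-order leg model theta/tau = 1/(I s² + b s + k) — the patent's actual model may differ — they follow from textbook formulas:

```python
import math

def admittance_targets(inertia, damping, stiffness):
    """For a second-order admittance  theta/tau = 1/(I s^2 + b s + k),
    compute the quantities the abstract uses as control targets."""
    wn = math.sqrt(stiffness / inertia)                       # natural frequency (rad/s)
    zeta = damping / (2.0 * math.sqrt(stiffness * inertia))   # damping ratio
    dc = 1.0 / stiffness                                      # zero-frequency response
    peak = None
    if zeta < 1.0 / math.sqrt(2.0):   # a resonant peak exists only for light damping
        peak = dc / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2))
    return wn, dc, peak

# Example: I = 1 kg*m^2, b = 0.4 N*m*s/rad, k = 4 N*m/rad.
wn, dc, peak = admittance_targets(1.0, 0.4, 4.0)
```

The controller's job, per the abstract, is to move the coupled leg-exoskeleton system's admittance toward chosen values of these three quantities.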
-
Patent number: 9981383
Abstract: Methods, apparatus, systems, and computer readable media are provided for real-time generation of trajectories for actuators of a robot, where the trajectories are generated to lessen the chance of collision with one or more objects in the environment of the robot. In some implementations, a real-time trajectory generator is used to generate trajectories for actuators of a robot based on a current motion state of the actuators, a target motion state of the actuators, and kinematic motion constraints of the actuators. The acceleration constraints and/or other kinematic constraints that are used by the real-time trajectory generator to generate trajectories at a given time are determined so as to lessen the chance of collision with one or more obstacles in the environment of the robot.
Type: Grant
Filed: August 2, 2016
Date of Patent: May 29, 2018
Assignee: X DEVELOPMENT LLC
Inventor: Umashankar Nagarajan
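To see what "kinematic motion constraints" buy a trajectory generator, consider the simplest case: a rest-to-rest move under velocity and acceleration limits. The patent's real-time generator is not public; this sketch only computes the minimum duration of the classic trapezoidal (or triangular, when the velocity limit is never reached) profile that such constraints induce.

```python
import math

def min_move_duration(distance, v_max, a_max):
    """Minimum time for a rest-to-rest move of `distance` under a velocity
    limit v_max and acceleration limit a_max. Accelerating to v_max and
    back down consumes v_max^2 / a_max of distance; shorter moves use a
    triangular profile, longer ones add a constant-velocity cruise."""
    d_accel = v_max ** 2 / a_max
    if distance <= d_accel:                     # triangular profile
        return 2.0 * math.sqrt(distance / a_max)
    cruise = (distance - d_accel) / v_max       # trapezoidal profile
    return 2.0 * v_max / a_max + cruise
```

Tightening a_max (e.g. near obstacles) lengthens the move but bounds how violently the actuator can be driven.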
-
Patent number: 9981381
Abstract: Methods, apparatus, systems, and computer readable media are provided for generating phase synchronized trajectories for actuators of a robot to enable the actuators of the robot to transition from a current motion state to a target motion state. Phase synchronized trajectories produce motion of a reference point of the robot in a one-dimensional straight line in a multi-dimensional space. For example, phase synchronized trajectories of a plurality of actuators that control the movement of an end effector may cause a reference point of the end effector to move in a straight line in Cartesian space. In some implementations, phase synchronized trajectories may be generated and utilized even when those phase synchronized trajectories are less time-optimal than one or more other non-phase synchronized trajectories.
Type: Grant
Filed: June 8, 2016
Date of Patent: May 29, 2018
Assignee: X DEVELOPMENT LLC
Inventor: Umashankar Nagarajan
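The geometric idea behind phase synchronization can be shown in a few lines: if every actuator tracks the same shared phase variable, each covers the same fraction of its displacement at every instant, so the multi-dimensional path is a straight line. The sketch below operates in an abstract multi-dimensional space; whether that line is also a Cartesian straight line for the end effector depends on the robot's kinematics, which this illustration ignores.

```python
def phase_synchronized_waypoints(start, target, steps):
    """Interpolate every coordinate with one shared phase in (0, 1], so
    the path from `start` to `target` is a straight line in the
    multi-dimensional space of the coordinates."""
    waypoints = []
    for i in range(1, steps + 1):
        phase = i / steps   # identical phase for all coordinates
        waypoints.append(tuple(s + phase * (t - s)
                               for s, t in zip(start, target)))
    return waypoints

path = phase_synchronized_waypoints((0.0, 0.0), (2.0, 4.0), 2)
```

Per the abstract, this straight-line behavior can be worth keeping even when a non-synchronized trajectory would finish sooner.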
-
Patent number: 9975244
Abstract: Methods, apparatus, systems, and computer readable media are provided for generating updated robot actuator trajectories in response to violation of torque constraints and/or other constraints in previously generated robot actuator trajectories. A real-time trajectory generator is used to generate trajectories for actuators of a robot based on a current motion state of the actuators, a target motion state of the actuators, and kinematic motion constraints of the actuators. The generated trajectory of each of the actuators is analyzed to determine whether a violation of at least one additional constraint occurs. In response to determining violation(s) of the additional constraint, one or more new kinematic motion constraints of the actuators are determined based on the violation(s).
Type: Grant
Filed: August 2, 2016
Date of Patent: May 22, 2018
Assignee: X DEVELOPMENT LLC
Inventor: Umashankar Nagarajan
-
Publication number: 20180098907
Abstract: The control method for lower-limb assistive exoskeletons assists human movement by producing a desired dynamic response on the human leg. Wearing the exoskeleton replaces the leg's natural admittance with the equivalent admittance of the coupled system formed by the leg and the exoskeleton. The control goal is to make the leg obey an admittance model defined by target values of natural frequency, resonant peak magnitude and zero-frequency response. The control achieves these objectives via positive feedback of the leg's angular position and angular acceleration. The method achieves simultaneous performance and robust stability through a constrained optimization that maximizes the system's gain margins while ensuring the desired location of its dominant poles.
Type: Application
Filed: December 5, 2017
Publication date: April 12, 2018
Inventors: Gabriel AGUIRRE-OLLINGER, Umashankar Nagarajan, Ambarish Goswami
-
Patent number: 9907722
Abstract: The control method for lower-limb assistive exoskeletons assists human movement by producing a desired dynamic response on the human leg. Wearing the exoskeleton replaces the leg's natural admittance with the equivalent admittance of the coupled system formed by the leg and the exoskeleton. The control goal is to make the leg obey an admittance model defined by target values of natural frequency, resonant peak magnitude and zero-frequency response. The control achieves these objectives via positive feedback of the leg's angular position and angular acceleration. The method achieves simultaneous performance and robust stability through a constrained optimization that maximizes the system's gain margins while ensuring the desired location of its dominant poles.
Type: Grant
Filed: June 25, 2015
Date of Patent: March 6, 2018
Assignee: HONDA MOTOR CO., LTD.
Inventors: Gabriel Aguirre-Ollinger, Umashankar Nagarajan, Ambarish Goswami