Patents by Inventor Max Bajracharya
Max Bajracharya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12131529
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image, based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task by updating one or more parameters of a set of parameterized behaviors associated with the teaching image, based on the relative transform.
Type: Grant
Filed: January 18, 2023
Date of Patent: October 29, 2024
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
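The descriptor-matching and relative-transform pipeline this abstract describes could be sketched roughly as below. This is a minimal illustration, not the patented method: the abstract does not specify the matching metric or the transform solver, so nearest-neighbour descriptor matching and a Kabsch least-squares rigid fit are assumptions, and all function names are hypothetical.

```python
import numpy as np

def match_descriptors(task_desc, teach_desc):
    """Map each task-image pixel descriptor to its nearest teaching-image descriptor."""
    # Pairwise squared distances between the two descriptor sets (N x M).
    d2 = ((task_desc[:, None, :] - teach_desc[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)  # index of the best teaching match per task descriptor

def relative_transform(task_pts, teach_pts):
    """Least-squares rigid transform (Kabsch) from matched 3D points.

    Returns (R, t) such that task_pts ~= teach_pts @ R.T + t.
    """
    ct, cs = task_pts.mean(0), teach_pts.mean(0)
    H = (teach_pts - cs).T @ (task_pts - ct)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t
```

The resulting `(R, t)` is the kind of relative transform that could then re-parameterize a taught behavior (e.g., shift a taught grasp pose into the current task frame).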
-
Patent number: 11741701
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Grant
Filed: February 8, 2022
Date of Patent: August 29, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
-
Publication number: 20230154015
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image, based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task by updating one or more parameters of a set of parameterized behaviors associated with the teaching image, based on the relative transform.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Patent number: 11580724
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Grant
Filed: September 13, 2019
Date of Patent: February 14, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
-
Publication number: 20220374024
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Application
Filed: July 11, 2022
Publication date: November 24, 2022
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
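The voxel-map-versus-spherical-depth-map comparison this abstract describes could be sketched as below. The core idea is that if the current scan sees *past* a voxel along its ray, the obstacle that voxel represented has likely moved. Bin sizes, the stale-obstacle margin, and all names here are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def build_voxel_map(points, res=0.1):
    """Quantize 3D sensor points into a sparse set of occupied voxel indices."""
    return {tuple(v) for v in np.floor(np.asarray(points) / res).astype(int)}

def spherical_depth(points):
    """Depth per (azimuth, elevation) 1-degree bin, keeping the nearest return."""
    pts = np.asarray(points, dtype=float)
    az = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))
    el = np.degrees(np.arctan2(pts[:, 2], np.linalg.norm(pts[:, :2], axis=1)))
    r = np.linalg.norm(pts, axis=1)
    depth = {}
    bins = np.stack([np.round(az), np.round(el)], axis=1).astype(int)
    for key, d in zip(map(tuple, bins), r):
        depth[key] = min(depth.get(key, np.inf), float(d))
    return depth

def voxel_changed(voxel_center, depth_map, margin=0.2):
    """True if the current scan sees farther than this obstacle voxel's range,
    implying the obstacle it represents is no longer there."""
    p = np.asarray(voxel_center, dtype=float)
    a = int(round(np.degrees(np.arctan2(p[1], p[0]))))
    e = int(round(np.degrees(np.arctan2(p[2], np.linalg.norm(p[:2])))))
    r = float(np.linalg.norm(p))
    current = depth_map.get((a, e))
    return current is not None and current > r + margin
```

A mapping loop would run `voxel_changed` over the obstacle voxels and clear the ones the new scan has seen through, which matches the "updating the voxel map to reflect the change" step.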
-
Patent number: 11416003
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Grant
Filed: September 17, 2019
Date of Patent: August 16, 2022
Assignee: Boston Dynamics, Inc.
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
-
Publication number: 20220165057
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Application
Filed: February 8, 2022
Publication date: May 26, 2022
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
-
Patent number: 11288883
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Grant
Filed: September 13, 2019
Date of Patent: March 29, 2022
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
-
Publication number: 20210041887
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Application
Filed: September 17, 2019
Publication date: February 11, 2021
Applicant: Boston Dynamics, Inc.
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
-
Publication number: 20210027058
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
-
Publication number: 20210023707
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Patent number: 10891484
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Grant
Filed: February 6, 2019
Date of Patent: January 12, 2021
Assignee: X DEVELOPMENT LLC
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
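The selection step in this abstract, picking recognition modules from a library based on signals such as the robot's current task, could be sketched as a simple ranking function. The scoring rule (overlap between a module's declared object types and the task's object types) and every name below are hypothetical illustrations; the patent only says modules are "selected ... based on various signals".

```python
def select_modules(library, task, limit=2):
    """Rank candidate modules by how many of their declared object types
    the task mentions, then keep the top `limit` modules with any overlap."""
    def score(module):
        return len(set(module["object_types"]) & set(task["object_types"]))
    ranked = sorted(library, key=score, reverse=True)
    # Drop modules with no relevance to the task at all.
    return [m for m in ranked if score(m) > 0][:limit]
```

An object recognition client along the lines of the abstract would then download only the selected modules and run them against incoming vision data.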
-
Publication number: 20190171881
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Application
Filed: February 6, 2019
Publication date: June 6, 2019
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
-
Patent number: 10229317
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Grant
Filed: August 6, 2016
Date of Patent: March 12, 2019
Assignee: X DEVELOPMENT LLC
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
-
Patent number: 10078333
Abstract: Methods, apparatus, systems, and computer-readable media are provided for efficient mapping of a robot environment. In various implementations, a group of data points may be sensed by a three-dimensional sensor. One or more voxels of a three-dimensional voxel model that are occupied by the group of data points may be identified. For each occupied voxel, a column of the three-dimensional voxel model that contains the occupied voxel may be identified. Occupied voxels contained in each column may be indexed by elevation. In various implementations, one or more sparse linked data structures may be used to represent the columns.
Type: Grant
Filed: April 17, 2016
Date of Patent: September 18, 2018
Assignee: X DEVELOPMENT LLC
Inventor: Max Bajracharya
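The column-indexed voxel model this abstract describes, where each (x, y) column stores only its occupied elevations in a sparse structure, could look roughly like the following. The patent mentions sparse linked data structures; a dict of sorted elevation lists is used here as a stand-in, and the class and method names are invented for illustration.

```python
import bisect
from collections import defaultdict

import numpy as np

class ColumnVoxelMap:
    """Sparse voxel model: each (x, y) column stores only its occupied
    elevation indices, kept sorted so height queries stay cheap."""

    def __init__(self, res=0.1):
        self.res = res
        self.columns = defaultdict(list)   # (ix, iy) -> sorted list of iz

    def insert(self, points):
        """Quantize 3D points and record their elevations per column."""
        idx = np.floor(np.asarray(points) / self.res).astype(int)
        for ix, iy, iz in idx:
            col = self.columns[(ix, iy)]
            if iz not in col:
                bisect.insort(col, int(iz))

    def top_elevation(self, x, y):
        """Highest occupied elevation in the column containing (x, y), or None."""
        col = self.columns.get((int(x // self.res), int(y // self.res)))
        return col[-1] * self.res if col else None
```

Keeping only occupied elevations per column avoids allocating a dense 3D grid, which is the efficiency argument the abstract is making.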
-
Publication number: 20180039835
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Application
Filed: August 6, 2016
Publication date: February 8, 2018
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya