Patents by Inventor Max Bajracharya

Max Bajracharya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11741701
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: August 29, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
  • Publication number: 20230154015
    Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes updating, based on the relative transform, one or more parameters of a set of parameterized behaviors associated with the teaching image, and performing the task associated with the set of parameterized behaviors.
    Type: Application
    Filed: January 18, 2023
    Publication date: May 18, 2023
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
  • Patent number: 11580724
    Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: February 14, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
  • Publication number: 20220374024
    Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
    Type: Application
    Filed: July 11, 2022
    Publication date: November 24, 2022
    Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
  • Patent number: 11416003
    Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: August 16, 2022
    Assignee: Boston Dynamics, Inc.
    Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
  • Publication number: 20220165057
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Application
    Filed: February 8, 2022
    Publication date: May 26, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
  • Patent number: 11288883
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: March 29, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
  • Publication number: 20210041887
    Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
    Type: Application
    Filed: September 17, 2019
    Publication date: February 11, 2021
    Applicant: Boston Dynamics, Inc.
    Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
  • Publication number: 20210027058
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Application
    Filed: September 13, 2019
    Publication date: January 28, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
  • Publication number: 20210023707
    Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
    Type: Application
    Filed: September 13, 2019
    Publication date: January 28, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
  • Patent number: 10891484
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: January 12, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
  • Publication number: 20190171881
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
    Type: Application
    Filed: February 6, 2019
    Publication date: June 6, 2019
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
  • Patent number: 10229317
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
    Type: Grant
    Filed: August 6, 2016
    Date of Patent: March 12, 2019
    Assignee: X DEVELOPMENT LLC
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
  • Patent number: 10078333
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for efficient mapping of a robot environment. In various implementations, a group of data points may be sensed by a three-dimensional sensor. One or more voxels of a three-dimensional voxel model that are occupied by the group of data points may be identified. For each occupied voxel, a column of the three-dimensional voxel model that contains the occupied voxel may be identified. Occupied voxels contained in each column may be indexed by elevation. In various implementations, one or more sparse linked data structures may be used to represent the columns.
    Type: Grant
    Filed: April 17, 2016
    Date of Patent: September 18, 2018
    Assignee: X DEVELOPMENT LLC
    Inventor: Max Bajracharya
  • Publication number: 20180039835
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
    Type: Application
    Filed: August 6, 2016
    Publication date: February 8, 2018
    Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
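
Patent 11741701 and its related filings describe capturing the robot's current view, finding a stored keyframe image whose pixels match it, and performing the task tied to that keyframe. A minimal sketch of the matching step, assuming normalized cross-correlation as the pixel-similarity measure (the claims only require that a set of pixels match; the function name and threshold here are illustrative):

```python
import numpy as np

def best_matching_keyframe(current, keyframes, threshold=0.9):
    """Return the index of the stored keyframe whose pixels best match
    the current view, or None if no keyframe clears the threshold.

    `current` and each keyframe are float arrays of identical shape;
    similarity is normalized cross-correlation over all pixels."""
    best_idx, best_score = None, threshold
    c = (current - current.mean()) / (current.std() + 1e-9)
    for i, kf in enumerate(keyframes):
        k = (kf - kf.mean()) / (kf.std() + 1e-9)
        score = float((c * k).mean())  # 1.0 for an exact match
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

The robot would then dispatch the task associated with the returned keyframe index; how tasks are stored alongside keyframes is not shown here.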
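
The teaching/task-image filings (patent 11580724 and publication 20230154015) turn pixel-descriptor correspondences into a relative transform and use it to update a set of parameterized behaviors. A sketch, assuming the transform is recovered from matched 3D points with the standard Kabsch/Procrustes SVD solution; the patents do not prescribe this solver, and all names below are illustrative:

```python
import numpy as np

def relative_transform(teach_pts, task_pts):
    """Least-squares rigid transform (R, t) mapping teaching-scene
    points onto task-scene points, via the Kabsch/Procrustes method.
    The N x 3 point correspondences would come from descriptor
    matching between the teaching image and the task image."""
    mu_a, mu_b = teach_pts.mean(0), task_pts.mean(0)
    H = (teach_pts - mu_a).T @ (task_pts - mu_b)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid reflections
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t

def update_behavior_waypoints(waypoints, R, t):
    """Re-parameterize a taught behavior by moving its waypoint
    parameters through the relative transform."""
    return (R @ waypoints.T).T + t
```

With the transform in hand, the behaviors taught in the teaching environment can be replayed in the task environment against the shifted geometry.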
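
The constrained-mobility-mapping filings (patent 11416003 and its related publications) compare a voxel map built from earlier scans against a spherical depth map of the current scan to detect obstacles that have moved. A rough sketch with coarse angular binning; the bin counts, margin, and function names are assumptions, not taken from the claims, and points are assumed to sit away from the sensor origin:

```python
import math

def _bin(x, y, z, az_bins, el_bins):
    """Map a point to its (azimuth, elevation) bin and its range."""
    r = math.sqrt(x * x + y * y + z * z)
    az = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * az_bins) % az_bins
    el = int((math.asin(z / r) + math.pi / 2) / math.pi * el_bins) % el_bins
    return (az, el), r

def spherical_depth_map(points, az_bins=36, el_bins=18):
    """Reduce the current scan to the nearest measured range per
    (azimuth, elevation) bin -- a spherical depth map."""
    depth = {}
    for p in points:
        key, r = _bin(*p, az_bins, el_bins)
        if key not in depth or r < depth[key]:
            depth[key] = r
    return depth

def clear_vanished_obstacles(obstacle_voxels, depth,
                             margin=0.2, az_bins=36, el_bins=18):
    """Drop obstacle voxels the current scan now sees past: if the
    measured depth in a voxel's direction exceeds the voxel's own
    range by more than `margin`, the obstacle has moved away."""
    kept = set()
    for v in obstacle_voxels:
        key, r = _bin(*v, az_bins, el_bins)
        d = depth.get(key)
        if d is not None and d > r + margin:
            continue  # the scan passes beyond this voxel: clear it
        kept.add(v)
    return kept
```

This captures only the update direction described in the abstract (removing obstacles the new scan contradicts); inserting newly observed obstacles would be the symmetric step.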
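
The targeted-object-recognition filings (patents 10891484 and 10229317 and their publications) select recognition modules from a library of candidates based on signals such as the robot's current task. One plausible selection heuristic, sketched with illustrative field names that are not from the patent text:

```python
def select_modules(library, task, max_modules=3):
    """Rank candidate recognition modules by the overlap between the
    object types a module covers and the objects the task involves,
    and return the top module names for download."""
    scored = []
    for module in library:
        overlap = len(set(module["object_types"]) & set(task["objects"]))
        if overlap:  # skip modules irrelevant to this task
            scored.append((overlap, module["name"]))
    scored.sort(reverse=True)
    return [name for _, name in scored[:max_modules]]
```

An object recognition client would download the selected modules and run each against the robot's vision data to infer object types and poses.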
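
Patent 10078333 indexes occupied voxels by elevation within per-(x, y) columns backed by sparse linked data structures, so empty space costs no memory. A sketch using a dict of sorted lists in place of linked structures; the class and method names are illustrative:

```python
import bisect
from collections import defaultdict

class ColumnVoxelMap:
    """Voxel model stored as sparse columns: each (x, y) column keeps
    a sorted list of occupied z indices. Only columns that contain at
    least one occupied voxel are ever allocated."""

    def __init__(self, voxel=0.1):
        self.voxel = voxel
        self.columns = defaultdict(list)  # (ix, iy) -> sorted z indices

    def insert_point(self, x, y, z):
        """Mark the voxel containing a sensed 3D point as occupied."""
        key = (int(x // self.voxel), int(y // self.voxel))
        zi = int(z // self.voxel)
        col = self.columns[key]
        i = bisect.bisect_left(col, zi)
        if i == len(col) or col[i] != zi:  # store each voxel once
            col.insert(i, zi)

    def occupied(self, x, y, z):
        """Check whether the voxel containing (x, y, z) is occupied."""
        key = (int(x // self.voxel), int(y // self.voxel))
        zi = int(z // self.voxel)
        col = self.columns.get(key, [])
        i = bisect.bisect_left(col, zi)
        return i < len(col) and col[i] == zi
```

Because each column stays sorted by elevation, range queries such as "the lowest occupied voxel above the ground" reduce to a single binary search within the column.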