Patents by Inventor Max Bajracharya
Max Bajracharya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12372982
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Grant
Filed: July 11, 2022
Date of Patent: July 29, 2025
Assignee: Boston Dynamics, Inc.
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
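The change-detection idea in the abstract above can be sketched in miniature: keep a set of obstacle voxels, render the current scan into a coarse spherical depth map, and drop any voxel the sensor now sees well beyond. Everything here (bin counts, the `margin` threshold, the voxel indexing) is a hypothetical illustration, not the patented implementation.

```python
import math

def to_spherical(point):
    # Cartesian point in the robot frame -> (azimuth, elevation, range).
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    az = math.atan2(y, x)
    el = math.asin(z / r) if r > 0 else 0.0
    return az, el, r

def build_depth_map(points, bins=36):
    # Spherical depth map: nearest measured range per (azimuth, elevation) bin.
    depth = {}
    for p in points:
        az, el, r = to_spherical(p)
        key = (int((az + math.pi) / (2 * math.pi) * bins),
               int((el + math.pi / 2) / math.pi * bins))
        if key not in depth or r < depth[key]:
            depth[key] = r
    return depth

def update_voxel_map(obstacle_voxels, depth_map, voxel_size=0.1, bins=36, margin=0.2):
    # If the current scan measures a range well past a voxel's own range in the
    # same direction, the obstacle there has moved: drop that voxel.
    kept = set()
    for v in obstacle_voxels:
        cx, cy, cz = ((i + 0.5) * voxel_size for i in v)
        az, el, r = to_spherical((cx, cy, cz))
        key = (int((az + math.pi) / (2 * math.pi) * bins),
               int((el + math.pi / 2) / math.pi * bins))
        measured = depth_map.get(key)
        if measured is not None and measured > r + margin:
            continue  # change detected: the sensor now sees past this voxel
        kept.add(v)
    return kept
```

For example, an obstacle voxel about one metre in front of the robot is removed once the current scan measures roughly three metres in the same direction, while a voxel sitting at the measured range survives.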
-
Publication number: 20250238945
Abstract: A method for generating a refined disparity estimate is disclosed. The method includes receiving, with a computing device, a stereo image pair; implementing, with the computing device, a learned stereo architecture trained on fully synthetic image data; generating, with two feature extractors of the learned stereo architecture, a pair of feature maps, where each one of the pair of feature maps corresponds to one of the images of the stereo image pair; generating, with a cost volume stage of the learned stereo architecture comprising one or more 3D convolution networks, a first disparity estimate; upsampling the first disparity estimate to a resolution corresponding to a resolution of the stereo image pair to form a full resolution disparity estimate; refining the full resolution disparity estimate with a disparity residual, thereby generating a refined full resolution disparity estimate; and outputting the refined full resolution disparity estimate.
Type: Application
Filed: January 19, 2024
Publication date: July 24, 2025
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Mark Tjersland, Max Bajracharya
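The coarse-then-upsample structure of the pipeline above can be illustrated with a deliberately tiny stand-in: winner-take-all absolute-difference matching on a single image row plays the role of the learned 3D-convolution cost volume, and nearest-neighbour upsampling plays the role of the full-resolution stage (the learned residual refinement is omitted). Function names and values are illustrative only.

```python
def row_disparity(left, right, max_disp):
    # Winner-take-all matching on one image row: for each left pixel, pick the
    # disparity d minimising |left[x] - right[x - d]|. This absolute-difference
    # cost is a toy stand-in for the learned cost volume stage.
    disp = []
    for x in range(len(left)):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = abs(left[x] - right[x - d])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

def upsample(disp, factor=2):
    # Nearest-neighbour upsampling back toward full resolution; disparity
    # values scale with image width, so each estimate is multiplied too.
    return [d * factor for d in disp for _ in range(factor)]
```

On a row that is a copy of the reference shifted by two pixels, the matcher recovers a disparity of 2 wherever both views overlap, and upsampling by 2 doubles both the row length and the disparity values.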
-
Publication number: 20250238946
Abstract: A method for controlling a learned stereo architecture includes implementing a learned stereo architecture capable of generating refined disparity estimates for a predefined range of disparities based on training with fully synthetic image data; inputting a first range parameter into the learned stereo architecture, where the first range parameter is a subset of the predefined range of disparities; receiving, from a first stereo system having a first baseline, a first stereo image pair; generating a first disparity estimate, wherein disparities estimated in the first disparity estimate correspond to the first range parameter; upsampling the first disparity estimate to a resolution corresponding to a resolution of the first stereo image pair to form a first full resolution disparity estimate; refining the first full resolution disparity estimate with a first disparity residual, thereby generating a first refined full resolution disparity estimate; and outputting the first refined full resolution disparity estimate.
Type: Application
Filed: January 19, 2024
Publication date: July 24, 2025
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Mark Tjersland, Max Bajracharya
-
Publication number: 20250238948
Abstract: A method for training a learned stereo architecture includes receiving, from a graphic rendering system, a plurality of stereo image pairs comprising a variety of disparate scenes and scene parameters, where a first subset of stereo image pairs corresponds to a first baseline and a second subset of stereo image pairs corresponds to a second baseline different from the first baseline; inputting the plurality of stereo image pairs into a stereo architecture comprising one or more 3D convolution networks configured to learn disparity estimation based on the plurality of stereo image pairs; comparing disparity estimations from the stereo architecture with ground truth disparity from the graphic rendering system to generate training feedback; and adjusting one or more neural network models implemented by the stereo architecture based on the training feedback, thereby configuring the learned stereo architecture.
Type: Application
Filed: January 19, 2024
Publication date: July 24, 2025
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Mark Tjersland, Max Bajracharya
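The compare-and-adjust loop in the abstract above (predictions versus rendered ground truth, feedback, model update) can be shown with a one-parameter toy "model" in place of the 3D convolution networks. The linear model, the L1 subgradient step, and all values are hypothetical, chosen only to make the feedback loop concrete.

```python
def predict(w, xs):
    # Toy one-parameter "network": predicted disparity = w * feature.
    return [w * x for x in xs]

def l1_loss(pred, gt):
    # Mean absolute error against ground truth disparity from the renderer.
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

def train(xs, gt, w=0.0, lr=0.05, steps=200):
    # Compare predictions with ground truth and adjust the model along the
    # L1 subgradient -- the "training feedback" of the abstract in miniature.
    for _ in range(steps):
        pred = predict(w, xs)
        grad = sum(((p > t) - (p < t)) * x
                   for p, t, x in zip(pred, gt, xs)) / len(xs)
        w -= lr * grad
    return w
```

With ground truth generated by a true weight of 2.5, the loop drives the model parameter to that value and the loss toward zero.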
-
Patent number: 12131529
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task associated with a set of parameterized behaviors based on updating one or more parameters of the set of parameterized behaviors associated with the teaching image based on determining the relative transform.
Type: Grant
Filed: January 18, 2023
Date of Patent: October 29, 2024
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
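The three stages described above (match pixel descriptors between task and teaching images, estimate a relative transform from the matches, update taught behavior parameters with it) can be sketched as follows. This toy uses brute-force nearest-neighbour matching and a 2-D translation where a real system would estimate a full rigid transform; all names and values are hypothetical.

```python
def match_descriptors(task_desc, teach_desc):
    # Nearest-neighbour matching between descriptor vectors (brute force).
    matches = []
    for i, d in enumerate(task_desc):
        j = min(range(len(teach_desc)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(d, teach_desc[k])))
        matches.append((i, j))
    return matches

def relative_translation(task_pts, teach_pts, matches):
    # Least-squares 2-D translation between matched pixel locations.
    dx = sum(task_pts[i][0] - teach_pts[j][0] for i, j in matches) / len(matches)
    dy = sum(task_pts[i][1] - teach_pts[j][1] for i, j in matches) / len(matches)
    return dx, dy

def update_behavior(grasp_point, transform):
    # Shift a taught behavior parameter (here a grasp point) by the transform.
    return (grasp_point[0] + transform[0], grasp_point[1] + transform[1])
```

A taught grasp point then lands on the object even after the object has moved, because the transform estimated from the descriptor matches is applied to the parameter before execution.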
-
Patent number: 11741701
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Grant
Filed: February 8, 2022
Date of Patent: August 29, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
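The capture-match-perform loop above reduces to a small selection problem: score each stored keyframe against the current view, and run the task attached to the best match. Exact pixel equality stands in here for the patent's pixel-set matching, and the registry, names, and threshold are all hypothetical.

```python
def pixel_match_score(image, keyframe):
    # Fraction of pixels that agree -- a toy stand-in for robust matching.
    return sum(1 for a, b in zip(image, keyframe) if a == b) / len(image)

def select_task(image, keyframes, threshold=0.5):
    # keyframes maps a name to (pixels, task); return the task of the best
    # matching keyframe, or None when nothing clears the threshold.
    best_name, best_score = None, threshold
    for name, (pixels, _task) in keyframes.items():
        score = pixel_match_score(image, pixels)
        if score > best_score:
            best_name, best_score = name, score
    return keyframes[best_name][1] if best_name else None
```

A current view that mostly agrees with the "door" keyframe triggers that keyframe's task, while a view matching nothing triggers no task at all.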
-
Publication number: 20230154015
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task associated with a set of parameterized behaviors based on updating one or more parameters of the set of parameterized behaviors associated with the teaching image based on determining the relative transform.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Patent number: 11580724
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Grant
Filed: September 13, 2019
Date of Patent: February 14, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
-
Publication number: 20220374024
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Application
Filed: July 11, 2022
Publication date: November 24, 2022
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
-
Patent number: 11416003
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Grant
Filed: September 17, 2019
Date of Patent: August 16, 2022
Assignee: Boston Dynamics, Inc.
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
-
Publication number: 20220165057
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Application
Filed: February 8, 2022
Publication date: May 26, 2022
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
-
Patent number: 11288883
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Grant
Filed: September 13, 2019
Date of Patent: March 29, 2022
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
-
Publication number: 20210041887
Abstract: A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
Type: Application
Filed: September 17, 2019
Publication date: February 11, 2021
Applicant: Boston Dynamics, Inc.
Inventors: Eric Whitman, Gina Christine Fay, Alex Khripin, Max Bajracharya, Matthew Malchano, Adam Komoroski, Christopher Stathis
-
Publication number: 20210023707
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Publication number: 20210027058
Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Kevin STONE, Max BAJRACHARYA, Krishna SHANKAR
-
Patent number: 10891484
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Grant
Filed: February 6, 2019
Date of Patent: January 12, 2021
Assignee: X DEVELOPMENT LLC
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
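The select-then-recognize flow above can be sketched with a tiny module registry: pick from the library only the modules relevant to the task, then interpret vision data through the chosen modules. The registry contents, the single task-overlap signal, and the string-based "vision data" are all hypothetical simplifications of the abstract.

```python
# Hypothetical library: module name -> object types that module can infer.
LIBRARY = {
    "kitchenware": {"mug", "plate"},
    "tools": {"wrench", "screwdriver"},
    "furniture": {"chair", "table"},
}

def select_modules(task_objects, library=LIBRARY):
    # Download only modules whose supported types overlap the task's objects
    # -- the abstract's "various signals" reduced to one signal for brevity.
    return sorted(name for name, types in library.items()
                  if types & set(task_objects))

def recognize(vision_data, modules, library=LIBRARY):
    # Report each observed object that some downloaded module accounts for.
    supported = set().union(*(library[m] for m in modules)) if modules else set()
    return [obj for obj in vision_data if obj in supported]
```

A task involving a mug and a wrench pulls in only the two relevant modules, and a client holding just the kitchenware module recognizes the mug in view but not the chair.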
-
Publication number: 20190171881
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Application
Filed: February 6, 2019
Publication date: June 6, 2019
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
-
Patent number: 10229317
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Grant
Filed: August 6, 2016
Date of Patent: March 12, 2019
Assignee: X DEVELOPMENT LLC
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya
-
Patent number: 10078333
Abstract: Methods, apparatus, systems, and computer-readable media are provided for efficient mapping of a robot environment. In various implementations, a group of data points may be sensed by a three-dimensional sensor. One or more voxels of a three-dimensional voxel model that are occupied by the group of data points may be identified. For each occupied voxel, a column of the three-dimensional voxel model that contains the occupied voxel may be identified. Occupied voxels contained in each column may be indexed by elevation. In various implementations, one or more sparse linked data structures may be used to represent the columns.
Type: Grant
Filed: April 17, 2016
Date of Patent: September 18, 2018
Assignee: X DEVELOPMENT LLC
Inventor: Max Bajracharya
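The column-indexing scheme above can be sketched in a few lines: bucket each sensed point into its (x, y) column and record the occupied voxels of that column by elevation index. A dict of sorted elevation lists stands in for the patent's sparse linked data structures; the voxel size and coordinates are illustrative.

```python
from collections import defaultdict

def index_columns(points, voxel=0.1):
    # Bucket each sensed 3-D point into its (x, y) column, then index the
    # occupied voxels of that column by elevation. The dict of sorted lists
    # is a simple sparse stand-in for the patent's linked data structures.
    columns = defaultdict(set)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        columns[key].add(int(z // voxel))
    return {k: sorted(v) for k, v in columns.items()}
```

Because only occupied columns appear as keys and only occupied elevations appear in each list, mostly empty space costs nothing to store, which is the point of the sparse representation.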
-
Publication number: 20180039835
Abstract: Methods, apparatus, systems, and computer-readable media are provided for downloading targeted object recognition modules that are selected from a library of candidate targeted object recognition modules based on various signals. In some implementations, an object recognition client may be operated to facilitate object recognition for a robot. It may download targeted object recognition module(s). Each targeted object recognition module may facilitate inference of an object type or pose of an observed object. The targeted object recognition module(s) may be selected from a library of targeted object recognition modules based on various signals, such as a task to be performed by the robot. The object recognition client may obtain vision data capturing at least a portion of an environment in which the robot operates. The object recognition client may determine, based on the vision data and the downloaded object recognition module(s), information about an observed object in the environment.
Type: Application
Filed: August 6, 2016
Publication date: February 8, 2018
Inventors: Nareshkumar Rajkumar, Stefan Hinterstoisser, Max Bajracharya