Patents by Inventor Michael Laskey
Michael Laskey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12288340
Abstract: A method for 3D object perception is described. The method includes extracting features from each image of a synthetic stereo pair of images. The method also includes generating a low-resolution disparity image based on the features extracted from each image of the synthetic stereo pair of images. The method further includes predicting, by a trained neural network, a feature map based on the low-resolution disparity image and one of the synthetic stereo pair of images. The method also includes generating, by a perception prediction head, a perception prediction of a detected 3D object based on the feature map predicted by the trained neural network.
Type: Grant
Filed: June 13, 2022
Date of Patent: April 29, 2025
Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Thomas Kollar, Kevin Stone, Michael Laskey, Mark Edward Tjersland
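Taken together, the claimed steps form a stereo perception pipeline: per-image feature extraction, a low-resolution disparity estimate, fusion into a feature map, and a prediction head. The sketch below illustrates one way such a pipeline could be wired together in PyTorch; the module sizes, the correlation-based cost volume, and the per-pixel classification head are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: module names, channel sizes, and the cost-volume
# disparity step are assumptions, not the patented implementation.
import torch
import torch.nn as nn

class StereoPerceptionSketch(nn.Module):
    def __init__(self, feat_channels=32, max_disparity=48, num_classes=10):
        super().__init__()
        # Shared feature extractor applied to each image of the stereo pair.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Network that predicts a feature map from disparity + left-image features.
        self.fusion = nn.Sequential(
            nn.Conv2d(feat_channels + 1, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Perception prediction head (here: per-pixel object-class logits).
        self.head = nn.Conv2d(feat_channels, num_classes, 1)
        self.max_disparity = max_disparity

    def low_res_disparity(self, feat_l, feat_r):
        # Brute-force cost volume: correlate left features with right features
        # shifted by each candidate disparity, then take the best match.
        # Wrap-around at the image border is ignored for simplicity.
        costs = []
        for d in range(self.max_disparity):
            shifted = torch.roll(feat_r, shifts=d, dims=3)
            costs.append((feat_l * shifted).sum(dim=1))           # (B, H, W)
        disparity = torch.stack(costs, dim=1).argmax(dim=1, keepdim=True)
        return disparity.float() / self.max_disparity             # normalized

    def forward(self, left, right):
        feat_l = self.backbone(left)    # features from each image of the pair
        feat_r = self.backbone(right)
        disparity = self.low_res_disparity(feat_l, feat_r)        # low-res disparity
        fused = self.fusion(torch.cat([feat_l, disparity], dim=1))
        return self.head(fused)         # perception prediction for detected objects


left = torch.rand(1, 3, 256, 256)       # stand-in synthetic stereo pair
right = torch.rand(1, 3, 256, 256)
logits = StereoPerceptionSketch()(left, right)
```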
-
Patent number: 12131529
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task by updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on the relative transform.
Type: Grant
Filed: January 18, 2023
Date of Patent: October 29, 2024
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
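The claimed method matches per-pixel descriptors between a task image and a stored teaching image, estimates the relative transform implied by those matches, and uses the transform to update the parameters of stored behaviors. The sketch below assumes dense descriptors and back-projected 3D points are already available and solves for the transform with a standard least-squares (Kabsch) fit; the patent may compute it differently.

```python
# Illustrative sketch: assumes per-pixel descriptors and 3D points are already
# computed; the rigid-fit step is a standard Kabsch/Procrustes solve, not
# necessarily the method used in the patent.
import numpy as np

def match_descriptors(task_desc, teach_desc):
    """Map each task-image descriptor to its nearest teaching-image descriptor.

    task_desc, teach_desc: (N, D) and (M, D) arrays of per-pixel descriptors.
    Returns indices into teach_desc, one per task descriptor.
    """
    # Pairwise squared distances, (N, M); fine for modest N, M.
    d2 = ((task_desc[:, None, :] - teach_desc[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def relative_transform(task_pts, teach_pts):
    """Least-squares rigid transform (R, t) mapping teaching points to task points."""
    mu_task, mu_teach = task_pts.mean(0), teach_pts.mean(0)
    H = (teach_pts - mu_teach).T @ (task_pts - mu_task)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_task - R @ mu_teach
    return R, t

# Usage: matched 3D points (back-projected from the two images) give the
# relative transform, which then updates a stored behavior's parameters.
task_pts = np.random.rand(100, 3)
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, 0.0, -0.05])
teach_pts = (task_pts - t_true) @ R_true        # synthetic correspondences
R, t = relative_transform(task_pts, teach_pts)
grasp_in_teach = np.array([0.2, 0.1, 0.3])      # e.g. a parameterized grasp target
grasp_in_task = R @ grasp_in_teach + t          # updated behavior parameter
```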
-
Publication number: 20230401721
Abstract: A method for 3D object perception is described. The method includes extracting features from each image of a synthetic stereo pair of images. The method also includes generating a low-resolution disparity image based on the features extracted from each image of the synthetic stereo pair of images. The method further includes predicting, by a trained neural network, a feature map based on the low-resolution disparity image and one of the synthetic stereo pair of images. The method also includes generating, by a perception prediction head, a perception prediction of a detected 3D object based on the feature map predicted by the trained neural network.
Type: Application
Filed: June 13, 2022
Publication date: December 14, 2023
Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Thomas KOLLAR, Kevin STONE, Michael LASKEY, Mark Edward TJERSLAND
-
Publication number: 20230398692
Abstract: A method for training a neural network to perform 3D object manipulation is described. The method includes extracting features from each image of a synthetic stereo pair of images. The method also includes generating a low-resolution disparity image based on the features extracted from each image of the synthetic stereo pair of images. The method further includes generating, by the neural network, a feature map based on the low-resolution disparity image and one of the synthetic stereo pair of images. The method also includes manipulating an unknown object perceived from the feature map according to a perception prediction from a prediction head.
Type: Application
Filed: June 13, 2022
Publication date: December 14, 2023
Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Thomas KOLLAR, Kevin STONE, Michael LASKEY, Mark Edward TJERSLAND
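This application shares the stereo front end of the perception filings above but ends with a manipulation step driven by the prediction head. The short sketch below covers only that last step, under the assumption that the head outputs a per-pixel grasp-quality map and that a depth estimate and camera intrinsics are available; those assumptions are illustrative, not from the filing.

```python
# Illustrative sketch of the final manipulation step: the prediction head is
# assumed to output a per-pixel grasp-quality map; the intrinsics are made up.
import numpy as np

def select_grasp(grasp_quality, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Pick the best-scoring pixel and back-project it to a 3D grasp point."""
    v, u = np.unravel_index(np.argmax(grasp_quality), grasp_quality.shape)
    z = depth[v, u]                        # depth from the disparity estimate
    x = (u - cx) * z / fx                  # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

grasp_quality = np.random.rand(480, 640)   # stand-in for the prediction head output
depth = np.full((480, 640), 0.8)           # stand-in depth image (meters)
grasp_point = select_grasp(grasp_quality, depth)   # target passed to the manipulator
```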
-
Publication number: 20230154015
Abstract: A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes performing the task by updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on the relative transform.
Type: Application
Filed: January 18, 2023
Publication date: May 18, 2023
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Publication number: 20230077856
Abstract: Systems, methods, and other embodiments described herein relate to single-shot multi-object three-dimensional (3D) shape reconstruction and categorical six-dimensional (6D) pose and size estimation. In one embodiment, a method includes inferring a heatmap based upon a feature pyramid, where the feature pyramid is generated based upon a red green blue depth (RGB-D) image that includes objects. The method further includes sampling a 3D parameter map at locations corresponding to peaks in the heatmap, where the 3D parameter map is inferred based upon the feature pyramid, and where the values sampled at the locations include latent shape codes, 6D poses, and one-dimensional (1D) scales. The method further includes generating point clouds based upon the latent shape codes, the 6D poses, and the 1D scales.
Type: Application
Filed: August 25, 2022
Publication date: March 16, 2023
Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone
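The described single-shot pipeline samples a dense 3D parameter map at heatmap peaks and decodes the sampled latent codes into per-object point clouds. The sketch below covers only the peak detection and parameter sampling, with an assumed channel layout (latent code, 6D pose, 1D scale) and a simple local-maximum peak test; the learned shape decoder is omitted.

```python
# Illustrative sketch: peak detection and parameter-map sampling only. The
# channel split (latent code, pose, scale) and the shape decoder are assumed.
import torch
import torch.nn.functional as F

def find_peaks(heatmap, threshold=0.5, kernel=3):
    """Return (row, col) indices of local maxima above a score threshold."""
    pooled = F.max_pool2d(heatmap[None, None], kernel, stride=1, padding=kernel // 2)
    is_peak = (heatmap == pooled[0, 0]) & (heatmap > threshold)
    return is_peak.nonzero(as_tuple=False)              # (K, 2) tensor of (y, x)

def sample_parameters(param_map, peaks, latent_dim=64):
    """Sample the 3D parameter map at peak locations and split the channels."""
    params = param_map[:, peaks[:, 0], peaks[:, 1]].T        # (K, C)
    latent_codes = params[:, :latent_dim]                    # latent shape codes
    poses_6d = params[:, latent_dim:latent_dim + 6]          # 6D poses
    scales_1d = params[:, latent_dim + 6:latent_dim + 7]     # 1D scales
    return latent_codes, poses_6d, scales_1d

heatmap = torch.rand(60, 80)                 # inferred object-center heatmap
param_map = torch.rand(64 + 6 + 1, 60, 80)   # inferred dense 3D parameter map
peaks = find_peaks(heatmap)
codes, poses, scales = sample_parameters(param_map, peaks)
# A learned shape decoder (not shown) would map each latent code to a point
# cloud, which the corresponding pose and scale then place in the scene.
```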
-
Patent number: 11580724
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Grant
Filed: September 13, 2019
Date of Patent: February 14, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy Ma, Josh Petersen, Umashankar Nagarajan, Michael Laskey, Daniel Helmick, James Borders, Krishna Shankar, Kevin Stone, Max Bajracharya
-
Patent number: 11113526
Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the deep neural network to determine correlations that identify detected objects in future images.
Type: Grant
Filed: September 13, 2019
Date of Patent: September 7, 2021
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Kevin Stone, Krishna Shankar, Michael Laskey
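The training scheme builds image pairs from a reconstructed 3D model and learns per-pixel descriptors that stay consistent for the same object points across the pair. A common way to supervise such descriptors is a pixel-wise contrastive loss over known correspondences; the sketch below shows that loss under the assumption that correspondences between the pair come from the 3D model, which may differ from the patented training procedure.

```python
# Illustrative sketch of a pixel-wise contrastive descriptor loss, assuming
# correspondences between the image pair are derived from the reconstructed
# 3D model. This mirrors dense-descriptor training in general, not the
# patent's exact method.
import torch
import torch.nn.functional as F

def descriptor_loss(desc_a, desc_b, matches_a, matches_b, non_matches_b, margin=0.5):
    """desc_a, desc_b: (D, H, W) descriptor images for the two views.
    matches_a / matches_b: (N, 2) pixel coords (y, x) of corresponding points.
    non_matches_b: (N, 2) pixel coords in view B that do NOT correspond."""
    d_a = desc_a[:, matches_a[:, 0], matches_a[:, 1]].T        # (N, D)
    d_pos = desc_b[:, matches_b[:, 0], matches_b[:, 1]].T      # matching pixels
    d_neg = desc_b[:, non_matches_b[:, 0], non_matches_b[:, 1]].T
    # Pull matching descriptors together; push non-matches at least `margin` apart.
    match_loss = (d_a - d_pos).pow(2).sum(1).mean()
    non_match_loss = F.relu(margin - (d_a - d_neg).norm(dim=1)).pow(2).mean()
    return match_loss + non_match_loss

# Toy usage: descriptor images for a pair generated from the 3D model; here the
# "correspondences" are just identical pixel coordinates in both views.
desc_a = torch.rand(16, 128, 128, requires_grad=True)
desc_b = torch.rand(16, 128, 128, requires_grad=True)
matches = torch.randint(0, 128, (50, 2))
non_matches = torch.randint(0, 128, (50, 2))
loss = descriptor_loss(desc_a, desc_b, matches, matches, non_matches)
loss.backward()
```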
-
Publication number: 20210023707
Abstract: A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Jeremy MA, Josh PETERSEN, Umashankar NAGARAJAN, Michael LASKEY, Daniel HELMICK, James BORDERS, Krishna SHANKAR, Kevin STONE, Max BAJRACHARYA
-
Publication number: 20210027097
Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the deep neural network to determine correlations that identify detected objects in future images.
Type: Application
Filed: September 13, 2019
Publication date: January 28, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Kevin STONE, Krishna SHANKAR, Michael LASKEY
-
Publication number: 20090242441
Abstract: A protective cap for holding a paint brush includes a housing defining an open top and an open bottom, the housing having a top lid and a bottom lid positioned to selectively cover the open top and the open bottom, respectively. The housing includes side walls defining a plurality of openings configured to allow airflow into an open space within the housing. A paint brush may be inserted through the open bottom of the housing and through an aperture in the top lid such that the top lid keeps the paint brush from falling back through the housing. The bristles of the brush may be held in a vertical configuration while the brush handle is held by the top lid aperture.
Type: Application
Filed: March 31, 2008
Publication date: October 1, 2009
Inventor: Michael Laskey