Patents by Inventor Sean Kirmani
Sean Kirmani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240153314
Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
Type: Application
Filed: January 19, 2024
Publication date: May 9, 2024
Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
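The abstract above describes encoding each keypoint either as its 2D coordinates (when visible) or as a not-visible indicator, and feeding that encoding to a learned engagement model. A minimal sketch of one plausible input encoding, assuming a small hypothetical keypoint set and a stand-in logistic scorer in place of the patented model:

```python
import numpy as np

# Hypothetical sketch of the input encoding the abstract describes:
# visible keypoints contribute their 2D coordinates plus a visible flag,
# while occluded keypoints contribute only a not-visible indicator.
# The keypoint names and the logistic "engagement" scorer are
# illustrative assumptions, not the patented model.

KEYPOINTS = ["head", "left_shoulder", "right_shoulder", "left_hip", "right_hip"]

def build_feature_vector(detections):
    """detections maps keypoint name -> (x, y) coords, or None if occluded."""
    features = []
    for name in KEYPOINTS:
        coords = detections.get(name)
        if coords is None:
            features.extend([0.0, 0.0, 0.0])   # indicator: not visible
        else:
            x, y = coords
            features.extend([x, y, 1.0])       # 2D coords + visible flag
    return np.array(features, dtype=np.float32)

def engagement_score(features, weights, bias=0.0):
    """Stand-in for the learned model: a logistic score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
```

Keeping a fixed slot per keypoint means the model always sees the same input length, with the visibility flag telling it which coordinate slots carry real information.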
-
Patent number: 11945106
Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
Type: Grant
Filed: January 23, 2023
Date of Patent: April 2, 2024
Assignee: Google LLC
Inventors: Michael Quinlan, Sean Kirmani
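The pattern in this abstract, namely a shared feature extractor trained for one task whose features are reused by a lightweight head for a different task, can be sketched as follows. The layer sizes, the random linear layers, and the two example tasks are illustrative assumptions, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class DenseBackbone:
    """Stand-in for the trained dense network: a frozen feature extractor."""
    def __init__(self, in_dim=64, feat_dim=16):
        self.W = rng.standard_normal((in_dim, feat_dim)) * 0.1

    def __call__(self, image_vec):
        # Trained once (for the first vision task), then reused by every head.
        return np.maximum(image_vec @ self.W, 0.0)  # ReLU features

class TaskHead:
    """Lightweight task-specific head trained on the backbone's features."""
    def __init__(self, feat_dim=16, out_dim=4):
        self.W = rng.standard_normal((feat_dim, out_dim)) * 0.1

    def __call__(self, features):
        return features @ self.W

backbone = DenseBackbone()
seg_head = TaskHead(out_dim=4)     # e.g. coarse segmentation logits
depth_head = TaskHead(out_dim=1)   # e.g. a scalar depth estimate

image_vec = rng.standard_normal(64)
features = backbone(image_vec)     # computed once per image
seg_out = seg_head(features)       # second task reuses the same features
depth_out = depth_head(features)
```

The benefit the abstract points at is that only the small head needs training for a new vision task; the expensive backbone forward pass is shared across tasks.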
-
Patent number: 11915523
Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
Type: Grant
Filed: July 27, 2022
Date of Patent: February 27, 2024
Assignee: Google LLC
Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
-
Patent number: 11766783
Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
Type: Grant
Filed: August 3, 2022
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
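The two-model loop this abstract describes can be sketched with simple stand-ins: a similarity model that scores whether a newly observed state vector matches a tracked one, and an update model that folds the new observation into the stored state. Cosine similarity, the 0.8 threshold, and the exponential-moving-average update below are assumptions in place of the learned models:

```python
import numpy as np

def same_object_metric(new_state, tracked_state):
    """First model (stand-in): likelihood-like score that the states match."""
    num = float(np.dot(new_state, tracked_state))
    den = float(np.linalg.norm(new_state) * np.linalg.norm(tracked_state))
    return num / den if den else 0.0

def update_state(tracked_state, new_state, alpha=0.3):
    """Second model (stand-in): maintain the tracked state over time."""
    return (1 - alpha) * tracked_state + alpha * new_state

tracked = np.array([1.0, 0.0, 0.5])    # object observed previously
observed = np.array([0.9, 0.1, 0.6])   # state vector from new sensor data

# Update only when the metric says this is likely the same object.
if same_object_metric(observed, tracked) > 0.8:
    tracked = update_state(tracked, observed)
```

The split matters: the association decision and the state maintenance are separate models, so each can be trained (or swapped out) independently.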
-
Patent number: 11769269
Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
Type: Grant
Filed: August 1, 2022
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
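A rough sketch of the merging idea in this abstract: fit a scale and shift that brings the relative-scale depths onto the reference scale, then offset the aligned region so its edge pixels agree with the surrounding reference depths before pasting it into the combined map. The least-squares fit and the mean edge-offset step are assumptions; the patented transform may differ:

```python
import numpy as np

def align_relative_depths(reference, relative):
    """Solve reference ~= a * relative + b in the least-squares sense."""
    A = np.stack([relative.ravel(), np.ones(relative.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return a * relative + b

def mask_edge(mask):
    """Pixels of the mask whose 4-neighborhood is not fully inside it."""
    interior = (mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                     & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    return mask & ~interior

def merge_region(reference, aligned, region_mask):
    """Paste the aligned region into the reference map, shifted so the
    region's edge pixels match the surrounding reference depths."""
    edge = mask_edge(region_mask)
    offset = float(np.mean(reference[edge] - aligned[edge])) if edge.any() else 0.0
    merged = reference.copy()
    merged[region_mask] = aligned[region_mask] + offset
    return merged
```

Matching the edges before pasting avoids a visible depth discontinuity at the seam between the two source maps.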
-
Publication number: 20230150113
Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
Type: Application
Filed: January 23, 2023
Publication date: May 18, 2023
Inventors: Michael Quinlan, Sean Kirmani
-
Patent number: 11587302
Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
Type: Grant
Filed: December 17, 2019
Date of Patent: February 21, 2023
Assignee: X Development LLC
Inventors: Michael Quinlan, Sean Kirmani
-
Publication number: 20220388175
Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
Type: Application
Filed: August 3, 2022
Publication date: December 8, 2022
Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
-
Publication number: 20220366725
Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
Type: Application
Filed: July 27, 2022
Publication date: November 17, 2022
Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
-
Publication number: 20220366590
Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
Type: Application
Filed: August 1, 2022
Publication date: November 17, 2022
Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
-
Patent number: 11450018
Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
Type: Grant
Filed: December 24, 2019
Date of Patent: September 20, 2022
Assignee: X Development LLC
Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
-
Patent number: 11440196
Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
Type: Grant
Filed: December 17, 2019
Date of Patent: September 13, 2022
Assignee: X Development LLC
Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
-
Patent number: 11436869
Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
Type: Grant
Filed: December 9, 2019
Date of Patent: September 6, 2022
Assignee: X Development LLC
Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
-
Publication number: 20220268939
Abstract: A method includes receiving first sensor data captured by a first sensor. The method further includes receiving a plurality of labels or predictions corresponding to the first sensor data. The method also includes receiving second sensor data captured by a second sensor. The method further includes determining time-synchronized sensor data comprising a subset of the first sensor data and a subset of the second sensor data. The method additionally includes determining, based on the plurality of labels or predictions and the time-synchronized sensor data, a plurality of pseudo-labels corresponding to the second sensor data. The method also includes generating a training data set comprising at least the subset of the second sensor data and one or more pseudo-labels from the plurality of pseudo-labels.
Type: Application
Filed: February 25, 2021
Publication date: August 25, 2022
Inventors: Sarah Najmark, Sean Kirmani
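The cross-sensor pseudo-labeling this abstract outlines can be sketched as a timestamp-matching step: labels attached to one sensor's stream are transferred to samples from a second, unlabeled stream that were captured at (nearly) the same time. The 50 ms pairing tolerance and the tuple-based record format are illustrative assumptions:

```python
# Minimal sketch of pseudo-labeling via time synchronization.
# labeled:   list of (timestamp, data, label) from the first sensor
# unlabeled: list of (timestamp, data) from the second sensor

def pseudo_label(labeled, unlabeled, tol=0.05):
    """Return (data, pseudo_label) pairs for time-synchronized samples."""
    training_set = []
    for t2, d2 in unlabeled:
        # Find the labeled sample nearest in time to this unlabeled one.
        t1, d1, lab = min(labeled, key=lambda rec: abs(rec[0] - t2))
        if abs(t1 - t2) <= tol:
            training_set.append((d2, lab))  # label borrowed across sensors
    return training_set
```

Samples with no sufficiently close labeled counterpart are simply dropped, so the resulting training set contains only pairs where the time synchronization makes the borrowed label plausible.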
-
Publication number: 20210181716
Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
Type: Application
Filed: December 17, 2019
Publication date: June 17, 2021
Inventors: Michael Quinlan, Sean Kirmani