Patents by Inventor Sean Kirmani

Sean Kirmani is a named inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240153314
    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
    Type: Application
    Filed: January 19, 2024
    Publication date: May 9, 2024
    Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
  • Patent number: 11945106
    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: April 2, 2024
    Assignee: Google LLC
    Inventors: Michael Quinlan, Sean Kirmani
  • Patent number: 11915523
    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: February 27, 2024
    Assignee: Google LLC
    Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
  • Patent number: 11766783
    Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
  • Patent number: 11769269
    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
  • Publication number: 20230150113
    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
    Type: Application
    Filed: January 23, 2023
    Publication date: May 18, 2023
    Inventors: Michael Quinlan, Sean Kirmani
  • Patent number: 11587302
    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: February 21, 2023
    Assignee: X Development LLC
    Inventors: Michael Quinlan, Sean Kirmani
  • Publication number: 20220388175
    Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
    Type: Application
    Filed: August 3, 2022
    Publication date: December 8, 2022
    Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
  • Publication number: 20220366725
    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 17, 2022
    Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
  • Publication number: 20220366590
    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
    Type: Application
    Filed: August 1, 2022
    Publication date: November 17, 2022
    Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
  • Patent number: 11450018
    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: September 20, 2022
    Assignee: X Development LLC
    Inventors: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
  • Patent number: 11440196
    Abstract: A method includes receiving sensor data representing a first object in an environment and generating, based on the sensor data, a first state vector that represents physical properties of the first object. The method also includes generating, by a first machine learning model and based on the first state vector and a second state vector that represents physical properties of a second object previously observed in the environment, a metric indicating a likelihood that the first object is the same as the second object. The method further includes determining, based on the metric, to update the second state vector and updating, by a second machine learning model configured to maintain the second state vector over time and based on the first state vector, the second state vector to incorporate into the second state vector information concerning physical properties of the second object as represented in the first state vector.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: September 13, 2022
    Assignee: X Development LLC
    Inventors: Sean Kirmani, Guy Satat, Michael Quinlan
  • Patent number: 11436869
    Abstract: A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: September 6, 2022
    Assignee: X Development LLC
    Inventors: Sean Kirmani, Michael Quinlan, Sarah Coe
  • Publication number: 20220268939
    Abstract: A method includes receiving first sensor data captured by a first sensor. The method further includes receiving a plurality of labels or predictions corresponding to the first sensor data. The method also includes receiving second sensor data captured by a second sensor. The method further includes determining time-synchronized sensor data comprising a subset of the first sensor data and a subset of the second sensor data. The method additionally includes determining, based on the plurality of labels or predictions and the time-synchronized sensor data, a plurality of pseudo-labels corresponding to the second sensor data. The method also includes generating a training data set comprising at least the subset of the second sensor data and one or more pseudo-labels from the plurality of pseudo-labels.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 25, 2022
    Inventors: Sarah Najmark, Sean Kirmani
  • Publication number: 20210181716
    Abstract: A method includes receiving image data representing an environment of a robotic device from a camera on the robotic device. The method further includes applying a trained dense network to the image data to generate a set of feature values, where the trained dense network has been trained to accomplish a first robot vision task. The method additionally includes applying a trained task-specific head to the set of feature values to generate a task-specific output to accomplish a second robot vision task, where the trained task-specific head has been trained to accomplish the second robot vision task based on previously generated feature values from the trained dense network, where the second robot vision task is different from the first robot vision task. The method also includes controlling the robotic device to operate in the environment based on the task-specific output generated to accomplish the second robot vision task.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Inventors: Michael Quinlan, Sean Kirmani
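The engagement-estimation method (patent 11436869 and related filings above) encodes visible keypoints by their 2D coordinates and occluded keypoints by a not-visible indicator before passing the result to a learned model. The sketch below is a minimal illustration of that encoding, not the patented implementation: the keypoint names, the indicator convention, and the logistic stand-in for the machine learning model are all assumptions for illustration only.

```python
import numpy as np

# Hypothetical keypoint set; the filings do not enumerate the body locations.
KEYPOINTS = ["head", "left_shoulder", "right_shoulder", "left_hip", "right_hip"]

def encode_keypoints(detections):
    """Encode 2D keypoint detections into a fixed-length feature vector.

    `detections` maps a keypoint name to (x, y) pixel coordinates, or to
    None when that keypoint is not visible in the 2D image. Visible
    keypoints (the first subset) contribute their coordinates; occluded
    ones (the second subset) contribute only a not-visible indicator, so
    the downstream model can distinguish absence from position.
    """
    features = []
    for name in KEYPOINTS:
        coords = detections.get(name)
        if coords is None:
            features.extend([0.0, 0.0, 0.0])   # indicator 0.0: not visible
        else:
            x, y = coords
            features.extend([x, y, 1.0])       # indicator 1.0: visible
    return np.array(features, dtype=np.float32)

def engagement_score(features, weights, bias=0.0):
    """Toy stand-in for the learned model: a logistic score in [0, 1]."""
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))
```

The fixed-length layout matters: the model always sees one slot per keypoint, so partially occluded bodies still produce a well-formed input.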
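The shared-feature method (patent 11945106 and related filings above) runs one trained dense network per image and lets task-specific heads reuse its feature values for different vision tasks. A minimal sketch of that structure follows; the class names, tiny linear layers, and channel sizes are illustrative assumptions, not the networks described in the filings.

```python
import numpy as np

rng = np.random.default_rng(0)

class DenseBackbone:
    """Stand-in for the trained dense network: maps an image to a grid of
    feature values. Trained once for the first vision task, then frozen."""
    def __init__(self, in_channels=3, feature_channels=8):
        self.w = rng.normal(size=(in_channels, feature_channels)) * 0.1

    def __call__(self, image):              # image: (H, W, C)
        return np.tanh(image @ self.w)      # features: (H, W, F)

class TaskHead:
    """Lightweight head trained separately, on features the backbone
    already produces, to accomplish a second, different vision task."""
    def __init__(self, feature_channels=8, out_channels=1):
        self.w = rng.normal(size=(feature_channels, out_channels)) * 0.1

    def __call__(self, features):
        return features @ self.w            # task output: (H, W, out)

backbone = DenseBackbone()                  # shared, frozen
second_task_head = TaskHead()               # e.g. a per-pixel second task

image = rng.random((4, 4, 3))
features = backbone(image)                  # dense features, computed once
output = second_task_head(features)         # head reuses those features
```

The design point is that adding a new task only requires training a small head against features the backbone already emits, rather than a second full network.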
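The object-tracking method (patent 11440196 and related filings above) uses one learned model to score whether a newly observed object matches a tracked one, and a second model to fold the new state vector into the maintained one. The sketch below substitutes simple closed-form stand-ins (a distance-based score and a gain-weighted blend) for both learned models; every function and constant here is an illustrative assumption.

```python
import numpy as np

def association_metric(new_state, tracked_state, scale=1.0):
    """Stand-in for the first learned model: a score in (0, 1] indicating
    how likely the newly observed object is the previously tracked one."""
    return float(np.exp(-np.linalg.norm(new_state - tracked_state) / scale))

def update_state(tracked_state, new_state, gain=0.5):
    """Stand-in for the second model, which maintains the state vector
    over time: blend the new observation into the existing state."""
    return tracked_state + gain * (new_state - tracked_state)

tracked = np.array([1.0, 2.0, 0.5])     # e.g. position and size
observed = np.array([1.1, 2.1, 0.5])    # state built from new sensor data

# High metric: treat it as the same object and update the tracked state.
if association_metric(observed, tracked) > 0.5:
    tracked = update_state(tracked, observed)
```

Gating the update on the metric is what keeps a new, different object from corrupting the state of a previously observed one.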
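The depth-map method (patent 11450018 and related filings above) brings a relative-scale depth map into agreement with a reference-scale one, then composes a third map from regions of each. A least-squares scale-and-shift fit is one simple way to realize the alignment step; the filings do not specify this fit, so treat the whole sketch, including the synthetic data, as an illustrative assumption.

```python
import numpy as np

def align_relative_depth(metric, relative, region):
    """Fit a scale and shift so the relative-scale depths agree with the
    reference-scale depths inside `region` (a boolean mask), then apply
    that transform to the whole relative map."""
    scale, shift = np.polyfit(relative[region], metric[region], deg=1)
    return scale * relative + shift

def fuse_depth_maps(metric, aligned, region):
    """Compose the third depth map: reference-scale depths outside the
    region, aligned relative-derived depths inside it."""
    fused = metric.copy()
    fused[region] = aligned[region]
    return fused

# Synthetic example: the relative map is the metric map under an unknown
# affine distortion; alignment should recover it inside the region.
metric = np.arange(16.0).reshape(4, 4) + 1.0
relative = (metric - 0.5) / 2.0
region = np.zeros((4, 4), dtype=bool)
region[1:3, 1:3] = True

aligned = align_relative_depth(metric, relative, region)
fused = fuse_depth_maps(metric, aligned, region)
```

Matching the edge depths of the aligned region to the surrounding reference depths, as the abstract describes, is what keeps the composed map free of seams at the region boundary.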
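The pseudo-labeling method (publication 20220268939 above) transfers labels from one sensor's data to time-synchronized samples from a second sensor. A minimal nearest-timestamp pairing sketch follows; the function name, data shapes, and tolerance value are illustrative assumptions rather than the filed procedure.

```python
def transfer_pseudo_labels(labeled, unlabeled, tolerance=0.05):
    """Pair each labeled first-sensor sample with the nearest-in-time
    second-sensor sample; when the pair falls within `tolerance` seconds,
    the label carries over as a pseudo-label for the second sensor.

    `labeled` is a list of (timestamp, label) from the first sensor;
    `unlabeled` is a list of (timestamp, sample) from the second sensor.
    Returns (sample, pseudo_label) pairs forming a training data set.
    """
    pairs = []
    for t_first, label in labeled:
        t_second, sample = min(unlabeled, key=lambda s: abs(s[0] - t_first))
        if abs(t_second - t_first) <= tolerance:   # time-synchronized subset
            pairs.append((sample, label))
    return pairs

labeled = [(0.00, "person"), (0.50, "box"), (1.00, "person")]
unlabeled = [(0.02, "scan_a"), (0.49, "scan_b"), (2.00, "scan_c")]
training_pairs = transfer_pseudo_labels(labeled, unlabeled)
```

Samples with no sufficiently close counterpart (here the readings at 1.00 s and 2.00 s) are simply dropped, so only genuinely synchronized data enters the training set.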