Patents by Inventor Ammar Husain

Ammar Husain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139405
    Abstract: Implementations set forth herein relate to generating training data such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for that vision data. A drivability label can be determined using first vision data from a first vision component that is connected to a robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with one or more robots, enabling them to determine drivability of areas captured in vision data collected in real time using one or more vision components.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Inventors: Ammar Husain, Joerg Mueller
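    A minimal sketch of the labeling pipeline this entry describes, assuming the first vision component yields a height map registered pixel-for-pixel with the second component's camera image; the function and parameter names (drivability_labels_from_heights, max_slope) are illustrative, not from the patent:

      import numpy as np

      def drivability_labels_from_heights(heights, max_slope=0.05):
          # Geometric heuristic: a cell is drivable when the local surface
          # slope (finite-difference gradient of height) stays below a limit.
          d_row, d_col = np.gradient(heights)
          slope = np.hypot(d_row, d_col)
          return (slope < max_slope).astype(np.uint8)  # 1 = drivable

      def training_instance(camera_image, labels):
          # Correlate labels from the first component with vision data from
          # the second; registration is assumed to be pixel-for-pixel here.
          assert camera_image.shape[:2] == labels.shape
          return {"vision_data": camera_image, "drivability": labels}

      # Toy stand-ins for real sensor output.
      heights = np.random.rand(64, 64) * 0.01          # nearly flat ground
      image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
      instance = training_instance(image, drivability_labels_from_heights(heights))
      print(instance["drivability"].mean())            # fraction labeled drivable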
  • Patent number: 12190221
    Abstract: Implementations set forth herein relate to generating training data such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for that vision data. A drivability label can be determined using first vision data from a first vision component that is connected to a robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with one or more robots, enabling them to determine drivability of areas captured in vision data collected in real time using one or more vision components.
    Type: Grant
    Filed: July 25, 2023
    Date of Patent: January 7, 2025
    Assignee: Google LLC
    Inventors: Ammar Husain, Joerg Mueller
  • Publication number: 20240419169
    Abstract: A method includes receiving one or more past trajectories navigated by a robotic device in an environment, wherein the one or more past trajectories are associated with initial environmental sensor data and one or more obstacle detection heuristics. The method also includes determining, based at least on subsequent environmental sensor data, one or more updated obstacle detection heuristics. The method further includes determining, based on the one or more updated obstacle detection heuristics and the initial environmental sensor data, one or more predicted drivable areas in the environment. The method additionally includes, based on the one or more predicted drivable areas containing the one or more past trajectories, using the one or more updated obstacle detection heuristics to determine future navigation of the robotic device.
    Type: Application
    Filed: August 22, 2024
    Publication date: December 19, 2024
    Inventors: Ammar Husain, Ting Lu
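    A minimal sketch of the validation step this entry describes: because the robot actually drove the past trajectories, they are known-drivable ground truth, so an updated heuristic is trusted only if the drivable areas it predicts from the initial sensor data still contain those trajectories. The obstacle-score grid and threshold are illustrative assumptions:

      import numpy as np

      def predicted_drivable(obstacle_scores, threshold):
          # Heuristic: a cell is drivable when its obstacle score is low.
          return obstacle_scores < threshold

      def accept_updated_heuristic(initial_scores, past_trajectory_cells, new_threshold):
          # Past trajectories were actually traversed, so reject any updated
          # heuristic that would now predict those cells as blocked.
          drivable = predicted_drivable(initial_scores, new_threshold)
          return all(drivable[r, c] for r, c in past_trajectory_cells)

      scores = np.random.rand(32, 32)          # initial environmental sensor data
      trajectory = [(5, 5), (6, 6), (7, 7)]    # cells the robot drove through
      for row, col in trajectory:
          scores[row, col] = 0.1               # driven cells scored as clear
      print(accept_updated_heuristic(scores, trajectory, new_threshold=0.2))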
  • Patent number: 12090672
    Abstract: A method includes receiving, from a sensor on a robotic device, a captured image representative of an environment of the robotic device when the robotic device is at a location in the environment. The method also includes determining, based at least on the location of the robotic device, a rendered image representative of the environment of the robotic device. The method further includes determining, by applying at least one pre-trained machine learning model to at least the captured image and the rendered image, a property of one or more portions of the captured image.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: September 17, 2024
    Assignee: Google LLC
    Inventor: Ammar Husain
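    A minimal sketch of the comparison this entry builds on, with a simple patch-difference check standing in for the pre-trained machine learning model: the rendered image is what the robot expects to see at its location, so patches where the captured image diverges are candidates for the property of interest (e.g., unmapped or dynamic content). All names and thresholds are illustrative:

      import numpy as np

      def patch_property(captured, rendered, patch=16, tol=25.0):
          # Stand-in for the learned model: per patch, flag whether the
          # captured view deviates from the rendered (expected) view.
          rows, cols = captured.shape[0] // patch, captured.shape[1] // patch
          flags = np.zeros((rows, cols), dtype=bool)
          for i in range(rows):
              for j in range(cols):
                  a = captured[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                  b = rendered[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                  flags[i, j] = np.abs(a.astype(float) - b.astype(float)).mean() > tol
          return flags  # True = patch differs from the map-based expectation

      rendered = np.full((64, 64), 128, dtype=np.uint8)   # expected view
      captured = rendered.copy()
      captured[0:16, 0:16] = 230                          # an unexpected object
      print(patch_property(captured, rendered))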
  • Patent number: 12085942
    Abstract: A method includes receiving one or more past trajectories navigated by a robotic device in an environment, wherein the one or more past trajectories are associated with initial environmental sensor data and one or more obstacle detection heuristics. The method also includes determining, based at least on subsequent environmental sensor data, one or more updated obstacle detection heuristics. The method further includes determining, based on the one or more updated obstacle detection heuristics and the initial environmental sensor data, one or more predicted drivable areas in the environment. The method additionally includes, based on the one or more predicted drivable areas containing the one or more past trajectories, using the one or more updated obstacle detection heuristics to determine future navigation of the robotic device.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: September 10, 2024
    Assignee: Google LLC
    Inventors: Ammar Husain, Ting Lu
  • Publication number: 20240248458
    Abstract: Systems and methods are provided for improved generation and selection of robot sensor data for manual annotation and/or use in training machine learning models used to operate robots. An on-robot controller can determine that a cross-modal inconsistency, a failure of a temporally proximate target task, and/or a confidence in a model output indicates that particular sensor data should be transmitted to a remote system for human annotation and/or for use in updating the machine learning model(s) of the robot. Embedding vector(s) representing such selected sensor data (e.g., representing common aspects across a population of sets of sensor data) could also be determined and transmitted to the robot. The robot could then determine embeddings for sensor data and, if the embeddings are similar enough to the transmitted embedding(s), the sensor data could be transmitted to the remote system for annotation and/or model updating.
    Type: Application
    Filed: January 23, 2024
    Publication date: July 25, 2024
    Inventors: Sarah Najmark, Ammar Husain
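    A minimal sketch of the embedding-similarity gate described at the end of this abstract, assuming cosine similarity and an arbitrary threshold (both illustrative; the abstract does not fix a metric):

      import numpy as np

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def should_upload(embedding, reference_embeddings, threshold=0.8):
          # Transmit sensor data to the remote system when its embedding is
          # close to any embedding representing previously selected,
          # annotation-worthy data sent down from that system.
          return any(cosine(embedding, ref) >= threshold for ref in reference_embeddings)

      references = [np.array([1.0, 0.0, 0.0])]         # received from remote system
      new_embedding = np.array([0.9, 0.1, 0.0])        # computed on-robot
      print(should_upload(new_embedding, references))  # True: similar enough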
  • Publication number: 20230401419
    Abstract: Implementations set forth herein relate to generating training data such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for that vision data. A drivability label can be determined using first vision data from a first vision component that is connected to a robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with one or more robots, enabling them to determine drivability of areas captured in vision data collected in real time using one or more vision components.
    Type: Application
    Filed: July 25, 2023
    Publication date: December 14, 2023
    Inventors: Ammar Husain, Joerg Mueller
  • Patent number: 11741336
    Abstract: Implementations set forth herein relate to generating training data such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for that vision data. A drivability label can be determined using first vision data from a first vision component that is connected to a robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with one or more robots, enabling them to determine drivability of areas captured in vision data collected in real time using one or more vision components.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventors: Ammar Husain, Joerg Mueller
  • Publication number: 20230084774
    Abstract: A method includes determining, for a robotic device that comprises a perception system, a robot planner state representing at least one future path for the robotic device in an environment. The method also includes determining a perception system trajectory by inputting at least the robot planner state into a machine learning model trained based on training data comprising at least a plurality of robot planner states corresponding to a plurality of operator-directed perception system trajectories. The method further includes controlling, by the robotic device, the perception system to move through the determined perception system trajectory.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 16, 2023
    Inventors: Ammar Husain, Mikael Persson
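    A minimal sketch of the inference side of this entry: a model maps a robot planner state (here, future path waypoints) to a pan/tilt trajectory for the perception system. The heuristic below merely stands in for the model trained on operator-directed trajectories; all names are illustrative:

      import numpy as np

      def perception_trajectory(planner_state, model):
          # The trained model consumes a planner state and emits a sequence
          # of perception-system targets to move through.
          return model(planner_state)

      def stand_in_model(waypoints):
          # Heuristic substitute for the learned policy: pan the camera
          # toward each upcoming waypoint, with a slight downward tilt.
          pans = np.arctan2(waypoints[:, 1], waypoints[:, 0])
          tilts = np.full(len(waypoints), -0.2)
          return np.stack([pans, tilts], axis=1)  # rows of (pan, tilt) radians

      path = np.array([[1.0, 0.0], [2.0, 0.5], [3.0, 1.5]])  # future waypoints
      print(perception_trajectory(path, stand_in_model))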
  • Publication number: 20220281113
    Abstract: A method includes receiving, from a sensor on a robotic device, a captured image representative of an environment of the robotic device when the robotic device is at a location in the environment. The method also includes determining, based at least on the location of the robotic device, a rendered image representative of the environment of the robotic device. The method further includes determining, by applying at least one pre-trained machine learning model to at least the captured image and the rendered image, a property of one or more portions of the captured image.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Inventor: Ammar Husain
  • Publication number: 20210316448
    Abstract: Implementations set forth herein relate to generating training data such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for that vision data. A drivability label can be determined using first vision data from a first vision component that is connected to a robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with one or more robots, enabling them to determine drivability of areas captured in vision data collected in real time using one or more vision components.
    Type: Application
    Filed: December 19, 2019
    Publication date: October 14, 2021
    Inventors: Ammar Husain, Joerg Mueller
  • Publication number: 20210080970
    Abstract: Implementations set forth herein relate to a robot that employs a stereo camera and LIDAR to generate point cloud data while the robot traverses an area. The point cloud data can characterize spaces within the area as occupied, unoccupied, or uncategorized. For instance, an uncategorized space can refer to a point in three-dimensional (3D) space where occupancy is unknown and/or where no observation has been made by the robot, such as when a blind spot is located at or near the base of the robot. In order to traverse certain areas efficiently, the robot can estimate the resource costs of sweeping the stereo camera indiscriminately between spaces and/or of focusing the stereo camera specifically on uncategorized space(s) during the route. Based on such resource cost estimates, the robot can adaptively maneuver the stereo camera during routes while minimizing its resource consumption.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 18, 2021
    Inventors: Ammar Husain, Mikael Persson
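    A minimal sketch of the resource trade-off this abstract describes, over a grid whose cells are occupied, unoccupied, or uncategorized; the per-cell cost constants are illustrative assumptions:

      import numpy as np

      UNCATEGORIZED, UNOCCUPIED, OCCUPIED = 0, 1, 2

      def plan_camera_motion(grid, sweep_cost_per_cell=1.0, focus_cost_per_cell=4.0):
          # Compare the estimated cost of sweeping the stereo camera over
          # every cell against focusing it only on uncategorized cells
          # (e.g., blind spots near the robot's base), and pick the cheaper.
          sweep_cost = grid.size * sweep_cost_per_cell
          focus_cost = int((grid == UNCATEGORIZED).sum()) * focus_cost_per_cell
          return ("focus", focus_cost) if focus_cost < sweep_cost else ("sweep", sweep_cost)

      grid = np.full((20, 20), UNOCCUPIED)
      grid[0:2, 0:2] = UNCATEGORIZED   # a small blind spot
      print(plan_camera_motion(grid))  # ('focus', 16.0): few unknowns, so focus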