Patents by Inventor Gowtham Garimella
Gowtham Garimella has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11958554
Abstract: Model-based control of dynamical systems typically requires accurate domain-specific knowledge and specifications of system components. Generally, steering actuator dynamics can be difficult to model due to, for example, an integrated power steering control module, proprietary black-box controls, etc. Further, it is difficult to capture the complex interplay of non-linear interactions, such as power steering, tire forces, etc., with sufficient accuracy. To overcome this limitation, a recurrent neural network can be employed to model the steering dynamics of an autonomous vehicle. The resulting model can be used to generate feedforward steering commands for embedded control. Such a neural network model can be automatically generated with less domain-specific knowledge, can predict steering dynamics more accurately, and can perform comparably to a high-fidelity first-principles model when used for controlling the steering system of a self-driving vehicle.
Type: Grant
Filed: November 9, 2020
Date of Patent: April 16, 2024
Assignee: Zoox, Inc.
Inventors: Joseph Funke, Gowtham Garimella, Marin Kobilarov, Chuang Wang
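The idea of replacing a first-principles actuator model with a learned recurrent one can be sketched in a few lines. This is a minimal illustration with hypothetical weights and function names (`rnn_step`, `rollout` are not from the patent): a single recurrent cell whose hidden state carries the unmodeled actuator dynamics forward while mapping commanded steering to a predicted angle.

```python
import math

def rnn_step(hidden, command, w_h=0.8, w_x=0.5, w_out=1.2):
    # Hidden state carries unmodeled actuator dynamics forward in time;
    # the weights here are illustrative, not trained values.
    hidden = math.tanh(w_h * hidden + w_x * command)
    predicted_angle = w_out * hidden  # feedforward steering estimate
    return hidden, predicted_angle

def rollout(commands):
    """Unroll the cell over a command sequence, returning predicted angles."""
    h, angles = 0.0, []
    for c in commands:
        h, a = rnn_step(h, c)
        angles.append(a)
    return angles

angles = rollout([0.0, 0.1, 0.2, 0.2, 0.0])
```

A trained version of such a model would learn `w_h`, `w_x`, and `w_out` (and typically many more parameters) from logged steering commands and measured wheel angles.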
-
Patent number: 11891088
Abstract: A reward determined as part of a machine learning technique, such as reinforcement learning, may be used to control an adversarial agent in a simulation such that a component for controlling motion of the adversarial agent is trained to reduce the reward. Training the adversarial agent component may be subject to one or more constraints and/or may be balanced against one or more additional goals. Additionally or alternatively, the reward may be used to alter scenario data so that the scenario data reduces the reward, allowing the discovery of difficult scenarios and/or prospective events.
Type: Grant
Filed: June 14, 2021
Date of Patent: February 6, 2024
Assignee: Zoox, Inc.
Inventors: Marin Kobilarov, Jefferson Bradfield Packer, Gowtham Garimella, Andreas Pasternak, Yiteng Zhang, Ruikun Yu
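The core mechanism above, training an adversary against the negated reward, subject to a constraint, can be sketched simply. Everything here is hypothetical (the toy `ego_reward`, the gap-based constraint, and the candidate set are illustrative, not from the patent):

```python
def ego_reward(gap_m):
    """Toy reward for the vehicle under test: larger gaps score higher."""
    return min(gap_m / 10.0, 1.0)

def adversarial_objective(gap_m, min_legal_gap_m=1.0):
    # The adversary maximizes the negated ego reward, but a plausibility
    # constraint rejects scenarios that are not physically reasonable.
    if gap_m < min_legal_gap_m:
        return float("-inf")  # constraint violated: reject this action
    return -ego_reward(gap_m)

# The adversary prefers the smallest gap that still satisfies the constraint.
candidates = [0.5, 1.0, 3.0, 8.0]
best_gap = max(candidates, key=adversarial_objective)
```

In a full reinforcement-learning setup, the same negated reward would drive a policy-gradient or Q-learning update rather than a one-step search over candidates.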
-
Publication number: 20240001958
Abstract: Techniques for improving operational decisions of an autonomous vehicle are discussed herein. In some cases, a system may generate reference graphs associated with a route of the autonomous vehicle. Such reference graphs can comprise precomputed feature vectors based on grid regions and/or lane segments. The feature vectors are usable to determine scene context data associated with static objects to reduce computational expenses and compute time.
Type: Application
Filed: June 30, 2022
Publication date: January 4, 2024
Inventors: Gowtham Garimella, Gary Linscott, Ethan Miller Pronovost
-
Patent number: 11858514
Abstract: Techniques for top-down scene discrimination are discussed. A system receives scene data associated with an environment proximate a vehicle. The scene data is input to a convolutional neural network (CNN) discriminator trained using a generator and a classification of the output of the CNN discriminator. The CNN discriminator generates an indication of whether the scene data is a generated scene or a captured scene. If the scene data is a generated scene, the system generates a caution notification indicating that the current environmental situation is different from any previous situations. Additionally, the caution notification is communicated to at least one of a vehicle system or a remote vehicle monitoring system.
Type: Grant
Filed: March 30, 2021
Date of Patent: January 2, 2024
Assignee: Zoox, Inc.
Inventors: Gerrit Bagschik, Andrew Scott Crego, Gowtham Garimella, Michael Haggblade, Andraz Kavalar, Kai Zhenyu Wang
-
Patent number: 11810365
Abstract: Techniques for modeling the probability distribution of errors in perception systems are discussed herein. For example, techniques may include modeling error distribution for attributes such as position, size, pose, and velocity of objects detected in an environment, and training a mixture model to output specific error probability distributions based on input features such as object classification, distance to the object, and occlusion. The output of the trained model may be used to control the operation of a vehicle in an environment, generate simulations, perform collision probability analyses, and to mine log data to detect collision risks.
Type: Grant
Filed: December 15, 2020
Date of Patent: November 7, 2023
Assignee: Zoox, Inc.
Inventors: Andrew Scott Crego, Gowtham Garimella, Mahsa Ghafarianzadeh, Rasmus Fonseca, Muhammad Farooq Rama, Kai Zhenyu Wang
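A feature-conditioned Gaussian mixture over a perception error, the kind of model the abstract describes, can be sketched as follows. The mixture parameters here are hard-coded placeholders; in the patented approach they would come from a trained model conditioned on features such as class, distance, and occlusion:

```python
import random
import statistics

# Hypothetical parameters: (object_class, occluded) -> [(weight, mean_m, stddev_m), ...]
MIXTURES = {
    ("pedestrian", False): [(0.9, 0.0, 0.1), (0.1, 0.0, 0.5)],
    ("pedestrian", True):  [(0.6, 0.0, 0.3), (0.4, 0.0, 1.0)],
}

def sample_position_error(object_class, occluded, rng):
    """Draw one position-error sample from the conditional mixture."""
    components = MIXTURES[(object_class, occluded)]
    weights = [w for w, _, _ in components]
    _, mean, std = rng.choices(components, weights=weights, k=1)[0]
    return rng.gauss(mean, std)

rng = random.Random(0)
errors = [sample_position_error("pedestrian", True, rng) for _ in range(2000)]
spread = statistics.pstdev(errors)  # occluded pedestrians: wider error spread
```

Sampling like this is how such a model would feed a simulation or a collision-probability analysis: each perceived object gets a plausible, feature-dependent error drawn from its conditional distribution.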
-
Patent number: 11810225
Abstract: Techniques for top-down scene generation are discussed. A generator component may receive multi-dimensional input data associated with an environment. The generator component may generate, based at least in part on the multi-dimensional input data, a generated top-down scene. A discriminator component receives the generated top-down scene and a real top-down scene. The discriminator component generates binary classification data indicating whether an individual scene in the scene data is classified as generated or classified as real. The binary classification data is provided as a loss to the generator component and the discriminator component.
Type: Grant
Filed: March 30, 2021
Date of Patent: November 7, 2023
Assignee: Zoox, Inc.
Inventors: Gerrit Bagschik, Andrew Scott Crego, Gowtham Garimella, Michael Haggblade, Andraz Kavalar, Kai Zhenyu Wang
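The generator/discriminator loss structure described above is the standard GAN objective: binary cross-entropy on the discriminator's real-vs-generated classification. A minimal sketch with hypothetical prediction values (the numbers are illustrative, not from the patent):

```python
import math

def bce(prediction, label):
    """Binary cross-entropy for a single prediction in (0, 1)."""
    eps = 1e-7
    p = min(max(prediction, eps), 1.0 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Discriminator loss: push real scenes toward 1 and generated scenes toward 0.
d_loss = bce(0.8, 1.0) + bce(0.3, 0.0)

# Generator loss: the same classification, relabeled, so the generator is
# trained to make the discriminator score generated scenes as real.
g_loss = bce(0.3, 1.0)
```

This is the sense in which "the binary classification data is provided as a loss to the generator component and the discriminator component": one classification output, two opposing labelings.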
-
Patent number: 11734832
Abstract: Techniques for determining predictions on a top-down representation of an environment based on object movement are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) may capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle, a pedestrian, a bicycle). A multi-channel image representing a top-down view of the object(s) and the environment may be generated based in part on the sensor data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) may also be encoded in the image. Multiple images may be generated representing the environment over time and input into a prediction system configured to output a trajectory template (e.g., general intent for future movement) and a predicted trajectory (e.g., more accurate predicted movement) associated with each object. The prediction system may include a machine learned model configured to output the trajectory template(s) and the predicted trajector(ies).
Type: Grant
Filed: February 2, 2022
Date of Patent: August 22, 2023
Assignee: Zoox, Inc.
Inventors: Andres Guillermo Morales Morales, Marin Kobilarov, Gowtham Garimella, Kai Zhenyu Wang
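The multi-channel top-down encoding described above can be sketched as a small rasterization step. The grid size, channel names, and helper functions here are hypothetical; a real system would use many more channels and far finer cells:

```python
H, W = 8, 8
CHANNELS = ["occupancy", "speed_mps", "is_crosswalk"]

def empty_image():
    """One H x W grid per channel, all zeros."""
    return {c: [[0.0] * W for _ in range(H)] for c in CHANNELS}

def rasterize(image, objects, crosswalk_cells):
    # Objects are (row, col, speed) triples; one cell per object, toy-sized.
    for row, col, speed in objects:
        image["occupancy"][row][col] = 1.0
        image["speed_mps"][row][col] = speed
    # Map features such as crosswalks get their own channel.
    for row, col in crosswalk_cells:
        image["is_crosswalk"][row][col] = 1.0
    return image

image = rasterize(empty_image(),
                  objects=[(2, 3, 4.5)],
                  crosswalk_cells=[(2, 3), (2, 4)])
```

Stacking several such images over successive timesteps gives the time-sequenced input that the abstract's prediction system consumes.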
-
Patent number: 11708093
Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include determining a trajectory of the object, determining an intent of the trajectory, and sending the trajectory and the intent to a vehicle computing system to control an autonomous vehicle. The vehicle computing system may implement a machine learned model to process data such as sensor data and map data. The machine learned model can associate different intentions of an object in an environment with different trajectories. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on objects' intentions and trajectories.
Type: Grant
Filed: May 8, 2020
Date of Patent: July 25, 2023
Assignee: Zoox, Inc.
Inventors: Kenneth Michael Siebert, Gowtham Garimella, Benjamin Isaac Mattinson, Samir Parikh, Kai Zhenyu Wang
-
Publication number: 20230159060
Abstract: Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and, based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a distribution of predicted positions for the object in the future that meet a criterion, allowing for more efficient sampling. A predicted position of the object in the future may be determined by sampling from the distribution.
Type: Application
Filed: November 24, 2021
Publication date: May 25, 2023
Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Ethan Miller Pronovost, Kai Zhenyu Wang, Xiaosi Zeng
-
Publication number: 20230159059
Abstract: Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and, based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a predicted position of the object at a subsequent timestep. Further, a predicted trajectory of the object may be determined using predicted positions of the object at various timesteps.
Type: Application
Filed: November 24, 2021
Publication date: May 25, 2023
Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Ethan Miller Pronovost, Kai Zhenyu Wang, Xiaosi Zeng
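The graph structure both of these GNN publications describe, object features on nodes, relative positions on edges, can be sketched with one hand-written message-passing step. The features, weights, and distance-based attenuation here are all hypothetical placeholders for learned functions:

```python
# Node features: per-object feature vectors (illustrative values).
nodes = {"car": [1.0, 0.2], "pedestrian": [0.1, 0.9]}

# Edge features: relative (dx, dy) from source to destination, in meters.
edges = {("car", "pedestrian"): [4.0, -1.0],
         ("pedestrian", "car"): [-4.0, 1.0]}

def message_pass(nodes, edges, mix=0.5):
    """One update: each node blends its feature with distance-weighted
    messages from its neighbours (a stand-in for learned aggregation)."""
    updated = {}
    for name, feature in nodes.items():
        msgs = []
        for (src, dst), rel in edges.items():
            if dst == name:
                dist = (rel[0] ** 2 + rel[1] ** 2) ** 0.5
                msgs.append([x / (1.0 + dist) for x in nodes[src]])
        agg = ([sum(vals) / len(msgs) for vals in zip(*msgs)]
               if msgs else [0.0] * len(feature))
        updated[name] = [(1 - mix) * f + mix * m for f, m in zip(feature, agg)]
    return updated

updated = message_pass(nodes, edges)
```

In the patented setting, decoding the updated node states (here, just the blended vectors) is what yields predicted positions or a distribution over them.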
-
Patent number: 11631200
Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
Type: Grant
Filed: May 20, 2021
Date of Patent: April 18, 2023
Assignee: Zoox, Inc.
Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
-
Patent number: 11565709
Abstract: Techniques for generating simulations for evaluating a performance of a controller of an autonomous vehicle are described. A computing system may evaluate the performance of the controller to navigate the simulation and respond to actions of one or more objects (e.g., other vehicles, bicyclists, pedestrians, etc.) in a simulation. Actions of the objects in the simulation may be controlled by the computing system (e.g., by an artificial intelligence) and/or one or more users inputting object controls, such as via a user interface. The computing system may calculate performance metrics associated with the actions performed by the vehicle in the simulation as directed by the autonomous controller. The computing system may utilize the performance metrics to verify parameters of the autonomous controller (e.g., validate the autonomous controller) and/or to train the autonomous controller utilizing machine learning techniques to bias toward preferred actions.
Type: Grant
Filed: August 29, 2019
Date of Patent: January 31, 2023
Assignee: Zoox, Inc.
Inventors: Timothy Caldwell, Jefferson Bradfield Packer, William Anthony Silva, Rick Zhang, Gowtham Garimella
-
Patent number: 11554790
Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include inputting data into a model and receiving an output from the model representing a discretized representation. The discretized representation may be associated with a probability of an object reaching a location in the environment at a future time. A vehicle computing system may determine a trajectory and a weight associated with the trajectory using the discretized representation and the probability. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on the trajectory and the weight output by the vehicle computing system.
Type: Grant
Filed: May 8, 2020
Date of Patent: January 17, 2023
Assignee: Zoox, Inc.
Inventors: Kenneth Michael Siebert, Gowtham Garimella, Samir Parikh
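One natural reading of "a trajectory and a weight... using the discretized representation and the probability" is that a candidate trajectory's weight is the probability mass of the grid cells it passes through. A minimal sketch with a hypothetical 3x3 probability grid and illustrative trajectories:

```python
# Hypothetical discretized representation: probability that the object
# reaches each cell at a future time (rows x cols, sums to 1.0).
prob_grid = [
    [0.05, 0.10, 0.05],
    [0.10, 0.30, 0.10],
    [0.05, 0.20, 0.05],
]

def trajectory_weight(cells):
    """Weight a candidate trajectory by the probability of the cells it visits."""
    return sum(prob_grid[r][c] for r, c in cells)

straight = trajectory_weight([(0, 1), (1, 1), (2, 1)])  # down the middle
swerve = trajectory_weight([(0, 0), (1, 0), (2, 0)])    # down the left edge
```

A planner could then prefer to account for the higher-weight trajectory when deciding how the autonomous vehicle should traverse the environment.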
-
Publication number: 20220319057
Abstract: Techniques for top-down scene generation are discussed. A generator component may receive multi-dimensional input data associated with an environment. The generator component may generate, based at least in part on the multi-dimensional input data, a generated top-down scene. A discriminator component receives the generated top-down scene and a real top-down scene. The discriminator component generates binary classification data indicating whether an individual scene in the scene data is classified as generated or classified as real. The binary classification data is provided as a loss to the generator component and the discriminator component.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: Gerrit Bagschik, Andrew Scott Crego, Gowtham Garimella, Michael Haggblade, Andraz Kavalar, Kai Zhenyu Wang
-
Publication number: 20220314993
Abstract: Techniques for top-down scene discrimination are discussed. A system receives scene data associated with an environment proximate a vehicle. The scene data is input to a convolutional neural network (CNN) discriminator trained using a generator and a classification of the output of the CNN discriminator. The CNN discriminator generates an indication of whether the scene data is a generated scene or a captured scene. If the scene data is a generated scene, the system generates a caution notification indicating that the current environmental situation is different from any previous situations. Additionally, the caution notification is communicated to at least one of a vehicle system or a remote vehicle monitoring system.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: Gerrit Bagschik, Andrew Scott Crego, Gowtham Garimella, Michael Haggblade, Andraz Kavalar, Kai Zhenyu Wang
-
Publication number: 20220274625
Abstract: Techniques are discussed herein for generating and using graph neural networks (GNNs) including vectorized representations of map elements and entities within the environment of an autonomous vehicle. Various techniques may include vectorizing map data into representations of map elements, and object data representing entities in the environment of the autonomous vehicle. In some examples, the autonomous vehicle may generate and/or use a GNN representing the environment, including nodes stored as vectorized representations of map elements and entities, and edge features including the relative position and relative yaw between the objects. Machine-learning inference operations may be executed on the GNN, and the node and edge data may be extracted and decoded to predict future states of the entities in the environment.
Type: Application
Filed: February 26, 2021
Publication date: September 1, 2022
Inventors: Gowtham Garimella, Andres Guillermo Morales Morales
-
Patent number: 11276179
Abstract: Techniques for determining predictions on a top-down representation of an environment based on object movement are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) may capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle, a pedestrian, a bicycle). A multi-channel image representing a top-down view of the object(s) and the environment may be generated based in part on the sensor data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) may also be encoded in the image. Multiple images may be generated representing the environment over time and input into a prediction system configured to output a trajectory template (e.g., general intent for future movement) and a predicted trajectory (e.g., more accurate predicted movement) associated with each object. The prediction system may include a machine learned model configured to output the trajectory template(s) and the predicted trajector(ies).
Type: Grant
Filed: December 18, 2019
Date of Patent: March 15, 2022
Assignee: Zoox, Inc.
Inventors: Andres Guillermo Morales Morales, Marin Kobilarov, Gowtham Garimella, Kai Zhenyu Wang
-
Publication number: 20210347383
Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include determining a trajectory of the object, determining an intent of the trajectory, and sending the trajectory and the intent to a vehicle computing system to control an autonomous vehicle. The vehicle computing system may implement a machine learned model to process data such as sensor data and map data. The machine learned model can associate different intentions of an object in an environment with different trajectories. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on objects' intentions and trajectories.
Type: Application
Filed: May 8, 2020
Publication date: November 11, 2021
Inventors: Kenneth Michael Siebert, Gowtham Garimella, Benjamin Isaac Mattinson, Samir Parikh, Kai Zhenyu Wang
-
Publication number: 20210347377
Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include inputting data into a model and receiving an output from the model representing a discretized representation. The discretized representation may be associated with a probability of an object reaching a location in the environment at a future time. A vehicle computing system may determine a trajectory and a weight associated with the trajectory using the discretized representation and the probability. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on the trajectory and the weight output by the vehicle computing system.
Type: Application
Filed: May 8, 2020
Publication date: November 11, 2021
Inventors: Kenneth Michael Siebert, Gowtham Garimella, Samir Parikh
-
Publication number: 20210271901
Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
Type: Application
Filed: May 20, 2021
Publication date: September 2, 2021
Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang