Patents by Inventor Vasiliy Karasev

Vasiliy Karasev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11776135
    Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. This sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: October 3, 2023
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Sarah Tariq
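
    The entry above describes a transform, predict, and invert pipeline for recovering object velocity from an image sequence. The following minimal Python sketch illustrates that flow under simplifying assumptions: the function names, camera intrinsics, and the stand-in ROV prediction are illustrative only, not the patented implementation.

      import numpy as np

      def center_object_transform(image_shape, object_center_px):
          """Pixel translation that moves the detected object's center to the image center."""
          h, w = image_shape[:2]
          image_center = np.array([w / 2.0, h / 2.0])
          return image_center - np.asarray(object_center_px, dtype=float)

      def apply_translation(points_px, translation_px):
          """Shift pixel coordinates by the same fixed translation for every frame in the sequence."""
          return np.asarray(points_px, dtype=float) + translation_px

      def rov_to_camera_velocity(rov_px_per_s, depth_m, focal_px):
          """Convert a relative object velocity (ROV) in image coordinates to metres per second.

          A pure pixel translation leaves a velocity vector unchanged under the inverse
          transform, so in this simplified sketch only the depth/focal-length scaling remains.
          """
          return np.asarray(rov_px_per_s, dtype=float) * depth_m / focal_px

      if __name__ == "__main__":
          image_shape = (720, 1280)
          object_center = (900.0, 400.0)        # object center detected in the first frame
          shift = center_object_transform(image_shape, object_center)
          print("object recentered to:", apply_translation(object_center, shift))

          # A trained network would consume the shifted image sequence; here we simply
          # stand in a fake ROV prediction of 12 px/s rightward and 3 px/s downward.
          predicted_rov = np.array([12.0, 3.0])
          velocity_cam = rov_to_camera_velocity(predicted_rov, depth_m=25.0, focal_px=1000.0)
          print("estimated object velocity in the camera frame (m/s):", velocity_cam)
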
  • Patent number: 11710352
    Abstract: Techniques for detecting attributes and/or gestures associated with pedestrians in an environment are described herein. The techniques may include receiving sensor data associated with a pedestrian in an environment of a vehicle and inputting the sensor data into a machine-learned model that is configured to determine a gesture and/or an attribute of the pedestrian. Based on the input data, an output may be received from the machine-learned model that indicates the gesture and/or the attribute of the pedestrian, and the vehicle may be controlled based at least in part on the gesture and/or the attribute of the pedestrian. The techniques may also include training the machine-learned model to detect the attribute and/or the gesture of the pedestrian.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: July 25, 2023
    Assignee: Zoox, Inc.
    Inventors: Oytun Ulutan, Xin Wang, Kratarth Goel, Vasiliy Karasev, Sarah Tariq, Yi Xu
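
    At inference time, the gesture and attribute detection described above reduces to classifying per-pedestrian sensor features and gating vehicle behavior on the result. The Python sketch below illustrates that shape; the label sets, the toy scoring stand-in for the machine-learned model, and the control policy are assumptions for illustration only.

      from dataclasses import dataclass
      from typing import Sequence

      GESTURES = ("none", "waving", "hailing")
      ATTRIBUTES = ("standing", "walking", "looking_at_phone")

      @dataclass
      class PedestrianPrediction:
          gesture: str
          attribute: str
          confidence: float

      def classify_pedestrian(features: Sequence[float]) -> PedestrianPrediction:
          """Stand-in for the machine-learned model: a real system would run a trained
          network on image/lidar features; here a toy score just selects labels."""
          score = sum(features) % 1.0
          return PedestrianPrediction(
              gesture=GESTURES[int(score * len(GESTURES)) % len(GESTURES)],
              attribute=ATTRIBUTES[int(score * len(ATTRIBUTES)) % len(ATTRIBUTES)],
              confidence=0.5 + score / 2.0,
          )

      def choose_vehicle_action(pred: PedestrianPrediction) -> str:
          """Control sketch: yield when the pedestrian appears to be hailing or inattentive."""
          if pred.gesture == "hailing" or pred.attribute == "looking_at_phone":
              return "slow_and_yield"
          return "continue_nominal_trajectory"

      if __name__ == "__main__":
          crop_features = [0.12, 0.53, 0.31]    # placeholder per-pedestrian features
          prediction = classify_pedestrian(crop_features)
          print(prediction, "->", choose_vehicle_action(prediction))
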
  • Patent number: 11548512
    Abstract: Techniques for determining a vehicle action and controlling a vehicle to perform the vehicle action for navigating the vehicle in an environment can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane that is associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: January 10, 2023
    Assignee: Zoox, Inc.
    Inventors: Abishek Krishna Akella, Vasiliy Karasev, Kai Zhenyu Wang, Rick Zhang
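
    The yield-score check in this entry can be read as: summarize the target-lane object with a few attributes, map them through a learned model to a score in [0, 1], and compare the score against a threshold before committing to the lane change. In the Python sketch below, the feature names, weights, and threshold are assumptions, and a logistic function stands in for the machine-learned model.

      import math
      from dataclasses import dataclass

      @dataclass
      class TargetLaneObject:
          relative_gap_m: float       # longitudinal gap to the object in the target lane
          closing_speed_mps: float    # positive if the object is closing the gap
          deceleration_mps2: float    # observed braking of the object

      def yield_score(obj: TargetLaneObject) -> float:
          """Stand-in for the learned model: maps object attributes to a probability-like
          score that the object will yield to the lane change."""
          x = 0.08 * obj.relative_gap_m - 0.6 * obj.closing_speed_mps + 0.9 * obj.deceleration_mps2
          return 1.0 / (1.0 + math.exp(-x))   # logistic squashing to [0, 1]

      def safe_to_change_lanes(obj: TargetLaneObject, threshold: float = 0.7) -> bool:
          return yield_score(obj) >= threshold

      if __name__ == "__main__":
          follower = TargetLaneObject(relative_gap_m=18.0, closing_speed_mps=0.5, deceleration_mps2=0.8)
          action = "perform lane change" if safe_to_change_lanes(follower) else "hold lane"
          print("yield score:", round(yield_score(follower), 3), "->", action)
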
  • Patent number: 11460850
    Abstract: A trajectory estimate of a wheeled vehicle can be determined based at least in part on determining a wheel angle associated with the vehicle. In some examples, at least a portion of an image associated with the wheeled vehicle may be input into a machine-learned model that is trained to classify and/or regress wheel directions of wheeled vehicles. The machine-learned model may output a predicted wheel direction. The wheel direction and/or additional or historical sensor data may be used to estimate a trajectory of the wheeled vehicle. The predicted trajectory of the wheeled vehicle can then be used to generate and refine an autonomous vehicle's trajectory as the autonomous vehicle proceeds through the environment.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: October 4, 2022
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
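
    A predicted wheel direction becomes useful once it is pushed through a motion model. The Python sketch below rolls a kinematic bicycle model forward from an assumed wheel-angle prediction; the model choice, wheelbase, and horizon are illustrative assumptions, not the patented method.

      import math

      def predict_trajectory(x, y, heading_rad, speed_mps, wheel_angle_rad,
                             wheelbase_m=2.8, dt=0.1, horizon_s=3.0):
          """Roll a simple kinematic bicycle model forward using a wheel angle that a
          perception model is assumed to have predicted from an image of the vehicle."""
          trajectory = [(x, y, heading_rad)]
          for _ in range(int(horizon_s / dt)):
              x += speed_mps * math.cos(heading_rad) * dt
              y += speed_mps * math.sin(heading_rad) * dt
              heading_rad += speed_mps / wheelbase_m * math.tan(wheel_angle_rad) * dt
              trajectory.append((x, y, heading_rad))
          return trajectory

      if __name__ == "__main__":
          # Wheel direction predicted from the image: roughly 8 degrees to the left.
          path = predict_trajectory(x=0.0, y=0.0, heading_rad=0.0,
                                    speed_mps=5.0, wheel_angle_rad=math.radians(8.0))
          x_end, y_end, _ = path[-1]
          print(f"predicted position after 3 s: ({x_end:.1f} m, {y_end:.1f} m)")
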
  • Patent number: 11292462
    Abstract: A trajectory estimate of a wheeled vehicle can be determined based at least in part on determining a wheel angle associated with the vehicle. In some examples, at least a portion of an image associated with the wheeled vehicle may be input into a machine-learned model that is trained to classify and/or regress wheel directions of wheeled vehicles. The machine-learned model may output a predicted wheel direction. The wheel direction and/or additional or historical sensor data may be used to estimate a trajectory of the wheeled vehicle. The predicted trajectory of the wheeled vehicle can then be used to generate and refine an autonomous vehicle's trajectory as the autonomous vehicle proceeds through the environment.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: April 5, 2022
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
  • Patent number: 11126179
    Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories, and such comparisons may be used as training data for machine learning.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: September 21, 2021
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
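
    The training signal mentioned at the end of this abstract, comparing predicted trajectories against the trajectories objects actually followed, is commonly expressed as a displacement error. The short Python sketch below uses average displacement error for that comparison; this particular metric is an assumption, not necessarily the loss used in the patent.

      import numpy as np

      def average_displacement_error(predicted_xy: np.ndarray, actual_xy: np.ndarray) -> float:
          """Mean Euclidean distance between corresponding predicted and observed waypoints;
          a standard trajectory-prediction metric that can serve as a training loss."""
          return float(np.linalg.norm(predicted_xy - actual_xy, axis=-1).mean())

      if __name__ == "__main__":
          # Toy 2-second horizon at 2 Hz: predicted vs. logged pedestrian positions (metres).
          predicted = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2], [1.2, 0.3]])
          actual = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.1], [1.5, 0.1]])
          print("ADE (training signal):", average_displacement_error(predicted, actual))
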
  • Patent number: 11062461
    Abstract: An object position and/or orientation can be determined based on image data and object contact points. Image data can be captured representing an object, such as a vehicle. Vehicle contact points can be identified in the image data representing wheel contacts with the ground. For an individual vehicle contact point (e.g., a left-front wheel of the vehicle), a ray can be determined that emanates from the image sensor and passes through the vehicle contact point. To determine a location and velocity of the vehicle, the ray can be unprojected onto a three-dimensional surface mesh, and an intersection point between the ray and the three-dimensional surface mesh can be used as an initial estimate for the projected location of the vehicle contact point in the world. The estimated location can be adjusted based on various cost functions to optimize the accuracy of the estimated vehicle contact point locations.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: July 13, 2021
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Juhana Kangaspunta, James William Vaisey Philbin
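
    The core geometric step in this entry, casting a ray through a wheel-ground contact pixel and intersecting it with the ground surface, can be illustrated compactly. In the Python sketch below a flat ground plane stands in for the patent's three-dimensional surface mesh, and the camera intrinsics, pose, and pixel location are assumptions for illustration.

      import numpy as np

      def pixel_to_camera_ray(u, v, fx, fy, cx, cy):
          """Unit ray through pixel (u, v) in the camera frame (x right, y down, z forward)."""
          d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
          return d / np.linalg.norm(d)

      def intersect_ground_plane(origin_w, dir_w, ground_z=0.0):
          """Intersect a world-frame ray with the plane z = ground_z, standing in for the mesh."""
          if abs(dir_w[2]) < 1e-9:
              raise ValueError("ray is parallel to the ground")
          t = (ground_z - origin_w[2]) / dir_w[2]
          if t <= 0:
              raise ValueError("ground intersection is behind the camera")
          return origin_w + t * dir_w

      if __name__ == "__main__":
          fx = fy = 1000.0
          cx, cy = 640.0, 360.0                  # assumed intrinsics for a 1280x720 image
          contact_pixel = (800.0, 500.0)         # detected wheel-ground contact point

          ray_cam = pixel_to_camera_ray(*contact_pixel, fx, fy, cx, cy)

          # Camera-to-world rotation for a level, forward-facing camera:
          # camera z -> world x (forward), camera x -> world -y, camera y -> world -z.
          R_wc = np.array([[0.0, 0.0, 1.0],
                           [-1.0, 0.0, 0.0],
                           [0.0, -1.0, 0.0]])
          cam_origin_w = np.array([0.0, 0.0, 1.6])   # camera mounted 1.6 m above the ground

          contact_w = intersect_ground_plane(cam_origin_w, R_wc @ ray_cam)
          print("initial contact-point estimate in the world frame (m):", np.round(contact_w, 2))
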
  • Publication number: 20210053570
    Abstract: Techniques for determining a vehicle action and controlling a vehicle to perform the vehicle action for navigating the vehicle in an environment can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane that is associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
    Type: Application
    Filed: August 23, 2019
    Publication date: February 25, 2021
    Inventors: Abishek Krishna Akella, Vasiliy Karasev, Kai Zhenyu Wang, Rick Zhang
  • Publication number: 20210049778
    Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. This sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
    Type: Application
    Filed: November 3, 2020
    Publication date: February 18, 2021
    Inventors: Vasiliy Karasev, Sarah Tariq
  • Patent number: 10832418
    Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. This sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: November 10, 2020
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Sarah Tariq
  • Publication number: 20200272148
    Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories, and such comparisons may be used as training data for machine learning.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
  • Publication number: 20190147600
    Abstract: An object position and/or orientation can be determined based on image data and object contact points. Image data can be captured representing an object, such as a vehicle. Vehicle contact points can be identified in the image data representing wheel contacts with the ground. For an individual vehicle contact point (e.g., a left-front wheel of the vehicle), a ray can be determined that emanates from the image sensor and passes through the vehicle contact point. To determine a location and velocity of the vehicle, the ray can be unprojected onto a three-dimensional surface mesh, and an intersection point between the ray and the three-dimensional surface mesh can be used as an initial estimate for the projected location of the vehicle contact point in the world. The estimated location can be adjusted based on various cost functions to optimize the accuracy of the estimated vehicle contact point locations.
    Type: Application
    Filed: November 16, 2017
    Publication date: May 16, 2019
    Inventors: Vasiliy Karasev, Juhana Kangaspunta, James William Vaisey Philbin