Patents by Inventor Vasiliy Karasev
Vasiliy Karasev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12100224
Abstract: Techniques for detecting key points associated with objects in an environment are described herein. The techniques may include receiving sensor data representing a portion of an environment in which a vehicle is operating and inputting the sensor data into a machine-learned model. Based on the input sensor data, the machine-learned model may detect one or more key points corresponding to physical features (e.g., hands, feet, eyes, etc.) of a pedestrian who is in the environment. Based on the one or more key points, a bounding box associated with the pedestrian may be generated and the vehicle may be controlled based on at least one of the key points or the bounding box. The techniques may also include training the machine-learned model to detect key points associated with pedestrians.
Type: Grant
Filed: April 30, 2021
Date of Patent: September 24, 2024
Assignee: Zoox, Inc.
Inventors: Kratarth Goel, Vasiliy Karasev, Sarah Tariq
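The last step the abstract describes, deriving a bounding box from detected key points, can be sketched as a simple min/max over the 2D point coordinates. This is an illustrative simplification, not the patented method; the function name, the margin parameter, and the sample points are all hypothetical.

```python
def bounding_box_from_keypoints(keypoints, margin=0.0):
    """Axis-aligned box (x_min, y_min, x_max, y_max) enclosing 2D key points,
    optionally padded by a margin in pixels."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

# Hypothetical key points for a pedestrian's head, hands, and feet (pixels).
pts = [(10.0, 2.0), (8.0, 6.0), (12.0, 6.0), (9.0, 12.0), (11.0, 12.0)]
print(bounding_box_from_keypoints(pts, margin=1.0))  # (7.0, 1.0, 13.0, 13.0)
```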
-
Patent number: 12051276
Abstract: Techniques for detecting attributes and/or gestures associated with pedestrians in an environment are described herein. The techniques may include receiving sensor data associated with a pedestrian in an environment of a vehicle and inputting the sensor data into a machine-learned model that is configured to determine a gesture and/or an attribute of the pedestrian. Based on the input data, an output may be received from the machine-learned model that indicates the gesture and/or the attribute of the pedestrian and the vehicle may be controlled based at least in part on the gesture and/or the attribute of the pedestrian. The techniques may also include training the machine-learned model to detect the attribute and/or the gesture of the pedestrian.
Type: Grant
Filed: June 5, 2023
Date of Patent: July 30, 2024
Assignee: Zoox, Inc.
Inventors: Oytun Ulutan, Xin Wang, Kratarth Goel, Vasiliy Karasev, Sarah Tariq, Yi Xu
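A downstream consumer of such a model's output might reduce per-class scores to a single gesture and attribute before they feed the vehicle's control logic. The sketch below is a hypothetical post-processing step only (the class labels, scores, and threshold are invented); the patent's actual model and interface are not public in this listing.

```python
def interpret_pedestrian_output(gesture_scores, attribute_scores, threshold=0.5):
    """Pick the highest-scoring gesture and attribute, or None if no class
    clears the confidence threshold."""
    def top(scores):
        label, score = max(scores.items(), key=lambda kv: kv[1])
        return label if score >= threshold else None
    return top(gesture_scores), top(attribute_scores)

# Hypothetical model outputs for one pedestrian.
gesture, attribute = interpret_pedestrian_output(
    {"waving": 0.84, "pointing": 0.10},
    {"looking_at_vehicle": 0.91, "on_phone": 0.05},
)
print(gesture, attribute)  # waving looking_at_vehicle
```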
-
Patent number: 11776135
Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. Such a sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
Type: Grant
Filed: November 3, 2020
Date of Patent: October 3, 2023
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, Sarah Tariq
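The last two steps of this pipeline, inverting the centering transform and scaling by depth, can be sketched for a 2D image-plane transform. This is a minimal sketch under stated assumptions (a pure rotation as the centering transform, a pinhole relation v = depth · v_pixels / f); the actual transform and model in the patent are not specified here, and all numbers are hypothetical.

```python
import numpy as np

def object_velocity_from_rov(rov_transformed, transform, depth, focal_length):
    """Map a relative object velocity (ROV) predicted in the object-centered
    (transformed) image frame back to a metric velocity relative to the camera:
    invert the centering transform, then scale pixels/s to m/s with the
    pinhole relation v = depth * v_pixels / f."""
    rov_camera = np.linalg.inv(transform) @ np.asarray(rov_transformed, dtype=float)
    return depth * rov_camera / focal_length

# Hypothetical numbers: a 90-degree centering rotation, object 20 m away,
# focal length 1000 px, ROV of 5 px/s along the transformed x-axis.
R = np.array([[0.0, -1.0],
              [1.0, 0.0]])
v = object_velocity_from_rov([5.0, 0.0], R, depth=20.0, focal_length=1000.0)
# v is [0.0, -0.1] m/s in the original camera image frame
```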
-
Patent number: 11710352
Abstract: Techniques for detecting attributes and/or gestures associated with pedestrians in an environment are described herein. The techniques may include receiving sensor data associated with a pedestrian in an environment of a vehicle and inputting the sensor data into a machine-learned model that is configured to determine a gesture and/or an attribute of the pedestrian. Based on the input data, an output may be received from the machine-learned model that indicates the gesture and/or the attribute of the pedestrian and the vehicle may be controlled based at least in part on the gesture and/or the attribute of the pedestrian. The techniques may also include training the machine-learned model to detect the attribute and/or the gesture of the pedestrian.
Type: Grant
Filed: May 14, 2021
Date of Patent: July 25, 2023
Assignee: Zoox, Inc.
Inventors: Oytun Ulutan, Xin Wang, Kratarth Goel, Vasiliy Karasev, Sarah Tariq, Yi Xu
-
Patent number: 11548512
Abstract: Techniques for determining a vehicle action and controlling a vehicle to perform the vehicle action for navigating the vehicle in an environment can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
Type: Grant
Filed: August 23, 2019
Date of Patent: January 10, 2023
Assignee: Zoox, Inc.
Inventors: Abishek Krishna Akella, Vasiliy Karasev, Kai Zhenyu Wang, Rick Zhang
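The attribute-data-to-yield-score step can be illustrated with a logistic score over per-object features, thresholded into a go/no-go decision. This is a hypothetical stand-in for the patent's machine-learned model; the feature set, weights, and threshold below are invented for illustration.

```python
import math

def yield_score(features, weights, bias):
    """Logistic score in [0, 1]: the estimated probability that the object
    in the target lane will yield to the lane change."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def safe_to_change_lanes(score, threshold=0.7):
    """Gate the lane change action on the yield score."""
    return score >= threshold

# Hypothetical attributes: [gap to object (m), closing speed (m/s), decelerating flag].
score = yield_score([30.0, -1.0, 1.0], weights=[0.05, -0.4, 1.0], bias=-1.0)
print(round(score, 2), safe_to_change_lanes(score))  # 0.87 True
```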
-
Patent number: 11460850
Abstract: A trajectory estimate of a wheeled vehicle can be determined based at least in part on determining a wheel angle associated with the vehicle. In some examples, at least a portion of an image associated with the wheeled vehicle may be input into a machine-learned model that is trained to classify and/or regress wheel directions of wheeled vehicles. The machine-learned model may output a predicted wheel direction. The wheel direction and/or additional or historical sensor data may be used to estimate a trajectory of the wheeled vehicle. The predicted trajectory of the object can then be used to generate and refine an autonomous vehicle's trajectory as the autonomous vehicle proceeds through the environment.
Type: Grant
Filed: May 14, 2019
Date of Patent: October 4, 2022
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
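One common way to turn a predicted wheel angle into a trajectory estimate is to roll a kinematic bicycle model forward in time. The patent does not specify its motion model, so this is a sketch under that assumption; the speed, wheelbase, and horizon values are hypothetical.

```python
import math

def predict_trajectory(x, y, heading, speed, wheel_angle, wheelbase, dt, steps):
    """Roll a kinematic bicycle model forward from an observed wheel angle,
    returning a list of (x, y, heading) states."""
    traj = []
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += speed / wheelbase * math.tan(wheel_angle) * dt
        traj.append((x, y, heading))
    return traj

# Hypothetical: 5 m/s, wheels turned 0.1 rad left, 2.8 m wheelbase, 1 s horizon.
path = predict_trajectory(0.0, 0.0, 0.0, 5.0, 0.1, 2.8, dt=0.1, steps=10)
# With a positive wheel angle the predicted path curves left (y increases).
```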
-
Patent number: 11292462
Abstract: A trajectory estimate of a wheeled vehicle can be determined based at least in part on determining a wheel angle associated with the vehicle. In some examples, at least a portion of an image associated with the wheeled vehicle may be input into a machine-learned model that is trained to classify and/or regress wheel directions of wheeled vehicles. The machine-learned model may output a predicted wheel direction. The wheel direction and/or additional or historical sensor data may be used to estimate a trajectory of the wheeled vehicle. The predicted trajectory of the object can then be used to generate and refine an autonomous vehicle's trajectory as the autonomous vehicle proceeds through the environment.
Type: Grant
Filed: May 14, 2019
Date of Patent: April 5, 2022
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
-
Patent number: 11126179
Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
Type: Grant
Filed: February 21, 2019
Date of Patent: September 21, 2021
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
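The comparison of predicted to actual trajectories that the abstract mentions is commonly scored as an average displacement error (ADE). The patent does not name its metric, so this is one plausible choice, shown with invented sample trajectories.

```python
import math

def average_displacement_error(predicted, actual):
    """Mean Euclidean distance between corresponding predicted and actual
    trajectory points; a standard training/evaluation signal for forecasters."""
    assert len(predicted) == len(actual)
    return sum(math.dist(p, a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical pedestrian trajectories sampled at three time steps (meters).
pred   = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
actual = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(average_displacement_error(pred, actual))  # (0 + 1 + 2) / 3 = 1.0
```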
-
Patent number: 11062461
Abstract: An object position and/or orientation can be determined based on image data and object contact points. Image data can be captured representing an object, such as a vehicle. Vehicle contact points can be identified in the image data representing wheel contacts with the ground. For an individual vehicle contact point (e.g., a left-front wheel of the vehicle), a ray can be determined that emanates from the image sensor and passes through the vehicle contact point. To determine a location and velocity of the vehicle, the ray can be unprojected onto a three-dimensional surface mesh, and an intersection point between the ray and the three-dimensional surface mesh can be used as an initial estimate for the projected location of the vehicle contact point in the world. The estimated location can be adjusted based on various cost functions to optimize an accuracy of the locations of the estimated vehicle contact points.
Type: Grant
Filed: November 16, 2017
Date of Patent: July 13, 2021
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, Juhana Kangaspunta, James William Vaisey Philbin
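The ray-unprojection step can be sketched by replacing the patent's 3D surface mesh with a flat ground plane z = 0, which makes the ray-surface intersection a one-line solve. The intrinsics, camera pose, and pixel below are hypothetical, and the flat-ground assumption is a deliberate simplification.

```python
import numpy as np

def unproject_to_ground(pixel, K, cam_pos, cam_rot):
    """Cast a ray from the camera through `pixel` and intersect it with the
    flat ground plane z = 0 (a stand-in for a full 3D surface mesh)."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = cam_rot @ ray_cam           # rotate the ray into the world frame
    t = -cam_pos[2] / ray_world[2]          # solve cam_pos.z + t * ray.z = 0
    return cam_pos + t * ray_world

# Hypothetical intrinsics and pose: camera 1.5 m above the ground, looking
# along the world x-axis (camera z forward, x right, y down; world z up).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
cam_pos = np.array([0.0, 0.0, 1.5])
cam_rot = np.array([[0.0, 0.0, 1.0],
                    [-1.0, 0.0, 0.0],
                    [0.0, -1.0, 0.0]])
contact = unproject_to_ground((640.0, 560.0), K, cam_pos, cam_rot)
# contact is [7.5, 0.0, 0.0]: a wheel contact estimated 7.5 m ahead of the camera
```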
-
Publication number: 20210053570
Abstract: Techniques for determining a vehicle action and controlling a vehicle to perform the vehicle action for navigating the vehicle in an environment can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
Type: Application
Filed: August 23, 2019
Publication date: February 25, 2021
Inventors: Abishek Krishna Akella, Vasiliy Karasev, Kai Zhenyu Wang, Rick Zhang
-
Publication number: 20210049778
Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. Such a sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
Type: Application
Filed: November 3, 2020
Publication date: February 18, 2021
Inventors: Vasiliy Karasev, Sarah Tariq
-
Patent number: 10832418
Abstract: Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. Such a sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
Type: Grant
Filed: May 9, 2019
Date of Patent: November 10, 2020
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, Sarah Tariq
-
Publication number: 20200272148
Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
-
Publication number: 20190147600
Abstract: An object position and/or orientation can be determined based on image data and object contact points. Image data can be captured representing an object, such as a vehicle. Vehicle contact points can be identified in the image data representing wheel contacts with the ground. For an individual vehicle contact point (e.g., a left-front wheel of the vehicle), a ray can be determined that emanates from the image sensor and passes through the vehicle contact point. To determine a location and velocity of the vehicle, the ray can be unprojected onto a three-dimensional surface mesh, and an intersection point between the ray and the three-dimensional surface mesh can be used as an initial estimate for the projected location of the vehicle contact point in the world. The estimated location can be adjusted based on various cost functions to optimize an accuracy of the locations of the estimated vehicle contact points.
Type: Application
Filed: November 16, 2017
Publication date: May 16, 2019
Inventors: Vasiliy Karasev, Juhana Kangaspunta, James William Vaisey Philbin