Patents by Inventor Tencia Lee
Tencia Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11816852
Abstract: A monocular image often does not contain enough information to determine, with certainty, the depth of an object in a scene reflected in the image. Combining image data and LIDAR data may enable determining a depth estimate of the object relative to the camera. Specifically, LIDAR points corresponding to a region of interest (“ROI”) in the image that corresponds to the object may be combined with the image data. These LIDAR points may be scored according to a monocular image model and/or a factor based on the distance between the projections of the LIDAR points into the ROI and the center of the region of interest. Using these scores as weights in a weighted median of the LIDAR points may improve the accuracy of the depth estimate, for example, by discerning between a detected object and an occluding object and/or background.
Type: Grant
Filed: July 27, 2020
Date of Patent: November 14, 2023
Assignee: Zoox, Inc.
Inventors: Tencia Lee, Sabeek Mani Pradhan, Dragomir Dimitrov Anguelov
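The scored-weighted-median idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not Zoox's implementation: the Gaussian center-distance weighting, the `sigma` falloff, and the function name are assumptions standing in for the patented scoring, and the monocular-image-model score is elided.

```python
import numpy as np

def weighted_median_depth(depths, uv, roi_center, sigma=20.0):
    """Estimate object depth from LIDAR points projected into an image ROI.

    depths:     (N,) LIDAR point ranges relative to the camera (metres).
    uv:         (N, 2) pixel coordinates of each point projected into the image.
    roi_center: (2,) pixel coordinates of the ROI center.
    sigma:      falloff (pixels) of the center-distance weighting (assumed).
    """
    # Weight each point by how close its projection lies to the ROI center;
    # points near the center are more likely to belong to the detected object.
    dist = np.linalg.norm(uv - roi_center, axis=1)
    weights = np.exp(-(dist / sigma) ** 2)

    # Weighted median: sort by depth, return the depth at which the
    # cumulative weight first reaches half the total weight.
    order = np.argsort(depths)
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return depths[order][idx]

# Three near points from an occluder project far from the ROI center;
# two far points from the detected object project near it.
depths = np.array([2.0, 2.0, 2.0, 10.0, 10.0])
uv = np.array([[80.0, 80.0], [85.0, 82.0], [75.0, 85.0],
               [50.0, 50.0], [52.0, 50.0]])
print(weighted_median_depth(depths, uv, np.array([50.0, 50.0])))  # 10.0
```

A plain median of the same five points would report the occluder's 2 m; the weighting pushes the estimate to the detected object, which is the behavior the abstract describes.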
-
Patent number: 11433922
Abstract: Techniques for determining an uncertainty metric associated with an object in an environment can include determining the object in the environment and a set of candidate trajectories associated with the object. Further, a vehicle, such as an autonomous vehicle, can be controlled based at least in part on the uncertainty metric. The vehicle can determine a traversed trajectory associated with the object and determine a difference between the traversed trajectory and the set of candidate trajectories. Based on the difference, the vehicle can determine an uncertainty metric associated with the object. In some instances, the vehicle can input the traversed trajectory and the set of candidate trajectories to a machine-learned model that can output the uncertainty metric.
Type: Grant
Filed: December 20, 2019
Date of Patent: September 6, 2022
Assignee: Zoox, Inc.
Inventors: Matthew Van Heukelom, Tencia Lee, Kai Zhenyu Wang
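The traversed-versus-candidate comparison described above can be sketched as a simple displacement metric. This is a hypothetical stand-in for the machine-learned model the abstract mentions: here, uncertainty is just the smallest mean per-waypoint error over the candidate set.

```python
import numpy as np

def trajectory_uncertainty(traversed, candidates):
    """Distance-based uncertainty for an object's motion.

    traversed:  (T, 2) positions the object actually followed.
    candidates: (K, T, 2) candidate trajectories predicted for the object.

    Returns the smallest mean per-waypoint displacement between the traversed
    trajectory and any candidate. A large value means none of the candidates
    explained the observed motion well, i.e. the object is hard to predict.
    """
    diffs = np.linalg.norm(candidates - traversed[None], axis=-1)  # (K, T)
    return float(diffs.mean(axis=1).min())

traversed = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
candidates = np.array([
    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],   # matches the actual motion
    [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]],   # offset by 1 m throughout
])
print(trajectory_uncertainty(traversed, candidates))  # 0.0
```

Dropping the matching candidate leaves only the offset one, and the metric rises to 1.0 — a larger value a planner could treat as higher uncertainty about the object.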
-
Patent number: 11361196
Abstract: Systems and methods for estimating a height of an object from a monocular image are described herein. Objects are detected in the image, each object being indicated by a region of interest. The image is then cropped for each region of interest and the cropped image scaled to a predetermined size. The cropped and scaled image is then input into a convolutional neural network (CNN), the output of which is an estimated height for the object. The height may be represented by a mean of a probability distribution of possible sizes, a standard deviation, as well as a level of confidence. A location of the object may be determined based on the estimated height and region of interest. A ground truth dataset may be generated for training the CNN by simultaneously capturing a LIDAR sequence with a monocular image sequence.
Type: Grant
Filed: July 29, 2020
Date of Patent: June 14, 2022
Assignee: Zoox, Inc.
Inventors: Tencia Lee, James William Vaisey Philbin
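The crop-and-scale preprocessing, plus one plausible reading of how an estimated physical height and the ROI yield a location (a pinhole depth relation), might look like the sketch below. The 64×64 input size and nearest-neighbor resampling are assumptions, and the trained CNN itself is elided.

```python
import numpy as np

def crop_and_scale(image, roi, out_size=(64, 64)):
    """Crop an image to a region of interest and scale it to a fixed size.

    roi is (x0, y0, x1, y1) in pixels. Nearest-neighbor resampling keeps the
    sketch dependency-free; a real pipeline would likely use bilinear.
    """
    x0, y0, x1, y1 = roi
    crop = image[y0:y1, x0:x1]
    ys = np.linspace(0, crop.shape[0] - 1, out_size[0]).round().astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size[1]).round().astype(int)
    return crop[np.ix_(ys, xs)]

def depth_from_height(est_height_m, roi, focal_px):
    """Pinhole back-projection: given the CNN's estimated physical height and
    the ROI's height in pixels, recover the object's distance from the camera.
    """
    _, y0, _, y1 = roi
    return focal_px * est_height_m / (y1 - y0)

img = np.arange(100 * 100, dtype=float).reshape(100, 100)
patch = crop_and_scale(img, (10, 10, 50, 50))   # fixed-size CNN input
print(patch.shape)                               # (64, 64)

# A ~1.7 m pedestrian spanning 40 px under a 1000 px focal length:
print(depth_from_height(1.7, (10, 100, 60, 140), 1000.0))  # 42.5 (metres)
```

The abstract's mean/standard-deviation/confidence output would replace the scalar `est_height_m` here, and the depth could then carry an uncertainty derived from the height distribution.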
-
Patent number: 11126179
Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
Type: Grant
Filed: February 21, 2019
Date of Patent: September 21, 2021
Assignee: Zoox, Inc.
Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
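The last sentence above — predicted-versus-actual comparisons used as training data — can be illustrated with a toy regression. Everything here is hypothetical: the "appearance features" are random stand-ins for image-derived cues, and a linear model stands in for the real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: hypothetical appearance features (e.g. body orientation cues)
# linearly determine an object's future displacement.
true_w = np.array([[1.0, 0.0], [0.0, 0.5]])
features = rng.normal(size=(256, 2))
actual = features @ true_w              # trajectories the objects actually took

w = np.zeros((2, 2))                    # prediction model to be trained
for _ in range(200):
    predicted = features @ w
    # The predicted-vs-actual comparison is the training signal: gradient of
    # the mean squared displacement error with respect to the model weights.
    grad = features.T @ (predicted - actual) / len(features)
    w -= 0.5 * grad
```

After training, `w` recovers `true_w`: repeatedly comparing what the model predicted with what the object actually did is enough supervision, with no manual labels, which is what makes the scheme attractive for fleet-collected driving data.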
-
Patent number: 11126873
Abstract: Techniques for determining lighting states of a tracked object, such as a vehicle, are discussed herein. An autonomous vehicle can include an image sensor to capture image data of an environment. Objects, such as vehicles, can be identified in the image data as objects to be tracked. Frames of the image data representing the tracked object can be selected and input to a machine learning algorithm (e.g., a convolutional neural network, a recurrent neural network, etc.) that is trained to determine probabilities associated with one or more lighting states of the tracked object. Such lighting states include, but are not limited to, a blinker state(s), a brake state, a hazard state, etc. Based at least in part on the one or more probabilities associated with the one or more lighting states, the autonomous vehicle can determine a trajectory for the autonomous vehicle and/or can determine a predicted trajectory for the tracked object.
Type: Grant
Filed: May 17, 2018
Date of Patent: September 21, 2021
Assignee: Zoox, Inc.
Inventors: Tencia Lee, Kai Zhenyu Wang, James William Vaisey Philbin
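Because the lighting states listed above are not mutually exclusive (brake lights and a blinker can be lit at once), a plausible output head uses an independent sigmoid per state over temporally pooled per-frame scores, rather than a softmax. A minimal sketch, with the state list, mean pooling, and function name chosen for illustration:

```python
import numpy as np

STATES = ["left_blinker", "right_blinker", "brake", "hazard"]

def lighting_state_probs(frame_logits):
    """Aggregate per-frame scores for a tracked vehicle into per-state
    probabilities.

    frame_logits: (F, S) raw scores for F selected frames and S states,
    standing in for the output of the CNN/RNN the abstract mentions.
    Independent sigmoids let several states be likely simultaneously.
    """
    pooled = frame_logits.mean(axis=0)        # temporal average pooling
    return 1.0 / (1.0 + np.exp(-pooled))      # per-state sigmoid

# Two frames in which only the brake-light score is strongly positive:
probs = lighting_state_probs(np.array([[-4.0, -4.0, 4.0, -4.0]] * 2))
print(dict(zip(STATES, probs.round(2))))      # brake ~0.98, others ~0.02
```

Downstream, the abstract's planner could, for example, treat a high brake probability on a lead vehicle as a cue to extend the following distance.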
-
Publication number: 20210104056
Abstract: A monocular image often does not contain enough information to determine, with certainty, the depth of an object in a scene reflected in the image. Combining image data and LIDAR data may enable determining a depth estimate of the object relative to the camera. Specifically, LIDAR points corresponding to a region of interest (“ROI”) in the image that corresponds to the object may be combined with the image data. These LIDAR points may be scored according to a monocular image model and/or a factor based on the distance between the projections of the LIDAR points into the ROI and the center of the region of interest. Using these scores as weights in a weighted median of the LIDAR points may improve the accuracy of the depth estimate, for example, by discerning between a detected object and an occluding object and/or background.
Type: Application
Filed: July 27, 2020
Publication date: April 8, 2021
Inventors: Tencia Lee, Sabeek Mani Pradhan, Dragomir Dimitrov Anguelov
-
Publication number: 20200380316
Abstract: Systems and methods for estimating a height of an object from a monocular image are described herein. Objects are detected in the image, each object being indicated by a region of interest. The image is then cropped for each region of interest and the cropped image scaled to a predetermined size. The cropped and scaled image is then input into a convolutional neural network (CNN), the output of which is an estimated height for the object. The height may be represented by a mean of a probability distribution of possible sizes, a standard deviation, as well as a level of confidence. A location of the object may be determined based on the estimated height and region of interest. A ground truth dataset may be generated for training the CNN by simultaneously capturing a LIDAR sequence with a monocular image sequence.
Type: Application
Filed: July 29, 2020
Publication date: December 3, 2020
Inventors: Tencia Lee, James William Vaisey Philbin
-
Publication number: 20200272148
Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
-
Patent number: 10733482
Abstract: Systems and methods for estimating a height of an object from a monocular image are described herein. Objects are detected in the image, each object being indicated by a region of interest. The image is then cropped for each region of interest and the cropped image scaled to a predetermined size. The cropped and scaled image is then input into a convolutional neural network (CNN), the output of which is an estimated height for the object. The height may be represented by a mean of a probability distribution of possible sizes, a standard deviation, as well as a level of confidence. A location of the object may be determined based on the estimated height and region of interest. A ground truth dataset may be generated for training the CNN by simultaneously capturing a LIDAR sequence with a monocular image sequence.
Type: Grant
Filed: March 8, 2017
Date of Patent: August 4, 2020
Assignee: Zoox, Inc.
Inventors: Tencia Lee, James William Vaisey Philbin
-
Patent number: 10726567
Abstract: A monocular image often does not contain enough information to determine, with certainty, the depth of an object in a scene reflected in the image. Combining image data and LIDAR data may enable determining a depth estimate of the object relative to the camera. Specifically, LIDAR points corresponding to a region of interest (“ROI”) in the image that corresponds to the object may be combined with the image data. These LIDAR points may be scored according to a monocular image model and/or a factor based on the distance between the projections of the LIDAR points into the ROI and the center of the region of interest. Using these scores as weights in a weighted median of the LIDAR points may improve the accuracy of the depth estimate, for example, by discerning between a detected object and an occluding object and/or background.
Type: Grant
Filed: May 3, 2018
Date of Patent: July 28, 2020
Assignee: Zoox, Inc.
Inventors: Tencia Lee, Sabeek Mani Pradhan, Dragomir Dimitrov Anguelov
-
Publication number: 20190354786
Abstract: Techniques for determining lighting states of a tracked object, such as a vehicle, are discussed herein. An autonomous vehicle can include an image sensor to capture image data of an environment. Objects, such as vehicles, can be identified in the image data as objects to be tracked. Frames of the image data representing the tracked object can be selected and input to a machine learning algorithm (e.g., a convolutional neural network, a recurrent neural network, etc.) that is trained to determine probabilities associated with one or more lighting states of the tracked object. Such lighting states include, but are not limited to, a blinker state(s), a brake state, a hazard state, etc. Based at least in part on the one or more probabilities associated with the one or more lighting states, the autonomous vehicle can determine a trajectory for the autonomous vehicle and/or can determine a predicted trajectory for the tracked object.
Type: Application
Filed: May 17, 2018
Publication date: November 21, 2019
Inventors: Tencia Lee, Kai Zhenyu Wang, James William Vaisey Philbin
-
Publication number: 20190340775
Abstract: A monocular image often does not contain enough information to determine, with certainty, the depth of an object in a scene reflected in the image. Combining image data and LIDAR data may enable determining a depth estimate of the object relative to the camera. Specifically, LIDAR points corresponding to a region of interest (“ROI”) in the image that corresponds to the object may be combined with the image data. These LIDAR points may be scored according to a monocular image model and/or a factor based on the distance between the projections of the LIDAR points into the ROI and the center of the region of interest. Using these scores as weights in a weighted median of the LIDAR points may improve the accuracy of the depth estimate, for example, by discerning between a detected object and an occluding object and/or background.
Type: Application
Filed: May 3, 2018
Publication date: November 7, 2019
Inventors: Tencia Lee, Sabeek Mani Pradhan, Dragomir Dimitrov Anguelov