Patents by Inventor Ashesh Jain
Ashesh Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12367677
Abstract: Embodiments are disclosed for real-time event detection using edge and cloud AI. An event monitoring system can receive live video data from one or more video capture devices at a surveillance location. A first machine learning model identifies a first portion of the live video data as depicting an event. The first portion of the live video data is provided to a second machine learning model. The second machine learning model identifies the first portion of the live video data as depicting the event. An event notification corresponding to the event is then sent to a user device.
Type: Grant
Filed: January 16, 2025
Date of Patent: July 22, 2025
Assignee: Coram AI, Inc.
Inventors: Peter Ondruska, Ashesh Jain, Balazs Kovacs, Luca Bergamini
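The two-stage pipeline the abstract describes can be sketched as a cheap edge-side filter whose candidate detections are confirmed by a heavier second model before any notification goes out. This is a minimal illustrative sketch, not the patented implementation; the models, field names, and thresholds are all hypothetical stand-ins.

```python
# Two-stage event detection sketch: an "edge" model flags candidate video
# segments, and a heavier "cloud" model must confirm them before a
# notification is emitted. All names and thresholds are illustrative.

def edge_model(segment):
    # Cheap first pass: flag any segment with a motion score above 0.5.
    return max(segment["motion_scores"]) > 0.5

def cloud_model(segment):
    # Heavier second pass: require sustained motion, not a single spike.
    return sum(s > 0.5 for s in segment["motion_scores"]) >= 2

def detect_events(segments):
    notifications = []
    for seg in segments:
        if edge_model(seg) and cloud_model(seg):  # both stages must agree
            notifications.append(seg["id"])
    return notifications

segments = [
    {"id": "a", "motion_scores": [0.1, 0.2, 0.1]},  # no event
    {"id": "b", "motion_scores": [0.9, 0.1, 0.1]},  # edge fires, cloud rejects
    {"id": "c", "motion_scores": [0.8, 0.7, 0.9]},  # both fire -> notify
]
print(detect_events(segments))  # ['c']
```

The cascade structure is the point: the permissive edge model keeps bandwidth and cloud compute low, while the stricter confirming model keeps false notifications down.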
-
Publication number: 20250077576
Abstract: Embodiments are disclosed for using natural language processing (NLP) to manage security video data. A method of using NLP to search security video data includes receiving, by a surveillance video query system, a text query. A query embedding corresponding to the text query is obtained using a text query model. One or more matching frame embeddings that match the query embedding are identified in a vector database. Matching surveillance video data corresponding to the one or more matching frame embeddings is then obtained from a surveillance video data store. The matching surveillance video data is returned in response to receipt of the text query.
Type: Application
Filed: April 3, 2024
Publication date: March 6, 2025
Inventors: Ashesh Jain, Peter Ondruska, Yawei Ye, Qiangui Huang
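The query-embedding-to-frame-embedding matching step can be illustrated with cosine similarity over a tiny in-memory "vector database". This is only a sketch of the general technique: a real system would use a trained text encoder and an approximate-nearest-neighbor index, and the frame IDs and vectors below are made up.

```python
import math

# Toy embedding search: rank stored frame embeddings by cosine similarity
# to a query embedding. Frame names and vectors are illustrative.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

frame_db = {
    "frame_001": [0.9, 0.1, 0.0],
    "frame_002": [0.1, 0.9, 0.1],
    "frame_003": [0.2, 0.8, 0.0],
}

def search(query_embedding, top_k=2):
    # Return the top_k frame IDs most similar to the query embedding;
    # these IDs would then be used to fetch video from the data store.
    ranked = sorted(frame_db,
                    key=lambda f: cosine(query_embedding, frame_db[f]),
                    reverse=True)
    return ranked[:top_k]

print(search([0.0, 1.0, 0.0]))  # ['frame_002', 'frame_003']
```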
-
Patent number: 11954151
Abstract: Embodiments are disclosed for using natural language processing (NLP) to manage security video data. A method of using NLP to search security video data includes receiving, by a surveillance video query system, a text query. A query embedding corresponding to the text query is obtained using a text query model. One or more matching frame embeddings that match the query embedding are identified in a vector database. Matching surveillance video data corresponding to the one or more matching frame embeddings is then obtained from a surveillance video data store. The matching surveillance video data is returned in response to receipt of the text query.
Type: Grant
Filed: September 6, 2023
Date of Patent: April 9, 2024
Assignee: Coram AI, Inc.
Inventors: Ashesh Jain, Peter Ondruska, Yawei Ye, Qiangui Huang
-
Patent number: 11927967
Abstract: In one embodiment, a computing system of a vehicle may access sensor data associated with a surrounding environment of a vehicle. The system may generate, based on the sensor data, a first trajectory having one or more first driving characteristics for navigating the vehicle in the surrounding environment. The system may generate a second trajectory having one or more second driving characteristics by modifying the one or more first driving characteristics of the first trajectory. The modifying may use adjustment parameters based on one or more human-driving characteristics of observed human-driven trajectories such that the one or more second driving characteristics satisfy a similarity threshold relative to the one or more human-driving characteristics. The system may determine, based on the second trajectory, vehicle operations to navigate the vehicle in the surrounding environment.
Type: Grant
Filed: August 31, 2020
Date of Patent: March 12, 2024
Assignee: Woven by Toyota, U.S., Inc.
Inventors: Ashesh Jain, Anantha Rao Kancherla, Taggart C. Matthiesen
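One way to picture "adjusting a trajectory until it satisfies a similarity threshold relative to human-driving characteristics" is to blend a planned driving characteristic toward a value observed in human driving until the worst deviation is within tolerance. The sketch below is a hypothetical illustration of that loop, not the claimed method; the acceleration values, blend factor, and threshold are invented.

```python
# Sketch: pull a planned trajectory's accelerations toward a human-observed
# mean until the largest deviation satisfies a similarity threshold.
# All numbers and names are illustrative.

def humanize(planned_accels, human_mean, threshold=0.5, blend=0.5):
    adjusted = list(planned_accels)
    # Blend each acceleration toward the human mean until the worst
    # deviation falls within the threshold.
    while max(abs(a - human_mean) for a in adjusted) > threshold:
        adjusted = [a + blend * (human_mean - a) for a in adjusted]
    return adjusted

planned = [3.0, 1.0, 2.0]  # m/s^2, a somewhat aggressive plan
human_mean = 1.5           # typical human acceleration (assumed)
smooth = humanize(planned, human_mean)
print(smooth)  # [1.875, 1.375, 1.625]
```

The result preserves the shape of the original plan while bounding how far it departs from observed human behavior.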
-
Patent number: 11875680
Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
Type: Grant
Filed: August 3, 2020
Date of Patent: January 16, 2024
Assignee: Lyft, Inc.
Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
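The scene → frame → object hierarchy and the augmentation step can be sketched with nested dictionaries: derive a supplemental attribute from existing object-level data across frames, then write it back into the same structure. This is a toy illustration; the field names, bounding-box format, and the choice of velocity as the supplemental attribute are all assumptions.

```python
# Sketch of hierarchically stored perception data (scene -> frames ->
# objects) being augmented with supplemental object-level information.
# Field names and values are illustrative.

perception = {
    "scene_01": {
        "frame_000": {"obj_7": {"class": "car", "bbox": [10, 20, 50, 60]}},
        "frame_001": {"obj_7": {"class": "car", "bbox": [12, 21, 52, 61]}},
    }
}

def derive_velocity(bbox0, bbox1, dt=0.5):
    # Supplemental info: rough center velocity between two frames,
    # assuming dt seconds between them.
    cx0, cy0 = (bbox0[0] + bbox0[2]) / 2, (bbox0[1] + bbox0[3]) / 2
    cx1, cy1 = (bbox1[0] + bbox1[2]) / 2, (bbox1[1] + bbox1[3]) / 2
    return ((cx1 - cx0) / dt, (cy1 - cy0) / dt)

scene = perception["scene_01"]
v = derive_velocity(scene["frame_000"]["obj_7"]["bbox"],
                    scene["frame_001"]["obj_7"]["bbox"])
# Augment the stored data in place, preserving the hierarchy.
scene["frame_001"]["obj_7"]["velocity"] = v
print(v)  # (4.0, 2.0)
```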
-
Patent number: 11868136
Abstract: In one embodiment, a method includes, by a computing system associated with a vehicle, determining a current location of the vehicle in a first region, identifying one or more first sets of model parameters associated with the first region and one or more second sets of model parameters associated with a second region, generating, using one or more machine-learning models based on the first sets of model parameters, one or more first inferences based on first sensor data captured by the vehicle, switching the configurations of the models from the first sets of model parameters to the second sets of model parameters, generating, using the models having configurations based on the second sets of model parameters, one or more second inferences based on second sensor data generated by the sensors of the vehicle in the second region, and causing the vehicle to perform one or more operations based on the second inferences.
Type: Grant
Filed: December 19, 2019
Date of Patent: January 9, 2024
Assignee: Woven by Toyota, U.S., Inc.
Inventors: Michael Jared Benisch, Ashesh Jain
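The core idea, swapping a model's parameter set when the vehicle enters a new region while keeping the model architecture fixed, can be sketched with a trivial threshold "model". The region names, parameter values, and class design below are hypothetical; a real system would hot-swap learned weights.

```python
# Sketch of region-keyed model parameters: the same model produces
# different inferences after switching from one region's parameter set
# to another's. Regions and values are illustrative.

REGION_PARAMS = {
    "city":    {"detection_threshold": 0.3},  # denser scenes, lower bar
    "highway": {"detection_threshold": 0.6},
}

class RegionAwareModel:
    def __init__(self, region):
        self.params = REGION_PARAMS[region]

    def switch_region(self, region):
        # Hot-swap the parameter set; the model itself is unchanged.
        self.params = REGION_PARAMS[region]

    def infer(self, score):
        return score >= self.params["detection_threshold"]

model = RegionAwareModel("city")
print(model.infer(0.4))   # True: detected under city parameters
model.switch_region("highway")
print(model.infer(0.4))   # False: same score, stricter highway threshold
```

The same observation yields different inferences purely because the active parameter set changed, which is the behavior the claim describes.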
-
Publication number: 20220066459
Abstract: In one embodiment, a computing system of a vehicle may access sensor data associated with a surrounding environment of a vehicle. The system may generate, based on the sensor data, a first trajectory having one or more first driving characteristics for navigating the vehicle in the surrounding environment. The system may generate a second trajectory having one or more second driving characteristics by modifying the one or more first driving characteristics of the first trajectory. The modifying may use adjustment parameters based on one or more human-driving characteristics of observed human-driven trajectories such that the one or more second driving characteristics satisfy a similarity threshold relative to the one or more human-driving characteristics. The system may determine, based on the second trajectory, vehicle operations to navigate the vehicle in the surrounding environment.
Type: Application
Filed: August 31, 2020
Publication date: March 3, 2022
Applicant: Woven Planet North America, Inc.
Inventors: Ashesh Jain, Anantha Rao Kancherla, Taggart C. Matthiesen
-
Patent number: 11238370
Abstract: Systems, methods, and non-transitory computer-readable media can determine first sensor data captured by a first sensor of a vehicle. Second sensor data captured by a second sensor of the vehicle can be determined. Information describing the first sensor data and the second sensor data can be provided to a machine learning model trained to predict whether a pair of sensors are calibrated or mis-calibrated based on sensor data captured by the pair of sensors. A determination is made whether the first sensor and the second sensor are calibrated or mis-calibrated based on an output from the machine learning model.
Type: Grant
Filed: December 31, 2018
Date of Patent: February 1, 2022
Assignee: Woven Planet North America, Inc.
Inventors: Ashesh Jain, Lei Zhang
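The intuition behind learned calibration checking can be sketched by summarizing paired sensor readings into a disagreement feature and thresholding it. In this toy version a fixed threshold stands in for the trained model, and the sensor values are invented; the patent's actual model and features are not shown here.

```python
# Sketch of calibrated-vs-mis-calibrated classification for a sensor pair:
# compute a disagreement feature from paired readings and threshold it.
# The fixed threshold stands in for a trained model; values are illustrative.

def disagreement(sensor_a, sensor_b):
    # Mean absolute difference between corresponding measurements.
    return sum(abs(x - y) for x, y in zip(sensor_a, sensor_b)) / len(sensor_a)

def is_miscalibrated(sensor_a, sensor_b, threshold=0.2):
    return disagreement(sensor_a, sensor_b) > threshold

lidar = [1.0, 2.0, 3.0]
camera_ok = [1.05, 1.95, 3.1]   # small residuals: calibrated
camera_bad = [1.5, 2.6, 3.4]    # systematic offset: mis-calibrated
print(is_miscalibrated(lidar, camera_ok))   # False
print(is_miscalibrated(lidar, camera_bad))  # True
```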
-
Patent number: 11216971
Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
Type: Grant
Filed: August 30, 2019
Date of Patent: January 4, 2022
Assignee: Zoox, Inc.
Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
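The fusion step, combining an image feature vector with a point cloud feature vector and regressing box parameters through a network, can be sketched with concatenation followed by a single dense layer. The weights here are arbitrary rather than trained, and the two-element feature vectors are stand-ins for real learned features.

```python
# Sketch of feature fusion for 3D box regression: concatenate an image
# feature with a point cloud feature and pass the result through one
# fully connected layer. Weights are arbitrary, not trained.

def dense(x, weights, bias):
    # One fully connected layer: out[i] = sum_j weights[i][j] * x[j] + bias[i]
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

image_feat = [0.25, 0.5]      # stand-in for a CNN image feature
point_feat = [0.125, 0.75]    # stand-in for a point cloud feature
fused = image_feat + point_feat  # simple concatenation fusion

# Regress 3 box parameters (e.g., center x, y, z) from the fused vector.
weights = [[1, 0, 0, 0],
           [0, 1, 0, 0],
           [0, 0, 1, 1]]
bias = [0.0, 0.0, 0.125]
box_params = dense(fused, weights, bias)
print(box_params)  # [0.25, 0.5, 1.0]
```

The per-point variant in the abstract would run a similar head over each point's feature to produce one box estimate per point.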
-
Patent number: 11170238
Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle over a set of time intervals while navigating an environment. Three-dimensional data describing the environment over the set of time intervals can be determined from the captured sensor data. The three-dimensional data captures a traffic motion pattern for at least one direction of travel. Image data of at least one traffic light in the environment can be determined over the set of time intervals from the captured sensor data. A state of the at least one traffic light can be predicted based at least in part on the three-dimensional data describing the environment and the image data of at least one traffic light in the environment over the set of time intervals.
Type: Grant
Filed: June 26, 2019
Date of Patent: November 9, 2021
Assignee: Woven Planet North America, Inc.
Inventors: Meng Gao, Ashesh Jain
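The way a traffic motion pattern can complement a possibly unreliable camera view of the light itself can be sketched with two toy cues: cross-traffic speed from the 3D data and a red-pixel ratio from the image. Both cue extractors are faked with plain numbers here, and the decision rule is an invented illustration, not the patented method.

```python
# Sketch of fusing a motion cue with an image cue to predict a traffic
# light state. Cue values and the decision rule are illustrative.

def predict_light(cross_traffic_speed, red_pixel_ratio):
    # If cross traffic is flowing, our direction is very likely red even
    # when the image cue is weak (e.g., glare washes out the lens).
    if cross_traffic_speed > 2.0:
        return "red"
    # Otherwise fall back on the image cue alone.
    return "red" if red_pixel_ratio > 0.5 else "green"

print(predict_light(cross_traffic_speed=8.0, red_pixel_ratio=0.1))  # red
print(predict_light(cross_traffic_speed=0.0, red_pixel_ratio=0.1))  # green
```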
-
Publication number: 20210303877
Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
Type: Application
Filed: August 3, 2020
Publication date: September 30, 2021
Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
-
Publication number: 20210191407
Abstract: In one embodiment, a method includes, by a computing system associated with a vehicle, determining a current location of the vehicle in a first region, identifying one or more first sets of model parameters associated with the first region and one or more second sets of model parameters associated with a second region, generating, using one or more machine-learning models based on the first sets of model parameters, one or more first inferences based on first sensor data captured by the vehicle, switching the configurations of the models from the first sets of model parameters to the second sets of model parameters, generating, using the models having configurations based on the second sets of model parameters, one or more second inferences based on second sensor data generated by the sensors of the vehicle in the second region, and causing the vehicle to perform one or more operations based on the second inferences.
Type: Application
Filed: December 19, 2019
Publication date: June 24, 2021
Inventors: Michael Jared Benisch, Ashesh Jain
-
Publication number: 20200410263
Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle over a set of time intervals while navigating an environment. Three-dimensional data describing the environment over the set of time intervals can be determined from the captured sensor data. The three-dimensional data captures a traffic motion pattern for at least one direction of travel. Image data of at least one traffic light in the environment can be determined over the set of time intervals from the captured sensor data. A state of the at least one traffic light can be predicted based at least in part on the three-dimensional data describing the environment and the image data of at least one traffic light in the environment over the set of time intervals.
Type: Application
Filed: June 26, 2019
Publication date: December 31, 2020
Applicant: Lyft, Inc.
Inventors: Meng Gao, Ashesh Jain
-
Patent number: 10733463
Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
Type: Grant
Filed: March 31, 2020
Date of Patent: August 4, 2020
Assignee: Lyft, Inc.
Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
-
Publication number: 20200210887
Abstract: Systems, methods, and non-transitory computer-readable media can determine first sensor data captured by a first sensor of a vehicle. Second sensor data captured by a second sensor of the vehicle can be determined. Information describing the first sensor data and the second sensor data can be provided to a machine learning model trained to predict whether a pair of sensors are calibrated or mis-calibrated based on sensor data captured by the pair of sensors. A determination is made whether the first sensor and the second sensor are calibrated or mis-calibrated based on an output from the machine learning model.
Type: Application
Filed: December 31, 2018
Publication date: July 2, 2020
Applicant: Lyft, Inc.
Inventors: Ashesh Jain, Lei Zhang
-
Publication number: 20200005485
Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
Type: Application
Filed: August 30, 2019
Publication date: January 2, 2020
Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
-
Patent number: 10438371
Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
Type: Grant
Filed: October 30, 2017
Date of Patent: October 8, 2019
Assignee: Zoox, Inc.
Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
-
Publication number: 20190096086
Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
Type: Application
Filed: October 30, 2017
Publication date: March 28, 2019
Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain