Patents by Inventor Ashesh Jain

Ashesh Jain has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12367677
    Abstract: Embodiments are disclosed for real-time event detection using edge and cloud AI. An event monitoring system can receive live video data from one or more video capture devices at a surveillance location. A first machine learning model identifies a first portion of the live video data as depicting an event. The first portion of the live video data is provided to a second machine learning model. The second machine learning model identifies the first portion of the live video data as depicting the event. An event notification corresponding to the event is then sent to a user device.
    Type: Grant
    Filed: January 16, 2025
    Date of Patent: July 22, 2025
    Assignee: Coram AI, Inc.
    Inventors: Peter Ondruska, Ashesh Jain, Balazs Kovacs, Luca Bergamini
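The two-stage edge/cloud cascade in this abstract can be sketched in a few lines. The frame scores, thresholds, and threshold-based model stand-ins below are illustrative assumptions, not the patented models:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    motion_score: float   # cheap cue available near the camera
    person_score: float   # richer cue computed by the second model

def edge_model(frame: Frame) -> bool:
    # First model: a lightweight detector flags a portion of the
    # live video as a candidate event.
    return frame.motion_score > 0.5

def cloud_model(frame: Frame) -> bool:
    # Second model: a heavier detector re-examines only the
    # portions the first model flagged.
    return frame.person_score > 0.8

def detect_events(frames):
    """Return an event notification for each frame both models agree on."""
    return [f"event@frame{f.frame_id}"
            for f in frames
            if edge_model(f) and cloud_model(f)]
```

Only frames that pass the cheap edge check incur the cost of the second model, which is the point of the cascade.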
  • Publication number: 20250077576
    Abstract: Embodiments are disclosed for using natural language processing (NLP) to manage security video data. A method of using NLP to search security video data includes receiving, by a surveillance video query system, a text query. A query embedding corresponding to the text query is obtained using a text query model. One or more matching frame embeddings that match the query embedding are identified in a vector database. Matching surveillance video data corresponding to the one or more matching frame embeddings is then obtained from a surveillance video data store. The matching surveillance video data is returned in response to receipt of the text query.
    Type: Application
    Filed: April 3, 2024
    Publication date: March 6, 2025
    Inventors: Ashesh Jain, Peter Ondruska, Yawei Ye, Qiangui Huang
  • Patent number: 11954151
    Abstract: Embodiments are disclosed for using natural language processing (NLP) to manage security video data. A method of using NLP to search security video data includes receiving, by a surveillance video query system, a text query. A query embedding corresponding to the text query is obtained using a text query model. One or more matching frame embeddings that match the query embedding are identified in a vector database. Matching surveillance video data corresponding to the one or more matching frame embeddings is then obtained from a surveillance video data store. The matching surveillance video data is returned in response to receipt of the text query.
    Type: Grant
    Filed: September 6, 2023
    Date of Patent: April 9, 2024
    Assignee: Coram AI, Inc.
    Inventors: Ashesh Jain, Peter Ondruska, Yawei Ye, Qiangui Huang
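The retrieval flow in this abstract (query embedding matched against stored frame embeddings) can be sketched as follows. The cosine-similarity ranking and the tiny in-memory index are illustrative stand-ins for the text query model and vector database:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def search(query_embedding, frame_index, top_k=2):
    """Rank stored frame embeddings against the query embedding and
    return the ids of the best-matching frames; those ids would then
    be used to fetch the matching surveillance video data."""
    ranked = sorted(frame_index.items(),
                    key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return [frame_id for frame_id, _ in ranked[:top_k]]
```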
  • Patent number: 11927967
    Abstract: In one embodiment, a computing system of a vehicle may access sensor data associated with a surrounding environment of a vehicle. The system may generate, based on the sensor data, a first trajectory having one or more first driving characteristics for navigating the vehicle in the surrounding environment. The system may generate a second trajectory having one or more second driving characteristics by modifying the one or more first driving characteristics of the first trajectory. The modifying may use adjustment parameters based on one or more human-driving characteristics of observed human-driven trajectories such that the one or more second driving characteristics satisfy a similarity threshold relative to the one or more human-driving characteristics. The system may determine, based on the second trajectory, vehicle operations to navigate the vehicle in the surrounding environment.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: March 12, 2024
    Assignee: Woven by Toyota, U.S., Inc.
    Inventors: Ashesh Jain, Anantha Rao Kancherla, Taggart C. Matthiesen
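One way to read the adjustment step is as iteratively pulling a planned driving characteristic toward the statistics of observed human-driven trajectories until a similarity threshold is satisfied. The scalar characteristic, step size, and threshold below are illustrative assumptions:

```python
def adjust_characteristic(planned, human_mean, threshold, step=0.5):
    """Blend a planned driving characteristic (e.g. peak lateral
    acceleration) toward the observed human-driving value until the
    difference satisfies the similarity threshold."""
    value = planned
    while abs(value - human_mean) > threshold:
        value += step * (human_mean - value)
    return value
```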
  • Patent number: 11875680
    Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 16, 2024
    Assignee: Lyft, Inc.
    Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
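The hierarchical arrangement this abstract describes (objects within frames within scenes, with supplemental information layered on top) can be sketched with nested dictionaries; the attribute names are hypothetical examples:

```python
def new_store():
    # Hierarchy: scene -> frame -> object -> {attribute: value}
    return {}

def add_object(store, scene, frame, obj_id, info):
    """Record previously-derived object-level information."""
    store.setdefault(scene, {}).setdefault(frame, {})[obj_id] = dict(info)

def augment(store, scene, frame, obj_id, key, value):
    """Attach supplemental object-level information to an object
    already present in the perception data."""
    store[scene][frame][obj_id][key] = value
```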
  • Patent number: 11868136
    Abstract: In one embodiment, a method includes, by a computing system associated with a vehicle, determining a current location of the vehicle in a first region, identifying one or more first sets of model parameters associated with the first region and one or more second sets of model parameters associated with a second region, generating, using one or more machine-learning models based on the first sets of model parameters, one or more first inferences based on first sensor data captured by the vehicle, switching the configurations of the models from the first sets of model parameters to the second sets of model parameters, generating, using the models having configurations based on the second sets of model parameters, one or more second inferences based on second sensor data generated by the sensors of the vehicle in the second region, and causing the vehicle to perform one or more operations based on the second inferences.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: January 9, 2024
    Assignee: Woven by Toyota, U.S., Inc.
    Inventors: Michael Jared Benisch, Ashesh Jain
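The parameter-switching idea in this abstract can be sketched as a model object that swaps its configuration when the vehicle's location crosses into a new region. The linear "model" and the region/parameter values are toy assumptions:

```python
class RegionAwareModel:
    """Holds one parameter set per region and swaps configuration
    when the vehicle's current location changes region."""

    def __init__(self, params_by_region):
        self.params_by_region = params_by_region
        self.active_region = None
        self.params = None

    def update_location(self, region):
        # Switch model parameters only on a region change.
        if region != self.active_region:
            self.active_region = region
            self.params = self.params_by_region[region]

    def infer(self, sensor_value):
        # Toy linear "model": the inference depends on which
        # region's parameters are currently loaded.
        weight, bias = self.params
        return weight * sensor_value + bias
```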
  • Publication number: 20220066459
    Abstract: In one embodiment, a computing system of a vehicle may access sensor data associated with a surrounding environment of a vehicle. The system may generate, based on the sensor data, a first trajectory having one or more first driving characteristics for navigating the vehicle in the surrounding environment. The system may generate a second trajectory having one or more second driving characteristics by modifying the one or more first driving characteristics of the first trajectory. The modifying may use adjustment parameters based on one or more human-driving characteristics of observed human-driven trajectories such that the one or more second driving characteristics satisfy a similarity threshold relative to the one or more human-driving characteristics. The system may determine, based on the second trajectory, vehicle operations to navigate the vehicle in the surrounding environment.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 3, 2022
    Applicant: Woven Planet North America, Inc.
    Inventors: Ashesh Jain, Anantha Rao Kancherla, Taggart C. Matthiesen
  • Patent number: 11238370
    Abstract: Systems, methods, and non-transitory computer-readable media can determine first sensor data captured by a first sensor of a vehicle. Second sensor data captured by a second sensor of the vehicle can be determined. Information describing the first sensor data and the second sensor data can be provided to a machine learning model trained to predict whether a pair of sensors are calibrated or mis-calibrated based on sensor data captured by the pair of sensors. A determination is made whether the first sensor and the second sensor are calibrated or mis-calibrated based on an output from the machine learning model.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: February 1, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Ashesh Jain, Lei Zhang
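The calibration check in this abstract can be sketched as summarizing the disagreement between two sensors' readings and classifying the pair from that summary. A fixed threshold stands in for the trained machine learning model, and the readings are hypothetical:

```python
def mean_disagreement(readings_a, readings_b):
    """Summarize how strongly two sensors disagree across frames."""
    diffs = [abs(a - b) for a, b in zip(readings_a, readings_b)]
    return sum(diffs) / len(diffs)

def is_miscalibrated(readings_a, readings_b, threshold=0.5):
    # Stand-in for the learned model: flag the sensor pair as
    # mis-calibrated when their mean disagreement is too large.
    return mean_disagreement(readings_a, readings_b) > threshold
```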
  • Patent number: 11216971
    Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 4, 2022
    Assignee: Zoox, Inc.
    Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
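The fusion step this abstract describes, combining an image feature vector with a point-cloud feature vector to regress box parameters, can be sketched as concatenation followed by a linear layer. The tiny fixed weights and the choice of output parameters (a 3-D center) are illustrative, not the patented network:

```python
def matvec(weights, x):
    # Multiply a weight matrix (list of rows) by a vector.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def predict_box(image_feature, cloud_feature, weights):
    """Fuse the image feature vector and the point-cloud feature
    vector by concatenation, then map the fused vector through a
    linear layer to box parameters (here just a 3-D center)."""
    fused = image_feature + cloud_feature
    return matvec(weights, fused)
```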
  • Patent number: 11170238
    Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle over a set of time intervals while navigating an environment. Three-dimensional data describing the environment over the set of time intervals can be determined from the captured sensor data. The three-dimensional data captures a traffic motion pattern for at least one direction of travel. Image data of at least one traffic light in the environment can be determined over the set of time intervals from the captured sensor data. A state of the at least one traffic light can be predicted based at least in part on the three-dimensional data describing the environment and the image data of at least one traffic light in the environment over the set of time intervals.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: November 9, 2021
    Assignee: Woven Planet North America, Inc.
    Inventors: Meng Gao, Ashesh Jain
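The fusion of the two cues in this abstract, a traffic motion pattern from 3-D data plus an image of the light, can be sketched as a simple decision rule. The scores, the binary motion feature, and the rule itself are illustrative assumptions rather than the patented predictor:

```python
def predict_light_state(image_green_score, cross_traffic_moving):
    """Fuse a possibly unreliable image cue with the observed traffic
    motion pattern: moving cross traffic strongly implies the
    monitored light is red, even when the image classifier is unsure."""
    if cross_traffic_moving:
        return "red"
    return "green" if image_green_score > 0.5 else "red"
```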
  • Publication number: 20210303877
    Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
    Type: Application
    Filed: August 3, 2020
    Publication date: September 30, 2021
    Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
  • Publication number: 20210191407
    Abstract: In one embodiment, a method includes, by a computing system associated with a vehicle, determining a current location of the vehicle in a first region, identifying one or more first sets of model parameters associated with the first region and one or more second sets of model parameters associated with a second region, generating, using one or more machine-learning models based on the first sets of model parameters, one or more first inferences based on first sensor data captured by the vehicle, switching the configurations of the models from the first sets of model parameters to the second sets of model parameters, generating, using the models having configurations based on the second sets of model parameters, one or more second inferences based on second sensor data generated by the sensors of the vehicle in the second region, and causing the vehicle to perform one or more operations based on the second inferences.
    Type: Application
    Filed: December 19, 2019
    Publication date: June 24, 2021
    Inventors: Michael Jared Benisch, Ashesh Jain
  • Publication number: 20200410263
    Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle over a set of time intervals while navigating an environment. Three-dimensional data describing the environment over the set of time intervals can be determined from the captured sensor data. The three-dimensional data captures a traffic motion pattern for at least one direction of travel. Image data of at least one traffic light in the environment can be determined over the set of time intervals from the captured sensor data. A state of the at least one traffic light can be predicted based at least in part on the three-dimensional data describing the environment and the image data of at least one traffic light in the environment over the set of time intervals.
    Type: Application
    Filed: June 26, 2019
    Publication date: December 31, 2020
    Applicant: Lyft, Inc.
    Inventors: Meng Gao, Ashesh Jain
  • Patent number: 10733463
    Abstract: Examples disclosed herein may involve a computing system that is configured to (i) obtain previously-derived perception data for a collection of sensor data including a sequence of frames observed by a vehicle within one or more scenes, where the previously-derived perception data includes a respective set of object-level information for each of a plurality of objects detected within the sequence of frames, (ii) derive supplemental object-level information for at least one object detected within the sequence of frames that adds to the previously-derived object-level information for the at least one object, (iii) augment the previously-derived perception data to include the supplemental object-level information for the at least one object, and (iv) store the augmented perception data in an arrangement that encodes a hierarchical relationship between the plurality of objects, the sequence of frames, and the one or more scenes.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: August 4, 2020
    Assignee: Lyft, Inc.
    Inventors: Ashesh Jain, Yunjian Jiang, Mushfiqur Rouf, Henru Wang, Lei Zhang
  • Publication number: 20200210887
    Abstract: Systems, methods, and non-transitory computer-readable media can determine first sensor data captured by a first sensor of a vehicle. Second sensor data captured by a second sensor of the vehicle can be determined. Information describing the first sensor data and the second sensor data can be provided to a machine learning model trained to predict whether a pair of sensors are calibrated or mis-calibrated based on sensor data captured by the pair of sensors. A determination is made whether the first sensor and the second sensor are calibrated or mis-calibrated based on an output from the machine learning model.
    Type: Application
    Filed: December 31, 2018
    Publication date: July 2, 2020
    Applicant: Lyft, Inc.
    Inventors: Ashesh Jain, Lei Zhang
  • Publication number: 20200005485
    Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
    Type: Application
    Filed: August 30, 2019
    Publication date: January 2, 2020
    Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
  • Patent number: 10438371
    Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: October 8, 2019
    Assignee: Zoox, Inc.
    Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain
  • Publication number: 20190096086
    Abstract: A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.
    Type: Application
    Filed: October 30, 2017
    Publication date: March 28, 2019
    Inventors: Danfei Xu, Dragomir Dimitrov Anguelov, Ashesh Jain