Patents by Inventor Joan Devassy
Joan Devassy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250095483
Abstract: Examples disclosed herein involve a computing system configured to (i) obtain (a) a first set of sensor data captured by a first sensor system of a first vehicle that indicates the first vehicle's movement and location with a first degree of accuracy and (b) a second set of sensor data captured by a second sensor system of a second vehicle that indicates the second vehicle's movement and location with a second degree of accuracy that differs from the first degree of accuracy, (ii) based on the first set of sensor data, derive a first trajectory for the first vehicle that is defined in terms of a source-agnostic coordinate frame, (iii) based on the second set of sensor data, derive a second trajectory for the second vehicle that is defined in terms of the source-agnostic coordinate frame, and (iv) store the first and second trajectories in a database of source-agnostic trajectories.
Type: Application
Filed: August 19, 2024
Publication date: March 20, 2025
Inventors: Joan Devassy, Mousom Dhar Gupta, Hugo Oscar Bonay Grimmett, Swarn Avinash Kumar, Michal Witkowski
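The core idea of this abstract can be illustrated with a minimal sketch, assuming a simple 2D pose model and a shared origin pose; the `Pose`, `to_source_agnostic`, and `store_trajectory` names are hypothetical, not from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians

def to_source_agnostic(poses, origin):
    """Re-express one vehicle's poses relative to a shared origin pose,
    yielding a trajectory defined in a source-agnostic coordinate frame."""
    cos_h, sin_h = math.cos(-origin.heading), math.sin(-origin.heading)
    out = []
    for p in poses:
        dx, dy = p.x - origin.x, p.y - origin.y
        out.append(Pose(dx * cos_h - dy * sin_h,
                        dx * sin_h + dy * cos_h,
                        p.heading - origin.heading))
    return out

# a stand-in for the database of source-agnostic trajectories
trajectory_db = {}

def store_trajectory(vehicle_id, poses, origin):
    """Derive a source-agnostic trajectory from a vehicle's sensor poses
    and store it, regardless of that vehicle's sensor accuracy."""
    trajectory_db[vehicle_id] = to_source_agnostic(poses, origin)
```

Because both vehicles' trajectories are mapped into the same frame before storage, downstream consumers need not know which sensor system (or accuracy level) produced them.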
-
Publication number: 20240355084
Abstract: This disclosure describes a card-scan system that can update a card-scan machine learning model to improve card-character predictions for character-bearing cards by using an active-learning technique that learns from card-scan representations indicating corrections by users to predicted card characters. In particular, the disclosed systems can use a client device to capture and analyze a set of card images of a character-bearing card to predict card characters using a card-scan machine learning model. The disclosed systems can further receive card-scan gradients representing one or more corrections to incorrectly predicted card characters. Based on the card-scan gradients, the disclosed systems can generate active-learning metrics and retrain or update the card-scan machine learning model based on such active-learning metrics.
Type: Application
Filed: June 24, 2024
Publication date: October 24, 2024
Inventors: Ritwik Subir Das, Joan Devassy, Nadha Nafeeza Gafoor, Aahel Iyer, Swarn Avinash Kumar, Angela Lam, Kia Nishimine, Wiebke Poerschke, John Michael Sparks, Hristo Stefanov Stefanov, Wei You
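The feedback loop described here (collect user corrections, compute an active-learning metric, trigger retraining) can be sketched as follows; this is an illustrative simplification, not the patented system, and the class and its threshold are hypothetical:

```python
class CardScanActiveLearner:
    """Tracks user corrections to predicted card characters and decides,
    via a simple error-rate metric, when the model should be retrained."""

    def __init__(self, retrain_threshold=0.2):
        self.corrections = []        # stand-in for "card-scan gradients"
        self.total_predictions = 0
        self.retrain_threshold = retrain_threshold

    def record_scan(self, predicted, user_corrected):
        self.total_predictions += 1
        if predicted != user_corrected:
            # record which character positions the user fixed
            diff = [i for i, (a, b) in
                    enumerate(zip(predicted, user_corrected)) if a != b]
            self.corrections.append((predicted, user_corrected, diff))

    def error_rate(self):
        """A minimal active-learning metric: fraction of corrected scans."""
        return len(self.corrections) / max(self.total_predictions, 1)

    def should_retrain(self):
        return self.error_rate() >= self.retrain_threshold
```

In practice the corrected examples themselves would feed the retraining step; here only the trigger logic is shown.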
-
Patent number: 12067869
Abstract: Examples disclosed herein involve a computing system configured to (i) obtain (a) a first set of sensor data captured by a first sensor system of a first vehicle that indicates the first vehicle's movement and location with a first degree of accuracy and (b) a second set of sensor data captured by a second sensor system of a second vehicle that indicates the second vehicle's movement and location with a second degree of accuracy that differs from the first degree of accuracy, (ii) based on the first set of sensor data, derive a first trajectory for the first vehicle that is defined in terms of a source-agnostic coordinate frame, (iii) based on the second set of sensor data, derive a second trajectory for the second vehicle that is defined in terms of the source-agnostic coordinate frame, and (iv) store the first and second trajectories in a database of source-agnostic trajectories.
Type: Grant
Filed: July 24, 2020
Date of Patent: August 20, 2024
Assignee: Lyft, Inc.
Inventors: Joan Devassy, Mousom Dhar Gupta, Hugo Oscar Bonay Grimmett, Swarn Avinash Kumar, Michal Witkowski
-
Patent number: 12026926
Abstract: This disclosure describes a card-scan system that can update a card-scan machine learning model to improve card-character predictions for payment cards, driver licenses, or other character-bearing cards by using an active-learning technique that learns from card-scan representations indicating corrections by users to predicted card characters. In particular, the disclosed systems can use a client device to capture and analyze a set of card images of a character-bearing card to predict card characters using a card-scan machine learning model. The disclosed systems can further receive card-scan gradients representing one or more corrections to incorrectly predicted card characters. Based on the card-scan gradients, the disclosed systems can generate active-learning metrics and retrain or update the card-scan machine learning model based on such active-learning metrics.
Type: Grant
Filed: October 5, 2020
Date of Patent: July 2, 2024
Assignee: Lyft, Inc.
Inventors: Ritwik Subir Das, Joan Devassy, Nadha Nafeeza Gafoor, Aahel Iyer, Swarn Avinash Kumar, Angela Lam, Kia Nishimine, Wiebke Poerschke, John Michael Sparks, Hristo Stefanov Stefanov, Wei You
-
Patent number: 11610409
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Grant
Filed: February 1, 2021
Date of Patent: March 21, 2023
Assignee: Woven Planet North America, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
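Step (ii) of this abstract, lifting a 2D label into a 3D label, can be sketched under simplifying assumptions (a caller-supplied camera projection and an axis-aligned 3D box); the function names are hypothetical, not from the patent:

```python
def points_in_2d_box(points_3d, box_2d, project):
    """Select the 3D points whose image projection falls inside a 2D
    label box (x0, y0, x1, y1). `project` maps a 3D point to (u, v)."""
    x0, y0, x1, y1 = box_2d
    selected = []
    for p in points_3d:
        u, v = project(p)
        if x0 <= u <= x1 and y0 <= v <= y1:
            selected.append(p)
    return selected

def fit_3d_label(points):
    """Fit a minimal axis-aligned 3D bounding box over the associated
    points, serving as the object's 3D label."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

A real pipeline would use the camera's intrinsics and extrinsics for `project` and an oriented (not axis-aligned) box, but the association logic is the same: the 2D track gates which point-cloud returns belong to the object.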
-
Publication number: 20220161830
Abstract: Examples disclosed herein involve a computing system configured to (i) receive sensor data associated with a vehicle's period of operation in an environment including (a) trajectory data associated with the vehicle and (b) at least one of trajectory data associated with one or more agents in the environment or data associated with one or more static objects in the environment, (ii) determine that at least one of (a) the one or more agents or (b) the one or more static objects is relevant to the vehicle, (iii) identify one or more times when there is a change to the one or more agents or the one or more static objects relevant to the vehicle, (iv) designate each identified time as a boundary point that separates the period of operation into one or more scenes, and (v) generate a representation of the one or more scenes based on the designated boundary points.
Type: Application
Filed: November 23, 2020
Publication date: May 26, 2022
Inventors: Joan Devassy, Mousom Dhar Gupta, Sakshi Madan, Emil Constantin Praun
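Steps (iii) and (iv), finding boundary points where the set of relevant agents or static objects changes and splitting the period of operation into scenes, can be sketched as below; the timeline representation and function names are illustrative assumptions, not the patented implementation:

```python
def find_boundaries(timeline):
    """timeline: ordered list of (timestamp, frozenset of relevant
    agent/object ids). Returns timestamps where the relevant set
    changes, i.e. the designated scene boundary points."""
    return [t for (_, prev), (t, cur) in zip(timeline, timeline[1:])
            if cur != prev]

def split_scenes(timeline):
    """Group timeline entries into scenes separated by boundary points."""
    boundaries = set(find_boundaries(timeline))
    scenes, current = [], [timeline[0]]
    for entry in timeline[1:]:
        if entry[0] in boundaries:
            scenes.append(current)
            current = []
        current.append(entry)
    scenes.append(current)
    return scenes
```

For example, a pedestrian entering and the vehicle passing it would each change the relevant set, producing three scenes from one drive.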
-
Publication number: 20220108121
Abstract: This disclosure describes a card-scan system that can update a card-scan machine learning model to improve card-character predictions for payment cards, driver licenses, or other character-bearing cards by using an active-learning technique that learns from card-scan representations indicating corrections by users to predicted card characters. In particular, the disclosed systems can use a client device to capture and analyze a set of card images of a character-bearing card to predict card characters using a card-scan machine learning model. The disclosed systems can further receive card-scan gradients representing one or more corrections to incorrectly predicted card characters. Based on the card-scan gradients, the disclosed systems can generate active-learning metrics and retrain or update the card-scan machine learning model based on such active-learning metrics.
Type: Application
Filed: October 5, 2020
Publication date: April 7, 2022
Inventors: Ritwik Subir Das, Joan Devassy, Nadha Nafeeza Gafoor, Aahel Iyer, Swarn Avinash Kumar, Angela Lam, Kia Nishimine, Wiebke Poerschke, John Michael Sparks, Hristo Stefanov Stefanov, Wei You
-
Publication number: 20220028262
Abstract: Examples disclosed herein involve a computing system configured to (i) obtain (a) a first set of sensor data captured by a first sensor system of a first vehicle that indicates the first vehicle's movement and location with a first degree of accuracy and (b) a second set of sensor data captured by a second sensor system of a second vehicle that indicates the second vehicle's movement and location with a second degree of accuracy that differs from the first degree of accuracy, (ii) based on the first set of sensor data, derive a first trajectory for the first vehicle that is defined in terms of a source-agnostic coordinate frame, (iii) based on the second set of sensor data, derive a second trajectory for the second vehicle that is defined in terms of the source-agnostic coordinate frame, and (iv) store the first and second trajectories in a database of source-agnostic trajectories.
Type: Application
Filed: July 24, 2020
Publication date: January 27, 2022
Inventors: Joan Devassy, Mousom Dhar Gupta, Hugo Oscar Bonay Grimmett, Swarn Avinash Kumar, Michal Witkowski
-
Patent number: 11151788
Abstract: Examples disclosed herein may involve (i) identifying, in a 3D point cloud representative of a real-world environment in which a vehicle was operating during a window of time, a set of 3D data points associated with an object detected in the environment that comprises different subsets of 3D data points corresponding to different capture times within the window of time, (ii) based at least on the 3D data points, evaluating a trajectory of the object and thereby determining that the object was in motion during some portion of the window of time, (iii) in response to determining that the object was in motion, reconstructing the different subsets of 3D data points into a single, assembled 3D representation of the object, and (iv) generating a time-aggregated, 3D visualization of the environment that presents the single, assembled 3D representation of the object at one or more points along the trajectory of the object.
Type: Grant
Filed: December 27, 2019
Date of Patent: October 19, 2021
Assignee: Woven Planet North America, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
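Steps (iii) and (iv), assembling per-frame point subsets of a moving object into one 3D representation and placing it along the trajectory, can be sketched under a crude centroid-alignment assumption (real systems would use pose estimation or point-cloud registration); the function names are hypothetical:

```python
def assemble_moving_object(subsets):
    """subsets: dict mapping capture time -> list of (x, y, z) points for
    the object. Centers each frame's points on its own centroid so the
    returns from a moving object accumulate into a single, denser,
    assembled 3D representation in the object's local frame."""
    assembled = []
    for t in sorted(subsets):
        pts = subsets[t]
        n = len(pts)
        cx = sum(p[0] for p in pts) / n
        cy = sum(p[1] for p in pts) / n
        cz = sum(p[2] for p in pts) / n
        assembled.extend((x - cx, y - cy, z - cz) for x, y, z in pts)
    return assembled

def place_along_trajectory(assembled, trajectory_points):
    """Instantiate the assembled model at chosen points along the
    object's trajectory for a time-aggregated visualization."""
    return [[(x + tx, y + ty, z + tz) for x, y, z in assembled]
            for tx, ty, tz in trajectory_points]
```

The payoff of step (iii) is that a moving object, which would otherwise smear into a streak in a time-aggregated view, appears as one coherent shape at each trajectory point.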
-
Publication number: 20210201055
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Application
Filed: February 1, 2021
Publication date: July 1, 2021
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
-
Publication number: 20210201578
Abstract: Examples disclosed herein may involve (i) identifying, in a 3D point cloud representative of a real-world environment in which a vehicle was operating during a window of time, a set of 3D data points associated with an object detected in the environment that comprises different subsets of 3D data points corresponding to different capture times within the window of time, (ii) based at least on the 3D data points, evaluating a trajectory of the object and thereby determining that the object was in motion during some portion of the window of time, (iii) in response to determining that the object was in motion, reconstructing the different subsets of 3D data points into a single, assembled 3D representation of the object, and (iv) generating a time-aggregated, 3D visualization of the environment that presents the single, assembled 3D representation of the object at one or more points along the trajectory of the object.
Type: Application
Filed: December 27, 2019
Publication date: July 1, 2021
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
-
Patent number: 10909392
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Grant
Filed: December 27, 2019
Date of Patent: February 2, 2021
Assignee: Lyft, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah