Patents by Inventor Suneet Rajendra Shah
Suneet Rajendra Shah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12039785
Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle while navigating an environment over a period of time. Information describing one or more agents associated with the environment during the period of time can be determined based at least in part on the captured sensor data. A parameter-based encoding describing the one or more agents associated with the environment during the period of time can be generated based at least in part on the determined information and a scenario schema, wherein the parameter-based encoding provides a structured representation of the information describing the one or more agents associated with the environment. A scenario represented by the parameter-based encoding can be determined based at least in part on a cluster of parameter-based encodings to which the parameter-based encoding is assigned.
Type: Grant
Filed: October 23, 2019
Date of Patent: July 16, 2024
Assignee: Lyft, Inc.
Inventors: Ivan Kirigan, David Tse-Zhou Lu, Sheng Yang, Ranjith Unnikrishnan, Emilie Jeanne Anne Danna, Weiyi Hou, Daxiao Liu, Suneet Rajendra Shah, Ying Liu
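As a rough illustration of the abstract above, the sketch below shows one way a parameter-based encoding of detected agents could be structured and assigned to the nearest scenario cluster. The names (AgentInfo, ScenarioEncoding, assign_to_cluster), the fixed-length vector layout, and the nearest-center assignment are illustrative assumptions, not the claimed method.

```python
# A minimal sketch, assuming a simple schema of per-agent parameters and a
# nearest-center cluster assignment. All names and layouts are hypothetical.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class AgentInfo:
    agent_type: str          # e.g. "vehicle", "pedestrian", "cyclist"
    rel_x: float             # position relative to the ego vehicle (meters)
    rel_y: float
    speed: float             # meters per second


@dataclass
class ScenarioEncoding:
    """Structured, parameter-based representation of agents over a time window."""
    agents: List[AgentInfo]

    def to_vector(self, max_agents: int = 8) -> np.ndarray:
        # Flatten agent parameters into a fixed-length vector so encodings
        # from different drives can be compared and clustered.
        type_ids = {"vehicle": 0.0, "pedestrian": 1.0, "cyclist": 2.0}
        vec = np.zeros(max_agents * 4)
        for i, a in enumerate(self.agents[:max_agents]):
            vec[i * 4: i * 4 + 4] = [type_ids.get(a.agent_type, 3.0),
                                     a.rel_x, a.rel_y, a.speed]
        return vec


def assign_to_cluster(encoding: ScenarioEncoding,
                      cluster_centers: np.ndarray) -> int:
    """Return the index of the closest cluster center (the inferred scenario)."""
    distances = np.linalg.norm(cluster_centers - encoding.to_vector(), axis=1)
    return int(np.argmin(distances))


# Usage: a single encoding built from (hypothetical) perception output.
enc = ScenarioEncoding(agents=[AgentInfo("vehicle", 12.0, -1.5, 8.3),
                               AgentInfo("pedestrian", 4.0, 6.0, 1.2)])
centers = np.random.default_rng(0).normal(size=(5, 32))  # 5 learned scenario centers
print("assigned scenario:", assign_to_cluster(enc, centers))
```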
-
Publication number: 20240104932
Abstract: Systems, methods, and non-transitory computer-readable media can access a plurality of parameter-based encodings providing a structured representation of an environment captured by one or more sensors associated with a plurality of vehicles traveling through the environment. A given parameter-based encoding of the environment identifies one or more agents that were detected by a vehicle within the environment and respective location information for the one or more agents within the environment. The plurality of parameter-based encodings can be clustered into one or more clusters of parameter-based encodings. At least one scenario associated with the environment can be determined based at least in part on the one or more clusters of parameter-based encodings.
Type: Application
Filed: October 6, 2023
Publication date: March 28, 2024
Applicant: Lyft, Inc.
Inventors: Ivan Kirigan, David Tse-Zhou Lu, Sheng Yang, Ranjith Unnikrishnan, Emilie Jeanne Anne Danna, Weiyi Hou, Daxiao Liu, Suneet Rajendra Shah, Ying Liu
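A minimal sketch of the clustering step this abstract describes, assuming the fleet's encodings have already been flattened to fixed-length parameter vectors. The use of k-means (scikit-learn's KMeans) and the synthetic data are assumptions for illustration; the abstract does not specify a particular clustering algorithm.

```python
# A minimal sketch: cluster many parameter-based encodings (one per drive)
# and treat each cluster as one recurring scenario. Data and algorithm choice
# are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Stand-in for encodings collected from a fleet: each row is one drive's
# fixed-length parameter vector (agent types, relative positions, speeds, ...).
encodings = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 32)),   # e.g. "empty road" drives
    rng.normal(loc=5.0, scale=1.0, size=(100, 32)),   # e.g. "dense traffic" drives
])

# Cluster the encodings; each cluster stands in for one scenario.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(encodings)

for cluster_id in range(kmeans.n_clusters):
    members = encodings[kmeans.labels_ == cluster_id]
    print(f"scenario {cluster_id}: {len(members)} drives, "
          f"mean parameter norm {np.linalg.norm(members.mean(axis=0)):.2f}")
```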
-
Patent number: 11816900
Abstract: Systems, methods, and non-transitory computer-readable media can access a plurality of parameter-based encodings providing a structured representation of an environment captured by one or more sensors associated with a plurality of vehicles traveling through the environment. A given parameter-based encoding of the environment identifies one or more agents that were detected by a vehicle within the environment and respective location information for the one or more agents within the environment. The plurality of parameter-based encodings can be clustered into one or more clusters of parameter-based encodings. At least one scenario associated with the environment can be determined based at least in part on the one or more clusters of parameter-based encodings.
Type: Grant
Filed: October 23, 2019
Date of Patent: November 14, 2023
Assignee: Lyft, Inc.
Inventors: Ivan Kirigan, David Tse-Zhou Lu, Sheng Yang, Ranjith Unnikrishnan, Emilie Jeanne Anne Danna, Weiyi Hou, Daxiao Liu, Suneet Rajendra Shah, Ying Liu
-
Patent number: 11610409
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Grant
Filed: February 1, 2021
Date of Patent: March 21, 2023
Assignee: Woven Planet North America, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
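A minimal sketch of the 2D-to-3D lifting step this abstract describes: point-cloud points are projected into the camera image, points falling inside a 2D label are taken as the object's 3D points, and a simple axis-aligned 3D label is fit to them. The pinhole projection, the helper names (project_to_image, lift_2d_box_to_3d), and the box-fitting choice are assumptions, not the patented pipeline.

```python
# A minimal sketch, assuming camera-frame lidar points and a single 2D box label.
import numpy as np


def project_to_image(points_xyz: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates (pinhole model)."""
    uvw = points_xyz @ K.T
    return uvw[:, :2] / uvw[:, 2:3]


def lift_2d_box_to_3d(points_xyz: np.ndarray, K: np.ndarray, box_2d: tuple) -> dict:
    """Select points whose projection lies inside box_2d = (u0, v0, u1, v1)
    and return a simple axis-aligned 3D label for them."""
    u0, v0, u1, v1 = box_2d
    uv = project_to_image(points_xyz, K)
    in_front = points_xyz[:, 2] > 0.0
    in_box = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
              (uv[:, 1] >= v0) & (uv[:, 1] <= v1) & in_front)
    obj = points_xyz[in_box]
    if len(obj) == 0:
        return {"center": None, "size": None, "num_points": 0}
    return {"center": obj.mean(axis=0),
            "size": obj.max(axis=0) - obj.min(axis=0),
            "num_points": len(obj)}


# Usage with synthetic data: a small point cluster about 10 m in front of the camera.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
cloud = np.random.default_rng(1).normal(loc=[0.0, 0.0, 10.0], scale=0.5, size=(500, 3))
label_3d = lift_2d_box_to_3d(cloud, K, box_2d=(560, 280, 720, 440))
print(label_3d["center"], label_3d["size"], label_3d["num_points"])
```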
-
Patent number: 11580655
Abstract: Examples disclosed herein may involve a computing system that is operable to (i) present, via a visual interface, a virtual shape associated with a three-dimensional (3D) coordinate system, (ii) present, via the visual interface, a visual indicator positioned in proximity to the virtual shape and indicating that a specified spatial parameter of the virtual shape will be modified along a specified dimension of the 3D coordinate system in response to a given type of user input associated with the visual indicator, (iii) while presenting the visual indicator, detect an instance of the given type of user input associated with the visual indicator, and (iv) after detecting the instance of the given type of user input, update the virtual shape that is presented via the visual interface by modifying the specified spatial parameter of the virtual shape along the specified dimension.
Type: Grant
Filed: June 25, 2020
Date of Patent: February 14, 2023
Assignee: WOVEN PLANET NORTH AMERICA, INC.
Inventors: Tanner Cotant Christensen, Suneet Rajendra Shah, Newton Ko Yue Der, Brandon Huang, Kim Hoang Nguyen
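A GUI-free sketch of the interaction model in this abstract: a virtual shape exposes spatial parameters per 3D axis, and an indicator is bound to one (parameter, axis) pair so that a drag-style input modifies only that parameter along that dimension. The VirtualShape and Indicator classes are hypothetical stand-ins for the claimed visual interface.

```python
# A minimal sketch, assuming a drag gesture is reduced to a scalar delta along
# the indicator's bound axis. Class and method names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class VirtualShape:
    # Spatial parameters of the shape in a 3D coordinate system.
    position: dict = field(default_factory=lambda: {"x": 0.0, "y": 0.0, "z": 0.0})
    extent: dict = field(default_factory=lambda: {"x": 1.0, "y": 1.0, "z": 1.0})


@dataclass
class Indicator:
    """A handle rendered near the shape, bound to one parameter and one axis."""
    parameter: str   # "position" or "extent"
    axis: str        # "x", "y", or "z"

    def apply_drag(self, shape: VirtualShape, delta: float) -> None:
        # A drag gesture on this indicator changes only the bound
        # spatial parameter along the bound dimension.
        values = getattr(shape, self.parameter)
        values[self.axis] += delta


# Usage: dragging the "extent along z" indicator stretches the shape vertically.
shape = VirtualShape()
Indicator(parameter="extent", axis="z").apply_drag(shape, delta=0.75)
print(shape.extent)  # {'x': 1.0, 'y': 1.0, 'z': 1.75}
```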
-
Publication number: 20210407116
Abstract: Examples disclosed herein may involve a computing system that is operable to (i) present, via a visual interface, a virtual shape associated with a three-dimensional (3D) coordinate system, (ii) present, via the visual interface, a visual indicator positioned in proximity to the virtual shape and indicating that a specified spatial parameter of the virtual shape will be modified along a specified dimension of the 3D coordinate system in response to a given type of user input associated with the visual indicator, (iii) while presenting the visual indicator, detect an instance of the given type of user input associated with the visual indicator, and (iv) after detecting the instance of the given type of user input, update the virtual shape that is presented via the visual interface by modifying the specified spatial parameter of the virtual shape along the specified dimension.
Type: Application
Filed: June 25, 2020
Publication date: December 30, 2021
Applicant: Woven Planet North America, Inc.
Inventors: Tanner Cotant Christensen, Suneet Rajendra Shah, Newton Ko Yue Der, Brandon Huang, Kim Hoang Nguyen
-
Patent number: 11151788
Abstract: Examples disclosed herein may involve (i) identifying, in a 3D point cloud representative of a real-world environment in which a vehicle was operating during a window of time, a set of 3D data points associated with an object detected in the environment that comprises different subsets of 3D data points corresponding to different capture times within the window of time, (ii) based at least on the 3D data points, evaluating a trajectory of the object and thereby determining that the object was in motion during some portion of the window of time, (iii) in response to determining that the object was in motion, reconstructing the different subsets of 3D data points into a single, assembled 3D representation of the object, and (iv) generating a time-aggregated, 3D visualization of the environment that presents the single, assembled 3D representation of the object at one or more points along the trajectory of the object.
Type: Grant
Filed: December 27, 2019
Date of Patent: October 19, 2021
Assignee: Woven Planet North America, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
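A minimal sketch of the assembly step in this abstract: the object's points are grouped by capture time, motion is inferred from the per-timestamp centroids, and the subsets are shifted into a common object frame and merged into a single assembled representation. The centroid-based trajectory and alignment are simplifying assumptions; a real pipeline would presumably use estimated object poses.

```python
# A minimal sketch, assuming per-capture-time point subsets and a centroid-based
# motion check. All thresholds and the alignment method are assumptions.
import numpy as np


def assemble_moving_object(subsets: list, motion_threshold: float = 0.5):
    """subsets: list of (N_i, 3) arrays, one per capture time, ordered in time."""
    centroids = np.array([s.mean(axis=0) for s in subsets])
    # Evaluate the trajectory: total centroid displacement over the window.
    displacement = np.linalg.norm(centroids[-1] - centroids[0])
    if displacement < motion_threshold:
        return None, centroids  # treated as static; no reassembly needed
    # Shift every subset into the object's own frame and merge them, so the
    # motion-smeared points collapse into a single, denser 3D representation.
    assembled = np.vstack([s - c for s, c in zip(subsets, centroids)])
    return assembled, centroids


# Usage with synthetic data: an object translating 1 m per frame along x.
rng = np.random.default_rng(3)
frames = [rng.normal(scale=0.2, size=(200, 3)) + np.array([t * 1.0, 0.0, 0.0])
          for t in range(5)]
assembled, trajectory = assemble_moving_object(frames)
print(assembled.shape, "points; centroid trajectory (x):", np.round(trajectory[:, 0], 2))
```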
-
Publication number: 20210201578
Abstract: Examples disclosed herein may involve (i) identifying, in a 3D point cloud representative of a real-world environment in which a vehicle was operating during a window of time, a set of 3D data points associated with an object detected in the environment that comprises different subsets of 3D data points corresponding to different capture times within the window of time, (ii) based at least on the 3D data points, evaluating a trajectory of the object and thereby determining that the object was in motion during some portion of the window of time, (iii) in response to determining that the object was in motion, reconstructing the different subsets of 3D data points into a single, assembled 3D representation of the object, and (iv) generating a time-aggregated, 3D visualization of the environment that presents the single, assembled 3D representation of the object at one or more points along the trajectory of the object.
Type: Application
Filed: December 27, 2019
Publication date: July 1, 2021
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
-
Publication number: 20210201055
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Application
Filed: February 1, 2021
Publication date: July 1, 2021
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
-
Publication number: 20210124355
Abstract: Systems, methods, and non-transitory computer-readable media can determine sensor data captured by at least one sensor of a vehicle while navigating an environment over a period of time. Information describing one or more agents associated with the environment during the period of time can be determined based at least in part on the captured sensor data. A parameter-based encoding describing the one or more agents associated with the environment during the period of time can be generated based at least in part on the determined information and a scenario schema, wherein the parameter-based encoding provides a structured representation of the information describing the one or more agents associated with the environment. A scenario represented by the parameter-based encoding can be determined based at least in part on a cluster of parameter-based encodings to which the parameter-based encoding is assigned.
Type: Application
Filed: October 23, 2019
Publication date: April 29, 2021
Applicant: Lyft, Inc.
Inventors: Ivan Kirigan, David Tse-Zhou Lu, Sheng Yang, Ranjith Unnikrishnan, Emilie Jeanne Anne Danna, Weiyi Hou, Daxiao Liu, Suneet Rajendra Shah, Ying Liu
-
Publication number: 20210124350
Abstract: Systems, methods, and non-transitory computer-readable media can access a plurality of parameter-based encodings providing a structured representation of an environment captured by one or more sensors associated with a plurality of vehicles traveling through the environment. A given parameter-based encoding of the environment identifies one or more agents that were detected by a vehicle within the environment and respective location information for the one or more agents within the environment. The plurality of parameter-based encodings can be clustered into one or more clusters of parameter-based encodings. At least one scenario associated with the environment can be determined based at least in part on the one or more clusters of parameter-based encodings.
Type: Application
Filed: October 23, 2019
Publication date: April 29, 2021
Applicant: Lyft, Inc.
Inventors: Ivan Kirigan, David Tse-Zhou Lu, Sheng Yang, Ranjith Unnikrishnan, Emilie Jeanne Anne Danna, Weiyi Hou, Daxiao Liu, Suneet Rajendra Shah, Ying Liu
-
Patent number: 10909392
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Grant
Filed: December 27, 2019
Date of Patent: February 2, 2021
Assignee: Lyft, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah