Patents by Inventor Ganesh Yalla
Ganesh Yalla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11815623
Abstract: Embodiments of the present disclosure are directed to a method for object detection. The method includes receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem. The sensor data is received simultaneously and within one frame for each of the subsystems. The method also includes extracting one or more feature representations of the objects from camera image data, LiDAR point cloud data and imaging RADAR point cloud data and generating image feature maps, LiDAR feature maps and imaging RADAR feature maps. The method further includes combining the image feature maps, the LiDAR feature maps and the imaging RADAR feature maps to generate merged feature maps and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.
Type: Grant
Filed: August 31, 2021
Date of Patent: November 14, 2023
Assignee: NIO Technology (Anhui) Co., Ltd.
Inventors: Huazeng Deng, Ajaya H S Rao, Ashwath Aithal, Xu Chen, Ruoyu Tan, Veera Ganesh Yalla
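The fusion pipeline this abstract describes — per-modality feature maps merged into one map that feeds classification and box-regression heads — can be sketched minimally in NumPy. This is an illustrative simplification, not the patent's actual network: it assumes all three subsystems yield same-resolution feature maps, uses plain channel concatenation as the merge, and linear per-cell heads.

```python
import numpy as np

def merge_feature_maps(cam_feat, lidar_feat, radar_feat):
    """Concatenate per-modality (H, W, C) feature maps along the
    channel axis to form the merged feature map."""
    assert cam_feat.shape[:2] == lidar_feat.shape[:2] == radar_feat.shape[:2]
    return np.concatenate([cam_feat, lidar_feat, radar_feat], axis=-1)

def detection_heads(merged, w_cls, w_box):
    """Toy per-cell heads: class scores, plus an 8-value box regression
    (position x/y, dimensions l/w/h, heading, velocity vx/vy)."""
    cls_scores = merged @ w_cls   # (H, W, num_classes)
    box_params = merged @ w_box   # (H, W, 8)
    return cls_scores, box_params

rng = np.random.default_rng(0)
H, W = 4, 4
cam = rng.normal(size=(H, W, 16))    # hypothetical channel counts
lidar = rng.normal(size=(H, W, 8))
radar = rng.normal(size=(H, W, 4))
merged = merge_feature_maps(cam, lidar, radar)
cls_s, box_p = detection_heads(merged,
                               rng.normal(size=(28, 3)),
                               rng.normal(size=(28, 8)))
```

In practice the merge would be learned (e.g., convolutional fusion) and the heads would be trained networks; the concatenation here only shows where the three streams join.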
-
Patent number: 11340354
Abstract: Methods and systems herein enable a vehicle to localize precisely and in near real-time. As described, localization of a vehicle using a Global Navigation Satellite System (GNSS) can comprise receiving a signal from each of a plurality of satellites of a GNSS constellation and receiving input from one or more sensors of the vehicle. The input from the sensors can indicate current physical surroundings of the vehicle. A model of the current physical surroundings of the vehicle can be generated based on the input from the one or more sensors of the vehicle. One or more multipath signals received from the plurality of satellites can be mitigated based on the model, and the vehicle can be localized using the received signals from the plurality of satellites of the GNSS constellation and based on the mitigation of the one or more multipath signals.
Type: Grant
Filed: June 26, 2019
Date of Patent: May 24, 2022
Assignee: NIO USA, Inc.
Inventors: Tong Lin, Hiu Hong Yu, Veera Ganesh Yalla, Farzad Cyrus Foroughi Abari, Andre Michelin, Xu Chen
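A minimal sketch of the mitigation idea, assuming the sensed-surroundings model is reduced to per-azimuth elevation masks (the patent does not specify this representation): a satellite whose line of sight falls below the mask created by nearby structures is likely received via reflection, so its signal is excluded before solving for position. All names and thresholds here are illustrative.

```python
def is_line_of_sight(sat_elevation_deg, sat_azimuth_deg, obstruction_model):
    """Check a satellite against a coarse surroundings model.

    obstruction_model maps a 10-degree azimuth sector to the elevation
    mask (deg) created by nearby structures; a satellite below that
    mask is likely a multipath/NLOS reception.
    """
    sector = int(sat_azimuth_deg // 10) * 10
    mask = obstruction_model.get(sector, 0.0)
    return sat_elevation_deg > mask

def usable_satellites(observations, obstruction_model, min_sats=4):
    """Drop likely-multipath signals, but never below the minimum
    needed for a position fix."""
    kept = [o for o in observations
            if is_line_of_sight(o["el"], o["az"], obstruction_model)]
    return kept if len(kept) >= min_sats else observations

# A tall building to the east masks satellites below 40 degrees there.
model = {90: 40.0, 100: 40.0}
obs = [
    {"id": 1, "el": 60.0, "az": 95.0},
    {"id": 2, "el": 20.0, "az": 95.0},   # below the mask: likely multipath
    {"id": 3, "el": 35.0, "az": 200.0},
    {"id": 4, "el": 70.0, "az": 10.0},
    {"id": 5, "el": 50.0, "az": 300.0},
]
kept = usable_satellites(obs, model)
```

A production system might down-weight rather than drop suspect signals, or correct them using modeled reflections; exclusion is the simplest form of the mitigation.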
-
Publication number: 20210397880
Abstract: Embodiments of the present disclosure are directed to a method for object detection. The method includes receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem. The sensor data is received simultaneously and within one frame for each of the subsystems. The method also includes extracting one or more feature representations of the objects from camera image data, LiDAR point cloud data and imaging RADAR point cloud data and generating image feature maps, LiDAR feature maps and imaging RADAR feature maps. The method further includes combining the image feature maps, the LiDAR feature maps and the imaging RADAR feature maps to generate merged feature maps and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.
Type: Application
Filed: August 31, 2021
Publication date: December 23, 2021
Applicant: NIO Technology (Anhui) Co., Ltd.
Inventors: Huazeng Deng, Ajaya H S Rao, Ashwath Aithal, Xu Chen, Ruoyu Tan, Veera Ganesh Yalla
-
Publication number: 20210372796
Abstract: Methods and systems herein enable a vehicle to localize precisely and in near real-time. As described, localization of a vehicle using a Global Navigation Satellite System (GNSS) can comprise receiving a signal from each of a plurality of satellites of a GNSS constellation and receiving input from one or more sensors of the vehicle. The input from the sensors can indicate current physical surroundings of the vehicle. A model of the current physical surroundings of the vehicle can be generated based on the input from the one or more sensors of the vehicle. One or more multipath signals received from the plurality of satellites can be mitigated based on the model, and the vehicle can be localized using the received signals from the plurality of satellites of the GNSS constellation and based on the mitigation of the one or more multipath signals.
Type: Application
Filed: June 26, 2019
Publication date: December 2, 2021
Inventors: Tong Lin, Hiu Hong Yu, Veera Ganesh Yalla, Farzad Cyrus Foroughi Abari, Andre Michelin, Xu Chen
-
Patent number: 11113584
Abstract: Embodiments of the present disclosure are directed to a method for object detection. The method includes receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem. The sensor data is received simultaneously and within one frame for each of the subsystems. The method also includes extracting one or more feature representations of the objects from camera image data, LiDAR point cloud data and imaging RADAR point cloud data and generating image feature maps, LiDAR feature maps and imaging RADAR feature maps. The method further includes combining the image feature maps, the LiDAR feature maps and the imaging RADAR feature maps to generate merged feature maps and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.
Type: Grant
Filed: February 4, 2020
Date of Patent: September 7, 2021
Assignee: NIO USA, Inc.
Inventors: Huazeng Deng, Ajaya H S Rao, Ashwath Aithal, Xu Chen, Ruoyu Tan, Veera Ganesh Yalla
-
Publication number: 20210241026
Abstract: Embodiments of the present disclosure are directed to a method for object detection. The method includes receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem. The sensor data is received simultaneously and within one frame for each of the subsystems. The method also includes extracting one or more feature representations of the objects from camera image data, LiDAR point cloud data and imaging RADAR point cloud data and generating image feature maps, LiDAR feature maps and imaging RADAR feature maps. The method further includes combining the image feature maps, the LiDAR feature maps and the imaging RADAR feature maps to generate merged feature maps and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.
Type: Application
Filed: February 4, 2020
Publication date: August 5, 2021
Inventors: Huazeng Deng, Ajaya H S Rao, Ashwath Aithal, Xu Chen, Ruoyu Tan, Veera Ganesh Yalla
-
Publication number: 20210141093
Abstract: Embodiments include simultaneous localization and mapping in an autonomous machine using unsynchronized data from a plurality of sensors by receiving navigation information from a first sensor and a second sensor of a plurality of sensors. The navigation information from the first sensor is not time synchronized with the localization information from the second sensor. A constraint equation can be applied to the navigation information from the first sensor, the constraint equation comprising a point-to-line constraint, wherein a line of the point-to-line constraint is based on a trajectory of the autonomous machine determined from the navigation information. Localization of the autonomous machine and mapping of physical surroundings of the autonomous machine can be performed using the point-to-line constrained navigation information and the localization information from the second sensor.
Type: Application
Filed: November 13, 2019
Publication date: May 13, 2021
Inventors: Hyungjin Kim, Sathya Narayanan Kasturi Rangan, Shishir Pagad, Veera Ganesh Yalla
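The core of the point-to-line constraint above is a residual measuring how far an unsynchronized measurement lies from the line through two trajectory poses: because the measurement's timestamp does not match either pose, it is constrained to the line rather than to a single pose. A minimal 2-D sketch of that residual (the patent works with full constraint equations inside an optimizer; this only shows the geometric term):

```python
import math

def point_to_line_residual(point, line_a, line_b):
    """Perpendicular distance from a 2-D point to the line through
    line_a and line_b (two poses on the estimated trajectory).

    In a SLAM optimizer, driving this residual toward zero pins the
    unsynchronized measurement to the trajectory line.
    """
    (px, py), (ax, ay), (bx, by) = point, line_a, line_b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    # |cross product| / segment length = perpendicular distance.
    return abs(dx * (py - ay) - dy * (px - ax)) / norm
```

A point on the line contributes zero residual, so measurements consistent with the trajectory are not penalized regardless of their exact (unknown) timestamp along the segment.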
-
Patent number: 10935978
Abstract: Methods and systems herein can let an autonomous vehicle localize itself precisely and in near real-time in a digital map using visual place recognition. Commercial GPS solutions used in the production of autonomous vehicles generally have very low accuracy. For autonomous driving, the vehicle may need to be able to localize in the map very precisely, for example, within a few centimeters. The methods and systems herein incorporate visual place recognition into the digital map and localization process. The roadways or routes within the map can be characterized as a set of nodes, which can be augmented with feature vectors that represent the visual scenes captured using camera sensors. These feature vectors can be constantly updated on the map server and then provided to the vehicles driving the roadways. This process can help create and maintain a diverse set of features for visual place recognition.
Type: Grant
Filed: January 14, 2019
Date of Patent: March 2, 2021
Assignee: NIO USA, Inc.
Inventors: Veera Ganesh Yalla, Sathya Narayanan Kasturi Rangan, Davide Bacchet
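The lookup step implied above — match the descriptor extracted from the current camera frame against the feature vectors stored on the map nodes — can be sketched as a nearest-neighbor search under cosine similarity. This is a generic VPR retrieval sketch, not the patent's specific matching scheme; the node structure and the use of cosine similarity are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def localize(query_vec, map_nodes):
    """Return the map node whose stored feature vector best matches
    the descriptor extracted from the current camera frame."""
    return max(map_nodes, key=lambda n: cosine_similarity(query_vec, n["feature"]))

nodes = [
    {"id": "n1", "feature": [1.0, 0.0, 0.0]},
    {"id": "n2", "feature": [0.0, 1.0, 0.0]},
    {"id": "n3", "feature": [0.7, 0.7, 0.0]},
]
query = [0.9, 0.1, 0.0]
best = localize(query, nodes)
```

At scale, the linear scan would be replaced by an approximate-nearest-neighbor index, and the matched node would seed (not replace) the centimeter-level pose estimate.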
-
Patent number: 10845803
Abstract: According to one embodiment, an autonomous vehicle safety system can be implemented with a plurality of sensors, each of the plurality of sensors being configured to produce an electrical signal that is indicative of an environmental condition about a vehicle; a sensor distribution hub that receives the electrical signals from the plurality of sensors and generates two streams of data based on the electrical signals received from the plurality of sensors; a first micro-processing unit configured to receive a first of the two streams of data generated by the sensor distribution hub, where the first micro-processing unit is further configured to autonomously control the vehicle; and a second micro-processing unit configured to receive a second of the two streams of data generated by the sensor distribution hub, where the second micro-processing unit is also configured to autonomously control the vehicle.
Type: Grant
Filed: November 29, 2017
Date of Patent: November 24, 2020
Assignee: NIO USA, Inc.
Inventors: Stephen Eric Sidle, Veera Ganesh Yalla, Samir Agrawal, Jerry L. Petree, Dennis Polischuk
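The redundancy pattern above — one hub fanning sensor data out to two independent processing units, either of which can control the vehicle — can be sketched as follows. The arbitration policy (prefer the primary unit, fail over when it produces nothing) is an illustrative assumption; the patent describes the architecture, not a specific failover rule.

```python
class SensorDistributionHub:
    """Fan a single set of sensor readings out as two identical
    streams, one per micro-processing unit, so either unit can
    autonomously control the vehicle if the other fails."""

    def distribute(self, readings):
        # Independent copies: a fault in one consumer cannot corrupt
        # the other's input stream.
        return list(readings), list(readings)

def select_command(primary_cmd, secondary_cmd):
    """Simple arbitration sketch: use the primary unit's command when
    it is healthy; fail over to the redundant unit otherwise."""
    return primary_cmd if primary_cmd is not None else secondary_cmd

hub = SensorDistributionHub()
stream_a, stream_b = hub.distribute([{"radar": 12.3}, {"camera": "frame0"}])
```

Real dual-channel safety systems add cross-checking (comparing the two units' outputs) and watchdog-based health monitoring on top of this basic fan-out.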
-
Patent number: 10606274
Abstract: Methods and systems herein can let an autonomous vehicle localize itself precisely and in near real-time in a digital map using visual place recognition. Commercial GPS solutions used in the production of autonomous vehicles generally have very low accuracy. For autonomous driving, the vehicle may need to be able to localize in the map very precisely, for example, within a few centimeters. The methods and systems herein incorporate visual place recognition into the digital map and localization process. The roadways or routes within the map can be characterized as a set of nodes, which can be augmented with feature vectors that represent the visual scenes captured using camera sensors. These feature vectors can be constantly updated on the map server and then provided to the vehicles driving the roadways. This process can help create and maintain a diverse set of features for visual place recognition.
Type: Grant
Filed: October 30, 2017
Date of Patent: March 31, 2020
Assignee: NIO USA, Inc.
Inventors: Veera Ganesh Yalla, Sathya Narayanan Kasturi Rangan, Davide Bacchet
-
Patent number: 10527440
Abstract: Features of a vehicle navigation system are discussed in this disclosure. In particular, systems and methods are described for identifying glare-prone areas and establishing a route that avoids those identified areas determined likely to degrade visibility of the environment outside the vehicle. In some embodiments, the navigation system can be configured to calculate the likely duration of a trip and then, based on sun angle data, identify locations where glare is likely to be problematic. In some embodiments, the navigation system can also be configured to choose routes that avoid bright sources of light that could adversely affect visual acuity at night.
Type: Grant
Filed: July 26, 2017
Date of Patent: January 7, 2020
Assignee: FARADAY&FUTURE INC.
Inventor: Veera Ganesh Yalla
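The routing idea above reduces to a per-segment glare test (am I driving toward a low sun?) feeding a route cost that a planner can minimize. A minimal sketch, with assumed angular tolerances and an assumed additive-penalty cost model; the patent does not publish specific thresholds:

```python
def glare_risk(segment_heading_deg, sun_azimuth_deg, sun_elevation_deg,
               heading_tol=25.0, max_elevation=25.0):
    """True when driving this segment means facing a low sun within a
    small angular window — the glare-prone condition. Thresholds are
    illustrative assumptions."""
    if not (0.0 < sun_elevation_deg < max_elevation):
        return False  # sun below horizon or high enough not to blind
    # Smallest signed angle between travel heading and sun azimuth.
    diff = abs((segment_heading_deg - sun_azimuth_deg + 180) % 360 - 180)
    return diff < heading_tol

def route_cost(segments, sun_az, sun_el, glare_penalty=5.0):
    """Distance plus a penalty per glare-prone segment, so a shortest-
    cost planner naturally prefers glare-free routes."""
    return sum(s["km"] + (glare_penalty if glare_risk(s["heading"], sun_az, sun_el) else 0.0)
               for s in segments)

# Westbound at sunset (sun azimuth ~268 deg, elevation 10 deg) is risky;
# eastbound at the same moment is not.
segments = [{"km": 5.0, "heading": 270.0}, {"km": 5.0, "heading": 90.0}]
cost = route_cost(segments, sun_az=268.0, sun_el=10.0)
```

Sun azimuth/elevation for the predicted arrival time at each segment would come from an ephemeris calculation; the night-time variant would penalize segments near known bright light sources the same way.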
-
Patent number: 10423162
Abstract: Various embodiments relate generally to autonomous vehicles and associated mechanical, electrical and electronic hardware, computing software, including autonomy applications, image processing applications, cloud storage, cloud computing applications, etc., and computing systems, and wired and wireless network communications to facilitate autonomous control of vehicles, and, more specifically, to systems, devices, and methods configured to identify permissioned parking relative to multiple classes of restricted and privileged parking.
Type: Grant
Filed: May 8, 2017
Date of Patent: September 24, 2019
Assignee: NIO USA, Inc.
Inventors: Veera Ganesh Yalla, Davide Bacchet
-
Patent number: 10395530
Abstract: The disclosure includes implementations for executing one or more computations for a vehicle. Some implementations of a method for a vehicle may include identifying one or more computations as being un-executable by any processor-based computing device of the vehicle. The method may include generating a query including query data describing the one or more computations to be executed for the vehicle. The method may include providing the query to a network. The method may include receiving a response from the network. The response may include solution data describing a result of executing the one or more computations. The response may be provided to the network by a processor-based computing device included in a hierarchy of processor-based computing devices that have greater computational ability than any processor-based computing devices of the vehicle.
Type: Grant
Filed: July 5, 2018
Date of Patent: August 27, 2019
Inventors: Rahul Parundekar, Kentaro Oguchi, Veera Ganesh Yalla, Preeti Jayagopi Pillai
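The offloading flow above — try on-board devices, and only when none can execute the computation, query a hierarchy of more capable remote devices — can be sketched with a single capability number per device. Representing capability as required FLOPS and the device/field names are illustrative assumptions; the patent describes the query/response protocol, not this selection heuristic.

```python
def find_executor(computation, local_devices, hierarchy):
    """Pick where to run a computation.

    On-board devices are tried first; if the computation is
    un-executable by all of them, the hierarchy (e.g., roadside edge
    server, then cloud) is searched in order of ascending capability
    so the least powerful sufficient device is queried."""
    need = computation["required_flops"]
    for dev in local_devices:
        if dev["flops"] >= need:
            return dev["name"]
    for dev in sorted(hierarchy, key=lambda d: d["flops"]):
        if dev["flops"] >= need:
            return dev["name"]
    return None  # nothing in the hierarchy can run it either

local = [{"name": "vehicle_ecu", "flops": 1e9}]
hierarchy = [{"name": "cloud", "flops": 1e15},
             {"name": "edge_server", "flops": 1e12}]
```

In the patented flow, choosing a remote executor would trigger generating query data describing the computation, sending it to the network, and consuming the returned solution data.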
-
Publication number: 20190163178
Abstract: According to one embodiment, an autonomous vehicle safety system can be implemented with a plurality of sensors, each of the plurality of sensors being configured to produce an electrical signal that is indicative of an environmental condition about a vehicle; a sensor distribution hub that receives the electrical signals from the plurality of sensors and generates two streams of data based on the electrical signals received from the plurality of sensors; a first micro-processing unit configured to receive a first of the two streams of data generated by the sensor distribution hub, where the first micro-processing unit is further configured to autonomously control the vehicle; and a second micro-processing unit configured to receive a second of the two streams of data generated by the sensor distribution hub, where the second micro-processing unit is also configured to autonomously control the vehicle.
Type: Application
Filed: November 29, 2017
Publication date: May 30, 2019
Inventors: Stephen Eric Sidle, Veera Ganesh Yalla, Samir Agrawal, Jerry L. Petree, Dennis Polischuk
-
Publication number: 20190146500
Abstract: Methods and systems herein can let an autonomous vehicle localize itself precisely and in near real-time in a digital map using visual place recognition. Commercial GPS solutions used in the production of autonomous vehicles generally have very low accuracy. For autonomous driving, the vehicle may need to be able to localize in the map very precisely, for example, within a few centimeters. The methods and systems herein incorporate visual place recognition into the digital map and localization process. The roadways or routes within the map can be characterized as a set of nodes, which can be augmented with feature vectors that represent the visual scenes captured using camera sensors. These feature vectors can be constantly updated on the map server and then provided to the vehicles driving the roadways. This process can help create and maintain a diverse set of features for visual place recognition.
Type: Application
Filed: January 14, 2019
Publication date: May 16, 2019
Inventors: Veera Ganesh Yalla, Sathya Narayanan Kasturi Rangan, Davide Bacchet
-
Publication number: 20190129431
Abstract: Methods and systems herein can let an autonomous vehicle localize itself precisely and in near real-time in a digital map using visual place recognition. Commercial GPS solutions used in the production of autonomous vehicles generally have very low accuracy. For autonomous driving, the vehicle may need to be able to localize in the map very precisely, for example, within a few centimeters. The methods and systems herein incorporate visual place recognition into the digital map and localization process. The roadways or routes within the map can be characterized as a set of nodes, which can be augmented with feature vectors that represent the visual scenes captured using camera sensors. These feature vectors can be constantly updated on the map server and then provided to the vehicles driving the roadways. This process can help create and maintain a diverse set of features for visual place recognition.
Type: Application
Filed: October 30, 2017
Publication date: May 2, 2019
Inventors: Veera Ganesh Yalla, Sathya Narayanan Kasturi Rangan, Davide Bacchet
-
Publication number: 20190054922
Abstract: A system that performs a method is disclosed. The system determines one or more characteristics about an area surrounding a vehicle via one or more sensors (e.g., one or more characteristics about a road on which the vehicle is traveling and/or one or more characteristics about one or more other vehicles on the road) and that one or more vehicle passing criteria are satisfied. The system also determines whether passing is allowed on the road at a current location of the vehicle. In response to the determination of whether passing is allowed: in accordance with a determination that passing is allowed, the system performs an automated pass operation to pass one or more other vehicles. In accordance with a determination that passing is not allowed, the system forgoes performing the automated pass operation to pass the one or more other vehicles.
Type: Application
Filed: September 27, 2017
Publication date: February 21, 2019
Inventors: Veera Ganesh Yalla, Carlos John Rosario
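The decision structure above (criteria satisfied AND passing legal here → pass; otherwise forgo) can be sketched as follows. The specific criteria and thresholds are illustrative assumptions — the publication names "vehicle passing criteria" without fixing them:

```python
def pass_criteria_satisfied(lead_speed_kph, own_target_kph, oncoming_gap_m,
                            min_speed_delta=10.0, min_gap_m=300.0):
    """Example criteria (not the publication's exact tests): the lead
    vehicle is meaningfully slower than our target speed, and the
    oncoming lane is clear for a sufficient distance."""
    return (own_target_kph - lead_speed_kph >= min_speed_delta
            and oncoming_gap_m >= min_gap_m)

def decide_pass(sensed, passing_allowed):
    """Perform the automated pass only when the sensed criteria hold
    AND passing is allowed at the current location; otherwise forgo."""
    if not passing_allowed:
        return "forgo"
    return "pass" if pass_criteria_satisfied(**sensed) else "forgo"

sensed = {"lead_speed_kph": 70.0, "own_target_kph": 100.0,
          "oncoming_gap_m": 500.0}
```

Keeping legality as a separate, overriding gate (rather than one more weighted criterion) mirrors the abstract's two-step structure: a no-passing zone forgoes the maneuver regardless of how favorable the sensed conditions are.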
-
Publication number: 20180321685
Abstract: Various embodiments relate generally to autonomous vehicles and associated mechanical, electrical and electronic hardware, computing software, including autonomy applications, image processing applications, cloud storage, cloud computing applications, etc., and computing systems, and wired and wireless network communications to facilitate autonomous control of vehicles, and, more specifically, to systems, devices, and methods configured to identify permissioned parking relative to multiple classes of restricted and privileged parking.
Type: Application
Filed: May 8, 2017
Publication date: November 8, 2018
Applicant: NEXTEV USA, INC.
Inventors: Veera Ganesh Yalla, Davide Bacchet
-
Publication number: 20180322780
Abstract: The disclosure includes implementations for executing one or more computations for a vehicle. Some implementations of a method for a vehicle may include identifying one or more computations as being un-executable by any processor-based computing device of the vehicle. The method may include generating a query including query data describing the one or more computations to be executed for the vehicle. The method may include providing the query to a network. The method may include receiving a response from the network. The response may include solution data describing a result of executing the one or more computations. The response may be provided to the network by a processor-based computing device included in a hierarchy of processor-based computing devices that have greater computational ability than any processor-based computing devices of the vehicle.
Type: Application
Filed: July 5, 2018
Publication date: November 8, 2018
Inventors: Rahul Parundekar, Kentaro Oguchi, Veera Ganesh Yalla, Preeti Jayagopi Pillai
-
Patent number: 10074274
Abstract: This disclosure relates to a method of safely and automatically navigating in the presence of emergency vehicles. A first vehicle may receive, via communication hardware, a message indicating presence of the emergency vehicle. Such a message may originate from the emergency vehicle itself and/or from infrastructure such as a smart traffic light that can sense the presence of the emergency vehicle. The first vehicle may then determine the relative location of the emergency vehicle and automatically respond appropriately by determining a safe trajectory and navigating according to that trajectory until the emergency vehicle is out of range. In some examples, the first vehicle may detect the presence of an emergency vehicle using on-board sensors such as distance measuring sensors, depth sensors, and cameras, in addition to receiving a message via communication hardware.
Type: Grant
Filed: February 28, 2017
Date of Patent: September 11, 2018
Assignee: FARADAY & FUTURE INC.
Inventors: Jan Becker, Veera Ganesh Yalla, Chongyu Wang, Bibhrajit Halder
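The "determine the relative location, then respond" step above can be sketched as classifying the emergency vehicle as ahead of or behind the first vehicle and picking a maneuver accordingly. The angle convention (heading measured like a math angle from the +x axis) and the two-maneuver policy are illustrative assumptions; the patent leaves the trajectory choice to the planner.

```python
import math

def respond_to_emergency_vehicle(own_pos, own_heading_deg, ev_pos):
    """Classify a reported emergency-vehicle position as ahead of or
    behind us and return a simple response: slow down if it is ahead,
    pull right and stop if it approaches from behind.

    own_pos / ev_pos are (x, y) in a shared local frame, e.g. decoded
    from a V2V/V2I message or produced by on-board sensing."""
    dx = ev_pos[0] - own_pos[0]
    dy = ev_pos[1] - own_pos[1]
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # Signed angle between our heading and the bearing to the EV.
    rel = (bearing - own_heading_deg + 180) % 360 - 180
    ahead = abs(rel) < 90
    return "slow_down" if ahead else "pull_right_and_stop"
```

A full implementation would keep responding until the emergency vehicle leaves sensing/communication range, and would fuse the message with on-board distance, depth, and camera detections rather than trusting either source alone.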