Patents by Inventor Venkatapathi Raju Nallapa

Venkatapathi Raju Nallapa has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220026919
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage medium, a radar system, a location component, and a drive controller. The storage medium stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 8, 2021
    Publication date: January 27, 2022
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
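    The following minimal Python sketch (not taken from the patent) illustrates the kind of flow the abstract describes: radar returns are matched against a stored map of landmarks to pick the most likely pose, which then feeds a drive-control step. The map format, scoring rule, and all function names are assumptions for illustration only.
    ```python
    # Hypothetical sketch of radar-map localization; the landmark map, candidate
    # poses, and matching score are illustrative stand-ins, not the patented method.
    import math

    def locate_on_map(radar_points, map_landmarks, candidate_poses):
        """Pick the candidate pose whose view of the mapped landmarks best
        matches the radar perception (lower summed distance = better)."""
        def score(pose):
            px, py = pose
            total = 0.0
            for rx, ry in radar_points:                    # radar returns, vehicle frame
                gx, gy = px + rx, py + ry                  # into map frame (rotation omitted)
                total += min(math.hypot(gx - lx, gy - ly)  # nearest mapped landmark
                             for lx, ly in map_landmarks)
            return total
        return min(candidate_poses, key=score)

    def control_step(pose, waypoint):
        """Toy drive-controller step: heading command toward the next map waypoint."""
        return math.atan2(waypoint[1] - pose[1], waypoint[0] - pose[0])

    # Example: roadside landmarks on the map, two candidate poses from GPS/odometry.
    landmarks = [(0.0, 5.0), (10.0, 5.0), (20.0, 5.0)]
    radar = [(0.0, 5.0), (10.0, 5.0)]
    best = locate_on_map(radar, landmarks, candidate_poses=[(0.0, 0.0), (7.0, 2.0)])
    steering = control_step(best, waypoint=(20.0, 0.0))
    ```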
  • Patent number: 11169534
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage medium, a radar system, a location component, and a drive controller. The storage medium stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: November 9, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
  • Patent number: 10726248
    Abstract: The present invention extends to methods, systems, and computer program products for validating gesture recognition capabilities of automated systems. Aspects include a gesture recognition training system that is scalable, efficient, and repeatable, and that accounts for permutations of physical characteristics, clothing, types of gestures, environment, culture, weather, road conditions, etc. The gesture recognition training system includes sensors and algorithms used to generate training data sets that facilitate more accurate recognition of and reaction to human gestures. A training data set can be scaled by monitoring and recording gestures performed both by a humanoid robot and by animated humans in a simulation environment. From a scaled training data set, autonomous devices can be trained to recognize and react to a diverse set of human gestures in varying conditions with substantially improved capabilities. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: July 28, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Venkatapathi Raju Nallapa, Anjali Krishnamachar, Gautham Sholingar
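    A small Python sketch of how such a training set might be scaled across permutations of source (humanoid robot vs. simulated human), gesture, and conditions. The condition lists, record fields, and file paths below are hypothetical, not taken from the patent.
    ```python
    # Hypothetical sketch: enumerate the capture configurations a scaled gesture
    # training set should cover, with the ground-truth label each clip will carry.
    import itertools
    import json

    GESTURES = ["wave_stop", "wave_on", "point_left", "point_right"]
    SOURCES = ["humanoid_robot", "simulated_human"]
    WEATHER = ["clear", "rain", "fog"]
    CLOTHING = ["light", "dark", "high_visibility"]

    def build_training_manifest():
        """List every permutation of source, gesture, and condition to record."""
        manifest = []
        for src, gesture, weather, clothing in itertools.product(
                SOURCES, GESTURES, WEATHER, CLOTHING):
            manifest.append({
                "source": src,
                "label": gesture,          # ground-truth gesture class
                "weather": weather,
                "clothing": clothing,
                "clip": f"{src}/{gesture}/{weather}_{clothing}.mp4",  # placeholder path
            })
        return manifest

    if __name__ == "__main__":
        manifest = build_training_manifest()
        print(f"{len(manifest)} labeled capture configurations")
        print(json.dumps(manifest[0], indent=2))
    ```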
  • Patent number: 10635912
    Abstract: The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation, and more particularly to the generation of virtual sensor data for training and testing models or algorithms that detect objects or obstacles. A method for generating virtual sensor data includes simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more virtual objects. The method includes generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a dimension or parameter of the one or more virtual objects. The method includes storing and associating the virtual sensor data and the virtual ground truth using one or more processors. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 28, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Harpreetsingh Banvait, Scott Vincent Myers
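    A minimal Python sketch of pairing simulated sensor readings with per-position ground truth, as the abstract describes. The toy range "sensor" and the data layout are assumptions for illustration only.
    ```python
    # Hypothetical sketch: store a simulated reading and the associated ground
    # truth (object dimensions/parameters) together for each sensor position.
    import math

    def simulate_dataset(sensor_positions, virtual_objects):
        """Return a list of records, one per sensor position, each pairing the
        virtual sensor data with the corresponding virtual ground truth."""
        dataset = []
        for pos in sensor_positions:
            reading = [math.dist(pos, obj["center"]) for obj in virtual_objects]  # toy range data
            ground_truth = [{"center": obj["center"], "size": obj["size"]}
                            for obj in virtual_objects]
            dataset.append({"position": pos,
                            "sensor_data": reading,
                            "ground_truth": ground_truth})
        return dataset

    # One virtual car in the simulated 3D environment, sensed from two positions.
    objects = [{"center": (10.0, 0.0, 0.0), "size": (4.5, 1.8, 1.5)}]
    data = simulate_dataset([(0.0, 0.0, 1.0), (2.0, 0.0, 1.0)], objects)
    ```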
  • Patent number: 10508485
    Abstract: Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the intention of the user to access the interior of the vehicle. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: December 17, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Alexandru Mihai Gurghian
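    A minimal Python sketch of the decision flow in the abstract: the access point opens only when the remote-controller signal is present and the audio/video-derived features indicate an intention to enter. The feature names and threshold are placeholders, not the patented method.
    ```python
    # Hypothetical sketch of the access-point decision; the intent estimate is a
    # trivial placeholder for whatever audio/video analysis would supply.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        remote_signal_ok: bool      # wireless signal from the user's remote controller
        approach_speed: float       # m/s toward the vehicle, from audio/video analysis
        gaze_on_vehicle: bool       # from video analysis

    def user_intends_entry(obs: Observation) -> bool:
        """Placeholder intent estimate from audio/video-derived features."""
        return obs.approach_speed > 0.3 and obs.gaze_on_vehicle

    def maybe_open_access_point(obs: Observation) -> str:
        if obs.remote_signal_ok and user_intends_entry(obs):
            return "open_liftgate"      # command sent to the body controller
        return "stay_closed"

    print(maybe_open_access_point(Observation(True, 0.8, True)))   # open_liftgate
    print(maybe_open_access_point(Observation(True, 0.1, False)))  # stay_closed
    ```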
  • Patent number: 10453256
    Abstract: A method and an apparatus pertaining to generating training data. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling the motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment that is similar to or substantially matches the virtual driving environment. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: October 22, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
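    A minimal Python sketch of recording virtual sensor frames while a vehicle-dynamic model carries the sensor through a simulated environment. The kinematic-bicycle step and the distance-only "sensor" are simplifications for illustration, not the patented models.
    ```python
    # Hypothetical sketch: a vehicle-dynamic model dictates sensor motion while
    # virtual sensor data about the environment is recorded at each step.
    import math

    def vehicle_dynamic_model(state, steer, dt=0.1, wheelbase=2.8, speed=10.0):
        """Kinematic-bicycle step: next (x, y, heading) of the vehicle, and hence
        of the virtual sensor it carries."""
        x, y, theta = state
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        theta += speed / wheelbase * math.tan(steer) * dt
        return (x, y, theta)

    def virtual_sensor(state, lane_markings):
        """Toy sensor: distance from the sensor to each lane-marking point."""
        x, y, _ = state
        return [math.hypot(mx - x, my - y) for mx, my in lane_markings]

    lane_markings = [(i * 5.0, 1.75) for i in range(10)]   # virtual lane markings
    state, recording = (0.0, 0.0, 0.0), []
    for _ in range(20):
        state = vehicle_dynamic_model(state, steer=0.0)
        recording.append({"pose": state,
                          "sensor_data": virtual_sensor(state, lane_markings)})
    ```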
  • Patent number: 10394237
    Abstract: The present invention extends to methods, systems, and computer program products for perceiving roadway conditions from fused sensor data. Aspects of the invention use a combination of different types of cameras mounted to a vehicle to achieve visual perception for autonomous driving of the vehicle. Each camera in the combination of cameras generates sensor data by sensing at least part of the environment around the vehicle. The sensor data from each camera (and, when appropriate, from each other type of sensor) is fed into a central sensor perception chip, which uses a sensor fusion algorithm to fuse the sensor data into a single view of the environment around the vehicle. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: August 27, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Wei Xu, Fazal Urrahman Syed, Venkatapathi Raju Nallapa, Scott Vincent Myers
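    A minimal Python sketch of one way per-camera detections could be fused into a single surround view, standing in for the central sensor perception chip's fusion algorithm. The grid-based fusion rule and camera names are assumptions, not the patented algorithm.
    ```python
    # Hypothetical sketch: fuse per-camera detections into one occupancy-style
    # view; overlapping reports from different cameras reinforce the same cell.
    from collections import defaultdict

    def fuse_camera_detections(per_camera_detections, cell_size=1.0):
        """Each camera reports (x, y, confidence) detections in the vehicle frame;
        confidences accumulate per grid cell, capped at 1.0."""
        grid = defaultdict(float)
        for detections in per_camera_detections.values():
            for x, y, conf in detections:
                cell = (round(x / cell_size), round(y / cell_size))
                grid[cell] = min(1.0, grid[cell] + conf)
        return dict(grid)

    surround_view = fuse_camera_detections({
        "front_wide": [(12.0, 0.5, 0.6)],
        "front_tele": [(12.2, 0.4, 0.7)],   # same object seen by a second camera
        "rear":       [(-6.0, -1.0, 0.8)],
    })
    ```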
  • Publication number: 20190236341
    Abstract: The present invention extends to methods, systems, and computer program products for validating gesture recognition capabilities of automated systems. Aspects include a gesture recognition training system that is scalable, efficient, and repeatable, and that accounts for permutations of physical characteristics, clothing, types of gestures, environment, culture, weather, road conditions, etc. The gesture recognition training system includes sensors and algorithms used to generate training data sets that facilitate more accurate recognition of and reaction to human gestures. A training data set can be scaled by monitoring and recording gestures performed both by a humanoid robot and by animated humans in a simulation environment. From a scaled training data set, autonomous devices can be trained to recognize and react to a diverse set of human gestures in varying conditions with substantially improved capabilities.
    Type: Application
    Filed: February 1, 2018
    Publication date: August 1, 2019
    Inventors: Venkatapathi Raju Nallapa, Anjali Krishnamachar, Gautham Sholingar
  • Publication number: 20190096128
    Abstract: A method and an apparatus pertaining to generating training data. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling the motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment that is similar to or substantially matches the virtual driving environment.
    Type: Application
    Filed: November 29, 2018
    Publication date: March 28, 2019
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
  • Patent number: 10229231
    Abstract: A method for generating training data. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual road surface defining a plurality of virtual anomalies that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual road surface as dictated by a vehicle-motion model modeling the motion of a vehicle driving on the virtual road surface while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual road surface may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the road surface in the real world. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 11, 2015
    Date of Patent: March 12, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Venkatapathi Raju Nallapa, Martin Saeger, Ashley Elizabeth Micks, Douglas Blue
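    A small Python sketch of populating a virtual road surface with anomalies and labeling which ones a virtual sensor would sense at a given vehicle position. The anomaly types and the range test are illustrative only.
    ```python
    # Hypothetical sketch: place virtual road-surface anomalies, then label which
    # of them fall within a virtual sensor's range at a given position.
    import random

    def place_virtual_anomalies(road_length_m, count, seed=0):
        """Scatter labeled anomalies along the virtual road surface."""
        rng = random.Random(seed)
        kinds = ["pothole", "crack", "speed_bump"]
        return [{"s": rng.uniform(0.0, road_length_m),   # distance along the road
                 "kind": rng.choice(kinds),
                 "depth_m": rng.uniform(0.02, 0.15)}
                for _ in range(count)]

    def sensed_anomalies(vehicle_s, anomalies, sensor_range_m=30.0):
        """Anomalies ahead of the vehicle and within the sensor's range."""
        return [a for a in anomalies if 0.0 <= a["s"] - vehicle_s <= sensor_range_m]

    anomalies = place_virtual_anomalies(road_length_m=500.0, count=25)
    frame_labels = sensed_anomalies(vehicle_s=120.0, anomalies=anomalies)
    ```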
  • Publication number: 20190032390
    Abstract: Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the intention of the user to access the interior of the vehicle.
    Type: Application
    Filed: September 21, 2018
    Publication date: January 31, 2019
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Alexandru Mihai Gurghian
  • Patent number: 10176634
    Abstract: A method and an apparatus pertaining to generating training data. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling the motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment that is similar to or substantially matches the virtual driving environment.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: January 8, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
  • Patent number: 10150412
    Abstract: A driving assistance system includes a drive detection component, a presence component, and a notification component. The drive detection component is configured to determine that a vehicle or driver is exiting or preparing to exit a parking location. The presence component is configured to determine, from a drive history database, whether a parking barrier is present in front of or behind the parking location. The notification component is configured to provide, to a human driver or an automated driving system of the vehicle, an indication that the parking barrier is present. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: December 11, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Venkatapathi Raju Nallapa, Scott Vincent Myers, Harpreetsingh Banvait, Ashley Elizabeth Micks
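    A minimal Python sketch of the three components named in the abstract: a departure check, a lookup in a drive-history store keyed by parking location, and a notification. The dictionary "database", the location key, and the thresholds are placeholders.
    ```python
    # Hypothetical sketch: detect a departure, consult drive history for a barrier
    # at this parking spot, and produce a notification if one was recorded.
    def parking_key(lat, lon, precision=4):
        """Coarse key for looking up a parking location in drive history."""
        return (round(lat, precision), round(lon, precision))

    DRIVE_HISTORY = {                       # barrier observations from past drives
        parking_key(42.3001, -83.2103): {"barrier": "front_curb_stop"},
    }

    def is_exiting(gear, speed_mps):
        """Drive detection: reverse gear at low speed suggests leaving the spot."""
        return gear == "R" and speed_mps < 1.0

    def barrier_notification(lat, lon, gear, speed_mps):
        if not is_exiting(gear, speed_mps):
            return None
        record = DRIVE_HISTORY.get(parking_key(lat, lon))
        return f"Caution: {record['barrier']} recorded at this spot" if record else None

    print(barrier_notification(42.3001, -83.2103, gear="R", speed_mps=0.4))
    ```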
  • Patent number: 10151136
    Abstract: Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the intention of the user to access the interior of the vehicle.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: December 11, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Alexandru Mihai Gurghian
  • Publication number: 20180341273
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage medium, a radar system, a location component, and a drive controller. The storage medium stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
    Type: Application
    Filed: August 1, 2018
    Publication date: November 29, 2018
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
  • Patent number: 10082797
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage medium, a radar system, a location component, and a drive controller. The storage medium stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: September 25, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
  • Patent number: 10055652
    Abstract: Systems, methods, and devices for pedestrian detection are disclosed herein. A method includes receiving one or more images from a rear-facing camera on a vehicle. The method further includes determining that a pedestrian is present in the one or more images, predicting future motion of the pedestrian, and notifying a driver-assistance or automated driving system when a conflict exists between forward motion of the vehicle and the predicted future motion of the pedestrian. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: August 21, 2018
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Madeline Jane Goh
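    A minimal Python sketch of the conflict check described above: a constant-velocity prediction of the pedestrian's motion is tested against the vehicle's forward corridor. Detection from the rear-facing camera itself is out of scope here, and the geometry and thresholds are illustrative assumptions.
    ```python
    # Hypothetical sketch: predict the pedestrian's path and flag a conflict if
    # any predicted position falls inside the vehicle's forward travel corridor.
    def predict_positions(x, y, vx, vy, horizon_s=3.0, dt=0.5):
        """Constant-velocity prediction of pedestrian positions (vehicle frame,
        x forward, y left), sampled every dt seconds."""
        steps = int(horizon_s / dt)
        return [(x + vx * t * dt, y + vy * t * dt) for t in range(1, steps + 1)]

    def conflicts_with_forward_motion(predicted, corridor_half_width=1.5, max_range=15.0):
        """True if any predicted position lies inside the forward corridor."""
        return any(0.0 < px <= max_range and abs(py) <= corridor_half_width
                   for px, py in predicted)

    # Pedestrian seen slightly behind and to the left, walking forward and inward.
    path = predict_positions(x=-2.0, y=2.5, vx=1.5, vy=-0.6)
    if conflicts_with_forward_motion(path):
        print("Notify driver-assistance system: predicted pedestrian conflict")
    ```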
  • Publication number: 20180067487
    Abstract: The present invention extends to methods, systems, and computer program products for perceiving roadway conditions from fused sensor data. Aspects of the invention use a combination of different types of cameras mounted to a vehicle to achieve visual perception for autonomous driving of the vehicle. Each camera in the combination of cameras generates sensor data by sensing at least part of the environment around the vehicle. The sensor data from each camera (and, when appropriate, from each other type of sensor) is fed into a central sensor perception chip, which uses a sensor fusion algorithm to fuse the sensor data into a single view of the environment around the vehicle.
    Type: Application
    Filed: September 8, 2016
    Publication date: March 8, 2018
    Inventors: Wei Xu, Fazal Urrahman Syed, Venkatapathi Raju Nallapa, Scott Vincent Myers
  • Publication number: 20180002972
    Abstract: Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the intention of the user to access the interior of the vehicle.
    Type: Application
    Filed: September 18, 2017
    Publication date: January 4, 2018
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Alexandru Mihai Gurghian
  • Patent number: 9816308
    Abstract: Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the intention of the user to access the interior of the vehicle.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: November 14, 2017
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Venkatapathi Raju Nallapa, Alexandru Mihai Gurghian