Patents by Inventor Ashley Elizabeth Micks
Ashley Elizabeth Micks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11299169
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine six degree of freedom (DoF) data for a first object in a first video image and generate a synthetic video image corresponding to the first video image including a synthetic object and a synthetic object label based on the six DoF data. The instructions can include further instructions to train a generative adversarial network (GAN) based on a paired first video image and a synthetic video image to generate a modified synthetic image and train a deep neural network to locate the synthetic object in the modified synthetic video image based on the synthetic object. The instructions can include further instructions to download the trained deep neural network to a computing device in a vehicle.
Type: Grant
Filed: January 24, 2020
Date of Patent: April 12, 2022
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Ashley Elizabeth Micks
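The labeling step in the abstract above pairs a real frame with a synthetic object label derived from six-DoF pose data. A minimal standard-library sketch of that idea, assuming a simple pinhole camera model (the intrinsics, function names, and frame identifier below are illustrative, not from the patent):

```python
def project_object_center(pose, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project the translation part of a 6-DoF pose (x, y, z in camera
    coordinates, metres) to pixel coordinates with a pinhole model."""
    x, y, z, _roll, _pitch, _yaw = pose
    if z <= 0:
        raise ValueError("object must be in front of the camera (z > 0)")
    u = cx + fx * x / z
    v = cy + fy * y / z
    return u, v

def make_training_pair(real_frame_id, pose):
    """Pair a real frame with a synthetic label derived from the pose,
    standing in for rendering a synthetic object into the scene."""
    u, v = project_object_center(pose)
    return {"real_frame": real_frame_id,
            "synthetic_label": {"center_px": (u, v), "pose_6dof": pose}}

pair = make_training_pair("frame_0001", (2.0, 0.5, 10.0, 0.0, 0.0, 0.1))
```

A full pipeline along the lines of the abstract would additionally render the synthetic object into the image and refine it with a GAN; those stages are omitted here.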
-
Publication number: 20220026919
Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage media, a radar system, a location component, and a drive controller. The storage media stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
Type: Application
Filed: October 8, 2021
Publication date: January 27, 2022
Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
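The localization step described above can be illustrated, under strong simplifying assumptions, as matching measured radar ranges against ranges predicted from a landmark map and keeping the best-scoring candidate pose (the discrete candidate search and all values below are illustrative, not the patented method):

```python
import math

def predicted_ranges(pose, landmarks):
    """Ranges from a candidate (x, y) pose to known map landmarks."""
    px, py = pose
    return [math.hypot(lx - px, ly - py) for lx, ly in landmarks]

def localize(measured, landmarks, candidates):
    """Pick the candidate pose whose predicted ranges best match the
    measured radar ranges (sum of squared range errors)."""
    def cost(pose):
        return sum((m - p) ** 2
                   for m, p in zip(measured, predicted_ranges(pose, landmarks)))
    return min(candidates, key=cost)

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
measured = predicted_ranges(truth, landmarks)   # noise-free for the demo
candidates = [(0.0, 0.0), (3.0, 4.0), (5.0, 5.0), (2.0, 7.0)]
best = localize(measured, landmarks, candidates)
```

A production system would fuse the radar match with the "other navigation related data" the abstract mentions (e.g. odometry or GPS) rather than scoring a fixed candidate list.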
-
Patent number: 11210436
Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual, forward-looking sensor over a virtual road surface defining at least one virtual railroad crossing. During the traversing, the virtual sensor may be moved with respect to the virtual road surface as dictated by a vehicle-motion model modeling motion of a vehicle driving on the virtual road surface while carrying the virtual sensor. Virtual sensor data characterizing the virtual road surface may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the road surface in the real world.
Type: Grant
Filed: July 7, 2016
Date of Patent: December 28, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Scott Vincent Myers, Harpreetsingh Banvait, Sneha Kadetotad
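A toy version of the simulation loop described above, with a constant-speed stand-in for the vehicle-motion model and an ideal range sensor pointed at the virtual crossing (all parameters are illustrative):

```python
def simulate_traverse(crossing_pos, speed_mps=10.0, dt=0.1, steps=5):
    """Move a virtual forward-looking sensor along a straight road with a
    constant-speed motion model, recording range-to-crossing samples."""
    log = []
    x = 0.0                                     # sensor position (m)
    for _ in range(steps):
        reading = max(crossing_pos - x, 0.0)    # ideal range measurement
        log.append({"sensor_x": x, "range_to_crossing": reading})
        x += speed_mps * dt                     # advance the vehicle model
    return log

data = simulate_traverse(crossing_pos=50.0)
```

The recorded `(pose, reading)` pairs play the role of the virtual sensor data in the abstract; a real vehicle-motion model would include steering, suspension, and sensor mounting geometry.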
-
Patent number: 11169534
Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include a storage media, a radar system, a location component, and a drive controller. The storage media stores a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
Type: Grant
Filed: August 1, 2018
Date of Patent: November 9, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
-
Patent number: 11126877
Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein the region is determined based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
Type: Grant
Filed: August 12, 2019
Date of Patent: September 21, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
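The region-restriction idea above can be sketched as cropping the image to a bounding box derived from the detected window (nested lists stand in for an image here, and the `(top, left, bottom, right)` box format is an assumption):

```python
def driver_region(image, window_box):
    """Crop the image to the sub-region expected to contain the driver,
    derived from a detected window bounding box (top, left, bottom, right).
    Downstream body-language analysis then runs only on this crop."""
    top, left, bottom, right = window_box
    return [row[left:right] for row in image[top:bottom]]

# Toy 4x6 "image"; the window box covers rows 1..2 and columns 2..4.
image = [[r * 10 + c for c in range(6)] for r in range(4)]
crop = driver_region(image, (1, 2, 3, 5))
```

Restricting processing to the crop is what yields the efficiency the abstract implies: the expensive classifier never sees pixels outside the window region.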
-
Publication number: 20210229680
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine six degree of freedom (DoF) data for a first object in a first video image and generate a synthetic video image corresponding to the first video image including a synthetic object and a synthetic object label based on the six DoF data. The instructions can include further instructions to train a generative adversarial network (GAN) based on a paired first video image and a synthetic video image to generate a modified synthetic image and train a deep neural network to locate the synthetic object in the modified synthetic video image based on the synthetic object. The instructions can include further instructions to download the trained deep neural network to a computing device in a vehicle.
Type: Application
Filed: January 24, 2020
Publication date: July 29, 2021
Applicant: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Ashley Elizabeth Micks
-
Patent number: 10971014
Abstract: The disclosure relates generally to methods, systems, and apparatuses for automated or assisted driving, and more particularly to identification, localization, and navigation with respect to bollard receivers. A method for detecting bollard receivers includes receiving perception data from one or more perception sensors of a vehicle. The method includes determining, based on the perception data, a location of one or more bollard receivers in relation to a body of the vehicle. The method also includes providing an indication of the location of the one or more bollard receivers to a driver or to a component or system that makes driving maneuver decisions.
Type: Grant
Filed: March 5, 2019
Date of Patent: April 6, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Harpreetsingh Banvait, Scott Vincent Myers, Ashley Elizabeth Micks, Sneha Kadetotad
-
Patent number: 10937438
Abstract: Systems, methods, and devices for speech transformation and generating synthetic speech using deep generative models are disclosed. A method of the disclosure includes receiving input audio data comprising a plurality of iterations of a speech utterance from a plurality of speakers. The method includes generating an input spectrogram based on the input audio data and transmitting the input spectrogram to a neural network configured to generate an output spectrogram. The method includes receiving the output spectrogram from the neural network and, based on the output spectrogram, generating synthetic audio data comprising the speech utterance.
Type: Grant
Filed: March 29, 2018
Date of Patent: March 2, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Lisa Scaria, Francois Charette, Ashley Elizabeth Micks, Ryan Burke
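The input-spectrogram step can be illustrated with a naive short-time DFT built only on the standard library (the frame length, hop size, and direct DFT are illustrative; a real system would use an optimized FFT with windowing, and the transformation itself is the neural network the abstract describes):

```python
import cmath

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    return [x[i:i + frame_len]
            for i in range(0, len(x) - frame_len + 1, hop)]

def dft_magnitudes(frame):
    """Magnitudes of the non-negative-frequency bins of a naive DFT."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def spectrogram(x, frame_len=8, hop=4):
    """Input spectrogram: one magnitude-spectrum row per frame."""
    return [dft_magnitudes(f) for f in frame_signal(x, frame_len, hop)]

# A constant signal concentrates all energy in the DC (k = 0) bin.
spec = spectrogram([1.0] * 16)
```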
-
Patent number: 10832478
Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
Type: Grant
Filed: November 14, 2019
Date of Patent: November 10, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
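The depth-map annotation described above amounts to recording, for each virtual scene element, its distance from the virtual sensor. A minimal sketch (representing the scene as a list of 3-D points is an assumption; a renderer would produce a dense per-pixel depth image):

```python
import math

def depth_map(sensor_pos, scene_points):
    """Annotate recorded data with one depth value per scene point:
    the Euclidean distance from the virtual sensor to that point."""
    return [math.dist(sensor_pos, p) for p in scene_points]

points = [(3.0, 4.0, 0.0), (0.0, 0.0, 5.0)]
depths = depth_map((0.0, 0.0, 0.0), points)
```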
-
Patent number: 10817736
Abstract: A vehicle includes one or more laterally mounted microphones and a controller programmed to detect a signature of an unoccupied position adjacent the vehicle in outputs of the microphones. The signature may be identified using a machine learning algorithm. In response to detecting an unoccupied position, the controller may invoke autonomous parking, store the location of the unoccupied position for later use, and/or report the unoccupied position to a server, which then informs other vehicles of the available parking. The unoccupied position may be verified by evaluating whether map data indicates legal parking at that location. The unoccupied position may also be confirmed with one or more other sensors, such as a camera, LIDAR, RADAR, SONAR, or other type of sensor.
Type: Grant
Filed: October 19, 2016
Date of Patent: October 27, 2020
Assignee: Ford Motor Company
Inventors: Harpreetsingh Banvait, Ashley Elizabeth Micks, Jinesh J. Jain, Scott Vincent Myers
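A crude sketch of the detect-then-verify flow described above, with a simple energy threshold standing in for the machine-learned acoustic signature (the profile representation, threshold, and map format are all illustrative assumptions):

```python
def unoccupied_spots(energy_profile, threshold=0.3):
    """Indices where laterally reflected acoustic energy drops below a
    threshold -- a crude stand-in for a learned signature detector."""
    return [i for i, e in enumerate(energy_profile) if e < threshold]

def verify_with_map(spots, legal_parking):
    """Keep only candidate spots where map data says parking is legal."""
    return [i for i in spots if legal_parking.get(i, False)]

profile = [0.9, 0.8, 0.1, 0.2, 0.85]     # low energy suggests a gap
legal = {2: True, 3: False}              # map says spot 3 is a no-parking zone
candidates = unoccupied_spots(profile)
confirmed = verify_with_map(candidates, legal)
```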
-
Patent number: 10800455
Abstract: Systems, methods, and devices are disclosed for detecting a vehicle's turn signal status for collision avoidance during lane-switching maneuvers or otherwise. A method includes detecting, at a first vehicle, a presence of a second vehicle in an adjacent lane. The method includes identifying, in an image of the second vehicle, a sub-portion containing a turn signal indicator of the second vehicle. The method includes processing the sub-portion of the image to determine a state of the turn signal indicator. The method also includes notifying a driver or performing a driving maneuver, at the first vehicle, based on the state of the turn signal indicator.
Type: Grant
Filed: December 17, 2015
Date of Patent: October 13, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff, Sneha Kadetotad
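The per-frame analysis of the turn-signal sub-image can be sketched as blink detection on a brightness time series (the threshold and toggle-count heuristic are assumptions, not the patented classifier):

```python
def signal_state(brightness_series, on_level=0.5, min_toggles=2):
    """Classify a turn signal from per-frame mean brightness of the lamp
    sub-region: 'blinking' if the on/off state toggles enough times
    across the frame sequence, otherwise 'off'."""
    states = [b > on_level for b in brightness_series]
    toggles = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return "blinking" if toggles >= min_toggles else "off"

state = signal_state([0.9, 0.1, 0.9, 0.1, 0.9])   # alternating lamp
steady = signal_state([0.2, 0.2, 0.2, 0.2, 0.2])  # lamp stays dark
```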
-
Patent number: 10762358
Abstract: A method for determining lane information includes receiving perception data from at least two sensors, the at least two sensors including a rear facing camera of a vehicle. The method includes determining, based on the perception data, a number of lanes on a roadway within a field of view captured by the perception data using a neural network. The method includes providing an indication of the number of lanes to an automated driving system or driving assistance system.
Type: Grant
Filed: July 20, 2016
Date of Patent: September 1, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Scott Vincent Myers, Alexandru Mihai Gurghian, Ashley Elizabeth Micks, Alexandro Walsh
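A toy illustration of lane counting from detected lane boundaries, with a vote across sensors standing in for the neural network the abstract names (all positions and the fusion rule are illustrative):

```python
def lanes_from_boundaries(boundary_positions):
    """Number of lanes implied by detected lane-boundary positions:
    n distinct boundaries delimit n - 1 lanes."""
    distinct = sorted(set(boundary_positions))
    return max(len(distinct) - 1, 0)

def fused_lane_count(per_sensor_boundaries):
    """Combine per-sensor estimates by majority vote
    (ties resolved toward the smaller count)."""
    counts = [lanes_from_boundaries(b) for b in per_sensor_boundaries]
    return max(sorted(set(counts)), key=counts.count)

rear_cam = [-3.5, 0.0, 3.5, 7.0]    # four boundaries -> three lanes
front_cam = [-3.4, 0.1, 3.6, 7.1]
n = fused_lane_count([rear_cam, front_cam])
```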
-
Patent number: 10635912
Abstract: The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation, and more particularly to generation of virtual sensor data for training and testing models or algorithms to detect objects or obstacles. A method for generating virtual sensor data includes simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more virtual objects. The method includes generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a dimension or parameter of the one or more virtual objects. The method includes storing and associating the virtual sensor data and the virtual ground truth using one or more processors.
Type: Grant
Filed: June 19, 2017
Date of Patent: April 28, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Harpreetsingh Banvait, Scott Vincent Myers
-
Publication number: 20200082622
Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
Type: Application
Filed: November 14, 2019
Publication date: March 12, 2020
Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
-
Patent number: 10521677
Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual camera through a virtual driving environment comprising at least one virtual precipitation condition and at least one virtual no-precipitation condition. During the traversing, the virtual camera may be moved with respect to the virtual driving environment as dictated by a vehicle-motion model modeling motion of a vehicle driving through the virtual driving environment while carrying the virtual camera. Virtual sensor data characterizing the virtual driving environment in both virtual precipitation and virtual no-precipitation conditions may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the virtual driving environment in the real world.
Type: Grant
Filed: July 14, 2016
Date of Patent: December 31, 2019
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Vidya Nariyambut Murali
-
Patent number: 10510187
Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
Type: Grant
Filed: August 27, 2018
Date of Patent: December 17, 2019
Assignee: Ford Global Technologies, LLC
Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
-
Publication number: 20190362168
Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein the region is determined based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
Type: Application
Filed: August 12, 2019
Publication date: November 28, 2019
Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
-
Patent number: 10474964
Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model.
Type: Grant
Filed: January 26, 2016
Date of Patent: November 12, 2019
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Harpreetsingh Banvait, Kyu Jeong Han
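The scenario-generation step above can be sketched as producing labeled samples that pair simulated sensor outputs with the ground-truth position of a lane-splitting vehicle (the one-dimensional geometry and noise model are illustrative assumptions, not the patented simulation):

```python
import random

def make_scenario_sample(rng, lane_width=3.5):
    """One labeled training sample: simulated side-sensor ranges plus
    the ground-truth lateral offset of a lane-splitting motorcycle."""
    # A lane-splitting vehicle rides near the boundary between two lanes.
    true_offset = lane_width / 2 + rng.uniform(-0.3, 0.3)
    noise = rng.gauss(0.0, 0.05)
    sensors = {"left_range": true_offset + noise,
               "right_range": lane_width - true_offset - noise}
    return {"sensors": sensors, "label": true_offset}

rng = random.Random(0)                 # fixed seed for reproducibility
dataset = [make_scenario_sample(rng) for _ in range(100)]
```

Samples like these are what the abstract feeds to the machine learning algorithm; the trained model is then run on real sensor outputs in the vehicle controller.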
-
Patent number: 10453256
Abstract: A method and an apparatus for generating training data are disclosed. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment that is similar to or substantially matches the virtual driving environment.
Type: Grant
Filed: November 29, 2018
Date of Patent: October 22, 2019
Assignee: Ford Global Technologies, LLC
Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
-
Publication number: 20190304480
Abstract: Systems, methods, and devices for speech transformation and generating synthetic speech using deep generative models are disclosed. A method of the disclosure includes receiving input audio data comprising a plurality of iterations of a speech utterance from a plurality of speakers. The method includes generating an input spectrogram based on the input audio data and transmitting the input spectrogram to a neural network configured to generate an output spectrogram. The method includes receiving the output spectrogram from the neural network and, based on the output spectrogram, generating synthetic audio data comprising the speech utterance.
Type: Application
Filed: March 29, 2018
Publication date: October 3, 2019
Inventors: Praveen Narayanan, Lisa Scaria, Francois Charette, Ashley Elizabeth Micks, Ryan Burke