Patents by Inventor Ashley Elizabeth Micks

Ashley Elizabeth Micks has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11299169
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine six degree of freedom (DoF) data for a first object in a first video image and generate a synthetic video image corresponding to the first video image including a synthetic object and a synthetic object label based on the six DoF data. The instructions can include further instructions to train a generative adversarial network (GAN) based on a paired first video image and a synthetic video image to generate a modified synthetic video image and to train a deep neural network to locate the synthetic object in the modified synthetic video image based on the synthetic object label. The instructions can include further instructions to download the trained deep neural network to a computing device in a vehicle.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: April 12, 2022
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Ashley Elizabeth Micks
  • Publication number: 20220026919
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include storage media, a radar system, a location component, and a drive controller. The storage media store a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
    Type: Application
    Filed: October 8, 2021
    Publication date: January 27, 2022
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
  • Patent number: 11210436
    Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual, forward-looking sensor over a virtual road surface defining at least one virtual railroad crossing. During the traversing, the virtual sensor may be moved with respect to the virtual road surface as dictated by a vehicle-motion model modeling motion of a vehicle driving on the virtual road surface while carrying the virtual sensor. Virtual sensor data characterizing the virtual road surface may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the road surface in the real world.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: December 28, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Scott Vincent Myers, Harpreetsingh Banvait, Sneha Kadetotad
  • Patent number: 11169534
    Abstract: The disclosure relates to methods, systems, and apparatuses for autonomous driving vehicles or driving assistance systems, and more particularly to vehicle radar perception and location. The vehicle driving system disclosed may include storage media, a radar system, a location component, and a drive controller. The storage media store a map of roadways. The radar system is configured to generate perception information from a region near the vehicle. The location component is configured to determine a location of the vehicle on the map based on the radar perception information and other navigation-related data. The drive controller is configured to control driving of the vehicle based on the map and the determined location.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: November 9, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Vidya Nariyambut Murali, Scott Vincent Myers
  • Patent number: 11126877
    Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein the region is determined based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: September 21, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
  • Publication number: 20210229680
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine six degree of freedom (DoF) data for a first object in a first video image and generate a synthetic video image corresponding to the first video image including a synthetic object and a synthetic object label based on the six DoF data. The instructions can include further instructions to train a generative adversarial network (GAN) based on a paired first video image and a synthetic video image to generate a modified synthetic video image and to train a deep neural network to locate the synthetic object in the modified synthetic video image based on the synthetic object label. The instructions can include further instructions to download the trained deep neural network to a computing device in a vehicle.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 29, 2021
    Applicant: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Ashley Elizabeth Micks
  • Patent number: 10971014
    Abstract: The disclosure relates generally to methods, systems, and apparatuses for automated or assisted driving, and more particularly to identification, localization, and navigation with respect to bollard receivers. A method for detecting bollard receivers includes receiving perception data from one or more perception sensors of a vehicle. The method includes determining, based on the perception data, a location of one or more bollard receivers in relation to a body of the vehicle. The method also includes providing an indication of the location of the one or more bollard receivers to a driver, to a component or system that makes driving-maneuver decisions, or to both.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: April 6, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Harpreetsingh Banvait, Scott Vincent Myers, Ashley Elizabeth Micks, Sneha Kadetotad
  • Patent number: 10937438
    Abstract: Systems, methods, and devices for speech transformation and generating synthetic speech using deep generative models are disclosed. A method of the disclosure includes receiving input audio data comprising a plurality of iterations of a speech utterance from a plurality of speakers. The method includes generating an input spectrogram based on the input audio data and transmitting the input spectrogram to a neural network configured to generate an output spectrogram. The method includes receiving the output spectrogram from the neural network and, based on the output spectrogram, generating synthetic audio data comprising the speech utterance.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: March 2, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Lisa Scaria, Francois Charette, Ashley Elizabeth Micks, Ryan Burke
  • Patent number: 10832478
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: November 10, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
  • Patent number: 10817736
    Abstract: A vehicle includes one or more laterally mounted microphones and a controller programmed to detect a signature of an unoccupied position adjacent the vehicle in outputs of the microphones. The signature may be identified using a machine learning algorithm. In response to detecting an unoccupied position, the controller may invoke autonomous parking, store the location of the unoccupied position for later use, and/or report the unoccupied position to a server, which then informs other vehicles of the available parking. The unoccupied position may be verified by evaluating whether map data indicates legal parking at that location. The unoccupied position may also be confirmed with one or more other sensors, such as a camera, LIDAR, RADAR, SONAR, or other type of sensor.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: October 27, 2020
    Assignee: Ford Motor Company
    Inventors: Harpreetsingh Banvait, Ashley Elizabeth Micks, Jinesh J. Jain, Scott Vincent Myers
  • Patent number: 10800455
    Abstract: Systems, methods, and devices for detecting a vehicle's turn signal status for collision avoidance during lane-switching maneuvers or otherwise. A method includes detecting, at a first vehicle, a presence of a second vehicle in an adjacent lane. The method includes identifying, in an image of the second vehicle, a sub-portion containing a turn signal indicator of the second vehicle. The method includes processing the sub-portion of the image to determine a state of the turn signal indicator. The method also includes notifying a driver or performing a driving maneuver, at the first vehicle, based on the state of the turn signal indicator.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: October 13, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff, Sneha Kadetotad
  • Patent number: 10762358
    Abstract: A method for determining lane information includes receiving perception data from at least two sensors, the at least two sensors including a rear facing camera of a vehicle. The method includes determining, based on the perception data, a number of lanes on a roadway within a field of view captured by the perception data using a neural network. The method includes providing an indication of the number of lanes to an automated driving system or driving assistance system.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: September 1, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Scott Vincent Myers, Alexandru Mihai Gurghian, Ashley Elizabeth Micks, Alexandro Walsh
  • Patent number: 10635912
    Abstract: The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation and more particularly relates to generation of virtual sensor data for training and testing models or algorithms to detect objects or obstacles. A method for generating virtual sensor data includes simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more virtual objects. The method includes generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a dimension or parameter of the one or more virtual objects. The method includes storing and associating the virtual sensor data and the virtual ground truth using one or more processors.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 28, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Harpreetsingh Banvait, Scott Vincent Myers
  • Publication number: 20200082622
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Application
    Filed: November 14, 2019
    Publication date: March 12, 2020
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
  • Patent number: 10521677
    Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual camera through a virtual driving environment comprising at least one virtual precipitation condition and at least one virtual no-precipitation condition. During the traversing, the virtual camera may be moved with respect to the virtual driving environment as dictated by a vehicle-motion model modeling motion of a vehicle driving through the virtual driving environment while carrying the virtual camera. Virtual sensor data characterizing the virtual driving environment in both virtual precipitation and virtual no-precipitation conditions may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the virtual driving environment in the real world.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: December 31, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Vidya Nariyambut Murali
  • Patent number: 10510187
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: December 17, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
  • Publication number: 20190362168
    Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein the region is determined based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
    Type: Application
    Filed: August 12, 2019
    Publication date: November 28, 2019
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
  • Patent number: 10474964
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: November 12, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Harpreetsingh Banvait, Kyu Jeong Han
  • Patent number: 10453256
    Abstract: A method and an apparatus for generating training data are disclosed. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment similar to or substantially matching the virtual driving environment.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: October 22, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
  • Publication number: 20190304480
    Abstract: Systems, methods, and devices for speech transformation and generating synthetic speech using deep generative models are disclosed. A method of the disclosure includes receiving input audio data comprising a plurality of iterations of a speech utterance from a plurality of speakers. The method includes generating an input spectrogram based on the input audio data and transmitting the input spectrogram to a neural network configured to generate an output spectrogram. The method includes receiving the output spectrogram from the neural network and, based on the output spectrogram, generating synthetic audio data comprising the speech utterance.
    Type: Application
    Filed: March 29, 2018
    Publication date: October 3, 2019
    Inventors: Praveen Narayanan, Lisa Scaria, Francois Charette, Ashley Elizabeth Micks, Ryan Burke
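
Several of the entries above (patent numbers 11210436, 10832478, 10635912, 10521677, and 10453256) describe the same core pattern: traverse a virtual sensor through a simulated environment under a vehicle-motion model and record paired sensor data and ground truth. The following is a minimal sketch of that loop, assuming a toy noiseless range sensor in place of a rendered camera or LIDAR; every name and parameter here is hypothetical and not taken from the patents.

```python
import math
from dataclasses import dataclass

@dataclass
class Sample:
    pose: tuple          # (x, y, heading) of the virtual sensor
    measurement: float   # simulated range reading, in metres
    ground_truth: float  # exact range known from the simulation

def simulate(obstacle=(50.0, 5.0), speed=10.0, dt=0.5, steps=10):
    """Traverse a virtual sensor along a straight virtual road and
    record paired sensor data and ground truth (toy stand-in for the
    virtual-sensor-data pipelines the patents describe)."""
    samples = []
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Vehicle-motion model: constant speed along the heading.
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        truth = math.hypot(obstacle[0] - x, obstacle[1] - y)
        # The "virtual sensor" here is a noiseless range-finder; a real
        # pipeline would render full camera or LIDAR frames instead.
        samples.append(Sample((x, y, heading), truth, truth))
    return samples

data = simulate()
```

Because the simulation itself produced the measurement, exact ground truth comes for free at every pose, which is the point of these methods: training and testing data with perfect annotation, without collecting real-world drives.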
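
Patent numbers 10800455 and 11126877 above share another pattern: restrict processing to a sub-portion of the image (the turn-signal indicator, or the driver visible through a window) before classifying it. The following is a minimal sketch of that region-of-interest step, with a mean-intensity threshold standing in for the learned classifier the abstracts imply; the function name and box format are hypothetical.

```python
import numpy as np

def turn_signal_state(image, box, threshold=0.5):
    """Crop the sub-portion of the image expected to contain the turn
    signal indicator and classify it by mean intensity (a toy stand-in
    for a trained classifier)."""
    x0, y0, x1, y1 = box
    roi = image[y0:y1, x0:x1]
    # Only the region of interest is processed, as in the abstracts.
    return "on" if roi.mean() > threshold else "off"

frame = np.zeros((100, 200))
frame[40:50, 150:170] = 1.0        # bright patch where the blinker is lit
state = turn_signal_state(frame, (150, 40, 170, 50))
```

Cropping first keeps the downstream classifier from processing the whole frame, which both cuts compute and removes distractors outside the indicator region.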