Patents by Inventor Ashley Elizabeth Micks

Ashley Elizabeth Micks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10832478
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: November 10, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
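The depth-map annotation this abstract describes can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not taken from the patent: a pinhole camera model, a scene reduced to a single flat wall, and made-up function and parameter names. The idea is simply that, for each pixel ray from the virtual sensor, the simulator knows the exact distance to the surface it hits, so the recorded image can be annotated with per-pixel depth.

```python
import numpy as np

def depth_map_for_plane(sensor_pos, plane_z, width, height, fov=1.0):
    """Toy depth map: distance from a virtual sensor at sensor_pos to a
    flat wall at z = plane_z, sampled over a width x height pixel grid.
    (Illustrative only -- a real renderer would ray-cast the full scene.)"""
    xs = np.linspace(-fov, fov, width)
    ys = np.linspace(-fov, fov, height)
    u, v = np.meshgrid(xs, ys)
    # Ray directions through each pixel of a pinhole camera at sensor_pos.
    dirs = np.stack([u, v, np.ones_like(u)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Distance along each ray to the plane z = plane_z: this is the
    # exact ground-truth depth the simulator can attach to each pixel.
    t = (plane_z - sensor_pos[2]) / dirs[..., 2]
    return t

depth = depth_map_for_plane(np.array([0.0, 0.0, 0.0]), plane_z=10.0,
                            width=4, height=4)
```

Because the scene geometry is known analytically, the depth annotation is exact, which is the advantage of virtual over real sensor data for training detection algorithms.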
  • Patent number: 10817736
    Abstract: A vehicle includes one or more laterally mounted microphones and a controller programmed to detect a signature of an unoccupied position adjacent the vehicle in outputs of the microphones. The signature may be identified using a machine learning algorithm. In response to detecting an unoccupied position, the controller may invoke autonomous parking, store the location of the unoccupied position for later use, and/or report the unoccupied position to a server, which then informs other vehicles of the available parking. The unoccupied position may be verified by evaluating whether map data indicates legal parking at that location. The unoccupied position may also be confirmed with one or more other sensors, such as a camera, LIDAR, RADAR, SONAR, or other type of sensor.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: October 27, 2020
    Assignee: Ford Motor Company
    Inventors: Harpreetsingh Banvait, Ashley Elizabeth Micks, Jinesh J. Jain, Scott Vincent Myers
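The verification chain this abstract describes (acoustic detection, then a map-data legality check, then confirmation by a second sensor) can be sketched as a simple decision function. The score, threshold, and all names below are made-up illustration values; the patent does not prescribe a specific scoring scheme.

```python
def confirm_parking_spot(mic_signature_score, map_says_legal, sensor_confirms,
                         score_thresh=0.8):
    """Combine the microphone-signature detection with the two verification
    steps described: map data indicating legal parking at the location, and
    confirmation by another sensor (camera, LIDAR, RADAR, SONAR, etc.).
    (Hypothetical heuristic; thresholds are illustrative.)"""
    detected = mic_signature_score >= score_thresh
    return detected and map_says_legal and sensor_confirms

# A strong acoustic signature that both checks corroborate is accepted.
ok = confirm_parking_spot(0.92, map_says_legal=True, sensor_confirms=True)
```

Only a spot that passes all three checks would be stored, reported to the server, or used to invoke autonomous parking.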
  • Patent number: 10800455
    Abstract: Systems, methods, and devices for detecting a vehicle's turn signal status for collision avoidance during lane-switching maneuvers or otherwise. A method includes detecting, at a first vehicle, a presence of a second vehicle in an adjacent lane. The method includes identifying, in an image of the second vehicle, a sub-portion containing a turn signal indicator of the second vehicle. The method includes processing the sub-portion of the image to determine a state of the turn signal indicator. The method also includes notifying a driver or performing a driving maneuver, at the first vehicle, based on the state of the turn signal indicator.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: October 13, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff, Sneha Kadetotad
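Determining the state of the turn-signal indicator from the image sub-portion could, for example, track the brightness of that cropped region across consecutive frames and look for the on/off alternation of a blinking lamp. This is a hypothetical heuristic for illustration only; the patent does not fix a particular algorithm, and the threshold values are invented.

```python
def turn_signal_state(intensities, threshold=0.5, min_blinks=2):
    """Classify a turn-signal indicator from the mean brightness of the
    cropped image sub-portion over consecutive frames.  A blinking lamp
    crosses the brightness threshold repeatedly; a dark (or steadily lit)
    region does not.  (Illustrative heuristic, not the patented method.)"""
    on = [x > threshold for x in intensities]
    # Count on/off transitions between successive frames.
    transitions = sum(a != b for a, b in zip(on, on[1:]))
    return "blinking" if transitions >= min_blinks else "off"

# A bright/dark alternating sequence reads as an active indicator.
state = turn_signal_state([0.9, 0.1, 0.8, 0.2, 0.9, 0.1])
```

The resulting state would then feed the final step of the claimed method: notifying the driver or performing a driving maneuver.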
  • Patent number: 10762358
    Abstract: A method for determining lane information includes receiving perception data from at least two sensors, the at least two sensors including a rear facing camera of a vehicle. The method includes determining, based on the perception data, a number of lanes on a roadway within a field of view captured by the perception data using a neural network. The method includes providing an indication of the number of lanes to an automated driving system or driving assistance system.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: September 1, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Alexandru Mihai Gurghian, Ashley Elizabeth Micks, Alexandro Walsh
  • Patent number: 10635912
    Abstract: The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation and more particularly relates to generation of virtual sensor data for training and testing models or algorithms to detect objects or obstacles. A method for generating virtual sensor data includes simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more virtual objects. The method includes generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a dimension or parameter of the one or more virtual objects. The method includes storing and associating the virtual sensor data and the virtual ground truth using one or more processors.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 28, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Harpreetsingh Banvait, Scott Vincent Myers
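The final step of this abstract, storing and associating the virtual sensor data with the virtual ground truth for each sensor position, amounts to building a labeled dataset. A minimal sketch under assumed names (the record layout and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class VirtualSample:
    """One recorded sample: simulated sensor output associated with the
    ground truth known exactly from the simulation (names illustrative)."""
    sensor_pose: tuple   # position of the virtual sensor in the 3D scene
    sensor_data: list    # simulated readings (e.g. camera/LIDAR values)
    ground_truth: dict   # exact object dimensions/parameters from the sim

dataset = []
for pose in [(0, 0, 0), (1, 0, 0), (2, 0, 0)]:
    # In a real pipeline the renderer would fill sensor_data here; the
    # simulator supplies exact ground truth at every position for free.
    dataset.append(VirtualSample(pose, sensor_data=[],
                                 ground_truth={"obstacle_width_m": 1.8}))
```

Each stored pair can then serve directly as a training or test example for an object- or obstacle-detection model.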
  • Publication number: 20200082622
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Application
    Filed: November 14, 2019
    Publication date: March 12, 2020
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
  • Patent number: 10521677
    Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual camera through a virtual driving environment comprising at least one virtual precipitation condition and at least one virtual no precipitation condition. During the traversing, the virtual camera may be moved with respect to the virtual driving environment as dictated by a vehicle-motion model modeling motion of a vehicle driving through the virtual driving environment while carrying the virtual camera. Virtual sensor data characterizing the virtual driving environment in both virtual precipitation and virtual no precipitation conditions may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the virtual driving environment in the real world.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: December 31, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Vidya Nariyambut Murali
  • Patent number: 10510187
    Abstract: Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: December 17, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Alexander Groh, Ashley Elizabeth Micks, Vidya Nariyambut Murali
  • Publication number: 20190362168
    Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein determining the region comprises determining based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
    Type: Application
    Filed: August 12, 2019
    Publication date: November 28, 2019
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
  • Patent number: 10474964
    Abstract: A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: November 12, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Harpreetsingh Banvait, Kyu Jeong Han
  • Patent number: 10453256
    Abstract: A method and an apparatus for generating training data are described. The method may include executing a simulation process. The simulation process may include traversing one or more virtual sensors over a virtual driving environment defining a plurality of lane markings or virtual objects that are each sensible by the one or more virtual sensors. During the traversing, each of the one or more virtual sensors may be moved with respect to the virtual driving environment as dictated by a vehicle-dynamic model modeling motion of a vehicle driving on a virtual road surface of the virtual driving environment while carrying the one or more virtual sensors. Virtual sensor data characterizing the virtual driving environment may be recorded. The virtual sensor data may correspond to what an actual sensor would produce in a real-world environment that is similar to or substantially matches the virtual driving environment.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: October 22, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Brielle Reiff, Vidya Nariyambut Murali, Sneha Kadetotad
  • Publication number: 20190304480
    Abstract: Systems, methods, and devices for speech transformation and generating synthetic speech using deep generative models are disclosed. A method of the disclosure includes receiving input audio data comprising a plurality of iterations of a speech utterance from a plurality of speakers. The method includes generating an input spectrogram based on the input audio data and transmitting the input spectrogram to a neural network configured to generate an output spectrogram. The method includes receiving the output spectrogram from the neural network and, based on the output spectrogram, generating synthetic audio data comprising the speech utterance.
    Type: Application
    Filed: March 29, 2018
    Publication date: October 3, 2019
    Inventors: Praveen Narayanan, Lisa Scaria, Francois Charette, Ashley Elizabeth Micks, Ryan Burke
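The spectrogram step described above (turning input audio into the representation fed to the neural network) can be sketched with a plain short-time Fourier transform. The window length, hop size, and test signal below are arbitrary illustration values; the patent does not specify them.

```python
import numpy as np

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a short-time Fourier transform: slice the
    signal into overlapping windowed frames and take the magnitude of each
    frame's FFT.  (Frame/hop sizes here are illustrative assumptions.)"""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))  # (frames, bins)

# A 40 Hz tone sampled at 512 Hz for one second.
t = np.linspace(0, 1, 512, endpoint=False)
spec = spectrogram(np.sin(2 * np.pi * 40 * t))
```

In the claimed pipeline, a spectrogram like `spec` would be transmitted to the generative network, and the network's output spectrogram would then be inverted back into synthetic audio.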
  • Patent number: 10423847
    Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer-implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein determining the region comprises determining based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect the driver's body language.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: September 24, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J. Jain, Brielle Reiff
  • Patent number: 10408937
    Abstract: Example metal bridge detection systems and methods are described. In one implementation, a method receives LIDAR data from a LIDAR system mounted to a vehicle and receives camera data from a camera system mounted to the vehicle. The method analyzes the received LIDAR data and the camera data to identify a metal bridge proximate the vehicle. If a metal bridge is identified, the method adjusts vehicle operations to improve vehicle control as the vehicle drives across the metal bridge.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: September 10, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Ashley Elizabeth Micks, Sneha Kadetotad, Harpreetsingh Banvait
  • Publication number: 20190266422
    Abstract: A vehicle includes one or more laterally mounted microphones and a controller programmed to detect a signature of an unoccupied position adjacent the vehicle in outputs of the microphones. The signature may be identified using a machine learning algorithm. In response to detecting an unoccupied position, the controller may invoke autonomous parking, store the location of the unoccupied position for later use, and/or report the unoccupied position to a server, which then informs other vehicles of the available parking. The unoccupied position may be verified by evaluating whether map data indicates legal parking at that location. The unoccupied position may also be confirmed with one or more other sensors, such as a camera, LIDAR, RADAR, SONAR, or other type of sensor.
    Type: Application
    Filed: October 19, 2016
    Publication date: August 29, 2019
    Inventors: Harpreetsingh Banvait, Ashley Elizabeth Micks, Jinesh J. Jain, Scott Vincent Myers
  • Patent number: 10372996
    Abstract: A vehicle controller receives images from a camera upon arrival and upon departure. A location of the vehicle may be tracked and images captured by the camera may be tagged with a location. A departure image may be compared to an arrival image captured closest to the same location as the arrival image. A residual image based on a difference between the arrival and departure images is evaluated for anomalies. Attributes of the anomaly such as texture, color, and the like are determined and the anomaly is classified based on the attributes. If the classification indicates an automotive fluid, then an alert is generated.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: August 6, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Jinesh J. Jain, Harpreetsingh Banvait, Bruno Sielly Jales Costa, Ashley Elizabeth Micks
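The residual-image step this abstract describes (subtracting the arrival image from the departure image and checking the difference for anomalies such as a fluid stain) can be sketched in a few lines. The thresholds, array sizes, and function name are illustrative assumptions; the patent additionally classifies the anomaly by attributes such as texture and color before alerting.

```python
import numpy as np

def find_leak_anomaly(arrival, departure, diff_thresh=0.2, min_pixels=4):
    """Compare arrival and departure ground images taken at (nearly) the
    same location and flag a region that appeared in between.  Returns a
    boolean anomaly mask, or None if the change is too small to matter.
    (Illustrative sketch; thresholds are made-up values.)"""
    residual = np.abs(departure.astype(float) - arrival.astype(float))
    anomaly_mask = residual > diff_thresh
    return anomaly_mask if anomaly_mask.sum() >= min_pixels else None

arrival = np.zeros((8, 8))
departure = arrival.copy()
departure[2:5, 2:5] = 0.9   # a new patch under the vehicle, e.g. a stain
mask = find_leak_anomaly(arrival, departure)
```

In the claimed system, the flagged region would then be classified from its attributes, and an alert generated only if the classification indicates an automotive fluid.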
  • Patent number: 10369994
    Abstract: A method for detecting stubs or intersecting roadways includes receiving perception data from at least two sensors. The at least two sensors include a rear facing camera of a vehicle and another sensor. The perception data includes information for a current roadway on which the vehicle is located. The method includes detecting, based on the perception data, an intersecting roadway connecting with the current roadway. The method also includes storing an indication of a location and a direction of the intersecting roadway with respect to the current roadway.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: August 6, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Alexandru Mihai Gurghian, Ashley Elizabeth Micks, Alexandro Walsh
  • Patent number: 10372128
    Abstract: Example sinkhole detection systems and methods are described. In one implementation, a method receives data from multiple sensors mounted to a vehicle and analyzes the received data to identify a sinkhole in a roadway ahead of the vehicle. If a sinkhole is identified, the method adjusts vehicle operations and reports the sinkhole to a shared database and/or another vehicle.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: August 6, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Scott Vincent Myers, Ashley Elizabeth Micks, Alexandru Mihai Gurghian, Harpreetsingh Banvait, Parsa Mahmoudieh
  • Publication number: 20190197901
    Abstract: The disclosure relates generally to methods, systems, and apparatuses for automated or assisted driving and more particularly relates to identification, localization, and navigation with respect to bollard receivers. A method for detecting bollard receivers includes receiving perception data from one or more perception sensors of a vehicle. The method includes determining, based on the perception data, a location of one or more bollard receivers in relation to a body of the vehicle. The method also includes providing an indication of the location of the one or more bollard receivers to one or more of a driver and component or system that makes driving maneuver decisions.
    Type: Application
    Filed: March 5, 2019
    Publication date: June 27, 2019
    Inventors: Harpreetsingh Banvait, Scott Vincent Myers, Ashley Elizabeth Micks, Sneha Kadetotad
  • Patent number: 10296816
    Abstract: A vehicle controller receives images from a camera upon arrival and upon departure. A location of the vehicle may be tracked and images captured by the camera may be tagged with a location. A departure image may be compared to an arrival image captured closest to the same location as the arrival image. A residual image based on a difference between the arrival and departure images is evaluated for anomalies. Attributes of the anomaly such as texture, color, and the like are determined and the anomaly is classified based on the attributes. If the classification indicates an automotive fluid, then an alert is generated. A machine learning algorithm for generating classifications from image data may be trained using arrival and departure images obtained by rendering of a three-dimensional model or by adding simulated fluid leaks to two-dimensional images.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: May 21, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Jinesh J. Jain, Harpreetsingh Banvait, Bruno Sielly Jales Costa