Patents by Inventor Sarah Houts

Sarah Houts has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches follow several of the entries below; they are informal interpretations of the abstracts, not the claimed implementations.

  • Patent number: 11967109
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
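    The same top-down matching system appears in several later entries (patent numbers 11216972 and 10430968 and publications 20220092816, 20190385336, and 20180268566). Below is a minimal sketch of the general idea, assuming an OpenCV-style pipeline; the homography, output size, and normalized cross-correlation matching are illustrative assumptions, not the patented method.
    ```python
    # Hypothetical sketch: warp a camera image to a top-down (bird's-eye) view,
    # then slide it over a prior LIDAR intensity map to find the best-matching
    # vehicle position. Names and parameters are assumptions, not the patent's.
    import cv2

    def to_top_down(image, ground_homography, out_size=(400, 400)):
        """Project a forward-facing camera image onto the ground plane.
        `ground_homography` is assumed to come from camera-to-ground calibration."""
        return cv2.warpPerspective(image, ground_homography, out_size)

    def localize(top_down, lidar_intensity_map):
        """Return the pixel offset in the prior map that best matches the
        top-down camera view, using normalized cross-correlation."""
        scores = cv2.matchTemplate(lidar_intensity_map, top_down, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_xy = cv2.minMaxLoc(scores)
        return best_xy, best_score
    ```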
  • Patent number: 11625856
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: April 11, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
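    This VAE-GAN localization method also appears below as patent 10949997 and publications 20210150761 and 20200286256. The sketch that follows assumes a pre-trained PyTorch generator; the map renderer, pose search, and mean-squared-error comparison are illustrative assumptions rather than the claimed implementation.
    ```python
    # Hedged sketch of the VAE-GAN localization flow: clean the camera image with
    # a pre-trained VAE-GAN generator, render a synthetic image from the vector
    # map at each candidate pose, and keep the pose with the smallest error.
    # `clean_net` and `render_fn` are assumed, hypothetical components.
    import torch

    def localize(clean_net, camera_image, vector_map, candidate_poses, render_fn):
        with torch.no_grad():
            cleaned = clean_net(camera_image.unsqueeze(0)).squeeze(0)
        best_pose, best_err = None, float("inf")
        for pose in candidate_poses:
            synthetic = render_fn(vector_map, pose)  # hypothetical map renderer
            err = torch.mean((cleaned - synthetic) ** 2).item()
            if err < best_err:
                best_pose, best_err = pose, err
        return best_pose
    ```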
  • Patent number: 11544635
    Abstract: Systems, methods, and devices for reserving a vehicle with a desired vehicle characteristic are disclosed herein. A system includes a receiver to receive a request to reserve a vehicle, wherein the request indicates a desired vehicle characteristic. The system includes a controller configured to determine a change to the vehicle that will satisfy the vehicle characteristic, and the system includes an implementation component configured to implement the change to the vehicle.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: January 3, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Lisa Scaria, Candice Xu, Brielle Reiff, Henry Salvatore Savage, Sarah Houts, Jinhyong Oh, Erick Michael Lavoie
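    A toy sketch of the reservation flow (also published below as 20190370701); the class names, vehicle-state layout, and string-based change description are assumptions for illustration only.
    ```python
    # Toy sketch of the pieces named in the abstract: a request carrying a desired
    # characteristic, a controller that decides the needed change, and an
    # implementation component that applies it. All names are assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReservationRequest:
        vehicle_id: str
        desired_characteristic: str  # e.g. "roof rack installed"

    class Controller:
        def determine_change(self, vehicle_state: dict, request: ReservationRequest) -> Optional[str]:
            """Return the change needed to satisfy the request, or None if already met."""
            if request.desired_characteristic in vehicle_state.get("characteristics", []):
                return None
            return f"add {request.desired_characteristic}"

    class ImplementationComponent:
        def apply(self, vehicle_id: str, change: Optional[str]) -> None:
            if change:
                print(f"Scheduling '{change}' for vehicle {vehicle_id}")
    ```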
  • Publication number: 20220092816
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Application
    Filed: November 30, 2021
    Publication date: March 24, 2022
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
  • Patent number: 11216972
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: January 4, 2022
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
  • Patent number: 11188089
    Abstract: Systems, methods, and devices for determining a location of a vehicle or other device are disclosed. A method includes receiving sensor data from a sensor and determining a prior map comprising LIDAR intensity values. The method includes extracting a sub-region of the prior map around a hypothesis position of the sensor. The method includes extracting a Gaussian Mixture Model (GMM) distribution of intensity values for a region of the sensor data by expectation-maximization and calculating a log-likelihood for the sub-region of the prior map based on the GMM distribution of intensity values for the sensor data.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: November 30, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Praveen Narayanan, Graham Mills, Shreyasha Paudel
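    A brief sketch of the scoring step (also published below as 20190391268), assuming scikit-learn's expectation-maximization GMM fit; the component count, window size, and map indexing are illustrative assumptions.
    ```python
    # Hedged sketch of the scoring step: fit a GMM to live sensor intensities by
    # expectation-maximization, then evaluate the log-likelihood of the prior-map
    # intensities in a window around a hypothesized position.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def score_hypothesis(sensor_intensities, prior_map, hypothesis_xy, half_size=50):
        gmm = GaussianMixture(n_components=3).fit(sensor_intensities.reshape(-1, 1))
        x, y = hypothesis_xy
        window = prior_map[y - half_size:y + half_size, x - half_size:x + half_size]
        return gmm.score_samples(window.reshape(-1, 1)).sum()  # total log-likelihood
    ```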
  • Publication number: 20210150761
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Application
    Filed: January 27, 2021
    Publication date: May 20, 2021
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
  • Patent number: 10949997
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: March 16, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
  • Patent number: 10928834
    Abstract: A method for autonomous vehicle localization. The method may include receiving, by an autonomous vehicle, millimeter-wave signals from at least two 5G transmission points. Bearing measurements may be calculated relative to each of the 5G transmission points based on the signals. A vehicle velocity may be determined by observing characteristics of the signals. Sensory data, including the bearing measurements and the vehicle velocity, may then be fused to localize the autonomous vehicle. A corresponding system and computer program product are also disclosed and claimed herein.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: February 23, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Shreyasha Paudel, Lynn Valerie Keiser, Tyler Reid
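    A minimal sketch of a bearing-only position fix (the corresponding publication, 20190346860, appears below); the velocity fusion step described in the abstract is omitted, and the planar geometry and angle convention are assumptions for illustration.
    ```python
    # Minimal sketch of a planar bearing-only fix from two 5G transmission points
    # with known positions; bearings are in radians from the x-axis.
    import numpy as np

    def fix_from_bearings(p1, bearing1, p2, bearing2):
        d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
        d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
        A = np.column_stack((d1, -d2))
        t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t[0] * d1  # estimated vehicle position
    ```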
  • Patent number: 10809079
    Abstract: Systems, methods, and apparatuses are disclosed for providing routing and navigational information to users (e.g., visually handicapped users). Example methods may include determining, by one or more computer processors coupled to at least one memory, a first location and a first orientation of a user device, and generating a first route based on the first location, the first orientation, and a destination location. Further, the method may include determining one or more obstacles on the first route using data corresponding to visual information and one or more artificial intelligence techniques; and generating a second route based on the one or more obstacles detected on the first route.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: October 20, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Donna Bell, Sarah Houts, Lynn Valerie Keiser, Jinhyoung Oh, Gaurav Pandey
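    A short sketch of the reroute-around-obstacles idea (also published below as 20200064141), assuming a networkx walkway graph; the obstacle detector is a hypothetical callback standing in for the camera and AI pipeline.
    ```python
    # Illustrative sketch: plan a route on a walkway graph, then replan with edges
    # flagged as obstructed removed. `detect_obstacles` returns (u, v) edges judged
    # impassable along the current route.
    import networkx as nx

    def navigate(graph: nx.Graph, start, destination, detect_obstacles):
        route = nx.shortest_path(graph, start, destination, weight="length")
        blocked = detect_obstacles(route)
        if blocked:
            pruned = graph.copy()
            pruned.remove_edges_from(blocked)
            route = nx.shortest_path(pruned, start, destination, weight="length")
        return route
    ```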
  • Publication number: 20200286256
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Application
    Filed: March 8, 2019
    Publication date: September 10, 2020
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
  • Patent number: 10592805
    Abstract: A machine learning module may generate a probability distribution from training data including labeled modeling data correlated with reflection data. Modeling data may include data from a LIDAR system, camera, and/or a GPS for a target environment/object. Reflection data may be collected from the same environment/object by a radar and/or an ultrasonic system. The probability distribution may assign reflection coefficients for radar and/or ultrasonic systems conditioned on values for modeling data. A mapping module may create a reflection model to overlay a virtual environment assembled from a second set of modeling data by applying the second set to the probability distribution to assign reflection values to surfaces within the virtual environment. Additionally, a test bench may evaluate an algorithm, for processing reflection data to generate control signals to an autonomous vehicle, with simulated reflection data from a virtual sensor engaging reflection values assigned within the virtual environment.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: March 17, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Alexander Groh, Kay Kunes, Sarah Houts
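    A simplified sketch of the reflection-model idea (also published below as 20180060725); a per-class Gaussian stands in for the full conditional probability distribution, and the data layout is an illustrative assumption.
    ```python
    # Simplified sketch: learn a per-surface-class distribution of reflection
    # coefficients from labeled training data, then sample coefficients for the
    # surfaces of a virtual scene.
    from collections import defaultdict
    import numpy as np

    def fit_reflection_model(labels, reflection_coeffs):
        """Return {surface_class: (mean, std)} from labeled training samples."""
        grouped = defaultdict(list)
        for label, coeff in zip(labels, reflection_coeffs):
            grouped[label].append(coeff)
        return {c: (np.mean(v), np.std(v)) for c, v in grouped.items()}

    def assign_reflection_values(model, surface_classes, rng=None):
        """Sample one reflection coefficient per surface in the virtual environment."""
        rng = rng or np.random.default_rng()
        return [rng.normal(*model[c]) for c in surface_classes]
    ```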
  • Publication number: 20200064141
    Abstract: Systems, methods, and apparatuses are disclosed for providing routing and navigational information to users (e.g., visually handicapped users). Example methods may include determining, by one or more computer processors coupled to at least one memory, a first location and a first orientation of a user device, and generating a first route based on the first location, the first orientation, and a destination location. Further, the method may include determining one or more obstacles on the first route using data corresponding to visual information and one or more artificial intelligence techniques; and generating a second route based on the one or more obstacles detected on the first route.
    Type: Application
    Filed: August 24, 2018
    Publication date: February 27, 2020
    Applicant: Ford Global Technologies, LLC
    Inventors: Donna Bell, Sarah Houts, Lynn Valerie Keiser, Jinhyoung Oh, Gaurav Pandey
  • Publication number: 20190391268
    Abstract: Systems, methods, and devices for determining a location of a vehicle or other device are disclosed. A method includes receiving sensor data from a sensor and determining a prior map comprising LIDAR intensity values. The method includes extracting a sub-region of the prior map around a hypothesis position of the sensor. The method includes extracting a Gaussian Mixture Model (GMM) distribution of intensity values for a region of the sensor data by expectation-maximization and calculating a log-likelihood for the sub-region of the prior map based on the GMM distribution of intensity values for the sensor data.
    Type: Application
    Filed: June 21, 2018
    Publication date: December 26, 2019
    Inventors: Sarah Houts, Praveen Narayanan, Graham Mills, Shreyasha Paudel
  • Publication number: 20190385336
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Application
    Filed: August 20, 2019
    Publication date: December 19, 2019
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
  • Publication number: 20190370701
    Abstract: Systems, methods, and devices for reserving a vehicle with a desired vehicle characteristic are disclosed herein. A system includes a receiver to receive a request to reserve a vehicle, wherein the request indicates a desired vehicle characteristic. The system includes a controller configured to determine a change to the vehicle that will satisfy the vehicle characteristic, and the system includes an implementation component configured to implement the change to the vehicle.
    Type: Application
    Filed: January 31, 2017
    Publication date: December 5, 2019
    Inventors: Lisa Scaria, Candice Xu, Brielle Reiff, Henry Salvatore Savage, Sarah Houts, Jinhyong Oh, Erick Michael Lavoie
  • Publication number: 20190346860
    Abstract: A method for autonomous vehicle localization. The method may include receiving, by an autonomous vehicle, millimeter-wave signals from at least two 5G transmission points. Bearing measurements may be calculated relative to each of the 5G transmission points based on the signals. A vehicle velocity may be determined by observing characteristics of the signals. Sensory data, including the bearing measurements and the vehicle velocity, may then be fused to localize the autonomous vehicle. A corresponding system and computer program product are also disclosed and claimed herein.
    Type: Application
    Filed: May 14, 2018
    Publication date: November 14, 2019
    Inventors: Sarah Houts, Shreyasha Paudel, Lynn Valerie Keiser, Tyler Reid
  • Patent number: 10430968
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: October 1, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
  • Publication number: 20180268566
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Application
    Filed: March 14, 2017
    Publication date: September 20, 2018
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
  • Publication number: 20180060725
    Abstract: A machine learning module may generate a probability distribution from training data including labeled modeling data correlated with reflection data. Modeling data may include data from a LIDAR system, camera, and/or a GPS for a target environment/object. Reflection data may be collected from the same environment/object by a radar and/or an ultrasonic system. The probability distribution may assign reflection coefficients for radar and/or ultrasonic systems conditioned on values for modeling data. A mapping module may create a reflection model to overlay a virtual environment assembled from a second set of modeling data by applying the second set to the probability distribution to assign reflection values to surfaces within the virtual environment. Additionally, a test bench may evaluate an algorithm, for processing reflection data to generate control signals to an autonomous vehicle, with simulated reflection data from a virtual sensor engaging reflection values assigned within the virtual environment.
    Type: Application
    Filed: August 26, 2016
    Publication date: March 1, 2018
    Inventors: Alexander Groh, Kay Kunes, Sarah Houts