Patents by Inventor Sarah Houts
Sarah Houts has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240193956
Abstract: Provided are systems and methods for detecting an emergency services vehicle and controlling an autonomous vehicle to interact with the emergency services vehicle and emergency services personnel. In one example, a method may include storing sensor data captured of an environment surrounding the vehicle while the vehicle is on a road, determining whether an emergency services vehicle is present in the surrounding environment based on the sensor data, and in response to determining that the emergency vehicle is present in the surrounding environment, generating an alert and transmitting the alert to a user interface.
Type: Application
Filed: December 12, 2022
Publication date: June 13, 2024
Inventors: Sarah HOUTS, Gilbran James ALVAREZ
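
To make the detect-and-alert flow in this abstract concrete, here is a minimal Python sketch. It is not the patented implementation; `SensorFrame`, `detect_emergency_vehicle`, and `send_alert` are hypothetical placeholders, and the detector is stubbed out where a real perception stack would analyze camera or LIDAR data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    """One stored snapshot of the environment around the vehicle (hypothetical)."""
    timestamp: float
    labels: List[str]  # object classes produced by an upstream perception stack

def detect_emergency_vehicle(frame: SensorFrame) -> bool:
    # Placeholder detector: a real system would analyze camera/LIDAR data.
    return "emergency_vehicle" in frame.labels

def send_alert(alert: dict) -> None:
    # Stand-in for transmitting the alert to a user interface.
    print(f"ALERT -> UI: {alert}")

def process_buffer(buffer: List[SensorFrame]) -> None:
    """Scan stored sensor data and raise an alert when an emergency vehicle appears."""
    for frame in buffer:
        if detect_emergency_vehicle(frame):
            send_alert({"time": frame.timestamp,
                        "event": "emergency services vehicle detected"})

if __name__ == "__main__":
    frames = [SensorFrame(0.0, ["car", "pedestrian"]),
              SensorFrame(0.1, ["emergency_vehicle", "car"])]
    process_buffer(frames)
```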

Patent number: 11967109
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 23, 2024
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
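
As an illustration of the comparison-based localization idea (a top-down view matched against a prior map), the sketch below runs a brute-force mean-squared-error search for the best map offset. It only stands in for the comparison and location components the abstract describes; the toy map, the patch, and the `locate_on_map` helper are assumptions for the example.

```python
import numpy as np

def locate_on_map(top_down: np.ndarray, prior_map: np.ndarray) -> tuple:
    """Return the (row, col) offset in prior_map that best matches the top-down patch."""
    h, w = top_down.shape
    best_err, best_rc = np.inf, (0, 0)
    for r in range(prior_map.shape[0] - h + 1):
        for c in range(prior_map.shape[1] - w + 1):
            err = np.mean((prior_map[r:r + h, c:c + w] - top_down) ** 2)
            if err < best_err:
                best_err, best_rc = err, (r, c)
    return best_rc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prior_map = rng.random((60, 60))           # stand-in for a top-down intensity map
    patch = prior_map[20:30, 35:45].copy()     # pretend this came from the camera pipeline
    print(locate_on_map(patch, prior_map))     # -> (20, 35)
```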

Patent number: 11625856
Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
Type: Grant
Filed: January 27, 2021
Date of Patent: April 11, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
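
The abstract describes a pipeline: clean the camera image, render a synthetic image from a vector map, then localize by comparing the two. The sketch below mirrors that flow with a no-op standing in for the VAE-GAN cleaning step and a one-lane toy map; `clean_image`, `render_synthetic`, and `localize` are invented names, not the patented algorithm.

```python
import numpy as np

def clean_image(raw: np.ndarray) -> np.ndarray:
    # Placeholder for the VAE-GAN cleaning stage; here it simply passes the image through.
    return raw

def render_synthetic(lane_x: int, shape=(40, 40)) -> np.ndarray:
    """Render a toy synthetic image: one vertical lane line taken from a 'vector map'."""
    img = np.zeros(shape)
    img[:, lane_x] = 1.0
    return img

def localize(cleaned: np.ndarray, candidate_lane_x: range) -> int:
    """Pick the candidate lane offset whose synthetic image best overlaps the cleaned image."""
    scores = {x: float(np.sum(cleaned * render_synthetic(x))) for x in candidate_lane_x}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = render_synthetic(17) + 0.05 * rng.random((40, 40))  # noisy "camera" image
    print(localize(clean_image(raw), range(10, 30)))          # -> 17
```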

Patent number: 11544635
Abstract: Systems, methods, and devices for reserving a vehicle with a desired vehicle characteristic are disclosed herein. A system includes a receiver to receive a request to reserve a vehicle, wherein the request indicates a desired vehicle characteristic. The system includes a controller configured to determine a change to the vehicle that will satisfy the vehicle characteristic, and the system includes an implementation component configured to implement the change to the vehicle.
Type: Grant
Filed: January 31, 2017
Date of Patent: January 3, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Lisa Scaria, Candice Xu, Brielle Reiff, Henry Salvatore Savage, Sarah Houts, Jinhyong Oh, Erick Michael Lavoie
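
A rough sketch of the receiver/controller/implementation split described above, assuming a simple lookup table of changes; the characteristics and actions shown are invented for illustration, not drawn from the patent.

```python
# Invented mapping from a requested characteristic to the change that satisfies it.
RESERVATION_CHANGES = {
    "child_seat": "install child seat before pickup",
    "towing": "attach tow hitch",
}

def determine_change(characteristic: str) -> str:
    """Controller step: decide what change to the vehicle satisfies the characteristic."""
    return RESERVATION_CHANGES.get(characteristic, "no change required")

def implement_change(vehicle_id: str, change: str) -> dict:
    """Implementation step: schedule the change against the reserved vehicle."""
    return {"vehicle": vehicle_id, "action": change, "status": "scheduled"}

def handle_reservation(request: dict) -> dict:
    """Receiver step: take a reservation request that names a desired characteristic."""
    change = determine_change(request["desired_characteristic"])
    return implement_change(request["vehicle_id"], change)

if __name__ == "__main__":
    print(handle_reservation({"vehicle_id": "V-42", "desired_characteristic": "child_seat"}))
```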

Publication number: 20220092816
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Application
Filed: November 30, 2021
Publication date: March 24, 2022
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith

Patent number: 11216972
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Grant
Filed: August 20, 2019
Date of Patent: January 4, 2022
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith

Patent number: 11188089
Abstract: Systems, methods, and devices for determining a location of a vehicle or other device are disclosed. A method includes receiving sensor data from a sensor and determining a prior map comprising LIDAR intensity values. The method includes extracting a sub-region of the prior map around a hypothesis position of the sensor. The method includes extracting a Gaussian Mixture Model (GMM) distribution of intensity values for a region of the sensor data by expectation-maximization and calculating a log-likelihood for the sub-region of the prior map based on the GMM distribution of intensity values for the sensor data.
Type: Grant
Filed: June 21, 2018
Date of Patent: November 30, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Sarah Houts, Praveen Narayanan, Graham Mills, Shreyasha Paudel
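
The core scoring step in this abstract (fit a Gaussian mixture to sensor intensity values by expectation-maximization, then evaluate a sub-region of the prior map under it) can be sketched in a few lines with scikit-learn. The synthetic intensities, the two-component choice, and the patch location are assumptions for illustration, not the patented parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic live-sensor intensity values (two reflectivity populations, e.g. paint vs. asphalt).
sensor_intensities = np.concatenate([rng.normal(0.2, 0.05, 500),
                                     rng.normal(0.8, 0.05, 500)]).reshape(-1, 1)

# Fit the mixture by expectation-maximization.
gmm = GaussianMixture(n_components=2, random_state=0).fit(sensor_intensities)

# Score a sub-region of the prior intensity map around a hypothesis position.
prior_map = rng.random((100, 100))                    # stand-in LIDAR intensity map
sub_region = prior_map[40:60, 40:60].reshape(-1, 1)   # patch around the hypothesis
log_likelihood = gmm.score_samples(sub_region).sum()
print(f"log-likelihood of hypothesis sub-region: {log_likelihood:.1f}")
```

Repeating the scoring over many candidate sub-regions and keeping the best-scoring hypothesis would yield a position estimate, in the spirit of the abstract.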

Publication number: 20210150761
Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
Type: Application
Filed: January 27, 2021
Publication date: May 20, 2021
Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid

Patent number: 10949997
Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
Type: Grant
Filed: March 8, 2019
Date of Patent: March 16, 2021
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid

Patent number: 10928834
Abstract: A method for autonomous vehicle localization. The method may include receiving, by an autonomous vehicle, millimeter-wave signals from at least two 5G transmission points. Bearing measurements may be calculated relative to each of the 5G transmission points based on the signals. A vehicle velocity may be determined by observing characteristics of the signals. Sensory data, including the bearing measurements and the vehicle velocity, may then be fused to localize the autonomous vehicle. A corresponding system and computer program product are also disclosed and claimed herein.
Type: Grant
Filed: May 14, 2018
Date of Patent: February 23, 2021
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sarah Houts, Shreyasha Paudel, Lynn Valerie Keiser, Tyler Reid
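
To show how bearings to two known transmission points can pin down a 2-D position, the sketch below simply intersects the two bearing rays. This geometry stands in for the sensor fusion the abstract describes (which also folds in vehicle velocity); the coordinates and the `localize_from_bearings` helper are invented for the example.

```python
import numpy as np

def localize_from_bearings(tp1, bearing1, tp2, bearing2):
    """Intersect the two vehicle-to-transmission-point bearing rays (map-frame radians)."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Vehicle position v satisfies v = tp1 - t1*d1 = tp2 - t2*d2; solve for t1, t2.
    A = np.column_stack([-d1, d2])
    t = np.linalg.solve(A, np.asarray(tp2) - np.asarray(tp1))
    return np.asarray(tp1) - t[0] * d1

if __name__ == "__main__":
    vehicle = np.array([3.0, 1.0])                            # ground truth, for checking
    tp1, tp2 = np.array([0.0, 10.0]), np.array([10.0, 10.0])  # known 5G transmission points
    b1 = np.arctan2(tp1[1] - vehicle[1], tp1[0] - vehicle[0])
    b2 = np.arctan2(tp2[1] - vehicle[1], tp2[0] - vehicle[0])
    print(localize_from_bearings(tp1, b1, tp2, b2))           # ~ [3. 1.]
```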

Patent number: 10809079
Abstract: Systems, methods, and apparatuses are disclosed for providing routing and navigational information to users (e.g., visually handicapped users). Example methods may include determining, by one or more computer processors coupled to at least one memory, a first location of a user device and an orientation of a user device, and generating a first route based on the first location, the first orientation, and a destination location. Further, the method may include determining one or more obstacles on the first route using data corresponding to visual information and one or more artificial intelligence techniques; and generating a second route based on the one or more obstacles detected on the first route.
Type: Grant
Filed: August 24, 2018
Date of Patent: October 20, 2020
Assignee: Ford Global Technologies, LLC
Inventors: Donna Bell, Sarah Houts, Lynn Valerie Keiser, Jinhyoung Oh, Gaurav Pandey
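
A small plan/detect/re-plan example in the spirit of this abstract: plan a first route, then plan a second route once obstacles are known. The breadth-first grid planner and the hard-coded obstacle set are illustrative assumptions; the patent's vision-based obstacle detection is not reproduced here.

```python
from collections import deque

def plan_route(start, goal, blocked, size=6):
    """Breadth-first search on a size x size grid, avoiding blocked cells."""
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None  # no route found

if __name__ == "__main__":
    start, goal = (0, 0), (5, 5)
    first_route = plan_route(start, goal, blocked=set())
    detected_obstacles = {(2, 2), (2, 3)}   # stand-in for vision/AI obstacle detection
    second_route = plan_route(start, goal, blocked=detected_obstacles)
    print("first route:", first_route)
    print("re-planned route:", second_route)
```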

Publication number: 20200286256
Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
Type: Application
Filed: March 8, 2019
Publication date: September 10, 2020
Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid

Patent number: 10592805
Abstract: A machine learning module may generate a probability distribution from training data including labeled modeling data correlated with reflection data. Modeling data may include data from a LIDAR system, camera, and/or a GPS for a target environment/object. Reflection data may be collected from the same environment/object by a radar and/or an ultrasonic system. The probability distribution may assign reflection coefficients for radar and/or ultrasonic systems conditioned on values for modeling data. A mapping module may create a reflection model to overlay a virtual environment assembled from a second set of modeling data by applying the second set to the probability distribution to assign reflection values to surfaces within the virtual environment. Additionally, a test bench may evaluate an algorithm, for processing reflection data to generate control signals to an autonomous vehicle, with simulated reflection data from a virtual sensor engaging reflection values assigned within the virtual environment.
Type: Grant
Filed: August 26, 2016
Date of Patent: March 17, 2020
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Alexander Groh, Kay Kunes, Sarah Houts
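
The learn-then-map idea in this abstract can be sketched as a per-label distribution of reflection coefficients estimated from labeled training data, then sampled to assign values to the surfaces of a virtual scene. The material labels, the numbers, and the simple per-class Gaussian are invented for illustration; the patent's full probability model and test bench are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training: labeled modeling data (surface class) paired with measured reflection coefficients.
training = {
    "asphalt": rng.normal(0.15, 0.02, 200),
    "metal_sign": rng.normal(0.90, 0.05, 200),
}
# Conditional model: reflection coefficient distribution per label, summarized as (mean, std).
conditional = {label: (vals.mean(), vals.std()) for label, vals in training.items()}

# Mapping: assign a sampled reflection value to each surface of a virtual scene.
virtual_scene = ["asphalt", "metal_sign", "asphalt"]
assigned = [rng.normal(*conditional[surface]) for surface in virtual_scene]
for idx, (surface, value) in enumerate(zip(virtual_scene, assigned)):
    print(f"surface {idx} ({surface}): reflection coefficient {value:.3f}")
```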

Publication number: 20200064141
Abstract: Systems, methods, and apparatuses are disclosed for providing routing and navigational information to users (e.g., visually handicapped users). Example methods may include determining, by one or more computer processors coupled to at least one memory, a first location of a user device and an orientation of a user device, and generating a first route based on the first location, the first orientation, and a destination location. Further, the method may include determining one or more obstacles on the first route using data corresponding to visual information and one or more artificial intelligence techniques; and generating a second route based on the one or more obstacles detected on the first route.
Type: Application
Filed: August 24, 2018
Publication date: February 27, 2020
Applicant: Ford Global Technologies, LLC
Inventors: Donna Bell, Sarah Houts, Lynn Valerie Keiser, Jinhyoung Oh, Gaurav Pandey

Publication number: 20190391268
Abstract: Systems, methods, and devices for determining a location of a vehicle or other device are disclosed. A method includes receiving sensor data from a sensor and determining a prior map comprising LIDAR intensity values. The method includes extracting a sub-region of the prior map around a hypothesis position of the sensor. The method includes extracting a Gaussian Mixture Model (GMM) distribution of intensity values for a region of the sensor data by expectation-maximization and calculating a log-likelihood for the sub-region of the prior map based on the GMM distribution of intensity values for the sensor data.
Type: Application
Filed: June 21, 2018
Publication date: December 26, 2019
Inventors: Sarah Houts, Praveen Narayanan, Graham Mills, Shreyasha Paudel

Publication number: 20190385336
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Application
Filed: August 20, 2019
Publication date: December 19, 2019
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith

Publication number: 20190370701
Abstract: Systems, methods, and devices for reserving a vehicle with a desired vehicle characteristic are disclosed herein. A system includes a receiver to receive a request to reserve a vehicle, wherein the request indicates a desired vehicle characteristic. The system includes a controller configured to determine a change to the vehicle that will satisfy the vehicle characteristic, and the system includes an implementation component configured to implement the change to the vehicle.
Type: Application
Filed: January 31, 2017
Publication date: December 5, 2019
Inventors: Lisa SCARIA, Candice XU, Brielle REIFF, Henry Salvatore SAVAGE, Sarah HOUTS, Jinhyong OH, Erick Michael LAVOIE

Publication number: 20190346860
Abstract: A method for autonomous vehicle localization. The method may include receiving, by an autonomous vehicle, millimeter-wave signals from at least two 5G transmission points. Bearing measurements may be calculated relative to each of the 5G transmission points based on the signals. A vehicle velocity may be determined by observing characteristics of the signals. Sensory data, including the bearing measurements and the vehicle velocity, may then be fused to localize the autonomous vehicle. A corresponding system and computer program product are also disclosed and claimed herein.
Type: Application
Filed: May 14, 2018
Publication date: November 14, 2019
Inventors: Sarah Houts, Shreyasha Paudel, Lynn Valerie Keiser, Tyler Reid

Patent number: 10430968
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Grant
Filed: March 14, 2017
Date of Patent: October 1, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith

Publication number: 20180268566
Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down light detection and ranging (LIDAR) intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
Type: Application
Filed: March 14, 2017
Publication date: September 20, 2018
Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith