Patents by Inventor Madeline Jane Schrier
Madeline Jane Schrier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11847917
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Grant
Filed: July 9, 2021
Date of Patent: December 19, 2023
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
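The label-generation steps this abstract describes (scatter random fixation points inside the regions of interest, then apply a Gaussian blur) can be sketched as below. This is a minimal illustration, not the patented implementation: the grid size, point count, kernel width, and the use of 1.0/0.0 as the "first" and "second" colors are all assumptions.

```python
import math
import random

def make_label_image(width, height, regions, n_points=50, sigma=2.0, seed=0):
    """Create a synthetic saliency label image.

    regions: list of (x0, y0, x1, y1) boxes marking objects of interest.
    Random points inside the boxes get the "first color" (1.0); the rest
    of the intermediate image keeps the "second color" (0.0). A separable
    Gaussian blur then spreads each point into a fixation-like blob.
    """
    rng = random.Random(seed)
    img = [[0.0] * width for _ in range(height)]

    # Scatter random points inside the sub-regions of interest.
    for (x0, y0, x1, y1) in regions:
        for _ in range(n_points):
            img[rng.randrange(y0, y1)][rng.randrange(x0, x1)] = 1.0

    # Build a normalized 1-D Gaussian kernel for a separable blur.
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]

    def blur_pass(grid, horizontal):
        # One 1-D convolution pass with replicate (clamped) borders.
        out = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                acc = 0.0
                for i, k in enumerate(kernel):
                    if horizontal:
                        xx = min(max(x + i - radius, 0), width - 1)
                        acc += k * grid[y][xx]
                    else:
                        yy = min(max(y + i - radius, 0), height - 1)
                        acc += k * grid[yy][x]
                out[y][x] = acc
        return out

    # Horizontal pass followed by vertical pass = full 2-D Gaussian blur.
    return blur_pass(blur_pass(img, True), False)
```

In practice a library blur (e.g. OpenCV's `GaussianBlur`) would replace the hand-rolled passes; the pure-Python version above just keeps the sketch self-contained.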
-
Publication number: 20230351544
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Application
Filed: July 11, 2023
Publication date: November 2, 2023
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
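The two-stage cascade in this abstract (cheap classifier over sections of the down-sampled image, expensive classifier only on the mapped candidate sectors) can be sketched as follows. The classifier stubs, window step, and scale factor are illustrative assumptions; in the patent both stages would be CNNs.

```python
def sliding_sections(width, height, step):
    """Yield (x0, y0, x1, y1) sections spanning a down-sampled image."""
    for y in range(0, height - step + 1, step):
        for x in range(0, width - step + 1, step):
            yield (x, y, x + step, y + step)

def cascade_detect(low_image, scale, low_classifier, high_classifier, step=8):
    """Run the low-fidelity classifier over sections of the down-sampled
    image, map each surviving candidate section to the corresponding sector
    of the high-fidelity image, and vet it with the high-fidelity classifier.
    Returns the list of sectors (in high-fidelity coordinates) that pass
    both stages."""
    h, w = len(low_image), len(low_image[0])

    # Stage 1: cheap screening on the down-sampled image.
    candidates = [s for s in sliding_sections(w, h, step)
                  if low_classifier(low_image, s)]

    # Stage 2: map candidates to high-fidelity sectors and vet them.
    detections = []
    for (x0, y0, x1, y1) in candidates:
        sector = (x0 * scale, y0 * scale, x1 * scale, y1 * scale)
        if high_classifier(sector):
            detections.append(sector)
    return detections
```

The design point is that the expensive classifier only ever sees the few sectors the cheap one flags, so most of the image is rejected at low resolution.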
-
Patent number: 11734786
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Grant
Filed: September 16, 2021
Date of Patent: August 22, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Publication number: 20220004807
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Application
Filed: September 16, 2021
Publication date: January 6, 2022
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Patent number: 11200447
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Grant
Filed: June 18, 2019
Date of Patent: December 14, 2021
Assignee: Ford Global Technologies, LLC
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Publication number: 20210334610
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Application
Filed: July 9, 2021
Publication date: October 28, 2021
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
-
Patent number: 11087186
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Grant
Filed: October 18, 2019
Date of Patent: August 10, 2021
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
-
Patent number: 10665104
Abstract: Example systems and methods for detecting an animal proximate a vehicle are described. In one implementation, a wearable device carried by an animal is activated when the wearable device is within a predetermined distance of a vehicle. The vehicle receives a signal from the wearable device and determines an approximate distance between the wearable device and the vehicle. An alert is generated to warn a driver that an animal is near the vehicle. The alert has an intensity level that corresponds to the approximate distance between the wearable device and the vehicle.
Type: Grant
Filed: October 28, 2015
Date of Patent: May 26, 2020
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
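The distance-to-intensity mapping this abstract describes can be sketched as a simple banding function. The detection range, number of intensity levels, and equal-width bands are illustrative assumptions, not values from the patent.

```python
def alert_intensity(distance_m, max_range_m=50.0, levels=5):
    """Map the approximate animal-to-vehicle distance to a discrete alert
    intensity: the closer the wearable device, the higher the level.
    Returns 0 (no alert) when the device is at or beyond max range."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m >= max_range_m:
        return 0
    # Split the range into equal bands; the nearest band gets the top level.
    band = max_range_m / levels
    return levels - int(distance_m // band)
```

With the defaults, an animal 12 m away lands in the second band and gets intensity 4 of 5, while one at 49 m gets the minimum intensity of 1.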
-
Publication number: 20200050905
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Application
Filed: October 18, 2019
Publication date: February 13, 2020
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
-
Patent number: 10489691
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Grant
Filed: January 15, 2016
Date of Patent: November 26, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
-
Publication number: 20190311221
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Application
Filed: June 18, 2019
Publication date: October 10, 2019
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Patent number: 10410522
Abstract: Example systems and methods for communicating animal proximity to a vehicle are described. In one implementation, a device implanted in an animal is activated when the device is within a predetermined distance of a vehicle. The vehicle receives a signal from the device and determines an approximate distance between the device and the vehicle. A symbol is flashed to a driver of the vehicle at a frequency that corresponds to the approximate distance between the device and the vehicle.
Type: Grant
Filed: October 28, 2015
Date of Patent: September 10, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
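Unlike the discrete intensity levels of the wearable-device variant, this abstract ties a flash frequency to distance. A linear mapping is one way to realize "a frequency that corresponds to the approximate distance"; the frequency bounds, range, and linearity below are all assumptions for illustration.

```python
def flash_frequency_hz(distance_m, min_hz=0.5, max_hz=8.0, max_range_m=50.0):
    """Flash a dashboard symbol faster as the implanted device gets closer.
    Returns 0.0 (symbol not shown) when the device is at or beyond range."""
    if distance_m >= max_range_m:
        return 0.0
    # Linearly interpolate: closeness 0 at max range, 1 at zero distance.
    closeness = 1.0 - max(distance_m, 0.0) / max_range_m
    return min_hz + (max_hz - min_hz) * closeness
```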
-
Patent number: 10373019
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Grant
Filed: January 13, 2016
Date of Patent: August 6, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Publication number: 20180350240
Abstract: Example systems and methods for communicating animal proximity to a vehicle are described. In one implementation, a device implanted in an animal is activated when the device is within a predetermined distance of a vehicle. The vehicle receives a signal from the device and determines an approximate distance between the device and the vehicle. A symbol is flashed to a driver of the vehicle at a frequency that corresponds to the approximate distance between the device and the vehicle.
Type: Application
Filed: October 28, 2015
Publication date: December 6, 2018
Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
-
Publication number: 20180286243
Abstract: Example systems and methods for detecting an animal proximate a vehicle are described. In one implementation, a wearable device carried by an animal is activated when the wearable device is within a predetermined distance of a vehicle. The vehicle receives a signal from the wearable device and determines an approximate distance between the wearable device and the vehicle. An alert is generated to warn a driver that an animal is near the vehicle. The alert has an intensity level that corresponds to the approximate distance between the wearable device and the vehicle.
Type: Application
Filed: October 28, 2015
Publication date: October 4, 2018
Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
-
Publication number: 20180186369
Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones and identifies sounds. The controller further identifies an estimated location of the sound origin and the type of sound, i.e., whether the sound source is a vehicle and/or the type of vehicle. The controller analyzes map data and attempts to identify a landmark within a tolerance of the estimated location. If a landmark is found corresponding to the estimated location and type of the sound origin, then the certainty is increased that the source of the sound is at that location and is that type of sound source. Collision avoidance is then performed with respect to the location of the sound origin and its type, with the certainty as augmented using the map data. Collision avoidance may include automatically actuating brake, steering, and accelerator actuators in order to avoid the location of the sound origin.
Type: Application
Filed: February 27, 2018
Publication date: July 5, 2018
Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
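The certainty-augmentation step this abstract describes (boost confidence when a map landmark of a matching type lies within a tolerance of the estimated sound origin) can be sketched as below. The dict layout, tolerance, and additive boost are assumptions for illustration; the patent does not specify how the certainty update is computed.

```python
import math

def augment_certainty(sound, landmarks, tolerance_m=15.0, boost=0.2):
    """Raise the certainty of an estimated sound source when map data
    corroborates it.

    sound: dict with 'x', 'y' (estimated origin, meters), 'type', and
    'certainty' in [0, 1]. landmarks: list of dicts with 'x', 'y', 'type'.
    If any landmark of the same type is within tolerance_m of the estimated
    origin, return the boosted certainty (capped at 1.0); otherwise return
    the original certainty unchanged."""
    for lm in landmarks:
        dist = math.hypot(lm["x"] - sound["x"], lm["y"] - sound["y"])
        if dist <= tolerance_m and lm["type"] == sound["type"]:
            return min(1.0, sound["certainty"] + boost)
    return sound["certainty"]
```

The augmented certainty would then feed the collision-avoidance logic that actuates brake, steering, and accelerator around the sound origin.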
-
Patent number: 9937922
Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones and identifies sounds. The controller further identifies an estimated location of the sound origin and the type of sound, i.e., whether the sound source is a vehicle and/or the type of vehicle. The controller analyzes map data and attempts to identify a landmark within a tolerance of the estimated location. If a landmark is found corresponding to the estimated location and type of the sound origin, then the certainty is increased that the source of the sound is at that location and is that type of sound source. Collision avoidance is then performed with respect to the location of the sound origin and its type, with the certainty as augmented using the map data. Collision avoidance may include automatically actuating brake, steering, and accelerator actuators in order to avoid the location of the sound origin.
Type: Grant
Filed: October 6, 2015
Date of Patent: April 10, 2018
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
-
Patent number: 9881219
Abstract: A controller for an autonomous vehicle receives an image stream from one or more imaging devices. The controller identifies vehicle images in the image stream. Vehicle images are compared to the color, shape, badging, markings, license plate, and driver of the autonomous vehicle. If the vehicle image is determined to match the autonomous vehicle, then the vehicle image is ignored as a potential obstacle. The location of a reflective surface that generated the vehicle image may be determined and added to a set of potential obstacles. The color and shape of a vehicle in a vehicle image may be evaluated first. Only if the color and shape in the vehicle image match the autonomous vehicle are other factors such as badging, markings, license plate, and driver considered. Vehicle images not matching the autonomous vehicle are included in a set of potential obstacles.
Type: Grant
Filed: October 7, 2015
Date of Patent: January 30, 2018
Assignee: Ford Global Technologies, LLC
Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
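The staged matching this abstract describes (evaluate the cheap attributes, color and shape, first; compare badging, markings, license plate, and driver only when those already match) can be sketched as follows. Representing each attribute as a precomputed string is an assumption for illustration; in practice each comparison would itself be a vision task.

```python
def classify_vehicle_image(image_attrs, self_attrs):
    """Decide whether a detected vehicle image is a reflection of the
    autonomous vehicle itself or a potential obstacle.

    Both arguments are dicts with keys 'color', 'shape', 'badging',
    'markings', 'license_plate', and 'driver'. Color and shape are checked
    first; the more expensive attributes are only compared when the cheap
    ones already match."""
    cheap = ("color", "shape")
    expensive = ("badging", "markings", "license_plate", "driver")

    # Cheap screen: any mismatch means it cannot be our own reflection.
    if any(image_attrs[k] != self_attrs[k] for k in cheap):
        return "obstacle"
    # Full match: ignore the image itself, but the reflective surface that
    # produced it may still be added to the set of potential obstacles.
    if all(image_attrs[k] == self_attrs[k] for k in expensive):
        return "reflection"
    return "obstacle"
```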
-
Publication number: 20170206434
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
Type: Application
Filed: January 14, 2016
Publication date: July 20, 2017
Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
-
Publication number: 20170206440
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
Type: Application
Filed: January 15, 2016
Publication date: July 20, 2017
Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali