Patents by Inventor Madeline Jane Schrier

Madeline Jane Schrier has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847917
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: December 19, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
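As a concrete illustration of the saliency-map generation described in the abstract above, here is a minimal sketch. The function names, the (top, left, bottom, right) box encoding of sub-regions, and the pure-NumPy separable blur are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure NumPy (rows, then columns)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    img = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")

def synthetic_saliency(shape, regions, points_per_region=25, sigma=6.0, seed=0):
    """Scatter random points (the "first color") inside each region of
    interest, leave the rest of the intermediate image at zero (the
    "second color"), then Gaussian-blur into a smooth saliency map."""
    rng = np.random.default_rng(seed)
    label = np.zeros(shape)
    for top, left, bottom, right in regions:
        rows = rng.integers(top, bottom, points_per_region)
        cols = rng.integers(left, right, points_per_region)
        label[rows, cols] = 1.0
    return _gaussian_blur(label, sigma)
```

Scattering points only inside the regions of interest and then blurring them yields a map that peaks over the labeled objects, which is the intent of the synthetic fixation label.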
  • Publication number: 20230351544
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Application
    Filed: July 11, 2023
    Publication date: November 2, 2023
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
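The two-stage cascade in the abstract above can be sketched as follows. The stride-based down-sampling, the window and stride sizes, and the boolean classifier callables (stand-ins for the two CNNs) are all illustrative assumptions.

```python
import numpy as np

def cascade_detect(image, scale, low_clf, high_clf, win=8, stride=8):
    """Two-stage detection sketch: a cheap classifier scans sections of a
    down-sampled copy; only surviving candidates are mapped back to
    full-resolution sectors and vetted by the expensive classifier."""
    small = image[::scale, ::scale]              # crude down-sampling
    detections = []
    for r in range(0, small.shape[0] - win + 1, stride):
        for c in range(0, small.shape[1] - win + 1, stride):
            if not low_clf(small[r:r + win, c:c + win]):
                continue                         # pruned cheaply
            # map the candidate section to a full-resolution sector
            R, C, W = r * scale, c * scale, win * scale
            if high_clf(image[R:R + W, C:C + W]):
                detections.append((R, C, W))
    return detections
```

The point of the design is that the high-fidelity classifier only ever runs on the few sectors the low-fidelity pass could not rule out.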
  • Patent number: 11734786
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: August 22, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Publication number: 20220004807
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Application
    Filed: September 16, 2021
    Publication date: January 6, 2022
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Patent number: 11200447
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: December 14, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Publication number: 20210334610
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Application
    Filed: July 9, 2021
    Publication date: October 28, 2021
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
  • Patent number: 11087186
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: August 10, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
  • Patent number: 10665104
    Abstract: Example systems and methods for detecting an animal proximate a vehicle are described. In one implementation, a wearable device carried by an animal is activated when the wearable device is within a predetermined distance of a vehicle. The vehicle receives a signal from the wearable device and determines an approximate distance between the wearable device and the vehicle. An alert is generated to warn a driver that an animal is near the vehicle. The alert has an intensity level that corresponds to the approximate distance between the wearable device and the vehicle.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: May 26, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
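One hedged way to realize the distance estimation and distance-proportional alert described in the abstract above is sketched below. The log-distance path-loss model, the assumed 1-metre reference RSSI, and the linear five-level scale are illustrative choices, not details from the patent.

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Log-distance path-loss sketch: approximate distance (metres) from
    received signal strength. tx_power_dbm is the assumed RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

def alert_intensity(distance_m, max_range_m=50.0, levels=5):
    """Map an approximate distance to an alert level: the closer the
    wearable device, the higher the intensity (0 means no alert)."""
    if distance_m >= max_range_m:
        return 0                      # out of range: no alert
    d = max(distance_m, 0.0)
    return levels - int(d / max_range_m * levels)
```

For example, a device estimated at half the maximum range would trigger a mid-level alert, while one at the vehicle's bumper would trigger the highest level.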
  • Publication number: 20200050905
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Application
    Filed: October 18, 2019
    Publication date: February 13, 2020
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
  • Patent number: 10489691
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: November 26, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
  • Publication number: 20190311221
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Application
    Filed: June 18, 2019
    Publication date: October 10, 2019
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Patent number: 10410522
    Abstract: Example systems and methods for communicating animal proximity to a vehicle are described. In one implementation, a device implanted in an animal is activated when the device is within a predetermined distance of a vehicle. The vehicle receives a signal from the device and determines an approximate distance between the device and the vehicle. A symbol is flashed to a driver of the vehicle at a frequency that corresponds to the approximate distance between the device and the vehicle.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: September 10, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
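The abstract above maps distance to a flash rate rather than an intensity level. A minimal sketch of one such mapping follows; the maximum range and the 1-8 Hz band are assumptions of this sketch.

```python
def flash_frequency_hz(distance_m, max_range_m=50.0, max_hz=8.0, min_hz=1.0):
    """Distance-to-frequency sketch: the closer the implanted device,
    the faster the warning symbol flashes. Returns 0 Hz out of range."""
    if distance_m >= max_range_m:
        return 0.0
    frac = max(distance_m, 0.0) / max_range_m   # 0.0 close .. 1.0 far
    return max_hz - frac * (max_hz - min_hz)
```

An HMI loop would then toggle the dashboard symbol at the returned rate, so the driver perceives urgency directly from the flash speed.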
  • Patent number: 10373019
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: August 6, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Publication number: 20180350240
    Abstract: Example systems and methods for communicating animal proximity to a vehicle are described. In one implementation, a device implanted in an animal is activated when the device is within a predetermined distance of a vehicle. The vehicle receives a signal from the device and determines an approximate distance between the device and the vehicle. A symbol is flashed to a driver of the vehicle at a frequency that corresponds to the approximate distance between the device and the vehicle.
    Type: Application
    Filed: October 28, 2015
    Publication date: December 6, 2018
    Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
  • Publication number: 20180286243
    Abstract: Example systems and methods for detecting an animal proximate a vehicle are described. In one implementation, a wearable device carried by an animal is activated when the wearable device is within a predetermined distance of a vehicle. The vehicle receives a signal from the wearable device and determines an approximate distance between the wearable device and the vehicle. An alert is generated to warn a driver that an animal is near the vehicle. The alert has an intensity level that corresponds to the approximate distance between the wearable device and the vehicle.
    Type: Application
    Filed: October 28, 2015
    Publication date: October 4, 2018
    Inventors: Nithika Sivashankar, Scott Vincent Myers, Brielle Reiff, Madeline Jane Schrier
  • Publication number: 20180186369
    Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones and identifies sounds. The controller further identifies an estimated location of the sound origin and the type of sound, i.e., whether the source is a vehicle and, if so, the type of vehicle. The controller analyzes map data and attempts to identify a landmark within a tolerance of the estimated location. If a landmark is found corresponding to the estimated location and type of the sound origin, then the certainty is increased that the source of the sound is at that location and is that type of sound source. Collision avoidance is then performed with respect to the location of the sound origin and its type, with the certainty as augmented using the map data. Collision avoidance may include automatically actuating brake, steering, and accelerator actuators in order to avoid the location of the sound origin.
    Type: Application
    Filed: February 27, 2018
    Publication date: July 5, 2018
    Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
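The certainty-boosting step in the abstract above can be sketched as follows. The (x, y, type) landmark tuples, the fixed 0.2 certainty boost, and the snap-to-landmark behavior are assumptions of this sketch, not the patent's method.

```python
import math

def fuse_with_map(est_xy, est_type, base_certainty, landmarks, tolerance=15.0):
    """If map data places a landmark of the matching type within
    `tolerance` metres of the acoustic estimate, trust the detection
    more: snap to the landmark and raise certainty (capped at 1.0)."""
    for lx, ly, ltype in landmarks:
        if ltype == est_type and math.hypot(lx - est_xy[0],
                                            ly - est_xy[1]) <= tolerance:
            return (lx, ly), min(1.0, base_certainty + 0.2)
    # no corroborating landmark: keep the raw acoustic estimate
    return est_xy, base_certainty
```

A downstream collision-avoidance planner would then weight the sound source by the fused certainty when deciding whether to brake or steer around it.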
  • Patent number: 9937922
    Abstract: A controller for an autonomous vehicle receives audio signals from one or more microphones and identifies sounds. The controller further identifies an estimated location of the sound origin and the type of sound, i.e., whether the source is a vehicle and, if so, the type of vehicle. The controller analyzes map data and attempts to identify a landmark within a tolerance of the estimated location. If a landmark is found corresponding to the estimated location and type of the sound origin, then the certainty is increased that the source of the sound is at that location and is that type of sound source. Collision avoidance is then performed with respect to the location of the sound origin and its type, with the certainty as augmented using the map data. Collision avoidance may include automatically actuating brake, steering, and accelerator actuators in order to avoid the location of the sound origin.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: April 10, 2018
    Assignee: Ford Global Technologies, LLC
    Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
  • Patent number: 9881219
    Abstract: A controller for an autonomous vehicle receives an image stream from one or more imaging devices. The controller identifies vehicle images in the image stream. Vehicle images are compared to the color, shape, badging, markings, license plate, and driver of the autonomous vehicle. If the vehicle image is determined to match the autonomous vehicle, then the vehicle image is ignored as a potential obstacle. The location of a reflective surface that generated the vehicle image may be determined and added to a set of potential obstacles. The color and shape of a vehicle in a vehicle image may be evaluated first. Only if the color and shape in the vehicle image match the autonomous vehicle are other factors such as badging, markings, license plate, and driver considered. Vehicle images not matching the autonomous vehicle are included in a set of potential obstacles.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: January 30, 2018
    Assignee: Ford Global Technologies, LLC
    Inventors: Brielle Reiff, Madeline Jane Schrier, Nithika Sivashankar
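The staged comparison in the abstract above (cheap attributes first, detailed ones only on a coarse match) can be sketched as follows. Representing the attribute-extraction results as plain dicts with these particular keys is an assumption of this sketch.

```python
def is_self_reflection(candidate, ego):
    """Decide whether a detected vehicle image is a mirror reflection of
    the autonomous vehicle itself. Color and shape are checked first;
    badging, markings, license plate, and driver only on a coarse match."""
    if candidate["color"] != ego["color"] or candidate["shape"] != ego["shape"]:
        return False                  # coarse mismatch: treat as a real obstacle
    fine_keys = ("badging", "markings", "license_plate", "driver")
    return all(candidate[k] == ego[k] for k in fine_keys)
```

Detections for which this returns True would be dropped from the obstacle set, while the reflective surface's own location could still be added as a potential obstacle, as the abstract notes.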
  • Publication number: 20170206434
    Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
    Type: Application
    Filed: January 14, 2016
    Publication date: July 20, 2017
    Inventors: Vidya Nariyambut Murali, Madeline Jane Schrier
  • Publication number: 20170206440
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, with the remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 20, 2017
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali