Patents by Inventor Marco Scavuzzo

Marco Scavuzzo is a named inventor on the following patent filings. The listing includes both pending patent applications and patents granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11373115
    Abstract: Systems and methods are provided for training a machine-learned model on a large number of devices, where each device acquires a local set of training data without sharing data sets across devices. Each device trains the model on its own training data and asynchronously communicates a parameter vector from the trained model to a parameter server. The parameter server updates a master parameter vector and transmits it back to the respective device. (A minimal sketch of this update loop follows this entry.)
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: June 28, 2022
    Assignee: HERE Global B.V.
    Inventors: Michael Kopp, Moritz Neun, Michael Sprague, Amir Jalalirad, Marco Scavuzzo, Catalin Capota
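
The abstract above describes an asynchronous parameter-server exchange: devices train on private local data, and only parameter vectors travel to and from a central server. Below is a minimal sketch of that loop under simplifying assumptions; the linear model, the fixed mixing weight `alpha`, and the `ParameterServer`/`Device` names are all illustrative, not taken from the patent:

```python
import threading
import numpy as np

class ParameterServer:
    """Holds the master parameter vector; devices push updates asynchronously."""
    def __init__(self, dim, alpha=0.5):
        self.master = np.zeros(dim)   # master parameter vector
        self.alpha = alpha            # mixing weight for incoming device vectors
        self.lock = threading.Lock()  # serialize concurrent pushes

    def push_and_pull(self, device_params):
        # Blend the device's trained parameters into the master vector,
        # then return the updated master for the device to continue from.
        with self.lock:
            self.master = (1 - self.alpha) * self.master + self.alpha * device_params
            return self.master.copy()

class Device:
    """Trains locally on private data; only parameter vectors leave the device."""
    def __init__(self, server, local_x, local_y, lr=0.1):
        self.server = server
        self.x, self.y = local_x, local_y  # local data set, never shared
        self.params = np.zeros(local_x.shape[1])
        self.lr = lr

    def train_round(self, steps=10):
        # Local gradient-descent steps on a least-squares objective.
        for _ in range(steps):
            grad = self.x.T @ (self.x @ self.params - self.y) / len(self.y)
            self.params -= self.lr * grad
        # Asynchronous exchange: push the local vector, pull the new master.
        self.params = self.server.push_and_pull(self.params)

# Usage: several devices, each with its own data, updating independently.
rng = np.random.default_rng(0)
server = ParameterServer(dim=3)
devices = [Device(server, rng.normal(size=(32, 3)), rng.normal(size=32))
           for _ in range(4)]
threads = [threading.Thread(target=d.train_round) for d in devices]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each device pushes whenever its local round finishes, no device waits on the others; the lock only serializes the brief master-vector update.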
  • Publication number: 20200410288
    Abstract: Systems and methods are provided for managing machine learning processes in distributed, heterogeneous environments, which may include different types of devices with differing specifications, security requirements, and privacy concerns. The devices participate in complex machine learning tasks while maintaining both privacy and autonomy, and the systems and methods manage the lifecycle of how machine learning workloads are distributed. (A sketch of capability- and privacy-aware task assignment follows this entry.)
    Type: Application
    Filed: June 26, 2019
    Publication date: December 31, 2020
    Inventors: Catalin Capota, Michael Sprague, Marco Scavuzzo, Amir Jalalirad, Lyman Do, Bala Divakaruni
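
The abstract above is at the level of architecture; one concrete ingredient of managing heterogeneous devices is deciding which devices may run which workloads. The toy sketch below illustrates that idea only: the `DeviceSpec` and `Task` fields and the eligibility rules are assumptions for illustration, not the patented lifecycle mechanism:

```python
from dataclasses import dataclass

@dataclass
class DeviceSpec:
    # Illustrative device profile; a real system would track far more.
    device_id: str
    memory_mb: int
    has_gpu: bool
    allows_raw_data_upload: bool  # privacy constraint

@dataclass
class Task:
    task_id: str
    min_memory_mb: int
    needs_gpu: bool
    needs_raw_data: bool  # e.g. centralized training vs. on-device training

def eligible(device: DeviceSpec, task: Task) -> bool:
    """A device may run a task only if it meets the task's resource
    requirements and the task respects the device's privacy policy."""
    if device.memory_mb < task.min_memory_mb:
        return False
    if task.needs_gpu and not device.has_gpu:
        return False
    if task.needs_raw_data and not device.allows_raw_data_upload:
        return False
    return True

def assign(devices: list[DeviceSpec], task: Task) -> list[str]:
    # Return the IDs of all devices permitted to run this task.
    return [d.device_id for d in devices if eligible(d, task)]

fleet = [
    DeviceSpec("phone-1", 2048, False, False),
    DeviceSpec("edge-gw-1", 8192, True, True),
]
train_on_device = Task("federated-round-7", 1024, False, False)
print(assign(fleet, train_on_device))  # both devices qualify
```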
  • Publication number: 20200334524
    Abstract: Systems and methods are provided for training a model on a large number of devices where, for example, each device acquires a local set of training data without sharing data sets across the devices. Each device trains the model on its own training data and asynchronously communicates a parameter vector from the trained model to a parameter server. The parameter server updates a master parameter vector and transmits it back to the respective device. The update rate of the devices is decoupled from the amount of data available to each device and from the device's computational power by over- or under-sampling the local training data. (A sketch of this resampling step follows this entry.)
    Type: Application
    Filed: April 17, 2019
    Publication date: October 22, 2020
    Inventors: Michael Sprague, Amir Jalalirad, Marco Scavuzzo, Catalin Capota
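
The distinguishing detail in this abstract is decoupling each device's update rate from its data volume by over- or under-sampling local data. A minimal sketch of that resampling step, assuming a fixed per-round sample budget (the function name and `budget` parameter are illustrative):

```python
import numpy as np

def resample_to_budget(local_data, budget, rng):
    """Over- or under-sample a device's local data to a fixed per-round
    sample budget, so every device does the same amount of work per round
    regardless of how much data it holds."""
    n = len(local_data)
    # replace=True over-samples small datasets; large datasets are
    # under-sampled by drawing a budget-sized subset without replacement.
    idx = rng.choice(n, size=budget, replace=n < budget)
    return local_data[idx]

rng = np.random.default_rng(1)
small = np.arange(10)        # device with little data: over-sampled
large = np.arange(10_000)    # device with lots of data: under-sampled
print(len(resample_to_budget(small, 256, rng)))  # 256
print(len(resample_to_budget(large, 256, rng)))  # 256
```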
  • Publication number: 20190311298
    Abstract: Systems and methods are provided for training a machine-learned model on a large number of devices, where each device acquires a local set of training data without sharing data sets across devices. Each device trains the model on its own training data and asynchronously communicates a parameter vector from the trained model to a parameter server. The parameter server updates a master parameter vector and transmits it back to the respective device.
    Type: Application
    Filed: April 9, 2018
    Publication date: October 10, 2019
    Inventors: Michael Kopp, Moritz Neun, Michael Sprague, Amir Jalalirad, Marco Scavuzzo, Catalin Capota