Patents by Inventor Giorgio Patrini

Giorgio Patrini has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11521106
    Abstract: This disclosure relates to learning with transformed data such as determining multiple training samples from multiple data samples. Each of the multiple data samples comprises one or more feature values and a label that classifies that data sample. A processor determines each of the multiple training samples by randomly selecting a subset of the multiple data samples, and combining the feature values of the data samples of the subset based on the label of each of the data samples of the subset. Since the training samples are combinations of randomly chosen data samples, the training samples can be provided to third parties without disclosing the actual training data. This is an advantage over existing methods in cases where the data is confidential and should therefore not be shared with a learner of a classifier, for example.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: December 6, 2022
    Assignee: National ICT Australia Limited
    Inventors: Richard Nock, Giorgio Patrini, Tiberio Caetano
  • Patent number: 11238364
    Abstract: This disclosure relates to learning from distributed data. In particular, it relates to determining multiple first training samples from multiple first data samples. Each of the multiple first data samples comprises multiple first feature values and a first label that classifies that first data sample. A processor determines each of the multiple first training samples by selecting a first subset of the multiple first data samples such that the first subset comprises data samples with corresponding one or more of the multiple first feature values, and combining the first feature values of the data samples of the first subset based on the first label of each of the first data samples of the first subset. The resulting training samples can be combined with training samples from other databases that share the same corresponding features, and entity matching is unnecessary.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: February 1, 2022
    Assignee: National ICT Australia Limited
    Inventors: Richard Nock, Giorgio Patrini
  • Patent number: 11150657
    Abstract: A lossy data compressor for physical measurement data, comprising a parametrized mapping network that, when applied to a measurement data point x in a space X, produces a point z in a lower-dimensional manifold Z, and configured to provide a point z on manifold Z as output in response to receiving a data point x as input, wherein the manifold Z is a continuous hypersurface that only admits fully continuous paths between any two points on the hypersurface; and the parameters θ of the mapping network are trainable or trained towards an objective that comprises minimizing, on the manifold Z, a distance between a given prior distribution P_Z and a distribution P_Q induced on manifold Z by mapping a given set P_D of physical measurement data from X onto Z using the mapping network, according to a given distance measure.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: October 19, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Marcello Carioni, Giorgio Patrini, Max Welling, Patrick Forré, Tim Genewein
  • Publication number: 20190369619
    Abstract: A lossy data compressor for physical measurement data, comprising a parametrized mapping network that, when applied to a measurement data point x in a space X, produces a point z in a lower-dimensional manifold Z, and configured to provide a point z on manifold Z as output in response to receiving a data point x as input, wherein the manifold Z is a continuous hypersurface that only admits fully continuous paths between any two points on the hypersurface; and the parameters θ of the mapping network are trainable or trained towards an objective that comprises minimizing, on the manifold Z, a distance between a given prior distribution P_Z and a distribution P_Q induced on manifold Z by mapping a given set P_D of physical measurement data from X onto Z using the mapping network, according to a given distance measure.
    Type: Application
    Filed: May 23, 2019
    Publication date: December 5, 2019
    Inventors: Marcello Carioni, Giorgio Patrini, Max Welling, Patrick Forré, Tim Genewein
  • Publication number: 20180018584
    Abstract: This disclosure relates to learning from distributed data. In particular, it relates to determining multiple first training samples from multiple first data samples. Each of the multiple first data samples comprises multiple first feature values and a first label that classifies that first data sample. A processor determines each of the multiple first training samples by selecting a first subset of the multiple first data samples such that the first subset comprises data samples with corresponding one or more of the multiple first feature values, and combining the first feature values of the data samples of the first subset based on the first label of each of the first data samples of the first subset. The resulting training samples can be combined with training samples from other databases that share the same corresponding features, and entity matching is unnecessary.
    Type: Application
    Filed: February 12, 2016
    Publication date: January 18, 2018
    Inventors: Richard Nock, Giorgio Patrini
  • Publication number: 20170337487
    Abstract: This disclosure relates to learning with transformed data such as determining multiple training samples from multiple data samples. Each of the multiple data samples comprises one or more feature values and a label that classifies that data sample. A processor determines each of the multiple training samples by randomly selecting a subset of the multiple data samples, and combining the feature values of the data samples of the subset based on the label of each of the data samples of the subset. Since the training samples are combinations of randomly chosen data samples, the training samples can be provided to third parties without disclosing the actual training data. This is an advantage over existing methods in cases where the data is confidential and should therefore not be shared with a learner of a classifier, for example.
    Type: Application
    Filed: October 23, 2015
    Publication date: November 23, 2017
    Inventors: Richard Nock, Giorgio Patrini, Tiberio Caetano