Patents by Inventor Richard Nock

Richard Nock has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11521106
Abstract: This disclosure relates to learning with transformed data, such as determining multiple training samples from multiple data samples. Each of the multiple data samples comprises one or more feature values and a label that classifies that data sample. A processor determines each of the multiple training samples by randomly selecting a subset of the multiple data samples and combining the feature values of the data samples of the subset based on the label of each of those data samples. Since the training samples are combinations of randomly chosen data samples, the training samples can be provided to third parties without disclosing the actual training data. This is an advantage over existing methods in cases where the data is confidential and should therefore not be shared with a learner of a classifier, for example.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: December 6, 2022
    Assignee: National ICT Australia Limited
    Inventors: Richard Nock, Giorgio Patrini, Tiberio Caetano
  • Patent number: 11238364
Abstract: This disclosure relates to learning from distributed data. In particular, it relates to determining multiple first training samples from multiple first data samples. Each of the multiple first data samples comprises multiple first feature values and a first label that classifies that first data sample. A processor determines each of the multiple first training samples by selecting a first subset of the multiple first data samples such that the data samples of the first subset agree on one or more of the multiple first feature values, and combining the first feature values of the data samples of the first subset based on the first label of each of those data samples. The resulting training samples can be combined with training samples from other databases that share the same corresponding features, so entity matching is unnecessary.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: February 1, 2022
    Assignee: National ICT Australia Limited
    Inventors: Richard Nock, Giorgio Patrini
  • Publication number: 20180018584
Abstract: This disclosure relates to learning from distributed data. In particular, it relates to determining multiple first training samples from multiple first data samples. Each of the multiple first data samples comprises multiple first feature values and a first label that classifies that first data sample. A processor determines each of the multiple first training samples by selecting a first subset of the multiple first data samples such that the data samples of the first subset agree on one or more of the multiple first feature values, and combining the first feature values of the data samples of the first subset based on the first label of each of those data samples. The resulting training samples can be combined with training samples from other databases that share the same corresponding features, so entity matching is unnecessary.
    Type: Application
    Filed: February 12, 2016
    Publication date: January 18, 2018
    Inventors: Richard Nock, Giorgio Patrini
  • Publication number: 20170337487
Abstract: This disclosure relates to learning with transformed data, such as determining multiple training samples from multiple data samples. Each of the multiple data samples comprises one or more feature values and a label that classifies that data sample. A processor determines each of the multiple training samples by randomly selecting a subset of the multiple data samples and combining the feature values of the data samples of the subset based on the label of each of those data samples. Since the training samples are combinations of randomly chosen data samples, the training samples can be provided to third parties without disclosing the actual training data. This is an advantage over existing methods in cases where the data is confidential and should therefore not be shared with a learner of a classifier, for example.
    Type: Application
    Filed: October 23, 2015
    Publication date: November 23, 2017
    Inventors: Richard Nock, Giorgio Patrini, Tiberio Caetano
  • Publication number: 20110246080
Abstract: [Object] To provide a gene clustering tool that can cluster genes from data on gene expression level over time with high precision and without requiring a priori assumptions about the data. [Solving Means] Provided is a gene clustering program that performs at least: (1) a step S1 of calculating, from the data representing variation in gene expression level over time, a feature value reflecting the similarity between data; (2) a step S2 of calculating the eigenvectors of a similarity matrix M built from the feature values calculated for all combinations of the genes; (3) a step S3 of transforming the similarity matrix M into a Boolean matrix N while maintaining the eigenvalues associated with those eigenvectors; and (4) a step S4 of clustering the data based on the Boolean matrix N.
    Type: Application
    Filed: December 1, 2009
    Publication date: October 6, 2011
    Applicant: SONY CORPORATION
    Inventors: Natalia Polouliakh, Richard Nock, Frank Nielsen, Hiroaki Kitano
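
The subset-combination step described in the first abstract (randomly choose a subset of labelled samples, then combine their feature vectors according to each sample's label) can be sketched as follows. This is a minimal illustration, not the patented procedure: it assumes labels in {-1, +1} and a label-weighted sum as the combination rule, so each published training sample is an aggregate that does not expose any individual data sample.

```python
import numpy as np

def make_training_sample(X, y, rng, subset_size):
    """Combine a random subset of labelled samples into one training sample.

    Each selected sample contributes its feature vector weighted by its
    +/-1 label; the combined vector no longer reveals any individual row.
    """
    idx = rng.choice(len(X), size=subset_size, replace=False)
    return (y[idx, None] * X[idx]).sum(axis=0)

# Toy data: 6 samples, 3 features, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.choice([-1, 1], size=6)

training_samples = np.array(
    [make_training_sample(X, y, rng, subset_size=3) for _ in range(4)]
)
print(training_samples.shape)  # (4, 3): four aggregates, one per random subset
```

A learner can then be trained on `training_samples` alone; recovering any single original row from a sum of several randomly chosen, label-weighted rows is not straightforward, which is the confidentiality point the abstract makes.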
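
The second abstract's variant selects subsets that agree on one or more feature values rather than purely at random, so that aggregates built from different databases sharing those features can be aligned by feature value instead of by matching individual entities. A hedged sketch, where the single grouping column and the label-weighted sum are illustrative assumptions:

```python
import numpy as np

def blockwise_training_samples(X, y, key_col):
    """Combine samples that share a value in one feature column.

    Samples within each block (same value in key_col) are combined via a
    label-weighted sum. Another database with the same key feature can
    produce blocks keyed the same way, so the two sets of aggregates line
    up by key value and no per-entity record linkage is needed.
    """
    samples = {}
    for key in np.unique(X[:, key_col]):
        mask = X[:, key_col] == key
        samples[key] = (y[mask, None] * X[mask]).sum(axis=0)
    return samples

X = np.array([[0, 1.0, 2.0],
              [0, 0.5, 1.5],
              [1, 2.0, 0.5],
              [1, 1.0, 1.0]])
y = np.array([1, -1, 1, 1])

blocks = blockwise_training_samples(X, y, key_col=0)
print(sorted(blocks))  # block keys are the shared feature values 0 and 1
```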
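
The gene clustering abstract describes a four-step pipeline: time-series similarity, eigenvectors of the similarity matrix, a Boolean relaxation, then clustering. The sketch below follows only the shape of that pipeline; the concrete choices (Pearson correlation as the similarity, the sign pattern of the leading eigenvector as the Boolean matrix, a two-way split as the clustering step) are illustrative assumptions, not the patented eigenvalue-preserving transformation.

```python
import numpy as np

def cluster_expression_profiles(E):
    """Two-way spectral split of genes from time-series expression data.

    E has one row per gene and one column per time point. The steps mirror
    the abstract's S1-S4; each concrete choice here is illustrative.
    """
    # S1: feature values reflecting similarity (Pearson correlation here).
    M = np.corrcoef(E)
    # S2: eigenvectors of the similarity matrix.
    eigvals, eigvecs = np.linalg.eigh(M)
    # S3: Boolean matrix from the sign pattern of the leading eigenvector.
    leading = eigvecs[:, np.argmax(eigvals)]
    b = leading >= 0
    N = np.equal.outer(b, b)          # True where two genes fall on the same side
    # S4: cluster assignment read off the Boolean matrix.
    labels = N[0].astype(int)         # genes grouped with gene 0 vs. the rest
    return labels

# Toy data: two groups of genes with opposite temporal trends.
t = np.linspace(0, 1, 8)
E = np.vstack([np.sin(2 * np.pi * t) + 0.01 * i for i in range(3)]
              + [-np.sin(2 * np.pi * t) + 0.01 * i for i in range(3)])
labels = cluster_expression_profiles(E)
print(labels)  # genes 0-2 land in one cluster, genes 3-5 in the other
```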