Patents by Inventor Barnaby Dalton

Barnaby Dalton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11871057
    Abstract: A system and method for content aware monitoring of an output of a media channel by a media system is provided herein. In at least one embodiment, the method comprises: receiving a media assertion schedule comprising a schedule of assertion checks which allow validating that the output of the media channel, by the media system, is synchronized with an expected media channel output; receiving, from a signature generating module, at least one observed signature file; determining that the timestamp data, in each of the at least one received observed signature file, aligns with at least one timecode included in an assertion check in the media assertion schedule; identifying an assertion condition included in the assertion check; and validating the assertion condition using the observed media frame signatures included in each of the at least one received observed signature file.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: January 9, 2024
    Assignee: Evertz Microsystems Ltd.
    Inventors: Jeremy Blythe, Barnaby Dalton, Robert Beattie, Andrew Duncan
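The validation flow in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`AssertionCheck`, `validate_output`) and the string-equality signature check are assumptions.

```python
# Sketch of content-aware monitoring: align observed signatures with the
# timecodes of scheduled assertion checks, then validate each condition.
from dataclasses import dataclass

@dataclass
class AssertionCheck:
    timecode: int              # when the check applies
    expected_signature: str    # signature the channel output should match

def validate_output(schedule, observed):
    """Return pass/fail per timecode for observed (timecode, signature) pairs."""
    checks = {c.timecode: c for c in schedule}
    results = {}
    for timecode, signature in observed:
        check = checks.get(timecode)
        if check is None:
            continue  # timestamp does not align with any scheduled check
        results[timecode] = (signature == check.expected_signature)
    return results

schedule = [AssertionCheck(0, "sigA"), AssertionCheck(30, "sigB")]
observed = [(0, "sigA"), (30, "sigX"), (45, "sigC")]
print(validate_output(schedule, observed))  # {0: True, 30: False}
```

Observed signatures whose timestamps align with no scheduled check are simply skipped, mirroring the alignment step described in the abstract.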
  • Publication number: 20220400300
    Abstract: A system and method for content aware monitoring of an output of a media channel by a media system is provided herein. In at least one embodiment, the method comprises: receiving a media assertion schedule comprising a schedule of assertion checks which allow validating that the output of the media channel, by the media system, is synchronized with an expected media channel output; receiving, from a signature generating module, at least one observed signature file; determining that the timestamp data, in each of the at least one received observed signature file, aligns with at least one timecode included in an assertion check in the media assertion schedule; identifying an assertion condition included in the assertion check; and validating the assertion condition using the observed media frame signatures included in each of the at least one received observed signature file.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 15, 2022
    Applicant: Evertz Microsystems Ltd.
    Inventors: Jeremy Blythe, Barnaby Dalton, Robert Beattie, Andrew Duncan
  • Patent number: 11354230
    Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: June 7, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
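The key property of the symmetric heap described above is that a single allocator process hands out block offsets, and every process applies the same offset within its own partition, keeping all partitions isomorphically fragmented. A minimal sketch of that idea, with illustrative names and a simple bump-pointer allocator standing in for the real block allocator:

```python
# One allocator decides offsets; every partition starts at the same virtual
# address and shares a common "symmetric break", so a returned offset is
# valid in every process's partition.
class SymmetricAllocator:
    def __init__(self, base_addr, partition_size):
        self.base = base_addr          # same starting virtual address everywhere
        self.brk = 0                   # global symmetric break (bump pointer)
        self.size = partition_size

    def alloc(self, nbytes):
        """Reserve nbytes in every partition; return the common offset."""
        if self.brk + nbytes > self.size:
            raise MemoryError("symmetric heap exhausted")
        offset = self.brk
        self.brk += nbytes
        return offset                  # identical in all partitions

allocator = SymmetricAllocator(base_addr=0x1000_0000, partition_size=1 << 20)
off_a = allocator.alloc(256)
off_b = allocator.alloc(1024)
# Each process resolves a block at (base + offset) in its own partition.
print(off_a, off_b)  # 0 256
```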
  • Publication number: 20210176118
    Abstract: Systems and methods are provided for processing data streams. The system includes at least one data source for transmitting a data stream to a data transmission network; at least one specific purpose processor in communication with the data transmission network, wherein the specific purpose processor is configured to provide a specific data processing operation; a controller coupled to the data transmission network, the controller configured to: determine that the data stream requires processing according to a data processing operation; identify a data processing configuration corresponding to the data processing operation; and route the data stream to the at least one specific purpose processor.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 10, 2021
    Applicant: Evertz Microsystems Ltd.
    Inventors: Rakesh Thakor Patel, Jeff Wei, Barnaby Dalton
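The controller logic above reduces to: decide whether a stream needs processing, look up the matching configuration, and route it to a special-purpose processor. A toy sketch; the stream types, configuration table, and processor names are all illustrative assumptions.

```python
# Hypothetical mapping from stream type to processing configuration.
PROCESSING_CONFIGS = {
    "uncompressed_video": {"operation": "encode", "processor": "gpu-encoder-1"},
    "raw_audio": {"operation": "normalize", "processor": "dsp-2"},
}

def route_stream(stream):
    """Route a stream to a specific-purpose processor, or pass it through."""
    config = PROCESSING_CONFIGS.get(stream["type"])
    if config is None:
        return ("passthrough", None)   # no processing operation required
    return ("route", config["processor"])

print(route_stream({"type": "uncompressed_video"}))  # ('route', 'gpu-encoder-1')
print(route_stream({"type": "subtitles"}))           # ('passthrough', None)
```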
  • Patent number: 10942671
    Abstract: A circuit for a multistage sequential data process includes a plurality of memory units. Each memory unit is associated with a stage of the sequential data process wherein, for each data set input to the stage, the stage provides an intermediate data set for storage in the associated memory unit for use in at least one subsequent stage of the sequential data process, where each of the plurality of memory units is sized based on the relative locations of the stage providing the intermediate data set and the at least one subsequent stage in the sequential data process.
    Type: Grant
    Filed: April 25, 2016
    Date of Patent: March 9, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Vanessa Courville, Manuel Saldana, Barnaby Dalton
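The sizing rule in the abstract above amounts to: a memory unit carrying a stage's intermediate data to a later stage must buffer it for as many stage-cycles as the two stages are apart. A numeric sketch, where `per_set_bytes` and the pipeline layout are illustrative assumptions:

```python
# Size each memory unit by the distance between the producer stage and the
# consumer stage of the intermediate data it holds.
def memory_unit_sizes(edges, per_set_bytes):
    """edges: list of (producer_stage, consumer_stage) index pairs."""
    return {(src, dst): (dst - src) * per_set_bytes for src, dst in edges}

# A 4-stage pipeline where stage 0 also feeds stage 3 (a skip connection),
# so that memory unit must hold data for three stage-cycles.
sizes = memory_unit_sizes([(0, 1), (1, 2), (2, 3), (0, 3)], per_set_bytes=64)
print(sizes)  # {(0, 1): 64, (1, 2): 64, (2, 3): 64, (0, 3): 192}
```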
  • Patent number: 10896366
    Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y ∈ ℝ^m is the output vector, x ∈ ℝ^n is the input vector, and W ∈ ℝ^(m×n) is a matrix of connection parameters, matrices U_ij and V_ij are defined and submatrices W_ij are computed as the product of U_ij and V_ij, so that W_ij = V_ij U_ij, and W is obtained by appending the submatrices W_ij.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: January 19, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Serdar Sozubek, Barnaby Dalton, Vanessa Courville, Graham Taylor
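The factorization above can be illustrated numerically: each submatrix of W is the product of two smaller factors, so a block stores r·(m+n) values instead of m·n. A NumPy sketch with illustrative shapes (the grid size and rank are assumptions):

```python
# Assemble W from submatrices W_ij = V_ij @ U_ij and compare parameter counts.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 4, 6, 1        # each W_ij is 4x6, built from rank-r factors
blocks = []
for i in range(2):       # a 2x2 grid of submatrices -> W is 8x12
    row = []
    for j in range(2):
        V = rng.standard_normal((m, r))
        U = rng.standard_normal((r, n))
        row.append(V @ U)             # W_ij = V_ij U_ij
    blocks.append(row)
W = np.block(blocks)

stored = 4 * r * (m + n)  # parameters actually stored (4 blocks)
dense = W.size            # parameters a dense W would need
print(W.shape, stored, dense)  # (8, 12) 40 96
```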
  • Patent number: 10776697
    Abstract: A method for training a neural network comprising at least one layer comprising a plurality of input nodes, a plurality of output nodes, and a plurality of connections for connecting each one of the plurality of input nodes to each one of the plurality of output nodes, is provided. The method comprises pseudo-randomly selecting a subset of the plurality of connections, each connection of the plurality of connections having associated therewith a weight parameter and a probability of being retained in the neural network, generating output data by feeding input data over the subset of connections, computing an error between the generated output data and desired output data, and, for at least one connection in the subset of connections, determining a contribution of the weight parameter to the error and updating the probability of being retained in the neural network accordingly.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: September 15, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Sepideh Kharaghani, Vanessa Courville, Barnaby Dalton
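A toy sketch of the training loop described above: connections are sampled by a per-connection retain probability, the error is computed on the sampled subnetwork, and each sampled connection's probability is adjusted by its weight's contribution to the error. The specific update rule, learning rate, and clamping below are illustrative assumptions, not the patented method; weight updates themselves are omitted to focus on the probability update.

```python
# Single linear neuron with three connections; only the retain probabilities
# are updated, based on each kept weight's contribution to the error.
import random

random.seed(1)
weights = [0.5, -0.3, 0.8]
retain_p = [0.9, 0.9, 0.9]      # probability each connection is kept

def train_step(x, target, lr=0.05):
    kept = [i for i in range(3) if random.random() < retain_p[i]]
    y = sum(weights[i] * x[i] for i in kept)       # forward over the subset
    err = y - target                               # signed output error
    for i in kept:
        contrib = abs(err * x[i] * weights[i])     # weight's error contribution
        retain_p[i] = max(0.05, min(1.0, retain_p[i] - lr * contrib))
    return err

for _ in range(20):
    train_step([1.0, 2.0, -1.0], target=0.2)
print(retain_p)
```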
  • Patent number: 10643126
    Abstract: A memory control unit for handling data stored in a memory device includes a first interface to an interconnection with at least one memory bank; a second interface for communicating with a data requesting unit; and a memory quantization unit. The memory quantization unit is configured to: obtain, via the first interface, a first weight value from the at least one memory bank; quantize the first weight value to generate at least one quantized weight value having a shorter bit length than a bit length of the first weight value; and communicate the at least one quantized weight value to the data requesting unit via the second interface.
    Type: Grant
    Filed: July 14, 2016
    Date of Patent: May 5, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Manuel Saldana, Barnaby Dalton, Vanessa Courville
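The quantization step above can be sketched with a simple symmetric scheme: a full-precision weight fetched from the memory bank is mapped to a shorter bit-length before being handed to the requesting unit. The 8-bit range and scale are illustrative choices, not taken from the patent.

```python
# Map a float weight to a signed 8-bit integer and back.
def quantize(weight, scale=127.0, max_abs=1.0):
    """Map a float in [-max_abs, max_abs] to a signed 8-bit integer."""
    clipped = max(-max_abs, min(max_abs, weight))
    return round(clipped / max_abs * scale)

def dequantize(q, scale=127.0, max_abs=1.0):
    return q / scale * max_abs

w = 0.73
q = quantize(w)                    # 8 bits instead of e.g. 32
print(q, round(dequantize(q), 3))  # 93 0.732
```

The requesting unit then works with the short quantized value, trading a small reconstruction error for reduced memory traffic.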
  • Patent number: 10509996
    Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y is the output vector, x is the input vector, and W is a matrix of connection parameters, vectors u_ij and v_ij are defined and submatrices W_ij are computed as the outer product of u_ij and v_ij, so that W_ij = v_ij ⊗ u_ij, and W is obtained by appending the submatrices W_ij.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: December 17, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Barnaby Dalton, Serdar Sozubek, Manuel Saldana, Vanessa Courville
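This variant replaces each submatrix with a rank-1 outer product of two vectors, so a block needs only m + n stored values instead of m·n. A small NumPy sketch with illustrative sizes:

```python
# Build a rank-1 submatrix W_ij as the outer product of v_ij and u_ij.
import numpy as np

u = np.array([1.0, 2.0, 3.0])        # length n
v = np.array([4.0, 5.0])             # length m
W_ij = np.outer(v, u)                # (m, n) rank-1 submatrix

print(W_ij.shape)                         # (2, 3)
print(int(np.linalg.matrix_rank(W_ij)))  # 1
```

Compared with the sibling patent's matrix-product scheme (W_ij = V_ij U_ij), this is the rank-1 special case, giving the largest parameter reduction.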
  • Publication number: 20190012258
    Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
    Type: Application
    Filed: August 28, 2018
    Publication date: January 10, 2019
    Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
  • Patent number: 10108539
    Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: October 23, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
  • Patent number: 10108540
    Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: October 23, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
  • Publication number: 20180300629
    Abstract: A method for training a neural network comprising at least one layer comprising a plurality of input nodes, a plurality of output nodes, and a plurality of connections for connecting each one of the plurality of input nodes to each one of the plurality of output nodes, is provided. The method comprises pseudo-randomly selecting a subset of the plurality of connections, each connection of the plurality of connections having associated therewith a weight parameter and a probability of being retained in the neural network, generating output data by feeding input data over the subset of connections, computing an error between the generated output data and desired output data, and, for at least one connection in the subset of connections, determining a contribution of the weight parameter to the error and updating the probability of being retained in the neural network accordingly.
    Type: Application
    Filed: April 18, 2017
    Publication date: October 18, 2018
    Inventors: Sepideh KHARAGHANI, Vanessa COURVILLE, Barnaby DALTON
  • Patent number: 9904922
    Abstract: A computing system includes at least one processor and at least one module operable by the at least one processor to calculate a tail of a first dataset by determining elements of the first dataset that fall outside of a specified percentile, and determine locations of the first dataset at which elements of the first dataset that fall outside of the specified percentile are located. The at least one module may be operable to calculate a tail of a second dataset by populating a data structure with elements of the second dataset that correspond to the locations of the first dataset, and determining, using the data structure, elements of the second dataset that fall outside of the specified percentile. The at least one module may be operable to output an indication of at least one of the tail of the first dataset or the tail of the second dataset.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: February 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Robert J. Blainey, Barnaby Dalton, Louis Ly, James A. Sedgwick, Lior Velichover, Kai-Ting A. Wang
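The two-dataset tail trick above can be sketched as: compute the tail of the first dataset exactly, remember where those tail elements live, then seed the second dataset's tail search with the same locations (useful when the datasets are highly correlated). The code below is illustrative only and uses an upper tail; the patented data structure is not reproduced.

```python
# Find upper-tail element locations in one dataset, then reuse them to seed
# the tail computation for a correlated second dataset.
def tail_locations(data, percentile):
    """Indices of elements above the given percentile (upper tail)."""
    cutoff_rank = int(len(data) * percentile / 100.0)
    threshold = sorted(data)[cutoff_rank - 1]
    return [i for i, x in enumerate(data) if x > threshold]

first = [3, 9, 1, 7, 8, 2, 6, 4, 5, 10]
locs = tail_locations(first, 80)               # tail of the first dataset
second = [4, 8, 2, 6, 9, 1, 7, 3, 5, 11]
candidates = {i: second[i] for i in locs}      # seed from first's locations
print(locs, candidates)  # [1, 9] {1: 8, 9: 11}
```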
  • Patent number: 9892411
    Abstract: A computing system includes at least one processor and at least one module operable by the at least one processor to calculate a tail of a first dataset by determining elements of the first dataset that fall outside of a specified percentile, and determine locations of the first dataset at which elements of the first dataset that fall outside of the specified percentile are located. The at least one module may be operable to calculate a tail of a second dataset by populating a data structure with elements of the second dataset that correspond to the locations of the first dataset, and determining, using the data structure, elements of the second dataset that fall outside of the specified percentile. The at least one module may be operable to output an indication of at least one of the tail of the first dataset or the tail of the second dataset.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Robert J. Blainey, Barnaby Dalton, Louis Ly, James A. Sedgwick, Lior Velichover, Kai-Ting A. Wang
  • Publication number: 20180039884
    Abstract: A system for training a neural network includes a first set of neural network units and a second set of neural networking units. Each neural network unit in the first set is configured to compute parameter update data for one of a plurality of instances of a first portion of the neural network. Each neural network unit in the first set includes a communication interface for communicating its parameter update data for combination with parameter update data from another neural network unit in the first set. Each neural network unit in the second set is configured to compute parameter update data for one of a plurality of instances of a second portion of the neural network. Each neural network unit in the second set includes a communication interface for communicating its parameter update data for combination with parameter update data from another neural network unit in the second set.
    Type: Application
    Filed: August 3, 2016
    Publication date: February 8, 2018
    Inventors: Barnaby DALTON, Vanessa COURVILLE, Manuel SALDANA
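The arrangement above has each set of units holding replicas of one portion of the network and combining parameter updates within the set, in the style of an all-reduce. A minimal sketch; the mean-combine rule and the numbers are illustrative assumptions.

```python
# Average parameter updates across units that hold replicas of one portion.
def combine_updates(per_unit_updates):
    """Element-wise mean of each unit's parameter update vector."""
    n = len(per_unit_updates)
    return [sum(vals) / n for vals in zip(*per_unit_updates)]

# Two units per set, each computing updates for its replica of the portion.
set1 = [[0.25, -0.5], [0.75, 0.5]]   # first portion (e.g. early layers)
set2 = [[1.0], [3.0]]                # second portion (e.g. later layers)
print(combine_updates(set1))  # [0.5, 0.0]
print(combine_updates(set2))  # [2.0]
```

Because combination happens only within a set, the two portions of the network can be updated independently and in parallel.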
  • Publication number: 20180018560
    Abstract: A memory control unit for handling data stored in a memory device includes a first interface to an interconnection with at least one memory bank; a second interface for communicating with a data requesting unit; and a memory quantization unit. The memory quantization unit is configured to: obtain, via the first interface, a first weight value from the at least one memory bank; quantize the first weight value to generate at least one quantized weight value having a shorter bit length than a bit length of the first weight value; and communicate the at least one quantized weight value to the data requesting unit via the second interface.
    Type: Application
    Filed: July 14, 2016
    Publication date: January 18, 2018
    Inventors: Manuel SALDANA, Barnaby DALTON, Vanessa COURVILLE
  • Publication number: 20170337465
    Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y ∈ ℝ^m is the output vector, x ∈ ℝ^n is the input vector, and W ∈ ℝ^(m×n) is a matrix of connection parameters, matrices U_ij and V_ij are defined and submatrices W_ij are computed as the product of U_ij and V_ij, so that W_ij = V_ij U_ij, and W is obtained by appending the submatrices W_ij.
    Type: Application
    Filed: March 8, 2017
    Publication date: November 23, 2017
    Inventors: Serdar SOZUBEK, Barnaby DALTON, Vanessa COURVILLE, Graham TAYLOR
  • Publication number: 20170337463
    Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y is the output vector, x is the input vector, and W is a matrix of connection parameters, vectors u_ij and v_ij are defined and submatrices W_ij are computed as the outer product of u_ij and v_ij, so that W_ij = v_ij ⊗ u_ij, and W is obtained by appending the submatrices W_ij.
    Type: Application
    Filed: September 7, 2016
    Publication date: November 23, 2017
    Inventors: Barnaby DALTON, Serdar SOZUBEK, Manuel SALDANA, Vanessa COURVILLE
  • Publication number: 20170308324
    Abstract: A circuit for a multistage sequential data process includes a plurality of memory units. Each memory unit is associated with a stage of the sequential data process wherein, for each data set input to the stage, the stage provides an intermediate data set for storage in the associated memory unit for use in at least one subsequent stage of the sequential data process, where each of the plurality of memory units is sized based on the relative locations of the stage providing the intermediate data set and the at least one subsequent stage in the sequential data process.
    Type: Application
    Filed: April 25, 2016
    Publication date: October 26, 2017
    Inventors: Vanessa COURVILLE, Manuel SALDANA, Barnaby DALTON