Patents by Inventor Barnaby Dalton
Barnaby Dalton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11871057
Abstract: A system and method for content aware monitoring of an output of a media channel by a media system is provided herein. In at least one embodiment, the method comprises: receiving a media assertion schedule comprising a schedule of assertion checks which allow validating that the output of the media channel, by the media system, is synchronized with an expected media channel output; receiving, from a signature generating module, at least one observed signature file; determining that the timestamp data, in each of the at least one received observed signature file, aligns with at least one timecode included in an assertion check in the media assertion schedule; identifying an assertion condition included in the assertion check; and validating the assertion condition using the observed media frame signatures included in each of the at least one received observed signature file.
Type: Grant
Filed: June 14, 2022
Date of Patent: January 9, 2024
Assignee: Evertz Microsystems Ltd.
Inventors: Jeremy Blythe, Barnaby Dalton, Robert Beattie, Andrew Duncan
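The validation loop described in this abstract can be sketched in Python. Everything here is a hypothetical stand-in: `AssertionCheck` models one entry of the assertion schedule, the `observed` dictionary models timestamp-to-signature data from an observed signature file, and matching a timecode to a timestamp is reduced to exact equality.

```python
from dataclasses import dataclass

@dataclass
class AssertionCheck:
    """One entry in a media assertion schedule (hypothetical structure)."""
    timecode: int              # frame timecode this check applies to
    expected_signature: str    # signature the observed frame should match

def validate(schedule, observed):
    """Check observed data (timestamp -> frame signature) against the schedule.

    Returns the timecodes whose assertion condition failed; checks whose
    timecode has no aligned observed timestamp are skipped.
    """
    failures = []
    for check in schedule:
        if check.timecode in observed:                       # timestamps align
            if observed[check.timecode] != check.expected_signature:
                failures.append(check.timecode)              # condition failed
    return failures

schedule = [AssertionCheck(100, "sig-a"), AssertionCheck(200, "sig-b")]
observed = {100: "sig-a", 200: "sig-x"}
print(validate(schedule, observed))  # frame 200 does not match
```

A real monitor would compare perceptual frame signatures with a tolerance rather than strings for equality; the control flow, though, is the same.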
-
Publication number: 20220400300
Abstract: A system and method for content aware monitoring of an output of a media channel by a media system is provided herein. In at least one embodiment, the method comprises: receiving a media assertion schedule comprising a schedule of assertion checks which allow validating that the output of the media channel, by the media system, is synchronized with an expected media channel output; receiving, from a signature generating module, at least one observed signature file; determining that the timestamp data, in each of the at least one received observed signature file, aligns with at least one timecode included in an assertion check in the media assertion schedule; identifying an assertion condition included in the assertion check; and validating the assertion condition using the observed media frame signatures included in each of the at least one received observed signature file.
Type: Application
Filed: June 14, 2022
Publication date: December 15, 2022
Applicant: Evertz Microsystems Ltd.
Inventors: Jeremy Blythe, Barnaby Dalton, Robert Beattie, Andrew Duncan
-
Patent number: 11354230
Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
Type: Grant
Filed: August 28, 2018
Date of Patent: June 7, 2022
Assignee: International Business Machines Corporation
Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
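A toy model of the idea, with hypothetical names and a Python list standing in for each partition's virtual memory: because a single allocator process picks every offset and applies the same allocation to all partitions, the partitions fragment identically (isomorphically), so a block sits at the same virtual address in every process.

```python
class SymmetricHeap:
    """Toy model of a symmetric heap; real PGAS runtimes manage virtual
    memory across processes, not Python lists."""

    def __init__(self, num_processes, base=0x1000):
        self.base = base   # same starting virtual address in every partition
        self.brk = base    # global symmetric break, shared by all partitions
        # one (address, size) layout per process partition
        self.partitions = [[] for _ in range(num_processes)]

    def alloc(self, size):
        """Allocator process: choose one offset, apply it to every partition."""
        addr = self.brk
        self.brk += size
        for part in self.partitions:
            part.append((addr, size))   # identical fragmentation everywhere
        return addr

heap = SymmetricHeap(num_processes=4)
a = heap.alloc(64)
b = heap.alloc(128)
# Every partition has the same layout, so remote addresses need no translation.
assert all(part == heap.partitions[0] for part in heap.partitions)
print(hex(a), hex(b))
```

Keeping the layouts isomorphic is what lets a process compute a remote address arithmetically instead of consulting a per-process allocation table.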
-
Publication number: 20210176118
Abstract: Systems and methods are provided for processing data streams. The system includes at least one data source for transmitting a data stream to a data transmission network; at least one specific purpose processor in communication with the data transmission network, wherein the specific purpose processor is configured to provide a specific data processing operation; and a controller coupled to the data transmission network, the controller configured to: determine that the data stream requires processing according to a data processing operation; identify a data processing configuration corresponding to the data processing operation; and route the data stream to the at least one specific purpose processor.
Type: Application
Filed: December 4, 2020
Publication date: June 10, 2021
Applicant: Evertz Microsystems Ltd.
Inventors: Rakesh Thakor Patel, Jeff Wei, Barnaby Dalton
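The controller's decision can be sketched as a lookup-and-route step. The table, processor names, and configuration strings below are all hypothetical; the point is only the control flow: determine the operation, identify its configuration, route the stream.

```python
# Hypothetical table mapping a required data processing operation to the
# specific purpose processor (and its configuration) that can perform it.
PROCESSOR_CONFIGS = {
    "scale": {"processor": "hw-scaler", "config": "scale-1080p"},
    "encode": {"processor": "hw-encoder", "config": "h264-high"},
}

def route_stream(stream_id, operation):
    """Controller sketch: identify the configuration corresponding to the
    operation the stream requires, then return the routing decision."""
    if operation not in PROCESSOR_CONFIGS:
        raise ValueError(f"no processor configured for {operation!r}")
    return {"stream": stream_id, **PROCESSOR_CONFIGS[operation]}

print(route_stream("camera-1", "encode"))
```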
-
Patent number: 10942671
Abstract: A circuit for a multistage sequential data process includes a plurality of memory units. Each memory unit is associated with a stage of the sequential data process; for each data set inputted to the stage, the stage provides an intermediate data set for storage in the associated memory unit, for use in at least one subsequent stage of the sequential data process. Each of the plurality of memory units is sized based on the relative locations, in the sequential data process, of the stage providing the intermediate data set and the at least one subsequent stage.
Type: Grant
Filed: April 25, 2016
Date of Patent: March 9, 2021
Assignee: Huawei Technologies Co., Ltd.
Inventors: Vanessa Courville, Manuel Saldana, Barnaby Dalton
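The sizing rule can be illustrated with a small calculation. The simplifying assumption here (not stated in the abstract) is that one data set enters the pipeline per cycle, so a result consumed k stages downstream must be buffered for k data sets in flight.

```python
def buffer_size(producer_stage, consumer_stage, data_set_bytes):
    """Size a memory unit from the relative locations of the stage that
    produces an intermediate data set and the stage that consumes it.

    Assumption for this sketch: one data set per cycle, so a consumer
    k stages downstream needs storage for k data sets.
    """
    distance = consumer_stage - producer_stage
    if distance < 1:
        raise ValueError("consumer must be downstream of producer")
    return distance * data_set_bytes

print(buffer_size(1, 2, 4096))   # consumed by the next stage: one data set
print(buffer_size(1, 4, 4096))   # consumed three stages later: three data sets
```

Sizing each memory unit individually, instead of giving every stage worst-case storage, is what saves area in the circuit.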
-
Patent number: 10896366
Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y ∈ R^m is the output vector, x ∈ R^n is the input vector, and W ∈ R^(m×n) is a matrix of connection parameters, matrices Uij and Vij are defined and submatrices Wij are computed as the product of Uij and Vij, so that Wij = VijUij, and W is obtained by appending the submatrices Wij.
Type: Grant
Filed: March 8, 2017
Date of Patent: January 19, 2021
Assignee: Huawei Technologies Co., Ltd.
Inventors: Serdar Sozubek, Barnaby Dalton, Vanessa Courville, Graham Taylor
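The parameter saving is easy to see on a tiny example. The numbers below are hypothetical: a 2×3 submatrix Wij stored as the product of Vij (2×1) and Uij (1×3) keeps 2 + 3 = 5 parameters instead of 6; with larger blocks and inner dimension r < min(m, n), the savings grow.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

Vij = [[1], [2]]          # 2 x 1 factor
Uij = [[3, 4, 5]]         # 1 x 3 factor
Wij = matmul(Vij, Uij)    # reconstructed 2 x 3 submatrix, rank 1
print(Wij)                # -> [[3, 4, 5], [6, 8, 10]]
```

The layer never stores Wij itself; it stores the two factors and reconstructs (or applies) the product on the fly.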
-
Patent number: 10776697
Abstract: A method is provided for training a neural network comprising at least one layer, the layer comprising a plurality of input nodes, a plurality of output nodes, and a plurality of connections connecting each one of the plurality of input nodes to each one of the plurality of output nodes. The method comprises pseudo-randomly selecting a subset of the plurality of connections, each connection having associated therewith a weight parameter and a probability of being retained in the neural network; generating output data by feeding input data over the subset of connections; computing an error between the generated output data and desired output data; and, for at least one connection in the subset, determining a contribution of the weight parameter to the error and updating the probability of being retained in the neural network accordingly.
Type: Grant
Filed: April 18, 2017
Date of Patent: September 15, 2020
Assignee: Huawei Technologies Co., Ltd.
Inventors: Sepideh Kharaghani, Vanessa Courville, Barnaby Dalton
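One training step of this scheme can be sketched as follows. The specific update rule (subtracting a scaled error contribution from the retention probability) is an assumption for illustration; the abstract only says the probability is updated according to the contribution.

```python
import random

def select_connections(probs, rng):
    """Pseudo-randomly retain each connection with its current probability."""
    return [i for i, p in enumerate(probs) if rng.random() < p]

def update_retention(probs, contributions, lr=0.1):
    # Assumed rule for this sketch: connections whose weights contribute
    # more to the error become less likely to be retained next time.
    return [max(0.0, min(1.0, p - lr * c))
            for p, c in zip(probs, contributions)]

rng = random.Random(0)
probs = [0.9, 0.9, 0.9]                 # per-connection retention probabilities
subset = select_connections(probs, rng)  # connections used this forward pass
# Suppose connection 1's weight contributed most of the computed error:
probs = update_retention(probs, [0.0, 1.0, 0.1])
print(subset, probs)
```

Unlike classic dropout, where the drop probability is a fixed hyperparameter, here each connection's retention probability is itself learned from its error contribution.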
-
Patent number: 10643126
Abstract: A memory control unit for handling data stored in a memory device includes a first interface to an interconnection with at least one memory bank; a second interface for communicating with a data requesting unit; and a memory quantization unit. The memory quantization unit is configured to: obtain, via the first interface, a first weight value from the at least one memory bank; quantize the first weight value to generate at least one quantized weight value having a shorter bit length than a bit length of the first weight value; and communicate the at least one quantized weight value to the data requesting unit via the second interface.
Type: Grant
Filed: July 14, 2016
Date of Patent: May 5, 2020
Assignee: Huawei Technologies Co., Ltd.
Inventors: Manuel Saldana, Barnaby Dalton, Vanessa Courville
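A toy model of the data path: the memory control unit reads a full-precision weight over its first interface and hands the requester a shorter fixed-point code over its second. The fixed-point scheme below is an assumption; the abstract covers quantization generally, not this particular formula.

```python
def quantize(weight, frac_bits=4):
    """Quantize a float weight to a short fixed-point code (assumed scheme)."""
    return round(weight * (1 << frac_bits))

def dequantize(code, frac_bits=4):
    """Recover an approximate weight from its fixed-point code."""
    return code / (1 << frac_bits)

class MemoryControlUnit:
    """Toy model: the 'bank' list plays the memory bank behind the first
    interface; read_quantized plays the second interface to the requester."""
    def __init__(self, bank):
        self.bank = bank
    def read_quantized(self, addr):
        return quantize(self.bank[addr])

mcu = MemoryControlUnit(bank=[0.5, -0.25, 0.8125])
print([mcu.read_quantized(i) for i in range(3)])  # -> [8, -4, 13]
```

Quantizing at the memory interface means the narrow codes, not the wide weights, travel to the compute unit, cutting memory bandwidth.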
-
Patent number: 10509996
Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y is the output vector, x is the input vector, and W is a matrix of connection parameters, vectors uij and vij are defined and submatrices Wij are computed as the outer product of uij and vij, so that Wij = vij ⊗ uij, and W is obtained by appending the submatrices Wij.
Type: Grant
Filed: September 7, 2016
Date of Patent: December 17, 2019
Assignee: Huawei Technologies Co., Ltd.
Inventors: Barnaby Dalton, Serdar Sozubek, Manuel Saldana, Vanessa Courville
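This is the extreme (rank-1) case of submatrix factorization: each block of W is stored as just two vectors. With hypothetical sizes, a 2×3 block costs 2 + 3 = 5 parameters instead of 6, and the gap widens quickly for larger blocks.

```python
def outer(v, u):
    """Outer product v ⊗ u, returned as a list of rows (v indexes the rows)."""
    return [[vi * uj for uj in u] for vi in v]

v = [2, 3]                 # column factor for this block of W
u = [1, 4, 5]              # row factor for this block of W
W_block = outer(v, u)      # reconstructed 2 x 3 submatrix, rank 1
print(W_block)             # -> [[2, 8, 10], [3, 12, 15]]
```

Splitting W into many small rank-1 blocks, rather than factoring all of W at once, limits how much expressiveness the rank-1 constraint costs.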
-
Publication number: 20190012258
Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
Type: Application
Filed: August 28, 2018
Publication date: January 10, 2019
Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
-
Patent number: 10108539
Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
Type: Grant
Filed: June 13, 2013
Date of Patent: October 23, 2018
Assignee: International Business Machines Corporation
Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
-
Patent number: 10108540
Abstract: Allocating distributed data structures and managing allocation of a symmetric heap can include defining, using a processor, the symmetric heap. The symmetric heap includes a symmetric partition for each process of a partitioned global address space (PGAS) system. Each symmetric partition of the symmetric heap begins at a same starting virtual memory address and has a same global symmetric break. One process of a plurality of processes of the PGAS system is configured as an allocator process that controls allocation of blocks of memory for each symmetric partition of the symmetric heap. Using the processor executing the allocator process, isomorphic fragmentation among the symmetric partitions of the symmetric heap is maintained.
Type: Grant
Filed: June 14, 2013
Date of Patent: October 23, 2018
Assignee: International Business Machines Corporation
Inventors: Gheorghe Almasi, Barnaby Dalton, Ilie G. Tanase, Ettore Tiotto
-
Publication number: 20180300629
Abstract: A method is provided for training a neural network comprising at least one layer, the layer comprising a plurality of input nodes, a plurality of output nodes, and a plurality of connections connecting each one of the plurality of input nodes to each one of the plurality of output nodes. The method comprises pseudo-randomly selecting a subset of the plurality of connections, each connection having associated therewith a weight parameter and a probability of being retained in the neural network; generating output data by feeding input data over the subset of connections; computing an error between the generated output data and desired output data; and, for at least one connection in the subset, determining a contribution of the weight parameter to the error and updating the probability of being retained in the neural network accordingly.
Type: Application
Filed: April 18, 2017
Publication date: October 18, 2018
Inventors: Sepideh Kharaghani, Vanessa Courville, Barnaby Dalton
-
Patent number: 9904922
Abstract: A computing system includes at least one processor and at least one module operable by the at least one processor to calculate a tail of a first dataset by determining elements of the first dataset that fall outside of a specified percentile, and determine locations of the first dataset at which elements of the first dataset that fall outside of the specified percentile are located. The at least one module may be operable to calculate a tail of a second dataset by populating a data structure with elements of the second dataset that correspond to the locations of the first dataset, and determining, using the data structure, elements of the second dataset that fall outside of the specified percentile. The at least one module may be operable to output an indication of at least one of the tail of the first dataset or the tail of the second dataset.
Type: Grant
Filed: March 1, 2016
Date of Patent: February 27, 2018
Assignee: International Business Machines Corporation
Inventors: Robert J. Blainey, Barnaby Dalton, Louis Ly, James A. Sedgwick, Lior Velichover, Kai-Ting A. Wang
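The two-dataset trick can be sketched as follows. The assumptions here go beyond the abstract: only the upper tail is computed, the percentile cutoff uses a simple sorted-index rule, and the payoff relies on the datasets being correlated, so that the second dataset's tail elements tend to sit at the first dataset's tail locations.

```python
def tail_locations(data, pct):
    """Indices of elements above the pct-th percentile (upper tail only;
    the percentile handling is a simplification for this sketch)."""
    cutoff = sorted(data)[int(len(data) * pct / 100)]
    return [i for i, x in enumerate(data) if x > cutoff]

def tail_of_second(first, second, pct):
    """Populate a data structure with the second dataset's elements at the
    first dataset's tail locations, then keep those outside the percentile."""
    candidates = {i: second[i] for i in tail_locations(first, pct)}
    cutoff = sorted(second)[int(len(second) * pct / 100)]
    return {i: x for i, x in candidates.items() if x > cutoff}

first = [1, 2, 3, 100, 4, 99]    # e.g. losses for scenarios 0..5
second = [2, 1, 4, 95, 3, 90]    # a correlated second dataset
print(tail_locations(first, 66))          # -> [3, 5]
print(tail_of_second(first, second, 66))  # -> {3: 95, 5: 90}
```

Seeding the candidate structure from the first dataset's locations means only a handful of the second dataset's elements need to be examined, at the cost of possibly missing tail elements at other locations when the datasets are weakly correlated.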
-
Patent number: 9892411
Abstract: A computing system includes at least one processor and at least one module operable by the at least one processor to calculate a tail of a first dataset by determining elements of the first dataset that fall outside of a specified percentile, and determine locations of the first dataset at which elements of the first dataset that fall outside of the specified percentile are located. The at least one module may be operable to calculate a tail of a second dataset by populating a data structure with elements of the second dataset that correspond to the locations of the first dataset, and determining, using the data structure, elements of the second dataset that fall outside of the specified percentile. The at least one module may be operable to output an indication of at least one of the tail of the first dataset or the tail of the second dataset.
Type: Grant
Filed: February 27, 2015
Date of Patent: February 13, 2018
Assignee: International Business Machines Corporation
Inventors: Robert J. Blainey, Barnaby Dalton, Louis Ly, James A. Sedgwick, Lior Velichover, Kai-Ting A. Wang
-
Publication number: 20180039884
Abstract: A system for training a neural network includes a first set of neural network units and a second set of neural network units. Each neural network unit in the first set is configured to compute parameter update data for one of a plurality of instances of a first portion of the neural network. Each neural network unit in the first set includes a communication interface for communicating its parameter update data for combination with parameter update data from another neural network unit in the first set. Each neural network unit in the second set is configured to compute parameter update data for one of a plurality of instances of a second portion of the neural network. Each neural network unit in the second set includes a communication interface for communicating its parameter update data for combination with parameter update data from another neural network unit in the second set.
Type: Application
Filed: August 3, 2016
Publication date: February 8, 2018
Inventors: Barnaby Dalton, Vanessa Courville, Manuel Saldana
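The key structural point is that updates are combined only among replicas of the same portion of the network. The sketch below uses hypothetical values and a simple average as the combination rule, which the abstract does not specify.

```python
def combine(updates):
    """Average parameter updates across replicas of one portion; a stand-in
    for the combination done over the units' communication interfaces."""
    return [sum(vals) / len(vals) for vals in zip(*updates)]

# Two replicas (instances) of each portion compute an update for their own
# parameters; combination happens within a set, never across sets.
portion1_updates = [[0.2, 0.4], [0.4, 0.6]]   # from the first set of units
portion2_updates = [[1.0, 3.0], [3.0, 1.0]]   # from the second set of units
print(combine(portion1_updates), combine(portion2_updates))
```

This is the hybrid of data parallelism (multiple instances of each portion) and model parallelism (the network split into portions across the two sets of units).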
-
Publication number: 20180018560
Abstract: A memory control unit for handling data stored in a memory device includes a first interface to an interconnection with at least one memory bank; a second interface for communicating with a data requesting unit; and a memory quantization unit. The memory quantization unit is configured to: obtain, via the first interface, a first weight value from the at least one memory bank; quantize the first weight value to generate at least one quantized weight value having a shorter bit length than a bit length of the first weight value; and communicate the at least one quantized weight value to the data requesting unit via the second interface.
Type: Application
Filed: July 14, 2016
Publication date: January 18, 2018
Inventors: Manuel Saldana, Barnaby Dalton, Vanessa Courville
-
Publication number: 20170337465
Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y ∈ R^m is the output vector, x ∈ R^n is the input vector, and W ∈ R^(m×n) is a matrix of connection parameters, matrices Uij and Vij are defined and submatrices Wij are computed as the product of Uij and Vij, so that Wij = VijUij, and W is obtained by appending the submatrices Wij.
Type: Application
Filed: March 8, 2017
Publication date: November 23, 2017
Inventors: Serdar Sozubek, Barnaby Dalton, Vanessa Courville, Graham Taylor
-
Publication number: 20170337463
Abstract: The present disclosure is drawn to the reduction of parameters in fully connected layers of neural networks. For a layer whose output is defined by y = Wx, where y is the output vector, x is the input vector, and W is a matrix of connection parameters, vectors uij and vij are defined and submatrices Wij are computed as the outer product of uij and vij, so that Wij = vij ⊗ uij, and W is obtained by appending the submatrices Wij.
Type: Application
Filed: September 7, 2016
Publication date: November 23, 2017
Inventors: Barnaby Dalton, Serdar Sozubek, Manuel Saldana, Vanessa Courville
-
Publication number: 20170308324
Abstract: A circuit for a multistage sequential data process includes a plurality of memory units. Each memory unit is associated with a stage of the sequential data process; for each data set inputted to the stage, the stage provides an intermediate data set for storage in the associated memory unit, for use in at least one subsequent stage of the sequential data process. Each of the plurality of memory units is sized based on the relative locations, in the sequential data process, of the stage providing the intermediate data set and the at least one subsequent stage.
Type: Application
Filed: April 25, 2016
Publication date: October 26, 2017
Inventors: Vanessa Courville, Manuel Saldana, Barnaby Dalton