Patents by Inventor Paul Nicholas Whatmough
Paul Nicholas Whatmough has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11928176
Abstract: A system and method for multiplying matrices are provided. The system includes a processor coupled to a memory and a matrix multiply accelerator (MMA) coupled to the processor. The MMA is configured to multiply, based on a bitmap, a compressed first matrix and a second matrix to generate an output matrix including, for each element i,j of the output matrix, a calculation of a dot product of an ith row of the compressed first matrix and a jth column of the second matrix based on the bitmap. Alternatively, the MMA is configured to multiply, based on the bitmap, the second matrix and the compressed first matrix and to generate the output matrix including, for each element i,j of the output matrix, a calculation of a dot product of an ith row of the second matrix and a jth column of the compressed first matrix based on the bitmap.
Type: Grant
Filed: November 24, 2020
Date of Patent: March 12, 2024
Assignee: Arm Limited
Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina
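The bitmap-guided dot product can be sketched as follows. This is a minimal illustration of the general idea, not the patented MMA datapath; the function names and the dense-NumPy layout are assumptions. The bitmap records which elements of each row of the first matrix are nonzero, so only those products contribute to each output element.

```python
import numpy as np

def bitmap_compress(a):
    """Compress a matrix to its nonzero values plus a bitmap of their positions."""
    bitmap = a != 0
    values = [row[mask] for row, mask in zip(a, bitmap)]
    return values, bitmap

def bitmap_matmul(values, bitmap, b):
    """out[i, j] = dot(row i of compressed A, column j of B), guided by the bitmap."""
    m, _ = bitmap.shape
    n = b.shape[1]
    out = np.zeros((m, n))
    for i in range(m):
        cols = np.nonzero(bitmap[i])[0]   # columns of A holding nonzeros in row i
        for j in range(n):
            # Only multiply the stored nonzeros against the matching B entries.
            out[i, j] = np.dot(values[i], b[cols, j])
    return out
```

For a sparse first matrix this computes the same result as a dense `a @ b` while skipping every zero product, which is the payoff of carrying the bitmap alongside the compressed values.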
-
Publication number: 20240046065
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to determine options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be identified based, at least in part, on a combination of a definition of available computing resources and one or more predefined performance constraints.
Type: Application
Filed: August 3, 2022
Publication date: February 8, 2024
Inventors: Hokchhay Tann, Ramon Matas Navarro, Igor Fedorov, Chuteng Zhou, Paul Nicholas Whatmough, Matthew Mattina
-
Patent number: 11886972
Abstract: A non-volatile memory (NVM) crossbar for an artificial neural network (ANN) accelerator is provided. The NVM crossbar includes row signal lines configured to receive input analog voltage signals, multiply-and-accumulate (MAC) column signal lines, a correction column signal line, a MAC cell disposed at each row signal line and MAC column signal line intersection, and a correction cell disposed at each row signal line and correction column signal line intersection. Each MAC cell includes one or more programmable NVM elements programmed to an ANN unipolar weight, and each correction cell includes one or more programmable NVM elements. Each MAC column signal line generates a MAC signal based on the input analog voltage signals and the respective MAC cells, and the correction column signal line generates a correction signal based on the input analog voltage signals and the correction cells. Each MAC signal is corrected based on the correction signal.
Type: Grant
Filed: September 29, 2020
Date of Patent: January 30, 2024
Assignee: Arm Limited
Inventors: Fernando Garcia Redondo, Shidhartha Das, Paul Nicholas Whatmough, Glen Arnold Rosendale
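One common reason a crossbar with unipolar (non-negative) conductances needs a correction column is that signed weights must be shifted into the non-negative range before they can be stored as conductances. The numerical shape of that correction can be sketched as below; this is an idealized model under that assumed encoding, not the circuit in the claims.

```python
import numpy as np

# Signed weights mapped to non-negative conductances: g = w + g_off.
# A correction column of constant conductance g_off then carries
# sum(x) * g_off, which is subtracted from every MAC column current
# to recover the signed result.
w = np.array([[1.0, -2.0],
              [-0.5, 3.0]])
g_off = abs(w.min())            # shift so all conductances are >= 0
g = w + g_off                   # unipolar conductance matrix
x = np.array([0.2, 0.7])        # input voltages on the row lines

mac = x @ g                     # raw MAC column currents
correction = x.sum() * g_off    # correction column current
corrected = mac - correction    # equals x @ w, the signed MAC result

assert np.allclose(corrected, x @ w)
```

The subtraction works because adding the same offset to every cell in a column adds exactly `g_off * sum(x)` to that column's current, which is precisely what the single shared correction column measures.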
-
Publication number: 20240020419
Abstract: Methods and systems for detecting errors when performing a convolution operation are provided. Predicted checksum data, corresponding to input checksum data and kernel checksum data, is obtained. The convolution operation is performed to obtain an output feature map. Output checksum data is generated and the predicted checksum data and the output checksum data are compared, the comparison taking account of partial predicted checksum data configured to correct for a lack of padding when performing the convolution operation, wherein the partial predicted checksum data corresponds to input checksum data for a subset of the values in the input feature map and kernel checksum data for a subset of the values in the kernel.
Type: Application
Filed: July 15, 2022
Publication date: January 18, 2024
Inventors: Matthew David Haddon, Igor Fedorov, Reiley Jeyapaul, Paul Nicholas Whatmough, Zhi-Gang Liu
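The checksum identity that makes this kind of error detection cheap is easiest to see in the fully padded case: with full zero padding, every input value multiplies every kernel value exactly once, so the sum of the output equals sum(input) × sum(kernel). The sketch below demonstrates that identity only; the publication's "partial predicted checksum" is what adapts it to unpadded convolutions, and is not modeled here.

```python
import numpy as np

def full_conv2d(x, k):
    """2-D convolution with full zero padding (output grows by kernel-1 per side)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    oh, ow = xp.shape[0] - kh + 1, xp.shape[1] - kw + 1
    kf = k[::-1, ::-1]                      # flip kernel for true convolution
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kf)
    return out

x = np.arange(16.).reshape(4, 4)
k = np.array([[1., -2.], [0.5, 3.]])
y = full_conv2d(x, k)

# Predicted checksum: with full padding, sum(output) == sum(input) * sum(kernel).
# Comparing it against the actual output checksum flags compute errors.
predicted = x.sum() * k.sum()
assert np.isclose(y.sum(), predicted)
```

A fault injected anywhere in the output (a single flipped value) would make `y.sum()` disagree with `predicted`, which is the detection mechanism such schemes rely on.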
-
Publication number: 20240013052
Abstract: A method, system and apparatus provide bit-sparse neural network optimization. Rather than quantizing and pruning weight and activation elements at the word level, weight and activation elements are pruned at the bit level. This reduces the density of effective “set” bits in weight and activation data, which, advantageously, reduces the power consumption of the neural network inference process by reducing the degree of bit-level switching during inference.
Type: Application
Filed: July 11, 2022
Publication date: January 11, 2024
Applicant: Arm Limited
Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, John Fremont Brown, III
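One simple way to prune at the bit level rather than the word level is to keep only the few most-significant set bits of each quantized value, zeroing the rest. The helper below is an illustrative sketch of that idea (the pruning criterion and bit budget here are assumptions, not the claimed method).

```python
def prune_set_bits(w, max_bits=2):
    """Keep only the max_bits most-significant set bits of a non-negative integer.

    Fewer set bits means fewer bit-level toggles in the multiply-accumulate
    datapath, which is the power-saving mechanism bit-sparsity targets.
    """
    kept, remaining = 0, w
    for _ in range(max_bits):
        if remaining == 0:
            break
        msb = 1 << (remaining.bit_length() - 1)  # highest set bit still present
        kept |= msb                              # keep it
        remaining &= ~msb                        # drop it from the working value
    return kept
```

For example, `prune_set_bits(0b01101101, 2)` keeps the two leading set bits and returns `0b01100000`, so the value changes from 109 to 96 while three set bits are eliminated.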
-
Patent number: 11823430
Abstract: A method for processing video data, comprising: receiving raw video data, representative of a plurality of frames; detecting, using the raw video data, one or more regions of interest in a detection frame that belongs to the plurality of frames, for example using a region proposal network; performing a cropping process on a portion of the raw video data representative of the detection frame, based on the regions of interest, so as to generate cropped raw video data; performing image processing on the cropped raw video data, including demosaicing, so as to generate processed image data for the detection frame; and analyzing the processed image data, for example using an object detection process, to determine information relating to at least one of said one or more regions of interest.
Type: Grant
Filed: July 16, 2021
Date of Patent: November 21, 2023
Assignee: Arm Limited
Inventors: Paul Nicholas Whatmough, Patrick Thomas Hansen
-
Patent number: 11783163
Abstract: The present disclosure advantageously provides a matrix expansion unit that includes an input data selector, a first register set, a second register set, and an output data selector. The input data selector is configured to receive first matrix data in a columnwise format. The first register set is coupled to the input data selector, and includes a plurality of data selectors and a plurality of registers arranged in a first shift loop. The second register set is coupled to the input data selector, and includes a plurality of data selectors and a plurality of registers arranged in a second shift loop. The output data selector is coupled to the first register set and the second register set, and is configured to output second matrix data in a rowwise format.
Type: Grant
Filed: June 15, 2020
Date of Patent: October 10, 2023
Assignee: Arm Limited
Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina
-
Publication number: 20230297432
Abstract: Various implementations described herein are related to a method that monitors workloads of a neural network for current spikes. The method may determine current transitions of the workloads that result in rapid changes in load current consumption of the neural network. The method may modify load scheduling of the neural network so as to smooth and/or stabilize the current transitions of the workloads.
Type: Application
Filed: March 17, 2022
Publication date: September 21, 2023
Inventors: Paul Nicholas Whatmough, Shidhartha Das
-
Publication number: 20230289576
Abstract: Various implementations described herein are directed to a device having neural network circuitry with an array of synapse cells arranged in columns and rows. The device may have input circuitry that provides voltage to the synapse cells by way of row input lines for the rows in the array. The device may have output circuitry that receives current from the synapse cells by way of column output lines for the columns in the array. Also, conductance for the synapse cells in the array may be determined based on the voltage provided by the input circuitry and the current received by the output circuitry.
Type: Application
Filed: March 8, 2022
Publication date: September 14, 2023
Inventors: Fernando García Redondo, Mudit Bhargava, Paul Nicholas Whatmough, Shidhartha Das
-
Publication number: 20230229921
Abstract: Neural network systems and methods are provided. One method for processing a neural network includes, for at least one neural network layer that includes a plurality of weights, applying an offset function to each of a plurality of weight values in the plurality of weights to generate an offset weight value, and quantizing the offset weight values to form quantized offset weight values. The plurality of weights are pruned. One method for executing a neural network includes reading, from a memory, at least one neural network layer that includes quantized offset weight values and an offset value, and performing a neural network layer operation on an input feature map, based on the quantized offset weight values and the offset value, to generate an output feature map. The quantized offset weight values are signed integer numbers.
Type: Application
Filed: January 14, 2022
Publication date: July 20, 2023
Applicant: Arm Limited
Inventors: Igor Fedorov, Paul Nicholas Whatmough
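The general shape of offset-then-quantize can be sketched as below: subtract an offset from each weight, quantize the shifted values to signed integers, and fold the offset back in when the layer executes. The specific offset function, scale choice, and pruning rule here are illustrative assumptions, not the claimed method.

```python
import numpy as np

def quantize_with_offset(w, delta, num_bits=8):
    """Subtract offset delta from each weight, then quantize to signed integers."""
    shifted = w - delta
    # Symmetric scale so the largest shifted magnitude maps to the int range.
    scale = np.abs(shifted).max() / (2 ** (num_bits - 1) - 1)
    q = np.round(shifted / scale).astype(np.int8)
    return q, scale

def layer_op(x, q, scale, delta):
    """Execute the layer from stored integers: w is reconstructed as q*scale + delta."""
    w_hat = q.astype(np.float32) * scale + delta
    return x @ w_hat
```

Choosing the offset (for instance, the mean of the weights) recenters the distribution around zero, so the signed-integer grid is spent on the spread of the weights rather than on their common bias.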
-
Patent number: 11693796
Abstract: Various implementations described herein are directed to a device having a multi-layered logic structure with a first logic layer and a second logic layer arranged vertically in a stacked configuration. The device may have a memory array that provides data, and also, the device may have an inter-layer data bus that vertically couples the memory array to the multi-layered logic structure. The inter-layer data bus may provide multiple data paths to the first logic layer and the second logic layer for reuse of the data provided by the memory array.
Type: Grant
Filed: May 31, 2021
Date of Patent: July 4, 2023
Assignee: Arm Limited
Inventors: Paul Nicholas Whatmough, Zhi-Gang Liu, Supreet Jeloka, Saurabh Pijuskumar Sinha, Matthew Mattina
-
Patent number: 11640533
Abstract: A system, an apparatus and methods are provided for utilizing software and hardware portions of a neural network to fix, or hardwire, certain portions while modifying other portions. A first set of weights for layers of a first neural network is established, and selected weights are modified to generate a second set of weights, based on a second dataset. The second set of weights is then used to train a second neural network.
Type: Grant
Filed: August 3, 2018
Date of Patent: May 2, 2023
Assignee: Arm Limited
Inventors: Paul Nicholas Whatmough, Matthew Mattina, Jesse Garrett Beu
-
Publication number: 20230103312
Abstract: A processor, computer-based method and apparatus for performing matrix multiplication are provided. The processor obtains a first bit-slice vector comprising m elements, obtains a second bit-slice vector comprising n elements, provides at least one element of the first bit-slice vector as a first input to a single-bit dot product unit, provides at least one element of the second bit-slice vector as a second input to the single-bit dot product unit, and obtains, from the single-bit dot product unit, an output comprising at least a partial dot product of the first and second bit-slice vectors.
Type: Application
Filed: March 30, 2022
Publication date: April 6, 2023
Applicant: Arm Limited
Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina, John Fremont Brown, III
-
Publication number: 20230108629
Abstract: A system and method for multiplying first and second matrices are provided. For the first matrix, a number of bit slice vectors for each row are generated based on the bit resolution, and a first bit slice tensor is generated based on the bit slice vectors for each row. For the second matrix, a number of bit slice vectors for each column are generated based on the bit resolution, and a second bit slice tensor is generated based on the bit slice vectors for each column. The first and second bit slice tensors are multiplied by a matrix multiply accelerator (MMA) to generate an output matrix.
Type: Application
Filed: October 4, 2021
Publication date: April 6, 2023
Applicant: Arm Limited
Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina, John Fremont Brown, III
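The arithmetic behind bit-slice multiplication is that an integer matrix is a weighted sum of its binary bit planes, so the full product can be reconstructed from single-bit matrix products scaled by powers of two. A minimal sketch (function names and the 4-bit range are illustrative assumptions):

```python
import numpy as np

def bit_slices(m, num_bits):
    """Decompose a non-negative integer matrix into binary bit-plane slices."""
    return [(m >> b) & 1 for b in range(num_bits)]

def bitslice_matmul(a, b, num_bits=4):
    """Reconstruct a @ b from single-bit slice products weighted by powers of two."""
    a_slices = bit_slices(a, num_bits)
    b_slices = bit_slices(b, num_bits)
    out = np.zeros((a.shape[0], b.shape[1]), dtype=np.int64)
    for i, a_i in enumerate(a_slices):
        for j, b_j in enumerate(b_slices):
            # Each term is a matrix product of 0/1 matrices, the kind of
            # operation a single-bit dot product unit can evaluate.
            out += (a_i @ b_j) << (i + j)
    return out
```

Since `a = sum_i 2**i * a_i` and `b = sum_j 2**j * b_j`, expanding the product gives `a @ b = sum_{i,j} 2**(i+j) * (a_i @ b_j)`, which is exactly the loop above.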
-
Publication number: 20230076138
Abstract: A matrix multiplication system and method are provided. The system includes a memory that stores one or more weight tensors, a processor and a matrix multiply accelerator (MMA). The processor converts each weight tensor into an encoded block set that is stored in the memory. Each encoded block set includes a number of encoded blocks, and each encoded block includes a data field and an index field. The MMA converts each encoded block set into a reconstructed weight tensor, and convolves each reconstructed weight tensor and an input data tensor to generate an output data matrix.
Type: Application
Filed: September 9, 2021
Publication date: March 9, 2023
Applicant: Arm Limited
Inventors: Paul Nicholas Whatmough, Zhi-Gang Liu, Matthew Mattina
-
Patent number: 11586890
Abstract: The present disclosure advantageously provides a hardware accelerator for an artificial neural network (ANN), including a communication bus interface, a memory, a controller, and at least one processing engine (PE). The communication bus interface is configured to receive a plurality of finetuned weights associated with the ANN, receive input data, and transmit output data. The memory is configured to store the plurality of finetuned weights, the input data and the output data. The PE is configured to receive the input data, execute an ANN model using a plurality of fixed weights associated with the ANN and the plurality of finetuned weights, and generate the output data. Each finetuned weight corresponds to a fixed weight.
Type: Grant
Filed: December 19, 2019
Date of Patent: February 21, 2023
Assignee: Arm Limited
Inventors: Paul Nicholas Whatmough, Chuteng Zhou
-
Publication number: 20230042271
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to select options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be selected based, at least in part, on a combination of function values that are computed based, at least in part, on a tensor expressing sample neural network weights.
Type: Application
Filed: August 4, 2021
Publication date: February 9, 2023
Inventors: Igor Fedorov, Ramon Matas Navarro, Chuteng Zhou, Hokchhay Tann, Paul Nicholas Whatmough, Matthew Mattina
-
Publication number: 20230026113
Abstract: Example methods, devices and/or circuits are disclosed that may be implemented in a processing device to perform neural network-based computing operations. According to an embodiment, an accumulation of weighted activation input values may be computed over accumulation cycles at least in part by multiplying and/or scaling accumulated activation input values by an associated neural network weight.
Type: Application
Filed: July 21, 2021
Publication date: January 26, 2023
Inventors: Paul Nicholas Whatmough, Zhi-Gang Liu, Matthew Mattina
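One reading of this abstract is the standard factoring trick: when several activations share the same weight, the multiply can be hoisted out of the accumulation, so each accumulation cycle performs only a cheap add and the weight is applied once to the accumulated sum. A sketch under that interpretation (the grouping-by-shared-weight structure is an assumption on my part):

```python
def weighted_accumulate(groups):
    """groups: iterable of (weight, activations) pairs, where each weight is
    shared by all activations in its group.

    Accumulate raw activations first, then scale the running sum by the
    weight once, instead of multiplying on every accumulation cycle.
    """
    total = 0.0
    for weight, activations in groups:
        acc = 0.0
        for a in activations:      # adds only, one per accumulation cycle
            acc += a
        total += weight * acc      # one multiply per group, not per input
    return total
```

For two groups `(2.0, [1, 2, 3])` and `(0.5, [4, 4])` this performs five adds but only two multiplies, where a naive weighted sum would perform five of each.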
-
Patent number: 11561767
Abstract: The present disclosure advantageously provides a mixed precision computation (MPC) unit for executing one or more mixed-precision layers of an artificial neural network (ANN). The MPC unit includes a multiplier circuit configured to input a pair of operands and output a product, a first adder circuit coupled to the multiplier circuit, a second adder circuit, coupled to the first adder circuit, configured to input a pair of operands, an accumulator circuit, coupled to the multiplier circuit and the first adder circuit, configured to output an accumulated value, and a controller, coupled to the multiplier circuit, the first adder circuit, the second adder circuit and the accumulator circuit, configured to input a mode control signal. The controller has a plurality of operating modes including a high precision mode, a low precision add mode and a low precision multiply mode.
Type: Grant
Filed: March 31, 2020
Date of Patent: January 24, 2023
Assignee: Arm Limited
Inventors: Dibakar Gope, Jesse Garrett Beu, Paul Nicholas Whatmough, Matthew Mattina
-
Publication number: 20230019360
Abstract: A method for processing video data, comprising: receiving raw video data, representative of a plurality of frames; detecting, using the raw video data, one or more regions of interest in a detection frame that belongs to the plurality of frames, for example using a region proposal network; performing a cropping process on a portion of the raw video data representative of the detection frame, based on the regions of interest, so as to generate cropped raw video data; performing image processing on the cropped raw video data, including demosaicing, so as to generate processed image data for the detection frame; and analyzing the processed image data, for example using an object detection process, to determine information relating to at least one of said one or more regions of interest.
Type: Application
Filed: July 16, 2021
Publication date: January 19, 2023
Inventors: Paul Nicholas Whatmough, Patrick Thomas Hansen