Patents by Inventor John Fremont Brown, III

John Fremont Brown, III has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013052
    Abstract: A method, system and apparatus provide bit-sparse neural network optimization. Rather than quantizing and pruning weight and activation elements at the word level, weight and activation elements are pruned at the bit level, which reduces the density of effective “set” bits in weight and activation data, which, advantageously, reduces the power consumption of the neural network inference process by reducing the degree of bit-level switching during inference.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Applicant: Arm Limited
    Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, John Fremont Brown, III
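
The abstract for publication 20240013052 describes pruning weights at the bit level rather than the word level. As a rough illustration of the idea only, not the claimed method, the sketch below keeps just the most significant set bits of each 8-bit quantized weight; the prune_bits helper, the keep-two-bits rule, and the int8 resolution are assumptions made purely for this example.

```python
# A minimal illustrative sketch, not the patented method: prune int8 weights
# at the bit level by keeping only their most significant set bits.
import numpy as np

def prune_bits(weights_q: np.ndarray, keep_bits: int = 2) -> np.ndarray:
    """Keep at most `keep_bits` of the highest-order set bits in each |weight|."""
    pruned = np.zeros_like(weights_q)
    for idx, w in np.ndenumerate(weights_q):
        sign, mag = int(np.sign(w)), abs(int(w))
        kept, remaining = 0, keep_bits
        for bit in range(7, -1, -1):           # scan magnitude from MSB to LSB
            if remaining and (mag >> bit) & 1:
                kept |= 1 << bit               # retain this set bit
                remaining -= 1
        pruned[idx] = sign * kept              # all other set bits are dropped
    return pruned

w_q = np.array([[-77, 45], [19, -102]], dtype=np.int8)   # quantized weights
print(prune_bits(w_q))   # [[-72  40] [ 18 -96]] -- far fewer set bits
```

Zeroing low-order set bits this way lowers the density of "set" bits in the weight data, which is the property the abstract links to reduced bit-level switching during inference.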
  • Publication number: 20230103312
    Abstract: A processor, computer-based method and apparatus for performing matrix multiplication are provided. The processor obtains a first bit-slice vector comprising m elements, obtains a second bit-slice vector comprising n elements, provides at least one element of the first bit-slice vector as a first input to a single-bit dot product unit, provides at least one element of the second bit-slice vector as a second input to the single-bit dot product unit, and obtains, from the single-bit dot product unit, an output comprising at least a partial dot product of the first and second bit-slice vectors.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 6, 2023
    Applicant: Arm Limited
    Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina, John Fremont Brown, III
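
Publication 20230103312 centers on a single-bit dot product unit operating on bit-slice vectors. A minimal software sketch of that arithmetic, assuming unsigned 4-bit operands and using hypothetical helper names (bit_slices, single_bit_dot), is:

```python
# Illustrative sketch with hypothetical helper names: an integer dot product
# rebuilt from single-bit dot products of bit-slice vectors.
import numpy as np

def bit_slices(v: np.ndarray, bits: int) -> np.ndarray:
    """Return a (bits, len(v)) array; row i holds bit i of every element of v."""
    return np.array([(v >> i) & 1 for i in range(bits)], dtype=np.uint8)

def single_bit_dot(a_slice: np.ndarray, b_slice: np.ndarray) -> int:
    """Stand-in for the single-bit dot product unit: popcount of a bitwise AND."""
    return int(np.sum(a_slice & b_slice))

a = np.array([5, 3, 7], dtype=np.uint8)   # unsigned 4-bit operands assumed
b = np.array([2, 6, 1], dtype=np.uint8)
A, B = bit_slices(a, 4), bit_slices(b, 4)

acc = 0
for i in range(4):
    for j in range(4):
        acc += single_bit_dot(A[i], B[j]) << (i + j)   # weight by 2^(i+j)

assert acc == int(np.dot(a.astype(int), b.astype(int)))   # 5*2 + 3*6 + 7*1 = 35
```

Each single-bit dot product is just a popcount of a bitwise AND; weighting the results by 2^(i+j) and accumulating reconstructs the full integer dot product, consistent with the partial dot products the abstract mentions.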
  • Publication number: 20230108629
    Abstract: A system and method for multiplying first and second matrices are provided. For the first matrix, a number of bit slice vectors for each row are generated based on the bit resolution, and a first bit slice tensor is generated based on the bit slice vectors for each row. For the second matrix, a number of bit slice vectors for each column are generated based on the bit resolution, and a second bit slice tensor is generated based on the bit slice vectors for each column. The first and second bit slice tensors are multiplied by a matrix multiply accelerator (MMA) to generate an output matrix.
    Type: Application
    Filed: October 4, 2021
    Publication date: April 6, 2023
    Applicant: Arm Limited
    Inventors: Zhi-Gang Liu, Paul Nicholas Whatmough, Matthew Mattina, John Fremont Brown, III
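
Publication 20230108629 extends the same bit-slice idea to whole matrices. The sketch below decomposes two small unsigned matrices into bit-slice tensors and accumulates per-slice binary matrix products, standing in for what the MMA would do in hardware; the 4-bit resolution and helper names are assumptions, and the element-wise slicing used here does not distinguish the row-wise versus column-wise slicing described in the abstract.

```python
# Illustrative sketch: multiplying two small unsigned matrices via bit-slice
# tensors, with per-slice binary matrix products playing the role of the MMA.
import numpy as np

def bit_slice_tensor(m: np.ndarray, bits: int) -> np.ndarray:
    """Tensor of shape (bits, rows, cols); slice s holds bit s of every element."""
    return np.array([(m >> s) & 1 for s in range(bits)], dtype=np.uint8)

BITS = 4                                   # assumed bit resolution
X = np.array([[3, 1], [2, 7]], dtype=np.uint8)
Y = np.array([[5, 4], [6, 2]], dtype=np.uint8)
Tx, Ty = bit_slice_tensor(X, BITS), bit_slice_tensor(Y, BITS)

out = np.zeros((2, 2), dtype=np.int64)
for i in range(BITS):
    for j in range(BITS):
        # binary matrix product of slice i of X and slice j of Y, scaled by 2^(i+j)
        out += (Tx[i].astype(np.int64) @ Ty[j].astype(np.int64)) << (i + j)

assert np.array_equal(out, X.astype(np.int64) @ Y.astype(np.int64))   # [[21 14] [52 22]]
```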
  • Patent number: 11526743
    Abstract: The present disclosure advantageously provides an Optical Hardware Accelerator (OHA) for an Artificial Neural Network (ANN) that includes a communication bus interface, a memory, a controller, and an optical computing engine (OCE). The OCE is configured to execute an ANN model with ANN weights. Each ANN weight includes a quantized phase shift value θi and a phase shift value φi. The OCE includes a digital-to-optical (D/O) converter configured to generate input optical signals based on the input data, an optical neural network (ONN) configured to generate output optical signals based on the input optical signals, and an optical-to-digital (O/D) converter configured to generate the output data based on the output optical signals. The ONN includes a plurality of optical units (OUs), and each OU includes an optical multiply and accumulate (OMAC) module.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: December 13, 2022
    Assignee: Arm Limited
    Inventors: Zhi-Gang Liu, Matthew Mattina, John Fremont Brown, III
  • Publication number: 20210287078
    Abstract: The present disclosure advantageously provides an Optical Hardware Accelerator (OHA) for an Artificial Neural Network (ANN) that includes a communication bus interface, a memory, a controller, and an optical computing engine (OCE). The OCE is configured to execute an ANN model with ANN weights. Each ANN weight includes a quantized phase shift value θi and a phase shift value φi. The OCE includes a digital-to-optical (D/O) converter configured to generate input optical signals based on the input data, an optical neural network (ONN) configured to generate output optical signals based on the input optical signals, and an optical-to-digital (O/D) converter configured to generate the output data based on the output optical signals. The ONN includes a plurality of optical units (OUs), and each OU includes an optical multiply and accumulate (OMAC) module.
    Type: Application
    Filed: March 13, 2020
    Publication date: September 16, 2021
    Applicant: Arm Limited
    Inventors: Zhi-Gang Liu, Matthew Mattina, John Fremont Brown, III
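
Patent 11526743 and its corresponding publication 20210287078 describe an optical hardware accelerator whose OCE chains a D/O converter, an ONN built from optical units containing OMAC modules, and an O/D converter. The following is a purely software stand-in for that pipeline, intended only to show the data flow; the complex-amplitude encoding, the phase-shift multiply, and the real-part detection are illustrative assumptions, not the patented optical implementation.

```python
# Purely software stand-in for the described pipeline, not the patented optics:
# digital input -> D/O encoding -> optical unit with an OMAC module -> O/D output.
# The complex-amplitude encoding and phase-shift multiply are assumptions.
import numpy as np

def digital_to_optical(x: np.ndarray) -> np.ndarray:
    """D/O converter stand-in: encode digital values as complex field amplitudes."""
    return x.astype(np.complex128)

def omac(fields: np.ndarray, phases: np.ndarray) -> complex:
    """OMAC stand-in: each weight acts as a phase shift; contributions accumulate."""
    return complex(np.sum(fields * np.exp(1j * phases)))

def optical_to_digital(field: complex) -> float:
    """O/D converter stand-in: detect the real part of the accumulated field."""
    return field.real

x = np.array([0.5, -0.2, 0.8])                    # one digital input vector
phases = np.array([0.0, np.pi / 4, np.pi / 2])    # one quantized phase per ANN weight
y = optical_to_digital(omac(digital_to_optical(x), phases))
print(y)   # output of a single optical unit (OU) in this toy model
```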