Patents by Inventor Norman Paul Jouppi

Norman Paul Jouppi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11520581
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: December 6, 2022
    Assignee: Google LLC
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
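    Illustrative sketch: the entry above describes processor units, each with multiple processing resources, sharing a banked vector memory. The minimal Python sketch below models that organization with invented names (VectorMemory, ProcessorUnit); it is one interpretation of the abstract, not the patented circuit.

      import numpy as np

      class VectorMemory:
          """Banked memory; a vector is striped word-by-word across banks."""
          def __init__(self, num_banks=8, bank_words=128):
              self.banks = [np.zeros(bank_words, dtype=np.float32)
                            for _ in range(num_banks)]

          def read_vector(self, addr, length):
              n = len(self.banks)   # word i lives in bank (addr + i) % n
              return np.array([self.banks[(addr + i) % n][(addr + i) // n]
                               for i in range(length)], dtype=np.float32)

          def write_vector(self, addr, values):
              n = len(self.banks)
              for i, v in enumerate(values):
                  self.banks[(addr + i) % n][(addr + i) // n] = v

      class ProcessorUnit:
          """One of several units; its processing resources are modeled
          here as SIMD lanes operating on NumPy slices."""
          def __init__(self, memory, lanes=8):
              self.mem, self.lanes = memory, lanes

          def vadd(self, a_addr, b_addr, out_addr):
              a = self.mem.read_vector(a_addr, self.lanes)
              b = self.mem.read_vector(b_addr, self.lanes)
              self.mem.write_vector(out_addr, a + b)

      mem = VectorMemory()
      mem.write_vector(0, np.arange(8, dtype=np.float32))
      mem.write_vector(8, np.ones(8, dtype=np.float32))
      ProcessorUnit(mem).vadd(0, 8, 16)
      print(mem.read_vector(16, 8))   # [1. 2. 3. 4. 5. 6. 7. 8.]

    Striping consecutive words across banks is one common way to let several units read in parallel without conflicts; the patent's actual bank arrangement may differ.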
  • Patent number: 11507452
    Abstract: Aspects of the disclosure are directed to a computation unit implementing a systolic array and configured for detecting errors while processing data on the systolic array. A checksum circuit in communication with the systolic array is configured to compute checksums and perform error detection while the systolic array processes input data. Instead of pre-generating checksums in input matrices, input matrices can be fed directly into the systolic array through the checksum circuit. On the output side, the checksum circuit can generate checksums and compare them with checksums in an output matrix generated by the systolic array. Error checking the operations that generate the output matrix can be performed without delaying the operations of the systolic array and without preprocessing the input matrices.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: November 22, 2022
    Assignee: Google LLC
    Inventors: Doe Hyun Yoon, Norman Paul Jouppi
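    Illustrative sketch: the output-side check described above resembles algorithm-based fault tolerance (ABFT): for C = A x B, the column sums of C must equal colsum(A) x B. A hypothetical NumPy sketch of that invariant (not the patented circuit):

      import numpy as np

      def matmul_with_check(A, B):
          C = A @ B                     # stands in for the systolic array
          a_checksum = A.sum(axis=0)    # input-side running checksum of A
          expected = a_checksum @ B     # what colsum(C) must equal
          ok = np.allclose(C.sum(axis=0), expected)
          return C, ok

      rng = np.random.default_rng(0)
      A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
      C, ok = matmul_with_check(A, B)
      print("clean pass, no error:", ok)           # True
      C[1, 2] += 1.0                               # inject a fault
      print("fault detected:",
            not np.allclose(C.sum(axis=0), A.sum(axis=0) @ B))  # True

    Because the checksum is accumulated as data streams through, the check needs no extra pass over the inputs, matching the abstract's claim of error checking without delaying the array.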
  • Publication number: 20220366255
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 17, 2022
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
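    Illustrative sketch: the claim above splits per-layer work between a matrix unit (accumulate weight x activation products) and a vector unit (apply the activation function to each accumulated value). A hypothetical NumPy rendering of that dataflow:

      import numpy as np

      def matrix_computation_unit(weights, activations):
          # One accumulated value per output: dot(weights row, activations).
          return weights @ activations

      def vector_computation_unit(accumulated):
          # Apply the activation function elementwise (ReLU chosen here).
          return np.maximum(accumulated, 0)

      layers = [np.array([[0.5, -1.0], [2.0, 0.25]]),   # layer 1 weights
                np.array([[1.0, 1.0]])]                 # layer 2 weights
      x = np.array([1.0, 2.0])
      for W in layers:
          x = vector_computation_unit(matrix_computation_unit(W, x))
      print(x)   # [2.5]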
  • Patent number: 11500961
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. The matrix multiply unit may include cells arranged in columns of the systolic array. The matrix multiply unit contains two chains of weight shift registers per column of the systolic array; each weight shift register is connected to only one chain, and each cell is connected to only one weight shift register. A weight matrix register per cell is configured to store a weight input received from a weight shift register. A multiply unit is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: November 15, 2022
    Assignee: Google LLC
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
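    Illustrative sketch: one way to read the two-chains-per-column claim is that even-row cells load from one chain and odd-row cells from the other, halving weight-load latency. The sketch below is that interpretation, with the shifting itself abstracted to one delivery per chain per step:

      def load_column_weights(weights):
          """Split a column's weights across two shift-register chains."""
          chain0 = weights[0::2]   # feeds even-row cells
          chain1 = weights[1::2]   # feeds odd-row cells
          cells = [None] * len(weights)
          for step in range(max(len(chain0), len(chain1))):
              if step < len(chain0):
                  cells[2 * step] = chain0[step]       # both chains deliver
              if step < len(chain1):                   # in parallel
                  cells[2 * step + 1] = chain1[step]
          return cells, max(len(chain0), len(chain1))

      cells, shifts = load_column_weights([10, 11, 12, 13, 14, 15])
      print(cells)    # [10, 11, 12, 13, 14, 15]
      print(shifts)   # 3 shift steps, versus 6 with a single chain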
  • Publication number: 20220261622
    Abstract: Methods, systems, and apparatus including a special purpose hardware chip for training neural networks are described. The special-purpose hardware chip may include a scalar processor configured to control computational operation of the special-purpose hardware chip. The chip may also include a vector processor configured as a 2-dimensional array of vector processing units which all execute the same instruction in a single-instruction, multiple-data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiply unit coupled to the vector processor and configured to multiply a first two-dimensional matrix with a second one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.
    Type: Application
    Filed: March 14, 2022
    Publication date: August 18, 2022
    Inventors: Thomas Norrie, Olivier Temam, Andrew Everett Phelps, Norman Paul Jouppi
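    Illustrative sketch: a hypothetical Python model of the chip organization in the abstract: a scalar core issues one instruction that every lane of a 2-D vector unit executes (SIMD), alongside a matrix multiply unit. Class and opcode names are invented.

      import numpy as np

      class VectorProcessor2D:
          def __init__(self, rows, cols):
              self.regs = np.zeros((rows, cols), dtype=np.float32)

          def execute(self, op, operand=None):
              # A single instruction applies to every (row, col) lane at once.
              if op == "load":
                  self.regs = np.asarray(operand, dtype=np.float32)
              elif op == "add":
                  self.regs += operand
              elif op == "relu":
                  self.regs = np.maximum(self.regs, 0)

      class MatrixMultiplyUnit:
          def multiply(self, matrix, operand):
              # 2-D x 2-D, or 2-D x 1-D, as in the abstract.
              return matrix @ operand

      vp, mmu = VectorProcessor2D(2, 2), MatrixMultiplyUnit()
      vp.execute("load", [[1, -2], [3, -4]])
      vp.execute("relu")
      print(mmu.multiply(vp.regs, np.array([1.0, 1.0])))   # [1. 3.]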
  • Publication number: 20220230048
    Abstract: Methods, systems, and apparatus, including computer-readable media, for scaling neural network architectures on hardware accelerators. A method includes receiving training data and information specifying target computing resources, and performing, using the training data, a neural architecture search over a search space to identify an architecture for a base neural network. A plurality of scaling parameter values for scaling the base neural network can be identified, which can include repeatedly selecting a plurality of candidate scaling parameter values, and determining a measure of performance for the base neural network scaled according to the plurality of candidate scaling parameter values, in accordance with a plurality of second objectives including a latency objective. An architecture for a scaled neural network can be determined using the architecture of the base neural network scaled according to the plurality of scaling parameter values.
    Type: Application
    Filed: February 12, 2021
    Publication date: July 21, 2022
    Inventors: Andrew Li, Sheng Li, Mingxing Tan, Ruoming Pang, Liqun Cheng, Quoc V. Le, Norman Paul Jouppi
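    Illustrative sketch: the scaling search above can be read as: repeatedly sample candidate scaling multipliers, discard candidates that violate the latency objective, and keep the best-scoring survivor. The proxy functions below are invented stand-ins, not the patent's models.

      import random

      def proxy_accuracy(depth, width, res):      # stand-in quality metric
          return 1 - 1 / (depth * width * res)

      def proxy_latency(depth, width, res):       # stand-in cost model
          return depth * width**2 * res**2

      def search_scaling(budget, trials=1000, seed=0):
          rng, best = random.Random(seed), None
          for _ in range(trials):
              cand = tuple(rng.uniform(1.0, 2.0) for _ in range(3))
              if proxy_latency(*cand) > budget:    # latency objective
                  continue
              score = proxy_accuracy(*cand)        # measure of performance
              if best is None or score > best[0]:
                  best = (score, cand)
          return best

      print(search_scaling(budget=12.0))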
  • Publication number: 20220172060
    Abstract: Methods, systems, and apparatus for updating machine learning models to improve locality are described. In one aspect, a method includes receiving data of a machine learning model. The data represents operations of the machine learning model and data dependencies between the operations. Data specifying characteristics of a memory hierarchy for a machine learning processor on which the machine learning model is going to be deployed is received. The memory hierarchy includes multiple memories at multiple memory levels for storing machine learning data used by the machine learning processor when performing machine learning computations using the machine learning model. An updated machine learning model is generated by modifying the operations and control dependencies of the machine learning model to account for the characteristics of the memory hierarchy. Machine learning computations are performed using the updated machine learning model.
    Type: Application
    Filed: February 15, 2022
    Publication date: June 2, 2022
    Inventors: Doe Hyun Yoon, Nishant Patil, Norman Paul Jouppi
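    Illustrative sketch: the rewrite above depends on knowing which tensors are reused soon enough to be worth keeping in the small, fast level of the memory hierarchy. A hypothetical greedy placement over a toy operation list (sizes and capacities invented):

      FAST_CAPACITY = 4   # capacity of the fast on-chip memory, in size units

      ops = [   # (op, output tensor, size, positions of consumers)
          ("conv1", "t1", 3, [2]),
          ("conv2", "t2", 2, [3]),
          ("add",   "t3", 1, [3]),
          ("relu",  "t4", 1, []),
      ]

      def place_tensors(ops, capacity):
          """Greedily keep reused tensors in fast memory while space lasts."""
          placement, used = {}, 0
          for _, tensor, size, consumers in ops:
              if consumers and used + size <= capacity:
                  placement[tensor], used = "fast", used + size
              else:
                  placement[tensor] = "slow"
          return placement

      print(place_tensors(ops, FAST_CAPACITY))
      # {'t1': 'fast', 't2': 'slow', 't3': 'fast', 't4': 'slow'}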
  • Publication number: 20220156071
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
    Type: Application
    Filed: November 19, 2021
    Publication date: May 19, 2022
    Inventors: Thomas Norrie, Gurushankar Rajamani, Andrew Everett Phelps, Matthew Leever Hedlund, Norman Paul Jouppi
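    Illustrative sketch: the reduction above folds each arriving vector into an accumulator at the shared memory rather than storing every core's copy. A hypothetical NumPy model (the DMA path and operator unit are simulated, not real APIs):

      import numpy as np

      class SharedScratchpad:
          def __init__(self, length, op=np.add):
              self.acc = np.zeros(length, dtype=np.float32)
              self.op = op   # the arithmetic op "encoded at the operator unit"

          def dma_in(self, vector):
              # Accumulate on arrival over the DMA data path.
              self.acc = self.op(self.acc, vector)

      core_vectors = [np.full(4, float(i)) for i in range(4)]  # per-core results
      shared = SharedScratchpad(length=4)
      for v in core_vectors:
          shared.dma_in(v)
      print(shared.acc)   # [6. 6. 6. 6.]  (0 + 1 + 2 + 3, elementwise)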
  • Patent number: 11275992
    Abstract: Methods, systems, and apparatus including a special purpose hardware chip for training neural networks are described. The special-purpose hardware chip may include a scalar processor configured to control computational operation of the special-purpose hardware chip. The chip may also include a vector processor configured as a 2-dimensional array of vector processing units which all execute the same instruction in a single-instruction, multiple-data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiply unit coupled to the vector processor and configured to multiply a first two-dimensional matrix with a second one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: March 15, 2022
    Assignee: Google LLC
    Inventors: Thomas Norrie, Olivier Temam, Andrew Everett Phelps, Norman Paul Jouppi
  • Patent number: 11263529
    Abstract: Methods, systems, and apparatus for updating machine learning models to improve locality are described. In one aspect, a method includes receiving data of a machine learning model. The data represents operations of the machine learning model and data dependencies between the operations. Data specifying characteristics of a memory hierarchy for a machine learning processor on which the machine learning model is going to be deployed is received. The memory hierarchy includes multiple memories at multiple memory levels for storing machine learning data used by the machine learning processor when performing machine learning computations using the machine learning model. An updated machine learning model is generated by modifying the operations and control dependencies of the machine learning model to account for the characteristics of the memory hierarchy. Machine learning computations are performed using the updated machine learning model.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: March 1, 2022
    Assignee: Google LLC
    Inventors: Doe Hyun Yoon, Nishant Patil, Norman Paul Jouppi
  • Publication number: 20220019869
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining an architecture for a task neural network that is configured to perform a particular machine learning task on a target set of hardware resources. When deployed on a target set of hardware, such as a collection of datacenter accelerators, the task neural network may be capable of performing the particular machine learning task with enhanced accuracy and speed.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 20, 2022
    Inventors: Sheng Li, Norman Paul Jouppi, Quoc V. Le, Mingxing Tan, Ruoming Pang, Liqun Cheng, Andrew Li
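    Illustrative sketch: one simple reading of the search above: enumerate candidates from a discrete search space, keep those meeting the target hardware's latency budget, and return the best by an accuracy proxy. The search space, metrics, and numbers are all invented.

      import itertools

      SEARCH_SPACE = {"depth": [12, 24, 48], "width": [256, 512], "kernel": [3, 5]}

      def proxy_accuracy(depth, width, kernel):          # stand-in metric
          return depth * 0.01 + width * 0.0001 + kernel * 0.005

      def accelerator_latency_ms(depth, width, kernel):  # stand-in cost model
          return depth * width * kernel / 10000.0

      def best_architecture(budget_ms):
          candidates = itertools.product(*SEARCH_SPACE.values())
          feasible = [c for c in candidates
                      if accelerator_latency_ms(*c) <= budget_ms]
          return max(feasible, key=lambda c: proxy_accuracy(*c), default=None)

      print(best_architecture(budget_ms=15.0))   # (48, 512, 5)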
  • Publication number: 20220013435
    Abstract: Methods, systems, and apparatus, including an integrated circuit (IC) with a ring-shaped hot spot area. In one aspect, an IC includes a first area along an outside perimeter of a surface of the IC. The first area defines a first inner perimeter. The IC includes a second area that includes a center of the IC and that includes a first set of components. The second area defines a first outer perimeter. The IC includes a ring-shaped hot spot area between the first area and the second area. The ring-shaped hot spot area defines a ring outer perimeter that is juxtaposed with the first inner perimeter. The ring-shaped hot spot area defines a ring inner perimeter that is juxtaposed with the first outer perimeter. The ring-shaped hot spot area includes a second set of components that produce more heat than the first set of components.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Madhusudan Krishnan Iyengar, Norman Paul Jouppi, Jorge Padilla, Christopher Gregory Malone
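    Illustrative sketch: the floorplan above nests three regions: a cool periphery, a ring-shaped hot-spot band, and a cooler center. For square, centered regions, a die coordinate can be classified by its Chebyshev distance from the center (all dimensions invented):

      DIE_HALF    = 10.0   # die is 20 x 20 mm, origin at its center
      FIRST_INNER = 8.0    # the outer area's inner perimeter (half-width)
      FIRST_OUTER = 5.0    # the center area's outer perimeter (half-width)

      def region(x, y):
          half = max(abs(x), abs(y))        # Chebyshev distance
          if half > DIE_HALF:
              return "off die"
          if half > FIRST_INNER:
              return "outer (first) area"    # periphery
          if half > FIRST_OUTER:
              return "ring-shaped hot spot"  # hotter second set of components
          return "center (second) area"      # first set of components

      for point in [(9.5, 0.0), (6.5, 2.0), (1.0, 1.0)]:
          print(point, "->", region(*point))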
  • Publication number: 20210378106
    Abstract: A method of manufacturing a chip assembly comprises joining an in-process unit to a printed circuit board; reflowing a bonding material disposed between and electrically connecting the in-process unit with the printed circuit board, the bonding material having a first reflow temperature; and then joining a heat distribution device to the plurality of semiconductor chips using a thermal interface material (“TIM”) having a second reflow temperature that is lower than the first reflow temperature. The in-process unit further comprises a substrate having an active surface, a passive surface, and contacts exposed at the active surface; an interposer electrically connected to the substrate; a plurality of semiconductor chips overlying the substrate and electrically connected to the substrate through the interposer, and a stiffener overlying the substrate and having an aperture extending therethrough, the plurality of semiconductor chips being positioned within the aperture.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 2, 2021
    Inventors: Madhusudan K. Iyengar, Christopher Malone, Woon-Seong Kwon, Emad Samadiani, Melanie Beauchemin, Padam Jain, Teckgyu Kang, Yuan Li, Connor Burgess, Norman Paul Jouppi, Nicholas Stevens-Yu, Yingying Wang
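    Illustrative sketch: the key process constraint above is an ordering of reflow temperatures: the TIM must reflow below the board-level bonding material, so attaching the heat distribution device cannot re-melt the earlier joint. A tiny checker with invented temperatures:

      steps = [   # (process step, material, reflow temperature in C)
          ("join in-process unit to PCB",   "bonding material", 235),
          ("join heat distribution device", "TIM",              150),
      ]

      def check_reflow_order(steps):
          """Each later reflow must run below every earlier joint's
          reflow temperature, or it would re-melt that connection."""
          for i in range(1, len(steps)):
              step, _, temp = steps[i]
              for _, material, earlier_temp in steps[:i]:
                  assert temp < earlier_temp, (
                      f"'{step}' at {temp} C would re-melt the {material}")
          return True

      print(check_reflow_order(steps))   # True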
  • Patent number: 11182159
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 23, 2021
    Assignee: Google LLC
    Inventors: Thomas Norrie, Gurushankar Rajamani, Andrew Everett Phelps, Matthew Leever Hedlund, Norman Paul Jouppi
  • Publication number: 20210357212
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Application
    Filed: May 24, 2021
    Publication date: November 18, 2021
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
  • Patent number: 11158566
    Abstract: Methods, systems, and apparatus, including an integrated circuit (IC) with a ring-shaped hot spot area. In one aspect, an IC includes a first area along an outside perimeter of a surface of the IC. The first area defines a first inner perimeter. The IC includes a second area that includes a center of the IC and that includes a first set of components. The second area defines a first outer perimeter. The IC includes a ring-shaped hot spot area between the first area and the second area. The ring-shaped hot spot area defines a ring outer perimeter that is juxtaposed with the first inner perimeter. The ring-shaped hot spot area defines a ring inner perimeter that is juxtaposed with the first outer perimeter. The ring-shaped hot spot area includes a second set of components that produce more heat than the first set of components.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: October 26, 2021
    Assignee: Google LLC
    Inventors: Madhusudan Krishnan Iyengar, Norman Paul Jouppi, Jorge Padilla, Christopher Gregory Malone
  • Publication number: 20210263739
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
    Type: Application
    Filed: August 31, 2020
    Publication date: August 26, 2021
    Inventors: Thomas Norrie, Gurushankar Rajamani, Andrew Everett Phelps, Matthew Leever Hedlund, Norman Paul Jouppi
  • Publication number: 20210232898
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores include a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
    Type: Application
    Filed: May 14, 2020
    Publication date: July 29, 2021
    Inventors: Thomas Norrie, Andrew Everett Phelps, Norman Paul Jouppi, Matthew Leever Hedlund
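    Illustrative sketch: the abstract above distinguishes a bulk DMA path (shared memory to each core's vector memory) from a load-store path (shared memory to a core's vector registers). A hypothetical Python model of the two paths, with invented names:

      import numpy as np

      class Core:
          def __init__(self, vmem_words=32, vreg_words=8):
              self.vmem = np.zeros(vmem_words, dtype=np.float32)
              self.vregs = np.zeros(vreg_words, dtype=np.float32)

      class SharedMemory:
          def __init__(self, words=64):
              self.data = np.zeros(words, dtype=np.float32)

          def dma_to_vmem(self, core, src, dst, length):
              # DMA path: bulk copy from shared memory into vector memory.
              core.vmem[dst:dst + length] = self.data[src:src + length]

          def load(self, addr, length):
              # Load-store path: register-grain reads for a core's vregs.
              return self.data[addr:addr + length].copy()

      shared, core = SharedMemory(), Core()
      shared.data[:8] = np.arange(8)
      shared.dma_to_vmem(core, 0, 0, 8)   # bulk transfer
      core.vregs = shared.load(0, 8)      # fine-grained load
      print(core.vmem[:8], core.vregs)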
  • Publication number: 20210209193
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. Each cell of the matrix multiply unit includes: a weight matrix register configured to receive a weight input from either a transposed or a non-transposed weight shift register; a transposed weight shift register configured to receive a weight input from a horizontal direction to be stored in the weight matrix register; a non-transposed weight shift register configured to receive a weight input from a vertical direction to be stored in the weight matrix register; and a multiply unit that is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Application
    Filed: March 23, 2021
    Publication date: July 8, 2021
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
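    Illustrative sketch: with both a transposed (horizontal) and a non-transposed (vertical) shift chain feeding each cell's weight matrix register, the same array can compute with W or its transpose without a separate transpose pass. A hypothetical model of the two load paths:

      import numpy as np

      def load_weights(W, transposed=False):
          """Fill an N x N cell grid from one of the two shift chains."""
          n = W.shape[0]
          cells = np.zeros_like(W)
          for r in range(n):
              for c in range(n):
                  # Vertical chain delivers W[r, c]; horizontal delivers W[c, r].
                  cells[r, c] = W[c, r] if transposed else W[r, c]
          return cells

      W = np.arange(9.0).reshape(3, 3)
      x = np.ones(3)
      print(load_weights(W) @ x)           # W   @ x -> [ 3. 12. 21.]
      print(load_weights(W, True) @ x)     # W.T @ x -> [ 9. 12. 15.]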
  • Patent number: 11049016
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 29, 2021
    Assignee: Google LLC
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu