Patents by Inventor Norman Paul Jouppi

Norman Paul Jouppi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10645847
    Abstract: A data center cooling system includes a server rack sub-assembly that includes a motherboard mounted on a support member and heat generating electronic devices mounted on the motherboard; a cold plate positioned in thermal communication with at least a portion of the heat generating electronic devices, the cold plate configured to receive a flow of a cooling liquid circulated through a supply conduit fluidly coupled to a liquid inlet of the cold plate, through the cold plate, and through a return conduit fluidly coupled to a liquid outlet of the cold plate; and a modulating control valve attached to either the motherboard or the support member and positioned in either the supply conduit or the return conduit, the modulating control valve configured to adjust a rate of the flow of the cooling liquid based on an operating condition of at least one of the heat generating electronic devices.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: May 5, 2020
    Assignee: Google LLC
    Inventors: William Edwards, Madhusudan Krishnan Iyengar, Sundar Rajan, Jorge Padilla, Norman Paul Jouppi
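    Illustration: The modulating valve described above amounts to closed-loop flow control: valve opening tracks how far a monitored device runs above a temperature setpoint. A minimal proportional-control sketch in Python follows; the setpoint, gain, and limits are illustrative assumptions, since the patent does not specify a control law.

      # Sketch of a modulating valve adjusting coolant flow with device
      # temperature. Gain, setpoint, and limits are assumed values for
      # illustration, not taken from the patent.
      def valve_opening(device_temp_c: float,
                        setpoint_c: float = 70.0,
                        gain: float = 0.05,
                        min_open: float = 0.1,
                        max_open: float = 1.0) -> float:
          """Return the valve opening fraction for the hottest device."""
          error = device_temp_c - setpoint_c
          return max(min_open, min(max_open, min_open + gain * error))

      # A device at 85 C drives the valve most of the way open.
      print(valve_opening(85.0))  # 0.85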
  • Patent number: 10635740
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. The matrix multiply unit may include cells arranged in columns of the systolic array. The matrix multiply unit includes two chains of weight shift registers per column of the systolic array. Each weight shift register is connected to only one chain and each cell is connected to only one weight shift register. A weight matrix register per cell is configured to store a weight input received from a weight shift register. A multiply unit is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
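    Illustration: Two weight shift chains per column halve the number of shift steps needed to stage a column of weights, since each chain carries only half the values and each cell latches from exactly one chain. A rough Python sketch of that loading pattern; the array size and the even/odd cell assignment are illustrative assumptions, not the patent's exact wiring.

      # Sketch: loading one column of an n-cell systolic array through two
      # weight shift chains, so each chain moves only n/2 values.
      def load_column(weights):
          chain_a = weights[0::2]       # values destined for even cells
          chain_b = weights[1::2]       # values destined for odd cells
          cells = [None] * len(weights)
          for step, (wa, wb) in enumerate(zip(chain_a, chain_b)):
              cells[2 * step] = wa      # each cell reads exactly one chain
              cells[2 * step + 1] = wb
          return cells

      print(load_column([1, 2, 3, 4, 5, 6]))  # [1, 2, 3, 4, 5, 6]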
  • Publication number: 20200117999
    Abstract: Methods, systems, and apparatus for updating machine learning models to improve locality are described. In one aspect, a method includes receiving data of a machine learning model. The data represents operations of the machine learning model and data dependencies between the operations. Data specifying characteristics of a memory hierarchy for a machine learning processor on which the machine learning model is going to be deployed is received. The memory hierarchy includes multiple memories at multiple memory levels for storing machine learning data used by the machine learning processor when performing machine learning computations using the machine learning model. An updated machine learning model is generated by modifying the operations and control dependencies of the machine learning model to account for the characteristics of the memory hierarchy. Machine learning computations are performed using the updated machine learning model.
    Type: Application
    Filed: October 10, 2018
    Publication date: April 16, 2020
    Inventors: Doe Hyun Yoon, Nishant Patil, Norman Paul Jouppi
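    Illustration: One way to picture the locality update is a scheduler that, given per-level capacities, pins the most frequently reused tensors in fast memory and leaves the rest in slower memory. The toy policy, capacities, and tensor names below are assumptions for illustration, not the patented method.

      # Toy sketch: place intermediate tensors into a two-level memory
      # hierarchy by reuse count. All names and numbers are hypothetical.
      FAST_CAPACITY = 8  # units of on-chip memory (assumed)

      tensors = {            # name: (size, number of consuming ops)
          "conv1_out": (4, 3),
          "conv2_out": (6, 1),
          "embed":     (2, 5),
      }

      placement, used = {}, 0
      for name, (size, reuse) in sorted(tensors.items(),
                                        key=lambda kv: -kv[1][1]):
          if used + size <= FAST_CAPACITY:   # most-reused tensors first
              placement[name], used = "fast", used + size
          else:
              placement[name] = "slow"

      print(placement)
      # {'embed': 'fast', 'conv1_out': 'fast', 'conv2_out': 'slow'}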
  • Patent number: 10621269
    Abstract: Methods, systems, and apparatus for performing a matrix multiplication using a hardware circuit are described. An example method begins by obtaining an input activation value and a weight input value in a first floating point format. The input activation value and the weight input value are multiplied to generate a product value in a second floating point format that has higher precision than the first floating point format. A partial sum value is obtained in a third floating point format that has a higher precision than the first floating point format. The partial sum value and the product value are combined to generate an updated partial sum value that has the third floating point format.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: April 14, 2020
    Assignee: Google LLC
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
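    Illustration: The three-format scheme corresponds to low-precision inputs, a wider product, and a wider running sum. A numpy approximation follows, with float16 standing in for the low-precision input format and float32 for the wider product and accumulator; the patent does not name the actual formats.

      import numpy as np

      # Low-precision inputs ("first format")...
      activation = np.float16(0.3337)
      weight = np.float16(1.5)

      # ...multiplied into a higher-precision product ("second format")...
      product = np.float32(activation) * np.float32(weight)

      # ...and accumulated into a higher-precision partial sum ("third
      # format"). float16/float32 are stand-ins, not the circuit's formats.
      partial_sum = np.float32(0.0)
      partial_sum += product
      print(product, partial_sum)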
  • Patent number: 10572150
    Abstract: According to an example, a memory network includes memory nodes. The memory nodes may each include memory and control logic. The control logic may operate the memory node as a destination for a memory access invoked by a processor connected to the memory network and may operate the memory node as a router to route data or memory access commands to a destination in the memory network.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: February 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sheng Li, Norman Paul Jouppi, Paolo Faraboschi, Michael R. Krause
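    Illustration: Each node either services a memory access it owns or forwards it one hop toward its destination. A minimal sketch of that dual destination/router role on a 1-D chain of nodes; the topology, address partitioning, and message format are illustrative assumptions.

      # Sketch: a memory node is the destination for addresses it owns and
      # a router otherwise. The 1-D chain topology is an assumption.
      class MemoryNode:
          def __init__(self, node_id, nodes):
              self.node_id, self.nodes = node_id, nodes
              self.memory = {}

          def access(self, dest_id, addr, value=None):
              if dest_id == self.node_id:   # destination: serve locally
                  if value is not None:
                      self.memory[addr] = value
                  return self.memory.get(addr)
              step = 1 if dest_id > self.node_id else -1  # route one hop
              return self.nodes[self.node_id + step].access(dest_id, addr, value)

      nodes = []
      nodes.extend(MemoryNode(i, nodes) for i in range(4))
      nodes[0].access(3, addr=0x10, value=42)   # write routed 0 -> 1 -> 2 -> 3
      print(nodes[0].access(3, addr=0x10))      # 42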
  • Publication number: 20200057942
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 20, 2020
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
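    Illustration: The two-unit split maps onto a matrix multiply followed by an elementwise activation over the accumulated values. A numpy sketch of one layer's dataflow; the shapes and the choice of ReLU are assumptions.

      import numpy as np

      # Matrix computation unit: accumulate weight x activation products.
      # Vector computation unit: apply an activation to each value.
      def layer(activations, weights):
          accumulated = activations @ weights     # matrix computation unit
          return np.maximum(accumulated, 0.0)     # vector unit (ReLU assumed)

      x = np.array([[1.0, -2.0]])
      w = np.array([[0.5, -1.0], [-1.0, 0.25]])
      print(layer(x, w))  # [[2.5 0. ]]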
  • Publication number: 20200042895
    Abstract: Methods, systems, and apparatus, including instructions encoded on storage media, for performing reduction of gradient vectors and similarly structured data that are generated in parallel, for example, on nodes organized in a mesh or torus topology defined by connections in at least two dimensions between the nodes. The methods provide parallel computation and communication between nodes in the topology.
    Type: Application
    Filed: February 8, 2018
    Publication date: February 6, 2020
    Inventors: Ian Moray Mclaren, Norman Paul Jouppi, Clifford Hsiang Chao, Gregory Michael Thorson, Bjarke Hammersholt Roune
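    Illustration: A common way to realize such reductions is a ring all-reduce, shown below over a single dimension: a reduce-scatter pass in which every node forwards one partially reduced chunk per step, followed by an all-gather. This is the generic ring technique for illustration; the patented schedule runs over at least two dimensions of the mesh or torus in parallel.

      # Simplified single-ring all-reduce with one chunk per node.
      def ring_allreduce(grads):
          n = len(grads)
          chunks = [list(g) for g in grads]    # chunks[node][chunk_index]
          # Reduce-scatter: every node sends one chunk per step, in parallel.
          for step in range(n - 1):
              sends = [(i, (i - step) % n, chunks[i][(i - step) % n])
                       for i in range(n)]
              for i, c, val in sends:
                  chunks[(i + 1) % n][c] += val
          # All-gather: circulate the fully reduced chunks the same way.
          for step in range(n - 1):
              sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n])
                       for i in range(n)]
              for i, c, val in sends:
                  chunks[(i + 1) % n][c] = val
          return chunks

      print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
      # [[12, 15, 18], [12, 15, 18], [12, 15, 18]]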
  • Patent number: 10548240
    Abstract: A server tray package includes a motherboard assembly that includes a plurality of data center electronic devices; and a liquid cold plate assembly. The liquid cold plate assembly includes a base portion mounted to the motherboard assembly, the base portion and motherboard assembly defining a volume that at least partially encloses the plurality of data center electronic devices; and a top portion mounted to the base portion and including a heat transfer member that includes a first number of inlet ports and a second number of outlet ports that are in fluid communication with a cooling liquid flow path defined through the heat transfer member, the first number of inlet ports being different from the second number of outlet ports.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: January 28, 2020
    Assignee: Google LLC
    Inventors: Madhusudan Krishnan Iyengar, Christopher Gregory Malone, Yuan Li, Jorge Padilla, Woon-Seong Kwon, Teckgyu Kang, Norman Paul Jouppi
  • Publication number: 20190354571
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. Each cell of the matrix multiply unit includes: a weight matrix register configured to receive a weight input from either a transposed or a non-transposed weight shift register; a transposed weight shift register configured to receive a weight input from a horizontal direction to be stored in the weight matrix register; a non-transposed weight shift register configured to receive a weight input from a vertical direction to be stored in the weight matrix register; and a multiply unit that is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Application
    Filed: August 1, 2019
    Publication date: November 21, 2019
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
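    Illustration: Keeping both a transposed (horizontal) and a non-transposed (vertical) shift path lets the array latch either W or its transpose into the weight matrix registers without restaging data in memory. The sketch below abstracts away the step-by-step shifting and shows only which value each cell ends up holding; the orientation conventions are assumptions.

      # Sketch: the non-transposed chain delivers w[i][j] to cell (i, j);
      # the transposed chain delivers w[j][i] instead.
      def load(w, use_transposed_chain=False):
          n = len(w)
          regs = [[None] * n for _ in range(n)]   # weight matrix registers
          for i in range(n):
              for j in range(n):
                  regs[i][j] = w[j][i] if use_transposed_chain else w[i][j]
          return regs

      w = [[1, 2], [3, 4]]
      print(load(w))        # [[1, 2], [3, 4]]
      print(load(w, True))  # [[1, 3], [2, 4]]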
  • Publication number: 20190354862
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: August 1, 2019
    Publication date: November 21, 2019
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Publication number: 20190327859
    Abstract: A server tray package includes a motherboard assembly that includes a plurality of data center electronic devices, the plurality of data center electronic devices including at least one heat generating processor device; and a liquid cold plate assembly. The liquid cold plate assembly includes a base portion mounted to the motherboard assembly, the base portion and motherboard assembly defining a volume that at least partially encloses the plurality of data center electronic devices; and a top portion mounted to the base portion and including a heat transfer member shaped to thermally contact the heat generating processor device, the heat transfer member including an inlet port and an outlet port that are in fluid communication with a cooling liquid flow path defined through the heat transfer member.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Madhusudan Krishnan Iyengar, Christopher Gregory Malone, Yuan Li, Jorge Padilla, Woon-Seong Kwon, Teckgyu Kang, Norman Paul Jouppi
  • Publication number: 20190327860
    Abstract: A data center cooling system includes a server rack sub-assembly that includes a motherboard mounted on a support member and heat generating electronic devices mounted on the motherboard; a cold plate positioned in thermal communication with at least a portion of the heat generating electronic devices, the cold plate configured to receive a flow of a cooling liquid circulated through a supply conduit fluidly coupled to a liquid inlet of the cold plate, through the cold plate, and through a return conduit fluidly coupled to a liquid outlet of the cold plate; and a modulating control valve attached to either the motherboard or the support member and positioned in either the supply conduit or the return conduit, the modulating control valve configured to adjust a rate of the flow of the cooling liquid based on an operating condition of at least one of the heat generating electronic devices.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: William Edwards, Madhusudan Krishnan Iyengar, Sundar Rajan, Jorge Padilla, Norman Paul Jouppi
  • Publication number: 20190243645
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Application
    Filed: March 4, 2019
    Publication date: August 8, 2019
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
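    Illustration: Banked vector memory is what lets many lanes read in the same cycle without conflicts, typically by interleaving consecutive elements across banks. A sketch of that interleaving; the bank count and modulo mapping are assumptions.

      # Sketch: interleave vector elements across memory banks so
      # consecutive lanes hit different banks. NUM_BANKS is assumed.
      NUM_BANKS = 4

      def to_banks(vector):
          banks = [[] for _ in range(NUM_BANKS)]
          for i, value in enumerate(vector):
              banks[i % NUM_BANKS].append(value)  # element i -> bank i mod 4
          return banks

      print(to_banks(list(range(8))))
      # [[0, 4], [1, 5], [2, 6], [3, 7]]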
  • Patent number: 10261786
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: April 16, 2019
    Assignee: Google LLC
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
  • Publication number: 20180336456
    Abstract: Methods, systems, and apparatus including a special purpose hardware chip for training neural networks are described. The special-purpose hardware chip may include a scalar processor configured to control computational operation of the special-purpose hardware chip. The chip may also include a vector processor configured to have a 2-dimensional array of vector processing units which all execute the same instruction in a single instruction, multiple-data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiply unit, coupled to the vector processor, that is configured to multiply at least one two-dimensional matrix with a second two-dimensional matrix or a one-dimensional vector in order to obtain a multiplication result.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 22, 2018
    Inventors: Thomas Norrie, Olivier Temam, Andrew Everett Phelps, Norman Paul Jouppi
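    Illustration: The scalar/vector/matrix division of labor resembles a training step in which scalar code sequences the work, elementwise math maps to the SIMD vector processor, and matrix products go to the matrix multiply unit. A numpy sketch under that split; the loss and shapes are illustrative assumptions.

      import numpy as np

      def train_step(x, w, lr=0.1):
          y = x @ w                       # matrix multiply unit
          grad_y = 2.0 * y                # vector processor: elementwise, SIMD
          return w - lr * (x.T @ grad_y)  # matmul, then elementwise update

      w = np.ones((2, 2))
      x = np.array([[1.0, 2.0]])
      print(train_step(x, w))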
  • Publication number: 20180336165
    Abstract: Methods, systems, and apparatus for performing a matrix multiplication using a hardware circuit are described. An example method begins by obtaining an input activation value and a weight input value in a first floating point format. The input activation value and the weight input value are multiplied to generate a product value in a second floating point format that has higher precision than the first floating point format. A partial sum value is obtained in a third floating point format that has a higher precision than the first floating point format. The partial sum value and the product value are combined to generate an updated partial sum value that has the third floating point format.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 22, 2018
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
  • Publication number: 20180336163
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. Each cell of the matrix multiply unit includes: a weight matrix register configured to receive a weight input from either a transposed or a non-transposed weight shift register; a transposed weight shift register configured to receive a weight input from a horizontal direction to be stored in the weight matrix register; a non-transposed weight shift register configured to receive a weight input from a vertical direction to be stored in the weight matrix register; and a multiply unit that is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 22, 2018
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
  • Publication number: 20180336164
    Abstract: Methods, systems, and apparatus for a matrix multiply unit implemented as a systolic array of cells are disclosed. The matrix multiply unit may include cells arranged in columns of the systolic array. The matrix multiply unit includes two chains of weight shift registers per column of the systolic array. Each weight shift register is connected to only one chain and each cell is connected to only one weight shift register. A weight matrix register per cell is configured to store a weight input received from a weight shift register. A multiply unit is coupled to the weight matrix register and configured to multiply the weight input of the weight matrix register with a vector data input in order to obtain a multiplication result.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 22, 2018
    Inventors: Andrew Everett Phelps, Norman Paul Jouppi
  • Patent number: 10127154
    Abstract: A memory system includes a plurality of memory nodes provided at different hierarchical levels of the memory system, each of the memory nodes including a corresponding memory storage and a cache. A memory node at a first of the different hierarchical levels is coupled to a processor with lower communication latency than a memory node at a second of the different hierarchical levels. The memory nodes are to cooperate to decide which of the memory nodes is to cache data of a given one of the memory nodes.
    Type: Grant
    Filed: March 20, 2013
    Date of Patent: November 13, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Norman Paul Jouppi, Sheng Li, Ke Chen
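    Illustration: The cooperative caching decision can be pictured as nodes at different hierarchy levels comparing latency and cache pressure before one of them takes the line. The policy below, caching at the lowest-latency node with free space, is an illustrative assumption, not the patented protocol.

      # Toy sketch: pick which memory node caches a line.
      class Node:
          def __init__(self, name, latency, capacity):
              self.name, self.latency, self.capacity = name, latency, capacity
              self.cache = set()

      def choose_cache_node(nodes, line):
          candidates = [n for n in nodes if len(n.cache) < n.capacity]
          best = min(candidates, key=lambda n: n.latency)  # lowest latency wins
          best.cache.add(line)
          return best.name

      nodes = [Node("near", latency=1, capacity=1),
               Node("far", latency=5, capacity=4)]
      print(choose_cache_node(nodes, "lineA"))  # near
      print(choose_cache_node(nodes, "lineB"))  # far (near is full now)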
  • Publication number: 20180260220
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Application
    Filed: March 9, 2017
    Publication date: September 13, 2018
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps