Patents by Inventor Thomas Norrie

Thomas Norrie has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995017
    Abstract: A multi-plane, multi-protocol memory switch system is disclosed. In some embodiments, a memory switch includes a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint; and a bulk data transfer engine configured to facilitate data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: May 28, 2024
    Assignee: Enfabrica Corporation
    Inventors: Thomas Norrie, Shrijeet Mukherjee, John Greth, Rochan Sankar, Shimon Muller, Ariel Hendel, Gurjeet Singh
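As a rough illustration of the cacheline-exchange and bulk-transfer engines described in the abstract above, here is a minimal Python sketch in which a switch maps a window of one endpoint's address space onto another endpoint's memory and streams bulk data between a source and a destination address. The classes and method names (Endpoint, MemorySwitch, map_window, bulk_transfer) are invented for illustration and are not taken from the patent.

```python
# Toy model: one endpoint's address window is mapped into a peer's address
# space, so a write through the window lands in the peer's memory.

class Endpoint:
    def __init__(self, name, size):
        self.name = name
        self.mem = bytearray(size)

class MemorySwitch:
    def __init__(self):
        self.windows = {}  # (endpoint name, base) -> (peer, peer_base, length)

    def map_window(self, src, src_base, dst, dst_base, length):
        """Map [src_base, src_base+length) in src's space onto dst's memory."""
        self.windows[(src.name, src_base)] = (dst, dst_base, length)

    def write(self, src, addr, data):
        # Resolve addr through any mapped window; fall back to local memory.
        for (name, base), (dst, dst_base, length) in self.windows.items():
            if name == src.name and base <= addr < base + length:
                off = addr - base
                dst.mem[dst_base + off : dst_base + off + len(data)] = data
                return
        src.mem[addr : addr + len(data)] = data

    def bulk_transfer(self, src, src_addr, dst, dst_addr, length):
        """Stream `length` bytes as a source-destination data stream."""
        dst.mem[dst_addr : dst_addr + length] = src.mem[src_addr : src_addr + length]

a, b = Endpoint("a", 4096), Endpoint("b", 4096)
switch = MemorySwitch()
switch.map_window(a, 0x1000, b, 0x0, 64)
switch.write(a, 0x1000, b"hello")        # lands in b's memory at offset 0
switch.bulk_transfer(a, 0, b, 128, 256)  # bulk stream, source to destination
assert bytes(b.mem[:5]) == b"hello"
```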
  • Publication number: 20240160909
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores includes a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
    Type: Application
    Filed: January 25, 2024
    Publication date: May 16, 2024
    Inventors: Thomas Norrie, Andrew Everett Phelps, Norman Paul Jouppi, Matthew Leever Hedlund
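The split between the two data paths in the abstract above can be pictured with a small Python model: a DMA path that bulk-copies blocks from shared memory into a core's vector memory, and a load-store path that moves single values into vector registers. All names here are hypothetical; this is a sketch of the concept, not the circuit.

```python
# Two data paths out of a shared memory: DMA for blocks, load-store for words.

class Core:
    def __init__(self):
        self.vector_memory = {}     # address -> block of values
        self.vector_registers = {}  # register name -> value

class SharedMemory:
    def __init__(self, data):
        self.data = data  # address -> value

    def dma_to_core(self, core, base, length, vmem_addr):
        """DMA path: bulk-copy a block into the core's vector memory."""
        core.vector_memory[vmem_addr] = [self.data[base + i] for i in range(length)]

    def load(self, core, addr, reg):
        """Load-store path: move a single value into a vector register."""
        core.vector_registers[reg] = self.data[addr]

    def store(self, core, reg, addr):
        self.data[addr] = core.vector_registers[reg]

shared = SharedMemory({i: float(i) for i in range(16)})
core0, core1 = Core(), Core()
shared.dma_to_core(core0, base=0, length=8, vmem_addr=0)  # block transfer
shared.load(core1, addr=8, reg="v0")                      # single-word load
print(core0.vector_memory[0], core1.vector_registers["v0"])
```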
  • Publication number: 20240104045
    Abstract: A system for sharing peripheral component interconnect express (PCIe) devices across multiple host servers is disclosed. In some embodiments, a system includes a plurality of hosts associated with a plurality of hierarchies, one or more endpoints associated with one or more of the plurality of hierarchies, and a switch communicatively connectable to the plurality of hosts and the one or more endpoints. The switch is configured to: receive a transaction layer packet (TLP); determine a policy group identifier based on parsing and processing the TLP; perform packet forward matching based on the policy group identifier and destination fields of the TLP; based on whether the TLP is communicated between the hosts and endpoints in different hierarchies of the plurality of hierarchies, determine whether to edit the TLP using one or more rewrite rules; and forward the TLP to an appropriate destination link.
    Type: Application
    Filed: August 9, 2023
    Publication date: March 28, 2024
    Inventors: Thomas Norrie, Frederic Vecoven, Kiran Seshadri, Shrijeet Mukherjee, Chetan Loke
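A schematic rendering of the forwarding pipeline this abstract describes, under invented table contents and field names: classify the TLP into a policy group, match on (group, destination), rewrite when the packet crosses hierarchies, then pick the output link.

```python
# Classify -> forward-match -> (maybe) rewrite -> forward. Tables are made up.

from dataclasses import dataclass, replace

@dataclass
class TLP:
    hierarchy: int    # PCIe hierarchy the packet belongs to
    requester_id: int
    address: int

POLICY_GROUPS = {0: 10, 1: 20}                  # hierarchy -> policy group id
FORWARD_TABLE = {(10, 0x1000): ("link2", 1),    # (group, addr) -> (link, dest hierarchy)
                 (20, 0x2000): ("link5", 0)}

def forward(tlp: TLP):
    group = POLICY_GROUPS[tlp.hierarchy]                    # parse + classify
    link, dest_hier = FORWARD_TABLE[(group, tlp.address)]   # forward matching
    if dest_hier != tlp.hierarchy:
        # Crossing hierarchies: apply a rewrite rule (here, retag the
        # requester ID so it is valid in the destination hierarchy).
        tlp = replace(tlp, hierarchy=dest_hier,
                      requester_id=tlp.requester_id | 0x8000)
    return link, tlp

link, out = forward(TLP(hierarchy=0, requester_id=0x42, address=0x1000))
print(link, hex(out.requester_id))  # link2 0x8042
```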
  • Patent number: 11934826
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: March 19, 2024
    Assignee: Google LLC
    Inventors: Thomas Norrie, Gurushankar Rajamani, Andrew Everett Phelps, Matthew Leever Hedlund, Norman Paul Jouppi
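The accumulation step can be pictured with a few lines of Python: each core contributes a vector, and an "operator unit" folds the vectors together elementwise using whatever arithmetic operation it is configured with. This is a conceptual sketch only; the function names are invented.

```python
# Elementwise reduction of per-core vectors by a configurable operator.

import operator
from functools import reduce

def operator_unit(vectors, op=operator.add):
    """Accumulate the vectors elementwise with the encoded arithmetic op."""
    return [reduce(op, column) for column in zip(*vectors)]

# Pretend each core computed a partial result (e.g. partial sums), which
# arrived at the shared memory over its DMA data path.
core_vectors = [
    [1.0, 2.0, 3.0],   # from core 0
    [4.0, 5.0, 6.0],   # from core 1
    [7.0, 8.0, 9.0],   # from core 2
]
print(operator_unit(core_vectors))          # [12.0, 15.0, 18.0]
print(operator_unit(core_vectors, op=max))  # also works as a max-reduction
```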
  • Patent number: 11922292
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores include a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: March 5, 2024
    Assignee: Google LLC
    Inventors: Thomas Norrie, Andrew Everett Phelps, Norman Paul Jouppi, Matthew Leever Hedlund
  • Patent number: 11921611
    Abstract: A computer-implemented method that includes monitoring execution of program code by first and second processor components. A computing system detects that a trigger condition is satisfied by: i) identifying an operand in a portion of the program code; or ii) determining that a current time of a clock of the computing system indicates a predefined time value. The operand and the predefined time value are used to initiate trace events. When the trigger condition is satisfied, the system initiates trace events that generate trace data identifying respective hardware events occurring across the computing system. The system uses the trace data to generate a correlated set of trace data. The correlated trace data indicates a time ordered sequence of the respective hardware events. The system uses the correlated set of trace data to analyze performance of the executing program code.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: March 5, 2024
    Assignee: Google LLC
    Inventors: Thomas Norrie, Naveen Kumar
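A toy version of the trigger logic in software, with invented names: tracing starts either when a distinguished operand appears among an instruction's operands or when the clock reaches a predefined time value, and every captured event carries a timestamp so the events can later be placed in a time-ordered sequence.

```python
# Two trigger sources (operand match, clock threshold) arm trace capture.

import time

TRACE_OPERAND = "TRACE_MARK"      # operand that arms tracing
TRIGGER_TIME = time.time() + 0.5  # predefined time value

trace_buffer = []

def should_trigger(instruction_operands, now):
    return TRACE_OPERAND in instruction_operands or now >= TRIGGER_TIME

def maybe_trace(component, event, operands):
    now = time.time()
    if should_trigger(operands, now):
        # Each hardware event is recorded with a timestamp so events from
        # different components can be merged into one correlated sequence.
        trace_buffer.append((now, component, event))

maybe_trace("core0", "dma_start", ["r1", "r2"])        # no trigger yet
maybe_trace("core1", "matmul_issue", [TRACE_OPERAND])  # operand trigger fires
print(sorted(trace_buffer))                            # time-ordered view
```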
  • Publication number: 20240070098
    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
    Type: Application
    Filed: August 2, 2023
    Publication date: February 29, 2024
    Inventors: Mark William Gottscho, Matthew William Ashcraft, Thomas Norrie, Oliver Edward Bowen
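The address arithmetic is easy to state in Python. The sketch below walks a multi-dimensional tensor with per-dimension step indices and stride offsets, which is the serial analogue of what the abstract's m parallel address units compute each cycle; the function name and layout are illustrative assumptions.

```python
# Multi-level striding: each dimension keeps a step index (the "step
# tracker") and contributes step_index * stride to the final address.

from itertools import product

def tensor_addresses(base, dims, strides):
    """Yield the memory address of every element of a multi-dim tensor.

    dims    -- size of each dimension, outermost first
    strides -- element stride of each dimension
    """
    for indices in product(*(range(d) for d in dims)):
        yield base + sum(i * s for i, s in zip(indices, strides))

# A 2x3 tensor stored row-major at base address 100.
print(list(tensor_addresses(base=100, dims=(2, 3), strides=(3, 1))))
# [100, 101, 102, 103, 104, 105]

# The same data walked column-major, by swapping dims and strides:
print(list(tensor_addresses(100, dims=(3, 2), strides=(1, 3))))
# [100, 103, 101, 104, 102, 105]
```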
  • Publication number: 20230297372
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Application
    Filed: December 5, 2022
    Publication date: September 21, 2023
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
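A minimal model of the banked vector memory, assuming a simple address-mod-banks interleave (the abstract does not specify the mapping): a vector read completes conflict-free only when each address lands in a distinct bank.

```python
# Banked memory: addresses interleave across banks so different lanes can
# be served in the same cycle when their addresses hit different banks.

class BankedVectorMemory:
    def __init__(self, n_banks, words_per_bank):
        self.n_banks = n_banks
        self.banks = [[0.0] * words_per_bank for _ in range(n_banks)]

    def _locate(self, addr):
        return addr % self.n_banks, addr // self.n_banks

    def write(self, addr, value):
        bank, offset = self._locate(addr)
        self.banks[bank][offset] = value

    def read_vector(self, addrs):
        # Conflict-free only if every address falls in a distinct bank.
        assert len({a % self.n_banks for a in addrs}) == len(addrs), "bank conflict"
        return [self.banks[b][o] for b, o in map(self._locate, addrs)]

vmem = BankedVectorMemory(n_banks=8, words_per_bank=16)
for i in range(8):
    vmem.write(i, float(i) * 2)
print(vmem.read_vector(list(range(8))))  # one element from each bank
```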
  • Patent number: 11762793
    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: September 19, 2023
    Assignee: Google LLC
    Inventors: Mark William Gottscho, Matthew William Ashcraft, Thomas Norrie, Oliver Edward Bowen
  • Publication number: 20230239351
    Abstract: A system for one-sided read remote memory access is disclosed. In some embodiments, the system is configured to receive, at a responder SFA, a first packet comprising a read request to read a remote memory of a second host from a first host, wherein a payload of the first packet is mapped to be a transmit header queue (TxHQ) entry (TxHQE), and the TxHQE includes a pointer to a memory map; separate the received packet into portions including an upper level protocol (ULP) portion, the ULP portion being the TxHQE; create a ULP header queue for the TxHQE; generate a read response based on mapping the ULP header queue into hardware as the TxHQ, wherein the TxHQE includes a pointer to data from a valid memory region of the second host identified by the memory mapping; and transmit a read response packet with the data identified by the pointer using the TxHQ to the first host.
    Type: Application
    Filed: January 26, 2023
    Publication date: July 27, 2023
    Inventors: Shrijeet Mukherjee, Thomas Norrie, Carlo Contavalli, Gurjeet Singh, Adam Belay
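The flow is intricate; as a loose sketch only, the following Python treats the request payload as a TxHQE-like record pointing into a registered memory region, which the responder validates and answers without involving the remote host's CPU. The dictionaries and field names are invented and simplify the hardware queue structures considerably.

```python
# One-sided read: the request payload itself names the memory to return.

REGISTERED_REGIONS = {           # memory-map key -> responder host memory
    "region0": bytes(range(256)),
}

def make_read_request(region, offset, length):
    # The payload is mapped to act as the TxHQE: a pointer into the map.
    return {"payload": {"region": region, "offset": offset, "length": length}}

def responder_sfa(packet):
    txhqe = packet["payload"]                 # payload *is* the queue entry
    region = REGISTERED_REGIONS.get(txhqe["region"])
    if region is None:
        return {"status": "invalid region"}
    start, end = txhqe["offset"], txhqe["offset"] + txhqe["length"]
    if end > len(region):
        return {"status": "out of bounds"}
    # The read response carries the data the TxHQE pointed at; the second
    # host's CPU never handled the request.
    return {"status": "ok", "data": region[start:end]}

print(responder_sfa(make_read_request("region0", offset=16, length=4)))
# {'status': 'ok', 'data': b'\x10\x11\x12\x13'}
```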
  • Patent number: 11650895
    Abstract: A computer-implemented method executed by one or more processors, the method includes monitoring execution of program code executed by a first processor component; and monitoring execution of program code executed by a second processor component. A computing system stores data identifying hardware events in a memory buffer. The stored events occur across processor units that include at least the first and second processor components. The hardware events each include an event time stamp and metadata characterizing the event. The system generates a data structure identifying the hardware events. The data structure arranges the events in a time ordered sequence and associates events with at least the first or second processor components. The system stores the data structure in a memory bank of a host device and uses the data structure to analyze performance of the program code executed by the first or second processor components.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: May 16, 2023
    Assignee: Google LLC
    Inventors: Thomas Norrie, Naveen Kumar
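The correlation step amounts to a timestamp-ordered merge of per-component event buffers, which the following few lines of Python illustrate; the event tuples are invented examples.

```python
# Merge per-component event buffers into one time-ordered structure.

import heapq

# Per-component buffers of (timestamp, component, metadata) events.
core0_events = [(1.0, "core0", "issue matmul"), (3.5, "core0", "dma done")]
core1_events = [(0.5, "core1", "fetch"), (2.0, "core1", "sync wait")]

# Each buffer is already time-sorted, so a k-way merge yields the
# correlated, time-ordered sequence across both components.
timeline = list(heapq.merge(core0_events, core1_events))

for ts, component, meta in timeline:
    print(f"{ts:6.2f}s  {component:6s}  {meta}")

# The merged structure can now answer questions like "what was core1
# doing while core0's DMA was in flight?" during performance analysis.
```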
  • Patent number: 11586920
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: February 21, 2023
    Assignee: Google LLC
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
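Numerically, the two units compose as a dot-product stage followed by an elementwise activation stage. A plain-Python sketch (framework-free, function names invented):

```python
# Matrix computation unit -> accumulated values -> vector computation unit.

def matrix_computation_unit(weights, activations):
    """Accumulate weight*activation products: one dot product per output."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

def vector_computation_unit(accumulated, activation_fn):
    """Apply the activation function to every accumulated value."""
    return [activation_fn(v) for v in accumulated]

relu = lambda x: max(0.0, x)

weights = [[0.5, -1.0], [2.0, 0.25]]  # one layer's weight inputs
activations = [1.0, 2.0]              # the layer's activation inputs

accumulated = matrix_computation_unit(weights, activations)
print(accumulated)                                 # [-1.5, 2.5]
print(vector_computation_unit(accumulated, relu))  # [0.0, 2.5]
```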
  • Publication number: 20220398215
    Abstract: A system for providing memory access is disclosed. In some embodiments, the system is configured to receive, at a source server fabric adapter (SFA), from a server, a memory access request comprising a virtual memory address, and to determine, using associative mapping, whether the virtual memory address corresponds to a source-local memory associated with the source SFA or to a remote memory. If the virtual address corresponds to the source-local memory, the virtual memory address is translated, at the source SFA, into a physical memory address of the source-local memory. If the virtual address corresponds to the remote memory, a request message is synthesized, and the synthesized request message is transmitted to a destination SFA using a network protocol.
    Type: Application
    Filed: June 9, 2022
    Publication date: December 15, 2022
    Inventors: Thomas Norrie, Shrijeet Mukherjee, Rochan Sankar
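The dispatch the abstract describes reduces to a lookup per virtual page, sketched below with invented page tables: local pages translate to a physical address at the source SFA, while remote pages cause a request message to be synthesized for a destination SFA.

```python
# Local-vs-remote dispatch on a virtual address, one lookup per page.

PAGE_MASK, OFFSET_MASK = 0xF000, 0x0FFF

LOCAL_MAP = {0x1000: 0xA000, 0x2000: 0xB000}  # virtual page -> local physical page
REMOTE_MAP = {0x3000: ("sfa-7", 0xC000)}      # virtual page -> (destination SFA, page)

def access(virtual_addr):
    page, offset = virtual_addr & PAGE_MASK, virtual_addr & OFFSET_MASK
    if page in LOCAL_MAP:
        # Source-local memory: translate to a physical address at this SFA.
        return {"kind": "local", "physical": LOCAL_MAP[page] + offset}
    if page in REMOTE_MAP:
        # Remote memory: synthesize a request message for the destination SFA.
        dest_sfa, remote_page = REMOTE_MAP[page]
        return {"kind": "remote", "dest_sfa": dest_sfa,
                "message": {"op": "read", "addr": remote_page + offset}}
    raise ValueError("unmapped virtual address")

print(access(0x1004))  # translated locally: physical 0xA004
print(access(0x3008))  # synthesized request for sfa-7
```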
  • Publication number: 20220398207
    Abstract: A multi-plane, multi-protocol memory switch system is disclosed. In some embodiments, a memory switch includes a plurality of switch ports, the memory switch connectable to one or more root complex (RC) devices through one or more respective switch ports of the plurality of switch ports, and the memory switch connectable to a set of endpoints through a set of other switch ports of the plurality of switch ports, wherein the set includes zero or multiple endpoints; a cacheline exchange engine configured to provide a data-exchange path between two endpoints and to map an address space of one endpoint to an address space of another endpoint; and a bulk data transfer engine configured to facilitate data-exchange between two endpoints as a source-destination data stream, one endpoint being designated a source address and another endpoint being designated a destination address.
    Type: Application
    Filed: June 8, 2022
    Publication date: December 15, 2022
    Inventors: Thomas Norrie, Shrijeet Mukherjee, John Greth, Rochan Sankar, Shimon Muller, Ariel Hendel, Gurjeet Singh
  • Patent number: 11520581
    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: December 6, 2022
    Assignee: Google LLC
    Inventors: William Lacy, Gregory Michael Thorson, Christopher Aaron Clark, Norman Paul Jouppi, Thomas Norrie, Andrew Everett Phelps
  • Publication number: 20220366255
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: July 27, 2022
    Publication date: November 17, 2022
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Publication number: 20220327075
    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
    Type: Application
    Filed: April 25, 2022
    Publication date: October 13, 2022
    Inventors: Mark William Gottscho, Matthew William Ashcraft, Thomas Norrie, Oliver Edward Bowen
  • Publication number: 20220261622
    Abstract: Methods, systems, and apparatus including a special-purpose hardware chip for training neural networks are described. The special-purpose hardware chip may include a scalar processor configured to control computational operation of the special-purpose hardware chip. The chip may also include a vector processor configured to have a 2-dimensional array of vector processing units which all execute the same instruction in a single instruction, multiple-data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiply unit that is coupled to the vector processor and configured to multiply at least one two-dimensional matrix with a second one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.
    Type: Application
    Filed: March 14, 2022
    Publication date: August 18, 2022
    Inventors: Thomas Norrie, Olivier Temam, Andrew Everett Phelps, Norman Paul Jouppi
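As a very loose software analogy (invented names, no claim to the real microarchitecture): the scalar processor sequences the work, the matrix multiply unit performs the matrix-vector product, and the vector processor applies a single instruction across a 2-dimensional array of data in SIMD fashion.

```python
# Scalar processor sequences; matrix unit multiplies; vector array is SIMD.

def matrix_multiply_unit(matrix, vector):
    """Multiply a two-dimensional matrix by a one-dimensional vector."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

def vector_processor(grid, instruction):
    """Every unit in the 2-D array executes the same instruction (SIMD)."""
    return [[instruction(x) for x in row] for row in grid]

def scalar_processor():
    """Controls the computation; issues work to the other units in order."""
    weights = [[1.0, 2.0], [3.0, 4.0]]
    activations = [0.5, -1.0]
    product = matrix_multiply_unit(weights, activations)  # [-1.5, -2.5]
    grid = [product, product]                             # 2-D operand array
    return vector_processor(grid, lambda x: max(0.0, x))  # same op, all units

print(scalar_processor())  # [[0.0, 0.0], [0.0, 0.0]]
```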
  • Publication number: 20220156071
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
    Type: Application
    Filed: November 19, 2021
    Publication date: May 19, 2022
    Inventors: Thomas Norrie, Gurushankar Rajamani, Andrew Everett Phelps, Matthew Leever Hedlund, Norman Paul Jouppi
  • Publication number: 20220129364
    Abstract: A computer-implemented method that includes monitoring execution of program code by first and second processor components. A computing system detects that a trigger condition is satisfied by: i) identifying an operand in a portion of the program code; or ii) determining that a current time of a clock of the computing system indicates a predefined time value. The operand and the predefined time value are used to initiate trace events. When the trigger condition is satisfied, the system initiates trace events that generate trace data identifying respective hardware events occurring across the computing system. The system uses the trace data to generate a correlated set of trace data. The correlated trace data indicates a time ordered sequence of the respective hardware events. The system uses the correlated set of trace data to analyze performance of the executing program code.
    Type: Application
    Filed: January 7, 2022
    Publication date: April 28, 2022
    Inventors: Thomas Norrie, Naveen Kumar