Patents by Inventor Ravi Narayanaswami

Ravi Narayanaswami has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948060
    Abstract: A three-dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: April 2, 2024
    Assignee: Google LLC
    Inventors: Andreas Georg Nowatzyk, Olivier Temam, Ravi Narayanaswami, Uday Kumar Dasari
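
    A minimal Python sketch of the stacked-tile arrangement this abstract describes: two vertically aligned tiles exchange data over a shared coil abstraction. The Tile and InductiveLink names are illustrative assumptions, not terms from the filing.

    ```python
    # Illustrative sketch: two vertically aligned accelerator tiles whose
    # transmission coils couple inductively, modeled as a shared channel.
    # Class and method names are assumptions, not the patent's terminology.

    class InductiveLink:
        """Wireless coil-to-coil channel shared by two stacked tiles."""
        def __init__(self):
            self._buffer = None

        def transmit(self, payload):
            self._buffer = payload      # no wired through-silicon via needed

        def receive(self):
            payload, self._buffer = self._buffer, None
            return payload

    class Tile:
        def __init__(self, layer, link):
            self.layer = layer          # vertical position in the 3D stack
            self.coil = link            # this tile's transmission coil

    link = InductiveLink()              # aligned coils form one coupled pair
    bottom, top = Tile(0, link), Tile(1, link)

    bottom.coil.transmit([1.0, 2.0, 3.0])   # bottom tile sends partial sums
    print(top.coil.receive())               # top tile reads them wirelessly
    ```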
  • Publication number: 20240078417
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
    Type: Application
    Filed: June 30, 2023
    Publication date: March 7, 2024
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
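
    The dataflow in this abstract, where a traversal unit signals the activation bank, the activation is driven onto a data bus, and a MAC cell combines it with a parameter, can be sketched in a few lines of Python. The variable names are assumptions for illustration.

    ```python
    # Illustrative sketch of the abstract's dataflow: a traversal unit steps
    # through the activation bank, each activation is driven onto a data bus,
    # and a MAC cell multiplies it by a parameter and accumulates. Names are
    # assumptions, not the patent's terminology.

    activation_bank = [0.5, 1.5, 2.0]   # first memory bank: input activations
    parameter_bank = [0.1, 0.2, 0.3]    # second memory bank: on-unit parameters

    accumulator = 0.0
    for address in range(len(activation_bank)):  # traversal unit control signal
        bus_value = activation_bank[address]     # activation placed on data bus
        weight = parameter_bank[address]         # parameter read by the MAC cell
        accumulator += bus_value * weight        # multiply-accumulate

    print(accumulator)  # ~0.95: one output element of the data array
    ```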
  • Publication number: 20230418677
    Abstract: The present disclosure describes a system and method for preempting a long-running process with a higher priority process in a machine learning system, such as a hardware accelerator. The machine learning hardware accelerator can be a multi-chip system including semiconductor chips that can be application-specific integrated circuits (ASIC) designed to perform machine learning operations. An ASIC is an integrated circuit (IC) that is customized for a particular use.
    Type: Application
    Filed: December 21, 2020
    Publication date: December 28, 2023
    Inventors: Temitayo Fadelu, Ravi Narayanaswami, JiHong Min, Dongdong Li, Suyog Gupta, Jason Jong Kyu Park
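
    As a rough illustration of the preemption idea, pausing a long-running job when higher-priority work arrives and resuming it later, here is a toy priority scheduler in Python. The checkpointing scheme and job names are invented for the example, not taken from the disclosure.

    ```python
    # Toy sketch of priority preemption on an accelerator work queue: a
    # long-running low-priority job yields at a checkpoint when a
    # higher-priority job shows up. Purely illustrative.

    import heapq

    class Job:
        def __init__(self, name, priority, steps):
            self.name, self.priority, self.steps = name, priority, steps
        def __lt__(self, other):           # lower number = higher priority
            return self.priority < other.priority

    queue = []
    heapq.heappush(queue, Job("long_inference", priority=5, steps=6))

    while queue:
        job = heapq.heappop(queue)
        while job.steps > 0:
            job.steps -= 1                 # run one chunk of work
            if job.name == "long_inference" and job.steps == 4:
                heapq.heappush(queue, Job("urgent_request", priority=1, steps=2))
            if queue and queue[0] < job:   # higher-priority work is waiting
                print(f"preempting {job.name}, saving state at step {job.steps}")
                heapq.heappush(queue, job) # requeue; resume after preemption
                break
        else:
            print(f"{job.name} finished")
    ```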
  • Publication number: 20230376664
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining architectures of hardware accelerators.
    Type: Application
    Filed: October 11, 2021
    Publication date: November 23, 2023
    Inventors: Amir Yazdanbakhsh, Christof Angermueller, Berkin Akin, Yanqi Zhou, James Laudon, Ravi Narayanaswami
  • Patent number: 11816045
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: November 14, 2023
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
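
    The zero-skipping scheme in this abstract is easy to picture in Python: index only the addresses holding non-zero activations, then compute from the index so the computational array never multiplies by zero. The list-based memory bank is an illustrative stand-in.

    ```python
    # Illustrative sketch of zero-skipping: the controller builds an index of
    # memory addresses whose activation is non-zero, and only those addresses
    # feed the data bus. Names are assumptions for illustration.

    input_activations = [0.0, 3.0, 0.0, 0.0, 1.5, 2.0]
    weights = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]

    # Controller pass: record only addresses with non-zero activation values.
    nonzero_index = [a for a, v in enumerate(input_activations) if v != 0.0]

    # Compute pass: provide activations to the bus only from indexed addresses.
    output = sum(input_activations[a] * weights[a] for a in nonzero_index)

    print(nonzero_index)  # [1, 4, 5] -- zero-valued addresses are skipped
    print(output)         # 3.0*0.8 + 1.5*0.5 + 2.0*0.4 = 3.95
    ```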
  • Patent number: 11816480
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: November 14, 2023
    Assignee: Google LLC
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Patent number: 11727259
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 11586701
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a circuit configured to add multiple inputs. The circuit includes a first adder section that receives a first input and a second input and adds the inputs to generate a first sum. The circuit also includes a second adder section that receives the first and second inputs and adds the inputs to generate a second sum. An input processor of the circuit receives the first and second inputs, determines whether a relationship between the first and second inputs satisfies a set of conditions, and selects a high-power mode of the adder circuit or a low-power mode of the adder circuit using the determined relationship between the first and second inputs. The high-power mode is selected and the first and second inputs are routed to the second adder section when the relationship satisfies the set of conditions.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: February 21, 2023
    Assignee: Google LLC
    Inventors: Anand Suresh Kane, Ravi Narayanaswami
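
    A sketch of the mode selection under one assumed condition (operand width): inputs that fit a narrow adder section take the low-power path, while inputs that could overflow it are routed to the wide, high-power section. The 8-bit threshold is an assumption for illustration, not the patent's actual set of conditions.

    ```python
    # Illustrative sketch: an input processor inspects the two operands and
    # selects a low-power narrow adder or a high-power wide adder. The width
    # test below is an assumed condition, not the patented one.

    NARROW_BITS = 8

    def narrow_adder(a, b):
        """First adder section: cheap, handles sums that fit NARROW_BITS."""
        return (a + b) & ((1 << NARROW_BITS) - 1)

    def wide_adder(a, b):
        """Second adder section: full-width, higher power."""
        return a + b

    def input_processor(a, b):
        """Select a mode from the relationship between the two inputs."""
        needs_wide = (a.bit_length() >= NARROW_BITS or
                      b.bit_length() >= NARROW_BITS)   # could overflow narrow path
        return "high-power" if needs_wide else "low-power"

    for a, b in [(100, 27), (200, 99)]:
        mode = input_processor(a, b)
        result = wide_adder(a, b) if mode == "high-power" else narrow_adder(a, b)
        print(f"{a} + {b} = {result} via {mode} mode")
    ```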
  • Publication number: 20230004386
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: August 22, 2022
    Publication date: January 5, 2023
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Publication number: 20220391472
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 8, 2022
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
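
    A minimal sketch of the sparse-to-dense transform this abstract describes: two groups of "access units" fetch (row, col, value) sparse elements for the two source matrices, and the results are scattered into one dense output. The side-by-side output layout is an illustrative assumption.

    ```python
    # Illustrative sketch: sparse elements fetched by two groups of access
    # units are scattered into a single output dense matrix. The layout
    # (first matrix in the left half, second in the right) is assumed.

    first_elements = [(0, 0, 1.0), (1, 1, 2.0)]    # from the first dense matrix
    second_elements = [(0, 1, 3.0), (1, 0, 4.0)]   # from the second dense matrix

    ROWS, COLS = 2, 2
    output = [[0.0] * (2 * COLS) for _ in range(ROWS)]   # output dense matrix

    for row, col, value in first_elements:               # scatter first matrix
        output[row][col] = value
    for row, col, value in second_elements:              # scatter second matrix
        output[row][COLS + col] = value                  # offset into right half

    for line in output:
        print(line)   # [1.0, 0.0, 0.0, 3.0] then [0.0, 2.0, 4.0, 0.0]
    ```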
  • Patent number: 11501144
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: November 15, 2022
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Publication number: 20220318594
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Application
    Filed: June 21, 2022
    Publication date: October 6, 2022
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
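
    A sketch of the core idea here: a single instruction's data values define the structure of the loop nest that carries out the tensor computation. The instruction encoding below is a made-up illustration; the patent does not specify this format.

    ```python
    # Illustrative sketch: the instruction's fields (layer type and dimensions)
    # determine the loop-nest structure executed by the processing unit.

    instruction = {
        "layer_type": "fully_connected",  # layer type shapes the loop nest
        "batch": 2,
        "in_features": 3,
        "out_features": 4,
    }

    def run_tensor_computation(instr, activations, weights):
        if instr["layer_type"] == "fully_connected":
            # Loop nest defined by the instruction's data values.
            out = [[0.0] * instr["out_features"] for _ in range(instr["batch"])]
            for b in range(instr["batch"]):
                for o in range(instr["out_features"]):
                    for i in range(instr["in_features"]):
                        out[b][o] += activations[b][i] * weights[i][o]
            return out
        raise NotImplementedError(instr["layer_type"])

    acts = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
    wts = [[0.1] * 4, [0.2] * 4, [0.3] * 4]
    print(run_tensor_computation(instruction, acts, wts))  # ~1.4s and ~3.2s
    ```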
  • Publication number: 20220300421
    Abstract: Components on an IC chip may operate faster or provide higher performance relative to power consumption if allowed access to sufficient memory resources. If every component is provided its own memory, however, the chip becomes expensive. In described implementations, memory is shared between two or more components. For example, a processing component can include computational circuitry and a memory coupled thereto. A multi-component cache controller is coupled to the memory. Logic circuitry is coupled to the cache controller and the memory. The logic circuitry selectively separates the memory into multiple memory partitions. A first memory partition can be allocated to the computational circuitry and provide storage to the computational circuitry. A second memory partition can be allocated to the cache controller and provide storage to multiple components.
    Type: Application
    Filed: August 19, 2020
    Publication date: September 22, 2022
    Applicant: Google LLC
    Inventors: Suyog Gupta, Ravi Narayanaswami, Uday Kumar Dasari, Ali Iranli, Pavan Thirunagari, Vinu Vijay Kumar, Sunitha R. Kosireddy
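
    An illustrative sketch of the partitioning idea: one physical memory is split by logic circuitry into a scratchpad slice owned by the compute unit and a cache slice managed by the multi-component cache controller. The sizes, fractions, and names below are assumptions for illustration.

    ```python
    # Illustrative sketch: selectively separate one shared memory into a
    # compute-owned scratchpad partition and a shared cache partition, and
    # re-partition as the workload changes. Parameters are assumed.

    MEMORY_WORDS = 1024

    def partition(memory_words, scratchpad_fraction):
        """Selectively separate one memory into two partitions."""
        split = int(memory_words * scratchpad_fraction)
        scratchpad = (0, split)          # allocated to computational circuitry
        cache = (split, memory_words)    # allocated to the cache controller
        return scratchpad, cache

    scratchpad, cache = partition(MEMORY_WORDS, scratchpad_fraction=0.25)
    print(f"scratchpad words {scratchpad}, shared cache words {cache}")

    # Re-partition at runtime, e.g. give the shared cache more capacity:
    scratchpad, cache = partition(MEMORY_WORDS, scratchpad_fraction=0.10)
    print(f"scratchpad words {scratchpad}, shared cache words {cache}")
    ```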
  • Patent number: 11422801
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: August 23, 2022
    Assignee: Google LLC
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Publication number: 20220245453
    Abstract: Methods, systems, and apparatus, including an apparatus for redistributing tensor elements among computing units are described. In one aspect, a method includes distributing tensor elements of an N-dimensional tensor among multiple computing units of a computation system. Each computing unit redistributes the subset of tensor elements previously distributed to the computing unit to computing units. Each computing unit accesses redistribution partitioning data that specifies, for each computing unit, the tensor elements that are to be stored by the computing unit after redistributing the tensor elements. For each tensor element previously distributed to the particular computing unit, the computing unit determines a global linearized index value for the tensor element based on a multi-dimensional index for the tensor element.
    Type: Application
    Filed: October 7, 2020
    Publication date: August 4, 2022
    Inventors: David Alexander Majnemer, Ravi Narayanaswami, Dong Hyuk Woo, Carrell Daniel Killebrew
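
    The bookkeeping in this abstract can be sketched directly: each computing unit converts a tensor element's multi-dimensional index into a global linearized index (row-major strides assumed here), then looks up the element's new owner in redistribution partitioning data. The contiguous-block partitioning is an illustrative assumption.

    ```python
    # Illustrative sketch: global linearized index from a multi-dimensional
    # index, then owner lookup via assumed contiguous-block partitioning data.

    shape = (2, 3, 4)                 # N-dimensional tensor shape
    num_units = 4
    total = 2 * 3 * 4                 # 24 elements in the tensor

    def linearize(index, shape):
        """Global linearized index from a multi-dimensional index (row-major)."""
        linear = 0
        for dim, size in zip(index, shape):
            linear = linear * size + dim
        return linear

    def owner(linear_index):
        """Redistribution partitioning data: contiguous blocks per unit."""
        block = total // num_units            # 6 elements per computing unit
        return linear_index // block

    for index in [(0, 0, 0), (0, 2, 3), (1, 1, 1)]:
        g = linearize(index, shape)
        print(f"element {index} -> global index {g} -> unit {owner(g)}")
    ```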
  • Patent number: 11379707
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 5, 2022
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 11366877
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: June 21, 2022
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Publication number: 20220147793
    Abstract: A three-dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
    Type: Application
    Filed: January 7, 2022
    Publication date: May 12, 2022
    Inventors: Andreas Georg Nowatzyk, Olivier Temam, Ravi Narayanaswami, Uday Kumar Dasari
  • Publication number: 20220083480
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Application
    Filed: August 24, 2021
    Publication date: March 17, 2022
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
  • Publication number: 20220044153
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for virtualizing external memory as local to a machine learning accelerator. One ambient computing system comprises: an ambient machine learning engine; a low-power CPU; and an SRAM that is shared among at least the ambient machine learning engine and the low-power CPU; wherein the ambient machine learning engine comprises virtual address logic to translate from virtual addresses generated by the ambient machine learning engine to physical addresses within the SRAM.
    Type: Application
    Filed: October 21, 2021
    Publication date: February 10, 2022
    Inventors: Lawrence J. Madar, III, Temitayo Fadelu, Harshit Khaitan, Ravi Narayanaswami
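
    A hedged sketch of the virtual address logic this abstract describes: the ambient machine learning engine issues virtual addresses, and a simple page table maps them onto physical addresses inside the SRAM it shares with the low-power CPU. The page size and table contents are illustrative assumptions.

    ```python
    # Illustrative sketch: translate the ML engine's virtual addresses to
    # physical addresses within the shared SRAM via an assumed page table.

    PAGE_SIZE = 256  # bytes per page (assumed)

    # Virtual page number -> physical page number within the shared SRAM.
    page_table = {0: 3, 1: 0, 2: 7}

    def translate(virtual_address):
        """Virtual address logic: virtual -> physical SRAM address."""
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page not in page_table:
            raise MemoryError(f"unmapped virtual page {page}")
        return page_table[page] * PAGE_SIZE + offset

    for va in [0x0010, 0x0110, 0x0210]:
        print(f"virtual {va:#06x} -> physical {translate(va):#06x}")
    ```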