Patents by Inventor Ganesh Venkatesh

Ganesh Venkatesh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10977002
    Abstract: Disclosed herein are a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits that represent a value of zero or 2^n, where n is an integer greater than or equal to 0. When the two or more second bits represent the value 2^n, the device shifts the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide the result of the multiplication of the first and second values.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: April 13, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li, Vikas Chandra
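
A minimal Python sketch of the shift-and-add multiplication described above. From the abstract, the effective multiplier works out to 2^(n+1) plus the first bit when the second bits encode 2^n; representing the "zero" encoding as n=None is this sketch's assumption, not something the patent specifies.

```python
def shift_add_multiply(first_bit: int, n: int | None, second: int) -> int:
    """Multiply using only a shift and an add, per the abstract's scheme.

    The first operand is encoded as a first bit plus a field that
    represents either zero (modeled here as n=None) or 2**n (n >= 0).
    """
    # Shift path: when the field represents 2**n, shift the second
    # value left by (n + 1) bits to form the first result.
    first_result = (second << (n + 1)) if n is not None else 0
    # Select path: output zero or the second value based on the first bit.
    second_result = second if first_bit else 0
    # The product is the sum of the two partial results.
    return first_result + second_result

# With first_bit=1 and n=1, the encoded multiplier is 2**(1+1) + 1 = 5:
assert shift_add_multiply(1, 1, 7) == 35
```
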
  • Publication number: 20210019633
    Abstract: Disclosed herein are a system, a method, and a device for performing a convolution on data of a current layer of a neural network, the data including a plurality of channels arranged in a first order and partitioned into a plurality of first partitions according to the first order. Each first partition includes a result of a convolution on a corresponding partition of channels in data of a previous layer of the neural network. The device shifts the plurality of channels arranged in the first order to a second order, and partitions the shifted plurality of channels into a plurality of second partitions according to the second order. For each of the plurality of second partitions, the device performs a convolution on the channels of the shifted plurality of channels that are in that second partition.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventor: Ganesh Venkatesh
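
The shift-and-repartition scheme above resembles a channel shuffle between grouped convolutions. A minimal NumPy sketch, assuming 1x1 convolutions and one concrete shuffle order (the abstract fixes neither):

```python
import numpy as np

def shuffle_and_group_conv(x, weights, groups):
    """Shuffle channels into a second order, repartition them, and
    convolve each partition independently (a grouped 1x1 convolution).

    x:       (channels, height, width) data of the current layer
    weights: per-partition kernels, each (out_channels, channels // groups)
    """
    c, h, w = x.shape
    # Shift channels from the first order to a second order so each new
    # partition mixes channels drawn from different first partitions.
    shuffled = (x.reshape(groups, c // groups, h, w)
                 .transpose(1, 0, 2, 3)
                 .reshape(c, h, w))
    # Partition the shuffled channels and convolve each partition.
    parts = np.split(shuffled, groups, axis=0)
    outs = [np.einsum('oc,chw->ohw', wg, part)
            for wg, part in zip(weights, parts)]
    return np.concatenate(outs, axis=0)

x = np.random.rand(8, 4, 4)
weights = [np.random.rand(4, 4) for _ in range(2)]
print(shuffle_and_group_conv(x, weights, groups=2).shape)  # (8, 4, 4)
```
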
  • Publication number: 20210019115
    Abstract: Disclosed herein are a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits that represent a value of zero or 2^n, where n is an integer greater than or equal to 0. When the two or more second bits represent the value 2^n, the device shifts the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide the result of the multiplication of the first and second values.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li, Vikas Chandra
  • Publication number: 20210019363
    Abstract: Disclosed herein are a system, a method, and a device for improving the computational efficiency of deconvolution by reducing the number of dot products. In one aspect, an input image having a set of pixels is received. A first dot product may be performed on a subset of the set of pixels of the input image and a portion of a kernel, to generate a first pixel of an output image. The number of multiplications performed for the first dot product may be less than the number of elements of the kernel. A second dot product, on the remaining portion of the kernel, may be bypassed when generating the first pixel of the output image.
    Type: Application
    Filed: July 16, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventor: Ganesh Venkatesh
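
A sketch of the bypass idea: in a strided transposed convolution (deconvolution), each output pixel aligns with only a fraction of the kernel taps, so the dot product over the remaining taps can be skipped. The stride-2 setting and the indexing convention below are illustrative assumptions:

```python
import numpy as np

def deconv_output_pixel(x, kernel, stride, oy, ox):
    """Compute one output pixel of a transposed convolution, bypassing
    kernel taps that would only multiply inserted zeros.

    Returns the pixel value and the number of multiplications actually
    performed, which is less than kernel.size whenever stride > 1.
    """
    acc, muls = 0.0, 0
    for ky in range(kernel.shape[0]):
        for kx in range(kernel.shape[1]):
            # Position in the conceptually zero-upsampled input.
            uy, ux = oy - ky, ox - kx
            # Bypass taps landing between real samples or out of range.
            if uy % stride or ux % stride:
                continue
            iy, ix = uy // stride, ux // stride
            if 0 <= iy < x.shape[0] and 0 <= ix < x.shape[1]:
                acc += kernel[ky, kx] * x[iy, ix]
                muls += 1
    return acc, muls

value, muls = deconv_output_pixel(np.ones((4, 4)), np.ones((3, 3)),
                                  stride=2, oy=2, ox=2)
print(value, muls)  # 4.0 4 -- four multiplications instead of nine
```
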
  • Publication number: 20210019591
    Abstract: Disclosed herein are a system, a method, and a device for receiving input data to generate a plurality of outputs for a layer of a neural network. The plurality of outputs are arranged in a first array. Dimensions of the first array may be compared with dimensions of a processing element (PE) array including a plurality of PEs. According to a result of the comparison, the first array is partitioned into subarrays, each of which has dimensions less than or equal to the dimensions of the PE array. A first group of PEs in the PE array is assigned to a first one of the subarrays. Each PE of the first group assigned to the first subarray generates a corresponding output of the plurality of outputs using a portion of the input data.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li
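
A sketch of the partitioning step, assuming a 2-D output array and a 2-D PE array; the row-major tile order is an assumption:

```python
def partition_for_pe_array(out_rows, out_cols, pe_rows, pe_cols):
    """Compare the output array's dimensions against the PE array's and
    partition the output into subarrays that each fit within the PE array.

    Returns (row_start, row_end, col_start, col_end) tiles; a group of
    PEs is assigned per tile, each PE producing one output of the tile.
    """
    tiles = []
    for r in range(0, out_rows, pe_rows):
        for c in range(0, out_cols, pe_cols):
            tiles.append((r, min(r + pe_rows, out_rows),
                          c, min(c + pe_cols, out_cols)))
    return tiles

# A 100x70 output mapped onto a 32x32 PE array yields 4 x 3 = 12 subarrays.
print(len(partition_for_pe_array(100, 70, 32, 32)))  # 12
```
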
  • Publication number: 20210012202
    Abstract: Disclosed herein are a system, a method, and a device for asymmetrical scaling factor support for negative and positive values. A device can include a circuit having shift circuitry and multiply circuitry. The circuit can be configured to perform computation for a neural network, including: multiplying, via the multiply circuitry, a first value and a second value; shifting, via the shift circuitry, a result of the multiplying by a determined number of bits; and outputting the result of the multiplying when a sign bit of the first value is negative, and the result of the shifting when the sign bit of the first value is positive.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Pierce I-Jen Chuang
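
A functional sketch of the asymmetric scaling described above: the product receives an extra power-of-two scale only on the positive path. Exposing the "determined number of bits" as a parameter is an assumption:

```python
def asym_scale_multiply(first: int, second: int, shift: int) -> int:
    """Multiply two values, then apply a left shift only when the first
    value is non-negative, giving different effective scaling factors
    for positive and negative values."""
    product = first * second      # multiply circuitry
    shifted = product << shift    # shift circuitry
    # Select the output based on the sign of the first value.
    return product if first < 0 else shifted

print(asym_scale_multiply(3, 5, 2))   # 60: positive path scaled by 2**2
print(asym_scale_multiply(-3, 5, 2))  # -15: negative path left unscaled
```
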
  • Publication number: 20210011288
    Abstract: Disclosed herein is a method for using a neural network across multiple devices. The method can include receiving, by a first device configured with a first one or more layers of a neural network, input data for processing via the neural network implemented across the first device and a second device. The method can include outputting, by the first one or more layers of the neural network implemented on the first device, a data set that is reduced in size relative to the input data while identifying one or more features of the input data for processing by a second one or more layers of the neural network. The method can include communicating, by the first device, the data set to the second device for processing via the second one or more layers of the neural network implemented on the second device.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Liangzhen Lai, Pierce I-Jen Chuang, Vikas Chandra, Ganesh Venkatesh
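
A minimal sketch of this split-inference flow, assuming simple fully connected layers (the abstract does not specify layer types); the point is that the first device transmits a 64-element feature set rather than the 1024-element input:

```python
import numpy as np

def device_one(x, w1):
    """First device: run the first layer(s), producing a feature set
    that is smaller than the raw input before transmission."""
    return np.maximum(w1 @ x, 0.0)        # e.g. a ReLU layer, 1024 -> 64

def device_two(features, w2):
    """Second device: finish the network on the received features."""
    return w2 @ features

x = np.random.rand(1024)                  # raw input data
w1 = np.random.rand(64, 1024)             # first device's layer weights
w2 = np.random.rand(10, 64)               # second device's layer weights
features = device_one(x, w1)              # 64 values sent instead of 1024
print(device_two(features, w2).shape)     # (10,)
```
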
  • Publication number: 20210012178
    Abstract: Disclosed herein are a system, a method, and a device for early exit from convolution. In some embodiments, at least one processing element (PE) circuit is configured to perform, for a node of a neural network corresponding to a dot-product operation with a set of operands, computation using a subset of the set of operands to generate a dot-product value for that subset. The at least one PE circuit can compare the dot-product value of the subset to a threshold value, and can determine whether to activate the node of the neural network based at least on a result of the comparing.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang
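
A sketch of the early-exit test. The policy shown (skip the remaining multiply-accumulates when the partial sum fails the threshold test) is one plausible reading of the abstract, not a confirmed implementation detail:

```python
import numpy as np

def early_exit_dot(weights, inputs, subset_size, threshold):
    """Compute a dot product over a subset of the operands and decide
    from that partial value whether to activate the node.

    Returns (activated, value); on early exit, the remaining operands
    are never touched."""
    partial = float(np.dot(weights[:subset_size], inputs[:subset_size]))
    if partial <= threshold:
        return False, 0.0                 # early exit: node stays inactive
    full = partial + float(np.dot(weights[subset_size:], inputs[subset_size:]))
    return True, full

w = np.array([0.9, 0.8, 0.1, 0.05])
x = np.ones(4)
print(early_exit_dot(w, x, subset_size=2, threshold=2.0))  # (False, 0.0)
```
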
  • Publication number: 20210011846
    Abstract: Disclosed herein are a system, a method, and a device for reading and writing sparse data in a neural network accelerator. A plurality of slices can be established to access a memory having an access size of a data word. A first slice can be configured to access a first side of the data word in memory. Circuitry can access a mask identifying byte positions within the data word having non-zero values. The circuitry can modify the data word to have its non-zero byte values stored starting at an end of the first side, with any zero byte values stored in the remainder of the data word. A determination can be made whether the number of non-zero byte values is less than or equal to a first access size of the first slice. The circuitry can write the modified data word to the memory via at least the first slice.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li
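
A functional model of the packing described above: non-zero bytes are stored contiguously from one end of the data word, with a mask recording their original byte positions so a narrow slice can serve mostly-zero words. The byte-level Python representation is an assumption:

```python
def pack_sparse_word(word: bytes) -> tuple[int, bytes]:
    """Pack a data word so its non-zero bytes sit at the start, and
    return a mask whose bit i is set when byte i was non-zero."""
    mask, nonzero = 0, bytearray()
    for i, b in enumerate(word):
        if b:
            mask |= 1 << i
            nonzero.append(b)
    # Non-zero bytes first; zero bytes fill the remainder of the word.
    return mask, bytes(nonzero) + bytes(len(word) - len(nonzero))

def unpack_sparse_word(mask: int, packed: bytes) -> bytes:
    """Restore the original word from the mask and the packed bytes."""
    out, j = bytearray(len(packed)), 0
    for i in range(len(packed)):
        if mask >> i & 1:
            out[i] = packed[j]
            j += 1
    return bytes(out)

word = bytes([0, 7, 0, 0, 5, 0, 0, 9])
mask, packed = pack_sparse_word(word)
assert unpack_sparse_word(mask, packed) == word
# Only 3 of 8 bytes are non-zero, so one narrow slice access suffices.
```
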
  • Publication number: 20210012186
    Abstract: Disclosed herein are a system, a method, and a device for pipelined parallelism to accelerate a distributed-learning network graph. First data for a first layer of a neural network may be stored in memory. First circuitry including a first plurality of processing element (PE) circuits may read the first data from the memory and perform computation for the first layer of the neural network using the first data to generate second data. The first circuitry includes a plurality of buffers for outputting the generated second data as input to second circuitry, which includes a second plurality of PE circuits configured to perform computation for the second layer of the neural network using the second data.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai
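
A sketch of this pipelined parallelism, modeling each circuit's PEs as a thread and the inter-layer buffers as bounded queues (both modeling assumptions); layer 2 consumes buffered outputs while layer 1 is still producing:

```python
import queue
import threading
import numpy as np

def layer_stage(w, in_q, out_q):
    """One circuit's PEs: read inputs, compute a layer, and stream the
    results through a buffer to the next stage."""
    while (x := in_q.get()) is not None:
        out_q.put(np.maximum(w @ x, 0.0))  # this stage's layer computation
    out_q.put(None)                        # propagate end-of-stream

in_q, mid_q, out_q = queue.Queue(2), queue.Queue(2), queue.Queue(2)
w1, w2 = np.random.rand(16, 32), np.random.rand(8, 16)
threading.Thread(target=layer_stage, args=(w1, in_q, mid_q)).start()
threading.Thread(target=layer_stage, args=(w2, mid_q, out_q)).start()
for _ in range(4):
    in_q.put(np.random.rand(32))
in_q.put(None)
while (y := out_q.get()) is not None:
    print(y.shape)                         # (8,) for each input
```
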
  • Publication number: 20200401440
    Abstract: Embodiments of systems, methods, and apparatuses for heterogeneous computing are described. In some embodiments, a hardware heterogeneous scheduler dispatches instructions for execution on one or more of a plurality of heterogeneous processing elements, the instructions corresponding to a code fragment to be processed by the one or more of the plurality of heterogeneous processing elements, wherein the instructions are native instructions to at least one of the one or more of the plurality of heterogeneous processing elements.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 24, 2020
    Inventors: Rajesh M. SANKARAN, Gilbert NEIGER, Narayan RANGANATHAN, Stephen R. VAN DOREN, Joseph NUZMAN, Niall D. MCDONNELL, Michael A. O'HANLON, Lokpraveen B. MOSUR, Tracy Garrett DRYSDALE, Eriko NURVITADHI, Asit K. MISHRA, Ganesh VENKATESH, Deborah T. MARR, Nicholas P. CARTER, Jonathan D. PEARCE, Edward T. GROCHOWSKI, Richard J. GRECO, Robert VALENTINE, Jesus CORBAL, Thomas D. FLETCHER, Dennis R. BRADFORD, Dwight P. MANLEY, Mark J. CHARNEY, Jeffrey J. COOK, Paul CAPRIOLI, Koichi YAMADA, Kent D. GLOSSOP, David B. SHEFFIELD
  • Publication number: 20200285618
    Abstract: Compressed data is often beneficial for reducing the computing resources required, for example, to transmit and store data. Compression is particularly useful when dealing with sparse data (data that includes numerous zero or near-zero values), where only non-zero values above a certain threshold have significance. When dealing with compressed data, the data often needs to be decompressed before processing (e.g., by deep learning networks or other applications configured to operate on sparse or other uncompressed data). Instructions are disclosed for supporting the decompression of compressed data by a processing unit such as a CPU or GPU.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 10, 2020
    Inventors: Jorge Albericio Latorre, Jack H. Choquette, Manan Maheshkumar Patel, Jeffrey Pool, Ming Y. Siu, Ronny Meir Krashinsky, Ganesh Venkatesh
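
A functional model of the decompression such instructions would perform: scatter the stored non-zero values back to dense positions indicated by a bitmask. The mask-based format is an assumption; the abstract does not specify the encoding:

```python
import numpy as np

def decompress_sparse(values, mask, length):
    """Scatter compressed non-zero values back to the dense positions
    whose bits are set in the mask."""
    dense = np.zeros(length, dtype=values.dtype)
    positions = [i for i in range(length) if mask >> i & 1]
    dense[positions] = values
    return dense

# Two significant values out of eight; mask bit i marks dense position i.
print(decompress_sparse(np.array([3.5, -1.25]), mask=0b00100010, length=8))
# [ 0.    3.5   0.    0.    0.   -1.25  0.    0.  ]
```
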
  • Publication number: 20190347125
    Abstract: Embodiments of systems, methods, and apparatuses for heterogeneous computing are described. In some embodiments, a hardware heterogeneous scheduler dispatches instructions for execution on one or more of a plurality of heterogeneous processing elements, the instructions corresponding to a code fragment to be processed by the one or more of the plurality of heterogeneous processing elements, wherein the instructions are native instructions to at least one of the one or more of the plurality of heterogeneous processing elements.
    Type: Application
    Filed: December 31, 2016
    Publication date: November 14, 2019
    Inventors: Rajesh M. SANKARAN, Gilbert NEIGER, Narayan RANGANATHAN, Stephen R. VAN DOREN, Joseph NUZMAN, Niall D. MCDONNELL, Michael A. O'HANLON, Lokpraveen B. MOSUR, Tracy Garrett DRYSDALE, Eriko NURVITADHI, Asit K. MISHRA, Ganesh VENKATESH, Deborah T. MARR, Nicholas P. CARTER, Jonathan D. PEARCE, Edward T. GROCHOWSKI, Richard J. GRECO, Robert VALENTINE, Jesus CORBAL, Thomas D. FLETCHER, Dennis R. BRADFORD, Dwight P. MANLEY, Mark J. CHARNEY, Jeffrey J. COOK, Paul CAPRIOLI, Koichi YAMADA, Kent D. GLOSSOP, David B. SHEFFIELD
  • Patent number: 10452551
    Abstract: A processor may include a programmable memory prefetcher that includes a programmable hardware prefetch engine and a prefetch engine control register.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Ganesh Venkatesh, Christopher B. Wilkerson, Seth H. Pugsley, Deborah T. Marr
  • Patent number: 10387037
    Abstract: Techniques for enabling enhanced parallelism for sparse linear algebra operations having write-to-read dependencies are disclosed. A hardware processor includes a plurality of processing elements, a memory that is heavily banked into a plurality of banks, and an arbiter. The arbiter receives requests from threads executing at the plurality of processing elements that seek to perform operations involving the memory, and maintains a plurality of lock buffers corresponding to the plurality of banks. Each lock buffer can track up to a plurality of memory addresses within the corresponding bank that are to be treated as locked: the values stored at those addresses cannot be updated by threads other than the ones that caused them to be locked, until the addresses are removed from tracking by the lock buffers.
    Type: Grant
    Filed: December 31, 2016
    Date of Patent: August 20, 2019
    Assignee: Intel Corporation
    Inventors: Ganesh Venkatesh, Deborah Marr
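
A functional Python model of the arbiter-side bookkeeping this patent describes: one lock buffer per bank, each tracking a bounded set of locked addresses that only the locking thread may update. The sizes and the address-to-bank mapping are assumptions:

```python
class BankedMemory:
    """Heavily banked memory whose arbiter keeps a per-bank lock buffer
    of addresses only the locking thread may update until release."""

    def __init__(self, num_banks=8, bank_size=128, locks_per_bank=4):
        self.num_banks = num_banks
        self.banks = [[0] * bank_size for _ in range(num_banks)]
        self.lock_buffers = [{} for _ in range(num_banks)]  # addr -> thread
        self.locks_per_bank = locks_per_bank

    def acquire(self, thread_id, addr):
        buf = self.lock_buffers[addr % self.num_banks]
        if addr in buf or len(buf) >= self.locks_per_bank:
            return False            # already locked, or lock buffer full
        buf[addr] = thread_id
        return True

    def update(self, thread_id, addr, value):
        bank = addr % self.num_banks
        owner = self.lock_buffers[bank].get(addr)
        if owner is not None and owner != thread_id:
            raise PermissionError("address is locked by another thread")
        self.banks[bank][addr // self.num_banks] = value

    def release(self, thread_id, addr):
        self.lock_buffers[addr % self.num_banks].pop(addr, None)

mem = BankedMemory()
assert mem.acquire(thread_id=0, addr=42)
mem.update(0, 42, 7)                # only thread 0 may update while locked
mem.release(0, 42)
```
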
  • Patent number: 10372507
    Abstract: Techniques involving a compute engine architecture to support data-parallel loops with reduction operations are described. In some embodiments, a hardware processor includes a memory unit and a plurality of processing elements (PEs). Each PE is directly coupled via one or more neighbor-to-neighbor links with one or more neighboring PEs, so that each PE can receive a value from a neighboring PE, provide a value to a neighboring PE, or both receive a value from one neighboring PE and provide a value to another. The hardware processor also includes a control engine, coupled with the plurality of PEs, that causes the PEs to collectively perform a task and generate one or more output values, with each PE performing one or more iterations of the same subtask of the task.
    Type: Grant
    Filed: December 31, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Ganesh Venkatesh, Deborah Marr
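
A sketch of a reduction over such neighbor-to-neighbor links: at each step, the PE at the end of the chain forwards its partial value one hop, where it is accumulated. The linear chain topology is an assumption consistent with, but not mandated by, the abstract:

```python
def neighbor_reduce(partials):
    """Sum per-PE partial values using only neighbor-to-neighbor hops:
    the rightmost active PE passes its value to its left neighbor, which
    accumulates it, shrinking the active chain by one PE per step."""
    values = list(partials)
    while len(values) > 1:
        values[-2] += values[-1]    # one neighbor-to-neighbor transfer
        values.pop()                # that PE is now done
    return values[0]

print(neighbor_reduce([1, 2, 3, 4]))  # 10
```
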
  • Patent number: 10289752
    Abstract: A processor may include a gather-update-scatter accelerator, and an allocator comprising circuitry to direct an instruction to the accelerator for execution. The instruction may include a search index, an operation to be performed, and a scalar data value. The accelerator may include a content-addressable memory (CAM) storing multiple entries, each of which stores a respective index key and a data value associated with that key, along with a CAM controller including circuitry. The CAM controller may be configured to select, based on the information in the instruction, one of the entries in the CAM on which to operate; to perform an arithmetic or logical operation on the selected entry, dependent on the information in the instruction; and to store the result of the operation in the selected entry in the CAM.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: May 14, 2019
    Assignee: Intel Corporation
    Inventors: Ganesh Venkatesh, Nicholas P. Carter, Deborah T. Marr
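
A functional model of the gather-update-scatter path, with the CAM modeled as a Python dict keyed by index. The specific op set and the capacity policy are assumptions:

```python
class CamAccelerator:
    """Gather-update-scatter accelerator: a content-addressable memory
    keyed by index, with a controller that applies an arithmetic or
    logical op to the matching entry in place."""

    OPS = {"add": lambda a, b: a + b, "max": max, "or": lambda a, b: a | b}

    def __init__(self, capacity=16):
        self.cam = {}                # index key -> associated data value
        self.capacity = capacity

    def execute(self, search_index, op, scalar):
        # Select the CAM entry whose index key matches the instruction.
        current = self.cam.get(search_index, 0)
        result = self.OPS[op](current, scalar)
        # Store the result of the operation back into the selected entry.
        if search_index in self.cam or len(self.cam) < self.capacity:
            self.cam[search_index] = result
        return result

acc = CamAccelerator()
acc.execute(search_index=3, op="add", scalar=5)
print(acc.execute(3, "add", 2))      # 7: read-modify-write on one entry
```
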
  • Publication number: 20180188961
    Abstract: Techniques for enabling enhanced parallelism for sparse linear algebra operations having write-to-read dependencies are disclosed. A hardware processor includes a plurality of processing elements, a memory that is heavily banked into a plurality of banks, and an arbiter. The arbiter receives requests from threads executing at the plurality of processing elements that seek to perform operations involving the memory, and maintains a plurality of lock buffers corresponding to the plurality of banks. Each lock buffer can track up to a plurality of memory addresses within the corresponding bank that are to be treated as locked: the values stored at those addresses cannot be updated by threads other than the ones that caused them to be locked, until the addresses are removed from tracking by the lock buffers.
    Type: Application
    Filed: December 31, 2016
    Publication date: July 5, 2018
    Inventors: Ganesh VENKATESH, Deborah MARR
  • Publication number: 20180189110
    Abstract: Techniques involving a compute engine architecture to support data-parallel loops with reduction operations are described. In some embodiments, a hardware processor includes a memory unit and a plurality of processing elements (PEs). Each PE is directly coupled via one or more neighbor-to-neighbor links with one or more neighboring PEs, so that each PE can receive a value from a neighboring PE, provide a value to a neighboring PE, or both receive a value from one neighboring PE and provide a value to another. The hardware processor also includes a control engine, coupled with the plurality of PEs, that causes the PEs to collectively perform a task and generate one or more output values, with each PE performing one or more iterations of the same subtask of the task.
    Type: Application
    Filed: December 31, 2016
    Publication date: July 5, 2018
    Inventors: Ganesh VENKATESH, Deborah MARR
  • Publication number: 20180189675
    Abstract: Hardware accelerator architectures for clustering are described. A hardware accelerator includes sparse tiles and very/hyper sparse tiles. The sparse tile(s) execute operations for a clustering task involving a matrix. Each sparse tile includes a first plurality of processing units to operate upon a first plurality of blocks of the matrix that have been streamed to one or more random access memories of the sparse tiles over a high-bandwidth interface from a first memory unit. Each of the very/hyper sparse tiles is to execute operations for the clustering task involving the matrix, and includes a second plurality of processing units to operate upon a second plurality of blocks of the matrix that have been randomly accessed over a low-latency interface from a second memory unit.
    Type: Application
    Filed: December 31, 2016
    Publication date: July 5, 2018
    Inventors: Eriko NURVITADHI, Ganesh VENKATESH, Srivatsan KRISHNAN, Suchit SUBHASCHANDRA, Deborah MARR