Patents by Inventor Prashant Jayaprakash Nair

Prashant Jayaprakash Nair has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11334786
    Abstract: A method (and structure and computer product) to optimize an operation in a Neural Network Accelerator (NNAccel) that includes a hierarchy of neural network layers as computational stages for the NNAccel and a configurable hierarchy of memory modules including one or more on-chip Static Random-Access Memory (SRAM) modules and one or more Dynamic Random-Access Memory (DRAM) modules, where each memory module is controlled by a plurality of operational parameters that are adjustable by a controller of the NNAccel. The method includes detecting bit error rates of memory modules currently being used by the NNAccel and determining, by the controller, whether the detected bit error rates are low enough to meet a predetermined threshold value for the processing accuracy of the NNAccel. One or more operational parameters of one or more memory modules are dynamically changed by the controller to move to a higher accuracy state when the accuracy is below the predetermined threshold value.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: May 17, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alper Buyuktosunoglu, Nandhini Chandramoorthy, Prashant Jayaprakash Nair, Karthik V. Swaminathan
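The controller described in the abstract above monitors the bit error rates of the accelerator's SRAM and DRAM modules and moves them to a higher-accuracy operating point when the estimated processing accuracy drops below a threshold. The following Python sketch illustrates one way such a feedback loop might be structured; the class names, operating points, and accuracy model are hypothetical illustrations, not the patented design.

```python
from dataclasses import dataclass

# Hypothetical operating points, ordered from lowest power to highest accuracy.
# Raising supply voltage (SRAM) or refresh rate (DRAM) typically lowers the
# module's bit error rate at the cost of energy.
OPERATING_POINTS = [
    {"name": "low-power", "vdd": 0.60, "refresh_ms": 128},
    {"name": "nominal",   "vdd": 0.75, "refresh_ms": 64},
    {"name": "high-acc",  "vdd": 0.90, "refresh_ms": 32},
]

@dataclass
class MemoryModule:
    kind: str                  # "SRAM" or "DRAM"
    point: int = 0             # index into OPERATING_POINTS
    observed_ber: float = 0.0  # bit error rate measured by scrubbing/BIST

    def raise_accuracy_state(self) -> bool:
        """Move to the next higher-accuracy operating point, if one exists."""
        if self.point + 1 < len(OPERATING_POINTS):
            self.point += 1
            return True
        return False

@dataclass
class NNAccelController:
    modules: list
    accuracy_threshold: float  # minimum acceptable inference accuracy

    def estimate_accuracy(self) -> float:
        """Hypothetical model: accuracy degrades with the worst module's BER."""
        worst_ber = max(m.observed_ber for m in self.modules)
        return max(0.0, 1.0 - 50.0 * worst_ber)

    def control_step(self):
        """If accuracy is below threshold, bump the noisiest module upward."""
        if self.estimate_accuracy() >= self.accuracy_threshold:
            return
        noisiest = max(self.modules, key=lambda m: m.observed_ber)
        if noisiest.raise_accuracy_state():
            # Assume the higher operating point reduces the error rate.
            noisiest.observed_ber /= 10.0

ctrl = NNAccelController(
    modules=[MemoryModule("SRAM", observed_ber=2e-3),
             MemoryModule("DRAM", observed_ber=1e-4)],
    accuracy_threshold=0.95,
)
ctrl.control_step()
print([OPERATING_POINTS[m.point]["name"] for m in ctrl.modules])
```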
  • Patent number: 11245415
    Abstract: Methods, systems, and techniques for data compression. A cluster fingerprint of an uncompressed data block is determined to correspond to a cluster fingerprint of a base block stored in a base array. This determining involves looking up the cluster fingerprint of the base block from the base array using the cluster fingerprint of the uncompressed data block. The difference between the uncompressed data block and the base block is determined, and a compressed data block is encoded using this difference. The compressed data block is then stored in a data array.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: February 8, 2022
    Assignee: THE UNIVERSITY OF BRITISH COLUMBIA UNIVERSITY-INDUSTRY LIAISON OFFICE
    Inventors: Amin Ghasemazar, Prashant Jayaprakash Nair, Mieszko Lis
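The compression flow in the abstract above finds a similar base block through a cluster fingerprint and stores only the difference from that base. Below is a minimal Python sketch of that flow; the fingerprint function (a hash of sampled bytes) and the XOR delta encoding are assumptions made for the example, not the specific fingerprinting or encoding claimed in the patent.

```python
import hashlib

def cluster_fingerprint(block: bytes, sample_every: int = 8) -> bytes:
    """Illustrative fingerprint: hash a subsample of the block so that blocks
    belonging to the same 'cluster' (similar content) tend to collide."""
    return hashlib.sha256(block[::sample_every]).digest()[:4]

class DeltaCompressor:
    def __init__(self):
        self.base_array = {}   # cluster fingerprint -> base block
        self.data_array = []   # stored (fingerprint, delta) compressed blocks

    def compress(self, block: bytes) -> int:
        fp = cluster_fingerprint(block)
        base = self.base_array.get(fp)
        if base is None:
            # No matching base block: store this block as a new base.
            self.base_array[fp] = block
            self.data_array.append((fp, None))           # 'stored as base' marker
        else:
            # Encode only the byte-wise difference from the base block.
            delta = bytes(a ^ b for a, b in zip(block, base))
            self.data_array.append((fp, delta))
        return len(self.data_array) - 1                   # index in the data array

    def decompress(self, idx: int) -> bytes:
        fp, delta = self.data_array[idx]
        base = self.base_array[fp]
        if delta is None:
            return base
        return bytes(a ^ b for a, b in zip(delta, base))

c = DeltaCompressor()
i = c.compress(b"A" * 60 + b"BBBB")
j = c.compress(b"A" * 60 + b"CCCC")   # same cluster, small delta vs. the base
assert c.decompress(j) == b"A" * 60 + b"CCCC"
```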
  • Publication number: 20210288659
    Abstract: Methods, systems, and techniques for data compression. A cluster fingerprint of an uncompressed data block is determined to correspond to a cluster fingerprint of a base block stored in a base array. This determining involves looking up the cluster fingerprint of the base block from the base array using the cluster fingerprint of the uncompressed data block. The difference between the uncompressed data block and the base block is determined, and a compressed data block is encoded using this difference. The compressed data block is then stored in a data array.
    Type: Application
    Filed: March 12, 2021
    Publication date: September 16, 2021
    Inventors: Amin Ghasemazar, Prashant Jayaprakash Nair, Mieszko Lis
  • Patent number: 11095313
    Abstract: Single error correction (“SEC”) code and triple error detection (“TED”) code are used to optimize bandwidth and resilience under multiple bit failures. One or more errors in data stored in duplicated registers are detected and corrected using the SEC code and TED code where simultaneous read operations are produced with two copies of data for each of the duplicated registers for a multi-port banked memory array. The SEC code and TED code may be included in each of the two data copies of the simultaneous read operations.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: August 17, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert Montoye, Jeffrey Derby, Bruce Fleischer, Prashant Jayaprakash Nair
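The scheme in the abstract above keeps two independently ECC-protected copies of each register so that two read ports of a banked array can be served simultaneously, each read corrected on its own. The Python sketch below models that arrangement; as a simplified stand-in it protects each copy with a Hamming(7,4) single-error-correcting code rather than the patent's SEC/TED code (which requires a larger minimum distance to also detect triple errors), and all names are illustrative.

```python
# Hamming(7,4): positions 1..7, parity bits at 1, 2, 4; data bits at 3, 5, 6, 7.
def hamming74_encode(nibble: int) -> list:
    d = [(nibble >> i) & 1 for i in range(4)]            # d0..d3
    code = [0, 0, 0, d[0], 0, d[1], d[2], d[3]]          # index 0 unused
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code

def hamming74_correct(code: list) -> int:
    """Correct at most one flipped bit and return the 4-bit data value."""
    syndrome = 0
    for pos in range(1, 8):
        if code[pos]:
            syndrome ^= pos
    if syndrome:                                          # single-bit error
        code = code[:]
        code[syndrome] ^= 1
    return code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)

class DualCopyRegisterFile:
    """Each register is written to two independently ECC-protected copies, so
    two read ports can be served in the same cycle, one from each copy."""
    def __init__(self, n_regs: int):
        self.copies = [[None] * n_regs, [None] * n_regs]

    def write(self, reg: int, nibble: int):
        for copy in self.copies:
            copy[reg] = hamming74_encode(nibble)

    def read_dual(self, reg_a: int, reg_b: int):
        # Port A reads from copy 0, port B from copy 1 -- no conflict even when
        # reg_a == reg_b, and each read is corrected independently.
        return (hamming74_correct(self.copies[0][reg_a]),
                hamming74_correct(self.copies[1][reg_b]))

rf = DualCopyRegisterFile(16)
rf.write(3, 0b1010)
rf.copies[0][3][5] ^= 1            # inject a single-bit fault into one copy
print(rf.read_dual(3, 3))          # both ports still return 10 (0b1010)
```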
  • Publication number: 20210119646
    Abstract: Various embodiments are provided for enhanced error correction using single error correction (“SEC”) code and triple error detection (“TED”) code to optimize bandwidth and resilience under multiple bit failures by a processor. One or more errors may be detected and corrected in duplicated registers using an SEC code and TED code where simultaneous read operations are produced with two copies of data for each of the duplicated registers for a multi-port banked memory array. The SEC code and TED code may be included in each of the two data copies of the simultaneous read operations.
    Type: Application
    Filed: October 21, 2019
    Publication date: April 22, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert Montoye, Jeffrey Derby, Bruce Fleischer, Prashant Jayaprakash Nair
  • Patent number: 10831669
    Abstract: Systems, methods and computer program products using multi-tag storage to enable efficient data compression in caches without increasing a tag/data area overhead. One method can comprise storing compressed versions of data elements in a data array of a cache, with tags for the compressed versions respectively appended to the compressed versions as stored in the data array, and storing hashed versions of the tags in a tag array of the cache, wherein the hashed versions of the tags respectively have fewer bits than the tags. A tag block may store hashed versions of tags corresponding to first and second compressed data elements stored in a cacheline of the cache. Hashed tag entries may be compared with full versions of the tags appended to compressed versions of data elements stored in the data array to prevent false positive cache reads. A compressed identifier (CID) may be stored with the hashed versions of tags in the tag array.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Prashant Jayaprakash Nair, Seokin Hong, Alper Buyuktosunoglu, Michael B. Healy, Bulent Abali
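The multi-tag organization in the abstract above keeps short hashed tags in the tag array and appends the full tags to the compressed blocks in the data array, so a hashed-tag match is confirmed against the full tag before the access counts as a hit. The Python model below sketches that lookup path; the hash width, the zlib compression stand-in, and the set organization are assumptions for illustration rather than the patented design.

```python
import zlib

HASH_BITS = 6                       # width of each hashed tag entry (assumed)

def short_hash(tag: int) -> int:
    """Hash a full tag down to a few bits for the tag array."""
    return ((tag * 0x9E3779B1) >> (32 - HASH_BITS)) & ((1 << HASH_BITS) - 1)

class MultiTagCacheSet:
    """One set holding compressed blocks in a single cacheline. The tag array
    stores only (hashed tag, CID); the data array stores the compressed bytes
    with the full tag appended."""
    def __init__(self):
        self.tag_array = []          # list of (hashed_tag, cid)
        self.data_array = []         # list of (compressed_bytes, full_tag)

    def insert(self, full_tag: int, block: bytes):
        cid = len(self.data_array)                       # compressed identifier
        self.tag_array.append((short_hash(full_tag), cid))
        self.data_array.append((zlib.compress(block), full_tag))

    def lookup(self, full_tag: int):
        h = short_hash(full_tag)
        for hashed_tag, cid in self.tag_array:
            if hashed_tag != h:
                continue                                 # fast filter on hashed tag
            data, stored_tag = self.data_array[cid]
            if stored_tag == full_tag:                   # full-tag check prevents
                return zlib.decompress(data)             # false-positive hits
        return None                                      # miss

s = MultiTagCacheSet()
s.insert(0x1234, b"hello" * 10)
s.insert(0x5678, b"world" * 10)
print(s.lookup(0x1234))            # hit
print(s.lookup(0x9999))            # miss -> None
```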
  • Publication number: 20200342284
    Abstract: A method (and structure and computer product) to optimize an operation in a Neural Network Accelerator (NNAccel) that includes a hierarchy of neural network layers as computational stages for the NNAccel and a configurable hierarchy of memory modules including one or more on-chip Static Random-Access Memory (SRAM) modules and one or more Dynamic Random-Access Memory (DRAM) modules, where each memory module is controlled by a plurality of operational parameters that are adjustable by a controller of the NNAccel. The method includes detecting bit error rates of memory modules currently being used by the NNAccel and determining, by the controller, whether the detected bit error rates are low enough to meet a predetermined threshold value for the processing accuracy of the NNAccel. One or more operational parameters of one or more memory modules are dynamically changed by the controller to move to a higher accuracy state when the accuracy is below the predetermined threshold value.
    Type: Application
    Filed: April 25, 2019
    Publication date: October 29, 2020
    Inventors: Alper Buyuktosunoglu, Nandhini Chandramoorthy, Prashant Jayaprakash Nair, Karthik V. Swaminathan
  • Publication number: 20200174939
    Abstract: Systems, methods and computer program products using multi-tag storage to enable efficient data compression in caches without increasing a tag/data area overhead. One method can comprise storing compressed versions of data elements in a data array of a cache, with tags for the compressed versions respectively appended to the compressed versions as stored in the data array, and storing hashed versions of the tags in a tag array of the cache, wherein the hashed versions of the tags respectively have fewer bits than the tags. A tag block may store hashed versions of tags corresponding to first and second compressed data elements stored in a cacheline of the cache. Hashed tag entries may be compared with full versions of the tags appended to compressed versions of data elements stored in the data array to prevent false positive cache reads. A compressed identifier (CID) may be stored with the hashed versions of tags in the tag array.
    Type: Application
    Filed: December 3, 2018
    Publication date: June 4, 2020
    Inventors: Prashant Jayaprakash Nair, Seokin Hong, Alper Buyuktosunoglu, Michael B. Healy, Bulent Abali
  • Patent number: 10558518
    Abstract: A computer monitors a memory system during operation. The computer detects a first number of errors in the memory system. The computer determines that the first number of errors is below an error level threshold. The computer lowers a first group of one or more memory parameters of the memory system by a first amount. After the lowering of one or more memory parameters by the first amount, the computer detects a second number of errors in the memory system. The computer determines that the second number of errors is above the error level threshold. The computer raises a second group of one or more memory parameters of the memory system by a second amount.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: February 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Prashant Jayaprakash Nair, Alper Buyuktosunoglu, Pradip Bose
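The abstract above describes a feedback loop that keeps lowering memory parameters (for example, supply voltage or refresh rate) while the observed error count stays under a threshold and raises them again once errors exceed it. A compact illustration of that loop is sketched below; the specific parameters, step sizes, and error model are assumptions made only for the example.

```python
ERROR_THRESHOLD = 5          # errors per monitoring interval (assumed)

# Hypothetical error model: lower voltage / slower refresh -> more errors.
def observe_errors(vdd: float, refresh_hz: int) -> int:
    return int(max(0.0, (1.0 - vdd) * 40 - refresh_hz / 4))

def monitoring_interval(vdd: float, refresh_hz: int):
    errors = observe_errors(vdd, refresh_hz)
    if errors < ERROR_THRESHOLD:
        # Few errors: lower a first group of parameters to save power.
        vdd = max(0.6, vdd - 0.05)
        refresh_hz = max(8, refresh_hz - 4)
    else:
        # Too many errors: raise a second group of parameters (possibly by a
        # different amount) to restore reliability.
        vdd = min(1.0, vdd + 0.10)
        refresh_hz = min(64, refresh_hz + 8)
    return vdd, refresh_hz, errors

vdd, refresh_hz = 1.0, 64
for step in range(8):
    vdd, refresh_hz, errors = monitoring_interval(vdd, refresh_hz)
    print(f"step {step}: vdd={vdd:.2f}  refresh={refresh_hz} Hz  errors={errors}")
```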
  • Patent number: 10423538
    Abstract: Embodiments include techniques for receiving a cacheline of data, hashing the cacheline into a plurality of chunks, wherein each chunk includes a pattern of bits, storing the plurality of chunks in a pattern table, wherein the plurality of chunks are indexed in the pattern table based on the pattern of bits of each chunk, and identifying a repeated pattern of bits among the plurality of chunks and selecting the repeated pattern of bits as a candidate pattern. Techniques include comparing a threshold number of bits of the candidate pattern to the pattern of bits of the plurality of chunks in the pattern table; based on the comparison, inserting valid bits and a tag into the pattern table for the candidate pattern by replacing bits in the candidate pattern, and writing the candidate pattern, including the valid bits and the tag, into a location of the memory corresponding to the candidate pattern.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alper Buyuktosunoglu, Seokin Hong, Prashant Jayaprakash Nair
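One loose reading of the abstract above: split the cacheline into fixed-size chunks, index them in a pattern table, pick the most frequently repeated chunk as the candidate pattern, and replace chunks whose leading bits match the candidate (up to a threshold number of bits) with a short reference. The Python sketch below implements that reading; the chunk size, threshold, and reference encoding are assumptions, and the patent's handling of valid bits and tags is simplified here.

```python
from collections import Counter

CHUNK_BYTES = 8
THRESHOLD_BITS = 48          # leading bits that must match the candidate (assumed)

def chunks_of(cacheline: bytes):
    return [cacheline[i:i + CHUNK_BYTES]
            for i in range(0, len(cacheline), CHUNK_BYTES)]

def leading_bits_match(a: bytes, b: bytes, nbits: int) -> bool:
    """True if the first nbits of the two chunks are identical."""
    full, rem = divmod(nbits, 8)
    if a[:full] != b[:full]:
        return False
    if rem and (a[full] >> (8 - rem)) != (b[full] >> (8 - rem)):
        return False
    return True

def compress_line(cacheline: bytes):
    chunks = chunks_of(cacheline)
    # Pattern table: each distinct bit pattern indexed by its own value.
    pattern_table = Counter(chunks)
    candidate, count = pattern_table.most_common(1)[0]
    if count < 2:
        return None                              # no repeated pattern -> store raw
    # Chunks whose leading THRESHOLD_BITS match the candidate are replaced by a
    # short reference carrying only the bits that may differ.
    encoded = []
    for chunk in chunks:
        if leading_bits_match(chunk, candidate, THRESHOLD_BITS):
            encoded.append(("ref", chunk[-2:]))  # tag: the trailing 16 bits
        else:
            encoded.append(("raw", chunk))
    return candidate, encoded

line = (b"\x00" * 6 + b"\xAA\xBB") * 6 + b"\x11" * 16
candidate, encoded = compress_line(line)
print(candidate.hex(), [(kind, part.hex()) for kind, part in encoded])
```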
  • Publication number: 20190163640
    Abstract: Embodiments include techniques for receiving a cacheline of data, hashing the cacheline into a plurality of chunks, wherein each chunk includes a pattern of bits, storing the plurality of chunks in a pattern table, wherein the plurality of chunks are indexed in the pattern table based on the pattern of bits of each chunk, and identifying a repeated pattern of bits among the plurality of chunks and selecting the repeated pattern of bits as a candidate pattern. Techniques include comparing a threshold number of bits of the candidate pattern to the pattern of bits of the plurality of chunks in the pattern table; based on the comparison, inserting valid bits and a tag into the pattern table for the candidate pattern by replacing bits in the candidate pattern, and writing the candidate pattern, including the valid bits and the tag, into a location of the memory corresponding to the candidate pattern.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 30, 2019
    Inventors: Alper Buyuktosunoglu, Seokin Hong, Prashant Jayaprakash Nair
  • Publication number: 20190146864
    Abstract: A computer monitors a memory system during operation. The computer detects a first number of errors in the memory system. The computer determines that the first number of errors is below an error level threshold. The computer lowers a first group of one or more memory parameters of the memory system by a first amount. After the lowering of one or more memory parameters by the first amount, the computer detects a second number of errors in the memory system. The computer determines that the second number of errors is above the error level threshold. The computer raises a second group of one or more memory parameters of the memory system by a second amount.
    Type: Application
    Filed: November 13, 2017
    Publication date: May 16, 2019
    Inventors: Prashant Jayaprakash Nair, Alper Buyuktosunoglu, Pradip Bose
  • Patent number: 10248497
    Abstract: A processing system includes a memory coupled to a processor. The memory stores data blocks, with each data block having a separate associated checksum value stored along with the data block in the memory. The processor has a storage location that stores parity information for the data blocks, with the parity information having a plurality of parity blocks. Each parity block represents a parity of a corresponding set of data blocks. The parity blocks can be accessed for use in error detection and correction schemes used by the processing system.
    Type: Grant
    Filed: October 22, 2014
    Date of Patent: April 2, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Prashant Jayaprakash Nair, David A. Roberts
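The abstract above pairs per-block checksums stored in memory with parity blocks held on the processor, where each parity block is the parity of a set of data blocks: a corrupted block is detected by its checksum and rebuilt from the parity block and the remaining blocks of its set. The sketch below shows that recovery path in Python, using CRC32 checksums and XOR parity as stand-ins chosen for the example.

```python
import zlib

BLOCK_SIZE = 16
SET_SIZE = 4                           # data blocks covered by one parity block

def xor_blocks(blocks):
    out = bytearray(BLOCK_SIZE)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# "Memory": data blocks, each stored alongside its own checksum.
data_blocks = [bytes([i]) * BLOCK_SIZE for i in range(SET_SIZE)]
checksums = [zlib.crc32(b) for b in data_blocks]

# "Processor storage": one parity block per set of data blocks.
parity_block = xor_blocks(data_blocks)

# Corrupt one block in memory.
data_blocks[2] = b"\xFF" + data_blocks[2][1:]

# Detection: the stored checksum no longer matches the block's contents.
bad = next(i for i, b in enumerate(data_blocks) if zlib.crc32(b) != checksums[i])

# Correction: XOR the parity block with the surviving blocks of the set.
recovered = xor_blocks([parity_block] +
                       [b for i, b in enumerate(data_blocks) if i != bad])
assert zlib.crc32(recovered) == checksums[bad]
print(f"rebuilt block {bad}: {recovered.hex()}")
```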
  • Patent number: 9754684
    Abstract: In an Error Correction Code (ECC)-based memory, a Single Error Correction Double Error Detection (SECDED) scheme is used with data aggregation to correct more than one error in a memory word received in a memory burst. By completely utilizing the Hamming distance of the SECDED (128,120) code, 8 ECC bits can potentially correct one error in 120 data bits. Each memory burst is effectively “expanded” from its actual 64 data bits to 120 data bits by “sharing” additional 56 data bits from all of the other related bursts. When a cache line of 512 bits is read, the SECDED (128,120) code is used in conjunction with all the received 64 ECC bits to correct more than one error in the actual 64 bits of data in a memory word. The data mapping of the present disclosure translates to a higher rate of error correction than the existing (72,64) SECDED code.
    Type: Grant
    Filed: March 5, 2015
    Date of Patent: September 5, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chaohong Hu, Hongzhong Zheng, Prashant Jayaprakash Nair
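The code in the abstract above is an extended Hamming SECDED (128,120) code: 8 check bits protect 120 data bits, and each 64-bit burst is paired with 56 bits borrowed from the other bursts of the 512-bit cacheline so that the line's 64 ECC bits can together correct more than one error. The Python sketch below implements a generic extended Hamming SECDED encoder/decoder and one plausible aggregation; the exact bit mapping (here, 8 bits taken from each of the seven other bursts) is an assumption for illustration, not necessarily the patent's mapping.

```python
import random

def secded_encode(data_bits):
    """Extended Hamming (SECDED) encode; data_bits is a list of 0/1 values."""
    k = len(data_bits)
    r = 1
    while 2 ** r < k + r + 1:          # check bits needed for the Hamming part
        r += 1
    n = k + r
    code = [0] * (n + 1)               # positions 1..n; index 0 = overall parity
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of two -> data position
            code[pos] = next(it)
    for i in range(r):
        p = 1 << i
        code[p] = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
    code[0] = sum(code) % 2            # overall parity for double-error detection
    return code                        # k = 120 -> 128 bits total (8 check bits)

def secded_decode(code):
    """Return (data_bits, status); status is 'ok', 'corrected' or 'uncorrectable'."""
    n = len(code) - 1
    syndrome = 0
    for pos in range(1, n + 1):
        if code[pos]:
            syndrome ^= pos
    overall = sum(code) % 2
    status = "ok"
    if syndrome and overall:           # single error inside the Hamming part
        code = code[:]
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                     # two errors -> detected, not correctable
        status = "uncorrectable"
    elif overall:                      # the overall parity bit itself flipped
        status = "corrected"
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, status

BURSTS, BURST_BITS, SHARE = 8, 64, 8   # 512-bit line = 8 bursts of 64 data bits

def aggregate(line_bits, i):
    """120-bit word for burst i: its own 64 bits plus SHARE bits borrowed from
    each of the other seven bursts (one plausible mapping)."""
    own = line_bits[i * BURST_BITS:(i + 1) * BURST_BITS]
    shared = [b for j in range(BURSTS) if j != i
              for b in line_bits[j * BURST_BITS:j * BURST_BITS + SHARE]]
    return own + shared                # 64 + 7 * 8 = 120 data bits

line = [random.randint(0, 1) for _ in range(512)]
codewords = [secded_encode(aggregate(line, i)) for i in range(BURSTS)]
codewords[0][17] ^= 1                  # flip one bit in the first codeword
data, status = secded_decode(codewords[0])
print(len(codewords[0]), status, data == aggregate(line, 0))   # 128 corrected True
```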
  • Publication number: 20160134307
    Abstract: In an Error Correction Code (ECC)-based memory, a Single Error Correction Double Error Detection (SECDED) scheme is used with data aggregation to correct more than one error in a memory word received in a memory burst. By completely utilizing the Hamming distance of the SECDED (128,120) code, 8 ECC bits can potentially correct one error in 120 data bits. Each memory burst is effectively “expanded” from its actual 64 data bits to 120 data bits by “sharing” additional 56 data bits from all of the other related bursts. When a cache line of 512 bits is read, the SECDED (128,120) code is used in conjunction with all the received 64 ECC bits to correct more than one error in the actual 64 bits of data in a memory word. The data mapping of the present disclosure translates to a higher rate of error correction than the existing (72,64) SECDED code.
    Type: Application
    Filed: March 5, 2015
    Publication date: May 12, 2016
    Inventors: Chaohong Hu, Hongzhong Zheng, Prashant Jayaprakash Nair
  • Publication number: 20160117221
    Abstract: A processing system includes a memory coupled to a processor device. The memory stores data blocks, with each data block having a separate associated checksum value stored along with the data block in the memory. The processor device has a storage location that stores parity information for the data blocks, with the parity information having a plurality of parity blocks. Each parity block represents a parity of a corresponding set of data blocks. The parity blocks can be accessed for use in error detection and correction schemes used by the processing system.
    Type: Application
    Filed: October 22, 2014
    Publication date: April 28, 2016
    Inventors: Prashant Jayaprakash Nair, David A. Roberts