Patents by Inventor Jeffrey Michael Pool

Jeffrey Michael Pool has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152407
    Abstract: Apparatuses, systems, and techniques to determine a configuration based at least in part on data stored by at least one data structure of a workload at runtime, and transform the workload into a sparse workload based at least in part on the configuration. In at least one embodiment, one or more sparse workloads (e.g., one or more sparse neural networks) are generated based at least in part on, for example, one or more workloads (e.g., one or more neural networks).
    Type: Application
    Filed: July 17, 2023
    Publication date: May 9, 2024
    Inventors: Geonhwa Jeong, Po-An Tsai, Jeffrey Michael Pool
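
The application above describes deriving a configuration from a workload's own data at runtime and then transforming that workload into a sparse workload. As a loose, minimal sketch of that idea only (the filing does not disclose this particular algorithm), the Python below derives a magnitude threshold from a layer's weights and zeroes the values beneath it; the function names, the 50% sparsity target, and the use of NumPy are assumptions, not the claimed method.

```python
# Illustrative only: magnitude pruning as a stand-in for the runtime
# configuration-and-transform idea described in publication 20240152407.
import numpy as np

def derive_config(weights: np.ndarray, target_sparsity: float = 0.5) -> float:
    """Derive a pruning threshold (the 'configuration') from the workload's own values."""
    k = int(target_sparsity * weights.size)
    return float(np.sort(np.abs(weights).ravel())[k])

def sparsify(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Transform the workload into a sparse workload using that configuration."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

layer = np.random.default_rng(0).standard_normal((4, 8))   # stand-in for one network layer
threshold = derive_config(layer, target_sparsity=0.5)
sparse_layer = sparsify(layer, threshold)
print(f"fraction of zeros: {np.mean(sparse_layer == 0):.2f}")
```
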
  • Patent number: 11977888
    Abstract: A method, computer readable medium, and processor are described herein for inline data inspection by using a decoder to decode a load instruction, including a signal to cause a circuit in a processor to indicate whether data loaded by a load instruction exceeds a threshold value. Moreover, an indication of whether data loaded by a load instruction exceeds a threshold value may be stored.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: May 7, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
  • Publication number: 20230221957
    Abstract: A method, computer readable medium, and processor are described herein for inline data inspection by using a decoder to decode a load instruction, including a signal to cause a circuit in a processor to indicate whether data loaded by a load instruction exceeds a threshold value. Moreover, an indication of whether data loaded by a load instruction exceeds a threshold value may be stored.
    Type: Application
    Filed: February 22, 2023
    Publication date: July 13, 2023
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
  • Patent number: 11609761
    Abstract: A method, computer readable medium, and processor are described herein for inline data inspection by using a decoder to decode a load instruction, including a signal to cause a circuit in a processor to indicate whether data loaded by a load instruction exceeds a threshold value. Moreover, an indication of whether data loaded by a load instruction exceeds a threshold value may be stored.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
  • Patent number: 11522565
    Abstract: A packed error correction code (ECC) technique opportunistically embeds ECC check-bits with compressed data. When compressed, the data is encoded in fewer bits and is therefore fragmented when stored or transmitted compared with the uncompressed data. The ECC check-bits may be packed with compressed data at “source” points. The check-bits are transmitted along with the compressed data and, at any “intermediate” point between the source and a “destination” the check-bits may be used to detect and correct errors in the compressed data. In contrast with conventional systems, packed ECC enables end-to-end coverage for sufficiently-compressed data within the processor and also externally. While storage circuitry typically is protected by structure-specific ECC, protection is also beneficial for data as it is transmitted between processing and/or storage units.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: December 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Michael Brendan Sullivan, Jeffrey Michael Pool, Yangxiang Huang, Timothy Kohchih Tsai, Siva Kumar Sastry Hari, Steven William Keckler
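
Patent 11522565 above describes opportunistically packing ECC check-bits into the space that compression frees within a block. The sketch below is only a software analogy of that idea: zlib stands in for the compressor and a CRC32 stands in for real SECDED-style check-bits (so it can detect, but not correct, errors). The block size, function names, and unprotected fallback path are assumptions rather than the patented design.

```python
# Illustrative only: a software analogy of "packed ECC" from patent 11522565.
import zlib

BLOCK_SIZE = 128      # bytes per storage/transfer block (assumed)
CHECK_BYTES = 4       # a CRC32 stands in for real ECC check-bits (detect-only here)

def pack(block: bytes) -> tuple:
    """Compress a block; if it shrank enough, append check-bits in the freed space."""
    payload = zlib.compress(block)
    if len(payload) + CHECK_BYTES <= BLOCK_SIZE:           # sufficiently compressed?
        crc = zlib.crc32(payload).to_bytes(CHECK_BYTES, "little")
        return payload + crc, True                         # check-bits ride along "for free"
    return block, False                                    # otherwise fall back to raw data

def unpack(packed: bytes, protected: bool) -> bytes:
    """At an intermediate or destination point, detect corruption before using the data."""
    if not protected:
        return packed
    payload, crc = packed[:-CHECK_BYTES], packed[-CHECK_BYTES:]
    if zlib.crc32(payload) != int.from_bytes(crc, "little"):
        raise ValueError("bit error detected in compressed payload")
    return zlib.decompress(payload)

block = bytes(64) + bytes(range(64))                       # a compressible 128-byte block
wire, protected = pack(block)
assert unpack(wire, protected) == block
```
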
  • Publication number: 20220329265
    Abstract: A packed error correction code (ECC) technique opportunistically embeds ECC check-bits with compressed data. When compressed, the data is encoded in fewer bits and is therefore fragmented when stored or transmitted compared with the uncompressed data. The ECC check-bits may be packed with compressed data at “source” points. The check-bits are transmitted along with the compressed data and, at any “intermediate” point between the source and a “destination” the check-bits may be used to detect and correct errors in the compressed data. In contrast with conventional systems, packed ECC enables end-to-end coverage for sufficiently-compressed data within the processor and also externally. While storage circuitry typically is protected by structure-specific ECC, protection is also beneficial for data as it is transmitted between processing and/or storage units.
    Type: Application
    Filed: April 7, 2021
    Publication date: October 13, 2022
    Inventors: Michael Brendan Sullivan, Jeffrey Michael Pool, Yangxiang Huang, Timothy Kohchih Tsai, Siva Kumar Sastry Hari, Steven William Keckler
  • Publication number: 20220327101
    Abstract: Apparatuses, systems, and techniques to transform data sets, such as matrices representing layers of neural networks, to increase sparsity and/or other characteristics of said data sets to improve performance in computations, such as neural network computations. In at least one embodiment, one or more subsets of data in one or more sets of data are rearranged as part of a process to increase sparsity in said one or more sets of data to satisfy one or more structural sparsity constraints.
    Type: Application
    Filed: May 18, 2021
    Publication date: October 13, 2022
    Inventors: Jeffrey Michael Pool, Chong Yu, Paulius Micikevicius
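
The application above describes rearranging subsets of a data set so that pruning to a structural sparsity constraint discards less of the useful data. Purely for illustration, the sketch below assumes a 2:4 pattern (two non-zeros kept per group of four consecutive values, a common structural constraint) and uses a random column permutation as the "rearrangement"; it is not the rearrangement algorithm claimed in the filing.

```python
# Illustrative only: 2:4 pruning plus a column permutation, loosely following the
# rearrangement idea in publication 20220327101.
import numpy as np

def prune_2_of_4(m: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude values in every group of 4 consecutive columns."""
    out = m.copy()
    for row in out:
        for g in range(0, out.shape[1], 4):
            group = row[g:g + 4]
            group[np.argsort(np.abs(group))[:2]] = 0.0     # zero the 2 smallest magnitudes
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8))

pruned = prune_2_of_4(w)                                   # prune in the original column order
perm = rng.permutation(w.shape[1])                         # one candidate column rearrangement
pruned_perm = prune_2_of_4(w[:, perm])                     # prune after rearranging columns

print("magnitude kept, original order:", np.abs(pruned).sum())
print("magnitude kept, permuted order:", np.abs(pruned_perm).sum())
```

A real method would search for a permutation that maximizes the retained magnitude and undo it afterward; the point here is only that rearranging the data changes what a structured pruning pattern can keep.
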
  • Publication number: 20200125363
    Abstract: A method, computer readable medium, and processor are described herein for inline data inspection by using a decoder to decode a load instruction, including a signal to cause a circuit in a processor to indicate whether data loaded by a load instruction exceeds a threshold value. Moreover, an indication of whether data loaded by a load instruction exceeds a threshold value may be stored.
    Type: Application
    Filed: December 9, 2019
    Publication date: April 23, 2020
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
  • Patent number: 10503507
    Abstract: A method, computer readable medium, and system are disclosed for inline data inspection. The method includes the steps of receiving, by a load/store unit, a load instruction and obtaining, by an inspection circuit that is coupled to the load/store unit, data specified by the load instruction. Additional steps include determining that the data equals zero and transmitting the data and a predicate signal to the load/store unit, wherein the predicate signal indicates that the data equals zero. Alternative additional steps include computing a predicate value based on a comparison between the data and a threshold value and transmitting the data and the predicate value to the load/store unit, wherein the predicate value is asserted when the data is less than the threshold value and is negated when the data is not less than the threshold value.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: December 10, 2019
    Assignee: NVIDIA Corporation
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
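
Patent 10503507 above spells out the two behaviors of the inspection circuit: a predicate asserted when the loaded data equals zero, and a predicate asserted when the loaded data is less than a threshold value. The Python below is only a software model of those semantics; the class and function names are invented for illustration and do not reflect the hardware interface.

```python
# Illustrative only: modeling the inline-inspection predicates from patent 10503507.
from dataclasses import dataclass

@dataclass
class InspectedLoad:
    data: int          # the value returned to the load/store unit
    predicate: bool    # the side-band signal transmitted along with the data

def inspect_zero(data: int) -> InspectedLoad:
    """Predicate asserted when the loaded data equals zero."""
    return InspectedLoad(data, data == 0)

def inspect_threshold(data: int, threshold: int) -> InspectedLoad:
    """Predicate asserted when the loaded data is less than the threshold, negated otherwise."""
    return InspectedLoad(data, data < threshold)

# A consumer (e.g., a sparse kernel) can skip work whenever the predicate marks a zero
# operand, without having to inspect the data again.
loads = [inspect_zero(v) for v in (0, 7, 0, 3)]
print([load.data for load in loads if not load.predicate])   # -> [7, 3]
```
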
  • Publication number: 20190065195
    Abstract: A method, computer readable medium, and system are disclosed for inline data inspection. The method includes the steps of receiving, by a load/store unit, a load instruction and obtaining, by an inspection circuit that is coupled to the load/store unit, data specified by the load instruction. Additional steps include determining that the data equals zero and transmitting the data and a predicate signal to the load/store unit, wherein the predicate signal indicates that the data equals zero. Alternative additional steps include computing a predicate value based on a comparison between the data and a threshold value and transmitting the data and the predicate value to the load/store unit, wherein the predicate value is asserted when the data is less than the threshold value and is negated when the data is not less than the threshold value.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Inventors: Jeffrey Michael Pool, Andrew Kerr, John Tran, Ming Y. Siu, Stuart Oberman
  • Patent number: 10096134
    Abstract: A method, computer program product, and system for improving the efficiency of sparse convolutional neural networks are described. Multi-bit data for input to a processing element is received at a compaction engine. The multi-bit data is determined to equal zero and a single bit signal is transmitted from the memory interface to the processing element in lieu of the multi-bit data, where the single bit signal indicates that the multi-bit data equals zero. A compacted data sequence for input to a processing element is received by a memory interface. The compacted data sequence is transmitted from the memory interface to an expansion engine. Non-zero values are extracted from the compacted data sequence and zeros are inserted between the non-zero values by the expansion engine to generate an expanded data sequence that is output to the processing element.
    Type: Grant
    Filed: February 1, 2017
    Date of Patent: October 9, 2018
    Assignee: NVIDIA Corporation
    Inventors: Zhou Yan, Franciscus Wilhelmus Sijstermans, Yuanzhi Hua, Xiaojun Wang, Jeffrey Michael Pool, William J. Dally, Liang Chen
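
Patent 10096134 above describes a compaction engine that replaces zero values with a single-bit signal and an expansion engine that reinserts the zeros before the data reaches the processing element. The sketch below is a software analogy of that round trip; the (flag, value) encoding and the function names are assumptions, not the hardware format.

```python
# Illustrative only: a software analogy of the compaction/expansion engines in
# patent 10096134, where zeros travel as one-bit flags instead of multi-bit values.
from typing import Iterable, List, Optional, Tuple

def compact(values: Iterable[int]) -> List[Tuple[int, Optional[int]]]:
    """Emit a one-bit zero flag per element; send the multi-bit value only when non-zero."""
    return [(1, None) if v == 0 else (0, v) for v in values]

def expand(stream: List[Tuple[int, Optional[int]]]) -> List[int]:
    """Reinsert zeros between the non-zero values to rebuild the original sequence."""
    return [0 if flag else value for flag, value in stream]

activations = [0, 0, 5, 0, 9, 0, 0, 1]
packed = compact(activations)
assert expand(packed) == activations
print(sum(1 for flag, _ in packed if not flag), "multi-bit values actually transmitted")  # 3
```
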
  • Publication number: 20180218518
    Abstract: A method, computer program product, and system for improving the efficiency of sparse convolutional neural networks are described. Multi-bit data for input to a processing element is received at a compaction engine. The multi-bit data is determined to equal zero and a single bit signal is transmitted from the memory interface to the processing element in lieu of the multi-bit data, where the single bit signal indicates that the multi-bit data equals zero. A compacted data sequence for input to a processing element is received by a memory interface. The compacted data sequence is transmitted from the memory interface to an expansion engine. Non-zero values are extracted from the compacted data sequence and zeros are inserted between the non-zero values by the expansion engine to generate an expanded data sequence that is output to the processing element.
    Type: Application
    Filed: February 1, 2017
    Publication date: August 2, 2018
    Inventors: Zhou Yan, Franciscus Wilhelmus Sijstermans, Yuanzhi Hua, Xiaojun Wang, Jeffrey Michael Pool, William J. Dally, Liang Chen