Patents by Inventor Edward H. Gornish

Edward H. Gornish has filed for patents to protect the following inventions. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220366008
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply-accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication. (A minimal compress-then-multiply sketch appears after this listing.)
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Publication number: 20220365783
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply-accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Publication number: 20220366007
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply-accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Publication number: 20220365833
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply-accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Patent number: 6314431
    Abstract: The present invention enables efficient pre-fetching of instructions. It determines locations for inserting pre-fetch instructions earlier than prior approaches and in a cost-effective manner, giving more control over when instruction pre-fetching is initiated. Pre-fetch instructions are issued accurately and launched early enough to hide cache-miss latency, and with appropriate coverage: the method tests whether a pre-fetch is likely to be cost effective and whether the predicted size of the pre-fetched trace supports cost-effective pre-fetching. (A sketch of such a cost-effectiveness test appears after this listing.)
    Type: Grant
    Filed: September 2, 1999
    Date of Patent: November 6, 2001
    Assignee: Hewlett-Packard Company
    Inventor: Edward H. Gornish
  • Patent number: 5752037
    Abstract: There are two separate yet related prefetching strategies for data references having multiple strides, which typically occur in data references within nested loop structures. The first approach attempts to reverse one or more of the nested loops so that the strides of the reference are in the same direction; once the loop or loops are reversed, data elements can be prefetched in the common loop direction. Preferably the inner loops are reversed rather than the outer loops, but this is not essential. The second approach is used when the loops cannot be reversed, that is, when the strides of the reference have different directions and the inner loop is expected to iterate relatively few times; in this case, the method prefetches in the opposite direction of the innermost loop that surrounds the data reference. (A sketch of the loop-reversal strategy appears after this listing.)
    Type: Grant
    Filed: April 26, 1996
    Date of Patent: May 12, 1998
    Assignee: Hewlett-Packard Company
    Inventors: Edward H. Gornish, Anne M. Holler, Wei Chung Hsu
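
The four published applications above (20220366008, 20220365783, 20220366007, 20220365833) share an abstract describing an API that records which values in a matrix are non-zero, compresses the matrix, and then performs a matrix multiply-accumulate (MMA) using the compressed operand. Below is a minimal C sketch of that compress-then-multiply flow, assuming a simple row-compressed layout; the structure and function names are illustrative and are not the claimed API or any vendor library.

```c
/* Illustrative only: keep the non-zero values of a matrix together with their
 * column indices, then multiply-accumulate using just those stored values.   */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int rows, cols, nnz;
    float *val;   /* non-zero values                 */
    int   *col;   /* column index of each non-zero   */
    int   *rowp;  /* rowp[i]..rowp[i+1] spans row i  */
} CompressedMatrix;

/* "Compress" step: record which values are non-zero and drop the zeros. */
static CompressedMatrix compress(const float *a, int rows, int cols) {
    CompressedMatrix m = { rows, cols, 0, NULL, NULL, malloc((rows + 1) * sizeof(int)) };
    m.val = malloc((size_t)rows * cols * sizeof(float));
    m.col = malloc((size_t)rows * cols * sizeof(int));
    for (int i = 0; i < rows; i++) {
        m.rowp[i] = m.nnz;
        for (int j = 0; j < cols; j++)
            if (a[i * cols + j] != 0.0f) {
                m.val[m.nnz] = a[i * cols + j];
                m.col[m.nnz] = j;
                m.nnz++;
            }
    }
    m.rowp[rows] = m.nnz;
    return m;
}

/* MMA step: C += A_compressed * B, touching only the stored non-zeros. */
static void sparse_mma(const CompressedMatrix *a, const float *b, float *c, int n) {
    for (int i = 0; i < a->rows; i++)
        for (int k = a->rowp[i]; k < a->rowp[i + 1]; k++)
            for (int j = 0; j < n; j++)
                c[i * n + j] += a->val[k] * b[a->col[k] * n + j];
}

int main(void) {
    float a[2 * 3] = { 1, 0, 2,   0, 3, 0 };   /* sparse operand */
    float b[3 * 2] = { 1, 2,  3, 4,  5, 6 };   /* dense operand  */
    float c[2 * 2] = { 0 };
    CompressedMatrix ca = compress(a, 2, 3);
    sparse_mma(&ca, b, c, 2);
    printf("%.0f %.0f\n%.0f %.0f\n", c[0], c[1], c[2], c[3]);   /* 11 14 / 9 12 */
    free(ca.val); free(ca.col); free(ca.rowp);
    return 0;
}
```

The point of the compressed layout is that the multiply-accumulate loop visits only the stored non-zeros, which is what makes a sparse MMA cheaper than its dense counterpart.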
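Patent 6314431 describes a compiler that decides where to insert instruction pre-fetches by testing whether a pre-fetch is likely to be cost effective and whether the predicted size of the pre-fetched trace supports cost-effective coverage. The C sketch below shows the shape of such a decision heuristic; the thresholds, field names, and decision formula are assumptions for illustration, not the patented algorithm.

```c
/* Illustrative compiler-side heuristic only: all numbers and names are assumed. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double taken_probability;   /* likelihood the pre-fetched trace actually executes */
    int    trace_size_lines;    /* predicted trace size, in cache lines               */
    int    distance_cycles;     /* cycles between the insertion point and first use   */
} TraceInfo;

enum {
    MISS_LATENCY    = 100,  /* cycles saved per avoided instruction-cache miss       */
    PREFETCH_COST   = 4,    /* cycles spent issuing one pre-fetch                    */
    MAX_TRACE_LINES = 8     /* above this, coverage is assumed not cost effective    */
};

/* Decide whether inserting pre-fetches for this trace is worthwhile. */
static bool should_insert_prefetch(const TraceInfo *t) {
    /* Expected benefit must outweigh the cost of covering the trace... */
    double expected_benefit = t->taken_probability * MISS_LATENCY;
    double coverage_cost    = (double)PREFETCH_COST * t->trace_size_lines;
    if (expected_benefit <= coverage_cost)
        return false;
    /* ...the predicted trace must be small enough to cover cheaply... */
    if (t->trace_size_lines > MAX_TRACE_LINES)
        return false;
    /* ...and the insertion point must be early enough to hide the miss latency. */
    return t->distance_cycles >= MISS_LATENCY;
}

int main(void) {
    TraceInfo hot  = { 0.9, 4, 150 };   /* likely taken, small, inserted early */
    TraceInfo cold = { 0.1, 4, 150 };   /* rarely taken                        */
    printf("hot: %d, cold: %d\n", should_insert_prefetch(&hot), should_insert_prefetch(&cold));
    return 0;
}
```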
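Patent 5752037 describes, as its first strategy, reversing a nested loop so that the strides of a data reference run in the same direction and then prefetching along that common direction. The C sketch below illustrates that idea, assuming GCC/Clang's __builtin_prefetch as the prefetch mechanism and an arbitrary prefetch distance; the patent itself concerns compiler-inserted prefetches rather than hand-written ones.

```c
/* Illustrative only: array size, prefetch distance, and the hand-written
 * prefetch calls are assumptions, not the patent's mechanism.              */
#include <stdio.h>

#define N 256
#define M 256
#define PF_DIST 8            /* how many elements ahead to prefetch */

static double a[N][M];

/* Original loop nest: the outer loop walks rows forward while the inner loop
 * walks columns backward, so the reference a[i][j] has strides in opposite
 * directions and a simple "prefetch ahead" scheme has no single direction. */
static double sum_original(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = M - 1; j >= 0; j--)
            s += a[i][j];
    return s;
}

/* First strategy from the abstract: reverse the inner loop so both strides
 * move forward, then prefetch in that common direction. The reversal is only
 * legal because iteration order does not change the result here, which is
 * the kind of check a compiler would make before applying it. */
static double sum_reversed_with_prefetch(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++) {          /* inner loop reversed */
            if (j + PF_DIST < M)
                __builtin_prefetch(&a[i][j + PF_DIST]);
            s += a[i][j];
        }
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            a[i][j] = 1.0;
    printf("%.0f %.0f\n", sum_original(), sum_reversed_with_prefetch());
    return 0;
}
```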