Patents by Inventor Jayashree Venkatesh

Jayashree Venkatesh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220366007
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
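As a rough illustration of the compress/decompress idea in the abstract above (not the patented API, whose interfaces are not given here), the sketch below prunes each group of four values to its two largest-magnitude entries and records their positions, the kind of 2:4 structured-sparsity scheme used by sparse tensor core hardware. A matrix that is already 2:4 sparse round-trips losslessly:

```python
import numpy as np

def compress_2to4(m):
    # Keep the two largest-magnitude values in each group of four,
    # plus index metadata recording where they came from.
    rows, cols = m.shape
    assert cols % 4 == 0
    vals = np.zeros((rows, cols // 2))
    idx = np.zeros((rows, cols // 2), dtype=np.int64)
    for r in range(rows):
        for g in range(cols // 4):
            group = m[r, 4 * g:4 * g + 4]
            keep = np.sort(np.argsort(np.abs(group))[-2:])  # lanes to keep
            vals[r, 2 * g:2 * g + 2] = group[keep]
            idx[r, 2 * g:2 * g + 2] = keep + 4 * g
    return vals, idx

def decompress(vals, idx, cols):
    # Scatter the kept values back into a dense zero matrix.
    out = np.zeros((vals.shape[0], cols))
    for r in range(vals.shape[0]):
        out[r, idx[r]] = vals[r]
    return out

# A matrix that is already 2:4 sparse (at most 2 non-zeros per group of 4)
# survives a compress/decompress round trip exactly, at half the storage.
m = np.array([[1.0, 0.0, -3.0, 0.0, 0.0, 2.0, 0.0, 5.0]])
vals, idx = compress_2to4(m)
assert vals.shape == (1, 4)                       # 8 values -> 4 + metadata
assert np.allclose(decompress(vals, idx, 8), m)   # lossless round trip
```

An MMA on the decompressed matrix then yields the same product as the dense original, which is why hardware can operate on the compressed form directly.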
  • Publication number: 20220365882
    Abstract: Apparatuses, systems, and techniques to control operation of a memory cache. In at least one embodiment, cache guidance is specified within application source code by associating guidance with the declaration of a memory block and then applying the specified guidance to source code statements that access said memory block.
    Type: Application
    Filed: August 5, 2021
    Publication date: November 17, 2022
    Inventors: Harold Carter Edwards, Luke David Durant, Stephen Jones, Jack H. Choquette, Ronny Krashinsky, Dmitri Vainbrand, Olivier Giroux, Olivier Francois Joseph Harel, Shirish Gadre, Ze Long, Matthieu Tardy, David Dastous St Hilaire, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Jaewook Shin, Jayashree Venkatesh, Girish Bhaskar Bharambe
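The guidance-at-declaration idea above can be modeled in a few lines: a cache hint is attached once, where the memory block is declared, and every access to that block inherits it, so individual load sites need no annotation. This Python model is purely illustrative; the patent concerns GPU cache hardware, and every name below is invented:

```python
class GuidedBuffer:
    """A memory block declared with cache guidance; accesses inherit it."""
    def __init__(self, size, guidance):
        self.data = [0] * size
        self.guidance = guidance   # e.g. "persist" or "stream" (made-up labels)
        self.access_log = []

    def load(self, i):
        # The declared guidance is applied to each access automatically.
        self.access_log.append((i, self.guidance))
        return self.data[i]

buf = GuidedBuffer(16, guidance="persist")  # guidance stated once, at declaration
buf.load(3)
buf.load(7)
assert buf.access_log == [(3, "persist"), (7, "persist")]
```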
  • Publication number: 20220366008
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Publication number: 20220365783
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Publication number: 20220365833
    Abstract: Apparatuses, systems, and techniques to perform an operation to indicate one or more non-zero values within one or more matrices of data; to perform an API to compress one or more matrices of data; to perform a matrix multiply accumulate (MMA) operation on two or more matrices of data, wherein at least one of the two or more matrices contains compressed data; and/or to perform an API to decompress one or more matrices of data. In at least one embodiment, one or more circuits are configured to receive and compile one or more instructions to perform computational operations for a sparse matrix multiplication.
    Type: Application
    Filed: May 12, 2022
    Publication date: November 17, 2022
    Inventors: Jaewook Shin, Balaji Krishna Yugandhar Atukuri, Edward H. Gornish, Jayashree Venkatesh
  • Patent number: 10430229
    Abstract: To use SIMD lanes efficiently for domain shader execution, domain point data from different domain shader patches may be packed together into a single SIMD thread. To generate an efficient code sequence, each domain point occupies one SIMD lane and all attributes for the domain point reside in their own partition of General Register File (GRF) space. This technique is called the multiple-patch SIMD dispatch mode.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: October 1, 2019
    Assignee: Intel Corporation
    Inventors: Jayashree Venkatesh, Guei-Yuan Lueh, Subramaniam Maiyuran
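The multiple-patch dispatch mode described above can be sketched with a small model (the names and the SIMD width are illustrative, not from the patent): domain points from several patches are packed back-to-back across the lanes of one SIMD thread, and each attribute occupies its own partition, mirroring the per-attribute General Register File partitions the abstract describes:

```python
import numpy as np

SIMD_WIDTH = 8  # illustrative lane count per hardware thread

def pack_patches(patches):
    """Pack domain points from multiple patches into SIMD threads.
    `patches` is a list of (u, v) coordinate arrays, one pair per patch.
    Returns per-attribute arrays of shape (num_threads, SIMD_WIDTH):
    each attribute lives in its own partition, one lane per domain point."""
    u_all = np.concatenate([p[0] for p in patches])
    v_all = np.concatenate([p[1] for p in patches])
    n = len(u_all)
    threads = -(-n // SIMD_WIDTH)  # ceiling division
    u = np.zeros((threads, SIMD_WIDTH))
    v = np.zeros((threads, SIMD_WIDTH))
    u.flat[:n] = u_all
    v.flat[:n] = v_all
    active = np.zeros((threads, SIMD_WIDTH), dtype=bool)
    active.flat[:n] = True  # lanes actually holding a domain point
    return u, v, active

# Three patches with 5, 3, and 6 domain points: 14 points total.
patches = [(np.linspace(0, 1, k), np.linspace(1, 0, k)) for k in (5, 3, 6)]
u, v, active = pack_patches(patches)
# One-patch-per-thread dispatch would need 3 threads with many idle lanes;
# multi-patch packing needs only ceil(14 / 8) = 2 threads.
assert u.shape == (2, 8)
assert active.sum() == 14
```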
  • Publication number: 20170178384
    Abstract: Reducing SIMD fragmentation for SIMD execution widths of 32 or even 64 channels in a single hardware thread leads to better EU utilization. Increasing SIMD execution widths to 32 or 64 channels per thread enables handling more vertices, patches, primitives, and triangles per EU hardware thread. Modified 3D pipeline shader payloads can handle multiple patches in the case of domain shaders, multiple primitives when the primitive object instance count is greater than one in the case of geometry shaders, and multiple triangles in the case of pixel shaders.
    Type: Application
    Filed: December 21, 2015
    Publication date: June 22, 2017
    Inventors: Jayashree Venkatesh, Gang Chen, Thomas F. Raoux, Guei-Yuan Lueh, Subramaniam Maiyuran
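The utilization argument in the abstract above can be made concrete with a little arithmetic (the object sizes and widths below are made up for illustration): when each thread carries one object smaller than the SIMD width, the leftover lanes are wasted; packing multiple objects into a wider thread raises the fraction of lanes doing useful work:

```python
def utilization(object_sizes, simd_width, pack):
    """Fraction of SIMD lanes doing useful work.
    pack=False: one object per thread (lanes idle when object < width).
    pack=True: objects packed back-to-back across lanes of wider threads."""
    total = sum(object_sizes)
    if pack:
        threads = -(-total // simd_width)  # ceiling division
    else:
        threads = sum(-(-s // simd_width) for s in object_sizes)
    return total / (threads * simd_width)

sizes = [5, 3, 6, 4, 7, 5]  # e.g. domain points per patch, 30 in total
# One object per SIMD8 thread: 6 threads x 8 lanes for 30 points.
assert utilization(sizes, 8, pack=False) == 0.625
# All objects packed into one SIMD32 thread: 30 of 32 lanes busy.
assert utilization(sizes, 32, pack=True) == 0.9375
```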
  • Publication number: 20170178274
    Abstract: To use SIMD lanes efficiently for domain shader execution, domain point data from different domain shader patches may be packed together into a single SIMD thread. To generate an efficient code sequence, each domain point occupies one SIMD lane and all attributes for the domain point reside in their own partition of General Register File (GRF) space. This technique is called the multiple-patch SIMD dispatch mode.
    Type: Application
    Filed: December 21, 2015
    Publication date: June 22, 2017
    Inventors: Jayashree Venkatesh, Guei-Yuan Lueh, Subramaniam Maiyuran