Patents by Inventor Varghese George

Varghese George has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11314515
    Abstract: Embodiments described herein provide for an instruction and associated logic to enable a vector multiply-add instruction with automatic zero skipping for sparse input. One embodiment provides for a general-purpose graphics processor comprising logic to perform operations comprising fetching a hardware macro instruction having a predicate mask, a repeat count, and a set of initial operands, where the initial operands include a destination operand and multiple source operands. The hardware macro instruction is configured to perform one or more multiply/add operations on input data associated with a set of matrices.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 26, 2022
    Assignee: Intel Corporation
    Inventors: Supratim Pal, Sasikanth Avancha, Ishwar Bhati, Wei-Yu Chen, Dipankar Das, Ashutosh Garg, Chandra S. Gurram, Junjie Gu, Guei-Yuan Lueh, Subramaniam Maiyuran, Jorge E. Parra, Sudarshan Srinivasan, Varghese George
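    A minimal Python sketch of the zero-skipping, predicated multiply-add behavior this abstract describes (illustrative only; function and argument names are hypothetical, not the patented hardware):
      def predicated_madd(dst, src0, src1, predicate_mask, repeat=1):
          """Accumulate dst[i] += src0[i] * src1[i] for unmasked lanes,
          skipping lanes where either source operand is zero (sparse input)."""
          for _ in range(repeat):
              for i in range(len(dst)):
                  if not (predicate_mask >> i) & 1:
                      continue                      # lane masked off by the predicate
                  if src0[i] == 0 or src1[i] == 0:
                      continue                      # zero skip: no multiply issued
                  dst[i] += src0[i] * src1[i]
          return dst

      print(predicated_madd([0, 0, 0, 0], [1, 0, 3, 4], [5, 6, 0, 8], 0b1011))  # [5, 0, 0, 32]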
  • Publication number: 20220122215
    Abstract: Embodiments described herein include software, firmware, and hardware that provides techniques to enable deterministic scheduling across multiple general-purpose graphics processing units. One embodiment provides a multi-GPU architecture with uniform latency. One embodiment provides techniques to distribute memory output based on memory chip thermals. One embodiment provides techniques to enable thermally aware workload scheduling. One embodiment provides techniques to enable end to end contracts for workload scheduling on multiple GPUs.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 21, 2022
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Selvakumar Panneer, Saurabh Tangri, Ben Ashbaugh, Scott Janus, Abhishek Appu, Varghese George, Ravishankar Iyer, Nilesh Jain, Pattabhiraman K, Altug Koker, Mike Macpherson, Josh Mastronarde, Elmoustapha Ould-Ahmed-Vall, Jayakrishna P. S, Eric Samson
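    A small, hypothetical Python sketch of thermally aware workload placement in the spirit of this abstract (names and the thermal limit are invented for illustration):
      def pick_gpu(gpu_temps_c, thermal_limit_c=95.0):
          """Return the index of the coolest GPU with thermal headroom, or None."""
          candidates = [(t, i) for i, t in enumerate(gpu_temps_c) if t < thermal_limit_c]
          return min(candidates)[1] if candidates else None

      print(pick_gpu([88.0, 72.5, 96.1]))   # -> 1 (coolest GPU still under the limit)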
  • Publication number: 20220121421
    Abstract: Methods and apparatus relating to techniques for multi-tile memory management. In an example, an apparatus comprises a cache memory, a high-bandwidth memory, a shader core communicatively coupled to the cache memory and comprising a processing element to decompress a first data element extracted from an in-memory database in the cache memory and having a first bit length to generate a second data element having a second bit length, greater than the first bit length, and an arithmetic logic unit (ALU) to compare the data element to a target value provided in a query of the in-memory database. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 21, 2022
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Aravindh Anantaraman, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Subramaniam Maiyuran, Joydeep Ray, Lakshminarayana Striramassarma, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Prasoonkumar Surti, David Puffer, James Valerio, Ankur N. Shah
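    A hypothetical Python sketch of the decompress-then-compare flow described above (bit widths and names are assumptions, not the claimed hardware):
      def unpack_and_match(packed, count, bits_in, target):
          """Expand 'count' bit-packed values (each bits_in wide) from 'packed',
          then compare each decompressed element against the query target."""
          mask = (1 << bits_in) - 1
          values = [(packed >> (i * bits_in)) & mask for i in range(count)]
          return [v == target for v in values]

      # Four 4-bit values 7, 2, 7, 5 packed little-endian into one word:
      word = 7 | (2 << 4) | (7 << 8) | (5 << 12)
      print(unpack_and_match(word, count=4, bits_in=4, target=7))  # [True, False, True, False]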
  • Publication number: 20220114108
    Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache memory that is coupled to the processing resources. The cache controller is configured to set an initial aging policy using an aging field based on age of cache lines within the cache memory and to determine whether a hint or an instruction to indicate a level of aging has been received.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Altug Koker, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Abhishek Appu, Aravindh Anantaraman, Valentin Andrei, Durgaprasad Bilagi, Varghese George, Brent Insko, Sanjeev Jahagirdar, Scott Janus, Pattabhiraman K., SungYe Kim, Subramaniam Maiyuran, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Xinmin Tian
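    A toy Python sketch of an age-driven cache with a hint that overrides the aging level, loosely following the abstract (class and field names are hypothetical):
      class AgedCache:
          """Eviction order is driven by a per-line age field; a hint can reset
          a line's aging level so it stays resident longer."""
          def __init__(self, capacity, initial_age=0):
              self.capacity, self.initial_age = capacity, initial_age
              self.lines = {}                      # tag -> age

          def touch(self, tag, age_hint=None):
              for t in self.lines:                 # every other line ages by one
                  self.lines[t] += 1
              if len(self.lines) >= self.capacity and tag not in self.lines:
                  oldest = max(self.lines, key=self.lines.get)
                  del self.lines[oldest]           # evict the oldest line
              self.lines[tag] = self.initial_age if age_hint is None else age_hint

      cache = AgedCache(capacity=2)
      for tag in ("a", "b", "c"):
          cache.touch(tag)
      cache.touch("c", age_hint=-4)                # hint: treat this line as very young
      print(cache.lines)                           # {'b': 2, 'c': -4}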
  • Publication number: 20220114096
    Abstract: Multi-tile memory management for detecting cross-tile access, providing multi-tile inference scaling with multicasting of data via a copy operation, and providing page migration is disclosed herein. In one embodiment, a graphics processor for a multi-tile architecture includes a first graphics processing unit (GPU) having a memory and a memory controller, a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The memory controller is configured to determine whether frequent cross-tile memory accesses occur from the first GPU to the memory of the second GPU in the multi-GPU configuration and to send a message to initiate a data transfer mechanism when frequent cross-tile memory accesses occur from the first GPU to the memory of the second GPU.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Lakshminarayanan Striramassarma, Prasoonkumar Surti, Varghese George, Ben Ashbaugh, Aravindh Anantaraman, Valentin Andrei, Abhishek Appu, Nicolas Galoppo Von Borries, Altug Koker, Mike Macpherson, Subramaniam Maiyuran, Nilay Mistry, Elmoustapha Ould-Ahmed-Vall, Selvakumar Panneer, Vasanth Ranganathan, Joydeep Ray, Ankur Shah, Saurabh Tangri
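    A hypothetical Python sketch of detecting frequent cross-tile accesses and signalling a data transfer, as the abstract describes (threshold and names are invented):
      from collections import Counter

      class CrossTileMonitor:
          def __init__(self, threshold=64):
              self.threshold = threshold
              self.counts = Counter()              # (requesting GPU, page) -> access count

          def record_access(self, src_gpu, owner_gpu, page):
              if src_gpu == owner_gpu:
                  return None                      # local access, nothing to do
              self.counts[(src_gpu, page)] += 1
              if self.counts[(src_gpu, page)] >= self.threshold:
                  return ("migrate", page, src_gpu)   # message to start a data transfer
              return None

      mon = CrossTileMonitor(threshold=2)
      mon.record_access("gpu0", "gpu1", page=0x40)
      print(mon.record_access("gpu0", "gpu1", page=0x40))   # ('migrate', 64, 'gpu0')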
  • Publication number: 20220107914
    Abstract: Embodiments are generally directed to a multi-tile architecture for graphics operations. An embodiment of an apparatus includes a multi-tile architecture for graphics operations including a multi-tile graphics processor, the multi-tile processor includes one or more dies; multiple processor tiles installed on the one or more dies; and a structure to interconnect the processor tiles on the one or more dies, wherein the structure is to enable communications between the processor tiles.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 7, 2022
    Applicant: Intel Corporation
    Inventors: Altug Koker, Ben Ashbaugh, Scott Janus, Aravindh Anantaraman, Abhishek R. Appu, Niranjan Cooray, Varghese George, Arthur Hunter, Brent E. Insko, Elmoustapha Ould-Ahmed-Vall, Selvakumar Panneer, Vasanth Ranganathan, Joydeep Ray, Kamal Sinha, Lakshminarayanan Striramassarma, Prasoonkumar Surti, Saurabh Tangri
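    A very rough Python sketch of a structure that lets processor tiles on one or more dies exchange messages (purely illustrative; names are hypothetical):
      class TileFabric:
          def __init__(self, tiles):
              self.mailboxes = {t: [] for t in tiles}

          def send(self, src, dst, payload):
              assert src in self.mailboxes and dst in self.mailboxes
              self.mailboxes[dst].append((src, payload))   # tile-to-tile communication

      fabric = TileFabric(tiles=["die0.tile0", "die0.tile1", "die1.tile0"])
      fabric.send("die0.tile0", "die1.tile0", payload="work-descriptor")
      print(fabric.mailboxes["die1.tile0"])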
  • Publication number: 20220073578
    Abstract: Provided herein are methods of killing or reducing the number of naturally-occurring and/or treatment-induced senescent cells and diseased cells in a subject in need thereof, decreasing the accumulation of naturally-occurring and/or treatment-induced senescent cells and diseased cells in a subject in need thereof, that include administering to the subject a therapeutically effective amount of one or more common gamma-chain family cytokine receptor activating agent(s) and/or one or more agent(s) that result(s) in a decrease in the activation of a TGF-β receptor.
    Type: Application
    Filed: June 1, 2021
    Publication date: March 10, 2022
    Inventors: Hing C. Wong, Xiaoyun Zhu, Bai Liu, Pallavi Chaturvedi, Varghese George
  • Publication number: 20220066931
    Abstract: Embodiments described herein provide techniques to enable the dynamic reconfiguration of memory on a general-purpose graphics processing unit. One embodiment described herein enables dynamic reconfiguration of cache memory bank assignments based on hardware statistics. One embodiment enables virtual memory address translation using mixed four kilobyte and sixty-four kilobyte pages within the same page table hierarchy and under the same page directory. One embodiment provides for a graphics processor and associated heterogeneous processing system having near and far regions of the same level of a cache hierarchy.
    Type: Application
    Filed: March 14, 2020
    Publication date: March 3, 2022
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Niranjan Cooray, Subramaniam Maiyuran, Altug Koker, Prasoonkumar Surti, Varghese George, Valentin Andrei, Abhishek Appu, Guadalupe Garcia, Pattabhiraman K, SungYe Kim, Sanjay Kumar, Pratik Marolia, Elmoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, William Sadler, Lakshminarayanan Striramassarma
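    A hypothetical Python sketch of address translation when 4 KiB and 64 KiB pages share one page-table hierarchy, in the spirit of the abstract (table layout and names are invented):
      PAGE_TABLE = {                   # virtual page base -> (physical base, page size)
          0x0000_0000: (0x1000_0000, 4 * 1024),
          0x0001_0000: (0x2000_0000, 64 * 1024),
      }

      def translate(vaddr):
          for vbase, (pbase, size) in PAGE_TABLE.items():
              if vbase <= vaddr < vbase + size:     # leaf entry records its own page size
                  return pbase + (vaddr - vbase)
          raise KeyError("page fault")

      print(hex(translate(0x0001_2345)))   # falls in the 64 KiB page -> 0x20002345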
  • Publication number: 20220058053
    Abstract: One embodiment provides for a general-purpose graphics processing unit comprising a set of processing elements to execute one or more thread groups of a second kernel to be executed by the general-purpose graphics processor, an on-chip memory coupled to the set of processing elements, and a scheduler coupled with the set of processing elements, the scheduler to schedule the thread groups of the kernel to the set of processing elements, wherein the scheduler is to schedule a thread group of the second kernel to execute subsequent to a thread group of a first kernel, the thread group of the second kernel configured to access a region of the on-chip memory that contains data written by the thread group of the first kernel in response to a determination that the second kernel is dependent upon the first kernel.
    Type: Application
    Filed: September 13, 2021
    Publication date: February 24, 2022
    Applicant: Intel Corporation
    Inventors: Valentin Andrei, Aravindh Anantaraman, Abhishek R. Appu, Nicolas C. Galoppo von Borries, Altug Koker, SungYe Kim, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran, Vasanth Ranganathan, Joydeep Ray, Varghese George
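    A hypothetical Python sketch of scheduling a dependent kernel's thread group onto the processing element that holds the producer's on-chip data (names are invented):
      def schedule(thread_groups, placements, dependencies):
          """placements: producer thread group -> processing element.
          dependencies: consumer thread group -> producer thread group."""
          schedule_out, next_pe = {}, 0
          for tg in thread_groups:
              producer = dependencies.get(tg)
              if producer in placements:
                  schedule_out[tg] = placements[producer]   # co-locate with producer's data
              else:
                  schedule_out[tg] = next_pe                # any free processing element
                  next_pe += 1
              placements[tg] = schedule_out[tg]
          return schedule_out

      print(schedule(["A0", "B0"], {}, {"B0": "A0"}))   # B0 lands on A0's processing element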
  • Publication number: 20220058853
    Abstract: One embodiment provides for a graphics processor comprising a block of graphics compute units, a graphics processor pipeline coupled to the block of graphics compute units, and a programmable neural network unit including one or more neural network hardware blocks. The programmable neural network unit is coupled with the block of graphics compute units and the graphics processor pipeline. The one or more neural network hardware blocks include hardware to perform neural network operations and activation operations for a layer of a neural network. The programmable neural network unit can configure settings of one or more hardware blocks within the graphics processor pipeline based on a machine learning model trained to optimize performance of a set of workloads.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 24, 2022
    Applicant: Intel Corporation
    Inventors: Hugues Labbe, Darrel Palke, Sherine Abdelhak, Jill Boyce, Varghese George, Scott Janus, Adam Lake, Zhijun Lei, Zhengmin Li, Mike Macpherson, Carl Marshall, Selvakumar Panneer, Prasoonkumar Surti, Karthik Veeramani, Deepak Vembar, Vallabhajosyula Srinivasa Somayazulu
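    A loose Python sketch of choosing pipeline settings from a learned per-workload recommendation, standing in for the trained model the abstract mentions (all names and settings are hypothetical):
      LEARNED_SETTINGS = {
          "bandwidth_bound": {"cache_partition": "large", "clock_bias": "memory"},
          "compute_bound":   {"cache_partition": "small", "clock_bias": "compute"},
      }

      def configure_pipeline(workload_features):
          # A trivial classifier stands in for the trained network here.
          cls = "compute_bound" if workload_features["alu_ratio"] > 0.5 else "bandwidth_bound"
          return LEARNED_SETTINGS[cls]

      print(configure_pipeline({"alu_ratio": 0.8}))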
  • Publication number: 20220036500
    Abstract: Embodiments described herein provide techniques to disaggregate an architecture of a system on a chip integrated circuit into multiple distinct chiplets that can be packaged onto a common chassis. In one embodiment, a graphics processing unit or parallel processor is composed from diverse silicon chiplets that are separately manufactured. A chiplet is an at least partially and distinctly packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 3, 2022
    Applicant: Intel Corporation
    Inventors: Naveen Matam, Lance Cheney, Eric Finley, Varghese George, Sanjeev Jahagirdar, Altug Koker, Josh Mastronarde, Iqbal Rajwani, Lakshminarayanan Striramassarma, Melaku Teshome, Vikranth Vemulapalli, Binoj Xavier
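    A rough Python sketch of describing a device assembled from separately manufactured chiplets on a common chassis (fields and names are hypothetical):
      from dataclasses import dataclass

      @dataclass
      class Chiplet:
          name: str
          ip: str            # e.g. "compute", "media", "io"
          process_node: str  # chiplets may come from different processes

      chassis = [
          Chiplet("tile0", "compute", "node_a"),
          Chiplet("tile1", "compute", "node_a"),
          Chiplet("io0", "io", "node_b"),
      ]
      print(sorted({c.ip for c in chassis}))   # distinct IP blocks assembled into one package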
  • Patent number: 11232533
    Abstract: Embodiments are generally directed to memory prefetching in multiple GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 25, 2022
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Nicolas Galoppo von Borries, Varghese George, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran
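    A hypothetical Python sketch of a prefetch check that drops requests for pages owned by another GPU, as the abstract describes (names are invented):
      def maybe_prefetch(gpu_id, host_id, page, page_owner, cache):
          owner = page_owner.get(page)
          if owner not in (gpu_id, host_id):
              return False                  # prohibited: page owned by another GPU
          cache.add(page)                   # model the fill as inserting the page
          return True

      cache = set()
      print(maybe_prefetch("gpu0", "host", page=42, page_owner={42: "gpu1"}, cache=cache))  # False
      print(maybe_prefetch("gpu0", "host", page=7, page_owner={7: "gpu0"}, cache=cache))    # True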
  • Patent number: 11227358
    Abstract: Apparatuses including general-purpose graphics processing units and graphics multiprocessors that exploit queues or transitional buffers for improved low-latency high-bandwidth on-die data retrieval are disclosed. In one embodiment, a graphics multiprocessor includes at least one compute engine to provide a request, a queue or transitional buffer, and logic coupled to the queue or transitional buffer. The logic is configured to cause a request to be transferred to a queue or transitional buffer for temporary storage without processing the request and to determine whether the queue or transitional buffer has a predetermined amount of storage capacity.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 18, 2022
    Assignee: Intel Corporation
    Inventors: Aravindh Anantaraman, Altug Koker, Varghese George, Subramaniam Maiyuran, SungYe Kim, Valentin Andrei
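    A toy Python sketch of a transitional buffer that stores requests without processing them and reports when a capacity watermark is reached (names and watermark are hypothetical):
      from collections import deque

      class TransitionalBuffer:
          def __init__(self, capacity):
              self.capacity = capacity
              self.queue = deque()

          def enqueue(self, request):
              self.queue.append(request)                # stored, not processed
              return len(self.queue) >= self.capacity   # watermark reached?

      buf = TransitionalBuffer(capacity=2)
      print(buf.enqueue("load A"), buf.enqueue("load B"))   # False True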
  • Patent number: 11221848
    Abstract: Embodiments described herein provide an apparatus comprising a plurality of processing resources including a first processing resource and a second processing resource, a shared local memory communicatively coupled to the first processing resource and the second processing resource, and a processor to receive an instruction to initiate a matrix multiplication operation, write a first set of matrix data into a first set of registers, and share the first set of matrix data between the first processing resource and the second processing resource for use in the matrix multiplication operation. Other embodiments may be described and claimed.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 11, 2022
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Varghese George, Joydeep Ray, Ashutosh Garg, Jorge Parra, Shubh Shah, Shubra Marwaha
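    A hypothetical Python/NumPy sketch of sharing one copy of a matrix operand between two processing resources during a matrix multiplication (names are invented):
      import numpy as np

      def shared_matmul(A, B):
          shared_A = A                             # single shared copy of the A operand
          half = B.shape[1] // 2
          c_left = shared_A @ B[:, :half]          # resource 0's portion of the result
          c_right = shared_A @ B[:, half:]         # resource 1's portion of the result
          return np.hstack([c_left, c_right])

      A, B = np.arange(4).reshape(2, 2), np.eye(2)
      print(np.allclose(shared_matmul(A, B), A @ B))   # True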
  • Patent number: 11221762
    Abstract: A processor includes a first memory interface to be coupled to a plurality of memory module sockets located off-package, a second memory interface to be coupled to a non-volatile memory (NVM) socket located off-package, and a multi-level memory controller (MLMC). The MLMC is to: control the memory modules disposed in the plurality of memory module sockets as main memory in a one-level memory (1LM) configuration; detect a switch from a 1LM mode of operation to a two-level memory (2LM) mode of operation in response to a basic input/output system (BIOS) detection of a low-power memory module disposed in one of the memory module sockets and a NVM device disposed in the NVM socket in a 2LM configuration; and control the low-power memory module as cache in the 2LM configuration in response to detection of the switch from the 1LM mode of operation to the 2LM mode of operation.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: January 11, 2022
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Varghese George, Inder M. Sodhi, Jeffrey R. Wilcox
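    A hypothetical Python sketch of the 1LM-to-2LM decision described above: if a low-power module and an NVM device are both reported, the low-power module is used as cache (field names are invented):
      def select_memory_mode(bios_report):
          if bios_report.get("low_power_dimm") and bios_report.get("nvm_device"):
              return {"mode": "2LM", "cache": "low_power_dimm", "main": "nvm_device"}
          return {"mode": "1LM", "main": "dimm_modules"}

      print(select_memory_mode({"low_power_dimm": True, "nvm_device": True}))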
  • Patent number: 11204977
    Abstract: Described herein is an accelerator device including a host interface, a fabric interconnect coupled with the host interface, and one or more hardware tiles coupled with the fabric interconnect, the one or more hardware tiles including sparse matrix multiply acceleration hardware including a systolic array with feedback inputs.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: December 21, 2021
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Jorge Parra, Supratim Pal, Ashutosh Garg, Shubra Marwaha, Chandra Gurram, Darin Starkey, Durgesh Borkar, Varghese George
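    A minimal Python sketch of a systolic-style accumulation column whose result can be fed back as the next pass's accumulator input (illustrative only; names are hypothetical):
      def systolic_column(weights, activations, feedback=0.0):
          acc = feedback                           # feedback input seeds the column
          for w, a in zip(weights, activations):
              acc = acc + w * a                    # one multiply-accumulate per stage
          return acc

      partial = systolic_column([1, 2], [3, 4])            # 11
      print(systolic_column([5], [6], feedback=partial))   # 41: reuses the prior result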
  • Publication number: 20210374897
    Abstract: Embodiments described herein include software, firmware, and hardware logic that provides techniques to perform arithmetic on sparse data via a systolic processing unit. Embodiments described herein provide techniques to skip computational operations for zero-filled matrices and sub-matrices. Embodiments additionally provide techniques to maintain data compression through to a processing unit. Embodiments additionally provide an architecture for a sparse aware logic unit.
    Type: Application
    Filed: June 3, 2021
    Publication date: December 2, 2021
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Scott Janus, Varghese George, Subramaniam Maiyuran, Altug Koker, Abhishek Appu, Prasoonkumar Surti, Vasanth Ranganathan, Valentin Andrei, Ashutosh Garg, Yoav Harel, Arthur Hunter, Jr., SungYe Kim, Mike Macpherson, Elmoustapha Ould-Ahmed-Vall, William Sadler, Lakshminarayanan Striramassarma, Vikranth Vemulapalli
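    A hypothetical Python/NumPy sketch of skipping computation for zero-filled sub-matrices during a block-wise multiply (block size and names are invented):
      import numpy as np

      def block_sparse_matmul(A, B, block=2):
          C = np.zeros((A.shape[0], B.shape[1]))
          for i in range(0, A.shape[0], block):
              for k in range(0, A.shape[1], block):
                  a_blk = A[i:i+block, k:k+block]
                  if not a_blk.any():
                      continue                     # zero-filled sub-matrix: skip the work
                  C[i:i+block, :] += a_blk @ B[k:k+block, :]
          return C

      A = np.zeros((4, 4)); A[0, 0] = 2.0
      B = np.ones((4, 4))
      print(np.allclose(block_sparse_matmul(A, B), A @ B))   # True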
  • Publication number: 20210349966
    Abstract: Described herein is an accelerator device including a host interface, a fabric interconnect coupled with the host interface, and one or more hardware tiles coupled with the fabric interconnect, the one or more hardware tiles including sparse matrix multiply acceleration hardware including a systolic array with feedback inputs.
    Type: Application
    Filed: June 26, 2020
    Publication date: November 11, 2021
    Applicant: Intel Corporation
    Inventors: Subramaniam Maiyuran, Jorge Parra, Supratim Pal, Ashutosh Garg, Shubra Marwaha, Chandra Gurram, Darin Starkey, Durgesh Borkar, Varghese George
  • Publication number: 20210349848
    Abstract: Methods and apparatus relating to scalar core integration in a graphics processor. In an example, an apparatus comprises a processor to receive a set of workload instructions for a graphics workload from a host complex, determine a first subset of operations in the set of operations that is suitable for execution by a scalar processor complex of the graphics processing device and a second subset of operations in the set of operations that is suitable for execution by a vector processor complex of the graphics processing device, assign the first subset of operations to the scalar processor complex for execution to generate a first set of outputs, assign the second subset of operations to the vector processor complex for execution to generate a second set of outputs. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 11, 2021
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Aravindh Anantaraman, Abhishek R. Appu, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Subramaniam Maiyuran, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Ben Ashbaugh, Murali Ramadoss, Vikranth Vemulapalli, William Sadler, Jonathan Pearce, SungYe Kim
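    A hypothetical Python sketch of splitting a workload into scalar and vector subsets and combining the two output sets (names and the width heuristic are invented):
      def partition_and_run(ops):
          scalar_ops = [op for op in ops if op["width"] == 1]
          vector_ops = [op for op in ops if op["width"] > 1]
          scalar_out = [("scalar", op["name"]) for op in scalar_ops]   # scalar processor complex
          vector_out = [("vector", op["name"]) for op in vector_ops]   # vector processor complex
          return scalar_out + vector_out

      ops = [{"name": "setup", "width": 1}, {"name": "shade", "width": 16}]
      print(partition_and_run(ops))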
  • Publication number: 20210326176
    Abstract: Accelerated synchronization operations using fine grain dependency check are disclosed. A graphics multiprocessor includes a plurality of execution units and synchronization circuitry that is configured to determine availability of at least one execution unit. The synchronization circuitry to perform a fine grain dependency check of availability of dependent data or operands in shared local memory or cache when at least one execution unit is available.
    Type: Application
    Filed: May 11, 2021
    Publication date: October 21, 2021
    Applicant: Intel Corporation
    Inventors: Subramaniam Maiyuran, Varghese George, Altug Koker, Aravindh Anantaraman, SungYe Kim, Valentin Andrei, Joydeep Ray
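    A hypothetical Python sketch of a fine-grain dependency check that issues an operation only when an execution unit is free and its operands are already in shared local memory (names are invented):
      def try_dispatch(op, free_units, shared_local_memory):
          if not free_units:
              return None                          # no execution unit available
          if not all(operand in shared_local_memory for operand in op["deps"]):
              return None                          # fine-grain check: dependent data not ready
          return (free_units.pop(), op["name"])    # issue to an available unit

      slm = {"tileA", "tileB"}
      print(try_dispatch({"name": "blend", "deps": ["tileA", "tileB"]}, ["eu0"], slm))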