Patents by Inventor Vasanth Ranganathan
Vasanth Ranganathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250231769
Abstract: Techniques are provided to enable mid-thread (instruction-level) preemption in a graphics processor without requiring software intervention by a graphics driver associated with the graphics processor. Hardware-based mid-thread preemption is facilitated using the thread dispatch hardware of the graphics processor to trigger execution of a kernel program by the graphics processor that saves the thread state of preempted processing resources and facilitates the subsequent restoration of that thread state to the same or a different set of processing resources.
Type: Application
Filed: January 17, 2024
Publication date: July 17, 2025
Applicant: Intel Corporation
Inventors: Vasanth Ranganathan, James Valerio, Aditya Navale, Alan Curtis, Jeffery S. Boles, Hema Chand Nalluri, Vikranth Vemulapalli, Supratim Pal, Boris Kuznetsov, John A. Wiegert
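The save/restore flow can be pictured with a small host-side model; the ThreadState layout, scratch-slot scheme, and register-file size below are illustrative assumptions rather than the patented hardware mechanism.

```cpp
// Hypothetical model of a hardware-triggered save/restore kernel: on preemption
// the dispatcher launches a routine that copies each thread's register file and
// instruction pointer into scratch memory, so the state can later be restored
// onto the same or a different processing resource.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kGrfRegisters = 128;           // assumed register-file size per thread

struct ThreadState {
    std::array<uint32_t, kGrfRegisters> grf; // general register file contents
    uint64_t ip;                             // instruction pointer at preemption
    uint32_t flags;                          // execution-mask / predicate state
};

// "Save kernel": writes the preempted thread's state into a scratch slot.
void save_thread_state(const ThreadState& live, std::vector<ThreadState>& scratch, size_t slot) {
    scratch[slot] = live;
}

// "Restore kernel": loads the saved state onto a (possibly different) resource.
ThreadState restore_thread_state(const std::vector<ThreadState>& scratch, size_t slot) {
    return scratch[slot];
}

int main() {
    std::vector<ThreadState> scratch(64);    // one slot per preemptible thread
    ThreadState t{};
    t.grf[0] = 42; t.ip = 0x1000; t.flags = 0x1;

    save_thread_state(t, scratch, 7);        // mid-thread preemption point
    ThreadState resumed = restore_thread_state(scratch, 7);
    std::printf("resumed at ip=0x%llx, r0=%u\n",
                (unsigned long long)resumed.ip, resumed.grf[0]);
}
```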
-
Publication number: 20250232397
Abstract: Generation and storage of compressed z-planes in graphics processing is described. An example of a processor includes a rasterizer to generate a fragment of pixel data including blocks of pixel data; a depth pipeline to receive the fragment, the pipeline including first and second depth test hardware, the first depth test hardware to perform a coarse depth test including determining minimum and maximum depths for each block; and a depth buffer, wherein the processor is to determine whether the fragment meets requirements that the fragment fully covers a tile of pixel data and passes a first depth test, and that each of the minimum and maximum depths of the fragment has a same sign and exponent, and, upon determining that the fragment meets the requirements, to generate a compressed depth plane utilizing the first depth test and update the depth buffer with the compressed depth plane.
Type: Application
Filed: January 17, 2025
Publication date: July 17, 2025
Applicant: Intel Corporation
Inventors: Saikat Mandal, Karol Szerszen, Vasanth Ranganathan, Altug Koker, Michael Norris, Prasoonkumar Surti, Takahiro Murata
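The three eligibility conditions named in the abstract can be sketched directly; the helper below illustrates the sign-and-exponent comparison on 32-bit floats and is not the hardware implementation.

```cpp
// Illustrative check of the conditions the abstract names before a compressed
// depth plane is generated: full tile coverage, a passing coarse depth test,
// and min/max depths that share the same IEEE-754 sign and exponent.
#include <cstdint>
#include <cstdio>
#include <cstring>

static uint32_t float_bits(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

// Sign is bit 31, exponent is bits 23..30 of a 32-bit float.
static bool same_sign_and_exponent(float a, float b) {
    return (float_bits(a) >> 23) == (float_bits(b) >> 23);
}

bool can_emit_compressed_z_plane(bool fully_covers_tile, bool passes_coarse_depth,
                                 float min_depth, float max_depth) {
    return fully_covers_tile && passes_coarse_depth &&
           same_sign_and_exponent(min_depth, max_depth);
}

int main() {
    std::printf("%d\n", can_emit_compressed_z_plane(true, true, 0.51f, 0.62f)); // 1: same exponent
    std::printf("%d\n", can_emit_compressed_z_plane(true, true, 0.30f, 0.62f)); // 0: exponents differ
}
```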
-
Publication number: 20250173308
Abstract: Embodiments are generally directed to graphics processor data access and sharing. An embodiment of an apparatus includes a circuit element to produce a result in processing of an application; a load-store unit to receive the result and generate pre-fetch information for a cache utilizing the result; and a prefetch generator to produce prefetch addresses based at least in part on the pre-fetch information; wherein the load-store unit is to receive software assistance for prefetching, and wherein generation of the pre-fetch information is based at least in part on the software assistance.
Type: Application
Filed: November 25, 2024
Publication date: May 29, 2025
Applicant: Intel Corporation
Inventors: Altug Koker, Varghese George, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Niranjan Cooray, Nicolas Galoppo Von Borries, Mike MacPherson, Subramaniam Maiyuran, ElMoustapha Ould-Ahmed-Vall, David Puffer, Vasanth Ranganathan, Joydeep Ray, Ankur N. Shah, Lakshminarayanan Striramassarma, Prasoonkumar Surti, Saurabh Tangri
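As a rough picture of how a software hint could steer a prefetch generator, the sketch below turns a just-accessed address plus an assumed stride/depth hint into a list of prefetch addresses; the PrefetchHint descriptor is invented for illustration.

```cpp
// A minimal sketch, under assumed names, of a prefetch generator that combines
// a load result (the address just accessed) with software assistance (a stride
// and depth hint) to produce prefetch addresses for the cache.
#include <cstdint>
#include <cstdio>
#include <vector>

struct PrefetchHint {      // hypothetical "software assistance" descriptor
    int64_t stride_bytes;  // access stride suggested by the compiler/runtime
    int     depth;         // how many accesses ahead to prefetch
};

std::vector<uint64_t> generate_prefetch_addresses(uint64_t last_access, const PrefetchHint& hint) {
    std::vector<uint64_t> out;
    for (int i = 1; i <= hint.depth; ++i)
        out.push_back(last_access + i * hint.stride_bytes);
    return out;
}

int main() {
    for (uint64_t a : generate_prefetch_addresses(0x1000, {64, 4}))
        std::printf("prefetch 0x%llx\n", (unsigned long long)a);
}
```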
-
Patent number: 12299766
Abstract: Systems and methods for supporting generic pointers in hardware of a graphics processing unit (GPU) are provided. In various examples, a GPU includes multiple sub-cores each having a processing resource and a load/store pipeline. The processing resource is operable to receive a memory access message including a pointer and a memory type identifier indicative of the pointer representing a generic pointer. The processing resource is further operable to output a load or store operation to the load/store pipeline based on the memory access message, including computing an address for the load or store operation by adding a base address of a named memory type of a plurality of named memory types referenced by the generic pointer to an offset into a memory of the named memory type. The load/store pipeline is operable to, responsive to receipt of the load or store operation, access the memory at the address.
Type: Grant
Filed: September 24, 2021
Date of Patent: May 13, 2025
Assignee: Intel Corporation
Inventors: Joydeep Ray, Prathamesh Raghunath Shinde, Ben J. Ashbaugh, Wei-Yu Chen, Abhishek R. Appu, Vasanth Ranganathan, Dmitry Yurievich Babokin, Ankur N. Shah
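The address computation itself is simple to sketch; the memory-type names, base addresses, and GenericPointer layout below are assumptions made for illustration.

```cpp
// Sketch of the address computation the abstract describes: a generic pointer
// identifies which named memory type it refers to, and the effective address
// is that type's base address plus the pointer's offset.
#include <cstdint>
#include <cstdio>

enum class MemType { Global = 0, Shared = 1, Private = 2 };

struct GenericPointer {
    MemType  type;    // named memory type encoded alongside the pointer
    uint64_t offset;  // offset into that memory
};

// Per-type base addresses, as if held in hardware state (values are made up).
constexpr uint64_t kBase[] = {0x80000000ull, 0x00100000ull, 0x00010000ull};

uint64_t resolve(const GenericPointer& p) {
    return kBase[static_cast<int>(p.type)] + p.offset;   // base + offset
}

int main() {
    GenericPointer p{MemType::Shared, 0x40};
    std::printf("effective address: 0x%llx\n", (unsigned long long)resolve(p));
}
```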
-
Publication number: 20250147762
Abstract: Described herein is a graphics processor having processing resources with configurable thread and register configurations. Program code can configure a number of registers and accumulators that will be used by hardware threads during execution of the program code by the graphics processor. Processing resources within the graphics processor can be configured to assign different numbers of registers and accumulators to hardware threads based on the configuration requested by program code to be executed by the processing resource.
Type: Application
Filed: November 8, 2023
Publication date: May 8, 2025
Applicant: Intel Corporation
Inventors: Vasanth Ranganathan, Gang Chen, Supratim Pal, Jorge Eduardo Parra Osorio, Arthur Hunter, Boris Kuznetsov, Deepak N K, Siva Kumar Seemakurthi, James Valerio, Shubham Dinesh Chavan, Abhishek Kumar Singh, Samir Pandya, Sandeep Tippannanavar Niranjan, Alan Curtis, Jain Philip, Maltesh Kulkarni, Fangwen Fu, John Wiegert, Brent Schwartz
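A toy model of the trade-off: a fixed register-file budget divided among hardware threads, so requesting more registers per thread yields fewer resident threads. The budget and the derivation below are simplified assumptions, not the patented scheme.

```cpp
// Hypothetical illustration of configurable thread/register configurations:
// a fixed register-file budget per processing resource is split among hardware
// threads, so asking for more registers (and accumulators) per thread reduces
// the number of threads that can run concurrently.
#include <cstdio>

constexpr int kRegisterFileSize = 1024;  // assumed total registers per processing resource

struct ThreadConfig {
    int registers_per_thread;
    int accumulators_per_thread;
    int hardware_threads;        // derived: how many threads fit in the budget
};

ThreadConfig configure(int regs_requested, int accs_requested) {
    ThreadConfig cfg{regs_requested, accs_requested, 0};
    cfg.hardware_threads = kRegisterFileSize / regs_requested;  // fewer, larger threads
    return cfg;
}

int main() {
    ThreadConfig small = configure(128, 4);   // many threads, small register footprint
    ThreadConfig large = configure(256, 8);   // fewer threads, large register footprint
    std::printf("128 regs/thread -> %d threads\n", small.hardware_threads);
    std::printf("256 regs/thread -> %d threads\n", large.hardware_threads);
}
```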
-
Patent number: 12293431
Abstract: Embodiments described herein include software, firmware, and hardware logic that provides techniques to perform arithmetic on sparse data via a systolic processing unit. Embodiments described herein provide techniques to detect zero value elements within a vector or a set of packed data elements output by a processing resource and generate metadata to indicate a location of the zero value elements within the plurality of data elements.
Type: Grant
Filed: May 2, 2023
Date of Patent: May 6, 2025
Assignee: Intel Corporation
Inventors: Joydeep Ray, Scott Janus, Varghese George, Subramaniam Maiyuran, Altug Koker, Abhishek Appu, Prasoonkumar Surti, Vasanth Ranganathan, Valentin Andrei, Ashutosh Garg, Yoav Harel, Arthur Hunter, Jr., SungYe Kim, Mike Macpherson, Elmoustapha Ould-Ahmed-Vall, William Sadler, Lakshminarayanan Striramassarma, Vikranth Vemulapalli
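The metadata-generation step can be illustrated with a bitmask over a small vector; the 32-element limit and packing scheme below are illustrative choices, not the hardware format.

```cpp
// Sketch of the metadata generation step: scan a vector of output elements,
// record which positions are zero in a bitmask, and keep only the non-zero
// values. A downstream systolic unit could then skip the zero positions.
#include <cstdint>
#include <cstdio>
#include <vector>

struct SparseVector {
    uint32_t           zero_mask = 0;  // bit i set => element i was zero
    std::vector<float> nonzero;        // packed non-zero values
};

SparseVector compress(const std::vector<float>& dense) {
    SparseVector s;
    for (size_t i = 0; i < dense.size() && i < 32; ++i) {
        if (dense[i] == 0.0f)
            s.zero_mask |= (1u << i);   // metadata: location of a zero element
        else
            s.nonzero.push_back(dense[i]);
    }
    return s;
}

int main() {
    SparseVector s = compress({1.5f, 0.0f, 0.0f, 2.0f});
    std::printf("zero mask: 0x%x, stored values: %zu\n", s.zero_mask, s.nonzero.size());
}
```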
-
Publication number: 20250130848
Abstract: An apparatus to facilitate barrier state save and restore for preemption in a graphics environment is disclosed. The apparatus includes processing resources to execute a plurality of execution threads that are comprised in a thread group (TG) and mid-thread preemption barrier save and restore hardware circuitry to: initiate an exception handling routine in response to a mid-thread preemption event, the exception handling routine to cause a barrier signaling event to be issued; receive indication of a valid designated thread status for a thread of the TG in response to the barrier signaling event; and in response to receiving the indication of the valid designated thread status for the thread of the TG, cause, by the thread of the TG having the valid designated thread status, a barrier save routine and a barrier restore routine to be initiated for named barriers of the TG.
Type: Application
Filed: November 1, 2024
Publication date: April 24, 2025
Applicant: Intel Corporation
Inventors: Vasanth Ranganathan, James Valerio, Joydeep Ray, Abhishek R. Appu, Alan Curtis, Prathamesh Raghunath Shinde, Brandon Fliflet, Ben J. Ashbaugh, John Wiegert
-
Publication number: 20250117356
Abstract: Methods and apparatus relating to techniques for multi-tile memory management. In an example, a graphics processor includes an interposer, a first chiplet coupled with the interposer, the first chiplet including a graphics processing resource and an interconnect network coupled with the graphics processing resource, cache circuitry coupled with the graphics processing resource via the interconnect network, and a second chiplet coupled with the first chiplet via the interposer, the second chiplet including a memory-side cache and a memory controller coupled with the memory-side cache. The memory controller is configured to enable access to a high-bandwidth memory (HBM) device, the memory-side cache is configured to cache data associated with a memory access performed via the memory controller, and the cache circuitry is logically positioned between the graphics processing resource and a chiplet interface.
Type: Application
Filed: October 15, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Aravindh Anantaraman, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Subramaniam Maiyuran, Joydeep Ray, Lakshminarayanan Striramassarma, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Prasoonkumar Surti, David Puffer, James Valerio, Ankur N. Shah
-
Publication number: 20250103430
Abstract: Apparatuses including a graphics processing unit, graphics multiprocessor, or graphics processor having an error detection correction logic for cache memory or shared memory are disclosed. In one embodiment, a graphics multiprocessor includes cache or local memory for storing data and error detection correction circuitry integrated with or coupled to the cache or local memory. The error detection correction circuitry is configured to perform a tag read for data of the cache or local memory to check error detection correction information.
Type: Application
Filed: October 4, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Vasanth Ranganathan, Joydeep Ray, Abhishek R. Appu, Nikos Kaburlasos, Lidong Xu, Subramaniam Maiyuran, Altug Koker, Naveen Matam, James Holland, Brent Insko, Sanjeev Jahagirdar, Scott Janus, Durgaprasad Bilagi, Xinmin Tian
-
Publication number: 20250103343
Abstract: Embodiments described herein provide an apparatus comprising a plurality of processing resources including a first processing resource and a second processing resource, a memory communicatively coupled to the first processing resource and the second processing resource, and a processor to receive data dependencies for one or more tasks comprising one or more producer tasks executing on the first processing resource and one or more consumer tasks executing on the second processing resource and move a data output from one or more producer tasks executing on the first processing resource to a cache memory communicatively coupled to the second processing resource. Other embodiments may be described and claimed.
Type: Application
Filed: November 21, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Christopher J. Hughes, Prasoonkumar Surti, Guei-Yuan Lueh, Adam T. Lake, Jill Boyce, Subramaniam Maiyuran, Lidong Xu, James M. Holland, Vasanth Ranganathan, Nikos Kaburlasos, Altug Koker, Abhishek R. Appu
-
Publication number: 20250103547
Abstract: Embodiments described herein include software, firmware, and hardware logic that provides techniques to perform arithmetic on sparse data via a systolic processing unit. One embodiment provides techniques to optimize training and inference on a systolic array when using sparse data. One embodiment provides techniques to use decompression information when performing sparse compute operations. One embodiment enables the disaggregation of special function compute arrays via a shared reg file. One embodiment enables packed data compress and expand operations on a GPGPU. One embodiment provides techniques to exploit block sparsity within the cache hierarchy of a GPGPU.
Type: Application
Filed: October 4, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Prasoonkumar Surti, Subramaniam Maiyuran, Valentin Andrei, Abhishek Appu, Varghese George, Altug Koker, Mike Macpherson, Elmoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, Lakshminarayanan Striramassarma, SungYe Kim
-
Publication number: 20250103546
Abstract: Embodiments are generally directed to cache structure and utilization. An embodiment of an apparatus includes one or more processors including a graphics processor; a memory for storage of data for processing by the one or more processors; and a cache to cache data from the memory; wherein the apparatus is to provide for dynamic overfetching of cache lines for the cache, including receiving a read request and accessing the cache for the requested data, and upon a miss in the cache, overfetching data from memory or a higher level cache in addition to fetching the requested data, wherein the overfetching of data is based at least in part on a current overfetch boundary, and provides for data to be prefetched extending to the current overfetch boundary.
Type: Application
Filed: October 4, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Altug Koker, Lakshminarayanan Striramassarma, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Sean Coleman, Varghese George, Pattabhiraman K, Mike MacPherson, Subramaniam Maiyuran, ElMoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, Jayakrishna P S, Prasoonkumar Surti
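A minimal sketch of the overfetch decision, assuming 64-byte lines and an aligned boundary address; the real controller's boundary management is not modeled.

```cpp
// Toy model of the overfetch behavior described above: on a cache miss the
// controller fetches not just the missing line but every line up to the
// current overfetch boundary.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint64_t kLineBytes = 64;

// Returns the line addresses to fetch for a missed address, extending the
// fetch to the current overfetch boundary (an aligned address beyond the miss).
std::vector<uint64_t> lines_to_fetch(uint64_t miss_addr, uint64_t overfetch_boundary) {
    std::vector<uint64_t> lines;
    uint64_t line = miss_addr & ~(kLineBytes - 1);       // requested line (demand fetch)
    for (; line < overfetch_boundary; line += kLineBytes)
        lines.push_back(line);                            // demand line + overfetched lines
    return lines;
}

int main() {
    // Miss at 0x1010 with the boundary set four lines out.
    for (uint64_t l : lines_to_fetch(0x1010, 0x1100))
        std::printf("fetch line 0x%llx\n", (unsigned long long)l);
}
```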
-
Publication number: 20250104180
Abstract: Embodiments described herein include software, firmware, and hardware logic that provides techniques to perform arithmetic on sparse data via a systolic processing unit. One embodiment provides for data aware sparsity via compressed bitstreams. One embodiment provides for block sparse dot product instructions. One embodiment provides for a depth-wise adapter for a systolic array.
Type: Application
Filed: December 3, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Abhishek Appu, Subramaniam Maiyuran, Mike Macpherson, Fangwen Fu, Jiasheng Chen, Varghese George, Vasanth Ranganathan, Ashutosh Garg, Joydeep Ray
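One way to picture a block-sparse dot product is to skip operand blocks that per-block metadata marks as all zero; the block size and mask encoding below are illustrative choices, not the instruction format.

```cpp
// Sketch of a block-sparse dot product: per-block metadata marks which blocks
// of one operand are entirely zero, and those blocks are skipped rather than
// multiplied.
#include <cstdio>
#include <vector>

constexpr int kBlock = 4;   // elements per block (illustrative)

float block_sparse_dot(const std::vector<float>& a, const std::vector<float>& b,
                       unsigned nonzero_block_mask) {
    float acc = 0.0f;
    for (size_t blk = 0; blk * kBlock < a.size(); ++blk) {
        if (!(nonzero_block_mask & (1u << blk)))
            continue;                                   // whole block of 'a' is zero: skip it
        for (int i = 0; i < kBlock; ++i)
            acc += a[blk * kBlock + i] * b[blk * kBlock + i];
    }
    return acc;
}

int main() {
    std::vector<float> a = {1.f, 2.f, 3.f, 4.f, 0.f, 0.f, 0.f, 0.f};  // second block all zero
    std::vector<float> b = {1.f, 1.f, 1.f, 1.f, 5.f, 5.f, 5.f, 5.f};
    std::printf("dot = %f\n", block_sparse_dot(a, b, 0b01));          // only block 0 is non-zero
}
```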
-
Publication number: 20250103548
Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache coupled to the processing resources. The cache controller is configured to control cache priority by determining whether default settings or an instruction will control cache operations for the cache.
Type: Application
Filed: November 14, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Altug Koker, Joydeep Ray, Ben Ashbaugh, Jonathan Pearce, Abhishek Appu, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Elmoustapha Ould-Ahmed-Vall, Aravindh Anantaraman, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Yoav Harel, Arthur Hunter, Jr., Brent Insko, Scott Janus, Pattabhiraman K, Mike Macpherson, Subramaniam Maiyuran, Marian Alin Petre, Murali Ramadoss, Shailesh Shah, Kamal Sinha, Prasoonkumar Surti, Vikranth Vemulapalli
-
Publication number: 20250103511
Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache memory that is coupled to the processing resources. The cache controller is configured to set an initial aging policy using an aging field based on age of cache lines within the cache memory and to determine whether a hint or an instruction to indicate a level of aging has been received.
Type: Application
Filed: October 3, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Altug Koker, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Abhishek Appu, Aravindh Anantaraman, Valentin Andrei, Durgaprasad Bilagi, Varghese George, Brent Insko, Sanjeev Jahagirdar, Scott Janus, Pattabhiraman K, SungYe Kim, Subramaniam Maiyuran, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Xinmin Tian
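A small model of an age-field policy with a hint: lines can start "younger" or "older" than the default, biasing when they are evicted. The specific age values and hint names are invented for illustration.

```cpp
// Model of an age-based replacement policy with a hint: each line carries an
// aging field, and a hint (or instruction) can request that a line start
// closer to or farther from eviction than the default policy would place it.
#include <cstdio>

enum class AgeHint { Default, KeepLonger, EvictSooner };

struct CacheLine {
    unsigned tag = 0;
    unsigned age = 0;   // aging field: higher means closer to eviction
};

unsigned initial_age(AgeHint hint) {
    switch (hint) {
        case AgeHint::KeepLonger:  return 0;  // youngest: survives longest
        case AgeHint::EvictSooner: return 3;  // oldest: first eviction candidate
        default:                   return 1;  // default initial aging policy
    }
}

int main() {
    CacheLine streaming{0xA, initial_age(AgeHint::EvictSooner)};  // streaming data: don't pollute
    CacheLine reused{0xB, initial_age(AgeHint::KeepLonger)};      // hot data: keep resident
    std::printf("streaming age=%u, reused age=%u\n", streaming.age, reused.age);
}
```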
-
Publication number: 20250094170
Abstract: One embodiment provides for a graphics processing unit to accelerate machine-learning operations, the graphics processing unit comprising a multiprocessor having a single instruction, multiple thread (SIMT) architecture, the multiprocessor to execute at least one single instruction; and a first compute unit included within the multiprocessor, the at least one single instruction to cause the first compute unit to perform a two-dimensional matrix multiply and accumulate operation, wherein to perform the two-dimensional matrix multiply and accumulate operation includes to compute a 32-bit intermediate product of 16-bit operands and to compute a 32-bit sum based on the 32-bit intermediate product.
Type: Application
Filed: September 30, 2024
Publication date: March 20, 2025
Applicant: Intel Corporation
Inventors: Himanshu Kaul, Mark A. Anders, Sanu K. Mathew, Anbang Yao, Joydeep Ray, Ping T. Tang, Michael S. Strickland, Xiaoming Chen, Tatiana Shpeisman, Abhishek R. Appu, Altug Koker, Kamal Sinha, Balaji Vembu, Nicolas C. Galoppo Von Borries, Eriko Nurvitadhi, Rajkishore Barik, Tsung-Han Lin, Vasanth Ranganathan, Sanjeev Jahagirdar
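The widening arithmetic is easy to show in isolation; the sketch below uses integer types for clarity (the patent covers matrix tiles executed on a SIMT multiprocessor, which is not modeled here).

```cpp
// Illustration of the widening multiply-accumulate pattern: each product of
// two 16-bit operands is formed at 32-bit precision, and the running sum is
// kept at 32 bits.
#include <cstdint>
#include <cstdio>
#include <vector>

int32_t dot_mac_16x16_to_32(const std::vector<int16_t>& a, const std::vector<int16_t>& b) {
    int32_t acc = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        int32_t product = int32_t(a[i]) * int32_t(b[i]);  // 32-bit intermediate product
        acc += product;                                    // 32-bit sum
    }
    return acc;
}

int main() {
    std::vector<int16_t> a = {30000, -20000, 5};
    std::vector<int16_t> b = {2, 3, 1000};
    // 30000*2 and -20000*3 would overflow int16_t, but fit in the 32-bit intermediate.
    std::printf("acc = %d\n", dot_mac_16x16_to_32(a, b));
}
```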
-
Publication number: 20250077232
Abstract: A graphics processing device is provided that includes a set of compute units to execute a workload, a cache coupled with the set of compute units, and circuitry coupled with the cache and the set of compute units. The circuitry is configured to, in response to a cache miss for a read from a first cache, broadcast an event within the graphics processor device to identify data associated with the cache miss, receive the event at a second compute unit in the set of compute units, and prefetch the data identified by the event into a second cache that is local to the second compute unit before an attempt by a second thread to read the instruction or data.
Type: Application
Filed: September 11, 2024
Publication date: March 6, 2025
Applicant: Intel Corporation
Inventors: James Valerio, Vasanth Ranganathan, Joydeep Ray, Pradeep Ramani
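The broadcast-on-miss idea can be sketched as an event naming the missed address that peer compute units consume to warm their own local caches; all types below are invented for illustration.

```cpp
// Minimal sketch: when one compute unit misses, it publishes an event naming
// the missed address, and a peer compute unit that will soon need the same
// data prefetches it into its own local cache ahead of its read.
#include <cstdint>
#include <cstdio>
#include <unordered_set>

struct MissEvent { uint64_t address; };

struct ComputeUnit {
    int id;
    std::unordered_set<uint64_t> local_cache;   // addresses currently cached locally

    void on_event(const MissEvent& e) {
        local_cache.insert(e.address);          // prefetch before our own read
        std::printf("CU%d prefetched 0x%llx\n", id, (unsigned long long)e.address);
    }
};

int main() {
    ComputeUnit cu1{1, {}}, cu2{2, {}};
    MissEvent miss{0x4000};                     // another unit missed on this address
    cu1.on_event(miss);                         // event broadcast within the device
    cu2.on_event(miss);
}
```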
-
Patent number: 12242414
Abstract: Methods and apparatus relating to data initialization techniques. In an example, an apparatus comprises a processor to read one or more metadata codes which map to one or more cache lines in a cache memory and invoke a random number generator to generate random numerical data for the one or more cache lines in response to a determination that the one or more metadata codes indicate that the cache lines are to contain random numerical data. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: March 14, 2020
Date of Patent: March 4, 2025
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Aravindh Anantaraman, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Altug Koker, Mike Macpherson, Subramaniam Maiyuran, Joydeep Ray, Vasanth Ranganathan
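A sketch of the initialization scheme under assumed metadata codes and line size: lines whose code requests random numerical data are filled from a generator rather than read from memory.

```cpp
// Sketch of metadata-driven data initialization: a per-line metadata code says
// what the line should contain, and lines whose code requests random numerical
// data are filled by a random number generator.
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

enum class LineMeta : uint8_t { Normal = 0, Zero = 1, Random = 2 };  // assumed code values

constexpr size_t kLineWords = 16;   // 64-byte line as 16 x 32-bit words (assumed)

std::vector<uint32_t> materialize_line(LineMeta code, std::mt19937& rng) {
    std::vector<uint32_t> line(kLineWords, 0);
    if (code == LineMeta::Random)
        for (uint32_t& w : line) w = rng();   // fill the line with random data
    // Normal lines would be fetched from memory; Zero lines stay zero.
    return line;
}

int main() {
    std::mt19937 rng(12345);
    std::vector<uint32_t> line = materialize_line(LineMeta::Random, rng);
    std::printf("first word: 0x%x\n", line[0]);
}
```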
-
Publication number: 20250068423
Abstract: Described herein is a graphics processor comprising first circuitry configured to execute a decoded instruction and second circuitry configured to decode an instruction into the decoded instruction. The second circuitry is configured to determine a number of registers within a register file that are available to a thread of a processing resource and decode the instruction based on that number of registers.
Type: Application
Filed: August 22, 2023
Publication date: February 27, 2025
Applicant: Intel Corporation
Inventors: Jorge Eduardo Parra Osorio, Jiasheng Chen, Supratim Pal, Vasanth Ranganathan, Guei-Yuan Lueh, James Valerio, Pradeep Golconda, Brent Schwartz, Fangwen Fu, Sabareesh Ganapathy, Peter Caday, Wei-Yu Chen, Po-Yu Chen, Timothy Bauer, Maxim Kazakov, Stanley Gambarin, Samir Pandya
-
Patent number: 12236498
Abstract: Generation and storage of compressed z-planes in graphics processing is described. An example of a processor includes a rasterizer to generate a fragment of pixel data including blocks of pixel data; a depth pipeline to receive the fragment, the pipeline including first and second depth test hardware, the first depth test hardware to perform a coarse depth test including determining minimum and maximum depths for each block; and a depth buffer, wherein the processor is to determine whether the fragment meets requirements that the fragment fully covers a tile of pixel data and passes a first depth test, and that each of the minimum and maximum depths of the fragment has a same sign and exponent, and, upon determining that the fragment meets the requirements, to generate a compressed depth plane utilizing the first depth test and update the depth buffer with the compressed depth plane.
Type: Grant
Filed: May 27, 2021
Date of Patent: February 25, 2025
Assignee: Intel Corporation
Inventors: Saikat Mandal, Karol Szerszen, Vasanth Ranganathan, Altug Koker, Michael Norris, Prasoonkumar Surti, Takahiro Murata