Patents by Inventor Aditya Navale

Aditya Navale has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054595
    Abstract: Embodiments described herein provide a system of concurrent compute queues that enable the scheduling of a large number of compute contexts simultaneously on graphics processor hardware. One embodiment provides an apparatus comprising a system interface and a general-purpose graphics processor coupled with the system interface. The general-purpose graphics processor comprises a plurality of graphics processor hardware resources configured to be partitioned into a plurality of isolated partitions, each of the plurality of isolated partitions including a first command streamer, a second command streamer, and circuitry configured to schedule general-purpose graphics compute workloads submitted to a first plurality of command queues associated with the first command streamer and a second plurality of command queues associated with the second command streamer.
    Type: Application
    Filed: August 10, 2022
    Publication date: February 15, 2024
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Vasanth Ranganathan, James Valerio, Jeffery S. Boles, Hema Chand Nalluri, Aditya Navale, Ben J. Ashbaugh, Michal Mrozek, Murali Ramadoss, Hong Jiang, Ankur Shah
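    Illustrative sketch (not from the filing; class names, queue counts, and the round-robin policy are assumptions): a minimal Python model of a partitioned device in which each isolated partition owns two command streamers, each fed by several compute command queues.
      # Conceptual model: each isolated partition owns two command streamers,
      # and each streamer drains work from several compute command queues.
      from collections import deque

      class CommandStreamer:
          def __init__(self, name, num_queues):
              self.name = name
              self.queues = [deque() for _ in range(num_queues)]

          def submit(self, queue_index, workload):
              self.queues[queue_index].append(workload)

          def next_workload(self):
              # Round-robin across this streamer's command queues.
              for q in self.queues:
                  if q:
                      return q.popleft()
              return None

      class IsolatedPartition:
          def __init__(self, partition_id, queues_per_streamer=4):
              self.partition_id = partition_id
              self.streamers = [CommandStreamer(f"cs{i}", queues_per_streamer)
                                for i in range(2)]

          def schedule(self):
              # Pull at most one workload from each of the partition's streamers.
              scheduled = []
              for cs in self.streamers:
                  work = cs.next_workload()
                  if work is not None:
                      scheduled.append((self.partition_id, cs.name, work))
              return scheduled

      partitions = [IsolatedPartition(p) for p in range(4)]
      partitions[0].streamers[0].submit(0, "compute-kernel-A")
      partitions[0].streamers[1].submit(2, "compute-kernel-B")
      print(partitions[0].schedule())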
  • Patent number: 11900539
    Abstract: An apparatus to facilitate graphics rendering is disclosed. The apparatus comprises sequencer hardware to operate in a tile mode to render objects, including performing batch formation to generate one or more batches of received objects, performing tile sequencing for each of the objects to compute tile fill intersects, and performing play sequencing of each of the objects.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: February 13, 2024
    Assignee: Intel Corporation
    Inventors: Subramaniam Maiyuran, Saurabh Sharma, Jorge F. Garcia Pabon, Raghavendra Kamath Miyar, Sudheendra Srivathsa, Justin Decell, Aditya Navale
  • Publication number: 20240045725
    Abstract: Apparatus and method for concurrent performance monitoring. For example, one embodiment of an apparatus comprises: compute hardware logic to concurrently process a number of workloads, the compute hardware logic to be subdivided into a plurality of compute hardware contexts based on the number of workloads; and programmable performance monitoring circuitry to be dynamically partitioned to perform parallel performance monitoring operations to monitor performance of each of the plurality of compute hardware contexts while the number of workloads are concurrently processed, the programmable performance monitoring circuitry to differentiate between performance monitoring data of different compute hardware contexts based on a unique identifier associated with each of the compute hardware contexts.
    Type: Application
    Filed: August 4, 2022
    Publication date: February 8, 2024
    Inventors: Prashant Chaudhari, Jain Philip, James Valerio, Murali Ramadoss, Ankur Shah, Jeffery S. Boles, Aditya Navale
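    Illustrative sketch (class and counter names are assumptions, not from the filing): a minimal Python model of performance samples being kept separate per compute hardware context via a unique context identifier.
      # Conceptual model: one monitoring unit, dynamically partitioned so that
      # samples from concurrently running contexts are kept apart by a unique
      # context identifier attached to every sample.
      from collections import defaultdict

      class PerfMonitor:
          def __init__(self):
              self.samples = defaultdict(list)   # context_id -> list of samples

          def record(self, context_id, counter, value):
              self.samples[context_id].append((counter, value))

          def report(self, context_id):
              return dict(self.samples[context_id])

      mon = PerfMonitor()
      mon.record(context_id=7, counter="eu_busy_cycles", value=120_000)
      mon.record(context_id=9, counter="eu_busy_cycles", value=88_500)
      print(mon.report(7))   # data for context 7 only, never mixed with context 9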
  • Publication number: 20230306551
    Abstract: Described herein is a graphics processor comprising a processing resource configured to perform processing operations, a codec configured to compress and decompress data associated with the processing operations, and circuitry configured to calculate a metadata address for a compressed surface based on a flat virtual memory address mapping between the address of the compressed surface and the metadata address. The compressed surface is to store data associated with a processing operation to be performed by the processing resource and the metadata address is a virtual address that stores compression metadata for the compressed surface. The circuitry can configure the codec to access the compressed surface based on the compression metadata.
    Type: Application
    Filed: March 23, 2022
    Publication date: September 28, 2023
    Applicant: Intel Corporation
    Inventors: Vidhya Krishnan, Niranjan Cooray, David Puffer, Ronald Silvas, Durgaprasad Bilagi, Aditya Navale
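    Illustrative sketch (the base addresses and the 256:1 ratio are assumptions for concreteness, not values from the filing): a minimal Python calculation of a metadata virtual address from a compressed surface's virtual address using a flat linear mapping.
      # Conceptual model: the metadata virtual address is derived from the
      # compressed surface's virtual address by a fixed linear ("flat") mapping,
      # here one metadata byte per 256 surface bytes.
      SURFACE_BASE  = 0x0000_8000_0000
      METADATA_BASE = 0x0000_C000_0000
      BYTES_PER_METADATA_BYTE = 256

      def metadata_address(surface_va: int) -> int:
          offset = surface_va - SURFACE_BASE
          return METADATA_BASE + offset // BYTES_PER_METADATA_BYTE

      print(hex(metadata_address(0x0000_8000_4000)))  # metadata VA for that surface block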
  • Publication number: 20230298125
    Abstract: Described herein is a partitionable graphics processor having multiple render front ends. The partitions of the graphics processor maintain render functionality when partitioned and enable fault isolation and independent multi-client rendering.
    Type: Application
    Filed: May 27, 2022
    Publication date: September 21, 2023
    Applicant: Intel Corporation
    Inventors: Hema Chand Nalluri, Jeffery S. Boles, David Cowperthwaite, Aditya Navale, Prasoonkumar Surti, Arthur Hunter, Vasanth Ranganathan, Joydeep Ray, David Puffer, Ankur Shah, Vidhya Krishnan, Kritika Bala, Aravindh Anantaraman, Michael Apodaca, Kenneth Daxer
  • Publication number: 20230297440
    Abstract: Described herein is a partitionable graphics processor having a plurality of flexibly partitioned processing resources. One embodiment provides a graphics processor comprising a plurality of processing resources configurable to be flexibly partitioned into a plurality of resource partitions and circuitry to compose multiple graphics processor device partitions from the plurality of resource partitions. The multiple graphics processor device partitions are configurable to be asymmetrically composed of different types of functional units.
    Type: Application
    Filed: May 27, 2022
    Publication date: September 21, 2023
    Applicant: Intel Corporation
    Inventors: David Cowperthwaite, Kenneth Daxer, Jeffery S. Boles, Hema Chand Nalluri, Aditya Navale, Prasoonkumar Surti, Arthur Hunter, Vasanth Ranganathan, Joydeep Ray, David Puffer, Aravindh Anantaraman, Ankur Shah, Vidhya Krishnan, Kritika Bala
  • Publication number: 20230298129
    Abstract: Embodiments described herein provide techniques to facilitate access to local memory of a graphics processor by a guest software domain. The guest software domain can access the local memory via an address translation system that includes a local memory translation table. In one embodiment, accessed and/or dirty bits are enabled in the local memory translation table, which may be used to accelerate the GPU local memory portion of VM migration for a VM that includes a vGPU.
    Type: Application
    Filed: June 24, 2022
    Publication date: September 21, 2023
    Applicant: Intel Corporation
    Inventors: David Puffer, Ankur Shah, Niranjan Cooray, David Cowperthwaite, Aditya Navale
  • Publication number: 20230297421
    Abstract: Described herein is a partitionable graphics processor having multiple hard partitions with separate software execution and fault domains. One embodiment provides a graphics processor comprising a system interface and a plurality of graphics processing resources coupled with the system interface. The plurality of graphics processing resources is configurable to be partitioned into a plurality of isolated device partitions, each isolated device partition configured for fault isolation and independent concurrent execution of workloads associated with a plurality of clients, and the system interface is configured to present each of the plurality of isolated device partitions as a virtual function.
    Type: Application
    Filed: May 27, 2022
    Publication date: September 21, 2023
    Applicant: Intel Corporation
    Inventors: David Cowperthwaite, Kenneth Daxer, Aditya Navale, Prasoonkumar Surti, Arthur Hunter, Hema Chand Nalluri, Jeffery S. Boles, Vasanth Ranganathan, Joydeep Ray, David Puffer, Aravindh Anantaraman, Ankur Shah, Vidhya Krishnan, Kritika Bala, Michael Apodaca
  • Publication number: 20230298128
    Abstract: Embodiments described herein provide techniques to facilitate access to local memory of a graphics processor by a guest software domain. The guest software domain can access the local memory via an address translation system that includes a local memory translation table.
    Type: Application
    Filed: June 24, 2022
    Publication date: September 21, 2023
    Applicant: Intel Corporation
    Inventors: David Puffer, Ankur Shah, Niranjan Cooray, Aditya Navale, David Cowperthwaite
  • Publication number: 20230291567
    Abstract: Described herein is a paging technique that can be implemented in any accelerator with attached memory and support for operating on encrypted data when the CPU is not within the trusted compute base (TCB). Memory storing data that is encrypted using hardware physical address (HPA)-based encryption can be paged out of accelerator device memory by decoupling encryption from the hardware physical address and re-encrypting the data for page-out. Upon page-in, the data is decrypted, the integrity and authenticity of the data are verified, then the data is re-encrypted using HPA-based encryption.
    Type: Application
    Filed: March 11, 2022
    Publication date: September 14, 2023
    Applicant: Intel Corporation
    Inventors: Vidhya Krishnan, Siddhartha Chhabra, Vedvyas Shanbhogue, Xiaoyu Ruan, Aditya Navale, Julien Carreno
  • Patent number: 11704181
    Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate second accumulated error data and to provide the second accumulated error data to a host executing an application to process the second accumulated error data.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: July 18, 2023
    Assignee: Intel Corporation
    Inventors: Balaji Vembu, Bryan White, Ankur Shah, Murali Ramadoss, David Puffer, Altug Koker, Aditya Navale, Mahesh Natu
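    Illustrative sketch (class names and error fields are assumptions, not from the filing): a minimal Python model of the two-level accumulation, in which tile interfaces gather component errors and a master tile interface combines them for host software.
      # Conceptual model: per-tile interfaces fold component errors into an
      # accumulated record, and a master tile interface folds the per-tile
      # records together before handing the result to the host.
      class TileInterface:
          def __init__(self, tile_id):
              self.tile_id = tile_id
              self.errors = []

          def report(self, component, severity):
              # "Classification" reduced to a severity label for the sketch.
              self.errors.append({"tile": self.tile_id,
                                  "component": component,
                                  "severity": severity})

          def accumulated(self):
              return list(self.errors)

      class MasterTileInterface:
          def __init__(self, tiles):
              self.tiles = tiles

          def accumulated(self):
              combined = []
              for tile in self.tiles:
                  combined.extend(tile.accumulated())
              return combined

      tiles = [TileInterface(t) for t in range(2)]
      tiles[0].report("l3_bank0", "correctable")
      tiles[1].report("copy_engine", "fatal")
      print(MasterTileInterface(tiles).accumulated())  # what the host would process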
  • Publication number: 20230206383
    Abstract: A system includes a compression engine that stores the compression format information embedded in the compressed data. The compression format information can be included in a header that includes compression control surface (CCS) information. The system includes a shared memory to store compressed data for multiple hardware pipelines, where blocks of the compressed data have a common memory footprint and the compression header. The compression engine can compress data to store in the shared memory including generation of the header. The compression engine can decompress data read from the shared memory, including identification of the compression format from the header.
    Type: Application
    Filed: December 23, 2021
    Publication date: June 29, 2023
    Inventors: Karol A. Szerszen, Prasoonkumar Surti, Vidhya Krishnan, Aditya Navale, Abhishek R. Appu, Altug Koker, Ronald W. Silvas
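    Illustrative sketch (the header layout and the use of zlib as a stand-in codec are assumptions, not from the filing): a minimal Python example of embedding compression-format information in a header ahead of the compressed payload, so decompression can identify the format from the data itself.
      # Conceptual model: the compressor prepends a small header carrying the
      # compression-format information; the decompressor reads the header to
      # pick the right decoding path.
      import struct, zlib

      HEADER = struct.Struct("<BI")     # 1-byte format id, 4-byte payload length
      FORMAT_ZLIB = 1

      def compress_block(data: bytes) -> bytes:
          payload = zlib.compress(data)
          return HEADER.pack(FORMAT_ZLIB, len(payload)) + payload

      def decompress_block(block: bytes) -> bytes:
          fmt, length = HEADER.unpack_from(block)
          payload = block[HEADER.size:HEADER.size + length]
          if fmt == FORMAT_ZLIB:
              return zlib.decompress(payload)
          raise ValueError(f"unknown compression format id {fmt}")

      assert decompress_block(compress_block(b"tile data" * 32)) == b"tile data" * 32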
  • Publication number: 20230109990
    Abstract: One embodiment provides a graphics processor including an active base die including a fabric interconnect and a chiplet including a switched fabric, wherein the chiplet couples with the active base die via an array of interconnect structures, the array of interconnect structures couples the fabric interconnect with the switched fabric, and the chiplet includes a first modular interconnect configured to couple a block of graphics processing resources to the switched fabric and a second modular interconnect configured to couple a memory subsystem with the switched fabric and the block of graphics processing resources, the second modular interconnect including a set of memory controllers and a set of physical interfaces.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 13, 2023
    Applicant: Intel Corporation
    Inventors: Lakshminarayana Pappu, Altug Koker, Aditya Navale, Prasoonkumar Surti, Ankur Shah, Joydeep Ray, Naveen Matam
  • Publication number: 20230104845
    Abstract: One embodiment provides a graphics processor including a processing resource with a register file, memory, a cache, and load/store/cache circuitry to process load, store, and prefetch messages from the processing resource. The circuitry sorts received memory access messages into address-sorted lists of reads and writes. The circuitry schedules a first set of address-sorted requests from a first request buffer for a first period of time, then schedules a second set of address-sorted requests from a second request buffer for a second period of time.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 6, 2023
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Abhishek R. Appu, Altug Koker, Aditya Navale, Varghese George, Vasanth Ranganathan, Fangwen Fu, Ben J. Ashbaugh, Vidhya Krishnan, Sabareesh Ganapathy, Prathamesh Raghunath Shinde
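    Illustrative sketch (the window length and request format are assumptions, not from the filing): a minimal Python model of splitting memory access messages into address-sorted reads and writes and draining two request buffers in alternating time windows.
      # Conceptual model: incoming load/store messages are split into reads and
      # writes, each kept address-sorted, and two request buffers are serviced
      # in alternating windows.
      def address_sorted(requests):
          reads  = sorted((r for r in requests if r["op"] == "load"),  key=lambda r: r["addr"])
          writes = sorted((r for r in requests if r["op"] == "store"), key=lambda r: r["addr"])
          return reads, writes

      def schedule(buffer_a, buffer_b, window=4):
          order = []
          buffers = [sum(address_sorted(buffer_a), []), sum(address_sorted(buffer_b), [])]
          active = 0
          while any(buffers):
              issued = buffers[active][:window]     # at most one window's worth per turn
              del buffers[active][:window]
              order.extend(issued)
              active ^= 1                           # switch to the other buffer next window
          return order

      reqs = [{"op": "load", "addr": 0x40}, {"op": "store", "addr": 0x10},
              {"op": "load", "addr": 0x20}]
      print(schedule(reqs, [{"op": "load", "addr": 0x80}]))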
  • Publication number: 20230094002
    Abstract: Dynamic routing of texture loads in graphics processing is described. An example of an apparatus includes a graphics processor including a plurality of processing engines of a class of processing engines of the graphics processor; a set of queues for the plurality of processing engines; and a unified submit port for the plurality of processing engines, wherein the unified submit port is to notify a scheduler regarding availability of slots in the set of queues for receipt of workload contexts; and wherein, upon the unified submit port receiving a workload context for processing by the plurality of processing engines, the unified submit port is to detect an available processing engine of the plurality of processing engines and direct the received context to a slot of the set of queues for processing by the available processing engine.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Applicant: Intel Corporation
    Inventors: Hema Chand Nalluri, Jeffery S. Boles, Joseph Koston, Ankur N. Shah, Vidhya Krishnan, Vasanth Ranganathan, Joydeep Ray, Aditya Navale, Murali Ramadoss, James Valerio
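    Illustrative sketch (engine names and queue depths are assumptions, not from the filing): a minimal Python model of a unified submit port that advertises free queue slots and steers an arriving workload context to an available engine.
      # Conceptual model: a single submit port fronts several engines of the
      # same class, reports free queue slots, and routes each arriving context
      # to whichever engine currently has room.
      class Engine:
          def __init__(self, name, depth=2):
              self.name = name
              self.queue = []
              self.depth = depth

          def has_slot(self):
              return len(self.queue) < self.depth

      class UnifiedSubmitPort:
          def __init__(self, engines):
              self.engines = engines

          def free_slots(self):
              # What the port would report to the scheduler.
              return sum(e.depth - len(e.queue) for e in self.engines)

          def submit(self, context):
              for engine in self.engines:
                  if engine.has_slot():
                      engine.queue.append(context)
                      return engine.name
              raise RuntimeError("no queue slot available")

      port = UnifiedSubmitPort([Engine("engine0"), Engine("engine1")])
      print(port.free_slots())          # 4 slots advertised
      print(port.submit("contextA"))    # routed to the first available engine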
  • Publication number: 20230039853
    Abstract: Embodiments described herein provide a graphics, media, and compute device having a tiled architecture composed of a number of tiles of smaller graphics devices. The work distribution infrastructure for such device enables the distribution of workloads across multiple tiles of the device. Work items can be submitted to any one or more of the multiple tiles, with workloads able to span multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can readily acquire new work items for execution with minimal latency.
    Type: Application
    Filed: October 18, 2022
    Publication date: February 9, 2023
    Applicant: Intel Corporation
    Inventors: Balaji Vembu, Brandon Fliflet, James Valerio, Michael Apodaca, Ben Ashbaugh, Hema Nalluri, Ankur Shah, Murali Ramadoss, David Puffer, Altug Koker, Aditya Navale, Abhishek R. Appu, Joydeep Ray, Travis Schluessler
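    Illustrative sketch (a single shared queue stands in for the work distribution infrastructure; names are assumptions, not from the filing): a minimal Python model of work items submitted through any tile and acquired by whichever engine finishes its current item first.
      # Conceptual model: a work item submitted to any tile lands in a shared
      # distribution structure, and an engine that completes its current item
      # immediately pulls the next one, regardless of the submitting tile.
      from queue import Queue

      class TiledDevice:
          def __init__(self, num_tiles):
              self.num_tiles = num_tiles
              self.work = Queue()          # shared across tiles in this sketch

          def submit(self, tile_id, item):
              assert 0 <= tile_id < self.num_tiles
              self.work.put(item)

          def engine_acquire(self):
              # Called by an engine as soon as it completes its current work item.
              return None if self.work.empty() else self.work.get()

      device = TiledDevice(num_tiles=4)
      device.submit(tile_id=2, item="render-pass-0")
      device.submit(tile_id=0, item="compute-dispatch-1")
      print(device.engine_acquire(), device.engine_acquire())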
  • Patent number: 11556480
    Abstract: Systems and methods for providing shared virtual memory addressing support for a host system are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations. A memory management unit (MMU) is coupled to the processing resources. The MMU supports a first virtual address size for managing allocation of non-shared virtual memory and a second virtual address size for managing allocation of shared virtual memory that is shared between the graphics processor and a host.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: January 17, 2023
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Aditya Navale, Ankur Shah, Murali Ramadoss, Ben Ashbaugh, Ronald Silvas
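    Illustrative sketch (the 48-bit and 57-bit sizes are assumptions used only to make the example concrete, not values from the filing): a minimal Python check of which virtual address size, and therefore which translation domain, an address falls into.
      # Conceptual model: the translation path is chosen by the virtual address
      # size of the allocation the address belongs to.
      NON_SHARED_BITS = 48   # device-local (non-shared) virtual addresses
      SHARED_BITS     = 57   # virtual addresses shared with the host process

      def translation_domain(virtual_address: int) -> str:
          if virtual_address < (1 << NON_SHARED_BITS):
              return "non-shared GPU virtual memory"
          if virtual_address < (1 << SHARED_BITS):
              return "shared virtual memory (host-visible)"
          raise ValueError("address outside supported virtual address sizes")

      print(translation_domain(0x0000_7F00_0000_0000))   # below 2**48
      print(translation_domain(0x0100_0000_0000_0000))   # between 2**48 and 2**57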
  • Publication number: 20220398147
    Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate second accumulated error data and to provide the second accumulated error data to a host executing an application to process the second accumulated error data.
    Type: Application
    Filed: June 24, 2022
    Publication date: December 15, 2022
    Inventors: Balaji Vembu, Bryan White, Ankur Shah, Murali Ramadoss, David Puffer, Altug Koker, Aditya Navale, Mahesh Natu
  • Patent number: 11494867
    Abstract: An apparatus to facilitate asynchronous execution at a processing unit. The apparatus includes one or more processors to detect independent task passes that may be executed out of order in a pipeline of the processing unit, schedule a first set of processing tasks to be executed at a first set of processing elements at the processing unit, and schedule a second set of tasks to be executed at a second set of processing elements, wherein execution of the first set of tasks at the first set of processing elements is to be performed simultaneously and in parallel with execution of the second set of tasks at the second set of processing elements.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: November 8, 2022
    Assignee: Intel Corporation
    Inventors: Saurabh Sharma, Michael Apodaca, Aditya Navale, Travis Schluessler, Vamsee Vardhan Chivukula, Abhishek Venkatesh, Subramaniam Maiyuran
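    Illustrative sketch (the read/write-set independence test and the use of threads standing in for hardware processing elements are assumptions, not from the filing): a minimal Python example of detecting that two passes are independent and running them in parallel on separate workers.
      # Conceptual model: once two passes are found to be independent (no
      # conflicting reads/writes), they are dispatched to separate sets of
      # processing elements and executed in parallel.
      from concurrent.futures import ThreadPoolExecutor

      def independent(pass_a, pass_b):
          # Each pass is described by the resources it reads and writes.
          return not (pass_a["writes"] & (pass_b["reads"] | pass_b["writes"]) or
                      pass_b["writes"] & pass_a["reads"])

      def run(pass_desc):
          return f"{pass_desc['name']} done"

      pass_a = {"name": "geometry", "reads": {"vb0"},  "writes": {"rt0"}}
      pass_b = {"name": "compute",  "reads": {"buf1"}, "writes": {"buf2"}}

      if independent(pass_a, pass_b):
          with ThreadPoolExecutor(max_workers=2) as pool:
              results = list(pool.map(run, [pass_a, pass_b]))   # executed in parallel
      else:
          results = [run(pass_a), run(pass_b)]                  # fall back to in-order
      print(results)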
  • Patent number: 11481864
    Abstract: Embodiments described herein provide a graphics, media, and compute device having a tiled architecture composed of a number of tiles of smaller graphics devices. The work distribution infrastructure for such device enables the distribution of workloads across multiple tiles of the device. Work items can be submitted to any one or more of the multiple tiles, with workloads able to span multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can readily acquire new work items for execution with minimal latency.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: October 25, 2022
    Assignee: Intel Corporation
    Inventors: Balaji Vembu, Brandon Fliflet, James Valerio, Michael Apodaca, Ben Ashbaugh, Hema Nalluri, Ankur Shah, Murali Ramadoss, David Puffer, Altug Koker, Aditya Navale, Abhishek R. Appu, Joydeep Ray, Travis Schluessler