Patents by Inventor Parth Damani

Parth Damani is named as an inventor on the patent filings listed below. The listing includes patent applications that are still pending as well as patents already granted by the United States Patent and Trademark Office (USPTO). Short illustrative sketches of the mechanisms described in the abstracts appear after the listing.

  • Publication number: 20240160478
    Abstract: An apparatus to facilitate increasing processing resources in processing cores of a graphics environment is disclosed. The apparatus includes a plurality of processing resources to execute one or more execution threads; a plurality of message arbiter-processing resource (MA-PR) routers, wherein a respective MA-PR router of the plurality of MA-PR routers corresponds to a pair of processing resources of the plurality of processing resources and is to arbitrate routing of a thread control message from a message arbiter between the pair of processing resources; a plurality of local shared cache (LSC) sequencers to provide an interface between at least one LSC of the processing core and the plurality of processing resources; and a plurality of instruction caches (ICs) to store instructions of the one or more execution threads, wherein a respective IC of the plurality of ICs interfaces with a portion of the plurality of processing resources.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Applicant: Intel Corporation
    Inventors: Jiasheng Chen, Chunhui Mei, Ben J. Ashbaugh, Naveen Matam, Joydeep Ray, Timothy Bauer, Guei-Yuan Lueh, Vasanth Ranganathan, Prashant Chaudhari, Vikranth Vemulapalli, Nishanth Reddy Pendluru, Piotr Reiter, Jain Philip, Marek Rudniewski, Christopher Spencer, Parth Damani, Prathamesh Raghunath Shinde, John Wiegert, Fataneh Ghodrat
  • Patent number: 10691603
    Abstract: An apparatus to facilitate cache partitioning is disclosed. The apparatus includes a set associative cache to receive access requests from a plurality of agents and partitioning logic to partition the set associative cache by assigning sub-components of a set address to each of the plurality of agents.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan Cooray, Parth Damani, Pritav Shah
  • Patent number: 10552937
    Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
  • Publication number: 20200004683
    Abstract: An apparatus to facilitate cache partitioning is disclosed. The apparatus includes a set associative cache to receive access requests from a plurality of agents and partitioning logic to partition the set associative cache by assigning sub-components of a set address to each of the plurality of agents.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Applicant: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan Cooray, Parth Damani, Pritav Shah
  • Publication number: 20190213707
    Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
    Type: Application
    Filed: January 10, 2018
    Publication date: July 11, 2019
    Applicant: Intel Corporation
    Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
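
The first entry above (publication 20240160478) describes message arbiter-processing resource (MA-PR) routers, each of which arbitrates the routing of thread control messages between a pair of processing resources. The following is a minimal Python sketch of that routing idea only; the class names, the even/odd routing policy, and the message fields are assumptions made for illustration, not details from the filing.

```python
# Minimal sketch of the message-routing idea in publication 20240160478:
# a message arbiter hands a thread control message to an MA-PR router shared
# by a pair of processing resources, and the router decides which of the two
# resources receives it. All names and policies here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ThreadControlMessage:
    thread_id: int
    payload: str


class ProcessingResource:
    def __init__(self, name: str):
        self.name = name
        self.inbox = []

    def deliver(self, msg: ThreadControlMessage):
        self.inbox.append(msg)


class MaPrRouter:
    """Routes arbiter messages between the pair of processing resources it serves."""

    def __init__(self, first_pr: ProcessingResource, second_pr: ProcessingResource):
        self.pair = (first_pr, second_pr)

    def route(self, msg: ThreadControlMessage):
        # Assumed policy: even thread ids go to the first resource of the pair,
        # odd thread ids to the second. The real arbitration policy is not public.
        self.pair[msg.thread_id % 2].deliver(msg)


if __name__ == "__main__":
    pr0, pr1 = ProcessingResource("PR0"), ProcessingResource("PR1")
    router = MaPrRouter(pr0, pr1)
    for tid in range(4):
        router.route(ThreadControlMessage(tid, f"barrier-release {tid}"))
    print([m.thread_id for m in pr0.inbox], [m.thread_id for m in pr1.inbox])  # [0, 2] [1, 3]
```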
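
Patent 10691603 (and its pre-grant publication 20200004683) partitions a set associative cache by assigning sub-components of the set address to each requesting agent. The sketch below models one plausible reading of that scheme: the upper bits of the set index are fixed per agent while the lower bits come from the address, so agents never compete for the same sets. The agent names, cache geometry, and bit split are illustrative assumptions.

```python
# Minimal model of set-address partitioning in a set associative cache, loosely
# following patent 10691603 / publication 20200004683: each agent is assigned a
# fixed sub-component (here, the upper bits) of the set address. Parameters and
# agent names are assumptions for illustration, not Intel's actual design.

CACHE_LINE_BYTES = 64
TOTAL_SETS = 1024          # must be a power of two

# Hypothetical partition table: agent -> (upper set-address bits, bit width).
AGENT_PARTITIONS = {
    "render":  (0b00, 2),   # sets   0..255
    "blitter": (0b01, 2),   # sets 256..511
    "media":   (0b10, 2),   # sets 512..767
    "display": (0b11, 2),   # sets 768..1023
}


def set_index(address: int, agent: str) -> int:
    """Map a physical address to a cache set restricted to the agent's partition."""
    upper_bits, width = AGENT_PARTITIONS[agent]
    lower_width = TOTAL_SETS.bit_length() - 1 - width   # bits taken from the address
    line = address // CACHE_LINE_BYTES
    lower = line & ((1 << lower_width) - 1)             # address-derived sub-component
    return (upper_bits << lower_width) | lower          # agent-assigned sub-component


if __name__ == "__main__":
    addr = 0x1234_5678
    for agent in AGENT_PARTITIONS:
        print(f"{agent:8s} -> set {set_index(addr, agent)}")
```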
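
Patent 10552937 (and publication 20190213707) describes a memory management unit behind a common memory interface in which each autonomous engine gets a dedicated translation-lookaside buffer (TLB) and its own miss tracking. The sketch below models that arrangement at a high level; the class names, LRU replacement policy, and miss counter are assumptions chosen for illustration rather than details from the patent.

```python
# High-level sketch of a memory management unit with one TLB per autonomous
# engine plus per-engine miss tracking, in the spirit of patent 10552937 /
# publication 20190213707. Names and sizes are illustrative assumptions.

from collections import OrderedDict

PAGE_SIZE = 4096


class EngineTLB:
    """Small fully associative TLB with LRU replacement, dedicated to one engine."""

    def __init__(self, entries: int = 32):
        self.entries = entries
        self.map = OrderedDict()          # virtual page -> physical page
        self.misses = 0                   # per-engine miss tracking

    def lookup(self, vpage: int):
        if vpage in self.map:
            self.map.move_to_end(vpage)   # refresh LRU position
            return self.map[vpage]
        self.misses += 1
        return None

    def fill(self, vpage: int, ppage: int):
        if len(self.map) >= self.entries:
            self.map.popitem(last=False)  # evict the least recently used entry
        self.map[vpage] = ppage


class CommonMemoryInterface:
    """Shared memory interface: one page table, one dedicated TLB per engine."""

    def __init__(self, engines, page_table):
        self.page_table = page_table                      # vpage -> ppage
        self.tlbs = {name: EngineTLB() for name in engines}

    def translate(self, engine: str, vaddr: int) -> int:
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        ppage = self.tlbs[engine].lookup(vpage)
        if ppage is None:                                 # TLB miss: walk the page table
            ppage = self.page_table[vpage]
            self.tlbs[engine].fill(vpage, ppage)
        return ppage * PAGE_SIZE + offset


if __name__ == "__main__":
    table = {v: v + 100 for v in range(16)}               # toy page table
    mmu = CommonMemoryInterface(["render", "media", "copy"], table)
    mmu.translate("render", 0x2345)
    mmu.translate("render", 0x2345)                       # hit on the render engine's TLB
    mmu.translate("media", 0x2345)                        # separate miss on the media TLB
    print({e: t.misses for e, t in mmu.tlbs.items()})     # {'render': 1, 'media': 1, 'copy': 0}
```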