Patents by Inventor Nicolas Kacevas

Nicolas Kacevas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11023998
    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
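The credit scheme this abstract describes can be sketched as a toy model: each engine holds credits equal to the common-buffer slots reserved for it, a request consumes a credit, and servicing the request returns it. Engine names, slot counts, and the API below are illustrative assumptions, not details from the patent.

```python
class CreditArbiter:
    """Toy model of credit-based sharing of a common buffer between engines.

    Each engine's credit count represents slots in the common buffer
    reserved for its requests (illustrative sketch only)."""

    def __init__(self, credits_per_engine):
        # credits_per_engine maps engine name -> reserved common-buffer slots
        self.credits = dict(credits_per_engine)
        self.common_buffer = []

    def request(self, engine, payload):
        """Accept a request only if the engine still holds a credit."""
        if self.credits[engine] == 0:
            return False  # engine must wait; all its slots are in use
        self.credits[engine] -= 1
        self.common_buffer.append((engine, payload))
        return True

    def complete(self, engine):
        """Service one buffered request from `engine`, returning its credit."""
        for i, (owner, _) in enumerate(self.common_buffer):
            if owner == engine:
                self.common_buffer.pop(i)
                self.credits[engine] += 1
                return True
        return False
```

Because each engine can only occupy its own credited slots, one engine cannot starve the other out of the shared buffer.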
  • Patent number: 10552937
    Abstract: Embodiments are generally directed to a scalable memory interface for a graphics processing unit. An embodiment of an apparatus includes a graphics processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the multiple autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
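The structure in this abstract can be sketched as a dictionary of per-engine modules, each pairing a private TLB with its own miss tracking, behind one shared page table. This is a structural sketch only; engine names and the page-table representation are assumptions, and the real MMU is far more involved.

```python
class EngineTLB:
    """Per-engine module: a private TLB plus a miss-tracking counter."""
    def __init__(self):
        self.tlb = {}     # virtual page number -> physical page number
        self.misses = 0   # the module's TLB miss tracking

class CommonMemoryInterface:
    """Sketch of the common memory interface: each autonomous engine gets
    its own EngineTLB, so one engine's misses never evict another
    engine's translations (illustrative model only)."""

    def __init__(self, engines, page_table):
        self.page_table = page_table  # stands in for the in-memory page tables
        self.modules = {name: EngineTLB() for name in engines}

    def translate(self, engine, vpn):
        mod = self.modules[engine]
        if vpn not in mod.tlb:        # miss: walk the shared page table
            mod.misses += 1
            mod.tlb[vpn] = self.page_table[vpn]
        return mod.tlb[vpn]
```

The design point the abstract emphasizes is isolation: dedicating a TLB per engine keeps the interface scalable as engines are added, since each module tracks its own misses independently.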
  • Publication number: 20200004683
    Abstract: An apparatus to facilitate cache partitioning is disclosed. The apparatus includes a set associative cache to receive access requests from a plurality of agents and partitioning logic to partition the set associative cache by assigning sub-components of a set address to each of the plurality of agents.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Applicant: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan Cooray, Parth Damani, Pritav Shah
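The partitioning idea in this abstract can be sketched as restricting each agent's set index to its own slice of the set-address space. The slice sizes, agent names, and geometry below are illustrative assumptions, not values from the patent.

```python
def partitioned_set_index(address, agent_partitions, agent,
                          num_sets=64, line_bytes=64):
    """Map an address to a cache set restricted to `agent`'s partition.

    `agent_partitions` assigns each agent a (base, size) slice of the set
    index space, i.e. a sub-component of the set address (sketch only)."""
    base, size = agent_partitions[agent]
    raw_index = (address // line_bytes) % num_sets  # conventional set index
    return base + (raw_index % size)                # fold into agent's slice
```

Folding the raw index into a per-agent slice means two agents can never map to the same set, so neither can evict the other's lines.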
  • Patent number: 10372621
    Abstract: An apparatus to facilitate page translation is disclosed. The apparatus includes a set associative translation lookaside buffer (TLB) including a plurality of entries to store virtual-to-physical memory address translations, and a page size table (PST) including a plurality of entries to store the page size corresponding to each of the TLB entries.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Niranjan L. Cooray, Altug Koker, Nicolas Kacevas, Parth S. Damani, David Standring
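The TLB-plus-PST pairing can be sketched as two parallel structures keyed by the same tag: the TLB holds the translation, and the PST records that entry's page size so lookups with mixed page sizes work in one structure. Entry format and the page sizes used below are assumptions for illustration.

```python
class TLBWithPST:
    """Sketch of a TLB whose per-entry page size lives in a parallel
    page size table (PST), so entries for different page sizes coexist.
    (Illustrative model; the real structures are set associative.)"""

    def __init__(self):
        self.entries = {}  # tag (virtual page base) -> physical page base
        self.pst = {}      # tag -> page size in bytes, parallel to entries

    def insert(self, vbase, pbase, page_size):
        self.entries[vbase] = pbase
        self.pst[vbase] = page_size

    def translate(self, vaddr):
        # Probe each entry using the page size recorded in the PST.
        for vbase, pbase in self.entries.items():
            size = self.pst[vbase]
            if vbase <= vaddr < vbase + size:
                return pbase + (vaddr - vbase)
        return None  # TLB miss
```

The PST is what lets a 4 KB entry and a 2 MB entry share the same table: the offset width is looked up per entry rather than fixed for the whole TLB.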
  • Publication number: 20190228499
    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
    Type: Application
    Filed: April 2, 2019
    Publication date: July 25, 2019
    Applicant: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
  • Publication number: 20190213707
    Abstract: Embodiments are generally directed to a scalable memory interface for a graphics processing unit. An embodiment of an apparatus includes a graphics processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the multiple autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
    Type: Application
    Filed: January 10, 2018
    Publication date: July 11, 2019
    Applicant: Intel Corporation
    Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
  • Publication number: 20190213140
    Abstract: An apparatus to facilitate page translation is disclosed. The apparatus includes a set associative translation lookaside buffer (TLB) including a plurality of entries to store virtual-to-physical memory address translations, and a page size table (PST) including a plurality of entries to store the page size corresponding to each of the TLB entries.
    Type: Application
    Filed: January 5, 2018
    Publication date: July 11, 2019
    Applicant: Intel Corporation
    Inventors: Niranjan L. Cooray, Altug Koker, Nicolas Kacevas, Parth S. Damani, David Standring
  • Publication number: 20190163641
    Abstract: An apparatus to facilitate page translation prefetching is disclosed. The apparatus includes a translation lookaside buffer (TLB), including a first table to store page table entries (PTEs) and a second table to store tags corresponding to each of the PTEs; and prefetch logic to detect a miss of a first requested address in the TLB during a page translation, retrieve a plurality of physical addresses from memory in response to the TLB miss and store the plurality of physical addresses as a plurality of PTEs in a first TLB entry.
    Type: Application
    Filed: November 27, 2017
    Publication date: May 30, 2019
    Applicant: Intel Corporation
    Inventors: Niranjan Cooray, Nicolas Kacevas, David Standring
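The prefetch behavior this abstract describes can be sketched as: on a TLB miss, fetch a small run of adjacent page table entries and store them together in one TLB entry, so neighboring pages hit without another memory access. The prefetch width, page size, and page-table representation below are assumptions for illustration.

```python
PAGE = 4096
PREFETCH = 4  # PTEs fetched per miss -- an assumed prefetch width

class PrefetchTLB:
    """On a miss, fetch PREFETCH adjacent PTEs from the page table and
    store them as one TLB entry (sketch of translation prefetching)."""

    def __init__(self, page_table):
        self.page_table = page_table  # VPN -> PFN, stands in for memory
        self.entries = {}             # aligned base VPN -> list of PFNs

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE)
        base = vpn - (vpn % PREFETCH)  # align to the prefetch group
        if base not in self.entries:   # TLB miss: prefetch the whole group
            self.entries[base] = [self.page_table.get(base + i)
                                  for i in range(PREFETCH)]
        pfn = self.entries[base][vpn - base]
        return None if pfn is None else pfn * PAGE + offset
```

A single miss on any page in the group populates translations for all of its neighbors, which pays off for the streaming access patterns GPU engines tend to generate.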
  • Patent number: 10249017
    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
    Type: Grant
    Filed: August 11, 2016
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
  • Publication number: 20180047131
    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
    Type: Application
    Filed: August 11, 2016
    Publication date: February 15, 2018
    Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
  • Publication number: 20130275683
    Abstract: Agents may be assigned to discrete portions of a cache. In some cases, more than one agent may be assigned to the same cache portion. The size of the portion, the assignment of agents to the portion and the number of agents may be programmed dynamically in some embodiments.
    Type: Application
    Filed: August 29, 2011
    Publication date: October 17, 2013
    Applicant: Intel Corporation
    Inventor: Nicolas Kacevas
  • Patent number: 7174444
    Abstract: A system and method of early branch prediction in a processor to evaluate, typically before a full branch prediction is made, ways in a branch target buffer to determine if any of said ways corresponds to a valid unconditional branch, and upon such determination, to generate a signal to prevent a read of a next sequential chunk.
    Type: Grant
    Filed: March 31, 2003
    Date of Patent: February 6, 2007
    Assignee: Intel Corporation
    Inventors: Eran Altshuler, Oded Lempel, Robert Valentine, Nicolas Kacevas
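The early check this abstract describes can be sketched as a scan over the branch target buffer ways for the current fetch: if any valid way records an unconditional branch at this address, the redirect is certain and the next sequential chunk need not be read. The way format below is an illustrative assumption.

```python
def suppress_next_fetch(btb_ways, fetch_tag):
    """Early branch check: return True if any valid BTB way matching the
    fetch tag holds an unconditional branch, signaling that the read of
    the next sequential chunk can be prevented (sketch; way layout is
    an assumption, not the patent's format)."""
    for way in btb_ways:
        if way["valid"] and way["tag"] == fetch_tag and way["unconditional"]:
            return True  # taken for certain; skip the sequential read
    return False
```

Because an unconditional branch is always taken, this decision needs no direction predictor, which is what lets it run before the full branch prediction completes.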
  • Patent number: 7058795
    Abstract: Briefly, a method and apparatus of branch prediction are provided. The branch prediction may be done by performing an XOR operation between the MSBs of the set bits of a path register and the LSBs of the set bits of an instruction pointer address register to provide a global index, and by performing an XOR operation between the LSB tag bits of the path register and the MSB tag bits of the instruction pointer address register to provide a tag index. There may be multiplexing between a global prediction and a local prediction.
    Type: Grant
    Filed: June 25, 2002
    Date of Patent: June 6, 2006
    Assignee: Intel Corporation
    Inventors: Nicolas Kacevas, Eran Altshuler
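The index and tag formation this abstract describes can be sketched as two XOR mixes of the path register with the instruction pointer. The bit widths below (10 set bits, 6 tag bits) are illustrative assumptions, not values from the patent.

```python
SET_BITS, TAG_BITS = 10, 6  # assumed field widths, not from the patent

def global_index_and_tag(path_reg, ip):
    """Form the global predictor index and tag by XOR-mixing the path
    register with the instruction pointer address, as the abstract
    describes (sketch; field widths are assumptions)."""
    set_mask = (1 << SET_BITS) - 1
    tag_mask = (1 << TAG_BITS) - 1
    # Global index: MSB set bits of the path register XOR LSB set bits of the IP.
    index = ((path_reg >> TAG_BITS) & set_mask) ^ (ip & set_mask)
    # Tag index: LSB tag bits of the path register XOR MSB tag bits of the IP.
    tag = (path_reg & tag_mask) ^ ((ip >> SET_BITS) & tag_mask)
    return index, tag
```

Mixing path history into the index separates table entries for the same branch reached along different paths, which is the usual motivation for this gshare-style hashing.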
  • Publication number: 20050262332
    Abstract: A system and method for predicting a branch target for a current instruction in a microprocessor, the system comprising a cache storing indirect branch instructions and a path register. The path register is updated on certain branches by an XOR operation on the path register and the branch instruction, followed by the addition of one or more bits to the register. The cache is indexed by performing an operation on a portion of the current instruction address and the path register; the entry returned, if any, may be used to predict the target of the current instruction.
    Type: Application
    Filed: March 31, 2003
    Publication date: November 24, 2005
    Inventors: Lihu Rappoport, Ronny Ronen, Nicolas Kacevas, Oded Lempel
  • Publication number: 20040193855
    Abstract: A processor including a branch prediction unit, wherein various techniques can be used to decrease branch prediction unit accesses, possibly saving power. Whether or not a branch prediction target needs updating may be stored, and thus it may be known whether or not the branch prediction unit needs to be accessed after the initial access. Which way corresponds to the prediction may be stored, decreasing the number of subsequent accesses. Use information (e.g., least recently used information) may be updated at the time of the first access of the branch prediction unit, possibly eliminating the need for a later use information update. A branch prediction unit update or allocation, or an attempt at either, may be performed prior to the execute stage.
    Type: Application
    Filed: March 31, 2003
    Publication date: September 30, 2004
    Inventors: Nicolas Kacevas, Eran Altshuler
  • Publication number: 20040193843
    Abstract: A system and method of early branch prediction in a processor to evaluate, typically before a full branch prediction is made, ways in a branch target buffer to determine if any of said ways corresponds to a valid unconditional branch, and upon such determination, to generate a signal to prevent a read of a next sequential chunk.
    Type: Application
    Filed: March 31, 2003
    Publication date: September 30, 2004
    Inventors: Eran Altshuler, Oded Lempel, Robert Valentine, Nicolas Kacevas
  • Publication number: 20030236969
    Abstract: Briefly, a method and apparatus of branch prediction are provided. The branch prediction may be done by performing an XOR operation between the MSBs of the set bits of a path register and the LSBs of the set bits of an instruction pointer address register to provide a global index, and by performing an XOR operation between the LSB tag bits of the path register and the MSB tag bits of the instruction pointer address register to provide a tag index. There may be multiplexing between a global prediction and a local prediction.
    Type: Application
    Filed: June 25, 2002
    Publication date: December 25, 2003
    Inventors: Nicolas Kacevas, Eran Altshuler
  • Patent number: 6601161
    Abstract: A system and method for predicting a branch target for a current instruction in a microprocessor, the system comprising a cache storing indirect branch instructions and a path register. The path register is updated on certain branches by an XOR operation on the path register and the branch instruction, followed by the addition of one or more bits to the register. The cache is indexed by performing an operation on a portion of the current instruction address and the path register; the entry returned, if any, may be used to predict the target of the current instruction.
    Type: Grant
    Filed: December 30, 1998
    Date of Patent: July 29, 2003
    Assignee: Intel Corporation
    Inventors: Lihu Rappoport, Ronny Ronen, Nicolas Kacevas, Oded Lempel
  • Patent number: 6397297
    Abstract: A computer system having cache modules interconnected in series includes a first and a second cache module directly coupled to an address generating line for parallel lookup of data, and data conversion logic coupled between the first cache module and the second cache module.
    Type: Grant
    Filed: December 30, 1999
    Date of Patent: May 28, 2002
    Assignee: Intel Corporation
    Inventors: Zeev Sperber, Jack Doweck, Nicolas Kacevas, Roy Nesher
  • Patent number: 5964868
    Abstract: A return stack buffer mechanism that uses two separate return stack buffers is disclosed. The first return stack buffer is the Speculative Return Stack Buffer. The Speculative Return Stack Buffer is updated using speculatively fetched instructions. Thus, the Speculative Return Stack Buffer may become corrupted when incorrect instructions are fetched. The second return stack buffer is the Actual Return Stack Buffer. The Actual Return Stack Buffer is updated using information from fully executed branch instructions. When a branch misprediction causes a pipeline flush, the contents of the Actual Return Stack Buffer are copied into the Speculative Return Stack Buffer to correct any corrupted information.
    Type: Grant
    Filed: May 15, 1996
    Date of Patent: October 12, 1999
    Assignee: Intel Corporation
    Inventors: Simcha Gochman, Nicolas Kacevas, Farah Jubran
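The two-buffer scheme this abstract describes can be sketched as a pair of stacks: a speculative one updated at fetch, an actual one updated at retirement, and a copy from actual to speculative on a misprediction flush. The method names and addresses below are illustrative assumptions.

```python
class DualReturnStackBuffer:
    """Sketch of the dual return stack buffer: the speculative RSB can be
    corrupted by wrong-path fetches, and is repaired from the actual RSB
    (updated only at retirement) on a pipeline flush."""

    def __init__(self):
        self.speculative = []  # pushed/popped by speculatively fetched calls/returns
        self.actual = []       # pushed/popped only by fully executed instructions

    def fetch_call(self, return_addr):
        self.speculative.append(return_addr)

    def fetch_return(self):
        # Predicted target of a return at fetch time.
        return self.speculative.pop() if self.speculative else None

    def retire_call(self, return_addr):
        self.actual.append(return_addr)

    def retire_return(self):
        if self.actual:
            self.actual.pop()

    def flush(self):
        # Branch misprediction: restore the speculative RSB from the
        # known-good actual RSB, discarding wrong-path entries.
        self.speculative = list(self.actual)
```

The key invariant is that the actual RSB only ever reflects retired instructions, so it is always a correct state to copy from after a flush.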