Patents by Inventor Nicolas Kacevas
Nicolas Kacevas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11023998
  Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
  Type: Grant
  Filed: April 2, 2019
  Date of Patent: June 1, 2021
  Assignee: Intel Corporation
  Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
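The credit scheme this abstract describes can be sketched in software. The following is a minimal toy model, not the patented hardware; the class, engine names, and credit counts are illustrative assumptions:

```python
from collections import deque

class CreditedBuffer:
    """Toy model of a common buffer shared by two engines via credits.

    Each engine holds a fixed number of credits; one credit corresponds
    to one slot in the common buffer, so neither engine can starve the
    other by filling the shared resource.
    """

    def __init__(self, credits_per_engine):
        # credits_per_engine maps an engine name to its slot allowance
        self.credits = dict(credits_per_engine)
        self.common = deque()  # the shared buffer of in-flight requests

    def submit(self, engine, request):
        """Accept a request only while the engine still holds a credit."""
        if self.credits[engine] == 0:
            return False  # engine must wait until one of its slots frees
        self.credits[engine] -= 1
        self.common.append((engine, request))
        return True

    def retire(self):
        """Service the oldest request and return its credit to its engine."""
        engine, request = self.common.popleft()
        self.credits[engine] += 1
        return engine, request

buf = CreditedBuffer({"render": 2, "blit": 2})
assert buf.submit("render", "r0")
assert buf.submit("render", "r1")
assert not buf.submit("render", "r2")  # render is out of credits
assert buf.submit("blit", "b0")        # but blit is unaffected
buf.retire()                           # frees one render slot
assert buf.submit("render", "r2")
```

The point of the per-engine credit registers is visible in the asserts: one engine exhausting its allowance rejects only that engine's requests.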
- Patent number: 10552937
  Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
  Type: Grant
  Filed: January 10, 2018
  Date of Patent: February 4, 2020
  Assignee: Intel Corporation
  Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
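The structure here (one TLB and one miss tracker per autonomous engine, behind a common interface) can be sketched as follows. This is an illustrative software model under assumed names, not the patented design:

```python
class EngineModule:
    """Per-engine TLB plus a per-engine miss tracker (names illustrative)."""

    def __init__(self, page_table):
        self.tlb = {}                  # this engine's private vpn -> ppn cache
        self.page_table = page_table   # shared backing page table
        self.pending_misses = []       # misses tracked for this engine only

    def translate(self, vpn):
        if vpn in self.tlb:
            return self.tlb[vpn]
        self.pending_misses.append(vpn)  # recorded per engine, not globally
        ppn = self.page_table[vpn]       # walk the shared page table
        self.tlb[vpn] = ppn
        return ppn

# The common memory interface holds one module per autonomous engine, so
# one engine's TLB misses never evict or stall another's translations.
shared_pt = {v: v ^ 0xF0 for v in range(32)}
mmu = {name: EngineModule(shared_pt) for name in ("render", "media", "display")}
mmu["render"].translate(3)
assert mmu["media"].pending_misses == []  # isolated miss tracking
```

A single shared TLB would instead mix all engines' working sets; the per-engine split is what makes the interface scale with the number of engines.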
- Publication number: 20200004683
  Abstract: An apparatus to facilitate cache partitioning is disclosed. The apparatus includes a set associative cache to receive access requests from a plurality of agents and partitioning logic to partition the set associative cache by assigning sub-components of a set address to each of the plurality of agents.
  Type: Application
  Filed: June 29, 2018
  Publication date: January 2, 2020
  Applicant: Intel Corporation
  Inventors: Nicolas Kacevas, Niranjan Cooray, Parth Damani, Pritav Shah
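Partitioning a set associative cache by set address means each agent's requests can only map to its own slice of the set index space. A minimal sketch, with agent names, partition sizes, and the index function all assumed for illustration:

```python
class PartitionedCache:
    """Toy model of set-address partitioning among agents."""

    def __init__(self, num_sets, partitions):
        # partitions: agent -> (first_set, number_of_sets_for_agent)
        self.num_sets = num_sets
        self.partitions = dict(partitions)

    def set_index(self, agent, address, line_bytes=64):
        base, span = self.partitions[agent]
        # Only the agent's own slice of sets is reachable from its
        # addresses, so agents cannot evict each other's lines.
        return base + (address // line_bytes) % span

cache = PartitionedCache(64, {"display": (0, 16), "texture": (16, 48)})
assert 0 <= cache.set_index("display", 0x12340) < 16
assert 16 <= cache.set_index("texture", 0x12340) < 64
```

Note that the same address lands in different sets for different agents; the partition, not the address alone, decides placement.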
- Patent number: 10372621
  Abstract: An apparatus to facilitate page translation is disclosed. The apparatus includes a set associative translation lookaside buffer (TLB) including a plurality of entries to store virtual to physical memory address translations and a page size table (PST) including a plurality of entries to store a page size corresponding to each of the TLB entries.
  Type: Grant
  Filed: January 5, 2018
  Date of Patent: August 6, 2019
  Assignee: Intel Corporation
  Inventors: Niranjan L. Cooray, Altug Koker, Nicolas Kacevas, Parth S. Damani, David Standring
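Keeping the page size in a side table lets one TLB hold translations of mixed page sizes, since the size determines how many address bits form the tag versus the offset. A simplified fully-searched sketch (the real structure is set associative; names and sizes are illustrative):

```python
PAGE_SIZES = {"4K": 12, "2M": 21}  # page-size name -> offset-bit count

class TinyTLB:
    """Sketch of a TLB whose per-entry page size lives in a side table (PST)."""

    def __init__(self):
        self.entries = []  # list of (virtual_page, physical_page)
        self.pst = []      # page-size shift for the matching entry

    def insert(self, vaddr, paddr, size):
        shift = PAGE_SIZES[size]
        self.entries.append((vaddr >> shift, paddr >> shift))
        self.pst.append(shift)

    def translate(self, vaddr):
        # The PST entry tells us how many low bits are page offset,
        # so each entry is compared at its own page granularity.
        for (vpn, ppn), shift in zip(self.entries, self.pst):
            if vaddr >> shift == vpn:
                return (ppn << shift) | (vaddr & ((1 << shift) - 1))
        return None  # TLB miss

tlb = TinyTLB()
tlb.insert(0x0000_1000, 0x8000_0000, "4K")
tlb.insert(0x4020_0000, 0x0020_0000, "2M")
assert tlb.translate(0x0000_1ABC) == 0x8000_0ABC
```

Without the PST, every entry would have to be tagged at one fixed granularity, forcing large pages to be fragmented into many small-page entries.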
- Publication number: 20190228499
  Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
  Type: Application
  Filed: April 2, 2019
  Publication date: July 25, 2019
  Applicant: Intel Corporation
  Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
- Publication number: 20190213707
  Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
  Type: Application
  Filed: January 10, 2018
  Publication date: July 11, 2019
  Applicant: Intel Corporation
  Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
- Publication number: 20190213140
  Abstract: An apparatus to facilitate page translation is disclosed. The apparatus includes a set associative translation lookaside buffer (TLB) including a plurality of entries to store virtual to physical memory address translations and a page size table (PST) including a plurality of entries to store a page size corresponding to each of the TLB entries.
  Type: Application
  Filed: January 5, 2018
  Publication date: July 11, 2019
  Applicant: Intel Corporation
  Inventors: Niranjan L. Cooray, Altug Koker, Nicolas Kacevas, Parth S. Damani, David Standring
- Publication number: 20190163641
  Abstract: An apparatus to facilitate page translation prefetching is disclosed. The apparatus includes a translation lookaside buffer (TLB), including a first table to store page table entries (PTEs) and a second table to store tags corresponding to each of the PTEs; and prefetch logic to detect a miss of a first requested address in the TLB during a page translation, retrieve a plurality of physical addresses from memory in response to the TLB miss and store the plurality of physical addresses as a plurality of PTEs in a first TLB entry.
  Type: Application
  Filed: November 27, 2017
  Publication date: May 30, 2019
  Applicant: Intel Corporation
  Inventors: Niranjan Cooray, Nicolas Kacevas, David Standring
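The miss-triggered prefetch works because adjacent virtual pages tend to be touched together, so pulling in a run of neighboring PTEs on one miss avoids several future misses. A minimal sketch, with the cluster size and dictionary-based tag store assumed for illustration:

```python
class PrefetchingTLB:
    """Sketch of miss-triggered PTE prefetching: on a miss, an aligned run
    of adjacent translations is fetched and stored under a single tag."""

    CLUSTER = 4  # pages fetched per miss (illustrative choice)

    def __init__(self, page_table):
        self.page_table = page_table  # vpn -> ppn backing store
        self.entries = {}             # tag -> list of CLUSTER ppns
        self.misses = 0

    def translate(self, vpn):
        tag, slot = divmod(vpn, self.CLUSTER)
        if tag not in self.entries:
            self.misses += 1
            # Prefetch the whole aligned cluster of PTEs in one go.
            self.entries[tag] = [self.page_table.get(tag * self.CLUSTER + i)
                                 for i in range(self.CLUSTER)]
        return self.entries[tag][slot]

pt = {vpn: vpn + 100 for vpn in range(16)}
tlb = PrefetchingTLB(pt)
assert tlb.translate(5) == 105  # miss: fetches vpns 4..7 together
assert tlb.translate(6) == 106  # hit, thanks to the prefetch
assert tlb.misses == 1
```

The trade-off is classic prefetching: one wider memory read per miss, in exchange for fewer total misses on sequential access patterns.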
- Patent number: 10249017
  Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
  Type: Grant
  Filed: August 11, 2016
  Date of Patent: April 2, 2019
  Assignee: Intel Corporation
  Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
- Publication number: 20180047131
  Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
  Type: Application
  Filed: August 11, 2016
  Publication date: February 15, 2018
  Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
- Publication number: 20130275683
  Abstract: Agents may be assigned to discrete portions of a cache. In some cases, more than one agent may be assigned to the same cache portion. The size of the portion, the assignment of agents to the portion and the number of agents may be programmed dynamically in some embodiments.
  Type: Application
  Filed: August 29, 2011
  Publication date: October 17, 2013
  Applicant: Intel Corporation
  Inventor: Nicolas Kacevas
- Patent number: 7174444
  Abstract: A system and method of early branch prediction in a processor to evaluate, typically before a full branch prediction is made, ways in a branch target buffer to determine if any of said ways corresponds to a valid unconditional branch, and upon such determination, to generate a signal to prevent a read of a next sequential chunk.
  Type: Grant
  Filed: March 31, 2003
  Date of Patent: February 6, 2007
  Assignee: Intel Corporation
  Inventors: Eran Altshuler, Oded Lempel, Robert Valentine, Nicolas Kacevas
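The early check pays off because an unconditional branch always redirects fetch: if one is found in the current chunk, reading the next sequential chunk is guaranteed wasted work (and wasted power). A sketch of that gating decision, with the way format (valid, tag, is_unconditional) as an assumed simplification:

```python
def should_fetch_next_chunk(btb_ways, chunk_tag):
    """Early scan of the BTB ways, before the full branch prediction:
    if any valid way in this chunk holds an unconditional branch,
    suppress the read of the next sequential chunk."""
    for valid, tag, is_unconditional in btb_ways:
        if valid and tag == chunk_tag and is_unconditional:
            return False  # fetch will redirect; next chunk is wasted work
    return True

ways = [
    (True, 0x12, False),  # conditional branch in chunk 0x12
    (True, 0x12, True),   # unconditional branch in chunk 0x12
    (False, 0x34, True),  # invalid entry, must be ignored
]
assert should_fetch_next_chunk(ways, 0x12) is False
assert should_fetch_next_chunk(ways, 0x34) is True
```

Conditional branches do not gate the read, since they may fall through to the next sequential chunk.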
- Patent number: 7058795
  Abstract: Briefly, a method and apparatus of branch prediction is provided. The branch prediction may be done by performing an XOR operation between the MSBs of the set bits of a path register and the LSBs of the set bits of an instruction pointer address register to provide a global index, and by performing an XOR operation of the LSB tag bits of the path register with the MSB tag bits of the instruction pointer address register to provide a tag index. There may be multiplexing between a global prediction and a local prediction.
  Type: Grant
  Filed: June 25, 2002
  Date of Patent: June 6, 2006
  Assignee: Intel Corporation
  Inventors: Nicolas Kacevas, Eran Altshuler
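The XOR indexing described here can be sketched in a few lines. The bit widths and the split between "set" and "tag" bits below are illustrative assumptions, not taken from the patent:

```python
def global_index_and_tag(path_reg, ip, set_bits=8, tag_bits=6):
    """Sketch of the abstract's XOR scheme: upper set bits of the path
    history mix with low IP bits for the index; low tag bits of the path
    history mix with higher IP bits for the tag. Widths are assumed."""
    set_mask = (1 << set_bits) - 1
    tag_mask = (1 << tag_bits) - 1
    # Global index: MSBs of the path register's set bits XOR
    # LSBs of the instruction pointer's set bits.
    index = ((path_reg >> set_bits) & set_mask) ^ (ip & set_mask)
    # Tag index: LSB tag bits of the path register XOR
    # MSB tag bits of the instruction pointer.
    tag = (path_reg & tag_mask) ^ ((ip >> set_bits) & tag_mask)
    return index, tag

idx, tag = global_index_and_tag(0b1010_1100_0110, 0x4AC)
assert 0 <= idx < 256 and 0 <= tag < 64
```

Mixing path history into the index is what distinguishes a global predictor from a purely address-indexed one: the same branch can use different table entries depending on how it was reached.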
- Publication number: 20050262332
  Abstract: A system and method for predicting a branch target for a current instruction in a microprocessor, the system comprising a cache storing indirect branch instructions and a path register. The path register is updated on certain branches by an XOR operation on the path register and the branch instruction, followed by the addition of one or more bits to the register. The cache is indexed by performing an operation on a portion of the current instruction address and the path register; the entry returned, if any, may be used to predict the target of the current instruction.
  Type: Application
  Filed: March 31, 2003
  Publication date: November 24, 2005
  Inventors: Lihu Rappoport, Ronny Ronen, Nicolas Kacevas, Oded Lempel
- Publication number: 20040193855
  Abstract: A processor including a branch prediction unit, wherein various techniques can be used to decrease branch prediction unit access, possibly saving power. Whether or not a branch prediction target needs updating may be stored, and thus it may be known whether or not the branch prediction unit needs to be accessed after the initial access. Which way corresponds to the prediction may be stored, decreasing the amount of subsequent accesses. Use information (e.g., least recently used information) may be updated at the time of the first access of the branch prediction unit, possibly eliminating the need for a later use information update. A branch prediction unit update or allocate, or update or allocate attempt, may be performed prior to the execute stage.
  Type: Application
  Filed: March 31, 2003
  Publication date: September 30, 2004
  Inventors: Nicolas Kacevas, Eran Altshuler
- Publication number: 20040193843
  Abstract: A system and method of early branch prediction in a processor to evaluate, typically before a full branch prediction is made, ways in a branch target buffer to determine if any of said ways corresponds to a valid unconditional branch, and upon such determination, to generate a signal to prevent a read of a next sequential chunk.
  Type: Application
  Filed: March 31, 2003
  Publication date: September 30, 2004
  Inventors: Eran Altshuler, Oded Lempel, Robert Valentine, Nicolas Kacevas
- Publication number: 20030236969
  Abstract: Briefly, a method and apparatus of branch prediction is provided. The branch prediction may be done by performing an XOR operation between the MSBs of the set bits of a path register and the LSBs of the set bits of an instruction pointer address register to provide a global index, and by performing an XOR operation of the LSB tag bits of the path register with the MSB tag bits of the instruction pointer address register to provide a tag index. There may be multiplexing between a global prediction and a local prediction.
  Type: Application
  Filed: June 25, 2002
  Publication date: December 25, 2003
  Inventors: Nicolas Kacevas, Eran Altshuler
- Patent number: 6601161
  Abstract: A system and method for predicting a branch target for a current instruction in a microprocessor, the system comprising a cache storing indirect branch instructions and a path register. The path register is updated on certain branches by an XOR operation on the path register and the branch instruction, followed by the addition of one or more bits to the register. The cache is indexed by performing an operation on a portion of the current instruction address and the path register; the entry returned, if any, may be used to predict the target of the current instruction.
  Type: Grant
  Filed: December 30, 1998
  Date of Patent: July 29, 2003
  Assignee: Intel Corporation
  Inventors: Lihu Rappoport, Ronny Ronen, Nicolas Kacevas, Oded Lempel
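The mechanism in this abstract (a path register updated by XOR plus shifted-in bits, then combined with the instruction address to index a target cache) can be sketched as below. Register width, shift amount, and the index function are illustrative assumptions:

```python
def update_path(path_reg, branch_ip, bits=12):
    """Path-register update per the abstract: XOR in the branch address,
    then shift one or more new bits into the register (widths assumed)."""
    mask = (1 << bits) - 1
    mixed = (path_reg ^ branch_ip) & mask
    return ((mixed << 2) | (branch_ip & 0b11)) & mask

class IndirectTargetCache:
    """Target cache for indirect branches, indexed by an operation on
    the current instruction address and the path register."""

    def __init__(self, sets=64):
        self.sets = sets
        self.table = {}

    def index(self, ip, path_reg):
        return (ip ^ path_reg) % self.sets

    def record(self, ip, path_reg, target):
        self.table[self.index(ip, path_reg)] = target

    def predict(self, ip, path_reg):
        return self.table.get(self.index(ip, path_reg))  # None on miss

cache = IndirectTargetCache()
path = update_path(0, 0x4AC)
cache.record(0x48C, path, 0x900)
assert cache.predict(0x48C, path) == 0x900
```

Because the path register participates in the index, the same indirect branch reached along different call paths can predict different targets, which is exactly what a plain BTB cannot do.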
- Patent number: 6397297
  Abstract: A computer system having cache modules interconnected in series includes a first and a second cache module directly coupled to an address generating line for parallel lookup of data and data conversion logic coupled between the first cache module and said second cache module.
  Type: Grant
  Filed: December 30, 1999
  Date of Patent: May 28, 2002
  Assignee: Intel Corp.
  Inventors: Zeev Sperber, Jack Doweck, Nicolas Kacevas, Roy Nesher
- Patent number: 5964868
  Abstract: A return stack buffer mechanism that uses two separate return stack buffers is disclosed. The first return stack buffer is the Speculative Return Stack Buffer. The Speculative Return Stack Buffer is updated using speculatively fetched instructions. Thus, the Speculative Return Stack Buffer may become corrupted when incorrect instructions are fetched. The second return stack buffer is the Actual Return Stack Buffer. The Actual Return Stack Buffer is updated using information from fully executed branch instructions. When a branch misprediction causes a pipeline flush, the contents of the Actual Return Stack Buffer are copied into the Speculative Return Stack Buffer to correct any corrupted information.
  Type: Grant
  Filed: May 15, 1996
  Date of Patent: October 12, 1999
  Assignee: Intel Corporation
  Inventors: Simcha Gochman, Nicolas Kacevas, Farah Jubran
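The two-stack recovery scheme can be sketched directly from the abstract. This is a software toy model; the method names are illustrative:

```python
class DualReturnStackBuffer:
    """Two RSBs: a speculative one updated at fetch time and an actual
    one updated only by fully executed (retired) instructions. On a
    mispredict flush, the actual stack repairs the speculative one."""

    def __init__(self):
        self.speculative = []  # may be corrupted by wrong-path fetches
        self.actual = []       # known-correct, retirement-ordered history

    def fetch_call(self, return_addr):
        self.speculative.append(return_addr)

    def fetch_return(self):
        return self.speculative.pop() if self.speculative else None

    def retire_call(self, return_addr):
        self.actual.append(return_addr)

    def retire_return(self):
        self.actual.pop()

    def flush(self):
        # Pipeline flush on branch misprediction: copy the actual RSB's
        # contents over the (possibly corrupted) speculative RSB.
        self.speculative = list(self.actual)

rsb = DualReturnStackBuffer()
rsb.fetch_call(0x100)
rsb.retire_call(0x100)   # the call retires; actual RSB now agrees
rsb.fetch_call(0xBAD)    # a call fetched down a mispredicted path
rsb.flush()              # flush discards the bogus speculative entry
assert rsb.fetch_return() == 0x100
```

The speculative stack stays fast (updated at fetch, ahead of execution) while the actual stack provides a clean checkpoint to restore from, so a wrong-path call cannot permanently poison return prediction.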