Patents by Inventor Sebastien Hily

Sebastien Hily has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7418552
    Abstract: A memory disambiguation apparatus includes a store queue, a store forwarding buffer, and a version count buffer. The store queue includes an entry for each store instruction in the instruction window of a processor. Some store queue entries include resolved store addresses, and some do not. The store forwarding buffer is a set-associative buffer that has entries allocated for store instructions as store addresses are resolved. Each entry in the store forwarding buffer is allocated into a set determined in part by a subset of the store address. When the set in the store forwarding buffer is full, an older entry in the set is discarded in favor of the newly allocated entry. A version count buffer including an array of overflow indicators is maintained to track overflow occurrences. As load addresses are resolved for load instructions in the instruction window, the set-associative store forwarding buffer can be searched to provide memory disambiguation.
    Type: Grant
    Filed: May 15, 2003
    Date of Patent: August 26, 2008
    Assignee: Intel Corporation
    Inventors: Haitham Akkary, Sebastien Hily
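    A minimal Python sketch of the set-associative store forwarding buffer described in the abstract above. The set count, way count, index function, and fallback policy are illustrative assumptions, not details taken from the patent:
      NUM_SETS = 8      # sets are selected by a subset of the store address bits
      WAYS = 2          # entries per set; allocating into a full set evicts the oldest entry

      class StoreForwardingBuffer:
          def __init__(self):
              self.sets = [[] for _ in range(NUM_SETS)]   # each entry is (seq, addr, data)
              self.overflow = [0] * NUM_SETS              # version-count / overflow indicators

          def _index(self, addr):
              return addr % NUM_SETS

          def allocate(self, seq, addr, data):
              """Insert a store whose address just resolved; track overflow on eviction."""
              idx = self._index(addr)
              if len(self.sets[idx]) == WAYS:
                  self.sets[idx].pop(0)                   # discard an older entry
                  self.overflow[idx] += 1                 # remember the set overflowed
              self.sets[idx].append((seq, addr, data))

          def disambiguate(self, load_seq, load_addr):
              """Return (safe, forwarded_data) for a load whose address just resolved."""
              idx = self._index(load_addr)
              if self.overflow[idx]:
                  return False, None                      # a matching older store may have been lost
              matches = [(seq, data) for seq, addr, data in self.sets[idx]
                         if addr == load_addr and seq < load_seq]
              if matches:
                  return True, max(matches)[1]            # youngest older store forwards its data
              return True, None                           # no conflicting store found

      sfb = StoreForwardingBuffer()
      sfb.allocate(seq=1, addr=0x40, data="A")
      sfb.allocate(seq=2, addr=0x48, data="B")
      print(sfb.disambiguate(load_seq=3, load_addr=0x40))  # (True, 'A')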
  • Publication number: 20080082765
    Abstract: Methods and apparatus for resolving false dependencies associated with speculatively executing load instructions in a processor core are described. In one embodiment, physical addresses of a load operation and a store operation are compared in response to a determination that the load operation may be potentially dependent on the store operation. Other embodiments are also described.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Inventors: Sebastien Hily, Zhongying Zhang, Per Hammarlund
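    A minimal Python sketch of the physical-address comparison described above. The page table, page size, and the partial-match heuristic used to flag a potential dependence are illustrative assumptions:
      PAGE = 4096
      page_table = {0x1000: 0x7000, 0x2000: 0x9000}    # virtual page -> physical page (toy)

      def translate(vaddr):
          return page_table[vaddr & ~(PAGE - 1)] | (vaddr & (PAGE - 1))

      def is_true_dependence(load_vaddr, store_vaddr):
          # A cheap partial match (same page offset) flags a potential dependence;
          # comparing the full physical addresses decides whether it is real.
          if (load_vaddr & (PAGE - 1)) != (store_vaddr & (PAGE - 1)):
              return False
          return translate(load_vaddr) == translate(store_vaddr)

      print(is_true_dependence(0x1010, 0x2010))  # False: same offset, different pages
      print(is_true_dependence(0x1010, 0x1010))  # True: the load really depends on the store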
  • Publication number: 20080072019
    Abstract: A technique to filter bogus instructions from a processor pipeline. At least one embodiment of the invention detects a bogus event and removes from the processor only the instructions corresponding to that event, without affecting instructions that do not correspond to it.
    Type: Application
    Filed: September 19, 2006
    Publication date: March 20, 2008
    Inventors: Avinash Sodani, Ranjani Iyer, Sean Mirkes, Sebastien Hily, David Koufaty, Stephan Jourdan, Zhongying Zhang
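    A minimal Python sketch of the filtering idea above: only the micro-ops tagged with the bogus event are squashed, everything else keeps flowing. The event-tagging scheme is an illustrative assumption:
      from dataclasses import dataclass

      @dataclass
      class Uop:
          pc: int
          event_tag: int    # identifies the speculation event this uop belongs to

      def filter_bogus(pipeline, bogus_tag):
          """Remove only the uops that correspond to the bogus event."""
          return [u for u in pipeline if u.event_tag != bogus_tag]

      pipeline = [Uop(0x100, 0), Uop(0x104, 1), Uop(0x108, 1), Uop(0x10C, 2)]
      pipeline = filter_bogus(pipeline, bogus_tag=1)      # event 1 turned out to be bogus
      print([hex(u.pc) for u in pipeline])                # ['0x100', '0x10c']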
  • Publication number: 20080059753
    Abstract: Methods and apparatus to redispatch an operation for execution in a processor are described. In one embodiment, a virtual address corresponding to a store instruction may be reselected for translation into a physical address in response to remaining unselected during a previous selection process. Other embodiments are also described.
    Type: Application
    Filed: August 30, 2006
    Publication date: March 6, 2008
    Inventors: Sebastien Hily, Zhongying Zhang, Ranjani Iyer, Stephan Jourdan, Per Hammarlund
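    A minimal Python sketch of re-dispatching an address that lost arbitration, as described above: store virtual addresses compete for a limited number of translation slots per cycle, and anything left unselected is simply reselected on a later cycle. The port count and FIFO policy are illustrative assumptions:
      from collections import deque

      TRANSLATION_PORTS = 2    # addresses that can be selected for translation per cycle

      def run(pending_vaddrs):
          queue = deque(pending_vaddrs)
          cycle = 0
          while queue:
              selected = [queue.popleft() for _ in range(min(TRANSLATION_PORTS, len(queue)))]
              print(f"cycle {cycle}: translating {[hex(v) for v in selected]}")
              # whatever remains in `queue` was unselected this cycle and is redispatched later
              cycle += 1

      run([0x1000, 0x1008, 0x1010])
      # cycle 0: translating ['0x1000', '0x1008']
      # cycle 1: translating ['0x1010']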
  • Publication number: 20080059723
    Abstract: In one embodiment, the present invention includes an apparatus having a first counter to count dispatches of a senior request in a memory unit, a second counter to count cycles of a processor coupled to the memory unit, and a controller coupled to the first and second counters to execute one or more remediation measures with respect to the senior request based on a value of at least one of the counters. Other embodiments are described and claimed.
    Type: Application
    Filed: August 31, 2006
    Publication date: March 6, 2008
    Inventors: Prakash Math, Matthew Merten, Sebastien Hily, Beeman Strong, Morris Marden, David Burns
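    A minimal Python sketch of the two-counter arrangement described above. The thresholds and the remediation action are illustrative assumptions:
      DISPATCH_LIMIT = 3    # redispatches tolerated for the senior request
      CYCLE_LIMIT = 8       # cycles tolerated without completion

      class SeniorRequestWatchdog:
          def __init__(self):
              self.dispatches = 0    # first counter: dispatches of the senior request
              self.cycles = 0        # second counter: processor cycles

          def tick(self, dispatched, completed):
              self.cycles += 1
              if dispatched:
                  self.dispatches += 1
              if completed:
                  self.dispatches = self.cycles = 0     # forward progress: reset both counters
                  return "ok"
              if self.dispatches >= DISPATCH_LIMIT or self.cycles >= CYCLE_LIMIT:
                  return "remediate"                    # e.g. throttle younger requests
              return "ok"

      wd = SeniorRequestWatchdog()
      for cycle in range(4):
          print(cycle, wd.tick(dispatched=True, completed=False))
      # the senior request keeps dispatching without completing, so remediation
      # is requested once the dispatch counter reaches its limit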
  • Publication number: 20070156990
    Abstract: A method is disclosed. The method includes scheduling a load operation at least twice the size of a maximum access supported by a memory device, dividing the load operation into a plurality of separate load operation segments having a size equivalent to the maximum access supported by the memory device, and performing each of the plurality of load operation segments. A further method is disclosed where a temporary register is used to minimize the number of memory accesses to support unaligned accesses.
    Type: Application
    Filed: December 30, 2005
    Publication date: July 5, 2007
    Inventors: Per Hammarlund, Stephan Jourdan, Michael Fetterman, Glenn Hinton, Sebastien Hily, Ronak Singhal
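    A minimal Python sketch of segmenting a load that is wider than the memory device's maximum access, as described above. The 8-byte maximum access and the toy byte-addressable memory are illustrative assumptions:
      MAX_ACCESS = 8                 # widest single access the memory device supports
      memory = bytes(range(64))      # toy byte-addressable memory

      def wide_load(addr, size):
          """Perform a wide load as a sequence of maximum-sized segment accesses."""
          assert size >= 2 * MAX_ACCESS and size % MAX_ACCESS == 0
          segments = [memory[addr + off : addr + off + MAX_ACCESS]
                      for off in range(0, size, MAX_ACCESS)]
          return b"".join(segments)  # reassemble the segments into the full result

      print(wide_load(0x10, 16).hex())   # two 8-byte segment accesses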
  • Publication number: 20070157006
    Abstract: Microarchitecture policies and structures partition execution resource clusters. In disclosed microarchitecture embodiments, micro-operations representing a sequential instruction ordering are partitioned into two sets. One set of micro-operations is allocated execution resources from a cluster of execution resources that can perform memory access operations but not branching operations. The other set is allocated execution resources from a cluster of execution resources that can perform branching operations but not memory access operations. The first and second sets of micro-operations may be executed out of sequential order but are retired so as to represent their sequential instruction ordering.
    Type: Application
    Filed: December 30, 2005
    Publication date: July 5, 2007
    Inventors: Stephan Jourdan, Avinash Sodani, Alexandre Farcy, Per Hammarlund, Sebastien Hily, Mark Davis
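    A minimal Python sketch of the partitioning described above: one set of micro-ops is steered to a cluster that handles memory accesses, the other to a cluster that handles branches, and retirement restores the sequential order. The uop encoding and the cluster assignment of plain ALU ops are illustrative assumptions:
      program = [("load", 0), ("add", 1), ("branch", 2), ("store", 3), ("branch", 4)]

      MEM_CLUSTER_OPS = {"load", "store"}          # memory accesses, no branches

      mem_cluster, br_cluster = [], []
      for op, seq in program:                      # seq records the sequential program order
          (mem_cluster if op in MEM_CLUSTER_OPS else br_cluster).append((op, seq))

      # Each cluster may execute its set out of order; retirement merges both sets
      # back into the original sequential instruction ordering.
      retired = sorted(mem_cluster + br_cluster, key=lambda uop: uop[1])
      print(mem_cluster)   # [('load', 0), ('store', 3)]
      print(br_cluster)    # [('add', 1), ('branch', 2), ('branch', 4)]
      print(retired)       # original program order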
  • Publication number: 20070130448
    Abstract: Methods and apparatus to identify memory communications are described. In one embodiment, an access to a stack pointer is monitored, e.g., to maintain a stack tracker structure. The information stored in the stack tracker structure may be utilized to generate a distance value corresponding to a relative distance between a load instruction and a previous store instruction.
    Type: Application
    Filed: December 1, 2005
    Publication date: June 7, 2007
    Inventors: Stephan Jourdan, Mark Davis, Sebastien Hily
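    A minimal Python sketch of a stack tracker as described above: stack-pointer-relative stores are recorded, and each later stack-pointer-relative load yields the distance back to the store it communicates with. The instruction stream and offsets are illustrative assumptions:
      def track(stream):
          last_store = {}      # stack offset -> index of the last store to that slot
          distances = {}
          for i, (op, sp_offset) in enumerate(stream):
              if op == "store_sp":                             # e.g. a push or a spill
                  last_store[sp_offset] = i
              elif op == "load_sp" and sp_offset in last_store:
                  distances[i] = i - last_store[sp_offset]     # load/store distance value
          return distances

      stream = [("store_sp", -8), ("store_sp", -16), ("add", None),
                ("load_sp", -16), ("load_sp", -8)]
      print(track(stream))   # {3: 2, 4: 4} -- each load paired with its producing store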
  • Patent number: 7174428
    Abstract: Embodiments of the present invention provide a method, apparatus and system for memory renaming. In one embodiment, a decode unit may decode a load instruction. If the load instruction is predicted to be memory renamed, the load instruction may have a predicted store identifier associated with it. The decode unit may transform the load instruction that is predicted to be memory renamed into a data move instruction and a load check instruction. The data move instruction may read data from the cache based on the predicted store identifier, and the load check instruction may compare an identifier associated with an identified source store with the predicted store identifier. A retirement unit may retire the load instruction if the predicted store identifier matches an identifier associated with the identified source store.
    Type: Grant
    Filed: December 29, 2003
    Date of Patent: February 6, 2007
    Assignee: Intel Corporation
    Inventors: Sebastien Hily, Per H. Hammarlund, Avinash Sodani
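    A minimal Python sketch of the renaming flow above: a load predicted to be renamed is handled as a data move that reads the predicted store's data plus a load check that verifies the prediction. The store-buffer layout and the toy predictor are illustrative assumptions:
      store_buffer = {7: ("addr_A", 42), 9: ("addr_B", 99)}   # store id -> (address, data)

      def predict_store_id(load_addr):
          return 7 if load_addr == "addr_A" else 9            # toy memory-renaming predictor

      def execute_renamed_load(load_addr):
          predicted_id = predict_store_id(load_addr)
          data = store_buffer[predicted_id][1]                # data move: take the store's data
          # load check: identify the actual youngest store to this address
          actual_id = max(i for i, (a, _) in store_buffer.items() if a == load_addr)
          if actual_id == predicted_id:
              return "retire", data                           # prediction verified
          return "replay", None                               # mispredicted: re-execute normally

      print(execute_renamed_load("addr_A"))   # ('retire', 42)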
  • Patent number: 7130965
    Abstract: Embodiments of the present invention relate to a memory management scheme and apparatus that enables efficient cache memory management. The method includes writing an entry to a store buffer at execute time; determining, before retirement, whether the entry's address is in a first-level cache associated with the store buffer; and setting a status bit associated with the entry in the store buffer if the address is in the cache in either exclusive or modified state. The method further includes writing the entry to the first-level cache immediately at or after retirement when the status bit is set, and de-allocating the entry from the store buffer at retirement. The method may further comprise resetting the status bit if the cache line is allocated over (replaced) or is evicted from the cache before the store buffer entry attempts to write to the cache.
    Type: Grant
    Filed: December 23, 2003
    Date of Patent: October 31, 2006
    Assignee: Intel Corporation
    Inventors: Per H. Hammarlund, Stephan Jourdan, Sebastien Hily, Aravindh Baktha, Hermann Gartler
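    A minimal Python sketch of the status-bit scheme above: if the store's line is already in the first-level cache in an exclusive or modified state at execute time, the store-buffer entry is marked so it can be written immediately at retirement. The MESI-style cache model is an illustrative assumption:
      l1_state = {0x100: "E", 0x140: "S"}       # cache line address -> coherence state

      class StoreBufferEntry:
          def __init__(self, addr, data):
              self.addr, self.data = addr, data
              # status bit set at execute time if the line is already writable
              self.ready_bit = l1_state.get(addr) in ("E", "M")

          def retire(self, cache):
              if self.ready_bit:
                  cache[self.addr] = self.data  # write to the first-level cache at retirement
                  return "written"
              return "deferred"                 # the line must first be obtained in a writable state

      cache_data = {}
      print(StoreBufferEntry(0x100, "X").retire(cache_data))   # 'written'  (line was Exclusive)
      print(StoreBufferEntry(0x140, "Y").retire(cache_data))   # 'deferred' (line only Shared)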
  • Publication number: 20060161738
    Abstract: In one embodiment, the present invention includes a predictor to predict contention of an operation to be executed in a program. The operation may be processed based on a result of the prediction, which may be based on multiple independent predictions. In one embodiment, the operation may be optimized if no contention is predicted. Other embodiments are described and claimed.
    Type: Application
    Filed: December 29, 2004
    Publication date: July 20, 2006
    Inventors: Bratin Saha, Matthew Merten, Sebastien Hily, David Koufaty, Per Hammarlund
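    A minimal Python sketch of combining multiple independent contention predictions, as described above: the operation is optimized only when none of the predictors expects contention. Both component predictors are illustrative assumptions:
      from collections import defaultdict

      class ContentionPredictor:
          def __init__(self):
              self.per_pc = defaultdict(int)    # per-operation saturating counter
              self.global_recent = 0            # contention seen recently anywhere

          def predict_no_contention(self, pc):
              # independent predictions must agree before the operation is optimized
              return self.per_pc[pc] == 0 and self.global_recent == 0

          def update(self, pc, contended):
              if contended:
                  self.per_pc[pc] = min(self.per_pc[pc] + 1, 3)
                  self.global_recent = 4
              else:
                  self.per_pc[pc] = max(self.per_pc[pc] - 1, 0)
                  self.global_recent = max(self.global_recent - 1, 0)

      p = ContentionPredictor()
      print(p.predict_no_contention(0x400))    # True  -> optimize the operation
      p.update(0x400, contended=True)
      print(p.predict_no_contention(0x400))    # False -> handle it conservatively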
  • Publication number: 20050149702
    Abstract: Embodiments of the present invention provide a method, apparatus and system for memory renaming. In one embodiment, a decode unit may decode a load instruction. If the load instruction is predicted to be memory renamed, the load instruction may have a predicted store identifier associated with it. The decode unit may transform the load instruction that is predicted to be memory renamed into a data move instruction and a load check instruction. The data move instruction may read data from the cache based on the predicted store identifier, and the load check instruction may compare an identifier associated with an identified source store with the predicted store identifier. A retirement unit may retire the load instruction if the predicted store identifier matches an identifier associated with the identified source store.
    Type: Application
    Filed: December 29, 2003
    Publication date: July 7, 2005
    Inventors: Sebastien Hily, Per Hammarlund, Avinash Sodani
  • Publication number: 20050138339
    Abstract: Embodiments of the present invention relate to a memory management scheme and apparatus that enables efficient memory renaming. The method includes computing a store address, writing the store address to a first storage, writing data associated with the store address to a memory, de-allocating the store address from the first storage, allocating the store address in a second storage, predicting a load instruction to be memory renamed, computing a load store source index, computing a load address, disambiguating the memory renamed load instruction, and retiring the memory renamed load instruction if the store instruction is still allocated in at least one of the first storage and the second storage and would have effectively provided the full data to the load. The method may also include re-executing the load instruction without memory renaming if the store instruction is in neither the first storage nor the second storage.
    Type: Application
    Filed: December 23, 2003
    Publication date: June 23, 2005
    Inventors: Sebastien Hily, Per Hammarlund
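    A minimal Python sketch of the two-storage lifecycle above: a store address sits in a first storage until its data is written to memory, then moves to a second storage, and a memory renamed load retires only if its source store is still tracked in one of them. Structure names and sizes are illustrative assumptions:
      first_storage = {}    # store index -> address, before the data is written to memory
      second_storage = {}   # store index -> address, after the data has been written

      def store_compute_address(idx, addr):
          first_storage[idx] = addr

      def store_writeback(idx):
          second_storage[idx] = first_storage.pop(idx)   # de-allocate from first, allocate in second

      def disambiguate_renamed_load(source_store_idx, load_addr):
          src = first_storage.get(source_store_idx) or second_storage.get(source_store_idx)
          if src == load_addr:
              return "retire memory renamed load"
          return "re-execute load without memory renaming"

      store_compute_address(3, 0x2000)
      store_writeback(3)
      print(disambiguate_renamed_load(3, 0x2000))   # still tracked in the second storage
      print(disambiguate_renamed_load(8, 0x2000))   # source store no longer tracked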
  • Publication number: 20050138295
    Abstract: Embodiments of the present invention relate to a memory management scheme and apparatus that enables efficient cache memory management. The method includes writing an entry to a store buffer at execute time; determining, before retirement, whether the entry's address is in a first-level cache associated with the store buffer; and setting a status bit associated with the entry in the store buffer if the address is in the cache in either exclusive or modified state. The method further includes writing the entry to the first-level cache immediately at or after retirement when the status bit is set, and de-allocating the entry from the store buffer at retirement. The method may further comprise resetting the status bit if the cache line is allocated over (replaced) or is evicted from the cache before the store buffer entry attempts to write to the cache.
    Type: Application
    Filed: December 23, 2003
    Publication date: June 23, 2005
    Inventors: Per Hammarlund, Stephan Jourdan, Sebastien Hily, Aravindh Baktha, Hermann Gartler
  • Patent number: 6591342
    Abstract: A memory disambiguation apparatus includes a store queue, a store forwarding buffer, and a version count buffer. The store queue includes an entry for each store instruction in the instruction window of a processor. Some store queue entries include resolved store addresses, and some do not. The store forwarding buffer is a set-associative buffer that has entries allocated for store instructions as store addresses are resolved. Each entry in the store forwarding buffer is allocated into a set determined in part by a subset of the store address. When the set in the store forwarding buffer is full, an older entry in the set is discarded in favor of the newly allocated entry. A version count buffer including an array of overflow indicators is maintained to track overflow occurrences. As load addresses are resolved for load instructions in the instruction window, the set-associative store forwarding buffer can be searched to provide memory disambiguation.
    Type: Grant
    Filed: December 14, 1999
    Date of Patent: July 8, 2003
    Assignee: Intel Corporation
    Inventors: Haitham Akkary, Sebastien Hily
  • Publication number: 20030005266
    Abstract: A device is presented including a first processor and a second processor. A number of memory devices are connected to the first processor and the second processor. A register buffer is connected to the first processor and the second processor. A trace buffer is connected to the first processor and the second processor. A number of memory instruction buffers are connected to the first processor and the second processor. The first processor and the second processor perform single threaded applications using multithreading resources. A method is also presented in which a first thread is executed by a first processor. The first thread is also executed by a second processor, as directed by the first processor. The second processor executes instructions ahead of the first processor.
    Type: Application
    Filed: June 28, 2001
    Publication date: January 2, 2003
    Inventors: Haitham Akkary, Sebastien Hily
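    A minimal Python sketch of the run-ahead arrangement described above: a second processor executes the same single-threaded program ahead of the first, here simply warming up the memory the thread will touch and recording its trace in a shared buffer. The toy program and the contents passed through the trace buffer are illustrative assumptions:
      from collections import deque

      program = [("load", 0x100), ("load", 0x140), ("add", None), ("load", 0x180)]

      trace_buffer = deque()    # results handed from the ahead processor to the first processor
      prefetched = set()

      def ahead_processor():
          """Executes the thread ahead of the first processor."""
          for op, addr in program:
              if op == "load":
                  prefetched.add(addr)            # e.g. bring the line into a shared cache
              trace_buffer.append((op, addr))     # record the outcome for later validation

      def first_processor():
          warmed = 0
          for op, addr in program:
              assert trace_buffer.popleft() == (op, addr)   # trace matches architectural execution
              if op == "load" and addr in prefetched:
                  warmed += 1                     # the load is fast because it was run ahead
          return warmed

      ahead_processor()
      print(first_processor(), "loads already warmed by the second processor")   # 3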