Patents by Inventor Raanan Sade

Raanan Sade has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140208031
    Abstract: An apparatus and method are described for efficiently transferring data from a producer core to a consumer core within a central processing unit (CPU). For example, one embodiment of a method comprises: writing data to a buffer within the producer core of the CPU until a designated amount of data has been written; upon detecting that the designated amount of data has been written, responsively generating an eviction cycle, the eviction cycle causing the data to be transferred from the buffer to a cache accessible by both the producer core and the consumer core; and upon the consumer core detecting that data is available in the cache, providing the data to the consumer core from the cache upon receipt of a read signal from the consumer core.
    Type: Application
    Filed: December 21, 2011
    Publication date: July 24, 2014
    Inventors: Shlomo Raikin, Robert Valentine, Raanan Sade, Julius Yuli Mandelblat, Ron Shalev, Larisa Novakovsky
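
The hand-off described in this abstract can be modeled in software. Below is a minimal sketch, not the patented hardware: the producer accumulates writes in a small private buffer and, once the designated amount has been written, the whole chunk is "evicted" into a cache shared with the consumer, which then reads it. All names, sizes, and the use of a plain array as the shared cache are illustrative assumptions.

```c
/* Hypothetical model of the producer/consumer hand-off described above.
 * Names, sizes, and the "shared cache" array are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define CHUNK 8                       /* designated amount that triggers eviction */

static int shared_cache[CHUNK];       /* cache visible to both cores (modeled)    */
static int cache_valid = 0;           /* set once an evicted chunk is present     */

struct fill_buffer {
    int data[CHUNK];
    int count;                        /* how much the producer has written so far */
};

/* Producer side: buffer writes; evict to the shared cache when full. */
static void producer_write(struct fill_buffer *fb, int value)
{
    fb->data[fb->count++] = value;
    if (fb->count == CHUNK) {                            /* designated amount reached */
        memcpy(shared_cache, fb->data, sizeof fb->data); /* the eviction cycle        */
        cache_valid = 1;
        fb->count = 0;
    }
}

/* Consumer side: read the chunk from the shared cache once it is there. */
static int consumer_read(int out[CHUNK])
{
    if (!cache_valid)
        return 0;                     /* nothing available yet */
    memcpy(out, shared_cache, sizeof shared_cache);
    cache_valid = 0;
    return 1;
}

int main(void)
{
    struct fill_buffer fb = { .count = 0 };
    int chunk[CHUNK];

    for (int i = 0; i < CHUNK; i++)
        producer_write(&fb, i * i);   /* producer fills its buffer */

    if (consumer_read(chunk))         /* consumer sees the evicted chunk */
        for (int i = 0; i < CHUNK; i++)
            printf("consumer got %d\n", chunk[i]);
    return 0;
}
```
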
  • Publication number: 20140192069
    Abstract: An apparatus and method are described for efficiently transferring data from a core of a central processing unit (CPU) to a graphics processing unit (GPU). For example, one embodiment of a method comprises: writing data to a buffer within the core of the CPU until a designated amount of data has been written; upon detecting that the designated amount of data has been written, responsively generating an eviction cycle, the eviction cycle causing the data to be transferred from the buffer to a cache accessible by both the core and the GPU; setting an indication to indicate to the GPU that data is available in the cache; and upon the GPU detecting the indication, providing the data to the GPU from the cache upon receipt of a read signal from the GPU.
    Type: Application
    Filed: December 21, 2011
    Publication date: July 10, 2014
    Inventors: Shlomo Raikin, Raanan Sade, Robert Valentine, Julius Yuli Mandelblat, Ron Shalev, Larisa Novakovsky
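
A similarly minimal sketch of the CPU-to-GPU variant, focusing on the explicit "data available" indication the GPU detects before its read is serviced. The flag, the busy-wait, and all names are illustrative assumptions rather than details taken from the patent.

```c
/* Hypothetical, single-threaded model of the core-to-GPU hand-off above;
 * the focus is the explicit indication the GPU polls before reading. */
#include <stdio.h>

#define CHUNK 4

static int          llc_line[CHUNK]; /* cache line shared by the core and the GPU */
static volatile int data_ready;      /* the indication set after the eviction     */

static void cpu_core_produce(void)
{
    int staged[CHUNK] = { 10, 20, 30, 40 }; /* writes gathered in the core's buffer */
    for (int i = 0; i < CHUNK; i++)
        llc_line[i] = staged[i];            /* eviction into the shared cache       */
    data_ready = 1;                         /* set the indication for the GPU       */
}

static void gpu_consume(void)
{
    while (!data_ready)                     /* GPU detects the indication           */
        ;                                   /* (busy-wait in this toy model)        */
    for (int i = 0; i < CHUNK; i++)
        printf("gpu read %d\n", llc_line[i]); /* read signal -> data served         */
    data_ready = 0;
}

int main(void)
{
    cpu_core_produce();
    gpu_consume();
    return 0;
}
```
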
  • Publication number: 20140122811
    Abstract: A processor includes a core to execute instructions and a cache memory coupled to the core and having a plurality of entries. Each entry of the cache memory may include a data storage including a plurality of data storage portions, each data storage portion to store a corresponding data portion. Each entry may also include a metadata storage to store a plurality of portion modification indicators, each portion modification indicator corresponding to one of the data storage portions. Each portion modification indicator is to indicate whether the data portion stored in the corresponding data storage portion has been modified, independently of cache coherency state information of the entry. Other embodiments are described and claimed.
    Type: Application
    Filed: October 31, 2012
    Publication date: May 1, 2014
    Inventors: Stanislav Shwartsman, Raanan Sade, Larisa Novakovsky, Arijit Biswas
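
A rough software model of such an entry, assuming for illustration a 64-byte line split into eight 8-byte portions: the per-portion modification bits are kept separately from the line's coherency state, so a write marks exactly the portions it touched.

```c
/* Hypothetical model of a cache entry with per-portion modification bits,
 * tracked independently of the MESI-style coherency state. Portion size
 * and field names are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_BYTES    64
#define PORTION_BYTES 8
#define PORTIONS      (LINE_BYTES / PORTION_BYTES)

enum coherency { INVALID, SHARED, EXCLUSIVE, MODIFIED };

struct cache_entry {
    uint8_t        data[LINE_BYTES];
    uint8_t        portion_dirty;    /* one modification bit per portion  */
    enum coherency state;            /* coherency state of the whole line */
};

/* Write some bytes into the line and mark only the touched portions. */
static void entry_write(struct cache_entry *e, int offset,
                        const void *src, int len)
{
    memcpy(e->data + offset, src, len);
    e->state = MODIFIED;
    for (int p = offset / PORTION_BYTES;
         p <= (offset + len - 1) / PORTION_BYTES; p++)
        e->portion_dirty |= (uint8_t)(1u << p);
}

int main(void)
{
    struct cache_entry e = { .state = EXCLUSIVE };
    uint64_t v = 0x1122334455667788ULL;

    entry_write(&e, 20, &v, sizeof v);                /* touches portions 2 and 3 */
    printf("dirty mask: 0x%02x\n", e.portion_dirty);  /* prints 0x0c              */
    return 0;
}
```
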
  • Patent number: 8688917
    Abstract: A method and apparatus for monitoring memory accesses in hardware to support transactional execution is herein described. Attributes are used to monitor accesses to data items without regard for detection at physical storage structure granularity, instead ensuring monitoring at least at data item granularity. As an example, attributes are added to state bits of a cache to enable new cache coherency states. Upon a monitored memory access to a data item, which may be selectively determined, coherency states associated with the data item are updated to a monitored state. As a result, invalidating requests to the data item are detected through a combination of the request type and the monitored coherency state of the data item.
    Type: Grant
    Filed: January 20, 2012
    Date of Patent: April 1, 2014
    Assignee: Intel Corporation
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
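
The monitoring mechanism shared by this patent and the related entries below can be sketched as attribute bits attached to a cached line: a transactional access sets a read or write monitor, and an external invalidating request to a monitored line is reported as a conflict. The enum, field names, and the way the conflict is returned are assumptions of this sketch, not the patented implementation.

```c
/* Hypothetical model of read/write monitor attributes on a cache line:
 * combining the request type with the monitored state detects conflicts. */
#include <stdio.h>

enum coherency { INVALID, SHARED, EXCLUSIVE, MODIFIED };

struct line {
    long           addr;
    enum coherency state;
    int            read_monitor;     /* attribute bits added to the state */
    int            write_monitor;
};

/* A transactional load sets the read monitor on the accessed data item. */
static void monitored_load(struct line *l)
{
    l->read_monitor = 1;
}

/* External snoop: an invalidating request to a monitored line means the
 * transaction's read or write set has been disturbed. */
static int external_invalidate(struct line *l)
{
    int conflict = l->read_monitor || l->write_monitor;
    l->state = INVALID;
    l->read_monitor = l->write_monitor = 0;   /* the monitor is lost        */
    return conflict;                          /* caller would abort the tx  */
}

int main(void)
{
    struct line l = { .addr = 0x1000, .state = SHARED };

    monitored_load(&l);
    printf("conflict on external invalidate: %d\n", external_invalidate(&l));
    return 0;
}
```
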
  • Patent number: 8627017
    Abstract: A method and apparatus for monitoring memory accesses in hardware to support transactional execution is herein described. Attributes are used to monitor accesses to data items without regard for detection at physical storage structure granularity, instead ensuring monitoring at least at data item granularity. As an example, attributes are added to state bits of a cache to enable new cache coherency states. Upon a monitored memory access to a data item, which may be selectively determined, coherency states associated with the data item are updated to a monitored state. As a result, invalidating requests to the data item are detected through a combination of the request type and the monitored coherency state of the data item.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: January 7, 2014
    Assignee: Intel Corporation
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
  • Publication number: 20130326145
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing efficient communication between caches in a hierarchical caching design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a lower level cache communicably interfaced with the data bus; a higher level cache communicably interfaced with the data bus; one or more data buffers; and one or more dataless buffers. The data buffers in such an embodiment are communicably interfaced with the data bus, and each of the one or more data buffers has a buffer memory to buffer a full cache line, one or more control bits to indicate the state of the respective data buffer, and an address associated with the full cache line.
    Type: Application
    Filed: December 23, 2011
    Publication date: December 5, 2013
    Inventors: Ron Shalev, Yiftach Gilad, Shlomo Raikin, Igor Yanover, Stanislav Shwartsman, Raanan Sade
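
A small illustration of the two buffer flavors the abstract enumerates: a data buffer carrying a full cache line together with control bits and an address, and a dataless buffer tracking only an address and control bits. Field names, widths, and the meaning of the control bits are assumptions made for the sketch.

```c
/* Hypothetical layouts for the data and dataless buffers sitting between
 * cache levels; the control-bit meanings are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64

struct data_buffer {
    uint64_t addr;                   /* address associated with the line       */
    uint8_t  line[LINE_BYTES];       /* buffer memory for one full cache line  */
    unsigned valid : 1;              /* control bits describing buffer state   */
    unsigned dirty : 1;
};

struct dataless_buffer {
    uint64_t addr;                   /* request tracked without a data payload */
    unsigned valid : 1;
    unsigned pending : 1;
};

int main(void)
{
    struct data_buffer     db = { .addr = 0x40, .valid = 1, .dirty = 1 };
    struct dataless_buffer nb = { .addr = 0x80, .valid = 1, .pending = 1 };

    db.line[0] = 0xab;               /* the data buffer actually carries bytes */
    printf("data buffer     @0x%llx dirty=%d first byte=0x%02x\n",
           (unsigned long long)db.addr, db.dirty, db.line[0]);
    printf("dataless buffer @0x%llx pending=%d (no payload)\n",
           (unsigned long long)nb.addr, nb.pending);
    return 0;
}
```
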
  • Patent number: 8516577
    Abstract: In one embodiment, the present invention includes a method for identifying a termination sequence for an atomic memory operation executed by a first thread, associating a timer with the first thread, and preventing the first thread from execution of a memory cluster operation after completion of the atomic memory operation until a prevention window has passed. This method may be executed by regulation logic associated with a memory execution unit of a processor, in some embodiments. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 22, 2010
    Date of Patent: August 20, 2013
    Assignee: Intel Corporation
    Inventors: Michael S. Bair, David W. Burns, Robert S. Chappell, Prakash Math, Leslie A. Ong, Pankaj Raghuvanshi, Shlomo Raikin, Raanan Sade, Michael D. Tucknott, Igor Yanover
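
The prevention window can be modeled as a per-thread countdown: identifying the termination sequence of an atomic memory operation arms a timer, and memory-cluster operations from that thread are refused until the timer expires. The cycle count and function names below are illustrative assumptions.

```c
/* Hypothetical model of the prevention window following an atomic
 * memory operation; the window length is an assumed, illustrative value. */
#include <stdio.h>

#define PREVENTION_WINDOW 16          /* cycles; illustrative value           */

struct thread_state {
    int window_remaining;             /* >0 means the thread is still blocked */
};

/* Called when the termination sequence of an atomic operation is seen. */
static void atomic_op_terminated(struct thread_state *t)
{
    t->window_remaining = PREVENTION_WINDOW;   /* arm the timer               */
}

static int may_issue_memory_op(const struct thread_state *t)
{
    return t->window_remaining == 0;           /* blocked while armed         */
}

static void tick(struct thread_state *t)
{
    if (t->window_remaining > 0)
        t->window_remaining--;
}

int main(void)
{
    struct thread_state t = { 0 };

    atomic_op_terminated(&t);                  /* e.g. end of a locked RMW    */
    for (int cycle = 0; cycle <= PREVENTION_WINDOW; cycle++) {
        if (may_issue_memory_op(&t))
            printf("cycle %d: memory ops allowed again\n", cycle);
        tick(&t);
    }
    return 0;
}
```
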
  • Patent number: 8352683
    Abstract: A method and system to reduce the power consumption of a memory device. In one embodiment of the invention, the memory device is an N-way set-associative level one (L1) cache memory, and logic coupled with the data cache memory facilitates access to only part of the N ways of the N-way set-associative L1 cache memory in response to a load instruction or a store instruction. By reducing the number of ways accessed in the N-way set-associative L1 cache memory for each load or store request, the power requirements of the N-way set-associative L1 cache memory are reduced in one embodiment of the invention. In one embodiment of the invention, when a prediction is made that an access to the cache memory requires only the data arrays of the N-way set-associative L1 cache memory, access to the fill buffers is deactivated or disabled.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: January 8, 2013
    Assignee: Intel Corporation
    Inventors: Ehud Cohen, Oleg Margulis, Raanan Sade, Stanislav Shwartsman
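
A behavioral sketch of the power-saving lookup: rather than probing all N ways (and the fill buffers) on every access, a predictor selects a subset of ways to enable, and the full probe happens only when the prediction fails. The trivial address-hash predictor here is purely an assumption; the patent does not prescribe it.

```c
/* Hypothetical way-prediction model: a predicted-way probe first, a full
 * N-way probe only on misprediction. The predictor is an assumption. */
#include <stdint.h>
#include <stdio.h>

#define WAYS 8
#define SETS 64

struct set {
    uint64_t tag[WAYS];
    int      valid[WAYS];
};

static struct set l1[SETS];

/* Predict a single way from address bits; a real predictor would use more
 * state, but any subset-of-ways choice reduces array activity. */
static int predict_way(uint64_t addr) { return (int)((addr >> 12) % WAYS); }

static int lookup(uint64_t addr, int *ways_probed)
{
    int set = (int)((addr >> 6) % SETS);
    int way = predict_way(addr);

    *ways_probed = 1;                                /* only one way enabled */
    if (l1[set].valid[way] && l1[set].tag[way] == addr >> 12)
        return 1;                                    /* predicted-way hit    */

    *ways_probed = WAYS;                             /* fall back: all ways  */
    for (int w = 0; w < WAYS; w++)
        if (l1[set].valid[w] && l1[set].tag[w] == addr >> 12)
            return 1;
    return 0;
}

int main(void)
{
    uint64_t addr = 0x12340;
    int set = (int)((addr >> 6) % SETS);
    int probed;

    l1[set].tag[predict_way(addr)]   = addr >> 12;   /* install the line     */
    l1[set].valid[predict_way(addr)] = 1;

    int hit = lookup(addr, &probed);
    printf("hit=%d, ways probed=%d\n", hit, probed);
    return 0;
}
```
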
  • Patent number: 8271732
    Abstract: In one embodiment, a cache memory includes a data array having N ways and M sets and at least one fill buffer coupled to the data array, where the data array is segmented into multiple array portions such that only one of the portions is to be accessed to seek data for a memory request if the memory request is predicted to hit in the data array. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 4, 2008
    Date of Patent: September 18, 2012
    Assignee: Intel Corporation
    Inventors: Ehud Cohen, Oleg Margulis, Raanan Sade, Stanislav Shwartsman
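
The segmentation in this earlier patent can be illustrated the same way: the ways of a set are grouped into array portions, and when the request is predicted to hit, only the portion expected to hold the line is powered and read. The portion count and the predicted-hit input are assumptions of this sketch.

```c
/* Hypothetical model of a data array split into portions, with only one
 * portion enabled when a hit is predicted. Sizes are assumptions. */
#include <stdio.h>

#define WAYS     8
#define PORTIONS 2                    /* data array split into two halves   */
#define WAYS_PER_PORTION (WAYS / PORTIONS)

/* Which array portion a given way lives in. */
static int portion_of(int way) { return way / WAYS_PER_PORTION; }

/* Serve one way's data, counting how many array portions were powered. */
static int read_data(int hit_way, int predicted_hit, int *portions_enabled)
{
    if (predicted_hit) {
        *portions_enabled = 1;        /* enable only the portion we need    */
        return portion_of(hit_way);
    }
    *portions_enabled = PORTIONS;     /* otherwise all portions (and the
                                         fill buffers) take part            */
    return portion_of(hit_way);
}

int main(void)
{
    int enabled;
    int portion = read_data(5, /*predicted_hit=*/1, &enabled);

    printf("way 5 served from portion %d, portions enabled: %d\n",
           portion, enabled);
    return 0;
}
```
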
  • Publication number: 20120117334
    Abstract: A method and apparatus for monitoring memory accesses in hardware to support transactional execution is herein described. Attributes are used to monitor accesses to data items without regard for detection at physical storage structure granularity, instead ensuring monitoring at least at data item granularity. As an example, attributes are added to state bits of a cache to enable new cache coherency states. Upon a monitored memory access to a data item, which may be selectively determined, coherency states associated with the data item are updated to a monitored state. As a result, invalidating requests to the data item are detected through a combination of the request type and the monitored coherency state of the data item.
    Type: Application
    Filed: January 20, 2012
    Publication date: May 10, 2012
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
  • Publication number: 20120072984
    Abstract: In one embodiment, the present invention includes a method for identifying a termination sequence for an atomic memory operation executed by a first thread, associating a timer with the first thread, and preventing the first thread from execution of a memory cluster operation after completion of the atomic memory operation until a prevention window has passed. This method may be executed by regulation logic associated with a memory execution unit of a processor, in some embodiments. Other embodiments are described and claimed.
    Type: Application
    Filed: September 22, 2010
    Publication date: March 22, 2012
    Inventors: Michael S. Bair, David W. Burns, Robert S. Chappell, Prakash Math, Leslie A. Ong, Pankaj Raghuvanshi, Shlomo Raikin, Raanan Sade, Michael D. Tucknott, Igor Yanover
  • Publication number: 20110320723
    Abstract: A method and system to reduce the power consumption of a memory device. In one embodiment of the invention, the memory device is an N-way set-associative level one (L1) cache memory, and logic coupled with the data cache memory facilitates access to only part of the N ways of the N-way set-associative L1 cache memory in response to a load instruction or a store instruction. By reducing the number of ways accessed in the N-way set-associative L1 cache memory for each load or store request, the power requirements of the N-way set-associative L1 cache memory are reduced in one embodiment of the invention. In one embodiment of the invention, when a prediction is made that an access to the cache memory requires only the data arrays of the N-way set-associative L1 cache memory, access to the fill buffers is deactivated or disabled.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Inventors: Ehud Cohen, Oleg Margulis, Raanan Sade, Stanislav Shwartsman
  • Publication number: 20100169382
    Abstract: A method and apparatus for a metaphysical address space for holding lossy metadata is herein described. An explicit or implicit metadata access operation referencing the data address of a data item is encountered. Hardware modifies the data address to a metadata address including a metaphysical extension. The metaphysical extension overlays one or more metaphysical address space(s) on the data address space. A portion of the metadata address including the metaphysical extension is utilized to search a tag array of the cache memory holding the data item. As a result, metadata access operations only hit metadata entries of the cache based on the metadata address extension. However, as the metadata is held within the cache, the metadata potentially competes with data for space within the cache.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
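
The metaphysical extension can be pictured as an extra tag bit ORed into the data item's address for metadata accesses, so data lookups and metadata lookups occupy disjoint tag spaces within the same cache, and the metadata stays lossy because it competes with data for entries. The bit position and the tiny round-robin cache below are illustrative assumptions.

```c
/* Hypothetical model of the metaphysical extension: metadata reuses the
 * data address plus one extra tag bit, so the two can never alias. */
#include <stdint.h>
#include <stdio.h>

#define META_EXT   (1ull << 63)       /* metaphysical extension tag bit     */
#define CACHE_SIZE 4

struct entry { uint64_t tag; int valid; };
static struct entry cache[CACHE_SIZE];

static int lookup(uint64_t tagged_addr)
{
    for (int i = 0; i < CACHE_SIZE; i++)
        if (cache[i].valid && cache[i].tag == tagged_addr)
            return 1;
    return 0;
}

static void install(uint64_t tagged_addr)
{
    /* Lossy: metadata simply competes with data for the same entries. */
    static int next;

    cache[next].tag   = tagged_addr;
    cache[next].valid = 1;
    next = (next + 1) % CACHE_SIZE;
}

int main(void)
{
    uint64_t data_addr = 0x1000;

    install(data_addr);               /* the data item itself               */
    install(data_addr | META_EXT);    /* its metadata: same address plus the
                                         metaphysical extension bit         */
    printf("data hit: %d, metadata hit: %d\n",
           lookup(data_addr), lookup(data_addr | META_EXT));
    return 0;
}
```
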
  • Publication number: 20100169579
    Abstract: A method and apparatus for monitoring memory accesses in hardware to support transactional execution is herein described. Attributes are used to monitor accesses to data items without regard for detection at physical storage structure granularity, instead ensuring monitoring at least at data item granularity. As an example, attributes are added to state bits of a cache to enable new cache coherency states. Upon a monitored memory access to a data item, which may be selectively determined, coherency states associated with the data item are updated to a monitored state. As a result, invalidating requests to the data item are detected through a combination of the request type and the monitored coherency state of the data item.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
  • Publication number: 20100169581
    Abstract: A method and apparatus for extending cache coherency to hold buffered data to support transactional execution is herein described. A transactional store operation referencing an address associated with a data item is performed in a buffered manner. Here, the coherency states associated with the cache lines that hold the data item are transitioned to a buffered state. In response to local requests for the buffered data item, the data item is provided to ensure internal transactional sequential ordering. However, in response to external access requests, a miss response is provided to ensure the transactionally updated data item is not made globally visible until commit. Upon commit, the buffered lines are transitioned to a modified state to make the data item globally visible.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
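
A compact model of the buffered state: a transactional store moves the line into a BUFFERED state, local reads still see the new value, external requests are answered as misses, and commit promotes the line to MODIFIED. The enum values and accessor names are assumptions for illustration.

```c
/* Hypothetical state machine for the buffered coherency state above. */
#include <stdio.h>

enum state { INVALID, SHARED, EXCLUSIVE, MODIFIED, BUFFERED };

struct line { int value; enum state st; };

/* A transactional store keeps the update local in the BUFFERED state. */
static void tx_store(struct line *l, int v)
{
    l->value = v;
    l->st = BUFFERED;
}

static int local_read(const struct line *l, int *v)
{
    if (l->st == INVALID) return 0;
    *v = l->value;                    /* local reads see the buffered data  */
    return 1;
}

static int external_read(const struct line *l, int *v)
{
    if (l->st == BUFFERED || l->st == INVALID)
        return 0;                     /* miss: buffered data stays private  */
    *v = l->value;
    return 1;
}

static void commit(struct line *l)
{
    if (l->st == BUFFERED)
        l->st = MODIFIED;             /* now globally visible               */
}

int main(void)
{
    struct line l = { .value = 1, .st = EXCLUSIVE };
    int local_v = 0, ext_v = 0;

    tx_store(&l, 42);
    local_read(&l, &local_v);
    printf("local sees %d, external hit before commit: %d\n",
           local_v, external_read(&l, &ext_v));

    commit(&l);
    int hit = external_read(&l, &ext_v);
    printf("external hit after commit: %d (value %d)\n", hit, ext_v);
    return 0;
}
```
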
  • Publication number: 20100146212
    Abstract: In one embodiment, a cache memory includes a data array having N ways and M sets and at least one fill buffer coupled to the data array, where the data array is segmented into multiple array portions such that only one of the portions is to be accessed to seek data for a memory request if the memory request is predicted to hit in the data array. Other embodiments are described and claimed.
    Type: Application
    Filed: December 4, 2008
    Publication date: June 10, 2010
    Inventors: Ehud Cohen, Oleg Margulis, Raanan Sade, Stanislav Shwartsman