Cache Access Modes (epo) Patents (Class 711/E12.052)
  • Patent number: 11704031
    Abstract: A system-on-chip is connected to a first memory device and a second memory device. The system-on-chip comprises a memory controller configured to control an interleaving access operation on the first and second memory devices. A modem processor is configured to provide an address for accessing the first or second memory devices. A linear address remapping logic is configured to remap an address received from the modem processor and to provide the remapped address to the memory controller. The memory controller performs a linear access operation on the first or second memory device in response to receiving the remapped address.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: July 18, 2023
    Inventor: Dongsik Cho
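    A minimal C sketch of the contrast between the controller's interleaved decode and the remapping path described above; the channel-bit position, chunk size, and remap rule are illustrative assumptions, not details from the patent.

      #include <stdint.h>

      #define CH_BIT 8  /* assumed: this address bit selects device 0 or 1 */

      /* Interleaved decode: consecutive 256-byte chunks alternate devices. */
      static unsigned interleaved_device(uint64_t addr) {
          return (addr >> CH_BIT) & 1;
      }

      /* Remap a linear modem address so that, after the interleaved decode
       * above, consecutive addresses land contiguously in a single device. */
      static uint64_t remap_for_linear_access(uint64_t lin, uint64_t half_size) {
          unsigned dev  = lin >= half_size;            /* device holding it */
          uint64_t off  = dev ? lin - half_size : lin; /* offset in device  */
          uint64_t low  = off & ((1ULL << CH_BIT) - 1);
          uint64_t high = off >> CH_BIT;
          return (high << (CH_BIT + 1)) | ((uint64_t)dev << CH_BIT) | low;
      }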
  • Patent number: 11579792
    Abstract: According to one embodiment, a memory system includes a non-volatile memory array with a plurality of memory cells. Each memory cell is a multilevel cell to which multibit data can be written. The non-volatile memory array includes a first storage region in which the multibit data of a first bit level is written and a second storage region in which data of a second bit level less than the first bit level is written. A memory controller is configured to move pieces of data from the first storage region to the second storage region based on the number of data read requests for the pieces of data received over a period of time or on external information received from a host device that sends read requests.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: February 14, 2023
    Assignee: Kioxia Corporation
    Inventors: Masaomi Teranishi, Kentaro Inomata
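    A C sketch of the read-count-driven migration the abstract describes, assuming a QLC-to-SLC style arrangement; the threshold, window handling, and data structures are illustrative assumptions.

      #include <stdint.h>
      #include <stddef.h>

      #define READ_HOT_THRESHOLD 1000   /* assumed reads-per-window cutoff */

      struct chunk {
          uint32_t reads_this_window;   /* read requests seen this period */
          int in_low_bit_region;        /* already moved to the second region? */
      };

      /* At the end of each observation window, move frequently read chunks
       * from the first (higher bit level) region to the second (lower bit
       * level, faster to read) region, then reset the counters. */
      static void migrate_hot_chunks(struct chunk *chunks, size_t n,
                                     void (*move_to_low_bit)(size_t idx)) {
          for (size_t i = 0; i < n; i++) {
              if (!chunks[i].in_low_bit_region &&
                  chunks[i].reads_this_window >= READ_HOT_THRESHOLD) {
                  move_to_low_bit(i);
                  chunks[i].in_low_bit_region = 1;
              }
              chunks[i].reads_this_window = 0;
          }
      }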
  • Patent number: 8996820
    Abstract: A multi-core processor system includes a processor configured to establish coherency of shared data values stored in a cache memory accessed by multiple cores; detect a first thread executed by a first core among the cores; identify, upon detecting the first thread, a second thread under execution by a second core other than the first core and among the cores; determine whether shared data commonly accessed by the first thread and the second thread is present; and stop establishment of coherency for a first cache memory corresponding to the first core and a second cache memory corresponding to the second core, upon determining that no commonly accessed shared data is present.
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: March 31, 2015
    Assignee: Fujitsu Limited
    Inventors: Takahisa Suzuki, Koichiro Yamashita, Hiromasa Yamauchi, Koji Kurihara
  • Patent number: 8812784
    Abstract: A command executing method for a memory storage apparatus, and a memory controller and the memory storage apparatus using the same, are provided. The method includes, during a data merging operation, receiving a write command and write data corresponding to the write command from a host system. The method also includes temporarily storing the write data in a buffer memory and, at a delay time point, transmitting a response message to the host system, where the delay time point is set by adding a dummy delay time to the time point at which the operation of writing the write data into the buffer memory is completed. Accordingly, the method can effectively level the response times of executing write commands during the data merging operation, thereby shortening the maximum access time.
    Type: Grant
    Filed: September 25, 2011
    Date of Patent: August 19, 2014
    Assignee: Phison Electronics Corp.
    Inventor: Chih-Kang Yeh
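    The response-leveling rule reduces to one line of arithmetic; a hedged C sketch follows, with the delay constant an invented placeholder.

      #include <stdint.h>

      #define DUMMY_DELAY_US 200  /* assumed: fixed padding used during merges */

      /* The acknowledgement is sent not when the buffer write finishes but at
       * that time point plus a dummy delay, so write commands answered from
       * the buffer during a data merge all see similar response times. */
      static uint64_t response_time_point(uint64_t buffer_write_done_us) {
          return buffer_write_done_us + DUMMY_DELAY_US;
      }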
  • Patent number: 8732399
    Abstract: A technique to retain cached information during a low power mode, according to at least one embodiment. In one embodiment, information stored in a processor's local cache is saved to a shared cache before the processor is placed into a low power mode, such that other processors may access information from the shared cache instead of causing the low power mode processor to return from the low power mode to service an access to its local cache.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: May 20, 2014
    Assignee: Intel Corporation
    Inventors: Sanjeev Jahagirdar, Varghese George, Jose P. Allarey
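    A behavioral C sketch of the save step described above; the line layout and the shared-cache hook are assumptions for illustration.

      #include <stddef.h>
      #include <stdint.h>

      struct line { int valid; uint64_t tag; uint8_t data[64]; };

      /* Hypothetical hook that installs a line into the shared cache. */
      void shared_cache_fill(uint64_t tag, const uint8_t *data);

      /* Before entering the low power state, copy every valid local line to
       * the shared cache and invalidate it locally, so other processors hit
       * in the shared cache instead of waking this core with a snoop. */
      static void save_local_cache(struct line *local, size_t nlines) {
          for (size_t i = 0; i < nlines; i++) {
              if (local[i].valid) {
                  shared_cache_fill(local[i].tag, local[i].data);
                  local[i].valid = 0;
              }
          }
      }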
  • Patent number: 8706974
    Abstract: In a data processing system, a method includes a first master initiating a transaction via a system interconnect to a target device. After initiating the transaction, a snoop request corresponding to the transaction is provided to a cache of a second master. The transaction is completed. After completing the transaction, a snoop lookup operation corresponding to the snoop request in the cache of the second master is performed. The transaction may be completed prior to or after providing the snoop request. In response to performing the snoop lookup operation, a snoop response may be provided, where the snoop response is provided after completing the transaction. When the snoop response indicates an error, a snoop error may be provided to the first master.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: April 22, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
  • Patent number: 8700858
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: May 16, 2012
    Date of Patent: April 15, 2014
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Patent number: 8627009
    Abstract: A method and apparatus used within memory and data processing that reduce the number of references allowed in processor cache by using active rows to reject less frequently used references from the cache. Comparators within a memory controller generate a signal indicative of a row hit or miss, which is then applied to one or more demultiplexers to enable or disable transfer of a memory reference to processor cache locations. The cache may be a level one (L1) or level two (L2) cache holding data and/or instructions, or some combination of L1, L2, data, and instructions.
    Type: Grant
    Filed: September 16, 2008
    Date of Patent: January 7, 2014
    Assignee: Mosaid Technologies Incorporated
    Inventor: Nagi Nassief Mekhiel
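    A C sketch of the comparator-and-demultiplexer decision described above, reduced to software; the row/bank bit positions are illustrative assumptions.

      #include <stdint.h>

      #define NBANKS 8

      static uint64_t open_row[NBANKS];   /* row currently active per bank */

      /* A reference is admitted into the processor cache only when it hits
       * the active row of its bank; the hit/miss signal would drive the
       * demultiplexer that enables or disables the cache fill. */
      static int allow_cache_fill(uint64_t addr) {
          unsigned bank = (addr >> 13) & (NBANKS - 1);  /* assumed bank bits */
          uint64_t row  = addr >> 16;                   /* assumed row bits  */
          int hit = (open_row[bank] == row);
          open_row[bank] = row;   /* the access leaves this row active */
          return hit;
      }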
  • Patent number: 8549220
    Abstract: Method, system, and computer program product embodiments are provided for identifying, by a processor device, working data on a stride basis in a computing storage environment that destages data from nonvolatile storage (NVS) to a storage unit. A multi-update bit is established for each of a plurality of strides in a modified cache, wherein the multi-update bit is adapted to indicate that a corresponding stride is part of at least one track in a working set, which refers to a group of frequently updated tracks. The plurality of strides are scanned based on a schedule to identify tracks for destaging. An operation to destage is performed on a selected track identified during the scanning if the multi-update bit of a selected stride on the selected track is set to indicate the selected track is part of the working set and if the NVS is about 90% full or greater.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: October 1, 2013
    Assignee: International Business Machines Corporation
    Inventors: Brent C. Beardsley, Michael T. Benhase, Lokesh M. Gupta, Joseph S. Hyde, II, Sonny E. Williams
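    A C sketch of the destage scan, under the assumption that tracks outside the working set destage normally while working-set tracks wait for the NVS pressure condition; all structures are illustrative.

      #include <stddef.h>

      struct stride { int multi_update; };             /* working-set flag */
      struct track  { struct stride *stride; int modified; };

      void destage(struct track *t);                   /* hypothetical op  */

      static void scan_for_destage(struct track *tracks, size_t n,
                                   double nvs_fill_fraction) {
          for (size_t i = 0; i < n; i++) {
              if (!tracks[i].modified)
                  continue;
              /* Frequently updated tracks keep absorbing writes until the
               * NVS is about 90% full or greater, per the abstract. */
              if (tracks[i].stride->multi_update && nvs_fill_fraction < 0.90)
                  continue;
              destage(&tracks[i]);
              tracks[i].modified = 0;
          }
      }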
  • Patent number: 8527709
    Abstract: A technique to retain cached information during a low power mode, according to at least one embodiment. In one embodiment, information stored in a processor's local cache is saved to a shared cache before the processor is placed into a low power mode, such that other processors may access information from the shared cache instead of causing the low power mode processor to return from the low power mode to service an access to its local cache.
    Type: Grant
    Filed: July 20, 2007
    Date of Patent: September 3, 2013
    Assignee: Intel Corporation
    Inventors: Sanjeev Jahagirdar, Varghese George, Jose Allarey
  • Publication number: 20130219367
    Abstract: During execution of a program, situations are identified in which the atomicity of a pair of instructions that are to be executed atomically is violated, and a bug is detected as occurring in the program at that pair of instructions. The pairs of instructions that are to be executed atomically can be identified in different manners, such as by executing a program multiple times and using the results of those executions to automatically identify the pairs of instructions.
    Type: Application
    Filed: September 19, 2007
    Publication date: August 22, 2013
    Inventors: Yuanyuan Zhou, Shan Lu, Joseph Andrew Tucek
  • Publication number: 20130185508
    Abstract: A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache.
    Type: Application
    Filed: January 12, 2012
    Publication date: July 18, 2013
    Applicant: FUSION-IO, INC.
    Inventors: Nisha Talagala, Swaminathan Sundararaman
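    The admission test can be stated compactly; a C sketch under assumed values for the access threshold and sequentiality metric follows.

      #include <stdint.h>

      #define ACCESS_THRESHOLD 2   /* assumed: touches required before admission */
      #define SEQ_RUN_LIMIT    8   /* assumed: run length treated as sequential  */

      struct access_meta {
          uint32_t touch_count;    /* accesses recorded for this logical ID,
                                      maintained even while the data is uncached */
          uint32_t seq_run;        /* length of the current sequential run */
      };

      /* Admit a block once it has been touched often enough and is not part
       * of a long sequential scan that would pollute the cache. */
      static int admit_to_cache(const struct access_meta *m) {
          return m->touch_count >= ACCESS_THRESHOLD && m->seq_run < SEQ_RUN_LIMIT;
      }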
  • Patent number: 8478947
    Abstract: A method of controlling a memory and a memory controller are disclosed. The memory controller is operable to control a memory that is operable in a plurality of modes, the memory controller comprising: memory interface logic configurable to interact with the memory in each of the plurality of modes; and memory mode change logic operable, in response to a memory mode change request instruction specifying a predetermined one of the plurality of modes being issued by the memory interface logic to the memory, to request that the memory interface logic be configured to interact with the memory in the predetermined one of the plurality of modes, and to prevent interaction between the memory interface logic and the memory until the memory interface logic confirms that it is so configured.
    Type: Grant
    Filed: July 5, 2005
    Date of Patent: July 2, 2013
    Assignee: ARM Limited
    Inventors: Graeme Leslie Ingram, Ian James Quinn
  • Patent number: 8447933
    Abstract: In a multi-core processor of the shared-memory type, deterioration of data processing capability caused by contention among memory accesses from a plurality of processors is suppressed effectively. In a memory access controlling system for controlling accesses to a cache memory in a data read-ahead process, where the shared-memory multi-core processor performs a task including a data read-ahead thread for executing data read-ahead and a parallel execution thread for processing in parallel with the data read-ahead, the system includes a data read-ahead controller which controls the interval between data read-ahead processes in the data read-ahead thread, adapting it to a data flow that varies with the input value of the parallel process in the parallel execution thread. By controlling the interval between the data read-ahead processes, contention among memory accesses in the multi-core processor is suppressed.
    Type: Grant
    Filed: February 4, 2008
    Date of Patent: May 21, 2013
    Assignee: NEC Corporation
    Inventor: Kosuke Nishihara
  • Patent number: 8392650
    Abstract: A system provides for a signal to indicate when a memory device exits from self-refresh. Thus, at substantially the same time as the memory device exits self-refresh (just before or after), an indicator signal can be triggered to indicate normal operation or standard refresh operation and normal memory access of the memory device. A memory controller can read the indicator signal to determine whether the memory device is in self-refresh, and can therefore more carefully manage the timing of sending a command to the memory device while reducing the delay time typically associated with detecting a self-refresh condition.
    Type: Grant
    Filed: April 1, 2010
    Date of Patent: March 5, 2013
    Assignee: Intel Corporation
    Inventor: Kuljit S. Bains
  • Patent number: 8370578
    Abstract: Provided are a storage controller, and a method of controlling the same, which avoid access conflicts on a parallel bus connected to a local memory when part of a storage area of the local memory is used as cache memory. A storage controller which controls data between a host system and a storage apparatus comprises a data transfer control unit which controls data transfers on the basis of read/write requests from the host system; a cache memory which is connected to the data transfer control unit via a parallel bus; a control unit which is connected to the data transfer control unit via a serial bus; and a local memory which is connected to the control unit via a parallel bus, wherein the control unit decides to assign, from a cache segment of either the cache memory or the local memory, a storage area which stores the data, on the basis of a CPU operating rate and the path utilization of the parallel bus connected to the cache memory.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: February 5, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Yoshihiro Yoshii, Mitsuru Inoue, Kentaro Shimada, Sadahiro Sugimoto
  • Patent number: 8316187
    Abstract: Disclosed is a cache memory, design structure, and corresponding method for improving cache performance comprising one or more cache lines of equal size, each cache line adapted to store a cache block of data from a main memory in response to an access request from a processor; and a predict buffer, of size equal to the size of the cache lines, configured to store a next block of data from said main memory in response to a predict-fetch signal generated using at least one previous access request.
    Type: Grant
    Filed: July 8, 2008
    Date of Patent: November 20, 2012
    Assignee: International Business Machines Corporation
    Inventor: Anil Pothireddy
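    A C sketch of the miss path with a one-block predict buffer; the simple next-sequential-block prediction and the helper hooks are assumptions (the patent derives the predict-fetch signal from previous access requests).

      #include <stdint.h>
      #include <string.h>

      #define BLOCK 64

      static uint64_t predict_tag = ~0ULL;   /* block tag held in predict buffer */
      static uint8_t  predict_buf[BLOCK];

      void cache_fill(uint64_t tag, const uint8_t *data);   /* hypothetical */
      void memory_read(uint64_t tag, uint8_t *out);         /* hypothetical */

      /* On a cache miss, serve from the predict buffer when possible, then
       * predict-fetch the next block so a sequential pattern keeps hitting. */
      static void handle_miss(uint64_t tag, uint8_t *out) {
          if (tag == predict_tag) {
              memcpy(out, predict_buf, BLOCK);
              cache_fill(tag, predict_buf);
          } else {
              memory_read(tag, out);
              cache_fill(tag, out);
          }
          predict_tag = tag + 1;
          memory_read(predict_tag, predict_buf);
      }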
  • Patent number: 8285935
    Abstract: A cache control apparatus is provided in a computer system including an access source and a storage apparatus. Based on I/O status information, which denotes the I/O status in accordance with I/O commands from the access source, this apparatus determines whether or not the I/O performance of the access source has dropped. In a case where the result of this determination is affirmative, the cache control apparatus changes the cache utilization status, specified from cache utilization status information denoting the cache utilization status of a cache area, to a cache utilization status that improves I/O performance.
    Type: Grant
    Filed: July 29, 2009
    Date of Patent: October 9, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Yosuke Kasai, Manabu Obana, Akihiko Sakaguchi
  • Publication number: 20120221796
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system executes a single program thread, the first and second load/store execution units are reconfigured to handle instructions from that single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Application
    Filed: February 28, 2011
    Publication date: August 30, 2012
    Inventor: THANG M. TRAN
  • Patent number: 8195891
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: June 5, 2012
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Patent number: 8171225
    Abstract: A method includes storing a plurality of data in a data RAM; holding information for all outstanding requests forwarded to a next-level memory subsystem; clearing information associated with a serviced request after the request has been fulfilled; determining if a subsequent request matches an address supplied to one or more requests already in flight to the next-level memory subsystem; matching fulfilled requests serviced by the next-level memory subsystem to at least one requester that issued requests while an original request was in flight to the next-level memory subsystem; storing information specific to each request, comprising a set attribute and a way attribute that identify where the returned data should be held in the data RAM once the data is returned, the information specific to each request further including at least one of thread ID, instruction queue position, and color; and scheduling hit and miss data returns.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: May 1, 2012
    Assignee: Intel Corporation
    Inventors: Thomas A Piazza, Michael K Dwyer, Scott Cheng
  • Patent number: 8166246
    Abstract: A computer-implemented method, a processor chip, a data processing system, and a computer program product process information in a store cache of a data processing system. The store cache receives a first entry that includes a first address indicating a first segment of a cache line. The store cache then receives a second entry including a second address indicating a second segment of the cache line. Responsive to the first segment not being equal to the second segment, the first entry is chained to the second entry.
    Type: Grant
    Filed: January 31, 2008
    Date of Patent: April 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Thomas Leo Jeremiah, William Lloyd McNeil, Hugh Shen, William John Starke
  • Publication number: 20120030428
    Abstract: According to one embodiment, an information processing device includes a first determination section and a setting section. The first determination section determines inconsistency between first data and second data. The first data is stored in a nonvolatile semiconductor memory. The second data corresponds to the first data and is stored in a semiconductor memory. The setting section sets the execution timing of write-back based on access frequency information associated with the second data.
    Type: Application
    Filed: March 21, 2011
    Publication date: February 2, 2012
    Inventors: Kenta YASUFUKU, Masaki MIYAGAWA, Goh UEMURA, Tsutomu OWA, Tsutomu UNESAKI, Atsushi KUNIMATSU
  • Patent number: 8095739
    Abstract: A data processing system employing a weakly ordered storage architecture includes first and second sets of processing units coupled to each other and to data storage by an interconnect fabric. Each processing unit has a processor core with an associated cache hierarchy including at least level one, level two, and level three cache memories. In response to a request to perform an update to a portion of a first image of memory contained in the level three cache memory of a first processing unit while at least one kill-type command is pending at the first processing unit, the cache hierarchy of the first processing unit permits the update to be exposed to any first processor core only after the at least one kill-type command is complete.
    Type: Grant
    Filed: April 13, 2009
    Date of Patent: January 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: David W. Cummings, Guy L. Guthrie, Harmony L. Helterhoff, Derek E. Williams
  • Patent number: 8065486
    Abstract: A cache memory control circuit includes a selecting section configured to be capable of selecting, in a predetermined order, each way or a predetermined two or more ways of a cache memory having multiple ways; a comparing section configured to detect a cache hit in each way; and a control section configured to, upon detection of a cache hit, stop a selection of the respective ways or the predetermined two or more ways at the selecting section.
    Type: Grant
    Filed: March 9, 2009
    Date of Patent: November 22, 2011
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Toshio Fujisawa
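    A C model of the serial way selection; four ways and the early-out order are illustrative assumptions.

      #include <stdint.h>

      #define NWAYS 4

      struct way { uint64_t tag; int valid; };

      /* Ways are examined one at a time in a predetermined order, and the
       * selection stops at the first comparator hit, so the remaining ways
       * are never driven for this access. Returns the way index, -1 on miss. */
      static int serial_lookup(const struct way ways[NWAYS], uint64_t tag) {
          for (int w = 0; w < NWAYS; w++) {
              if (ways[w].valid && ways[w].tag == tag)
                  return w;   /* stop the selection here */
          }
          return -1;
      }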
  • Patent number: 8065487
    Abstract: A design structure, embodied in a machine-readable storage medium, for designing, manufacturing, and/or testing shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores is provided. The design structure includes means for receiving from a processor core a request to load a cache line in the shared cache; means for determining whether the shared cache is full; means for determining, if the shared cache is full, whether a cache line is stored in the shared cache that has been accessed by fewer than all the processor cores sharing the cache; and means for evicting such a cache line if one is present.
    Type: Grant
    Filed: May 1, 2008
    Date of Patent: November 22, 2011
    Assignee: International Business Machines Corporation
    Inventors: Marcus L. Kornegay, Ngan N. Pham
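    A C sketch of the victim choice, assuming a per-line bitmask records which cores have touched the line; the fallback policy is an assumption.

      #include <stdint.h>
      #include <stddef.h>

      #define NCORES 4

      struct sline { int valid; uint8_t accessed_mask; /* one bit per core */ };

      static int popcount8(uint8_t x) {
          int n = 0;
          for (; x; x >>= 1) n += x & 1;
          return n;
      }

      /* When the shared cache is full, prefer evicting a line accessed by
       * fewer than all cores; -1 means every line is fully shared and the
       * normal replacement policy should decide instead. */
      static int pick_victim(const struct sline *set, size_t nways) {
          for (size_t i = 0; i < nways; i++)
              if (set[i].valid && popcount8(set[i].accessed_mask) < NCORES)
                  return (int)i;
          return -1;
      }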
  • Publication number: 20110276765
    Abstract: Systems and methods for managing cache configurations are disclosed. In accordance with a method, a system management control module may receive access rights of a host to a logical storage unit and may also receive a desired caching policy for caching data associated with the logical storage unit and the host. The system management control module may determine an allowable caching policy indicator for the logical storage unit. The allowable caching policy indicator may indicate whether caching is permitted for data associated with input/output operations between the host and the logical storage unit. The system management control module may further set a caching policy for data associated with input/output operations between the host and the logical storage unit, based on at least one of the desired caching policy and the allowable caching policy indicator. The system management control module may also communicate the caching policy to the host.
    Type: Application
    Filed: May 10, 2010
    Publication date: November 10, 2011
    Applicant: DELL PRODUCTS L.P.
    Inventor: William Price Dawkins
  • Publication number: 20110252203
    Abstract: The apparatus and method described herein handle shared memory accesses between multiple processors utilizing lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses loaded from, and to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After a pre-determined number of re-executions, the transaction may be re-executed non-speculatively with locks/semaphores.
    Type: Application
    Filed: June 24, 2011
    Publication date: October 13, 2011
    Inventors: Sailesh Kottapalli, John H. Crawford, Kushagra Vaid
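    The retry-then-lock control flow can be sketched in C with stub primitives standing in for the hardware (try_transaction, lock_acquire, and lock_release are hypothetical names, and the retry count is an assumed value).

      #include <stdbool.h>

      #define MAX_RETRIES 3   /* assumed pre-determined retry count */

      bool try_transaction(void (*body)(void));  /* false if the transaction
                                                    buffer saw an invalidating
                                                    remote access */
      void lock_acquire(void);
      void lock_release(void);

      /* Execute the demarcated region speculatively; after too many aborts,
       * fall back to conventional lock-based execution. */
      static void run_atomic(void (*body)(void)) {
          for (int i = 0; i < MAX_RETRIES; i++)
              if (try_transaction(body))
                  return;
          lock_acquire();
          body();
          lock_release();
      }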
  • Publication number: 20110238927
    Abstract: Addressed is the problem that the use efficiency of a memory cache is low because, in contents distribution using a memory cache of limited capacity, the entire contents are stored in the cache even when only a part of the contents is accessed. The contents distribution device includes a contents holding unit 102 which stores contents to be distributed, a cache holding unit 103 which temporarily stores the contents to be distributed, a contents distribution unit 100 which distributes contents stored in the cache holding unit or the contents holding unit, and a cache control unit 101 which controls storage and deletion of contents in and from the cache holding unit. The cache control unit 101 sections the contents into a plurality of blocks and controls storage and deletion on a block basis, based on cache control information which sets, per block, a deletion waiting time before deletion from the cache holding unit.
    Type: Application
    Filed: November 18, 2009
    Publication date: September 29, 2011
    Inventor: Hiroyuki Hatano
  • Patent number: 7941601
    Abstract: A data process can be performed without lowering the data processing efficiency even when the sector length of the host device side is different from the sector length of the hard disk side. Partial data or whole data of a second data block which is based on a long sector defined on the hard disk side and surrounds the starting end and terminating end addresses of a first data block based on a host-defined sector is read from the hard disk and written to a flash memory before the data process using the flash memory as a cache is performed based on the command.
    Type: Grant
    Filed: December 20, 2006
    Date of Patent: May 10, 2011
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kenji Yoshida, Yoriharu Takai
  • Patent number: 7937534
    Abstract: Embodiments of an apparatus, method, and system for encoding direct cache access transactions based on a memory access data structure are disclosed. In one embodiment, an apparatus includes memory access logic and transaction logic. The memory access logic is to determine whether to allow a memory access based on a memory access data structure. The transaction logic is to assign direct cache access attributes to a transaction based on the memory access data structure.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: May 3, 2011
    Inventors: Rajesh Sankaran Madukkarumukumana, Sridhar Muthrasanallur, Ramakrishna Huggahalli, Rameshkumar G. Illikkal
  • Patent number: 7882322
    Abstract: A system and method to organize and use data sent over a double data rate interface so that the system operation does not experience a time penalty. The first cycle of data is used independently of the second cycle so that latency is not jeopardized. There are many applications. In a preferred embodiment for an L2 cache, the system transmits congruence class data in the first half and can start to access the L2 cache directory with the congruence class data.
    Type: Grant
    Filed: June 27, 2006
    Date of Patent: February 1, 2011
    Assignee: International Business Machines Corporation
    Inventors: Christopher J. Berry, Jonathan Y. Chen, Michael Fee, Patrick J. Meaney, Alan P. Wagstaff
  • Patent number: 7822919
    Abstract: A data process can be performed without lowering the data processing efficiency even when the sector length of the host device side is different from the sector length of the hard disk side. Partial data or whole data of a second data block which is based on a long sector defined on the hard disk side and surrounds the starting end and terminating end addresses of a first data block based on a host-defined sector is read from the hard disk and written to a flash memory before the data process using the flash memory as a cache is performed based on the command.
    Type: Grant
    Filed: December 20, 2006
    Date of Patent: October 26, 2010
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kenji Yoshida, Yoriharu Takai
  • Publication number: 20100262786
    Abstract: A data processing system employing a weakly ordered storage architecture includes first and second sets of processing units coupled to each other and to data storage by an interconnect fabric. Each processing unit has a processor core with an associated cache hierarchy including at least level one, level two, and level three cache memories. In response to a request to perform an update to a portion of a first image of memory contained in the level three cache memory of a first processing unit while at least one kill-type command is pending at the first processing unit, the cache hierarchy of the first processing unit permits the update to be exposed to any first processor core only after the at least one kill-type command is complete.
    Type: Application
    Filed: April 13, 2009
    Publication date: October 14, 2010
    Applicant: International Business Machines Corporation
    Inventors: David W. Cummings, Guy L. Guthrie, Harmony L. Helterhoff, Derek E. Williams
  • Publication number: 20090276575
    Abstract: According to one embodiment, an information processing apparatus includes a processor, a cache, and a cache controller. The processor is configured to output a memory access request for accessing an entity of a variable stored in a variable-storage region provided in a memory, by using a first or a second memory address. Both the first and second memory addresses are allocated to the variable-storage region. The cache is configured to store some of the data items stored in the memory. The cache controller is configured to access the memory or the cache by using a memory address designating the variable-storage region, in accordance with whichever of the first and second memory addresses is included in a memory access request coming from the processor.
    Type: Application
    Filed: April 8, 2009
    Publication date: November 5, 2009
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Yoriharu Takai, Kenji Yoshida
  • Publication number: 20090248984
    Abstract: There are disclosed a method and device for performing Copy-on-Write in a processor. The processor comprises: processor cores, L1 caches each of which is logically divided into a first L1 cache and a second L1 cache, and L2 caches. The first L1 cache is used for saving new data values, and the second L1 cache for saving old data values. The method can comprise the steps of: in response to a store operation from said processor core, judging whether a corresponding cache line in said L2 cache has been modified; if it is determined that the corresponding L2 cache line has not been modified, copying the old data value in the corresponding L2 cache line to said second L1 cache and writing the new data value to the corresponding L2 cache line; and if it is determined that the corresponding L2 cache line has been modified, writing the new data value to the corresponding L2 cache line directly.
    Type: Application
    Filed: March 24, 2009
    Publication date: October 1, 2009
    Applicant: International Business Machines Corporation
    Inventors: Xiao Wei Shen, Hua Yong Wang, Wen Bo Shen, Peng Shao
  • Publication number: 20090187727
    Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to fulfill the access to the cache memory.
    Type: Application
    Filed: January 23, 2008
    Publication date: July 23, 2009
    Applicant: SUN MICROSYSTEMS, INC.
    Inventors: Paul Caprioli, Martin Karlsson, Shailender Chaudhry
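    One plausible reading in C: fold a piece of non-address information (a hardware thread ID is used here purely as an example) into the set index. The bit widths and hash constant are assumptions.

      #include <stdint.h>

      #define OFFSET_BITS 6   /* assumed 64-byte lines */
      #define INDEX_BITS  9   /* assumed 512 sets      */

      /* Generate the cache index from address bits mixed with non-address
       * information, so identical addresses used in different contexts can
       * map to different sets. */
      static uint32_t cache_index(uint64_t addr, uint32_t thread_id) {
          uint32_t base  = (uint32_t)(addr >> OFFSET_BITS);
          uint32_t mixed = base ^ (thread_id * 0x9E3779B1u); /* assumed hash */
          return mixed & ((1u << INDEX_BITS) - 1);
      }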
  • Publication number: 20090132769
    Abstract: Systems and methods are provided that optimize memory allocation in hierarchical and/or distributed data storage. A memory management component facilitates a compact manner of identifying approximately how often a memory chunk is being used, to promote efficient operation of the system as a whole. Each memory location can be changed based on the corresponding memory access, which is determined through tracking statistical usage counts of memory locations and comparing them with a threshold value.
    Type: Application
    Filed: November 19, 2007
    Publication date: May 21, 2009
    Applicant: MICROSOFT CORPORATION
    Inventors: Steve Pronovost, Ketan K. Dalal, Ameet A. Chitre
  • Publication number: 20090077540
    Abstract: During execution of a program, situations are identified in which the atomicity of a pair of instructions that are to be executed atomically is violated, and a bug is detected as occurring in the program at that pair of instructions. The pairs of instructions that are to be executed atomically can be identified in different manners, such as by executing a program multiple times and using the results of those executions to automatically identify the pairs of instructions.
    Type: Application
    Filed: September 19, 2007
    Publication date: March 19, 2009
    Inventors: Yuanyuan Zhou, Shan Lu, Joseph Andrew Tucek
  • Publication number: 20090037664
    Abstract: A system and method for dynamically selecting the data fetch path improve system performance by adjusting data fetch paths based on application data fetch characteristics, which are determined through the use of a hit/miss tracker. This reduces data access latency for applications that have a low data reuse rate (streaming audio, video, multimedia, games, etc.), which improves overall application performance. It is dynamic in the sense that at any point in time when the cache hit rate becomes reasonable (a defined parameter), normal cache lookup operations resume. The system utilizes a hit/miss tracker which tracks hits and misses against a cache; if the miss rate surpasses a prespecified rate or matches an application profile, the hit/miss tracker causes the cache to be bypassed and the data to be pulled from main memory or another cache, thereby improving overall application performance.
    Type: Application
    Filed: August 2, 2007
    Publication date: February 5, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Marcus L. Kornegay, Ngan N. Pham
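    A windowed hit/miss tracker in C; the window size and the two miss-count thresholds are invented parameters (the abstract leaves the prespecified rate and the "reasonable" hit rate as tunables).

      #include <stdint.h>

      #define WINDOW        1024  /* assumed accesses per measurement window  */
      #define BYPASS_MISSES  900  /* assumed miss count that starts bypassing */
      #define RESUME_MISSES  500  /* assumed miss count that resumes lookups  */

      static uint32_t accesses, misses;
      static int bypassing;

      /* Record one access outcome; returns nonzero while requests should
       * bypass the cache and be pulled from main memory or another cache. */
      static int track_and_decide(int was_miss) {
          accesses++;
          misses += (was_miss != 0);
          if (accesses == WINDOW) {
              if (!bypassing && misses >= BYPASS_MISSES)
                  bypassing = 1;   /* low-reuse stream detected */
              else if (bypassing && misses <= RESUME_MISSES)
                  bypassing = 0;   /* hit rate reasonable: resume lookups */
              accesses = misses = 0;
          }
          return bypassing;
      }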
  • Publication number: 20090024799
    Abstract: A technique to retain cached information during a low power mode, according to at least one embodiment. In one embodiment, information stored in a processor's local cache is saved to a shared cache before the processor is placed into a low power mode, such that other processors may access information from the shared cache instead of causing the low power mode processor to return from the low power mode to service an access to its local cache.
    Type: Application
    Filed: July 20, 2007
    Publication date: January 22, 2009
    Inventors: Sanjeev Jahagirdar, Varghese George, Jose Allarey
  • Publication number: 20090006767
    Abstract: A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have any arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during the pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations, including an ephemeral/private store operation to set the filter word to a second state, are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate that a previous access occurred.
    Type: Application
    Filed: June 27, 2007
    Publication date: January 1, 2009
    Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Gad Sheaffer, Quinn Jacobson
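    The read-barrier elision reduces to a check on the filter word; a C sketch with a hypothetical barrier routine follows.

      #include <stdint.h>

      struct data_object {
          uint32_t filter;   /* first/default state = 0: not yet accessed
                                during the pendency of this transaction */
          /* ... payload ... */
      };

      void stm_read_barrier(struct data_object *o);   /* hypothetical */

      /* First read runs the full access barrier and sets the filter word
       * (in hardware, via an ephemeral/private store discarded on abort);
       * redundant reads see the second state and elide the barrier. */
      static void filtered_read(struct data_object *o) {
          if (o->filter == 0) {
              stm_read_barrier(o);
              o->filter = 1;
          }
          /* fast path: barrier elided */
      }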
  • Publication number: 20080320233
    Abstract: The complexity of the logic of the cache coherency manager unit is reduced by leveraging the data path for intervention messages and responses to carry data associated with writeback requests. A processor core unit sends a writeback request to the cache coherency manager unit; the request does not include the writeback data. Upon receiving an intervention message associated with the writeback request, the processor core unit provides an intervention message response to the cache coherency manager unit indicating that the writeback operation should not be cancelled. The intervention message response includes the writeback data. Because the cache coherency manager already requires a data path to handle data transfers between processor core units, little or no additional overhead needs to be added to handle data associated with writeback requests.
    Type: Application
    Filed: June 22, 2007
    Publication date: December 25, 2008
    Applicant: MIPS TECHNOLOGIES INC.
    Inventor: Ryan C. Kinter
  • Publication number: 20080301367
    Abstract: The present patent application discloses a method and apparatus for using external and internal memory to cancel traffic interference, comprising storing data samples in an external memory and processing the data samples in an internal memory, wherein the external memory is low-bandwidth memory and the internal memory is a high-bandwidth on-board cache. The present method and apparatus also comprise caching portions of the data in the internal memory, filling the internal memory by reading the newest data from the external memory and updating the internal memory, and writing the older data back to the external memory from the internal memory, wherein the data comprises incoming data samples.
    Type: Application
    Filed: May 20, 2008
    Publication date: December 4, 2008
    Applicant: QUALCOMM Incorporated
    Inventors: Senthil Govindaswamy, Jeffrey A. Levin, Raghu Sagar Madala, Sharad Deepak Sambhwani
  • Publication number: 20080209081
    Abstract: A method is disclosed for failover protection in an information storage and retrieval system comprising two clusters, two device adapters, and a plurality of data storage devices. The method provides a first device driver for a first device adapter and a second device driver for a second device adapter, and disposes those device drivers in both clusters. The method then places in operation the first device driver disposed in a first cluster, places in operation the second device driver disposed in a second cluster, and places in a standby mode the first device driver disposed in the second cluster. The method detects a failure of the first cluster, followed by a failure of the second device adapter. The method then makes operational the first device driver disposed in the second cluster, and continues to access information stored in the plurality of data storage devices using the first device adapter, and the first device driver disposed in the second cluster.
    Type: Application
    Filed: May 8, 2008
    Publication date: August 28, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael P. Vageline
  • Publication number: 20080147983
    Abstract: A data processing system is provided comprising at least one processing unit (10) for processing data; a memory means (40) for storing data; and a cache memory means (20) for caching data stored in the memory means (40). Said cache memory means (20) is associated with at least one processing unit (10). An interconnect means (30) is provided for connecting the memory means (40) and the cache memory means (20). The cache memory means (20) is adapted to perform cache replacement based on reducing the logic level changes on the interconnect means (30) introduced by a data transfer (D0-Dm) between the memory means (40) and the cache memory means (20).
    Type: Application
    Filed: January 27, 2006
    Publication date: June 19, 2008
    Applicant: NXP B.V.
    Inventors: Bijo Thomas, Sainath Karlapalem
  • Publication number: 20080147990
    Abstract: A cache module for a central processing unit has a cache control unit coupled with a memory, and a cache memory coupled with the control unit and the memory, wherein the cache memory has a plurality of cache lines. At least one cache line of the plurality has an address tag bit field, an associated storage area for storing instructions to be issued sequentially, and at least one control bit field, wherein the control bit field is coupled with the address tag bit field to mask a predefined number of bits in the address tag bit field.
    Type: Application
    Filed: October 30, 2007
    Publication date: June 19, 2008
    Inventors: Rodney J. Pesavento, Gregg D. Lahti, Joseph W. Triece
  • Publication number: 20080120472
    Abstract: Methods, systems, and computer program products for forwarding store data to loads in a pipelined processor are provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and to generate corresponding load memory operations and store memory operations. The store queue is operable to buffer one or more store memory operations prior to those store memory operations being completed, and to forward store data of the one or more store memory operations buffered in the store queue to a load memory operation on a byte-by-byte basis.
    Type: Application
    Filed: November 16, 2006
    Publication date: May 22, 2008
    Inventors: Jason Alan Cox, Kevin Chih Kang Lin, Eric Francis Robinson
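    A C sketch of byte-by-byte forwarding from a store queue; the queue layout is an illustrative assumption, and out is presumed pre-filled with cache data so unforwarded bytes keep their cached values.

      #include <stdint.h>

      #define SQ_SIZE 16

      struct sq_entry { int valid; uint64_t addr; uint8_t len; uint8_t data[8]; };

      static struct sq_entry sq[SQ_SIZE];   /* index 0 oldest, count-1 youngest */
      static int sq_count;

      /* For each byte of the load, forward from the youngest buffered store
       * that wrote that byte; bytes no store covers keep the cache data. */
      static void forward_load(uint64_t addr, uint8_t len, uint8_t *out) {
          for (uint8_t b = 0; b < len; b++) {
              for (int i = sq_count - 1; i >= 0; i--) {
                  if (sq[i].valid && addr + b >= sq[i].addr &&
                      addr + b <  sq[i].addr + sq[i].len) {
                      out[b] = sq[i].data[addr + b - sq[i].addr];
                      break;
                  }
              }
          }
      }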