Write-back Patents (Class 711/143)
-
Patent number: 9552296
Abstract: A method, a system, and a computer program product including instructions for verifying the integrity of a shared memory using in-line coding are provided. The method involves an active step in which multiple bus masters write corresponding data to a shared memory, followed by a verification step in which the data entered in the shared memory by the multiple bus masters is verified.
Type: Grant
Filed: March 15, 2013
Date of Patent: January 24, 2017
Assignee: International Business Machines Corporation
Inventors: Duy Q Huynh, Lyndsi R McKinney
-
Patent number: 9542316
Abstract: A system and method are disclosed for multiple coherent caches supporting agents that use different, incompatible coherence models. Compatibility is implemented by translators that accept coherency requests and snoop responses from an agent and accept snoop requests and coherency responses from a coherence controller. The translators issue corresponding coherency requests and snoop responses to the coherence controller and issue corresponding coherency responses and snoop requests to the agent. Interaction between translators and the coherence controller accords with a generic coherence model, which may be a subset, superset, or partially inclusive of features of any native coherence model. A generic coherence protocol may include binary values for each of the characteristics: valid or invalid, owned or non-owned, unique or shared, and clean or dirty.
Type: Grant
Filed: December 15, 2015
Date of Patent: January 10, 2017
Assignee: ARTERIS, INC.
Inventors: Craig Stephen Forrest, David A. Kruckemyer
-
Patent number: 9529686
Abstract: In an approach for detecting faults on a bus interconnect that connects a bus master circuit to bus slave circuits, application program code and fault detection program code are concurrently executed by a bus master circuit. The application program code initiates first bus transactions to the bus slave circuits, and the fault detection program code initiates second bus transactions to the bus slave circuits for detection of faults in data channels of the bus interconnect. An error code generator circuit generates error codes from addresses of the first and second bus transactions. The error codes are transmitted with the first and second bus transactions on address channels of the bus interconnect to addressed ones of the bus slave circuits. Respective error code checker circuits coupled between the bus interconnect and the bus slave circuits determine whether or not the addresses of the bus transactions are correct based on the error codes.
Type: Grant
Filed: October 29, 2014
Date of Patent: December 27, 2016
Assignee: XILINX, INC.
Inventors: Ygal Arbel, Sagheer Ahmad
-
Patent number: 9524238
Abstract: A data storage device includes a data storage medium, a cache, and a cache control memory. The data storage medium has M data blocks. M is an integer greater than 1. The cache includes N cache blocks having N cache block addresses, respectively. N is an integer greater than 1. The cache control memory includes M memory elements corresponding to the M data blocks, respectively. The cache control memory is configured to, in response to a request to cache data of one of the M data blocks: (a) write the data from the one of the M data blocks to one of the N cache blocks; and (b) write, in the one of the M memory elements corresponding to the one of the M data blocks, the one of the N cache block addresses corresponding to the one of the N cache blocks where the data is written.
Type: Grant
Filed: February 19, 2016
Date of Patent: December 20, 2016
Assignee: Marvell International LTD.
Inventors: Weiya Xi, Chao Jin, Khai Leong Yong, Sophia Tan, Zhi Yong Ching
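The mapping this abstract describes — one control-memory element per data block, holding the cache block address where that block's data currently lives — can be modeled in a few lines. This is an illustrative sketch only; the class and method names, the block counts, and the simple free-list allocation policy are assumptions, not taken from the patent.

```python
class CacheControlMemory:
    """Toy model: M memory elements map data blocks to cache block addresses."""

    def __init__(self, m_data_blocks, n_cache_blocks):
        self.element = [None] * m_data_blocks  # one element per data block
        self.cache = [None] * n_cache_blocks   # the N cache blocks
        self.free = list(range(n_cache_blocks))  # unallocated cache block addresses

    def cache_block(self, data_block, data):
        cb_addr = self.free.pop(0)             # (a) pick a cache block and
        self.cache[cb_addr] = data             #     write the data into it
        self.element[data_block] = cb_addr     # (b) record its address in the
        return cb_addr                         #     element for that data block
```

A later lookup for `data_block` then reads `element[data_block]` to find the cached copy directly, without searching the cache.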
-
Patent number: 9507864
Abstract: There is provided a method that includes (a) receiving a request for access to data, (b) identifying a data store that stores the data, and (c) communicating with the data store, by way of an electronic communication, to access the data. There is also provided a system that performs the method, and a storage medium that includes a program module for controlling a processor to perform the method.
Type: Grant
Filed: January 27, 2012
Date of Patent: November 29, 2016
Assignee: THE DUN & BRADSTREET CORPORATION
Inventors: William Morgan, Robert Tam, Nina Gerasimova, Keith Gastauer, Stacey Rasgado, Ken Einstein, Chip Swanson, Neil Lamka, Dave Horowitz, Jim Longo, Emmet Townsend, Julian Prower
-
Patent number: 9507726
Abstract: A method and apparatus of a device that manages virtual memory for a graphics processing unit is described. In an exemplary embodiment, the device manages a graphics processing unit working set of pages. In this embodiment, the device determines the set of pages of the device to be analyzed, where the device includes a central processing unit and the graphics processing unit. The device additionally classifies the set of pages based on a graphics processing unit activity associated with the set of pages and evicts a page of the set of pages based on the classifying.
Type: Grant
Filed: April 25, 2014
Date of Patent: November 29, 2016
Assignee: Apple Inc.
Inventor: Derek R. Kumar
-
Patent number: 9483404
Abstract: A method includes monitoring a number of read access requests to an address for data stored on a backing store. The method also includes comparing the number of read access requests to a read access threshold. The read access threshold includes a threshold number of read access requests for the address. The method also includes caching data corresponding to a write access request to the address in response to determining that the number of read access requests satisfies the read access threshold.
Type: Grant
Filed: July 7, 2015
Date of Patent: November 1, 2016
Assignee: SANDISK TECHNOLOGIES LLC
Inventor: David Atkisson
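The policy in this abstract — cache a write only when the address has accumulated enough reads — reduces to a counter and a comparison. The sketch below is a hedged illustration; the threshold value, class name, and return convention are assumptions, not details from the patent.

```python
READ_ACCESS_THRESHOLD = 3  # assumed threshold number of read access requests


class WriteCachePolicy:
    """Cache a write to an address only if that address is read often enough."""

    def __init__(self):
        self.read_counts = {}  # address -> number of monitored read accesses
        self.cache = {}        # cached write data, keyed by address

    def on_read(self, addr):
        self.read_counts[addr] = self.read_counts.get(addr, 0) + 1

    def on_write(self, addr, data):
        # Cache the write only when the read count satisfies the threshold;
        # otherwise the write would go straight to the backing store.
        if self.read_counts.get(addr, 0) >= READ_ACCESS_THRESHOLD:
            self.cache[addr] = data
            return True
        return False
```

The effect is that write caching capacity is spent only on addresses whose data is likely to be read back.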
-
Patent number: 9424189
Abstract: A computer-implemented method for mitigating write-back caching failures may include (1) detecting a failure that impairs at least one write-back cache that temporarily caches updates for individual files stored on a storage device, (2) identifying an attribute of an individual file stored on the storage device in response to the failure that impairs the write-back cache, (3) determining that at least a portion of the individual file is obsolete based at least in part on the attribute of the individual file, and then (4) performing at least one mitigating action with respect to the individual file to address the obsolete portion of the individual file. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: February 10, 2014
Date of Patent: August 23, 2016
Assignee: Veritas Technologies LLC
Inventors: Sushil Patil, Shirish Vijayvargiya, Anindya Banerjee, Sanjay Jain
-
Patent number: 9424286
Abstract: Managing database recovery time. A method includes receiving user input specifying a target recovery time for a database. The method further includes determining an amount of time to read a data page of the database from persistent storage. The method further includes determining an amount of time to process a log record of the database to apply changes specified in the log record to a data page. The method further includes determining a number of dirty pages that presently would be read in recovery if a database failure occurred. The method further includes determining a number of log records that would be processed in recovery if a database failure occurred. The method further includes adjusting at least one of the number of dirty pages that presently would be read in recovery or the number of log records that would be processed in recovery to meet the specified target recovery time.
Type: Grant
Filed: February 4, 2013
Date of Patent: August 23, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robin Dhananjay Dhamankar, Hanumantha Rao Kodavalla
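The arithmetic implied by this abstract is a simple linear cost model: estimated recovery time is (dirty pages × per-page read time) + (log records × per-record apply time), and the dirty-page budget is adjusted until the estimate meets the target. The functions below are an illustrative back-of-the-envelope model; the function names and the assumption that only the dirty-page count is adjusted are mine, not the patent's.

```python
def estimated_recovery_time(dirty_pages, log_records,
                            page_read_time, log_apply_time):
    """Linear recovery-cost estimate: pages to read plus log records to apply."""
    return dirty_pages * page_read_time + log_records * log_apply_time


def max_dirty_pages(target_time, log_records, page_read_time, log_apply_time):
    """Largest dirty-page count whose estimate still meets the target time."""
    budget = target_time - log_records * log_apply_time  # time left for page reads
    return max(0, int(budget // page_read_time))
```

A checkpointing policy could then flush dirty pages whenever the current count exceeds `max_dirty_pages(...)`, keeping worst-case recovery within the user-specified target.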
-
Patent number: 9350779
Abstract: A method may include receiving, at a mobile computing device comprising a processor, input identifying control information to be sent from the mobile computing device to a media server. The method may include determining, at the mobile computing device, whether the computing device is sending a media stream to the media server. In response to determining that the mobile computing device is sending a media stream to the media server, the control information may be sent from the mobile computing device to the media server without interrupting the media stream by embedding the control information in the media stream.
Type: Grant
Filed: February 16, 2015
Date of Patent: May 24, 2016
Assignee: WOWZA MEDIA SYSTEMS, LLC
Inventor: Barry Owen
-
Patent number: 9342461
Abstract: A cache memory system includes a cache memory including a plurality of cache memory lines and a dirty buffer including a plurality of dirty masks. A cache controller is configured to allocate one of the dirty masks to each of the cache memory lines when a write to the respective cache memory line is not a full write to that cache memory line. Each of the dirty masks indicates dirty states of data units in one of the cache memory lines. The cache controller may include a dirty buffer index which stores identification (ID) information that associates the dirty masks with the cache memory lines to which the dirty masks are allocated. A cache line may include a fully dirty flag indicating when each byte in that cache line is dirty, so that a dirty mask does not need to be allocated for that cache line.
Type: Grant
Filed: November 28, 2012
Date of Patent: May 17, 2016
Assignee: QUALCOMM Incorporated
Inventors: Jian Liang, Chun Yu, Fei Xu
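The key idea here — allocate a per-byte dirty mask from a shared pool only on partial writes, and use a fully-dirty flag to skip allocation for full-line writes — can be sketched as follows. Line size, pool size, and all names are illustrative assumptions, not details from the patent.

```python
LINE_SIZE = 64  # bytes per cache line (assumption)


class CacheLine:
    def __init__(self):
        self.fully_dirty = False  # set when every byte in the line is dirty
        self.dirty_mask = None    # mask from the dirty buffer, partial writes only


class DirtyBuffer:
    """Shared pool of dirty masks, plus the dirty buffer index."""

    def __init__(self, num_masks):
        self.free = [bytearray(LINE_SIZE) for _ in range(num_masks)]
        self.index = {}  # dirty buffer index: line id -> allocated mask

    def write(self, line_id, line, offset, length):
        if offset == 0 and length == LINE_SIZE:
            # Full-line write: flag the line fully dirty, no mask needed;
            # return any previously allocated mask to the pool.
            line.fully_dirty = True
            if line_id in self.index:
                self.free.append(self.index.pop(line_id))
            line.dirty_mask = None
            return
        if line.dirty_mask is None and not line.fully_dirty:
            line.dirty_mask = self.free.pop()  # allocate a mask on partial write
            self.index[line_id] = line.dirty_mask
        if line.dirty_mask is not None:
            for i in range(offset, offset + length):
                line.dirty_mask[i] = 1  # track dirtiness at byte granularity
```

Because masks are pooled rather than stored per line, the dirty buffer can be much smaller than one mask per cache line.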
-
Patent number: 9329890
Abstract: Cache lines in a multi-processor computing environment are configurable with a coherency mode. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. A high-coherence-miss cache line may be placed in sub-line coherency mode. A cache line may be associated with a counter in a coherence miss detection table that is incremented whenever an access of the cache line results in a coherence request. The cache line may be a high-coherence-miss cache line when the counter satisfies a high-coherence-miss criterion, such as reaching a threshold value. The cache line may be returned to full-line coherency mode when a reset criterion is satisfied.
Type: Grant
Filed: September 26, 2013
Date of Patent: May 3, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
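The mode-switching logic described here is a per-line counter compared against a threshold. A minimal sketch, with the threshold value, table layout, and names all assumed rather than taken from the patent:

```python
FULL_LINE, SUB_LINE = "full-line", "sub-line"
HIGH_COHERENCE_MISS_THRESHOLD = 4  # assumed high-coherence-miss criterion


class CoherenceModeTracker:
    """Tracks per-line coherence misses and flips lines to sub-line mode."""

    def __init__(self):
        self.mode = {}    # line address -> coherency mode
        self.misses = {}  # coherence miss detection table: address -> counter

    def on_coherence_request(self, addr):
        # Every access that results in a coherence request bumps the counter.
        self.misses[addr] = self.misses.get(addr, 0) + 1
        if self.misses[addr] >= HIGH_COHERENCE_MISS_THRESHOLD:
            self.mode[addr] = SUB_LINE  # high-coherence-miss line: split it

    def on_reset(self, addr):
        # Reset criterion satisfied: return the line to full-line mode.
        self.mode[addr] = FULL_LINE
        self.misses[addr] = 0
```

Sub-line mode then lets different cores own different portions of a hot line, reducing false-sharing traffic.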
-
Patent number: 9323674
Abstract: A processor includes: a primary cache memory; an instruction control unit that issues a store request to the primary cache memory; a pipeline processing unit that, upon receiving the store request, writes data to the primary cache memory; a buffer unit that obtains an address output to the primary cache memory from the pipeline processing unit during an output period of the store request regarding certain data, holds the obtained address in an entry, and, when the output period ends, issues a write-back request for writing the data indicated by the address held in the entry to a memory; and a secondary cache memory that, upon receiving the write-back request from the buffer unit, writes the data of the primary cache memory to the memory. The certain data is thereby quickly written back to the memory from the primary cache memory.
Type: Grant
Filed: January 13, 2014
Date of Patent: April 26, 2016
Assignee: FUJITSU LIMITED
Inventors: Hayato Koike, Naohiro Kiyota
-
Patent number: 9311241
Abstract: A method is described that includes performing the following for a transactional operation in response to a request from a processing unit that is directed to a cache identifying a cache line: reading the cache line; if the cache line is in a Modified cache coherency protocol state, forwarding the cache line to circuitry that will cause the cache line to be written to deeper storage; and changing another instance of the cache line that is available to the processing unit for the transactional operation to an Exclusive cache coherency state.
Type: Grant
Filed: December 29, 2012
Date of Patent: April 12, 2016
Assignee: Intel Corporation
Inventors: Ravi Rajwar, Robert Chappell, Zhongying Zhang, Jason Bessette
-
Patent number: 9311251
Abstract: Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs.
Type: Grant
Filed: August 27, 2012
Date of Patent: April 12, 2016
Assignee: Apple Inc.
Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
-
Patent number: 9298626
Abstract: Cache lines in a computing environment with transactional memory are configurable with a coherency mode. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. When a transaction accessing a cache line in full-line coherency mode results in a transactional abort, the cache line may be placed in sub-line coherency mode if the cache line is a high-conflict cache line. The cache line may be associated with a counter in a conflict address detection table that is incremented whenever a transaction conflict is detected for the cache line. The cache line may be a high-conflict cache line when the counter satisfies a high-conflict criterion, such as reaching a threshold value. The cache line may be returned to full-line coherency mode when a reset criterion is satisfied.
Type: Grant
Filed: September 26, 2013
Date of Patent: March 29, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
-
Patent number: 9298632
Abstract: In one embodiment, a cache memory can store a plurality of cache lines, each including a write-set field to store a write-set indicator to indicate whether data has been speculatively written during a transaction of a transactional memory, and a read-set field to store a plurality of read-set indicators each to indicate whether a corresponding thread has read the data before the transaction has committed. A compression filter associated with the cache memory includes a first filter storage to store a representation of a cache line address of a cache line read by a first thread of the threads before the transaction has committed. Other embodiments are described and claimed.
Type: Grant
Filed: June 28, 2012
Date of Patent: March 29, 2016
Assignee: Intel Corporation
Inventors: Robert S. Chappell, Ravi Rajwar, Zhongying Zhang, Jason A. Bessette
-
Patent number: 9298623
Abstract: Cache lines in a computing environment with transactional memory are configurable with a coherency mode and are associated with a high-conflict indicator. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. A cache line is placed in sub-line coherency mode based on examining the high-conflict indicator. A transaction accessing a memory address in a cache line in sub-line coherency mode marks only the sub-cache line portion associated with the memory address as transactionally accessed. The high-conflict indicator may be included in a set of descriptive bits associated with the cache line. A copy of the high-conflict indicator for a cache line in a first cache may be updated with the high-conflict indicator for the cache line in a second cache.
Type: Grant
Filed: September 26, 2013
Date of Patent: March 29, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
-
Patent number: 9292444
Abstract: Cache lines in a multi-processor computing environment are configurable with a coherency mode. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. Each cache is associated with a directory having a number of directory entries and with a side table having a smaller number of entries. The directory entry for a cache line associates the cache line with a tag and a set of full-line descriptive bits. Creating a side table entry for the cache line places the cache line in sub-line coherency mode. The side table entry associates each of the sub-cache line portions of the cache line with a set of sub-line descriptive bits. Removing the side table entry may return the cache line to full-line coherency mode.
Type: Grant
Filed: September 26, 2013
Date of Patent: March 22, 2016
Assignee: International Business Machines Corporation
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
-
Patent number: 9270770
Abstract: A system and method for optimizing publication operating states includes sending a first publication message to provide an operating state of a mobile unit. The operating state of the mobile unit is stored in a data session slot associated with the mobile unit. A second publication message is sent to update the operating state of the mobile unit. An entity tag value is derived from identifiers in accounting information of the second publication message to identify data in the data session slot associated with the mobile unit.
Type: Grant
Filed: June 30, 2005
Date of Patent: February 23, 2016
Assignee: Cisco Technology, Inc.
Inventor: Edward Dean Willis
-
Patent number: 9268705
Abstract: A data storage device is provided. The data storage device includes a data storage medium having a plurality of data blocks, a cache having a plurality of cache blocks, wherein each cache block is identified by a cache block address, and a cache control memory including a memory element for each data block configured to store the cache block address of the cache block in which data of the data block is written.
Type: Grant
Filed: February 22, 2013
Date of Patent: February 23, 2016
Assignee: Marvell International LTD.
Inventors: Weiya Xi, Chao Jin, Khai Leong Yong, Sophia Tan, Zhi Yong Ching
-
Patent number: 9262122
Abstract: In a multicore system in which a plurality of CPUs, each including a cache memory, share one main memory, a write buffer is provided between the cache memory and the main memory. The write buffer has a plurality of stages of buffers, each holding data to be written to the main memory and the address of the write destination. At the time of a write to the write buffer from the cache memory, the address of the write destination is compared with the addresses stored in the buffers; when any of the buffers has a matching address, the data is overwritten to that buffer and the buffer is logically moved to the last stage.
Type: Grant
Filed: January 9, 2014
Date of Patent: February 16, 2016
Assignee: FUJITSU LIMITED
Inventors: Takatoshi Fukuda, Shuji Takada, Kenjiro Mori
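The write-combining behavior this abstract describes — match on address, overwrite in place, move the entry to the last stage so it drains latest — can be sketched with a small queue. The stage count, the drain-on-full policy, and all names are illustrative assumptions, not details from the patent.

```python
from collections import deque

main_memory = {}  # stands in for the shared main memory


class WriteBuffer:
    """Multi-stage write buffer between a CPU's cache and main memory."""

    def __init__(self, stages=4):
        self.stages = stages  # number of buffer stages (assumption)
        self.buf = deque()    # entries drain to memory from the front (oldest)

    def write(self, addr, data):
        for entry in list(self.buf):
            if entry[0] == addr:         # address matches an existing buffer:
                self.buf.remove(entry)
                entry[1] = data          # overwrite the data in that buffer
                self.buf.append(entry)   # logically move it to the last stage
                return
        if len(self.buf) == self.stages:
            self.drain_one()             # full: oldest entry goes to main memory
        self.buf.append([addr, data])

    def drain_one(self):
        addr, data = self.buf.popleft()
        main_memory[addr] = data
```

Repeated writes to a hot address thus coalesce into one buffer entry and keep being deferred, while cold addresses age toward the front and drain to memory.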
-
Patent number: 9251058
Abstract: An apparatus, system, and method are disclosed for servicing storage requests for a non-volatile memory device. An interface module is configured to receive a storage request for a data set of a non-volatile memory device from a client. The data set is different from a block of the non-volatile memory device, and may have a length different from a block size of the non-volatile memory device. A block load module is configured to load data of at least the block size of the non-volatile memory device. A fulfillment module is configured to service the storage request using at least a portion of the loaded data.
Type: Grant
Filed: December 28, 2012
Date of Patent: February 2, 2016
Assignee: SanDisk Technologies, Inc.
Inventors: David Nellans, Anirudh Badam, David Flynn, James Peterson
-
Patent number: 9239679
Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache may comprise one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. The controller is connected to the memory and configured to (A) process normal read/write operations in a first mode and (B) process special read/write operations in a second mode by (i) tracking a write followed by read condition on each of said cache windows and (ii) discarding data on the cache-lines associated with the cache windows after completion of the write followed by a read condition on the cache-lines.
Type: Grant
Filed: January 1, 2014
Date of Patent: January 19, 2016
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Kishore Kaniyar Sampathkumar, Saugata Das Purkayastha, Parag R. Maharana
-
Patent number: 9235531
Abstract: A buffer manager that manages blocks of memory amongst multiple levels of buffer pools. For instance, there may be a first level buffer pool for blocks in first level memory, and a second level buffer pool for blocks in second level memory. The first level buffer pool evicts blocks to the second level buffer pool if the blocks are not used above a first threshold level. The second level buffer pool evicts blocks to a yet lower level if they have not been used above a second threshold level. The first level memory may be dynamic random access memory, whereas the second level memory may be storage class memory, such as a solid state disk. By using such a storage class memory, the working block set of the buffer manager may be increased without resorting to lower efficiency random block access from yet lower level memory such as disk.
Type: Grant
Filed: November 28, 2011
Date of Patent: January 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pedro Celis, Dexter Paul Bradshaw, Sadashivan Krishnamurthy, Georgiy I. Reynya, Chengliang Zhang, Hanumantha Rao Kodavalla
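The two-level demotion described here — evict from the DRAM pool to the storage-class-memory pool below one use threshold, and from there to disk below another — can be modeled as a sweep over use counters. The threshold values, the sweep-based eviction, and all names are illustrative assumptions, not taken from the patent.

```python
FIRST_THRESHOLD = 2   # assumed use threshold for the first (DRAM) pool
SECOND_THRESHOLD = 1  # assumed use threshold for the second (SCM) pool


class TwoLevelBufferPool:
    """Demotes under-used blocks: DRAM pool -> SCM pool -> lower level."""

    def __init__(self):
        self.first = {}    # block id -> use count (first level, DRAM)
        self.second = {}   # block id -> use count (second level, SCM)
        self.evicted = []  # blocks pushed below the second level (e.g. disk)

    def sweep(self):
        # First level evicts blocks not used above the first threshold.
        for blk, uses in list(self.first.items()):
            if uses < FIRST_THRESHOLD:
                self.second[blk] = self.first.pop(blk)
        # Second level evicts blocks not used above the second threshold.
        for blk, uses in list(self.second.items()):
            if uses < SECOND_THRESHOLD:
                self.second.pop(blk)
                self.evicted.append(blk)
```

The SCM tier thus acts as a second-chance cache: a block must look cold twice before it falls back to slow random disk access.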
-
Patent number: 9218305
Abstract: Data writers desiring to update data without unduly impacting concurrent readers perform a synchronization operation with respect to plural processors or execution threads. The synchronization operation is parallelized using a hierarchical tree having a root node, one or more levels of internal nodes, and as many leaf nodes as there are processors or threads. The tree is traversed from the root node to a lowest level of the internal nodes and the following node processing is performed for each node: (1) check the node's children, (2) if the children are leaf nodes, perform the synchronization operation relative to each leaf node's associated processor or thread, and (3) if the children are internal nodes, fan out and repeat the node processing with each internal node representing a new root node. The foregoing node processing is continued until all processors or threads associated with the leaf nodes have performed the synchronization operation.
Type: Grant
Filed: November 30, 2011
Date of Patent: December 22, 2015
Assignee: International Business Machines Corporation
Inventor: Paul E. McKenney
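The three numbered steps above amount to a recursive fan-out over a tree whose leaves are processors or threads. The sketch below models that walk only; the tree shape and names are illustrative, and the patent's actual setting (which, given the inventor and the reader/writer framing, resembles RCU-style grace-period processing in an operating system kernel) involves far more machinery.

```python
def sync_subtree(node, synchronize):
    """Walk a tree of nested lists; each int leaf is a processor/thread id."""
    for child in node:
        if isinstance(child, int):
            # (2) leaf child: perform the synchronization operation for
            #     its associated processor or thread
            synchronize(child)
        else:
            # (3) internal child: fan out, treating it as a new root node
            sync_subtree(child, synchronize)


synced = []
# root -> internal nodes -> leaves; leaves 0..7 are eight processors
root = [[0, 1], [2, 3], [[4, 5], [6, 7]]]
sync_subtree(root, synced.append)
```

In a real implementation the recursive calls at step (3) would run in parallel (one task per internal node), which is where the speedup over a flat loop across all processors comes from.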
-
Patent number: 9218307
Abstract: Data writers desiring to update data without unduly impacting concurrent readers perform a synchronization operation with respect to plural processors or execution threads. The synchronization operation is parallelized using a hierarchical tree having a root node, one or more levels of internal nodes, and as many leaf nodes as there are processors or threads. The tree is traversed from the root node to a lowest level of the internal nodes and the following node processing is performed for each node: (1) check the node's children, (2) if the children are leaf nodes, perform the synchronization operation relative to each leaf node's associated processor or thread, and (3) if the children are internal nodes, fan out and repeat the node processing with each internal node representing a new root node. The foregoing node processing is continued until all processors or threads associated with the leaf nodes have performed the synchronization operation.
Type: Grant
Filed: November 30, 2013
Date of Patent: December 22, 2015
Assignee: International Business Machines Corporation
Inventor: Paul E. McKenney
-
Patent number: 9183399
Abstract: A method and circuit arrangement utilize secure clear instructions defined in an instruction set architecture (ISA) for a processing unit to clear, overwrite or otherwise restrict unauthorized access to the internal architected state of the processing unit in association with context switch operations. The secure clear instructions are executable by a hypervisor, operating system, or other supervisory program code in connection with a context switch operation, and the processing unit includes security logic that is responsive to such instructions to restrict access by an operating system or process associated with an incoming context to architected state information associated with an operating system or process associated with an outgoing context.
Type: Grant
Filed: February 14, 2013
Date of Patent: November 10, 2015
Assignee: International Business Machines Corporation
Inventors: Adam J. Muff, Paul E. Schardt, Robert A. Shearer, Matthew R. Tubbs
-
Patent number: 9183150
Abstract: A method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces. The method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating whether a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or granting the request from the first processor when the evaluation is negative.
Type: Grant
Filed: December 7, 2012
Date of Patent: November 10, 2015
Assignee: International Business Machines Corporation
Inventors: Victoria Caparros Cabezas, Rik Jongerius, Martin L. Schmatz, Phillip Stanley-Marbell
-
Patent number: 9176885
Abstract: A circuit arrangement and method utilize cache injection logic to perform a cache inject and lock operation to inject a cache line in a cache memory and automatically lock the cache line in the cache memory in parallel with communication of the cache line to a main memory. The cache injection logic may additionally limit the maximum number of locked cache lines that may be stored in the cache memory, e.g., by aborting a cache inject and lock operation, injecting the cache line without locking, or unlocking and/or evicting another cache line in the cache memory.
Type: Grant
Filed: January 23, 2012
Date of Patent: November 3, 2015
Assignee: International Business Machines Corporation
Inventors: Jamie R. Kuesel, Mark G. Kupferschmidt, Paul E. Schardt, Robert A. Shearer, III
-
Patent number: 9147078
Abstract: A method and circuit arrangement utilize secure clear instructions defined in an instruction set architecture (ISA) for a processing unit to clear, overwrite or otherwise restrict unauthorized access to the internal architected state of the processing unit in association with context switch operations. The secure clear instructions are executable by a hypervisor, operating system, or other supervisory program code in connection with a context switch operation, and the processing unit includes security logic that is responsive to such instructions to restrict access by an operating system or process associated with an incoming context to architected state information associated with an operating system or process associated with an outgoing context.
Type: Grant
Filed: March 12, 2013
Date of Patent: September 29, 2015
Assignee: International Business Machines Corporation
Inventors: Adam J. Muff, Paul E. Schardt, Robert A. Shearer, Matthew R. Tubbs
-
Patent number: 9135175
Abstract: A system includes a number of processors with each processor including a cache memory. The system also includes a number of directory controllers coupled to the processors. Each directory controller may be configured to administer a corresponding cache coherency directory. Each cache coherency directory may be configured to track a corresponding set of memory addresses. Each processor may be configured with information indicating the corresponding set of memory addresses tracked by each cache coherency directory. Directory redundancy operations in such a system may include identifying a failure of one of the cache coherency directories; reassigning the memory address set previously tracked by the failed cache coherency directory among the non-failed cache coherency directories; and reconfiguring each processor with information describing the reassignment of the memory address set among the non-failed cache coherency directories.
Type: Grant
Filed: February 4, 2013
Date of Patent: September 15, 2015
Assignee: Oracle International Corporation
Inventors: Thomas M Wicki, Stephen E Phillips, Nicholas E Aneshansley, Ramaswamy Sivaramakrishnan, Paul N Loewenstein
-
Patent number: 9092336
Abstract: A method includes monitoring a number of read access requests to an address for data stored on a backing store. The method also includes comparing the number of read access requests to a read access threshold. The read access threshold includes a threshold number of read access requests for the address. The method also includes caching data corresponding to a write access request to the address in response to determining that the number of read access requests satisfies the read access threshold.
Type: Grant
Filed: March 15, 2013
Date of Patent: July 28, 2015
Assignee: Intelligent Intellectual Property Holdings 2 LLC
Inventor: David Atkisson
-
Patent number: 9086808
Abstract: A storage apparatus includes a memory that stores job management information that registers a write job corresponding to a write command upon receiving the write command from another apparatus, a cache memory that stores data designated as target data by the write command, a storage drive that records the data stored in the cache memory to a storage medium based on the write job registered in the job management information, and a controller that controls the timing to output to the other apparatus a completion report of the write command based on a load condition of the storage drive related to an accumulation count of write jobs acquired from the job management information.
Type: Grant
Filed: June 19, 2012
Date of Patent: July 21, 2015
Assignee: FUJITSU LIMITED
Inventor: Yoshiharu Itoh
-
Patent number: 9081689
Abstract: Methods and systems are disclosed for recovering dirty linefill buffer data upon linefill request failures. When a linefill request failure occurs and the linefill buffer has been marked as dirty, such as due to a system bus failure, the contents of the linefill buffer are pushed back to the system bus. The dirty data within the linefill buffer can then be used to update the external memory. The disclosed embodiments are useful for a wide variety of applications, including those requiring low data failure rates.
Type: Grant
Filed: January 14, 2013
Date of Patent: July 14, 2015
Assignee: Freescale Semiconductor, Inc.
Inventor: Quyen Pho
-
Patent number: 9053030Abstract: A cache memory comprises a data array that stores a cached block; a first address array that stores an address of the cached block; a second address array that stores an address of a first block to be removed from the data array when a cache miss occurs; and a control unit that transmits to a processor the first block stored in the data array as a cache hit block, when the address stored in the second address array results in a cache hit during a period before a second block which has caused the cache miss is read from a memory and written into the data array.Type: GrantFiled: January 25, 2010Date of Patent: June 9, 2015Assignee: NEC CORPORATIONInventor: Yasushi Kanoh
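The victim-address trick in this abstract can be modeled with a single-line cache where miss handling is split into explicit begin/complete steps; the class and method names are illustrative assumptions, not from the patent.

```python
class VictimAwareCache:
    """Single-line sketch: while a miss refill is outstanding, the evicted
    (victim) block can still be served as a hit from the data array."""

    def __init__(self):
        self.first_addr = None    # first address array: resident block
        self.second_addr = None   # second address array: block being evicted
        self.data = None          # data array contents
        self.pending_addr = None  # block being fetched from memory

    def lookup(self, address):
        """Return the cached data on a hit, or None on a miss."""
        if address == self.first_addr:
            return self.data
        if self.pending_addr is not None and address == self.second_addr:
            # The data array still holds the victim's data until the
            # refill completes, so serve it as a cache hit.
            return self.data
        return None

    def begin_miss(self, address):
        # Move the resident address into the second (victim) address array.
        self.second_addr = self.first_addr
        self.pending_addr = address

    def complete_refill(self, data):
        self.first_addr = self.pending_addr
        self.data = data
        self.pending_addr = None
        self.second_addr = None
```

The window between `begin_miss` and `complete_refill` stands in for the memory-read latency during which the patent's second address array keeps the victim addressable.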
-
Patent number: 9043550Abstract: A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.Type: GrantFiled: November 6, 2013Date of Patent: May 26, 2015Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Michael T. Benhase, Lokesh M. Gupta
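The allocation decision this abstract describes reduces to budgeting task control blocks (TCBs) against those already held by in-progress scans; the function below is a minimal sketch under that assumption, with all names and the per-scan cap invented for illustration.

```python
def tcbs_for_new_scan(total_tcbs, allocated_to_active_scans, max_per_scan):
    """Decide how many task control blocks to allocate to a new discard
    scan, given how many are already held by scans in progress."""
    available = total_tcbs - allocated_to_active_scans
    # Never exceed the per-scan cap, and never go below zero.
    return max(0, min(max_per_scan, available))
```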
-
Patent number: 9043560Abstract: Systems, methods, and other embodiments associated with a distributed cache coherency protocol are described. According to one embodiment, a method includes receiving a request from a requester for access to one or more memory blocks in a block storage device that is shared by at least two physical computing machines and determining if a caching right to any of the one or more memory blocks has been granted to a different requester. If the caching right has not been granted to the different requester, access is granted to the one or more memory blocks to the requester.Type: GrantFiled: September 23, 2011Date of Patent: May 26, 2015Assignee: Toshiba CorporationInventor: Arvind Pruthi
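The grant/deny logic in this abstract can be sketched as a coordinator that tracks caching rights per block; the class name and the exact granting rule here are illustrative assumptions, not the patent's protocol.

```python
class BlockAccessCoordinator:
    """Grant access to shared blocks unless a caching right is already
    held by a different requester."""

    def __init__(self):
        self.caching_rights = {}   # block -> requester holding the right

    def request_access(self, requester, blocks):
        # Deny if any requested block's caching right belongs to someone else.
        for block in blocks:
            holder = self.caching_rights.get(block)
            if holder is not None and holder != requester:
                return False
        # Otherwise record the caching right and grant access.
        for block in blocks:
            self.caching_rights[block] = requester
        return True
```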
-
Publication number: 20150143059Abstract: A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets.Type: ApplicationFiled: December 6, 2013Publication date: May 21, 2015Applicant: International Business Machines CorporationInventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, William J. Starke, Jeffrey A. Stuecheli
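The high/low water-mark behavior in this abstract is a classic hysteresis scheme and can be sketched directly; the class name is an illustrative assumption.

```python
class WritebackScheduler:
    """Elevate writeback priority over reads when the count of sets holding
    a dirty line crosses a high water mark; restore it below the low mark."""

    def __init__(self, high_water, low_water):
        self.high_water = high_water
        self.low_water = low_water
        self.writebacks_elevated = False

    def update(self, dirty_set_count):
        if dirty_set_count > self.high_water:
            self.writebacks_elevated = True
        elif dirty_set_count < self.low_water:
            self.writebacks_elevated = False
        # Between the two marks, keep the current priority (hysteresis).
        return self.writebacks_elevated
```

The gap between the two marks prevents the priority from flapping when the dirty-set count hovers near a single threshold.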
-
Publication number: 20150143056Abstract: A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets.Type: ApplicationFiled: November 18, 2013Publication date: May 21, 2015Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, William J. Starke, Jeffrey A. Stuecheli
-
Patent number: 9032157Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags, write only the memory addresses of the flagged cache lines to the defined log, and subsequently clear the image modification flags.Type: GrantFiled: December 11, 2012Date of Patent: May 12, 2015Assignee: International Business Machines CorporationInventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
-
Patent number: 9026749Abstract: Disclosed is an on-chip buffer program method for a data storage device which comprises a multi-bit memory device and a memory controller. The on-chip buffer program method includes measuring a performance of the data storage device, judging whether the measured performance satisfies a target performance of the data storage device, and selecting one of a plurality of scheduling manners as an on-chip buffer program scheduling manner of the data storage device according to the judgment result.Type: GrantFiled: May 31, 2012Date of Patent: May 5, 2015Assignee: Samsung Electronics Co., Ltd.Inventors: Sangyong Yoon, Kitae Park
-
Publication number: 20150113228Abstract: A processor capable of storing trace data is disclosed. The processor includes a core adapted to execute programs, as well as a cache memory electrically connected to the core. The cache memory includes a core way and a trace way. The core way is adapted to store data that is required when the core executes the programs. The trace way is adapted to store data that is generated during debugging operations of the core. A control method of the processor is also disclosed.Type: ApplicationFiled: April 18, 2014Publication date: April 23, 2015Applicant: NATIONAL SUN YAT-SEN UNIVERSITYInventors: ING-JER HUANG, CHUN-HUNG LAI
-
Patent number: 9009417Abstract: It is an object to improve the reliability of data protection for a storage control apparatus that is provided with a redundant configuration made of a plurality of clusters. A memory unit in each of the clusters C1 and C2 is provided with a first memory 3 having a volatile property, a battery 5 that is configured to supply electrical power to the first memory 3, and a second memory 4 that stores data that is transferred from the first memory 3 in the case of a power outage. A control unit selects an operating mode for protecting data from among a normal mode, a write-through mode, and an access disable mode (a not-ready state) based on the remaining power level of the battery 5.Type: GrantFiled: August 27, 2010Date of Patent: April 14, 2015Assignee: Hitachi, Ltd.Inventor: Tomoaki Okawa
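The battery-driven mode selection in this abstract maps naturally onto a two-threshold decision; the function name and the threshold parameters below are illustrative assumptions, not values from the patent.

```python
def select_protection_mode(battery_level, normal_min, write_through_min):
    """Pick a data-protection operating mode from the remaining battery
    level: normal (cache contents are battery-backed), write-through
    (do not rely on the battery), or access-disabled (not ready)."""
    if battery_level >= normal_min:
        return "normal"
    if battery_level >= write_through_min:
        return "write_through"
    return "access_disabled"
```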
-
Patent number: 8996812Abstract: A write-back coherency data cache for temporarily holding cache lines. Upon receiving a processor request for data, a determination is made from a coherency directory whether a copy of the data is cached in a write-back cache located in a memory controller hardware. The write-back cache holds data being written back to main memory for a period of time prior to writing the data to main memory. If the data is cached in the write-back cache, the data is removed from the write-back cache and forwarded to the requesting processor. The cache coherency state in the coherency directory entry for the data is updated to reflect the current cache coherency state of the data based on the requesting processor's intended use of the data.Type: GrantFiled: June 19, 2009Date of Patent: March 31, 2015Assignee: International Business Machines CorporationInventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
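The interception path in this abstract — serving a read from the memory controller's write-back cache instead of (stale) main memory — can be sketched as follows; names are illustrative, and the coherency-directory update is elided.

```python
class MemoryController:
    """Memory controller holding lines in a write-back cache for a period
    before they reach main memory."""

    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.write_back_cache = {}   # lines awaiting write to main memory

    def hold_for_writeback(self, address, data):
        self.write_back_cache[address] = data

    def read(self, address):
        # If the line is still in the write-back cache, remove it and
        # forward it to the requester rather than reading main memory,
        # which has not yet been updated.
        if address in self.write_back_cache:
            return self.write_back_cache.pop(address)
        return self.main_memory[address]
```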
-
Patent number: 8990511Abstract: There is provided a cache synchronization control method by which the contents of a plurality of caches can be synchronized without a programmer explicitly setting a synchronization point, and without scanning all cache blocks. A cache synchronization control method for a multiprocessor that has a plurality of processors each having a cache, and a storage device shared by the plurality of processors, the method comprises: before a task is executed, a first step of writing back input data of the task to the storage device by the processor that manages the task, and deleting data corresponding to the input data from its own cache by each processor other than the managing processor; and after the task is executed, a second step of writing back output data of the task to the storage device by the processor that has executed the task, and deleting data corresponding to the output data from its own cache by each processor other than the executing processor.Type: GrantFiled: October 31, 2008Date of Patent: March 24, 2015Assignee: NEC CorporationInventor: Takahiro Kumura
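The before/after steps in this abstract touch only the task's declared input and output addresses, which is why no full cache scan is needed. A minimal sketch, with all names and the task dictionary shape invented for illustration:

```python
class ProcessorCache:
    def __init__(self):
        self.lines = {}   # address -> data

    def write_back(self, addresses, storage):
        for a in addresses:
            if a in self.lines:
                storage[a] = self.lines[a]

    def invalidate(self, addresses):
        for a in addresses:
            self.lines.pop(a, None)


def run_task(task, manager, executor, processors, storage):
    # Before the task: the managing processor writes back the input data,
    # and every other processor drops its copies of those lines.
    manager.write_back(task["inputs"], storage)
    for p in processors:
        if p is not manager:
            p.invalidate(task["inputs"])
    task["body"](executor, storage)
    # After the task: the executing processor writes back the output data,
    # and every other processor drops its copies of those lines.
    executor.write_back(task["outputs"], storage)
    for p in processors:
        if p is not executor:
            p.invalidate(task["outputs"])
```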
-
Patent number: 8990513Abstract: A coherent attached processor proxy (CAPP) that participates in coherence communication in a primary coherent system on behalf of an external attached processor maintains, in each of a plurality of entries of a CAPP directory, information regarding a respective associated cache line of data from the primary coherent system cached by the attached processor. In response to initiation of recovery operations, the CAPP transmits, in a generally sequential order with respect to the CAPP directory, multiple memory access requests indicating an error for addresses indicated by the plurality of entries. In response to a snooped memory access request that targets a particular address hitting in the CAPP directory during the transmitting, the CAPP performs a coherence recovery operation for the particular address prior to a time indicated by the generally sequential order.Type: GrantFiled: January 11, 2013Date of Patent: March 24, 2015Assignee: International Business Machines CorporationInventors: Bartholomew Blaner, David W. Cummings, George W. Daly, Jr., Michael S. Siegel, Jeff A. Stuecheli
-
Publication number: 20150081982Abstract: A method of shielding a memory device (110) from high write rates, comprising: receiving an instruction to write data at a memory controller (105), the memory controller (105) comprising a cache (120) comprising a number of cache lines defining stored data; with the memory controller (105), updating a cache line in response to a write hit in the cache (120); and with the memory controller (105), executing the instruction to write data in response to a cache miss to a cache line within the cache (120), in which the memory controller (105) prioritizes writing to the cache (120) over writing to the memory device (110).Type: ApplicationFiled: April 27, 2012Publication date: March 19, 2015Inventors: Craig Warner, Gary Gostin, Matthew D. Pickett
-
Publication number: 20150081979Abstract: Techniques for enabling integration between a storage system and a host system that performs write-back caching are provided. In one embodiment, the host system can transmit to the storage system a command indicating that the host system intends to cache, in a write-back cache, writes directed to a range of logical block addresses (LBAs). The host system can further receive from the storage system a response indicating whether the command is accepted or rejected. If the command is accepted, the host system can initiate the caching of writes in the write-back cache.Type: ApplicationFiled: September 16, 2013Publication date: March 19, 2015Applicant: VMware, Inc.Inventors: Andrew Banta, Erik Cota-Robles
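The accept/reject handshake in this abstract can be sketched as the storage system tracking which LBA ranges each host has claimed; the rejection rule used here (overlap with another host's claim) is an assumption for illustration, not necessarily the patent's criterion.

```python
class StorageSystem:
    """Minimal sketch of the write-back caching handshake: the storage
    system records which LBA ranges a host has claimed."""

    def __init__(self):
        self.claimed = []   # list of (start_lba, end_lba, host) claims

    def request_write_back_caching(self, host, start_lba, end_lba):
        # Reject if the range overlaps one claimed by a different host.
        for s, e, h in self.claimed:
            if h != host and start_lba <= e and s <= end_lba:
                return False
        # Accept: the host may now cache writes to this range.
        self.claimed.append((start_lba, end_lba, host))
        return True
```

Only after an accepted response does the host begin caching writes to that range in its write-back cache.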
-
Publication number: 20150081984Abstract: A request is received that is to reference a first agent and to request a particular line of memory to be cached in an exclusive state. A snoop request is sent intended for one or more other agents. A snoop response is received that is to reference a second agent, the snoop response to include a writeback to memory of a modified cache line that is to correspond to the particular line of memory. A complete is sent to be addressed to the first agent, wherein the complete is to include data of the particular line of memory based on the writeback.Type: ApplicationFiled: November 26, 2014Publication date: March 19, 2015Inventors: Robert G. Blankenship, Bahaa Fahim, Robert Beers, Yen-Cheng Liu, Vedaraman Geetha, Herbert H. Hum, Jeff Willey
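The request/snoop/writeback/complete sequence in this abstract can be sketched end to end; the function, class, and message shapes below are illustrative assumptions about one possible flow, not the claimed protocol.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.lines = {}   # line_addr -> (data, modified?)

    def snoop_invalidate(self, line_addr):
        # Give up the line; report whether our copy was modified.
        data, modified = self.lines.pop(line_addr, (None, False))
        return {"modified": modified, "data": data}


def grant_exclusive(requester, owner, memory, line_addr):
    """Handle an exclusive-state request for a cache line: snoop the
    current owner; if its copy is modified, take the writeback, update
    memory, and return the fresh data in the completion to the requester."""
    snoop_response = owner.snoop_invalidate(line_addr)
    if snoop_response["modified"]:
        memory[line_addr] = snoop_response["data"]   # writeback to memory
    # The completion carries the line data, fresh if a writeback occurred.
    return {"to": requester, "data": memory[line_addr], "state": "exclusive"}
```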