Access Control Bit Patents (Class 711/145)
  • Patent number: 8327100
    Abstract: A microcontroller system, such as a system-on-a-chip integrated circuit, including a processor (e.g., a Von Neumann processor), memory, and a memory protection unit (MPU), where the MPU provides execute-only access rights for one or more protected areas of the memory. The MPU can allow instructions fetched from within a protected area to access data in the protected area while preventing instructions fetched from outside the protected area from accessing data in the protected area.
    Type: Grant
    Filed: February 16, 2011
    Date of Patent: December 4, 2012
    Assignee: Inside Secure
    Inventors: Sandrine Batifoulier, Stephane Godzinski, Vincent Dupaquis
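The access rule in the entry above (Patent 8327100) reduces to a single check: a data access to a protected area succeeds only if the instruction making it was itself fetched from inside that area. Below is a minimal C sketch of that rule; the `protected_area_t` layout, its field names, and the `mpu_data_access_allowed` helper are illustrative assumptions, not the patented MPU's register format or logic.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: one protected-region descriptor; names and layout
 * are assumptions, not the patented MPU's actual format. */
typedef struct {
    uint32_t base;   /* first address of the protected area */
    uint32_t limit;  /* last address of the protected area  */
} protected_area_t;

static bool in_area(const protected_area_t *a, uint32_t addr)
{
    return addr >= a->base && addr <= a->limit;
}

/* A data access to a protected area is allowed only if the instruction
 * performing it was itself fetched from inside that area. */
bool mpu_data_access_allowed(const protected_area_t *area,
                             uint32_t fetch_addr, uint32_t data_addr)
{
    if (!in_area(area, data_addr))
        return true;                  /* target not protected: no restriction here */
    return in_area(area, fetch_addr); /* protected data: caller must be inside */
}
```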
  • Publication number: 20120303908
    Abstract: Various embodiments of the present invention allow concurrent accesses to a cache. A request to update an object stored in a cache is received. A first data structure comprising a new value for the object is created in response to receiving the request. A cache pointer is atomically modified to point to the first data structure. A second data structure comprising an old value for the cached object is maintained until a process which holds a pointer to the old value of the cached object either ends or indicates that the old value is no longer needed.
    Type: Application
    Filed: August 9, 2012
    Publication date: November 29, 2012
    Applicant: International Business Machines Corporation
    Inventors: Paul M. DANTZIG, Robert O. DRYFOOS, Sastry S. DURI, Arun IYENGAR
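The publication above (20120303908) describes an update path in which a new value structure is built, the cache pointer is swung to it with one atomic operation, and the old structure is kept alive until any reader still holding it is done. The C11 sketch below illustrates that shape under stated assumptions: a single updater, a `retired_list` standing in for deferred reclamation, and the `obj_version`/`cache_update`/`cache_read` names are all invented for illustration; the actual reclamation trigger (a reader ending or signalling) is not shown.

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

typedef struct obj_version {
    struct obj_version *retire_next;  /* link for deferred reclamation */
    size_t len;
    char   data[];
} obj_version;

static _Atomic(obj_version *) cache_ptr;   /* current value of the cached object */
static obj_version *retired_list;          /* old values awaiting readers        */

/* Readers simply load the pointer; whatever version they see stays valid
 * because old versions are reclaimed only after readers are finished. */
const obj_version *cache_read(void)
{
    return atomic_load_explicit(&cache_ptr, memory_order_acquire);
}

/* Writer (single updater assumed): build the new structure, publish it
 * with one atomic swap, and retire the old structure instead of freeing it. */
void cache_update(const char *val, size_t len)
{
    obj_version *nv = malloc(sizeof *nv + len);
    nv->retire_next = NULL;
    nv->len = len;
    memcpy(nv->data, val, len);

    obj_version *old = atomic_exchange_explicit(&cache_ptr, nv,
                                                memory_order_acq_rel);
    if (old) {
        /* Not freed here: kept until every process that may still hold a
         * pointer to it ends or indicates it no longer needs the old value. */
        old->retire_next = retired_list;
        retired_list = old;
    }
}
```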
  • Patent number: 8321635
    Abstract: A method and apparatus for synchronizing input/output commands is provided. An incoming command mask representing an incoming input/output command associated with a memory region is created. In response to a determination that a pending input/output command associated with the memory region is pending, a bitwise inversion operation is performed on the incoming command mask to form a modified incoming command mask. A bitwise AND operation is performed on the modified incoming command mask and the pending command mask to form a pending command locking mask associated with the pending input/output command. A bitwise OR operation is performed between an existing memory lock for the same type of commands and the incoming command mask to form a new memory region lock.
    Type: Grant
    Filed: November 30, 2010
    Date of Patent: November 27, 2012
    Assignee: LSI Corporation
    Inventor: Mark Ish
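The abstract above (Patent 8321635) names three concrete bit operations: invert the incoming command mask, AND the result with the pending command mask, and OR the incoming mask into the existing lock for the same command type. A hedged C sketch of just those steps follows; the 64-bit mask width and the `build_region_locks` signature are assumptions for illustration, not the patented controller's interface.

```c
#include <stdint.h>

/* Illustrative 64-bit masks: each bit stands for a fixed-size chunk of the
 * memory region touched by an I/O command (an assumption; the patent does
 * not fix the mask width). */
typedef uint64_t cmd_mask_t;

/* Given the mask of an incoming command and the mask of a command already
 * pending on the same region, derive the locking masks described above. */
void build_region_locks(cmd_mask_t incoming_mask,
                        cmd_mask_t pending_mask,
                        cmd_mask_t existing_lock,      /* lock for same command type */
                        cmd_mask_t *pending_lock_out,  /* lock tied to pending cmd   */
                        cmd_mask_t *region_lock_out)   /* new lock for the region    */
{
    cmd_mask_t modified_incoming = ~incoming_mask;            /* bitwise inversion */
    *pending_lock_out = modified_incoming & pending_mask;     /* bitwise AND       */
    *region_lock_out  = existing_lock | incoming_mask;        /* bitwise OR        */
}
```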
  • Patent number: 8301844
    Abstract: Multi-processor systems and methods are disclosed. One embodiment may comprise a multi-processor system including a processor that executes program instructions across at least one memory barrier. A request engine may provide an updated data fill corresponding to an invalid cache line. The invalid cache line may be associated with at least one executed load instruction. A load compare component may compare the invalid cache line to the updated data fill to evaluate the consistency of the at least one executed load instruction.
    Type: Grant
    Filed: January 13, 2004
    Date of Patent: October 30, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Simon C. Steely, Jr., Gregory Edward Tierney
  • Patent number: 8301847
    Abstract: Various embodiments of the present invention manage concurrent accesses to a resource in a parallel computing environment. A plurality of locks is assigned to manage concurrent access to a plurality of parts of a resource. A usage of at least one of the plurality of parts of the resource is monitored. The assignment of the plurality of locks to the plurality of parts of the resource is modified based on the usage that has been monitored.
    Type: Grant
    Filed: February 22, 2011
    Date of Patent: October 30, 2012
    Assignee: International Business Machines Corporation
    Inventors: Paul M. Dantzig, Robert O. Dryfoos, Sastry S. Duri, Arun Iyengar
  • Patent number: 8296542
    Abstract: Some embodiments of a system and a method to dynamically allocate memory to applications have been presented. For instance, an application interface executing on a processing device running in a computing system receives a request from an application running on the processing device for a predetermined capacity of memory. A kernel running on the processing device may allocate one or more consecutive heaps of memory to the application in response to the request. A total capacity of the heaps allocated is at least the predetermined capacity requested.
    Type: Grant
    Filed: September 30, 2009
    Date of Patent: October 23, 2012
    Assignee: Red Hat, Inc.
    Inventor: Philipp Knirsch
  • Patent number: 8291168
    Abstract: Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache that indicate whether the portion of the cache is capable of operating at or below Vccmin levels. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 31, 2011
    Date of Patent: October 16, 2012
    Assignee: Intel Corporation
    Inventors: Christopher Wilkerson, Muhammad M. Khellah, Vivek De, Ming Zhang, Jaume Abella, Javier Carretero Casado, Pedro Chaparro Monferrer, Xavier Vera, Antonio Gonzalez
  • Patent number: 8266386
    Abstract: A design structure for a processor system may be embodied in a machine readable medium for designing, manufacturing or testing a processor integrated circuit. The design structure may embody a processor integrated circuit including multiple processors with respective processor cache memories. The design structure may specify enhanced cache coherency protocols to achieve cache memory integrity in a multi-processor environment. The design structure may describe a processor bus controller that manages cache coherency bus interfaces to master devices and slave devices. The design structure may also describe a master I/O device controller and a slave I/O device controller that couple directly to the processor bus controller while system memory couples to the processor bus controller via a memory controller.
    Type: Grant
    Filed: November 25, 2008
    Date of Patent: September 11, 2012
    Assignee: International Business Machines Corporation
    Inventor: Bernard Charles Drerup
  • Patent number: 8261022
    Abstract: A method and apparatus are disclosed for locking the most recently accessed frames in a cache memory. The most recently accessed frames in a cache memory are likely to be accessed by a task again in the near future. The most recently used frames may be locked at the beginning of a task switch or interrupt to improve the performance of the cache. The list of most recently used frames is updated as a task executes and may be embodied, for example, as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors the number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.
    Type: Grant
    Filed: October 9, 2001
    Date of Patent: September 4, 2012
    Assignee: Agere Systems Inc.
    Inventors: Harry Dwyer, John Susantha Fernando
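Patent 8261022 above combines two mechanisms: locking a task's most recently used frames at a task switch or interrupt, and adaptively unlocking a frame once the misses attributed to it pass a threshold. The sketch below shows both in plain C; the frame count, the threshold value, and the `lock_mru_frames`/`note_frame_miss` helpers are illustrative assumptions rather than the patented design.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_FRAMES      256   /* illustrative cache size              */
#define MISS_THRESHOLD   16   /* unlock after this many frame misses  */

/* Per-frame bookkeeping: a lock flag plus a miss counter used by the
 * adaptive unlocking mechanism.  Field names are assumptions. */
struct frame {
    bool     locked;        /* set for the most recently used frames */
    uint32_t miss_count;    /* misses charged to this locked frame   */
};

static struct frame frames[NUM_FRAMES];

/* At a task switch or interrupt, lock the frames named in the task's
 * most-recently-used list so the interrupting task cannot evict them. */
void lock_mru_frames(const unsigned *mru_list, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        frames[mru_list[i]].locked = true;
}

/* Adaptive unlocking: when a miss maps to a locked frame, count it and
 * release the frame once the misses exceed the predefined threshold. */
void note_frame_miss(unsigned frame_idx)
{
    struct frame *f = &frames[frame_idx];
    if (f->locked && ++f->miss_count > MISS_THRESHOLD) {
        f->locked = false;      /* the lock now costs more than it saves */
        f->miss_count = 0;
    }
}
```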
  • Patent number: 8255636
    Abstract: A messaging protocol that facilitates a distributed cache coherency conflict resolution in a multi-node system that resolves conflicts at a home node. The protocol may perform a method including supporting at least three protocol classes for the messaging protocol, via at least three virtual channels provided by a link layer of a network fabric coupled to the caching agents, wherein the virtual channels include a first virtual channel to support a probe message class, a second virtual channel to support an acknowledgment message class, and a third virtual channel to support a response message class.
    Type: Grant
    Filed: February 13, 2009
    Date of Patent: August 28, 2012
    Assignee: Intel Corporation
    Inventors: Brannon J. Batson, Benjamin Tsien, William A. Welch
  • Publication number: 20120215989
    Abstract: A system and method are disclosed for determining whether to allow or deny an access request based upon one or more descriptors at a local memory protection unit and based upon one or more descriptors at a system memory protection unit. When multiple descriptors of a memory protection unit apply to a particular request, the least-restrictive descriptor will be selected. System access information is stored at a cache of a local core in response to a cache line being filled. The cached system access information is merged with local access information, wherein the most-restrictive access is selected.
    Type: Application
    Filed: February 23, 2011
    Publication date: August 23, 2012
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: William C. Moyer, Joseph C. Circello
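Publication 20120215989 above selects the least-restrictive outcome when several descriptors of one memory protection unit match an address, then merges the local result with the cached system access information by keeping the most-restrictive access. Reading "least restrictive" as a union of permission bits and "most restrictive" as an intersection, a small C sketch follows; the `ACC_*` encoding, descriptor layout, and function names are assumptions, not the published design.

```c
#include <stddef.h>
#include <stdint.h>

/* Access rights as bit flags (an assumed encoding). */
enum { ACC_R = 1u << 0, ACC_W = 1u << 1, ACC_X = 1u << 2 };

typedef struct {
    uint32_t base, limit;   /* address range covered by the descriptor */
    uint32_t rights;        /* ACC_* bits the descriptor grants        */
} mpu_desc_t;

/* Within one protection unit: if several descriptors cover the address,
 * select the least-restrictive outcome, i.e. the union of the rights. */
static uint32_t unit_rights(const mpu_desc_t *d, size_t n, uint32_t addr)
{
    uint32_t rights = 0;
    for (size_t i = 0; i < n; i++)
        if (addr >= d[i].base && addr <= d[i].limit)
            rights |= d[i].rights;
    return rights;
}

/* Across units: merge the cached system rights with the local rights,
 * keeping the most-restrictive access, i.e. the intersection. */
uint32_t effective_rights(const mpu_desc_t *local, size_t nl,
                          uint32_t cached_system_rights, uint32_t addr)
{
    return unit_rights(local, nl, addr) & cached_system_rights;
}
```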
  • Patent number: 8250311
    Abstract: A method and apparatus for preserving memory ordering in a cache coherent link based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, a partial memory access, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to directly forward the data to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to writeback the data to a home node associated with the data.
    Type: Grant
    Filed: July 7, 2008
    Date of Patent: August 21, 2012
    Assignee: Intel Corporation
    Inventors: Robert H. Beers, Ching-Tsun Chou, Robert J. Safranek, James Vash
  • Patent number: 8250304
    Abstract: A memory device comprising a cache memory with a predetermined amount of cache sets, each cache set comprising a predetermined amount of cache lines. Each cache line is operable to indicate a cache data injection into the particular cache line triggered by a bus-actor.
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Florian Alexander Auernhammer, Patricia Maria Sagmeister
  • Patent number: 8250309
    Abstract: A data processor comprising: a control register operable to store a cache control value; and data accessing logic responsive to a data access instruction and to said cache control value to look for data to be accessed in a cache if said cache control value has a predetermined value and not to look for said data to be accessed in said cache if said cache control value does not have said predetermined value.
    Type: Grant
    Filed: January 28, 2005
    Date of Patent: August 21, 2012
    Assignee: ARM Limited
    Inventors: Patrick Gerard McGlew, Andrew Burdass
  • Patent number: 8250310
    Abstract: A method, apparatus, and article of manufacture are provided for managing a hybrid storage device based upon the properties associated therewith. The storage device includes flash memory and physical storage. Select data is written to the flash memory and is not subject to flushing to the physical storage, and select data is either written directly to the physical storage or written to the flash memory and is subject to flushing to the physical storage.
    Type: Grant
    Filed: July 31, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventor: Steven D. Cook
  • Patent number: 8250308
    Abstract: The method includes initiating a processor request to a cache in a requesting node and broadcasting the processor request to remote nodes when the processor request encounters a local cache miss, performing a directory search of each remote cache to determine a state of a target line's address and an ownership state of a specified address, returning the state of the target line to the requesting node and forming a combined response, and broadcasting the combined response to each remote node. During a fetch operation, when the directory search indicates an IM or a Target Memory node on a remote node, data is sourced from the respective remote cache and forwarded to the requesting node while protecting the data, and during a store operation, the data is sourced from the requesting node and protected while being forwarded to the IM or the Target Memory node after coherency has been established.
    Type: Grant
    Filed: February 15, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vesselina K. Papazova, Ekaterina M. Ambroladze, Michael A. Blake, Pak-kin Mak, Arthur J. O'Neill, Jr., Craig R. Waters
  • Patent number: 8250331
    Abstract: Operating system virtual memory management for hardware transactional memory. A method may be performed in a computing environment where an application running on a first hardware thread has been in a hardware transaction, with transactional memory hardware state in cache entries correlated by memory hardware when data is read from or written to data cache entries. The data cache entries are correlated to physical addresses in a first physical page mapped from a first virtual page in a virtual memory page table. The method includes an operating system deciding to unmap the first virtual page. As a result, the operating system removes the mapping of the first virtual page to the first physical page from the virtual memory page table. As a result, the operating system performs an action to discard transactional memory hardware state for at least the first physical page. Embodiments may further suspend hardware transactions in kernel mode.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: August 21, 2012
    Assignee: Microsoft Corporation
    Inventors: Koichi Yamada, Gad Sheaffer, Ali-Reza Adl-Tabatabai, Landy Wang, Martin Taillefer, Arun Kishan, David Callahan, Jan Gray, Vadim Bassin
  • Patent number: 8244981
    Abstract: In one embodiment, a memory that is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: August 14, 2012
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Patent number: 8244983
    Abstract: A memory control system is provided with a directory cache and a memory controller. The directory cache has a plurality of directory cache entries configured to store information regarding copies of memory lines stored in a plurality of memory caches, wherein each directory cache entry has one or more bits configured to store an ownership state that indicates whether a corresponding master directory entry lacks a memory cache owner. The memory controller is configured to free for re-use ones of the directory cache entries by 1) accessing a particular directory cache entry, and 2) determining whether the ownership state of the particular directory cache entry indicates that a corresponding master directory entry lacks a memory cache owner. If so, the memory controller A) skips a master directory update process, and B) claims for re-use the particular directory cache entry.
    Type: Grant
    Filed: October 30, 2006
    Date of Patent: August 14, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Erin A. Handgen
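The reclamation path in Patent 8244983 above hinges on one piece of state: if the ownership state recorded in a directory cache entry says the master directory entry lacks a memory cache owner, the master directory update is skipped and the entry is claimed for re-use immediately. A minimal C sketch with an assumed entry layout and an `update_master` callback standing in for the real write-back path:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed directory cache entry: a tag plus one ownership bit mirroring
 * whether the corresponding master directory entry has a cache owner. */
typedef struct {
    uint64_t tag;
    bool     valid;
    bool     has_owner;     /* false => master directory lacks an owner */
} dir_cache_entry_t;

/* Free an entry for re-use. */
void reclaim_entry(dir_cache_entry_t *e, void (*update_master)(uint64_t tag))
{
    if (e->valid && e->has_owner) {
        /* A memory cache still owns the line: bring the master directory
         * up to date before the entry is recycled. */
        update_master(e->tag);
    }
    /* Otherwise the ownership state says the master directory entry lacks
     * an owner, so the master directory update is skipped entirely. */
    e->valid = false;        /* entry is now free for re-use */
}
```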
  • Patent number: 8244985
    Abstract: Apparatus and methods relating to store operations are disclosed. In one embodiment, a first storage unit is to store data. A second storage unit is to store the data only after it has become detectable by a bus agent. Moreover, the second storage unit may store an index field for each data value to be stored within the second storage unit. Other embodiments are also disclosed.
    Type: Grant
    Filed: January 27, 2009
    Date of Patent: August 14, 2012
    Assignee: Intel Corporation
    Inventors: Vladimir Pentkovksi, Ling Cen, Vivek Garg, Deep Buch, David Zhao
  • Publication number: 20120203976
    Abstract: A data processing system includes at least a first through third processing nodes coupled by an interconnect fabric. The first processing node includes a master, a plurality of snoopers capable of participating in interconnect operations, and a node interface that receives a request of the master and transmits the request of the master to the second processing node with a nodal scope of transmission limited to the second processing node. The second processing node includes a node interface having a directory. The node interface of the second processing node permits the request to proceed with the nodal scope of transmission if the directory does not indicate that a target memory block of the request is cached other than in the second processing node and prevents the request from succeeding if the directory indicates that the target memory block of the request is cached other than in the second processing node.
    Type: Application
    Filed: April 12, 2012
    Publication date: August 9, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul A. Ganfield, Guy L. Guthrie, David J. Krolak, Michael S. Siegel, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams
  • Publication number: 20120198178
    Abstract: One embodiment provides a cached memory system including a memory cache and a plurality of read-claim (RC) machines configured for performing read and write operations dispatched from a processor. According to control logic provided with the cached memory system, a hazard is detected between first and second read or write operations being handled by first and second RC machines. The second RC machine is suspended and a subset of the address bits of the second operation at specific bit positions is recorded. The subset of address bits of the first operation at the specific bit positions is broadcast in response to the first operation being completed. The second operation is then re-requested.
    Type: Application
    Filed: January 31, 2011
    Publication date: August 2, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Cox, Robert J. Dorsey, Kevin CK Lin, Eric F. Robinson
  • Publication number: 20120191917
    Abstract: Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
    Type: Application
    Filed: March 20, 2012
    Publication date: July 26, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jun Dai, Subhendu Das, Zhi Gan, Zhang Yue
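Publication 20120191917 above partitions the cache into areas and gives each area its own lock attribute, so updates to different areas can proceed concurrently. A short C sketch using one pthread mutex per area follows; the area count, the hash that maps a key to an area, and the `cache_put` helper are illustrative assumptions. The design choice is the usual lock-striping trade-off: more areas mean less contention but more lock state.

```c
#include <pthread.h>

#define NUM_AREAS        16     /* number of cache areas (illustrative) */
#define ENTRIES_PER_AREA 128

struct entry { int key; int value; int in_use; };

/* One lock attribute per cache area: only the thread holding an area's
 * lock may update entries in that area; other areas remain available. */
struct cache_area {
    pthread_mutex_t lock;
    struct entry    entries[ENTRIES_PER_AREA];
};

static struct cache_area areas[NUM_AREAS];

void cache_init(void)
{
    for (int i = 0; i < NUM_AREAS; i++)
        pthread_mutex_init(&areas[i].lock, NULL);
}

static struct cache_area *area_for(int key)
{
    return &areas[(unsigned)key % NUM_AREAS];   /* simple hash to an area */
}

void cache_put(int key, int value)
{
    struct cache_area *a = area_for(key);
    pthread_mutex_lock(&a->lock);               /* take only this area's lock */
    struct entry *e = &a->entries[(unsigned)key % ENTRIES_PER_AREA];
    e->key = key;
    e->value = value;
    e->in_use = 1;
    pthread_mutex_unlock(&a->lock);
}
```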
  • Patent number: 8225297
    Abstract: Various technologies and techniques are disclosed for providing software accessible metadata on a cache of a central processing unit. A multiprocessor has at least one central processing unit. The central processing unit has a cache with cache lines that are augmented by cache metadata. The cache metadata includes software-controlled metadata identifiers that allow multiple logical processors to share the cache metadata. The metadata identifiers and cache metadata can then be used to accelerate various operations. For example, parallel computations can be accelerated using cache metadata and metadata identifiers. As another example, nested computations can be accelerated using metadata identifiers and cache metadata. As yet another example, transactional memory applications that include parallelism within transactions or that include nested transactions can be also accelerated using cache metadata and metadata identifiers.
    Type: Grant
    Filed: August 6, 2007
    Date of Patent: July 17, 2012
    Assignee: Microsoft Corporation
    Inventors: Jan Gray, Timothy L. Harris, James Larus, Burton Smith
  • Publication number: 20120179875
    Abstract: A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have an arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during a pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate a previous access occurred.
    Type: Application
    Filed: January 10, 2012
    Publication date: July 12, 2012
    Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Gad Sheaffer, Quinn Jacobson
  • Patent number: 8219758
    Abstract: In an embodiment, a non-transparent memory unit is provided which includes a non-transparent memory and a control circuit. The control circuit may manage the non-transparent memory as a set of non-transparent memory blocks. Software executing on one or more processors may request a non-transparent memory block in which to process data. The control circuit may allocate a first block, and may return an address (or other indication) of the allocated block so that the software can access the block. The control circuit may also provide automatic data movement between the non-transparent memory and a main memory system to which the non-transparent memory unit is coupled. For example, the automatic data movement may include filling data from the main memory system to the allocated block, or flushing the data in the allocated block to the main memory system after the processing of the allocated block is complete.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: July 10, 2012
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Publication number: 20120173825
    Abstract: Each level of cache within a memory hierarchy of a device is configured with a cache results register (CRR). The caches are coupled to a debugger interface via a peripheral bus. The device is placed in debug mode, and a debugger forwards a transaction address (TA) of a dummy transaction to the device. On receipt of the TA, the device processor forwards the TA via the system bus to the memory hierarchy to initiate an address lookup operation within each level of cache. For each cache in which the TA hits, the cache controller (debug) logic updates the cache's CRR with Hit, Way, and Index values, identifying the physical storage location within the particular cache at which the corresponding instruction/data is stored. The debugger retrieves information about the hit/miss status, the physical storage location and/or a copy of the data via direct requests over the peripheral bus.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Robert Ehrlich, Kevin C. Heuer, Robert A. McGowan
  • Patent number: 8214601
    Abstract: The present invention provides a system with a cache that indicates which, if any, of its sections contain data having spent status. The invention also provides a method for identifying cache sections containing data having spent status and then purging without writing back to main memory a cache line having at least one section containing data having spent status. The invention further provides a program that specifies a cache-line section containing data that is to acquire “spent” status. “Spent” data, herein, is useless modified or unmodified data that was formerly at least potentially useful data when it was written to a cache. “Purging” encompasses both invalidating and overwriting.
    Type: Grant
    Filed: July 30, 2004
    Date of Patent: July 3, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Dale Morris, Robert S. Schreiber
  • Patent number: 8214593
    Abstract: A serial interface cache controller, control method and micro-controller system using the same. The controller includes L rows of address tags, wherein each row of address tags includes an M-bit block tag and an N-bit valid area tag. The M-bit block tag records an address block of T-byte data stored in an internal cache memory, and the N-bit valid area tag records valid bit sectors in the address block. Each valid bit sector has a size of T/N bytes. The controller needs to read only T/N bytes of data at a time from an external memory into the internal cache memory, without reading the T-byte data of the whole address block. Because the micro-controller does not need to read the whole T-byte address block, its waiting time may be shortened and performance can be increased.
    Type: Grant
    Filed: November 18, 2009
    Date of Patent: July 3, 2012
    Assignee: Sunplus Innovation Technology Inc.
    Inventor: Han-Huah Hsu
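Patent 8214593 above keeps, per tag row, a block tag plus an N-bit valid-area tag so that only the missing T/N-byte sector has to be fetched over the serial interface rather than the whole T-byte block. The C sketch below assumes a direct-mapped arrangement, concrete values for L, T, and N, and an external `serial_read` routine; all of these are illustrative assumptions, not the patented controller.

```c
#include <stdint.h>

#define L        8            /* rows of address tags            */
#define T        256          /* bytes per address block         */
#define N        8            /* valid-bit sectors per block     */
#define SECTOR  (T / N)       /* bytes fetched per external read */

/* Each row: a block tag naming the cached address block and an N-bit
 * valid-area tag flagging which T/N-byte sectors are present. */
struct tag_row {
    uint32_t block_tag;
    uint8_t  valid_sectors;   /* N = 8 valid bits */
    uint8_t  data[T];
};

static struct tag_row rows[L];

/* Assumed external-memory reader (e.g. over the serial interface). */
extern void serial_read(uint32_t addr, uint8_t *dst, unsigned len);

/* Read one byte through the cache, filling only the sector that holds
 * it rather than the whole T-byte block. */
uint8_t cached_read(uint32_t addr)
{
    uint32_t block  = addr / T;
    unsigned sector = (addr % T) / SECTOR;
    struct tag_row *row = &rows[block % L];        /* direct-mapped (assumed) */

    if (row->block_tag != block) {                 /* new block: reset tags   */
        row->block_tag = block;
        row->valid_sectors = 0;
    }
    if (!(row->valid_sectors & (1u << sector))) {  /* sector miss: fetch T/N  */
        serial_read(block * T + sector * SECTOR,
                    &row->data[sector * SECTOR], SECTOR);
        row->valid_sectors |= 1u << sector;
    }
    return row->data[addr % T];
}
```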
  • Patent number: 8214602
    Abstract: In one embodiment, a processor comprises a data cache and a load/store unit (LSU). The LSU comprises a queue and a control unit, and each entry in the queue is assigned to a different load that has accessed the data cache but has not retired. The control unit is configured to update the data cache hit status of each load represented in the queue as a content of the data cache changes. The control unit is configured to detect a snoop hit on a first load in a first entry of the queue responsive to: the snoop index matching a load index stored in the first entry, the data cache hit status of the first load indicating hit, the data cache detecting a snoop hit for the snoop operation, and a load way stored in the first entry matching a first way of the data cache in which the snoop operation is a hit.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: July 3, 2012
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ashutosh S. Dhodapkar, Michael G. Butler
  • Patent number: 8214603
    Abstract: A method for handling multiple memory requests within a multi-processor system is disclosed. A lock control section is initially assigned to a data block within a system memory. In response to a request for accessing the data block by a processing unit, a determination is made whether or not the lock control section of the data block has been set. If the lock control section has been set, another determination is made whether or not the requesting processing unit is located beyond a predetermined distance from a memory controller. If the requesting processing unit is located beyond a predetermined distance from the memory controller, the requesting processing unit is invited to perform other functions; otherwise, the number of the requesting processing unit is placed in a queue table. However, if the lock control section has not been set, the lock control section of the data block is set, and the access request is allowed.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: July 3, 2012
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Guy L. Guthrie, William J. Starke
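Patent 8214603 above gates access to a data block on a lock control section, defers far-away requesters (they are "invited to perform other functions"), and queues nearby ones. A compact C sketch of that decision follows, with an assumed distance limit, queue size, and `request_access` return convention.

```c
#include <stdbool.h>

#define MAX_QUEUE      32
#define DISTANCE_LIMIT  4   /* hops beyond which a requester is deferred (assumed) */

struct data_block {
    bool lock_set;              /* the lock control section            */
    int  queue[MAX_QUEUE];      /* queue table of waiting unit numbers */
    int  queue_len;
};

/* Returns 1 if access is granted, 0 if the requesting unit was queued,
 * and -1 if a distant requester is invited to perform other functions. */
int request_access(struct data_block *b, int unit_id, int distance_to_mc)
{
    if (!b->lock_set) {
        b->lock_set = true;     /* lock was clear: set it and allow access     */
        return 1;
    }
    if (distance_to_mc > DISTANCE_LIMIT)
        return -1;              /* far from the memory controller: retry later */
    if (b->queue_len < MAX_QUEUE)
        b->queue[b->queue_len++] = unit_id;   /* remember who is waiting */
    return 0;
}
```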
  • Patent number: 8209492
    Abstract: Systems and methods for accessing common registers in a multi-core processor are disclosed. In an embodiment a method may comprise streaming at least one transaction from one of a plurality of processing cores in a core domain directly to a register domain. The method may also comprise reassembling the at least one streamed transaction in the register domain for data access operations at the common registers.
    Type: Grant
    Filed: June 14, 2005
    Date of Patent: June 26, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Warren K. Howlett, Christopher L. Lyles
  • Patent number: 8205030
    Abstract: There is provided a composite type recording apparatus restricting write operations depending on the type of a connected host apparatus, including a recording medium having a first data region, a non-volatile storage medium having a second data region and an identification information table for integrating and managing the first and second data regions, an information selection section for selecting positional information having predetermined identification information from the identification information table according to the type of the host apparatus, a conversion section for converting positional information selected by the information selection section into positional information matching the first data region or positional information matching the second data region, a first write section for writing data supplied from the host apparatus in the first data region according to the conversion process of the conversion section, and a second write section for writing data supplied from the host apparatus in the second data region.
    Type: Grant
    Filed: July 21, 2006
    Date of Patent: June 19, 2012
    Assignee: Sony Corporation
    Inventors: Hajime Nishimura, Takeshi Sasa, Tetsuya Tamura, Kazuya Suzuki
  • Patent number: 8195884
    Abstract: A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, a multiplicity of computer processors, each computer processor implementing a plurality of hardware threads of execution; and computer memory, the computer memory organized in pages and operatively coupled to one or more of the computer processors, the computer memory including a set associative cache, the cache comprising cache ways organized in sets, the cache being shared among the hardware threads of execution, each page of computer memory restricted for caching by one replacement vector of a class of replacement vectors to particular ways of the cache, each page of memory further restricted for caching by one or more bits of a replacement vector classification to particular sets of ways of the cache.
    Type: Grant
    Filed: September 18, 2008
    Date of Patent: June 5, 2012
    Assignee: International Business Machines Corporation
    Inventors: Russell D. Hoover, Eric O. Mejdrich
  • Patent number: 8195891
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: June 5, 2012
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Publication number: 20120137081
    Abstract: Systems and methods for management of a cache are disclosed. In general, embodiments described herein store access counts in file system metadata associated with files in the cache. By encoding access counts in the file system metadata, file I/O operations are reduced. Preferably, the reference count is encoded in an access count timestamp in the file system metadata. The access counts can be decoded based on the difference between the access count time stamp and a base time value, with larger differences reflecting a larger access count. The cache can be aged by advancing the base time value, thereby causing the access count for a file to drop. The base time value can also be stored in file system metadata, thereby reducing file I/O operations when performing aging.
    Type: Application
    Filed: November 30, 2010
    Publication date: May 31, 2012
    Inventor: James C. Shea
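The encoding in publication 20120137081 above stores an access count as a timestamp whose distance from a base time is the count, so aging the entire cache is just advancing the base time. A hedged C sketch follows, assuming one unit of `time_t` per access and in-memory variables standing in for the file-system metadata fields; the function names are illustrative.

```c
#include <stdint.h>
#include <time.h>

/* Assumed encoding: the access-count timestamp stored in file metadata is
 * the cache's base time plus one time unit per recorded access.  The base
 * time itself would also be persisted in file system metadata. */
static time_t cache_base_time;

time_t encode_access_count(uint32_t count)
{
    return cache_base_time + (time_t)count;
}

uint32_t decode_access_count(time_t access_count_stamp)
{
    if (access_count_stamp <= cache_base_time)
        return 0;                               /* aged down to nothing */
    return (uint32_t)(access_count_stamp - cache_base_time);
}

/* Aging: advancing the base time makes every stored stamp decode to a
 * smaller count without rewriting any per-file metadata. */
void age_cache(uint32_t amount)
{
    cache_base_time += (time_t)amount;
}
```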
  • Patent number: 8190840
    Abstract: A memory device comprises a memory array, a status register, a status-register write-protect bit and a security register. The memory array contains a number of memory blocks. The status register includes at least one protection bit indicative of a protection status of at least one corresponding block of the memory blocks. The status-register write-protect bit is coupled with the status register for preventing a state change of the at least one protection bit. The security register includes at least one register-protection bit for preventing the state change in one of the at least one protection bit of the status register and the status-register write-protect bit.
    Type: Grant
    Filed: June 8, 2011
    Date of Patent: May 29, 2012
    Assignee: MACRONIX International Co., Ltd.
    Inventors: Yu-Lan Kuo, Chun-Yi Lee, Kuen-Long Chang, Chun-Hsiung Hung
  • Patent number: 8185695
    Abstract: A system and method for selectively transmitting probe commands and reducing network traffic. Directory entries are maintained to filter probe command and response traffic for certain coherent transactions. Rather than storing directory entries in a dedicated directory storage, directory entries may be stored in designated locations of a shared cache memory subsystem, such as an L3 cache. Directory entries are stored within the shared cache memory subsystem to provide indications of lines (or blocks) that may be cached in exclusive-modified, owned, shared, shared-one, or invalid coherency states. The absence of a directory entry for a particular line may imply that the line is not cached anywhere in a computing system.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: May 22, 2012
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Patrick Conway, Kevin Michael Lepak
  • Patent number: 8180968
    Abstract: The invention relates to a method for reducing cache flush time of a cache in a computer system. The method includes populating at least one of a plurality of directory entries of a dirty line directory based on modification of the cache to form at least one populated directory entry, and de-populating a pre-determined number of the plurality of directory entries according to a dirty line limiter protocol causing a write-back from the cache to a main memory, where the dirty line limiter protocol is based on a number of the at least one populated directory entry exceeding a pre-defined limit.
    Type: Grant
    Filed: March 28, 2007
    Date of Patent: May 15, 2012
    Assignee: Oracle America, Inc.
    Inventors: Brian W. O'Krafka, Roy S. Moore, Pranay Koka
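Patent 8180968 above populates a dirty-line directory as cache lines are modified and, once the number of populated entries exceeds a predefined limit, de-populates entries by forcing write-backs to main memory. The C sketch below shows that control loop; the directory size, the limit, the victim choice, and the `write_back_line` hook are illustrative assumptions, and handling of lines already present in the directory is omitted.

```c
#include <stdbool.h>
#include <stdint.h>

#define DIR_ENTRIES  1024   /* capacity of the dirty-line directory   */
#define DIRTY_LIMIT   768   /* pre-defined limit on populated entries */

struct dirty_entry { bool populated; uint64_t line_addr; };

static struct dirty_entry dir[DIR_ENTRIES];
static unsigned populated_count;

/* Assumed hook that writes one dirty line back to main memory. */
extern void write_back_line(uint64_t line_addr);

/* Called when a cache line is modified: record it in the directory and,
 * if the populated count exceeds the limit, de-populate entries by
 * forcing write-backs (the dirty line limiter protocol). */
void note_line_dirtied(uint64_t line_addr)
{
    for (unsigned i = 0; i < DIR_ENTRIES; i++) {
        if (!dir[i].populated) {
            dir[i].populated = true;
            dir[i].line_addr = line_addr;
            populated_count++;
            break;
        }
    }
    while (populated_count > DIRTY_LIMIT) {
        for (unsigned i = 0; i < DIR_ENTRIES; i++) {
            if (dir[i].populated) {
                write_back_line(dir[i].line_addr);   /* cache -> main memory */
                dir[i].populated = false;
                populated_count--;
                break;
            }
        }
    }
}
```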
  • Patent number: 8180969
    Abstract: A cache stores information in each of a plurality of cache lines. Addressing circuitry receives memory addresses for comparison with multiple ways of stored addresses to determine a hit condition representing a match of a stored address and a received address. A pseudo least recently used (PLRU) tree circuit stores one or more states of a PLRU tree and implements a tree having a plurality of levels beginning with a root and indicates one of a plurality of ways in the cache. Each level has one or more nodes. Multiple nodes within a same level are child nodes to a parent node of an immediately higher level. PLRU update circuitry that is coupled to the addressing circuitry and the PLRU tree circuit receives lock information to lock one or more lines of the cache and prevent a PLRU tree state from selecting a locked line.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: May 15, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
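Patent 8180969 above feeds lock information into a tree-PLRU replacement circuit so the PLRU state can never select a locked line as the victim. The C sketch below shows the idea for a 4-way set with a 3-bit tree and a per-way lock mask; the bit layout and the `plru_victim` helper are assumptions, and the usual PLRU state update on each access is left out for brevity.

```c
#include <stdbool.h>
#include <stdint.h>

/* 4-way set with a 3-bit PLRU tree: bit 0 is the root (0 = go left),
 * bit 1 picks within ways 0/1, bit 2 picks within ways 2/3.  A lock mask
 * keeps the tree from ever selecting a locked way as the victim. */
struct plru_set {
    uint8_t tree;        /* PLRU state bits              */
    uint8_t lock_mask;   /* bit i set => way i is locked */
};

static bool pair_all_locked(uint8_t lock_mask, unsigned first_way)
{
    return (lock_mask & (3u << first_way)) == (3u << first_way);
}

/* Pick a victim way, steering away from locked ways.  Returns -1 only
 * if every way in the set is locked. */
int plru_victim(const struct plru_set *s)
{
    if (s->lock_mask == 0xF)
        return -1;

    bool go_right = (s->tree & 1u) != 0;
    if (go_right && pair_all_locked(s->lock_mask, 2))  /* both right ways locked */
        go_right = false;
    if (!go_right && pair_all_locked(s->lock_mask, 0)) /* both left ways locked  */
        go_right = true;

    if (!go_right) {
        int way = (s->tree & 2u) ? 1 : 0;          /* bit 1 picks within 0/1 */
        if (s->lock_mask & (1u << way))
            way ^= 1;                              /* dodge a locked way     */
        return way;
    } else {
        int way = (s->tree & 4u) ? 3 : 2;          /* bit 2 picks within 2/3 */
        if (s->lock_mask & (1u << way))
            way ^= 1;                              /* toggles between 2 and 3 */
        return way;
    }
}
```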
  • Publication number: 20120117333
    Abstract: A method and apparatus for detecting lock instructions and lock release instructions, as well as predicting critical sections, is herein described. A lock instruction is detected with detection logic, which potentially resides in decode logic. A lock instruction entry associated with the lock instruction is stored/created. The address locations, and the values to be written to those address locations, of a subsequent potential lock release instruction are compared to the address loaded from by the lock instruction and the value loaded by the lock instruction. If the addresses and values match, it is determined that the lock release instruction matches the lock instruction. A prediction entry stores a reference to the lock instruction, such as a last instruction pointer (LIP), and an associated value to represent that the lock instruction is to be elided upon subsequent detection, if it is determined that the lock release instruction matches the lock instruction.
    Type: Application
    Filed: January 13, 2012
    Publication date: May 10, 2012
    Inventors: Haitham Akkary, Ravi Rajwar, Srikanth T. Srinivasan
  • Publication number: 20120117332
    Abstract: A method and apparatus for synchronizing input/output commands is provided. An incoming command mask representing an incoming input/output command associated with a memory region is created. In response to a determination that a pending input/output command associated with the memory region is pending, a bitwise inversion operation is performed on the incoming command mask to form a modified incoming command mask. A bitwise AND operation is performed on the modified incoming command mask and the pending command mask to form a pending command locking mask associated with the pending input/output command. A bitwise OR operation is performed between an existing memory lock for the same type of commands and the incoming command mask to form a new memory region lock.
    Type: Application
    Filed: November 30, 2010
    Publication date: May 10, 2012
    Applicant: LSI CORPORATION
    Inventor: Mark Ish
  • Publication number: 20120117334
    Abstract: A method and apparatus for monitoring memory accesses in hardware to support transactional execution is herein described. Attributes monitor accesses to data items without regard for detection at physical storage structure granularity, instead ensuring monitoring at least at data item granularity. As an example, attributes are added to state bits of a cache to enable new cache coherency states. Upon a monitored memory access to a data item, which may be selectively determined, coherency states associated with the data item are updated to a monitored state. As a result, invalidating requests to the data item are detected through a combination of the request type and the monitored coherency state of the data item.
    Type: Application
    Filed: January 20, 2012
    Publication date: May 10, 2012
    Inventors: Gad Sheaffer, Shlomo Raikin, Vadim Bassin, Raanan Sade, Ehud Cohen, Oleg Margulis
  • Patent number: 8176260
    Abstract: In a recording controller, an invalidation-request issuing unit issues an invalidation request for invalidating third data when second data is recorded at the recording location for first data that is to be recorded in a memory and the third data, which corresponds to the second data, is recorded in the cache. A notifying unit, when the third data is invalidated, outputs data for identifying the recording request for the first data to the output source of the recording request, and notifies the output source that recording of the first data is completed.
    Type: Grant
    Filed: July 31, 2008
    Date of Patent: May 8, 2012
    Assignee: Fujitsu Limited
    Inventor: Kenji Uchida
  • Patent number: 8176261
    Abstract: One aspect of the embodiments utilizes an information processing apparatus having a plurality of system boards connected via a bus, each system board including a CPU having a cache memory, a main memory that forms a shared memory, and a system controller that manages the CPU and the main memory and controls data transfers of at least one of the cache memory and the main memory in response to a memory access request, wherein each system controller includes a snoop controller that selects a transfer source CPU from among transfer source candidate CPUs, each having a cache memory containing the data requested by the memory access request, when the data is available in a plurality of cache memories.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: May 8, 2012
    Assignee: Fujitsu Limited
    Inventor: Go Sugizaki
  • Patent number: 8176014
    Abstract: Servers in a network cluster can each store a copy of a data item in local cache, providing read access to these copies through read-only entity beans. The original data item in the database can be updated through a read/write entity bean on one of the cluster servers. That cluster server has access to an invalidation target, which contains identification information relating to copies of the data item stored on servers in the cluster. Once the read/write bean updates the data item in the database, an invalidate request can be sent or multicast to all cluster members, or to any read-only bean or server contained in the invalidation target. Each server or read-only bean receiving the request knows to drop any copy of the data item in local cache, and can request a current copy of the data item from the database.
    Type: Grant
    Filed: April 6, 2007
    Date of Patent: May 8, 2012
    Assignee: Oracle International Corporation
    Inventors: Dean Bernard Jacobs, Robert Woollen, Seth White
  • Patent number: 8171230
    Abstract: A PCI Express (PCIe) computer system utilizes address translation services to translate virtual addresses from I/O device adaptors to physical addresses of system memory. A combined memory controller and host bridge uses a translation agent to convert the I/O addresses via translation control entries (TCEs) in a TCE table (also known as an address translation and protection table). Some of the I/O device adaptors have address translation caches for local storage of TCEs. The TCE definition includes a new non-cacheable control bit which is set active in the TCE table when the TCE is in the process of being invalidated. The memory controller prevents further caching of the TCE while the non-cacheable control bit is active. A further implementation utilizes a change-in-progress control bit of the TCE to indicate that the TCE is in the process of being changed, to allow simultaneous invalidation of the previously cached TCE information.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: May 1, 2012
    Assignee: International Business Machines Corporation
    Inventors: Douglas M. Freimuth, Renato J. Recio, Steven M. Thurber
  • Patent number: 8171218
    Abstract: A memory card system and memory card. The memory card system may include a host and a memory card able to be received by the host. The memory card may transfer an application program index to the host in response to a command from the host. Time spent finding information relating to the application programs loaded in the memory card may be saved and a convenient and efficient interface may be provided to a user.
    Type: Grant
    Filed: October 28, 2006
    Date of Patent: May 1, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gwang-Myung Kim, Tae-Hyun Yoon
  • Patent number: 8161248
    Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Phillip Heidelberger, Dirk Hoenicke, Martin Ohmacht
  • Patent number: 8161247
    Abstract: Synchronizing threads on loss of memory access monitoring. Using a processor level instruction included as part of an instruction set architecture for a processor, a read or write monitor (to detect writes, or reads or writes, respectively, from other agents) is set on a first set of one or more memory locations, and another is set on a second set of one or more different memory locations. A processor level instruction is executed which causes the processor to suspend executing instructions, and optionally to enter a low power mode, pending loss of a read or write monitor for the first or second set of one or more memory locations. A conflicting access is detected on the first or second set of one or more memory locations, or a timeout is detected. As a result, the method includes resuming execution of instructions.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: April 17, 2012
    Assignee: Microsoft Corporation
    Inventors: Jan Gray, David Callahan, Burton Jordan Smith, Gad Sheaffer, Ali-Reza Adl-Tabatabai, Bratin Saha