Access Control Bit Patents (Class 711/145)
-
Publication number: 20120079213
Abstract: Various embodiments of the present invention manage concurrent accesses to a resource in a parallel computing environment. A plurality of locks is assigned to manage concurrent access to a plurality of parts of a resource. A usage of at least one of the plurality of parts of the resource is monitored. The assignment of the plurality of locks to the plurality of parts of the resource is modified based on the usage that has been monitored.
Type: Application
Filed: February 22, 2011
Publication date: March 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Paul M. DANTZIG, Robert O. Dryfoos, Sastry S. DURI, Arun IYENGAR
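The adaptive lock assignment this abstract describes — a pool of locks striped over the parts of a resource, with the mapping revised from monitored usage — might be sketched as below. The class name, round-robin initial mapping, and "dedicate a lock to the hottest part" rebalancing policy are illustrative assumptions, not details from the patent.

```python
import threading

class AdaptiveStripedLock:
    """Illustrative sketch: locks striped over parts of a resource,
    re-assigned based on observed per-part usage (hypothetical design)."""

    def __init__(self, num_parts, num_locks):
        # num_locks must be >= 2 for this simple rebalancing scheme.
        self.num_parts = num_parts
        self.locks = [threading.Lock() for _ in range(num_locks)]
        # Initially, parts map to locks round-robin.
        self.assignment = [i % num_locks for i in range(num_parts)]
        self.usage = [0] * num_parts

    def lock_for(self, part):
        self.usage[part] += 1              # monitor usage of each part
        return self.locks[self.assignment[part]]

    def rebalance(self):
        """Give the most-used part a dedicated lock (lock 0 here),
        sharing the remaining locks among the other parts."""
        hottest = max(range(self.num_parts), key=lambda p: self.usage[p])
        self.assignment[hottest] = 0
        others = [p for p in range(self.num_parts) if p != hottest]
        for i, p in enumerate(others):
            self.assignment[p] = 1 + i % (len(self.locks) - 1)
```

A caller takes `lock_for(part)` around each access and periodically calls `rebalance()`; the point of the technique is that the lock-to-part mapping is not fixed at design time.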
-
Patent number: 8145847
Abstract: A system comprises a first node having an associated cache including data having an associated first cache state. The first cache state is capable of identifying the first node as being an ordering point for serializing requests from other nodes for the data.
Type: Grant
Filed: January 20, 2004
Date of Patent: March 27, 2012
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen R. Van Doren, Gregory Edward Tierney, Simon C. Steely, Jr.
-
Patent number: 8145830
Abstract: A flash memory device includes a storage area having a main memory portion and a cache memory portion storing at least one bit per cell less than the main memory portion; and a controller that manages data transfer between the cache memory portion and the main memory portion according to at least one caching command received from a host. The management of data transfer, by the controller, includes transferring new data from the host to the cache memory portion, copying the data from the cache memory portion to the main memory portion, and controlling (enabling/disabling) the scheduling of cache cleaning operations.
Type: Grant
Filed: April 26, 2010
Date of Patent: March 27, 2012
Assignee: SanDisk IL Ltd.
Inventor: Menahem Lasser
-
Patent number: 8140764
Abstract: A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics may include but are not limited to the data structure used by the execution entity, the expected reference pattern of the execution entity, the type of the execution entity, the heat and power consumption of the execution entity, etc. Examples of cache attributes that may be reconfigured may include but are not limited to the associativity of the cache memory, the amount of the cache memory available to store data, the coherence granularity of the cache memory, the line size of the cache memory, etc.
Type: Grant
Filed: January 6, 2011
Date of Patent: March 20, 2012
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Balaram Sinharoy, Robert B. Tremaine, Robert W. Wisniewski
-
Patent number: 8140823
Abstract: Systems and methods including a multithreaded processor with a lock indicator are disclosed. In an embodiment, a system includes means for indicating a lock status of a shared resource in a multithreaded processor. The system includes means for automatically locking the shared resource before processing exception handling instructions associated with the shared resource. The system further includes means for unlocking the shared resource.
Type: Grant
Filed: December 3, 2007
Date of Patent: March 20, 2012
Assignee: QUALCOMM Incorporated
Inventors: Lucian Codrescu, Erich James Plondke, Suresh Venkumahanti
-
Patent number: 8140773
Abstract: A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have any arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during a pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate a previous access occurred.
Type: Grant
Filed: June 27, 2007
Date of Patent: March 20, 2012
Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Gad Sheaffer, Quinn Jacobson
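A rough software model of the filter-word idea: the first read of an object inside a transaction runs the full access barrier and flips the object's filter to a second state; redundant reads see the flag and elide the barrier. The patent's mechanism is hardware-accelerated; the names and plain-Python structure below are illustrative only.

```python
class FilteredReadBarrier:
    """Sketch of per-object read-barrier filtering in an STM-like system.
    Illustrative model, not the patented hardware mechanism."""

    def __init__(self):
        self.filter = {}        # object id -> "accessed this transaction" flag
        self.read_set = []      # objects logged by the full barrier
        self.barrier_runs = 0   # counts expensive barrier executions

    def begin(self):
        """Start a transaction: all filter words return to the default state."""
        self.filter.clear()
        self.read_set.clear()
        self.barrier_runs = 0

    def read(self, obj_id, heap):
        if not self.filter.get(obj_id):      # first access: full barrier
            self.barrier_runs += 1
            self.read_set.append(obj_id)     # log for conflict detection
            self.filter[obj_id] = True       # second state: filtered
        return heap[obj_id]                  # redundant accesses skip the barrier
```

The payoff is visible in `barrier_runs`: rereading the same object does not grow the read set or rerun barrier work.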
-
Patent number: 8135916
Abstract: A processor includes a first level of cache memory and a first set of instructions configured to implement a first cache coherency protocol. The processor also includes a second set of instructions configured to implement a second cache coherency protocol and a cache coherency protocol selector having at least two choice-states. The processor further includes a cache coherency implementer configured to implement the first cache coherency protocol or the second cache coherency protocol with respect to the first level of cache memory based on a selected choice-state of the cache coherency protocol selector.
Type: Grant
Filed: April 1, 2009
Date of Patent: March 13, 2012
Assignee: Marvell International Ltd.
Inventors: R. Frank O'Bleness, Sujat Jamil, David E. Miner, Joseph Delgross, Tom Hameenanttila, Jeffrey Kehl
-
Patent number: 8135910
Abstract: A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories, each with its own snooping logic. By having two cache directories that can be accessed simultaneously, the bandwidth can be essentially doubled. Furthermore, a "frequency matcher" may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect, thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search the cache directory to determine if data at the received addresses is stored in the cache memory. As a result of slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.
Type: Grant
Filed: February 11, 2005
Date of Patent: March 13, 2012
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams, Phillip G. Williams
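The sliced-directory idea — select one of two directory halves by an address bit so two snoops can be serviced in parallel — might be modeled as follows. The slice-select bit position and the API are assumptions for illustration, not details from the patent.

```python
class SlicedDirectory:
    """Sketch: a snoop directory split into two slices, selected by one
    address bit, so lookups to different slices can proceed in parallel."""

    def __init__(self):
        self.slices = ({}, {})   # each slice maps address -> coherence state

    def _slice(self, addr):
        # Illustrative choice: select the slice on bit 6 of the address.
        return self.slices[(addr >> 6) & 1]

    def install(self, addr, state):
        self._slice(addr)[addr] = state

    def snoop(self, addr):
        return self._slice(addr).get(addr)

    def snoop_pair(self, addr_a, addr_b):
        """Two snoops can be serviced in the same cycle when they fall in
        different slices (modeled here as a boolean flag)."""
        parallel = ((addr_a >> 6) & 1) != ((addr_b >> 6) & 1)
        return self.snoop(addr_a), self.snoop(addr_b), parallel
```

The doubling of bandwidth claimed by the abstract corresponds to the `parallel == True` case: both lookups touch distinct slices and neither waits on the other.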
-
Patent number: 8127085
Abstract: Methods and apparatus for instruction restarts and inclusion in processor micro-op caches are disclosed. Embodiments of micro-op caches have way storage fields to record the instruction-cache ways storing corresponding macroinstructions. Instruction-cache in-use indications associated with the instruction-cache lines storing the instructions are updated upon micro-op cache hits. In-use indications can be located using the recorded instruction-cache ways in micro-op cache lines. Victim-cache deallocation micro-ops are enqueued in a micro-op queue after micro-op cache miss synchronizations, responsive to evictions from the instruction-cache into a victim-cache. Inclusion logic also locates and evicts micro-op cache lines corresponding to the recorded instruction-cache ways, responsive to evictions from the instruction-cache.
Type: Grant
Filed: December 31, 2008
Date of Patent: February 28, 2012
Assignee: Intel Corporation
Inventors: Lihu Rappoport, Chen Koren, Franck Sala, Oded Lempel, Ido Ouziel, Ilhyun Kim, Ron Gabor, Lior Libis, Gregory Pribush
-
Patent number: 8122197
Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
Type: Grant
Filed: August 19, 2009
Date of Patent: February 21, 2012
Assignee: International Business Machines Corporation
Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk Hoenicke, Martin Ohmacht
-
Patent number: 8112505
Abstract: Techniques are provided for desktop streaming over wide area networks. In one embodiment, a computing device comprises logic that is configured to intercept file open requests for files stored in a file system, where at least some of the files in the file system may have not yet been fully downloaded. In response to a request to open a file, the logic is configured to modify a first sharing mode specified therein and to open the file in a read-write sharing mode that allows other processes to open the file. While one or more blocks of the file are being downloaded or written into the file, the logic is configured to check whether a second sharing mode received in a subsequent request to open the file is compatible with the first sharing mode. If the second sharing mode is not compatible with the first sharing mode, the logic is configured to deny the subsequent request even though in the file system the file is opened in the read-write sharing mode.
Type: Grant
Filed: March 12, 2010
Date of Patent: February 7, 2012
Assignee: Wanova Technologies, Ltd.
Inventors: Israel Ben-Shaul, Shahar Glixman, Tal Zamir
-
Patent number: 8108618
Abstract: An information handling system includes a processor integrated circuit including multiple processors with respective processor cache memories. Enhanced cache coherency protocols achieve cache memory integrity in a multi-processor environment. A processor bus controller manages cache coherency bus interfaces to master devices and slave devices. In one embodiment, a master I/O device controller and a slave I/O device controller couple directly to the processor bus controller while system memory couples to the processor bus controller via a memory controller. In one embodiment, the processor bus controller blocks partial responses that it receives from all devices except the slave I/O device from being included in a combined response that the processor bus controller sends over the cache coherency buses.
Type: Grant
Filed: October 30, 2007
Date of Patent: January 31, 2012
Assignee: International Business Machines Corporation
Inventor: Bernard Charles Drerup
-
Patent number: 8103830
Abstract: Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache to indicate whether the portion of the cache is capable of operating at or below Vccmin levels. Other embodiments are also described and claimed.
Type: Grant
Filed: September 30, 2008
Date of Patent: January 24, 2012
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Muhammad M. Khellah, Vivek De, Ming Zhang, Jaume Abella, Javier Carretero Casado, Pedro Chaparro Monferrer, Xavier Vera, Antonio Gonzalez
-
Patent number: 8103638
Abstract: Methods, systems, and computer-readable media are disclosed for partitioning contended synchronization objects. A particular method determines a contention-free value of a performance metric associated with a synchronization object of a data structure. A contended value of the performance metric is measured, and the synchronization object is partitioned when the contended value of the performance metric exceeds a multiple of the contention-free value of the performance metric.
Type: Grant
Filed: May 7, 2009
Date of Patent: January 24, 2012
Assignee: Microsoft Corporation
Inventors: Fabricio Voznika, Alexandre Verbitski, Pravin Mittal
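The trigger described here — partition a synchronization object once its measured contended cost exceeds a multiple of its contention-free baseline — could look roughly like the counter below. The choice of a counter, the threshold factor of 4, and all names are illustrative assumptions, not taken from the patent.

```python
import threading
from collections import defaultdict

class AdaptiveCounter:
    """Sketch of contention-driven partitioning: one locked counter is
    split into per-thread sub-counters once the observed cost of an
    operation exceeds `factor` times the contention-free baseline."""

    def __init__(self, baseline_cost, factor=4):
        self.baseline = baseline_cost    # contention-free metric, measured up front
        self.factor = factor
        self.partitioned = False
        self.lock = threading.Lock()
        self.value = 0
        self.parts = defaultdict(int)    # per-thread counters after the split

    def add(self, n, observed_cost):
        if not self.partitioned and observed_cost > self.factor * self.baseline:
            self.partitioned = True      # contended: switch to partitioned form
        if self.partitioned:
            self.parts[threading.get_ident()] += n   # no shared lock needed
        else:
            with self.lock:
                self.value += n

    def total(self):
        # Reads after partitioning must aggregate across the sub-counters.
        return self.value + sum(self.parts.values())
```

The design trade-off the abstract implies is visible here: partitioning removes lock contention on updates but makes reads (`total`) more expensive, which is why it is only done once contention is demonstrably high.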
-
Patent number: 8090914
Abstract: A system comprises a first node operative to provide a source broadcast requesting data. The first node associates an F-state with a copy of the data in response to receiving the copy of the data from memory and receiving non-data responses from other nodes in the system. The non-data responses include an indication that at least a second node includes a shared copy of the data. The F-state enables the first node to serve as an ordering point in the system, capable of responding to requests from other nodes in the system with a shared copy of the data.
Type: Grant
Filed: January 20, 2004
Date of Patent: January 3, 2012
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Gregory Edward Tierney, Stephen R. Van Doren, Simon C. Steely, Jr.
-
Patent number: 8082399
Abstract: Cache bounded reference counting for computer languages having automated memory management, in which, for example, a reference to an object "Z" initially stored in an object "O" is fetched and the cache hardware is queried whether the reference to the object "Z" is a valid reference, is in a cache, and has a continuity flag set to "on". If the object "Z" is a valid reference, is in the cache, and has a continuity flag set to "on", the object "O" is locked for an update, a reference counter is decremented for the object "Z" if the object "Z" resides in the cache, and a return code is set to zero to indicate that the object "Z" is de-referenced and that its storage memory can be released and re-used if the reference counter for the object "Z" reaches zero. Thereafter, the cache hardware is similarly queried regarding an object "N" that will become a new reference of object "O".
Type: Grant
Filed: July 31, 2008
Date of Patent: December 20, 2011
Assignee: International Business Machines Corporation
Inventors: Eberhard Pasch, Hans-Werner Tast, Achim Haessler, Markus Nosse, Elmar Zipp
-
Patent number: 8078801
Abstract: For each memory location in a set of memory locations associated with a thread, setting an indication associated with the memory location to request a signal if data from the memory location is evicted from a cache; and in response to the signal, reloading the set of memory locations into the cache.
Type: Grant
Filed: September 17, 2009
Date of Patent: December 13, 2011
Assignee: Intel Corporation
Inventors: Mark Buxton, Ernie Brickell, Quinn A. Jacobson, Hong Wang, Baiju Patel
-
Patent number: 8078793
Abstract: A non-volatile memory device stores configuration variables for use by a computer firmware. The variable is initially stored in the memory device in a manner that minimizes the number of bits used to store the variable that are in the updated state. When a request is received to change the initial value of the variable to an updated value, the value is changed in place by changing only the bits used to store the variable from an erased state to an updated state, by only setting the invert flag, by setting the invert flag and by changing one or more of the bits of the variable from the erased state to the updated state, or by storing the updated value of the variable in a new location in the memory device.
Type: Grant
Filed: April 27, 2006
Date of Patent: December 13, 2011
Assignee: American Megatrends, Inc.
Inventor: Sergiy B. Yakovlev
-
Publication number: 20110296114
Abstract: A method and central processing unit supporting atomic access of shared data by a sequence of memory access operations. A processor status flag is reset. A processor executes, subsequent to the resetting of the processor status flag, a sequence of program instructions with instructions accessing a subset of shared data contained within its local cache. During execution of the sequence of program instructions, and in response to a modification by another processor of the subset of shared data, the processor status flag is set. Subsequent to executing the sequence of program instructions, and based upon the state of the processor status flag, either a first program processing or a second program processing is executed. In some examples the first program processing includes storing results data into the local cache and the second program processing includes discarding the results data.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Mark S. Farrell, Jonathan T. Hsieh, Christian Jacobi, Timothy J. Slegel
-
Patent number: 8065496
Abstract: The invention reduces the number of bits required for LRU control when the number of target entries is large, while achieving complete LRU control. Each time an entry is used, the ID of the used entry is stored to configure LRU information, so that storage data 0, stored in the leftmost position, indicates the ID of the entry with the oldest last use time (that is, the LRU entry), for example as shown in FIG. 1(1). An LRU control apparatus according to a first embodiment of the present invention refers to the LRU information and selects the entry corresponding to storage data 0 (for example, entry 1) as a candidate for LRU control, based on storage data 0 being the ID of the entry with the oldest last use time.
Type: Grant
Filed: August 27, 2008
Date of Patent: November 22, 2011
Assignee: Fujitsu Limited
Inventors: Tomoyuki Okawa, Hiroyuki Kojima, Masaki Ukai
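The ID-ordering scheme can be sketched directly: keep entry IDs in use order, so the leftmost position always names the LRU victim. This is only an approximation of the patent's packed storage-data encoding, using a Python list for clarity.

```python
class OrderedLRU:
    """Sketch: complete LRU over N entries kept as an ordered list of
    entry IDs; position 0 always holds the least-recently-used entry."""

    def __init__(self, num_entries):
        self.order = list(range(num_entries))   # oldest first

    def touch(self, entry):
        # Each time an entry is used, move its ID to the most-recent end.
        self.order.remove(entry)
        self.order.append(entry)

    def victim(self):
        # The leftmost ID identifies the LRU entry to replace.
        return self.order[0]
```

Because the structure stores one ID per entry rather than pairwise age bits, its size grows linearly with the number of entries, which matches the abstract's stated goal for large entry counts.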
-
Publication number: 20110276765
Abstract: Systems and methods for managing cache configurations are disclosed. In accordance with a method, a system management control module may receive access rights of a host to a logical storage unit and may also receive a desired caching policy for caching data associated with the logical storage unit and the host. The system management control module may determine an allowable caching policy indicator for the logical storage unit. The allowable caching policy indicator may indicate whether caching is permitted for data associated with input/output operations between the host and the logical storage unit. The system management control module may further set a caching policy for data associated with input/output operations between the host and the logical storage unit, based on at least one of the desired caching policy and the allowable caching policy indicator. The system management control module may also communicate the caching policy to the host.
Type: Application
Filed: May 10, 2010
Publication date: November 10, 2011
Applicant: DELL PRODUCTS L.P.
Inventor: William Price Dawkins
-
Patent number: 8051251
Abstract: One aspect of the embodiments utilizes a system controller which has a broadcast transmitting and receiving unit that receives a memory access request from each CPU and notifies the other system controllers, and a snoop control unit that judges, when the memory access request from any of the CPUs for each of the cache memories in the CPU is received, whether object data conflicts with object data requested by a prior access request received earlier than the memory access request and whether the object data is present in any of the cache memories, selects the status of the cache memory of the CPU, notifies the other system controller of a snoop processing result in which the selected status and the cache memory are associated, and sets a final status as the status of the system controller based on the priority of each status of the other system controllers.
Type: Grant
Filed: August 25, 2008
Date of Patent: November 1, 2011
Assignee: Fujitsu Limited
Inventor: Go Sugizaki
-
Patent number: 8041898
Abstract: The present disclosure provides a method for reducing memory traffic in a distributed memory system. The method may include storing a presence vector in a directory of a memory slice, said presence vector indicating whether a line in local memory has been cached. The method may further include protecting said memory slice from cache coherency violations via a home agent configured to transmit and receive data from said memory slice, said home agent configured to store a copy of said presence vector. The method may also include receiving a request for a block of data from at least one processing node at said home agent and comparing said presence vector with said copy of said presence vector stored in said home agent. The method may additionally include eliminating a write update operation between said home agent and said directory if said presence vector and said copy are equivalent. Of course, many alternatives, variations and modifications are possible without departing from this embodiment.
Type: Grant
Filed: May 1, 2008
Date of Patent: October 18, 2011
Assignee: Intel Corporation
Inventors: Adrian Moga, Rajat Agarwal, Malcolm Mandviwalla
-
Patent number: 8041900
Abstract: Embodiments of the present invention provide a system that executes transactions on a processor that supports transactional memory. The system starts by executing the transaction on the processor. During execution of the transaction, the system places stores in a store buffer. In addition, the system sets a stores_encountered indicator when a first store is placed in the store buffer during the transaction. Upon completing the transaction, the system determines if the stores_encountered indicator is set. If so, the system signals a cache to commit the stores placed in the store buffer during the transaction to the cache and then resumes execution of program code following the transaction when the stores have been committed. Otherwise, the system resumes execution of program code following the transaction without signaling the cache.
Type: Grant
Filed: January 15, 2008
Date of Patent: October 18, 2011
Assignee: Oracle America, Inc.
Inventors: Paul Caprioli, Martin Karlsson, Sherman H. Yip
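A toy model of the stores_encountered flag: the transaction buffers stores, and on completion signals the cache to commit them only if at least one store actually occurred, so read-only transactions skip the commit round-trip. Names and structure are assumptions, not the patented hardware.

```python
class TransactionalCore:
    """Sketch of the 'stores_encountered' optimization (illustrative)."""

    def __init__(self):
        self.cache = {}                  # stands in for the data cache
        self.store_buffer = []
        self.stores_encountered = False
        self.commits_signaled = 0        # counts cache commit signals

    def store(self, addr, value):
        self.store_buffer.append((addr, value))
        self.stores_encountered = True   # set when the first store is buffered

    def commit(self):
        """Complete the transaction: signal the cache only if needed."""
        if self.stores_encountered:
            self.commits_signaled += 1
            for addr, value in self.store_buffer:
                self.cache[addr] = value
        self.store_buffer.clear()
        self.stores_encountered = False
```

A read-only transaction calls `commit()` without ever setting the flag, and `commits_signaled` stays at zero — the saving the abstract describes.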
-
Publication number: 20110252203
Abstract: The apparatus and method described herein are for handling shared memory accesses between multiple processors utilizing lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses loaded from, and to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After a pre-determined number of times re-executing the transaction, the transaction may be re-executed non-speculatively with locks/semaphores.
Type: Application
Filed: June 24, 2011
Publication date: October 13, 2011
Inventors: Sailesh Kottapalli, John H. Crawford, Kushagra Vaid
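The retry-then-lock policy can be sketched in a few lines: attempt the critical section speculatively, and after a fixed number of conflicts fall back to a conventional lock. The retry count and the callback-style API are illustrative assumptions.

```python
import threading

_FALLBACK_LOCK = threading.Lock()

def run_with_fallback(attempt_tx, critical, max_retries=3):
    """Sketch of speculative execution with a lock fallback.

    attempt_tx(critical) models one speculative attempt: it returns True
    if the transaction committed without an invalidating remote access,
    False if a conflict forced a re-execution.
    """
    for _ in range(max_retries):
        if attempt_tx(critical):
            return "speculative"         # committed without taking a lock
    with _FALLBACK_LOCK:                 # give up on speculation
        critical()
        return "locked"
```

The guarantee the abstract relies on is forward progress: even under persistent conflicts, the section eventually runs under the lock.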
-
Patent number: 8037252
Abstract: Embodiments of the present invention generally provide techniques and apparatus to reduce the number of memory directory updates during block replacement in a system having a directory-based cache. The system may be implemented to utilize a read/write bit to determine the accessibility of a cache line and limit memory directory updates during block replacement to regions that are determined to be readable and writable by multiple processors.
Type: Grant
Filed: August 28, 2007
Date of Patent: October 11, 2011
Assignee: International Business Machines Corporation
Inventor: Farnaz Toussi
-
Patent number: 8032581
Abstract: Provided are a method, system, and article of manufacture, wherein a control unit receives a request to establish a relationship over a fiber channel connection, wherein a first indicator associated with the request indicates that the relationship supports persistent information unit pacing across a plurality of command chains. The control unit sends a response indicating an acceptance of the relationship, wherein a second indicator associated with the response indicates that the control unit supports persistent information unit pacing across the plurality of command chains. An information unit pacing parameter value is retained across the plurality of command chains, in response to determining that the second indicator indicates that the control unit supports persistent information unit pacing across the plurality of command chains.
Type: Grant
Filed: August 30, 2006
Date of Patent: October 4, 2011
Assignee: International Business Machines Corporation
Inventors: Roger Gregory Hathorn, Daniel Francis Casper, John Flanagan, Catherine C. Huang
-
Publication number: 20110238927
Abstract: Addressed is the problem that memory cache use efficiency is low because, in contents distribution using a memory cache whose capacity is limited, the entire contents are stored in the memory cache even when only a part of the contents is accessed. The contents distribution device includes a contents holding unit 102 which stores contents to be distributed, a cache holding unit 103 which temporarily stores the contents to be distributed, a contents distribution unit 100 which distributes contents stored in the cache holding unit or the contents holding unit, and a cache control unit 101 which controls storage and deletion of contents in and from the cache holding unit. The cache control unit 101 sections the contents into a plurality of blocks and controls storage and deletion in and from the cache holding unit on a block basis, based on cache control information which sets, on a block basis, a deletion waiting time before deletion from the cache holding unit.
Type: Application
Filed: November 18, 2009
Publication date: September 29, 2011
Inventor: Hiroyuki Hatano
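Per-block expiry might look like the cache below: content is sectioned into blocks, and each cached block carries its own deletion waiting time, so rarely-used parts of a title can expire while hot blocks stay cached. The content/block keys and TTL-style API are assumptions for illustration.

```python
class BlockCache:
    """Sketch: block-granular caching with a per-block deletion
    waiting time (illustrative, not the patented control scheme)."""

    def __init__(self):
        self.blocks = {}    # (content_id, block_no) -> (data, expires_at)

    def put(self, content_id, block_no, data, now, ttl):
        # ttl models the block's deletion waiting time.
        self.blocks[(content_id, block_no)] = (data, now + ttl)

    def get(self, content_id, block_no, now):
        entry = self.blocks.get((content_id, block_no))
        if entry is None:
            return None
        data, expires_at = entry
        if now >= expires_at:                      # waiting time elapsed
            del self.blocks[(content_id, block_no)]
            return None
        return data
```

Compared with caching whole contents, only the accessed blocks occupy cache space, which is the efficiency gain the abstract claims.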
-
Patent number: 8024526
Abstract: A system may include several nodes coupled by an inter-node network configured to convey coherency messages between the nodes. Each node may include several active devices coupled by an address network and a data network. The nodes implement a coherency protocol such that if an active device in one of the nodes has an ownership responsibility for a coherency unit, no active device in any of the other nodes has a valid access right to the coherency unit. For example, if a node receives a coherency message requesting read access to a coherency unit from another node, the node may respond by conveying a proxy address packet, receipt of which removes ownership, on the node's address network to an owning active device. In contrast, the active device's ownership responsibility may not be removed in response to a device within the same node requesting read access to the coherency unit.
Type: Grant
Filed: April 9, 2004
Date of Patent: September 20, 2011
Assignee: Oracle America, Inc.
Inventors: Anders Landin, Robert E. Cypher, Erik E. Hagersten
-
Publication number: 20110225373
Abstract: A computer system including a file server, cache servers, and a cache management server, wherein: the cache server obtains the authority information from the cache management server in a case of receiving a command to process a file; the cache server refers to the obtained authority information; the cache server executes the command to process the file in a case where the cache server has an administration right over the cache data of the file; the cache management server sends to the cache server an update command for transferring the administration right over the cache data to the other cache server; and the cache server sends the update command to the other cache server after receiving the update command, and executes an update procedure in which lock management information is updated.
Type: Application
Filed: November 16, 2010
Publication date: September 15, 2011
Inventors: Daisuke ITO, Yuji Tsushima, Hitoshi Hayakawa
-
Patent number: 8019194
Abstract: An integrated apparatus is disclosed that can directly connect to a portable digital video camera and can record uncompressed video and audio data, along with associated metadata, in the field and elsewhere. Most preferably, the integrated apparatus includes a removable, recordable, reusable digital magazine that may be mounted. Most preferably, the integrated apparatus also supports a variety of input and output formats, and the apparatus may be easily connected to other computing systems, either directly or through network connections, wired or wireless. The digital magazine can be mounted in a variety of docking stations and can be directly connected to a network, allowing the video and audio data to be easily stored and transferred.
Type: Grant
Filed: April 5, 2005
Date of Patent: September 13, 2011
Assignee: S. two Corp.
Inventors: Michael Morrison, James A. Rannalli, Stephen Roach, Christopher L. Romine
-
Patent number: 8015363
Abstract: A process to make the cache memory of a processor consistent includes the processor processing a request to write data to an address in its memory marked as being in the shared state. The address is transmitted to the other processors, data are written into the processor's cache memory and the address changes to the modified state. An appended memory associated with the processor memorizes the address, the data and an associated marker in a first state. The processor then receives the address with an indicator. If the indicator indicates that the processor must perform the operation and if the associated marker is in the first state, the data are kept in the modified state. If the indicator does not indicate that the processor must perform the operation and if the processor receives an order to mark the data to be in the invalid state, the marker changes to a second state.
Type: Grant
Filed: September 15, 2009
Date of Patent: September 6, 2011
Assignee: STMicroelectronics S.A.
Inventors: Jean-Philippe Cousin, Jean-Jose Berenguer, Gilles Pelissier
-
Systems and methods for tracking portions of a logical volume that have never been written by a host
Patent number: 8006052
Abstract: Embodiments of the invention exploit the fact that not all portions of a logical volume may include data written by a host. Accordingly, an embodiment of the invention includes setting a designated set of bits to 1 in a meta data table when a logical volume is initialized. These bits may be referred to herein as Never Written by Host (NWBH) bits. Separately, or in combination, an embodiment of the invention includes setting a NWBH bit to 0 when data is written to the associated portion of the logical volume. Separately, or in combination, an embodiment of the invention includes reading the NWBH bit upon receiving a read command associated with the associated portion of the logical volume. If the NWBH bit is equal to 1, data is not read from the associated portion of the logical volume; if the NWBH bit is equal to 0, data is read from the associated portion of the logical volume.
Type: Grant
Filed: July 17, 2006
Date of Patent: August 23, 2011
Assignee: EMC Corporation
Inventors: Zvi Gabriel Benhanokh, Michael J. Scharland, Ran Margalit
-
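The NWBH scheme of patent 8006052 — one bit per region, set to 1 at initialization, cleared to 0 on the first host write, consulted on reads — might be sketched as follows. The region granularity and the zero-fill response for never-written regions are illustrative assumptions.

```python
class Volume:
    """Sketch of 'Never Written by Host' (NWBH) tracking for a logical
    volume (illustrative model, not the patented implementation)."""

    def __init__(self, num_regions):
        self.nwbh = [1] * num_regions    # 1 = never written by host
        self.media = {}                  # backing store, read only when needed
        self.media_reads = 0             # counts avoided vs. real reads

    def write(self, region, data):
        self.media[region] = data
        self.nwbh[region] = 0            # clear the bit on first host write

    def read(self, region):
        if self.nwbh[region]:
            return b"\x00"               # NWBH=1: answer without media access
        self.media_reads += 1
        return self.media[region]        # NWBH=0: data must come from media
```

The benefit is that reads of never-written regions complete without touching the backing storage at all, as the `media_reads` counter makes visible.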
Patent number: 8001390
Abstract: Methods and apparatus provide for: entering a secure mode in which a given processor may initiate a transfer of information into or out of said processor, but no external device may initiate a transfer of information into or out of said processor; and programming at least one trusted data storage location using a direct memory access (DMA) command to be one of read-only, write-only, readable and writeable, limited access, and reset, where said at least one trusted data storage location is located external to said processor.
Type: Grant
Filed: May 9, 2007
Date of Patent: August 16, 2011
Assignee: Sony Computer Entertainment Inc.
Inventor: Akiyuki Hatakeyama
-
Patent number: 8001538
Abstract: Various technologies and techniques are disclosed for providing software accessible metadata on a cache of a central processing unit. The metadata can include at least some bits for each virtual address, at least some bits for each cache line, and at least some bits for the cache overall. An instruction set architecture on the central processing unit is provided that includes additional instructions for interacting with the metadata. New side effects are introduced into the operation of the central processing unit by the presence of the metadata and the additional instructions. The metadata can be accessed by at least one software program to facilitate an operation of the software program.
Type: Grant
Filed: June 8, 2007
Date of Patent: August 16, 2011
Assignee: Microsoft Corporation
Inventors: Jan Gray, Timothy L. Harris, James Larus, Burton Smith
-
Patent number: 7996632Abstract: A multithreaded processor with a banked cache is provided. The instruction set includes at least one atomic operation which is executed in the L2 cache if the atomic memory address source data is aligned. The core executing the instruction determines whether the atomic memory address source data is aligned. If it is aligned, the atomic memory address is sent to the bank that contains the atomic memory address source data, and the operation is executed in the bank. In one embodiment, if the instruction is mis-aligned, the operation is executed in the core.Type: GrantFiled: December 22, 2006Date of Patent: August 9, 2011Assignee: Oracle America, Inc.Inventors: Greg F. Grohoski, Mark A. Luttrell, Manish Shah
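A hedged sketch of the alignment-based routing decision described above; the bank count, operand size, and address-bits bank-selection rule are assumptions for illustration, not taken from the patent.

```python
NUM_BANKS = 4   # assumed number of L2 banks
WORD = 8        # assumed atomic operand size in bytes

def route_atomic(addr):
    # Aligned source data: send the operation to the bank that owns the
    # address, so it executes in the L2 bank.
    if addr % WORD == 0:
        return ("bank", (addr // WORD) % NUM_BANKS)
    # Mis-aligned: fall back to executing the operation in the core.
    return ("core", None)
```

For example, `route_atomic(24)` selects a bank, while `route_atomic(25)` falls back to the core.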
-
Patent number: 7991966Abstract: This disclosure presents an architectural mechanism which allows a caching bridge to efficiently store data either inclusively or exclusively based upon information configured by an application. An INC bit is set for each access to a page table that indicates whether the data is shared or not shared by an LLC. This allows a multicore multiprocessor system to have a caching policy which enables use of the last level cache efficiently and results in improved performance of the multicore multiprocessor system.Type: GrantFiled: December 29, 2004Date of Patent: August 2, 2011Assignee: Intel CorporationInventor: Krishnakanth V. Sistla
-
Patent number: 7984241Abstract: A plurality of bits are added to virtual and physical memory addresses to specify the level at which data is stored in a multi-level cache hierarchy. When data is to be written to cache, each cache level determines whether it is permitted to store the data. Storing data at the appropriate cache level addresses the problem of cache thrashing.Type: GrantFiled: July 26, 2006Date of Patent: July 19, 2011Assignee: Hewlett-Packard Development Company, L.P.Inventor: Sudheer Kurichiyath
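The level-encoding idea above might look like this in outline; the bit position and width of the level field in the address are assumed purely for illustration.

```python
LEVEL_SHIFT = 56  # assumed position of a 2-bit level field in a 64-bit address

def cache_level(addr):
    # Extract the deepest cache level permitted to hold this line
    # (0 = uncached, 1 = L1, 2 = L2, 3 = L3).
    return (addr >> LEVEL_SHIFT) & 0x3

def may_store(addr, level):
    # A cache at `level` stores the line only if the address permits it;
    # this is the per-level check each cache makes on a write.
    return level <= cache_level(addr)
```

Marking thrash-prone data as, say, L2-only keeps it out of the L1 and avoids the thrashing the abstract mentions.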
-
Patent number: 7984245Abstract: Proposed is a storage system capable of preventing the compression of a cache memory caused by data remaining in a cache memory of a storage subsystem without being transferred to a storage area of an external storage, and maintaining favorable I/O processing performance of the storage subsystem. In this storage system where an external storage is connected to the storage subsystem and the storage subsystem provides a storage area of the external storage as its own storage area, provided is a volume for saving dirty data remaining in a cache memory of the storage subsystem without being transferred to the external volume. The storage system recognizes the compression of the cache memory, and eliminates the overload of the cache memory by saving dirty data in a save volume.Type: GrantFiled: August 15, 2008Date of Patent: July 19, 2011Assignee: Hitachi, Ltd.Inventors: Ryu Takada, Yoshihito Nakagawa
-
Patent number: 7979642Abstract: A data processing apparatus is provided comprising processing circuitry for executing multiple program threads. At least one storage unit is shared between the multiple program threads and comprises multiple entries, each entry for storing a storage item either associated with a high priority program thread or a lower priority program thread. A history storage for retaining a history field for each of a plurality of blocks of the storage unit is also provided. On detection of a high priority storage item being evicted from the storage unit as a result of allocation to that entry of a lower priority storage item, the history field for the block containing that entry is populated with an indication of the evicted high priority storage item.Type: GrantFiled: September 11, 2008Date of Patent: July 12, 2011Assignee: ARM LimitedInventors: David Michael Bull, Emre Özer
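A minimal sketch of the eviction-history mechanism above; the block size, priority labels, and class name are illustrative assumptions.

```python
HI, LO = "high", "low"
BLOCK = 4  # assumed number of entries sharing one history field

class HistoryCache:
    def __init__(self, entries):
        self.entries = [None] * entries            # (tag, priority) or None
        self.history = [None] * (entries // BLOCK) # one field per block

    def allocate(self, index, tag, priority):
        victim = self.entries[index]
        # Populate the block's history field only when a high-priority
        # item is evicted by the allocation of a lower-priority item.
        if victim is not None and victim[1] == HI and priority == LO:
            self.history[index // BLOCK] = victim[0]
        self.entries[index] = (tag, priority)
```

The history field can later be consulted to restore or re-prioritize the displaced high-priority item.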
-
Patent number: 7970989Abstract: A hard disk cache includes entries to be written to a disk, and also includes ordering information describing the order that they should be written to the disk. Data may be written from the cache to the disk in the order specified by the ordering information. In some situations, data may be written out of order. Further, in some situations, clean data from the cache may be combined with dirty data from the cache when performing a cache flush.Type: GrantFiled: June 30, 2006Date of Patent: June 28, 2011Assignee: Intel CorporationInventor: Jeanna N. Matthews
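One way the ordered flush with clean/dirty combining could be sketched; the entry layout `(order, lba, dirty)` and the adjacent-block merging rule are assumptions for illustration.

```python
def plan_flush(entries):
    # entries: list of (order, lba, dirty). Dirty blocks are written in
    # the recorded order; a clean cached block on an adjacent LBA is
    # folded into the same run so one sequential write covers both.
    dirty = sorted((e for e in entries if e[2]), key=lambda e: e[0])
    clean = {e[1] for e in entries if not e[2]}
    plan = []
    for _, lba, _ in dirty:
        run = [lba]
        if lba + 1 in clean:        # combine a neighbouring clean block
            run.append(lba + 1)
        plan.append(run)
    return plan
```

Combining clean neighbours turns two small disk writes into one sequential write, which is the benefit the abstract alludes to.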
-
Patent number: 7966457Abstract: A cache module for a central processing unit has a cache control unit coupled with a memory, and a cache memory coupled with the control unit and the memory wherein the cache memory has a plurality of cache lines, each cache line having a storage area for storing instructions to be issued sequentially and associated control bits, wherein at least one cache line of the plurality of cache lines has at least one branch trail control bit which when set provides for an automatic locking function of the cache line in case a predefined branch instruction has been issued.Type: GrantFiled: October 30, 2007Date of Patent: June 21, 2011Assignee: Microchip Technology IncorporatedInventors: Rodney J. Pesavento, Gregg D. Lahti, Joseph W. Triece
-
Patent number: 7962696Abstract: Systems and methods are disclosed for updating owner predictor structures. In one embodiment, a multi-processor system includes an owner predictor control that provides an ownership update message corresponding to a block of data to at least one of a plurality of owner predictors in response to a change in an ownership state of the block of data. The update message comprises an address tag associated with the block of data and an identification associated with an owner node of the block of data.Type: GrantFiled: January 15, 2004Date of Patent: June 14, 2011Assignee: Hewlett-Packard Development Company, L.P.Inventors: Simon C. Steely, Jr., Gregory Edward Tierney
-
Patent number: 7958317Abstract: A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.Type: GrantFiled: August 4, 2008Date of Patent: June 7, 2011Assignee: International Business Machines CorporationInventors: William E. Speight, Lixin Zhang
-
Patent number: 7958320Abstract: Embodiments of the present invention provide a secure programming paradigm, and a protected cache that enable a processor to handle secret/private information while preventing, at the hardware level, malicious applications from accessing this information by circumventing the other protection mechanisms. A protected cache may be used as a building block to enhance the security of applications trying to create, manage and protect secure data. Other embodiments are described and claimed.Type: GrantFiled: December 3, 2007Date of Patent: June 7, 2011Assignee: Intel CorporationInventors: Shlomo Raikin, Shay Gueron, Gad Sheaffer
-
Patent number: 7958319Abstract: A method and apparatus for accelerating transactional execution. Barriers associated with shared memory lines referenced by memory accesses within a transaction are only invoked/executed the first time the shared memory lines are accessed within a transaction. Hardware support, such as a transaction field/transaction bits, is provided to determine if an access is the first access to a shared memory line during the pendency of a transaction. Additionally, in an aggressive operational mode version numbers representing versions of elements stored in shared memory lines are not stored and validated upon commitment to save on validation costs. Moreover, even in a cautious mode, that stores version numbers to enable validation, validation costs may not be incurred, if eviction of accessed shared memory lines does not occur during execution of the transaction.Type: GrantFiled: October 29, 2007Date of Patent: June 7, 2011Assignee: Intel CorporationInventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Quinn A. Jacobson
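The first-access barrier elision can be sketched as below; a Python set stands in for the patent's transaction field/transaction bits, and the barrier body is a counter stub.

```python
class Transaction:
    def __init__(self):
        self.accessed = set()   # lines whose barrier has already run
        self.barrier_calls = 0

    def access(self, line):
        # Invoke the barrier only on the first access to a shared memory
        # line during the pendency of this transaction; later accesses to
        # the same line skip it.
        if line not in self.accessed:
            self.accessed.add(line)
            self.barrier_calls += 1
```

Repeated accesses to a hot line thus pay the barrier cost once per transaction rather than once per access.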
-
Maintaining cache coherence using load-mark metadata to deny invalidation of load-marked cache lines
Patent number: 7949831Abstract: Embodiments of the present invention provide a system that maintains load-marks on cache lines. The system includes: (1) a cache which accommodates a set of cache lines, wherein each cache line includes metadata for load-marking the cache line, and (2) a local cache controller for the cache. Upon determining that a remote cache controller has made a request for a cache line that would cause the local cache controller to invalidate a copy of the cache line in the cache, the local cache controller determines if there is a load-mark in the metadata for the copy of the cache line. If not, the local cache controller invalidates the copy of the cache line. Otherwise, the local cache controller signals a denial of the invalidation of the cache line and retains the copy of the cache line and the load-mark in the metadata for the copy of the cache line.Type: GrantFiled: November 2, 2007Date of Patent: May 24, 2011Assignee: Oracle America, Inc.Inventors: Robert E. Cypher, Shailender Chaudhry
-
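The load-mark check on a remote invalidation request might be sketched like this; the dictionary cache and the return strings are illustrative, not from the patent.

```python
def handle_remote_invalidate(cache, line):
    # Decide the local controller's response to a remote request that
    # would invalidate its copy of `line`.
    entry = cache.get(line)
    if entry is None:
        return "not-present"
    if entry["load_mark"]:
        # Load-mark set: deny the invalidation and keep the copy
        # (and its load-mark) intact.
        return "denied"
    # No load-mark: invalidate the local copy as usual.
    del cache[line]
    return "invalidated"
```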
Patent number: 7949834Abstract: According to the methods and apparatus taught herein, processor caching policies are determined using cache policy information associated with a target memory device accessed during a memory operation. According to one embodiment of a processor, the processor comprises at least one cache and a memory management unit. The at least one cache is configured to store information local to the processor. The memory management unit is configured to set one or more cache policies for the at least one cache. The memory management unit sets the one or more cache policies based on cache policy information associated with one or more target memory devices configured to store information used by the processor.Type: GrantFiled: January 24, 2007Date of Patent: May 24, 2011Assignee: QUALCOMM IncorporatedInventor: Michael William Morrow
-
Patent number: 7937535Abstract: Each of plural processing units has a cache, and each cache has indication circuitry containing segment filtering data. The indication circuitry responds to an address specified by an access request from an associated processing unit to reference the segment filtering data to indicate whether the data is either definitely not stored or is potentially stored in that segment. Cache coherency circuitry ensures that data accessed by each processing unit is up-to-date and has snoop indication circuitry whose content is derived from the already-provided segment filtering data. For certain access requests, the cache coherency circuitry initiates a coherency operation during which the snoop indication circuitry determines whether any of the caches requires a snoop operation. For each cache that does, the cache coherency circuitry issues a notification to that cache identifying the snoop operation to be performed.Type: GrantFiled: February 22, 2007Date of Patent: May 3, 2011Assignee: ARM LimitedInventors: Emre Özer, Stuart David Biles, Simon Andrew Ford
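The segment filtering data above behaves like a small per-segment Bloom-style filter: it can answer "definitely not stored" exactly, but "potentially stored" may be a false positive. The following sketch assumes a 64-bit filter and uses Python's built-in integer hash purely for illustration.

```python
FILTER_BITS = 64  # assumed filter width

class SegmentFilter:
    def __init__(self):
        self.bits = 0

    def insert(self, addr):
        # Record that a line with this address may live in the segment.
        self.bits |= 1 << (hash(addr) % FILTER_BITS)

    def maybe_present(self, addr):
        # False: definitely not stored here, so a snoop of this segment
        # can be skipped. True: potentially stored; snoop is required.
        return bool(self.bits & (1 << (hash(addr) % FILTER_BITS)))
```

Deriving the snoop indication from this already-present filter is what lets the coherency circuitry avoid snooping caches that cannot hold the line.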
-
Patent number: 7934054Abstract: A re-fetching cache memory improves efficiency of a system, for example by advantageously sharing the cache memory and/or by increasing performance. When some or all of the cache memory is temporarily used for another purpose, some or all of a data portion of the cache memory is flushed, and some or all of a tag portion is saved in an archive. In some embodiments, some or all of the tag portion operates “in-place” as the archive, and in further embodiments, is placed in a reduced-power mode. When the temporary use completes, optionally and/or selectively, at least some of the tag portion is repopulated from the archive, and the data portion is re-fetched according to the repopulated tag portion. According to various embodiments, processor access to the cache is enabled during one or more of: the saving; the repopulating; and the re-fetching.Type: GrantFiled: May 22, 2007Date of Patent: April 26, 2011Assignee: Oracle America, Inc.Inventors: Laurent R. Moll, Peter N. Glaskowsky, Joseph B. Rowlands
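The save/repopulate/re-fetch flow of the last abstract could be outlined as follows; the `fetch` callback standing in for a memory access, and the method names, are assumptions for illustration.

```python
class RefetchCache:
    def __init__(self):
        self.tags, self.data, self.archive = [], {}, []

    def lend(self):
        # Temporary reuse of the cache: flush the data portion and save
        # the tag portion in an archive.
        self.archive = list(self.tags)
        self.tags, self.data = [], {}

    def reclaim(self, fetch):
        # Temporary use complete: repopulate the tag portion from the
        # archive and re-fetch the data portion according to those tags.
        self.tags = list(self.archive)
        self.data = {t: fetch(t) for t in self.tags}
```

Because the tags survive in the archive, the warmed-up working set is restored automatically instead of being rebuilt through cold misses.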