Shared Cache Patents (Class 711/130)
-
Patent number: 8117395
Abstract: Some of the embodiments of the present disclosure provide a command processing pipeline operatively coupled to a shared N-way cache and configured to process a sequence of cache commands, wherein a way of the N ways of the cache with which an address of a cache command matches is the hit way for the cache command in case the cache command is a hit. In one embodiment, the command processing pipeline may be configured to receive a first cache command from one of the plurality of processing cores, select a way, from the N ways, as a potential eviction way, and generate, based at least in part on the received first cache command, N selection signals corresponding to the N ways, wherein each selection signal is indicative of whether the corresponding way is (A) the hit way and/or the eviction way, or (B) neither the hit way nor the eviction way. Other embodiments are also described and claimed.
Type: Grant
Filed: July 21, 2009
Date of Patent: February 14, 2012
Assignee: Marvell Israel (MISL) Ltd.
Inventors: Tarek Rohana, Gil Stoler
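The per-way selection signals described above can be sketched in a few lines: given a hit way and an eviction candidate, each of the N signals is asserted when its way is one of those two. This is an illustrative sketch only; the function and parameter names are invented, not taken from the patent.

```python
def selection_signals(n_ways, hit_way=None, eviction_way=None):
    """One selection signal per way: True when the way is the hit way
    and/or the chosen eviction way, False otherwise. (Hypothetical
    software sketch of the hardware signals described in the abstract.)"""
    return [w == hit_way or w == eviction_way for w in range(n_ways)]

# A hit on way 2 of an 8-way cache, with way 5 picked as the eviction candidate:
signals = selection_signals(8, hit_way=2, eviction_way=5)
```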
-
Patent number: 8117420
Abstract: A buffer management structure for processing systems is described. In one embodiment, the buffer management structure includes a storage module and a control module. The storage module includes a read position and can store a bit indicating a valid state of a transaction request in a write entry. The control module can receive an invalidation request and modify the bit to indicate an invalid state for the transaction request and discard the transaction request when the transaction request is in the read position.
Type: Grant
Filed: August 7, 2008
Date of Patent: February 14, 2012
Assignee: QUALCOMM Incorporated
Inventors: Jian Shen, Robert Allan Lester
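The valid-bit mechanism above can be sketched as a small FIFO: an invalidation clears an entry's valid bit in place, and an invalid request that reaches the read position is discarded rather than processed. A hypothetical sketch; the class and method names are not from the patent.

```python
class TransactionBuffer:
    """FIFO of [request, valid] entries with a read position at the head.
    (Illustrative sketch of the buffer management structure described above.)"""
    def __init__(self):
        self.entries = []  # head of the list is the read position

    def write(self, request):
        self.entries.append([request, True])  # valid bit set on write

    def invalidate(self, request):
        for entry in self.entries:
            if entry[0] == request:
                entry[1] = False  # mark invalid; entry stays in place

    def read(self):
        """Return the next valid request; discard invalidated entries
        that reach the read position."""
        while self.entries:
            request, valid = self.entries.pop(0)
            if valid:
                return request
            # invalid request at the read position: silently discarded
        return None

buf = TransactionBuffer()
buf.write("t1"); buf.write("t2")
buf.invalidate("t1")
nxt = buf.read()  # "t1" is discarded at the read position; "t2" is returned
```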
-
Patent number: 8112505
Abstract: Techniques are provided for desktop streaming over wide area networks. In one embodiment, a computing device comprises logic that is configured to intercept file open requests for files stored in a file system, where at least some of the files in the file system may have not yet been fully downloaded. In response to a request to open a file, the logic is configured to modify a first sharing mode specified therein and to open the file in a read-write sharing mode that allows other processes to open the file. While one or more blocks of the file are being downloaded or written into the file, the logic is configured to check whether a second sharing mode received in a subsequent request to open the file is compatible with the first sharing mode. If the second sharing mode is not compatible with the first sharing mode, the logic is configured to deny the subsequent request even though in the file system the file is opened in the read-write sharing mode.
Type: Grant
Filed: March 12, 2010
Date of Patent: February 7, 2012
Assignee: Wanova Technologies, Ltd.
Inventors: Israel Ben-Shaul, Shahar Glixman, Tal Zamir
-
Patent number: 8111707
Abstract: Methods, apparatuses, and systems directed to efficient compression processing in system architectures including a control plane and a data plane. Particular implementations feature integration of compression operations and mode selection with a beltway mechanism that takes advantage of atomic locking mechanisms supported by certain classes of hardware processors to handle the tasks that require atomic access to data structures while also reducing the overhead associated with these atomic locking mechanisms.
Type: Grant
Filed: December 20, 2007
Date of Patent: February 7, 2012
Assignee: Packeteer, Inc.
Inventors: Guy Riddle, Jon Eric Okholm
-
Patent number: 8108620
Abstract: A method of caching data in a global cache distributed amongst a plurality of computing devices, comprising providing a global cache for caching data accessible to interconnected client devices, where each client contributes a portion of its main memory to the global cache. Each client also maintains an ordering of the data that it holds in its cache portion. When a remote reference for a cached datum is made, both the supplying client and the requesting client adjust their orderings to reflect the fact that multiple copies of the requested datum now likely exist in the global cache.
Type: Grant
Filed: March 10, 2009
Date of Patent: January 31, 2012
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Eric A. Anderson, Christopher E. Hoover, Xiaozhou Li, Alistair Veitch
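The ordering adjustment above can be sketched with an LRU-style list per client: when a client supplies a datum to a remote peer, it demotes its local copy toward eviction, since another copy now likely exists elsewhere in the global cache. This is a hypothetical sketch under that one assumption; the class and method names are invented.

```python
from collections import OrderedDict

class CachePortion:
    """One client's contribution to the global cache, with an ordering
    from least valuable (leftmost) to most valuable (rightmost).
    (Illustrative sketch of the behavior described in the abstract.)"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.order = OrderedDict()  # key -> datum

    def put(self, key, datum):
        self.order[key] = datum
        self.order.move_to_end(key)  # newly cached data is most valuable
        if len(self.order) > self.capacity:
            self.order.popitem(last=False)  # evict the least valuable datum

    def serve_remote(self, key):
        """Supply a datum to a remote client and demote the local copy,
        reflecting that additional copies now likely exist globally."""
        datum = self.order[key]
        self.order.move_to_end(key, last=False)  # demote toward eviction
        return datum
```

A supplying client that serves "c" remotely will therefore evict "c" first when space runs out, keeping the global cache from holding redundant copies of hot data.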
-
Patent number: 8108622
Abstract: A memory management system includes a plurality of processors, a shared memory that can be accessed from the plurality of processors, and cache memories provided between each processor of the plurality of processors and the shared memory; invalidation or write back of a specified region can be commanded from a program running on a processor. Programs running on each processor invalidate an input data region of a cache memory with an invalidation command immediately before execution of a program as a processing batch, and write back an output data region of a cache memory to the shared memory with a write back command immediately after execution of a program as a processing batch.
Type: Grant
Filed: February 15, 2008
Date of Patent: January 31, 2012
Assignee: Kabushiki Kaisha Toshiba
Inventors: Nobuhiro Nonogaki, Takeshi Kodaka
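The invalidate-before/write-back-after discipline above can be sketched with a toy software-managed cache over a shared memory: the batch drops its input region first (forcing a fresh read), runs, then publishes its output region. The commands here are plain Python stand-ins for the hardware commands, with invented names.

```python
class Cache:
    """Toy per-processor cache over a shared memory dict.
    (Hypothetical sketch; not the patented hardware.)"""
    def __init__(self, shared):
        self.shared = shared
        self.lines = {}

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.shared[addr]  # fill from shared memory
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value  # dirty: not yet visible in shared memory

    def invalidate(self, addrs):
        for a in addrs:
            self.lines.pop(a, None)  # drop stale copies; force re-read

    def writeback(self, addrs):
        for a in addrs:
            if a in self.lines:
                self.shared[a] = self.lines[a]  # publish to shared memory

shared = {"in": 1, "out": 0}
cache = Cache(shared)
cache.invalidate(["in"])       # immediately before the batch: drop stale input
result = cache.read("in") + 1  # the processing batch itself
cache.write("out", result)
cache.writeback(["out"])       # immediately after the batch: publish output
```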
-
Publication number: 20120023295
Abstract: Described embodiments provide arbitration for a cache of a network processor. Processing modules of the network processor generate memory access requests including a requested address and an ID value corresponding to the requesting processing module. Each request is either a locked request or a simple request. An arbiter determines whether the received requests are locked requests. For each locked request, the arbiter determines whether two or more of the requests are conflicted based on the requested address of each received memory request. If one or more of the requests are non-conflicted, the arbiter determines, for each non-conflicted request, whether the requested addresses are locked out by prior memory requests based on a lock table. If one or more of the non-conflicted memory requests are locked out by prior memory requests, the arbiter queues the locked-out memory requests. The arbiter grants any non-conflicted memory access requests that are not locked out.
Type: Application
Filed: September 30, 2011
Publication date: January 26, 2012
Inventor: Shashank Nemawarkar
-
Patent number: 8103835
Abstract: Embodiments of the invention provide methods and systems for reducing the consumption of inter-node bandwidth by communications maintaining coherence between accelerators and CPUs. The CPUs and the accelerators may be clustered on separate nodes in a multiprocessing environment. Each node that contains a shared memory device may maintain a directory to track blocks of shared memory that may have been cached at other nodes. Therefore, commands and addresses may be transmitted to processors and accelerators at other nodes only if a memory location has been cached outside of a node. Additionally, because accelerators generally do not access the same data as CPUs, only initial read, write, and synchronization operations may be transmitted to other nodes. Intermediate accesses to data may be performed non-coherently. As a result, the inter-chip bandwidth consumed for maintaining coherence may be reduced.
Type: Grant
Filed: October 11, 2010
Date of Patent: January 24, 2012
Assignee: International Business Machines Corporation
Inventors: Scott Douglas Clark, Andrew Henry Wottreng
-
Patent number: 8099557
Abstract: In one embodiment, a system comprises a first processor, a main memory system, and a cache hierarchy coupled between the first processor and the main memory system. The cache hierarchy comprises at least a first cache. The first processor is configured to execute a first instruction, including forming a first address responsive to one or more operands of the first instruction. The system is configured to push a first cache block that is hit by the first address in the first cache to a target location within the cache hierarchy or the main memory system, wherein the target location is unspecified in a definition of the first instruction within an instruction set architecture implemented by the first processor, and wherein the target location is implementation-dependent.
Type: Grant
Filed: February 26, 2008
Date of Patent: January 17, 2012
Assignee: GLOBALFOUNDRIES Inc.
Inventors: John D. McCalpin, Patrick N. Conway
-
Patent number: 8099521
Abstract: A network device comprises a controller that manages data flow through a network interconnecting a plurality of processors. The processors of the processor plurality comprise a local memory divided into a private local memory and a public local memory, a local cache, and working registers. The network device further comprises a plurality of cache mirror registers coupled to the controller that receive data to be forwarded to the processor plurality. The controller is responsive to a request to receive data by transferring requested data directly to public memory without interrupting the processor, and by transferring requested data via at least one cache mirror register for a transfer to processor local cache, and to processor working registers.
Type: Grant
Filed: October 26, 2007
Date of Patent: January 17, 2012
Assignee: Interactic Holdings Inc.
Inventor: Coke S. Reed
-
Publication number: 20120005430
Abstract: Access to various types of resources is controlled efficiently, thereby enhancing the throughput. A storage system includes: a disk device for providing a volume for storing data to a host system; a channel adapter for writing data from the host system to the disk device via a cache memory; a disk adapter for transferring data to and from the disk device; and at least one processor package including a plurality of processors for controlling the channel adapter and the disk adapter; wherein any one of the processor packages includes a processor for incorporatively transferring related types of ownership based on specific control information for managing the plurality of types of ownership for each of the plurality of types of resources.
Type: Application
Filed: April 21, 2010
Publication date: January 5, 2012
Applicant: Hitachi, Ltd.
Inventors: Koji Watanabe, Toshiya Seki, Takashi Sakaguchi
-
Publication number: 20120005431
Abstract: A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The clustered memory cache is accessible by a plurality of clients on the computer network and is configured to perform page caching of data items accessed by the clients. The network also includes a policy engine operatively coupled with the clustered memory cache, where the policy engine is configured to control where data items are cached in the clustered memory cache.
Type: Application
Filed: September 9, 2011
Publication date: January 5, 2012
Inventors: Jason P. Gross, Ranjit B. Pandit, Clive G. Cook, Thomas H. Matson
-
Patent number: 8090921
Abstract: A processing device included on a single chip includes processors capable of executing tasks in parallel and a cache memory shared by the processors, wherein the cache memory includes single-port memories and read data selection units, each of the single-port memories has one data output port, and each of the read data selection units is in a one-to-one association with one of the processors and selects the single-port memory which stores data to be read by the associated processor, from among the single-port memories.
Type: Grant
Filed: September 6, 2007
Date of Patent: January 3, 2012
Assignee: Panasonic Corporation
Inventors: Tetsu Hosoki, Masaitsu Nakajima
-
Publication number: 20110320727
Abstract: An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type and to assign a first request having the first command type to the first command queue and a second request having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Diana L. Orf, Robert J. Sonnelitter, III
-
Publication number: 20110320720
Abstract: Cache line replacement in a symmetric multiprocessing computer, the computer having a plurality of processors, a main memory that is shared among the processors, a plurality of cache levels including at least one high level of private caches and a low level shared cache, and a cache controller that controls the shared cache, including receiving in the cache controller a memory instruction that requires replacement of a cache line in the low level shared cache; and selecting for replacement by the cache controller a least recently used cache line in the low level shared cache that has no copy stored in any higher level cache.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Craig Walters, Vijayalakshmi Srinivasan
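The selection policy above reduces to a short scan: walk the shared cache's LRU order and take the first line with no copy in any higher-level private cache. A hypothetical sketch; the names and the plain-LRU fallback (for when every candidate is still cached above) are assumptions, not from the publication.

```python
def pick_victim(lru_order, has_private_copy):
    """Return the least recently used shared-cache line that has no copy
    in any higher-level private cache; fall back to plain LRU if every
    line is still cached above. (Illustrative sketch.)"""
    for line in lru_order:  # least recently used first
        if not has_private_copy.get(line, False):
            return line
    return lru_order[0]  # assumed fallback: ordinary LRU

# "A" is oldest but still held by a private cache, so "B" is evicted instead:
victim = pick_victim(["A", "B", "C"], {"A": True, "B": False, "C": False})
```

Evicting a line with no higher-level copy avoids invalidating data a processor is still actively using in its private cache.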
-
Publication number: 20110320728
Abstract: A cache includes a cache pipeline, a request receiver configured to receive off chip coherency requests from an off chip cache and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and is configured to give priority to off chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
-
Publication number: 20110320695
Abstract: Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
-
Publication number: 20110320729
Abstract: Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors are generated based on a current set of cache access requests currently operating on a set of cache banks and at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicates an availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set of cache access requests is selected to access a given cache bank based on the cache bank availability vector associated with the given cache bank and a set of access request parameters associated with the at least one cache access request that has been selected.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Timothy C. Bronson, Garrett M. Drapala, Hieu T. Huynh, Kenneth D. Klapproth
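An availability vector of this kind can be sketched as one flag per bank, derived from each bank's variable busy time, with requests granted only against banks whose flag is set. The representation (busy-until cycle counts, one grant per bank per cycle) is an assumption for illustration; the real selection also weighs per-request parameters.

```python
def available_banks(busy_until, now):
    """Availability vector: True for each cache bank whose variable busy
    time has elapsed by the current cycle. (Hypothetical encoding.)"""
    return [t <= now for t in busy_until]

def grant(requests, busy_until, now):
    """Grant each (request_id, bank) pair whose target bank is available,
    allowing at most one grant per bank per cycle. (Illustrative sketch.)"""
    avail = available_banks(busy_until, now)
    granted = []
    for req_id, bank in requests:
        if avail[bank]:
            granted.append(req_id)
            avail[bank] = False  # the bank is now busy this cycle
    return granted

# Banks 0 and 2 are free at cycle 10; requests r1 and r2 contend for bank 0:
g = grant([("r1", 0), ("r2", 0), ("r3", 2)], busy_until=[10, 12, 9, 15], now=10)
```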
-
Publication number: 20110320694
Abstract: A method of performing operations in a shared cache coupled to a first requestor and a second requestor includes receiving at the shared cache a first request from the second requestor; assigning the request to a state machine; transmitting a first pipe pass request from the state machine to an arbiter; providing a first instruction from the first pipe pass request to a cache pipeline, the first instruction causing a first pipe pass; and providing a second pipe pass request to the arbiter before the first pipe pass is completed.
Type: Application
Filed: June 23, 2010
Publication date: December 29, 2011
Applicant: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Robert J. Sonnelitter, III
-
Publication number: 20110320726
Abstract: A storage apparatus has a channel board 11; a drive board 13; a cache memory 14; a plurality of processor boards 12 that transfer data; and a shared memory 15. The channel board 11 stores a frame transfer table 521 containing information indicative of correspondence between LDEV 172 and each of the processor boards 12, set in accordance with a right of ownership that is a right of access to the LDEV 172. The processor boards 12 store LDEV control information 524 in a local memory 123, which is referred to by the processor board at the time of access. The channel board 11 transfers a data frame that forms the received data I/O request to one of the processor boards 12 corresponding to the LDEV 172 specified from the information contained in the frame, by using the frame transfer table 521.
Type: Application
Filed: May 26, 2009
Publication date: December 29, 2011
Applicant: Hitachi, Ltd.
Inventors: Takashi Noda, Takashi Ochi, Yoshihito Nakagawa
-
Patent number: 8087024
Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
Type: Grant
Filed: November 18, 2008
Date of Patent: December 27, 2011
Assignee: Intel Corporation
Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
-
Publication number: 20110314227
Abstract: Horizontal cache persistence in a multi-compute node, SMP computer, including, responsive to a determination to evict a cache line on a first one of the compute nodes, broadcasting by the first compute node an eviction notice for the cache line; transmitting, by the receiving compute nodes, the state of the cache line, including, if the cache line is missing from a compute node, an indication whether that compute node has cache storage space available for the cache line; determining by the first compute node, according to the states of the cache line and space available, whether the first compute node can evict the cache line without writing the cache line to main memory; and updating by each compute node the state of the cache line in each compute node, in dependence upon one or more of the states of the cache line in all the compute nodes.
Type: Application
Filed: June 21, 2010
Publication date: December 22, 2011
Applicant: International Business Machines Corporation
Inventors: Michael A. Blake, Lawrence D. Curley, Garrett M. Drapala, Edward J. Kaminski, Jr., Craig R. Walters
-
Publication number: 20110314212
Abstract: Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred.
Type: Application
Filed: June 22, 2010
Publication date: December 22, 2011
Applicant: International Business Machines Corporation
Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Diana L. Orf, Robert J. Sonnelitter, III
-
Patent number: 8082397
Abstract: Described are techniques and criteria used in connection with cache management. The cache may be organized as a plurality of memory banks in which each memory bank includes a plurality of slots. Each memory bank has an associated control slot that includes groups of extents of tags. Each cache slot has a corresponding tag that includes a bit value indicating the availability of the associated cache slot, and a time stamp indicating the last time the data in the slot was used. The cache may be shared by multiple processors. Exclusive access of the cache slots is implemented using an atomic compare and swap instruction. The time stamp of slots in the cache may be adjusted to indicate ages of slots affecting the amount of time a particular portion of data remains in the cache. Each director may obtain a cache slot from a private stack of nondata cache slots in addition to accessing a shared cache used by all directors.
Type: Grant
Filed: September 30, 2004
Date of Patent: December 20, 2011
Assignee: EMC Corporation
Inventors: Josef Ezra, Adi Ofer
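The tag format above (availability bit plus timestamp, claimed via atomic compare-and-swap) can be sketched in software, with a lock standing in for the hardware compare-and-swap instruction. The class and method names are invented for illustration.

```python
import threading

class SlotTag:
    """Cache-slot tag with an availability bit and a last-used timestamp.
    (Software sketch; a threading.Lock emulates the atomic
    compare-and-swap used for exclusive access in the patent.)"""
    def __init__(self):
        self._lock = threading.Lock()
        self.available = True
        self.timestamp = 0

    def try_claim(self, now):
        """Atomically claim the slot iff it is still marked available,
        recording the claim time; return whether the claim succeeded."""
        with self._lock:  # stands in for an atomic compare-and-swap
            if not self.available:
                return False  # another processor won the slot
            self.available = False
            self.timestamp = now
            return True

tag = SlotTag()
first = tag.try_claim(now=100)   # succeeds and stamps the slot
second = tag.try_claim(now=101)  # fails: slot already claimed
```

Adjusting `timestamp` after a claim is how artificial "ages" could be assigned, letting some data linger in (or leave) the cache sooner than strict recency would dictate.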
-
Publication number: 20110307665
Abstract: Subject matter disclosed herein relates to a system of one or more processors that includes persistent memory.
Type: Application
Filed: June 9, 2010
Publication date: December 15, 2011
Inventors: John Rudelic, August Camber, Mostafa Naguib Abdulla
-
Patent number: 8078804
Abstract: A data cache memory coupled to a processor including processor clusters is adapted to operate simultaneously on scalar and vectorial data by providing data locations in the data cache memory for storing data for processing. The data locations are accessed either in a scalar mode or in a vectorial mode. This is done by explicitly mapping the data locations that are scalar and the data locations that are vectorial.
Type: Grant
Filed: June 26, 2007
Date of Patent: December 13, 2011
Assignees: STMicroelectronics S.r.l., STMicroelectronics N.V.
Inventors: Francesco Pappalardo, Giuseppe Notarangelo, Elena Salurso, Elio Guidetti
-
Patent number: 8078798
Abstract: A virtual tape server (VTS) and a method for managing shared first level storage, such as a disk cache, among multiple virtual tape servers are provided. Such a system and method manage first level storage to accommodate two or more host processing systems by maintaining adequate free space in the cache for each host and by preventing one host, such as a mainframe, from taking over free space from another host, such as a Linux system.
Type: Grant
Filed: May 13, 2009
Date of Patent: December 13, 2011
Assignee: International Business Machines Corporation
Inventor: Greg T. Kishi
-
Patent number: 8074026
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: May 10, 2006
Date of Patent: December 6, 2011
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
-
Publication number: 20110296406
Abstract: Techniques for configuring a hypervisor scheduler to make use of cache topology of processors and physical memory distances between NUMA nodes when making scheduling decisions. In the same or other embodiments the hypervisor scheduler can be configured to optimize the scheduling of latency sensitive workloads. In the same or other embodiments a hypervisor can be configured to expose a virtual cache topology to a guest operating system running in a virtual machine.
Type: Application
Filed: June 1, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Aditya Bhandari, Dmitry Meshchaninov, Shuvabrata Ganguly
-
Publication number: 20110293097
Abstract: Techniques for memory compartmentalization for trusted execution of a virtual machine (VM) on a multi-core processing architecture are described. Memory compartmentalization may be achieved by encrypting layer 3 (L3) cache lines using a key under the control of a given VM within the trust boundaries of the processing core on which that VM is executed. Further, embodiments described herein provide an efficient method for storing and processing encryption related metadata associated with each encrypt/decrypt operation performed for the L3 cache lines.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Inventors: Fabio R. Maino, Pere Monclus, David A. McGrew
-
Publication number: 20110296113
Abstract: A method, system, and computer usable program product for recovery in a shared memory environment are provided in the illustrative embodiments. A core in a multi-core processor is designated as a user level core (ULC), which executes an instruction to modify a memory while executing an application. A second core is designated as an operating system core (OSC), which manages checkpointing of several segments of the shared memory. A set of flags is accessible to a memory controller to manage a shared memory. A flag in the set of flags corresponds to one segment in the segments of the shared memory. A message or instruction for modification of a segment is received. A cache line tracking determination is made whether a cache line used for the modification has already been used for a similar modification. If not, a part of the segment is checkpointed. The modification proceeds after checkpointing.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventor: Elmootazbellah Nabil Elnozahy
-
Publication number: 20110296407
Abstract: In a virtual machine environment, a hypervisor is configured to expose a virtual cache topology to a guest operating system, such that the virtual cache topology may be provided by corresponding physical cache topology. The virtual cache topology may be determined by the hypervisor or, in the case of a datacenter environment, may be determined by the datacenter's management system. The virtual cache topology may be calculated from the physical cache topology of the system such that virtual machines may be instantiated with virtual processors and virtual cache that may be mapped to corresponding logical processors and physical cache.
Type: Application
Filed: June 1, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Aditya Bhandari, Dmitry Meshchaninov, Shuvabrata Ganguly
-
Patent number: 8069444
Abstract: In a computer system with a multi-core processor having a shared cache memory level, an operating system scheduler adjusts the CPU latency of a thread running on one of the cores to be equal to the fair CPU latency which that thread would experience when the cache memory was equally shared, by adjusting the CPU time quantum of the thread. In particular, during a reconnaissance time period, the operating system scheduler gathers information regarding the threads via conventional hardware counters and uses an analytical model to estimate a fair cache miss rate that the thread would experience if the cache memory was equally shared. During a subsequent calibration period, the operating system scheduler computes the fair CPU latency using runtime statistics and the previously computed fair cache miss rate value to determine the fair CPI value.
Type: Grant
Filed: August 29, 2006
Date of Patent: November 29, 2011
Assignee: Oracle America, Inc.
Inventor: Alexandra Fedorova
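The quantum adjustment above can be illustrated with a simple proportional rule: a thread whose measured cycles-per-instruction (CPI) exceed its fair CPI (the CPI it would see under an equally shared cache) receives a proportionally longer time quantum, so its effective CPU latency matches the fair one. This exact formula is an assumption for illustration; the patent's analytical model is more detailed.

```python
def adjusted_quantum(base_quantum, actual_cpi, fair_cpi):
    """Scale a thread's CPU time quantum by the ratio of its measured
    CPI to its fair CPI, compensating for unequal cache sharing.
    (Hypothetical proportional rule, not the patent's exact model.)"""
    return base_quantum * (actual_cpi / fair_cpi)

# A thread suffering 25% more cycles per instruction than its fair share
# gets a 25% longer quantum to equalize its CPU latency:
q = adjusted_quantum(base_quantum=10.0, actual_cpi=2.5, fair_cpi=2.0)
```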
-
Patent number: 8051248
Abstract: In one embodiment, a processor comprises an execution core, a level 1 (L1) data cache coupled to the execution core and configured to store data, and a transient/transactional cache (TTC) coupled to the execution core. The execution core is configured to generate memory read and write operations responsive to instruction execution, and to generate transactional read and write operations responsive to executing transactional instructions. The L1 data cache is configured to cache memory data accessed responsive to memory read and write operations, to identify potentially transient data, and to prevent the identified transient data from being stored in the L1 data cache. The TTC is also configured to cache transaction data accessed responsive to transactional read and write operations to track transactions. Each entry in the TTC is usable for transaction data and for transient data.
Type: Grant
Filed: May 5, 2008
Date of Patent: November 1, 2011
Assignee: GLOBALFOUNDRIES Inc.
Inventors: Michael Frank, David J. Leibs, Michael J. Haertel
-
Patent number: 8046768
Abstract: In an embodiment of the invention, an apparatus and method for detecting resource consumption and preventing workload starvation are provided. The apparatus and method perform the acts including: receiving a query; determining if the query will be classified as a resource intense query, based on a number of passes by a cache call over a data blocks set during a time window, where the cache call is associated with the query; and if the query is classified as a resource intense query, then responding to prevent workload starvation.
Type: Grant
Filed: July 31, 2007
Date of Patent: October 25, 2011
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Gary S. Smith, Milford L. Hazlet
-
Patent number: 8046540
Abstract: A method and apparatus for copying data from a virtual machine to a shared closure on demand. This process improves system efficiency by avoiding the copying of data in the large number of cases where the same virtual machine is the next to request access and use of the data. Load balancing and failure recovery are supported by copying the data to the shared closure when the data is requested by another virtual machine, or recovering the data from the failed virtual machine and storing it in the shared closure before a terminated virtual machine is discarded.
Type: Grant
Filed: April 26, 2007
Date of Patent: October 25, 2011
Assignee: SAP AG
Inventors: Thomas Smits, Jan Boris Dostert, Oliver Patrick Schmidt, Norbert Kuck
-
Publication number: 20110246721
Abstract: A method and apparatus for data backup are disclosed. Embodiments of the method comprise receiving a set of data from a local computer, caching the received data locally on the storage appliance in a buffer module, uploading the cached data to a remote computer, and accessing the set of data using the storage device. Embodiments of the apparatus comprise a network interface module for establishing connection of the storage appliance with at least one computer in a local network and at least one remote computer in a cloud network; a buffer module for receiving data to be backed up from the at least one computer; and a processor.
Type: Application
Filed: March 31, 2010
Publication date: October 6, 2011
Applicant: Sony Corporation
Inventor: Adrian Crisan
-
Patent number: 8028131
Abstract: According to one embodiment of the invention, a processor comprises a cache memory, a plurality of processor cores in communication with the cache memory, and a scalability agent unit that operates as an interface between an on-die interconnect and both the multiple processor cores and memory.
Type: Grant
Filed: November 29, 2006
Date of Patent: September 27, 2011
Assignee: Intel Corporation
Inventor: Krishnakanth Sistla
-
Patent number: 8028129
Abstract: In one embodiment, the present invention includes a method for determining if a state of data is indicative of a first class of data, re-classifying the data from a second class to the first class based on the determination, and moving the data to a first portion of a shared cache associated with a first requester unit based on the re-classification. Other embodiments are described and claimed.
Type: Grant
Filed: July 2, 2009
Date of Patent: September 27, 2011
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Yen-Kuang Chen
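A rough sketch of the re-classification step, under assumptions not in the abstract (the classes are taken to be "reused" vs. "general", with reuse count as the state that triggers re-classification): lines start in a general shared portion, and when a line's access state shows repeated use by one core, it is re-classified and moved into that core's portion of the shared cache.

```python
class SharedCache:
    """Sketch: re-classify data by observed state and move it to the
    requester's portion of the shared cache."""

    def __init__(self):
        self.shared_portion = {}   # addr -> data (second class)
        self.core_portions = {}    # core_id -> {addr: data} (first class)
        self.hits = {}             # (core_id, addr) -> access count

    def access(self, core_id, addr, data, reuse_threshold=2):
        self.hits[(core_id, addr)] = self.hits.get((core_id, addr), 0) + 1
        portion = self.core_portions.setdefault(core_id, {})
        if addr in portion:
            return portion[addr]
        if self.hits[(core_id, addr)] >= reuse_threshold:
            # State indicates first-class (reused) data: re-classify and
            # move it into the requester's portion of the shared cache.
            portion[addr] = self.shared_portion.pop(addr, data)
            return portion[addr]
        self.shared_portion[addr] = data
        return data
```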
-
Patent number: 8019719
Abstract: Systems and methods for partitioning information across multiple storage devices in a web server environment. The system comprises a web server database which includes information related to creating a web site. The information is divided into partitions within the database. One of the partitions includes user information and another of the partitions includes content for the web site. Portions of the content for the web site are replicated and maintained within the partition including the user information. Further, a portion of the user information is replicated and maintained in the partition where the content for the web site is maintained. The methods include dividing information into partitions, de-normalizing the received data, and replicating the data portions into the various web site locations.
Type: Grant
Filed: June 23, 2008
Date of Patent: September 13, 2011
Assignee: Ancestry.com Operations Inc.
Inventors: Todd Hardman, James Ivie, Michael Mansfield, Greg Parkinson, Daren Thayne, Mark Wolfgramm, Michael Wolfgramm, Brandt Redd
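The cross-replication scheme can be sketched in miniature; the field names below are assumptions, not from the patent. Each partition keeps a denormalized copy of the fields it most often needs from the other, so the common read never crosses partitions.

```python
# Hypothetical sketch: user data and site content live in separate
# partitions; each holds a replicated copy of fields from the other.
user_partition = {}
content_partition = {}

def add_user_and_site(user_id, name, site_id, title):
    user_partition[user_id] = {
        "name": name,
        "site_title": title,    # replicated from the content partition
    }
    content_partition[site_id] = {
        "title": title,
        "owner_name": name,     # replicated from the user partition
    }

def dashboard(user_id):
    # Served entirely from the user partition: no cross-partition join.
    u = user_partition[user_id]
    return f"{u['name']}: {u['site_title']}"
```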
-
Publication number: 20110219191
Abstract: In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line.
Type: Application
Filed: January 18, 2011
Publication date: September 8, 2011
Applicant: International Business Machines Corporation
Inventors: Daniel Ahn, Luis H. Ceze, Alan Gara, Martin Ohmacht, Zhuang Xiaotong
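The bitset reader-set encoding lends itself to a compact sketch (the class shape is an assumption; only the encoding itself comes from the abstract): each directory entry carries one bit per speculative thread that has read the line, and a write conflict-checks against every other reader's bit.

```python
class DirectoryEntry:
    """Sketch of a reader-set bitset: bit t set means thread t read the line."""

    def __init__(self):
        self.reader_set = 0

    def record_read(self, thread_id):
        self.reader_set |= 1 << thread_id

    def check_write(self, thread_id):
        # A write conflicts if any *other* speculative thread read the line.
        conflicting = self.reader_set & ~(1 << thread_id)
        return [t for t in range(conflicting.bit_length())
                if conflicting >> t & 1]
```

Because the set is a plain bitmask, recording a read and testing for conflicts are each a single OR/AND, which is why this encoding fits naturally in a directory lookup.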
-
Patent number: 8010696
Abstract: Safe and efficient passing of information from a forwarding-plane to a control-plane is provided. The information can be passed from a forwarding-plane process to a control-plane process without having to modify the control-plane process and without requiring the processes to pass information via shared memory. The information is encoded in the forwarding-plane process. The encoded information is passed to the operating system, wherein the operating system interprets the encoded information and reports the information to the control-plane process. The present invention can be advantageously utilized in passing multicast events from a forwarding-plane process to a control-plane process. Multicast events can be passed from a forwarding-plane process to a control-plane process without having to modify the control-plane process and without requiring the processes to pass messages via shared memory.
Type: Grant
Filed: June 6, 2006
Date of Patent: August 30, 2011
Assignee: Avaya Inc.
Inventors: Harish Sankaran, Janet Doong, Arun Kudur
-
Patent number: 8010750
Abstract: A network on chip (‘NOC’) including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, wherein the memory communications controller is configured to execute a memory access instruction and to determine a state of a cache line addressed by the memory access instruction, the state of the cache line being one of shared, exclusive, or invalid; the memory communications controller is configured to broadcast an invalidate command to a plurality of IP blocks of the NOC if the state of the cache line is shared; and the memory communications controller is configured to transmit an invalidate command only to the IP block that controls the cache where the cache line is stored if the state of the cache line is exclusive.
Type: Grant
Filed: January 17, 2008
Date of Patent: August 30, 2011
Assignee: International Business Machines Corporation
Inventors: Miguel Comparan, Russell D. Hoover, Jamie R. Kuesel, Eric O. Mejdrich
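The state-dependent invalidation choice reduces to a three-way branch, sketched below under the assumption that an invalidate "command" can be modeled as the list of IP blocks that receive it (the message format itself is not given in the abstract):

```python
def invalidate(line_state, owner_ip_block, all_ip_blocks):
    """Return the IP blocks that receive an invalidate command."""
    if line_state == "shared":
        return list(all_ip_blocks)   # broadcast to every IP block
    if line_state == "exclusive":
        return [owner_ip_block]      # only the cache holding the line
    return []                        # invalid: nothing to invalidate
```

The point of the branch is traffic reduction: an exclusive line has exactly one cached copy, so a broadcast would waste network-on-chip bandwidth.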
-
Publication number: 20110208916
Abstract: A monitoring section 139 monitors a power control command for controlling power supplied to a processor for operating a plurality of operating systems or a plurality of processors. A cache entry selecting section 141 sets a cache entry used by the operating system or the processor having executed the power control command to a state used in the past, using executed states of the plurality of operating systems or the plurality of processors that are changed based on the power control command, upon selecting a cache entry to be replaced from a plurality of cache entries constituting a cache storage device 111. A replacement object selecting section 136 selects the cache entry set to the state used in the past as the cache entry to be replaced. In this way, the plurality of operating systems or the plurality of processors can effectively utilize one cache storage device.
Type: Application
Filed: November 28, 2008
Publication date: August 25, 2011
Inventor: Masahiko Saito
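A simplified sketch of the replacement policy (the data layout is an assumption; the abstract only specifies the behavior): when a processor or OS is powered down, its entries are marked as used-in-the-past, and the replacement selector prefers those entries as victims.

```python
class CacheEntry:
    def __init__(self, owner, addr):
        self.owner = owner          # OS or processor that filled the entry
        self.addr = addr
        self.used_in_past = False

entries = [CacheEntry("os_a", 0x100), CacheEntry("os_b", 0x200),
           CacheEntry("os_a", 0x300)]

def on_power_down(owner):
    # Monitoring section observed a power-control command: mark the
    # powered-down owner's entries as used in the past.
    for e in entries:
        if e.owner == owner:
            e.used_in_past = True

def select_victim():
    # Replacement-object selector: prefer used-in-the-past entries.
    past = [e for e in entries if e.used_in_past]
    return (past or entries)[0]
```

The effect is that a still-running OS's working set survives eviction pressure, while lines belonging to a powered-down owner are reclaimed first.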
-
Patent number: 8006039
Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data; determining if the data exists in a local cache memory that is in local communication with the input/output device; fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory; merging the data according to the request to obtain merged data; and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product are also described.
Type: Grant
Filed: February 25, 2008
Date of Patent: August 23, 2011
Assignee: International Business Machines Corporation
Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
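The merge flow can be sketched as a read-modify-write kept entirely in the local cache. The "merge" is illustrated here as a bitwise OR, which is an assumption; the abstract only says it is a manipulation of the data.

```python
local_cache = {}
remote = {0x40: 0b0001}   # stands in for remote cache / main memory

def merge(addr, value):
    if addr not in local_cache:        # miss: fetch into the local cache
        local_cache[addr] = remote[addr]
    local_cache[addr] |= value         # the merge (data manipulation)
    return local_cache[addr]           # merged result stays local
```

No path in the merge itself touches `remote` once the line is resident, which mirrors the claim that the memory controller is outside both the control flow and the data flow of the merge.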
-
Patent number: 8001330
Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
Type: Grant
Filed: December 1, 2008
Date of Patent: August 16, 2011
Assignee: International Business Machines Corporation
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
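The two-level arbitration can be sketched as follows; the structure and the address-bit slice selection are assumptions for illustration. Each slice's directory arbiter picks one winner among its own requests, and a single cache arbiter then serializes the winners onto the one access/command port.

```python
class SlicedCache:
    """Sketch: two array slices, per-slice directory arbitration, one
    cache arbiter controlling the single access/command port."""

    def __init__(self):
        self.slices = [{}, {}]   # two logical array slices
        self.port_log = []       # accesses issued on the single port

    def slice_for(self, addr):
        return addr & 1          # directory/slice selection by address bit

    def access(self, requests):
        # requests: list of (addr, value).  Directory arbiters keep the
        # first request per slice; losers would retry in a real design.
        winners = {}
        for addr, value in requests:
            winners.setdefault(self.slice_for(addr), (addr, value))
        for s in sorted(winners):   # cache arbiter serializes the winners
            addr, value = winners[s]
            self.slices[s][addr] = value
            self.port_log.append(addr)
```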
-
Publication number: 20110197031
Abstract: Disclosed herein is a miss handler for a multi-channel cache memory, and a method that includes determining a need to update a multi-channel cache memory due at least to one of an occurrence of a cache miss or a data prefetch being needed. The method further includes operating a multi-channel cache miss handler to update at least one cache channel storage of the multi-channel cache memory from a main memory.
Type: Application
Filed: February 5, 2010
Publication date: August 11, 2011
Inventors: Eero Aho, Jari Nikara, Kimmo Kuusilinna
-
Patent number: 7996630
Abstract: Provided is a method of managing memory in a multiprocessor system on chip (MPSoC). According to an aspect of the present invention, locality of memory can be reflected and restricted memory resources can be efficiently used by determining a storage location of a variable or a function which corresponds to a symbol, with reference to a symbol table, based on memory access frequency of the variable or the function; comparing the determined storage location and a previous storage location; and copying the variable or the function stored in the previous storage location to the determined storage location if the determined storage location is different from the previous storage location.
Type: Grant
Filed: August 11, 2010
Date of Patent: August 9, 2011
Assignee: Samsung Electronics Co., Ltd.
Inventors: Keun Soo Yim, Jeong-joon Yoo, Young-sam Shin, Seung-won Lee, Han-cheol Kim, Jae-don Lee, Min-kyu Jeong
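The placement policy can be sketched in a few lines. The memory tiers ("scratchpad" vs. "external_dram") and the frequency threshold are assumptions; the abstract only specifies that placement follows access frequency and that a copy happens only when the chosen location differs from the current one.

```python
symbol_table = {"hot_fn": {"freq": 900, "loc": "external_dram"},
                "cold_fn": {"freq": 3, "loc": "external_dram"}}
copies = []   # record of (symbol, from, to) copy operations

def placement_for(freq):
    # Frequently accessed symbols go to the fast on-chip memory.
    return "scratchpad" if freq >= 100 else "external_dram"

def place(symbol):
    entry = symbol_table[symbol]
    target = placement_for(entry["freq"])
    if target != entry["loc"]:   # copy only when the location changes
        copies.append((symbol, entry["loc"], target))
        entry["loc"] = target
```

The location comparison is what keeps the scheme cheap: unchanged symbols incur no copy traffic at all.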
-
Patent number: 7996644
Abstract: An apparatus and method for fairly accessing a shared cache with multiple resources, such as multiple cores, multiple threads, or both are herein described. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet allowed to victimize the static portion assigned to the resource and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to the resource is reassigned to the dynamically shared portion.
Type: Grant
Filed: December 29, 2004
Date of Patent: August 9, 2011
Assignee: Intel Corporation
Inventor: Sailesh Kottapalli
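The static/dynamic split can be sketched at the level of cache ways; the way counts and the access threshold are assumptions, not values from the patent. A core may victimize its own static ways or the shared pool, never another core's static ways, and an idle core's static ways fold back into the shared pool.

```python
class PartitionedCache:
    """Sketch of per-resource static ways plus a dynamically shared pool."""

    def __init__(self, assignments):
        self.static = dict(assignments)   # core_id -> set of way indices
        self.dynamic = set(range(8)) - set().union(*assignments.values())
        self.accesses = {c: 0 for c in assignments}

    def victim_ways(self, core_id):
        # Ways this core is allowed to evict from: its static portion
        # plus the dynamically shared portion, never another core's ways.
        return self.static.get(core_id, set()) | self.dynamic

    def rebalance(self, min_accesses=10):
        # Cores that touched the cache too rarely lose their static
        # portion to the dynamically shared portion.
        for core, n in self.accesses.items():
            if n < min_accesses and self.static.get(core):
                self.dynamic |= self.static.pop(core)
```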
-
Patent number: 7996615
Abstract: A method to associate a storage policy with a cache region is disclosed. In this method, a cache region associated with an application is created. The application runs on virtual machines, where a first virtual machine has a local memory cache that is private to the first virtual machine. The first virtual machine additionally has a shared memory cache that is shared by the first virtual machine and a second virtual machine. Additionally, the cache region is associated with a storage policy. Here, the storage policy specifies that a first copy of an object to be stored in the cache region is to be stored in the local memory cache and that a second copy of the object to be stored in the cache region is to be stored in the shared memory cache.
Type: Grant
Filed: July 7, 2010
Date of Patent: August 9, 2011
Assignee: SAP AG
Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio G. Petev
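A minimal sketch of the dual-copy storage policy, with API names that are assumptions rather than SAP's: a put into the cache region writes one copy to the VM-private local cache and a second, independent copy to the cache shared across VMs.

```python
class CacheRegion:
    """Sketch: a cache region whose storage policy places one copy of an
    object in the local memory cache and a second in the shared cache."""

    def __init__(self, policy, local_cache, shared_cache):
        self.policy = policy
        self.local = local_cache     # private to this VM
        self.shared = shared_cache   # visible to all VMs

    def put(self, key, obj):
        if "local" in self.policy:
            self.local[key] = obj
        if "shared" in self.policy:
            self.shared[key] = dict(obj)   # second, independent copy

local_vm1, shared = {}, {}
region = CacheRegion({"local", "shared"}, local_vm1, shared)
```

Keeping the shared copy independent (here via `dict(obj)`) matters: the local copy can be mutated by its VM without the shared copy silently changing underneath other VMs.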