Shared Cache Patents (Class 711/130)
-
Patent number: 9589039
Abstract: Synchronization of metadata structures in a multi-threaded system includes receiving, by a first thread of a processing device, a request for a metadata structure located in a first cache associated with an object, obtaining, by the first thread of the processing device, a synchronization mechanism associated with the first cache, holding, by the first thread of the processing device, the metadata structure associated with the object, receiving, by a second thread of the processing device, a request for the metadata structure in a second cache associated with the object, obtaining, by the second thread of the processing device, a synchronization mechanism associated with the second cache and informing the second thread of the processing device that the metadata structure associated with the object is not available.
Type: Grant
Filed: December 13, 2012
Date of Patent: March 7, 2017
Assignee: Sybase, Inc.
Inventors: Amit Pathak, Aditya Kelkar, Paresh Rathod
-
Patent number: 9588814
Abstract: The present disclosure is directed to fast approximate conflict detection. A device may comprise, for example, a memory, a processor and a fast conflict detection module (FCDM) to cause the processor to perform fast conflict detection. The FCDM may cause the processor to read a first and second vector from memory, and to then generate summaries based on the first and second vectors. The summaries may be, for example, shortened versions of write and read addresses in the first and second vectors. The FCDM may then cause the processor to distribute the summaries into first and second summary vectors, and may then determine potential conflicts between the first and second vectors by comparing the first and second summary vectors. The summaries may be distributed into the first and second summary vectors in a manner allowing all of the summaries to be compared to each other in one vector comparison transaction.
Type: Grant
Filed: December 24, 2014
Date of Patent: March 7, 2017
Assignee: Intel Corporation
Inventors: Sara S. Baghsorkhi, Albert Hartono, Youfeng Wu, Cheng Wang
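The summary-comparison idea in this abstract can be illustrated in a few lines. This is a minimal sketch, not the patented implementation: a real FCDM would pack the summaries into SIMD vectors and compare them in a single vector instruction, whereas here the all-pairs comparison is written out explicitly. The `summarize` helper and its 8-bit width are illustrative assumptions.

```python
def summarize(addr, bits=8):
    # Shorten an address to a small summary (low-order bits here).
    # Equal addresses always produce equal summaries, so a real
    # conflict is never missed; distinct addresses may collide.
    return addr & ((1 << bits) - 1)

def potential_conflicts(write_addrs, read_addrs, bits=8):
    # Compare every write summary against every read summary,
    # standing in for the single vector-comparison transaction.
    write_sum = [summarize(a, bits) for a in write_addrs]
    read_sum = [summarize(a, bits) for a in read_addrs]
    return [(i, j)
            for i, w in enumerate(write_sum)
            for j, r in enumerate(read_sum)
            if w == r]
```

Because distinct addresses can share a summary, the result may contain false positives, which a precise follow-up check would filter; that trade is what makes the detection "fast approximate".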
-
Patent number: 9575895
Abstract: In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed.
Type: Grant
Filed: January 30, 2015
Date of Patent: February 21, 2017
Assignee: Intel Corporation
Inventors: Yen-Cheng Liu, Robert G. Blankenship, Geeyarpuram N. Santhanakrishnan, Ganapati N. Srinivasa, Kenneth C. Creta, Sridhar Muthrasanallur, Bahaa Fahim
-
Patent number: 9563425
Abstract: Instructions and logic provide pushing buffer copy and store functionality. Some embodiments include a first hardware thread or processing core, and a second hardware thread or processing core, a cache to store cache coherent data in a cache line for a shared memory address accessible by the second hardware thread or processing core. Responsive to decoding an instruction specifying a source data operand, said shared memory address as a destination operand, and one or more owner of said shared memory address, one or more execution units copy data from the source data operand to the cache coherent data in the cache line for said shared memory address accessible by said second hardware thread or processing core in the cache when said one or more owner includes said second hardware thread or processing core.
Type: Grant
Filed: November 28, 2012
Date of Patent: February 7, 2017
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Changkyu Kim, Daehyun Kim, Victor W. Lee, Jong Soo Park
-
Patent number: 9529715
Abstract: Embodiments of the invention relate to a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations.
Type: Grant
Filed: March 15, 2013
Date of Patent: December 27, 2016
Assignee: Intel Corporation
Inventors: Sanjeev Kumar, Christopher J. Hughes, Partha Kundu, Anthony Nguyen
-
Patent number: 9519585
Abstract: A method of implementing a shared cache between a plurality of virtual machines may include maintaining the plurality of virtual machines on one or more physical machines. Each of the plurality of virtual machines may include a private cache. The method may also include determining portions of the private caches that are idle and maintaining a shared cache that comprises the portions of the private caches that are idle. The method may additionally include storing data associated with the plurality of virtual machines in the shared cache and load balancing use of the shared cache between the plurality of virtual machines.
Type: Grant
Filed: June 25, 2014
Date of Patent: December 13, 2016
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventor: Daniel Magenheimer
-
Patent number: 9507524
Abstract: A machine-implemented method and system for in-band management are provided. The method includes generating a management command from a first adapter having a storage protocol controller; and using a transport driver at the first adapter for sending the management command to a second adapter via the port that is also used for sending any input/output requests to read and write information to and from a storage device, where the management command is in a same format as a write command block for writing information at the storage device.
Type: Grant
Filed: March 8, 2013
Date of Patent: November 29, 2016
Assignee: QLOGIC, Corporation
Inventors: Shishir Shah, Sudhir T. Ponnachana
-
Patent number: 9495401
Abstract: Unified and normalized management of an object within a structured data store on any machine and/or across different machines. In an embodiment, a first agent accesses a first request dataset representing a two-dimensional structure. Each row in the request dataset comprises an identification of an agent, a statement, an identification of a resource to execute the statement, and one of a plurality of request types. Each row in the request dataset is processed according to the identification of the agent in the row. When the identified agent is the first agent, the request type of the row is accessed, and one or more elements in the row are processed based on the request type. When the identified agent is not the first agent, the row is sent within a second request dataset to the identified agent (which may be on a different machine than the first agent) for processing.
Type: Grant
Filed: April 13, 2015
Date of Patent: November 15, 2016
Inventor: Douglas T. Migliori
-
Patent number: 9483810
Abstract: Methods and apparatuses to reduce the number of IO requests to memory when executing a program that iteratively processes contiguous data are provided. A first set of data elements may be loaded in a first register and a second set of data elements may be loaded in a second register. The first set of data elements and the second set of data elements can be used during the execution of a program to iteratively process the data elements. For each of a plurality of iterations, a corresponding set of data elements to be used during the execution of an operation for the iteration may be selected from the first set of data elements stored in the first register and the second set of data elements stored in the second register. In this way, the same data elements are not re-loaded from memory during each iteration.
Type: Grant
Filed: December 28, 2011
Date of Patent: November 1, 2016
Assignee: Intel Corporation
Inventor: Tomasz Janczak
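The register-reuse pattern this abstract describes can be sketched in pure Python, with two lists standing in for the two registers. All names (`windowed_sums`, `chunk`) are illustrative assumptions; the point is that each iteration's window is selected from already-loaded data, so the load count grows far more slowly than the iteration count.

```python
def windowed_sums(data, width, chunk=4):
    # Sum a sliding window over `data`, reloading the two simulated
    # registers only when the window would run past the loaded range.
    loads = 0
    base = -1
    regs = []
    out = []
    for i in range(len(data) - width + 1):
        if base < 0 or i + width > base + 2 * chunk:
            base = i
            reg_a = data[base:base + chunk]               # first register
            reg_b = data[base + chunk:base + 2 * chunk]   # second register
            regs = reg_a + reg_b
            loads += 2
        # Select this iteration's elements from the registers, not memory.
        window = regs[i - base:i - base + width]
        out.append(sum(window))
    return out, loads
```

A naive version would issue one load per iteration; here eight elements are fetched once and serve six iterations.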
-
Patent number: 9459951
Abstract: A technique is provided for accumulating failures. A failure of a first row is detected in a group of array macros, the first row having first row address values. A mask has mask bits corresponding to each of the first row address values. The mask bits are initially in active status. A failure of a second row, having second row address values, is detected. When none of the first row address values matches the second row address values, and when mask bits are all in the active status, the array macros are determined to be bad. When at least one of the first row address values matches the second row address values, mask bits that correspond to at least one of the first row address values that match are kept in active status, and mask bits that correspond to non-matching first address values are set to inactive status.
Type: Grant
Filed: March 21, 2016
Date of Patent: October 4, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael F. Fee, Patrick J. Meaney, Arthur J. O'Neill, Jr.
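A rough sketch of the mask-accumulation logic above, under the simplifying assumption that "bad" is declared exactly when no address value matches while all mask bits are still active; the real hardware tracks more state than this, and the class and method names are invented for illustration.

```python
class FailureAccumulator:
    # Keep mask bits active only for address positions that have matched
    # across observed row failures; a failure sharing no address values
    # while the mask is still fully active marks the macro group bad.
    def __init__(self, width):
        self.first = None
        self.mask = [True] * width   # all mask bits start active

    def record(self, row_addr):
        # row_addr: list of address values for the failing row.
        if self.first is None:
            self.first = list(row_addr)
            return "ok"
        matches = [a == b for a, b in zip(self.first, row_addr)]
        if not any(matches) and all(self.mask):
            return "bad"             # nothing matches, mask all active
        # Keep matching positions active, deactivate the rest.
        self.mask = [m and act for m, act in zip(matches, self.mask)]
        return "ok"
```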
-
Patent number: 9455017
Abstract: A storage control device includes: a partial unit buffer configured to hold at least one data assigned to a partial unit, in which the partial unit is one of a plurality of partial units that are each a division of a write unit for a memory; and a request generation section configured to generate, upon indication of a busy state in the memory for any of the partial units, a write request for the write unit of the memory when the holding of the data assigned to that partial unit is possible in the partial unit buffer.
Type: Grant
Filed: February 25, 2014
Date of Patent: September 27, 2016
Assignee: Sony Corporation
Inventors: Kenichi Nakanishi, Yasushi Fujinami, Ken Ishii, Hiroyuki Iwaki, Kentarou Mori
-
Patent number: 9451040
Abstract: A method includes altering a request interval threshold when a cache-hit ratio falls below a target, receiving a request for content, providing the content when the content is in the cache, when the content is not in the cache and the time since a previous request for the content is less than the request interval threshold, retrieving and storing the content, and providing the content to the client, when the elapsed time is greater than the request interval threshold, and when another elapsed time since another previous request for the content is less than another request interval threshold, retrieving and storing the content, and providing the content to the client, and when the other elapsed time is greater than the other request interval threshold, rerouting the request to the content server without caching the content.
Type: Grant
Filed: February 2, 2015
Date of Patent: September 20, 2016
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventor: Paul K. Reeser
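The core caching decision here — admit content only when requests for it arrive closer together than an interval threshold — can be sketched as below. This is a simplification under stated assumptions: the threshold-altering feedback loop and the second threshold are omitted, `now` is passed explicitly instead of read from a clock, and all names are invented for illustration.

```python
class IntervalCache:
    # Cache content only when two requests for it arrive within
    # `threshold` time units; lone or widely spaced requests pass through.
    def __init__(self, threshold):
        self.threshold = threshold
        self.cache = {}
        self.last_seen = {}

    def get(self, key, fetch, now):
        if key in self.cache:
            return self.cache[key], "hit"
        prev = self.last_seen.get(key)
        self.last_seen[key] = now
        if prev is not None and now - prev < self.threshold:
            # Second request within the interval: worth caching.
            self.cache[key] = fetch()
            return self.cache[key], "miss-cached"
        # First request, or requests too far apart: serve without caching.
        return fetch(), "miss-rerouted"
```

Raising the threshold admits more content into the cache, which is why the method alters it when the hit ratio drops below target.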
-
Patent number: 9436619
Abstract: A separation kernel isolating memory domains within a shared system memory is executed on the cores of a multicore processor having hardware security enforcement for static virtual address mappings, to implement an efficient embedded multi-level security system. Shared caches are either disabled or constrained by the same static virtual address mappings using the hardware security enforcement available, to isolate domains accessible to select cores and reduce security risks from data co-mingling.
Type: Grant
Filed: September 8, 2014
Date of Patent: September 6, 2016
Assignee: Raytheon Company
Inventor: Brandon Woolley
-
Patent number: 9413801
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for merging media stream indexes of a media stream are described in this specification. In one aspect, a method includes receiving a first media stream index at a first server system, including a first list of sequentially arranged fragment identifiers corresponding to at least a portion of multiple sequentially arranged fragments. Fragment identifiers that are potentially missing from the first index can be identified. A second media stream index including a second list of sequentially arranged fragment identifiers corresponding to at least a portion of the multiple sequentially arranged fragments can be requested from a second server system. The first and second list of the sequentially arranged fragment identifiers can be compared and the first list of sequentially arranged fragment identifiers can be reconstructed based on the comparison.
Type: Grant
Filed: June 28, 2012
Date of Patent: August 9, 2016
Assignee: Adobe Systems Incorporated
Inventors: Glenn Eguchi, Asa Whillock, Kevin Streeter, Mohammed Pithapurwala, Noam Lorberbaum, Seth Hodgson, Srinivas Manapragada
-
Patent number: 9411741
Abstract: The disclosure generally relates to methods and systems for application level caching and more particularly to dynamically applying caching policies to a software application. In one embodiment, an application level caching method, comprising: monitoring, using a utility executed by a processor, run-time data access operations corresponding to an application; identifying, using the processor, at least one characteristic associated with the run-time data access operations; triggering, using the processor, a caching rule based on the at least one characteristic associated with the run-time data access operations; and providing, using the processor, a memory access instruction according to the caching rule.
Type: Grant
Filed: September 18, 2013
Date of Patent: August 9, 2016
Assignee: WIPRO LIMITED
Inventors: Munish Kumar Gupta, Aravind Ajad Yarra
-
Patent number: 9411729
Abstract: A transactional memory system salvages hardware lock elision (HLE) transactions. A computer system of the transactional memory system records information about locks elided to begin HLE transactional execution of first and second transactional code regions. The computer system detects a pending cache line conflict of a cache line, and based on the detecting stops execution of the first code region of the first transaction and the second code region of the second transaction. The computer system determines that the first lock and the second lock are different locks and uses the recorded information about locks elided to acquire the first lock of the first transaction and the second lock of the second transaction. The computer system commits speculative state of the first transaction and the second transaction and the computer system continues execution of the first code region and the second code region non-transactionally.
Type: Grant
Filed: February 27, 2014
Date of Patent: August 9, 2016
Assignee: International Business Machines Corporation
Inventors: Harold W. Cain, III, Michael Karl Gschwind, Maged M. Michael, Chung-Lung K. Shum
-
Patent number: 9405595
Abstract: In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed.
Type: Grant
Filed: July 24, 2014
Date of Patent: August 2, 2016
Assignee: Intel Corporation
Inventors: Sailesh Kottapalli, John H. Crawford
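The per-thread-slot barrier described above can be sketched as follows. The patent relies on cache-coherent reads of a shared variable; this sketch substitutes a `threading.Condition` for the spin-on-shared-variable loop, so it shows the slot-assignment idea rather than the hardware mechanism, and the class name is invented.

```python
import threading

class SlotBarrier:
    # Each thread owns one slot of a shared variable; writing its slot
    # marks arrival, and the barrier opens once every slot is written.
    def __init__(self, n):
        self.slots = [0] * n              # one location per thread
        self.cond = threading.Condition()

    def wait(self, thread_id):
        with self.cond:
            self.slots[thread_id] = 1     # announce arrival in our slot
            if all(self.slots):
                self.cond.notify_all()    # last arrival releases everyone
            else:
                self.cond.wait_for(lambda: all(self.slots))
```

Giving each thread its own location avoids the contended read-modify-write of a single shared counter, which is the stated motivation for the design.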
-
Patent number: 9401917
Abstract: A method executed on a first electronic device for accessing an application server on a second electronic device includes receiving a cache manifest for an application, the cache manifest identifying a resource item that can be pre-cached on the first electronic device, pre-caching the resource item as a cached resource item in a cache memory of the first electronic device prior to launching an application client on the first electronic device. The method further includes, upon launching the application client on the first electronic device, retrieving data from the application server, wherein the data includes content and a reference to the resource item, obtaining, from the cache memory, the cached resource item that corresponds to the resource item, and displaying an output based upon the content and the cached resource item.
Type: Grant
Filed: June 3, 2011
Date of Patent: July 26, 2016
Assignee: BlackBerry Limited
Inventors: Michael Stephen Brown, Herbert Anthony Little, Terrill Mark Dent
-
Patent number: 9378141
Abstract: Caching metadata that identify hot blocks at a per local cache level are tracked. Tracked caching metadata are maintained so as to be persistent and shared across nodes of the cluster. Local caches are pre-warmed by using maintained caching metadata, responsive to detecting specific node level events. Such events can result in hot blocks being absent from a local cache, such as a failover between nodes or an unexpected failure local to a specific node. Another event example is the access of shared storage content, such as opening a file or mounting a file system by a specific node, in response to which the associated local cache can be pre-warmed using the tracked caching metadata for the specific file, or for each file of the file system. To pre-warm a local cache, hot blocks of stored content identified by corresponding caching metadata are loaded into the local cache.
Type: Grant
Filed: April 5, 2013
Date of Patent: June 28, 2016
Assignee: Veritas Technologies LLC
Inventors: Mithlesh Thukral, Mukesh Bafna, Shirish Vijayvargiya, Sanjay Jain, Sushil Patil, Sanjay Kumar, Anindya Banerjee
-
Patent number: 9367251
Abstract: A method and apparatus of a device that includes a shared memory hash table that notifies one or more readers of changes to the shared memory hash table is described. In an exemplary embodiment, a device modifies a value in the shared memory hash table, where the value has a corresponding key. The device further stores a notification in a notification queue that indicates the value has changed. In addition, the device invalidates a previous entry in the notification queue that indicates the value has been modified. The device signals to the reader that a notification is ready to be processed.
Type: Grant
Filed: May 5, 2014
Date of Patent: June 14, 2016
Assignee: Arista Networks, Inc.
Inventors: Hugh W. Holbrook, Duncan Stuart Ritchie, Sebastian Sapa, Simon Francis Capper
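The notification-queue-with-invalidation scheme above can be sketched in a few lines. This is a single-process illustration of the queue discipline only — the patented system works over shared memory between writer and reader processes — and the class and method names are invented.

```python
class NotifyingTable:
    # Hash table that queues change notifications for readers, invalidating
    # any earlier pending notification for the same key so a reader sees at
    # most one live notification per modified value.
    def __init__(self):
        self.table = {}
        self.queue = []       # [key, valid_flag] entries, oldest first
        self.pending = {}     # key -> index of its live queue entry

    def put(self, key, value):
        self.table[key] = value
        old = self.pending.get(key)
        if old is not None:
            self.queue[old][1] = False   # invalidate the stale notification
        self.queue.append([key, True])
        self.pending[key] = len(self.queue) - 1

    def drain(self):
        # Reader side: process only still-valid notifications, in order.
        keys = [k for k, valid in self.queue if valid]
        self.queue.clear()
        self.pending.clear()
        return keys
```

Invalidating superseded entries keeps the reader from doing redundant work when the same key is modified repeatedly between drains.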
-
Patent number: 9367463
Abstract: Methods and systems for providing a plurality of applications with concurrent access to data are disclosed. One such method includes identifying attributes of an expected data set to be accessed concurrently by the applications, initializing a shared cache with a column data store configured to store the expected data set in columns and creating a memory map for accessing a physical memory location in the shared cache. The method may also include mapping the applications' data access requests to the shared cache with the memory map. Only one instance of the expected data set is stored in memory, so each application is not required to create additional instances of the expected data set in the application's memory address space. Therefore, larger expected data sets may be entirely stored in memory without limiting the number of applications running concurrently.
Type: Grant
Filed: March 14, 2013
Date of Patent: June 14, 2016
Assignee: Palantir Technologies, Inc.
Inventors: Punya Biswal, Beyang Liu, Eugene Marinelli, Nima Ghamsari
-
Patent number: 9367476
Abstract: The present invention discloses a memory management apparatus, method, and system. An OS-based memory management apparatus associated with main memory includes a memory allocation controller configured to control a first memory region within the main memory such that the first memory region is used as a buffer cache depending on whether an input/output device is active or not in order to use the first memory region, allowing memory reservation for the input/output device, in the OS. The memory allocation controller controls the first memory region such that the first memory region is used as an eviction-based cache.
Type: Grant
Filed: October 7, 2013
Date of Patent: June 14, 2016
Assignee: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
Inventors: Joonwon Lee, Jinkyu Jeong, Hwanju Kim, Jaeho Hwang
-
Patent number: 9336110
Abstract: Methods, systems, and computer program products for identifying performance limiting internode data sharing on Non-Uniform Memory Access (NUMA) platforms are provided. A computer-implemented method may include receiving event records collected by a performance monitoring unit (PMU) during event tracing, associating the event records with corresponding operating system information observed during the event tracing, analyzing the event records to identify shared cache line utilization, and generating a shared cache line utilization report in view of the analyzing.
Type: Grant
Filed: January 29, 2014
Date of Patent: May 10, 2016
Assignee: Red Hat, Inc.
Inventors: Richard G. Fowles, Joseph P. Mario, Donald C. Zickus, II
-
Patent number: 9330010
Abstract: Systems and methods for data storage and caching in a system including virtual machines is disclosed. In an embodiment, a machine implemented method includes mapping a plurality of virtual hard drives to a logical unit number (LUN) of a storage system, each virtual hard drive including a logical block address (LBA) range of the storage system LUN; storing the mapping in a caching module data structure; determining that one of the virtual hard drives should be cached; and updating the caching module data structure to indicate that the LBA range associated with the one of the virtual hard drives should be cached.
Type: Grant
Filed: August 27, 2014
Date of Patent: May 3, 2016
Assignee: QLOGIC, Corporation
Inventor: Bhavik Shah
-
Patent number: 9311209
Abstract: Associating processor and processor core energy consumption with a task such as a virtual machine is disclosed. Various events cause a trace record to be written to a trace buffer for a processor. An identifier associated with a task using a processor core of the processor is read. In addition, one or more values associated with an energy consumption of the processor core are read. In response to the event, the one or more values associated with the energy consumption of the processor core and the identifier are written to the trace buffer memory.
Type: Grant
Filed: November 27, 2012
Date of Patent: April 12, 2016
Assignee: International Business Machines Corporation
Inventors: Bishop Brock, Tilman Gloekler, Charles R. Lefurgy, Karthick Rajamani, Gregory S. Still, Malcolm S. Allen-Ware
-
Patent number: 9304886
Abstract: Associating processor and processor core energy consumption with a task such as a virtual machine is disclosed. Various events cause a trace record to be written to a trace buffer for a processor. An identifier associated with a task using a processor core of the processor is read. In addition, one or more values associated with an energy consumption of the processor core are read. In response to the event, the one or more values associated with the energy consumption of the processor core and the identifier are written to the trace buffer memory.
Type: Grant
Filed: February 21, 2013
Date of Patent: April 5, 2016
Assignee: International Business Machines Corporation
Inventors: Malcolm S. Allen-Ware, Bishop Brock, Tilman Gloekler, Charles R. Lefurgy, Karthick Rajamani, Gregory S. Still
-
Patent number: 9292446
Abstract: A profiler may identify potentially-independent remote data accesses in a program. A remote data access is independent if the value returned from said remote data access is not computed from another value returned from another remote data access appearing logically earlier in the program. A program rewriter may generate a program-specific prefetcher that preserves the behavior of the program, based on profiling information including the potentially-independent remote data accesses identified by the profiler. An execution engine may execute the prefetcher and the program concurrently. The execution engine may automatically decide which of said potentially-independent remote data accesses should be executed in parallel speculatively. A shared memory shared by the program and the prefetcher stores returned data from a data source as a result of issuing the remote data accesses.
Type: Grant
Filed: October 4, 2012
Date of Patent: March 22, 2016
Assignee: International Business Machines Corporation
Inventors: Arun Raman, Martin Vechev, Mark N. Wegman, Eran Yahav, Greta Yorsh
-
Patent number: 9256536
Abstract: A method and apparatus for providing shared caches. A cache memory system may be operated in a first mode or a second mode. When the cache memory system is operated in the first mode, a first cache and a second cache of the cache memory system may be operated independently. When the cache memory system is operated in the second mode, the first cache and the second cache may be shared. In the second mode, at least one bit may overlap tag bits and set index bits among bits of a memory address.
Type: Grant
Filed: April 30, 2013
Date of Patent: February 9, 2016
Assignees: Samsung Electronics Co., Ltd., INDUSTRY-ACADEMIA COOPERATION GROUP OF SEJONG UNIVERSITY
Inventors: Jeong Ae Park, Sang Oak Woo, Seok Yoon Jung, Young sik Kim, Woo Chan Park
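One way to read the overlapping-bit remark: in the shared mode the pooled cache has twice as many sets, so the set index needs one extra bit, and that bit is also left in the tag — it belongs to both fields. The sketch below illustrates that address decomposition; it is an assumption about what the abstract means, not the patented design, and all parameter names are invented.

```python
def split_address(addr, offset_bits, index_bits, shared_mode):
    # Decompose an address into (tag, set index, block offset).
    # Independent mode: tag and index are disjoint bit fields.
    # Shared mode: the index grows by one bit that is also kept
    # in the tag, so that one bit overlaps both fields.
    offset = addr & ((1 << offset_bits) - 1)
    idx_bits = index_bits + (1 if shared_mode else 0)
    index = (addr >> offset_bits) & ((1 << idx_bits) - 1)
    tag = addr >> (offset_bits + index_bits)   # unchanged: overlap bit stays
    return tag, index, offset
```

Keeping the overlap bit in the tag means hit checks work identically in both modes, which is what makes switching modes cheap.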
-
Patent number: 9253235
Abstract: Media content that meets pre-fetching criteria may be distributed to the device, in a non-requested instance, and stored in the pre-fetched segments database. Rather than relying on caching methods for the first-time access of a media object, at least a portion of the media object may be pre-stored, prior to any requests for access, on the client's device. Thus, when a user attempts to access a particular media stream from the network, if the client's device already has a segment of the desired media object stored, the stored segment can be accessed directly from the client's device. To further use bandwidth efficiently, the distribution of the segment of a media object to be stored on a user's local machine may be done out of band or based on a balance of network resources.
Type: Grant
Filed: November 19, 2014
Date of Patent: February 2, 2016
Assignee: AT&T Mobility II LLC
Inventors: John Ervin Lewis, Justin McNamara, Fulvio Arturo Cenciarelli, Jeffrey Mikan
-
Patent number: 9244724
Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
Type: Grant
Filed: August 15, 2013
Date of Patent: January 26, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
-
Patent number: 9244725
Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
Type: Grant
Filed: September 26, 2013
Date of Patent: January 26, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
-
Patent number: 9235550
Abstract: A multi-core processor providing heterogeneous processor cores and a shared cache is presented.
Type: Grant
Filed: June 30, 2014
Date of Patent: January 12, 2016
Assignee: Intel Corporation
Inventors: Frank T. Hady, Mason B. Cabot, John Beck, Mark B. Rosenbluth
-
Patent number: 9223780
Abstract: The described implementations relate to processing of electronic data. One implementation is manifested as a system that can include a cache module and at least one processing device configured to execute the cache module. The cache module can be configured to store data items in slots of a cache structure, receive a request for an individual data item that maps to an individual slot of the cache structure, and, when the individual slot of the cache structure is not available, return without further processing the request. For example, the request can be received from a calling application or thread that can proceed without blocking irrespective of whether the request is fulfilled by the cache module.
Type: Grant
Filed: December 19, 2012
Date of Patent: December 29, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony Aue, Arul A. Menezes
-
Patent number: 9195464
Abstract: Described embodiments provide a method of controlling processing flow in a network processor having one or more processing modules. A given one of the processing modules loads a script into a compute engine. The script includes instructions for the compute engine. The given one of the processing modules loads a register file into the compute engine. The register file includes operands for the instructions of the loaded script. A tracking vector of the compute engine is initialized to a default value, and the compute engine executes the instructions of the loaded script based on the operands of the loaded register file. The compute engine updates corresponding portions of the register file with updated data corresponding to the executed script. The tracking vector tracks the updated portions of the register file. The compute engine provides the tracking vector and the updated register file to the given one of the processing modules.
Type: Grant
Filed: December 9, 2011
Date of Patent: November 24, 2015
Assignee: Intel Corporation
Inventors: David Sonnier, Chris Randall Stone, Charles Edward Peet, Jr.
-
Patent number: 9128857
Abstract: Techniques are disclosed related to flushing one or more data caches. In one embodiment an apparatus includes a processing element, a first cache associated with the processing element, and a circuit configured to copy modified data from the first cache to a second cache in response to determining an activity level of the processing element. In this embodiment, the apparatus is configured to alter a power state of the first cache after the circuit copies the modified data. The first cache may be at a lower level in a memory hierarchy relative to the second cache. In one embodiment, the circuit is also configured to copy data from the second cache to a third cache or a memory after a particular time interval. In some embodiments, the circuit is configured to copy data while one or more pipeline elements of the apparatus are in a low-power state.
Type: Grant
Filed: January 4, 2013
Date of Patent: September 8, 2015
Assignee: Apple Inc.
Inventors: Brian P. Lilly, Gerard R. Williams, III
-
Patent number: 9098414Abstract: A multi-core processor system includes shared memory shared by cores of a multi-core processor; first cache memories respectively for each of the cores; a second cache memory between the shared memory and the first cache memories, and storing shared data shared by the cores and referred to by at least threads executed by the multi-core processor; a reading unit that reads a value of a given variable from the shared memory; a determining unit that based on a read request for the given variable, determines whether the given variable is shared data or non-shared data that is referred to by only one thread; and a transferring unit that, when the given variable is determined as non-shared data, transfers without using the second cache memory, the value of the given variable to a first cache memory of a core that is a request origin of the read request.Type: GrantFiled: December 19, 2012Date of Patent: August 4, 2015Assignee: FUJITSU LIMITEDInventors: Takahisa Suzuki, Koichiro Yamashita, Hiromasa Yamauchi, Koji Kurihara
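An illustrative sketch of the routing decision (function and structure names are assumptions): reads of non-shared variables bypass the second cache memory and go straight from shared memory into the requesting core's first cache, keeping the second cache reserved for shared data.

```python
def read_var(var, shared_memory, shared_vars, l2, l1_per_core, core):
    value = shared_memory[var]
    if var in shared_vars:           # shared data: also cache in the common L2
        l2[var] = value
    l1_per_core[core][var] = value   # always fill the requesting core's L1
    return value

mem = {"x": 1, "y": 2}
l2, l1 = {}, {0: {}, 1: {}}
read_var("x", mem, {"x"}, l2, l1, 0)   # shared: lands in L2 and L1
read_var("y", mem, set(), l2, l1, 1)   # non-shared: transferred without using L2
```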
-
Patent number: 9092341Abstract: A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.Type: GrantFiled: July 10, 2012Date of Patent: July 28, 2015Assignee: International Business Machines CorporationInventors: Harold W. Cain, III, Vijayalakshmi Srinivasan, Jason Zebchuk
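A rough sketch of the region idea behind RECAP (block and region sizes are assumptions): cache blocks are grouped into coarse regions, touched blocks are recorded per region, and the per-region bitmaps double as a simple spatial-locality-exploiting compression of the prefetch set for the next time the virtual machine runs.

```python
BLOCK = 64               # bytes per cache block (assumed)
BLOCKS_PER_REGION = 16   # coarse-grain region size (assumed)

def record_accesses(addresses):
    """Return {region_base: bitmap of touched blocks within the region}."""
    regions = {}
    region_bytes = BLOCK * BLOCKS_PER_REGION
    for addr in addresses:
        base = addr - addr % region_bytes
        bit = (addr % region_bytes) // BLOCK
        regions[base] = regions.get(base, 0) | (1 << bit)
    return regions

def prefetch_list(regions):
    """Expand the compressed bitmaps back into block addresses to prefetch."""
    out = []
    for base, bitmap in sorted(regions.items()):
        for bit in range(BLOCKS_PER_REGION):
            if bitmap & (1 << bit):
                out.append(base + bit * BLOCK)
    return out

r = record_accesses([0, 64, 2048])
```

Two accesses in the same region collapse into one entry, which is where the bandwidth saving comes from.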
-
Patent number: 9081607Abstract: A method for executing a transaction in a data processing system includes initiating the transaction by a transactional-memory system that is part of a memory component of the data processing system. The transaction includes instructions for comparing multiple parameters, and the transaction is aborted by the transactional-memory system based upon a comparison of the multiple parameters.Type: GrantFiled: October 24, 2012Date of Patent: July 14, 2015Assignee: International Business Machines CorporationInventors: Robert J Blainey, Harold W Cain, III, Bradly G Frey, Hung Q Le, Cathy May
-
Patent number: 9043555Abstract: Provided is a method and system for reducing duplicate buffers in buffer cache associated with a storage device. Reducing buffer duplication in a buffer cache includes accessing a file reference pointer associated with a file in a deduplicated filesystem when attempting to load a requested data block from the file into the buffer cache. To determine if the requested data block is already in the buffer cache, aspects of the invention compare a fingerprint that identifies the requested data block against one or more fingerprints identifying a corresponding one or more sharable data blocks in the buffer cache. A match between the fingerprint of the requested data block and the fingerprint from a sharable data block in the buffer cache indicates that the requested data block is already loaded in the buffer cache. The sharable data block is used instead, thereby reducing buffer duplication in the buffer cache.Type: GrantFiled: February 25, 2009Date of Patent: May 26, 2015Assignee: NetApp, Inc.Inventors: Rahul Khona, Subramaniam Periyagaram, Sandeep Yadav, Dnyaneshwar Pawar
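A minimal sketch of fingerprint-based buffer sharing (using SHA-256 as the fingerprint is an assumption for illustration): before loading a block, its fingerprint is compared against the fingerprints of sharable blocks already resident; on a match, the resident buffer is reused instead of creating a duplicate.

```python
import hashlib

class BufferCache:
    def __init__(self):
        self.by_fingerprint = {}     # fingerprint -> shared resident buffer

    def load(self, block_bytes):
        """Return (buffer, was_shared)."""
        fp = hashlib.sha256(block_bytes).hexdigest()
        if fp in self.by_fingerprint:        # already loaded: share it
            return self.by_fingerprint[fp], True
        self.by_fingerprint[fp] = block_bytes
        return block_bytes, False

cache = BufferCache()
_, dup1 = cache.load(b"same data")
_, dup2 = cache.load(b"same data")   # second load hits the shared buffer
```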
-
Publication number: 20150143050Abstract: The present application is directed to a control circuit that provides a directory configured to maintain a plurality of entries, wherein each entry can indicate sharing of resources, such as cache lines, by a plurality of agents/hosts. The control circuit of the present invention can further consolidate one or more entries having a first format into a single entry having a second format when resources corresponding to the one or more entries are shared by the agents. The first format can include an address and a pointer representing one of the agents, and the second format can include a sharing vector indicative of more than one of the agents. In another aspect, the second format can utilize, incorporate, and/or represent multiple entries that may be indicative of one or more resources based on a position in the directory.Type: ApplicationFiled: November 20, 2013Publication date: May 21, 2015Applicant: Netspeed SystemsInventors: Joe ROWLANDS, Sailesh KUMAR
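A sketch of the consolidation idea (the concrete entry formats are assumptions): per-sharer pointer-format records for the same cache line are merged into a single second-format entry holding a sharing vector with one bit per agent.

```python
def consolidate(entries, num_agents):
    """entries: list of (address, agent_id) first-format pointer records.
    Returns {address: sharing_vector} in the consolidated second format."""
    directory = {}
    for address, agent in entries:
        assert agent < num_agents            # sharing vector has one bit per agent
        directory[address] = directory.get(address, 0) | (1 << agent)
    return directory

d = consolidate([(0x100, 0), (0x100, 2), (0x200, 1)], num_agents=4)
```

Two pointer entries for `0x100` collapse into one entry whose vector has bits 0 and 2 set.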
-
Publication number: 20150143051Abstract: In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed.Type: ApplicationFiled: January 30, 2015Publication date: May 21, 2015Inventors: Yen-Cheng Liu, Robert G. Blankenship, Geeyarpuram N. Santhanakrishnan, Ganapati N. Srinivasa, Kenneth C. Creta, Sridhar Muthrasanallur, Bahaa Fahim
-
Patent number: 9037802Abstract: According to one embodiment, a method for managing cache space in a virtual tape controller includes receiving data from at least one host using the virtual tape controller; storing data received from the at least one host to a cache using the virtual tape controller; sending a first alert to the at least one host when a cache free space size is less than a first threshold and entering into a warning state using the virtual tape controller; sending a second alert to the at least one host when the cache free space size is less than a second threshold and entering into a critical state using the virtual tape controller; and allowing previously mounted virtual drives to continue normal writing activity when in the critical state.Type: GrantFiled: May 30, 2012Date of Patent: May 19, 2015Assignee: International Business Machines CorporationInventors: Ralph T. Beeston, Erika M. Dawson, Duke A. Lee, David Luciani, Joel K. Lyman
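A simple sketch of the two-threshold state machine described above (the threshold values are assumptions):

```python
def cache_state(free_space, warn_threshold=20, critical_threshold=5):
    """Classify the virtual tape controller's cache state by free space."""
    if free_space < critical_threshold:
        return "critical"   # second alert; only previously mounted drives keep writing
    if free_space < warn_threshold:
        return "warning"    # first alert sent to the hosts
    return "normal"

states = [cache_state(50), cache_state(10), cache_state(2)]
```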
-
Patent number: 9032156Abstract: For each access request received at a shared cache of the data processing device, a memory access pattern (MAP) monitor predicts which of the memory banks, and corresponding row buffers, would be accessed by the access request if the requesting thread were the only thread executing at the data processing device. By recording predicted accesses over time for a number of access requests, the MAP monitor develops a pattern of predicted memory accesses by executing threads. The pattern can be employed to assign resources at the shared cache, thereby managing memory more efficiently.Type: GrantFiled: July 6, 2011Date of Patent: May 12, 2015Assignee: Advanced Micro Devices, Inc.Inventors: Jaewoong Chung, Shekhar Srikantaiah, Lisa Hsu
-
Patent number: 9032155Abstract: A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form a cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member.Type: GrantFiled: October 29, 2013Date of Patent: May 12, 2015Assignee: Parallel Networks, LLCInventors: Keith A. Lowery, Bryan S. Chin, David A. Consolver, Gregg A. DeMasters
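A sketch of the master hand-off (message shape and selection policy are assumptions): the master removes itself from the peer list, selects a peer, and sends it a nominate-master message together with the updated list.

```python
def nominate_new_master(master, peers):
    """peers: ordered peer list including the master. Returns (message, updated list)."""
    updated = [p for p in peers if p != master]  # master removes itself from the list
    new_master = updated[0]                      # selection policy: assumed first peer
    message = {"type": "nominate_master", "to": new_master, "peers": updated}
    return message, updated

msg, peers = nominate_new_master("A", ["A", "B", "C"])
```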
-
Patent number: 9032158Abstract: A method of identifying a cache line of a cache memory (180) for replacement is disclosed. Each cache line in the cache memory has a stored sequence number and a stored transaction data stream identifying label. A request (e.g., 400) associated with a label identifying a transaction data stream is received. The label corresponds to the stored transaction data stream identifying label of the cache line. The stored sequence number of the cache line is compared with a response sequence number. The response sequence number is associated with the stored transaction data stream identifying label of the cache line. The cache line is identified for replacement based on the comparison.Type: GrantFiled: April 26, 2011Date of Patent: May 12, 2015Assignee: Canon Kabushiki KaishaInventor: David Charles Ross
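A sketch of sequence-number-based victim selection (the replacement rule is an assumed interpretation): a line whose stored sequence number is at or behind the response sequence number for its stream has been consumed and can be identified for replacement.

```python
def find_replaceable(lines, label, response_seq):
    """lines: list of (stored_label, stored_seq). Returns indexes replaceable
    for the stream identified by label."""
    return [i for i, (stored_label, stored_seq) in enumerate(lines)
            if stored_label == label and stored_seq <= response_seq]

# Stream "s1" has consumed responses up to sequence number 5.
idx = find_replaceable([("s1", 3), ("s2", 7), ("s1", 9)], "s1", 5)
```

Only the first line qualifies: it matches the label and its sequence number is behind the response.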
-
Publication number: 20150120998Abstract: In an embodiment, a first portion of a cache memory is associated with a first core. This first cache memory portion is of a distributed cache memory, and may be dynamically controlled to be one of a private cache memory for the first core and a shared cache memory shared by a plurality of cores (including the first core) according to an addressing mode, which itself is dynamically controllable. Other embodiments are described and claimed.Type: ApplicationFiled: October 31, 2013Publication date: April 30, 2015Inventors: Kebing Wang, Zhaojuan Bian, Wei Zhou, Zhihong Wang
-
Patent number: 9021207Abstract: In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached.Type: GrantFiled: December 20, 2012Date of Patent: April 28, 2015Assignee: Advanced Micro Devices, Inc.Inventors: John Kalamatianos, Edward J. McLellan, Paul Keltcher, Srilatha Manne, Richard E. Klass, James M. O'Connor
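A sketch of the resize policy (the sizing unit, growth factor, and threshold are assumptions): after a low-power exit the cache starts at a minimum number of ways and grows whenever the measured eviction rate is high, up to its maximum size.

```python
def next_cache_size(current_ways, eviction_rate, min_ways=2, max_ways=16,
                    rate_threshold=0.25):
    """Return the cache size (in ways) for the next interval."""
    if eviction_rate > rate_threshold:          # cache is thrashing: grow it
        return min(current_ways * 2, max_ways)
    return max(current_ways, min_ways)          # otherwise hold at least the minimum

size = next_cache_size(2, 0.4)   # high eviction rate just after wake-up
```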
-
Patent number: 9021214Abstract: Prior-art storage systems adopting a cluster structure arranged various types of large-capacity memories to enhance access performance, so the system required a dedicated control circuit and it was difficult to realize cost reduction and improved access performance simultaneously. In order to solve these problems, the present invention provides a storage system in which a group of memories is integrated into MPU memories directly coupled to MPUs in respective controller units, wherein each MPU memory is divided into a duplication information area and a non-duplication information area, and attribute information for controlling accesses to these areas is provided.Type: GrantFiled: December 14, 2011Date of Patent: April 28, 2015Assignee: Hitachi, Ltd.Inventors: Yuki Sakashita, Shintaro Kudo, Yusuke Nonaka
-
Patent number: 9015418Abstract: A method and system for self-sizing dynamic cache for virtualized environments is disclosed. The preferred embodiment self-sizes unequal portions of the total amount of cache and allocates them to a plurality of active virtualized machines (VMs) according to VM requirements and administrative standards. As a new VM emerges and requests an amount of cache, the cache controller reclaims currently used cache from the active VMs and reallocates the unequal portions of cache required by each VM. To ensure cache availability, a quick reclamation amount of cache is immediately available to each new VM as it makes the request and begins operation. After reallocation, the newly created VM may rely on a guaranteed minimum quota of cache to ensure performance.Type: GrantFiled: November 20, 2012Date of Patent: April 21, 2015Assignee: LSI CorporationInventor: Luca Bert
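A sketch of the self-sizing allocation (the proportional policy and integer arithmetic are assumptions): cache is split among active VMs in proportion to their stated requirements, no VM drops below a guaranteed minimum quota, and a quick-reclamation reserve is held back for the next VM to arrive.

```python
def allocate(total, demands, min_quota, reserve):
    """demands: {vm: requested_share}. Returns {vm: allocated_cache_units}."""
    pool = total - reserve                        # hold back quick-reclamation cache
    total_demand = sum(demands.values())
    return {vm: max(min_quota, pool * d // total_demand)  # proportional, floored at quota
            for vm, d in demands.items()}

a = allocate(total=100, demands={"vm1": 3, "vm2": 1}, min_quota=10, reserve=20)
```

Here 20 units stay reserved for instant hand-off to a newly created VM, and the remaining 80 are split 3:1.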
-
Patent number: 9015415Abstract: An apparatus is described that includes a plurality of processors, a plurality of cache slices and respective cache agents. Each of the cache agents have a buffer to store requests from the processors. The apparatus also includes a network between the processors and the cache slices to carry traffic of transactions that invoke the processors and/or said cache agents. The apparatus also includes communication resources between the processors and the cache agents reserved to transport one or more warnings from one or more of the cache agents to the processors that the one or more cache agents' respective buffers have reached a storage capacity threshold.Type: GrantFiled: September 24, 2010Date of Patent: April 21, 2015Assignee: Intel CorporationInventors: Ankush Varma, Adrian C. Moga, Liqun Cheng