Shared Cache Patents (Class 711/130)
  • Patent number: 9009410
Abstract: A system and method for locking data in a cache memory. A first processing thread may be operated to run a program requesting data, where at least some of the requested data is loaded from a source memory into a non-empty cache. A second processing thread may be operated independently of the first processing thread to determine whether or not to lock the requested data in the cache. If the second thread determines that the requested data should be locked, the data may be locked in the cache at the same time as it is loaded into the cache.
    Type: Grant
    Filed: August 23, 2011
    Date of Patent: April 14, 2015
    Assignee: Ceva D.S.P. Ltd.
    Inventors: Amos Rohe, Alex Shlezinger
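    A minimal C++ sketch of the lock-on-fill idea above, modeling the cache as a map; the names (Cache, Entry, fill) and the lock-decision policy are illustrative assumptions, not taken from the patent:

        #include <cstdint>
        #include <future>
        #include <string>
        #include <unordered_map>
        #include <utility>

        // Cache entry whose lock flag is applied at fill time, so a line is
        // never first resident in an unlocked, evictable state.
        struct Entry {
            std::string data;
            bool locked = false;   // locked entries are skipped by eviction
        };

        class Cache {
            std::unordered_map<uint64_t, Entry> lines_;
        public:
            void fill(uint64_t addr, std::string data, bool lock) {
                lines_[addr] = Entry{std::move(data), lock};  // load and lock together
            }
        };

        int main() {
            Cache cache;
            const uint64_t addr = 0x1000;
            // A second thread decides, independently of the loading thread,
            // whether the requested data should be locked.
            std::future<bool> decision = std::async(std::launch::async,
                [addr] { return addr % 2 == 0; });            // stand-in policy
            std::string data = "payload loaded from source memory";
            cache.fill(addr, std::move(data), decision.get());
        }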
  • Patent number: 9009409
Abstract: A method to store objects in a memory cache is disclosed. A request is received from an application to store an object in a memory cache associated with the application. The object is stored in a cache region of the memory cache based on an identification that the object has no potential for storage in a shared memory cache, together with a determination that the cache region is associated with a storage policy under which objects stored in the cache region are kept in a local memory cache and are not removed from that cache by a garbage collector.
    Type: Grant
    Filed: July 12, 2011
    Date of Patent: April 14, 2015
    Assignee: SAP SE
    Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio G. Petev
  • Patent number: 9009416
    Abstract: A method, computer program product, and computing system for reclassifying a first assigned cache portion associated with a first machine as a public cache portion associated with the first machine and at least one additional machine after the occurrence of a reclassifying event. The public cache portion includes a plurality of pieces of content received by the first machine. A content identifier for each of the plurality of pieces of content included within the public cache portion is compared with content identifiers for pieces of content included within a portion of a data array associated with the at least one additional machine to generate a list of matching data portions. The list of matching data portions is provided to at least one additional assigned cache portion within the cache system that is associated with the at least one additional machine.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: April 14, 2015
    Assignee: EMC Corporation
    Inventors: Philip Derbeko, Anat Eyal, Roy E. Clark
  • Patent number: 9009385
    Abstract: At least one virtual machine implemented on a given physical machine in an information processing system is able to detect the presence of one or more other virtual machines that are also co-resident on that same physical machine. More particularly, at least one virtual machine is configured to avoid usage of a selected portion of a memory resource of the physical machine for a period of time, and to monitor the selected portion of the memory resource for activity during the period of time. Detection of a sufficient level of such activity indicates that the physical machine is also being shared by at least one other virtual machine. The memory resource of the physical machine may comprise, for example, a cache memory, and the selected portion of the memory resource may comprise one or more randomly selected sets of the cache memory.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: April 14, 2015
    Assignee: EMC Corporation
    Inventors: Ari Juels, Alina M. Oprea, Michael Kendrick Reiter, Yinqian Zhang
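    The monitoring step lends itself to a short sketch. Below is a rough C++ model, assuming the selected portion of the memory resource is represented by a byte buffer and co-residency is inferred from a rise in mean access latency; the stride and threshold are invented for illustration:

        #include <chrono>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Probe the "selected portion" (a region the monitoring VM itself has
        // deliberately left untouched) and report mean access time; elevated
        // latency suggests another tenant has been evicting those lines.
        double probe_ns(const std::vector<uint8_t>& buf, size_t stride) {
            using clock = std::chrono::steady_clock;
            volatile uint8_t sink = 0;
            auto t0 = clock::now();
            for (size_t i = 0; i < buf.size(); i += stride) sink += buf[i];
            auto t1 = clock::now();
            size_t touches = (buf.size() + stride - 1) / stride;
            return std::chrono::duration<double, std::nano>(t1 - t0).count()
                   / static_cast<double>(touches);
        }

        int main() {
            std::vector<uint8_t> region(1 << 20, 1);   // stand-in for cache sets
            double baseline = probe_ns(region, 64);    // warm: lines cached
            // ... refrain from touching `region` for the monitoring window ...
            double after = probe_ns(region, 64);
            bool coresident = after > 2.0 * baseline;  // illustrative threshold
            (void)coresident;
        }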
  • Patent number: 9003130
    Abstract: A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 7, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James O'Connor, Bradford M. Beckmann
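    A compact C++ sketch of the invalidation-tag idea, assuming a single pending-invalidate bit per entry stands in for the invalidation tag; all names are illustrative:

        #include <array>
        #include <cstdint>

        // One cache entry: a line of data, an address tag, and an invalidation
        // flag that is set lazily from the associated invalidation tag.
        struct CacheEntry {
            uint64_t tag = 0;
            bool     invalid_flag = false;
            std::array<uint8_t, 64> line{};
            // Invalidation tag: records that an invalidation command matched
            // this entry's address, without touching it on the critical path.
            bool pending_invalidate = false;
        };

        // Record an invalidation command cheaply at receive time...
        void note_invalidate(CacheEntry& e, uint64_t addr) {
            if (e.tag == addr) e.pending_invalidate = true;
        }

        // ...and fold it into the real invalidation flag later.
        void apply_invalidation_tags(CacheEntry& e) {
            if (e.pending_invalidate) {
                e.invalid_flag = true;
                e.pending_invalidate = false;
            }
        }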
  • Publication number: 20150095580
    Abstract: A processor includes a cache-side address monitor unit corresponding to a first cache portion of a distributed cache that has a total number of cache-side address monitor storage locations less than a total number of logical processors of the processor. Each cache-side address monitor storage location is to store an address to be monitored. A core-side address monitor unit corresponds to a first core and has a same number of core-side address monitor storage locations as a number of logical processors of the first core. Each core-side address monitor storage location is to store an address, and a monitor state for a different corresponding logical processor of the first core. A cache-side address monitor storage overflow unit corresponds to the first cache portion, and is to enforce an address monitor storage overflow policy when no unused cache-side address monitor storage location is available to store an address to be monitored.
    Type: Application
    Filed: September 27, 2013
    Publication date: April 2, 2015
    Inventors: Yen-Cheng Liu, Bahaa Fahim, Erik G. Hallnor, Jeffrey D. Chamberlain, Stephen R. Van Doren, Antonio Juan
  • Publication number: 20150095581
Abstract: A cache manager application provides a data caching policy in a multiple-tenant enterprise resource planning (ERP) system, managing the caches of multiple tenants in a single process. The caching policy optimizes system-wide performance rather than each tenant's cache in isolation; as a result, tenants with high cache consumption receive a larger portion of the caching resources.
    Type: Application
    Filed: February 7, 2014
    Publication date: April 2, 2015
    Applicant: Microsoft Corporation
    Inventors: John Stairs, Esben Nyhuus Kristoffersen, Thomas Hejlsberg
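    One plausible reading of the policy, sketched in C++: split a shared budget across tenants in proportion to measured consumption. The proportional rule is an assumption; the abstract says only that heavy consumers receive a larger portion:

        #include <cstddef>
        #include <map>
        #include <string>

        // Divide a shared cache budget across tenants in proportion to each
        // tenant's measured cache consumption, rather than giving every
        // tenant an equal local share. Integer division leaves a small
        // remainder of the budget unassigned; ignored here for brevity.
        std::map<std::string, size_t>
        assign_quotas(const std::map<std::string, size_t>& demand_bytes,
                      size_t total_budget) {
            size_t total_demand = 0;
            for (auto& [tenant, d] : demand_bytes) total_demand += d;
            std::map<std::string, size_t> quota;
            for (auto& [tenant, d] : demand_bytes)
                quota[tenant] = total_demand ? total_budget * d / total_demand : 0;
            return quota;  // heavy consumers receive the larger portions
        }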
  • Patent number: 8996819
Abstract: A cache includes a cache pipeline, a request receiver configured to receive off-chip coherency requests from an off-chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter, coupled between the plurality of state machines and the cache pipeline, that is configured to give priority to off-chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit.
    Type: Grant
    Filed: November 7, 2012
    Date of Patent: March 31, 2015
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
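    The counter mechanism can be sketched directly in C++; the atomic counter and the try_send/on_response interface are illustrative stand-ins for the hardware:

        #include <atomic>

        // Pipeline-side throttle: coherency requests outstanding to the
        // lower level cache are counted, and the pipeline stops issuing
        // new ones once the count reaches a fixed limit.
        class CoherencyThrottle {
            std::atomic<int> outstanding_{0};
            const int limit_;
        public:
            explicit CoherencyThrottle(int limit) : limit_(limit) {}
            bool try_send() {                  // called by the cache pipeline
                if (outstanding_.load() >= limit_) return false;  // halted
                outstanding_.fetch_add(1);
                return true;
            }
            void on_response() { outstanding_.fetch_sub(1); }  // lower level replied
        };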
  • Patent number: 8996805
    Abstract: Shared cache modules, systems, and methods are provided herein. The shared cache module is useable with at least one initiator on a serial attached small computer system interface system. The shared cache module includes a memory device and a memory interface. The memory device assigns each of the at least one initiator to a portion of a cache memory on the memory device. The memory interface indexes the assignment and communicates with the at least one initiator to perform a memory task.
    Type: Grant
    Filed: October 26, 2011
    Date of Patent: March 31, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Joseph David Black, Balaji Natrajan, Michael G. Myrah
  • Publication number: 20150089145
Abstract: A processor comprising multiple processor cores and a bus for exchanging data between the multiple processor cores is disclosed. Each of the multiple processor cores includes: at least one processor register; a cache for storing at least one cache line of memory; a load store unit for executing a memory command to exchange data between the cache and the at least one processor register; an atomic memory operation unit for executing an atomic memory operation on the at least one cache line of memory; and a high throughput register for storing a status indicating a high throughput or a normal status. The load store unit is operable to transfer the atomic memory operation over the bus to the atomic memory operation unit of a designated processor core if the stored status is the high throughput status.
    Type: Application
    Filed: September 3, 2014
    Publication date: March 26, 2015
    Inventor: Burkhard Steinmacher-Burow
  • Publication number: 20150081977
    Abstract: In one embodiment, a method includes receiving a read request from a first caching agent, determining whether a directory entry associated with the memory location indicates that the information is not present in a remote caching agent, and if so, transmitting the information from the memory location to the first caching agent before snoop processing with respect to the read request is completed. Other embodiments are described and claimed.
    Type: Application
    Filed: November 21, 2014
    Publication date: March 19, 2015
Inventors: Sailesh Kottapalli, Henk G. Neefs, Rahul Pal, Manoj K. Arora, Dheemanth Nagaraj
  • Publication number: 20150081976
    Abstract: A multi-core processor providing heterogeneous processor cores and a shared cache is presented.
    Type: Application
    Filed: June 30, 2014
    Publication date: March 19, 2015
    Inventors: Frank T. Hady, Mason B. Cabot, John Beck, Mark B. Rosenbluth
  • Patent number: 8984229
Abstract: A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form a cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to the peer member selected by the master member to become the new master member.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: March 17, 2015
    Assignee: Parallel Networks, LLC
    Inventors: Keith A. Lowery, Bryan S. Chin, David A. Consolver, Gregg A. DeMasters
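    A small C++ sketch of the hand-off described above; the message layout and the choice of successor are assumptions (the abstract does not specify a selection policy):

        #include <algorithm>
        #include <string>
        #include <vector>

        // The nominate-master message carries the successor and the updated
        // peer list with the outgoing master removed.
        struct NominateMaster {
            std::string new_master;
            std::vector<std::string> peers;
        };

        // Outgoing master's hand-off; assumes at least one peer remains.
        NominateMaster retire(const std::string& self,
                              std::vector<std::string> peers) {
            peers.erase(std::remove(peers.begin(), peers.end(), self),
                        peers.end());
            // Selection policy is unspecified in the abstract; pick the first.
            return NominateMaster{peers.front(), peers};
        }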
  • Patent number: 8984228
    Abstract: In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: March 17, 2015
    Assignee: Intel Corporation
    Inventors: Yen-Cheng Liu, Robert G. Blankenship, Geeyarpuram N. Santhanakrishnan, Ganapati N. Srinivasa, Kenneth C. Creta, Sridhar Muthrasanallur, Bahaa Fahim
  • Patent number: 8977818
Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: March 10, 2015
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Publication number: 20150067250
    Abstract: A microprocessor includes a plurality of processing cores, a resource shared by the plurality of processing cores, and a hardware semaphore readable and writeable by each of the plurality of processing cores within a non-architectural address space. Each of the plurality of processing cores is configured to write to the hardware semaphore to request ownership of the shared resource and to read from the hardware semaphore to determine whether or not the ownership was obtained. Each of the plurality of processing cores is configured to write to the hardware semaphore to relinquish ownership of the shared resource.
    Type: Application
    Filed: May 19, 2014
    Publication date: March 5, 2015
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: G. Glenn Henry, Terry Parks
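    A C++ model of the request/read/relinquish protocol, with a shared atomic standing in for the non-architectural semaphore register; the kFree encoding is an assumption:

        #include <atomic>
        #include <cstdint>

        // A core "writes" its id to request ownership and "reads" back to
        // learn whether it won; writing again relinquishes ownership.
        constexpr uint32_t kFree = 0xFFFFFFFF;
        std::atomic<uint32_t> semaphore{kFree};

        bool request_ownership(uint32_t core_id) {
            uint32_t expected = kFree;
            semaphore.compare_exchange_strong(expected, core_id);  // the write
            return semaphore.load() == core_id;                    // the read
        }

        void relinquish(uint32_t core_id) {
            uint32_t expected = core_id;
            semaphore.compare_exchange_strong(expected, kFree);    // release
        }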
  • Publication number: 20150067263
Abstract: A microprocessor includes a plurality of processing cores, a service processing unit and a memory accessible by both the service processing unit and the plurality of processing cores. At least one of the plurality of processing cores is configured to write a patch to the memory. The patch comprises one or more instructions to be fetched from the memory and executed by the service processing unit after being written to the memory by the at least one of the plurality of processing cores.
    Type: Application
    Filed: May 19, 2014
    Publication date: March 5, 2015
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: G. Glenn Henry, Stephan Gaskins
  • Publication number: 20150067214
Abstract: A microprocessor includes a plurality of cores, a shared cache memory, and a control unit that individually puts each core to sleep by stopping its clock signal. Each core executes a sleep instruction and responsively makes a respective request of the control unit to put the core to sleep, which the control unit responsively does, and detects when all the cores have made the respective request and responsively wakes up only the last requesting core. The last core writes back and invalidates the shared cache memory and indicates it has been invalidated and makes a request to the control unit to put the last core back to sleep. The control unit puts the last core back to sleep and continuously keeps the other cores asleep while the last core writes back and invalidates the shared cache memory, indicates the shared cache memory was invalidated, and is put back to sleep.
    Type: Application
    Filed: May 19, 2014
    Publication date: March 5, 2015
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: G. Glenn Henry, Terry Parks, Brent Bean, Stephan Gaskins
  • Patent number: 8972666
    Abstract: A computer program product for mitigating conflicts for shared cache lines between an owning core currently owning a cache line and a requestor core. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes determining whether the owning core is operating in a transactional or non-transactional mode and setting a hardware-based reject threshold at a first or second value with the owning core determined to be operating in the transactional or non-transactional mode, respectively. The method further includes taking first or second actions to encourage cache line sharing between the owning core and the requestor core in response to a number of rejections of requests by the requestor core reaching the reject threshold set at the first or second value, respectively.
    Type: Grant
    Filed: December 3, 2013
    Date of Patent: March 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Chung-Lung K. Shum
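    The threshold logic reduces to a few lines of C++; both threshold values and the helper name should_force_sharing are illustrative:

        // Reject-threshold selection from the abstract: a request that keeps
        // bouncing off an owning core is escalated once the rejection count
        // reaches a threshold chosen by the owner's execution mode.
        struct OwnerState {
            bool transactional;
            int  rejections = 0;
        };

        constexpr int kTxThreshold    = 16;  // illustrative first value
        constexpr int kNonTxThreshold = 4;   // illustrative second value

        bool should_force_sharing(OwnerState& o) {
            int threshold = o.transactional ? kTxThreshold : kNonTxThreshold;
            return ++o.rejections >= threshold;  // then take the mode's action
        }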
  • Publication number: 20150058667
    Abstract: Embodiments of the invention relate to a method and apparatus for a zero voltage processor sleep state. A processor may include a dedicated cache memory. A voltage regulator may be coupled to the processor to provide an operating voltage to the processor. During a transition to a zero voltage power management state for the processor, the operational voltage applied to the processor by the voltage regulator may be reduced to approximately zero and the state variables associated with the processor may be saved to the dedicated cache memory.
    Type: Application
    Filed: September 25, 2014
    Publication date: February 26, 2015
    Inventors: Sanjeev Jahagirdar, Varghese George, John B. Conrad, Robert Milstrey, Stephen A. Fischer, Alon Naveh, Shai Rotem
  • Patent number: 8966183
Abstract: A cache management system employs a replacement policy in a manner that manages concurrent accesses to cache. The cache management system comprises a cache, a replacement policy storage for storing replacement statuses of cache lines of the cache, and an update module. The update module, comprising access filtering and concurrent update handling, determines how updates to the replacement policy storage are handled. In a multi-threaded compute environment, a concurrent access to shared cache causes a selective update to the replacement policy storage.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: February 24, 2015
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Brian C. Grayson, Jyotsna S. Kartha, Kathryn C. Stacer
  • Publication number: 20150052312
    Abstract: A processing unit includes a processor core and a cache memory. Entries in the cache memory are grouped in multiple congruence classes. The cache memory includes tracking logic that tracks a transaction footprint including cache line(s) accessed by transactional memory access request(s) of a memory transaction. The cache memory, responsive to receiving a memory access request that specifies a target cache line having a target address that maps to a congruence class, forms a working set of ways in the congruence class containing cache line(s) within the transaction footprint and updates a replacement order of the cache lines in the congruence class. Based on membership of the at least one cache line in the working set, the update promotes at least one cache line that is not the target cache line to a replacement order position in which the at least one cache line is less likely to be replaced.
    Type: Application
    Filed: September 26, 2013
    Publication date: February 19, 2015
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
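    A simplified C++ sketch of the promotion step, modeling the congruence class as an LRU deque whose front is the next victim; the single-promotion policy is an assumption:

        #include <cstdint>
        #include <deque>

        // Lines in the transaction footprint form the "working set"; on an
        // access, a non-target working-set line is promoted away from the
        // victim slot so the footprint is less likely to be evicted.
        struct Way { uint64_t tag; bool in_footprint; };

        void touch(std::deque<Way>& lru, uint64_t target_tag) {
            for (auto it = lru.begin(); it != lru.end(); ++it) {
                if (it->in_footprint && it->tag != target_tag) {
                    Way w = *it;
                    lru.erase(it);
                    lru.push_back(w);   // toward most-recently-used
                    break;
                }
            }
        }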
  • Publication number: 20150052311
    Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
    Type: Application
    Filed: September 26, 2013
    Publication date: February 19, 2015
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8954681
    Abstract: A command processing pipeline is coupled to a shared cache. The command processing pipeline comprises (i) a first command processing stage configured to sequentially receive and process first and second cache commands, and (ii) a second command processing stage coupled to the first command processing stage. The first and the second command processing stages are two consecutive command processing stages of the command processing pipeline. The first and second command processing stages may access different groups of cache resources, and the first and second cache commands may be processed during consecutive clock cycles of a clock signal. Processing of the second cache command may be performed independently of an outcome of processing the first cache command by the first command processing stage. A third command processing stage may write data associated with the first cache command to one of a valid memory and a data memory included in the cache.
    Type: Grant
    Filed: December 7, 2012
    Date of Patent: February 10, 2015
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tarek Rohana, Gil Stoler
  • Patent number: 8954676
Abstract: Disclosed are a cache with a scratch pad memory (SPM) structure and a processor including the same. The cache with a scratch pad memory structure includes: a block memory configured to include at least one block area in which instruction codes read from an external memory are stored; a tag memory configured to store an external memory address corresponding to indexes of the instruction codes stored in the block memory; and a tag controller configured to process a request from a fetch unit for the instruction codes, wherein a part of the block areas is set as an SPM area according to cache setting input from a cache setting unit. According to the present invention, it is possible to reduce the time to read instruction codes from the external memory and realize power saving by operating the cache as the scratch pad memory.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: February 10, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Jin Ho Han
  • Publication number: 20150039834
    Abstract: Sharing local cache from a failover node, including: determining, by a managing compute node, whether a first compute node and a second compute node each have a local cache, where the second compute node is a mirrored copy of the first compute node; responsive to determining that the first compute node and the second compute node each have a local cache, combining, by the managing compute node, local cache on the first compute node and local cache on the second compute node into unified logical cache; receiving, by the managing compute node, a memory access request; and sending, by the managing compute node, the memory access request to an appropriate local cache in the unified logical cache.
    Type: Application
    Filed: August 1, 2013
    Publication date: February 5, 2015
    Applicant: International Business Machines Corporation
Inventors: Gary D. Cudak, Lydia M. Do, Christopher J. Hardee, Adam Roberts
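    A minimal C++ stand-in for the routing step, assuming hash-based ownership of cache lines between the two nodes; the abstract says only that the request goes to "an appropriate local cache":

        #include <cstdint>
        #include <string>

        // After the managing node combines the two nodes' local caches into
        // one unified logical cache, each incoming memory access request is
        // forwarded to the node whose cache owns the address.
        std::string route(uint64_t addr) {
            return (addr >> 6) % 2 == 0 ? "cache@first-node"
                                        : "cache@second-node";
        }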
  • Patent number: 8949539
    Abstract: A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
    Type: Grant
    Filed: February 1, 2010
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Martin Ohmacht
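    A C++ sketch of the reservation-register idea, with one reservation slot per processor unit; clearing of conflicting reservations by other processors' stores is noted but omitted:

        #include <cstdint>
        #include <mutex>
        #include <unordered_map>

        // Shared-cache-side reservation table: load-reserve records an
        // address per processor; store-conditional succeeds only if that
        // processor's reservation still covers the address.
        class ReservationTable {
            std::unordered_map<int, uint64_t> reg_;  // one slot per processor
            std::mutex m_;
        public:
            void load_reserve(int cpu, uint64_t addr) {
                std::lock_guard<std::mutex> g(m_);
                reg_[cpu] = addr;
            }
            bool store_conditional(int cpu, uint64_t addr) {
                std::lock_guard<std::mutex> g(m_);
                auto it = reg_.find(cpu);
                if (it == reg_.end() || it->second != addr) return false;
                // A real implementation also clears other processors'
                // reservations on the same line; omitted for brevity.
                reg_.erase(it);
                return true;  // caller may now perform the store
            }
        };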
  • Patent number: 8943282
Abstract: A method is used in managing snapshots in cache-based storage systems. A request to create a snapshot of a data object is received. A portion of the data object is cached in a global cache. The data object is associated with a mapping object. The mapping object manages access to the portion of the data object. A snapshot of the data object is created. A snapshot mapping object is associated with the snapshot of the data object. The snapshot mapping object includes a link to the mapping object. The snapshot mapping object is a version of the mapping object and shares the portion of the data object cached in the global cache.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: January 27, 2015
    Assignee: EMC Corporation
    Inventors: Philippe Armangau, Jean-Pierre Bono, Sitaram Pawar, Christopher Seibel, Yubing Wang
  • Publication number: 20150019819
Abstract: Embodiments relate to a method and computer program product for prefetching data on a chip. The chip has at least one scout core, multiple parent cores that cooperate to execute various tasks, and a shared cache that is common between the scout core and the multiple parent cores. An aspect of the embodiments includes monitoring the multiple parent cores by the at least one scout core through the shared cache for a shared cache access occurring in a base parent core. The method includes saving a fetch address by the at least one scout core based on the shared cache access occurring. The fetch address indicates a location of a specific line of cache requested by the base parent core.
    Type: Application
    Filed: September 30, 2014
    Publication date: January 15, 2015
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-lung K. Shum
  • Publication number: 20150019820
    Abstract: Embodiments of the invention relate to prefetching data on a chip having at least one scout core, at least one parent core, and a shared cache that is common between the at least one scout core and the at least one parent core. A prefetch code is executed by the scout core for monitoring the parent core. The prefetch code executes independently from the parent core. The scout core determines that at least one specified data pattern has occurred in the parent core based on monitoring the parent core. A prefetch request is sent from the scout core to the shared cache. The prefetch request is sent based on the at least one specified pattern being detected by the scout core. A data set indicated by the prefetch request is sent to the parent core by the shared cache.
    Type: Application
    Filed: September 30, 2014
    Publication date: January 15, 2015
    Inventors: Brian R. Prasky, Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Chung-lung K. Shum
  • Publication number: 20150019821
Abstract: Embodiments relate to a method and computer program product for prefetching data on a chip having at least one scout core and a parent core. The method includes saving a prefetch code start address by the parent core. The prefetch code start address indicates where a prefetch code is stored. The prefetch code is specifically configured for monitoring the parent core based on a specific application being executed by the parent core. The method includes sending a broadcast interrupt signal by the parent core to the at least one scout core. The broadcast interrupt signal is sent based on the prefetch code start address being saved. The method includes monitoring the parent core by the prefetch code executed by at least one scout core. The scout core executes the prefetch code based on receiving the broadcast interrupt signal.
    Type: Application
    Filed: September 30, 2014
    Publication date: January 15, 2015
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-lung K. Shum
  • Patent number: 8935485
    Abstract: A data processing apparatus 2 includes a plurality of transaction sources 8, 10 each including a local cache memory. A shared cache memory 16 stores cache lines of data together with shared cache tag values. Snoop filter circuitry 14 stores snoop filter tag values tracking which cache lines of data are stored within the local cache memories. When a transaction is received for a target cache line of data, then the snoop filter circuitry 14 compares the target tag value with the snoop filter tag values and the shared cache circuitry 16 compares the target tag value with the shared cache tag values. The shared cache circuitry 16 operates in a default non-inclusive mode. The shared cache memory 16 and the snoop filter 14 accordingly behave non-inclusively in respect of data storage within the shared cache memory 16, but inclusively in respect of tag storage given the combined action of the snoop filter tag values and the shared cache tag values.
    Type: Grant
    Filed: August 8, 2011
    Date of Patent: January 13, 2015
    Assignee: ARM Limited
    Inventors: Jamshed Jalal, Brett Stanley Feero, Mark David Werkheiser, Michael Alan Filippo
  • Patent number: 8935475
Abstract: Embodiments of the present invention provide for the execution of threads and/or workitems on multiple processors of a heterogeneous computing system in a manner that they can share data correctly and efficiently. Disclosed method, system, and article of manufacture embodiments include, responsive to an instruction from a sequence of instructions of a work-item, determining an ordering of visibility to other work-items of one or more other data items in relation to a particular data item, and performing at least one cache operation upon at least one of the particular data item or the other data items present in any one or more cache memories in accordance with the determined ordering. The semantics of the instruction includes a memory operation upon the particular data item.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: January 13, 2015
    Assignees: ATI Technologies ULC, Advanced Micro Devices, Inc.
    Inventors: Anthony Asaro, Kevin Normoyle, Mark Hummel, Norman Rubin, Mark Fowler
  • Publication number: 20150012710
    Abstract: Various embodiments of the present disclosure relate to a cache stickiness index for providing measurable metrics associated with caches of a content delivery networking system. In one embodiment, a method for generating a cache stickiness index, including a cluster stickiness index and a region stickiness index, is disclosed. In embodiments, the cluster stickiness index is generated by comparing cache keys shared among a plurality of front-end clusters. In embodiments, the region stickiness index is generated by comparing cache keys shared among a plurality of data centers. In one embodiment, a system comprising means for generating a stickiness index is disclosed.
    Type: Application
    Filed: July 3, 2013
    Publication date: January 8, 2015
    Applicant: Facebook, Inc.
Inventors: Xiaojun Liang, Hongzhong Jia, Jason Taylor
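    The abstract gives no formula, but one plausible C++ reading of "comparing cache keys shared among clusters" is the fraction of one cluster's keys present in another; the ratio below is an assumption, not the patented metric:

        #include <cstddef>
        #include <set>
        #include <string>

        // Stickiness of cluster A relative to cluster B: the share of A's
        // cache keys that also appear in B's cache.
        double stickiness(const std::set<std::string>& a,
                          const std::set<std::string>& b) {
            if (a.empty()) return 0.0;
            size_t shared = 0;
            for (const auto& k : a) shared += b.count(k);
            return static_cast<double>(shared) / static_cast<double>(a.size());
        }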
  • Publication number: 20150012711
    Abstract: A system for operating a shared memory of a multiprocessor system includes a set of processor cores and a corresponding set of core local caches, a set of I/O devices and a corresponding set of I/O device local caches. Read and write operations performed on a core local cache, an I/O device local cache, and the shared memory are governed by a cache coherence protocol (CCP) that ensures that the shared memory is updated atomically.
    Type: Application
    Filed: July 4, 2013
    Publication date: January 8, 2015
Inventors: Vakul Garg, Varun Sethi, Bharat Bhushan
  • Publication number: 20150012692
    Abstract: Systems and methods for managing data input/output operations are described. In one aspect, a device driver identifies a data read operation generated by a virtual machine in a virtual environment. The device driver is located in the virtual machine and the data read operation identifies a physical cache address associated with the data requested in the data read operation. A determination is made regarding whether data associated with the data read operation is available in a cache associated with the virtual machine.
    Type: Application
    Filed: September 25, 2014
    Publication date: January 8, 2015
    Applicant: INTELLECTUAL PROPERTY HOLDINGS 2 LLC
    Inventors: Vikram Joshi, Yang Luan, Manish R. Apte, Hrishikesh A. Vidwans, Michael F. Brown
  • Patent number: 8930627
    Abstract: A computer program product for mitigating conflicts for shared cache lines between an owning core currently owning a cache line and a requestor core. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes determining whether the owning core is operating in a transactional or non-transactional mode and setting a hardware-based reject threshold at a first or second value with the owning core determined to be operating in the transactional or non-transactional mode, respectively. The method further includes taking first or second actions to encourage cache line sharing between the owning core and the requestor core in response to a number of rejections of requests by the requestor core reaching the reject threshold set at the first or second value, respectively.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Chung-Lung K. Shum
  • Patent number: 8930628
    Abstract: Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Diana L. Orf, Robert J. Sonnelitter, III
  • Publication number: 20150006803
Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a single-port memory, a dual-port memory, and a control circuit. The single-port memory may be configured to store tag information associated with a cache memory, and the dual-port memory may be configured to store state information associated with the cache memory. The control circuit may be configured to receive a request which includes a tag address, and to access the tag and state information stored in the single-port memory and the dual-port memory, respectively, dependent upon the received tag address. A determination of whether the data associated with the received tag address is contained in the cache memory may be made by the control circuit, and the control circuit may update and store state information in the dual-port memory responsive to the determination.
    Type: Application
    Filed: June 27, 2013
    Publication date: January 1, 2015
    Inventors: Harshavardhan Kaushikkar, Muditha Kanchana, Odutola O. Ewedemi
  • Patent number: 8924652
Abstract: Embodiments provide a method comprising receiving, at a cache associated with a central processing unit that is disposed on an integrated circuit, a request to perform a cache operation on the cache; in response to receiving and processing the request, determining that first data cached in a first cache line of the cache is to be written to a memory that is coupled to the integrated circuit; identifying a second cache line in the cache, the second cache line being complementary to the first cache line; transmitting a single memory instruction from the cache to the memory to write to the memory (i) the first data from the first cache line and (ii) second data from the second cache line; and invalidating the first data in the first cache line, without invalidating the second data in the second cache line.
    Type: Grant
    Filed: April 4, 2012
    Date of Patent: December 30, 2014
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Adi Habusha, Eitan Joshua, Shaul Chapman
  • Patent number: 8918537
    Abstract: Systems and methods are provided for selecting a path through which to send data in a host-based multi-path system. In one embodiment, a system includes a management server that determines a topology of the network and analyzes a plurality of paths for sending data through the network. The management server may also create a path quality index based on the topology and the analysis, the path quality index indicating a quality of individual paths within the plurality of paths. The system further includes a host that receives the path quality index from the management server and automatically selects, based on the path quality index, a path from the plurality of paths through which to send data.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: December 23, 2014
    Assignee: EMC Corporation
    Inventors: Harold M. Sandstrom, Amanuel Ronen Artzi, Michael E. Bappe, Helen S. Raizen, William Zahavi
  • Patent number: 8918593
    Abstract: A single-ported memory for storing information and only accessible to a plurality of clients, and a dual-ported memory for storing links and accessible to the plurality of clients and to a list manager that maintains a data structure for allocating memory blocks from the first memory and the second memory to the plurality of clients. The dual-ported memory is accessible to both the plurality of clients and the list manager. A method includes receiving a request from a client for access to memory storage at the single-ported memory and the dual-ported memory, and allocating a block of the single-ported memory to the client and a block of the dual-ported memory to the client. After the client has used the memory storage, the allocated block of the single-ported memory and the dual-ported memory are released to a free list data structure used by the list manager to assign storage.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: December 23, 2014
    Assignee: QLOGIC, Corporation
    Inventors: Biswajit Khandai, Oscar L. Grijalva
  • Patent number: 8909872
    Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller is coupled to the central processing unit, and a closely coupled peripheral is coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 9, 2014
Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
  • Publication number: 20140359225
Abstract: Disclosed herein is a multi-core processor including: a plurality of processor cores; a shared data cache storing cache data previously accessed by at least one of the plurality of processor cores; and an address decoder that compares the address value of data required by at least one of the plurality of processor cores with a set address register value and, based on the comparison, allows the core to access either the shared data cache or a separate memory in which non-cacheable data not stored in the shared data cache are stored.
    Type: Application
    Filed: May 27, 2014
    Publication date: December 4, 2014
    Applicant: Electronics and Telecommunications Research Institute
Inventor: Jae-Jin Lee
  • Patent number: 8904117
    Abstract: Various systems and methods for performing write-back caching in a cluster. For example, one method can involve a first node detecting that no failover nodes are available. A determination is made whether the first node should use write-back caching or not. If the first node is to continue using write-back caching, a first local cache identifier and a global cache identifier are both updated.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 2, 2014
    Assignee: Symantec Corporation
    Inventors: Santosh Kalekar, Niranjan S. Pendarkar, Vipul Jain, Shailesh Marathe, Anindya Banerjee, Rishikesh Bhagwandas Jethwani
  • Patent number: 8904114
    Abstract: Various implementations of shared upper level cache architectures for multi-core processors including a first subset of processor cores and a second subset of processor cores and a module configured to copy data from a first shared upper level cache memory to a second shared upper level cache memory are generally disclosed.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: December 2, 2014
    Assignee: Empire Technology Development LLC
    Inventor: Ezekiel Kruglick
  • Patent number: 8904115
    Abstract: Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: December 2, 2014
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, Jonathan (Son) Hung Tran, Timothy David Anderson, Sanjive Agarwala
  • Publication number: 20140351524
    Abstract: A cache memory eviction method includes maintaining thread-aware cache access data per cache block in a cache memory, wherein the cache access data is indicative of a number of times a cache block is accessed by a first thread, associating a cache block with one of a plurality of bins based on cache access data values of the cache block, and selecting a cache block to evict from a plurality of cache block candidates based, at least in part, upon the bins with which the cache block candidates are associated.
    Type: Application
    Filed: March 15, 2013
    Publication date: November 27, 2014
Inventors: Ragavendra Natarajan, Jayesh Gaur, Nithiyanandan Bashyam, Mainak Chaudhuri, Sreenivas Subramoney
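    A short C++ sketch of binned victim selection; the bin boundaries and four-bin layout are invented for illustration:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Thread-aware eviction: each block tracks how often the first
        // thread accessed it; blocks are bucketed into bins by that count,
        // and the victim is drawn from the lowest-occupied bin.
        struct Block { uint64_t tag; uint32_t first_thread_accesses; };

        int bin_of(const Block& b) {               // bins 0..3
            if (b.first_thread_accesses == 0) return 0;
            if (b.first_thread_accesses < 4)  return 1;
            if (b.first_thread_accesses < 16) return 2;
            return 3;
        }

        size_t pick_victim(const std::vector<Block>& set) {
            size_t victim = 0;
            for (size_t i = 1; i < set.size(); ++i)
                if (bin_of(set[i]) < bin_of(set[victim])) victim = i;
            return victim;  // candidate from the lowest bin is evicted
        }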
  • Publication number: 20140351523
    Abstract: The disclosure is directed to a system and method for managing cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a first cache data and a first cache metadata are stored for data transfers between a respective node and regions of a storage cluster receiving at least a first selected number of data transfer requests. When the node is rebooted, a second (new) cache data is stored to replace the first (old) cache data. The second cache data is compiled utilizing the first cache metadata to identify previously cached regions of the storage cluster receiving at least a second selected number of data transfer requests after the node is rebooted. The second selected number of data transfer requests is less than the first selected number of data transfer requests to enable a rapid build of the second cache data.
    Type: Application
    Filed: June 25, 2013
    Publication date: November 27, 2014
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish