With Shared Cache (epo) Patents (Class 711/E12.038)
  • Publication number: 20120005430
    Abstract: Access to various types of resources is controlled efficiently, thereby enhancing throughput. A storage system includes: a disk device for providing a volume for storing data to a host system; a channel adapter for writing data from the host system to the disk device via a cache memory; a disk adapter for transferring data to and from the disk device; and at least one processor package including a plurality of processors for controlling the channel adapter and the disk adapter; wherein any one of the processor packages includes a processor for collectively transferring related types of ownership based on specific control information for managing the plurality of types of ownership for each of the plurality of types of resources.
    Type: Application
    Filed: April 21, 2010
    Publication date: January 5, 2012
    Applicant: HITACHI, LTD.
    Inventors: Koji Watanabe, Toshiya Seki, Takashi Sakaguchi
  • Publication number: 20110320727
    Abstract: An apparatus for controlling operation of a cache includes a first command queue, a second command queue and an input controller configured to receive requests having a first command type and a second command type, and to assign a first request having the first command type to the first command queue and a second request having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Diana L. Orf, Robert J. Sonnelitter, III
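    Illustrative sketch: the routing rule in the abstract above, modeled minimally in C. A request of the first command type goes to the first queue while that queue has an indication that its dedicated buffer is available, and spills to the second queue otherwise. The queue depth, field names, and single availability flag are invented for illustration, not taken from the patent.

      #include <stdbool.h>
      #include <stdio.h>

      #define QDEPTH 8

      typedef struct {
          int  reqs[QDEPTH];
          int  count;
          bool buffer_available;  /* indication that the dedicated buffer is free */
      } cmd_queue;

      /* Route a request of the first command type to q0 or q1. */
      static bool enqueue_type_a(cmd_queue *q0, cmd_queue *q1, int req)
      {
          cmd_queue *target = q0->buffer_available ? q0 : q1;
          if (target->count == QDEPTH)
              return false;                /* both paths can still back-pressure */
          target->reqs[target->count++] = req;
          return true;
      }

      int main(void)
      {
          cmd_queue q0 = {.buffer_available = true}, q1 = {0};
          enqueue_type_a(&q0, &q1, 1);     /* lands in q0: buffer indicated free */
          q0.buffer_available = false;
          enqueue_type_a(&q0, &q1, 2);     /* lands in q1: no indication yet     */
          printf("q0=%d q1=%d\n", q0.count, q1.count);
          return 0;
      }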
  • Publication number: 20110320729
    Abstract: Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors is generated based on a current set of cache access requests operating on a set of cache banks and at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicates the availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set is selected to access a given cache bank based on the cache bank availability vector associated with the given cache bank and the set of access request parameters associated with the selected cache access request.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Hieu T. Huynh, Kenneth D. Klapproth
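    Illustrative sketch: a minimal C rendering of the bank-availability idea above. Busy counters (one per bank, reflecting variable busy times) are folded into a bitvector, and the first pending request whose target bank bit is set is granted. The bank count, counter representation, and selection order are assumptions for illustration; the patent also weighs per-request parameters that are omitted here.

      #include <stdint.h>
      #include <stdio.h>

      #define NBANKS 16

      /* busy[i] is the number of cycles bank i remains busy (variable per op). */
      static uint16_t availability_vector(const int busy[NBANKS])
      {
          uint16_t avail = 0;
          for (int i = 0; i < NBANKS; i++)
              if (busy[i] == 0)
                  avail |= (uint16_t)(1u << i);
          return avail;
      }

      /* Pick the first request (by index) whose bank bit is set; -1 if none. */
      static int select_request(uint16_t avail, const int target_bank[], int nreq)
      {
          for (int r = 0; r < nreq; r++)
              if (avail & (1u << target_bank[r]))
                  return r;
          return -1;
      }

      int main(void)
      {
          int busy[NBANKS] = {0};
          busy[3] = 2;                       /* bank 3 busy for 2 more cycles */
          int banks[3] = {3, 3, 7};          /* three pending requests        */
          int r = select_request(availability_vector(busy), banks, 3);
          printf("granted request %d\n", r); /* request 2 (bank 7) wins       */
          return 0;
      }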
  • Publication number: 20110320728
    Abstract: A cache includes a cache pipeline, a request receiver configured to receive off-chip coherency requests from an off-chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter, coupled between the plurality of state machines and the cache pipeline, that is configured to give priority to off-chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Arthur J. O'Neill, JR., Robert J. Sonnelitter, III
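    Illustrative sketch: the counter-based halt described above, reduced to a C model. The pipeline may send a coherency request to the lower level cache only while an outstanding-request counter is below a limit; completions decrement it and re-open the gate. The limit value and names are invented.

      #include <stdbool.h>

      #define COHERENCY_LIMIT 32

      typedef struct { int outstanding; } coherency_gate;

      static bool may_send(const coherency_gate *g)
      {
          return g->outstanding < COHERENCY_LIMIT;   /* halt when limit reached */
      }

      static void on_send(coherency_gate *g)     { g->outstanding++; }
      static void on_complete(coherency_gate *g) { g->outstanding--; }

      int main(void)
      {
          coherency_gate g = {0};
          while (may_send(&g))
              on_send(&g);          /* pipeline halts here at the limit */
          on_complete(&g);          /* one completion re-opens the gate */
          return may_send(&g) ? 0 : 1;
      }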
  • Publication number: 20110320726
    Abstract: A storage apparatus has a channel board 11; a drive board 13; a cache memory 14; a plurality of processor boards 12 that transfer data; and a shared memory 15. The channel board 11 stores a frame transfer table 521 containing information indicative of correspondence between the LDEV 172 and each of the processor boards 12, set in accordance with a right of ownership, that is, a right of access to the LDEV 172. The processor boards 12 store LDEV control information 524 in a local memory 123, which is referred to by the processor board at the time of access. The channel board 11 transfers a data frame that forms the received data I/O request to one of the processor boards 12 corresponding to the LDEV 172 specified from the information contained in the frame, by using the frame transfer table 521.
    Type: Application
    Filed: May 26, 2009
    Publication date: December 29, 2011
    Applicant: HITACHI, LTD.
    Inventors: Takashi Noda, Takashi Ochi, Yoshihito Nakagawa
  • Publication number: 20110320720
    Abstract: Cache line replacement in a symmetric multiprocessing computer, the computer having a plurality of processors, a main memory that is shared among the processors, a plurality of cache levels including at least one high level of private caches and a low level shared cache, and a cache controller that controls the shared cache, including receiving in the cache controller a memory instruction that requires replacement of a cache line in the low level shared cache; and selecting for replacement by the cache controller a least recently used cache line in the low level shared cache that has no copy stored in any higher level cache.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig Walters, Vijayalakshmi Srinivasan
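    Illustrative sketch: one plausible C reading of the selection rule above, assuming each shared-cache line carries a bit saying whether any higher level private cache still holds a copy. The victim is the least recently used line with that bit clear; the fallback when every line has a higher-level copy is left to the caller. Field names and the 8-way set are assumptions.

      #include <stdbool.h>
      #include <stdio.h>

      #define WAYS 8

      typedef struct {
          unsigned lru_rank;     /* 0 = least recently used                */
          bool     copy_above;   /* true if some higher level cache holds this line */
      } line_state;

      static int pick_victim(const line_state set[WAYS])
      {
          int victim = -1;
          unsigned best = ~0u;
          for (int w = 0; w < WAYS; w++) {
              if (set[w].copy_above)
                  continue;                      /* skip lines alive above */
              if (set[w].lru_rank < best) {
                  best = set[w].lru_rank;
                  victim = w;
              }
          }
          return victim;   /* -1 means fall back to plain LRU */
      }

      int main(void)
      {
          line_state set[WAYS] = {{0, true}, {1, false}, {2, false}, {3, true},
                                  {4, false}, {5, true}, {6, false}, {7, true}};
          printf("victim way = %d\n", pick_victim(set));   /* way 1 */
          return 0;
      }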
  • Publication number: 20110320730
    Abstract: A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to that buffer region of the cache according to an instruction of the processor. The data block is then stored from that buffer region of the cache to the memory.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Craig R. Walters
  • Publication number: 20110320695
    Abstract: Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
  • Publication number: 20110307665
    Abstract: Subject matter disclosed herein relates to a system of one or more processors that includes persistent memory.
    Type: Application
    Filed: June 9, 2010
    Publication date: December 15, 2011
    Inventors: John Rudelic, August Camber, Mostafa Naguib Abdulla
  • Patent number: 8078804
    Abstract: A data cache memory coupled to a processor that includes processor clusters is adapted to operate simultaneously on scalar and vectorial data by providing data locations in the data cache memory for storing data for processing. The data locations are accessed either in a scalar mode or in a vectorial mode. This is done by explicitly mapping the data locations that are scalar and the data locations that are vectorial.
    Type: Grant
    Filed: June 26, 2007
    Date of Patent: December 13, 2011
    Assignees: STMicroelectronics S.r.l., STMicroelectronics N.V.
    Inventors: Francesco Pappalardo, Giuseppe Notarangelo, Elena Salurso, Elio Guidetti
  • Publication number: 20110296407
    Abstract: In a virtual machine environment, a hypervisor is configured to expose a virtual cache topology to a guest operating system, such that the virtual cache topology may be provided by a corresponding physical cache topology. The virtual cache topology may be determined by the hypervisor or, in the case of a datacenter environment, by the datacenter's management system. The virtual cache topology may be calculated from the physical cache topology of the system such that virtual machines may be instantiated with virtual processors and virtual cache that may be mapped to corresponding logical processors and physical cache.
    Type: Application
    Filed: June 1, 2010
    Publication date: December 1, 2011
    Applicant: Microsoft Corporation
    Inventors: Aditya Bhandari, Dmitry Meshchaninov, Shuvabrata Ganguly
  • Publication number: 20110296406
    Abstract: Techniques for configuring a hypervisor scheduler to make use of cache topology of processors and physical memory distances between NUMA nodes when making scheduling decisions. In the same or other embodiments the hypervisor scheduler can be configured to optimize the scheduling of latency sensitive workloads. In the same or other embodiments a hypervisor can be configured to expose a virtual cache topology to a guest operating system running in a virtual machine.
    Type: Application
    Filed: June 1, 2010
    Publication date: December 1, 2011
    Applicant: Microsoft Corporation
    Inventors: Aditya Bhandari, Dmitry Meshchaninov, Shuvabrata Ganguly
  • Publication number: 20110293097
    Abstract: Techniques for memory compartmentalization for trusted execution of a virtual machine (VM) on a multi-core processing architecture are described. Memory compartmentalization may be achieved by encrypting layer 3 (L3) cache lines using a key under the control of a given VM, within the trust boundaries of the processing core on which that VM is executed. Further, embodiments described herein provide an efficient method for storing and processing encryption-related metadata associated with each encrypt/decrypt operation performed for the L3 cache lines.
    Type: Application
    Filed: May 27, 2010
    Publication date: December 1, 2011
    Inventors: Fabio R. Maino, Pere Monclus, David A. McGrew
  • Publication number: 20110238915
    Abstract: A switch device includes interfaces connected to a host, a first storage device, and a second storage device having a cache memory, and a processor that executes: receiving, from the host, a copy command indicating that target data stored in the first storage device is to be copied to the second storage device; transmitting a read command to read out the target data stored in the first storage device corresponding to the copy command; receiving the target data corresponding to the transmitted read command from the first storage device; and transmitting, to the second storage device, a write command for writing the target data and release information indicating that the target data is releasable from the cache memory.
    Type: Application
    Filed: March 11, 2011
    Publication date: September 29, 2011
    Applicant: Fujitsu Limited
    Inventor: Yasuhito Kikuchi
  • Publication number: 20110219191
    Abstract: In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line.
    Type: Application
    Filed: January 18, 2011
    Publication date: September 8, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Daniel Ahn, Luis H. Ceze, Alan Gara, Martin Ohmacht, Zhuang Xiaotong
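    Illustrative sketch: the reader-set bitset encoding above, in C. Each directory entry keeps one bit per speculative thread that has read the line; a write by thread t conflicts exactly when some other thread's bit is set. The 64-thread limit and names are invented for illustration.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct { uint64_t readers; } dir_entry;   /* one bit per thread */

      static void record_read(dir_entry *e, int tid)
      {
          e->readers |= 1ull << tid;
      }

      static bool write_conflicts(const dir_entry *e, int tid)
      {
          /* Conflict iff some thread other than the writer has read the line. */
          return (e->readers & ~(1ull << tid)) != 0;
      }

      int main(void)
      {
          dir_entry e = {0};
          record_read(&e, 3);
          printf("thread 3 write: %s\n", write_conflicts(&e, 3) ? "conflict" : "ok");
          printf("thread 5 write: %s\n", write_conflicts(&e, 5) ? "conflict" : "ok");
          return 0;
      }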
  • Publication number: 20110208916
    Abstract: A monitoring section 139 monitors a power control command for controlling power supplied to a processor running a plurality of operating systems, or to a plurality of processors. When selecting a cache entry to be replaced from the plurality of cache entries constituting a cache storage device 111, a cache entry selecting section 141 sets a cache entry used by the operating system or processor that executed the power control command to a 'used in the past' state, using the execution states of the plurality of operating systems or processors, which are changed based on the power control command. A replacement object selecting section 136 selects the cache entry set to the 'used in the past' state as the cache entry to be replaced. In this way, the plurality of operating systems or processors can effectively utilize one cache storage device.
    Type: Application
    Filed: November 28, 2008
    Publication date: August 25, 2011
    Inventor: Masahiko Saito
  • Patent number: 8006039
    Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data; determining whether the data exists in a local cache memory that is in local communication with the input/output device; fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory; merging the data according to the request to obtain merged data; and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product are also provided.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: August 23, 2011
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
  • Publication number: 20110197031
    Abstract: Disclosed herein is a miss handler for a multi-channel cache memory, and a method that includes determining a need to update a multi-channel cache memory due at least to one of an occurrence of a cache miss or a data prefetch being needed. The method further includes operating a multi-channel cache miss handler to update at least one cache channel storage of the multi-channel cache memory from a main memory.
    Type: Application
    Filed: February 5, 2010
    Publication date: August 11, 2011
    Inventors: Eero Aho, Jari Nikara, Kimmo Kuusilinna
  • Patent number: 7996644
    Abstract: An apparatus and method for fairly accessing a shared cache with multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet allowed to victimize the static portion assigned to itself and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to the resource is reassigned to the dynamically shared portion.
    Type: Grant
    Filed: December 29, 2004
    Date of Patent: August 9, 2011
    Assignee: Intel Corporation
    Inventor: Sailesh Kottapalli
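    Illustrative sketch: a C model of the static/dynamic split described above, for two resources sharing a 16-way cache. Each resource may victimize its own static ways plus the dynamic pool; a resource that accesses the cache fewer than a threshold number of times in an epoch donates its static ways to the pool. The masks, epoch accounting, and threshold are invented.

      #include <stdint.h>
      #include <stdio.h>

      #define IDLE_THRESH 100   /* min accesses per epoch to keep static ways */

      typedef struct {
          uint16_t static_mask[2];  /* per-resource private ways (2 resources) */
          uint16_t dynamic_mask;    /* ways anyone may victimize               */
          unsigned accesses[2];     /* accesses in the current epoch           */
      } shared_cache;

      /* Ways resource r is allowed to evict from. */
      static uint16_t victim_mask(const shared_cache *c, int r)
      {
          return c->static_mask[r] | c->dynamic_mask;
      }

      static void end_epoch(shared_cache *c)
      {
          for (int r = 0; r < 2; r++) {
              if (c->accesses[r] < IDLE_THRESH) {       /* underused resource */
                  c->dynamic_mask |= c->static_mask[r]; /* donate its ways    */
                  c->static_mask[r] = 0;
              }
              c->accesses[r] = 0;
          }
      }

      int main(void)
      {
          shared_cache c = {{0x000F, 0x00F0}, 0xFF00, {500, 3}};
          end_epoch(&c);   /* resource 1 was idle: its ways join the pool */
          printf("resource 0 mask=0x%04x\n", (unsigned)victim_mask(&c, 0));
          return 0;
      }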
  • Publication number: 20110191542
    Abstract: Methods and apparatus relating to system-wide quiescence and per-thread transaction fence in a distributed caching agent are described. Some embodiments utilize messages, counters, and/or state machines that support system-wide quiescence and per-thread transaction fence flows. Other embodiments are also disclosed.
    Type: Application
    Filed: December 26, 2010
    Publication date: August 4, 2011
    Inventors: James R. Vash, Bongjin Jung, Rishan Tan
  • Publication number: 20110185117
    Abstract: According to one embodiment, a system includes a virtual tape library having a cache, a virtual tape controller (VTC) coupled to the virtual tape library, and an interface for coupling at least one host to the VTC. The cache is shared by all the hosts, and a common view of a cache state, a virtual library state, and a number of write requests pending is provided to all the hosts by the VTC. In another embodiment, a method includes receiving data from at least one host using a VTC, storing data received from all the hosts to a cache using the VTC, sending an alert to all the hosts and entering a warning state when free space is low, and sending another alert to all the hosts and entering a critical state when free space is critically low, while allowing previously mounted virtual drives to continue normally.
    Type: Application
    Filed: January 25, 2010
    Publication date: July 28, 2011
    Applicant: International Business Machines Corporation
    Inventors: Ralph T. Beeston, Erika M. Dawson, Duke A. Lee, David Luciani, Joel K. Lyman
  • Publication number: 20110185126
    Abstract: When a processor has transitioned to an operation stop state, it is possible to reduce the power consumption of a cache memory while maintaining the consistency of cache data. A multiprocessor system includes first and second processors, a shared memory, first and second cache memories, a consistency management circuit for managing consistency of data stored in the first and second cache memories, a request signal line for transmitting a request signal for a data update request from the consistency management circuit to the first and second cache memories, an information signal line for transmitting an information signal for informing completion of the data update from the first and second cache memories to the consistency management circuit, and a cache power control circuit for controlling supply of a clock signal and power to the first and second cache memories in accordance with the request signal and the information signal.
    Type: Application
    Filed: January 24, 2011
    Publication date: July 28, 2011
    Applicant: RENESAS ELECTRONICS CORPORATION
    Inventors: Tsuneki Sasaki, Shuichi Kunie, Tatsuya Kawasaki
  • Publication number: 20110185125
    Abstract: A processor may include several processor cores, each including a respective higher-level cache; a lower-level cache including several tag units each including several controllers, where each controller corresponds to a respective cache bank configured to store data, and where the controllers are concurrently operable to access their respective cache banks; and an interconnect network configured to convey data between the cores and the lower-level cache. The controllers may share access to an interconnect egress port coupled to the interconnect network, and may generate multiple concurrent requests to convey data via the shared port, where each of the requests is destined for a corresponding core, and where a datapath width of the port is less than a combined width of the multiple requests. The given tag unit may arbitrate among the controllers for access to the shared port, such that the requests are transmitted to corresponding cores serially rather than concurrently.
    Type: Application
    Filed: January 27, 2010
    Publication date: July 28, 2011
    Inventors: Prashant Jain, Yoganand Chillarige, Sandip Das, Shukur Moulali Pathan, Srinivasan R. Iyengar, Sanjay Patel
  • Publication number: 20110161590
    Abstract: A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation, and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation is bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state, and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams
  • Publication number: 20110161586
    Abstract: Technologies are described herein related to multi-core processors that are adapted to share processor resources. An example multi-core processor can include a plurality of processor cores. The multi-core processor further can include a shared register file selectively coupled to two or more of the plurality of processor cores, where the shared register file is adapted to serve as a shared resource among the selected processor cores.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Inventors: Miodrag Potkonjak, Nathan Zachary Beckmann
  • Patent number: 7970999
    Abstract: An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
    Type: Grant
    Filed: January 22, 2008
    Date of Patent: June 28, 2011
    Assignee: ARRIS Group
    Inventor: Robert C Duzett
  • Publication number: 20110154345
    Abstract: Implementations and techniques for multicore processors having a domain interconnection network configured to place a first collision domain network in communication with a second collision domain network are generally disclosed.
    Type: Application
    Filed: December 21, 2009
    Publication date: June 23, 2011
    Inventor: Ezekiel Kruglick
  • Publication number: 20110145505
    Abstract: Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
    Type: Application
    Filed: December 15, 2009
    Publication date: June 16, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vaijayanthimala K. Anand, Diane G. Flemming, William A. Maron, Mysore S. Srinivas
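    Illustrative sketch: the marking scheme above in C, assuming way-granular marks within a set. A configured percentage of ways is flagged as evictable only by high priority virtual processors; victim selection for a low priority requester skips flagged ways. The percentage-to-ways rounding and LRU tie-break are assumptions.

      #include <stdbool.h>
      #include <stdio.h>

      #define WAYS 8

      typedef struct { bool hp_only; unsigned lru_rank; } line;

      /* Mark the first (pct * WAYS / 100) ways of a set as HP-only. */
      static void mark_hp_only(line set[WAYS], int pct)
      {
          int n = WAYS * pct / 100;
          for (int w = 0; w < WAYS; w++)
              set[w].hp_only = (w < n);
      }

      static int pick_victim(const line set[WAYS], bool requester_is_hp)
      {
          int victim = -1; unsigned best = ~0u;
          for (int w = 0; w < WAYS; w++) {
              if (set[w].hp_only && !requester_is_hp)
                  continue;                       /* protected from LP eviction */
              if (set[w].lru_rank < best) { best = set[w].lru_rank; victim = w; }
          }
          return victim;
      }

      int main(void)
      {
          line set[WAYS] = {{0,0},{0,1},{0,2},{0,3},{0,4},{0,5},{0,6},{0,7}};
          mark_hp_only(set, 50);                  /* ways 0..3 become HP-only */
          printf("LP victim = %d\n", pick_victim(set, false));   /* way 4    */
          printf("HP victim = %d\n", pick_victim(set, true));    /* way 0    */
          return 0;
      }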
  • Publication number: 20110145506
    Abstract: In one embodiment, the present invention includes a cache memory including cache lines that each have a tag field including a state portion to store a cache coherency state of data stored in the line and a weight portion to store a weight corresponding to a relative importance of the data. In various implementations, the weight can be based on the cache coherency state and a recency of usage of the data. Other embodiments are described and claimed.
    Type: Application
    Filed: December 16, 2009
    Publication date: June 16, 2011
    Inventors: Naveen Cherukuri, Dennis W. Brzezinski, Ioannis T. Schoinas, Anahita Shayesteh, Akhilesh Kumar, Mani Azimi
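    Illustrative sketch: one way the weight portion could combine state and recency, in C. Each line's weight is a base value per coherency state plus a recency bonus, and the minimum-weight line is evicted. The particular base weights and the 0..3 recency scale are invented; the claims do not fix them.

      #include <stdio.h>

      typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;

      static const int state_weight[] = {
          [INVALID] = 0, [SHARED] = 1, [EXCLUSIVE] = 2, [MODIFIED] = 3,
      };

      typedef struct { mesi_t state; int recency; } tagged_line;  /* recency 0..3 */

      static int weight(const tagged_line *l)
      {
          return state_weight[l->state] + l->recency;
      }

      static int pick_victim(const tagged_line set[], int ways)
      {
          int victim = 0;
          for (int w = 1; w < ways; w++)
              if (weight(&set[w]) < weight(&set[victim]))
                  victim = w;
          return victim;
      }

      int main(void)
      {
          tagged_line set[4] = {{MODIFIED, 1}, {SHARED, 0}, {EXCLUSIVE, 3}, {SHARED, 2}};
          printf("victim way = %d\n", pick_victim(set, 4));   /* way 1 */
          return 0;
      }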
  • Patent number: 7962692
    Abstract: The present invention is directed to a method and system for managing performance data. In accordance with a particular embodiment of the present invention, cache metrics are received. At least one of the cache metrics may be compared with a threshold value. A determination may be made as to whether one or more parameter adjustments are required based upon the comparison.
    Type: Grant
    Filed: October 5, 2006
    Date of Patent: June 14, 2011
    Assignee: Computer Associates Think, Inc.
    Inventors: Robert E. Puishys, Jr., Daniel E. Butterworth, Paul N. Williams, Wayne C. Sauer
  • Publication number: 20110131377
    Abstract: A multi-core processor chip comprises at least one shared cache having a plurality of ports and a plurality of address spaces and a plurality of processor cores. Each processor core is coupled to one of the plurality of ports such that each processor core is able to access the at least one shared cache simultaneously with another of the plurality of processor cores. Each processor core is assigned one of a unique application or a unique application task and the multi-core processor is operable to execute a partitioning operating system that temporally and spatially isolates each unique application and each unique application task such that each of the plurality of processor cores does not attempt to write to the same address space of the at least one shared cache at the same time as another of the plurality of processor cores.
    Type: Application
    Filed: December 2, 2009
    Publication date: June 2, 2011
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Scott Gray, Nicholas Wilt
  • Publication number: 20110125971
    Abstract: Various implementations of shared upper level cache architectures are disclosed.
    Type: Application
    Filed: November 24, 2009
    Publication date: May 26, 2011
    Applicants: Empire Technology Development LLC, Glitter Technology LLP
    Inventor: Ezekiel Kruglick
  • Publication number: 20110119447
    Abstract: According to embodiments described in the specification, a method and apparatus for managing memory in a mobile electronic device are provided. The method comprises: receiving a request to install an application; receiving at least one indication of data intended to be maintained in a shared cache; determining, based on the at least one indication, whether data corresponding to the intended data exists in the shared cache; upon a negative determination, writing the intended data to the shared cache; and repeating the receiving at least one indication, the determining and the writing for at least one additional application.
    Type: Application
    Filed: November 18, 2009
    Publication date: May 19, 2011
    Applicant: Research in Motion Limited
    Inventor: Ankur Aggarwal
  • Publication number: 20110119446
    Abstract: A method, system and computer program product are disclosed for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that one of the processor units. If an address in the memory cache is reserved for that one of the processors, the data are stored at this reserved address.
    Type: Application
    Filed: February 1, 2010
    Publication date: May 19, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthias A. Blumrich, Martin Ohmacht
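    Illustrative sketch: the series of reservation registers above, modeled in C with one register per processor unit. A load-reserve records its address; a store-conditional succeeds only if the requester's reservation still matches, and a successful store clears every matching reservation. The register encoding and NPROC are assumptions.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NPROC 4
      #define NO_RESERVATION UINT64_MAX

      static uint64_t reservation[NPROC];

      static void load_reserve(int cpu, uint64_t addr)
      {
          reservation[cpu] = addr;
      }

      static bool store_conditional(int cpu, uint64_t addr)
      {
          if (reservation[cpu] != addr)
              return false;                        /* reservation lost: fail */
          for (int p = 0; p < NPROC; p++)          /* store invalidates all  */
              if (reservation[p] == addr)          /* matching reservations  */
                  reservation[p] = NO_RESERVATION;
          return true;
      }

      int main(void)
      {
          for (int p = 0; p < NPROC; p++) reservation[p] = NO_RESERVATION;
          load_reserve(0, 0x1000);
          load_reserve(1, 0x1000);
          printf("cpu0 SC: %d\n", store_conditional(0, 0x1000));  /* 1: wins  */
          printf("cpu1 SC: %d\n", store_conditional(1, 0x1000));  /* 0: loses */
          return 0;
      }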
  • Publication number: 20110113199
    Abstract: An apparatus and method is described herein for optimizations to prefetch throttling, which potentially enhance performance, reduce power consumption, and maintain positive gain for workloads that benefit from prefetching. More specifically, the optimizations described herein allow bandwidth congestion and prefetch accuracy to be taken into account as feedback for throttling at the source of prefetch generation. As a result, when congestion is low, full prefetch generation is allowed, even if the prefetches are inaccurate, since bandwidth is available. When congestion is high, the throttling decision falls to prefetch accuracy: if accuracy is high (the miss rate is low), less throttling is needed, because the prefetches are being utilized and performance is enhanced.
    Type: Application
    Filed: November 9, 2009
    Publication date: May 12, 2011
    Inventors: Puqi P. Tang, Hemant G. Rotithor, Ryan L. Carlson, Nagi Aboulenein
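    Illustrative sketch: the two-feedback throttle above as a C decision function. While bandwidth congestion is low the full prefetch rate is allowed regardless of accuracy; under high congestion the allowed rate scales with measured accuracy (useful prefetches over issued). All thresholds and the quarter-rate granularity are invented for illustration.

      #include <stdio.h>

      typedef struct {
          unsigned issued;      /* prefetches generated this interval        */
          unsigned useful;      /* prefetched lines that were later demanded */
      } pf_stats;

      /* Returns the fraction (0..4 quarters) of prefetch slots to use. */
      static int prefetch_rate(const pf_stats *s, int congestion_pct)
      {
          if (congestion_pct < 50)
              return 4;                              /* low congestion: full rate  */
          unsigned acc_pct = s->issued ? 100u * s->useful / s->issued : 0;
          if (acc_pct >= 75) return 3;               /* accurate: throttle lightly */
          if (acc_pct >= 40) return 2;
          return 1;                                  /* inaccurate and congested   */
      }

      int main(void)
      {
          pf_stats s = {.issued = 200, .useful = 170};
          printf("rate (idle bus) = %d/4\n", prefetch_rate(&s, 10));
          printf("rate (busy bus) = %d/4\n", prefetch_rate(&s, 90));
          return 0;
      }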
  • Publication number: 20110087843
    Abstract: An apparatus, method, and system are disclosed. In one embodiment the apparatus includes a cache memory, which has a number of sets. Each of the sets in the cache memory has several cache lines. The apparatus also includes at least one process resource table. The process resource table maintains a cache line occupancy count for a number of cache lines. Specifically, the occupancy count describes the number of cache lines in the cache storing information utilized by a process running on a computer system. Additionally, the process resource table stores occupancy counts for fewer cache lines than the total number of cache lines in the cache memory.
    Type: Application
    Filed: October 9, 2009
    Publication date: April 14, 2011
    Inventors: Li Zhao, Ravishankar Iyer, Rameshkumar G. Illikkal, Erik G. Hallnor, Martin G. Dixon, Donald K. Newell
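    Illustrative sketch: a C model of a process resource table that tracks occupancy for only a few processes, fewer entries than the cache has lines. Cache fills increment and evictions decrement the matching entry; untracked processes are simply not counted. Table size and field names are assumptions.

      #include <stdio.h>

      #define TABLE_SIZE 4    /* tracks fewer entries than the cache has lines */

      typedef struct { int pid; unsigned occupancy; } prt_entry;
      static prt_entry table_[TABLE_SIZE];

      static prt_entry *lookup(int pid)
      {
          for (int i = 0; i < TABLE_SIZE; i++)
              if (table_[i].pid == pid)
                  return &table_[i];
          return NULL;         /* untracked process: counts are not kept */
      }

      static void on_fill(int pid)  { prt_entry *e = lookup(pid); if (e) e->occupancy++; }
      static void on_evict(int pid) { prt_entry *e = lookup(pid); if (e && e->occupancy) e->occupancy--; }

      int main(void)
      {
          table_[0].pid = 42;
          on_fill(42); on_fill(42); on_evict(42);
          printf("pid 42 occupies %u lines\n", table_[0].occupancy);   /* 1 */
          return 0;
      }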
  • Publication number: 20110060880
    Abstract: A multiprocessor according to an embodiment of the present invention comprises a provisional determination unit that provisionally determines one transfer source for each transfer destination by performing predetermined prediction processing based on monitoring of the transfer of cache data among cache memories. After a provisional determination result is obtained, a data transfer unit activates only the tag cache corresponding to the provisionally determined transfer source when the transfer of the cache data is performed, and determines whether cache data corresponding to a refill request is cached by referring only to the activated tag cache.
    Type: Application
    Filed: July 29, 2010
    Publication date: March 10, 2011
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Soichiro Hosoda
  • Publication number: 20110055482
    Abstract: Various example embodiments are disclosed. According to an example embodiment, a shared cache may be configured to determine whether a word requested by one of the L1 caches is currently stored in the L2 shared cache, read the requested word from the main memory based on determining that the requested word is not currently stored in the L2 shared cache, determine whether at least one line in a way reserved for the requesting L1 cache is unused, store the requested word in the at least one line based on determining that the at least one line in the reserved way is unused, and store the requested word in a line of the L2 shared cache outside the reserved way based on determining that the at least one line in the reserved way is not unused.
    Type: Application
    Filed: November 25, 2009
    Publication date: March 3, 2011
    Applicant: Broadcom Corporation
    Inventors: Kimming So, Binh Truong
  • Publication number: 20110055487
    Abstract: In one embodiment, the present invention includes a method to obtain topology information regarding a system including at least one multicore processor, provide the topology information to a plurality of parallel processes, generate a topological map based on the topology information, access the topological map to determine a topological relationship between a sender process and a receiver process, and select a given memory copy routine to pass a message from the sender process to the receiver process based at least in part on the topological relationship. Other embodiments are described and claimed.
    Type: Application
    Filed: March 31, 2008
    Publication date: March 3, 2011
    Inventors: Sergey I. Sapronov, Alexey V. Bayduraev, Alexander V. Supalov, Vladimir D. Truschin, Igor Ermolaev, Dmitry Mishura
  • Publication number: 20110022773
    Abstract: A mechanism is provided in a virtual machine monitor for fine grained cache allocation in a shared cache. The mechanism partitions a cache tag into a most significant bit (MSB) portion and a least significant bit (LSB) portion. The MSB portion of the tags is shared among the cache lines in a set. The LSB portion of the tags is private, one per cache line. The mechanism allows software to set the MSB portion of tags in a cache to allocate sets of cache lines. The cache controller determines whether a cache line is locked based on the MSB portion of the tag.
    Type: Application
    Filed: July 27, 2009
    Publication date: January 27, 2011
    Applicant: International Business Machines Corporation
    Inventors: Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
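    Illustrative sketch: the MSB/LSB tag split above, in C. The MSB portion of the tag is stored once per set and can be written by software to allocate (lock) the set's lines; the LSB portion stays private per line, and a lookup must match both. The 12-bit LSB width and lock flag are invented for illustration.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define LSB_BITS 12   /* private per-line tag bits (width is invented) */

      typedef struct {
          uint32_t set_msb;       /* MSB tag portion, shared by the whole set   */
          bool     msb_locked;    /* software wrote set_msb to allocate the set */
          uint32_t line_lsb[8];   /* LSB tag portion, private to each line      */
      } tag_set;

      static bool tag_match(const tag_set *s, int way, uint32_t tag)
      {
          uint32_t msb = tag >> LSB_BITS;
          uint32_t lsb = tag & ((1u << LSB_BITS) - 1);
          return s->set_msb == msb && s->line_lsb[way] == lsb;
      }

      /* The controller treats lines in a software-allocated set as locked. */
      static bool line_locked(const tag_set *s) { return s->msb_locked; }

      int main(void)
      {
          tag_set s = {.set_msb = 0xAB, .msb_locked = true};
          s.line_lsb[0] = 0x123;
          uint32_t tag = (0xABu << LSB_BITS) | 0x123u;
          printf("hit=%d locked=%d\n", tag_match(&s, 0, tag), line_locked(&s));
          return 0;
      }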
  • Publication number: 20110010504
    Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Application
    Filed: July 10, 2009
    Publication date: January 13, 2011
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Publication number: 20110004729
    Abstract: Methods, apparatuses, and systems directed to the caching of blocks of lines of memory in a cache-coherent, distributed shared memory system. Block caches used in conjunction with line caches can be used to store more data with less tag memory space compared to the use of line caches alone and can therefore reduce memory requirements. In one particular embodiment, the present invention manages this caching using a DSM-management chip, after the allocation of the blocks by software, such as a hypervisor. An example embodiment provides processing relating to block caches in cache-coherent distributed shared memory.
    Type: Application
    Filed: December 19, 2007
    Publication date: January 6, 2011
    Applicant: 3Leaf Systems, Inc.
    Inventors: Isam Akkawi, Najeeb Imran Ansari, Bryan Chin, Chetana Nagendra Keltcher, Krishnan Subramani, Janakiramanan Vaidyanathan
  • Patent number: 7865691
    Abstract: A virtual address cache and a method for sharing data. The virtual address cache includes: a memory, adapted to store virtual addresses, task identifiers and data associated with the virtual addresses and the task identifiers; and a comparator, connected to the memory, adapted to determine that data associated with a received virtual address and a received task identifier is stored in the memory if at least a portion of the received virtual address equals at least a corresponding portion of a certain stored virtual address and a stored task identifier associated with the certain stored virtual address indicates that the data is shared between multiple tasks.
    Type: Grant
    Filed: August 31, 2004
    Date of Patent: January 4, 2011
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Itay Peled, Moshe Anschel, Moshe Bachar, Jacob Efrat, Alon Eldar, Yakov Tokar
  • Publication number: 20100332763
    Abstract: An apparatus, system, and method are disclosed for improving cache coherency processing. The method includes determining that a first processor in a multiprocessor system receives a cache miss. The method also includes determining whether an application associated with the cache miss is running on a single processor core and/or whether the application is running on two or more processor cores that share a cache. A cache coherency algorithm is executed in response to determining that the application associated with the cache miss is running on two or more processor cores that do not share a cache, and is skipped in response to determining that the application associated with the cache miss is running on one of a single processor core and two or more processor cores that share a cache.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 30, 2010
    Applicant: International Business Machines Corporation
    Inventors: Marcus L. Kornegay, Ngan N. Pham
  • Publication number: 20100312968
    Abstract: A common L2 cache unit of a CPU constituting a multicore processor has, in addition to a PFPORT arranged for each CPU core unit, a common PFPORT shared by the plurality of CPU core units. The common PFPORT secures an entry when a prefetch request loaded from the PFPORT into an L2 pipeline processing unit fails to be completed. The uncompleted prefetch request is then loaded again, from the common PFPORT, into the L2 pipeline processing unit.
    Type: Application
    Filed: August 13, 2010
    Publication date: December 9, 2010
    Applicant: FUJITSU LIMITED
    Inventor: Toru Hikichi
  • Publication number: 20100312969
    Abstract: Methods and apparatus provide for interconnecting one or more multiprocessors and one or more external devices through one or more configurable interface circuits, which are adapted for operation in: (i) a first mode to provide a coherent symmetric interface; or (ii) a second mode to provide a non-coherent interface.
    Type: Application
    Filed: August 18, 2010
    Publication date: December 9, 2010
    Applicant: Sony Computer Entertainment Inc.
    Inventors: Takeshi Yamazaki, Scott Douglas Clark, Charles Ray Johns, James Allan Kahle
  • Publication number: 20100312967
    Abstract: A cache control apparatus is provided in a computer system including an access source and a storage apparatus. The apparatus determines, based on I/O status information denoting the I/O status in accordance with an I/O command from the access source, whether or not the I/O performance of the access source drops. In a case where the result of this determination is affirmative, the cache control apparatus changes a cache utilization status, specified from cache utilization status information denoting the cache utilization status related to a cache area, to a cache utilization status that improves I/O performance.
    Type: Application
    Filed: July 29, 2009
    Publication date: December 9, 2010
    Inventors: Yosuke Kasai, Manabu Obana, Akihiko Sakaguchi
  • Publication number: 20100306475
    Abstract: A cache memory system includes a first array of storage elements each configured to store a cache line, a second array of storage elements corresponding to the first array of storage elements each configured to store a first partial status of the cache line in the corresponding storage element of the first array, and a third array of storage elements corresponding to the first array of storage elements each configured to store a second partial status of the cache line in the corresponding storage element of the first array. The second partial status indicates whether or not the cache line has been modified. When the cache memory system modifies the cache line within a storage element of the first array, it writes only the second partial status in the corresponding storage element of the third array to indicate that the cache line has been modified but refrains from writing the first partial status in the corresponding storage element of the second array.
    Type: Application
    Filed: May 27, 2009
    Publication date: December 2, 2010
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: Rodney E. Hooker, Colin Eddy, G. Glenn Henry
  • Patent number: 7840759
    Abstract: Methods and systems for shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores are provided. Embodiments include receiving from a processor core a request to load a cache line in the shared cache; determining whether the shared cache is full; determining whether a cache line is stored in the shared cache that has been accessed by fewer than all the processor cores sharing the cache if the shared cache is full; and evicting a cache line that has been accessed by fewer than all the processor cores sharing the cache if a cache line is stored in the shared cache that has been accessed by fewer than all the processor cores sharing the cache.
    Type: Grant
    Filed: March 21, 2007
    Date of Patent: November 23, 2010
    Assignee: International Business Machines Corporation
    Inventors: Marcus L. Kornegay, Ngan Pham
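    Illustrative sketch: the eviction preference above in C, assuming each shared line tracks which cores have accessed it in a bitmask. When the cache is full, the first line whose mask does not cover all cores is evicted; if every line has been touched by all cores, the caller falls back to another policy. Core count and names are assumptions.

      #include <stdint.h>
      #include <stdio.h>

      #define NCORES 4
      #define ALL_CORES ((1u << NCORES) - 1)

      typedef struct { uint32_t accessed_by; } shared_line;

      static void on_access(shared_line *l, int core)
      {
          l->accessed_by |= 1u << core;
      }

      /* Return a way whose line was accessed by fewer than all cores, else -1. */
      static int pick_victim(const shared_line set[], int ways)
      {
          for (int w = 0; w < ways; w++)
              if (set[w].accessed_by != ALL_CORES)
                  return w;
          return -1;      /* every line is shared by all cores: use fallback */
      }

      int main(void)
      {
          shared_line set[2] = {{ALL_CORES}, {0}};
          on_access(&set[1], 2);
          printf("victim way = %d\n", pick_victim(set, 2));   /* way 1 */
          return 0;
      }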
  • Publication number: 20100293334
    Abstract: Version indicators within an existing range can be associated with a data partition in a distributed data store. A partition reconfiguration can be associated with one of multiple partitions in the data store, and a new version indicator that is outside the existing range can be assigned to the reconfigured partition. Additionally, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that are configured to communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data. The response message can include the requested updated location information.
    Type: Application
    Filed: May 15, 2009
    Publication date: November 18, 2010
    Applicant: Microsoft Corporation
    Inventors: Lu Xun, Hua-Jun Zeng, Muralidhar Krishnaprasad, Radhakrishnan Srikanth, Ankur Agrawal, Balachandar Pavadaisamy