Multiple Caches Patents (Class 711/119)
-
Patent number: 8959287
Abstract: A method is used in managing caches for reporting storage system information. A cache is created. The cache includes information associated with a set of storage objects of a data storage system. The information of the cache is made available to a virtual system. The virtual system uses the information for reporting storage system information. The virtual system is notified for retrieving updated storage system information from the cache.
Type: Grant
Filed: September 30, 2011
Date of Patent: February 17, 2015
Assignee: EMC Corporation
Inventors: Peter Shajenko, Jr., Deene A. Dafoe, Kevin S. Labonte
-
Publication number: 20150046649
Abstract: Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.
Type: Application
Filed: October 24, 2014
Publication date: February 12, 2015
Inventors: Michael T. Benhase, Lokesh M. Gupta, Paul H. Muench, Cheng-Chung Song
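The demotion decision described in this abstract can be sketched as a small function. This is an illustrative model only, not the patented implementation: the dict-based caches, the extent size, and the per-extent enable flags are all assumptions.

```python
# Toy model of extent-gated demotion: a track leaving the first cache is
# demoted to the second cache only if second-cache caching is enabled for
# the extent containing it; otherwise the track is simply dropped.

TRACKS_PER_EXTENT = 8  # assumed extent size for illustration

def extent_of(track_id):
    """Map a track to its containing extent (extents group consecutive tracks)."""
    return track_id // TRACKS_PER_EXTENT

def handle_eligible_track(track_id, first_cache, second_cache, extent_enabled):
    """Demote an eligible track from first_cache to second_cache, or drop it,
    depending on the per-extent second-cache enable flag."""
    data = first_cache.pop(track_id)
    if extent_enabled.get(extent_of(track_id), False):
        second_cache[track_id] = data   # enabled: keep a copy in the second cache
        return "demoted"
    return "not demoted"                # disabled: do not populate the second cache
```

For example, with `TRACKS_PER_EXTENT = 8`, track 3 falls in extent 0 and track 9 in extent 1, so the two tracks can be gated independently.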
-
Publication number: 20150046650
Abstract: A processor having a streaming unit is disclosed. In one embodiment, a processor includes a streaming unit configured to load one or more input data streams from a memory coupled to the processor. The streaming unit includes an internal network having a plurality of queues configured to store streams of data. The streaming unit further includes a plurality of operations circuits configured to perform operations on the streams of data. The streaming unit is software programmable to operatively couple two or more of the plurality of operations circuits together via one or more of the plurality of queues. The operations circuits may perform operations on multiple streams of data, resulting in corresponding output streams of data.
Type: Application
Filed: August 6, 2013
Publication date: February 12, 2015
Applicant: Oracle International Corporation
Inventors: Darryl J. Gove, David L. Weaver
-
Patent number: 8949540
Abstract: A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state.
Type: Grant
Filed: March 11, 2009
Date of Patent: February 3, 2015
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, Hien M. Le, Alvan W. Ng, Michael S. Siegel, Derek E. Williams, Phillip G. Williams
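The core of a lateral castout can be sketched as moving a victim line sideways between peer caches while preserving its coherence state. This is a simplified model under stated assumptions: the dict-per-cache representation, the state string, and the always-successful coherence response are illustrative, not the patent's protocol.

```python
# Toy lateral castout (LCO): a data-invalid victim line is removed from the
# source lower-level cache and installed, still in the data-invalid state,
# in a peer processing unit's lower-level cache.

def lateral_castout(victim_addr, source_cache, dest_cache):
    """Cast out a data-invalid victim from source_cache to dest_cache.
    Returns True on success; non-data-invalid lines are not handled here."""
    if source_cache.get(victim_addr) != "data-invalid":
        return False
    # Peer accepted the LCO: remove from source, hold at destination in the
    # same data-invalid coherence state.
    del source_cache[victim_addr]
    dest_cache[victim_addr] = "data-invalid"
    return True
```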
-
Patent number: 8949533
Abstract: The present invention provides a method and a caching node entity for ensuring that at least a predetermined number of copies of a content object are kept stored in a network comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking state values, deletable or non-deletable, which, when assigned to copies of content objects, indicate whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changes from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled.
Type: Grant
Filed: February 5, 2010
Date of Patent: February 3, 2015
Assignee: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Hareesh Puthalath, Stefan Hellkvist, Lars-Örjan Kling
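The invariant described here — at least one copy per content object marked non-deletable — can be sketched as a small repair step. The data layout and the choice of candidate (the first copy in the list) are assumptions for illustration.

```python
# Toy enforcement of the "at least one non-deletable copy" invariant: if every
# copy of a content object is currently deletable, promote one candidate copy
# to non-deletable so the object cannot vanish from the network.

def promote_if_needed(copies):
    """copies: list of dicts {'node': str, 'deletable': bool} for one object.
    Returns the node whose copy was promoted, or None if the invariant held."""
    if any(not c["deletable"] for c in copies):
        return None                    # some copy is already non-deletable
    copies[0]["deletable"] = False     # candidate copy becomes non-deletable
    return copies[0]["node"]
```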
-
Patent number: 8949535
Abstract: Technology is described for performing cache data invalidations. The method may include identifying cache update information at a first cache. The cache update information may identify a cache entry (e.g., a trending cache entry). A second cache may be selected to receive the cache update information from the first cache. The cache update information identifying the cache entry may be sent from the first cache to the second cache. For example, the second cache may be populated by adding the trending cache entry into the second cache.
Type: Grant
Filed: February 4, 2013
Date of Patent: February 3, 2015
Assignee: Amazon Technologies, Inc.
Inventor: Jamie Hunter
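The push of trending-entry update information from one cache to another can be sketched as a simple propagation loop. The dict caches and the externally supplied list of trending keys are assumptions; the patent does not specify this representation.

```python
# Toy propagation of cache update information: trending entries present in the
# first cache are copied into the second cache if not already there.

def propagate_trending(first_cache, second_cache, trending_keys):
    """Send update info for trending entries from first_cache to second_cache.
    Returns the list of keys actually added to the second cache."""
    added = []
    for key in trending_keys:
        if key in first_cache and key not in second_cache:
            second_cache[key] = first_cache[key]  # populate the second cache
            added.append(key)
    return added
```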
-
Patent number: 8949546
Abstract: Embodiments include a local cache management system that is configured to be coupled to a local cache and that includes an index engine configured to store fingerprints of message segments stored in the local cache and a redundancy management engine coupled to the index engine. The redundancy management engine includes an adaptive emitter configured to receive a message segment to be transmitted to a remote device, determine expected latency costs of a plurality of transmission algorithms, and select a transmission algorithm, such as by selecting the lowest expected latency cost. The adaptive emitter is also configured to determine whether the message segment is stored within a remote cache management system associated with the remote device, and transmit the message segment through a network to the remote cache management system using the selected transmission algorithm upon a determination that the message segment is not stored within the remote cache management system.
Type: Grant
Filed: May 31, 2012
Date of Patent: February 3, 2015
Assignee: VMware, Inc.
Inventors: Liang Cui, Chengzhong Liu, Zhifeng Xia
-
Patent number: 8949536
Abstract: A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request before destaging the updated source track to the source storage.
Type: Grant
Filed: January 8, 2014
Date of Patent: February 3, 2015
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 8949534
Abstract: A multi-CPU data processing system, comprising: a multi-CPU processor, comprising: a first CPU configured with at least a first core, a first cache, and a first cache controller configured to access the first cache; and a second CPU configured with at least a second core, and a second cache controller configured to access a second cache, wherein the first cache is configured from a shared portion of the second cache.
Type: Grant
Filed: January 15, 2013
Date of Patent: February 3, 2015
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hoi Jin Lee, Young Min Shin
-
Patent number: 8943272
Abstract: According to one aspect of the present disclosure, a method and technique for variable cache line size management is disclosed. The method includes: determining whether an eviction of a cache line from an upper level sectored cache to an unsectored lower level cache is to be performed, wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache; responsive to determining that an eviction is to be performed, identifying referenced sub-sectors of the cache line to be evicted; invalidating unreferenced sub-sectors of the cache line to be evicted; and storing the referenced sub-sectors in the lower level cache.
Type: Grant
Filed: April 20, 2012
Date of Patent: January 27, 2015
Assignee: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Wen-Tzer T. Chen, Diane G. Flemming, Hong L. Hua, William A. Maron, Mysore S. Srinivas
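The eviction path described — keep referenced sub-sectors, invalidate the rest — can be sketched directly. The per-sub-sector `referenced` flag and dict layout are illustrative assumptions, not the patent's data structures.

```python
# Toy eviction from a sectored upper-level cache line into an unsectored
# lower-level cache: referenced sub-sectors are stored in the lower cache;
# unreferenced sub-sectors are invalidated (dropped).

def evict_line(subsectors, lower_cache):
    """subsectors: dict sub_sector_id -> {'referenced': bool, 'data': object}.
    Moves referenced sub-sectors to lower_cache and invalidates the line."""
    for sid, s in subsectors.items():
        if s["referenced"]:
            lower_cache[sid] = s["data"]  # only referenced sub-sectors survive
    subsectors.clear()                    # the whole upper-level line is evicted
    return lower_cache
```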
-
Patent number: 8935476
Abstract: Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.
Type: Grant
Filed: January 17, 2012
Date of Patent: January 13, 2015
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta, Paul H. Muench, Cheng-Chung Song
-
Patent number: 8935477
Abstract: Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.
Type: Grant
Filed: February 26, 2013
Date of Patent: January 13, 2015
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta, Paul H. Muench, Cheng-Chung Song
-
Publication number: 20150012706
Abstract: A computer program product, system, and method for managing metadata for caching devices during shutdown and restart procedures. Fragment metadata for each fragment of data from the storage server stored in the cache device is generated. The fragment metadata is written to at least one chunk of storage in the cache device in a metadata directory in the cache device. For each of the at least one chunk in the cache device to which the fragment metadata is written, chunk metadata is generated for the chunk and written to the metadata directory in the cache device. Header metadata having information on access of the storage server is written to the metadata directory in the cache device. The written header metadata, chunk metadata, and fragment metadata are used to validate the metadata directory and the fragment data in the cache device during a restart operation.
Type: Application
Filed: July 8, 2013
Publication date: January 8, 2015
Inventors: Stephen L. Blinick, Clement L. Dickey, Xiao-Yu Hu, Nikolas Ioannou, Ioannis Koltsidas, Paul H. Muench, Roman Pletka, Sangeetha Seshadri
-
Patent number: 8930629
Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.
Type: Grant
Filed: October 19, 2012
Date of Patent: January 6, 2015
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
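The key idea — a deallocation request is a replacement-order hint, not an invalidation — can be modeled with a toy LRU cache: the line stays valid but is demoted to the LRU position so the next miss evicts it first. The `ToyCache` class and its methods are illustrative assumptions, not the patent's hardware mechanism.

```python
from collections import OrderedDict

class ToyCache:
    """Toy set of cache lines ordered from LRU (front) to MRU (back).
    deallocate() is a hint: the line is retained, but moved to the LRU
    position so a subsequent miss evicts it before other lines."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def touch(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)           # hit: line becomes MRU
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)     # miss: evict the current LRU
            self.lines[addr] = True

    def deallocate(self, addr):
        if addr in self.lines:                     # hit: retain data in the array,
            self.lines.move_to_end(addr, last=False)  # but demote to LRU position
```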
-
Patent number: 8930626
Abstract: A method and computer program product for dividing a cache memory system into a plurality of cache memory portions. Data to be written to a specific address within an electromechanical storage system is received. The data is assigned to one of the plurality of cache memory portions, thus defining an assigned cache memory portion. Association information for the data is generated, wherein the association information defines the specific address within the electromechanical storage system. The data and the association information are written to the assigned cache memory portion.
Type: Grant
Filed: August 30, 2013
Date of Patent: January 6, 2015
Assignee: EMC Corporation
Inventors: Roy E. Clark, Kiran Madnani, David W. DesRoches
-
Patent number: 8930636
Abstract: One embodiment sets forth a technique for ensuring relaxed coherency between different caches. Two different execution units may be configured to access different caches that may store one or more cache lines corresponding to the same memory address. During time periods between memory barrier instructions, relaxed coherency is maintained between the different caches. More specifically, writes to a cache line in a first cache that corresponds to a particular memory address are not necessarily propagated to a cache line in a second cache before the second cache receives a read or write request that also corresponds to the particular memory address. Therefore, the first cache and the second cache are not necessarily coherent during time periods of relaxed coherency. Execution of a memory barrier instruction ensures that the different caches will be coherent before a new period of relaxed coherency begins.
Type: Grant
Filed: July 20, 2012
Date of Patent: January 6, 2015
Assignee: NVIDIA Corporation
Inventors: Joel James McCormack, Rajesh Kota, Olivier Giroux, Emmett M. Kilgariff
-
Patent number: 8924653
Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
Type: Grant
Filed: October 31, 2006
Date of Patent: December 30, 2014
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Blaine D. Gaither, Judson E. Veazey
-
Patent number: 8924663
Abstract: The storage system includes a first auxiliary storage device, a second auxiliary storage device, and a main storage device. It also includes a data management unit which stores and keeps, in the main storage device, index data based on feature data, by referring to the feature data of storage target data stored in the first auxiliary storage device. If the index data stored and kept in the main storage device reaches a preset amount, the data management unit stores and keeps that index data in the second auxiliary storage device, and deletes the index data now stored and kept in the second auxiliary storage device from the main storage device.
Type: Grant
Filed: August 25, 2011
Date of Patent: December 30, 2014
Assignee: NEC Corporation
Inventors: Jerzy Szczepkowski, Michal Welnicki, Cezary Dubnicki
-
Publication number: 20140379967
Abstract: A computer has a mother board upon which is mounted a millimetre wave oscillator and a central processing unit (CPU). The millimetre wave oscillator is operable to generate a clock signal and transmit this to the CPU via a link. The clock signal may be employed as a system clock signal and a processing clock signal for the CPU. The millimetre wave oscillator allows higher frequency clock signals than are currently available whilst generating significantly less heat. Therefore, the CPU may not require any cooling system, and if it does then a smaller cooling system than is required by the prior art will suffice. Furthermore, the CPU will be more stable. This arrangement requires less power than prior art arrangements and therefore may increase the battery life of a computer.
Type: Application
Filed: January 9, 2013
Publication date: December 25, 2014
Applicant: FENTON SYSTEMS LTD
Inventor: Martin Calder
-
Patent number: 8918580
Abstract: A storage device includes a flash memory, a buffer memory and a memory controller. The buffer memory is configured to temporarily store write data to be written in the flash memory, the buffer memory including volatile RAM and non-volatile RAM. The memory controller is configured to select one of the volatile RAM and the non-volatile RAM to temporarily store the write data based on a write pattern of the write data, and to transmit a host command complete signal to a host when the write data is stored in the non-volatile RAM.
Type: Grant
Filed: February 24, 2012
Date of Patent: December 23, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventor: Wonmoon Cheon
-
Patent number: 8909866
Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
Type: Grant
Filed: November 6, 2012
Date of Patent: December 9, 2014
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan
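The MAB-fullness routing rule can be sketched as a loop that walks a prefetch request down the hierarchy until it finds a level whose assigned MAB slots are below the threshold. The threshold value, the level numbering, and the dict-based slot accounting are assumptions for illustration.

```python
# Toy routing of a prefetch request: if the target cache level's assigned
# miss-address-buffer (MAB) slots are too full, push the request to the next
# lower-level cache, repeating until a level with capacity (or the lowest
# level) is found.

def route_prefetch(level, mab_used, mab_slots, threshold=0.75, lowest_level=3):
    """Return the cache level that should actually receive the prefetch.
    mab_used / mab_slots map level -> occupied / total MAB slots."""
    while level < lowest_level and mab_used[level] / mab_slots[level] > threshold:
        level += 1   # transfer the prefetch one level down the hierarchy
    return level
```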
-
Patent number: 8909865
Abstract: A device may comprise a Universal Serial Bus (USB) interface and a wireless interface operable to communicate in accordance with the ISO 18000-7 standard. The device may be operable to receive a command via the USB interface and transmit the command via the wireless interface. The device may be operable to receive data via the wireless interface and transmit the data via the USB interface. A form factor of the USB device may be such that it can be plugged directly into a USB port without any external cabling between the USB device and said USB port.
Type: Grant
Filed: February 15, 2012
Date of Patent: December 9, 2014
Assignee: Blackbird Technology Holdings, Inc.
Inventor: John Peter Norair
-
Patent number: 8909872
Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller coupled to the central processing unit, and a closely coupled peripheral coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
Type: Grant
Filed: October 31, 2006
Date of Patent: December 9, 2014
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
-
Publication number: 20140359222
Abstract: A system comprises a storage device, a cache coupled to the storage device, and a metadata structure, coupled to the storage device and the cache, having metadata corresponding to each data location in the cache to control data promoted to the cache from the storage device.
Type: Application
Filed: February 25, 2013
Publication date: December 4, 2014
Inventor: Rayan Zachariassen
-
Patent number: 8904117
Abstract: Various systems and methods for performing write-back caching in a cluster. For example, one method can involve a first node detecting that no failover nodes are available. A determination is made whether the first node should use write-back caching or not. If the first node is to continue using write-back caching, a first local cache identifier and a global cache identifier are both updated.
Type: Grant
Filed: December 21, 2012
Date of Patent: December 2, 2014
Assignee: Symantec Corporation
Inventors: Santosh Kalekar, Niranjan S. Pendarkar, Vipul Jain, Shailesh Marathe, Anindya Banerjee, Rishikesh Bhagwandas Jethwani
-
Patent number: 8904033
Abstract: Media content is downloaded on a media device. Portions of the media content are buffered successively during the download in a buffer on the device. During the buffering, the buffered portions are read for playback. In the buffer, a non-write buffer region trails behind a current playback read position. Upon the buffering reaching an end of the buffer, the buffering of media content is continued between a buffer beginning and the non-write buffer region.
Type: Grant
Filed: June 7, 2010
Date of Patent: December 2, 2014
Assignee: Adobe Systems Incorporated
Inventor: Samuli Tapio Kekki
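The protected trailing region can be expressed as a small ring-buffer predicate: a position may be overwritten only if it is not within the guard region trailing the read position. The modular-distance formulation and the guard size are assumptions for illustration, not the patented buffer layout.

```python
# Toy check for a circular media buffer with a non-write region trailing the
# playback read position: positions within `guard` slots behind read_pos
# (including read_pos itself) are protected from overwriting.

def writable(write_pos, read_pos, buf_len, guard):
    """Return True if write_pos may be overwritten by wrap-around buffering."""
    # Distance forward from write_pos to read_pos around the ring; small
    # distances mean write_pos is inside the protected trailing region.
    return (read_pos - write_pos) % buf_len > guard
```

With `buf_len=10`, `guard=3`, and the read position at 5, positions 2, 3, 4, and 5 are protected, so wrap-around writing must stop before reaching them.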
-
Patent number: 8898393
Abstract: Methods and apparatus relating to ring protocols and techniques are described. In one embodiment, a first agent generates a request to write to a cache line of a cache over a first ring of a computing platform. A second agent that receives the write request forwards it to a third agent over the first ring of the computing platform. In turn, the third agent (e.g., a home agent) receives data corresponding to the write request over a second, different ring of the computing platform and writes the data to the cache. Other embodiments are also disclosed.
Type: Grant
Filed: May 23, 2013
Date of Patent: November 25, 2014
Assignee: Intel Corporation
Inventors: Meenakshisundaram R. Chinthamani, R. Guru Prasadh, Hari K. Nagpal, Phanindra K. Mannava
-
Patent number: 8898374
Abstract: A flash memory device includes a flash memory and a controller. The flash memory includes a single level memory module and a multi level memory module. The single level memory module includes a first data bus and at least one single level cell flash memory. Each memory cell of the single level cell flash memory stores one bit of data. The multi level memory module includes a second data bus and at least one multi level cell flash memory. Each memory cell of the multi level cell flash memory stores more than one bit of data. The first data bus is coupled to the second data bus. During a write operation, the controller writes data to the single level memory module, and the single level memory module further transmits the data to the multi level memory module through the first and second data buses coupled therebetween without passing the data through the controller.
Type: Grant
Filed: July 21, 2011
Date of Patent: November 25, 2014
Assignee: Silicon Motion, Inc.
Inventor: Tsung-Chieh Yang
-
Patent number: 8886880
Abstract: A method for destaging data from a memory of a storage controller to a striped volume is provided. The method includes determining if a stripe should be destaged from a write cache of the storage controller to the striped volume, destaging a partial stripe if a full stripe write percentage is less than a full stripe write affinity value, and destaging a full stripe if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count. The full stripe count is the number of stripes in the write cache where all chunks of a stripe are dirty. The partial stripe count is the number of stripes where at least one chunk but less than all chunks of the stripe are dirty.
Type: Grant
Filed: May 29, 2012
Date of Patent: November 11, 2014
Assignee: Dot Hill Systems Corporation
Inventors: Michael David Barrell, Zachary David Traut
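The destage decision is a simple ratio test, which can be written out directly from the abstract's definition: full stripe write percentage = full count / (full count + partial count), compared against the affinity value. The function name and the behavior when no stripe is dirty are illustrative assumptions.

```python
# Destage choice per the abstract: destage a full stripe when the full stripe
# write percentage exceeds the affinity value, otherwise a partial stripe.

def choose_destage(full_count, partial_count, affinity):
    """full_count: stripes with all chunks dirty; partial_count: stripes with
    some but not all chunks dirty; affinity: full stripe write affinity value.
    Returns 'full', 'partial', or None when nothing is dirty."""
    total = full_count + partial_count
    if total == 0:
        return None                       # no dirty stripes to destage
    fsw_pct = full_count / total          # full stripe write percentage
    return "full" if fsw_pct > affinity else "partial"
```

For example, with 3 full and 1 partial dirty stripes the percentage is 0.75, so an affinity of 0.5 selects a full stripe destage.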
-
Patent number: 8886885
Abstract: Apparatus having corresponding methods and computer-readable media comprise: a plurality of flash modules, wherein each of the flash modules comprises a cache memory; a flash memory; and a flash controller in communication with the cache memory and the flash memory; wherein the flash controller of a first one of the flash modules is configured to operate the cache memories together as a global cache; wherein the flash controller of a second one of the flash modules is configured to operate a second one of the flash modules as a directory controller for the flash memories.
Type: Grant
Filed: November 5, 2010
Date of Patent: November 11, 2014
Assignee: Marvell World Trade Ltd.
Inventors: Wei Zhou, Chee Hoe Chu, Po-Chien Chang
-
Patent number: 8874852
Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.
Type: Grant
Filed: March 28, 2012
Date of Patent: October 28, 2014
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
-
Patent number: 8874680
Abstract: A method for enforcing data integrity in an RDMA data storage system includes flushing data write requests to a data storage device before sending an acknowledgment that the data write requests have been executed. An RDMA data storage system includes a node configured to flush data write requests to a data storage device before sending an acknowledgment that a data write request has been executed.
Type: Grant
Filed: November 3, 2011
Date of Patent: October 28, 2014
Assignee: NetApp, Inc.
Inventor: Dhananjoy Das
-
Patent number: 8874845
Abstract: In one embodiment, a method includes receiving data at a cache node in a network of cache nodes, the cache node located on a data path between a source of the data and a network device requesting the data, and determining if the received data is to be cached at the cache node, wherein determining comprises calculating a cost incurred to retrieve the data. An apparatus and logic are also disclosed.
Type: Grant
Filed: April 10, 2012
Date of Patent: October 28, 2014
Assignee: Cisco Technology, Inc.
Inventors: Ashok Narayanan, David R. Oran
-
Patent number: 8868834
Abstract: Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum, and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum.
Type: Grant
Filed: October 1, 2012
Date of Patent: October 21, 2014
Assignee: Edgecast Networks, Inc.
Inventor: Andrew Lientz
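The checksum-based validation step can be sketched with an ordinary hash: serve the cached instance when its checksum matches the checksum of the origin's current instance, otherwise serve and re-cache the current instance. Using SHA-256 as the checksum is an assumption; the patent does not mandate a particular function.

```python
import hashlib

def checksum(content: bytes) -> str:
    """Illustrative checksum of a content instance (SHA-256 assumed)."""
    return hashlib.sha256(content).hexdigest()

def serve(cached_content, cached_sum, current_content):
    """Content-change validation: if the cached checksum matches the current
    instance's checksum the cache is still valid; otherwise the current
    instance replaces it. Returns (content_to_serve, its_checksum)."""
    current_sum = checksum(current_content)
    if cached_sum == current_sum:
        return cached_content, cached_sum      # unchanged: serve from cache
    return current_content, current_sum        # changed: serve current instance
```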
-
Publication number: 20140310466
Abstract: A multi-processor cache and bus interconnection system. A multi-processor is provided a segmented cache and an interconnection system for connecting the processors to the cache segments. An interface unit communicates to external devices using module IDs and timestamps. A buffer protocol includes a retransmission buffer and method.
Type: Application
Filed: June 27, 2014
Publication date: October 16, 2014
Applicant: PACT XPP TECHNOLOGIES AG
Inventors: Martin Vorbach, Volker Baumgarte, Frank May, Armin Nuckel
-
Publication number: 20140310465
Abstract: Methods, apparatus, and computer program products implement embodiments of the present invention that include receiving, by a processor in a storage system, metadata describing a first cache configured as a master cache having non-destaged data, and defining, using the received metadata, a second cache configured as a backup cache for the master cache. Subsequent to defining the second cache, the non-destaged data is retrieved from the first cache, and the non-destaged data is stored to the second cache.
Type: Application
Filed: April 16, 2013
Publication date: October 16, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David D. CHAMBLISS, Ehood GARMIZA, Leah SHALEV
-
Publication number: 20140310467
Abstract: A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores, wherein at least one of a number of the processor cores, a size of each of the plurality of caches, or a size of each of the plurality of memories is configured for performing a reverse-time-migration (RTM) computation.
Type: Application
Filed: October 26, 2012
Publication date: October 16, 2014
Inventors: John Shalf, David Donofrio, Leonid Oliker, Jens Kruger, Samuel Williams
-
Publication number: 20140310464
Abstract: Methods, apparatus and computer program products implement embodiments of the present invention that include identifying non-destaged first data in a write cache. Upon detecting second data in a master read cache, the second data is copied to one or more backup read caches, and the second data is pinned to the master and the backup read caches. Using the first data stored in the write cache and the second data stored in the master read cache, one or more parity values are calculated, and the first data and the one or more parity values are destaged.
Type: Application
Filed: April 16, 2013
Publication date: October 16, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David D. CHAMBLISS, Ehood GARMIZA, Leah SHALEV
-
Patent number: 8861011
Abstract: A print image processing system includes plural logical page interpretation units, a dual interpretation unit, a cache memory, an assignment unit, and a print image data generation unit. The logical page interpretation units interpret assigned logical pages in input print data in parallel. The dual interpretation unit interprets an assigned logical page in the print data or an element to be cached which is included in the logical page. The cache memory stores interpretation results of elements to be cached. The assignment unit assigns logical pages to the dual interpretation unit and the logical page interpretation units. The print image data generation unit generates print image data for the logical pages using the interpretation results output from the logical page interpretation units or the dual interpretation unit and the interpretation results stored in the cache memory. The print image data generation unit supplies the print image data to a printer.
Type: Grant
Filed: September 5, 2013
Date of Patent: October 14, 2014
Assignee: Fuji Xerox Co., Ltd.
Inventor: Michio Hayakawa
-
Patent number: 8862825
Abstract: A processor and an operating method are described. By diversifying the L1 memory being accessed, based on an execution mode of the processor, the operating performance of the processor may be enhanced. By disposing a local/stack section in a system dynamic random access memory (DRAM) located external to the processor, the size of a scratch pad memory may be reduced without deteriorating performance. While a core of the processor is operating in a very long instruction word (VLIW) mode, the core may access data in a cache memory, and thus a bottleneck may not occur with respect to the scratch pad memory even when a memory access to the scratch pad memory occurs from an external component.
Type: Grant
Filed: June 22, 2011
Date of Patent: October 14, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventor: Kwon Taek Kwon
-
Patent number: 8856457
Abstract: In a system including a plurality of CPU units, each having a cache memory of a different capacity, and a system controller that connects to the plurality of CPU units and controls cache synchronization, the system controller includes a cache synchronization unit which monitors an address contention between a preceding request and a subsequent request, and a setting unit which sets a different monitoring range for the contention between the preceding request and the subsequent request according to the capacity of the cache memory in each of the CPU units.
Type: Grant
Filed: November 27, 2012
Date of Patent: October 7, 2014
Assignee: Fujitsu Limited
Inventors: Yuuji Konno, Hiroshi Murakami
-
Patent number: 8856455
Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.
Type: Grant
Filed: March 28, 2012
Date of Patent: October 7, 2014
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
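The replacement-order update this abstract describes can be sketched as follows. This is a minimal illustrative model, not the patented hardware: the class and method names are invented here, and an `OrderedDict` stands in for the lower-level cache's directory. A deallocation hint leaves the line's data in place but moves it to the LRU position, so it becomes the preferred victim on the next miss.

```python
from collections import OrderedDict

class LowerLevelCache:
    """Illustrative model of a lower-level cache whose replacement-order
    field can be updated by a deallocation hint (names are assumptions)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> data; order runs LRU .. MRU

    def access(self, addr, data=None):
        if addr in self.lines:
            self.lines.move_to_end(addr)          # hit: mark most recently used
        else:
            if len(self.lines) >= self.capacity:  # miss: evict the LRU victim
                self.lines.popitem(last=False)
            self.lines[addr] = data

    def deallocate_hint(self, addr):
        # Retain the line in the data array, but update its replacement
        # order so it is the next line chosen for eviction.
        if addr in self.lines:
            self.lines.move_to_end(addr, last=False)
```

For example, with a two-line cache holding A and B, a deallocation hint on B followed by a miss on C evicts B rather than the older A.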
-
Publication number: 20140297956
Abstract: An arithmetic processing apparatus includes a plurality of first processing units to be connected to a cache memory; a plurality of second processing units to be connected to the cache memory and to acquire, into the cache memory, data to be processed by the first processing units before each of the plurality of first processing units executes processing; and a schedule processing unit to control a schedule for acquiring the data of the plurality of second processing units into the cache memory.
Type: Application
Filed: March 13, 2014
Publication date: October 2, 2014
Applicant: FUJITSU LIMITED
Inventors: Takashi Ishinaka, Jun Moroo
-
Patent number: 8850127
Abstract: Various embodiments of the present invention allow concurrent accesses to a cache. A request to update an object stored in a cache is received. A first data structure comprising a new value for the object is created in response to receiving the request. A cache pointer is atomically modified to point to the first data structure. A second data structure comprising an old value for the cached object is maintained until a process, which holds a pointer to the old value of the cached object, either ends or indicates that the old value is no longer needed.
Type: Grant
Filed: March 27, 2014
Date of Patent: September 30, 2014
Assignee: International Business Machines Corporation
Inventors: Paul M. Dantzig, Robert O. Dryfoos, Sastry S. Duri, Arun Iyengar
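The update scheme in this abstract (publish a new structure with one atomic pointer swap, keep the old structure alive for readers that still reference it) can be sketched as below. This is an illustrative model only; the class and method names are assumptions, and Python's atomic reference assignment stands in for the atomic pointer modification.

```python
class ConcurrentCache:
    """Illustrative model of concurrent cache access: readers take a
    snapshot of the current structure; an update builds a fresh structure
    and publishes it with a single atomic reference assignment, so a
    reader holding the old structure still sees the old value."""

    def __init__(self):
        self._entries = {}  # treated as immutable once published

    def get(self, key):
        # Snapshot the pointer; the referenced structure is never
        # mutated in place, so no lock is needed.
        return self._entries.get(key)

    def update(self, key, new_value):
        # Build the new data structure containing the new value ...
        new_entries = dict(self._entries)
        new_entries[key] = new_value
        # ... then atomically swap the cache pointer to it. The old
        # structure survives as long as any reader still references it.
        self._entries = new_entries
```

A reader that captured the old structure before an update continues to see the old value until it drops its reference, matching the abstract's retention of the second data structure.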
-
Publication number: 20140289468
Abstract: One aspect provides a method including: responsive to a request for data and a miss in both a first cache and a second cache, retrieving the data from memory, the first cache storing at least a subset of data stored in the second cache; inferring from information pertaining to the first cache a replacement entry in the second cache; and responsive to inferring from information pertaining to the first cache a replacement entry in the second cache, replacing an entry in the second cache with the data from memory. Other aspects are described and claimed.
Type: Application
Filed: March 25, 2013
Publication date: September 25, 2014
Applicant: International Business Machines Corporation
Inventors: Bulent Abali, Mohammad Banikazemi, Parijat Dube
-
Publication number: 20140289469
Abstract: A processor includes: processing units, each including a first cache memory; a second cache memory being shared among the processing units; an acquiring unit to acquire lock target information, including first storage location information in a first cache memory included in one of the processing units, from an access request to data cached in the second cache memory; a retaining unit to retain the lock target information until response processing to the access request is completed; and a control unit to control an access request to the second cache memory, the access request being related to a replace request to a first cache memory, based on second storage location information of replace target data in the first cache memory and the lock target information, the second storage location information being acquired from the access request related to the replace request.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Inventors: Hiroyuki ISHII, Hiroyuki KOJIMA
-
Patent number: 8843706
Abstract: Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed.
Type: Grant
Filed: February 27, 2013
Date of Patent: September 23, 2014
Assignee: International Business Machines Corporation
Inventors: Timothy H. Heil, Robert A. Shearer
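The LRU-information update in this abstract is the mirror image of a deallocation hint: a line to be retained is marked as recently accessed in the lower cache so its LRU-type policy will not pick it as a victim. A minimal sketch, with an `OrderedDict` (ordered LRU to MRU) standing in for the lower cache's LRU state and the function name invented here:

```python
from collections import OrderedDict

def touch_in_lower_cache(lower_cache_lru, line_addr):
    """Illustrative retention hint: update the lower cache's LRU
    information as if line_addr had just been accessed, so an LRU-type
    replacement policy will not select it as the next victim.
    (lower_cache_lru is an OrderedDict ordered from LRU to MRU.)"""
    if line_addr in lower_cache_lru:
        lower_cache_lru.move_to_end(line_addr)  # mark most recently used
```

After the hint, the oldest untouched line, not the retained one, becomes the eviction candidate.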
-
Publication number: 20140281239
Abstract: A method for determining an inclusion policy includes determining a ratio of a capacity of a large cache to a capacity of a core cache in a cache subsystem of a processor and selecting an inclusive policy as the inclusion policy for the cache subsystem in response to the cache ratio exceeding an inclusion threshold. The method may further include selecting a non-inclusive policy in response to the cache ratio not exceeding the inclusion threshold and, responsive to a cache transaction resulting in a cache miss, performing an inclusion operation that invokes the inclusion policy.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: Intel Corporation
Inventors: Larisa Novakovsky, Joseph Nuzman, Alexander Gendler
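The selection step in this abstract reduces to a capacity-ratio comparison against a threshold, which can be sketched directly. The function name and the default threshold value are illustrative assumptions; the publication does not specify a threshold.

```python
def choose_inclusion_policy(large_cache_bytes, core_cache_bytes, threshold=8):
    """Illustrative sketch: compute the ratio of the large cache's
    capacity to a core cache's capacity and select the inclusive policy
    only when the ratio exceeds the inclusion threshold."""
    ratio = large_cache_bytes / core_cache_bytes
    return "inclusive" if ratio > threshold else "non-inclusive"
```

For instance, a 16 MB shared cache over 1 MB core caches (ratio 16) would select the inclusive policy under this assumed threshold, while a 4 MB shared cache (ratio 4) would not.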
-
Publication number: 20140281243
Abstract: A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
Type: Application
Filed: October 26, 2012
Publication date: September 18, 2014
Inventors: John Shalf, David Donofrio, Leonid Oliker
-
Publication number: 20140281233
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes, each including a cache. The file is stored by storing portions of the file, each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion.
Type: Application
Filed: June 2, 2014
Publication date: September 18, 2014
Applicant: Google Inc.
Inventors: Andrew Kadatch, Lawrence E. Greenfield
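The placement scheme in this abstract (fill one node's cache, spill to the next node when it is full, and record each portion's location) can be sketched as follows. This is an illustrative model only: the function name is invented here, each node's cache is modeled as a plain list, and locations are recorded as (node index, offset) pairs.

```python
def store_file(chunks, node_caches, cache_capacity):
    """Illustrative sketch of the abstract's placement scheme: write
    portions (chunks) to the current node's cache until that cache is
    full, then move to a different node, recording every portion's
    location as a (node_index, offset) pair."""
    locations = []
    node = 0
    for chunk in chunks:
        if len(node_caches[node]) >= cache_capacity:
            node += 1  # current node's cache is full: select the next node
        locations.append((node, len(node_caches[node])))
        node_caches[node].append(chunk)
    return locations
```

With three nodes whose caches hold two portions each, a five-portion file lands as two portions on the first node, two on the second, and one on the third, with every portion's location recorded.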