Coherency Patents (Class 711/141)
-
Patent number: 9164698
Abstract: Memories having internal processors and methods of data communication within such memories are provided. One such memory may include a fetch unit configured to substantially control performing commands on a memory array based on the availability of banks to be accessed. The fetch unit may receive instructions including commands indicating whether data is to be read from or written to a bank, and the address of the data to be read from or written to the bank. The fetch unit may perform the commands based on the availability of the bank. In one embodiment, control logic communicates with the fetch unit when an activated bank is available. In another implementation, the fetch unit may wait for a bank to become available based on timers set to when a previous command in the activated bank has been performed.
Type: Grant
Filed: May 5, 2014
Date of Patent: October 20, 2015
Assignee: Micron Technology, Inc.
Inventors: Robert M. Walker, Dan Skinner, J. Thomas Pawlowski
-
Patent number: 9160709
Abstract: Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end.
Type: Grant
Filed: September 4, 2014
Date of Patent: October 13, 2015
Assignee: Open Text S.A.
Inventor: Mark R. Scheevel
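The two-level lookup described in this abstract can be sketched in a few lines. This is an illustrative Python model, not the patented implementation; the class and key layout are assumptions made for clarity:

```python
# Two-level page cache: the primary entry records which request attributes
# "vary" the response; the secondary entry is keyed by URL plus the values
# of those attributes.
class VaryCache:
    def __init__(self):
        self.primary = {}    # url -> sorted list of attribute names to consult
        self.secondary = {}  # (url, tuple of attribute values) -> cached page

    def store(self, url, attrs, page):
        names = sorted(attrs)
        self.primary[url] = names
        key = (url, tuple(attrs[n] for n in names))
        self.secondary[key] = page

    def lookup(self, url, request_attrs):
        names = self.primary.get(url)
        if names is None:
            return None                     # never cached: go to page generator
        key = (url, tuple(request_attrs.get(n) for n in names))
        return self.secondary.get(key)      # None -> forward to the back end

cache = VaryCache()
cache.store("/home", {"cookie": "lang=en"}, "<html>english</html>")
hit = cache.lookup("/home", {"cookie": "lang=en"})
miss = cache.lookup("/home", {"cookie": "lang=fr"})
```

A miss at the secondary address (as with the `lang=fr` request) is what triggers the forward to the page generator.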
-
Patent number: 9158582
Abstract: A method of managing execution threads launched by processes being executed in a computer unit having at least one calculation core connected to a shared memory.
Type: Grant
Filed: July 17, 2013
Date of Patent: October 13, 2015
Assignee: MORPHO
Inventors: Sebastien Bronsart, Matthieu Darbois, Cedric Thuillier
-
Patent number: 9146871
Abstract: A multi-core processing apparatus may provide a cache probe and data retrieval method. The method may comprise sending a memory request from a requester to a record keeping structure. The memory request may have a memory address of a memory that stores requested data. The method may further comprise determining a last accessor of the memory address, sending a cache probe to the last accessor, determining that the last accessor no longer has a copy of the line, and sending a request for the previously accessed version of the line. The request may bypass the tag-directories and obtain the requested data from memory.
Type: Grant
Filed: December 28, 2011
Date of Patent: September 29, 2015
Assignee: Intel Corporation
Inventors: Simon C. Steely, Jr., William C. Hasenplaugh, Joel S. Emer
-
Patent number: 9141545
Abstract: A cache coherence manager, disposed in a multi-core microprocessor, includes a request unit, an intervention unit, a response unit and an interface unit. The request unit receives coherent requests and selectively issues speculative requests in response. The interface unit selectively forwards the speculative requests to a memory. The interface unit includes at least three tables. Each entry in the first table represents an index to the second table. Each entry in the second table represents an index to the third table. The entry in the first table is allocated when a response to an associated intervention message is stored in the first table but before the speculative request is received by the interface unit. The entry in the second table is allocated when the speculative request is stored in the interface unit. The entry in the third table is allocated when the speculative request is issued to the memory.
Type: Grant
Filed: December 2, 2014
Date of Patent: September 22, 2015
Assignee: ARM Finance Overseas Limited
Inventors: William Lee, Thomas Benjamin Berg
-
Patent number: 9129071
Abstract: This invention speeds operation for coherence writes to shared memory. This invention immediately commits coherence write data to the memory endpoint. Thus this data will be available earlier than if the memory controller stalled this write pending snoop responses. This invention computes write enable strobes for the coherence write data based upon the cache dirty tags. This invention initiates a snoop cycle based upon the address of the coherence write. The stored write enable strobes enable determination of which data to write to the endpoint memory upon a cached and dirty snoop response.
Type: Grant
Filed: October 18, 2013
Date of Patent: September 8, 2015
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Matthew D Pierson, Kai Chirca, Timothy D Anderson
-
Patent number: 9122617
Abstract: Under the present invention, a cache memory unit can be designated as a pseudo cache memory unit for another cache memory unit within a common hierarchal level. For example, in the case of a cache miss at cache memory unit "X" on cache level L2 of a hierarchy, a request is sent to a cache memory unit on cache level L3 (external), as well as to one or more other cache memory units on cache level L2. The L2 level cache memory units return search results as a hit or a miss. They typically do not search L3, nor are they written back with the L3 result (e.g., if the result is a miss). To this extent, only the immediate origin of the request is written back with L3 results, if all L2s miss. As such, the other L2 level cache memory units serve the original L2 cache memory unit as pseudo caches.
Type: Grant
Filed: November 21, 2008
Date of Patent: September 1, 2015
Assignee: International Business Machines Corporation
Inventors: Karl J. Duvalsaint, Daeik Kim, Moon J. Kim
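The lookup flow above — probe sibling L2 units as pseudo caches, fall back to L3 only if all miss, and fill only the originating L2 — can be sketched as follows. All names here are invented for illustration; the abstract does not specify an implementation:

```python
# A cache unit is just an address -> data map for this sketch.
class CacheUnit:
    def __init__(self, name):
        self.name = name
        self.lines = {}

    def lookup(self, addr):
        return self.lines.get(addr)

def pseudo_cache_read(origin, peers, l3, addr):
    hit = origin.lookup(addr)
    if hit is not None:
        return hit, "origin"
    for peer in peers:                    # sibling L2s act as pseudo caches
        hit = peer.lookup(addr)
        if hit is not None:
            return hit, peer.name         # peers are not filled or written back
    value = l3[addr]                      # all L2s missed: fetch from L3
    origin.lines[addr] = value            # only the originating L2 is filled
    return value, "L3"

a, b = CacheUnit("L2-A"), CacheUnit("L2-B")
b.lines[0x40] = "data"
value, src = pseudo_cache_read(a, [b], {0x80: "other"}, 0x40)
```

Note that a peer hit (as here) satisfies the request without touching L3 and without modifying the originating unit.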
-
Patent number: 9098325
Abstract: Example embodiments disclosed herein relate to a persistent volume at an offset of a virtual block device of a storage server. Example embodiments include requesting that a persistent volume be dissociated from a virtual block device in response to the termination of a virtual machine.
Type: Grant
Filed: February 28, 2012
Date of Patent: August 4, 2015
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Timothy Reddin, Liam Noel Kelleher, Alistair Coles, Aled Edwards
-
Patent number: 9086975
Abstract: A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state. In response to determining that the expected state matches the determined coherence state, the CAPP issues a memory access request corresponding to that received from the AP on a system fabric of the primary coherent system. In response to determining that the expected state does not match the coherence state determined by the CAPP, the CAPP transmits a failure message to the AP without issuing on the system fabric a memory access request corresponding to that received from the AP.
Type: Grant
Filed: February 26, 2013
Date of Patent: July 21, 2015
Assignee: International Business Machines Corporation
Inventors: Bartholomew Blaner, Charles Marino, Michael S. Siegel, William J. Starke, Jeff A. Stuecheli
-
Patent number: 9081707
Abstract: A method is described that includes recognizing that TLB information of one or more hardware threads is to be invalidated. The method also includes determining which ones of the one or more hardware threads are in a state in which TLB information is flushed. The method also includes directing a TLB shootdown to those of the one or more hardware threads that are in a state in which TLB information is not flushed.
Type: Grant
Filed: December 29, 2012
Date of Patent: July 14, 2015
Assignee: Intel Corporation
Inventors: Shaun M. Conrad, Russell J. Fenger, Gaurav Khanna, Rahul Seth, James B. Crossland, Anil Aggarwal
-
Patent number: 9081670
Abstract: A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache, and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction.
Type: Grant
Filed: May 8, 2014
Date of Patent: July 14, 2015
Assignee: Nimble Storage, Inc.
Inventors: Umesh Maheshwari, Varun Mehta
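Two of the admission-control mechanisms listed above — rejecting sequentially-accessed data and enforcing a per-window write budget — can be sketched as a small policy class. This is a hypothetical model with invented names, not the patented controller:

```python
class ThrottledCache:
    """Flash cache front end that limits admissions per write window."""
    def __init__(self, budget_per_window):
        self.budget = budget_per_window   # bytes admitted per window
        self.written_this_window = 0
        self.store = {}
        self.last_block = None

    def new_window(self):
        self.written_this_window = 0      # called at each write-period boundary

    def maybe_cache(self, block_no, data):
        sequential = (self.last_block is not None
                      and block_no == self.last_block + 1)
        self.last_block = block_no
        if sequential:
            return False                  # sequential data bypasses the cache
        if self.written_this_window + len(data) > self.budget:
            return False                  # throttled: window budget exhausted
        self.store[block_no] = data
        self.written_this_window += len(data)
        return True

c = ThrottledCache(budget_per_window=8)
ok1 = c.maybe_cache(10, b"abcd")   # random block: admitted
ok2 = c.maybe_cache(11, b"efgh")   # sequential follow-on: rejected
ok3 = c.maybe_cache(50, b"ijkl")   # random block: admitted, budget now full
ok4 = c.maybe_cache(90, b"mnop")   # rejected: over the window budget
```

The real controller also weighs compressibility and expected read-back time; those checks would slot in as additional predicates before admission.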
-
Patent number: 9075731
Abstract: Techniques for achieving crash consistency when performing write-behind caching using a flash storage-based cache are provided. In one embodiment, a computer system receives from a virtual machine (VM) a write request that includes data to be written to a virtual disk and caches the data in a flash storage-based cache. The computer system further logs a transaction entry for the write request in the flash storage-based cache, where the transaction entry includes information usable for flushing the data from the flash storage-based cache to a storage device storing the virtual disk. The computer system then communicates an acknowledgment to the VM indicating that the write request has been successfully processed.
Type: Grant
Filed: January 23, 2013
Date of Patent: July 7, 2015
Assignee: VMware, Inc.
Inventors: Deng Liu, Thomas A. Phelan, Ramkumar Vadivelu, Wei Zhang, Sandeep Uttamchandani, Li Zhou
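The write-log-acknowledge sequence above is the core of crash-consistent write-behind caching. A minimal sketch, with all structure invented for illustration (the patent's log format is not given in the abstract):

```python
class WriteBehindCache:
    def __init__(self):
        self.cache = {}   # offset -> data (flash-backed in the real system)
        self.log = []     # transaction entries for not-yet-flushed writes
        self.disk = {}    # the backing virtual disk

    def write(self, offset, data):
        self.cache[offset] = data
        # The log entry carries enough to flush this write later.
        self.log.append({"offset": offset, "len": len(data)})
        return "ack"      # acknowledged to the VM before reaching the disk

    def flush(self):
        while self.log:
            entry = self.log.pop(0)
            self.disk[entry["offset"]] = self.cache[entry["offset"]]

    def recover(self):
        # After a crash, the persisted log identifies still-dirty data.
        return [e["offset"] for e in self.log]

wb = WriteBehindCache()
ack = wb.write(4096, b"guest data")
dirty = wb.recover()   # what a post-crash recovery pass would replay
wb.flush()
```

The crash-consistency property is that the log entry is persisted in the same flash device as the data, so an acknowledged write can always be flushed or replayed.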
-
Patent number: 9075723
Abstract: A plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases.
Type: Grant
Filed: June 17, 2011
Date of Patent: July 7, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta, Carol S. Mellgren, Kenneth W. Todd
-
Patent number: 9069670
Abstract: In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed.
Type: Grant
Filed: October 23, 2012
Date of Patent: June 30, 2015
Assignee: Intel Corporation
Inventors: Ali-Reza Adl-Tabatabai, Yang Ni, Bratin Saha, Vadim Bassin, Gad Sheaffer, David Callahan, Jan Gray
-
Patent number: 9069674
Abstract: A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state. In response to determining that the expected state matches the determined coherence state, the CAPP issues a memory access request corresponding to that received from the AP on a system fabric of the primary coherent system. In response to determining that the expected state does not match the coherence state determined by the CAPP, the CAPP transmits a failure message to the AP without issuing on the system fabric a memory access request corresponding to that received from the AP.
Type: Grant
Filed: November 27, 2012
Date of Patent: June 30, 2015
Assignee: International Business Machines Corporation
Inventors: Bartholomew Blaner, Charles Marino, Michael S. Siegel, William J. Starke, Jeff A. Stuecheli
-
Patent number: 9069677
Abstract: Techniques, systems, and articles of manufacture for input/output de-duplication based on variable-size chunks. A method includes partitioning virtual block data into multiple variable-sized chunks, caching each of the multiple variable-sized chunks in a chunk cache according to content of each of the multiple variable-sized chunks, initializing virtual block-to-chunk mapping and chunk-to-physical block mapping for each of the multiple variable-sized chunks, and detecting duplicate disk input and/or output requests across multiple hosts based on content-based mappings of the input and/or output requests to the chunk cache and the virtual block-to-chunk mapping and chunk-to-physical block mapping for each of the multiple variable-sized chunks in the chunk cache.
Type: Grant
Filed: April 29, 2013
Date of Patent: June 30, 2015
Assignee: International Business Machines Corporation
Inventors: Rahul Balani, Sujesha Sudevalayam, Akshat Verma
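The content-based mapping described above can be sketched with a content-hash index: identical chunks from different hosts resolve to a single cached copy. This is an assumed-for-illustration model, not the patented method (which also covers variable-size chunk boundaries and a chunk-to-physical-block map):

```python
import hashlib

def chunk_key(data):
    # Content-defined identity: identical chunks hash to the same key.
    return hashlib.sha256(data).hexdigest()

class ChunkCache:
    def __init__(self):
        self.chunks = {}      # content hash -> chunk data
        self.block_map = {}   # (host, virtual block) -> content hash

    def write(self, host, vblock, data):
        key = chunk_key(data)
        duplicate = key in self.chunks   # duplicate I/O detected across hosts
        self.chunks[key] = data
        self.block_map[(host, vblock)] = key
        return duplicate

    def read(self, host, vblock):
        return self.chunks[self.block_map[(host, vblock)]]

cc = ChunkCache()
dup1 = cc.write("hostA", 7, b"shared OS image chunk")
dup2 = cc.write("hostB", 3, b"shared OS image chunk")  # same content, other host
```

Only one copy of the chunk is stored even though two hosts wrote it to different virtual blocks.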
-
Patent number: 9065706
Abstract: An abnormality detection unit, provided in at least one node among a plurality of nodes included in an information processing apparatus, detects an abnormality in a data transmission path used for data transmission via a shared memory area that is sharable between a single node and other nodes and is included in the storage unit provided in the single node or the other nodes. An error information generation unit provided in the single node generates error information based on the abnormality detected by the abnormality detection unit, and generates an interrupt with respect to a processor within its own node. The processor provided in the single node performs recovery processing based on the error information according to the interrupt.
Type: Grant
Filed: September 12, 2012
Date of Patent: June 23, 2015
Assignee: FUJITSU LIMITED
Inventors: Hideyuki Koinuma, Go Sugizaki, Toshikazu Ueki
-
Patent number: 9058195
Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags, write only the memory address of the flagged cache lines in the defined log, and subsequently clear the image modification flags.
Type: Grant
Filed: February 27, 2013
Date of Patent: June 16, 2015
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
-
Patent number: 9058270
Abstract: A mechanism is provided for detecting false sharing misses. Responsive to performing either an eviction or an invalidation of a cache line in a cache memory of the data processing system, a determination is made as to whether there is an entry associated with the cache line in a false sharing detection table. Responsive to the entry associated with the cache line existing in the false sharing detection table, a determination is made as to whether an overlap field associated with the entry is set. Responsive to the overlap field failing to be set, identification is made that a false sharing coherence miss has occurred. A first signal is then sent to a performance monitoring unit indicating the false sharing coherence miss.
Type: Grant
Filed: June 24, 2011
Date of Patent: June 16, 2015
Assignee: International Business Machines Corporation
Inventors: Harold W. Cain, III, Hubertus Franke
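The overlap-field test above can be illustrated with a small model: track which bytes of a line each core touched, set an overlap flag when touched ranges intersect, and classify the miss at eviction/invalidation time. The per-entry structure here is an assumption for illustration; the abstract only specifies the table and the overlap field:

```python
class FalseSharingDetector:
    def __init__(self):
        # line address -> {"bytes": {core: set of byte offsets}, "overlap": bool}
        self.table = {}

    def access(self, line, core, byte_offsets):
        entry = self.table.setdefault(line, {"bytes": {}, "overlap": False})
        touched = set(byte_offsets)
        for other, offs in entry["bytes"].items():
            if other != core and offs & touched:
                entry["overlap"] = True   # cores touched the *same* bytes
        entry["bytes"].setdefault(core, set()).update(touched)

    def on_invalidate(self, line):
        entry = self.table.pop(line, None)
        if entry is None:
            return None
        # No byte overlap between cores => the miss was false sharing.
        return "false_sharing" if not entry["overlap"] else "true_sharing"

d = FalseSharingDetector()
d.access(0x100, core=0, byte_offsets=range(0, 8))
d.access(0x100, core=1, byte_offsets=range(32, 40))  # disjoint bytes, same line
kind = d.on_invalidate(0x100)
```

In hardware the byte sets would be compact bit vectors per line, and the classification result would be the signal sent to the performance monitoring unit.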
-
Patent number: 9058347
Abstract: A collection of content objects and a representative content object may be stored in a k-dimensional tree. In one embodiment, a method includes receiving a content object; constructing a first k-dimensional tree in response to determining a second k-dimensional tree is storing information corresponding to a number of content objects that is equal to a number of nodes of the second k-dimensional tree; storing information corresponding to the received content object as a node in the first k-dimensional tree; and moving information corresponding to a stored content object from each node of the second k-dimensional tree to a corresponding node of the first k-dimensional tree, wherein the corresponding node of the first k-dimensional tree is identified based at least in part on content of the content object.
Type: Grant
Filed: August 30, 2012
Date of Patent: June 16, 2015
Assignee: Facebook, Inc.
Inventor: Vikram Chandrasekhar
-
Patent number: 9053022
Abstract: Some implementations disclosed herein provide techniques and arrangements for a synchronous software interface for a specialized logic engine. The synchronous software interface may receive, from a first core of a plurality of cores, a control block including a transaction for execution by the specialized logic engine. The synchronous software interface may send the control block to the specialized logic engine and wait to receive a confirmation from the specialized logic engine that the transaction was successfully executed.
Type: Grant
Filed: December 30, 2011
Date of Patent: June 9, 2015
Assignee: Intel Corporation
Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann
-
Patent number: 9055044
Abstract: In order to reduce the amount of consumption of a back-end bandwidth in a storage apparatus, a computer system includes: a first storage device; and a second storage device that is coupled to the first controller through a first interface and is coupled to the second controller through a second interface. The first controller receives data from a host computer through a first communication channel; writes the received data into the first storage device; identifies part of the received data as first data, the part satisfying a preset particular condition; and writes a replica of the first data as second data into the second storage device. The second controller reads the second data from the second storage device in response to a Read request received from the host computer through a second communication channel, and transmits the second data to the host computer through the second communication channel.
Type: Grant
Filed: December 3, 2012
Date of Patent: June 9, 2015
Assignee: Hitachi, Ltd.
Inventors: Takumi Takagi, Takashi Chikusa
-
Patent number: 9047334
Abstract: Atomically updating an in-memory data structure that is directly accessible by a processor includes comparing old information associated with an old version of the in-memory data structure with current information associated with a current version of the in-memory data structure; in the event that the old information and the current information are the same, replacing the old version with a new version of the in-memory data structure; in the event that the old information and the current information are not the same, determining a difference between the current version of the in-memory data structure and the new version of the in-memory data structure, and determining whether the difference is logically consistent; and in the event that the difference is logically consistent, merging a change in the current version with the new version.
Type: Grant
Filed: July 29, 2010
Date of Patent: June 2, 2015
Assignee: David R. Cheriton
Inventor: David R. Cheriton
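The compare-then-replace-or-merge procedure above follows an optimistic-concurrency pattern. A minimal sketch, using a version counter as the "old information" and per-key diffs for the consistency check (both are illustrative choices, not taken from the patent):

```python
class VersionedStruct:
    def __init__(self, data):
        self.version = 0
        self.data = dict(data)

def atomic_update(struct, old_version, old_data, new_data):
    if struct.version == old_version:
        struct.data = new_data               # fast path: plain replacement
        struct.version += 1
        return "replaced"
    # A concurrent writer got in first: compute what changed underneath us
    # and what our update changes, relative to the old version.
    concurrent = {k: v for k, v in struct.data.items() if old_data.get(k) != v}
    ours = {k: v for k, v in new_data.items() if old_data.get(k) != v}
    if set(concurrent) & set(ours):
        return "conflict"                    # difference is not logically consistent
    merged = dict(new_data)
    merged.update(concurrent)                # merge their change into our version
    struct.data = merged
    struct.version += 1
    return "merged"

s = VersionedStruct({"a": 1, "b": 2})
old_v, old_d = s.version, dict(s.data)
s.data["b"] = 9                              # simulated concurrent write
s.version += 1
result = atomic_update(s, old_v, old_d, {"a": 5, "b": 2})
```

Because the concurrent write touched `b` and our update touched only `a`, the two differences are disjoint and can be merged rather than retried.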
-
Patent number: 9043530
Abstract: Among other things, one or more techniques and/or systems are provided for storing data within a hybrid storage aggregate comprising a lower-latency storage tier and a higher-latency storage tier. In particular, frequently accessed data, randomly accessed data, and/or short lived data may be stored (e.g., read caching and/or write caching) within the lower-latency storage tier. Infrequently accessed data and/or sequentially accessed data may be stored within the higher-latency storage tier. Because the hybrid storage aggregate may comprise a single logical container derived from the higher-latency storage tier and the lower-latency storage tier, additional storage and/or file system functionality may be implemented across the storage tiers. For example, deduplication functionality, caching functionality, backup/restore functionality, and/or other functionality may be provided through a single file system (or other type of arrangement) and/or a cache map implemented within the hybrid storage aggregate.
Type: Grant
Filed: April 9, 2012
Date of Patent: May 26, 2015
Assignee: NetApp, Inc.
Inventors: Rajesh Sundaram, Douglas Paul Doucette, David Grunwald, Jeffrey S. Kimmel, Ashish Prakash
-
Patent number: 9043559
Abstract: Techniques for handling version information using a copy engine. In one embodiment, an apparatus comprises a copy engine configured to perform one or more operations associated with a block memory operation in response to a command. Examples of block memory operations may include copy, clear, move, and/or compress operations. In one embodiment, the copy engine is configured to handle version information associated with the block memory operation based on the command. The one or more operations may include operating on data in a cache and/or modifying entries in a memory. In one embodiment, the copy engine is configured to compare version information in the command with stored version information. The copy engine may overwrite or preserve version information based on the command. The copy engine may be a coprocessing element. The copy engine may be configured to maintain coherency with other copy engines and/or processing elements.
Type: Grant
Filed: October 23, 2012
Date of Patent: May 26, 2015
Assignee: Oracle International Corporation
Inventors: Zoran Radovic, Darryl J. Gove
-
Patent number: 9043554
Abstract: Systems, processors, and methods for keeping uncacheable data coherent. A processor includes a multi-level cache hierarchy, and uncacheable load memory operations can be cached at any level of the cache hierarchy. If an uncacheable load misses in the L2 cache, then allocation of the uncacheable load will be restricted to a subset of the ways of the L2 cache. If an uncacheable store memory operation hits in the L1 cache, then the hit cache line can be updated with the data from the memory operation. If the uncacheable store misses in the L1 cache, then the uncacheable store is sent to a core interface unit. Multiple contiguous store misses are merged into larger blocks of data in the core interface unit before being sent to the L2 cache.
Type: Grant
Filed: December 21, 2012
Date of Patent: May 26, 2015
Assignee: Apple Inc.
Inventors: Brian P. Lilly, Gerard R. Williams, III, Perumal R. Subramoniam, Pradeep Kanapathipillai
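The final step above — coalescing contiguous store misses into larger blocks before forwarding them to L2 — reduces downstream traffic. A sketch of that merging step, with the function and data shapes invented for illustration:

```python
def merge_contiguous_stores(stores):
    """stores: list of (address, bytes) in arrival order.
    Returns a list of (address, bytes) where address-contiguous runs
    have been coalesced into single larger blocks."""
    merged = []
    for addr, data in stores:
        if merged and merged[-1][0] + len(merged[-1][1]) == addr:
            prev_addr, prev_data = merged[-1]
            merged[-1] = (prev_addr, prev_data + data)   # extend the run
        else:
            merged.append((addr, data))                  # start a new block
    return merged

blocks = merge_contiguous_stores([
    (0x1000, b"\x01\x02"),
    (0x1002, b"\x03\x04"),   # contiguous with the previous store: merged
    (0x2000, b"\x05"),       # non-contiguous: a new block
])
```

The hardware equivalent lives in the core interface unit and merges into fixed-size write buffers; this sketch just shows the address-adjacency logic.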
-
Publication number: 20150143050
Abstract: The present application is directed to a control circuit that provides a directory configured to maintain a plurality of entries, wherein each entry can indicate sharing of resources, such as cache lines, by a plurality of agents/hosts. The control circuit can further consolidate one or more entries having a first format into a single entry having a second format when resources corresponding to the one or more entries are shared by the agents. The first format can include an address and a pointer representing one of the agents, and the second format can include a sharing vector indicative of more than one of the agents. In another aspect, the second format can utilize, incorporate, and/or represent multiple entries that may be indicative of one or more resources based on a position in the directory.
Type: Application
Filed: November 20, 2013
Publication date: May 21, 2015
Applicant: Netspeed Systems
Inventors: Joe ROWLANDS, Sailesh KUMAR
-
Patent number: 9037791
Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones.
Type: Grant
Filed: January 22, 2013
Date of Patent: May 19, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta, Cheng-Chung Song
-
Patent number: 9032160
Abstract: In a first embodiment, a method and computer program product for use in a storage system comprising quiescing IO commands at the sites of an active/active storage system, the active/active storage system having at least two storage sites communicatively coupled via a virtualization layer, creating a change set, unquiescing IO commands by the virtualization layers, transferring data of a change set to the other sites of the active/active storage system by the virtualization layer, and flushing the data by the virtualization layer. In a second embodiment, a method and computer program product for use in a storage system comprising fracturing a cluster of an active/active storage system, wherein the cluster includes at least two sites; stopping IO on a first site of the cluster; and rolling to a point in time on the first site.
Type: Grant
Filed: December 29, 2011
Date of Patent: May 12, 2015
Assignee: EMC Corporation
Inventors: Assaf Natanzon, Saar Cohen, Steven R. Bromling
-
Patent number: 9032151
Abstract: To ensure that the contents of a non-volatile memory device cache may be relied upon as accurately reflecting data stored on disk storage, it may be determined whether the cache contents and/or disk contents are modified during a power transition, causing cache contents to no longer accurately reflect data stored in disk storage. The cache device may be removable from the computer, and unexpected removal of the cache device may cause cache contents to no longer accurately reflect data stored in disk storage. Cache metadata may be managed during normal operations and across power transitions, ensuring that cache metadata may be efficiently accessed and reliably saved and restored across power transitions. A state of a log used by a file system may be determined prior to and subsequent to reboot of an operating system in order to determine whether data stored on a cache device may be reliably used.
Type: Grant
Filed: November 14, 2008
Date of Patent: May 12, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mehmet Iyigun, Yevgeniy Bak, Michael Fortin, David Fields, Cenk Ergan, Alexander Kirshenbaum
-
Publication number: 20150127910
Abstract: A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel.
Type: Application
Filed: January 31, 2014
Publication date: May 7, 2015
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
-
Publication number: 20150127908
Abstract: A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel.
Type: Application
Filed: November 6, 2013
Publication date: May 7, 2015
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
-
Patent number: 9026742
Abstract: A processor provides a memory request and a coherency state value for a coherency granule associated with the memory request. The processor further provides either a first indicator or a second indicator depending on whether the coherency state value represents a cumulative coherency state for a plurality of caches of the processor. The first indicator and the second indicator identify the coherency state value as representing a cumulative coherency state or a potentially non-cumulative coherency state, respectively. If the second indicator is provided, a transaction management module determines whether to request the cumulative coherency state for the coherency granule in response to receiving the second indicator. The transaction management module then provides an indicator of the request for the cumulative coherency state to the processor in response to determining to request the cumulative coherency state.
Type: Grant
Filed: December 21, 2007
Date of Patent: May 5, 2015
Assignee: Freescale Semiconductor, Inc.
Inventors: Sanjay R. Deshpande, Klas M. Bruce, Michael D. Snyder
-
Patent number: 9026736
Abstract: Described herein is a system and method for maintaining cache coherency. The system and method may maintain coherency for a cache memory that is coupled to a plurality of primary storage devices. The system and method may write data to the cache memory and associate the data with a cache generation identification (ID). A different cache generation ID may be associated with each new set of data that is written to the cache memory. The cache generation ID may be written to the primary storage devices. A backup restore operation may be performed on one of the primary storage devices and a backup restore notification may be received. In response to the notification, the system and method may compare the cache generation ID with the generation ID stored on the restored primary storage device and invalidate data stored on the cache memory for the restored primary storage device.
Type: Grant
Filed: August 6, 2012
Date of Patent: May 5, 2015
Assignee: NetApp, Inc.
Inventors: Narayan Venkat, David Franklin Lively, Kenny W. Speer
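The generation-ID comparison above can be shown with a small model: each write bumps a generation counter that is mirrored to the storage device, and after a restore the cache drops any entries newer than the generation the restored device carries. All structure here is assumed for illustration:

```python
class GenCache:
    def __init__(self):
        self.generation = 0
        self.entries = {}           # block -> (generation, data)
        self.device_generation = 0  # generation ID mirrored to primary storage

    def write(self, block, data):
        self.generation += 1
        self.entries[block] = (self.generation, data)
        self.device_generation = self.generation

    def after_restore(self, restored_generation):
        # The device was rolled back: cached data stamped with a later
        # generation no longer reflects what is on the device.
        self.entries = {b: (g, d) for b, (g, d) in self.entries.items()
                        if g <= restored_generation}
        self.device_generation = restored_generation

c = GenCache()
c.write(1, b"old")      # generation 1, mirrored to the device
c.write(2, b"new")      # generation 2
c.after_restore(1)      # device restored from a generation-1 backup
```

The comparison is what prevents the cache from serving "future" data on behalf of a device that has been rolled back in time.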
-
Patent number: 9026743
Abstract: For a flexible replication with skewed mapping in a multi-core chip, a request for a cache line is received, at a receiver core in the multi-core chip from a requester core in the multi-core chip. The receiver and requester cores comprise electronic circuits. The multi-core chip comprises a set of cores including the receiver and the requester cores. A target core is identified from the request to which the request is targeted. A determination is made whether the target core includes the requester core in a neighborhood of the target core, the neighborhood including a first subset of cores mapped to the target core according to a skewed mapping. The cache line is replicated, responsive to the determining being negative, from the target core to a replication core. The cache line is provided from the replication core to the requester core.
Type: Grant
Filed: April 30, 2012
Date of Patent: May 5, 2015
Assignee: International Business Machines Corporation
Inventors: Jian Li, William Evan Speight
-
Patent number: 9021212
Abstract: In a semiconductor memory computer equipped with a flash memory, use of backed-up data is enabled. The semiconductor memory computer includes an address conversion table for detecting physical addresses of at least two pages storing data by designating a logical address from one of logical addresses to be designated by a reading request. The semiconductor memory computer includes a page status register for detecting one page status allocated to each page, and page statuses to be detected include at least the following four statuses: (1) a latest data storage status, (2) a not latest data storage status, (3) an invalid data storage status, and (4) an unwritten status. By using the address conversion table and the page status register, at least two data items (latest data and past data) can be read for one designated logical address from a host computer.
Type: Grant
Filed: February 24, 2014
Date of Patent: April 28, 2015
Assignee: Hitachi, Ltd.
Inventor: Nagamasa Mizushima
-
Patent number: 9021209
Abstract: A processing node tracks the probe activity level associated with its cache. The processing node and/or processing system further predicts an idle duration. If the probe activity level increases above a threshold probe activity level, and the predicted idle duration is above a threshold idle duration, the processing node flushes its cache to prevent probes to the cache. If the probe activity level is above the threshold probe activity level but the predicted idle duration is too short, the performance state of the processing node is increased above its current performance state to provide enhanced performance capability in responding to the probe requests.
Type: Grant
Filed: February 8, 2010
Date of Patent: April 28, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: Alexander Branover, Maurice B. Steinman
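The decision logic described above reduces to a small policy function. This is a minimal sketch under assumed names; the return labels and thresholds are illustrative, not from the patent.

```python
def idle_policy(probe_rate, predicted_idle, probe_threshold, idle_threshold):
    """Decide what a node about to idle should do about incoming cache probes."""
    if probe_rate <= probe_threshold:
        return "enter_idle"            # probe traffic is light: idle as-is
    if predicted_idle >= idle_threshold:
        return "flush_cache"           # long idle ahead: flush so probes
                                       # need not wake the node at all
    return "raise_performance_state"   # heavy probes, short idle: boost
                                       # to answer probes faster instead
```

The trade-off is that flushing only pays off when the idle period is long enough to amortize the flush cost.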
-
Patent number: 9021211
Abstract: A coherent attached processor proxy (CAPP) participates in coherence communication in a primary coherent system on behalf of an attached processor external to the primary coherent system. The CAPP includes an epoch timer that advances at regular intervals to define epochs of operation of the CAPP. Each of one or more entries in a data structure in the CAPP are associated with a respective epoch. Recovery operations for the CAPP are initiated based on a comparison of an epoch indicated by the epoch timer and the epoch associated with one of the one or more entries in the data structure.
Type: Grant
Filed: January 11, 2013
Date of Patent: April 28, 2015
Assignee: International Business Machines Corporation
Inventors: Bartholomew Blaner, Kevin F. Reick, Michael S. Siegel, Jeff A. Stuecheli
-
Publication number: 20150113224
Abstract: Atomic write operations for storage devices are implemented by maintaining the data that would be overwritten in the cache until the write operation completes. After the write operation completes, including generating any related metadata, a checkpoint is created. After the checkpoint is created, the old data is discarded and the new data becomes the current data for the affected storage locations. If an interruption occurs prior to the creation of the checkpoint, the old data is recovered and any new data is discarded. If an interruption occurs after the creation of the checkpoint, any remaining old data is discarded and the new data becomes the current data. Write logs that indicate the locations affected by an in-progress write operation are used in some implementations. If neither all of the new data nor all of the old data is recoverable, a predetermined pattern can be written into the affected locations.
Type: Application
Filed: January 24, 2014
Publication date: April 23, 2015
Applicant: NetApp, Inc.
Inventors: Greg William Achilles, Gordon Hulpieu, Donald Roman Humlicek, Martin Oree Parrish, Kent Prosch, Alan Stewart
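The roll-back/roll-forward behavior around the checkpoint can be modeled with a small state machine. This is a toy sketch under assumed names; a real device would persist the old data and checkpoint durably.

```python
class AtomicWriter:
    """Toy model: old data stays held until a checkpoint commits the write."""

    def __init__(self, storage):
        self.storage = storage     # location -> data
        self.old = {}              # saved copies of data being overwritten
        self.new = {}              # pending new data
        self.checkpointed = False

    def write(self, location, data):
        self.old[location] = self.storage.get(location)
        self.new[location] = data

    def checkpoint(self):
        # Metadata is generated and the checkpoint recorded: from here on,
        # recovery rolls forward to the new data.
        self.checkpointed = True

    def recover(self):
        # Before the checkpoint: restore old data, discard new data.
        # After the checkpoint: discard old data, install new data.
        if self.checkpointed:
            self.storage.update(self.new)
        else:
            for loc, data in self.old.items():
                if data is None:
                    self.storage.pop(loc, None)
                else:
                    self.storage[loc] = data
        self.old.clear(); self.new.clear(); self.checkpointed = False
```

The checkpoint is the single commit point: whichever side of it the interruption falls on determines which copy of the data survives.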
-
Patent number: 9015719
Abstract: A method for scheduling tasks to be processed by one of a plurality of non-coherent processing devices, at least two of the devices being heterogeneous devices and at least some of said tasks being targeted to a specific one of the processing devices. The devices process data that is stored in local storage and in a memory accessible by at least some of the devices. The method includes the steps of: for each of a plurality of non-dependent tasks to be processed by the device, determining consistency operations required to be performed prior to processing the non-dependent task; performing the consistency operations for one of the non-dependent tasks and on completion issuing the task to the device for processing; performing consistency operations for a further non-dependent task such that, on completion of the consistency operations, the device can process the further task.
Type: Grant
Filed: February 27, 2012
Date of Patent: April 21, 2015
Assignee: ARM Limited
Inventor: Robert Elliott
-
Patent number: 9015416
Abstract: Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum.
Type: Grant
Filed: July 21, 2014
Date of Patent: April 21, 2015
Assignee: Edgecast Networks, Inc.
Inventor: Andrew Lientz
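Checksum-based validation of this kind can be sketched in a few lines. This is a simplified model, not the patented method: here the origin fetch returns the checksum and content together, whereas the patent describes requesting both concurrently and canceling the content transfer on a checksum match.

```python
import hashlib

def checksum(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

class ValidatingCache:
    """Serve a cached instance only while the origin's checksum still matches."""

    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin   # url -> (checksum, content)
        self.store = {}                    # url -> (checksum, content)

    def get(self, url):
        origin_sum, origin_content = self.fetch_origin(url)
        cached = self.store.get(url)
        if cached is not None and cached[0] == origin_sum:
            return cached[1]               # unchanged: serve cached instance
        self.store[url] = (origin_sum, origin_content)
        return origin_content              # changed or absent: serve current
```

Validating on content change rather than on a timer means the cache never serves stale content and never discards content that is still fresh.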
-
Patent number: 9015424
Abstract: A memory interconnect is provided between transaction masters and a shared memory. A first snoop request is sent to the other transaction masters to trigger them to invalidate any local copy of the data they may hold and to return any cached line of data corresponding to the write line of data that is dirty. A first write transaction is sent to the shared memory. When and if any cached line of data is received from the further transaction masters, that data is used to form a second write transaction, which is sent to the shared memory and writes into the shared memory the remaining portions of the cached line of data which were not written by the first write transaction. Serialization circuitry stalls any transaction requests to the write line of data until the first write transaction has completed.
Type: Grant
Filed: August 15, 2012
Date of Patent: April 21, 2015
Assignee: ARM Limited
Inventor: Timothy Charles Mace
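The merge performed by the second write transaction is essentially a masked byte-fill. A minimal sketch, with an assumed per-byte written mask (the patent does not specify the mask representation):

```python
def merge_write(write_line, dirty_line, written_mask):
    """Form the second write transaction's payload: keep the bytes the first
    write transaction covered, and fill the rest from the dirty cached line
    returned by the snooped master."""
    return [w if written else d
            for w, d, written in zip(write_line, dirty_line, written_mask)]
```

In effect the first transaction commits the new bytes immediately, and the snooped dirty data back-fills only the untouched remainder of the line.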
-
Patent number: 9009364
Abstract: A packet processor has a packet memory manager configured to store a page walk link list, receive a descriptor and initiate a page walk through the page walk link list in response to the descriptor and without a prompt from transmit direct memory access circuitry. The packet memory manager is configured to receive an indicator of a single page packet and read a new packet in response to the indicator without waiting to obtain page state associated with the page of the single page packet.
Type: Grant
Filed: December 27, 2013
Date of Patent: April 14, 2015
Assignee: Xpliant, Inc.
Inventors: Tsahi Daniel, Enric Musoll, Dan Tu, Sridevi Polasanapalli
-
Patent number: 9009411
Abstract: A store gathering policy is enabled or disabled at a data processing device. A store gathering policy to be implemented by a store buffer can be selected from a plurality of store gathering policies. For example, the plurality of store gathering policies can be constrained or unconstrained. A store gathering policy can be enabled by a user programmable storage location. A specific store gathering policy can be specified by a user programmable storage location. A store gathering policy can be determined based upon an attribute of a store request, such as based upon a destination address.
Type: Grant
Filed: November 20, 2012
Date of Patent: April 14, 2015
Assignee: Freescale Semiconductor, Inc.
Inventors: William C. Moyer, Quyen Pho
-
Patent number: 9009214
Abstract: A mechanism is provided for managing a process-to-process inter-cluster communication request. A call from a first application is received in a first operating system in a first data processing system. The first operating system passes the call from the first operating system to a first host fabric interface controller in the first data processing system without processing the call. The first host fabric interface processes the call to determine a second data processing system in the plurality of data processing systems with which the call is associated, wherein the call is processed by the first host fabric interface without intervention by the first operating system. The first host fabric interface initiates an inter-cluster connection to a second host fabric interface in the second data processing system. The call is then transferred to the second host fabric interface in the second data processing system via the inter-cluster connection.
Type: Grant
Filed: December 23, 2008
Date of Patent: April 14, 2015
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Piyush Chaudhary
-
Patent number: 9009416
Abstract: A method, computer program product, and computing system for reclassifying a first assigned cache portion associated with a first machine as a public cache portion associated with the first machine and at least one additional machine after the occurrence of a reclassifying event. The public cache portion includes a plurality of pieces of content received by the first machine. A content identifier for each of the plurality of pieces of content included within the public cache portion is compared with content identifiers for pieces of content included within a portion of a data array associated with the at least one additional machine to generate a list of matching data portions. The list of matching data portions is provided to at least one additional assigned cache portion within the cache system that is associated with the at least one additional machine.
Type: Grant
Filed: December 30, 2011
Date of Patent: April 14, 2015
Assignee: EMC Corporation
Inventors: Philip Derbeko, Anat Eyal, Roy E. Clark
-
Patent number: 9003162
Abstract: A request to modify an object in storage that is associated with one or more computing devices may be obtained, the storage organized based on a latch-free B-tree structure. A storage address of the object may be determined, based on accessing a mapping table that includes map indicators mapping logical object identifiers to physical storage addresses. A prepending of a first delta record to a prior object state of the object may be initiated, the first delta record indicating an object modification associated with the obtained request. Installation of a first state change associated with the object modification may be initiated via a first atomic operation on a mapping table entry that indicates the prior object state of the object. For example, the latch-free B-tree structure may include a B-tree like index structure over records as the objects, and logical page identifiers as the logical object identifiers.
Type: Grant
Filed: June 20, 2012
Date of Patent: April 7, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Lomet, Justin Levandoski, Sudipta Sengupta
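The delta-prepend-plus-atomic-install pattern can be sketched as follows. This is a single-threaded toy under assumed names; a real latch-free structure would use a hardware compare-and-swap on the mapping table slot.

```python
class DeltaRecord:
    def __init__(self, modification, prior_state):
        self.modification = modification
        self.prior = prior_state         # base state or an older delta

class MappingTable:
    """Toy mapping table: logical IDs map to object states, and updates
    install delta records via compare-and-swap, never by updating in place."""

    def __init__(self):
        self.slots = {}                  # logical id -> current state

    def cas(self, lid, expected, new):
        # Stand-in for an atomic compare-and-swap on the table entry.
        if self.slots.get(lid) is expected:
            self.slots[lid] = new
            return True
        return False

    def update(self, lid, modification):
        while True:                      # retry loop, as a lock-free design would
            prior = self.slots.get(lid)
            delta = DeltaRecord(modification, prior)
            if self.cas(lid, prior, delta):
                return delta

def read(state):
    # Reconstruct the object's history by walking the delta chain back
    # to the base state, then replaying modifications in order.
    mods = []
    while isinstance(state, DeltaRecord):
        mods.append(state.modification)
        state = state.prior
    return mods[::-1]
```

Because the old state is never mutated, concurrent readers can keep traversing it while a writer installs a new delta.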
-
Patent number: 9003130
Abstract: A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such an embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags.
Type: Grant
Filed: December 19, 2012
Date of Patent: April 7, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: James O'Connor, Bradford M. Beckmann
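The split between invalidation tags and cache-tag flags can be modeled as deferred invalidation: commands are recorded in a side structure and folded into the cache tags later. A minimal sketch with illustrative names:

```python
class TaggedCache:
    """Toy cache where invalidation commands are recorded in a side set
    (the 'invalidation tag') and applied to cache-tag valid flags lazily."""

    def __init__(self):
        self.lines = {}        # address -> (data, valid_flag)
        self.pending = set()   # invalidation tag: addresses with pending commands

    def fill(self, address, data):
        self.lines[address] = (data, True)

    def invalidate_command(self, address):
        # Record the command instead of touching the cache tag immediately.
        self.pending.add(address)

    def lookup(self, address):
        # Fold any pending invalidation into the cache tag's flag on access.
        if address in self.pending:
            data, _ = self.lines.get(address, (None, False))
            self.lines[address] = (data, False)
            self.pending.discard(address)
        data, valid = self.lines.get(address, (None, False))
        return data if valid else None
```

Deferring the flag update keeps invalidation commands off the cache's critical tag-array ports until a lookup actually touches the affected address.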
-
Publication number: 20150095589
Abstract: A cache memory system includes a cache memory, which stores cache data corresponding to portions of main data stored in a main memory and priority data respectively corresponding to the cache data; a table storage unit, which stores a priority table including information regarding access frequencies with respect to the main data; and a controller, which, when at least one from among the main data is requested, determines whether cache data corresponding to the request is stored in the cache memory, deletes one from among the cache data based on the priority data, and updates the cache data set with new data, wherein the priority data is determined based on the information regarding access frequencies.
Type: Application
Filed: September 29, 2014
Publication date: April 2, 2015
Applicant: Samsung Electronics Co., Ltd.
Inventors: Jeong-soo PARK, Kwon-taek Kwon, Jeong-ae Park
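Frequency-based eviction of this kind can be sketched as a small LFU-style cache. This is an illustrative model, not the claimed controller; the names are assumptions.

```python
class PriorityCache:
    """Toy cache that evicts the entry with the lowest access frequency,
    using the frequency table as the priority data."""

    def __init__(self, capacity, main_memory):
        self.capacity = capacity
        self.main = main_memory        # backing store: key -> data
        self.data = {}                 # cached entries
        self.freq = {}                 # priority table: access counts

    def get(self, key):
        self.freq[key] = self.freq.get(key, 0) + 1
        if key in self.data:
            return self.data[key]      # hit
        if len(self.data) >= self.capacity:
            # Delete the lowest-priority (least frequently accessed) entry.
            victim = min(self.data, key=lambda k: self.freq.get(k, 0))
            del self.data[victim]
        self.data[key] = self.main[key]
        return self.data[key]
```

Keeping the frequency counts for main data (not just for currently cached entries) lets a re-fetched hot item immediately outrank cold residents.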
-
Patent number: 8996812
Abstract: A write-back coherency data cache for temporarily holding cache lines. Upon receiving a processor request for data, a determination is made from a coherency directory whether a copy of the data is cached in a write-back cache located in a memory controller hardware. The write-back cache holds data being written back to main memory for a period of time prior to writing the data to main memory. If the data is cached in the write-back cache, the data is removed from the write-back cache and forwarded to the requesting processor. The cache coherency state in the coherency directory entry for the data is updated to reflect the current cache coherency state of the data based on the requesting processor's intended use of the data.
Type: Grant
Filed: June 19, 2009
Date of Patent: March 31, 2015
Assignee: International Business Machines Corporation
Inventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
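The intercept path can be modeled as a lookup that prefers the write-back holding cache over main memory. A minimal sketch with assumed names; the coherency-state handling here is deliberately simplified to a single label per line.

```python
class MemoryController:
    """Toy memory controller: lines awaiting write-back are held in a
    write-back cache, and requests for them are served from there."""

    def __init__(self, memory):
        self.memory = memory    # main memory: line -> data
        self.write_back = {}    # lines held on their way to main memory
        self.directory = {}     # coherency directory: line -> state

    def evict(self, line, data):
        # A processor wrote back a dirty line; hold it before the memory write.
        self.write_back[line] = data

    def read(self, line, intended_use="exclusive"):
        if line in self.write_back:
            # Hit in the write-back cache: remove the line and forward it
            # to the requesting processor instead of going to memory.
            data = self.write_back.pop(line)
        else:
            data = self.memory[line]
        # Update the directory entry for the requester's intended use.
        self.directory[line] = intended_use
        return data
```

Serving the request from the write-back cache avoids a round trip to main memory for data that was just evicted.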