Coherency Patents (Class 711/141)
  • Patent number: 9471313
    Abstract: Technical solutions are described for avoiding a transaction abort in a multiprocessor that supports transactional memory during out-of-order execution of an instruction stream. An example method includes detecting an instruction that represents an end of a transaction in the instruction stream. The method also includes identifying a conflict in execution of an outside instruction in conjunction with execution of the transaction, the outside instruction being after the instruction that represents the end of the transaction, where the conflict causes the transaction to abort. The method also includes flushing the outside instruction and resuming the execution of the transaction without aborting the transaction.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: October 18, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi Y. Busaba, Michael K. Gschwind, Chung-Lung K. Shum
  • Patent number: 9465740
    Abstract: An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.
    Type: Grant
    Filed: April 11, 2013
    Date of Patent: October 11, 2016
    Assignee: Apple Inc.
    Inventors: Erik P. Machnicki, Harshavardhan Kaushikkar, Shinye Shiu
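The entry above (patent 9465740) describes a speculative read that is launched alongside a duplicate-tag lookup and deleted if a kill message arrives. A minimal sketch of that flow follows, using hypothetical class names rather than the patented circuit; the stall handling is simplified to a single flag.

```python
# Minimal sketch of the speculative-read flow (hypothetical names). A read
# transaction spawns a speculative DRAM read; a duplicate-tag lookup runs
# alongside it and, on a hit, a kill message drops the held speculative read.

class MemoryController:
    def read(self, address):
        print(f"DRAM read issued for {address:#x}")

class MemoryInterfaceUnit:
    def __init__(self, memory_controller, stalled=True):
        self.memory_controller = memory_controller
        self.stalled = stalled
        self.pending = {}                        # address -> stored speculative read

    def issue_speculative_read(self, address):
        if self.stalled:
            self.pending[address] = "speculative-read"   # hold until the stall clears
        else:
            self.memory_controller.read(address)

    def kill(self, address):
        # Duplicate-tag hit: a cache already holds the line, so drop the read.
        self.pending.pop(address, None)

    def unstall(self):
        self.stalled = False
        for address in list(self.pending):
            del self.pending[address]
            self.memory_controller.read(address)

class DuplicateTagCircuit:
    def __init__(self, cached_addresses):
        self.cached = set(cached_addresses)      # copies of tags from the cache memories

    def lookup(self, address):
        return address in self.cached

mif = MemoryInterfaceUnit(MemoryController(), stalled=True)
dup_tags = DuplicateTagCircuit({0x40})
for address in (0x40, 0x80):
    mif.issue_speculative_read(address)          # start the DRAM access early
    if dup_tags.lookup(address):
        mif.kill(address)                        # 0x40 is cached: speculative read deleted
mif.unstall()                                    # only the read for 0x80 reaches DRAM
```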
  • Patent number: 9455045
    Abstract: The present invention provides a method and apparatus that includes a timing device circuit for generating a timing signal, a RAM coupled to the timing device circuit, an OTP NVM and selection logic. The RAM is operable upon receiving a burn address to read configuration data in the RAM beginning at the burn address and the OTP NVM is operable to burn the configuration data read from RAM into the OTP NVM. The OTP NVM is configured to read configuration data in the OTP NVM and the RAM is configured to store the configuration data from the OTP NVM beginning at an address in the RAM corresponding to a read start address to define a timing device configuration in the RAM.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: September 27, 2016
    Assignee: INTEGRATED DEVICE TECHNOLOGY, INC.
    Inventors: Xiaohong Zheng, Hui Li
  • Patent number: 9448938
    Abstract: A memory device having a memory controller, a main memory with at least a portion comprising persistent memory, and at least two processing entities, wherein the memory controller enables the processing entities to access the main memory according to a cache coherence protocol. The cache coherence protocol can signal when the main memory is being updated and when the update has finished. The processing entities can be configured to wait for the main memory to be updated or can access previously stored memory.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: September 20, 2016
    Assignee: Micron Technology, Inc.
    Inventors: John Rudelic, August Camber, Mostafa Naguib Abdulla
  • Patent number: 9448869
    Abstract: Aspects of the subject matter described herein relate to error detection for files. In aspects, before allowing updates to a clean file, a flag marking the file as dirty is written to non-volatile storage. Thereafter, the file may be updated as long as desired. Periodically or at some other time, the file may be marked as clean after all outstanding updates to the file and error codes associated with the file are written to storage. While waiting for outstanding updates and error codes to be written to storage, if additional requests to update the file are received, the file may be marked as dirty again prior to allowing the additional requests to update the file. The request to write a clean flag for the file may be issued lazily.
    Type: Grant
    Filed: July 24, 2014
    Date of Patent: September 20, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Thomas J. Miller, Jonathan M. Cargille, William R. Tipton, Surendra Verma
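The dirty-before-update, lazily-clean protocol in the entry above (patent 9448869) lends itself to a short sketch. The class and method names below are hypothetical, a plain dict stands in for non-volatile storage, and a CRC stands in for the file's error codes.

```python
# Minimal sketch of dirty/clean flag marking (hypothetical API). The dirty flag
# must reach storage before any update so that a crash leaves the file flagged.
import zlib

class TrackedFile:
    def __init__(self, storage):
        self.storage = storage              # dict used as stand-in for non-volatile storage
        self.dirty = False
        self.buffered_updates = []

    def update(self, data: bytes):
        if not self.dirty:
            self.storage["flag"] = "dirty"  # persist the dirty flag before the update
            self.dirty = True
        self.buffered_updates.append(data)

    def checkpoint(self):
        # Flush outstanding updates and their error code, then mark clean lazily.
        pending, self.buffered_updates = self.buffered_updates, []
        content = self.storage.get("data", b"") + b"".join(pending)
        self.storage["data"] = content
        self.storage["crc"] = zlib.crc32(content)   # error code for the file
        if not self.buffered_updates:               # no new updates arrived meanwhile
            self.storage["flag"] = "clean"
            self.dirty = False

storage = {}
f = TrackedFile(storage)
f.update(b"hello ")
f.update(b"world")
f.checkpoint()
print(storage["flag"], hex(storage["crc"]))
```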
  • Patent number: 9444496
    Abstract: A correctable parity-protected memory system may include a parity-protected memory configured to hold dirty data, an error correction register configured to hold data, an exclusive-OR (XOR) circuit configured to exclusive-OR dirty data that is written into and removed from the parity-protected memory with the data in the error-correction register, and a controller. The controller may be configured to cause the results of the XOR circuit to accumulate in the error-correction register each time dirty data is written into and removed from the parity-protected memory, and, in response to detection of a fault in dirty data in the parity-protected memory, correct the fault based on the data in the error-correction register and dirty data in the parity-protected memory.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 13, 2016
    Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Mehrtash Manoochehri, Michel Dubois
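The XOR-accumulating error-correction register in the entry above (patent 9444496) can be illustrated directly: XORing every dirty line in on entry and out on removal leaves the register equal to the XOR of all currently dirty lines, so a single faulty dirty line can be rebuilt. The names below are hypothetical and the line size is arbitrary.

```python
# Minimal sketch of the XOR error-correction register for dirty cache data.

LINE_BYTES = 8

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ParityProtectedCache:
    def __init__(self):
        self.dirty_lines = {}                 # address -> line data
        self.ecr = bytes(LINE_BYTES)          # error-correction register

    def write_dirty(self, address, data: bytes):
        if address in self.dirty_lines:       # fold out the old value first
            self.ecr = xor(self.ecr, self.dirty_lines[address])
        self.ecr = xor(self.ecr, data)        # accumulate the new dirty data
        self.dirty_lines[address] = data

    def evict(self, address):
        self.ecr = xor(self.ecr, self.dirty_lines.pop(address))   # remove from the register

    def correct(self, faulty_address) -> bytes:
        # Parity detected a fault in one dirty line: XOR of the register with
        # every other dirty line recovers the lost data.
        recovered = self.ecr
        for addr, data in self.dirty_lines.items():
            if addr != faulty_address:
                recovered = xor(recovered, data)
        self.dirty_lines[faulty_address] = recovered
        return recovered

cache = ParityProtectedCache()
cache.write_dirty(0x10, b"AAAAAAAA")
cache.write_dirty(0x20, b"BBBBBBBB")
assert cache.correct(0x20) == b"BBBBBBBB"     # line 0x20 rebuilt after a parity fault
```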
  • Patent number: 9424344
    Abstract: A method for natural language search for variables is provided. The method may include searching an index using key words from a user's natural language question and the context of the user's question. The index may reference variables and/or web service calls in a domain model. The method may also include saving documents obtained in response to the search. The method may also include mapping each of the documents as a node into an object graph. Each node may be associated with a parent node, except when the node is a root node. The method may also include identifying the root node of each document. The method may also include identifying the path of each node from the node to the node's root node. The method may also include identifying matching paths. Each matching path may provide an answer to the user's question.
    Type: Grant
    Filed: May 7, 2014
    Date of Patent: August 23, 2016
    Assignee: Bank of America Corporation
    Inventors: Viju Kothuvatiparambil, Ramakrishna R. Yannam
  • Patent number: 9418018
    Abstract: A Fill Buffer (FB) based data forwarding scheme that stores a combination of Virtual Address (VA), TLB (Translation Look-aside Buffer) entry# or an indication of a location of a Page Table Entry (PTE) in the TLB, and TLB page size information in the FB and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB, and are then further qualified with the TLB entry# to determine a "hit." This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained as a result of the Ld execution. The Ld is then re-executed later and tries to complete successfully with the correct data.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: August 16, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Karthik Sundaram, Rama Gopal, Murali Chinnakonda
  • Patent number: 9400607
    Abstract: Embodiments are directed towards storing data in a storage system. A data controller may obtain a write request and write data from a client computer. A write message may be generated and provided to a data coordinator computer. The data coordinator may communicate the write message to a plurality of L-node computers. The data coordinator may obtain write confirmation messages from the L-node computers that indicate that the write data is stored. If enough write confirmation messages are obtained to indicate that a quorum is reached, the data coordinator may communicate a save confirmation message to the data controller. The data controller may generate a write acknowledgement message based on the save confirmation message provided by the data coordinator. The data controller may provide the write acknowledgement message to the client computer that made the original write request.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: July 26, 2016
    Assignee: Igneous Systems, Inc.
    Inventors: Asif Arif Daud, Andrew Martin Pilloud, Eric Michael Lemar, Triantaphyllos Byron Rakitzis
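The quorum-based write path in the entry above (patent 9400607) maps onto a short sketch: the controller forwards a write to a coordinator, the coordinator fans it out to L-nodes, and the save is confirmed once a majority acknowledge. All names are hypothetical and network messages are replaced by direct method calls.

```python
# Minimal sketch of a quorum write through a controller and coordinator.

class LNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def store(self, key, data) -> bool:
        self.blocks[key] = data
        return True                                  # write confirmation message

class DataCoordinator:
    def __init__(self, l_nodes):
        self.l_nodes = l_nodes
        self.quorum = len(l_nodes) // 2 + 1

    def write(self, key, data) -> bool:
        confirmations = sum(1 for node in self.l_nodes if node.store(key, data))
        return confirmations >= self.quorum          # save confirmation to the controller

class DataController:
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def handle_write_request(self, key, data) -> str:
        saved = self.coordinator.write(key, data)
        return "ACK" if saved else "RETRY"           # write acknowledgement to the client

controller = DataController(DataCoordinator([LNode(f"L{i}") for i in range(5)]))
print(controller.handle_write_request("blob-1", b"payload"))    # -> ACK
```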
  • Patent number: 9396127
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: July 19, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
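The reservation hand-off in the entry above (patent 9396127) can be modeled as a small state machine: the core tracks the reservation only during its tracking interval, and a conflicting invalidation seen in that window cancels the reservation and fails the next store-conditional. The classes below are a hypothetical single-threaded approximation, not the IBM design.

```python
# Minimal sketch of load-reserve / store-conditional reservation tracking.

class Core:
    def __init__(self):
        self.reservation_flag = False
        self.tracking = False                # True during the core tracking interval
        self.reserved_address = None

    def load_reserve(self, address, l2):
        self.reservation_flag = True
        self.tracking = True
        self.reserved_address = address
        l2.receive_load_reserve(address)     # transmit the load-reserve to the L2

    def l2_assumed_tracking(self):
        self.tracking = False                # L2 is now responsible for the reservation

    def snoop_invalidate(self, address):
        if self.tracking and address == self.reserved_address:
            self.reservation_flag = False    # conflicting snoop during the interval

    def store_conditional(self, address) -> bool:
        ok = self.reservation_flag and address == self.reserved_address
        self.reservation_flag = False
        return ok

class StoreInL2:
    def __init__(self):
        self.reservations = set()

    def receive_load_reserve(self, address):
        self.reservations.add(address)       # (hand-off signal omitted in this sketch)

core, l2 = Core(), StoreInL2()
core.load_reserve(0x80, l2)
core.snoop_invalidate(0x80)                  # invalidation lands during the interval
print(core.store_conditional(0x80))          # -> False: the store-conditional fails
```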
  • Patent number: 9390014
    Abstract: A synchronization capability to synchronize updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures after the instruction has completed that updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: July 12, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 9390011
    Abstract: A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation is treated as a cache miss to ensure that the requesting CPU will receive valid data.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: July 12, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Naveen Bhoria, Raguram Damodaran, Abhijeet Ashok Chachad
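The range check described in the entry above (patents 9390011/9244837) is simple enough to show directly: while a block invalidate is in progress, any access that falls inside its address range is forced to miss. The cache model below is hypothetical and ignores sets, ways, and timing.

```python
# Minimal sketch of forcing misses inside an in-progress block invalidate range.

class Cache:
    def __init__(self):
        self.lines = {}                       # address -> data
        self.invalidate_range = None          # (start, end) while a block invalidate runs

    def start_block_invalidate(self, start, end):
        self.invalidate_range = (start, end)  # background invalidation begins

    def finish_block_invalidate(self):
        start, end = self.invalidate_range
        for addr in [a for a in self.lines if start <= a < end]:
            del self.lines[addr]
        self.invalidate_range = None

    def access(self, address, fetch_from_memory):
        if self.invalidate_range is not None:
            start, end = self.invalidate_range
            if start <= address < end:
                # Inside the range being invalidated: treat as a miss and do not
                # refill, since the background invalidation would wipe it anyway.
                return fetch_from_memory(address)
        if address in self.lines:
            return self.lines[address]        # ordinary hit
        data = fetch_from_memory(address)
        self.lines[address] = data
        return data

cache = Cache()
cache.lines[0x100] = b"stale"
cache.start_block_invalidate(0x000, 0x200)
print(cache.access(0x100, fetch_from_memory=lambda a: b"fresh"))   # -> b'fresh'
cache.finish_block_invalidate()
```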
  • Patent number: 9390013
    Abstract: A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request specifying a target address in the primary coherent system from an attached processor (AP) external to the primary coherent system. The CAPP includes a CAPP directory of contents of a cache memory in the AP that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. In response to the memory access request, the CAPP performs a first determination of a coherence state for the target address and allocates a master machine to service the memory access request in accordance with the first determination. Thereafter, during allocation of the master machine, the CAPP updates the coherence state and performs a second determination of the coherence state. The master machine services the memory access request in accordance with the second determination.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: July 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, David W. Cummings, Michael S. Siegel, Jeffrey A. Stuecheli
  • Patent number: 9390026
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: July 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Patent number: 9384133
    Abstract: A synchronization capability to synchronize updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures after the instruction has completed that updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: July 5, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 9367578
    Abstract: An invalidation tracker system for tracking messages in a caching architecture of a pricing and shopping platform. The caching architecture includes multiple levels each comprising one or more servers. Invalidation messages are communicated from one level to another to send invalidation messages to all servers in the caching architecture. The system receives data from provider databases to be communicated to the servers in the caching architecture. The system includes a recording module for recording all invalidation messages communicated to the servers in the caching architecture to form a set of sent invalidation messages, an analyzing module for determining the invalidation messages received at each server in the caching architecture and comparing this with the set of sent invalidation messages to identify one or more undelivered invalidation messages, and a reply module for resending the one or more identified undelivered invalidation messages to an appropriate server in the caching architecture.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: June 14, 2016
    Assignee: Amadeus S.A.S.
    Inventors: Remy Edouard Gole, Benoit Ducol, Marc Traina
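The record/compare/resend loop in the entry above (patent 9367578) is illustrated below with hypothetical names; the "send" callback stands in for the reply module's network delivery.

```python
# Minimal sketch of tracking and resending undelivered invalidation messages.

class InvalidationTracker:
    def __init__(self):
        self.sent = {}                       # server name -> set of message ids sent to it

    def record_sent(self, server, message_id):
        self.sent.setdefault(server, set()).add(message_id)

    def find_undelivered(self, server, received_ids):
        return self.sent.get(server, set()) - set(received_ids)

    def resend(self, server, received_ids, send):
        for message_id in sorted(self.find_undelivered(server, received_ids)):
            send(server, message_id)         # reply module pushes the missing message

tracker = InvalidationTracker()
for msg in ("inv-1", "inv-2", "inv-3"):
    tracker.record_sent("cache-eu-1", msg)
tracker.resend("cache-eu-1", ["inv-1", "inv-3"],
               send=lambda server, msg: print(f"resending {msg} to {server}"))
# -> resending inv-2 to cache-eu-1
```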
  • Patent number: 9369527
    Abstract: Disclosed is a storage system that includes a first node and a second node. The first node and the second node are connected to a client and a storage device. The first node includes a first memory for cache data, receives data from the client, stores the data into the first memory, and controls the storing of the content of the data into the storage device from the first memory in accordance with an instruction from the second node. The second node includes a second memory for the same cache data as the first memory. If the content of data in the storage device or the content of data in the second memory before the storing of the data is the same as the content of the data in the second memory, the second node instructs the first node not to store the content of the data into the storage device.
    Type: Grant
    Filed: February 21, 2014
    Date of Patent: June 14, 2016
    Assignee: HITACHI, LTD.
    Inventors: Tatsuo Nishibori, Nobuyuki Saika, Akira Murotani
  • Patent number: 9361230
    Abstract: A system and method are disclosed for communicating coherency information between initiator and target agents on semiconductor chips. Sufficient information to support full coherency is communicated through a socket interface using only three channels. Transaction requests are issued on one channel with responses given on a second. Intervention requests are issued on the same channel as transaction responses. Intervention responses are given on a third channel. Such an approach drastically reduces the complexity of cache coherent socket interfaces compared to conventional approaches. The net effect is faster logic, smaller silicon area, improved architecture performance, and a reduced probability of bugs by the designers of coherent initiators and targets.
    Type: Grant
    Filed: September 20, 2015
    Date of Patent: June 7, 2016
    Assignee: Qualcomm Technologies, Inc.
    Inventor: Jean-Jacques Lecler
  • Patent number: 9363335
    Abstract: The present invention is directed to a method and apparatus that enable a web-based client-server application to be used offline and that substantially obviate one or more problems due to limitations and disadvantages of the related art. According to one aspect of the present invention, a method for using a web-based client-server application offline comprises: reading a markup file including a list of web resource files and period information; connecting periodically to a web server according to the period information; and downloading the web resource files included in the markup file from the web server to a cache included in the application.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: June 7, 2016
    Assignee: LG ELECTRONICS INC.
    Inventors: Soonbo Han, Dongyoung Lee
  • Patent number: 9354918
    Abstract: Embodiments relate to migrating a local cache state with a virtual machine (VM) migration. An aspect includes detecting that a VM executing on a source host machine has been paused as part of a migration of the VM from the source host machine to a target host machine. A state of a first local cache associated with the VM is identified. The first local cache is accessible by the source host machine and includes data previously fetched from a shared storage. Pre-fetch hints that are based on the state of the first local cache are sent to the target host machine prior to the migration completing. The pre-fetch hints are utilized by the target host machine to fetch, from the shared storage, at least a subset of the data stored in the first local cache for storage in a second local cache accessible by the target host machine.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: May 31, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aayush Gupta, James L. Hafner
  • Patent number: 9336156
    Abstract: A processing device and method for cache control including tracking updates to the line state of a cache superline are described. In response to a request pertaining to a superline, a cache controller of the processing device can perform one or more read-modify-write (RMW) operations to (a) a line state vector of a line state array and (b) a counter of the line state array. Based on a determination that one or more requests to the superline have completed, the line state vector from the line state array can be written to a tag array. The cache controller can track pending line state updates to a superline outside of the tag array, and a line state update can occur in the cache controller, rather than awaiting completion of all outstanding operations on a superline. Updates to multiple line states can be maintained simultaneously, and up-to-date ECCs computed.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: May 10, 2016
    Assignee: Intel Corporation
    Inventors: Zhongying Zhang, Erik G. Hallnor, Stanley S. Kulick, Jeffrey L. Miller
  • Patent number: 9336090
    Abstract: A storage apparatus, in response to a write command specifying a write destination among multiple virtual areas, allocates a free real area of multiple real areas based on storage devices to the write-destination virtual area, of the multiple virtual areas, to which the write destination belongs, and writes write-target data conforming to the write command to the allocated real area. Where a first write command has been received subsequent to a snapshot acquisition time point, the storage apparatus erases an allocation of a first real area to a first virtual area to which the write destination specified in the first write command belongs, allocates the first real area to a free second virtual area to which a real area has not been allocated, allocates a free second real area to the first virtual area, and writes write-target data conforming to the first write command to the second real area.
    Type: Grant
    Filed: October 10, 2012
    Date of Patent: May 10, 2016
    Assignee: Hitachi, Ltd.
    Inventors: Yudai Takayama, Yuko Matsui
  • Patent number: 9304925
    Abstract: The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. Each processor has an associated return buffer allowing out of order responses of memory read data and cache snoop responses to ensure maximum bandwidth at the endpoints, and all endpoints receive status messages to simplify the return queue.
    Type: Grant
    Filed: October 23, 2013
    Date of Patent: April 5, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Kai Chirca, Matthew D Pierson
  • Patent number: 9298627
    Abstract: Embodiments include multi-processor systems, including multi-core processor systems, as well as methods for operating the same, in which at least one processor or processor core is configured to receive an instruction directing the at least one processor core to read a value associated with a memory address. In response to receiving the instruction and before execution of the instruction, the at least one processor or processor core causes ones of the plurality of mutually communicatively inter-coupled processor cores to provide a plurality of locally stored values that are stored individually in the respective processor cores and that are associated with the memory address.
    Type: Grant
    Filed: January 13, 2014
    Date of Patent: March 29, 2016
    Assignee: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Noam Mizrahi
  • Patent number: 9298621
    Abstract: A chip multi-processor (CMP) with virtual domain management. The CMP has a plurality of tiles each including a core and a cache, a mapping storage, a plurality of memory controllers, a communication bus interconnecting the tiles and the memory controllers, and machine-executable instructions. The tiles and memory controllers are responsive to the instructions to group the tiles into a plurality of virtual domains, each virtual domain associated with at least one memory controller, and to store a mapping unique to each virtual domain in the mapping storage.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: March 29, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sheng Li, Norman Paul Jouppi, Naveen Muralimanohar
  • Patent number: 9298628
    Abstract: Embodiments include multi-processor systems, including multi-core processor systems, as well as methods for operating the same, in which at least one processor or processor core is configured to receive an instruction directing the at least one processor core to read a value associated with a memory address. In response to receiving the instruction and before execution of the instruction, the at least one processor or processor core causes ones of the plurality of mutually communicatively inter-coupled processor cores to provide a plurality of locally stored values that are stored individually in the respective processor cores and that are associated with the memory address.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: March 29, 2016
    Assignee: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Noam Mizrahi
  • Patent number: 9286293
    Abstract: Aspects of the subject matter described herein relate to client-side caching. In aspects, when a client receives a request for data that is located on a remote server, the client first checks a local cache to see if the data is stored in the local cache. If the data is not stored in the local cache, the client may check a peer cache to see if the data is stored in the peer cache. If the data is not stored in the peer cache, the client obtains the data from the remote server, caches it locally, and publishes to the peer cache that the client has a copy of the data.
    Type: Grant
    Filed: November 28, 2008
    Date of Patent: March 15, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Ewan Jolly, James T. Pinkerton, Eileen C. Brown, David Matthew Kruse, Prashanth Prahalad, Vikrant H. Desai
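The local-cache, peer-cache, remote-server lookup order in the entry above (patent 9286293) is sketched below with hypothetical names. For brevity the peer cache is modeled as a shared dict that holds the data itself rather than a publication of which client holds a copy, which simplifies the publishing step described in the abstract.

```python
# Minimal sketch of client-side caching with a peer cache fallback.

class Client:
    def __init__(self, peer_cache, remote_server):
        self.local_cache = {}
        self.peer_cache = peer_cache          # shared dict standing in for peer clients
        self.remote_server = remote_server    # callable standing in for the remote server

    def get(self, key):
        if key in self.local_cache:
            return self.local_cache[key]      # fastest path: local cache hit
        if key in self.peer_cache:
            data = self.peer_cache[key]       # a peer already fetched this data
            self.local_cache[key] = data
            return data
        data = self.remote_server(key)        # fall back to the remote server
        self.local_cache[key] = data
        self.peer_cache[key] = data           # publish that this client holds a copy
        return data

peers = {}
client = Client(peers, remote_server=lambda key: f"<contents of {key}>")
print(client.get("//server/share/report.docx"))   # remote fetch, then cached
print(client.get("//server/share/report.docx"))   # served from the local cache
```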
  • Patent number: 9275696
    Abstract: Technologies are described herein for conserving energy in a multicore chip via selectively refreshing memory directory entries. Some described examples may refresh a dynamic random access memory (DRAM) that stores a cache coherence directory of a multicore chip. More particularly, a directory entry may be accessed in the cache coherence directory stored in the DRAM. Some further examples may identify a cache coherence state of a block associated with the directory entry. In some examples, refresh of the directory entry stored in the DRAM may be selectively disabled based on the identified cache coherence state of the block such that energy associated with the multicore chip is conserved.
    Type: Grant
    Filed: July 26, 2012
    Date of Patent: March 1, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
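The selective-refresh idea in the entry above (patent 9275696) is sketched below. Which coherence states are allowed to skip DRAM refresh is an assumption chosen for illustration, not a detail taken from the patent; the class and function names are likewise hypothetical.

```python
# Minimal sketch of disabling refresh for directory entries whose coherence
# state (by assumption here) does not need to be preserved in DRAM.

REFRESH_NEEDED = {"Modified", "Exclusive", "Owned"}   # assumed set of states kept refreshed

class DirectoryEntry:
    def __init__(self, block, state):
        self.block = block
        self.state = state
        self.refresh_enabled = True

def update_refresh_policy(entries):
    skipped = 0
    for entry in entries:
        entry.refresh_enabled = entry.state in REFRESH_NEEDED
        skipped += 0 if entry.refresh_enabled else 1
    return skipped                                    # entries no longer refreshed

entries = [DirectoryEntry(0x1000, "Modified"),
           DirectoryEntry(0x2000, "Invalid"),
           DirectoryEntry(0x3000, "Shared")]
print(update_refresh_policy(entries), "directory entries skip refresh")
```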
  • Patent number: 9268623
    Abstract: Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address.
    Type: Grant
    Filed: December 18, 2012
    Date of Patent: February 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
  • Patent number: 9262243
    Abstract: Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving, from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: February 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
  • Patent number: 9256537
    Abstract: A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request specifying a target address in the primary coherent system from an attached processor (AP) external to the primary coherent system. The CAPP includes a CAPP directory of contents of a cache memory in the AP that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. In response to the memory access request, the CAPP performs a first determination of a coherence state for the target address and allocates a master machine to service the memory access request in accordance with the first determination. Thereafter, during allocation of the master machine, the CAPP updates the coherence state and performs a second determination of the coherence state. The master machine services the memory access request in accordance with the second determination.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, David W. Cummings, Michael S. Siegel, Jeffrey A. Stuecheli
  • Patent number: 9256534
    Abstract: Embodiments relate to the orchestration of data shuffling among memory devices of a non-uniform memory access device. An aspect includes a method of orchestrated shuffling of data in a non-uniform memory access device that includes running an application on a plurality of threads executing on a plurality of processing nodes and identifying data to be shuffled among the plurality of processing nodes. The method includes registering the data to be shuffled and generating a plan for orchestrating the shuffling of the data. The method further includes disabling cache coherency of cache memory associated with the processing nodes and shuffling the data among all of the memory devices upon disabling the cache coherency, the shuffling performed based on the plan for orchestrating the shuffling. The method further includes restoring the cache coherency of the cache memory based on completing the shuffling of the data among all of the memory devices.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Yinan Li, Guy M. Lohman, Rene Mueller, Ippokratis Pandis, Vijayshankar Raman
  • Patent number: 9250908
    Abstract: A multi-processor cache and bus interconnection system. A multi-processor is provided with a segmented cache and an interconnection system for connecting the processors to the cache segments. An interface unit communicates to external devices using module IDs and timestamps. A buffer protocol includes a retransmission buffer and method.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: February 2, 2016
    Assignee: PACT XPP TECHNOLOGIES AG
    Inventors: Martin Vorbach, Volker Baumgarte, Frank May, Armin Nuckel
  • Patent number: 9251082
    Abstract: Read messages are issued by a client for data stored in a storage system of a networked client-server architecture. A client agent mediates between the client and the storage system. The storage system sends the requested data to the client agent by partitioning the returned data into segments for each read request. The storage system sends each segment in a separate network message.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: February 2, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lior Aronovich, Konstantin Mushkin, Oded Sonin
  • Patent number: 9251084
    Abstract: An arithmetic processing apparatus includes a plurality of processors, each of the processors having an arithmetic unit and a cache memory. The processor includes an instruction port that holds a plurality of instructions accessing data of the cache memory, a first determination unit that validates a first flag when an invalidation request for data in the cache memory is received and the cache index of the target address and the way ID of the received request match the cache index of the designated address and the way ID of a load instruction, a second determination unit that validates a second flag when target data is transmitted due to a cache miss, and an instruction re-execution determination unit that instructs re-execution of an instruction subsequent to the load instruction when both the first flag and the second flag are validated at the time of completion of an instruction in the instruction port.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: February 2, 2016
    Assignee: FUJITSU LIMITED
    Inventor: Naohiro Kiyota
  • Patent number: 9244837
    Abstract: A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation is treated as a cache miss to ensure that the requesting CPU will receive valid data.
    Type: Grant
    Filed: October 11, 2012
    Date of Patent: January 26, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Naveen Bhoria, Raguram Damodaran, Abhijeet Ashok Chachad
  • Patent number: 9244845
    Abstract: The present disclosure is directed to hardware hash tables, and more specifically, to generation of a cache coherent system such as in a Network on Chip (NoC). The present disclosure is further directed to a directory structure that includes a new field, referred to, for instance, as an encoded value, which indicates the original owner of a dirty line. Because an original holder may have held or modified the original line, tracking the original holder allows example implementations to track the agents that are potentially dirty: the encoded value can indicate the agent with the most recent unique copy of the line, which can then be shared with the other agents.
    Type: Grant
    Filed: May 12, 2014
    Date of Patent: January 26, 2016
    Assignee: NetSpeed Systems
    Inventors: Joe Rowlands, Sailesh Kumar
  • Patent number: 9239795
    Abstract: A surface cache stores pixel data on behalf of a pixel processing pipeline that is configured to generate screen tiles. The surface cache assigns hint levels to cache lines storing pixel data according to whether that pixel data is likely to be needed again. When the pixel data is needed to process a subsequent tile, the corresponding cache line is assigned a higher hint value. When the pixel data is not needed again, the corresponding cache line is assigned a lower hint value. The surface cache is configured to preferentially evict cache lines having a lower hint value, thereby preserving cache lines that store pixel data needed for future processing. In addition, a fetch controller is configured to throttle the rate at which fetch requests are issued to the surface cache to prevent situations where pixel data needed for future operations becomes prematurely evicted.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: January 19, 2016
    Assignee: NVIDIA Corporation
    Inventors: Mukesh Chand Agarwal, Narendra Keshav Rane
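The hint-driven eviction policy in the entry above (patent 9239795) is sketched below: lines whose pixel data will be needed for a later tile carry a higher hint value, and eviction prefers the lowest hint. Names, hint values, and the tie-break by insertion order are assumptions for illustration.

```python
# Minimal sketch of a surface cache that preferentially evicts low-hint lines.
import itertools

class SurfaceCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                      # tag -> (hint, insertion order, data)
        self._order = itertools.count()

    def insert(self, tag, data, hint):
        if tag not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        self.lines[tag] = (hint, next(self._order), data)

    def set_hint(self, tag, hint):
        _, order, data = self.lines[tag]
        self.lines[tag] = (hint, order, data)

    def _evict(self):
        # Preferentially evict the line with the lowest hint value (oldest first).
        victim = min(self.lines, key=lambda t: (self.lines[t][0], self.lines[t][1]))
        del self.lines[victim]

cache = SurfaceCache(capacity=2)
cache.insert("tile0:surfA", b"...", hint=1)      # needed again by a later tile
cache.insert("tile0:surfB", b"...", hint=0)      # not needed again
cache.insert("tile1:surfC", b"...", hint=0)      # evicts surfB, keeps surfA
print(sorted(cache.lines))                       # ['tile0:surfA', 'tile1:surfC']
```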
  • Patent number: 9235519
    Abstract: A home node for selecting a source node using a cache coherency protocol, comprising a logic unit cluster coupled to a directory, wherein the logic unit cluster is configured to receive a request for data from a requesting cache node, determine a plurality of nodes that hold a copy of the requested data using the directory, select one of the nodes using one or more selection parameters as the source node, and transmit a message to the source node to determine whether the source node stores a copy of the requested data, wherein the source node forwards the requested data to the requesting cache node when the requested data is found within the source node, and wherein some of the nodes are marked as being in a Shared state according to the cache coherency protocol.
    Type: Grant
    Filed: June 17, 2013
    Date of Patent: January 12, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
  • Patent number: 9229865
    Abstract: Technologies are generally described for methods, systems, and devices effective to implement one-cacheable multi-core architectures. In one example, a multi-core processor that includes a first and second tile may be configured to implement a one-cacheable architecture. The second tile may be configured to generate a request for a data block. The first tile may be configured to receive the request for the data block, and determine that the requested data block is part of a group of data blocks identified as one-cacheable. The first tile may further determine that the requested data block is stored in a first cache in the first tile. The first tile may send the data block from the first cache in the first tile to the second tile, and invalidate the data blocks of the group of data blocks in the first cache in the first tile.
    Type: Grant
    Filed: February 21, 2013
    Date of Patent: January 5, 2016
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
  • Patent number: 9223717
    Abstract: A computer cache system delays cache coherence invalidation messages related to cache lines of a common memory region to collect these messages into a combined message that can be transmitted more efficiently. This delay may be coordinated with a detection of whether the processor is executing a data-race free portion of the program so that the delay system may be used for a variety of types of programs which may have data-race and data-race free sections.
    Type: Grant
    Filed: October 8, 2012
    Date of Patent: December 29, 2015
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Gurindar S. Sohi, Hongil Yoon
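The delayed, coalesced invalidation scheme in the entry above (patent 9223717) is sketched below: during a data-race-free section, per-line invalidations to the same memory region are buffered and later sent as one combined message. The region size, class names, and message format are assumptions for illustration.

```python
# Minimal sketch of coalescing invalidation messages per memory region.

LINES_PER_REGION = 64

class InvalidationCoalescer:
    def __init__(self, send):
        self.send = send                       # callable taking (region, [line addresses])
        self.pending = {}                      # region -> set of line addresses
        self.race_free = False

    def set_race_free(self, race_free: bool):
        if self.race_free and not race_free:   # leaving the data-race-free section
            self.flush()
        self.race_free = race_free

    def invalidate(self, line_address):
        region = line_address // LINES_PER_REGION
        if self.race_free:
            self.pending.setdefault(region, set()).add(line_address)   # delay
        else:
            self.send(region, [line_address])                          # send immediately

    def flush(self):
        for region, lines in self.pending.items():
            self.send(region, sorted(lines))   # one combined message per region
        self.pending.clear()

coalescer = InvalidationCoalescer(send=lambda r, lines: print(f"region {r}: {lines}"))
coalescer.set_race_free(True)
for line in (128, 130, 131):
    coalescer.invalidate(line)
coalescer.set_race_free(False)                 # -> region 2: [128, 130, 131]
```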
  • Patent number: 9213780
    Abstract: Many computing scenarios involve an item cache or index, comprising items corresponding to source items that may change without notice, rendering the items in the item cache or index stale. It may not be possible to guarantee the freshness of the items, but it may be desirable to reduce staleness in an efficient manner. Therefore, the refreshing of items may be prioritized by first predicting the query frequency of respective items, representing the rate at which an item is retrieved from the item cache (e.g., by monitoring queries for the item), predicting an update frequency representing the rate at which the source item is updated by the source item host (e.g., by classifying the source item type), and computing a refresh utility representing the improvement in cache freshness achieved by refreshing the item. Respective items may then be prioritized for refreshing according to the computed refresh utilities.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: December 15, 2015
    Assignee: Microsoft Technology Licensing LLC
    Inventors: Joseph Yossi Azar, Eric Horvitz, Eyal Lubetzky, Dafna Shahaf
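The refresh prioritization in the entry above (patent 9213780) is sketched below. The specific utility formula is an assumption chosen to illustrate the idea that items with high query rates and frequently changing sources yield the most freshness per refresh; the names are hypothetical.

```python
# Minimal sketch of prioritizing cache refreshes by a computed refresh utility.
from dataclasses import dataclass

@dataclass
class CachedItem:
    url: str
    query_freq: float      # predicted retrievals per hour (from query monitoring)
    update_freq: float     # predicted source updates per hour (from source type)

def refresh_utility(item: CachedItem) -> float:
    # Assumed formula: stale hits avoided per refresh grows with the query rate
    # and with the probability that the source has changed since the last refresh.
    staleness_probability = item.update_freq / (item.update_freq + 1.0)
    return item.query_freq * staleness_probability

def prioritize(items, budget):
    ranked = sorted(items, key=refresh_utility, reverse=True)
    return ranked[:budget]                     # items to refresh this cycle

items = [CachedItem("news/front", query_freq=500, update_freq=12),
         CachedItem("blog/post-42", query_freq=3, update_freq=0.1),
         CachedItem("weather/seattle", query_freq=80, update_freq=4)]
for item in prioritize(items, budget=2):
    print("refresh", item.url)
```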
  • Patent number: 9213646
    Abstract: Systems and methods are disclosed for cache data value tracking. In an embodiment, a controller may be configured to select data; set a node weight for the data representing a cache hit potential for the data; store a first time stamp value for the data representing when the data was accessed; and store the data in a cache memory based on the node weight and the first time stamp value. In another embodiment, a method may comprise setting a node weight for data associated with a data access command, storing a first access counter value for the data representing a number of times new data has been stored to the cache memory when the data was accessed, and removing the data from the cache memory or maintaining the data in the cache memory based on the node weight and the first access counter value.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: December 15, 2015
    Assignee: Seagate Technology LLC
    Inventors: Margot Ann LaPanse, Joseph Masaki Baum, Stanton MacDonough Keeler, Michael Edward Baum, Thomas Dale Hosman, Robert Dale Murphy
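The node-weight and access-counter tracking in the entry above (patent 9213646) is sketched below. The retention rule that combines the weight with the entry's age is an assumption for illustration; the class and key names are hypothetical.

```python
# Minimal sketch of cache retention based on a node weight and an access counter.

class ValueTrackingCache:
    def __init__(self, max_age):
        self.max_age = max_age            # insertions an entry may sit unused
        self.insert_counter = 0           # counts new data stored into the cache
        self.entries = {}                 # key -> (node_weight, counter at last access)

    def store(self, key, node_weight):
        self.insert_counter += 1
        self.entries[key] = (node_weight, self.insert_counter)

    def access(self, key):
        weight, _ = self.entries[key]
        self.entries[key] = (weight, self.insert_counter)

    def sweep(self):
        # Assumed rule: keep an entry if it was accessed recently OR its node
        # weight (hit potential) is high enough to outweigh its age.
        for key, (weight, seen) in list(self.entries.items()):
            age = self.insert_counter - seen
            if age > self.max_age and weight < age:
                del self.entries[key]

cache = ValueTrackingCache(max_age=2)
cache.store("lba-100", node_weight=1)
for key in ("lba-200", "lba-300", "lba-400"):
    cache.store(key, node_weight=5)
cache.sweep()
print(sorted(cache.entries))      # 'lba-100' aged out; higher-weight entries remain
```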
  • Patent number: 9201809
    Abstract: Various embodiments of accidental shared volume erasure prevention include systems, methods, and/or computer program products for receiving a request to access a volume from a requesting system, determining whether the volume is associated with any system other than the requesting system, and preventing accidental erasure of the volume based on the determination.
    Type: Grant
    Filed: May 14, 2013
    Date of Patent: December 1, 2015
    Inventors: Gavin S. Johnson, Michael J. Koester, John R. Paveza
  • Patent number: 9201792
    Abstract: A multi-core processing apparatus may provide a cache probe and data retrieval method. The method may comprise sending a memory request from a requester to a record keeping structure. The memory request may have a memory address of a memory that stores requested data. The method may further comprise determining that a local last accessor of the memory address may have a copy of the requested data up to date with the memory. The local last accessor may be within a local domain that the requester belongs to. The method may further comprise sending a cache probe to the local last accessor and retrieving a latest value of the requested data from the local last accessor to the requester.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: December 1, 2015
    Assignee: Intel Corporation
    Inventors: Simon C. Steely, Jr., Samantika Subramaniam, William C. Hasenplaugh, Joel S. Emer
  • Patent number: 9189413
    Abstract: A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data.
    Type: Grant
    Filed: June 20, 2011
    Date of Patent: November 17, 2015
    Assignee: International Business Machines Corporation
    Inventor: Paul E. McKenney
  • Patent number: 9183156
    Abstract: A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data.
    Type: Grant
    Filed: November 29, 2013
    Date of Patent: November 10, 2015
    Assignee: International Business Machines Corporation
    Inventor: Paul E. McKenney
  • Patent number: 9170931
    Abstract: Examples disclose partitioning a volatile memory into a high performance partition and a low performance partition. Further, the example discloses retrieving an application with high performance data and low performance data from a non-volatile memory to place the high and low performance data in the high and low performance partitions, respectively. Additionally, the example discloses receiving a request to decrease power and, in response, reducing the amount of power provided to the high performance partition while maintaining the amount of power provided to the low performance partition.
    Type: Grant
    Filed: October 27, 2011
    Date of Patent: October 27, 2015
    Assignee: QUALCOMM INCORPORATED
    Inventor: Yoon K Wong
  • Patent number: 9164698
    Abstract: Memories having internal processors and methods of data communication within such memories are provided. One such memory may include a fetch unit configured to substantially control performing commands on a memory array based on the availability of banks to be accessed. The fetch unit may receive instructions including commands indicating whether data is to be read from or written to a bank, and the address of the data to be read from or written to the bank. The fetch unit may perform the commands based on the availability of the bank. In one embodiment, control logic communicates with the fetch unit when an activated bank is available. In another implementation, the fetch unit may wait for a bank to become available based on timers indicating when a previous command in the activated bank has been performed.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: October 20, 2015
    Assignee: Micron Technology, Inc.
    Inventors: Robert M. Walker, Dan Skinner, J. Thomas Pawlowski
  • Patent number: 9160709
    Abstract: Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end.
    Type: Grant
    Filed: September 4, 2014
    Date of Patent: October 13, 2015
    Assignee: Open Text S.A.
    Inventor: Mark R. Scheevel
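The two-level lookup in the entry above (patent 9160709) is sketched below: the primary cache entry for a URL stores metadata naming which request attributes vary the response, and those attribute values plus the URL form the secondary key under which the rendered page version is cached. Class names, the hashing of the secondary key, and the generator interface are assumptions for illustration.

```python
# Minimal sketch of primary/secondary cache addressing for versioned pages.
import hashlib

class VersionedPageCache:
    def __init__(self, page_generator):
        self.primary = {}                 # URL -> list of attribute names to consult
        self.secondary = {}               # secondary key -> rendered page
        self.page_generator = page_generator

    def _secondary_key(self, url, attrs, request):
        values = tuple(request.get(name, "") for name in attrs)
        return hashlib.sha1(repr((url, values)).encode()).hexdigest()

    def get(self, url, request):
        attrs = self.primary.get(url)
        if attrs is not None:
            key = self._secondary_key(url, attrs, request)
            if key in self.secondary:
                return self.secondary[key]            # cached version for these attributes
        # Miss: forward to the back-end page generator and cache the result.
        page, varies_on = self.page_generator(url, request)
        self.primary[url] = varies_on
        self.secondary[self._secondary_key(url, varies_on, request)] = page
        return page

def generator(url, request):
    lang = request.get("Accept-Language", "en")
    return f"<html>{url} in {lang}</html>", ["Accept-Language"]

cache = VersionedPageCache(generator)
print(cache.get("/home", {"Accept-Language": "en"}))   # generated, then cached
print(cache.get("/home", {"Accept-Language": "de"}))   # different version, generated
print(cache.get("/home", {"Accept-Language": "en"}))   # served from the cache
```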