Access Control Bit Patents (Class 711/145)
  • Patent number: 9336153
    Abstract: A computer system comprising a server on which an application runs and a storage system that stores data to be used by the application, wherein a cache driver is configured to change the condition of a cache area from a first cache condition, in which data is readable from the cache area but writing of data into the cache area is prohibited, to a third cache condition, in which both reading of data from the cache area and writing of data into the cache area are prohibited.
    Type: Grant
    Filed: July 23, 2014
    Date of Patent: May 10, 2016
    Assignee: HITACHI, LTD.
    Inventors: Ken Sugimoto, Yuusuke Fukumura, Nobukazu Kondo
  • Patent number: 9317284
    Abstract: In an embodiment, a processor may implement a vector hazard check instruction to detect dependencies between vector memory operations based on the addresses of the vectors accessed by the vector memory operations. The addresses may be specified via a base address and a vector of indexes for each vector. In an embodiment, one of the base addresses may be an implied (or assumed) zero address, reducing the number of operands of the hazard check instruction.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: April 19, 2016
    Assignee: Apple Inc.
    Inventors: Jeffry E. Gonion, Alexander C. Klaiber
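The dependency test in US 9317284's abstract can be approximated in software as a conservative overlap check between element addresses computed from a base plus an index vector. A minimal sketch (function and variable names are illustrative, not Apple's):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Conservative model of a "vector hazard check": for each element j of the
// second memory operation, report a dependency if any earlier-or-same element i
// of the first operation touches the same address (base + index, byte granular).
std::vector<bool> hazard_check(uint64_t base_a, const std::vector<int64_t>& idx_a,
                               uint64_t base_b, const std::vector<int64_t>& idx_b) {
    std::vector<bool> hazard(idx_b.size(), false);
    for (size_t j = 0; j < idx_b.size(); ++j) {
        uint64_t addr_b = base_b + idx_b[j];
        for (size_t i = 0; i < idx_a.size() && i <= j; ++i) {   // only elements no later than j
            if (base_a + idx_a[i] == addr_b) { hazard[j] = true; break; }
        }
    }
    return hazard;
}

int main() {
    // A scatter with indexes {0, 8, 16} followed by a gather with indexes
    // {16, 8, 4}: element 1 of the gather aliases element 1 of the scatter.
    auto h = hazard_check(0x1000, {0, 8, 16}, 0x1000, {16, 8, 4});
    for (size_t j = 0; j < h.size(); ++j)
        std::printf("element %zu: %s\n", j, h[j] ? "hazard" : "clear");
}
```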
  • Patent number: 9311238
    Abstract: A computer system microprocessor core having a cache subsystem executes a demote instruction to cause a cache line, specified by the demote instruction and exclusively owned by the same microprocessor core, to become shared or read-only.
    Type: Grant
    Filed: June 18, 2014
    Date of Patent: April 12, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chung-Lung Kevin Shum, Kathryn Marie Jackson, Charles Franklin Webb
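The demote behavior described for US 9311238 can be modeled as a downgrade on a MESI-style line state; the states and write-back handling below are a simplified software model, not IBM's implementation:

```cpp
#include <cstdio>

enum class LineState { Invalid, Shared, Exclusive, Modified };

// Software model of a "demote" hint: a line exclusively owned by this core is
// downgraded to a shared/read-only state; modified data is pushed toward the
// shared level first so other caches can obtain a clean copy.
LineState demote(LineState s, bool* wrote_back) {
    *wrote_back = false;
    switch (s) {
        case LineState::Modified:
            *wrote_back = true;
            return LineState::Shared;
        case LineState::Exclusive:
            return LineState::Shared;
        default:
            return s;                // Shared/Invalid lines are left untouched
    }
}

int main() {
    bool wb;
    LineState after = demote(LineState::Modified, &wb);
    std::printf("state=%d wrote_back=%d\n", static_cast<int>(after), wb);
}
```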
  • Patent number: 9268596
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: February 23, 2016
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
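The begin/test instruction pair described for US 9268596 reads like the RTM XBEGIN/XTEST mechanism later exposed through the `<immintrin.h>` intrinsics `_xbegin`, `_xtest`, and `_xend`; that mapping is my reading of the abstract, not a claim from the listing. A sketch assuming an RTM-capable CPU and a compiler run with `-mrtm`:

```cpp
#include <immintrin.h>
#include <cstdio>

// Returns nonzero when called inside a hardware transaction, 0 otherwise,
// using the RTM status-test intrinsic.
static int in_transaction() { return _xtest(); }

int main() {
    std::printf("outside: _xtest() = %d\n", in_transaction());
    unsigned status = _xbegin();            // start a transactional region
    if (status == _XBEGIN_STARTED) {
        int inside = in_transaction();      // expected to report nonzero here
        _xend();                            // commit the (empty) transaction
        std::printf("inside:  _xtest() = %d\n", inside);
    } else {
        std::printf("transaction aborted or RTM unavailable (status=0x%x)\n", status);
    }
}
```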
  • Patent number: 9235585
    Abstract: A method, article of manufacture, and apparatus for recovering data. An object to be recovered is selected, sub-objects of the object are recovered based on the priorities assigned to the sub-objects, and the sub-objects are reprioritized based on an application's I/O during recovery. Reprioritizing the sub-objects may include assigning a lower priority to the sub-objects when an application has completed I/O on the object. Sub-objects may be recovered to a remote location.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: January 12, 2016
    Assignee: EMC Corporation
    Inventors: Michael John Dutch, Christopher Hercules Claudatos, Mandavilli Navneeth Rao
  • Patent number: 9218204
    Abstract: A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 22, 2015
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: James M. O'Connor, Michael J. Schulte, Nuwan S. Jayasena, Gabriel H. Loh
  • Patent number: 9208092
    Abstract: A coherent attached processor proxy (CAPP) includes transport logic having a first interface configured to support communication with a system fabric of a primary coherent system and a second interface configured to support communication with an attached processor (AP) that is external to the primary coherent system and that includes a cache memory that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. The CAPP further includes one or more master machines that initiate memory access requests on the system fabric of the primary coherent system on behalf of the AP, one or more snoop machines that service requests snooped on the system fabric, and a CAPP directory having a precise directory having a plurality of entries each associated with a smaller data granule and a coarse directory having a plurality of entries each associated with a larger data granule.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: December 8, 2015
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, Charles F. Marino
  • Patent number: 9208091
    Abstract: A coherent attached processor proxy (CAPP) includes transport logic having a first interface configured to support communication with a system fabric of a primary coherent system and a second interface configured to support communication with an attached processor (AP) that is external to the primary coherent system and that includes a cache memory that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. The CAPP further includes one or more master machines that initiate memory access requests on the system fabric of the primary coherent system on behalf of the AP, one or more snoop machines that service requests snooped on the system fabric, and a CAPP directory having a precise directory having a plurality of entries each associated with a smaller data granule and a coarse directory having a plurality of entries each associated with a larger data granule.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: December 8, 2015
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, Charles F. Marino
  • Patent number: 9208103
    Abstract: A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Lookups to the caches of the MTLB can be selectively bypassed based on a control configuration and the attributes of a received address.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: December 8, 2015
    Assignee: Cavium, Inc.
    Inventors: Richard E. Kessler, Bryan W. Chin, Michael Bertone
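The two addressing domains in US 9208103 (GVA to GPA under the guest, GPA to RPA under the hypervisor) can be sketched as two logical lookup tables carved out of one structure; the field names and page size below are assumptions, not Cavium's design:

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <unordered_map>

// One logical TLB modeled as two maps; a real MTLB would carve these out of a
// single physical array whose split point is reconfigurable.
struct MergedTlb {
    std::unordered_map<uint64_t, uint64_t> guest;  // guest virtual page -> guest physical page
    std::unordered_map<uint64_t, uint64_t> root;   // guest physical page -> root physical page

    // Full GVA -> RPA translation: walk both logical caches; either miss would
    // trap to the guest OS or the hypervisor respectively.
    std::optional<uint64_t> translate(uint64_t gva, unsigned page_bits = 12) {
        uint64_t gvpn = gva >> page_bits;
        auto g = guest.find(gvpn);
        if (g == guest.end()) return std::nullopt;          // guest-level miss
        auto r = root.find(g->second);
        if (r == root.end()) return std::nullopt;           // root-level miss
        return (r->second << page_bits) | (gva & ((1u << page_bits) - 1));
    }
};

int main() {
    MergedTlb tlb;
    tlb.guest[0x10] = 0x20;   // GVA page 0x10 -> GPA page 0x20
    tlb.root[0x20]  = 0x30;   // GPA page 0x20 -> RPA page 0x30
    if (auto rpa = tlb.translate(0x10234))
        std::printf("RPA = 0x%llx\n", (unsigned long long)*rpa);
}
```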
  • Patent number: 9195625
    Abstract: A data processing device includes an interconnect controller operable to manage the communication of information between modules of the data processing device via an interconnect. In response to a transaction request the interconnect controller selects a tag value from a set of available tag values, assigns the tag to the transaction and reserves the tag value so that it is unavailable for assignment to other transactions. If an expected response to the transaction request is not received within a designated amount of time, the transaction enters a timed-out state and the interconnect controller locks the tag value, so that it remains unavailable for assignment to other transactions until an unlock event, such as a request from software.
    Type: Grant
    Filed: October 29, 2009
    Date of Patent: November 24, 2015
    Assignee: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Gus P. Ikonomopoulos, Thang Q. Nguyen, Jose M. Nunez, Kun Xu
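The tag lifecycle in US 9195625 (available, assigned, timed-out/locked, unlocked by software) maps onto a small state machine over a pool of tag values; the API below is illustrative:

```cpp
#include <cstdio>
#include <optional>
#include <vector>

enum class TagState { Free, InUse, Locked };

// Illustrative tag pool: a tag is reserved while its transaction is outstanding,
// and if the transaction times out the tag is locked so it cannot be reused
// until software explicitly unlocks it.
struct TagPool {
    std::vector<TagState> tags;
    explicit TagPool(size_t n) : tags(n, TagState::Free) {}

    std::optional<size_t> allocate() {
        for (size_t t = 0; t < tags.size(); ++t)
            if (tags[t] == TagState::Free) { tags[t] = TagState::InUse; return t; }
        return std::nullopt;                        // no tag available
    }
    void complete(size_t t)  { if (tags[t] == TagState::InUse)  tags[t] = TagState::Free; }
    void timed_out(size_t t) { if (tags[t] == TagState::InUse)  tags[t] = TagState::Locked; }
    void sw_unlock(size_t t) { if (tags[t] == TagState::Locked) tags[t] = TagState::Free; }
};

int main() {
    TagPool pool(4);
    auto t = pool.allocate();        // tag assigned to a transaction
    pool.timed_out(*t);              // no response in time: tag is locked
    auto u = pool.allocate();        // a different tag is handed out instead
    pool.sw_unlock(*t);              // software unlock makes the first tag reusable
    std::printf("first tag %zu, second tag %zu\n", *t, *u);
}
```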
  • Patent number: 9183038
    Abstract: In a computer system of the present invention, whether or not master data has been updated is managed for each division key as master data management information. If the master data has been updated, the job is re-executed; when the job is re-executed, data is divided using only the division keys corresponding to the updated master data, so that the sub-jobs targeted for re-execution are localized to those division keys.
    Type: Grant
    Filed: November 10, 2010
    Date of Patent: November 10, 2015
    Assignee: HITACHI, LTD.
    Inventors: Norichika Hatabe, Tsutomu Sugita
  • Patent number: 9146746
    Abstract: Devices and methods for providing deterministic execution of multithreaded applications are provided. In some embodiments, each thread is provided access to an isolated memory region, such as a private cache. In some embodiments, more than one private cache are synchronized via a modified MOESI coherence protocol. The modified coherence protocol may be configured to refrain from synchronizing the isolated memory regions until the end of an execution quantum. The execution quantum may end when all threads experience a quantum end event such as reaching a threshold instruction count, overflowing the isolated memory region, and/or attempting to access a lock released by a different thread in the same quantum.
    Type: Grant
    Filed: March 1, 2012
    Date of Patent: September 29, 2015
    Assignee: University of Washington through its Center for Commercialization
    Inventors: Luis Henrique Ceze, Thomas Bergan, Joseph Devietti, Daniel Joseph Grossman, Jacob Eric Nelson
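The quantum-end events listed in US 9146746's abstract reduce to a per-thread predicate; the threshold and field names below are placeholders:

```cpp
#include <cstdint>
#include <cstdio>

// Per-thread bookkeeping for one execution quantum (illustrative fields).
struct ThreadQuantum {
    uint64_t instructions_retired = 0;
    bool     private_cache_overflowed = false;
    bool     touched_lock_released_this_quantum = false;
};

// A thread's quantum ends when any listed event occurs; only once every thread
// has ended its quantum are the isolated memory regions synchronized.
bool quantum_ended(const ThreadQuantum& t, uint64_t instruction_threshold) {
    return t.instructions_retired >= instruction_threshold
        || t.private_cache_overflowed
        || t.touched_lock_released_this_quantum;
}

int main() {
    ThreadQuantum t{50000, false, false};
    std::printf("ended: %d\n", quantum_ended(t, 100000));   // 0: keep executing
    t.private_cache_overflowed = true;
    std::printf("ended: %d\n", quantum_ended(t, 100000));   // 1: wait for the quantum barrier
}
```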
  • Patent number: 9141669
    Abstract: An exemplary system and method for generating a data list of websites and configuring at least one server computer coupled to a communications network for origin server website content delivery may comprise a network storage device communicatively coupled to a network and storing a routing table for a CDN, the routing table mapping one or more edge server IP addresses for one or more edge servers to each of one or more geographic regions. The network storage device may be configured to transmit the routing table to one or more DNS servers communicatively coupled to the network.
    Type: Grant
    Filed: February 20, 2013
    Date of Patent: September 22, 2015
    Assignee: Go Daddy Operating Company, LLC
    Inventors: Craig Jellick, David Koopman
  • Patent number: 9110811
    Abstract: A method and apparatus for determining data to be prefetched based on previous cache miss history is disclosed. In one embodiment, a processor includes a first cache memory and a controller circuit. The controller circuit is configured to load data from a first address into the first cache memory responsive to a cache miss corresponding to the first address. The controller circuit is further configured to determine, responsive to a cache miss for the first address, if a previous cache miss occurred at a second address. Responsive to determining that the previous cache miss occurred at the second address, the controller circuit is configured to load data from the second address into the first cache memory.
    Type: Grant
    Filed: September 18, 2012
    Date of Patent: August 18, 2015
    Assignee: Oracle International Corporation
    Inventor: Yuan C. Chou
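The mechanism in US 9110811 resembles a correlation prefetcher: remember which miss followed which, and on a repeat of the first miss prefetch the remembered successor. A table-driven sketch (the table structure is assumed, not Oracle's):

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <unordered_map>

// Correlation table: for a miss address, remember the address of the miss that
// followed it last time, and predict that successor on the next occurrence.
struct MissCorrelator {
    std::unordered_map<uint64_t, uint64_t> next_miss;   // miss addr -> following miss addr
    std::optional<uint64_t> last_miss;

    // Called on every cache miss; returns an address worth prefetching, if any.
    std::optional<uint64_t> on_miss(uint64_t addr) {
        if (last_miss) next_miss[*last_miss] = addr;     // learn the pair (previous -> current)
        last_miss = addr;
        auto it = next_miss.find(addr);
        if (it != next_miss.end()) return it->second;    // predicted successor
        return std::nullopt;
    }
};

int main() {
    MissCorrelator c;
    c.on_miss(0x1000);
    c.on_miss(0x4a00);              // learns 0x1000 -> 0x4a00
    if (auto p = c.on_miss(0x1000))
        std::printf("prefetch 0x%llx\n", (unsigned long long)*p);   // prints 0x4a00
}
```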
  • Patent number: 9086989
    Abstract: A system and a method are disclosed for more efficiently handling shared code stored in memory, comprising a modified memory management unit containing a new shared address space identifier register, and a modified TLB entry containing a new shared bit.
    Type: Grant
    Filed: March 23, 2012
    Date of Patent: July 21, 2015
    Assignee: Synopsys, Inc.
    Inventors: Vineet Gupta, Thomas Pennello
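The hit condition implied by US 9086989's shared address space identifier register and per-entry shared bit can be written out explicitly; register and field names are placeholders, not the Synopsys ones:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative TLB entry: a translation hits either when its ASID matches the
// current process, or when it is marked shared and its shared-space identifier
// matches the shared address space identifier register.
struct TlbEntry {
    uint64_t vpn;
    uint64_t pfn;
    uint16_t asid;
    uint16_t shared_space_id;
    bool     shared;
};

bool tlb_hit(const TlbEntry& e, uint64_t vpn, uint16_t cur_asid, uint16_t cur_sasid) {
    if (e.vpn != vpn) return false;
    return (e.asid == cur_asid) || (e.shared && e.shared_space_id == cur_sasid);
}

int main() {
    TlbEntry lib_code{0x40, 0x900, /*asid=*/7, /*shared_space_id=*/3, /*shared=*/true};
    // A process with ASID 12 still hits because it is attached to shared space 3.
    std::printf("hit: %d\n", tlb_hit(lib_code, 0x40, /*cur_asid=*/12, /*cur_sasid=*/3));
}
```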
  • Patent number: 9086980
    Abstract: The present disclosure provides a method, device, and system for processing a request in a multi-core system. The method comprises steps of: receiving a request for data by a filter from a requesting unit; comparing an indicator indicative of a logical partition in the request with an indicator indicative of the logical partition in a record of the filter; searching in a unit where the filter is located based on the request and returning a search result to the requesting unit if a comparison result matches; and returning a NONE response to the requesting unit from the filter if the comparison result does not match.
    Type: Grant
    Filed: August 1, 2012
    Date of Patent: July 21, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yi Ge, Rui Hou, Kun Wang, Hong Bo Zeng, Yu Zhang
  • Patent number: 9086973
    Abstract: The invention relates to a multi-core processor system, in particular a single-package multi-core processor system, comprising at least two processor cores, preferably at least four processor cores, each of said cores having a local LEVEL-1 cache; a tree communication structure combining the multiple LEVEL-1 caches, the tree having at least one node, preferably at least three nodes for a four-core processor; and TAG information associated with the data managed within the tree and usable in the treatment of that data.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: July 21, 2015
    Assignee: Hyperion Core, Inc.
    Inventor: Martin Vorbach
  • Patent number: 9058272
    Abstract: An apparatus including a snoop filter decoupled from a cache and an associated method for snoop filtering are disclosed. The snoop filter is decoupled from the cache such that the cache changes states of lines in the cache from a first state that is a clean state, such as an exclusive (E) state, to a second state that is not a clean state, such as a modified (M) state, without the snoop filter's knowledge. The snoop filter buffers addresses of replaced lines that are not known to be clean until a write-back associated with the replaced lines occurs, or until the actual states of the replaced lines are determined by the snoop filter generating a snoop. A multi-level cache system in which a reallocation or replacement policy is biased to favor replacing certain lines, such as inclusive lines, non-temporal lines, or prefetched lines that have not been accessed, is also disclosed.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: June 16, 2015
    Assignee: MARVELL INTERNATIONAL LTD.
    Inventors: Frank O'Bleness, Sujat Jamil, David Miner, Joseph Delgross, Tom Hameenanttila, Jeffrey Kehl, Adi Habusha
  • Patent number: 9058271
    Abstract: A method and apparatus for preserving memory ordering in a cache coherent link based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, partial memory accesses, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to directly forward the data to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to writeback the data to a home node associated with the data.
    Type: Grant
    Filed: December 28, 2013
    Date of Patent: June 16, 2015
    Assignee: Intel Corporation
    Inventors: Robert H. Beers, Ching-Tsun Chou, Robert J. Safranek, James Vash
  • Patent number: 9037804
    Abstract: Method and apparatus to efficiently organize data in caches by storing/accessing data of varying sizes in cache lines. A value may be assigned to a field indicating the size of usable data stored in a cache line. If the field indicating the size of the usable data in the cache line indicates a size less than the maximum storage size, a value may be assigned to a field in the cache line indicating which subset of the data stored in the cache line is usable data. A cache request may determine whether the size of the usable data in a cache line is equal to the maximum data storage size. If the size of the usable data in the cache line is equal to the maximum data storage size, the entire stored data in the cache line may be returned.
    Type: Grant
    Filed: December 29, 2011
    Date of Patent: May 19, 2015
    Assignee: Intel Corporation
    Inventors: Simon C. Steely, Jr., William C. Hasenplaugh, Joel S. Emer
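The size field and sub-block selector of US 9037804 can be sketched as metadata next to the line's data array; field widths and layout below are invented for illustration:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr size_t kLineBytes = 64;

// Illustrative line: used_bytes says how much of the line holds usable data, and
// offset says which sub-block that data occupies when the line is not full.
struct SizedCacheLine {
    std::array<uint8_t, kLineBytes> data{};
    uint8_t used_bytes = kLineBytes;   // size field
    uint8_t offset = 0;                // sub-block selector, meaningful when used_bytes < kLineBytes
};

// Read helper: a full line is returned whole, a partial line returns only the
// marked subset of its bytes.
std::vector<uint8_t> read_line(const SizedCacheLine& line) {
    if (line.used_bytes == kLineBytes)
        return {line.data.begin(), line.data.end()};
    return {line.data.begin() + line.offset,
            line.data.begin() + line.offset + line.used_bytes};
}

int main() {
    SizedCacheLine line;
    line.used_bytes = 16;
    line.offset = 32;                  // 16 usable bytes living in the third 16-byte sub-block
    std::printf("returned %zu bytes\n", read_line(line).size());
}
```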
  • Publication number: 20150134896
    Abstract: In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed.
    Type: Application
    Filed: November 10, 2014
    Publication date: May 14, 2015
    Inventors: ALI-REZA ADL-TABATABAI, YANG NI, BRATIN SAHA, VADIM BASSIN, GAD SHEAFFER, DAVID CALLAHAN, JAN GRAY
  • Publication number: 20150113230
    Abstract: The present invention discloses a directory storage method and a directory storage node controller. The method includes: obtaining, by a node controller NC in a local node, a storage address of a data block in a CPU in the local node, where the data block is read by a remote node; determining first content and second content that are respectively located in a first specific bit and a second specific bit of the storage address; determining, according to the first content and from each preset storage space used for storing a directory, a storage space in which an addressing address matches the first content; and correspondingly storing the second content and the directory in the determined storage space.
    Type: Application
    Filed: October 16, 2014
    Publication date: April 23, 2015
    Inventor: Yongbo CHENG
  • Publication number: 20150113229
    Abstract: Code versioning for enabling transactional memory region promotion may include receiving a portion of candidate source code; outlining the portion of candidate source code received for parallel execution; wrapping a critical region with entry and exit routines to enter into a speculation sub-process, wherein the entry and exit routines also gather conflict statistics at run time; and generating an outlined code portion comprising multiple loop versions using a processor.
    Type: Application
    Filed: October 2, 2014
    Publication date: April 23, 2015
    Inventors: Hans Boettiger, Yaoqing Gao, Martin Ohmacht, Kai-Ting Amy Wang
  • Patent number: 9015436
    Abstract: In one embodiment, the present invention includes a method for receiving a lock message for an address in a processor from a quiesce master of a system. This lock message indicates that a requester agent of the system is to enter a locking phase with respect to the address. Responsive to receipt of this message, logic of the processor can write an entry in a tracking buffer of the processor for the address and thereafter allow a transaction to be sent from the processor via an interconnect if an address of the transaction does not match any address stored in the tracking buffer. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 22, 2011
    Date of Patent: April 21, 2015
    Assignee: Intel Corporation
    Inventor: Pik Shen Chee
  • Publication number: 20150106571
    Abstract: A cache coherence management system includes: a set of directories distributed between nodes of a network for interconnecting processors including cache memories, each directory including a correspondence table between cache lines and information fields on the cache lines; and a mechanism updating the directories by adding, modifying, or deleting cache lines in the correspondence tables. In each correspondence table and for each cache line identified, at least one field is provided for indicating a possible blocking of a transaction relative to the cache line considered, when the blocking occurs in the node associated with the correspondence table considered. The system further includes a mechanism detecting fields indicating a transaction blocking and restarting each transaction detected as blocked from the node in which it is indicated as blocked.
    Type: Application
    Filed: April 12, 2013
    Publication date: April 16, 2015
    Applicants: Commissariat à l'énergie atomique et aux énergies alternatives, BULL SAS
    Inventors: Christian Bernard, Eric Guthmuller, Huy Nam Nguyen
  • Patent number: 9003130
    Abstract: A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 7, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James O'Connor, Bradford M. Beckmann
  • Patent number: 8996810
    Abstract: A system and method of detecting cache inconsistencies among distributed data centers is described. Key-based sampling captures a complete history of a key for comparing cache values across data centers. In one phase of a cache inconsistency detection algorithm, a log of operations performed on a sampled key is compared in reverse chronological order for inconsistent cache values. In another phase, a log of operations performed on a candidate key having inconsistent cache values as identified in the previous phase is evaluated in near real time in forward chronological order for inconsistent cache values. In a confirmation phase, a real time comparison of actual cache values stored in the data centers is performed on the candidate keys identified by both the previous phases as having inconsistent cache values. An alert is issued that identifies the data centers in which the inconsistent cache values were reported.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: March 31, 2015
    Assignee: Facebook, Inc.
    Inventor: Xiaojun Liang
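A rough sketch of the first phase of US 8996810's detection algorithm, flagging a sampled key as a candidate when two data centers' most recent logged values disagree; the log record layout is invented, and the later forward-pass and live-confirmation phases are omitted:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// One logged cache operation on a sampled key in one data center (illustrative).
struct LogRecord {
    uint64_t    timestamp;
    std::string value;      // value the cache held after the operation
};

// Phase-one style check: compare the most recent entries of two data centers'
// logs for the same key; differing latest values make the key a candidate that
// later phases would re-check in forward order and in real time.
bool candidate_key(const std::vector<LogRecord>& dc_a,
                   const std::vector<LogRecord>& dc_b) {
    if (dc_a.empty() || dc_b.empty()) return false;
    return dc_a.back().value != dc_b.back().value;
}

int main() {
    std::vector<LogRecord> east = {{100, "v1"}, {250, "v2"}};
    std::vector<LogRecord> west = {{100, "v1"}, {260, "v3"}};
    std::printf("candidate key: %s\n", candidate_key(east, west) ? "yes" : "no");
}
```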
  • Patent number: 8994740
    Abstract: A cache line allocation method, wherein the cache is coupled to a graphics processing unit and the cache comprises a plurality of cache lines, each cache line storing one of a plurality of instructions, the method comprising the steps of: putting the plurality of instructions in whole cache lines; locking the whole cache lines if an instruction size is less than a cache size; locking a first number of cache lines when the instruction size is larger than the cache size and a difference between the instruction size and the cache size is less than or equal to a threshold; and locking a second number of cache lines when the instruction size is larger than the cache size and the difference between the instruction size and the cache size is larger than the threshold; wherein the first number is greater than the second number.
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: March 31, 2015
    Assignee: VIA Technologies, Inc.
    Inventors: Bingxu Gao, Xian Chen
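The lock-count rule in US 8994740's abstract is a small piece of arithmetic; a direct transcription with hypothetical numbers:

```cpp
#include <cstddef>
#include <cstdio>

// Returns how many cache lines to lock for an instruction stream, following the
// rule in the abstract; the first count is greater than the second by assumption.
size_t lines_to_lock(size_t instruction_size, size_t cache_size, size_t total_lines,
                     size_t threshold, size_t first_count, size_t second_count) {
    if (instruction_size <= cache_size)
        return total_lines;                              // everything fits: lock the whole cache
    size_t overflow = instruction_size - cache_size;
    return (overflow <= threshold) ? first_count : second_count;
}

int main() {
    // Hypothetical numbers: 64 KiB cache of 1024 lines, 4 KiB threshold.
    std::printf("%zu\n", lines_to_lock(60 * 1024, 64 * 1024, 1024, 4096, 768, 256)); // 1024
    std::printf("%zu\n", lines_to_lock(66 * 1024, 64 * 1024, 1024, 4096, 768, 256)); // 768
    std::printf("%zu\n", lines_to_lock(80 * 1024, 64 * 1024, 1024, 4096, 768, 256)); // 256
}
```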
  • Patent number: 8996837
    Abstract: A technique provides multi-tenancy within a data storage apparatus. The technique involves dividing, by processing circuitry, storage units of the data storage apparatus into multiple groups of storage units. The technique further involves forming, by the processing circuitry, segregated slice pools from the multiple groups of storage units. Each segregated slice pool is formed from a different group of storage units. The technique further involves allocating, by the processing circuitry, slices from the segregated slice pools to mutually exclusive sets of virtual storage processors (VSPs) on the data storage apparatus. Each mutually exclusive set of VSPs operates as a separate tenant of the data storage apparatus.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 31, 2015
    Assignee: EMC Corporation
    Inventors: Jean-Pierre Bono, Frederic Corniquet, Miles A. de Forest, Himabindu Tummala, Walter C. Forrester
  • Patent number: 8990506
    Abstract: In one embodiment, the present invention includes a cache memory including cache lines that each have a tag field including a state portion to store a cache coherency state of data stored in the line and a weight portion to store a weight corresponding to a relative importance of the data. In various implementations, the weight can be based on the cache coherency state and a recency of usage of the data. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 24, 2015
    Assignee: Intel Corporation
    Inventors: Naveen Cherukuri, Dennis W. Brzezinski, Ioannis T. Schoinas, Anahita Shayesteh, Akhilesh Kumar, Mani Azimi
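US 8990506 only says the weight can combine coherence state and recency; the particular scoring below is invented to show how such a weight could drive victim selection:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

enum class Coherence { Invalid, Shared, Exclusive, Modified };

struct WeightedLine {
    Coherence state;
    uint32_t  last_use;   // larger = more recent
    uint32_t  weight;     // stored in the tag alongside the coherence state
};

// Illustrative weight: dirty or sole copies are more expensive to lose than a
// shared copy that exists elsewhere, and recent use adds to the weight.
uint32_t compute_weight(const WeightedLine& l, uint32_t now) {
    uint32_t state_bias = 0;
    switch (l.state) {
        case Coherence::Modified:  state_bias = 8; break;   // only copy, dirty
        case Coherence::Exclusive: state_bias = 4; break;   // only copy, clean
        case Coherence::Shared:    state_bias = 1; break;   // other caches have it
        case Coherence::Invalid:   state_bias = 0; break;
    }
    uint32_t recency = (now - l.last_use < 16) ? 4 : 0;
    return state_bias + recency;
}

// Victim selection: evict the line with the smallest weight.
size_t pick_victim(std::vector<WeightedLine>& set, uint32_t now) {
    size_t victim = 0;
    for (size_t i = 0; i < set.size(); ++i) {
        set[i].weight = compute_weight(set[i], now);
        if (set[i].weight < set[victim].weight) victim = i;
    }
    return victim;
}

int main() {
    std::vector<WeightedLine> set = {
        {Coherence::Modified, 90, 0}, {Coherence::Shared, 95, 0}, {Coherence::Exclusive, 98, 0}};
    std::printf("evict way %zu\n", pick_victim(set, 100));   // way 1: the shared copy others still hold
}
```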
  • Publication number: 20150081945
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to manage cache memory in multi-cache environments. A disclosed apparatus includes a remote cache manager to identify a remote cache memory communicatively connected to a bus, a delegation manager to constrain the remote cache memory to share data with a host cache memory via the bus, and a lock manager to synchronize the host cache memory and the remote cache memory with a common lock state.
    Type: Application
    Filed: September 19, 2013
    Publication date: March 19, 2015
    Inventors: Shiow-Wen Cheng, Robert Joseph Woodruff
  • Patent number: 8977818
    Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: March 10, 2015
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Patent number: 8966193
    Abstract: Electrical interfaces, addressing schemes, and command protocols allow for communications with memory modules in computing devices such as imaging and printing devices. Memory modules may be assigned an address through a set of discrete voltages. One, multiple, or all of the memory modules may be addressed with a single command, which may be an increment counter command, a write command, a punch out bit field, or a cryptographic command. The commands may be transmitted using a broadcast scheme or a split transaction scheme. The status of the memory modules may be determined by sampling a single signal that may be at a low, high, or intermediate voltage level.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: February 24, 2015
    Assignee: Lexmark International, Inc.
    Inventors: James Ronald Booth, Bryan Scott Willett
  • Publication number: 20150052315
    Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
    Type: Application
    Filed: August 15, 2013
    Publication date: February 19, 2015
    Inventors: SANJEEV GHAI, GUY L. GUTHRIE, JONATHAN R. JACKSON, DEREK E. WILLIAMS
  • Patent number: 8954691
    Abstract: A network device that includes a first memory to store packets in segments; a second memory to store pointers associated with the first memory; a third memory to store summary bits and allocation bits, where the allocation bits correspond to the segments. The network device also includes a processor to receive a request for memory resources; determine whether a pointer is stored in the second memory, where the pointer corresponds to a segment that is available to store a packet; and send the pointer when the pointer is stored in the second memory. The processor is further to perform a search to identify other pointers when the pointer is not stored in the second memory, where performing the search includes identifying a set of allocation bits, based on an unallocated summary bit, that corresponds to the other pointers; identify another pointer, of the other pointers, based on an unallocated allocation bit of the set of allocation bits; and send the other pointer in response to the request.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: February 10, 2015
    Assignee: Juniper Networks, Inc.
    Inventors: Robert Rhoades, Paul Kim, Gary Goldman
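The summary-bit plus allocation-bit search of US 8954691 is essentially a two-level bitmap scan; a compact sketch with arbitrary group sizes:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Two-level free-segment bitmap: one summary bit per 64-segment group says
// whether the group still has an unallocated segment.
struct SegmentBitmap {
    std::vector<uint64_t> alloc_bits;   // bit set = segment allocated
    std::vector<bool>     summary;      // true = group fully allocated

    explicit SegmentBitmap(size_t groups) : alloc_bits(groups, 0), summary(groups, false) {}

    // Find a free segment: skip fully-allocated groups via the summary bits,
    // then scan the chosen group's allocation bits.
    long allocate() {
        for (size_t g = 0; g < summary.size(); ++g) {
            if (summary[g]) continue;                        // no free segment in this group
            for (unsigned b = 0; b < 64; ++b) {
                if (!(alloc_bits[g] & (1ULL << b))) {
                    alloc_bits[g] |= (1ULL << b);
                    if (alloc_bits[g] == ~0ULL) summary[g] = true;
                    return static_cast<long>(g * 64 + b);    // segment index stands in for a pointer
                }
            }
        }
        return -1;                                           // out of segments
    }
};

int main() {
    SegmentBitmap bm(2);
    bm.alloc_bits[0] = ~0ULL;   // pretend group 0 is full...
    bm.summary[0] = true;       // ...so its summary bit lets the search skip it
    std::printf("allocated segment %ld\n", bm.allocate());   // 64: first segment of group 1
}
```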
  • Patent number: 8953354
    Abstract: A semiconductor memory device includes a memory portion that includes i (i is a natural number) sets each including j (j is a natural number of 2 or larger) arrays each including k (k is a natural number of 2 or larger) lines to each of which a first bit column of an address is assigned in advance; a comparison circuit; and a control circuit. The i×j lines to each of which a first bit column of an objective address is assigned in advance are searched more than once and less than or equal to j times with the use of the control circuit and a cache hit signal or a cache miss signal output from the selection circuit. In such a manner, the line storing the objective data is specified.
    Type: Grant
    Filed: June 5, 2012
    Date of Patent: February 10, 2015
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Yoshiyuki Kurokawa
  • Publication number: 20150040111
    Abstract: A method and apparatus for enabling a Software Transactional Memory (STM) with precompiled binaries is herein described. Upon encountering an access operation in a transaction, an annotation field associated with a memory location referenced by the access is checked. In response to the memory location representing a previous similar access within the transaction, the access is performed without access barriers. However, if the annotation field is in a default state representing no previous access during the pendency of the transaction, then a mode of the processor is determined. If the processor mode is implicit mode, an access handler/barrier is asynchronously executed. Conversely, in explicit mode, a flag is set instead of asynchronously executing the handler. In addition, during compilation, convert explicit and convert implicit instructions are inserted to intelligently convert modes for precompiled and newly compiled binaries.
    Type: Application
    Filed: May 6, 2014
    Publication date: February 5, 2015
    Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Quinn A. Jacobson
  • Patent number: 8949539
    Abstract: A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
    Type: Grant
    Filed: February 1, 2010
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Martin Ohmacht
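The reservation registers of US 8949539 can be sketched as a per-core table held at the shared cache, with a store-conditional succeeding only if that core's reservation still names the target address; the granularity and invalidation policy here are simplified:

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

// One reservation register per processor unit, held at the shared memory cache.
struct ReservationTable {
    std::vector<std::optional<uint64_t>> reserved;   // reserved address per core
    explicit ReservationTable(size_t cores) : reserved(cores) {}

    void load_reserve(size_t core, uint64_t addr) { reserved[core] = addr; }

    // Store-conditional: succeeds only if the core's reservation still names this
    // address; a successful store clears other cores' reservations on the address.
    bool store_conditional(size_t core, uint64_t addr) {
        if (reserved[core] != addr) return false;
        for (size_t c = 0; c < reserved.size(); ++c)
            if (c != core && reserved[c] == addr) reserved[c].reset();
        reserved[core].reset();
        return true;   // the actual data write would happen here
    }
};

int main() {
    ReservationTable rt(2);
    rt.load_reserve(0, 0x80);
    rt.load_reserve(1, 0x80);
    std::printf("core1 SC: %d\n", rt.store_conditional(1, 0x80));   // 1: wins, clears core0
    std::printf("core0 SC: %d\n", rt.store_conditional(0, 0x80));   // 0: reservation was lost
}
```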
  • Publication number: 20150026417
    Abstract: A caching method for a distributed storage system, a lock server node, and a lock client node are disclosed. When the lock server node receives a first lock request sent by the first lock client node for locking a first data stripe, if the lock server node determines that the first lock request is a read lock request received for the first time on the first data stripe or a write lock request on the first data stripe, the lock server node records, in recorded attribute information of data stripes, that the owner of the first data stripe is the first lock client node, and returns to the first lock client node a first response message indicating that the owner of the first data stripe is the first lock client node and instructing the first lock client node to cache the first data stripe.
    Type: Application
    Filed: October 8, 2014
    Publication date: January 22, 2015
    Inventor: Hongxing Guo
  • Patent number: 8930628
    Abstract: Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with a processing core in multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. Multiple non-store requests and a set of store requests associated with a remaining set of processing cores in the multiple processing cores are dynamically blocked from accessing a memory cache in response to the blocking condition having occurred.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Diana L. Orf, Robert J. Sonnelitter, III
  • Patent number: 8930637
    Abstract: An arrangement includes a first part and a second part. The first part includes a memory controller for accessing a memory, at least one first cache memory, and a first directory. The second part includes at least one second cache memory configured to request access to said memory. The first directory is configured to use a first coherency protocol for the at least one first cache memory and a second, different coherency protocol for the at least one second cache memory.
    Type: Grant
    Filed: June 6, 2012
    Date of Patent: January 6, 2015
    Assignee: STMicroelectronics (Research & Development) Limited
    Inventors: Andrew Michael Jones, Stuart Ryan
  • Patent number: 8930634
    Abstract: A cache coherence manager, disposed in a multi-core microprocessor, includes a request unit, an intervention unit, a response unit and an interface unit. The request unit receives coherent requests and selectively issues speculative requests in response. The interface unit selectively forwards the speculative requests to a memory. The interface unit includes at least three tables. Each entry in the first table represents an index to the second table. Each entry in the second table represents an index to the third table. The entry in the first table is allocated when a response to an associated intervention message is stored in the first table but before the speculative request is received by the interface unit. The entry in the second table is allocated when the speculative request is stored in the interface unit. The entry in the third table is allocated when the speculative request is issued to the memory.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: January 6, 2015
    Assignee: ARM Finance Overseas Limited
    Inventors: William Lee, Thomas Benjamin Berg
  • Patent number: 8930657
    Abstract: One embodiment of the present invention relates to a heap overflow detection system that includes an arithmetic logic unit, a datapath, and address violation detection logic. The arithmetic logic unit is configured to receive an instruction having an opcode and an operand, to generate a final address, and to generate a compare signal when the opcode indicates a heap memory access related instruction. The datapath is configured to provide the opcode and the operand to the arithmetic logic unit. The address violation detection logic determines whether a heap memory access is a violation according to the operand and the final address upon receiving the compare signal from the arithmetic logic unit.
    Type: Grant
    Filed: July 18, 2011
    Date of Patent: January 6, 2015
    Assignee: Infineon Technologies AG
    Inventor: Prakash Kalanjeri Balasubramanian
  • Patent number: 8924644
    Abstract: Methods, apparatuses, and computer program products for extending cache in a multi-processor computer are provided.
    Type: Grant
    Filed: December 28, 2011
    Date of Patent: December 30, 2014
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: William E. Atherton, Marcus A. Baker, Sreekanth Konireddygari, Jeffrey B. Williams
  • Patent number: 8924817
    Abstract: The present invention provides a method and apparatus for selectively updating error correction code bits. One embodiment of the method includes determining a first subset of a plurality of error correction code bits formed from a plurality of data bits in response to changes in a first subset of the data bits. The first subset of the plurality of error correction code bits is less than all of the plurality of error correction code bits.
    Type: Grant
    Filed: September 29, 2010
    Date of Patent: December 30, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Robert Krick
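Updating only a subset of ECC bits, as in US 8924817, follows from the linearity of common codes: each data bit contributes a fixed column to the check bits, so a changed data bit simply XORs its column into the stored ECC. A toy linear code illustrating the equivalence (the column values are arbitrary; only linearity matters here):

```cpp
#include <cstdint>
#include <cstdio>

// Toy linear code over 8 data bits with 4 check bits. kColumn[i] is the
// contribution of data bit i to the check bits.
static const uint8_t kColumn[8] = {0x3, 0x5, 0x6, 0x7, 0x9, 0xA, 0xB, 0xC};

// Full recomputation: XOR the columns of every set data bit.
uint8_t full_ecc(uint8_t data) {
    uint8_t ecc = 0;
    for (int i = 0; i < 8; ++i)
        if (data & (1u << i)) ecc ^= kColumn[i];
    return ecc;
}

// Selective update: fold only the columns of data bits that actually changed
// into the previously stored check bits.
uint8_t incremental_ecc(uint8_t old_ecc, uint8_t old_data, uint8_t new_data) {
    uint8_t changed = old_data ^ new_data;
    uint8_t ecc = old_ecc;
    for (int i = 0; i < 8; ++i)
        if (changed & (1u << i)) ecc ^= kColumn[i];
    return ecc;
}

int main() {
    uint8_t old_data = 0x5A, new_data = 0x5E;   // only bit 2 changed
    uint8_t old_ecc = full_ecc(old_data);
    std::printf("incremental matches full recompute: %d\n",
                incremental_ecc(old_ecc, old_data, new_data) == full_ecc(new_data));
}
```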
  • Patent number: 8924653
    Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 30, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Judson E. Veazey
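The protocol actions in US 8924653's abstract can be laid out as a per-line handler: at transaction start accessed lines are forced to Shared or Invalid, and an external request for any line in M, O, or E invalidates every M/O line without a write-back, demotes every E line, and aborts the transaction. A condensed sketch:

```cpp
#include <cstdio>
#include <vector>

enum class Moesi { Modified, Owned, Exclusive, Shared, Invalid };

struct TxnCache {
    std::vector<Moesi> lines;
    bool aborted = false;
    explicit TxnCache(size_t n) : lines(n, Moesi::Invalid) {}

    // Transaction start: every line the transaction will touch must be Shared or Invalid.
    void begin(const std::vector<size_t>& accessed) {
        aborted = false;
        for (size_t i : accessed)
            if (lines[i] == Moesi::Modified || lines[i] == Moesi::Owned ||
                lines[i] == Moesi::Exclusive)
                lines[i] = Moesi::Shared;   // a real design would write back / downgrade first
    }

    // External request for a line held in M, O, or E during the transaction:
    // every modified or owned line is invalidated without a write-back, every
    // exclusive line is demoted, and the transaction aborts.
    void external_request(size_t i) {
        if (lines[i] != Moesi::Modified && lines[i] != Moesi::Owned &&
            lines[i] != Moesi::Exclusive)
            return;                         // Shared/Invalid lines need no special action
        for (Moesi& s : lines) {
            if (s == Moesi::Modified || s == Moesi::Owned) s = Moesi::Invalid;
            else if (s == Moesi::Exclusive)                s = Moesi::Shared;
        }
        aborted = true;
    }
};

int main() {
    TxnCache c(4);
    c.begin({0, 1});
    c.lines[0] = Moesi::Modified;   // a transactional write upgraded the line
    c.external_request(0);          // another agent asks for it
    std::printf("aborted=%d line0=%d\n", c.aborted, static_cast<int>(c.lines[0]));
}
```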
  • Publication number: 20140379993
    Abstract: Methods and systems may provide for receiving, at a graphics processor, a workload from a host processor and using a kernel on the graphics processor to issue a thread group for execution of the workload on the graphics processor. Additionally, one or more coherency messages may be initiated, by the graphics processor, in response to a thread-related condition of one or more caches on the graphics processor. In one example, the thread-related condition is associated with the execution of the workload on the graphics processor and indicates that the one or more caches on the graphics processor are not coherent with a system memory associated with the host processor.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: Niraj Gupta, Hong Jiang
  • Publication number: 20140379996
    Abstract: An apparatus and method is described herein for providing speculative escape instructions. Specifically, an explicit non-transactional load operation is described herein. During execution of a speculative code region (e.g. a transaction or critical section) loads are normally tracked in a read set. However, a programmer or compiler may utilize the explicit non-transactional read to load from a memory address into a destination register, while not adding the read/load to the transactional read set. Similarly, a non-transactional store is also provided. Here, the store is performed but not added to a write set during speculative code execution. And the store may be immediately globally visible and/or persistent (even after an abort of the speculative code region). In other words, speculative escape operations are provided to ‘escape’ a speculative code region to perform non-transactional memory accesses without causing the speculative code region to abort or fail.
    Type: Application
    Filed: February 2, 2012
    Publication date: December 25, 2014
    Inventors: Ravi Rajwar, Martin G. Dixon, Konrad K. Lai, Robert S. Chappell, Bret L. Toll
  • Publication number: 20140365736
    Abstract: Methods for managing region lock in a storage controller are disclosed. Upon receiving in a hardware based region lock management circuit a data request, the region lock management circuit determines a lock type of a region lock specified in the data request and conditionally creates a region lock data structure, wherein the region lock data structure is conditionally created based on the type of the region lock requested. The region lock management circuit further determines whether the data request is requesting access to at least a portion of a locked data region and the lock type of the locked data region. If the data request is requesting access to at least a portion of the locked data region and the lock type of the locked data region permits firmware diversion, the region lock management circuit diverts the data request to a storage controller firmware processor for further processing.
    Type: Application
    Filed: June 14, 2013
    Publication date: December 11, 2014
    Inventors: Horia C. Simionescu, James Yu, Robert L. Sheffield
  • Patent number: 8909872
    Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller is coupled to the central processing unit, and a closely coupled peripheral is coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 9, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli