Cross-interrogating Patents (Class 711/124)
  • Patent number: 11892953
    Abstract: An interprocess communication (IPC) method and an IPC system for transmitting communication data from a first process to a second process, where the method includes performing initialization configuration on the first process and the second process, including creating first memory space in shared memory space, selecting a communication manner based on a length of the communication data and a value of a threshold, where the threshold is a size of the first memory space, performing interprocess data exchange in the selected communication manner, selecting a memory sharing manner for communication when the length of the communication data is less than the threshold, and selecting a data file manner for communication when the length of the communication data reaches or exceeds the threshold. (A code sketch of this selection rule follows the entry.)
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: February 6, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qibin Yang, Senyu Liu, Xiaohui Bie
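    A minimal user-space sketch of the selection rule in the abstract above. The 4 KB segment size, the IpcChannel/send names, and the stubbed transports are illustrative assumptions rather than details from the patent; real code would back them with shm_open/mmap and file I/O.
      #include <cstddef>
      #include <string>
      #include <vector>
      // Hypothetical IPC channel set up during initialization: a fixed-size
      // shared-memory segment plus a fallback file path for large payloads.
      struct IpcChannel {
          std::size_t shared_capacity;   // size of the first memory space (the threshold)
          std::string spill_file_path;   // data-file manner for oversized payloads
      };
      // Stubbed transports; a real implementation would map the segment and write the file.
      void send_via_shared_memory(const IpcChannel&, const std::vector<char>&) {}
      void send_via_data_file(const IpcChannel&, const std::vector<char>&) {}
      // Rule from the abstract: payloads shorter than the segment go through shared
      // memory; payloads that reach or exceed the threshold go through a data file.
      void send(const IpcChannel& ch, const std::vector<char>& data) {
          if (data.size() < ch.shared_capacity)
              send_via_shared_memory(ch, data);
          else
              send_via_data_file(ch, data);
      }
      int main() {
          IpcChannel ch{4096, "/tmp/ipc_spill.bin"};   // assumed 4 KB first memory space
          send(ch, std::vector<char>(100));            // memory sharing manner
          send(ch, std::vector<char>(8192));           // data file manner
      }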
  • Patent number: 11726918
    Abstract: Dynamically coalescing atomic memory operations for memory-local computing is disclosed. In an embodiment, it is determined whether a first atomic memory access and a second atomic memory access are candidates for coalescing. In response to a triggering event, the atomic memory accesses that are candidates for coalescing are coalesced in a cache prior to requesting memory-local processing by a memory-local compute unit. The atomic memory accesses may be coalesced in the same cache line or atomic memory accesses in different cache lines may be coalesced using a multicast memory-local processing command. (A software sketch of the coalescing step follows the entry.)
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: August 15, 2023
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Johnathan Alsop, Alexandru Dutu, Shaizeen Aga, Nuwan Jayasena
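    A software analogue of the coalescing step described above, assuming for illustration a 64-byte cache line and a memory-local unit that accepts one command per line; the queue_atomic_add/flush_to_memory_local_unit names are hypothetical.
      #include <cstdint>
      #include <iostream>
      #include <map>
      constexpr std::uint64_t kLineBytes = 64;   // assumed cache-line size
      // Pending atomic adds, keyed by target address and grouped per cache line so
      // that candidates in the same line can be coalesced into a single command.
      std::map<std::uint64_t, std::map<std::uint64_t, std::int64_t>> pending;
      void queue_atomic_add(std::uint64_t addr, std::int64_t value) {
          pending[addr / kLineBytes][addr] += value;   // adds to the same address merge
      }
      // Triggering event (e.g. buffer full or a fence): issue one memory-local
      // command per cache line instead of one per original atomic operation.
      void flush_to_memory_local_unit() {
          for (const auto& [line, ops] : pending)
              std::cout << "line " << line << ": " << ops.size() << " coalesced add(s)\n";
          pending.clear();
      }
      int main() {
          queue_atomic_add(0x1000, 1);
          queue_atomic_add(0x1008, 2);   // same cache line as above -> coalesced
          queue_atomic_add(0x2000, 5);   // different line -> separate command
          flush_to_memory_local_unit();
      }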
  • Patent number: 11556471
    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks is associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in one of the same memory devices, and the memory category of the first memory block is different than the memory category of the second memory block.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Michael S. Woodacre, Thomas McGee, Michael Malewicki
  • Patent number: 10929199
    Abstract: A system includes a memory system with a shared memory resource and a processor with multiple processor cores operably coupled to the memory system. A lock requesting core is configured to access a shared location of the shared memory resource to determine whether a lockable portion of the shared memory resource is in a locked state. The lock requesting core adds a memory lock request to a lock waiting list associated with the lockable portion based on the shared location indicating the locked state. A lock granted location dedicated to the lock requesting core is monitored for an indication that the lock requesting core has been granted the locked state from a previous locking core of the processor cores. The lock requesting core performs one or more updates to the lockable portion based on determining that the lock requesting core has been granted the locked state. (The lock hand-off is sketched in code after this entry.)
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: February 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Mark R. Gambino
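    A user-space stand-in for the scheme above, using an Anderson-style array lock: a shared counter plays the role of the lock waiting list and each waiter spins only on its own dedicated granted slot. The core count and all names are assumptions for the sketch.
      #include <atomic>
      #include <cstdio>
      #include <thread>
      #include <vector>
      constexpr int kCores = 4;                  // assumed number of cores/threads
      std::atomic<bool> granted[kCores];         // one lock-granted location per core
      std::atomic<int>  next_slot{0};            // acts as the lock waiting list
      long shared_counter = 0;                   // the lockable portion being protected
      void locked_add(long delta) {
          int my = next_slot.fetch_add(1) % kCores;                    // join the waiting list
          while (!granted[my].load(std::memory_order_acquire)) {}      // spin only on my own slot
          shared_counter += delta;                                     // update the lockable portion
          granted[my].store(false, std::memory_order_relaxed);
          granted[(my + 1) % kCores].store(true, std::memory_order_release);   // hand lock to next waiter
      }
      int main() {
          granted[0].store(true);                                      // lock starts free for slot 0
          for (int i = 1; i < kCores; ++i) granted[i].store(false);
          std::vector<std::thread> workers;
          for (int i = 0; i < kCores; ++i) workers.emplace_back(locked_add, 1);
          for (auto& t : workers) t.join();
          std::printf("counter = %ld\n", shared_counter);              // expect 4
      }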
  • Patent number: 10909036
    Abstract: A system includes a shared memory accessible by a plurality of members of a shared memory system, and a processor operably coupled to the shared memory system. The processor is configured to receive a command from an exploiting member of the plurality of members, the command instructing an update to a piece of shared data, perform the update to the piece of shared data, and send an invalidation signal to each other member of the plurality of members. An invalidation signal sent to a respective member indicates that a local copy of the piece of data stored by the respective member is no longer valid, and each invalidation signal is sent asynchronously with respect to processing of the command. The processor is further configured to return the results of the command to the exploiting member.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: February 2, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Allen Carney, David Surman, William Charles Neiman, Georgette Kurdt, Scott F. Schiffer, Michael Lawrence Greenblatt
  • Patent number: 10795585
    Abstract: An embodiment of a semiconductor apparatus may include technology to determine if a memory operation on a memory is avoidable, and suppress the memory operation if the memory operation is determined to be avoidable. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: October 6, 2020
    Assignee: Intel Corporation
    Inventors: Kshitij Doshi, Bhanu Shankar
  • Patent number: 10725915
    Abstract: Disclosed herein are methods, systems, and processes to provide coherency across disjoint caches in clustered environments. It is determined whether a data object is owned by an owner node, where the owner node is one of multiple nodes of a cluster. If the owner node for the data object is identified by the determining, a request is sent to the owner node for the data object. However, if the owner node for the data object is not identified by the determining, a node in the cluster is selected as the owner node, and the request for the data object is sent to the owner node. (Owner selection is sketched in code after this entry.)
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: July 28, 2020
    Assignee: Veritas Technologies LLC
    Inventors: Bhushan Jagtap, Mark Hemment, Anindya Banerjee, Ranjit Noronha, Jitendra Patidar, Kundan Kumar, Sneha Pawar
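    A minimal sketch of the owner-node selection described above, with a hash over the object key standing in for whatever selection policy the patent actually uses; node names and the owner_for/request_object helpers are illustrative.
      #include <cstddef>
      #include <functional>
      #include <iostream>
      #include <string>
      #include <unordered_map>
      #include <vector>
      std::vector<std::string> cluster = {"node0", "node1", "node2"};   // illustrative nodes
      std::unordered_map<std::string, std::string> owner_of;            // data object -> owner node
      // Return the owner for an object, selecting one (here by hashing the object
      // key onto the cluster) when no owner has been recorded yet.
      const std::string& owner_for(const std::string& object) {
          auto it = owner_of.find(object);
          if (it == owner_of.end()) {
              std::size_t idx = std::hash<std::string>{}(object) % cluster.size();
              it = owner_of.emplace(object, cluster[idx]).first;        // select a node as owner
          }
          return it->second;
      }
      void request_object(const std::string& object) {
          std::cout << "send request for " << object << " to " << owner_for(object) << "\n";
      }
      int main() {
          request_object("objectA");   // owner selected on first use
          request_object("objectA");   // same owner serves subsequent requests
      }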
  • Patent number: 10671532
    Abstract: A method and a system detect a cache line as a potential or confirmed hot cache line based on receiving an intervention of a processor associated with a fetch of the cache line. The method and system include suppressing an action of operations associated with the hot cache line. A related method and system detect an intervention and, in response, communicate an intervention notification to another processor. An alternative method and system detect a hot data object associated with an intervention event of an application. The method and system can suppress actions of operations associated with the hot data object. An alternative method and system can detect and communicate an intervention associated with a data object.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: June 2, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Zoellin, Christian Jacobi, Chung-Lung K. Shum, Martin Recktenwald, Anthony Saporito, Aaron Tsai
  • Patent number: 10628314
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Patent number: 10628313
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Patent number: 10585800
    Abstract: A method and a system detect a cache line as a potential or confirmed hot cache line based on receiving an intervention of a processor associated with a fetch of the cache line. The method and system include suppressing an action of operations associated with the hot cache line. A related method and system detect an intervention and, in response, communicate an intervention notification to another processor. An alternative method and system detect a hot data object associated with an intervention event of an application. The method and system can suppress actions of operations associated with the hot data object. An alternative method and system can detect and communicate an intervention associated with a data object.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Zoellin, Christian Jacobi, Chung-Lung K. Shum, Martin Recktenwald, Anthony Saporito, Aaron Tsai
  • Patent number: 10579525
    Abstract: A method and a system detect a cache line as a potential or confirmed hot cache line based on receiving an intervention of a processor associated with a fetch of the cache line. The method and system include suppressing an action of operations associated with the hot cache line. A related method and system detect an intervention and, in response, communicate an intervention notification to another processor. An alternative method and system detect a hot data object associated with an intervention event of an application. The method and system can suppress actions of operations associated with the hot data object. An alternative method and system can detect and communicate an intervention associated with a data object.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: March 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Zoellin, Christian Jacobi, Chung-Lung K. Shum, Martin Recktenwald, Anthony Saporito, Aaron Tsai
  • Patent number: 10568025
    Abstract: An approach is provided for data processing methods wherein a PHY layer transmit operation is constructed by aggregating MAC layer data units (MPDUs) by means of a linked list. A linked list of TX descriptors can be formed separately from the payloads. In order to minimize processor speed rates, the next transmission buffer is pre-fetched whenever appropriate. Interlocks are used to prevent conflict on descriptor fetches and payload fetches.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: February 18, 2020
    Assignees: ADVANCED MICRO DEVICES, INC., AMD FAR EAST LTD.
    Inventors: Sebastian Ahmed, Douglas A. Mammoser
  • Patent number: 10423346
    Abstract: An apparatus for processing data 2 contains multiple power domains which may be in a non-retaining power state or a retaining power state. If a power domain is in a non-retaining power state in which it is not able to retain a copy of a stored parameter value and it is switched into a retaining power state in which it requires a copy of that parameter value, then it fetches the parameter value from a store within another power domain. One of the power domains contains a master copy of the parameter value to which writes changing in the parameter value are made. At least one of the other power domains fetches a copy of the parameter value if required from a power domain other than the power domain containing the master copy.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: September 24, 2019
    Assignee: ARM Limited
    Inventor: Arthur Brian Laughton
  • Patent number: 10339064
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes detecting, by a processing device, a hot cache line scenario. The computer-implemented method further includes tracking, by the processing device, hot cache line requests from requesters to determine subsequent satisfaction of the requests. The computer-implemented method further includes facilitating, by the processing device, servicing of the requests according to hierarchy of the requestors.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: July 2, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Jason D. Kohl, Pak-Kin Mak, Vesselina K. Papazova
  • Patent number: 10331561
    Abstract: Systems and methods for rebuilding an index for a flash cache are provided. The index is rebuilt by reading headers of containers stored in the cache and inserting information from the headers into the index. The index is enabled while being rebuilt such that lookup operations can be performed using the index even when the index is incomplete. New containers can be inserted into used or unused regions of the cache while the index is being rebuilt. (A rebuild sketch follows the entry.)
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: June 25, 2019
    Assignee: EMC Corporation
    Inventors: Philip N. Shilane, Grant R. Wallace
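    A toy version of the rebuild flow described above: only container headers are scanned to repopulate the index, and lookups keep working (possibly missing) while the rebuild runs. The container layout and all names are assumptions.
      #include <cstdint>
      #include <iostream>
      #include <optional>
      #include <string>
      #include <unordered_map>
      #include <vector>
      // A container as laid out in the flash cache: a header listing the keys
      // (e.g. chunk fingerprints) it holds.
      struct Container {
          std::uint32_t id;
          std::vector<std::string> header_keys;
      };
      std::unordered_map<std::string, std::uint32_t> index;   // key -> container id
      // Rebuild: scan only the container headers and repopulate the index.
      void rebuild_index(const std::vector<Container>& cache) {
          for (const auto& c : cache)
              for (const auto& key : c.header_keys)
                  index[key] = c.id;
      }
      // Lookups are allowed while the rebuild runs; a miss may simply mean the
      // entry has not been re-inserted into the incomplete index yet.
      std::optional<std::uint32_t> lookup(const std::string& key) {
          auto it = index.find(key);
          return it == index.end() ? std::nullopt : std::optional<std::uint32_t>(it->second);
      }
      int main() {
          std::vector<Container> cache = {{1, {"fp-a", "fp-b"}}, {2, {"fp-c"}}};
          rebuild_index(cache);
          std::cout << (lookup("fp-c") ? "hit\n" : "miss\n");
      }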
  • Patent number: 10228951
    Abstract: Systems, apparatuses, and methods for committing store instructions out of order from a store queue are described. A processor may store a first store instruction and a second store instruction in the store queue, wherein the first store instruction is older than the second store instruction. In response to determining the second store instruction is ready to commit to the memory hierarchy, the processor may allow the second store instruction to commit before the first store instruction, in response to determining that all store instructions in the store queue older than the second store instruction are non-speculative. However, if it is determined that at least one store instruction in the store queue older than the second store instruction is speculative, the processor may prevent the second store instruction from committing to the memory hierarchy before the first store instruction. (The commit rule is sketched in code after this entry.)
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: March 12, 2019
    Assignee: Apple Inc.
    Inventors: Kulin N. Kothari, Mridul Agarwal, Pradeep Kanapathipillai
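    A small model of the commit rule above: a younger, ready store may commit ahead of an older one only when every older entry in the store queue is non-speculative. The StoreEntry fields are an assumed simplification of a real store queue.
      #include <cstddef>
      #include <deque>
      #include <iostream>
      // Store-queue entry for the sketch; the oldest entry sits at the front.
      struct StoreEntry {
          unsigned addr;
          unsigned data;
          bool speculative;   // still behind unresolved older speculation
          bool ready;         // address and data available, ready to commit
      };
      std::deque<StoreEntry> store_queue;
      // Rule from the abstract: a younger, ready store may commit ahead of an older
      // one only if every older store in the queue is non-speculative.
      bool may_commit_out_of_order(std::size_t i) {
          if (!store_queue[i].ready) return false;
          for (std::size_t older = 0; older < i; ++older)
              if (store_queue[older].speculative) return false;
          return true;
      }
      int main() {
          store_queue = {{0x10, 1, false, false},    // older store, non-speculative, not yet ready
                         {0x20, 2, false, true}};    // younger store, ready
          std::cout << may_commit_out_of_order(1) << "\n";   // 1: may commit early
          store_queue[0].speculative = true;
          std::cout << may_commit_out_of_order(1) << "\n";   // 0: must wait for the older store
      }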
  • Patent number: 10120889
    Abstract: In one embodiment, a method includes receiving a content object; and determining whether a list configured to store information of a pre-determined number of stored content objects is full. Each content object is represented as a vector of elements. The method also includes identifying a corresponding node of a k-dimensional tree for each of the stored content objects and the received content object based on determining one or more median vectors from the vectors of the content objects. Each node of the k-dimensional tree is configured to store the vector of a particular one of the content objects. The method also includes moving information corresponding to the vector of one or more of the stored content objects and the received content object from the list to the corresponding node of the k-dimensional tree.
    Type: Grant
    Filed: May 5, 2015
    Date of Patent: November 6, 2018
    Assignee: Facebook, Inc.
    Inventor: Vikram Chandrasekhar
  • Patent number: 10120684
    Abstract: Logic is provided to receive and execute a mask move instruction to transfer unmasked data elements of a vector data element including a plurality of packed data elements from a source location to a destination location, subject to mask information for the instruction. The logic is to execute a speculative full width operation, and if an exception occurs is to perform operations sequentially or one at a time. Other embodiments are described and claimed. (A software analogue follows the entry.)
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: November 6, 2018
    Assignee: Intel Corporation
    Inventors: Doron Orenstien, Zeev Sperber, Robert Valentine, Benny Eitan
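    A software analogue of the described behaviour, not the instruction's actual microarchitecture: try the full-width move first and, if it faults, redo the move one lane at a time so that masked-off lanes are never touched. The lane count and the fault stub are illustrative.
      #include <array>
      #include <cstddef>
      #include <iostream>
      #include <stdexcept>
      constexpr std::size_t kLanes = 8;   // assumed vector width
      using Vec  = std::array<int, kLanes>;
      using Mask = std::array<bool, kLanes>;
      // Stand-in for a per-lane load that can fault (e.g. an unmapped page).
      int load_lane(const Vec& src, std::size_t i, bool lane3_faults) {
          if (lane3_faults && i == 3) throw std::runtime_error("fault");
          return src[i];
      }
      // Speculative full-width path first; on an exception, fall back to moving
      // one unmasked lane at a time so masked-off lanes are never accessed.
      void masked_move(Vec& dst, const Vec& src, const Mask& m, bool lane3_faults) {
          try {
              Vec tmp;
              for (std::size_t i = 0; i < kLanes; ++i) tmp[i] = load_lane(src, i, lane3_faults);
              for (std::size_t i = 0; i < kLanes; ++i) if (m[i]) dst[i] = tmp[i];   // fast path
          } catch (const std::exception&) {
              for (std::size_t i = 0; i < kLanes; ++i)                              // sequential path
                  if (m[i]) dst[i] = load_lane(src, i, lane3_faults);
          }
      }
      int main() {
          Vec src{}, dst{};
          for (std::size_t i = 0; i < kLanes; ++i) src[i] = int(i);
          Mask m{}; m.fill(true); m[3] = false;             // lane 3 is masked off
          masked_move(dst, src, m, /*lane3_faults=*/true);  // full width faults, fallback succeeds
          std::cout << dst[2] << " " << dst[3] << "\n";     // prints "2 0"
      }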
  • Patent number: 10073775
    Abstract: An apparatus and method are described for a triggered prefetch operation. For example, one embodiment of a processor comprises: a first core comprising a first cache to store a first set of cache lines; a second core comprising a second cache to store a second set of cache lines; a cache management circuit to maintain coherency between one or more cache lines in the first cache and the second cache, the cache management circuit to allocate a lock on a first cache line to the first cache; a prefetch circuit comprising a prefetch request buffer to store a plurality of prefetch request entries including a first prefetch request entry associated with the first cache line, the prefetch circuit to cause the first cache line to be prefetched to the second cache in response to an invalidate command detected for the first cache line.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: September 11, 2018
    Assignee: Intel Corporation
    Inventors: Christopher B. Wilkerson, Ren Wang, Antoine Kaufmann, Anil Vasudevan, Robert G. Blankenship, Venkata Krishnan, Tsung-Yuan C. Tai
  • Patent number: 10073776
    Abstract: A processing system includes a plurality of processor cores and a plurality of private caches. Each private cache is associated with a corresponding processor core of the plurality of processor cores and includes a corresponding first set of cachelines. The processing system further includes a shared cache shared by the plurality of processor cores. The shared cache includes a second set of cachelines, and a shadow tag memory including a plurality of entries, each entry storing state information for a corresponding cacheline of the first set of cachelines of one of the private caches.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: September 11, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sriram Srinivasan, William L. Walker
  • Patent number: 9990292
    Abstract: A data processing system includes a snoop filter organized as a number of lines, each storing an address tag associated with the address of data stored in one or more caches of the system, a coherency state of the data, and presence data. A snoop controller sends snoop messages in response to data access requests. The presence data is configurable in a first format, in which the value of a bit in the presence data is indicative of a subset of the nodes for which at least one node in the subset has a copy of the data in its local cache, and in a second format, in which the presence data comprises a unique identifier of a node having a copy of the data in its local cache. The snoop controller sends snoop messages to the nodes indicated by the presence data. (Both presence formats are sketched in code after this entry.)
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: June 5, 2018
    Assignee: ARM Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser
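    A sketch of the two presence-data formats above: a coarse bit-vector in which each bit covers an assumed four-node subset, and an exact format carrying a single sharer's identifier. Node counts and names are illustrative assumptions.
      #include <bitset>
      #include <cstddef>
      #include <cstdint>
      #include <iostream>
      #include <vector>
      constexpr int kNodes = 16;        // assumed node count
      constexpr int kNodesPerBit = 4;   // assumed subset size for the coarse format
      // Presence data in the two formats: a coarse bit-vector in which each bit
      // covers a subset of nodes, or the exact identifier of a single sharer.
      struct Presence {
          bool coarse;
          std::bitset<kNodes / kNodesPerBit> subsets;   // coarse format
          std::uint8_t unique_node;                     // exact format
      };
      // Expand the presence data into the set of nodes that must receive a snoop.
      std::vector<int> snoop_targets(const Presence& p) {
          std::vector<int> targets;
          if (!p.coarse) { targets.push_back(p.unique_node); return targets; }
          for (std::size_t bit = 0; bit < p.subsets.size(); ++bit)
              if (p.subsets[bit])
                  for (int n = 0; n < kNodesPerBit; ++n)
                      targets.push_back(int(bit) * kNodesPerBit + n);   // whole subset is snooped
          return targets;
      }
      int main() {
          Presence exact{false, {}, 7};
          Presence coarse{true, std::bitset<4>("0011"), 0};
          std::cout << snoop_targets(exact).size() << " vs "
                    << snoop_targets(coarse).size() << " snoop messages\n";   // 1 vs 8
      }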
  • Patent number: 9934146
    Abstract: Methods and apparatuses to control cache line coherency are described. A processor may include a first core having a cache to store a cache line, a second core to send a request for the cache line from the first core, moving logic to cause a move of the cache line between the first core and a memory and to update a tag directory of the move, and cache line coherency logic to create a chain home in the tag directory from the request to cause the cache line to be sent from the tag directory to the second core. A method to control cache line coherency may include creating a chain home in a tag directory from a request for a cache line in a first processor core from a second processor core to cause the cache line to be sent from the tag directory to the second processor core.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: April 3, 2018
    Assignee: INTEL CORPORATION
    Inventors: Simon C. Steely, Jr., Samantika S. Sury, William C. Hasenplaugh
  • Patent number: 9864687
    Abstract: An application processor is provided. The application processor includes a cache coherent interconnect, a first master device connected to the cache coherent interconnect, a second master device, and a master-side filter connected between the cache coherent interconnect and the second master device. The master-side filter receives a snoop request from the first master device through the cache coherent interconnect, compares a second security attribute of the second master device with a first security attribute of the first master device which is included in the snoop request, and determines whether to transmit an address included in the snoop request to the second master device according to a comparison result.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: January 9, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sik Kim, Woo Hyung Chun, Seong Min Jo, Jae Young Hur
  • Patent number: 9864690
    Abstract: A processor in a multi-processor configuration is configured to perform dynamic address translation from logical addresses to real addresses and to detect memory conflicts for shared logical memory in transactional memory based on logical (virtual) address comparisons.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: January 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9641508
    Abstract: An information processing apparatus according to the present invention is arranged in a client terminal connected to a server storing data via a network, wherein the information processing apparatus receives requests from one or a plurality of applications in the client terminal and controls transmission and reception of information to/from the server. The information processing apparatus includes an authentication information storage unit for storing authentication information of a user for accessing the server, and a request transmission unit for attaching the authentication information of the user of the client terminal to a request based on the request given by the application of the client terminal, and transmitting the request to the server.
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: May 2, 2017
    Assignee: SONY CORPORATION
    Inventors: Shuhei Sonoda, Tsutomu Kawachi, Masayuki Takada
  • Patent number: 9594683
    Abstract: A data processing system including multiple processors with a hierarchical cache structure comprising multiple levels of cache between the processors and a main memory, wherein at least one page mover is positioned closer to the main memory and is connected to the cache memories of the at least one shared cache level (L2, L3, L4), the main memory and to the multiple processors to move data between the cache memories of the at least one shared cache level, the main memory and the processors. In response to a request from one of the processors, the at least one page mover fetches data of a storage area line-wise from at least one of the following memories: the cache memories and the main memory maintaining multiple processor cache memory access coherency.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: March 14, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jens Dittrich, Christian Jacobi, Matthias Pflanz, Stefan Schuh, Kai Weber
  • Patent number: 9575831
    Abstract: An apparatus and method detect the use of stale data values due to weak consistency between parallel threads on a computer system. A consistency error detection mechanism uses object code injection to build a consistency error detection table during the operation of an application. When the application is paused, the consistency error detection mechanism uses the consistency error detection table to detect consistency errors where stale data is used by the application. The consistency error detection mechanism alerts the user/programmer to the consistency errors in the application program.
    Type: Grant
    Filed: September 16, 2014
    Date of Patent: February 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Cary L. Bates, Lee N. Helgeson, Justin K. King, Michelle A. Schlicht
  • Patent number: 9547593
    Abstract: A microprocessor system is disclosed that includes a first data cache that is shared by a first group of one or more program threads in a multi-thread mode and used by one program thread in a single-thread mode. A second data cache is shared by a second group of one or more program threads in the multi-thread mode and is used as a victim cache for the first data cache in the single-thread mode.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 17, 2017
    Assignee: NXP USA, Inc.
    Inventor: Thang M. Tran
  • Patent number: 9424081
    Abstract: Examples of the disclosure enable callback operations, such as interrupts, Asynchronous Procedure Calls (APCs), and Deferred Procedure Calls (DPCs), to be efficiently managed. In some examples, an emulated thread includes a request for a callback operation. When the request is detected, the emulated thread and/or a cooperating thread associated with the callback operation is executed based on an execution mode associated with the callback operation. Examples of the disclosure manage callback operations while efficiently managing system resources, including processor load, by providing at least one cooperating thread that consumes little or no processing power until the callback operation is ready to be executed.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: August 23, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Barry Clayton Bond
  • Patent number: 9413726
    Abstract: Methods and systems for improving efficiency of direct cache access (DCA) are provided. According to one embodiment, a set of DCA control settings are defined by a network I/O device of a network security device for each of multiple I/O device queues based on network security functionality performed by corresponding CPUs of a host processor. The control settings specify portions of network packets that are to be copied to a cache of the corresponding CPU. A packet is received by the network I/O device. Information associated with the packet is queued onto an I/O device queue. The information is then transferred from the I/O device queue to a host memory of the network security device. Based on the control settings for the I/O device queue only those portions of the information corresponding to the one or more specified portions are copied to the cache of the corresponding CPU.
    Type: Grant
    Filed: June 11, 2015
    Date of Patent: August 9, 2016
    Assignee: Fortinet, Inc.
    Inventors: Xu Zhou, Hongbin Lu
  • Patent number: 9405696
    Abstract: A cache is provided for operatively coupling a processor with a main memory. The cache includes a cache memory and a cache controller operatively coupled with the cache memory. The cache controller is configured to receive memory requests to be satisfied by the cache memory or the main memory. In addition, the cache controller is configured to process cache activity information to cause at least one of the memory requests to bypass the cache memory.
    Type: Grant
    Filed: January 28, 2014
    Date of Patent: August 2, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Blaine D. Gaither, Patrick Knebel
  • Patent number: 9378237
    Abstract: Methods and apparatus are provided for serializing data. A computing device can generate a serialization buffer (SB). The SB can specify fields storing data and corresponding offsets, with an offset referring to a location in the SB storing the corresponding field. A designated field in the SB can be accessed by determining a designated offset for the designated field, determining a starting location based on the designated offset, and accessing data at the starting location. A distinct copy of the SB can be stored on a storage device. (A toy buffer layout is sketched after this entry.)
    Type: Grant
    Filed: April 15, 2014
    Date of Patent: June 28, 2016
    Assignee: Google Inc.
    Inventors: Wouter van Oortmerssen, Martin Froehlich
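    A toy serialization buffer in the spirit of the abstract above: fields are appended to a byte buffer and an offset table records where each field starts, so a field is read by resolving its offset to a starting location. Field ids and the u32-only layout are assumptions.
      #include <cstddef>
      #include <cstdint>
      #include <cstring>
      #include <iostream>
      #include <unordered_map>
      #include <vector>
      // A toy serialization buffer: raw bytes plus a table mapping field ids to the
      // offset at which each field starts.
      struct SerializationBuffer {
          std::vector<std::uint8_t> bytes;
          std::unordered_map<int, std::size_t> offset_of_field;
          void add_u32(int field_id, std::uint32_t value) {
              offset_of_field[field_id] = bytes.size();            // record the field's offset
              const auto* p = reinterpret_cast<const std::uint8_t*>(&value);
              bytes.insert(bytes.end(), p, p + sizeof(value));     // append the field's data
          }
          // Access a designated field by resolving its offset into a starting location.
          std::uint32_t get_u32(int field_id) const {
              std::uint32_t value;
              std::memcpy(&value, bytes.data() + offset_of_field.at(field_id), sizeof(value));
              return value;
          }
      };
      int main() {
          SerializationBuffer sb;
          sb.add_u32(/*field_id=*/1, 42);
          sb.add_u32(/*field_id=*/2, 7);
          std::cout << sb.get_u32(2) << "\n";   // 7, read straight out of the buffer
          // a distinct copy of sb.bytes plus the offset table could be persisted as-is
      }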
  • Patent number: 9367412
    Abstract: A network-based storage system includes multiple storage devices and system controllers. Each storage device in multiple aggregates of storage devices can include ownership portion(s) that are configured to indicate a system controller to which it belongs. First and second system controllers can form an HA pair, and can be in communication with each other, the storage devices, and a separate host server. A first system controller controls an aggregate of storage devices and can facilitate an automated hotswap replacement of a second system controller that controls another aggregate of storage devices with a separate third system controller that subsequently controls the other aggregate of storage devices.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: June 14, 2016
    Assignee: NetApp, Inc.
    Inventors: Sravana Kumar Elpula, Varun Garg, Sakshi Chaitanya Veni
  • Patent number: 9367473
    Abstract: A processor (600) in a distributed shared memory multi-processor computer system (10) may initiate a flush request to remove data from its cache. A processor interface (24) receives the flush request and performs a snoop operation to determine whether the data is maintained in a one of the local processors (601) and whether the data has been modified. If the data is maintained locally and it has been modified, the processor interface (24) initiates removal of the data from the cache of the identified processor (601). The identified processor (601) initiates a writeback to a memory directory interface unit (24) associated with a home memory 17 for the data in order to preserve the modification to the data. If the data is not maintained locally or has not been modified, the processor interface (24) forwards the flush request to the memory directory interface unit (22).
    Type: Grant
    Filed: December 26, 2013
    Date of Patent: June 14, 2016
    Assignee: Silicon Graphics International Corp.
    Inventor: Jeffrey S. Kuskin
  • Patent number: 9361176
    Abstract: An apparatus and method detect the use of stale data values due to weak consistency between parallel threads on a computer system. A consistency error detection mechanism uses object code injection to build a consistency error detection table during the operation of an application. When the application is paused, the consistency error detection mechanism uses the consistency error detection table to detect consistency errors where stale data is used by the application. The consistency error detection mechanism alerts the user/programmer to the consistency errors in the application program.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: June 7, 2016
    Assignee: International Business Machines Corporation
    Inventors: Cary L. Bates, Lee Helgeson, Justin K. King, Michelle A. Schlicht
  • Patent number: 9348716
    Abstract: Provided are a system, computer program, and method for restoring redundancy in a storage group when a storage device in the storage group fails. In response to detecting a failure of a first storage device in a storage group, wherein the storage group stores each of a plurality of extents in the first storage device and a second storage device to provide redundancy, a determination is made whether a spare storage device is available that has a storage capacity less than that of the storage group. One of the extents in a storage location in the second storage device that is beyond an upper limit of positions in the spare storage device is moved to a new storage location. The spare drive is incorporated into the storage group to provide redundant storage for the storage group, wherein the extents in the storage group are copied to the spare drive. (The extent relocation is sketched in code after this entry.)
    Type: Grant
    Filed: June 6, 2013
    Date of Patent: May 24, 2016
    Assignee: International Business Machines Corporation
    Inventors: Eric J. Bartlett, Matthew J. Fairhurst
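    A simplified model of the relocation step above: extents sitting beyond the smaller spare's upper limit are first moved below that limit, after which the group can be mirrored onto the spare. Capacities and extent positions are made-up illustration values.
      #include <cstddef>
      #include <iostream>
      #include <vector>
      // Extent positions (in extent-sized units) on the surviving second device.
      struct StorageGroup {
          std::size_t device_capacity = 100;               // extents per original device
          std::vector<std::size_t> extent_positions = {5, 42, 97};
      };
      // Restore redundancy with a smaller spare: any extent beyond the spare's upper
      // limit is first moved below that limit, then the group is copied to the spare.
      void incorporate_spare(StorageGroup& g, std::size_t spare_capacity) {
          std::vector<bool> used(g.device_capacity, false);
          for (std::size_t p : g.extent_positions) used[p] = true;
          for (std::size_t& p : g.extent_positions) {
              if (p < spare_capacity) continue;                  // already reachable by the spare
              for (std::size_t q = 0; q < spare_capacity; ++q)   // find a free lower position
                  if (!used[q]) { used[p] = false; used[q] = true; p = q; break; }
          }
          std::cout << "copying " << g.extent_positions.size()
                    << " extents to the spare (capacity " << spare_capacity << ")\n";
      }
      int main() {
          StorageGroup g;
          incorporate_spare(g, /*spare_capacity=*/80);           // the extent at 97 is relocated
          for (std::size_t p : g.extent_positions) std::cout << p << " ";
          std::cout << "\n";                                     // prints "5 42 0"
      }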
  • Patent number: 9262251
    Abstract: Systems and methods for the analysis of memory information of a computing device are provided. One or more user computing devices may transmit memory information to a memory analysis system. The memory analysis system may generate a weighted object graph based on the received memory information, and identify subgraphs to inspect for potential memory use patterns. If such patterns are common in an identified subgraph, they may indicate a potential memory leak. The memory analysis system may further analyze a larger portion of the weighted object graph based on a detected common pattern. Each detected pattern may be ranked based on the likelihood that it corresponds to a memory leak.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: February 16, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Nicholas Alexander Allen
  • Patent number: 9264509
    Abstract: Methods and systems for improving efficiency of direct cache access (DCA) are provided. According to one embodiment, a DCA control is defined by a network Input/Output (I/O) device for an I/O device queue corresponding to a central processing unit (CPU) of a host processor. A part of an incoming packet is configured by the DCA control to be copied to a cache of the CPU. The incoming packet is parsed by the network I/O device based on one or more of packet analysis, packet protocol, header format and payload data information. The parsed incoming packet is transferred from an I/O device queue of the network I/O device to a host queue of a host memory that is operatively coupled with the host processor. The specified part of the parsed incoming packet is copied by a host controller to the cache of the CPU based on the DCA control.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: February 16, 2016
    Assignee: Fortinet, Inc.
    Inventors: Xu Zhou, Hongbin Lu
  • Patent number: 9253536
    Abstract: This document describes tools capable of updating data-consuming entities. These tools allow a developer of an application to use data binding to update data-consuming entities without the need to write custom code.
    Type: Grant
    Filed: March 18, 2009
    Date of Patent: February 2, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bradley R. Pettit, Nicolae Surpatanu
  • Patent number: 9229940
    Abstract: In one aspect of the invention, a search engine parses files stored among one or more file servers in order to create and maintain index information used by the search engine to perform searches. For a given file server, the population of files presented to the search engine is reduced in size to facilitate the process of updating the index. In another aspect of the invention, the file server limits the files presented in a directory list request made by a search engine. This reduces the number of files that need to be considered when performing an index update.
    Type: Grant
    Filed: September 4, 2009
    Date of Patent: January 5, 2016
    Assignee: Hitachi, Ltd.
    Inventor: Shoji Kodama
  • Patent number: 9224440
    Abstract: A non volatile memory device includes a first buffer register configured to receive and store the data to be stored into the memory device provided via a memory bus. A command window is activatable for interposing itself for access to a memory matrix between the first buffer element and the memory matrix. The command window includes a second buffer element that stores data stored in or to be stored into a group of memory elements. A first data transfer means executes a first transfer of the data stored in the second buffer register into the first buffer register during a first phase of a data write operation started by the reception of a first command. A second data transfer means receives the data provided by the memory bus and modifies, based on the received data, the data stored in the first buffer register during a second phase of the data write operation started by the reception of a second command.
    Type: Grant
    Filed: August 28, 2014
    Date of Patent: December 29, 2015
    Assignee: Micron Technology, Inc.
    Inventors: Daniele Balluchi, Graziano Mirichigni
  • Patent number: 9026743
    Abstract: For a flexible replication with skewed mapping in a multi-core chip, a request for a cache line is received, at a receiver core in the multi-core chip from a requester core in the multi-core chip. The receiver and requester cores comprise electronic circuits. The multi-core chip comprises a set of cores including the receiver and the requester cores. A target core is identified from the request to which the request is targeted. A determination is made whether the target core includes the requester core in a neighborhood of the target core, the neighborhood including a first subset of cores mapped to the target core according to a skewed mapping. The cache line is replicated, responsive to the determining being negative, from the target core to a replication core. The cache line is provided from the replication core to the requester core.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: May 5, 2015
    Assignee: International Business Machines Corporation
    Inventors: Jian Li, William Evan Speight
  • Patent number: 9015416
    Abstract: Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum. (The validation flow is sketched in code after this entry.)
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: April 21, 2015
    Assignee: Edgecast Networks, Inc.
    Inventor: Andrew Lientz
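    A compact sketch of the validation flow above, with an in-memory map standing in for the origin server: the cache serves its copy when the origin's current checksum matches the stored one and refetches otherwise. The URLs, hash-based checksum, and names are assumptions.
      #include <cstddef>
      #include <functional>
      #include <iostream>
      #include <string>
      #include <unordered_map>
      // Origin stand-in: current content per URL, with a checksum over the content.
      std::unordered_map<std::string, std::string> origin = {{"/page", "v1"}};
      std::size_t checksum(const std::string& s) { return std::hash<std::string>{}(s); }
      struct CachedInstance { std::string content; std::size_t sum; };
      std::unordered_map<std::string, CachedInstance> cache;
      // Serve a request: validate the cached instance by comparing checksums rather
      // than relying on an expiration interval, refreshing only on a mismatch.
      std::string serve(const std::string& url) {
          std::size_t current_sum = checksum(origin[url]);          // request the current checksum
          auto it = cache.find(url);
          if (it != cache.end() && it->second.sum == current_sum)
              return it->second.content;                            // cached instance still valid
          cache[url] = CachedInstance{origin[url], current_sum};    // transfer the current instance
          return cache[url].content;
      }
      int main() {
          std::cout << serve("/page") << "\n";   // v1, fills the cache
          std::cout << serve("/page") << "\n";   // v1, checksums match so the cached copy is served
          origin["/page"] = "v2";
          std::cout << serve("/page") << "\n";   // v2, mismatch forces a refresh
      }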
  • Patent number: 8990506
    Abstract: In one embodiment, the present invention includes a cache memory including cache lines that each have a tag field including a state portion to store a cache coherency state of data stored in the line and a weight portion to store a weight corresponding to a relative importance of the data. In various implementations, the weight can be based on the cache coherency state and a recency of usage of the data. Other embodiments are described and claimed. (A weight-based eviction sketch follows the entry.)
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 24, 2015
    Assignee: Intel Corporation
    Inventors: Naveen Cherukuri, Dennis W. Brzezinski, Ioannis T. Schoinas, Anahita Shayesteh, Akhilesh Kumar, Mani Azimi
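    An illustrative replacement policy in the spirit of the abstract above: each line's weight combines an assumed per-coherency-state base value with recency of use, and the line with the smallest weight is evicted. The specific weight values are invented for the sketch.
      #include <algorithm>
      #include <iostream>
      #include <vector>
      enum class CoherencyState { Invalid, Shared, Exclusive, Modified };
      // Assumed per-state base weights: dirty or exclusively held data is treated as
      // more important (costlier to lose) than clean shared data.
      int state_weight(CoherencyState s) {
          switch (s) {
              case CoherencyState::Modified:  return 8;
              case CoherencyState::Exclusive: return 4;
              case CoherencyState::Shared:    return 2;
              default:                        return 0;
          }
      }
      struct CacheLine { unsigned tag; CoherencyState state; unsigned last_use; };
      // Weight combines the coherency state with recency of use; the replacement
      // victim is the line in the set with the smallest weight.
      int weight(const CacheLine& l, unsigned now) {
          unsigned age = now - l.last_use;
          int recency = age > 8 ? 0 : 8 - int(age);
          return state_weight(l.state) + recency;
      }
      int main() {
          unsigned now = 20;
          std::vector<CacheLine> set = {{0xA, CoherencyState::Modified, 5},
                                        {0xB, CoherencyState::Shared, 19},
                                        {0xC, CoherencyState::Shared, 3}};
          auto victim = std::min_element(set.begin(), set.end(),
              [now](const CacheLine& a, const CacheLine& b) { return weight(a, now) < weight(b, now); });
          std::cout << "evict tag 0x" << std::hex << victim->tag << "\n";   // 0xc, lowest weight
      }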
  • Patent number: 8972664
    Abstract: Embodiments relate to accessing a cache line on a multi-level cache system having a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, requests are concurrently sent to the system memory and remote nodes of the plurality of nodes for the specific cache line by the local node. The specific cache line is found in a specific remote node. The specific remote node is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specified node having the specified cache line in ghost state, any subsequent fetch request initiated for the specific cache line from the specific node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: March 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Michael A. Blake, Craig R. Walters, Pak-Kin Mak
  • Patent number: 8972666
    Abstract: A computer program product for mitigating conflicts for shared cache lines between an owning core currently owning a cache line and a requestor core. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes determining whether the owning core is operating in a transactional or non-transactional mode and setting a hardware-based reject threshold at a first or second value with the owning core determined to be operating in the transactional or non-transactional mode, respectively. The method further includes taking first or second actions to encourage cache line sharing between the owning core and the requestor core in response to a number of rejections of requests by the requestor core reaching the reject threshold set at the first or second value, respectively.
    Type: Grant
    Filed: December 3, 2013
    Date of Patent: March 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Chung-Lung K. Shum
  • Patent number: 8966187
    Abstract: For a flexible replication with skewed mapping in a multi-core chip, a request for a cache line is received, at a receiver core in the multi-core chip from a requester core in the multi-core chip. The receiver and requester cores comprise electronic circuits. The multi-core chip comprises a set of cores including the receiver and the requester cores. A target core is identified from the request to which the request is targeted. A determination is made whether the target core includes the requester core in a neighborhood of the target core, the neighborhood including a first subset of cores mapped to the target core according to a skewed mapping. The cache line is replicated, responsive to the determining being negative, from the target core to a replication core. The cache line is provided from the replication core to the requester core.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Jian Li, William Evan Speight
  • Patent number: 8954675
    Abstract: In one embodiment, a system includes a database; and a cache layer comprising one or more cache nodes, the one or more cache nodes operative to: maintain in a memory one or more data structures storing association information describing associations between nodes in a graph maintained across a plurality of distributed cache clusters, the graph comprising a plurality of nodes, each uniquely identified by a node identifier, and edge information indicating associations between nodes; respond to queries for associations between nodes in the graph by accessing the memory; and forward other queries to the database for processing.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: February 10, 2015
    Assignee: Facebook, Inc.
    Inventors: Venkateshwaran Venkataramani, George Cabrera, III, Venkatasiva Prasad Chakkabala, Mark Marchukov
  • Patent number: 8930627
    Abstract: A computer program product for mitigating conflicts for shared cache lines between an owning core currently owning a cache line and a requestor core. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes determining whether the owning core is operating in a transactional or non-transactional mode and setting a hardware-based reject threshold at a first or second value with the owning core determined to be operating in the transactional or non-transactional mode, respectively. The method further includes taking first or second actions to encourage cache line sharing between the owning core and the requestor core in response to a number of rejections of requests by the requestor core reaching the reject threshold set at the first or second value, respectively.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Chung-Lung K. Shum