Shared Cache Patents (Class 711/130)
  • Patent number: 10409690
    Abstract: A storage method and device for a solid-state drive is provided in embodiments of the present disclosure. The method includes: configuring a checkpoint drive and a cache drive; backing up data blocks from a data drive into the checkpoint drive; and in response to the data drive being corrupted, writing into a further data drive part of the data blocks backed up into the checkpoint drive and part of the data blocks in the cache drive. The number of required SSD drives can be significantly reduced with the method and device without losing the data restoration capability. In addition, degraded-mode performance can also be maintained at a relatively high level.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Huibing Xiao, Jian Gao, Hongpo Gao, Geng Han, Jibing Dong, Liam Xiongcheng Li
  • Patent number: 10402329
    Abstract: A congestion controller may be configured to control traffic on an interconnect between a higher level cache and a lower level cache. The lower level cache may also be coupled to a main memory. The congestion controller may be configured to reduce congestion on the interconnect by blocking transactions that include writing of data to the lower level cache if the data has not been modified relative to a copy of the data in the main memory. The congestion controller may also be configured to control the traffic by blocking certain transactions in a controlled manner for traffic shaping or for performance features.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: September 3, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Dana Michelle Vantrease
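    A minimal Python sketch of the clean-writeback filtering described in patent 10402329 above, assuming a simple write-back path in which each transaction carries a dirty flag; writebacks whose data is unmodified relative to main memory are dropped to save interconnect bandwidth. The class and field names are illustrative assumptions, not taken from the patent.

      # Toy model of blocking clean writebacks to the lower-level cache (names are hypothetical).
      from dataclasses import dataclass

      @dataclass
      class Writeback:
          address: int
          dirty: bool          # modified relative to the copy in main memory?

      class CongestionController:
          def __init__(self):
              self.forwarded = 0
              self.blocked = 0

          def submit(self, wb: Writeback) -> bool:
              """Forward a writeback to the lower-level cache only if it carries new data."""
              if not wb.dirty:
                  self.blocked += 1      # clean data already exists in main memory: drop it
                  return False
              self.forwarded += 1        # dirty data must be written back
              return True

      if __name__ == "__main__":
          cc = CongestionController()
          for wb in [Writeback(0x100, True), Writeback(0x140, False), Writeback(0x180, False)]:
              cc.submit(wb)
          print(f"forwarded={cc.forwarded} blocked={cc.blocked}")   # forwarded=1 blocked=2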
  • Patent number: 10394459
    Abstract: A data storage device includes a filter, a central processing unit (CPU), a first memory configured to store a page, a second memory, and a page type analyzer configured to analyze a type of the page output from the first memory and to transmit an indication signal to the CPU according to an analysis result. According to control of the CPU that operates based on the indication signal, the filter passes the page to the second memory or filters each row in the page, and transmits first filtered data to the second memory.
    Type: Grant
    Filed: February 16, 2015
    Date of Patent: August 27, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Man Keun Seo, Kwang Hoon Kim, Sang Kyoo Jeong, Kwang Seok Im
  • Patent number: 10372623
    Abstract: A storage control apparatus includes a cache memory, and a processor configured to access to a first area of the cache memory in accordance with a command, generate a first processing report identifying the first area, input the first processing report to a processing report queue when a plurality of second processing reports each of which identifies the first area are not stored in the processing report queue, execute management list update processing in which the access to the first area is recorded in a management list in accordance with the first processing report, identify data to be deleted from the cache memory in accordance with the management list, and not to input the first processing report to the processing report queue when the plurality of second processing reports are stored in the processing report queue.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: August 6, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Jun Kato
  • Patent number: 10346309
    Abstract: In an embodiment, a prefetch circuit may implement prefetch “boosting” to reduce the cost of cold (compulsory) misses and thus potentially improve performance. When a demand miss occurs, the prefetch circuit may generate one or more prefetch requests. The prefetch circuit may monitor the progress of the demand miss (and optionally the previously-generated prefetch requests as well) through the cache hierarchy to memory. At various progress points, if the demand miss remains a miss, additional prefetch requests may be launched. For example, if the demand miss accesses a lower level cache and misses, additional prefetch requests may be launched because the latency avoided in prefetching the additional cache blocks is higher, which may outweigh the risk that the additional cache blocks are incorrectly prefetched.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: July 9, 2019
    Assignee: Apple Inc.
    Inventors: James R. Hakewill, Ian D. Kountanis, Douglas C. Holman
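    A rough Python sketch of the prefetch “boosting” behaviour in patent 10346309 above: a demand miss that keeps missing at successively lower cache levels triggers additional prefetch requests at each progress point, on the reasoning that the avoided latency grows with depth. The level names, per-level prefetch counts, and request format are illustrative assumptions.

      # Hypothetical prefetch counts launched at each progress point of the demand miss.
      BOOST_SCHEDULE = {"L1": 1, "L2": 2, "L3": 4}   # a deeper miss launches more prefetches

      def prefetches_for_demand_miss(miss_levels):
          """Return prefetch requests issued as a demand miss walks the cache hierarchy.

          miss_levels: ordered list of cache levels the demand access has missed in so far.
          """
          requests = []
          next_block = 1
          for level in miss_levels:
              for _ in range(BOOST_SCHEDULE.get(level, 0)):
                  requests.append((f"prefetch block +{next_block}", f"boosted at {level} miss"))
                  next_block += 1
          return requests

      if __name__ == "__main__":
          # A miss that falls all the way to memory launches 1 + 2 + 4 = 7 prefetch requests.
          for req in prefetches_for_demand_miss(["L1", "L2", "L3"]):
              print(req)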
  • Patent number: 10339023
    Abstract: In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: July 2, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Tsung-Yuan C. Tai, Paul S. Diefenbaugh, Andrew J. Herdrich
  • Patent number: 10331585
    Abstract: Provided are a device and computer readable storage medium for programming a memory module to initiate a training mode in which the memory module transmits continuous bit patterns on a side band lane of the bus interface; receiving the bit patterns over the bus interface; determining from the received bit patterns a transition of values in the bit pattern to determine a data eye between the determined transitions of the values; and determining a setting to control a phase interpolator to generate interpolated signals used to sample data within the determined data eye.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: June 25, 2019
    Assignee: INTEL CORPORATION
    Inventors: Tonia G. Morris, Jonathan C. Jasper, Arnaud J. Forestier
  • Patent number: 10324850
    Abstract: A cache system is configurable to trade power consumption for cache access latency. When it is desired for a system with a cache to conserve dynamic power, the lookup of accesses (e.g., snoops) to cache tag ways is serialized to perform one (or fewer than all) tag way access per clock (or even slower). Thus, for an N-way set associative cache, instead of performing a lookup/comparison on the N tag ways in parallel, the lookups are performed one tag way at a time. This takes N times more cycles, thereby reducing the access/snoop bandwidth by a factor of N. However, the power consumption of the serialized access is reduced when compared to ‘all parallel’ accesses/snoops.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: June 18, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Patrick P. Lai, Robert Allen Shearer
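    A small Python model contrasting the two lookup modes described in patent 10324850 above for an N-way set associative cache, counting cycles and tag-array activations. It captures the trade-off (roughly N times the latency in exchange for fewer simultaneous tag activations, with early exit on a hit), but the structure and numbers are illustrative, not the patented hardware.

      def parallel_lookup(tag_ways, tag):
          """Compare the tag against all ways in one cycle (higher dynamic power)."""
          activations = len(tag_ways)
          hit_way = next((i for i, t in enumerate(tag_ways) if t == tag), None)
          return hit_way, 1, activations           # (way, cycles, tag-array activations)

      def serialized_lookup(tag_ways, tag):
          """Compare one way per cycle; stop early on a hit (lower power, higher latency)."""
          cycles = activations = 0
          for i, t in enumerate(tag_ways):
              cycles += 1
              activations += 1
              if t == tag:
                  return i, cycles, activations
          return None, cycles, activations

      if __name__ == "__main__":
          ways = [0xA, 0xB, 0xC, 0xD, 0xE, 0xF, 0x1, 0x2]   # tags stored in an 8-way set
          print(parallel_lookup(ways, 0x1))    # (6, 1, 8)  -> 1 cycle, 8 activations
          print(serialized_lookup(ways, 0x1))  # (6, 7, 7)  -> 7 cycles, 7 activations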
  • Patent number: 10318422
    Abstract: A computer-readable recording medium storing an information processing program for causing a computer to execute a process, the process includes: acquiring a cache memory size allocated to each process within an application program; acquiring a cache miss ratio for a process executed using an allocated cache memory size; correcting a cache memory size to be allocated to the process based on an acquired cache miss ratio; acquiring a first cache memory size allocated to the process after the correcting is performed; acquiring a first performance value when the process is executed using the first cache memory size; acquiring a second cache memory size which is allocated to the process later than the first cache memory size; acquiring a second performance value when the process is executed using the second cache memory size; and correcting the second cache memory size based on the first performance value and the second performance value.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: June 11, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Katsumi Ichinose
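    A hedged Python sketch of the two correction steps described in patent 10318422 above: first nudge a process's cache allocation from its observed miss ratio, then correct the later allocation by comparing the performance measured under the two successive allocations. Thresholds, step sizes, and function names are assumptions made for illustration.

      def correct_for_miss_ratio(size_kb, miss_ratio, high=0.10, step_kb=64):
          """Grow the allocation when the observed miss ratio is above a threshold."""
          return size_kb + step_kb if miss_ratio > high else size_kb

      def correct_for_performance(size1_kb, perf1, size2_kb, perf2):
          """Keep the later allocation only if it actually improved performance."""
          return size2_kb if perf2 >= perf1 else size1_kb

      if __name__ == "__main__":
          size = 256                                             # initial allocation (KB)
          size = correct_for_miss_ratio(size, miss_ratio=0.18)   # -> 320 KB
          perf1 = 1000.0                                         # e.g. requests/s at 320 KB
          size2 = correct_for_miss_ratio(size, miss_ratio=0.12)  # -> 384 KB
          perf2 = 990.0                                          # slightly worse at 384 KB
          print(correct_for_performance(size, perf1, size2, perf2))  # 320: revert the growth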
  • Patent number: 10298715
    Abstract: The present invention relates to a distributed processing system having a master node and a plurality of worker nodes. Each worker node has an assigned identifier. A worker node caches in its own memory first output data, which is the result of the execution of a first task, and copies said first output data to another worker node. The master node selects, on the basis of the identifier information of the first worker node, a worker node to which to assign a second task, wherein the first output data is used as input data.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: May 21, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Kazuhide Aikoh, Masafumi Kinoshita, Go Kojima
  • Patent number: 10275284
    Abstract: Methods determine a capacity-forecast model based on historical capacity metric data and historical business metric data. The capacity-forecast model may be used to estimate capacity requirements with respect to changes in demand for the data center customer's application program. The capacity-forecast model provides an analytical “what-if” approach to reallocating data center resources in order to satisfy projected business level expectations of a data center customer and to calculate estimated capacities for different business scenarios.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: April 30, 2019
    Assignee: VMware, Inc.
    Inventors: Arnak Poghosyan, Ashot Nshan Harutyunyan, Naira Movses Grigoryan, Khachatur Nazaryan, Ruzan Hovhannisyan
  • Patent number: 10261879
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
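    A toy Python model of the instruction pair described in patent 10261879 above (and in the several related patents later in this list): a "begin" operation checkpoints architectural state and starts tracking memory accesses, and a "test" operation updates a flag according to whether it executes inside the transactional region. The behaviour resembles an XBEGIN/XTEST-style pair, though the abstract does not name any specific ISA; the flag convention and class layout are purely illustrative.

      class ToyCore:
          def __init__(self):
              self.regs = {"r1": 0, "zf": 0}       # architectural state, including a flag
              self.checkpoint = None
              self.tracked_accesses = []

          def begin_transaction(self):
              """First instruction: checkpoint state and start tracking memory accesses."""
              self.checkpoint = dict(self.regs)
              self.tracked_accesses.clear()

          def test_transaction_status(self):
              """Second instruction: update a flag to reflect transactional execution."""
              in_txn = self.checkpoint is not None
              self.regs["zf"] = 0 if in_txn else 1   # flag convention assumed for illustration
              return in_txn

          def commit(self):
              self.checkpoint = None

      if __name__ == "__main__":
          core = ToyCore()
          print(core.test_transaction_status())   # False: outside any transactional region
          core.begin_transaction()
          print(core.test_transaction_status())   # True: inside the region, flag updated
          core.commit()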
  • Patent number: 10261722
    Abstract: In one general embodiment, a computer-implemented method includes receiving at a first system a request for data, searching one or more local buffers within the first system for the requested data, determining whether the requested data is located within an additional buffer of an additional system in communication with the first system, in response to determining that the one or more local buffers within the first system do not contain the requested data, receiving the requested data by the first system from the additional buffer of the additional system, in response to determining that the requested data is located within the additional buffer of the additional system, and retrieving the requested data from a data disk within the first system, in response to determining that the requested data is not located within the additional buffer of the additional system.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: April 16, 2019
    Assignee: International Business Machines Corporation
    Inventors: Neal E. Bohling, Roity Prieto Perez
  • Patent number: 10248524
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
  • Patent number: 10248464
    Abstract: A plurality of processing entities of a processor complex is maintained, wherein each processing entity has a local cache and the processor complex has a shared cache and a shared memory. One of the plurality of processing entities is allocated for execution of a critical task. In response to the allocating of one of the plurality of processing entities for the execution of the critical task, other processing entities of the plurality of processing entities are folded. The critical task utilizes the local cache of the other processing entities that are folded, the shared memory, and the shared cache, in addition to the local cache of the processing entity allocated for the execution of the critical task.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: April 2, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
  • Patent number: 10244068
    Abstract: In accordance with an embodiment, described herein is a system and method for providing distributed caching in a transactional processing environment. The caching system can include a plurality of layers that provide a caching feature for a plurality of data types, and can be configured for use with a plurality of caching providers. A common data structure can be provided to store serialized bytes of each data type, and architecture information of a source platform executing a cache-setting application, so that a cache-getting application can use the information to convert the serialized bytes to a local format. A proxy server can be provided to act as a client to a distributed in-memory grid, and advertise services to a caching client, where each advertised service can match a cache in the distributed in-memory data grid, such as Coherence. The caching system can be used to cache results from a service.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: March 26, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Todd Little, Xugang Shen, Jim Yongshun Jin, Jesse Hou
  • Patent number: 10223227
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: March 5, 2019
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
  • Patent number: 10210065
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
  • Patent number: 10210066
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
  • Patent number: 10204127
    Abstract: A method and apparatus for performing storage and retrieval in an information storage system cache is disclosed that uses the hashing technique with the open-addressing method for collision resolution. Insertion, retrieval, and deletion operations are limited to a predetermined number of probes, after which it may be assumed that the table does not contain the desired data. Moreover, when using linear probing, the technique facilitates maximum concurrent, multi-thread access to the table, thereby improving system throughput, since only a relatively small section is locked and made unavailable while a thread modifies that section, allowing other threads complete access to the remainder of the table.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 12, 2019
    Inventors: Richard Michael Nemes, Mikhail Lotvin, David Garrod
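    A compact Python sketch of the scheme in patent 10204127 above: linear probing limited to a fixed number of probes, with the table split into sections that are locked independently so concurrent threads only contend when they touch the same section. The section size, probe limit, and hash function are illustrative choices rather than the patented parameters.

      import threading

      class BoundedProbeCache:
          SECTION = 8          # slots covered by one lock (illustrative)
          MAX_PROBES = 4       # give up after this many probes

          def __init__(self, slots=64):
              self.keys = [None] * slots
              self.values = [None] * slots
              self.locks = [threading.Lock() for _ in range(slots // self.SECTION)]

          def _lock_for(self, index):
              return self.locks[index // self.SECTION]

          def get(self, key):
              start = hash(key) % len(self.keys)
              for probe in range(self.MAX_PROBES):            # bounded linear probing
                  i = (start + probe) % len(self.keys)
                  with self._lock_for(i):                     # only this section is locked
                      if self.keys[i] == key:
                          return self.values[i]
                      if self.keys[i] is None:
                          return None
              return None                                     # assume the key is absent

          def put(self, key, value):
              start = hash(key) % len(self.keys)
              for probe in range(self.MAX_PROBES):
                  i = (start + probe) % len(self.keys)
                  with self._lock_for(i):
                      if self.keys[i] in (None, key):
                          self.keys[i], self.values[i] = key, value
                          return True
              return False                                    # this run of slots is full

      if __name__ == "__main__":
          c = BoundedProbeCache()
          c.put("block:42", b"payload")
          print(c.get("block:42"))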
  • Patent number: 10204050
    Abstract: Methods and systems for memory-side shared caching include determining whether a requested memory access is directed to a shared portion of memory by referencing a lock address list in a memory controller. If the requested memory access is for the shared portion of memory, it is determined whether an associated data object is present in a memory-side cache. If the associated data object is present in the memory-side cache, the memory-side cache is accessed. If the associated data object is not present in the memory-side cache, an external memory is accessed.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: February 12, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Yasunao Katayama
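    An illustrative Python sketch of the decision path in patent 10204050 above: the controller consults a lock address list to decide whether an access targets shared memory, then either serves it from a memory-side cache or falls through to external memory. All names and the fill-on-miss policy are assumptions.

      class MemorySideController:
          def __init__(self, lock_address_list):
              self.lock_address_list = set(lock_address_list)   # shared regions (by address)
              self.memory_side_cache = {}                       # address -> data object
              self.external_memory = {}                         # backing store

          def access(self, address):
              if address not in self.lock_address_list:
                  return self.external_memory.get(address)      # not shared: bypass the cache
              if address in self.memory_side_cache:             # shared and already cached
                  return self.memory_side_cache[address]
              data = self.external_memory.get(address)          # shared but not cached yet
              self.memory_side_cache[address] = data
              return data

      if __name__ == "__main__":
          ctrl = MemorySideController(lock_address_list=[0x1000])
          ctrl.external_memory.update({0x1000: "shared-object", 0x2000: "private-object"})
          print(ctrl.access(0x2000))   # served straight from external memory
          print(ctrl.access(0x1000))   # miss: filled into the memory-side cache
          print(ctrl.access(0x1000))   # hit in the memory-side cache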
  • Patent number: 10180904
    Abstract: Provided is a cache memory. The cache memory includes first to Nth level-1 caches configured to correspond to first to Nth cores, respectively, a level-2 sharing cache configured to be shared by the first to Nth level-1 caches, and a coherence controller configured to receive an address from each of the first to Nth cores and allocate at least a partial area in an area of the level-2 sharing cache to one of the first to Nth level-1 caches based on the received address.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: January 15, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Ho Han, Young-Su Kwon, Kyung Jin Byun, Nak Woong Eum
  • Patent number: 10152401
    Abstract: Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: December 11, 2018
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Bret L. Toll, Konrad K. Lai, Matthew C. Merten, Martin G. Dixon
  • Patent number: 10152243
    Abstract: Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing data flow management on a computing device. Embodiment methods may include initializing a buffer partition of a first memory of a first heterogeneous processing device for an output of execution of a first iteration of a first operation by the first heterogeneous processing device on which a first iteration of a second operation assigned for execution by a second heterogeneous processing device depends. Embodiment methods may include identifying a memory management operation for transmitting the output by the first heterogeneous processing device from the buffer partition as an input to the second heterogeneous processing device. Embodiment methods may include allocating a second memory for storing data for an iteration executed by a third heterogeneous processing device to minimize a number of memory management operations for the second allocated memory.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: December 11, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Han Zhao, Arun Raman, Aravind Natarajan
  • Patent number: 10126971
    Abstract: A computer-implemented method, according to one embodiment, includes: maintaining a heat map monitoring table on a per volume basis for a plurality of volumes in a multi-tier data storage architecture, where the heat map monitoring table includes a heat count for each data block in the respective volume. The computer-implemented method further includes: receiving a request to delete a first volume of the plurality of volumes, identifying which data blocks in the first volume are depended on by one or more other volumes of the plurality of volumes, copying the identified data blocks and the corresponding heat counts to the respective one or more other volumes, and sending a list which includes the identified data blocks and the corresponding heat counts to a controller. Other systems, methods, and computer program products are described in additional embodiments.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: November 13, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Abhishek Jain, Kushal S. Patel, Sarvesh S. Patel, Subhojit Roy
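    A brief Python sketch of the deletion path in patent 10126971 above: before a volume is removed, the blocks that other volumes depend on are copied to those volumes together with their heat counts, and a list of the moved blocks and counts is handed to the controller. The data structures and names are assumptions made for illustration.

      def delete_volume(volumes, heat_maps, dependencies, victim):
          """Delete `victim`, preserving blocks (and heat counts) other volumes depend on.

          volumes:      {volume: {block_id: data}}
          heat_maps:    {volume: {block_id: heat_count}}  - per-volume heat map table
          dependencies: {volume: {block_id: [dependent volumes]}}
          Returns the list sent to the controller: (block_id, heat_count, moved_to).
          """
          moved = []
          for block_id, dependents in dependencies.get(victim, {}).items():
              for dep in dependents:
                  volumes[dep][block_id] = volumes[victim][block_id]        # copy the data
                  heat_maps[dep][block_id] = heat_maps[victim][block_id]    # carry the heat count
                  moved.append((block_id, heat_maps[victim][block_id], dep))
          del volumes[victim], heat_maps[victim]
          return moved                                                      # -> controller

      if __name__ == "__main__":
          vols = {"v1": {"b1": b"x", "b2": b"y"}, "v2": {}}
          heat = {"v1": {"b1": 7, "b2": 1}, "v2": {}}
          deps = {"v1": {"b1": ["v2"]}}                  # v2 depends on v1's block b1
          print(delete_volume(vols, heat, deps, "v1"))   # [('b1', 7, 'v2')]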
  • Patent number: 10102126
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: October 16, 2018
    Assignee: Intel Corporation
    Inventors: Raj K. Ramanujan, Rajat Agarwal, Glenn J. Hinton
  • Patent number: 10102123
    Abstract: Embodiments of the invention relate to a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: October 16, 2018
    Assignee: Intel Corporation
    Inventors: Sanjeev Kumar, Christopher J. Hughes, Partha Kundu, Anthony Nguyen
  • Patent number: 10095739
    Abstract: Systems and methods of the present disclosure provide for caching, by a device intermediary to a client and a database, a result of a structured query language (SQL) query request. In some embodiments, the device intermediary to a plurality of clients and a database receives a SQL response from the database to a first SQL query request of a client of the plurality of clients. The device may maintain a cache of SQL responses from the database. The device may identify that the first SQL query request matches a rule of a policy for caching SQL responses from the database. The policy may include a cache action to take when the rule is matched. The device may perform, responsive to the policy, on the SQL response the cache action identified by the policy.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: October 9, 2018
    Assignee: Citrix Systems, Inc.
    Inventors: Shaleen Sharma, Sudish Sah, Rajesh Joshi
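    A minimal Python sketch of the intermediary behaviour described in patent 10095739 above: incoming SQL query requests are matched against policy rules, and when a rule matches, the associated cache action (cache the response, or bypass caching) is taken. The rule format and action names are illustrative; this is not the assignee's implementation.

      import re

      class SqlCachingProxy:
          def __init__(self, policy, database):
              self.policy = policy            # list of (regex rule, cache action)
              self.database = database        # callable: sql -> response
              self.cache = {}                 # sql -> cached response

          def handle(self, sql):
              for rule, action in self.policy:
                  if re.search(rule, sql, re.IGNORECASE):
                      if action == "cache" and sql in self.cache:
                          return self.cache[sql], "hit"
                      response = self.database(sql)
                      if action == "cache":
                          self.cache[sql] = response
                      return response, "miss-cached" if action == "cache" else "bypass"
              return self.database(sql), "no-rule"

      if __name__ == "__main__":
          policy = [(r"^SELECT .* FROM catalog", "cache"), (r"^SELECT .* FROM orders", "nocache")]
          proxy = SqlCachingProxy(policy, database=lambda sql: f"rows for: {sql}")
          print(proxy.handle("SELECT name FROM catalog"))   # miss-cached
          print(proxy.handle("SELECT name FROM catalog"))   # hit
          print(proxy.handle("SELECT id FROM orders"))      # bypass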
  • Patent number: 10089229
    Abstract: Systems and methods for cache allocation with code and data prioritization. An example system may comprise: a cache; a processing core, operatively coupled to the cache; and a cache control logic, responsive to receiving a cache fill request comprising an identifier of a request type and an identifier of a class of service, to identify a subset of the cache corresponding to a capacity bit mask associated with the request type and the class of service.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: October 2, 2018
    Assignee: Intel Corporation
    Inventors: Andrew J. Herdrich, Edwin Verplanke, Ravishankar Iyer, Christopher C. Gianos, Jeffrey D. Chamberlain, Ronak Singhal, Julius Mandelblat, Bret L. Toll
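    A small Python sketch of the fill-path decision in patent 10089229 above: the cache control logic looks up a capacity bit mask keyed by (request type, class of service) and restricts the fill to the ways enabled by that mask. An 8-way cache and these particular masks are assumptions; the mechanism resembles code and data prioritization in cache allocation schemes, but the specifics here are illustrative only.

      # Capacity bit masks keyed by (request type, class of service); values are illustrative.
      CAPACITY_MASKS = {
          ("code", 0): 0b11000000,   # CLOS 0 code fills limited to ways 6-7
          ("data", 0): 0b00111111,   # CLOS 0 data fills limited to ways 0-5
          ("code", 1): 0b11110000,
          ("data", 1): 0b00001111,
      }

      def ways_for_fill(request_type, clos, num_ways=8):
          """Return the cache ways a fill of this request type and class may occupy."""
          mask = CAPACITY_MASKS.get((request_type, clos), (1 << num_ways) - 1)
          return [w for w in range(num_ways) if mask & (1 << w)]

      if __name__ == "__main__":
          print(ways_for_fill("code", 0))   # [6, 7]
          print(ways_for_fill("data", 0))   # [0, 1, 2, 3, 4, 5]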
  • Patent number: 10083066
    Abstract: A computer implemented method and system for data processing. An example method includes setting at least one SMT preliminary value for at least one operating node; monitoring performance metrics for the at least one operating node set to the at least one SMT preliminary value; and determining a SMT revised value based on performance metrics. An example system includes a memory; a processor communicatively coupled to the memory; and a feature selection module communicatively coupled to the memory and processor. The feature selection module performs a method that includes setting, using a setting device, at least one SMT preliminary value for at least one operating node; monitoring, using a monitoring device, performance metrics for the at least one operating node set to the at least one SMT preliminary value; and determining, using a determining device, a SMT revised value based on performance metrics.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: September 25, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guan Cheng Chen, Qi Guo, Jian Li, Xin Li, Yan Li
  • Patent number: 10073629
    Abstract: Examples of techniques for memory transaction prioritization for a memory are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method may include: allocating, by a memory controller, a reserved portion of the memory controller to process prioritized transactions; receiving, by the memory controller, a request transaction from a processor to the memory, wherein the request transaction comprises a priority; determining, by the memory controller, whether the priority of the request transaction is above a priority threshold; and responsive to determining that the priority of the request transaction is above the priority threshold, executing the request using the reserved portion of the memory controller.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: September 11, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Irving G. Baysah, Prasanna Jayaraman
  • Patent number: 10061529
    Abstract: A method and structure for dynamic memory re-allocation for an application runtime environment (ARE) includes receiving, through an interface of an application runtime environment (ARE), a first set of internal operational metrics of the ARE executing at a current setting S1 on a processor of a computer. A first performance P1 of the ARE is determined at the current setting S1 using the received first set of internal operation metrics. The current setting S1 of the ARE is varied to a new setting S2. A second set of internal operational metrics of the ARE executing at the new setting S2 is received through the interface of the ARE. A second performance P2 of the ARE is determined at the new setting S2, using the received second set of internal operation metrics. A memory allocation for the ARE is re-allocated, based on the determined performances P1 and P2.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: August 28, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Norman Bobroff, Liana Liyow Fong, Peter Hans Westerink
  • Patent number: 10061589
    Abstract: Systems, methods, and apparatuses for data speculation execution (DSX) are described. In some embodiments, a hardware apparatus for DSX comprises decoder hardware to decode a class of instructions to support data speculative execution (DSX) including an instruction to begin a DSX, end a DSX, and speculative instructions to execute during a DSX, and execution hardware to speculatively execute decoded instructions that support DSX including the speculative instructions and update speculative instruction tracking hardware.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: August 28, 2018
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Christopher J. Hughes, Robert Valentine, Milind B. Girkar
  • Patent number: 10056156
    Abstract: An information processing apparatus includes an arithmetic processing apparatus, a main memory and an auxiliary memory configured to store a program for diagnosing the main memory and diagnosing an apparatus accessed by the arithmetic processing apparatus. The arithmetic processing apparatus executes the program stored in the auxiliary memory to determine whether the program can be executed on the main memory. The arithmetic processing apparatus executes the program on the main memory when the arithmetic processing apparatus determines that the program can be executed on the main memory and executes the program on the auxiliary memory when the arithmetic processing apparatus determines that the program cannot be executed on the main memory.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: August 21, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Masato Fukumori, Koji Fujita
  • Patent number: 10042761
    Abstract: Facilitating processing in a computing environment. A request to access a cache of the computing environment is obtained from a transaction executing on a processor of the computing environment. Based on obtaining the request, a determination is made as to whether a tracking set to be used to track cache accesses is to be updated. The tracking set includes a read set to track read accesses of at least a selected portion of the cache and a write set to track write accesses of at least the selected portion of the cache. The tracking set is assigned to the transaction, and another transaction to access the cache has another tracking set assigned thereto. The tracking set assigned to the transaction is updated based on the determining indicating the tracking set is to be updated.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: August 7, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10042765
    Abstract: Facilitating processing in a computing environment. A request to access a cache of the computing environment is obtained from a transaction executing on a processor of the computing environment. Based on obtaining the request, a determination is made as to whether a tracking set to be used to track cache accesses is to be updated. The tracking set includes a read set to track read accesses of at least a selected portion of the cache and a write set to track write accesses of at least the selected portion of the cache. The tracking set is assigned to the transaction, and another transaction to access the cache has another tracking set assigned thereto. The tracking set assigned to the transaction is updated based on the determining indicating the tracking set is to be updated.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: August 7, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10008287
    Abstract: Apparatuses and methods for an interface chip are described. An example apparatus includes a first chip. The first chip includes, on a single semiconductor substrate, first terminals, circuit groups, and terminal groups corresponding to the circuit groups, each of the circuit groups including circuit blocks. A control circuit in the first chip selects one of the circuit groups and electrically couples the first terminals to the circuit blocks of the selected circuit group. Second terminals are included in each of the terminal groups. A number of all of the second terminals in each of the terminal groups is smaller than a number of all of the circuit blocks in the corresponding circuit group. The first chip further includes, for example, a remapping circuit.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: June 26, 2018
    Assignee: Micron Technology, Inc.
    Inventors: Chiaki Dono, Taihei Shido, Yuki Ebihara
  • Patent number: 10003603
    Abstract: A processor is coupled to a hierarchical memory structure which includes a plurality of levels of cache memories that hierarchically cache data that is read by the processor from a main memory. The processor is integrated within a computer terminal. The processor performs operations that include generating a hierarchical cache latency signature vector by repeating for each of a plurality of buffer sizes, the following: 1) allocating in the main memory a buffer having the buffer size; 2) measuring elapsed time for the processor to read data from buffer addresses that include upper and lower boundaries of the buffer; and 3) storing the elapsed time and the buffer size as an associated set in the hierarchical cache latency signature vector. The operations further include communicating through a network interface circuit a computer identification message containing computer terminal identification information generated based on the hierarchical cache latency signature vector.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: June 19, 2018
    Assignee: CA, Inc.
    Inventors: Himanshu Ashiya, Atmaram Shetye
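    A hedged Python approximation of the fingerprinting idea in patent 10003603 above: allocate buffers of increasing size, time reads that sweep each buffer (touching its lower and upper boundaries), and record (buffer size, elapsed time) pairs as a signature vector; steps in the timings roughly reveal where buffers stop fitting in a given cache level. Python-level timing is far noisier than the processor-level measurement the patent describes, so this is only a sketch.

      import time

      def latency_signature(buffer_sizes_kb, sweeps=50):
          """Return [(size_kb, elapsed_seconds)] for strided reads over each buffer."""
          signature = []
          for size_kb in buffer_sizes_kb:
              buf = bytearray(size_kb * 1024)           # allocate a buffer of this size
              stride = 64                               # roughly one cache line
              start = time.perf_counter()
              sink = 0
              for _ in range(sweeps):
                  for i in range(0, len(buf), stride):  # touches lower and upper boundaries
                      sink += buf[i]
              elapsed = time.perf_counter() - start
              signature.append((size_kb, elapsed))      # store as an associated set
          return signature

      if __name__ == "__main__":
          for size_kb, secs in latency_signature([16, 64, 256, 1024, 4096]):
              print(f"{size_kb:5d} KB  {secs:.4f} s")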
  • Patent number: 9990290
    Abstract: Embodiments relate to cache coherency verification using ordered lists. An aspect includes maintaining a plurality of ordered lists, each ordered list corresponding to a respective thread that is executed by a processor, wherein each ordered list comprises a plurality of atoms, each atom corresponding to a respective operation performed in a cache by the respective thread that corresponds to the ordered list in which the atom is located, wherein the plurality of atoms in an ordered list are ordered based on program order. Another aspect includes determining a state of an atom in an ordered list of the plurality of ordered lists. Another aspect includes comparing the state of the atom in an ordered list to a state of an operation corresponding to the atom in the cache. Yet another aspect includes, based on the comparing, determining that there is a coherency violation in the cache.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: June 5, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dean G. Bair, Jonathan T. Hsieh, Matthew G. Pardini, Eugene S. Rotter
  • Patent number: 9965309
    Abstract: Technologies for virtual machine placement within a data center are described herein. An example method may include determining a shared threat potential for a virtual machine based, at least in part, on a degree of co-location the virtual machine has with a current virtual machine operating on a physical machine, determining a workload threat potential for the virtual machine based, at least in part, on a level of advantage associated with placing the virtual machine on the physical machine, determining a threat potential for the virtual machine based, at least in part, on a combination of the shared threat potential and the workload threat potential, and placing the virtual machine on the physical machine based on the threat potential.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: May 8, 2018
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventors: Kevin Fine, Ezekiel Kruglick
  • Patent number: 9959423
    Abstract: A multi-tenant hosting system receives business data and tenant-identifying data from a tenant. The data from multiple different tenants is stored on a single database, but the data corresponding to each tenant is partitioned by marking the data with a partition identifier within the database. Therefore, the hosting system only allows individual tenants to have access to their own data.
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: May 1, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vijeta Johri, Amar Nalla, Madan G. Natu
  • Patent number: 9933942
    Abstract: Embodiments include methods for operating a first storage system having a first number of data storage drives for enabling access to a first set of removable media. Aspects include providing a second storage system having a number K of data storage drives for enabling access to a second set of removable media and providing a set of parameters describing operational characteristics of the second storage system. Aspects also include determining an analytical model using the set of parameters, the analytical model describing the variation of average waiting time as a function of system load over a predefined range covering multiple system load regime domains and determining values of the set of parameters using the analytical model and data of the second storage system. Aspects further include using the analytical model and the values of the set of parameters for reconfiguring the first storage system.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: April 3, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ilias Iliadis, Yusik Kim, Slavisa Sarafijanovic, Vinodh Venkatesan
  • Patent number: 9910769
    Abstract: Embodiments relate to accessing data in a memory. A method for accessing data in a memory coupled to a processor is provided. The method receives a memory reference instruction for accessing data of a first size at an address in the memory. The method determines an alignment size of the address in the memory. The method accesses the data of the first size in one or more groups of data by accessing each group of data blocks concurrently. The groups of data have sizes that are multiples of the alignment size.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: March 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Timothy J. Slegel
  • Patent number: 9910788
    Abstract: A processor device includes a cache and a memory storing a set of counters. Each counter of the set is associated with a corresponding block of a plurality of blocks of the cache. The processor device further includes a cache access monitor to, for each time quantum for a series of one or more time quanta, increment counter values of the set of counters based on accesses to the corresponding blocks of the cache. The processor device further includes a transfer engine to, after completion of each time quantum, transfer the counter values of the set of counters for the time quantum to a corresponding location in a system memory.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: March 6, 2018
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Philip J. Rogers, Benjamin T. Sander, Anthony Asaro
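    An illustrative Python model of the monitoring loop in patent 9910788 above: a counter per cache block is incremented on each access during a time quantum, and at the end of the quantum the counter values are transferred to a location in system memory and reset. The block count, quantum handling, and names are assumptions.

      class CacheAccessMonitor:
          def __init__(self, num_blocks):
              self.counters = [0] * num_blocks        # one counter per cache block
              self.system_memory_log = []             # destination for each quantum's counts

          def record_access(self, block_index):
              self.counters[block_index] += 1

          def end_of_quantum(self):
              """Transfer this quantum's counter values to system memory and reset them."""
              self.system_memory_log.append(list(self.counters))
              self.counters = [0] * len(self.counters)

      if __name__ == "__main__":
          mon = CacheAccessMonitor(num_blocks=4)
          for block in [0, 0, 2, 3, 0]:
              mon.record_access(block)
          mon.end_of_quantum()
          mon.record_access(1)
          mon.end_of_quantum()
          print(mon.system_memory_log)   # [[3, 0, 1, 1], [0, 1, 0, 0]]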
  • Patent number: 9904618
    Abstract: Embodiments relate to accessing data in a memory. A method for accessing data in a memory coupled to a processor is provided. The method receives a memory reference instruction for accessing data of a first size at an address in the memory. The method determines an alignment size of the address in the memory. The method accesses the data of the first size in one or more groups of data by accessing each group of data blocks concurrently. The groups of data have sizes that are multiples of the alignment size.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: February 27, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Timothy J. Slegel
  • Patent number: 9881030
    Abstract: A system for a distributed archive and data restoration which achieves both high-speed processing and security is provided. A random number is generated by a seed random number generator, and inputted to a key random number generator as a seed, and each data fragment is obtained by dividing a source data file to be archived, and is redundantly stored on (n−k+1) storage mediums identified as destination storage mediums among n storage mediums on the basis of the random number generated by the key random number generator each time, where n is an integer no less than 2 and k is an integer no more than the value of n.
    Type: Grant
    Filed: November 7, 2011
    Date of Patent: January 30, 2018
    Assignees: Digital Media Research Institute, Inc., GLOBIT Co., Ltd.
    Inventor: Yoshihiro Shin
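    A short Python sketch of the placement rule in patent 9881030 above: the source file is split into fragments, and each fragment is written to (n − k + 1) of the n storage mediums chosen by a key random generator seeded from a seed random generator, so that losing any k − 1 mediums still leaves at least one copy of every fragment. The fragment size and selection method are illustrative assumptions.

      import random

      def archive(data, n, k, seed):
          """Split `data` into fragments and place each on n-k+1 of n mediums.

          A seed RNG produces the seed for a key RNG that picks destinations, loosely
          following the two-stage generator in the abstract.
          """
          seed_rng = random.Random(seed)
          key_rng = random.Random(seed_rng.random())
          mediums = [dict() for _ in range(n)]                 # fragment_id -> bytes
          fragment_size = 4
          fragments = [data[i:i + fragment_size] for i in range(0, len(data), fragment_size)]
          for frag_id, frag in enumerate(fragments):
              for m in key_rng.sample(range(n), n - k + 1):    # chosen destination mediums
                  mediums[m][frag_id] = frag
          return mediums

      if __name__ == "__main__":
          stores = archive(b"confidential archive payload", n=5, k=3, seed=42)
          for i, store in enumerate(stores):
              print(f"medium {i}: fragments {sorted(store)}")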
  • Patent number: 9864863
    Abstract: In a compression processing storage system, using a pool of encryption processing cores, the encryption processing cores are assigned to process either encryption operations, decryption operations, and decryption and encryption operations, that are scheduled for processing. A maximum number of the encryption processing cores are set for processing only the decryption operations, thereby lowering a decryption latency. A minimal number of the encryption processing cores are allocated for processing the encryption operations, thereby increasing encryption latency. Upon reaching a throughput limit for the encryption operations that causes the minimal number of the plurality of encryption processing cores to reach a busy status, the minimal number of the plurality of encryption processing cores for processing the encryption operations is increased.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: January 9, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan Amit, Amir Lidor, Sergey Marenkov, Rostislav Raikhman
  • Patent number: 9857864
    Abstract: According to one or more embodiments of the disclosure, systems and methods for reducing power consumption in a memory architecture are provided. In one embodiment, a method may include determining a transition from a first power state to a second power state. The method may also include determining, using a page location identifier to access a page location table, a first dirty memory page indication. Furthermore, the method may include copying data stored in a first memory location in a volatile memory corresponding to the page location identifier to a second memory location in a non-volatile memory corresponding to the page location identifier. The method may also include deactivating the volatile memory.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: January 2, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Sathish Thoppay Egambaram, Robert Nasry Hasbun
  • Patent number: 9830275
    Abstract: Embodiments disclosed pertain to apparatuses, systems, and methods for Translation Lookaside Buffers (TLBs) that support virtualization and multi-threading. Disclosed embodiments pertain to a TLB that includes a content addressable memory (CAM) with variable page size entries and a set associative memory with fixed page size entries. The CAM may include: a first set of logically contiguous entry locations, wherein the first set comprises a plurality of subsets, and each subset comprises logically contiguous entry locations for exclusive use of a corresponding virtual processing element (VPE); and a second set of logically contiguous entry locations, distinct from the first set, where the entry locations in the second set may be shared among available VPEs. The set associative memory may comprise a third set of logically contiguous entry locations shared among the available VPEs distinct from the first and second set of entry locations.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: November 28, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Ranjit J. Rozario, Sanjay Patel
  • Patent number: 9812221
    Abstract: A system and method for verifying cache coherency in a safety-critical avionics processing environment includes a multi-core processor (MCP) having multiple cores, each core having at least an L1 data cache. The MCP may include a shared L2 cache. The MCP may designate one core as primary and the remainder as secondary. The primary core and secondary cores create valid TLB mappings to a data page in system memory and lock L1 cache lines in their data caches. The primary core locks an L2 cache line in the shared cache and updates its locked L1 cache line. When notified of the update, the secondary cores check the test pattern received from the primary core with the updated test pattern in their own L1 cache lines. If the patterns match, the test passes; the MCP may continue the testing process by updating the primary and secondary statuses of each core.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: November 7, 2017
    Assignee: Rockwell Collins, Inc.
    Inventors: John L. Hagen, David J. Radack, Lloyd F. Aquino, Todd E. Miller