Associative Patents (Class 711/128)
  • Patent number: 10776022
    Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: September 15, 2020
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
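    A minimal C sketch of the masking step this abstract describes, assuming the transparent portion occupies the low-numbered lines and its size is a programmable power of two; the function and parameter names are hypothetical.

      #include <stdint.h>

      /* Hypothetical decoder step: when a request's non-transparent
       * attribute says "transparent", mask the index bits so only a
       * location inside the 2^transparent_lines_log2-line transparent
       * portion can be selected; non-transparent requests pass through. */
      static uint32_t decode_index(uint32_t addr_index, int is_transparent,
                                   unsigned transparent_lines_log2)
      {
          if (is_transparent)
              return addr_index & ((1u << transparent_lines_log2) - 1u);
          return addr_index;
      }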
  • Patent number: 10733100
    Abstract: Embodiments of the present disclosure generally relate to a target device handling overlap write commands. In one embodiment, a target device includes a non-volatile memory and a controller coupled to the non-volatile memory. The controller includes a random accumulated buffer, a sequential accumulated buffer, and an overlap accumulated buffer. The controller is configured to receive a new write command, classify the new write command, and write data associated with the new write command to one of the random accumulated buffer, the sequential accumulated buffer, or the overlap accumulated buffer. Once the overlap accumulated buffer becomes available, the controller first flushes to the non-volatile memory the data in the random accumulated buffer and the sequential accumulated buffer that was received prior in sequence to the data in the overlap accumulated buffer. The controller then flushes the available overlap accumulated buffer, ensuring that new write commands override prior write commands.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 4, 2020
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Shay Benisty
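    A C sketch of the command classification the abstract describes, under the simplifying assumption that only one pending buffered range and one expected sequential LBA are checked; the names and the single-range check are hypothetical.

      #include <stdint.h>

      enum accum_buf { RANDOM_BUF, SEQ_BUF, OVERLAP_BUF };

      /* A write that intersects an already-buffered LBA range goes to the
       * overlap buffer; one continuing the previous write is sequential;
       * anything else is random. On flush, random and sequential data
       * received earlier is written to the non-volatile memory before the
       * overlap buffer, so the newer overlapping writes land last and
       * override the prior ones. */
      static enum accum_buf classify(uint64_t lba, uint32_t len,
                                     uint64_t pend_lba, uint32_t pend_len,
                                     uint64_t next_seq_lba)
      {
          if (lba < pend_lba + pend_len && pend_lba < lba + len)
              return OVERLAP_BUF;          /* ranges intersect */
          if (lba == next_seq_lba)
              return SEQ_BUF;              /* continues prior write */
          return RANDOM_BUF;
      }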
  • Patent number: 10725527
    Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered, while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 28, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Oluleye Olorode, Mehrdad Nourani
  • Patent number: 10725912
    Abstract: Aspects of the present disclosure provide systems and methods for improved power loss protection in a memory sub-system of a device. In particular, a power loss protection component allocates a portion of the memory sub-system to non-volatile memory. Responsive to detecting a trigger event at the device, wherein the trigger event may include asynchronous power loss of the device, the power loss protection component detects data written to a volatile cache of the memory sub-system, retrieves the data from the volatile cache, and writes the data to the portion of the memory sub-system allocated to the non-volatile memory.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: July 28, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Andrew M. Kowles
  • Patent number: 10719434
    Abstract: A cache storing 2^J-byte cache lines has an array of 2^N sets, each holding 2^W ways of X-bit tags. An input receives a Q-bit address, MA[(Q-1):0], having a tag MA[(Q-1):(Q-X)] and an index MA[(Q-X-1):J]; Q is at least (N+J+X-1). Set selection logic selects one set using the index and the tag LSB; comparison logic compares all but the LSB of the tag with all but the LSB of each tag in the selected set and indicates a hit if there is a match; allocation logic, when the comparison logic indicates no match, allocates into any of the 2^W ways of the selected set when operating in a first mode, and into a subset of the 2^W ways of the selected set when operating in a second mode. The subset is limited based on bits of the tag portion.
    Type: Grant
    Filed: December 14, 2014
    Date of Patent: July 21, 2020
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventor: Douglas R. Reed
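    A C sketch of the set selection just described; whether the tag LSB lands in the low or high bit of the set number is an assumption, as are the parameter names.

      #include <stdint.h>

      /* Form the N-bit set number from the (N-1)-bit index field plus the
       * tag's least significant bit; the comparators then match only the
       * remaining X-1 tag bits. */
      static uint32_t select_set(uint64_t ma, unsigned J, unsigned N,
                                 unsigned X, unsigned Q)
      {
          uint64_t index   = (ma >> J) & ((1ull << (N - 1)) - 1); /* MA[Q-X-1:J] */
          uint64_t tag_lsb = (ma >> (Q - X)) & 1;                 /* LSB of tag  */
          return (uint32_t)((index << 1) | tag_lsb);
      }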
  • Patent number: 10698827
    Abstract: A cache memory comprising: a mode input that indicates in which of a plurality of allocation modes the cache memory is to operate; a set-associative array of entries having a plurality of sets by W ways; an input that receives a memory address comprising an index used to select a set from the plurality of sets and a tag used to compare with tags stored in the entries of the W ways of the selected set to determine whether the memory address hits or misses; and allocation logic that, when the memory address misses in the array: selects one or more bits of the tag based on the allocation mode; performs a function, based on the allocation mode, on the selected bits of the tag to generate a subset of the W ways of the array; and allocates into one way of the subset of the ways of the selected set.
    Type: Grant
    Filed: December 14, 2014
    Date of Patent: June 30, 2020
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventor: Douglas R. Reed
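    A C sketch of one plausible mode-dependent way-subset function; the XOR hash and the half-split are assumptions, since the abstract only says that a function of selected tag bits generates the subset.

      #include <stdint.h>

      /* In the restricted mode, XOR two tag bits down to one selector bit
       * and allow allocation only into the lower or upper half of the W
       * ways; in the normal mode every way is eligible. */
      static uint32_t eligible_ways(uint32_t tag, unsigned W, int restricted)
      {
          uint32_t all = (W >= 32) ? 0xffffffffu : (1u << W) - 1u;
          if (!restricted)
              return all;
          uint32_t half = (1u << (W / 2)) - 1u;   /* lower W/2 ways */
          return ((tag ^ (tag >> 1)) & 1) ? (all & ~half) : half;
      }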
  • Patent number: 10691606
    Abstract: An apparatus and method are provided for supporting multiple cache features. The apparatus provides cache storage comprising a plurality of cache ways and organised as a plurality of way groups, where each way group comprises multiple cache ways from the plurality of cache ways. First cache feature circuitry is provided to implement a first cache feature that is applied to the way groups, and second cache feature circuitry is provided to implement a second cache feature that is applied to the way groups. Way group control circuitry is then arranged to provide a first mapping defining which cache ways belong to each way group when the first cache feature is applied to the way groups, and a second mapping defining which cache ways belong to each way group when the second cache feature is applied to the way groups.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: June 23, 2020
    Assignee: ARM Limited
    Inventors: Davide Marani, Alex James Waugh
  • Patent number: 10691604
    Abstract: One or more processors perform a cache access to retrieve data, the cache access initiated by a request that includes an address of a first address type. The cache access includes generating, based on historical data related to the address, a prediction for the location of the data in the cache: a set identifier of a predicted cache set. The processors concurrently perform a data access to the cache to retrieve sets in the cache, confirm that the retrieved sets include the predicted cache set, and use the set identifier to select data from the predicted set.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 23, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwifuzi Coe, Christian Jacobi, Markus Kaltenbach, Eyal Naor, Martin Recktenwald
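    A C sketch of set prediction from address history in the spirit of this abstract; the direct-mapped predictor table and its indexing are assumptions.

      #include <stdint.h>

      #define PRED_ENTRIES 1024
      static uint16_t pred_table[PRED_ENTRIES];   /* last-seen set per hash */

      /* Predict the set for an address from its history, so the data read
       * can start while the full lookup confirms the prediction. */
      static unsigned predict_set(uint64_t addr)
      {
          return pred_table[(addr >> 6) % PRED_ENTRIES];
      }

      /* After the access resolves, remember the set that actually hit. */
      static void train_predictor(uint64_t addr, unsigned actual_set)
      {
          pred_table[(addr >> 6) % PRED_ENTRIES] = (uint16_t)actual_set;
      }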
  • Patent number: 10684951
    Abstract: One or more processors perform a cache access to retrieve data, the cache access initiated by a request that includes an address of a first address type. The cache access includes generating, based on historical data related to the address, a prediction for the location of the data in the cache: a set identifier of a predicted cache set. The processors concurrently perform a data access to the cache to retrieve sets in the cache, confirm that the retrieved sets include the predicted cache set, and use the set identifier to select data from the predicted set.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: June 16, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwifuzi Coe, Christian Jacobi, Markus Kaltenbach, Eyal Naor, Martin Recktenwald
  • Patent number: 10650021
    Abstract: A mechanism for managing data operations in an integrated database system is provided. The method includes receiving a request to perform a data operation and retrieving a data set from a primary data source (PDS) in view of the request. The method also includes storing the data set in a temporary data store (TDS). The method further includes performing the data operation on the stored data set in the TDS.
    Type: Grant
    Filed: December 3, 2013
    Date of Patent: May 12, 2020
    Assignee: Red Hat, Inc.
    Inventors: Filip Elias, Filip Nguyen
  • Patent number: 10635593
    Abstract: A cache controller is to allocate memory within set-associative cache that includes a plurality of sets of ways. The cache controller is to request to assign an entry for a system address in the set-associative cache and execute a function to determine a set, from a series of sets within the plurality of sets of ways, to which to allocate the entry in the set-associative cache. The cache controller is further to identify an available number of ways in the set and identify a way that is available in response to execution of a way bias algorithm. The cache controller is also to determine whether the way is among the ways available within the set and select the way for allocation of the entry in response to the way being among the ways available within the set.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: April 28, 2020
    Assignee: Intel Corporation
    Inventors: Daniel Greenspan, Anant V. Nori, Supratik Majumder, Yoav Lossin, Asaf Rubinstein
  • Patent number: 10628052
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory and a controller configured to manage a first cache which stores a part of a logical-to-physical address translation table in the nonvolatile memory. The first cache includes cache lines each including sub-lines. Each of entries of a first cache tag includes bitmap flags corresponding to the sub-lines in the corresponding cache line. Each bitmap flag indicates whether data of the logical-to-physical address translation table is already transferred to a corresponding sub-line. The controller determines a cache line including the smallest number of sub-lines to which data of the logical-to-physical address translation table is already transferred, as a cache line to be replaced.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: April 21, 2020
    Assignee: Toshiba Memory Corporation
    Inventors: Satoshi Kaburaki, Katsuya Ohno, Hiroshi Katougi
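    The victim choice here reduces to a popcount minimum; a C sketch, assuming 8 sub-lines per cache line tracked by one bitmap byte.

      #include <stdint.h>

      #define NUM_LINES 64    /* lines in the first cache (assumed) */

      /* Choose as the replacement victim the cache line whose bitmap has
       * the fewest sub-lines already holding logical-to-physical table
       * data (fewest bits set); it is the cheapest line to throw away. */
      static int pick_victim(const uint8_t bitmap[NUM_LINES])
      {
          int victim = 0, best = 9;               /* > max popcount of 8 */
          for (int i = 0; i < NUM_LINES; i++) {
              int filled = __builtin_popcount(bitmap[i]); /* GCC/Clang builtin */
              if (filled < best) { best = filled; victim = i; }
          }
          return victim;
      }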
  • Patent number: 10606509
    Abstract: A data storage device includes a storage medium; a buffer memory configured to temporarily store data to be inputted to, or outputted from, the storage medium; and a controller configured to control data exchange with the storage medium, allocate a write tag to a write command, and change an attribute of the write tag according to a processing status of the write command.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: March 31, 2020
    Assignee: SK hynix Inc.
    Inventors: Soong Sun Shin, Han Choi, Jin Soo Kim
  • Patent number: 10606600
    Abstract: Techniques are disclosed for receiving an instruction for processing data that includes a plurality of sectors. A method includes decoding the instruction to determine which of the plurality of sectors are needed to process the instruction and fetching at least one of the plurality of sectors from memory. The method includes determining whether each sector that is needed to process the instruction has been fetched. If all sectors needed to process the instruction have been fetched, the method includes transmitting a sector valid signal and processing the instruction. If all sectors needed to process the instruction have not been fetched, the method includes blocking a data valid signal from being transmitted, fetching an additional one or more of the plurality of sectors until all sectors needed to process the instruction have been fetched, transmitting a sector valid signal, and reissuing and processing the instruction using the fetched sectors.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventor: David A. Hrusecky
  • Patent number: 10599210
    Abstract: An application processor may be provided that includes: at least one core; at least one first cache respectively connected to the at least one core and associated with its operation; a second cache associated with an operation of the at least one core, the second cache having a storage capacity greater than that of the first cache; a cache utilization management circuit configured to generate a power control signal for power management of the application processor based on a cache hit rate of the second cache; and a power management circuit configured to determine a power state level of the application processor based on the power control signal and an expected idle time, and to control the at least one core, the at least one first cache, and the second cache based on the power state level.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: March 24, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jong-lae Park, Ju-hwan Kim, Bum-gyu Park, Dae-yeong Lee, Dong-hyeon Ham
  • Patent number: 10592414
    Abstract: Access to a cache by a processing unit is improved. One or more previous requests to access data from a cache are stored. A current request to access data from the cache is retrieved. A determination is made whether the current request is seeking the same data from the cache as at least one of the one or more previous requests. A further determination is made whether the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to a processing unit when seeking access. A next cache write access is suppressed if the at least one of the previous requests seeking the same data was successful in arbitrating access to the processing unit.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: March 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Simon H. Friedmann, Girish G. Kurup, Markus Kaltenbach, Ulrich Mayer, Martin Recktenwald
  • Patent number: 10585797
    Abstract: A computer-implemented method to operate different processor cache levels of a cache hierarchy for a processor with pipelined execution is suggested. The cache hierarchy comprises at least a lower hierarchy level entity and a higher hierarchy level entity. The method comprises: sending a fetch request to the cache hierarchy; detecting a miss event from the lower hierarchy level entity; sending a fetch request to the higher hierarchy level entity; and scheduling at least one write pass.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Simon H. Friedmann, Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Anthony Saporito
  • Patent number: 10579535
    Abstract: A processor includes a processor core and a micro-op cache communicably coupled to the processor core. The micro-op cache includes a micro-op tag array, wherein tag array entries in the micro-op tag array are indexed according to set and way of set-associative cache, and a micro-op data array to store multiple micro-ops. The data array entries in the micro-op data array are indexed according to bank number of a plurality of cache banks and to a set within one cache bank of the plurality of cache banks.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Lihu Rappoport, Jared Warner Stark, IV, Franck Sala, Michael Tal, Gil Shmueli, Adrian Flesler
  • Patent number: 10552331
    Abstract: An arithmetic processing device includes a memory access request issuance unit and a cache comprising a cache memory for tags and data, and a move-in buffer control unit that issues a move-in request for the data of a memory access request when a cache miss occurs. On a cache miss, the move-in buffer control unit determines to acquire a move-in buffer and issue the move-in request when the memory access request has the same index as an index of any move-in request registered in the move-in buffer and the number of move-in requests of that index registered in the move-in buffer is less than the number of ways, and determines not to acquire the move-in buffer and does not issue the move-in request when the memory access request has the same index and the number of move-in requests of that index has reached the number of ways.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: February 4, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Yuki Kamikubo, Noriko Takagi, Takahito Hirano
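    A C sketch of the admission test described above; modeling the move-in buffer as a flat array is an assumption.

      #include <stddef.h>

      struct mib_entry { int valid; unsigned index; };

      /* On a cache miss, acquire a buffer and issue the move-in request
       * only while fewer same-index requests are pending than the cache
       * has ways; otherwise the pending fills would compete for more ways
       * than the set can hold, so the request must wait. */
      static int may_issue_move_in(const struct mib_entry *mib, size_t n,
                                   unsigned index, unsigned num_ways)
      {
          unsigned same = 0;
          for (size_t i = 0; i < n; i++)
              if (mib[i].valid && mib[i].index == index)
                  same++;
          return same < num_ways;
      }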
  • Patent number: 10545867
    Abstract: A device, method, and data storage medium configured to enhance item-access bandwidth and atomic operation are provided. The device comprises a comparison module, a cache, and a distribution module. The comparison module is configured to receive a query request from a service side and determine whether the address pointed to by the query request and an item address stored in the cache are identical. If so, and a valid identifier vld is valid, the comparison module directly returns the item data stored in the cache to the service side without initiating a request to look up an off-chip memory, so as to reduce the frequency of off-chip memory accesses. If not, the comparison module initiates a request to look up the off-chip memory, so as to process, according to a first preconfigured rule, the item data returned by the off-chip memory.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: January 28, 2020
    Assignee: SANECHIPS TECHNOLOGY CO., LTD.
    Inventors: Chuang Bao, Zhenlin Yan, Chunhui Zhang, Kang An
  • Patent number: 10522209
    Abstract: One of a plurality of chip select inputs of a load-reduced dual inline memory module (LRDIMM) may be repurposed to an address input. One of a plurality of memory ranks of the LRDIMM may be selected based on a remainder of the plurality of chip select inputs. The repurposed chip select input may be used to support non-binary rank multiplication of the LRDIMM.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: December 31, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Melvin K. Benedict
  • Patent number: 10496551
    Abstract: A method, system, and apparatus for leveraging non-uniform miss penalty in cache replacement policy to improve performance and power in a chip multiprocessor platform are described herein. One embodiment of a method includes: determining a first set of cache line candidates for eviction from a first memory in accordance to a cache line replacement policy, the first set comprising a plurality of cache line candidates; determining a second set of cache line candidates from the first set based on replacement penalties associated with each respective cache line candidate in the first set; selecting a target cache line from the second set of cache line candidates; and responsively causing the selected target cache line to be moved from the first memory to a second memory.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: December 3, 2019
    Assignee: Intel Corporation
    Inventors: Binh Q. Pham, Ren Wang
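    A C sketch of the two-stage selection; using LRU age for the first stage and a per-line penalty field are assumptions, since the abstract leaves the baseline policy and the penalty source open.

      #include <limits.h>
      #include <stddef.h>

      struct line_meta { int lru_age; int miss_penalty; };

      /* Stage 1: the baseline policy nominates candidates (here, lines at
       * least age_threshold old). Stage 2: among those, evict the one
       * whose replacement penalty is lowest. Falls back to way 0 if no
       * line passes stage 1. */
      static size_t pick_victim(const struct line_meta *set, size_t ways,
                                int age_threshold)
      {
          size_t victim = 0;
          int best = INT_MAX;
          for (size_t w = 0; w < ways; w++)
              if (set[w].lru_age >= age_threshold &&
                  set[w].miss_penalty < best) {
                  best = set[w].miss_penalty;
                  victim = w;
              }
          return victim;
      }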
  • Patent number: 10482018
    Abstract: An arithmetic processing unit includes a cache including a cache memory that stores states of data and data in a block at the index of a memory access request, a move-in buffer control unit that issues a move-in request when a cache miss occurs, and move-in buffers for registering the move-in request. In response to a cache miss, the move-in buffer control unit (a) secures a vacant move-in buffer when one exists, (b) issues a move-in request when no move-in request having the same index as the memory access request is registered in the move-in buffers, (c) issues the move-in request when a move-in request having the same index is registered in the move-in buffers but not all ways are used by move-in requests of that index, and (d) releases the secured move-in buffer when all the ways are used.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: November 19, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Yuki Kamikubo, Noriko Takagi
  • Patent number: 10482032
    Abstract: Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data, relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold available for space reclamation.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: November 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao Y. Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
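    The abstract's utility metric relates heat to relocation cost; a C sketch with one plausible ratio form, which is an assumption since the abstract does not give the formula.

      /* Hot data that is seldom relocated scores high; cold data that the
       * system keeps relocating (e.g., during garbage collection) scores
       * low and falls below the reclamation threshold. */
      static double utility(double heat, double relocations)
      {
          return heat / (1.0 + relocations);
      }

      static int reclaimable(double heat, double relocations, double threshold)
      {
          return utility(heat, relocations) < threshold;  /* fails threshold */
      }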
  • Patent number: 10467140
    Abstract: An apparatus has a cache configured to store entries which correspond to blocks of addresses having one of a plurality of sizes as selected by a control device. When the control device has not yet indicated which size to use, cache access circuitry assumes a default size which is greater than at least one of the plurality of sizes.
    Type: Grant
    Filed: April 14, 2016
    Date of Patent: November 5, 2019
    Assignee: Arm Limited
    Inventors: Roko Grubisic, Hakan Persson, Neil Andrew Jameson
  • Patent number: 10430341
    Abstract: A log-structured storage method and a server are provided. The method includes obtaining a current incremental update of an object when the object is updated, where a current version of the object is stored in a log-structured storage area of the server; determining whether a previous incremental update of the object is stored in the log-structured storage area; and writing the current incremental update as the latest incremental update in the log-structured storage area when no previous incremental update of the object is stored there, such that memory utilization can be improved.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: October 1, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Vinoth Veeraraghavan, Zhibiao Chen
  • Patent number: 10417005
    Abstract: A data processing apparatus is provided comprising a front-end interface electronically coupled to a main processor. The front-end interface is configured to receive data stored in a repository, in particular an external storage and/or a network, determine whether the data is a single-access data or a multiple-access data by analyzing an access parameter designating the data, route the multiple-access data for processing by the main processor, and route the single-access data for pre-processing by the front-end interface and routing results of the pre-processing to the main processor.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: September 17, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Uri Weiser, Tal Horowitz, Jintang Wang
  • Patent number: 10402092
    Abstract: A method may include receiving, by a controller of a storage device and from a host device, a command to resize a first namespace of a plurality of namespaces stored in a non-volatile memory device of the storage device. The method may further include, relocating, by the controller, a physical block address for the non-volatile memory device from an entry in a virtual to physical table identified by a first index value to an entry in the virtual to physical table identified by a second index value, and in response to relocating the physical block address, updating, by the controller, a mapping, by a namespace table, to indicate an initial index value of a second namespace of the plurality of namespaces.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: September 3, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Dylan Mark Dewitt, Piyush Garg
  • Patent number: 10404603
    Abstract: An appliance for evicting data based on the traffic priority of the data is described. The appliance has one or more processors and includes a compression history manager configured to acquire traffic priority information of data being conveyed over a connection, and to assign a compression history set based on the traffic priority information of the data. The compression history manager is also configured to, if cache space does not exist to store the data and another compression history set corresponds to lower traffic priority in a cache queue, evict data from the other compression history set corresponding to lower traffic priority.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: September 3, 2019
    Assignee: CITRIX SYSTEMS, INC.
    Inventors: Praveen Raja Dhanabalan, Chaitra Maraliga Ramaiah, Karthick Srivatsan
  • Patent number: 10402344
    Abstract: Methods and systems for direct data access in, e.g., multi-level cache memory systems are described. A cache memory system includes a cache location buffer configured to store cache location entries, wherein each cache location entry includes an address tag and a cache location table which are associated with a respective cacheline stored in a cache memory. The system also includes a first cache memory configured to store cachelines, each cacheline having data and an identity of a corresponding cache location entry in the cache location buffer, and a second cache memory configured to store cachelines, each cacheline having data and an identity of a corresponding cache location entry in the cache location buffer. Responsive to a memory access request for a cacheline, the cache location buffer generates access information using one of the cache location tables which enables access to the cacheline without performing a tag comparison at the one of the first and second cache memories.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: September 3, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Erik Hagersten, Andreas Sembrant, David Black-Schaffer, Stefanos Kaxiras
  • Patent number: 10380027
    Abstract: An improved virtual memory scheme designed for multi-processor environments that uses processor registers and a small amount of dedicated logic to eliminate the overhead that is associated with the use of page tables. The virtual addressing provides a contiguous virtual address space where the actual real memory is distributed across multiple memories. Locally, within an individual memory, the virtual space may be composed of discontinuous “real” segments or “chunks” within the memory, allowing bad blocks of memory to be bypassed without alteration of the virtual addresses. The delays and additional bus traffic associated with translating from virtual to real addresses are substantially reduced or eliminated.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: August 13, 2019
    Assignee: Friday Harbor LLC
    Inventors: Jerome Vincent Coffin, Douglas A. Palmer
  • Patent number: 10366013
    Abstract: The present disclosure relates to a system and method of managing operation of a cache memory. The system and method assign each nested task a level, and each task within a nested level an instance. Using the assigned task levels and instances, the cache management module is able to determine which cache entries to evict from cache when space is needed, and which evicted cache entries to recover upon completion of preempting tasks.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: July 30, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Lee McFearin, Sushma Wokhlu, Alan Gatherer
  • Patent number: 10339055
    Abstract: A cache system stores a number of different datasets. The cache system includes a number of cache units, each in a state associated with one of the datasets. In response to determining that a hit ratio of a cache unit drops below a threshold, the state of the cache unit is changed and the dataset is replaced with that associated with the new state.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: July 2, 2019
    Assignee: Red Hat, Inc.
    Inventors: Filip Eliáš, Filip Nguyen
  • Patent number: 10332614
    Abstract: Methods, apparatus and systems pertain to performing READ and WRITE functions in a memory which is coupled to a repair controller. One such repair controller could receive a row address and a column address associated with the memory, store a first plurality of tag fields indicating a type of row/column repair to be performed for at least a portion of a row/column of memory cells, and store a second plurality of tag fields indicating a location of memory cells used to perform the row/column repair.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: June 25, 2019
    Assignee: Micron Technology, Inc.
    Inventor: Todd Houg
  • Patent number: 10318425
    Abstract: A method for coordinating cache and memory reservation in a computerized system includes: identifying at least one running application; recognizing the application as a latency-critical application; monitoring information associated with the current cache access rate and the required memory bandwidth of the application; allocating a cache partition whose size corresponds to that cache access rate and memory bandwidth; defining a threshold value as a number of cache misses per time unit; determining a reduction of cache misses per time unit; in response to the reduction of cache misses per time unit being above the threshold value, retaining the cache partition; assigning a scheduling priority for memory requests, including a medium priority level; and assigning a memory channel to the application to avoid memory channel contention.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Robert Birke, Yiyu Chen, Navaneeth Rameshan, Martin Schmatz
  • Patent number: 10320935
    Abstract: A method includes, with a computing system: receiving a first resource request for a Representational State Transfer (REST) web service; in response to determining that a resource request result of the first resource request is not cached, passing the first resource request to the REST web service; receiving, from the REST web service, the resource request result and metadata associated with the resource request result, the metadata indicating a set of entities associated with the resource request result; caching the result and storing the metadata with the cached result; receiving a second resource request, the second resource request being the same as the first resource request; and, in response to determining that an entity from the set of entities has changed since the resource request result was cached, invalidating the cached resource request result and passing the first resource request to the REST web service.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: June 11, 2019
    Assignee: RED HAT, INC.
    Inventors: Pavel Slavicek, Rostislav Svoboda
  • Patent number: 10310980
    Abstract: A system is provided. The system includes a storage controller configured to receive a prefetch command from a host interface. The storage controller includes a read cache memory that stores prefetch data in response to the prefetch command and a plurality of storage tiers coupled to the storage controller and providing the prefetch data. The plurality of storage tiers includes a fastest storage tier that stores the prefetch data if the read cache memory discards the prefetch data after storing the prefetch data.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: June 4, 2019
    Assignee: Seagate Technology LLC
    Inventor: George Alexander Kalwitz
  • Patent number: 10303613
    Abstract: The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: May 28, 2019
    Assignee: Micron Technology, Inc.
    Inventor: Robert M. Walker
  • Patent number: 10289431
    Abstract: Technologies for control and status register (CSR) access include a computing device that starts a firmware initialization phase. The firmware accesses a CSR at an abstract CSR address. The computing device determines whether the upper part of the CSR address matches the cached upper part of a previously accessed CSR address. If the upper parts do not match, the computing device converts the CSR address into a physical address and caches the upper part of the CSR address and the upper part of the physical address. If the upper parts match, the computing device combines the cached upper part of the previously accessed physical address with the offset of the CSR address. The upper part may include 20 bits and the lower part may include 12 bits. The physical address may be the PCIe address of the CSR added to an MMCFG base address. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 1, 2016
    Date of Patent: May 14, 2019
    Assignee: INTEL Corporation
    Inventors: Xueyan Wang, Wenjuan Mao, Qiang Li, John V. Lovelace, James R. Goffena
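    A C sketch of the 20/12-bit split the abstract spells out; slow_translate stands in for the full PCIe-plus-MMCFG conversion and is a stub, not the patent's procedure.

      #include <stdint.h>

      static uint32_t cached_csr_hi, cached_phys_hi;
      static int cache_valid;

      uint64_t slow_translate(uint32_t csr_addr);  /* full conversion (stub) */

      /* Reuse the cached upper 20 bits of the last translation when the
       * upper 20 bits of the abstract CSR address match; only the 12-bit
       * offset changes, so no new conversion is needed. */
      static uint64_t csr_to_phys(uint32_t csr_addr)
      {
          uint32_t hi = csr_addr >> 12, lo = csr_addr & 0xfffu;
          if (!cache_valid || hi != cached_csr_hi) {
              uint64_t phys  = slow_translate(csr_addr);
              cached_csr_hi  = hi;
              cached_phys_hi = (uint32_t)(phys >> 12);
              cache_valid    = 1;
          }
          return ((uint64_t)cached_phys_hi << 12) | lo;
      }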
  • Patent number: 10255100
    Abstract: A method for processor parameter adjustment using a performance optimization engine is provided. An aspect includes receiving, by the performance optimization engine comprising a hardware module in a processor of a computer system, a request to adjust an operating parameter of the processor from software that is executing on the computer system. Another aspect includes determining an adjusted value for the operating parameter by the performance optimization engine during execution of the software. Another aspect includes setting the operating parameter to the adjusted value in a parameter register of the processor. Yet another aspect includes executing the software according to the parameter register by the processor.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: April 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael Karl Gschwind
  • Patent number: 10248422
    Abstract: Systems, methods, and apparatuses for executing an instruction are described. In some embodiments, a decoder circuit decodes an instruction, the instruction including at least an opcode, a field for a source operand, and a field for a destination operand. Execution circuitry executes the decoded instruction to determine whether a tag from the address in the source operand matches a tag in a selected non-volatile memory address cache (NVMAC) cache line; when there is a match, a hit indication is stored in the destination operand, and when there is not a match, a no-hit indication is stored in the destination operand and the NVMAC is updated with the tag from the address in the source operand.
    Type: Grant
    Filed: July 2, 2016
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventor: Sara Baghsorkhi
  • Patent number: 10241705
    Abstract: In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: March 26, 2019
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Patent number: 10241795
    Abstract: A method for managing mappings of storage on a code cache for a processor is provided. The method includes storing a plurality of guest address to native address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether there exists an entry that corresponds to the index, wherein the index comprises a tag and an offset used to identify the entry. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the corresponding block of converted native instructions in the code cache memory. That block of converted native instructions is fetched from the code cache memory for execution.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
  • Patent number: 10216634
    Abstract: A cache directory processing method for a multi-core processor system, and corresponding directory controllers, are presented. The method includes obtaining a first directory entry corresponding to first-type storage space in the shared storage space of the multi-core processor system and in a directory of the shared storage space; performing a directory entry combination operation on the first directory entry, according to each record in the first directory entry and according to an access type and a sharer, to form a second directory entry for the first-type storage space; and, when the record quantity of the second directory entry is less than the record quantity of the first directory entry, replacing the first directory entry with the second directory entry and using the second directory entry as the directory entry corresponding to the first-type storage space in the directory of the shared storage space.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: February 26, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Michael Huang, Tongtong Cao
  • Patent number: 10216646
    Abstract: A method, system and computer program product for cache replacement. The present invention leverages Belady's optimal replacement algorithm by applying it to past cache accesses to inform future cache replacement decisions. The occupied cache capacity of a cache is tracked at every time interval using an occupancy vector. The cache capacity is retroactively assigned to the cache lines of the cache in order of their reuse, where a cache line is considered to be a cache hit if the cache capacity is available at all times between two subsequent accesses. The occupancy vector is updated using the last-touch timestamp of the current memory address. A determination is made as to whether the current memory address results in a cache hit or a cache miss based on the updated occupancy vector. The replacement state for the cache line is stored using the results of the determination.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: February 26, 2019
    Assignee: Board of Regents, The University of Texas System
    Inventors: Calvin Lin, Akanksha Jain
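    A C sketch of the occupancy-vector test; the fixed circular window is a simplification of the patent's interval handling.

      #define WINDOW 64          /* time quanta tracked (assumed) */

      /* A reuse counts as a hit under Belady's policy only if cache
       * capacity was available at every time step between the line's last
       * touch and now; if so, retroactively occupy one unit of capacity
       * across that whole interval. */
      static int opt_would_hit(unsigned occ[WINDOW], unsigned last_touch,
                               unsigned now, unsigned capacity)
      {
          for (unsigned t = last_touch; t < now; t++)
              if (occ[t % WINDOW] >= capacity)
                  return 0;                      /* full at some point */
          for (unsigned t = last_touch; t < now; t++)
              occ[t % WINDOW]++;
          return 1;
      }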
  • Patent number: 10210099
    Abstract: A system and method for low latency and higher bandwidth communication between a central processing unit (CPU) and an accelerator is disclosed. When the CPU updates a copy of data stored at a shared memory, the CPU also sends an “invalidate” command to a cache coherent interconnect (CCI). The CCI forwards the invalidate command to a dedicated cache register (DCR). The DCR marks its copy of the data as “out-of-date” and requests an up-to-date copy of the data from the CCI. The CCI then retrieves up-to-date data for the DCR. When the DCR receives the up-to-date data from the CCI, the DCR replaces the out-of-date data with the up-to-date data, and marks the up-to-date data with the status of “valid.” The DCR can then provide data to an accelerator with a status of “out-of-date” or “valid.”
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: February 19, 2019
    Assignee: Waymo LLC
    Inventors: Grace Nordin, Daniel Rosenband
  • Patent number: 10187247
    Abstract: A computer system has a plurality of computer servers, each including at least one central processing unit (CPU). A memory appliance is spaced remotely from the plurality of computer servers. The memory appliance includes a memory controller and random access memory (RAM). At least one photonic interconnection is between the plurality of computer servers and the memory appliance. An allocated portion of the RAM is addressable by a predetermined CPU selected during a configuration event from the plurality of computer servers.
    Type: Grant
    Filed: July 30, 2010
    Date of Patent: January 22, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Terrel Morris
  • Patent number: 10180907
    Abstract: A processor includes an arithmetic processing circuit, a cache memory including a plurality of ways, a usage information register storing usage information indicating whether to use each of the plurality of ways, a purge control circuit performing purge processing on a basis of rewriting of the usage information within the usage information register according to an instruction executed by the arithmetic processing circuit, the purge processing including processing of deleting, from the cache memory, target data retained in a target way to be stopped and processing of writing back part of the target data, the part of the target data being data rewritten in the cache memory, to a main memory at a lower level than the cache memory, and an access control circuit controlling accessing the cache memory on a basis of a memory access request received from the arithmetic processing circuit and status of the purge processing.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: January 15, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Shuji Yamamura
  • Patent number: 10176211
    Abstract: Described are methods, systems and computer readable media for external table index mapping.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: January 8, 2019
    Assignee: Deephaven Data Labs LLC
    Inventors: Charles Wright, Ryan Caudy, David R. Kent, IV, Mark Zeldis, Radu Teodorescu
  • Patent number: 10169246
    Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: January 1, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
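    A C sketch of the region selection and address abbreviation; taking the modulus on the raw virtual address and the bit layout of the rebuilt physical address are assumptions.

      #include <stdint.h>

      #define N 2                /* 2^N compressed data regions (assumed) */

      /* The virtual address mod 2^N picks the compressed data region and
       * its set of free memory lists, so the N upper physical-address
       * bits are implied by the region and can be dropped from metadata. */
      static unsigned region_of(uint64_t vaddr)
      {
          return (unsigned)(vaddr % (1u << N));
      }

      /* Rebuild the full physical address from the abbreviated one by
       * restoring the implied upper N bits. */
      static uint64_t full_phys(uint64_t abbrev, unsigned region,
                                unsigned phys_bits)
      {
          return ((uint64_t)region << (phys_bits - N)) | abbrev;
      }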