Partitioned Cache Patents (Class 711/129)
  • Patent number: 11300992
    Abstract: Methods and systems for implementing independent time in a hosted operating environment are disclosed. The hosted, or guest, operating environment can be seeded with a guest time value by a guest operating environment manager that maintains a time delta between a host clock time and an enterprise time. The guest operating environment can subsequently manage its guest clock from the guest time value. If the guest operating environment is halted, the guest operating environment manager can manage correspondence between the host clock time and the enterprise time by periodically assessing divergence between actual and expected values of the host clock time.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: April 12, 2022
    Assignee: Unisys Corporation
    Inventors: Robert F. Inforzato, Dwayne E. Ebersole, Daryl R. Smith, Grace W. Lin, Andrew Ward Beale, Loren C. Wilton
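The time-seeding scheme this abstract describes can be sketched in a few lines. All names here are illustrative, not from the patent; the sketch only shows the arithmetic of maintaining a host-to-enterprise delta and checking drift:

```python
# Hypothetical sketch of a guest-time manager: the delta between host clock
# and enterprise time is captured once, then used to seed the guest clock.
class GuestTimeManager:
    def __init__(self, host_clock, enterprise_time):
        # Delta maintained by the guest operating environment manager.
        self.delta = enterprise_time - host_clock

    def seed_guest_time(self, host_clock_now):
        # Guest time = current host time shifted by the maintained delta.
        return host_clock_now + self.delta

    def divergence(self, host_clock_actual, host_clock_expected):
        # Periodic check: how far the host clock drifted from its expected value.
        return host_clock_actual - host_clock_expected
```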
  • Patent number: 11256625
    Abstract: Memory transactions can be tagged with a partition identifier selected depending on which software execution environment caused the memory transaction to be issued. A memory system component can control allocation of resources for handling the memory transaction, or manage contention for said resources, depending on a set of memory system component parameters selected according to the partition identifier specified by the memory transaction; or it can control, depending on the partition identifier specified by the memory transaction, whether performance monitoring data is updated in response to the memory transaction. Page table walk memory transactions may be assigned a different partition identifier from the partition identifier assigned to the corresponding data/instruction access memory transaction.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: February 22, 2022
    Assignee: Arm Limited
    Inventor: Steven Douglas Krueger
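The partition-identifier mechanism above can be sketched as a parameter lookup. The table contents, parameter names, and counter shape below are all assumptions made for illustration; only the idea of keying resource limits and monitoring on a partition ID comes from the abstract:

```python
# Hypothetical per-partition parameters: a resource cap and a monitoring flag.
PARTITION_PARAMS = {
    0: {"max_cache_ways": 8, "monitor": True},
    1: {"max_cache_ways": 2, "monitor": False},
}

def handle_transaction(partition_id, perf_counters):
    params = PARTITION_PARAMS[partition_id]
    # Resource allocation is capped per partition.
    ways = params["max_cache_ways"]
    # Performance monitoring data is updated only if this partition enables it.
    if params["monitor"]:
        perf_counters[partition_id] = perf_counters.get(partition_id, 0) + 1
    return ways
```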
  • Patent number: 11256620
    Abstract: Systems and methods are disclosed that include a memory device and a processing device coupled to the memory device. The processing device can determine an amount of valid blocks in a memory device of a memory sub-system, then determine a surplus amount of valid blocks on the memory device based on that amount, and then configure the size of a cache of the memory device based on the surplus amount of valid blocks.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 22, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kevin R. Brandt, Peter Feeley, Kishore Kumar Muchherla, Yun Li, Sampath K. Ratnam, Ashutosh Malshe, Christopher S. Hale, Daniel J. Hubbard
  • Patent number: 11231949
    Abstract: Disclosed are embodiments for migrating a virtual machine (VM) from a source host to a destination host while the virtual machine is running on the destination host. The system includes an RDMA facility connected between the source and destination hosts and a device coupled to a local memory, the local memory being responsible for memory pages of the VM instead of the source host. The device is configured to copy pages of the VM to the destination host and to maintain correct operation of the VM by monitoring coherence events, such as a cache miss, caused by the virtual machine running on the destination host. The device services these cache misses using the RDMA facility and copies the cache line satisfying the cache miss to the CPU running the VM. The device also tracks the cache misses to create an access pattern that it uses to predict future cache misses.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: January 25, 2022
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Pratap Subrahmanyam
  • Patent number: 11234151
    Abstract: There is provided a method in a first device of a cellular communication system, the method comprising: acquiring a first value of a performance indicator; causing a transmission of management plane performance data to a second device of the cellular communication system, said performance data comprising said first value; acquiring a second value of the performance indicator; and preventing a transmission of the second value if the second value is substantially equal to the first value.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: January 25, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Kimmo Kalervo Hatonen, Shubham Kapoor, Ville Matti Kojola, Sasu Tarkoma
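The transmission-suppression rule in the abstract above reduces to a small predicate. The tolerance parameter is an assumption standing in for the abstract's "substantially equal"; the function name is invented:

```python
# Sketch: only transmit a new performance-indicator value when it differs
# materially from the last value sent to the second device.
def should_transmit(last_sent, new_value, tolerance=0.0):
    if last_sent is None:
        return True  # nothing sent yet, so always transmit the first value
    return abs(new_value - last_sent) > tolerance
```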
  • Patent number: 11210231
    Abstract: Techniques for performing cache management includes partitioning entries of a hash table into buckets, wherein each of the buckets includes a portion of the entries of the hash table, configuring a cache, wherein the configuring includes allocating a section of the cache for exclusive use by each bucket, and performing first processing that stores a data block in the cache. The first processing includes determining a hash value for a data block, selecting, in accordance with the hash value, a first bucket of the plurality of buckets, wherein a first section of the cache is used exclusively for storing cached data blocks of the first bucket, storing metadata used in connection with caching the data block in a first entry of the first bucket, and storing the data block in a first cache location of the first section of the cache.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: December 28, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Anton Kucherov, Ronen Gazit, Vladimir Shveidel, Uri Shabi
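The bucketed cache layout above can be sketched as hash-selected, mutually exclusive sections. The bucket count, hash choice, and metadata shape are all assumptions for illustration; the abstract only fixes the hash-to-bucket mapping and the per-bucket exclusive section:

```python
import hashlib

NUM_BUCKETS = 4  # assumption: a small fixed number of buckets

def bucket_for(block: bytes) -> int:
    # Select a bucket in accordance with the data block's hash value.
    digest = hashlib.sha256(block).digest()
    return digest[0] % NUM_BUCKETS

def store(cache_sections, metadata, block: bytes):
    b = bucket_for(block)
    # The block goes into the cache section used exclusively by its bucket,
    # and caching metadata is recorded in an entry of that bucket.
    cache_sections[b].append(block)
    metadata[b][hashlib.sha256(block).hexdigest()] = len(cache_sections[b]) - 1
    return b
```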
  • Patent number: 11176041
    Abstract: A method for cache coherency in a reconfigurable cache architecture is provided. The method includes receiving a memory access command, wherein the memory access command includes at least an address of a memory to access; determining at least one access parameter based on the memory access command; and determining a target cache bin for serving the memory access command based in part on the at least one access parameter and the address.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: November 16, 2021
    Assignee: Next Silicon Ltd.
    Inventor: Elad Raz
  • Patent number: 11150962
    Abstract: A technique is introduced for intercepting memory calls from a user-space application and applying an allocation policy to determine whether such calls are handled using volatile memory such as dynamic random-access memory (DRAM) or persistent memory (PMEM). In an example embodiment, memory calls from an application are intercepted by a memory allocation capture library. Such calls may be to a memory function such as malloc( ) and may be configured to cause a portion of DRAM to be allocated to the application to process a task. The memory allocation capture library then determines whether the intercepted call satisfies capture criteria associated with an allocation policy. If the intercepted call does satisfy the capture criteria, the call is processed to cause a portion of PMEM to be allocated to the application instead of DRAM.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: October 19, 2021
    Assignee: MemVerge, Inc.
    Inventors: Ronald S. Niles, Yue Li
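The capture-criteria decision in the abstract above can be sketched as a routing predicate. The size-threshold policy shown is an invented example of capture criteria; real policies could key on other attributes:

```python
# Hypothetical allocation policy: intercepted allocation calls at or above a
# size threshold are satisfied from PMEM; everything else stays in DRAM.
def choose_backing(size_bytes, policy):
    # policy keys ("min_capture_size", "pmem_available") are assumptions.
    if policy["pmem_available"] and size_bytes >= policy["min_capture_size"]:
        return "PMEM"
    return "DRAM"
```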
  • Patent number: 11151005
    Abstract: A method, computer program product, and computing system for writing, from a first node to a second node, a first portion of data from a memory pool in the first node defined by, at least in part, a first pointer. One or more input/output (IO) operations may be received while writing the first portion of data to the second node. Data from the one or more IO operations may be stored within the memory pool after the first pointer.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: October 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Bar David, Vladimir Shveidel
  • Patent number: 11138123
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: October 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
  • Patent number: 11119981
    Abstract: In one example, a method may include receiving a write operation corresponding to a portion of a data chunk stored at a first storage location in a write-in-place file system. The write-in-place file system may include encoded data chunks and unencoded data chunks. The method may include determining whether the data chunk is an encoded data chunk based on metadata associated with the data chunk, modifying the data chunk based on the write operation, and selectively performing a redirect-on-write operation on the modified data chunk based on the determination.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: September 14, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Shyamalendu Sarkar, Sri Satya Sudhanva Kambhammettu, Narayanan Ananthakrishnan Nellayi, Naveen B
  • Patent number: 11113302
    Abstract: Database environments may choose to schedule complex analytics processing to be performed by specialized processing environments by caching source datasets or other data needed for the analytics and then outputting results back to customer datasets. It is complex to schedule user database operations, such as running dataflows, recipes, scripts, rules, or the like that may rely on output from the analytics, if the user database operations are on one schedule, while the analytics is on another schedule. User/source datasets may become out of sync and one or both environments may operate on stale data. One way to resolve this problem is to define triggers that, for example, monitor for changes to datasets (or other items of interest) by analytics or other activity and automatically run dataflows, recipes, or the like that are related to the changed datasets (or other items of interest).
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: September 7, 2021
    Assignee: SALESFORCE.COM, INC.
    Inventors: Keith Kelly, Ravishankar Arivazhagan, Wenwen Liao, Zhongtang Cai, Ali Sakr
  • Patent number: 11086793
    Abstract: Techniques for cache management may include: partitioning a cache into buckets of cache pages, wherein each bucket has an associated cache page size and each bucket includes cache pages of the associated cache page size for that bucket, wherein the cache includes compressed pages of data and uncompressed pages of data; and performing processing that stores a first page of data in the cache. The processing may include storing the first page of data in a first cache page of a selected bucket having a first associated cache page size determined in accordance with a first compressed size of the first page of data. The cache may be repartitioned among the buckets based on associated access frequencies of the buckets of cache pages.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: August 10, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Anton Kucherov, David Meiri
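Bucket selection by compressed size, as described above, amounts to a best-fit lookup. The specific page sizes listed are assumptions; the abstract only specifies that each bucket has one associated cache page size:

```python
# Assumed bucket page sizes, one bucket per size, in ascending order (bytes).
BUCKET_PAGE_SIZES = [512, 1024, 2048, 4096]

def select_bucket(compressed_size):
    # Pick the smallest bucket whose page size can hold the compressed page.
    for size in BUCKET_PAGE_SIZES:
        if compressed_size <= size:
            return size
    raise ValueError("page larger than any bucket's page size")
```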
  • Patent number: 11074197
    Abstract: A computational device receives indications of a minimum retention time and a maximum retention time in cache for a first plurality of tracks, wherein no indications of a minimum retention time or a maximum retention time in the cache are received for a second plurality of tracks. A cache management application demotes a track of the first plurality of tracks from the cache, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the cache and the track has been in the cache for a time that exceeds the minimum retention time. The cache management application demotes the track of the first plurality of tracks, in response to determining that the track has been in the cache for a time that exceeds the maximum retention time.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 27, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Joseph Hayward, Kyler A. Anderson, Matthew G. Borlick
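The two demotion conditions in the abstract above can be written as one predicate. The track representation is invented; the rules themselves (demote the LRU track once past its minimum retention, demote any track past its maximum retention, plain LRU for tracks without hints) follow the abstract:

```python
# Sketch: tracks are dicts with "cached_at" and optional retention hints.
def should_demote(track, lru_track, now):
    age = now - track["cached_at"]
    # Past the maximum retention time: demote unconditionally.
    if track.get("max_retention") is not None and age > track["max_retention"]:
        return True
    # LRU track with a minimum retention hint: demote only once past it.
    if track is lru_track and track.get("min_retention") is not None:
        return age > track["min_retention"]
    # LRU track without hints follows ordinary LRU demotion.
    if track is lru_track and track.get("min_retention") is None:
        return True
    return False
```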
  • Patent number: 11061816
    Abstract: Techniques are provided for computer memory mapping and allocation. In an example, a virtual memory address space is divided into an active half and a passive half. Processors make memory allocations to their respective portions of the active half until one processor has made a determined number of allocations. When that occurs, and when all memory in the passive half that has been allocated has been returned, then the active and passive halves are switched, and all processors are switched to making allocations in the newly-active half.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 13, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: Max Laier
  • Patent number: 11042483
    Abstract: A computer system includes a cache and processor. The cache includes a plurality of data compartments configured to store data. The data compartments are arranged as a plurality of data rows and a plurality of data columns. Each data row is defined by an addressable index. The processor is in signal communication with the cache, and is configured to operate in a full cache purge mode and a selective cache purge mode. In response to invoking one or both of the full cache purge mode and the selective cache purge mode, the processor performs a pipe pass on a selected addressable index to determine a number of valid compartments and a number of invalid compartments, and performs an eviction operation on the valid compartments while skipping the eviction operation on the invalid compartments.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: June 22, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Deanna P. D. Berger, Vesselina Papazova
  • Patent number: 11023383
    Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: June 1, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
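The choice between the two demotion lists could be driven by many policies; the share-based heuristic below is purely an invented example of making that determination, not the patent's method:

```python
# Sketch: separate LRU lists for fast tracks and slow tracks; demote from
# whichever list exceeds its target share of the cache (heuristic assumed).
def pick_demotion_list(fast_list, slow_list, fast_share=0.5):
    total = len(fast_list) + len(slow_list)
    if total == 0:
        return None  # nothing cached, nothing to demote
    if len(fast_list) > total * fast_share:
        return "fast"
    return "slow"
```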
  • Patent number: 11003592
    Abstract: In an example, an apparatus comprises a plurality of compute engines; and logic, at least partially including hardware logic, to detect a cache line conflict in a last-level cache (LLC) communicatively coupled to the plurality of compute engines; and implement context-based eviction policy to determine a cache way in the cache to evict in order to resolve the cache line conflict. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: May 11, 2021
    Assignee: INTEL CORPORATION
    Inventors: Neta Zmora, Eran Ben-Avi
  • Patent number: 10997031
    Abstract: A method, computer program product, and computer system for executing an automatic recovery of log metadata. A secondary storage processor may request one or more log metadata buffer values from a first buffer used by a primary storage processor. The secondary storage processor may update one or more log metadata buffer values from a second buffer used by the secondary storage processor.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: May 4, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Cheng Wan, Socheavy D. Heng, Xinlei Xu, Yousheng Liu, Baote Zhuo
  • Patent number: 10956331
    Abstract: Techniques described herein generally include methods and systems related to cache partitioning in a chip multiprocessor. Cache-partitioning for a single thread or application between multiple data sources improves energy or latency efficiency of a chip multiprocessor by exploiting variations in energy cost and latency cost of the multiple data sources. Partition sizes for each data source may be selected using an optimization algorithm that minimizes or otherwise reduces latencies or energy consumption associated with cache misses.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: March 23, 2021
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
  • Patent number: 10929310
    Abstract: Systems and methods provide for optimizing utilization of an Address Translation Cache (ATC). A network interface controller (NIC) can write information reserving one or more cache lines in a first level of the ATC to a second level of the ATC. The NIC can receive a request for a direct memory access (DMA) to an untranslated address in memory of a host computing system. The NIC can determine that the untranslated address is not cached in the first level of the ATC. The NIC can identify a selected cache line in the first level of the ATC to evict using the request and the second level of the ATC. The NIC can receive a translated address for the untranslated address. The NIC can cache the untranslated address in the selected cache line. The NIC can perform the DMA using the translated address.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: February 23, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Sagar Borikar, Ravikiran Kaidala Lakshman
  • Patent number: 10915462
    Abstract: Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks are determined for the scan of the cache. The number of tasks are executed to scan the cache to destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
  • Patent number: 10877675
    Abstract: Provided is a system and method for improving memory management in a database. In one example, the method may include receiving a request to store a data object within a database, determining a category type associated with the data object from among a plurality of category types based on an attribute of the data object, and storing the data object via a memory pool corresponding to the determined category from among a plurality of memory pools corresponding to the plurality of respective categories, where the storing comprises allocating a first category type of data object to a first memory pool locked to main memory and allocating a second category type of data object to a second memory pool that is swapped out to disk over time. The locked memory pool can ensure that more important data items remain available even when they are the least recently used.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: December 29, 2020
    Assignee: SAP SE
    Inventors: Anupam Mukherjee, Mihnea Andrei
  • Patent number: 10856046
    Abstract: A buffer status metric represents a current amount of video packets in a video playback buffer (303) of a user device (300). A buffer status action is triggered based on the buffer status metric and at least one of a mobility metric representing a mobility pattern of the user device (300) and a radio quality metric representing a signal strength of a radio channel carrying video packets towards the user device (300). The embodiments also relate to determination of a buffer control model (142) that can be used to control the video playback buffer based on the input metrics. The embodiments achieve more efficient control of video playback buffers (303) and may reduce the risk of freezes during video playback.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: December 1, 2020
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Jing Fu, Steven Corroy, Selim Ickin
  • Patent number: 10783131
    Abstract: A system and method for efficiently storing data in a storage system. A data storage subsystem includes multiple data storage locations on multiple storage devices in addition to at least one mapping table. A data storage controller determines whether data to store in the storage subsystem has one or more patterns of data intermingled with non-pattern data within an allocated block. Rather than store the one or more pattern on the storage devices, the controller stores information in a header on the storage devices. The information includes at least an offset for the first instance of a pattern, a pattern length, and an identification of the pattern. The data may be reconstructed for a corresponding read request from the information stored in the header.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: September 22, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Marco Sanvido, Richard Hankins, John Hayes, Steve Hodgson, Feng Wang, Sergey Zhuravlev, Andrew Kleinerman
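The header-based pattern elision above can be illustrated with a single-byte pattern. The header layout (offset, run length, pattern) follows the abstract; the encoding and function names are assumptions:

```python
# Sketch: replace runs of a repeating pattern with header entries
# (offset, length, pattern) and keep only the non-pattern bytes.
def compress_block(data: bytes, pattern: bytes):
    header, out, i = [], bytearray(), 0
    while i < len(data):
        if data[i:i + len(pattern)] == pattern:
            start = i
            while data[i:i + len(pattern)] == pattern:
                i += len(pattern)
            header.append((start, i - start, pattern))
        else:
            out.append(data[i])
            i += 1
    return header, bytes(out)

def reconstruct(header, non_pattern: bytes):
    # Re-insert each recorded run at its original offset, ascending.
    out = bytearray(non_pattern)
    for offset, length, pattern in sorted(header):
        out[offset:offset] = pattern * (length // len(pattern))
    return bytes(out)
```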
  • Patent number: 10769021
    Abstract: A cache coherency protection system provides for data redundancy by sharing a cache coherence memory pool for protection purposes. The system works consistently across all communication protocols, yields improved data availability with potentially less memory waste, and makes data availability faster in node/director failure scenarios. According to various embodiments, the cache coherency protection system may include a writer/requester director that receives a write request from a host, a protection target director that is a partner of the writer/requester director, and a directory.
    Type: Grant
    Filed: December 31, 2010
    Date of Patent: September 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Victor Salamon, Paul A. Shelley, Ronald C. Unrau, Steven R. Bromling
  • Patent number: 10761984
    Abstract: Disclosed are embodiments for running an application on a local processor when the application is dependent on pages not locally present but contained in a remote host. The system is informed that the pages on which the application depends are locally present. While running, the application encounters a cache miss and a cache line satisfying the miss from the remote host is obtained and provided to the application. Alternatively, the page containing the cache line satisfying the miss is obtained and the portion of the page not including the cache line is stored locally while the cache line is provided to the application. The cache miss is discovered by monitoring coherence events on a coherence interconnect connected to the local processor. In some embodiments, the cache misses are tracked and provide a way to predict a set of pages to be pre-fetched in anticipation of the next cache misses.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: September 1, 2020
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Pratap Subrahmanyam
  • Patent number: 10725914
    Abstract: An infrequently used method is selected for eviction from a code cache repository by accessing a memory management data structure from an operating system, using the data structure to identify a first set of pages that are infrequently referenced relative to a second set of pages, determining whether or not a page of the first set of pages is part of a code cache repository and includes at least one method, in response to the page of the first set of pages being part of the code cache repository and including at least one method, flagging the at least one method as a candidate for eviction from the code cache repository, determining whether or not a code cache storage space limit has been reached for the code cache repository, and, in response to the storage space limit being reached, evicting the at least one flagged method from the code cache repository.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: July 28, 2020
    Assignee: International Business Machines Corporation
    Inventor: Marius Pirvu
  • Patent number: 10698832
    Abstract: The present invention discloses a method of using memory allocation to address hot and cold data, comprising the steps of: using a hardware performance monitor (HPM) to detect at least one read/write event of a central processor; when the number of read/write events reaches a threshold or a random value, recording, by a computer system, the access type of the most recent read/write event and the memory address causing it; and assigning, by the computer system, the memory object at that memory address to a volatile memory or a non-volatile memory according to the memory address and the access type. Thereby, data pages can be assigned automatically according to the access types categorized by the central processor, without being assigned manually by engineers.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: June 30, 2020
    Assignee: NATIONAL CHUNG CHENG UNIVERSITY
    Inventor: Shi-Wu Lo
  • Patent number: 10698778
    Abstract: A dispersed storage network (DSN) includes multiple storage units. A processing unit included in the DSN issues an access request to one of the storage units, and identifies the storage unit as a failing storage unit based, at least in part, on a rate of growth of a network queue associated with the storage unit. The processing unit then issues an error indicator to a recovery unit for further action.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: June 30, 2020
    Assignee: PURE STORAGE, INC.
    Inventors: Kumar Abhijeet, Andrew D. Baptist, Ilir Iljazi, Gregory A. Papadopoulos, Jason K. Resch
  • Patent number: 10691566
    Abstract: Provided are a computer program product, system, and method for using a track format code in a cache control block for a track in a cache to process read and write requests to the track in the cache. A track format table associates track format codes with track format metadata. A determination is made as to whether the track format table has track format metadata matching track format metadata of a track staged into the cache. A determination is made as to whether a track format code from the track format table for the track format metadata in the track format table matches the track format metadata of the track staged. A cache control block for the track being added to the cache is generated including the determined track format code when the track format table has the matching track format metadata.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: June 23, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Beth A. Peterson
  • Patent number: 10678690
    Abstract: Providing fine-grained Quality of Service (QoS) control using interpolation for partitioned resources in processor-based systems is disclosed. In this regard, in one aspect, a processor-based system provides a partitioned resource (such as a system cache or memory access bandwidth to a shared system memory) that is subdivided into a plurality of partitions, and that is configured to service a plurality of resource clients. A resource allocation agent of the processor-based system provides a plurality of allocation indicators corresponding to each combination of resource client and partition, and indicating an allocation of each partition for each resource client. The resource allocation agent allocates the partitioned resource among the resource clients based on an interpolation of the plurality of allocation indicators.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: June 9, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Derek Robert Hower, Carl Alan Waldspurger, Vikramjit Sethi
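The interpolation of per-client, per-partition allocation indicators can be sketched as a weighted split of each partition. The normalization shown is an assumed form of interpolation, not the patent's specific formula; indicator and capacity values are invented:

```python
# Sketch: indicators maps each client to one indicator per partition; each
# partition's capacity is divided among clients in proportion to indicators.
def allocate(indicators, partition_capacity):
    num_partitions = len(next(iter(indicators.values())))
    shares = {c: [0.0] * num_partitions for c in indicators}
    for p in range(num_partitions):
        total = sum(ind[p] for ind in indicators.values())
        if total == 0:
            continue  # no client wants this partition
        for client, ind in indicators.items():
            shares[client][p] = partition_capacity * ind[p] / total
    return shares
```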
  • Patent number: 10664396
    Abstract: A method and apparatus for performing a data transfer, which include a selection a data transfer operation mode, based on telemetry data, from a first operation mode where a first type of data is transferred from a memory of a computing system to one or more shared storage devices, and a second operation mode where a second type of data is transferred from the memory to the one or more shared storage devices, the first type of data being associated with a first range of address space of the one or more shared storage devices, the second type of data being associated with a second range of address space of the one or more shared storage devices different from the first range of address space. Furthermore, a data transfer from the memory to the one or more shared storage devices in the selected data transfer operation mode may be included.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: May 26, 2020
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Kshitij Doshi, Sujoy Sen
  • Patent number: 10664453
    Abstract: According to one embodiment, a file system (FS) of a storage system is partitioned into a plurality of FS partitions, where each FS partition stores segments of data files. In response to a request for writing a file to the storage system, the file is stored in a first of the FS partitions that is selected based on a time attribute of the file, such that files having similar time attributes are stored in an identical FS partition.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 26, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Soumyadeb Mitra, Windsor W. Hsu
  • Patent number: 10642757
    Abstract: Single hypervisor call to perform pin and unpin operations. A hypervisor call relating to the pinning of units of memory is obtained. The hypervisor call specifies an unpin operation for a first memory address and a pin operation for a second memory address. Based on obtaining the hypervisor call, at least one of the unpin operation for the first memory address and the pin operation for the second memory address is performed.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: May 5, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10642735
    Abstract: Disclosed are methods and apparatuses that implement automatic resizing of statement caches in response to cache metrics. One embodiment provides an approach for periodically calculating a session eligibility index for each session cache, wherein the session eligibility index indicates the priority level of the session cache for resizing, and selecting and resizing one or more session caches based at least in part on the session eligibility index.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 5, 2020
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Mehul Dilip Bastawala, Srinath Krishnaswamy, Lakshminarayanan Luxi Chidambaran, Santanu Datta
  • Patent number: 10635594
    Abstract: One embodiment is related to a method for redistributing cache space, comprising: determining utility values associated with all of a plurality of clients, each client being associated with a respective utility value, the utility value being indicative of an efficiency of cache space usage of the associated client; and redistributing cache space among the plurality of clients based on the utility values.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 28, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Grant Wallace, Philip Shilane, Shuang Liang
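The utility-based redistribution can be sketched as dividing a fixed cache budget among clients in proportion to their utility values, so clients that use cache efficiently receive more space. The proportional rule is an assumption; the abstract only says redistribution is based on the utility values.

```python
def redistribute(utilities, total_space):
    """Map each client to a share of total_space proportional to its utility.

    utilities: dict of client name -> utility value (e.g. hits per unit
    of cache space occupied).
    """
    total_utility = sum(utilities.values())
    if total_utility == 0:
        even = total_space // len(utilities)   # no signal: split evenly
        return {c: even for c in utilities}
    return {c: int(total_space * u / total_utility)
            for c, u in utilities.items()}
```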
  • Patent number: 10592420
    Abstract: One embodiment is related to a method for redistributing cache space, comprising: determining a request by a first client of a plurality of clients for additional cache space, each of the plurality of clients being associated with a guaranteed minimum amount (MIN) and a maximum amount (MAX) of cache space; and fulfilling or denying the request based on an amount of cache space the first client currently occupies, an amount of cache space requested by the first client, and the MIN and the MAX cache space associated with the first client.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: March 17, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Shuang Liang, Philip Shilane, Grant Wallace
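The MIN/MAX grant-or-deny rule can be sketched directly from the abstract: a request is fulfilled only if the requester stays within its MAX and the space can be reclaimed from other clients without pushing any of them below its guaranteed MIN. The reclamation rule is an assumed reading of how the MINs constrain the decision.

```python
def can_fulfill(client, request, clients):
    """Decide whether a client's request for extra cache space is granted.

    clients: dict of name -> {"used", "min", "max"} cache amounts.
    """
    c = clients[client]
    if c["used"] + request > c["max"]:
        return False                     # would exceed the client's MAX
    # Space reclaimable from other clients without violating their MINs.
    reclaimable = sum(max(0, o["used"] - o["min"])
                      for name, o in clients.items() if name != client)
    return reclaimable >= request
```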
  • Patent number: 10592288
    Abstract: A computing system includes a computer in communication with a tiered storage system. The computing system identifies a set of data transferring to a storage tier within the storage system. The computing system identifies a program to which the data set is allocated and determines to increase or reduce resources of the computer allocated to the program, based on the set of data transferring to the storage tier. The computing system discontinues transferring the set of data to the storage tier if a resource allocated to the program cannot be increased.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: March 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rahul M. Fiske, Akshat Mithal, Sandeep R. Patil, Subhojit Roy
  • Patent number: 10592418
    Abstract: Shared memory caching resolves latency issues in computing nodes associated with a cluster in a virtual computing environment. A portion of random access memory in one or more of the computing nodes is allocated for shared use by the cluster. Whenever local cache memory is unavailable in one of the computing nodes, a cluster neighbor cache allocated in a different computing node may be utilized as remote cache memory. Neighboring computing nodes may thus share their resources for the benefit of the cluster.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: March 17, 2020
    Assignee: Dell Products, L.P.
    Inventor: John Kelly
  • Patent number: 10585808
    Abstract: Single hypervisor call to perform pin and unpin operations. A hypervisor call relating to the pinning of units of memory is obtained. The hypervisor call specifies an unpin operation for a first memory address and a pin operation for a second memory address. Based on obtaining the hypervisor call, at least one of the unpin operation for the first memory address and the pin operation for the second memory address is performed.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: March 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10585670
    Abstract: A processor architecture includes a register file hierarchy to implement virtual registers that provide a larger set of registers than those directly supported by an instruction set architecture to facilitate multiple copies of the same architecture register for different processing threads, where the register file hierarchy includes a plurality of hierarchy levels. The processor architecture further includes a plurality of execution units coupled to the register file hierarchy.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
  • Patent number: 10565139
    Abstract: Multiple memory devices, such as hard drives, can be combined and logical partitions can be formed between the drives to allow a user to control regions on the drives that will be used for storing content, and also to provide redundancy of stored content in the event that one of the drives fails. Priority levels can be assigned to content recordings such that higher value content can be stored in more locations and easily accessible locations within the utilized drives. Users can control and organize how recorded content is stored between the drives such that an external drive may be removed from a first gateway device and attached to a second gateway device without losing the ability to access the recorded content from the first gateway device at a later time. In this manner, a user is provided with the ability to transport an external drive containing stored content recordings between multiple different gateway devices such that the recordings may be accessed at different locations or user premises.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: February 18, 2020
    Assignee: Comcast Cable Communications, LLC
    Inventor: Ross Gilson
  • Patent number: 10552329
    Abstract: An SSD caching system for hybrid storages is disclosed. The caching system for hybrid storages includes: a Solid State Drive (SSD) for storing cached data, separated into a Repeated Pattern Cache (RPC) area and a Dynamical Replaceable Cache (DRC) area; and a caching managing module, including: an Input/output (I/O) profiling unit, for detecting I/O requests for accesses of blocks in a Hard Disk Drive (HDD) during a number of continuously detecting time intervals, and storing first data corresponding to first blocks being repeatedly accessed at least twice in individual continuously detecting time intervals to the RPC area sequentially; and a hot data searching unit, for detecting I/O requests for accesses of the HDD during an independently detecting time interval, and storing second data corresponding to second blocks being accessed at least twice in the independently detecting time interval to the DRC area sequentially.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 4, 2020
    Assignee: Prophetstor Data Services, Inc.
    Inventors: Wen Shyen Chen, Ming Jen Huang
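The two-area classification can be sketched from the abstract's rules: blocks accessed at least twice in each of several consecutive intervals belong in the repeated-pattern cache (RPC), while blocks accessed at least twice within just the latest interval go to the dynamically replaceable cache (DRC). The window length is an assumed parameter.

```python
def classify_blocks(interval_counts, repeat_intervals=2):
    """Split hot blocks into RPC and DRC candidates.

    interval_counts: list of dicts (block -> access count), one per
    detection interval, oldest first. Returns (rpc_blocks, drc_blocks).
    """
    recent = interval_counts[-repeat_intervals:]
    # RPC: accessed >= 2 times in every one of the recent intervals.
    rpc = {b for b in recent[0]
           if all(window.get(b, 0) >= 2 for window in recent)}
    # DRC: accessed >= 2 times in the latest interval, but not already RPC.
    latest = interval_counts[-1]
    drc = {b for b, n in latest.items() if n >= 2} - rpc
    return rpc, drc
```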
  • Patent number: 10552327
    Abstract: Systems, methods, and computer readable media to improve the operation of electronic devices that use integrated cache systems are described. In general, techniques are disclosed to manage the leakage power attributable to an integrated cache memory by dynamically resizing the cache during device operations. More particularly, run-time cache operating parameters may be used to dynamically determine if the cache may be resized. If effective use of the cache may be maintained using a smaller cache, a portion of the cache may be power-gated (e.g., turned off). The power loss attributable to that portion of the cache power-gated may thereby be avoided. Such power reduction may extend a mobile device's battery runtime. Cache portions previously turned off may be brought back online as processing needs increase so that device performance does not degrade.
    Type: Grant
    Filed: August 23, 2016
    Date of Patent: February 4, 2020
    Assignee: Apple Inc.
    Inventors: Robert P. Esser, Nikolay N. Stoimenov
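The dynamic resize decision can be sketched as a threshold policy over run-time cache statistics: shrink (power-gate ways) while the hit rate shows slack, grow back when pressure returns. The specific thresholds and halving/doubling steps are assumptions for illustration; the abstract only says run-time operating parameters drive the resizing.

```python
def target_ways(active_ways, max_ways, hit_rate, high=0.95, low=0.85):
    """Return the number of cache ways that should remain powered.

    A high hit rate suggests the workload fits in less cache, so half
    the ways can be power-gated; a low hit rate brings ways back online
    so performance does not degrade.
    """
    if hit_rate >= high and active_ways > 1:
        return active_ways // 2                  # shrink: power-gate half
    if hit_rate < low and active_ways < max_ways:
        return min(max_ways, active_ways * 2)    # grow back under pressure
    return active_ways                           # steady state
```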
  • Patent number: 10541044
    Abstract: Providing efficient handling of memory array failures in processor-based systems is disclosed. In this regard, in one aspect, a memory controller of a processor-based device is configured to detect a defect within a memory element of a plurality of memory elements of a memory array. In response, a disable register of one or more disable registers is set to correspond to the memory element to indicate that the memory element is disabled. The memory controller receives a memory access request to a memory address corresponding to the memory element, and determines, based on one or more disable registers, whether the memory element is disabled. If so, the memory controller disallows the memory access request. Some aspects may provide that the memory controller, in response to detecting the defect, provides a failure indication to an executing process, and subsequently receives, from the executing process, a request to set the disable register.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: January 21, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Thomas Philip Speier, Viren Ramesh Patel, Michael Phan, Manish Garg, Kevin Magill, Paul Steinmetz, Clint Mumford, Kshitiz Saxena
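The disable-register mechanism can be sketched as a lookup performed on every access: a detected defect marks the containing memory element disabled, and subsequent requests to addresses in that element are disallowed. The element size and the set-based model of the registers are assumptions.

```python
ELEMENT_SIZE = 4096          # assumed size of one memory element

disabled_elements = set()    # stands in for the disable registers

def on_defect(address):
    """Record that the element containing this address is disabled."""
    disabled_elements.add(address // ELEMENT_SIZE)

def allow_access(address):
    """Return False for requests that map to a disabled element."""
    return (address // ELEMENT_SIZE) not in disabled_elements
```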
  • Patent number: 10503655
    Abstract: The described embodiments include a computing device that caches data acquired from a main memory in a high-bandwidth memory (HBM), the computing device including channels for accessing data stored in corresponding portions of the HBM. During operation, the computing device sets each of the channels so that data blocks stored in the corresponding portions of the HBM include corresponding numbers of cache lines. Based on records of accesses of cache lines in the HBM that were acquired from pages in the main memory, the computing device sets a data block size for each of the pages, the data block size being a number of cache lines. The computing device stores, in the HBM, data blocks acquired from each of the pages in the main memory using a channel having a data block size corresponding to the data block size for each of the pages.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: December 10, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Mitesh R. Meswani, Jee Ho Ryoo
  • Patent number: 10474486
    Abstract: Various systems, methods, and processes for accelerating data access in application and testing environments are disclosed. A production dataset is received from a storage system, and cached in a consolidated cache. The consolidated cache is implemented by an accelerator virtual machine. A file system client intercepts a request for the production dataset from one or more application virtual machines, and transmits the request to the accelerator virtual machine. The accelerator virtual machine serves the production dataset to the one or more application virtual machines from the consolidated cache.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: November 12, 2019
    Assignee: Veritas Technologies LLC
    Inventors: Chirag Dalal, Vaijayanti Bharadwaj
  • Patent number: 10437822
    Abstract: In one respect, there is provided a method. The method can include identifying, based on a plurality of queries executed at a distributed database, a disjoint table set. The identifying of the disjoint table set can include: identifying a first table used in executing a first query; identifying a second query also using the first table used in executing the first query; identifying a second table used in executing the second query but not in executing the first query; and including, in the disjoint table set, the first table and the second table. The method can further include allocating, based at least on the first disjoint table set, a storage and/or management of the first disjoint table set such that the first disjoint table set is stored at and/or managed by at least one node in the distributed database. Related systems and articles of manufacture are also disclosed.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: October 8, 2019
    Assignee: SAP SE
    Inventors: Antje Heinle, Hans-Joerg Leu
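The disjoint-table-set identification described above is naturally expressed with union-find: tables that ever appear together in a query are merged into one group, and the resulting groups are the disjoint sets that can be stored and managed by nodes independently. This is a sketch of that reading, not the patented method.

```python
def disjoint_table_sets(queries):
    """Group tables that co-occur in any query into disjoint sets.

    queries: iterable of lists of table names. Returns a list of
    frozensets, one per disjoint table set.
    """
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for tables in queries:
        for t in tables:
            find(t)                          # register every table
        for t in tables[1:]:
            parent[find(t)] = find(tables[0])  # merge with the first table

    groups = {}
    for t in parent:
        groups.setdefault(find(t), set()).add(t)
    return [frozenset(g) for g in groups.values()]
```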
  • Patent number: 10417139
    Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 17, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
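The two-list demotion scheme can be sketched with separate LRU lists for the fast and slow track types. The policy shown here (prefer demoting fast tracks unless slow tracks exceed an assumed share of the cache) is one plausible reading; the abstract leaves the selection rule abstract.

```python
from collections import OrderedDict

class TwoListCache:
    def __init__(self, slow_share=0.5):
        self.fast = OrderedDict()   # LRU order: oldest entry first
        self.slow = OrderedDict()
        self.slow_share = slow_share

    def access(self, track, is_slow):
        """Record an access, moving the track to the MRU end of its list."""
        lst = self.slow if is_slow else self.fast
        lst.pop(track, None)
        lst[track] = True

    def demote(self):
        """Pick a track to demote from one of the two lists."""
        total = len(self.fast) + len(self.slow)
        # Prefer cheap-to-restage fast tracks; fall back to slow tracks
        # when they exceed their share (or the fast list is empty).
        if self.fast and len(self.slow) <= total * self.slow_share:
            return self.fast.popitem(last=False)[0]
        return self.slow.popitem(last=False)[0]
```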