Partitioned Cache Patents (Class 711/129)
  • Patent number: 12207309
    Abstract: Provided are a data merging method and apparatus for physical random access channel (PRACH) data merging, and a storage medium. The method includes the following. A task parameter of current PRACH data merging is parsed. PRACH data of a to-be-merged antenna is read from a PRACH data cache. PRACH data merging between multiple antennas and/or PRACH data merging within a PRACH of a present antenna is performed according to the task parameter. Merged data is output to a shared cache.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 21, 2025
    Assignee: SANECHIPS TECHNOLOGY CO., LTD.
    Inventors: Dong Li, Wenyue Liu
  • Patent number: 12141159
    Abstract: Database environments may choose to schedule complex analytics processing to be performed by specialized processing environments by caching source datasets or other data needed for the analytics and then outputting results back to customer datasets. It is complex to schedule user database operations, such as running dataflows, recipes, scripts, rules, or the like that may rely on output from the analytics, if the user database operations are on one schedule, while the analytics is on another schedule. User/source datasets may become out of sync and one or both environments may operate on stale data. One way to resolve this problem is to define triggers that, for example, monitor for changes to datasets (or other items of interest) by analytics or other activity and automatically run dataflows, recipes, or the like that are related to the changed datasets (or other items of interest).
    Type: Grant
    Filed: April 25, 2023
    Date of Patent: November 12, 2024
    Assignee: Salesforce, Inc.
    Inventors: Keith Kelly, Ravishankar Arivazhagan, Wenwen Liao, Zhongtang Cai, Ali Sakr
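The trigger idea in this abstract reduces to a small sketch: remember the last-seen version of each dataset and, when analytics bumps a version, run the dataflows registered against that dataset. All names here (`DatasetTriggers`, `register`, `notify`) are illustrative, not from the patent.

```python
class DatasetTriggers:
    """Hypothetical sketch: fire dependent dataflows when a dataset changes."""

    def __init__(self):
        self.versions = {}   # dataset name -> last seen version
        self.triggers = {}   # dataset name -> list of dataflow callables

    def register(self, dataset, dataflow):
        """Associate a dataflow (any callable) with a dataset of interest."""
        self.triggers.setdefault(dataset, []).append(dataflow)

    def notify(self, dataset, version):
        """Called when analytics writes a dataset; runs dependent dataflows
        only if the version actually changed, avoiding stale-data reruns."""
        if self.versions.get(dataset) != version:
            self.versions[dataset] = version
            return [flow(dataset) for flow in self.triggers.get(dataset, [])]
        return []
```

In this toy model the two schedules never need to agree: the analytics side simply calls `notify` after it writes, and dependent dataflows run exactly once per new version.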
  • Patent number: 12079194
    Abstract: A database system stores a table as a set of column files in a columnar format in a manner that improves the write performance of the table and avoids the use of a separate metadata repository. In embodiments, each column file groups values into entity chunks indexed by an entity index. Each chunk includes a live value index that determines which rows in the chunk have live values. New values are written to the column file by appending an updated copy of the entity chunk. The entity index is updated to refer to the newly written chunk as the latest version. This approach avoids expensive in-place updating of individual column values and allows the update to be performed much more quickly. In embodiments, the database system encodes metadata such as table schema information using file naming and placement conventions in the file store, so that a centralized metadata repository is not required.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: September 3, 2024
    Inventors: Austin Lee, Vikram Jiandani
  • Patent number: 12079135
    Abstract: A memory controller includes logic circuitry to generate a first data address identifying a location in a first external memory array for storing first data, a first tag address identifying a location in a second external memory array for storing a first tag, a second data address identifying a location in the second external memory array for storing second data, and a second tag address identifying a location in the first external memory array for storing a second tag. The memory controller includes an interface that transfers the first data address and the first tag address for a first set of memory operations in the first and the second external memory arrays. The interface transfers the second data address and the second tag address for a second set of memory operations in the first and the second external memory arrays.
    Type: Grant
    Filed: June 14, 2023
    Date of Patent: September 3, 2024
    Assignee: Rambus Inc.
    Inventor: Frederick A. Ware
  • Patent number: 12066943
    Abstract: The present disclosure relates to the field of hardware chip design, and particularly to an alias processing method and system based on L1D and L2 caches, and a related device. A method for solving the alias problem of an L1D cache based on an L1D cache-L2 cache structure, together with a corresponding system module, is disclosed. The method can maximize hardware resource efficiency without limiting the chip structure, hardware system type, operating system compatibility, or chip performance; meanwhile, the cache-based module does not greatly increase the power consumption of the whole system, and thus has good expandability.
    Type: Grant
    Filed: November 20, 2023
    Date of Patent: August 20, 2024
    Assignee: Rivai Technologies (Shenzhen) Co., Ltd.
    Inventors: Muyang Liu, Rong Chen, Zhilei Yang
  • Patent number: 12001237
    Abstract: Systems, methods, and devices for performing pattern-based cache block compression and decompression. An uncompressed cache block is input to the compressor. Byte values are identified within the uncompressed cache block. A cache block pattern is searched for in a set of cache block patterns based on the byte values. A compressed cache block is output based on the byte values and the cache block pattern. A compressed cache block is input to the decompressor. A cache block pattern is identified based on metadata of the cache block. The cache block pattern is applied to a byte dictionary of the cache block. An uncompressed cache block is output based on the cache block pattern and the byte dictionary. A subset of cache block patterns is determined from a training cache trace based on a set of compressed sizes and a target number of patterns for each size.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: June 4, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Matthew Tomei, Shomit N. Das, David A. Wood
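The compression scheme this abstract describes can be sketched compactly: a block's bytes are reduced to a dictionary of distinct byte values plus an index pattern; if the pattern belongs to the trained set, the compressed block is just a pattern id (metadata) and the byte dictionary. This is a simplified illustration, not the patented encoder; the pattern tables are assumed inputs.

```python
def compress_block(block, patterns):
    """Compress `block` (bytes) if its byte-repetition pattern is known.
    `patterns` maps an index tuple -> pattern id (assumed trained offline)."""
    dictionary, indices = [], []
    for b in block:
        if b not in dictionary:
            dictionary.append(b)          # first occurrence -> dictionary slot
        indices.append(dictionary.index(b))
    pid = patterns.get(tuple(indices))
    if pid is None:
        return None                       # pattern not in trained set: store raw
    return pid, bytes(dictionary)         # metadata + byte dictionary

def decompress_block(pid, dictionary, patterns_by_id):
    """Reapply the pattern identified by metadata to the byte dictionary."""
    return bytes(dictionary[i] for i in patterns_by_id[pid])
```

The training step in the abstract (choosing a subset of patterns per target compressed size from a cache trace) would populate `patterns` with the most frequent index tuples.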
  • Patent number: 11983117
    Abstract: The embodiments herein describe a multi-tenant cache that implements fine-grained allocation of the entries within the cache. Each entry in the cache can be allocated to a particular tenant—i.e., fine-grained allocation—rather than having to assign all the entries in a way to a particular tenant. If the tenant does not currently need those entries (which can be tracked using counters), the entries can be invalidated (i.e., deallocated) and assigned to another tenant. Thus, fine-grained allocation provides a flexible allocation of entries in a hardware cache that permits an administrator to reserve any number of entries for a particular tenant, but also permit other tenants to use this bandwidth when the reserved entries are not currently needed by the tenant.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: May 14, 2024
    Assignee: XILINX, INC.
    Inventors: Millind Mittal, Jaideep Dastidar
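Fine-grained allocation as described above can be modeled with two counters per tenant: entries still held in reserve, and a shared pool other tenants may borrow from. The class and method names below are assumptions for illustration; real hardware would track this with counters per way/entry.

```python
class MultiTenantCache:
    """Sketch: per-entry tenant allocation with reservations."""

    def __init__(self, num_entries, reservations):
        self.free = num_entries - sum(reservations.values())  # shared pool
        self.reserved = dict(reservations)  # tenant -> entries still reserved
        self.owner = {}                     # entry key -> (tenant, source)

    def allocate(self, tenant, key):
        if self.reserved.get(tenant, 0) > 0:
            self.reserved[tenant] -= 1      # spend the tenant's reservation
            self.owner[key] = (tenant, "reserved")
        elif self.free > 0:
            self.free -= 1                  # borrow from the shared pool
            self.owner[key] = (tenant, "shared")
        else:
            return False                    # nothing reserved, nothing free
        return True

    def invalidate(self, key):
        tenant, source = self.owner.pop(key)
        if source == "reserved":
            self.reserved[tenant] += 1      # entry returns to its reservation
        else:
            self.free += 1
```

Note how a tenant with no reservation can still use idle capacity, matching the abstract's point that reserved-but-unneeded entries remain usable by others.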
  • Patent number: 11940914
    Abstract: Aspects of the present disclosure relate to systems and methods for improving performance of a partial cache collapse by a processing device. Certain embodiments provide a method for performing a partial cache collapse procedure, the method including: counting a number of cache lines that satisfy an eviction criteria based on a deterministic cache eviction policy in each cache way of a group of cache ways; selecting at least one cache way from the group for collapse, based on its corresponding number of cache lines that satisfy the eviction criteria; and performing the partial cache collapse procedure based on the at least one cache way selected from the group for collapse.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: March 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Hithesh Hassan Lepaksha, Sharath Kumar Nagilla, Darshan Kumar Nandanwar, Nirav Narendra Desai, Venkata Biswanath Devarasetty
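The way-selection step of the abstract has a direct sketch: score each way by how many of its lines already satisfy the eviction criteria, then collapse the ways with the highest scores (they cost the least to flush). Function and parameter names are illustrative.

```python
def select_ways_to_collapse(ways, evictable, k=1):
    """ways: list of lists of cache lines; evictable: predicate standing in
    for the deterministic eviction policy. Returns indices of the k ways
    with the most eviction-eligible lines."""
    scores = [sum(1 for line in way if evictable(line)) for way in ways]
    order = sorted(range(len(ways)), key=lambda i: scores[i], reverse=True)
    return order[:k]
```

A partial collapse procedure would then write back only the dirty non-evictable lines of the selected ways before powering them down.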
  • Patent number: 11880312
    Abstract: A method includes storing a function representing a set of data elements stored in a backing memory and, in response to a first memory read request for a first data element of the set of data elements, calculating a function result representing the first data element based on the function.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Kishore Punniyamurthy, SeyedMohammad SeyedzadehDelcheh, Sergey Blagodurov, Ganesh Dasika, Jagadish B Kotra
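The core idea above, serving reads by evaluating a stored function instead of fetching stored bytes, can be sketched in a few lines. The class and method names are hypothetical; the patent targets hardware, while this is a software analogy.

```python
class FunctionBackedMemory:
    """Sketch: regions of backing memory described by a function of the
    offset rather than by materialized data."""

    def __init__(self):
        self.regions = []   # (start, length, fn) with fn: offset -> value

    def map_function(self, start, length, fn):
        self.regions.append((start, length, fn))

    def read(self, addr):
        for start, length, fn in self.regions:
            if start <= addr < start + length:
                return fn(addr - start)   # calculate instead of fetching
        raise KeyError(addr)
```

For regular data (zero fills, strides, ramps) this trades a small computation for the entire storage cost of the region.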
  • Patent number: 11868256
    Abstract: Processing a read request to read metadata from an entry of a metadata page may include: determining whether the metadata page is cached; responsive to determining the metadata page is cached, obtaining the requested metadata from the cached metadata page; responsive to determining the metadata page is not cached, determining whether the requested metadata is in a metadata log of metadata changes stored in a volatile memory; and responsive to determining the metadata is in the metadata log of metadata changes stored in the volatile memory, obtaining the requested metadata from the metadata log. Processing a write request that overwrites an existing value of a metadata page with an updated value may include: recording a metadata change in the metadata log that indicates to update the metadata page with the updated value; and performing additional processing during destaging that uses the existing value prior to overwriting it with the updated value.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: January 9, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Philip Love, Vladimir Shveidel, Bar David
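The read path in this abstract is a three-level lookup: cached page, then the in-memory metadata log (which holds the newest values for not-yet-destaged changes), then the backing store. A minimal sketch, with all helper names (`page_cache`, `md_log`, `load_page`) assumed for illustration:

```python
def read_metadata(page_id, offset, page_cache, md_log, load_page):
    """Sketch of the described read path: cached metadata page ->
    in-memory metadata log -> load the page from backing storage."""
    page = page_cache.get(page_id)
    if page is not None:
        return page[offset]                # fast path: page is cached
    delta = md_log.get((page_id, offset))
    if delta is not None:
        return delta                       # newest value lives in the log
    page = load_page(page_id)              # miss both: read from storage
    page_cache[page_id] = page             # populate the cache for next time
    return page[offset]
```

Checking the log before loading the page is what lets writes stay cheap: a write only appends to the log, and destaging reconciles it with the on-disk page later.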
  • Patent number: 11803473
    Abstract: Systems and techniques for dynamic selection of policy that determines whether copies of shared cache lines in a processor core complex are to be stored and maintained in a level 3 (L3) cache of the processor core complex are based on one or more cache line sharing parameters or based on a counter that tracks L3 cache misses and cache-to-cache (C2C) transfers in the processor core complex, according to various embodiments. Shared cache lines are shared between processor cores or between threads. By comparing either the cache line sharing parameters or the counter to corresponding thresholds, a policy is set which defines whether copies of shared cache lines at such indices are to be retained in the L3 cache.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kelley, Paul Moyer
  • Patent number: 11768767
    Abstract: A query for opaque objects stored within a data store and not currently cached within a cache is received from an application. The query to the data store is passed to the data store, and a handle to memory location is received from the data store at which the data store has temporarily stored a message including the opaque objects. The opaque objects are added to the cache by treating the memory location as a cache entry for the opaque objects. Cache metadata for the cache entry is generated, and along with the handle is stored within a metadata cache entry of a metadata cache structure separate from the message. The handle and the cache metadata can be returned to the application, where the cache metadata can be returned as an opaque context of the cache entry.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: September 26, 2023
    Assignee: MICRO FOCUS LLC
    Inventor: Michael Wojcik
  • Patent number: 11762581
    Abstract: A method, device, and system for controlling a data read/write command in an NVMe over fabric architecture. In the method provided in the embodiments of the present disclosure, a data processing unit receives a control command sent by a control device, the data processing unit divides a storage space of a buffer unit into at least two storage spaces according to the control command sent by the control device, and establishes a correspondence between the at least two storage spaces and command queues, and after receiving a first data read/write command that is in a first command queue and that is sent by the control device, the data processing unit buffers, in a storage space that is of the buffer unit and that is corresponding to the first command queue, data to be transmitted according to the first data read/write command.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: September 19, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Victor Gissin, Xin Qiu, Pei Wu, Huichun Qu, Jinbin Zhang
  • Patent number: 11734276
    Abstract: Embodiments of the present application provide a method and apparatus for updating search cache, which relate to the technical field of multimedia.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: August 22, 2023
    Assignee: BEIJING QIYI CENTURY SCIENCE & TECHNOLOGY CO., LTD.
    Inventors: Hongpeng Wang, Aiyun Chen, Ting Yao
  • Patent number: 11714784
    Abstract: Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: August 1, 2023
    Assignee: Oracle International Corporation
    Inventors: Mark Maybee, James Kremer, Ankit Gureja, Kimberly Morneau
  • Patent number: 11693778
    Abstract: A method includes monitoring one or more metrics for each of a plurality of cache users sharing a cache, and assigning each of the plurality of cache users to one of a plurality of groups based on the monitored one or more metrics.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: July 4, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: John Kelley
  • Patent number: 11675803
    Abstract: Database environments may choose to schedule complex analytics processing to be performed by specialized processing environments by caching source datasets or other data needed for the analytics and then outputting results back to customer datasets. It is complex to schedule user database operations, such as running dataflows, recipes, scripts, rules, or the like that may rely on output from the analytics, if the user database operations are on one schedule, while the analytics is on another schedule. User/source datasets may become out of sync and one or both environments may operate on stale data. One way to resolve this problem is to define triggers that, for example, monitor for changes to datasets (or other items of interest) by analytics or other activity and automatically run dataflows, recipes, or the like that are related to the changed datasets (or other items of interest).
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: June 13, 2023
    Assignee: SALESFORCE, INC.
    Inventors: Keith Kelly, Ravishankar Arivazhagan, Wenwen Liao, Zhongtang Cai, Ali Sakr
  • Patent number: 11652685
    Abstract: Embodiments operate a multi-tenant cloud system. At a first data center, embodiments authenticate a first client corresponding to a first tenant ID and store resources that correspond to the first client, the first data center in communication with a second data center that is configured to authenticate the first client and replicate the resources. The first data center receives an Application Programming Interface (“API”) request for the first client corresponding to a change to the resources, and generates a change log and corresponding change event message in response to the API request. Embodiments compute a first hash corresponding to the first tenant ID of the change log to determine a first partition of a first queue at the first data center. The first data center pushes the change event message to the second data center via an API call.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: May 16, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Venkateswara Reddy Medam, Fannie Ho, Kuang-Yu Shih, Balakumar Balu, Sudhir Kumar Srinivasan
  • Patent number: 11645198
    Abstract: A method of managing a storage system comprises detecting a reference to a first page in the storage system. The method also comprises creating a first candidate block for the first page based on the detecting. The first candidate block may comprise a continuous series of pages that begins with the first page. The method also comprises monitoring subsequent references to pages within the first candidate block. The method also comprises determining that the first candidate block meets a first set of hot-block requirements. The method also comprises relocating the first candidate block to a hot-block space in a buffer pool based on the determining, resulting in a first hot block.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shuo Li, Xiaobo Wang, Sheng Yan Sun, Hong Mei Zhang
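The candidate-block lifecycle above (detect a reference, form a candidate run of pages, count subsequent references, promote when the hot-block requirement is met) can be sketched as follows. For simplicity this sketch uses fixed-size aligned blocks and a bare reference-count threshold; the patent's candidate begins at the referenced page and its requirements may be richer.

```python
class HotBlockTracker:
    """Sketch: promote contiguous page runs to a hot-block space once they
    accumulate enough references."""

    def __init__(self, block_len=4, hot_after=3):
        self.block_len = block_len      # pages per candidate block
        self.hot_after = hot_after      # the hot-block requirement (assumed)
        self.candidates = {}            # block start page -> reference count
        self.hot = set()                # blocks relocated to hot-block space

    def reference(self, page):
        start = page - page % self.block_len     # candidate containing page
        self.candidates[start] = self.candidates.get(start, 0) + 1
        if self.candidates[start] >= self.hot_after:
            self.hot.add(start)                  # relocate to hot-block space
```

Relocating a whole hot block into a dedicated buffer-pool region keeps its pages resident together instead of competing page-by-page with cold data.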
  • Patent number: 11625327
    Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory and their respective cache slot pools are dynamically balanced. Each of the mirrored/un-mirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the one or more cache slots of the distinct size based on the anticipated I/O workloads. Cache pools can be further assigned the amount of distinctly sized cache slots based on expected service levels (SLs) of a customer. Cache pools can also be assigned the amount of the distinctly sized cache slots based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes of the anticipated I/O workloads.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: April 11, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
  • Patent number: 11403232
    Abstract: One example method includes determining a fall through threshold value for a cache, computing a length ‘s’ of a sequence that is close to LRU eviction, and the length ‘s’ is computed when a current fall through metric value is greater than the fall through threshold value, when the sequence length ‘s’ is greater than a predetermined threshold length ‘k,’ performing a first shift of an LRU position to define a protected queue of the cache, initializing a counter with a value of ‘r’, decrementing the counter each time a requested page is determined to be included in the protected queue, until ‘r’=0, and performing a second shift of the LRU position.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: August 2, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Hugo De Oliveira Barbalho, Jonas F. Dias, Vinicius Michel Gottin
  • Patent number: 11397683
    Abstract: Systems and methods are disclosed including a first memory device, a second memory device coupled to the first memory device, where the second memory device has a lower access latency than the first memory device and acts as a cache for the first memory device. A processing device operatively coupled to the first and second memory devices can track access statistics of segments of data stored at the second memory device, the segments having a first granularity, and determine to update, based on the access statistics, a segment of data stored at the second memory device from the first granularity to a second granularity. The processing device can further retrieve additional data associated with the segment of data from the first memory device and store the additional data at the second memory device to form a new segment having the second granularity.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: July 26, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Horia C. Simionescu, Paul Stonelake, Chung Kuang Chin, Narasimhulu Dharanikumar Kotte, Robert M. Walker, Cagdas Dirik
  • Patent number: 11393065
    Abstract: A mechanism is described for facilitating dynamic cache allocation in computing devices. A method of embodiments, as described herein, includes facilitating monitoring one or more bandwidth consumptions of one or more clients accessing a cache associated with a processor; computing one or more bandwidth requirements of the one or more clients based on the one or more bandwidth consumptions; and allocating one or more portions of the cache to the one or more clients in accordance with the one or more bandwidth requirements.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Kiran C. Veernapu, Mohammed Tameem, Altug Koker, Abhishek R. Appu
  • Patent number: 11381889
    Abstract: An appliance management system manages information about appliances. The system includes an information collection system and an information transmission apparatus. The information collection system includes a database storing the information about the appliances, and first and second reception processing units that write information about the appliances that has been received to the database. The information transmission apparatus includes a transmission unit that transmits the information about the appliances to the first or second reception processing unit via a communication line, and a transmission information creation unit that creates transmission information as the information about the appliances. The transmission information creation unit creates as the transmission information to be sent to the first and second reception processing units, first and second transmission information from first and second type information about the appliances, respectively.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: July 5, 2022
    Assignee: Daikin Industries, Ltd.
    Inventors: Shuuji Furukawa, Kenta Nohara, Gou Nakatsuka
  • Patent number: 11381657
    Abstract: A computer system is provided. The computer system can include a memory, a network interface, and at least one processor coupled to the memory and the network interface. The at least one processor can be configured to identify a file to provide to a computing device; predict a geolocation at which the computing device is to request access to the file; predict a network bandwidth to be available to the computing device at the geolocation; determine, based on the file and the network bandwidth, a first portion of the file to store in a cache of the computing device; and download, via the network interface, the first portion of the file to the cache.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: July 5, 2022
    Assignee: Citrix Systems, Inc.
    Inventors: Praveen Raja Dhanabalan, Anudeep Narasimhaprasad Athlur, Nandikotkur Achyuth
  • Patent number: 11372761
    Abstract: A method for dynamically adjusting cache memory partition sizes within a storage system includes computing a read hit ratio for data accessed in each cache partition and an average read hit ratio for all the cache partitions over a time interval. The cache memory includes a higher performance portion (DRAM) and lower performance portion (SCM). The method increases or decreases the partition size for each cache partition by comparing the read hit ratio for the partition to the average read hit ratio for all the partitions. Each cache partition includes maximum and minimum partition sizes, and read hit and read access counters. The SCM portion of the cache memory includes cache partitions reserved for storing data of a specific type, or data used for a specific purpose or with a specific software application. A corresponding storage controller and computer program product are also disclosed.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: June 28, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Kyler A. Anderson, Matthew G. Borlick
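The adjustment rule in this abstract compares each partition's read-hit ratio to the average across all partitions, growing winners and shrinking losers within per-partition bounds. A simplified sketch (dictionary layout and step size are assumptions, not the patented controller):

```python
def rebalance_partitions(parts, step=1):
    """parts: {name: {"size", "hits", "accesses", "min", "max"}}.
    Grow partitions beating the average read-hit ratio; shrink the rest,
    clamped to each partition's configured min/max size."""
    ratios = {n: p["hits"] / max(p["accesses"], 1) for n, p in parts.items()}
    avg = sum(ratios.values()) / len(ratios)
    for n, p in parts.items():
        if ratios[n] > avg:
            p["size"] = min(p["size"] + step, p["max"])
        elif ratios[n] < avg:
            p["size"] = max(p["size"] - step, p["min"])
```

Run once per time interval, after which the hit/access counters would be reset; reserving some SCM partitions for specific data types, as the abstract notes, just means excluding them from this loop.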
  • Patent number: 11372557
    Abstract: A data storage array is configured for m-way resiliency across a first plurality of storage nodes. The m-way resiliency causes the data storage array to direct each top-level write to at least m storage nodes within the first plurality, for committing data to a corresponding capacity region allocated on each storage node to which each write operation is directed. Based on the data storage array being configured for m-way resiliency, an extra-resilient cache is allocated across a second plurality of storage nodes comprising at least s storage nodes (where s > m), including allocating a corresponding cache region on each of the second plurality for use by the extra-resilient cache. Based on determining that a particular top-level write has not been acknowledged by at least n of the first plurality of storage nodes (where n ≤ m), the particular top-level write is redirected to the extra-resilient cache.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Taylor Alan Hope, Vinod R Shankar, Justin Sing Tong Cheung
  • Patent number: 11349948
    Abstract: The invention relates to a computer-implemented method, a corresponding computer program product and a corresponding apparatus for distributing cached content in a network, the computer-implemented method comprising: collecting statistics regarding requests made and paths taken by the requests from source nodes to server nodes via intermediate nodes, the source nodes, intermediate nodes, and server nodes interconnected by edges having queues with respective queue sizes associated therewith, the requests including indications of content items to be retrieved; storing the content items at the server nodes; caching, by the intermediate nodes, the content items up to a caching capacity; and performing caching decisions that determine which of the content items are to be cached at which of the intermediate nodes, based upon costs that are monotonic, non-decreasing functions of the sizes of the queues.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 31, 2022
    Assignee: Northeastern University
    Inventors: Milad Mahdian, Armin Moharrer, Efstratios Ioannidis, Edmund Meng Yeh
  • Patent number: 11336742
    Abstract: Systems, methods, apparatuses, and computer readable media may be configured for improved predictive content caching. A system may determine a value that is a function of one or more rates at which a portion of a content item is being consumed and based on this value, may also calculate a projected position after a predetermined time period. By comparing the projected position to a dynamically adjustable threshold position for requesting a new portion of the content item, a determination may be made as to when to retrieve and/or cache a new portion of the content item.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 17, 2022
    Assignee: Comcast Cable Communications, LLC
    Inventor: Warren Wong
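The prediction in this abstract is a one-line projection: extrapolate the playback position forward at the observed consumption rate and fetch the next portion once the projection crosses the (dynamically adjustable) threshold position. A minimal sketch; the function and parameter names are illustrative.

```python
def should_fetch_next(position, rate, lookahead_s, threshold):
    """Project the consumption position `lookahead_s` seconds ahead at the
    observed rate; return True once the projection reaches the threshold
    position for requesting the next portion of the content item."""
    projected = position + rate * lookahead_s
    return projected >= threshold
```

Adjusting `threshold` dynamically (e.g. pulling it earlier on slow networks) is what lets the cache stay ahead of fast seeking without over-fetching during normal playback.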
  • Patent number: 11300992
    Abstract: Methods and systems for implementing independent time in a hosted operating environment are disclosed. The hosted, or guest, operating environment, can be seeded with a guest time value by a guest operating environment manager that maintains a time delta between a host clock time and an enterprise time. The guest operating environment can subsequently manage its guest clock from the guest time value. If the guest operating environment is halted, the guest operating environment manager can manage correspondence between the host clock time and the enterprise time by periodically assessing divergence between actual and expected values of the host clock time.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: April 12, 2022
    Assignee: Unisys Corporation
    Inventors: Robert F. Inforzato, Dwayne E. Ebersole, Daryl R. Smith, Grace W. Lin, Andrew Ward Beale, Loren C. Wilton
  • Patent number: 11256625
    Abstract: Memory transactions can be tagged with a partition identifier selected depending on which software execution environment caused the memory transaction to be issued. A memory system component can control allocation of resources for handling the memory transaction or manage contention for said resources depending on a selected set of memory system component parameters selected depending on the partition identifier specified by the memory transaction, or can control, depending on the partition identifier specified by the memory transaction, whether performance monitoring data is updated in response to the memory transaction. Page table walk memory transactions may be assigned a different partition identifier to the partition identifier assigned to the corresponding data/instruction access memory transaction.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: February 22, 2022
    Assignee: Arm Limited
    Inventor: Steven Douglas Krueger
  • Patent number: 11256620
    Abstract: Systems and methods are disclosed that include a memory device and a processing device coupled to the memory device. The processing device can determine an amount of valid blocks in a memory device of a memory sub-system. The processing device can then determine a surplus amount of valid blocks on the memory device based on the amount of valid blocks. The processing device can then configure a size of a cache of the memory device based on the surplus amount of valid blocks.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 22, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kevin R. Brandt, Peter Feeley, Kishore Kumar Muchherla, Yun Li, Sampath K. Ratnam, Ashutosh Malshe, Christopher S. Hale, Daniel J. Hubbard
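The sizing rule above amounts to: whatever valid blocks exceed the amount needed to satisfy the device's advertised capacity are surplus, and the cache can be sized from that surplus up to some cap. A toy sketch with assumed names and an assumed cap parameter:

```python
def cache_size_from_surplus(valid_blocks, needed_for_capacity, max_cache):
    """Sketch: spend valid blocks beyond the capacity requirement on the
    device cache, clamped to a maximum cache size (all inputs in blocks)."""
    surplus = max(valid_blocks - needed_for_capacity, 0)
    return min(surplus, max_cache)
```

On real NAND devices the surplus shrinks over the device's life as blocks wear out, so this computation would be repeated periodically and the cache resized downward accordingly.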
  • Patent number: 11234151
    Abstract: There is provided a method in a first device of a cellular communication system, the method comprising: acquiring a first value of a performance indicator; causing a transmission of management plane performance data to a second device of the cellular communication system, said performance data comprising said first value; acquiring a second value of the performance indicator; and preventing a transmission of the second value if the second value is substantially equal to the first value.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: January 25, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Kimmo Kalervo Hatonen, Shubham Kapoor, Ville Matti Kojola, Sasu Tarkoma
  • Patent number: 11231949
    Abstract: Disclosed are embodiments for migrating a virtual machine (VM) from a source host to a destination host while the virtual machine is running on the destination host. The system includes an RDMA facility connected between the source and destination hosts and a device coupled to a local memory, the local memory being responsible for memory pages of the VM instead of the source host. The device is configured to copy pages of the VM to the destination host and to maintain correct operation of the VM by monitoring coherence events, such as a cache miss, caused by the virtual machine running on the destination host. The device services these cache misses using the RDMA facility and copies the cache line satisfying the cache miss to the CPU running the VM. The device also tracks the cache misses to create an access pattern that it uses to predict future cache misses.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: January 25, 2022
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Pratap Subrahmanyam
  • Patent number: 11210231
    Abstract: Techniques for performing cache management include partitioning entries of a hash table into a plurality of buckets, wherein each of the buckets includes a portion of the entries of the hash table, configuring a cache, wherein the configuring includes allocating a section of the cache for exclusive use by each bucket, and performing first processing that stores a data block in the cache. The first processing includes determining a hash value for a data block, selecting, in accordance with the hash value, a first bucket of the plurality of buckets, wherein a first section of the cache is used exclusively for storing cached data blocks of the first bucket, storing metadata used in connection with caching the data block in a first entry of the first bucket, and storing the data block in a first cache location of the first section of the cache.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: December 28, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Anton Kucherov, Ronen Gazit, Vladimir Shveidel, Uri Shabi
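The bucketed layout above (hash of the block selects a bucket, and each bucket owns an exclusive section of the cache) can be sketched like this; the bucket count, section size, and SHA-256 choice are illustrative assumptions, not EMC's implementation:

```python
# Sketch: hash-table entries partitioned into buckets, each bucket with an
# exclusive contiguous section of the cache.

import hashlib

NUM_BUCKETS = 8
SLOTS_PER_BUCKET = 4   # cache locations reserved per bucket

def bucket_for(block_key: bytes) -> int:
    """Select a bucket from the hash of the data block's key."""
    digest = hashlib.sha256(block_key).digest()
    return digest[0] % NUM_BUCKETS

def cache_slot(block_key: bytes, slot_in_bucket: int) -> int:
    """Map (bucket, slot) to a global cache index inside that bucket's section."""
    bucket = bucket_for(block_key)
    return bucket * SLOTS_PER_BUCKET + slot_in_bucket
```

Because sections never overlap, eviction and locking can be done per bucket rather than cache-wide.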
  • Patent number: 11176041
    Abstract: A method for cache coherency in a reconfigurable cache architecture is provided. The method includes receiving a memory access command, wherein the memory access command includes at least an address of a memory to access; determining at least one access parameter based on the memory access command; and determining a target cache bin for serving the memory access command based in part on the at least one access parameter and the address.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: November 16, 2021
    Assignee: Next Silicon Ltd.
    Inventor: Elad Raz
  • Patent number: 11151005
    Abstract: A method, computer program product, and computing system for writing, from a first node to a second node, a first portion of data from a memory pool in the first node defined by, at least in part, a first pointer. One or more input/output (IO) operations may be received while writing the first portion of data to the second node. Data from the one or more IO operations may be stored within the memory pool after the first pointer.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: October 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Bar David, Vladimir Shveidel
  • Patent number: 11150962
    Abstract: A technique is introduced for intercepting memory calls from a user-space application and applying an allocation policy to determine whether such calls are handled using volatile memory such as dynamic random-access memory (DRAM) or persistent memory (PMEM). In an example embodiment, memory calls from an application are intercepted by a memory allocation capture library. Such calls may be to a memory function such as malloc( ) and may be configured to cause a portion of DRAM to be allocated to the application to process a task. The memory allocation capture library then determines whether the intercepted call satisfies capture criteria associated with an allocation policy. If the intercepted call does satisfy the capture criteria, the call is processed to cause a portion of PMEM to be allocated to the application instead of DRAM.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: October 19, 2021
    Assignee: MemVerge, Inc.
    Inventors: Ronald S. Niles, Yue Li
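The interception pattern above is normally built as a C interposer library around `malloc()`; a minimal sketch of the policy decision itself, in Python, with a hypothetical size threshold as the capture criterion:

```python
# Sketch of an allocation policy: redirect allocations that satisfy the
# capture criteria (here, a size threshold) to PMEM instead of DRAM.

def make_allocator(dram_alloc, pmem_alloc, capture_threshold=1 << 20):
    """Return an allocator that redirects large requests to PMEM."""
    def alloc(size: int):
        if size >= capture_threshold:      # capture criteria satisfied
            return pmem_alloc(size)        # place in persistent memory
        return dram_alloc(size)            # default: volatile DRAM
    return alloc
```

In the real mechanism the application is unchanged; the capture library is interposed (e.g. via the dynamic linker) so its `alloc` sees every call.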
  • Patent number: 11138123
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: October 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
  • Patent number: 11119981
    Abstract: In one example, a method may include receiving a write operation corresponding to a portion of a data chunk stored at a first storage location in a write-in-place file system. The write-in-place file system may include encoded data chunks and unencoded data chunks. The method may include determining whether the data chunk is an encoded data chunk based on metadata associated with the data chunk, modifying the data chunk based on the write operation, and selectively performing a redirect-on-write operation on the modified data chunk based on the determination.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: September 14, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Shyamalendu Sarkar, Sri Satya Sudhanva Kambhammettu, Narayanan Ananthakrishnan Nellayi, Naveen B
  • Patent number: 11113302
    Abstract: Database environments may choose to schedule complex analytics processing to be performed by specialized processing environments by caching source datasets or other data needed for the analytics and then outputting results back to customer datasets. It is complex to schedule user database operations, such as running dataflows, recipes, scripts, rules, or the like that may rely on output from the analytics, if the user database operations are on one schedule, while the analytics is on another schedule. User/source datasets may become out of sync and one or both environments may operate on stale data. One way to resolve this problem is to define triggers that, for example, monitor for changes to datasets (or other items of interest) by analytics or other activity and automatically run dataflows, recipes, or the like that are related to the changed datasets (or other items of interest).
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: September 7, 2021
    Assignee: SALESFORCE.COM, INC.
    Inventors: Keith Kelly, Ravishankar Arivazhagan, Wenwen Liao, Zhongtang Cai, Ali Sakr
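The trigger idea above (monitor datasets for changes and automatically run related dataflows) can be sketched with a simple version-polling model; the version counter and method names are illustrative assumptions:

```python
# Sketch: run registered dataflows when a monitored dataset's version
# changes, instead of running them on an independent schedule.

class DatasetTriggers:
    def __init__(self):
        self._seen = {}        # dataset -> last observed version
        self._actions = {}     # dataset -> dataflows to run on change

    def on_change(self, dataset: str, action):
        self._actions.setdefault(dataset, []).append(action)

    def poll(self, versions: dict) -> list:
        """Run actions for datasets whose version changed; return those datasets."""
        ran = []
        for ds, ver in versions.items():
            if self._seen.get(ds) != ver:
                self._seen[ds] = ver
                for action in self._actions.get(ds, []):
                    action()
                ran.append(ds) if ds in self._actions else None
        return [d for d in ran if d]
```

Wiring the dataflow to the dataset's change event, rather than to a clock, is what prevents the two environments from drifting onto stale data.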
  • Patent number: 11086793
    Abstract: Techniques for cache management may include: partitioning a cache into buckets of cache pages, wherein each bucket has an associated cache page size and each bucket includes cache pages of the associated cache page size for that bucket, wherein the cache includes compressed pages of data and uncompressed pages of data; and performing processing that stores a first page of data in the cache. The processing may include storing the first page of data in a first cache page of a selected bucket having a first associated cache page size determined in accordance with a first compressed size of the first page of data. The cache may be repartitioned among the buckets based on associated access frequencies of the buckets of cache pages.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: August 10, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Anton Kucherov, David Meiri
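The size-bucketed scheme above stores each compressed page in a bucket whose page size fits it. A minimal sketch, with illustrative bucket sizes (the patent does not specify these):

```python
# Sketch: pick the smallest bucket whose cache-page size fits the
# compressed page, so small pages don't waste full-size cache pages.

BUCKET_PAGE_SIZES = [1024, 2048, 4096, 8192]   # bytes, ascending (assumed)

def select_bucket(compressed_size: int) -> int:
    """Return the page size of the smallest bucket that fits the page."""
    for page_size in BUCKET_PAGE_SIZES:
        if compressed_size <= page_size:
            return page_size
    return BUCKET_PAGE_SIZES[-1]   # incompressible pages go to the largest bucket
```

Repartitioning by access frequency, as the abstract describes, would then grow the buckets whose pages are hit most often.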
  • Patent number: 11074197
    Abstract: A computational device receives indications of a minimum retention time and a maximum retention time in cache for a first plurality of tracks, wherein no indications of a minimum retention time or a maximum retention time in the cache are received for a second plurality of tracks. A cache management application demotes a track of the first plurality of tracks from the cache, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the cache and the track has been in the cache for a time that exceeds the minimum retention time. The cache management application demotes the track of the first plurality of tracks, in response to determining that the track has been in the cache for a time that exceeds the maximum retention time.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 27, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Joseph Hayward, Kyler A. Anderson, Matthew G. Borlick
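The two demotion rules above compose simply: a track past its maximum retention is demoted unconditionally, while an LRU track is demoted only after its minimum retention has elapsed. A sketch of that predicate (ages in seconds, names assumed):

```python
# Sketch of the min/max retention demotion rules from the abstract.

def should_demote(age: float, is_lru: bool,
                  min_retention: float, max_retention: float) -> bool:
    if age > max_retention:                 # expired: demote unconditionally
        return True
    return is_lru and age > min_retention   # LRU track kept until min retention
```

Tracks with no retention hints would fall back to plain LRU demotion.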
  • Patent number: 11061816
    Abstract: Techniques are provided for computer memory mapping and allocation. In an example, a virtual memory address space is divided into an active half and a passive half. Processors make memory allocations to their respective portions of the active half until one processor has made a determined number of allocations. When that occurs, and when all memory in the passive half that has been allocated has been returned, then the active and passive halves are switched, and all processors are switched to making allocations in the newly-active half.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 13, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: Max Laier
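The ping-pong scheme above flips the two halves only when the allocation quota is reached and every outstanding allocation in the passive half has been freed. A single-threaded sketch (the quota and bookkeeping are illustrative; the patent's version is per-processor):

```python
# Sketch: allocate from the active half; flip halves once the quota is hit
# and the passive half has no outstanding allocations.

class HalfAndHalf:
    def __init__(self, quota: int):
        self.quota = quota
        self.active = 0                    # index of the active half
        self.allocs = [0, 0]               # outstanding allocations per half
        self.made = 0                      # allocations made in active half

    def allocate(self) -> int:
        half = self.active
        self.allocs[half] += 1
        self.made += 1
        self._maybe_flip()
        return half

    def free(self, half: int):
        self.allocs[half] -= 1
        self._maybe_flip()

    def _maybe_flip(self):
        passive = 1 - self.active
        if self.made >= self.quota and self.allocs[passive] == 0:
            self.active, self.made = passive, 0
```

Waiting for the passive half to drain is what lets each half be reclaimed wholesale instead of tracking individual frees in the hot path.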
  • Patent number: 11042483
    Abstract: A computer system includes a cache and processor. The cache includes a plurality of data compartments configured to store data. The data compartments are arranged as a plurality of data rows and a plurality of data columns. Each data row is defined by an addressable index. The processor is in signal communication with the cache, and is configured to operate in a full cache purge mode and a selective cache purge mode. In response to invoking one or both of the full cache purge mode and the selective cache purge mode, the processor performs a pipe pass on a selected addressable index to determine a number of valid compartments and a number of invalid compartments, and performs an eviction operation on the valid compartments while skipping the eviction operation on the invalid compartments.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: June 22, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Deanna P. D. Berger, Vesselina Papazova
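The selective purge above saves work by evicting only the valid compartments found in a pipe pass over one addressable index. A sketch with a row modeled as a list, `None` marking an invalid compartment (data structures assumed):

```python
# Sketch: one pass over a cache row evicts valid compartments and skips
# invalid ones, avoiding needless eviction operations.

def purge_row(row: list) -> int:
    """Evict valid compartments in a row; return the number evicted."""
    evicted = 0
    for i, compartment in enumerate(row):
        if compartment is not None:    # valid compartment: evict it
            row[i] = None
            evicted += 1
        # invalid compartments are skipped entirely
    return evicted
```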
  • Patent number: 11023383
    Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: June 1, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
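The abstract leaves the demote-choice criterion open; one plausible policy, sketched here purely as an illustration (not IBM's claimed method), keeps the fast-track list near a target share of the cache:

```python
# Illustrative sketch: choose which of two track lists (fast-completing vs
# slow-completing I/O) to demote from, based on their relative sizes.

def choose_demote_list(fast_len: int, slow_len: int,
                       target_ratio: float = 0.5):
    """Return 'fast' or 'slow', or None if both lists are empty."""
    total = fast_len + slow_len
    if total == 0:
        return None
    if fast_len / total > target_ratio:
        return "fast"       # fast list over its share: demote from it
    return "slow"
```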
  • Patent number: 11003592
    Abstract: In an example, an apparatus comprises a plurality of compute engines; and logic, at least partially including hardware logic, to detect a cache line conflict in a last-level cache (LLC) communicatively coupled to the plurality of compute engines; and implement context-based eviction policy to determine a cache way in the cache to evict in order to resolve the cache line conflict. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: May 11, 2021
    Assignee: INTEL CORPORATION
    Inventors: Neta Zmora, Eran Ben-Avi
  • Patent number: 10997031
    Abstract: A method, computer program product, and computer system for executing an automatic recovery of log metadata. A secondary storage processor may request one or more log metadata buffer values from a first buffer used by a primary storage processor. The secondary storage processor may update one or more log metadata buffer values from a second buffer used by the secondary storage processor.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: May 4, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Cheng Wan, Socheavy D. Heng, Xinlei Xu, Yousheng Liu, Baote Zhuo
  • Patent number: 10956331
    Abstract: Techniques described herein generally include methods and systems related to cache partitioning in a chip multiprocessor. Cache-partitioning for a single thread or application between multiple data sources improves energy or latency efficiency of a chip multiprocessor by exploiting variations in energy cost and latency cost of the multiple data sources. Partition sizes for each data source may be selected using an optimization algorithm that minimizes or otherwise reduces latencies or energy consumption associated with cache misses.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: March 23, 2021
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
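The per-source partitioning above can be sketched as a greedy allocation: each cache way goes to the data source whose miss cost drops most from one more way. The source names and cost model below are illustrative assumptions:

```python
# Sketch: greedily assign cache ways across data sources to reduce the
# total cost (latency or energy) of misses to each source.

def partition_cache(total_ways: int, miss_cost) -> dict:
    """miss_cost(source, ways) -> cost; returns ways assigned per source."""
    sources = ["dram", "remote"]           # hypothetical data sources
    ways = {s: 0 for s in sources}
    for _ in range(total_ways):
        # give the next way to the source with the largest cost reduction
        best = max(sources,
                   key=lambda s: miss_cost(s, ways[s]) - miss_cost(s, ways[s] + 1))
        ways[best] += 1
    return ways
```

With a remote source whose misses cost 10x a local miss, the greedy pass naturally shields the remote source first.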
  • Patent number: 10929310
    Abstract: Systems and methods provide for optimizing utilization of an Address Translation Cache (ATC). A network interface controller (NIC) can write information reserving one or more cache lines in a first level of the ATC to a second level of the ATC. The NIC can receive a request for a direct memory access (DMA) to an untranslated address in memory of a host computing system. The NIC can determine that the untranslated address is not cached in the first level of the ATC. The NIC can identify a selected cache line in the first level of the ATC to evict using the request and the second level of the ATC. The NIC can receive a translated address for the untranslated address. The NIC can cache the untranslated address in the selected cache line. The NIC can perform the DMA using the translated address.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: February 23, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Sagar Borikar, Ravikiran Kaidala Lakshman