Abstract: A polling interval of a poller in a data storage system is dynamically adjusted to balance latency and resource utilization. Performance metrics are regularly estimated or measured and derived values are calculated including a latency share and a cost per event, based on a latency metric for polling latency and a cost metric for use of processing resources. A target latency share is adjusted by (1) an increase based on a CPU utilization metric being above a utilization threshold, and (2) a reduction based on the CPU utilization metric being below the utilization threshold and the cost per event being lower than a cost-per-event threshold. The polling interval is adjusted by (1) increasing the polling interval based on the latency share being less than the target latency share, and (2) decreasing the polling interval based on the latency share being greater than the target latency share.
Type:
Grant
Filed:
July 25, 2023
Date of Patent:
November 19, 2024
Assignee:
Dell Products L.P.
Inventors:
Vladimir Shveidel, Lior Kamran, Amitai Alkalay
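The Dell abstract above amounts to a two-level control loop: the target latency share drifts with CPU utilization and cost per event, and the polling interval is nudged toward that target. A minimal Python sketch of that loop, with illustrative function names, thresholds, and step sizes that are not taken from the patent:

```python
def adjust_target_latency_share(target_share, cpu_util, cost_per_event,
                                utilization_threshold=0.8, cost_threshold=1.0,
                                step=0.05):
    """Adjust the target latency share from measured metrics (illustrative only)."""
    if cpu_util > utilization_threshold:
        # CPU is busy: tolerate more polling latency to save cycles.
        target_share += step
    elif cost_per_event < cost_threshold:
        # CPU has headroom and polling is cheap per event: aim for lower latency.
        target_share -= step
    return min(max(target_share, 0.0), 1.0)


def adjust_polling_interval(interval, latency_share, target_share, step_us=10):
    """Move the polling interval toward the target latency share."""
    if latency_share < target_share:
        interval += step_us      # latency budget not used up: poll less often
    elif latency_share > target_share:
        interval -= step_us      # polling latency dominates: poll more often
    return max(interval, step_us)
```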
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher latency memory installation and yet-lower access-demand pages are optionally moved to yet higher-latency memory installation.
Type:
Grant
Filed:
April 25, 2023
Date of Patent:
November 19, 2024
Assignee:
Rambus Inc.
Inventors:
Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, a memory system is disclosed. The memory system includes volatile memory configured as a cache. The cache stores first data at first storage locations. Backing storage media couples to the cache. The backing storage media stores second data in second storage locations corresponding to the first data. Logic uses a presence or status of first data in the first storage locations to cease maintenance operations to the stored second data in the second storage locations.
Type:
Grant
Filed:
December 2, 2022
Date of Patent:
November 19, 2024
Assignee:
Rambus Inc.
Inventors:
Collins Williams, Michael Miller, Kenneth Wright
Abstract: Embodiments of the present disclosure provide an enhanced system and methods for optimizing data placement in a memory hierarchy. A disclosed non-limiting computer-implemented method configures a counter block comprising access frequency counters mapped into an application memory space, and configures a counter map, where each entry in the counter map associates an application-defined memory region with the access frequency counters of the counter block. A memory controller identifies a memory access in a given application-defined memory region and compares an access address with a mask in the counter map to track the memory access. The memory controller generates a heatmap representing a frequency count of accesses to quantized memory using the access frequency counters. Generating the heatmap is performed by memory controller hardware.
Type:
Grant
Filed:
February 23, 2023
Date of Patent:
October 29, 2024
Assignee:
International Business Machines Corporation
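A rough Python model of the counter block, counter map, and heatmap described in the IBM abstract above; the region layout, granularity, and names are invented for illustration, and the patent performs the equivalent bookkeeping in memory controller hardware:

```python
class CounterMap:
    """Associates application-defined memory regions with access-frequency counters."""
    def __init__(self, granularity=4096):
        self.granularity = granularity      # quantization of tracked addresses
        self.regions = []                   # (base, size, counter block)

    def add_region(self, base, size):
        n = size // self.granularity
        self.regions.append((base, size, [0] * n))   # counter block for the region

    def record_access(self, addr):
        # Compare the access address against each region (a mask check in hardware).
        for base, size, counters in self.regions:
            if base <= addr < base + size:
                counters[(addr - base) // self.granularity] += 1
                return

    def heatmap(self):
        # Frequency count of accesses to quantized memory, keyed by region base.
        return {base: list(counters) for base, _, counters in self.regions}
```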
Abstract: A method for providing elastic columnar cache includes receiving cache configuration information indicating a maximum size and an incremental size for a cache associated with a user. The cache is configured to store a portion of a table in a row-major format. The method includes caching, in a column-major format, a subset of the plurality of columns of the table in the cache and receiving a plurality of data requests requesting access to the table and associated with a corresponding access pattern requiring access to one or more of the columns. While executing one or more workloads, the method includes, for each column of the table, determining an access frequency indicating a number of times the corresponding column is accessed over a predetermined time period and dynamically adjusting the subset of columns based on the access patterns, the maximum size, and the incremental size.
Type:
Grant
Filed:
April 22, 2022
Date of Patent:
October 22, 2024
Assignee:
Google LLC
Inventors:
Anjan Kumar Amirishetty, Xun Cheng, Viral Shah
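The Google abstract above describes choosing which columns stay in the columnar cache from per-column access frequencies, under a maximum size and an incremental size. A small illustrative sketch, with assumed inputs and an assumed sizing rule rather than the patented method:

```python
def choose_cached_columns(access_counts, column_sizes, max_size, incremental_size):
    """Pick the column subset to keep in the columnar cache (illustrative sketch).

    access_counts: {column: accesses observed over the time period}
    column_sizes:  {column: bytes needed to cache the column}
    """
    chosen, used = [], 0
    # Favor the most frequently accessed columns first.
    for col in sorted(access_counts, key=access_counts.get, reverse=True):
        needed = column_sizes[col]
        if used + needed <= max_size:
            chosen.append(col)
            used += needed
    # Round the cache footprint up to the configured increment, capped at max_size.
    allocated = min(max_size, -(-used // incremental_size) * incremental_size) if used else 0
    return chosen, allocated
```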
Abstract: A storage device includes at least one memory die and a controller. The controller identifies a dirty block that was subject to an interrupted I/O operation and performs a coarse inspection of the dirty block. Each iteration of the coarse inspection includes: requesting a first number of bytes of a current page of the dirty block; receiving contents of the first number of bytes from the at least one memory die; and evaluating a state of the current page based on the received contents. The controller also determines an initial last good page based on the coarse inspection and performs a fine inspection of at least one page based on a second number of bytes greater than the first number of bytes. The fine inspection validates the initial last good page and identifies the initial last good page as an actual last good page of the dirty block.
Type:
Grant
Filed:
July 21, 2023
Date of Patent:
October 22, 2024
Assignee:
Sandisk Technologies, Inc.
Inventors:
Asaf Gueta, Arie Star, Omer Fainzilber, Eran Sharon
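One way to picture the coarse/fine inspection in the SanDisk abstract above is as two passes over the pages of the dirty block: a cheap pass that reads only a few bytes per page, then a wider read that validates the candidate last good page. An illustrative sketch, with assumed callbacks and byte counts:

```python
def find_last_good_page(read_bytes, page_is_good, num_pages,
                        coarse_len=16, fine_len=512):
    """Locate the last good page of a dirty block (illustrative two-pass sketch).

    read_bytes(page, n) -> first n bytes of the page from the memory die.
    page_is_good(data)  -> evaluates whether the page looks fully programmed.
    """
    last_good = -1
    # Coarse pass: request only a small number of bytes per page.
    for page in range(num_pages):
        if page_is_good(read_bytes(page, coarse_len)):
            last_good = page
        else:
            break
    # Fine pass: re-check with a larger read to validate the coarse result.
    while last_good >= 0 and not page_is_good(read_bytes(last_good, fine_len)):
        last_good -= 1
    return last_good   # actual last good page, or -1 if none
```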
Abstract: Methods and systems for creating and extending a row-level security (RLS) policy are provided. In one embodiment, a method is provided that includes creating an RLS policy for a primary object and searching a relationship database for one or more child relationships of the primary object. The method may further include filtering the one or more child relationships to identify a valid child relationship of the primary object. A child object of the primary object may then be identified based on the valid child relationship. The method may further include receiving a request to extend the RLS policy to the child object, and extending the RLS policy to the child object.
Type:
Grant
Filed:
February 27, 2019
Date of Patent:
October 15, 2024
Assignee:
K2 Software, Inc.
Inventors:
Paul Hoeffer, Lewis Garmston, Grant Dickinson
Abstract: In one embodiment, a method of selectively reserving portions of a last level cache (LLC) for a multi-core processor, the method comprising: allocating, by an executive system, plural classes of service to the portions of the LLC, wherein the portions comprise ways, and wherein each of the plural classes of service are allocated to one or more of the ways; assigning, by the executive system, one of the plural classes of service to an application as a default class of service, wherein the assignment controls which of the ways the application can allocate into; and overriding, by the application, the default class of service to enable allocation by the application to the one or more of the ways associated with a non-default class of service.
Abstract: Described are examples for storing data on a storage device, including storing, in a live write stream cache, one or more logical blocks (LBs) corresponding to a data segment, writing, for each LB in the data segment, a cache element of a cache entry that points to the LB in the live write stream cache, where the cache entry includes multiple cache elements corresponding to the multiple LBs of the data segment, writing, for the cache entry, a table entry in a mapping table that points to the cache entry, and when a storage policy is triggered for the cache entry, writing the multiple LBs, pointed to by each cache element of the cache entry, to a stream for storing as contiguous LBs on the storage device, and updating the table entry to point to a physical address of a first LB of the contiguous LBs on the storage device.
Type:
Grant
Filed:
November 9, 2022
Date of Patent:
September 17, 2024
Assignee:
Lemon Inc.
Inventors:
Peng Xu, Ping Zhou, Chaohong Hu, Fei Liu, Changyou Xu, Kan Frankie Fan
Abstract: An apparatus has processing circuitry, load tracking circuitry and load prediction circuitry. It is determined whether tracking information indicates that there is a risk of target data, corresponding to an address of a speculatively-issued load operation which is speculatively issued (bypassing an older operation) based on a prediction determined by the load prediction circuitry, having changed between the target data being loaded for the speculatively-issued load operation and data being loaded for a given older load operation bypassed by the speculatively-issued load operation. If so, independent of whether the addresses of the speculatively-issued load operation and the given older load operation correspond, at least the speculatively-issued load operation is reissued, even when the prediction is correct. This protects against ordering violations.
Abstract: Apparatuses, systems, and methods for hierarchical memory systems are described. An example method includes receiving a request to store data in a persistent memory device and a non-persistent memory device via an input/output (I/O) device; redirecting the request to store the data to logic circuitry in response to determining that the request corresponds to performance of a hierarchical memory operation; storing in a base address register associated with the logic circuitry, logical address information corresponding to the data responsive to receipt of the redirected request; asserting, by the logic circuitry, an interrupt signal on a hypervisor, the interrupt signal indicative of initiation of an operation to be performed by the hypervisor to control access to the data by the logic circuitry; and writing, based at least in part, on receipt of the redirected request, the data to the persistent memory device and the non-persistent memory device substantially concurrently.
Abstract: A first circuit formed on a first semiconductor substrate is wafer-bonded to a second circuit formed on a second semiconductor substrate, wherein the first circuit includes quasi-volatile or non-volatile memory circuits and wherein the second circuit includes fast memory circuits that have lower read latencies than the quasi-volatile or non-volatile memory circuits, as well as logic circuits. The fast memory circuits may include static random-access memory (SRAM) circuits, dynamic random-access memory (DRAM) circuits, embedded DRAM (eDRAM) circuits, magnetic random-access memory (MRAM) circuits, embedded MRAM (eMRAM), or any suitable combination of these circuits.
Type:
Grant
Filed:
April 24, 2023
Date of Patent:
August 27, 2024
Assignee:
SUNRISE MEMORY CORPORATION
Inventors:
Youn Cheul Kim, Richard S. Chernicoff, Khandker Nazrul Quader, Robert D. Norman, Tianhong Yan, Sayeef Salahuddin, Eli Harari
Abstract: An apparatus comprises a cache comprising a plurality of cache entries, and cache replacement control circuitry to select, in response to a cache request specifying a target address missing in the cache, a victim cache entry to be replaced with a new cache entry. The cache request specifies a partition identifier indicative of an execution environment associated with the cache request. The victim cache entry is selected based on re-reference interval prediction (RRIP) values for a candidate set of cache entries. The RRIP value for a given cache entry is indicative of a relative priority with which the given cache entry is to be selected as the victim cache entry. Configurable replacement policy configuration data is selected based on the partition identifier, and the RRIP value of the new cache entry is set to an initial value selected based on the selected configurable replacement policy configuration data.
Type:
Grant
Filed:
June 27, 2022
Date of Patent:
August 6, 2024
Assignee:
Arm Limited
Inventors:
Andrew David Tune, Andrew Brookfield Swaine
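A toy model of the RRIP victim selection and per-partition insertion value described in the Arm abstract above; the 2-bit RRIP width, the aging rule, and the class names are assumptions made for illustration only:

```python
class PartitionedRRIPCache:
    """Victim selection by RRIP with per-partition insertion values (illustrative)."""
    MAX_RRIP = 3                      # assume 2-bit RRIP values

    def __init__(self, num_ways, insert_rrip_by_partition):
        self.rrip = [self.MAX_RRIP] * num_ways
        self.tags = [None] * num_ways
        # Configurable replacement policy data selected by partition identifier.
        self.insert_rrip_by_partition = insert_rrip_by_partition

    def select_victim(self):
        # Pick an entry predicted to be re-referenced furthest in the future;
        # age every candidate until some entry reaches MAX_RRIP.
        while self.MAX_RRIP not in self.rrip:
            self.rrip = [v + 1 for v in self.rrip]
        return self.rrip.index(self.MAX_RRIP)

    def fill(self, tag, partition_id):
        way = self.select_victim()
        self.tags[way] = tag
        # The new entry gets the initial RRIP value configured for this partition.
        self.rrip[way] = self.insert_rrip_by_partition.get(partition_id, self.MAX_RRIP - 1)
        return way
```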
Abstract: In a data processing network, error detection information (EDI) is generated for first data of a first communication protocol of a plurality of communication protocols, the EDI including an error detection code and an associated validity indicator for each field group in a set of field groups. The first data and the EDI are sent through a network interconnect circuit, where the first data is translated to second data of a second communication protocol. An error is detected in the second data received from the network interconnect circuit when a validity indicator for a field group is set in EDI received with the second data and an error detection code generated for second data in the field group does not match the error detection code associated with the field group in the received EDI.
Type:
Grant
Filed:
February 28, 2023
Date of Patent:
August 6, 2024
Assignee:
Arm Limited
Inventors:
Sean Allan Steel, David Yue Williams, Premkishore Shivakumar
Abstract: A semiconductor device includes a processing unit that issues a memory access request with a virtual address, a first and a second memory management unit, and a test result storage unit. The first and second memory management units are hierarchically provided, and each includes an address translation unit that translates the virtual address of the memory access request into a physical address and a self-test unit that tests the address translation unit. The test result storage unit stores a first self-test result that indicates a result of the first self-test unit and a second self-test result that indicates a result of the second self-test unit.
Abstract: The present disclosure generally relates to an efficient manner of fetching data for write commands. The data can be fetched prior to classification, which is a fetch before mode. The data can alternatively be fetched after classification, which is a fetch after mode. When the data is fetched after classification, the write commands are aggregated until there is sufficient data so that data associated with any command is not split between memory devices. When in fetch before mode, the data should properly align such that data associated with any command is not split between memory devices. Efficiently toggling between the fetch before and fetch after modes will shape how writes are performed without impacting latency or bandwidth and without significantly increasing write buffer memory size.
Abstract: A data storage system has a CPU data bus for reading and writing data to data accelerators. Each data accelerator has a controller which receives the read and write requests and determines, based on entries in an address translation table (ATT), whether to read or write a local cache memory, which holds data in preprocessed form, or an attached accelerator memory, which has greater size capacity and saves data in a raw unprocessed form. The controller may also include an address translation table for mapping input addresses to memory addresses and indicating the presence of data in preprocessed form.
Type:
Grant
Filed:
August 8, 2022
Date of Patent:
July 2, 2024
Assignee:
Ceremorphic, Inc.
Inventors:
Lizy Kurian John, Venkat Mattela, Heonchul Park
Abstract: Provided is a memory system including a plurality of memory submodules and a controller. Each submodule comprises a plurality of memory channels, each channel having a parity bit, and a redundant array of independent devices (RAID) parity channel. The controller is configured to receive a block of data for storage in the plurality of memory submodules and determine whether a level of data traffic demand for a first of the plurality of submodules is high or low. When the data traffic demand is low, the controller (i) writes a portion of the block of data in the first of the plurality of submodules and (ii) concurrently updates the parity bit and the RAID parity channel associated with the block of data. When the data traffic demand is high, the controller (i) only writes the portion of the block of data in the first of the plurality of submodules and (ii) defers updating of the parity bits and the RAID parity channel associated with the block of data.
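The controller behavior in the abstract above, deferring parity and RAID parity-channel updates under high traffic demand, can be sketched as below; the submodule and parity interfaces are invented placeholders, not the patented design:

```python
def write_block(submodule, block, parity, traffic_is_high, deferred_updates):
    """Write a data block and update parity now or later (illustrative sketch)."""
    submodule.write(block)                       # the data portion is always written
    if traffic_is_high:
        # High demand: defer parity-bit and RAID parity-channel updates.
        deferred_updates.append(block)
    else:
        # Low demand: update parity concurrently with the data write.
        parity.update_parity_bits(block)
        parity.update_raid_parity_channel(block)


def drain_deferred(parity, deferred_updates):
    """Apply deferred parity updates once traffic demand drops."""
    while deferred_updates:
        block = deferred_updates.pop(0)
        parity.update_parity_bits(block)
        parity.update_raid_parity_channel(block)
```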
Abstract: Systems and methods of the present disclosure enable intelligent dynamic caching of data by accessing an activity history of historical electronic activity data entries associated with a user account, and utilizing a trained entity relevancy machine learning model to predict a degree of relevance of each entity associated with the historical electronic activity data entries in the activity history based at least in part on model parameters and activity attributes of each electronic activity data entry. A set of relevant entities are determined based at least in part on the degree of relevance of each entity. Pre-cached entities are identified based on pre-cached entity data records cached on the user device, and un-cached relevant entities from the set of relevant entities are identified based on the pre-cached entities. The cache on the user device is updated to cache the un-cached entity data records associated with the un-cached relevant entities.
Type:
Grant
Filed:
October 14, 2022
Date of Patent:
April 23, 2024
Assignee:
Capital One Services, LLC
Inventors:
Shabnam Kousha, Lin Ni Lisa Cheng, Asher Smith-Rose, Joshua Edwards, Tyler Maiman
Abstract: A storage system and related method are for operating solid-state storage memory in a storage system. Zones of solid-state storage memory are provided. Each zone includes a portion of the solid-state storage memory and has a data write requirement for reliability of data reads. The storage system adjusts power loss protection for at least one zone. The adjusting is based on the data write requirement for the zone and is responsive to detecting a power loss.
Type:
Grant
Filed:
July 15, 2022
Date of Patent:
April 2, 2024
Assignee:
PURE STORAGE, INC.
Inventors:
Andrew R. Bernat, Brandon Davis, Mark L. McAuliffe, Zoltan DeWitt, Benjamin Scholbrock, Phillip Hord, Ronald Karr
Abstract: An Adaptive Memory Mirroring Performance Accelerator (AMMPA) includes a centralized transaction handling block that dynamically maps the most frequently accessed memory regions into faster access memory. The technique creates shadow copies of the most frequently accessed memory regions in memory devices associated with lower latency. The regions for which shadow copies are provided are updated dynamically based on use, and the technique is flexible across different memory hierarchies.
Abstract: Examples described herein relate to an offload processor to receive data for transmission using a network interface or received in a packet by a network interface. In some examples, the offload processor can include a packet storage controller to determine whether to store data in a buffer of the offload processor or a system memory after processing by the offload processor. In some examples, the determination of whether to store data in a buffer of the offload processor or a system memory is based on one or more of: available buffer space, a latency limit associated with the data, a priority associated with the data, or available bandwidth through an interface between the buffer and the system memory. In some examples, the offload processor is to receive a descriptor and specify a storage location of data in the descriptor, wherein the storage location is within the buffer or the system memory.
Abstract: An apparatus for controlling access to a memory device comprising rows of memory units is provided. The apparatus comprises: an operation monitor configured to track memory operations to the rows of memory units of the memory device; a row hammer counter configured to determine, for each of the rows of memory units, row hammer effects experienced by the row of memory units due to the memory operations to the other rows of memory units of the memory device; a mitigation module configured to initiate, for each of the rows of memory units, row hammer mitigation when the accumulated row hammer effects experienced by the row of memory units reach a predetermined threshold; and a virtual host module configured to perform the row hammer mitigation targeting a row of memory units in response to the initiation of row hammer mitigation for the row of memory units by the mitigation module.
Type:
Grant
Filed:
June 28, 2022
Date of Patent:
March 19, 2024
Assignee:
MONTAGE TECHNOLOGY (KUNSHAN) CO.
Inventors:
Yibo Jiang, Leechung Yiu, Christopher Cox, Robert Xi Jin, Lizhi Jin, Leonard Datus
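A simplified software model of the row hammer counting and mitigation flow in the Montage abstract above; the adjacent-row weighting and the reset-on-mitigation behavior are assumptions made for illustration:

```python
class RowHammerGuard:
    """Track per-row disturbance caused by activity on other rows (illustrative model)."""
    def __init__(self, num_rows, threshold, neighbor_weight=1):
        self.effects = [0] * num_rows      # accumulated row hammer effect per row
        self.threshold = threshold
        self.neighbor_weight = neighbor_weight

    def on_activate(self, row, mitigate):
        # An activation disturbs physically adjacent rows, not the activated row itself.
        for victim in (row - 1, row + 1):
            if 0 <= victim < len(self.effects):
                self.effects[victim] += self.neighbor_weight
                if self.effects[victim] >= self.threshold:
                    mitigate(victim)            # e.g. issue a targeted refresh of the row
                    self.effects[victim] = 0
```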
Abstract: A system, process, and computer-readable medium for updating an application cache using a stream listening service is described. A stream listening service may monitor one or more data streams for content relating to a user. The stream listening service may forward the content along with time-to-live values to an application cache. A user may use an application to obtain information regarding the user's account, where the application obtains information from a data store and/or cached information from the application cache. The stream listening service, by forwarding current account information, obtained from listening to one or more streams, to the application cache, reduces traffic at the data store by providing current information from the data stream to the application cache.
Type:
Grant
Filed:
November 23, 2021
Date of Patent:
February 27, 2024
Assignee:
Capital One Services, LLC
Inventors:
Prateek Gupta, Samuel Wu, Zachary Wyman, Ramiro Ordonez
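The Capital One abstract above pairs a stream listener with a TTL-bearing application cache so that reads can be served without touching the data store. A minimal sketch under those assumptions (the field names and the TTL value are invented):

```python
import time

class ApplicationCache:
    """Cache with per-entry time-to-live values (illustrative)."""
    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        return value if time.monotonic() < expires else None


def on_stream_event(event, cache, ttl_seconds=300):
    """Stream listener: forward current account content to the cache with a TTL,
    so later application reads are served from the cache instead of the data store."""
    if event.get("user_id") and event.get("account"):
        cache.put(event["user_id"], event["account"], ttl_seconds)
```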
Abstract: Methods related to pre-processing a print job are disclosed. In an example method thereof, it is determined whether a document file is loaded in an application for the print job. Identifying information for the document file is retrieved. A data set obtained from the identifying information is sent to a print manager. A print history for the document file is updated responsive to the data set. The print job includes: pre-rendering of the document file into a document image; and caching of the document image. Print settings of the print job are associated with the document file. The print settings are tracked to provide criteria data to identify frequency of use of the document file.
Type:
Grant
Filed:
May 17, 2022
Date of Patent:
February 27, 2024
Assignee:
KYOCERA Document Solutions Inc.
Inventors:
Neil-Paul Payoyo Bermundo, Mohamed Al Sayed Mostafa
Abstract: Power consumption can be reduced by preventing a memory image from being destaged to a nonvolatile memory device. For example, a system can determine, subsequent to a computing device being in a first power mode and having a memory image stored in a first nonvolatile memory device that performs a caching function, that the computing device is in a second power mode that is a higher power mode than the first power mode. The system can, in response to determining that the computing device is in the second power mode, generate a first command to store the memory image in a volatile memory device and prevent the memory image from being stored in a second nonvolatile memory device. The system can, in response to generating the first command, store the memory image in the volatile memory device.
Abstract: In one example, an apparatus comprises: a direct memory access (DMA) descriptor queue that stores DMA descriptors, each DMA descriptor including an indirect address; an address translation table that stores an address mapping between indirect addresses and physical addresses; and a DMA engine configured to: fetch a DMA descriptor from the DMA descriptor queue, access the address translation table to translate a first indirect address of the DMA descriptor to a first physical address based on the address mapping, and perform a DMA operation based on executing the DMA descriptor to transfer data to or from the first physical address.
Abstract: Systems and methods related to direct swap caching with noisy neighbor mitigation and dynamic address range assignment are described. A system includes a host operating system (OS), configured to support a first set of tenants associated with a compute node, where the host OS has access to: (1) a first swappable range of memory addresses associated with a near memory and (2) a second swappable range of memory addresses associated with a far memory. The host OS is configured to allocate memory in a granular fashion such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set includes a first conflicting region associated with the first swappable range of memory addresses with the near memory and a second conflicting region associated with the second swappable range of memory addresses with the far memory.
Type:
Grant
Filed:
May 3, 2022
Date of Patent:
January 2, 2024
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Ishwar Agarwal, Yevgeniy Bak, Lisa Ru-feng Hsu
Abstract: Systems, apparatus and methods are provided for logical-to-physical (L2P) address translation. A method may comprise receiving a request for a first logical data address (LDA), and calculating a first translation data unit (TDU) index for a first TDU. The first TDU may contain a L2P entry for the first LDA. The method may further comprise searching a cache of lookup directory entries of recently accessed TDUs using the first TDU index, determining that there is a cache miss, generating and storing an outstanding request for the lookup directory entry for the first TDU in a miss buffer, retrieving the lookup directory entry for the first TDU from an in-memory lookup directory, determining that the lookup directory entry for the first TDU is not valid, reserving a TDU space for the first TDU in a memory, and generating a load request for the first TDU.
Type:
Grant
Filed:
November 21, 2022
Date of Patent:
December 5, 2023
Assignee:
INNOGRIT TECHNOLOGIES CO., LTD.
Inventors:
Bo Fu, Chi-Chun Lai, Jie Chen, Dishi Lai, Jian Wu, Cheng-Yun Hsu, Qian Cheng
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
Type:
Grant
Filed:
January 10, 2022
Date of Patent:
November 28, 2023
Assignee:
QUALCOMM Incorporated
Inventors:
Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
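The packing scheme in the Qualcomm abstract above can be pictured as two compressed lines written into one buffer from opposite ends, with the higher-priority line given the larger share of bits. An illustrative sketch with invented sizes:

```python
def pack_compressed_pair(high_priority, low_priority, line_bytes=64, high_share=40):
    """Pack two (possibly compressed) cache lines into one compressed cache line,
    filling from opposite ends of the buffer (illustrative; sizes are invented)."""
    buf = bytearray(line_bytes)
    hi = high_priority[:high_share]                 # larger first set of bits, forward
    lo = low_priority[:line_bytes - high_share]     # smaller second set of bits
    buf[:len(hi)] = hi                              # first direction: from the front
    if lo:
        buf[-len(lo):] = lo                         # second direction: from the tail
    return bytes(buf)
```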
Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to a full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered, while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.
Abstract: A processor includes a micro-operation cache having a plurality of micro-operation cache entries for storing micro-operations decoded from instruction groups and a micro-operation filter having a plurality of micro-operation filter table entries for storing identifiers of instruction groups for which the micro-operations are predicted dead on fill if stored in the micro-operation cache. The micro-operation filter receives a first identifier for a first instruction group. The micro-operation filter then prevents a copy of the micro-operations from the first instruction group from being stored in the micro-operation cache when a micro-operation filter table entry includes an identifier that matches the first identifier.
Type:
Grant
Filed:
April 23, 2020
Date of Patent:
August 15, 2023
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Marko Scrbak, Mahzabeen Islam, John Kalamatianos, Jagadish B. Kotra
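A toy version of the micro-operation filter in the AMD abstract above: groups whose identifiers appear in the filter table are never filled into the micro-op cache. The training rule shown here (remember groups that were evicted without ever being hit) is an assumption, since the abstract only says the fills are predicted dead:

```python
class MicroOpFilter:
    """Skip caching micro-ops for groups predicted dead-on-fill (illustrative)."""
    def __init__(self):
        self.dead_on_fill = set()       # identifiers of instruction groups to filter
        self.uop_cache = {}

    def on_decode(self, group_id, uops):
        if group_id in self.dead_on_fill:
            return                      # matching identifier: do not fill the uop cache
        self.uop_cache[group_id] = uops

    def on_evicted_unused(self, group_id):
        # An entry evicted without ever being hit was a dead fill; remember its group.
        self.dead_on_fill.add(group_id)
```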
Abstract: A solid-state drive having an integrated circuit comprising a controller that is configured to determine, for data transferred between a host interface of the integrated circuit and a nonvolatile semiconductor storage device interface of the integrated circuit, the availability of an internal buffer of the integrated circuit to transparently accumulate the transferred data, and (i) if the internal buffer is available, accumulate the data from the target nonvolatile semiconductor storage devices or the host in the internal buffer, or (ii) if the internal buffer is not available, accumulate the data from the target nonvolatile semiconductor storage devices or the host in an external buffer communicatively coupled to the controller, wherein the external buffer is external to the integrated circuit. The controller then provides the accumulated data to the respective interfaces to furnish a read or write request from the host.
Abstract: A graphics pipeline includes a texture cache having cache lines that are partitioned into a plurality of subsets. The graphics pipeline also includes one or more compute units that selectively generates a miss request for a first subset of the plurality of subsets of a cache line in the texture cache in response to a cache miss for a memory access request to an address associated with the first subset of the cache line. In some embodiments, the cache lines are partitioned into a first sector and a second sector. The compute units generate miss requests for the first sector, and bypass generating miss requests for the second sector, in response to cache misses for memory access requests received during a request cycle being in the first sector.
Abstract: System and method for testing a device under test (DUT) that includes a multiprocessor array (MPA) executing application software at operational speed. The application software may be configured for deployment on first hardware resources of the MPA and may be analyzed. Testing code for configuring hardware resources on the MPA to duplicate data generated in the application software for testing purposes may be created. The application software may be deployed on the first hardware resources. Input data may be provided to stimulate the DUT. The testing code may be executed to provide at least a subset of first data to a pin at an edge of the MPA for analyzing the DUT using a hardware resource of the MPA not used in executing the application software. The first data may be generated in response to a send statement executed by the application software based on the input data.
Type:
Grant
Filed:
October 17, 2018
Date of Patent:
August 8, 2023
Assignee:
Coherent Logix, Incorporated
Inventors:
Geoffrey N. Ellis, John Mark Beardslee, Michael B. Doerr, Ivan Aguayo, Brian A. Dalio
Abstract: Nonsequential readahead for deep learning training that includes: receiving an indication of a list of batch storage locations for a batch of data objects; prefetching, for each storage location in the list of batch storage locations, storage content corresponding to the batch of data objects; and storing the storage content corresponding to the batch of data objects within a cache accessible to an artificial intelligence workflow.
Abstract: An electronic device that handles memory accesses includes a memory and a processor that supports a plurality of streams. The processor acquires a graph that includes paths of operations in a set of operations for processing instances of data through a model, each path of operations including a separate sequence of operations from the set of operations that is to be executed using a respective stream from among the plurality of streams. The processor then identifies concurrent paths in the graph, the concurrent paths being paths of operations between split points at which two or more paths of operations diverge and merge points at which the two or more paths of operations merge. The processor next executes operations in each of the concurrent paths using a respective stream, the executing including using memory coloring for handling memory accesses in the memory for the operations in each concurrent path.
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
Type:
Grant
Filed:
January 10, 2022
Date of Patent:
June 27, 2023
Assignee:
QUALCOMM Incorporated
Inventors:
Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
Abstract: Systems, devices and methods are provided for operating a skewed-associative cache in a data processing system and, in particular, for changing address-to-row mappings in a skewed-associative cache.
Abstract: A memory system may include: a nonvolatile memory device suitable for storing user data and meta data of the user data; and a controller suitable for uploading at least some of the meta data to a host. When the size of the free space in the host storage space allocated to store the uploaded meta data is equal to or less than a preset value, the controller may upload hot meta data to the host according to the number of normal read requests received from the host and the ratio of the normal read requests.
Abstract: A mobile terminal including: a memory having a plurality of applications stored therein; an application management module configured to receive application information corresponding to the respective applications, and generate status information of the applications, corresponding to the application information; and a controller configured to determine execution history information of the applications through the status information provided from the application management module, wherein the application management module includes: an application information collection unit configured to collect cache data size information of the respective applications at preset time intervals; and a comparison unit configured to generate the status information of the applications by comparing the cache data size information of the applications, collected by the application information collection unit, to reference values corresponding to the respective applications.
Type:
Grant
Filed:
January 11, 2019
Date of Patent:
June 6, 2023
Assignee:
NHN Corporation
Inventors:
Daebeom Lee, Joon Sung Park, Joon Ho Lee, Donghun Kwon, Jun Sung Kim
Abstract: An operating method of a system-on-chip includes outputting a prefetch command in response to an update of mapping information on a first read target address, the update occurring in a first translation lookaside buffer storing first mapping information of a second address with respect to a first address, and storing, in response to the prefetch command, in a second translation lookaside buffer, second mapping information of a third address with respect to at least some second addresses of an address block including a second read target address.
Type:
Grant
Filed:
July 13, 2021
Date of Patent:
May 30, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Seongmin Jo, Youngseok Kim, Chunghwan You, Wooil Kim
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher latency memory installation and yet-lower access-demand pages are optionally moved to yet higher-latency memory installation.
Type:
Grant
Filed:
December 6, 2021
Date of Patent:
May 30, 2023
Assignee:
Rambus Inc.
Inventors:
Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a platform update request to update data item information associated with a first version of a data item cached in a shared cache memory. The embodiment may further operate by transmitting a cache update request to update the data item information of the first version of the data item cached in the shared cache memory, and isolating the first version of the data item cached in the shared cache memory based on a collection of version specific identifiers and a version agnostic identifier associated with the data item.
Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
Type:
Grant
Filed:
April 20, 2021
Date of Patent:
April 11, 2023
Assignee:
Hewlett Packard Enterprise Development LP
Abstract: A processing device identifies a portion of data in a cache memory to be written to a managed unit of a separate memory device and determines, based on respective memory addresses, whether an additional portion of data associated with the managed unit is stored in the cache memory. The processing device further generates a bit mask identifying a first location and a second location in the managed unit, where the first location is associated with the portion of data and the second location is associated with the additional portion of data, and performs, based on the bit mask, a read-modify-write operation to write the portion of data to the first location in the managed unit of the separate memory device and the additional portion of data to the second location in the managed unit of the separate memory device.
Type:
Grant
Filed:
July 21, 2021
Date of Patent:
March 21, 2023
Assignee:
Micron Technology, Inc.
Inventors:
Trevor C. Meyerowitz, Dhawal Bavishi, Fangfang Zhu
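The Micron abstract above gathers cache data belonging to one managed unit, builds a bit mask of the affected locations, and applies a single read-modify-write. An illustrative sketch with invented address math and a placeholder managed-unit interface:

```python
def flush_to_managed_unit(cache, managed_unit, dirty_addr, slots_per_unit=8):
    """Write cached data back to a managed unit with one read-modify-write
    (illustrative; the address math and slot size are invented)."""
    base = dirty_addr - (dirty_addr % slots_per_unit)
    bit_mask = 0
    updates = {}
    # Gather every cached slot that falls within the same managed unit.
    for slot in range(slots_per_unit):
        addr = base + slot
        if addr in cache:
            bit_mask |= 1 << slot           # mark this location in the bit mask
            updates[slot] = cache[addr]
    # Read the whole unit, patch only the masked locations, then write it back.
    unit = managed_unit.read(base)
    for slot, data in updates.items():
        unit[slot] = data
    managed_unit.write(base, unit)
    return bit_mask
```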
Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing data location information in the retrieved metadata to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains data corresponding to the system memory section in the received request, the method includes transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type:
Grant
Filed:
July 9, 2021
Date of Patent:
March 7, 2023
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez
Abstract: A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. In order to allocate the PTCM, a plurality of physical memory pages from a memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is subsequently locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
January 10, 2023
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Derrick Allen Aguren, Eric H. Van Tassell, Gabriel H. Loh, Jay Fleischman
Abstract: Systems, apparatus and methods are provided for logical-to-physical (L2P) address translation. A method may comprise receiving a request for a first logical data address (LDA), and calculating a first translation data unit (TDU) index for a first TDU. The first TDU may contain a L2P entry for the first LDA. The method may further comprise searching a cache of lookup directory entries of recently accessed TDUs using the first TDU index, determining that there is a cache miss, generating and storing an outstanding request for the lookup directory entry for the first TDU in a miss buffer, retrieving the lookup directory entry for the first TDU from an in-memory lookup directory, determining that the lookup directory entry for the first TDU is not valid, reserving a TDU space for the first TDU in a memory, and generating a load request for the first TDU.
Type:
Grant
Filed:
May 10, 2021
Date of Patent:
December 27, 2022
Assignee:
INNOGRIT TECHNOLOGIES CO., LTD.
Inventors:
Bo Fu, Chi-Chun Lai, Jie Chen, Dishi Lai, Jian Wu, Cheng-Yun Hsu, Qian Cheng
Abstract: A database management system for controlling prioritized transactions, comprising: a processor adapted to: receive from a client module a request to write into a database item as part of a high-priority transaction; check a lock status and an injection status of the database item; when the lock status of the database item includes a lock owned by a low-priority transaction and the injection status is not-injected status: change the injection status of the database item to injected status; copy current content of the database item to an undo buffer of the low-priority transaction; and write into a storage engine of the database item.
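A minimal sketch of the injection step described in the abstract above, for the case where a high-priority write finds the item locked by a low-priority transaction; the item and storage-engine interfaces are invented placeholders, not the patented design:

```python
def write_high_priority(item, new_value, storage_engine):
    """Sketch of the injection step for a high-priority write (illustrative)."""
    lock_owner = item.lock_owner            # assumed item fields, not from the patent
    if lock_owner is not None and lock_owner.priority == "low" and not item.injected:
        item.injected = True                            # mark the item as injected
        lock_owner.undo_buffer[item.key] = item.value   # snapshot for the low-priority tx
        storage_engine.write(item.key, new_value)       # write through to the storage engine
```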