Patents Examined by Masud K Khan
  • Patent number: 11899591
    Abstract: Exemplary methods, apparatuses, and systems include detecting an operation to write dirty data to a cache. The cache is divided into a plurality of channels. In response to the operation, the dirty data is written to a first cache line in the cache, the first cache line being accessed via a first channel. Additionally, a redundant copy of the dirty data is written to a second cache line in the cache. The second cache line serves as a redundant write buffer and is accessed via a second channel, the first and second channels differing from one another. A metadata entry for the second cache line is updated to reference a location of the dirty data in the first cache line.
    Type: Grant
    Filed: December 7, 2022
    Date of Patent: February 13, 2024
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Cagdas Dirik, Robert M. Walker
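Below is a minimal Python sketch of the idea in the abstract above: a dirty write lands in a cache line on one channel while a redundant copy goes to a line on a different channel, and the redundant line's metadata points back at the primary location. All names (ChanneledCache, write_dirty, the index-selection logic) are hypothetical illustrations, not taken from the patent.

```python
class ChanneledCache:
    def __init__(self, num_channels, lines_per_channel):
        self.num_channels = num_channels
        self.lines_per_channel = lines_per_channel
        self.lines = {}      # (channel, index) -> cached data
        self.metadata = {}   # redundant-line location -> primary-line location

    def _pick_line(self, channel, key):
        # Placeholder index selection; a real cache would do a set/way lookup.
        return (channel, hash(key) % self.lines_per_channel)

    def write_dirty(self, key, data):
        primary_ch = hash(key) % self.num_channels
        redundant_ch = (primary_ch + 1) % self.num_channels   # a different channel

        primary_loc = self._pick_line(primary_ch, key)
        redundant_loc = self._pick_line(redundant_ch, key)

        self.lines[primary_loc] = data     # dirty data written via the first channel
        self.lines[redundant_loc] = data   # redundant copy written via the second channel
        # Metadata for the redundant (write-buffer) line references the primary location.
        self.metadata[redundant_loc] = primary_loc
        return primary_loc, redundant_loc


cache = ChanneledCache(num_channels=4, lines_per_channel=16)
print(cache.write_dirty("blk42", b"payload"))
```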
  • Patent number: 11899584
    Abstract: Embodiments of the present disclosure relate to a system and an operating method of the system. Based on some embodiments of the disclosed technology, the system may include a random access memory structured to include memory cells to store data, a cache memory configured to cache at least part of the data, and a processor in communication with the random access memory and the cache memory to access at least part of the data from the random access memory or cache memory. The system may determine a cache hit ratio for the cache memory, and may set an operating frequency of the random access memory based on the cache hit ratio.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: February 13, 2024
    Assignee: SK HYNIX INC.
    Inventors: Yong Wan Hwang, Nam Hyeok Jeong, Kwang Ho Choi, Moon Hyeok Choi, Tae Woong Ha
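A minimal sketch of the idea above: derive a cache hit ratio and pick a DRAM operating frequency from it. The thresholds and frequency values are illustrative assumptions, not figures from the patent.

```python
def cache_hit_ratio(hits: int, accesses: int) -> float:
    return hits / accesses if accesses else 1.0

def select_dram_frequency_mhz(hit_ratio: float) -> int:
    # A high hit ratio means DRAM is accessed rarely, so a lower (power-saving)
    # frequency can be tolerated; a low hit ratio calls for a higher frequency.
    if hit_ratio >= 0.95:
        return 1600
    if hit_ratio >= 0.80:
        return 2400
    return 3200

ratio = cache_hit_ratio(hits=880, accesses=1000)
print(ratio, select_dram_frequency_mhz(ratio))
```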
  • Patent number: 11892982
    Abstract: Systems and methods for reducing delays between the time at which a need for a resynchronization of data replication between a volume of a local consistency group (CG) and its peer volume of a remote CG is detected and the time at which the resynchronization is triggered (Reseed Time Period) are provided. According to an example, information indicative of the direction of data replication between the volume and the peer volume is maintained within a cache of a node. Responsive to a disruptive operation (e.g., relocation of the volume from an original node to a new node), the Reseed Time Period is lessened by proactively adding a passive cache entry to a cache within the new node at the time the CG relationship is created when the new node represents an HA partner of the original node and prior to the volume coming online when the new node represents a non-HA partner.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: February 6, 2024
    Assignee: NetApp, Inc.
    Inventors: Murali Subramanian, Sohan Shetty, Rakesh Bhargava, Akhil Kaushik
  • Patent number: 11886342
    Abstract: A method, system, and computer program product for augmenting cache replacement operations are provided. The method identifies a set of cache lines within a first cache level of a multilevel cache. A first candidate cache line is identified based on a first replacement scheme of the first cache level. A second candidate cache line is identified based on the first replacement scheme of the first cache level. A replacement cache line is selected for replacement in the first cache level. The replacement cache line is selected from the first candidate cache line and the second candidate cache line and based on the first replacement scheme of the first cache level and a second replacement scheme of a second cache level. The method removes the replacement cache line from the first cache level.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: January 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Aaron Dingler, Mohit Karve, Alper Buyuktosunoglu
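A small sketch of the selection step described above: the L1 scheme nominates two candidate victims and a hint from the L2 scheme breaks the tie. The function and the ranking dictionary are hypothetical stand-ins for the patent's replacement schemes.

```python
def select_replacement(l1_candidates, l2_scheme_rank):
    """l1_candidates: two cache-line tags ranked by the L1 scheme (best victim first).
    l2_scheme_rank: maps a tag to its eviction priority in L2 (higher = more evictable)."""
    first, second = l1_candidates
    # Prefer the L1 scheme's first choice unless the L2 scheme considers the second
    # candidate clearly more evictable (e.g., it is about to leave L2 anyway).
    if l2_scheme_rank.get(second, 0) > l2_scheme_rank.get(first, 0):
        return second
    return first

print(select_replacement(("A", "B"), {"A": 1, "B": 3}))   # -> "B"
```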
  • Patent number: 11875046
    Abstract: A method includes sending an enumeration of a resource unit of the computing device to a first computing system tenant and to a second computing system tenant. The enumeration is sent through a first protocol and indicates a managing protocol associated with managing the resource unit. The method further includes receiving a first request from the first computing system tenant to reserve the resource unit. The first request is received through the managing protocol. The method further includes receiving a second request from the second computing system tenant to reserve the resource unit. The second request is received through the managing protocol. The method further includes sending, to the second computing system tenant, an indication that the resource unit is reserved by the first computing system tenant. The indication is sent through the managing protocol.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: January 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Daniel Lee Helmick
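A toy sketch of the reservation flow described above: the enumeration names the managing protocol, the first tenant's request is granted, and a later tenant is told who already holds the reservation. The ResourceUnit class and the "NVMe-MI" protocol label are illustrative assumptions, not details from the patent.

```python
class ResourceUnit:
    def __init__(self, name, managing_protocol):
        self.name = name
        self.managing_protocol = managing_protocol
        self.reserved_by = None

    def enumeration(self):
        # Sent through the "first" protocol; names the protocol used for management.
        return {"resource": self.name, "manage_via": self.managing_protocol}

    def reserve(self, tenant):
        # Requests arrive through the managing protocol.
        if self.reserved_by is None:
            self.reserved_by = tenant
            return {"granted": True}
        return {"granted": False, "reserved_by": self.reserved_by}

unit = ResourceUnit("nvme-set-0", managing_protocol="NVMe-MI")
print(unit.enumeration())
print(unit.reserve("tenant-A"))   # first tenant gets the reservation
print(unit.reserve("tenant-B"))   # second tenant is told who holds it
```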
  • Patent number: 11861199
    Abstract: Techniques are provided for data management across a persistent memory tier and a file system tier. A block within a persistent memory tier of a node is determined to have up-to-date data compared to a corresponding block within a file system tier of the node. The corresponding block may be marked as a dirty block within the file system tier. Location information of a location of the block within the persistent memory tier is encoded into a container associated with the corresponding block. In response to receiving a read operation, the location information is obtained from the container. The up-to-date data is retrieved from the block within the persistent memory tier using the location information for processing the read operation.
    Type: Grant
    Filed: July 24, 2022
    Date of Patent: January 2, 2024
    Assignee: NetApp, Inc.
    Inventors: Ananthan Subramanian, Matthew Fontaine Curtis-Maury, Ram Kesavan, Vinay Devadas
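A minimal sketch of the read path described above: the file-system block's container carries the persistent-memory location of the newer copy, and a read follows that location when the block is marked dirty. The Container structure and function names are hypothetical.

```python
class Container:
    def __init__(self):
        self.dirty = False          # corresponding file-system block is stale
        self.pmem_location = None   # encoded location of the up-to-date block in pmem

def mark_pmem_newer(container, pmem_addr):
    container.dirty = True
    container.pmem_location = pmem_addr

def read_block(container, pmem_tier, fs_tier, fs_addr):
    # If the container points at a newer copy in the persistent memory tier, use it.
    if container.dirty and container.pmem_location is not None:
        return pmem_tier[container.pmem_location]
    return fs_tier[fs_addr]

pmem_tier = {0x10: b"new data"}
fs_tier = {7: b"old data"}
c = Container()
mark_pmem_newer(c, 0x10)
print(read_block(c, pmem_tier, fs_tier, fs_addr=7))   # -> b"new data"
```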
  • Patent number: 11847061
    Abstract: A technical solution to the technical problem of how to support memory-centric operations on cached data uses a novel memory-centric memory operation that invokes write back functionality on cache controllers and memory controllers. The write back functionality enforces selective flushing of dirty, i.e., modified, cached data that is needed for memory-centric memory operations from caches to the completion level of the memory-centric memory operations, and updates the coherence state appropriately at each cache level. The technical solution ensures that commands to implement the selective cache flushing are ordered before the memory-centric memory operation at the completion level of the memory-centric memory operation.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: December 19, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Shaizeen Aga, Nuwan Jayasena, John Kalamatianos
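A rough sketch of the ordering constraint described above: only the dirty lines the memory-centric operation needs are written back, their coherence state is downgraded to clean, and the operation runs at its completion level (modeled here as plain memory) only after those write-backs. This is a simplified software model, not the patented hardware mechanism.

```python
def issue_memory_centric_op(op_addresses, caches, memory, run_op):
    # caches: list of dicts, one per level: {addr: (data, dirty_flag)}.
    # A real hierarchy would flush from the level holding the most recent copy.
    for level in caches:
        for addr in op_addresses:
            if addr in level:
                data, dirty = level[addr]
                if dirty:
                    memory[addr] = data        # flush to the op's completion level
                level[addr] = (data, False)    # update coherence state (now clean)
    # Only after the selective flushes complete does the operation execute near memory.
    return run_op(memory, op_addresses)

l1 = {0x100: (5, True)}
l2 = {0x100: (3, False), 0x104: (7, True)}
memory = {0x100: 1, 0x104: 2}
result = issue_memory_centric_op([0x100, 0x104], [l1, l2], memory,
                                 run_op=lambda mem, addrs: sum(mem[a] for a in addrs))
print(result)   # 5 + 7 = 12, computed on the freshly flushed values
```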
  • Patent number: 11847053
    Abstract: Systems, methods, and apparatuses relating to circuitry to implement a duplication resistant on-die irregular data prefetcher are described.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: Prathmesh Kallurkar, Anant Vithal Nori, Sreenivas Subramoney
  • Patent number: 11836077
    Abstract: Methods, systems, and devices for dynamically tuning host performance booster thresholds are described. A memory system may include a set of memory devices and an interface configured to communicate commands with a host system coupled with the memory system. The interface may communicate commands to the memory system according to a first command mode associated with a logical address space including a plurality of regions and communicate commands according to a second command mode associated with physical memory addresses. The memory system may further include a controller that may determine a region activated for the second command mode, receive a first plurality of commands, and determine, upon deactivating the region, a first threshold based on a first quantity of read commands serviced according to the second command mode. The controller may activate the region for the second command mode based on a second quantity of read commands received exceeding the first threshold.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: December 5, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Yanhua Bi
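A minimal sketch of the threshold tuning described above: a region activates the physical-address ("second") command mode once enough reads arrive, and on deactivation the next threshold is re-derived from how many reads that mode actually serviced. RegionTracker and the tuning formula are illustrative assumptions.

```python
class RegionTracker:
    def __init__(self, initial_threshold=64):
        self.threshold = initial_threshold
        self.active = False
        self.reads_since_deactivation = 0
        self.reads_serviced_in_mode2 = 0

    def on_read(self, serviced_in_mode2=False):
        if self.active:
            self.reads_serviced_in_mode2 += serviced_in_mode2
        else:
            self.reads_since_deactivation += 1
            if self.reads_since_deactivation > self.threshold:
                self.active = True                    # activate second command mode

    def deactivate(self):
        # Re-derive the next activation threshold from how useful the mode was.
        self.threshold = max(8, 2 * self.threshold - self.reads_serviced_in_mode2)
        self.active = False
        self.reads_since_deactivation = 0
        self.reads_serviced_in_mode2 = 0

tracker = RegionTracker(initial_threshold=4)
for _ in range(5):
    tracker.on_read()
print(tracker.active)   # True once the read count exceeds the threshold
```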
  • Patent number: 11836091
    Abstract: A processor supports secure memory access in a virtualized computing environment by employing requestor identifiers at bus devices (such as a graphics processing unit) to identify the virtual machine associated with each memory access request. The virtualized computing environment uses the requestor identifiers to control access to different regions of system memory, ensuring that each VM accesses only those regions of memory that the VM is allowed to access. The virtualized computing environment thereby supports efficient memory access by the bus devices while ensuring that the different regions of memory are protected from unauthorized access.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: December 5, 2023
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Anthony Asaro, Jeffrey G. Cheng, Anirudh R. Acharya
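A toy sketch of the access check described above: a requestor ID on each bus-device memory request maps to a VM, and the VM maps to the physical regions it may touch. The tables and ID values are hypothetical.

```python
requestor_to_vm = {0x21: "vm-guest-1", 0x22: "vm-guest-2"}
vm_allowed_regions = {
    "vm-guest-1": [(0x0000_0000, 0x0FFF_FFFF)],
    "vm-guest-2": [(0x1000_0000, 0x1FFF_FFFF)],
}

def check_access(requestor_id, phys_addr):
    # Deny requests from unknown requestors or outside the VM's allowed regions.
    vm = requestor_to_vm.get(requestor_id)
    if vm is None:
        return False
    return any(lo <= phys_addr <= hi for lo, hi in vm_allowed_regions[vm])

print(check_access(0x21, 0x0000_1000))   # True: region owned by vm-guest-1
print(check_access(0x22, 0x0000_1000))   # False: vm-guest-2 may not access it
```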
  • Patent number: 11836053
    Abstract: Example implementations relate to metadata operations in a storage system. An example storage system includes a machine-readable storage storing instructions executable by a processor to determine to generate a synthetic full backup based on data stream representations of a plurality of data streams. The instructions are also executable to, in response to a determination to generate the synthetic full backup, create a logical group including the data stream representations. The instructions are also executable to specify a cache resource allocation for the logical group, and generate the synthetic full backup from data stream representations using an amount of a cache resource limited by the cache resource allocation for the logical group.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: December 5, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: David Malcolm Falkinder, Richard Phillip Mayo, Peter Thomas Camble
  • Patent number: 11822479
    Abstract: Techniques for performing cache operations are provided. The techniques include recording an indication that providing exclusive access of a first cache line to a first processor is deemed problematic; detecting speculative execution of a store instruction by the first processor to the first cache line; and in response to the detecting, refusing to provide exclusive access of the first cache line to the first processor, based on the indication.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: November 21, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul J. Moyer
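A minimal sketch of the behavior described above: once a cache line is recorded as problematic, a speculative store to it is refused exclusive access. Function names and the returned fields are illustrative.

```python
problematic_lines = set()

def record_problematic(line_addr):
    # Record that granting exclusive access to this line is deemed problematic.
    problematic_lines.add(line_addr)

def on_speculative_store(core_id, line_addr):
    if line_addr in problematic_lines:
        # Refuse: the store must wait until it is non-speculative to gain ownership.
        return {"exclusive_granted": False, "reason": "line flagged problematic"}
    return {"exclusive_granted": True}

record_problematic(0x80)
print(on_speculative_store(core_id=0, line_addr=0x80))
print(on_speculative_store(core_id=0, line_addr=0xC0))
```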
  • Patent number: 11809299
    Abstract: An information handling system includes a storage system and a remote processing system. The storage system includes a storage array and a local storage usage predictor. The local storage usage predictor receives usage information from the storage array, and predicts a first usage prediction for the storage array based upon the usage information. The remote processing system includes a remote storage usage predictor remote from the storage system. The remote storage usage predictor receives the usage information and predicts a second usage prediction for the storage array based upon the usage information.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: November 7, 2023
    Assignee: Dell Products L.P.
    Inventors: Cherry Changyue Dai, Arthur Fangbin Zhou
  • Patent number: 11809317
    Abstract: A memory controlling device is provided that is configured to connect to a memory module including a resistance switching memory cell array partitioned into a plurality of partitions including a first partition and a second partition. A first controlling module accesses the memory module. When an incoming request is a read request, a second controlling module determines whether there is a conflict for the first partition targeted by the read request, instructs the first controlling module to read target data of the read request from the memory module when a write to the second partition is in progress, and suspends the read request when a write to the first partition is in progress.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: November 7, 2023
    Assignees: MemRay Corporation, Yonsei University, University-Industry Foundation (UIF)
    Inventors: Myoungsoo Jung, Gyuyoung Park, Miryeong Kwon
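A small sketch of the conflict check described above: the read proceeds when the in-flight write targets a different partition and is suspended when it targets the same one. The callback-style handlers are hypothetical.

```python
def handle_read(read_partition, write_in_progress_partition, issue_read, suspend_read):
    if write_in_progress_partition is None or write_in_progress_partition != read_partition:
        return issue_read(read_partition)    # no conflict: read the memory module now
    return suspend_read(read_partition)      # conflict: defer until the write finishes

issue = lambda p: f"read issued to partition {p}"
suspend = lambda p: f"read suspended (write busy in partition {p})"
print(handle_read(read_partition=1, write_in_progress_partition=2,
                  issue_read=issue, suspend_read=suspend))
print(handle_read(read_partition=1, write_in_progress_partition=1,
                  issue_read=issue, suspend_read=suspend))
```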
  • Patent number: 11803473
    Abstract: Systems and techniques for dynamic selection of policy that determines whether copies of shared cache lines in a processor core complex are to be stored and maintained in a level 3 (L3) cache of the processor core complex are based on one or more cache line sharing parameters or based on a counter that tracks L3 cache misses and cache-to-cache (C2C) transfers in the processor core complex, according to various embodiments. Shared cache lines are shared between processor cores or between threads. By comparing either the cache line sharing parameters or the counter to corresponding thresholds, a policy is set which defines whether copies of shared cache lines at such indices are to be retained in the L3 cache.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kelley, Paul Moyer
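A minimal sketch of the counter-based policy selection described above: L3 misses and cache-to-cache transfers nudge a saturating counter, and comparing it to a threshold decides whether shared lines are also retained in L3. The counter bounds and threshold are illustrative assumptions.

```python
class SharedLinePolicy:
    def __init__(self, threshold=0, limit=64):
        self.counter = 0
        self.threshold = threshold
        self.limit = limit

    def on_l3_miss(self):
        # Misses suggest L3 capacity is precious: lean away from duplicating shared lines.
        self.counter = max(-self.limit, self.counter - 1)

    def on_cache_to_cache_transfer(self):
        # C2C transfers suggest an L3 copy would have been cheaper: lean toward retention.
        self.counter = min(self.limit, self.counter + 1)

    def retain_shared_in_l3(self):
        return self.counter > self.threshold

policy = SharedLinePolicy()
for _ in range(5):
    policy.on_cache_to_cache_transfer()
policy.on_l3_miss()
print(policy.retain_shared_in_l3())   # True: C2C traffic outweighs misses
```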
  • Patent number: 11782835
    Abstract: Disclosed herein is a heterogeneous system based on unified virtual memory. The heterogeneous system based on unified virtual memory may include a host for compiling a kernel program, which is source code of a user application, in a binary form and delivering the compiled kernel program to a heterogeneous system architecture device, the heterogeneous system architecture device for processing operation of the kernel program delivered from the host in parallel using two or more different types of processing elements, and unified virtual memory shared between the host and the heterogeneous system architecture device.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: October 10, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Joo-Hyun Lee, Young-Su Kwon, Jin-Ho Han
  • Patent number: 11782629
    Abstract: A data processing method includes, in response to determining that data corresponding to received data to be written exists, determining a first data identifier of the data corresponding to the data to be written, where the first data identifier is used to obtain a first storage area corresponding to the data; generating a second data identifier of the data to be written; writing the data to be written into the second storage area; in response to receiving a data rollback instruction, obtaining a target data identifier corresponding to the data rollback instruction; and determining a target storage area based on the target data identifier to obtain rollback data from the target storage area. The second data identifier is different from the first data identifier, and the second data identifier corresponds to a second storage area different from the first storage area.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: October 10, 2023
    Assignee: LENOVO (BEIJING) LIMITED
    Inventors: Xianwu Sun, Quan Wang, Shengyu Zhang
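A toy sketch of the versioning-and-rollback idea described above: every write gets a fresh data identifier mapped to its own storage slot, so a rollback instruction fetches an earlier version by its identifier without touching the newer one. The in-memory dictionary stands in for real storage areas.

```python
import itertools

_ids = itertools.count(1)
id_to_area = {}      # data identifier -> storage area (here just a dict slot)

def write(data, existing_id=None):
    # If data for this logical item already exists, its identifier is noted (existing_id),
    # but the new write still gets a fresh identifier and a different storage area.
    new_id = next(_ids)
    id_to_area[new_id] = {"data": data, "previous": existing_id}
    return new_id

def rollback(target_id):
    # Determine the target storage area from the identifier and return the rollback data.
    return id_to_area[target_id]["data"]

v1 = write(b"original")
v2 = write(b"updated", existing_id=v1)
print(rollback(v1))   # -> b"original", recovered from the first storage area
```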
  • Patent number: 11782834
    Abstract: In a network-on-chip (NoC) interconnect connected to one or more agents with multiple input ports, one or more switches are provided with a round robin arbiter constructed to use representations of the input ports and, in some embodiments, the current round robin state, as thermometer codes. By using thermometer code to represent port information, the correspondence between the current input and the current state to be granted can be rapidly determined through simple two-step AND and XOR operations. With such a simple logical procedure, the number of steps to make the determination, and therefore the energy required, can be reduced by log2(n) steps or up to 43%. Using thermometer code reduces the number of computations required. Hence, the number of logic circuit elements required to carry out the calculation is reduced, shrinking the floorplan area needed for the arbiter.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: October 10, 2023
    Inventor: Boon Chuan
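For context, here is a sketch of a generic thermometer-mask round-robin arbiter. It illustrates masking the request vector with a thermometer code, but it is not necessarily the exact two-step AND/XOR formulation the abstract refers to; all names are hypothetical.

```python
def thermometer_mask(start_bit, width):
    # Bits start_bit..width-1 set, e.g. start_bit=2, width=6 -> 0b111100.
    return ((1 << width) - 1) & ~((1 << start_bit) - 1)

def round_robin_grant(requests, pointer, width):
    masked = requests & thermometer_mask(pointer, width)   # ports at/after the pointer
    pool = masked if masked else requests                   # wrap around if none remain
    if pool == 0:
        return None, pointer
    grant = (pool & -pool).bit_length() - 1                 # lowest requesting port in pool
    return grant, (grant + 1) % width                       # next pointer for fairness

reqs = 0b101010          # ports 1, 3, 5 requesting
ptr = 0
for _ in range(4):
    grant, ptr = round_robin_grant(reqs, ptr, width=6)
    print(grant)         # 1, 3, 5, 1 -> rotates fairly among the requesters
```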
  • Patent number: 11783226
    Abstract: Systems, computer-implemented methods, and computer program products to facilitate model transfer learning across evolving processes are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a condition definition component that defines one or more conditions associated with use of a model trained on first traces of a first process to make a prediction on one or more second traces of a second process. The computer executable components can further comprise a guardrail component that determines whether to use the model to make the prediction.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: October 10, 2023
    Assignee: International Business Machines Corporation
    Inventors: Evelyn Duesterwald, Vatche Isahagian, Vinod Muthusamy
  • Patent number: 11775211
    Abstract: The present technology relates to an electronic device. A memory controller according to the present technology may include a host interface controller, a plurality of buffers, and a memory operation controller. The host interface controller may sequentially generate a plurality of commands based on a request received from a host. The plurality of buffers may store the plurality of commands according to command attributes. The memory operation controller may compare a sequence number of a target command stored in a target buffer among the plurality of buffers with a sequence number of a standby command stored in the remaining buffers, and may determine a process of the target command and a process of the standby command based on the comparison, wherein a buffer satisfying a flush condition among the plurality of buffers is selected as the target buffer.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: October 3, 2023
    Assignee: SK hynix Inc.
    Inventor: Hye Mi Kang
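A minimal sketch of the ordering decision described above: commands sit in per-attribute buffers with global sequence numbers, a buffer that satisfies a (here, simplified) flush condition becomes the target, and the target command is processed only if no standby command in the other buffers is older. All structures and thresholds are illustrative assumptions.

```python
def pick_target_buffer(buffers, flush_threshold=2):
    # A buffer "satisfies the flush condition" here if it has queued enough commands.
    for name, queue in buffers.items():
        if len(queue) >= flush_threshold:
            return name
    return None

def process_next(buffers, target_name):
    target_seq, target_cmd = buffers[target_name][0]
    standby = [q[0] for n, q in buffers.items() if n != target_name and q]
    # Process the target command only if every standby command is newer than it;
    # otherwise the older standby command goes first to preserve host ordering.
    if all(seq > target_seq for seq, _ in standby):
        return buffers[target_name].pop(0)
    oldest = min((n for n, q in buffers.items() if q), key=lambda n: buffers[n][0][0])
    return buffers[oldest].pop(0)

buffers = {"write": [(1, "W-a"), (3, "W-b")], "flush": [(2, "F-a")]}
target = pick_target_buffer(buffers)          # "write" reaches the flush condition
print(process_next(buffers, target))          # (1, 'W-a'): its sequence number is oldest
```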