Patents Examined by Jae U Yu
  • Patent number: 11599462
    Abstract: Methods and systems for memory cache entry replacement with pinned cache entries. Data structures are maintained for tracking a state of entries of a memory cache. A first data structure includes identifiers for pinned entries of a memory cache. A second data structure includes identifiers for unpinned entries of the memory cache that have been accessed once. A third data structure includes identifiers for unpinned entries of the memory cache that have been accessed more than once. A request to pin an entry is received. A determination is made that an identifier associated with the entry to pin is included in the second data structure or the third data structure. The identifier associated with the pinned entry is added to the first data structure. A detection is made at a time period that one or more entries of the memory cache are to be removed from the memory cache in accordance with an eviction protocol.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: March 7, 2023
    Assignee: Google LLC
    Inventor: Amalia Hawkins
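The three tracking structures in this abstract can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the class and method names are hypothetical:

```python
class PinnableCache:
    """Tracks cache-entry identifiers in three structures, per the abstract:
    pinned entries, unpinned entries accessed once, and unpinned entries
    accessed more than once."""

    def __init__(self):
        self.pinned = set()      # first structure: pinned entry ids
        self.seen_once = set()   # second: unpinned, accessed exactly once
        self.seen_many = set()   # third: unpinned, accessed more than once

    def access(self, entry_id):
        if entry_id in self.pinned:
            return
        if entry_id in self.seen_once:
            # second access promotes the id to the third structure
            self.seen_once.discard(entry_id)
            self.seen_many.add(entry_id)
        elif entry_id not in self.seen_many:
            self.seen_once.add(entry_id)

    def pin(self, entry_id):
        # A pin request moves the id out of the unpinned structures
        # and into the pinned structure.
        if entry_id in self.seen_once or entry_id in self.seen_many:
            self.seen_once.discard(entry_id)
            self.seen_many.discard(entry_id)
            self.pinned.add(entry_id)

    def evict_candidates(self):
        # Eviction considers only unpinned entries; once-accessed
        # entries are offered before multiply-accessed ones.
        return list(self.seen_once) + list(self.seen_many)
```

The key property, consistent with the abstract, is that pinned identifiers never appear among eviction candidates.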
  • Patent number: 11599470
    Abstract: A last-level collective hardware prefetcher (LLCHP) is described. The LLCHP is to detect a first off-chip memory access request by a first processor core of a plurality of processor cores. The LLCHP is further to determine, based on the first off-chip memory access request, that first data associated with the first off-chip memory access request is associated with second data of a second processor core of the plurality of processor cores. The LLCHP is further to prefetch the first data and the second data based on the determination.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: March 7, 2023
    Assignee: The Regents of the University of California
    Inventors: Georgios Michelogiannakis, John Shalf
  • Patent number: 11593260
    Abstract: An apparatus to facilitate memory data compression is disclosed. The apparatus includes a memory and having a plurality of banks to store main data and metadata associated with the main data and a memory management unit (MMU) coupled to the plurality of banks to perform a hash function to compute indices into virtual address locations in memory for the main data and the metadata and adjust the metadata virtual address locations to store each adjusted metadata virtual address location in a bank storing the associated main data.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Niranjan Cooray, Prasoonkumar Surti, Sudhakar Kamma, Vasanth Ranganathan
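The bank co-location step in this abstract can be illustrated with a toy model. This is a hedged sketch under assumed parameters (8 banks of 1024 slots, Python's built-in `hash` standing in for the MMU's hash function), not Intel's implementation:

```python
NUM_BANKS = 8
BANK_SIZE = 1024

def bank_of(index):
    """Which bank a virtual slot index falls in."""
    return index // BANK_SIZE

def place(main_key, meta_key):
    """Hash main data and metadata to slot indices, then adjust the
    metadata index so it lands in the bank holding the main data."""
    main_idx = hash(main_key) % (NUM_BANKS * BANK_SIZE)
    meta_idx = hash(meta_key) % (NUM_BANKS * BANK_SIZE)
    # Adjustment: keep the metadata's intra-bank offset, but move it
    # into the same bank as the associated main data.
    target_bank = bank_of(main_idx)
    meta_idx = target_bank * BANK_SIZE + (meta_idx % BANK_SIZE)
    return main_idx, meta_idx
```

The adjustment guarantees that a lookup touching a data block and its metadata activates only one bank.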
  • Patent number: 11586544
Abstract: A data prefetching method and a terminal device are provided. A CPU core cluster of the terminal device is configured to deliver a data access request to a first cache of at least one level of cache, where the data access request carries a first address, which is the address of the data in the memory that the CPU core cluster currently needs to access. The prefetcher in the terminal device provided in embodiments of this application may generate a prefetch-from address and load the data corresponding to the generated prefetch-from address into the first cache. When it needs to access the data, the CPU core cluster can read it from the first cache without reading from the memory. This helps increase the operating rate of the CPU core cluster.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: February 21, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Gongzheng Shi, Jianliang Ma, Liqiang Wang
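A minimal stride prefetcher illustrates the idea of generating prefetch-from addresses and loading the corresponding data into the first cache ahead of use. This sketch is an assumption-laden stand-in (the patent does not specify a stride scheme); `StridePrefetcher` and its parameters are hypothetical:

```python
class StridePrefetcher:
    """Observes the access stream, infers a stride, and pre-loads
    `degree` lines ahead from memory into the cache."""

    def __init__(self, memory, cache, degree=2):
        self.memory, self.cache, self.degree = memory, cache, degree
        self.last_addr = None
        self.stride = 0

    def on_access(self, addr):
        if self.last_addr is not None:
            self.stride = addr - self.last_addr
        self.last_addr = addr
        if self.stride:
            for i in range(1, self.degree + 1):
                pf = addr + i * self.stride   # generated prefetch-from address
                if pf in self.memory and pf not in self.cache:
                    self.cache[pf] = self.memory[pf]   # load ahead of use

    def read(self, addr):
        """Return (data, hit): a hit is served from the first cache
        without a memory access."""
        if addr in self.cache:
            return self.cache[addr], True
        self.cache[addr] = self.memory[addr]
        return self.cache[addr], False
```

After two sequential accesses establish a stride, subsequent reads hit in the cache, which is the rate benefit the abstract describes.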
  • Patent number: 11586353
    Abstract: Techniques for storage management involve: in accordance with a determination that an input/output (I/O) request of a storage system is received, determining a target storage device to which the I/O request is directed. The techniques further involve: in accordance with a determination that the target storage device is a storage device of a first type, processing the I/O request by accessing a memory of the storage system. The techniques further involve: in accordance with a determination that the target storage device is a storage device of a second type different from the first type, processing the I/O request without accessing the memory, the storage device of the second type having an access speed higher than that of the storage device of the first type. Accordingly, such techniques can improve performance of a storage system.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 21, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Shuo Lv, Leihu Zhang, Huan Chen, Chen Gong
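The type-based dispatch in this abstract can be sketched directly: reads to a slow first-type device go through the in-memory cache, while reads to a fast second-type device bypass the memory entirely. Device names and structures here are illustrative assumptions:

```python
FIRST_TYPE = "hdd"          # slower device: worth caching in memory
SECOND_TYPE = "fast_nvram"  # faster device: memory cache is skipped

def handle_read(io_addr, device, cache):
    """Process a read I/O request; returns (data, where it was served from)."""
    if device["type"] == FIRST_TYPE:
        if io_addr in cache:
            return cache[io_addr], "memory"   # served from the memory cache
        data = device["blocks"][io_addr]
        cache[io_addr] = data                 # populate the cache for next time
        return data, "device"
    # Second type: access speed is high enough that copying through
    # the memory would only add overhead, so go straight to the device.
    return device["blocks"][io_addr], "device"
```

Skipping the memory copy for the fast device avoids a redundant hop, which is the performance gain the abstract claims.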
  • Patent number: 11580020
    Abstract: A router device may receive, from a user device, a request for access to a file. The router device may determine that a cached version of the file is stored in a first data structure associated with the router device. The router device may communicate with a server device to determine whether the cached version of the file is current. The server device may be associated with a second data structure that stores a master version of the file. The router device may generate a copy of the cached version of the file based on communicating with the server device. The router device may send the copy of the cached version of the file to the user device.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: February 14, 2023
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jonathan Emerson Hirko, Rory Liam Connolly, Wei G. Tan, Nikolay Kulikaev, Manian Krishnamoorthy
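The router-side flow in this abstract resembles a freshness-validated cache. The sketch below assumes a version number stands in for whatever protocol the router and server use to compare the cached and master copies; all names are hypothetical:

```python
def serve_file(name, router_cache, server):
    """Serve a copy of the file, refreshing the router's cached version
    if the server's master version is newer."""
    master_version, master_data = server[name]   # stand-in for the server check
    entry = router_cache.get(name)
    if entry is None or entry["version"] != master_version:
        # Cached copy is missing or stale: refresh from the master version.
        entry = {"version": master_version, "data": master_data}
        router_cache[name] = entry
    # Send the user device a copy, keeping the cached original intact.
    return entry["data"][:]
```

When the cached version is current, the router serves it without transferring the file body from the server again.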
  • Patent number: 11568920
    Abstract: A memory device includes an array of 2T1C DRAM cells and a memory controller. The DRAM cells are arranged as a plurality of rows and columns of DRAM cells. The memory controller is internal to the memory device and is coupled to the array of DRAM cells. The memory controller is capable of receiving commands input to the memory device and is responsive to the received commands to control row-major access and column-major access to the array of DRAM cells. In one embodiment, each transistor of a memory cell includes a terminal directly coupled to a storage node of the capacitor. In another embodiment, a first transistor of a memory cell includes a terminal directly coupled to a storage node of the capacitor, and a second transistor of the 2T1C memory cell includes a gate terminal directly coupled to the storage node of the capacitor.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: January 31, 2023
    Inventors: Mu-Tien Chang, Dimin Niu, Hongzhong Zheng
  • Patent number: 11561892
    Abstract: Systems and methods for adapting garbage collection (GC) operations in a memory device to a pattern of host accessing the device are discussed. The host access pattern can be represented by how frequent the device is in idle states free of active host access. An exemplary memory device includes a memory controller to track a count of idle periods during a specified time window, and to adjust an amount of memory space to be freed by a GC operation in accordance with the count of idle periods. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the count of idle periods during the specified time window.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: January 24, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Qing Liang, Deping He, David Aaron Palmer
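The adaptation described in this abstract can be sketched as two small policy functions: one scaling the GC free-space target with the idle-period count, one reallocating the SLC cache share. All constants below are illustrative assumptions, not values from the patent:

```python
def gc_target_blocks(idle_periods, base=8, per_idle=4, cap=64):
    """More idle periods observed in the window -> free more space per
    GC pass, up to a cap."""
    return min(cap, base + per_idle * idle_periods)

def slc_cache_fraction(idle_periods, window_periods):
    """Reallocate cells between SLC cache and MLC storage according to
    how idle the window was (assumed bounds: 10%..50% SLC)."""
    idle_ratio = idle_periods / max(window_periods, 1)
    return 0.1 + 0.4 * idle_ratio
```

A mostly-idle host gives the device headroom for aggressive GC and a larger SLC cache; a busy host keeps GC light to avoid contending with host I/O.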
  • Patent number: 11556608
Abstract: Systems and methods are described for processing requests of a single page application in an application server. The method includes receiving a request from a component of a single page application on a user device, getting a page identifier (ID) and a user ID from the request, and searching a cache lookup table for a cache entry associated with the page ID. When no cache entry for the page ID is found in the cache lookup table, a new cache entry is created in the cache lookup table, and the request is processed using the new cache entry to generate a response. When a cache entry for the page ID is found, the user ID from the request is compared to the user ID in the cache entry; when the user IDs match, the request is processed using the found cache entry to generate the response. The response is then sent to the single page application on the user device.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: January 17, 2023
    Assignee: salesforce.com, inc.
    Inventor: Martin Presler-Marshall
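The lookup flow in this abstract can be sketched as follows. The abstract does not spell out the mismatch path, so this sketch assumes a user-ID mismatch also creates a fresh entry; `process` is a hypothetical stand-in for real request handling:

```python
def handle_request(request, cache_lookup, process):
    """Find or create the cache entry for the request's page ID, then
    process the request with that entry to generate a response."""
    page_id, user_id = request["page_id"], request["user_id"]
    entry = cache_lookup.get(page_id)
    if entry is None or entry["user_id"] != user_id:
        # No entry for this page, or the user IDs do not match:
        # create a new entry for processing this request (assumption).
        entry = {"user_id": user_id, "context": {}}
        cache_lookup[page_id] = entry
    return process(request, entry)
```

Repeated requests for the same page by the same user reuse the found entry, which is the caching benefit the abstract describes.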
  • Patent number: 11544195
    Abstract: An information providing method of an electronic apparatus is disclosed. The information providing method may include receiving a counter information request, identifying cache counter information corresponding to the counter information request from a cache database related to a counter, and transmitting response information corresponding to the counter information request based on the identified cache counter information.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: January 3, 2023
    Assignee: Coupang Corp.
    Inventor: Seok Hyun Kim
  • Patent number: 11537516
Abstract: Systems and methods are provided for using a distributed cache architecture with different methods to load balance requests depending upon whether a requested data item is a frequently-requested item (e.g., a “hot key”). The cache may be implemented as a consistent hash ring, and most keys may be assigned to a particular node based on a consistent hash. For hot key requests, the requests may be distributed among a subset of nodes rather than being assigned to a specific node using consistent hashing. When a witness service is used to ensure that cached data is fresh, verification requests for data regarding hot keys may be batched to avoid overloading the witness service with hot key requests.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: December 27, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Tyler Michael Jung, Slavcho Georgiev Slavchev, Nishant Jain, Vishwas Narendra, Nikhil Shah, James Zuber, Sameer Choudhary, Christopher A. Stephens, Suchindra Yogendra Agarwal, Phillip H. Pruett
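The routing policy in this abstract can be sketched in one function: cold keys map to a single node via a consistent hash, while hot keys are spread over a small subset of nodes. This is an illustrative simplification (a real consistent hash ring uses virtual nodes; the fanout of 3 is an assumption):

```python
import hashlib
import random

def node_for(key, nodes, hot_keys, fanout=3):
    """Pick the node to serve a cache request for `key`."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    if key in hot_keys:
        # Hot key: distribute requests across a fanout-sized subset
        # of nodes instead of pinning them to one node.
        subset = [nodes[(h + i) % len(nodes)] for i in range(fanout)]
        return random.choice(subset)
    # Normal key: deterministic consistent placement on one node.
    return nodes[h % len(nodes)]
```

Cold keys always land on the same node (good for hit rate); hot-key traffic is shared by several nodes so no single node is overloaded.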
  • Patent number: 11537517
Abstract: A memory device comprises: a page buffer including a first and a second latch; a control circuit configured to read data of a word line and store the data in the first latch, discharge the word line, move the data of the first latch to the second latch, and output the data of the second latch to an exterior; and control logic configured to control the control circuit such that execution sections of the discharging and the moving for a first word line at least partially overlap each other when a second or third cache read command is input in a section in which the storing or discharging for the first word line is performed in response to a first cache read command for the first word line.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: December 27, 2022
    Assignee: SK hynix Inc.
    Inventors: Gwan Park, Jeong Gil Choi
  • Patent number: 11526444
    Abstract: A method for facilitating predictive caching of data is provided. The method includes retrieving raw data relating to user activity for a plurality of users, the user activity including a history of web resources accessed by a user; converting the raw data into a structured data set based on a predetermined criterion; generating a model based on the structured data set; training the model by using a training data set, the training data set including the user activity for a predetermined period of time; determining, by using the trained model, a predicted first web resource for the user; and automatically caching, in a memory, the predicted first web resource.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: December 13, 2022
    Assignee: JPMorgan Chase Bank, N.A.
    Inventors: Ramin Koch, Eric-Andre Vigroux, Liang Zhou, Mathieu Cliche, Yihui Tang, Howard Spector, Rebecca Setting, Neil V O'Donnell, Timothy Lorenz
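The pipeline in this abstract (structure the raw history, train a model, predict, pre-cache) can be illustrated with the simplest possible predictor, a per-user frequency count. A real system would use a learned model; this toy stand-in only shows the shape of the flow, and all names are hypothetical:

```python
from collections import Counter

def train(history):
    """Structure raw (user, resource) access events into a per-user
    frequency 'model'."""
    model = {}
    for user, resource in history:
        model.setdefault(user, Counter())[resource] += 1
    return model

def predict_and_cache(model, user, cache):
    """Predict the user's likely next web resource and pre-cache it."""
    counts = model.get(user)
    if not counts:
        return None
    resource = counts.most_common(1)[0][0]
    cache[user] = resource          # automatically cache the prediction
    return resource
```

The user's predicted resource is in the cache before the next request arrives, which is the latency win predictive caching targets.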
  • Patent number: 11520497
Abstract: A variety of applications can include a memory device having a memory die designed to control a power budget for a cache and a memory array of the memory die. A first flag received from a data path identifies the start of a cache operation on the data, and a second flag from the data path identifies the end of the cache operation. A controller for peak power management can be implemented to control the power budget based on a determination, from the first and second flags, of the current usage associated with the cache. In various embodiments, the controller can be operable to feed back a signal to a memory controller external to the memory die to adjust an operating speed of an interface from the memory controller to the memory die. Additional devices, systems, and methods are discussed.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: December 6, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Liang Yu, Jonathan Scott Parry, Luigi Pilolli
  • Patent number: 11513956
Abstract: A technique maintains availability of a non-volatile cache. The technique involves arranging a plurality of non-volatile random-access memory (NVRAM) drives into initial drive sets that form the non-volatile cache. The technique further involves detecting a failed initial drive set among the initial drive sets. The plurality of NVRAM drives now includes failed NVRAM drives that belong to the failed initial drive set and remaining non-failed NVRAM drives. The technique further involves, in response to detecting the failed initial drive set, rearranging the remaining non-failed NVRAM drives of the plurality of NVRAM drives into new drive sets that form the non-volatile cache.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vamsi K. Vankamamidi, Geng Han, Chun Ma, Jianbin Kang
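The rearrangement step in this abstract can be sketched as regrouping the surviving drives into new sets. The set size of 2 is an assumed parameter for illustration, not taken from the patent:

```python
def rearrange_drive_sets(drive_sets, failed_index, set_size=2):
    """Drop the failed drive set and regroup the remaining non-failed
    drives into new sets that re-form the non-volatile cache."""
    survivors = [drive for i, s in enumerate(drive_sets)
                 if i != failed_index
                 for drive in s]
    # Only complete sets can back the cache; leftovers are spares.
    full = len(survivors) // set_size * set_size
    return [survivors[i:i + set_size] for i in range(0, full, set_size)]
```

After a set failure, the cache keeps operating on the reconstituted sets instead of going offline.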
  • Patent number: 11500773
Abstract: Disclosed herein are methods, systems, and processes to provide coherency across disjoint caches in clustered environments. It is determined whether a data object is owned by an owner node, where the owner node is one of multiple nodes of a cluster. If the owner node for the data object is identified by the determining, a request for the data object is sent to the owner node. However, if the owner node for the data object is not identified by the determining, a node in the cluster is selected as the owner node, and the request for the data object is sent to the owner node.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: November 15, 2022
    Assignee: Veritas Technologies LLC
    Inventors: Bhushan Jagtap, Mark Hemment, Anindya Banerjee, Ranjit Noronha, Jitendra Patidar, Kundan Kumar, Sneha Pawar
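The owner-based routing in this abstract reduces to a few lines: look up the object's owner, elect one if none exists, and direct the request there. The election rule (lowest node ID) is an assumption purely for illustration:

```python
def route_request(obj_id, owners, nodes):
    """Return the node that should serve the request for `obj_id`,
    electing an owner for the object if it has none."""
    owner = owners.get(obj_id)
    if owner is None:
        owner = min(nodes)       # assumed election policy: lowest node id
        owners[obj_id] = owner   # record ownership for future requests
    return owner
```

Funneling all requests for an object through one owner node is what keeps the otherwise disjoint per-node caches coherent.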
  • Patent number: 11494303
Abstract: In a method of flushing cached data in a data storage system, instances of a working-set structure (WSS) are used over a succession of operating periods to organize cached data for storing to the persistent storage. In each operating period, leaf structures of the WSS are associated with respective address ranges of a specified size. Between operating periods, a structure-tuning operation is performed to adjust the specified size and thereby dynamically adjust a PD-to-leaf ratio of the WSS, including (1) comparing a last-period PD-to-leaf ratio to a predetermined ratio range, (2) when the ratio is below the predetermined ratio range, increasing the specified size for use in a next operating period, and (3) when the ratio is above the predetermined ratio range, decreasing the specified size for use in the next operating period.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: November 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Geng Han, Yousheng Liu
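The between-periods tuning step in this abstract is a simple control rule: compare the last period's PD-to-leaf ratio to a target range and grow or shrink the leaf address-range size accordingly. The bounds and the doubling/halving step below are assumptions for illustration:

```python
def tune_leaf_size(leaf_size, last_ratio, lo=2.0, hi=8.0):
    """Adjust the leaf address-range size for the next operating period
    based on the last period's PD-to-leaf ratio."""
    if last_ratio < lo:
        return leaf_size * 2             # ratio too low: coarser leaves
    if last_ratio > hi:
        return max(leaf_size // 2, 1)    # ratio too high: finer leaves
    return leaf_size                     # within range: leave as-is
```

The feedback loop keeps the working-set structure's fan-out near a sweet spot as the workload's locality shifts.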
  • Patent number: 11487662
Abstract: A memory controller according to an embodiment includes a map caching controller generating a slot allocation request to allocate, among a plurality of physical slots, a physical slot in which a first map segment is to be stored; a map buffer manager outputting the first map segment, first physical slot information, and tree slot information in response to the slot allocation request; and a mapping manager receiving the first map segment, the first physical slot information, and the tree slot information, deleting a second map segment and second physical slot information stored in a tree slot among a plurality of tree slots of a map tree, and storing the first map segment and the first physical slot information in the tree slot. At least one of the second map segment and the second physical slot information stored in the tree slot is invalid.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: November 1, 2022
    Assignee: SK hynix Inc.
    Inventor: Hye Mi Kang
  • Patent number: 11481332
    Abstract: A microprocessor includes a physically-indexed-and-tagged second-level set-associative cache. Each cache entry is uniquely identified by a set index and a way number. Each entry of a write-combine buffer (WCB) holds write data to be written to a write physical memory address, a portion of which is a write physical line address. Each WCB entry also holds a write physical address proxy (PAP) for the write physical line address. The write PAP specifies the set index and the way number of the cache entry into which a cache line specified by the write physical line address is allocated. In response to receiving a store instruction that is being committed and that specifies a store PAP, the WCB compares the store PAP with the write PAP of each WCB entry and requires a match as a necessary condition for merging store data of the store instruction into a WCB entry.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: October 25, 2022
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
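The merge condition in this abstract can be shown in miniature: a committing store carries a physical address proxy (PAP), i.e. the (set index, way) pair of the L2 entry holding its cache line, and its data may merge into a write-combine-buffer entry only when the PAPs match. Entry layout here is a hypothetical simplification:

```python
def try_merge(wcb, store_pap, offset, data):
    """Merge store data into a WCB entry whose write PAP matches the
    committing store's PAP; return True if a merge happened."""
    for entry in wcb:
        if entry["pap"] == store_pap:   # necessary condition for merging
            entry["data"][offset] = data
            return True
    return False
```

Comparing the short PAPs instead of full physical line addresses is the point of the scheme: the match check is cheaper while still identifying the same cache line.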
  • Patent number: 11481130
    Abstract: Techniques involve determining a target identifier of an operation command if a type of the operation command is determined to be a target type, the target type indicating that the operation command is a command for acquiring data. The techniques further involve executing the operation command to acquire a target data block if it is determined that the target identifier does not exist in a historical mapping relationship between stored data blocks and identifiers of historical operation commands for the stored data blocks. The techniques further involve storing the target data block and a target mapping relationship between the target data block and the target identifier in a storage space for storing the stored data blocks and the historical mapping relationship. Accordingly, different types of commands can be quickly distinguished, thereby reducing the time for processing commands of a target type, reducing the bandwidth consumed, and improving processing efficiency.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: October 25, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Zhibin Zhang, Yalan Kuang
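The flow in this abstract amounts to memoizing acquire-type commands by their target identifier. The sketch below is illustrative only; the type tag and the `execute` callable are hypothetical stand-ins:

```python
TARGET_TYPE = "acquire"   # the target type: commands that acquire data

def handle_command(cmd, mapping, execute):
    """For acquire-type commands, consult the historical mapping from
    command identifiers to stored data blocks before executing."""
    if cmd["type"] != TARGET_TYPE:
        return execute(cmd)           # other command types: execute as-is
    target_id = cmd["id"]
    if target_id in mapping:
        return mapping[target_id]     # already acquired: skip execution
    block = execute(cmd)              # acquire the target data block
    mapping[target_id] = block        # store block + mapping for reuse
    return block
```

Repeated acquire commands with the same identifier are served from the stored blocks, cutting processing time and bandwidth as the abstract claims.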