Patents Examined by Tracy C Chan
-
Patent number: 11656985
Abstract: Methods and systems are provided for allocating memory. A portion of memory may be allocated by: selecting a type of memory to allocate in a client device from a group of memory types in response to a memory allocation request and/or in response to a request to access a portion of an address space, wherein the selection of the type of memory to allocate is based on an available memory determination; selecting a portion of the local primary memory, a portion of the external primary memory, or a portion of the memory-mapped file for the portion of memory to allocate at the client device depending on the selected type of memory; and mapping at least the selected portion to the address space.
Type: Grant
Filed: January 29, 2021
Date of Patent: May 23, 2023
Assignee: Kove IP, LLC
Inventors: Timothy A. Stabrawa, Zachary A. Cornelius, John Overton, Andrew S. Poling, Jesse I. Taylor
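The tiered selection the abstract describes can be illustrated with a small sketch: prefer local primary memory, fall back to external primary memory, then to a memory-mapped file, based on availability, and record the mapping into an address space. All names (`Tier`, `Allocator`) and the preference order details are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the allocation flow: pick a memory type based on an
# available-memory determination, then map a region from that tier into the
# client's address space. Names and sizes are illustrative only.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity: int
    used: int = 0

    @property
    def free(self) -> int:
        return self.capacity - self.used

class Allocator:
    def __init__(self, local: Tier, external: Tier, mmap_file: Tier):
        # Preference order mirrors the abstract's three memory types:
        # local primary, external primary, then a memory-mapped file.
        self.tiers = [local, external, mmap_file]
        self.address_space = {}   # address -> (tier name, size)
        self.next_addr = 0x1000

    def allocate(self, size: int):
        # "selection ... is based on an available memory determination"
        for tier in self.tiers:
            if tier.free >= size:
                tier.used += size
                addr = self.next_addr
                self.next_addr += size
                # "mapping at least the selected portion to the address space"
                self.address_space[addr] = (tier.name, size)
                return addr, tier.name
        raise MemoryError("no tier can satisfy the request")
```

A request that exceeds the local tier's remaining capacity spills to the external tier, and so on down the list.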
-
Patent number: 11651810
Abstract: A memory system includes: a plurality of memory chips each including a plurality of banks and each suitable for generating a tracking address by tracking a row-hammer risk of selected banks among the banks, encrypting the tracking address using an encryption key to output tracking information to a corresponding data bus of a plurality of data buses, and performing a target refresh operation according to a row-hammer address transferred through a command/address bus; and a memory controller suitable for collecting the tracking information for the banks transferred through the plurality of data buses to generate and output the row-hammer address to the command/address bus.
Type: Grant
Filed: November 16, 2021
Date of Patent: May 16, 2023
Assignee: SK hynix Inc.
Inventor: Woongrae Kim
-
Patent number: 11636397
Abstract: Embodiments of the present invention are directed to facilitating concurrent forecasting associated with multiple time series data sets. In accordance with aspects of the present disclosure, a request to perform a predictive analysis in association with multiple time series data sets is received. Thereafter, the request is parsed to identify each of the time series data sets to use in the predictive analysis. For each time series data set, an object is initiated to perform the predictive analysis for the corresponding time series data set. Generally, the predictive analysis predicts expected outcomes based on the corresponding time series data set. Each object is concurrently executed to generate expected outcomes associated with the corresponding time series data set, and the expected outcomes associated with each of the corresponding time series data sets are provided for display.
Type: Grant
Filed: January 24, 2022
Date of Patent: April 25, 2023
Assignee: Splunk Inc.
Inventors: Manish Sainani, Nghi Huu Nguyen, Zidong Yang
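The parse-then-run flow above (one object per time series, all executed concurrently) can be sketched briefly. The trivial moving-average "forecast" and all names are assumptions for the example, not Splunk's method.

```python
# Illustrative sketch: parse a request into one worker object per time series,
# then execute all objects concurrently and collect their expected outcomes.
from concurrent.futures import ThreadPoolExecutor

class SeriesForecaster:
    def __init__(self, name, series):
        self.name = name
        self.series = series

    def predict(self, horizon=3):
        # Trivial stand-in predictor: repeat the mean of the last 3 points.
        window = self.series[-3:]
        level = sum(window) / len(window)
        return [level] * horizon

def forecast_all(datasets, horizon=3):
    # One object per identified data set, executed concurrently.
    objs = [SeriesForecaster(name, series) for name, series in datasets.items()]
    with ThreadPoolExecutor() as pool:
        futures = {o.name: pool.submit(o.predict, horizon) for o in objs}
    # Gather the expected outcomes per data set (e.g. for display).
    return {name: f.result() for name, f in futures.items()}
```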
-
Patent number: 11609854
Abstract: Techniques are disclosed for utilizing checkpoints to achieve resiliency of metadata in a storage system. A storage control system writes metadata to a persistent write cache. The storage control system performs a checkpoint generation process to generate a new metadata checkpoint which includes at least a portion of the metadata in the persistent write cache. The checkpoint generation process comprises placing a lock on processing to enable metadata in the persistent write cache to reach a consistent state, creating a metadata checkpoint structure in memory, removing the lock on processing to allow metadata updates in the persistent write cache, destaging at least a portion of the metadata from the persistent write cache to the metadata checkpoint structure, and persistently storing the metadata checkpoint structure.
Type: Grant
Filed: October 28, 2021
Date of Patent: March 21, 2023
Assignee: Dell Products L.P.
Inventors: Yosef Shatsky, Doron Tal
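The checkpoint sequence described in the abstract (quiesce under a lock, snapshot, release the lock, then destage) can be sketched as follows; `threading.Lock` stands in for the "lock on processing", and all class and method names are illustrative assumptions.

```python
# Rough sketch of the checkpoint generation process: updates are briefly
# blocked while a consistent in-memory snapshot is taken, then resume while
# the snapshot is destaged to the persistent checkpoint structure.
import threading

class WriteCache:
    def __init__(self):
        self.metadata = {}
        self.lock = threading.Lock()
        self.checkpoints = []      # stand-in for persistent checkpoint storage

    def update(self, key, value):
        with self.lock:            # blocked only while a checkpoint is created
            self.metadata[key] = value

    def checkpoint(self):
        with self.lock:
            # 1. Lock held: metadata has reached a consistent state.
            # 2. Create the checkpoint structure in memory (a snapshot).
            snapshot = dict(self.metadata)
        # 3. Lock released: updates may proceed while we destage.
        # 4. Destage the snapshot to the persistent checkpoint structure.
        self.checkpoints.append(snapshot)
        return snapshot
```

The key design point is that the lock covers only the cheap in-memory copy, not the slow destage, so metadata updates stall only briefly.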
-
Patent number: 11604734
Abstract: Embodiments of the disclosure relate to a memory system and an operating method thereof. The memory system is configured to select, among the plurality of memory blocks, one or more target memory blocks operable to store user data to be accessed by a host which requests the memory system to write data, and determine whether to control a point of execution time of a command received from the host, based on valid page counts of the respective target memory blocks.
Type: Grant
Filed: April 27, 2021
Date of Patent: March 14, 2023
Assignee: SK hynix Inc.
Inventor: Min Gu Kang
-
Patent number: 11604730
Abstract: A processor, including a core; and a cache-coherent memory fabric coupled to the core and having a primary cache agent (PCA) configured to provide a primary access path; and a secondary cache agent (SCA) configured to provide a secondary access path that is redundant to the primary access path, wherein the PCA has a coherency controller configured to maintain data in the secondary access path coherent with data in the primary access path.
Type: Grant
Filed: July 27, 2020
Date of Patent: March 14, 2023
Assignee: Intel Corporation
Inventors: Rahul Pal, Philip Abraham, Ajaya Durg, Bahaa Fahim, Yen-Cheng Liu, Sanilkumar Mm
-
Patent number: 11599467
Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
Type: Grant
Filed: May 27, 2021
Date of Patent: March 7, 2023
Assignee: Arm Limited
Inventors: Jamshed Jalal, Bruce James Mathewson, Tushar P Ringe, Sean James Salisbury, Antony John Harris
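The allocation rule above reduces to: on a miss, allocate a line and derive its shareable/non-shareable state bit from the source's domain. A minimal sketch, with all names being illustrative assumptions rather than Arm's implementation:

```python
# Minimal sketch of the miss path: a new line's state bit records whether the
# data is shareable, based purely on whether the transaction's source sits in
# the coherent or the non-coherent domain.
class SystemCache:
    def __init__(self):
        self.lines = {}   # memory address -> {"data": ..., "shareable": bool}

    def access(self, address, source_domain, data=None):
        line = self.lines.get(address)
        if line is None:
            # Miss: allocate a line and set the state bit from the source
            # location (coherent domain => shareable data).
            line = {"data": data, "shareable": source_domain == "coherent"}
            self.lines[address] = line
        return line
```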
-
Patent number: 11599468
Abstract: Memory system features may promote cache coherency where first and second memory clients may attempt to work on the same data. A second client cache system may provide a read request for data and associated metadata. The metadata element may be detected in a first client cache system. The first client cache system may write or flush, such as to a system memory, one or more cache lines containing the metadata and associated data and invalidate the flushed cache lines. The second client cache system may receive the data and metadata, such as from the system memory, completing or fulfilling the read request.
Type: Grant
Filed: November 30, 2021
Date of Patent: March 7, 2023
Assignee: QUALCOMM Incorporated
Inventors: Andrew Edmund Turner, George Patsilaras
-
Patent number: 11586552
Abstract: An apparatus includes a cache memory circuit configured to store cache lines, and a cache controller circuit. The cache controller circuit is configured to receive a read request to an address associated with a portion of a cache line. In response to an indication that the portion of the cache line currently has at least a first sub-portion that is invalid and at least a second sub-portion that is modified relative to a version in a memory, the cache controller circuit is further configured to fetch values corresponding to the address from the memory, to generate an updated version of the portion of the cache line by using the fetched values to update the first sub-portion, but not the second sub-portion, of the portion of the cache line, and to generate a response to the read request that includes the updated version of the portion of the cache line.
Type: Grant
Filed: May 13, 2021
Date of Patent: February 21, 2023
Assignee: Apple Inc.
Inventors: Ilya Granovsky, Tom Greenshtein
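The merge step in this abstract is the interesting part: fetched memory values refill only the invalid sub-portions, while modified sub-portions are preserved so dirty data is not clobbered. A sketch, assuming byte-granularity sub-portions for illustration:

```python
# Sketch of the partial-line merge: sub-portions marked invalid are refilled
# from the fetched memory values; valid and (crucially) modified sub-portions
# keep their cached values. Byte-granularity states are an assumption here.
INVALID, VALID, MODIFIED = "invalid", "valid", "modified"

def merge_line_portion(cache_bytes, states, memory_bytes):
    """Return the updated portion of the cache line: fetched memory values
    replace only the sub-portions currently marked invalid."""
    merged = []
    for cached, state, fetched in zip(cache_bytes, states, memory_bytes):
        if state == INVALID:
            merged.append(fetched)    # refill stale sub-portion from memory
        else:
            merged.append(cached)     # keep valid and modified (dirty) data
    return merged
```

The response to the read request would then carry this merged portion, rather than either the raw cached bytes or the raw memory bytes alone.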
-
Patent number: 11579811
Abstract: A storage device is described. The storage device may store data in a storage memory, and may have a host interface to manage communications between the storage device and a host machine. The storage device may also include a translation layer to translate addresses between the host machine and the storage memory, and a storage interface to access data from the storage memory. An in-storage monitoring engine may determine characteristics of the storage device, such as latency, bandwidth, and retention.
Type: Grant
Filed: November 15, 2021
Date of Patent: February 14, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Inseok Stephen Choi, Yang Seok Ki, Byoung Young Ahn
-
Patent number: 11573708
Abstract: A solid state drive having at least one component solid state drive, a spare solid state drive, and a drive aggregator. The drive aggregator has at least one host interface, at least one drive interface connected to the at least one component solid state drive, and an interface connected to the spare solid state drive. The drive aggregator is configured to maintain, in the spare solid state drive, a copy of a dataset that is stored in the component solid state drive. In response to a failure of the component solid state drive, the drive aggregator is configured to substitute a function of the component solid state drive with respect to the dataset with a corresponding function of the spare solid state drive, based on the copy of the dataset maintained in the spare solid state drive.
Type: Grant
Filed: June 25, 2019
Date of Patent: February 7, 2023
Assignee: Micron Technology, Inc.
Inventors: Christopher Joseph Bueb, Poorna Kale
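The mirror-then-failover behavior can be shown with a toy sketch: writes keep the spare's copy current, and after a component failure reads are served from that copy. All names are illustrative assumptions, not Micron's interface.

```python
# Toy sketch of the aggregator behavior: maintain a copy of the dataset on the
# spare, and substitute the spare's function for the component's on failure.
class DriveAggregator:
    def __init__(self):
        self.component = {}   # dataset stored on the component SSD
        self.spare = {}       # copy maintained on the spare SSD
        self.failed = False

    def write(self, key, value):
        if not self.failed:
            self.component[key] = value
        self.spare[key] = value      # keep the spare's copy current

    def fail_component(self):
        self.failed = True           # component SSD drops out

    def read(self, key):
        # Substitute the spare's function for the failed component's.
        store = self.spare if self.failed else self.component
        return store[key]
```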
-
Patent number: 11567866
Abstract: The technology disclosed herein may detect, avoid, or protect against "use after free" or "double free" programming logic errors. An example method may involve: receiving a plurality of requests to allocate memory, the plurality of requests comprising a first request and a second request; identifying a chunk of memory; generating a plurality of pointers to the chunk of memory, the plurality of pointers comprising a first pointer and a second pointer; providing the first pointer responsive to the first request and the second pointer responsive to the second request; and updating pointer validation data after providing the second pointer, wherein the pointer validation data indicates at least one of the plurality of pointers is valid and at least one of the plurality of pointers is invalid.
Type: Grant
Filed: August 24, 2020
Date of Patent: January 31, 2023
Assignee: Red Hat, Inc.
Inventor: Michael Tsirkin
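The validation-data idea can be sketched concretely: several distinct pointers may refer to the same chunk, and per-pointer validity bits let the allocator catch a free through a stale pointer. The policy of invalidating older pointers on each new allocation is an assumption for the example, as are all names.

```python
# Hedged sketch of pointer validation data: each handed-out pointer has a
# validity bit; handing out a new pointer to the chunk invalidates the old
# ones, so a later free through a stale pointer is detected.
class ChunkAllocator:
    def __init__(self):
        self.valid = {}      # pointer id -> bool (the pointer validation data)
        self.next_id = 1

    def alloc(self):
        # Invalidate previously issued pointers to the chunk, then issue a
        # fresh, valid pointer (at least one valid, at least one invalid).
        for p in self.valid:
            self.valid[p] = False
        ptr = self.next_id
        self.next_id += 1
        self.valid[ptr] = True
        return ptr

    def free(self, ptr):
        if not self.valid.get(ptr, False):
            raise RuntimeError("double free or stale pointer detected")
        self.valid[ptr] = False
```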
-
Patent number: 11567864
Abstract: An operation method of a storage device including a nonvolatile memory and a memory controller configured to control the nonvolatile memory is provided. The operation method includes erasing memory cells of the nonvolatile memory using the memory controller and prohibiting an erase of the erased memory cells for a critical time using the memory controller.
Type: Grant
Filed: October 20, 2021
Date of Patent: January 31, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventor: Nam Wook Kang
-
Patent number: 11567874
Abstract: An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.
Type: Grant
Filed: November 8, 2021
Date of Patent: January 31, 2023
Assignee: Texas Instruments Incorporated
Inventors: Bipin Prasad Heremagalur Ramaprasad, David Matthew Thompson, Abhijeet Ashok Chachad, Hung Ong
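The address arithmetic behind "maps to the lower half" is easy to sketch. Assuming, purely for illustration, a 64-byte L1 line and a 128-byte L2 line (the patent does not state sizes), and an upper-half behavior added only for contrast:

```python
# Illustrative sketch of the two-level fill policy: the L2 line is twice the
# L1 line size; a miss mapping to the LOWER half of an L2 line brings the
# entire L2 line (both halves) into L1. Sizes are assumptions.
L1_LINE = 64
L2_LINE = 128   # each L2 line = a lower half + an upper half, L1-line sized

def l2_line_base(addr):
    return addr - (addr % L2_LINE)

def maps_to_lower_half(addr):
    # Lower half of the enclosing L2 line = its first L1_LINE bytes.
    return (addr % L2_LINE) < L1_LINE

def fill_l1_on_miss(addr):
    """Return the L1-line base addresses fetched from L2 on a miss."""
    base = l2_line_base(addr)
    if maps_to_lower_half(addr):
        # Retrieve the entire L2 line: both halves are returned to L1.
        return [base, base + L1_LINE]
    return [base + L1_LINE]   # upper-half miss: shown for contrast only
```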
-
Patent number: 11567662
Abstract: A request to generate a storage system model is received. The storage system model represents at least a portion of a storage system. In response to receiving the request, a storage system interface configuration is loaded. The storage system interface configuration comprises an attribute of an entity model. The attribute corresponds to an attribute of a storage system entity of the storage system. Further in response to receiving the request, the entity model is identified as representing the storage system entity. In response to identifying the entity model as representing the storage system entity, the entity model is instantiated.
Type: Grant
Filed: December 13, 2021
Date of Patent: January 31, 2023
Assignee: NetApp, Inc.
Inventors: Brian Joseph McGiverin, Christopher Michael Morrissey, Daniel Andrew Sarisky, Santosh C. Lolayekar
-
Patent number: 11561897
Abstract: Cache memory requirements between normal and peak operation may vary by two orders of magnitude or more. A cache memory management system for multi-tenant computing environments monitors memory requests and uses a pattern matching classifier to generate patterns which are then delivered to a neural network. The neural network is trained to predict near-future cache memory performance based on the current memory access patterns. An optimizer allocates cache memory among the tenants to ensure that each tenant has sufficient memory to meet its required service levels while avoiding the need to provision the computing environment with worst-case scenario levels of cache memory. System resources are preserved while maintaining required performance levels.
Type: Grant
Filed: July 16, 2018
Date of Patent: January 24, 2023
Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
Inventors: Yu Gu, Hongqin Song
-
Patent number: 11544197
Abstract: A mapping correspondence between memory addresses and request counts, together with a cache line flusher, enables selective cache flushing for persistent memory in a computing system to optimize write performance. Random writes from cache memory to persistent memory are prevented from magnifying write amplification, enabling computing systems to implement persistent memory, at least in part, as random-access memory. Conventional cache replacement policies may remain implemented in a computing system, but may be effectively overridden by operations of a cache line flusher according to example embodiments of the present disclosure, preventing conventional cache replacement policies from being triggered. Implementations of the present disclosure may avoid becoming part of the critical path of a set of computer-executable instructions being executed by a client of cache memory, minimizing additional computation overhead in the critical path.
Type: Grant
Filed: September 18, 2020
Date of Patent: January 3, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Shuo Chen, Zhu Pang, Qingda Lu, Jiesheng Wu, Yuanjiang Ni
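One way to read the address-to-request-count mapping is as the input to a selection step: lines whose backing addresses see few requests are flushed early, before the default replacement policy would fire. A sketch under that reading; the threshold and all names are illustrative assumptions, not Alibaba's design.

```python
# Sketch of a selective cache line flusher keyed on the per-address request
# counts: "cold" cached addresses are chosen for early write-back so the
# conventional replacement policy is rarely triggered.
def select_lines_to_flush(request_counts, cached_lines, cold_threshold=2):
    """Return cached addresses whose request count falls below the threshold;
    these are flushed ahead of the default eviction policy."""
    return sorted(
        addr for addr in cached_lines
        if request_counts.get(addr, 0) < cold_threshold
    )
```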
-
Patent number: 11526437
Abstract: A method for heap space management includes, in response to a determination that consumption of a first heap space of an application exceeds a first threshold, determining whether a second heap space of the application after garbage collection is sufficient to accommodate data stored in the first heap space. The method further includes, in response to a determination that the second heap space after the garbage collection is sufficient to accommodate the data, performing the garbage collection on the second heap space. The method further includes storing the data into the second heap space.
Type: Grant
Filed: June 11, 2021
Date of Patent: December 13, 2022
Assignee: International Business Machines Corporation
Inventors: Gan Zhang, Xing Xing Shen, Shan Gao, Le Chang, Ming Lei Zhang, Zeng Yu Peng
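The decision chain above (threshold check, sufficiency check, then collect and move) can be sketched as a small function. The 90% threshold and the capacity model are illustrative assumptions, not IBM's parameters.

```python
# Minimal sketch of the heap-management decision: GC the second space only if
# its post-GC free room would hold the first space's data, then move the data.
def manage_heaps(first_used, first_capacity, second_live, second_capacity,
                 threshold=0.9):
    """Return the action taken: 'none', 'insufficient', or
    'collected-and-moved'."""
    if first_used <= threshold * first_capacity:
        return "none"                       # first heap not over threshold
    # Room in the second space once its garbage is collected away:
    second_free_after_gc = second_capacity - second_live
    if second_free_after_gc < first_used:
        return "insufficient"               # GC would not make enough room
    # Perform GC on the second space, then store the data into it.
    return "collected-and-moved"
```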
-
Patent number: 11513725
Abstract: A memory module according to some embodiments is operable in a computer system, and comprises a volatile memory subsystem and a module controller coupled to the volatile memory subsystem. The volatile memory subsystem is configurable to be coupled to a memory channel including a data bus, and includes dynamic random access memory (DRAM) devices. The memory module allows independent control of strobe paths and data paths between the DRAM devices and the data bus, and is configurable to perform a memory write operation during which write data is provided to the volatile memory subsystem together with write strobes transmitted via first strobe paths between the DRAM devices and the data bus, and a memory read operation during which read data from the volatile memory subsystem is output onto the data bus together with read strobes transmitted via second strobe paths between the module controller and the data bus.
Type: Grant
Filed: September 16, 2020
Date of Patent: November 29, 2022
Assignee: Netlist, Inc.
Inventors: Jeekyoung Park, Jordan Horwich
-
Patent number: 11500550
Abstract: Inhibiting memory accesses to executable modules. A hypervisor executing on a computing host initiates a virtual machine comprising a guest operating system. The hypervisor receives a communication from the guest operating system requesting that a range of memory utilized by the guest operating system be identified as being execute-only access. The hypervisor marks at least one physical page of memory that includes the range of memory as being execute-only access.
Type: Grant
Filed: August 27, 2019
Date of Patent: November 15, 2022
Assignee: Red Hat, Inc.
Inventor: Bandan Das