Cache Flushing Patents (Class 711/135)
  • Patent number: 11825146
    Abstract: Techniques and mechanisms described herein facilitate the storage of digital media recordings. According to various embodiments, a system is provided comprising a processor, a storage device, Random Access Memory (RAM), an archive writer, and a recording writer. The archive writer is configured to retrieve a plurality of small multimedia segments (SMSs) in RAM and write the plurality of SMSs into an archive container file in RAM. The single archive container file may correspond to a singular multimedia file when complete. New SMSs retrieved from RAM are appended into the archive container file if the new SMSs also correspond to the singular multimedia file. The recording writer is configured to flush the archive container file to be stored as a digital media recording on the storage device once enough SMSs have been appended by the archive writer to the archive container file to complete the singular multimedia file.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: November 21, 2023
    Assignee: TIVO CORPORATION
    Inventors: Do Hyun Chung, Ren L. Long, Dan Dennedy
  • Patent number: 11816354
    Abstract: Embodiments of the present disclosure relate to establishing persistent cache memory as a write tier. An input/output (IO) workload of a storage array can be analyzed. One or more write data portions of the IO workload can be stored in a persistent memory region of one or more disks of the storage array.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: November 14, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Owen Martin, Dustin Zentz, Vladimir Desyatov
  • Patent number: 11803470
    Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
  • Patent number: 11797456
    Abstract: Techniques described herein provide a handshake mechanism and protocol for notifying an operating system whether system hardware supports persistent cache flushing. System firmware may determine whether the hardware is capable of supporting a full flush of processor caches and volatile memory buffers in the event of a power outage or asynchronous reset. If the hardware is capable, then persistent cache flushing may be selectively enabled and advertised to the operating system. Once persistent cache flushing is enabled, the operating system and applications may treat data committed to volatile processor caches as persistent. If disabled or not supported by system hardware, then the platform may not advertise support for persistent cache flushing to the operating system.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: October 24, 2023
    Assignee: Oracle International Corporation
    Inventor: Benjamin John Fuller
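    A minimal C sketch of the handshake described above, reduced to a capability flag that firmware sets only when the hardware can flush caches and volatile buffers on power loss, and that the OS reads before treating cached data as persistent. The structure and field names are hypothetical, not the patent's interface.

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical capability record shared between firmware and the OS. */
      struct platform_caps {
          bool hw_flush_on_power_loss;   /* hardware can flush caches/buffers */
          bool persistent_cache_flush;   /* advertised to the OS when enabled */
      };

      /* Firmware side: selectively enable and advertise the feature only when
       * the hardware can flush processor caches and volatile memory buffers
       * on a power outage or asynchronous reset. */
      static void firmware_init_caps(struct platform_caps *caps, bool hw_capable)
      {
          caps->hw_flush_on_power_loss = hw_capable;
          caps->persistent_cache_flush = hw_capable;
      }

      /* OS side: treat data committed to volatile CPU caches as persistent
       * only if the platform advertised support. */
      static bool os_cache_is_persistent(const struct platform_caps *caps)
      {
          return caps->persistent_cache_flush;
      }

      int main(void)
      {
          struct platform_caps caps;
          firmware_init_caps(&caps, true);
          printf("persistent cache flush advertised: %d\n",
                 os_cache_is_persistent(&caps));
          return 0;
      }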
  • Patent number: 11783040
    Abstract: An information handling system includes a first memory that stores a firmware image associated with the baseboard management controller. The baseboard management controller begins execution of a kernel, which in turn performs a boot operation of the information handling system. The baseboard management controller begins a file system initialization program. During the boot operation, the baseboard management controller performs a full read and cryptographic verification of the firmware image via a DM-Verity daemon of the file system initialization program. In response to the full read of the firmware image being completed, the baseboard management controller provides a flush command to the kernel via the DM-Verity daemon. The baseboard management controller flushes a cache buffer associated with the baseboard management controller via the kernel.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: October 10, 2023
    Assignee: Dell Products L.P.
    Inventors: Michael E. Brown, Nagendra Varma Totakura, Vasanth Venkataramanappa, Jack E. Fewx
  • Patent number: 11762771
    Abstract: Methods, systems, and devices for advanced power off notification for managed memory are described. An apparatus may include a memory array comprising a plurality of memory cells and a controller coupled with the memory array. The controller may be configured to receive a notification indicating a transition from a first state of the memory array to a second state of the memory array. The notification may include a value, the value comprising a plurality of bits and corresponding to a minimum duration remaining until a power supply of the memory array is deactivated. The controller may also execute a plurality of operations according to an order determined based at least in part on a parameter associated with the memory array and receiving the notification comprising the value.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: September 19, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Vincenzo Reina, Binbin Huo
  • Patent number: 11755483
    Abstract: In a multi-node system, each node includes tiles. Each tile includes a cache controller, a local cache, and a snoop filter cache (SFC). The cache controller, responsive to a memory access request by the tile, checks the local cache to determine whether the data associated with the request has been cached by the local cache of the tile. The cached data from the local cache is returned responsive to a cache-hit. The SFC is checked to determine whether any other tile of a remote node has cached the data associated with the memory access request. If it is determined that the data has been cached by another tile of a remote node and if there is a cache-miss in the local cache, then the memory access request is transmitted to the global coherency unit (GCU) and the snoop filter to fetch the cached data. Otherwise, an interconnected memory is accessed.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: September 12, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Pranith Kumar Denthumdas, Rabin Sugumar, Isam Wadih Akkawi
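    The lookup order in the abstract above, modeled as a small C decision routine; the boolean state variables stand in for tile hardware and are purely illustrative.

      #include <stdbool.h>
      #include <stdio.h>

      /* Toy stand-ins for per-tile hardware state; both names are hypothetical. */
      static bool local_hit;      /* local cache holds the requested line        */
      static bool remote_copy;    /* snoop filter cache reports that a remote    */
                                  /* node's tile has cached the line             */

      /* Access path sketched from the abstract: serve a local hit, otherwise
       * consult the snoop filter cache (SFC) to decide between fetching through
       * the global coherency unit (GCU) or reading interconnected memory. */
      static const char *tile_read(void)
      {
          if (local_hit)
              return "served from local cache";
          if (remote_copy)
              return "fetched via GCU and snoop filter";
          return "read from interconnected memory";
      }

      int main(void)
      {
          local_hit = false;
          remote_copy = true;
          puts(tile_read());
          return 0;
      }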
  • Patent number: 11748260
    Abstract: A first run-time environment executing a first instance of an application, and a second run-time environment executing a second instance of the application are established. An indication of an impending commencement of a reduced-capacity phase of operation of the first run-time environment is obtained at a service request receiver. Based at least in part on the indication, the service request receiver directs a service request to the second instance of the application.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: September 5, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: James Christopher Sorenson, III, Yishai Galatzer, Bernd Joachim Wolfgang Mathiske, Steven Collison, Paul Henry Hohensee
  • Patent number: 11734175
    Abstract: The present technology includes a storage device including a memory device, which includes a first storage region and a second storage region, and a memory controller configured to, in response to a write request to the first storage region from an external host, acquire data stored in the first storage region based on fail prediction information provided by the memory device and to perform a write operation corresponding to the write request, wherein the first storage region and the second storage region are allocated according to logical addresses of data to be stored by requests of the external host.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: August 22, 2023
    Assignee: SK hynix Inc.
    Inventors: Yong Jin, Jung Ki Noh, Seung Won Jeon, Young Kyun Shin, Keun Hyung Kim
  • Patent number: 11720368
    Abstract: Techniques for memory management of a data processing system are described herein. According to one embodiment, a memory usage monitor executed by a processor of a data processing system monitors memory usages of groups of programs running within a memory of the data processing system. In response to determining that a first memory usage of a first group of the programs exceeds a first predetermined threshold, a user level reboot is performed in which one or more applications running within a user space of an operating system of the data processing system are terminated and relaunched. In response to determining that a second memory usage of a second group of the programs exceeds a second predetermined threshold, a system level reboot is performed in which one or more system components running within a kernel space of the operating system are terminated and relaunched.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: August 8, 2023
    Assignee: APPLE INC.
    Inventors: Andrew D. Myrick, David M. Chan, Jonathan R. Reeves, Jeffrey D. Curless, Lionel D. Desai, James C. McIlree, Karen A. Crippes, Rasha Eqbal
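    The two-threshold policy above, sketched as a C routine: exceeding the kernel-space group's limit triggers a system-level reboot, while exceeding only the user-space group's limit triggers a user-level reboot. The group names, threshold values, and usage numbers are illustrative, not values from the patent.

      #include <stdio.h>

      enum reboot_action { NO_ACTION, USER_LEVEL_REBOOT, SYSTEM_LEVEL_REBOOT };

      /* Illustrative thresholds (MB), not values from the patent. */
      #define USER_GROUP_LIMIT_MB   2048
      #define KERNEL_GROUP_LIMIT_MB  512

      static enum reboot_action check_memory_pressure(unsigned user_group_mb,
                                                      unsigned kernel_group_mb)
      {
          if (kernel_group_mb > KERNEL_GROUP_LIMIT_MB)
              return SYSTEM_LEVEL_REBOOT;   /* kernel-space components relaunched */
          if (user_group_mb > USER_GROUP_LIMIT_MB)
              return USER_LEVEL_REBOOT;     /* user-space applications relaunched */
          return NO_ACTION;
      }

      int main(void)
      {
          printf("%d\n", check_memory_pressure(3000, 100));  /* 1: user-level reboot */
          return 0;
      }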
  • Patent number: 11704235
    Abstract: A memory system of an embodiment includes a nonvolatile memory, a primary cache memory, a secondary cache memory, and a processor. The processor performs address conversion by using logical-to-physical address conversion information relating to data to be addressed in the nonvolatile memory. Based on whether first processing or second processing is performed on the nonvolatile memory, the processor controls whether the logical-to-physical address conversion information relating to the first processing is stored in the primary cache memory as cache data or the logical-to-physical address conversion information relating to the second processing is stored in the secondary cache memory as cache data.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: July 18, 2023
    Assignee: Kioxia Corporation
    Inventors: Shogo Ochiai, Nobuaki Tojo
  • Patent number: 11681657
    Abstract: A method, computer program product, and computer system for organizing a plurality of log records into a plurality of buckets, wherein each bucket is associated with a range of a plurality of ranges within a backing store. A bucket of the plurality of buckets from which a portion of the log records of the plurality of log records are to be flushed may be selected. The portion of the log records may be organized into parallel flush jobs. The portion of the log records may be flushed to the backing store in parallel.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: June 20, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Socheavy D. Heng, William C. Davenport
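    A toy C sketch of mapping log records to buckets by backing-store range, the step that precedes organizing each bucket into parallel flush jobs; the bucket count and range width are arbitrary illustrative constants.

      #include <stdio.h>

      #define NUM_BUCKETS 4
      #define RANGE_SPAN  0x1000u   /* illustrative width of each backing-store range */

      /* Map a log record's target offset to the bucket covering that range.
       * Records in a selected bucket can then be organized into parallel
       * flush jobs and flushed to the backing store in parallel. */
      static unsigned bucket_for(unsigned offset)
      {
          return (offset / RANGE_SPAN) % NUM_BUCKETS;
      }

      int main(void)
      {
          unsigned offsets[] = { 0x0100, 0x1200, 0x2300, 0x3400, 0x0500 };
          for (unsigned i = 0; i < sizeof offsets / sizeof offsets[0]; i++)
              printf("record at 0x%04x -> bucket %u\n",
                     offsets[i], bucket_for(offsets[i]));
          return 0;
      }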
  • Patent number: 11681617
    Abstract: A data processing apparatus includes a requester, a completer and a cache. Data is transferred between the requester and the cache and between the cache and the completer. The cache implements a cache eviction policy. The completer determines an eviction cost associated with evicting the data from the cache and notifies the cache of the eviction cost. The cache eviction policy implemented by the cache is based, at least in part, on the cost of evicting the data from the cache. The eviction cost may be determined, for example, based on properties or usage of a memory system of the completer.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: June 20, 2023
    Assignee: Arm Limited
    Inventor: Alexander Klimov
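    A hedged C sketch of cost-aware victim selection: recency is combined with the eviction cost reported by the completer. The scoring weights are invented for illustration and are not the patented policy.

      #include <stdio.h>

      struct line {
          unsigned tag;
          unsigned last_use;       /* recency stamp; smaller means older        */
          unsigned eviction_cost;  /* cost reported by the completer (memory)   */
      };

      /* Pick a victim mostly by age, but break near-ties in favour of the line
       * the completer says is cheapest to evict; the weights are illustrative. */
      static int pick_victim(const struct line *set, int ways)
      {
          int victim = 0;
          long best = -1;
          for (int i = 0; i < ways; i++) {
              long score = (long)set[i].last_use * 4 + set[i].eviction_cost;
              if (best < 0 || score < best) {
                  best = score;
                  victim = i;
              }
          }
          return victim;
      }

      int main(void)
      {
          struct line set[4] = {
              { 0x10, 5, 9 }, { 0x20, 5, 1 }, { 0x30, 7, 3 }, { 0x40, 9, 2 },
          };
          printf("evict way %d\n", pick_victim(set, 4));   /* -> way 1 */
          return 0;
      }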
  • Patent number: 11675492
    Abstract: Techniques for measuring a user's level of interest in content in an electronic document are disclosed. A system generates a user engagement score based on the user's scrolling behavior. The system detects one scrolling event that moves content into a viewport and another scrolling event that moves the content out of the viewport. The system calculates a user engagement score based on the duration of time the content was in the viewport. The system may also detect a scroll-back event, in which the user scrolls away from content and back to the content. The system may then calculate or update the user engagement score based on the scroll-back event.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: June 13, 2023
    Assignee: Oracle International Corporation
    Inventor: Michael Patrick Rodgers
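    A minimal C sketch of an engagement score built from viewport dwell time plus a scroll-back bonus; both weights are illustrative assumptions, not the patent's formula.

      #include <stdio.h>

      /* One point per second the content stayed in the viewport, plus a fixed
       * bonus each time the user scrolled away and back to the content. */
      static double engagement_score(double seconds_in_viewport, int scroll_backs)
      {
          double score = seconds_in_viewport;   /* dwell-time component        */
          score += scroll_backs * 5.0;          /* bonus for returning to it   */
          return score;
      }

      int main(void)
      {
          printf("%.1f\n", engagement_score(12.5, 1));   /* -> 17.5 */
          return 0;
      }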
  • Patent number: 11675776
    Abstract: Managing data in a computing device is disclosed, including generating reverse delta updates during an apply operation of a forward delta update. A method includes operations of applying forward update data to an original data object to generate an updated data object from the original data object and generating, during the applying, reverse update data, the reverse update data configured to reverse effects of the forward update data and restore the original data object from the updated data object.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: June 13, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathon Tucker Ready, Cristian Gelu Petruta, Mark Zagorski, Timothy Patrick Conley, Imran Baig, Alexey Teterev, Asish George Varghese
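    A small C sketch of generating a reverse delta during the forward apply: each overwritten byte is saved before it is replaced, so the reverse patch can restore the original object. The byte-level patch format is a simplification for illustration.

      #include <stdio.h>

      struct patch { unsigned offset; char value; };

      /* Apply a forward delta to a data object and, in the same pass, record
       * the bytes being overwritten so a reverse delta can restore the original. */
      static void apply_and_record(char *obj, const struct patch *forward,
                                   unsigned n, struct patch *reverse)
      {
          for (unsigned i = 0; i < n; i++) {
              reverse[i].offset = forward[i].offset;
              reverse[i].value  = obj[forward[i].offset];  /* save original byte */
              obj[forward[i].offset] = forward[i].value;   /* apply forward edit */
          }
      }

      int main(void)
      {
          char obj[] = "abcdefg";
          struct patch fwd[2] = { { 1, 'X' }, { 4, 'Y' } };
          struct patch rev[2];

          apply_and_record(obj, fwd, 2, rev);
          printf("updated:  %s\n", obj);                /* aXcdYfg */

          for (unsigned i = 0; i < 2; i++)              /* undo via reverse delta */
              obj[rev[i].offset] = rev[i].value;
          printf("restored: %s\n", obj);                /* abcdefg */
          return 0;
      }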
  • Patent number: 11675512
    Abstract: A storage system allocates single-level cell (SLC) blocks in its memory to act as a write buffer and/or a read buffer. When the storage system uses the SLC blocks as a read buffer, the storage system reads data from multi-level cell (MLC) blocks in the memory and stores the data in the read buffer prior to receiving a read command from a host for the data. When the storage system uses the SLC blocks as a write buffer, the storage system retains certain data in the write buffer while other data is flushed from the write buffer to MLC blocks in the memory.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: June 13, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Rotem Sela, Einav Zilberstein, Karin Inbar
  • Patent number: 11669449
    Abstract: One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list, a recent ghost list, a frequent list, and a frequent ghost list. When an eviction operation is initiated or triggered, timestamps of last access for the entries in the entry list are adjusted by corresponding adjustment values. Candidate entries for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: June 6, 2023
    Assignee: DELL PRODUCTS L.P.
    Inventors: Keyur B. Desai, Xiaobing Zhang
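    A C sketch of the timestamp-adjustment idea: each entry's last-access time is shifted by an adjustment value tied to the list it lives on, and the entry with the oldest adjusted timestamp becomes the eviction candidate. The adjustment values and the two-list simplification (no ghost lists) are illustrative.

      #include <stdio.h>

      enum list_kind { RECENT, FREQUENT };

      struct entry {
          const char    *key;
          enum list_kind list;
          unsigned long  last_access;   /* raw timestamp of last access */
      };

      /* Illustrative per-list adjustment: biasing recent-list entries to look
       * older makes them preferred eviction candidates over frequent ones. */
      static unsigned long adjusted(const struct entry *e)
      {
          unsigned long adjust = (e->list == RECENT) ? 100 : 0;
          return (e->last_access > adjust) ? e->last_access - adjust : 0;
      }

      static int pick_eviction_candidate(const struct entry *entries, int n)
      {
          int victim = 0;
          for (int i = 1; i < n; i++)
              if (adjusted(&entries[i]) < adjusted(&entries[victim]))
                  victim = i;
          return victim;
      }

      int main(void)
      {
          struct entry cache[3] = {
              { "a", FREQUENT, 900 }, { "b", RECENT, 950 }, { "c", FREQUENT, 940 },
          };
          printf("evict \"%s\"\n", cache[pick_eviction_candidate(cache, 3)].key);
          return 0;
      }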
  • Patent number: 11645232
    Abstract: Techniques for executing show commands are described herein. A plurality of navigation steps is utilized, each navigation step corresponding to a different layer in a database structure and each navigation step including an operator to fetch items from a metadata database up to respective bounded limits. Dependency information is also fetched for objects of the specified object type in the show command. After a set of objects from the last layer are processed, memory for the navigation steps is flushed and the next set of objects are processed.
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: May 9, 2023
    Assignee: Snowflake Inc.
    Inventors: Lin Chan, Tianyi Chen, Robert Bengt Benedikt Gernhardt, Nithin Mahesh, Eric Robinson
  • Patent number: 11635921
    Abstract: A data-storage-destination environment-determination module set determines any of a plurality of storage environments as a storage destination environment for storing data, based on a used-data table set describing the characteristics of the data and the storage environments. The data migration module transmits the data to the storage destination environment.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: April 25, 2023
    Assignee: HITACHI, LTD.
    Inventors: Iku Matsui, Hiroshi Arakawa, Hideo Tabuchi
  • Patent number: 11630769
    Abstract: A memory controller includes a buffer memory and a microprocessor. The buffer memory includes at least a first cache memory and a second cache memory. The microprocessor is configured to control access of a flash memory device. The microprocessor is configured to obtain a number of spare blocks of the flash memory device corresponding to a first operation period, determine a write speed compensation value, determine a target write speed according to the write speed compensation value and a balance speed, and determine a target garbage collection speed according to the target write speed. The microprocessor is further configured to perform one or more write operations in response to one or more write commands received from a host device in the first operation period according to the target write speed and perform at least one garbage collection operation according to the target garbage collection speed.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: April 18, 2023
    Assignee: Silicon Motion, Inc.
    Inventor: Tsung-Yao Chiang
  • Patent number: 11620233
    Abstract: An integrated circuit for offloading a page migration operation from a host processor is provided. The integrated circuit is configured to: receive, from the host processor, a request to perform the page migration operation from a first physical address to a second physical address; and based on the request, perform the page migration operation. The page migration operation comprises: performing a copy operation of data from the first physical address to the second physical address, and updating a page table entry based on the second physical address, to enable the host processor to access the data from the second physical address based on the updated page table entry.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: April 4, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Ali Ghassan Saidi, Tzachi Zidenberg
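    The offloaded migration reduced to its two steps, as a toy C model: copy the page data to the new frame, then update the page table entry so the host resolves the data at the new physical address. Frame layout and sizes are illustrative.

      #include <stdio.h>
      #include <string.h>

      #define PAGE_SIZE 64   /* tiny illustrative page */

      /* Toy page-table entry: virtual page number -> physical frame index. */
      struct pte { unsigned virt; unsigned phys; };

      /* Copy the page contents from the old frame to the new frame, then
       * update the page table entry to point at the new frame. */
      static void migrate_page(char frames[][PAGE_SIZE], struct pte *entry,
                               unsigned dst_frame)
      {
          memcpy(frames[dst_frame], frames[entry->phys], PAGE_SIZE);
          entry->phys = dst_frame;
      }

      int main(void)
      {
          char frames[4][PAGE_SIZE] = { { 0 } };
          struct pte entry = { .virt = 7, .phys = 1 };

          strcpy(frames[1], "page contents");
          migrate_page(frames, &entry, 3);
          printf("virt %u -> frame %u: %s\n", entry.virt, entry.phys, frames[3]);
          return 0;
      }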
  • Patent number: 11620235
    Abstract: Systems and methods for invalidating page translation entries are described. A processing element may apply a delay to a drain cycle of a store reorder queue (SRQ) of a processing element. The processing element may drain the SRQ under the delayed drain cycle. The processing element may receive a translation lookaside buffer invalidation (TLBI) instruction from an interconnect connecting the plurality of processing elements. The TLBI instruction may be an instruction to invalidate a translation lookaside buffer (TLB) entry corresponding to at least one of a virtual memory page and a physical memory frame. The TLBI instruction may be broadcasted by another processing element. The application of the delay to the drain cycle of the SRQ may decrease a difference between the drain cycle of the SRQ and an invalidation cycle associated with the TLBI.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: April 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shakti Kapoor, Nelson Wu, Manoj Dusanapudi
  • Patent number: 11614866
    Abstract: A nonvolatile memory device includes a nonvolatile memory, a volatile memory being a cache memory of the nonvolatile memory, and a first controller configured to control the nonvolatile memory. The nonvolatile memory device further includes a second controller configured to receive a device write command and an address, and transmit, to the volatile memory through a first bus, a first read command and the address and a first write command and the address sequentially, and transmit a second write command and the address to the first controller through a second bus, in response to the reception of the device write command and the address.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: March 28, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Youngjin Cho, Sungyong Seo, Sun-Young Lim, Uksong Kang, Chankyung Kim, Duckhyun Chang, JinHyeok Choi
  • Patent number: 11599275
    Abstract: Provided herein may be a memory controller and a method of operating the same. The memory controller may include a sudden power-off (SPO) detector configured to output a detection signal when an SPO is detected, a memory buffer configured to store host data, and a power loss controller configured to, based on the detection signal, receive dump data corresponding to the host data, store the dump data and a dump age corresponding to the dump data, and output the dump data and the dump age to a memory device, wherein the dump age indicates a number of times that different items of host data have been dumped from the memory buffer to the power loss controller, and the power loss controller is configured to control a recovery operation corresponding to the SPO based on the dump age being received from the memory device.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: March 7, 2023
    Assignee: SK hynix Inc.
    Inventors: Jin Pyo Kim, Sang Min Kim, Woo Young Yang, Jun Six Jeong, Seung Hun Ji
  • Patent number: 11593222
    Abstract: A method and system for backup processes that includes identifying a target volume and identifying a number of available threads to back up the target volume. The elements in the target volume are distributed among the available threads based on a currently pending size of data in the threads. The elements are stored from each thread into a backup container, and merged from each of the backup containers into a backup volume.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: February 28, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Sunil Yadav, Manish Sharma, Aaditya Rakesh Bansal, Shelesh Chopra
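    A C sketch of the distribution rule: each element goes to the backup thread with the smallest currently pending size. Thread count and element sizes are made-up inputs.

      #include <stdio.h>

      #define NUM_THREADS 3

      /* Return the thread index with the least pending data. */
      static int least_loaded(const unsigned long pending[], int n)
      {
          int best = 0;
          for (int i = 1; i < n; i++)
              if (pending[i] < pending[best])
                  best = i;
          return best;
      }

      int main(void)
      {
          unsigned long pending[NUM_THREADS] = { 0, 0, 0 };
          unsigned long element_sizes[] = { 400, 100, 300, 200, 500 };

          for (unsigned i = 0; i < sizeof element_sizes / sizeof element_sizes[0]; i++) {
              int t = least_loaded(pending, NUM_THREADS);
              pending[t] += element_sizes[i];    /* element assigned to thread t */
              printf("element %u (%lu bytes) -> thread %d\n",
                     i, element_sizes[i], t);
          }
          return 0;
      }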
  • Patent number: 11595457
    Abstract: A system and method for dynamically adjusting the rate at which incoming content data is transferred into a media gateway appliance, and the rate at which outgoing content data is provided to one or more client devices by the media gateway appliance. This dynamic adjustment is performed in accordance with a predetermined program and as a function of real-time data streaming rates and predetermined rate parameters and preferences. The system and method enable the provision of an improved viewing and content acquisition experience for users.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: February 28, 2023
    Assignee: ARRIS ENTERPRISES LLC
    Inventors: Aniela M. Rosenberger, William P. Franks, Kaliraj Kalaichelvan, Arpan Kumar Kaushal, Rajesh K. Rao, Ernest George Schmitt
  • Patent number: 11586553
    Abstract: A memory device for storing data comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. The memory device also comprises a cache memory operable for storing a second plurality of data words, wherein each data word of the second plurality of data words is either awaiting write verification or is to be re-written into the memory bank. The cache memory is divided into a plurality of primary segments, wherein each primary segment of the cache memory is direct mapped to a corresponding segment of the plurality of segments of the memory bank, each primary segment of the plurality of primary segments of the cache memory is sub-divided into a plurality of secondary segments, and each of the plurality of secondary segments comprises at least one counter for tracking a number of valid entries stored therein.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: February 21, 2023
    Assignee: Integrated Silicon Solution, (Cayman) Inc.
    Inventors: Neal Berger, Susmita Karmakar, TaeJin Pyon, Kuk-Hwan Kim
  • Patent number: 11561924
    Abstract: An information processing device is configured to perform processing, the processing including: executing a persistence processing configured to make a part of a region persistent, the region being to be used as a ring buffer in remote direct memory access (RDMA) to a non-volatile memory accessible in an equal manner to a dynamic random access memory (DRAM) so as not to allow received data stored in the part of the region to be overwritten; executing a determination processing configured to determine whether a ratio of the region made persistent by the persistence processing has exceeded a first threshold; and executing a selection processing configured to select a method of evacuating the persistent received data by using a received data amount of the information processing device and a free region in the non-volatile memory in a case where the determination processing determines that the ratio has exceeded the first threshold.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: January 24, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Hiroki Ohtsuji
  • Patent number: 11544199
    Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges the CPU instructions are stored in. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array, and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 3, 2023
    Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
    Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
  • Patent number: 11513798
    Abstract: A system and method are provided for simplifying load acquire and store release semantics that are used in reduced instruction set computing (RISC). Translating the semantics into micro-operations, or low-level instructions used to implement complex machine instructions, can avoid having to implement complicated new memory operations. Using one or more data memory barrier operations in conjunction with load and store operations can provide sufficient ordering as a data memory barrier ensures that prior instructions are performed and completed before subsequent instructions are executed.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 29, 2022
    Assignee: Ampere Computing LLC
    Inventors: Matthew Ashcraft, Christopher Nelson
  • Patent number: 11513737
    Abstract: Data overflows can be prevented in edge computing systems. For example, an edge computing system (ECS) can include a memory buffer for storing incoming data from client devices. The ECS can also include a local storage device. The ECS can determine that an amount of available storage space in the local storage device is less than a predefined threshold amount. Based on determining that the amount of available storage space is less than the predefined threshold amount, the ECS can prevent the incoming data from being retrieved from the memory buffer. And based on determining that the amount of available storage space is greater than or equal to the predefined threshold amount, the ECS can retrieve the incoming data from the memory buffer and store the incoming data in the local storage device. This may prevent data overflows associated with the local storage device.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: November 29, 2022
    Assignee: RED HAT, INC.
    Inventors: Yehuda Sadeh-Weinraub, Huamin Chen, Ricardo Noriega De Soto
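    The gating rule above as a one-function C sketch: the memory buffer is drained into local storage only while free local space stays at or above a threshold; the threshold value is illustrative.

      #include <stdbool.h>
      #include <stdio.h>

      #define THRESHOLD_BYTES (64UL * 1024 * 1024)   /* illustrative threshold */

      /* Drain incoming data from the memory buffer into the local storage
       * device only while enough local space remains; otherwise leave the
       * data buffered to avoid overflowing local storage. */
      static bool may_drain_buffer(unsigned long free_local_bytes)
      {
          return free_local_bytes >= THRESHOLD_BYTES;
      }

      int main(void)
      {
          printf("drain allowed: %d\n", may_drain_buffer(128UL * 1024 * 1024)); /* 1 */
          printf("drain allowed: %d\n", may_drain_buffer(16UL * 1024 * 1024));  /* 0 */
          return 0;
      }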
  • Patent number: 11507508
    Abstract: The present disclosure generally relates to improving write cache utilization by recommending a time to initiate a data flush operation or predicting when a new write command will arrive. The recommending can be based upon considerations such as a hard time limit for data caching, rewarding for filling the cache, and penalizing for holding data for too long. The predicting can be based on tracking write command arrivals and then, based upon the tracking, predicting an estimated arrival time for the next write command. Based upon the recommendation or predicting, the write cache can be flushed or the data can remain in the write cache to thus more efficiently utilize the write cache without violating a hard stop time limit.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: November 22, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Ariel Navon
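    A hedged C sketch of a flush recommendation of this general shape: force a flush before the hard caching time limit would be violated, otherwise weigh a reward for a fuller cache against a penalty for data age. The constants and scoring are invented for illustration and are not the patented method.

      #include <stdbool.h>
      #include <stdio.h>

      #define HARD_LIMIT_MS 1000   /* illustrative hard stop for cached data */

      static bool should_flush(double fill_ratio, unsigned age_ms,
                               unsigned predicted_next_write_ms)
      {
          if (age_ms + predicted_next_write_ms >= HARD_LIMIT_MS)
              return true;                        /* would violate the hard stop  */
          double score = fill_ratio * 10.0        /* reward for filling the cache */
                       - (double)age_ms / 100.0;  /* penalty for holding data     */
          return score < 0.0;                     /* holding no longer pays off   */
      }

      int main(void)
      {
          printf("%d\n", should_flush(0.9, 100, 50));   /* keep caching -> 0 */
          printf("%d\n", should_flush(0.2, 600, 500));  /* flush now    -> 1 */
          return 0;
      }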
  • Patent number: 11507500
    Abstract: A storage system includes a host including a processor and a memory unit, and a storage device including a controller and a non-volatile memory unit. The processor is configured to output a write command, write data, and size information of the write data, to the storage device, the write command that is output not including a write address. The controller is configured to determine a physical write location of the non-volatile memory unit in which the write data are to be written, based on the write command and the size information, write the write data in the physical write location of the non-volatile memory unit, and output the physical write location to the host. The processor is further configured to generate, in the memory unit, mapping information between an identifier of the write data and the physical write location.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: November 22, 2022
    Assignee: KIOXIA CORPORATION
    Inventor: Daisuke Hashimoto
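    The command flow above as a toy C model: the host issues a write with data and size only, the device chooses and returns the physical write location, and the host records the identifier-to-location mapping. All sizes and the allocator are illustrative.

      #include <stdio.h>
      #include <string.h>

      #define PAGE_SIZE 16
      #define NUM_PAGES 8

      static char     flash[NUM_PAGES][PAGE_SIZE];  /* device-side media         */
      static unsigned next_free_page;               /* device-side allocator     */
      static unsigned host_map[NUM_PAGES];          /* host: data id -> location */

      /* Device side: the write command carries data and size but no address;
       * the device picks the physical write location and reports it back. */
      static unsigned device_write(const char *data, unsigned size)
      {
          unsigned loc = next_free_page++;
          memcpy(flash[loc], data, size < PAGE_SIZE ? size : PAGE_SIZE);
          return loc;
      }

      int main(void)
      {
          unsigned id = 0;
          host_map[id] = device_write("hello", 6);   /* host records the mapping */
          printf("data id %u is at physical page %u: %s\n",
                 id, host_map[id], flash[host_map[id]]);
          return 0;
      }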
  • Patent number: 11494485
    Abstract: A uniform enclave interface is provided for creating and operating enclaves across multiple different types of backends and system configurations. For instance, an enclave manager may be created in an untrusted environment of a host computing device. The enclave manager may include instructions for creating one or more enclaves. An enclave may be generated in memory of the host computing device using the enclave manager. One or more enclave clients of the enclave may be generated by the enclave manager such that the enclave clients are configured to provide one or more entry points into the enclave. One or more trusted application instances may be created in the enclave.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Matthew Gingell, Peter Gonda, Alexander Thomas Cope, Sergey Karamov, Keith Moyer, Uday Savagaonkar, Chong Cai
  • Patent number: 11494301
    Abstract: A storage system in one embodiment comprises storage nodes, an address space, address mapping sub-journals and write cache data sub-journals. Each address mapping sub-journal corresponds to a slice of the address space, is under control of one of the storage nodes and comprises update information corresponding to updates to an address mapping data structure. Each write cache data sub-journal is under control of the one of the storage nodes and comprises data pages to be later destaged to the address space. A given storage node is configured to store write cache metadata in a given address mapping sub-journal that is under control of the given storage node. The write cache metadata corresponds to a given data page stored in a given write cache data sub-journal that is also under control of the given storage node.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: November 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Lior Kamran
  • Patent number: 11481325
    Abstract: A system for managing a virtual machine is provided. The system includes a processor configured to initiate a session for accessing a virtual machine by accessing an operating system image from a system disk and monitor read and write requests generated during the session. The processor is further configured to write any requested information to at least one of a memory cache and a write back cache located separately from the system disk and read the operating system image content from at least one of the system disk and a host cache operably coupled between the system disk and the at least one processor. Upon completion of the computing session, the processor is configured to clear the memory cache, clear the write back cache, and reboot the virtual machine using the operating system image stored on the system disk or stored in the host cache.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: October 25, 2022
    Assignee: Citrix Systems, Inc.
    Inventors: Yuhua Lu, Graham MacDonald, Lanyue Xu, Roger Cruz
  • Patent number: 11474941
    Abstract: A computer-implemented method, according to one approach, includes: receiving a stream of incoming I/O requests, all of which are satisfied using one or more buffers in a primary cache. However, in response to determining that the available capacity of the one or more buffers in the primary cache is outside a predetermined range: one or more buffers in the secondary cache are allocated. These one or more buffers in the secondary cache are used to satisfy at least some of the incoming I/O requests, while the one or more buffers in the primary cache are used to satisfy a remainder of the incoming I/O requests. Moreover, in response to determining that the available capacity of the one or more buffers in the primary cache is not outside the predetermined range: the one or more buffers in the primary cache are again used to satisfy all of the incoming I/O requests.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: October 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Beth Ann Peterson, Kevin J. Ash, Lokesh Mohan Gupta, Warren Keith Stanley, Roger G. Hathorn
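    A C sketch of the capacity-based routing: requests stay on primary-cache buffers while available capacity is inside the acceptable range, and some requests spill to secondary-cache buffers once it is not. The watermark and the alternation flag are illustrative simplifications.

      #include <stdbool.h>
      #include <stdio.h>

      #define PRIMARY_LOW_WATERMARK 20   /* illustrative: percent free in primary */

      /* While the primary cache buffers have enough free capacity, every I/O
       * request is satisfied there; otherwise requests marked by the caller
       * (spill_turn) are sent to buffers allocated in the secondary cache. */
      static const char *route_request(unsigned primary_free_pct, bool spill_turn)
      {
          if (primary_free_pct >= PRIMARY_LOW_WATERMARK)
              return "primary cache buffer";
          return spill_turn ? "secondary cache buffer" : "primary cache buffer";
      }

      int main(void)
      {
          printf("%s\n", route_request(55, false));  /* plenty of room -> primary   */
          printf("%s\n", route_request(10, true));   /* constrained    -> secondary */
          return 0;
      }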
  • Patent number: 11455251
    Abstract: A system-on-chip with runtime global push to persistence includes a data processor having a cache, an external memory interface, and a microsequencer. The external memory interface is coupled to the cache and is adapted to be coupled to an external memory. The cache provides data to the external memory interface for storage in the external memory. The microsequencer is coupled to the data processor. In response to a trigger signal, the microsequencer causes the cache to flush the data by sending the data to the external memory interface for transmission to the external memory.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: September 27, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander J. Branover, Kevin M. Lepak, William A. Moyes
  • Patent number: 11422811
    Abstract: A processor includes a global register to store a value of an interrupted block count. A processor core, communicably coupled to the global register, may, upon execution of an instruction to flush blocks of a cache that are associated with a security domain: flush the blocks of the cache sequentially according to a flush loop of the cache; and in response to detection of a system interrupt: store a value of a current cache block count to the global register as the interrupted block count; and stop execution of the instruction to pause the flush of the blocks of the cache. After handling of the interrupt, the instruction may be called again to restart the flush of the cache.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: August 23, 2022
    Assignee: Intel Corporation
    Inventors: Gideon Gerzon, Dror Caspi, Arie Aharon, Ido Ouziel
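    A C sketch of the resumable flush loop: when an interrupt is detected mid-flush, the current block count is saved to a stand-in for the global register and the instruction pauses; calling it again resumes from the saved count. The block count and interrupt simulation are illustrative.

      #include <stdbool.h>
      #include <stdio.h>

      #define CACHE_BLOCKS 16

      static unsigned interrupted_block_count;   /* stands in for the global register */
      static bool     pending_interrupt;         /* stands in for a system interrupt  */

      /* Flush the security domain's blocks sequentially; on an interrupt, save
       * progress to the "global register" and stop so the interrupt can be
       * handled, then resume from the saved count when re-issued. */
      static bool flush_domain_blocks(void)
      {
          for (unsigned blk = interrupted_block_count; blk < CACHE_BLOCKS; blk++) {
              if (pending_interrupt) {
                  interrupted_block_count = blk;   /* save progress and pause */
                  pending_interrupt = false;
                  return false;
              }
              printf("flushed block %u\n", blk);   /* stand-in for a real line flush */
              if (blk == 5)
                  pending_interrupt = true;        /* simulate an interrupt arriving */
          }
          interrupted_block_count = 0;
          return true;
      }

      int main(void)
      {
          if (!flush_domain_blocks())
              puts("interrupted; flush instruction re-issued after handling");
          flush_domain_blocks();                   /* resumes at the saved block */
          return 0;
      }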
  • Patent number: 11425223
    Abstract: A computer-implemented method, operable with a content delivery network (CDN), uses late binding of caching policies. A caching node in the CDN, in response to a request for content, determines whether the content is cached locally. When the content is cached locally, the node determines a current cache policy associated with the content and then determines, based on that current cache policy, whether it is acceptable to serve the locally cached content. When it is not acceptable to serve the locally cached content, the node obtains a new version of the content and then serves it; otherwise, the node serves the content that is cached locally.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: August 23, 2022
    Assignee: Level 3 Communications, LLC
    Inventors: Christopher Newton, William Crowder
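    The late-binding check as a small C sketch: the freshness decision uses whatever cache policy is current at request time rather than the policy captured when the object was cached. The max-age policy form is an assumption for illustration.

      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      struct cached_object { time_t fetched_at; };      /* when the copy was cached */
      struct cache_policy  { long max_age_seconds; };   /* policy looked up now     */

      /* Decide whether the locally cached copy may be served under the cache
       * policy that is current at request time. */
      static bool acceptable_to_serve(const struct cached_object *obj,
                                      const struct cache_policy *current_policy,
                                      time_t now)
      {
          return (now - obj->fetched_at) <= current_policy->max_age_seconds;
      }

      int main(void)
      {
          time_t now = time(NULL);
          struct cached_object obj = { .fetched_at = now - 120 };
          struct cache_policy policy = { .max_age_seconds = 60 };

          if (acceptable_to_serve(&obj, &policy, now))
              puts("serve locally cached copy");
          else
              puts("fetch new version, then serve it");
          return 0;
      }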
  • Patent number: 11422895
    Abstract: A memory system may include: a nonvolatile memory device including a plurality of memory blocks, each of which includes a plurality of pages, and among which a subset of memory blocks are managed as a system area and remaining memory blocks are managed as a normal area; and a controller that may store system data, used to control the nonvolatile memory device, in the system area, and store boot data used in a host, and normal data updated in a control operation for the nonvolatile memory device, in the normal area. The controller may perform a checkpoint operation each time storage of N number of boot data among the boot data is completed, and may perform the checkpoint operation each time the control operation for the nonvolatile memory device is completed, 'N' being a natural number.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: August 23, 2022
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 11409445
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage system. A method for managing a storage system is provided. The method includes storing a data block to be backed up into a local storage device of the storage system; determining whether the data block includes periodically rewritten data based on historical operation information of the storage system, the historical operation information being associated with storage operations and removal operations by the storage system on historical data; and if it is determined that the data block does not include periodically rewritten data, storing the data block into a remote storage device of the storage system, and removing the data block from the local storage device.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: August 9, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Yi Wang, Qingxiao Zheng, Qianyun Cheng
  • Patent number: 11397677
    Abstract: One embodiment can provide an apparatus. The apparatus can include a persistent flush (PF) cache and a PF-tracking logic coupled to the PF cache. The PF-tracking logic is to: in response to receiving, from a media controller, an acknowledgment to a write request, determine whether the PF cache includes an entry corresponding to the media controller; in response to the PF cache not including the entry corresponding to the media controller, allocate an entry in the PF cache for the media controller; in response to receiving a persistence checkpoint, identify a media controller from a plurality of media controllers based on entries stored in the PF cache; issue a persistent flush request to the identified media controller to persist write requests received by the identified media controller; and remove an entry corresponding to the identified media controller from the PF cache subsequent to issuing the persistent flush request.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 26, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Derek A. Sherlock, Gregg B. Lesartre
  • Patent number: 11372757
    Abstract: Tracking repeated reads to guide dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) and a central ordering point circuit (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation. The COP's selection is based on protocol preference indicators generated by the PEs using repeat-read indicators that each PE maintains to track whether a coherence granule was repeatedly read by the PE (e.g., as a result of polling reads, or as a result of re-reading the coherence granule after it was evicted from a cache due to an invalidating snoop). After selecting the cache coherence protocol, the COP sends a response message to the PEs indicating the selected cache coherence protocol.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Neal Magill, Eric Francis Robinson, Derek Bachand, Jason Panavich, Michael B. Mitchell, Michael P. Wilson
  • Patent number: 11343348
    Abstract: This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: May 24, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Matthew J. Stevens, Michael G. Merideth, Nil Alexandrov, Andrew F. Champagne, Brendan Coyle, Timothy Glynn, Mark A. Roman, Xin Xu
  • Patent number: 11310550
    Abstract: Techniques and mechanisms described herein facilitate the storage of digital media recordings. According to various embodiments, a system is provided comprising a processor, a storage device, Random Access Memory (RAM), an archive writer, and a recording writer. The archive writer is configured to retrieve a plurality of small multimedia segments (SMSs) in RAM and write the plurality of SMSs into an archive container file in RAM. The single archive container file may correspond to a singular multimedia file when complete. New SMSs retrieved from RAM are appended into the archive container file if the new SMSs also correspond to the singular multimedia file. The recording writer is configured to flush the archive container file to be stored as a digital media recording on the storage device once enough SMSs have been appended by the archive writer to the archive container file to complete the singular multimedia file.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: April 19, 2022
    Assignee: TIVO CORPORATION
    Inventors: Do Hyun Chung, Ren L. Long, Dan Dennedy
  • Patent number: 11301380
    Abstract: Exemplary methods, apparatuses, and systems include identifying that a first cache line from a first cache is subject to an operation that copies data from the first cache to a non-volatile memory. A first portion of the first cache line stores clean data and a second portion of the first cache line stores dirty data. A redundant copy of the dirty data is stored in a second cache line of the first cache. In response to identifying that the first cache line is subject to the operation, metadata associated with the redundant copy of the dirty data is used to copy the dirty data to a non-volatile memory while omitting the clean data.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: April 12, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Robert M. Walker, Ashay Narsale
  • Patent number: 11288206
    Abstract: Embodiments of this disclosure provide techniques to support memory paging between trust domains (TDs) in computer systems. In one embodiment, a processing device including a memory controller and a memory paging circuit is provided. The memory paging circuit is to insert a transportable page into a memory location associated with a trust domain (TD), the transportable page comprising encrypted contents of a first memory page of the TD. The memory paging circuit is further to create a third memory page associated with the TD by binding the transportable page to the TD, where binding the transportable page to the TD comprises re-encrypting contents of the transportable page based on a key associated with the TD and a physical address of the memory location. The memory paging circuit is further to access contents of the third memory page by decrypting the contents of the third memory page using the key associated with the TD.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Hormuzd M. Khosravi, Baiju Patel, Ravi Sahita, Barry Huntley
  • Patent number: 11288191
    Abstract: An apparatus to facilitate memory flushing is disclosed. The apparatus comprises a cache memory, one or more processing resources, tracker hardware to dispatch workloads for execution at the processing resources and to monitor the workloads to track completion of the execution, range based flush (RBF) hardware to process RBF commands and generate a flush indication to flush data from the cache memory and a flush controller to receive the flush indication and perform a flush operation to discard data from the cache memory at an address range provided in the flush indication.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Hema Chand Nalluri, Aditya Navale, Altug Koker, Brandon Fliflet, Jeffery S. Boles, James Valerio, Vasanth Ranganathan, Anirban Kundu, Pattabhiraman K
  • Patent number: 11287986
    Abstract: Apparatus and methods are disclosed, including a controller circuit, a volatile memory, a non-volatile memory, and a reset circuit, where the reset circuit is configured to receive a reset signal from a host device and actuate a timer circuit. The timer circuit is configured to cause a storage device to reset after a threshold time period. The reset circuit is further configured to actuate the controller circuit to write data stored in the volatile memory to the non-volatile memory before the storage device is reset.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: March 29, 2022
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer