Cache Flushing Patents (Class 711/135)
-
Patent number: 11960776
Abstract: Some memory dice in a stack can be connected externally to the stack and other memory dice in the stack can be connected internally to the stack. The memory dice that are connected externally can act as interface dice for other memory dice that are connected internally thereto. Data protection and recovery schemes provided for the stacks of memory dice can be based on data that are transferred in a single data stream without a discontinuity between those data transfers from the memory dice of the stacks.
Type: Grant
Filed: June 2, 2022
Date of Patent: April 16, 2024
Assignee: Micron Technology, Inc.
Inventors: Marco Sforzin, Paolo Amato
-
Patent number: 11947472
Abstract: Described herein are systems, methods, and products utilizing a cache coherent switch on chip. The cache coherent switch on chip may utilize the Compute Express Link (CXL) interconnect open standard and allow for multi-host access and the sharing of resources. The cache coherent switch on chip provides for resource sharing between components while independent of a system processor, removing the system processor as a bottleneck. The cache coherent switch on chip may further allow for cache coherency between various different components. Thus, for example, memories, accelerators, and/or other components within the disclosed systems may each maintain caches, and the systems and techniques described herein allow for cache coherency between the different components of the system with minimal latency.
Type: Grant
Filed: June 28, 2022
Date of Patent: April 2, 2024
Assignee: Avago Technologies International Sales Pte. Limited
Inventors: Shreyas Shah, George Apostol, Jr., Nagarajan Subramaniyan, Jack Regula, Jeffrey S. Earl
-
Patent number: 11941281
Abstract: A system and method for a memory system are provided. A memory device includes an array of non-volatile memory cells. A memory controller is connected to the array of non-volatile memory cells. The memory controller is configured to perform the steps of receiving a request to read a value of a memory flag, wherein the memory flag includes a 2-bit value stored in a first memory cell and a second memory cell of the array of non-volatile memory cells, reading a first value of the first memory cell, reading a second value of the second memory cell, and determining the value of the memory flag based on the first value and the second value. In embodiments, the memory flag may have more than 2 bits.
Type: Grant
Filed: April 1, 2022
Date of Patent: March 26, 2024
Assignee: NXP B.V.
Inventor: Soenke Ostertun
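A minimal sketch of the flag-read logic described in the abstract, assuming a `read_cell` callable that returns one bit per cell; the function name, cell addresses, and bit ordering are hypothetical:

```python
def read_flag(read_cell, first_addr, second_addr):
    """Combine two 1-bit cell reads into a 2-bit memory-flag value.

    The first cell supplies the high bit and the second the low bit
    (an assumed ordering; the patent does not fix one).
    """
    first = read_cell(first_addr) & 1
    second = read_cell(second_addr) & 1
    return (first << 1) | second

# Usage: a dict standing in for the non-volatile cell array.
cells = {0x10: 1, 0x11: 0}
value = read_flag(cells.get, 0x10, 0x11)  # 0b10 == 2
```

Determining the flag from both cells (rather than one) is what lets the scheme tolerate a single disturbed cell or encode more than two states.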
-
Patent number: 11924114
Abstract: An electronic device, according to various embodiments of the present invention, comprises a network connection device, at least one processor, and a memory operatively connected to the at least one processor, wherein the memory stores instructions which, when executed, cause the at least one processor to: receive a data packet from the network connection device; add the data packet to a packet list corresponding to the data packet; and when the number of data packets included in the packet list is less than a threshold value, flush the data packets to a network stack on the basis of a flush time value for controlling a packet aggregation function, wherein the flush time value may be determined on the basis of the network throughput.
Type: Grant
Filed: April 23, 2019
Date of Patent: March 5, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jongeon Park, Sangyeop Lee, Geumhwan Yu, Soukjin Bae, Chihun Ahn
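The flush policy above can be sketched as follows. `PacketAggregator`, its `threshold`, and the `flush_fn` callback are illustrative names, and the flush time value is taken here as a precomputed number of seconds rather than being derived from live throughput measurements:

```python
import time

class PacketAggregator:
    """Aggregate packets; flush a full list immediately, and flush a
    short list once the throughput-derived flush time has elapsed."""

    def __init__(self, threshold, flush_time_s, flush_fn, clock=time.monotonic):
        self.threshold = threshold          # packet count that forces a flush
        self.flush_time_s = flush_time_s    # assumed precomputed from throughput
        self.flush_fn = flush_fn            # delivers packets to the network stack
        self.clock = clock
        self.packets = []
        self.first_arrival = None

    def receive(self, packet):
        if not self.packets:
            self.first_arrival = self.clock()
        self.packets.append(packet)
        if len(self.packets) >= self.threshold:
            self._flush()                   # list is full: flush immediately

    def poll(self):
        # Called periodically: a list below the threshold is flushed
        # once the flush time value expires.
        if self.packets and self.clock() - self.first_arrival >= self.flush_time_s:
            self._flush()

    def _flush(self):
        self.flush_fn(self.packets)
        self.packets = []
        self.first_arrival = None
```

A higher measured throughput would justify a shorter flush time, trading aggregation efficiency for latency.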
-
Patent number: 11924107
Abstract: Techniques for orchestrating workloads based on policy to operate in optimal host and/or network proximity in cloud-native environments are described herein. The techniques may include receiving flow data associated with network paths between workloads hosted by a cloud-based network. Based at least in part on the flow data, the techniques may include determining that a utilization of a network path between a first workload and a second workload is greater than a relative utilization of other network paths between the first workload and other workloads. The techniques may also include determining that reducing the network path would optimize communications between the first workload and the second workload without adversely affecting communications between the first workload and the other workloads. The techniques may also include causing at least one of a redeployment or a network path re-routing to reduce the networking proximity between the first workload and the second workload.
Type: Grant
Filed: October 4, 2021
Date of Patent: March 5, 2024
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Vincent E. Parla, Kyle Andrew Donald Mestery
-
Patent number: 11921639
Abstract: A method for caching data, and a host device and a storage system that caches data. The method includes determining a first file in a storage device as a first predetermined type of file; reallocating a logical address of a predetermined logical address region to the first file; and updating a first logical address to physical address (L2P) table, corresponding to the predetermined logical address region, in a cache of the host device. The updated first L2P table includes a mapping relationship between the logical address reallocated for the first file and a physical address of the first file.
Type: Grant
Filed: June 28, 2022
Date of Patent: March 5, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Heng Zhang, Yuanyuan Ye, Huimei Xiong, Yunchang Liang
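The reallocation step might look like the following sketch, using a plain dict as the cached L2P table; `pin_file_to_region` and the address values are hypothetical:

```python
def pin_file_to_region(l2p, file_lbas, region_start):
    """Reallocate a file's logical addresses into a predetermined
    logical address region and rebuild the cached L2P entries.

    l2p:          dict mapping logical address -> physical address
    file_lbas:    the file's current logical addresses, in order
    region_start: first logical address of the reserved region
    """
    updated = {}
    for offset, old_lba in enumerate(file_lbas):
        new_lba = region_start + offset
        # The physical address is unchanged; only the logical side moves.
        updated[new_lba] = l2p.pop(old_lba)
    l2p.update(updated)
    return updated          # the refreshed region of the first L2P table
```

Keeping files of the predetermined type inside one logical region means the host only has to cache (and refresh) that region's slice of the L2P table.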
-
Patent number: 11907131
Abstract: Techniques for efficiently flushing a user data log may postpone or delay establishing chains of metadata pages used as mapping information to map logical addresses to storage locations of content stored at the logical addresses. Processing can include: receiving a write operation that writes data to a logical address; storing an entry for the write operation in the user data log; and flushing the entry from the user data log. Flushing can include storing a metadata log entry in a metadata log, wherein the metadata log entry represents a binding of the logical address to a data block including the data stored at the logical address; and destaging the metadata log entry. Destaging can include updating mapping information used to map the logical address to the data block. The mapping information can include a metadata page in accordance with the metadata log entry.
Type: Grant
Filed: July 1, 2022
Date of Patent: February 20, 2024
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Bar David
-
Patent number: 11886884
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Grant
Filed: November 12, 2019
Date of Patent: January 30, 2024
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Patent number: 11880318
Abstract: Methods for local page writes via pre-staging buffers for resilient buffer pool extensions are performed by computing systems. Compute nodes in database systems insert, update, and query data pages maintained in storage nodes. Data pages cached locally by compute node buffer pools are provided to buffer pool extensions on local disks as pre-copies via staging buffers that store data pages prior to local disk storage. Encryption of data pages occurs at the staging buffers, which allows a less restrictive update latching during the copy process, with page metadata being updated in buffer pool extensions page tables with in-progress states indicating it is not yet written to local disk. When staging buffers are filled, data pages are written to buffer pool extensions and metadata is updated in page tables to indicate available/valid states. Data pages in staging buffers can be read and updated prior to writing to the local disk.
Type: Grant
Filed: March 28, 2022
Date of Patent: January 23, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Rogério Ramos, Kareem Aladdin Golaub, Chaitanya Gottipati, Alejandro Hernandez Saenz, Raj Kripal Danday
-
Patent number: 11880600
Abstract: A write request directed to the non-volatile memory device is received. It is determined that a stripe associated with an address specified by the write request is present in the volatile memory device. The volatile memory device includes a plurality of stripes, each stripe of the plurality of stripes having a plurality of managed units. The write request is performed on a managed unit of the stripe in the volatile memory device. The stripe in the volatile memory device is evicted to a stripe in the non-volatile memory device.
Type: Grant
Filed: September 2, 2021
Date of Patent: January 23, 2024
Assignee: Micron Technology, Inc.
Inventors: Ning Chen, Jiangli Zhu, Yi-Min Lin, Fangfang Zhu
-
Patent number: 11876702
Abstract: A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in the memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
Type: Grant
Filed: March 23, 2020
Date of Patent: January 16, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Abdulla M. Bataineh, Thomas L. Court, Hess M. Hodge
-
Patent number: 11874770
Abstract: An indexless logical-to-physical translation table (L2PTT). In one example, the data storage device includes a memory, a data storage controller, and a bus. The memory includes a mapping unit staging page that includes a plurality of mapping unit pages and a mapping unit page directory. The data storage controller includes a data storage controller memory and is coupled to the memory, the data storage controller memory including an indexless logical-to-physical translation table (L2PTT). The bus transfers data between the data storage controller and a host device in communication with the data storage controller. The data storage controller is configured to perform one or more memory operations with the indexless L2PTT.
Type: Grant
Filed: May 12, 2022
Date of Patent: January 16, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Oleg Kragel, Vijay Sivasankaran
-
Patent number: 11874765
Abstract: A processor may allocate a first buffer segment from a buffer pool. The first buffer segment may be configured with a first contiguous range of memory for a first data partition of a data table, the first data partition comprising a first plurality of data blocks. A processor may store the first plurality of data blocks in order into the first buffer segment. A processor may retrieve the target data block from the first buffer segment in response to a data access request for a target data block of the first plurality of data blocks.
Type: Grant
Filed: May 28, 2021
Date of Patent: January 16, 2024
Assignee: International Business Machines Corporation
Inventors: Shuo Li, Xiaobo Wang, Sheng Yan Sun, Hong Mei Zhang
-
Patent number: 11868265
Abstract: Techniques are described herein for processing asynchronous power transition events while maintaining a persistent memory state. In some embodiments, a system may proxy asynchronous reset events through system logic, which generates an interrupt to invoke a special persistent flush interrupt handler that performs a persistent cache flush prior to invoking a hardware power transition. Additionally or alternatively, the system may include a hardware backup mechanism to ensure all resets and power transitions requested in hardware reliably complete within a bounded window of time independent of whether the persistent cache flush handler succeeds.
Type: Grant
Filed: March 25, 2022
Date of Patent: January 9, 2024
Assignee: Oracle International Corporation
Inventor: Benjamin John Fuller
-
Patent number: 11860789
Abstract: A cache purge simulation system includes a device under test with a cache skip switch. A first cache skip switch includes a configurable state register to indicate whether all of an associated cache is purged upon receipt of a cache purge instruction from a verification system or whether a physical partition that is smaller than the associated cache is purged upon receipt of the cache purge instruction from the verification system. A second cache skip switch includes a configurable start address register comprising a start address that indicates a beginning storage location of a physical partition of an associated cache and a configurable stop address register comprising a stop address that indicates an ending storage location of the physical partition of the associated cache.
Type: Grant
Filed: March 21, 2022
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Yvo Thomas Bernard Mulder, Ralf Ludewig, Huiyuan Xing, Ulrich Mayer
-
Patent number: 11861216
Abstract: Methods, systems, and devices for memory operations are described. Data for a set of commands associated with a barrier command may be written to a buffer. Based on a portion of the data to be flushed from the buffer, a determination may be made as to whether to update an indication of a last barrier command for which all of the associated data has been written to a memory device. Based on whether the indication of the last barrier command is updated, a flushing operation may be performed that transfers the portion of the data from the buffer to a memory device. During a recovery operation, the portion of the data stored in the memory device may be validated based on determining that the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command.
Type: Grant
Filed: December 20, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventor: Giuseppe Cariello
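The last-barrier indicator can be sketched as below. `BarrierBuffer` and its methods are hypothetical, and barrier ids are assumed to be consecutive integers starting at 0:

```python
class BarrierBuffer:
    """Track which barrier commands have all of their data on media, so a
    recovery pass can validate flushed data against the last complete
    barrier (a sketch of the indicator, not the device's actual layout)."""

    def __init__(self, write_fn):
        self.write_fn = write_fn    # persists one data chunk to the memory device
        self.pending = {}           # barrier id -> chunks not yet written
        self.done = set()           # barriers whose data is fully written
        self.last_complete = -1     # indication of the last barrier command

    def stage(self, barrier_id, chunk):
        self.pending.setdefault(barrier_id, []).append(chunk)

    def flush(self, barrier_id):
        for chunk in self.pending.pop(barrier_id, []):
            self.write_fn(barrier_id, chunk)
        self.done.add(barrier_id)
        # Advance the indicator only while every earlier barrier is complete.
        while self.last_complete + 1 in self.done:
            self.last_complete += 1

    def valid_on_recovery(self, barrier_id):
        return barrier_id <= self.last_complete
```

Because the indicator only advances once all earlier barriers are written, recovery never treats data as valid while any prior barrier's data might be missing.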
-
Patent number: 11853574
Abstract: A protocol for processing write operations can include recording each write operation in a log using a PDESC (page descriptor)-PB (page block) pair. The log entry for the write operation can be included in a container of logged writes. In a dual node system, the protocol, when processing a write operation that writes first data, can include incrementing a corresponding one of two counters of the container, where the corresponding counter is associated with the one of the system's nodes which received the write operation and caches the first data. Each container can be associated with a logical block address (LBA) range of a logical device, where logged writes that write to target addresses in the particular LBA range are included in the container. Nodes can independently determine flush ownership using the container's counters and can flush containers based on the flush ownership.
Type: Grant
Filed: June 21, 2022
Date of Patent: December 26, 2023
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Geng Han, Changyu Feng
-
Patent number: 11853261
Abstract: The present disclosure relates to a serving wireless communication node adapted to predict data files (A, B, C) to be requested by at least two served user terminals (2, 3) and to form predicted sub-data files (A1, A2; B1, B2; C1, C2). In a placement phase, the serving node is adapted to transmit predicted sub-data files (A1, B1; A2, B2) to cache nodes (APC1, APC2), each cache node (APC1, APC2) having a unique set of predicted different sub-data files of different predicted data files, and to receive requests (RA, RB) for data files from the served user terminals (2, 3). In a delivery phase, the serving node is adapted to transmit an initial complementary predicted sub-data file (Formula I) to the cache nodes (APC1, APC2), comprising a reversible combination of the remaining predicted sub-data files (A2, B1) for the files requested.
Type: Grant
Filed: June 10, 2020
Date of Patent: December 26, 2023
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Behrooz Makki, Mikael Coldrey
-
Patent number: 11836088
Abstract: Guided cache replacement is described. In accordance with the described techniques, a request to access a cache is received, and a cache replacement policy which controls loading data into the cache is accessed. The cache replacement policy includes a tree structure having nodes corresponding to cachelines of the cache and a traversal algorithm controlling traversal of the tree structure to select one of the cachelines. Traversal of the tree structure is guided using the traversal algorithm to select a cacheline to allocate to the request. The guided traversal modifies at least one decision of the traversal algorithm to avoid selection of a non-replaceable cacheline.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 5, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Jeffrey Christopher Allan
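A guided traversal of this kind might be sketched as a tree-PLRU walk, assuming a heap-ordered bit array (node i has children 2i and 2i+1, leaves map to ways) and a `locked` flag per way marking non-replaceable cachelines; the exact tree layout is an assumption, not taken from the patent:

```python
def select_victim(bits, locked, n_ways):
    """Tree-PLRU victim selection with a guided override.

    bits:   heap-ordered direction bits; bits[i] == 0 means "go left"
            at internal node i (index 0 unused, n_ways a power of two)
    locked: locked[w] is True when way w is non-replaceable
    Returns the index of the way chosen for replacement.
    """
    def has_replaceable(node):
        if node >= n_ways:                    # leaf: node maps to a way
            return not locked[node - n_ways]
        return has_replaceable(2 * node) or has_replaceable(2 * node + 1)

    node = 1
    while node < n_ways:
        child = 2 * node + bits[node]         # the traversal algorithm's choice
        if not has_replaceable(child):        # guided modification: flip the
            child = 2 * node + (1 - bits[node])  # decision away from locked ways
        node = child
    return node - n_ways
```

Only decisions that would dead-end in locked cachelines are modified, so the policy's normal recency behavior is preserved everywhere else.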
-
Patent number: 11837319
Abstract: A multi-port memory device in communication with a controller includes a memory array for storing data provided by the controller, a first port coupled to the controller via a first controller channel, a second port coupled to the controller via a second controller channel, a processor, and a processor memory local to the processor, wherein the processor memory has stored thereon instructions that, when executed by the processor, cause the processor to: enable data transfer through the first port and/or the second port in response to a first control signal received from the first controller channel and/or a second control signal received from the second controller channel, decode at least one of the received first and second control signals to identify a data operation to perform, the identified data operation including a read or write operation from or to the memory array, and execute the identified data operation.
Type: Grant
Filed: December 10, 2020
Date of Patent: December 5, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hingkwan Huen, Changho Choi
-
Patent number: 11825146
Abstract: Techniques and mechanisms described herein facilitate the storage of digital media recordings. According to various embodiments, a system is provided comprising a processor, a storage device, Random Access Memory (RAM), an archive writer, and a recording writer. The archive writer is configured to retrieve a plurality of small multimedia segments (SMSs) in RAM and write the plurality of SMSs into an archive container file in RAM. The single archive container file may correspond to a singular multimedia file when complete. New SMSs retrieved from RAM are appended into the archive container file if the new SMSs also correspond to the singular multimedia file. The recording writer is configured to flush the archive container file to be stored as a digital media recording on the storage device once enough SMSs have been appended by the archive writer to the archive container file to complete the singular multimedia file.
Type: Grant
Filed: March 11, 2022
Date of Patent: November 21, 2023
Assignee: TIVO CORPORATION
Inventors: Do Hyun Chung, Ren L. Long, Dan Dennedy
-
Patent number: 11816354
Abstract: Embodiments of the present disclosure relate to establishing persistent cache memory as a write tier. An input/output (IO) workload of a storage array can be analyzed. One or more write data portions of the IO workload can be stored in a persistent memory region of one or more disks of the storage array.
Type: Grant
Filed: July 27, 2020
Date of Patent: November 14, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Owen Martin, Dustin Zentz, Vladimir Desyatov
-
Patent number: 11803470
Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
Type: Grant
Filed: December 22, 2020
Date of Patent: October 31, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
-
Patent number: 11797456
Abstract: Techniques described herein provide a handshake mechanism and protocol for notifying an operating system whether system hardware supports persistent cache flushing. System firmware may determine whether the hardware is capable of supporting a full flush of processor caches and volatile memory buffers in the event of a power outage or asynchronous reset. If the hardware is capable, then persistent cache flushing may be selectively enabled and advertised to the operating system. Once persistent cache flushing is enabled, the operating system and applications may treat data committed to volatile processor caches as persistent. If disabled or not supported by system hardware, then the platform may not advertise support for persistent cache flushing to the operating system.
Type: Grant
Filed: March 25, 2022
Date of Patent: October 24, 2023
Assignee: Oracle International Corporation
Inventor: Benjamin John Fuller
-
Patent number: 11783040
Abstract: An information handling system includes a first memory that stores a firmware image associated with the baseboard management controller. The baseboard management controller begins execution of a kernel, which in turn performs a boot operation of the information handling system. The baseboard management controller begins a file system initialization program. During the boot operation, the baseboard management controller performs a full read and cryptographic verification of the firmware image via a DM-Verity daemon of the file system initialization program. In response to the full read of the firmware image being completed, the baseboard management controller provides a flush command to the kernel via the DM-Verity daemon. The baseboard management controller flushes a cache buffer associated with the baseboard management controller via the kernel.
Type: Grant
Filed: July 9, 2021
Date of Patent: October 10, 2023
Assignee: Dell Products L.P.
Inventors: Michael E. Brown, Nagendra Varma Totakura, Vasanth Venkataramanappa, Jack E. Fewx
-
Patent number: 11762771
Abstract: Methods, systems, and devices for advanced power off notification for managed memory are described. An apparatus may include a memory array comprising a plurality of memory cells and a controller coupled with the memory array. The controller may be configured to receive a notification indicating a transition from a first state of the memory array to a second state of the memory array. The notification may include a value, the value comprising a plurality of bits and corresponding to a minimum duration remaining until a power supply of the memory array is deactivated. The controller may also execute a plurality of operations according to an order determined based at least in part on a parameter associated with the memory array and receiving the notification comprising the value.
Type: Grant
Filed: April 27, 2021
Date of Patent: September 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Vincenzo Reina, Binbin Huo
-
Patent number: 11755483
Abstract: In a multi-node system, each node includes tiles. Each tile includes a cache controller, a local cache, and a snoop filter cache (SFC). The cache controller, responsive to a memory access request by the tile, checks the local cache to determine whether the data associated with the request has been cached by the local cache of the tile. The cached data from the local cache is returned responsive to a cache-hit. The SFC is checked to determine whether any other tile of a remote node has cached the data associated with the memory access request. If it is determined that the data has been cached by another tile of a remote node and if there is a cache-miss by the local cache, then the memory access request is transmitted to the global coherency unit (GCU) and the snoop filter to fetch the cached data. Otherwise an interconnected memory is accessed.
Type: Grant
Filed: May 27, 2022
Date of Patent: September 12, 2023
Assignee: Marvell Asia Pte Ltd
Inventors: Pranith Kumar Denthumdas, Rabin Sugumar, Isam Wadih Akkawi
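The lookup order described above (local cache, then snoop filter cache, then interconnected memory) can be sketched as follows; the function and callback names are illustrative:

```python
def lookup(addr, local_cache, sfc, fetch_remote, read_memory):
    """Tile-level memory access: a local hit wins; on a local miss the
    snoop filter cache (SFC) decides between fetching the line from a
    remote node (via the GCU and snoop filter) and reading memory."""
    if addr in local_cache:
        return local_cache[addr]          # local cache hit
    if sfc.get(addr):                     # some remote tile caches this line
        data = fetch_remote(addr)         # transmitted to the GCU/snoop filter
    else:
        data = read_memory(addr)          # otherwise: interconnected memory
    local_cache[addr] = data              # fill the local cache
    return data
```

The SFC check is what spares the interconnect a broadcast snoop on every local miss.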
-
Service performance enhancement using advance notifications of reduced-capacity phases of operations
Patent number: 11748260
Abstract: A first run-time environment executing a first instance of an application, and a second run-time environment executing a second instance of the application are established. An indication of an impending commencement of a reduced-capacity phase of operation of the first run-time environment is obtained at a service request receiver. Based at least in part on the indication, the service request receiver directs a service request to the second instance of the application.
Type: Grant
Filed: September 23, 2019
Date of Patent: September 5, 2023
Assignee: Amazon Technologies, Inc.
Inventors: James Christopher Sorenson, III, Yishai Galatzer, Bernd Joachim Wolfgang Mathiske, Steven Collison, Paul Henry Hohensee
-
Patent number: 11734175
Abstract: The present technology includes a storage device including a memory device including a first storage region and a second storage region, and a memory controller configured to, in response to a write request in the first storage region from an external host, acquire data stored in the first storage region based on fail prediction information provided from the memory device and to perform a write operation corresponding to the write request, wherein the first storage region and the second storage region are allocated according to logical addresses of data to be stored by requests of the external host.
Type: Grant
Filed: August 21, 2020
Date of Patent: August 22, 2023
Assignee: SK hynix Inc.
Inventors: Yong Jin, Jung Ki Noh, Seung Won Jeon, Young Kyun Shin, Keun Hyung Kim
-
Patent number: 11720368
Abstract: Techniques for memory management of a data processing system are described herein. According to one embodiment, a memory usage monitor executed by a processor of a data processing system monitors memory usages of groups of programs running within a memory of the data processing system. In response to determining that a first memory usage of a first group of the programs exceeds a first predetermined threshold, a user level reboot is performed in which one or more applications running within a user space of an operating system of the data processing system are terminated and relaunched. In response to determining that a second memory usage of a second group of the programs exceeds a second predetermined threshold, a system level reboot is performed in which one or more system components running within a kernel space of the operating system are terminated and relaunched.
Type: Grant
Filed: March 8, 2021
Date of Patent: August 8, 2023
Assignee: APPLE INC.
Inventors: Andrew D. Myrick, David M. Chan, Jonathan R. Reeves, Jeffrey D. Curless, Lionel D. Desai, James C. McIlree, Karen A. Crippes, Rasha Eqbal
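The two-threshold policy might be sketched as below; the group names, units, and the `check_memory` helper are hypothetical:

```python
def check_memory(group_usage, user_limit, kernel_limit):
    """Map per-group memory usage to a reboot level (sketch).

    group_usage:  dict of program-group name -> memory used (e.g. MB)
    user_limit:   threshold for user-space program groups
    kernel_limit: threshold for kernel-space program groups

    The more disruptive system-level reboot is checked first, since it
    subsumes the user-level one.
    """
    if group_usage.get("kernel", 0) > kernel_limit:
        return "system-level reboot"     # kernel components relaunched
    if group_usage.get("user", 0) > user_limit:
        return "user-level reboot"       # user-space apps relaunched
    return "ok"
```

Escalating from user-level to system-level reboots lets the monitor reclaim leaked memory with the least disruption that suffices.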
-
Patent number: 11704235
Abstract: A memory system of an embodiment includes a nonvolatile memory, a primary cache memory, a secondary cache memory, and a processor. The processor performs address conversion by using logical-to-physical address conversion information relating to data to be addressed in the nonvolatile memory. Based on whether first processing or second processing is performed on the nonvolatile memory, the processor controls whether logical-to-physical address conversion information relating to the first processing is stored in the primary cache memory as cache data or logical-to-physical address conversion information relating to the second processing is stored in the secondary cache memory as cache data.
Type: Grant
Filed: June 1, 2021
Date of Patent: July 18, 2023
Assignee: Kioxia Corporation
Inventors: Shogo Ochiai, Nobuaki Tojo
-
Patent number: 11681657
Abstract: A method, computer program product, and computer system for organizing a plurality of log records into a plurality of buckets, wherein each bucket is associated with a range of a plurality of ranges within a backing store. A bucket of the plurality of buckets from which a portion of the log records of the plurality of log records are to be flushed may be selected. The portion of the log records may be organized into parallel flush jobs. The portion of the log records may be flushed to the backing store in parallel.
Type: Grant
Filed: July 31, 2019
Date of Patent: June 20, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Socheavy D. Heng, William C. Davenport
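The bucket-and-flush-in-parallel flow can be sketched with thread-pool jobs; `write_range` stands in for the real backing-store writer, and the record/range shapes are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def flush_buckets(log_records, ranges, write_range):
    """Organize log records into per-range buckets, then flush each
    non-empty bucket as its own parallel job.

    log_records: iterable of dicts with an "addr" key
    ranges:      list of (lo, hi) half-open address ranges
    write_range: callable persisting one bucket, write_range(range, records)
    """
    buckets = {r: [] for r in ranges}
    for rec in log_records:
        for lo, hi in ranges:
            if lo <= rec["addr"] < hi:
                buckets[(lo, hi)].append(rec)
                break
    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(write_range, r, recs)
                for r, recs in buckets.items() if recs]
        for j in jobs:
            j.result()               # propagate any flush errors
    return buckets
```

Because each bucket touches a disjoint backing-store range, the flush jobs need no locking against one another.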
-
Patent number: 11681617
Abstract: A data processing apparatus includes a requester, a completer and a cache. Data is transferred between the requester and the cache and between the cache and the completer. The cache implements a cache eviction policy. The completer determines an eviction cost associated with evicting the data from the cache and notifies the cache of the eviction cost. The cache eviction policy implemented by the cache is based, at least in part, on the cost of evicting the data from the cache. The eviction cost may be determined, for example, based on properties or usage of a memory system of the completer.
Type: Grant
Filed: March 12, 2021
Date of Patent: June 20, 2023
Assignee: Arm Limited
Inventor: Alexander Klimov
-
Patent number: 11675492
Abstract: Techniques for measuring a user's level of interest in content in an electronic document are disclosed. A system generates a user engagement score based on the user's scrolling behavior. The system detects one scrolling event that moves content into a viewport and another scrolling event that moves the content out of the viewport. The system calculates a user engagement score based on the duration of time the content was in the viewport. The system may also detect a scroll-back event, in which the user scrolls away from content and back to the content. The system may then calculate or update the user engagement score based on the scroll-back event.
Type: Grant
Filed: January 15, 2021
Date of Patent: June 13, 2023
Assignee: Oracle International Corporation
Inventor: Michael Patrick Rodgers
-
Patent number: 11675512
Abstract: A storage system allocates single-level cell (SLC) blocks in its memory to act as a write buffer and/or a read buffer. When the storage system uses the SLC blocks as a read buffer, the storage system reads data from multi-level cell (MLC) blocks in the memory and stores the data in the read buffer prior to receiving a read command from a host for the data. When the storage system uses the SLC blocks as a write buffer, the storage system retains certain data in the write buffer while other data is flushed from the write buffer to MLC blocks in the memory.
Type: Grant
Filed: August 1, 2022
Date of Patent: June 13, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Rotem Sela, Einav Zilberstein, Karin Inbar
-
Patent number: 11675776
Abstract: Managing data in a computing device is disclosed, including generating reverse delta updates during an apply operation of a forward delta update. A method includes operations of applying forward update data to an original data object to generate an updated data object from the original data object and generating, during the applying, reverse update data, the reverse update data configured to reverse effects of the forward update data and restore the original data object from the updated data object.
Type: Grant
Filed: June 29, 2021
Date of Patent: June 13, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathon Tucker Ready, Cristian Gelu Petruta, Mark Zagorski, Timothy Patrick Conley, Imran Baig, Alexey Teterev, Asish George Varghese
-
Patent number: 11669449
Abstract: One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list, a recent ghost list, a frequent list, and a frequent ghost list. When an eviction operation is initiated or triggered, timestamps of last access for the entries in the entry list are adjusted by corresponding adjustment values. Candidate entries for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
Type: Grant
Filed: November 30, 2021
Date of Patent: June 6, 2023
Assignee: DELL PRODUCTS L.P.
Inventors: Keyur B. Desai, Xiaobing Zhang
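The selection step, adjusting each entry's last-access timestamp by a per-list value before comparing, might look like the following sketch. The list names echo the abstract, but the adjustment values and the "oldest adjusted timestamp wins" policy are assumptions:

```python
import heapq

def pick_eviction_candidates(entries, adjustments, k=2):
    """Sketch of timestamp-adjusted candidate selection: each entry has a
    last-access timestamp and belongs to one list (recent, frequent,
    ...). A per-list adjustment is added before comparison, so list
    membership biases which entries look 'oldest'. The adjustment
    scheme is illustrative, not from the patent."""
    adjusted = [
        (ts + adjustments[lst], key)
        for key, (ts, lst) in entries.items()
    ]
    # Entries with the smallest adjusted timestamps become candidates.
    return [key for _, key in heapq.nsmallest(k, adjusted)]
```

A positive adjustment on the frequent list makes its entries look fresher than their raw timestamps, protecting hot data the way ARC-style policies do.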
-
Patent number: 11645232
Abstract: Techniques for executing show commands are described herein. A plurality of navigation steps is utilized, each navigation step corresponding to a different layer in a database structure and each navigation step including an operator to fetch items from a metadata database up to respective bounded limits. Dependency information is also fetched for objects of the specified object type in the show command. After a set of objects from the last layer are processed, memory for the navigation steps is flushed and the next set of objects are processed.
Type: Grant
Filed: June 29, 2022
Date of Patent: May 9, 2023
Assignee: Snowflake Inc.
Inventors: Lin Chan, Tianyi Chen, Robert Bengt Benedikt Gernhardt, Nithin Mahesh, Eric Robinson
-
Patent number: 11635921
Abstract: A data-storage-destination environment-determination module set determines any of a plurality of storage environments as a storage destination environment for storing data, based on a used-data table set about the characteristics of data and the storage environments. The data migration module transmits the data to the storage destination environment.
Type: Grant
Filed: September 15, 2021
Date of Patent: April 25, 2023
Assignee: HITACHI, LTD.
Inventors: Iku Matsui, Hiroshi Arakawa, Hideo Tabuchi
-
Patent number: 11630769
Abstract: A memory controller includes a buffer memory and a microprocessor. The buffer memory includes at least a first cache memory and a second cache memory. The microprocessor is configured to control access of a flash memory device. The microprocessor is configured to obtain a number of spare blocks of the flash memory device corresponding to a first operation period, determine a write speed compensation value, determine a target write speed according to the write speed compensation value and a balance speed, and determine a target garbage collection speed according to the target write speed. The microprocessor is further configured to perform one or more write operations in response to one or more write commands received from a host device in the first operation period according to the target write speed and perform at least one garbage collection operation according to the target garbage collection speed.
Type: Grant
Filed: June 8, 2021
Date of Patent: April 18, 2023
Assignee: Silicon Motion, Inc.
Inventor: Tsung-Yao Chiang
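One plausible reading of the speed-planning step: derive a compensation value from how far the spare-block count sits from a target level, adjust the balance speed by it to obtain the target write speed, and couple the garbage-collection speed inversely so reclamation keeps pace when spares run low. The linear gain and the exact coupling below are assumptions, not details from the patent:

```python
def plan_speeds(spare_blocks, target_spare, balance_speed, gain=0.5):
    """Illustrative speed planning: fewer spare blocks than the target
    slows writes and speeds up garbage collection; a surplus does the
    opposite. The linear controller is an assumption."""
    compensation = gain * (spare_blocks - target_spare)
    write_speed = max(0.0, balance_speed + compensation)
    # When spares are scarce (negative compensation), GC must free
    # blocks faster than writes consume them, so it moves inversely.
    gc_speed = max(0.0, balance_speed - compensation)
    return write_speed, gc_speed
```

At the balance point (spare count equals the target) both speeds equal the balance speed, which matches the abstract's notion of a steady-state "balance speed".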
-
Patent number: 11620233
Abstract: An integrated circuit for offloading a page migration operation from a host processor is provided. The integrated circuit is configured to: receive, from the host processor, a request to perform the page migration operation from a first physical address to a second physical address; and based on the request, perform the page migration operation. The page migration operation comprises: performing a copy operation of data from the first physical address to the second physical address, and updating a page table entry based on the second physical address, to enable the host processor to access the data from the second physical address based on the updated page table entry.
Type: Grant
Filed: September 30, 2019
Date of Patent: April 4, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Adi Habusha, Ali Ghassan Saidi, Tzachi Zidenberg
-
Patent number: 11620235
Abstract: Systems and methods for invalidating page translation entries are described. A processing element may apply a delay to the drain cycle of its store reorder queue (SRQ). The processing element may drain the SRQ under the delayed drain cycle. The processing element may receive a translation lookaside buffer invalidation (TLBI) instruction from an interconnect connecting the plurality of processing elements. The TLBI instruction may be an instruction to invalidate a translation lookaside buffer (TLB) entry corresponding to at least one of a virtual memory page and a physical memory frame. The TLBI instruction may be broadcasted by another processing element. The application of the delay to the drain cycle of the SRQ may decrease a difference between the drain cycle of the SRQ and an invalidation cycle associated with the TLBI.
Type: Grant
Filed: October 4, 2021
Date of Patent: April 4, 2023
Assignee: International Business Machines Corporation
Inventors: Shakti Kapoor, Nelson Wu, Manoj Dusanapudi
-
Patent number: 11614866
Abstract: A nonvolatile memory device includes a nonvolatile memory, a volatile memory being a cache memory of the nonvolatile memory, and a first controller configured to control the nonvolatile memory. The nonvolatile memory device further includes a second controller configured to receive a device write command and an address, and transmit, to the volatile memory through a first bus, a first read command and the address and a first write command and the address sequentially, and transmit a second write command and the address to the first controller through a second bus, in response to the reception of the device write command and the address.
Type: Grant
Filed: July 30, 2021
Date of Patent: March 28, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Youngjin Cho, Sungyong Seo, Sun-Young Lim, Uksong Kang, Chankyung Kim, Duckhyun Chang, JinHyeok Choi
-
Patent number: 11599275
Abstract: Provided herein may be a memory controller and a method of operating the same. The memory controller may include an SPO detector configured to output a detection signal when an SPO is detected, a memory buffer configured to store host data, and a power loss controller configured to, based on the detection signal, receive dump data corresponding to the host data, store the dump data and a dump age corresponding to the dump data, and output the dump data and the dump age to a memory device, wherein the dump age indicates a number of times that different items of host data have been dumped from the memory buffer to the power loss controller, and the power loss controller is configured to control a recovery operation corresponding to the SPO based on the dump age being received from the memory device.
Type: Grant
Filed: June 2, 2021
Date of Patent: March 7, 2023
Assignee: SK hynix Inc.
Inventors: Jin Pyo Kim, Sang Min Kim, Woo Young Yang, Jun Six Jeong, Seung Hun Ji
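The dump-age bookkeeping can be modeled as a monotonically increasing counter written out alongside each dump, with recovery selecting the dump carrying the highest age. The class and field names below are illustrative assumptions:

```python
class PowerLossController:
    """Toy model of dump-age bookkeeping: every dump from the memory
    buffer increments the age, the age is stored with the dump data, and
    recovery uses the highest age to find the most recent dump. Names
    are illustrative, not from the patent."""

    def __init__(self):
        self.dump_age = 0
        self.dumps = {}  # dump_age -> dump data, models the memory device

    def on_spo(self, dump_data):
        # A sudden power-off triggers a dump; the age travels with it.
        self.dump_age += 1
        self.dumps[self.dump_age] = dump_data

    def recover(self):
        # Recover from the dump with the newest (largest) age.
        return self.dumps[max(self.dumps)]
```

Storing the age with the data is what lets recovery distinguish the latest dump from stale ones left behind by earlier power-loss events.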
-
Patent number: 11593222
Abstract: A method and system for backup processes that includes identifying a target volume and identifying a number of available threads to back up the target volume. The elements in the target volume are distributed among the available threads based on a currently pending size of data in the threads. The elements are stored from each thread into a backup container, and merged from each of the backup containers into a backup volume.
Type: Grant
Filed: September 15, 2020
Date of Patent: February 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Sunil Yadav, Manish Sharma, Aaditya Rakesh Bansal, Shelesh Chopra
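Distributing elements "based on a currently pending size of data in the threads" suggests a least-loaded greedy assignment. The patent does not spell out the policy, so the heap-based greedy choice and largest-first ordering below are assumptions:

```python
import heapq

def distribute(elements, n_threads):
    """Greedy least-loaded distribution: each element (name, size) goes
    to the thread with the smallest total pending size. Largest-first
    ordering improves balance; both choices are assumptions."""
    heap = [(0, t) for t in range(n_threads)]  # (pending_size, thread_id)
    heapq.heapify(heap)
    assignment = {t: [] for t in range(n_threads)}
    for name, size in sorted(elements, key=lambda e: -e[1]):
        pending, t = heapq.heappop(heap)       # least-loaded thread
        assignment[t].append(name)
        heapq.heappush(heap, (pending + size, t))
    return assignment
```

Each thread then writes its assigned elements into its own backup container, and a final pass merges the containers into the backup volume, as the abstract describes.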
-
Patent number: 11595457
Abstract: A system and method for dynamically adjusting the rate at which incoming content data is transferred into a media gateway appliance, and the rate at which outgoing content data is provided to one or more client devices by the media gateway appliance. This dynamic adjustment is performed in accordance with a predetermined program and as a function of real-time data streaming rates and predetermined rate parameters and preferences. The system and method enable the provision of an improved viewing and content acquisition experience for users.
Type: Grant
Filed: February 10, 2022
Date of Patent: February 28, 2023
Assignee: ARRIS ENTERPRISES LLC
Inventors: Aniela M. Rosenberger, William P. Franks, Kaliraj Kalaichelvan, Arpan Kumar Kaushal, Rajesh K. Rao, Ernest George Schmitt
-
Patent number: 11586553
Abstract: A memory device for storing data comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. The memory device also comprises a cache memory operable for storing a second plurality of data words, wherein further each data word of the second plurality of data words is either awaiting write verification or is to be re-written into the memory bank. The cache memory is divided into a plurality of primary segments, wherein each primary segment of the cache memory is direct mapped to a corresponding segment of the plurality of segments of the memory bank, wherein each primary segment of the plurality of primary segments of the cache memory is sub-divided into a plurality of secondary segments, and each of the plurality of secondary segments comprises at least one counter for tracking a number of valid entries stored therein.
Type: Grant
Filed: September 13, 2021
Date of Patent: February 21, 2023
Assignee: Integrated Silicon Solution, (Cayman) Inc.
Inventors: Neal Berger, Susmita Karmakar, TaeJin Pyon, Kuk-Hwan Kim
-
Patent number: 11561924
Abstract: An information processing device is configured to perform processing, the processing including: executing a persistence processing configured to make a part of a region persistent, the region being to be used as a ring buffer in remote direct memory access (RDMA) to a non-volatile memory accessible in an equal manner to a dynamic random access memory (DRAM) so as not to allow received data stored in the part of the region to be overwritten; executing a determination processing configured to determine whether a ratio of the region made persistent by the persistence processing has exceeded a first threshold; and executing a selection processing configured to select a method of evacuating the persistent received data by using a received data amount of the information processing device and a free region in the non-volatile memory in a case where the determination processing determines that the ratio has exceeded the first threshold.
Type: Grant
Filed: February 4, 2021
Date of Patent: January 24, 2023
Assignee: FUJITSU LIMITED
Inventor: Hiroki Ohtsuji
-
Patent number: 11544199
Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges the CPU instructions are stored in. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array, and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
Type: Grant
Filed: October 12, 2021
Date of Patent: January 3, 2023
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
-
Patent number: 11513737
Abstract: Data overflows can be prevented in edge computing systems. For example, an edge computing system (ECS) can include a memory buffer for storing incoming data from client devices. The ECS can also include a local storage device. The ECS can determine that an amount of available storage space in the local storage device is less than a predefined threshold amount. Based on determining that the amount of available storage space is less than the predefined threshold amount, the ECS can prevent the incoming data from being retrieved from the memory buffer. And based on determining that the amount of available storage space is greater than or equal to the predefined threshold amount, the ECS can retrieve the incoming data from the memory buffer and store the incoming data in the local storage device. This may prevent data overflows associated with the local storage device.
Type: Grant
Filed: April 16, 2021
Date of Patent: November 29, 2022
Assignee: RED HAT, INC.
Inventors: Yehuda Sadeh-Weinraub, Huamin Chen, Ricardo Noriega De Soto
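The gating rule in this abstract, hold incoming data in the memory buffer unless free storage space meets the threshold, can be modeled directly. The capacities and class names below are illustrative:

```python
class EdgeBuffer:
    """Minimal model of the overflow-prevention rule: incoming data sits
    in a memory buffer and is only moved to local storage while free
    space stays at or above a threshold. Names and sizes are
    illustrative, not from the patent."""

    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold
        self.used = 0
        self.buffer = []   # incoming data awaiting retrieval
        self.storage = []  # models the local storage device

    def receive(self, item, size):
        self.buffer.append((item, size))

    def drain(self):
        # Retrieve from the buffer only while the space check passes.
        while self.buffer:
            item, size = self.buffer[0]
            if self.capacity - self.used < self.threshold:
                break  # leave data buffered: prevents storage overflow
            self.buffer.pop(0)
            self.storage.append(item)
            self.used += size
```

The buffer acts as back-pressure: data that cannot be stored safely simply waits, rather than overflowing the local storage device.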