Cache Flushing Patents (Class 711/135)
-
Patent number: 12216589
Abstract: An apparatus has processing circuitry with support for transactional memory, and a cache hierarchy comprising at least two levels of cache. In response to a draining trigger event having potential to cause loss of state stored in at least one further level cache beyond a predetermined level cache in which speculative store data generated by a transaction is marked as speculative until the transaction is committed, draining circuitry performs a draining operation to scan a subset of the cache hierarchy to identify dirty cache lines and write data associated with the dirty cache lines to persistent memory. The subset of the cache hierarchy includes the at least one further level cache. In the draining operation, speculative store data marked as speculative in the predetermined level cache is prevented from being drained to the persistent memory.
Type: Grant
Filed: August 16, 2021
Date of Patent: February 4, 2025
Assignee: Arm Limited
Inventors: Wei Wang, Matthew James Horsnell
-
Patent number: 12210784
Abstract: A computer implemented method includes creating a cache within system management memory to cache data from a firmware flash memory to allow access to the cache by system firmware, providing a baseboard management controller ownership of the firmware flash memory in a server, updating the firmware in the firmware flash memory via the baseboard management controller, relinquishing baseboard management controller ownership of the firmware flash memory upon completion of updating the firmware, and flushing the cache back to the firmware flash memory in response to the baseboard management controller relinquishing ownership of the firmware flash memory.
Type: Grant
Filed: December 27, 2022
Date of Patent: January 28, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mallik Bulusu, Tom Long Nguyen, Daini Xie, Karunakara Kotary, Muhammad Ashfaq Ahmed, Subhankar Panda, Ravi Mysore Shantamurthy
-
Patent number: 12197723
Abstract: According to one embodiment, a controller of a memory system manages 2^N banks obtained by dividing a logical address space, and 2^N regions included in a nonvolatile memory, the 2^N regions corresponding one-to-one to the 2^N banks. The controller stores an address translation table in a random access memory, the address translation table including a plurality of entries respectively corresponding to a plurality of logical addresses which are contiguous in units of a first size corresponding to granularity of data read/write-accessed by a host, the address translation table managing mapping between each of the logical addresses and each of physical addresses. The controller allocates 2^N write buffers to the random access memory.
Type: Grant
Filed: September 7, 2022
Date of Patent: January 14, 2025
Assignee: Kioxia Corporation
Inventors: Kazuhiro Hiwada, Tomoya Suzuki
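The bank-partitioned translation scheme described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the bank-selection rule (low-order address bits), the class name, and the per-bank table layout are all assumptions.

```python
class BankedL2P:
    """Illustrative model of a logical-to-physical table split into
    2**N banks, each bank owning a slice of the logical address space."""

    def __init__(self, n_bits):
        self.n_banks = 1 << n_bits
        # one translation sub-table per bank; a real controller would also
        # allocate one write buffer per bank, as the abstract describes
        self.tables = [dict() for _ in range(self.n_banks)]

    def bank_of(self, logical_sector):
        # low-order bits select the bank, so contiguous host writes
        # interleave across banks (an assumed policy)
        return logical_sector & (self.n_banks - 1)

    def map(self, logical_sector, physical_addr):
        self.tables[self.bank_of(logical_sector)][logical_sector] = physical_addr

    def lookup(self, logical_sector):
        return self.tables[self.bank_of(logical_sector)].get(logical_sector)


l2p = BankedL2P(n_bits=2)      # 4 banks
l2p.map(5, 0xA000)
assert l2p.bank_of(5) == 1     # 5 & 0b11 == 1
assert l2p.lookup(5) == 0xA000
```

Because each bank's sub-table and write buffer are independent, lookups and flushes for different banks can proceed without shared locking, which is one plausible motivation for the 2^N split.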
-
Patent number: 12197407
Abstract: Described technologies generate a data structure corresponding to values sequenced based on a plurality of timestamps associated with the values. The data structure can include a first section identifying a first timestamp associated with the plurality of timestamps and a number representing how many timestamps are associated with the plurality of timestamps, and a second section including at least a value linked to the first timestamp, and an additional value representing an encoding type associated with the second section. The data structure can be stored in computer-implemented storage.
Type: Grant
Filed: December 10, 2021
Date of Patent: January 14, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Andrea Giuliano, Gary Taylor, Gavin Bramhill
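A byte layout matching the two-section structure in this abstract might look like the sketch below. The field widths, byte order, and placement of the encoding-type marker are assumptions chosen for illustration; the patent does not specify them.

```python
import struct

def pack_series(timestamps, values, encoding_type):
    """Pack a timestamp-sequenced series: a header section carrying the
    first timestamp and the count, then a value section ending with an
    encoding-type byte. Layout choices here are hypothetical."""
    assert len(timestamps) == len(values)
    header = struct.pack("<QI", timestamps[0], len(timestamps))  # first ts, count
    body = b"".join(struct.pack("<q", v) for v in values)
    return header + body + struct.pack("<B", encoding_type)

def unpack_header(blob):
    # read back only the first section: (first_timestamp, count)
    return struct.unpack_from("<QI", blob, 0)


blob = pack_series([100, 101], [7, 8], encoding_type=1)
assert unpack_header(blob) == (100, 2)
assert blob[-1] == 1  # encoding-type marker sits at the end of section two
```

Storing the first timestamp plus a count in the header is what makes later delta- or run-length-style encodings of the remaining timestamps possible without scanning the whole record.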
-
Patent number: 12156304
Abstract: A dual-line protocol read-write control chip, system and method. The chip comprises two front-stage ports, two rear-stage ports, a protocol decoding module, a data forwarding module, a read-back control module, a display control module, a gradient control module and an instruction control module. Data is input to the chip based on a dual-line transmission protocol; after the chip decodes the data, the instruction control module controls the corresponding module according to the decoded instruction data, and the data forwarding module forwards the data to the next-stage chip. In a read-back mode, the input and output ports of the chip are interchanged so that corresponding state parameters can be read back from the chip and the working state of the chip adjusted; the gray scale of an LED light can be directly controlled according to the decoded gray scale data.
Type: Grant
Filed: February 9, 2021
Date of Patent: November 26, 2024
Assignee: SHENZHEN SUNMOON MICROELECTRONICS CO., LTD.
Inventor: Zhaohua Li
-
Patent number: 12135732
Abstract: A system performs delivery of a data pipeline on a cloud platform. The system receives a specification of the data pipeline that is split into smaller specifications of data pipeline units. The system identifies a target cloud platform and generates a deployment package for each data pipeline unit for the target cloud platform. The system creates a connection with the target cloud platform and uses the connection to provision computing infrastructure on the target cloud platform for the data pipeline unit according to the system configuration of the data pipeline unit. The data pipeline may be implemented as a data mesh that is a directed acyclic graph of nodes, each node representing a data pipeline unit. Different portions of the data mesh may be modified independent of each other. Partial results stored in different portions of the data mesh may be recomputed starting from different points in time.
Type: Grant
Filed: May 24, 2023
Date of Patent: November 5, 2024
Assignee: Humana Inc.
Inventors: Yuan Yao, Andrew McPherron, Tom Ho, Bing Zhang
-
Patent number: 12135643
Abstract: Techniques and mechanisms for metadata, which corresponds to cached data, to be selectively stored to a sequestered memory region. In an embodiment, integrated circuitry evaluates whether a line of a cache can accommodate a first representation of both the data and some corresponding metadata. Where the cache line can accommodate the first representation, said first representation is generated and stored to the line. Otherwise, a second representation of the data is generated and stored to a cache line, and the metadata is stored to a sequestered memory region that is external to the cache. The cache line includes an indication as to whether the metadata is represented in the cache line, or is stored in the sequestered memory region. In another embodiment, a metric of utilization of the sequestered memory region is provided to software which determines whether a capacity of the sequestered memory region is to be modified.
Type: Grant
Filed: November 12, 2020
Date of Patent: November 5, 2024
Assignee: Intel Corporation
Inventors: Michael Kounavis, Siddhartha Chhabra, David M. Durham
-
Patent number: 12111768
Abstract: A method and device for controlling memory handling in a processing system comprising a cache shared between a plurality of processing units, wherein the cache comprises a plurality of cache portions. The method comprises obtaining first information pertaining to an allocation of a first memory portion of a memory to a first application, an allocation of a first processing unit of the plurality of processing units to the first application, and an association between a first cache portion of the plurality of cache portions and the first processing unit. The method further comprises reconfiguring a mapping configuration based on the obtained first information, and controlling a providing of first data associated with the first application to the first cache portion from the first memory portion using the reconfigured mapping configuration.
Type: Grant
Filed: February 13, 2020
Date of Patent: October 8, 2024
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Amir Roozbeh, Alireza Farshin, Dejan Kostic, Gerald Q Maguire, Jr.
-
Patent number: 12039036
Abstract: A kernel driver on an endpoint uses a process cache to provide a stream of events associated with processes on the endpoint to a data recorder. The process cache can usefully provide related information about processes such as a name, type or path for the process to the data recorder through the kernel driver. Where a tamper protection cache or similarly secured repository is available, this secure information may also be provided to the data recorder for use in threat detection, forensic analysis and so forth.
Type: Grant
Filed: April 3, 2023
Date of Patent: July 16, 2024
Assignee: Sophos Limited
Inventor: Richard S. Teal
-
Patent number: 12007998
Abstract: A method for caching partial data pages to support optimized transactional processing and analytical processing with minimal memory footprint may include loading, from disk to memory, a portion of a data page. The memory may include a first cache for storing partial data pages and a second cache for storing full data pages. The portion of the data page may be loaded into the first cache. A data structure may be updated to indicate that the portion of the data page has been loaded into the first cache. When the data structure indicates that the data page has been loaded into the first cache in its entirety, the data page may be transferred from the first cache to the second cache. One or more queries may be performed using at least the portion of the data page loaded into the memory. Related systems and articles of manufacture are also provided.
Type: Grant
Filed: May 27, 2021
Date of Patent: June 11, 2024
Assignee: SAP SE
Inventor: Ivan Schreter
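The two-tier scheme above — accumulate page portions in a first cache, promote a page to the second cache once it is complete — can be modeled roughly as below. The completeness check, page layout, and class design are illustrative assumptions, not SAP's implementation.

```python
class PartialPageCache:
    """Sketch of a first cache for partial data pages and a second cache
    for full pages, with promotion when a page becomes complete."""

    def __init__(self, page_size):
        self.page_size = page_size
        self.partial = {}   # page_id -> {offset: bytes}, the first cache
        self.full = {}      # page_id -> bytes, the second cache

    def load_portion(self, page_id, offset, data):
        parts = self.partial.setdefault(page_id, {})
        parts[offset] = data
        # the per-page dict plays the role of the tracking data structure;
        # promote once every byte of the page has been loaded
        if sum(len(d) for d in parts.values()) == self.page_size:
            page = bytearray(self.page_size)
            for off, d in parts.items():
                page[off:off + len(d)] = d
            self.full[page_id] = bytes(page)
            del self.partial[page_id]

    def read(self, page_id, offset, length):
        # queries can be served from either tier
        if page_id in self.full:
            return self.full[page_id][offset:offset + length]
        return self.partial.get(page_id, {}).get(offset)


cache = PartialPageCache(page_size=8)
cache.load_portion("p", 0, b"ABCD")
cache.load_portion("p", 4, b"EFGH")   # page now complete: promoted
assert cache.full["p"] == b"ABCDEFGH"
```

The point of the split is memory footprint: analytical scans that touch only part of a page never force the whole page into memory.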
-
Patent number: 11995459
Abstract: A virtual machine (VM) is migrated from a source host to a destination host in a virtualized computing system, the VM having a plurality of virtual central processing units (CPUs). The method includes copying, by VM migration software executing in the source host and the destination host, memory of the VM from the source host to the destination host by installing, at the source host, write traces spanning all of the memory and then copying the memory from the source host to the destination host over a plurality of iterations; and performing switch-over, by the VM migration software, to quiesce the VM in the source host and resume the VM in the destination host. The VM migration software installs write traces using less than all of the virtual CPUs, and using trace granularity larger than a smallest page granularity.
Type: Grant
Filed: August 25, 2020
Date of Patent: May 28, 2024
Assignee: VMware LLC
Inventors: Arunachalam Ramanathan, Yanlei Zhao, Anurekh Saxena, Yury Baskakov, Jeffrey W. Sheldon, Gabriel Tarasuk-Levin, David A. Dunn, Sreekanth Setty
-
Patent number: 11983411
Abstract: A method can include, in a default mode of a memory device, decoding command data received on a unidirectional command address (CA) bus of a memory interface according to a first standard. In response to decoding a mode enter command, placing the memory device into an alternate management mode. In the alternate management mode, receiving alternate command data on the CA bus, and in response to receiving a command execute indication on the CA bus, decoding alternate command data according to a second standard to execute an alternate command. In response to decoding a mode exit command received on the CA bus according to the first standard, returning the memory device to the default mode. The memory interface comprises the CA bus and a data bus, and the CA bus and data bus comprise a plurality of parallel input connections. Corresponding devices and systems are also disclosed.
Type: Grant
Filed: April 25, 2022
Date of Patent: May 14, 2024
Assignee: Infineon Technologies LLC
Inventors: Nobuaki Hata, Clifford Zitlaw, Yuichi Ise, Stephan Rosner
-
Patent number: 11977490
Abstract: A method for executing a transaction for a processor associated with a persistent memory and with a cache memory, the cache memory comprising cache lines associated with respective states, including: if a cache line is associated with a state allowing data to be copied directly: copying data to the cache line; associating the line with a state representative of an allocation to transaction data; otherwise: flushing lines associated with a state representative of an allocation to external data and associating them with a state indicating that content of the lines has not been modified; copying data to the flushed lines; associating these lines with a state representative of an allocation to transaction data.
Type: Grant
Filed: October 27, 2020
Date of Patent: May 7, 2024
Assignee: IDEMIA IDENTITY & SECURITY FRANCE
Inventors: Lauren Marjorie Del Giudice, Rémi Louis Marie Duclos, Aurélien Cuzzolin
-
Patent number: 11960776
Abstract: Some memory dice in a stack can be connected externally to the stack and other memory dice in the stack can be connected internally to the stack. The memory dice that are connected externally can act as interface dice for other memory dice that are connected internally thereto. Data protection and recovery schemes provided for the stacks of memory dice can be based on data that are transferred in a single data stream without a discontinuity between those data transfers from the memory dice of the stacks.
Type: Grant
Filed: June 2, 2022
Date of Patent: April 16, 2024
Assignee: Micron Technology, Inc.
Inventors: Marco Sforzin, Paolo Amato
-
Patent number: 11947472
Abstract: Described herein are systems, methods, and products utilizing a cache coherent switch on chip. The cache coherent switch on chip may utilize the Compute Express Link (CXL) interconnect open standard and allow for multi-host access and the sharing of resources. The cache coherent switch on chip provides for resource sharing between components while independent of a system processor, removing the system processor as a bottleneck. The cache coherent switch on chip may further allow for cache coherency between various different components. Thus, for example, memories, accelerators, and/or other components within the disclosed systems may each maintain caches, and the systems and techniques described herein allow for cache coherency between the different components of the system with minimal latency.
Type: Grant
Filed: June 28, 2022
Date of Patent: April 2, 2024
Assignee: Avago Technologies International Sales Pte. Limited
Inventors: Shreyas Shah, George Apostol, Jr., Nagarajan Subramaniyan, Jack Regula, Jeffrey S. Earl
-
Patent number: 11941281
Abstract: A system and method for a memory system are provided. A memory device includes an array of non-volatile memory cells. A memory controller is connected to the array of non-volatile memory cells. The memory controller is configured to perform the steps of receiving a request to read a value of a memory flag, wherein the memory flag includes a 2-bit value stored in a first memory cell and a second memory cell of the array of non-volatile memory cells, reading a first value of the first memory cell, reading a second value of the second memory cell, and determining the value of the memory flag based on the first value and the second value. In embodiments, the memory flag may have more than 2 bits.
Type: Grant
Filed: April 1, 2022
Date of Patent: March 26, 2024
Assignee: NXP B.V.
Inventor: Soenke Ostertun
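The flag-reconstruction step above is simple enough to show directly: one bit lives in each cell, and the controller recombines them on read. The bit ordering below is an assumption for illustration; the patent does not fix a particular mapping.

```python
def write_flag(value):
    """Split a 2-bit flag value across two cells (assumed ordering:
    cell_a holds the high bit, cell_b the low bit)."""
    return (value >> 1) & 1, value & 1

def read_flag(cell_a, cell_b):
    """Recombine the two stored cell values into the flag value,
    as the controller's read path does."""
    return (cell_a << 1) | cell_b


cell_a, cell_b = write_flag(2)
assert (cell_a, cell_b) == (1, 0)
assert read_flag(cell_a, cell_b) == 2
```

Spreading a flag over multiple cells gives the controller four (or, with more bits, more) distinguishable states from single-bit cells, which can encode intermediate states such as "update in progress".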
-
Patent number: 11921639
Abstract: A method for caching data, and a host device and a storage system that caches data. The method includes determining a first file in a storage device as a first predetermined type of file; reallocating a logical address of a predetermined logical address region to the first file; and updating a first logical address to physical address (L2P) table, corresponding to the predetermined logical address region, in a cache of the host device. The updated first L2P table includes a mapping relationship between the logical address reallocated for the first file and a physical address of the first file.
Type: Grant
Filed: June 28, 2022
Date of Patent: March 5, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Heng Zhang, Yuanyuan Ye, Huimei Xiong, Yunchang Liang
-
Patent number: 11924107
Abstract: Techniques for orchestrating workloads based on policy to operate in optimal host and/or network proximity in cloud-native environments are described herein. The techniques may include receiving flow data associated with network paths between workloads hosted by a cloud-based network. Based at least in part on the flow data, the techniques may include determining that a utilization of a network path between a first workload and a second workload is greater than a relative utilization of other network paths between the first workload and other workloads. The techniques may also include determining that reducing the network path would optimize communications between the first workload and the second workload without adversely affecting communications between the first workload and the other workloads. The techniques may also include causing at least one of a redeployment or a network path re-routing to reduce the networking proximity between the first workload and the second workload.
Type: Grant
Filed: October 4, 2021
Date of Patent: March 5, 2024
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Vincent E. Parla, Kyle Andrew Donald Mestery
-
Patent number: 11924114
Abstract: An electronic device, according to various embodiments of the present invention, comprises a network connection device, at least one processor, and a memory operatively connected to the at least one processor, wherein the memory stores instructions which, when executed, cause the at least one processor to: receive a data packet from the network connection device; add the data packet to a packet list corresponding to the data packet; and when the number of data packets included in the packet list is less than a threshold value, flush the data packets to a network stack on the basis of a flush time value for controlling a packet aggregation function, wherein the flush time value may be determined on the basis of the network throughput.
Type: Grant
Filed: April 23, 2019
Date of Patent: March 5, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jongeon Park, Sangyeop Lee, Geumhwan Yu, Soukjin Bae, Chihun Ahn
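The threshold-or-timeout flush logic described above can be sketched as follows. This is a simplified model under stated assumptions: the flush timer is checked only on packet arrival (a real implementation would arm a timer), and the threshold and flush-time values are illustrative, not the patent's.

```python
import time

class PacketAggregator:
    """Aggregate packets and flush to the network stack either when the
    list reaches a threshold or when a flush-time (derived elsewhere from
    network throughput) has elapsed since the first queued packet."""

    def __init__(self, threshold, flush_time_s, network_stack):
        self.threshold = threshold
        self.flush_time_s = flush_time_s
        self.deliver = network_stack      # callable standing in for the stack
        self.packets = []
        self.first_arrival = None

    def receive(self, pkt, now=None):
        now = time.monotonic() if now is None else now
        if not self.packets:
            self.first_arrival = now
        self.packets.append(pkt)
        if len(self.packets) >= self.threshold:
            self.flush()                  # list full: flush immediately
        elif now - self.first_arrival >= self.flush_time_s:
            self.flush()                  # below threshold: flush on timer

    def flush(self):
        if self.packets:
            self.deliver(self.packets)
            self.packets = []
            self.first_arrival = None


delivered = []
agg = PacketAggregator(threshold=3, flush_time_s=0.5,
                       network_stack=delivered.append)
agg.receive("p1", now=0.0)
agg.receive("p2", now=0.1)    # below threshold, timer not expired
assert delivered == []
agg.receive("p3", now=0.2)    # threshold reached
assert delivered == [["p1", "p2", "p3"]]
```

Tying `flush_time_s` to measured throughput, as the abstract suggests, bounds the extra latency aggregation adds on slow links while still batching on fast ones.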
-
Patent number: 11907131
Abstract: Techniques for efficiently flushing a user data log may postpone or delay establishing chains of metadata pages used as mapping information to map logical addresses to storage locations of content stored at the logical addresses. Processing can include: receiving a write operation that writes data to a logical address; storing an entry for the write operation in the user data log; and flushing the entry from the user data log. Flushing can include storing a metadata log entry in a metadata log, wherein the metadata log entry represents a binding of the logical address to a data block including the data stored at the logical address; and destaging the metadata log entry. Destaging can include updating mapping information used to map the logical address to the data block. The mapping information can include a metadata page in accordance with the metadata log entry.
Type: Grant
Filed: July 1, 2022
Date of Patent: February 20, 2024
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Bar David
-
Patent number: 11886884
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Grant
Filed: November 12, 2019
Date of Patent: January 30, 2024
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Patent number: 11880600
Abstract: A write request directed to the non-volatile memory device is received. It is determined that a stripe associated with an address specified by the write request is present in the volatile memory device. The volatile memory device includes a plurality of stripes, each stripe of the plurality of stripes having a plurality of managed units. The write request is performed on a managed unit of the stripe in the volatile memory device. The stripe in the volatile memory device is evicted to a stripe in the non-volatile memory device.
Type: Grant
Filed: September 2, 2021
Date of Patent: January 23, 2024
Assignee: Micron Technology, Inc.
Inventors: Ning Chen, Jiangli Zhu, Yi-Min Lin, Fangfang Zhu
-
Patent number: 11880318
Abstract: Methods for local page writes via pre-staging buffers for resilient buffer pool extensions are performed by computing systems. Compute nodes in database systems insert, update, and query data pages maintained in storage nodes. Data pages cached locally by compute node buffer pools are provided to buffer pool extensions on local disks as pre-copies via staging buffers that store data pages prior to local disk storage. Encryption of data pages occurs at the staging buffers, which allows a less restrictive update latching during the copy process, with page metadata being updated in buffer pool extensions page tables with in-progress states indicating it is not yet written to local disk. When stage buffers are filled, data pages are written to buffer pool extensions and metadata is updated in page tables to indicate available/valid states. Data pages in staging buffers can be read and updated prior to writing to the local disk.
Type: Grant
Filed: March 28, 2022
Date of Patent: January 23, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Rogério Ramos, Kareem Aladdin Golaub, Chaitanya Gottipati, Alejandro Hernandez Saenz, Raj Kripal Danday
-
Patent number: 11874770
Abstract: An indexless logical-to-physical translation table (L2PTT). In one example, the data storage device includes a memory, a data storage controller, and a bus. The memory includes a mapping unit staging page that includes a plurality of mapping unit pages and a mapping unit page directory. The data storage controller includes a data storage controller memory and is coupled to the memory, the data storage controller memory including an indexless logical-to-physical translation table (L2PTT). The bus transfers data between the data storage controller and a host device in communication with the data storage controller. The data storage controller is configured to perform one or more memory operations with the indexless L2PTT.
Type: Grant
Filed: May 12, 2022
Date of Patent: January 16, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Oleg Kragel, Vijay Sivasankaran
-
Patent number: 11876702
Abstract: A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in the memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
Type: Grant
Filed: March 23, 2020
Date of Patent: January 16, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Abdulla M. Bataineh, Thomas L. Court, Hess M. Hodge
-
Patent number: 11874765
Abstract: A processor may allocate a first buffer segment from a buffer pool. The first buffer segment may be configured with a first contiguous range of memory for a first data partition of a data table. The first data partition comprises a first plurality of data blocks. A processor may store the first plurality of data blocks in order into the first buffer segment. A processor may retrieve the target data block from the first buffer segment in response to a data access request for a target data block of the first plurality of data blocks.
Type: Grant
Filed: May 28, 2021
Date of Patent: January 16, 2024
Assignee: International Business Machines Corporation
Inventors: Shuo Li, Xiaobo Wang, Sheng Yan Sun, Hong Mei Zhang
-
Patent number: 11868265
Abstract: Techniques are described herein for processing asynchronous power transition events while maintaining a persistent memory state. In some embodiments, a system may proxy asynchronous reset events through system logic, which generates an interrupt to invoke a special persistent flush interrupt handler that performs a persistent cache flush prior to invoking a hardware power transition. Additionally or alternatively, the system may include a hardware backup mechanism to ensure all resets and power-transitions requested in hardware reliably complete within a bounded window of time independent of whether the persistent cache flush handler succeeds.
Type: Grant
Filed: March 25, 2022
Date of Patent: January 9, 2024
Assignee: Oracle International Corporation
Inventor: Benjamin John Fuller
-
Patent number: 11861216
Abstract: Methods, systems, and devices for memory operations are described. Data for a set of commands associated with a barrier command may be written to a buffer. Based on a portion of the data to be flushed from the buffer, a determination may be made as to whether to update an indication of a last barrier command for which all of the associated data has been written to a memory device. Based on whether the indication of the last barrier command is updated, a flushing operation may be performed that transfers the portion of the data from the buffer to a memory device. During a recovery operation, the portion of the data stored in the memory device may be validated based on determining that the barrier command is associated with the portion of the data and on updating the indication of the last barrier command to indicate the barrier command.
Type: Grant
Filed: December 20, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventor: Giuseppe Cariello
-
Patent number: 11860789
Abstract: A cache purge simulation system includes a device under test with a cache skip switch. A first cache skip switch includes a configurable state register to indicate whether all of an associated cache is purged upon receipt of a cache purge instruction from a verification system or whether a physical partition that is smaller than the associated cache is purged upon receipt of the cache purge instruction from the verification system. A second cache skip switch includes a configurable start address register comprising a start address that indicates a beginning storage location of a physical partition of an associated cache and a configurable stop address register comprising a stop address that indicates an ending storage location of the physical partition of the associated cache.
Type: Grant
Filed: March 21, 2022
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Yvo Thomas Bernard Mulder, Ralf Ludewig, Huiyuan Xing, Ulrich Mayer
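The register arrangement above can be modeled with a small sketch: a state register selects full purge versus purging only the partition bounded by the start/stop address registers. Register names and the inclusive address range are assumptions for illustration.

```python
class CachePurgeSwitch:
    """Model of a cache skip switch: `partial_mode` plays the role of the
    configurable state register; `start_addr`/`stop_addr` play the roles
    of the start and stop address registers."""

    def __init__(self, cache_lines):
        self.cache = dict(cache_lines)   # address -> line data
        self.partial_mode = False
        self.start_addr = 0
        self.stop_addr = 0

    def purge(self):
        if not self.partial_mode:
            self.cache.clear()           # purge the entire associated cache
        else:
            # purge only the configured physical partition (inclusive range)
            for addr in [a for a in self.cache
                         if self.start_addr <= a <= self.stop_addr]:
                del self.cache[addr]


switch = CachePurgeSwitch({0x0: "a", 0x1: "b", 0x2: "c"})
switch.partial_mode = True
switch.start_addr, switch.stop_addr = 0x1, 0x2
switch.purge()
assert switch.cache == {0x0: "a"}   # only the partition was purged
```

In a simulation context, purging only a small partition instead of the whole cache shortens verification runs while still exercising the purge path.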
-
Patent number: 11853261
Abstract: The present disclosure relates to a serving wireless communication node adapted to predict data files (A, B, C) to be requested by at least two served user terminals (2, 3) and to form predicted sub-data files (A1, A2; B1, B2; C1, C2). In a placement phase, the serving node is adapted to transmit predicted sub-data files (A1, B1; A2, B2) to cache nodes (APC1, APC2), each cache node (APC1, APC2) having a unique set of predicted different sub-data files of different predicted data files, and to receive requests (RA, RB) for data files from the served user terminals (2, 3). In a delivery phase, the serving node is adapted to transmit an initial complementary predicted sub-data file (Formula I) to the cache nodes (APC1, APC2), comprising a reversible combination of the remaining predicted sub-data files (A2, B1) for the files requested.
Type: Grant
Filed: June 10, 2020
Date of Patent: December 26, 2023
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Behrooz Makki, Mikael Coldrey
-
Patent number: 11853574
Abstract: A protocol for processing write operations can include recording each write operation in a log using a PDESC (page descriptor)-PB (page block) pair. The log entry for the write operation can be included in a container of logged writes. In a dual node system, when processing a write operation that writes first data, the protocol can include incrementing a corresponding one of two counters of the container, where the corresponding counter is associated with the one of the system's nodes which received the write operation and caches the first data. Each container can be associated with a logical block address (LBA) range of a logical device, where logged writes that write to target addresses in the particular LBA range are included in the container. Nodes can independently determine flush ownership using the container's counters and can flush containers based on the flush ownership.
Type: Grant
Filed: June 21, 2022
Date of Patent: December 26, 2023
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Geng Han, Changyu Feng
-
Patent number: 11837319
Abstract: A multi-port memory device in communication with a controller includes a memory array for storing data provided by the controller, a first port coupled to the controller via a first controller channel, a second port coupled to the controller via a second controller channel, a processor, and a processor memory local to the processor, wherein the processor memory has stored thereon instructions that, when executed by the processor, cause the processor to: enable data transfer through the first port and/or the second port in response to a first control signal received from the first controller channel and/or a second control signal received from the second controller channel, decode at least one of the received first and second control signals to identify a data operation to perform, the identified data operation including a read or write operation from or to the memory array, and execute the identified data operation.
Type: Grant
Filed: December 10, 2020
Date of Patent: December 5, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hingkwan Huen, Changho Choi
-
Patent number: 11836088
Abstract: Guided cache replacement is described. In accordance with the described techniques, a request to access a cache is received, and a cache replacement policy which controls loading data into the cache is accessed. The cache replacement policy includes a tree structure having nodes corresponding to cachelines of the cache and a traversal algorithm controlling traversal of the tree structure to select one of the cachelines. Traversal of the tree structure is guided using the traversal algorithm to select a cacheline to allocate to the request. The guided traversal modifies at least one decision of the traversal algorithm to avoid selection of a non-replaceable cacheline.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 5, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Jeffrey Christopher Allan
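A guided traversal of this kind can be illustrated with a classic tree-based pseudo-LRU policy for a 4-way set, where the "guidance" overrides a tree decision that would land on a non-replaceable (here, locked) cacheline. This is a sketch of the general technique, not AMD's specific tree structure or override rules.

```python
class GuidedTreePLRU:
    """Tree pseudo-LRU for a 4-way set: three tree bits steer victim
    selection; guided traversal flips a decision when it would select
    a line marked non-replaceable."""

    def __init__(self):
        self.bits = [0, 0, 0]   # bits[0]=root, bits[1]=left pair, bits[2]=right pair
        self.locked = set()     # ways treated as non-replaceable

    def victim(self):
        side = self.bits[0]
        ways = (2, 3) if side else (0, 1)
        if all(w in self.locked for w in ways):
            side ^= 1                         # guide: avoid fully-locked subtree
            ways = (2, 3) if side else (0, 1)
        child_bit = self.bits[2] if side else self.bits[1]
        way = ways[child_bit]
        if way in self.locked:
            way = ways[child_bit ^ 1]         # guide: flip the leaf decision
        return way

    def touch(self, way):
        # standard tree-PLRU update: point the bits away from the used way
        side = way >> 1
        self.bits[0] = side ^ 1
        if side == 0:
            self.bits[1] = (way & 1) ^ 1
        else:
            self.bits[2] = (way & 1) ^ 1


plru = GuidedTreePLRU()
plru.locked = {0}
assert plru.victim() == 1   # tree pointed at way 0, but it is locked
```

The key property is that only the decisions that would pick a non-replaceable line are modified; otherwise the traversal behaves exactly like the unguided policy.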
-
Patent number: 11825146
Abstract: Techniques and mechanisms described herein facilitate the storage of digital media recordings. According to various embodiments, a system is provided comprising a processor, a storage device, Random Access Memory (RAM), an archive writer, and a recording writer. The archive writer is configured to retrieve a plurality of small multimedia segments (SMSs) in RAM and write the plurality of SMSs into an archive container file in RAM. The single archive container file may correspond to a singular multimedia file when complete. New SMSs retrieved from RAM are appended into the archive container file if the new SMSs also correspond to the singular multimedia file. The recording writer is configured to flush the archive container file to be stored as a digital media recording on the storage device once enough SMSs have been appended by the archive writer to the archive container file to complete the singular multimedia file.
Type: Grant
Filed: March 11, 2022
Date of Patent: November 21, 2023
Assignee: TIVO CORPORATION
Inventors: Do Hyun Chung, Ren L. Long, Dan Dennedy
-
Patent number: 11816354
Abstract: Embodiments of the present disclosure relate to establishing persistent cache memory as a write tier. An input/output (IO) workload of a storage array can be analyzed. One or more write data portions of the IO workload can be stored in a persistent memory region of one or more disks of the storage array.
Type: Grant
Filed: July 27, 2020
Date of Patent: November 14, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Owen Martin, Dustin Zentz, Vladimir Desyatov
-
Patent number: 11803470
Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
Type: Grant
Filed: December 22, 2020
Date of Patent: October 31, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
-
Patent number: 11797456
Abstract: Techniques described herein provide a handshake mechanism and protocol for notifying an operating system whether system hardware supports persistent cache flushing. System firmware may determine whether the hardware is capable of supporting a full flush of processor caches and volatile memory buffers in the event of a power outage or asynchronous reset. If the hardware is capable, then persistent cache flushing may be selectively enabled and advertised to the operating system. Once persistent cache flushing is enabled, the operating system and applications may treat data committed to volatile processor caches as persistent. If disabled or not supported by system hardware, then the platform may not advertise support for persistent cache flushing to the operating system.
Type: Grant
Filed: March 25, 2022
Date of Patent: October 24, 2023
Assignee: Oracle International Corporation
Inventor: Benjamin John Fuller
-
Patent number: 11783040
Abstract: An information handling system includes a first memory that stores a firmware image associated with the baseboard management controller. The baseboard management controller begins execution of a kernel, which in turn performs a boot operation of the information handling system. The baseboard management controller begins a file system initialization program. During the boot operation, the baseboard management controller performs a full read and cryptographic verification of the firmware image via a DM-Verity daemon of the file system initialization program. In response to the full read of the firmware image being completed, the baseboard management controller provides a flush command to the kernel via the DM-Verity daemon. The baseboard management controller flushes a cache buffer associated with the baseboard management controller via the kernel.
Type: Grant
Filed: July 9, 2021
Date of Patent: October 10, 2023
Assignee: Dell Products L.P.
Inventors: Michael E. Brown, Nagendra Varma Totakura, Vasanth Venkataramanappa, Jack E. Fewx
-
Patent number: 11762771
Abstract: Methods, systems, and devices for advanced power off notification for managed memory are described. An apparatus may include a memory array comprising a plurality of memory cells and a controller coupled with the memory array. The controller may be configured to receive a notification indicating a transition from a first state of the memory array to a second state of the memory array. The notification may include a value, the value comprising a plurality of bits and corresponding to a minimum duration remaining until a power supply of the memory array is deactivated. The controller may also execute a plurality of operations according to an order determined based at least in part on a parameter associated with the memory array and receiving the notification comprising the value.
Type: Grant
Filed: April 27, 2021
Date of Patent: September 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Vincenzo Reina, Binbin Huo
-
Patent number: 11755483
Abstract: In a multi-node system, each node includes tiles. Each tile includes a cache controller, a local cache, and a snoop filter cache (SFC). The cache controller, responsive to a memory access request by the tile, checks the local cache to determine whether the data associated with the request has been cached by the local cache of the tile. The cached data from the local cache is returned responsive to a cache-hit. The SFC is checked to determine whether any other tile of a remote node has cached the data associated with the memory access request. If it is determined that the data has been cached by another tile of a remote node and if there is a cache-miss by the local cache, then the memory access request is transmitted to the global coherency unit (GCU) and the snoop filter to fetch the cached data. Otherwise an interconnected memory is accessed.
Type: Grant
Filed: May 27, 2022
Date of Patent: September 12, 2023
Assignee: Marvell Asia Pte Ltd
Inventors: Pranith Kumar Denthumdas, Rabin Sugumar, Isam Wadih Akkawi
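The lookup order this abstract describes (local cache, then snoop filter cache, then GCU/snoop filter or interconnected memory) can be sketched as a simple dispatch; the function name, set-based caches, and return labels are invented for illustration:

```python
def handle_access(addr, local_cache, sfc):
    """Sketch of a tile's lookup order: local cache first, then the snoop
    filter cache (SFC), then the GCU/snoop filter or interconnected memory."""
    if addr in local_cache:
        return "local_hit"              # cache-hit: return cached data
    if addr in sfc:                     # a tile on a remote node has the line
        return "fetch_via_gcu_snoop_filter"
    return "access_interconnected_memory"
```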
-
Service performance enhancement using advance notifications of reduced-capacity phases of operations
Patent number: 11748260
Abstract: A first run-time environment executing a first instance of an application, and a second run-time environment executing a second instance of the application are established. An indication of an impending commencement of a reduced-capacity phase of operation of the first run-time environment is obtained at a service request receiver. Based at least in part on the indication, the service request receiver directs a service request to the second instance of the application.
Type: Grant
Filed: September 23, 2019
Date of Patent: September 5, 2023
Assignee: Amazon Technologies, Inc.
Inventors: James Christopher Sorenson, III, Yishai Galatzer, Bernd Joachim Wolfgang Mathiske, Steven Collison, Paul Henry Hohensee
-
Patent number: 11734175
Abstract: The present technology includes a storage device including a memory device including a first storage region and a second storage region and a memory controller configured to, in response to a write request in the first storage region from an external host, acquire data stored in the first storage region based on fail prediction information provided by the memory device and to perform a write operation corresponding to the write request, wherein the first storage region and the second storage region are allocated according to logical addresses of data to be stored by requests of the external host.
Type: Grant
Filed: August 21, 2020
Date of Patent: August 22, 2023
Assignee: SK hynix Inc.
Inventors: Yong Jin, Jung Ki Noh, Seung Won Jeon, Young Kyun Shin, Keun Hyung Kim
-
Patent number: 11720368
Abstract: Techniques for memory management of a data processing system are described herein. According to one embodiment, a memory usage monitor executed by a processor of a data processing system monitors memory usages of groups of programs running within a memory of the data processing system. In response to determining that a first memory usage of a first group of the programs exceeds a first predetermined threshold, a user level reboot is performed in which one or more applications running within a user space of an operating system of the data processing system are terminated and relaunched. In response to determining that a second memory usage of a second group of the programs exceeds a second predetermined threshold, a system level reboot is performed in which one or more system components running within a kernel space of the operating system are terminated and relaunched.
Type: Grant
Filed: March 8, 2021
Date of Patent: August 8, 2023
Assignee: APPLE INC.
Inventors: Andrew D. Myrick, David M. Chan, Jonathan R. Reeves, Jeffrey D. Curless, Lionel D. Desai, James C. McIlree, Karen A. Crippes, Rasha Eqbal
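The two-threshold policy in this abstract can be sketched as a small decision function; the precedence of the system-level reboot over the user-level one, and all names below, are assumptions for illustration:

```python
def choose_action(user_usage, kernel_usage, user_limit, kernel_limit):
    """Pick a reboot action from the two group usages and their thresholds.
    The system-level reboot is assumed to take precedence, since it also
    relaunches kernel-space components."""
    if kernel_usage > kernel_limit:
        return "system_level_reboot"   # terminate/relaunch kernel-space components
    if user_usage > user_limit:
        return "user_level_reboot"     # terminate/relaunch user-space applications
    return "none"
```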
-
Patent number: 11704235
Abstract: A memory system of an embodiment includes a nonvolatile memory, a primary cache memory, a secondary cache memory, and a processor. The processor performs address conversion by using logical-to-physical address conversion information relating to data to be addressed in the nonvolatile memory. Based on whether first processing or second processing is performed on the nonvolatile memory, the processor controls whether the logical-to-physical address conversion information relating to the first processing is stored in the primary cache memory as cache data or the logical-to-physical address conversion information relating to the second processing is stored in the secondary cache memory as cache data.
Type: Grant
Filed: June 1, 2021
Date of Patent: July 18, 2023
Assignee: Kioxia Corporation
Inventors: Shogo Ochiai, Nobuaki Tojo
-
Patent number: 11681657
Abstract: A method, computer program product, and computer system for organizing a plurality of log records into a plurality of buckets, wherein each bucket is associated with a range of a plurality of ranges within a backing store. A bucket of the plurality of buckets from which a portion of the log records of the plurality of log records are to be flushed may be selected. The portion of the log records may be organized into parallel flush jobs. The portion of the log records may be flushed to the backing store in parallel.
Type: Grant
Filed: July 31, 2019
Date of Patent: June 20, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Socheavy D. Heng, William C. Davenport
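The bucketing-by-range and parallel-flush idea can be sketched as follows; the boundary list, the job count, and the stand-in for real flush I/O are all assumptions, not details from the patent:

```python
import bisect
from concurrent.futures import ThreadPoolExecutor

def bucket_for(offset, boundaries):
    """Map a log record's backing-store offset to the bucket whose range
    contains it. boundaries holds each range's inclusive start offset."""
    return bisect.bisect_right(boundaries, offset) - 1

def flush_bucket(records, jobs=4):
    """Split one selected bucket's records into parallel flush jobs.
    len() stands in for the real write to the backing store."""
    chunks = [records[i::jobs] for i in range(jobs)]
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        flushed = pool.map(lambda chunk: len(chunk), chunks)
    return sum(flushed)
```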
-
Patent number: 11681617
Abstract: A data processing apparatus includes a requester, a completer and a cache. Data is transferred between the requester and the cache and between the cache and the completer. The cache implements a cache eviction policy. The completer determines an eviction cost associated with evicting the data from the cache and notifies the cache of the eviction cost. The cache eviction policy implemented by the cache is based, at least in part, on the cost of evicting the data from the cache. The eviction cost may be determined, for example, based on properties or usage of a memory system of the completer.
Type: Grant
Filed: March 12, 2021
Date of Patent: June 20, 2023
Assignee: Arm Limited
Inventor: Alexander Klimov
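A cost-aware eviction policy of the kind described could, for instance, blend a line's age with the completer-reported eviction cost; the scoring formula, weight, and data layout below are illustrative, not taken from the patent:

```python
def pick_victim(lines, now, cost_weight=0.5):
    """lines: {tag: (last_access, eviction_cost)}. Returns the tag with the
    best eviction score: older lines score higher, lines the completer
    reports as costly to evict score lower."""
    def score(tag):
        last_access, cost = lines[tag]
        age = now - last_access          # older lines are better victims
        return age - cost_weight * cost  # costly lines are worse victims
    return max(lines, key=score)
```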
-
Patent number: 11675776
Abstract: Managing data in a computing device is disclosed, including generating reverse delta updates during an apply operation of a forward delta update. A method includes operations of applying forward update data to an original data object to generate an updated data object from the original data object and generating, during the applying, reverse update data, the reverse update data configured to reverse effects of the forward update data and restore the original data object from the updated data object.
Type: Grant
Filed: June 29, 2021
Date of Patent: June 13, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathon Tucker Ready, Cristian Gelu Petruta, Mark Zagorski, Timothy Patrick Conley, Imran Baig, Alexey Teterev, Asish George Varghese
-
Patent number: 11675492
Abstract: Techniques for measuring a user's level of interest in content in an electronic document are disclosed. A system generates a user engagement score based on the user's scrolling behavior. The system detects one scrolling event that moves content into a viewport and another scrolling event that moves the content out of the viewport. The system calculates a user engagement score based on the duration of time the content was in the viewport. The system may also detect a scroll-back event, in which the user scrolls away from content and back to the content. The system may then calculate or update the user engagement score based on the scroll-back event.
Type: Grant
Filed: January 15, 2021
Date of Patent: June 13, 2023
Assignee: Oracle International Corporation
Inventor: Michael Patrick Rodgers
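The dwell-time scoring with a scroll-back bonus might be sketched as below; the event representation, the bonus weight, and the function name are invented for illustration:

```python
def engagement_score(events, now):
    """events: list of (timestamp, kind), kind 'in'/'out' marking the content
    entering or leaving the viewport. Score = total dwell time plus an
    illustrative bonus of 5.0 per scroll-back (re-entry)."""
    dwell, entries, entered_at = 0.0, 0, None
    for ts, kind in events:
        if kind == "in":
            entries += 1
            entered_at = ts
        elif kind == "out" and entered_at is not None:
            dwell += ts - entered_at
            entered_at = None
    if entered_at is not None:       # content still in the viewport
        dwell += now - entered_at
    scroll_backs = max(entries - 1, 0)
    return dwell + 5.0 * scroll_backs
```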
-
Patent number: 11675512
Abstract: A storage system allocates single-level cell (SLC) blocks in its memory to act as a write buffer and/or a read buffer. When the storage system uses the SLC blocks as a read buffer, the storage system reads data from multi-level cell (MLC) blocks in the memory and stores the data in the read buffer prior to receiving a read command from a host for the data. When the storage system uses the SLC blocks as a write buffer, the storage system retains certain data in the write buffer while other data is flushed from the write buffer to MLC blocks in the memory.
Type: Grant
Filed: August 1, 2022
Date of Patent: June 13, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Rotem Sela, Einav Zilberstein, Karin Inbar
-
Patent number: 11669449
Abstract: One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list, a recent ghost list, a frequent list, and a frequent ghost list. When an eviction operation is initiated or triggered, timestamps of last access for the entries in the entry list are adjusted by corresponding adjustment values. Candidate entries for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
Type: Grant
Filed: November 30, 2021
Date of Patent: June 6, 2023
Assignee: DELL PRODUCTS L.P.
Inventors: Keyur B. Desai, Xiaobing Zhang
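The timestamp-adjustment step could look like the following sketch, where per-list adjustment values make frequently used entries appear more recently touched and thus less likely to be candidates; the list names, offsets, and candidate count are illustrative:

```python
def eviction_candidates(entries, adjustments, k=2):
    """entries: {key: (list_name, last_access)}. adjustments: per-list offsets
    added to each entry's last-access timestamp (e.g. a positive offset for
    the frequent list protects its entries). Returns the k keys with the
    oldest adjusted timestamps as eviction candidates."""
    adjusted = {key: ts + adjustments[lst] for key, (lst, ts) in entries.items()}
    return sorted(adjusted, key=adjusted.get)[:k]
```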