Patents Examined by Hua J Song
-
Patent number: 12204448
Abstract: A plurality of work items are processed through a processing pipeline comprising a plurality of stages in processing logic. The processing of a work item includes: (i) reading data in accordance with a memory address associated with the work item, (ii) updating the read data, and (iii) writing the updated data in accordance with the memory address associated with the work item. The method includes processing a first work item and a second work item through the processing pipeline, wherein the processing of the first work item through the pipeline is initiated earlier than the processing of the second work item. When it is determined that the first and second work items are associated with the same memory address, first updated data of the first work item is written to a register in the processing logic, and the processing of the second work item comprises reading the first updated data from the register instead of reading data from the memory.
Type: Grant
Filed: December 19, 2022
Date of Patent: January 21, 2025
Assignee: Imagination Technologies Limited
Inventor: Tijmen Spreij
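To illustrate the address-match forwarding idea, here is a minimal Python sketch (not the patented implementation): when a later work item targets the same address as an earlier in-flight item, it reads the earlier item's updated value from a register instead of memory. The names (Pipeline, forward_reg, update) and the sequential processing model are assumptions; in the patent the work items overlap in pipeline stages, which is what makes the forwarding register necessary.

```python
# Minimal sketch of address-match forwarding for read-modify-write work items.
# All names (Pipeline, forward_reg, update) are illustrative, not from the patent.
class Pipeline:
    def __init__(self, memory, update):
        self.memory = memory        # backing store: address -> value
        self.update = update        # the read-modify-write function
        self.forward_reg = {}       # address -> most recent in-flight updated value

    def process(self, address):
        # A later work item with the same address reads the forwarded value
        # instead of re-reading (possibly not-yet-written) data from memory.
        if address in self.forward_reg:
            value = self.forward_reg[address]
        else:
            value = self.memory[address]
        new_value = self.update(value)
        self.forward_reg[address] = new_value   # visible to later work items
        self.memory[address] = new_value        # write-back stage
        return new_value

mem = {0x10: 5}
pipe = Pipeline(mem, update=lambda v: v + 1)
pipe.process(0x10)      # reads 5 from memory, writes back 6
pipe.process(0x10)      # reads 6 from the forwarding register, writes back 7
```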
-
Patent number: 12197736
Abstract: A technique of managing the rate of I/O (Input/Output) request processing includes a token-bucket arrangement having first, second, and third token buckets. The first token bucket is provided with sufficient tokens to accommodate an expected baseline level of I/O requests, whereas the second token bucket is provided with sufficient tokens to accommodate an expected excess level of I/O requests during bursts. The third token bucket is provided with tokens at predefined intervals and limits a total amount of bursting available during those intervals.
Type: Grant
Filed: January 5, 2023
Date of Patent: January 14, 2025
Assignee: Dell Products L.P.
Inventors: Vitaly Zharkov, Omer Dayan, Eldad Zinger
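A minimal Python sketch of the three-bucket arrangement follows. The bucket capacities, refill model, and admission policy are assumptions for illustration, not parameters from the patent: the first bucket covers the baseline rate, the second covers bursts, and the third caps how much bursting the interval allows.

```python
# Minimal sketch of the three-bucket arrangement; sizes, refill rates, and the
# admit policy below are assumptions, not parameters from the patent.
import time

class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_rate = refill_rate          # tokens per second
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now

    def try_take(self, n=1):
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

class IoRateLimiter:
    def __init__(self, baseline, burst, burst_budget):
        self.baseline = baseline                # covers the expected baseline I/O rate
        self.burst = burst                      # covers excess I/O during bursts
        self.burst_budget = burst_budget        # replenished per interval; caps total bursting

    def admit(self):
        if self.baseline.try_take():
            return True
        # Bursting consumes from both the burst bucket and the interval's burst budget.
        if self.burst.try_take():
            if self.burst_budget.try_take():
                return True
            self.burst.tokens += 1              # give the token back: budget exhausted
        return False
```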
-
Patent number: 12189947
Abstract: According to one embodiment, a memory device includes a nonvolatile semiconductor memory having physical storage areas that include an externally accessible user area and are divided into management units, and a control unit. The control unit receives a control command having a first argument to designate a sequential write area and a read command or a write command, assigns a management unit represented by an address of the read command or the write command as the sequential write area, and changes memory access control by judging whether an address of a memory access command to access the user area indicates access in the sequential write area, whose size is equivalent to the management unit.
Type: Grant
Filed: December 28, 2022
Date of Patent: January 7, 2025
Assignee: Kioxia Corporation
Inventor: Akihisa Fujimoto
-
Patent number: 12182395
Abstract: An electronic device includes an external device configured to determine a first performance index on the basis of at least one of a power level and a temperature signal, to put the first performance index into a command, and to output the command. The electronic device also includes a storage component including a plurality of memory dies. The electronic device further includes a memory controller configured to provide the temperature signal to the external device at a set transmission period, and to control the storage component to process the command by simultaneously operating the number of memory dies corresponding to the first performance index as the command is received.
Type: Grant
Filed: December 20, 2022
Date of Patent: December 31, 2024
Assignee: SK hynix Inc.
Inventor: Eu Joon Byun
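A small Python sketch of the idea of mapping a performance index to a degree of die parallelism. The index-to-die mapping table and function name are illustrative assumptions, not values from the patent.

```python
# Sketch of mapping a command's performance index to the number of memory dies
# operated in parallel; the mapping table is an assumption, not from the patent.
DIES_BY_PERFORMANCE_INDEX = {0: 1, 1: 2, 2: 4, 3: 8}    # index -> concurrent dies

def dies_for_command(performance_index, total_dies):
    """Return how many dies to operate simultaneously for a command carrying this index."""
    return min(DIES_BY_PERFORMANCE_INDEX.get(performance_index, 1), total_dies)

print(dies_for_command(2, total_dies=8))    # -> 4
```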
-
Patent number: 12182441
Abstract: Aspects of a storage device for providing superior sustained sequential write (SSW) performance are disclosed. A controller on the storage device allocates buffer space in the host memory buffers (HMBs) on the host device for storage of relocation data, i.e., data to be folded or compacted. The controller, or a hardware element therein, can therefore allocate local SRAM (including TRAM) for use in accommodating incoming host writes. The increased SRAM allocation, achieved without an attendant increase in cost or size of the storage device, enables the storage device to perform operations in parallel and substantially increase SSW performance metrics.
Type: Grant
Filed: May 5, 2022
Date of Patent: December 31, 2024
Assignee: SANDISK TECHNOLOGIES, INC.
Inventors: Sagar Uttarwar, Disha Gundecha
-
Patent number: 12174749
Abstract: The creation, maintenance, and accessing of page tables is done by a virtual machine monitor running on a computing system rather than the guest operating systems. This allows page table walks to be completed in fewer memory accesses when compared to the guest operating system's maintenance of the page tables. In addition, the virtual machine monitor may utilize additional resources to offload page table access and maintenance functions from the CPU to another device, such as a page table management device or page table management node. Offloading some or all page table access and maintenance functions to a specialized device or node enables the CPU to perform other tasks during page table walks and/or other page table maintenance functions.
Type: Grant
Filed: January 14, 2022
Date of Patent: December 24, 2024
Assignee: Rambus Inc.
Inventors: Steven C. Woo, Christopher Haywood, Evan Lawrence Erickson
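For context, the sketch below shows the kind of multi-level page-table walk that a VMM-managed walker or an offload device would perform, where each level costs one memory access. The field widths, the table encoding, and the `read_entry` callback are assumptions for illustration; they are not details from the patent.

```python
# Illustrative 3-level radix page-table walk; field widths and encoding are assumptions.
PAGE_SHIFT = 12       # 4 KiB pages
INDEX_BITS = 9        # 512 entries per table
LEVELS = 3

def walk(root_table, vaddr, read_entry):
    """Translate vaddr to a physical address. `read_entry(table, index)` models one
    memory access and returns the next-level table (or the final frame number),
    or None if the entry is not present."""
    table = root_table
    for level in reversed(range(LEVELS)):
        index = (vaddr >> (PAGE_SHIFT + level * INDEX_BITS)) & ((1 << INDEX_BITS) - 1)
        entry = read_entry(table, index)
        if entry is None:
            return None                                   # not present: page fault
        table = entry
    frame = table
    return (frame << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))

# Toy usage: virtual page 0 maps to frame 0x42, so 0x123 translates to 0x42123.
tables = {0: {0: {0: 0x42}}}
paddr = walk(tables, 0x123, lambda table, index: table.get(index))
```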
-
Patent number: 12169455
Abstract: Database performance is improved using write-behind optimization of a covering cache. A non-volatile memory data cache includes a full copy of stored data file(s). Data cache and storage writes, checkpoints, and recovery may be decoupled (e.g., with separate writes, checkpoints, and recoveries). A covering data cache supports improved performance by supporting database operation during storage delays or outages and/or by supporting reduced I/O operations using aggregate writes of contiguous data pages (e.g., clean and dirty pages) to stored data file(s). Aggregate writes reduce data file fragmentation and reduce the cost of snapshots. Performing write-behind operations in a background process with optimistic concurrency control may support improved database performance, for example, by not interfering with write operations to the data cache. The data cache may store (e.g., in metadata) data cache checkpoint information and storage checkpoint information. A stored data file may store storage checkpoint information (e.g.
Type: Grant
Filed: May 3, 2023
Date of Patent: December 17, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Krystyna Ewa Reisteter, Cristian Diaconu, Rogério Ramos, Sarika R. Iyer, Siddharth Deepak Mehta, Huanhui Hu
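The aggregate-write idea can be sketched as coalescing dirty pages into contiguous runs, letting a few clean pages from the covering cache fill the gaps so each run becomes a single write. This Python sketch is illustrative only; the gap threshold and function name are assumptions, not details from the patent.

```python
# Sketch of coalescing dirty pages into aggregate writes of contiguous page runs,
# letting up to `max_gap` clean pages (served from the covering cache) fill gaps.
def build_aggregate_writes(dirty_pages, max_gap=4):
    """Group sorted dirty page numbers into contiguous [start, end] ranges; each
    range is flushed to the data file with a single write."""
    runs = []
    for page in sorted(dirty_pages):
        if runs and page - runs[-1][1] <= max_gap + 1:
            runs[-1][1] = page      # extend the run; clean pages in the gap are written too
        else:
            runs.append([page, page])
    return [tuple(run) for run in runs]

print(build_aggregate_writes({10, 11, 14, 40}))   # [(10, 14), (40, 40)]: 2 writes, not 4
```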
-
Patent number: 12164772
Abstract: A memory system includes a memory device with a memory cell array including a first and second plane and first and second caches. A controller is configured to output status information in response to a status read command, the status information indicating the states of the caches. The controller begins a first process in response to a command addressed to the first plane if the status information indicates the first and second caches are in the ready state, and begins a second process on the second plane according to a second command addressed to the second plane if the status information indicates at least the second cache is in the ready state.
Type: Grant
Filed: July 6, 2023
Date of Patent: December 10, 2024
Assignee: Kioxia Corporation
Inventors: Masanobu Shirakawa, Tokumasa Hara
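A rough Python sketch of the ready-state gating described in the abstract. The status layout (two named cache-ready flags) and the gating policy are assumptions drawn only from the wording above, not from the patent's claims.

```python
# Sketch of gating per-plane command issue on cache-ready bits from a status read;
# the status layout and policy below are assumptions based on the abstract.
def may_begin(process, cache_ready):
    """cache_ready maps 'cache1'/'cache2' to True when that cache is ready."""
    if process == "first":       # first command, addressed to the first plane
        return cache_ready["cache1"] and cache_ready["cache2"]
    if process == "second":      # second command, addressed to the second plane
        return cache_ready["cache2"]
    return False

print(may_begin("second", {"cache1": False, "cache2": True}))   # True
```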
-
Patent number: 12153520
Abstract: A method and an apparatus for processing Bitmap data are provided by the embodiments of the present disclosure. The method for processing Bitmap data includes: dividing a Bitmap region in a disk into a plurality of partitions in advance and setting an update region in the disk; obtaining a respective amount of dirty data corresponding to each of the plurality of partitions in memory in response to a condition for writing back to the disk being satisfied; finding, from the plurality of partitions and according to the respective amounts of dirty data, multiple second partitions whose amounts of dirty data qualify them to be merged into the update region; and recording the dirty data corresponding to the multiple second partitions in the memory into the update region in the disk through one or more I/O operations after merging.
Type: Grant
Filed: January 10, 2023
Date of Patent: November 26, 2024
Assignee: Alibaba Cloud Computing Ltd.
Inventors: Ya Lin, Feifei Li, Peng Wang, Zhushi Cheng, Fei Wu
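The selection step can be sketched in Python as follows. The smallest-first policy, the capacity model of the update region, and all names are assumptions made for illustration; the patent does not specify this particular policy.

```python
# Sketch of choosing which partitions' dirty Bitmap data to merge into the update
# region; the smallest-first policy and capacity model are assumptions.
def select_partitions_for_update_region(dirty_bytes_per_partition, region_capacity):
    """Pick partitions (smallest dirty amount first) whose dirty data fits in the
    update region, so many small partitions are flushed with few I/O operations."""
    candidates = [(p, d) for p, d in dirty_bytes_per_partition.items() if d > 0]
    selected, used = [], 0
    for partition, dirty in sorted(candidates, key=lambda kv: kv[1]):
        if used + dirty > region_capacity:
            break
        selected.append(partition)
        used += dirty
    return selected, used

print(select_partitions_for_update_region({0: 4096, 1: 512, 2: 0, 3: 1024}, 4096))
# -> ([1, 3], 1536): partitions 1 and 3 are merged; partition 0 is written back directly.
```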
-
Patent number: 12147351
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet-higher-latency memory installation.
Type: Grant
Filed: April 25, 2023
Date of Patent: November 19, 2024
Assignee: Rambus Inc.
Inventors: Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
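A minimal Python sketch of demand-based placement across three tiers, assuming per-page access counts gathered while pages sit in the higher-latency tier. The thresholds and the move labels are illustrative assumptions, not values from the patent.

```python
# Sketch of demand-based placement across three memory tiers; the thresholds are
# assumed tuning parameters, not values from the patent.
def plan_page_moves(access_counts, hot_threshold=64, cold_threshold=4):
    """access_counts maps page -> accesses observed while in the higher-latency tier."""
    moves = {}
    for page, count in access_counts.items():
        if count >= hot_threshold:
            moves[page] = "restore_to_local"     # high demand: back to low-latency memory
        elif count < cold_threshold:
            moves[page] = "demote_further"       # very low demand: optional move outward
        else:
            moves[page] = "keep_in_place"        # moderate demand: stay where it is
    return moves

print(plan_page_moves({100: 100, 200: 10, 300: 1}))
# {100: 'restore_to_local', 200: 'keep_in_place', 300: 'demote_further'}
```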
-
Patent number: 12147342
Abstract: A storage system includes at least one solid-state drive (SSD) and a baseboard management controller (BMC). The at least one SSD communicates over a communication link information that the at least one SSD includes a predetermined number of super capacitors, in which the predetermined number includes 0, and is capable of providing a mode of operation, spanning a predetermined amount of time, to flush data in a volatile memory to a non-volatile memory if a loss-of-power condition is detected. The BMC device receives the information from the SSD and in response sends a message to the at least one SSD to enter the mode of operation.
Type: Grant
Filed: January 6, 2021
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Wentao Wu, Sompong Olarig, William Schwaderer, Ramdas Kachare
-
Patent number: 12147670
Abstract: A method for performing data access management of a memory device in a predetermined communications architecture with aid of unbalanced table regions and associated apparatus are provided.
Type: Grant
Filed: January 9, 2023
Date of Patent: November 19, 2024
Assignee: Silicon Motion, Inc.
Inventors: Jie-Hao Lee, Chien-Cheng Lin, Chang-Chieh Huang
-
Patent number: 12141449
Abstract: A method for managing processing power in a storage system is provided. The method includes providing a plurality of blades, each of a first subset having a storage node and storage memory, and each of a second, differing subset having a compute-only node. The method includes distributing authorities across the plurality of blades, to a plurality of nodes including at least one compute-only node, wherein each authority has ownership of a range of user data.
Type: Grant
Filed: November 4, 2022
Date of Patent: November 12, 2024
Assignee: PURE STORAGE, INC.
Inventors: John Martin Hayes, Robert Lee, John Colgrove, John D. Davis
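One simple way to picture distributing authorities (owners of user-data ranges) across a mixed set of blades is deterministic hashing, sketched below in Python. The hashing scheme and the node names are assumptions for illustration; the patent does not state that this is the distribution mechanism used.

```python
# Sketch of deterministically distributing authorities across blades, including
# compute-only blades; the hashing scheme here is an assumption.
import hashlib

def owner_for_range(range_id, nodes):
    """Map a user-data range to one node from `nodes` (storage or compute-only)."""
    digest = hashlib.sha256(str(range_id).encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

nodes = ["blade-0 (storage)", "blade-1 (storage)", "blade-2 (compute-only)"]
assignment = {r: owner_for_range(r, nodes) for r in range(6)}
print(assignment)
```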
-
Patent number: 12141444
Abstract: An example computer-implemented method for mirroring memory in a disaggregated memory clustered environment is provided. The method includes assigning, by a hypervisor, a disaggregated memory to a virtual machine comprising a remote disaggregated memory, the virtual machine being one node of a cluster of the disaggregated memory clustered environment. The method further includes allocating, by a disaggregated memory manager, a mirrored memory for the remote disaggregated memory to mirror the remote disaggregated memory on an alternate node of the cluster of the disaggregated memory clustered environment. The method further includes, responsive to a memory access occurring, maintaining, by the disaggregated memory manager, the mirrored memory. The method further includes, responsive to detecting a memory allocation adjustment, modifying, by the disaggregated memory manager, memory usage across the cluster.
Type: Grant
Filed: December 15, 2022
Date of Patent: November 12, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Adam Thomas Stallman, Suresh Guduru, Ryan K. Cradick
-
Patent number: 12135891
Abstract: A storage service supports attachment of multiple clients to a distributed storage object and further supports persistent reservations that govern the types of access the respective clients are granted with respect to the distributed storage object. In order to efficiently distribute reservation state changes to multiple partitions of the distributed storage object hosted by different data storage units/servers, existing connections between the data storage units/servers hosting the partitions of the distributed storage object and the connected clients are used to propagate reservation state changes.
Type: Grant
Filed: April 7, 2023
Date of Patent: November 5, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Swapnil Vinay Dinkar, Pradeep Kunni Raman, David Matthew Buches, Hon Ping Shea, Norbert Paul Kusters
-
Patent number: 12131068
Abstract: Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for synchronously accessing data. The method may include sending metadata associated with data to be moved by a user to a programmable network device associated with a persistent memory containing the data, so as to enable the programmable network device to move the data based on the metadata, wherein the programmable network device is a smart network interface card having a remote direct memory access function. The method may also include entering a hibernation state. In addition, the method may include exiting from the hibernation state in response to receiving a confirmation of operation completion from the programmable network device, so as to notify the user that an operation of moving the data is complete.
Type: Grant
Filed: November 15, 2022
Date of Patent: October 29, 2024
Assignee: DELL PRODUCTS L.P.
Inventors: Tao Chen, Ran Liu, Wei Lu
-
Patent number: 12124724
Abstract: Embodiments of this application provide a memory migration method, an apparatus, and a computing device.
Type: Grant
Filed: September 23, 2020
Date of Patent: October 22, 2024
Assignee: Alibaba Group Holding Limited
Inventor: Dianchen Tian
-
Patent number: 12124705
Abstract: Various embodiments provide for performing a memory operation, such as a memory block compaction operation or a block folding or refresh operation, based on a temperature associated with a memory block of a memory device. For instance, some embodiments provide techniques that can cause performance of a block compaction operation on a memory block at a temperature that is at or higher than a predetermined temperature value. Additionally, some embodiments provide techniques that can cause performance of a block folding/refresh operation, at a temperature that is at or higher than the predetermined temperature value, on one or more blocks on which data was written at a temperature lower than the predetermined temperature value.
Type: Grant
Filed: June 23, 2022
Date of Patent: October 22, 2024
Assignee: Micron Technology, Inc.
Inventors: Pitamber Shukla, Ching-Huang Lu, Devin Batutis
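The temperature gating can be sketched as a simple decision function, shown below in Python. The threshold value and the block flags are illustrative assumptions; the patent only says maintenance runs at or above a predetermined temperature.

```python
# Sketch of gating block maintenance on device temperature; the threshold value
# and the block flags are assumptions used for illustration.
def maintenance_action(current_temp_c, block, threshold_c=40):
    """`block` is a dict with 'needs_compaction' and 'written_cold' flags."""
    if current_temp_c < threshold_c:
        return "defer"              # below the predetermined temperature: postpone
    if block["needs_compaction"]:
        return "compact"            # compaction runs at or above the threshold
    if block["written_cold"]:
        return "fold_refresh"       # refresh data that was written while cold
    return "none"

print(maintenance_action(45, {"needs_compaction": False, "written_cold": True}))  # fold_refresh
```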
-
Patent number: 12124376
Abstract: A method for providing an elastic columnar cache includes receiving cache configuration information indicating a maximum size and an incremental size for a cache associated with a user. The cache is configured to store a portion of a table in a row-major format. The method includes caching, in a column-major format, a subset of the plurality of columns of the table in the cache, and receiving a plurality of data requests requesting access to the table and associated with a corresponding access pattern requiring access to one or more of the columns. While executing one or more workloads, the method includes, for each column of the table, determining an access frequency indicating a number of times the corresponding column is accessed over a predetermined time period, and dynamically adjusting the subset of columns based on the access patterns, the maximum size, and the incremental size.
Type: Grant
Filed: April 22, 2022
Date of Patent: October 22, 2024
Assignee: Google LLC
Inventors: Anjan Kumar Amirishetty, Xun Cheng, Viral Shah
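A possible reading of the adjustment step is sketched below: cache the most-frequently accessed columns, growing the cache budget in `incremental_size` steps up to `max_size`. The greedy policy and the uniform per-column size model are assumptions for illustration, not the patented algorithm.

```python
# Sketch of elastic column selection; the greedy policy and per-column size model
# are assumptions, not details from the patent.
def choose_cached_columns(access_counts, column_sizes, max_size, incremental_size):
    budget, used, cached = incremental_size, 0, []
    for column in sorted(access_counts, key=access_counts.get, reverse=True):
        size = column_sizes[column]
        while used + size > budget and budget + incremental_size <= max_size:
            budget += incremental_size        # grow the cache by one increment
        if used + size > budget:
            break                             # cannot grow further within max_size
        cached.append(column)
        used += size
    return cached, budget

cols, budget = choose_cached_columns(
    access_counts={"a": 90, "b": 50, "c": 5},
    column_sizes={"a": 40, "b": 40, "c": 40},
    max_size=100, incremental_size=50)
# cols == ["a", "b"], budget == 100: column "c" would not fit within max_size.
```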
-
Patent number: 12124381
Abstract: A processing system includes a hardware translation lookaside buffer (TLB) retry loop that retries virtual memory address to physical memory address translation requests from a software client independent of a command from the software client. In response to a retry response notification at the TLB, a controller of the TLB waits for a programmable delay period and then retries the request without involvement from the software client. After a retry results in a hit at the TLB, the controller notifies the software client of the hit. Alternatively, if a retry results in an error at the TLB, the controller notifies the software client of the error and the software client initiates error handling.
Type: Grant
Filed: November 18, 2021
Date of Patent: October 22, 2024
Assignee: ATI Technologies ULC
Inventor: Edwin Pang
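The retry-loop behavior (retry after a programmable delay, notify the client only on a final hit or error) can be sketched in Python as below. The delay value, the retry cap, and the `tlb_lookup`/`notify` callbacks are assumptions made so the sketch is self-contained; the patented loop is implemented in hardware.

```python
# Sketch of a TLB-controller retry loop performed on behalf of the software client;
# the delay, retry cap, and the lookup/notify callbacks are assumptions.
import time

def translate_with_retries(tlb_lookup, vaddr, notify,
                           retry_delay_s=0.001, max_retries=8):
    """tlb_lookup(vaddr) returns ('hit', paddr), ('retry', None), or ('error', None).
    The client is notified only of the final hit or error, never of retries."""
    for _ in range(max_retries + 1):
        status, paddr = tlb_lookup(vaddr)
        if status == "hit":
            notify("hit", paddr)
            return paddr
        if status == "error":
            notify("error", None)       # client takes over error handling
            return None
        time.sleep(retry_delay_s)       # programmable delay before the next retry
    notify("error", None)               # retries exhausted (the cap is an assumption)
    return None
```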