Patents Examined by Charles Rones
-
Patent number: 11687288
Abstract: A method of queue design for data storage and management applies RAM data synchronization technology across many distributed nodes, which both ensures storage performance and solves the problem of data loss during system operation; performs business separation and parallelizes actions to optimize processing performance; uses simply extracted information instead of accessing the original information to speed up processing and promptly detect events that exceed a threshold; allocates a fixed memory for the queue to ensure the safety of the whole system; and provides monitoring and early warning of possible incidents. The method includes: step 1: build a deployment model; step 2: initialize values when the application first launches; step 3: write data to the queue; step 4: detect the threshold and process the data in the queue; step 5: remove processed data from the queue; step 6: monitor the queue and provide early warning.
Type: Grant
Filed: October 26, 2021
Date of Patent: June 27, 2023
Assignee: VIETTEL GROUP
Inventors: Thanh Phong Pham, The Anh Do, Thi Huyen Dang, Viet Anh Nguyen
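A minimal Python sketch of the fixed-memory queue with threshold-driven processing and early warning that steps 3-6 describe; the class name, capacity, and threshold values are illustrative assumptions, not details taken from the patent.

```python
from collections import deque

class FixedQueue:
    """Queue with a fixed capacity, a processing threshold, and an early-warning level."""

    def __init__(self, capacity, process_threshold, warn_threshold):
        assert process_threshold <= capacity and warn_threshold <= capacity
        self.items = deque()
        self.capacity = capacity                    # fixed memory reserved for the queue
        self.process_threshold = process_threshold
        self.warn_threshold = warn_threshold

    def write(self, record):
        """Step 3: write data to the queue; reject writes once the fixed capacity is reached."""
        if len(self.items) >= self.capacity:
            return False
        # Store only a small extracted summary instead of the original record.
        self.items.append({"key": record["key"], "size": record["size"]})
        return True

    def process_if_threshold(self, handler):
        """Steps 4-5: when the threshold is reached, process queued data and remove it."""
        if len(self.items) < self.process_threshold:
            return 0
        processed = 0
        while self.items:
            handler(self.items.popleft())
            processed += 1
        return processed

    def monitor(self):
        """Step 6: report queue depth and raise an early warning near capacity."""
        depth = len(self.items)
        return {"depth": depth, "warning": depth >= self.warn_threshold}
```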
-
Patent number: 11687460
Abstract: Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) which includes NIC receiver circuitry which can receive data for processing on the GPU, NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and which can send coherence information to a coherence directory of the GPU compute node based on the data. The GPU compute node also includes a GPU which includes GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.
Type: Grant
Filed: April 26, 2017
Date of Patent: June 27, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Michael W. LeBeane, Walter B. Benton, Vinay Agarwala
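A sketch of the decision flow the abstract outlines: the NIC delivers data to main memory and posts coherence information, and the GPU injects the data into its cache only when a heuristic is satisfied. The heuristic shown (local consumer and small payload) and all names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class CoherenceInfo:
    address: int
    length: int
    consumer_id: int       # which GPU kernel/queue is expected to consume the data

class GpuCacheInjector:
    """Decides whether NIC-delivered data should be injected into the GPU cache."""

    def __init__(self, cache, main_memory, local_consumers, max_inject_bytes=64 * 1024):
        self.cache = cache                        # dict: address -> bytes (stand-in for a GPU cache)
        self.main_memory = main_memory            # dict: address -> bytes
        self.local_consumers = local_consumers    # consumer ids running on this GPU
        self.max_inject_bytes = max_inject_bytes

    def satisfies_heuristic(self, info: CoherenceInfo) -> bool:
        # Example heuristic: inject only small payloads destined for a consumer on this GPU.
        return info.consumer_id in self.local_consumers and info.length <= self.max_inject_bytes

    def on_coherence_info(self, info: CoherenceInfo):
        if self.satisfies_heuristic(info):
            # Load the data into the cache from main memory ahead of the first access.
            self.cache[info.address] = self.main_memory[info.address]
```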
-
Patent number: 11681444
Abstract: The present application discloses a magnetic disk management method, an apparatus and an electronic device by providing an engine layer including a plurality of space files and an encapsulation layer including a file directory tree of the space file structure. The engine layer responds to a data management operation performed on a target space file of the file directory tree output by the engine layer, a target magnetic disk space corresponding to the target space file is determined through the address association list of the encapsulation layer, and data management is performed on the data in the target magnetic disk space. Thereby, different data can be isolated by different space files when entering through the engine layer, which ensures that security issues such as leakage of the data on the magnetic disk will not occur.
Type: Grant
Filed: September 15, 2020
Date of Patent: June 20, 2023
Inventors: Chao Wang, Jian Liu, Li Li
-
Patent number: 11681436
Abstract: An information handling system may include a processor and a scanning agent including a program of instructions embodied in computer-readable media communicatively coupled to the processor, and configured to, asynchronously from input/output operations to a solid state drive communicatively coupled to the processor: scan sequences of logical block addresses corresponding to consecutively occurring input/output operations to the solid state drive; determine logical block addresses that are frequently proximate to each other in the sequences; and communicate information regarding the logical block addresses that are frequently proximate to each other in the sequences to the solid state drive, such that a controller of the solid state drive uses the information to organize data in physical pages of the solid state drive such that at least one physical page includes logical block addresses that are frequently proximate to each other in the sequences.
Type: Grant
Filed: November 12, 2020
Date of Patent: June 20, 2023
Assignee: Dell Products L.P.
Inventors: Wei Dong, Weilin Liu
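One way a scanning agent could detect logical block addresses that are frequently proximate is to count LBA pairs co-occurring inside a sliding window over the observed command sequence. The window size and reporting threshold below are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def find_proximate_lbas(lba_sequence, window=8, min_count=4):
    """Return pairs of logical block addresses that frequently appear close together.

    lba_sequence: LBAs of consecutively observed I/O operations.
    window: how many consecutive operations count as "proximate".
    min_count: how often a pair must co-occur before it is reported.
    """
    pair_counts = Counter()
    for i in range(len(lba_sequence) - window + 1):
        window_lbas = set(lba_sequence[i:i + window])
        for a, b in combinations(sorted(window_lbas), 2):
            pair_counts[(a, b)] += 1
    return [pair for pair, count in pair_counts.items() if count >= min_count]

# The resulting pairs would be communicated to the SSD controller, which could
# co-locate the corresponding data in the same physical page.
hot_pairs = find_proximate_lbas([10, 11, 90, 10, 11, 91, 10, 11, 92, 10, 11, 93])
```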
-
Patent number: 11675539
Abstract: A computational device configures a storage system that supports a plurality of submission queues. A file system monitors characteristics of received writes to distribute the writes among the plurality of submission queues. The computational device categorizes the writes into full track writes, medium track writes, and small track writes, measures a frequency of different categories of writes determined based on the categorization of the writes, and generates arbitrations of the writes with varying priorities for distributing the writes for processing in the submission queues. A full track write includes writing incoming data blocks of the writes received to a fresh track, in response to a total size of the incoming data blocks being equal to or more than a size of one full track. A medium track write includes overwriting an existing data track. A small track write includes staging the incoming data blocks to a caching storage.
Type: Grant
Filed: June 3, 2021
Date of Patent: June 13, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ravindra R. Sure, Samrat P. Kannikar, Sukumar Vankadhara, Sasikanth Eda
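A small sketch of the categorization and frequency-measurement steps described above. The track size, the cut-off between small and medium writes, and the function names are assumptions; the patent defines medium writes by the overwrite case rather than by a fixed size fraction, so the size check here is only an illustrative simplification.

```python
TRACK_SIZE = 64 * 1024          # hypothetical size of one full track, in bytes
MEDIUM_FRACTION = 0.25          # hypothetical cut-off between small and medium writes

def categorize_write(incoming_blocks, overwrites_existing_track):
    """Classify a write as a 'full', 'medium', or 'small' track write."""
    total = sum(len(block) for block in incoming_blocks)
    if total >= TRACK_SIZE:
        return "full"            # written to a fresh track
    if overwrites_existing_track and total >= MEDIUM_FRACTION * TRACK_SIZE:
        return "medium"          # overwrites an existing data track
    return "small"               # staged to caching storage first

def update_frequencies(freq, category):
    """Track how often each category occurs; the arbiter can weight submission queues accordingly."""
    freq[category] = freq.get(category, 0) + 1
    return freq
```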
-
Patent number: 11675694
Abstract: Embodiments of the invention utilize an improved LSM-tree-based key-value approach to strike the optimal balance between the costs of updates and lookups and storage space. The improved approach involves use of a new merge policy that removes merge operations from all but the largest levels of LSM-tree. In addition, the improved approach may include an improved LSM-tree that allows separate control over the frequency of merge operations for the largest level and for all other levels. By adjusting various parameters, such as the storage capacity of the largest level, the storage capacity of the other smaller levels, and/or the size ratio between adjacent levels in the improved LSM-tree, the improved LSM-tree-based key-value approach may maximize throughput for a particular workload.
Type: Grant
Filed: July 15, 2021
Date of Patent: June 13, 2023
Assignee: President and Fellows of Harvard College
Inventors: Stratos Idreos, Niv Dayan
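A sketch of the merge-policy idea as described: smaller levels simply accumulate runs and spill them down when full, while only the largest level performs merges to stay at a single sorted run. The function names and the use of the size ratio as the run limit are assumptions, not the patent's exact formulation.

```python
def should_merge(level_index, num_levels, runs_in_level):
    """Only the largest level merges, keeping itself at a single sorted run."""
    is_largest = (level_index == num_levels - 1)
    return is_largest and runs_in_level > 1

def should_spill(level_index, num_levels, runs_in_level, size_ratio):
    """A smaller level spills its runs to the level below once it holds size_ratio runs."""
    return level_index < num_levels - 1 and runs_in_level >= size_ratio
```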
-
Patent number: 11669446
Abstract: Various embodiments comprise systems, methods, architectures, mechanisms or apparatus for providing programmable or pre-programmed in-memory computing operations.
Type: Grant
Filed: June 18, 2019
Date of Patent: June 6, 2023
Assignee: THE TRUSTEES OF PRINCETON UNIVERSITY
Inventors: Naveen Verma, Hossein Valavi, Hongyang Jia
-
Patent number: 11669272
Abstract: A memory sub-system configured to predictively schedule the transfer of data to reduce idle time and the amount and time of data being buffered in the memory sub-system. For example, write commands received from a host system can be queued without buffering the data of the write commands at the same time. When executing a first write command using a media unit, the memory sub-system can predict a duration until the media unit becomes available for execution of a second write command. The communication of the data of the second command from the host system to a local buffer memory of the memory sub-system can be postponed and initiated according to the predicted duration. After the execution of the first write command, the second write command can be executed by the media unit, without idling, to store the data from the local buffer memory.
Type: Grant
Filed: May 4, 2020
Date of Patent: June 6, 2023
Assignee: Micron Technology, Inc.
Inventors: Sanjay Subbarao, Steven S. Williams, Mark Ish
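A sketch of the prediction step: estimate when the media unit finishes the first write, then delay the host-to-buffer transfer of the second write so its data arrives just as the media unit frees up. The throughput and transfer-time figures are hypothetical inputs, not values from the patent.

```python
import time

def schedule_second_write(first_write_bytes, media_write_bytes_per_s, transfer_s, start_s=None):
    """Return (when to start fetching the second write's data, when the media unit frees up).

    The transfer is postponed so the data spends minimal time in the local buffer
    and the media unit does not idle waiting for it.
    """
    start_s = time.monotonic() if start_s is None else start_s
    predicted_busy_s = first_write_bytes / media_write_bytes_per_s
    media_free_at = start_s + predicted_busy_s
    # Begin the transfer only `transfer_s` seconds before the media unit becomes available.
    fetch_at = max(start_s, media_free_at - transfer_s)
    return fetch_at, media_free_at

fetch_at, media_free_at = schedule_second_write(
    first_write_bytes=2 * 1024 * 1024,            # 2 MiB currently being written
    media_write_bytes_per_s=500 * 1024 * 1024,    # assumed media throughput
    transfer_s=0.001,                             # predicted host-to-buffer transfer time
)
```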
-
Patent number: 11669263
Abstract: The disclosed computer-implemented method may include configuring a plurality of watcher processes for observing and logging performance of one or more storage devices. Each watcher process may be configured with a trigger condition and a resource limit and organized into tiers based on resource limit. The method may include initiating a first watcher process of a first tier to observe one of the one or more storage devices and monitoring, with a watcher service, the first watcher process for the trigger condition of the first watcher process. The method may further include, in response to detecting the trigger condition, processing an output of the first watcher process and initiating, based on the processed output, a second watcher process of a second tier, wherein the second tier corresponds to a higher resource limit than the first tier. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: May 14, 2021
Date of Patent: June 6, 2023
Assignee: Meta Platforms, Inc.
Inventors: Venkatraghavan Ramesh, Ta-Yu Wu, Vineet Parekh
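A compact sketch of the tiered escalation pattern: cheap first-tier watchers run continuously, and when one's trigger condition fires, the watcher service starts higher-resource second-tier watchers. The data structures and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Watcher:
    name: str
    tier: int                           # higher tier => higher resource limit
    resource_limit_mb: int
    trigger: Callable[[dict], bool]     # trigger condition evaluated on observed stats

@dataclass
class WatcherService:
    tiers: List[List[Watcher]] = field(default_factory=list)

    def run_once(self, device_stats: dict) -> List[Watcher]:
        """Evaluate tier-1 watchers; return tier-2 watchers to start when a trigger fires."""
        escalations = []
        for watcher in self.tiers[0]:
            if watcher.trigger(device_stats):
                # Process the tier-1 output, then escalate to the more detailed tier.
                escalations.extend(self.tiers[1])
        return escalations

service = WatcherService(tiers=[
    [Watcher("latency-probe", 1, 64, lambda s: s["p99_ms"] > 50)],
    [Watcher("io-trace", 2, 512, lambda s: True)],
])
to_start = service.run_once({"p99_ms": 73})
```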
-
Patent number: 11669274
Abstract: A memory controller includes an arbiter for selecting memory requests from a command queue for transmission to a dynamic random access memory (DRAM) memory. The arbiter includes a bank group tracking circuit that tracks bank group numbers of three or more prior write requests selected by the arbiter. The arbiter also includes a selection circuit that selects requests to be issued from the command queue, and prevents selection of write requests and associated activate commands to the tracked bank group numbers unless no other write request is eligible in the command queue. The bank group tracking circuit indicates that a prior write request and the associated activate commands are eligible to be issued after a number of clock cycles has passed corresponding to a minimum write-to-write timing period for a bank group of the prior write request.
Type: Grant
Filed: March 31, 2021
Date of Patent: June 6, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Kedarnath Balakrishnan
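A behavioral sketch of the arbiter's bank-group filtering: remember the bank groups of the last few writes, avoid issuing new writes to those groups while the write-to-write window is open, and fall back to them only if no other write is eligible. The tracking depth and cycle count are hypothetical parameters.

```python
from collections import deque

class BankGroupTracker:
    """Tracks bank groups of recent writes to enforce write-to-write spacing."""

    def __init__(self, depth=3, tww_cycles=8):
        self.recent = deque(maxlen=depth)   # (bank_group, issue_cycle) of prior writes
        self.tww_cycles = tww_cycles        # minimum write-to-write spacing per bank group

    def record(self, bank_group, cycle):
        self.recent.append((bank_group, cycle))

    def blocked_groups(self, cycle):
        """Bank groups still inside the minimum write-to-write window."""
        return {bg for bg, issued in self.recent if cycle - issued < self.tww_cycles}

def pick_write(command_queue, tracker, cycle):
    """Prefer writes to untracked bank groups; fall back only if nothing else is eligible."""
    blocked = tracker.blocked_groups(cycle)
    preferred = [w for w in command_queue if w["bank_group"] not in blocked]
    candidates = preferred or command_queue
    return candidates[0] if candidates else None
```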
-
Patent number: 11669255
Abstract: A file system with distributed resource caching that includes cache volumes and agents that may be associated with clients of the file system may be provided. A cache allocation for each agent may be determined based on a capacity of the cache volumes and a number of the agents such that each cache allocation is associated with tokens that each represents a reserved portion of free space in the cache volumes. Storage jobs may be provided to the agents. Data associated with the storage jobs may be stored in the cache volumes. The cache allocation for each agent may be reduced based on the data stored for each agent.
Type: Grant
Filed: January 28, 2022
Date of Patent: June 6, 2023
Assignee: Qumulo, Inc.
Inventors: Conner Saltiel Hansen, Patrick Jakubowski, David Patrick Rogers, III, Thomas Gregory Rothschilds, Porter Michael Smith, Hanqing Zhang
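A sketch of the token accounting the abstract describes: free cache space is divided into tokens, split evenly among agents, and an agent's allocation shrinks as its storage jobs consume space. The token size and function names are assumptions.

```python
TOKEN_BYTES = 1 << 20   # assume each token reserves 1 MiB of free cache space

def initial_allocation(cache_capacity_bytes, num_agents):
    """Split free cache space into tokens and divide them evenly among agents."""
    total_tokens = cache_capacity_bytes // TOKEN_BYTES
    return {agent: total_tokens // num_agents for agent in range(num_agents)}

def charge_for_storage(allocations, agent, stored_bytes):
    """Reduce an agent's allocation by the tokens consumed by its stored job data."""
    tokens_used = -(-stored_bytes // TOKEN_BYTES)      # ceiling division
    allocations[agent] = max(0, allocations[agent] - tokens_used)
    return allocations

allocs = initial_allocation(cache_capacity_bytes=64 << 30, num_agents=8)
allocs = charge_for_storage(allocs, agent=0, stored_bytes=3 << 20)
```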
-
Patent number: 11663119
Abstract: One or more units of decompressed data of a plurality of units of decompressed data is written to a target location for subsequent writing to memory. The plurality of units of decompressed data includes a plurality of symbol outputs and has associated therewith a plurality of decompression headers. A determination is made that the subsequent writing to memory of at least a portion of another unit of decompressed data to be written to the target location is to be stalled. A symbol start position of the other unit of decompressed data and a decompression header of a selected unit of the one or more units of decompressed data written to the target location are provided to a component of the computing environment. The decompression header is used for the subsequent writing of the other unit of decompressed data to memory.
Type: Grant
Filed: May 29, 2020
Date of Patent: May 30, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Deepankar Bhattacharjee, Girish Gopala Kurup, Ashutosh Misra, Puja Sethia
-
Patent number: 11662927
Abstract: Embodiments that process data are described. For instance, a method includes receiving, at a first disk management device in a storage system, an access request for accessing data in a plurality of disks associated with the storage system. The method further includes determining whether a first access engine for accessing the plurality of disks in the first disk management device is available. The method further includes redirecting the access request to a second disk management device in the storage system if it is determined that the first access engine is unavailable, wherein a second access engine in the second disk management device is available to access the plurality of disks. By means of this method, effective data access can be performed when an access engine of a disk management device is unavailable, thus realizing a more stable access capability and improving the user experience.
Type: Grant
Filed: June 30, 2021
Date of Patent: May 30, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Xiaochen Liu, Ao Sun
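A minimal failover sketch of the redirect described above: if the first device's access engine is unavailable, the request is handed to a peer device whose engine can reach the same disks. The class and method names are hypothetical.

```python
class DiskManagementDevice:
    def __init__(self, name, engine_available=True, peer=None):
        self.name = name
        self.engine_available = engine_available
        self.peer = peer                          # second disk management device

    def handle(self, request):
        """Serve the request locally, or redirect it to the peer if the engine is down."""
        if self.engine_available:
            return f"{self.name} served {request}"
        if self.peer is not None and self.peer.engine_available:
            return self.peer.handle(request)      # redirect to the peer's access engine
        raise RuntimeError("no access engine available for the disks")

secondary = DiskManagementDevice("dm-2")
primary = DiskManagementDevice("dm-1", engine_available=False, peer=secondary)
result = primary.handle("read lun0 lba 4096")
```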
-
Patent number: 11662934
Abstract: A data processing system includes a system fabric, a system memory, a memory controller, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a destination host with which the source host is non-coherent. A plurality of processing units is configured to execute a logical partition and to migrate the logical partition to the destination host via the communication link. Migration of the logical partition includes migrating, via the communication link, the dataset of the logical partition executing on the source host from the system memory of the source host to a system memory of the destination host. After migrating at least a portion of the dataset, a state of the logical partition is migrated, via the communication link, from the source host to the destination host, such that the logical partition thereafter executes on the destination host.
Type: Grant
Filed: December 15, 2020
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Steven Leonard Roberts, David A. Larson Stanton, Peter J. Heyrman, Stuart Zachary Jacobs, Christian Pinto
-
Patent number: 11663142
Abstract: Methods, systems, and devices for codeword rotation for zone grouping of media codewords are described. A value of a first pointer may be configured to correspond to a first memory address within a region of memory and a value of a second pointer may be configured to correspond to a second memory address within the region of memory. The method may include monitoring access commands for performing access operations within the region of memory, where the plurality of access commands may be associated with requested addresses within the region of memory. The method may include updating the value of the second pointer based on a quantity of the commands that are monitored satisfying a threshold and executing the plurality of commands on locations within the region of memory. The locations may be based on the requested addresses, the value of the first pointer, and the value of the second pointer.
Type: Grant
Filed: September 7, 2021
Date of Patent: May 30, 2023
Assignee: Micron Technology, Inc.
Inventor: Joseph Thomas Pawlowski
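A sketch of the two-pointer rotation: a requested address is mapped to a physical location using the first pointer plus a second, rotating offset, and the second pointer advances once a threshold number of monitored commands has passed. The translation formula and threshold value are illustrative assumptions.

```python
class RotatingRegion:
    """Maps requested addresses through two pointers that rotate over a memory region."""

    def __init__(self, region_size, rotate_threshold=1000):
        self.region_size = region_size
        self.first_ptr = 0                  # base address within the region
        self.second_ptr = 0                 # rotation offset, advanced over time
        self.rotate_threshold = rotate_threshold
        self.commands_seen = 0

    def translate(self, requested_addr):
        """Physical location depends on the requested address and both pointer values."""
        return (self.first_ptr + requested_addr + self.second_ptr) % self.region_size

    def on_access(self, requested_addr):
        self.commands_seen += 1
        # Advance the second pointer once enough commands have been monitored.
        if self.commands_seen >= self.rotate_threshold:
            self.second_ptr = (self.second_ptr + 1) % self.region_size
            self.commands_seen = 0
        return self.translate(requested_addr)
```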
-
Patent number: 11656995
Abstract: A method comprising receiving a memory access request comprising an address of data to be accessed and determining an access granularity of the data to be accessed based on the address of the data to be accessed. The method further includes, in response to determining that the data to be accessed has a first access granularity, generating first cache line metadata associated with the first access granularity and in response to determining that the data to be accessed has a second access granularity, generating second cache line metadata associated with the second access granularity. The method further includes storing the first cache line metadata and the second cache line metadata in a single cache memory component.
Type: Grant
Filed: November 26, 2019
Date of Patent: May 23, 2023
Assignee: Micron Technology, Inc.
Inventors: Dhawal Bavishi, Robert M. Walker
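A sketch of the flow described: derive the access granularity from the address, build metadata in the matching format, and keep both formats in one cache structure. The address split point, the two granularities, and the metadata layouts are all assumptions for illustration.

```python
FINE_GRAIN_LIMIT = 0x4000_0000      # hypothetical split between small- and large-granularity regions

def access_granularity(address):
    """Derive the access granularity from the address (region-based in this sketch)."""
    return 64 if address < FINE_GRAIN_LIMIT else 4096   # bytes

def make_cache_line_metadata(address):
    """Build metadata in the format matching the line's access granularity."""
    if access_granularity(address) == 64:
        # First format: a plain 64-byte cache line.
        return {"tag": address >> 6, "granularity": 64, "valid": False}
    # Second format: a 4 KiB line tracked as 64 sectors of 64 bytes each.
    return {"tag": address >> 12, "granularity": 4096, "sector_valid": [False] * 64}

# Both metadata formats live in the same cache memory component.
cache = {}
for addr in (0x1000, 0x5000_0000):
    meta = make_cache_line_metadata(addr)
    cache[(meta["tag"], meta["granularity"])] = meta
```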
-
Patent number: 11656770
Abstract: A storage device may include a connector comprising a power management pin, a detector circuit configured to detect a transition of a power management signal received on the power management pin, and a power management circuit capable of configuring power to at least a portion of the storage device based, at least in part, on the detector circuit detecting a transition of the power management signal. The connector may further include a port enable pin, and the power management circuit may be configured to be disabled based, at least in part, on a state of the port enable pin. A storage device may include a connector comprising a power management pin, a nonvolatile memory, and a power management circuit configured to operate in a first power management mode based on determining a first state of the nonvolatile memory.
Type: Grant
Filed: July 10, 2020
Date of Patent: May 23, 2023
Inventors: Sompong Paul Olarig, Yasser Zaghloul
-
Patent number: 11656899
Abstract: Implementations of the disclosure provide a processing device comprising an address translation circuit to intercept a work request from an I/O device. The work request comprises a first ASID to map to a work queue. A second ASID of a host is allocated for the first ASID based on the work queue. The second ASID is allocated to at least one of: an ASID register for a dedicated work queue (DWQ) or an ASID translation table for a shared work queue (SWQ). Responsive to receiving a work submission from the SVM client to the I/O device, the first ASID of the application container is translated to the second ASID of the host machine for submission to the I/O device using at least one of: the ASID register for the DWQ or the ASID translation table for the SWQ based on the work queue associated with the I/O device.
Type: Grant
Filed: August 17, 2021
Date of Patent: May 23, 2023
Assignee: Intel Corporation
Inventors: Sanjay Kumar, Rajesh M. Sankaran, Gilbert Neiger, Philip R. Lantz, Jason W. Brandt, Vedvyas Shanbhogue, Utkarsh Y. Kakaiya, Kun Tian
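A behavioral sketch of the translation path: a container-local ASID is mapped to a host ASID using a per-queue register for a dedicated work queue, or a translation table keyed by (queue, guest ASID) for a shared work queue. The class and structure names are hypothetical, not Intel's interface.

```python
class AsidTranslator:
    """Translates a guest/container ASID to a host ASID for work submissions."""

    def __init__(self):
        self.dwq_asid_register = {}     # dedicated work queue id -> host ASID
        self.swq_asid_table = {}        # (shared work queue id, guest ASID) -> host ASID

    def allocate(self, queue_id, guest_asid, host_asid, dedicated):
        if dedicated:
            self.dwq_asid_register[queue_id] = host_asid
        else:
            self.swq_asid_table[(queue_id, guest_asid)] = host_asid

    def translate(self, queue_id, guest_asid, dedicated):
        """Look up the host ASID when a work submission arrives for the I/O device."""
        if dedicated:
            return self.dwq_asid_register[queue_id]
        return self.swq_asid_table[(queue_id, guest_asid)]

xlate = AsidTranslator()
xlate.allocate(queue_id=2, guest_asid=7, host_asid=42, dedicated=False)
host_asid = xlate.translate(queue_id=2, guest_asid=7, dedicated=False)
```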
-
Patent number: 11656798
Abstract: The present disclosure generally relates to improving data transfer in a data storage device. Not only prior to executing a command received from a host device, but even before scheduling the command, the data storage device parses the command and fetches physical region page (PRP) entries and/or scatter-gather list (SGL) entries. The fetching occurs just after receiving the command. Additionally, the host buffer pointers, which are described in PRP or SGL methods, associated with the entries are also fetched prior to scheduling the command. The fetching is a function of device constraints, queue depth, and/or tenant ID in a multi-tenant environment. The immediate fetching of at least part of the host buffers improves device performance, particularly in sequential write or read look ahead (RLA) scenarios.
Type: Grant
Filed: December 3, 2021
Date of Patent: May 23, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Amir Segev
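A sketch of the early-fetch idea: as soon as a command arrives, and before it is scheduled, its PRP or SGL host buffer pointers are fetched, subject to a device constraint, the queue depth, and a per-tenant budget. The gating rule, field names, and state layout are assumptions, not the device's actual firmware.

```python
def should_prefetch_pointers(queue_depth, max_outstanding_prefetches, outstanding, tenant_budget):
    """Decide whether to fetch PRP/SGL host buffer pointers right after command arrival."""
    return outstanding < max_outstanding_prefetches and queue_depth <= tenant_budget

def on_command_received(command, state):
    """Parse the command and fetch its data-pointer entries before it is scheduled."""
    entries = command["prp_entries"] if command["uses_prp"] else command["sgl_entries"]
    if should_prefetch_pointers(state["queue_depth"], state["max_prefetch"],
                                state["outstanding"], state["tenant_budget"]):
        state["pointer_cache"][command["id"]] = list(entries)   # host buffer pointers
        state["outstanding"] += 1

state = {"queue_depth": 4, "max_prefetch": 16, "outstanding": 0,
         "tenant_budget": 32, "pointer_cache": {}}
on_command_received({"id": 1, "uses_prp": True,
                     "prp_entries": [0x1000, 0x2000], "sgl_entries": []}, state)
```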
-
Patent number: 11650795
Abstract: A multi-level memory cell NAND structure of a memory device is utilized to extract uniqueness from the memory device. Certain unreliable characteristics of a NAND-based storage are used to generate a true random number sequence. A method for generating such sequence is based on a physically unclonable function (PUF) which is implemented by extracting unique characteristics of a NAND-based memory device using existing firmware procedures.Type: Grant
Filed: August 23, 2019
Date of Patent: May 16, 2023
Assignee: SK hynix Inc.
Inventors: Siarhei Zalivaka, Alexander Ivaniuk
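A generic sketch of harvesting randomness from unreliable NAND behavior: repeatedly read the same page, treat bit positions that flip across reads as the entropy source, and debias the raw bits with a von Neumann extractor. The read routine below is a software stand-in, not the patented firmware procedure.

```python
import random

def read_page(page_id, trial):
    """Stand-in for an unreliable NAND page read; real firmware would issue a flash read."""
    random.seed(page_id * 10_007 + trial)           # deterministic stand-in for read noise
    return [random.getrandbits(1) for _ in range(1024)]

def noisy_bit_positions(page_id, trials=8):
    """Bit positions whose value flips across repeated reads form the entropy source."""
    reads = [read_page(page_id, t) for t in range(trials)]
    return [i for i in range(len(reads[0])) if len({r[i] for r in reads}) > 1]

def von_neumann_extract(bits):
    """Debias the raw bits: keep the first bit of each unequal pair, drop equal pairs."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

positions = noisy_bit_positions(page_id=17)
raw = read_page(page_id=17, trial=99)
random_bits = von_neumann_extract([raw[i] for i in positions])
```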