Patents Examined by Than Nguyen
-
Patent number: 11698725
Abstract: Apparatuses and methods for performing concurrent memory access operations for multiple memory planes are disclosed herein. An example method may include receiving first and second command and address pairs associated with first and second planes, respectively, of a memory. The method may further include, responsive to receiving the first and second command and address pairs, providing first and second read voltages based on first and second page types determined from the first and second command and address pairs. The method may further include configuring a first GAL decoder circuit to provide one of the first read voltage or a pass voltage on each GAL of a first GAL bus. The method may further include configuring a second GAL decoder circuit to provide one of the second read voltage or the pass voltage on each GAL of a second GAL bus coupled to the second memory plane.
Type: Grant
Filed: October 27, 2021
Date of Patent: July 11, 2023
Assignee: Micron Technology, Inc.
Inventors: Shantanu R. Rajwade, Pranav Kalavade, Toru Tanzawa
-
Patent number: 11698862
Abstract: Systems, apparatuses, and methods related to three tiered hierarchical memory systems are described herein. A three tiered hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example apparatus may include a persistent memory, and one or more non-persistent memories configured to map an address associated with an input/output (I/O) device to an address in logic circuitry prior to the apparatus receiving a request from the I/O device to access data stored in the persistent memory, and map the address associated with the I/O device to an address in a non-persistent memory subsequent to the apparatus receiving the request and accessing the data.
Type: Grant
Filed: July 22, 2021
Date of Patent: July 11, 2023
Assignee: Micron Technology, Inc.
Inventors: Vijay S. Ramesh, Anton Korzh, Richard C. Murphy, Scott Matthew Stephens
-
Patent number: 11693579
Abstract: Application-specific prioritization of streaming data replication. Data streamed from connected devices is selectively replicated to data storage clusters based on needs of the applications being served by the data. Data characterization supports prioritized replication processing. Statistical metrics compare streaming data with estimated values to characterize the data for prioritization.
Type: Grant
Filed: March 9, 2021
Date of Patent: July 4, 2023
Assignee: International Business Machines Corporation
Inventors: Manish Anand Bhide, Prateek Goyal, Seema Nagar, Pramod Vadayadiyil Raveendran, Sougata Mukherjea, Kuntal Dey
-
Patent number: 11687269
Abstract: In some examples, a computing device may determine an amount of pending data to copy over a network from a first storage system to a second storage system. Further, the computing device may determine an ingest speed based on a quantity of data received by the first storage system and a copy speed associated with one or more first computing resources associated with the first storage system. The computing device may determine an estimated time to copy at least a portion of the pending data to the second storage system to meet a data copy requirement. For instance, the estimated time may be based at least in part on the copy speed, the amount of pending data, and the ingest speed. In addition, at least one action may be performed based on the estimated time.
Type: Grant
Filed: August 23, 2021
Date of Patent: June 27, 2023
Assignee: HITACHI, LTD.
Inventor: Pablo Martinez Lerin
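The estimate described in this abstract can be illustrated with a minimal sketch. This is not the patented method, only one plausible reading of it: while pending data is being copied, new writes keep arriving at the ingest speed, so the backlog drains at the net rate (copy speed minus ingest speed). The function name and units are assumptions.

```python
def estimate_copy_time(pending_bytes, copy_speed, ingest_speed):
    """Estimate seconds to drain a replication backlog.

    Illustrative sketch: assumes constant copy and ingest rates
    (bytes/second). Only the net rate drains the backlog, since
    ingest keeps adding pending data while copying proceeds.
    """
    net_rate = copy_speed - ingest_speed
    if net_rate <= 0:
        return float("inf")  # backlog never drains; some action is needed
    return pending_bytes / net_rate

# 10 GiB pending, copying at 100 MiB/s while ingesting 60 MiB/s
t = estimate_copy_time(10 * 1024**3, 100 * 1024**2, 60 * 1024**2)
```

An "action" in the abstract's sense could then be taken when `t` exceeds the deadline implied by the data copy requirement.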
-
Patent number: 11681440
Abstract: The present disclosure includes apparatuses and methods for parallel writing to multiple memory device locations. An example apparatus comprises a memory device. The memory device includes an array of memory cells and sensing circuitry coupled to the array. The sensing circuitry includes a sense amplifier and a compute component configured to implement logical operations. A memory controller in the memory device is configured to receive a block of resolved instructions and/or constant data from the host. The memory controller is configured to write the resolved instructions and/or constant data in parallel to a plurality of locations in the memory device.
Type: Grant
Filed: March 8, 2021
Date of Patent: June 20, 2023
Assignee: Micron Technology, Inc.
Inventors: Jason T. Zawodny, Glen E. Hush, Troy A. Manning, Timothy P. Finkbeiner
-
Patent number: 11669270
Abstract: A multi-channel memory storage device, a memory control circuit unit, and a data reading method are provided. The method includes: determining whether a storage space of a buffer memory is insufficient when a multi-channel access is performed; issuing a data read command corresponding to each of a plurality of multi-channels to a rewritable non-volatile memory module according to a logical address in a host read command in response to insufficient storage space of the buffer memory, to read data corresponding to each of the plurality of multi-channels from a data storage area to a data cache area via the plurality of multi-channels; and allocating the storage space of the buffer memory to the rewritable non-volatile memory module after the storage space of the buffer memory is released and issuing a cache read command to move first data in data temporarily stored in the data cache area to the buffer memory.
Type: Grant
Filed: January 19, 2022
Date of Patent: June 6, 2023
Assignee: Hefei Core Storage Electronic Limited
Inventors: Wan-Jun Hong, Qi-Ao Zhu, Xin Wang, Yang Zhang, Xu Hui Cheng, Jian Hu
-
Patent number: 11662912
Abstract: A method for connecting a plurality of NVMe storage arrays using switchless NVMe cross connect fiber channel architecture for faster direct connectivity and reduced latencies.
Type: Grant
Filed: August 2, 2021
Date of Patent: May 30, 2023
Inventor: Patrick Kidney
-
Patent number: 11650763
Abstract: IO traces on a high-speed memory that provides temporary storage for multiple storage volumes are stored in a trace buffer. IO operations on different storage volumes are considered separate workloads on the high-speed memory. Periodically, the IO traces are processed to extract workload features for each workload. The workload features are stored in a feature matrix, and the workload features from multiple IO trace buffer processing operations are aggregated over time. An HDBSCAN unsupervised clustering machine learning process is used to create a set of four workload clusters and an outlier cluster. A dominant feature of each workload cluster is used to set a policy for the workload cluster. IO percentages for clusters with the same policies are used to set minimum sizes for policy regions in the high-speed memory. Histograms based on the workloads are used to determine segmentation rules specifying slot sizes for the policy regions.
Type: Grant
Filed: April 11, 2022
Date of Patent: May 16, 2023
Assignee: Dell Products, L.P.
Inventors: Owen Martin, Shaul Dar, Paras Pandya
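The feature-extraction step this abstract describes can be sketched in a few lines. This is an illustrative reading only: the trace fields (`volume`, `op`, `size`) and the three features chosen are assumptions, not the patent's actual schema.

```python
from collections import defaultdict
from statistics import mean

def extract_features(io_traces):
    """Group IO traces by volume (one workload per volume) and reduce
    each workload to a small feature vector suitable for clustering.
    Trace fields and feature names here are illustrative."""
    by_volume = defaultdict(list)
    for t in io_traces:
        by_volume[t["volume"]].append(t)
    features = {}
    for vol, traces in by_volume.items():
        reads = [t for t in traces if t["op"] == "read"]
        features[vol] = {
            "read_ratio": len(reads) / len(traces),     # read vs write mix
            "avg_io_size": mean(t["size"] for t in traces),
            "iops_share": len(traces) / len(io_traces),  # workload's IO share
        }
    return features
```

The resulting per-workload vectors could then be stacked into the feature matrix the abstract mentions and handed to an unsupervised clusterer such as the `HDBSCAN` class from the `hdbscan` package, with each cluster's dominant feature driving its policy.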
-
Patent number: 11640339
Abstract: A computer-implemented method according to one embodiment includes identifying a first data set to be backed up, where the first data set is stored on a first storage volume; removing empty data tracks from the first data set to create an intermediary data set; storing the intermediary data set at a plurality of secondary storage volumes different from the first storage volume; and creating a backup data set for the first data set, utilizing the intermediary data set.
Type: Grant
Filed: November 23, 2020
Date of Patent: May 2, 2023
Assignee: International Business Machines Corporation
Inventors: David C. Reed, Matthew Barragan, Esteban Rios
-
Patent number: 11635897
Abstract: A method, computer program product, and computer system for receiving an XCopy command is provided. The XCopy command may be in the form of an IO operation. The IO operation may be a subextent block operation. A source range and a destination range of the XCopy command may be determined to be aligned within an alignment boundary. The XCopy command may be determined to be smaller than a predetermined size. In response to determining the source range and destination range of the XCopy command are aligned within the alignment boundary and the XCopy command is smaller than a predetermined size, the XCopy command may be processed. The receiving of the XCopy command may be recorded in a log.
Type: Grant
Filed: July 30, 2021
Date of Patent: April 25, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Nimrod Shani, Shari A. Vietry, Vikram A. Prabhakar, Vamsi K. Vankamamidi
-
Patent number: 11636031
Abstract: Methods, computer systems, and computer readable media are described. In a particular embodiment, a storage controller is configured to receive, from a host computing device, a request to perform a bulk array task and, in response to receiving the request, store an indication relating old keys of a mapping table to new keys, wherein both the old keys and the new keys correspond to the request. The storage controller is also configured to convey a response indicating completion of the request without prior access of user data and update the mapping table to replace the old keys with the new keys.
Type: Grant
Filed: June 28, 2021
Date of Patent: April 25, 2023
Assignee: PURE STORAGE, INC.
Inventors: John Colgrove, John Hayes, Ethan Miller, Feng Wang
-
Patent number: 11625186
Abstract: A method for erasing stored data from the memory of a network device and requesting data from the memory after completion of the data erasure procedure, or accessing the memory of the network device after completion of the data erasure procedure. The method further comprises determining the outcome of the data erasure procedure based on: the results of a comparison between a response received from the network device in reply to the request for data and an expected response which is indicative of a successful erasure of the memory of the network device; or the results of a comparison between any contents of the memory of the network device after completion of the data erasure procedure and expected contents of the memory of the network device after completion of the data erasure procedure which are indicative of a successful erasure of the memory of the network device.
Type: Grant
Filed: July 14, 2021
Date of Patent: April 11, 2023
Assignee: BLANCCO TECHNOLOGY GROUP IP OY
Inventors: Mitesh Shah, Markku Valtonen, Dhia Ben Haddej, Chandrashekhar Kakade, Akash Nehere, Prasad Bidkar, Pratibha Pathekar
-
Patent number: 11625175
Abstract: Techniques for a device with a NUMA memory architecture to migrate virtual resources between NUMA nodes to reduce resource contention between virtual resources running on the NUMA nodes. In some examples, the device monitors various metrics and/or operations of the NUMA nodes and/or virtual resources, and detects events that indicate that virtual resources running on a same NUMA node are contending, or are likely to contend, for computing resources of the NUMA node. Upon detecting such an event, the device may migrate a virtual resource from the overcommitted NUMA node to another NUMA node on the device that has an availability of computing resources to run the virtual resource. In this way, devices may reduce resource contention among virtual resources running on a same NUMA node.
Type: Grant
Filed: June 29, 2021
Date of Patent: April 11, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Nikolay Krasilnikov, Oleksii Tsai, Alexey Gadalin, Guy Parton, Anton Valter
-
Patent number: 11620060
Abstract: Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices. In response to memory read access requests, a determination is made whether data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses.
Type: Grant
Filed: December 28, 2018
Date of Patent: April 4, 2023
Assignee: Intel Corporation
Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
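The read path and hot-data promotion this abstract describes can be sketched with plain dictionaries. This is one illustrative reading, not Intel's mechanism: the promotion threshold, the dict-based stores, and the absence of an eviction policy are all simplifying assumptions.

```python
class TwoLevelMemory:
    """Sketch of a near/far read path: serve from near memory when the
    address is already cached there, otherwise read from far memory,
    and promote addresses whose access count crosses a threshold."""
    def __init__(self, far, promote_after=3):
        self.near = {}                 # near-memory copies (low latency)
        self.far = far                 # far-memory backing store
        self.hits = {}                 # per-address access counts
        self.promote_after = promote_after

    def read(self, addr):
        if addr in self.near:          # near hit: low-latency path
            return self.near[addr]
        self.hits[addr] = self.hits.get(addr, 0) + 1
        if self.hits[addr] >= self.promote_after:
            self.near[addr] = self.far[addr]   # copy hot data near
        return self.far[addr]          # far miss path: higher latency

mem = TwoLevelMemory({0x10: "hot", 0x20: "cold"}, promote_after=2)
mem.read(0x10)
mem.read(0x10)   # second access crosses the threshold and promotes 0x10
```

A real two-level memory would also need eviction from near memory and write-back handling, which the abstract's broader disclosure presumably covers but this sketch omits.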
-
Patent number: 11620062
Abstract: In at least one embodiment, processing can include receiving a metadata (MD) structure including MD pages; and performing a MD split operation with respect to a first of the MD pages, wherein said performing the MD split operation includes: generating a first ALI (abstract logical index) representing a new MD page that is unallocated and is a child of the first MD page; and storing an entry in a bucket of an in-memory MD log for the first ALI, wherein the entry denotes a mapping between the first ALI and a corresponding LI (logical index), wherein the entry indicates that the corresponding LI associated with the first ALI is invalid since the first ALI represents a new MD page which is unallocated and not associated with physical storage; and destaging the in-memory MD log, wherein said destaging includes allocating first physical storage for the new MD page.
Type: Grant
Filed: October 15, 2021
Date of Patent: April 4, 2023
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Ronen Gazit, Bar David
-
Patent number: 11593004
Abstract: Computer-implemented methods for optimized compute resource addition and removal in a distributed storage platform. In a case of a newly added compute resource being connected to a storage subsystem shared by compute resources in the distributed storage platform, the distributed storage platform formulates a redistribution plan to redistribute a subset of a global address space of the storage subsystem to a newly added logical volume in the storage subsystem. In a case of a removed compute resource being disconnected from the storage subsystem, the distributed storage platform formulates a redistribution plan to redistribute respective logical blocks in a logical volume for the removed compute resource to respective remaining logical volumes for respective remaining compute resources in the distributed storage platform. The distributed storage platform executes the redistribution plan to reassign data block ownerships on one or more physical memory devices in the storage subsystem.
Type: Grant
Filed: August 13, 2021
Date of Patent: February 28, 2023
Assignee: International Business Machines Corporation
Inventors: Sergio Reyes, Brian Chase Twichell
-
Patent number: 11593012
Abstract: Methods and systems for performing a partial pass-through transfer are described. In an aspect, a method includes: receiving, from a first computing system, pass-through transfer definition data to be associated with a first logical storage area, the pass-through transfer definition data including a trigger condition for a pass-through transfer and an apportionment value for the pass-through transfer; storing a representation of the pass-through transfer definition data in association with the first logical storage area; detecting a first data transfer to the first logical storage area, the first data transfer representing a transfer of a resource; determining that the first data transfer satisfies the trigger condition; and in response to determining that the first data transfer satisfies the trigger condition: identifying a portion of the resource based on the apportionment value; and initiating a second data transfer.
Type: Grant
Filed: August 24, 2021
Date of Patent: February 28, 2023
Assignee: The Toronto-Dominion Bank
Inventors: Milos Dunjic, David Samuel Tax, Jonathan Joseph Prendergast, Kushank Rastogi, Vipul Kishore Lalka, Asad Joheb
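The trigger-and-apportion flow in this abstract reduces to a small rule check. The sketch below is one plausible reading only; the field names (`trigger_minimum`, `apportionment`, `destination`) and the minimum-amount form of the trigger condition are assumptions for illustration.

```python
def apply_pass_through(transfer_amount, definition):
    """If an incoming transfer satisfies the stored trigger condition,
    identify a portion per the apportionment value and describe the
    second transfer to initiate. Field names are illustrative."""
    if transfer_amount < definition["trigger_minimum"]:
        return None  # trigger condition not met; no second transfer
    portion = round(transfer_amount * definition["apportionment"], 2)
    return {"destination": definition["destination"], "amount": portion}

# A stored definition: pass 10% of any transfer of 100 or more through.
rule = {"trigger_minimum": 100, "apportionment": 0.10,
        "destination": "savings"}
apply_pass_through(250, rule)   # second transfer of 25.0 to "savings"
apply_pass_through(50, rule)    # below the trigger: no second transfer
```

In the patent's framing the returned description would drive the actual initiation of the second data transfer against the storage areas.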
-
Patent number: 11593157
Abstract: A method for providing an asynchronous execution queue for accelerator hardware includes replacing a malloc operation in an execution queue to be sent to an accelerator with an asynchronous malloc operation that returns a unique reference pointer. Execution of the asynchronous malloc operation in the execution queue by the accelerator allocates a requested memory size and adds an entry to a look-up table accessible by the accelerator that maps the reference pointer to a corresponding memory address.
Type: Grant
Filed: April 29, 2020
Date of Patent: February 28, 2023
Assignee: NEC CORPORATION
Inventor: Nicolas Weber
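The asynchronous-malloc idea can be sketched with a host-side queue and a simulated accelerator-side executor. This is an illustrative model only: the function names are invented, and "device addresses" are simulated as integers rather than real accelerator memory.

```python
import itertools

_ref_ids = itertools.count(1)

def make_async_malloc(queue):
    """Return a malloc replacement that hands back a unique reference
    pointer immediately and defers the real allocation to the queue."""
    def async_malloc(size):
        ref = next(_ref_ids)              # unique reference pointer
        queue.append(("malloc", ref, size))
        return ref                        # caller proceeds without waiting
    return async_malloc

def run_queue(queue, lookup_table, next_addr=0x1000):
    """Simulated accelerator-side execution: perform the deferred
    allocations and record ref -> address entries in the look-up
    table, which later queued operations would consult."""
    for op, ref, size in queue:
        if op == "malloc":
            lookup_table[ref] = next_addr  # map reference to real address
            next_addr += size              # bump-allocate the next region
    queue.clear()

queue, table = [], {}
amalloc = make_async_malloc(queue)
r1, r2 = amalloc(256), amalloc(64)   # refs returned before any allocation
run_queue(queue, table)              # allocations happen on "the device"
```

The point of the indirection is that the host can keep enqueueing operations that use `r1` and `r2` without a round trip to the accelerator to learn the real addresses.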
-
Patent number: 11593024
Abstract: A request can be provided, from a front-end of a memory sub-system, to a processing device of the memory sub-system, and the request can be deleted from a buffer of the front-end of the memory sub-system. Responsive to deleting the request from the buffer, a first quantity of requests in the buffer can be determined, and a second quantity of outstanding requests in the back-end of the memory sub-system can be determined. Responsive to deleting the request from the buffer and providing the request to the processing device, whether to provide a response to a host can be determined based on a comparison of the second quantity of outstanding requests to a threshold, wherein the response includes an indication of the quantity of requests in the buffer and of outstanding requests in the back-end of the memory sub-system.
Type: Grant
Filed: August 30, 2021
Date of Patent: February 28, 2023
Assignee: Micron Technology, Inc.
Inventor: Laurent Isenegger
-
Patent number: 11586391
Abstract: A technique efficiently migrates a live virtual disk (vdisk) across storage containers of a cluster having a plurality of nodes deployed in a virtualization environment. Each node is embodied as a physical computer with hardware resources, such as processor, memory, network and storage resources, that are virtualized to provide support for one or more user virtual machines (UVM) executing on the node. The storage resources include storage devices embodied as a storage pool that is logically segmented into the storage containers configured to store one or more vdisks. The storage containers include a source container having associated storage policies and a destination container having different (new) storage policies.
Type: Grant
Filed: January 28, 2021
Date of Patent: February 21, 2023
Assignee: Nutanix, Inc.
Inventors: Kiran Tatiparthi, Mukul Sharma, Saibal Kumar Adhya, Sandeep Ashok Ghadage, Swapnil Ingle