Patents Examined by Jason W Blust
-
Patent number: 11249669
Abstract: Systems and methods for handling input/output operations during a space crunch are described herein. An example method includes striping a volume across a plurality of storage nodes, and maintaining a cluster volume table (CVT) storing information regarding distribution of the logical blocks. Additionally, the CVT includes a plurality of entries, where each of the entries includes information identifying a respective owner storage node of a respective logical block. The method also includes receiving a write I/O operation directed to an unallocated logical block owned by a landing storage node, where the landing storage node lacks free storage capacity, and locking the unallocated logical block. The method further includes updating the CVT to identify a storage node having free storage capacity as owner storage node of the unallocated logical block, and unlocking the unallocated logical block, wherein the write I/O operation proceeds at the storage node having free storage capacity.
Type: Grant
Filed: April 15, 2020
Date of Patent: February 15, 2022
Assignee: AMZETTA TECHNOLOGIES, LLC
Inventors: Paresh Chatterjee, Raghavan Sowrirajan, Jomy Jose Maliakal, Sharon Samuel Enoch
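The lock / update-CVT / unlock sequence in the abstract can be sketched as follows. This is an illustrative model, not the patented implementation; `ClusterVolumeTable`, `route_write`, and the `free_capacity` map are hypothetical names.

```python
# Illustrative sketch of reassigning ownership of an unallocated logical
# block when the landing node is out of space. Names are hypothetical.

class ClusterVolumeTable:
    """Maps each logical block to its owner storage node."""

    def __init__(self, owners):
        self.owners = dict(owners)   # block -> node name
        self.locked = set()

    def lock(self, block):
        self.locked.add(block)

    def unlock(self, block):
        self.locked.discard(block)

    def reassign(self, block, new_owner):
        # Only a locked block may be reassigned, mirroring the
        # lock -> update CVT -> unlock sequence in the abstract.
        assert block in self.locked
        self.owners[block] = new_owner


def route_write(cvt, block, free_capacity):
    """Return the node that should service a write to `block`.

    `free_capacity` maps node -> free bytes. If the current owner has
    no free space, ownership moves to a node that does.
    """
    owner = cvt.owners[block]
    if free_capacity[owner] > 0:
        return owner
    cvt.lock(block)
    new_owner = next(n for n, free in free_capacity.items() if free > 0)
    cvt.reassign(block, new_owner)
    cvt.unlock(block)
    return new_owner
```

The write then proceeds at the returned node; the CVT keeps the new owner for future I/O to that block.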
-
Patent number: 11243707
Abstract: Disclosed is an improved approach to implement virtualization objects in a virtualization system. The virtualization object from a first namespace is cloned as a snapshot that is accessible within a second namespace. To implement this, the virtualization object can be mounted as a target (implemented as a snapshot) that is locally accessible to the host.
Type: Grant
Filed: March 12, 2014
Date of Patent: February 8, 2022
Assignee: Nutanix, Inc.
Inventors: Miao Cui, Gregory Andrew Smith, Tabrez Memon
-
Patent number: 11232022
Abstract: A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit.
Type: Grant
Filed: October 28, 2011
Date of Patent: January 25, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyojin Jeong, Youngjoon Choi, Sunghoon Lee, Jae-Hyeon Ju
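The selection step, picking only the part of the deleted file's range that lines up with the device's own management (erase) unit, can be sketched as a small alignment calculation. This is a hypothetical illustration; the function name and unit size are assumptions.

```python
# Hypothetical sketch: from the byte range of a deleted file, keep only
# the erase units fully covered by that range, since the device can
# only erase in whole units of its own management size.

def erasable_units(start, length, unit_size):
    """Return indices of erase units fully inside [start, start + length)."""
    first = -(-start // unit_size)        # first unit wholly in the range
    last = (start + length) // unit_size  # one past the last full unit
    return list(range(first, last))
```

Partially covered units are left alone; only fully matched units are handed to the erase operation.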
-
Patent number: 11226759
Abstract: Embodiments manage a lifecycle of distributed data objects from at least a first data fabric node. Embodiments receive a request from a publisher to anchor a scope. Embodiments anchor the scope to an anchor in the first data fabric node to generate an anchored scope, where the anchor includes a previously published first object and a corresponding first lifecycle and anchoring the scope includes registering interest in the first lifecycle of the anchor. Embodiments publish, by the first data fabric node, scope metadata corresponding to the anchored scope. Embodiments then receive a request from the publisher to publish a second object into the anchored scope to define an anchored object, the anchored object including the first lifecycle.
Type: Grant
Filed: June 19, 2020
Date of Patent: January 18, 2022
Assignee: METAFLUENT, LLC
Inventor: Andrew MacGaffey
-
Patent number: 11221970
Abstract: Dynamically managing protection groups, including: identifying a protection group of storage resources, the protection group associated with a protection group management schedule that identifies one or more protection group management operations to be performed; detecting a membership change in the protection group; and updating, in dependence upon the change in the protection group, the protection group management schedule.
Type: Grant
Filed: September 26, 2019
Date of Patent: January 11, 2022
Assignee: Pure Storage, Inc.
Inventors: John Colgrove, Alan S. Driscoll, Steven P. Hodgson, Nitin Nagpal, Emanuel G. Noik, John Roper
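The detect-and-update step can be sketched as reconciling a per-member schedule against the new membership. This is a hypothetical illustration; the schedule shape and default operations are assumptions, not the patented method.

```python
# Hypothetical sketch: when membership of a protection group changes,
# drop schedule entries for departed members and add default entries
# for new members.

def update_schedule(schedule, old_members, new_members):
    """schedule: dict member -> list of management operations."""
    for member in old_members - new_members:
        schedule.pop(member, None)              # departed member
    for member in new_members - old_members:
        schedule[member] = ["snapshot", "replicate"]  # assumed defaults
    return schedule
```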
-
Patent number: 11204703
Abstract: Techniques for scavenging blocks may include: determining, in accordance with a selected option, a set of candidate upper deck file systems, wherein at least a first of the candidate upper deck file systems has storage allocated from at least one block of a lower deck file system; and performing, in accordance with the selected option, scavenging of the set of candidate upper deck file systems to attempt to free blocks of the lower deck file system. Scavenging may include issuing a request to perform hole punching of a backed free block of the first candidate upper deck file system, wherein the backed free block has first provisioned storage that is associated with a block of the lower deck file system. The selected option may be one of multiple options each specifying a different candidate set of upper deck file systems upon which hole punching is performed when selected.
Type: Grant
Filed: December 10, 2019
Date of Patent: December 21, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Philippe Armangau, Ivan Bassov, Walter Forrester
-
Patent number: 11205019
Abstract: A first and a second computing environment are generated on a computer system based on a state of a logical storage unit of the computer system. The computing environments are associated with pieces of storage space located outside the logical storage unit. A write operation addressing the logical storage unit in one computing environment is directed to a piece of storage space associated with that computing environment.
Type: Grant
Filed: October 28, 2011
Date of Patent: December 21, 2021
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Wei-Shan Yang
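The redirection described here behaves like a copy-on-write overlay: each environment writes to its own storage space and reads fall back to the shared base unit. A minimal sketch, with hypothetical names:

```python
# Illustrative copy-on-write sketch: each computing environment keeps
# its own overlay outside the base logical unit; writes go to the
# overlay, reads fall back to the base when no overlay copy exists.

class Environment:
    def __init__(self, base):
        self.base = base             # shared logical storage unit state
        self.overlay = {}            # per-environment storage space

    def write(self, block, data):
        self.overlay[block] = data   # never touches the base unit

    def read(self, block):
        return self.overlay.get(block, self.base.get(block))
```

Two environments built from the same base thus diverge independently while the base state stays intact.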
-
Patent number: 11188265
Abstract: A method for performing storage space management, an associated data storage device, and a controller thereof are provided. The method includes: receiving an identify controller command from a host device; in response to the identify controller command, returning a reply to the host device to indicate that a plurality of logical block address (LBA) formats are supported, where the plurality of LBA formats are related to access of a non-volatile (NV) memory, and the plurality of LBA formats include a first LBA format and a second LBA format; receiving a first namespace (NS) management command from the host device; in response to the first NS management command, establishing a first NS adopting the first LBA format; receiving a second NS management command from the host device; and in response to the second NS management command, establishing a second NS adopting the second LBA format.
Type: Grant
Filed: April 19, 2020
Date of Patent: November 30, 2021
Assignee: Silicon Motion, Inc.
Inventors: Sheng-I Hsu, Ching-Chin Chang
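The command flow, discover supported LBA formats, then create two namespaces with different formats, can be sketched as follows. This is a toy model of the sequence, not the NVMe wire protocol; class and method names are assumptions.

```python
# Hypothetical sketch of the command flow: the host learns the
# supported LBA formats from an identify-controller reply, then
# creates two namespaces, each adopting a different format.

class Controller:
    SUPPORTED_LBA_FORMATS = {0: 512, 1: 4096}   # format id -> block size

    def __init__(self):
        self.namespaces = {}

    def identify(self):
        return list(self.SUPPORTED_LBA_FORMATS)

    def create_namespace(self, ns_id, lba_format):
        if lba_format not in self.SUPPORTED_LBA_FORMATS:
            raise ValueError("unsupported LBA format")
        self.namespaces[ns_id] = self.SUPPORTED_LBA_FORMATS[lba_format]


ctrl = Controller()
formats = ctrl.identify()             # host learns both formats
ctrl.create_namespace(1, formats[0])  # first NS: 512-byte blocks
ctrl.create_namespace(2, formats[1])  # second NS: 4096-byte blocks
```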
-
Patent number: 11182084
Abstract: Various embodiments manage dynamic memory allocation data. In one embodiment, a set of memory allocation metadata is extracted from a memory heap space. Process dependent information and process independent information are identified from the set of memory allocation metadata based on the set of memory allocation metadata being extracted. The process dependent information and the process independent information at least identify a set of virtual memory addresses available in the memory heap space and a set of virtual memory addresses allocated to a process associated with the memory heap space. A set of allocation data associated with the memory heap space is stored in a persistent storage based on the process dependent information and the process independent information having been identified. The set of allocation data includes the process independent allocation information and a starting address associated with the memory heap space.
Type: Grant
Filed: February 17, 2020
Date of Patent: November 23, 2021
Assignee: International Business Machines Corporation
Inventors: Michel Hack, Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
-
Patent number: 11182158
Abstract: Technologies for providing adaptive memory media management include media access circuitry connected to a memory media. The media access circuitry is to receive a request to perform at least one memory access operation to be managed by the media access circuitry. The media access circuitry is further to manage the requested at least one memory access operation, including disabling a memory controller in communication with the media access circuitry from managing the memory media while the at least one requested memory access operation is performed.
Type: Grant
Filed: May 22, 2019
Date of Patent: November 23, 2021
Assignee: Intel Corporation
Inventors: Bruce Querbach, Shigeki Tomishima, Srikanth Srinivasan, Chetan Chauhan, Rajesh Sundaram
-
Patent number: 11169931
Abstract: Techniques for obtaining metadata may include: receiving, by a director, an I/O operation directed to a target offset of a logical device, wherein the director is located on a board including a local page table used by components on the board; querying the local page table for a global memory address of first metadata for the target offset of the logical device; and responsive to the local page table not having the global memory address of the first metadata for the target offset of the logical device, using at least a first indirection layer to obtain the global memory address of the first metadata. The global memory may be a distributed global memory including memory segments from multiple different boards each including its own local page table. Compare and swap operations may be used to perform atomic operations to ensure synchronized access when updating the distributed global memory.
Type: Grant
Filed: September 24, 2019
Date of Patent: November 9, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Andrew Chanler, Kevin Tobin
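The lookup path, local page table first, indirection layer on a miss, can be sketched in a few lines. This is an illustrative model with hypothetical names; both tables are shown as plain dictionaries.

```python
# Sketch of the lookup path: try the board-local page table first; on
# a miss, consult a (hypothetical) first indirection layer to find the
# global-memory address of the metadata, then cache it locally.

def metadata_address(local_table, indirection, device, offset):
    key = (device, offset)
    addr = local_table.get(key)
    if addr is None:
        addr = indirection[key]   # fall back to the indirection layer
        local_table[key] = addr   # populate the local page table
    return addr
```

A second lookup for the same offset is then served entirely from the local page table.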
-
Patent number: 11157411
Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption.
Type: Grant
Filed: November 22, 2019
Date of Patent: October 26, 2021
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, Stephen J. Powell, William J. Starke
-
Patent number: 11132213
Abstract: Systems and methods are described for transforming a data set within a data source into a series of task calls to an on-demand code execution environment. The environment can utilize pre-initialized virtual machine instances to enable execution of user-specified code in a rapid manner, without delays typically caused by initialization of the virtual machine instances, and are often used to process data in near-real time, as it is created. However, limitations in computing resources may inhibit a user from utilizing an on-demand code execution environment to simultaneously process a large, existing data set. The present application provides a task generation system that can iteratively retrieve data items from an existing data set and generate corresponding task calls to the on-demand computing environment. The calls can be ordered to address dependencies of the data items, such as when a first data item depends on prior processing of a second data item.
Type: Grant
Filed: March 30, 2016
Date of Patent: September 28, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Timothy Allen Wagner, Marc John Brooker, Ajay Nair
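Ordering task calls so that a data item is only invoked after the items it depends on can be sketched as a simple topological ordering. This is an illustrative sketch, not the patented system; `ordered_task_calls` and the `deps` map are hypothetical.

```python
# Sketch: drain an existing data set into ordered task calls, emitting
# an item only after every item it depends on has been emitted.

def ordered_task_calls(items, deps):
    """items: iterable of item ids; deps: id -> set of ids it depends on."""
    emitted, calls = set(), []
    pending = list(items)
    while pending:
        progress = False
        for item in pending[:]:
            if deps.get(item, set()) <= emitted:  # all deps already called
                calls.append(f"invoke({item})")
                emitted.add(item)
                pending.remove(item)
                progress = True
        if not progress:
            raise ValueError("dependency cycle in data set")
    return calls
```

In practice each `invoke(...)` would be a task call into the on-demand environment rather than a string.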
-
Patent number: 11127468
Abstract: Some embodiments include a method for addressing an integrated circuit for a non-volatile memory of the EEPROM type on a bus of the I2C type. The memory includes J hardware-identification pins, with J being an integer lying between 1 and 3, which are assigned respective potentials defining an assignment code on J bits. The method includes a first mode of addressing used selectively when the assignment code is equal to a fixed reference code on J bits, and a second mode of addressing used selectively when the assignment code is different from the reference code. In the first mode, the memory plane of the non-volatile memory is addressed by a memory address contained in the last low-order bits of the slave address and in the first N bytes received. In the second mode, the memory plane is addressed by a memory address contained in the first N+1 bytes received.
Type: Grant
Filed: December 14, 2017
Date of Patent: September 21, 2021
Assignee: STMicroelectronics (Rousset) SAS
Inventors: François Tailliet, Marc Battista
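The two decode paths can be sketched as bit arithmetic. This is a hypothetical illustration of the mode selection, not the actual silicon behavior; the reference code, N, and function name are assumptions.

```python
# Hypothetical sketch of the two addressing modes. When the assignment
# code from the ID pins equals the reference code, the low-order
# slave-address bits extend the memory address (mode 1); otherwise the
# address arrives entirely in the first N + 1 payload bytes (mode 2).

REFERENCE_CODE = 0b000   # assumed fixed reference code on J bits
N = 2                    # assumed address bytes in mode 1

def decode_address(assignment_code, slave_addr_low_bits, payload):
    if assignment_code == REFERENCE_CODE:
        # Mode 1: slave-address bits are the high bits of the address.
        addr = slave_addr_low_bits
        for byte in payload[:N]:
            addr = (addr << 8) | byte
    else:
        # Mode 2: first N + 1 payload bytes carry the full address.
        addr = 0
        for byte in payload[:N + 1]:
            addr = (addr << 8) | byte
    return addr
```

Mode 1 trades slave-address space for a shorter payload; mode 2 keeps the full slave address but spends one extra address byte.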
-
Patent number: 11119688
Abstract: The present disclosure provides a replica processing method based on a raft protocol. The method includes: for a node to be processed corresponding to a raft replica group, determining a replica to be cleaned corresponding to the node, the raft replica group including a first node and at least one second node; for the node, obtaining replica configuration information of the raft replica group, the replica configuration information including one or more primary replicas stored by the first node and one or more secondary replicas stored by the at least one second node; and for the node, determining whether the replica configuration information includes the replica to be cleaned, if yes, reserving the replica to be cleaned, and if no, deleting the replica to be cleaned. The present disclosure also provides a replica processing node, a distributed storage system, a server, and a computer-readable medium.
Type: Grant
Filed: April 20, 2020
Date of Patent: September 14, 2021
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Zhengli Yi, Pengfei Zheng, Xinxing Wang
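The keep-or-delete decision reduces to a membership test against the replica group's configuration. A minimal sketch with hypothetical names:

```python
# Sketch of the cleanup decision: a replica found on a node is kept
# ("reserved") only if the raft group's replica configuration still
# lists it; otherwise it is a leftover and is deleted.

def clean_replicas(node_replicas, group_config):
    """Return (reserved, deleted) replica ids for one node."""
    reserved = [r for r in node_replicas if r in group_config]
    deleted = [r for r in node_replicas if r not in group_config]
    return reserved, deleted
```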
-
Patent number: 11119656
Abstract: Systems and methods of deduplication aware scalable content placement are described. A method may include receiving data to be stored on one or more nodes of a storage array and calculating a plurality of hashes corresponding to the data. The method further includes determining a first subset of the plurality of hashes, determining a second subset of the plurality of hashes of the first subset, and generating a node candidate placement list. The method may further include sending the first subset to one or more nodes represented on the node candidate placement list and receiving, from the nodes represented on the node candidate placement list, characteristics corresponding to the nodes represented on the candidate placement list. The method may further include identifying one of the one or more nodes represented on the candidate placement list in view of the characteristics and sending the data to the identified node.
Type: Grant
Filed: June 10, 2019
Date of Patent: September 14, 2021
Assignee: Pure Storage, Inc.
Inventors: Robert Lee, Christopher Lumb, Ethan L. Miller, Igor Ostrovsky
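The core idea, probe candidate nodes with a subset of the data's hashes and place the data where the most hashes already exist, can be sketched as follows. This is an illustrative simplification, not the patented protocol; chunk size, sample size, and the overlap metric are assumptions.

```python
# Illustrative sketch: hash the data's chunks, send a small subset of
# the hashes to candidate nodes, and place the data on the node
# reporting the most matches (likely deduplication hits).

import hashlib

def chunk_hashes(data, chunk=4):
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]

def place(data, nodes):
    """nodes: name -> set of hashes already stored on that node."""
    sample = chunk_hashes(data)[:2]       # first subset of the hashes
    overlap = {name: len(set(sample) & stored)
               for name, stored in nodes.items()}
    return max(overlap, key=overlap.get)  # best dedup candidate
```

Here the "characteristic" each node reports is modeled as its hash overlap; the real method may weigh other node characteristics as well.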
-
Patent number: 11119934
Abstract: Provided herein may be a storage device and a method of operating the storage device. The storage device includes a memory controller having a map manager and preload mapping information storage, and a memory device having logical-to-physical mapping information. The memory controller determines and obtains preload mapping information from the memory device, and then stores the preload mapping information in the preload mapping information storage, before a map update operation of the logical-to-physical mapping information is performed. The preload mapping information includes logical-to-physical mapping information to be updated.
Type: Grant
Filed: September 11, 2019
Date of Patent: September 14, 2021
Assignee: SK hynix Inc.
Inventors: Byeong Gyu Park, Sung Hun Jeon, Young Ick Cho, Seung Gu Ji
-
Patent number: 11119932
Abstract: Operation of a multi-slice processor that includes a plurality of execution slices. Operation of such a multi-slice processor includes: determining, by a hypervisor, that consumption of memory controller resources, by a plurality of processing threads, is above a threshold quantity, wherein respective processing threads of the plurality of processing threads control respective prefetch settings; and responsive to determining that the consumption of the memory controller resources is above the threshold quantity, modifying individual memory controller usage of at least one of the plurality of processing threads such that the consumption of the memory controller resources is reduced below the threshold quantity.
Type: Grant
Filed: March 20, 2019
Date of Patent: September 14, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bradly G. Frey, George W. Rohrbaugh, III, Brian W. Thompto
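The throttling loop can be sketched as follows. This is a hypothetical simplification: each thread's usage is treated as proportional to its prefetch setting, and the heaviest users are halved until the total fits; the actual modification policy is not specified by the abstract.

```python
# Sketch: when combined memory-controller usage crosses a threshold,
# dial back the heaviest threads' usage (standing in for their
# prefetch settings) until the total fits again.

def throttle_prefetch(threads, threshold):
    """threads: dict thread name -> usage units (assumed > 0 somewhere)."""
    while sum(threads.values()) > threshold:
        worst = max(threads, key=threads.get)  # heaviest consumer
        threads[worst] //= 2                   # halve its prefetch depth
    return threads
```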
-
Patent number: 11112971
Abstract: A storage device includes one or more FMPKs including an FM chip capable of storing data and a storage controller that controls storing of write data of a predetermined write request for the FMPK. The FMPK includes a compression/decompression circuit that compresses data according to a second compression algorithm different from a first compression algorithm. The storage controller compresses data using the first compression algorithm, and determines whether the write data will be compressed by the storage controller or the compression/decompression circuit based on a predetermined condition. The write data is compressed by the determined storage controller or compression/decompression circuit and is stored in the FMPK.
Type: Grant
Filed: August 14, 2018
Date of Patent: September 7, 2021
Assignee: HITACHI, LTD.
Inventors: Ai Satoyama, Tomohiro Kawaguchi, Yoshihiro Yoshii
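The placement decision, compress on the controller or offload to the device's circuit based on a condition, can be sketched as below. This is purely illustrative: the condition (controller load) and the use of two zlib levels to stand in for two different compression algorithms are assumptions.

```python
# Sketch of the decision: compress on the storage controller (first
# algorithm) or offload to the device's circuit (second algorithm)
# based on a simple condition, modeled here as controller load.
# Two zlib levels stand in for the two distinct algorithms.

import zlib

def compress_write(data, controller_busy):
    if controller_busy:
        # Offload to the FMPK's compression/decompression circuit.
        return "device", zlib.compress(data, level=1)
    # Compress on the storage controller itself.
    return "controller", zlib.compress(data, level=9)
```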
-
Patent number: 11112998
Abstract: The present invention discloses an operation instruction scheduling method and device for a NAND flash memory device. The method comprises: performing task decomposition on the operation instruction of the NAND flash memory device, and sending an obtained task to a corresponding task queue; sending a current task to a corresponding arbitration queue according to a task type of the current task in the task queue; and scheduling a NAND interface for a to-be-executed task in the arbitration queue according to priority information of the arbitration queue. Embodiments of the present invention can efficiently realize operation instruction scheduling of a NAND flash memory device, improve flexibility of operation instruction scheduling of the NAND flash memory device, and improve overall performance of the NAND flash memory device.
Type: Grant
Filed: December 7, 2017
Date of Patent: September 7, 2021
Assignee: DERA CO., LTD.
Inventors: Wang Fenghai, Xia Jiexu, Wang Song, Yang Ji, Zhang Jiantao
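The three stages, decompose the instruction into tasks, route tasks to arbitration queues by type, and grant the NAND interface by queue priority, can be sketched as follows. This is a hypothetical model; the task types and the fixed priority order are assumptions.

```python
# Sketch of the three-stage flow: decompose each instruction into
# per-page tasks, route them to an arbitration queue by type, then
# grant the NAND interface to the highest-priority non-empty queue.

from collections import deque

PRIORITY = ["read", "write", "erase"]       # highest first (assumed)

def schedule(instructions):
    arbitration = {t: deque() for t in PRIORITY}
    for op, pages in instructions:          # task decomposition
        for page in pages:
            arbitration[op].append((op, page))
    order = []
    while any(arbitration.values()):
        for t in PRIORITY:                  # fixed-priority arbitration
            if arbitration[t]:
                order.append(arbitration[t].popleft())
                break
    return order
```

With this policy, queued reads always reach the NAND interface ahead of pending erases, which matches the usual latency-sensitivity ordering on flash.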