Patent Applications Published on February 6, 2020
-
Publication number: 20200042189
Abstract: A hierarchical sparse tensor compression method for artificial intelligence devices. In DRAM, the method not only saves the storage space of the neuron surface but also adds a meta-surface for the mask blocks. When reading data, the mask is read first, the size of the non-zero data is calculated, and only the non-zero data are read, saving DRAM bandwidth. In the cache, only non-zero data are stored, so the required storage space is reduced, and only non-zero data are used when processing. The method uses a bit mask to determine whether data are zero. The hierarchical compression scheme has three levels (tiles, lines, and points); bitmasks and non-zero data are read from DRAM, and bandwidth is saved by not reading zero data. When processing data, a tile whose bit mask is zero may be easily removed.
Type: Application
Filed: December 31, 2018
Publication date: February 6, 2020
Applicant: Nanjing Iluvatar CoreX Technology Co., Ltd. (DBA “Iluvatar CoreX Inc. Nanjing”)
Inventors: Pingping Shao, Jiejun Chen, Yongliu Wang
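The bitmask idea summarized in this abstract can be illustrated with a toy sketch (this is not the patented method; the packing layout is invented): a per-element bitmask records which elements are non-zero, and only those values are stored or read back.

```python
# Toy sketch of bitmask-based sparse compression: store a bitmask plus only
# the non-zero values, so zero data need never be stored or read.

def compress(values):
    """Return (bitmask, non_zero_values) for a list of numbers."""
    mask = 0
    packed = []
    for i, v in enumerate(values):
        if v != 0:
            mask |= 1 << i          # set bit i for a non-zero element
            packed.append(v)
    return mask, packed

def decompress(mask, packed, length):
    """Rebuild the full list; zeros are implied by cleared mask bits."""
    out = []
    it = iter(packed)
    for i in range(length):
        out.append(next(it) if mask & (1 << i) else 0)
    return out

data = [0, 3, 0, 0, 7, 0, 1, 0]
mask, packed = compress(data)
assert packed == [3, 7, 1]          # only non-zero data is stored
assert decompress(mask, packed, len(data)) == data
```

Reading the mask first tells the consumer exactly how many non-zero values follow, which is what lets the scheme skip zero data entirely.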
-
Publication number: 20200042190
Abstract: System and method to encode and decode raw data. The method to encode includes receiving a block of uncoded data, decomposing the block of uncoded data into a plurality of data vectors, mapping each of the plurality of data vectors to a bit marker, and storing the bit marker in a memory to produce an encoded representation of the uncoded data. Encoding may further include decomposing the block of uncoded data into default data and non-default data, and mapping only the non-default data. In some embodiments, bit markers may include a seed value and replication rule, or a fractalized pattern.
Type: Application
Filed: May 17, 2019
Publication date: February 6, 2020
Inventor: Brian M. Ignomirello
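A minimal sketch of the encoding idea above, under stated assumptions (the vector width, the all-zero "default data" rule, and the marker table are invented for illustration, not taken from the patent): the block is split into fixed-size vectors, default vectors are skipped, and each distinct non-default vector is mapped to a small marker.

```python
# Toy bit-marker encoder: split a block into fixed-size data vectors, skip
# default (all-zero) vectors, and map each distinct non-default vector to a
# small integer marker.

DEFAULT = b"\x00" * 4               # assumed "default data" vector

def encode(block, width=4):
    vectors = [block[i:i + width] for i in range(0, len(block), width)]
    table = {}                      # vector -> marker
    markers = []
    for v in vectors:
        if v == DEFAULT:
            markers.append(None)    # default data is not mapped
            continue
        if v not in table:
            table[v] = len(table)
        markers.append(table[v])
    return markers, table

def decode(markers, table, width=4):
    rev = {m: v for v, m in table.items()}
    return b"".join(DEFAULT if m is None else rev[m] for m in markers)

block = b"\x00\x00\x00\x00ABCDABCD\x00\x00\x00\x00"
markers, table = encode(block)
assert decode(markers, table) == block
assert len(table) == 1              # "ABCD" stored once, referenced twice
```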
-
Publication number: 20200042191
Abstract: A method for accessing a dynamic memory module, the method may include (i) receiving, by a memory controller, a set of access requests for accessing the dynamic memory module; (ii) converting the access requests to a set of commands, wherein the set of commands comprise (a) a first sub-set of commands that are related to a first group of memory banks, and (b) a second sub-set of commands that are related to a second group of memory banks; (iii) scheduling, by a scheduler of the memory controller, an execution of the first sub-set; (iv) scheduling an execution of the second sub-set to be interleaved with the execution of the first sub-set; and (v) executing the set of commands according to the schedule.
Type: Application
Filed: August 2, 2019
Publication date: February 6, 2020
Inventors: Boris Shulman, Yosef Kreinin, Leonid Smolyansky
-
Publication number: 20200042192
Abstract: A method for control of latency information through logical block addressing is described, comprising: receiving a computer command; performing a read flow operation on a computer buffer memory based on the computer command; populating at least one metadata frame with data based on logical block address latency information; and initiating a serial attached data path transfer for one of transmitting and receiving data to the computer drive, and transmitting data to a host based on the second latency.
Type: Application
Filed: October 9, 2019
Publication date: February 6, 2020
Inventors: Darin Edward GERHART, Nicholas Edward ORTMEIER, Mark David ERICKSON
-
Publication number: 20200042193
Abstract: Techniques are disclosed for use in managing data storage. In one embodiment, endurance values are generated in connection with a plurality of solid state drives (SSDs), each endurance value indicating an estimated number of write operations that may be performed on an SSD before the SSD wears out and requires replacement. Additionally, storage space is reserved on one or more of the SSDs such that the endurance level associated with an SSD's endurance value has an inverse relationship with the amount of storage space reserved on the SSD.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventors: Nickolay A. Dalmatov, Michael Patrick Wahl, Jian Gao
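The inverse relationship described in this abstract can be sketched with an invented mapping (the level range and reserve percentages are illustrative assumptions, not from the patent): the higher an SSD's endurance level, the smaller the fraction of its space that is held in reserve.

```python
# Illustrative inverse mapping from endurance level to reserved space: a
# low-endurance SSD gets more spare capacity reserved, a high-endurance
# SSD gets less.

def reserved_fraction(endurance_level, max_level=5, max_reserve=0.25):
    """Map endurance level 1..max_level to a reserved-space fraction."""
    if not 1 <= endurance_level <= max_level:
        raise ValueError("endurance level out of range")
    # Linear inverse mapping: level 1 -> max_reserve, max_level -> 0.
    return max_reserve * (max_level - endurance_level) / (max_level - 1)

fractions = [reserved_fraction(lvl) for lvl in range(1, 6)]
assert fractions[0] == 0.25         # lowest endurance: most space reserved
assert fractions[-1] == 0.0         # highest endurance: none reserved
assert all(a > b for a, b in zip(fractions, fractions[1:]))  # inverse
```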
-
Publication number: 20200042194
Abstract: According to the embodiments, a nonvolatile memory device is configured to store a normal operating system and a bootloader. A host device is capable of initiating the normal operating system by using the bootloader. The host device is configured to determine whether a first condition is established based on information obtained from the nonvolatile memory device, and to rewrite the bootloader, when it is determined that the first condition is established, so that emergency software is initiated when booting the host device. The emergency software is executed on the host device. Under the control of the emergency software, the host device is capable of issuing only a read command to the nonvolatile memory device.
Type: Application
Filed: September 11, 2019
Publication date: February 6, 2020
Applicant: Toshiba Memory Corporation
Inventor: Daisuke HASHIMOTO
-
Publication number: 20200042195
Abstract: A method, a computing device, and a non-transitory machine-readable medium for performing multipath selection based on a determined quality of service for the paths. An example method includes a host computing device periodically polling a storage system for path information, including an indication of a recommended storage controller. The host computing device periodically determines quality-of-service information corresponding to a plurality of paths between the host computing device and a storage volume of the storage system, where at least one of the plurality of paths includes the recommended storage controller. The host computing device identifies a fault corresponding to a path of the plurality of paths that routes I/O from the host computing device to the storage volume, and re-routes the I/O from that path to a different path of the plurality of paths, where the different path is selected for the re-routing based on the quality-of-service information and the path information.
Type: Application
Filed: October 15, 2019
Publication date: February 6, 2020
Inventors: Joey Parnell, Steven Schremmer, Brandon Thompson, Mahmoud K. Jibbe
-
Publication number: 20200042196
Abstract: A method for execution by an auditing unit includes sending a verification request to a storage unit that includes a slice name and a challenge value. A proof of knowledge is received from the storage unit in response, where the proof of knowledge is generated by the storage unit based on a prover output value generated by performing a combined integrity function on the challenge value and slice data associated with the slice name. A verifier output value is generated by the auditing unit as a function of the challenge value and a known slice integrity check value for the slice name. Output verification data is generated by comparing the prover output value to the verifier output value. A corrective action is initiated on the storage unit when the prover output value compares unfavorably to the verifier output value, or when the proof of knowledge is evaluated to be invalid.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventor: Jason K. Resch
-
Publication number: 20200042197
Abstract: A system including a stack of two or more layers of volatile memory, such as layers of a 3D stacked DRAM memory, places data in the stack based on a temperature or a refresh rate. When a threshold is exceeded, data are moved from a first region to a second region in the stack, the second region having one or both of a second temperature lower than a first temperature of the first region or a second refresh rate lower than a first refresh rate of the first region.
Type: Application
Filed: August 1, 2018
Publication date: February 6, 2020
Inventors: Jagadish B. KOTRA, Karthik RAO, Joseph L. GREATHOUSE
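The threshold-triggered move described in this abstract can be sketched as follows; the region model, threshold value, and field names are invented for illustration and are not taken from the patent.

```python
# Hedged sketch of temperature-triggered placement in a stacked memory:
# when a region's temperature exceeds a threshold, its data is moved to a
# region with a lower temperature.

THRESHOLD_C = 85                    # illustrative trigger temperature

def pick_target(regions, source):
    """Choose a region cooler than the source, preferring the coolest."""
    cooler = [r for r in regions if r["temp_c"] < source["temp_c"]]
    return min(cooler, key=lambda r: r["temp_c"]) if cooler else None

def maybe_migrate(regions, source):
    if source["temp_c"] <= THRESHOLD_C:
        return None                 # threshold not exceeded: no move
    target = pick_target(regions, source)
    if target is not None:
        target["data"].extend(source["data"])   # move data to cooler region
        source["data"].clear()
    return target

hot = {"name": "layer0", "temp_c": 92, "data": ["page-a"]}
cool = {"name": "layer3", "temp_c": 60, "data": []}
moved_to = maybe_migrate([hot, cool], hot)
assert moved_to is cool and cool["data"] == ["page-a"] and hot["data"] == []
```

The same shape of policy applies to the refresh-rate variant in the abstract: swap the temperature comparison for a refresh-rate comparison.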
-
Publication number: 20200042198
Abstract: One example method includes receiving an IO associated with a process initiated by an application, where the IO is identified by a tag that corresponds to the process. The method further includes saving the tag on a device that is an element of a storage group (SG) that is specific to the application, and correlating the tag with a data protection process that is associated with the application. When a request is received to perform an SG protection process, the SG protection process is performed on the tagged device.
Type: Application
Filed: August 2, 2018
Publication date: February 6, 2020
Inventors: Arieh Don, Jehuda Shemer, Yaron Dar
-
Publication number: 20200042199
Abstract: A method for performing a write operation in a distributed storage system is disclosed. The method comprises receiving a first time-stamped write request from a proxy server. Further, the method comprises determining if the first time-stamped write request is within a time window of a reorder buffer and if the first time-stamped write request overlaps with a second time-stamped write request in the reorder buffer. Responsive to a determination that the first time-stamped write request is outside the time window, or that the first time-stamped write request is within the time window but has an older time-stamp than the second time-stamped write request, the method comprises rejecting the first time-stamped write request. Otherwise, the method comprises inserting the first time-stamped write request in the reorder buffer in timestamp order and transmitting an accept to the proxy server.
Type: Application
Filed: August 6, 2018
Publication date: February 6, 2020
Inventor: Guillermo J. ROZAS
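The accept/reject rule summarized in this abstract can be sketched as follows (an illustrative toy, not the patented method; the overlap test here is simple key equality, and the window arithmetic is an assumption).

```python
# Toy reorder buffer: reject a write whose timestamp falls outside the time
# window, or that overlaps an already-buffered write with a newer timestamp;
# otherwise insert it in timestamp order and accept.

import bisect

class ReorderBuffer:
    def __init__(self, window):
        self.window = window
        self.writes = []            # (timestamp, key), kept in timestamp order
        self.newest = 0

    def offer(self, ts, key):
        if self.newest - ts > self.window:
            return False            # outside the time window: reject
        for other_ts, other_key in self.writes:
            if other_key == key and other_ts > ts:
                return False        # overlaps a newer buffered write: reject
        bisect.insort(self.writes, (ts, key))
        self.newest = max(self.newest, ts)
        return True                 # accepted: transmit accept to the proxy

buf = ReorderBuffer(window=100)
assert buf.offer(50, "blk1")
assert buf.offer(40, "blk2")        # older but in-window, no overlap: accept
assert not buf.offer(30, "blk1")    # overlaps blk1's newer write: reject
assert not buf.offer(-60, "blk3")   # older than the window: reject
```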
-
Publication number: 20200042200
Abstract: According to one embodiment, when it is determined that a first storage device of a plurality of storage devices is to be removed and an additional storage device is connected to a storage controller, the storage controller writes update data portions corresponding to data portions already written to the first storage device to any storage device selected from remaining one or more storage devices of the plurality of storage devices except for the first storage device and the additional storage device. Further, the storage controller writes update data portions corresponding to data portions already written to the remaining one or more storage devices to any storage device selected from the remaining one or more storage devices and the additional storage device.
Type: Application
Filed: March 14, 2019
Publication date: February 6, 2020
Applicant: TOSHIBA MEMORY CORPORATION
Inventor: Shinichi Kanno
-
Publication number: 20200042201
Abstract: Devices and techniques for managing partial superblocks in a NAND device are described herein. A set of superblock candidates is calculated. Here, a superblock may have a set of blocks that share a same position in each plane in each die of a NAND array of the NAND device. A set of partial superblock candidates is also calculated. A partial superblock candidate is a superblock candidate that has at least one plane that has a bad block. A partial superblock use classification may then be obtained. Superblocks may be established for the NAND device by using members of the set of superblock candidates after removing the set of partial superblock candidates from the set of superblock candidates. Partial superblocks may then be established for classes of data in the NAND device according to the partial superblock use classification.
Type: Application
Filed: July 9, 2019
Publication date: February 6, 2020
Inventors: Jianmin Huang, Kulachet Tanpairoj, Harish Reddy Singidi, Ting Luo
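The candidate-selection step in this abstract can be sketched with an invented NAND layout (positions, plane counts, and the bad-block set below are illustrative assumptions, not from the patent): a candidate whose planes include a bad block becomes a partial-superblock candidate and is removed from the full-superblock set.

```python
# Sketch of superblock candidate classification: a candidate groups blocks
# at the same position across every plane; a candidate containing any bad
# block is classified as a partial-superblock candidate.

def classify(num_positions, planes_per_device, bad_blocks):
    """bad_blocks: set of (position, plane) pairs known to be bad."""
    full, partial = [], []
    for pos in range(num_positions):
        has_bad = any((pos, p) in bad_blocks for p in range(planes_per_device))
        (partial if has_bad else full).append(pos)
    return full, partial

full, partial = classify(4, planes_per_device=2, bad_blocks={(1, 0), (3, 1)})
assert full == [0, 2]               # usable as full superblocks
assert partial == [1, 3]            # assigned per the use classification
```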
-
Publication number: 20200042202
Abstract: A method for execution by an auditing unit includes sending a verification request to a storage unit that includes a slice name and a challenge value. A proof of knowledge is received from the storage unit in response, where the proof of knowledge is generated by the storage unit based on a prover output value generated by performing a combined integrity function on the challenge value and slice data associated with the slice name. A verifier output value is generated by the auditing unit as a function of the challenge value and a known slice integrity check value for the slice name. Output verification data is generated by comparing the prover output value to the verifier output value. A corrective action is initiated on the storage unit when the prover output value compares unfavorably to the verifier output value, or when the proof of knowledge is evaluated to be invalid.
Type: Application
Filed: July 18, 2019
Publication date: February 6, 2020
Inventor: Jason K. Resch
-
Publication number: 20200042203
Abstract: A storage module includes a set of memories. Each of the memories in the set of memories may be divided into a set of portions. A controller is configured to transfer data between the set of memories and a host connected through an interface. A set of channels connects the set of memories to the controller. The controller is also configured to select: a memory from the set of memories, a portion from the set of portions for the selected memory, and/or a channel from the set of channels, e.g., connected to the selected memory, based upon an identification (ID) associated with the data. The ID may be separate from the data and a write address of the data, and the selected memory, the selected portion, and the selected channel may be used to store the data.
Type: Application
Filed: August 20, 2019
Publication date: February 6, 2020
Inventor: Kimmo Juhani Mylly
-
Publication number: 20200042204
Abstract: The present invention relates to an operation method of a distributed memory disk cluster storage system. The distributed memory storage system is adopted to satisfy four desired improvements: expanded network bandwidth, expanded hard disk capacity, increased IOPS, and increased memory I/O transmission speed. Meanwhile, the system can be operated cross-region, cross-datacenter, and cross-WAN, so user requirements can be collected through the local memory disk cluster to provide the corresponding services, and the capacity of the memory disk cluster can be gradually expanded to further provide cross-region or cross-country data service.
Type: Application
Filed: September 25, 2019
Publication date: February 6, 2020
Inventor: HSUN-YUAN CHEN
-
Publication number: 20200042205
Abstract: One or more memory systems, architectural structures, and/or methods of storing information in memory devices are disclosed to improve the data bandwidth and/or to reduce the load on the communications links in a memory system. The system may include one or more memory devices, one or more memory control circuits, and one or more data buffer circuits. In one embodiment, the Host only transmits data over its communications link with the data buffer circuit. In one aspect, the memory control circuit does not send a control signal to the data buffer circuits. In one aspect, the memory control circuit and the data buffer circuits each maintain a separate state machine-driven address pointer or local address sequencer, which contains the same tags in the same sequence. In another aspect, a resynchronization method is disclosed.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Inventors: Steven R. Carlough, Susan M. Eickhoff, Patrick J. Meaney, Stephen J. Powell, Gary A. Van Huben, Jie Zheng
-
Publication number: 20200042206
Abstract: Copy source to target operations may be selectively and preemptively undertaken in advance of source destage operations. In another aspect, logic detects sequential writes, including large block writes, to point-in-time copy sources. In response, destage tasks on the associated point-in-time copy targets are started, which include, in one embodiment, stride-aligned copy source to target operations that copy unmodified data from the point-in-time copy sources to the point-in-time copy targets in alignment with the strides of the target. As a result, when write data of write operations is destaged to the point-in-time copy sources, such source destages do not need to wait for copy source to target operations, since they have already been performed. In addition, the copy source to target operations may be stride-aligned with respect to the stride boundaries of the point-in-time copy targets. Other features and aspects may be realized, depending upon the particular application.
Type: Application
Filed: October 11, 2019
Publication date: February 6, 2020
Inventors: Lokesh M. Gupta, Kevin J. Ash, Clint A. Hardy, Karl A. Nielsen
-
Publication number: 20200042207
Abstract: A storage device comprises a controller and a plurality of nonvolatile memory devices. Maintenance conditions of the nonvolatile memory devices are monitored internally by the storage device. Upon determining that a maintenance condition is satisfied, the storage device notifies an external host. The controller may perform the maintenance operations on the plurality of nonvolatile memory devices with little disruption to the host and assure data is reliably maintained by the nonvolatile memory devices.
Type: Application
Filed: June 21, 2019
Publication date: February 6, 2020
Inventors: YOUNGHO KWAK, HOJUN SHIM, KWANGHEE CHOI
-
Publication number: 20200042208
Abstract: Example tiered storage systems, storage devices, and methods provide tier configuration by peer storage devices. Each tiered storage device is configured to communicate with a plurality of peer storage devices with storage device identifiers. The storage devices may query each other for performance characteristics and/or self-assigned performance tiers and organize the storage devices into a tier configuration. Each storage device, a storage controller, another system, and/or some combination may store metadata that describes the tier configuration. The tier configuration may then be used to route host data commands among the plurality of peer storage devices.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventor: Adam Roberts
-
Publication number: 20200042209
Abstract: Techniques and mechanisms for providing communications which facilitate link training. In an embodiment, a memory controller includes, or couples to, trainer circuitry which is configured to provide instructions to generate memory access commands. The instructions are accessed at the circuitry in response to an indication that link training is performed, where the accessing is independent of communication with a processor coupled to the memory controller. Based on the instructions, memory access commands are communicated via a link between the memory controller and a memory device. Link training is performed based on an evaluation of one or more characteristics of the link communications. In another embodiment, memory access commands are generated, based on the instructions, while a validity of data at the memory device is maintained.
Type: Application
Filed: April 22, 2019
Publication date: February 6, 2020
Applicant: Intel Corporation
Inventors: Tonia G. Morris, Moshe Jacob Finkelstein, Ramesh Subashchandrabose, Lohit R. Yerva
-
Publication number: 20200042210
Abstract: A memory manager on a programmable device manages memory allocated to accelerators on the programmable device and allocated to processes that access the programmable device. The memory manager can manage both memory on the programmable device as well as external memory coupled to the programmable device. The memory manager protects the memory from unauthorized access by enforcing protection for the memory, using keys, encryption, or the like. The memory manager can allocate a partition of memory to an accelerator when an accelerator is deployed to a programmable device, then allocate subpartitions within the allocated partition for each process that accesses the accelerator. When an accelerator is cast out of the programmable device, the memory partition is scrubbed so it can be reclaimed and allocated to another accelerator. When a process terminates, the subpartitions corresponding to the process are scrubbed so they may be reclaimed and allocated to another process.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventors: Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
-
Publication number: 20200042211
Abstract: A binary that is stored in a portion of runtime memory subject to garbage collection is analyzed. An amount of memory in a portion of runtime memory not subject to garbage collection is allocated for a binary copy based on the analysis. The binary is copied to the allocated portion of runtime memory not subject to garbage collection.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventor: Sergey Rogulenko
-
Publication number: 20200042212
Abstract: A data storage management layer comprises computing device(s) operatively connected to storage resources, which comprise data storage units and control units; the storage resources are in turn operatively connected to host computers. A subset of the storage resources is assigned to each host in order to provide storage services according to performance requirements predefined for the host, thereby generating Virtual Private Arrays (VPAs). The computing device(s) are configured to perform a method of managing the data storage system comprising: (a) implementing storage management strategies comprising rules, where the rules comprise conditions and actions, and the actions are capable of improving VPA performance in a dynamic manner; and (b) repetitively (i) monitoring VPA performance for detection of compliance of a VPA with the condition(s), and (ii) responsive to detection of compliance of a VPA with the condition(s), performing the action(s).
Type: Application
Filed: July 24, 2019
Publication date: February 6, 2020
Inventors: Adik SOKOLOVSKI, Eyal GORDON, Gilad HITRON, Benjamin Noam BONDI, Guy LORMAN
-
Publication number: 20200042213
Abstract: A virtual storage system according to an aspect of the present invention includes multiple storage systems, each including a storage controller that accepts a read/write request for reading or writing from and to a logical volume, and multiple storage devices. The storage system defines a pool that manages the storage device capable of allocating any of storage areas to the logical volume, and manages the capacity (pool capacity) of the storage areas belonging to the pool and the capacity (pool available capacity) of unused storage areas in the pool. Furthermore, the storage system calculates the total value of the pool available capacities of the storage systems included in the virtual storage system, and provides the server with the total value as the pool available capacity of the virtual storage system.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Inventors: Akira YAMAMOTO, Hiroaki AKUTSU, Tomohiro KAWAGUCHI
-
Publication number: 20200042214
Abstract: Implementing a base set of data storage features for containers across multiple cloud computing environments. A container specification analyzer receives a container specification that identifies a container to be initiated, a volume to be mounted, and a native device driver to communicate with to facilitate mounting the volume. The container specification analyzer changes the container specification to generate an updated container specification that identifies a pass-through device driver to communicate with in lieu of the native device driver and identifies pass-through device driver data that identifies a data storage feature to be performed on data destined for the native device driver. The container specification analyzer returns the updated container specification for processing by a container initiator.
Type: Application
Filed: August 2, 2018
Publication date: February 6, 2020
Inventors: Huamin Chen, Bradley D. Childs
-
Publication number: 20200042215
Abstract: Example peer storage systems, storage devices, and methods provide peer operation state indicators for managing peer-to-peer operations. Peer storage devices establish peer communication channels that communicate data among the peer storage devices in a way that bypasses the storage control plane for managing the peer storage devices. The peer storage devices identify peer operations that communicate data through the peer communication channels and generate a peer operation state during the operating period of the peer operations. The peer storage devices activate a state indicator configured to indicate the peer operation state. The state indicator may be used to prevent a storage controller or other entity with access to the storage device, including administrative personnel, from performing an operation that may corrupt data or truncate a media operation involving peer-to-peer communications.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventor: Adam Roberts
-
Publication number: 20200042216
Abstract: The present disclosure relates to an apparatus transforming a computation graph. The apparatus comprises a converter configured to convert the computation graph into a storage-based graph having a plurality of nodes and at least one edge representing an operation performed on data flowing between two nodes among the plurality of nodes. Each of the plurality of nodes represents a storage storing data. The apparatus further comprises an optimizer configured to identify at least one processing condition of a processing system executing the computation graph, and to adjust the storage-based graph according to the at least one processing condition.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventor: Weifang ZHANG
-
Publication number: 20200042217
Abstract: Example storage systems and methods provide multichannel communication among subsystems, including a compute complex. A plurality of storage devices, a host, and a compute complex are interconnected over an interconnect fabric. The storage system is configured with a host-storage channel for communication between the host and the plurality of storage devices, a host-compute channel for communication between the host and the compute complex, and a compute-storage channel for communication between the compute complex and the storage devices.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventors: Adam Roberts, Sivakumar Munnangi, John Scaramuzzo
-
Publication number: 20200042218
Abstract: A method is used in managing data reduction in storage systems using machine learning. A value representing a data reduction assessment for a first data block in a storage system is calculated using a hash of the data block. The value is used to train a machine learning system to assess data reduction associated with a second data block in the storage system without performing the data reduction on the second data block, where assessing data reduction associated with the second data block indicates a probability as to whether the second data block can be reduced.
Type: Application
Filed: August 1, 2018
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Sorin FAIBISH, Rustem RAFIKOV, Ivan BASSOV
-
Publication number: 20200042219
Abstract: A method is used in managing deduplication characteristics in a storage system. Deduplication entries stored in a deduplication cache are categorized into a set of deduplication groups based on a data deduplication probability associated with the deduplication entries. A machine learning system is used to dynamically adjust deduplication characteristics associated with the set of deduplication groups based on an I/O workload associated with the storage system.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Yubing WANG, Philippe ARMANGAU, Ajay KARRI
-
Publication number: 20200042220
Abstract: A method is used in managing inline data de-duplication in storage systems. The method receives a request to write data at a logical address of a file in a file system of a storage system. The method determines whether the data can be de-duplicated to matching data residing on the storage system in a compressed format. Based on the determination, the method uses a block mapping pointer associated with the matching data to de-duplicate the data. The block mapping pointer includes a block mapping of a set of compressed data extents and information regarding location of the matching data within the set of compressed data extents.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Philippe ARMANGAU, Christopher SEIBEL, Bruce CARAM, Alexei KARABAN
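The core de-duplication move in this abstract, satisfying a write by pointing at an existing block mapping instead of storing the data again, can be sketched as follows. The structures here (a digest table and integer pointers) are invented for illustration and stand in for the patent's compressed-extent block mappings.

```python
# Toy inline dedup: hash incoming data; on a digest match, reuse the
# existing block-mapping pointer instead of writing a new block.

import hashlib

store = {}                          # digest -> block-mapping pointer (an id)
file_map = {}                       # logical address -> pointer
next_block = 0

def write(addr, data):
    global next_block
    digest = hashlib.sha256(data).hexdigest()
    if digest in store:
        file_map[addr] = store[digest]     # de-duplicated: reuse pointer
        return False                       # no new block written
    store[digest] = next_block
    file_map[addr] = next_block
    next_block += 1
    return True                            # new block written

assert write(0, b"hello") is True
assert write(8, b"hello") is False         # duplicate: shares the mapping
assert file_map[0] == file_map[8]
assert write(16, b"world") is True
```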
-
Publication number: 20200042221
Abstract: Disclosed herein is an apparatus and method for a shuffle manager for a distributed memory object system. In one embodiment, a method includes: forming a system cluster comprising a plurality of nodes, wherein each node includes a memory, a processor, and a network interface to send and receive messages and data, and wherein the network interface operates on remote direct memory access; creating a plurality of sharable memory spaces having partitioned data, wherein each space is a distributed memory object having a compute node, and wherein the sharable memory spaces are at least one of persistent memory or DRAM cache; storing data in an in-memory data structure when there is available memory in a compute node; and, if there is an out-of-memory condition, serializing at least some of the in-memory data and spilling it to a distributed memory object system to persist shuffled data outside the compute node.
Type: Application
Filed: April 1, 2019
Publication date: February 6, 2020
Inventors: Peiyu Zhuang, Kunwu Huang, Yue Zhao, Wei Kang, Haiyan Wang, Yue Li, Jie Yu
-
Publication number: 20200042222
Abstract: Data-aware orchestration with respect to a distributed system platform enables at least lifting and shifting of pre-existing applications and associated data without developer action. A volume of a local store is created automatically in response to a container comprising a user application that is non-native with respect to the distributed system platform. The volume is then exposed to the container for use by the application to save and retrieve data. The container and local store are co-located on a compute node, providing at least high availability. The application and local store can be duplicated on one or more replicas, providing reliability in case of a failure. Further, partitions can be created automatically in response to declarative specification.
Type: Application
Filed: October 8, 2018
Publication date: February 6, 2020
Inventors: Subramanian Ramaswamy, Raja Krishnaswamy, Kumar Gaurav Khanna, Gopala Krishna R. Kakivaya
-
Publication number: 20200042223
Abstract: Embodiments described herein provide a system comprising a storage device. The storage device includes a plurality of non-volatile memory cells, each of which is configured to store a plurality of data bits. During operation, the system forms a first region in the storage device comprising a subset of the plurality of non-volatile memory cells in such a way that a respective cell of the first region is reconfigured to store fewer data bits than the plurality of data bits. The system also forms a second region comprising a remainder of the plurality of non-volatile memory cells. The system can write host data received via a host interface in the first region; the write operations received from the host interface are restricted to the first region. The system can also transfer valid data from the first region to the second region.
Type: Application
Filed: February 15, 2019
Publication date: February 6, 2020
Applicant: Alibaba Group Holding Limited
Inventor: Shu Li
-
Publication number: 20200042224Abstract: A method, computer program product, and computing system for managing wear balance in a mapped RAID storage system. According to embodiments, mapped RAID extents, which are comprised of storage disk extents, are assigned to particular mapped RAID groups based on one or more parameters related to wear experienced by disk extents associated with the RAID extent. Endurance parameters are measured and can be used by machine learning modules to predict future wear levels enabling predictive wear balancing in mapped RAID storage systems. Embodiments can be used when initially forming a mapped RAID group, when adding storage to an existing mapped RAID group, or when managing the ongoing performance of a mapped RAID group or storage system.Type: ApplicationFiled: August 2, 2018Publication date: February 6, 2020Inventors: Nickolay Dalmatov, Michael P. Wahl, Jian Gao
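One way to picture the wear-aware assignment described above: when forming a mapped RAID extent, pick disk extents from the least-worn disks. The wear values and `width` parameter below are made up; a real system would feed measured or predicted endurance values into this selection.

```python
# Illustrative wear-aware extent selection for a mapped RAID group.

def select_disk_extents(disk_wear, width):
    """Return `width` distinct disk ids, least-worn first."""
    ranked = sorted(disk_wear, key=disk_wear.get)
    return ranked[:width]

# disk id -> wear fraction (could equally be an ML-predicted future wear level)
wear = {"d0": 0.70, "d1": 0.10, "d2": 0.40, "d3": 0.25, "d4": 0.90}
print(select_disk_extents(wear, width=3))  # ['d1', 'd3', 'd2']
```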
-
Publication number: 20200042225Abstract: A data processing system includes a host configured to handle data in response to an input entered from an external source, and a plurality of memory systems engaged with the host and configured to store or output the data in response to a request generated by the host. A first memory system among the plurality of memory systems accesses a specific location therein in response to a read command and an address delivered from the host. The first memory system outputs subject data read from the specific location to the host. The first memory system migrates the subject data to another memory system among the plurality of memory systems according to an operational state of the specific location.Type: ApplicationFiled: July 30, 2019Publication date: February 6, 2020Inventors: Ik-Sung OH, Byeong-Gyu PARK
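The read-then-migrate behavior can be sketched as below: the first memory system serves the read, and if the accessed location's operational state crosses a threshold, the data is migrated to a peer system. The threshold and the use of a read count as the "operational state" are assumptions for illustration.

```python
# Sketch: serve the read, then migrate data off worn/hot locations.

WEAR_LIMIT = 3   # invented threshold on per-location accesses

class MemorySystem:
    def __init__(self):
        self.data = {}
        self.read_counts = {}

    def read(self, addr, peer):
        value = self.data[addr]
        self.read_counts[addr] = self.read_counts.get(addr, 0) + 1
        # Operational-state check: migrate the data to the peer system.
        if self.read_counts[addr] >= WEAR_LIMIT:
            peer.data[addr] = self.data.pop(addr)
        return value

first, second = MemorySystem(), MemorySystem()
first.data[0x10] = b"payload"
for _ in range(3):
    owner = first if 0x10 in first.data else second
    owner.read(0x10, peer=second)
print(0x10 in second.data)  # True
```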
-
Publication number: 20200042226Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric.Type: ApplicationFiled: August 20, 2019Publication date: February 6, 2020Inventors: STEVEN J. FRANK, LARRY REBACK
-
SYSTEMS AND METHODS FOR PROVIDING CUSTOMER SERVICE FUNCTIONALITY DURING PORTFOLIO MIGRATION DOWNTIME
Publication number: 20200042227Abstract: A system includes one or more memory devices storing instructions, and one or more processors configured to execute the instructions to perform steps of a method for providing customer data access during a migration process. The system may initiate a transfer of customer data from a source data server to a system platform and transfer a subset of the customer data to a temporary data storage. The system may modify the temporary copy of customer data and generate an instruction to modify the permanent copy of customer data. In response to the completion of the transfer of customer data from the source data server to the system mainframe, the system may then transfer and execute the instruction to modify the permanent copy of customer data.Type: ApplicationFiled: October 14, 2019Publication date: February 6, 2020Inventors: Faizan Ahmad, Shahnawaz Ali
-
Publication number: 20200042228Abstract: Example tiered storage systems, storage devices, and methods provide tier configuration for routing of data commands by peer storage devices. Each tiered storage device is configured to communicate with a plurality of peer storage devices with storage device identifiers. Each storage device is assigned to a performance tier in a tier configuration that determines which host data tier should be stored in the storage media of the storage device, the local performance tier for the storage device. If the local performance tier of the storage device does not match the host data tier for a data command or stored data element when the storage device determines the host data tier, the storage device selectively forwards the host data to another peer storage device with the performance tier that matches the host data tier.Type: ApplicationFiled: August 3, 2018Publication date: February 6, 2020Inventor: Adam Roberts
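The tier-routing idea above can be sketched as a handful of peer devices, each with a local performance tier: a device that receives data tagged with a non-matching host tier forwards it to a peer whose tier matches. Tier names and the device model are illustrative.

```python
# Sketch: selective forwarding of host data between peer storage devices
# based on a match between the host data tier and the local performance tier.

class TieredDevice:
    def __init__(self, device_id, local_tier, peers):
        self.device_id = device_id
        self.local_tier = local_tier
        self.peers = peers            # device_id -> TieredDevice
        self.media = {}

    def write(self, lba, data, host_tier):
        if host_tier == self.local_tier:
            self.media[lba] = data    # tier matches: store locally
            return self.device_id
        # Tier mismatch: forward to the first peer whose tier matches.
        for peer in self.peers.values():
            if peer.local_tier == host_tier:
                return peer.write(lba, data, host_tier)
        raise LookupError(f"no peer for tier {host_tier!r}")

peers = {}
fast = TieredDevice("nvme0", "hot", peers)
slow = TieredDevice("hdd0", "cold", peers)
peers.update({"nvme0": fast, "hdd0": slow})
dest = fast.write(7, b"log", host_tier="cold")
print(dest)  # 'hdd0'
```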
-
Publication number: 20200042229Abstract: Method, computer program product, and system embodiments of the present disclosure may include a computing device which may set a predetermined flag on data to be copied from a primary storage tier and a secondary storage tier. The computing device may identify a first portion of the flagged data as being in a pre-migrated state stored on the primary storage tier and migrate the flagged pre-migrated data from the primary storage tier to a target medium. The computing device may identify a second portion of the flagged data as being in a migrated state stored on the secondary storage tier. The computing device may recall the flagged migrated data from the secondary storage tier to the primary storage tier and migrate the recalled migrated data from the primary storage tier to the target medium.Type: ApplicationFiled: September 19, 2019Publication date: February 6, 2020Inventors: Tsuyoshi Miyamura, Sosuke Matsui, Tohru Hasegawa, Hiroshi Itakagi, Noriko Yamamoto, Shinsuke Mitsuma
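The two copy paths described above reduce to this: flagged data already on the primary tier (pre-migrated) goes straight to the target medium, while flagged data on the secondary tier (migrated) is recalled to the primary tier first. The containers below are simplified stand-ins for the tiers.

```python
# Rough sketch of the flagged-copy flow across a two-tier hierarchy.

def copy_flagged(primary, secondary, flags):
    target = {}
    for name in flags:
        if name in primary:                 # pre-migrated state: copy directly
            target[name] = primary[name]
        elif name in secondary:             # migrated state: recall first
            primary[name] = secondary.pop(name)
            target[name] = primary[name]
    return target

primary = {"a.txt": b"1"}
secondary = {"b.txt": b"2"}
out = copy_flagged(primary, secondary, flags={"a.txt", "b.txt"})
print(sorted(out))  # ['a.txt', 'b.txt']
```

Note that after the copy, `b.txt` lives on the primary tier: the recall is a side effect the abstract calls out explicitly.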
-
Publication number: 20200042230Abstract: One embodiment facilitates data storage. During operation, the system selects a first page of a non-volatile storage to be recycled in a garbage collection process. The system determines that the first page is a first partial page which includes valid data and invalid data. The system combines the valid data from the first partial page with valid data from a second partial page to form a first full page, wherein a full page is aligned with a physical page in the non-volatile storage. The system writes the first full page to a first newly assigned physical page of the non-volatile storage.Type: ApplicationFiled: January 16, 2019Publication date: February 6, 2020Applicant: Alibaba Group Holding LimitedInventor: Shu Li
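The merge step in the abstract — combining the valid sectors of two partial pages into one full page before writing it to a newly assigned physical page — can be sketched directly. The page size and sector layout are illustrative.

```python
# Sketch of garbage collection merging two partial pages into a full page.

PAGE_SIZE = 4   # sectors per physical page (made-up value)

def merge_partial_pages(page_a, page_b):
    """Each page is a list of (sector, is_valid) pairs; keep only valid sectors."""
    valid = [s for s, ok in page_a if ok] + [s for s, ok in page_b if ok]
    assert len(valid) <= PAGE_SIZE, "fragments exceed one full page"
    return valid   # written as one aligned physical page

page_a = [(b"s0", True), (b"s1", False), (b"s2", True), (b"s3", False)]
page_b = [(b"t0", False), (b"t1", True), (b"t2", False), (b"t3", True)]
full_page = merge_partial_pages(page_a, page_b)
print(full_page)  # [b's0', b's2', b't1', b't3']
```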
-
Publication number: 20200042231Abstract: A storage controller for a storage system includes a host interface, configured to receive data for storage within the storage system, and to transmit data from the storage system to a host system, and one or more storage interfaces, configured to transmit data to storage media, and to receive data from the storage media. The storage controller also includes a plurality of data paths configured to process and transfer data between the host interface and the one or more storage interfaces, the plurality of data paths comprising a first quantity of read data paths configured to interpret data retrieved from the storage media, and a second quantity of write data paths configured to prepare data for storage onto the storage media, and an arbiter configured to dynamically arbitrate access to the one or more storage interfaces by the read data paths and the write data paths.Type: ApplicationFiled: August 2, 2019Publication date: February 6, 2020Applicant: Burlywood, Inc.Inventors: David Christopher Pruett, Christopher Bergman
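The arbiter's job — dynamically granting storage-interface access to competing read and write data paths — can be sketched with a simple alternation policy. The abstract only says arbitration is dynamic; the fairness policy below is an assumption.

```python
# Sketch of an arbiter interleaving read-path and write-path requests.

from collections import deque

def arbitrate(read_queue, write_queue):
    """Alternate grants while both queues are busy; drain the survivor."""
    grants, turn = [], "read"
    while read_queue or write_queue:
        queue = read_queue if (turn == "read" and read_queue) else write_queue
        if not queue:                      # other side idle: keep granting
            queue = read_queue or write_queue
        grants.append(queue.popleft())
        turn = "write" if turn == "read" else "read"
    return grants

reads = deque(["R0", "R1", "R2"])
writes = deque(["W0"])
grants = arbitrate(reads, writes)
print(grants)  # ['R0', 'W0', 'R1', 'R2']
```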
-
Publication number: 20200042232Abstract: A semiconductor memory module includes data buffers that exchange first data signals with an external device, nonvolatile memory devices that are respectively connected to the data buffers through data lines, and a controller connected to the data lines. The controller receives an address, a command, and a control signal from the external device, and depending on the address, the command, and the control signal, the controller controls the data buffers through first control lines and controls the nonvolatile memory devices through second control lines.Type: ApplicationFiled: April 22, 2019Publication date: February 6, 2020Inventors: TAESUNG LEE, JUNGHWAN CHOI
-
Publication number: 20200042233Abstract: A buffer circuit includes a primary interface, a secondary interface, and an encoder/decoder circuit. The primary interface is configured to communicate on an n-bit channel, wherein n parallel bits on the n-bit channel are coded using data bit inversion (DBI). The secondary interface is configured to communicate with a plurality of integrated circuit devices on a plurality of m-bit channels, each m-bit channel transmitting m parallel bits without using DBI. And the encoder/decoder circuit is configured to translate data words between the n-bit channel of the primary interface and the plurality of m-bit channels of the secondary interface.Type: ApplicationFiled: August 19, 2019Publication date: February 6, 2020Inventor: Scott C. Best
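Data bit inversion on the n-bit primary channel works roughly like this: if a word would put more than half the wires in the costly state, the complement is sent along with a DBI flag, and the receiver undoes the inversion. The sketch below uses the "minimize ones" variant; DBI can equally minimize transitions between successive words.

```python
# Sketch of DBI encode/decode for an n-bit channel.

N = 8  # channel width in bits (illustrative)

def dbi_encode(word):
    if bin(word).count("1") > N // 2:
        return word ^ ((1 << N) - 1), 1   # inverted payload, DBI bit set
    return word, 0

def dbi_decode(word, dbi_bit):
    return word ^ ((1 << N) - 1) if dbi_bit else word

payload, flag = dbi_encode(0b11101101)    # six ones -> transmitted inverted
print(bin(payload), flag)                 # 0b10010 1
print(dbi_decode(payload, flag) == 0b11101101)  # True
```

This is why the buffer's encoder/decoder sits between the interfaces: the secondary m-bit channels carry the plain words, so the translation must strip or apply DBI at the boundary.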
-
Publication number: 20200042234Abstract: Offload processing may be provided that is not dedicated to a primary processor or a subset of primary processors. A system may have one or more offload processors, for example, GPUs, coupled to data storage slots of the system, which can be shared by multiple primary processors of the system. The offload processor(s) may be housed within a device configured to be coupled to a storage slot, for example, as if the device were a storage drive. The one or more offload processors may be housed within a device that includes an interface in conformance with a version of an NVMe specification and may have a form factor in accordance with the U.2 specification. Offload processing devices may be communicatively coupled to one or more primary processors by switching fabric disposed between the one or more primary processors and the storage slot to which the offload processing device is connected.Type: ApplicationFiled: July 31, 2018Publication date: February 6, 2020Applicant: EMC IP Holding Company LLCInventors: Jon I. Krasner, Jason J. Duquette, Jonathan P. Sprague
-
Publication number: 20200042235Abstract: An aspect of minimizing read amplification IO where metadata is not in RAM includes reading an l_md_page and corresponding lp_md_page from a storage device in a dual distributed layered architecture. The l_md_page specifies a metadata page that persists in an SSD and has logical addresses of metadata, and the lp_md_page associates logical block addresses with corresponding physical locations for the metadata. An aspect further includes reading data for a redundant array of independent disks (RAID) stripe according to an associated physical offset in the lp_md_page, accessing a stripe counter from the lp_md_page, and comparing the stripe counter from the lp_md_page to a stripe counter held in memory. Upon determining the stripe counter from the lp_md_page is the same, an aspect further includes determining the data is valid and reading the data according to the associated physical offset in the lp_md_page while bypassing a data module for the data.Type: ApplicationFiled: August 1, 2018Publication date: February 6, 2020Applicant: EMC IP Holding Company LLCInventors: Zvi Schneider, Amitai Alkalay, Assaf Natanzon
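The stripe-counter check is the crux: only when the counter stored in the lp_md_page matches the in-memory counter is the mapping trusted and the data read directly at its physical offset, bypassing the data module. The structures below are simplified stand-ins, not the actual on-disk layout.

```python
# Sketch of the validity check that lets a read bypass the data module.

def read_stripe(lp_md_page, mem_counters, stripe_id, raid_read, data_module):
    entry = lp_md_page[stripe_id]
    if entry["stripe_counter"] == mem_counters[stripe_id]:
        # Counters match: mapping is valid, read at the physical offset.
        return raid_read(entry["phys_offset"])
    return data_module(stripe_id)          # stale mapping: take the slow path

lp_md_page = {7: {"stripe_counter": 3, "phys_offset": 0x1000}}
mem_counters = {7: 3}
result = read_stripe(lp_md_page, mem_counters, 7,
                     raid_read=lambda off: f"raid@{off:#x}",
                     data_module=lambda sid: "via-data-module")
print(result)  # raid@0x1000
```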
-
Publication number: 20200042236Abstract: Systems and methods disclosed herein provide an I/O prioritization scheme for NVMe-compliant storage devices. Through an interface of an HBA driver, a user specifies a range of LBAs that map to a namespace. The user interface also designates a priority level for the namespace. Once the namespace is created, the HBA driver generates a queue of the designated priority level. The HBA driver also generates a table that maps the queue to the namespace. When the HBA driver receives a request to perform an I/O command that targets the namespace, the HBA driver adds the requested command to the queue. I/O commands targeting the namespace are processed in accordance with the designated priority level by the controller.Type: ApplicationFiled: August 13, 2018Publication date: February 6, 2020Inventor: Sumangala Bannur Subraya
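The scheme above maps an LBA range to a namespace, gives the namespace a priority level, and keeps one queue per namespace so higher-priority commands dispatch first. The sketch below is a toy model with invented names, not a real NVMe HBA driver interface.

```python
# Sketch of per-namespace priority queues for I/O commands.

class PriorityDriver:
    def __init__(self):
        self.namespaces = []   # (lba_start, lba_end, priority, queue)

    def create_namespace(self, lba_start, lba_end, priority):
        # Queue of the designated priority level is created with the namespace.
        self.namespaces.append((lba_start, lba_end, priority, []))

    def submit(self, lba, cmd):
        for start, end, prio, queue in self.namespaces:
            if start <= lba <= end:        # command targets this namespace
                queue.append((prio, cmd))
                return
        raise ValueError("LBA not mapped to any namespace")

    def dispatch(self):
        pending = []
        for _, _, _, queue in self.namespaces:
            pending.extend(queue)
        pending.sort(key=lambda item: item[0])   # lower number = higher priority
        return [cmd for _, cmd in pending]

drv = PriorityDriver()
drv.create_namespace(0, 99, priority=1)     # high-priority namespace
drv.create_namespace(100, 199, priority=3)  # low-priority namespace
drv.submit(150, "read-150")
drv.submit(5, "read-5")
dispatched = drv.dispatch()
print(dispatched)  # ['read-5', 'read-150']
```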
-
Publication number: 20200042237Abstract: A solid state storage device includes a control circuit and a non-volatile memory. The control circuit includes a retry table. In addition, plural retry read-voltage sets are recorded in the retry table, and the retry table is divided into plural retry sub-tables. The plural retry read-voltage sets are classified into plural groups. The plural retry read-voltage sets are recorded into the corresponding retry sub-tables. The non-volatile memory is connected with the control circuit. During a read retry process of a read cycle, the control circuit performs a hard decoding process according to a retry sub-table of the plural retry sub-tables. If the hard decoding process fails, the control circuit performs a soft decoding process according to another retry sub-table of the plural retry sub-tables.Type: ApplicationFiled: October 19, 2018Publication date: February 6, 2020Inventors: Shih-Jia ZENG, Jen-Chien FU, Tsu-Han LU, Hsiao-Chang YEN
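The retry flow above — read-voltage sets grouped into sub-tables, hard decoding tried first against one sub-table, soft decoding against another on failure — can be sketched with stubbed decoders. The voltage values are invented.

```python
# Sketch of the read-retry process over grouped retry sub-tables.

retry_table = {
    "sub0": [(2.1,), (2.3,), (2.5,)],      # voltage sets for hard decoding
    "sub1": [(2.0, 0.1), (2.4, 0.1)],      # voltage sets for soft decoding
}

def read_retry(hard_decode, soft_decode):
    for vset in retry_table["sub0"]:
        ok, data = hard_decode(vset)
        if ok:
            return data
    for vset in retry_table["sub1"]:       # hard decoding failed: go soft
        ok, data = soft_decode(vset)
        if ok:
            return data
    return None                            # read retry exhausted

# Simulate: hard decoding always fails, soft decoding succeeds at 2.4 V.
data = read_retry(hard_decode=lambda v: (False, None),
                  soft_decode=lambda v: (v[0] == 2.4, b"page"))
print(data)  # b'page'
```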
-
Publication number: 20200042238Abstract: A data storage device may include a storage configured to temporarily suspend an operation thereof at a specified time; and a controller configured to schedule an operation resume time as the storage temporarily suspends the operation, and transmit an operation resume signal to the storage according to a result of the scheduling.Type: ApplicationFiled: December 6, 2018Publication date: February 6, 2020Inventors: Hoe Seung JUNG, Jae Hyeong JEONG
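The suspend/resume handshake reduces to: when the storage suspends, the controller schedules a resume time, and once that time arrives it sends the resume signal. The fixed delay and tick-driven model below are invented for illustration.

```python
# Sketch of controller-scheduled resume after a storage self-suspend.

RESUME_DELAY = 5   # time units, illustrative

class Storage:
    def __init__(self):
        self.running = True
    def suspend(self):
        self.running = False
    def resume(self):
        self.running = True

class Controller:
    def __init__(self):
        self.resume_at = None

    def on_suspend(self, now):
        self.resume_at = now + RESUME_DELAY   # schedule the resume

    def tick(self, now, storage):
        if self.resume_at is not None and now >= self.resume_at:
            storage.resume()                  # transmit the resume signal
            self.resume_at = None

ctrl, storage = Controller(), Storage()
storage.suspend()
ctrl.on_suspend(now=0)
for t in range(1, 7):
    ctrl.tick(t, storage)
print(storage.running)  # True
```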