Abstract: A peripheral device may implement storage virtualization for non-volatile storage devices connected to the peripheral device. A host system connected to the peripheral device may host one or multiple virtual machines. The peripheral device may implement different virtual interfaces for the virtual machines or the host system that present a storage partition at a non-volatile storage device to the virtual machine or host system for storage. Access requests from the virtual machines or host system are directed to the respective virtual interface at the peripheral device. The peripheral device may perform data encryption or decryption, or may perform throttling of access requests. The peripheral device may generate and send physical access requests to perform the access requests received via the virtual interfaces to the non-volatile storage devices. Completion of the access requests may be indicated to the virtual machines via the virtual interfaces.
Type:
Grant
Filed:
June 7, 2019
Date of Patent:
February 15, 2022
Assignee:
Amazon Technologies, Inc.
Inventors:
Raviprasad Venkatesha Murthy Mummidi, Matthew Shawn Wilson, Anthony Nicholas Liguori, Nafea Bshara, Saar Gross, Jaspal Kohli
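The abstract above describes per-VM virtual interfaces that expose a storage partition and translate virtual access requests into physical ones. A minimal sketch of that flow, with entirely hypothetical class and method names (the patent does not specify an API):

```python
# Hypothetical sketch: each VM or host gets a virtual interface backed
# by a partition on the peripheral device; virtual reads are bounds-
# checked and translated into physical access requests.

class VirtualInterface:
    def __init__(self, device, partition_base, partition_size):
        self.device = device
        self.base = partition_base
        self.size = partition_size

    def read(self, offset, length):
        # Reject accesses outside the partition presented to this VM,
        # then issue the physical request on its behalf.
        if offset + length > self.size:
            raise ValueError("access outside assigned partition")
        return self.device.physical_read(self.base + offset, length)


class PeripheralDevice:
    def __init__(self, capacity):
        self.storage = bytearray(capacity)  # stand-in for NVM devices

    def attach(self, partition_base, partition_size):
        # One virtual interface per virtual machine or host system.
        return VirtualInterface(self, partition_base, partition_size)

    def physical_read(self, addr, length):
        return bytes(self.storage[addr:addr + length])
```

The sketch only covers the address-translation step; encryption, throttling, and completion signaling from the abstract are omitted.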
Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined as the cold data, the processor moves a second portion of the data to a second storage device, and a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
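The tiering step above can be sketched as follows. The 1/4 split ratio and the dict-based stores are illustrative assumptions; the patent only requires that the portion kept on the faster device be smaller than the portion moved:

```python
# Sketch of the described tiering: when data turns cold, a small
# leading portion stays on the faster device and the larger remainder
# moves to the slower device. The hot_fraction value is an assumption.

def tier_cold_data(data, fast_store, slow_store, key, hot_fraction=0.25):
    """Keep a small first portion on fast storage, move the rest."""
    split = int(len(data) * hot_fraction)
    fast_store[key] = data[:split]   # first (smaller) portion stays
    slow_store[key] = data[split:]   # second (larger) portion moves
    return split
```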
Abstract: Methods and apparatuses for memory device mode selection in a serial memory device are presented. Memory device configuration information may be retrieved in response to a memory device initialization condition, and a configuration register bit mask that is included in the memory device configuration information may then be written to a configuration register of the memory device. A write command that may also be included in the memory device configuration information may be used to write the configuration bit mask to the configuration register. The serial memory device may be a serial flash memory. The configuration register bit mask may include an I/O mode bit setting that indicates enabling the memory to operate in a quad-bit I/O mode or other multi-bit serial I/O mode instead of a single-bit serial I/O mode.
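The register-write step in the abstract above can be sketched as below. The bit position of the quad-I/O enable flag and the register names are hypothetical, not taken from any datasheet:

```python
# Sketch of the mode-selection step: on an initialization condition,
# retrieve the bit mask from the device configuration information and
# write it to the configuration register. Bit position is assumed.

QUAD_IO_ENABLE = 1 << 1  # hypothetical quad-bit I/O mode enable bit

def init_memory_device(config_info, registers):
    """Write the retrieved bit mask into the config register image."""
    mask = config_info["config_register_bit_mask"]
    registers["CONFIG"] = mask
    # Report whether the mask enables quad-bit I/O mode.
    return bool(mask & QUAD_IO_ENABLE)
```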
Abstract: The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request.
Abstract: A data storage device may include: a nonvolatile memory apparatus including a plurality of groups configured by dividing a plurality of planes in interleaving units; and a controller configured to check, when receiving a current read command, whether a group including a read region of the current read command is included in a group involved in a read operation of a previous read command and whether the read region of the current read command extends over two or more groups, and control the nonvolatile memory apparatus to perform cache read or interleaving read based on the check result.
Abstract: In exemplary aspects described herein, system memory is secured using protected memory regions. Portions of a system memory are assigned to endpoint devices, such as peripheral component interconnect express (PCIe) compliant devices. The portions of the system memory can include protected memory regions. The protected memory regions of the system memory assigned to each of the endpoint devices are configured to control access thereto using device identifiers and/or process identifiers, such as a process address space ID (PASID). When a transaction request is received by a device, the memory address included in that request is used to determine whether it corresponds to a protected memory region. If so, the transaction request is executed if the identifiers in the request match the identifiers for which access is allowed to that protected memory region.
Type:
Grant
Filed:
July 31, 2019
Date of Patent:
January 18, 2022
Assignee:
Hewlett Packard Enterprise Development LP
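The access check described in the abstract above can be sketched with a hypothetical region table; the tuple layout and identifier values are assumptions for illustration:

```python
# Sketch of the described check: a transaction hitting a protected
# memory region executes only if its device ID and PASID match the
# identifiers allowed for that region; unprotected addresses pass.

def allow_transaction(regions, addr, device_id, pasid):
    for base, limit, allowed_dev, allowed_pasid in regions:
        if base <= addr < limit:  # request falls in a protected region
            return device_id == allowed_dev and pasid == allowed_pasid
    return True  # address not protected: no restriction applies
```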
Abstract: An apparatus including a control unit, a memory having computer program code, and N groups of storage units electrically connected to the control unit is disclosed. Each of the N groups of storage units has N storage units, each of the N storage units has N storage regions, wherein N is a positive integer. The memory and the computer program code are configured to, with the control unit, cause the apparatus to perform: storing a first data segment into an ith storage region of a first storage unit of a kth group of storage units; storing a fourth data segment into an ith storage region of a first storage unit of a (k+1)th group of storage units; storing a fifth data segment into an ith storage region of a second storage unit of the (k+1)th group of storage units; and storing a sixth data segment into an ith storage region of a third storage unit of the (k+1)th group of storage units.
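The addressing scheme in the abstract above (N groups, each of N units, each with N regions) can be modeled as a 3-D table indexed by (group, unit, region); the function names are illustrative:

```python
# Sketch of the N x N x N layout: storage[group][unit][region].
# Segment placements below follow the pattern given in the abstract.

def make_storage(n):
    return [[[None] * n for _ in range(n)] for _ in range(n)]

def store(storage, group, unit, region, segment):
    storage[group][unit][region] = segment
```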
Abstract: A memory system includes a host controller and a cache system. The host controller includes a host queue in which host data including a command outputted from a host are stored. The cache system includes a cache memory having a plurality of sets and a cache controller controlling an operation of the cache memory. The cache controller transmits status information on a certain set to which the host data are to be transmitted among the plurality of sets to the host controller. The host controller receives the status information from the cache controller to determine transmission or non-transmission of the host data stored in the host queue to the cache system.
Type:
Grant
Filed:
October 25, 2019
Date of Patent:
December 14, 2021
Assignee:
SK hynix Inc.
Inventors:
Seung Gyu Jeong, Jin Woong Suh, Jung Hyun Kwon
Abstract: A computer-implemented method according to one embodiment includes receiving and storing historical data for historical data jobs performed within a data storage system; determining an optimal maintenance time for the data storage system, utilizing the stored historical data; determining a timing in which storage devices within the data storage system are taken offline, utilizing the optimal maintenance time and the stored historical data; and preparing the data storage system for one or more maintenance operations, utilizing the determined timing.
Type:
Grant
Filed:
September 3, 2019
Date of Patent:
December 7, 2021
Assignee:
International Business Machines Corporation
Abstract: The invention provides a data storage system having dual channels, which comprises a host. The host comprises a host-side control unit, a first data storage device, and at least one second data storage device. The first data storage device comprises a first data-side controller. The host-side control unit is connected to the first data storage device via a high-speed channel, and accesses data of the first data storage device via the high-speed channel. The first data storage device is connected to each of the second data storage devices via a low-speed channel, respectively. The low-speed channel is a bus of broadcast type. The first data-side controller of the first data storage device manages data exchanging, data copying, and data moving between the first data storage device and the second data storage device via the low-speed channel.
Abstract: A card engine may dynamically configure content for display via user equipment (UE). A rules engine may provide constructs to the card engine in the form of card definitions, which the card engine may evaluate using facts obtained from a facts controller. The card engine may create a hierarchy of containers, which are logical abstracts for containing cards. The containers in the hierarchy, which may be organized as a tree, may contain card definitions according to respective themes determined by the card engine. Variants may be assigned weights which can be changed dynamically based on factors such as user behavior, account condition, promotions or offerings. The card having the highest weight within its container is advanced up the tree. When a card reaches the top level of the tree, it may be formatted for display via the user interface and transmitted to the UE accordingly.
Type:
Grant
Filed:
April 19, 2019
Date of Patent:
December 7, 2021
Assignee:
T-Mobile USA, Inc.
Inventors:
Jonathan Soini, Tyler Axdorff, Senthil Kumar Mulluppadi Velusamy, Calum Lawler, Mark Hanson
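The variant-selection step in the card-engine abstract above can be sketched as a weight comparison within a container; the dict shape is an assumption, and tree traversal and dynamic reweighting are omitted:

```python
# Sketch: within a container, the card variant with the highest weight
# is the one advanced up the tree toward the display level.

def advance(container):
    """Return the highest-weight card among a container's variants."""
    return max(container, key=lambda card: card["weight"])
```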
Abstract: Apparatuses, systems, methods, and computer program products are disclosed for balanced caching. An input circuit receives a request for data of non-volatile storage. A balancing circuit determines whether to execute a request by directly communicating with one or more of a cache and a non-volatile storage based on a first rate corresponding to the cache and a second rate corresponding to the non-volatile storage. A data access circuit executes a request based on a determination made by a balancing circuit.
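The balancing decision above can be sketched as a comparison of the two rates. Treating each rate as a service rate, where the higher value indicates the less-loaded path, is an interpretive assumption; the abstract does not define the rates further:

```python
# Sketch of the balancing circuit's decision: route the request to the
# cache or directly to non-volatile storage based on the two rates.

def choose_target(cache_rate, nv_rate):
    return "cache" if cache_rate >= nv_rate else "non_volatile"
```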
Abstract: Provided is a process including: initializing a data block matrix; making supra-diagonal nodes that include at most one more node than sub-diagonal nodes; making hash nodes with a hash sequence length that is proportional to a number of nodes in the row or column of nodes in which the hash node is arranged; and writing data blocks in nodes of the data block matrix such that a number of data blocks in nodes in the data block matrix is less than (N²−N) for N number of nodes in the data block matrix, wherein the data block matrix has dispersed data blocks.
Type:
Grant
Filed:
April 29, 2020
Date of Patent:
November 16, 2021
Assignee:
GOVERNMENT OF THE UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY OF COMMERCE
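The capacity bound stated in the abstract above can be checked with a one-line formula. That the excluded N nodes correspond to the matrix diagonal is an assumption consistent with the block-matrix construction described:

```python
# Sketch of the stated bound: for N nodes per row/column, the matrix
# holds fewer than N**2 - N data blocks (diagonal assumed excluded).

def max_blocks(n):
    return n * n - n  # blocks stored strictly off the diagonal

def can_write(current_blocks, n):
    return current_blocks < max_blocks(n)
```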
Abstract: Memory devices, memory systems, and methods of operating memory devices and systems are disclosed in which a memory device can asynchronously indicate to a connected host that information in a mode register has been changed, obviating the need for repeated polling of the information and thereby reducing both command/address bus and data bus bandwidth consumption. In one embodiment, a memory device comprises a memory; a mode register storing information corresponding to the memory; and circuitry configured to, in response to the information in the mode register being modified by the memory device, generate a notification to a connected host device.
Abstract: Devices and techniques are disclosed herein for remapping data of flash memory indexed by logical block addresses (LBAs) of a host device in response to re-map requests received at a flash memory system from the host device or in response to re-map requests generated at the flash memory system.
Abstract: A computer-implemented method, according to one embodiment, is for maintaining heat information of data while in a cache. The computer-implemented method includes: transferring data from non-volatile memory to the cache, such that the data is stored in a first page in the cache. Previous read and/or write heat information associated with the data is maintained by preserving one or more bits in a hash table which correspond to the data in the first page. Moreover, the data is destaged from the first page in the cache to the non-volatile memory, and the one or more bits in the hash table which correspond to the data are updated to reflect current read and/or write heat information associated with the data.
Type:
Grant
Filed:
August 7, 2019
Date of Patent:
October 19, 2021
Assignee:
International Business Machines Corporation
Inventors:
Nikolas Ioannou, Nikolaos Papandreou, Roman Alexander Pletka, Sasa Tomic, Radu Ioan Stoica, Timothy Fisher, Aaron Daniel Fry, Charalampos Pozidis, Andrew D. Walls
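The heat-preservation flow in the abstract above can be sketched with a plain dict standing in for the hash table. The 2-bit counter width and the saturating update are assumptions; the patent only requires that the bits survive staging and be refreshed on destage:

```python
# Sketch: per-data heat bits in a hash table survive the transfer into
# cache and are updated when the page is destaged to non-volatile memory.

HEAT_MAX = 3  # assumed 2-bit read/write heat counter

def stage(heat_table, key):
    # Transfer into cache: preserve any existing heat bits.
    return heat_table.setdefault(key, 0)

def destage(heat_table, key, accesses):
    # Destage to non-volatile memory: refresh bits with current heat.
    heat_table[key] = min(heat_table.get(key, 0) + accesses, HEAT_MAX)
    return heat_table[key]
```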
Abstract: An apparatus is provided for receiving requests from a plurality of processing units, at least some of which may have associated cache storage. A snoop unit implements a cache coherency protocol when a request received by the apparatus identifies a cacheable memory address. Snoop filter storage is provided comprising an N-way set associative storage structure with a plurality of entries. Each entry stores coherence data for an associated address range identifying a memory block, and the coherence data is used to determine which cache storages need to be subjected to a snoop operation when implementing the cache coherency protocol in response to a received request. The snoop filter storage stores coherence data for memory blocks of at least a plurality P of different size granularities, and is organised as a plurality of at least P banks that are accessible in parallel, where each bank has entries within each of the N-ways of the snoop filter storage.
Abstract: A storage system includes a processor configured to request a write operation of first data corresponding to a first logical address and to request a write operation of second data corresponding to a second logical address, a memory module including a nonvolatile memory device configured to store the first data and the second data, and a controller configured to convert the first logical address into a first device logical address and to convert the second logical address into a second device logical address based on the first device logical address and a size of the first data, and a storage device configured to store the first data in the storage device based on the first device logical address, and store the second data in the storage device based on the second device logical address.
Type:
Grant
Filed:
March 10, 2020
Date of Patent:
October 12, 2021
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Seung-Woo Lim, Kyu-Min Park, Do-Han Kim
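The address conversion described in the abstract above reduces to simple arithmetic: the second device logical address is derived from the first device logical address plus the size of the first data. Placing the second extent immediately after the first is the straightforward reading, sketched here with assumed names:

```python
# Sketch of the controller's conversion step: derive the second device
# logical address from the first address and the first data's size.

def convert_second_address(first_dla, first_size):
    return first_dla + first_size
```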
Abstract: A memory system includes a nonvolatile memory device including a plurality of memory blocks and a controller configured to control the nonvolatile memory device. The controller determines, as an available bad block, a memory block having data storage reliability equal to or greater than a first reference value, included in the plurality of memory blocks, determines write data to be stored in the nonvolatile memory device as first data which is required for the memory system to normally operate or second data which does not correspond to the first data, and allocates the write data determined as the second data to the available bad block. The nonvolatile memory device performs a write operation of storing the second data in the available bad block.
Abstract: Embodiments of the present disclosure relate to a memory system, a memory controller, and an operation method. The embodiments receive a plurality of requests for a memory device, determine the number of hit requests and the number of miss requests with respect to the plurality of received requests, and determine whether or not to perform all or some of map data read operations for the respective miss requests in parallel and whether or not to perform all or some of user data read operations for the respective hit requests in parallel, thereby minimizing the time required for processing the plurality of requests.
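The classification step in the abstract above can be sketched as below. A set standing in for the cached map data is an assumption; hits can proceed straight to user-data reads, while misses need a map-data read first, and each side may then be issued in parallel:

```python
# Sketch: split received requests into hits (mapping already cached)
# and misses (map-data read required first), preserving arrival order.

def classify(requests, cached_map):
    hits = [r for r in requests if r in cached_map]
    misses = [r for r in requests if r not in cached_map]
    return hits, misses
```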