STORAGE DEVICE, COMPUTING SYSTEM INCLUDING THE STORAGE DEVICE, AND METHOD OF OPERATING THE STORAGE DEVICE
A storage device includes a storage medium and a controller configured to control the storage medium. The controller includes an interface unit configured to interface with a host, a processing unit connected to the interface unit via a first signal line and configured to process a direct load operation and a direct store operation between the host and the controller, and at least one memory connected to the interface unit via a second signal line. The at least one memory is configured to temporarily store data read from the storage medium or data received from the host, and is configured to be directly accessed by the host.
This application is a continuation of U.S. patent application Ser. No. 14/613,462 filed Feb. 4, 2015, which claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2014-0052972, filed on Apr. 30, 2014, the disclosures of which are incorporated by reference herein in their entireties.
TECHNICAL FIELD
Exemplary embodiments of the present inventive concept relate to a storage device, and more particularly, to a storage device, a computing system including the storage device, and a method of operating the storage device.
DISCUSSION OF THE RELATED ART
Recently, a solid state drive (SSD) using a nonvolatile memory device has been developed as a next generation storage device used in place of a hard disk drive (HDD). An SSD replaces a mechanical configuration of an HDD with a nonvolatile memory device, resulting in a high operating speed and stability while occupying a smaller amount of space.
SUMMARY
According to an exemplary embodiment of the present inventive concept, a storage device includes a storage medium and a controller configured to control the storage medium. The controller includes an interface unit interfacing with a host, a processing unit connected to the interface unit via a first signal line and configured to process a direct load operation or a direct store operation between the host and the controller, and at least one memory connected to the interface unit via a second signal line. The at least one memory temporarily stores data read from the storage medium or data received from the host and is directly accessible by the host.
The interface unit may be a first interface unit interfacing with the host according to a first standardized interface, and the controller may further include a second interface unit interfacing with the first interface unit according to a second standardized interface.
The first signal line may directly connect the first interface unit and the processing unit to each other, not via the second interface unit, and the second signal line may directly connect the first interface unit and the at least one memory to each other, not via the second interface unit.
The first standardized interface may be a peripheral component interconnect express (PCIe) and the second standardized interface may be a nonvolatile memory express (NVMe) or a small computer system interface express (SCSIe).
The processing unit may include at least one processor configured to process the direct load operation or the direct store operation, and at least one tightly coupled memory (TCM) disposed adjacent to the at least one processor and accessible by the at least one processor within a relatively short time period.
The at least one TCM may temporarily store a data transfer command between the controller and the storage medium.
The data transfer command may include a flush command for transferring data temporarily stored in the at least one memory to the storage medium and a fill command for transferring the data stored in the storage medium to the at least one memory.
The at least one TCM may include at least one special function register (SFR) used to perform the direct load operation or the direct store operation.
The at least one memory may include at least one of a first memory temporarily storing raw data read from the storage medium or raw data received from the host, and a second memory temporarily storing metadata relating to the raw data.
The at least one of the first memory and the second memory may include at least one SFR used to perform the direct load operation or the direct store operation.
The interface unit may include a signal transfer unit configured to transmit and receive a signal to and from the host, and an address conversion unit configured to perform an address conversion between an external address of the signal and an internal address suitable for internal communication within the controller.
The controller may include a first bus connected to the interface unit, a second bus connected to the processing unit, and a third bus connected to the at least one memory. The first signal line may be directly connected between the first bus and the second bus, and the second signal line may be directly connected between the first bus and the third bus.
According to an exemplary embodiment of the present inventive concept, a computing system includes a storage device including a storage medium and a controller configured to control the storage medium and including at least one memory, and a host configured to directly access the at least one memory with reference to an address map having an address space corresponding to the at least one memory.
The host may include a processor and a main memory connected to the processor. The processor may directly access the at least one memory, not via the main memory.
The controller may further include an interface unit interfacing with the host, and a processing unit connected to the interface unit via a first signal line and configured to process a direct load operation and a direct store operation between the host and the controller. The at least one memory may be connected to the interface unit via a second signal line, and may temporarily store data read from the storage medium or data received from the host.
The processing unit may include at least one processor configured to process the direct load operation or the direct store operation, and at least one tightly coupled memory (TCM) disposed adjacent to the at least one processor and accessible by the at least one processor within a relatively short time period.
The at least one memory and the at least one TCM may be respectively mapped to address spaces that are exclusive from each other in the address map.
The host may include a plurality of base address registers (BARs) storing the address map. At least one BAR from the plurality of BARs may store address spaces that are exclusive from each other and used to perform the direct load operation and the direct store operation with respect to the at least one memory.
According to an exemplary embodiment of the present inventive concept, a method of operating a storage device including a storage medium and a controller for controlling the storage medium and including at least one memory includes receiving a direct load command instructing a direct load operation to be executed or a direct store command instructing a direct store operation to be executed from a host, and determining a state of the at least one memory for performing the direct load operation or the direct store operation. The at least one memory may be directly accessed by the host, and the direct load operation or the direct store operation may be performed between the host and the at least one memory by loading data temporarily stored in the at least one memory to the host or temporarily storing data received from the host in the at least one memory based on a result of the determining.
The determining of the state of the at least one memory may include, when the direct load command is received from the host, determining whether there is first data corresponding to the direct load command in the at least one memory, and when the direct store command is received from the host, determining whether there is available space for storing second data corresponding to the direct store command in the at least one memory.
The performing of the direct load operation or the direct store operation may include, if the first data is not stored in the at least one memory, performing a fill operation for transmitting the first data from the storage medium to the at least one memory, and when the fill operation is finished, loading the first data to the host.
The performing of the direct load operation or the direct store operation may include, if there is no available space in the at least one memory, performing a flush operation for transmitting data temporarily stored in the at least one memory to the storage medium, and when the flush operation is finished, temporarily storing the second data in the at least one memory.
The method may further include, before receiving the direct load command or the direct store command, storing device information relating to the storage device in the at least one memory so that the host may recognize the device information for performing the direct load operation or the direct store operation.
The storing of the device information in the at least one memory may include loading the device information in the at least one memory from the storage medium according to an operation initiation signal or an initialization command, and writing the device information in the at least one memory by the host.
The controller may include at least one processor, and at least one tightly coupled memory (TCM) disposed adjacent to the at least one processor and accessible by the at least one processor within a relatively short time period. The at least one memory and the at least one TCM are respectively mapped to address spaces that are exclusive from each other in the address map.
According to an exemplary embodiment of the present inventive concept, a computing system includes a storage device including a storage medium and a controller configured to control the storage medium. The controller includes a processing unit and at least one memory. The computing system further includes a host including a processor and a main memory connected to the processor. The processor is configured to directly access the at least one memory without accessing the main memory.
The above and other features of the present inventive concept will become more apparent by describing in detail exemplary embodiments thereof with reference to the accompanying drawings.
Exemplary embodiments of the present inventive concept will be described more fully hereinafter with reference to the accompanying drawings. Like reference numerals may refer to like elements throughout the accompanying drawings.
While such terms as “first,” “second,” etc., may be used to describe various components, such components are not limited by these terms. The above terms are used only to distinguish one component from another. For example, a first element may be designated as a second element, and similarly, a second element may be designated as a first element without departing from the teachings of the present inventive concept.
Referring to
The host 20 may be, for example, a user device such as a personal/portable computer (e.g., a desktop computer, a laptop computer, a smartphone, a tablet computer, etc.), a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, etc.
The storage medium 200 may be, for example, a nonvolatile memory device having a large storage capacity. In an exemplary embodiment, the storage medium 200 may include a plurality of nonvolatile memory devices. In this case, each of the nonvolatile memory devices may be connected to the controller 100 by a channel unit.
In exemplary embodiments of the present inventive concept, the storage medium 200 may include a NAND-type flash memory. The NAND-type flash memory may be, for example, a three-dimensional (3D) vertical NAND-type flash memory or a NAND-type flash memory having a two-dimensional (2D) horizontal structure. However, the storage medium 200 is not limited thereto. For example, the storage medium 200 may be a resistive memory such as a resistive random access memory (RRAM), a phase change RAM (PRAM), or a magnetic RAM (MRAM).
The computing system 1 may perform various computing operations according to a load command and a store command. For example, in an exemplary embodiment, the load command may instruct the computing system 1 to read data from the memory 120 and store the data in a certain register in the processor 21, and the store command may instruct the computing system 1 to record a value included in a certain register of the processor 21 in the memory 120.
According to a normal load/store (NLS) operation, the processor 21 may access the main memory 22 in the host 20. For example, in the normal load operation, the processor 21 loads data from the main memory 22, and in the normal store operation, the processor 21 may temporarily store the data in the main memory 22. The NLS operation may be performed by, for example, a memory management unit included in the host 20.
According to a normal read operation (NRD) and a normal write operation (NWR), using the NLS operation, the data may be transmitted between the processor 21 and the main memory 22 and between the main memory 22 and the storage device 10. The data transfer between the main memory 22 and the storage device 10 may be performed by the controller 100.
Referring to the NRD operation, the data is transferred from the storage device 10 to the main memory 22 and temporarily stored in the main memory 22, and the data temporarily stored in the main memory 22 may then be transferred to the processor 21. Referring to the NWR operation, the data is temporarily stored in the main memory 22 first, and the data temporarily stored in the main memory 22 is then transferred to the storage device 10 to be stored in the storage device 10. Accordingly, it may take a relatively long amount of time to perform the reading/writing of the data using the NRD/NWR operations.
According to a direct load/store (DLS) operation in accordance with exemplary embodiments of the present inventive concept, the processor 21 may directly access the storage device 10 without accessing the main memory 22 (e.g., without passing through the main memory 22). For example, the processor 21 may directly access at least one memory 120 included in the controller 100 of the storage device 10.
Referring to the direct load operation, the processor 21 may load data from the at least one memory 120, and in the direct store operation, the processor 21 may temporarily store the data in the at least one memory 120. The DLS operations may be performed by, for example, a device driver included in the host 20 and the controller 100 included in the storage device 10.
The at least one memory 120 included in the storage device 10 may act as the main memory 22 of the host 20, and accordingly, performance degradation caused by a limitation in the capacity of the main memory 22 may be reduced or prevented. Also, according to direct read/write operations performed using the DLS operations, the amount of time taken to read/write data may be relatively short. As a result, the operating speed of the computing system 1 may be increased.
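Purely as an illustration (and not part of the disclosed embodiments), the following C sketch shows one way a host-side program on a Linux-like system could map a device memory region exposed through a PCIe BAR and then perform direct load and direct store accesses to it with ordinary load/store instructions. The device path, BAR index, and window size are hypothetical.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical sysfs resource file exposing a BAR of the storage device. */
#define BAR_RESOURCE_PATH "/sys/bus/pci/devices/0000:01:00.0/resource4"
#define MAP_SIZE (4u * 1024u * 1024u) /* size of the mapped window (assumed) */

int main(void)
{
    int fd = open(BAR_RESOURCE_PATH, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the controller-side memory (e.g., the memory 120) into the host address space. */
    volatile uint32_t *dev_mem =
        mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (dev_mem == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Direct store: the processor writes a register value straight into the
     * device-side memory, without staging it in the host main memory. */
    dev_mem[0] = 0xCAFEBABEu;

    /* Direct load: the processor reads the device-side memory into a register. */
    uint32_t value = dev_mem[0];
    printf("loaded 0x%08x\n", (unsigned int)value);

    munmap((void *)dev_mem, MAP_SIZE);
    close(fd);
    return 0;
}
```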
Referring to
The interface unit 130 may interface with the host 20. For example, the interface unit 130 may interface with the host 20 according to a first standardized interface. The first standardized interface may be, for example, a peripheral component interconnect express (PCIe). However, the first standardized interface is not limited thereto. For example, the first standardized interface may be a universal serial bus (USB), small computer system interface (SCSI), SCSI express (SCSIe), peripheral component interconnect (PCI), advanced technology attachment (ATA), parallel ATA (PATA), serial ATA (SATA), serial attached SCSI (SAS), enhanced small device interface (ESDI), or integrated drive electronics (IDE).
The processing unit 110 may be connected to the interface unit 130 via a first signal line SL1, and may perform the direct load operation or the direct store operation between the host 20 and the controller 100A. In addition, the processing unit 110 may control overall operations of the controller 100A.
The at least one memory 120 may be connected to the interface unit 130 via a second signal line SL2, and may temporarily store the data read from the storage medium 200 or the data transferred from the host 20. In an exemplary embodiment, the at least one memory 120 may be directly accessed by the host 20. The at least one memory 120 is also connected to the processing unit 110 to temporarily store data according to control of the processing unit 110.
The first and second signal lines SL1 and SL2 may be referred to herein as buses, metal lines, or data/signal paths. Each of the first and second signal lines SL1 and SL2 may communicate bi-directionally.
Referring to
The first interface unit 130 may interface with the host 20. For example, the first interface unit 130 may interface with the host 20 according to the first standardized interface (e.g., PCIe), as described above. However, as described above, the first standardized interface is not limited thereto.
The second interface unit 140 may interface with the first interface unit 130. For example, the second interface unit 140 may interface with the first interface unit 130 according to a second standardized interface. The second standardized interface may be, for example, Nonvolatile Memory Express (NVMe), Nonvolatile Memory Host Controller Interface Specification (NVMHCI), or Small Computer System Interface Express (SCSIe). However, the second standardized interface is not limited thereto, and may be another type of interface.
The controller 100B may further include a first bus BUS1, a second bus BUS2, and a third bus BUS3. The first bus BUS1 is connected to the first interface unit 130 and provides a communication path between the first interface unit 130 and the other components. The second bus BUS2 is connected to the processing unit 110 and provides a communication path between the processing unit 110 and the other components. The third bus BUS3 is connected to the at least one memory 120 and provides a communication path between the at least one memory 120 and the other components.
The first through third buses BUS1, BUS2, and BUS3 may be implemented as, for example, a network interface card (NIC) or a bus matrix. For example, the bus matrix may be an Advanced eXtensible Interface (AXI) interconnect of Advanced Microcontroller Bus Architecture 3 (AMBA3). The AXI interconnect is a bus matrix structure having a plurality of channels, and may connect a plurality of bus masters and a plurality of bus slaves to the plurality of channels at the same time using a multiplexer and a demultiplexer.
The first interface unit 130 may interface with the host 20, and the second interface unit 140 may interface with the first interface unit 130. Accordingly, some of the signals output from the second interface unit 140 may be transferred to the processing unit 110 via the second bus BUS2, and the other signals may be transferred to the at least one memory 120 via the third bus BUS3. As described above, if the signals generated by the host 20 are transferred to the processing unit 110 and the at least one memory 120 via the first interface unit 130 and the second interface unit 140, it may take a relatively long time to transfer the signals, and thus, the operating speed of the computing system 1 may be reduced.
According to an exemplary embodiment, the first signal line SL1 may be directly connected between the first bus BUS1 and the second bus BUS2, and the second signal line SL2 may be directly connected between the first bus BUS1 and the third bus BUS3. Accordingly, the signals generated by the host 20 may be transferred from the first interface unit 130 to the processing unit 110 and the at least one memory 120 without passing through the second interface unit 140. Accordingly, the time taken to transfer the signals may be reduced and the operating speed of the computing system 1 may be improved. Therefore, a memory accessing speed between the host 20 and the storage device 10 may be increased.
Referring to
In the exemplary embodiment shown in
The at least one processor 112 may perform the direct load operation or the direct store operation described above. The at least one processor 112 may be referred to herein as a central processing unit (CPU). In the exemplary embodiment shown in
The at least one TCM 114 may be disposed adjacent to the at least one processor 112 and may be accessed by the at least one processor 112 within a relatively short amount of time, for example, within one cycle time or a few cycle times. For example, each TCM 114 may be connected to a corresponding processor 112 via a dedicated channel and may act as a dedicated memory of the corresponding processor 112. Herein, when the at least one TCM 114 is described as being disposed adjacent to the at least one processor 112, it is to be understood that the at least one TCM 114 is disposed directly next to or near the at least one processor 112. For example, there may be no other components disposed between the at least one TCM 114 and the at least one processor 112. Further, the at least one TCM 114 and the at least one processor 112 may be directly connected to each other via a dedicated connection.
In an exemplary embodiment, the at least one TCM 114 may store a data transfer command that is transmitted between the controller and the storage medium 200. The data transfer command may include, for example, a flush command for transferring the data temporarily stored in the at least one memory 120 to the storage medium 200, and a fill command for transferring the data stored in the storage medium 200 to the at least one memory 120. The flush command and the fill command are described in further detail with reference to
Referring to
Some of the components included in the processing unit 110B of the exemplary embodiment shown in
The at least one SFR 116 may be used to perform the direct load operation or the direct store operation described above. For example, the at least one SFR 116 may store a doorbell representing initiation of the direct load operation or the direct store operation. The doorbell is described in further detail below. The at least one SFR 116 will be described in further detail with reference to
Referring to
Some of the components included in the processing unit 110C shown in
The first processing unit 110a may include at least one host processor 112a (HCPU) and at least one TCM 114. The first processing unit 110a may process various signals transmitted/received to/from the host 20. The second processing unit 110b may include at least one processor 112b (FCPU) and at least one TCM 114. The second processing unit 110b may process various signals transmitted/received to/from the storage medium 200.
Referring to
The first memory 122 may temporarily store raw data read from the storage medium 200 or raw data received from the host 20. The raw data may be stored in a page unit, and may be referred to as page data. In an exemplary embodiment, the first memory 122 may be a dynamic RAM (DRAM). For example, the first memory 122 may be a DRAM of 4 MB, and accordingly, if a page has a size of 4 KB, the first memory 122 may store 1024 pages. However, the sizes of the first memory 122 and the page are not limited thereto.
The second memory 124 may temporarily store metadata of the raw data that is temporarily stored in the first memory 122. The metadata may be information relating to the pages, and may be referred to as page information. The page information may include, for example, DRAM related information, bitmap information, page to logical block address (LBA) mapping information, or partition information. In an exemplary embodiment, the second memory 124 may be a static RAM (SRAM), for example, an SRAM of 128 KB. However, the size of the second memory 124 is not limited thereto.
Referring to
Referring to
Referring to
The plurality of BARs 23 are registers in which base address values are stored when executing a program. An absolute address may be found by adding a relative address value to the base address value. For example, the host 20A may include six BARs (e.g., BAR0 through BAR5), and BAR4 and BAR5 from among the six BARs may store addresses allocated with respect to the memories included in the storage device 10A. However, the number of BARs is not limited thereto.
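As a brief illustration of the base-plus-offset relationship described above, the following minimal C sketch computes an absolute address from a BAR base value and a relative offset; the example values are arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* Absolute address = base address stored in a BAR + relative offset. */
static uint64_t absolute_address(uint64_t bar_base, uint64_t relative_offset)
{
    return bar_base + relative_offset;
}

int main(void)
{
    uint64_t bar4_base = 0xF0000000u;        /* arbitrary example base value */
    uint64_t page_offset = 0x000E0000u;      /* arbitrary example offset     */
    printf("absolute = 0x%llx\n",
           (unsigned long long)absolute_address(bar4_base, page_offset));
    return 0;
}
```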
The storage device 10A may include a controller 100C and the storage medium 200. The controller 100C may include a processing unit 110D and a memory 120A. The processing unit 110D may include at least one SFR 116. The memory 120A may include, for example, the DRAM 122 and the SRAM 124 shown in
Referring to
The first address map AM1 is an address space that is allocated to the memories that the processor 21 may access. The first address map AM1 may be stored in the host 20A, and the processor 21 may access the memories with reference to the first address map AM1. The first address map AM1 shown in
Referring to
The plurality of BARs 23 are registers in which base address values are stored when executing a program. An absolute address may be found by adding a relative address value to the base address value. For example, the host 20A may include six BARs 23 (e.g., BAR0 through BAR5), and BAR5 from among the six BARs 23 may store the address spaces allocated to the memories included in the storage device 10B.
The storage device 10B may include a controller 100D and the storage medium 200. The controller 100D may include a processing unit 110E and the memory 120A. The processing unit 110E may include at least one processor 112, at least one TCM 114, and at least one SFR 116. The memory 120A may include the DRAM 122 and the SRAM 124.
Referring to
The SFR space SP1, the SRAM space SP2, the DRAM space SP3, and the TCM space SP4 may not overlap with each other. For example, these spaces may be allocated exclusively from each other. The processor 21 may directly access the SFR 116, the SRAM 124, the DRAM 122, and the TCM 114 included in the storage device 10B with reference to the second address map AM2.
The TCM space SP4 may include a first TCM space SP4a, a second TCM space SP4b, and a third TCM space SP4c. The TCM space SP4 may further include TCM spaces allocated respectively to other TCMs. Thus, the number of sub-spaces included in the TCM space SP4 is not limited to three.
The first TCM space SP4a is an address space allocated to a first TCM (TCM0) that is connected to a first processor CPU0 via a dedicated channel, the second TCM space SP4b is an address space allocated to the second TCM (TCM1) that is connected to a second processor CPU1 via a dedicated channel, and a third TCM space SP4c is an address space allocated to a third TCM (TCM2) that is connected to a third processor CPU2 via a dedicated channel.
As described above, the TCM space SP4 may be divided into a plurality of sub-spaces (e.g., SP4a, SP4b, and SP4c) according to the number of the plurality of TCMs included in the processing unit 110E. The plurality of sub-spaces (e.g., SP4a, SP4b, and SP4c) may not overlap with each other and may be allocated exclusively from each other.
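A minimal sketch, assuming hypothetical base addresses and sizes, of how an address map with mutually exclusive SFR, SRAM, DRAM, and per-TCM spaces could be represented and verified for non-overlap; none of the numeric values below come from the exemplary embodiments.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One entry of a host-visible address map (e.g., the second address map AM2). */
struct map_entry {
    const char *name;
    uint64_t    base;   /* offset within the BAR window (hypothetical) */
    uint64_t    size;   /* size of the space (hypothetical)            */
};

/* Hypothetical layout: SFR, SRAM, DRAM, and three TCM sub-spaces. */
static const struct map_entry address_map[] = {
    { "SFR  (SP1)",  0x00000000u, 0x00001000u },
    { "SRAM (SP2)",  0x00001000u, 0x00020000u },
    { "DRAM (SP3)",  0x00021000u, 0x00400000u },
    { "TCM0 (SP4a)", 0x00421000u, 0x00010000u },
    { "TCM1 (SP4b)", 0x00431000u, 0x00010000u },
    { "TCM2 (SP4c)", 0x00441000u, 0x00010000u },
};

/* The spaces are allocated exclusively from each other: no two entries overlap. */
static bool map_is_exclusive(const struct map_entry *m, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++) {
            uint64_t end_i = m[i].base + m[i].size;
            uint64_t end_j = m[j].base + m[j].size;
            if (m[i].base < end_j && m[j].base < end_i)
                return false; /* ranges overlap */
        }
    return true;
}

int main(void)
{
    size_t n = sizeof(address_map) / sizeof(address_map[0]);
    printf("address map exclusive: %s\n",
           map_is_exclusive(address_map, n) ? "yes" : "no");
    return 0;
}
```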
Referring to
The plurality of BARs 23 are registers in which base address values are stored when executing a program. An absolute address may be found by adding a relative address value to the base address value. For example, the host 20A may include six BARs 23 (e.g., BAR0 through BAR5), and BAR5 from among the six BARs 23 may store the address spaces allocated to the memories included in the storage device 10C.
The storage device 10C may include a controller 100E and the storage medium 200. The controller 100E may include the processing unit 110B and the memory 120A. The processing unit 110B may include at least one processor 112 and at least one TCM 114′. The at least one TCM 114′ may include at least one SFR 116. The memory 120A may include the DRAM 122 and the SRAM 124.
Referring to
The SRAM space SP2, the DRAM space SP3, and the TCM space SP4 may not overlap with each other and may be allocated exclusively from each other. The processor 21 may directly access the SRAM 124, the DRAM 122, the TCM 114′, and the SFR 116 included in the storage device 10C with reference to the third address map AM3.
The TCM space SP4 may be divided into a plurality of sub-spaces including, for example, the first TCM space SP4a, the second TCM space SP4b, and the third TCM space SP4c. At least one of the plurality of sub-spaces, for example, the first TCM space SP4a, may be partially allocated as an SFR space. The number of sub-spaces of the TCM space SP4 is not limited thereto.
Referring to
Context of the host 20 may have, for example, logical block addressing (LBA) information. The context of the host 20 may be mapped to the TCM space or the SRAM space of the controller 100. The data may be mapped to the DRAM space of the controller 100. The data mapped to the DRAM space may be mapped to the storage medium 200 using a page table stored in the TCM space or the SRAM space.
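The following C sketch illustrates, under assumed data structures, how a page-to-LBA map kept in the TCM or SRAM space could relate a DRAM page of the controller to a location in the storage medium; the structure and field names (page_map_entry, valid, dirty) are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DATA_PAGES 1024u   /* e.g., 1024 pages of 4 KB in the DRAM data area */

/* Hypothetical page map entry kept in the TCM or SRAM space:
 * DRAM page index -> logical block address (LBA) in the storage medium. */
struct page_map_entry {
    uint32_t lba;    /* LBA backing this DRAM page                 */
    uint8_t  valid;  /* page currently holds data from the medium  */
    uint8_t  dirty;  /* page must be flushed before being reused   */
};

static struct page_map_entry page_map_table[NUM_DATA_PAGES];

/* Translate a DRAM page index into the LBA it is mapped to.
 * Returns 0 on success, -1 if the page is not currently mapped. */
static int page_to_lba(uint32_t page_index, uint32_t *lba_out)
{
    if (page_index >= NUM_DATA_PAGES || !page_map_table[page_index].valid)
        return -1;
    *lba_out = page_map_table[page_index].lba;
    return 0;
}

int main(void)
{
    /* Example: map DRAM page 3 to LBA 0x1234 and look it up. */
    page_map_table[3] = (struct page_map_entry){ .lba = 0x1234, .valid = 1, .dirty = 0 };
    uint32_t lba;
    if (page_to_lba(3, &lba) == 0)
        printf("page 3 -> LBA 0x%x\n", (unsigned int)lba);
    return 0;
}
```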
Referring to
A size of each of the data pages may be, for example, 4 KB, and the DRAM may include, for example, 1024 pages. Accordingly, the data area DA may have a capacity of about 4 MB. The information area IA may be about 14 KB. The information area IA may include, for example, a table representing DRAM information, bitmap information, page map information, and partition layout information. The sizes of the data pages, the data area DA, and the information area IA are not limited thereto.
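As a compact restatement of the example sizes given above, the following C fragment encodes the layout (4 KB pages, 1024 data pages, a 4 MB data area, and an information area of about 14 KB) and checks the arithmetic at compile time; the bitmap sizing shown is an assumption about how the information area might be organized.

```c
#include <assert.h>

/* Example DRAM layout from the description (sizes are examples, not limits). */
#define PAGE_SIZE        (4u * 1024u)                 /* 4 KB per data page    */
#define NUM_DATA_PAGES   1024u                        /* pages in the DRAM     */
#define DATA_AREA_SIZE   (PAGE_SIZE * NUM_DATA_PAGES) /* 4 MB data area (DA)   */
#define INFO_AREA_SIZE   (14u * 1024u)                /* ~14 KB info area (IA) */

/* 1024 pages tracked by a bitmap need 1024 bits = 128 bytes (assumed layout). */
#define PAGE_BITMAP_SIZE (NUM_DATA_PAGES / 8u)

static_assert(DATA_AREA_SIZE == 4u * 1024u * 1024u, "data area should be 4 MB");
static_assert(PAGE_BITMAP_SIZE == 128u, "bitmap for 1024 pages is 128 bytes");
```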
Referring to
The signal transfer unit 132 may transmit/receive a signal to/from the host 20. In an exemplary embodiment, the interface unit 130A may interface with the host 20 according to PCIe, and the signal transfer unit 132 may receive a signal from the host 20 via a PCIe bus and may provide the signal with electrical and mechanical interfacing. However, the interface type is not limited to PCIe, and may include other standardized interfaces, as described above. The signal transfer unit 132 may transfer a signal generated by the controller 100A to the host 20 via the PCIe bus, and may provide the signal that is to be transferred via the electrical and mechanical interfacing.
The signal transfer unit 132 may be formed of, for example, a physical layer (PHY), and may be referred to as a PHY core or a port.
The address conversion unit 134 may perform address conversion between a host address space and a controller address space. For example, the address conversion unit 134 may convert an external address ADDR_EX of the signal received from the host 20 to an internal address ADDR_IN that is suitable for internal communication within the controller 100A. The address conversion unit 134 may convert the internal address ADDR_IN to the external address ADDR_EX. Operations of the address conversion unit 134 will be described with reference to
Referring to
The first and second signal transfer units 132 and 136 may transmit/receive signals to/from the host 20, respectively. The first and second signal transfer units 132 and 136 may be implemented as separate ports to process the signals in parallel. As a result, a high speed access operation between the host 20 and the storage device 10 may be implemented.
Referring to
The external address ADDR_EX is an address according to a standard interface between the host 20 and the storage device 10, for example, the PCIe standard, and may be referred to as a host address. However, the standard interface is not limited to PCIe. The external address ADDR_EX may be, for example, 64 bits; however, the size of the external address ADDR_EX is not limited thereto. The internal address ADDR_IN is an address suitable for the internal communication in the controller 100A, and may be referred to as a controller address. The internal address ADDR_IN may be, for example, 32 bits; however, the size of the internal address ADDR_IN is not limited thereto.
The address conversion unit 134 may convert the external address ADDR_EX of the first TCM (TCM0), 0x0000_0000, to the internal address ADDR_IN 0x4080_0000. Further, the address conversion unit 134 may convert the external address ADDR_EX of the DRAM, 0x000e_0000, to the internal address ADDR_IN 0x4780_0000.
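A minimal, software-form sketch of the kind of table-driven translation the address conversion unit 134 could perform, using the two example mappings given above (TCM0: external 0x0000_0000 to internal 0x4080_0000; DRAM: external 0x000e_0000 to internal 0x4780_0000). The window sizes are assumptions, and in the exemplary embodiments this conversion would typically be performed in hardware rather than in C code.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One translation window: [ext_base, ext_base + size) -> [int_base, ...). */
struct xlat_window {
    uint64_t ext_base;  /* external (host/PCIe) address, e.g., 64 bits  */
    uint32_t int_base;  /* internal (controller) address, e.g., 32 bits */
    uint32_t size;      /* window size (assumed)                        */
};

/* Example mappings from the description; window sizes are hypothetical. */
static const struct xlat_window windows[] = {
    { 0x00000000u, 0x40800000u, 0x00010000u }, /* TCM0 */
    { 0x000e0000u, 0x47800000u, 0x00400000u }, /* DRAM */
};

/* Convert an external address to an internal address.
 * Returns 0 on success, -1 if the address does not fall in any window. */
static int ext_to_int(uint64_t ext, uint32_t *int_out)
{
    for (size_t i = 0; i < sizeof(windows) / sizeof(windows[0]); i++) {
        const struct xlat_window *w = &windows[i];
        if (ext >= w->ext_base && ext < w->ext_base + w->size) {
            *int_out = w->int_base + (uint32_t)(ext - w->ext_base);
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    uint32_t internal;
    if (ext_to_int(0x000e0000u, &internal) == 0)
        printf("external 0x000e0000 -> internal 0x%08x\n", (unsigned int)internal);
    return 0;
}
```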
Referring to
The second interface unit 140′ may selectively activate one of the plurality of sub-interface units 142 and 144 to interface with the first interface unit 130. The second interface unit 140′ may further include a plurality of multiplexers and a plurality of demultiplexers, and may selectively activate one of the plurality of sub-interface units 142 and 144 according to a selection signal applied from an external source.
In an exemplary embodiment, the first sub-interface unit 142 may interface with the first interface unit 130 according to SCSIe, and the second sub-interface unit 144 may interface with the first interface unit 130 according to NVMe. However, exemplary embodiments are not limited thereto. For example, the first and second sub-interface units 142 and 144 may interface with the first interface unit 130 according to other standardized interfaces.
The second interface unit 140′ may further include a buffer memory (e.g., an SRAM), and the first and second sub-interface units 142 and 144 may share the buffer memory. The second interface unit 140′ may further include a bus for communication between the first and second sub-interface units 142 and 144 and the buffer memory.
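The selection among sub-interface units could be modeled in configuration code roughly as in the short C sketch below; the register name and encoding are hypothetical.

```c
#include <stdint.h>

/* Hypothetical selection values for the second interface unit 140'. */
enum sub_interface {
    SUB_IF_SCSIE = 0,  /* first sub-interface unit 142  */
    SUB_IF_NVME  = 1,  /* second sub-interface unit 144 */
};

/* Hypothetical register through which the selection signal is applied. */
static volatile uint32_t sub_if_select_reg;

/* Activate exactly one of the sub-interface units according to the
 * externally applied selection signal. */
static void select_sub_interface(enum sub_interface sel)
{
    sub_if_select_reg = (uint32_t)sel;
}
```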
Referring to
In operation S120, a storage device may receive a direct load command or a direct store command from a host. For example, the storage device 10 may receive the direct load command or the direct store command from the host 20. The direct load command instructs the storage device 10 to perform a direct load operation, and the direct store command instructs the storage device 10 to perform a direct store operation.
In operation S140, a status of the memory is determined. For example, the processing unit 110 may determine a status of the at least one memory 120. For example, the processing unit 110 determines whether the at least one memory 120 is in a status suitable for executing the direct load operation or the direct store operation.
In operation S160, the direct load operation or the direct store operation is performed. For example, data that is temporarily stored in the at least one memory 120 is loaded to the host 20 or data received from the host 20 is temporarily stored in the at least one memory 120 based on a determination result to perform the direct load operation or the direct store operation between the host 20 and the at least one memory 120.
Referring to
In operation S110, device information is stored in a memory. The device information may include, for example, DRAM information, DRAM capacity, DRAM version, a page size, the number of information pages, the number of data pages, an LBA size, start LBA, end LBA, bitmap offset, page map table offset, partition layout offset, and padding.
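Purely for illustration, the device information listed above could be laid out as a structure in the information area of the memory; the field widths and ordering below are assumptions.

```c
#include <stdint.h>

/* Hypothetical layout of the device information stored in the memory 120
 * so that the host can recognize it before issuing direct load/store
 * commands. Field widths and ordering are assumptions. */
struct device_info {
    uint32_t dram_capacity;     /* capacity of the DRAM, in bytes       */
    uint32_t dram_version;      /* DRAM/layout version identifier       */
    uint32_t page_size;         /* e.g., 4096 bytes                     */
    uint32_t num_info_pages;    /* number of information pages          */
    uint32_t num_data_pages;    /* number of data pages, e.g., 1024     */
    uint32_t lba_size;          /* e.g., 512 or 4096 bytes              */
    uint64_t start_lba;         /* first LBA covered by the device      */
    uint64_t end_lba;           /* last LBA covered by the device       */
    uint32_t bitmap_offset;     /* offset of the page bitmap in the IA  */
    uint32_t page_map_offset;   /* offset of the page map table         */
    uint32_t partition_offset;  /* offset of the partition layout       */
    uint8_t  padding[4];        /* padding for alignment                */
};
```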
Referring to
The device information stored in the storage medium 200 may be transferred or copied to the information area IA (see
The information area IA in the memory 120 is allocated as an exclusive area in the address map, and may be an area agreed upon in advance between the host driver and the firmware. The device information copied from the storage medium 200 may have a default value, which may be changed by an explicit command generated by the host 20.
Referring to
Referring back to
In operation S1420, it is determined whether a received command is a direct load command. For example, the processing unit 110 may determine whether the command received from the host 20 is a direct load command. If the command is a direct load command, operation S1440 is performed. If the command is not a direct load command (e.g., if the command is a direct store command), operation S1460 is performed.
In operation S1440, it is determined whether there is first data in the memory. The first data is data corresponding to the direct load command and data that is to be loaded by the host 20. For example, the processing unit 110 or a memory manager may determine whether the first data is in the memory 120. If the first data is not in the memory 120, operation S1620 is performed. If the first data is in the memory 120, operation S1640 is performed.
In operation S1620, the first data is transferred from the storage medium to the memory 120. This operation may be referred to as a fill operation. For example, the storage medium 200 may transfer or copy the first data to the memory 120, and the memory 120 may then temporarily store the first data.
In operation S1640, the first data is loaded to the host. For example, the first data temporarily stored in the memory 120 may be loaded to the host 20. Thus, the execution of the direct load operation may be completed.
In operation S1460, it is determined whether there is available space in the memory. The available space is used to store second data corresponding to the direct store command, the second data being the data that the host 20 is to store. For example, the processing unit 110 or the memory manager may determine whether there is available space in the memory 120. If there is no available space in the memory 120, operation S1660 is performed. If there is available space in the memory 120, operation S1680 is performed.
In operation S1660, the data temporarily stored in the memory is transferred to the storage medium. This operation may be referred to as a flush operation. For example, the memory 120 may transfer the temporarily stored data to the storage medium 200, and the memory 120 may then secure available space.
In operation S1680, the second data may be stored in the memory. For example, the host 20 may store the second data in the memory 120, and execution of the direct store operation may be completed.
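The decision flow of operations S1420 through S1680 can be summarized, purely for illustration, by the following C sketch; the helper functions (data_present_in_memory, space_available, fill_from_medium, flush_to_medium, load_to_host, store_from_host) are hypothetical placeholders for the corresponding controller actions.

```c
#include <stdbool.h>
#include <stdint.h>

enum dls_command { CMD_DIRECT_LOAD, CMD_DIRECT_STORE };

/* Hypothetical placeholders for hardware/firmware actions. */
static bool data_present_in_memory(uint32_t page) { (void)page; return false; }
static bool space_available(void)                 { return false; }
static void fill_from_medium(uint32_t page)       { (void)page; }  /* S1620 */
static void flush_to_medium(void)                 { }              /* S1660 */
static void load_to_host(uint32_t page)           { (void)page; }  /* S1640 */
static void store_from_host(uint32_t page)        { (void)page; }  /* S1680 */

/* Operations S1420 through S1680: determine the command type, check the
 * state of the memory, and perform a fill or flush operation if needed
 * before completing the direct load or direct store operation. */
static void perform_dls(enum dls_command cmd, uint32_t page)
{
    if (cmd == CMD_DIRECT_LOAD) {                 /* S1420: direct load?    */
        if (!data_present_in_memory(page))        /* S1440: first data?     */
            fill_from_medium(page);               /* S1620: fill operation  */
        load_to_host(page);                       /* S1640: load to host    */
    } else {                                      /* direct store           */
        if (!space_available())                   /* S1460: space in memory */
            flush_to_medium();                    /* S1660: flush operation */
        store_from_host(page);                    /* S1680: store in memory */
    }
}

int main(void)
{
    perform_dls(CMD_DIRECT_LOAD, 0);   /* example invocations */
    perform_dls(CMD_DIRECT_STORE, 0);
    return 0;
}
```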
Referring to
First, the host 20 transmits a direct load command DL_CMD to the controller 100. The memory 120 of the controller 100 stores first data DATA1 corresponding to the direct load command DL_CMD (e.g., available data). Next, the controller 100 transfers the first data DATA1 to the host 20, and execution of the direct load operation may be completed.
Referring to
First, the host 20 transmits a direct load command DL_CMD to the controller 100. The memory 120 of the controller 100 does not store the first data DATA1 corresponding to the direct load command DL_CMD (e.g., available data).
In addition, the controller 100 transmits a fill command FILL_CMD to the storage medium 200. The fill command FILL_CMD may be generated by the host 20. The storage medium 200 includes the first data DATA1 corresponding to the direct load command DL_CMD (e.g., the available data).
In addition, the storage medium 200 transfers or copies the first data DATA1 to the controller 100. As a result, the memory 120 may temporarily store the first data DATA1. Next, the controller 100 transmits the first data DATA1 to the host 20, and execution of the direct load operation may be completed.
Referring to
First, the host 20 sets bits in a fill operation bitmap fill_op_bitmap. Next, the controller 100 notifies the firmware in the processing unit 110 of the arrival of the fill command FILL_CMD.
Then, the firmware reads the fill operation bitmap fill_op_bitmap to read a page map table page_map_table. In addition, the firmware reads pages corresponding to the bits that are set in the fill operation bitmap fill_op_bitmap or the fill operation page fields. During operation of the firmware, the host 20 may selectively perform polling of the fill operation bitmap fill_op_bitmap in order to check whether the fill operation with respect to an arbitrary page is finished. Then, when the fill operations with respect to certain pages are finished, the firmware clears the bits in the fill operation bitmap fill_op_bitmap, and repeatedly performs the above operations until all bits in the fill operation bitmap fill_op_bitmap are cleared.
Next, the firmware notifies the controller 100 of the completion of the fill operation. The controller 100 triggers an interrupt in the host 20, and the host 20 processes the interrupt.
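A simplified C sketch of the firmware side of the fill sequence described above, under assumed representations of the fill operation bitmap and the page map table: the firmware walks the bitmap, fills each page indicated by a set bit, clears the bit (which the host may poll), and finally signals completion.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DATA_PAGES 1024u

/* Hypothetical shared state in the SRAM/TCM space. */
static uint8_t  fill_op_bitmap[NUM_DATA_PAGES / 8]; /* one bit per page  */
static uint32_t page_map_table[NUM_DATA_PAGES];     /* page index -> LBA */

static bool bit_is_set(const uint8_t *bm, uint32_t i) { return bm[i / 8] & (1u << (i % 8)); }
static void clear_bit(uint8_t *bm, uint32_t i)        { bm[i / 8] &= (uint8_t)~(1u << (i % 8)); }

/* Hypothetical placeholders for the actual data movement and signaling. */
static void copy_page_from_medium(uint32_t lba, uint32_t page) { (void)lba; (void)page; }
static void notify_fill_complete(void) { /* e.g., triggers an interrupt in the host */ }

/* Firmware handling of an arrived fill command FILL_CMD. */
static void firmware_handle_fill(void)
{
    bool remaining;
    do {
        remaining = false;
        for (uint32_t page = 0; page < NUM_DATA_PAGES; page++) {
            if (!bit_is_set(fill_op_bitmap, page))
                continue;
            /* Read the page map table and fill the page from the medium. */
            copy_page_from_medium(page_map_table[page], page);
            clear_bit(fill_op_bitmap, page);   /* host may poll this bitmap */
            remaining = true;
        }
    } while (remaining); /* repeat until all bits in the bitmap are cleared */
    notify_fill_complete();
}

int main(void)
{
    fill_op_bitmap[0] = 0x01;    /* example: the host requested page 0 */
    page_map_table[0] = 0x1000;  /* example LBA                        */
    firmware_handle_fill();
    return 0;
}
```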
Referring to
Referring to
First, the host 20 transmits a direct store command DS_CMD to the controller 100. Here, the memory 120 in the controller 100 has available space for storing data. The available space may store second data DATA2 corresponding to the direct store command DS_CMD. The second data DATA2 is the data that the host 20 is to store. Next, the host 20 directly stores the second data DATA2 in the memory 120 of the controller 100, and accordingly, the execution of the direct store operation may be completed.
Referring to
First, the host 20 transmits the direct store command DS_CMD to the controller 100. Here, the memory 120 in the controller 100 has no available space (e.g., the memory 120 is filled with other data).
Next, the controller 100 transmits a flush command FLUSH_CMD to the storage medium 200, and transfers the data filling the memory 120 to the storage medium 200. Here, the flush command FLUSH_CMD may be generated by the host 20. As a result, the memory 120 may secure available space.
In addition, the host 20 directly writes the second data DATA2 in the memory 120 of the controller 100. Accordingly, execution of the direct store operation may be completed.
Referring to
First, the host 20 fills pages of the memory 120, for example, the DRAM, with data, and sets bits in a flush operation bitmap flush_op_bitmap. The host 20 further sets the bits in the SFR, and may initiate the operation by setting the doorbell to '1'. Then, the controller 100 notifies the firmware in the processing unit 110 of the arrival of the flush command FLUSH_CMD.
The firmware reads the flush operation bitmap flush_op_bitmap and reads the page_map_table. In addition, the firmware flushes the pages corresponding to the bits that are set in the flush operation bitmap flush_op_bitmap. During operation of the firmware, the host 20 may selectively perform a polling operation of the flush operation bitmap flush_op_bitmap to check whether the flush operation with respect to an arbitrary page is finished. Next, the firmware clears the bits in the flush operation bitmap flush_op_bitmap when the flush operations with respect to certain pages are finished.
In addition, the firmware notifies the controller 100 of the completion of the flush operation. Next, the controller 100 triggers an interrupt in the host 20, and the host 20 processes the interrupt.
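For comparison, the host-driver side of the flush sequence described above might look like the following C sketch, in which the driver fills DRAM pages, sets the corresponding bits in the flush operation bitmap, writes the doorbell in the SFR, and then polls the bitmap until the firmware clears it; all structures and register layouts are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define NUM_DATA_PAGES 1024u
#define PAGE_SIZE      4096u

/* Hypothetical views of the memory-mapped device regions. */
struct flush_regs {
    volatile uint8_t  flush_op_bitmap[NUM_DATA_PAGES / 8];
    volatile uint32_t doorbell;  /* SFR doorbell: write '1' to start */
};

static volatile uint8_t *dram_pages;   /* mapped DRAM data area (memory 120) */
static struct flush_regs *regs;        /* mapped SFR/SRAM region             */

static void set_bit(volatile uint8_t *bm, uint32_t i) { bm[i / 8] |= (uint8_t)(1u << (i % 8)); }
static int  bit_is_set(const volatile uint8_t *bm, uint32_t i) { return bm[i / 8] & (1u << (i % 8)); }

/* Host-side flush of one page: fill the page, mark it, ring the doorbell,
 * then (optionally) poll until the firmware clears the bit. */
static void host_flush_page(uint32_t page, const void *data)
{
    memcpy((void *)&dram_pages[page * PAGE_SIZE], data, PAGE_SIZE); /* direct store */
    set_bit(regs->flush_op_bitmap, page);
    regs->doorbell = 1u;                 /* initiate the flush operation */
    while (bit_is_set(regs->flush_op_bitmap, page))
        ;                                /* polling; an interrupt could be used instead */
}
```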
Referring to
Referring to
The controller 1100 is configured to access the nonvolatile memory device 1200 in response to a request from a host. For example, the controller 1100 is configured to control reading, writing, erasing, and background operations of the nonvolatile memory device 1200. The controller 1100 is further configured to provide an interface between the nonvolatile memory device 1200 and the host, and may be configured to drive firmware for controlling the nonvolatile memory device 1200.
The nonvolatile memory device 1200 or the memory system 1000 according to exemplary embodiments of the present inventive concept may be mounted using various types of packages. For example, the nonvolatile memory device 1200 or the memory system 1000 may be mounted by using a package on package (PoP), ball grid arrays (BGAs), chip scale packages (CSPs), a plastic leaded chip carrier (PLCC), a plastic dual in-line package (PDIP), a die in waffle pack, a die in wafer form, a chip on board (COB), a ceramic dual in-line package (CERDIP), a plastic metric quad flat pack (MQFP), a thin quad flat pack (TQFP), a small outline integrated chip (SOIC), a shrink small outline package (SSOP), a thin small outline package (TSOP), a system in package (SIP), a multichip package (MCP), a wafer-level fabricated package (WFP), or a wafer-level processed stack package (WSP).
Referring to
The nonvolatile memory device 2200 includes a plurality of nonvolatile memory chips. The plurality of nonvolatile memory chips may be divided into a plurality of groups. Each of the groups of the plurality of nonvolatile memory chips may be configured to communicate with the controller 2100 via common channels. For example, the plurality of nonvolatile memory chips may communicate with the controller 2100 via first through k-th channels CH1 through CHk.
In
Referring to
The memory system 3100 may include a controller 3110 and a nonvolatile memory device 3120. The controller 3110 may be the controller 100 of
Referring to
The first storage device 4200 may include a controller 4210 and a storage medium 4220. The second storage device 4300 may include a controller 4310 and a storage medium 4320. The controllers 4210 and 4310 may be the controller 100 of
The first storage device 5300 may include a controller 5310 and a storage medium 5320. The second storage device 5400 may include a controller 5410 and a storage medium 5420. The first and second controllers 5310 and 5410 may be the controller 100 of
Referring to
Referring to
Referring to
While the present inventive concept has been particularly shown and described with reference to the exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.
Claims
1. A method of operating a storage device including a controller and a storage medium, the method comprising:
- receiving a first request from a host;
- determining whether the first request includes a direct load request or a direct store request; and
- performing a direct load operation in response to the direct load request by loading a first data stored in a buffer memory to the host or performing the direct store operation in response to the direct store request by storing a second data received from the host in the buffer memory,
- wherein the buffer memory is included in the controller, and is configured to be directly accessible by the host.
2. The method of claim 1, wherein the direct load operation further comprises determining, while performing the direct load request, whether the first data is in the buffer memory, and if determined that the first data is in the buffer memory, loading the first data to the host.
3. The method of claim 2, wherein, if determined that the first data is not in the buffer memory, performing a fill operation in which transmitting the first data stored in the storage medium to the buffer memory, and then loading the first data to the host.
4. The method of claim 1, wherein the direct store operation further comprises determining, while performing the direct store command, whether the buffer memory has enough space for accommodating the second data, and if determined that the buffer memory has enough space, storing the second data in the buffer memory.
5. The method of claim 4, wherein, if determined that the buffer memory has not enough space, performing a flush operation in which transmitting a third data included in the buffer memory to the storage medium to secure enough space for storing the second data, and then storing the second data in the buffer memory.
6. The method of claim 1, further comprising storing device information relating to the storage device in the buffer memory before receiving the first request, wherein the device information indicates a hardware configuration of the storage device to the host and the host utilizes the hardware configuration to issue the first request.
7. The method of claim 6, wherein storing the device information in the buffer memory comprises receiving the device information from the storage medium in response to an operation initiation signal or an initialization command, and storing the device information in the buffer memory.
8. The method of claim 1, wherein the controller comprises a processor, and a tightly coupled memory (TCM) disposed adjacent to the processor and accessible by the processor, wherein the buffer memory and the TCM are mapped to address spaces that are exclusive from each other in an address map.
9. The method of claim 8, wherein the processor is configured to process the direct load operation and the direct store operation, and the TCM is configured to be directly accessible by the processor.
10. The method of claim 9, wherein the TCM is further configured to temporarily store a data transfer command for communicating data between the controller and the storage medium.
11. The method of claim 10, wherein the data transfer command includes a flush command for transmitting a third data temporarily stored in the buffer memory to the storage medium, and a fill command for transmitting the first data stored in the storage medium to the buffer memory.
12. The method of claim 10, wherein the TCM includes at least one special function register (SFR) configured to perform the direct load operation and the direct store operation.
13. The method of claim 1, wherein the buffer memory comprises a first portion configured to temporarily store raw data read from the storage medium or raw data received from the host, and a second portion configured to temporarily store metadata corresponding to the raw data.
14. The method of claim 1, wherein the buffer memory comprises at least one DRAM device and/or at least one SRAM device.
15. A method of operating a storage device including a controller and a storage medium, the method comprising:
- receiving a first request from a host;
- determining whether the first request includes a normal write/read request or a direct load/store request; and
- performing a normal write/read operation in response to the normal write/read request or performing a direct load/store operation in response to the direct load/store request,
- wherein the direct load/store operation comprises: determining whether the direct load/store request is a direct load request or a direct store request; and performing a direct load operation upon receiving the direct load request by loading a first data stored in a buffer memory to the host or performing the direct store operation upon receiving the direct store request by storing a second data received from the host in the buffer memory, wherein the buffer memory is included in the controller, and is configured to be directly accessible by the host.
16. The method of claim 15, wherein, during normal write/read operation, a data to be written to or read from the storage medium is temporarily stored in a main memory which is configured to be accessible by the host, and the main memory is located outside the storage device.
17. The method of claim 15, wherein the direct load operation further comprises determining, while performing the direct load request, whether the first data is in the buffer memory, and if determined that the first data is in the buffer memory, loading the first data to the host.
18. The method of claim 17, wherein, if determined that the first data is not in the buffer memory, performing a fill operation in which transmitting the first data stored in the storage medium to the buffer memory, and then loading the first data to the host.
19. The method of claim 15, wherein the direct store operation further comprises determining, while performing the direct store command, whether the buffer memory has enough space for accommodating the second data, and if determined that the buffer memory has enough space, storing the second data in the buffer memory.
20. The method of claim 19, wherein, if determined that the buffer memory has not enough space, performing a flush operation in which transmitting a third data included in the buffer memory to the storage medium to secure enough space for storing the second data, and then storing the second data in the buffer memory.
Type: Application
Filed: Jul 6, 2018
Publication Date: Nov 1, 2018
Inventors: MYEONG-EUN HWANG (SEONGNAM-SI), KI-JO JUNG (GWACHEON-SI), TAE-HACK LEE (HWASEONG-SI), KWANG-HO CHOI (SEOUL), SANG-KYOO JEONG (SEOUL)
Application Number: 16/029,007