Method and computing device for encrypting data stored in swap memory
The following embodiments generally relate to the use of a “swap area” in a non-volatile memory as an extension to volatile memory in a computing device. These embodiments include techniques to use both volatile memory and non-volatile swap memory to pre-load a plurality of applications, to control the bandwidth of swap operations, to encrypt data stored in the swap area, and to perform a fast clean-up of the swap area.
Today, one of the main trends in mobile computing devices, such as smartphones and tablets, is the ever-increasing demand for mobile volatile memory (e.g., DRAM), which has grown from 256 KB to a few gigabytes and is still rising. With DRAM scaling slowing down, a mobile computing device is under pressure to use its DRAM effectively. One constraint is the number of applications (or “apps”) that can be loaded into the DRAM. To launch an application, a processor in the computing device loads computer-readable program code for the application from non-volatile memory (e.g., Flash memory) into volatile memory (e.g., DRAM) and then executes the code. Executing the code can create application data, which is also stored in the volatile memory. The time required to launch an application may be seen as an inconvenience by some users, so some computing devices are designed to automatically pre-load a set of applications into the DRAM during the boot-up process of the computing device, when the user normally expects some delay. By being pre-loaded, an application is almost instantly accessible after power-up, rather than requiring the user to wait for it to launch from scratch. Because a computing device has a limited amount of DRAM, there is a limit to the number of applications that can be pre-loaded into DRAM. As such, unless an application is among the limited number pre-loaded into DRAM, a user will experience some delay waiting for the application to launch.
Overview
Embodiments of the present invention are defined by the claims, and nothing in this section should be taken as a limitation on those claims.
In one embodiment, a method and computing device are disclosed for using both volatile memory and non-volatile swap memory to pre-load a plurality of applications. In one method, a plurality of applications are pre-loaded in volatile memory in the computing device until it is determined that available space in the volatile memory has dropped below a threshold level. An application is pre-loaded in the volatile memory by copying application code for the application from the non-volatile memory into the volatile memory, executing the application code from the volatile memory, and storing created application data in the volatile memory. When it is determined that the available space in the volatile memory has dropped below the threshold level, the application data for at least one application is moved from the volatile memory to the non-volatile memory.
In another embodiment, a method and computing device are disclosed for bandwidth control of a swap operation. In one method, a plurality of applications are loaded in volatile memory in the computing device. An application is loaded in the volatile memory by copying application code for the application from the non-volatile memory into the volatile memory, executing the application code from the volatile memory, and storing created application data in the volatile memory. A bandwidth at which the application data for at least one application should be moved from the volatile memory to the non-volatile memory during a swap operation is determined, and the application data for the at least one application is moved from the volatile memory to the non-volatile memory during a swap operation according to the determined bandwidth.
In another embodiment, a method and computing device are disclosed for encrypting data stored in a swap area. In one method, an application is loaded in the volatile memory by copying application code for the application from the non-volatile memory into the volatile memory, executing the application code from the volatile memory, and storing created application data in the volatile memory. The application data for the application is then moved from the volatile memory to the non-volatile memory during a swap operation, and the application data is encrypted before it is stored in the non-volatile memory.
In another embodiment, a method and computing device are disclosed for fast erase of a swap area in a non-volatile memory. In one method, a controller of a storage module is in communication with a processor of a computing device, and the storage module has a non-volatile memory with a swap area storing data that was swapped out of a volatile memory of the computing device. The controller of the storage module receives a multi-block erase command from the processor of the computing device to erase a plurality of blocks in the swap area in the non-volatile memory and, in response to receiving the command, simultaneously erases all of the plurality of blocks.
Other embodiments are possible, and each of the embodiments can be used alone or together in combination. Accordingly, various embodiments will now be described with reference to the attached drawings.
Introduction
The following embodiments generally relate to the use of a “swap area” in a non-volatile memory as an extension to volatile memory in a computing device. These embodiments include techniques to use both volatile memory and non-volatile swap memory to pre-load a plurality of applications, to control the bandwidth of swap operations, to encrypt data stored in the swap area, and to perform a fast clean-up of the swap area. These embodiments can be used alone or in combination with one another, and other embodiments are provided. Before turning to these and other embodiments, the following section provides a discussion of exemplary computing and storage devices that can be used with these embodiments. Of course, these are just examples, and other suitable types of computing and storage devices can be used.
Exemplary Computing and Storage Devices
Turning now to the drawings, FIG. 1 is a block diagram of an exemplary computing device 100 of an embodiment, which contains a processor 110, a non-volatile memory 120, and a volatile memory 130.
The processor 110 is responsible for running the general operation of the computing device 100. This includes, for example, running an operating system, as well as various applications. The computer-readable program code for the operating system and applications can be stored in the non-volatile memory 120 and then loaded into the volatile memory 130 for execution. The following embodiments provide several examples of methods that can be performed by the processor 110.
The non-volatile and volatile memories 120, 130 can take any suitable form. For example, the volatile memory 130 can use any current or future technology for implementing random access memory (RAM) (or dynamic random access memory (DRAM)). In one embodiment, the non-volatile memory 120 takes the form of a solid-state (e.g., flash) memory and can be one-time programmable, few-time programmable, or many-time programmable. The non-volatile memory 120 can be two-dimensional or three-dimensional and can use single-level cell (SLC), multiple-level cell (MLC), triple-level cell (TLC), or other memory technologies, now known or later developed.
The non-volatile memory 120 can simply be a memory chip or can be part of a self-contained storage device with its own controller. An example of such a storage device 200 is shown in FIG. 2.
The controller 210 also comprises a central processing unit (CPU) 213, an optional hardware crypto-engine 214 operative to provide encryption and/or decryption operations, random access memory (RAM) 215, read only memory (ROM) 216 which can store firmware for the basic operations of the storage device 200, and a non-volatile memory (NVM) 217 which can store a device-specific key used for encryption/decryption operations, when used. The controller 210 can be implemented in any suitable manner. For example, the controller 210 can take the form of a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
The storage device 200 can be embedded in or removably connected with the computing device 100. For example, the storage device 200 can take the form of an iNAND™ eSD/eMMC embedded flash drive by SanDisk Corporation or can take the form of a removable memory device, such as a Secure Digital (SD) memory card, a microSD memory card, a Compact Flash (CF) memory card, a universal serial bus (USB) device, or a solid-state drive (SSD).
As shown in FIG. 3, the computer-readable program code executed by the processor 110 can be organized into a kernel space, in which an operating system kernel 310 runs, and a user space, in which applications 300 and an application management layer 305 run.
In the user space, the relevant objects are applications (e.g., apps for making a phone call, taking a picture, opening a video, etc.), and each application translates into a process (or several processes) that needs to run in order to support the application's functionality. Each process has a projection into the kernel space. From the operating system kernel's perspective, a process is an entity that requires resources: memory, time slots to run in, structures that describe the process, etc. The operating system kernel 310 is the process manager and allocates the memory resources and the time slots in which the process can run. So, in some sense, the processes can be said to run in the operating system kernel 310; however, the operating system kernel 310 has no knowledge of the functionality of the processes. The operating system kernel 310 does not even know if a process is running in the background or foreground. From the operating system kernel's perspective, the process is defined by the resources needed to support it.
In the user space, the application management layer 305 is aware of the functionality of each process, of the processes associated with each application 300, and of the priority of an application 300 and its associated processes. In order to support the operating system kernel 310 in its role of allocating resources to the processes running in the operating system kernel 310, the application management layer 305 in the user space can compute a priority parameter, sometimes known as an adjustment, and report this parameter to the operating system kernel 310. Typically, the adjustment parameter is added to the structure defining the process (i.e., the reflection of the process in the kernel space) and is updated on a regular basis. For example, the adjustment parameter can be defined as a 16-level parameter, where a low value indicates high priority and a high value indicates low priority. Whenever memory resources are insufficient for fulfilling a memory allocation request of a process (in the operating system kernel 310), the operating system kernel 310 may free some memory in the volatile memory 130, either by swapping (i.e., moving some data from the volatile memory 130 (e.g., RAM) into the non-volatile memory 120 (e.g., main storage)) or by ending (or “killing”) low-priority processes (as indicated by the adjustment parameter). The operating system kernel 310 can compute a first threshold function A = F(free memory, required memory), where A is a number in the range of the adjustment parameter. Then, the operating system kernel 310 can kill any process with an adjustment greater than (or equal to) A in order to fulfill the requests from current processes.
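To make this concrete, here is a minimal sketch in C of the threshold-and-kill logic described above. The function and field names (compute_threshold, proc_entry, and so on) are illustrative assumptions, not the actual kernel implementation.

```c
#include <stddef.h>

#define ADJ_LEVELS 16  /* 16-level adjustment: low value = high priority */

struct proc_entry {
    int adjustment;          /* priority reported by the application management layer */
    size_t resident_bytes;   /* memory the process currently holds */
    struct proc_entry *next;
};

/* First threshold function A = F(free memory, required memory): the scarcer
 * free memory is relative to the request, the lower (more aggressive) the
 * kill threshold becomes. Illustrative only. */
static int compute_threshold(size_t free_bytes, size_t required_bytes)
{
    if (free_bytes >= required_bytes)
        return ADJ_LEVELS;               /* nothing needs to be killed */
    size_t shortfall = required_bytes - free_bytes;
    int a = (int)(ADJ_LEVELS - (shortfall * ADJ_LEVELS) / required_bytes);
    return a < 0 ? 0 : a;
}

/* "Kill" (here: just tally) every process whose adjustment >= A. */
static size_t reclaim_by_killing(struct proc_entry *procs, int a)
{
    size_t reclaimed = 0;
    for (struct proc_entry *p = procs; p != NULL; p = p->next) {
        if (p->adjustment >= a)
            reclaimed += p->resident_bytes;  /* kill_process(p) in a real kernel */
    }
    return reclaimed;
}
```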
The following embodiments can be implemented in any suitable manner in the computing device 100. For example, as discussed above, the processor 110 of the computing device 100 can execute an operating system kernel 310 as well as applications 300 and an application management layer 305 running in the user space. The operating system kernel 310 can be Linux or incompatible with Linux. Operating systems with a kernel incompatible with Linux include, but are not limited to, Windows operating systems (e.g., Windows NT and Windows 8) and Apple operating systems (e.g., iOS and Mac OS X). Also, the various acts discussed below can be performed by sending function calls from the application management layer 305 to the operating system kernel 310.
Further, in some embodiments, a storage device (e.g., an eMMC or UFS device) can be designed with a special partition on the same chip, or a special chip, that is designed for high performance and endurance. This may assist in the adoption of swap operations in mobile computing devices. That is, many current mobile operating systems do not enable swap due to concern over the endurance of embedded storage devices. Specifically, the concern is that if swap is utilized as a DRAM extension, it will result in increased traffic and cause severe stress to the embedded device, possibly damaging the device and rendering the whole system non-operable. Also, traditionally, eMMC devices have limited endurance and are not designed for swapping. Using a partition or special chip designed for high performance and endurance can help address this issue. The following section provides more information on the swapping process.
General Overview of Swapping Operations
As mentioned above, to launch an application, the processor 110 in the computing device 100 loads computer-readable program code for the application from the non-volatile memory 120 into the volatile memory 130 and then executes the code. Executing the code can create dynamic application data, which is also stored in the volatile memory 130. As used herein, “dynamic application data” (or “application data”) refers to data that is dynamically allocated by the application for internal use and maintains the state information of the application, such that, if it is lost, the application will need to be reloaded. Examples of such application data include, but are not limited to, temporary data that is buffered, data allocated in an internal stack or cache, video/graphic data that is buffered for rendering purposes, data from specific or shared libraries, and data generated from external data (e.g., from a network).
Because a computing device typically has a relatively small amount of volatile memory as compared to non-volatile memory, there is a limit to the number of applications that can be loaded into volatile memory. That is, while computing devices are generally fitted with sufficient volatile memory (e.g., DRAM) for handling the memory requirements during the initial system boot process, additional memory may be needed when applications are loaded on an as-needed basis by the operating system or explicitly by the user. As such, at some point, the computing device 100 may need to end (or “kill”) one or more applications currently running in the volatile memory 130 in order to provide volatile memory resources for a new application. However, to re-start a killed application, the launching process is repeated, and this may cause an undesirable delay for the user. To reduce this delay, instead of killing the application, the processor 110 can use the non-volatile memory 120 as an extension to the storage space in the volatile memory 130 and move (or “swap out”) the application data from the volatile memory 130 to the non-volatile memory 120. (As the code for the application itself is already stored in the non-volatile memory 120, the code residing in the volatile memory 130 can simply be deleted instead of moved to the non-volatile memory 120.) In this way, when the user wants to re-launch the application, after the application code is executed, the processor 110 simply needs to move the “swapped-out” application data from the non-volatile memory 120 to the volatile memory 130, instead of generating the application data again from scratch, as the swapped-out application data contains all the state information needed for the application to continue. This reduces the delay the user experiences when re-launching the application.
It should be noted that the processor 110 can use any suitable technique for determining which application to swap out. For example, in the memory swapping mechanism that can be used with Linux systems in Android phones, specific portions of application data in the volatile memory are moved to the non-volatile memory using a least-recently-used (LRU) mechanism to determine which pieces (e.g., in increments of 4 KB) can be moved to the non-volatile memory. This method provides a scheme for moving out old, cold data that has not been and likely will not be accessed for some time. In Linux, a swap area consists of 4 KB slots, where there is one slot for each memory page. The first slot is the swap area header. When an anonymous page (i.e., a page not in the file system) is swapped out, it will be written to the swap area. When the process needs this page later, the page will be retrieved from the swap area. If the page is brought in for write, the corresponding slot will be marked as invalid and can be re-used. However, if the page is brought in for read, it will have two copies: one in DRAM and the other in the swap area. If the page is modified later, the copy in the swap area will be invalidated. If the page is swapped out again with no modifications, the copy in the DRAM will be invalidated.
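The slot-validity rules just described can be summarized in a small state sketch in C. This is a simplified model with hypothetical helper names; the real Linux swap cache is considerably more involved.

```c
enum slot_state { SLOT_FREE, SLOT_VALID, SLOT_DUAL };  /* DUAL: copies in both DRAM and swap */

/* Page swapped out: its slot now holds the only valid copy. */
void on_swap_out(enum slot_state *slot)          { *slot = SLOT_VALID; }

/* Page brought in for write: slot is invalidated and reusable. */
void on_swap_in_for_write(enum slot_state *slot) { *slot = SLOT_FREE; }

/* Page brought in for read: two copies exist (DRAM + swap area). */
void on_swap_in_for_read(enum slot_state *slot)  { *slot = SLOT_DUAL; }

/* Page modified while dual: the swap-area copy becomes stale. */
void on_modify(enum slot_state *slot)
{
    if (*slot == SLOT_DUAL)
        *slot = SLOT_FREE;          /* invalidate the swap copy */
}

/* Page swapped out again with no modifications: the DRAM copy is dropped
 * and the existing swap copy is reused as-is. */
void on_swap_out_unmodified(enum slot_state *slot)
{
    if (*slot == SLOT_DUAL)
        *slot = SLOT_VALID;
}
```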
In operation, Linux will find a contiguous 256-page free space, called a swap_file_cluster, for swap out. Once that space is used up, Linux will find the next cluster by searching sequentially from the beginning of the swap area to minimize the seek time. If no such cluster can be found, the processor will use the first available slots for swap out. After a reboot, the swap area will start out empty, and swap clusters will be allocated differently than before the previous shutdown, which improves wear leveling. Therefore, at the beginning, the swap-out trace is likely sequential. The free slots will eventually become fragmented, which can cause performance issues since the swap out becomes a random write.
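Below is a minimal C sketch of this allocation policy, assuming the swap area is represented as a free-slot bitmap; the constant and function names are illustrative, not Linux's actual swap_file_cluster code.

```c
#include <stdbool.h>
#include <stddef.h>

#define SWAP_CLUSTER_SLOTS 256   /* 256 pages x 4 KB = 1 MB cluster */

/* Returns the index of the first slot of a free 256-slot cluster, scanning
 * from the start of the swap area; if no whole cluster exists, falls back
 * to the first free slot (the random-write case); returns -1 if full. */
long allocate_swap_space(const bool *slot_free, size_t nr_slots)
{
    size_t run = 0;
    long first_free = -1;

    for (size_t i = 1; i < nr_slots; i++) {   /* slot 0 is the swap area header */
        if (!slot_free[i]) { run = 0; continue; }
        if (first_free < 0)
            first_free = (long)i;
        if (++run == SWAP_CLUSTER_SLOTS)
            return (long)(i - SWAP_CLUSTER_SLOTS + 1);
    }
    return first_free;  /* fragmented: swap-out degrades to random writes */
}
```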
After an extended period of use, the whole swap space can be used up, and “wrap around” can occur, where previously-written pages that are now marked “invalid” can be re-written. Invalid pages can be pages that were swapped to the non-volatile memory and read back to the DRAM and are no longer needed. Also, depending on the usage pattern and workload, some pages may never be read back to the DRAM. As a result, the swap space can be severely fragmented and full of “invalid” data.
Embodiments Relating to Pre-Loading Applications
While the swapping mechanism discussed above reduces the delay in re-launching an application, the user will still experience a delay when launching the application for the first time. To address this issue, operating systems in some modern-day computing devices are designed to automatically pre-load a set of applications during the boot-up process of the computing device, when the user is normally expecting there to be some delay. As used here, a “pre-loaded” application refers to an application that was automatically launched without the user specifically, manually requesting the application to be launched at the time the user wants to actually use the application (e.g., without the user touching the icon for the application on the display screen). (However, as mentioned below, a user can designate in advance which applications should be pre-loaded.) That way, when the computing device is booted up, there will be a core set of applications that are ready to go without any delay experienced by the user. However, the number of applications that can be pre-loaded into the DRAM is limited by the size of the DRAM. Also, when additional applications are to be launched after power-up, those additional applications will also need to be loaded into the DRAM. At some point, the DRAM will become saturated (no/low free memory), and applications in the DRAM will need to be killed before any further applications can be loaded or, in some extreme cases, applications will be repeatedly loaded and killed. Once an application is killed, the processor will need to restart the initial code launch sequence the next time the user chooses to launch it, which requires time and possibly incurs additional network charges.
This embodiment takes advantage of the swap space in the non-volatile memory 120 in order to pre-load more applications than just those that will fit in the volatile memory 130. That is, instead of just pre-loading the core set of applications into the volatile memory 130, the processor 110 in this embodiment loads additional applications. If the loading of one or more of these additional applications causes the amount of available space in the volatile memory 130 to drop below the threshold level, the application data of one or more applications can be moved from the volatile memory 130 to the non-volatile memory 120. So, there will be pre-loaded applications both in the volatile memory 130 and in the non-volatile memory 120. If the user chooses an application that is pre-loaded in the volatile memory 130, the application will respond with no/minimal delay. (The user may or may not know whether an application is pre-loaded and, if it is, which memory it resides in.) However, if the user chooses an application that is pre-loaded in the non-volatile memory 120, the application code and the application data will need to be loaded into the volatile memory 130. While this will take some time, it will take less time than launching the application from scratch. Thus, this embodiment can be used to allow more applications to be loaded into a combination of volatile and non-volatile memory. This allows for a faster application launch, as loading program data sequentially from the non-volatile memory is faster than launching the application from scratch.
In general, the processor 110 in this embodiment pre-loads a plurality of applications in the volatile memory 130 until it is determined that available space in the volatile memory 130 has dropped below a threshold level. The processor 110 pre-loads an application by copying application code for the application from the non-volatile memory 120 into the volatile memory 130, executing the application code from the volatile memory 130 (wherein executing the application code creates application data), and storing the application data in the volatile memory 130. When the processor 110 determines that the available space in the volatile memory 130 has dropped below the threshold level, the processor 110 moves the application data for at least one application from the volatile memory 130 to the non-volatile memory 120. The processor 110 can also delete the application code from the volatile memory 130 for the application(s) whose application data was moved from the volatile memory 130 to the non-volatile memory 120.
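A minimal sketch of this pre-load policy in C, assuming hypothetical platform hooks (preload_app, swap_out_app_data, and so on) and an example threshold value:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical platform hooks; not a real API. */
extern size_t free_volatile_bytes(void);
extern bool   preload_app(int app_id);             /* copy code, execute, keep data in RAM */
extern void   swap_out_app_data(int app_id);       /* move application data to the swap area */
extern void   drop_app_code_from_ram(int app_id);  /* code stays in the file system */

#define FREE_RAM_THRESHOLD (64u * 1024 * 1024)     /* example threshold: 64 MB */

void preload_applications(const int *app_ids, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        preload_app(app_ids[i]);
        if (free_volatile_bytes() < FREE_RAM_THRESHOLD) {
            /* Volatile memory is low: move the application data of the
             * just-loaded (or another selected) application to swap, and
             * delete its now-redundant code from volatile memory. */
            swap_out_app_data(app_ids[i]);
            drop_app_code_from_ram(app_ids[i]);
        }
    }
}
```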
This embodiment can be implemented in any suitable way. For example, applications can be pre-loaded during boot time of the computing device 100. Several actions occur during the boot-up process, such as jumping to a reset vector (i.e., a default address for the first line of code that needs to be executed to power up the computing device 100), initiating the bootstrap sequence, executing the boot loader to initialize the basic system hardware, initiating the operating system kernel load process where additional hardware and peripherals are brought up and the core system drivers are loaded, initializing the file system (e.g., activating the swap partition and sending a TRIM command to clean blocks), and loading system drivers. As noted above, in more-advanced operating systems, a set of applications can be pre-loaded into the volatile memory 130 during boot-up in anticipation of user interaction, and, in this embodiment, additional applications are pre-loaded for user-experience enhancement.
The determination of which application(s) to swap out can be made in any suitable way. For example, the computing device 100 can store a list of application(s) (predetermined or user-created) whose application data should be moved from the volatile memory 130 to the non-volatile memory 120 when it is determined that the available space in the volatile memory 130 has dropped below the threshold level. As another example, the computing device 100 can store multiple lists (again, predetermined or user-created), such as a list of primary applications and a list of secondary applications, where the primary applications are pre-loaded before the secondary applications are pre-loaded (more than two lists can be used). This example is illustrated in the flow chart 700 of FIG. 7.
In some embodiments, the application data, but not the application code, will be swapped out to the non-volatile memory 120, as the application code is static (not dynamic) and will not change throughout the life of the application. As such, the application code can be re-loaded from the file system section of the non-volatile memory 120. The application data, however, is generated after code execution (CPU overhead) and, in some cases, from data that needs to be downloaded through a network connection. This combination creates an overhead on the application launch time.
There are many alternatives that can be used with these embodiments. For example, instead of using application lists, system-defined parameters can be used to specify which applications to pre-load and which specific services to defer to prevent excessive boot time. Also, a best-fit algorithm can be used to determine the most-efficient methodology to pre-load the applications, and most-recently-used application and user-preference algorithms can be used as pre-loading factors. Further, different techniques can be used to determine the application preload sequence, determine when to preload applications (e.g., to avoid preloading when the computing device 100 is in its critical loading process, as this can slow down boot up), dynamically manage free, cached, and swap memory, and automatically terminate specific applications during times of critical memory allocation.
Embodiments Relating to Bandwidth Control of a Swap Operation
As mentioned above, operating systems of modern computing devices utilize different swapping algorithms to extend system memory with varying degrees of success. Some of the algorithms include the use of temporal-based swapping (e.g., least-recently-used page-based swapping) and contextual/spatial-based swapping (i.e., application/chunk-based swapping). In computing devices that are bandwidth-bound, there are issues with user experience, specifically when large (in size or number) input/output (I/O) operations saturate the bus between the processor and the memory, leaving users with a less-than-optimal experience. Accordingly, one of the biggest issues in using the swap mechanism is the added latency to the system. Due to the intrinsic design of NAND architecture, a need exists for a more-optimal method to balance latency with throughput. NAND performance is bottlenecked by the I/O operations when doing small-sized random commands, while, on the other hand, sequential commands greatly improve the throughput. User experience is proportional to the performance of the read command: the longer a read command takes to complete, the longer the delay felt by the user. Large commands increase the time-to-completion for each command. Without the ability to prioritize the commands, it is very likely that a large sequential command or a series of commands will delay other commands long enough to make the system unresponsive. With the introduction of a swap operation in NAND-based storage devices, there is a need to load-balance the available throughput of the device against this latency, which is directly felt by the user when an action is taken (e.g., an application launch). This premise also holds true for other swap-based operations. The amount of memory swapped out is directly proportional to the memory demand, and, in times of severe memory shortage, large chunks of memory will be swapped out over a short period of time.
To provide for a smoothed-out operation, in one embodiment, the processor 110 determines a bandwidth at which at least some of the application data for at least one application should be moved from the volatile memory 130 to the non-volatile memory 120 during a swap operation and then moves the data according to the determined bandwidth. There are several techniques that the processor 110 can use to determine the appropriate bandwidth. For example, the processor 110 can allow a user to set the bandwidth control through one or more predefined system parameters. As another example, advanced heuristics and/or NAND parameters can be used to define an optimal threshold for swap over a predetermined period of time. Both of these techniques seek to lessen the effects of swap on the I/O throughput and command latency, which adversely affect the user experience.
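As an illustration of the first technique, the sketch below caps swap traffic at a user-set bytes-per-second parameter. The names and the default value are assumptions, not an actual swap implementation.

```c
#include <stdint.h>

/* User-settable system parameter: maximum swap bandwidth in bytes per second. */
static uint64_t swap_bw_limit = 8u * 1024 * 1024;   /* example default: 8 MB/s */

/* Given the bytes already moved in the current one-second window, return how
 * many more bytes of application data may be swapped out in this window. */
uint64_t swap_budget_remaining(uint64_t moved_this_window)
{
    if (moved_this_window >= swap_bw_limit)
        return 0;                 /* budget exhausted: defer further swap-out */
    return swap_bw_limit - moved_this_window;
}
```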
Returning to the drawings, the diagram 1200 in FIG. 12 illustrates this first technique, in which the swap bandwidth is set through predefined system parameters.
As mentioned above, another technique that can be used for bandwidth control employs advanced heuristics and/or NAND parameters to define an optimal threshold for swap over a predetermined period of time. This technique utilizes feedback from the NAND device to the host, which is fed through an algorithm to automatically determine the best settings to use. Manual control of any/all parameters can be allowed to bypass limitations, and additional parameters can be used as a percentage of the maximum throughput of the device. This allows balancing the throughput/latency on a fine scale irrespective of the underlying device. This technique will be discussed in conjunction with the diagram 1400 below.
The diagram 1400 in FIG. 14 illustrates this feedback-based approach, in which the storage device reports parameters that the host uses to tune the swap settings.
There are several alternatives that can be used with these embodiments. For example, in one alternative, the processor 110 can monitor all I/O utilization and dynamically vary the swap bandwidth. This can include the steps of measuring the I/O traffic load in the system, limiting memory swap out, predicting I/O traffic (based on predetermined or calculated past data), delaying/deferring a swap transfer until a later time, and dynamically setting the chunk size based on load.
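A minimal sketch of such dynamic control, assuming a measured foreground I/O load expressed as a percentage of capacity; the thresholds and chunk sizes are illustrative only.

```c
#include <stdint.h>

/* Pick a swap chunk size (in bytes) from the measured foreground I/O load,
 * expressed as a percentage of the bus/device capacity. */
uint32_t choose_swap_chunk(uint32_t io_load_pct)
{
    if (io_load_pct > 80)
        return 0;                    /* heavy traffic: defer the swap transfer */
    if (io_load_pct > 50)
        return 64u * 1024;           /* moderate traffic: small chunks keep read latency low */
    return 1u * 1024 * 1024;         /* idle bus: large sequential chunks for throughput */
}
```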
Embodiments Relating to Encryption of Data in Swap Area
One problem observed with many non-volatile storage systems is the security of the data stored within them. Given that the data is persistent across power-down cycles, any sensitive information or data, be it sensitive application data or licensed content, stored in the device is susceptible to theft. One method to combat this theft is through the use of digital rights management (DRM) and/or encryption; however, this type of protection is currently done on a per-application basis and not during memory management operations, such as a swap operation. As explained above, in most modern operating systems, virtual memory subsystems make use of a swap operation to “extend” the available system memory. Depending on the algorithm used, a swap mechanism could be used to store the least-utilized memory pages. In scenarios where application security is based on key exchange or privately-generated keys, the keys are stored in volatile memory. In times of severe memory pressure, it is conceivable that the key and/or data could be swapped out onto non-volatile storage, thereby leaving this sensitive key and/or data exposed, as there exists a possibility that the storage device can be removed and the data extracted with malicious intent. Additionally, it is possible to overwrite the data at any location in the non-volatile memory with malicious code. The underlying technology may afford some level of security through translation table look-ups, but it is not 100% foolproof.
To address these issues, in this embodiment, data is encrypted before it is stored in the non-volatile memory extension area (the swap area). By providing security features in the virtual memory swapping mechanism (in hardware, software, or both), sensitive data will be protected during the powered-down state. Securing data through encryption will also allow the system to detect intrusion and determine whether the data has been compromised. This can be done by integrating encryption capabilities in the virtual memory or in the entire region in the memory device.
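Conceptually, this only requires an encrypt hook on the swap-out path and a decrypt hook on the swap-in path. The C sketch below assumes a hypothetical cipher_* interface (which could be backed by a hardware crypto-engine such as the crypto-engine 214 described above) and hypothetical slot I/O helpers.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Hypothetical crypto and slot I/O hooks; not a real library API. */
extern void cipher_encrypt(const uint8_t *in, uint8_t *out, size_t len,
                           uint64_t slot_index /* used as a per-slot tweak/IV */);
extern void cipher_decrypt(const uint8_t *in, uint8_t *out, size_t len,
                           uint64_t slot_index);
extern void swap_write_slot(uint64_t slot_index, const uint8_t *page);
extern void swap_read_slot(uint64_t slot_index, uint8_t *page);

/* Swap-out: the page is encrypted before it ever reaches non-volatile memory. */
void swap_out_page(const uint8_t *page, uint64_t slot_index)
{
    uint8_t buf[PAGE_SIZE];
    cipher_encrypt(page, buf, PAGE_SIZE, slot_index);
    swap_write_slot(slot_index, buf);
}

/* Swap-in: the page is decrypted on its way back into volatile memory. */
void swap_in_page(uint8_t *page, uint64_t slot_index)
{
    uint8_t buf[PAGE_SIZE];
    swap_read_slot(slot_index, buf);
    cipher_decrypt(buf, page, PAGE_SIZE, slot_index);
}
```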
As mentioned above, the encryption/decryption functionality can be part of the host or the memory device feature set, or a combination of both. This way, this embodiment can be adaptive to different usage models. That is, the goal of ensuring the security of the swapped-out data by setting up a secure path for the data can be attained by embedding a security channel (through hardware or software support) either on the host or on the storage device. Depending on the security requirements, it is also possible to enable security on both the host and the storage device. These alternatives will now be discussed.
There are several alternatives that can be used with these embodiments. For example, while the above embodiments have crypto-engines in both the host and the storage device, there can be a crypto-engine 2200 just in the host device (as shown in FIG. 22).
Embodiments Relating to Fast Clean-Up of Swap Area
When the host computing device goes through a power cycle (e.g., when a PC or smartphone is turned off), all the data in the volatile memory is lost, and the system will start afresh. However, the swap space in the non-volatile memory maintains the previously swapped-out data, even though none of that data is valid or relevant anymore. In order for the swap space of the non-volatile memory to become serviceable again, it should be cleaned up and erased to restore the original time 0 state (i.e., so there is no swapped-out data stored in the non-volatile memory). As the erase speed of Flash is quite slow (e.g., 5-20 msec per block), erasing all of the thousands of blocks of the swap area (e.g., a few GB) can take seconds, during which time the swap space is not useable. Even if swap is not needed right after system boot-up, the swap space still needs to be cleaned up to make it useable, and this will take system time and energy.
This embodiment addresses this problem by simultaneously erasing all the blocks in the swap space (e.g., at boot-up) to efficiently clean and reset the whole swap space. In this embodiment, the processor 110 takes the fact that the computing device is booting up as a signal that all the data in the swap space from previous sessions is no longer valid and needs to be cleaned up. Specifically, during boot-up, the processor 110 jumps to the reset vector, initiates the bootstrap sequence, and executes the boot loader to initialize the basic system hardware. The processor 110 also initiates the operating system kernel load process, where the more-advanced system hardware is brought up and the core system drivers are loaded. The swapping mechanism is initiated as part of the system boot process. The initiation process includes cleaning the swap partition by erasing the content of the partition, either through a pseudo erase, which only modifies the mapping table, or an actual overwrite of the data for security purposes. As NAND internally remaps the logical address, a special command is needed to force an actual erase of the used NAND blocks. The command can be a TRIM or DISCARD command from the operating system. The swap partition can use the command, in conjunction with the type of partition, to determine how best to erase the partition. This could be done as a single command to the entire partition or as a command that defines a list of blocks to erase.
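On Linux, the host side of this clean-up can be expressed with the standard BLKDISCARD block-device ioctl, which informs the storage device that the entire swap partition's contents are invalid. A minimal sketch (the device path is an example, and error handling is kept short):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>      /* BLKDISCARD, BLKGETSIZE64 */

/* Discard the entire swap partition so every block is invalid at boot. */
int discard_swap_partition(const char *dev)   /* e.g., "/dev/block/swap" */
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) { perror("open"); return -1; }

    uint64_t size = 0;
    if (ioctl(fd, BLKGETSIZE64, &size) != 0) {
        perror("BLKGETSIZE64"); close(fd); return -1;
    }

    uint64_t range[2] = { 0, size };          /* byte offset, byte length */
    if (ioctl(fd, BLKDISCARD, &range) != 0) {
        perror("BLKDISCARD"); close(fd); return -1;
    }

    close(fd);
    return 0;
}
```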
At the NAND flash level, there can exist specific commands that allow control over erase of the entire device. This provides a facility to support features such as secure erase or fast format but does not allow the flexibility of a partial erase at the partition level. Traditional support relies on serializing the erase commands (or, in limited circumstances, on parallel support across dies/channels), resulting in poor performance. This embodiment supports multi-block or partition erase through support at the NAND flash level that allows parallel block erase. This support can be provided through a special NAND command, which is referred to here as Multi-Block Erase. The introduction of this feature allows for fast partition erase at the system level. In previous designs, the NAND controller performs a pseudo erase (clearing the table entries associated with the erased region) to speed up the erase process. With this methodology, additional delays are introduced in that a new block needs to be erased before it can be used. With the introduction of Multi-Block Erase, the system can now perform a true fast erase of the series of blocks requested by the system. The storage device can provide support for this Multi-Block Erase command, where a list of block addresses is sent to the NAND die to determine which blocks are to be erased. By iterating through this list of to-be-erased blocks, the memory device can simultaneously erase the needed blocks, thereby reducing the overhead of erasing each block independently. Any suitable syntax can be used for this command, such as: <MBE CMD> <ROW ADDR (plane & chip)> <DATA>, where the data represents the block addresses to be erased. By issuing a multi-block erase command that lists all the invalid blocks of the swap partition, the storage device can erase them all in a single command. This will restore the swap space to an “all-erased” state, making it ready for the next swap, without the need to trigger garbage collection. In other words, this provides a total clean slate and restart.
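Following the <MBE CMD> <ROW ADDR> <DATA> syntax given above, a host-side framing of the command might look like the C sketch below. The opcode value and the transport functions are assumptions; an actual NAND command set would define its own.

```c
#include <stddef.h>
#include <stdint.h>

#define MBE_OPCODE 0xE1   /* assumed opcode for Multi-Block Erase */

/* Hypothetical low-level NAND transport hooks. */
extern void nand_send_cmd(uint8_t opcode);
extern void nand_send_row_addr(uint8_t plane, uint8_t chip);
extern void nand_send_data(const uint32_t *words, size_t n);

/* Erase all listed blocks with a single Multi-Block Erase command, letting
 * the die erase them in parallel instead of one block at a time. */
void multi_block_erase(uint8_t plane, uint8_t chip,
                       const uint32_t *block_addrs, size_t nr_blocks)
{
    nand_send_cmd(MBE_OPCODE);                /* <MBE CMD>  */
    nand_send_row_addr(plane, chip);          /* <ROW ADDR> */
    nand_send_data(block_addrs, nr_blocks);   /* <DATA>: blocks to be erased */
}
```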
To address this, the multi-block erase process described above can be applied to the swap area at boot-up, erasing all of its blocks in parallel.
Exemplary Memory Technologies
As mentioned above, any type of memory technology can be used. Semiconductor memory devices include volatile memory devices, such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices, non-volatile memory devices, such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and magnetoresistive random access memory (“MRAM”), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration.
The memory devices can be formed from passive and/or active elements, in any combinations. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Further by way of non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles, or a charge storage dielectric material.
Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., a NOR memory array. NAND and NOR memory configurations are exemplary, and memory elements may be otherwise configured.
The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure.
In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two dimensional memory structure, memory elements are arranged in a plane (e.g., in an x-z direction plane) which extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layer of the memory elements are formed or it may be a carrier substrate which is attached to the memory elements after they are formed. As a non-limiting example, the substrate may include a semiconductor such as silicon.
The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arrayed in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.
A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate).
As a non-limiting example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another non-limiting example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction) with each column having multiple memory elements in each column. The columns may be arranged in a two dimensional configuration, e.g., in an x-z plane, resulting in a three dimensional arrangement of memory elements with elements on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.
By way of non-limiting example, in a three dimensional NAND memory array, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device levels. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements which span through multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. As a non-limiting example, the substrate may include a semiconductor such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
Then again, two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. The substrates may be thinned or removed from the memory device levels before stacking, but as the memory device levels are initially formed over separate substrates, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
Associated circuitry is typically required for operation of the memory elements and for communication with the memory elements. As non-limiting examples, memory devices may have circuitry used for controlling and driving memory elements to accomplish functions such as programming and reading. This associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.
One of skill in the art will recognize that this invention is not limited to the two dimensional and three dimensional exemplary structures described but covers all relevant memory structures within the spirit and scope of the invention as described herein and as understood by one of skill in the art.
Conclusion
It is intended that the foregoing detailed description be understood as an illustration of selected forms that the invention can take and not as a definition of the invention. It is only the following claims, including all equivalents, that are intended to define the scope of the claimed invention. Finally, it should be noted that any aspect of any of the preferred embodiments described herein can be used alone or in combination with one another.
Claims
1. A method for encrypting data stored in a swap area, the method comprising:
- performing the following in a computing device having a volatile memory and a non-volatile memory: loading an application in the volatile memory, wherein the application is loaded in the volatile memory by: copying application code for the application from the non-volatile memory into the volatile memory; executing the application code from the volatile memory, wherein application data is created after the application code is executed; and storing the application data in the volatile memory; moving the application data for the application from the volatile memory to the non-volatile memory during a swap operation and deleting the application code from the volatile memory for the application, wherein the application data is encrypted before it is stored in the non-volatile memory; receiving a request for the application; and in response to receiving the request, moving the application data for the application from the non-volatile memory to the volatile memory instead of recreating the application data from scratch, wherein the application data maintains state information of the application.
2. The method of claim 1, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the computing device.
3. The method of claim 1, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the storage module.
4. The method of claim 1, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the computing device and by an encryption module in the storage module.
5. The method of claim 1, wherein the non-volatile memory is part of a storage module, and wherein the storage module is embedded in the computing device.
6. The method of claim 1, wherein the non-volatile memory is part of a storage module, and wherein the storage module is removably connected to the computing device.
7. The method of claim 1, wherein the computing device is a mobile device.
8. The method of claim 1, wherein the non-volatile memory has a three-dimensional configuration.
9. A computing device comprising:
- a volatile memory;
- a non-volatile memory; and
- a processor in communication with the volatile and non-volatile memory, wherein the processor is configured to: load an application in the volatile memory, wherein the application is loaded in the volatile memory by: copying application code for the application from the non-volatile memory into the volatile memory; executing the application code from the volatile memory, wherein application data is created after the application code is executed; and storing the application data in the volatile memory; move the application data for the application from the volatile memory to the non-volatile memory during a swap operation and delete the application code from the volatile memory for the application, wherein the application data is encrypted before it is stored in the non-volatile memory; receive a request for the application; and in response to receiving the request, move the application data for the application from the non-volatile memory to the volatile memory instead of recreating the application data from scratch; wherein the application data maintains state information of the application.
10. The computing device of claim 9, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the computing device.
11. The computing device of claim 9, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the storage module.
12. The computing device of claim 9, wherein the non-volatile memory is part of a storage module, and wherein the application data is encrypted by an encryption module in the computing device and by an encryption module in the storage module.
13. The computing device of claim 9, wherein the non-volatile memory is part of a storage module, and wherein the storage module is embedded in the computing device.
14. The computing device of claim 9, wherein the non-volatile memory is part of a storage module, and wherein the storage module is removably connected to the computing device.
15. The computing device of claim 9, wherein the computing device is a mobile device.
16. The computing device of claim 9, wherein the non-volatile memory has a three-dimensional configuration.
17. A computing device comprising:
- a volatile memory;
- a non-volatile memory;
- means for loading an application in the volatile memory, wherein the application is loaded in the volatile memory by: copying application code for the application from the non-volatile memory into the volatile memory; executing the application code from the volatile memory, wherein application data is created after the application code is executed; and storing the application data in the volatile memory;
- means for moving the application data for the application from the volatile memory to the non-volatile memory during a swap operation and deleting the application code from the volatile memory for the application, wherein the application data is encrypted before it is stored in the non-volatile memory;
- means for receiving a request for the application; and
- means for, in response to receiving the request, moving the application data for the application from the non-volatile memory to the volatile memory instead of recreating the application data from scratch; wherein the application data maintains state information of the application.
Type: Grant
Filed: May 7, 2014
Date of Patent: Apr 25, 2017
Patent Publication Number: 20160117260
Assignee: SanDisk Technologies LLC (Plano, TX)
Inventors: Robert S. Wu (Milpitas, CA), Jian Chen (Menlo Park, CA), Ashish Karkare (Milpitas, CA), Alon Marcu (Tel Mond), Vsevolod Mountaniol (Givataim)
Primary Examiner: Samson Lemma
Application Number: 14/272,255
International Classification: G06F 11/30 (20060101); G06F 21/79 (20130101); G06F 12/02 (20060101); G06F 12/0862 (20160101);