METHOD AND SYSTEM FOR EFFICIENT VARIABLE LENGTH MEMORY FRAME ALLOCATION

A system and method for efficient variable length memory frame allocation are described. The method is described to include receiving a frame allocation request from a host system, allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames, updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames, determining a super frame identifier for the allocated super frame, and enabling the super frame or the set of consecutively numbered frames to be allocated to storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This Non-Provisional Patent Application claims the benefit of U.S. Provisional Patent Application No. 62/410,752, filed Oct. 20, 2016, the entire disclosure of which is hereby incorporated herein by reference.

FIELD OF THE DISCLOSURE

The present disclosure is generally directed toward computer memory allocation techniques.

BACKGROUND

Traditional dynamic memory allocation schemes require high memory usage to maintain metadata and also significant computation to search for a free block or to free a used block. Advanced caching algorithms require many sizes of fixed memory blocks to be allocated at run time. The lifetime of these blocks varies based on the usage of each block, for example whether the block stores a temporary cache state or is used to issue write/read requests to devices. Typically, dynamic memory allocation for such use cases is not optimal. The memory allocation strategy has to be simple, fast, and easy to debug. Apart from that, memory blocks need clear separation in terms of their life spans.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure is described in conjunction with the appended figures, which are not necessarily drawn to scale:

FIG. 1 is a block diagram depicting a computing system in accordance with at least some embodiments of the present disclosure;

FIG. 2 is a block diagram depicting details of an illustrative RAID controller in accordance with at least some embodiments of the present disclosure;

FIG. 3 is a block diagram depicting a first illustrative data structure used in accordance with at least some embodiments of the present disclosure;

FIG. 4 is a block diagram depicting a second illustrative data structure used in accordance with at least some embodiments of the present disclosure;

FIG. 5 is a block diagram depicting a third illustrative data structure used in accordance with at least some embodiments of the present disclosure;

FIG. 6 is a flow diagram depicting a method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure;

FIG. 7 is a flow diagram depicting a method of allocating additional super frames from a stack of free super frames in accordance with at least some embodiments of the present disclosure;

FIG. 8 is a flow diagram depicting an additional method of responding to a frame allocation request in accordance with at least some embodiments of the present disclosure; and

FIG. 9 is a flow diagram depicting a method of releasing a super frame back to a stack of free super frames in accordance with at least some embodiments of the present disclosure.

DETAILED DESCRIPTION

The ensuing description provides embodiments only, and is not intended to limit the scope, applicability, or configuration of the claims. Rather, the ensuing description will provide those skilled in the art with an enabling description for implementing the described embodiments. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the appended claims.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this disclosure.

As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “and/or” includes any and all combinations of one or more of the associated listed items.

As will be discussed in further detail herein, embodiments of the present disclosure contemplate a frame allocation method in which frames are allocated based on variable sized pools called super frames. Although frames and super frames are described with respect to specific sizes or ranges of sizes, it should be appreciated that embodiments of the present disclosure are not limited to particular frame sizes or super frame sizes. Indeed, while a typical allocation of 2 Kbyte and 128 Byte super frames will be described, this should not be construed as limiting embodiments of the present disclosure. In some embodiments, a super frame, as used herein, may refer to a large frame that contains at least two sub frames of a particular size or range of sizes (e.g., 64 Bytes per sub frame). As a non-limiting example, a super frame of size 2 Kbyte may contain 32 contiguous 64 Byte sub frames. As another non-limiting example, a 128 Byte super frame may contain two 64 Byte sub frames. The 2 Kbyte and 128 Byte super frames are used for illustrative purposes only; it should be appreciated that super frames of any frame size can be used (e.g., a power of 2 can be used to determine any type of possible super frame size).
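
By way of a non-limiting, hypothetical sketch of the size arithmetic above (written in C, with constant and function names that are assumptions rather than part of this disclosure), the number of 64 Byte sub frames in a super frame may be computed as follows:

#include <stdint.h>

/* Illustrative sizes only; the disclosure is not limited to these values. */
#define SUB_FRAME_SIZE        64u     /* bytes per sub frame   */
#define SUPER_FRAME_SIZE_2K   2048u   /* 2 Kbyte super frame   */
#define SUPER_FRAME_SIZE_128  128u    /* 128 Byte super frame  */

/* Number of consecutive sub frames contained in a super frame. */
static inline uint32_t sub_frames_per_super_frame(uint32_t super_frame_size)
{
    return super_frame_size / SUB_FRAME_SIZE;  /* 2048/64 = 32, 128/64 = 2 */
}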

In some embodiments, a state of each sub frame within a super frame is maintained and indicated within a bitmap. As a non-limiting example, one bit within the bitmap may be used to indicate whether a particular sub frame is currently in use (or not). Additional bits in the bitmap can also be used to indicate the usage type for the sub frame. For instance, additional bits in the bitmap can be used to indicate whether a sub frame is used for a Local Message ID (LMID) or for some other memory type. If there is a need to find all of the sub frames that are used for LMIDs, for example, then the information stored in the additional bits becomes quite useful.
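
As a minimal, hypothetical sketch of such a bitmap (assuming the two-bit per sub frame encoding of 00=unused, 01=LMID, 10=SGL that is described below with reference to FIG. 3; the C names used here do not appear in the drawings), the usage state of a sub frame might be written and read as follows:

#include <stdint.h>

/* Assumed 2-bit per sub frame usage encoding (00 = unused,
 * 01 = used for an LMID, 10 = used for an SGL). */
enum sub_frame_usage { SF_UNUSED = 0u, SF_LMID = 1u, SF_SGL = 2u };

#define BITS_PER_SUB_FRAME 2u

/* Set the usage state of sub frame 'idx' within a per-super-frame bitmap word
 * (32 sub frames x 2 bits fits in a 64-bit word for a 2 Kbyte super frame). */
static inline uint64_t bitmap_set_usage(uint64_t bitmap, uint32_t idx,
                                        enum sub_frame_usage usage)
{
    uint32_t shift = idx * BITS_PER_SUB_FRAME;
    bitmap &= ~((uint64_t)0x3u << shift);   /* clear the old 2-bit field */
    bitmap |=  ((uint64_t)usage << shift);  /* record the new usage type */
    return bitmap;
}

/* Read back the usage state of sub frame 'idx'. */
static inline enum sub_frame_usage bitmap_get_usage(uint64_t bitmap, uint32_t idx)
{
    return (enum sub_frame_usage)((bitmap >> (idx * BITS_PER_SUB_FRAME)) & 0x3u);
}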

In some embodiments, a super frame can be provisioned from various types of memory, such as SRAM or DRAM, and can be characterized as Slow versus Fast access memory. A super frame pool may be configured to contain all super frames of the same or similar type and same or similar access type (e.g., all SRAM slow access super frames may be combined in a common super frame pool whereas other super frames are combined in other super frame pools).

In some embodiments, a frame or sub frame allocation request can be configured to indicate the desired or required pool type (e.g., 2 Kbyte, 128 Byte, etc.), followed by the desired or required access type of Slow or Fast, and the requested frame size, which is typically expressed as an exponent of 2 (e.g., ‘0’ indicates 1 sub frame, ‘1’ indicates 2 sub frames, and ‘2’ indicates 4 sub frames, etc.). It should be appreciated that a separate stack of super frames can be maintained for each pool.
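
A minimal sketch of how such a request might be encoded follows (the structure and field names are hypothetical and are offered only as one possible representation of the pool type, access type, and size exponent described above):

#include <stdint.h>

/* Hypothetical encoding of a frame allocation request. */
enum pool_type   { POOL_2K, POOL_128 };        /* e.g., 2 Kbyte or 128 Byte pool */
enum access_type { ACCESS_SLOW, ACCESS_FAST };

struct frame_alloc_request {
    enum pool_type   pool;      /* desired or required pool type           */
    enum access_type access;    /* desired or required access type         */
    uint8_t          size_exp;  /* requested size as an exponent of 2:     */
                                /* 0 -> 1 sub frame, 1 -> 2, 2 -> 4, ...   */
};

/* Number of sub frames needed to satisfy the request. */
static inline uint32_t requested_sub_frames(const struct frame_alloc_request *req)
{
    return 1u << req->size_exp;
}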

In some embodiments, for each pool type and access type, a super frame tracker is maintained. The tracker contains the super frame ID that is currently allocated and not fully used and the usage count for the super frame. Whenever a sub frame is allocated from a super frame, an entry is added to the appropriate index in the tracker table. For example, if a super frame is allocated from the Fast Frame, 2 Kbyte Pool, the super frame ID and the number of sub frames used (e.g., the usage count) are added to the 2 Kbyte Pool, Fast access tracker at index 4. The super frame bitmap can also be updated to indicate which sub frame is currently in use.

In some embodiments, the tracker also maintains the usage count. The usage count may indicate which sub frame is available next. For example, a count of 1 indicates that the sub frame at index 0 is in use whereas a count of 2 indicates that the sub frames at indices 0 and 1 are in use. This avoids the need to search for free sub frames within the tracker. The sub frame indexed with the count would be the next frame that can be allocated.
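
The tracker behavior described above may be sketched as follows (a hypothetical C representation, assuming the usage count doubles as the index of the next free sub frame because allocation only moves forward; the names are not part of this disclosure):

#include <stdint.h>

#define INVALID_SUPER_FRAME 0xFFFFFFFFu

/* Hypothetical tracker entry: one per pool type, access type, and frame size. */
struct super_frame_tracker {
    uint32_t super_frame_id;  /* currently allocated, not fully used super frame */
    uint32_t usage_count;     /* number of sub frames already handed out         */
};

/* Because allocation is strictly forward, the usage count is also the index of
 * the next free sub frame, so no search is needed. */
static inline uint32_t next_free_sub_frame(const struct super_frame_tracker *t)
{
    return t->usage_count;
}

/* Record that 'count' sub frames were just allocated; drop the super frame from
 * the tracker once every sub frame in it has been used. */
static inline void tracker_consume(struct super_frame_tracker *t, uint32_t count,
                                   uint32_t sub_frames_per_super)
{
    t->usage_count += count;
    if (t->usage_count >= sub_frames_per_super)
        t->super_frame_id = INVALID_SUPER_FRAME;  /* completely allocated */
}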

Based on the size of the first request to be serviced from a freshly allocated super frame, the super frame ID is stored in the allocation pointer specific to that frame size, e.g., 64 Byte, 128 Byte, 256 Byte, 512 Byte, or 1 Kbyte. There would not necessarily need to be a tracker for the largest super frame in the pool. For example, assume there is no tracker for the 2 Kbyte super frame. If there is a request for a 2 Kbyte frame, then the entirety of the super frame is allocated directly from the super frame stack and, since it is used in full, there is no need for it to go into the tracker. On the other hand, whenever a sub frame is allocated from the tracker, the usage count is incremented. If the usage count becomes equal to the size of the super frame, then the super frame ID is removed from the tracker. This indicates that the super frame cannot be used further (e.g., the super frame is completely allocated).

In some embodiments, the sub frame allocation is performed as a forward lookup (e.g., from 0 to the maximum number of sub frames available). Sub frames are allocated until all of the sub frames belonging to a particular super frame are exhausted (e.g., even if some sub frames get freed or released), because the sub frames are not re-allocated for further requests until the entire super frame becomes free and is re-used for allocation. This is ensured by checking that the usage counter in the super frame tracker is only incremented and never decremented, even when a sub frame is freed or released.

After a super frame is allocated, it can be used to fulfill requests for frames with sizes ranging from 64 Bytes to 2 Kbytes (only power-of-two sizes are valid). When a sub frame needs to be allocated, a linear search from the 64 Byte index up to the largest frame size for the pool is performed to see if a super frame is available. If a super frame ID is valid at a particular index and it has the required number of sub frames to satisfy the request, the allocation is completed from this index. This helps ensure that there is no internal fragmentation. If the entirety of the request cannot be satisfied from any of the indices, then a new super frame is allocated and the super frame ID is added to the index corresponding to the request size.

Subsequent frame allocation requests can be fulfilled from the same super frame as long as there are sufficient sub frames left within the super frame. After an allocation is performed from a particular index, if the number of sub frames remaining is less than the size associated with the allocation pointer, the super frame ID is moved from the current allocation pointer to the allocation pointer designated for the lower size.

When a new request cannot be fulfilled from a super frame located at the lowest allocation pointer, the allocation pointers are scanned upward until another super frame that can fulfill the request is found. If no super frame is found or none of the super frames found can fulfill the request, a new super frame is allocated from the stack and the frame allocation process continues as described.
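
The upward scan of the allocation pointers may be sketched as follows (a hypothetical C fragment that reuses the tracker entry sketched earlier; the free-stack callback and the array of allocation pointers are assumptions, not elements shown in the drawings):

#include <stdint.h>

#define INVALID_SUPER_FRAME 0xFFFFFFFFu
#define NUM_SIZE_CLASSES    5u  /* allocation pointers for 64, 128, 256, 512, 1 K */

struct super_frame_tracker {    /* tracker entry as sketched earlier */
    uint32_t super_frame_id;
    uint32_t usage_count;
};

/* Walk the allocation pointers upward from the index matching the requested
 * size until a super frame with enough free sub frames is found; otherwise
 * pop a fresh super frame from the free stack and park it at the request's
 * own index. Returns the index to allocate from. */
static uint32_t select_super_frame(struct super_frame_tracker trackers[],
                                   uint32_t request_idx, uint32_t needed,
                                   uint32_t sub_frames_per_super,
                                   uint32_t (*pop_free_super_frame)(void))
{
    for (uint32_t i = request_idx; i < NUM_SIZE_CLASSES; i++) {
        struct super_frame_tracker *t = &trackers[i];
        if (t->super_frame_id != INVALID_SUPER_FRAME &&
            sub_frames_per_super - t->usage_count >= needed)
            return i;  /* this super frame can satisfy the request */
    }
    /* No tracked super frame can satisfy the request: allocate a new one. */
    trackers[request_idx].super_frame_id = pop_free_super_frame();
    trackers[request_idx].usage_count    = 0;
    return request_idx;
}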

In some embodiments, when an allocated sub frame gets released, the corresponding bits in the parent super frame bitmap get cleared. Furthermore, if the entire bitmap becomes clear for a particular super frame, then the super frame gets released (e.g., that particular super frame's super frame ID is pushed back into the allocation stack). A super frame may not be configured to be freed directly. Rather, it is freed when all of its bits are cleared as part of the freeing process of the sub frames.
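
A minimal sketch of the release path described above follows (hypothetical C, assuming the two-bit per sub frame bitmap sketched earlier and a free-stack callback that stands in for the allocation stack):

#include <stdint.h>

/* Clear the usage bits for a freed sub frame; when the whole bitmap becomes
 * clear, the parent super frame itself is pushed back onto the free stack. */
static void release_sub_frame(uint64_t *bitmap, uint32_t sub_frame_idx,
                              uint32_t super_frame_id,
                              void (*push_free_super_frame)(uint32_t))
{
    *bitmap &= ~((uint64_t)0x3u << (sub_frame_idx * 2u));  /* clear both bits */

    if (*bitmap == 0u)                          /* every sub frame is now free */
        push_free_super_frame(super_frame_id);  /* release the super frame     */
}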

In some embodiments, when an allocation is requested for a frame size that is the same as the super frame size, there is no search involved. A new super frame is immediately allocated directly from the free stack, the request is granted, and all of the bits corresponding to the super frame are set to indicate that the super frame is completely used.

It should be appreciated that the frames allocated by the mechanisms described herein can have particular characteristics, such as whether they serve Slow virtual disks or Fast virtual disks, whether they are allocated from SRAM, DRAM, etc., or the size of the super frame to which they belong (e.g., whether 2 Kbyte or 128 Byte). Characterizing the super frames in this way helps ensure that a module that requires frames that are freed up quickly uses only such frames, so that the super frame gets freed without being blocked by slow requests.

As can be appreciated, the disclosed frame allocation mechanisms provide a more efficient allocation strategy than most existing or known frame allocation techniques. The disclosed frame allocation mechanisms can cater to the needs of hardware caching acceleration where frames of various sizes and various characteristics are required.

With reference now to FIG. 1, additional details of a computing system 100 capable of implementing frame allocation techniques will be described in accordance with at least some embodiments of the present disclosure. The computing system 100 is shown to include a host system 104, a controller 108 (e.g., a RAID controller), and a storage array 112 having a plurality of storage devices 136a-N therein. The system 100 may utilize any type of data storage architecture. The particular architecture depicted and described herein (e.g., a RAID architecture) should not be construed as limiting embodiments of the present disclosure. If implemented as a RAID architecture, however, it should be appreciated that any type of RAID scheme may be employed (e.g., RAID-0, RAID-1, RAID-2, . . . , RAID-5, RAID-6, etc.).

In a RAID-0 (also referred to as a RAID level 0) scheme, data blocks are stored in order across one or more of the storage devices 136a-N without redundancy. This effectively means that none of the data blocks are copies of another data block and there is no parity block to recover from failure of a storage device 136. A RAID-1 (also referred to as a RAID level 1) scheme, on the other hand, uses one or more of the storage devices 136a-N to store a data block and an equal number of additional mirror devices for storing copies of a stored data block. Higher level RAID schemes can further segment the data into bits, bytes, or blocks for storage across multiple storage devices 136a-N. One or more of the storage devices 136a-N may also be used to store error correction or parity information.

A single unit of storage can be spread across multiple devices 136a-N and such a unit of storage may be referred to as a stripe. A stripe, as used herein and as is well known in the data storage arts, may include the related data written to multiple devices 136a-N as well as the parity information written to a parity storage device 136a-N. In a RAID-5 (also referred to as a RAID level 5) scheme, the data being stored is segmented into blocks for storage across multiple devices 136a-N with a single parity block for each stripe distributed in a particular configuration across the multiple devices 136a-N. This scheme can be compared to a RAID-6 (also referred to as a RAID level 6) scheme in which dual parity blocks are determined for a stripe and are distributed across each of the multiple devices 136a-N in the array 112.

One of the functions of the RAID controller 108 is to make the multiple storage devices 136a-N in the array 112 appear to a host system 104 as a single high capacity disk drive. Thus, the RAID controller 108 may be configured to automatically distribute data supplied from the host system 104 across the multiple storage devices 136a-N (potentially with parity information) without ever exposing the manner in which the data is actually distributed to the host system 104.

In the depicted embodiment, the host system 104 is shown to include a processor 116, an interface 120, and memory 124. It should be appreciated that the host system 104 may include additional components without departing from the scope of the present disclosure. The host system 104, in some embodiments, corresponds to a user computer, laptop, workstation, server, collection of servers, or the like. Thus, the host system 104 may or may not be designed to receive input directly from a human user.

The processor 116 of the host system 104 may include a microprocessor, central processing unit (CPU), collection of microprocessors, or the like. The memory 124 may be designed to store instructions that enable functionality of the host system 104 when executed by the processor 116. The memory 124 may also store data that is eventually written by the host system 104 to the storage array 112. Further still, the memory 124 may be used to store data that is retrieved from the storage array 112. Illustrative memory 124 devices may include, without limitation, volatile or non-volatile computer memory (e.g., flash memory, RAM, DRAM, ROM, EEPROM, etc.).

The interface 120 of the host system 104 enables the host system 104 to communicate with the RAID controller 108 via a host interface 128 of the RAID controller 108. In some embodiments, the interface 120 and host interface(s) 128 may be of a same or similar type (e.g., utilize a common protocol, a common communication medium, etc.) such that commands issued by the host system 104 are receivable at the RAID controller 108 and data retrieved by the RAID controller 108 is transmittable back to the host system 104. The interfaces 120, 128 may correspond to parallel or serial computer interfaces that utilize wired or wireless communication channels. The interfaces 120, 128 may include hardware that enables such wired or wireless communications. The communication protocol used between the host system 104 and the RAID controller 108 may correspond to any type of known host/memory control protocol. Non-limiting examples of protocols that may be used between interfaces 120, 128 include SAS, SATA, SCSI, FibreChannel (FC), iSCSI, ATA over Ethernet, InfiniBand, or the like.

The RAID controller 108 may provide the ability to represent the entire storage array 112 to the host system 104 as a single high volume data storage device. Any known mechanism can be used to accomplish this task. The RAID controller 108 may help to manage the storage devices 136a-N (which can be hard disk drives, solid-state drives, or combinations thereof) so as to operate as a logical unit. In some embodiments, the RAID controller 108 may be physically incorporated into the host device 104 as a Peripheral Component Interconnect (PCI) expansion card (e.g., a PCI Express (PCIe) card) or the like. In such situations, the RAID controller 108 may be referred to as a RAID adapter.

The storage devices 136a-N in the storage array 112 may be of similar types or may be of different types without departing from the scope of the present disclosure. The storage devices 136a-N may be co-located with one another or may be physically located in different geographical locations. The nature of the storage interface 132 may depend upon the types of storage devices 136a-N used in the storage array 112 and the desired capabilities of the array 112. The storage interface 132 may correspond to a virtual interface or an actual interface. As with the other interfaces described herein, the storage interface 132 may include serial or parallel interface technologies. Examples of the storage interface 132 include, without limitation, SAS, SATA, SCSI, FC, iSCSI, ATA over Ethernet, InfiniBand, or the like.

With reference now to FIG. 2, additional details of a RAID controller 108 will be described in accordance with at least some embodiments of the present disclosure. The RAID controller 108 is shown to include the host interface(s) 128 and storage interface(s) 132. The RAID controller 108 is also shown to include a processor 204, memory 208, one or more drivers 212, and a power source 216.

The processor 204 may include an Integrated Circuit (IC) chip or multiple IC chips, a CPU, a microprocessor, or the like. The processor 204 may be configured to execute instructions in memory 208 that are shown to include frame allocation instructions 224, bitmap management instructions 228, index management instructions 232, and frame type analysis instructions 236. Furthermore, in connection with executing the bitmap management instructions, the processor 204 may modify one or more data entries (e.g., bit values) in a super frame bitmap 220 that is shown to be maintained internally to the RAID controller 108. It should be appreciated, however, that some or all of the super frame bitmap 220 may be stored and/or maintained external to the RAID controller 108. Alternatively or additionally, the super frame bitmap 220 may be stored or contained within memory 208 of the RAID controller 108.

The memory 208 may be volatile and/or non-volatile in nature. As indicated above, the memory 208 may include any hardware component or collection of hardware components that are capable of storing instructions and communicating those instructions to the processor 204 for execution. Non-limiting examples of memory 208 include RAM, ROM, flash memory, EEPROM, variants thereof, combinations thereof, and the like.

The instructions stored in memory 208 are shown to be different instruction sets, but it should be appreciated that the instructions can be combined into a smaller number of instruction sets without departing from the scope of the present disclosure. The frame allocation instructions 224, when executed, may enable the processor 204 to respond to frame allocation requests, identify available super frames and sub frames therein, allocate such super frames or sub frames as appropriate, and communicate that such an allocation has occurred.

The bitmap management instructions 228, when executed, may enable the processor 204 to recognize that the frame allocation instructions 224 have allocated a super frame or sub frame. Based on that recognition, the bitmap management instructions 228 may adjust values for entries 240a-M within the super frame bitmap 220. For instance, when a new super frame is allocated for a frame allocation request, the bitmap management instructions 228 may change a bit value for a corresponding entry 240a-M of the now-allocated super frame in the bitmap 220. If a super frame is cleared and no longer allocated, then the corresponding entry 240a-M in the bitmap 220 may be changed back to an original value indicating non-allocation.

The index management instructions 232, when executed, may enable the processor 204 to manage usage counts for super frames allocated by the frame allocation instructions 224. In particular, as a new super frame becomes freshly allocated, the index management instructions 232 may increment or update a count assigned to the allocated super frame. If the usage count becomes equal to the size of the super frame, then the corresponding super frame ID can be removed from being tracked by the index management instructions. Such an action may indicate that the super frame is no longer eligible for further use or allocation.

The frame type analysis instructions 236, when executed, may enable the processor 204 to analyze frames and characteristics thereof. For instance, the frame type analysis instructions 236 may determine whether a particular super frame or sub frame is a fast or slow type of super frame or sub frame. The frame type analysis instructions 236 may alternatively or additionally enable a processor 204 to determine whether the super frame or sub frame is being allocated from a particular memory type (e.g., SRAM, DRAM, etc.).

With reference now to FIG. 3, additional details of an illustrative 2 Kbyte super frame 300 data structure will be described in accordance with at least some embodiments of the present disclosure. The super frame 300 is shown to include a plurality of sub frames 304, which could be organized into a plurality of 64 Byte columns. Each sub frame 304 may be of a particular size and the size of one sub frame 304 does not necessarily need to be the same as the size of other sub frames 304. Illustrative sizes of sub frames 304 can be 64 Bytes, 128 Bytes, 256 Bytes, 512 Bytes, or 1 Kbyte. In some embodiments, adjacent sub frames may be assigned sub frame IDs incrementally. That is, adjacent sub frames may have sequential sub frame IDs. Some of the sub frames 304 may have different characteristics than other sub frames 304. In some embodiments, the sub frames 304 which are allocated for a particular allocation request may depend upon the size of the sub frame and the frame size identified in the allocation request. It may be desirable for the frame allocation instructions 224 to identify sub frames 304 which have a size greater than or equal to the frame size identified in the allocation request and allocate a next available sub frame having the appropriate size. Furthermore, the frame allocation instructions 224 may be designed to allocate sub frames in a forward lookup manner, meaning that sub frames 304 within the super frame 300 are allocated in order until every sub frame 304 within the super frame 300 has been allocated. When a frame needs to be allocated, the frame allocation instructions 224 may perform a linear search until the largest frame size from the pool of available super frames that can accommodate the frame request is identified. This search may be completed using a search index that helps ensure there is no internal fragmentation of the super frame. The index may be maintained and updated as super frames are used and sub frames therefrom are allocated. The index may include usage counters for super frames and the index may be maintained by the index management instructions 232. If the entirety of a request cannot be satisfied from any entry in the index, then a new super frame is allocated and the super frame ID is added to the index entry corresponding to the requested size.

The sub frames 304 may also have usage information stored therein. In particular, as sub frames 304 are allocated, data contained within each corresponding sub frame 304 unit may be updated to reflect the allocation and/or type of allocation. As a non-limiting example, each sub frame 304 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame. As an example, such information may be stored using 2 bits of data (e.g., 00=unused sub frame; 01=sub frame used for LMID; 10=sub frame used for Scatter Gather Lists (SGLs)). As shown in FIG. 3, the super frame 300 still corresponds to a set of consecutively numbered sub frames 304.

With reference now to FIG. 4, additional details of another super frame 400 will be described in accordance with at least some embodiments of the present disclosure. The super frame 400 shown in FIG. 4 is shown to have a corresponding size of 128 Bytes and is constructed of X sub frames 404. The super frame 400 is organized similarly to super frame 300 except that super frame 400 has a different number of sub frames 404 and the number of columns 408 may be different from the number of columns in the super frame 300. Each sub frame 404 may be designed for allocation in response to a frame allocation request. Depending upon the size requested in the frame allocation request, a different number of sub frames 404 may be allocated to fulfill the request. The sub frames 404 may be allocated linearly (e.g., lower numbered sub frames 404 may be allocated before higher numbered sub frames 404) if the size of such sub frames 404 allows.

The sub frames 404 may also have usage information stored therein. In particular, as sub frames 404 are allocated, data contained within each corresponding sub frame 404 unit may be updated to reflect the allocation. As a non-limiting example, each sub frame 404 may have one or a set of bits stored therein (or associated therewith) that reflect a usage condition of the corresponding sub frame. As an example, such information may be stored using 2 bits of data (e.g., 00=unused sub frame; 01=sub frame used for LMID; 10=sub frame used for SGL). As shown in FIG. 4, the super frame 400 still corresponds to a set of consecutively numbered sub frames 404.

With reference now to FIG. 5, additional details of a data structure 500 used to store super frame information will be described in accordance with at least some embodiments of the present disclosure. The data structure 500 may correspond to an example of the super frame bitmap 220 without departing from the scope of the present disclosure. Alternatively or additionally, the data structure 500 may correspond to part or all of an index used to track super frame usage. In particular, the data structure 500 is shown to include a number of fields that enable tracking of super frame allocations. The fields included in the data structure 500 include a pool type field 504, an access type field 508, a frame size field 512, a frame ID field 516, and a usage count field 520. In some embodiments, for each pool type and access type, a data structure 500 in the format depicted in FIG. 5 may be used as a super frame tracker. The super frame tracker may contain the super frame identifier (in the frame ID field 516) of a super frame that is currently allocated and not fully used. In such a scenario, a usage count may also be updated to reflect the incomplete usage. Whenever a frame is allocated, an entry can be added to the appropriate index in the super frame tracker. As a non-limiting example, if a super frame is allocated from a fast frame, 2 Kbyte pool, then the super frame ID 516 and the number of sub frames used (which may also be referred to as the usage count 520) are added to the 2 Kbyte pool, Fast access tracker at index 4. The bitmap 220 can also be updated to indicate which sub frame is currently in use and the super frame to which the sub frame belongs.

The data structure 500 may also be used to maintain the ongoing usage count in the usage count field 520. The usage count field 520 may also reflect which sub frame is available for the next allocation request. For example, a count of “1” may indicate that the sub frame at index 0 is in use whereas a count of “2” may indicate that sub frames at indices 0 and 1 are both in use. This type of count system helps avoid the need to search all free sub frames within the tracker. Rather, the sub frame indexed with the count would correspond to the next available sub frame that is free for allocation. Thus, tracking of available and non-available sub frames can be completed with a single Byte of data, thereby avoiding the need to search every single sub frame to determine whether it is available (or not).

The pool type field 504 provides information related to whether a particular super frame is retrieved from or belongs to a set of relatively large super frames (e.g., 2 Kbyte super frames) or whether the particular super frame is retrieved from or belongs to a set of relatively small super frames (e.g., 128 Byte super frames). This information may be represented using one or several bits or it may be represented using a string (e.g., an alphanumeric string). The frame allocation instructions 224, bitmap management instructions 228, and index management instructions 232 may all work cooperatively to help simultaneously analyze allocation requests and update the appropriate data structures (e.g., bitmap 220 and data structures 300, 400, 500).

Based on the size of the first request to be serviced from a freshly allocated super frame, the super frame ID is stored in the allocation pointer specific to that frame size as defined in the frame size field 512. For instance, if a 64 Byte sub frame is allocated from a super frame, then the frame ID 516 entry for the corresponding frame size 512 entry is updated to include the identifier of the super frame from which the sub frame was allocated.

As can be seen, there is no tracker for the largest frame in the pool. That is, there is no particular need for a tracker for the entire 2 Kbyte super frame if a request consumes the entirety of that super frame's storage. Rather, if there is such a request, then the super frame is allocated directly from the super frame stack and, since it is in full use, there is no need to parse which sub frames were allocated from the super frame and which were not (since all were allocated).

Conversely, whenever a sub frame is allocated from the data structure 500, the corresponding usage count 520 is incremented by the index management instructions 232. When the usage count becomes equal to the size of the super frame, the super frame ID is removed, which indicates that the super frame is no longer available for use.

With reference now to FIGS. 6-9, additional details of frame allocation and associated bitmap and tracker/index management will be described in accordance with at least some embodiments of the present disclosure. Although certain steps will be described as being performed by particular components, it should be appreciated that embodiments of the present disclosure are not so limited. In particular, a RAID controller 108 or components thereof can be configured to perform some or all of the features described herein. Alternatively or additionally, the described functions can be performed in a component other than a RAID controller 108. For instance, the described functions can be performed within a host system 104 or in some other memory controller other than a RAID controller 108.

With reference initially to FIG. 6, a method of responding to a frame allocation request will be described in accordance with at least some embodiments of the present disclosure. The method begins when a controller 108 receives a frame allocation request from a host system 104 (step 604). The frame allocation request may be received in one or many packets of data. Alternatively or additionally, the frame request may be received in some other non-packet format. The frame allocation request may include an indication of a size of frame required to fulfill the request (e.g., a frame request size) along with possibly other information pertinent to the frame request (e.g., access type requested, pool type requested, etc.).

In response to receiving the frame allocation request, the controller 108 may invoke the frame allocation instructions 224 to allocate a super frame from a stack of free super frames (step 608). The specific super frame that is chosen by the frame allocation instructions 224 may be chosen to match the frame request size, the access type requested, and/or the pool type requested.

After or as the super frame is allocated, the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 612) and within the data structures 300, 400, or 500 to reflect the allocation of the chosen super frame. Furthermore, an identifier associated with the chosen super frame (e.g., a super frame ID) may be determined by the frame allocation instructions 224 (step 616) and that super frame ID may be entered into the appropriate data structures 300, 400, 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated. Once allocated, the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 620). This data may be stored in any storage device 136a-N or the like that is associated with the allocated super frame/sub frame.
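
A hypothetical end-to-end sketch of steps 604-620 follows (the callbacks stand in for the stack of free super frames and the super frame bitmap 220, which are implementation specific; none of the names below appear in the drawings):

#include <stdint.h>

/* Mirror of steps 604-620: receive the request parameters, pop a matching
 * super frame from the free stack (step 608), update the bitmap (step 612),
 * and return the super frame identifier so the super frame or its sub frames
 * can store data for this and subsequent requests (steps 616-620). */
static uint32_t handle_frame_allocation(uint32_t pool_type, uint32_t access_type,
                                        uint32_t (*pop_free_super_frame)(uint32_t,
                                                                         uint32_t),
                                        void (*mark_allocated_in_bitmap)(uint32_t))
{
    uint32_t super_frame_id = pop_free_super_frame(pool_type, access_type);
    mark_allocated_in_bitmap(super_frame_id);
    return super_frame_id;
}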

With reference now to FIG. 7, a method of allocating additional super frames from a stack of free super frames will be described in accordance with at least some embodiments of the present disclosure. The method begins with the frame allocation instructions 224 analyzing a frame allocation request after a super frame has already been partially allocated for a previous frame request. The frame allocation instructions 224 analyze subsequent frame allocation requests with respect to remaining frames (step 704). In this particular scenario, the frame allocation instructions 224 will identify/determine that the remaining sub frames within an allocated super frame are insufficient to store the data in connection with the recently-received frame allocation request (step 708).

In response to making this determination, the frame allocation instructions 224 will allocate a second super frame from the stack of free super frames (step 712). If necessary, the frame allocation instructions 224 may allocate multiple super frames to accommodate a frame request in which the requested frame size is larger than can be supported with a single super frame.

After or as the second super frame is allocated, the frame allocation instructions 224 and/or index management instructions 232 may update appropriate entries in the bitmap 220 (step 716) and within the data structures 300, 400, or 500 to reflect the allocation of the second super frame (and possibly other super frames). Furthermore, an identifier associated with the second super frame (e.g., a super frame ID #2) may be determined by the frame allocation instructions 224 (step 720) and that super frame ID may be entered into the appropriate data structures 300, 400, 500 to reflect that the super frame has been allocated and sub frames from that super frame have been allocated. Once allocated, the super frame (or sub frames therein) are enabled to store data in connection with the frame allocation request (step 724). This data may be stored in any storage device 136a-N or the like that is associated with the allocated super frame/sub frame.

With reference now to FIG. 8, additional details of a method of responding to a frame allocation request will be described in accordance with at least some embodiments of the present disclosure. The method begins when a frame allocation request is received at the controller 108 (step 804). As with other frame allocation requests described herein, the frame allocation request received in this step may define one or multiple characteristics associated with the desired frame or frame type. In particular, the allocation request may indicate a desired frame usage type (e.g., LMID or other memory type), desired frame access type (e.g., Slow or Fast), desired frame size, and/or desired pool type (e.g., 2 Kbyte versus 128 Byte).

The frame allocation instructions 224 may then determine whether a full super frame is necessary to accommodate the frame allocation request (step 808). If the query of step 808 is answered negatively, then the method continues with the frame allocation instructions 224 searching/traversing the data structure 500 starting from Index 0 (step 812). As the frame allocation instructions 224 search the data structure 500, the frame allocation instructions 224 determine whether the frame allocation request can be satisfied from the index currently being analyzed (step 816). If the answer to this query is negative, then the Index is incremented (step 820) and the analysis of step 816 is repeated as long as the current Index is not greater than a predefined maximum Index (step 824). If no available sub frame or super frame is found before the Index exceeds the maximum Index, then the frame allocation instructions 224 and/or the index management instructions 232 will obtain a new super frame, set the appropriate super frame ID, update the tracker information, update the bitmap 220 for the appropriate sub frames being allocated from within the super frame, and then increment the usage count for the super frame from which the sub frames were allocated (step 828). As discussed above, the amount by which the usage count is incremented will depend upon the sub frame that is allocated and the size of the allocated sub frame. The method then proceeds by returning the allocated sub frame for data storage (step 832).

Referring back to step 816, if a sub frame is identified from an already-allocated super frame prior to the Index reaching the maximum index, then the appropriately sized sub frame from the already-allocated super frame is allocated. This results in the frame allocation instructions 224 and/or the index management instructions 232 setting the super frame ID and the sub frame ID for the allocated sub frame and then incrementing the usage count for the allocated sub frame (step 844). Thereafter, the index management instructions 232 will determine whether the usage count is greater than or equal to the maximum number of frames for the pool being analyzed (step 848). If the usage count is greater than or equal to the maximum number of frames for the pool, then the tracker index is invalidated (step 852), after which the method proceeds to step 832.

On the other hand, if the usage count is less than the maximum number of frames for the pool, then the method proceeds with the index management instructions 232 determining whether the Index is equal to the current index (step 856). If this query is answered negatively, then the method proceeds to step 832. If the query of step 856 is answered affirmatively, then the index management instructions 232 invalidate the current index and set the tracker to a new target index that corresponds to an index of the super frame ID that was set in step 844 (step 860). Thereafter, the method proceeds to step 832.

Referring back to step 808, if a full frame is requested, then the frame allocation instructions 224 will allocate a new super frame from the stack of free super frames (step 836). Thereafter or simultaneously therewith, all of the bits in the super frame bitmap are initialized. During this initialization, the bits in the super frame bitmap have their corresponding sub frame IDs set equal to the super frame ID times the super frame size (step 840). This ensures that all of the sub frames within the newly allocated super frame maintain continuous addressing, which ultimately increases the speed with which sub frames are analyzed for later distribution toward a frame allocation request. Thereafter, the method proceeds to step 832.
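
One possible reading of the initialization in steps 836-840 is sketched below (hypothetical C; deriving each sub frame ID from the super frame ID is an assumption consistent with the continuous addressing described above, and the bitmap write simply marks the super frame as completely used):

#include <stdint.h>

/* Full-size request: derive every sub frame ID from the super frame ID so the
 * sub frames keep continuous addressing, and set the bitmap to show that the
 * super frame is completely used. */
static void init_full_super_frame(uint32_t super_frame_id,
                                  uint32_t sub_frames_per_super,
                                  uint32_t *sub_frame_ids, uint64_t *bitmap)
{
    for (uint32_t i = 0; i < sub_frames_per_super; i++)
        sub_frame_ids[i] = super_frame_id * sub_frames_per_super + i;

    *bitmap = ~(uint64_t)0u;  /* all usage bits set: no sub frame is free */
}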

With reference now to FIG. 9, details of a method of releasing a super frame back to a stack of free super frames will be described in accordance with at least some embodiments of the present disclosure. The method begins when a request is received at the controller 108 to free a super frame (step 904). This request may be initiated by the host system 104 or some other component in the system 100.

In response to receiving the request, the super frame has its sub frames and their corresponding information analyzed (step 908). This analysis may be performed by the frame allocation instructions 224, the index management instructions 232, or some other component of the controller 108. The appropriate bits (or data fields) in the super frame bitmap are then cleared (step 912). Thereafter, an inquiry is made as to whether or not all of the bitmap has been cleared (step 916). If so, then the super frame is released back to the stack or pool of free super frames (step 920). If not, then the method will simply end (step 924) without releasing the super frame back to the stack or pool of free super frames.

Specific details were given in the description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.

While illustrative embodiments of the disclosure have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

Claims

1. A method for efficient variable length memory frame allocation, the method comprising:

receiving a frame allocation request from a host system;
allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames;
updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames;
determining a super frame identifier for the allocated super frame; and
enabling the super frame or the set of consecutively numbered frames to be allocated to storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.

2. The method of claim 1, wherein the set of consecutively numbered frames are allocated to subsequent requests.

3. The method of claim 2, wherein the frame allocation request corresponds to a request for data storage in an amount less than a total amount of data that can be stored in the super frame and wherein frames from the set of consecutively numbered frames are allocated in order until a sufficient number of frames have been allocated to accommodate the request for data storage in the amount less than the total amount of data that can be stored in the super frame.

4. The method of claim 3, wherein further subsequent frame allocation requests are analyzed to determine whether remaining frames in the set of consecutively numbered frames are sufficient to accommodate the further subsequent frame allocation requests.

5. The method of claim 4, further comprising:

first analyzing the further subsequent frame allocation requests with respect to remaining frames in the set of consecutively numbered frames; and
in the event that the remaining frames in the set of consecutively numbered frames are insufficient to store data in connection with the further subsequent frame allocation requests, then, in response thereto, allocating a second super frame from the stack of free super frames for the further subsequent frame allocation requests, the second super frame comprising a second set of consecutively numbered frames.

6. The method of claim 5, wherein the second set of consecutively numbered frames are sequentially numbered with respect to the set of consecutively numbered frames belonging to the super frame.

7. The method of claim 5, further comprising:

updating entries in the super frame bitmap to indicate that the second super frame has been allocated from the stack of free super frames; and
determining a second super frame identifier for the allocated second super frame.

8. The method of claim 1, further comprising:

determining that all frames in the set of consecutively numbered frames belonging to the super frame are no longer required for allocation to data;
marking all of the frames in the set of consecutively numbered frames belonging to the super frame as available; and
returning the super frame back to the stack of free super frames.

9. The method of claim 1, further comprising:

assigning the super frame to a register from a set of registers based on an amount of unallocated frames in the set of consecutively numbered frames; and
enabling data allocation decisions to be made based on an ordered analysis of the set of registers, wherein a first register in the set of registers that is analyzed in the ordered analysis is assigned to a first super frame having fewer unallocated frames than a second super frame that is assigned to a second register in the set of registers.

10. The method of claim 1, wherein the set of consecutively numbered frames in the super frame are designated as being either a fast access type or a slow access type.

11. A computing system, comprising:

a processor; and
computer memory coupled to the processor, the computer memory including instructions that are executable by the processor, the instructions comprising: instructions that receive and process a frame allocation request from a host system; instructions that allocate a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames; instructions that update entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames; instructions that determine a super frame identifier for the allocated super frame; and instructions that enable the super frame or the set of consecutively numbered frames to be allocated to storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.

12. The computing system of claim 11, wherein the set of consecutively numbered frames are allocated to subsequent requests.

13. The computing system of claim 12, wherein the frame allocation request corresponds to a request for data storage in an amount less than a total amount of data that can be stored in the super frame and wherein frames from the set of consecutively numbered frames are allocated in order until a sufficient number of frames have been allocated to accommodate the request for data storage in the amount less than the total amount of data that can be stored in the super frame.

14. The computing system of claim 13, wherein further subsequent frame allocation requests are analyzed to determine whether remaining frames in the set of consecutively numbered frames are sufficient to accommodate the further subsequent frame allocation requests.

15. The computing system of claim 14, wherein the instructions further enable the processor to:

first analyze the further subsequent frame allocation requests with respect to remaining frames in the set of consecutively numbered frames; and
in the event that the remaining frames in the set of consecutively numbered frames are insufficient to store data in connection with the further subsequent frame allocation requests, then, in response thereto, allocate a second super frame from the stack of free super frames for the further subsequent frame allocation requests, the second super frame comprising a second set of consecutively numbered frames.

16. The computing system of claim 15, wherein the second set of consecutively numbered frames are sequentially numbered with respect to the set of consecutively numbered frames belonging to the super frame.

17. The computing system of claim 15, wherein the instructions further enable the processor to:

updating entries in the super frame bitmap to indicate that the second super frame has been allocated from the stack of free super frames; and
determining a second super frame identifier for the allocated second super frame.

18. The computing system of claim 11, wherein the instructions further enable the processor to:

determining that all frames in the set of consecutively numbered frames belonging to the super frame are no longer required for allocation to data;
marking all of the frames in the set of consecutively numbered frames belonging to the super frame as available; and
returning the super frame back to the stack of free super frames.

19. The computing system of claim 11, wherein the instructions further enable the processor to:

assigning the super frame to a register from a set of registers based on an amount of unallocated frames in the set of consecutively numbered frames; and
enabling data allocation decisions to be made based on an ordered analysis of the set of registers, wherein a first register in the set of registers that is analyzed in the ordered analysis is assigned to a first super frame having fewer unallocated frames than a second super frame that is assigned to a second register in the set of registers.

20. The computing system of claim 11, wherein the set of consecutively numbered frames in the super frame are designated as being either a fast access type or a slow access type.

Patent History
Publication number: 20180113639
Type: Application
Filed: Oct 26, 2016
Publication Date: Apr 26, 2018
Inventors: Horia Simionescu (Foster City, CA), Eugene Saghi (Colorado Springs, CO), Sridhar Rao Veerla (Bangalore), Panthini Pandit (Bangalore), Timothy Hoglund (Colorado Springs, CO), Gowrisankar Radhakrishnan (Colorado Springs, CO)
Application Number: 15/335,014
Classifications
International Classification: G06F 3/06 (20060101);