Two-level indexing for key-value persistent storage device
A system and method for two-level indexing for key-value persistent storage. The method may include: sorting two or more key-value pairs to form a sorted key-value pair set; determining an address of a first key-value pair of the key-value pairs, the first key-value pair including a first key and a first value; determining an address of a second key-value pair of the key-value pairs, the second key-value pair including a second key and a second value; and training a first linear regression model to generate a first line corresponding to the key-value pairs, the training including training the first linear regression model with key-value pairs including the first key-value pair and the second key-value pair.
The present application claims priority to and the benefit of U.S. Provisional Application No. 63/285,802, filed Dec. 3, 2021, entitled “TWO LEVEL MEMORY EFFICIENT INDEXING FOR KV SSD USING LINEAR REGRESSION”, the entire content of which is incorporated herein by reference.
FIELD

One or more aspects of embodiments according to the present disclosure relate to persistent storage, and more particularly to a two-level indexing system for key-value persistent storage.
BACKGROUND

Key-value storage devices have various uses for data storage, e.g., in server systems. In such a storage device, data may be stored as values, each value being identified by a respective key, and a host using the key-value storage device may, for example, send a read request (or “Get command”) including a key, the key identifying the value to be read from storage.
It is with respect to this general technical environment that aspects of the present disclosure are related.
SUMMARY

In some embodiments, a key-value persistent storage device includes two indexing systems for mapping keys to values: (i) a hash table, and (ii) recursively indexed storage. The hash table may be employed when new key-value pairs are written to the key-value persistent storage device, and periodically, e.g., when wear leveling or garbage collection is performed, some of the key-value pairs (e.g., ones that are determined to be longer-lived than others) may be moved to the recursively indexed storage. The recursively indexed storage may employ a tree structure (e.g., a tree of linear models) to map keys to value storage locations, with higher levels, including internal nodes, in the tree directing any query related to a key toward a lower-level external node, which includes a linear mapping from keys to addresses in persistent storage.
According to an embodiment of the present disclosure, there is provided a method, including: sorting two or more key-value pairs to form a sorted key-value pair set; determining an address of a first key-value pair of the key-value pairs, the first key-value pair including a first key and a first value; determining an address of a second key-value pair of the key-value pairs, the second key-value pair including a second key and a second value; and constructing a model based on the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair.
In some embodiments, the method further includes performing a data-moving operation in a block of a key-value persistent storage device, the performing of the data-moving operation including identifying the two or more key-value pairs.
In some embodiments, the data-moving operation is a wear-leveling operation.
In some embodiments, the method further includes storing the sorted key-value pair set in a region of storage, wherein the determining of the address of the first key-value pair includes determining a first address at which the first key-value pair is stored.
In some embodiments: the constructing of the model includes training a first linear regression model to generate a first line corresponding to the key-value pairs, the training including training the first linear regression model with key-value pairs and corresponding addresses, including the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair; and the method further includes: receiving a command to access a third key-value pair of the two or more key-value pairs, and determining, based on the first line, an approximate address of the third key-value pair.
In some embodiments, the determining of the approximate address includes multiplying a key of the third key-value pair by a factor and adding an offset, the factor and the offset being based on a slope and offset of the first line.
In some embodiments: the constructing of the model includes training a first linear regression model to generate a first line corresponding to the key-value pairs, the training including training the first linear regression model with key-value pairs and corresponding addresses, including the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair; and the method further includes: determining that a difference between an approximate address for the first key-value pair, based on the first line, and the address of the first key-value pair exceeds a threshold; and training a second linear regression model to generate a second line corresponding to a first subset of the key-value pairs, the training including training the second linear regression model with the first subset of the key-value pairs.
In some embodiments, the threshold is the difference between the address of the first key-value pair and a page boundary.
In some embodiments, the method further includes: receiving a command to access a key-value pair of the two or more key-value pairs, and determining that the key-value pair is in the first subset.
In some embodiments, the method further includes determining an approximate address of the key-value pair based on the second line.
In some embodiments, the method further includes reading, from persistent storage, a page, the approximate address being within the page.
According to an embodiment of the present disclosure, there is provided a key-value persistent storage device, including: persistent storage; a buffer; and a processing circuit, configured to: sort two or more key-value pairs to form a sorted key-value pair set; determine an address of a first key-value pair of the key-value pairs, the first key-value pair including a first key and a first value; determine an address of a second key-value pair of the key-value pairs, the second key-value pair including a second key and a second value; and construct a model based on the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair.
In some embodiments, the processing circuit is further configured to perform a data-moving operation in a block of a key-value persistent storage device, the performing of the data-moving operation comprising identifying the two or more key-value pairs.
In some embodiments, the data-moving operation is a wear-leveling operation.
In some embodiments, the processing circuit is further configured to store the sorted key-value pair set in a region of storage, wherein the determining of the address of the first key-value pair includes determining a first address at which the first key-value pair is stored.
In some embodiments: the constructing of the model includes training a first linear regression model to generate a first line corresponding to the key-value pairs, the training including training the first linear regression model with key-value pairs and corresponding addresses, including the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair; and the processing circuit is further configured to: receive a command to access a third key-value pair of the two or more key-value pairs, and determine, based on the first line, an approximate address of the third key-value pair.
In some embodiments, the determining of the approximate address includes multiplying a key of the third key-value pair by a factor and adding an offset, the factor and the offset being based on a slope and offset of the first line.
In some embodiments: the constructing of the model includes training a first linear regression model to generate a first line corresponding to the key-value pairs, the training including training the first linear regression model with key-value pairs and corresponding addresses, including the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair; and the processing circuit is further configured to: determine that a difference between an approximate address for the first key-value pair, based on the first line, and the address of the first key-value pair exceeds a threshold; and train a second linear regression model to generate a second line corresponding to a first subset of the key-value pairs, the training including training the second linear regression model with the first subset of the key-value pairs.
In some embodiments, the threshold is the difference between the address of the first key-value pair and a page boundary; and the processing circuit is further configured to: receive a command to access a key-value pair of the two or more key-value pairs, and determine that the key-value pair is in the first subset.
In some embodiments, the processing circuit is further configured to determine an approximate address of the key-value pair based on the second line.
According to an embodiment of the present disclosure, there is provided a key-value persistent storage device, including: persistent storage; a buffer; and means for processing, configured to: sort two or more key-value pairs to form a sorted key-value pair set; determine an address of a first key-value pair of the key-value pairs, the first key-value pair including a first key and a first value; determine an address of a second key-value pair of the key-value pairs, the second key-value pair including a second key and a second value; and construct a model based on the first key-value pair, the address of the first key-value pair, the second key-value pair, and the address of the second key-value pair.
These and other features and advantages of the present disclosure will be appreciated and understood with reference to the specification, claims, and appended drawings.
DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of a two-level indexing system for key-value persistent storage provided in accordance with the present disclosure and is not intended to represent the only forms in which the present disclosure may be constructed or utilized. The description sets forth the features of the present disclosure in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be accomplished by different embodiments that are also intended to be encompassed within the scope of the disclosure. As denoted elsewhere herein, like element numbers are intended to indicate like elements or features.
Key-value persistent storage devices (such as key-value solid state drives (SSDs)) have various uses for data storage, e.g., in server systems. In such a storage device, data may be stored as values, each value being identified by a respective key, and a host using the key-value persistent storage device may, for example, send a read request (or “Get command”) including a key, the key identifying the value to be read from storage. The key-value persistent storage device may include persistent storage (e.g., flash memory, organized into blocks (the smallest unit that may be erased) and pages (the smallest unit that may be read or written)) and a buffer (e.g., dynamic random-access memory (DRAM)). Keys may be mapped to the storage locations of their values using a hash table, which, in operation, may be stored in the buffer for faster access. The hash table may include each key and a pointer to the location, in persistent storage, of the corresponding value. If the keys are large (e.g., larger than 255 bytes), the hash table may not fit into the buffer, necessitating costly swapping of the buffer with data stored in the persistent storage. Some key-value persistent storage devices may therefore limit the permissible maximum key size, which may be an inconvenient constraint for some users or applications.
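For illustration, a minimal sketch of such a first-level hash index, and of why large keys inflate its DRAM footprint, might look as follows; the structures and pointer size here are assumptions made for illustration, not details taken from the disclosure:

```python
# A minimal sketch (assumed structures, not from the disclosure) of the
# first-level index: an in-buffer hash table mapping each full key to
# the address of its value in persistent storage. Because every full
# key is held in DRAM, large keys make the table itself the dominant
# consumer of the buffer.

class HashIndex:
    def __init__(self):
        self.table = {}  # key bytes -> address in persistent storage

    def put(self, key: bytes, address: int):
        self.table[key] = address

    def get(self, key: bytes):
        return self.table.get(key)  # None if the key is absent

    def dram_footprint(self, pointer_size: int = 8) -> int:
        """Rough footprint: every full key plus one pointer per entry."""
        return sum(len(k) + pointer_size for k in self.table)
```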
As such, in some embodiments, the size of a hash table of a key-value persistent storage device may be reduced by moving some key-value pairs to one or more separately maintained storage pools referred to herein as “recursively indexed storage”.
During data-moving operations (for blocks storing key-value pairs indexed using the hash table), such as wear leveling operations, garbage collection operations, or data-moving operations to avoid irrecoverable read-disturb errors, long-lived key-value pairs may be identified as, for example, key-value pairs that have remained unchanged for a long time or key-value pairs that remain valid in a block in which garbage collection is being performed. This identification may be performed, for example, by a data-moving circuit or method (e.g., a garbage collector or a wear-leveling circuit or method) that is aware of the availability of recursively indexed storage (which, in the case of a garbage collector, may be referred to as an “RMI-aware garbage collector” 130). Such key-value pairs may be moved to recursively indexed storage, as discussed in further detail below, by a recursive model index circuit (RMI circuit) 135.
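As a sketch of how this identification step might look (the validity and last-write metadata fields, and the age threshold, are assumptions for illustration, not details from the disclosure):

```python
import time

# A sketch of identifying long-lived pairs while collecting a block:
# pairs that are still valid and have not changed for longer than an
# assumed age threshold become candidates for migration to the
# recursively indexed storage.

AGE_THRESHOLD_S = 24 * 3600  # assumed: unchanged for a day = long-lived

def select_long_lived(block_entries, now=None):
    """block_entries: iterable of dicts with 'key', 'valid', 'last_write'."""
    now = time.time() if now is None else now
    return [e["key"] for e in block_entries
            if e["valid"] and now - e["last_write"] > AGE_THRESHOLD_S]
```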
A recursive model index (RMI) may be generated as follows. The key-value pairs to be moved may be sorted by key and stored in a region of persistent storage, and the resulting association between each key and the address at which its key-value pair is stored is referred to herein as the “storage mapping”.
For example, an RMI may be constructed based on the storage mapping, as follows. The recursive model index may be a multi-stage or “multi-level” model tree that may be traversed for any key of the storage mapping, to find an approximate address for the key. As used herein, an “approximate address” for a key is an address that identifies the page containing the key-value pair (e.g., an address in the same page as the address of the key-value pair, or an address differing from the address of the key-value pair by less than the difference between the address of the key-value pair and a page boundary). As such, an approximate address is sufficient for reading the first portion of the key-value pair from persistent storage 125 (without having to perform additional read operations from persistent storage 125 to find the key-value pair); once a page containing the key-value pair has been read into the buffer of the key-value persistent storage device, the first portion of the key-value pair may be found by searching the memory buffer. In some embodiments, a delimiter (a reserved bit sequence, which may be sufficiently long (e.g., at least 80 bits long, or at least 128 bits long, or at least 256 bits long) that the likelihood of its appearing by random chance is acceptably small) is used to mark the start of each key-value pair in storage.
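For illustration, a delimiter-based search of a page that has been read into the buffer might look like the following sketch; the delimiter value and the record layout (a 2-byte key length preceding the key) are assumptions, not taken from the disclosure:

```python
# A sketch of searching a page (already read into the buffer) for a
# key-value pair: scan for the reserved delimiter marking the start of
# each record, then compare the stored key. The 128-bit delimiter value
# and the record layout (2-byte key length, key, value) are assumptions.

DELIM = b"\xd3\x1b\x7f\xa9" * 4  # assumed 128-bit reserved bit sequence

def find_value_in_page(page: bytes, wanted_key: bytes):
    pos = page.find(DELIM)
    while pos != -1:
        start = pos + len(DELIM)
        klen = int.from_bytes(page[start:start + 2], "big")
        key = page[start + 2:start + 2 + klen]
        if key == wanted_key:
            return start + 2 + klen  # offset at which the value begins
        pos = page.find(DELIM, start)
    return None  # the pair (or its first portion) is not in this page
```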
The RMI may include one or more internal nodes and one or more external nodes. Each internal node may, in operation, receive keys and map each key to another node in the next level of the tree (e.g., if the internal node is in the second level, it may map each key to a respective node in the third level). Each external node may, in operation, receive keys and map each key to a respective approximate address. Each external node may include a linear regression model (e.g., a function for a straight line, of the form y = ax + b) that, given a key (as a value for x), returns the approximate address as the value for y; e.g., the approximate address may be calculated by multiplying the key by a factor (the factor a) and adding an offset (the offset b), where a and b are based on the slope and offset of the line. The linear regression model may be trained by fitting the function to a subset of the storage mapping. The RMI may be constructed by (i) fitting a first straight line to the entire storage mapping and calculating a measure of how well the resulting line fits the storage mapping, and (ii) if the fit is not sufficiently good, partitioning the storage mapping into subsets and fitting further models to the subsets, repeating this process recursively as needed. If the fit of the first straight line is sufficiently good (e.g., if it successfully calculates an approximate address for each key of the storage mapping), the construction of the RMI may terminate, and the RMI may consist of a single external node. The first straight line (which, in this example, is the entire RMI) may then be used to find an approximate address for any key in the storage mapping.
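As an illustration of this construction, the following Python sketch fits a least-squares line to a sorted (key, address) storage mapping and recursively splits the mapping whenever the line fails to be page-accurate. This is a minimal sketch, not the patented implementation: the page size, integer keys, and half-split partitioning heuristic are assumptions made here for illustration.

```python
# A minimal sketch of constructing an RMI over a sorted list of
# (key, address) pairs. A leaf holds one fitted line y = a*x + b; an
# internal node holds a key boundary and two children. The half-split
# heuristic and PAGE_SIZE are assumptions for illustration.

PAGE_SIZE = 4096  # assumed page size in bytes

def fit_line(pairs):
    """Least-squares fit of address = a * key + b over (key, address) pairs."""
    n = len(pairs)
    mean_k = sum(k for k, _ in pairs) / n
    mean_a = sum(a for _, a in pairs) / n
    var = sum((k - mean_k) ** 2 for k, _ in pairs)
    cov = sum((k - mean_k) * (a - mean_a) for k, a in pairs)
    a = cov / var if var else 0.0
    b = mean_a - a * mean_k
    return a, b

def same_page(addr1, addr2):
    return addr1 // PAGE_SIZE == addr2 // PAGE_SIZE

def build_rmi(sorted_pairs):
    """Return a leaf if one line is page-accurate for every key;
    otherwise split the mapping in half and recurse (internal node)."""
    a, b = fit_line(sorted_pairs)
    if all(same_page(int(a * k + b), addr) for k, addr in sorted_pairs):
        return ("leaf", a, b)
    mid = len(sorted_pairs) // 2
    boundary = sorted_pairs[mid][0]
    return ("node", boundary,
            build_rmi(sorted_pairs[:mid]),
            build_rmi(sorted_pairs[mid:]))

def lookup(rmi, key):
    """Traverse internal nodes, then evaluate the leaf line y = a*x + b."""
    while rmi[0] == "node":
        _, boundary, left, right = rmi
        rmi = left if key < boundary else right
    _, a, b = rmi
    return int(a * key + b)  # approximate address: same page as the pair
```

The page-accuracy test mirrors the definition of an approximate address above: a prediction is acceptable as long as it lands in the same page as the pair it indexes.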
When the persistent storage 125 is flash memory, the recursively indexed storage may occupy a plurality of blocks in the persistent storage 125, and a separate RMI may be constructed for each block. When a key is to be looked up in the recursively indexed storage, a coarse lookup table may be employed to determine the block in which the key and value are stored, and the RMI for that block may then be employed to identify the page within which the key and value (or a first portion of the key and value) are stored. The structure of the recursive model index may make it unnecessary to keep a large number of keys in the buffer of the key-value persistent storage device; instead, the recursive model index may need to keep only, for each internal node, a set of key boundaries (key values that are at the boundaries between subsets of keys) and, for each external node, the factor (a) and the offset (b) defining the line of the linear regression model. As such, the keys may be relatively large; e.g., the size of each key may be up to a value between 100 bytes and 1 MB (e.g., up to 100 kB).
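Continuing the sketch above (and reusing its lookup() and PAGE_SIZE), a hypothetical two-stage lookup might combine a coarse per-block boundary table with the per-block RMIs; the structures and names here are assumptions for illustration:

```python
import bisect

# A sketch (assumed structures) of the two-stage lookup: a coarse
# boundary table maps a key to the block that holds it, and that
# block's RMI yields a page-accurate approximate address. Reuses
# lookup() and PAGE_SIZE from the construction sketch above.

def find_block(block_boundaries, key):
    """block_boundaries[i] is the smallest key stored in block i + 1."""
    return bisect.bisect_right(block_boundaries, key)

def recursively_indexed_get(block_boundaries, block_rmis, key):
    block = find_block(block_boundaries, key)
    approx = lookup(block_rmis[block], key)  # per-block RMI traversal
    page = approx // PAGE_SIZE               # page to read from flash
    return block, page
```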
In some embodiments, the RMI-aware garbage collector 130 may employ various factors to identify blocks in which garbage collection is to be performed (e.g., blocks that are to be erased, after any valid data are moved). Similarly, in some embodiments, an RMI-aware circuit for wear leveling, or an RMI-aware circuit for performing data-moving operations to avoid irrecoverable read-disturb errors, may employ the same factors or analogous factors to identify blocks from which data are to be moved. These factors may include, for example, the number of invalidated keys in the block, the average key size, and the device memory pressure (e.g., the fraction of the storage device buffer currently being used). Another factor may be an “access frequency factor”, the value of which may be set based on a table of access frequencies.
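A hypothetical way of combining these factors into a single per-block score is sketched below; the disclosure lists the factors but not a formula, so the weights and normalizations here are assumptions:

```python
# A sketch of scoring candidate blocks for garbage collection or data
# movement using the factors listed above. The weights and
# normalizations are illustrative assumptions, not from the disclosure.

def block_score(invalid_keys, total_keys, avg_key_size, max_key_size,
                memory_pressure, access_frequency_factor):
    """Higher score -> better candidate for collection/migration."""
    invalid_ratio = invalid_keys / total_keys if total_keys else 0.0
    key_size_ratio = avg_key_size / max_key_size if max_key_size else 0.0
    return (0.4 * invalid_ratio          # mostly-invalid blocks reclaim space
            + 0.2 * key_size_ratio       # large keys burden the hash table
            + 0.2 * memory_pressure      # fraction of device buffer in use
            + 0.2 * access_frequency_factor)
```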
As used herein, “a portion of” something means “at least some of” the thing, and as such may mean less than all of, or all of, the thing. As such, “a portion of” a thing includes the entire thing as a special case, i.e., the entire thing is an example of a portion of the thing. As used herein, a “subset” of a set is either the set or a proper subset of the set. As used herein, when a second quantity is “within Y” of a first quantity X, it means that the second quantity is at least X-Y and the second quantity is at most X+Y. As used herein, when a second number is “within Y %” of a first number, it means that the second number is at least (1−Y/100) times the first number and the second number is at most (1+Y/100) times the first number. As used herein, the term “or” should be interpreted as “and/or”, such that, for example, “A or B” means any one of “A” or “B” or “A and B”.
The background provided in the Background section of the present disclosure is included only to set context, and the content of this section is not admitted to be prior art. Any of the components or any combination of the components described (e.g., in any system diagrams included herein) may be used to perform one or more of the operations of any flow chart included herein. Further, (i) the operations are example operations, and may involve various additional steps not explicitly covered, and (ii) the temporal order of the operations may be varied.
The methods disclosed herein may be performed by one or more processing circuits; for example, the RMI circuit 135 may be, or be part of, or include, a processing circuit. The term “processing circuit” is used herein to mean any combination of hardware, firmware, and software, employed to process data or digital signals. Processing circuit hardware may include, for example, application specific integrated circuits (ASICs), general purpose or special purpose central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), and programmable logic devices such as field programmable gate arrays (FPGAs). In a processing circuit, as used herein, each function is performed either by hardware configured, i.e., hard-wired, to perform that function, or by more general-purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium. A processing circuit may be fabricated on a single printed circuit board (PCB) or distributed over several interconnected PCBs. A processing circuit may contain other processing circuits; for example, a processing circuit may include two processing circuits, an FPGA and a CPU, interconnected on a PCB.
As used herein, when a method (e.g., an adjustment) or a first quantity (e.g., a first variable) is referred to as being “based on” a second quantity (e.g., a second variable) it means that the second quantity is an input to the method or influences the first quantity, e.g., the second quantity may be an input (e.g., the only input, or one of several inputs) to a function that calculates the first quantity, or the first quantity may be equal to the second quantity, or the first quantity may be the same as (e.g., stored at the same location or locations in memory as) the second quantity.
It will be understood that, although the terms “first”, “second”, “third”, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed herein could be termed a second element, component, region, layer or section, without departing from the spirit and scope of the inventive concept.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concept. As used herein, the terms “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art.
As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Further, the use of “may” when describing embodiments of the inventive concept refers to “one or more embodiments of the present disclosure”. Also, the term “exemplary” is intended to refer to an example or illustration. As used herein, the terms “use,” “using,” and “used” may be considered synonymous with the terms “utilize,” “utilizing,” and “utilized,” respectively.
It will be understood that when an element or layer is referred to as being “on”, “connected to”, “coupled to”, or “adjacent to” another element or layer, it may be directly on, connected to, coupled to, or adjacent to the other element or layer, or one or more intervening elements or layers may be present. In contrast, when an element or layer is referred to as being “directly on”, “directly connected to”, “directly coupled to”, or “immediately adjacent to” another element or layer, there are no intervening elements or layers present.
Any numerical range recited herein is intended to include all sub-ranges of the same numerical precision subsumed within the recited range. For example, a range of “1.0 to 10.0” or “between 1.0 and 10.0” is intended to include all subranges between (and including) the recited minimum value of 1.0 and the recited maximum value of 10.0, that is, having a minimum value equal to or greater than 1.0 and a maximum value equal to or less than 10.0, such as, for example, 2.4 to 7.6. Similarly, a range described as “within 35% of 10” is intended to include all subranges between (and including) the recited minimum value of 6.5 (i.e., (1−35/100) times 10) and the recited maximum value of 13.5 (i.e., (1+35/100) times 10), that is, having a minimum value equal to or greater than 6.5 and a maximum value equal to or less than 13.5, such as, for example, 7.4 to 10.6. Any maximum numerical limitation recited herein is intended to include all lower numerical limitations subsumed therein and any minimum numerical limitation recited in this specification is intended to include all higher numerical limitations subsumed therein.
Although exemplary embodiments of a two-level indexing system for key-value persistent storage have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. Accordingly, it is to be understood that a two-level indexing system for key-value persistent storage constructed according to principles of this disclosure may be embodied other than as specifically described herein. The invention is also defined in the following claims, and equivalents thereof.
CLAIMS
1. A method, comprising:
- sorting two or more key-value pairs to form a sorted key-value pair set;
- determining a first address of a first key-value pair of the sorted key-value pair set, the first key-value pair including a first key and a first value;
- determining a second address of a second key-value pair of the sorted key-value pair set, the second key-value pair including a second key and a second value;
- using the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train a machine learning model;
- providing an input key to the trained machine learning model;
- determining, by the trained machine learning model, a third address based on the input key; and
- storing a third key-value pair in the third address.
2. The method of claim 1, further comprising performing a data-moving operation in a block of a key-value persistent storage device, the performing of the data-moving operation comprising identifying the two or more key-value pairs.
3. The method of claim 2, wherein the data-moving operation is a wear-leveling operation.
4. The method of claim 1, further comprising storing the sorted key-value pair set in a region of storage, wherein the determining of the first address of the first key-value pair comprises determining the first address at which the first key-value pair is stored.
5. The method of claim 1, wherein the using of the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train the machine learning model comprises training a first linear regression model to generate a first line based on the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair.
6. The method of claim 5, wherein the determining of the third address for the third key-value pair comprises multiplying the input key by a factor and adding an offset, the factor and the offset being based on a slope and offset of the first line.
7. The method of claim 1, wherein the using of the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train the machine learning model comprises training a first linear regression model to generate a first line based on the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair, wherein the determining of the third address for the third key-value pair is based on the first line, the method further comprising:
- determining that a difference between the third address for the third key-value pair determined by the machine learning model and a real address of the third key-value pair exceeds a threshold; and
- in response to the determining that the difference exceeds the threshold, training a second linear regression model to generate a second line based on a first subset of the sorted key-value pair set.
8. The method of claim 7, wherein the threshold is a difference between the real address of the third key-value pair and a page boundary.
9. The method of claim 8, further comprising:
- receiving a command to access a fourth key-value pair; and
- determining that the fourth key-value pair is in the first subset.
10. The method of claim 9, further comprising determining a fourth address of the fourth key-value pair based on the second line.
11. The method of claim 10, further comprising reading, from persistent storage, a page, the fourth address for the fourth key-value pair being within the page.
12. A key-value persistent storage device, comprising:
- persistent storage;
- a buffer; and
- a processing circuit, configured to: sort two or more key-value pairs to form a sorted key-value pair set; determine a first address of a first key-value pair of the sorted key-value pair set, the first key-value pair including a first key and a first value; determine a second address of a second key-value pair of the sorted key-value pair set, the second key-value pair including a second key and a second value; use the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train a machine learning model; provide an input key to the trained machine learning model; determine, by the trained machine learning model, a third address based on the input key; and store a third key-value pair in the third address.
13. The key-value persistent storage device of claim 12, wherein the processing circuit is further configured to perform a data-moving operation in a block of the key-value persistent storage device, the performing of the data-moving operation comprising identifying the two or more key-value pairs.
14. The key-value persistent storage device of claim 13, wherein the data-moving operation is a wear-leveling operation.
15. The key-value persistent storage device of claim 12, wherein the processing circuit is further configured to store the sorted key-value pair set in a region of storage, wherein the determining of the first address of the first key-value pair comprises determining the first address at which the first key-value pair is stored.
16. The key-value persistent storage device of claim 12, wherein the processing circuit being configured to train the machine learning model comprises the processing circuit being configured to:
- construct a first linear regression model to generate a first line based on the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair.
17. The key-value persistent storage device of claim 16, wherein the processing circuit being configured to determine the third address of the third key-value pair comprises the processing circuit being configured to multiply the input key by a factor and adding an offset, the factor and the offset being based on a slope and offset of the first line.
18. The key-value persistent storage device of claim 12, wherein the processing circuit being configured to use the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train the machine learning model comprises the processing circuit being configured to train a first linear regression model to generate a first line based on the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair, wherein the processing circuit being configured to determine the third address for the third key-value pair includes the processing circuit being configured to determine the third address for the third key-value pair based on the first line,
- the processing circuit being further configured to: determine that a difference between the third address for the third key-value pair determined by the machine learning model and a real address of the third key-value pair exceeds a threshold; and in response to the processing circuit being configured to determine that the difference exceeds the threshold, train a second linear regression model to generate a second line based on a first subset of the key-value pairs.
19. The key-value persistent storage device of claim 18, wherein the threshold is the difference between the real address of the third key-value pair and a page boundary,
- the processing circuit being further configured to: receive a command to access a fourth key-value pair; and determine that the fourth key-value pair is in the first subset.
20. A key-value persistent storage device, comprising:
- persistent storage;
- a buffer; and
- means for processing, configured to: sort two or more key-value pairs to form a sorted key-value pair set; determine a first address of a first key-value pair of the sorted key-value pair set, the first key-value pair including a first key and a first value; determine a second address of a second key-value pair of the sorted key-value pair set, the second key-value pair including a second key and a second value; use the first key-value pair, the first address of the first key-value pair, the second key-value pair, and the second address of the second key-value pair to train a machine learning model; provide an input key to the trained machine learning model; determine, by the trained machine learning model, a third address based on the input key; and store a third key-value pair in the third address.
References Cited

U.S. Patent Documents

5204958 | April 20, 1993 | Cheng et al.
10061693 | August 28, 2018 | Kim
10387302 | August 20, 2019 | Qiu
10489291 | November 26, 2019 | Hsu et al.
10515064 | December 24, 2019 | Bennett
10649969 | May 12, 2020 | De
10725988 | July 28, 2020 | Boles
10915546 | February 9, 2021 | Tomlinson
11100071 | August 24, 2021 | Tomlinson
20150193491 | July 9, 2015 | Yang et al.
20170017411 | January 19, 2017 | Choi
20170300407 | October 19, 2017 | Qiu
20180364937 | December 20, 2018 | Ki et al.
20190034833 | January 31, 2019 | Ding
20190042240 | February 7, 2019 | Pappu
20190042611 | February 7, 2019 | Yap
20190108267 | April 11, 2019 | Lyakas et al.
20190138612 | May 9, 2019 | Jeon et al.
20210011634 | January 14, 2021 | Tumkur Shivanand
20210089498 | March 25, 2021 | Park
20210109679 | April 15, 2021 | Guim
20210181963 | June 17, 2021 | Choi
20210248107 | August 12, 2021 | Hou et al.
20210314404 | October 7, 2021 | Glek
Foreign Patent Documents

110888886 | March 2020 | CN
113157694 | July 2021 | CN
113268457 | August 2021 | CN
113722319 | November 2021 | CN
WO 2019/098871 | May 2019 | WO
WO 2021/139376 | July 2021 | WO
Other Publications

- Ajitesh Srivastava, Angelos Lazaris, Benjamin Brooks, Rajgopal Kannan, and Viktor K. Prasanna. Sep. 2019. Predicting memory accesses: the road to compact ML-driven prefetcher. In Proceedings of the International Symposium on Memory Systems. Association for Computing Machinery, New York, NY, USA (Year: 2019).
- Y. Sun, S. Feng, Y. Ye, X. Li and J. Kang, “A Deep Cross-Modal Hashing Technique for Large-Scale SAR and VHR Image Retrieval,” 2021 SAR in Big Data Era (BIGSARDATA), Nanjing, China, 2021, pp. 1-4, doi: 10.1109/BIGSARDATA53212.2021.9574218. (Year: 2021).
- H. Aggarwal, R. R. Shah, S. Tang and F. Zhu, “Supervised Generative Adversarial Cross-Modal Hashing by Transferring Pairwise Similarities for Venue Discovery,” 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), Singapore, 2019, pp. 321-330, doi: 10.1109/BigMM.2019.000-2. (Year: 2019).
- Choi, H. et al., “A Survey of Machine Learning-Based System Performance Optimization Techniques”, Applied Sciences, Apr. 4, 2021, pp. 1-19.
- Kraska, T. et al., “The Case for Learned Index Structures”, Research 6: Storage & Indexing, SIGMOD'18, Jun. 10-15, 2018, Houston, Texas, USA, pp. 489-504.
- Marcus, R. et al., “Benchmarking Learned Indexes”, Jun. 29, 2020, pp. 1-14, arXiv:2006.12804v2.
- EPO Extended European Search Report dated Apr. 21, 2023, issued in corresponding European Patent Application No. 22204634.4 (13 pages).
Type: Grant
Filed: Feb 9, 2022
Date of Patent: Apr 9, 2024
Patent Publication Number: 20230176758
Assignee: Samsung Electronics Co., Ltd. (Suwon-si)
Inventors: Omkar Desai (Syracuse, NY), Changho Choi (San Jose, CA), Yangwook Kang (San Jose, CA)
Primary Examiner: Daniel C. Chappell
Application Number: 17/668,312