UNIFIED DATA STORE AND TRANSACTION SYSTEM
A unified data store and transaction system queries an n-tuple-based multimodal data structure via a mutable tuple-based interface, the interface including a memory controller and a query operation set. The system receives a tuple from the mutable tuple-based query interface with a tuple-reader, reads the tuple into a tuple object, and evaluates the tuple object against semantic rules via a tuple evaluator.
This application claims benefit under 35 U.S.C. § 119(e) of Provisional U.S. patent application No. 62/648,748, filed Mar. 27, 2018, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND

Current ways of implementing database information retrieval can be extremely resource intensive. Conventional database systems are used extensively to store and retrieve data in a structured fashion. Such systems are typically responsive to structured query language (SQL) commands specifying criteria for characteristics of data to retrieve. Conventional database systems often have limitations when it comes to managing the type of freeform data commonly generated by social media platforms, free-form chat, and other applications that run in a distributed, real-time, and interactive fashion using many (often thousands of) diverse end user devices.

Current methods of data storage and retrieval are inefficient and limit the usability of the systems. Current methods can cause conflicts when multiple parties need to read and write data in a short time frame. For example, read/write locks are currently monopolized for long periods of time during database transactions, preventing the data from being accessed by other parties. Further, the current paradigm's structure around holding locks for long periods can create large efficiency bottlenecks in current systems. In addition to these inefficiencies, the locks themselves cycle slowly. Industry standard reader/writer (RW) locks such as the std::shared_lock implementation are built into leading implementations such as GCC, Clang, and POSIX Threads. These implementations of a multiple-reader/single-writer (MRSW) lock typically use a mutex/critical section to protect the use counters. A readers/writer (RsW) or shared-exclusive lock (also known as a multiple-readers/single-writer lock, multi-reader lock, or push lock) is a synchronization primitive that allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete. RsW locks are usually constructed on top of mutexes and condition variables, or on top of semaphores.
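For illustration, a conventional shared-exclusive lock of this kind may be exercised as follows. This is a minimal sketch using the standard C++ std::shared_mutex (the primitive behind std::shared_lock in GCC and Clang); the protected string is illustrative only.

#include <shared_mutex>
#include <string>

std::shared_mutex lock_;    // conventional MRSW primitive
std::string shared_state_;  // data structure protected by the lock

std::string ReadState() {
  std::shared_lock<std::shared_mutex> lock(lock_);  // many readers may hold this in parallel
  return shared_state_;
}

void WriteState(const std::string& value) {
  std::unique_lock<std::shared_mutex> lock(lock_);  // exclusive: blocks all readers and writers
  shared_state_ = value;
}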
Conventional databases may also require substantial data duplication to store and relate different categories and types of data. Multimodal databases are often implemented using disparate technologies likely provided by independent vendors who specialize in subsets of full multi-modality. Further, multimodal databases are often stretched across multiple process spaces; for example, a time series database, document database, and graph database may all be separately stored in different physical databases spread across multiple separate process spaces or machines on a network. This leads to increases in overhead as well as an increase in latency and complexity, as the systems must manage multiple connections when exchanging information and/or executing queries. Additionally, due to the use of heterogeneous storage mechanisms, data stored in multiple different formats often requires a great deal of data redundancy, network transfers, and data serialization, and the serialization process is often nontrivial and can be very resource intensive at scale.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
The computing device 108, the computing device 110, and the computing device 112 access the storage engine 106 via the per core runtimes 102 and the lisp operators 104.
Referring now to the process for implementing a multimodal split-associative data store and control 200, the process receives data in the form of a plurality of n-tuples via a transactional pipeline interface (block 202).
The process for implementing a multimodal split-associative data store and control 200 transmits the data to an in-memory multimodal data processor within a content-addressable memory, the in-memory multimodal data processor comprising an associative n-tuple store (block 204).
The process for implementing a multimodal split-associative data store and control 200 receives a plurality of commands to execute a plurality of actions on the data in the associative n-tuple store (block 206).
A process receives data in the form of a plurality of n-tuples via a transactional pipeline interface. The process then transmits the data to an in-memory multimodal data processor within a content-addressable memory, the in-memory multimodal data processor comprising an associative n-tuple store. The process then receives a plurality of commands to execute a plurality of actions on the data in the associative n-tuple store.
Referring now to the process for controlling read/write access to a database 300, the process initiates a read lock by obtaining a reader pool ID for a thread from a fixed pool of cache-line-based reader counts (block 302).
The process for controlling read/write access to a database 300 waits for a writer to finish by entering a wait-loop and querying a scheduler to reschedule the thread if the current wait time exceeds a threshold value (block 304).
The process for controlling read/write access to a database 300 declares a resource to be read (block 306).
The process for controlling read/write access to a database 300 checks if a writer lock is taken (block 308).
The process for controlling read/write access to a database 300 returns the reader pool ID for the current thread (block 310).
The process for controlling read/write access to a database 300 decrements the reader count in a cache line associated with the reader thread to release the read lock (block 312).
The process for controlling read/write access to a database 300 obtains a write lock by checking if a write-lock or reader pool cache line is taken, entering a wait-loop, and querying a scheduler to reschedule the thread if the current wait time exceeds a threshold (block 314).
The process for controlling read/write access to a database 300 resets the semaphore that marks the write lock as taken on the cache line associated with the writer thread, to release the write lock (block 316).
A process for controlling read/write access initiates a read lock: the process obtains a reader pool ID for a thread from a fixed pool of cache-line-based reader counts. The process then waits for a writer to finish by entering a wait-loop and querying a scheduler to reschedule the thread if the current wait time exceeds a threshold value. The process then declares a resource to be read. The process then checks if a writer lock is taken. The process then returns the reader pool ID for the current thread. The process then decrements the reader count in a cache line associated with the reader thread to release the read lock. The process then obtains a write lock by checking if a write-lock or reader pool cache line is taken, entering a wait-loop, and querying a scheduler to reschedule the thread if the current wait time exceeds a threshold. The process then resets the semaphore that marks the write lock as taken on the cache line associated with the writer thread, to release the write lock.
To protect internal database structures and increase the efficiency of reading and writing data, a read/write locking mechanism is implemented. The read/write lock ensures that a write lock is only taken when necessary and released as quickly as possible after the write is complete. In order to minimize write lock duration, updates may be prepared before the write lock is taken. The read/write lock relies on CPU cache lines to modify the counters in a minimal-contention manner, using only low-level processor-intrinsic instructions to protect the memory being modified or read. Because only one writer may be present at any time for each database partition, a semaphore may be maintained to mark whether the write lock is taken. On the other hand, any number of threads may be reading at the same time (when no writes are occurring). Conceptually, for read locks a counter may be maintained which indicates the number of readers that are reading. Threads using the lock will either attempt to read or write to some shared data structure protected by the lock. A thread attempting to access the protected data structure will declare its intention to read or write by using the API of the lock. When any thread wishes to read, the thread calls TakeReadLock(), reads from the protected data structure, and then calls ReleaseReadLock(). When any thread wishes to write, the thread calls TakeWriteLock(), writes to the protected data structure, and then calls ReleaseWriteLock(). The process for obtaining a read lock is:
1) When a thread is attempting to obtain a read lock, it first waits for the write lock to be released if the lock is currently being held.
2) The counter representing the number of readers for the current thread group is optimistically increased.
3) Because there is a possibility of a race condition, the thread checks once more to ensure that the write lock is still clear.
4) If the write lock is now set, then the read lock is rolled back, and the process returns to step (1).
5) If the write lock is clear, then a read lock has been successfully obtained. To release the read lock all that is necessary is to decrement the counter representing the number of readers.
The process for obtaining a write lock is:
1) When a thread wishes to obtain the write lock, it first waits to make sure no other writers are present (using a CAS operation).
2) Once the write lock has been obtained, wait for the counters that represent the number of readers to fall to zero.
3) At this point a write lock has been obtained.
The read and write counters may be represented using atomic variables spaced evenly on cache lines. The counter for the number of readers is implemented as a pool of cache-line-aligned counters that provide no write contention in the normal case where the number of counters representing running threads is equal to or greater than the number of cores on the machine, as illustrated in the sketch below.
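The following minimal sketch illustrates this locking scheme, assuming a fixed pool of cache-line-aligned reader counters and a single atomic flag for the writer; the class and constant names (PooledRwLock, kPoolSize) are illustrative assumptions rather than names from any source implementation, and a production version would query the scheduler after a wait threshold instead of simply yielding.

#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>

constexpr std::size_t kCacheLine = 64;  // typical cache-line size
constexpr std::size_t kPoolSize = 16;   // ideally >= the number of cores

class PooledRwLock {
  struct alignas(kCacheLine) Slot { std::atomic<int> readers{0}; };
  Slot pool_[kPoolSize];             // per-thread-group reader counters
  std::atomic<bool> writer_{false};  // semaphore marking the write lock

 public:
  std::size_t TakeReadLock() {
    // Hash the thread onto one counter slot to minimize cache-line contention.
    std::size_t slot =
        std::hash<std::thread::id>{}(std::this_thread::get_id()) % kPoolSize;
    for (;;) {
      while (writer_.load(std::memory_order_acquire))  // 1) wait for the writer
        std::this_thread::yield();
      pool_[slot].readers.fetch_add(1);                // 2) optimistic increment
      if (!writer_.load(std::memory_order_acquire))    // 3) re-check for a race
        return slot;                                   // 5) read lock obtained
      pool_[slot].readers.fetch_sub(1);                // 4) roll back and retry
    }
  }

  void ReleaseReadLock(std::size_t slot) { pool_[slot].readers.fetch_sub(1); }

  void TakeWriteLock() {
    bool expected = false;  // 1) claim the write flag with a CAS operation
    while (!writer_.compare_exchange_weak(expected, true)) {
      expected = false;     // another writer is present; keep waiting
      std::this_thread::yield();
    }
    for (auto& s : pool_)   // 2) wait for every reader counter to fall to zero
      while (s.readers.load(std::memory_order_acquire) > 0)
        std::this_thread::yield();
  }

  void ReleaseWriteLock() { writer_.store(false, std::memory_order_release); }
};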
In mathematics a tuple is a finite ordered list (sequence) of elements. An n-tuple is a sequence (or ordered list) of n elements, where n is a non-negative integer. There is only one 0-tuple, an empty sequence.
The following exemplary code fragment creates a key in the Substrate tuple-store: (db.createKey “MyFirstTuple”). With tuples, each element may be of a different type, and there may be as many elements as needed, allowing much more complex tuples to be specified.
Tuple elements (atoms) may be, for example, one of the following types: UTF8 encoded string, Boolean, integer, IEEE 754 32- or 64-bit float, binary string literal, or identifier symbol. Further, compound atoms comprised of data structures may also be used, for example, lists, vectors, maps, and dictionaries.
By way of example, creating tuples of animals and their maximum running speed in kilometers per hour may be accomplished by:

(db.createKey [“Mammal” “Greyhound” 63.5])
(db.createKey [“Bird” “Penguin” 48.0])
(db.createKey [“Mammal” “Rabbit” 48.0])
(db.createKey [“Bird” “Roadrunner” 32.0])

This illustrates a mix of the types string and double. Tuple elements may be of any of the following types, but are not limited to: string, float, double, integer (signed/unsigned), tuple, symbolic expression (s-expression), Boolean, binary blob, and symbol. Because a tuple is also a data type, it is possible to create arbitrarily nested tuples, for example: (db.createKey [“Mammal” “House Mouse” 13.0 [“Likes” [“Apples” “Cheese”]]]). The length (or arity) of tuples does not need to be uniform; not all the tuples from the earlier examples have the same length. By way of example, time series data from several temperature sensors, with measurements taken at regular intervals, may store the following properties for each measurement: the sensor ID, the timestamp, and the measured temperature. For querying purposes, these are stored in tuples of the following form: [<timestamp> <sensorId> <value>], for example:

[1478612156 2 15.2]
[1478612160 0 22.4]
[1478612160 3 25.1]
[1478612170 1 21.7]
[1478622176 0 21.8]
[1478622176 2 15.0]
[1478622182 1 22.3]
[1478622183 3 23.5]
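Such heterogeneous, arbitrarily nested tuples may be modeled in code roughly as follows. This C++ sketch (requiring C++17 for vectors of an incomplete type) uses a recursive variant; the Atom and Tuple names are illustrative assumptions, not types from the system.

#include <string>
#include <variant>
#include <vector>

struct Atom;                      // forward declaration enables recursion
using Tuple = std::vector<Atom>;  // a tuple is an ordered list of atoms
struct Atom {
  std::variant<bool, long, double, std::string, Tuple> value;
};

int main() {
  // Mirrors (db.createKey [“Mammal” “House Mouse” 13.0 [“Likes” [“Apples” “Cheese”]]])
  Tuple likes{Atom{std::string("Likes")},
              Atom{Tuple{Atom{std::string("Apples")}, Atom{std::string("Cheese")}}}};
  Tuple mouse{Atom{std::string("Mammal")}, Atom{std::string("House Mouse")},
              Atom{13.0}, Atom{likes}};  // arity need not be uniform
  return mouse.size() == 4 ? 0 : 1;
}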
Another example is representation of a 3D cube using tuples:
Vertices:
[0 −0.5 0.5 −0.5][1 −0.5 0.5 0.5][2 0.5 0.5 0.5][3 0.5 0.5 −0.5][4 −0.5 −0.5 −0.5][5 −0.5 −0.5 0.5][6 0.5 −0.5 0.5][7 0.5 −0.5 −0.5]
Faces:
[ ;|−X face|; [5 1 0] [5 0 4] ;|+X face|; [7 3 2] [7 2 6] ;|−Y face|; [7 6 5] [7 5 4] ;|+Y face|; [2 3 0] [2 0 1] ;|−Z face|; [4 0 3] [4 3 7] ;|+Z face|; [6 2 1] [6 1 5] ].
Another example is representation of geospatial data: [−74.0059413 ;|longitude|; 40.7127837 ;|latitude|; “New York” ;|city|; 8405837 ;|population|;] [−118.2436849 34.0522342 “Los Angeles” 3884307] [−87.6297982 41.8781136 “Chicago” 2718782] [−95.3698028 29.7604267 “Houston” 2195914] [−75.1652215 39.9525839 “Philadelphia” 1553165].
Generally, information stored in databases is stored and retrieved in different formats, requiring the data to be “serialized.” In the context of data storage, serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment). Data of different types often needs to be stored in separate databases, for example a time series database, a graph database, and a document database, and the formats must be serialized and deserialized to relate them together and perform various operations. Often, additional databases are created to conveniently relate data of disparate types.
The de-serialized data store transaction system 1100 utilizes a homoiconic (self-describing) n-tuple-based deserialized format, presenting the data the same way in code as when it is stored in memory. This format enables storage of multiple data types in a single multimodal database without the need for large-scale data duplication or serialization to link the data in different ways. Instead, the data may be stored as a single tuple database, and relations may be added on the fly. Further, these tuples may be encoded in any convenient format, for example, as (1 ‘foo’) or (00000001 00100111 01100110 01101111 01101111 00100111). The tuples are lexicographically self-sorting and stored in a content addressable memory, so the content of any query acts as an address for the location of the tuple containing the relevant information, which may be retrieved quickly. Searching becomes trivial and almost implicit. The system utilizes a Lisp-derived syntax and semantics, utilizing prefix notation and representing both source code and data as symbolic expressions (s-expressions). Standard s-expressions are a notation for nested list (tree-structured) data; here the s-expressions have been modified to be nested tuples, where tuples and other data types may be nested within an initial tuple.
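A minimal sketch of this self-sorting, content-addressed behavior follows, assuming tuples of heterogeneous atoms kept in a standard ordered map so that the tuple content itself is the lookup address (compile as C++17 or later); the names and the prefix-query technique are illustrative assumptions, not the system's implementation.

#include <iostream>
#include <map>
#include <string>
#include <variant>
#include <vector>

using Atom = std::variant<long, double, std::string>;
using Tuple = std::vector<Atom>;  // vector keys compare lexicographically

int main() {
  std::map<Tuple, bool> store;  // key-only tuples, as with db.createKey
  store[{std::string("Mammal"), std::string("Greyhound"), 63.5}] = true;
  store[{std::string("Mammal"), std::string("Rabbit"), 48.0}] = true;
  store[{std::string("Bird"), std::string("Penguin"), 48.0}] = true;

  // All tuples beginning with "Mammal" occupy one contiguous range because
  // the store is sorted by content: the query content acts as the address.
  Tuple lo{std::string("Mammal")};
  Tuple hi{std::string("Mammam")};  // first key past every "Mammal"-prefixed tuple
  for (auto it = store.lower_bound(lo); it != store.lower_bound(hi); ++it)
    std::cout << std::get<std::string>(it->first[1]) << "\n";  // Greyhound, Rabbit
}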
To select the columns within a table to make a query using, for example, SQL, a user must use commands which trigger the data tables and columns to be individually searched, then deserialized and returned.
SQL example:
Table named EMPLOYEE_TBL
LAST_NAME FIRST_NAME EMP_ID
SELECT EMP_ID, LAST_NAME
FROM EMPLOYEE_TBL
WHERE EMP_ID=‘123456’
Here, a user could design a function called “get” to return the same information:
(get (EMPLOYEE_TBL (get LAST_NAME FIRST_NAME (get EMP_ID ‘123456’))))
The concept 1226 may be a data type, similar to a namespace or class. The unified data store taxonomy model 1200 may require five concepts: the self-describing data object 1032 (self-defining), the relationship 1222, the weight 1224, the domain 1228, and the token 1230. A domain 1228 may be conceptualized as a knowledge domain which may be defined by the user. For given data, the user may define multiple domains. A relationship 1222 may be a connection between two different pieces of data (concepts, domains, tokens). The weight 1224 provides the strength for the relationship 1222. For example, a user may define the following concepts: (createConcept 0); concept, (createConcept 1); relationship, (createConcept 2); weight, (createConcept 3); domain, and (createConcept 4); token.
This may allow for the creation of a tuple with the format: (concept, relationship, weight, domain, token). A relationship may be established between different concepts, tokens, or domains; for example, the unique identifier 1002 and the unique identifier 1012 may be connected to one another by the relationship 1204 and the relationship 1214. This relationship may have a weight which denotes the strength of one datum/type to another. Weights may have values per relationship type, indicating the strength of the given relationship. A relationship may be defined as unidirectional or bi-directional. For example, a relationship may only be denoted in relationship 1204 referring to a relationship 1214, or it may be defined in both tuples. Similarly, weights may differ directionally as well. For example, a first tuple may have a weight 1206 of 0.7 to a second tuple but the second tuple may have a weight 1216 of 0.9 to the first tuple. The weights may also be the same between both tuples.
This taxonomy implicitly reflects the underlying organization and architecture of a unified data store and content addressable memory and, when used in conjunction therewith, allows for a substantial increase in a computer's operational efficiency pertaining to data querying and management.
The taxonomy model is a conceptual graph built from atomic (meta-circular) data objects (“concepts”). Everything in the model is represented uniformly, by concepts themselves. Each “concept” may receive a numeric ‘concept ID’, which may be, for example, increasing numbers starting at 0. Concepts are used to define other concepts, and anything stored will relate back to the initial model. The first concept will be the idea of a concept itself, which is given an ID 0. All other concepts may then be namespaced under this ID in the key-tuple store, so the first tuple will be [0 0], to be interpreted as ‘the concept “concept”’. Utilizing s-expressions, a function may be created to define the first concept, allowing the first concept to be generated in the following manner: (createConcept 0).
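Although the definition of the createConcept operator itself is not spelled out here, following the defn pattern used below for createRelationship it may, for illustration, be reconstructed as: (defn createConcept (conceptId) (db.createKey [0 conceptId])), which writes the tuple [0 conceptId] under the namespace of concept 0.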
Concepts may be further expanded to show relationships between concepts, with a “relationship” concept.
This may be established, for example, by: (createConcept 1)
These concepts allow new tuples to be created which describe the relationships between concepts. These may be namespaced by the concept ID for relationship, “1” in the current example model. This model may then be extended to incorporate weighted relationships. Without weights, the createRelationship operator may look as follows:
(defn createRelationship (conceptId relationshipTypeConceptId relatedConceptId) (db.createKey [1 conceptId relationshipTypeConceptId relatedConceptId]))
For example, the concept ‘animal’ is a hypernym of the concept ‘tiger’, so given three concepts, “animal” as 100, “tiger” as 101, and “hypernym of” as 102, the relationship could be defined as:
(createRelationship 100 102 101)
The model also supports weighted relationships by first defining the concept ‘weight’, and then adding a fourth parameter to the createRelationship operator:
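For example, an illustrative completion (assuming the weight concept takes the next available ID, consistent with the domain and token concepts below) is: (createConcept 2); weight, followed by (defn createRelationship (conceptId relationshipTypeConceptId relatedConceptId weight) (db.createKey [1 conceptId relationshipTypeConceptId relatedConceptId weight])).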
The model also needs to map from external data sources to the concept model. To do this, the concepts ‘domain’ and ‘token’ are created. ‘Domain’ refers to the way in which the data is represented, and ‘token’ refers to associated data. For example, when talking about human languages, the domain could be ‘English’, ‘Spanish’, or any other language that has been defined. In the context of political parties, for example, these tokens could be Zip codes or GPS coordinates. In addition, it is also possible to create combinations of tokens across domains.
(createConcept 3); domain
(createConcept 4); token
A token may refer to many concepts, and one concept can be captured by many tokens, and weights may be assigned in one or both directions. Weights may be used by the application to define how tightly tokens are bound to concepts and vice-versa. For example, one weight may link concepts to tokens in a domain, and another may link tokens in a domain to concepts.
The first weight (the ‘token weight’) represents how tightly the token is bound to the concept, as opposed to other tokens. For example, the tokens “cat” and “tiger” from the domain ‘English’ may both describe the abstract concept ‘tiger’, but the actual word “tiger” is more likely to be used to describe the concept of a ‘tiger’ than the word “cat”. The second weight (the ‘concept weight’) represents how tightly the token is bound to this concept, as opposed to other concepts. For example, the token “cat” may refer to ‘tiger’ only 10% of the time, and to other things 90% of the time.
The unified data store and transaction system 1300 comprises a tuple 1302, a reader 1304, an evaluator 1306, a tuple object 1308, and a string object 1310.
The unified data store and transaction system 1300 may be a tuple processor. A tuple 1302 may be input as a typed string, the string object 1310, the parts of which may represent any data type. Standard string operations may be applied to the tuple 1302 (it may be concatenated, trimmed, split, etc.), as well as operations unique to the data types within it. The code for the unified data store and transaction system 1300 may be run on an interpreter or as compiled code. The unified data store and transaction system 1300 may utilize prefix notation, e.g. (OPERATOR, OPERAND, OPERAND . . . ). By way of example, (a*(b+c)/d) may be written as (/ (* a (+ b c)) d).
Evaluation of a tuple 1302 by the unified data store and transaction system 1300 may comprise reading the tuple 1302 with a reader 1304, wherein the reader 1304 translates the tuple 1302, which may be in string format, into a tuple object 1308. The semantics of the commands and tuple objects may then be evaluated by the evaluator 1306.
The evaluator 1306 may evaluate the syntax which has been defined by the user. The evaluator 1306 may be a function which may take the tuple as an argument. The system stores data, regardless of type, in a tuple format. This provides the data structure with uniformity and allows for a multimodal data store, requiring little-to-no duplication of the data to relate different categories and types of data. Further, this architecture allows multimodal databases to easily be stored within the same process space, leading to further increases in efficiency. For example, a user may have a set of tuples which represents time stamps, and another set of tuples which represents geolocations. The user may wish to associate the two of these, and so may combine them into a new tuple for storage within the database. The same operators may be used on the new tuple and on each subpart of the tuple, as each element is itself a tuple. This allows different types to be stored in the same database. For example, a database of tweets may include the content of the tweet, string objects; as well as the times of the tweets, time-series; connections between the users, connected-graph; and the location the tweet was sent from, geospatial. These may all be encapsulated in a single database. For example, the tuples may be arranged such that tweets are embedded within geospatial data which is embedded within time.
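The reader/evaluator split may be illustrated by the following minimal C++ sketch, which parses a prefix-notation tuple string into a nested tuple object and evaluates an arithmetic subset; the function and type names are illustrative assumptions, not the system's API.

#include <cctype>
#include <iostream>
#include <string>
#include <variant>
#include <vector>

struct Node;
using Tuple = std::vector<Node>;
struct Node { std::variant<double, std::string, Tuple> v; };

// Reader: translate a tuple in string format into a tuple object.
Node Read(const std::string& s, std::size_t& i) {
  while (i < s.size() && std::isspace(static_cast<unsigned char>(s[i]))) ++i;
  if (s[i] == '(') {  // nested tuple
    Tuple t;
    for (++i; s[i] != ')';) {
      t.push_back(Read(s, i));
      while (std::isspace(static_cast<unsigned char>(s[i]))) ++i;
    }
    ++i;
    return Node{t};
  }
  std::size_t j = i;
  while (j < s.size() && !std::isspace(static_cast<unsigned char>(s[j])) &&
         s[j] != '(' && s[j] != ')') ++j;
  std::string tok = s.substr(i, j - i);
  i = j;
  try { return Node{std::stod(tok)}; }  // numeric atom
  catch (...) { return Node{tok}; }     // symbol atom (e.g. an operator)
}

// Evaluator: apply the operator in prefix position to the evaluated operands.
double Eval(const Node& n) {
  if (auto* d = std::get_if<double>(&n.v)) return *d;
  const Tuple& t = std::get<Tuple>(n.v);
  const std::string& op = std::get<std::string>(t[0].v);
  double acc = Eval(t[1]);
  for (std::size_t k = 2; k < t.size(); ++k) {
    double x = Eval(t[k]);
    if (op == "+") acc += x;
    else if (op == "-") acc -= x;
    else if (op == "*") acc *= x;
    else if (op == "/") acc /= x;
  }
  return acc;
}

int main() {
  std::size_t i = 0;
  Node ast = Read("(/ (* 2 (+ 3 4)) 7)", i);  // (a*(b+c))/d with a=2, b=3, c=4, d=7
  std::cout << Eval(ast) << "\n";             // prints 2
}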
The tuple may be instantiated as (time minute hour day weekday week month year (geolocation latitude longitude locationString) (tweet tweetString)). Queries made to find a specific day of the week may query using the “weekday” position, and all tuples with matching content would be retrieved using that implicit “address”. This also allows the returned information to be pruned in real time by altering individual parts of the tuples to constrain them, for example, by specifying only tweets from Amsterdam on Jul. 1, 2017. Further, because the tuples are ordered, the user may also query by specifying a range of indices. For example, to get the day and time of all the tweets, the indices 1-4 could be retrieved. When querying with a range, the order of complexity of the query is proportional to the length of the tuple (order x, where x is the number of bytes in the tuple). The system also allows for reduced overhead when constraining existing queries; since the content of the tuples acts as an implicit address, if the user wished to constrain the data returned, a constraint could be specified to narrow the query. The system would not need to find and remove all the non-matching entries but would simply no longer return the non-matching entries.
Input devices 1404 comprise transducers that convert physical phenomenon into machine internal signals, typically electrical, optical or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range but also potentially in the infrared or optical range. Examples of input devices 1404 are keyboards which respond to touch or physical pressure from an object or proximity of an object to a surface, mice which respond to motion through space or across a plane, microphones which convert vibrations in the medium (typically air) into device signals, scanners which convert optical patterns on two or three dimensional objects into device signals. The signals from the input devices 1404 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to memory 1406.
The memory 1406 is typically what is known as a first or second level memory device, providing for storage (via configuration of matter or states of matter) of signals received from the input devices 1404, instructions and information for controlling operation of the cpu 1402, and signals from storage devices 1410.
The memory 1406 and/or the storage devices 1410 may store computer-executable instructions, thus forming logic 1414 that, when applied to and executed by the cpu 1402, implements embodiments of the processes disclosed herein.
Information stored in the memory 1406 is typically directly accessible to the cpu 1402 of the device. Signals input to the device cause the reconfiguration of the internal material/energy state of the memory 1406, creating in essence a new machine configuration, influencing the behavior of the digital apparatus 1400 by affecting the behavior of the cpu 1402 with control signals (instructions) and data provided in conjunction with the control signals.
Second or third level storage devices 1410 may provide a slower but higher capacity machine memory capability. Examples of storage devices 1410 are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories.
The cpu 1402 may cause the configuration of the memory 1406 to be altered by signals in storage devices 1410. In other words, the cpu 1402 may cause data and instructions to be read from storage devices 1410 into the memory 1406, from which they may then influence the operations of the cpu 1402 as instructions and data signals, and from which they may also be provided to the output devices 1408. The cpu 1402 may alter the content of the memory 1406 by signaling to a machine interface of memory 1406 to alter the internal configuration, and then convert signals to the storage devices 1410 to alter their material internal configuration. In other words, data and instructions may be backed up from memory 1406, which is often volatile, to storage devices 1410, which are often non-volatile.
Output devices 1408 are transducers which convert signals received from the memory 1406 into physical phenomenon such as vibrations in the air, or patterns of light on a machine display, or vibrations (i.e., haptic devices) or patterns of ink or other materials (i.e., printers and 3-D printers).
The network interface 1412 receives signals from the memory 1406 and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 1412 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 1406.
Terms used herein should be accorded their ordinary meaning in the relevant arts, or the meaning indicated by their use in context, but if an express definition is provided, that meaning controls.
“Circuitry” in this context refers to electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes or devices described herein), circuitry forming a memory device (e.g., forms of random access memory), or circuitry forming a communications device (e.g., a modem, communications switch, or optical-electrical equipment).
“Firmware” in this context refers to software logic embodied as processor-executable instructions stored in read-only memories or media.
“Hardware” in this context refers to logic embodied as analog or digital circuitry.
“Logic” in this context refers to machine memory circuits, non-transitory machine readable media, and/or circuitry which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however does not exclude machine memories comprising software and thereby forming configurations of matter).
“Software” in this context refers to logic implemented as processor-executable instructions in a machine memory (e.g. read/write volatile or nonvolatile memory or media).
Herein, references to “one embodiment” or “an embodiment” do not necessarily refer to the same embodiment, although they may. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively, unless expressly limited to a single one or multiple ones. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list, unless expressly limited to one or the other. Any terms not expressly defined herein have their conventional meaning as commonly understood by those having skill in the relevant art(s).
Various logic functional operations described herein may be implemented in logic that is referred to using a noun or noun phrase reflecting said operation or function. For example, an association operation may be carried out by an “associator” or “correlator”. Likewise, switching may be carried out by a “switch”, selection by a “selector”, and so on.
Those skilled in the art will recognize that it is common within the art to describe devices or processes in the fashion set forth herein, and thereafter use standard engineering practices to integrate such described devices or processes into larger systems. At least a portion of the devices or processes described herein can be integrated into a network processing system via a reasonable amount of experimentation. Various embodiments are described herein and presented by way of example and not limitation.
Those having skill in the art will appreciate that there are various logic implementations by which processes and/or systems described herein can be effected (e.g., hardware, software, or firmware), and that the preferred vehicle will vary with the context in which the processes are deployed. If an implementer determines that speed and accuracy are paramount, the implementer may opt for a hardware or firmware implementation; alternatively, if flexibility is paramount, the implementer may opt for a solely software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, or firmware. Hence, there are numerous possible implementations by which the processes described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the implementation will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations may involve optically-oriented hardware, software, and or firmware.
Those skilled in the art will appreciate that logic may be distributed throughout one or more devices, and/or may be comprised of combinations of memory, media, processing circuits and controllers, other circuits, and so on. Therefore, in the interest of clarity and correctness logic may not always be distinctly illustrated in drawings of devices and systems, although it is inherently present therein. The techniques and procedures described herein may be implemented via logic distributed in one or more computing devices. The particular distribution and choice of logic will vary according to implementation.
The foregoing detailed description has set forth various embodiments of the devices or processes via the use of block diagrams, flowcharts, or examples. Insofar as such block diagrams, flowcharts, or examples contain one or more functions or operations, it will be understood as notorious by those within the art that each function or operation within such block diagrams, flowcharts, or examples can be implemented, individually or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more processing devices (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry or writing the code for the software or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of a signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, flash drives, SD cards, solid state fixed or removable storage, and computer memory.
In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually or collectively, by a wide range of hardware, software, firmware, or any combination thereof can be viewed as being composed of various types of circuitry.
Claims
1. A method of controlling read and write access to a memory structure comprising:
- initiating a read lock comprising: obtaining a reader pool ID for a thread from a fixed pool of readers; waiting for a writer to finish by entering a wait-loop and querying a scheduler to reschedule the thread if current wait time exceeds a threshold value; declaring a resource to be read; checking for an active write lock; and returning the reader pool ID for the thread; and initiating a write-lock comprising: checking for an active write lock flag and an active read lock flag and entering a wait-loop if the active write lock flag or the active read lock flag is present; and querying a scheduler to reschedule the thread if the wait time exceeds the threshold value.
2. The method of claim 1 wherein the reader resets a semaphore to release the write-lock.
3. The method of claim 1 wherein the active read lock flag is a counter pool of cache-line aligned read counters.
4. The method of claim 3 wherein the counter pool is constrained to the number of processor cores on a host machine.
5. The method of claim 3 wherein the read counters are represented using atomic variables.
6. The method of claim 1 wherein decrementing the reader count on a cache line associated with the thread releases an active read lock.
7. The method of claim 1 wherein the memory structure may have one writer and multiple readers.
8. A de-serialized data store and transaction system comprising:
- a content addressable memory mapped to an n-tuple-based multimodal data structure, the n-tuple-based multimodal data structure supporting heterogeneous nested data types comprising: a streamable abstract syntax tree; and a self-describing operational interface comprising commands transmitted via a plurality of nested s-expressions.
9. The de-serialized data store and transaction system of claim 8 wherein the n-tuple-based multimodal data structure supports string-like properties.
10. The de-serialized data store and transaction system of claim 8 wherein the n-tuple-based multimodal data structure is homoiconic.
11. The de-serialized data store and transaction system of claim 8 wherein tuples are received as a string of typed data.
12. A secure proxy-free data store access system comprising:
- a plurality of hierarchically privileged nested tuple-space partitions in a content addressable memory;
- a plurality of hierarchically contained programming interface functions defined within each of the plurality of hierarchically privileged nested tuple-space partitions;
- a plurality of virtual machines each associated with a processor core associated with at least one tuple-space partition; and
- reading and writing data from the content addressable memory via a transactional read pipeline and a transactional write pipeline.
13. The secure proxy-free data store access system of claim 12 wherein each of the plurality of hierarchically privileged nested tuple-space partitions may access a privileged nested tuple-space partition with a lower privilege level.
14. The secure proxy-free data store access system of claim 12 wherein access to each of the plurality of the hierarchically contained programming interface functions is only granted by a privileged nested tuple-space partition containing that control function.
15. The secure proxy-free data store access system of claim 12 wherein the content addressable memory is partitioned from a global memory store.
16. The secure proxy-free data store access system of claim 12 wherein the plurality of hierarchically privileged nested tuple-space partitions is mirrored in temporary memory.
17. The secure proxy-free data store access system of claim 12 wherein each of the hierarchically privileged nested tuple-spaces has a single cache-line-aware writer and a plurality of readers.
18. The secure proxy-free data store access system of claim 12 wherein each processor core has an independent cache line.
19. The secure proxy-free data store access system of claim 12 wherein inter-core communication and software transactional memory is accessed via the hierarchically contained programming interface functions.
20. A unified data store and transaction system comprising:
- querying an n-tuple-based multimodal data structure via a mutable tuple-based interface, the mutable tuple-based interface comprising: a memory controller; and a query operation set;
- receiving a tuple from a mutable tuple-based query interface with a tuple-reader and reading the tuple into a tuple object; and
- evaluating the tuple object against semantic rules via a tuple evaluator.
21. The unified data store of claim 20 wherein the n-tuple-based multimodal data structure is mirrored in a content addressable memory.
22. The unified data store of claim 20 wherein the tuple object further comprises an abstract syntax tree.
23. The unified data store of claim 20 wherein the tuple objects are lexicographically self-sorting.
24. A taxonomy model in a unified data store comprising:
- a data graph structure comprising a plurality of self-describing data objects each self-describing data object comprising a plurality of: a unique identifier; a relationship object linking the self-describing data object to a plurality of other self-describing data objects; a domain object linking the self-describing data object to a plurality of domains; and a token object representing the self-describing data object within a domain;
- an n-tuple-based multimodal data structure in a content addressable memory;
- a tuple-reader; and
- a tuple evaluator configured with a set of tuple semantic rules.
25. The taxonomy model of claim 24 wherein the self-describing data object may be operated on.
26. The taxonomy model of claim 24 wherein the data graph structure is representative of the n-tuple-based multimodal data structure.
27. The taxonomy model of claim 24 wherein the self-describing data object is operationally decoupled from the representations of that object in the token object.
28. The taxonomy model of claim 24 wherein the data graph structure is representative of the content addressable memory.
29. The taxonomy model of claim 24 wherein each of the plurality of self-describing data objects are uniquely enumerated.
30. The taxonomy model of claim 24 wherein the plurality of self-describing data objects further comprises a weight object.
31. A method of implementing a multimodal split-associative data store and control comprising:
- receiving data in the form of a plurality of n-tuples via a transactional pipeline interface;
- transmitting the data to an in-memory multimodal data processor within a content-addressable memory, the in-memory multimodal data processor comprising an associative n-tuple store; and
- receiving a plurality of commands to execute a plurality of actions on the data in the associative n-tuple store.
32. The method of claim 31 wherein the plurality of actions performed on the data in the in-memory multimodal data processor further comprise at least one of:
- a read operation;
- a write operation; and
- an arithmetic operation, wherein the data is retrieved from the associative n-tuple store via a transactional read pipeline.
33. The method of claim 31 wherein the data is retrieved from the associative n-tuple store via the transactional pipeline interface.
34. The method of claim 31 wherein the plurality of actions further comprise storing the data in an associative tuple store within a lexicographically ordered key n-tuple-store and an optional n-tuple store.
35. The method of claim 34 wherein the plurality of n-tuples within the lexicographically ordered key n-tuple-store further comprise at least one of:
- a tuple-store key associated with an optional tuple store n-tuple; a table key associated with an optional n-tuple store table; and
- a stand-alone key n-tuple.
36. The method of claim 31 wherein n-tuples within the associative n-tuple store are lexicographically self-sorting.
37. The method of claim 31 wherein the arity of the plurality of n-tuples within the associative n-tuple store is non-uniform.
38. The method of claim 31 wherein the plurality of n-tuples within the associative n-tuple store possess a modular associativity further comprising:
- one-to-one;
- one-to-many; and
- one-to-none.
Type: Application
Filed: Mar 22, 2019
Publication Date: Oct 3, 2019
Patent Grant number: 11442994
Inventors: Christian Beaumont (Amsterdam), Behnaz Beaumont (Amsterdam), Jouke van der Maas (Koog aan de Zaan), Jan Drake (Seattle, WA)
Application Number: 16/362,033