Patents by Inventor George Kondiles

George Kondiles has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200117471
    Abstract: A computing device includes main volatile memory and a node. The node includes a central processing module, non-volatile memory, and a non-volatile memory interface unit. A combination of the non-volatile memory and the main volatile memory stores an application specific operating system and at least a portion of a computing device operating system. The application specific operating system includes a plurality of application specific system level operations and the computing device operating system includes a plurality of general system level operations. A first processing module of the central processing module operates in accordance with a selected operating system and ignores operations not included in the selected operating system. The selected operating system includes one or more selected application specific system level operations of the application specific operating system.
    Type: Application
    Filed: February 4, 2019
    Publication date: April 16, 2020
    Applicant: Ocient Holdings LLC
    Inventors: George Kondiles, Jason Arnold
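
A minimal sketch of the selective-execution idea in publication 20200117471: a processing module runs only the system level operations that belong to the selected operating system and ignores the rest. The operation names and the use of a string set are illustrative assumptions, not the patented design.

```cpp
// Sketch: execute only operations enabled by the selected operating system;
// ignore everything else (operation names are invented for the example).
#include <cstdio>
#include <set>
#include <string>

int main() {
    // Operations enabled by the selected (application specific) operating system.
    std::set<std::string> selected_ops = {"flush_buffer", "read_segment"};

    // Operations the node receives, some of which are not in the selected OS.
    std::set<std::string> incoming = {"read_segment", "spawn_gui", "flush_buffer"};

    for (const auto& op : incoming) {
        if (selected_ops.count(op)) {
            std::printf("executing %s\n", op.c_str());   // part of the selected OS
        } else {
            std::printf("ignoring  %s\n", op.c_str());   // not in the selected OS
        }
    }
    return 0;
}
```
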
  • Publication number: 20200117649
    Abstract: A method includes receiving a data set that includes a plurality of data records, where a data record includes a first data field containing a first fixed length data value and a second data field containing a first variable length data value. The method further includes accessing a compression dictionary for the second data field, where a first entry of the compression dictionary includes a key field storing a first fixed length index value and a value field storing the first variable length data value, and where the key field has a smaller data size than the value field. The method further includes creating a storage data set based on the compression dictionary and sending the storage data set to a storage sub-system for storage, where the first variable length data value of the second data field of the data record is replaced with the first fixed length index value.
    Type: Application
    Filed: December 14, 2018
    Publication date: April 16, 2020
    Applicant: Ocient Holdings LLC
    Inventors: Jason Arnold, George Kondiles
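
The dictionary compression described in publication 20200117649 can be illustrated with a short sketch; the 32-bit key width, record layout, and function names below are assumptions for the example, not the claimed implementation.

```cpp
// Sketch: a compression dictionary maps variable-length strings to fixed-length
// 32-bit keys, so stored records carry the small key instead of the value.
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::unordered_map<std::string, uint32_t> dict;  // value -> fixed-length key
    std::vector<std::string> reverse;                // key   -> value

    auto encode = [&](const std::string& v) -> uint32_t {
        auto it = dict.find(v);
        if (it != dict.end()) return it->second;
        uint32_t key = static_cast<uint32_t>(reverse.size());
        dict.emplace(v, key);
        reverse.push_back(v);
        return key;
    };

    // Storage data set: the fixed length field is kept; the variable length
    // field is replaced by its dictionary key.
    struct StoredRecord { int64_t fixed; uint32_t var_key; };
    std::vector<StoredRecord> storage;
    storage.push_back({42, encode("north-america/chicago")});
    storage.push_back({43, encode("north-america/chicago")});  // reuses key 0

    for (const auto& r : storage)
        std::printf("fixed=%lld key=%u value=%s\n",
                    (long long)r.fixed, r.var_key, reverse[r.var_key].c_str());
    return 0;
}
```
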
  • Publication number: 20200117664
    Abstract: A method includes receiving, by a first computing entity of a database system, a query request that is formatted in accordance with a generic query format. The method further includes generating, by the first computing entity, an initial query plan based on the query request and a query instruction set. The method further includes determining, by the first computing entity, storage parameters. The method further includes determining, by the first computing entity, processing resources for processing the query request based on the storage parameters. The method further includes generating, by the first computing entity, an optimized query plan from the initial query plan based on the storage parameters, the processing resources, and optimization tools. The method further includes sending, by the first computing entity, the optimized query plan to a second computing entity for distribution and execution of the optimized query plan.
    Type: Application
    Filed: February 5, 2019
    Publication date: April 16, 2020
    Applicant: Ocient Inc.
    Inventors: George Kondiles, Jason Arnold
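
A hypothetical skeleton of the planning pipeline in publication 20200117664: a generic query becomes an initial plan, storage parameters and processing resources are determined, and an optimized plan is produced for hand-off. All names and the toy cost model are invented for illustration.

```cpp
// Sketch: generic query -> initial plan -> (storage parameters, resources)
// -> optimized plan dispatched for distribution and execution.
#include <cstdio>
#include <string>

struct Plan { std::string steps; double cost = 0.0; };
struct StorageParams { int segments; };
struct Resources { int cores; };

Plan build_initial_plan(const std::string& query) {
    return {"scan->filter->agg (" + query + ")", 100.0};
}
StorageParams probe_storage() { return {64}; }
Resources pick_resources(const StorageParams& sp) { return {sp.segments / 8}; }
Plan optimize(const Plan& p, const StorageParams& sp, const Resources& r) {
    // Toy "optimization": a parallel scan over all segments lowers estimated cost.
    return {"parallel-" + p.steps + " over " + std::to_string(sp.segments) + " segments",
            p.cost / r.cores};
}

int main() {
    Plan initial = build_initial_plan("SELECT count(*) FROM t");
    StorageParams sp = probe_storage();
    Resources res = pick_resources(sp);
    Plan optimized = optimize(initial, sp, res);
    std::printf("dispatching plan: %s (est. cost %.1f)\n",
                optimized.steps.c_str(), optimized.cost);
    return 0;
}
```
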
  • Publication number: 20200117383
    Abstract: A method includes identifying, by a processing entity of a computing device, data units to read from non-volatile memory and to write into ordered buffers of volatile memory. The method further includes generating, by the processing entity, read operations regarding the data units, wherein the number of read operations equals "n". The method further includes tagging, by the processing entity, each read operation of the read operations with a unique ordered tag value. The method further includes receiving, by the processing entity, read responses to the read operations from the non-volatile memory. The method further includes writing, by the processing entity, data units contained in the read responses into the ordered buffers in accordance with the ordered tag values. The method further includes tracking, by the processing entity, consumption of the data units from the ordered buffers.
    Type: Application
    Filed: February 5, 2019
    Publication date: April 16, 2020
    Applicant: Ocient Holdings LLC
    Inventors: George Kondiles, Jason Arnold
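
A small sketch of the tagged-read scheme in publication 20200117383, assuming tags run 0..n-1 and double as ordered-buffer indices: out-of-order responses land in the buffer slot named by their tag and are then consumed in tag order.

```cpp
// Sketch: place out-of-order read responses into ordered buffers by tag,
// then consume the buffers in order.
#include <cstdio>
#include <string>
#include <vector>

struct ReadResponse { unsigned tag; std::string data; };

int main() {
    const unsigned n = 4;                       // number of read operations
    std::vector<std::string> ordered(n);        // ordered buffers, one slot per tag

    // Responses from non-volatile memory may arrive in any order.
    std::vector<ReadResponse> responses = {
        {2, "unit-2"}, {0, "unit-0"}, {3, "unit-3"}, {1, "unit-1"}};

    for (const auto& r : responses)
        ordered[r.tag] = r.data;                // slot chosen by the ordered tag value

    for (unsigned i = 0; i < n; ++i)            // consume in tag order
        std::printf("consumed %s\n", ordered[i].c_str());
    return 0;
}
```
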
  • Publication number: 20200117541
    Abstract: A method includes generating, by a processing entity of a computing system, a plurality of parity blocks from a plurality of lines of data blocks. A first number of parity blocks of the plurality of parity blocks is generated from a first line of data blocks of the plurality of lines of data blocks. The method further includes storing, by the processing entity, the plurality of lines of data blocks in data sections of memory of a cluster of computing devices of the computing system in accordance with a read/write balancing pattern and a restricted file system. The method further includes storing, by the processing entity, the plurality of parity blocks in parity sections of memory of the cluster of computing devices in accordance with the read/write balancing pattern and the restricted file system.
    Type: Application
    Filed: February 5, 2019
    Publication date: April 16, 2020
    Applicant: Ocient Inc.
    Inventors: George Kondiles, Jason Arnold
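
Publication 20200117541 does not name the parity code, so the sketch below uses the simplest scheme that fits the description: one XOR parity block generated per line of data blocks.

```cpp
// Sketch: generate an XOR parity block for each line of data blocks.
#include <cstdint>
#include <cstdio>
#include <vector>

using Block = std::vector<uint8_t>;

Block xor_parity(const std::vector<Block>& line) {
    Block p(line.front().size(), 0);
    for (const Block& b : line)
        for (size_t i = 0; i < p.size(); ++i) p[i] ^= b[i];
    return p;
}

int main() {
    std::vector<std::vector<Block>> lines = {
        {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}},    // line 0 of data blocks
        {{0xFF, 0, 0xFF, 0}, {1, 1, 1, 1}, {2, 2, 2, 2}}  // line 1 of data blocks
    };
    for (size_t l = 0; l < lines.size(); ++l) {
        Block p = xor_parity(lines[l]);
        std::printf("parity for line %zu:", l);
        for (uint8_t b : p) std::printf(" %02x", (unsigned)b);
        std::printf("\n");
    }
    return 0;
}
```
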
  • Publication number: 20190073195
    Abstract: A method includes a computing device receiving a sort request regarding data of a table. The method further includes the computing device determining probability traits of the data. The method further includes the computing device dividing the sort request into sub-sort requests based on the probability traits. The method further includes the computing device identifying processing core resources to perform the sub-sort requests based on the probability traits. The method further includes the computing device allocating the sub-sort requests to the identified processing core resources in accordance with the probability traits. The method further includes the computing device allocating data portions to the identified processing core resources in accordance with the probability traits. The method further includes the identified processing core resources executing the allocated sub-sort requests on the corresponding data portions to produce sorted data portions.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 7, 2019
    Applicant: Ocient Inc.
    Inventors: Jason Arnold, George Kondiles
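
A minimal sketch of the divided sort in publication 20190073195, assuming the probability traits amount to an estimated value distribution: data is split at an estimated median, each range is sorted on its own thread, and the ranges concatenate into globally sorted output.

```cpp
// Sketch: split at an estimated quantile, sub-sort each range in parallel,
// then concatenate the already globally ordered ranges.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data = {42, 7, 93, 15, 61, 3, 88, 27, 54, 70};
    int split = 50;  // assumed median from sampled probability traits

    std::vector<int> low, high;
    for (int v : data) (v < split ? low : high).push_back(v);

    // Each sub-sort request runs on its own processing core resource.
    std::thread t1([&] { std::sort(low.begin(), low.end()); });
    std::thread t2([&] { std::sort(high.begin(), high.end()); });
    t1.join(); t2.join();

    low.insert(low.end(), high.begin(), high.end());  // ranges are already ordered
    for (int v : low) std::printf("%d ", v);
    std::printf("\n");
    return 0;
}
```
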
  • Publication number: 20190034485
    Abstract: A large highly parallel database management system includes thousands of nodes storing a huge volume of data. The database management system includes multiple query optimizers for determining low cost execution plans for queries. The database management system is adapted to receive a data query. An execution plan generator component of the database management system generates an initial execution plan for the query. The initial execution plan is fed as input to more than one query optimizer. Each optimizer starts with the initial execution plan, generates alternative execution plans, and determines a satisfactory execution plan that incurs the lowest cost. The database management system compares the execution plans selected by the optimizers and selects the one with the lowest cost. The multiple query optimizers run in parallel.
    Type: Application
    Filed: May 29, 2018
    Publication date: January 31, 2019
    Inventors: Jason Arnold, George Kondiles
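
An illustration of the parallel-optimizer selection in publication 20190034485; the candidate strategies and cost figures are invented, and real optimizers would derive them from the initial plan rather than hard-code them.

```cpp
// Sketch: several optimizers start from the same initial plan in parallel;
// the system keeps the cheapest resulting plan.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct Plan { std::string description; double cost = 0.0; };

int main() {
    Plan initial{"hash-join(a,b)", 500.0};

    // Each optimizer proposes an alternative starting from the initial plan.
    std::vector<Plan> candidates(3);
    std::vector<std::thread> optimizers;
    optimizers.emplace_back([&] { candidates[0] = {"sort-merge-join(a,b)", 420.0}; });
    optimizers.emplace_back([&] { candidates[1] = {"index-nested-loop(a,b)", 310.0}; });
    optimizers.emplace_back([&] { candidates[2] = initial; });  // baseline plan
    for (auto& o : optimizers) o.join();

    Plan best = candidates[0];
    for (const Plan& p : candidates)
        if (p.cost < best.cost) best = p;
    std::printf("selected plan: %s (cost %.0f)\n", best.description.c_str(), best.cost);
    return 0;
}
```
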
  • Publication number: 20180349364
    Abstract: A large highly parallel database management system includes thousands of nodes storing a huge volume of data. The database management system includes a query optimizer for optimizing data queries. The optimizer estimates the column cardinality of a set of rows based on estimated column cardinalities of disjoint subsets of the set of rows. For a particular column, the actual column cardinality of the set of rows is the sum of the actual column cardinalities of the two subsets of rows. The optimizer creates two respective Bloom filters from the two subsets, and then combines them into a combined Bloom filter using logical OR operations. The column cardinality of the set of rows is then estimated using a computation over the combined Bloom filter.
    Type: Application
    Filed: May 29, 2018
    Publication date: December 6, 2018
    Inventors: Jason Arnold, George Kondiles
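
The Bloom filter combination in publication 20180349364 lends itself to a compact sketch: filters built from two disjoint row subsets are OR-ed, and the union's column cardinality is estimated from the combined filter's fill ratio via n ≈ -(m/k) · ln(1 - X/m), where X is the count of set bits. The filter size, hash construction, and data below are assumptions for the example.

```cpp
// Sketch: build a Bloom filter per subset, OR them, estimate distinct values
// from the combined filter's fill ratio.
#include <bitset>
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

constexpr size_t M = 4096;  // bits per filter
constexpr int K = 3;        // hash functions per value

std::bitset<M> make_filter(const std::vector<std::string>& values) {
    std::bitset<M> f;
    for (const auto& v : values)
        for (int i = 0; i < K; ++i)
            f.set(std::hash<std::string>{}(v + '#' + std::to_string(i)) % M);
    return f;
}

double estimate_cardinality(const std::bitset<M>& f) {
    double x = static_cast<double>(f.count());       // set bits X
    return -(double(M) / K) * std::log(1.0 - x / M); // n ~= -(m/k) ln(1 - X/m)
}

int main() {
    // Two disjoint subsets of rows with disjoint column values (300 + 200 distinct).
    std::vector<std::string> subset_a, subset_b;
    for (int i = 0; i < 300; ++i) subset_a.push_back("val" + std::to_string(i));
    for (int i = 300; i < 500; ++i) subset_b.push_back("val" + std::to_string(i));

    std::bitset<M> combined = make_filter(subset_a) | make_filter(subset_b);
    std::printf("estimated distinct values: %.0f (actual 500)\n",
                estimate_cardinality(combined));
    return 0;
}
```
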
  • Publication number: 20180285414
    Abstract: A cluster node within a cluster of a highly parallel database system includes at least one processing unit that runs a set of first tier threads and a set of second tier threads, a storage disk drive, and a networking interface. When a first tier thread receives a task, it divides the task into a set of subtasks. The first tier thread also assigns the set of subtasks between a subset of the set of second tier threads for execution. Each second tier thread within the subset processes the one or more subtasks it is assigned. When the task is a work, the subtasks are work units. When the task is a work unit, the subtasks are subwork units.
    Type: Application
    Filed: April 2, 2018
    Publication date: October 4, 2018
    Inventors: George Kondiles, Rhett Colin Starr
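
A hypothetical sketch of the two-tier split in publication 20180285414: a first tier thread divides a task into subtasks and assigns them to a subset of second tier threads, each of which processes its share. The thread count and round-robin assignment are illustrative.

```cpp
// Sketch: a first tier thread splits a task into subtasks and hands them to
// a subset of second tier threads for execution.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

void second_tier(int thread_id, const std::vector<int>& subtasks) {
    for (int s : subtasks)
        std::printf("second tier thread %d processed subtask %d\n", thread_id, s);
}

int main() {
    // The first tier thread receives a task made of 8 units of work.
    std::vector<int> task = {0, 1, 2, 3, 4, 5, 6, 7};

    // Divide the task between a subset of second tier threads (here 2 of them).
    std::vector<std::vector<int>> assignment(2);
    for (size_t i = 0; i < task.size(); ++i)
        assignment[i % assignment.size()].push_back(task[i]);

    std::vector<std::thread> second_tier_threads;
    for (size_t t = 0; t < assignment.size(); ++t)
        second_tier_threads.emplace_back(second_tier, static_cast<int>(t),
                                         std::cref(assignment[t]));
    for (auto& t : second_tier_threads) t.join();
    return 0;
}
```
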
  • Publication number: 20180285167
    Abstract: A highly parallel database system includes multiple clusters of nodes. Each cluster includes multiple nodes. Each node includes a set of first tier threads and a set of second tier threads. Each second tier thread determines its own current load and communicates the load status to a corresponding first tier thread. Each first tier thread checks the load status of each corresponding second tier thread when it allocates tasks among its corresponding second tier threads, achieving load balance across the second tier threads.
    Type: Application
    Filed: April 2, 2018
    Publication date: October 4, 2018
    Inventors: George Kondiles, Rhett Colin Starr
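
A small simulation of the load reporting in publication 20180285167, with an invented load metric: each second tier thread's reported load sits in a shared slot, and the first tier thread routes each new task to the least loaded thread.

```cpp
// Sketch: the first tier thread checks reported loads and allocates each task
// to the least loaded second tier thread.
#include <cstdio>
#include <vector>

int main() {
    // Load each second tier thread most recently reported about itself
    // (queued task counts; in a real system each thread updates its own slot).
    std::vector<int> reported_load = {3, 1, 5, 1};

    // The first tier thread consults the reported loads before allocating work.
    for (int task = 0; task < 6; ++task) {
        size_t target = 0;
        for (size_t i = 1; i < reported_load.size(); ++i)
            if (reported_load[i] < reported_load[target]) target = i;
        ++reported_load[target];  // the chosen thread's load grows by one task
        std::printf("task %d -> second tier thread %zu\n", task, target);
    }
    return 0;
}
```
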
  • Publication number: 20180165351
    Abstract: Analyzing large datasets requires prioritization of analytical calculations to reduce analysis time and resource requirements. The prioritization process includes determining characteristics of a dataset, determining a list of analytical calculations, prioritizing the list of analytical calculations based on the dataset characteristics and characteristics of the analytical calculations, selecting the highest ranked analytical calculation, and applying the selected calculation to the dataset. Prioritizing a new calculation based on a known set of calculations includes ranking the known calculations by their result scores from execution on a given dataset, comparing the new calculation to the known set of calculations, determining a similar calculation, and assigning the rank of the similar calculation to the new calculation.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 14, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski, S. Christopher Gladwin
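
An illustrative sketch of the prioritization in publication 20180165351; the value-per-cost score and the cost-based similarity measure are stand-ins chosen for the example, not the patented ranking.

```cpp
// Sketch: rank known calculations for a dataset, run the top one first, and
// give a new calculation the rank of the known calculation it most resembles.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct Calc { std::string name; double cost; double expected_value; };

int main() {
    // Dataset characteristics (here just the row count) feed the cost estimates.
    double rows = 1e7;
    std::vector<Calc> known = {
        {"histogram",      rows * 1.0, 5.0},
        {"correlation",    rows * 3.0, 9.0},
        {"distinct-count", rows * 0.5, 3.0}};

    // Rank by expected value per unit cost, highest first.
    std::sort(known.begin(), known.end(), [](const Calc& a, const Calc& b) {
        return a.expected_value / a.cost > b.expected_value / b.cost;
    });
    std::printf("run first: %s\n", known.front().name.c_str());

    // A new calculation takes the rank of the most similar known one (by cost).
    Calc fresh{"quantiles", rows * 0.6, 0.0};
    size_t best = 0;
    for (size_t i = 1; i < known.size(); ++i)
        if (std::fabs(known[i].cost - fresh.cost) < std::fabs(known[best].cost - fresh.cost))
            best = i;
    std::printf("'%s' assigned the rank of '%s'\n",
                fresh.name.c_str(), known[best].name.c_str());
    return 0;
}
```
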
  • Publication number: 20180165312
    Abstract: Data records and the associated data confidence of the data in each record are both stored in a database system. Each data record includes a data confidence. The data confidence indicates an accuracy or reliability level of the data of the corresponding data record. The data records, with their confidence information, are constructed in memory before they are stored in the database system. When the data records are retrieved from the database for analysis, the data confidence is retrieved as well. The analysis of the data contained in the data records further considers the data confidence, so the analysis result is affected by the data confidence.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 14, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski, S. Christopher Gladwin
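
A minimal sketch of confidence-aware analysis per publication 20180165312, assuming one confidence score per record and a confidence-weighted mean as the analysis: the stored confidence travels with each record and directly shapes the result.

```cpp
// Sketch: each record carries a confidence score that weights the analysis.
#include <cstdio>
#include <vector>

struct Record { double value; double confidence; };  // confidence in [0, 1]

int main() {
    std::vector<Record> records = {{10.0, 0.9}, {100.0, 0.1}, {12.0, 0.8}};

    double weighted_sum = 0.0, weight = 0.0, plain_sum = 0.0;
    for (const Record& r : records) {
        weighted_sum += r.value * r.confidence;  // low-confidence data counts less
        weight += r.confidence;
        plain_sum += r.value;
    }
    std::printf("unweighted mean: %.2f\n", plain_sum / records.size());
    std::printf("confidence-weighted mean: %.2f\n", weighted_sum / weight);
    return 0;
}
```
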
  • Publication number: 20180095996
    Abstract: A networked database management system (DBMS) and supporting infrastructure is disclosed. At least one node in the DBMS includes a processor having a preferred data alignment. An application operating on the processor has a preferred memory alignment. The application creates a data structure having multiple fields, each of which exists on a boundary having the preferred memory alignment. This allows the application to access or write data in the fields with lower processing overhead than if the fields were not forced onto the preferred memory alignment.
    Type: Application
    Filed: October 2, 2017
    Publication date: April 5, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski
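
The alignment idea in publication 20180095996 can be shown directly in C++: each field is forced onto a boundary matching an assumed 64-bit preferred alignment, and compile-time checks verify the layout.

```cpp
// Sketch: every field of the record starts on an 8-byte boundary, so accesses
// never straddle the processor's assumed preferred alignment.
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct AlignedRecord {
    alignas(8) std::uint64_t id;       // starts at offset 0
    alignas(8) std::uint32_t flags;    // 4-byte field, still starts on an 8-byte boundary
    alignas(8) std::uint64_t payload;  // starts on the next 8-byte boundary
};

static_assert(offsetof(AlignedRecord, id) % 8 == 0, "id must be 8-byte aligned");
static_assert(offsetof(AlignedRecord, flags) % 8 == 0, "flags must be 8-byte aligned");
static_assert(offsetof(AlignedRecord, payload) % 8 == 0, "payload must be 8-byte aligned");

int main() {
    std::printf("sizeof(AlignedRecord) = %zu bytes\n", sizeof(AlignedRecord));
    std::printf("flags offset = %zu, payload offset = %zu\n",
                offsetof(AlignedRecord, flags), offsetof(AlignedRecord, payload));
    return 0;
}
```
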
  • Publication number: 20180096049
    Abstract: A massively parallel database management system includes an index store and a payload store including a set of storage systems of different temperatures. Both stores include a list of clusters. Each cluster includes a set of nodes with storage devices forming a group of segments. Nodes and clusters are connected over high speed links. The list of clusters within the payload store includes clusters of different temperatures. The payload store transitions data of a segment group in a higher temperature cluster to a segment group in a lower temperature cluster in parallel. A node moves data of a segment in the higher temperature cluster to a corresponding node's segment in the lower temperature cluster. Once the data is written to the destination segment in the lower temperature cluster, the source segment is freed to store other data. The temperatures include blazing, hot, warm and cold.
    Type: Application
    Filed: October 2, 2017
    Publication date: April 5, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski
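
An illustrative model of the tier migration in publication 20180096049 (the abstract's temperatures are blazing, hot, warm and cold; two tiers suffice here): each node copies its segment to the colder cluster in parallel and frees the source once the destination copy is written. Everything beyond that flow is assumed.

```cpp
// Sketch: a segment group migrates from a hotter cluster to a colder one,
// one segment per node, all in parallel.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct Segment { std::string data; bool in_use = true; };

int main() {
    // One segment per node in the hotter cluster's segment group.
    std::vector<Segment> hot_group = {{"seg-A"}, {"seg-B"}, {"seg-C"}};
    std::vector<Segment> cold_group(hot_group.size());

    // Each node moves its own segment, so the group migrates in parallel.
    std::vector<std::thread> movers;
    for (size_t node = 0; node < hot_group.size(); ++node) {
        movers.emplace_back([&, node] {
            cold_group[node].data = hot_group[node].data;  // write to the colder cluster
            hot_group[node].in_use = false;                // then free the source segment
        });
    }
    for (auto& m : movers) m.join();

    for (size_t node = 0; node < cold_group.size(); ++node)
        std::printf("node %zu: cold=%s source freed=%s\n", node,
                    cold_group[node].data.c_str(),
                    hot_group[node].in_use ? "no" : "yes");
    return 0;
}
```
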
  • Publication number: 20180095914
    Abstract: A networked database management system (DBMS) and supporting infrastructure is disclosed. At least one application in the disclosed DBMS can directly access a pinned RDMA buffer for network reads. In addition, an application can directly access a pinned DMA buffer for drive reads. The nodes of the DBMS are configured in a particular configuration to aid in high speed accesses. In addition, all data is stored in register width fields, or integer multiples thereof. Finally, at least one application in the disclosed DBMS includes a drive access class. The drive access class includes an NVME drive access subclass and a SATA drive access subclass. The NVME drive access subclass allows the application to directly access NVME drives without making an operating system call, while the SATA drive access subclass allows the application to directly access SATA drives without making an operating system call.
    Type: Application
    Filed: October 2, 2017
    Publication date: April 5, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski
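
A structural sketch only of the drive access classes in publication 20180095914: a base class with NVME and SATA subclasses. Real user-space drive access requires mapped device queues; the method bodies below are stand-in stubs that merely report which path would run.

```cpp
// Sketch: a drive access base class with NVME and SATA subclasses, the shape
// described in the abstract (bodies are illustrative stubs, not real I/O).
#include <cstdio>
#include <memory>
#include <vector>

class DriveAccess {                           // base drive access class
public:
    virtual ~DriveAccess() = default;
    virtual void read_block(unsigned long lba) = 0;
};

class NvmeDriveAccess : public DriveAccess {  // NVME subclass: user-space queue path
public:
    void read_block(unsigned long lba) override {
        std::printf("NVME: submit read for LBA %lu without an operating system call\n", lba);
    }
};

class SataDriveAccess : public DriveAccess {  // SATA subclass: user-space SATA path
public:
    void read_block(unsigned long lba) override {
        std::printf("SATA: issue read for LBA %lu without an operating system call\n", lba);
    }
};

int main() {
    std::vector<std::unique_ptr<DriveAccess>> drives;
    drives.push_back(std::make_unique<NvmeDriveAccess>());
    drives.push_back(std::make_unique<SataDriveAccess>());
    for (auto& d : drives) d->read_block(2048);
    return 0;
}
```
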
  • Publication number: 20180096048
    Abstract: A massively parallel database management system includes an index store and a payload store including a set of storage systems of different temperatures. Both the index store and the payload store include a list of clusters. Each cluster includes a set of nodes with storage devices forming a group of segments. Nodes and clusters are connected over high speed links. Each cluster receives data and splits the data into data rows based on a predetermined size. The data rows are randomly and evenly distributed among all nodes of the cluster.
    Type: Application
    Filed: October 2, 2017
    Publication date: April 5, 2018
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski
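
A minimal sketch of the row distribution in publication 20180096048, assuming fixed-size rows and a simple shuffle: incoming data is cut into rows of a predetermined size, shuffled, and dealt evenly across the cluster's nodes.

```cpp
// Sketch: split incoming data into fixed-size rows, then distribute the rows
// randomly and evenly across the nodes of the cluster.
#include <algorithm>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main() {
    std::string incoming(64, 'x');           // raw data received by the cluster
    const size_t row_size = 8;               // predetermined row size

    std::vector<std::string> rows;
    for (size_t off = 0; off < incoming.size(); off += row_size)
        rows.push_back(incoming.substr(off, row_size));

    std::mt19937 rng(12345);
    std::shuffle(rows.begin(), rows.end(), rng);   // random...

    const size_t nodes = 4;
    std::vector<std::vector<std::string>> per_node(nodes);
    for (size_t i = 0; i < rows.size(); ++i)
        per_node[i % nodes].push_back(rows[i]);    // ...and even distribution

    for (size_t n = 0; n < nodes; ++n)
        std::printf("node %zu holds %zu rows\n", n, per_node[n].size());
    return 0;
}
```
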