Patents by Inventor Gary C. King

Gary C. King has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10127214
    Abstract: Methods are presented for generating a natural language model. The method may comprise: ingesting training data representative of documents to be analyzed by the natural language model, generating a hierarchical data structure comprising at least two topical nodes within which the training data is to be subdivided into by the natural language model, selecting a plurality of documents among the training data to be annotated, generating an annotation prompt for each document configured to elicit an annotation about said document indicating which node among the at least two topical nodes said document is to be classified into, receiving the annotation based on the annotation prompt; and generating the natural language model using an adaptive machine learning process configured to determine patterns among the annotations for how the documents in the training data are to be subdivided according to the at least two topical nodes of the hierarchical data structure.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: November 13, 2018
    Assignee: Sansa AI Inc.
    Inventors: Robert J. Munro, Schuyler D. Erle, Christopher Walker, Sarah K. Luger, Jason Brenier, Gary C. King, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Jessica D. Long, James B. Robinson, Brendan D. Callahan, Michelle Casbon, Ujjwal Sarin, Aneesh Nair, Veena Basavaraj, Tripti Saxena, Edgar Nunez, Martha G. Hinrichs, Haley Most, Tyler J. Schnoebelen
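    The abstract above describes generating annotation prompts over a hierarchy of topical nodes and then learning from the collected annotations. A minimal Python sketch of that flow follows; the names, the topic tree, and the word-count scoring are invented for illustration and are not taken from the patent.

        # Illustrative only: two topical nodes, an annotation prompt per document, and
        # a toy word-count model trained from the collected annotations.
        from collections import Counter, defaultdict

        TOPIC_TREE = {"root": ["billing", "support"]}       # hierarchical data structure

        def annotation_prompt(doc_id, text, nodes):
            return f"Document {doc_id}: {text!r} -- which topic applies? {', '.join(nodes)}"

        def train(annotated):                               # [(text, node), ...]
            model = defaultdict(Counter)
            for text, node in annotated:
                model[node].update(text.lower().split())
            return model

        def classify(model, text):
            def score(node):
                counts = model[node]
                total = sum(counts.values()) or 1
                return sum(counts[w] / total for w in text.lower().split())
            return max(model, key=score)

        docs = [("d1", "refund for my invoice"), ("d2", "app crashes on start")]
        for doc_id, text in docs:
            print(annotation_prompt(doc_id, text, TOPIC_TREE["root"]))   # elicit annotations

        annotations = [("refund for my invoice", "billing"), ("app crashes on start", "support")]
        print(classify(train(annotations), "invoice question"))          # -> billing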
  • Publication number: 20180157636
    Abstract: Methods, apparatuses, and systems are presented for generating natural language models using a novel system architecture for feature extraction. A method for extracting features for natural language processing comprises: accessing one or more tokens generated from a document to be processed; receiving one or more feature types defined by user; receiving selection of one or more feature types from a plurality of system-defined and user-defined feature types, wherein each feature type comprises one or more rules for generating features; receiving one or more parameters for the selected feature types, wherein the one or more rules for generating features are defined at least in part by the parameters; generating features associated with the document to be processed based on the selected feature types and the received parameters; and outputting the generated features in a format common among all feature types.
    Type: Application
    Filed: November 15, 2017
    Publication date: June 7, 2018
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Brendan D. Callahan, Jessica D. Long, Gary C. King, Paul A. Tepper, Jason A. Brenier, Stefan Krawczyk
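    A hedged sketch of the feature-extraction architecture this abstract describes: feature types (system-defined or user-defined) carry their own rules and parameters but emit features in one common format. The function and parameter names below are assumptions, not the patent's API.

        # Illustrative only: each feature type is a callable with its own rules and
        # parameters, and every type emits features in the same (name, value) format.
        def ngram_features(tokens, n=2):                    # system-defined feature type
            return [("ngram", " ".join(tokens[i:i + n])) for i in range(len(tokens) - n + 1)]

        def length_bucket(tokens, bucket=5):                # user-defined feature type
            return [("len_bucket", len(tokens) // bucket)]

        FEATURE_TYPES = {"ngrams": ngram_features, "length": length_bucket}

        def extract(tokens, selected, params):
            features = []
            for name in selected:                           # the selected feature types
                features.extend(FEATURE_TYPES[name](tokens, **params.get(name, {})))
            return features                                 # common output format

        tokens = "the quick brown fox".split()
        print(extract(tokens, ["ngrams", "length"], {"ngrams": {"n": 2}}))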
  • Publication number: 20180137098
    Abstract: Systems, methods, and apparatuses are presented for a trained language model to be stored in an efficient manner such that the trained language model may be utilized in virtually any computing device to conduct natural language processing. Unlike other natural language processing engines that may be computationally intensive to the point of being capable of running only on high performance machines, the organization of the natural language models according to the present disclosures allows for natural language processing to be performed even on smaller devices, such as mobile devices.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 17, 2018
    Applicant: Idibon, Inc.
    Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Gary C. King, Jason Brenier, James B. Robinson
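    The abstract states the goal (a trained model stored compactly enough to run on small devices) without detailing the format, so the following is purely illustrative: packing a small linear model into a binary blob with Python's struct module that a constrained device could load without a heavyweight runtime.

        # Purely illustrative: serialize a small linear model's weights to a compact
        # binary blob and read it back, with no heavyweight runtime required.
        import struct

        def save_model(weights, path):                      # weights: {feature: float}
            with open(path, "wb") as f:
                f.write(struct.pack("<I", len(weights)))
                for feature, w in weights.items():
                    blob = feature.encode("utf-8")
                    f.write(struct.pack("<H", len(blob)) + blob + struct.pack("<f", w))

        def load_model(path):
            weights = {}
            with open(path, "rb") as f:
                (n,) = struct.unpack("<I", f.read(4))
                for _ in range(n):
                    (m,) = struct.unpack("<H", f.read(2))
                    feature = f.read(m).decode("utf-8")
                    (w,) = struct.unpack("<f", f.read(4))
                    weights[feature] = w
            return weights

        save_model({"invoice": 1.5, "crash": -2.0}, "model.bin")
        print(load_model("model.bin"))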
  • Patent number: 9965458
    Abstract: Systems, methods, and apparatuses are presented for a novel natural language tokenizer and tagger. In some embodiments, a method for tokenizing text for natural language processing comprises: generating from a pool of documents, a set of statistical models comprising one or more entries each indicating a likelihood of appearance of a character/letter sequence in the pool of documents; receiving a set of rules comprising rules that identify character/letter sequences as valid tokens; transforming one or more entries in the statistical models into new rules that are added to the set of rules when the entries indicate a high likelihood; receiving a document to be processed; dividing the document to be processed into tokens based on the set of statistical models and the set of rules, wherein the statistical models are applied where the rules fail to unambiguously tokenize the document; and outputting the divided tokens for natural language processing.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: May 8, 2018
    Assignee: Sansa AI Inc.
    Inventors: Robert J. Munro, Rob Voigt, Schuyler D. Erle, Brendan D. Callahan, Gary C. King, Jessica D. Long, Jason Brenier, Tripti Saxena, Stefan Krawczyk
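    A sketch of the rules-plus-statistics idea in this abstract: rules tokenize what they can, and a character-bigram model takes over where the rules do not apply. The regex rules, the bigram model, and the threshold are illustrative assumptions, not the patented implementation.

        # Illustrative only: regex rules identify valid tokens; a character-bigram
        # model splits the spans the rules leave unresolved.
        import re
        from collections import Counter

        RULES = [re.compile(r"[A-Za-z]+"), re.compile(r"\d+")]

        def char_bigram_model(pool):
            counts = Counter()
            for doc in pool:
                counts.update(doc[i:i + 2] for i in range(len(doc) - 1))
            total = sum(counts.values())
            return {bigram: c / total for bigram, c in counts.items()}

        def tokenize(text, model, threshold=0.01):
            tokens, i = [], 0
            while i < len(text):
                match = None
                for rule in RULES:                          # try the rules first
                    match = rule.match(text, i)
                    if match:
                        break
                if match:
                    tokens.append(match.group())
                    i = match.end()
                elif text[i].isspace():
                    i += 1
                else:                                       # statistical fallback: extend the
                    j = i + 1                               # token while the bigram is likely
                    while j < len(text) and not text[j].isspace() and model.get(text[j - 1:j + 1], 0) >= threshold:
                        j += 1
                    tokens.append(text[i:j])
                    i = j
            return tokens

        model = char_bigram_model(["c++ and c# are languages", "email me!!"])
        print(tokenize("try c++ now!!", model))             # -> ['try', 'c', '++', 'now', '!!']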
  • Publication number: 20180095946
    Abstract: Systems, methods, and apparatuses are presented for a novel natural language tokenizer and tagger. In some embodiments, a method for tokenizing text for natural language processing comprises: generating from a pool of documents, a set of statistical models comprising one or more entries each indicating a likelihood of appearance of a character/letter sequence in the pool of documents; receiving a set of rules comprising rules that identify character/letter sequences as valid tokens; transforming one or more entries in the statistical models into new rules that are added to the set of rules when the entries indicate a high likelihood; receiving a document to be processed; dividing the document to be processed into tokens based on the set of statistical models and the set of rules, wherein the statistical models are applied where the rules fail to unambiguously tokenize the document; and outputting the divided tokens for natural language processing.
    Type: Application
    Filed: May 16, 2017
    Publication date: April 5, 2018
    Applicant: Idibon, Inc.
    Inventors: Robert Munro, Rob Voigt, Schuyler D. Erle, Brendan D. Callahan, Gary C. King, Jessica D. Long, Jason Brenier, Tripti Saxena, Stefan Krawczyk
  • Patent number: 9836450
    Abstract: Systems, methods, and apparatuses are presented for a trained language model to be stored in an efficient manner such that the trained language model may be utilized in virtually any computing device to conduct natural language processing. Unlike other natural language processing engines that may be computationally intensive to the point of being capable of running only on high performance machines, the organization of the natural language models according to the present disclosures allows for natural language processing to be performed even on smaller devices, such as mobile devices.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: December 5, 2017
    Assignee: Sansa AI Inc.
    Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Gary C. King, Jason Brenier, James B. Robinson
  • Publication number: 20160162466
    Abstract: Systems, methods, and apparatuses are presented for a novel natural language tokenizer and tagger. In some embodiments, a method for tokenizing text for natural language processing comprises: generating from a pool of documents, a set of statistical models comprising one or more entries each indicating a likelihood of appearance of a character/letter sequence in the pool of documents; receiving a set of rules comprising rules that identify character/letter sequences as valid tokens; transforming one or more entries in the statistical models into new rules that are added to the set of rules when the entries indicate a high likelihood; receiving a document to be processed; dividing the document to be processed into tokens based on the set of statistical models and the set of rules, wherein the statistical models are applied where the rules fail to unambiguously tokenize the document; and outputting the divided tokens for natural language processing.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Rob Voigt, Schuyler D. Erle, Brendan D. Callahan, Gary C. King, Jessica D. Long, Jason Brenier, Tripti Saxena, Stefan Krawczyk
  • Publication number: 20160162456
    Abstract: Methods are presented for generating a natural language model. The method may comprise: ingesting training data representative of documents to be analyzed by the natural language model, generating a hierarchical data structure comprising at least two topical nodes within which the training data is to be subdivided into by the natural language model, selecting a plurality of documents among the training data to be annotated, generating an annotation prompt for each document configured to elicit an annotation about said document indicating which node among the at least two topical nodes said document is to be classified into, receiving the annotation based on the annotation prompt; and generating the natural language model using an adaptive machine learning process configured to determine patterns among the annotations for how the documents in the training data are to be subdivided according to the at least two topical nodes of the hierarchical data structure.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Schuyler D. Erle, Christopher Walker, Sarah K. Luger, Jason Brenier, Gary C. King, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Jessica D. Long, James B. Robinson, Brendan D. Callahan, Michelle Casbon, Ujjwal Sarin, Aneesh Nair, Veena Basavaraj, Tripti Saxena, Edgar Nunez, Martha G. Hinrichs, Haley Most, Tyler J. Schnoebelen
  • Publication number: 20160162457
    Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: selecting from a pool of documents, a first set of documents to be annotated; receiving annotations of the first set of documents elicited by first human readable prompts; training a natural language model using the annotated first set of documents; determining documents in the pool having uncertain natural language processing results according to the trained natural language model and/or the received annotations; selecting from the pool of documents, a second set of documents to be annotated comprising documents having uncertain natural language processing results; receiving annotations of the second set of documents elicited by second human readable prompts; and retraining a natural language model using the annotated second set of documents.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Schuyler D. Erle, Jason Brenier, Paul A. Tepper, Tripti Saxena, Gary C. King, Jessica D. Long, Brendan D. Callahan, Tyler J. Schnoebelen, Stefan Krawczyk, Veena Basavaraj
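    This abstract describes an active-learning loop: train on an annotated batch, find the documents the model is least certain about, annotate those, and retrain. A compact sketch under assumed details (word-count scoring and a score-margin uncertainty measure):

        # Illustrative only: train, pick the least-certain documents for annotation,
        # then retrain with the new annotations.
        from collections import Counter, defaultdict

        def train(annotated):                               # [(text, label), ...]
            model = defaultdict(Counter)
            for text, label in annotated:
                model[label].update(text.lower().split())
            return model

        def scores(model, text):
            return {label: sum(c[w] for w in text.lower().split()) for label, c in model.items()}

        def most_uncertain(model, pool, k=1):
            def margin(text):
                s = sorted(scores(model, text).values(), reverse=True)
                return s[0] - s[1] if len(s) > 1 else s[0]
            return sorted(pool, key=margin)[:k]             # smallest margin = most uncertain

        pool = ["refund my invoice", "app crashes", "invoice app question"]
        first_batch = [("refund my invoice", "billing"), ("app crashes", "support")]
        model = train(first_batch)
        to_annotate = most_uncertain(model, pool)           # prompt annotators about these next
        print(to_annotate)                                  # -> ['invoice app question']
        model = train(first_batch + [(to_annotate[0], "billing")])   # retrain with the new annotation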
  • Publication number: 20160162468
    Abstract: Systems, methods, and apparatuses are presented for a trained language model to be stored in an efficient manner such that the trained language model may be utilized in virtually any computing device to conduct natural language processing. Unlike other natural language processing engines that may be computationally intensive to the point of being capable of running only on high performance machines, the organization of the natural language models according to the present disclosures allows for natural language processing to be performed even on smaller devices, such as mobile devices.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Gary C. King, Jason Brenier, James B. Robinson
  • Publication number: 20160162458
    Abstract: Methods and systems are disclosed for creating and linking a series of interfaces configured to display information and receive confirmation of classifications made by a natural language modeling engine to improve organization of a collection of documents into an hierarchical structure. In some embodiments, the interfaces may display to an annotator a plurality of labels of potential classifications for a document as identified by a natural language modeling engine, collect annotated responses from the annotator, aggregate the annotated responses across other annotators, analyze the accuracy of the natural language modeling engine based on the aggregated annotated responses, and predict accuracies of the natural language modeling engine's classifications of the documents.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Jason Brenier, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Gary C. King, Brendan D. Callahan, Tyler J. Schnoebelen, Edgar Nunez, Haley Most
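    A small sketch of the aggregation step this abstract mentions: combine annotator responses per document and use the consensus to estimate the engine's accuracy. The data layout and names are invented for illustration.

        # Illustrative only: aggregate annotator responses per document and use the
        # consensus labels to estimate how often the engine's predictions were right.
        from collections import Counter

        def aggregate(responses):                           # {doc_id: [label, label, ...]}
            return {doc: Counter(labels).most_common(1)[0][0] for doc, labels in responses.items()}

        def engine_accuracy(predictions, responses):        # predictions: {doc_id: label}
            consensus = aggregate(responses)
            hits = sum(predictions[doc] == label for doc, label in consensus.items())
            return hits / len(consensus)

        responses = {"d1": ["billing", "billing", "support"], "d2": ["support", "support"]}
        predictions = {"d1": "billing", "d2": "billing"}
        print(engine_accuracy(predictions, responses))      # -> 0.5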
  • Publication number: 20160162464
    Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: receiving more than one annotation of a document; calculating a level of agreement among the received annotations; determining that a criterion among a first criterion, a second criterion, and a third criterion is satisfied based at least in part on the level of agreement; determining an aggregated annotation representing an aggregation of information in the received annotations and training a natural language model using the aggregated annotation, when the first criterion is satisfied; generating at least one human readable prompt configured to receive additional annotations of the document, when the second criterion is satisfied; and discarding the received annotations from use in training the natural language model, when the third criterion is satisfied.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Brendan D. Callahan, Gary C. King, Paul A. Tepper, Jana N. Thompson, Tyler J. Schnoebelen, Jason Brenier, Jessica D. Long
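    The abstract's three-way decision (aggregate and train, ask for more annotations, or discard) can be sketched as follows; the agreement thresholds and the majority-vote aggregation are assumptions, not values from the patent.

        # Illustrative only: majority-vote agreement routed to one of three outcomes.
        from collections import Counter

        def handle_annotations(annotations, high=0.8, low=0.4):
            label, top = Counter(annotations).most_common(1)[0]
            agreement = top / len(annotations)              # level of agreement
            if agreement >= high:
                return "train", label                       # aggregate and train on it
            if agreement >= low:
                return "ask_more", None                     # generate another prompt
            return "discard", None                          # too noisy to train on

        print(handle_annotations(["billing"] * 4 + ["support"]))             # -> ('train', 'billing')
        print(handle_annotations(["billing", "support", "other", "other"]))  # -> ('ask_more', None)
        print(handle_annotations(["billing", "support", "other"]))           # -> ('discard', None)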
  • Publication number: 20160162467
    Abstract: Methods, apparatuses, and systems are presented for generating natural language models using a novel system architecture for feature extraction. A method for extracting features for natural language processing comprises: accessing one or more tokens generated from a document to be processed; receiving one or more feature types defined by user; receiving selection of one or more feature types from a plurality of system-defined and user-defined feature types, wherein each feature type comprises one or more rules for generating features; receiving one or more parameters for the selected feature types, wherein the one or more rules for generating features are defined at least in part by the parameters; generating features associated with the document to be processed based on the selected feature types and the received parameters; and outputting the generated features in a format common among all feature types.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 9, 2016
    Applicant: Idibon, Inc.
    Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Brendan D. Callahan, Jessica D. Long, Gary C. King, Paul A. Tepper, Jason Brenier, Stefan Krawczyk
  • Patent number: 8918440
    Abstract: Methods and systems for decompressing data are described. The relative magnitudes of a first value and a second value are compared. The first value and the second value represent respective endpoints of a range of values. The first value and the second value each have N bits of precision. Either the first or second value is selected, based on the result of the comparison. The selected value is scaled to produce a third value having N+1 bits of precision. A specified bit value is appended as the least significant bit of the other (non-selected) value to produce a fourth value having N+1 bits of precision.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: December 23, 2014
    Assignee: NVIDIA Corporation
    Inventors: Douglas H. Rogers, Gary C. King, Walter E. Donovan
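    A worked sketch of the decompression step this abstract outlines, under stated assumptions (N = 5, "scaling" done by replicating the top bit into the new low bit, and the specified bit fixed at 1); the abstract does not pin down those choices.

        # Illustrative only: N = 5, "scaling" replicates the top bit into the new low
        # bit, and the specified bit defaults to 1.
        N = 5

        def expand_endpoints(v0, v1, specified_bit=1):
            # compare the relative magnitudes of the two N-bit endpoints
            selected, other = (v0, v1) if v0 >= v1 else (v1, v0)
            scaled = (selected << 1) | (selected >> (N - 1))   # selected value scaled to N + 1 bits
            appended = (other << 1) | specified_bit            # specified bit appended as the other's LSB
            return scaled, appended

        print(expand_endpoints(0b10110, 0b01001))              # -> (45, 19)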
  • Patent number: 8594441
    Abstract: Image-based data, such as a block of texel data, is accessed. The data includes sets of color component values. A luminance value is computed for each set of color components values, generating a range of luminance values. A first set and a second set of color component values that correspond to the minimum and maximum luminance values are selected from the sets of color component values. A third set of color component values can be mapped to an index that identifies how the color component values of the third set can be decoded using the color component values of the first and second sets. The index value is selected by determining where the luminance value for the third set lies in the range of luminance values.
    Type: Grant
    Filed: September 12, 2006
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventors: Gary C. King, Edward A. Hutchins, Michael J. M. Toksvig
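    A sketch of the luminance-range encoding this abstract describes, with assumed details (Rec. 601 luma weights and four index levels, neither of which the abstract specifies):

        # Illustrative only: Rec. 601 luma weights and four index levels.
        def luminance(rgb):
            r, g, b = rgb
            return 0.299 * r + 0.587 * g + 0.114 * b

        def encode_block(texels, levels=4):
            lums = [luminance(t) for t in texels]
            lo, hi = min(lums), max(lums)
            c0, c1 = texels[lums.index(lo)], texels[lums.index(hi)]   # endpoint colors
            span = (hi - lo) or 1.0
            # map each texel to the index of where its luminance falls in the range
            indices = [min(levels - 1, int((l - lo) / span * (levels - 1) + 0.5)) for l in lums]
            return c0, c1, indices

        block = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
        print(encode_block(block))   # -> ((0, 0, 255), (0, 255, 0), [1, 3, 0, 2])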
  • Patent number: 8547395
    Abstract: A computer-implemented graphics system has a mode of operation in which primitive coverage information is generated by a rasterizer for real sample locations and virtual sample locations for use in anti-aliasing. An individual pixel includes a single real sample location and at least one virtual sample location. If the coverage information cannot be changed by a pixel shader, then the rasterizer can write the coverage information to a framebuffer. If, however, the coverage information can be changed by the shader, then the rasterizer sends the coverage information to the shader.
    Type: Grant
    Filed: December 20, 2006
    Date of Patent: October 1, 2013
    Assignee: NVIDIA Corporation
    Inventors: Edward A. Hutchins, Christopher D. S. Donham, Gary C. King, Michael J. M. Toksvig
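    A simplified dataflow sketch of this abstract (illustrative, not the hardware): coverage for a pixel's real and virtual sample locations goes through the pixel shader only when the shader can change it, and otherwise is written straight to the framebuffer.

        # Illustrative only: route per-pixel coverage (one real + several virtual
        # samples) through the shader only when the shader can change it.
        def resolve_coverage(coverage, shader_can_change_coverage, shader, framebuffer, xy):
            if shader_can_change_coverage:
                framebuffer[xy] = shader(coverage)          # rasterizer sends coverage to the shader
            else:
                framebuffer[xy] = coverage                  # rasterizer writes coverage directly

        fb = {}
        resolve_coverage(0b1011, False, None, fb, (4, 7))
        resolve_coverage(0b1011, True, lambda c: c & 0b0011, fb, (5, 7))   # shader masks two samples
        print(fb)                                           # -> {(4, 7): 11, (5, 7): 3}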
  • Publication number: 20120084334
    Abstract: Methods and systems for decompressing data are described. The relative magnitudes of a first value and a second value are compared. The first value and the second value represent respective endpoints of a range of values. The first value and the second value each have N bits of precision. Either the first or second value is selected, based on the result of the comparison. The selected value is scaled to produce a third value having N+1 bits of precision. A specified bit value is appended as the least significant bit of the other (non-selected) value to produce a fourth value having N+1 bits of precision.
    Type: Application
    Filed: December 13, 2011
    Publication date: April 5, 2012
    Applicant: NVIDIA CORPORATION
    Inventors: Douglas H. Rogers, Gary C. King, Walter E. Donovan
  • Patent number: 8078656
    Abstract: Methods and systems for decompressing data are described. The relative magnitudes of a first value and a second value are compared. The first value and the second value represent respective endpoints of a range of values. The first value and the second value each have N bits of precision. Either the first or second value is selected, based on the result of the comparison. The selected value is scaled to produce a third value having N+1 bits of precision. A specified bit value is appended as the least significant bit of the other (non-selected) value to produce a fourth value having N+1 bits of precision.
    Type: Grant
    Filed: November 16, 2004
    Date of Patent: December 13, 2011
    Assignee: NVIDIA Corporation
    Inventors: Douglas H. Rogers, Gary C. King, Walter E. Donovan
  • Patent number: 8004522
    Abstract: The boundary of a surface can be represented as a series of line segments. A number of polygons are successively superimposed onto the surface. The polygons utilize a common reference point and each of the polygons has an edge that coincides with one of the line segments. Coverage bits are associated with respective sample locations within a pixel. A value of a coverage bit is changed each time a sample location associated with the coverage bit is covered by one of the polygons. Final values of the coverage bits are buffered after all of the polygons have been processed. The values of the coverage bits can be used when the surface is subsequently rendered.
    Type: Grant
    Filed: August 7, 2007
    Date of Patent: August 23, 2011
    Assignee: NVIDIA Corporation
    Inventors: Michael J. M. Toksvig, Brian K. Cabral, Edward A. Hutchins, Gary C. King, Christopher D. S. Donham
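    The toggling scheme in this abstract resembles even-odd coverage accumulation: triangles fanned from a common reference point, one per boundary segment, flip a coverage bit each time they cover a sample. A Python sketch of that idea (the patent describes hardware coverage bits, not this loop):

        # Illustrative only: one triangle per boundary segment, fanned from a common
        # reference point; each covered sample's bit is toggled.
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def inside_triangle(p, tri):
            s = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
            return all(v >= 0 for v in s) or all(v <= 0 for v in s)

        def coverage(samples, boundary, ref):
            bits = [0] * len(samples)
            for a, b in zip(boundary, boundary[1:] + boundary[:1]):   # one polygon per line segment
                tri = (ref, a, b)
                for i, p in enumerate(samples):
                    if inside_triangle(p, tri):
                        bits[i] ^= 1                        # toggle the coverage bit when covered
            return bits

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]
        print(coverage([(1, 2), (6, 1)], square, (10, 10)))   # -> [1, 0]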
  • Patent number: 7961195
    Abstract: Methods and systems for compressing and decompressing data are described. A first value of N+1 bits and a second value of N+1 bits are reduced to strings of N bits each. The first and second strings of N bits are stored in a particular order relative to one another in a compression block. The particular order in which the first and second strings of N bits are stored in the compression block is used to derive a bit value that is then used in combination with one of the strings of N bits to reconstruct that string as N+1 bits.
    Type: Grant
    Filed: November 16, 2004
    Date of Patent: June 14, 2011
    Assignee: Nvidia Corporation
    Inventors: Douglas H. Rogers, Gary C. King, Walter E. Donovan
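    A hedged sketch of the order-as-hidden-bit mechanism in this abstract; how the values are reduced and which string is reconstructed are not specified there, so here the first stored string gets the derived bit back and the example values are chosen so the round trip is exact.

        # Illustrative only: the dropped low bit of the first value is carried by the
        # storage order and recovered by comparing the two stored N-bit strings.
        N = 5

        def compress(a, b):                                 # a, b: (N + 1)-bit values
            a_n, b_n = a >> 1, b >> 1                       # reduce both to N-bit strings
            hidden = a & 1                                  # bit to be carried by the ordering
            assert (1 if a_n > b_n else 0) == hidden, "this simple order cannot encode the bit"
            return a_n, b_n                                 # the relative order encodes `hidden`

        def decompress(first, second):
            hidden = 1 if first > second else 0             # derive the bit from the order
            return (first << 1) | hidden, second            # rebuild the first string as N + 1 bits

        print(decompress(*compress(0b101101, 0b010010)))    # -> (45, 9)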