Patents by Inventor Jason Brenier
Jason Brenier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240078386
Abstract: Methods, apparatuses, and systems are presented for generating natural language models using a novel system architecture for feature extraction. A method for extracting features for natural language processing comprises: accessing one or more tokens generated from a document to be processed; receiving one or more feature types defined by user; receiving selection of one or more feature types from a plurality of system-defined and user-defined feature types, wherein each feature type comprises one or more rules for generating features; receiving one or more parameters for the selected feature types, wherein the one or more rules for generating features are defined at least in part by the parameters; generating features associated with the document to be processed based on the selected feature types and the received parameters; and outputting the generated features in a format common among all feature types.
Type: Application
Filed: November 2, 2023
Publication date: March 7, 2024
Applicant: 100.co Global Holdings, LLC
Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Brendan D. Callahan, Jessica D. Long, Gary C. King, Paul A. Tepper, Jason A. Brenier, Stefan Krawczyk
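The sketch below illustrates the kind of feature-extraction flow this abstract describes: feature types (system-defined or user-defined) each carry rules and parameters, and every selected type emits features in one common format. It is a minimal sketch under assumed names (FeatureType, extract_features, the example rules), not the patented implementation.

```python
# Illustrative sketch: selectable feature types with rules and parameters,
# all producing features in a common output format.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class FeatureType:
    """A named feature type: a rule (callable) plus the parameters it uses."""
    name: str
    rule: Callable[[List[str], Dict], List[str]]
    params: Dict = field(default_factory=dict)


def ngram_rule(tokens: List[str], params: Dict) -> List[str]:
    """Example system-defined rule: contiguous n-grams of a configurable size."""
    n = params.get("n", 2)
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def prefix_rule(tokens: List[str], params: Dict) -> List[str]:
    """Example user-defined rule: character prefixes of each token."""
    k = params.get("length", 3)
    return [tok[:k] for tok in tokens]


def extract_features(tokens: List[str], selected: List[FeatureType]) -> List[Dict]:
    """Run every selected feature type and emit features in a common format."""
    features = []
    for ftype in selected:
        for value in ftype.rule(tokens, ftype.params):
            features.append({"type": ftype.name, "value": value})
    return features


if __name__ == "__main__":
    tokens = ["natural", "language", "feature", "extraction"]
    selected = [
        FeatureType("bigram", ngram_rule, {"n": 2}),
        FeatureType("prefix3", prefix_rule, {"length": 3}),
    ]
    for feat in extract_features(tokens, selected):
        print(feat)
```

Because every rule returns values through the same `{"type": ..., "value": ...}` shape, downstream model training can consume user-defined and system-defined features interchangeably.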
-
Patent number: 11675977
Abstract: Systems, methods, and apparatuses are presented for a novel natural language tokenizer and tagger. In some embodiments, a method for tokenizing text for natural language processing comprises: generating from a pool of documents, a set of statistical models comprising one or more entries each indicating a likelihood of appearance of a character/letter sequence in the pool of documents; receiving a set of rules comprising rules that identify character/letter sequences as valid tokens; transforming one or more entries in the statistical models into new rules that are added to the set of rules when the entries indicate a high likelihood; receiving a document to be processed; dividing the document to be processed into tokens based on the set of statistical models and the set of rules, wherein the statistical models are applied where the rules fail to unambiguously tokenize the document; and outputting the divided tokens for natural language processing.
Type: Grant
Filed: March 27, 2020
Date of Patent: June 13, 2023
Assignee: Daash Intelligence, Inc.
Inventors: Robert J. Munro, Rob Voigt, Schuyler D. Erle, Brendan D. Callahan, Gary C. King, Jessica D. Long, Jason Brenier, Tripti Saxena, Stefan Krawczyk
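A minimal sketch of the rules-plus-statistics tokenization idea in this abstract: explicit rules handle the sequences they recognize, a frequency-based model backs them up where they cannot, and high-likelihood sequences from the model are promoted into new rules. The function names and the 0.01 promotion threshold are assumptions, not taken from the patent.

```python
# Sketch: rule-based tokenization with a statistical fallback, plus promotion
# of high-likelihood sequences from the statistics into the rule set.
import re
from collections import Counter
from typing import Dict, List


def build_statistics(documents: List[str]) -> Dict[str, float]:
    """Estimate how likely each whitespace-delimited sequence is in the pool."""
    counts = Counter(seq for doc in documents for seq in doc.split())
    total = sum(counts.values())
    return {seq: count / total for seq, count in counts.items()}


def promote_to_rules(stats: Dict[str, float], rules: set, threshold: float = 0.01) -> None:
    """Turn high-likelihood sequences into rules, i.e. accept them as valid tokens."""
    for seq, likelihood in stats.items():
        if likelihood >= threshold:
            rules.add(seq)


def tokenize(text: str, rules: set, stats: Dict[str, float]) -> List[str]:
    """Apply rules where possible; fall back to the statistics, then to splitting."""
    tokens = []
    for chunk in text.split():
        if chunk in rules:
            tokens.append(chunk)                              # rule tokenizes it unambiguously
        elif chunk in stats:
            tokens.append(chunk)                              # statistics accept it as seen
        else:
            tokens.extend(re.findall(r"\w+|[^\w\s]", chunk))  # last resort: split punctuation off
    return tokens


if __name__ == "__main__":
    pool = ["the model runs fast", "the tokenizer runs", "e.g. the rules"]
    stats = build_statistics(pool)
    rules = {"e.g."}                                          # seed rule: keep "e.g." intact
    promote_to_rules(stats, rules)
    print(tokenize("e.g. the tokenizer runs fast!", rules, stats))
```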
-
Patent number: 11599714
Abstract: Systems and methods are presented for the automatic placement of rules applied to topics in a logical hierarchy when conducting natural language processing. In some embodiments, a method includes: accessing, at a child node in a logical hierarchy, at least one rule associated with the child node; identifying a percolation criterion associated with a parent node to the child node, said percolation criterion indicating that the at least one rule associated with the child node is to be associated also with the parent node; associating the at least one rule with the parent node such that the at least one rule defines a second factor for determining whether the document is to also be classified into the parent node; accessing the document for natural language processing; and determining whether the document is to be classified into the parent node or the child node based on the at least one rule.
Type: Grant
Filed: March 6, 2020
Date of Patent: March 7, 2023
Assignee: 100.co Technologies, Inc.
Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Jason Brenier, Jessica D. Long, Brendan D. Callahan, Paul A. Tepper, Edgar Nunez
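The sketch below shows one way to read the "percolation" step this abstract describes: a rule attached to a child topic is also attached to its parent when the parent carries a percolation criterion, so a document matching the child's rule can be classified into the parent as well. The Node class and the keyword rule are illustrative assumptions only.

```python
# Sketch: percolating a child node's classification rule up to its parent
# in a topic hierarchy.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    name: str
    parent: Optional["Node"] = None
    percolate_from_children: bool = False             # the percolation criterion
    rules: List[Callable[[str], bool]] = field(default_factory=list)


def percolate_rules(child: Node) -> None:
    """Copy the child's rules up to the parent when its criterion is set."""
    if child.parent is not None and child.parent.percolate_from_children:
        child.parent.rules.extend(child.rules)


def classify(document: str, nodes: List[Node]) -> List[str]:
    """Return every node whose rules match the document."""
    return [node.name for node in nodes if any(rule(document) for rule in node.rules)]


if __name__ == "__main__":
    billing = Node("billing", percolate_from_children=True)
    refunds = Node("refunds", parent=billing,
                   rules=[lambda doc: "refund" in doc.lower()])
    percolate_rules(refunds)
    print(classify("Please process my refund", [billing, refunds]))
    # ['billing', 'refunds'] -- the refund rule now also classifies into the parent
```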
-
Patent number: 11295071
Abstract: Methods and systems are disclosed for creating and linking a series of interfaces configured to display information and receive confirmation of classifications made by a natural language modeling engine to improve organization of a collection of documents into an hierarchical structure. In some embodiments, the interfaces may display to an annotator a plurality of labels of potential classifications for a document as identified by a natural language modeling engine, collect annotated responses from the annotator, aggregate the annotated responses across other annotators, analyze the accuracy of the natural language modeling engine based on the aggregated annotated responses, and predict accuracies of the natural language modeling engine's classifications of the documents.
Type: Grant
Filed: December 14, 2018
Date of Patent: April 5, 2022
Assignee: 100.co, LLC
Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Jason Brenier, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Gary C. King, Brendan D. Callahan, Tyler J. Schnoebelen, Edgar Nunez, Haley Most
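A small sketch, under assumed data structures, of the aggregation step this abstract describes: several annotators confirm or correct the labels the engine proposed, their responses are aggregated by majority vote, and the engine's accuracy is estimated against the aggregated labels. The majority-vote rule and all names here are illustrative, not the patented method.

```python
# Sketch: aggregate annotator responses per document and estimate the
# modeling engine's accuracy against the aggregated labels.
from collections import Counter
from typing import Dict, List


def aggregate_annotations(responses: Dict[str, List[str]]) -> Dict[str, str]:
    """Majority-vote label per document across annotators."""
    return {doc_id: Counter(labels).most_common(1)[0][0]
            for doc_id, labels in responses.items()}


def engine_accuracy(predictions: Dict[str, str], aggregated: Dict[str, str]) -> float:
    """Fraction of documents where the engine agrees with the aggregated label."""
    agreed = sum(1 for doc_id, label in aggregated.items()
                 if predictions.get(doc_id) == label)
    return agreed / len(aggregated) if aggregated else 0.0


if __name__ == "__main__":
    predictions = {"d1": "billing", "d2": "shipping", "d3": "billing"}
    responses = {"d1": ["billing", "billing", "refunds"],
                 "d2": ["shipping", "shipping"],
                 "d3": ["refunds", "refunds", "billing"]}
    aggregated = aggregate_annotations(responses)
    print(aggregated)
    print(f"estimated engine accuracy: {engine_accuracy(predictions, aggregated):.2f}")
```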
-
Patent number: 11288444
Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: selecting from a pool of documents, a first set of documents to be annotated; receiving annotations of the first set of documents elicited by first human readable prompts; training a natural language model using the annotated first set of documents; determining documents in the pool having uncertain natural language processing results according to the trained natural language model and/or the received annotations; selecting from the pool of documents, a second set of documents to be annotated comprising documents having uncertain natural language processing results; receiving annotations of the second set of documents elicited by second human readable prompts; and retraining a natural language model using the annotated second set of documents.
Type: Grant
Filed: December 11, 2020
Date of Patent: March 29, 2022
Assignee: 100.co, LLC
Inventors: Robert J. Munro, Schuyler D. Erle, Jason Brenier, Paul A. Tepper, Tripti Saxena, Gary C. King, Jessica D. Long, Brendan D. Callahan, Tyler J. Schnoebelen, Stefan Krawczyk, Veena Basavaraj
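The sketch below walks through an uncertainty-driven annotation loop of the shape this abstract describes: train on a first annotated set, score the remaining pool, select the documents the model is least certain about, annotate them, and retrain. It uses scikit-learn as an assumed stand-in for the patent's model, folds the new annotations in with the earlier ones (one reasonable reading of "retraining"), and all function names are hypothetical.

```python
# Sketch: select uncertain documents for a second annotation round and retrain.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def train(documents, labels):
    """Fit a simple text classifier on the annotated documents."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(documents, labels)
    return model


def most_uncertain(model, pool, k):
    """Pick the k pool documents with the lowest top-class probability."""
    probs = model.predict_proba(pool)
    order = np.argsort(probs.max(axis=1))             # least confident first
    return [pool[i] for i in order[:k]]


def active_learning_round(model, labeled_docs, labels, pool, annotate, k=2):
    """Annotate the most uncertain pool documents, then retrain with them included."""
    selected = most_uncertain(model, pool, k)
    new_labels = [annotate(doc) for doc in selected]   # second human-readable prompts
    return train(labeled_docs + selected, labels + new_labels), selected


if __name__ == "__main__":
    first_docs = ["refund my order", "late delivery", "charged me twice", "lost package"]
    first_labels = ["billing", "shipping", "billing", "shipping"]
    model = train(first_docs, first_labels)
    pool = ["where is my parcel", "why was I billed", "update my address"]
    # Stand-in for a human annotator answering the prompts.
    answers = {"where is my parcel": "shipping", "why was I billed": "billing",
               "update my address": "account"}
    model, picked = active_learning_round(model, first_docs, first_labels, pool, answers.get)
    print("selected for the second annotation round:", picked)
```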
-
Publication number: 20210232763
Abstract: Methods and systems are disclosed for creating and linking a series of interfaces configured to display information and receive confirmation of classifications made by a natural language modeling engine to improve organization of a collection of documents into an hierarchical structure. In some embodiments, the interfaces may display to an annotator a plurality of labels of potential classifications for a document as identified by a natural language modeling engine, collect annotated responses from the annotator, aggregate the annotated responses across other annotators, analyze the accuracy of the natural language modeling engine based on the aggregated annotated responses, and predict accuracies of the natural language modeling engine's classifications of the documents.
Type: Application
Filed: February 22, 2021
Publication date: July 29, 2021
Applicant: AI IP INVESTMENTS LTD
Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Jason Brenier, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Gary C. King, Brendan D. Callahan, Tyler J. Schnoebelen, Edgar Nunez, Haley Most
-
Publication number: 20210232762
Abstract: Systems are presented for generating a natural language model. The system may comprise a database module, an application program interface (API) module, a background processing module, and an applications module, each stored on the at least one memory and executable by the at least one processor. The system may be configured to generate the natural language model by: ingesting training data, generating a hierarchical data structure, selecting a plurality of documents among the training data to be annotated, generating an annotation prompt for each document configured to elicit an annotation about said document, receiving the annotation based on the annotation prompt, and generating the natural language model using an adaptive machine learning process configured to determine patterns among the annotations for how the documents in the training data are to be subdivided according to the at least two topical nodes of the hierarchical data structure.
Type: Application
Filed: February 3, 2021
Publication date: July 29, 2021
Applicant: AI IP INVESTMENTS LTD
Inventors: Robert J. Munro, Schuyler D. Erle, Christopher Walker, Sarah K. Luger, Jason Brenier, Gary C. King, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Jessica D. Long, James B. Robinson, Brendan D. Callahan, Michelle Casbon, Ujjwal Sarin, Aneesh Nair, Veena Basavaraj, Tripti Saxena, Edgar Nunez, Martha G. Hinrichs, Haley Most, Tyler J. Schnoebelen
-
Publication number: 20210232761
Abstract: Systems and methods are presented for providing improved machine performance in natural language processing. In some example embodiments, an API module is presented that is configured to drive processing of a system architecture for natural language processing. Aspects of the present disclosure allow for a natural language model to classify documents while other documents are being retrieved in real time. The natural language model and the documents are configured to be stored in a stateless format, which also allows for additional functions to be performed on the documents while the natural language model is used to continue classifying other documents.
Type: Application
Filed: December 22, 2020
Publication date: July 29, 2021
Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Jason Brenier, Paul A. Tepper, Jessica D. Long, James B. Robinson, Aneesh Nair, Michelle Casbon, Stefan Krawczyk
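Below is an illustrative sketch of the stateless idea this abstract points at: the model is kept as plain, serializable data (here a JSON-able dict of keyword lists), so one worker can keep classifying documents while another keeps retrieving new ones, with no shared mutable state between them. The queue-and-thread layout and the keyword model are assumptions for illustration.

```python
# Sketch: classify documents as they stream in while retrieval continues,
# with the model held in a stateless, serialized form.
import json
import queue
import threading
from typing import Dict, List

MODEL_JSON = json.dumps({"billing": ["refund", "charge"],
                         "shipping": ["delivery", "parcel"]})


def classify(document: str, model_json: str) -> str:
    """Stateless classification: everything needed travels in the serialized model."""
    model: Dict[str, List[str]] = json.loads(model_json)
    scores = {label: sum(word in document.lower() for word in words)
              for label, words in model.items()}
    return max(scores, key=scores.get)


def retriever(out: "queue.Queue", documents: List[str]) -> None:
    """Simulates real-time retrieval feeding the classifier."""
    for doc in documents:
        out.put(doc)
    out.put(None)                                     # sentinel: no more documents


if __name__ == "__main__":
    incoming: "queue.Queue" = queue.Queue()
    docs = ["please refund the charge", "my parcel is late", "delivery never arrived"]
    threading.Thread(target=retriever, args=(incoming, docs), daemon=True).start()
    while (doc := incoming.get()) is not None:        # classify while retrieval runs
        print(doc, "->", classify(doc, MODEL_JSON))
```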
-
Publication number: 20210232760
Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: selecting from a pool of documents, a first set of documents to be annotated; receiving annotations of the first set of documents elicited by first human readable prompts; training a natural language model using the annotated first set of documents; determining documents in the pool having uncertain natural language processing results according to the trained natural language model and/or the received annotations; selecting from the pool of documents, a second set of documents to be annotated comprising documents having uncertain natural language processing results; receiving annotations of the second set of documents elicited by second human readable prompts; and retraining a natural language model using the annotated second set of documents.
Type: Application
Filed: December 11, 2020
Publication date: July 29, 2021
Inventors: Robert J. Munro, Schuyler D. Erle, Jason Brenier, Paul A. Tepper, Tripti Saxena, Gary C. King, Jessica D. Long, Brendan D. Callahan, Tyler J. Schnoebelen, Stefan Krawczyk, Veena Basavaraj
-
Publication number: 20210165955
Abstract: Systems and methods are presented for the automatic placement of rules applied to topics in a logical hierarchy when conducting natural language processing. In some embodiments, a method includes: accessing, at a child node in a logical hierarchy, at least one rule associated with the child node; identifying a percolation criterion associated with a parent node to the child node, said percolation criterion indicating that the at least one rule associated with the child node is to be associated also with the parent node; associating the at least one rule with the parent node such that the at least one rule defines a second factor for determining whether the document is to also be classified into the parent node; accessing the document for natural language processing; and determining whether the document is to be classified into the parent node or the child node based on the at least one rule.
Type: Application
Filed: March 6, 2020
Publication date: June 3, 2021
Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Jason Brenier, Jessica D. Long, Brendan D. Callahan, Paul A. Tepper, Edgar Nunez
-
Publication number: 20210157984
Abstract: Systems, methods, and apparatuses are presented for a novel natural language tokenizer and tagger. In some embodiments, a method for tokenizing text for natural language processing comprises: generating from a pool of documents, a set of statistical models comprising one or more entries each indicating a likelihood of appearance of a character/letter sequence in the pool of documents; receiving a set of rules comprising rules that identify character/letter sequences as valid tokens; transforming one or more entries in the statistical models into new rules that are added to the set of rules when the entries indicate a high likelihood; receiving a document to be processed; dividing the document to be processed into tokens based on the set of statistical models and the set of rules, wherein the statistical models are applied where the rules fail to unambiguously tokenize the document; and outputting the divided tokens for natural language processing.
Type: Application
Filed: March 27, 2020
Publication date: May 27, 2021
Inventors: Robert J. Munro, Rob Voigt, Schuyler D. Erle, Brendan D. Callahan, Gary C. King, Jessica D. Long, Jason Brenier, Tripti Saxena, Stefan Krawczyk
-
Publication number: 20210150130
Abstract: Methods are presented for generating a natural language model. The method may comprise: ingesting training data representative of documents to be analyzed by the natural language model, generating a hierarchical data structure comprising at least two topical nodes within which the training data is to be subdivided into by the natural language model, selecting a plurality of documents among the training data to be annotated, generating an annotation prompt for each document configured to elicit an annotation about said document indicating which node among the at least two topical nodes said document is to be classified into, receiving the annotation based on the annotation prompt; and generating the natural language model using an adaptive machine learning process configured to determine patterns among the annotations for how the documents in the training data are to be subdivided according to the at least two topical nodes of the hierarchical data structure.
Type: Application
Filed: February 20, 2020
Publication date: May 20, 2021
Inventors: Robert J. Munro, Schuyler D. Erle, Christopher Walker, Sarah K. Luger, Jason Brenier, Gary C. King, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Jessica D. Long, James B. Robinson, Brendan D. Callahan, Michelle Casbon, Ujjwal Sarin, Aneesh Nair, Veena Basavaraj, Tripti Saxena, Edgar Nunez, Martha G. Hinrichs, Haley Most, Tyler Schnoebelen
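A sketch of two pieces this abstract mentions, under assumed names: a hierarchical data structure of topical nodes, and a human-readable annotation prompt generated per document asking which node the document belongs to. The TopicNode class and the prompt wording are purely illustrative.

```python
# Sketch: a topical hierarchy and per-document annotation prompts built from it.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TopicNode:
    name: str
    children: List["TopicNode"] = field(default_factory=list)


def leaf_names(node: TopicNode) -> List[str]:
    """Collect the leaf topics a document can be classified into."""
    if not node.children:
        return [node.name]
    return [leaf for child in node.children for leaf in leaf_names(child)]


def annotation_prompt(document: str, hierarchy: TopicNode) -> str:
    """Build the human-readable prompt that elicits an annotation for one document."""
    options = ", ".join(leaf_names(hierarchy))
    return f'Which topic best describes this document: {options}?\n"{document}"'


if __name__ == "__main__":
    root = TopicNode("support", [
        TopicNode("billing", [TopicNode("refunds"), TopicNode("charges")]),
        TopicNode("shipping"),
    ])
    print(annotation_prompt("I was charged twice for one order.", root))
```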
-
Publication number: 20210110111
Abstract: Systems, methods, and apparatuses are presented for a trained language model to be stored in an efficient manner such that the trained language model may be utilized in virtually any computing device to conduct natural language processing. Unlike other natural language processing engines that may be computationally intensive to the point of being capable of running only on high performance machines, the organization of the natural language models according to the present disclosures allows for natural language processing to be performed even on smaller devices, such as mobile devices.
Type: Application
Filed: May 26, 2020
Publication date: April 15, 2021
Applicant: Singapore Biotech PTE. LTD.
Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Gary C. King, Jason Brenier, James B. Robinson
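The sketch below is one generic way to make a trained model compact enough for small devices, not the patent's actual format: float feature weights are quantized to one signed byte each and packed alongside the vocabulary, then unpacked on the device for scoring. The scale factor and the tiny linear scorer are assumptions.

```python
# Sketch: pack a small linear text model into a compact byte blob and unpack it
# for on-device scoring.
import json
import struct
from typing import Dict


def pack_model(weights: Dict[str, float]) -> bytes:
    """Quantize weights to int8 (scale 1/100) and serialize with the vocabulary."""
    vocab = list(weights)
    quantized = [max(-127, min(127, round(weights[w] * 100))) for w in vocab]
    header = json.dumps(vocab).encode("utf-8")
    return struct.pack("<I", len(header)) + header + struct.pack(f"<{len(quantized)}b", *quantized)


def unpack_model(blob: bytes) -> Dict[str, float]:
    """Recover the (approximate) weights on the target device."""
    header_len = struct.unpack_from("<I", blob)[0]
    vocab = json.loads(blob[4:4 + header_len].decode("utf-8"))
    quantized = struct.unpack_from(f"<{len(vocab)}b", blob, 4 + header_len)
    return {word: q / 100 for word, q in zip(vocab, quantized)}


def score(document: str, weights: Dict[str, float]) -> float:
    """Tiny linear scorer: sum the weights of the words that appear."""
    return sum(weights.get(word, 0.0) for word in document.lower().split())


if __name__ == "__main__":
    weights = {"refund": 0.82, "charge": 0.41, "parcel": -0.63}
    blob = pack_model(weights)
    print(f"packed size: {len(blob)} bytes")
    print(score("please refund the charge", unpack_model(blob)))
```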
-
Publication number: 20210081611
Abstract: Methods, apparatuses, and systems are presented for generating natural language models using a novel system architecture for feature extraction. A method for extracting features for natural language processing comprises: accessing one or more tokens generated from a document to be processed; receiving one or more feature types defined by user; receiving selection of one or more feature types from a plurality of system-defined and user-defined feature types, wherein each feature type comprises one or more rules for generating features; receiving one or more parameters for the selected feature types, wherein the one or more rules for generating features are defined at least in part by the parameters; generating features associated with the document to be processed based on the selected feature types and the received parameters; and outputting the generated features in a format common among all feature types.
Type: Application
Filed: April 29, 2020
Publication date: March 18, 2021
Applicant: Singapore Biotech PTE. LTD.
Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Brendan D. Callahan, Jessica D. Long, Gary C. King, Paul A. Tepper, Jason A. Brenier, Stefan Krawczyk
-
Publication number: 20200234002
Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: selecting from a pool of documents, a first set of documents to be annotated; receiving annotations of the first set of documents elicited by first human readable prompts; training a natural language model using the annotated first set of documents; determining documents in the pool having uncertain natural language processing results according to the trained natural language model and/or the received annotations; selecting from the pool of documents, a second set of documents to be annotated comprising documents having uncertain natural language processing results; receiving annotations of the second set of documents elicited by second human readable prompts; and retraining a natural language model using the annotated second set of documents.
Type: Application
Filed: November 21, 2018
Publication date: July 23, 2020
Inventors: Robert J. Munro, Schuyler D. Erle, Jason Brenier, Paul A. Tepper, Tripti Saxena, Gary C. King, Jessica D. Long, Brendan D. Callahan, Tyler J. Schnoebelen, Stefan Krawczyk, Veena Basavaraj
-
Publication number: 20200184146
Abstract: Methods, apparatuses and computer readable medium are presented for generating a natural language model. A method for generating a natural language model comprises: receiving more than one annotation of a document; calculating a level of agreement among the received annotations; determining that a criterion among a first criterion, a second criterion, and a third criterion is satisfied based at least in part on the level of agreement; determining an aggregated annotation representing an aggregation of information in the received annotations and training a natural language model using the aggregated annotation, when the first criterion is satisfied; generating at least one human readable prompt configured to receive additional annotations of the document, when the second criterion is satisfied; and discarding the received annotations from use in training the natural language model, when the third criterion is satisfied.
Type: Application
Filed: January 22, 2020
Publication date: June 11, 2020
Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Brendan D. Callahan, Gary C. King, Paul A. Tepper, Jana N. Thompson, Tyler J. Schnoebelen, Jason Brenier, Jessica D. Long
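A minimal sketch of the three-way decision this abstract lays out: compute how strongly the annotators agree on a document, then either aggregate and train on the label, ask for more annotations, or discard the annotations. The 0.75 and 0.5 thresholds are invented for illustration, not taken from the patent.

```python
# Sketch: route a document's annotations by level of agreement into one of
# three outcomes -- train on the aggregate, request more annotations, or discard.
from collections import Counter
from typing import List, Optional, Tuple


def handle_annotations(labels: List[str],
                       keep_at: float = 0.75,
                       ask_more_at: float = 0.5) -> Tuple[str, Optional[str]]:
    """Return (action, aggregated_label) for one document's annotations."""
    top_label, top_count = Counter(labels).most_common(1)[0]
    agreement = top_count / len(labels)               # level of agreement
    if agreement >= keep_at:
        return "train_on_aggregated", top_label       # first criterion
    if agreement >= ask_more_at:
        return "request_more_annotations", None       # second criterion
    return "discard", None                            # third criterion


if __name__ == "__main__":
    print(handle_annotations(["billing", "billing", "billing", "refunds"]))
    print(handle_annotations(["billing", "billing", "refunds", "shipping"]))
    print(handle_annotations(["billing", "refunds", "shipping", "account"]))
```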
-
Publication number: 20200034737
Abstract: Systems are presented for generating a natural language model. The system may comprise a database module, an application program interface (API) module, a background processing module, and an applications module, each stored on the at least one memory and executable by the at least one processor. The system may be configured to generate the natural language model by: ingesting training data, generating a hierarchical data structure, selecting a plurality of documents among the training data to be annotated, generating an annotation prompt for each document configured to elicit an annotation about said document, receiving the annotation based on the annotation prompt, and generating the natural language model using an adaptive machine learning process configured to determine patterns among the annotations for how the documents in the training data are to be subdivided according to the at least two topical nodes of the hierarchical data structure.
Type: Application
Filed: February 28, 2019
Publication date: January 30, 2020
Applicant: AIPARC HOLDINGS PTE. LTD.
Inventors: Robert J. Munro, Schuyler D. Erle, Christopher Walker, Sarah K. Luger, Jason Brenier, Gary C. King, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Jessica D. Long, James B. Robinson, Brendan D. Callahan, Michelle Casbon, Ujjwal Sarin, Aneesh Nair, Veena Basavaraj, Tripti Saxena, Edgar Nunez, Martha G. Hinrichs, Haley Most, Tyler J. Schnoebelen
-
Publication number: 20190384809
Abstract: Systems, methods, and apparatuses are presented for a trained language model to be stored in an efficient manner such that the trained language model may be utilized in virtually any computing device to conduct natural language processing. Unlike other natural language processing engines that may be computationally intensive to the point of being capable of running only on high performance machines, the organization of the natural language models according to the present disclosures allows for natural language processing to be performed even on smaller devices, such as mobile devices.
Type: Application
Filed: January 11, 2019
Publication date: December 19, 2019
Applicant: AIPARC HOLDINGS PTE. LTD.
Inventors: Schuyler D. Erle, Robert J. Munro, Brendan D. Callahan, Gary C. King, Jason Brenier, James B. Robinson
-
Publication number: 20190377788
Abstract: Methods, apparatuses, and systems are presented for generating natural language models using a novel system architecture for feature extraction. A method for extracting features for natural language processing comprises: accessing one or more tokens generated from a document to be processed; receiving one or more feature types defined by user; receiving selection of one or more feature types from a plurality of system-defined and user-defined feature types, wherein each feature type comprises one or more rules for generating features; receiving one or more parameters for the selected feature types, wherein the one or more rules for generating features are defined at least in part by the parameters; generating features associated with the document to be processed based on the selected feature types and the received parameters; and outputting the generated features in a format common among all feature types.
Type: Application
Filed: January 2, 2019
Publication date: December 12, 2019
Applicant: AIPARC HOLDINGS PTE. LTD.
Inventors: Robert J. Munro, Schuyler D. Erle, Tyler J. Schnoebelen, Brendan D. Callahan, Jessica D. Long, Gary C. King, Paul A. Tepper, Jason A. Brenier, Stefan Krawczyk
-
Publication number: 20190361966
Abstract: Methods and systems are disclosed for creating and linking a series of interfaces configured to display information and receive confirmation of classifications made by a natural language modeling engine to improve organization of a collection of documents into an hierarchical structure. In some embodiments, the interfaces may display to an annotator a plurality of labels of potential classifications for a document as identified by a natural language modeling engine, collect annotated responses from the annotator, aggregate the annotated responses across other annotators, analyze the accuracy of the natural language modeling engine based on the aggregated annotated responses, and predict accuracies of the natural language modeling engine's classifications of the documents.
Type: Application
Filed: December 14, 2018
Publication date: November 28, 2019
Applicant: AIPARC HOLDINGS PTE. LTD.
Inventors: Robert J. Munro, Christopher Walker, Sarah K. Luger, Jason Brenier, Paul A. Tepper, Ross Mechanic, Andrew Gilchrist-Scott, Gary C. King, Brendan D. Callahan, Tyler J. Schnoebelen, Edgar Nunez, Haley Most