Patents by Inventor Zachary Kulis
Zachary Kulis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240046043
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Application
Filed: October 5, 2023
Publication date: February 8, 2024
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
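The persona-conditioning flow described in this abstract might be sketched as follows. The attribute names, intents, and template strings below are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch of persona-conditioned response templating.
# Attribute names, intents, and templates are hypothetical examples,
# not taken from the patent itself.

def infer_persona(utterance: str, metadata: dict) -> dict:
    """Merge persona attributes parsed from the utterance with any
    attributes supplied as metadata alongside it."""
    attrs = {"formality": "casual" if "hey" in utterance.lower() else "formal"}
    attrs.update(metadata)  # metadata-provided attributes take precedence
    return attrs

def respond(intent: str, persona: dict) -> str:
    """Pick a template matching the intent, phrased in a tone that
    matches the inferred persona."""
    templates = {
        ("check_balance", "formal"): "Certainly. Which account would you like me to check?",
        ("check_balance", "casual"): "Sure thing! Which account?",
    }
    return templates[(intent, persona["formality"])]

print(respond("check_balance", infer_persona("hey, what's my balance?", {})))
# → Sure thing! Which account?
```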
-
Patent number: 11816439
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Grant
Filed: September 22, 2022
Date of Patent: November 14, 2023
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
-
Publication number: 20230252241
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Application
Filed: April 17, 2023
Publication date: August 10, 2023
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Rui Zhang, Zachary Kulis, Varun Singh
-
Patent number: 11651163
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Grant
Filed: July 22, 2020
Date of Patent: May 16, 2023
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Rui Zhang, Zachary Kulis, Varun Singh
-
Publication number: 20230062915
Abstract: System and method of generating an executable action item in response to natural language dialogue are disclosed herein. A computing system receives a dialogue message from a remote client device of a customer associated with an organization, the dialogue message comprising an utterance indicative of an implied goal. A natural language processor of the computing system parses the dialogue message to identify one or more components contained in the utterance. The planning module of the computing system identifies the implied goal. The computing system generates a plan within a defined solution space. The computing system generates a verification message to the user to confirm the plan. The computing system transmits the verification message to the remote client device of the customer. The computing system updates an event queue with instructions to execute the action item according to the generated plan upon receiving a confirmation message from the remote client device.
Type: Application
Filed: October 17, 2022
Publication date: March 2, 2023
Applicant: Capital One Services, LLC
Inventors: Scott Karp, Erik Mueller, Zachary Kulis
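The plan-then-confirm loop in this abstract (parse utterance, identify goal, verify with the customer, then enqueue the action item) can be sketched as a minimal state machine. Class, field, and action names are hypothetical stand-ins, not the patent's API, and a keyword check substitutes for the natural language processor.

```python
# Hypothetical sketch of the plan-then-confirm flow described above.
from collections import deque

class DialogueActionSystem:
    def __init__(self):
        self.event_queue = deque()   # holds confirmed action items
        self.pending_plan = None     # plan awaiting customer confirmation

    def handle_message(self, utterance: str) -> str:
        # A real system would use an NLP parser; here a keyword check
        # stands in for identifying the implied goal.
        if "pay" in utterance.lower():
            self.pending_plan = {"action": "schedule_payment", "amount": 100}
            return "You want to schedule a $100 payment. Confirm? (yes/no)"
        return "Sorry, I couldn't identify a goal."

    def handle_confirmation(self, reply: str) -> None:
        # Only enqueue the action item after an explicit confirmation.
        if reply.lower() == "yes" and self.pending_plan:
            self.event_queue.append(self.pending_plan)
            self.pending_plan = None

sys_ = DialogueActionSystem()
print(sys_.handle_message("I need to pay my bill"))
sys_.handle_confirmation("yes")
print(len(sys_.event_queue))  # → 1
```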
-
Publication number: 20230021852
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
Type: Application
Filed: September 22, 2022
Publication date: January 26, 2023
Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller
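The forward/backward idea in this abstract (train on sequences read in both directions, then use the backward direction to rescore candidate outputs) can be illustrated with a toy model. Real systems use shared-parameter transformer generative models; the bigram counts here are an assumed stand-in for illustration only.

```python
# Toy sketch: train a forward model on sequences read left-to-right and
# a backward model on the same sequences reversed, then rescore
# forward-generated candidates with the backward model.
from collections import Counter

def bigrams(seq):
    return list(zip(seq, seq[1:]))

def train(corpus):
    fwd, bwd = Counter(), Counter()
    for seq in corpus:
        fwd.update(bigrams(seq))        # forward reading
        bwd.update(bigrams(seq[::-1]))  # backward reading
    return fwd, bwd

def score(counts, seq):
    # Higher total bigram count = more plausible under the model.
    return sum(counts[b] for b in bigrams(seq))

corpus = [["hi", "how", "are", "you"], ["hi", "how", "is", "it"]]
fwd, bwd = train(corpus)
candidates = [["hi", "how", "are", "you"], ["you", "are", "how", "hi"]]
# The backward model reevaluates each candidate read right-to-left.
best = max(candidates, key=lambda c: score(bwd, c[::-1]))
print(best)  # → ['hi', 'how', 'are', 'you']
```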
-
Publication number: 20230020350
Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
Type: Application
Filed: September 26, 2022
Publication date: January 19, 2023
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
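The loss masking mentioned in this abstract can be shown with a small numeric sketch: a batch mixes a sentence-classification example (one labeled position) with a sequence-labeling example (every position labeled), and a 0/1 mask zeroes out positions that do not belong to each example's task, so one loss covers both tasks at once. The per-token loss values below are made-up numbers standing in for a transformer's per-token cross-entropy.

```python
# Minimal numeric sketch of loss masking for joint multi-task training.

def masked_loss(token_losses, mask):
    """Average loss over only the unmasked (task-relevant) positions."""
    total = sum(l * m for l, m in zip(token_losses, mask))
    count = sum(mask)
    return total / count

# Sentence classification: only the final ([CLS]-style) position is scored.
cls_losses = [0.9, 0.7, 0.5, 0.2]
cls_mask   = [0,   0,   0,   1]
# Sequence labeling (e.g. NER): every token position is scored.
ner_losses = [0.4, 0.6, 0.2, 0.8]
ner_mask   = [1,   1,   1,   1]

batch_loss = masked_loss(cls_losses, cls_mask) + masked_loss(ner_losses, ner_mask)
print(round(batch_loss, 2))  # → 0.7  (0.2 + 0.5)
```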
-
Publication number: 20230015665
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Application
Filed: September 22, 2022
Publication date: January 19, 2023
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
-
Patent number: 11487954
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
Type: Grant
Filed: July 22, 2020
Date of Patent: November 1, 2022
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller
-
Patent number: 11475366
Abstract: System and method of generating an executable action item in response to natural language dialogue are disclosed herein. A computing system receives a dialogue message from a remote client device of a customer associated with an organization, the dialogue message comprising an utterance indicative of an implied goal. A natural language processor of the computing system parses the dialogue message to identify one or more components contained in the utterance. The planning module of the computing system identifies the implied goal. The computing system generates a plan within a defined solution space. The computing system generates a verification message to the user to confirm the plan. The computing system transmits the verification message to the remote client device of the customer. The computing system updates an event queue with instructions to execute the action item according to the generated plan upon receiving a confirmation message from the remote client device.
Type: Grant
Filed: May 29, 2020
Date of Patent: October 18, 2022
Assignee: Capital One Services, LLC
Inventors: Scott Karp, Erik Mueller, Zachary Kulis
-
Patent number: 11468246
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Grant
Filed: July 22, 2020
Date of Patent: October 11, 2022
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
-
Patent number: 11468239
Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
Type: Grant
Filed: May 22, 2020
Date of Patent: October 11, 2022
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
-
Publication number: 20220284433
Abstract: Unidimensional embedding using multi-modal deep learning models. An autoencoder executing on a processor may receive transaction data for a plurality of transactions, the transaction data including a plurality of fields, the plurality of fields including a plurality of different data types. An embeddings layer of the autoencoder may generate an embedding vector for a first transaction, the embedding vector includes floating point values to represent the plurality of data types of the transaction data. One or more fully connected layers of the autoencoder may generate, based on the embedding vector, a plurality of statistical distributions for the first transaction, each statistical distribution includes a respective embedding vector. A sampling layer of the autoencoder may sample a first statistical distribution of the plurality of statistical distributions. A decoder of the autoencoder may decode the first statistical distribution to generate an output representing the first transaction.
Type: Application
Filed: March 4, 2021
Publication date: September 8, 2022
Applicant: Capital One Services, LLC
Inventors: Minh Le, Zachary Kulis, Tarek Lahlou
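The embed-encode-sample-decode pipeline in this abstract can be sketched end to end with stdlib stand-ins. The field names, vocabulary, and distribution parameters below are assumptions for illustration; a real implementation would use learned neural layers rather than these hand-written functions.

```python
# Stdlib-only sketch of the pipeline: mixed-type transaction fields
# -> float embedding -> per-dimension (mean, stddev) distribution
# -> sample -> decode.
import math
import random

def embed(txn: dict) -> list:
    """Embeddings layer: map mixed-type fields (categorical + numeric)
    to a vector of floating point values."""
    merchant_vocab = {"grocery": 0.1, "travel": 0.9}  # toy categorical embedding
    return [merchant_vocab[txn["merchant_type"]], math.log1p(txn["amount"])]

def encode(vec):
    """Stand-in for fully connected layers producing statistical
    distributions (a mean and standard deviation per dimension)."""
    return [(x, 0.05) for x in vec]

def sample(dist):
    """Sampling layer: draw one point from the encoded distribution."""
    return [random.gauss(mu, sigma) for mu, sigma in dist]

def decode(z):
    """Stand-in decoder: reconstruct an output representing the transaction."""
    return {"merchant_score": z[0], "log_amount": z[1]}

random.seed(0)
txn = {"merchant_type": "travel", "amount": 120.0}
out = decode(sample(encode(embed(txn))))
print(out)
```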
-
Patent number: 11417317
Abstract: Aspects described herein may relate to the determination of data that is indicative of a greater range of speech properties than input text data. The determined data may be used as input to one or more speech processing tasks, such as model training, model validation, model testing, or classification. For example, after a model is trained based on the determined data, the model's performance may exhibit more resilience to a wider range of speech properties. The determined data may include one or more modified versions of the input text data. The one or more modified versions may be associated with the one or more speakers or accents and/or may be associated with one or more levels of semantic similarity in relation to the input text data. The one or more modified versions may be determined based on one or more machine learning algorithms.
Type: Grant
Filed: February 20, 2020
Date of Patent: August 16, 2022
Assignee: Capital One Services, LLC
Inventors: Christopher Larson, Tarek Aziz Lahlou, Diana Mingels, Zachary Kulis, Erik T. Mueller
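Generating modified versions of input text at varying levels of semantic similarity might look like the sketch below, where each substitution moves the variant slightly further from the original. The substitution table is a toy stand-in for the machine learning algorithms the patent describes.

```python
# Hypothetical sketch of text-data augmentation by word substitution.
# The substitution table is an illustrative assumption, not the
# patent's method, which uses machine learning algorithms.
SUBS = {"hello": ["hi", "hey there"], "purchase": ["buy", "get"]}

def variants(text: str, max_subs: int):
    """Return modified versions of the text; each applies one
    substitution, so more alternatives per word means more variants."""
    words, out = text.split(), []
    for i, w in enumerate(words):
        for alt in SUBS.get(w, [])[:max_subs]:
            out.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return out

print(variants("hello I want to purchase this", 1))
# → ['hi I want to purchase this', 'hello I want to buy this']
```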
-
Publication number: 20220108164
Abstract: The disclosed technology involves autonomously identifying goals and sub-goals from a user utterance and generating responses to the user based on the goals and sub-goals.
Type: Application
Filed: October 2, 2020
Publication date: April 7, 2022
Inventors: Alexandra Coman, Zachary Kulis, Rui Zhang, Liwei Dai, Erik T. Mueller, Vinay Igure
-
Publication number: 20210365635
Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
Type: Application
Filed: May 22, 2020
Publication date: November 25, 2021
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
-
Publication number: 20210150414
Abstract: A method for determining machine learning training parameters is disclosed. The method can include a processor receiving a first input. The processor may receive a first response to the first input, determine a first intent, and identify a first action. The processor can then determine first trainable parameter(s) and determine whether the first trainable parameter(s) is negative or positive. Further, the processor can update a training algorithm based on the first trainable parameter(s). The processor can then receive a second input and determine a second intent for the second input. The processor can also determine a second action for the second intent and transmit the second action to a user. The processor can then determine second trainable parameter(s) and determine whether the second trainable parameter(s) is positive or negative. Finally, the processor can further update the training algorithm based on the second trainable parameter(s).
Type: Application
Filed: January 27, 2021
Publication date: May 20, 2021
Inventors: Omar Florez Choque, Erik T. Mueller, Zachary Kulis
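The loop this abstract describes (derive a trainable parameter from each interaction, check its sign, and update the training algorithm accordingly) can be sketched as follows. The scalar sign convention and the additive update rule are assumptions for illustration, not the patent's actual algorithm.

```python
# Illustrative sketch: a trainable parameter is positive when the
# chosen action matched expectations and negative otherwise, and the
# training weight is nudged by its sign each round.

def trainable_parameter(expected_action: str, taken_action: str) -> float:
    """Positive on a match, negative on a mismatch."""
    return 1.0 if expected_action == taken_action else -1.0

def update(weight: float, param: float, lr: float = 0.1) -> float:
    """Reinforce on positive parameters, penalize on negative ones."""
    return weight + lr * param

w = 0.5
w = update(w, trainable_parameter("show_balance", "show_balance"))  # positive: +0.1
w = update(w, trainable_parameter("show_balance", "transfer"))      # negative: -0.1
print(round(w, 2))  # → 0.5 (the two updates cancel)
```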
-
Patent number: 10929781
Abstract: A method for determining machine learning training parameters is disclosed. The method can include a processor receiving a first input. The processor may receive a first response to the first input, determine a first intent, and identify a first action. The processor can then determine first trainable parameter(s) and determine whether the first trainable parameter(s) is negative or positive. Further, the processor can update a training algorithm based on the first trainable parameter(s). The processor can then receive a second input and determine a second intent for the second input. The processor can also determine a second action for the second intent and transmit the second action to a user. The processor can then determine second trainable parameter(s) and determine whether the second trainable parameter(s) is positive or negative. Finally, the processor can further update the training algorithm based on the second trainable parameter(s).
Type: Grant
Filed: October 31, 2019
Date of Patent: February 23, 2021
Assignee: Capital One Services, LLC
Inventors: Omar Florez Choque, Erik Mueller, Zachary Kulis
-
Publication number: 20210027025
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Application
Filed: July 22, 2020
Publication date: January 28, 2021
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
-
Publication number: 20210027023
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
Type: Application
Filed: July 22, 2020
Publication date: January 28, 2021
Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller