Patents by Inventor Zachary Kulis

Zachary Kulis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046043
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Application
    Filed: October 5, 2023
    Publication date: February 8, 2024
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
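The persona-and-template flow this family of filings describes (intent from the utterance, persona attributes from utterances or metadata, templates selected to match a task-appropriate tone) can be sketched in a few lines. Everything below is illustrative, not from the patent: the `detect_intent` stand-in, the template table, and the `tone` metadata key are all invented for the example.

```python
# Hypothetical sketch of persona-conditioned response templating.
TEMPLATES = {
    "check_balance": {
        "formal": "Certainly. May I have your account number?",
        "casual": "Sure thing! What's your account number?",
    },
}

def detect_intent(utterance):
    # Stand-in for an NLP intent classifier.
    return "check_balance" if "balance" in utterance.lower() else "unknown"

def persona_from_metadata(metadata):
    # Persona attributes may arrive as metadata with the utterance.
    return metadata.get("tone", "formal")

def respond(utterance, metadata):
    # Pick a template matching both the user's intent and the response persona.
    intent = detect_intent(utterance)
    persona = persona_from_metadata(metadata)
    fallback = "Could you tell me more about what you need?"
    return TEMPLATES.get(intent, {}).get(persona, fallback)

print(respond("What's my balance?", {"tone": "casual"}))
```

A production system would replace the keyword matcher and template table with learned models, but the control flow (intent, then persona, then template) is the same.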
  • Patent number: 11816439
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: November 14, 2023
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
  • Publication number: 20230252241
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Application
    Filed: April 17, 2023
    Publication date: August 10, 2023
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Rui Zhang, Zachary Kulis, Varun Singh
  • Patent number: 11651163
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: May 16, 2023
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Rui Zhang, Zachary Kulis, Varun Singh
  • Publication number: 20230062915
    Abstract: System and method of generating an executable action item in response to natural language dialogue are disclosed herein. A computing system receives a dialogue message from a remote client device of a customer associated with an organization, the dialogue message comprising an utterance indicative of an implied goal. A natural language processor of the computing system parses the dialogue message to identify one or more components contained in the utterance. The planning module of the computing system identifies the implied goal. The computing system generates a plan within a defined solution space. The computing system generates a verification message to the user to confirm the plan. The computing system transmits the verification message to the remote client device of the customer. The computing system updates an event queue with instructions to execute the action item according to the generated plan upon receiving a confirmation message from the remote client device.
    Type: Application
    Filed: October 17, 2022
    Publication date: March 2, 2023
    Applicant: Capital One Services, LLC
    Inventors: Scott Karp, Erik Mueller, Zachary Kulis
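The pipeline in this abstract (parse the message, identify the implied goal, build a plan, send a verification message, and queue the action item once confirmed) can be sketched as follows. All names here (`parse_components`, `identify_goal`, the `schedule_payment` goal) are hypothetical stand-ins, not the patented implementation:

```python
from collections import deque

event_queue = deque()  # pending executable action items

def parse_components(message):
    # Stand-in for the natural language processor.
    return message.lower().replace("?", "").split()

def identify_goal(components):
    # Stand-in for the planning module's implied-goal recognition.
    return "schedule_payment" if "pay" in components else None

def handle_message(message):
    # Parse, identify the implied goal, build a plan, and draft the
    # verification message sent back to the customer's device.
    goal = identify_goal(parse_components(message))
    if goal is None:
        return None, "Sorry, I couldn't work out what you'd like to do."
    plan = {"action": goal, "utterance": message}
    return plan, f"It sounds like you want to {goal.replace('_', ' ')}. Confirm?"

def handle_confirmation(plan, confirmed):
    # On confirmation, queue the action item for later execution.
    if confirmed and plan is not None:
        event_queue.append(plan)

plan, verification = handle_message("Can you pay my electric bill?")
handle_confirmation(plan, confirmed=True)
```

The key design point the abstract emphasizes is that execution is deferred: nothing enters the event queue until the customer confirms the verification message.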
  • Publication number: 20230021852
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 26, 2023
    Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller
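The inference scheme this abstract describes, a forward model generating candidates and a backward model reevaluating them, can be illustrated with toy scoring functions. The heuristics below (word overlap for the forward score, a genericness penalty for the backward score) are placeholders invented for the example; the actual filing concerns learned generative models with shared encoder/decoder parameters.

```python
def forward_score(context, response):
    # Placeholder for the forward model: favors on-topic candidates,
    # loosely imitating log p_forward(response | context).
    return len(set(context.split()) & set(response.split()))

def backward_score(context, response):
    # Placeholder for the backward model: penalizes candidates too
    # generic to have plausibly produced the context, loosely imitating
    # log p_backward(context | response).
    generic = {"ok", "sure", "thanks"}
    return -sum(w in generic for w in response.split())

def rerank(context, candidates):
    # Generate with the forward model, reevaluate with the backward model.
    return max(candidates,
               key=lambda r: forward_score(context, r) + backward_score(context, r))

best = rerank("my card was declined",
              ["ok sure", "let me check why your card was declined"])
```

The backward pass is what distinguishes this from plain beam search: candidates are rescored by how well they explain the context, not just by how likely they are given it.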
  • Publication number: 20230020350
    Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
    Type: Application
    Filed: September 26, 2022
    Publication date: January 19, 2023
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
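Loss masking, which this abstract uses to train one classifier on several tasks at once, reduces to multiplying per-position losses by a 0/1 mask before averaging. A minimal numeric sketch, assuming per-token losses and a mask that selects the positions relevant to each task:

```python
def masked_loss(token_losses, mask):
    # Average only the loss terms the mask selects; positions belonging
    # to other tasks contribute nothing to this task's gradient.
    assert len(token_losses) == len(mask)
    total = sum(l * m for l, m in zip(token_losses, mask))
    count = sum(mask)
    return total / count if count else 0.0

losses = [0.5, 1.0, 1.5, 2.0]

# Sentence classification: only the final position carries the label loss.
cls_mask = [0, 0, 0, 1]
# Sequence labeling: every token position contributes.
tag_mask = [1, 1, 1, 1]

print(masked_loss(losses, cls_mask))  # 2.0
print(masked_loss(losses, tag_mask))  # 1.25
```

Because each task simply contributes a differently-masked view of the same loss tensor, batches from multiple tasks can be mixed in one training run without joint labels.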
  • Publication number: 20230015665
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
  • Patent number: 11487954
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: November 1, 2022
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller
  • Patent number: 11475366
    Abstract: System and method of generating an executable action item in response to natural language dialogue are disclosed herein. A computing system receives a dialogue message from a remote client device of a customer associated with an organization, the dialogue message comprising an utterance indicative of an implied goal. A natural language processor of the computing system parses the dialogue message to identify one or more components contained in the utterance. The planning module of the computing system identifies the implied goal. The computing system generates a plan within a defined solution space. The computing system generates a verification message to the user to confirm the plan. The computing system transmits the verification message to the remote client device of the customer. The computing system updates an event queue with instructions to execute the action item according to the generated plan upon receiving a confirmation message from the remote client device.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: October 18, 2022
    Assignee: Capital One Services, LLC
    Inventors: Scott Karp, Erik Mueller, Zachary Kulis
  • Patent number: 11468246
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: October 11, 2022
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
  • Patent number: 11468239
    Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: October 11, 2022
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
  • Publication number: 20220284433
    Abstract: Unidimensional embedding using multi-modal deep learning models. An autoencoder executing on a processor may receive transaction data for a plurality of transactions, the transaction data including a plurality of fields, the plurality of fields including a plurality of different data types. An embeddings layer of the autoencoder may generate an embedding vector for a first transaction, the embedding vector includes floating point values to represent the plurality of data types of the transaction data. One or more fully connected layers of the autoencoder may generate, based on the embedding vector, a plurality of statistical distributions for the first transaction, each statistical distribution includes a respective embedding vector. A sampling layer of the autoencoder may sample a first statistical distribution of the plurality of statistical distributions. A decoder of the autoencoder may decode the first statistical distribution to generate an output representing the first transaction.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Applicant: Capital One Services, LLC
    Inventors: Minh Le, Zachary Kulis, Tarek Lahlou
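The autoencoder pipeline this abstract walks through (mixed-type transaction fields into an embedding vector, fully connected layers producing statistical distributions, a sampling layer, then a decoder) can be sketched with stand-in layers. Every function below is a toy placeholder for a learned neural layer, and the two-dimensional latent is invented for the example:

```python
import random

def embed(transaction):
    # Embeddings layer: map heterogeneous fields (numeric amount,
    # categorical merchant) to floating point values.
    merchant_vocab = {"grocery": 0.1, "travel": 0.9}
    return [transaction["amount"] / 100.0,
            merchant_vocab.get(transaction["merchant"], 0.5)]

def encode(vec):
    # Fully connected layers: produce a (mean, stddev) Gaussian per
    # latent dimension instead of a point estimate.
    return [(v, 0.05) for v in vec]

def sample(dists, rng):
    # Sampling layer: draw from each distribution.
    return [rng.gauss(mu, sd) for mu, sd in dists]

def decode(latent):
    # Decoder: reconstruct an approximate transaction representation.
    return {"amount": latent[0] * 100.0, "merchant_score": latent[1]}

rng = random.Random(0)  # seeded for reproducibility
tx = {"amount": 42.0, "merchant": "grocery"}
out = decode(sample(encode(embed(tx)), rng))
```

Encoding each transaction as a distribution rather than a point is what makes the sampling layer meaningful: the decoder learns to reconstruct from noisy draws, which regularizes the embedding space.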
  • Patent number: 11417317
    Abstract: Aspects described herein may relate to the determination of data that is indicative of a greater range of speech properties than input text data. The determined data may be used as input to one or more speech processing tasks, such as model training, model validation, model testing, or classification. For example, after a model is trained based on the determined data, the model's performance may exhibit more resilience to a wider range of speech properties. The determined data may include one or more modified versions of the input text data. The one or more modified versions may be associated with the one or more speakers or accents and/or may be associated with one or more levels of semantic similarity in relation to the input text data. The one or more modified versions may be determined based on one or more machine learning algorithms.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: August 16, 2022
    Assignee: Capital One Services, LLC
    Inventors: Christopher Larson, Tarek Aziz Lahlou, Diana Mingels, Zachary Kulis, Erik T. Mueller
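The core idea in this abstract, expanding input text into modified versions that preserve meaning while varying surface form before feeding them to training, validation, or classification, can be illustrated with a single-word substitution sketch. The synonym table and `variants` helper are invented for the example; the filing describes learned machine learning algorithms rather than a lookup table.

```python
# Hypothetical substitution table; a real system would use learned
# paraphrase or accent models instead.
SYNONYMS = {
    "hello": ["hi", "hey"],
    "account": ["acct"],
}

def variants(text, max_variants=3):
    # Produce modified versions of the input text by single-word
    # substitution; each variant keeps the original's rough meaning
    # while widening the range of surface forms seen in training.
    words = text.lower().split()
    out = []
    for i, w in enumerate(words):
        for alt in SYNONYMS.get(w, []):
            out.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return out[:max_variants]

print(variants("hello please check my account"))
```

Training on the original plus its variants is what gives the resulting model the "resilience to a wider range of speech properties" the abstract claims.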
  • Publication number: 20220108164
    Abstract: The disclosed technology involves autonomously identifying goals and sub-goals from a user utterance and generating responses to the user based on the goals and sub-goals.
    Type: Application
    Filed: October 2, 2020
    Publication date: April 7, 2022
    Inventors: Alexandra Coman, Zachary Kulis, Rui Zhang, Liwei Dai, Erik T. Mueller, Vinay Igure
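The goal/sub-goal decomposition this one-sentence abstract refers to can be pictured as a small hierarchy lookup. The `book_trip` hierarchy and keyword matcher below are invented for illustration; the filing concerns autonomous identification from the utterance itself.

```python
# Hypothetical goal hierarchy for the example.
SUBGOALS = {
    "book_trip": ["choose_dates", "book_flight", "book_hotel"],
}

def goals_from_utterance(utterance):
    # Stand-in goal recognizer; a real system would use an NLU model
    # to identify the goal autonomously.
    goal = "book_trip" if "trip" in utterance.lower() else None
    return goal, SUBGOALS.get(goal, [])

goal, subgoals = goals_from_utterance("I want to plan a trip to Lisbon")
```

Responses can then be generated per sub-goal, so the dialogue advances one concrete step at a time toward the overall goal.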
  • Publication number: 20210365635
    Abstract: Systems described herein may use transformer-based machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, sentence classification, named entity recognition, sentence similarity, and question answering. The exceptional performance of transformer-based language models is due to their ability to capture long-term temporal dependencies in input sequences. Machine classifiers may be trained using training data sets for multiple tasks, such as, but not limited to, sentence classification tasks and sequence labeling tasks. Loss masking may be employed in the machine classifier to jointly train the machine classifier on multiple tasks simultaneously. The use of transformer encoders in the machine classifiers, which treat each output sequence independently of other output sequences, in accordance with aspects of the invention does not require joint labeling to model tasks.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 25, 2021
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Zachary Kulis, Varun Singh
  • Publication number: 20210150414
    Abstract: A method for determining machine learning training parameters is disclosed. The method can include a processor receiving a first input. The processor may receive a first response to the first input, determine a first intent, and identify a first action. The processor can then determine first trainable parameter(s) and determine whether the first trainable parameter(s) is negative or positive. Further, the processor can update a training algorithm based on the first trainable parameter(s). The processor can then receive a second input and determine a second intent for the second input. The processor can also determine a second action for the second intent and transmit the second action to a user. The processor can then determine second trainable parameter(s) and determine whether the second trainable parameter(s) is positive or negative. Finally, the processor can further update the training algorithm based on the second trainable parameter(s).
    Type: Application
    Filed: January 27, 2021
    Publication date: May 20, 2021
    Inventors: Omar Florez Choque, Erik T. Mueller, Zachary Kulis
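The feedback loop this abstract describes, deriving a trainable parameter from each interaction, checking its sign, and updating the training algorithm accordingly, can be sketched as a sign-based weight update. The matching rule and learning rate below are assumptions for illustration only:

```python
def feedback_parameter(expected_action, actual_action):
    # A trainable parameter that is positive when the chosen action
    # matched the user's intent, negative otherwise.
    return 1.0 if expected_action == actual_action else -1.0

def update_weight(weight, param, lr=0.1):
    # Update the training algorithm in the direction of the
    # parameter's sign: reinforce on positive, penalize on negative.
    return weight + lr * param

w = 0.5
w = update_weight(w, feedback_parameter("show_balance", "show_balance"))  # reinforced, ~0.6
w = update_weight(w, feedback_parameter("show_balance", "transfer"))      # penalized, back to ~0.5
```

Repeating this over first, second, and subsequent inputs, as the abstract enumerates, amounts to online reinforcement of the intent-to-action mapping.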
  • Patent number: 10929781
    Abstract: A method for determining machine learning training parameters is disclosed. The method can include a processor receiving a first input. The processor may receive a first response to the first input, determine a first intent, and identify a first action. The processor can then determine first trainable parameter(s) and determine whether the first trainable parameter(s) is negative or positive. Further, the processor can update a training algorithm based on the first trainable parameter(s). The processor can then receive a second input and determine a second intent for the second input. The processor can also determine a second action for the second intent and transmit the second action to a user. The processor can then determine second trainable parameter(s) and determine whether the second trainable parameter(s) is positive or negative. Finally, the processor can further update the training algorithm based on the second trainable parameter(s).
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: February 23, 2021
    Assignee: Capital One Services, LLC
    Inventors: Omar Florez Choque, Erik Mueller, Zachary Kulis
  • Publication number: 20210027025
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 28, 2021
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Diana Mingels, Zachary Kulis
  • Publication number: 20210027023
    Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in the dialogue data better than the existing recurrent neural network-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response as opposed to the conditional distribution of the response given the context as employed in sequence-to-sequence frameworks. Further, input data may be bidirectionally encoded using both forward and backward separators. The forward and backward representations of the input data may be used to train the machine classifiers using a single generative model and/or shared parameters between the encoder and decoder of the machine classifier. During inference, the backward model may be used to reevaluate previously generated output sequences and the forward model may be used to generate an output sequence based on the previously generated output sequences.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 28, 2021
    Inventors: Oluwatobi Olabiyi, Zachary Kulis, Erik T. Mueller