Patents by Inventor Sai Ajay Modukuri

Sai Ajay Modukuri has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054342
    Abstract: A method includes obtaining an input containing multiple tokens. The method also includes processing the input using a machine learning model. Processing the input includes performing attention over both (i) multiple dimensions of the tokens contained in the input and (ii) multiple dimensions of embedding vectors used to represent the tokens contained in the input so that different dimensions of each of at least some of the tokens are weighted differently. In addition, the method includes generating an output embedding vector for a query token of the multiple tokens based on the attention.
    Type: Application
    Filed: June 16, 2023
    Publication date: February 15, 2024
    Inventors: Suhel Jaber, Brendon Christopher Beachy Eby, Sai Ajay Modukuri
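
The abstract above describes attention applied across both the token axis and individual embedding dimensions, so that different dimensions of a token can be weighted differently. The following is a minimal sketch of that general idea, not the patented method itself: it assumes per-dimension scores are formed from elementwise query/key products, and all names (`PerDimensionAttention`, `q_proj`, and so on) are illustrative stand-ins rather than details from the filing.

```python
# A rough sketch (not the patented method) of attention that weights both
# token positions and individual embedding dimensions.
import torch
import torch.nn as nn

class PerDimensionAttention(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        # Hypothetical projections; names are illustrative only.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (seq_len, embed_dim) embedding vectors for the input tokens.
        q = self.q_proj(tokens)            # (seq_len, embed_dim)
        k = self.k_proj(tokens)            # (seq_len, embed_dim)
        v = self.v_proj(tokens)            # (seq_len, embed_dim)

        # Elementwise query/key products give one score per token pair *and*
        # per embedding dimension: (seq_len, seq_len, embed_dim).
        scores = q.unsqueeze(1) * k.unsqueeze(0)

        # Softmax over the key/token axis, separately for every dimension,
        # so each dimension of each token can be weighted differently.
        weights = torch.softmax(scores, dim=1)

        # Weighted sum of values along the token axis, kept per dimension,
        # yielding an output embedding vector for each query token.
        return (weights * v.unsqueeze(0)).sum(dim=1)   # (seq_len, embed_dim)

# Toy usage: one output embedding vector per query token.
out = PerDimensionAttention(embed_dim=16)(torch.randn(5, 16))
print(out.shape)  # torch.Size([5, 16])
```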
  • Publication number: 20240020477
    Abstract: A method includes providing embedding vectors representing tokens in an input to a transformer comprising multiple transformer layers arranged in a sequence, each transformer layer having a residual connection to each previous transformer layer. The method also includes, for each transformer layer, determining, for a first token, an input embedding vector based on a combination of output embedding vectors from previous transformer layers. The method further includes, for each transformer layer, processing, for the first token, the input embedding vector to generate an output embedding vector to be provided to each subsequent transformer layer.
    Type: Application
    Filed: April 25, 2023
    Publication date: January 18, 2024
    Inventors: Sai Ajay Modukuri, Brendon Christopher Beachy Eby, Suhel Jaber
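
The abstract above describes a transformer in which every layer has a residual connection to every previous layer, with each layer's input built from a combination of all earlier output embeddings. Below is a hedged sketch of that densely connected pattern, under the assumption that the combination is a simple average; the use of `nn.TransformerEncoderLayer` and every name here are stand-ins, not details from the filing.

```python
# A minimal sketch of dense layer-to-layer residual connections, assuming the
# combination of previous outputs is a plain average.
import torch
import torch.nn as nn

class DenselyConnectedTransformer(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                       batch_first=True)
            for _ in range(num_layers)
        ])

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim) token embedding vectors.
        outputs = [embeddings]  # treat the input embeddings as layer 0's output
        for layer in self.layers:
            # Residual connection to *every* previous layer: the input to this
            # layer combines (here: averages) all earlier output embeddings.
            layer_input = torch.stack(outputs, dim=0).mean(dim=0)
            outputs.append(layer(layer_input))
        return outputs[-1]

model = DenselyConnectedTransformer(embed_dim=32, num_heads=4, num_layers=3)
print(model(torch.randn(2, 7, 32)).shape)  # torch.Size([2, 7, 32])
```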
  • Publication number: 20230385546
    Abstract: A method includes receiving an input utterance that is a continuation of a previous utterance. The method also includes, using a trained Siamese network, determining input utterance embeddings representing tokens from the input utterance, pooling the input utterance embeddings with a context token embedding representing a class associated with the previous utterance to generate a representative input utterance embedding, and determining a representative embedding associated with each of multiple possible classes. Each possible class is associated with first and second threshold boundaries. The method further includes, using the trained Siamese network, determining a similarity score for each possible class based on a distance between the representative input utterance embedding and a selected threshold boundary of the representative embedding for that possible class and identifying a class for the input utterance based on the determined similarity scores.
    Type: Application
    Filed: May 11, 2023
    Publication date: November 30, 2023
    Inventors: Brendon Christopher Beachy Eby, Suhel Jaber, Sai Ajay Modukuri, Omar Abdelwahab, Ankit Goyal
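
The classification step in the abstract above can be pictured roughly as follows. This sketch assumes mean pooling, Euclidean distance, and per-class inner/outer radii standing in for the "first and second threshold boundaries"; the trained Siamese encoder that produces the embeddings is omitted, and every name and parameter is hypothetical.

```python
# A hedged sketch of scoring an input utterance against per-class
# representative embeddings and threshold boundaries.
import torch

def classify_utterance(token_embs: torch.Tensor,
                       context_emb: torch.Tensor,
                       class_embs: torch.Tensor,
                       inner_radius: torch.Tensor,
                       outer_radius: torch.Tensor) -> int:
    # token_embs: (num_tokens, dim) embeddings of the input utterance tokens.
    # context_emb: (dim,) context token embedding for the previous utterance's class.
    # class_embs: (num_classes, dim) representative embedding per possible class.
    # inner_radius / outer_radius: (num_classes,) the two threshold boundaries.

    # Pool the token embeddings together with the context token embedding
    # into a single representative input utterance embedding.
    pooled = torch.cat([token_embs, context_emb.unsqueeze(0)], dim=0).mean(dim=0)

    # Distance from the pooled embedding to each class's representative embedding.
    dists = torch.linalg.norm(class_embs - pooled, dim=1)

    # Score each class against a selected threshold boundary: use the inner
    # radius when the point falls inside it, otherwise the outer radius.
    boundary = torch.where(dists <= inner_radius, inner_radius, outer_radius)
    scores = boundary - dists  # larger = more confidently inside the boundary

    # Identify the class with the best similarity score.
    return int(torch.argmax(scores).item())

cls = classify_utterance(torch.randn(6, 8), torch.randn(8),
                         torch.randn(4, 8), torch.full((4,), 0.5),
                         torch.full((4,), 1.5))
print(cls)
```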
  • Publication number: 20230386450
    Abstract: A method includes determining, using at least one processing device of an electronic device, a target embedding vector for each class of a plurality of classes. The method also includes generating, using the at least one processing device, an utterance embedding vector using a pre-trained language model, where the utterance embedding vector represents an input utterance associated with an expected class. The method further includes obtaining, using the at least one processing device, a predicted class associated with the input utterance based on distances of the utterance embedding vector to spatial parameters representing the plurality of classes, where the spatial parameter of each class is based on the target embedding vector associated with that class. In addition, the method includes updating, using the at least one processing device, parameters of the language model based on a difference between the predicted class and the expected class.
    Type: Application
    Filed: April 19, 2023
    Publication date: November 30, 2023
    Inventors: Brendon Christopher Beachy Eby, Suhel Jaber, Sai Ajay Modukuri, Omar Abdelwahab, Ankit Goyal
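
The abstract above implies a distance-based training loop. The sketch below is one possible reading, assuming the "spatial parameter" of each class is simply its target embedding vector and that negative Euclidean distances act as class logits; the tiny encoder is a placeholder for the pre-trained language model, and all names are assumptions.

```python
# A minimal training-step sketch: classify by distance to per-class target
# embeddings, then update the encoder from the prediction/expectation mismatch.
import torch
import torch.nn as nn

embed_dim, num_classes = 32, 4
encoder = nn.Sequential(nn.Linear(128, embed_dim), nn.Tanh())   # stand-in for the LM
class_targets = torch.randn(num_classes, embed_dim)             # target embedding per class
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(utterance_features: torch.Tensor, expected_class: torch.Tensor):
    # Utterance embedding vector produced by the (pre-trained) language model.
    utt_emb = encoder(utterance_features)                       # (batch, embed_dim)

    # Distance of the utterance embedding to each class's spatial parameter
    # (here, just the target embedding itself); nearer means more likely.
    dists = torch.linalg.norm(utt_emb.unsqueeze(1) - class_targets.unsqueeze(0),
                              dim=2)                            # (batch, num_classes)
    logits = -dists

    # The predicted class is the nearest target; model parameters are updated
    # from the difference between the prediction and the expected class.
    loss = loss_fn(logits, expected_class)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.argmax(dim=1), loss.item()

pred, loss = training_step(torch.randn(8, 128), torch.randint(0, num_classes, (8,)))
print(pred, loss)
```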