Patents by Inventor Bishal BARMAN

Bishal BARMAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
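Short, illustrative code sketches of the methods summarized in these abstracts appear after the listing.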

  • Patent number: 11829720
    Abstract: Systems and methods for analysis and validation of language models trained using data that is unavailable or inaccessible are provided. One example method includes, at an electronic device with one or more processors and memory, obtaining a first set of data corresponding to one or more tokens predicted based on one or more previous tokens. The method determines a probability that the first set of data corresponds to a prediction generated by a first language model trained using a user privacy preserving training process. In accordance with a determination that the probability is within a predetermined range, the method determines that the one or more tokens correspond to a prediction associated with the user privacy preserving training process and outputs a predicted token sequence including the one or more tokens and the one or more previous tokens.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: November 28, 2023
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman, Brent D. Ramerth
  • Publication number: 20230376690
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. An example process includes: receiving a text and a set of contextual information associated with the text; determining, using a system of neural networks, a plurality of text predictions based on the text and the contextual information, wherein a first text prediction of the plurality of text predictions includes a word and a second text prediction of the plurality of text predictions includes a phrase, and wherein the system of neural networks includes a first neural network for extracting a context, a second neural network for determining text predictions, and a third neural network for determining whether the text predictions are relevant to the context; and in accordance with a determination that a plurality of confidence scores associated with the plurality of text predictions exceed a predetermined threshold, providing the plurality of text predictions.
    Type: Application
    Filed: August 18, 2022
    Publication date: November 23, 2023
    Inventors: Jerome R. BELLEGARDA, Bishal BARMAN, Hala ALMAGHOUT
  • Patent number: 11544458
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a set of words including a grammatical error is received. The process can generate, using a neural network based on the set of words including the grammatical error and a reference set of words, a transformed set of words and further determine, based on the set of words including the grammatical error and the reference set of words, a reconstructed reference set of words. The process can also determine, based on a comparison of the transformed set of words and the reconstructed reference set of words, whether the transformed set of words is grammatically correct and provide an indication of whether the transformed set of words is grammatically correct to the neural network.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: January 3, 2023
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman, Douglas Davidson
  • Patent number: 11386266
    Abstract: The present disclosure generally relates to text correction and generating text correction models. In an example process for text correction, text input is received. In response to receiving the text input, a text string corresponding to the text input is displayed. The text string is represented by a token sequence. The process determines whether an end of the token sequence corresponds to a text boundary. In accordance with a determination that the end of the token sequence corresponds to a text boundary, the process determines, based on a context state of the token sequence, one or more textual errors at one or more tokens of the token sequence. An error indication for a portion of the text string corresponding to the one or more tokens is displayed.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: July 12, 2022
    Assignee: Apple Inc.
    Inventors: Douglas R. Davidson, Bishal Barman, Vivek Kumar Rangarajan Sridhar
  • Publication number: 20220067283
    Abstract: Systems and methods for analysis and validation of language models trained using data that is unavailable or inaccessible are provided. One example method includes, at an electronic device with one or more processors and memory, obtaining a first set of data corresponding to one or more tokens predicted based on one or more previous tokens. The method determines a probability that the first set of data corresponds to a prediction generated by a first language model trained using a user privacy preserving training process. In accordance with a determination that the probability is within a predetermined range, the method determines that the one or more tokens correspond to a prediction associated with the user privacy preserving training process and outputs a predicted token sequence including the one or more tokens and the one or more previous tokens.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 3, 2022
    Inventors: Jerome R. BELLEGARDA, Bishal BARMAN, Brent D. RAMERTH
  • Publication number: 20210224474
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a set of words including a grammatical error is received. The process can generate, using a neural network based on the set of words including the grammatical error and a reference set of words, a transformed set of words and further determine, based on the set of words including the grammatical error and the reference set of words, a reconstructed reference set of words. The process can also determine, based on a comparison of the transformed set of words and the reconstructed reference set of words, whether the transformed set of words is grammatically correct and provide an indication of whether the transformed set of words is grammatically correct to the neural network.
    Type: Application
    Filed: January 17, 2020
    Publication date: July 22, 2021
    Inventors: Jerome R. BELLEGARDA, Bishal BARMAN, Douglas DAVIDSON
  • Publication number: 20190370323
    Abstract: The present disclosure generally relates to text correction and generating text correction models. In an example process for text correction, text input is received. In response to receiving the text input, a text string corresponding to the text input is displayed. The text string is represented by a token sequence. The process determines whether an end of the token sequence corresponds to a text boundary. In accordance with a determination that the end of the token sequence corresponds to a text boundary, the process determines, based on a context state of the token sequence, one or more textual errors at one or more tokens of the token sequence. An error indication for a portion of the text string corresponding to the one or more tokens is displayed.
    Type: Application
    Filed: August 28, 2018
    Publication date: December 5, 2019
    Inventors: Douglas R. DAVIDSON, Bishal BARMAN, Vivek Kumar RANGARAJAN SRIDHAR
  • Patent number: 10311144
    Abstract: The present disclosure generally relates to systems and processes for emoji word sense disambiguation. In one example process, a word sequence is received. A word-level feature representation is determined for each word of the word sequence and a global semantic representation for the word sequence is determined. For a first word of the word sequence, an attention coefficient is determined based on a congruence between the word-level feature representation of the first word and the global semantic representation for the word sequence. The word-level feature representation of the first word is adjusted based on the attention coefficient. An emoji likelihood is determined based on the adjusted word-level feature representation of the first word. In accordance with the emoji likelihood satisfying one or more criteria, an emoji character corresponding to the first word is presented for display.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: June 4, 2019
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman
  • Publication number: 20180336184
    Abstract: The present disclosure generally relates to systems and processes for emoji word sense disambiguation. In one example process, a word sequence is received. A word-level feature representation is determined for each word of the word sequence and a global semantic representation for the word sequence is determined. For a first word of the word sequence, an attention coefficient is determined based on a congruence between the word-level feature representation of the first word and the global semantic representation for the word sequence. The word-level feature representation of the first word is adjusted based on the attention coefficient. An emoji likelihood is determined based on the adjusted word-level feature representation of the first word. In accordance with the emoji likelihood satisfying one or more criteria, an emoji character corresponding to the first word is presented for display.
    Type: Application
    Filed: August 16, 2017
    Publication date: November 22, 2018
    Inventors: Jerome R. BELLEGARDA, Bishal BARMAN
  • Patent number: 10127220
    Abstract: Systems and processes for language identification from short strings are provided. In accordance with one example, a method includes, at a first electronic device with one or more processors and memory, receiving user input including an n-gram and determining a similarity between a representation of the n-gram and a representation of a first language. The representation of the first language is based on an occurrence of each of a plurality of n-grams in the first language and an occurrence of each of the plurality of n-grams in a second language. The method further includes determining whether the similarity between the representation of the n-gram and the representation of the first language satisfies a threshold.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: November 13, 2018
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman
  • Publication number: 20160357728
    Abstract: Systems and processes for language identification from short strings are provided. In accordance with one example, a method includes, at a first electronic device with one or more processors and memory, receiving user input including an n-gram and determining a similarity between a representation of the n-gram and a representation of a first language. The representation of the first language is based on an occurrence of each of a plurality of n-grams in the first language and an occurrence of each of the plurality of n-grams in a second language. The method further includes determining whether the similarity between the representation of the n-gram and the representation of the first language satisfies a threshold.
    Type: Application
    Filed: September 3, 2015
    Publication date: December 8, 2016
    Inventors: Jerome R. BELLEGARDA, Bishal BARMAN
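
Illustrative code sketches

The validation flow described in patent 11829720 (and application 20220067283) can be illustrated with a minimal sketch. The scoring callable, the (0.6, 1.0) "predetermined range", and all names below are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of the validation flow described in patent 11829720.
# The scoring callable and the (0.6, 1.0) "predetermined range" are
# illustrative assumptions, not values from the patent.
from typing import Callable, Sequence, Tuple


def validate_prediction(
    previous_tokens: Sequence[str],
    predicted_tokens: Sequence[str],
    private_model_probability: Callable[[Sequence[str], Sequence[str]], float],
    predetermined_range: Tuple[float, float] = (0.6, 1.0),
) -> Tuple[bool, Tuple[str, ...]]:
    """Return whether the prediction is attributed to the privacy-preserving
    trained model and, if so, the full predicted token sequence."""
    # Probability that the predicted tokens were generated by the model
    # trained with the user-privacy-preserving process.
    p = private_model_probability(previous_tokens, predicted_tokens)
    lo, hi = predetermined_range
    if lo <= p <= hi:
        # Attribute the tokens to the privacy-preserving training process
        # and output the complete sequence (context plus prediction).
        return True, tuple(previous_tokens) + tuple(predicted_tokens)
    return False, ()


if __name__ == "__main__":
    toy_scorer = lambda prev, pred: 0.72  # stand-in for the real model comparison
    print(validate_prediction(["the", "weather", "looks"], ["nice"], toy_scorer))
```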
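
The three-network prediction pipeline described in application 20230376690 can be sketched as follows. The toy "networks" are plain functions standing in for neural networks, and every name and threshold is an assumption for illustration only.

```python
# Minimal sketch of the three-network prediction pipeline described in
# application 20230376690. The toy "networks" below are plain functions;
# the real system uses neural networks, and every name and threshold here
# is an illustrative assumption.
from typing import Callable, Dict, List


def predict_text(
    text: str,
    contextual_information: Dict[str, str],
    extract_context: Callable[[str, Dict[str, str]], str],
    generate_predictions: Callable[[str, str], List[str]],
    score_relevance: Callable[[str, str], float],
    threshold: float = 0.5,  # assumed confidence threshold
) -> List[str]:
    """Return word- and phrase-level predictions whose relevance scores
    all exceed the threshold; otherwise return nothing."""
    context = extract_context(text, contextual_information)                     # network 1
    candidates = generate_predictions(text, context)                            # network 2
    scores = [score_relevance(candidate, context) for candidate in candidates]  # network 3
    if candidates and all(score > threshold for score in scores):
        return candidates
    return []


if __name__ == "__main__":
    # Toy stand-ins producing one word prediction and one phrase prediction.
    predictions = predict_text(
        "See you at the",
        {"app": "Messages"},
        extract_context=lambda t, c: "scheduling",
        generate_predictions=lambda t, ctx: ["office", "coffee shop"],
        score_relevance=lambda cand, ctx: 0.8,
    )
    print(predictions)
```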
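
The feedback loop described in patent 11544458 (and application 20210224474) compares a transformed set of words against a reconstructed reference. The sketch below uses a stub transformer and a simple equality comparison; both are assumptions, not the patent's method.

```python
# Minimal sketch of the comparison-and-feedback step described in
# patent 11544458. The "network" is a stub and the comparison is a
# simple equality check; both are illustrative assumptions.
from typing import Callable, List, Tuple


def correction_feedback_step(
    erroneous_words: List[str],
    reference_words: List[str],
    transform: Callable[[List[str], List[str]], List[str]],
    reconstruct_reference: Callable[[List[str], List[str]], List[str]],
) -> Tuple[bool, List[str]]:
    """Generate a corrected candidate, rebuild the reference, and report
    whether the candidate matches the reconstructed reference."""
    transformed = transform(erroneous_words, reference_words)
    reconstructed = reconstruct_reference(erroneous_words, reference_words)
    # Comparing the transformed output against the reconstructed reference
    # decides whether the output is treated as grammatically correct.
    grammatically_correct = transformed == reconstructed
    # In training, this indication would be fed back to the network as a
    # learning signal; here it is simply returned.
    return grammatically_correct, transformed


if __name__ == "__main__":
    ok, candidate = correction_feedback_step(
        ["she", "go", "home"],
        ["she", "goes", "home"],
        transform=lambda err, ref: ["she", "goes", "home"],  # toy corrector
        reconstruct_reference=lambda err, ref: list(ref),    # toy reconstruction
    )
    print(ok, candidate)
```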
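
The boundary-triggered checking flow of patent 11386266 (and application 20190370323) can be sketched as below. Treating sentence-final punctuation as the text boundary and using a stub error detector are illustrative assumptions.

```python
# Minimal sketch of the boundary-triggered error checking described in
# patent 11386266. The boundary set and the stub detector are
# illustrative assumptions.
from typing import Callable, Dict, List, Tuple

SENTENCE_BOUNDARIES = {".", "?", "!"}  # assumed boundary tokens


def check_on_boundary(
    tokens: List[str],
    context_state: Dict[str, str],
    detect_errors: Callable[[List[str], Dict[str, str]], List[int]],
) -> List[Tuple[int, str]]:
    """If the token sequence ends at a text boundary, return (index, token)
    pairs that should receive an error indication in the displayed string."""
    if not tokens or tokens[-1] not in SENTENCE_BOUNDARIES:
        return []  # boundary not reached; defer checking
    flagged = detect_errors(tokens, context_state)
    return [(i, tokens[i]) for i in flagged]


if __name__ == "__main__":
    # Toy detector flags "their" as a contextual error in this sentence.
    result = check_on_boundary(
        ["their", "going", "home", "."],
        context_state={},
        detect_errors=lambda toks, state: [0],
    )
    print(result)
```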
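
The attention-weighted emoji decision of patent 10311144 (and application 20180336184) is sketched below. Cosine similarity as the "congruence" measure, mean pooling for the global representation, and the 0.5 display criterion are all assumptions for illustration.

```python
# Minimal sketch of the attention-weighted emoji likelihood described in
# patent 10311144. Cosine similarity as the congruence measure, mean
# pooling, and the 0.5 criterion are illustrative assumptions.
import math
from typing import List


def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)


def emoji_likelihood(
    word_features: List[List[float]],  # one feature vector per word
    target_index: int,                 # position of the word being considered
    emoji_vector: List[float],         # embedding of the candidate emoji sense
) -> float:
    # Global semantic representation of the whole sequence (mean pooling).
    dims = len(word_features[0])
    global_rep = [sum(v[d] for v in word_features) / len(word_features) for d in range(dims)]
    # Attention coefficient from the congruence between the word-level
    # representation and the global representation.
    attention = cosine(word_features[target_index], global_rep)
    # Adjust the word-level representation by the attention coefficient.
    adjusted = [attention * x for x in word_features[target_index]]
    # Likelihood that the emoji sense applies, squashed to (0, 1).
    return 1.0 / (1.0 + math.exp(-sum(x * y for x, y in zip(adjusted, emoji_vector))))


if __name__ == "__main__":
    feats = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]  # toy word-level features
    score = emoji_likelihood(feats, target_index=1, emoji_vector=[1.0, 0.0])
    print("show emoji" if score > 0.5 else "keep text", round(score, 3))
```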
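
The short-string language identification of patent 10127220 (and application 20160357728) builds a language representation from n-gram occurrences in both a first and a second language. The sketch below uses character bigrams, a smoothed log-odds representation, and an averaged score; these choices are assumptions, not the patent's specification.

```python
# Minimal sketch of short-string language identification as described in
# patent 10127220. Character bigrams, log-odds weights, and the zero
# threshold are illustrative assumptions.
import math
from collections import Counter
from typing import Dict, List


def char_ngrams(text: str, n: int = 2) -> List[str]:
    return [text[i:i + n] for i in range(len(text) - n + 1)]


def language_representation(
    corpus_lang1: List[str], corpus_lang2: List[str], n: int = 2
) -> Dict[str, float]:
    """Weight for each n-gram based on its occurrence in the first language
    relative to its occurrence in the second language (smoothed log-odds)."""
    c1 = Counter(g for s in corpus_lang1 for g in char_ngrams(s, n))
    c2 = Counter(g for s in corpus_lang2 for g in char_ngrams(s, n))
    vocab = set(c1) | set(c2)
    t1 = sum(c1.values()) + len(vocab)
    t2 = sum(c2.values()) + len(vocab)
    return {g: math.log((c1[g] + 1) / t1) - math.log((c2[g] + 1) / t2) for g in vocab}


def similarity(user_input: str, representation: Dict[str, float], n: int = 2) -> float:
    """Average contrastive weight of the n-grams in a (possibly very short)
    user input; positive values favor the first language."""
    grams = char_ngrams(user_input, n)
    if not grams:
        return 0.0
    return sum(representation.get(g, 0.0) for g in grams) / len(grams)


if __name__ == "__main__":
    rep = language_representation(["hello there", "the thing"], ["guten tag", "zug fahren"])
    score = similarity("the", rep)
    print("first language" if score > 0.0 else "second language", round(score, 3))
```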