Patents by Inventor Ranjitha Gurunath Kulkarni

Ranjitha Gurunath Kulkarni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11853817
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that can leverage a natural language model to determine a most probable candidate sequence of tokens and thereby generate a predicted user activity. In particular, the disclosed systems can tokenize activity event vectors to generate a series of sequential tokens that correspond to recent user activity of one or more user accounts. In addition, the disclosed systems can, for each candidate (e.g., hypothetical) user activity, augment the series of sequential tokens to include a corresponding token. Based on respective probability scores for each of the augmented series of sequential tokens, the disclosed systems can identify, as the predicted user activity, the candidate user activity whose augmented series of sequential tokens has the highest probability score. Based on the predicted user activity, the disclosed systems can surface one or more suggestions to a client device.
    Type: Grant
    Filed: January 18, 2023
    Date of Patent: December 26, 2023
    Assignee: Dropbox, Inc.
    Inventors: Ranjitha Gurunath Kulkarni, Xingyu Xiang, Jongmin Baek, Ermo Wei
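
This abstract (shared by the related grant and published application listed below) describes augmenting a series of recent-activity tokens with each candidate activity's token and selecting the candidate whose augmented sequence the language model scores highest. The sketch below illustrates only that selection step; the toy bigram model and the activity token names are assumptions invented for the example, not the disclosed system.

    # Illustrative sketch (not the patented implementation): scoring candidate
    # "next activity" tokens with a toy bigram language model.
    from collections import Counter, defaultdict
    import math

    class BigramActivityModel:
        """Toy bigram language model over activity tokens (illustrative only)."""
        def __init__(self, smoothing=1.0):
            self.bigrams = defaultdict(Counter)
            self.vocab = set()
            self.smoothing = smoothing

        def fit(self, token_sequences):
            for seq in token_sequences:
                self.vocab.update(seq)
                for prev, curr in zip(seq, seq[1:]):
                    self.bigrams[prev][curr] += 1

        def sequence_log_prob(self, seq):
            logp = 0.0
            v = max(len(self.vocab), 1)
            for prev, curr in zip(seq, seq[1:]):
                count = self.bigrams[prev][curr]
                total = sum(self.bigrams[prev].values())
                logp += math.log((count + self.smoothing) / (total + self.smoothing * v))
            return logp

    def predict_user_activity(model, recent_tokens, candidate_activities):
        """Augment the recent token series with each candidate's token and keep
        the candidate whose augmented sequence scores highest."""
        scored = {c: model.sequence_log_prob(recent_tokens + [c]) for c in candidate_activities}
        return max(scored, key=scored.get), scored

    # Usage: train on historical per-account activity token sequences, then rank
    # hypothetical next activities for the current session.
    history = [["open_doc", "edit_doc", "share_doc"],
               ["open_doc", "edit_doc", "comment"],
               ["open_doc", "share_doc", "comment"]]
    model = BigramActivityModel()
    model.fit(history)
    best, scores = predict_user_activity(model, ["open_doc", "edit_doc"],
                                         ["share_doc", "comment", "delete_doc"])
    print(best)   # most probable predicted activity, e.g. "share_doc"
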
  • Patent number: 11567812
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that can leverage a natural language model to determine a most probable candidate sequence of tokens and thereby generate a predicted user activity. In particular, the disclosed systems can tokenize activity event vectors to generate a series of sequential tokens that correspond to recent user activity of one or more user accounts. In addition, the disclosed systems can, for each candidate (e.g., hypothetical) user activity, augment the series of sequential tokens to include a corresponding token. Based on respective probability scores for each of the augmented series of sequential tokens, the disclosed systems can identify, as the predicted user activity, the candidate user activity whose augmented series of sequential tokens has the highest probability score. Based on the predicted user activity, the disclosed systems can surface one or more suggestions to a client device.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: January 31, 2023
    Assignee: Dropbox, Inc.
    Inventors: Ranjitha Gurunath Kulkarni, Xingyu Xiang, Jongmin Baek, Ermo Wei
  • Publication number: 20220107852
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that can leverage a natural language model to determine a most probable candidate sequence of tokens and thereby generate a predicted user activity. In particular, the disclosed systems can tokenize activity event vectors to generate a series of sequential tokens that correspond to recent user activity of one or more user accounts. In addition, the disclosed systems can, for each candidate (e.g., hypothetical) user activity, augment the series of sequential tokens to include a corresponding token. Based on respective probability scores for each of the augmented series of sequential tokens, the disclosed systems can identify, as the predicted user activity, the candidate user activity whose augmented series of sequential tokens has the highest probability score. Based on the predicted user activity, the disclosed systems can surface one or more suggestions to a client device.
    Type: Application
    Filed: October 7, 2020
    Publication date: April 7, 2022
    Inventors: Ranjitha Gurunath Kulkarni, Xingyu Xiang, Jongmin Baek, Ermo Wei
  • Patent number: 10847147
    Abstract: Automatic speech recognition systems can benefit from cues in user voice such as hyperarticulation. Traditional approaches typically attempt to define and detect an absolute state of hyperarticulation, which is very difficult, especially on short voice queries. This disclosure provides an approach to hyperarticulation detection based on pair-wise comparisons, evaluated on a real-world speech recognition system. The disclosed approach uses delta features extracted from a pair of repetitive user utterances. The disclosed systems and methods improve word error rate by using hyperarticulation information as a feature in a second-pass N-best hypothesis rescoring setup.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ranjitha Gurunath Kulkarni, Ahmed Moustafa El Kholy, Ziad Al Bawab, Noha Alon, Imed Zitouni
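
The abstract above (shared with the related publications and earlier grant listed below) centers on two steps: computing delta features between an original utterance and its repetition, and feeding the resulting hyperarticulation score into second-pass N-best rescoring. The sketch below is a hedged illustration of those steps; the feature set, the linear scorer, and the rescoring weights are invented for the example and are not the patented models.

    # Hedged sketch: pair-wise delta features between a repeated utterance and the
    # original, folded into second-pass N-best rescoring as an extra feature.
    from dataclasses import dataclass

    @dataclass
    class UtteranceFeatures:
        duration_s: float      # total utterance duration
        mean_energy: float     # average frame energy
        mean_pitch_hz: float   # average F0

    def delta_features(first: UtteranceFeatures, second: UtteranceFeatures):
        """Relative changes from the first utterance to its repetition."""
        return {
            "d_duration": (second.duration_s - first.duration_s) / max(first.duration_s, 1e-6),
            "d_energy":   (second.mean_energy - first.mean_energy) / max(first.mean_energy, 1e-6),
            "d_pitch":    (second.mean_pitch_hz - first.mean_pitch_hz) / max(first.mean_pitch_hz, 1e-6),
        }

    def hyperarticulation_score(deltas, weights=None):
        """Toy linear score: slower, louder, higher-pitched repetitions score higher."""
        weights = weights or {"d_duration": 1.0, "d_energy": 1.0, "d_pitch": 0.5}
        return sum(weights[k] * deltas[k] for k in weights)

    def rescore_nbest(nbest, hyper_score, lam=0.3):
        """Second-pass rescoring: combine each hypothesis's first-pass score with
        the hyperarticulation feature (here, boosting hypotheses that differ from
        the previously misrecognized text)."""
        rescored = []
        for hyp in nbest:
            bonus = lam * hyper_score if hyp["differs_from_previous_result"] else 0.0
            rescored.append({**hyp, "score": hyp["first_pass_score"] + bonus})
        return sorted(rescored, key=lambda h: h["score"], reverse=True)

    # Usage with made-up numbers for a repeated query.
    first = UtteranceFeatures(1.1, 0.42, 180.0)
    second = UtteranceFeatures(1.6, 0.55, 205.0)
    score = hyperarticulation_score(delta_features(first, second))
    nbest = [
        {"text": "call ann",  "first_pass_score": -3.2, "differs_from_previous_result": False},
        {"text": "call anne", "first_pass_score": -3.5, "differs_from_previous_result": True},
    ]
    print(score, rescore_nbest(nbest, score))
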
  • Patent number: 10650811
    Abstract: Disclosed in various examples are methods, systems, and machine-readable mediums for providing improved computer-implemented speech recognition by detecting and correcting speech recognition errors during a speech session. The system recognizes repeated speech commands from a user in a speech session that are similar or identical to each other. To correct these repeated errors, the system creates a customized language model that is then utilized by the language modeler to produce a refined prediction of the meaning of the repeated speech commands. The custom language model may comprise clusters of similar past predictions of speech commands from the user's speech session.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Meryem Pinar Donmez Ediz, Ranjitha Gurunath Kulkarni, Shuangyu Chang, Nitin Kamra
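
As described in the abstract above, the approach clusters similar past predictions from the user's session and uses the dominant cluster as a small session-specific language model when rescoring a likely repeated command. The following sketch shows one plausible shape of that pipeline; the similarity threshold, the unigram session model, and the interpolation weight are assumptions for the example, not the disclosed custom language model.

    # Illustrative sketch only: cluster similar past recognitions from one speech
    # session and use them to bias rescoring of a new, likely repeated, command.
    from difflib import SequenceMatcher
    from collections import Counter

    def similar(a: str, b: str, threshold: float = 0.8) -> bool:
        return SequenceMatcher(None, a, b).ratio() >= threshold

    def cluster_session_predictions(predictions):
        """Greedy single-link clustering of past predicted command texts."""
        clusters = []
        for text in predictions:
            for cluster in clusters:
                if any(similar(text, member) for member in cluster):
                    cluster.append(text)
                    break
            else:
                clusters.append([text])
        return clusters

    def session_unigram_lm(cluster):
        """Unigram probabilities over the words in one cluster of similar commands."""
        counts = Counter(w for text in cluster for w in text.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    def rescore_with_session_lm(nbest, session_lm, lam=0.5):
        """Interpolate the general model's score with the session unigram LM."""
        def session_score(text):
            words = text.split()
            return sum(session_lm.get(w, 0.0) for w in words) / max(len(words), 1)
        return sorted(nbest,
                      key=lambda h: (1 - lam) * h["score"] + lam * session_score(h["text"]),
                      reverse=True)

    # Usage: the user has repeated a misrecognized command several times.
    past = ["call jon smith", "call john smith", "cold john smith"]
    clusters = cluster_session_predictions(past)
    lm = session_unigram_lm(max(clusters, key=len))   # dominant repeated cluster
    nbest = [{"text": "cold john smith", "score": 0.62},
             {"text": "call john smith", "score": 0.58}]
    print(rescore_with_session_lm(nbest, lm))
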
  • Publication number: 20190296933
    Abstract: A technique is described herein for facilitating the programming and control of a collection of devices. In one manner of operation, the technique involves: receiving signals from the collection of devices that describe a sequence of events that have occurred in operation of the collection of devices; storing the signals; determining a rule associated with the sequence of events using a machine-trained sequence-detection component (SDC), the rule identifying a next event in the sequence of events; determining whether the rule is viable; and, if the rule is determined to be viable, sending control information to at least one device in the collection of devices. The control information instructs the identified device(s) to perform the next event that has been identified.
    Type: Application
    Filed: March 20, 2018
    Publication date: September 26, 2019
    Inventors: Anirudh Koul, Ranjitha Gurunath Kulkarni
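
The abstract above describes a loop of observing device events, predicting the next event with a machine-trained sequence-detection component (SDC), checking that the resulting rule is viable, and sending control information to the relevant device. The sketch below mirrors that loop with a stand-in frequency-based predictor in place of the machine-trained SDC; the device names, event labels, and viability rule are hypothetical.

    # Minimal sketch of the control loop in the abstract; the SDC here is a
    # stand-in frequency model, not the machine-trained component disclosed.
    from collections import Counter, defaultdict

    class FrequencyNextEventModel:
        """Stand-in SDC: predicts the most frequent historical follower of the
        last observed event."""
        def __init__(self):
            self.followers = defaultdict(Counter)

        def train(self, historical_sequences):
            for seq in historical_sequences:
                for prev, nxt in zip(seq, seq[1:]):
                    self.followers[prev][nxt] += 1

        def predict_next(self, event_sequence):
            last = event_sequence[-1]
            if not self.followers[last]:
                return None, 0.0
            nxt, count = self.followers[last].most_common(1)[0]
            return nxt, count / sum(self.followers[last].values())

    def rule_is_viable(confidence, device_online, min_confidence=0.6):
        """Viability check assumed for the example: confident enough and the
        target device is reachable."""
        return confidence >= min_confidence and device_online

    def send_control(device, action):
        print(f"-> instructing {device} to perform: {action}")

    # Usage: stored signals from the device collection, then act on the rule.
    history = [["door_unlocked", "hallway_light_on", "thermostat_heat"],
               ["door_unlocked", "hallway_light_on", "tv_on"],
               ["door_unlocked", "hallway_light_on", "thermostat_heat"]]
    sdc = FrequencyNextEventModel()
    sdc.train(history)

    observed = ["door_unlocked", "hallway_light_on"]
    next_event, confidence = sdc.predict_next(observed)
    if next_event and rule_is_viable(confidence, device_online=True):
        device, _, action = next_event.partition("_")
        send_control(device, action)
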
  • Publication number: 20190287519
    Abstract: Disclosed in various examples are methods, systems, and machine-readable mediums for providing improved computer-implemented speech recognition by detecting and correcting speech recognition errors during a speech session. The system recognizes repeated speech commands from a user in a speech session that are similar or identical to each other. To correct these repeated errors, the system creates a customized language model that is then utilized by the language modeler to produce a refined prediction of the meaning of the repeated speech commands. The custom language model may comprise clusters of similar past predictions of speech commands from the user's speech session.
    Type: Application
    Filed: March 13, 2018
    Publication date: September 19, 2019
    Inventors: Meryem Pinar Donmez Ediz, Ranjitha Gurunath Kulkarni, Shuangyu Chang, Nitin Kamra
  • Publication number: 20190279612
    Abstract: Automatic speech recognition systems can benefit from cues in user voice such as hyperarticulation. Traditional approaches typically attempt to define and detect an absolute state of hyperarticulation, which is very difficult, especially on short voice queries. This disclosure provides an approach to hyperarticulation detection based on pair-wise comparisons, evaluated on a real-world speech recognition system. The disclosed approach uses delta features extracted from a pair of repetitive user utterances. The disclosed systems and methods improve word error rate by using hyperarticulation information as a feature in a second-pass N-best hypothesis rescoring setup.
    Type: Application
    Filed: May 24, 2019
    Publication date: September 12, 2019
    Inventors: Ranjitha Gurunath Kulkarni, Ahmed Moustafa El Kholy, Ziad Al Bawab, Noha Alon, Imed Zitouni
  • Patent number: 10354642
    Abstract: Automatic speech recognition systems can benefit from cues in user voice such as hyperarticulation. Traditional approaches typically attempt to define and detect an absolute state of hyperarticulation, which is very difficult, especially on short voice queries. This disclosure provides an approach to hyperarticulation detection based on pair-wise comparisons, evaluated on a real-world speech recognition system. The disclosed approach uses delta features extracted from a pair of repetitive user utterances. The disclosed systems and methods improve word error rate by using hyperarticulation information as a feature in a second-pass N-best hypothesis rescoring setup.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: July 16, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ranjitha Gurunath Kulkarni, Ahmed Moustafa El Kholy, Ziad Al Bawab, Noha Alon, Imed Zitouni
  • Publication number: 20180254035
    Abstract: Automatic speech recognition systems can benefit from cues in user voice such as hyperarticulation. Traditional approaches typically attempt to define and detect an absolute state of hyperarticulation, which is very difficult, especially on short voice queries. This disclosure provides an approach to hyperarticulation detection based on pair-wise comparisons, evaluated on a real-world speech recognition system. The disclosed approach uses delta features extracted from a pair of repetitive user utterances. The disclosed systems and methods improve word error rate by using hyperarticulation information as a feature in a second-pass N-best hypothesis rescoring setup.
    Type: Application
    Filed: June 15, 2017
    Publication date: September 6, 2018
    Inventors: Ranjitha Gurunath Kulkarni, Ahmed Moustafa El Kholy, Ziad Al Bawab, Noha Alon, Imed Zitouni
  • Patent number: 9922095
    Abstract: One or more systems and/or techniques are provided for automatic closed captioning for media content. In an example, real-time content occurring within a threshold timespan of a broadcast of media content (e.g., social network posts occurring during, and an hour before, a live broadcast of an interview) may be accessed. A list of named entities occurring within the social network posts may be generated (e.g., Interviewer Jon, Interviewee Kathy, Husband Dave, Son Jack, etc.). A ranked list of named entities may be created based upon trending named entities within the list (e.g., a named entity may be ranked higher based upon a more frequent occurrence within the social network posts). A dynamic grammar (e.g., library, etc.) may be built based upon the ranked list of named entities. Speech recognition may be performed upon the broadcast of media content utilizing the dynamic grammar to create closed caption text.
    Type: Grant
    Filed: June 2, 2015
    Date of Patent: March 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anirudh Koul, Ranjitha Gurunath Kulkarni, Serge-Eric Tremblay
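
The abstract above (shared with the related publication below) builds a dynamic grammar from named entities that trend in social posts within a time window around the broadcast, then uses that grammar to bias speech recognition for closed captions. The sketch below shows a simplified version of that pipeline; the capitalized-word entity extractor, the boost weights, and the phrase-list grammar format are assumptions standing in for the disclosed components.

    # Hedged sketch: rank trending entities from time-windowed social posts and
    # emit a weighted phrase list for biasing a recognizer (simplified stand-ins).
    from collections import Counter
    from datetime import datetime, timedelta
    import re

    def posts_in_window(posts, broadcast_start, before=timedelta(hours=1),
                        duration=timedelta(hours=1)):
        """Keep posts occurring from `before` the broadcast until it ends."""
        lo, hi = broadcast_start - before, broadcast_start + duration
        return [p for p in posts if lo <= p["timestamp"] <= hi]

    def extract_named_entities(text):
        """Toy entity extractor: runs of capitalized words (stand-in for real NER)."""
        return re.findall(r"\b(?:[A-Z][a-z]+)(?:\s[A-Z][a-z]+)*\b", text)

    def build_dynamic_grammar(posts, top_k=50):
        """Rank trending entities by frequency and return them as a weighted
        phrase list suitable for biasing a speech recognizer."""
        counts = Counter(e for p in posts for e in extract_named_entities(p["text"]))
        ranked = counts.most_common(top_k)
        max_count = ranked[0][1] if ranked else 1
        return [{"phrase": entity, "boost": count / max_count}
                for entity, count in ranked]

    # Usage with made-up posts around a live interview broadcast.
    start = datetime(2015, 6, 2, 20, 0)
    posts = [
        {"text": "Can't wait for Jon to interview Kathy tonight!",
         "timestamp": start - timedelta(minutes=30)},
        {"text": "Kathy just mentioned her son Jack", "timestamp": start + timedelta(minutes=10)},
        {"text": "Jon and Kathy are hilarious", "timestamp": start + timedelta(minutes=20)},
    ]
    grammar = build_dynamic_grammar(posts_in_window(posts, start))
    print(grammar)  # e.g. Kathy and Jon ranked highest, fed to the recognizer as bias phrases
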
  • Publication number: 20160357746
    Abstract: One or more systems and/or techniques are provided for automatic closed captioning for media content. In an example, real-time content occurring within a threshold timespan of a broadcast of media content (e.g., social network posts occurring during, and an hour before, a live broadcast of an interview) may be accessed. A list of named entities occurring within the social network posts may be generated (e.g., Interviewer Jon, Interviewee Kathy, Husband Dave, Son Jack, etc.). A ranked list of named entities may be created based upon trending named entities within the list (e.g., a named entity may be ranked higher based upon a more frequent occurrence within the social network posts). A dynamic grammar (e.g., library, etc.) may be built based upon the ranked list of named entities. Speech recognition may be performed upon the broadcast of media content utilizing the dynamic grammar to create closed caption text.
    Type: Application
    Filed: June 2, 2015
    Publication date: December 8, 2016
    Inventors: Anirudh Koul, Ranjitha Gurunath Kulkarni, Serge-Eric Tremblay