Patents by Inventor Sharmistha Sarkar Gray

Sharmistha Sarkar Gray has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11043207
    Abstract: A method, computer program product, and computer system for measuring, by a computing device, a plurality of Room Impulse Responses (RIRs) associated with a set of two or more microphones. At least a portion of the RIRs may be augmented and then converted to their respective Relative Transfer Function (RTF) representations. The RTF representations may be applied to training data to generate an acoustic model for automatic speech recognition (see the RTF sketch after this list).
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: June 22, 2021
    Assignee: Nuance Communications, Inc.
    Inventors: Dushyant Sharma, Sharmistha Sarkar Gray, Uwe Helmut Jost, Patrick A. Naylor
  • Patent number: 10573336
    Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the recording that correspond to vocalizations of the key child. The method can also include applying an adult automatic speech recognition phone decoder to those segments to identify each occurrence of a plurality of phone categories and to determine a duration for each of the phone categories. The method can additionally include determining a duration distribution for the phone categories based on those durations, and can further include using that duration distribution in an age-based model to assess the key child's expressive language development (see the duration-distribution sketch after this list).
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: February 25, 2020
    Assignee: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20180174601
    Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the recording that correspond to vocalizations of the key child. The method can also include applying an adult automatic speech recognition phone decoder to those segments to identify each occurrence of a plurality of phone categories and to determine a duration for each of the phone categories. The method can additionally include determining a duration distribution for the phone categories based on those durations, and can further include using that duration distribution in an age-based model to assess the key child's expressive language development.
    Type: Application
    Filed: February 17, 2018
    Publication date: June 21, 2018
    Applicant: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Patent number: 9899037
    Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method can also include extracting emotion-related acoustic features from the utterance and comparing them to a plurality of emotion models that are representative of emotions. The method can further include selecting a model from the plurality of emotion models based on that comparison, and outputting the emotion of the utterance, wherein the emotion corresponds to the selected model (see the emotion-model sketch after this list). Other embodiments are provided.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: February 20, 2018
    Assignee: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Patent number: 9799348
    Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system is provided. The method can include receiving a plurality of audio recordings and segmenting each of them to create a plurality of audio segments per recording. The method can additionally include clustering each audio segment according to its audio characteristics to form a plurality of audio segment clusters (see the clustering sketch after this list). Other embodiments are provided.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: October 24, 2017
    Assignee: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20160351074
    Abstract: In some embodiments, a method is provided that includes capturing sound in a natural language environment using at least one sound capture device located in that environment. The method can also include analyzing a sound signal from the captured sound to determine at least one characteristic of the sound signal, and reporting metrics that quantify that characteristic. The metrics can include a quantity of words spoken by one or more persons in the natural language environment (see the word-metrics sketch after this list). Other embodiments are provided.
    Type: Application
    Filed: May 30, 2016
    Publication date: December 1, 2016
    Applicant: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20160210986
    Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method can also include extracting emotion-related acoustic features from the utterance and comparing them to a plurality of emotion models that are representative of emotions. The method can further include selecting a model from the plurality of emotion models based on that comparison, and outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 21, 2016
    Applicant: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20160203832
    Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system is provided. The method can include receiving a plurality of audio recordings and segmenting each of them to create a plurality of audio segments per recording. The method can additionally include clustering each audio segment according to its audio characteristics to form a plurality of audio segment clusters. Other embodiments are provided.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 14, 2016
    Applicant: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
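
The RIR-to-RTF conversion described in patent 11043207 can be illustrated with a short sketch. This is a minimal illustration assuming synthetic two-microphone impulse responses and a toy gain-perturbation augmentation; the function names are illustrative rather than drawn from any released implementation, and the patent leaves the augmentation strategy open.

```python
# Minimal sketch of the RIR -> RTF step in patent 11043207 (an assumption,
# not the patented implementation).
import numpy as np

def rirs_to_rtfs(rirs, ref_mic=0, n_fft=1024):
    """Convert a (mics, taps) array of room impulse responses to Relative
    Transfer Functions with respect to a reference microphone."""
    H = np.fft.rfft(rirs, n=n_fft, axis=1)   # per-mic transfer functions
    return H / (H[ref_mic] + 1e-12)          # ratio = RTF per frequency bin

def augment_rir(rir, gain_db_range=(-3.0, 3.0), rng=None):
    """Toy augmentation: random gain perturbation of an RIR (the patent does
    not specify the augmentation, so this is purely illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    gain = 10 ** (rng.uniform(*gain_db_range) / 20)
    return rir * gain

# Two synthetic RIRs: a direct path plus one echo per microphone.
rirs = np.zeros((2, 256))
rirs[0, 0], rirs[0, 40] = 1.0, 0.5
rirs[1, 3], rirs[1, 55] = 0.9, 0.4
rirs = np.stack([augment_rir(r) for r in rirs])
rtfs = rirs_to_rtfs(rirs)
print(rtfs.shape)  # (2, 513): one complex RTF per mic and frequency bin
```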
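
For the expressive-language filings (patent 10573336 and its published application 20180174601), the core computation is a phone-category duration distribution fed into an age-based model. The sketch below assumes the adult phone decoder has already produced per-category durations; the linear scoring function, its weights, and the age term are placeholders, not the LENA Foundation's actual model.

```python
# Minimal sketch of the duration-distribution idea in patent 10573336;
# weights and age bias are made up for illustration.
import numpy as np

def duration_distribution(durations_by_category):
    """Normalize total duration per phone category into a distribution."""
    totals = np.array([sum(d) for d in durations_by_category], dtype=float)
    return totals / totals.sum()

# Hypothetical decoder output: durations (seconds) for three phone categories.
dist = duration_distribution([[0.12, 0.09, 0.15], [0.30, 0.22], [0.05]])

# Age-based model stand-in: score = w . distribution + bias(age); a real
# model would be fit on labeled child recordings.
weights = np.array([1.5, -0.4, 2.0])
def expressive_language_score(dist, age_months):
    return float(weights @ dist) + 0.02 * age_months

print(expressive_language_score(dist, age_months=24))
```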
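
The model-selection step of patent 9899037 (published as 20160210986) amounts to scoring emotion-related acoustic features against one statistical model per emotion and outputting the best-matching label. The sketch below uses Gaussian mixture models and synthetic features as stand-ins; the abstract does not mandate a specific model family or feature set.

```python
# Minimal sketch of per-emotion model scoring from patent 9899037; GMMs and
# the synthetic features are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "emotion-related acoustic features" (e.g. pitch/energy stats).
train = {
    "neutral": rng.normal(0.0, 1.0, size=(200, 4)),
    "excited": rng.normal(2.5, 1.2, size=(200, 4)),
}
models = {label: GaussianMixture(n_components=2, random_state=0).fit(feats)
          for label, feats in train.items()}

def classify_emotion(utterance_features):
    """Pick the emotion whose model gives the highest average log-likelihood."""
    scores = {label: m.score(utterance_features) for label, m in models.items()}
    return max(scores, key=scores.get)

print(classify_emotion(rng.normal(2.4, 1.0, size=(30, 4))))  # likely "excited"
```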
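
For patent 9799348 (published as 20160203832), the segment-then-cluster pipeline can be sketched with fixed-length segmentation and k-means over two toy per-segment features; the real system's segmenter and feature set are not specified in the abstract, so both are assumptions here.

```python
# Minimal sketch of the segmentation-and-clustering idea in patent 9799348.
import numpy as np
from sklearn.cluster import KMeans

def segment(audio, seg_len):
    """Split a 1-D audio signal into fixed-length segments (tail dropped)."""
    n = len(audio) // seg_len
    return audio[: n * seg_len].reshape(n, seg_len)

def segment_features(segs):
    """Per-segment energy and zero-crossing rate as toy audio characteristics."""
    energy = (segs ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(segs), axis=1) != 0).mean(axis=1)
    return np.column_stack([energy, zcr])

rng = np.random.default_rng(1)
# Synthetic recording: quiet noise followed by a louder tone-like region.
audio = np.concatenate([rng.normal(0, 0.05, 8000),
                        np.sin(np.linspace(0, 400 * np.pi, 8000))])
feats = segment_features(segment(audio, seg_len=800))
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(clusters)  # segments from the two regions fall into different clusters
```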
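
Finally, the metric reporting in publication 20160351074 can be approximated with an energy-threshold voice-activity detector that counts contiguous speech bursts as a crude proxy for the quantity of words spoken; the abstract does not disclose the actual word-count algorithm, so everything below is an assumption.

```python
# Minimal sketch of burst counting as a word-quantity proxy for
# publication 20160351074 (illustrative only).
import numpy as np

def count_speech_bursts(audio, frame_len=400, threshold=0.01):
    """Count contiguous runs of high-energy frames as speech bursts."""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    active = (frames ** 2).mean(axis=1) > threshold
    # A burst starts wherever activity switches from off to on.
    return int(np.sum(np.diff(active.astype(int), prepend=0) == 1))

rng = np.random.default_rng(2)
silence = rng.normal(0, 0.01, 4000)
burst = 0.5 * np.sin(np.linspace(0, 200 * np.pi, 2000))
audio = np.concatenate([silence, burst, silence, burst, silence])
print(count_speech_bursts(audio))  # reports 2 bursts for this toy signal
```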