Patents Assigned to LENA Foundation
  • Patent number: 11328738
    Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: May 10, 2022
    Assignee: LENA FOUNDATION
    Inventors: Jeffrey A. Richards, Stephen M. Hannon
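The pipeline in the abstract above can be sketched in code. Everything concrete below — the segment tuple format, the 60-second window, the 50% non-sparsity threshold, and the 80% duty-cycle cutoff separating a cry period from a fussiness period — is a hypothetical illustration, not the patented method:

```python
# Illustrative sketch of the cry/fussiness pipeline: group cry-related
# segments into fixed windows, keep windows that satisfy a non-sparsity
# criterion, and classify each kept window by its cry coverage.
# Window length, thresholds, and labels are assumptions for this sketch.

def classify_cry_periods(segments, window=60.0, min_density=0.5, cry_cutoff=0.8):
    """segments: list of (start_sec, end_sec, label), label in {"cry", "non-cry"}.
    Returns a list of (window_start_sec, kind) for non-sparse windows."""
    if not segments:
        return []
    total = max(end for _, end, _ in segments)
    results = []
    t = 0.0
    while t < total:
        # seconds of cry-related audio falling inside this window
        cry = sum(min(end, t + window) - max(start, t)
                  for start, end, lab in segments
                  if lab == "cry" and start < t + window and end > t)
        density = cry / window
        if density >= min_density:  # threshold non-sparsity criterion
            kind = "cry" if density >= cry_cutoff else "fussiness"
            results.append((t, kind))
        t += window
    return results

# A window that is 90% cry-related audio -> "cry"; one that is 60%
# covered -> "fussiness"; the sparse final window is dropped.
demo = [(0, 54, "cry"), (60, 96, "cry"), (120, 130, "cry")]
print(classify_cry_periods(demo))  # [(0.0, 'cry'), (60.0, 'fussiness')]
```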
  • Publication number: 20200135229
    Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
    Type: Application
    Filed: December 26, 2019
    Publication date: April 30, 2020
    Applicant: LENA Foundation
    Inventors: Jeffrey A. Richards, Stephen M. Hannon
  • Patent number: 10573336
    Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recordings to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: February 25, 2020
    Assignee: LENA FOUNDATION
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
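The duration-distribution step above can be sketched as follows. The phone categories, the use of time shares as the distribution summary, and the weighted-sum stand-in for the age-based model are all assumptions of this sketch, not details from the patent:

```python
# Sketch: build a duration distribution over phone categories, then
# feed it to a toy linear stand-in for the age-based model.

def duration_distribution(phone_events):
    """phone_events: list of (phone_category, duration_sec).
    Returns each category's share of total phonation time."""
    totals = {}
    for cat, dur in phone_events:
        totals[cat] = totals.get(cat, 0.0) + dur
    grand = sum(totals.values())
    return {cat: t / grand for cat, t in totals.items()}

def expressive_language_score(dist, weights, bias=0.0):
    """Hypothetical age-based model: a weighted sum of per-category shares."""
    return bias + sum(weights.get(cat, 0.0) * share
                      for cat, share in dist.items())

events = [("vowel", 0.2), ("vowel", 0.3), ("fricative", 0.5)]
dist = duration_distribution(events)  # vowel and fricative each ~0.5
print(expressive_language_score(dist, {"vowel": 2.0}))
```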
  • Patent number: 10529357
    Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: January 7, 2020
    Assignee: LENA FOUNDATION
    Inventors: Jeffrey A. Richards, Stephen M. Hannon
  • Publication number: 20190180772
    Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 13, 2019
    Applicant: LENA Foundation
    Inventors: Jeffrey A. Richards, Stephen M. Hannon
  • Patent number: 10223934
    Abstract: In some embodiments, a method that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
    Type: Grant
    Filed: May 30, 2016
    Date of Patent: March 5, 2019
    Assignee: Lena Foundation
    Inventor: Terrance D. Paul
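The reporting step described above can be sketched minimally. Real systems estimate word counts from acoustic models; here per-segment estimates and speaker labels are assumed to already exist, and the sketch only shows the aggregation into reported metrics:

```python
# Sketch: aggregate per-segment word-count estimates into the kind of
# metrics the abstract describes.  Speaker labels and counts are
# hypothetical inputs, not outputs of a real analyzer.

def report_word_metrics(segments):
    """segments: list of (speaker, estimated_word_count)."""
    per_speaker = {}
    for speaker, words in segments:
        per_speaker[speaker] = per_speaker.get(speaker, 0) + words
    return {"total_words": sum(per_speaker.values()),
            "by_speaker": per_speaker}

print(report_word_metrics([("adult_female", 12), ("adult_male", 7),
                           ("adult_female", 5)]))
```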
  • Publication number: 20180174601
    Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recordings to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
    Type: Application
    Filed: February 17, 2018
    Publication date: June 21, 2018
    Applicant: LENA Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Patent number: 9899037
    Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on the comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: February 20, 2018
    Assignee: LENA FOUNDATION
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
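The compare-and-select step above can be sketched as nearest-model matching. Representing each emotion model as a mean feature vector and comparing by squared Euclidean distance are assumptions of this sketch; the patent does not specify the model form:

```python
# Sketch: select the emotion whose (hypothetical) model vector lies
# closest to the utterance's extracted acoustic features.

def detect_emotion(features, emotion_models):
    """features: list of floats extracted from the utterance.
    emotion_models: {emotion_name: mean_feature_vector}.
    Returns the name of the closest model."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(emotion_models, key=lambda e: dist(features, emotion_models[e]))

models = {"neutral": [0.0, 0.0], "distress": [1.0, 1.0]}
print(detect_emotion([0.9, 0.8], models))  # closest to "distress"
```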
  • Patent number: 9799348
    Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: October 24, 2017
    Assignee: LENA FOUNDATION
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
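The clustering step above can be sketched with a tiny k-means over per-segment feature vectors. The two-dimensional features, k=2, and k-means itself are illustrative choices; the abstract does not fix a clustering algorithm:

```python
# Sketch: cluster audio segments by their feature vectors so that
# acoustically similar segments fall into the same cluster.

def kmeans(points, centers, iters=10):
    """points: list of feature vectors; centers: initial cluster centers.
    Returns (final_centers, groups) after a fixed number of iterations."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        # assign each segment's features to the nearest center
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # move each center to the mean of its group
        centers = [[sum(c) / len(g) for c in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

pts = [[0.1, 0.0], [0.0, 0.2], [1.0, 1.1], [0.9, 1.0]]
centers, groups = kmeans(pts, [[0.0, 0.0], [1.0, 1.0]])
print(len(groups[0]), len(groups[1]))  # 2 2
```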
  • Publication number: 20160351074
    Abstract: In some embodiments, a method that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
    Type: Application
    Filed: May 30, 2016
    Publication date: December 1, 2016
    Applicant: Lena Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20160210986
    Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on the comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 21, 2016
    Applicant: Lena Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Publication number: 20160203832
    Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 14, 2016
    Applicant: Lena Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
  • Patent number: 9355651
    Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: May 31, 2016
    Assignee: LENA FOUNDATION
    Inventors: Dongxin D. Xu, Terrance D. Paul
  • Patent number: 9240188
    Abstract: In one embodiment, a system and method for expressive language development: a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute a method that includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method also includes extracting acoustic parameters of the key child recordings and comparing the acoustic parameters of the key child recordings to known acoustic parameters for children. The method returns a determination of a likelihood of autism.
    Type: Grant
    Filed: January 23, 2009
    Date of Patent: January 19, 2016
    Assignee: Lena Foundation
    Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha S. Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
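The comparison step in the abstract above can be sketched as follows. The parameter names, the normative means and standard deviations, and the use of a mean absolute z-score as a stand-in for the likelihood determination are all assumptions of this sketch:

```python
# Sketch: compare a key child's acoustic parameters to hypothetical
# normative values and aggregate the deviations into one score.

def atypicality_score(child_params, norms):
    """child_params: {name: value}; norms: {name: (mean, sd)}.
    Returns the mean absolute z-score across parameters."""
    zs = [abs((child_params[k] - mean) / sd)
          for k, (mean, sd) in norms.items()]
    return sum(zs) / len(zs)

norms = {"canonical_syllable_rate": (2.0, 0.5),
         "pitch_variability": (30.0, 10.0)}
child = {"canonical_syllable_rate": 1.0, "pitch_variability": 50.0}
print(atypicality_score(child, norms))  # mean of |z| = 2.0 and 2.0 -> 2.0
```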
  • Patent number: 8938390
    Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
    Type: Grant
    Filed: February 27, 2009
    Date of Patent: January 20, 2015
    Assignee: LENA Foundation
    Inventors: Dongxin D. Xu, Terrance D. Paul
  • Publication number: 20140255887
    Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
    Type: Application
    Filed: April 29, 2014
    Publication date: September 11, 2014
    Applicant: LENA Foundation
    Inventors: Dongxin D. Xu, Terrance D. Paul
  • Publication number: 20140234811
    Abstract: A method of supporting vocabulary and language learning by positioning at least one microphone so as to capture speech in the listening environment of a learner. The microphone is monitored to develop a speech signal. The speech signal is analyzed to determine at least one characteristic of the speech or vocalization, wherein the characteristic indicates a qualitative or quantitative feature of the speech. The determined characteristic is compared to a preselected standard or such characteristic is tracked to show growth over time and the comparison or growth is reported to the person associated with the speech signal or person who potentially can affect the language environment of the learner.
    Type: Application
    Filed: April 28, 2014
    Publication date: August 21, 2014
    Applicant: LENA Foundation
    Inventor: Terrance D. Paul
  • Patent number: 8744847
    Abstract: Certain aspects and embodiments of the present invention are directed to systems and methods for monitoring and analyzing the language environment and the development of a key child. A key child's language environment and language development can be monitored without placing artificial limitations on the key child's activities or requiring a third party observer. The language environment can be analyzed to identify phones or speech sounds spoken by the key child, independent of content. The number and type of phones is analyzed to automatically assess the key child's expressive language development. The assessment can result in a standard score, an estimated developmental age, or an estimated mean length of utterance.
    Type: Grant
    Filed: April 25, 2008
    Date of Patent: June 3, 2014
    Assignee: LENA Foundation
    Inventors: Terrance Paul, Dongxin Xu, Jeffrey A. Richards
  • Patent number: 8708702
    Abstract: A method of supporting vocabulary and language learning by positioning at least one microphone so as to capture speech in the listening environment of a learner. The microphone is monitored to develop a speech signal. The speech signal is analyzed to determine at least one characteristic of the speech or vocalization, wherein the characteristic indicates a qualitative or quantitative feature of the speech. The determined characteristic is compared to a preselected standard or such characteristic is tracked to show growth over time and the comparison or growth is reported to the person associated with the speech signal or person who potentially can affect the language environment of the learner.
    Type: Grant
    Filed: September 13, 2005
    Date of Patent: April 29, 2014
    Assignee: LENA Foundation
    Inventor: Terrance D. Paul
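The compare-to-standard and track-growth steps above can be sketched minimally. The characteristic (a daily vocalization count), the day-indexed history format, and the single threshold standard are hypothetical choices for this sketch:

```python
# Sketch: track growth of one speech characteristic over time and
# compare the latest value to a preselected standard.

def growth_report(history, standard):
    """history: chronological list of (day, value) for one characteristic.
    Returns net growth and whether the latest value meets the standard."""
    first, last = history[0][1], history[-1][1]
    return {"growth": last - first,
            "meets_standard": last >= standard}

print(growth_report([(1, 120), (8, 150), (15, 190)], standard=180))
```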
  • Patent number: 8078465
    Abstract: Certain aspects and embodiments of the present invention are directed to systems and methods for monitoring and analyzing the language environment and the development of a key child. A key child's language environment and language development can be monitored without placing artificial limitations on the key child's activities or requiring a third party observer. The language environment can be analyzed to identify words, vocalizations, or other noises directed to or spoken by the key child, independent of content. The analysis can include the number of responses between the child and another, such as an adult and the number of words spoken by the child and/or another, independent of content of the speech. One or more metrics can be determined based on the analysis and provided to assist in improving the language environment and/or tracking language development of the key child.
    Type: Grant
    Filed: January 23, 2008
    Date of Patent: December 13, 2011
    Assignee: LENA Foundation
    Inventors: Terrance Paul, Dongxin Xu, Umit Yapanel, Sharmistha Gray
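The response-counting metric described in the abstract above can be sketched as follows. Treating adjacent segments from different speakers within a fixed gap as one response, and the 5-second gap itself, are assumptions of this sketch:

```python
# Sketch: count child-adult responses (conversational turns) from a
# chronological list of labeled speech segments.

def count_turns(segments, max_gap=5.0):
    """segments: chronological list of (speaker, start_sec, end_sec).
    Counts speaker changes separated by at most max_gap seconds."""
    turns = 0
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(segments, segments[1:]):
        if spk_a != spk_b and start_b - end_a <= max_gap:
            turns += 1
    return turns

demo = [("child", 0, 2), ("adult", 3, 6), ("child", 7, 8), ("child", 20, 21)]
print(count_turns(demo))  # two speaker changes within the gap -> 2
```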