Patents Assigned to LENA Foundation
-
Patent number: 11328738
Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
Type: Grant
Filed: December 26, 2019
Date of Patent: May 10, 2022
Assignee: LENA FOUNDATION
Inventors: Jeffrey A. Richards, Stephen M. Hannon
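A minimal sketch of the kind of pipeline this abstract describes: group cry-related segments into non-sparse periods, then classify each period as crying or fussiness. The density window, intensity feature, and cutoff values below are illustrative assumptions, not the patented criteria.

```python
# Sketch: group cry-related segments into periods that satisfy a
# non-sparsity criterion, then classify each period.
# Thresholds and the intensity feature are illustrative assumptions.

def find_periods(segments, min_density=0.5, gap=60.0):
    """Return (start, end) periods where cry-related segments are
    non-sparse: cry time covers at least min_density of the period,
    and consecutive cry segments are no more than `gap` seconds apart."""
    cry = [s for s in segments if s["label"] == "cry-related"]
    if not cry:
        return []
    periods = []
    start, end = cry[0]["start"], cry[0]["end"]
    covered = end - start
    for s in cry[1:]:
        if s["start"] - end <= gap:          # extend the current period
            covered += s["end"] - s["start"]
            end = s["end"]
        else:                                # close it and start a new one
            if covered / (end - start) >= min_density:
                periods.append((start, end))
            start, end = s["start"], s["end"]
            covered = end - start
    if covered / (end - start) >= min_density:
        periods.append((start, end))
    return periods

def classify_period(segments, period, intensity_cutoff=0.6):
    """Label a period 'cry' if the mean intensity of its cry-related
    segments exceeds an (assumed) cutoff, else 'fussiness'."""
    start, end = period
    vals = [s["intensity"] for s in segments
            if s["label"] == "cry-related" and start <= s["start"] < end]
    return "cry" if sum(vals) / len(vals) >= intensity_cutoff else "fussiness"
```
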
-
Publication number: 20200135229
Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
Type: Application
Filed: December 26, 2019
Publication date: April 30, 2020
Applicant: LENA Foundation
Inventors: Jeffrey A. Richards, Stephen M. Hannon
-
Patent number: 10573336
Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recordings to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
Type: Grant
Filed: February 17, 2018
Date of Patent: February 25, 2020
Assignee: LENA FOUNDATION
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
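The two computational steps here can be sketched compactly: turn per-occurrence phone durations into a duration distribution, then score that distribution with an age-based model. The phone categories, weights, and linear model form below are illustrative assumptions standing in for the patented model.

```python
# Sketch: build a duration distribution over phone categories and score
# it with a toy age-based model. Categories, weights, and the linear
# model form are illustrative assumptions.

from collections import defaultdict

def duration_distribution(decoded):
    """decoded: list of (phone_category, duration_seconds) pairs from a
    phone decoder. Returns {category: fraction of total phone time}."""
    totals = defaultdict(float)
    for cat, dur in decoded:
        totals[cat] += dur
    grand = sum(totals.values())
    return {cat: t / grand for cat, t in totals.items()}

def expressive_language_score(dist, weights, bias=0.0):
    """Weighted sum over the distribution -- a stand-in for the
    age-based assessment model described in the abstract."""
    return bias + sum(weights.get(cat, 0.0) * frac
                      for cat, frac in dist.items())
```
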
-
Patent number: 10529357
Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
Type: Grant
Filed: December 7, 2018
Date of Patent: January 7, 2020
Assignee: LENA FOUNDATION
Inventors: Jeffrey A. Richards, Stephen M. Hannon
-
Publication number: 20190180772
Abstract: A method including receiving one or more datasets of audio data of a key child captured in a natural sound environment of the key child. The method also includes segmenting each of the one or more datasets of audio data to create audio segments. The audio segments include cry-related segments and non-cry segments. The method additionally includes determining periods of the cry-related segments that satisfy one or more threshold non-sparsity criteria. The method further includes performing a classification on the periods to classify each of the periods as either a cry period or a fussiness period. Other embodiments are described.
Type: Application
Filed: December 7, 2018
Publication date: June 13, 2019
Applicant: LENA Foundation
Inventors: Jeffrey A. Richards, Stephen M. Hannon
-
Patent number: 10223934
Abstract: In some embodiments, a method that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
Type: Grant
Filed: May 30, 2016
Date of Patent: March 5, 2019
Assignee: Lena Foundation
Inventor: Terrance D. Paul
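The reporting step described here amounts to aggregating per-segment word-count estimates by speaker. A minimal sketch, assuming upstream analysis has already attached a speaker label and word count to each segment (both are assumed inputs, not part of the patent's claims):

```python
# Sketch: aggregate per-segment word-count estimates into reportable
# metrics (total words, words per speaker). Speaker labels and word
# counts are assumed to come from an upstream analysis stage.

def word_count_metrics(segments):
    """segments: list of dicts with 'speaker' and 'word_count' keys.
    Returns the total word count and a per-speaker breakdown."""
    per_speaker = {}
    for s in segments:
        per_speaker[s["speaker"]] = (
            per_speaker.get(s["speaker"], 0) + s["word_count"])
    return {"total": sum(per_speaker.values()), "by_speaker": per_speaker}
```
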
-
Publication number: 20180174601
Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recordings to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
Type: Application
Filed: February 17, 2018
Publication date: June 21, 2018
Applicant: LENA Foundation
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
-
Patent number: 9899037
Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on the comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
Type: Grant
Filed: January 15, 2016
Date of Patent: February 20, 2018
Assignee: LENA FOUNDATION
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
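The compare-and-select step can be sketched as a nearest-model search over acoustic feature vectors. This uses Euclidean distance to model means purely for illustration; real emotion recognizers typically score likelihoods under statistical models, and the feature set here is an assumption.

```python
# Sketch: select the best-matching emotion model for an utterance by
# nearest-mean comparison of acoustic features. The distance measure
# and feature vectors are illustrative assumptions.

import math

def classify_emotion(features, models):
    """features: acoustic feature vector for the utterance.
    models: {emotion_name: mean feature vector}.
    Returns the emotion whose model is closest to the features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(models, key=lambda name: dist(features, models[name]))
```
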
-
Patent number: 9799348
Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
Type: Grant
Filed: January 15, 2016
Date of Patent: October 24, 2017
Assignee: LENA FOUNDATION
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
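The clustering step could be realized with any standard algorithm; below is a tiny k-means over scalar segment features as one plausible illustration. The choice of k-means, the scalar feature, and k are all assumptions, not details from the patent.

```python
# Sketch: cluster audio segments by a scalar acoustic feature with a
# tiny k-means -- one plausible way to form "audio segment clusters".
# Algorithm choice, feature, and k are illustrative assumptions.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar features into k groups.
    Returns (centers, labels); assumes len(values) >= k."""
    # Seed centers with evenly spaced sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = [min(range(k), key=lambda i: abs(v - centers[i]))
                  for v in values]
        # Recompute each center as the mean of its members.
        for i in range(k):
            members = [v for v, lab in zip(values, labels) if lab == i]
            if members:
                centers[i] = sum(members) / len(members)
    return centers, labels
```
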
-
Publication number: 20160351074
Abstract: In some embodiments, a method that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
Type: Application
Filed: May 30, 2016
Publication date: December 1, 2016
Applicant: Lena Foundation
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
-
Publication number: 20160210986
Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on the comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
Type: Application
Filed: January 15, 2016
Publication date: July 21, 2016
Applicant: Lena Foundation
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
-
Publication number: 20160203832
Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
Type: Application
Filed: January 15, 2016
Publication date: July 14, 2016
Applicant: Lena Foundation
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
-
Patent number: 9355651
Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination using the computer programmed for the specialized purpose into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
Type: Grant
Filed: April 29, 2014
Date of Patent: May 31, 2016
Assignee: LENA FOUNDATION
Inventors: Dongxin D. Xu, Terrance D. Paul
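The determine-which-segments-are-the-key-child step can be sketched as a filter over per-speaker likelihood scores. The scores are assumed to come from an upstream acoustic model, and the margin threshold is an illustrative confidence parameter, not a value from the patent.

```python
# Sketch: keep the recording segments most likely to be the key child.
# Per-speaker likelihood scores are assumed to come from an upstream
# acoustic model; `margin` is an illustrative confidence threshold.

def key_child_segments(segments, margin=0.0):
    """segments: dicts with 'scores', a {speaker: likelihood} map.
    Keep a segment when 'key_child' beats every other speaker by at
    least `margin`."""
    kept = []
    for seg in segments:
        scores = seg["scores"]
        others = [v for k, v in scores.items() if k != "key_child"]
        best_other = max(others) if others else float("-inf")
        if scores.get("key_child", float("-inf")) >= best_other + margin:
            kept.append(seg)
    return kept
```
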
-
Patent number: 9240188
Abstract: In one embodiment, a system and method for expressive language development and for detecting autism in a natural language environment uses a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination. The computer is programmed to execute a method that includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method also includes extracting acoustic parameters of the key child recordings and comparing the acoustic parameters of the key child recordings to known acoustic parameters for children. The method returns a determination of a likelihood of autism.
Type: Grant
Filed: January 23, 2009
Date of Patent: January 19, 2016
Assignee: Lena Foundation
Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha S. Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
-
Patent number: 8938390
Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination using the computer programmed for the specialized purpose into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
Type: Grant
Filed: February 27, 2009
Date of Patent: January 20, 2015
Assignee: LENA Foundation
Inventors: Dongxin D. Xu, Terrance D. Paul
-
Publication number: 20140255887
Abstract: In one embodiment, a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute the method, includes segmenting an audio signal captured by the microphone and sound recorder combination using the computer programmed for the specialized purpose into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method further includes determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings.
Type: Application
Filed: April 29, 2014
Publication date: September 11, 2014
Applicant: LENA Foundation
Inventors: Dongxin D. Xu, Terrance D. Paul
-
Publication number: 20140234811
Abstract: A method of supporting vocabulary and language learning by positioning at least one microphone so as to capture speech in the listening environment of a learner. The microphone is monitored to develop a speech signal. The speech signal is analyzed to determine at least one characteristic of the speech or vocalization, wherein the characteristic indicates a qualitative or quantitative feature of the speech. The determined characteristic is compared to a preselected standard or such characteristic is tracked to show growth over time and the comparison or growth is reported to the person associated with the speech signal or person who potentially can affect the language environment of the learner.
Type: Application
Filed: April 28, 2014
Publication date: August 21, 2014
Applicant: LENA Foundation
Inventor: Terrance D. Paul
-
Patent number: 8744847
Abstract: Certain aspects and embodiments of the present invention are directed to systems and methods for monitoring and analyzing the language environment and the development of a key child. A key child's language environment and language development can be monitored without placing artificial limitations on the key child's activities or requiring a third party observer. The language environment can be analyzed to identify phones or speech sounds spoken by the key child, independent of content. The number and type of phones is analyzed to automatically assess the key child's expressive language development. The assessment can result in a standard score, an estimated developmental age, or an estimated mean length of utterance.
Type: Grant
Filed: April 25, 2008
Date of Patent: June 3, 2014
Assignee: LENA Foundation
Inventors: Terrance Paul, Dongxin Xu, Jeffrey A. Richards
-
Patent number: 8708702
Abstract: A method of supporting vocabulary and language learning by positioning at least one microphone so as to capture speech in the listening environment of a learner. The microphone is monitored to develop a speech signal. The speech signal is analyzed to determine at least one characteristic of the speech or vocalization, wherein the characteristic indicates a qualitative or quantitative feature of the speech. The determined characteristic is compared to a preselected standard or such characteristic is tracked to show growth over time and the comparison or growth is reported to the person associated with the speech signal or person who potentially can affect the language environment of the learner.
Type: Grant
Filed: September 13, 2005
Date of Patent: April 29, 2014
Assignee: LENA Foundation
Inventor: Terrance D. Paul
-
Patent number: 8078465
Abstract: Certain aspects and embodiments of the present invention are directed to systems and methods for monitoring and analyzing the language environment and the development of a key child. A key child's language environment and language development can be monitored without placing artificial limitations on the key child's activities or requiring a third party observer. The language environment can be analyzed to identify words, vocalizations, or other noises directed to or spoken by the key child, independent of content. The analysis can include the number of responses between the child and another, such as an adult, and the number of words spoken by the child and/or another, independent of content of the speech. One or more metrics can be determined based on the analysis and provided to assist in improving the language environment and/or tracking language development of the key child.
Type: Grant
Filed: January 23, 2008
Date of Patent: December 13, 2011
Assignee: LENA Foundation
Inventors: Terrance Paul, Dongxin Xu, Umit Yapanel, Sharmistha Gray