Patents by Inventor Jill S. Gilkerson
Jill S. Gilkerson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 10573336
  Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recording to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
  Type: Grant
  Filed: February 17, 2018
  Date of Patent: February 25, 2020
  Assignee: LENA FOUNDATION
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
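The duration-distribution step in the abstract above can be sketched as follows. The phone category names, durations, and decoder output format are hypothetical, and the normalized distribution is only a toy stand-in for the input the patented age-based model would consume:

```python
from collections import defaultdict

def duration_distribution(segments):
    """Aggregate per-phone-category durations into a normalized distribution.

    `segments` is a list of (phone_category, duration_seconds) pairs, as an
    adult ASR phone decoder might emit for a key child's vocalizations
    (hypothetical format, not the patent's actual representation).
    """
    totals = defaultdict(float)
    for category, duration in segments:
        totals[category] += duration
    grand_total = sum(totals.values())
    # Normalize so the distribution sums to 1 across phone categories.
    return {cat: d / grand_total for cat, d in totals.items()}

# Hypothetical decoder output: (phone category, duration in seconds).
decoded = [("AA", 0.12), ("B", 0.05), ("AA", 0.08), ("M", 0.05)]
dist = duration_distribution(decoded)
print(round(dist["AA"], 4))  # 0.2 s of 0.3 s total -> 0.6667
```

An age-based model would then take a distribution like `dist` (alongside the child's age) as its feature vector; that modeling step is not sketched here.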
- Publication number: 20180174601
  Abstract: A method of assessing expressive language development of a key child. The method can include processing an audio recording taken in a language environment of the key child to identify segments of the audio recording that correspond to vocalizations of the key child. The method also can include applying an adult automatic speech recognition phone decoder to the segments of the audio recording to identify each occurrence of a plurality of phone categories and to determine a duration for each of the plurality of phone categories. The method additionally can include determining a duration distribution for the plurality of phone categories based on the durations for the plurality of phone categories. The method further can include using the duration distribution for the plurality of phone categories in an age-based model to assess the expressive language development of the key child.
  Type: Application
  Filed: February 17, 2018
  Publication date: June 21, 2018
  Applicant: LENA Foundation
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
- Patent number: 9899037
  Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
  Type: Grant
  Filed: January 15, 2016
  Date of Patent: February 20, 2018
  Assignee: LENA FOUNDATION
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
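A minimal sketch of the compare-then-select step described above, assuming a hypothetical feature-vector layout and using Euclidean distance as a stand-in for whatever model comparison the patented method actually performs:

```python
import math

def classify_emotion(features, emotion_models):
    """Select the emotion model closest to the utterance's acoustic features.

    `features` and each model are vectors of emotion-related acoustic
    features (e.g. pitch mean, energy slope) in a made-up layout;
    nearest-neighbor distance stands in for the patent's comparison.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Return the name of the model with the smallest distance to `features`.
    return min(emotion_models, key=lambda name: distance(features, emotion_models[name]))

# Hypothetical per-emotion feature prototypes.
models = {"neutral": [0.0, 0.0], "happy": [1.0, 0.8], "angry": [0.9, -0.7]}
print(classify_emotion([0.95, 0.7], models))  # prints "happy"
```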
- Patent number: 9799348
  Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
  Type: Grant
  Filed: January 15, 2016
  Date of Patent: October 24, 2017
  Assignee: LENA FOUNDATION
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
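The segment-clustering step can be illustrated with a toy one-dimensional k-means over a single acoustic characteristic per segment (say, mean pitch). The feature values and the choice of k-means are illustrative assumptions, not the patented clustering algorithm:

```python
def cluster_segments(segment_features, k, iterations=10):
    """Group audio segments by one acoustic characteristic with 1-D k-means.

    `segment_features` holds one scalar per segment (e.g. mean pitch in an
    arbitrary unit); this is a toy stand-in for the patent's clustering.
    """
    centers = segment_features[:k]  # seed centers from the first k segments
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for f in segment_features:
            # Assign each segment to its nearest cluster center.
            idx = min(range(k), key=lambda i: abs(f - centers[i]))
            clusters[idx].append(f)
        # Recompute centers; keep the old center if a cluster emptied out.
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

print(cluster_segments([1.0, 1.1, 0.9, 5.0, 5.2], 2))
# [[1.0, 1.1, 0.9], [5.0, 5.2]]
```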
- Publication number: 20160351074
  Abstract: In some embodiments, a method that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
  Type: Application
  Filed: May 30, 2016
  Publication date: December 1, 2016
  Applicant: Lena Foundation
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
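The word-quantity metric described above can be sketched as a simple tally over labeled segments. The speaker labels and the (speaker, word count) input format are hypothetical, standing in for whatever an upstream sound-signal analyzer would produce:

```python
from collections import Counter

def word_count_metrics(segments):
    """Tally words spoken per speaker label across analyzed sound segments.

    `segments` is a list of (speaker_label, word_count) pairs — a made-up
    intermediate format, not the actual analyzer output.
    """
    counts = Counter()
    for speaker, words in segments:
        counts[speaker] += words
    return dict(counts)

report = word_count_metrics([("adult_female", 12), ("key_child", 3), ("adult_female", 7)])
print(report)  # {'adult_female': 19, 'key_child': 3}
```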
- Publication number: 20160210986
  Abstract: A method of determining an emotion of an utterance. The method can include receiving the utterance at a processor-based device comprising an audio engine. The method also can include extracting emotion-related acoustic features from the utterance. The method additionally can include comparing the emotion-related acoustic features to a plurality of emotion models that are representative of emotions. The method further can include selecting a model from the plurality of emotion models based on comparing the emotion-related acoustic features to the plurality of emotion models. The method additionally can include outputting the emotion of the utterance, wherein the emotion corresponds to the selected model. Other embodiments are provided.
  Type: Application
  Filed: January 15, 2016
  Publication date: July 21, 2016
  Applicant: Lena Foundation
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
- Publication number: 20160203832
  Abstract: In some embodiments, a method of creating an automatic language characteristic recognition system. The method can include receiving a plurality of audio recordings. The method also can include segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording. The method additionally can include clustering each audio segment of the plurality of audio segments according to audio characteristics of each audio segment to form a plurality of audio segment clusters. Other embodiments are provided.
  Type: Application
  Filed: January 15, 2016
  Publication date: July 14, 2016
  Applicant: Lena Foundation
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha Sarkar Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
- Patent number: 9240188
  Abstract: In one embodiment, a system and method for expressive language development: a method for detecting autism in a natural language environment using a microphone, a sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination. The computer is programmed to execute a method that includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method also includes extracting acoustic parameters of the key child recordings and comparing the acoustic parameters of the key child recordings to known acoustic parameters for children. The method returns a determination of a likelihood of autism.
  Type: Grant
  Filed: January 23, 2009
  Date of Patent: January 19, 2016
  Assignee: Lena Foundation
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha S. Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards
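The comparison of a key child's acoustic parameters against known parameters for children can be illustrated with a mean absolute z-score. The parameter names, the norm values, and the scoring rule are all illustrative assumptions; the patent's actual likelihood determination is not specified in the abstract:

```python
def atypicality_score(child_params, norms):
    """Compare a key child's acoustic parameters to known child norms.

    `norms` maps each parameter name to a (mean, std) pair for typically
    developing children (hypothetical values). The returned mean absolute
    z-score is a toy stand-in for the patented likelihood determination:
    higher means the child's parameters sit further from the norms.
    """
    zs = [abs(child_params[p] - mean) / std for p, (mean, std) in norms.items()]
    return sum(zs) / len(zs)

# Hypothetical norms: parameter -> (mean, standard deviation).
norms = {"canonical_syllable_rate": (2.0, 0.5), "pitch_variability": (30.0, 10.0)}
child = {"canonical_syllable_rate": 1.0, "pitch_variability": 10.0}
print(atypicality_score(child, norms))  # both parameters 2 SDs out -> 2.0
```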
- Publication number: 20090191521
  Abstract: In one embodiment, a system and method for expressive language development: a method for detecting autism in a natural language environment using a microphone, a sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination. The computer is programmed to execute a method that includes segmenting an audio signal captured by the microphone and sound recorder combination into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method also includes extracting acoustic parameters of the key child recordings and comparing the acoustic parameters of the key child recordings to known acoustic parameters for children. The method returns a determination of a likelihood of autism.
  Type: Application
  Filed: January 23, 2009
  Publication date: July 30, 2009
  Applicant: Infoture, Inc.
  Inventors: Terrance D. Paul, Dongxin D. Xu, Sharmistha S. Gray, Umit Yapanel, Jill S. Gilkerson, Jeffrey A. Richards