Patents by Inventor Chee Wee Leong
Chee Wee Leong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12300244
Abstract: Data is received that encapsulates a video of a subject performing a task. This video is used to generate a transcript using an automatic speech recognition (ASR) system. A plurality of text segments are generated from the transcript and then tokenized. A textual representation of each segment is extracted by a transformer model using the tokenized text segment (i.e., the tokens corresponding to the text segment). Thereafter, for each segment, a fused representation derived from the textual representations and corresponding visual and audio features from the video is generated. A sparse attention machine learning model then selects an optimal slice of the video based on the fused representations. The optimal slice can then be input into one or more machine learning models trained to characterize performance of the task by the subject.
Type: Grant
Filed: August 22, 2022
Date of Patent: May 13, 2025
Assignee: Educational Testing Service
Inventors: Chee Wee Leong, Xianyang Chen, Vinay K. Basheerabad, Chong Min Lee, Patrick D. Houghton
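For orientation, here is a minimal Python sketch of the kind of segment-selection pipeline this abstract describes. The ASR system, transformer encoder, feature extractors, and attention weights are all placeholder assumptions, not the patented implementation.

```python
# Toy sketch: fuse per-segment text/audio/visual features and pick one "slice"
# with a softmax attention score. All names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode_text(segment: str, dim: int = 16) -> np.ndarray:
    """Stand-in for a transformer textual representation of one transcript segment."""
    return rng.standard_normal(dim)

def fuse(text_vec, audio_vec, visual_vec) -> np.ndarray:
    """Fuse per-segment textual, audio, and visual features by concatenation."""
    return np.concatenate([text_vec, audio_vec, visual_vec])

def select_optimal_slice(fused: np.ndarray, attn_weights: np.ndarray) -> int:
    """Score each segment with a (hypothetical) attention vector and pick the best."""
    scores = fused @ attn_weights                      # one scalar per segment
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                               # softmax over segments
    return int(np.argmax(probs))

segments = ["intro", "main demonstration", "wrap-up"]  # as if from an ASR transcript
fused = np.stack([
    fuse(encode_text(s), rng.standard_normal(8), rng.standard_normal(8))
    for s in segments
])
attn_weights = rng.standard_normal(fused.shape[1])     # would be learned in practice
print("Selected slice:", segments[select_optimal_slice(fused, attn_weights)])
```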
-
Patent number: 11455488
Abstract: Systems and methods are provided for processing a drawing in a modeling prototype. A data structure associated with a visual model is accessed. The visual model is analyzed to extract construct-relevant features, where the construct-relevant features are extracted using a drawing object by identifying visual attributes of the visual model and populating a data structure for each object drawn. The visual model is analyzed to generate a statistical model, where the statistical model is generated using a multidimensional scoring rubric by targeting different constructs which compositely estimate learning progression levels, wherein the statistical model is based on features that are principally aligned with one or more of the constructs. An automated scoring is determined based on the construct-relevant features and the statistical model, where the automated scoring is stored in a computer-readable medium and is outputted for display, transmitted across a computer network, or printed.
Type: Grant
Filed: March 20, 2019
Date of Patent: September 27, 2022
Assignee: Educational Testing Service
Inventors: Chee Wee Leong, Lei Liu, Rutuja Ubale, Lei Chen
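A toy illustration of the general idea, extracting simple features from drawn objects and scoring them with a small linear model. The object attributes, feature names, and weights are invented for illustration and are not taken from the patent.

```python
# Illustrative only: per-object attributes -> construct-relevant features -> score.
from dataclasses import dataclass

@dataclass
class DrawnObject:
    shape: str
    label: str
    connected_to: int  # number of links to other drawn objects

def extract_features(objects: list[DrawnObject]) -> dict[str, float]:
    """Populate a simple feature dictionary from the visual attributes of each object."""
    return {
        "n_objects": float(len(objects)),
        "n_labeled": float(sum(1 for o in objects if o.label)),
        "mean_connectivity": sum(o.connected_to for o in objects) / max(len(objects), 1),
    }

def score(features: dict[str, float], weights: dict[str, float]) -> float:
    """Apply a (hypothetical) fitted linear model aligned with rubric constructs."""
    return sum(weights[k] * v for k, v in features.items())

drawing = [DrawnObject("box", "heater", 2), DrawnObject("arrow", "heat flow", 1)]
feats = extract_features(drawing)
print(round(score(feats, {"n_objects": 0.5, "n_labeled": 1.0, "mean_connectivity": 0.8}), 2))
```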
-
Patent number: 10803318
Abstract: Systems and methods are provided for scoring video clips using visual feature extraction. A signal including a video clip of a subject is received. For each frame of the video clip, physiological features of the subject visually rendered in the video clip are extracted. A plurality of visual words associated with the extracted physiological features are determined. A document including the plurality of visual words is generated. A plurality of feature vectors associated with the document are determined. The plurality of feature vectors are provided to a regression model for scoring.
Type: Grant
Filed: May 17, 2017
Date of Patent: October 13, 2020
Assignee: Educational Testing Service
Inventors: Lei Chen, Gary Feng, Chee Wee Leong, Chong Min Lee
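A minimal bag-of-visual-words sketch along the lines of this abstract: per-frame feature vectors are quantized into "visual words", each clip becomes a word-count document, and the counts feed a regression model. The cluster count, feature dimensions, and regressor choice are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def frame_features(n_frames: int, dim: int = 6) -> np.ndarray:
    """Stand-in for per-frame physiological features extracted from the video."""
    return rng.standard_normal((n_frames, dim))

clips = [frame_features(40) for _ in range(20)]          # 20 toy video clips
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(clips))

def to_document_vector(frames: np.ndarray) -> np.ndarray:
    """Map frames to visual words and count them (one 'document' per clip)."""
    words = codebook.predict(frames)
    return np.bincount(words, minlength=codebook.n_clusters).astype(float)

X = np.stack([to_document_vector(c) for c in clips])
y = rng.uniform(1, 5, size=len(clips))                   # placeholder human scores
model = Ridge(alpha=1.0).fit(X, y)
print("Predicted score for first clip:", round(float(model.predict(X[:1])[0]), 2))
```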
-
Patent number: 10706738
Abstract: Systems and methods are described for providing a multi-modal evaluation of a presentation. A system includes a motion capture device configured to detect motion of an examinee giving a presentation and an audio recording device configured to capture audio of the examinee giving the presentation. One or more data processors are configured to extract a non-verbal feature of the presentation based on data collected by the motion capture device and an audio feature of the presentation based on data collected by the audio recording device. The one or more data processors are further configured to generate a presentation score based on the non-verbal feature and the audio feature.
Type: Grant
Filed: April 23, 2019
Date of Patent: July 7, 2020
Assignee: Educational Testing Service
Inventors: Lei Chen, Gary Feng, Chee Wee Leong, Christopher Kitchen, Chong Min Lee
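A toy sketch of the multi-modal scoring idea: one non-verbal feature from motion data, one audio feature, and a simple weighted combination into a presentation score. The specific features and weights are illustrative assumptions, not the claimed method.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonverbal_feature(joint_positions: np.ndarray) -> float:
    """E.g., overall amount of body movement across motion-capture frames."""
    return float(np.linalg.norm(np.diff(joint_positions, axis=0), axis=1).mean())

def audio_feature(samples: np.ndarray) -> float:
    """E.g., loudness variability as a rough proxy for vocal expressiveness."""
    return float(samples.std())

def presentation_score(motion: float, audio: float,
                       weights=(0.6, 0.4), bias=2.0) -> float:
    """Hypothetical linear scoring model over the two modalities."""
    return bias + weights[0] * motion + weights[1] * audio

joints = rng.standard_normal((300, 3)).cumsum(axis=0)   # fake motion-capture track
audio = rng.standard_normal(16000)                       # fake 1-second audio signal
print(round(presentation_score(nonverbal_feature(joints), audio_feature(audio)), 2))
```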
-
Patent number: 10607188
Abstract: Systems and methods described herein utilize supervised machine learning to generate a model for scoring interview responses. The system may access a training response, which in one embodiment is an audiovisual recording of a person responding to an interview question. The training response may have an assigned human-determined score. The system may extract at least one delivery feature and at least one content feature from the audiovisual recording of the training response, and use the extracted features and the human-determined score to train a response scoring model for scoring interview responses. The response scoring model may be configured based on the training to automatically assign scores to audiovisual recordings of interview responses. The scores for interview responses may be used by interviewers to assess candidates.
Type: Grant
Filed: March 24, 2015
Date of Patent: March 31, 2020
Assignee: Educational Testing Service
Inventors: Patrick Charles Kyllonen, Lei Chen, Michelle Paulette Martin, Isaac Bejar, Chee Wee Leong, Joanna Gorin, David Michael Williamson
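A hedged sketch of the supervised-learning setup the abstract describes: delivery and content features extracted from recorded responses, human scores as labels, and a regression model fit on the pairs. Feature extraction is faked with random values here purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

def delivery_features(n: int) -> np.ndarray:
    """Stand-in for features such as speaking rate or pause frequency."""
    return rng.uniform(0, 1, size=(n, 3))

def content_features(n: int) -> np.ndarray:
    """Stand-in for features describing what the candidate actually said."""
    return rng.uniform(0, 1, size=(n, 4))

n_train = 50
X = np.hstack([delivery_features(n_train), content_features(n_train)])
human_scores = rng.integers(1, 6, size=n_train).astype(float)  # 1-5 rubric labels

scoring_model = LinearRegression().fit(X, human_scores)
new_response = np.hstack([delivery_features(1), content_features(1)])
print("Automated score:", round(float(scoring_model.predict(new_response)[0]), 2))
```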
-
Patent number: 10311743
Abstract: Systems and methods are described for providing a multi-modal evaluation of a presentation. A system includes a motion capture device configured to detect motion of an examinee giving a presentation and an audio recording device configured to capture audio of the examinee giving the presentation. One or more data processors are configured to extract a non-verbal feature of the presentation based on data collected by the motion capture device and an audio feature of the presentation based on data collected by the audio recording device. The one or more data processors are further configured to generate a presentation score based on the non-verbal feature and the audio feature.
Type: Grant
Filed: April 8, 2014
Date of Patent: June 4, 2019
Assignee: Educational Testing Service
Inventors: Lei Chen, Gary Feng, Chee Wee Leong, Christopher Kitchen, Chong Min Lee
-
Patent number: 10176365
Abstract: Computer-implemented systems and methods for evaluating a performance are provided. Motion of a user in a performance is detected using a motion capture device. Data collected by the motion capture device is processed with a processing system to identify occurrences of first and second types of actions by the user. The data collected by the motion capture device is processed with the processing system to determine values indicative of amounts of time between the occurrences. A non-verbal feature of the performance is determined based on the identified occurrences and the values. A score for the performance is generated using the processing system by applying a computer scoring model to the non-verbal feature.
Type: Grant
Filed: April 20, 2016
Date of Patent: January 8, 2019
Assignee: Educational Testing Service
Inventors: Vikram Ramanarayanan, Lei Chen, Chee Wee Leong, Gary Feng, David Suendermann-Oeft
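An illustrative sketch of the timing feature this abstract describes: find occurrences of two action types in a time-stamped event stream, measure the gaps between them, and feed a summary of those gaps to a toy scoring model. The event labels, gap summary, and scoring weights are assumptions.

```python
events = [  # (time in seconds, action type) as might be derived from motion capture
    (1.0, "gesture"), (2.5, "head_nod"), (4.0, "gesture"),
    (7.5, "head_nod"), (9.0, "gesture"),
]

def occurrences(stream, action):
    return [t for t, a in stream if a == action]

def gaps_between(first_times, second_times):
    """Time from each first-type action to the next second-type action, if any."""
    out = []
    for t in first_times:
        later = [s for s in second_times if s > t]
        if later:
            out.append(min(later) - t)
    return out

gesture_times = occurrences(events, "gesture")
nod_times = occurrences(events, "head_nod")
gaps = gaps_between(gesture_times, nod_times)
nonverbal_feature = sum(gaps) / len(gaps)          # mean gap as the summary feature
score = 3.0 - 0.2 * nonverbal_feature              # hypothetical scoring model
print(round(score, 2))
```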
-
Patent number: 9852379
Abstract: Systems and methods described herein utilize supervised machine learning to generate a figure-of-speech prediction model for classifying content words in a running text as either figurative (e.g., as a metaphor, simile, etc.) or non-figurative (i.e., literal). The prediction model may extract and analyze any number of features in making its prediction, including a topic model feature, unigram feature, part-of-speech feature, concreteness feature, concreteness difference feature, literal context feature, non-literal context feature, and off-topic feature, each of which are described in detail herein. Since uses of figure of speech in writing may signal content sophistication, the figure-of-speech prediction model allows scoring engines to further take into consideration a text's use of figure of speech when generating a score.
Type: Grant
Filed: March 6, 2015
Date of Patent: December 26, 2017
Assignee: Educational Testing Service
Inventors: Beata Beigman Klebanov, Chee Wee Leong, Michael Flor, Michael Heilman
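A toy sketch of the classification task in this abstract: each content word gets a small feature vector (here, a made-up concreteness score and topic-fit score), and a classifier labels it figurative or literal. The features, training data, and classifier choice are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# (concreteness, topic_fit) pairs with labels: 1 = figurative, 0 = literal
X_train = np.array([
    [0.9, 0.2], [0.8, 0.1], [0.7, 0.3],   # concrete words used off-topic: figurative
    [0.3, 0.9], [0.2, 0.8], [0.4, 0.7],   # abstract, on-topic words: literal
])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = LogisticRegression().fit(X_train, y_train)

def classify_word(word: str, concreteness: float, topic_fit: float) -> str:
    label = clf.predict([[concreteness, topic_fit]])[0]
    return f"{word}: {'figurative' if label == 1 else 'literal'}"

print(classify_word("ocean", 0.85, 0.15))   # "a sea of troubles" style usage
print(classify_word("policy", 0.25, 0.9))
```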
-
Publication number: 20150269529
Abstract: Systems and methods described herein utilize supervised machine learning to generate a model for scoring interview responses. The system may access a training response, which in one embodiment is an audiovisual recording of a person responding to an interview question. The training response may have an assigned human-determined score. The system may extract at least one delivery feature and at least one content feature from the audiovisual recording of the training response, and use the extracted features and the human-determined score to train a response scoring model for scoring interview responses. The response scoring model may be configured based on the training to automatically assign scores to audiovisual recordings of interview responses. The scores for interview responses may be used by interviewers to assess candidates.
Type: Application
Filed: March 24, 2015
Publication date: September 24, 2015
Inventors: Patrick Charles Kyllonen, Lei Chen, Michelle Paulette Martin, Isaac Bejar, Chee Wee Leong, Joanna Gorin, David Michael Williamson
-
Publication number: 20150254565
Abstract: Systems and methods described herein utilize supervised machine learning to generate a figure-of-speech prediction model for classifying content words in a running text as either figurative (e.g., as a metaphor, simile, etc.) or non-figurative (i.e., literal). The prediction model may extract and analyze any number of features in making its prediction, including a topic model feature, unigram feature, part-of-speech feature, concreteness feature, concreteness difference feature, literal context feature, non-literal context feature, and off-topic feature, each of which are described in detail herein. Since uses of figure of speech in writing may signal content sophistication, the figure-of-speech prediction model allows scoring engines to further take into consideration a text's use of figure of speech when generating a score.
Type: Application
Filed: March 6, 2015
Publication date: September 10, 2015
Inventors: Beata Beigman Klebanov, Chee Wee Leong, Michael Flor, Michael Heilman
-
Publication number: 20140302469
Abstract: Systems and methods are described for providing a multi-modal evaluation of a presentation. A system includes a motion capture device configured to detect motion of an examinee giving a presentation and an audio recording device configured to capture audio of the examinee giving the presentation. One or more data processors are configured to extract a non-verbal feature of the presentation based on data collected by the motion capture device and an audio feature of the presentation based on data collected by the audio recording device. The one or more data processors are further configured to generate a presentation score based on the non-verbal feature and the audio feature.
Type: Application
Filed: April 8, 2014
Publication date: October 9, 2014
Applicant: Educational Testing Service
Inventors: Lei Chen, Gary Feng, Chee Wee Leong, Christopher Kitchen, Chong Min Lee
-
Patent number: 8819047
Abstract: The described implementations relate to processing of electronic data. One implementation is manifested as a technique that can include receiving an input statement that includes a plurality of terms. The technique can also include providing, in response to the input statement, ranked supporting documents that support the input statement or ranked contradicting results that contradict the input statement.
Type: Grant
Filed: April 4, 2012
Date of Patent: August 26, 2014
Assignee: Microsoft Corporation
Inventors: Silviu-Petru Cucerzan, Chee Wee Leong
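A very rough sketch of the retrieval idea in this abstract: given an input statement, rank candidate documents as supporting or contradicting. The term-overlap score and the negation-cue heuristic below are invented stand-ins, not the patented technique.

```python
STATEMENT = "coffee improves short term memory"
DOCS = [
    "a recent study found coffee improves short term memory in adults",
    "researchers report coffee does not improve memory at all",
    "tea is a popular beverage around the world",
]
NEGATION_CUES = {"not", "no", "never"}

def overlap(statement: str, doc: str) -> float:
    """Fraction of statement terms that also appear in the document."""
    s, d = set(statement.split()), set(doc.split())
    return len(s & d) / len(s)

def rank(statement: str, docs: list[str]):
    supporting, contradicting = [], []
    for doc in docs:
        score = overlap(statement, doc)
        if score == 0:
            continue                                   # unrelated document
        target = contradicting if NEGATION_CUES & set(doc.split()) else supporting
        target.append((score, doc))
    key = lambda pair: -pair[0]
    return sorted(supporting, key=key), sorted(contradicting, key=key)

support, contradict = rank(STATEMENT, DOCS)
print("Supporting:", support)
print("Contradicting:", contradict)
```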
-
Publication number: 20130268519
Abstract: The described implementations relate to processing of electronic data. One implementation is manifested as a technique that can include receiving an input statement that includes a plurality of terms. The technique can also include providing, in response to the input statement, ranked supporting documents that support the input statement or ranked contradicting results that contradict the input statement.
Type: Application
Filed: April 4, 2012
Publication date: October 10, 2013
Applicant: Microsoft Corporation
Inventors: Silviu-Petru Cucerzan, Chee Wee Leong