Patents by Inventor David Suendermann-Oeft

David Suendermann-Oeft has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230137366
    Abstract: A system and method for remote monitoring of patient motor functions includes a computing device that uses captured image data depicting a patient's body part and, based on movement information, detects whether a condition may exist that is affecting motor functions. The body part can be a hand that is tracked as the patient performs a tapping exercise. The body part can also be the patient's face, captured both during speech and without speech.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 4, 2023
    Inventors: Oliver ROESLER, William BURKE, Hardik KOTHARE, Jackson LISCOMBE, Michael NEUMANN, Andrew CORNISH, Doug HABBERSTAD, David PAUTLER, David SUENDERMANN-OEFT, Vikram RAMANARAYANAN
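The tapping exercise above can be illustrated with a toy tap counter over a tracked distance signal. This is a sketch under stated assumptions, not the patented pipeline: it assumes per-frame distances between thumb tip and index fingertip have already been extracted from the image data, and the threshold values are made up.

```python
# Toy sketch: count finger taps from a thumb-to-index distance series.
# A tap is counted each time the distance dips below a "closed" threshold
# after having been above an "open" threshold (simple hysteresis).

def count_taps(distances, open_thresh=0.08, closed_thresh=0.03):
    """Count closing events (taps) in a thumb-index distance series."""
    taps = 0
    is_open = False
    for d in distances:
        if d > open_thresh:
            is_open = True            # hand has opened since the last tap
        elif d < closed_thresh and is_open:
            taps += 1                 # a new closure counts as one tap
            is_open = False
    return taps

def tap_rate(distances, fps):
    """Taps per second, a simple movement-based indicator."""
    return count_taps(distances) * fps / len(distances) if distances else 0.0
```

A real system would derive the distance series from tracked hand landmarks and feed rate and regularity features into a downstream assessment model.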
  • Publication number: 20230018524
    Abstract: A virtual agent instructs a responding person to perform specific verbal exercises. Audio and image inputs from the responding person's performance of the exercises are used to identify speech, video, cognitive, and/or respiratory biomarkers, which are then used to evaluate speech motor function and/or neurological health. Contemplated exercises test aspects of oral motor proficiency, sustained phonation, diadochokinesis, read speech, spontaneous speech, spirometry, picture description, and emotion elicitation. Metrics from evaluation of the responding person's performance are advantageously produced automatically and are presented in spreadsheet format.
    Type: Application
    Filed: October 22, 2021
    Publication date: January 19, 2023
    Inventors: Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, David Pautler, Doug Habberstad, Andrew Cornish, Hardik Kothare, Vignesh Murali, Jackson Liscombe, Dirk Schnelle-Walka, Patrick Lange, David Suendermann-Oeft
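One of the exercises named above, diadochokinesis (rapid syllable repetition such as "pa-ta-ka"), can be sketched as an energy-peak counter. This is only an illustrative baseline under assumptions: it operates on a precomputed frame-energy envelope rather than raw audio, and the threshold and spacing parameters are invented for the example.

```python
# Toy diadochokinetic-rate estimate: count energy peaks (one per syllable
# burst) in a frame-energy envelope, then normalize by recording duration.

def count_syllable_peaks(energy, threshold=0.5, min_gap=3):
    """Count local energy maxima above threshold, at least min_gap frames apart."""
    peaks, last_peak = 0, -min_gap
    for i in range(1, len(energy) - 1):
        if (energy[i] >= threshold
                and energy[i] > energy[i - 1]
                and energy[i] >= energy[i + 1]
                and i - last_peak >= min_gap):
            peaks += 1
            last_peak = i
    return peaks

def ddk_rate(energy, frame_rate):
    """Syllable peaks per second over the whole recording."""
    duration = len(energy) / frame_rate
    return count_syllable_peaks(energy) / duration if duration else 0.0
```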
  • Publication number: 20220335939
    Abstract: A computer-generated dialog session is customized for a user having a pathology characterized at least in part by a speech pathology. The user's speech is analyzed for spans of speech in which the starts and ends of the spans satisfy predetermined time thresholds. Customization occurs by altering at least one of the following configurable parameters: (a) a threshold minimum signal strength of speech (dB) to consider as the start of a span of speech; (b) an adjustment factor by which the signal strength of background noise increases between consecutive spans of speech; (c) a threshold between signal strength during a span of speech and signal strength during a span of non-speech; (d) a start speech time threshold; and (e) an end speech time threshold.
    Type: Application
    Filed: April 19, 2022
    Publication date: October 20, 2022
    Applicant: Modality.AI
    Inventors: Jackson Liscombe, Hardik Kothare, Doug Habberstad, Andrew Cornish, Oliver Roesler, Michael Neumann, David Pautler, David Suendermann-Oeft, Vikram Ramanarayanan
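The span-detection parameters described above can be sketched as a minimal endpoint detector. The parameter names and defaults are illustrative assumptions, not the patented values: a span starts only after the frame level stays above a speech threshold (in dB) for a start-time threshold of frames, and ends only after it stays below for an end-time threshold of frames.

```python
# Minimal threshold-based speech-span detection over per-frame dB levels.

def find_speech_spans(levels_db, speech_db=-30.0, start_frames=2, end_frames=3):
    """Return (start, end) frame-index pairs for detected speech spans."""
    spans, in_speech, run, start = [], False, 0, 0
    for i, level in enumerate(levels_db):
        loud = level >= speech_db
        if not in_speech:
            run = run + 1 if loud else 0
            if run >= start_frames:          # enough consecutive loud frames
                in_speech, start, run = True, i - start_frames + 1, 0
        else:
            run = run + 1 if not loud else 0
            if run >= end_frames:            # enough consecutive quiet frames
                spans.append((start, i - end_frames + 1))
                in_speech, run = False, 0
    if in_speech:
        spans.append((start, len(levels_db)))
    return spans
```

Raising `start_frames` or `end_frames` makes the detector more tolerant of hesitations and pauses, which is exactly the kind of per-user customization the abstract describes.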
  • Publication number: 20220139562
    Abstract: A virtual agent converses with a patient or other person to assess one or more psychological or other medical conditions of that person. The virtual agent uses both semantic and affect content from the person to branch the conversation and to assess the person's condition. The virtual agent can have artificial intelligence functionalities or can utilize a separate artificial intelligence functionality. A communication agent can monitor a telecommunication session with the person and, if appropriate, modify the relative bandwidth utilization between the audio and image inputs. The artificial intelligence functionality can assist multiple virtual agents in parallel, each conversing with a responding person to assess that person's psychological or other medical condition(s). The contemplated virtual agents can be especially useful in assessing disorder severity across multiple neurological and mental disorders.
    Type: Application
    Filed: September 10, 2021
    Publication date: May 5, 2022
    Inventors: Michael Neumann, Oliver Roesler, David Suendermann-Oeft, Vikram Ramanarayanan
  • Patent number: 11238844
    Abstract: Systems and methods for identifying a person's native language and/or non-native language based on code-switched text and/or speech are presented. The systems may be trained using various methods. For example, a language identification system may be trained using one or more code-switched corpora. Text and/or speech features may be extracted from the corpora and used, in combination with a per-word language identity of the text and/or speech, to train at least one machine learner. Code-switched text and/or speech may be received and processed by extracting text and/or speech features. These features may be fed into the at least one machine learner to identify the person's native language.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: February 1, 2022
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, Robert Pugh, Yao Qian, David Suendermann-Oeft
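The per-word language-identity idea can be illustrated with a deliberately simple baseline: given code-switched text where each word already carries a language tag, take the majority tag as a crude native-language guess. The patented systems train machine learners on much richer text and speech features; this vote is only a toy stand-in.

```python
# Toy baseline: infer a dominant language from per-word language tags.

from collections import Counter

def majority_language(tagged_words):
    """tagged_words: iterable of (word, language_code) pairs."""
    counts = Counter(lang for _, lang in tagged_words)
    return counts.most_common(1)[0][0] if counts else None
```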
  • Patent number: 11222627
    Abstract: Systems and methods are provided for conducting a simulated conversation with a language learner, including determining a first dialog state of the simulated conversation. First audio data corresponding to simulated speech based on the dialog state is transmitted. Second audio data corresponding to a variable-length utterance spoken in response to the simulated speech is received. A fixed-dimension vector is generated based on the variable-length utterance. A semantic label is predicted for the variable-length utterance based on the fixed-dimension vector. A second dialog state of the simulated conversation is determined based on the semantic label, and third audio data corresponding to simulated speech is transmitted based on the second dialog state.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: January 11, 2022
    Assignee: Educational Testing Service
    Inventors: Yao Qian, Rutuja Ubale, Vikram Ramanarayanan, Patrick Lange, David Suendermann-Oeft, Keelan Evanini, Eugene Tsuprun
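The dialog-state branching step can be sketched as a transition table keyed on (state, semantic label). The states, labels, and prompts below are illustrative placeholders; in the patented method the semantic label is predicted from a fixed-dimension vector derived from the learner's utterance.

```python
# Minimal sketch: choose the second dialog state from the current state
# and a predicted semantic label, then look up the next system prompt.

TRANSITIONS = {
    ("greeting", "affirm"): "ask_topic",
    ("greeting", "deny"): "goodbye",
    ("ask_topic", "topic_given"): "discuss",
}

PROMPTS = {
    "greeting": "Hello! Would you like to practice today?",
    "ask_topic": "Great. What topic shall we discuss?",
    "discuss": "Tell me more about that.",
    "goodbye": "No problem. See you next time!",
}

def next_state(state, semantic_label):
    """Second dialog state; stay in place on an unrecognized label."""
    return TRANSITIONS.get((state, semantic_label), state)

def respond(state, semantic_label):
    """Return (new_state, prompt) for the next system turn."""
    new = next_state(state, semantic_label)
    return new, PROMPTS[new]
```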
  • Patent number: 11132913
    Abstract: Systems and methods are provided for acquiring physical-world data indicative of interactions of a subject with an avatar for evaluation. An interactive avatar is provided for interaction with the subject. Speech from the subject to the avatar is captured, and automatic speech recognition is performed to determine content of the subject speech. Motion data from the subject interacting with the avatar is captured. A next action of the interactive avatar is determined based on the content of the subject speech or the motion data. The next action of the avatar is implemented, and a score for the subject is determined based on the content of the subject speech and the motion data.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: September 28, 2021
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, Mark Katz, Eric Steinhauer, Ravindran Ramaswamy, David Suendermann-Oeft
  • Patent number: 10937444
    Abstract: A system for end-to-end automated scoring is disclosed. The system includes a word embedding layer for converting a plurality of ASR outputs into input tensors; a neural network lexical model encoder receiving the input tensors; a neural network acoustic model encoder implementing AM posterior probability, word duration, mean value of pitch, and mean value of intensity based on a plurality of cues; and a linear regression module for receiving concatenated encoded features from the neural network lexical model encoder and the neural network acoustic model encoder.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: March 2, 2021
    Assignee: Educational Testing Service
    Inventors: David Suendermann-Oeft, Lei Chen, Jidong Tao, Shabnam Ghaffarzadegan, Yao Qian
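The final stage of the architecture above, a linear regression over concatenated lexical and acoustic encodings, can be sketched in a few lines. The weights and feature values are invented for the example; in the patented system both encoders are neural networks and the regression is trained end to end.

```python
# Illustrative final scoring stage: concatenate encoded lexical and
# acoustic features, then apply a learned linear model.

def linear_score(lexical_feats, acoustic_feats, weights, bias=0.0):
    """Dot product of concatenated features with learned weights, plus bias."""
    feats = list(lexical_feats) + list(acoustic_feats)
    if len(feats) != len(weights):
        raise ValueError("weights must match concatenated feature length")
    return sum(f * w for f, w in zip(feats, weights)) + bias
```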
  • Patent number: 10607504
    Abstract: Systems and methods are provided for implementing an educational dialog system. An initial task model is accessed that identifies a plurality of dialog states associated with a task, a language model configured to identify a response meaning associated with a received response, and a language understanding model configured to select a next dialog state based on the identified response meaning. The task is provided to a plurality of persons for training. The task model is updated by revising the language model and the language understanding model based on responses received to prompts of the provided task, and the updated task is provided to a student for development of speaking capabilities.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: March 31, 2020
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, David Suendermann-Oeft, Patrick Lange, Alexei V. Ivanov, Keelan Evanini, Yao Qian, Zhou Yu
  • Patent number: 10592733
    Abstract: Systems and methods are provided for a spoken dialog system. Output is provided from a spoken dialog system that determines audio responses to a person based on recognized speech content from the person during a conversation between the person and the spoken dialog system. Video data associated with the person interacting with the spoken dialog system is received. A video engagement metric is derived from the video data, where the video engagement metric indicates a level of the person's engagement with the spoken dialog system.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 17, 2020
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, David Suendermann-Oeft, Patrick Lange, Alexei V. Ivanov, Keelan Evanini, Yao Qian, Eugene Tsuprun, Hillary R. Molloy
  • Patent number: 10283142
    Abstract: Systems and methods are provided for a processor-implemented method of analyzing quality of sound acquired via a microphone. An input metric is extracted from a sound recording at each of a plurality of time intervals. The input metric is provided at each of the time intervals to a neural network that includes a memory component, where the neural network provides an output metric at each of the time intervals, where the output metric at a particular time interval is based on the input metric at a plurality of time intervals other than the particular time interval using the memory component of the neural network. The output metric is aggregated from each of the time intervals to generate a score indicative of the quality of the sound acquired via the microphone.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: May 7, 2019
    Assignee: Educational Testing Service
    Inventors: Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao, Yao Qian
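The memory property described above, where each interval's output metric depends on input metrics at other intervals, can be illustrated with a simple stand-in. The patent uses a neural network with a memory component (e.g. a recurrent model); the exponential moving average below merely mimics the dependence on history before aggregating into one quality score.

```python
# Illustrative stand-in for a memory component: each interval's output
# depends on earlier intervals via an exponential moving average, and the
# per-interval outputs are then averaged into a single quality score.

def smooth_with_memory(input_metrics, alpha=0.5):
    """Output metric per interval, each depending on the history so far."""
    outputs, state = [], None
    for x in input_metrics:
        state = x if state is None else alpha * x + (1 - alpha) * state
        outputs.append(state)
    return outputs

def quality_score(input_metrics, alpha=0.5):
    """Aggregate the per-interval outputs into one score."""
    outs = smooth_with_memory(input_metrics, alpha)
    return sum(outs) / len(outs) if outs else 0.0
```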
  • Publication number: 20190065464
    Abstract: Systems, methods, and computer-readable non-transitory storage media for communicating medical information based at least in part on an oral communication between a doctor and a patient are disclosed. The doctor's and patient's respective contexts are inferred from the oral communication; diagnostic information and the respective contexts of the communications can also be inferred. A desired impact on a recipient of a written communication related to the oral communication is then inferred. Once the desired impact is inferred, the system generates output text using an artificial intelligence system, or by accessing a database of stock phrases, to produce appropriate surface text, subtext, and optionally an appropriate tone. The output text can be selected as a function of the inferred diagnostic information, the inferred doctor and recipient contexts, the desired impact, and the stock phrases.
    Type: Application
    Filed: March 8, 2018
    Publication date: February 28, 2019
    Inventors: Greg P. Finley, Erik Edwards, Amanda Robinson, Najmeh Sadoughi, James Fone, Mark Miller, David Suendermann-Oeft, Wael Salloum
  • Publication number: 20190065462
    Abstract: Systems, methods, and computer-readable non-transitory storage medium in which a statistical machine translation model for formatting medical reports is trained in a learning phase using bitexts and in a tuning phase using manually transcribed dictations. Bitexts are generated from automated speech recognition dictations and corresponding formatted reports, using a series of steps including identifying matches and edits between the dictations and their corresponding reports using dynamic programming, merging matches with adjacent edits, calculating a confidence score, identifying acceptable matches, edits, and merged edits, grouping adjacent acceptable matches, edits, and merged edits, and generating a plurality of bitexts each having a predetermined maximum word count (e.g., 100 words), preferably with a predetermined overlap (e.g., two thirds) with another bitext.
    Type: Application
    Filed: August 31, 2018
    Publication date: February 28, 2019
    Inventors: Wael Salloum, Greg Finley, Erik Edwards, Mark Miller, David Suendermann-Oeft
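The match/edit identification step described above can be sketched with Python's standard-library sequence matcher, which also uses dynamic programming. The patented pipeline adds merging of adjacent edits, confidence scoring, and fixed-size overlapping bitexts; this sketch only labels each aligned span of a dictation versus its formatted report as a match or an edit.

```python
# Sketch: align a dictation with its formatted report and label each
# aligned span as a "match" (identical words) or an "edit" (differing).

from difflib import SequenceMatcher

def align_segments(dictation_words, report_words):
    """Yield ('match'|'edit', dictation_span, report_span) tuples."""
    sm = SequenceMatcher(a=dictation_words, b=report_words, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        kind = "match" if tag == "equal" else "edit"
        yield kind, dictation_words[i1:i2], report_words[j1:j2]
```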
  • Publication number: 20190043486
    Abstract: A method for assisting the transformation of a dictated report into a structured and written report within a specialized field. The method starts by using automated speech recognition to produce a preliminary textual representation, which it then transforms into a simplified and normalized input sequence. It copies this sequence and transforms the copy by replacing words with tokens appropriate to each word's class as known, rare, or reducible, thereby creating a tokenized input sequence. The method then identifies and removes any preamble from the narrative text and restores punctuation, before restoring for each token within the tokenized input sequence its separable, individual, and original word, thus producing punctuated narrative text for processing into the written and structured report.
    Type: Application
    Filed: August 21, 2017
    Publication date: February 7, 2019
    Inventors: Wael Salloum, Greg Finley, Erik Edwards, Mark Miller, David Suendermann-Oeft
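The known / rare / reducible token classing can be illustrated as follows. The vocabulary, the reducible rule (here: numerals split into digit tokens), and the placeholder names are assumptions made for the sketch, not the method's actual classes.

```python
# Illustrative token classing: known words pass through, reducible words
# (numerals here) split into digit tokens, everything else becomes <rare>.

KNOWN_VOCAB = {"the", "patient", "is", "stable"}

def tokenize_word(word):
    """Map a word to itself, digit tokens, or a rare-word placeholder."""
    if word in KNOWN_VOCAB:
        return [word]                          # known: keep as-is
    if word.isdigit():
        return [f"<digit_{d}>" for d in word]  # reducible: split into digits
    return ["<rare>"]                          # rare: replace with placeholder

def tokenize(words):
    return [tok for w in words for tok in tokenize_word(w)]
```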
  • Patent number: 10176365
    Abstract: Computer-implemented systems and methods for evaluating a performance are provided. Motion of a user in a performance is detected using a motion capture device. Data collected by the motion capture device is processed with a processing system to identify occurrences of first and second types of actions by the user. The data collected by the motion capture device is processed with the processing system to determine values indicative of amounts of time between the occurrences. A non-verbal feature of the performance is determined based on the identified occurrences and the values. A score for the performance is generated using the processing system by applying a computer scoring model to the non-verbal feature.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: January 8, 2019
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, Lei Chen, Chee Wee Leong, Gary Feng, David Suendermann-Oeft
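The timing values described above, amounts of time between identified occurrences, can be sketched as a simple interval computation. Real scoring applies a trained model to such non-verbal features; this only shows how the gaps and one summary statistic might be derived from occurrence timestamps.

```python
# Sketch: compute gaps between consecutive detected action occurrences
# and summarize them, as one non-verbal feature of a performance.

def inter_occurrence_gaps(timestamps):
    """Seconds between consecutive occurrences (timestamps sorted first)."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

def mean_gap(timestamps):
    gaps = inter_occurrence_gaps(timestamps)
    return sum(gaps) / len(gaps) if gaps else 0.0
```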
  • Patent number: 10008209
    Abstract: Systems and methods are provided for providing voice authentication of a candidate speaker. Training data sets are accessed, where each training data set comprises data associated with a training speech sample of a speaker and a plurality of speaker metrics, where the plurality of speaker metrics include a native language of the speaker. The training data sets are used to train a neural network, where the data associated with each training speech sample is a training input to the neural network, and each of the plurality of speaker metrics is a training output to the neural network. Data associated with a speech sample is provided to the neural network to generate a vector that contains values for the plurality of speaker metrics, and the values contained in the vector are compared to values contained in a reference vector associated with a known person to determine whether the candidate speaker is the known person.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: June 26, 2018
    Assignee: Educational Testing Service
    Inventors: Yao Qian, Jidong Tao, David Suendermann-Oeft, Keelan Evanini, Alexei V. Ivanov, Vikram Ramanarayanan
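The final comparison step above, matching the generated speaker-metric vector against a reference vector for the claimed identity, can be sketched with cosine similarity. The similarity measure and threshold here are illustrative choices; the patent does not specify this particular comparison.

```python
# Minimal sketch: compare a candidate's speaker-metric vector with a
# reference vector and accept the identity claim above a threshold.

import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_same_speaker(candidate_vec, reference_vec, threshold=0.85):
    """Accept the identity claim when similarity clears the threshold."""
    return cosine_similarity(candidate_vec, reference_vec) >= threshold
```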