Patents by Inventor Uwe Helmut Jost

Uwe Helmut Jost has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190318731
    Abstract: A method of performing bidirectional automatic speech recognition (ASR) using an external information source includes performing a precompute pass by pre-processing an utterance in a backward direction to generate pre-processing data stored in a data structure. In a run-time pass, ASR is performed on the utterance in a forward direction using the pre-processing data to generate a prediction list that has a given number of words in path probability order. A word prediction based on the prediction list is presented to an external information source to obtain a response confirming, selecting, or correcting the word prediction. The word prediction and the prediction list are then updated based on the response. Processing repeats until the end of the utterance is reached. The method outputs an automatic speech recognized form of the utterance based on the word prediction. Use of the external information source in an integrated manner improves current and future predictions.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Uwe Helmut Jost, Neeraj Deshmukh
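The run-time control loop described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: `predict` and `consult` are hypothetical stand-ins for the decoder's N-best generator (which would use the backward precompute data, elided here) and for the external information source.

```python
def recognize(frames, predict, consult):
    """Forward run-time pass: for each position in the utterance, build a
    prediction list in path-probability order, present it to an external
    information source, and accept its confirmation, selection, or
    correction before advancing."""
    words = []
    for frame in frames:
        nbest = predict(frame, words)   # ranked using backward precompute data
        words.append(consult(nbest))    # confirm / select / correct
    return " ".join(words)

# Toy stand-ins: a fixed hypothesis list per frame, and an external
# source that corrects a known confusion ("too" -> "two").
hyps = {"f1": ["too", "two"], "f2": ["words", "wards"]}
corrections = {"too": "two"}
out = recognize(["f1", "f2"],
                predict=lambda f, ctx: hyps[f],
                consult=lambda nbest: corrections.get(nbest[0], nbest[0]))
```

Here the external source overrides the decoder's top hypothesis for the first frame, illustrating the confirm/select/correct interaction.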
  • Publication number: 20190272145
    Abstract: A method, computer program product, and computing system for initially aligning two or more audio signals to address coarse temporal misalignment between the two or more audio signals. The two or more audio signals are detected by two or more audio detection systems within a monitored space. The two or more audio signals are subsequently realigned to address ongoing temporal signal drift between the two or more audio signals.
    Type: Application
    Filed: November 15, 2018
    Publication date: September 5, 2019
    Inventors: Dushyant Sharma, Patrick A. Naylor, Uwe Helmut Jost
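The two-stage scheme (one coarse alignment, then ongoing realignment to track drift) can be sketched as below. The brute-force cross-correlation, window size, and lag bounds are illustrative assumptions, not the patented implementation.

```python
def xcorr_lag(a, b, max_lag):
    # Lag of b relative to a that maximizes their cross-correlation.
    best, best_lag = float("-inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i] * b[i + lag] for i in range(len(a)) if 0 <= i + lag < len(b))
        if s > best:
            best, best_lag = s, lag
    return best_lag

def align_with_drift(ref, sig, win, max_lag):
    """First pass: one coarse offset for the whole recording, addressing
    coarse temporal misalignment. Second pass: re-estimate the lag window
    by window, so slow clock drift between the two audio detection
    systems is tracked over time."""
    coarse = xcorr_lag(ref, sig, max_lag)
    per_window = []
    for start in range(0, len(ref) - win + 1, win):
        r = ref[start:start + win]
        s = sig[start + coarse:start + coarse + win]
        per_window.append(coarse + xcorr_lag(r, s, max_lag))
    return coarse, per_window

# Two pulses; the second drifts one extra sample in the second window.
ref = [0.0] * 32; ref[5] = 1.0; ref[25] = 1.0
sig = [0.0] * 40; sig[8] = 1.0; sig[29] = 1.0
coarse, lags = align_with_drift(ref, sig, win=16, max_lag=8)
```

The per-window lags (3 then 4 samples here) expose the ongoing drift that the second stage corrects.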
  • Publication number: 20190272901
    Abstract: A method, computer program product, and computing system for obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information; and processing the encounter information to generate an encounter transcript.
    Type: Application
    Filed: February 8, 2019
    Publication date: September 5, 2019
    Inventors: Daniel Paulino Almendro Barreda, Dushyant Sharma, Joel Praveen Pinto, Uwe Helmut Jost, Patrick A. Naylor
  • Publication number: 20190272905
    Abstract: A method, computer program product, and computing system for obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information; processing the machine vision encounter information to identify one or more humanoid shapes; and steering one or more audio recording beams toward the one or more humanoid shapes to capture audio encounter information.
    Type: Application
    Filed: February 8, 2019
    Publication date: September 5, 2019
    Inventors: Daniel Paulino Almendro Barreda, Dushyant Sharma, Joel Praveen Pinto, Uwe Helmut Jost, Patrick A. Naylor
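The beam-steering step can be illustrated with a delay-and-sum sketch. In practice the per-microphone delays would be derived from the detected humanoid shape's position relative to the microphone array; the delays and signals below are illustrative.

```python
def delay_and_sum(mic_signals, delays):
    """Steer an audio recording beam: delay each microphone channel by its
    steering delay (in samples) and average, so sound arriving from the
    target direction adds coherently while other directions decorrelate."""
    n = min(len(s) - d for s, d in zip(mic_signals, delays))
    return [sum(s[d + i] for s, d in zip(mic_signals, delays)) / len(mic_signals)
            for i in range(n)]

# A source whose wavefront reaches mic 2 two samples after mic 1;
# steering delays of [0, 2] realign the channels on that source.
mic1 = [0.0] * 8; mic1[3] = 1.0
mic2 = [0.0] * 8; mic2[5] = 1.0
beam = delay_and_sum([mic1, mic2], delays=[0, 2])
```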
  • Publication number: 20190272147
    Abstract: A method, computer program product, and computing system for obtaining, by a computing device, encounter information of a patient encounter, wherein the encounter information may include audio encounter information obtained from at least a first encounter participant. The audio encounter information obtained from at least the first encounter participant may be processed. A user interface may be generated displaying a plurality of layers associated with the audio encounter information obtained from at least the first encounter participant.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 5, 2019
    Inventors: Paul Joseph Vozila, Guido Remi Marcel Gallopyn, Uwe Helmut Jost, Matthias Helletzgruber, Jeremy Martin Jancsary, Kumar Abhinav, Joel Praveen Pinto, Donald E. Owen, Mehmet Mert Öz
  • Publication number: 20190272896
    Abstract: A method, computer program product, and computing system for obtaining, by a computing device, encounter information of a patient encounter, wherein the encounter information may include audio encounter information obtained from at least a first encounter participant. The audio encounter information obtained from at least the first encounter participant may be processed. A user interface may be generated displaying a plurality of layers associated with the audio encounter information obtained from at least the first encounter participant. A user input may be received from a peripheral device to navigate through each of the plurality of layers associated with the audio encounter information displayed on the user interface.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 5, 2019
    Inventors: Paul Joseph Vozila, Guido Remi Marcel Gallopyn, Uwe Helmut Jost, Matthias Helletzgruber, Jeremy Martin Jancsary, Kumar Abhinav, Joel Praveen Pinto, Donald E. Owen, Mehmet Mert Öz
  • Publication number: 20190272844
    Abstract: A method, computer program product, and computing system for determining a time delay between a first audio signal received on a first audio detection system and a second audio signal received on a second audio detection system. The first and second audio detection systems are located within a monitored space. The first audio detection system is located with respect to the second audio detection system within the monitored space.
    Type: Application
    Filed: November 15, 2018
    Publication date: September 5, 2019
    Inventors: Dushyant Sharma, Patrick A. Naylor, Uwe Helmut Jost
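Once the inter-system time delay is estimated (for example by cross-correlation, which the abstract leaves open), locating one detection system with respect to the other reduces to converting that delay into a path-length difference. A minimal sketch, with illustrative sample rate and delay:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 degrees C

def delay_to_distance(delay_samples, sample_rate_hz):
    """Convert an estimated time delay between the first and second audio
    detection systems into the path-length difference (in metres) used to
    locate one system relative to the other in the monitored space."""
    return delay_samples / sample_rate_hz * SPEED_OF_SOUND_M_S

# A 160-sample delay at 16 kHz corresponds to 10 ms, i.e. about 3.43 m.
d = delay_to_distance(delay_samples=160, sample_rate_hz=16000)
```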
  • Publication number: 20190272902
    Abstract: A method, computer program product, and computing system for obtaining, by a computing device, encounter information of a patient encounter, wherein the encounter information may include audio encounter information obtained from at least a first encounter participant. The audio encounter information obtained from at least the first encounter participant may be processed. An alert may be generated to obtain additional encounter information of the patient encounter.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 5, 2019
    Inventors: Paul Joseph Vozila, Guido Remi Marcel Gallopyn, Uwe Helmut Jost, Matthias Helletzgruber, Jeremy Martin Jancsary, Kumar Abhinav, Joel Praveen Pinto, Donald E. Owen, Mehmet Mert Öz
  • Publication number: 20190272895
    Abstract: A method, computer program product, and computing system for obtaining, by a computing device, encounter information of a patient encounter, wherein the encounter information may include audio encounter information obtained from at least a first encounter participant. The audio encounter information obtained from at least the first encounter participant may be processed. A user interface may be generated displaying a plurality of layers associated with the audio encounter information obtained from at least the first encounter participant, wherein at least one of the plurality of layers is one of exposed to the user interface and not exposed to the user interface based upon, at least in part, a confidence level.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 5, 2019
    Inventors: Paul Joseph Vozila, Guido Remi Marcel Gallopyn, Uwe Helmut Jost, Matthias Helletzgruber, Jeremy Martin Jancsary, Kumar Abhinav, Joel Praveen Pinto, Donald E. Owen, Mehmet Mert Öz
  • Publication number: 20190066821
    Abstract: A method, computer program product, and computing system for synchronizing machine vision and audio is executed on a computing device and includes obtaining encounter information of a patient encounter, wherein the encounter information includes machine vision encounter information and audio encounter information. The machine vision encounter information and the audio encounter information are temporally-aligned to produce a temporally-aligned encounter recording.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 28, 2019
    Inventors: Donald E. Owen, Uwe Helmut Jost, Daniel Paulino Almendro Barreda, Dushyant Sharma
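One simple way to temporally align the two streams, assuming both carry timestamps, is to pair each machine-vision frame with the nearest audio frame. This is a sketch of one plausible alignment strategy, not the patented one; the timestamps are illustrative.

```python
import bisect

def temporally_align(video_ts, audio_ts):
    """Pair each machine-vision frame timestamp with the nearest audio
    frame timestamp, producing a temporally-aligned encounter recording
    index (timestamps assumed sorted, in seconds)."""
    pairs = []
    for t in video_ts:
        i = bisect.bisect_left(audio_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(audio_ts)]
        j = min(candidates, key=lambda k: abs(audio_ts[k] - t))
        pairs.append((t, audio_ts[j]))
    return pairs

pairs = temporally_align([0.0, 0.5, 1.0], [0.02, 0.48, 0.95, 1.40])
```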
  • Publication number: 20190051394
    Abstract: A modular ACD system is configured to automate clinical documentation and includes a machine vision system configured to obtain machine vision encounter information concerning a patient encounter. An audio recording system is configured to obtain audio encounter information concerning the patient encounter. A compute system is configured to receive the machine vision encounter information and the audio encounter information.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 14, 2019
    Inventors: Donald E. Owen, Uwe Helmut Jost, Daniel Paulino Almendro Barreda, Dushyant Sharma
  • Publication number: 20190051378
    Abstract: A method, computer program product, and computing system for source separation is executed on a computing device and includes obtaining encounter information of a patient encounter, wherein the encounter information includes first audio encounter information obtained from a first encounter participant and at least a second audio encounter information obtained from at least a second encounter participant. The first audio encounter information and the at least a second audio encounter information are processed to eliminate audio interference between the first audio encounter information and the at least a second audio encounter information.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 14, 2019
    Inventors: Guido Remi Marcel Gallopyn, Dushyant Sharma, Uwe Helmut Jost, Donald E. Owen, Patrick Naylor, Amr Nour-Eldin, Daniel Paulino Almendro Barreda
  • Publication number: 20190026494
    Abstract: A method, computer program product, and computing system for receiving content from a third-party; processing the content to predict the disclosure of sensitive information; and obscuring the sensitive information from a platform user.
    Type: Application
    Filed: July 18, 2018
    Publication date: January 24, 2019
    Inventors: Kenneth William Douglas Smith, Uwe Helmut Jost, Jean-Guy Elie Dahan, Fabrizio Lussana, Vittorio Manzone, David Copp
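The obscuring step can be sketched with pattern-based redaction. The regular expressions below are hypothetical stand-ins for whatever sensitive-information detector the method actually uses to process third-party content.

```python
import re

# Illustrative patterns standing in for a trained sensitive-content detector.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like number
    re.compile(r"\b\d{13,16}\b"),           # payment-card-like number
    re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),  # email address
]

def obscure(text, mask="[REDACTED]"):
    """Replace any span matching a sensitive pattern before the
    third-party content is shown to a platform user."""
    for pat in PATTERNS:
        text = pat.sub(mask, text)
    return text

out = obscure("Mail me at jo@example.com, card 4111111111111111.")
```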
  • Patent number: 9922664
    Abstract: A system for and method of characterizing a target application acoustic domain analyzes one or more speech data samples from the target application acoustic domain to determine one or more target acoustic characteristics, including a CODEC type and bit-rate associated with the speech data samples. The determined target acoustic characteristics may also include other aspects of the target speech data samples such as sampling frequency, active bandwidth, noise level, reverberation level, clipping level, and speaking rate. The determined target acoustic characteristics are stored in a memory as a target acoustic data profile. The data profile may be used to select and/or modify one or more out-of-domain speech samples based on the one or more target acoustic characteristics.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: March 20, 2018
    Assignee: Nuance Communications, Inc.
    Inventors: Dushyant Sharma, Patrick Naylor, Uwe Helmut Jost
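A data-profile and sample-selection step of this kind might look as follows. The field names, tolerance, and matching rule are illustrative assumptions; the abstract names further characteristics (reverberation, clipping, speaking rate) that would be handled the same way.

```python
from dataclasses import dataclass

@dataclass
class AcousticProfile:
    """Target acoustic data profile; fields mirror characteristics named
    in the abstract (others would be added alike)."""
    codec: str
    bit_rate_kbps: int
    sample_rate_hz: int
    noise_level_db: float

def matches(target, candidate, noise_tol_db=5.0):
    """Decide whether an out-of-domain speech sample fits the target
    profile closely enough to be selected for training data."""
    return (candidate.codec == target.codec
            and candidate.bit_rate_kbps == target.bit_rate_kbps
            and candidate.sample_rate_hz == target.sample_rate_hz
            and abs(candidate.noise_level_db - target.noise_level_db) <= noise_tol_db)

target = AcousticProfile("amr-nb", 12, 8000, 55.0)
ok = matches(target, AcousticProfile("amr-nb", 12, 8000, 52.0))
bad = matches(target, AcousticProfile("opus", 24, 16000, 52.0))
```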
  • Publication number: 20170278527
    Abstract: A system for and method of characterizing a target application acoustic domain analyzes one or more speech data samples from the target application acoustic domain to determine one or more target acoustic characteristics, including a CODEC type and bit-rate associated with the speech data samples. The determined target acoustic characteristics may also include other aspects of the target speech data samples such as sampling frequency, active bandwidth, noise level, reverberation level, clipping level, and speaking rate. The determined target acoustic characteristics are stored in a memory as a target acoustic data profile. The data profile may be used to select and/or modify one or more out-of-domain speech samples based on the one or more target acoustic characteristics.
    Type: Application
    Filed: March 28, 2016
    Publication date: September 28, 2017
    Inventors: Dushyant Sharma, Patrick Naylor, Uwe Helmut Jost
  • Patent number: 9679564
    Abstract: A graphical user interface is described for human guided audio source separation in a multi-speaker automated transcription system receiving audio signals representing speakers participating together in a speech session. A speaker avatar for each speaker is distributed about a user interface display to suggest speaker positions relative to each other during the speech session. There also is a speaker highlight element on the interface display for visually highlighting a specific speaker avatar corresponding to an active speaker in the speech session to aid a human transcriptionist listening to the speech session to identify the active speaker. A speech signal processor performs signal processing of the audio signals to isolate an audio signal corresponding to the highlighted speaker avatar.
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: June 13, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Andrew Johnathon Daborn, Uwe Helmut Jost
  • Patent number: 9514741
    Abstract: Training speech recognizers, e.g., their language or acoustic models, using actual user data is useful, but retaining personally identifiable information may be restricted in certain environments due to regulations. Accordingly, a method or system is provided for enabling training of an acoustic model which includes dynamically shredding a speech corpus to produce text segments and depersonalized audio features corresponding to the text segments. The method further includes enabling a system to train an acoustic model using the text segments and the depersonalized audio features. Because the data is depersonalized, actual data may be used, enabling speech recognizers to keep up-to-date with user trends in speech and usage, among other benefits.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: December 6, 2016
    Assignee: Nuance Communications, Inc.
    Inventors: Uwe Helmut Jost, Philip Charles Woodland, Marcel Katz, Syed Raza Shahid, Paul J. Vozila, William F. Ganong, III
  • Patent number: 9514740
    Abstract: Training speech recognizers, e.g., their language or acoustic models, using actual user data is useful, but retaining personally identifiable information may be restricted in certain environments due to regulations. Accordingly, a method or system is provided for enabling training of a language model which includes producing segments of text in a text corpus and counts corresponding to the segments of text, the text corpus being in a depersonalized state. The method further includes enabling a system to train a language model using the segments of text in the depersonalized state and the counts. Because the data is depersonalized, actual data may be used, enabling speech recognizers to keep up-to-date with user trends in speech and usage, among other benefits.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: December 6, 2016
    Assignee: Nuance Communications, Inc.
    Inventors: Uwe Helmut Jost, Philip Charles Woodland, Marcel Katz, Syed Raza Shahid, Paul J. Vozila, William F. Ganong, III
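The segments-plus-counts idea can be sketched with n-gram shredding: short word segments and their frequencies preserve what a language model needs, while discarding the ordering that would let the original utterances be reconstructed. The segment length is an illustrative choice.

```python
from collections import Counter

def shred_counts(text, n=3):
    """Shred a transcript into n-word segments and counts, discarding
    segment order so the original utterances (and any personally
    identifiable sequences longer than n words) cannot be reconstructed,
    while still supporting language-model training."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

counts = shred_counts("call me tomorrow please call me tomorrow", n=2)
```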
  • Patent number: 9099091
    Abstract: Typical textual prediction of voice data employs a predefined implementation arrangement of a single prediction source or of multiple prediction sources. Using a predefined arrangement of the prediction sources may not provide consistently good prediction performance as voice data quality varies. Prediction performance may be improved by employing adaptive textual prediction. According to at least one embodiment, a configuration of a plurality of prediction sources, used for textual interpretation of the voice data, is determined based at least in part on one or more features associated with the voice data or one or more a-priori interpretations of the voice data. A textual output prediction of the voice data is then generated using the plurality of prediction sources according to the determined configuration. Employing an adaptive configuration of the text prediction sources facilitates providing more accurate text transcripts of the voice data.
    Type: Grant
    Filed: January 22, 2013
    Date of Patent: August 4, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Diven Topiwala, Uwe Helmut Jost, Lisa Meredith, Daniel Almendro Barreda
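Selecting a prediction-source configuration from features of the voice data could look like the sketch below. The feature names, thresholds, sources, and weights are all illustrative, not the patented values.

```python
def choose_config(snr_db, prior_confidence):
    """Select which prediction sources to consult, and their mixing
    weights, from features of the incoming voice data (here an assumed
    signal-to-noise ratio and the confidence of an a-priori
    interpretation)."""
    if snr_db >= 20 and prior_confidence >= 0.9:
        # Clean audio with a trusted prior: lean on the acoustic evidence.
        return {"acoustic": 0.8, "language_model": 0.2}
    if snr_db >= 10:
        return {"acoustic": 0.5, "language_model": 0.3, "lexicon": 0.2}
    # Very noisy audio: fall back to text-side sources entirely.
    return {"language_model": 0.6, "lexicon": 0.4}
```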
  • Publication number: 20140278425
    Abstract: Training speech recognizers, e.g., their language or acoustic models, using actual user data is useful, but retaining personally identifiable information may be restricted in certain environments due to regulations. Accordingly, a method or system is provided for enabling training of a language model which includes producing segments of text in a text corpus and counts corresponding to the segments of text, the text corpus being in a depersonalized state. The method further includes enabling a system to train a language model using the segments of text in the depersonalized state and the counts. Because the data is depersonalized, actual data may be used, enabling speech recognizers to keep up-to-date with user trends in speech and usage, among other benefits.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Inventors: Uwe Helmut Jost, Philip Charles Woodland, Marcel Katz, Syed Raza Shahid, Paul J. Vozila, William F. Ganong, III