Patents by Inventor Daniela Braga

Daniela Braga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170068651
    Abstract: A system and method of tagging utterances with Named Entity Recognition (“NER”) labels using unmanaged crowds is provided. The system may generate annotation jobs in which a user in a crowd is asked to tag which parts of an utterance, if any, relate to various entities associated with a domain. For a domain associated with a number of entities that exceeds a threshold value N, multiple batches of jobs (each batch containing jobs with a limited number of entities to tag) may be used to tag a given utterance from that domain. This reduces the cognitive load imposed on a user and prevents the user from having to tag more than N entities. As such, a domain with a large number of entities may be tagged efficiently by crowd participants without overloading any crowd participant with too many entities to tag.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 9, 2017
    Applicant: VoiceBox Technologies Corporation
    Inventors: Spencer John ROTHWELL, Daniela BRAGA, Ahmad Khamis ELSHENAWY, Stephen Steele CARTER
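    Illustration: The batching step described in this abstract can be pictured with a short Python sketch. This is not the patented implementation; the function name make_annotation_jobs and the threshold value are assumptions chosen for illustration.
      # Split a domain's entity list into batches of at most N entity types,
      # so no single crowd worker is asked to tag more than N entities.
      MAX_ENTITIES_PER_JOB = 5  # the threshold "N" from the abstract (assumed value)

      def make_annotation_jobs(utterance, domain_entities, n=MAX_ENTITIES_PER_JOB):
          """Yield one tagging job per batch of at most n entities."""
          for start in range(0, len(domain_entities), n):
              batch = domain_entities[start:start + n]
              yield {"utterance": utterance, "entities_to_tag": batch}

      if __name__ == "__main__":
          music_entities = ["artist", "album", "track", "genre",
                            "year", "playlist", "station", "composer"]
          for job in make_annotation_jobs("play thriller by michael jackson",
                                          music_entities):
              print(job)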
  • Publication number: 20170069325
    Abstract: Systems and methods of providing text related to utterances, and gathering voice data in response to the text, are provided herein. In various implementations, an identification token that identifies a first file for a voice data collection campaign, and a second file for a session script, may be received from a natural language processing training device. The first file and the second file may be used to configure the mobile application to display a sequence of screens, each of the sequence of screens containing text of at least one utterance specified in the voice data collection campaign. Voice data may be received from the natural language processing training device in response to user interaction with the text of the at least one utterance. The voice data and the text may be stored in a transcription library.
    Type: Application
    Filed: March 28, 2016
    Publication date: March 9, 2017
    Applicant: VOICEBOX TECHNOLOGIES CORPORATION
    Inventors: Daniela BRAGA, Faraz ROMANI, Ahmad Khamis ELSHENAWY, Michael KENNEWICK
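    Illustration: A minimal Python sketch, under an assumed JSON layout, of how a campaign file and a session script might be combined into a sequence of prompt screens; the field names are hypothetical, not the format used by the actual mobile application.
      import json

      def build_screens(campaign_json, script_json):
          """Return one screen definition per utterance listed in the campaign."""
          campaign = json.loads(campaign_json)
          script = json.loads(script_json)
          return [{"title": script.get("title", "Recording session"),
                   "instructions": script.get("instructions", ""),
                   "utterance_text": text}
                  for text in campaign["utterances"]]

      if __name__ == "__main__":
          campaign = '{"utterances": ["call mom", "set an alarm for 7 am"]}'
          script = '{"title": "In-car voice study", "instructions": "Read each phrase aloud."}'
          for screen in build_screens(campaign, script):
              print(screen)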
  • Publication number: 20170068656
    Abstract: A system and method of recording utterances for building Named Entity Recognition (“NER”) models is provided; such models are used to build dialog systems in which a computer listens and responds to human voice dialog. Utterances to be uttered may be provided to users through their mobile devices, which may record the user uttering (e.g., verbalizing, speaking, etc.) the utterances and upload the recordings to a computer for processing. The use of the user's mobile device, which is programmed with an utterance collection application (e.g., configured as a mobile app), facilitates the use of crowd-sourced human intelligence tasking for widespread collection of utterances from a population of users. As such, obtaining large datasets for building NER models may be facilitated by the system and method disclosed herein.
    Type: Application
    Filed: July 20, 2016
    Publication date: March 9, 2017
    Applicant: VOICEBOX TECHNOLOGIES CORPORATION
    Inventors: Daniela BRAGA, Spencer John ROTHWELL, Faraz ROMANI, Ahmad Khamis ELSHENAWY, Stephen Steele CARTER, Michael KENNEWICK
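    Illustration: The collection loop described here (prompt, record, queue for upload) can be sketched as below. The recorder is a placeholder stub (an assumption), since the abstract does not name the mobile platform's audio or upload APIs.
      def record_audio(prompt_text):
          # Placeholder: a real mobile app would invoke the device microphone here.
          return b"\x00" * 16000  # pretend one second of silent 16 kHz audio

      def collect_utterances(prompts):
          """Return (prompt, recording) pairs ready to be uploaded for processing."""
          pending_upload = []
          for prompt in prompts:
              audio = record_audio(prompt)
              pending_upload.append({"prompt": prompt, "audio_bytes": len(audio)})
          return pending_upload

      if __name__ == "__main__":
          print(collect_utterances(["navigate to the nearest gas station",
                                    "text john that i am running late"]))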
  • Publication number: 20170068659
    Abstract: Systems and methods of gathering text commands in response to a command context using a first crowdsourced job are discussed herein. A command context for a natural language processing system may be identified, where the command context is associated with a command context condition to provide commands to the natural language processing system. One or more command creators associated with one or more command creation devices may be selected. A first application on the one or more command creation devices may be configured to display command creation instructions for each of the one or more command creators to provide text commands that satisfy the command context, and to display a field for capturing a user-generated text entry to satisfy the command creation condition in accordance with the command creation instructions. Systems and methods for reviewing the text commands using second crowdsourced jobs are also presented herein.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 9, 2017
    Applicant: VoiceBox Technologies Corporation
    Inventors: Spencer John ROTHWELL, Daniela BRAGA, Ahmad Khamis ELSHENAWY, Stephen Steele CARTER
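    Illustration: A minimal Python sketch, with hypothetical names, of assigning one command-creation task per selected creator device; each task carries the command context, the on-screen instructions, and an empty field for the worker's free-text command.
      def create_command_tasks(command_context, instructions, creator_device_ids):
          """Return one task per selected command-creation device."""
          return [{"device_id": device_id,
                   "command_context": command_context,
                   "instructions": instructions,
                   "text_entry": None}  # to be filled in by the crowd worker
                  for device_id in creator_device_ids]

      if __name__ == "__main__":
          tasks = create_command_tasks(
              command_context="ask for tomorrow's weather",
              instructions="Type a command you would naturally say to a voice assistant.",
              creator_device_ids=["device-a", "device-b", "device-c"])
          for task in tasks:
              print(task)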
  • Patent number: 9448993
    Abstract: A system and method of recording utterances for building Named Entity Recognition (“NER”) models is provided; such models are used to build dialog systems in which a computer listens and responds to human voice dialog. Utterances to be uttered may be provided to users through their mobile devices, which may record the user uttering (e.g., verbalizing, speaking, etc.) the utterances and upload the recordings to a computer for processing. The use of the user's mobile device, which is programmed with an utterance collection application (e.g., configured as a mobile app), facilitates the use of crowd-sourced human intelligence tasking for widespread collection of utterances from a population of users. As such, obtaining large datasets for building NER models may be facilitated by the system and method disclosed herein.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: September 20, 2016
    Assignee: VoiceBox Technologies Corporation
    Inventors: Daniela Braga, Spencer John Rothwell, Faraz Romani, Ahmad Khamis Elshenawy, Stephen Steele Carter, Michael Kennewick
  • Patent number: 9401142
    Abstract: Systems and methods of validating transcriptions of natural language content using crowdsourced validation jobs are provided herein. In various implementations, a transcription pair comprising natural language content and text corresponding to a transcription of the natural language content may be gathered. A first group of validation devices may be selected for reviewing the transcription pair. A first crowdsourced validation job may be created for the first group of validation devices. The first crowdsourced validation job may be provided to the first group of validation devices. A vote representing whether or not the text accurately represents the natural language content may be received from each of the first group of validation devices. A validation score may be assigned to the transcription pair based, at least in part, on the votes from each of the first group of validation devices.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: July 26, 2016
    Assignee: VoiceBox Technologies Corporation
    Inventors: Spencer John Rothwell, Daniela Braga, Ahmad Khamis Elshenawy, Stephen Steele Carter
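    Illustration: The vote-aggregation step can be sketched in a few lines of Python. Using the fraction of "yes" votes as the validation score is an assumption; the abstract only states that the score is based, at least in part, on the votes.
      def validation_score(votes):
          """Return the share of validation devices that accepted the transcription."""
          if not votes:
              raise ValueError("at least one vote is required")
          return sum(1 for vote in votes if vote) / len(votes)

      if __name__ == "__main__":
          votes_from_devices = [True, True, False, True]  # one boolean vote per device
          print(f"validation score: {validation_score(votes_from_devices):.2f}")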
  • Patent number: 9361887
    Abstract: Systems and methods of providing text related to utterances, and gathering voice data in response to the text, are provided herein. In various implementations, an identification token that identifies a first file for a voice data collection campaign, and a second file for a session script, may be received from a natural language processing training device. The first file and the second file may be used to configure the mobile application to display a sequence of screens, each of the sequence of screens containing text of at least one utterance specified in the voice data collection campaign. Voice data may be received from the natural language processing training device in response to user interaction with the text of the at least one utterance. The voice data and the text may be stored in a transcription library.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: June 7, 2016
    Assignee: VoiceBox Technologies Corporation
    Inventors: Daniela Braga, Faraz Romani, Ahmad Khamis Elshenawy, Michael Kennewick
  • Patent number: 9037460
    Abstract: Dynamic features are utilized with conditional random fields (CRFs) to handle long-distance dependencies among output labels. The dynamic features represent a probability distribution over the explicit distance from/to a special output label that is pre-defined for each application scenario. Besides the number of units in the segment (from the previous special output label to the current unit), the dynamic features may also include the sum of any basic features of units in the segment. Because the added dynamic features depend on the distance from the previous special label, the search lattice used in Viterbi decoding is expanded to distinguish nodes at different distances. The dynamic features may be used in a variety of applications, such as Natural Language Processing, Text-To-Speech, and Automatic Speech Recognition; for example, they may be used to assist in prosodic break and pause prediction.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: May 19, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jian Luan, Linfang Wang, Hairong Xia, Sheng Zhao, Daniela Braga
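    Illustration: A minimal Python sketch of the two dynamic features named in the abstract, computed at one position of a labeled sequence: the number of units since the previous special output label, and the sum of a basic feature over that segment. It illustrates the feature values only; it is not a CRF trainer or the expanded Viterbi lattice described in the patent.
      def dynamic_features(labels, basic_feature, position, special_label="BREAK"):
          """Return (distance, feature_sum) for the segment ending at `position`."""
          start = 0
          for i in range(position - 1, -1, -1):  # walk back to the last special label
              if labels[i] == special_label:
                  start = i + 1
                  break
          segment = range(start, position + 1)
          distance = len(segment)                       # units since the last special label
          feature_sum = sum(basic_feature[i] for i in segment)
          return distance, feature_sum

      if __name__ == "__main__":
          labels = ["WORD", "WORD", "BREAK", "WORD", "WORD", "WORD"]
          syllable_counts = [2, 1, 0, 3, 1, 2]          # an example basic feature
          print(dynamic_features(labels, syllable_counts, position=5))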
  • Publication number: 20130262105
    Abstract: Dynamic features are utilized with conditional random fields (CRFs) to handle long-distance dependencies among output labels. The dynamic features represent a probability distribution over the explicit distance from/to a special output label that is pre-defined for each application scenario. Besides the number of units in the segment (from the previous special output label to the current unit), the dynamic features may also include the sum of any basic features of units in the segment. Because the added dynamic features depend on the distance from the previous special label, the search lattice used in Viterbi decoding is expanded to distinguish nodes at different distances. The dynamic features may be used in a variety of applications, such as Natural Language Processing, Text-To-Speech, and Automatic Speech Recognition; for example, they may be used to assist in prosodic break and pause prediction.
    Type: Application
    Filed: March 28, 2012
    Publication date: October 3, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Jian Luan, Linfang Wang, Hairong Xia, Sheng Zhao, Daniela Braga