Patents by Inventor Trung Huu Bui

Trung Huu Bui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190156822
    Abstract: A technique for multiple-turn conversational task assistance includes receiving data representing a conversation between a user and an agent. The conversation includes a digitally recorded video portion and a corresponding digitally recorded audio portion. Next, the audio portion is segmented into a plurality of audio chunks, and a validated transcript of each audio chunk is received. The audio chunks are then grouped into one or more dialog acts, where each dialog act includes at least one of the audio chunks, the validated transcript corresponding to that audio chunk, and the portion of the video corresponding to that audio chunk. Each of the dialog acts is stored in a data corpus.
    Type: Application
    Filed: November 22, 2017
    Publication date: May 23, 2019
    Applicant: Adobe Inc.
    Inventors: Ramesh Radhakrishna Manuvinakurike, Trung Huu Bui, Walter W. Chang
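
    A minimal sketch of the segmentation-and-grouping flow described in the entry above, assuming a simple Python data model. The names AudioChunk, DialogAct, and group_into_dialog_acts, and the speaker-change grouping heuristic, are illustrative assumptions rather than the patented implementation.

      # Hypothetical sketch: group transcribed audio chunks of a recorded conversation
      # into dialog acts and keep the corresponding video time span for each act.
      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class AudioChunk:
          start: float       # seconds into the recording
          end: float
          transcript: str    # validated transcript of this chunk

      @dataclass
      class DialogAct:
          speaker: str                                  # "user" or "agent"
          chunks: List[AudioChunk] = field(default_factory=list)

          @property
          def video_span(self) -> Tuple[float, float]:
              # The video portion for the act is the time span covered by its chunks.
              return (self.chunks[0].start, self.chunks[-1].end)

      def group_into_dialog_acts(chunks, speakers):
          """Group consecutive same-speaker chunks into dialog acts (one simple heuristic)."""
          acts, current = [], None
          for chunk, speaker in zip(chunks, speakers):
              if current is None or current.speaker != speaker:
                  current = DialogAct(speaker=speaker)
                  acts.append(current)
              current.chunks.append(chunk)
          return acts

      if __name__ == "__main__":
          chunks = [AudioChunk(0.0, 2.1, "How do I crop this photo?"),
                    AudioChunk(2.1, 4.0, "Select the crop tool"),
                    AudioChunk(4.0, 5.5, "then drag the handles.")]
          corpus = group_into_dialog_acts(chunks, ["user", "agent", "agent"])  # the data corpus
          for act in corpus:
              print(act.speaker, act.video_span, [c.transcript for c in act.chunks])
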
  • Publication number: 20180373952
    Abstract: The present invention is directed toward providing automated workflows for identifying a reading order from text segments extracted from a document. Ordering the text segments is based on trained natural language models. In some embodiments, the workflows are enabled to perform a method for identifying a sequence associated with a portable document. The method includes iteratively generating a probabilistic language model, receiving the portable document, and selectively extracting features (such as, but not limited to, text segments) from the document. The method may generate feature pairs from the extracted features. The method may further generate a score for each of the pairs based on the probabilistic language model and determine an order of the features based on the scores. The method may provide the extracted features in the determined order.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 27, 2018
    Inventors: Trung Huu Bui, Hung Hai Bui, Shawn Alan Gaither, Walter Wei-Tuh Chang, Michael Frank Kraley, Pranjal Daga
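
    A hedged sketch of the pair-scoring idea in the entry above: ordered pairs of extracted text segments are scored, and an order is chosen from those scores. The score_pair heuristic below is a toy stand-in for the trained probabilistic language model the application describes.

      # Hypothetical sketch: score ordered pairs of text segments and pick the
      # permutation whose adjacent-pair scores sum highest (fine for small inputs).
      from itertools import permutations

      def score_pair(a: str, b: str) -> float:
          """Toy plausibility that segment b directly follows segment a."""
          score = 0.0
          if not a.rstrip().endswith((".", "!", "?")):
              # A mid-sentence segment should be followed by a lowercase continuation.
              score += 1.0 if b and b[0].islower() else -1.0
          return score

      def best_order(segments):
          def total(order):
              return sum(score_pair(order[i], order[i + 1]) for i in range(len(order) - 1))
          return max(permutations(segments), key=total)

      if __name__ == "__main__":
          segments = ["and layout analysis.", "Reading order depends on fonts,", "columns,"]
          print(best_order(segments))
          # -> ('Reading order depends on fonts,', 'columns,', 'and layout analysis.')
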
  • Patent number: 10055403
    Abstract: The present disclosure relates to dialog states, which computers use to internally represent what users have in mind in a dialog. A dialog state tracker employs various rules that enhance the ability of computers to correctly identify the presence of slot-value pairs, which make up dialog states, in utterances or conversational input of a dialog. Some rules provide for identifying synonyms of values of slot-value pairs in utterances. Other rules provide for identifying slot-value pairs based on coreferences between utterances and previous utterances of dialog sessions. Rules are also provided for carrying over slot-value pairs from dialog states of previous utterances to the dialog state of a current utterance. Yet other rules provide for removing slot-value pairs from candidate dialog states, which are later used as dialog states of utterances.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: August 21, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Trung Huu Bui, Hung Hai Bui, Franck Dernoncourt
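
    A hedged sketch of one rule family from the patent above, assuming a hand-written synonym table: a slot-value pair is detected when an utterance mentions the value or one of its synonyms. The slots, values, and synonyms shown are illustrative, not taken from the patent.

      # Hypothetical sketch: identify slot-value pairs in an utterance via value synonyms.
      SYNONYMS = {
          ("food", "italian"): {"italian", "pasta", "pizza"},
          ("area", "centre"): {"centre", "center", "downtown", "city centre"},
      }

      def detect_slot_values(utterance: str, ontology=SYNONYMS):
          """Return slot-value pairs whose value, or a synonym of it, appears in the utterance."""
          text = utterance.lower()
          state = {}
          for (slot, value), names in ontology.items():
              if any(name in text for name in names):
                  state[slot] = value
          return state

      if __name__ == "__main__":
          print(detect_slot_values("I'd like a pizza place downtown"))
          # -> {'food': 'italian', 'area': 'centre'}
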
  • Publication number: 20180218728
    Abstract: Domain-specific speech recognizer generation with crowd sourcing is described. The domain-specific speech recognizers are generated for voice user interfaces (VUIs) configured to replace or supplement application interfaces. In accordance with the described techniques, each speech recognizer is generated for a respective application interface and is domain-specific because it is generated based on language data that corresponds to that application interface. This domain-specific language data is used to build a domain-specific language model. The domain-specific language data is also used to collect acoustic data for building an acoustic model. In particular, the domain-specific language data is used to generate user interfaces that prompt crowd-sourcing participants to say selected words, represented by the language data, so that they can be recorded. The recordings of these selected words are then used to build the acoustic model.
    Type: Application
    Filed: February 2, 2017
    Publication date: August 2, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Ramesh Radhakrishna Manuvinakurike, Trung Huu Bui, Robert S. N. Dates
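
    A minimal sketch of the data-collection flow in the entry above, under simplifying assumptions: a toy unigram "language model" is built from domain-specific sentences, and the most frequent domain words become prompts that crowd-sourcing participants would be asked to record for the acoustic model. The helper names and the frequency-based word selection are illustrative, not the described system.

      # Hypothetical sketch: build a toy domain language model and select words to prompt
      # crowd-sourcing participants to say, so their recordings can train an acoustic model.
      from collections import Counter

      STOPWORDS = {"the", "a", "an"}

      def build_language_model(domain_sentences):
          """Toy unigram 'language model': relative frequencies of domain words."""
          counts = Counter(w.lower() for s in domain_sentences for w in s.split()
                           if w.lower() not in STOPWORDS)
          total = sum(counts.values())
          return {w: c / total for w, c in counts.items()}

      def select_prompt_words(lm, k=3):
          """Pick the k most frequent domain words for recording prompts."""
          return [w for w, _ in sorted(lm.items(), key=lambda kv: -kv[1])[:k]]

      if __name__ == "__main__":
          domain_sentences = ["crop the image", "rotate the image left",
                              "undo the last crop", "export the image"]
          lm = build_language_model(domain_sentences)
          for word in select_prompt_words(lm):
              print(f'Please say the word: "{word}"')   # prompt shown to a participant
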
  • Publication number: 20180181592
    Abstract: Methods and systems are provided for ranking images against queries. A visual modality ranking of visual features of a digital image against a query is generated. A language modality ranking of text features of text associated with the digital image against the query is also generated. A multi-modal neural network determines importance weightings of the language modality ranking and the visual modality ranking against the query. The visual modality ranking and the language modality ranking are combined into a multi-modal ranking of the digital image against the query based on the importance weightings. The digital image is provided as a search result of the query based on the multi-modal ranking.
    Type: Application
    Filed: December 27, 2016
    Publication date: June 28, 2018
    Inventors: Kan Chen, Zhaowen Wang, Trung Huu Bui, Chen Fang
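
    A hedged sketch of the fusion step described in the entry above: per-modality rankings are combined into a multi-modal score using query-dependent importance weights. Here the multi-modal neural network is reduced to a fixed keyword gate followed by a softmax; the actual system learns these weightings.

      # Hypothetical sketch: combine visual and language modality scores with
      # query-dependent importance weights (a stand-in for the learned network).
      import math

      def softmax(xs):
          m = max(xs)
          exps = [math.exp(x - m) for x in xs]
          s = sum(exps)
          return [e / s for e in exps]

      def importance_weights(query: str):
          """Toy query-dependent gate: visually oriented queries lean on the visual modality."""
          visual_gate = 1.0 if any(w in query.lower() for w in ("red", "blue", "sunset")) else 0.0
          language_gate = 0.5
          return softmax([visual_gate, language_gate])   # [w_visual, w_language]

      def multimodal_score(query, visual_score, language_score):
          w_v, w_l = importance_weights(query)
          return w_v * visual_score + w_l * language_score

      if __name__ == "__main__":
          # Rank two images for the query "red sunset over water": (visual, language) scores.
          images = {"img_a.jpg": (0.9, 0.4), "img_b.jpg": (0.5, 0.8)}
          ranked = sorted(images, key=lambda k: -multimodal_score("red sunset over water", *images[k]))
          print(ranked)   # img_a.jpg ranks first because the query is visually oriented
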
  • Publication number: 20170228366
    Abstract: The present disclosure relates to dialog states, which computers use to internally represent what users have in mind in a dialog. A dialog state tracker employs various rules that enhance the ability of computers to correctly identify the presence of slot-value pairs, which make up dialog states, in utterances or conversational input of a dialog. Some rules provide for identifying synonyms of values of slot-value pairs in utterances. Other rules provide for identifying slot-value pairs based on coreferences between utterances and previous utterances of dialog sessions. Rules are also provided for carrying over slot-value pairs from dialog states of previous utterances to the dialog state of a current utterance. Yet other rules provide for removing slot-value pairs from candidate dialog states, which are later used as dialog states of utterances.
    Type: Application
    Filed: February 5, 2016
    Publication date: August 10, 2017
    Inventors: Trung Huu Bui, Hung Hai Bui, Franck Dernoncourt
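
    This application shares its abstract and filing date with granted patent 10055403 above. Rather than repeat the synonym sketch, the sketch below illustrates a different rule family from the same abstract: carrying slot-value pairs over from the dialog state of a previous utterance into the current dialog state unless the new utterance overrides or removes them. The update policy shown is an illustrative simplification.

      # Hypothetical sketch: carry slot-value pairs over from the previous dialog state.
      def carry_over(previous_state: dict, new_pairs: dict, removed_slots=()):
          """Start from the previous state, drop removed slots, then apply new slot-value pairs."""
          state = {s: v for s, v in previous_state.items() if s not in removed_slots}
          state.update(new_pairs)          # pairs from the current utterance take precedence
          return state

      if __name__ == "__main__":
          previous = {"food": "italian", "area": "centre"}
          # Current utterance: "Actually make it Thai, same part of town."
          print(carry_over(previous, {"food": "thai"}))
          # -> {'food': 'thai', 'area': 'centre'}  ("area" carried over from the prior turn)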