Patents by Inventor Eui Chul Shin

Eui Chul Shin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12314670
    Abstract: Systems and methods are provided for automatically generating a program based on a natural language utterance using semantic parsing. The semantic parsing includes translating a natural language utterance into instructions in a logical form for execution. The methods use a pre-trained natural language model and generate a canonical utterance as an intermediate form before generating the logical form. The natural language model may be an auto-regressive natural language model with a transformer to paraphrase a sequence of words or tokens in the natural language utterance. The methods generate a prompt including exemplar input/output pairs as a few-shot learning technique for the natural language model to predict words or tokens. The methods further use constrained decoding to determine a canonical utterance, iteratively checking sequences of words predicted by the model against rules for canonical utterances. The methods generate a program based on the canonical utterance for execution in an application. (A minimal illustrative sketch of the constrained-decoding step follows this entry.)
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: May 27, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Benjamin David Van Durme, Adam D. Pauls, Daniel Louis Klein, Eui Chul Shin, Christopher H. Lin, Pengyu Chen, Subhro Roy, Emmanouil Antonios Platanios, Jason Michael Eisner, Benjamin Lev Snyder, Samuel McIntire Thomson
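The sketch below illustrates the constrained-decoding idea described in the abstract above. It is not the patented implementation: the few-shot prompt, the set of allowed canonical utterances, and the `score_next_tokens` stub standing in for the pre-trained autoregressive model are all illustrative assumptions.

```python
# Illustrative sketch only: constrained decoding of a canonical utterance.
# The prompt, the allowed canonical utterances, and the model stub are
# assumptions for demonstration, not the patented implementation.

FEW_SHOT_PROMPT = (
    "Input: remind me to call Ann tomorrow\n"
    'Canonical: create a reminder named "call Ann" for tomorrow\n'
    "Input: set up lunch with Bob on Friday\n"
    'Canonical: create an event named "lunch with Bob" on Friday\n'
)

# Rules for canonical utterances, represented here as a set of allowed
# token sequences (a grammar or prefix trie would be used in practice).
ALLOWED_CANONICAL = [
    ["create", "a", "reminder"],
    ["create", "an", "event"],
    ["delete", "an", "event"],
]

def score_next_tokens(prefix):
    """Stub for the pre-trained autoregressive model: returns a score for
    each candidate next token given the prompt and the decoded prefix.
    A real system would query the language model here."""
    preferred = {"create": 0.9, "an": 0.8, "event": 0.7}
    return lambda token: preferred.get(token, 0.1)

def constrained_decode():
    """Iteratively pick the highest-scoring next token among those that
    keep the partial sequence a prefix of some allowed canonical utterance."""
    decoded = []
    while True:
        # Tokens permitted by the canonical-utterance rules at this position.
        candidates = {
            seq[len(decoded)]
            for seq in ALLOWED_CANONICAL
            if seq[: len(decoded)] == decoded and len(seq) > len(decoded)
        }
        if not candidates:
            break  # every allowed utterance consistent with the prefix is complete
        score = score_next_tokens(decoded)
        decoded.append(max(candidates, key=score))
    return " ".join(decoded)

if __name__ == "__main__":
    canonical = constrained_decode()
    print("Canonical utterance:", canonical)  # -> "create an event"
    # A separate step would map the canonical utterance to a logical form
    # (program) for execution in the application.
```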
  • Publication number: 20250124229
    Abstract: Implementations of semantic parsing using pre-trained language models are provided. One aspect includes a computing system for semantic parsing of natural language.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 17, 2025
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jacob Daniel ANDREAS, Kaj Alexander Nelson BOSTROM, Hao FANG, Harsh JHAMTANI, Jason Michael EISNER, Benjamin David VAN DURME, Patrick Aozhe XIA, Eui Chul SHIN, Samuel McIntire THOMSON
  • Publication number: 20240296177
    Abstract: Systems and methods are provided for implementing conversational large language model (“LLM”) or other AI/ML-based user tenant orchestration. A first prompt is generated based on natural language (“NL”) input from a user. The first prompt is used by a first LLM or AI/ML system to generate a query to access data items that are stored in a portion of a multitenant data storage system, the portion being accessible by the user. Once accessed and received, the data items are input into a second prompt that is used by a second LLM or AI/ML system to return a set of functions with corresponding sets of arguments. The set of functions is executed according to the sets of arguments, and the results of the executed functions are used to generate a response to the NL input. The generated response is then caused to be presented to the user via a user interface. (A minimal sketch of this two-stage orchestration follows this entry.)
    Type: Application
    Filed: May 4, 2023
    Publication date: September 5, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Matthew Jonathan GARDNER, Jason Michael EISNER, Christopher KEDZIE, Andrei VOROBEV, Eui Chul SHIN, Joshua James CLAUSMAN
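The sketch below illustrates the two-stage flow described in the abstract above. It is not the patented system: the prompt templates, the in-memory tenant store, and the stubbed `call_llm` function are illustrative assumptions.

```python
# Illustrative sketch only: two-stage LLM orchestration over tenant data.
# The prompt templates, the storage lookup, and the stubbed LLM calls are
# assumptions for demonstration, not the patented implementation.
import json

TENANT_DATA = {  # the portion of a multitenant store accessible by this user
    "alice": [{"id": 1, "subject": "Budget review", "when": "Friday"}],
}

def call_llm(prompt):
    """Stub standing in for an LLM or other AI/ML system."""
    if "Generate a query" in prompt:
        return json.dumps({"user": "alice", "kind": "meetings"})
    return json.dumps([{"function": "summarize", "arguments": {"field": "subject"}}])

def handle_request(user, nl_input):
    # First prompt: turn the natural-language input into a data query.
    query = json.loads(call_llm(f"Generate a query for: {nl_input}"))
    items = TENANT_DATA.get(query["user"], [])  # only the user's portion

    # Second prompt: given the retrieved items, return functions plus arguments.
    plan = json.loads(call_llm(f"Plan functions for items: {json.dumps(items)}"))

    # Execute the returned functions according to their arguments.
    results = []
    for step in plan:
        if step["function"] == "summarize":
            field = step["arguments"]["field"]
            results.append(", ".join(item[field] for item in items))

    # Use the results to generate a response presented to the user.
    return f"Here is what I found for you: {results[0]}" if results else "Nothing found."

if __name__ == "__main__":
    print(handle_request("alice", "What meetings do I have this week?"))
```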
  • Publication number: 20240202518
    Abstract: Examples are disclosed that relate to synthesizing a dataset of utterances in an automated manner using a computer while preserving user privacy. The synthesized dataset of utterances is usable to train a machine learning model. In one example, a differentially private parse tree generation model is trained based at least on private parse trees of a private utterance-parse tree dataset. A differentially private parse-to-utterance model is trained based at least on private utterances and corresponding private parse trees of the private utterance-parse tree dataset. A synthesized parse tree dataset is generated. The synthesized parse tree dataset includes synthesized parse trees sampled at random from the trained differentially private parse tree generation model. A synthesized utterance dataset is generated via the trained differentially private parse-to-utterance model. (A toy sketch of this privacy-preserving synthesis pipeline follows this entry.)
    Type: Application
    Filed: May 22, 2023
    Publication date: June 20, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jason Michael EISNER, Eui Chul SHIN, Fatemehsadat MIRESHGHALLAH, Tatsunori Benjamin HASHIMOTO, Yu SU
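The sketch below illustrates the synthesis pipeline described in the abstract above. It is not the patented method: the toy parse trees, the Laplace-noised histogram standing in for a differentially private parse-tree generation model, and the template-based parse-to-utterance step are all illustrative assumptions.

```python
# Illustrative sketch only: synthesizing an utterance dataset under
# differential privacy. The toy parse trees, the noised histogram standing in
# for a DP parse-tree generation model, and the template-based
# parse-to-utterance step are assumptions, not the patented models.
import random

PRIVATE_DATASET = [  # private (utterance, parse tree) pairs
    ("remind me to pay rent", "CreateReminder(task)"),
    ("set a reminder to pay rent", "CreateReminder(task)"),
    ("schedule lunch with Bob", "CreateEvent(title)"),
]

def laplace_noise(scale):
    """Laplace(0, scale) sample, built from two exponential samples."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_tree_distribution(pairs, epsilon=1.0):
    """Toy DP parse-tree generation model: per-tree counts with Laplace noise
    (sensitivity 1, scale 1/epsilon), normalized into a sampling distribution."""
    counts = {}
    for _, tree in pairs:
        counts[tree] = counts.get(tree, 0) + 1
    noisy = {t: max(c + laplace_noise(1 / epsilon), 1e-6) for t, c in counts.items()}
    total = sum(noisy.values())
    return {t: v / total for t, v in noisy.items()}

def parse_to_utterance(tree):
    """Stub for the DP parse-to-utterance model: maps a sampled tree to text."""
    templates = {
        "CreateReminder(task)": "remind me to do something",
        "CreateEvent(title)": "put an event on my calendar",
    }
    return templates.get(tree, "do something")

def synthesize(n=5):
    """Sample synthesized parse trees, then map each to a synthesized utterance."""
    dist = dp_tree_distribution(PRIVATE_DATASET)
    trees = random.choices(list(dist), weights=list(dist.values()), k=n)
    return [(parse_to_utterance(t), t) for t in trees]

if __name__ == "__main__":
    for utterance, tree in synthesize():
        print(tree, "->", utterance)
```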
  • Publication number: 20230124765
    Abstract: Aspects of the present disclosure relate to a machine learning-based dialogue authoring environment. In examples, a developer or creator of a virtual environment may use a generative multimodal machine learning (ML) model to create or otherwise update aspects of a dialogue tree for one or more computer-controlled agents and/or players of the virtual environment. For example, the developer may provide an indication of context associated with the dialogue for use by the ML model, such that the ML model may generate a set of candidate interactions accordingly. The developer may select a subset of the candidate interactions for inclusion in the dialogue tree, which may then be used to generate associated nodes within the tree. Thus, nodes in the dialogue tree may be iteratively defined based on model output of the ML model, thereby assisting the developer with dialogue authoring for the virtual environment. (A short sketch of this authoring loop follows this entry.)
    Type: Application
    Filed: October 6, 2022
    Publication date: April 20, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Gabriel A. DESGARENNES, William B. DOLAN, Christopher John BROCKETT, Hamid PALANGI, Ryan VOLUM, Olivia Diane DENG, Eui Chul SHIN, Randolph Lawrence D'AMORE, Sudha RAO, Yun Hui XU, Benjamin David VAN DURME, Kellie Nicole HILL
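The sketch below illustrates the authoring loop described in the abstract above. It is not the patented tool: the `DialogueNode` structure, the candidate-generation stub, and the selection step are illustrative assumptions.

```python
# Illustrative sketch only: extending a dialogue tree with model-generated
# candidates. The DialogueNode structure, the candidate-generation stub, and
# the selection step are assumptions for demonstration, not the patented tool.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogueNode:
    text: str
    children: List["DialogueNode"] = field(default_factory=list)

def generate_candidates(context, k=3):
    """Stub for the generative multimodal ML model: given authoring context,
    return k candidate interactions for the next turn."""
    return [f"{context} (candidate {i + 1})" for i in range(k)]

def author_step(node, context, selected_indices):
    """One authoring iteration: generate candidates from context, keep the
    subset the developer selected, and attach them as child nodes."""
    candidates = generate_candidates(context)
    for i in selected_indices:
        node.children.append(DialogueNode(candidates[i]))
    return node

if __name__ == "__main__":
    root = DialogueNode("Guard: Halt! Who goes there?")
    # The developer provides context and picks candidates 0 and 2 for the tree.
    author_step(root, "Player responds politely to the guard", [0, 2])
    for child in root.children:
        print("-", child.text)
```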
  • Publication number: 20220327288
    Abstract: Systems and methods are provided for automatically generating a program based on a natural language utterance using semantic parsing. The semantic parsing includes translating a natural language utterance into instructions in a logical form for execution. The methods use a pre-trained natural language model and generate a canonical utterance as an intermediate form before generating the logical form. The natural language model may be an auto-regressive natural language model with a transformer to paraphrase a sequence of words or tokens in the natural language utterance. The methods generate a prompt including exemplar input/output pairs as a few-shot learning technique for the natural language model to predict words or tokens. The methods further use constrained decoding to determine a canonical utterance, iteratively checking sequences of words predicted by the model against rules for canonical utterances. The methods generate a program based on the canonical utterance for execution in an application.
    Type: Application
    Filed: April 13, 2021
    Publication date: October 13, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Benjamin David VAN DURME, Adam D. PAULS, Daniel Louis KLEIN, Eui Chul SHIN, Christopher H. LIN, Pengyu CHEN, Subhro ROY, Emmanouil Antonios PLATANIOS, Jason Michael EISNER, Benjamin Lev SNYDER, Samuel McIntire THOMSON
  • Publication number: 20160138171
    Abstract: The present invention proposes a device for manufacturing coating layers with good conductivity and corrosion resistance at high productivity, in which the oxide layer on a stainless steel substrate is etched by plasma etching to activate the surface and prevent a decrease in its conductivity, metal nitrides such as CrN or TiN are coated at nanoscale thickness on the etched surface, and a carbon layer is coated at nanoscale thickness on top of it. According to the present invention, it is possible to mass-produce fuel cell bipolar plates, electrode materials, and stainless steel with reinforced conductivity and corrosion resistance.
    Type: Application
    Filed: October 16, 2015
    Publication date: May 19, 2016
    Inventors: Youngha Jun, Jaimoo Yoo, Kiho Yeo, Eui Chul Shin