Patents by Inventor Hyong-Gyun Kim

Hyong-Gyun Kim is a named inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220310079
    Abstract: A conversational assistant for a conversational engagement platform can contain various modules, including a user-model augmentation module, a dialogue management module, and a user-state analysis input/output module. The dialogue management module receives metrics tied to a user from the other modules, including the current topic and the user's emotions regarding that topic from the user-state analysis input/output module, and then adapts its dialogue to the user based on dialogue rules that factor in these different metrics. The dialogue rules also factor in both i) a duration of the conversational engagement with the user and ii) an attempt to maintain a positive experience for the user during the conversational engagement. A flexible ontology relationship representation of the user is built that stores learned metrics about the user over time with each conversational engagement and, in combination with the dialogue rules, drives the conversations with the user.
    Type: Application
    Filed: June 15, 2020
    Publication date: September 29, 2022
    Inventors: Edgar T. Kalns, Dimitra Vergyi, Girish Acharya, Andreas Kathol, Leonor Almada, Hyong-Gyun Kim, Nikoletta Baslou, Michael Wessel, Aaron Spaulding, Roland Heusser, James F. Carpenter, Min Yin
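
For illustration only, a minimal Python sketch of the kind of dialogue rules described in publication 20220310079 above: the next dialogue move weighs the user's emotion toward the current topic, the duration of the engagement, and metrics learned about the user over prior engagements. All names, thresholds, and rules here are hypothetical assumptions, not details taken from the filing.

    from dataclasses import dataclass, field

    @dataclass
    class UserModel:
        """Flexible store of metrics learned about the user over time (hypothetical)."""
        topic_affinity: dict = field(default_factory=dict)  # topic -> running sentiment score

        def update(self, topic: str, sentiment: float) -> None:
            prev = self.topic_affinity.get(topic, 0.0)
            # Exponential moving average so each engagement refines the stored metric.
            self.topic_affinity[topic] = 0.7 * prev + 0.3 * sentiment

    def next_move(user: UserModel, topic: str, sentiment: float, minutes: float) -> str:
        """Apply simple dialogue rules that weigh emotion, learned history, and session length."""
        user.update(topic, sentiment)
        if sentiment < -0.3:
            return "change_topic"           # the user dislikes the current topic
        if minutes > 20:
            return "wrap_up_positively"     # long session: close on a positive note
        if user.topic_affinity.get(topic, 0.0) > 0.5:
            return "deepen_topic"           # the user historically enjoys this topic
        return "ask_follow_up_question"

    user = UserModel()
    print(next_move(user, "gardening", sentiment=0.8, minutes=5.0))   # ask_follow_up_question
    print(next_move(user, "gardening", sentiment=-0.6, minutes=5.0))  # change_topic
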
  • Publication number: 20210081056
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
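
To make the pipeline in this abstract concrete (the same abstract also appears in the two related entries below, patent 10884503 and publication 20170160813), here is a minimal, hypothetical Python sketch: two different types of sensory input are reduced to semantic information, the current intent is resolved against a context-specific framework, and the current input state is estimated from a toy behavioral model. The framework, keywords, and rules are illustrative assumptions, not taken from the filings.

    from typing import Dict, List

    # Hypothetical context-specific framework: intents available in a banking context.
    BANKING_FRAMEWORK = {"balance": "check_balance", "transfer": "transfer_funds"}

    def extract_semantics(speech_text: str, facial_expression: str) -> Dict[str, str]:
        """Derive semantic information from two different types of sensory input."""
        keyword = next((w for w in speech_text.lower().split() if w in BANKING_FRAMEWORK), "unknown")
        return {"keyword": keyword, "expression": facial_expression}

    def determine_intent(semantics: Dict[str, str]) -> str:
        """Resolve the current intent against the context-specific framework."""
        return BANKING_FRAMEWORK.get(semantics["keyword"], "clarify_request")

    def determine_input_state(semantics: Dict[str, str], history: List[str]) -> str:
        """Toy behavioral model: interpret the input in light of earlier interpretations."""
        if semantics["expression"] == "frown" and "frown" in history:
            return "frustrated"  # repeated negative expression suggests growing frustration
        return "neutral"

    semantics = extract_semantics("what is my balance", "frown")
    print(determine_intent(semantics))                          # check_balance
    print(determine_input_state(semantics, history=["frown"]))  # frustrated
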
  • Patent number: 10884503
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: January 5, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 7302392
    Abstract: A computer system in the form of a voice command platform includes a voice browser and voice-based applications. The voice browser has global-level grammar elements and the voice applications have application-level grammar and grammar elements. A programming feature is provided by which developers of the voice applications can programmably weigh or weight global-level grammar elements relative to the application-level grammar or grammar elements. As a consequence of the weighting, a speech recognition engine for the voice browser is more likely to accurately recognize voice input from a user. The weighting can be applied on the application as a whole, or at any given state in the application. Also, the weighting can be made to the global level grammar elements as a group, or on an individual basis.
    Type: Grant
    Filed: October 7, 2003
    Date of Patent: November 27, 2007
    Assignee: Sprint Spectrum L.P.
    Inventors: Balaji S. Thenthiruperai, Elizabeth R. Roche, Hyong-Gyun Kim
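
As a rough illustration of the grammar weighting described in patent 7302392 above, here is a minimal Python sketch in which a developer-set weight biases whether a recognizer hypothesis is resolved against the global-level grammar or the application-level grammar. The grammars, weight values, and scoring rule are hypothetical stand-ins; the patent covers the programmable weighting mechanism, not any particular implementation.

    from typing import Optional

    GLOBAL_GRAMMAR = {"help", "main menu", "goodbye"}             # voice-browser-wide commands
    APP_GRAMMAR = {"check balance", "transfer funds", "goodbye"}  # application-specific commands

    def resolve(phrase: str, confidence: float, global_weight: float = 0.5) -> Optional[str]:
        """Score a recognizer hypothesis against each grammar; the higher score wins.

        A global_weight below 1.0 de-emphasizes global commands in this application
        state; above 1.0 emphasizes them. The weight could differ per state or per element.
        """
        scores = {}
        if phrase in GLOBAL_GRAMMAR:
            scores["global"] = confidence * global_weight
        if phrase in APP_GRAMMAR:
            scores["application"] = confidence  # application grammar keeps full weight
        return max(scores, key=scores.get) if scores else None

    # "goodbye" appears in both grammars; the developer-set weight resolves the ambiguity.
    print(resolve("goodbye", confidence=0.9, global_weight=0.5))  # application
    print(resolve("goodbye", confidence=0.9, global_weight=2.0))  # global
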