Patents by Inventor Hyuckchul Jung

Hyuckchul Jung has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11724403
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: August 15, 2023
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
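
The abstract above describes a staged pipeline: tag each word, parse the tag sequence into a semantic structure, then re-rank competing hypotheses with spatial validation before emitting an RCL command. Below is a minimal sketch of that control flow, assuming an invented tag lexicon, a toy scene, and approximated RCL syntax; the patented system uses statistical models at each stage rather than a fixed dictionary.

```python
from itertools import product

# Hypothetical tag lexicon: word -> scored tag hypotheses. The real system
# learns these distributions statistically.
LEXICON = {
    "move":  [("ACT_move", 0.9)],
    "the":   [("DET", 1.0)],
    "red":   [("COL_red", 0.7), ("OBJ_block", 0.3)],
    "block": [("OBJ_block", 0.9)],
    "left":  [("DIR_left", 0.9)],
}

# Toy scene used for spatial validation: (object type, color) pairs present.
SCENE = {("block", "red"), ("block", "blue")}

def tag_hypotheses(words):
    """Stage 1: enumerate per-word tag combinations, scored by product."""
    options = [LEXICON.get(w, [("UNK", 0.1)]) for w in words]
    for combo in product(*options):
        score = 1.0
        for _, p in combo:
            score *= p
        yield [t for t, _ in combo], score

def parse(tags):
    """Stage 2: fold the tag sequence into a shallow semantic frame
    (a stand-in for the semantic tree)."""
    frame = {"action": None, "object": None, "color": None, "direction": None}
    slot = {"ACT": "action", "OBJ": "object", "COL": "color", "DIR": "direction"}
    for t in tags:
        kind, _, value = t.partition("_")
        if kind in slot:
            frame[slot[kind]] = value
    return frame

def spatially_valid(frame):
    """Stage 3: reject hypotheses referring to objects absent from the scene."""
    return (frame["object"], frame["color"]) in SCENE

def to_rcl(frame):
    """Emit an RCL-style command (syntax approximated here)."""
    return (f"(event: (action: {frame['action']}) "
            f"(entity: (color: {frame['color']}) (type: {frame['object']})) "
            f"(destination: {frame['direction']}))")

words = "move the red block left".split()
# Re-rank: take the highest-scoring hypothesis that survives validation.
for tags, _ in sorted(tag_hypotheses(words), key=lambda h: -h[1]):
    frame = parse(tags)
    if spatially_valid(frame):
        print(to_rcl(frame))
        break
```
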
  • Patent number: 10739976
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: August 11, 2020
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
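
The abstract above describes an interaction pattern rather than an algorithm: one input mode carries the query, a second (such as a map gesture) scopes it geographically, and each selected result refocuses the next search. A minimal sketch of that flow follows, with an invented point-of-interest index and place names standing in for a real search backend.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Place:
    name: str
    category: str
    area: str  # the neighborhood the place sits in

# Hypothetical point-of-interest index standing in for a real search backend.
INDEX = [
    Place("Caffe Reggio", "coffee", "Greenwich Village"),
    Place("Blue Note", "jazz club", "Greenwich Village"),
    Place("Smalls", "jazz club", "West Village"),
]

class MultimodalPlanner:
    def __init__(self):
        self.plan = []           # selected results accumulate here
        self._focus_area = None  # geographic scope carried across turns

    def search(self, query, gesture_area=None):
        """The query text is the first input mode; the optional gesture_area
        (e.g. a region circled on a map) is the second. Later requests fall
        back to the area of the last selected result."""
        area = gesture_area or self._focus_area
        return [p for p in INDEX
                if query in p.category and (area is None or p.area == area)]

    def select(self, place):
        """Add a result to the plan and refocus subsequent searches there."""
        self.plan.append(place)
        self._focus_area = place.area

planner = MultimodalPlanner()
cafes = planner.search("coffee", gesture_area="Greenwich Village")
planner.select(cafes[0])
# The second request inherits the selected cafe's area, so it returns the
# Blue Note rather than Smalls in the West Village.
print([p.name for p in planner.search("jazz club")])
```
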
  • Publication number: 20200171670
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Application
    Filed: February 6, 2020
    Publication date: June 4, 2020
    Inventors: Svetlana STOYANCHEV, Srinivas BANGALORE, John CHEN, Hyuckchul JUNG
  • Patent number: 10556348
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: February 11, 2020
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
  • Publication number: 20180157403
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Application
    Filed: January 16, 2018
    Publication date: June 7, 2018
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Michael J. JOHNSTON, Patrick EHLEN, Hyuckchul JUNG, Jay H. LIESKE, JR., Ethan SELFRIDGE, Brant J. VASILIEFF, Jay Gordon WILPON
  • Patent number: 9904450
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request that comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: February 27, 2018
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
  • Publication number: 20180001482
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Application
    Filed: September 15, 2017
    Publication date: January 4, 2018
    Inventors: Svetlana STOYANCHEV, Srinivas BANGALORE, John CHEN, Hyuckchul JUNG
  • Publication number: 20170293600
    Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area in which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
    Type: Application
    Filed: June 26, 2017
    Publication date: October 12, 2017
    Inventors: Amanda Joy STENT, Hyuckchul JUNG, I. Dan MELAMED, Nobal Bikram NIRAULA
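
The pipeline in the abstract above (extract the page's task structure, build an abstract representation, derive a dialog script, insert transcribed speech) can be sketched with the standard-library HTML parser. The form markup, prompts, and pre-transcribed answers below are invented for illustration; real speech recognition and page loading are out of scope.

```python
from html.parser import HTMLParser

# Hypothetical web page containing areas in which a user can specify info.
PAGE = """
<form action="/search-flights">
  <input name="origin" placeholder="From city">
  <input name="destination" placeholder="To city">
  <input name="date" placeholder="Departure date">
</form>
"""

class TaskStructureExtractor(HTMLParser):
    """Collects the fillable areas of the page into an abstract form model."""
    def __init__(self):
        super().__init__()
        self.fields = []  # abstract representation: [(name, hint), ...]

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.fields.append((a["name"], a.get("placeholder", a["name"])))

def dialog_script(fields):
    """One prompt per fillable area, derived from its hint text."""
    return [(name, f"What is your {hint.lower()}?") for name, hint in fields]

extractor = TaskStructureExtractor()
extractor.feed(PAGE)

# Stand-in for ASR output: spoken answers already converted to text.
transcripts = iter(["Newark", "Austin", "March 3"])

filled = {}
for name, prompt in dialog_script(extractor.fields):
    print(prompt)
    filled[name] = next(transcripts)  # converted text inserted into the area
print(filled)
```
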
  • Patent number: 9764477
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: September 19, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
  • Patent number: 9690854
    Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area in which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: June 27, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Amanda Joy Stent, Hyuckchul Jung, I. Dan Melamed, Nobal Bikram Niraula
  • Publication number: 20160179908
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request that comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Application
    Filed: December 19, 2014
    Publication date: June 23, 2016
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Michael J. JOHNSTON, Patrick EHLEN, Hyuckchul JUNG, Jay H. LIESKE, JR., Ethan SELFRIDGE, Brant J. VASILIEFF, Jay Gordon WILPON
  • Publication number: 20160151918
    Abstract: A system, method, and computer-readable storage devices are disclosed for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
    Type: Application
    Filed: December 1, 2014
    Publication date: June 2, 2016
    Inventors: Svetlana STOYANCHEV, Srinivas BANGALORE, John CHEN, Hyuckchul JUNG
  • Publication number: 20150149168
    Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area in which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
    Type: Application
    Filed: November 27, 2013
    Publication date: May 28, 2015
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Amanda Joy STENT, Hyuckchul JUNG, I. Dan MELAMED, Nobal Bikram NIRAULA
  • Patent number: 7983997
    Abstract: A system is provided that allows a user to teach a computational device how to perform complex, repetitive tasks that the user would usually perform using the device's graphical user interface (GUI), often, but not limited to, a web browser. The system includes software running on the user's computational device. The user “teaches” task steps by inputting natural language and demonstrating actions with the GUI. The system uses a semantic ontology and natural language processing to create an explicit representation of the task, which is stored on the computer. After a complete task has been taught, the system is able to automatically execute the task in new situations. Because the task is represented in terms of the ontology and the user's intentions, the system is able to adapt to changes in the computer code while still pursuing the objectives taught by the user.
    Type: Grant
    Filed: November 2, 2007
    Date of Patent: July 19, 2011
    Assignee: Florida Institute for Human and Machine Cognition, Inc.
    Inventors: James F. Allen, Nathanael Chambers, Lucian Galescu, Hyuckchul Jung, William Taysom
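
A toy rendering of the teach-by-demonstration loop in the abstract above: each demonstrated step pairs a natural-language utterance with a concrete GUI action, literals the user both spoke and typed become task parameters, and the stored task replays with new bindings. The action names and the string-match generalization rule are invented; the patented system grounds steps in a semantic ontology rather than substring matching.

```python
def teach(demonstration):
    """Generalize (utterance, action, argument) steps into a parameterized
    task: any argument the user also spoke aloud becomes a variable slot."""
    task = []
    for utterance, action, arg in demonstration:
        if arg in utterance:                 # naive intent/ontology mapping
            task.append((action, "$query"))  # parameter slot
        else:
            task.append((action, arg))       # fixed step
    return task

def execute(task, bindings, gui):
    """Replay the learned task in a new situation with new bindings."""
    for action, arg in task:
        gui(action, bindings.get(arg, arg))

# One demonstrated run: searching a travel site for flights to Boston.
demo = [
    ("open the travel site", "navigate", "https://example.com/flights"),
    ("type Boston in the destination box", "type_into_destination", "Boston"),
    ("click search", "click", "search_button"),
]

task = teach(demo)

# Executing the learned task for a different city.
execute(task, {"$query": "Denver"},
        gui=lambda action, arg: print(f"{action}({arg})"))
```
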
  • Publication number: 20090119587
    Abstract: A system is provided that allows a user to teach a computational device how to perform complex, repetitive tasks that the user would usually perform using the device's graphical user interface (GUI), often, but not limited to, a web browser. The system includes software running on the user's computational device. The user “teaches” task steps by inputting natural language and demonstrating actions with the GUI. The system uses a semantic ontology and natural language processing to create an explicit representation of the task, which is stored on the computer. After a complete task has been taught, the system is able to automatically execute the task in new situations. Because the task is represented in terms of the ontology and the user's intentions, the system is able to adapt to changes in the computer code while still pursuing the objectives taught by the user.
    Type: Application
    Filed: November 2, 2007
    Publication date: May 7, 2009
    Inventors: James F. Allen, Nathanael Chambers, Lucian Galescu, Hyuckchul Jung, William Taysom