Patents by Inventor Hyuckchul Jung
Hyuckchul Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11724403
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Grant
Filed: February 6, 2020
Date of Patent: August 15, 2023
Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
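The staged pipeline this abstract describes (tag each word, parse the tag sequence into a semantic tree, re-rank hypotheses with spatial validation, emit an RCL command) can be sketched roughly as below. The toy lexicon, tag set, scoring function, and RCL string format are all illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of a Tag & Parse pipeline: tag words, parse the tag
# sequence into a shallow semantic tree, re-rank hypotheses with a
# stand-in spatial validator, and emit an RCL-style command.

# Toy word-to-tag lexicon (assumption; not the patent's tag inventory).
LEXICON = {
    "move": "action:move", "pick": "action:take", "up": "particle",
    "the": "det", "red": "color:red", "blue": "color:blue",
    "cube": "type:cube", "block": "type:cube",
}

def tag(sentence):
    """Stage 1: assign a semantic tag to each word."""
    return [(w, LEXICON.get(w, "unknown")) for w in sentence.lower().split()]

def parse(tagged):
    """Stage 2: group the tag sequence into a flat semantic tree."""
    tree = {"action": None, "object": {}}
    for _word, t in tagged:
        key, _, value = t.partition(":")
        if key == "action":
            tree["action"] = value
        elif key in ("color", "type"):
            tree["object"][key] = value
    return tree

def spatial_validation(tree, scene):
    """Stage 3: score a hypothesis by how well its referent matches the scene."""
    matches = [o for o in scene
               if all(o.get(k) == v for k, v in tree["object"].items())]
    return len(matches)

def to_rcl(tree):
    """Emit a simple RCL-style command string (format is illustrative)."""
    obj = " ".join(f"({k}: {v})" for k, v in sorted(tree["object"].items()))
    return f"(event: (action: {tree['action']}) (entity: {obj}))"

scene = [{"color": "red", "type": "cube"}, {"color": "blue", "type": "cube"}]
hypotheses = [parse(tag("pick up the red cube"))]
best = max(hypotheses, key=lambda h: spatial_validation(h, scene))
print(to_rcl(best))  # → (event: (action: take) (entity: (color: red) (type: cube)))
```

In the patented system each stage is statistical and produces multiple hypotheses; here a single hypothesis stands in to keep the control flow visible.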
-
Patent number: 10739976
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Grant
Filed: January 16, 2018
Date of Patent: August 11, 2020
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
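The multimodal flow in this abstract (a spoken query combined with a second input mode identifying a geographic area, then a follow-up search anchored at the selected result's location) can be sketched as follows. The place data, bounding-box area representation, and function names are illustrative assumptions.

```python
# Sketch of the multimodal plan-building flow: first search = query input
# plus a gesture-drawn geographic area; the selected result joins the plan
# and anchors the second search at its location.

PLACES = [
    {"name": "Cafe A", "kind": "coffee", "loc": (40.75, -73.99)},
    {"name": "Cafe B", "kind": "coffee", "loc": (40.60, -74.10)},
    {"name": "Museum C", "kind": "museum", "loc": (40.76, -73.98)},
]

def in_area(loc, area):
    """Area as a bounding box (min_lat, min_lon, max_lat, max_lon)."""
    lat, lon = loc
    return area[0] <= lat <= area[2] and area[1] <= lon <= area[3]

def search(query_kind, area):
    """First search: spoken query (kind) + gesture input (area)."""
    return [p for p in PLACES
            if p["kind"] == query_kind and in_area(p["loc"], area)]

def search_near(query_kind, anchor, radius=0.05):
    """Second search: anchored at a previously selected result's location."""
    lat, lon = anchor
    box = (lat - radius, lon - radius, lat + radius, lon + radius)
    return search(query_kind, box)

plan = []
area = (40.70, -74.05, 40.80, -73.90)           # gesture: circled map region
first = search("coffee", area)                   # voice: "find coffee shops here"
plan.append(first[0])                            # user selects "Cafe A"
second = search_near("museum", plan[0]["loc"])   # "museums near that"
print([p["name"] for p in second])               # → ['Museum C']
```

The key property the claims describe is that the second search inherits its geographic anchor from the selected first result rather than from a new area input.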
-
Publication number: 20200171670
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Application
Filed: February 6, 2020
Publication date: June 4, 2020
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
-
Patent number: 10556348
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Grant
Filed: September 15, 2017
Date of Patent: February 11, 2020
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
-
Publication number: 20180157403
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Application
Filed: January 16, 2018
Publication date: June 7, 2018
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
-
Patent number: 9904450
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request which comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Grant
Filed: December 19, 2014
Date of Patent: February 27, 2018
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
-
Publication number: 20180001482
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Application
Filed: September 15, 2017
Publication date: January 4, 2018
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
-
Publication number: 20170293600
Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area with which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
Type: Application
Filed: June 26, 2017
Publication date: October 12, 2017
Inventors: Amanda Joy Stent, Hyuckchul Jung, I. Dan Melamed, Nobal Bikram Niraula
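The flow this abstract describes (extract a task structure from a loaded page, abstract it, and drive a dialog that fills the page's input areas) can be sketched with the standard-library HTML parser. The sample form, field names, and prompt wording are illustrative assumptions; a real system would also handle selects, buttons, and speech recognition.

```python
# Sketch of voice-enabled dialog with a web page: extract the page's
# input fields as an abstract task representation, then derive a dialog
# script with one spoken prompt per field.
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects <input> names as an abstract representation of the page's task."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if attrs.get("type", "text") == "text" and "name" in attrs:
                self.fields.append(attrs["name"])

def dialog_script(fields):
    """One prompt per field; ASR text would later be filled back into the area."""
    return [f"Please say your {name.replace('_', ' ')}." for name in fields]

PAGE = """
<form action="/book">
  <input type="text" name="departure_city">
  <input type="text" name="arrival_city">
  <input type="submit" value="Search">
</form>
"""

extractor = FormFieldExtractor()
extractor.feed(PAGE)
print(dialog_script(extractor.fields))
```

This prints one prompt for each text field; the submit control is deliberately excluded from the dialog script.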
-
Patent number: 9764477
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Grant
Filed: December 1, 2014
Date of Patent: September 19, 2017
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
-
Patent number: 9690854
Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area with which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
Type: Grant
Filed: November 27, 2013
Date of Patent: June 27, 2017
Assignee: Nuance Communications, Inc.
Inventors: Amanda Joy Stent, Hyuckchul Jung, I. Dan Melamed, Nobal Bikram Niraula
-
Publication number: 20160179908
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request which comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Application
Filed: December 19, 2014
Publication date: June 23, 2016
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
-
Publication number: 20160151918
Abstract: A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
Type: Application
Filed: December 1, 2014
Publication date: June 2, 2016
Inventors: Svetlana Stoyanchev, Srinivas Bangalore, John Chen, Hyuckchul Jung
-
Publication number: 20150149168
Abstract: Voice-enabled dialog with web pages is provided. An Internet address of a web page is received, the web page including an area with which a user of a client device can specify information. The web page is loaded using the received Internet address. A task structure of the web page is then extracted, and an abstract representation of the web page is generated. A dialog script based on the abstract representation of the web page is then provided. Spoken information received from the user is converted into text, and the converted text is inserted into the area.
Type: Application
Filed: November 27, 2013
Publication date: May 28, 2015
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Amanda Joy Stent, Hyuckchul Jung, I. Dan Melamed, Nobal Bikram Niraula
-
Patent number: 7983997
Abstract: A system which allows a user to teach a computational device how to perform complex, repetitive tasks that the user would usually perform using the device's graphical user interface (GUI), often, but not limited to, a web browser. The system includes software running on the user's computational device. The user "teaches" task steps by inputting natural language and demonstrating actions with the GUI. The system uses a semantic ontology and natural language processing to create an explicit representation of the task that is stored on the computer. After a complete task has been taught, the system is able to automatically execute the task in new situations. Because the task is represented in terms of the ontology and the user's intentions, the system is able to adapt to changes in the computer code while still pursuing the objectives taught by the user.
Type: Grant
Filed: November 2, 2007
Date of Patent: July 19, 2011
Assignee: Florida Institute for Human and Machine Cognition, Inc.
Inventors: James F. Allen, Nathanael Chambers, Lucian Galescu, Hyuckchul Jung, William Taysom
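The teach-by-demonstration idea in this abstract (store the task as intentions grounded in an ontology rather than as raw clicks, so it survives interface changes) can be sketched as follows. The semantic roles, action names, and widget lookup are illustrative assumptions standing in for the patent's ontology-based representation.

```python
# Sketch of teach-by-demonstration: pair each natural-language step with
# a demonstrated GUI action, store the task by semantic role (intention),
# then replay it against a changed interface.

class TaskLearner:
    def __init__(self):
        self.steps = []  # explicit task representation

    def teach(self, utterance, action, target_role):
        """Record an intention: what the user said, did, and meant to act on."""
        self.steps.append({"say": utterance, "act": action, "role": target_role})

    def execute(self, ui):
        """Replay by role lookup, so renamed or moved widgets still resolve."""
        log = []
        for step in self.steps:
            widget = ui[step["role"]]  # resolve by semantic role, not position
            log.append(f"{step['act']} {widget}")
        return log

learner = TaskLearner()
learner.teach("go to the search box", "focus", "search_field")
learner.teach("type the product name", "type", "search_field")
learner.teach("press the search button", "click", "submit_button")

# Same task on a new UI: widget identifiers changed, but roles are intact.
new_ui = {"search_field": "#q-input", "submit_button": "#go-btn"}
print(learner.execute(new_ui))  # → ['focus #q-input', 'type #q-input', 'click #go-btn']
```

Because execution resolves each step through a role rather than a stored screen coordinate or element id, the taught task keeps working after the interface's code changes, which is the adaptation property the abstract emphasizes.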
-
Publication number: 20090119587
Abstract: A system which allows a user to teach a computational device how to perform complex, repetitive tasks that the user would usually perform using the device's graphical user interface (GUI), often, but not limited to, a web browser. The system includes software running on the user's computational device. The user "teaches" task steps by inputting natural language and demonstrating actions with the GUI. The system uses a semantic ontology and natural language processing to create an explicit representation of the task that is stored on the computer. After a complete task has been taught, the system is able to automatically execute the task in new situations. Because the task is represented in terms of the ontology and the user's intentions, the system is able to adapt to changes in the computer code while still pursuing the objectives taught by the user.
Type: Application
Filed: November 2, 2007
Publication date: May 7, 2009
Inventors: James F. Allen, Nathanael Chambers, Lucian Galescu, Hyuckchul Jung, William Taysom