Patents by Inventor Daniel J. Hwang

Daniel J. Hwang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10775997
    Abstract: Techniques are described herein that are capable of causing a control interface to be presented on a touch-enabled device based on a motion or absence thereof. A motion, such as a hover gesture, can be detected and the control interface presented in response to the detection. Alternatively, absence of a motion can be detected and the control interface presented in response to the detection. A hover gesture can occur without a user physically touching a touch screen of a touch-enabled device. Instead, the user's finger or fingers can be positioned at a spaced distance above the touch screen. The touch screen can detect that the user's fingers are proximate to the touch screen, such as through capacitive sensing. Additionally, finger movement can be detected while the fingers are hovering to expand the existing options for gesture input.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: September 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan, Joseph B. Tobens, Jose A. Rodriguez, Peter G. Davis
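The abstract above describes classifying a capacitive signal as a hover (finger near, but not touching, the screen) and presenting a control interface in response. A minimal Python sketch of that idea follows; the thresholds and function names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical thresholds on a normalized capacitance reading: a strong
# signal means physical contact, a weaker one means a finger hovering at a
# spaced distance above the touch screen.
TOUCH_THRESHOLD = 0.9
HOVER_THRESHOLD = 0.3

def classify_input(capacitance: float) -> str:
    """Classify a normalized capacitive reading as touch, hover, or none."""
    if capacitance >= TOUCH_THRESHOLD:
        return "touch"
    if capacitance >= HOVER_THRESHOLD:
        return "hover"
    return "none"

def should_present_control_interface(readings: list) -> bool:
    """Present the control interface when any reading indicates a hover."""
    return any(classify_input(r) == "hover" for r in readings)
```

In a real driver the readings would be per-electrode sensor values sampled over time, which also allows finger movement to be tracked while hovering.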
  • Patent number: 10192549
    Abstract: An electronic device can receive user input via voice or text that includes tasks to be performed. A digital personal assistant infrastructure service can control to which registered action provider the task is assigned. Per-task action provider preferences can be stored. If a preferred action provider is not able to complete the task, the task can still be performed by a registered action provider that has appropriate capabilities. Machine learning can determine a user's preferences. Resource conservation and effective user interaction can result.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: January 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Robert L. Chambers, David Pinch, Zachary Thomas John Siddall
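The routing idea in this abstract, a stored per-task preference with a capability-based fallback, can be sketched as follows. All names and data shapes are illustrative assumptions, not the patent's actual infrastructure.

```python
def assign_task(task, preferences, providers):
    """Pick the registered action provider that should handle `task`.

    providers: dict mapping provider name -> set of tasks it can perform.
    preferences: dict mapping task -> the user's preferred provider name.
    """
    preferred = preferences.get(task)
    if preferred in providers and task in providers[preferred]:
        return preferred
    # Fallback: the task can still be performed by any registered
    # provider that has the appropriate capability.
    for name, capabilities in providers.items():
        if task in capabilities:
            return name
    return None
```

The `preferences` table is where machine-learned per-task preferences would be stored in such a design.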
  • Patent number: 9959129
    Abstract: Techniques are described for headlessly completing a task of an application in the background of a digital personal assistant. For example, a method can include receiving a voice input via a microphone. Natural language processing can be performed using the voice input to determine a user voice command. The user voice command can include a request to perform a task of the application. The application can be caused to execute the task as a background process without a user interface of the application appearing. A user interface of the digital personal assistant can provide a response to the user, based on a received state associated with the task, so that the response comes from within a context of the user interface of the digital personal assistant without surfacing the user interface of the application.
    Type: Grant
    Filed: January 9, 2015
    Date of Patent: May 1, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Robert L. Chambers, Thomas Soemo, Adina Magdalena Trufinescu, Khuram Shahid, Ali Emami
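The headless-execution flow described above can be sketched in a few lines; the function names are hypothetical, chosen only to mirror the abstract's steps.

```python
def run_task_headlessly(app_task, assistant_render):
    """Execute an application task as a background process, without the
    application's user interface appearing. The digital personal assistant
    renders the response itself, from the state the task reports back, so
    the reply stays within the assistant's own UI context."""
    state = app_task()              # background execution, no app UI shown
    return assistant_render(state)  # response surfaced by the assistant
```

The key design point is that only the task's *state* crosses back to the assistant, never the application's interface.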
  • Publication number: 20180005634
    Abstract: Techniques are described for discovering capabilities of voice-enabled resources. A voice-controlled digital personal assistant can respond to user requests to list available voice-enabled resources that are capable of performing a specific task using voice input. The voice-controlled digital personal assistant can also respond to user requests to list the tasks that a particular voice-enabled resource can perform using voice input. The voice-controlled digital personal assistant can also support a practice mode in which users practice voice commands for performing tasks supported by voice-enabled resources.
    Type: Application
    Filed: September 14, 2017
    Publication date: January 4, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Jonathan Campbell, Daniel J. Hwang
  • Patent number: 9837081
    Abstract: Techniques are described for discovering capabilities of voice-enabled resources. A voice-controlled digital personal assistant can respond to user requests to list available voice-enabled resources that are capable of performing a specific task using voice input. The voice-controlled digital personal assistant can also respond to user requests to list the tasks that a particular voice-enabled resource can perform using voice input. The voice-controlled digital personal assistant can also support a practice mode in which users practice voice commands for performing tasks supported by voice-enabled resources.
    Type: Grant
    Filed: December 30, 2014
    Date of Patent: December 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Jonathan Campbell, Daniel J. Hwang
  • Patent number: 9812126
    Abstract: An electronic device in a topology of interconnected electronic devices can listen for a wake phrase and voice commands. The device can control when and how it responds so that a single device responds to voice commands. Per-task device preferences can be stored for a user. If a preferred device is not available, the task can still be performed on a device that has appropriate capabilities. Machine learning can determine a user's preferences. Power conservation and effective user interaction can result.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: November 7, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yaser Khan, Aleksandar Uzelac, Daniel J. Hwang, Sergio Paolantonio, Jenny Kam, Vishwac Sena Kannan, Dennis James Mooney, II, Alice Jane Bernheim Brush
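The arbitration idea in this abstract, several devices hear the wake phrase but exactly one responds, might be sketched like this. The tuple layout and tie-breaking rule (strongest signal) are illustrative assumptions.

```python
def elect_responder(task, devices, preferred=None):
    """Choose the single device that should respond to a voice command.

    devices: list of (name, capabilities, signal_strength) tuples, one per
    device in the topology that heard the wake phrase.
    """
    capable = [d for d in devices if task in d[1]]
    if not capable:
        return None
    # Honor the stored per-task device preference when that device is
    # available and has the appropriate capabilities.
    for name, _caps, _signal in capable:
        if name == preferred:
            return name
    # Otherwise fall back to the capable device with the strongest signal.
    return max(capable, key=lambda d: d[2])[0]
```

Having every non-elected device stay silent is what yields the power conservation the abstract mentions.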
  • Publication number: 20170228150
    Abstract: Techniques are described herein that are capable of causing a control interface to be presented on a touch-enabled device based on a motion or absence thereof. A motion, such as a hover gesture, can be detected and the control interface presented in response to the detection. Alternatively, absence of a motion can be detected and the control interface presented in response to the detection. A hover gesture can occur without a user physically touching a touch screen of a touch-enabled device. Instead, the user's finger or fingers can be positioned at a spaced distance above the touch screen. The touch screen can detect that the user's fingers are proximate to the touch screen, such as through capacitive sensing. Additionally, finger movement can be detected while the fingers are hovering to expand the existing options for gesture input.
    Type: Application
    Filed: April 25, 2017
    Publication date: August 10, 2017
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan, Joseph B. Tobens, Jose A. Rodriguez, Peter G. Davis
  • Patent number: 9690542
    Abstract: A method for providing digital personal assistant responses may include receiving, by a digital personal assistant associated with a plurality of reactive agents, a user input initiating a dialog with the digital personal assistant within a computing device. In response to receiving the input, an operation mode of the computing device may be detected from a plurality of available operation modes. One of the plurality of reactive agents can be selected based on the received input. A plurality of response strings associated with the selected reactive agent can be accessed. At least one of the plurality of response strings is selected based at least on the operation mode and at least one hardware characteristic of the computing device. The selected at least one of the plurality of response strings is provided during the dialog as a response to the user input.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: June 27, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mouni Reddy, Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Molly Rose Suver, Lisa Joy Stifelman
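The selection step in the abstract above, picking a response string by operation mode plus a hardware characteristic, can be illustrated with a lookup table. The mode names, hardware keys, and strings are hypothetical examples.

```python
# Illustrative response-string table keyed by (operation mode, hardware
# characteristic): e.g., a hands-free mode on a display-less device should
# receive a fully spoken response.
RESPONSES = {
    ("hands_free", "no_display"): "Spoken: Your meeting is at 3 PM.",
    ("normal", "display"): "Shown on screen: Meeting at 3 PM.",
}

def select_response(mode, hardware):
    """Select the response string for the detected mode and hardware,
    falling back to the on-screen variant when no match exists."""
    return RESPONSES.get((mode, hardware), RESPONSES[("normal", "display")])
```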
  • Patent number: 9645651
    Abstract: Techniques are described herein that are capable of causing a control interface to be presented on a touch-enabled device based on a motion or absence thereof. A motion, such as a hover gesture, can be detected and the control interface presented in response to the detection. Alternatively, absence of a motion can be detected and the control interface presented in response to the detection. A hover gesture can occur without a user physically touching a touch screen of a touch-enabled device. Instead, the user's finger or fingers can be positioned at a spaced distance above the touch screen. The touch screen can detect that the user's fingers are proximate to the touch screen, such as through capacitive sensing. Additionally, finger movement can be detected while the fingers are hovering to expand the existing options for gesture input.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: May 9, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan, Joseph B. Tobens, Jose A. Rodriguez, Peter G. Davis
  • Patent number: 9508339
    Abstract: A method for updating language understanding classifier models includes receiving, via one or more microphones of a computing device, a digital voice input from a user of the computing device. Natural language processing using the digital voice input is used to determine a user voice request. Upon determining that the user voice request does not match at least one of a plurality of pre-defined voice commands in a schema definition of a digital personal assistant, a GUI of an end-user labeling tool is used to receive a user selection of at least one of the following: at least one intent of a plurality of available intents and/or at least one slot for the at least one intent. A labeled data set is generated by pairing the user voice request and the user selection, and is used to update a language understanding classifier.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: November 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang
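The labeling flow above, pairing an unmatched voice request with the user's chosen intent and slots to form training data, can be sketched as below. The dictionary layout is an illustrative assumption, not the patent's data format.

```python
def label_unmatched_request(request, intent, slots):
    """Build one labeled example by pairing the user's voice request with
    the intent and slot values they selected in the labeling tool's GUI."""
    return {"text": request, "intent": intent, "slots": slots}

def update_training_set(training_set, example):
    """Append the new labeled example; a real system would then retrain or
    incrementally update the language understanding classifier."""
    return training_set + [example]
```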
  • Patent number: 9501218
    Abstract: Techniques are described herein that are capable of increasing touch and/or hover accuracy on a touch-enabled device. For example, attribute(s) of a hand or a portion thereof (e.g., one or more fingers) may be used to determine a location on a touch screen to which a user intends to point. Such attribute(s) may be derived, measured, etc. For instance, a value corresponding to a distance between the hand/portion and the touch screen may be derived from a magnitude of a measurement of an interaction between the hand/portion and the touch screen. In another example, virtual elements displayed on the touch screen may be mapped to respective areas in a plane that is parallel (e.g., coincident) with the touch screen. In accordance with this example, receiving a touch and/or hover command with regard to an area in the plane may indicate selection of the corresponding virtual element.
    Type: Grant
    Filed: January 10, 2014
    Date of Patent: November 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan
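The mapping described in this abstract, virtual elements tied to areas in a plane parallel with the touch screen, amounts to a point-in-rectangle lookup. A minimal sketch, with an assumed rectangle representation:

```python
def select_element(point, element_areas):
    """Return the virtual element whose mapped area contains the touch or
    hover point, or None if the point falls outside every area.

    element_areas: dict mapping element name -> (x0, y0, x1, y1) rectangle
    in a plane parallel (possibly coincident) with the touch screen.
    """
    x, y = point
    for name, (x0, y0, x1, y1) in element_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

In practice the hover point itself would first be corrected using hand attributes such as the derived finger-to-screen distance, which is the accuracy improvement the abstract targets.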
  • Publication number: 20160225370
    Abstract: A method for updating language understanding classifier models includes receiving, via one or more microphones of a computing device, a digital voice input from a user of the computing device. Natural language processing using the digital voice input is used to determine a user voice request. Upon determining that the user voice request does not match at least one of a plurality of pre-defined voice commands in a schema definition of a digital personal assistant, a GUI of an end-user labeling tool is used to receive a user selection of at least one of the following: at least one intent of a plurality of available intents and/or at least one slot for the at least one intent. A labeled data set is generated by pairing the user voice request and the user selection, and is used to update a language understanding classifier.
    Type: Application
    Filed: January 30, 2015
    Publication date: August 4, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang
  • Publication number: 20160202957
    Abstract: A method for generating a reactive agent definition may include acquiring, by a reactive agent development environment (RADE) tool of a computing device, an extensible markup language (XML) schema template for defining a reactive agent of a digital personal assistant running on the computing device. The RADE tool may receive input identifying at least one domain-intent pair associated with a category of functions performed by the computing device. A multi-turn dialog flow defining a plurality of states associated with the domain-intent pair may be generated using a graphical user interface of the RADE tool. The XML schema template may be updated based on the received input and the multi-turn dialog flow to produce an updated XML schema specific to the domain-intent pair. The reactive agent definition may be generated using the updated XML schema.
    Type: Application
    Filed: January 13, 2015
    Publication date: July 14, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Zachary Thomas John Siddall, Vishwac Sena Kannan, Aleksandar Uzelac, Eric Christian Brown, Daniel J. Hwang
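The RADE flow above, producing an XML reactive agent definition from a domain-intent pair and a multi-turn dialog flow, can be sketched with the standard library. The element and attribute names are illustrative assumptions; the patent's actual schema is not reproduced here.

```python
import xml.etree.ElementTree as ET

def build_agent_definition(domain, intent, states):
    """Produce an XML reactive agent definition for one domain-intent pair,
    with one <State> element per state in the multi-turn dialog flow."""
    agent = ET.Element("ReactiveAgent")
    pair = ET.SubElement(agent, "DomainIntent", domain=domain, intent=intent)
    dialog = ET.SubElement(pair, "DialogFlow")
    for state in states:
        ET.SubElement(dialog, "State", name=state)
    return ET.tostring(agent, encoding="unicode")
```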
  • Publication number: 20160203002
    Abstract: Techniques are described for headlessly completing a task of an application in the background of a digital personal assistant. For example, a method can include receiving a voice input via a microphone. Natural language processing can be performed using the voice input to determine a user voice command. The user voice command can include a request to perform a task of the application. The application can be caused to execute the task as a background process without a user interface of the application appearing. A user interface of the digital personal assistant can provide a response to the user, based on a received state associated with the task, so that the response comes from within a context of the user interface of the digital personal assistant without surfacing the user interface of the application.
    Type: Application
    Filed: January 9, 2015
    Publication date: July 14, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Robert L. Chambers, Thomas Soemo, Adina Magdalena Trufinescu, Khuram Shahid, Ali Emami
  • Publication number: 20160189717
    Abstract: Techniques are described for discovering capabilities of voice-enabled resources. A voice-controlled digital personal assistant can respond to user requests to list available voice-enabled resources that are capable of performing a specific task using voice input. The voice-controlled digital personal assistant can also respond to user requests to list the tasks that a particular voice-enabled resource can perform using voice input. The voice-controlled digital personal assistant can also support a practice mode in which users practice voice commands for performing tasks supported by voice-enabled resources.
    Type: Application
    Filed: December 30, 2014
    Publication date: June 30, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Jonathan Campbell, Daniel J. Hwang
  • Publication number: 20160179464
    Abstract: A method for providing digital personal assistant responses may include receiving, by a digital personal assistant associated with a plurality of reactive agents, a user input initiating a dialog with the digital personal assistant within a computing device. In response to receiving the input, an operation mode of the computing device may be detected from a plurality of available operation modes. One of the plurality of reactive agents can be selected based on the received input. A plurality of response strings associated with the selected reactive agent can be accessed. At least one of the plurality of response strings is selected based at least on the operation mode and at least one hardware characteristic of the computing device. The selected at least one of the plurality of response strings is provided during the dialog as a response to the user input.
    Type: Application
    Filed: December 22, 2014
    Publication date: June 23, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mouni Reddy, Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Molly Rose Suver, Lisa Joy Stifelman
  • Publication number: 20160155442
    Abstract: An electronic device can receive user input via voice or text that includes tasks to be performed. A digital personal assistant infrastructure service can control to which registered action provider the task is assigned. Per-task action provider preferences can be stored. If a preferred action provider is not able to complete the task, the task can still be performed by a registered action provider that has appropriate capabilities. Machine learning can determine a user's preferences. Resource conservation and effective user interaction can result.
    Type: Application
    Filed: April 1, 2015
    Publication date: June 2, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang, Robert L. Chambers, David Pinch, Zachary Thomas John Siddall
  • Publication number: 20160155443
    Abstract: An electronic device in a topology of interconnected electronic devices can listen for a wake phrase and voice commands. The device can control when and how it responds so that a single device responds to voice commands. Per-task device preferences can be stored for a user. If a preferred device is not available, the task can still be performed on a device that has appropriate capabilities. Machine learning can determine a user's preferences. Power conservation and effective user interaction can result.
    Type: Application
    Filed: April 1, 2015
    Publication date: June 2, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yaser Khan, Aleksandar Uzelac, Daniel J. Hwang, Sergio Paolantonio, Jenny Kam, Vishwac Sena Kannan, Dennis James Mooney, II, Alice Jane Bernheim Brush
  • Publication number: 20150199101
    Abstract: Techniques are described herein that are capable of increasing touch and/or hover accuracy on a touch-enabled device. For example, attribute(s) of a hand or a portion thereof (e.g., one or more fingers) may be used to determine a location on a touch screen to which a user intends to point. Such attribute(s) may be derived, measured, etc. For instance, a value corresponding to a distance between the hand/portion and the touch screen may be derived from a magnitude of a measurement of an interaction between the hand/portion and the touch screen. In another example, virtual elements displayed on the touch screen may be mapped to respective areas in a plane that is parallel (e.g., coincident) with the touch screen. In accordance with this example, receiving a touch and/or hover command with regard to an area in the plane may indicate selection of the corresponding virtual element.
    Type: Application
    Filed: January 10, 2014
    Publication date: July 16, 2015
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan
  • Publication number: 20150089419
    Abstract: Techniques are described herein that are capable of causing a control interface to be presented on a touch-enabled device based on a motion or absence thereof. A motion, such as a hover gesture, can be detected and the control interface presented in response to the detection. Alternatively, absence of a motion can be detected and the control interface presented in response to the detection. A hover gesture can occur without a user physically touching a touch screen of a touch-enabled device. Instead, the user's finger or fingers can be positioned at a spaced distance above the touch screen. The touch screen can detect that the user's fingers are proximate to the touch screen, such as through capacitive sensing. Additionally, finger movement can be detected while the fingers are hovering to expand the existing options for gesture input.
    Type: Application
    Filed: September 24, 2013
    Publication date: March 26, 2015
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Hwang, Juan (Lynn) Dai, Sharath Viswanathan, Joseph B. Tobens, Jose A. Rodriguez, Peter G. Davis