Patents by Inventor Janet E. Galore
Janet E. Galore has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10599393
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Grant
Filed: August 1, 2018
Date of Patent: March 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
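As a rough illustration of the fusion idea described in this abstract (and repeated in the related filings below), the sketch resolves one utterance to different intents depending on the accompanying gesture. All names here (Gesture, Intent, interpret) are hypothetical; the patent does not prescribe an API or a specific set of gestures.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Gesture(Enum):
    """Hypothetical gesture labels for illustration only."""
    POINT_AT_DOCUMENT = auto()  # e.g., user points at a text area
    POINT_AT_TOOLBAR = auto()   # e.g., user points at application controls


class Intent(Enum):
    DICTATE_TEXT = auto()   # insert the speech verbatim into the program
    ISSUE_COMMAND = auto()  # treat the speech as a program command


@dataclass
class MultimodalInput:
    speech: str       # recognized speech, e.g. "select all"
    gesture: Gesture  # the gesture observed alongside the speech


def interpret(event: MultimodalInput) -> tuple[Intent, str]:
    """Combine two related input modalities to resolve the intended meaning.

    The same utterance maps to different intents depending on the
    co-occurring gesture, mirroring the example in the abstract.
    """
    if event.gesture is Gesture.POINT_AT_DOCUMENT:
        return (Intent.DICTATE_TEXT, event.speech)
    return (Intent.ISSUE_COMMAND, event.speech)


# The same words produce verbatim text in one case and a command in the other.
print(interpret(MultimodalInput("select all", Gesture.POINT_AT_DOCUMENT)))
print(interpret(MultimodalInput("select all", Gesture.POINT_AT_TOOLBAR)))
```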
-
Patent number: 10398972
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Grant
Filed: September 16, 2016
Date of Patent: September 3, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
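A minimal sketch of the calibration-driven assignment this abstract describes: gather a few calibration captures, then pick a dictionary from a compilation of that data. The GestureSample features, the dictionary names, and the 0.35 m threshold are all assumptions for illustration; the patent does not specify how dictionaries are parameterized.

```python
import statistics
from dataclasses import dataclass


@dataclass
class GestureSample:
    """One capture from a short calibration test (hypothetical features)."""
    name: str       # which calibration gesture was requested, e.g. "wave"
    extent: float   # how large the user's motion was, in meters
    duration: float # how long the gesture took, in seconds


# Hypothetical dictionaries keyed by gesture style.
DICTIONARIES = {
    "broad": {"wave": {"min_extent": 0.5}},
    "compact": {"wave": {"min_extent": 0.2}},
}


def assign_dictionary(samples: list[GestureSample]) -> tuple[str, dict]:
    """Assign a gesture dictionary from compiled calibration data.

    Users who perform gestures with large motions get the 'broad'
    dictionary; users with small motions get the 'compact' one.
    """
    mean_extent = statistics.mean(s.extent for s in samples)
    key = "broad" if mean_extent >= 0.35 else "compact"
    return key, DICTIONARIES[key]


calibration = [
    GestureSample("wave", extent=0.25, duration=0.8),
    GestureSample("wave", extent=0.30, duration=0.7),
]
print(assign_dictionary(calibration))  # -> ('compact', ...)
```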
-
Publication number: 20190138271
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Application
Filed: August 1, 2018
Publication date: May 9, 2019
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
-
Patent number: 10067740
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Grant
Filed: May 10, 2016
Date of Patent: September 4, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
-
Publication number: 20170144067
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Application
Filed: September 16, 2016
Publication date: May 25, 2017
Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
-
Publication number: 20160350071
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Application
Filed: May 10, 2016
Publication date: December 1, 2016
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
-
Patent number: 9348417
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Grant
Filed: November 1, 2010
Date of Patent: May 24, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
-
Publication number: 20120109868
Abstract: The subject disclosure relates to a technology by which output data in the form of audio, visual, haptic, and/or other output is automatically selected and tailored by a system, including adapting in real time, to address one or more users' specific needs, context, and implicit/explicit intent. State data and preference data are input into a real time adaptive output system that uses the data to select among output modalities, e.g., to change output mechanisms, add/remove output mechanisms, and/or change rendering characteristics. The output may be rendered on one or more output mechanisms to a single user or multiple users, including via a remote output mechanism.
Type: Application
Filed: November 1, 2010
Publication date: May 3, 2012
Applicant: Microsoft Corporation
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Tim Wantland, Blaise H. Aguera-Arcas
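A loose sketch of the adaptive selection this abstract describes: state and preference data go in, a set of output modalities comes out, and modalities are added or removed as conditions change. The UserState schema and the 70 dB noise threshold are assumptions; the filing mentions state, preference, context, and intent data without fixing a format.

```python
from dataclasses import dataclass


@dataclass
class UserState:
    """Hypothetical state and preference inputs for illustration only."""
    ambient_noise_db: float  # measured environment noise
    prefers_haptics: bool    # explicit user preference
    screen_available: bool   # whether a display is in reach


def select_output_modalities(state: UserState) -> list[str]:
    """Choose output mechanisms from state and preference data.

    Modalities are added or removed as conditions change, e.g. audio is
    dropped in a loud room and haptics added for users who want them.
    """
    modalities = []
    if state.screen_available:
        modalities.append("visual")
    if state.ambient_noise_db < 70:  # audio is ineffective in loud rooms
        modalities.append("audio")
    if state.prefers_haptics:
        modalities.append("haptic")
    return modalities or ["haptic"]  # always render on something


# As the room gets louder, the system adapts by removing the audio channel.
print(select_output_modalities(UserState(45.0, False, True)))  # visual + audio
print(select_output_modalities(UserState(85.0, True, True)))   # visual + haptic
```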
-
Publication number: 20120105257
Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
Type: Application
Filed: November 1, 2010
Publication date: May 3, 2012
Applicant: Microsoft Corporation
Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay