Patents by Inventor Gregg Robert Wygonik
Gregg Robert Wygonik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11221669
Abstract: Systems and methods related to engaging with a virtual assistant via ancillary input are provided. Ancillary input may refer to non-verbal, non-tactile input based on eye-gaze data and/or eye-gaze attributes, including but not limited to, facial recognition data, motion or gesture detection, eye-contact data, head-pose or head-position data, and the like. Thus, to initiate and/or maintain interaction with a virtual assistant, a user need not articulate an attention word or words. Rather, the user may initiate and/or maintain interaction with a virtual assistant more naturally and may even include the virtual assistant in a human conversation with multiple speakers. The virtual assistant engagement system may utilize at least one machine-learning algorithm to more accurately determine whether a user desires to engage with and/or maintain interaction with a virtual assistant. Various hardware configurations associated with a virtual assistant device may allow for both near-field and far-field engagement.
Type: Grant
Filed: December 20, 2017
Date of Patent: January 11, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ryen William White, Andrew David Wilson, Gregg Robert Wygonik, Nirupama Chandrasekaran, Sean Edward Andrist
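The abstract describes deciding engagement from non-verbal signals rather than a wake word. The sketch below is a minimal illustration, not the patent's method: the feature names (`eye_contact`, `head_toward_device`, `gaze_dwell_ms`), the hand-tuned weights, and the threshold are all hypothetical stand-ins for the machine-learning model the patent refers to.

```python
from dataclasses import dataclass

@dataclass
class GazeFrame:
    """One frame of ancillary (non-verbal, non-tactile) input signals."""
    eye_contact: bool         # user's gaze intersects the device
    head_toward_device: bool  # head pose oriented toward the device
    gaze_dwell_ms: float      # how long the gaze has rested on the device

def engagement_score(frame: GazeFrame) -> float:
    """Toy weighted score standing in for the patent's ML classifier."""
    score = 0.0
    if frame.eye_contact:
        score += 0.5
    if frame.head_toward_device:
        score += 0.2
    score += min(frame.gaze_dwell_ms / 1000.0, 1.0) * 0.3
    return score

def should_engage(frame: GazeFrame, threshold: float = 0.6) -> bool:
    """Engage the assistant without any spoken attention word."""
    return engagement_score(frame) >= threshold
```

A real system would replace the weighted sum with a trained model and feed it a richer feature vector, but the decision boundary ("engage without a wake word when ancillary signals are strong enough") is the same shape.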
-
Patent number: 10877575
Abstract: In a multi-user, collaborative environment, an operating system state change can occur by passing a stylus pen from one user to another. Upon detecting a change of the active user, a state machine transition can result in a change in a user experience. In one example, a change in the user experience includes a change in user interface (UI) functionality. Some examples include changing the color of the digital ink being displayed, changing a handwriting pattern, and automatically switching a view displayed on the user interface. The switched view can be switching from a single-user model to a multi-user model. In another example, a state machine transition can result in establishing connections between devices (e.g., phones or laptops) associated with the users who used the pen.
Type: Grant
Filed: March 6, 2017
Date of Patent: December 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Roy H. Berger, Jr., Gregg Robert Wygonik, Hayden William McAfee, Christian Klein, Ken M. Sadahiro, Alain Philippe Maillot
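The state machine described above can be sketched as follows. This is an illustrative toy, not the patented implementation: the class name, the fixed ink palette, and the single-user/multi-user view strings are all assumptions made for the example.

```python
class SharedPenSession:
    """Sketch of a UI state machine keyed on which user holds the pen."""
    INK_COLORS = ["blue", "red", "green", "orange"]

    def __init__(self):
        self.active_user = None
        self.users = []           # users in order of first appearance
        self.view = "single-user"

    def pen_passed_to(self, user_id: str) -> dict:
        """Transition triggered when a new active user is detected."""
        if user_id not in self.users:
            self.users.append(user_id)
        if len(self.users) > 1:
            self.view = "multi-user"  # auto-switch the displayed view
        self.active_user = user_id
        # give each user a stable ink color across hand-offs
        color = self.INK_COLORS[self.users.index(user_id) % len(self.INK_COLORS)]
        return {"user": user_id, "ink_color": color, "view": self.view}
```

Passing the pen back to an earlier user restores that user's color while the view remains multi-user, which mirrors the abstract's one-way switch from the single-user to the multi-user model.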
-
Patent number: 10782800
Abstract: A digital inking device can automatically adapt its interaction modality to provide contextually relevant status information and contextually relevant user interface control elements based on a user's activity. An interaction model of a digital inking device can automatically adapt the display of particular user interface control elements and particular status indicators based on one or more factors. For example, a digital inking device can select an interaction model from a number of interaction models based on a way a digital inking device is held by a user, a fingerprint of a user, an angle and/or distance of a digital inking device with respect to a paired computer, a particular grip a user has on a digital inking device, an amount of pressure that is used to hold a digital inking device, a contact pressure between a digital inking device and a paired computer, and/or one or more gestures performed by a user.
Type: Grant
Filed: December 8, 2017
Date of Patent: September 22, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Jan-Kristian Markiewicz, Gregg Robert Wygonik
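The selection among interaction models can be pictured as a dispatch over sensed factors. The grip names, thresholds, and model names below are invented for illustration; the patent enumerates the factors but not these specific values.

```python
def select_interaction_model(grip: str, pressure: float, distance_cm: float) -> str:
    """Pick a contextually relevant interaction model from sensed factors.

    All grip labels, thresholds, and model names are illustrative only.
    """
    if grip == "writing" and pressure > 0.3:
        return "inking"          # surface ink thickness / color controls
    if grip == "wand" and distance_cm > 30:
        return "remote-pointer"  # surface presentation-control elements
    if grip == "idle":
        return "status-only"     # battery / pairing indicators
    return "default"
```

A production device would likely fuse these factors probabilistically rather than with hard thresholds, but the input-to-model mapping is the core idea.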
-
Patent number: 10768804
Abstract: A gesture language for a device with multiple touch surfaces is described. Generally, a series of new touch input models is described that includes touch input interactions on two disjoint touch-sensitive surfaces. For example, a mobile device can include a primary display on a “front” side of the device, and a secondary display or touch-sensitive surface on the “back” side of the device, such as a surface that is opposite the primary display. Accordingly, the gesture language can include a series of “back touch” interactions with the touch-sensitive surface on the back side of the device. Example interactions include direct and indirect touch input on the back side, as well as simultaneous touch input on both sides of the device.
Type: Grant
Filed: September 6, 2016
Date of Patent: September 8, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Gregg Robert Wygonik, Ricardo A. Espinoza Reyes, Raymond Quan, Sophors Khut
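The first routing step in such a gesture language is deciding which surface (or combination) an input belongs to. A minimal sketch, with gesture-family names invented for the example:

```python
def classify_touch(front_contacts: int, back_contacts: int) -> str:
    """Map contacts on two disjoint touch surfaces to a gesture family.

    Family names ("dual-surface" etc.) are illustrative, not the patent's terms.
    """
    if front_contacts and back_contacts:
        return "dual-surface"  # simultaneous input on both sides
    if back_contacts:
        return "back-touch"    # indirect input behind the display
    if front_contacts:
        return "front-touch"   # conventional direct input
    return "none"
```

Once the family is known, a recognizer for that family would interpret the contact trajectories as specific gestures.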
-
Patent number: 10642453
Abstract: Techniques for navigational aid for a hinged device via semantic abstraction are described. Generally, the techniques described herein improve a user experience when the user is navigating through content, such as user-generated content in an electronic document. For example, the techniques described herein semantically abstract authored content in an electronic document to provide abstracted content. In implementations, abstracted content includes abstracted pages that each represent a different section of the authored content. When a user scans through an electronic document, rather than scanning page by page, techniques described herein instead navigate through displays of the abstracted pages. In addition, a hinge between different displays can be used as an input mechanism to control a speed of navigating through the abstracted pages to allow a user to more easily locate specific sections of the authored content.
Type: Grant
Filed: April 3, 2019
Date of Patent: May 5, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gregg Robert Wygonik, Raymond Quan, Christian Klein
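The two ideas in this abstract (collapsing pages into per-section abstracted pages, and mapping hinge movement to navigation speed) can be sketched as below. The input schema (`{'section': ..., 'text': ...}`) and the 15-degrees-per-page rate are assumptions made for the example.

```python
def abstract_pages(document):
    """Collapse authored pages into one abstracted page per section.

    Each input page is a dict like {'section': str, 'text': str};
    that schema is illustrative, not from the patent.
    """
    seen, sections = set(), []
    for page in document:
        if page["section"] not in seen:
            seen.add(page["section"])
            sections.append(page["section"])
    return sections

def pages_to_advance(hinge_delta_deg: float) -> int:
    """Wider hinge movement skims faster through abstracted pages."""
    return max(1, int(abs(hinge_delta_deg) // 15))
```

So a 50-degree hinge sweep jumps three abstracted pages at once, while a small nudge advances one, which is the speed-control behavior the abstract describes.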
-
Publication number: 20190227682
Abstract: Techniques for navigational aid for a hinged device via semantic abstraction are described. Generally, the techniques described herein improve a user experience when the user is navigating through content, such as user-generated content in an electronic document. For example, the techniques described herein semantically abstract authored content in an electronic document to provide abstracted content. In implementations, abstracted content includes abstracted pages that each represent a different section of the authored content. When a user scans through an electronic document, rather than scanning page by page, techniques described herein instead navigate through displays of the abstracted pages.
Type: Application
Filed: April 3, 2019
Publication date: July 25, 2019
Inventors: Gregg Robert Wygonik, Raymond Quan, Christian Klein
-
Publication number: 20190187787
Abstract: Systems and methods related to engaging with a virtual assistant via ancillary input are provided. Ancillary input may refer to non-verbal, non-tactile input based on eye-gaze data and/or eye-gaze attributes, including but not limited to, facial recognition data, motion or gesture detection, eye-contact data, head-pose or head-position data, and the like. Thus, to initiate and/or maintain interaction with a virtual assistant, a user need not articulate an attention word or words. Rather, the user may initiate and/or maintain interaction with a virtual assistant more naturally and may even include the virtual assistant in a human conversation with multiple speakers. The virtual assistant engagement system may utilize at least one machine-learning algorithm to more accurately determine whether a user desires to engage with and/or maintain interaction with a virtual assistant. Various hardware configurations associated with a virtual assistant device may allow for both near-field and far-field engagement.
Type: Application
Filed: December 20, 2017
Publication date: June 20, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ryen William White, Andrew David Wilson, Gregg Robert Wygonik, Nirupama Chandrasekaran, Sean Edward Andrist
-
Publication number: 20190179431
Abstract: A digital inking device can automatically adapt its interaction modality to provide contextually relevant status information and contextually relevant user interface control elements based on a user's activity. An interaction model of a digital inking device can automatically adapt the display of particular user interface control elements and particular status indicators based on one or more factors. For example, a digital inking device can select an interaction model from a number of interaction models based on a way a digital inking device is held by a user, a fingerprint of a user, an angle and/or distance of a digital inking device with respect to a paired computer, a particular grip a user has on a digital inking device, an amount of pressure that is used to hold a digital inking device, a contact pressure between a digital inking device and a paired computer, and/or one or more gestures performed by a user.
Type: Application
Filed: December 8, 2017
Publication date: June 13, 2019
Inventors: Christian Klein, Jan-Kristian Markiewicz, Gregg Robert Wygonik
-
Patent number: 10296176
Abstract: Techniques for navigational aid for a hinged device via semantic abstraction are described. Generally, the techniques described herein improve a user experience when the user is navigating through content, such as user-generated content in an electronic document. For example, the techniques described herein semantically abstract authored content in an electronic document to provide abstracted content. In implementations, abstracted content includes abstracted pages that each represent a different section of the authored content. When a user scans through an electronic document, rather than scanning page by page, techniques described herein instead navigate through displays of the abstracted pages. In addition, a hinge between different displays can be used as an input mechanism to control a speed of navigating through the abstracted pages to allow a user to more easily locate specific sections of the authored content.
Type: Grant
Filed: January 30, 2017
Date of Patent: May 21, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gregg Robert Wygonik, Raymond Quan, Christian Klein
-
Patent number: 10248224
Abstract: Techniques for input based on interactions with a physical hinge are described. Generally, a new class of interactions involves user manipulation of a physical hinge in order to provide input to a computing device. These hinge-based interactions provide input to a computing system that can be leveraged to initiate one or more system-level commands or operations, initiate transitions between discrete views of content, interact with content displayed via one or more display devices, and so on. In an example, a sequence of two or more consecutive hinge angle changes is recognized as a hinge gesture to perform a particular operation, such as a transition between a single-tasking state and a multitasking state.
Type: Grant
Filed: October 25, 2016
Date of Patent: April 2, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Raymond Quan, Ricardo A. Espinoza Reyes, Gregg Robert Wygonik
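Recognizing "a sequence of two or more consecutive hinge angle changes" as a gesture can be sketched as a pass over sampled hinge angles. The close-then-open pattern, the 10-degree significance threshold, and the gesture name are all assumptions for illustration.

```python
def detect_hinge_gesture(angles, min_delta=10.0):
    """Recognize a close-then-open hinge motion as a gesture.

    `angles` is a list of sampled hinge angles in degrees. Two consecutive
    significant changes with opposite signs (first closing, then opening)
    are treated as a 'toggle-multitasking' gesture; both the pattern and
    the name are illustrative, not taken from the patent.
    """
    deltas = [b - a for a, b in zip(angles, angles[1:])]
    significant = [d for d in deltas if abs(d) >= min_delta]
    if len(significant) >= 2 and significant[0] < 0 < significant[1]:
        return "toggle-multitasking"
    return None
```

Small angle jitter below `min_delta` is filtered out, so casual repositioning of the device does not fire the gesture.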
-
Publication number: 20180253163
Abstract: In a multi-user, collaborative environment, an operating system state change can occur by passing a stylus pen from one user to another. Upon detecting a change of the active user, a state machine transition can result in a change in a user experience. In one example, a change in the user experience includes a change in user interface (UI) functionality. Some examples include changing the color of the digital ink being displayed, changing a handwriting pattern, and automatically switching a view displayed on the user interface. The switched view can be switching from a single-user model to a multi-user model. In another example, a state machine transition can result in establishing connections between devices (e.g., phones or laptops) associated with the users who used the pen.
Type: Application
Filed: March 6, 2017
Publication date: September 6, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Roy H. Berger, Jr., Gregg Robert Wygonik, Hayden William McAfee, Christian Klein, Ken M. Sadahiro, Alain Philippe Maillot
-
Publication number: 20180217725
Abstract: Techniques for navigational aid for a hinged device via semantic abstraction are described. Generally, the techniques described herein improve a user experience when the user is navigating through content, such as user-generated content in an electronic document. For example, the techniques described herein semantically abstract authored content in an electronic document to provide abstracted content. In implementations, abstracted content includes abstracted pages that each represent a different section of the authored content. When a user scans through an electronic document, rather than scanning page by page, techniques described herein instead navigate through displays of the abstracted pages.
Type: Application
Filed: January 30, 2017
Publication date: August 2, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Gregg Robert Wygonik, Raymond Quan, Christian Klein
-
Publication number: 20180113520
Abstract: Techniques for input based on interactions with a physical hinge are described. Generally, a new class of interactions involves user manipulation of a physical hinge in order to provide input to a computing device. These hinge-based interactions provide input to a computing system that can be leveraged to initiate one or more system-level commands or operations, initiate transitions between discrete views of content, interact with content displayed via one or more display devices, and so on. In an example, a sequence of two or more consecutive hinge angle changes is recognized as a hinge gesture to perform a particular operation, such as a transition between a single-tasking state and a multitasking state.
Type: Application
Filed: October 25, 2016
Publication date: April 26, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Raymond Quan, Ricardo A. Espinoza Reyes, Gregg Robert Wygonik
-
Publication number: 20180067638
Abstract: A gesture language for a device with multiple touch surfaces is described. Generally, a series of new touch input models is described that includes touch input interactions on two disjoint touch-sensitive surfaces. For example, a mobile device can include a primary display on a “front” side of the device, and a secondary display or touch-sensitive surface on the “back” side of the device, such as a surface that is opposite the primary display. Accordingly, the gesture language can include a series of “back touch” interactions with the touch-sensitive surface on the back side of the device. Example interactions include direct and indirect touch input on the back side, as well as simultaneous touch input on both sides of the device.
Type: Application
Filed: September 6, 2016
Publication date: March 8, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christian Klein, Gregg Robert Wygonik, Ricardo A. Espinoza Reyes, Raymond Quan, Sophors Khut
-
Publication number: 20180061374
Abstract: A computing device that processes data from multiple sensors to modify image elements on a display for viewing is described. In implementations, a display adapter circuit in a computing device processes hinge data from a first sensor and orientation data from a second sensor and infers a viewing angle to output the image elements for display. In implementations, the display adapter circuit adapts the display device to enable display of the image elements at the inferred viewing angle while the computing device is between a closed position and a fully open position. The computing device can also, or instead, be implemented to transition the display of modified image elements while its first and second housings move relative to one another.
Type: Application
Filed: August 23, 2016
Publication date: March 1, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Gregg Robert Wygonik, Christian Klein, Sophors Khut
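The fusion of hinge data and orientation data into a single inferred viewing angle might look like the following. The blend formula, the clamp range, and the "no adaptation when closed or fully open" rule are all assumptions for illustration; the publication specifies the inputs and output, not this arithmetic.

```python
def infer_viewing_angle(hinge_deg: float, device_tilt_deg: float):
    """Infer a viewing angle from hinge + orientation sensor data.

    Returns None when the device is closed or fully flat, where the
    described display adaptation does not apply. The half-hinge-plus-tilt
    formula is purely illustrative.
    """
    if hinge_deg <= 0 or hinge_deg >= 360:
        return None
    return max(0.0, min(180.0, hinge_deg / 2.0 + device_tilt_deg))
```

A display adapter would re-run this on every sensor update so the image elements transition smoothly while the two housings move relative to one another.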
-
Publication number: 20170329952
Abstract: Providing casual digital ink experiences on an active-pen-capable computing device. A method includes detecting an active pen interaction with a screen of the computing device. The computing device has a locked lock screen that requires authentication to unlock. As a result of detecting the active pen interaction with the screen, the method further includes launching a casual digital ink experience without requiring authentication to the computing device, such that the casual digital ink experience is launched without unlocking the lock screen.
Type: Application
Filed: May 13, 2016
Publication date: November 16, 2017
Inventors: Katsumi Take, Gregg Robert Wygonik, Robert John Connon
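The lock-screen branching described above can be sketched as a small dispatcher. The input-source labels and return strings are invented for the example; the key property, taken from the abstract, is that pen input opens the ink experience while the device stays locked.

```python
class LockScreen:
    """Sketch: active-pen input opens a casual ink canvas without unlocking."""

    def __init__(self):
        self.locked = True

    def on_input(self, source: str) -> str:
        """Route an input event. Source labels are illustrative."""
        if self.locked and source == "active_pen":
            # Launch a sandboxed ink surface; note self.locked is unchanged.
            return "casual-ink"
        if self.locked:
            return "prompt-authentication"
        return "desktop"
```

Keeping the ink surface sandboxed from the authenticated session is what makes it safe to launch without credentials.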