Patents by Inventor Daniel J. Wigdor
Daniel J. Wigdor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10972680
Abstract: On a display configured to provide a photorepresentative view, from a user's vantage point, of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each correspond to one or more physical features in the physical environment. The method further includes, based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme.
Type: Grant
Filed: March 10, 2011
Date of Patent: April 6, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel J. Wigdor, Megan Tedesco
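The pipeline this abstract describes (select a theme, identify features in a spatial model, augment each identified feature) can be sketched as follows. This is a hypothetical illustration only; the theme names, feature names, and the `augment` function are assumptions, not the patented implementation.

```python
# Hypothetical sketch: a chosen theme maps identified physical features
# (from the spatial model) to themed augmentations for display.
THEMES = {
    "medieval": {"wall": "stone texture", "table": "wooden banquet table"},
    "space": {"wall": "hull panels", "table": "control console"},
}

def augment(spatial_features, theme):
    """Return an augmentation for each identified feature the theme covers."""
    mapping = THEMES[theme]
    return {f: mapping[f] for f in spatial_features if f in mapping}

# Features without a themed counterpart (e.g. "lamp") are left unaugmented.
print(augment(["wall", "table", "lamp"], "space"))
```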
-
Patent number: 9152317
Abstract: A method of operating a graphical user interface of a computing device is disclosed. The method comprises displaying a graphical user interface (GUI) element on a touch-sensitive display screen. The method further comprises, in response to receiving touch input data indicative of a one-touch gesture, mapping the one-touch gesture to a corresponding GUI element function. The method further comprises, in response to receiving touch input data indicative of a multi-touch gesture, mapping the multi-touch gesture to the corresponding GUI element function. The method further comprises transforming display of the GUI element on the touch-sensitive display screen based on the corresponding GUI element function.
Type: Grant
Filed: August 14, 2009
Date of Patent: October 6, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vanessa A. Larco, Daniel J. Wigdor, Sarah Graham Williams
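The core idea of the claim above is that a one-touch gesture and a multi-touch gesture both map to the same GUI element function. A minimal hypothetical dispatch, with the "resize" function and gesture classification purely as illustrative assumptions:

```python
# Hypothetical sketch: both gesture classes resolve to one element function,
# which would then drive the transformation of the GUI element's display.
def classify(touch_points):
    return "one-touch" if len(touch_points) == 1 else "multi-touch"

GESTURE_TO_FUNCTION = {
    "one-touch": "resize",    # e.g. dragging a handle
    "multi-touch": "resize",  # e.g. pinching -- same element function
}

def handle_input(touch_points):
    return GESTURE_TO_FUNCTION[classify(touch_points)]

# Either input path yields the same GUI element function.
print(handle_input([(10, 10)]), handle_input([(10, 10), (50, 50)]))
```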
-
Publication number: 20140120518
Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch/hover input is detected at one or more regions of the display surface. A visualization of the touch/hover input is displayed at a location of the display surface offset from the touch/hover input. One or more annotations are displayed at a location of the display surface offset from the touch/hover input and proximate to the visualization, where each annotation shows a different legal continuation of the touch/hover input.
Type: Application
Filed: January 2, 2014
Publication date: May 1, 2014
Applicant: Microsoft Corporation
Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
-
Patent number: 8622742
Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input.
Type: Grant
Filed: November 16, 2009
Date of Patent: January 7, 2014
Assignee: Microsoft Corporation
Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
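The training visualization described above (echo the touch at an offset so the finger does not occlude it, and annotate it with the legal continuations of the gesture so far) can be sketched like this. The 80-pixel offset and the gesture grammar are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: feedback is drawn at a fixed offset from the contact
# point, annotated with the legal continuations of the gesture in progress.
OFFSET = (0, -80)  # draw feedback 80 px above the finger (assumed value)

LEGAL_CONTINUATIONS = {
    ("down",): ["drag", "hold", "lift"],
    ("down", "drag"): ["drag", "lift"],
}

def visualize(touch_pos, gesture_so_far):
    x, y = touch_pos
    dx, dy = OFFSET
    return {
        "visualization_at": (x + dx, y + dy),
        "annotations": LEGAL_CONTINUATIONS.get(tuple(gesture_so_far), []),
    }

print(visualize((120, 300), ["down"]))
```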
-
Patent number: 8514188
Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and setting a mode constraint based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.
Type: Grant
Filed: December 30, 2009
Date of Patent: August 20, 2013
Assignee: Microsoft Corporation
Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
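A hypothetical sketch of the posture-based mode constraint: the initial hand posture locks one parameter of the virtual object, and subsequent gestures modulate only the others. The posture names and parameter set are assumptions for illustration:

```python
# Hypothetical sketch: the recognized hand posture sets a mode constraint;
# the constrained parameter is maintained while others may be modulated.
POSTURE_CONSTRAINTS = {
    "flat_hand": "rotation",  # flat hand locks rotation (assumed mapping)
    "fist": "scale",          # fist locks scale (assumed mapping)
}

class VirtualObject:
    def __init__(self):
        self.params = {"position": (0, 0), "rotation": 0.0, "scale": 1.0}
        self.constrained = None

    def begin_gesture(self, posture):
        self.constrained = POSTURE_CONSTRAINTS.get(posture)

    def apply(self, param, value):
        if param != self.constrained:  # constrained parameter is maintained
            self.params[param] = value

obj = VirtualObject()
obj.begin_gesture("flat_hand")
obj.apply("rotation", 45.0)  # ignored: rotation is constrained
obj.apply("scale", 2.0)      # applied: scale is unconstrained
print(obj.params)
```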
-
Patent number: 8487888
Abstract: Embodiments are disclosed herein that relate to multi-modal interaction on a computing device comprising a multi-touch display. One disclosed embodiment comprises a method of multi-modal interaction including recognizing a hand posture of a user's first hand directed at the display and displaying a modal region based on the hand posture, wherein the modal region defines an area on the display. The method further includes receiving an input selecting a mode to be applied to the modal region, wherein the mode indicates functionalities to be associated with the modal region and defines a mapping of touch gestures to actions associated with the mode. The method further includes, while the modal region remains displayed, recognizing a touch gesture from a user's second hand directed at the display within the modal region and performing an action on the display based upon a mapping of the touch gesture.
Type: Grant
Filed: December 4, 2009
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Daniel J. Wigdor, Paul Armistead Hoover, Kay Hofmeester
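The bimanual interaction above can be sketched as a region with a mode-specific gesture-to-action map: gestures from the second hand are interpreted only inside the region's bounds. The mode names and mappings are illustrative assumptions:

```python
# Hypothetical sketch: the first hand places a modal region with a selected
# mode; second-hand gestures inside the region use that mode's mapping.
MODES = {
    "draw":  {"tap": "place point", "drag": "draw stroke"},
    "erase": {"tap": "erase point", "drag": "erase stroke"},
}

class ModalRegion:
    def __init__(self, bounds, mode):
        self.bounds = bounds  # (x0, y0, x1, y1) area defined on the display
        self.mode = MODES[mode]

    def contains(self, pos):
        x0, y0, x1, y1 = self.bounds
        return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

    def handle(self, gesture, pos):
        if self.contains(pos):
            return self.mode.get(gesture)
        return None  # gestures outside the region are not remapped

region = ModalRegion((0, 0, 100, 100), "draw")
print(region.handle("drag", (50, 50)))   # inside the region
print(region.handle("drag", (200, 50)))  # outside the region
```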
-
Publication number: 20120264510
Abstract: An integrated virtual environment is provided by obtaining a 3D spatial model of a physical environment in which a user is located, and identifying, via analysis of the 3D spatial model, a physical object in the physical environment. The method further comprises generating a virtualized representation of the physical object, and incorporating the virtualized representation of the physical object into an existing virtual environment, thereby yielding the integrated virtual environment. The method further comprises displaying, on a display device and from a vantage point of the user, a view of the integrated virtual environment, said view being changeable in response to the user moving and/or interacting within the physical environment.
Type: Application
Filed: April 12, 2011
Publication date: October 18, 2012
Applicant: Microsoft Corporation
Inventors: Daniel J. Wigdor, Megan Tedesco, Andrew Wilson, John Clavin
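The pipeline in this abstract (analyze the 3D model, identify physical objects, virtualize them, merge them into the existing virtual environment) can be sketched at a very high level. The data representations and function names here are illustrative assumptions standing in for real 3D analysis:

```python
# Hypothetical sketch of the claimed pipeline, with dictionaries standing in
# for a real 3D spatial model and scene graph.
def identify_objects(spatial_model):
    # Stand-in for 3D analysis: pick out entries flagged as discrete objects.
    return [o for o in spatial_model if o.get("is_object")]

def virtualize(physical_object):
    return {"mesh": physical_object["name"] + "_mesh", "source": "physical"}

def integrate(virtual_environment, spatial_model):
    # Incorporate virtualized physical objects into the existing environment.
    return virtual_environment + [virtualize(o) for o in identify_objects(spatial_model)]

scene = [{"name": "couch", "is_object": True}, {"name": "floor", "is_object": False}]
print(integrate([{"mesh": "castle_mesh", "source": "virtual"}], scene))
```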
-
Publication number: 20120229508
Abstract: On a display configured to provide a photorepresentative view, from a user's vantage point, of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each correspond to one or more physical features in the physical environment. The method further includes, based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme.
Type: Application
Filed: March 10, 2011
Publication date: September 13, 2012
Applicant: Microsoft Corporation
Inventors: Daniel J. Wigdor, Megan Tedesco
-
Patent number: 8261212
Abstract: A computing system for displaying a GUI element on a natural user interface is described herein. The computing system includes a display configured to display a natural user interface of a program executed on the computing system, and a gesture sensor configured to detect a gesture input directed at the natural user interface by a user. The computing system also includes a processor configured to execute a gesture-recognizing module for recognizing a registration phase, an operation phase, and a termination phase of the gesture input, and a gesture assist module configured to first display a GUI element overlaid upon the natural user interface in response to recognition of the registration phase. The GUI element includes a visual or audio operation cue to prompt the user to carry out the operation phase of the gesture input, and a selector manipulatable by the user via the operation phase of the gesture.
Type: Grant
Filed: October 20, 2009
Date of Patent: September 4, 2012
Assignee: Microsoft Corporation
Inventors: Daniel J. Wigdor, Paul Armistead Hoover
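The three gesture phases named above (registration, operation, termination) naturally form a small state machine: registration overlays the GUI element and its cue, operation updates the selector, and termination hides the element. A hypothetical sketch, with state and action names as assumptions:

```python
# Hypothetical state machine for the three claimed gesture phases.
TRANSITIONS = {
    ("idle", "registration"): ("registered", "show GUI element with cue"),
    ("registered", "operation"): ("operating", "update selector"),
    ("operating", "operation"): ("operating", "update selector"),
    ("operating", "termination"): ("idle", "hide GUI element"),
}

def step(state, phase):
    """Advance the assist module's state; unknown transitions are ignored."""
    return TRANSITIONS.get((state, phase), (state, None))

state = "idle"
for phase in ["registration", "operation", "termination"]:
    state, action = step(state, phase)
    print(phase, "->", action)
```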
-
Publication number: 20110270824
Abstract: Collaborative search and share is provided by a method of facilitating collaborative content-finding, which includes displaying a toolbar user interface object for each user that not only allows each user to perform content-finding but also increases each user's awareness of the activities of other users. The method further includes displaying content results as various disparate image clips that can easily be shared, moved, etc. amongst users.
Type: Application
Filed: April 30, 2010
Publication date: November 3, 2011
Applicant: Microsoft Corporation
Inventors: Meredith June Morris, Daniel J. Wigdor, Vanessa Adriana Larco, Jarrod Lombardo, Sean Clarence McDirmid, Chao Wang, Monty Todd LaRue, Erez Kikin-Gil
-
Publication number: 20110157025
Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and setting a mode constraint based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.
Type: Application
Filed: December 30, 2009
Publication date: June 30, 2011
Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
-
Publication number: 20110134047
Abstract: Embodiments are disclosed herein that relate to multi-modal interaction on a computing device comprising a multi-touch display. One disclosed embodiment comprises a method of multi-modal interaction including recognizing a hand posture of a user's first hand directed at the display and displaying a modal region based on the hand posture, wherein the modal region defines an area on the display. The method further includes receiving an input selecting a mode to be applied to the modal region, wherein the mode indicates functionalities to be associated with the modal region and defines a mapping of touch gestures to actions associated with the mode. The method further includes, while the modal region remains displayed, recognizing a touch gesture from a user's second hand directed at the display within the modal region and performing an action on the display based upon a mapping of the touch gesture.
Type: Application
Filed: December 4, 2009
Publication date: June 9, 2011
Applicant: Microsoft Corporation
Inventors: Daniel J. Wigdor, Paul Armistead Hoover, Kay Hofmeester
-
Publication number: 20110117526
Abstract: A method for providing multi-touch input initiation training on a display surface is disclosed. A set of one or more registration hand postures is determined, where each registration hand posture corresponds to one or more gestures executable from that registration hand posture. A registration posture guide is displayed on the display surface. The registration posture guide includes a catalogue for each registration hand posture, where the catalogue includes a contact silhouette showing a model touch-contact interface between the display surface and that registration hand posture.
Type: Application
Filed: November 16, 2009
Publication date: May 19, 2011
Applicant: Microsoft Corporation
Inventors: Daniel J. Wigdor, Hrvoje Benko
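The registration posture guide above amounts to a catalogue keyed by posture, pairing each posture's contact silhouette with the gestures executable from it. A hypothetical sketch, with posture names, ASCII silhouettes, and gesture lists as illustrative assumptions:

```python
# Hypothetical sketch: each registration posture carries a contact silhouette
# and the gestures executable from it; the guide is the full catalogue.
POSTURES = {
    "two_finger": {"silhouette": "oo", "gestures": ["pinch", "rotate"]},
    "flat_hand":  {"silhouette": "=====", "gestures": ["sweep"]},
}

def registration_guide():
    """Return the posture catalogue in a stable display order."""
    return [{"posture": name, **entry} for name, entry in sorted(POSTURES.items())]

for entry in registration_guide():
    print(entry["posture"], entry["silhouette"], entry["gestures"])
```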
-
Publication number: 20110117535
Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input.
Type: Application
Filed: November 16, 2009
Publication date: May 19, 2011
Applicant: Microsoft Corporation
Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
-
Publication number: 20110119216
Abstract: A computing device that detects precursory user-input preactions executed in an instructive region and user-input action gestures executed in a functionally-active region is provided. The computing device includes a natural input trainer to present a predictive input cue on a display in response to detecting a precursory user-input preaction performed in the instructive region. The computing device also includes an interface engine to execute a computing function in response to detecting a successive user-input action gesture performed in the functionally-active region subsequent to detection of the precursory user-input preaction.
Type: Application
Filed: November 16, 2009
Publication date: May 19, 2011
Applicant: Microsoft Corporation
Inventor: Daniel J. Wigdor
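The two-region trainer above can be sketched as a small class: a preaction in the instructive region yields a predictive cue, and a subsequent action gesture in the functionally-active region executes the function. All gesture, cue, and function names are illustrative assumptions:

```python
# Hypothetical sketch of the two-region trainer: instructive region -> cue,
# then functionally-active region -> executed function.
CUES = {"hover_two_fingers": "pinch to zoom"}
FUNCTIONS = {"pinch": "zoom"}

class Trainer:
    def __init__(self):
        self.last_preaction = None

    def preaction(self, gesture):   # performed in the instructive region
        self.last_preaction = gesture
        return CUES.get(gesture)    # predictive input cue shown on display

    def action(self, gesture):      # performed in the functionally-active region
        if self.last_preaction is not None:  # requires a preceding preaction
            return FUNCTIONS.get(gesture)    # execute the computing function
        return None

t = Trainer()
print(t.preaction("hover_two_fingers"))
print(t.action("pinch"))
```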
-
Publication number: 20110093821
Abstract: A computing system for displaying a GUI element on a natural user interface is described herein. The computing system includes a display configured to display a natural user interface of a program executed on the computing system, and a gesture sensor configured to detect a gesture input directed at the natural user interface by a user. The computing system also includes a processor configured to execute a gesture-recognizing module for recognizing a registration phase, an operation phase, and a termination phase of the gesture input, and a gesture assist module configured to first display a GUI element overlaid upon the natural user interface in response to recognition of the registration phase. The GUI element includes a visual or audio operation cue to prompt the user to carry out the operation phase of the gesture input, and a selector manipulatable by the user via the operation phase of the gesture.
Type: Application
Filed: October 20, 2009
Publication date: April 21, 2011
Applicant: Microsoft Corporation
Inventors: Daniel J. Wigdor, Paul Armistead Hoover
-
Publication number: 20110041096
Abstract: A method of operating a graphical user interface of a computing device is disclosed. The method comprises displaying a graphical user interface (GUI) element on a touch-sensitive display screen. The method further comprises, in response to receiving touch input data indicative of a one-touch gesture, mapping the one-touch gesture to a corresponding GUI element function. The method further comprises, in response to receiving touch input data indicative of a multi-touch gesture, mapping the multi-touch gesture to the corresponding GUI element function. The method further comprises transforming display of the GUI element on the touch-sensitive display screen based on the corresponding GUI element function.
Type: Application
Filed: August 14, 2009
Publication date: February 17, 2011
Inventors: Vanessa A. Larco, Daniel J. Wigdor, Sarah Graham Williams