Patents by Inventor Daniel J. Wigdor

Daniel J. Wigdor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10972680
    Abstract: On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each correspond to one or more physical features in the physical environment. The method further includes, based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme.
    Type: Grant
    Filed: March 10, 2011
    Date of Patent: April 6, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. Wigdor, Megan Tedesco
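
The abstract above describes a pipeline: capture the environment optically, build a spatial model, classify its features, and overlay theme-specific augmentations on the identified ones. The following is a minimal sketch of that pipeline; every type and function name is hypothetical, not taken from the patent or any Microsoft API.

```typescript
// Hypothetical sketch of the theme-driven augmentation pipeline. Names are
// illustrative only.

interface SpatialFeature {
  id: string;
  kind: "wall" | "floor" | "table" | "unknown"; // classified physical feature
  bounds: { x: number; y: number; width: number; height: number };
}

interface Theme {
  name: string;
  // Maps a classified feature kind to an overlay asset to render, if any.
  overlayFor(kind: SpatialFeature["kind"]): string | undefined;
}

// Stand-in for the optical, real-time capture step (e.g. a depth-camera feed).
function buildSpatialModel(depthFrame: Float32Array): SpatialFeature[] {
  // A real system would segment and classify surfaces here; we return a stub.
  return [{ id: "f1", kind: "wall", bounds: { x: 0, y: 0, width: 100, height: 50 } }];
}

function augmentView(theme: Theme, model: SpatialFeature[]): void {
  for (const feature of model) {
    const overlay = theme.overlayFor(feature.kind);
    if (overlay) {
      // A real renderer would composite the overlay onto the
      // photorepresentative view at the feature's screen-space bounds.
      console.log(`render ${overlay} over ${feature.kind} at`, feature.bounds);
    }
  }
}

const medieval: Theme = {
  name: "medieval",
  overlayFor: (kind) => (kind === "wall" ? "stone-texture" : undefined),
};

augmentView(medieval, buildSpatialModel(new Float32Array(640 * 480)));
```
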
  • Patent number: 9152317
    Abstract: A method of operating a graphical user interface of a computing device is disclosed. The method comprises displaying a graphical user interface (GUI) element on a touch-sensitive display screen. The method further comprises, in response to receiving touch input data indicative of a one-touch gesture, mapping the one-touch gesture to a corresponding GUI element function. The method further comprises, in response to receiving touch input data indicative of a multi-touch gesture, mapping the multi-touch gesture to the corresponding GUI element function. The method further comprises transforming display of the GUI element on the touch-sensitive display screen based on the corresponding GUI element function.
    Type: Grant
    Filed: August 14, 2009
    Date of Patent: October 6, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vanessa A. Larco, Daniel J. Wigdor, Sarah Graham Williams
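
The core idea above is that a one-touch gesture and a multi-touch gesture both map to the same GUI element function, so the element transforms identically regardless of how many fingers produced the input. A minimal sketch, with all names assumed for illustration:

```typescript
// Both gesture forms resolve to one GUI element function (hypothetical names).

interface GuiElement { scrollOffset: number; }

type GuiFunction = (element: GuiElement, amount: number) => void;

// The element's single function: scrolling its content.
const scroll: GuiFunction = (el, amount) => { el.scrollOffset += amount; };

interface OneTouchGesture { kind: "one-touch"; dragDelta: number; }
interface MultiTouchGesture { kind: "multi-touch"; centroidDelta: number; }
type Gesture = OneTouchGesture | MultiTouchGesture;

// One-touch and multi-touch input both map to the same function, so the
// element's display is transformed the same way however the user touches it.
function handleGesture(el: GuiElement, g: Gesture): void {
  const amount = g.kind === "one-touch" ? g.dragDelta : g.centroidDelta;
  scroll(el, amount);
}

const scrollbar: GuiElement = { scrollOffset: 0 };
handleGesture(scrollbar, { kind: "one-touch", dragDelta: 12 });
handleGesture(scrollbar, { kind: "multi-touch", centroidDelta: -4 });
console.log(scrollbar.scrollOffset); // 8
```
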
  • Publication number: 20140120518
    Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch/hover input is detected at one or more regions of the display surface. A visualization of the touch/hover input is displayed at a location of the display surface offset from the touch/hover input. One or more annotations are displayed at a location of the display surface offset from the touch/hover input and proximate to the visualization, where each annotation shows a different legal continuation of the touch/hover input.
    Type: Application
    Filed: January 2, 2014
    Publication date: May 1, 2014
    Applicant: Microsoft Corporation
    Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
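
To make the offset-visualization idea concrete, here is a rough sketch of how a touch/hover event might be turned into a visualization displaced from the hand, plus one annotation per legal (that is, recognized) continuation. The names and the offset value are assumptions for illustration, not the patented implementation:

```typescript
// Offset visualization of a touch/hover input with annotated continuations.

interface Point { x: number; y: number; }
interface Annotation { label: string; at: Point; }

const OFFSET = { x: 0, y: -80 }; // draw feedback above the hand so it is not occluded

// Legal (recognized) continuations for a given number of contacts.
function legalContinuations(contactCount: number): string[] {
  return contactCount >= 2 ? ["pinch to zoom", "rotate"] : ["drag to pan", "tap to select"];
}

function onTouchOrHover(contacts: Point[]): { visualization: Point; annotations: Annotation[] } {
  // The centroid of the contacts anchors the offset visualization.
  const cx = contacts.reduce((s, p) => s + p.x, 0) / contacts.length;
  const cy = contacts.reduce((s, p) => s + p.y, 0) / contacts.length;
  const visualization = { x: cx + OFFSET.x, y: cy + OFFSET.y };
  // One annotation per legal continuation, placed next to the visualization.
  const annotations = legalContinuations(contacts.length).map((label, i) => ({
    label,
    at: { x: visualization.x + 60, y: visualization.y + i * 20 },
  }));
  return { visualization, annotations };
}

console.log(onTouchOrHover([{ x: 100, y: 300 }, { x: 140, y: 310 }]));
```
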
  • Patent number: 8622742
    Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input.
    Type: Grant
    Filed: November 16, 2009
    Date of Patent: January 7, 2014
    Assignee: Microsoft Corporation
    Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
  • Patent number: 8514188
    Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor and setting a mode constraint based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: August 20, 2013
    Assignee: Microsoft Corporation
    Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
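
One way to read this abstract: the initial hand posture selects which parameter of the object is frozen, and subsequent manipulation modulates only the remaining parameters. A hypothetical sketch of that constraint logic, with an invented posture-to-parameter mapping:

```typescript
// Posture-derived mode constraints on object manipulation (names invented).

interface VirtualObject { position: { x: number; y: number }; scale: number; rotation: number; }

type Param = "position" | "scale" | "rotation";
type HandPosture = "flat-hand" | "corner-pin";

// Posture -> constrained parameter (illustrative mapping only).
const modeConstraint: Record<HandPosture, Param> = {
  "flat-hand": "rotation", // a flat hand pins rotation
  "corner-pin": "scale",   // a pinned corner pins scale
};

function applyGesture(
  obj: VirtualObject,
  constrained: Param,
  delta: { dx: number; dy: number; dScale: number; dRotation: number },
): void {
  // Modulate only unconstrained parameters; the constrained one is held.
  if (constrained !== "position") { obj.position.x += delta.dx; obj.position.y += delta.dy; }
  if (constrained !== "scale") obj.scale *= delta.dScale;
  if (constrained !== "rotation") obj.rotation += delta.dRotation;
}

const photo: VirtualObject = { position: { x: 0, y: 0 }, scale: 1, rotation: 0 };
const constrained = modeConstraint["flat-hand"]; // set by the initial touch posture
applyGesture(photo, constrained, { dx: 5, dy: 2, dScale: 1.2, dRotation: 0.3 });
console.log(photo); // rotation stays 0; position and scale change
```
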
  • Patent number: 8487888
    Abstract: Embodiments are disclosed herein that relate to multi-modal interaction on a computing device comprising a multi-touch display. One disclosed embodiment comprises a method of multi-modal interaction including recognizing a hand posture of a user's first hand directed at the display and displaying a modal region based on the hand posture, wherein the modal region defines an area on the display. The method further includes receiving an input selecting a mode to be applied to the modal region, wherein the mode indicates functionalities to be associated with the modal region and defines a mapping of touch gestures to actions associated with the mode. The method further includes, while the modal region remains displayed, recognizing a touch gesture from a user's second hand directed at the display within the modal region and performing an action on the display based upon a mapping of the touch gesture.
    Type: Grant
    Filed: December 4, 2009
    Date of Patent: July 16, 2013
    Assignee: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Paul Armistead Hoover, Kay Hofmeester
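
A rough sketch of the modal-region idea follows: the first hand's posture defines an area, a selected mode installs a gesture-to-action mapping for that area, and second-hand gestures act only inside it. All names are illustrative assumptions:

```typescript
// Two-handed modal region: one hand defines the region, the other acts in it.

interface Rect { x: number; y: number; width: number; height: number; }
interface Point { x: number; y: number; }

type TouchGestureKind = "tap" | "drag";
type Action = (at: Point) => void;

interface ModalRegion {
  area: Rect;                             // defined by the first hand's posture
  mapping: Map<TouchGestureKind, Action>; // installed by the selected mode
}

function contains(r: Rect, p: Point): boolean {
  return p.x >= r.x && p.x <= r.x + r.width && p.y >= r.y && p.y <= r.y + r.height;
}

// Selecting a mode installs the gesture-to-action mapping for the region.
function selectMode(region: ModalRegion, mode: "ink" | "erase"): void {
  region.mapping.clear();
  if (mode === "ink") {
    region.mapping.set("drag", (p) => console.log("draw stroke at", p));
    region.mapping.set("tap", (p) => console.log("dot at", p));
  } else {
    region.mapping.set("drag", (p) => console.log("erase at", p));
  }
}

// A second-hand gesture only performs an action if it lands inside the region.
function onSecondHandGesture(region: ModalRegion, kind: TouchGestureKind, at: Point): void {
  if (!contains(region.area, at)) return;
  region.mapping.get(kind)?.(at);
}

const region: ModalRegion = { area: { x: 50, y: 50, width: 200, height: 150 }, mapping: new Map() };
selectMode(region, "ink");
onSecondHandGesture(region, "drag", { x: 100, y: 100 }); // draws
onSecondHandGesture(region, "drag", { x: 500, y: 500 }); // outside the region: ignored
```
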
  • Publication number: 20120264510
    Abstract: A method provides an integrated virtual environment by obtaining a 3D spatial model of a physical environment in which a user is located and identifying, via analysis of the 3D spatial model, a physical object in the physical environment. The method further comprises generating a virtualized representation of the physical object and incorporating the virtualized representation into an existing virtual environment, thereby yielding the integrated virtual environment. The method further comprises displaying, on a display device and from a vantage point of the user, a view of the integrated virtual environment, said view being changeable in response to the user moving and/or interacting within the physical environment.
    Type: Application
    Filed: April 12, 2011
    Publication date: October 18, 2012
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Megan Tedesco, Andrew Wilson, John Clavin
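
The pipeline described above (capture a 3D spatial model, identify a physical object, virtualize it, merge it into an authored virtual environment, and render from the user's vantage point) might be sketched as follows. Everything here is a stub under assumed names, not the patented implementation:

```typescript
// Integrating virtualized physical objects into an existing virtual world.

interface Vec3 { x: number; y: number; z: number; }
interface PhysicalObject { label: string; position: Vec3; }
interface VirtualEntity { label: string; position: Vec3; source: "authored" | "virtualized"; }
interface VirtualEnvironment { entities: VirtualEntity[]; }

// Stand-in for identifying objects in a captured 3D spatial model.
function identifyPhysicalObjects(spatialModel: Float32Array): PhysicalObject[] {
  return [{ label: "couch", position: { x: 1, y: 0, z: 2 } }];
}

// Virtualize an identified object so it can live in the virtual world.
function virtualize(obj: PhysicalObject): VirtualEntity {
  return { label: obj.label, position: obj.position, source: "virtualized" };
}

function integrate(env: VirtualEnvironment, model: Float32Array): VirtualEnvironment {
  const virtualized = identifyPhysicalObjects(model).map(virtualize);
  return { entities: [...env.entities, ...virtualized] };
}

// Rendering from the user's tracked vantage point is stubbed with logging.
function renderFrom(vantage: Vec3, env: VirtualEnvironment): void {
  for (const e of env.entities) console.log(`draw ${e.label} (${e.source}) relative to`, vantage);
}

const game: VirtualEnvironment = {
  entities: [{ label: "dragon", position: { x: 0, y: 0, z: 5 }, source: "authored" }],
};
renderFrom({ x: 0, y: 1.7, z: 0 }, integrate(game, new Float32Array(0)));
```
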
  • Publication number: 20120229508
    Abstract: On a display configured to provide a photorepresentative view from a user's vantage point of a physical environment in which the user is located, a method is provided comprising receiving, from the user, an input selecting a theme for use in augmenting the photorepresentative view. The method further includes obtaining, optically and in real time, environment information of the physical environment and generating a spatial model of the physical environment based on the environment information. The method further includes identifying, via analysis of the spatial model, one or more features within the spatial model that each correspond to one or more physical features in the physical environment. The method further includes, based on such analysis, displaying, on the display, an augmentation of an identified feature, the augmentation being associated with the theme.
    Type: Application
    Filed: March 10, 2011
    Publication date: September 13, 2012
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Megan Tedesco
  • Patent number: 8261212
    Abstract: A computing system for displaying a GUI element on a natural user interface is described herein. The computing system includes a display configured to display a natural user interface of a program executed on the computing system, and a gesture sensor configured to detect a gesture input directed at the natural user interface by a user. The computing system also includes a processor configured to execute a gesture-recognizing module for recognizing a registration phase, an operation phase, and a termination phase of the gesture input, and a gesture assist module configured to first display a GUI element overlaid upon the natural user interface in response to recognition of the registration phase. The GUI element includes a visual or audio operation cue to prompt the user to carry out the operation phase of the gesture input, and a selector manipulatable by the user via the operation phase of the gesture.
    Type: Grant
    Filed: October 20, 2009
    Date of Patent: September 4, 2012
    Assignee: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Paul Armistead Hoover
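
The registration/operation/termination structure above lends itself to a small state machine: registration first displays the assist element with its operation cue, operation manipulates the selector, and termination dismisses the element. A hedged sketch with invented names:

```typescript
// Three-phase gesture model driving a gesture-assist overlay (hypothetical API).

type Phase = "registration" | "operation" | "termination";

interface GuiAssistElement {
  visible: boolean;
  cue: string;           // operation cue prompting the user's next move
  selectorValue: number; // manipulated during the operation phase
}

class GestureAssist {
  element: GuiAssistElement = { visible: false, cue: "", selectorValue: 0 };

  onPhase(phase: Phase, payload?: number): void {
    switch (phase) {
      case "registration":
        // First display of the overlaid GUI element, with a cue for what to do next.
        this.element = { visible: true, cue: "slide hand to adjust volume", selectorValue: 0 };
        break;
      case "operation":
        // The selector is manipulated by the ongoing gesture.
        if (this.element.visible && payload !== undefined) this.element.selectorValue = payload;
        break;
      case "termination":
        this.element.visible = false; // commit the value and dismiss the element
        break;
    }
  }
}

const assist = new GestureAssist();
assist.onPhase("registration");
assist.onPhase("operation", 40);
assist.onPhase("termination");
```
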
  • Publication number: 20110270824
    Abstract: Collaborative search and share is provided by a method of facilitating collaborative content-finding, which includes displaying a toolbar user interface object for each user that not only allows each user to perform content-finding but also increases each user's awareness of the other users' activities. The method further includes displaying content results as disparate image clips that can easily be shared and moved amongst users.
    Type: Application
    Filed: April 30, 2010
    Publication date: November 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Meredith June Morris, Daniel J. Wigdor, Vanessa Adriana Larco, Jarrod Lombardo, Sean Clarence McDirmid, Chao Wang, Monty Todd LaRue, Erez Kikin-Gil
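
A loose sketch of the data model this suggests: each user's toolbar broadcasts that user's activity while producing result clips that any user can move or take over. Names and structure are assumptions, not the patented design:

```typescript
// Per-user toolbars plus shareable result clips on a common surface.

interface ImageClip { id: string; query: string; owner: string; position: { x: number; y: number }; }

interface UserToolbar {
  user: string;
  lastQuery: string; // visible to all users, raising awareness of each other's activity
}

class SharedSurface {
  toolbars = new Map<string, UserToolbar>();
  clips: ImageClip[] = [];

  search(user: string, query: string): void {
    // The toolbar both performs the search and broadcasts what the user is doing.
    this.toolbars.set(user, { user, lastQuery: query });
    // Results arrive as independent clips that any user can later move or share.
    this.clips.push({ id: `${user}-${query}`, query, owner: user, position: { x: 0, y: 0 } });
  }

  moveClip(id: string, to: { x: number; y: number }, newOwner?: string): void {
    const clip = this.clips.find((c) => c.id === id);
    if (!clip) return;
    clip.position = to;
    if (newOwner) clip.owner = newOwner; // sharing = handing the clip to another user
  }
}

const surface = new SharedSurface();
surface.search("alice", "sailboats");
surface.moveClip("alice-sailboats", { x: 120, y: 40 }, "bob");
console.log(surface.clips[0].owner); // "bob"
```
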
  • Publication number: 20110157025
    Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor and setting a mode constraint based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.
    Type: Application
    Filed: December 30, 2009
    Publication date: June 30, 2011
    Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
  • Publication number: 20110134047
    Abstract: Embodiments are disclosed herein that relate to multi-modal interaction on a computing device comprising a multi-touch display. One disclosed embodiment comprises a method of multi-modal interaction including recognizing a hand posture of a user's first hand directed at the display and displaying a modal region based on the hand posture, wherein the modal region defines an area on the display. The method further includes receiving an input selecting a mode to be applied to the modal region, wherein the mode indicates functionalities to be associated with the modal region and defines a mapping of touch gestures to actions associated with the mode. The method further includes, while the modal region remains displayed, recognizing a touch gesture from a user's second hand directed at the display within the modal region and performing an action on the display based upon a mapping of the touch gesture.
    Type: Application
    Filed: December 4, 2009
    Publication date: June 9, 2011
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Paul Armistead Hoover, Kay Hofmeester
  • Publication number: 20110117526
    Abstract: A method for providing multi-touch input initiation training on a display surface is disclosed. A set of one or more registration hand postures is determined, where each registration hand posture corresponds to one or more gestures executable from that registration hand posture. A registration posture guide is displayed on the display surface. The registration posture guide includes a catalogue for each registration hand posture, where the catalogue includes a contact silhouette showing a model touch-contact interface between the display surface and that registration hand posture.
    Type: Application
    Filed: November 16, 2009
    Publication date: May 19, 2011
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Hrvoje Benko
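
The registration posture guide reads as a catalogue data structure: each entry pairs a posture's model contact silhouette with the gestures executable from it. A minimal illustrative sketch (all names hypothetical):

```typescript
// Catalogue data model for a registration posture guide.

interface ContactSilhouette {
  // A coarse bitmap of where the hand would contact the surface (true = contact).
  mask: boolean[][];
}

interface PostureCatalogueEntry {
  posture: string;               // e.g. "two-finger pair"
  silhouette: ContactSilhouette; // model touch-contact interface for the posture
  gestures: string[];            // gestures executable from this registration posture
}

const guide: PostureCatalogueEntry[] = [
  {
    posture: "two-finger pair",
    silhouette: { mask: [[true, true]] },
    gestures: ["pinch-zoom", "rotate"],
  },
  {
    posture: "flat hand",
    silhouette: { mask: [[true, true, true, true, true]] },
    gestures: ["sweep-clear"],
  },
];

// Rendering the guide means drawing each silhouette alongside its gesture list.
for (const entry of guide) {
  console.log(entry.posture, "->", entry.gestures.join(", "));
}
```
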
  • Publication number: 20110117535
    Abstract: A method for providing multi-touch input training on a display surface is disclosed. A touch input is detected at one or more regions of the display surface. A visualization of the touch input is displayed at a location of the display surface offset from the touch input. One or more annotations are displayed at a location of the display surface offset from the touch input and proximate to the visualization, where each annotation shows a different legal continuation of the touch input.
    Type: Application
    Filed: November 16, 2009
    Publication date: May 19, 2011
    Applicant: Microsoft Corporation
    Inventors: Hrvoje Benko, Daniel J. Wigdor, Dustin Freeman
  • Publication number: 20110119216
    Abstract: A computing device that detects precursory user-input preactions executed in an instructive region and user-input action gestures executed in a functionally-active region is provided. The computing device includes a natural input trainer to present a predictive input cue on a display in response to detecting a precursory user-input preaction performed in the instructive region. The computing device also includes an interface engine to execute a computing function in response to detecting a successive user-input action gesture performed in the functionally-active region subsequent to detection of the precursory user-input preaction.
    Type: Application
    Filed: November 16, 2009
    Publication date: May 19, 2011
    Applicant: Microsoft Corporation
    Inventor: Daniel J. Wigdor
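
The two-region split above can be sketched as follows: a preaction in the instructive region yields a predictive cue without executing anything, and the successive action gesture in the functionally-active region executes the predicted function. Everything below is an assumption-laden illustration:

```typescript
// Instructive region (cues only) vs. functionally-active region (execution).

interface Point { x: number; y: number; }
interface Rect { x: number; y: number; width: number; height: number; }

const INSTRUCTIVE: Rect = { x: 0, y: 0, width: 400, height: 100 };  // approach/preview zone
const ACTIVE: Rect = { x: 0, y: 100, width: 400, height: 300 };     // where gestures take effect

const inRect = (r: Rect, p: Point) =>
  p.x >= r.x && p.x < r.x + r.width && p.y >= r.y && p.y < r.y + r.height;

let pendingCue: string | null = null;

// A precursory preaction (e.g. a hover posture) in the instructive region
// produces a predictive cue but executes nothing.
function onPreaction(at: Point, posture: string): void {
  if (inRect(INSTRUCTIVE, at)) {
    pendingCue = `a ${posture} here will scroll the list`;
    console.log("cue:", pendingCue);
  }
}

// The successive action gesture in the functionally-active region executes
// the computing function the cue predicted.
function onActionGesture(at: Point): void {
  if (inRect(ACTIVE, at) && pendingCue) {
    console.log("execute: scroll");
    pendingCue = null;
  }
}

onPreaction({ x: 50, y: 40 }, "two-finger drag");
onActionGesture({ x: 50, y: 200 });
```
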
  • Publication number: 20110093821
    Abstract: A computing system for displaying a GUI element on a natural user interface is described herein. The computing system includes a display configured to display a natural user interface of a program executed on the computing system, and a gesture sensor configured to detect a gesture input directed at the natural user interface by a user. The computing system also includes a processor configured to execute a gesture-recognizing module for recognizing a registration phase, an operation phase, and a termination phase of the gesture input, and a gesture assist module configured to first display a GUI element overlaid upon the natural user interface in response to recognition of the registration phase. The GUI element includes a visual or audio operation cue to prompt the user to carry out the operation phase of the gesture input, and a selector manipulatable by the user via the operation phase of the gesture.
    Type: Application
    Filed: October 20, 2009
    Publication date: April 21, 2011
    Applicant: Microsoft Corporation
    Inventors: Daniel J. Wigdor, Paul Armistead Hoover
  • Publication number: 20110041096
    Abstract: A method of operating a graphical user interface of a computing device is disclosed. The method comprises displaying a graphical user interface (GUI) element on a touch-sensitive display screen. The method further comprises, in response to receiving touch input data indicative of a one-touch gesture, mapping the one-touch gesture to a corresponding GUI element function. The method further comprises, in response to receiving touch input data indicative of a multi-touch gesture, mapping the multi-touch gesture to the corresponding GUI element function. The method further comprises transforming display of the GUI element on the touch-sensitive display screen based on the corresponding GUI element function.
    Type: Application
    Filed: August 14, 2009
    Publication date: February 17, 2011
    Inventors: Vanessa A. Larco, Daniel J. Wigdor, Sarah Graham Williams