Patents by Inventor Jarrod Lombardo
Jarrod Lombardo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9389764
Abstract: Various embodiments enable target disambiguation and correction. In one or more embodiments, target disambiguation includes an entry mode in which attempts are made to disambiguate one or more targets that have been selected by a user, and an exit mode which exits target disambiguation. Entry mode can be triggered in a number of different ways including, by way of example and not limitation, acquisition of multiple targets, selection latency, a combination of multiple target acquisition and selection latency, and the like. Exit mode can be triggered in a number of different ways including, by way of example and not limitation, movement of a target selection mechanism outside of a defined geometry, speed of movement of the target selection mechanism, and the like.
Type: Grant
Filed: May 27, 2011
Date of Patent: July 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Paul Armistead Hoover, Michael J. Patten, Theresa B. Pittappilly, Jan-Kristian Markiewicz, Adrian J. Garside, Maxim V. Mazeev, Jarrod Lombardo
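The entry and exit triggers the abstract lists can be pictured as two predicates over the touch state. This is only an illustrative sketch; the function names and threshold values are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the entry/exit logic described in the abstract.
# All names and threshold constants are illustrative, not from the patent.

LATENCY_THRESHOLD_S = 0.5   # selection dwell time that triggers entry mode
EXIT_RADIUS_PX = 100.0      # defined geometry whose crossing triggers exit
EXIT_SPEED_PX_S = 800.0     # pointer speed that triggers exit

def should_enter_disambiguation(num_targets_hit, selection_latency_s):
    """Enter when the touch acquires multiple targets, or when the
    selection lingers past the latency threshold (or both)."""
    return num_targets_hit > 1 or selection_latency_s >= LATENCY_THRESHOLD_S

def should_exit_disambiguation(distance_from_origin_px, pointer_speed_px_s):
    """Exit when the target selection mechanism leaves the defined
    geometry or moves too fast to be a correction gesture."""
    return (distance_from_origin_px > EXIT_RADIUS_PX
            or pointer_speed_px_s > EXIT_SPEED_PX_S)
```

A touch landing on two adjacent on-screen targets would enter disambiguation immediately; dragging far or fast away from the ambiguous region would exit it.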
-
Patent number: 8514188
Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and a mode constraint is set based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.Type: Grant
Filed: December 30, 2009
Date of Patent: August 20, 2013
Assignee: Microsoft Corporation
Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
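The posture-to-constraint idea can be sketched as a lookup from the initial hand posture to the parameter that later gestures must leave untouched. The posture names, parameter set, and mapping below are illustrative assumptions, not the patent's claims.

```python
# Illustrative mapping from initial hand posture to a mode constraint;
# the posture names and parameter choices are assumptions, not from the patent.

POSTURE_CONSTRAINTS = {
    "one_finger": None,        # no constraint: all parameters free
    "flat_hand": "rotation",   # rotation is held fixed
    "edge_of_hand": "scale",   # scale is held fixed
}

class VirtualObject:
    def __init__(self):
        self.x, self.y, self.angle, self.scale = 0.0, 0.0, 0.0, 1.0

def apply_gesture(obj, constraint, dx=0.0, dy=0.0, dangle=0.0, dscale=0.0):
    """Modulate the unconstrained parameters while maintaining the
    constrained parameter set by the initial hand posture."""
    obj.x += dx
    obj.y += dy
    if constraint != "rotation":
        obj.angle += dangle
    if constraint != "scale":
        obj.scale *= (1.0 + dscale)
    return obj

obj = apply_gesture(VirtualObject(), POSTURE_CONSTRAINTS["flat_hand"],
                    dx=5.0, dangle=30.0)
# obj translates, but obj.angle stays 0.0: rotation is the constrained parameter
```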
-
Publication number: 20120304061
Abstract: Various embodiments enable target disambiguation and correction. In one or more embodiments, target disambiguation includes an entry mode in which attempts are made to disambiguate one or more targets that have been selected by a user, and an exit mode which exits target disambiguation. Entry mode can be triggered in a number of different ways including, by way of example and not limitation, acquisition of multiple targets, selection latency, a combination of multiple target acquisition and selection latency, and the like. Exit mode can be triggered in a number of different ways including, by way of example and not limitation, movement of a target selection mechanism outside of a defined geometry, speed of movement of the target selection mechanism, and the like.
Type: Application
Filed: May 27, 2011
Publication date: November 29, 2012
Inventors: Paul Armistead Hoover, Michael J. Patten, Theresa B. Pittappilly, Jan-Kristian Markiewicz, Adrian J. Garside, Maxim V. Mazeev, Jarrod Lombardo
-
Publication number: 20120092381
Abstract: An invention is disclosed for using touch gestures to zoom a video to full-screen. As the user reverse-pinches on a touch-sensitive surface to zoom in on a video, the invention tracks the amount of a zoom. When the user has zoomed to the point where one of the dimensions (height or width) of the video reaches a threshold (such as some percentage of a dimension of the display device, e.g. the width of the video reaches 80% of the width of the display device), the invention determines to display the video in full-screen, and "snaps" the video to full-screen. The invention may do this by way of an animation, such as expanding the video to fill the screen.
Type: Application
Filed: October 19, 2010
Publication date: April 19, 2012
Applicant: Microsoft Corporation
Inventors: Paul Armistead Hoover, Vishnu Sivaji, Jarrod Lombardo, Daniel John Wigdor
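The snap decision described here reduces to a single threshold check per dimension. The 80% figure is the abstract's own example; the function name and signature below are mine, a minimal sketch rather than the patented implementation.

```python
# Sketch of the snap-to-fullscreen decision from the abstract.
# SNAP_FRACTION uses the abstract's 80% example; the API shape is assumed.

SNAP_FRACTION = 0.8

def should_snap_fullscreen(video_w, video_h, display_w, display_h,
                           fraction=SNAP_FRACTION):
    """Snap once either zoomed video dimension reaches the threshold
    fraction of the corresponding display dimension."""
    return (video_w >= fraction * display_w
            or video_h >= fraction * display_h)
```

On a 1920x1080 display, a video zoomed to 1600 pixels wide crosses the 80% width threshold (1536 px) and would snap to full-screen, typically via an expanding animation as the abstract notes.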
-
Publication number: 20110300516
Abstract: Braille symbols are automatically read aloud, to aid in learning or using Braille. More generally, a tile which bears a tactile symbol and a corresponding visual symbol is placed in a sensing area, automatically distinguished from other tiles, and vocalized. The tile is sensed and distinguished from other tiles based on various signal mechanisms, or by computer vision analysis of the tile's visual symbol. Metadata is associated with the tile. Additional placed tiles are similarly sensed, identified, and vocalized. When multiple tiles are placed in the sensing area, they are vocalized individually, and an audible phrase spelled by their arrangement of tactile symbols is also produced. A lattice is provided with locations for receiving tiles. Metadata are associated with lattice locations. Tile placement is used to control an application program which responds to tile identifications.
Type: Application
Filed: June 2, 2010
Publication date: December 8, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Daniel Wigdor, Meredith June Morris, Jarrod Lombardo, Annuska Perkins, Sean Hayes, Curtis Douglas Aumiller
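The identify-then-vocalize flow can be sketched as a metadata lookup per sensed tile followed by a phrase announcement for the whole arrangement. The tile IDs, metadata table, and speech stand-in below are all illustrative assumptions.

```python
# Minimal sketch of tile identification and vocalization; the tile IDs,
# metadata table, and the speak stand-in are assumptions for illustration.

TILE_METADATA = {
    "tile_17": {"symbol": "c", "braille": "⠉"},
    "tile_02": {"symbol": "a", "braille": "⠁"},
    "tile_29": {"symbol": "t", "braille": "⠞"},
}

def vocalize(text, spoken):
    spoken.append(text)  # stand-in for a text-to-speech call

def on_tiles_placed(tile_ids, spoken):
    """Vocalize each identified tile individually, then the audible
    phrase spelled by the arrangement as a whole."""
    for tid in tile_ids:
        vocalize(TILE_METADATA[tid]["symbol"], spoken)
    phrase = "".join(TILE_METADATA[t]["symbol"] for t in tile_ids)
    vocalize(phrase, spoken)
    return phrase
```

Placing the three tiles above in sequence would announce "c", "a", "t", then the assembled word "cat".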
-
Publication number: 20110270824
Abstract: Collaborative search and share is provided by a method of facilitating collaborative content-finding, which includes displaying a toolbar user interface object for each user that not only allows each user to perform content-finding but also increases awareness of each user to the activities of other users. The method further includes displaying content results as various disparate image clips that can easily be shared, moved, etc. amongst users.
Type: Application
Filed: April 30, 2010
Publication date: November 3, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Meredith June Morris, Daniel J. Wigdor, Vanessa Adriana Larco, Jarrod Lombardo, Sean Clarence McDirmid, Chao Wang, Monty Todd LaRue, Erez Kikin-Gil
-
Publication number: 20110157025
Abstract: A method of controlling a virtual object within a virtual workspace includes recognizing a hand posture of an initial touch gesture directed to a touch-input receptor, and a mode constraint is set based on the hand posture. The mode constraint specifies a constrained parameter of a virtual object that is to be maintained responsive to a subsequent touch gesture. The method further includes recognizing a subsequent touch gesture directed to the touch-input receptor. An unconstrained parameter of the virtual object is modulated responsive to the subsequent touch gesture while the constrained parameter of the virtual object is maintained in accordance with the mode constraint.
Type: Application
Filed: December 30, 2009
Publication date: June 30, 2011
Inventors: Paul Armistead Hoover, Maxim Oustiogov, Daniel J. Wigdor, Hrvoje Benko, Jarrod Lombardo
-
Publication number: 20110138284
Abstract: A touch screen input device is provided which simulates a 3-state input device such as a mouse. One of these states is used to preview the effect of activating a graphical user interface element when the screen is touched. In this preview state touching a graphical user interface element on the screen with a finger or stylus does not cause the action associated with that element to be performed. Rather, when the screen is touched while in the preview state audio cues are provided to the user indicating what action would arise if the action associated with the touched element were to be performed.
Type: Application
Filed: December 3, 2009
Publication date: June 9, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Daniel John Wigdor, Jarrod Lombardo, Annuska Zolyomi Perkins, Sean Hayes
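The three-state behavior can be sketched as a dispatcher that, in the preview state, only emits an audio cue describing the element's action, and performs the action only in a separate activation state. The state names, element shape, and cue wording below are assumptions for illustration.

```python
# Sketch of a 3-state touch model (out-of-range / preview / activate);
# state names, element format, and cue text are illustrative assumptions.

OUT_OF_RANGE, PREVIEW, ACTIVATE = "out_of_range", "preview", "activate"

def handle_touch(element, state, cues, actions):
    """In the preview state a touch only announces what the element
    would do; only the activate state performs the associated action."""
    if state == PREVIEW:
        cues.append(f"Would {element['action']}")  # stand-in audio cue
    elif state == ACTIVATE:
        actions.append(element["action"])          # perform the action
    # OUT_OF_RANGE: the touch is ignored entirely
```

Touching a "send mail" button in the preview state would thus speak a cue without sending anything, mirroring how hovering a mouse previews an element without clicking it.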