Patents by Inventor Josh Clow

Josh Clow has filed for patents to protect the following inventions. This listing includes published patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10606564
    Abstract: Architecture that generates a companion window in combination with a source application experience to enable the accomplishment of a side task yet not switch away from the context of the source application. The companion window experience is a window that is rendered proximate (e.g., beside) a user's source application experience, in a predictable location, and with a predictable user model for invocation and dismissal. The companion window allows the user to retain full visual context of the associated source application experience, while rendering activities that directly pertain to the source application experience or activities that allow the user to interact with two applications.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: March 31, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jonathan Gordner, Stephan Hoefnagels, Josh Clow, Colin Jeanne, Alexander Allen, Kenneth Parker, Nandini Bhattacharya, Jonathan Li, Kieran Snyder
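The placement behavior this abstract describes can be pictured with a short sketch. The Python below is a minimal illustration, not the patented implementation: the Rect fields, the companion_rect function, and the 320-pixel default width are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: int
    top: int
    width: int
    height: int

def companion_rect(source: Rect, screen: Rect, companion_width: int = 320) -> Rect:
    """Dock a companion window immediately to the right of the source window,
    clamped to the screen, so the user keeps full visual context of the source
    application. Names and the default width are illustrative assumptions."""
    left = min(source.left + source.width, screen.width - companion_width)
    return Rect(left=left, top=source.top, width=companion_width, height=source.height)

# Example: a 1200-px-wide source window on a 1920x1080 screen yields a
# companion docked at x=1300, matching the source window's height.
print(companion_rect(Rect(100, 80, 1200, 800), Rect(0, 0, 1920, 1080)))
```

The point of the predictable dock position is that invocation and dismissal never reflow or obscure the source application's content.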
  • Patent number: 9329768
    Abstract: Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: May 3, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: David A. Matthews, Jan-Kristian Markiewicz, Reed L. Townsend, Pamela De La Torre Baltierra, Todd A. Torset, Josh A. Clow, Xiao Tu, Leroy B. Keely
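As a rough illustration of the mode switch this abstract describes (touch down inside the content region, then a drag whose distance of uninterrupted contact crosses a threshold), here is a hedged Python sketch; the 8-pixel threshold and all class and field names are assumptions, not the patented code.

```python
import math
from dataclasses import dataclass

DRAG_THRESHOLD_PX = 8  # assumed drag distance that distinguishes a drag from a tap

@dataclass
class PanningGesture:
    down_x: float
    down_y: float
    in_content_region: bool       # actuation location fell inside the content region
    app_supports_scrolling: bool  # the rendering app exposes scrolling functionality
    panning: bool = False

    def on_move(self, x: float, y: float) -> tuple[float, float]:
        """Return a (dx, dy) content offset, or (0, 0) when not panning."""
        if not self.panning:
            moved = math.hypot(x - self.down_x, y - self.down_y)
            if self.in_content_region and moved >= DRAG_THRESHOLD_PX:
                self.panning = True  # enter panning mode
        if self.panning and self.app_supports_scrolling:
            dx, dy = x - self.down_x, y - self.down_y
            self.down_x, self.down_y = x, y  # pan relative to the last contact point
            return dx, dy
        return (0.0, 0.0)
```

Gating on both the actuation location and the drag distance is what lets the same touch gesture fall through to ordinary selection when it starts outside the content region or never travels far enough.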
  • Patent number: 9081439
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: July 14, 2015
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
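The pre-interaction, expanded, and activated conditions in this abstract can be read as a small state machine. The sketch below is illustrative only; the state names, the pointer-proximity trigger, and the ActivationTarget class are assumptions layered on the abstract's description.

```python
from enum import Enum, auto

class TargetState(Enum):
    PRE_INTERACTION = auto()  # small, unobtrusive appearance
    EXPANDED = auto()         # larger size and/or different visual appearance
    ACTIVATED = auto()        # text input system (e.g., on-screen keyboard) shown

class ActivationTarget:
    def __init__(self, y_position: int):
        self.y_position = y_position  # user-changeable location on the display
        self.state = TargetState.PRE_INTERACTION

    def on_pointer_near(self) -> None:
        """User input directed at the target in its pre-interaction condition."""
        if self.state is TargetState.PRE_INTERACTION:
            self.state = TargetState.EXPANDED

    def on_tap(self) -> None:
        self.state = TargetState.ACTIVATED  # activate the text input system

    def on_dismiss(self) -> None:
        self.state = TargetState.PRE_INTERACTION
```

The two-step appearance change matters because the dormant target can stay out of the way until input directed at it signals likely intent to enter text.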
  • Publication number: 20140111440
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Application
    Filed: December 27, 2013
    Publication date: April 24, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Patent number: 8619053
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: December 31, 2013
    Assignee: Microsoft Corporation
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Patent number: 8375336
    Abstract: Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface.
    Type: Grant
    Filed: October 3, 2008
    Date of Patent: February 12, 2013
    Assignee: Microsoft Corporation
    Inventors: David A. Matthews, Jan-Kristian Markiewicz, Reed L. Townsend, Pamela De La Torre Baltierra, Todd A. Torset, Josh A. Clow, Xiao Tu, Leroy B. Keely
  • Publication number: 20120293418
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Application
    Filed: July 27, 2012
    Publication date: November 22, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Patent number: 8253708
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Grant
    Filed: July 13, 2009
    Date of Patent: August 28, 2012
    Assignee: Microsoft Corporation
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Publication number: 20120167004
    Abstract: Architecture that generates a companion window in combination with a source application experience to enable the accomplishment of a side task yet not switch away from the context of the source application. The companion window experience is a window that is rendered proximate (e.g., beside) a user's source application experience, in a predictable location, and with a predictable user model for invocation and dismissal. The companion window allows the user to retain full visual context of the associated source application experience, while rendering activities that directly pertain to the source application experience or activities that allow the user to interact with two applications.
    Type: Application
    Filed: December 27, 2010
    Publication date: June 28, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jonathan Ian Gordner, Stephan Hoefnagels, Josh Clow, Colin Jeanne, Alexander Allen, Kenneth Parker, Nandini Bhattacharya, Jonathan Li, Kieran Snyder
  • Patent number: 7634738
    Abstract: Systems, methods, and computer-readable media process computer input data (such as electronic ink data, speech input data, keyboard input data, etc.), including focus change data, in a manner so that the input insertion range better comports with the user's original intent. More specifically, user input data may be accepted, before, during, and/or after a focus change event is initiated, and the systems and methods will process this input data in an intuitive manner, directing the data to areas of an application program or the operating system that better comport with the user's original intent. In this manner, loss of input data may be avoided and misdirected input data may be avoided, thereby lowering user frustration during focus change events.
    Type: Grant
    Filed: November 19, 2004
    Date of Patent: December 15, 2009
    Assignee: Microsoft Corporation
    Inventors: Josh A. Clow, Adrian J. Garside, David V. Winkler
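One way to picture the input handling this abstract describes is a router that buffers input while a focus change is in flight and then delivers it where the user originally aimed it. The Python below is a speculative sketch under that reading; the queueing scheme and every name in it are assumptions.

```python
from collections import deque

class Control:
    def __init__(self, name: str):
        self.name = name
        self.received: list[str] = []

    def receive(self, data: str) -> None:
        self.received.append(data)

class FocusAwareInputRouter:
    def __init__(self, initial_focus: Control):
        self.focused = initial_focus
        self.changing_focus = False
        self.pending: deque[str] = deque()  # input buffered during a focus change

    def begin_focus_change(self) -> None:
        self.changing_focus = True

    def end_focus_change(self, new_focus: Control) -> None:
        intended = self.focused          # in-flight input was aimed at the old focus
        self.focused = new_focus
        self.changing_focus = False
        while self.pending:              # deliver buffered input per original intent
            intended.receive(self.pending.popleft())

    def handle_input(self, data: str) -> None:
        if self.changing_focus:
            self.pending.append(data)    # hold the input rather than misdirect it
        else:
            self.focused.receive(data)

# Ink written while focus moves from a notes field to a search box still lands
# in the notes field, matching the user's original intent.
notes, search = Control("notes"), Control("search")
router = FocusAwareInputRouter(notes)
router.begin_focus_change()
router.handle_input("hello ink")
router.end_focus_change(search)
assert notes.received == ["hello ink"]
```

Buffering instead of dropping is the key move: slow input modalities like ink and speech often straddle a focus change, and routing them by original intent avoids both data loss and misdirected input.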
  • Publication number: 20090292989
    Abstract: Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface.
    Type: Application
    Filed: October 3, 2008
    Publication date: November 26, 2009
    Applicant: MICROSOFT CORPORATION
    Inventors: David A. Matthews, Jan-Kristian Markiewicz, Reed L. Townsend, Pamela De La Torre Baltierra, Todd A. Torset, Josh A. Clow, Xiao Tu, Leroy B. Keely
  • Publication number: 20090273565
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Application
    Filed: July 13, 2009
    Publication date: November 5, 2009
    Applicant: Microsoft Corporation
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Patent number: 7561145
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Grant
    Filed: March 18, 2005
    Date of Patent: July 14, 2009
    Assignee: Microsoft Corporation
    Inventors: Adrian J. Garside, F. David Jones, Josh A. Clow, Judy C. Tandog, Leroy B. Keely, Tracy Dianne Schultz
  • Patent number: 7461348
    Abstract: Systems, methods, and computer-readable media process computer input data (such as electronic ink data, speech input data, keyboard input data, etc.), including focus change data, in a manner so that the input insertion range better comports with the user's original intent. More specifically, user input data may be accepted, before, during, and/or after a focus change event is initiated, and the systems and methods will process this input data in an intuitive manner, directing the data to areas of an application program or the operating system that better comport with the user's original intent. In this manner, loss of input data may be avoided and misdirected input data may be avoided, thereby lowering user frustration during focus change events.
    Type: Grant
    Filed: November 19, 2004
    Date of Patent: December 2, 2008
    Assignee: Microsoft Corporation
    Inventors: Josh A. Clow, Adrian J. Garside, David V. Winkler
  • Publication number: 20060209040
    Abstract: User interfaces, methods, systems, and computer-readable media for activating and/or displaying text input systems on display devices may include: (a) displaying a text input system activation target at a user changeable location on a display device; (b) receiving user input directed to the activation target; and (c) activating a text input system in response to the user input. Such user interfaces, methods, and systems further may include (d) displaying a pre-interaction condition of the activation target; (e) receiving user input directed to the activation target in this pre-interaction condition; and (f) changing an appearance of the activation target from the pre-interaction condition to a larger size and/or a different visual appearance in response to this user input. Additional aspects of this invention relate to computer-readable media for providing user interfaces, systems, and methods as described above.
    Type: Application
    Filed: March 18, 2005
    Publication date: September 21, 2006
    Applicant: Microsoft Corporation
    Inventors: Adrian Garside, F. Jones, Josh Clow, Judy Tandog, Leroy Keely, Tracy Schultz
  • Publication number: 20060123159
    Abstract: Systems, methods, and computer-readable media process computer input data (such as electronic ink data, speech input data, keyboard input data, etc.), including focus change data, in a manner so that the input insertion range better comports with the user's original intent. More specifically, user input data may be accepted, before, during, and/or after a focus change event is initiated, and the systems and methods will process this input data in an intuitive manner, directing the data to areas of an application program or the operating system that better comport with the user's original intent. In this manner, loss of input data may be avoided and misdirected input data may be avoided, thereby lowering user frustration during focus change events.
    Type: Application
    Filed: November 19, 2004
    Publication date: June 8, 2006
    Applicant: Microsoft Corporation
    Inventors: Josh Clow, Adrian Garside, David Winkler
  • Publication number: 20060112349
    Abstract: Systems, methods, and computer-readable media process computer input data (such as electronic ink data, speech input data, keyboard input data, etc.), including focus change data, in a manner so that the input insertion range better comports with the user's original intent. More specifically, user input data may be accepted, before, during, and/or after a focus change event is initiated, and the systems and methods will process this input data in an intuitive manner, directing the data to areas of an application program or the operating system that better comport with the user's original intent. In this manner, loss of input data may be avoided and misdirected input data may be avoided, thereby lowering user frustration during focus change events.
    Type: Application
    Filed: November 19, 2004
    Publication date: May 25, 2006
    Applicant: Microsoft Corporation
    Inventors: Josh Clow, Adrian Garside, David Winkler