Ink Modes

Techniques for ink modes are described. According to various embodiments, different ink modes are supported. For instance, implementations support ink for selection, ink for commanding, ink for recognition, and so forth. According to various embodiments, a visual affordance of a particular active ink mode is presented on a document with which a user is interacting. For instance, the visual affordance is presented in response to detecting a proximity of a pen to an input surface such as a touch display. Further, different ink modes each are associated with different respective visual affordances.

Description
RELATED APPLICATION

This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 62/002,648, Attorney Docket Number 355121.01, filed May 23, 2014 and titled “Ink,” the entire disclosure of which is incorporated herein by reference.

BACKGROUND

Devices today (e.g., computing devices) typically support a variety of different input techniques. For instance, a particular device may receive input from a user via a keyboard, a mouse, voice input, touch input (e.g., to a touchscreen), and so forth. One particularly intuitive input technique enables a user to utilize a touch instrument (e.g., a pen, a stylus, a finger, and so forth) to provide freehand input to a touch-sensing functionality such as a touchscreen, which is interpreted as digital ink. The freehand input may be converted to a corresponding visual representation on a display, such as for taking notes, for creating and editing an electronic document, and so forth. Many current techniques for digital ink, however, typically provide limited ink functionality.

SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

Techniques for ink modes are described. According to various embodiments, different ink modes are supported. For instance, implementations support ink for selection, ink for commanding, ink for recognition, and so forth. According to various embodiments, a visual affordance of a particular active ink mode is presented on a document with which a user is interacting. For instance, the visual affordance is presented in response to detecting a proximity of a pen to an input surface such as a touch display. Further, different ink modes each are associated with different respective visual affordances.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items.

FIG. 1 is an illustration of an environment in an example implementation that is operable to employ techniques discussed herein in accordance with one or more embodiments.

FIG. 2 depicts an example implementation scenario for a permanent ink mode in accordance with one or more embodiments.

FIG. 3 depicts an example implementation scenario for a transient ink mode in accordance with one or more embodiments.

FIG. 4 depicts an example implementation scenario for a transient ink mode in accordance with one or more embodiments.

FIG. 5 depicts an example implementation scenario for a transient ink mode in accordance with one or more embodiments.

FIG. 6 depicts an example implementation scenario for a transient ink mode in accordance with one or more embodiments.

FIG. 7 depicts an example implementation scenario for multiple transient ink layers in accordance with one or more embodiments.

FIG. 8 depicts an example implementation scenario for presenting an inking menu in accordance with one or more embodiments.

FIG. 9 is a flow diagram that describes steps in a method for processing ink according to a current ink mode in accordance with one or more embodiments.

FIG. 10 is a flow diagram that describes steps in a method for a transient ink timer in accordance with one or more embodiments.

FIG. 11 is a flow diagram that describes steps in a method for propagating transient ink to different transient ink layers for different users in accordance with one or more embodiments.

FIG. 12 is a flow diagram that describes steps in a method for presenting an ink menu in accordance with one or more embodiments.

FIG. 13 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 14 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 15 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 16 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 17 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 18 depicts an example implementation scenario for ink for selection in accordance with one or more embodiments.

FIG. 19 is a flow diagram that describes steps in a method for ink for selection in accordance with one or more embodiments.

FIG. 20 depicts an example implementation scenario for ink notes in accordance with one or more embodiments.

FIG. 21 depicts an example implementation scenario for ink notes in accordance with one or more embodiments.

FIG. 22 is a flow diagram that describes steps in a method for generating an ink note in accordance with one or more embodiments.

FIG. 23 depicts an example implementation scenario for ink for commanding in accordance with one or more embodiments.

FIG. 24 depicts an example implementation scenario for ink for commanding in accordance with one or more embodiments.

FIG. 25 depicts an example implementation scenario for ink for commanding in accordance with one or more embodiments.

FIG. 26 depicts an example implementation scenario for ink for commanding in accordance with one or more embodiments.

FIG. 27 depicts an example implementation scenario for ink for commanding in accordance with one or more embodiments.

FIG. 28 is a flow diagram that describes steps in a method for ink for commanding in accordance with one or more embodiments.

FIG. 29 is a flow diagram that describes steps in a method for ink for commanding in accordance with one or more embodiments.

FIG. 30 depicts an example implementation scenario for ink for shape recognition in accordance with one or more embodiments.

FIG. 31 depicts an example implementation scenario for ink for text recognition in accordance with one or more embodiments.

FIG. 32 depicts an example implementation scenario for ink for text recognition in accordance with one or more embodiments.

FIG. 33 depicts an example implementation scenario for ink for text recognition in accordance with one or more embodiments.

FIG. 34 depicts an example implementation scenario for ink for text recognition in accordance with one or more embodiments.

FIG. 35 depicts an example implementation scenario for ink for text recognition in accordance with one or more embodiments.

FIG. 36 is a flow diagram that describes steps in a method for ink for recognition in accordance with one or more embodiments.

FIG. 37 is a flow diagram that describes steps in a method for ink for text recognition in accordance with one or more embodiments.

FIG. 38 is a flow diagram that describes steps in a method for ink for character recognition in accordance with one or more embodiments.

FIG. 39 illustrates an example system and computing device as described with reference to FIG. 1, which are configured to implement embodiments of techniques described herein.

DETAILED DESCRIPTION

Overview

Techniques for ink modes are described. Generally, ink refers to freehand input to a touch-sensing functionality such as a touchscreen, which is interpreted as digital ink, referred to herein as “ink.” Ink may be provided in various ways, such as using a pen (e.g., an active pen, a passive pen, and so forth), a stylus, a finger, and so forth.

According to various implementations, different ink modes are supported. For instance, implementations support ink for selection, ink for commanding, ink for recognition, and so forth.

Ink for selection provides different ways for utilizing ink to select objects, such as visual objects on a display. For instance, different ink gestures applied via freehand input using a pen are converted into different selection shapes for selecting objects. According to various implementations, ink for selection reduces an amount of time and a number of user interactions required to select an object.

Ink for commanding provides different ways for utilizing ink to specify various commands to be performed. For instance, different ink commands applied via freehand input using a pen are recognized and automatically executed. According to various implementations, ink for commanding reduces an amount of time and a number of user interactions required to enter and execute commands. For instance, a user may simply write a command using ink, and the command is automatically recognized and executed without requiring the user to locate and select a visual control or menu item for the command.

Ink for recognition provides different ways for recognizing and converting characters provided via freehand ink. For instance, different ink characters applied via freehand input using a pen are recognized and converted into encoded versions of different shapes and text characters. The shapes and text characters, for instance, are added to a primary content layer of a document. According to various implementations, ink for recognition reduces an amount of time and a number of user interactions required to generate encoded characters, such as shapes, text, and so forth.

According to various implementations, a visual affordance of a particular active ink mode is presented on a document with which a user is interacting. For instance, the visual affordance is presented in response to detecting a proximity of a pen to an input surface such as a touch display. Further, different ink modes each are associated with different respective visual affordances. Thus, a user is informed of which ink mode is currently active without having to access a settings menu or other interface separate from a document with which the user is interacting, thus reducing user interactions required to ascertain an active ink mode.

In the following discussion, an example environment is first described that is operable to employ techniques described herein. Next, a section entitled “Example Implementation Scenarios and Procedures” describes some example implementation scenarios and methods for ink modes in accordance with one or more embodiments. Finally, a section entitled “Example System and Device” describes an example system and device that are operable to employ techniques discussed herein in accordance with one or more embodiments.

Example Environment

FIG. 1 is an illustration of an environment 100 in an example implementation that is operable to employ techniques for ink modes discussed herein. Environment 100 includes a client device 102 which can be embodied as any suitable device such as, by way of example and not limitation, a smartphone, a tablet computer, a portable computer (e.g., a laptop), a desktop computer, a wearable device, and so forth. In at least some implementations, the client device 102 represents a smart appliance, such as an Internet of Things (“IoT”) device. Thus, the client device 102 may range from a system with significant processing power, to a lightweight device with minimal processing power. One of a variety of different examples of a client device 102 is shown and described below in FIG. 39.

The client device 102 includes a variety of different functionalities that enable various activities and tasks to be performed. For instance, the client device 102 includes an operating system 104, applications 106, and a communication module 108. Generally, the operating system 104 is representative of functionality for abstracting various system components of the client device 102, such as hardware, kernel-level modules and services, and so forth. The operating system 104, for instance, can abstract various components (e.g., hardware, software, and firmware) of the client device 102 to the applications 106 to enable interaction between the components and the applications 106.

The applications 106 represent functionalities for performing different tasks via the client device 102. Examples of the applications 106 include a word processing application, a spreadsheet application, a web browser, a gaming application, and so forth. The applications 106 may be installed locally on the client device 102 to be executed via a local runtime environment, and/or may represent portals to remote functionality, such as cloud-based services, web apps, and so forth. Thus, the applications 106 may take a variety of forms, such as locally-executed code, portals to remotely hosted services, and so forth.

The communication module 108 is representative of functionality for enabling the client device 102 to communicate over wired and/or wireless connections. For instance, the communication module 108 represents hardware and logic for communication via a variety of different wired and/or wireless technologies and protocols.

The client device 102 further includes a display device 110, input mechanisms 112 including a digitizer 114 and touch input devices 116, and an ink module 118. The display device 110 generally represents functionality for visual output for the client device 102. Additionally, the display device 110 represents functionality for receiving various types of input, such as touch input, pen input, and so forth. The input mechanisms 112 generally represent different functionalities for receiving input to the client device 102. Examples of the input mechanisms 112 include gesture-sensitive sensors and devices (e.g., touch-based sensors and movement-tracking sensors such as camera-based sensors), a mouse, a keyboard, a stylus, a touch pad, accelerometers, a microphone with accompanying voice recognition software, and so forth. The input mechanisms 112 may be separate from or integral with the display device 110; integral examples include gesture-sensitive displays with integrated touch-sensitive or motion-sensitive sensors. The digitizer 114 represents functionality for converting various types of input to the display device 110 and the touch input devices 116 into digital data that can be used by the client device 102 in various ways, such as for generating digital ink.

According to various implementations, the ink module 118 represents functionality for performing various aspects of techniques for ink modes discussed herein. Various functionalities of the ink module 118 are discussed below. The ink module 118 includes a transient layer application programming interface (API) 120 and a permanent layer API 122. The transient layer API 120 represents functionality for enabling interaction with a transient ink layer, and the permanent layer API 122 represents functionality for enabling ink interaction with a permanent object (e.g., document) layer. In at least some implementations, the transient layer API 120 and the permanent layer API 122 may be utilized (e.g., by the applications 106) to access transient ink functionality and permanent ink functionality, respectively.
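
By way of illustration only, the following TypeScript sketch shows one hypothetical shape such APIs might take; the interface and member names are assumptions introduced for this example and do not correspond to an actual product API.

```typescript
// Illustrative stroke representation; field names are assumptions.
interface InkStroke {
  points: { x: number; y: number; pressure?: number }[];
  color: string;
  widthPx: number;
}

// Hypothetical surface for the transient layer API 120.
interface TransientLayerAPI {
  // Add a stroke to the transient ink layer bound to a given user identity.
  addStroke(userId: string, stroke: InkStroke): void;
  // Retrieve all strokes in a user's transient ink layer for re-display.
  getLayer(userId: string): InkStroke[];
  // Pin or unpin the layer so its ink is (or is not) persistently displayed.
  setPinned(userId: string, pinned: boolean): void;
}

// Hypothetical surface for the permanent layer API 122.
interface PermanentLayerAPI {
  // Commit a stroke into the primary content layer of the document.
  commitStroke(stroke: InkStroke): void;
}
```

In such a sketch, an application 106 would invoke the transient layer API 120 for annotation-style ink and the permanent layer API 122 for ink that becomes part of the underlying document.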

The environment 100 further includes a pen 124, which is representative of an input device for providing input to the display device 110. Generally, the pen 124 is in a form factor of a traditional pen but includes functionality for interacting with the display device 110 and other functionality of the client device 102. In at least some implementations, the pen 124 is an active pen that includes electronic components for interacting with the client device 102. The pen 124, for instance, includes a battery that can provide power to internal components of the pen 124.

Alternatively or additionally, the pen 124 may include a magnet or other functionality that supports hover detection over the display device 110. This is not intended to be limiting, however, and in at least some implementations the pen 124 may be passive, e.g., a stylus without internal electronics. Generally, the pen 124 is representative of an input device that can provide input that can be differentiated from other types of input by the client device 102. For instance, the digitizer 114 is configured to differentiate between input provided via the pen 124, and input provided by a different input mechanism such as a user's finger, a stylus, and so forth.

The pen 124 includes a pen mode button 126, which represents a selectable control (e.g., a switch) for switching the pen 124 between different pen input modes. Generally, different pen input modes enable input from the pen 124 to be utilized and/or interpreted by the ink module 118 in different ways. Examples of different pen input modes are detailed below.

Having described an example environment in which the techniques described herein may operate, consider now a discussion of an example implementation scenario in accordance with one or more embodiments.

Transient Ink and Permanent Ink

According to various implementations, ink can be applied in different ink modes including a transient ink mode and a permanent ink mode. Generally, transient ink refers to ink that is temporary and that can be used for various purposes, such as invoking particular actions, annotating a document, and so forth. For instance, in transient implementations, ink can be used for annotation layers for electronic documents, temporary visual emphasis, text recognition, invoking various commands and functionalities, and so forth.

Permanent ink generally refers to implementations where ink becomes a part of the underlying object, such as for creating a document, writing on a document (e.g., for annotation and/or editing), applying ink to graphics, and so forth. Permanent ink, for example, can be considered as a graphics object, such as for note taking, for creating visual content, and so forth.

In at least some implementations, a pen (e.g., the pen 124) applies ink whenever the pen is in contact with an input surface, such as the display device 110 and/or other input surface. Further, a pen can apply ink across many different applications, platforms, and services. In one or more implementations, an application and/or service can specify how ink is used in relation to an underlying object, such as a word processing document, a spreadsheet, and so forth. For instance, in some scenarios ink is applied as transient ink, and in other scenarios ink is applied as permanent ink. Examples of different implementations and attributes of transient ink and permanent ink are detailed below.

Example Implementation Scenarios and Procedures

This section describes some example implementation scenarios and example procedures for ink modes in accordance with one or more implementations. The implementation scenarios and procedures may be implemented in the environment 100 described above, the system 3900 of FIG. 39, and/or any other suitable environment. The implementation scenarios and procedures, for example, describe example operations of the client device 102 and the ink module 118. While the implementation scenarios and procedures are discussed with reference to a particular application, it is to be appreciated that techniques for ink modes discussed herein are applicable across a variety of different applications, services, and environments. In at least some embodiments, steps described for the various procedures are implemented automatically and independent of user interaction.

FIG. 2 depicts an example implementation scenario 200 for a permanent ink mode in accordance with one or more implementations. The upper portion of the scenario 200 includes a graphical user interface (GUI) 202 displayed on the display 110. Generally, the GUI 202 represents a GUI for a particular functionality, such as an instance of the applications 106. Also depicted is a user holding the pen 124. Displayed within the GUI 202 is a document 204, e.g., an electronic document generated via one of the applications 106.

Proceeding to the lower portion of the scenario 200, the user brings the pen 124 in proximity to the surface of the display 110 and within the GUI 202. The pen 124, for instance, is placed within a particular distance of the display 110 (e.g., less than 2 centimeters) but not in contact with the display 110. This behavior is generally referred to herein as “hovering” the pen 124. In response to detecting proximity of the pen 124, a hover target 206 is displayed within the GUI 202 and at a point within the GUI 202 that is directly beneath the tip of the pen 124. Generally, the hover target 206 represents a visual affordance that indicates that ink functionality is active such that a user may apply ink to the document 204.

According to various implementations, the visual appearance (e.g., shape, color, shading, and so forth) of the hover target 206 provides a visual cue indicating a current ink mode that is active. In the scenario 200, the hover target is presented as a solid circle, which indicates that a permanent ink mode is active. For instance, if the user proceeds to put the pen 124 in contact with the display 110 to apply ink to the document 204 in a permanent ink mode, the ink will become part of the document 204, e.g., will be added to a primary content layer of the document 204. Consider, for example, that the text (e.g., primary content) displayed in the document 204 was created via ink input in a permanent ink mode. Thus, ink applied in a permanent ink mode represents a permanent ink layer that is added to a primary content layer of the document 204.

In further response to detecting hovering of the pen 124, an ink flag 208 is visually presented adjacent to and/or at least partially overlaying a portion of the document 204. Generally, the ink flag 208 represents a visual affordance that indicates that ink functionality is active such that a user may apply ink to the document 204. In at least some implementations, the ink flag 208 may be presented additionally or alternatively to the hover target 206. In this particular example, the ink flag 208 includes a visual cue indicating a current ink mode that is active. In the scenario 200, the ink flag 208 includes a solid circle, which indicates that a permanent ink mode is active. As further detailed below, the ink flag 208 is selectable to cause an ink menu to be displayed that includes various ink-related functionalities, options, and settings that can be applied.

FIG. 3 depicts an example implementation scenario 300 for a transient ink mode in accordance with one or more implementations. The upper portion of the scenario 300 includes a graphical user interface (GUI) 302 displayed on the display 110. Generally, the GUI 302 represents a GUI for a particular functionality, such as an instance of the applications 106. Displayed within the GUI 302 is a document 304, e.g., an electronic document generated via one of the applications 106. The document 304 includes primary content 306, which represents content generated as part of a primary content layer for the document 304. For instance, in this particular example the document 304 is a text-based document, and thus the primary content 306 includes text that is populated to the document. Various other types of documents and primary content may be employed, such as for graphics, multimedia, web content, and so forth.

As further illustrated, a user is hovering the pen 124 within a certain proximity of the surface of the display 110, such as discussed above with reference to the scenario 200. In response, a hover target 308 is displayed within the document 304 and beneath the tip of the pen. In this particular example, the hover target 308 is presented as a hollow circle, thus indicating that a transient ink mode is active. For instance, if the user proceeds to apply ink to the document 304, the ink will behave according to a transient ink mode. Examples of different transient ink behaviors are detailed elsewhere herein.

Further in response to the user hovering the pen 124 over the display 110, an ink flag 310 is presented. In this particular example, the ink flag 310 includes a hollow circle 312, thus providing a visual cue that a transient ink mode is active.

Proceeding to the lower portion of the scenario 300, the user removes the pen 124 from proximity to the display 110. In response, the hover target 308 and the ink flag 310 are removed from the display 110. For instance, in at least some implementations, a hover target and/or an ink flag are presented when the pen 124 is detected as being hovered over the display 110, and are removed from the display 110 when the pen 124 is removed such that the pen 124 is no longer detected as being hovered over the display 110. This is not intended to be limiting, however, and in at least some implementations, an ink flag may be persistently displayed to indicate that inking functionality is active and/or available.
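
The following is a minimal, non-limiting sketch of how mode-specific hover affordances such as those in the scenarios 200 and 300 might be presented, assuming a browser-style pointer-events input surface; the mode names and glyph choices are illustrative assumptions.

```typescript
// Ink modes referenced in this description; the string values are illustrative.
type InkMode = "permanent" | "transient" | "selection" | "erase" | "command";

// Example hover-target glyphs: a solid circle for permanent ink and a hollow
// circle for transient ink, mirroring the scenarios above.
const hoverGlyph: Record<InkMode, string> = {
  permanent: "●",
  transient: "○",
  selection: "+",
  erase: "x",
  command: ">",
};

// Show the hover target beneath the pen tip while the pen hovers (no buttons pressed).
function onPenHover(e: PointerEvent, currentMode: InkMode, target: HTMLElement): void {
  if (e.pointerType !== "pen" || e.buttons !== 0) return; // hover only, pen only
  target.textContent = hoverGlyph[currentMode];
  target.style.left = `${e.clientX}px`;
  target.style.top = `${e.clientY}px`;
  target.hidden = false; // an ink flag affordance could be shown here as well
}

// Hide the affordances when the pen is no longer detected near the surface.
function onPenLeave(target: HTMLElement): void {
  target.hidden = true;
}
```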

FIG. 4 depicts an example implementation scenario 400 for a transient ink mode in accordance with one or more implementations. The upper portion of the scenario 400 includes the GUI 302 with the document 304 (introduced above) displayed on the display 110. In at least some implementations, the scenario 400 represents an extension of the scenario 300, above.

In the upper portion of the scenario 400, a user applies ink content 402 to the document 304 using the pen 124. In this particular scenario, the ink content 402 corresponds to an annotation of the document 304. It is to be appreciated, however, that a variety of different types of transient ink other than annotations may be employed. Notice that as the user is applying the ink content 402, a hover target is not displayed. For instance, in at least some implementations when the pen 124 transitions from a hover position to contact with the display 110, a hover target is removed. Notice also that the ink flag 310 includes a hollow circle 312, indicating that the ink content 402 is applied according to a transient ink mode.

Proceeding to the lower portion of the scenario 400, the user lifts the pen 124 from the display 110 such that the pen 124 is not detected, e.g., the pen 124 is not in contact with the display 110 and is not in close enough proximity to the display 110 to be detected as hovering. In response to the pen 124 no longer being detected in contact with or in proximity to the display 110, an ink timer 406 begins running. For instance, the ink timer 406 begins counting down from a specific time value, such as 30 seconds, 60 seconds, and so forth. Generally, the ink timer 406 is representative of functionality to implement a countdown function, such as for tracking time between user interactions with the display 110 via the pen 124. The ink timer 406, for example, represents a functionality of the ink module 118.

As a visual cue that the ink timer 406 is elapsing, the hollow circle 312 begins to unwind, e.g., begins to disappear from the ink flag 310. In at least some implementations, the hollow circle 312 unwinds at a rate that corresponds to the countdown of the ink timer 406. For instance, when the ink timer 406 is elapsed by 50%, then 50% of the hollow circle 312 is removed from the ink flag 310. Thus, unwinding of the hollow circle 312 provides a visual cue that the ink timer 406 is elapsing, and how much of the ink timer 406 has elapsed and/or remains to be elapsed.

In at least some implementations, if the ink timer 406 is elapsing as in the lower portion of the scenario 400 and the user proceeds to place the pen 124 in proximity to the display 110 (e.g., hovered or in contact with the display 110), the ink timer 406 will reset and will not begin elapsing again until the user removes the pen 124 from the display 110 such that the pen 124 is not detected. In such implementations, the hollow circle 312 will be restored within the ink flag 310 as in the upper portion of the scenario 400.
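
As a purely illustrative sketch of the unwinding visual cue, the following TypeScript maps the elapsed fraction of the ink timer 406 to how much of a hollow ring remains drawn; the SVG stroke-dash approach is an assumption about one possible rendering.

```typescript
// Update an SVG ring so that the fraction of the ring remaining matches the
// fraction of the ink timer that has not yet elapsed.
function updateTimerRing(ring: SVGCircleElement, elapsedMs: number, totalMs: number): void {
  const circumference = 2 * Math.PI * ring.r.baseVal.value;
  const remaining = Math.max(0, 1 - elapsedMs / totalMs); // 1 = full ring, 0 = fully unwound
  ring.style.strokeDasharray = `${circumference}`;
  ring.style.strokeDashoffset = `${circumference * (1 - remaining)}`;
}
```

For instance, when the ink timer 406 is 50% elapsed, the remaining fraction is 0.5 and half of the ring is removed, consistent with the behavior described above.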

FIG. 5 depicts an example implementation scenario 500 for a transient ink mode in accordance with one or more implementations. The upper portion of the scenario 500 includes the GUI 302 with the document 304 (introduced above) displayed on the display 110. In at least some implementations, the scenario 500 represents an extension of the scenario 400, above.

In the upper portion of the scenario 500, the ink timer 406 has elapsed. For instance, notice that the hollow circle 312 has completely unwound within the ink flag 310, e.g., is visually removed from the ink flag 310. According to various implementations, this provides a visual cue that the ink timer 406 has completely elapsed.

Proceeding to the lower portion of the scenario 500, and in response to expiry of the ink timer 406, the ink content 402 is removed from the GUI 302 and saved as part of a transient ink layer 504 for the document 304. Further, the ink flag 310 is populated with a user icon 502. The user icon 502, for example, represents a user that is currently logged in to the client device 102, and/or a user that is interacting with the document 304. Alternatively or additionally, the pen 124 includes user identification data that is detected by the client device 102 and thus is leveraged to track which user is interacting with the document 304. For example, the pen 124 includes a tagging mechanism (e.g., a radio-frequency identifier (RFID) chip) embedded with a user identity for a particular user. Thus, when the pen 124 is placed in proximity to the display 110, the tagging mechanism is detected by the client device 102 and utilized to attribute ink input and/or other types of input to a particular user. As used herein, the term “user” may be used to refer to an identity for an individual person, and/or an identity for a discrete group of users that are grouped under a single user identity.

According to various implementations, population of the user icon 502 to the ink flag 310 represents a visual indication that the transient ink layer 504 exists for the document 304, and that the transient ink layer 504 is associated with (e.g., was generated by) a particular user. Generally, the transient ink layer 504 represents a data layer that is not part of the primary content layer of the document 304, but that is persisted and can be referenced for various purposes. Further attributes of transient ink layers are described elsewhere herein.

FIG. 6 depicts an example implementation scenario 600 for a transient ink mode in accordance with one or more implementations. The upper portion of the scenario 600 includes the GUI 302 with the document 304 (introduced above) displayed on the display 110. In at least some implementations, the scenario 600 represents an extension of the scenario 500, above.

In the upper portion of the scenario 600, the ink flag 310 is displayed indicating that a transient ink layer (e.g., the transient ink layer 504) exists for the document 304, and that the transient ink layer is linked to a particular user represented by the user icon 502 in the ink flag 310.

Proceeding to the lower portion of the scenario 600, a user selects the ink flag 310 with the pen 124, which causes the ink content 402 to be returned to display as part of the document 304. The ink content 402, for example, is bound to the transient ink layer 504, along with other transient ink content generated for the transient ink layer 504. Thus, in at least some implementations, the transient ink layer 504 is accessible by various techniques, such as by selection of the ink flag 310.

Additionally or alternatively to selection of the ink flag 310, if the user proceeds to apply further ink content to the document 304 while in the transient ink mode, the transient ink layer 504 is retrieved and transient ink content included as part of the transient ink layer 504 is displayed as part of the document 304. In at least some implementations, transient ink content of the transient ink layer 504 is bound (e.g., anchored) to particular portions (e.g., pages, lines, text, and so forth) of the document 304. For instance, the user generated the ink content 402 adjacent to a particular section of text. Thus, when the transient ink layer 504 is recalled as depicted in the scenario 600, the ink content 402 is displayed adjacent to the particular section of text.

According to various implementations, the transient ink layer 504 is cumulative such that a user may add ink content to and remove ink content from the transient ink layer 504 over a span of time and during multiple different interactivity sessions. Thus, the transient ink layer 504 generally represents a record of multiple user interactions with the document 304, such as for annotations, proofreading, commenting, and so forth. Alternatively or additionally, multiple transient layers may be created for the document 304, such as when significant changes are made to the primary content 306, when other users apply transient ink to the document 304, and so forth.

In at least some implementations, when the user pauses interaction with the document 304, the ink timer 406 begins elapsing such as discussed above with reference to the scenarios 400, 500. Accordingly, the scenario 600 may return to the scenario 400.

FIG. 7 depicts an example implementation scenario 700 for multiple transient ink layers in accordance with one or more implementations. The upper portion of the scenario 700 includes the GUI 302 with the document 304 (introduced above) displayed on the display 110. In at least some implementations, the scenario 700 represents an extension of the scenario 600, above.

Displayed as part of the GUI 302 is the ink flag 310 with the user icon 502, along with an ink flag 702 with a user icon 704, and an ink flag 706 with a user icon 708. Generally, the ink flags 702, 706 represent other users that have interacted with the document 304, and the user icons 704, 708 represent users associated with their respective ink flags. Thus, each individual ink flag represents a different respective user.

The scenario 700 further includes the transient ink layer 504 associated with the ink flag 310, along with a transient ink layer 710 linked to the ink flag 702, and a transient ink layer 712 linked to the ink flag 706. Generally, the transient ink layers 710, 712 represent individual transient ink layers that are bound to individual user identities. Each individual transient ink layer 504, 710, 712 is individually accessible and can be viewed and edited separately. In at least some implementations, multiple ones of the transient ink layers 504, 710, 712 can be invoked such that ink content for the multiple layers is displayed concurrently as part of the document 304. Example ways of invoking a transient ink layer are detailed elsewhere herein. Further, transient ink layer behaviors discussed elsewhere herein are applicable to the scenario 700.

FIG. 8 depicts an example implementation scenario 800 for presenting an inking menu in accordance with one or more implementations. The upper portion of the scenario 800 includes the GUI 302 with the document 304 (introduced above) displayed on the display 110. In at least some implementations, the scenario 800 represents an extension of the scenarios discussed above. Further to the scenario 800, the user selects the ink flag 310 while the transient ink layer 504 is active. For instance, the user manipulates the pen 124 to first tap the ink flag 310, which invokes the transient ink layer 504 such that the ink content 402 is retrieved and displayed, and then taps the ink flag 310 a second time within a particular period of time, e.g., 3 seconds.

Proceeding to the lower portion of the scenario 800 and in response to the second selection of the ink flag 310, the ink flag 310 expands to present an ink menu 802. Generally, the ink menu 802 includes multiple indicia that are selectable to cause different ink-related actions to be performed, such as to apply and/or change various settings, invoke various functionalities, and so forth. To aid in visual understanding, an expanded representation 802a of the ink menu 802 is depicted. The example visual indicia included in the ink menu 802 are now discussed in turn.

Play control 804—according to various implementations, when ink content (e.g., transient and/or permanent ink) is applied to a document, application of the ink content is recorded in real-time. For instance, application of ink content is recorded as an animation that shows the ink content being applied to a document as it was initially applied by a user. Accordingly, selection of the play control 804 causes a playback of ink content as it was originally applied. Further details concerning ink playback are presented below.

Transient Ink Control 806—selection of this control causes a transition from a different ink mode (e.g., a permanent ink mode) to a transient ink mode.

Permanent Ink Control 808—selection of this control causes a transition from a different ink mode (e.g., a transient ink mode) to a permanent ink mode.

Text Recognition Control 810—selection of this control causes a transition to a text recognition mode. For instance, in a text recognition mode, characters applied using ink are converted into machine-encoded text.

Shape Recognition Control 812—selection of this control causes a transition to a shape recognition mode. For instance, in a shape recognition mode, shapes applied using ink are converted into machine-encoded shapes, such as quadrilaterals, triangles, circles, and so forth.

Selection Mode Control 814—selection of this control causes a transition to a selection mode. Generally, in a selection mode, input from a pen is interpreted as a selection action, such as to select text and/or other objects displayed in a document.

Erase Mode Control 816—selection of this control causes a transition to an erase mode. Generally, in an erase mode, input from a pen is interpreted as an erase action, such as to erase ink, text, and/or other objects displayed in a document.

Command Control 818—selection of this control causes a transition to a command mode. For instance, in a command mode, input from a pen is interpreted as a command to perform a particular action and/or task.

Color Control 820—selection of this control enables a user to change an ink color that is applied to a document. For example, selection of this control causes a color menu to be presented that includes multiple different selectable colors. Selection of a color from the color menu specifies the color for ink content that is applied to a document.

Ink Note Control 822—this control is selectable to invoke ink note functionality, such as to enable ink content to be propagated to a note. Ink note functionality is described in more detail below.

Emphasis Control 824—selection of this control causes a transition from a different ink mode (e.g., a permanent or transient ink mode) to an emphasis ink mode. Generally, in an emphasis ink mode, ink is temporary and fades and disappears after a period of time. Emphasis ink, for example, is not saved as part of primary content or a transient ink layer, but is used for temporary purposes, such as for visually identifying content, emphasizing content, and so forth.

Pin Control 826—this control is selectable to pin a transient ink layer to a document, and to unpin the transient ink layer from the document. For instance, selecting the pin control 826 causes transient ink of a transient ink layer to be persistently displayed as part of a document. With reference to the scenario 500, for example, selection of the pin control 826 prevents the ink timer 406 from being initiated when a user removes the pen 124 from proximity to the display 110.

The pin control 826 is also selectable to unpin a transient ink layer from a document. For instance, with reference to the scenario 500, selection of the pin control 826 unpins a transient ink layer such that the ink timer 406 begins to elapse when a user removes the pen 124 from proximity to the display 110. In at least some implementations, when the user selects the ink flag 310 to cause the ink menu 802 to be presented, the pin control 826 is presented within the ink menu 802 in the same region of the display in which the user icon 502 is displayed in the ink flag 310. Thus, a user can double-tap on the same spot over the ink flag 310 to cause the ink menu 802 to be presented, and to pin and unpin transient ink from the document 304.

In at least some implementations, the visuals presented for the individual controls represent hover targets that are displayed when the respective modes are active. The example implementation scenarios above depict examples of hover targets for transient and permanent ink modes, and similar scenarios apply for other ink modes and the visuals displayed for their respective controls in the ink menu 802.

According to one or more implementations, providing input outside of the ink menu 802 causes the ink menu 802 to collapse. For instance, if the user taps the pen 124 in the GUI 302 outside of the ink menu 802, the ink menu 802 collapses such that the ink flag 310 is again displayed.
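
By way of illustration, the controls of the ink menu 802 can be modeled as data; the following sketch is an assumption about one possible model, with identifiers taken loosely from the figure rather than from any actual API, and it includes the collapse-on-outside-input behavior just described.

```typescript
// Illustrative model of an ink menu control: the glyph doubles as the hover
// target shown when the corresponding mode is active.
interface InkMenuControl {
  id: "play" | "transient" | "permanent" | "textRecognition" | "shapeRecognition"
    | "selection" | "erase" | "command" | "color" | "inkNote" | "emphasis" | "pin";
  glyph: string;      // visual used in the menu and as the mode's hover target
  activate(): void;   // switch to the mode or perform the one-shot action
}

// Collapse the expanded menu back to the ink flag when input lands outside it.
function onOutsideInput(e: PointerEvent, menu: HTMLElement): void {
  if (!menu.contains(e.target as Node)) menu.classList.remove("expanded");
}
```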

FIG. 9 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for processing ink according to a current ink mode in accordance with one or more embodiments.

Step 900 detects a pen in proximity to an input surface. The touch input device 116, for instance, detects that the pen 124 is hovered and/or in contact with the touch input device 116. As referenced above, a hover operation can be associated with a particular threshold proximity to an input surface such that hovering the pen 124 at or within the threshold proximity to the input surface is interpreted as a hover operation, but placing the pen 124 farther than the threshold proximity from the input surface is not interpreted as a hover operation.

Step 902 ascertains a current ink mode. The ink module 118, for example, ascertains an ink mode that is currently active on the client device 102. Examples of different ink modes are detailed elsewhere herein, and include a permanent ink mode, a transient ink mode, a text recognition mode, a shape recognition mode, a selection mode, an erase mode, a command mode, and so forth.

In at least some implementations, a current ink mode may be automatically selected by the ink module 118, such as based on an application and/or document context that is currently in focus. For instance, an application 106 may specify a default ink mode that is to be active for the application. Further, some applications may specify ink mode permissions that indicate allowed and disallowed ink modes. A particular application 106, for example, may specify that a permanent ink mode is not allowed for documents presented by the application, such as to protect documents from being edited.

Alternatively or additionally, a current ink mode is user-selectable, such as in response to user input selecting an ink mode from the ink menu 802. For instance, a user may cause a switch from a default ink mode for an application to a different ink mode.

Step 904 causes a visual affordance identifying the current ink mode to be displayed. Examples of such an affordance include a hover target, a visual included as part of an ink flag and/or ink-related menu, and so forth. Examples of different visual affordances are detailed throughout this description and the accompanying drawings.

Step 906 processes ink content applied to the input surface according to the current ink mode. The ink content, for instance, is processed as permanent ink, transient ink, and so forth. For example, if a permanent ink mode is active, the ink content is saved as permanent ink, such as part of a primary content layer of a document. If the transient ink mode is active, the ink content is propagated to a transient ink layer of a document. Examples of different mode-specific ink behaviors and actions are detailed elsewhere herein.
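
A minimal sketch of the FIG. 9 flow follows, reusing the illustrative types introduced above; the mode-selection logic and the affordance callback are assumptions standing in for internal behavior of the ink module 118.

```typescript
// Process a stroke according to the current ink mode (steps 902-906).
function handleInk(
  stroke: InkStroke,
  appDefaultMode: InkMode,          // default mode specified by the application
  userSelectedMode: InkMode | null, // mode chosen from the ink menu, if any
  layers: { transient: TransientLayerAPI; permanent: PermanentLayerAPI },
  userId: string,
  showAffordance: (mode: InkMode) => void,
): void {
  const mode = userSelectedMode ?? appDefaultMode;   // step 902: ascertain current mode
  showAffordance(mode);                              // step 904: display visual affordance
  if (mode === "permanent") {
    layers.permanent.commitStroke(stroke);           // step 906: permanent ink
  } else if (mode === "transient") {
    layers.transient.addStroke(userId, stroke);      // step 906: transient ink
  }
  // selection, erase, command, and recognition modes dispatch to their own handlers
}
```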

FIG. 10 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for a transient ink timer in accordance with one or more implementations. In at least some implementations, the method represents an extension of the method described above with reference to FIG. 9.

Step 1000 receives ink content applied to a document via input from a pen to an input surface while in a transient ink mode. The ink module 118, for example, processes ink content received from the pen 124 to the display 110 as transient ink.

Step 1002 detects that the pen is removed from proximity to the input surface. For instance, the touch input device 116 detects that the pen 124 is not in contact with and is not hovering over a surface of the touch input device 116, e.g., the display 110.

Step 1004 initiates a timer. The timer, for example, is initiated in response to detecting that the pen is removed from proximity to the input surface. In at least some implementations, a visual representation of the timer is presented. For instance, the visual representation provides a visual cue that the timer is elapsing, and indicates a relative amount (e.g., percentage) of the timer that has elapsed. The visual representation, for example, is animated to visually convey that the timer is elapsing. One example of a visual representation of a timer is discussed above with reference to FIGS. 4 and 5.

Step 1006 ascertains whether the pen is detected at the input surface before the timer expires. For instance, the ink module 118 ascertains whether the pen 124 is detected in contact with and/or hovering over the touch input device 116 prior to expiry of the timer. If the pen is detected at the input surface prior to expiry of the timer (“Yes”), step 1008 resets the timer and the process returns to step 1000.

If the pen is not detected at the input surface prior to expiry of the timer (“No”), step 1010 removes the ink content from the document and propagates the ink content to a transient layer for the document. For instance, in response to expiry of the timer, the transient ink content is removed from display and propagated to a transient data layer for the document that is separate from a primary content layer of the document. In at least some implementations, a new transient ink layer is created for the document, and the transient ink content is propagated to the new transient ink layer. Alternatively or additionally, the transient ink content is propagated to an existing transient ink layer. For example, the transient ink layer may represent an accumulation of transient ink provided by a user over multiple different interactions with the document and over a period of time.

As discussed above, the transient ink layer may be associated with a particular user, e.g., a user that applies the transient ink content to the document. Thus, the transient ink is linked to the particular user and may subsequently be accessed by the user.
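
The following sketch illustrates one way the timer behavior of FIG. 10 might be implemented; the 60-second value and the use of the illustrative transient layer interface are assumptions made for this example.

```typescript
const TRANSIENT_TIMEOUT_MS = 60_000;                       // example countdown value
let inkTimer: ReturnType<typeof setTimeout> | null = null;

// Steps 1002/1004: the pen leaves the input surface, so start the countdown.
function onPenLeftSurface(pending: InkStroke[], userId: string, layer: TransientLayerAPI): void {
  inkTimer = setTimeout(() => {
    // Step 1010: remove the ink from display and propagate it to the transient layer.
    pending.forEach((stroke) => layer.addStroke(userId, stroke));
    pending.length = 0;
  }, TRANSIENT_TIMEOUT_MS);
}

// Step 1008: the pen returned before expiry, so reset the timer.
function onPenReturned(): void {
  if (inkTimer !== null) {
    clearTimeout(inkTimer);
    inkTimer = null;
  }
}
```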

FIG. 11 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for propagating transient ink to different transient ink layers for different users in accordance with one or more implementations. In at least some implementations, the method represents an extension of the methods described above with reference to FIGS. 9 and 10.

Step 1100 receives transient ink content to a document from multiple different users. The transient ink content, for instance, is received during different interactivity sessions with the document that are each associated with a different user. The document, for example, is shared among different users, such as part of a group collaboration on the document.

Step 1102 propagates transient ink content from each user to a different respective transient ink layer for the document. A different transient ink layer, for example, is generated for each user, and transient ink content applied by each user is propagated to a respective transient ink layer for each user.

Step 1104 causes visual affordances of the different transient ink layers to be displayed. Each transient ink layer, for example, is represented by a visual affordance that visually identifies the transient ink layer and a user linked to the transient ink layer. Examples of such affordances are discussed above with reference to ink flags.

Step 1106 enables each transient ink layer to be individually accessible. The ink module 118, for example, enables each transient ink layer to be accessed (e.g., displayed) separately from the other transient ink layers. In at least some implementations, a transient ink layer is accessible by selecting a visual affordance that represents the transient ink layer. Further, multiple transient ink layers may be accessed concurrently, such as by selecting visual affordances that identify the transient ink layers.
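
As a non-limiting sketch of the per-user layering of FIG. 11, the following keeps one transient ink layer per user identity and allows any subset of layers to be displayed together; the Map-based store is an assumption made for illustration.

```typescript
// One transient ink layer per user identity (step 1102).
const layersByUser = new Map<string, InkStroke[]>();

function propagateTransientInk(userId: string, strokes: InkStroke[]): void {
  const layer = layersByUser.get(userId) ?? [];
  layer.push(...strokes);
  layersByUser.set(userId, layer);
}

// Step 1106: layers are individually accessible and may be shown concurrently.
function strokesForDisplay(userIds: string[]): InkStroke[] {
  return userIds.flatMap((id) => layersByUser.get(id) ?? []);
}
```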

While implementations are discussed herein with reference to display of transient ink layers, it is to be appreciated that a transient ink layer may be accessible in various other ways and separately from a document to which the transient ink layer is bound. For instance, a transient ink layer may be printed, shared (e.g., emailed) separately from a document for which the transient ink layer is created, published to a website, and so forth.

FIG. 12 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for presenting an ink menu in accordance with one or more implementations. In at least some implementations, the method represents an extension of the methods described above.

Step 1200 detects an action to invoke an ink menu. The ink module 118, for example, detects that a user requests an ink menu. For instance, as described in the scenario 800, a user may select a visual control (e.g., the ink flag 310) to request an ink menu. Alternatively or additionally, an ink menu can be automatically invoked in response to various events, such as the pen 124 being detected in proximity to the surface of the display 110, an ink-related application and/or service being launched, an application and/or service querying a user to select an ink mode, and so forth.

Step 1202 causes the ink menu to be presented. The ink module 118, for example, causes the ink menu 802 to be displayed. In at least some implementations, a default set of functionalities is associated by the ink module 118 with the ink menu 802. Accordingly, different applications may modify the default set of functionalities, such as by adding a functionality to the default set of functionalities, removing a functionality from the default set of functionalities, and so forth. Thus, according to one or more implementations, the ink menu 802 is populated with a set of functionalities (e.g., selectable controls) based on a customized set of functionalities specified by and/or for an application that is currently in focus.

Step 1204 receives user input to the ink menu. For example, the ink module 118 detects that a user manipulates the pen 124 to select a control displayed as part of the ink menu 802.

Step 1206 performs an action in response to the user input. The ink module 118, for instance, causes an action to be performed based on which control is selected by the user. Examples of such actions include changing an ink mode, initiating ink playback, applying different ink formatting, and so forth.
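
By way of illustration of the default and customized control sets referenced at step 1202, the following sketch builds the menu's control list from a default set that an application may extend or restrict; all identifiers are assumptions for this example.

```typescript
// Default control set for the ink menu; applications may add to or remove from it.
const DEFAULT_CONTROLS: InkMenuControl["id"][] = [
  "play", "transient", "permanent", "textRecognition", "shapeRecognition",
  "selection", "erase", "command", "color", "inkNote", "emphasis", "pin",
];

function buildMenuControls(appOverrides?: {
  add?: InkMenuControl["id"][];
  remove?: InkMenuControl["id"][];
}): InkMenuControl["id"][] {
  const added = [...DEFAULT_CONTROLS, ...(appOverrides?.add ?? [])];
  const removed = new Set(appOverrides?.remove ?? []);
  return added.filter((id) => !removed.has(id)); // controls presented at step 1202
}
```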

The next portion of this discussion presents example implementation scenarios and an example procedure for ink for selection in accordance with various implementations. Generally, ink for selection provides ways of selecting and processing content using ink in various ways.

FIG. 13 depicts an example implementation scenario 1300 for ink for selection in accordance with one or more implementations. The scenario 1300, for example, represents a continuation of the scenarios described above. The upper portion of the scenario 1300 includes a GUI 1302 with a document 1304 and the ink menu 802 (introduced above) displayed on the display 110. In this particular example, the document 1304 represents a web page. It is to be appreciated, however, that techniques discussed herein may utilize a wide variety of other types of documents and content.

In the upper portion of the scenario 1300, a user manipulates the pen 124 to apply a selection gesture 1306 within the document 1304, e.g., to a surface of the display 110 within the document 1304. The selection gesture 1306, for instance, is applied in an ink selection mode. A selection mode may be activated in various ways, such as in response to selection of the selection mode control 814, in response to selection of the pen mode button 126, and so forth. For instance, pressing and/or holding the pen mode button 126 activates a selection mode that causes gestures (e.g., ink gestures) to be interpreted according to the ink selection mode.

In at least some implementations, applying the selection gesture 1306 causes corresponding ink to be applied along the path of the selection gesture such that a visual indication of the selection gesture 1306 is displayed. Alternatively, no visual indication of the selection gesture 1306 is displayed.

Notice that as the user applies the selection gesture 1306, a selection shape 1308 is automatically generated within the document 1304. The ink module 118, for example, detects that the selection gesture 1306 is applied in the ink selection mode, and in response causes the selection shape 1308 to be automatically generated.

According to various implementations, the shape and size of the selection shape 1308 is based on attributes of the selection gesture 1306. In this particular example, the selection gesture 1306 is a straight line that is horizontal relative to the visual orientation of the document 1304. Accordingly, the selection shape 1308 is drawn as a square, with the size of the square being based on a length of the selection gesture 1306. For instance, the selection shape 1308 starts at a start point 1310 of the selection gesture 1306 and expands outwardly from the start point 1310 as the selection gesture 1306 increases in length. Thus, in this particular example the start point 1310 represents a center of the selection shape 1308. Further, the length of the selection gesture 1306 represents one-half the length of a side of the selection shape 1308, and this size relationship is maintained as the selection gesture 1306 changes in length.
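
A short geometry sketch of this square case follows; coordinates are assumed to be in display pixels, and the function name is illustrative.

```typescript
// Square selection shape for a horizontal gesture: the start point is the
// center, and the gesture length equals one-half of the square's side length.
function squareFromHorizontalGesture(
  start: { x: number; y: number },
  gestureLength: number,
): { left: number; top: number; width: number; height: number } {
  const halfSide = gestureLength; // gesture length = half the side length
  return {
    left: start.x - halfSide,
    top: start.y - halfSide,
    width: 2 * halfSide,
    height: 2 * halfSide,
  };
}
```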

Proceeding to the lower portion of the scenario 1300, the selection gesture 1306 increases in length, and thus the selection shape 1308 increases in size. As illustrated, the selection shape 1308 increases in size to encompass content 1312 displayed as part of the document 1304. In this particular example, the content 1312 represents a web page object. The scenario 1300 then proceeds to a scenario 1400.

FIG. 14 depicts an example implementation scenario 1400 for ink for selection in accordance with one or more implementations. The scenario 1400, for example, represents a continuation of the scenario 1300 described above. The upper portion of the scenario 1400 includes the GUI 1302 with the document 1304 (introduced above) displayed on the display 110.

In the upper portion of the scenario 1400, a selection release event is detected. For instance, the user lifts the pen 124 such that the pen 124 is not detected in proximity to the surface of the display 110. Alternatively or additionally, the user selects or releases the pen mode button 126. Accordingly, and proceeding to the lower portion of the scenario 1400, the selection shape 1308 is converted into a selection 1402 of content within the selection shape 1308. In this particular example, the selection 1402 represents a selection of the content 1312. The ink module 118, for instance, detects the selection release event and in response, automatically converts the selection shape 1308 into the selection 1402.

According to implementations discussed herein, various actions may be performed utilizing the selection 1402. For instance, the content 1312 may be copied, pasted, saved, shared, populated to an ink note (discussed below), and so forth. Examples of different actions that can be performed utilizing selected content are detailed throughout this disclosure.

FIG. 15 depicts an example implementation scenario 1500 for ink for selection in accordance with one or more implementations. The scenario 1500, for example, represents a continuation and/or variation of the scenarios described above. The upper portion of the scenario 1500 includes the GUI 1302 with the document 1304 displayed on the display 110. Further included is an ink flag 1502 that includes a plus sign ("+") indicating that an ink selection mode is currently active. The plus sign, for instance, corresponds to the visual icon presented for the selection mode control 814, discussed above with reference to the ink menu 802. Thus, the ink flag 1502 provides a visual affordance indicating that the ink selection mode is active. Although not expressly illustrated here, if a user were to hover the pen 124 over the surface of the display 110, the plus sign would be displayed on the display 110 as a hover target beneath the tip of the pen 124.

In the upper portion of the scenario 1500, a user manipulates the pen 124 to apply a selection gesture 1504 within the document 1304. The selection gesture 1504, for instance, is applied in an ink selection mode. A selection mode may be activated in various ways, examples of which are discussed elsewhere herein. In at least some implementations, applying the selection gesture 1504 causes corresponding ink to be applied along the path of the selection gesture such that a visual indication of the selection gesture 1504 is displayed. Alternatively, no visual indication of the selection gesture 1504 is displayed.

Notice that as the user applies the selection gesture 1504, a selection shape 1506 is automatically generated within the document 1304. The ink module 118, for example, detects that the selection gesture 1504 is applied in the ink selection mode, and in response causes the selection shape 1506 to be automatically generated.

According to various implementations, the shape and size of the selection shape 1506 are based on attributes of the selection gesture 1504. In this particular example, the selection gesture 1504 is a straight line that is diagonal relative to the visual orientation of the document 1304. Accordingly, the selection shape 1506 is drawn as a rectangle, with the size of the rectangle being based on a length of the selection gesture 1504. For instance, the selection shape 1506 starts at a start point 1508 of the selection gesture 1504 and expands outwardly from the start point 1508 as the selection gesture 1504 increases in length. Thus, in this particular example the start point 1508 represents a corner of the selection shape 1506. Further, the length of the selection gesture 1504 represents a diagonal of the selection shape 1506 (e.g., a diagonal of a rectangle), and this size relationship is maintained as the selection gesture 1504 changes in length.

Proceeding to the lower portion of the scenario 1500, the selection gesture 1504 increases in length, and thus the selection shape 1506 increases in size. As illustrated, the selection shape 1506 increases in size to encompass the content 1312 and content 1510 displayed as part of the document 1304. The scenario 1500 then proceeds to a scenario 1600.

FIG. 16 depicts an example implementation scenario 1600 for ink for selection in accordance with one or more implementations. The scenario 1600, for example, represents a continuation of the scenario 1500 described above. The upper portion of the scenario 1600 includes the GUI 1302 with the document 1304 displayed on the display 110.

In the upper portion of the scenario 1600, a selection release event is detected. For instance, the user lifts the pen 124 such that the pen 124 is not detected in proximity to the surface of the display 110. Alternatively or additionally, the user selects or releases the pen mode button 126. Accordingly, and proceeding to the lower portion of the scenario 1600, the selection shape 1506 is converted into a selection 1602 of content within the selection shape 1506. In this particular example, the selection 1602 represents a selection of the content 1312 and the content 1510. The ink module 118, for instance, detects the selection release event and in response, automatically converts the selection shape 1506 into the selection 1602.

According to implementations discussed herein, various actions may be performed utilizing the selection 1602. For instance, the content 1312 and the content 1510 may be copied, pasted, saved, shared, populated to an ink note (discussed below), and so forth. Examples of different actions that can be performed utilizing selected content are detailed throughout this disclosure.

As an example in addition to the scenarios described above, consider a further scenario where a user applies a vertical selection gesture (e.g., relative to the display 110) in the document 1304 while in an ink selection mode. In response to the vertical selection gesture, a selection shape may be drawn as a circle. For instance, the center of the circle corresponds to a start point of the vertical selection gesture, and the length of the vertical selection gesture corresponds to a radius of the circle.

Thus, the scenarios 1300-1600 illustrate that techniques discussed herein enable selection shapes to be drawn based on attributes of a selection gesture. For instance, selection gestures applied as lines in different directions (e.g., horizontal, diagonal, vertical, and so forth) cause different respective types of selection shapes to be drawn. The examples discussed herein are provided for purpose of illustration only, and it is to be appreciated that implementations discussed herein cover a wide variety of other selection shapes and relationships between selection gestures and selection shapes.
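
By way of a non-limiting sketch, the following shows one way the direction of a linear selection gesture might be mapped to a selection shape, mirroring the horizontal, diagonal, and vertical examples above; the angle thresholds and names are assumptions rather than requirements:

```typescript
interface Point { x: number; y: number; }

type SelectionShape =
  | { kind: "square"; center: Point; side: number }
  | { kind: "rectangle"; corner: Point; opposite: Point }
  | { kind: "circle"; center: Point; radius: number };

// Classify a straight-line gesture by its angle and map it to a selection shape.
function shapeFromLinearGesture(start: Point, end: Point): SelectionShape {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  const length = Math.hypot(dx, dy);
  const angle = Math.abs(Math.atan2(dy, dx)) * (180 / Math.PI); // 0 = horizontal, 90 = vertical

  if (angle < 15 || angle > 165) {
    // Horizontal: square centered on the start point, gesture length = one-half of a side.
    return { kind: "square", center: start, side: 2 * length };
  }
  if (angle > 75 && angle < 105) {
    // Vertical: circle centered on the start point, gesture length = radius.
    return { kind: "circle", center: start, radius: length };
  }
  // Diagonal: rectangle with the gesture as its diagonal and the start point as a corner.
  return { kind: "rectangle", corner: start, opposite: end };
}
```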

FIG. 17 depicts an example implementation scenario 1700 for ink for selection in accordance with one or more implementations. The scenario 1700, for example, represents a continuation and/or variation of the scenarios described above. The upper portion of the scenario 1700 includes the GUI 1302 with the document 1304 displayed on the display 110. Further included is the ink flag 1502 indicating that an ink selection mode is currently active.

In the upper portion of the scenario 1700, a user manipulates the pen 124 to apply a selection gesture 1702 within the document 1304. The selection gesture 1702, for instance, is applied in an ink selection mode. A selection mode may be activated in various ways, examples of which are discussed elsewhere herein. Notice that in this particular example, the selection gesture 1702 is non-linear, i.e., is not a straight line. According to various implementations, a non-linear selection gesture is associated with different selection behaviors than a linear selection gesture, such as those described above.

In at least some implementations, applying the selection gesture 1702 causes corresponding ink to be applied along the path of the selection gesture such that a visual indication of the selection gesture 1702 is displayed. Alternatively, no visual indication of the selection gesture 1702 is displayed.

Proceeding to the lower portion of the scenario 1700, a selection release event is detected. For instance, the user lifts the pen 124 such that the pen 124 is not detected in proximity to the surface of the display 110. Alternatively or additionally, the user selects or releases the pen mode button 126. Accordingly, in response to the selection release event, the ink module 118 causes an auto-complete line 1704 to be drawn between a start point 1706 and an end point 1708 of the selection gesture 1702. The auto-complete line 1704 closes the selection gesture 1702 to generate a closed shape around the content 1312.

While the selection gesture 1702 and the auto-complete line 1704 are depicted as being displayed on the display 110, it is to be appreciated that in at least some implementations, the selection gesture 1702 and the auto-complete line 1704 are not displayed but represent depictions of underlying selection data tracked by the ink module 118. The scenario 1700 then proceeds to a scenario 1800.
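
As an illustrative sketch only, the following shows one way an open selection stroke could be closed with an auto-complete segment and used to test which content falls inside the resulting shape; the ray-casting hit test is one possible strategy, not a prescribed one:

```typescript
interface Point { x: number; y: number; }

// Close an open selection stroke by implying an auto-complete segment from its end point
// back to its start point, yielding a closed polygon around the enclosed content.
function closeSelectionStroke(stroke: Point[]): Point[] {
  if (stroke.length < 3) return stroke.slice();
  return [...stroke, stroke[0]]; // the implied end-to-start segment is the auto-complete line
}

// Standard ray-casting point-in-polygon test for deciding whether content lies inside.
function pointInPolygon(p: Point, polygon: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i];
    const b = polygon[j];
    const intersects =
      a.y > p.y !== b.y > p.y &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (intersects) inside = !inside;
  }
  return inside;
}
```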

FIG. 18 depicts an example implementation scenario 1800 for ink for selection in accordance with one or more implementations. The scenario 1800, for example, represents a continuation of the scenario 1700 described above. The upper portion of the scenario 1800 includes the GUI 1302 with the document 1304 displayed on the display 110. Further illustrated is the closed selection gesture 1702 including the auto-complete line 1704 around the content 1312, such as depicted in the lower portion of the scenario 1700.

Proceeding to the lower portion of the scenario 1800, the closed selection gesture 1702 is converted into a selection 1802 of the content 1312. According to various implementations, the ink module 118 detects the release of the selection gesture 1702 and in response, automatically generates the auto-complete line 1704 and converts the closed selection gesture 1702 into the selection 1802 independent of user input.

According to implementations discussed herein, various actions may be performed utilizing the selection 1802. For instance, the content 1312 may be copied, pasted, saved, shared, populated to an ink note (discussed below), and so forth. Examples of different actions that can be performed utilizing selected content are detailed throughout this disclosure.

Thus, the example scenarios presented above illustrate different selection gestures representing open gestures, such as a line, an open curve, and so forth. Further, the open gestures are automatically converted to corresponding closed selection shapes, such as rectangles, circles, closed curves, and so forth.

FIG. 19 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for selection in accordance with one or more implementations. In at least some implementations, the method represents an extension of the methods described above.

Step 1900 detects pen input drawing a selection gesture. The ink module 118, for example, detects that a user applies a selection gesture to an input surface while in an ink selection mode. In at least some implementations, a visual representation of the selection gesture is presented using ink. Alternatively, no visual representation of the selection gesture is presented. Generally, the selection gesture corresponds to an open gesture, such as a straight line, an open curve, and so forth. For instance, a start point and an end point of the selection gesture do not coincide, and the selection gesture does not intersect itself.

Step 1902 generates a selection shape based on a direction of the selection gesture. Generally, the selection shape corresponds to a closed shape, such as a square, a rectangle, a closed curve (e.g., a circle), an irregular closed shape, and so forth. As illustrated in the scenarios described above, the shape of the selection shape depends on an orientation in which the selection gesture is applied relative to a document in which the selection gesture is applied. Further, the size of the selection shape is determined based on the length of the selection gesture. For instance, the size of the selection shape increases with an increase in length of the selection gesture.

In at least some implementations, generating a selection shape includes generating an auto-complete line between opposite ends of the selection gesture. For instance, if the selection gesture is an open curve (e.g., an arc), an auto-complete line is automatically drawn between opposite ends of the open curve to generate a closed curve.

Step 1904 ascertains a release event for the selection gesture. The release event, for instance, corresponds to a user removing the pen from proximity to the display surface. Alternatively or additionally, the release event represents the user releasing the pen mode button 126.

Step 1906 causes an automatic selection of content within the selection shape responsive to the release event. For example, the ink module 118 detects the release event and automatically converts the selection shape into a selection of content encompassed by the selection shape.

Step 1908 causes an action to be performed utilizing the selected content. The ink module 118, for example, causes the selected content to be copied, pasted, shared, populated to an ink note, and so forth. In at least some implementations, the ink module 118 causes the selected content to be propagated to another functionality, such as an application 106. Examples of other actions that can be applied to selected content are detailed elsewhere herein.

The next portion of this discussion presents example implementation scenarios and an example procedure for ink notes in accordance with various implementations. Generally, ink notes provide ways of preserving ink as notes that can be saved, shared, and accessed in various ways.

FIG. 20 depicts an example implementation scenario 2000 for ink notes in accordance with one or more implementations. The scenario 2000, for example, represents a continuation of the scenarios described above. The upper portion of the scenario 2000 includes the GUI 302 with the document 304 and the ink menu 802 (introduced above) displayed on the display 110.

In the upper portion of the scenario 2000, a user selects the ink note control 822. For instance, the user taps the ink note control 822, and/or drags the ink note control 822 from the ink menu 802 into the body of the document 304.

Proceeding to the lower portion of the scenario 2000, and responsive to the user selection of the ink note control 822, an ink note 2002 is presented in the GUI 302. Generally, the ink note 2002 represents an electronic canvas on which notes can be applied using ink. A user then applies ink content 2004 to the ink note 2002.

In at least some implementations, the scenario 2000 occurs while the GUI 302 is in a transient ink mode. Accordingly, ink content applied to the document 304 itself will behave according to the transient ink mode. However, ink content applied within the ink note 2002 behaves according to an ink note mode. Thus, the ink note 2002 represents a separate inking environment from the document 304, and thus different behaviors apply to the ink content 2004 than to ink content within the document 304.

The ink note 2002 includes a save control 2006 and a share control 2008. According to various implementations, selecting the save control 2006 causes the ink content 2004 to be saved to a particular location, such as a pre-specified data storage location. In at least some implementations, a single selection of the save control 2006 causes the ink content 2004 to be saved and the ink note 2002 to be removed from display such that a user may return to interacting with the document 304.

The share control 2008 is selectable to share the ink content 2004, such as with another user. For instance, selecting the share control 2008 causes the ink content 2004 to be automatically propagated to a message, such as the body of an email message, an instant message, a text message, and so forth. A user may then address and send the message to one or more users. Alternatively or additionally, selecting the share control 2008 may cause the ink content 2004 to be posted to a web-based venue, such as a social networking service, a blog, a website, and so forth. According to various implementations, functionality of the share control 2008 is user configurable such that a user may specify behaviors caused by selection of the share control 2008.

FIG. 21 depicts an example implementation scenario 2100 for ink notes in accordance with one or more implementations. The scenario 2100, for example, represents a continuation and/or variation of the scenarios described above. The upper portion of the scenario 2100 includes the GUI 302 with the document 304 and the ink menu 802 (introduced above) displayed on the display 110.

In the upper portion of the scenario 2100, a user applies ink content 2102 to the document 304 and then applies a selection action 2104 to the ink content 2102. In this particular example, the selection action 2104 is implemented as inking a closed loop around the ink content 2102. Other selection actions may be utilized, however, such as techniques for ink for selection described above.

Proceeding to the lower portion of the scenario 2100, the user selects the ink note control 822, such as by tapping on the ink note control 822 and/or dragging the ink note control 822 out of the ink menu 802. In response, an ink note 2106 is automatically generated and populated with the ink content 2102. The ink module 118, for example, detects that the ink content 2102 is selected via the selection action 2104, and thus populates the ink content 2102 to the ink note 2106. Thus, the selection action 2104 followed by the selection of the ink note control 822 is interpreted as a command to generate the ink note 2106 and populate the ink note 2106 with the ink content 2102.

In this particular example, the ink content 2102 is moved (e.g., cut and pasted) from the body of the document 304 into the ink note 2106. In some alternative implementations, the ink content 2102 is copied into the ink note 2106 such that the ink content 2102 remains in the body of the document 304. The user may then save the ink note 2106 by selecting the save control 2006, and may share the ink content 2102 by selecting the share control 2008. Example attributes and actions of the save control 2006 and the share control 2008 are described above.

While the scenario 2100 is discussed with reference to populating the ink content 2102 to the ink note 2106, it is to be appreciated that a wide variety of other content may be populated to the ink note 2106. For instance, a user may select a portion of the primary content 306 from the document 304 (e.g., text content), and a subsequent selection of the ink note control 822 would cause the ink note 2106 to be generated and populated with the selected primary content. As another example implementation, a combination of ink content and primary content can be selected, and a subsequent selection of the ink note control would cause the ink note 2106 to be generated and populated with both the selected ink content and primary content.

In at least some implementations, selection of content to be populated to an ink note is performed utilizing techniques for ink for selection described above. Thus, populating selected content to an ink note as described in this section represents an action that can be performed utilizing selected content, as described above with reference to step 1908 of FIG. 19.

The scenarios described above generally describe that ink note functionality is invocable via selection of the ink note control 822. In additional or alternative implementations, ink note functionality is invocable in other ways, such as in response to a dragging gesture from anywhere within the ink menu 802 into the body of the document 304, a custom gesture applied anywhere within the GUI 302, a gesture involving a combination of finger touch input and pen input, a voice command, a touchless gesture, and so forth.

Thus, the scenario 2100 illustrates that techniques discussed herein reduce a number of user interactions required to propagate content to a note (e.g., an ink note), since a user may simply select existing content and invoke an ink note functionality to populate the existing content to an ink note.

FIG. 22 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for generating an ink note in accordance with one or more implementations. In at least some implementations, the method represents an extension of the methods described above.

Step 2200 detects a user selection of content. The ink module 118, for example, ascertains that a user selects a portion of ink content, primary content, combinations thereof, and so forth. In at least some implementations, content may be selected via techniques for ink for selection, described above.

Step 2202 ascertains that an ink note functionality is invoked. Various ways of invoking ink note functionality are detailed above.

Step 2204 populates the selected content to an ink note. For example, the ink module 118 generates an ink note and populates (e.g., copies or moves) the selected content to the ink note. In at least some implementations, the ink note is generated and the selected content is populated to the ink note automatically and in response to a single user invocation of ink note functionality, e.g., a single user action.

Step 2206 performs an action in relation to the ink note in response to user input. For instance, a user provides input that causes the ink note to be saved, to be shared, to be deleted, and so forth. Examples of user input include user selection of a selectable control, a user applying an ink and/or touch gesture, input via an input mechanism 112, and so forth.
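
The following is a minimal, hypothetical sketch of the flow described in steps 2200-2206; the class and member names are assumptions introduced solely for illustration:

```typescript
interface InkNote { id: string; content: string[]; }

// Holds the current selection and, when the ink note functionality is invoked,
// creates and populates a note in a single action (steps 2200-2206).
class InkNoteController {
  private selected: string[] = [];
  private notes: InkNote[] = [];

  onContentSelected(content: string[]): void {            // Step 2200: detect selection
    this.selected = content;
  }

  onInkNoteInvoked(move = true): InkNote {                 // Steps 2202-2204: create and populate
    const note: InkNote = { id: `note-${Date.now()}`, content: [...this.selected] };
    this.notes.push(note);
    if (move) this.selected = [];                          // move (cut) versus copy semantics
    return note;
  }

  onNoteAction(note: InkNote, action: "save" | "share" | "delete"): void { // Step 2206
    console.log(`performing '${action}' on ${note.id}`);
  }
}
```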

The next portion of this discussion presents example implementation scenarios and example procedures for ink for commanding in accordance with various implementations. Generally, ink for commanding provides ways of causing various commands to be performed in response to ink input.

FIG. 23 depicts an example implementation scenario 2300 for ink for commanding in accordance with one or more implementations. The scenario 2300, for example, represents a continuation of the scenarios described above. The upper portion of the scenario 2300 includes the GUI 1302 with the document 1304 and the ink menu 802 (introduced above) displayed on the display 110.

In the upper portion of the scenario 2300, the content 1312 is selected as a selection 2302, such as using techniques for ink for selection described above. Further, notice that the pen 124 is hovered above the surface of the display 110 and that a hover target 2304 is displayed beneath the tip of the pen 124. The hover target 2304 includes the visual icon presented for the command control 818 of the ink menu 802, thus providing a visual affordance that a command mode is currently active. According to various implementations, the command mode can be activated in various ways, such as in response to a user selection of the command control 818, a user selection of the pen mode button 126, and so forth.

Proceeding to the lower portion of the scenario 2300, a user applies ink within the selection 2302 to write a command 2306. In this particular example, the command 2306 includes the term “Reminder,” which is interpreted in the command mode as a command to generate a reminder based on the content 1312 included in the selection 2302. The scenario 2300 then proceeds to a scenario 2400.

FIG. 24 depicts an example implementation scenario 2400 for ink for commanding in accordance with one or more implementations. The scenario 2400, for example, represents a continuation of the scenario 2300 described above. The upper portion of the scenario 2400 includes the GUI 1302 with the document 1304 (introduced above) displayed on the display 110. Further included is the content 1312 selected via the selection 2302, and the command 2306 inked within the selection 2302. As referenced above, the command 2306 is recognized (e.g., by the ink module 118) as a command to generate a reminder based on information from the selected content 1312.

In the upper portion of the scenario 2400, a release event is detected, examples of which are discussed above. In response to the release event and recognition of the command 2306, and proceeding to the lower portion of the scenario 2400, data from the content 1312 is propagated to a calendar 2402 to generate a calendar event 2404. The calendar 2402, for instance, represents a GUI for a calendar application that represents an instance of the applications 106.

As illustrated, the content 1312 presents information about an upcoming event. Thus, information about the upcoming event is ascertained from the content 1312, and utilized to generate the calendar event 2404. For instance, the ink module 118 recognizes that characters displayed as part of the content 1312 are particular words and phrases that have a particular meaning (e.g., date, time, location, etc.), and thus are used to populate relevant fields of the calendar event 2404. Alternatively or additionally, metadata for the content 1312 is accessed to ascertain information about the content 1312. These implementations are presented for purpose of example only, and information about the selected content may be ascertained in a variety of different ways.
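
As a purely illustrative sketch of extracting event details from selected text (real implementations may rely on richer recognizers and/or content metadata), consider the following; the patterns and field names are assumptions:

```typescript
interface CalendarEvent { title: string; date?: string; time?: string; location?: string; }

// Naive extraction of date/time/location phrases from selected text to pre-fill a calendar event.
function calendarEventFromText(text: string): CalendarEvent {
  const date = text.match(
    /\b(?:\d{1,2}\/\d{1,2}\/\d{2,4}|(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2})\b/i,
  )?.[0];
  const time = text.match(/\b\d{1,2}(?::\d{2})?\s?(?:am|pm)\b/i)?.[0];
  const location = text.match(/\bat ([A-Z][\w ]+)/)?.[1];
  const title = text.split(/\r?\n/)[0].trim(); // first line of the content as a default title
  return { title, date, time, location };
}
```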

Adjacent to and/or overlaid on the calendar 2402 is an ink flag 2406 including a commanding icon. Generally, the ink flag 2406 with the commanding icon presents a visual affordance that a commanding mode is active such that ink input will be interpreted according to the commanding mode.

According to implementations discussed herein, the calendar event 2404 is generated automatically and in response to the user selecting the content 1312 and writing the command 2306. For instance, no further user input is required after writing the command 2306 for the calendar event 2404 to be generated. In response to the command 2306 being written, for example, the calendar 2402 is automatically launched and the calendar event 2404 is generated independent of user input.

FIG. 25 depicts an example implementation scenario 2500 for ink for commanding in accordance with one or more implementations. The scenario 2500, for example, represents a continuation and/or variation of the scenarios described above. The upper portion of the scenario 2500 includes the GUI 1302 with the document 1304 (introduced above) displayed on the display 110.

In the upper portion of the scenario 2500, the content 1312 is selected as a selection 2502, such as using techniques for ink for selection described above. Further, a user applies ink within the selection 2502 to enter a command 2504 while in a command mode. In this particular example, the command 2504 includes the phrase “email to John Smith.”

In the upper portion of the scenario 2500, a release event is detected, examples of which are discussed above. In response to the release event and recognition of the command 2504, and proceeding to the lower portion of the scenario 2500, an email message 2506 is generated and populated with information from the content 1312. For example, the ink module 118 parses the command 2504 and recognizes that the term “email” represents a command to generate an email message with selected content of the content 1312. The ink module 118 further recognizes that the term “to John Smith” represents a recipient of the email. Thus, the ink module 118 communicates this information to an email functionality. The email message 2506, for instance, is generated by an email application that represents an instance of the applications 106. For instance, an email address for John Smith is included in contact information for the user, and is thus retrieved by the email application and used to address the email message.
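
The following non-limiting sketch illustrates parsing such an inked command and resolving the named recipient against a contact list; the function and parameter names are assumptions:

```typescript
interface EmailDraft { to: string; body: string; }

// Parse a command such as "email to John Smith" and hand the selected content to an email
// functionality. The contacts map stands in for resolving a name against the user's contacts.
function parseEmailCommand(
  command: string,
  selectedContent: string,
  contacts: Map<string, string>,
): EmailDraft | undefined {
  const match = command.match(/^email\s+to\s+(.+)$/i);
  if (!match) return undefined;                  // not an email command
  const name = match[1].trim();
  const address = contacts.get(name) ?? name;    // fall back to the raw name if unknown
  return { to: address, body: selectedContent };
}

// Example usage:
// parseEmailCommand("email to John Smith", eventText,
//                   new Map([["John Smith", "jsmith@example.com"]]));
```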

As illustrated, the content 1312 presents information about an upcoming event. Thus, information about the upcoming event is ascertained from the content 1312, and utilized to populate the email message 2506. Example ways of ascertaining information from the content 1312 are discussed above.

In at least some implementations, the email message 2506 is automatically generated, populated with information from the content 1312, and sent to the recipient without any further user input after entering the command 2504.

Alternatively, after entry of the command 2504, the email message 2506 is automatically generated and populated with information from the content 1312. The user is then given the opportunity to view and edit the email message 2506 prior to sending the email message 2506.

While the scenarios 2300-2500 are discussed from the perspective of a selection occurring before a command being entered, it is to be appreciated that the temporal relationship between object selection and commanding may be arranged in various other ways. For instance, a user may first apply ink to write a command, and may subsequently select an object on which the command is to be performed. With reference to the scenario 2500, for example, the user may first write the command 2504 and then subsequently select the content 1312 to cause the command 2504 to be performed as depicted in the scenario 2500.

Further, it is not required that a command be inked within a selected object. With reference to the scenario 2500, for example, the command 2504 may be written anywhere within the document 1304 and outside of the selection of the content 1312. In such a scenario, the ink module 118 would recognize that the content 1312 is selected and that the command 2504 is applied, and would cause the command 2504 to be performed utilizing the selected content 1312. In a commanding mode, for instance, a command and a selection are linked regardless of the visual and/or temporal relationship between the command and the selection.

FIG. 26 depicts an example implementation scenario 2600 for ink for commanding in accordance with one or more implementations. The scenario 2600, for example, represents a continuation and/or variation of the scenarios described above. The upper portion of the scenario 2600 includes the GUI 1302 with the document 1304 (introduced above) displayed on the display 110. Adjacent to and/or overlaid on the document 1304 is an ink flag 2602 including a commanding icon, indicating that a commanding mode is active.

Further depicted in the upper portion of the scenario 2600 is that the user brings the pen 124 in proximity to a command region 2604 of the GUI 1302. The command region 2604, for instance, represents a pre-specified portion of the GUI 1302 that is associated with commanding mode functionality.

Proceeding to the lower portion of the scenario 2600 and in response to detecting the pen 124 in proximity to the command region 2604, a command field 2606 is presented which includes a prompt 2608 for user input. The prompt 2608, for instance, prompts the user to input a command into the command field 2606. Accordingly, the user enters a command 2610 into the command field 2606. In this particular example, the command 2610 includes an instruction to search for weather on a particular date. The scenario 2600 then proceeds to a scenario 2700.

FIG. 27 depicts an example implementation scenario 2700 for ink for commanding in accordance with one or more implementations. The scenario 2700, for example, represents a continuation of the scenario 2600 described above. The upper portion of the scenario 2700 includes the GUI 1302 with the document 1304 (introduced above) displayed on the display 110. Further included is the command field 2606 with the command 2610. In the upper portion of the scenario 2700, the user removes the pen 124 from proximity to the display 110 such that a release event is generated.

Proceeding to the lower portion of the scenario 2700 and in response to detection of the release event, the command 2610 is executed such that command results 2702 are presented on the display 110. The command 2610, for instance, is submitted as a set of search terms to a web search engine, which performs a search using the search terms and returns the command results 2702. The command results 2702 include weather information for the particular date that is retrieved and displayed on the display 110. The document 1304, for instance, is replaced in the display 110 with the weather information. According to various implementations, the command results 2702 are retrieved and displayed automatically and in response to the user entering the command 2610 and removing the pen 124 from proximity to the display 110. For example, the command results 2702 are retrieved and displayed without any further user input after entering the command 2610.

Accordingly, the scenarios 2600, 2700 illustrate that techniques can be employed to provide a designated region in which commands can be entered. Further, commands can be entered in a natural language form that can be parsed (e.g., by the ink module 118), recognized, and performed by various functionalities. For instance, the ink module 118 can forward commands to appropriate applications 106 to be recognized and/or performed.

Thus, the scenarios 2300-2700 illustrate that techniques discussed herein can be utilized to recognize various commands and to perform various actions based on the commands. The commands discussed in these scenarios are presented for purpose of example only, and it is to be appreciated that a wide variety of other commands not expressly discussed herein may be employed in accordance with techniques discussed herein.

FIG. 28 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for commanding in accordance with one or more implementations. For example, the method describes an example way of performing one or more of the implementation scenarios described above. In at least some implementations, the method represents an extension of the methods described above.

Step 2800 detects a selection of content. The ink module 118, for instance, detects that the content is selected via input from the pen 124. In at least some implementations, the content is selected via techniques for ink for selection, examples of which are described above.

Step 2802 ascertains input of a command via freehand input from a pen. For example, the ink module 118 detects that a command is applied via freehand ink input from the pen 124. Example ways of providing and detecting a command are described above.

Step 2804 causes the command to be executed using the content. The ink module 118, for instance, causes an action to be performed utilizing the content and based on the command. In at least some implementations, the ink module 118 communicates the command and the content and/or attributes of the content to an application 106 to cause the application 106 to perform the command.

Examples of different actions that can be performed based on a command are described above with reference to the scenarios 2300-2700, such as generating a calendar event and/or an email based on content, performing a search (e.g., a web search) based on a command, and so forth. Examples of other actions that can be performed based on a command include sharing selected content to a website (e.g., a social networking site), propagating selected content to a content editing application for editing, saving selected content as a content file, and so forth. Thus, a wide variety of different commands can be interpreted to enable a wide variety of different actions to be performed. In at least some implementations, causing a command to be executed includes causing a visual output of the command to be displayed.
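
One possible (and purely illustrative) way to fan recognized command terms out to such actions is sketched below; the command names and handlers are examples rather than an exhaustive or prescribed set:

```typescript
type CommandHandler = (content: string) => void;

// Map recognized command verbs to actions, per steps 2800-2804.
const commandHandlers = new Map<string, CommandHandler>([
  ["reminder", (content) => console.log("create calendar event from:", content)],
  ["email",    (content) => console.log("draft email containing:", content)],
  ["share",    (content) => console.log("share to configured venue:", content)],
  ["save",     (content) => console.log("save as content file:", content)],
]);

function executeInkCommand(command: string, selectedContent: string): void {
  const verb = command.trim().split(/\s+/)[0].toLowerCase();
  const handler = commandHandlers.get(verb);
  if (handler) handler(selectedContent);
  else console.log("unrecognized command:", command);
}
```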

FIG. 29 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for commanding in accordance with one or more implementations. For example, the method describes an example way of performing one or more of the implementation scenarios described above. In at least some implementations, the method represents an extension of the methods described above.

Step 2900 detects a pen in proximity to a designated command region of an input surface. The ink module 118, for example, detects that the pen 124 is in proximity to a command region of the display 110. Generally, the command region represents a pre-specified region of a display and/or a GUI that is associated with ink for commanding functionality.

Step 2902 causes a command field to be presented. The command field generally represents a GUI region in which commands can be entered. For instance, the ink module 118 causes the command field to be displayed automatically and in response to detecting the pen 124 in proximity to the command region. In at least some implementations, the command field is presented in response to a hover of the pen 124 over the input surface and prior to the pen 124 touching the input surface. Thus, the command field may be presented independent of a user selection to invoke the command field.

Step 2904 ascertains user input of a command to the command field. The ink module 118, for example, detects a command term and/or phrase entered into the command field. In at least some implementations, natural language processing may be employed to parse a command and correlate command terminology to particular machine-based commands.

Step 2906 causes the command to be performed. For instance, the ink module 118 performs one or more aspects of the command. Alternatively or additionally, the ink module 118 forwards the command to an application 106 to cause the application 106 to perform (e.g., execute) the command. In at least some implementations, causing the command to be performed causes a visual display of command results. Alternatively or additionally, causing a command to be performed causes a reconfiguration of a device state (e.g., of the client device 102), such as to change device settings, to change application state, to mitigate errors and/or device malfunctioning, and so forth.

According to one or more implementations, a command is performed utilizing selected content, such as content selected via techniques for ink for selection described above.
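
As an illustrative sketch of steps 2900-2906 (with assumed names), the following shows a command field being presented when the pen hovers over the designated command region and a recognized command being handed off for execution:

```typescript
interface Region { left: number; top: number; right: number; bottom: number; }
interface PenHoverEvent { x: number; y: number; hovering: boolean; }

// Present the command field when the pen hovers over the designated command region,
// prior to the pen touching the surface, and forward an inked command for execution.
class CommandRegionController {
  private fieldVisible = false;

  constructor(
    private commandRegion: Region,
    private onCommand: (command: string) => void,
  ) {}

  handleHover(e: PenHoverEvent): void {                    // Steps 2900-2902
    const inRegion =
      e.x >= this.commandRegion.left && e.x <= this.commandRegion.right &&
      e.y >= this.commandRegion.top && e.y <= this.commandRegion.bottom;
    this.fieldVisible = e.hovering && inRegion;            // show/hide the command field
  }

  handleCommandEntered(inked: string): void {              // Steps 2904-2906
    if (!this.fieldVisible) return;
    this.onCommand(inked);                                 // recognize and perform the command
  }
}
```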

The next portion of this discussion presents example implementation scenarios and example procedures for ink for recognition in accordance with various implementations. Generally, ink for recognition provides ways of converting ink input into machine-coded characters and shapes.

FIG. 30 depicts an example implementation scenario 3000 for ink for shape recognition in accordance with one or more implementations. The scenario 3000, for example, represents a continuation of the scenarios described above. The upper portion of the scenario 3000 includes a GUI 3002 with a document 3004 and the ink menu 802 (introduced above) displayed on the display 110. Displayed within the document 3004 is primary content 3006, which in this example includes a map of geographic regions.

In the upper portion of the scenario 3000, a user manipulates the pen 124 to apply ink to draw a freehand shape 3008a and a freehand shape 3008b, which in this example are circles. Further, the freehand shapes 3008a, 3008b are drawn while in a shape recognition mode. For instance, the user selects the shape recognition control 812 (e.g., before or after drawing the freehand shapes 3008a, 3008b), which causes a transition to a shape recognition mode. Notice further that a hover target 3010 is displayed beneath the tip of the pen 124, providing a visual affordance that the shape recognition mode is currently active.

Proceeding to the lower portion of the scenario 3000, the freehand shapes 3008a, 3008b are recognized as circles (e.g., by the ink module 118), and thus the freehand shapes 3008a, 3008b are converted to respective machine-encoded (“encoded”) shapes 3012a, 3012b, i.e., encoded circles. Further, the encoded shapes 3012a, 3012b are added to primary content (i.e., a primary content layer) of the document 3004. According to various implementations, the encoded shapes 3012a, 3012b may be edited and manipulated in various ways, such as shaded, filled, resized, moved, and so forth.

Thus, the scenario 3000 illustrates that in a shape recognition mode, shapes drawn in freehand using ink input can be recognized and converted to corresponding machine-encoded shapes. While the scenario 3000 is discussed with reference to recognition of freehand circles, it is to be appreciated that a wide variety of other freehand shapes may be recognized and converted to machine-encoded shapes, such as triangles, rectangles, parallelograms, irregular shapes, and so forth.
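
For purpose of example only, a simple heuristic for recognizing a freehand circle is sketched below; production recognizers are typically more sophisticated, and the tolerance used here is an assumption:

```typescript
interface Point { x: number; y: number; }
interface Circle { cx: number; cy: number; r: number; }

// Treat the stroke's centroid as the circle center and the mean distance to it as the radius,
// accepting the fit only if that distance varies little along the stroke.
function fitCircle(stroke: Point[]): Circle | undefined {
  if (stroke.length < 8) return undefined;
  const cx = stroke.reduce((s, p) => s + p.x, 0) / stroke.length;
  const cy = stroke.reduce((s, p) => s + p.y, 0) / stroke.length;
  const dists = stroke.map((p) => Math.hypot(p.x - cx, p.y - cy));
  const r = dists.reduce((s, d) => s + d, 0) / dists.length;
  const maxDeviation = Math.max(...dists.map((d) => Math.abs(d - r)));
  return maxDeviation <= 0.2 * r ? { cx, cy, r } : undefined; // tolerance is an assumption
}
```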

FIG. 31 depicts an example implementation scenario 3100 for ink for text recognition in accordance with one or more implementations. The scenario 3100, for example, represents a continuation of the scenarios described above. The upper portion of the scenario 3100 includes a GUI 3102 and the ink menu 802 (introduced above) displayed on the display 110. In this particular example, the GUI 3102 represents a GUI for an email application. Depicted within the GUI 3102 is an email inbox 3104 and an email message 3106.

In the upper portion of the scenario 3100, a user taps the pen 124 within the body of the email message 3106, which causes a text prompt 3108 to be displayed. The text prompt 3108, for instance, represents a visual affordance that a text recognition mode is active, and that text applied using ink will be recognized and positioned starting at the text prompt 3108. For example, prior to tapping the pen 124 within the body of the email message 3106, the user activates a text recognition mode by selecting the text recognition control 810 from the ink menu 802.

Proceeding to the lower portion of the scenario 3100, the user begins writing freehand text 3110 with the pen 124 within the body of the email message 3106 but in a different region than where the text prompt 3108 is displayed. In response to detecting the pen 124 in proximity to the surface of the display 110, a text guide 3112 is presented as a straight line adjacent to the tip of the pen 124. Generally, the text guide 3112 provides visual assistance to enable a user to orient the characters of the freehand text 3110. According to various implementations, proper orientation of freehand text increases the accuracy of ink to text recognition by creating a spacing construct that aids in recognition.

FIG. 32 depicts an example implementation scenario 3200 for ink for text recognition in accordance with one or more implementations. The scenario 3200, for example, represents a continuation of the scenario 3100 described above. The upper portion of the scenario 3200 includes the GUI 3102 (introduced above) displayed on the display 110.

In the upper portion of the scenario 3200, portions of the freehand text 3110 are recognized (e.g., via OCR) and converted to machine-encoded (“encoded”) text 3202. As depicted, the encoded text 3202 is placed starting at the original position of the text prompt 3108 in the email message 3106 as depicted in the scenario 3100. Further, the text prompt 3108 moves to indicate where newly recognized text will be presented.

Proceeding to the lower portion of the scenario 3200, notice that portions of the freehand text 3110 that are recognized and converted to the encoded text 3202 are removed from display. Thus, the user may apply further freehand text 3204 where the removed portions of freehand text were displayed, and/or in other portions of the GUI 3102. Accordingly, visual clutter of the GUI 3102 is reduced and the user is afforded space to enter the additional freehand text 3204.
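
The following sketch illustrates one way this incremental recognition loop might be structured; the recognize and appendAtPrompt callbacks are stand-ins for whatever handwriting recognizer and text insertion mechanism are used:

```typescript
interface InkStroke { id: string; points: { x: number; y: number }[]; }

// Recognized freehand strokes are removed from the ink layer and their encoded text is
// appended at the text prompt, freeing screen space for further freehand input.
function commitRecognizedText(
  pendingStrokes: InkStroke[],
  recognize: (strokes: InkStroke[]) => string,
  appendAtPrompt: (text: string) => void,
): InkStroke[] {
  if (pendingStrokes.length === 0) return pendingStrokes;
  const text = recognize(pendingStrokes);
  appendAtPrompt(text);   // encoded text appears starting at the prompt's position
  return [];              // recognized freehand ink is removed from display
}
```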

FIG. 33 depicts an example implementation scenario 3300 for ink for text recognition in accordance with one or more implementations. The scenario 3300, for example, represents a continuation of the scenarios 3100, 3200 described above. The scenario 3300 includes the GUI 3102 (introduced above) displayed on the display 110.

In the scenario 3300, the freehand text 3110, 3204 from the scenarios 3100, 3200 is converted into the encoded text 3202. Further, the user removes the pen 124 from proximity to the surface of the display 110. Accordingly, the text guide 3112 presented in the scenarios 3100, 3200 is removed from display. Should the user begin writing additional freehand text into the GUI 3102, the text guide 3112 would be redisplayed and the additional freehand text would be recognized and appended to the encoded text 3202.

Thus, the scenarios 3100-3300 illustrate that ink may be applied for text recognition in any portion of a display. For instance, a user is not constrained to entering freehand text in a predefined region, but may simply enter the freehand text in any portion of a display and the freehand text will be converted to encoded text that is populated to a different region of the display. Accordingly, entry of freehand ink text is not constrained to a region in which a recognized and encoded version of the text will be displayed.

FIG. 34 depicts an example implementation scenario 3400 for ink for text recognition in accordance with one or more implementations. The scenario 3400, for example, represents a continuation of the scenarios described above. The scenario 3400 includes the GUI 3102 (introduced above) displayed on the display 110.

In the upper portion of the scenario 3400, a user taps the pen 124 within the inbox 3104, which causes a selection 3402 of the inbox 3104. Further, notice that a hover target 3404 is presented as a letter “T,” indicating that a text recognition mode is currently active.

Proceeding to the lower portion of the scenario 3400, the user enters freehand text 3406 within the selected inbox 3104. In this particular example, the freehand text 3406 includes the letter “W.” Notice further that in response to detecting the pen 124 in proximity to the display 110 within the inbox 3104, a text guide 3408 is presented adjacent to the tip of the pen 124. The scenario 3400 then proceeds to a scenario 3500.

FIG. 35 depicts an example implementation scenario 3500 for ink for text recognition in accordance with one or more implementations. The scenario 3500, for example, represents a continuation of the scenarios described above. The scenario 3500 includes the GUI 3102 (introduced above) displayed on the display 110.

In the upper portion of the scenario 3500, after applying the freehand text 3406, the user removes the pen 124 from proximity to the display 110. Proceeding to the lower portion of the scenario 3500, the freehand text 3406 is recognized as an encoded letter “W.” Thus, email messages listed in the inbox 3104 are processed (e.g., sorted, filtered, and so forth) based on the letter “W.” In this particular example, the email messages are rearranged in descending alphabetical order of sender (“from”) starting with the letter “W.”

Thus, the scenarios 3400, 3500 illustrate that freehand ink input to different regions of a display can be recognized as encoded text and utilized for different purposes, such as to invoke different functionalities. For instance, in the scenarios 3000-3300, freehand ink input is recognized as shapes and/or text, and converted to encoded shapes and text that is populated to a primary content layer of a document. The scenarios 3400, 3500 illustrate implementations where freehand ink input is recognized and utilized to perform an action on existing data, such as searching, filtering, sorting, and so forth.

While the implementation scenarios presented above describe text and shape recognition modes separately, it is to be appreciated that implementations discussed herein support concurrent text and shape recognition. For instance, with reference to the scenario 3000 described above, if a user were to apply freehand ink text characters to the GUI 3002, the freehand ink text would be recognized and converted into encoded text and displayed within the GUI 3002 along with the encoded shapes 3012a, 3012b.

FIG. 36 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for recognition in accordance with one or more implementations. For example, the method describes an example way of performing one or more of the implementation scenarios described above. In at least some implementations, the method represents an extension of the methods described above.

Step 3600 detects a pen in proximity to an input surface while an ink recognition mode is active. For instance, the ink module 118 receives a notification that the pen 124 is in proximity to the surface of the display 110, such as from the display 110 itself.

Step 3602 causes a visual affordance identifying the ink recognition mode to be displayed responsive to said detecting. For example, a hover target that identifies a shape recognition mode and/or a text recognition mode is displayed below and/or adjacent to the pen 124. Alternatively or additionally, an icon that represents a shape recognition mode and/or a text recognition mode is displayed as part of an ink flag and/or an ink menu. Examples of different hover targets and icons are discussed above. According to one or more implementations, the visual affordance is removed from display in response to detecting that the pen is removed from proximity to the input surface.

Step 3604 processes ink content applied to the input surface according to the ink recognition mode. For instance, shapes applied using ink are converted to encoded shapes, and text applied using ink is converted to encoded text. Example ways of processing ink are discussed above and depicted in the accompanying figures.

FIG. 37 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for text recognition in accordance with one or more implementations. For example, the method describes an example way of performing one or more of the implementation scenarios described above. In at least some implementations, the method represents an extension of the methods described above.

Step 3700 receives freehand pen input to a region of a display. The ink module 118, for example, ascertains that freehand pen input is applied within a region of a display. According to various implementations, the region of the display is not visually identified as a designated region for receiving pen input, e.g., is not visually distinguished from other regions of the display. For instance, a user may randomly select the region.

Step 3702 converts the freehand pen input to encoded text. For example, the freehand pen input includes characters that are recognized as particular text characters, and that are converted into encoded text characters.

Step 3704 populates the encoded text to a different region of the display. The encoded text, for instance, is displayed in a different region of the display than the region in which the freehand pen input was provided.

FIG. 38 is a flow diagram that describes steps in a method in accordance with one or more embodiments. The method, for instance, describes an example procedure for ink for character recognition in accordance with one or more implementations. For example, the method describes an example way of performing one or more of the implementation scenarios described above. In at least some implementations, the method represents an extension of the methods described above.

Step 3800 detects a user selection of content. The ink module 118, for example, detects that a user selects a portion of content. With reference to the example scenarios above, the ink module 118 detects that a user selects the inbox 3104.

Step 3802 processes the content based on a character recognized from freehand pen input. For instance, freehand ink input is recognized as a portion of text, e.g., one or more text characters, and selected content is processed based on the portion of text. Alternatively or additionally, the freehand input may include non-textual characters (e.g., shapes) that are recognized as having particular meanings. For instance, a square may correspond to a particular type of processing, a circle to a different type of processing, a triangle to yet another different type of processing, and so forth. Thus, textual and non-textual characters may be recognized to perform associated processing. Examples of processing the content using the portion of text include searching, filtering, sorting, and so forth, using the portion of text.

In at least some implementations, different portions of content are associated with different types of processing based on text. For instance, consider the email scenarios described with reference to FIGS. 31-35. In these scenarios, freehand pen input of a text character within the inbox 3104 is interpreted (e.g., by the ink module 118 and/or an application 106) as a command to utilize the text character to perform a specific type of processing on messages stored in the inbox 3104, such as searching, filtering, sorting, and so forth. Further, freehand pen input within the email message 3106 is interpreted as a command to convert the freehand input into encoded text characters, and to populate the encoded text characters into the email message 3106.
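
As a non-limiting sketch, the region-dependent routing described here might resemble the following; the region names and the sort-by-sender behavior follow the scenarios above, while the details are assumptions:

```typescript
type InboxAction = { kind: "sortBySender"; startingLetter: string };
type ComposeAction = { kind: "insertText"; text: string };

// Route a recognized character by the region it was inked in: inside the inbox it drives
// sorting/filtering, inside the message body it becomes encoded text in the message.
function routeRecognizedText(
  region: "inbox" | "messageBody",
  recognized: string,
): InboxAction | ComposeAction {
  if (region === "inbox") {
    return { kind: "sortBySender", startingLetter: recognized.charAt(0).toUpperCase() };
  }
  return { kind: "insertText", text: recognized };
}
```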

Although discussed separately, it is to be appreciated that the implementations, scenarios, and procedures described above can be combined and implemented together in various ways. For instance, the implementations, scenarios, and procedures describe different functionalities of a single integrated inking platform, such as implemented by the ink module 118.

Having described some example implementation scenarios and procedures for ink modes, consider now a discussion of an example system and device in accordance with one or more embodiments.

Example System and Device

FIG. 39 illustrates an example system generally at 3900 that includes an example computing device 3902 that is representative of one or more computing systems and/or devices that may implement various techniques described herein. For example, the client device 102 discussed above with reference to FIG. 1 can be embodied as the computing device 3902. The computing device 3902 may be, for example, a server of a service provider, a device associated with the client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system.

The example computing device 3902 as illustrated includes a processing system 3904, one or more computer-readable media 3906, and one or more input/output (I/O) interfaces 3908 that are communicatively coupled, one to another. Although not shown, the computing device 3902 may further include a system bus or other data and command transfer system that couples the various components, one to another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines.

The processing system 3904 is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system 3904 is illustrated as including hardware element 3910 that may be configured as processors, functional blocks, and so forth. This may include implementation in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements 3910 are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions.

The computer-readable media 3906 is illustrated as including memory/storage 3912. The memory/storage 3912 represents memory/storage capacity associated with one or more computer-readable media. The memory/storage 3912 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The memory/storage 3912 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media 3906 may be configured in a variety of other ways as further described below.

Input/output interface(s) 3908 are representative of functionality to allow a user to enter commands and information to computing device 3902, and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone (e.g., for voice recognition and/or spoken input), a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which may employ visible or non-visible wavelengths such as infrared frequencies to detect movement that does not involve touch as gestures), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, tactile-response device, and so forth. Thus, the computing device 3902 may be configured in a variety of ways as further described below to support user interaction.

Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms “module,” “functionality,” “entity,” and “component” as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.

An implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media. The computer-readable media may include a variety of media that may be accessed by the computing device 3902. By way of example, and not limitation, computer-readable media may include “computer-readable storage media” and “computer-readable signal media.”

“Computer-readable storage media” may refer to media and/or devices that enable persistent storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Computer-readable storage media do not include signals per se. Computer-readable storage media include hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of computer-readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture suitable to store the desired information and which may be accessed by a computer.

“Computer-readable signal media” may refer to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device 3902, such as via a network. Signal media typically may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.

As previously described, hardware elements 3910 and computer-readable media 3906 are representative of instructions, modules, programmable device logic and/or fixed device logic implemented in a hardware form that may be employed in some embodiments to implement at least some aspects of the techniques described herein. Hardware elements may include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware devices. In this context, a hardware element may operate as a processing device that performs program tasks defined by instructions, modules, and/or logic embodied by the hardware element as well as a hardware device utilized to store instructions for execution, e.g., the computer-readable storage media described previously.

Combinations of the foregoing may also be employed to implement various techniques and modules described herein. Accordingly, software, hardware, or program modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements 3910. The computing device 3902 may be configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of modules that are executable by the computing device 3902 as software may be achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements 3910 of the processing system. The instructions and/or functions may be executable/operable by one or more articles of manufacture (for example, one or more computing devices 3902 and/or processing systems 3904) to implement techniques, modules, and examples described herein.

As further illustrated in FIG. 39, the example system 3900 enables ubiquitous environments for a seamless user experience when running applications on a personal computer (PC), a television device, and/or a mobile device. Services and applications run substantially similarly in all three environments for a common user experience when transitioning from one device to the next while utilizing an application, playing a video game, watching a video, and so on.

In the example system 3900, multiple devices are interconnected through a central computing device. The central computing device may be local to the multiple devices or may be located remotely from the multiple devices. In one embodiment, the central computing device may be a cloud of one or more server computers that are connected to the multiple devices through a network, the Internet, or other data communication link.

In one embodiment, this interconnection architecture enables functionality to be delivered across multiple devices to provide a common and seamless experience to a user of the multiple devices. Each of the multiple devices may have different physical requirements and capabilities, and the central computing device uses a platform to enable the delivery of an experience to the device that is both tailored to the device and yet common to all devices. In one embodiment, a class of target devices is created and experiences are tailored to the generic class of devices. A class of devices may be defined by physical features, types of usage, or other common characteristics of the devices.

In various implementations, the computing device 3902 may assume a variety of different configurations, such as for computer 3914, mobile 3916, and television 3918 uses. Each of these configurations includes devices that may have generally different constructs and capabilities, and thus the computing device 3902 may be configured according to one or more of the different device classes. For instance, the computing device 3902 may be implemented as the computer 3914 class of device that includes a personal computer, a desktop computer, a multi-screen computer, a laptop computer, a netbook, and so on.

The computing device 3902 may also be implemented as the mobile 3916 class of device that includes mobile devices, such as a mobile phone, portable music player, portable gaming device, a tablet computer, a wearable device, a multi-screen computer, and so on. The computing device 3902 may also be implemented as the television 3918 class of device that includes devices having or connected to generally larger screens in casual viewing environments. These devices include televisions, set-top boxes, gaming consoles, and so on.

The techniques described herein may be supported by these various configurations of the computing device 3902 and are not limited to the specific examples described herein. For example, functionalities discussed with reference to the client device 102 and/or ink module 118 may be implemented all or in part through use of a distributed system, such as over a “cloud” 3920 via a platform 3922 as described below.

The cloud 3920 includes and/or is representative of a platform 3922 for resources 3924. The platform 3922 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 3920. The resources 3924 may include applications and/or data that can be utilized while computer processing is executed on servers that are remote from the computing device 3902. Resources 3924 can also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network.

The platform 3922 may abstract resources and functions to connect the computing device 3902 with other computing devices. The platform 3922 may also serve to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources 3924 that are implemented via the platform 3922. Accordingly, in an interconnected device embodiment, implementation of functionality described herein may be distributed throughout the system 3900. For example, the functionality may be implemented in part on the computing device 3902 as well as via the platform 3922 that abstracts the functionality of the cloud 3920.

Discussed herein are a number of methods that may be implemented to perform techniques discussed herein. Aspects of the methods may be implemented in hardware, firmware, or software, or a combination thereof. The methods are shown as a set of steps that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. Further, an operation shown with respect to a particular method may be combined and/or interchanged with an operation of a different method in accordance with one or more implementations. Aspects of the methods can be implemented via interaction between various entities discussed above with reference to the environment 100.

Implementations discussed herein include:

EXAMPLE 1

A system for selecting content based on pen input and causing an action to be performed on the selected content, the system including: an input surface; one or more processors; and one or more computer-readable storage media storing computer-executable instructions that, responsive to execution by the one or more processors, cause the system to perform operations including: detecting pen input from a pen to the input surface drawing an open selection gesture; generating a closed selection shape based on a particular direction of the selection gesture relative to the input surface; ascertaining a release event for the selection gesture; causing an automatic selection of content within the selection shape responsive to the release event; and causing an action to be performed utilizing the selected content.
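By way of example and not limitation, the following TypeScript sketch outlines one possible realization of the operations of example 1. The Point and ContentItem types, the closure strategy, and the containment test are illustrative assumptions and are not part of the described system; a real implementation would additionally dispatch an action (e.g., copy or share) on the returned selection.

// Hypothetical types; not part of the described system.
interface Point { x: number; y: number; }
interface ContentItem { id: string; anchor: Point; }

// Points of the open selection gesture, accumulated as pen input is received.
const gesturePoints: Point[] = [];

function onPenMove(p: Point): void {
  gesturePoints.push(p);
}

// Close the open gesture into a selection shape; other closure strategies
// (rectangle from a single line, auto-complete segment) are sketched after
// example 7 below.
function closeGesture(points: Point[]): Point[] {
  return points.length > 2 ? [...points, points[0]] : points;
}

// Standard ray-casting point-in-polygon test.
function contains(shape: Point[], p: Point): boolean {
  let inside = false;
  for (let i = 0, j = shape.length - 1; i < shape.length; j = i++) {
    const a = shape[i], b = shape[j];
    if ((a.y > p.y) !== (b.y > p.y) &&
        p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

// On a release event (pen removed from proximity, or a pen button pressed),
// automatically select the content whose anchor lies within the shape.
function onReleaseEvent(content: ContentItem[]): ContentItem[] {
  const shape = closeGesture(gesturePoints);
  return content.filter(item => contains(shape, item.anchor));
}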

EXAMPLE 2

A system as described in example 1, wherein the selection gesture is different than the selection shape.

EXAMPLE 3

A system as described in one or more of examples 1 or 2, wherein the selection gesture includes a single line, and wherein the selection shape includes a rectangle.

EXAMPLE 4

A system as described in one or more of examples 1-3, wherein the selection gesture includes an open curve, and wherein the selection shape includes a closed curve.

EXAMPLE 5

A system as described in one or more of examples 1-4, wherein the selection shape is generated concurrently with said drawing of the open selection gesture.

EXAMPLE 6

A system as described in one or more of examples 1-5, wherein the release event includes one or more of the pen being removed from proximity to the input surface, or a selection of a pen button on the pen.

EXAMPLE 7

A system as described in one or more of examples 1-6, wherein the selection gesture includes an open curve, and wherein the operations further include automatically drawing an auto-complete line between respective ends of the open curve to generate the selection shape as a closed curve.

EXAMPLE 8

A system as described in one or more of examples 1-7, wherein the operations further include presenting a visual affordance that the system is in an ink selection mode, the visual affordance including one or more of an ink flag or a hover target.

EXAMPLE 9

A computer-implemented method for causing a command to be performed based on freehand input of the command, the method including: detecting a selection of content; ascertaining, by logic executed via a computing system, input of a command via freehand input of characters from a pen to an input surface of the computing system; and causing the command to be executed by the computing system using the content.
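By way of example and not limitation, the following sketch shows how a recognized word or phrase could be mapped to a command that operates on selected content, in the spirit of examples 9-14; the recognizeInk stub and the command handlers are assumptions standing in for a real handwriting recognizer and real command implementations.

// Hypothetical ink stroke type.
interface InkStroke { points: { x: number; y: number }[]; }

// Stub standing in for a real handwriting recognizer.
function recognizeInk(strokes: InkStroke[]): string {
  return "remind"; // fixed result, for illustration only
}

// Hypothetical command handlers that operate on the selected content.
const commands: Record<string, (content: string) => void> = {
  remind: content => console.log(`Reminder generated from: ${content}`), // example 13
  share: content => console.log(`Prepared for sharing: ${content}`),     // example 14
};

// Freehand characters applied at least partially within (or near) the
// selection are recognized as a word or phrase; if that word matches a
// known command, the command is executed using the selected content.
function onCommandInk(strokes: InkStroke[], selectedContent: string): void {
  const word = recognizeInk(strokes).trim().toLowerCase();
  const handler = commands[word];
  if (handler) {
    handler(selectedContent);
  }
}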

EXAMPLE 10

A computer-implemented method as described in example 9, wherein the freehand input is applied at least partially within the selection.

EXAMPLE 11

A computer-implemented method as described in one or more of examples 9 or 10, wherein the content is selected after the input of the command.

EXAMPLE 12

A computer-implemented method as described in one or more of examples 9-11, wherein the freehand input includes one or more of a word or phrase that is recognized by the computing system as the command.

EXAMPLE 13

A computer-implemented method as described in one or more of examples 9-12, wherein the command includes a command to generate a reminder, and wherein said causing the command to be executed includes causing a reminder to be generated using at least some of the content.

EXAMPLE 14

A computer-implemented method as described in one or more of examples 9-13, wherein the command includes a command to share the content, and wherein said causing the command to be executed includes causing at least some of the content to be processed into a shareable form.

EXAMPLE 15

A computer-implemented method for processing ink input in an ink recognition mode, the method including: detecting a pen in proximity to an input surface of a computing system while the computing system is in an ink recognition mode; causing, by the computing system, a visual affordance identifying the ink recognition mode to be displayed responsive to said detecting; and processing, by the computing system, ink content applied to the input surface according to the ink recognition mode.
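By way of example and not limitation, one possible wiring of the mode-specific affordance and recognition routing of examples 15-18 is sketched below; the mode names, affordance labels, and recognizer stubs are assumptions and not part of the described method.

// Hypothetical recognition modes.
type RecognitionMode = "text" | "shape";

interface InkStroke { points: { x: number; y: number }[]; }

let activeMode: RecognitionMode = "shape";

// When the pen is detected in proximity to the input surface, display a
// visual affordance (e.g., an ink flag or hover target) identifying the
// active recognition mode.
function onPenProximity(): void {
  const label = activeMode === "text" ? "ink flag: text recognition"
                                      : "ink flag: shape recognition";
  console.log(label); // a real implementation would render the affordance
}

// Ink content applied to the input surface is processed according to the
// active recognition mode.
function onInkApplied(strokes: InkStroke[]): string {
  return activeMode === "text"
    ? recognizeText(strokes)   // encoded text
    : recognizeShape(strokes); // encoded non-text shape, e.g. an ellipse
}

// Stubs standing in for real recognizers.
function recognizeText(strokes: InkStroke[]): string { return "recognized text"; }
function recognizeShape(strokes: InkStroke[]): string { return "ellipse"; }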

EXAMPLE 16

A computer-implemented method as described in example 15, wherein the visual affordance includes one or more of an ink flag or a hover target.

EXAMPLE 17

A computer-implemented method as described in one or more of examples 15 or 16, wherein the recognition mode includes a shape recognition mode, and wherein said processing includes processing the ink content to generate encoded non-text shapes.

EXAMPLE 18

A computer-implemented method as described in one or more of examples 15-17, wherein the recognition mode includes a shape recognition mode, and wherein said processing includes processing the ink content to generate encoded non-text shapes.

EXAMPLE 19

A computer-implemented method as described in one or more of examples 15-18, wherein the input surface includes a display, the ink content includes freehand input to a particular region of the display that is not visually identified as a designated region for receiving input, and wherein the method further includes: converting the freehand input into encoded text; and populating the encoded text to a different region of the display.
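By way of example and not limitation, the following short sketch illustrates example 19: ink written over a region of the display that is not marked for input is converted to encoded text and populated to a separate, designated field; the field and recognizer names are assumptions.

interface InkStroke { points: { x: number; y: number }[]; }

// Hypothetical target region elsewhere on the display.
interface Field { name: string; value: string; }
const targetField: Field = { name: "subject", value: "" };

// Stub standing in for a real handwriting recognizer.
function recognizeText(strokes: InkStroke[]): string {
  return "recognized text"; // fixed result, for illustration only
}

// Freehand input applied to an unmarked region is converted to encoded text
// and populated to a different region of the display.
function onFreehandInput(strokes: InkStroke[]): void {
  targetField.value = recognizeText(strokes);
}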

EXAMPLE 20

A computer-implemented method as described in one or more of examples 15-19, wherein the input surface includes a display, the ink content includes freehand input of a character to a particular region of the display, and wherein the method further includes processing selected content based on a character recognized from the freehand input.

Conclusion

Techniques for ink modes are described. Although embodiments are described in language specific to structural features and/or methodological acts, it is to be understood that the embodiments defined in the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed embodiments.

Claims

1. A system comprising:

an input surface;
one or more processors; and
one or more computer-readable storage media storing computer-executable instructions that, responsive to execution by the one or more processors, cause the system to perform operations including: detecting pen input from a pen to the input surface drawing an open selection gesture; generating a closed selection shape based on a particular direction of the selection gesture relative to the input surface; ascertaining a release event for the selection gesture; causing an automatic selection of content within the selection shape responsive to the release event; and causing an action to be performed utilizing the selected content.

2. The system as described in claim 1, wherein the selection gesture is different than the selection shape.

3. The system as described in claim 1, wherein the selection gesture comprises a single line, and wherein the selection shape comprises a rectangle.

4. The system as described in claim 1, wherein the selection gesture comprises an open curve, and wherein the selection shape comprises a closed curve.

5. The system as described in claim 1, wherein the selection shape is generated concurrently with said drawing of the open selection gesture.

6. The system as described in claim 1, wherein the release event comprises one or more of the pen being removed from proximity to the input surface, or a selection of a pen button on the pen.

7. The system as described in claim 1, wherein the selection gesture comprises an open curve, and wherein the operations further include automatically drawing an auto-complete line between respective ends of the open curve to generate the selection shape as a closed curve.

8. The system as described in claim 1, wherein the operations further include presenting a visual affordance that the system is in an ink selection mode, the visual affordance comprising one or more of an ink flag or a hover target.

9. A computer-implemented method, comprising:

detecting a selection of content;
ascertaining, by logic executed via a computing system, input of a command via freehand input of characters from a pen to an input surface of the computing system; and
causing the command to be executed by the computing system using the content.

10. A computer-implemented method as recited in claim 9, wherein the freehand input is applied at least partially within the selection.

11. A computer-implemented method as recited in claim 9, wherein the content is selected after the input of the command.

12. A computer-implemented method as recited in claim 9, wherein the freehand input comprises one or more of a word or phrase that is recognized by the computing system as the command.

13. A computer-implemented method as recited in claim 9, wherein the command comprises a command to generate a reminder, and wherein said causing the command to be executed comprises causing a reminder to be generated using at least some of the content.

14. A computer-implemented method as recited in claim 9, wherein the command comprises a command to share the content, and wherein said causing the command to be executed comprises causing at least some of the content to be processed into a shareable form.

15. A computer-implemented method, comprising:

detecting a pen in proximity to an input surface of a computing system while the computing system is in an ink recognition mode;
causing by the computing system a visual affordance identifying the ink recognition mode to be displayed responsive to said detecting; and
processing by the computing system ink content applied to the input surface according to the ink recognition mode.

16. A computer-implemented method as recited in claim 15, wherein the visual affordance comprises one or more of an ink flag or a hover target.

17. A computer-implemented method as recited in claim 15, wherein the recognition mode comprises a shape recognition mode, and wherein said processing comprises processing the ink content to generate encoded non-text shapes.

18. A computer-implemented method as recited in claim 15, wherein the recognition mode comprises a shape recognition mode, and wherein said processing comprises processing the ink content to generate encoded non-text shapes.

19. A computer-implemented method as recited in claim 15, wherein the input surface comprises a display, the ink content comprises freehand input to a particular region of the display that is not visually identified as a designated region for receiving input, and wherein the method further comprises:

converting the freehand input into encoded text; and
populating the encoded text to a different region of the display.

20. A computer-implemented method as recited in claim 15, wherein the input surface comprises a display, the ink content comprises freehand input of a character to a particular region of the display, and wherein the method further comprises processing selected content based on a character recognized from the freehand input.

Patent History
Publication number: 20150338939
Type: Application
Filed: Mar 23, 2015
Publication Date: Nov 26, 2015
Inventor: William H. Vong (Hunts Point, WA)
Application Number: 14/665,330
Classifications
International Classification: G06F 3/0354 (20060101); G06T 11/20 (20060101); G06T 11/60 (20060101);