Patents by Inventor Jan-Kristian Markiewicz
Jan-Kristian Markiewicz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20170220243
Abstract: Various embodiments provide self-revealing gestures that are designed to provide an indication of how to perform one or more different gestures. In at least one embodiment, an initiation gesture is received, relative to an object. The initiation gesture is configured to cause presentation of a visualization designed to provide an indication of how to perform a different gesture. Responsive to receiving the initiation gesture, the visualization is presented without causing performance of an operation associated with the different gesture.
Type: Application
Filed: April 10, 2017
Publication date: August 3, 2017
Inventors: Jan-Kristian Markiewicz, Gerrit H Hofmeester, Orry W Soegiono, Jennifer Marie Wolfe, Chantal M Leonard, Holger Kuehnle, Moneta Ho Kushner
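As a rough illustration of the self-revealing gesture idea described in this abstract, the following TypeScript sketch shows a hint overlay after a press-and-hold initiation gesture without performing the hinted operation. It is not taken from the patent; the hold threshold, class names, and `attachSelfRevealingHint` helper are all assumptions.

```typescript
// Minimal sketch: a press-and-hold on a tile reveals a hint overlay that
// demonstrates a different gesture (e.g. "swipe left to remove"), without
// actually performing the remove operation.
const HOLD_MS = 600; // hypothetical threshold for the initiation gesture

function attachSelfRevealingHint(tile: HTMLElement, hintText: string): void {
  let holdTimer: number | undefined;

  tile.addEventListener("pointerdown", () => {
    // Start timing the press; if the user holds long enough, treat it as the
    // initiation gesture and show the visualization only.
    holdTimer = window.setTimeout(() => {
      const hint = document.createElement("div");
      hint.className = "gesture-hint";
      hint.textContent = hintText;                  // e.g. "Swipe left to remove"
      tile.appendChild(hint);
      window.setTimeout(() => hint.remove(), 2000); // dismiss after a moment
    }, HOLD_MS);
  });

  // Releasing or moving away before the threshold cancels the hint; note that
  // no application operation is ever invoked here.
  const cancel = () => window.clearTimeout(holdTimer);
  tile.addEventListener("pointerup", cancel);
  tile.addEventListener("pointerleave", cancel);
}
```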
-
Publication number: 20170131898
Abstract: The claimed subject matter is directed to providing feedback in a touch screen device in response to an actuation of a virtual unit in a virtual input device. Specifically, the claimed subject matter provides a method and system for providing visual feedback in response to an actuation of a virtual key in a virtual keyboard. One embodiment of the claimed subject matter is implemented as a method for providing luminescent feedback in response to an actuation of a virtual key in a virtual keyboard. User input in a virtual keyboard corresponding to a virtual key is received. The corresponding virtual key is actuated and registered in response to the user input, and luminescent feedback is displayed to the user as confirmation of the actuation of the virtual key.
Type: Application
Filed: January 24, 2017
Publication date: May 11, 2017
Inventors: Jan-Kristian Markiewicz, Manuel Clement, Jason Silvis
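A minimal sketch of the luminescent-feedback idea, assuming an HTML virtual keyboard: the key is registered on press and a short-lived glow confirms the actuation. The `registerKeystroke` callback, the `data-char` attribute, and the glow styling are illustrative, not the patent's implementation.

```typescript
// Sketch of luminescent feedback: when a virtual key is actuated, register the
// keystroke and briefly display a glow on the key as visual confirmation.
function attachKeyFeedback(
  key: HTMLElement,
  registerKeystroke: (ch: string) => void
): void {
  key.addEventListener("pointerdown", () => {
    registerKeystroke(key.dataset.char ?? "");   // actuate and register the key

    // Luminescent confirmation: a short-lived glow rendered via CSS box-shadow.
    key.style.transition = "box-shadow 0.3s ease-out";
    key.style.boxShadow = "0 0 18px 6px rgba(120, 200, 255, 0.9)";
    window.setTimeout(() => { key.style.boxShadow = "none"; }, 300);
  });
}
```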
-
Patent number: 9639262
Abstract: Various embodiments provide self-revealing gestures that are designed to provide an indication of how to perform one or more different gestures. In at least one embodiment, an initiation gesture is received, relative to an object. The initiation gesture is configured to cause presentation of a visualization designed to provide an indication of how to perform a different gesture. Responsive to receiving the initiation gesture, the visualization is presented without causing performance of an operation associated with the different gesture.
Type: Grant
Filed: January 2, 2012
Date of Patent: May 2, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jan-Kristian Markiewicz, Gerrit H. Hofmeester, Orry W. Soegiono, Jennifer Marie Wolfe, Chantal M. Leonard, Holger Kuehnle, Moneta Ho Kushner
-
Publication number: 20170115844
Abstract: Techniques are described herein that are capable of presenting a control interface based on (e.g., based at least in part on) a multi-input command. A multi-input command is a command that includes two or more inputs. Each of the inputs may be of any suitable type. For instance, any one or more of the inputs may be a touch input, a hover input, etc. Moreover, any one or more of the inputs may be a finger input, a pointing device input, etc. A finger input is an input in which a finger touches or hovers over a touch display module of a touch-enabled device. A pointing device input is an input in which a pointing device (e.g., a stylus) touches or hovers over a touch display module of a touch-enabled device.
Type: Application
Filed: October 24, 2015
Publication date: April 27, 2017
Inventor: Jan-Kristian Markiewicz
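To make the multi-input idea concrete, here is a hedged TypeScript sketch of one possible combination: a pen hovering while a finger touches presents a control interface at the touch point. The state flags and the `showControlInterface` helper are assumptions for illustration only.

```typescript
// Sketch of a two-input command: a control interface (here a simple menu) is
// presented only when a pen is hovering over the surface while a finger is
// touching it.
let penHovering = false;
let fingerDown = false;

function showControlInterface(x: number, y: number): void {
  const menu = document.createElement("div");
  menu.className = "control-interface";
  menu.style.position = "absolute";
  menu.style.left = `${x}px`;
  menu.style.top = `${y}px`;
  document.body.appendChild(menu);
}

document.addEventListener("pointermove", (e: PointerEvent) => {
  // Hover input: the pen is moving with no button/tip pressed.
  if (e.pointerType === "pen") penHovering = e.buttons === 0;
});

document.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "touch") fingerDown = true;
  // Both inputs of the multi-input command are present: present the interface.
  if (penHovering && fingerDown) showControlInterface(e.clientX, e.clientY);
});

document.addEventListener("pointerup", (e: PointerEvent) => {
  if (e.pointerType === "touch") fingerDown = false;
});
```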
-
Patent number: 9612670
Abstract: A system and method for implementing an efficient and easy-to-use interface for a touch screen device. A cursor may be placed by a user using simple inputs. The device first places the cursor coarsely and then refines the cursor placement upon further input from the user. Text may be selected using a gripper associated with the cursor. The user interface allows text selection without occluding the text being selected with the user's finger or the gripper. For selecting text in a multi-line block of text, a dynamic safety zone is implemented to simplify text selection for the user.
Type: Grant
Filed: September 12, 2011
Date of Patent: April 4, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey J. Weir, Jennifer L. Anderson, Jennifer Wolfe, Gerrit H. Hofmeester, Jan-Kristian Markiewicz, Andrew R. Brauninger, Stuart J. Stuple, David Earl Washington, Matthew J. Kotler, Ryan Demopoulos, Amish Patel
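The coarse-then-refined placement can be pictured with a small TypeScript sketch against a standard text input: a tap places the caret at the nearest character, and subsequent horizontal drags nudge it character by character. The drag step size and the `coarseCaretIndex` heuristic are illustrative assumptions, not the patented method.

```typescript
// Sketch of coarse-then-refined caret placement in a text field.
function coarseCaretIndex(text: string, fractionAcross: number): number {
  // Coarse placement: map the tap's horizontal fraction to a character index.
  return Math.max(0, Math.min(text.length, Math.round(fractionAcross * text.length)));
}

function attachCaretPlacement(input: HTMLInputElement): void {
  let dragStartX = 0;

  input.addEventListener("pointerdown", (e: PointerEvent) => {
    const rect = input.getBoundingClientRect();
    const idx = coarseCaretIndex(input.value, (e.clientX - rect.left) / rect.width);
    input.setSelectionRange(idx, idx);          // coarse placement
    dragStartX = e.clientX;
  });

  input.addEventListener("pointermove", (e: PointerEvent) => {
    if (e.buttons === 0 || input.selectionStart === null) return;
    // Refinement: every ~15px of drag moves the caret by one character.
    const steps = Math.trunc((e.clientX - dragStartX) / 15);
    if (steps !== 0) {
      const idx = Math.max(0, Math.min(input.value.length, input.selectionStart + steps));
      input.setSelectionRange(idx, idx);
      dragStartX = e.clientX;
    }
  });
}
```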
-
Publication number: 20170068445
Abstract: A computer-implemented technique is described herein that receives captured stroke information when a user enters a handwritten note using an input capture device. The technique then analyzes the captured stroke information to produce output analysis information. Based on the output analysis information, the technique modifies the captured stroke information into an actionable form that contains one or more actionable content items, while otherwise preserving the original form of the captured stroke information. The technique then presents the modified stroke information on a canvas display device. The user may subsequently activate one or more actionable content items in the modified stroke information to perform various supplemental tasks that pertain to the handwritten note. In one case, for example, the technique can recognize the presence of entity items and/or list items in the note and then reproduce them in an actionable form.
Type: Application
Filed: September 3, 2015
Publication date: March 9, 2017
Inventors: Nicole Lee, Jan-Kristian Markiewicz, Sarah Graham Williams
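A simplified sketch of the classification step, assuming the ink-to-text analysis already exists: recognized lines are checked for entity items (here, phone numbers) and list items so the UI can attach actions to them. The regular expressions and the `ActionableItem` shape are hypothetical.

```typescript
// Sketch of turning recognized handwriting into actionable items.
interface ActionableItem {
  text: string;
  kind: "phone" | "listItem" | "plain";
}

function classifyRecognizedLines(lines: string[]): ActionableItem[] {
  const phone = /(\+?\d[\d\s().-]{7,}\d)/;
  return lines.map((line) => {
    if (phone.test(line)) return { text: line, kind: "phone" };               // e.g. tap to dial
    if (/^\s*[-*\u2022]\s+/.test(line)) return { text: line, kind: "listItem" }; // checkable to-do
    return { text: line, kind: "plain" };
  });
}

// Example: a handwritten note "• call dentist 425 555 0100" would yield a
// listItem entry whose phone number can be surfaced as a tappable action.
```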
-
Publication number: 20170068436
Abstract: A computer-implemented technique is described herein that receives captured stroke information when a user enters a handwritten note using an input capture device. The technique then performs analysis on the stroke information to produce output analysis information. Based on the output analysis information, the technique then retrieves at least one supplemental information item that is associated with the captured stroke information. The technique then displays the supplemental information item on a canvas display device, together with the original captured stroke information. In effect, the supplemental information item annotates the captured stroke information with meaningful additional information that enhances the value of the captured stroke information.
Type: Application
Filed: September 3, 2015
Publication date: March 9, 2017
Inventors: Nathaniel E. B. Auer, Lee Dicks Clark, Jan-Kristian Markiewicz, Nicole Lee
-
Publication number: 20170068854
Abstract: A computer-implemented technique is described herein that receives captured stroke information when a user enters handwritten notes using an input capture device. The technique then automatically performs analysis on the captured stroke information to produce output analysis information. Based on the output analysis information, the technique uses an assistant component to identify a response to the captured stroke information and/or to identify an action to be performed. The technique then presents the response, together with the original captured stroke information. In addition, or alternatively, the technique performs the action. In one case, the response is a text-based response; that text-based response may be presented in a freeform handwriting style to give the user the impression that a virtual assistant is responding to the user's own note. In another case, the response engages the user in an interactive exercise of any type.
Type: Application
Filed: September 3, 2015
Publication date: March 9, 2017
Inventors: Jan-Kristian Markiewicz, Nathaniel E. B. Auer, Lee Dicks Clark, Katsumi Take, Nicole Lee
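A toy TypeScript sketch of the assistant-response step, under the assumption that the note has already been analyzed into plain text: a few intent checks produce a reply, which is rendered in a handwriting-style font next to the original ink. The intent rules, font choice, and helpers are all hypothetical.

```typescript
// Sketch of an assistant reply rendered alongside captured ink.
function assistantReply(noteText: string): string | undefined {
  if (/weather/i.test(noteText)) return "Want me to check the forecast?";
  if (/remind me/i.test(noteText)) return "I can set that reminder for you.";
  return undefined; // no response for notes the assistant does not recognize
}

function presentReply(canvas: HTMLElement, reply: string): void {
  const el = document.createElement("div");
  el.textContent = reply;
  el.style.fontFamily = "'Segoe Script', cursive"; // freeform handwriting style
  el.style.color = "#3366cc";
  canvas.appendChild(el); // shown together with the original captured strokes
}
```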
-
Patent number: 9588681
Abstract: The claimed subject matter is directed to providing feedback in a touch screen device in response to an actuation of a virtual unit in a virtual input device. Specifically, the claimed subject matter provides a method and system for providing visual feedback in response to an actuation of a virtual key in a virtual keyboard. One embodiment of the claimed subject matter is implemented as a method for providing luminescent feedback in response to an actuation of a virtual key in a virtual keyboard. User input in a virtual keyboard corresponding to a virtual key is received. The corresponding virtual key is actuated and registered in response to the user input, and luminescent feedback is displayed to the user as confirmation of the actuation of the virtual key.
Type: Grant
Filed: April 24, 2014
Date of Patent: March 7, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jan-Kristian Markiewicz, Manuel Clement, Jason Silvis
-
Patent number: 9507435
Abstract: As a user writes using a handheld writing device, such as an electronic pen or stylus, handwriting input is received and initially displayed as digital ink. The display of the digital ink is converted to recognized text inline with additional digital ink as the user continues to write. A user may edit a word of recognized text inline with other text by selecting the word. An enlarged version of the word is displayed in a character correction user interface that allows a user to make corrections on an individual character basis and also provides other correction options for the word.
Type: Grant
Filed: January 17, 2012
Date of Patent: November 29, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jan-Kristian Markiewicz, Adrian James Garside, Takanobu Murayama, Krishna Kotipali, Susan E. Dziadosz
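The character-correction interface can be sketched as follows in TypeScript: selecting a recognized word opens an enlarged panel with one editable cell per character, and the corrected word is written back on confirmation. The DOM structure, styling, and `onCorrected` callback are assumptions for illustration.

```typescript
// Sketch of per-character correction for a recognized word.
function openCharacterCorrection(
  word: string,
  onCorrected: (fixed: string) => void
): void {
  const panel = document.createElement("div");
  panel.className = "char-correction";

  const cells = Array.from(word).map((ch) => {
    const cell = document.createElement("input");
    cell.maxLength = 1;
    cell.value = ch;
    cell.style.fontSize = "2em"; // enlarged version of each character
    panel.appendChild(cell);
    return cell;
  });

  const done = document.createElement("button");
  done.textContent = "Done";
  done.addEventListener("click", () => {
    onCorrected(cells.map((c) => c.value).join("")); // write the fix back inline
    panel.remove();
  });
  panel.appendChild(done);
  document.body.appendChild(panel);
}
```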
-
Patent number: 9442642
Abstract: Technologies are generally described for providing a tethered selection handle for direct selection of content on a touch or gesture interface. Touch or gesture input on a computing device may be detected to begin content selection, and a start handle may be displayed near the initial input location. An end handle may be displayed to indicate an end of the selection. After the selection, the end handle, a portion of the end handle, or a separate indicator may be displayed at the location of the user's current interaction point to indicate to the user that the computing device is aware of the movement of the user's interaction point away from the end handle, but that the content selection has not changed. The newly displayed indicator may be tethered to the end handle to further indicate the connection between the end of the selected content and the user's current interaction point.
Type: Grant
Filed: June 14, 2013
Date of Patent: September 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alex Pereira, Jon Clapper, Jan-Kristian Markiewicz
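The tether visualization itself is easy to picture with a small canvas sketch: a dashed line connects the end handle to the user's current contact point, with an indicator drawn at the contact. Rendering on an overlay `<canvas>` and the exact styling are assumptions, not the patented design.

```typescript
// Sketch of a tethered end handle drawn on an overlay canvas.
function drawTether(
  overlay: HTMLCanvasElement,
  handle: { x: number; y: number },
  contact: { x: number; y: number }
): void {
  const ctx = overlay.getContext("2d");
  if (!ctx) return;
  ctx.clearRect(0, 0, overlay.width, overlay.height);

  // Tether line from the end handle to the user's current interaction point.
  ctx.setLineDash([4, 4]);
  ctx.beginPath();
  ctx.moveTo(handle.x, handle.y);
  ctx.lineTo(contact.x, contact.y);
  ctx.stroke();

  // Indicator at the current contact point, tethered back to the handle; the
  // selection itself is left untouched.
  ctx.setLineDash([]);
  ctx.beginPath();
  ctx.arc(contact.x, contact.y, 6, 0, Math.PI * 2);
  ctx.fill();
}
```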
-
Patent number: 9400567
Abstract: A system and method for implementing an efficient and easy-to-use interface for a touch screen device. A cursor may be placed by a user using simple inputs. The device first places the cursor coarsely and then refines the cursor placement upon further input from the user. Text may be selected using a gripper associated with the cursor. The user interface allows text selection without occluding the text being selected with the user's finger or the gripper. For selecting text in a multi-line block of text, a dynamic safety zone is implemented to simplify text selection for the user.
Type: Grant
Filed: November 21, 2012
Date of Patent: July 26, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey J. Weir, Jennifer L. Anderson, Jennifer Wolfe, Gerrit H. Hofmeester, Jan-Kristian Markiewicz, Andrew R. Brauninger, Stuart J. Stuple, David Earl Washington, Matthew J. Kotler, Ryan Demopoulos, Amish Patel
-
Publication number: 20160210027
Abstract: Application closing techniques are described. In one or more implementations, a computing device recognizes an input as involving selection of an application displayed in a display environment by the computing device and subsequent movement of a point of the selection toward an edge of the display environment. Responsive to the recognizing of the input, the selected application is closed by the computing device.
Type: Application
Filed: December 28, 2015
Publication date: July 21, 2016
Inventors: Brian S. LeVee, Jan-Kristian Markiewicz, Gerrit H. Hofmeester, Nils A. Sundelin, Chaitanya Dev Sareen, Matthew I. Worley, Jesse Clay Satterfield, Adam E. Barrus, Benjamin Salim Srour, Bret P. Anderson
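As a hedged sketch of the close gesture, assuming an HTML element stands in for the application surface: a drag that ends within a small band at the bottom edge of the display closes the app. The 40px band and the `closeApp` callback are illustrative values, not the described implementation.

```typescript
// Sketch of closing an application by dragging its selection point to an edge.
function attachCloseGesture(appSurface: HTMLElement, closeApp: () => void): void {
  let dragging = false;

  appSurface.addEventListener("pointerdown", (e: PointerEvent) => {
    dragging = true;
    appSurface.setPointerCapture(e.pointerId); // keep receiving move/up events
  });

  appSurface.addEventListener("pointerup", (e: PointerEvent) => {
    if (!dragging) return;
    dragging = false;
    // The selection point ended near the bottom edge of the display environment.
    if (window.innerHeight - e.clientY < 40) closeApp();
  });
}
```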
-
Patent number: 9389764
Abstract: Various embodiments enable target disambiguation and correction. In one or more embodiments, target disambiguation includes an entry mode in which attempts are made to disambiguate one or more targets that have been selected by a user, and an exit mode which exits target disambiguation. Entry mode can be triggered in a number of different ways including, by way of example and not limitation, acquisition of multiple targets, selection latency, a combination of multiple target acquisition and selection latency, and the like. Exit mode can be triggered in a number of different ways including, by way of example and not limitation, movement of a target selection mechanism outside of a defined geometry, speed of movement of the target selection mechanism, and the like.
Type: Grant
Filed: May 27, 2011
Date of Patent: July 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Paul Armistead Hoover, Michael J. Patten, Theresa B. Pittappilly, Jan-Kristian Markiewicz, Adrian J. Garside, Maxim V. Mazeev, Jarrod Lombardo
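One of the listed entry triggers, acquisition of multiple targets, can be sketched as a hit test with a touch slop: if a single touch overlaps more than one candidate target, a disambiguation mode is entered. The 20px slop and the `zoomCandidates` callback are assumptions for illustration.

```typescript
// Sketch of entering a disambiguation mode when one touch acquires many targets.
interface Target { el: HTMLElement; }

function hitTargets(targets: Target[], x: number, y: number, slopPx = 20): Target[] {
  return targets.filter(({ el }) => {
    const r = el.getBoundingClientRect();
    return x > r.left - slopPx && x < r.right + slopPx &&
           y > r.top - slopPx && y < r.bottom + slopPx;
  });
}

function onTouch(
  targets: Target[],
  e: PointerEvent,
  zoomCandidates: (t: Target[]) => void
): boolean {
  const hits = hitTargets(targets, e.clientX, e.clientY);
  if (hits.length > 1) {
    zoomCandidates(hits); // entry mode: multiple targets acquired, disambiguate
    return true;
  }
  return false; // zero or one target: no disambiguation needed
}
```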
-
Patent number: 9367230
Abstract: One or more techniques and/or systems are provided for utilizing input data received from an indirect interaction device (e.g., mouse, touchpad, etc.) as if the data were received from a direct interaction device (e.g., touchscreen). Interaction models are described for handling input data received from an indirect interaction device. For example, the interaction models may provide for the presentation of two or more targets (e.g., cursors) on a display when two or more contacts (e.g., fingers) are detected by the indirect interaction device. Moreover, based upon the number of contacts detected and/or the pressure applied by respective contacts, the presented target(s) may be respectively transitioned between a hover visualization and an engage visualization. Targets in an engage visualization may manipulate the size of an object presented in a user interface, pan the object, drag the object, rotate the object, and/or otherwise engage the object, for example.
Type: Grant
Filed: November 8, 2011
Date of Patent: June 14, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sarah G. Williams, Scott Honji, Masahiko Kaneko, Jan-Kristian Markiewicz, Vincent Ball, Amish Patel, Paul R. Millsap
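The hover-versus-engage transition can be illustrated with pointer pressure: light pressure keeps the on-screen target in a hover visualization, firmer pressure switches it to an engaged one. The 0.4 threshold, CSS class names, and one-element-per-contact assumption are illustrative only.

```typescript
// Sketch of transitioning an indirect-input target between hover and engage
// visualizations based on applied pressure.
function updateTargetVisualization(cursorEl: HTMLElement, e: PointerEvent): void {
  if (e.pressure > 0.4) {
    cursorEl.classList.add("engaged");   // engage visualization: can pan/drag/rotate
    cursorEl.classList.remove("hover");
  } else {
    cursorEl.classList.add("hover");     // hover visualization: just tracking
    cursorEl.classList.remove("engaged");
  }
  // One target element per detected contact would be positioned like this:
  cursorEl.style.transform = `translate(${e.clientX}px, ${e.clientY}px)`;
}
```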
-
Patent number: 9329768
Abstract: Computer-readable media, computerized methods, and computer systems for intuitively invoking a panning action (e.g., moving content within a content region of a display area) by applying a user-initiated input at the content region rendered at a touchscreen interface are provided. Initially, aspects of the user-initiated input include a location of actuation (e.g., touch point on the touchscreen interface) and a gesture. Upon ascertaining that the actuation location occurred within the content region and that the gesture is a drag operation, based on a distance of uninterrupted tactile contact with the touchscreen interface, a panning mode may be initiated. When in the panning mode, and if the application rendering the content at the display area supports scrolling functionality, the gesture will control movement of the content within the content region. In particular, the drag operation of the gesture will pan the content within the display area when surfaced at the touchscreen interface.
Type: Grant
Filed: February 11, 2013
Date of Patent: May 3, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: David A Matthews, Jan-Kristian Markiewicz, Reed L Townsend, Pamela De La Torre Baltierra, Todd A Torset, Josh A Clow, Xiao Tu, Leroy B Keely
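A minimal sketch of the drag-distance check and the resulting panning mode, assuming a scrollable HTML content region: once uninterrupted contact travels past a threshold, the gesture is treated as a drag and the content pans with the finger. The 10px threshold is an illustrative drag-detection distance.

```typescript
// Sketch of entering a panning mode after a drag distance threshold is crossed.
function attachPanning(contentRegion: HTMLElement): void {
  let start: { x: number; y: number } | null = null;
  let panning = false;

  contentRegion.addEventListener("pointerdown", (e: PointerEvent) => {
    start = { x: e.clientX, y: e.clientY };
    panning = false;
    contentRegion.setPointerCapture(e.pointerId);
  });

  contentRegion.addEventListener("pointermove", (e: PointerEvent) => {
    if (!start) return;
    const dx = e.clientX - start.x;
    const dy = e.clientY - start.y;
    if (!panning && Math.hypot(dx, dy) > 10) panning = true; // enter panning mode
    if (panning) {
      contentRegion.scrollLeft -= dx;  // pan the content with the finger
      contentRegion.scrollTop -= dy;
      start = { x: e.clientX, y: e.clientY };
    }
  });

  contentRegion.addEventListener("pointerup", () => { start = null; panning = false; });
}
```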
-
Publication number: 20160110090
Abstract: This document describes techniques and apparatuses for gesture-based content-object zooming. In some embodiments, the techniques receive a gesture made to a user interface displaying multiple content objects, determine which content object to zoom, determine an appropriate size for the content object based on bounds of the object and the size of the user interface, and zoom the object to the appropriate size.
Type: Application
Filed: December 21, 2015
Publication date: April 21, 2016
Inventors: Michael J. Patten, Paul Armistead Hoover, Jan-Kristian Markiewicz
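The "appropriate size" computation can be illustrated as a simple fit calculation: scale the object so it fills the user interface while preserving aspect ratio, with a small margin. The 0.9 margin factor is an assumption, not a value from the publication.

```typescript
// Sketch of picking a zoom factor for a content object from its bounds and the
// size of the user interface.
interface Size { width: number; height: number; }

function zoomScaleFor(objectBounds: Size, uiSize: Size, margin = 0.9): number {
  // The appropriate size is limited by whichever dimension hits the UI first.
  const scaleX = (uiSize.width * margin) / objectBounds.width;
  const scaleY = (uiSize.height * margin) / objectBounds.height;
  return Math.min(scaleX, scaleY);
}

// Example: a 320x180 article tile in a 1280x720 interface zooms by
// min(1280*0.9/320, 720*0.9/180) = min(3.6, 3.6) = 3.6x.
```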
-
Publication number: 20160077793
Abstract: Systems, methods, and computer storage media are provided for initiating a system-wide voice-to-text dictation service in response to a preconfigured gesture. Data input fields, independent of the application from which they are presented to a user, are configured to at least detect one or more input events. A gesture listener process, controlled by the system, is configured to detect a preconfigured gesture corresponding to a data input field. Detection of the preconfigured gesture generates an input event configured to invoke a voice-to-text session for the corresponding data input field. The preconfigured gesture can be configured such that any visible on-screen affordances (e.g., microphone button on a virtual keyboard) are omitted to maintain aesthetic purity and further provide system-wide access to the dictation service.
Type: Application
Filed: September 15, 2014
Publication date: March 17, 2016
Inventors: Robert Joseph Disano, Alexandre Douglas Pereira, Lisa Joy Stifelman, Jan-Kristian Markiewicz, Shane Jeremy Landry, Christian Klein
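A rough browser-side analogue of the idea, not the system-level service described above: a press-and-hold on a text field starts a dictation session using the Web Speech API where the browser supports it, with no on-screen microphone button. The 700ms hold threshold and the whole use of Web Speech are assumptions for illustration.

```typescript
// Sketch of a gesture-invoked dictation session for a text field.
function attachDictationGesture(field: HTMLInputElement): void {
  let holdTimer: number | undefined;

  field.addEventListener("pointerdown", () => {
    holdTimer = window.setTimeout(() => {
      // Web Speech API is prefixed in some browsers and absent in others.
      const Ctor = (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
      if (!Ctor) return;                       // dictation not supported here
      const recognizer = new Ctor();
      recognizer.onresult = (ev: any) => {
        field.value += ev.results[0][0].transcript;  // voice-to-text result
      };
      recognizer.start();
    }, 700); // hypothetical press-and-hold threshold (the preconfigured gesture)
  });

  field.addEventListener("pointerup", () => window.clearTimeout(holdTimer));
}
```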
-
Publication number: 20160048318
Abstract: A device, method, and computer-readable media for switching between a digital ink selection mode and another mode are presented. The device includes a surface and processor that receive and identify gestures or writing instrument strokes. The processor receives the identified gestures or writing instrument strokes from the digital ink-enabled surface. In response to a tap gesture, the processor processes the area associated with the tap to detect digital ink strokes for a word, sentence, or shape corresponding to the tapped area. In turn, the device enters an ink selection mode for the located ink strokes. The digital ink-enabled surface may have a display that is updated to render a closed shape around the located digital ink strokes. The device may switch from digital ink selection mode to the other mode in response to additional writing instrument interactions or additional gestures, including interactions or gestures on displayed whitespace.
Type: Application
Filed: August 15, 2014
Publication date: February 18, 2016
Inventor: Jan-Kristian Markiewicz
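A simplified sketch of the tap-to-select step: strokes whose bounding boxes lie near the tapped point are collected, and a closed shape (here, a dashed rectangle) is rendered around them to show the ink selection. The stroke representation, 30px radius, and bounding-box grouping are assumptions; a real implementation would group strokes into words, sentences, or shapes.

```typescript
// Sketch of entering ink selection mode for strokes near a tap point.
interface InkStroke { minX: number; minY: number; maxX: number; maxY: number; }

function strokesNear(strokes: InkStroke[], x: number, y: number, radius = 30): InkStroke[] {
  return strokes.filter((s) =>
    x > s.minX - radius && x < s.maxX + radius &&
    y > s.minY - radius && y < s.maxY + radius
  );
}

function enterInkSelectionMode(overlay: CanvasRenderingContext2D, selected: InkStroke[]): void {
  if (selected.length === 0) return;
  // Closed shape around the located strokes (here, their joint bounding box).
  const minX = Math.min(...selected.map((s) => s.minX));
  const minY = Math.min(...selected.map((s) => s.minY));
  const maxX = Math.max(...selected.map((s) => s.maxX));
  const maxY = Math.max(...selected.map((s) => s.maxY));
  overlay.setLineDash([6, 4]);
  overlay.strokeRect(minX - 4, minY - 4, maxX - minX + 8, maxY - minY + 8);
}
```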
-
Patent number: 9262071
Abstract: Various embodiments provide techniques for direct manipulation of content. The direct manipulation of content can provide an intuitive way for a user to access and interact with content. In at least some embodiments, content manipulation is "direct" in that content displayed in a user interface (e.g., one or more Web pages in a Web browser interface) can be moved in and/or out of the user interface in a direction that corresponds to user-initiated physical movements, such as the user dragging or flicking the content with the user's finger or some other type of input device.
Type: Grant
Filed: March 16, 2009
Date of Patent: February 16, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Helen E. Drislane, David A. Matthews, Jan-Kristian Markiewicz, Paul L. Cutsinger, Jr., Bruce A. Morgan, Brian E. Manthos, Prashant Singh
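As a final hedged sketch of the drag-and-flick behavior: the content element tracks the finger while dragging, and a quick flick keeps it moving in the same direction using the release velocity. The friction constant and velocity math are illustrative, not the patented technique.

```typescript
// Sketch of direct manipulation: content follows a drag and coasts on a flick.
function attachDirectManipulation(content: HTMLElement): void {
  let lastX = 0, lastT = 0, offset = 0, velocity = 0, dragging = false;

  content.addEventListener("pointerdown", (e: PointerEvent) => {
    dragging = true; lastX = e.clientX; lastT = e.timeStamp; velocity = 0;
    content.setPointerCapture(e.pointerId);
  });

  content.addEventListener("pointermove", (e: PointerEvent) => {
    if (!dragging) return;
    const dx = e.clientX - lastX;
    offset += dx;                                         // content tracks the finger
    velocity = dx / Math.max(1, e.timeStamp - lastT);     // px per ms
    lastX = e.clientX; lastT = e.timeStamp;
    content.style.transform = `translateX(${offset}px)`;
  });

  content.addEventListener("pointerup", () => {
    dragging = false;
    const coast = () => {                                 // flick: keep moving, then slow down
      offset += velocity * 16;                            // ~16ms per animation frame
      velocity *= 0.95;
      content.style.transform = `translateX(${offset}px)`;
      if (Math.abs(velocity) > 0.01) requestAnimationFrame(coast);
    };
    requestAnimationFrame(coast);
  });
}
```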