Patents by Inventor Timothy S. Paek
Timothy S. Paek has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140368434
Abstract: Described herein are technologies that facilitate decoding a continuous sequence of gestures set forth in the air by a user. A sensor captures movement of a portion of a body of the user relative to a keyboard displayed on a display screen, and a continuous trace is identified based upon the captured movement. The continuous trace is decoded to ascertain a word desirably set forth by the user.
Type: Application
Filed: June 13, 2013
Publication date: December 18, 2014
Inventors: Timothy S. Paek, Johnson Apacible
-
Publication number: 20140365878
Abstract: Disclosed herein are representative embodiments of tools and techniques for providing one or more ink-trace predictions for shape writing. According to one exemplary technique, a portion of a shape-writing shape is received by a touchscreen. Based on the portion of the shape-writing shape, an ink trace is displayed. Also, predicted text is determined. The ink trace corresponds to a first portion of the predicted text. Additionally, an ink-trace prediction is provided connecting the ink trace to one or more keyboard keys corresponding to one or more characters of a second portion of the predicted text.
Type: Application
Filed: June 10, 2013
Publication date: December 11, 2014
Inventors: Juan Dai, Timothy S. Paek, Dmytro Rudchenko, Parthasarathy Sundararajan, Eric Norman Badger, Pu Li
-
Publication number: 20140359434
Abstract: Disclosed herein are representative embodiments of tools and techniques for providing out-of-dictionary indicators for shape writing. According to one exemplary technique, a first shape-writing shape is received by a touchscreen and a failed recognition event is determined to have occurred for the first shape-writing shape. Also, a second shape-writing shape is received by the touchscreen and a failed recognition event is determined to have occurred for the second shape-writing shape. The first shape-writing shape is compared to the second shape-writing shape. Additionally, at least one out-of-dictionary indicator is provided based on the comparing of the first shape-writing shape to the second shape-writing shape.
Type: Application
Filed: May 30, 2013
Publication date: December 4, 2014
Inventors: Juan Dai, Timothy S. Paek, Dmytro Rudchenko, Parthasarathy Sundararajan, Eric Norman Badger, Pu Li
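The comparison step this abstract describes might be sketched as follows. This is a hedged illustration, not the patented implementation: it assumes both failed traces were sampled with the same number of points and compares them by mean point-to-point distance, on the theory that two near-identical failed traces suggest a deliberately repeated out-of-dictionary word. The function name and threshold are assumptions.

```python
import math

def traces_match(trace_a, trace_b, threshold=10.0):
    """Compare two equal-length traces by mean point-to-point distance.

    Returns True when the second failed trace is close enough to the
    first to suggest the user retraced the same unknown word.
    """
    if len(trace_a) != len(trace_b):
        return False
    mean_dist = sum(math.dist(a, b) for a, b in zip(trace_a, trace_b)) / len(trace_a)
    return mean_dist < threshold

first_attempt = [(0, 0), (40, 5), (80, 0)]
second_attempt = [(2, 1), (43, 4), (79, 2)]  # nearly the same glide
print(traces_match(first_attempt, second_attempt))  # prints True
```

A production decoder would first resample both traces to a common length and normalize for position and scale before comparing.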
-
Publication number: 20140347262
Abstract: Described herein are technologies relating to display of a representation of an object on a display screen with visual verisimilitude to a viewer. A location of eyes of the viewer relative to a reference point on the display screen is determined. Additionally, a direction of gaze of the eyes of the viewer is determined. Based upon the location and direction of gaze of the eyes of the viewer, the representation of the object can be displayed at a scale and orientation such that it appears with visual verisimilitude to the viewer.
Type: Application
Filed: May 24, 2013
Publication date: November 27, 2014
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Johnson Apacible
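The scale computation behind "visual verisimilitude" can be illustrated with standard perspective geometry: draw the object at a size that subtends the same visual angle it would subtend if physically present at its intended distance. This sketch is an assumption about the rendering step, not the patent's method; the function name and units are illustrative.

```python
import math

def apparent_scale(object_size, object_distance, viewer_distance):
    """Size to draw on screen so the object subtends its true visual angle.

    A real object of object_size at object_distance subtends the angle
    2 * atan(size / (2 * distance)); drawing it at the returned size on a
    screen viewer_distance away reproduces that angle for the viewer.
    All three arguments share one unit (e.g., centimeters).
    """
    angle = 2 * math.atan(object_size / (2 * object_distance))
    return 2 * viewer_distance * math.tan(angle / 2)

# A 30 cm object meant to appear 2 m away, viewed on a screen 50 cm
# from the viewer's eyes:
print(round(apparent_scale(30, 200, 50), 2))  # prints 7.5
```

The gaze direction determined by the abstract's second step would additionally drive the orientation (a perspective transform), which this size-only sketch omits.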
-
Publication number: 20140329487
Abstract: A mobile device is described herein that provides a user interface experience to a user who is operating the mobile device within a vehicle. The mobile device provides the user interface experience using mode functionality. The mode functionality operates by receiving inference-input information from one or more input sources. At least one input source corresponds to at least one movement-sensing device, provided by the mobile device, that determines movement of the mobile device. The mode functionality then infers a state of the vehicle based on the inference-input information and presents a user interface experience that is appropriate for the vehicle state. In one scenario, the mode functionality can also infer that the vehicle is in a distress condition. In response, the mode functionality can solicit assistance for the user.
Type: Application
Filed: July 15, 2014
Publication date: November 6, 2014
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl, Sreenivas Addagatla
-
Publication number: 20140310213
Abstract: An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve text entry user experience and performance by generating input history data including character probabilities, word probabilities, and touch models. According to one embodiment, a method comprises receiving first input data, automatically learning user tendencies based on the first input data to generate input history data, receiving second input data, and generating auto-corrections or suggestion candidates for one or more words of the second input data based on the input history data. The user can then select one of the suggestion candidates to replace a selected word with the selected suggestion candidate.
Type: Application
Filed: June 27, 2014
Publication date: October 16, 2014
Inventors: Eric Norman Badger, Drew Elliott Linerud, Itai Almog, Timothy S. Paek, Parthasarathy Sundararajan, Dmytro Rudchenko, Asela J. Gunawardana
-
Publication number: 20140278349
Abstract: Techniques are described to generate text prediction candidates corresponding to detected text characters according to an adaptive language model that includes multiple individual language model dictionaries. Respective scoring data from the dictionaries is combined to select prediction candidates in different interaction scenarios. In an implementation, dictionaries corresponding to multiple different languages are combined to produce multi-lingual predictions. Predictions for different languages may be weighted proportionally according to relative usage by a user. Weights used to combine contributions from multiple dictionaries may also depend upon factors such as how recently a word is used, number of times used, and so forth. Further, the dictionaries may include interaction-specific dictionaries that are learned by monitoring a user's typing activity to adapt predictions to corresponding usage scenarios.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: Microsoft Corporation
Inventors: Jason A. Grieves, Dmytro Rudchenko, Parthasarathy Sundararajan, Timothy S. Paek, Itai Almog, Gleb G. Krivosheev
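The weighted combination this abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented scoring: each dictionary maps words to scores, per-dictionary weights stand in for relative usage, and candidates matching the typed prefix are ranked by their combined weighted score. All names, scores, and weights below are invented for the example.

```python
def predict(prefix, dictionaries, weights, top_n=3):
    """Combine weighted scores from several language-model dictionaries.

    dictionaries: list of {word: score} maps (e.g., one per language).
    weights: per-dictionary weights, e.g., proportional to how often
    the user types in each language.
    """
    combined = {}
    for weight, dictionary in zip(weights, dictionaries):
        for word, score in dictionary.items():
            if word.startswith(prefix):
                combined[word] = combined.get(word, 0.0) + weight * score
    # Highest combined score first.
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

english = {"hello": 0.9, "help": 0.6, "world": 0.8}
spanish = {"hola": 0.9, "helado": 0.5}
# This user types Spanish about twice as often as English.
print(predict("hel", [english, spanish], [1.0, 2.0]))
# prints ['helado', 'hello', 'help']
```

The recency and frequency factors the abstract mentions would simply become additional multiplicative terms on each word's score.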
-
Publication number: 20140232632
Abstract: The subject disclosure is directed towards a wearable interactive device, such as a wearable identity badge. When a user moves the device, such as by positioning the display (e.g., part) of the device at a sensed distance and at a sensed horizontal and vertical angle, the device outputs content that is based on the position. Context data, as well as any other sensed data that may be available, also may be used in determining the content to output.
Type: Application
Filed: February 15, 2013
Publication date: August 21, 2014
Applicant: Microsoft Corporation
Inventors: Stephen Edward Hodges, Norman Timo Pohl, John Helmes, Nicolas Villar Martinez, Timothy S. Paek, Johnson Tan Apacible
-
Patent number: 8811938
Abstract: A mobile device is described herein that provides a user interface experience to a user who is operating the mobile device within a vehicle. The mobile device provides the user interface experience using mode functionality. The mode functionality operates by receiving inference-input information from one or more input sources. At least one input source corresponds to at least one movement-sensing device, provided by the mobile device, that determines movement of the mobile device. The mode functionality then infers a state of the vehicle based on the inference-input information and presents a user interface experience that is appropriate for the vehicle state. In one scenario, the mode functionality can also infer that the vehicle is in a distress condition. In response, the mode functionality can solicit assistance for the user.
Type: Grant
Filed: December 16, 2011
Date of Patent: August 19, 2014
Assignee: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl, Sreenivas Addagatla
-
Patent number: 8782556
Abstract: An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve text entry user experience and performance by generating input history data including character probabilities, word probabilities, and touch models. According to one embodiment, a method comprises receiving first input data, automatically learning user tendencies based on the first input data to generate input history data, receiving second input data, and generating auto-corrections or suggestion candidates for one or more words of the second input data based on the input history data. The user can then select one of the suggestion candidates to replace a selected word with the selected suggestion candidate.
Type: Grant
Filed: March 22, 2010
Date of Patent: July 15, 2014
Assignee: Microsoft Corporation
Inventors: Eric Norman Badger, Drew Elliot Linerud, Itai Almog, Timothy S. Paek, Parthasarathy Sundararajan, Dmytro Rudchenko, Asela J. Gunawardana
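The two-pass flow in this abstract (learn from first input, suggest on second input) can be sketched as below. This is an illustrative simplification, not the patented method: word counts stand in for the full input history data (which also includes character probabilities and touch models), and string similarity stands in for the touch-model-aware correction scoring. The class and method names are assumptions.

```python
from collections import Counter
from difflib import SequenceMatcher

class InputHistory:
    def __init__(self):
        self.word_counts = Counter()

    def learn(self, text):
        # First input pass: accumulate per-word usage counts.
        self.word_counts.update(text.lower().split())

    def suggest(self, word, top_n=3):
        # Second input pass: rank previously seen words by string
        # similarity, breaking ties by how often the user typed them.
        def key(candidate):
            similarity = SequenceMatcher(None, word, candidate).ratio()
            return (similarity, self.word_counts[candidate])
        return sorted(self.word_counts, key=key, reverse=True)[:top_n]

history = InputHistory()
history.learn("the quick brown fox the lazy dog the end")
print(history.suggest("teh")[0])  # prints 'the'
```

In the patented design, selecting a suggestion would both replace the word and feed back into the learned history.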
-
Publication number: 20140098036
Abstract: Described herein are various technologies pertaining to shapewriting. A touch-sensitive input panel comprises a plurality of keys, where each key in the plurality of keys is representative of a respective plurality of characters. A user can generate a trace over the touch-sensitive input panel, wherein the trace passes over keys desirably selected by the user. A sequence of characters, such as a word, is decoded based upon the trace, and is output to a display or a speaker.
Type: Application
Filed: January 20, 2013
Publication date: April 10, 2014
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Johnson Apacible, Dmytro Rudchenko, Bongshin Lee, Juan Dai, Yutaka Suzue
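The decoding step can be illustrated with a toy matcher: the keys a trace passes over form a sequence, and a candidate word fits the trace if its letters appear, in order, within that sequence. This is a hedged sketch of the general shape-writing idea, not the patented decoder; real systems also score candidates with a language model and the trace's geometry. The dictionary and key sequence are invented for the example.

```python
def is_subsequence(word, keys):
    # True if the word's letters occur in order within the traced keys.
    it = iter(keys)
    return all(ch in it for ch in word)

def decode(traced_keys, dictionary):
    # Keep words that start and end where the trace does and whose
    # letters fit the trace; prefer the longest such word.
    candidates = [w for w in dictionary
                  if w[0] == traced_keys[0] and w[-1] == traced_keys[-1]
                  and is_subsequence(w, traced_keys)]
    return max(candidates, key=len, default=None)

dictionary = {"cat", "chat", "coat", "hat"}
# The trace glides over the c, h, a, and t keys in order.
print(decode("chat", dictionary))  # prints 'chat'
```

Note that "cat" also fits this trace; a real decoder would disambiguate such collisions with a language model rather than word length alone.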
-
Publication number: 20140098038
Abstract: Technologies relating to touch-sensitive displays are described herein. A computing device with a touch-sensitive display is configurable to act as multiple control devices, such as a video game controller, a remote control, and a music player. Different haptic regions can be assigned for the different configurations, where the haptic regions are configured to provide haptic feedback when a user interacts with such haptic regions. Thus, similar to conventional input mechanisms with physical human-machine interfaces, haptic feedback is provided as a user employs the computing device, allowing for eyes-free interaction.
Type: Application
Filed: June 7, 2013
Publication date: April 10, 2014
Inventors: Timothy S. Paek, Hong Tan, Asela Gunawardana, Mark Yeend
-
Publication number: 20140101593
Abstract: A soft input panel (SIP) for a computing device is configured to be used by a person holding a computing device with one hand. For example, a user grips a mobile computing device with his right hand at the bottom right corner and uses his right thumb to touch the various keys of the SIP, or grips a mobile computing device with his left hand at the bottom left corner and uses his left thumb to touch the various keys of the SIP. The SIP comprises arced or slanted rows of keys that correspond to the natural pivoting motion of the user's thumb.
Type: Application
Filed: December 27, 2012
Publication date: April 10, 2014
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Dmytro Rudchenko, Bongshin Lee, Nikhil Devanur Rangarajan
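The arced-row geometry can be sketched directly: place each row's key centers along a circular arc around a pivot near the base of the gripping thumb, so every key on a row sits at a constant sweep radius. The pivot position, radii, and angles below are illustrative assumptions, not values from the patent.

```python
import math

def arced_row(pivot, radius, start_deg, end_deg, n_keys):
    """Return n_keys (x, y) key centers evenly spaced along an arc."""
    px, py = pivot
    step = (end_deg - start_deg) / (n_keys - 1)
    return [(px + radius * math.cos(math.radians(start_deg + i * step)),
             py + radius * math.sin(math.radians(start_deg + i * step)))
            for i in range(n_keys)]

# Pivot near the bottom-right corner for a right-handed grip; each
# successive row is a larger arc, farther from the thumb's base.
pivot = (320, 480)
for radius in (120, 170, 220):
    row = arced_row(pivot, radius, 100, 170, 10)
    print([(round(x), round(y)) for x, y in row[:2]])
```

Because every key on a row is equidistant from the pivot, reaching along a row requires only the thumb's natural pivoting sweep, not a reach-and-stretch.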
-
Publication number: 20140098024
Abstract: Described herein is a split virtual keyboard that is displayed on a tablet (slate) computing device. The split virtual keyboard includes a first portion and a second portion, the first portion being separated from the second portion. The first portion includes a plurality of character keys, each representative of at least one respective character. The tablet computing device is configured to support text generation by way of a continuous sequence of strokes over the plurality of character keys in the first portion of the split virtual keyboard.
Type: Application
Filed: June 17, 2013
Publication date: April 10, 2014
Inventors: Timothy S. Paek, Bongshin Lee, Asela Gunawardana, Johnson Apacible, Anoop Gupta
-
Publication number: 20140101545
Abstract: Various technologies pertaining to provision of haptic feedback to users of computing devices with touch-sensitive displays are described. First haptic feedback is provided to assist a user in localizing a finger or thumb relative to a graphical object displayed on a touch-sensitive display, where no input data is provided to an application corresponding to the graphical object. A toggle command set forth by the user is subsequently identified; thereafter, an input gesture is received on the touch-sensitive display, and second haptic feedback is provided to aid the user in setting forth input data to the application.
Type: Application
Filed: March 7, 2013
Publication date: April 10, 2014
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Johnson Apacible, Bongshin Lee, Asela Gunawardana, Vishwas Kulkarni, Hong Z. Tan
-
Publication number: 20130345958
Abstract: Described is a technology by which context data such as time, location and user-specific data is used to generate a stop recommendation during a vehicle trip. When a user is at a location or travels along a route, one or more stop recommendations may be computed for providing to the user. A cloud service may compute the stop recommendations, and send them to an automotive device of the user, which may be a smartphone coupled to the vehicle, for output to the user.
Type: Application
Filed: June 26, 2012
Publication date: December 26, 2013
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl
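The context-based scoring this abstract implies might look like the following toy ranker, which scores each candidate stop by detour cost plus context bonuses (time of day, user preferences). Every field, weight, and rule here is an invented assumption for illustration; the patent does not specify this scoring.

```python
def recommend_stop(stops, hour, preferences):
    """Pick the best stop given the hour of day and user preferences."""
    def score(stop):
        s = -stop["detour_km"]            # shorter detours score higher
        if stop["category"] in preferences:
            s += 2.0                      # matches a stated preference
        if stop["category"] == "coffee" and 6 <= hour <= 10:
            s += 1.0                      # morning-coffee context bonus
        return s
    return max(stops, key=score)["name"]

stops = [
    {"name": "QuickFuel", "category": "gas", "detour_km": 0.5},
    {"name": "BeanHouse", "category": "coffee", "detour_km": 1.0},
]
print(recommend_stop(stops, hour=8, preferences={"coffee"}))
# prints BeanHouse
```

In the described architecture, this computation would run in the cloud service, with the result pushed to the in-vehicle device.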
-
Publication number: 20130342363
Abstract: Described is a technology by which ambient data related to a vehicle is sensed and processed, for use in determining a state change related to external traffic awareness. Based upon the state change, an allowed level of interactivity with a user interface may be changed, and/or a notification may be output. Images and/or depth data may be sensed as part of determining whether a user who is interacting with a device in a stopped vehicle is to be made aware of the changed condition with respect to other vehicles, pedestrians and/or the like.
Type: Application
Filed: June 26, 2012
Publication date: December 26, 2013
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl
-
Publication number: 20130297307
Abstract: A dictation module is described herein which receives and interprets a complete utterance of the user in incremental fashion, that is, one incremental portion at a time. The dictation module also provides rendered text in incremental fashion. The rendered text corresponds to the dictation module's interpretation of each incremental portion. The dictation module also allows the user to modify any part of the rendered text, as it becomes available. In one case, for instance, the dictation module provides a marking menu which includes multiple options by which a user can modify a selected part of the rendered text. The dictation module also uses the rendered text (as modified or unmodified by the user using the marking menu) to adjust one or more models used by the dictation module to interpret the user's utterance.
Type: Application
Filed: May 1, 2012
Publication date: November 7, 2013
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Bongshin Lee, Bo-June Hsu
-
Publication number: 20130157607
Abstract: A mobile device is described herein that provides a user interface experience to a user who is operating the mobile device within a vehicle. The mobile device provides the user interface experience using mode functionality. The mode functionality operates by receiving inference-input information from one or more input sources. At least one input source corresponds to at least one movement-sensing device, provided by the mobile device, that determines movement of the mobile device. The mode functionality then infers a state of the vehicle based on the inference-input information and presents a user interface experience that is appropriate for the vehicle state. In one scenario, the mode functionality can also infer that the vehicle is in a distress condition. In response, the mode functionality can solicit assistance for the user.
Type: Application
Filed: December 16, 2011
Publication date: June 20, 2013
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl, Sreenivas Addagatla
-
Publication number: 20130155237
Abstract: A mobile device is described herein which includes functionality for recognizing gestures made by a user within a vehicle. The mobile device operates by receiving image information that captures a scene including objects within an interaction space. The interaction space corresponds to a volume that projects out from the mobile device in a direction of the user. The mobile device then determines, based on the image information, whether the user has performed a recognizable gesture within the interaction space, without touching the mobile device. The mobile device can receive the image information from a camera device that is an internal component of the mobile device and/or a camera device that is a component of a mount which secures the mobile device within the vehicle. In some implementations, one or more projectors provided by the mobile device and/or the mount may illuminate the interaction space.
Type: Application
Filed: December 16, 2011
Publication date: June 20, 2013
Applicant: Microsoft Corporation
Inventors: Timothy S. Paek, Paramvir Bahl, Oliver H. Foehr