Patents by Inventor Kenneth P. Hinckley
Kenneth P. Hinckley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 8665479
  Abstract: Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device.
  Type: Grant
  Filed: February 21, 2012
  Date of Patent: March 4, 2014
  Assignee: Microsoft Corporation
  Inventors: Desney S. Tan, Hrvoje Benko, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Kenneth P. Hinckley
- Patent number: 8660978
  Abstract: A computing device is described herein for detecting and addressing unintended contact of a hand portion (such as a palm) or other article with a computing device. The computing device uses multiple factors to determine whether input events are accidental, including, for instance, the tilt of a pen device as it approaches a display surface of the computing device. The computing device can also capture and analyze input events which represent a hand that is close to the display surface, but not making physical contact with the display surface. The computing device can execute one or more behaviors to counteract the effect of any inadvertent input actions that it may detect.
  Type: Grant
  Filed: December 17, 2010
  Date of Patent: February 25, 2014
  Assignee: Microsoft Corporation
  Inventors: Kenneth P. Hinckley, Michel Pahud
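The multi-factor approach in this abstract can be illustrated with a small sketch. The factor names, weights, and thresholds below are invented for illustration; they are not the patented method, which combines its evidence in unspecified ways.

```python
def is_accidental_touch(pen_tilt_deg, contact_area_mm2, hand_hovering):
    """Combine several weighted factors into an accidental-touch decision.

    All thresholds and weights here are illustrative assumptions.
    """
    score = 0.0
    # A steeply tilted pen approaching the display suggests a resting palm.
    if pen_tilt_deg > 45:
        score += 0.4
    # Large contact areas are characteristic of a palm, not a fingertip.
    if contact_area_mm2 > 300:
        score += 0.4
    # A hand hovering near the surface without touching it adds evidence.
    if hand_hovering:
        score += 0.3
    # Above this threshold, treat the input event as unintended.
    return score >= 0.5
```

A device using such a score could then suppress or undo the input event, matching the abstract's "behaviors to counteract the effect" of inadvertent actions.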
- Publication number: 20130300668
  Abstract: Grip-based device adaptations are described in which a touch-aware skin of a device is employed to adapt device behavior in various ways. The touch-aware skin may include a plurality of sensors from which a device may obtain input and decode the input to determine grip characteristics indicative of a user's grip. On-screen keyboards and other input elements may then be configured and located in a user interface according to a determined grip. In at least some embodiments, a gesture defined to facilitate selective launch of an on-screen input element may be recognized and used in conjunction with grip characteristics to launch the on-screen input element in dependence upon grip. Additionally, touch and gesture recognition parameters may be adjusted according to a determined grip to reduce misrecognition.
  Type: Application
  Filed: May 20, 2013
  Publication date: November 14, 2013
  Inventors: Anatoly Churikov, Catherine N. Boulanger, Hrvoje Benko, Luis E. Cabrera-Cordon, Paul Henry Dietz, Steven Nabil Bathiche, Kenneth P. Hinckley
- Publication number: 20130286223
  Abstract: Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared.
  Type: Application
  Filed: April 25, 2012
  Publication date: October 31, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Stephen G. Latta, Kenneth P. Hinckley, Kevin Geisner, Steven Nabil Bathiche, Hrvoje Benko, Vivek Pradeep
- Patent number: 8539384
  Abstract: Embodiments of multi-screen pinch and expand gestures are described. In various embodiments, a first input is recognized at a first screen of a multi-screen system, and the first input includes a first motion input. A second input is recognized at a second screen of the multi-screen system, and the second input includes a second motion input. A pinch gesture or an expand gesture can then be determined from the first and second motion inputs that are associated with the recognized first and second inputs.
  Type: Grant
  Filed: February 25, 2010
  Date of Patent: September 17, 2013
  Assignee: Microsoft Corporation
  Inventors: Kenneth P. Hinckley, Koji Yatani
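One plausible way to distinguish the two gestures from paired motion inputs is to compare the distance between the contacts before and after the motions. This is a minimal sketch under that assumption, with points expressed in a shared coordinate space spanning both screens; it is not the claimed recognition logic.

```python
def classify_two_screen_gesture(start_a, end_a, start_b, end_b):
    """Classify paired motion inputs from two screens as pinch or expand.

    Each argument is an (x, y) point in a coordinate space spanning both
    screens. Contacts ending closer together than they started converge
    (pinch); contacts ending farther apart diverge (expand).
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    before = dist(start_a, start_b)
    after = dist(end_a, end_b)
    if after < before:
        return "pinch"
    if after > before:
        return "expand"
    return "none"
```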
- Publication number: 20130234992
  Abstract: In some implementations, a touch point on a surface of a touchscreen device may be determined. An image of a region of space above the surface and surrounding the touch point may be determined. The image may include a brightness gradient that captures a brightness of objects above the surface. A binary image that includes one or more binary blobs may be created based on a brightness of portions of the image. A determination may be made as to which of the one or more binary blobs are connected to each other to form portions of a particular user. A determination may be made that the particular user generated the touch point.
  Type: Application
  Filed: April 22, 2013
  Publication date: September 12, 2013
  Applicant: Microsoft Corporation
  Inventors: Stephen E. Hodges, Hrvoje Benko, Ian M. Sands, David Alexander Butler, Shahram Izadi, William Ben Kunz, Kenneth P. Hinckley
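The threshold-then-connect pipeline in this abstract can be sketched with standard image operations: binarize brightness, label connected blobs, and treat touch points in the same blob as coming from the same user. The grid, threshold value, and connectivity rule below are illustrative assumptions, not the published method.

```python
def to_binary(brightness, threshold=128):
    # Binarize a brightness image: bright pixels become 1, dark pixels 0.
    return [[1 if v > threshold else 0 for v in row] for row in brightness]

def label_components(binary):
    """Label 4-connected components of a binary image via flood fill."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                current += 1
                stack = [(sy, sx)]
                labels[sy][sx] = current
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return labels

def same_user(binary, touch_a, touch_b):
    """Attribute two touch points to one user if their blobs connect."""
    labels = label_components(binary)
    return labels[touch_a[0]][touch_a[1]] == labels[touch_b[0]][touch_b[1]]
```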
- Publication number: 20130215454
  Abstract: Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device.
  Type: Application
  Filed: February 21, 2012
  Publication date: August 22, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Desney S. Tan, Hrvoje Benko, Stephen G. Latta, Steven Nabil Bathiche, Kevin Geisner, Kenneth P. Hinckley
- Patent number: 8509847
  Abstract: A mobile device connection system is provided. The system includes an input medium to detect a device position or location. An analysis component determines a device type and establishes a connection with the device. The input medium can include vision systems to detect device presence and location where connections are established via wireless technologies.
  Type: Grant
  Filed: December 28, 2012
  Date of Patent: August 13, 2013
  Assignee: Microsoft Corporation
  Inventors: Andrew D. Wilson, Raman K Sarin, Kenneth P. Hinckley
- Publication number: 20130201095
  Abstract: Techniques involving presentations are described. In one or more implementations, a user interface is output by a computing device that includes a slide of a presentation, the slide having an object that is output for display in three dimensions. Responsive to receipt of one or more inputs by the computing device, how the object in the slide is output for display in the three dimensions is altered.
  Type: Application
  Filed: February 7, 2012
  Publication date: August 8, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Paul Henry Dietz, Vivek Pradeep, Stephen G. Latta, Kenneth P. Hinckley, Hrvoje Benko, Alice Jane Bernheim Brush
- Publication number: 20130201113
  Abstract: Functionality is described herein for detecting and responding to gestures performed by a user using a computing device, such as, but not limited to, a tablet computing device. In one implementation, the functionality operates by receiving touch input information in response to the user touching the computing device, and movement input information in response to the user moving the computing device. The functionality then determines whether the input information indicates that a user has performed or is performing a multi-touch-movement (MTM) gesture. The functionality can then perform any behavior in response to determining that the user has performed an MTM gesture, such as by modifying a view or invoking a function, etc.
  Type: Application
  Filed: February 7, 2012
  Publication date: August 8, 2013
  Applicant: Microsoft Corporation
  Inventors: Kenneth P. Hinckley, Michel Pahud, William A. S. Buxton
- Publication number: 20130182892
  Abstract: Methods, systems, and computer-readable media for establishing an ad hoc network of devices that can be used to interpret gestures. Embodiments of the invention use a network of sensors with an ad hoc spatial configuration to observe physical objects in a performance area. The performance area may be a room or other area within range of the sensors. Initially, devices within the performance area, or with a view of the performance area, are identified. Once identified, the sensors go through a discovery phase to locate devices within an area. Once the discovery phase is complete and the devices within the ad hoc network are located, the combined signals received from the devices may be used to interpret gestures made within the performance area.
  Type: Application
  Filed: January 18, 2012
  Publication date: July 18, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Eric Horvitz, Kenneth P. Hinckley, Hrvoje Benko
- Publication number: 20130181953
  Abstract: A stylus computing environment is described. In one or more implementations, one or more inputs are detected using one or more sensors of a stylus. A user that has grasped the stylus, using fingers of the user's hand, is identified from the received one or more inputs.
  Type: Application
  Filed: January 13, 2012
  Publication date: July 18, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Kenneth P. Hinckley, Stephen G. Latta
- Publication number: 20130181902
  Abstract: Skinnable touch device grip pattern techniques are described herein. A touch-aware skin may be configured to substantially cover the outer surfaces of a computing device. The touch-aware skin may include a plurality of skin sensors configured to detect interaction with the skin at defined locations. The computing device may include one or more modules operable to obtain input from the plurality of skin sensors and decode the input to determine grip patterns that indicate how the computing device is being held by a user. Various functionality provided by the computing device may be selectively enabled and/or adapted based on a determined grip pattern such that the provided functionality may change to match the grip pattern.
  Type: Application
  Filed: January 17, 2012
  Publication date: July 18, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Kenneth P. Hinckley, Paul Henry Dietz, Hrvoje Benko, Desney S. Tan, Steven Nabil Bathiche
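Decoding a grip pattern from skin-sensor input could be approached as template matching: compare the set of active sensors against known grips and pick the best fit. The sensor names and grip templates below are invented for this sketch and do not come from the publication.

```python
# Hypothetical grip templates: each grip maps to the set of skin-sensor
# IDs it typically activates. Sensor names are illustrative assumptions.
GRIP_TEMPLATES = {
    "two_handed_landscape": {"left_edge_low", "right_edge_low", "back_center"},
    "one_handed_portrait": {"left_edge_low", "back_left"},
    "tabletop": set(),  # resting flat activates no skin sensors
}

def decode_grip(active_sensors):
    """Return the grip whose template best matches the active sensor set.

    Score = sensors matched minus sensors that disagree (symmetric
    difference), so both missing and spurious activations count against
    a template.
    """
    def score(template):
        return len(active_sensors & template) - len(active_sensors ^ template)

    return max(GRIP_TEMPLATES, key=lambda grip: score(GRIP_TEMPLATES[grip]))
```

A device could then relocate an on-screen keyboard or adjust touch parameters based on the decoded grip, as the abstract describes.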
- Patent number: 8487937
  Abstract: A computer system and method for displaying a static animation image in response to an action related to a displayed object that occurs on the computer system is presented. An initial state of the displayed object is determined with regard to the action. A final state of the displayed object with regard to the action is also determined. Transition aspects between the initial state and the final state are then determined. A static animation image is generated according to the initial state, the transition aspects, and the final state. The static animation image represents, in static form, an animation indicative of the action from the initial state to the final state of the displayed object. The static animation image is displayed on the graphical user interface in lieu of animation.
  Type: Grant
  Filed: January 4, 2006
  Date of Patent: July 16, 2013
  Assignee: Microsoft Corporation
  Inventors: Daniel C Robbins, Desney S Tan, George G Robertson, Kenneth P Hinckley, Maneesh Agrawala, Patrick M Baudisch, Steven M Drucker, Tovi S Grossman
- Patent number: 8490047
  Abstract: This document describes various techniques for creating, modifying, and using graphical mashups. In one embodiment, a graphical mashup is created based on locations of graphical representations of objects in a working area. Logical connections between the objects are created based on the objects' locations relative to each other. Alternatively or additionally, the techniques may enable a user to create or modify a graphical mashup by adding or deleting objects, modifying logical connections between objects, annotating objects, or abstracting the graphical mashup.
  Type: Grant
  Filed: January 15, 2009
  Date of Patent: July 16, 2013
  Assignee: Microsoft Corporation
  Inventors: Georg F. Petschnigg, Jonathan R. Harris, Kenneth P. Hinckley
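Creating logical connections "based on the objects' locations relative to each other" could be as simple as linking objects that sit within some radius of one another in the working area. This sketch assumes that proximity rule; the object names and radius are illustrative, not from the patent.

```python
def connect_by_proximity(objects, radius):
    """Create logical connections between nearby objects.

    `objects` maps an object name to its (x, y) position in the working
    area; any pair within `radius` of each other gets a connection.
    """
    names = sorted(objects)
    connections = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = objects[a]
            bx, by = objects[b]
            if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= radius:
                connections.append((a, b))
    return connections
```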
- Patent number: 8473870
  Abstract: Embodiments of a multi-screen hold and drag gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system when the hold input is held in place. A motion input is recognized at a second screen of the multi-screen system, and the motion input is recognized to select a displayed object while the hold input remains held in place. A hold and drag gesture can then be determined from the recognized hold and motion inputs.
  Type: Grant
  Filed: February 25, 2010
  Date of Patent: June 25, 2013
  Assignee: Microsoft Corporation
  Inventors: Kenneth P. Hinckley, Koji Yatani
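The key condition in this abstract is temporal: the motion on the second screen must arrive while the hold on the first screen is still in place. A minimal event-stream sketch of that condition, with an invented event format:

```python
def detect_hold_and_drag(events):
    """Scan an ordered event stream for a hold-and-drag gesture.

    Each event is a (screen, kind) tuple; the event vocabulary here is a
    simplifying assumption. The gesture fires when a 'motion' on screen 2
    occurs while a hold started on screen 1 is still active.
    """
    holding = False
    for screen, kind in events:
        if screen == 1 and kind == "hold_start":
            holding = True
        elif screen == 1 and kind == "hold_end":
            holding = False
        elif screen == 2 and kind == "motion" and holding:
            return True  # hold remained in place during the motion input
    return False
```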
- Publication number: 20130154952
  Abstract: Functionality is described herein for interpreting gestures made by a user in the course of interacting with a handheld computing device. The functionality operates by: (a) receiving a touch input event from at least one touch input mechanism; (b) receiving a movement input event from at least one movement input mechanism in response to movement of the computing device; and (c) determining whether the touch input event and the movement input event indicate that a user has made a multi-touch-movement (MTM) gesture. A user performs an MTM gesture by touching a surface of the touch input mechanism to establish two or more contacts in conjunction with moving the computing device in a prescribed manner. The functionality can define an action space in response to the MTM gesture and perform an action which affects the action space.
  Type: Application
  Filed: December 16, 2011
  Publication date: June 20, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Kenneth P. Hinckley, Hyunyoung Song
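The MTM condition combines two simultaneous signals: two or more touch contacts plus device movement. In this sketch, "moving the computing device in a prescribed manner" is approximated by peak accelerometer magnitude exceeding a threshold; that approximation and the threshold value are assumptions, not the published recognizer.

```python
def is_mtm_gesture(contact_count, accel_samples, threshold=2.0):
    """Detect a multi-touch-movement (MTM) gesture, approximately.

    `accel_samples` is a window of (x, y, z) accelerometer readings
    captured while the contacts are held; peak magnitude stands in for
    the "prescribed movement" of the device.
    """
    peak = max((x * x + y * y + z * z) ** 0.5 for x, y, z in accel_samples)
    return contact_count >= 2 and peak > threshold
```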
- Publication number: 20130138424
  Abstract: The subject disclosure is directed towards detecting symbolic activity within a given environment using a context-dependent grammar. In response to receiving sets of input data corresponding to one or more input modalities, a context-aware interactive system processes a model associated with interpreting the symbolic activity using context data for the given environment. Based on the model, related sets of input data are determined. The context-aware interactive system uses the input data to interpret user intent with respect to the input and thereby identify one or more commands for a target output mechanism.
  Type: Application
  Filed: November 28, 2011
  Publication date: May 30, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Michael F. Koenig, Oscar Enrique Murillo, Ira Lynn Snyder, Jr., Andrew D. Wilson, Kenneth P. Hinckley, Ali M. Vassigh
- Patent number: 8432366
  Abstract: The claimed subject matter provides a system and/or a method that facilitates distinguishing input among one or more users in a surface computing environment. A variety of information can be obtained and analyzed to infer an association between a particular input and a particular user. Touch point information can be acquired from a surface wherein the touch point information relates to a touch point. In addition, one or more environmental sensors can monitor the surface computing environment and provide environmental information. The touch point information and the environmental information can be analyzed to determine the direction of inputs, the location of users, the movement of users, and so on. Individual analysis results can be correlated and/or aggregated to generate an inference of association between a touch point and a user.
  Type: Grant
  Filed: March 3, 2009
  Date of Patent: April 30, 2013
  Assignee: Microsoft Corporation
  Inventors: Stephen E. Hodges, Hrvoje Benko, Ian M. Sands, David Alexander Butler, Shahram Izadi, William Ben Kunz, Kenneth P. Hinckley
- Publication number: 20130082978
  Abstract: Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space.
  Type: Application
  Filed: September 30, 2011
  Publication date: April 4, 2013
  Applicant: MICROSOFT CORPORATION
  Inventors: Eric Horvitz, Kenneth P. Hinckley, Hrvoje Benko, Desney S. Tan
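The idea of combining a sensor facing the positive z-axis space (in front of the display) with one facing the negative z-axis space (behind the device) can be sketched as a simple dispatcher over the two inputs. The gesture names and resulting UI actions below are invented for illustration only.

```python
def update_interface(front_input, back_input):
    """Map a pair of extended-interaction-space inputs to one UI action.

    `front_input` comes from the positive z-axis sensor, `back_input`
    from the negative z-axis sensor; either may be None when that side
    detects nothing. All gesture and action names are hypothetical.
    """
    if front_input == "swipe" and back_input == "tap":
        return "rotate_object"   # inputs on both sides combine into one action
    if front_input == "swipe":
        return "scroll"          # front-only gesture
    if back_input == "tap":
        return "select_behind"   # back-only gesture
    return "no_change"
```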