Patents by Inventor Michel Pahud

Michel Pahud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180314484
    Abstract: Various systems and techniques for intuitively and non-obstructively obtaining awareness of fellow collaborator views and/or statuses in an augmented reality environment are disclosed. An interaction is detected between a first and second HMD, both collaborative participants of a shared dataset represented in an augmented reality environment. The detected interaction can cause the first HMD to request state information associated with the second HMD. The second HMD can process the request in accordance with a privacy policy to generate a set of visual data as a response to the request. The second HMD can communicate the generated set of visual data to the first HMD, such that the first HMD can render the received set of visual data as additional computer-generated object(s) to be displayed concurrently with its own augmented view.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 1, 2018
    Inventors: Michel Pahud, Nathalie Riche, Eyal Ofek, Christophe Hurter
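The abstract above describes a request, privacy-policy filtering, and response flow between two collaborating HMDs. Below is a minimal Python sketch of that flow under assumed, illustrative names (HmdState, view_frustum, annotations, private_notes); it is not the patented implementation.

```python
# Hypothetical sketch: one HMD requests a collaborator's state, and the responder
# filters it through a privacy policy before returning visual data.
from dataclasses import dataclass

@dataclass
class HmdState:
    view_frustum: tuple
    annotations: list
    private_notes: list

def apply_privacy_policy(state: HmdState, shareable: set) -> dict:
    """Keep only the fields the owner's privacy policy allows to be shared."""
    visible = {}
    if "view_frustum" in shareable:
        visible["view_frustum"] = state.view_frustum
    if "annotations" in shareable:
        visible["annotations"] = state.annotations
    # private_notes are never shared in this sketch
    return visible

def handle_state_request(requester_id: str, state: HmdState, shareable: set) -> dict:
    """The second HMD processes the first HMD's request and returns visual data."""
    return {"for": requester_id, "visual_data": apply_privacy_policy(state, shareable)}

# The requesting HMD would render the returned visual data as additional
# computer-generated objects alongside its own augmented view.
print(handle_state_request(
    "hmd-1",
    HmdState(view_frustum=(0, 0, 90), annotations=["chart-3"], private_notes=["draft"]),
    shareable={"view_frustum", "annotations"},
))
```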
  • Publication number: 20180300056
    Abstract: An apparatus includes a keyboard engine that operates a keyboard that accepts shape-writing input and radial entry input. A keyboard input module obtains input data from at least one input sensor of the keyboard. An intention disambiguation engine enables simultaneous use of the shape-writing input and the radial entry input for a user of the keyboard.
    Type: Application
    Filed: February 23, 2018
    Publication date: October 18, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: William A. S. Buxton, Richard L. Hughes, Kenneth P. Hinckley, Michel Pahud, Irina Spiridonova
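One way to picture the intention-disambiguation engine is as a classifier over touch traces: long traces across many keys read as shape-writing, short directional strokes read as radial entries. The sketch below assumes made-up thresholds and sector logic purely for illustration.

```python
# Illustrative disambiguation between shape-writing and radial entry input.
import math

def classify_stroke(points):
    """points: list of (x, y) samples from the keyboard's input sensor."""
    if len(points) < 2:
        return ("tap", None)
    (x0, y0), (x1, y1) = points[0], points[-1]
    path_len = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    if path_len > 150:                       # long trace -> shape-writing
        return ("shape_writing", points)
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360
    sector = int((angle + 22.5) // 45) % 8   # short stroke -> one of 8 radial entries
    return ("radial_entry", sector)

print(classify_stroke([(0, 0), (10, 2)]))                          # radial entry
print(classify_stroke([(0, 0), (60, 40), (120, 10), (200, 50)]))   # shape-writing
```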
  • Publication number: 20180239519
    Abstract: The unified system for bimanual interactions provides a lightweight and integrated interface that allows the user to efficiently interact with and manipulate content in the user interface. The system is configured to detect a multi-finger interaction on the touchscreen and to differentiate whether the user intends to pan, zoom or frame a portion of the user interface. Generally, the framing interaction is identified by detection of the user's thumb and forefinger on the touchscreen, which cooperate to define a focus area between vectors extending outwards from the user's thumb and forefinger. Upon a determination that the user intends to interact with or manipulate content within the focus area, the unified system for bimanual interactions provides an indication of the objects that are located within the focus area and contextual menus for interacting with the objects.
    Type: Application
    Filed: February 20, 2017
    Publication date: August 23, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Michel Pahud, William Arthur Stewart Buxton, Haijun Xia
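The framing interaction in the abstract above defines a focus area between vectors extending from the thumb and forefinger, then acts on the objects inside it. A rough geometric sketch follows, assuming a midpoint origin and a fixed angular spread; it is an illustration, not the patented method.

```python
# Illustrative focus-area test for the bimanual framing interaction.
import math

def angle(origin, point):
    return math.atan2(point[1] - origin[1], point[0] - origin[0])

def objects_in_focus(thumb, forefinger, objects, spread=math.radians(60)):
    """Return objects lying between vectors extending outward from the two touches."""
    origin = ((thumb[0] + forefinger[0]) / 2, (thumb[1] + forefinger[1]) / 2)
    a1, a2 = angle(origin, thumb), angle(origin, forefinger)
    lo, hi = min(a1, a2) - spread / 2, max(a1, a2) + spread / 2
    return [o for o in objects if lo <= angle(origin, o["pos"]) <= hi]

shapes = [{"id": "rect", "pos": (120, 80)}, {"id": "circle", "pos": (-50, -90)}]
print(objects_in_focus(thumb=(100, 0), forefinger=(0, 100), objects=shapes))
# -> only the rect falls inside the framed area; a contextual menu would target it
```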
  • Publication number: 20180239482
    Abstract: Thumb+pen inputs are described herein, to improve the functionality of touch enabled devices for accepting bimanual input in situations where the device is gripped or supported by one of the user's hands, leaving only one hand free. The thumb of an engaging hand is identified and controls are placed within its range of motion to enhance the functionality provided by the free hand. The actions of the thumb can be used to determine how pen actions made using the other hand are interpreted. Alternatively, the pen can indicate an object through pointing, while the thumb indirectly manipulates one or more of its parameters through touch controls. Marking menus, spring-loaded modes, indirect input, and conventional multi-touch interfaces are applied with respect to the bimanual input mode in which one hand is positioned to hold or support the device, and the other hand is free to improve device operability and accessibility.
    Type: Application
    Filed: February 20, 2017
    Publication date: August 23, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Michel Pahud, William Arthur Stewart Buxton, Ken Pfeuffer
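A compact way to express the thumb+pen idea is a spring-loaded mode held by the thumb of the gripping hand that changes how pen strokes from the free hand are interpreted. The mode names below ("highlight", "lasso") are hypothetical.

```python
# Hedged sketch of thumb-held modes reinterpreting pen actions.
class ThumbPenInterpreter:
    def __init__(self):
        self.thumb_mode = None           # e.g. "highlight", "lasso", or None

    def on_thumb_down(self, control):
        self.thumb_mode = control        # spring-loaded: active only while held

    def on_thumb_up(self):
        self.thumb_mode = None

    def on_pen_stroke(self, stroke):
        if self.thumb_mode == "highlight":
            return f"highlight text under stroke of {len(stroke)} points"
        if self.thumb_mode == "lasso":
            return f"lasso-select objects inside stroke of {len(stroke)} points"
        return "default ink stroke"

ui = ThumbPenInterpreter()
ui.on_thumb_down("lasso")
print(ui.on_pen_stroke([(0, 0), (10, 10), (0, 20)]))   # stroke interpreted as a lasso
ui.on_thumb_up()
print(ui.on_pen_stroke([(5, 5), (6, 6)]))               # back to ordinary inking
```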
  • Publication number: 20180239520
    Abstract: The unified system for bimanual interactions provides a lightweight and integrated interface that allows the user to efficiently interact with and manipulate content in the user interface. The system is configured to detect a multi-finger interaction on the touchscreen and to differentiate whether the user intends to pan, zoom or frame a portion of the user interface. Generally, the framing interaction is identified by detection of the user's thumb and forefinger on the touchscreen, which cooperate to define a focus area between vectors extending outwards from the user's thumb and forefinger. Upon a determination that the user intends to interact with or manipulate content within the focus area, the unified system for bimanual interactions provides an indication of the objects that are located within the focus area and contextual menus for interacting with the objects.
    Type: Application
    Filed: February 20, 2017
    Publication date: August 23, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, William Arthur Stewart Buxton, Michel Pahud, Haijun Xia
  • Publication number: 20180239509
    Abstract: Use of pre-interaction context associated with gesture and touch interactions is provided. A user interface is configured to receive an interaction. A touch or gesture input may be received from a user via a touchscreen. Prior to receiving an interaction on the user interface from the user, pre-interaction context is detected. Based on the pre-interaction context, the user's interaction with the user interface is interpreted and the interaction is provided on the user interface.
    Type: Application
    Filed: February 20, 2017
    Publication date: August 23, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Michel Pahud, William Arthur Stewart Buxton, Seongkook Heo
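The pre-interaction context described above can be thought of as sensor data captured before the touch lands that biases how the touch is interpreted. The sketch below uses invented context fields (approach_hand, hover_ms) as stand-ins.

```python
# Illustrative use of pre-interaction context to interpret a touch.
def interpret_touch(touch_point, pre_context):
    """pre_context is sensed before contact, e.g. {'approach_hand': 'left', 'hover_ms': 420}."""
    if pre_context.get("approach_hand") == "pen_hand":
        return ("pen_gesture_menu", touch_point)
    if pre_context.get("hover_ms", 0) > 300:
        return ("deliberate_select", touch_point)   # slow approach -> precise selection
    return ("quick_tap", touch_point)

print(interpret_touch((240, 128), {"approach_hand": "left", "hover_ms": 420}))
```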
  • Patent number: 10013065
    Abstract: An example system includes a plurality of moveable light emitters, each moveable light emitter configured to independently emit a display light from a current display location within that moveable light emitter's range of motion responsive to activation from a corresponding light activator. The system also includes a location engine to determine, for each light emitter, the current display location of that light emitter, and a mapping engine to map, for each current display location, the light activator activating the light emitter currently located at that current display location.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: July 3, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sasa Junuzovic, Michel Pahud, Eyal Ofek, Yoichi Ochiai, Michael J. Sinclair
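The location engine and mapping engine in the abstract above reduce, in the simplest reading, to two lookups: where each moveable emitter currently is, and which activator drives the emitter at a given display location. A minimal sketch, with invented field names, follows.

```python
# Hypothetical location engine + mapping engine for moveable light emitters.
def build_activation_map(emitters):
    """emitters: list of dicts like {'id': 'e1', 'activator': 'a1', 'location': (x, y)}."""
    location_of = {e["id"]: e["location"] for e in emitters}           # location engine
    activator_at = {e["location"]: e["activator"] for e in emitters}   # mapping engine
    return location_of, activator_at

emitters = [
    {"id": "e1", "activator": "a1", "location": (0, 0)},
    {"id": "e2", "activator": "a2", "location": (3, 1)},
]
location_of, activator_at = build_activation_map(emitters)
print(activator_at[location_of["e2"]])   # -> 'a2', the activator lighting that spot
```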
  • Patent number: 9940016
    Abstract: An apparatus includes a keyboard engine that operates a keyboard that accepts shape-writing input and radial entry input. A keyboard input module obtains input data from at least one input sensor of the keyboard. An intention disambiguation engine enables simultaneous use of the shape-writing input and the radial entry input acceptance for a user of the keyboard.
    Type: Grant
    Filed: September 13, 2014
    Date of Patent: April 10, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: William A. S. Buxton, Richard L. Hughes, Kenneth P. Hinckley, Michel Pahud, Irina Spiridonova
  • Patent number: 9870083
    Abstract: A grip of a primary user on a touch-sensitive computing device and a grip of a secondary user on the touch-sensitive computing device are sensed and correlated to determine whether the primary user is sharing or handing off the computing device to the secondary user. In the case of handoff, capabilities of the computing device may be restricted, while in a sharing mode only certain content on the computing device is shared. In some implementations both a touch-sensitive pen and the touch-sensitive computing device are passed from a primary user to a secondary user. Sensor inputs representing the grips of the users on both the pen and the touch-sensitive computing device are correlated to determine the context of the grips and to initiate a context-appropriate command in an application executing on the touch-sensitive pen or the touch-sensitive computing device. Metadata is also derived from the correlated sensor inputs.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: January 16, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ken Hinckley, Hrvoje Benko, Michel Pahud, Andrew D. Wilson, Pourang Polad Irani, Francois Guimbretiere
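A toy version of the sharing-versus-handoff decision from the abstract above might correlate whether each user is gripping the device and how it is oriented. The rule set and thresholds below are illustrative only.

```python
# Illustrative classification of device transfer from correlated grips.
def classify_transfer(primary_grip, secondary_grip, tilt_deg):
    if primary_grip and secondary_grip and abs(tilt_deg) < 15:
        return "sharing"      # both users holding, device roughly level between them
    if not primary_grip and secondary_grip:
        return "handoff"      # primary let go; device capabilities may be restricted
    return "normal_use"

print(classify_transfer(primary_grip=False, secondary_grip=True, tilt_deg=40))  # -> 'handoff'
print(classify_transfer(primary_grip=True, secondary_grip=True, tilt_deg=5))    # -> 'sharing'
```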
  • Publication number: 20180004386
    Abstract: Various technologies described herein pertain to utilizing sensed pre-touch interaction to control a mobile computing device. A pre-touch interaction of a user with the mobile computing device is detected. The pre-touch interaction includes a grip of the user on the mobile computing device and/or a hover of one or more fingers of the user with respect to a touchscreen of the mobile computing device. The finger(s) of the user can be within proximity but not touching the touchscreen as part of the hover. Parameter(s) of the pre-touch interaction of the user with the mobile computing device are identified, and a touch of the user on the touchscreen of the mobile computing device is detected. A computing operation is executed responsive to the touch, where the computing operation is based on the touch and the parameter(s) of the pre-touch interaction of the user with the mobile computing device.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Kenneth P. Hinckley, Michel Pahud, Hrvoje Benko, William Arthur Stewart Buxton, Seongkook Heo
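The pre-touch parameters in the abstract above (grip plus finger hover) can be folded into the eventual touch response. The sketch below is a guess at one such adaptation: a one-handed grip pulls the invoked control within thumb reach, and a close, slow approach triggers a preview. All field names are hypothetical.

```python
# Hypothetical pre-touch-aware touch handler for a mobile device.
def on_touch(touch_pos, pre_touch):
    """pre_touch, sensed before contact: {'grip': 'one_handed_right', 'hover_height_mm': 8}."""
    action = {"invoke_at": touch_pos}
    if pre_touch.get("grip") == "one_handed_right":
        action["invoke_at"] = (min(touch_pos[0], 300), touch_pos[1])  # keep within thumb reach
    if pre_touch.get("hover_height_mm", 99) < 10:
        action["show_preview"] = True      # close, deliberate approach: show a preview first
    return action

print(on_touch((420, 900), {"grip": "one_handed_right", "hover_height_mm": 8}))
```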
  • Patent number: 9857970
    Abstract: Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: January 2, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Koji Yatani, Michel Pahud
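One simple reading of a bimodal gesture is two input modalities arriving close together in time and being fused into a single gesture. The 200 ms window and gesture labels below are invented for illustration.

```python
# Illustrative fusion of single-modal and bimodal gestures from a timed event stream.
def recognize(events, window_ms=200):
    """events: list of (timestamp_ms, modality) tuples, e.g. (1000, 'touch')."""
    gestures = []
    i = 0
    while i < len(events):
        t, modality = events[i]
        if i + 1 < len(events) and events[i + 1][0] - t <= window_ms \
                and events[i + 1][1] != modality:
            gestures.append(f"bimodal:{modality}+{events[i + 1][1]}")
            i += 2
        else:
            gestures.append(f"single:{modality}")
            i += 1
    return gestures

print(recognize([(1000, "touch"), (1120, "pen"), (2000, "pen")]))
# -> ['bimodal:touch+pen', 'single:pen']
```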
  • Publication number: 20170329446
    Abstract: A method, system, and one or more computer-readable storage media for providing multi-dimensional haptic touch screen interaction are provided herein. The method includes detecting a force applied to a touch screen by an object and determining a magnitude, direction, and location of the force. The method also includes determining a haptic force feedback to be applied by the touch screen on the object based on the magnitude, direction, and location of the force applied to the touch screen, and displacing the touch screen in a specified direction such that the haptic force feedback is applied by the touch screen on the object.
    Type: Application
    Filed: June 20, 2017
    Publication date: November 16, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Sinclair, Michel Pahud, Hrvoje Benko
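The abstract above describes a loop of sensing an applied force and displacing the screen to deliver haptic feedback. The sketch below is a bare-bones stand-in with made-up gain and travel limits, not the patented control scheme.

```python
# Illustrative mapping from sensed force to a screen-displacement command.
def haptic_response(force_newtons, direction, location, gain=0.8):
    """Return a displacement command for the touch screen actuator."""
    feedback = gain * force_newtons                 # proportional opposing force
    displacement_mm = min(feedback * 0.05, 1.0)     # clamp actuator travel
    return {"axis": direction, "displacement_mm": displacement_mm, "at": location}

print(haptic_response(force_newtons=4.0, direction="z", location=(320, 540)))
```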
  • Publication number: 20170300170
    Abstract: Pen and computing device sensor correlation technique embodiments correlate sensor signals received from various grips on a touch-sensitive pen and touches to a touch-sensitive computing device in order to determine the context of such grips and touches and to issue context-appropriate commands to the touch-sensitive pen or the touch-sensitive computing device. A combination of concurrent sensor inputs received from both a touch-sensitive pen and a touch-sensitive computing device is correlated. How the touch-sensitive pen and the touch-sensitive computing device are touched or gripped is used to determine the context of their use and the user's intent. A context-appropriate user interface action can then be initiated. The context can also be used to label metadata.
    Type: Application
    Filed: July 1, 2017
    Publication date: October 19, 2017
    Inventors: Ken Hinckley, Hrvoje Benko, Michel Pahud, Andrew D. Wilson, Pourang Polad Irani, Francois Guimbretiere
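One way to picture the pen-and-device correlation above is a lookup from the pair (pen grip, tablet touch) to a context-appropriate action. The grip labels and action table below are purely illustrative.

```python
# Illustrative correlation of concurrent pen-grip and tablet-touch signals.
CONTEXT_ACTIONS = {
    ("writing_grip", "palm_rest"):  "enable_palm_rejection",
    ("tucked_grip",  "two_finger"): "pan_and_zoom",
    ("baton_grip",   "thumb_edge"): "open_pen_tools_menu",
}

def correlate(pen_grip, tablet_touch):
    return CONTEXT_ACTIONS.get((pen_grip, tablet_touch), "default_ink")

print(correlate("writing_grip", "palm_rest"))   # -> 'enable_palm_rejection'
print(correlate("tucked_grip", "two_finger"))   # -> 'pan_and_zoom'
```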
  • Publication number: 20170277367
    Abstract: In general, the multi-touch detection implementations described herein use touch detection technologies to provide new and advantageous interactions between users and touch displays using touch, pens and user-wearable devices (UWDs). These new and advantageous interactions include user-associated mobile menus, combined click-through and radial marking menus, menus to automate and improve drawing or manipulation of content on a display, new menus and methods of selecting objects and text on a display, and new interactions with UWDs and touchscreen displays by using UWDs with gestures. In addition, targeted haptic feedback to the UWD of specific users of a display is enabled. In some multi-touch detection implementations, menus or tools available to act on objects on a display can be ported entirely, or in part, between displays, such as between small and large displays.
    Type: Application
    Filed: June 29, 2016
    Publication date: September 28, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michel Pahud, Kenneth P. Hinckley, William Buxton
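A small sketch of the user-associated menu idea from the abstract above: a touch is attributed to whichever user's wearable device is sensed closest to it, and that user's own menu is summoned at the touch point. The proximity-matching rule is an assumption for illustration.

```python
# Illustrative association of a touch with a user via their user-wearable device (UWD).
import math

def owner_of_touch(touch_pos, uwd_positions):
    """uwd_positions: {'alice': (x, y), 'bob': (x, y)} sensed near the display."""
    return min(uwd_positions, key=lambda u: math.dist(touch_pos, uwd_positions[u]))

def menu_for_touch(touch_pos, uwd_positions, user_menus):
    user = owner_of_touch(touch_pos, uwd_positions)
    return {"user": user, "menu": user_menus[user], "anchor": touch_pos}

print(menu_for_touch((100, 40), {"alice": (90, 60), "bob": (800, 400)},
                     {"alice": ["copy", "annotate"], "bob": ["measure"]}))
# -> Alice's mobile menu appears at the touch point
```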
  • Patent number: 9727161
    Abstract: Pen and computing device sensor correlation technique embodiments correlate sensor signals received from various grips on a touch-sensitive pen and touches to a touch-sensitive computing device in order to determine the context of such grips and touches and to issue context-appropriate commands to the touch-sensitive pen or the touch-sensitive computing device. A combination of concurrent sensor inputs received from both a touch-sensitive pen and a touch-sensitive computing device is correlated. How the touch-sensitive pen and the touch-sensitive computing device are touched or gripped is used to determine the context of their use and the user's intent. A context-appropriate user interface action can then be initiated. The context can also be used to label metadata.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: August 8, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ken Hinckley, Hrvoje Benko, Michel Pahud, Andrew D. Wilson, Pourang Polad Irani, Francois Guimbretiere
  • Patent number: 9715300
    Abstract: A method, system, and one or more computer-readable storage media for providing multi-dimensional haptic touch screen interaction are provided herein. The method includes detecting a force applied to a touch screen by an object and determining a magnitude, direction, and location of the force. The method also includes determining a haptic force feedback to be applied by the touch screen on the object based on the magnitude, direction, and location of the force applied to the touch screen, and displacing the touch screen in a specified direction such that the haptic force feedback is applied by the touch screen on the object.
    Type: Grant
    Filed: March 4, 2013
    Date of Patent: July 25, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Sinclair, Michel Pahud, Hrvoje Benko
  • Publication number: 20170192493
    Abstract: In some examples, a surface, such as a desktop, in front or around a portable electronic device may be used as a relatively large surface for interacting with the portable electronic device, which typically has a small display screen. A user may write or draw on the surface using any object such as a finger, pen, or stylus. The surface may also be used to simulate a partial or full size keyboard. The use of a camera to sense the three-dimensional (3D) location or motion of the object may enable use of above-the-surface gestures, entry of directionality, and capture of real objects into a document being processed or stored by the electronic device. One or more objects may be used to manipulate elements displayed by the portable electronic device.
    Type: Application
    Filed: January 4, 2016
    Publication date: July 6, 2017
    Inventors: Eyal Ofek, Michel Pahud, Pourang P. Irani
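The abstract above splits the space in front of the device into on-surface input and above-the-surface gestures based on the camera-derived 3D position of the object. The thresholds and labels in the sketch below are invented.

```python
# Illustrative interpretation of a camera-tracked fingertip over the desk surface.
def interpret_fingertip(x_mm, y_mm, z_mm):
    if z_mm < 5:
        return ("draw", (x_mm, y_mm))                  # touching the desk: ink or keyboard input
    if z_mm < 60:
        return ("hover_gesture", (x_mm, y_mm, z_mm))   # above-the-surface gesture space
    return ("ignore", None)

print(interpret_fingertip(120, 80, 2))    # -> ('draw', (120, 80))
print(interpret_fingertip(120, 80, 30))   # -> ('hover_gesture', (120, 80, 30))
```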
  • Publication number: 20170185151
    Abstract: In some examples, a system senses the location of a stylus or finger of a user relative to a virtual displayed object to determine when a virtual touch occurs. Upon or after such a determination, the system may generate a haptic action that indicates to the user that the virtual touch occurred. The haptic action may be located on a particular portion of a haptic device, which may be a handheld device. The particular portion may correspond to the location where the virtual displayed object was virtually touched. In this way, the user may receive physical feedback associated with the virtual touch of the virtual displayed object. In some examples, the virtual displayed object may change in response to the virtual touch, thus further providing visual and physical feedback associated with the virtual touch of the virtual displayed object to the user.
    Type: Application
    Filed: December 28, 2015
    Publication date: June 29, 2017
    Inventors: Michel Pahud, Johnson T. Apacible, Sasa Junuzovic, Dave W. Brown
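The virtual-touch loop in the abstract above can be sketched as: detect when the tracked fingertip reaches the virtual object's surface, then fire a haptic pulse on the portion of the handheld device corresponding to the contact point. The spherical object and zone layout below are simplifying assumptions.

```python
# Illustrative virtual-touch detection with localized haptic feedback.
import math

def check_virtual_touch(finger_pos, object_center, object_radius, haptic_zones):
    dist = math.dist(finger_pos, object_center)
    if dist > object_radius:
        return None                        # no virtual contact yet
    # Pick the haptic zone on the handheld device nearest to the contact point.
    zone = min(haptic_zones, key=lambda z: math.dist(haptic_zones[z], finger_pos))
    return {"pulse_zone": zone, "intensity": 1.0 - dist / object_radius}

zones = {"left_edge": (-1.0, 0.0, 0.0), "right_edge": (1.0, 0.0, 0.0)}
print(check_virtual_touch((0.9, 0.1, 0.0), (1.2, 0.0, 0.0), 0.5, zones))
# -> pulse on the right edge, scaled by how deep the finger is into the object
```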
  • Publication number: 20170153741
    Abstract: A camera of a display device may be used to capture images of a hovering finger or stylus. Image processing techniques may be applied to the captured image to sense right-left position of the hovering finger or stylus. To measure distance to the hovering finger or stylus from the camera, a pattern may be displayed by the display so that the hovering finger or stylus is illuminated by a particular portion or color of the pattern over which the finger or stylus hovers. The image processing techniques may be used to determine, from the captured image, which particular portion or color of the pattern illuminates the finger or stylus. This determination, in conjunction with the known displayed pattern, may provide the 3D location or the distance to the hovering finger or stylus from the camera.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 1, 2017
    Inventors: Eyal Ofek, Michel Pahud
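The key step in the abstract above is matching the color reflected off the hovering finger against the known displayed pattern to recover where the finger is. The sketch below covers only that color-matching step, with a toy three-column gradient as an assumed pattern.

```python
# Illustrative lookup of a hovering finger's position from the reflected pattern color.
DISPLAYED_PATTERN = {   # column index -> color the display paints in that column
    0: (255, 0, 0),
    1: (0, 255, 0),
    2: (0, 0, 255),
}

def column_from_observed_color(observed_rgb):
    """Match the color illuminating the fingertip to the known displayed pattern."""
    def err(col):
        return sum((a - b) ** 2 for a, b in zip(DISPLAYED_PATTERN[col], observed_rgb))
    return min(DISPLAYED_PATTERN, key=err)

print(column_from_observed_color((10, 240, 12)))   # -> 1: finger hovers over the green column
```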
  • Publication number: 20170115782
    Abstract: By correlating user grip information with micro-mobility events, electronic devices can provide support for a broad range of interactions and contextually-dependent techniques. Such correlation allows electronic devices to better identify device usage contexts, and in turn provide a more responsive and helpful user experience, especially in the context of reading and task performance. To allow for accurate and efficient device usage context identification, a model may be used to make device usage context determinations based on the correlated gesture and micro-mobility data. Once a context, device usage context, or gesture is identified, an action can be taken on one or more electronic devices.
    Type: Application
    Filed: October 23, 2015
    Publication date: April 27, 2017
    Inventors: Kenneth P. Hinckley, Hrvoje Benko, Michel Pahud, Dongwook Yoon
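The abstract above mentions a model that turns correlated grip and micro-mobility signals into a device usage context. As a stand-in for such a model, the sketch below uses a few hand-written rules; the labels and thresholds are illustrative only.

```python
# Illustrative rule-based stand-in for a grip + micro-mobility usage-context model.
def usage_context(grip, tilt_change_deg, moving_toward_other_user):
    if grip == "two_handed" and tilt_change_deg < 5:
        return "focused_reading"
    if moving_toward_other_user:
        return "showing_to_collaborator"   # e.g. enlarge content for the other viewer
    if grip == "one_handed" and tilt_change_deg > 20:
        return "casual_browsing"
    return "unknown"

print(usage_context("two_handed", 2, False))   # -> 'focused_reading'
print(usage_context("one_handed", 10, True))   # -> 'showing_to_collaborator'
```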