Patents by Inventor Michel Pahud

Michel Pahud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120154293
    Abstract: A computing device is described herein which accommodates gestures that involve intentional movement of the computing device, by establishing an orientation of the device, by dynamically moving it, or both. The gestures may also be accompanied by contact with a display surface (or other part) of the computing device. For example, the user may establish contact with the display surface via a touch input mechanism and/or a pen input mechanism and then move the computing device in a prescribed manner.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Michel Pahud, Wenqi Shen
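
By way of illustration, the following minimal Python sketch shows the kind of contact-plus-movement interpretation this abstract describes. It is not the filed method; the Reading fields, gesture names, and thresholds are all invented.

```python
# Illustrative sketch only, not the patented implementation.
from dataclasses import dataclass

@dataclass
class Reading:
    touch_down: bool   # pen or finger is contacting the display
    tilt_deg: float    # device tilt away from its rest orientation
    accel_g: float     # magnitude of dynamic acceleration

def classify(reading: Reading) -> str:
    """Interpret a combined contact + movement input event."""
    if reading.touch_down and reading.tilt_deg > 30.0:
        return "hold-and-tilt"    # contact plus deliberate re-orientation
    if reading.touch_down and reading.accel_g > 1.5:
        return "hold-and-shake"   # contact plus deliberate motion
    if reading.touch_down:
        return "plain-touch"
    return "no-gesture"

print(classify(Reading(touch_down=True, tilt_deg=42.0, accel_g=0.1)))
# -> hold-and-tilt
```
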
  • Publication number: 20120159401
    Abstract: Workspaces are manipulated on a mobile device having a display screen. A set of two or more discrete workspaces is established. A default discrete workspace is then displayed on the screen, where the default discrete workspace is one of the discrete workspaces in the set. Whenever a user gestures with the mobile device, the gesture is used to select one of the discrete workspaces from the set, and the selected discrete workspace is displayed on the screen.
    Type: Application
    Filed: December 16, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Ken Hinckley, William A. S. Buxton
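
A toy sketch of the gesture-driven workspace switching described above; the workspace names, motion labels, and wrap-around policy are assumptions for illustration.

```python
# Hypothetical mapping of device gestures to a set of discrete workspaces.
WORKSPACES = ["mail", "calendar", "notes"]
DEFAULT = 0  # the default workspace shown first

def select_workspace(current: int, gesture: str) -> int:
    """Return the index of the workspace to display after a gesture."""
    if gesture == "tilt-right":
        return (current + 1) % len(WORKSPACES)
    if gesture == "tilt-left":
        return (current - 1) % len(WORKSPACES)
    return current  # unrecognized gestures leave the display unchanged

shown = DEFAULT
for g in ["tilt-right", "tilt-right", "tilt-left"]:
    shown = select_workspace(shown, g)
    print("displaying:", WORKSPACES[shown])
```
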
  • Publication number: 20120154255
    Abstract: A computing device is described which includes plural display parts provided on respective plural device parts. The display parts define a display surface which provides interfaces to different tools. The tools, in turn, allow a local participant to engage in an interactive session with one or more remote participants. In one case, the tools include: a shared workspace processing module for providing a shared workspace for use by the participants; an audio-video conferencing module for enabling audio-video communication among the participants; and a reference space module for communicating hand gestures and the like among the participants. In one case, the computing device is implemented as a portable computing device that can be held in a participant's hand during use.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Michel Pahud, William A. S. Buxton
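
The abstract names three cooperating tools; the sketch below merely illustrates how such modules might be composed on one device. All class and method names are invented.

```python
# Invented stand-ins for the three tools named in the abstract.
class SharedWorkspace:
    def apply_edit(self, participant: str, edit: str) -> None:
        print(f"{participant} edited the shared workspace: {edit}")

class AVConference:
    def route(self, participant: str, frame: bytes) -> None:
        print(f"relaying {len(frame)} bytes of audio-video for {participant}")

class ReferenceSpace:
    def send_gesture(self, participant: str, gesture: str) -> None:
        print(f"{participant} gestures: {gesture}")

class SessionDevice:
    """One handheld device exposing all three tools on its display surface."""
    def __init__(self) -> None:
        self.workspace = SharedWorkspace()
        self.av = AVConference()
        self.reference = ReferenceSpace()

device = SessionDevice()
device.workspace.apply_edit("local", "added a diagram")
device.reference.send_gesture("local", "circles the diagram")
```
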
  • Publication number: 20120154294
    Abstract: A computing device is described herein which collects input event(s) from at least one contact-type input mechanism (such as a touch input mechanism) and at least one movement-type input mechanism (such as an accelerometer and/or gyro device). The movement-type input mechanism can identify the orientation of the computing device and/or its dynamic motion. The computing device uses these input events to interpret the type of input action that has occurred, e.g., to assess when at least part of the input action is unintentional. The computing device can then act on its interpretation, such as by ignoring part of the input event(s), restoring a pre-action state, or correcting at least part of the input event(s).
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Michel Pahud, Wenqi Shen
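
A hedged sketch of the vet-then-compensate idea in this abstract: a touch is checked against the motion signal, and an accidental touch is ignored with the pre-action state restored. Thresholds and field names are invented.

```python
# Illustrative sketch; the thresholds below are assumptions.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    pressure: float
    accel_g: float   # device acceleration sampled around the touch

def is_unintentional(ev: TouchEvent) -> bool:
    # A light touch registered while the device itself is moving hard is
    # likely accidental (e.g., the device being picked up).
    return ev.accel_g > 2.0 and ev.pressure < 0.2

state, saved = "page-3", "page-3"
for ev in [TouchEvent(pressure=0.1, accel_g=2.5), TouchEvent(0.6, 0.1)]:
    if is_unintentional(ev):
        state = saved                    # restore the pre-action state
        print("ignored accidental touch")
    else:
        saved, state = state, "page-4"   # accept and advance
        print("accepted touch; now on", state)
```
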
  • Publication number: 20120158629
    Abstract: A computing device is described herein for detecting and addressing unintended contact of a hand portion (such as a palm) or other article with a computing device. The computing device uses multiple factors to determine whether input events are accidental, including, for instance, the tilt of a pen device as it approaches a display surface of the computing device. The computing device can also capture and analyze input events which represent a hand that is close to the display surface, but not making physical contact with the display surface. The computing device can execute one or more behaviors to counteract the effect of any inadvertent input actions that it may detect.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Michel Pahud
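
An illustrative (not filed) palm-rejection heuristic in the spirit of this abstract, where several factors, including pen tilt and pen hover, vote on whether a contact is accidental. The weights and threshold are invented.

```python
# Invented weights; a sketch of multi-factor palm rejection.
def palm_score(contact_area: float, pen_tilt_deg: float,
               pen_hovering: bool) -> float:
    """Higher score = more likely an accidental palm contact."""
    score = 0.0
    if contact_area > 4.0:     # palms touch with a large area (cm^2)
        score += 0.5
    if pen_hovering:           # a pen near the surface implies writing,
        score += 0.3           # so nearby large contacts are the palm
    if pen_tilt_deg > 50.0:    # a steep pen approach often precedes palm-down
        score += 0.2
    return score

def accept(contact_area, pen_tilt_deg, pen_hovering, threshold=0.5):
    return palm_score(contact_area, pen_tilt_deg, pen_hovering) < threshold

print(accept(contact_area=6.0, pen_tilt_deg=55.0, pen_hovering=True))
# -> False: rejected as a palm
print(accept(contact_area=0.5, pen_tilt_deg=20.0, pen_hovering=False))
# -> True: accepted as a finger
```
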
  • Publication number: 20120092436
    Abstract: Telepresence is optimized between a mobile user (MU), who participates via a mobile device (MD), and the remote users in a telepresence session. The MD receives video of a first remote user (FRU). Whenever the MU gestures with the MD using a first motion, video of the FRU is displayed. The MD can also receive video and audio of the FRU and a second remote user (SRU), display a workspace, and reproduce the audio of the FRU and SRU in a default manner. Whenever the MU gestures with the MD using the first motion, video of the FRU is displayed and audio of the FRU and SRU is reproduced in a manner that accentuates the FRU. Whenever the MU gestures with the MD using a second motion, video of the SRU is displayed and audio of the FRU and SRU is reproduced in a manner that accentuates the SRU.
    Type: Application
    Filed: October 19, 2010
    Publication date: April 19, 2012
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Ken Hinckley, William A. S. Buxton
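
A sketch of the gesture-to-focus mapping the abstract describes; the motion names and audio gain values are assumptions, not the claimed method.

```python
# Invented motions and gains; illustrates gesture-selected accentuation.
def focus_for(motion: str) -> dict:
    """Map a device motion to the remote user shown and the audio mix."""
    if motion == "tilt-left":      # first motion: accentuate the FRU
        return {"video": "FRU", "gain": {"FRU": 1.0, "SRU": 0.3}}
    if motion == "tilt-right":     # second motion: accentuate the SRU
        return {"video": "SRU", "gain": {"FRU": 0.3, "SRU": 1.0}}
    return {"video": "workspace", "gain": {"FRU": 0.5, "SRU": 0.5}}

for m in ["tilt-left", "tilt-right", "rest"]:
    print(m, "->", focus_for(m))
```
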
  • Patent number: 8095366
    Abstract: Various technologies and techniques are disclosed that improve the instructional nature of fonts and/or the ability to create instructional fonts. Font characters are modified based on user interaction to enhance the user's understanding and/or fluency of the word. The font characters can have sound, motion, and altered appearance. When altering the appearance of the characters, the system operates on a set of control points associated with characters, changes the position of the characters, and changes the influence of the portion of characters on a set of respective spline curves. A designer or other user can customize the fonts and user experience by creating an episode package that specifies words to include in the user interface, and details about actions to take when certain events fire. The episode package can include media effects to play when a particular event associated with the media effect occurs.
    Type: Grant
    Filed: March 27, 2006
    Date of Patent: January 10, 2012
    Assignee: Microsoft Corporation
    Inventors: Margaret K. Johnson, Heinz W. Schuller, Howard W. Phillips, Michel Pahud
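
To make the control-point idea concrete, here is a toy sketch in which a glyph stroke is a list of control points and animating the letter means displacing them. Real font outlines use Bezier splines; this linear stand-in is illustrative only.

```python
# Toy sketch: control points of a stroke, displaced per animation frame.
import math

def wiggle(points, t, amplitude=2.0):
    """Displace each control point vertically by a phase-shifted wave."""
    return [(x, y + amplitude * math.sin(t + i))
            for i, (x, y) in enumerate(points)]

stroke = [(0, 0), (5, 10), (10, 0)]   # control points of one stroke
for t in (0.0, 0.5, 1.0):             # three animation frames
    print([f"({x:.0f},{y:.1f})" for x, y in wiggle(stroke, t)])
```
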
  • Publication number: 20110252316
    Abstract: A system described herein includes an acquirer component that acquires an electronic document comprising text in a first language, wherein the acquirer component acquires the electronic document based at least in part upon a physical object bearing the text contacting, or becoming proximate to, the interactive display of the surface computing device. The system also includes a language selector component that receives an indication of a second language from a user of the surface computing device and selects the second language. A translator component translates the text in the electronic document from the first language to the second language, and a formatter component formats the electronic document for display to the user on the interactive display of the surface computing device, wherein the electronic document comprises the text in the second language.
    Type: Application
    Filed: April 12, 2010
    Publication date: October 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Andrew D. Wilson, Hrvoje Benko, Sauleh Eetemadi, Anand M. Chakravarty
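
The abstract names an acquirer, a language selector, a translator, and a formatter; the sketch below wires minimal stand-ins together. The two-word dictionary is a placeholder for a real translation system.

```python
# Stand-in pipeline; the lexicon is a placeholder, not a real MT system.
def acquire(physical_object_text: str) -> str:
    return physical_object_text        # document captured from the object

def translate(text: str, src: str, dst: str) -> str:
    lexicon = {("fr", "en"): {"bonjour": "hello", "monde": "world"}}
    table = lexicon.get((src, dst), {})
    return " ".join(table.get(w, w) for w in text.split())

def format_for_display(text: str) -> str:
    return text.capitalize() + "."

doc = acquire("bonjour monde")         # object placed on the display
second_language = "en"                 # chosen by the user
print(format_for_display(translate(doc, "fr", second_language)))
# -> Hello world.
```
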
  • Publication number: 20110191704
    Abstract: Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single-modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the number of gestures that are made available to initiate operations of a computing device.
    Type: Application
    Filed: February 4, 2010
    Publication date: August 4, 2011
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Koji Yatani, Jonathan R. Harris, Andrew S. Allen, Georg F. Petschnigg, Michel Pahud
  • Publication number: 20110191719
    Abstract: Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single-modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the number of gestures that are made available to initiate operations of a computing device.
    Type: Application
    Filed: February 4, 2010
    Publication date: August 4, 2011
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Koji Yatani, Andrew S. Allen, Jonathan R. Harris, Georg F. Petschnigg, Michel Pahud
  • Publication number: 20110181524
    Abstract: Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single-modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the number of gestures that are made available to initiate operations of a computing device.
    Type: Application
    Filed: January 28, 2010
    Publication date: July 28, 2011
    Applicant: Microsoft Corporation
    Inventors: Kenneth P. Hinckley, Koji Yatani, Michel Pahud
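
The three applications above all concern bimodal gestures; this single sketch illustrates the core idea of combining two input types into one gesture. The specific copy/move gesture is invented for illustration.

```python
# Invented bimodal gesture: a finger anchors an item while a pen acts on it.
def interpret(touch_holds_item: bool, pen_stroke: str) -> str:
    """Combine a touch input and a pen input into one gesture."""
    if touch_holds_item and pen_stroke == "drag-away":
        return "copy"    # bimodal: finger anchors, pen tears off a copy
    if pen_stroke == "drag-away":
        return "move"    # single-modal pen input
    return "none"

print(interpret(touch_holds_item=True, pen_stroke="drag-away"))   # copy
print(interpret(touch_holds_item=False, pen_stroke="drag-away"))  # move
```
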
  • Publication number: 20110083089
    Abstract: Apparatus and methods for improving touch-screen interface usability and accuracy by determining the trajectory of a pointer as it approaches the touch-screen and modifying the touch-screen display accordingly. The system may predict an object on the display the user is likely to select next. The system may designate this object as a Designated Target Object, or DTO. The system may modify the appearance of the DTO by, for example, changing the size of the DTO, or by changing its shape, style, coloring, perspective, positioning, etc.
    Type: Application
    Filed: October 1, 2009
    Publication date: April 7, 2011
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Oscar E. Murillo, Amy K. Karlson, Benjamin B. Bederson
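
A minimal sketch of trajectory-based target prediction as the abstract outlines it: extrapolate the pointer's recent motion and designate the object nearest the predicted landing point. The linear extrapolation, coordinates, and scale factor are assumptions.

```python
# Invented screen objects and a simple linear extrapolation.
OBJECTS = {"OK": (100, 200), "Cancel": (300, 200)}

def predict_dto(samples):
    """samples: recent (x, y) pointer positions, oldest first."""
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    landing = (x1 + 3 * (x1 - x0), y1 + 3 * (y1 - y0))  # extrapolate ahead
    return min(OBJECTS,
               key=lambda name: (OBJECTS[name][0] - landing[0]) ** 2
                              + (OBJECTS[name][1] - landing[1]) ** 2)

dto = predict_dto([(10, 180), (40, 187)])
print("designated target object:", dto, "- enlarge or restyle it next")
```
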
  • Publication number: 20100318399
    Abstract: A template and/or knowledge associated with a synchronous meeting are obtained by a computing device. The computing device then adaptively manages the synchronous meeting based at least in part on the template and/or knowledge.
    Type: Application
    Filed: June 15, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Jin Li, James E. Oker, Rajesh K. Hegde, Dinei Afonso Ferreira Florencio, Michel Pahud, Sharon K. Cunnington, Philip A. Chou, Zhengyou Zhang
  • Publication number: 20100306670
    Abstract: The claimed subject matter provides a system and/or a method that facilitates interacting with data associated with a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A portion of data can be virtually represented within the telepresence session, with which at least one virtually represented user interacts. A detect component can monitor motions related to at least one virtually represented user to identify a gesture that involves a virtual interaction with the portion of data within the telepresence session. An interaction component can implement a manipulation to the portion of data virtually represented within the telepresence session based upon the identified gesture.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Kori Marie Quinn, Rajesh Kutpadi Hegde, Sharon Kay Cunnington, Michel Pahud, Xuedong D. Huang, Zhengyou Zhang
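
An illustrative split between the detect component and the interaction component named in the abstract; gesture names, motion features, and manipulations are invented.

```python
# Invented components: one names the gesture, the other applies it.
class DetectComponent:
    def identify(self, motion: dict) -> str:
        if motion.get("hands_apart", 0) > 0.2:
            return "stretch"
        if motion.get("swipe_dx", 0) < -0.3:
            return "swipe-left"
        return "none"

class InteractionComponent:
    def manipulate(self, data: dict, gesture: str) -> dict:
        if gesture == "stretch":
            data["zoom"] *= 1.25     # enlarge the shared data
        elif gesture == "swipe-left":
            data["page"] += 1        # advance through the shared data
        return data

shared = {"zoom": 1.0, "page": 1}
gesture = DetectComponent().identify({"hands_apart": 0.3})
print(InteractionComponent().manipulate(shared, gesture))
# -> {'zoom': 1.25, 'page': 1}
```
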
  • Publication number: 20100306647
    Abstract: The claimed subject matter provides a system and/or a method that facilitates replicating a telepresence session with a real world physical meeting. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A trigger component can monitor the telepresence session in real time to identify a participant interaction with an object, wherein the object is at least one of a real world physical object or a virtually represented object within the telepresence session. A feedback component can implement a force feedback to at least one participant within the telepresence session based upon the identified participant interaction with the object, wherein the force feedback is employed via a device associated with at least one participant.
    Type: Application
    Filed: May 27, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Zhengyou Zhang, Xuedong D. Huang, Jin Li, Rajesh Kutpadi Hegde, Kori Marie Quinn, Michel Pahud, Jayman Dalal
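
A sketch of the trigger/feedback pairing described above: a detected interaction with an object produces a haptic pulse on a participant's device. The device API and pulse parameters are stand-ins.

```python
# Invented haptic API; illustrates force feedback on object interaction.
class HapticDevice:
    def pulse(self, strength: float, ms: int) -> None:
        print(f"force feedback: strength={strength}, duration={ms}ms")

def on_interaction(event: dict, device: HapticDevice) -> None:
    """Called by the trigger component for each detected interaction."""
    if event["kind"] == "grasp":
        device.pulse(strength=0.8, ms=120)   # firm cue for picking up
    elif event["kind"] == "brush":
        device.pulse(strength=0.2, ms=40)    # light cue for glancing contact

dev = HapticDevice()
on_interaction({"kind": "grasp", "object": "virtual-model"}, dev)
```
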
  • Publication number: 20100245536
    Abstract: The claimed subject matter provides a system and/or a method that facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A device can be utilized by at least one virtually represented user to enable communication within the telepresence session; the device includes at least one of an input to transmit a portion of a communication to the telepresence session or an output to receive a portion of a communication from the telepresence session. A detection component can adjust at least one of the input or the output related to the device based upon the identification of a cue, which is at least one of a detected movement, a detected event, or an ambient variation.
    Type: Application
    Filed: March 30, 2009
    Publication date: September 30, 2010
    Applicant: Microsoft Corporation
    Inventors: Christian Huitema, William A. S. Buxton, Jonathan E. Paff, Zicheng Liu, Rajesh Kutpadi Hegde, Zhengyou Zhang, Kori Marie Quinn, Jin Li, Michel Pahud
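
A sketch of cue-driven adjustment as the abstract describes it: a detected movement, event, or ambient variation adjusts a device's input or output. The cue names and settings are invented.

```python
# Invented cues and device settings; illustrative only.
def adjust(device: dict, cue: str) -> dict:
    if cue == "participant-left-room":     # movement cue
        device["mic_muted"] = True         # stop transmitting input
    elif cue == "ambient-noise-high":      # ambient variation
        device["speaker_volume"] = min(1.0, device["speaker_volume"] + 0.2)
    elif cue == "presentation-started":    # event cue
        device["screen_share"] = True
    return device

state = {"mic_muted": False, "speaker_volume": 0.5, "screen_share": False}
for cue in ["ambient-noise-high", "participant-left-room"]:
    state = adjust(state, cue)
print(state)
```
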
  • Publication number: 20100228825
    Abstract: The claimed subject matter provides a system and/or a method that facilitates enhancing the employment of a telepresence session. An automatic telepresence engine can evaluate data associated with at least one of an attendee, a schedule for an attendee, or a portion of an electronic communication for an attendee. The automatic telepresence engine can identify at least one of the following for a telepresence session based upon the evaluated data: a participant to include for the telepresence session, a portion of data related to a presentation within the telepresence session, a portion of data related to a meeting topic within the telepresence session, or a device utilized by an attendee to communicate within the telepresence session. The automatic telepresence engine can initiate the telepresence session within a communication framework that includes two or more virtually represented users that communicate therein.
    Type: Application
    Filed: March 6, 2009
    Publication date: September 9, 2010
    Applicant: Microsoft Corporation
    Inventors: Rajesh Kutpadi Hegde, Xuedong David Huang, Sharon Kay Cunnington, Jin Li, Michel Pahud, Ryan M. Burkhardt, Kori Marie Quinn, Jayman Dalal, Zhengyou Zhang
  • Publication number: 20100198579
    Abstract: The claimed subject matter provides a system and/or a method that facilitates communication within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. The telepresence session can include at least one virtually represented user who communicates in a first language; the communication is at least one of a portion of audio, a portion of video, a portion of a graphic, a gesture, or a portion of text. An interpreter component can evaluate the communication to translate the identified first language into a second language within the telepresence session; the translation is automatically provided to at least one virtually represented user within the telepresence session.
    Type: Application
    Filed: February 4, 2009
    Publication date: August 5, 2010
    Applicant: Microsoft Corporation
    Inventors: Sharon Kay Cunnington, Jin Li, Michel Pahud, Rajesh K. Hegde, Zhengyou Zhang
  • Patent number: 7752545
    Abstract: A generator sequence, established as a series of user inputs that trigger an output, is defined and used to provide an enhanced user experience and increased user control. User inputs to the generator system managing the sequence define a user performance value for a specific point, or position, in the sequence. That performance value is then used to establish a new user position in the sequence. The user inputs and/or the new sequence position are used to identify one or more feedback-effect and/or benefit-effect files or functions for producing user output. A user can respond to the feedback and benefit effects by altering their inputs to control the generator sequence.
    Type: Grant
    Filed: March 15, 2006
    Date of Patent: July 6, 2010
    Assignee: Microsoft Corporation
    Inventors: Howard William Phillips, II, Michel Pahud, Margaret Johnson, Heinz Wilfried Schuller
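
A toy rendering of the generator-sequence loop: each input yields a performance value at the current position, which selects the next position and its effects. The scoring rule and effect tables are illustrative only.

```python
# Invented sequence and effects; illustrates the input -> performance ->
# new-position -> effects loop.
SEQUENCE = ["warm-up", "verse", "chorus"]
EFFECTS = {"warm-up": "soft chime", "verse": "drum loop", "chorus": "fanfare"}

def step(position: int, user_input: float) -> int:
    performance = user_input        # e.g., timing accuracy in [0, 1]
    if performance > 0.7:           # a good input advances the sequence
        return min(position + 1, len(SEQUENCE) - 1)
    return position                 # otherwise stay at this position

pos = 0
for inp in [0.9, 0.4, 0.8]:
    pos = step(pos, inp)
    print(f"position: {SEQUENCE[pos]}; feedback effect: {EFFECTS[pos]}")
```
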
  • Patent number: 7730403
    Abstract: Various technologies and techniques are disclosed that improve the instructional nature of fonts and/or the ability to create instructional fonts. Font characters are modified based on user interaction to enhance the user's understanding and/or fluency of the word. The font characters can have sound, motion, and altered appearance. When altering the appearance of the characters, the system operates on a set of control points associated with characters, changes the position of the characters, and changes the influence of the portion of characters on a set of respective spline curves. A designer or other user can customize the fonts and user experience by creating an episode package that specifies words to include in the user interface, and details about actions to take when certain events fire. The episode package can include media effects to play when a particular event associated with the media effect occurs.
    Type: Grant
    Filed: March 27, 2006
    Date of Patent: June 1, 2010
    Assignee: Microsoft Corporation
    Inventors: Margaret K. Johnson, Heinz W. Schuller, Howard W. Phillips, Michel Pahud