Patents by Inventor Michel Pahud

Michel Pahud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090315895
    Abstract: Font animation technique embodiments are presented which animate alpha-numeric characters of a message or document. In one general embodiment this is accomplished by the sender transmitting parametric information and animation instructions pertaining to the display of characters found in the message or document to a recipient. The parametric information identifies where to split the characters and where to rotate the resulting sections. The sections of each character affected are then translated and/or rotated and/or scaled as dictated by the animation instructions to create an animation over time. Additionally, if a gap in a stroke of an animated character exists between the sections of the character, a connecting section is displayed to close the stroke gap, making the character appear contiguous.
    Type: Application
    Filed: June 23, 2008
    Publication date: December 24, 2009
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, William Buxton, Sharon Cunnington
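
The split-rotate-reconnect scheme in this abstract can be pictured with a short sketch. This is a minimal illustration, assuming a glyph stroke is just a polyline of (x, y) points; the function names and the choice of rotation pivot are assumptions, not details from the patent.

```python
import math

def split_stroke(points, split_index):
    """Split a polyline into two sections at the given split point."""
    return points[:split_index + 1], points[split_index:]

def rotate_section(points, pivot, angle):
    """Rotate every point of a section around a pivot by `angle` radians."""
    px, py = pivot
    c, s = math.cos(angle), math.sin(angle)
    return [(px + (x - px) * c - (y - py) * s,
             py + (x - px) * s + (y - py) * c) for x, y in points]

def animate_frame(stroke, split_index, t, max_angle=math.pi / 4):
    """One frame at time t in [0, 1]: swing the upper section about its top
    end, then add a connecting section if a gap opened at the split, so the
    character still appears contiguous."""
    lower, upper = split_stroke(stroke, split_index)
    upper = rotate_section(upper, pivot=upper[-1], angle=t * max_angle)
    gap = math.dist(lower[-1], upper[0])
    connector = [lower[-1], upper[0]] if gap > 1e-9 else []
    return lower, upper, connector

# Halfway through animating the vertical bar of an "L":
stroke = [(0.0, 0.0), (0.0, 10.0), (0.0, 20.0)]
lower, upper, connector = animate_frame(stroke, split_index=1, t=0.5)
print(connector)  # the connecting section that closes the stroke gap
```
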
  • Patent number: 7512537
    Abstract: The subject invention provides a unique system and method that facilitates integrating natural language input and graphics in a cooperative manner. In particular, as natural language input is entered by a user, an illustrated or animated scene can be generated to correspond to such input. The natural language input can be in sentence form. Upon detection of an end-of-sentence indicator, the input can be processed using NLP techniques and the images or templates representing at least one of the actor, action, background and/or object specified in the input can be selected and rendered. Thus, the user can nearly immediately visualize an illustration of his/her input. The input can be typed, written, or spoken—whereby speech recognition can be employed to convert the speech to text. New graphics can be created as well to allow the customization and expansion of the invention according to the user's preferences.
    Type: Grant
    Filed: March 22, 2005
    Date of Patent: March 31, 2009
    Assignee: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Lee A. Schwartz
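
To make the text-to-scene pipeline concrete, here is a toy sketch of the selection step: each recognized word is mapped to the image or template for the actor, action, object, or background it names. The tiny keyword lexicons stand in for real NLP parsing, and every name below is illustrative.

```python
ACTORS = {"dog": "dog.png", "girl": "girl.png"}
ACTIONS = {"runs": "run_animation", "jumps": "jump_animation"}
OBJECTS = {"ball": "ball.png", "kite": "kite.png"}
BACKGROUNDS = {"park": "park.png", "beach": "beach.png"}

def sentence_to_scene(sentence):
    """Select the image or template for each recognized word's role."""
    scene = {}
    for word in sentence.lower().rstrip(".!?").split():
        for role, lexicon in (("actor", ACTORS), ("action", ACTIONS),
                              ("object", OBJECTS), ("background", BACKGROUNDS)):
            if word in lexicon:
                scene[role] = lexicon[word]
    return scene

print(sentence_to_scene("The dog runs in the park."))
# {'actor': 'dog.png', 'action': 'run_animation', 'background': 'park.png'}
```
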
  • Publication number: 20080055316
    Abstract: Various technologies and techniques are disclosed for programmatically representing sentence meaning. Metadata is retrieved for an actor, the actor representing a noun to be in a scene. At least one image is also retrieved for the actor and displayed on the background. An action representing a verb for the actor to perform is retrieved. The at least one image of the actor is displayed with a modified behavior that is associated with the action and modified based on the actor metadata. If there is a patient representing another noun in the scene, then patient metadata and at least one patient image are retrieved. The at least one patient image is then displayed. When the patient is present, the modified behavior of the actor can be performed against the patient. The nouns and/or verbs can be customized by a content author.
    Type: Application
    Filed: August 30, 2006
    Publication date: March 6, 2008
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Howard W. Phillips
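
A rough sketch of the actor/action/patient model the abstract describes might look like the following, where metadata on the actor modifies how an action's behavior is displayed and the action can optionally be performed against a patient. The dataclass layout and the speed rule are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Actor:                 # a noun in the scene
    name: str
    images: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)   # e.g. {"size": "small"}

@dataclass
class Action:                # a verb for an actor to perform
    verb: str
    base_speed: float = 1.0

def perform(actor: Actor, action: Action, patient: Optional[Actor] = None):
    """Display the action's behavior, modified by the actor's metadata and,
    when a patient is present, performed against the patient."""
    speed = action.base_speed * (0.5 if actor.metadata.get("size") == "small" else 1.0)
    target = f" against {patient.name}" if patient else ""
    print(f"{actor.name}: {action.verb} at speed {speed}{target}")

mouse = Actor("mouse", ["mouse.png"], {"size": "small"})
cat = Actor("cat", ["cat.png"])
perform(mouse, Action("run"))           # mouse: run at speed 0.5
perform(cat, Action("chase"), mouse)    # cat: chase at speed 1.0 against mouse
```
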
  • Publication number: 20070226641
    Abstract: Various technologies and techniques are disclosed that improve the instructional nature of fonts and/or the ability to create instructional fonts. Font characters are modified based on user interaction to enhance the user's understanding and/or fluency of the word. The font characters can have sound, motion, and altered appearance. When altering the appearance of the characters, the system operates on a set of control points associated with characters, changes the position of the characters, and changes the influence of the portion of characters on a set of respective spline curves. A designer or other user can customize the fonts and user experience by creating an episode package that specifies words to include in the user interface, and details about actions to take when certain events fire. The episode package can include media effects to play when a particular event associated with the media effect occurs.
    Type: Application
    Filed: March 27, 2006
    Publication date: September 27, 2007
    Applicant: Microsoft Corporation
    Inventors: Margaret K. Johnson, Heinz W. Schuller, Howard W. Phillips, Michel Pahud
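
One concrete detail in this abstract is altering a character's appearance by repositioning the control points that drive its curves. In the sketch below, a quadratic Bezier stands in for the patent's spline curves; the wobble function is invented for illustration.

```python
import math

def bezier_point(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

def wobble_control_point(point, time, amplitude=2.0):
    """Nudge a control point over time so the glyph appears to move."""
    x, y = point
    return x, y + amplitude * math.sin(time)

# The curved bowl of a letter as one quadratic segment:
p0, p1, p2 = (0.0, 0.0), (5.0, 10.0), (10.0, 0.0)
for t_frame in (0.0, 1.0, 2.0):
    moved = wobble_control_point(p1, t_frame)   # change the control point...
    mid = bezier_point(p0, moved, p2, 0.5)      # ...and the curve follows it
    print(f"time {t_frame}: curve midpoint {mid}")
```
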
  • Publication number: 20070226615
    Abstract: Various technologies and techniques are disclosed that improve the instructional nature of fonts and/or the ability to create instructional fonts. Font characters are modified based on user interaction to enhance the user's understanding and/or fluency of the word. The font characters can have sound, motion and altered appearance. When altering the appearance of the characters, the system operates on a set of control points associated with characters, changes the position of the characters, and changes the influence of the portion of characters on a set of respective spline curves. A designer or other user can customize the fonts and user experience by creating an episode package that specifies words to include in the user interface, and details about actions to take when certain events fire. The episode package can include media effects to play when a particular event associated with the media effect occurs.
    Type: Application
    Filed: March 27, 2006
    Publication date: September 27, 2007
    Applicant: Microsoft Corporation
    Inventors: Margaret K. Johnson, Heinz W. Schuller, Howard W. Phillips, Michel Pahud
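
This application shares its abstract with 20070226641 above, so rather than repeat the control-point sketch, here is a rough sketch of the other mechanism both abstracts describe: an episode package that binds words, events, actions, and media effects together. The dictionary schema is an assumption, not the patented format.

```python
episode = {
    "words": ["cat", "hat", "bat"],          # words to include in the UI
    "events": {
        "on_word_completed": {"action": "bounce_letters", "media": "cheer.wav"},
        "on_wrong_letter":   {"action": "shake_letter",   "media": "buzz.wav"},
    },
}

def fire_event(package, event_name):
    """Run the action and play the media effect bound to an event, if any."""
    handler = package["events"].get(event_name)
    if handler:
        print(f"run action {handler['action']!r}, play {handler['media']!r}")

fire_event(episode, "on_word_completed")
# run action 'bounce_letters', play 'cheer.wav'
```
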
  • Publication number: 20070220018
    Abstract: A generator sequence is defined and used for providing an enhanced user experience and increased user control. A generator sequence is established as a series of user inputs that trigger an output. User inputs to the generator system managing the generator sequence are employed to define a user performance value for a specific point, or position, in the generator sequence. The user performance value is then used to establish a new user point, or position, in the generator sequence. The user inputs and/or the new user generator sequence point are used to identify one or more feedback effects files and/or functions and one or more benefit effects files and/or functions for producing user output. A user can utilize the feedback effects and/or benefit effects to alter their inputs to control the generator sequence.
    Type: Application
    Filed: March 15, 2006
    Publication date: September 20, 2007
    Applicant: Microsoft Corporation
    Inventors: Howard Phillips, Michel Pahud, Margaret Johnson, Heinz Schuller
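
The generator-sequence loop reads as: score the user's input at the current position, use that performance value to pick a new position, and emit the feedback and benefit effects bound to it. A hedged sketch follows; the scoring rule and effect tables are invented for illustration.

```python
sequence = [                                   # the generator sequence
    {"expected": "a", "feedback": "glow.fx",  "benefit": "points_10"},
    {"expected": "b", "feedback": "spark.fx", "benefit": "points_20"},
    {"expected": "c", "feedback": "burst.fx", "benefit": "points_50"},
]

def step(position, user_input):
    """Score the input at the current position, move to a new position on a
    good performance value, and emit the effects bound to that position."""
    entry = sequence[position]
    performance = 1.0 if user_input == entry["expected"] else 0.0
    if performance > 0.5:
        position = min(position + 1, len(sequence) - 1)
    print(f"feedback={entry['feedback']} benefit={entry['benefit']} -> position {position}")
    return position

pos = 0
for key in ["a", "x", "b"]:    # 'x' is a miss, so the position does not advance
    pos = step(pos, key)
```
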
  • Publication number: 20060248086
    Abstract: Story generation models are disclosed for assisting users, such as young children, to create stories, for example. These story generation models may implement several tools for helping users manipulate sophisticated images, such as for changing the perspective view of any images being incorporated into a story being created, for example. Other tools may be provided that may request the users to handwrite or type the names of any selected images they may desire to incorporate into a story for educational purposes. The story generation models may also implement tools for defining and/or enforcing rules during the story generation process, such as requesting the users to spell correctly and/or providing the correct spelling for any misspellings, for example. Further, the story generation models may utilize tools for enabling users to collaborate in real time, such as by allowing the users to see each other's story contributions.
    Type: Application
    Filed: May 2, 2005
    Publication date: November 2, 2006
    Applicant: Microsoft Corporation
    Inventor: Michel Pahud
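
Most of this abstract is about authoring tools, but the spelling rule lends itself to a small sketch: a typed image name is accepted only if spelled correctly, otherwise a correction is suggested. The word list and the use of difflib for suggestions are stand-ins, not the patented method.

```python
import difflib

KNOWN_WORDS = {"dragon", "castle", "princess", "forest"}

def try_add_image(typed_name):
    """Accept a correctly spelled name; otherwise suggest a correction."""
    word = typed_name.strip().lower()
    if word in KNOWN_WORDS:
        print(f"'{word}' added to the story")
        return True
    suggestion = difflib.get_close_matches(word, KNOWN_WORDS, n=1)
    hint = f" Did you mean '{suggestion[0]}'?" if suggestion else ""
    print(f"'{word}' is misspelled.{hint}")
    return False

try_add_image("dragon")   # 'dragon' added to the story
try_add_image("dragun")   # 'dragun' is misspelled. Did you mean 'dragon'?
```
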
  • Publication number: 20060217979
    Abstract: The subject invention provides a unique system and method that facilitates integrating natural language input and graphics in a cooperative manner. In particular, as natural language input is entered by a user, an illustrated or animated scene can be generated to correspond to such input. The natural language input can be in sentence form. Upon detection of an end-of-sentence indicator, the input can be processed using NLP techniques and the images or templates representing at least one of the actor, action, background and/or object specified in the input can be selected and rendered. Thus, the user can nearly immediately visualize an illustration of his/her input. The input can be typed, written, or spoken—whereby speech recognition can be employed to convert the speech to text. New graphics can be created as well to allow the customization and expansion of the invention according to the user's preferences.
    Type: Application
    Filed: March 22, 2005
    Publication date: September 28, 2006
    Applicant: Microsoft Corporation
    Inventors: Michel Pahud, Takako Aikawa, Lee Schwartz
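
This application shares its abstract with granted patent 7512537 above, so this sketch covers a different step: buffering typed (or speech-recognized) input and triggering scene rendering only when an end-of-sentence indicator arrives. The render stub and terminator set are illustrative.

```python
END_OF_SENTENCE = {".", "!", "?"}

def render_scene(sentence):
    print(f"[render] illustrating: {sentence!r}")    # stub for the renderer

def feed_characters(chars):
    """Accumulate typed or speech-recognized characters and hand a complete
    sentence to the renderer at each end-of-sentence indicator."""
    buffer = ""
    for ch in chars:
        buffer += ch
        if ch in END_OF_SENTENCE:
            render_scene(buffer.strip())
            buffer = ""                  # start collecting the next sentence

feed_characters("The girl jumps. The dog runs!")
# [render] illustrating: 'The girl jumps.'
# [render] illustrating: 'The dog runs!'
```
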
  • Publication number: 20060059431
    Abstract: A real-time collaborative graphics application and method (such as Microsoft® Visio®) that runs on top of a collaborative networking platform (such as Microsoft® ConferenceXP) and provides real-time collaboration. The real-time collaborative graphics application and method personalizes local objects created by a local user to readily distinguish them from remote objects created by remote users. Identifiers are used to allow the authorship of an object to be easily determined by all users. Local objects, remote objects, and a combination of the two can be moved and manipulated by any user. A local user may avoid sharing his local objects with remote users. Moreover, a local user can decide to hide remote objects created by remote users. If at a later time the local user decides to once again view the hidden remote objects, all updates made since the remote objects were hidden are automatically applied to the local user's document.
    Type: Application
    Filed: August 30, 2004
    Publication date: March 16, 2006
    Applicant: Microsoft Corporation
    Inventor: Michel Pahud
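
The authorship and hide/show behavior in this abstract can be sketched briefly: every object carries its author's identifier, remote objects can be hidden, and updates that arrive while they are hidden are still applied, so the objects are current when shown again. The class layout below is an assumption, not the patented design.

```python
class SharedCanvas:
    def __init__(self, local_user):
        self.local_user = local_user
        self.objects = {}        # object id -> {"author": ..., "shape": ...}
        self.hidden_remote = False

    def on_object_update(self, obj_id, author, shape):
        """Apply every update, even while remote objects are hidden, so the
        document is already up to date when they are shown again."""
        self.objects[obj_id] = {"author": author, "shape": shape}

    def visible_objects(self):
        return [o for o in self.objects.values()
                if o["author"] == self.local_user or not self.hidden_remote]

canvas = SharedCanvas(local_user="michel")
canvas.on_object_update(1, "michel", "rectangle")
canvas.on_object_update(2, "alice", "circle")
canvas.hidden_remote = True
canvas.on_object_update(2, "alice", "bigger circle")   # update while hidden
canvas.hidden_remote = False
print(canvas.visible_objects())   # alice's circle reappears already updated
```
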