Patents by Inventor Guillaume Belrose

Guillaume Belrose has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7593854
    Abstract: A user is presented with a picture image either in hard-copy or electronic form. Particular picture features in the image each have associated information that is presented to the user upon the user requesting such information by at least selecting the picture feature using a feature-selection arrangement. Should the user select a picture feature for which no information is provided, an identifier of the feature, for example its image coordinates, is output to inform a person involved in providing the picture and related information. Preferably, to request information about a picture feature, the user, as well as selecting the feature, also inputs a query by voice; in this case, where the selected feature has no associated information, the user query is also provided back to the person involved in providing the picture and related information.
    Type: Grant
    Filed: December 6, 2002
    Date of Patent: September 22, 2009
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Guillaume Belrose
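The lookup-and-fallback behaviour this abstract describes can be sketched in a few lines. The following is a minimal illustration, not the patented implementation; all class and field names are invented. Known features return their associated information, while selections of unannotated features are queued, with their coordinates and the spoken query, for the person who provides the picture's annotations.

```python
from typing import Optional

class AnnotatedPicture:
    def __init__(self, annotations: dict[tuple[int, int], str]):
        # Maps feature coordinates to their associated information.
        self.annotations = annotations
        # Unmatched selections, queued for the picture's author.
        self.unanswered: list[tuple[tuple[int, int], str]] = []

    def select(self, coords: tuple[int, int], query: str = "") -> Optional[str]:
        info = self.annotations.get(coords)
        if info is None:
            # No information for this feature: record its identifier
            # (here, its image coordinates) together with the query.
            self.unanswered.append((coords, query))
        return info

picture = AnnotatedPicture({(10, 20): "The old lighthouse, built in 1887."})
print(picture.select((10, 20)))                   # known feature -> its info
print(picture.select((55, 60), "what is this?"))  # unknown -> None, logged
print(picture.unanswered)
```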
  • Patent number: 7171361
    Abstract: The grammar of the speech input to a voice service system is normally specified by the voice service system. However, this can produce problems in respect of idioms, such as dates, which are expressed in different ways by different users. To facilitate the handling of idioms, a user is permitted to specify their own idiom grammar which is then used by the voice service system to interpret idioms in speech input from that user. Typically, the normal grammar of speech input is specified by grammar tags used to mark up a voice page script interpreted by a voice page browser; in this case, it will generally be the voice browser that is responsible for employing the user-specified idiom grammar to interpret the corresponding idiom in the speech input by the user. The user-specified grammar can be pre-specified directly to the voice browser by the user or fetched by the browser from a remote location on the fly.
    Type: Grant
    Filed: December 13, 2001
    Date of Patent: January 30, 2007
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Andrew Thomas, Marianne Hickey, Stephen John Hinde, Guillaume Belrose
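The date idiom mentioned in the abstract makes the override idea concrete. Below is an illustrative sketch, with invented names and regex grammars standing in for the voice browser's grammar tags: the service's default grammar interprets a date one way, but a user-specified idiom grammar, when supplied, takes precedence for that user's input.

```python
import re
from datetime import date

# Default grammar: US-style "month/day/year".
DEFAULT_DATE = re.compile(r"(?P<m>\d{1,2})/(?P<d>\d{1,2})/(?P<y>\d{4})")
# A user's own idiom grammar: UK-style "day/month/year".
USER_DATE = re.compile(r"(?P<d>\d{1,2})/(?P<m>\d{1,2})/(?P<y>\d{4})")

def parse_date(utterance: str, user_grammar=None) -> date:
    # The user-specified idiom grammar, if any, takes precedence.
    grammar = user_grammar or DEFAULT_DATE
    m = grammar.search(utterance)
    if not m:
        raise ValueError("no date idiom recognised")
    return date(int(m["y"]), int(m["m"]), int(m["d"]))

# The same input is interpreted differently depending on the user.
print(parse_date("meet on 3/4/2024"))             # 4 March (default)
print(parse_date("meet on 3/4/2024", USER_DATE))  # 3 April (user idiom)
```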
  • Patent number: 7113911
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a beacon device at or near the entity, the beacon device transmitting, over a short-range communication link, contact data identifying a voice service associated with, but hosted separately from, the entity. The transmitted contact data is picked up by equipment carried by a nearby person and used to contact the voice service over a wireless network. The person then interacts with the voice service, the latter acting as a voice proxy for the local entity. The contact data can be presented to the user in other ways, for example, by being inscribed on the local entity for scanning or user input into the equipment.
    Type: Grant
    Filed: November 21, 2001
    Date of Patent: September 26, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen John Hinde, Paul St John Brittan, Marianne Hickey, Lawrence Wilcock, Guillaume Belrose, Andrew Thomas
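The flow in the abstract, beacon broadcasts contact data, user equipment uses it to reach a separately hosted voice service, can be reduced to a toy sketch. All names and the "voice://" contact-data format below are invented for illustration; the dictionary lookup stands in for contacting the service over a wireless network.

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    entity_name: str
    service_uri: str  # contact data for the separately hosted voice service

    def broadcast(self) -> str:
        # Stands in for transmission over the short-range link.
        return self.service_uri

# Separately hosted voice services, keyed by their contact data.
VOICE_SERVICES = {
    "voice://statues/town-hall-lion": "Hello! I am the lion statue.",
}

def user_equipment_contacts(contact_data: str) -> str:
    # Stands in for contacting the voice service over a wireless network;
    # the service replies as a voice proxy for the mute local entity.
    return VOICE_SERVICES[contact_data]

beacon = Beacon("town-hall lion statue", "voice://statues/town-hall-lion")
print(user_equipment_contacts(beacon.broadcast()))
```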
  • Patent number: 7103548
    Abstract: A text message generated at a sending device is converted into audio form by a message-conversion system for delivery to a target recipient. This conversion is effected in a manner enabling emotions, encoded by indicators embedded in the text message, to be expressed through multiple types of presentation feature in the audio form of the message. The mapping of emotions to feature values is pre-established for each feature type whilst the sender selection of one or more feature types to be used to express encoded emotions is specified by type indications inserted into the message at its time of generation.
    Type: Grant
    Filed: June 3, 2002
    Date of Patent: September 5, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Robert Francis Squibbs, Paul St. John Brittan, Guillaume Belrose
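The abstract describes a two-level mapping: per feature type, a pre-established table from emotions to feature values, with the sender choosing which feature types actually express the encoded emotion. A minimal sketch of that structure, with all feature types and values invented:

```python
# Pre-established mapping of emotions to feature values, per feature type.
EMOTION_FEATURES = {
    "voice_pitch": {"happy": "raised",   "sad": "lowered"},
    "tempo":       {"happy": "fast",     "sad": "slow"},
    "background":  {"happy": "birdsong", "sad": "rain"},
}

def render(emotion: str, selected_types: list[str]) -> dict[str, str]:
    # Express the emotion only through the sender-selected feature types.
    return {t: EMOTION_FEATURES[t][emotion] for t in selected_types}

# The sender chose to express emotion through pitch and tempo only.
print(render("happy", ["voice_pitch", "tempo"]))
```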
  • Publication number: 20050174997
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a beacon device at or near the entity, the beacon device transmitting, over a short-range communication link, contact data identifying a voice service associated with, but hosted separately from, the entity. The transmitted contact data is picked up by equipment carried by a nearby person and used to contact the voice service over a wireless network. The person then interacts with the voice service, the latter acting as a voice proxy for the local entity. The contact data can be presented to the user in other ways, for example, by being inscribed on the local entity for scanning or user input into the equipment.
    Type: Application
    Filed: April 13, 2005
    Publication date: August 11, 2005
    Inventors: Stephen Hinde, Paul Brittan, Marianne Hickey, Lawrence Wilcock, Guillaume Belrose, Andrew Thomas
  • Publication number: 20030144843
    Abstract: A user is presented with a picture image either in hard-copy or electronic form. Particular picture features in the image each have associated information that is presented to the user upon the user requesting such information by at least selecting the picture feature using a feature-selection arrangement. Should the user select a picture feature for which no information is provided, an identifier of the feature, for example its image coordinates, is output to inform a person involved in providing the picture and related information. Preferably, to request information about a picture feature, the user, as well as selecting the feature, also inputs a query by voice; in this case, where the selected feature has no associated information, the user query is also provided back to the person involved in providing the picture and related information.
    Type: Application
    Filed: December 6, 2002
    Publication date: July 31, 2003
    Applicant: HEWLETT-PACKARD COMPANY
    Inventor: Guillaume Belrose
  • Publication number: 20030112267
    Abstract: A system for presenting a multi-modal picture includes picture presentation equipment for displaying an image of the picture and for enabling a user to interact with the picture by selecting a particular picture feature and asking a specific query relating to the feature. A voice browser system controlled according to dialog scripts associated with the picture, determines an appropriate response having regard to the spoken user query and the selected picture feature. Each picture can have multiple narrators associated with it and the user can choose which narrator is currently active. Picture authoring apparatus is also provided.
    Type: Application
    Filed: December 6, 2002
    Publication date: June 19, 2003
    Applicant: HEWLETT-PACKARD COMPANY
    Inventor: Guillaume Belrose
  • Publication number: 20030095669
    Abstract: An audio user interface is provided in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate. To facilitate user navigation in the audio field, sound sources are arranged to indicate their location in the audio field relative to a preset reference (such as the user's current facing direction or straight-ahead facing direction) by a corresponding audio indication such as an approximate relative bearing—for example “upper left”. The audio indication given by a sound source is arranged to change dynamically as the sound source and/or the present reference changes position.
    Type: Application
    Filed: January 29, 2002
    Publication date: May 22, 2003
    Applicant: Hewlett-Packard Company
    Inventors: Guillaume Belrose, Robert Francis Squibbs
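Turning a sound source's position relative to the preset reference into an approximate spoken bearing such as "upper left" is a simple sectoring computation. The sketch below is illustrative only; the sector boundaries are invented, not taken from the patent.

```python
def bearing_label(azimuth_deg: float, elevation_deg: float) -> str:
    # Angles are relative to the preset reference (e.g. the user's
    # current facing direction). Normalise azimuth to (-180, 180].
    azimuth_deg = (azimuth_deg + 180) % 360 - 180
    if abs(azimuth_deg) <= 30:
        horiz = "ahead"
    elif azimuth_deg > 0:
        horiz = "right" if azimuth_deg <= 150 else "behind"
    else:
        horiz = "left" if azimuth_deg >= -150 else "behind"
    vert = "upper " if elevation_deg > 15 else ("lower " if elevation_deg < -15 else "")
    return vert + horiz

print(bearing_label(-60, 30))  # upper left
print(bearing_label(10, 0))    # ahead
```

Because the label is recomputed from the current angles, it changes dynamically as the source or the reference moves, which is the behaviour the abstract calls for.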
  • Publication number: 20020198715
    Abstract: A method is provided of generating an artificial language for use, for example, in human speech interfaces to devices. The language generation method involves using a genetic algorithm to evolve a population of individuals over a plurality of generations, the individuals forming or being used to form candidate artificial-language words. These words are evaluated against a predetermined fitness function with the results of this evaluation being used to select individuals to be evolved to form the next generation of the population. To produce languages suitable for human speech interfaces to devices, the fitness function preferably takes account both of correct recognition of candidate words when spoken to a speech recognition system, and the similarity of candidate words to words in a set of user-favourite words.
    Type: Application
    Filed: June 11, 2002
    Publication date: December 26, 2002
    Applicant: HEWLETT PACKARD COMPANY
    Inventor: Guillaume Belrose
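The evolutionary loop in the abstract can be sketched as a toy genetic algorithm. Everything below is illustrative: consonant/vowel alternation stands in for "correct recognition by a speech recogniser", string similarity stands in for closeness to user-favourite words, and the operators and parameters are invented.

```python
import random
from difflib import SequenceMatcher

random.seed(1)
LETTERS = "abcdefghijklmnopqrstuvwxyz"
VOWELS = set("aeiou")
FAVOURITES = ["lumo", "kira", "tane"]  # invented user-favourite words

def fitness(word: str) -> float:
    # (a) Proxy for recognisability: consonant/vowel alternation.
    alternation = sum((word[i] in VOWELS) != (word[i + 1] in VOWELS)
                     for i in range(len(word) - 1)) / max(len(word) - 1, 1)
    # (b) Familiarity: best similarity to any user-favourite word.
    familiarity = max(SequenceMatcher(None, word, f).ratio() for f in FAVOURITES)
    return alternation + familiarity

def mutate(word: str) -> str:
    i = random.randrange(len(word))
    return word[:i] + random.choice(LETTERS) + word[i + 1:]

# Evolve a population of candidate words over many generations.
population = ["".join(random.choice(LETTERS) for _ in range(4)) for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection by fitness
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, round(fitness(best), 2))
```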
  • Publication number: 20020198712
    Abstract: A method is provided of generating an artificial language for use, for example, in human speech interfaces to devices. In a preferred implementation, the language generation method involves using a genetic algorithm to evolve a population of individuals over a plurality of generations, the individuals forming or being used to form candidate artificial-language words. The method is carried out in a manner favouring the production of artificial-language words which are more easily correctly recognised by a speech recognition system and have a familiarity to a human user. This is achieved, for example, by selecting words for evolution on the basis of an evaluation carried out using a fitness function that takes account both of correct recognition of candidate words when spoken to a speech recognition system, and the similarity of candidate words to words in a set of user-favourite words.
    Type: Application
    Filed: June 11, 2002
    Publication date: December 26, 2002
    Applicant: HEWLETT PACKARD COMPANY
    Inventors: Stephen John Hinde, Guillaume Belrose
  • Publication number: 20020193996
    Abstract: A text message generated at a sending device is converted into audio form by a message-conversion system for delivery to a target recipient. This conversion is effected in a manner enabling emotions, encoded by indicators embedded in the text message, to be expressed through multiple types of presentation feature in the audio form of the message. The mapping of emotions to feature values is pre-established for each feature type whilst the sender selection of one or more feature types to be used to express encoded emotions is specified by type indications inserted into the message at its time of generation.
    Type: Application
    Filed: June 3, 2002
    Publication date: December 19, 2002
    Applicant: HEWLETT-PACKARD COMPANY
    Inventors: Robert Francis Squibbs, Paul St. John Brittan, Guillaume Belrose
  • Publication number: 20020191757
    Abstract: A text message, such as sent using a short message service of a mobile network, is converted into audio form for delivery to a target recipient. The message includes tags that serve to identify user-related recordings that are to be included in the audio form of the message. In converting the message into audio form, the tags in the message are identified and result in the corresponding recordings being combined with the output of a text-to-speech converter to produce the audio form of the message. The message tags preferably map to recordings according to mapping data specified by either the message sender or target recipient.
    Type: Application
    Filed: June 3, 2002
    Publication date: December 19, 2002
    Applicant: HEWLETT-PACKARD COMPANY
    Inventor: Guillaume Belrose
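The tag-splitting-and-combining step can be sketched directly. This is a hypothetical illustration: the `[tag]` syntax, the mapping data, and the `<play …>` / `<tts: …>` placeholders (standing in for recordings and a text-to-speech converter) are all invented.

```python
import re

# Mapping data from message tags to user-related recordings.
RECORDINGS = {"laugh": "<play laugh.wav>", "dog": "<play rex_bark.wav>"}

def to_audio(message: str) -> list[str]:
    segments = []
    # Split on tags while keeping them, so order is preserved.
    for part in re.split(r"(\[[a-z]+\])", message):
        tag = part[1:-1] if part.startswith("[") and part.endswith("]") else None
        if tag in RECORDINGS:
            segments.append(RECORDINGS[tag])           # insert the recording
        elif part.strip():
            segments.append(f"<tts: {part.strip()}>")  # synthesise the text
    return segments

print(to_audio("Missing you [laugh] and say hi to [dog]"))
```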
  • Publication number: 20020150256
    Abstract: An audio user interface is provided in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate. The sound sources are located in the audio field relative to an audio field reference. In order to rotate the audio field either in response to user input or to achieve a particular stabilisation of the audio field, the audio-field reference can be offset relative to a presentation reference determined by a mounting configuration of audio output devices through which the sound sources are synthesised. To assist the user in appreciating the current orientation of the audio field, a visual indication is given of the orientation of the audio-field reference relative to a predetermined indicator reference taking account, at least at a component level, of any change in value of said offset and any change in value of indicator-reference orientation relative to the presentation reference.
    Type: Application
    Filed: January 29, 2002
    Publication date: October 17, 2002
    Inventors: Guillaume Belrose, Jeroen Geert Bijsmans
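The bookkeeping in this abstract reduces to composing two angles measured against the same presentation reference: the audio-field reference's offset, and the indicator reference's orientation. A minimal sketch (angles in degrees; the function name is invented):

```python
def indicator_angle(field_offset: float, indicator_ref: float) -> float:
    # Orientation of the audio-field reference relative to the
    # indicator reference, as shown on the visual indication.
    return (field_offset - indicator_ref) % 360

# Rotating the field by 90 degrees with a fixed indicator reference:
print(indicator_angle(90.0, 0.0))   # 90.0
# The indicator reference itself turns by 30 degrees as well:
print(indicator_angle(90.0, 30.0))  # 60.0
```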
  • Publication number: 20020147586
    Abstract: The presence in a user's current environment of a real or virtual entity, or a representation of it, is announced to the user by an audio announcement. The audio announcement has a presentation character at least one aspect of which, other than or additional to its loudness, is set in dependence on the range distance between the user and the entity, or its representation, in the current environment. In one embodiment, the environment is an audio field in which the entity is represented by a synthesised sound source; in this case, the range distance is that between the user and the sound source. In another embodiment, the environment is the real world with the entity being a real-world entity; in this case, the range distance is that between the user and the entity. The announcement presentation-character aspect that is range dependent is, for example, speaking style, speaking voice, vocabulary, etc.
    Type: Application
    Filed: January 29, 2002
    Publication date: October 10, 2002
    Applicant: Hewlett-Packard Company
    Inventor: Guillaume Belrose
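Picking a non-loudness presentation aspect from the range distance is a simple banding decision. The sketch below uses speaking style, one of the aspects the abstract names; the distance bands themselves are invented for illustration.

```python
def announcement_style(range_m: float) -> str:
    # Speaking style as a range-dependent presentation-character aspect.
    if range_m < 2:
        return "whisper"         # intimate style for nearby entities
    if range_m < 10:
        return "conversational"
    return "formal"              # distant entities announced formally

print(announcement_style(1.0))   # whisper
print(announcement_style(25.0))  # formal
```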
  • Publication number: 20020128840
    Abstract: New spoken languages are provided that can be easily understood by automated speech recognizers associated with equipment, the languages being learnt by human users in order to speak to the equipment. These new languages are simplified in terms of vocabulary and structure and are specifically designed to minimize recognition errors by automated speech recognizers by being made up of phonemes or other uttered elements that are not easily confused with each other by a speech recognizer. The uttered elements are preferably chosen from an existing language. Apparatus and methods for controlling equipment using these recognizer-friendly languages are also provided as are training systems for training human users to speak these languages, and methods and systems for creating new language instances.
    Type: Application
    Filed: December 21, 2001
    Publication date: September 12, 2002
    Inventors: Stephen John Hinde, Guillaume Belrose
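The vocabulary-design idea, build words only from uttered elements a recogniser rarely confuses, can be illustrated with a toy example. The syllables and confusion pairs below are invented stand-ins for measured recogniser data, not anything from the patent.

```python
from itertools import product

SYLLABLES = ["ba", "pa", "ko", "go", "mi", "ni"]
# Pairs an (imagined) speech recogniser tends to confuse.
CONFUSABLE = {("ba", "pa"), ("ko", "go"), ("mi", "ni")}

def confusable(a: str, b: str) -> bool:
    return (a, b) in CONFUSABLE or (b, a) in CONFUSABLE

# Greedily keep a subset of mutually distinguishable syllables.
kept: list[str] = []
for s in SYLLABLES:
    if all(not confusable(s, k) for k in kept):
        kept.append(s)

# Two-syllable command words for the equipment's vocabulary.
words = ["".join(p) for p in product(kept, repeat=2)]
print(kept)      # ['ba', 'ko', 'mi']
print(words[:3])
```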
  • Publication number: 20020128845
    Abstract: The grammar of the speech input to a voice service system is normally specified by the voice service system. However, this can produce problems in respect of idioms, such as dates, which are expressed in different ways by different users. To facilitate the handling of idioms, a user is permitted to specify their own idiom grammar which is then used by the voice service system to interpret idioms in speech input from that user. Typically, the normal grammar of speech input is specified by grammar tags used to mark up a voice page script interpreted by a voice page browser; in this case, it will generally be the voice browser that is responsible for employing the user-specified idiom grammar to interpret the corresponding idiom in the speech input by the user. The user-specified grammar can be pre-specified directly to the voice browser by the user or fetched by the browser from a remote location on the fly.
    Type: Application
    Filed: December 13, 2001
    Publication date: September 12, 2002
    Inventors: Andrew Thomas, Marianne Hickey, Stephen John Hinde, Guillaume Belrose
  • Publication number: 20020082839
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a beacon device at or near the entity, the beacon device transmitting, over a short-range communication link, contact data identifying a voice service associated with, but hosted separately from, the entity. The transmitted contact data is picked up by equipment carried by a nearby person and used to contact the voice service over a wireless network. The person then interacts with the voice service, the latter acting as a voice proxy for the local entity. The contact data can be presented to the user in other ways, for example, by being inscribed on the local entity for scanning or user input into the equipment.
    Type: Application
    Filed: November 21, 2001
    Publication date: June 27, 2002
    Inventors: Stephen John Hinde, Paul St John Brittan, Marianne Hickey, Lawrence Wilcock, Guillaume Belrose, Andrew Thomas
  • Publication number: 20020082838
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by detecting the location of a user wishing to communicate with such entities, and comparing the user's location with the known locations of entities having associated voice services. The voice services are separately hosted from the entities themselves. Upon the user being determined to be close to a voice-enabled entity, contact is initiated between the user and the voice service associated with the local entity; for example, contact data for the voice service is passed to user equipment from where it is sent to a network voice browser and used by the latter to contact the voice service. The user then interacts with the voice service, the latter acting as a voice proxy for the local entity with voice output from the service being controlled to appear to emanate from the local entity.
    Type: Application
    Filed: November 21, 2001
    Publication date: June 27, 2002
    Inventors: Stephen John Hinde, Lawrence Wilcock, Paul St John Brittan, Guillaume Belrose
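The location-comparison step this abstract describes can be sketched as a nearest-entity lookup. All positions, ranges, and contact-data strings below are invented; the returned contact data is what would be passed on to the network voice browser to initiate the session.

```python
import math

# Known locations of voice-enabled entities and their contact data.
ENTITIES = {
    (0.0, 0.0):   "voice://fountain",
    (50.0, 20.0): "voice://gatehouse",
}

def nearby_voice_service(user_xy, max_range=10.0):
    # Compare the user's location with each entity's known location
    # and return contact data for the closest one within range.
    best, best_d = None, max_range
    for xy, contact in ENTITIES.items():
        d = math.dist(user_xy, xy)
        if d <= best_d:
            best, best_d = contact, d
    return best  # None when no voice-enabled entity is close enough

print(nearby_voice_service((3.0, 4.0)))    # voice://fountain (5.0 away)
print(nearby_voice_service((100.0, 0.0))) # None
```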
  • Publication number: 20020078148
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a receiving device at or near the entity, for picking up contact data transmitted by a nearby person wanting to talk to the local entity. This contact data is used by the receiving device to establish communication between a voice service associated with the local entity and equipment carried by the user. The voice service is hosted separately from the local entity, and takes the form, for example, of pages marked up with voice-markup tags for interpretation by a voice browser.
    Type: Application
    Filed: November 21, 2001
    Publication date: June 20, 2002
    Inventors: Stephen John Hinde, Lawrence Wilcock, Paul St. John Brittan, Guillaume Belrose
  • Publication number: 20020077826
    Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing an associated voice service hosted separately from the entity, the service being initiated when a user comes near the entity. The service uses audio input and output devices that are located either in user-carried equipment or in the locality of the entity. The voice service can be delivered to multiple users simultaneously with the users being joined into the same communication session with the voice service so that all users hear at least some of the same service output. The voice service can be arranged to serve a group of associated entities, not necessarily near each other.
    Type: Application
    Filed: November 21, 2001
    Publication date: June 20, 2002
    Inventors: Stephen John Hinde, Lawrence Wilcock, Paul St. John Brittan, Guillaume Belrose