Patents by Inventor Stephen John Hinde
Stephen John Hinde has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9398140
Abstract: According to one aspect of the present invention there is provided a method, in a telecommunication system, of processing a communication event associated with a subscriber of a communication network, the method comprising: obtaining context data relating to the subscriber, and in response to receiving the communication event, the telecommunications system processing the communication event in accordance with one or more rules generated by the telecommunications system based on the obtained context.
Type: Grant
Filed: July 20, 2006
Date of Patent: July 19, 2016
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Robert H. Hyerle, Stephen John Hinde
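
The rule-generation step lends itself to a small illustration. The Python sketch below is a minimal, hypothetical rendering of context-driven rule generation and event processing; the Event and Rule structures, field names, and actions are assumptions for illustration and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    subscriber: str
    kind: str          # e.g. "voice_call", "sms"
    caller: str

@dataclass
class Rule:
    matches: Callable[[Event], bool]
    action: str        # e.g. "forward_to_voicemail", "deliver"

def rules_from_context(context: dict) -> List[Rule]:
    """Generate processing rules from subscriber context (hypothetical fields)."""
    rules = []
    if context.get("status") == "in_meeting":
        rules.append(Rule(lambda e: e.kind == "voice_call", "forward_to_voicemail"))
    # Fall-through rule: deliver anything not matched above.
    rules.append(Rule(lambda e: True, "deliver"))
    return rules

def process(event: Event, context: dict) -> str:
    """Apply the first matching rule to an incoming communication event."""
    for rule in rules_from_context(context):
        if rule.matches(event):
            return rule.action
    return "deliver"

print(process(Event("alice", "voice_call", "bob"), {"status": "in_meeting"}))
# -> forward_to_voicemail
```
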
-
Patent number: 7190794
Abstract: An audio “desktop” user-interface is provided in which services are represented by audio labels presented in an audio field through respective synthesized sound sources. A desired service is selected by identifying it through its sound source or audio label. A user can modify the layout of service-representing sound sources and preferably add and remove services.
Type: Grant
Filed: January 29, 2002
Date of Patent: March 13, 2007
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Stephen John Hinde
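
As a rough sketch of the kind of registry such an audio desktop might keep, the hypothetical Python class below tracks service-representing sound sources in an audio field and supports the add, remove, relayout, and select-by-label operations the abstract mentions; the class and field names are illustrative assumptions, not the patent's design.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Service:
    name: str
    audio_label: str     # short synthesized phrase announcing the service
    azimuth_deg: float   # where its sound source sits in the audio field

class AudioDesktop:
    def __init__(self) -> None:
        self.services: Dict[str, Service] = {}

    def add(self, service: Service) -> None:
        self.services[service.name] = service

    def remove(self, name: str) -> None:
        self.services.pop(name, None)

    def move(self, name: str, azimuth_deg: float) -> None:
        # The user rearranges the layout of service-representing sound sources.
        self.services[name].azimuth_deg = azimuth_deg

    def select(self, spoken_label: str) -> Optional[Service]:
        # A service is selected by identifying it through its audio label.
        for s in self.services.values():
            if s.audio_label.lower() == spoken_label.lower():
                return s
        return None

desk = AudioDesktop()
desk.add(Service("email", "electronic mail", azimuth_deg=-45.0))
desk.add(Service("news", "news headlines", azimuth_deg=45.0))
print(desk.select("news headlines"))
```
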
-
Patent number: 7171361
Abstract: The grammar of the speech input to a voice service system is normally specified by the voice service system. However, this can produce problems in respect of idioms, such as dates, which are expressed in different ways by different users. To facilitate the handling of idioms, a user is permitted to specify their own idiom grammar which is then used by the voice service system to interpret idioms in speech input from that user. Typically, the normal grammar of speech input is specified by grammar tags used to mark up a voice page script interpreted by a voice page browser; in this case, it will generally be the voice browser that is responsible for employing the user-specified idiom grammar to interpret the corresponding idiom in the speech input by the user. The user-specified grammar can be pre-specified directly to the voice browser by the user or fetched by the browser from a remote location on the fly.
Type: Grant
Filed: December 13, 2001
Date of Patent: January 30, 2007
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Andrew Thomas, Marianne Hickey, Stephen John Hinde, Guillaume Belrose
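
A minimal sketch of the idiom-handling idea, assuming purely for illustration that grammars can be represented as regular-expression patterns and that the user's idiom grammar is tried before the service default; the pattern strings and function names are invented.

```python
import re
from typing import Optional

# Default date grammar supplied by the voice service (illustrative pattern only).
SERVICE_DATE_GRAMMAR = [r"(?P<month>\w+) (?P<day>\d{1,2}) (?P<year>\d{4})"]   # "July 4 2002"

# Idiom grammar pre-specified by this user (e.g. day-first dates).
USER_DATE_GRAMMAR = [r"(?P<day>\d{1,2}) (?P<month>\w+) (?P<year>\d{4})"]      # "4 July 2002"

def parse_date(utterance: str, user_grammar: Optional[list] = None) -> Optional[dict]:
    # The user's idiom grammar, when present, is tried before the service default.
    for pattern in (user_grammar or []) + SERVICE_DATE_GRAMMAR:
        m = re.fullmatch(pattern, utterance)
        if m:
            return m.groupdict()
    return None

print(parse_date("4 July 2002", USER_DATE_GRAMMAR))   # parsed with the user's idiom grammar
print(parse_date("July 4 2002"))                      # falls back to the service grammar
```
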
-
Patent number: 7113911
Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a beacon device at or near the entity, the beacon device transmitting, over a short-range communication link, contact data identifying a voice service associated with, but hosted separately from, the entity. The transmitted contact data is picked up by equipment carried by a nearby person and used to contact the voice service over a wireless network. The person then interacts with the voice service, the latter acting as a voice proxy for the local entity. The contact data can be presented to the user in other ways, for example, by being inscribed on the local entity for scanning or user input into the equipment.
Type: Grant
Filed: November 21, 2001
Date of Patent: September 26, 2006
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen John Hinde, Paul St John Brittan, Marianne Hickey, Lawrence Wilcock, Guillaume Belrose, Andrew Thomas
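
A toy Python sketch of the beacon-to-equipment hand-off described above; the advertisement fields, class names, and the voice:// URI are invented for illustration rather than taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class BeaconAdvertisement:
    entity_name: str
    voice_service_uri: str   # where the separately hosted voice service lives

class Beacon:
    """Stands in for a short-range transmitter at or near the local entity."""
    def __init__(self, advert: BeaconAdvertisement):
        self.advert = advert

    def transmit(self) -> BeaconAdvertisement:
        return self.advert

class HandheldEquipment:
    """Stands in for the equipment carried by a nearby person."""
    def receive(self, advert: BeaconAdvertisement) -> None:
        # A real system would open a wireless connection to the voice service;
        # here we just report what would be contacted.
        print(f"Contacting voice proxy for '{advert.entity_name}' at {advert.voice_service_uri}")

statue_beacon = Beacon(BeaconAdvertisement("town statue", "voice://services.example/statue"))
HandheldEquipment().receive(statue_beacon.transmit())
```
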
-
Patent number: 6970824
Abstract: Voice-controlled apparatus is provided which minimises the risk of activating more than one such apparatus at a time where multiple voice-controlled apparatus exist in close proximity. To start voice control of the apparatus, a user needs to be looking at the apparatus when speaking. Preferably, after the user stops looking at the apparatus, continuing voice control can only be effected whilst the user continues speaking without breaks longer than a predetermined duration. Detection of whether the user is looking at the apparatus can be effected in a number of ways including by the use of camera systems, by a head-mounted directional transmitter, and by detecting the location and direction of facing of the user.
Type: Grant
Filed: December 4, 2001
Date of Patent: November 29, 2005
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen John Hinde, Timothy Alan Heath Wilkinson, Stephen B. Pollard, Andrew Arthur Hunter
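
The gaze-plus-continuing-speech gating can be sketched as a small state machine. The Python below is a hypothetical illustration with an assumed maximum break duration; the actual gaze-detection mechanisms (camera systems, head-mounted transmitters, and so on) are abstracted into a boolean flag.

```python
class GazeGatedVoiceControl:
    """Accept speech only while the user looks at the apparatus, or for
    continuing speech with pauses no longer than a maximum break."""

    def __init__(self, max_break_s: float = 1.5):
        self.max_break_s = max_break_s
        self.active = False
        self.last_speech_t = None

    def on_speech(self, looking_at_apparatus: bool, now: float) -> bool:
        if looking_at_apparatus:
            self.active = True                       # gaze starts (or sustains) voice control
        elif self.active and self.last_speech_t is not None:
            if now - self.last_speech_t > self.max_break_s:
                self.active = False                  # pause too long after looking away
        if self.active:
            self.last_speech_t = now
        return self.active

ctrl = GazeGatedVoiceControl()
print(ctrl.on_speech(looking_at_apparatus=True, now=0.0))    # True: user is looking
print(ctrl.on_speech(looking_at_apparatus=False, now=1.0))   # True: short break, still speaking
print(ctrl.on_speech(looking_at_apparatus=False, now=5.0))   # False: break exceeded the limit
```
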
-
Patent number: 6909999
Abstract: A browser with a sound input receives a sound passage associated with a content site. The browser sends a representation of the sound passage to a service system where it is compared with stored representations of sound passages that each have an associated URI. On finding a match, the service system sends back the URI associated with the matched stored sound-passage representation. The browser uses this URI to access the content site.
Type: Grant
Filed: December 4, 2001
Date of Patent: June 21, 2005
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Andrew Thomas, Stephen John Hinde, Martin Sadler, Simon Edwin Crouch
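
A minimal sketch of the service-side lookup, assuming for illustration that sound-passage representations can be treated as exact-match keys; a real service would compare acoustic fingerprints approximately, and the table contents here are invented.

```python
from typing import Optional

# Hypothetical stored table mapping sound-passage representations to URIs.
STORED_PASSAGES = {
    "jingle:ascending-major-triad": "https://example.com/shop",
    "jingle:two-tone-chime": "https://example.com/news",
}

def resolve(sound_representation: str) -> Optional[str]:
    """Service-side lookup: return the URI of the matching stored passage, if any."""
    return STORED_PASSAGES.get(sound_representation)

# Browser side: send the representation, then use the returned URI to access the site.
uri = resolve("jingle:two-tone-chime")
print("Navigate to:", uri)
```
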
-
Patent number: 6664892
Abstract: When a person first enters an unfamiliar work space, it is useful for that person to know what devices are present in the space and often the person will spend the first few minutes looking around, effectively carrying out an inventory of the devices present. In order to simplify this process the devices are arranged to announce their existence by sound in response to a prompt, such as a handclap. To avoid the announcements being made all at once in an unintelligible manner, the devices interact with each other to order their announcements so that each device announcement is, at least in due course, made uninterrupted by announcements from other devices. Typically, this interaction involves the devices using a collision-detection and back-off protocol applied to the announcements themselves.
Type: Grant
Filed: November 28, 2001
Date of Patent: December 16, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Andrew Thomas, Stephen John Hinde, Paul St John Brittan
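
The ordering behaviour can be illustrated with a toy collision-detection and back-off simulation in Python; the slot-based scheduling, parameters, and device names below are assumptions for illustration rather than the protocol claimed in the patent.

```python
import random

def schedule_announcements(devices, slot_s=1.0, max_backoff_slots=8, seed=0):
    """Toy collision-detection/back-off simulation: each pending device picks a
    random slot; a slot chosen by exactly one device becomes its announcement
    time, and colliding devices back off and retry in the next window."""
    rng = random.Random(seed)
    pending = list(devices)
    schedule = {}
    t = 0
    while pending:
        choices = {d: t + rng.randrange(max_backoff_slots) for d in pending}
        for device, slot in choices.items():
            if list(choices.values()).count(slot) == 1:     # no collision in this slot
                schedule[device] = slot * slot_s
        pending = [d for d in pending if d not in schedule]
        t += max_backoff_slots                              # move to the next back-off window
    return schedule

print(schedule_announcements(["printer", "projector", "whiteboard", "thermostat"]))
```
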
-
Patent number: 6549142
Abstract: Audio alerts are provided in an environment, such as a house, concerning categorized events to be reported. Examples of the events are receipt of e-mails and voice mails. The presence of a person entering or leaving a space of the environment is detected and a processing system determines reportable event categories that have occurred. Each possible event category has a corresponding audio signature. The signatures for the event categories that have occurred are played either simultaneously or sequentially, within the hearing of the person detected.
Type: Grant
Filed: November 28, 2001
Date of Patent: April 15, 2003
Assignee: Hewlett-Packard Company
Inventors: Andrew Thomas, Stephen John Hinde, Martin Sadler
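
A minimal Python sketch of the reporting step, assuming a hypothetical mapping from event categories to audio-signature files and a simple sequential playback policy; names and file paths are invented.

```python
# Hypothetical mapping from reportable event categories to short audio signatures.
AUDIO_SIGNATURES = {
    "email": "signature-email.wav",
    "voicemail": "signature-voicemail.wav",
    "calendar": "signature-calendar.wav",
}

def on_person_detected(unreported_events: dict, play) -> None:
    """Play the signature of each event category that has occurred since the
    person was last in the space (sequential playback shown; could be mixed)."""
    for category, count in unreported_events.items():
        if count > 0 and category in AUDIO_SIGNATURES:
            play(AUDIO_SIGNATURES[category])

# Stand-in for an audio player: just print which signature would be played.
on_person_detected({"email": 3, "voicemail": 0, "calendar": 1}, play=print)
```
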
-
Publication number: 20030018477
Abstract: An audio “desktop” user-interface is provided in which services are represented by audio labels presented in an audio field through respective synthesized sound sources. A desired service is selected by identifying it through its sound source or audio label. A user can modify the layout of service-representing sound sources and preferably add and remove services.
Type: Application
Filed: January 29, 2002
Publication date: January 23, 2003
Inventor: Stephen John Hinde
-
Publication number: 20020198712
Abstract: A method is provided of generating an artificial language for use, for example, in human speech interfaces to devices. In a preferred implementation, the language generation method involves using a genetic algorithm to evolve a population of individuals over a plurality of generations, the individuals forming or being used to form candidate artificial-language words. The method is carried out in a manner favouring the production of artificial-language words which are more easily correctly recognised by a speech recognition system and have a familiarity to a human user. This is achieved, for example, by selecting words for evolution on the basis of an evaluation carried out using a fitness function that takes account both of correct recognition of candidate words when spoken to a speech recognition system, and the similarity of candidate words to words in a set of user-favourite words.
Type: Application
Filed: June 11, 2002
Publication date: December 26, 2002
Applicant: HEWLETT PACKARD COMPANY
Inventors: Stephen John Hinde, Guillaume Belrose
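
The combined fitness function is the heart of the selection step. The Python sketch below is a hypothetical stand-in: the recognition score is faked rather than measured against a real recogniser, the familiarity term uses simple string similarity against an assumed favourite-word list, and the weights are arbitrary.

```python
import difflib

USER_FAVOURITE_WORDS = ["piano", "tiger", "lemon"]   # illustrative "familiar" word set

def recognition_score(word: str) -> float:
    """Stand-in for how reliably a speech recogniser returns the word when it is
    spoken; a real system would run actual recognition trials."""
    vowels = sum(c in "aeiou" for c in word)
    return min(1.0, vowels / max(len(word), 1) + 0.4)

def familiarity(word: str) -> float:
    # Similarity of the candidate to the closest user-favourite word.
    return max(difflib.SequenceMatcher(None, word, fav).ratio()
               for fav in USER_FAVOURITE_WORDS)

def fitness(word: str, w_rec: float = 0.6, w_fam: float = 0.4) -> float:
    # Combined fitness driving selection within the genetic algorithm.
    return w_rec * recognition_score(word) + w_fam * familiarity(word)

for candidate in ["pialo", "xkrzt", "timon"]:
    print(candidate, round(fitness(candidate), 3))
```
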
-
Publication number: 20020133352
Abstract: A sound service system participates in a multi-turn sound exchange with a human user, this sound exchange involving one or more cycles in each of which the service and user take turns to provide a noise or utterance the form or content of which is already public. The service system preferably also participates in normal voice dialog exchanges with the human user, the service system using a respective manager for the normal voice dialogs and the multi-turn sound exchanges with control passing between the two managers as required, each manager when in control effecting this control according to a corresponding script.
Type: Application
Filed: December 7, 2001
Publication date: September 19, 2002
Inventors: Stephen John Hinde, Robert Francis Squibbs
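
A rough sketch of the control hand-over between the two managers, with an invented trigger phrase and a trivial scripted exchange standing in for the corresponding scripts mentioned in the abstract; none of the class names or behaviour comes from the application itself.

```python
class VoiceDialogManager:
    def handle(self, utterance: str) -> str:
        return f"dialog reply to: {utterance}"

class SoundExchangeManager:
    """Drives a scripted multi-turn sound exchange (e.g. call-and-response)."""
    def __init__(self, script):
        self.script = list(script)
    def handle(self, utterance: str) -> str:
        return self.script.pop(0) if self.script else "<exchange finished>"
    def finished(self) -> bool:
        return not self.script

class ServiceSystem:
    """Passes control between the two managers as required."""
    def __init__(self):
        self.dialog = VoiceDialogManager()
        self.exchange = None
    def turn(self, utterance: str) -> str:
        if utterance == "knock knock":               # invented trigger for a sound exchange
            self.exchange = SoundExchangeManager(["who's there?", "ha ha!"])
        if self.exchange and not self.exchange.finished():
            return self.exchange.handle(utterance)
        self.exchange = None
        return self.dialog.handle(utterance)

svc = ServiceSystem()
for u in ["hello", "knock knock", "boo", "what's the weather?"]:
    print(u, "->", svc.turn(u))
```
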
-
Publication number: 20020128845
Abstract: The grammar of the speech input to a voice service system is normally specified by the voice service system. However, this can produce problems in respect of idioms, such as dates, which are expressed in different ways by different users. To facilitate the handling of idioms, a user is permitted to specify their own idiom grammar which is then used by the voice service system to interpret idioms in speech input from that user. Typically, the normal grammar of speech input is specified by grammar tags used to mark up a voice page script interpreted by a voice page browser; in this case, it will generally be the voice browser that is responsible for employing the user-specified idiom grammar to interpret the corresponding idiom in the speech input by the user. The user-specified grammar can be pre-specified directly to the voice browser by the user or fetched by the browser from a remote location on the fly.
Type: Application
Filed: December 13, 2001
Publication date: September 12, 2002
Inventors: Andrew Thomas, Marianne Hickey, Stephen John Hinde, Guillaume Belrose
-
Publication number: 20020128840
Abstract: New spoken languages are provided that can be easily understood by automated speech recognizers associated with equipment, the languages being learnt by human users in order to speak to the equipment. These new languages are simplified in terms of vocabulary and structure and are specifically designed to minimize recognition errors by automated speech recognizers by being made up of phonemes or other uttered elements that are not easily confused with each other by a speech recognizer. The uttered elements are preferably chosen from an existing language. Apparatus and methods for controlling equipment using these recognizer-friendly languages are also provided, as are training systems for training human users to speak these languages, and methods and systems for creating new language instances.
Type: Application
Filed: December 21, 2001
Publication date: September 12, 2002
Inventors: Stephen John Hinde, Guillaume Belrose
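
A toy illustration of vocabulary selection under a confusability constraint; textual similarity stands in for acoustic confusability, and the candidate words and subset size are arbitrary assumptions rather than anything drawn from the application.

```python
import difflib
import itertools

# Candidate uttered elements drawn from an existing language (illustrative).
CANDIDATES = ["red", "blue", "green", "three", "tree", "bee", "sea", "stop", "go"]

def confusability(a: str, b: str) -> float:
    # Stand-in for acoustic confusability: textual similarity of the elements.
    return difflib.SequenceMatcher(None, a, b).ratio()

def pick_vocabulary(candidates, size=5):
    """Pick the subset whose most-confusable pair is least confusable."""
    best, best_score = None, float("inf")
    for subset in itertools.combinations(candidates, size):
        worst_pair = max(confusability(a, b) for a, b in itertools.combinations(subset, 2))
        if worst_pair < best_score:
            best, best_score = subset, worst_pair
    return best

print(pick_vocabulary(CANDIDATES))
```
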
-
Publication number: 20020105575
Abstract: Voice-controlled apparatus is provided which minimises the risk of activating more than one such apparatus at a time where multiple voice-controlled apparatus exist in close proximity. To start voice control of the apparatus, a user needs to be looking at the apparatus when speaking. Preferably, after the user stops looking at the apparatus, continuing voice control can only be effected whilst the user continues speaking without breaks longer than a predetermined duration. Detection of whether the user is looking at the apparatus can be effected in a number of ways including by the use of camera systems, by a head-mounted directional transmitter, and by detecting the location and direction of facing of the user.
Type: Application
Filed: December 4, 2001
Publication date: August 8, 2002
Inventors: Stephen John Hinde, Timothy Alan Heath Wilkinson, Stephen B. Pollard, Andrew Arthur Hunter
-
Publication number: 20020107596
Abstract: To encode a URL in sound, the characters of the URL are mapped to sound codewords each of which is used to produce, in a sound output, a sound feature particular to that codeword, the nature of the sound features and of the overall mapping between characters and sound features being such that at least certain character combinations that occur frequently in URLs produce sound sequences of a musical character. Decoding of the sound URL effects the reverse mapping.
Type: Application
Filed: December 4, 2001
Publication date: August 8, 2002
Inventors: Andrew Thomas, Stephen John Hinde, Martin Sadler, Simon Edwin Crouch
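
A minimal sketch of the character-to-codeword mapping and its reverse. The note codebook below is invented for illustration and uses one note per character, whereas the application contemplates codewords engineered so that frequent URL runs come out as musical phrases.

```python
# Illustrative codebook: each URL character maps to a short note sequence
# (its sound codeword).
CODEBOOK = {
    "w": ["C4"], ".": ["G4"], "c": ["E4"], "o": ["F4"], "m": ["D4"],
    "e": ["A4"], "x": ["B4"], "a": ["C5"], "p": ["D5"], "l": ["E5"], "/": ["G3"],
}

def encode(url: str):
    """Map URL characters to a flat list of notes; unknown characters are skipped
    here, though a real codebook would cover the full URL character set."""
    return [note for ch in url.lower() for note in CODEBOOK.get(ch, [])]

def decode(notes):
    """Reverse mapping from notes back to characters (codewords are one note each
    in this toy version, so decoding is a simple per-note lookup)."""
    reverse = {codeword[0]: ch for ch, codeword in CODEBOOK.items()}
    return "".join(reverse.get(n, "?") for n in notes)

seq = encode("www.example.com")
print(seq)
print(decode(seq))   # -> www.example.com
```
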
-
Publication number: 20020107696
Abstract: Voice-controlled apparatus is provided which minimises the risk of activating more than one such apparatus at a time where multiple voice-controlled apparatus exist in close proximity. To start voice control of the apparatus, a user needs to be touching the apparatus when speaking. Preferably, after the user stops touching the apparatus, continuing voice control can only be effected whilst the user continues speaking without breaks longer than a predetermined duration. The touch-sensitive area of the apparatus is made of substantial size in the top front part of the apparatus.
Type: Application
Filed: December 4, 2001
Publication date: August 8, 2002
Inventors: Andrew Thomas, Stephen John Hinde
-
Publication number: 20020107942
Abstract: A browser with a sound input receives a sound passage associated with a content site. The browser sends a representation of the sound passage to a service system where it is compared with stored representations of sound passages that each have an associated URI. On finding a match, the service system sends back the URI associated with the matched stored sound-passage representation. The browser uses this URI to access the content site.
Type: Application
Filed: December 4, 2001
Publication date: August 8, 2002
Inventors: Andrew Thomas, Stephen John Hinde, Martin Sadler, Simon Edwin Crouch
-
Publication number: 20020082838
Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by detecting the location of a user wishing to communicate with such entities, and comparing the user's location with the known locations of entities having associated voice services. The voice services are separately hosted from the entities themselves. Upon the user being determined to be close to a voice-enabled entity, contact is initiated between the user and the voice service associated with the local entity; for example, contact data for the voice service is passed to user equipment from where it is sent to a network voice browser and used by the latter to contact the voice service. The user then interacts with the voice service, the latter acting as a voice proxy for the local entity with voice output from the service being controlled to appear to emanate from the local entity.
Type: Application
Filed: November 21, 2001
Publication date: June 27, 2002
Inventors: Stephen John Hinde, Lawrence Wilcock, Paul St John Brittan, Guillaume Belrose
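
A small sketch of the proximity test, assuming entity locations are known 2-D coordinates and contact data is a simple URI string; both the data layout and the range threshold are illustrative assumptions, not details from the application.

```python
import math

# Known locations of voice-enabled entities and their separately hosted voice services.
ENTITIES = {
    "fountain": {"pos": (10.0, 4.0), "voice_service": "voice://services.example/fountain"},
    "kiosk": {"pos": (2.0, 1.0), "voice_service": "voice://services.example/kiosk"},
}

def nearby_voice_service(user_pos, radius=3.0):
    """Return the contact data for the closest entity within range, which would
    then be passed via user equipment to a network voice browser."""
    best = None
    for name, info in ENTITIES.items():
        d = math.dist(user_pos, info["pos"])
        if d <= radius and (best is None or d < best[0]):
            best = (d, name, info["voice_service"])
    return best

print(nearby_voice_service((1.0, 1.5)))    # close to the kiosk
print(nearby_voice_service((50.0, 50.0)))  # nothing in range -> None
```
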
-
Publication number: 20020082839
Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a beacon device at or near the entity, the beacon device transmitting, over a short-range communication link, contact data identifying a voice service associated with, but hosted separately from, the entity. The transmitted contact data is picked up by equipment carried by a nearby person and used to contact the voice service over a wireless network. The person then interacts with the voice service, the latter acting as a voice proxy for the local entity. The contact data can be presented to the user in other ways, for example, by being inscribed on the local entity for scanning or user input into the equipment.
Type: Application
Filed: November 21, 2001
Publication date: June 27, 2002
Inventors: Stephen John Hinde, Paul St John Brittan, Marianne Hickey, Lawrence Wilcock, Guillaume Belrose, Andrew Thomas
-
Publication number: 20020078148
Abstract: A local entity without its own means of voice communication is provided with the semblance of having a voice interaction capability. This is done by providing a receiving device at or near the entity, for picking up contact data transmitted by a nearby person wanting to talk to the local entity. This contact data is used by the receiving device to establish communication between a voice service associated with the local entity and equipment carried by the user. The voice service is hosted separately from the local entity, and takes the form, for example, of pages marked up with voice-markup tags for interpretation by a voice browser.
Type: Application
Filed: November 21, 2001
Publication date: June 20, 2002
Inventors: Stephen John Hinde, Lawrence Wilcock, Paul St. John Brittan, Guillaume Belrose