Abstract: To provide a system capable of appropriately proposing a search term candidate for each page of a document. Provided is a search document information storage device comprising: a vocabulary extraction means 3; a keyword storage means 5; a keyword extraction means 7; a topic term storage means 9; a topic term extraction means 11; a search term candidate extraction means 13; a search term candidate display means 17; a search term input means 19; and a document search information storage means 21.
Abstract: To provide a presentation system for effectively displaying keywords for selecting the next slide during a presentation. Provided is a display device comprising: a voice recognition unit 53; a conversation-derived term extraction unit 55; a search keyword storage unit 57; a search keyword extraction unit 59; a material storage unit 61; a relevant page information extraction unit 63; a selection term extraction unit 65; and a selection term display unit 71.
Abstract: An electronic translator translates input speech into multiple streams of data that are simultaneously delivered to the user, such as a hearing-impaired individual. Preferably, the data is delivered in audible, visual, and text formats. These multiple data streams are delivered to the hearing-impaired individual in a synchronized fashion, thereby creating a cognitive response. Preferably, the system of the present invention converts the input speech to a text format and then translates the text into any of three other forms, including sign language, animation, and computer-generated speech. The sign language and animation translations are preferably implemented using the medium of digital movies, in which videos of a person signing words, phrases, and finger-spelled words, and of animations corresponding to the words, are selectively accessed from databases and displayed.
Type: Grant
Filed: July 7, 2000
Date of Patent: April 23, 2002
Assignee: Interactive Solutions, Inc.
Inventors: Morgan Greene, Jr., Virginia Greene, Harry E. Newman, Mark J. Yuhas, Michael F. Dorety