Patents by Inventor Yasunari Obuchi
Yasunari Obuchi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20090207131
Abstract: Disclosed is an acoustic pointing device capable of performing pointing manipulation without placing any auxiliary equipment on a desk.
Type: Application
Filed: November 12, 2008
Publication date: August 20, 2009
Inventors: Masahito Togami, Takashi Sumiyoshi, Yasunari Obuchi
-
Publication number: 20090067646
Abstract: The present invention eliminates an atmosphere that is unsuitable for a given space by controlling the atmosphere. The atmosphere in the space is analyzed based on voice, and if an unsuitable atmosphere is detected, lighting capable of creating a suitable atmosphere is selected and applied, thereby controlling the atmosphere of the space.
Type: Application
Filed: April 13, 2005
Publication date: March 12, 2009
Inventors: Nobuo Sato, Yasunari Obuchi
-
Patent number: 7467085
Abstract: A method of providing an interpretation service is disclosed. The method includes the steps of receiving an incoming telephone call from a user; forming a plurality of databases, wherein the plurality of databases includes at least one sentence registered to an individual user; receiving at least one user information item via the incoming telephone call; searching at least one of the plurality of databases for at least one sentence corresponding to the at least one information item; outputting, according to the step of searching, a translation, from at least one of the plurality of databases, of the at least one sentence corresponding to the at least one information item; and outputting, in audio on the incoming telephone call, the translation of the at least one sentence corresponding to the at least one information item.
Type: Grant
Filed: July 27, 2004
Date of Patent: December 16, 2008
Assignee: Hitachi, Ltd.
Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
-
Patent number: 7298256
Abstract: Provided is a crisis monitoring system that detects a crisis by identifying a person's emotion from his or her utterance. The system includes an input unit to which an audio signal is inputted, a recording unit which records information necessary to judge a crisis situation, and a control unit which controls the input unit and the recording unit. The recording unit records emotion attribute information, which includes a feature of a specific emotion in an audio signal, and the control unit determines a person's emotion by comparing an audio signal inputted to the input unit with the emotion attribute information, and executes predetermined emergency processing when the determined emotion is judged to indicate a crisis situation.
Type: Grant
Filed: January 27, 2005
Date of Patent: November 20, 2007
Assignee: Hitachi, Ltd.
Inventors: Nobuo Sato, Yasunari Obuchi
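The matching step this abstract describes (comparing input audio features against stored emotion attribute information) can be sketched roughly as a nearest-template classification. Everything below is illustrative: the feature set, template values, threshold emotions, and function names are assumptions for the sketch, not details from the patent.

```python
# Hypothetical sketch: stored "emotion attribute information" as reference
# feature vectors, with an incoming utterance's features matched by distance.
import math

# Illustrative reference features (e.g. mean pitch, energy, speaking rate).
EMOTION_TEMPLATES = {
    "calm":  (120.0, 0.3, 4.0),
    "anger": (220.0, 0.9, 6.5),
    "fear":  (260.0, 0.7, 7.5),
}
CRISIS_EMOTIONS = {"anger", "fear"}

def classify_emotion(features):
    """Return the emotion whose template is nearest in Euclidean distance."""
    return min(
        EMOTION_TEMPLATES,
        key=lambda e: math.dist(features, EMOTION_TEMPLATES[e]),
    )

def is_crisis(features):
    """Judge whether the detected emotion indicates a crisis situation."""
    return classify_emotion(features) in CRISIS_EMOTIONS
```

In this sketch, `is_crisis` would stand in for the control unit's judgment step that precedes the predetermined emergency processing.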
-
Publication number: 20070192103
Abstract: The invention provides a conversational speech analyzer which analyzes whether utterances in a meeting are of interest or concern. Frames are calculated using sound signals obtained from a microphone and a sensor, sensor signals are cut out for each frame, and by calculating the correlation between sensor signals for each frame, an interest level which represents the concern of an audience regarding utterances is calculated, and the meeting is analyzed.
Type: Application
Filed: February 14, 2007
Publication date: August 16, 2007
Inventors: Nobuo Sato, Yasunari Obuchi
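The per-frame correlation computation in this abstract can be sketched minimally as follows. The frame size and the use of Pearson correlation averaged across frames are assumptions for illustration; the patent does not specify them here.

```python
# Minimal sketch: cut two sensor channels into frames, correlate the
# channels frame by frame, and average into an "interest level".
def frames(signal, size):
    """Split a signal into consecutive non-overlapping frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def interest_level(sensor_a, sensor_b, frame_size=4):
    """Average per-frame correlation between two sensor channels."""
    fa, fb = frames(sensor_a, frame_size), frames(sensor_b, frame_size)
    scores = [pearson(x, y) for x, y in zip(fa, fb)]
    return sum(scores) / len(scores) if scores else 0.0
```

Two channels that move together frame by frame yield a level near 1.0; uncorrelated channels yield a level near 0.0.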
-
Patent number: 7130801
Abstract: A speech interpretation server, and a method for providing a speech interpretation service, are disclosed. The server includes a speech input for receiving an inputted speech in a first language from a mobile terminal, a speech recognizer that receives the inputted speech and converts the inputted speech into a prescribed symbol string, a language converter that converts the inputted speech converted into the prescribed symbol string into a second language, wherein the second language is different from the first language, and a speech output that outputs the second language to the mobile terminal.
Type: Grant
Filed: March 20, 2001
Date of Patent: October 31, 2006
Assignee: Hitachi, Ltd.
Inventors: Yoshinori Kitahara, Yasunari Obuchi, Atsuko Koizumi, Seiki Mizutani
-
Publication number: 20060224438
Abstract: The objects of the present invention are, in connection with providing information mainly through images to the general public or to individuals, to detect whether users who can observe an image are actually watching it, and to efficiently provide relevant information by identifying the interests and attributes of those users. To achieve these objects, the voice data acquired by the voice inputting unit, the image data currently being provided, and information added to the image data are compared, and the degree of attention of the subjects is estimated based on the degree of similarity of these data. The language used by the users is estimated by a language identifying device, and information is provided in that language.
Type: Application
Filed: January 31, 2006
Publication date: October 5, 2006
Inventors: Yasunari Obuchi, Nobuo Sato, Akira Date
-
Patent number: 7117223
Abstract: An interpretation service for voice based on sentence template retrieval allows a translation database to be customized without burdening users and enables sentences needed by users to be accurately interpreted. A sentence to be stored in a translation database for customization can be described as a sentence template including a slot which allows words to be replaced. A condition for selecting sentence templates is extracted from a registered user profile (UP). A sentence template matching the condition is retrieved from those stored in the translation database for customization and is registered in a translation database customized for each user. A word extracted from the UP is inserted into the sentence template's slot for registration to a sentence dictionary customized for each user.
Type: Grant
Filed: February 14, 2002
Date of Patent: October 3, 2006
Assignee: Hitachi, Ltd.
Inventors: Atsuko Koizumi, Yoshinori Kitahara, Yasunari Obuchi, Seiki Mizutani
-
Patent number: 7047195
Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suited to the user's intent of speech. Since the translation device searches for sentence templates suited to the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template, and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
Type: Grant
Filed: January 26, 2005
Date of Patent: May 16, 2006
Assignee: Hitachi, Ltd.
Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
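The keyword-retrieval and slot-filling mechanism described in this abstract can be sketched as follows. The tiny template table, the keyword sets, and the sample translations are all illustrative stand-ins, not material from the patent.

```python
# Hypothetical sketch: a spoken keyword retrieves sentence templates, and
# the user's own expression is embedded in the replaceable slot before
# translation.
TEMPLATES = [
    # (keywords, source-language template, target-language template)
    ({"where", "station"}, "Where is the {slot}?", "{slot} wa doko desu ka?"),
    ({"want", "buy"}, "I want to buy {slot}.", "{slot} o kaitai desu."),
]

def retrieve(keyword):
    """Return the templates whose keyword set contains the spoken keyword."""
    return [t for t in TEMPLATES if keyword in t[0]]

def translate(keyword, slot_word):
    """Fill the slot in the first matching template; return (source, target)."""
    matches = retrieve(keyword)
    if not matches:
        return None
    _, src, dst = matches[0]
    return src.format(slot=slot_word), dst.format(slot=slot_word)
```

The slot is what spares the user the word-replacement step: the same template serves "Where is the ticket office?" and "Where is the bus stop?" without re-entry.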
-
Publication number: 20060045289
Abstract: Sound is collected while at least one microphone rotates around a rotational axis, and filter processing is carried out in accordance with the positional information of the microphone at each point.
Type: Application
Filed: March 7, 2005
Publication date: March 2, 2006
Inventors: Toshihiro Kujirai, Masahito Togami, Yasunari Obuchi
-
Publication number: 20050264425
Abstract: Provided is a crisis monitoring system that detects a crisis by identifying a person's emotion from his or her utterance. The system includes an input unit to which an audio signal is inputted, a recording unit which records information necessary to judge a crisis situation, and a control unit which controls the input unit and the recording unit. The recording unit records emotion attribute information, which includes a feature of a specific emotion in an audio signal, and the control unit determines a person's emotion by comparing an audio signal inputted to the input unit with the emotion attribute information, and executes predetermined emergency processing when the determined emotion is judged to indicate a crisis situation.
Type: Application
Filed: January 27, 2005
Publication date: December 1, 2005
Inventors: Nobuo Sato, Yasunari Obuchi
-
Patent number: 6917920
Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suited to the user's intent of speech. Since the translation device searches for sentence templates suited to the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template, and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
Type: Grant
Filed: January 6, 2000
Date of Patent: July 12, 2005
Assignee: Hitachi, Ltd.
Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
-
Publication number: 20050131673
Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suited to the user's intent of speech. Since the translation device searches for sentence templates suited to the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template, and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
Type: Application
Filed: January 26, 2005
Publication date: June 16, 2005
Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
-
Publication number: 20040267538
Abstract: A method of providing an interpretation service, and an interpretation service, are disclosed.
Type: Application
Filed: July 27, 2004
Publication date: December 30, 2004
Applicant: Hitachi, Ltd.
Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
-
Patent number: 6789093
Abstract: A method and apparatus for providing an interpretation service are disclosed. The method includes the steps of receiving an incoming telephone call from a user, forming a plurality of databases, receiving at least one user information item via the incoming telephone call, searching at least one of the plurality of databases for at least one sentence correspondent to the at least one information item, outputting a translation from at least one of the plurality of databases of the at least one sentence correspondent to the at least one information item, and outputting, in audio on the incoming telephone call, the translation. The apparatus includes an interpreter and a registration service. The registration service includes a private information manager that receives an incoming telephone call from a user, wherein the private information manager manages a plurality of databases, wherein the plurality of databases includes at least one database of sentences registered to the individual user.
Type: Grant
Filed: March 20, 2001
Date of Patent: September 7, 2004
Assignee: Hitachi, Ltd.
Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
-
Publication number: 20030033312
Abstract: An interpretation service for voice based on sentence template retrieval allows a translation database to be customized without burdening users and enables sentences needed by users to be accurately interpreted. A sentence to be stored in a translation database for customization can be described as a sentence template including a slot which allows words to be replaced. As a means of customization, an interpretation server maintains a registered user profile (UP): a user registration screen displays fields for a telephone number, name, itinerary, accommodation facility, interests, shopping list, physical condition, etc., and when a user enters and sends the answers, the interpretation server creates a UP. A condition for selecting sentence templates is extracted from the UP. A sentence template matching the condition is retrieved from those stored in the translation database for customization and is registered in a translation database customized for each user.
Type: Application
Filed: February 14, 2002
Publication date: February 13, 2003
Inventors: Atsuko Koizumi, Yoshinori Kitahara, Yasunari Obuchi, Seiki Mizutani
-
Publication number: 20020046206
Abstract: A method of providing an interpretation service, and an interpretation service, are disclosed.
Type: Application
Filed: March 20, 2001
Publication date: April 18, 2002
Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
-
Publication number: 20020046035
Abstract: A speech interpretation server, and a method for providing a speech interpretation service, are disclosed. The server includes a speech input for receiving an inputted speech in a first language from a mobile terminal, a speech recognizer that receives the inputted speech and converts the inputted speech into a prescribed symbol string, a language converter that converts the inputted speech converted into the prescribed symbol string into a second language, wherein the second language is different from the first language, and a speech output that outputs the second language to the mobile terminal.
Type: Application
Filed: March 20, 2001
Publication date: April 18, 2002
Inventors: Yoshinori Kitahara, Yasunari Obuchi, Atsuko Koizumi, Seiki Mizutani
-
Patent number: 5953693
Abstract: A sign language interpretation apparatus for performing sign language recognition and sign language generation produces easily readable sign language computer graphics (CG) animation by preparing sign language word CG patterns on the basis of actual hand motion captured with a glove-type sensor, and by applying correction to the sign language word CG patterns to generate natural sign language CG animation. Further, in the sign language interpretation apparatus, results of translation of inputted sign language or voice language are easily confirmed and modified by the individual input persons, and the translation results are displayed in a combined form desired by the user to realize smooth communication. Also, all candidates obtained as a result of translation are displayed and can be selected easily by the input person with a device such as a mouse.
Type: Grant
Filed: May 9, 1997
Date of Patent: September 14, 1999
Assignee: Hitachi, Ltd.
Inventors: Tomoko Sakiyama, Eiji Oohira, Hirohiko Sagawa, Masaru Ohki, Kazuhiko Sagara, Kiyoshi Inoue, Yasunari Obuchi, Yuji Toda, Masahiro Abe
-
Patent number: 5659764
Abstract: A sign language interpretation apparatus for performing sign language recognition and sign language generation produces easily readable sign language computer graphics (CG) animation by preparing sign language word CG patterns on the basis of actual hand motion captured with a glove-type sensor, and by applying correction to the sign language word CG patterns to generate natural sign language CG animation. Further, in the sign language interpretation apparatus, results of translation of inputted sign language or voice language are easily confirmed and modified by the individual input persons, and the translation results are displayed in a combined form desired by the user to realize smooth communication. Also, all candidates obtained as a result of translation are displayed and can be selected easily by the input person with a device such as a mouse.
Type: Grant
Filed: February 23, 1994
Date of Patent: August 19, 1997
Assignee: Hitachi, Ltd.
Inventors: Tomoko Sakiyama, Eiji Oohira, Hirohiko Sagawa, Masaru Ohki, Kazuhiko Sagara, Kiyoshi Inoue, Yasunari Obuchi, Yuji Toda, Masahiro Abe