Patents by Inventor Yasunari Obuchi

Yasunari Obuchi has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090207131
    Abstract: An acoustic pointing device is disclosed that is capable of performing pointing manipulation without placing any auxiliary equipment on a desk.
    Type: Application
    Filed: November 12, 2008
    Publication date: August 20, 2009
    Inventors: Masahito TOGAMI, Takashi Sumiyoshi, Yasunari Obuchi
  • Publication number: 20090067646
    Abstract: The present invention eliminates an atmosphere that is unsuitable for a given space by controlling that atmosphere. The atmosphere in the space is analyzed based on voice, and if an unsuitable atmosphere is detected, it is corrected by selecting and emitting illumination that creates an atmosphere suitable for the space.
    Type: Application
    Filed: April 13, 2005
    Publication date: March 12, 2009
    Inventors: Nobuo Sato, Yasunari Obuchi
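A minimal sketch, in Python, of the kind of control loop the entry above describes: estimate whether the atmosphere in a room is unsuitable from simple audio features and, if so, switch to a corrective lighting preset. The loudness threshold, the feature choice, and the CALM_LIGHTING preset are hypothetical stand-ins; the patent does not specify them.

```python
import numpy as np

# Hypothetical lighting preset used when the atmosphere needs correcting.
CALM_LIGHTING = {"color": (255, 214, 170), "brightness": 0.6}

def atmosphere_is_unsuitable(audio, loudness_threshold_db=-20.0):
    """Very rough proxy: treat sustained high loudness as an unsuitable
    (e.g. agitated) atmosphere. A real system would use trained classifiers."""
    rms = np.sqrt(np.mean(np.asarray(audio, dtype=np.float64) ** 2))
    level_db = 20.0 * np.log10(rms + 1e-12)
    return level_db > loudness_threshold_db

def control_atmosphere(audio, set_lighting):
    """Analyze the audio and, if the atmosphere looks unsuitable,
    apply the calming lighting preset."""
    if atmosphere_is_unsuitable(audio):
        set_lighting(**CALM_LIGHTING)

# One second of loud synthetic noise stands in for microphone input.
noisy_room = np.random.uniform(-0.8, 0.8, 16000)
control_atmosphere(noisy_room,
                   set_lighting=lambda color, brightness:
                       print(f"lighting -> {color} at {brightness}"))
```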
  • Patent number: 7467085
    Abstract: A method of providing an interpretation service is disclosed. The method includes the steps of receiving an incoming telephone call from a user, forming a plurality of databases, wherein the plurality of databases includes at least one sentence registered to an individual user, receiving at least one user information item via the incoming telephone call, searching at least one of the plurality of databases for at least one sentence correspondent to the at least one information item, outputting, according to the step of searching, a translation, from at least one of the plurality of databases, of the at least one sentence correspondent to the at least one information item, and outputting, in audio on the incoming telephone call, the translation of the at least one sentence correspondent to the at least one information item.
    Type: Grant
    Filed: July 27, 2004
    Date of Patent: December 16, 2008
    Assignee: Hitachi, Ltd.
    Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
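A minimal sketch of the call flow described in the entry above: look up the caller's registered sentences, find one matching the spoken information item, and return its stored translation to be played back on the call. The sample data, the substring matching, and the caller-ID keying are illustrative assumptions, not details from the patent.

```python
# Hypothetical per-user sentence databases keyed by caller ID; the matching
# rule and the data are stand-ins for the patent's registration service.
USER_SENTENCES = {
    "+81-3-0000-0000": [
        {"sentence": "I have a reservation at the Sakura Hotel.",
         "translation": "さくらホテルを予約しています。"},
        {"sentence": "Where is the nearest station?",
         "translation": "一番近い駅はどこですか。"},
    ],
}

def interpret(caller_id, info_item):
    """Search the caller's registered sentences for one matching the spoken
    information item and return its stored translation (which would then be
    synthesized back onto the call)."""
    for entry in USER_SENTENCES.get(caller_id, []):
        if info_item.lower() in entry["sentence"].lower():
            return entry["translation"]
    return None

print(interpret("+81-3-0000-0000", "reservation"))
```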
  • Patent number: 7298256
    Abstract: Provided is a crisis monitoring system that detects a crisis by identifying a person's emotion from his or her utterance. The system includes an input unit to which an audio signal is inputted, a recording unit which records information necessary to judge a crisis situation, and a control unit which controls the input unit and the recording unit. The recording unit records emotion attribute information, which includes a feature of a specific emotion in an audio signal, and the control unit determines a person's emotion by comparing an audio signal inputted to the input unit with the emotion attribute information, and executes a predetermined emergency processing when it is judged that the determined emotion indicates a crisis situation.
    Type: Grant
    Filed: January 27, 2005
    Date of Patent: November 20, 2007
    Assignee: Hitachi, Ltd.
    Inventors: Nobuo Sato, Yasunari Obuchi
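A minimal sketch of the comparison step the entry above describes: audio features extracted from an utterance are compared with stored emotion attribute information, and a predetermined emergency action runs when the nearest emotion is flagged as a crisis. The two-dimensional feature vectors, the emotion set, and the distance measure are invented for illustration.

```python
import numpy as np

# Hypothetical emotion attribute templates: mean feature vectors for a few
# emotions, plus a flag marking which ones indicate a crisis.
EMOTION_TEMPLATES = {
    "calm":  {"features": np.array([0.2, 0.1]), "crisis": False},
    "fear":  {"features": np.array([0.8, 0.9]), "crisis": True},
    "anger": {"features": np.array([0.9, 0.6]), "crisis": True},
}

def detect_emotion(feature_vector):
    """Pick the template nearest to the extracted audio features."""
    return min(EMOTION_TEMPLATES,
               key=lambda name: np.linalg.norm(
                   EMOTION_TEMPLATES[name]["features"] - feature_vector))

def monitor(feature_vector, emergency_action):
    """Run the emergency processing if the detected emotion signals a crisis."""
    emotion = detect_emotion(feature_vector)
    if EMOTION_TEMPLATES[emotion]["crisis"]:
        emergency_action(emotion)

monitor(np.array([0.85, 0.8]), lambda e: print(f"ALERT: {e} detected"))
```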
  • Publication number: 20070192103
    Abstract: The invention provides a conversational speech analyzer which analyzes whether utterances in a meeting are of interest or concern. Frames are calculated using sound signals obtained from a microphone and a sensor, and sensor signals are cut out for each frame. By calculating the correlation between the sensor signals for each frame, an interest level representing the audience's concern with the utterances is calculated and the meeting is analyzed.
    Type: Application
    Filed: February 14, 2007
    Publication date: August 16, 2007
    Inventors: Nobuo Sato, Yasunari Obuchi
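A minimal sketch of the interest-level calculation described in the entry above: both signals are cut into frames and the per-frame correlation between them is averaged into a single score. The frame length and the use of Pearson correlation are assumptions; the patent's actual sensor processing is not reproduced here.

```python
import numpy as np

def interest_level(sound, sensor, frame_len=1600):
    """Cut both signals into frames and use the per-frame correlation
    between them as a rough interest score, averaged over the meeting."""
    n_frames = min(len(sound), len(sensor)) // frame_len
    scores = []
    for i in range(n_frames):
        a = sound[i * frame_len:(i + 1) * frame_len]
        b = sensor[i * frame_len:(i + 1) * frame_len]
        if np.std(a) > 0 and np.std(b) > 0:
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

# Synthetic example: a sensor signal that partly follows the sound signal.
rng = np.random.default_rng(0)
sound = rng.normal(size=16000)
sensor = 0.7 * sound + 0.3 * rng.normal(size=16000)
print(round(interest_level(sound, sensor), 3))
```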
  • Patent number: 7130801
    Abstract: A speech interpretation server, and a method for providing a speech interpretation service, are disclosed. The server includes a speech input for receiving an inputted speech in a first language from a mobile terminal, a speech recognizer that receives the inputted speech and converts the inputted speech into a prescribed symbol string, a language converter that converts the inputted speech converted into the prescribed symbol string into a second language, wherein the second language is different from the first language, and a speech output that outputs the second language to the mobile terminal.
    Type: Grant
    Filed: March 20, 2001
    Date of Patent: October 31, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Yoshinori Kitahara, Yasunari Obuchi, Atsuko Koizumi, Seiki Mizutani
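A minimal sketch of the server pipeline the entry above describes: speech in a first language is recognized into a symbol string, converted into a second language, and output as speech to the terminal. The recognizer, the tiny dictionary, and the synthesis step are stubs standing in for the real components.

```python
# Toy Japanese-to-English dictionary; a placeholder for the language converter.
JA_TO_EN = {"こんにちは": "hello", "ありがとう": "thank you"}

def recognize(audio_utterance):
    """Stub recognizer: assume the 'audio' already is the spoken text."""
    return audio_utterance

def convert(symbol_string, dictionary):
    """Convert the recognized symbol string into the second language."""
    return dictionary.get(symbol_string, "[unknown]")

def synthesize(text):
    """Stub speech output: return the text that would be synthesized."""
    return f"<audio:{text}>"

def interpret(audio_utterance):
    return synthesize(convert(recognize(audio_utterance), JA_TO_EN))

print(interpret("こんにちは"))  # -> <audio:hello>
```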
  • Publication number: 20060224438
    Abstract: The objects of the present invention are, in connection with providing information mainly through images to the general public or to individuals, to detect whether the user or users at a place from which the image can be observed are actually watching it, and to provide useful information efficiently by identifying the interests and attributes of the user or users. To achieve these objects, the voice data acquired by the voice inputting unit is compared with the image data currently being provided and with information added to the image data, and the degree of attention of the subjects is estimated based on the degree of similarity of these data. The language used by the user or users is also estimated by a language identifying device, and information is provided in that language.
    Type: Application
    Filed: January 31, 2006
    Publication date: October 5, 2006
    Inventors: Yasunari Obuchi, Nobuo Sato, Akira Date
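A minimal sketch of the attention estimate the entry above describes: words heard near the display are compared with keywords attached to the content currently being shown, and the overlap serves as a rough degree-of-attention score. The keyword sets and the overlap measure are hypothetical; the language-identification step is omitted.

```python
def attention_score(heard_words, content_keywords):
    """Fraction of the content's keywords that also appear in the speech
    overheard near the display; a stand-in for the similarity measure."""
    heard, keys = set(heard_words), set(content_keywords)
    return len(heard & keys) / len(keys) if keys else 0.0

content = {"title": "Spring travel campaign",
           "keywords": ["travel", "kyoto", "spring", "discount"]}
overheard = ["that", "kyoto", "discount", "looks", "nice"]

score = attention_score(overheard, content["keywords"])
print(f"estimated attention: {score:.2f}")  # 2 of 4 keywords -> 0.50
```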
  • Patent number: 7117223
    Abstract: An interpretation service for voice based on sentence template retrieval allows a translation database to be customized without burdening users and enables sentences needed by users to be accurately interpreted. A sentence to be stored in a translation database for customization can be described as a sentence template including a slot which allows words to be replaced. A condition for selecting sentence templates is extracted from a registered user profile (UP). A sentence template matching the condition is retrieved from those stored in the translation database for customization and is registered in a translation database customized for each user. A word extracted from the UP is inserted into the sentence template's slot for registration to a sentence dictionary customized for each user.
    Type: Grant
    Filed: February 14, 2002
    Date of Patent: October 3, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Atsuko Koizumi, Yoshinori Kitahara, Yasunari Obuchi, Seiki Mizutani
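A minimal sketch of the customization flow described in the entry above: sentence templates whose selection condition matches the user profile (UP) are picked out, and words from the profile are inserted into their slots. The template syntax and profile fields are assumptions, not the patent's actual data format.

```python
# Hypothetical sentence templates: each has a selection condition (a profile
# field that must be present) and a slot to fill from that profile.
TEMPLATES = [
    {"condition": "hotel",   "text": "I have a reservation at {hotel}."},
    {"condition": "allergy", "text": "I am allergic to {allergy}."},
]

def customize(profile):
    """Register personalized sentences by filling template slots with
    words extracted from the user profile."""
    personalized = []
    for template in TEMPLATES:
        if template["condition"] in profile:
            personalized.append(template["text"].format(**profile))
    return personalized

user_profile = {"hotel": "Sakura Hotel", "allergy": "peanuts"}
print(customize(user_profile))
```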
  • Patent number: 7047195
    Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suitable for the user's intent of speech. Since the translation device searches for sentence templates suitable for the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
    Type: Grant
    Filed: January 26, 2005
    Date of Patent: May 16, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
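A minimal sketch of the flow the entry above describes: sentence templates are retrieved with a spoken keyword, the user's expression is inserted into the slot, and the paired translation is produced with the same slot filled. The tiny template table and its translations are illustrative only.

```python
# Hypothetical sentence templates with a slot ({place}) and a paired translation.
TEMPLATES = [
    {"keywords": {"ticket", "buy"},
     "source": "I would like to buy a ticket to {place}.",
     "target": "{place}行きの切符を買いたいです。"},
    {"keywords": {"taxi", "call"},
     "source": "Please call a taxi to {place}.",
     "target": "{place}までタクシーを呼んでください。"},
]

def retrieve(keyword):
    """Find templates whose keyword set contains the spoken keyword."""
    return [t for t in TEMPLATES if keyword in t["keywords"]]

def translate(template, slot_value):
    """Fill the slot in both the source sentence and its translation."""
    return (template["source"].format(place=slot_value),
            template["target"].format(place=slot_value))

candidates = retrieve("ticket")           # templates shown to the user
print(translate(candidates[0], "Kyoto"))  # user fills the slot by voice
```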
  • Publication number: 20060045289
    Abstract: Sound is collected while at least one microphone is rotated around a rotational axis, and filter processing is carried out in accordance with the positional information of the microphone at each point.
    Type: Application
    Filed: March 7, 2005
    Publication date: March 2, 2006
    Inventors: Toshihiro Kujirai, Masahito Togami, Yasunari Obuchi
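A minimal sketch of position-dependent filtering for a rotating microphone, as in the entry above: each block of samples is filtered with coefficients chosen from the microphone's angle during that block. The two FIR filters, the block size, and the front/back switching rule are placeholders; the patent's actual filter design is not reproduced here.

```python
import numpy as np

# Hypothetical FIR filters for two coarse angular regions of the rotation.
FILTERS = {
    "front": np.array([0.5, 0.3, 0.2]),
    "back":  np.array([0.2, 0.3, 0.5]),
}

def filter_for_angle(angle_deg):
    """Select coefficients from the microphone's angular position."""
    return FILTERS["front"] if np.cos(np.radians(angle_deg)) >= 0 else FILTERS["back"]

def process(samples, angles_deg, block=160):
    """Filter each block with coefficients chosen for the mic's angle there."""
    out = []
    for start in range(0, len(samples), block):
        chunk = samples[start:start + block]
        angle = angles_deg[min(start, len(angles_deg) - 1)]
        out.append(np.convolve(chunk, filter_for_angle(angle), mode="same"))
    return np.concatenate(out)

signal = np.random.default_rng(1).normal(size=800)
angles = np.linspace(0.0, 360.0, 800)  # one full rotation during capture
print(process(signal, angles).shape)   # (800,)
```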
  • Publication number: 20050264425
    Abstract: Provided is a crisis monitoring system that detects a crisis by identifying a person's emotion from his or her utterance. The system includes an input unit to which an audio signal is inputted, a recording unit which records information necessary to judge a crisis situation, and a control unit which controls the input unit and the recording unit. The recording unit records emotion attribute information, which includes a feature of a specific emotion in an audio signal, and the control unit determines a person's emotion by comparing an audio signal inputted to the input unit with the emotion attribute information, and executes a predetermined emergency processing when it is judged that the determined emotion indicates a crisis situation.
    Type: Application
    Filed: January 27, 2005
    Publication date: December 1, 2005
    Inventors: Nobuo Sato, Yasunari Obuchi
  • Patent number: 6917920
    Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suitable for the user's intent of speech. Since the translation device searches for sentence templates suitable for the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
    Type: Grant
    Filed: January 6, 2000
    Date of Patent: July 12, 2005
    Assignee: Hitachi, Ltd.
    Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
  • Publication number: 20050131673
    Abstract: A translation device is realized which has both the advantages of a table look-up translation device and those of a machine translation device by leading the user's utterance through a sentence template suitable for the user's intent of speech. Since the translation device searches for sentence templates suitable for the user's intent of speech with an orally inputted keyword and displays the retrieved sentences, the user's utterance can be led. In addition, the user is freed from the troublesome manipulation of replacing a word, since an expression uttered by the user is inserted into a replaceable portion (slot) within the sentence template and the translation device translates the resulting sentence with the replaced expression embedded in the slot.
    Type: Application
    Filed: January 26, 2005
    Publication date: June 16, 2005
    Inventors: Atsuko Koizumi, Hiroyuki Kaji, Yasunari Obuchi, Yoshinori Kitahara
  • Publication number: 20040267538
    Abstract: A method of providing an interpretation service, and an interpretation service, are disclosed.
    Type: Application
    Filed: July 27, 2004
    Publication date: December 30, 2004
    Applicant: Hitachi, Ltd.
    Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
  • Patent number: 6789093
    Abstract: A method and apparatus for providing an interpretation service are disclosed. The method includes the steps of receiving an incoming telephone call from a user, forming a plurality of databases, receiving at least one user information item via the incoming telephone call, searching at least one of the plurality of databases for at least one sentence correspondent to the at least one information item, outputting a translation from at least one of the plurality of databases of the at least one sentence correspondent to the at least one information item, and outputting, in audio on the incoming telephone call, the translation. The apparatus includes an interpreter and a registration service. The registration service includes a private information manager that receives an incoming telephone call from a user, wherein the private information manager manages a plurality of databases, wherein the plurality of databases includes at least one database of sentences registered to the individual user.
    Type: Grant
    Filed: March 20, 2001
    Date of Patent: September 7, 2004
    Assignee: Hitachi, Ltd.
    Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
  • Publication number: 20030033312
    Abstract: An interpretation service for voice based on sentence template retrieval allows a translation database to be customized without burdening users and enables sentences needed by users to be accurately interpreted. A sentence to be stored in a translation database for customization can be described as a sentence template including a slot which allows words to be replaced. As a means for customization, an interpretation server maintains a registered user profile (UP). Namely, a user registration screen displays a telephone number, name, itinerary, accommodation facility, items of interest, shopping list, physical condition, etc. When a user enters an answer and sends it, the interpretation server creates a UP. A condition for selecting sentence templates is extracted from the UP. A sentence template matching the condition is retrieved from those stored in the translation database for customization and is registered in a translation database customized for each user.
    Type: Application
    Filed: February 14, 2002
    Publication date: February 13, 2003
    Inventors: Atsuko Koizumi, Yoshinori Kitahara, Yasunari Obuchi, Seiki Mizutani
  • Publication number: 20020046206
    Abstract: A method of providing an interpretation service, and an interpretation service, are disclosed.
    Type: Application
    Filed: March 20, 2001
    Publication date: April 18, 2002
    Inventors: Yasunari Obuchi, Atsuko Koizumi, Yoshinori Kitahara, Seiki Mizutani
  • Publication number: 20020046035
    Abstract: A speech interpretation server, and a method for providing a speech interpretation service, are disclosed. The server includes a speech input for receiving an inputted speech in a first language from a mobile terminal, a speech recognizer that receives the inputted speech and converts the inputted speech into a prescribed symbol string, a language converter that converts the inputted speech converted into the prescribed symbol string into a second language, wherein the second language is different from the first language, and a speech output that outputs the second language to the mobile terminal.
    Type: Application
    Filed: March 20, 2001
    Publication date: April 18, 2002
    Inventors: Yoshinori Kitahara, Yasunari Obuchi, Atsuko Koizumi, Seiki Mizutani
  • Patent number: 5953693
    Abstract: A sign language interpretation apparatus for performing sign language recognition and sign language generation produces easy-to-read sign language computer graphics (CG) animation by preparing sign language word CG patterns on the basis of actual hand motion captured with a glove-type sensor, and by applying correction to those sign language word CG patterns so that natural sign language CG animation is generated. Further, in the sign language interpretation apparatus, results of translation of inputted sign language or voice language can be confirmed and modified easily by the individual input persons, and the results are displayed in a combined form desired by the user to realize smooth communication. Also, candidates obtained as a result of translation are all displayed and can be selected easily by the input person with a device such as a mouse.
    Type: Grant
    Filed: May 9, 1997
    Date of Patent: September 14, 1999
    Assignee: Hitachi, Ltd.
    Inventors: Tomoko Sakiyama, Eiji Oohira, Hirohiko Sagawa, Masaru Ohki, Kazuhiko Sagara, Kiyoshi Inoue, Yasunari Obuchi, Yuji Toda, Masahiro Abe
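A minimal sketch of one idea from the entry above: all translation candidates for a recognized sign sequence are listed so the input person can select the intended one (for example, with a mouse). The lexicon, the hand-shape labels, and the overlap score are invented for illustration.

```python
def rank_candidates(recognized_signs, lexicon):
    """Return every word whose sign sequence overlaps the recognized input,
    best-scoring candidates first."""
    matches = []
    for word, signs in lexicon.items():
        overlap = len(set(signs) & set(recognized_signs))
        if overlap:
            matches.append((word, overlap / len(signs)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical sign lexicon mapping words to hand-shape/motion labels.
LEXICON = {"thank you": ["flat-hand", "chin", "forward"],
           "good":      ["flat-hand", "chin"],
           "hello":     ["open-hand", "wave"]}

candidates = rank_candidates(["flat-hand", "chin", "forward"], LEXICON)
for word, score in candidates:  # the input person picks one, e.g. with a mouse
    print(f"{word}: {score:.2f}")
```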
  • Patent number: 5659764
    Abstract: A sign language interpretation apparatus for performing sign language recognition and sign language generation produces easy-to-read sign language computer graphics (CG) animation by preparing sign language word CG patterns on the basis of actual hand motion captured with a glove-type sensor, and by applying correction to those sign language word CG patterns so that natural sign language CG animation is generated. Further, in the sign language interpretation apparatus, results of translation of inputted sign language or voice language can be confirmed and modified easily by the individual input persons, and the results are displayed in a combined form desired by the user to realize smooth communication. Also, candidates obtained as a result of translation are all displayed and can be selected easily by the input person with a device such as a mouse.
    Type: Grant
    Filed: February 23, 1994
    Date of Patent: August 19, 1997
    Assignee: Hitachi, Ltd.
    Inventors: Tomoko Sakiyama, Eiji Oohira, Hirohiko Sagawa, Masaru Ohki, Kazuhiko Sagara, Kiyoshi Inoue, Yasunari Obuchi, Yuji Toda, Masahiro Abe