Patents by Inventor Yasuo Okutani

Yasuo Okutani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150067492
    Abstract: An information processing apparatus for representing at least one candidate for a character string to be input, based on at least one input character, includes an acquisition unit configured to obtain situation information, which represents the situation in which the information processing apparatus exists, based on information detected by at least one sensor. The information processing apparatus further includes a prediction unit configured to predict at least one character string to be input based on the at least one character input by a user operation, a storage unit configured to store two or more character strings, each associated with situation information representing the situation in which that character string is used, and a representation unit configured to represent at least one character string predicted by the prediction unit.
    Type: Application
    Filed: August 21, 2014
    Publication date: March 5, 2015
    Inventors: Eriko Ozaki, Makoto Hirota, Shinya Takeichi, Yasuo Okutani, Hiromi Omi
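    A minimal sketch of the kind of context-aware prediction this abstract describes, assuming the situation information reduces to a simple label (for example "home" or "office") and the stored character strings live in an in-memory store; the class and method names are illustrative, not taken from the patent.

        # Context-aware input prediction: stored strings are tagged with the
        # situation in which they were used; prediction filters by the current
        # situation label and by the characters typed so far.
        from collections import defaultdict

        class SituationPredictor:
            def __init__(self):
                # situation label -> character strings used in that situation
                self.store = defaultdict(list)

            def register(self, text, situation):
                """Store a character string together with its situation label."""
                self.store[situation].append(text)

            def predict(self, prefix, situation, limit=5):
                """Return up to `limit` stored strings matching the typed prefix,
                preferring strings used in the current situation."""
                in_situation = [t for t in self.store[situation] if t.startswith(prefix)]
                elsewhere = [t for s, ts in self.store.items() if s != situation
                             for t in ts if t.startswith(prefix)]
                return (in_situation + elsewhere)[:limit]

        predictor = SituationPredictor()
        predictor.register("meeting room 3", "office")
        predictor.register("meet me at the park", "home")
        print(predictor.predict("meet", situation="office"))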
  • Publication number: 20150010214
    Abstract: An information processing device includes an imaging unit, a storage unit that stores face images of at least two persons, including an owner of the information processing device, in association with a communication device owned by each of the at least two persons, an identification unit that identifies, based on a first group of face images and a second group of face images, a person associated with a face image detected from an image including face images of a plurality of persons imaged by the imaging unit, wherein the first group of face images includes the face image of each person detected from the image imaged by the imaging unit and the second group of face images includes the face images stored in the storage unit, and a decision unit that decides on a person as a receiver from among the identified persons, excluding the owner.
    Type: Application
    Filed: July 2, 2014
    Publication date: January 8, 2015
    Inventors: Masayuki Ishizawa, Yasuo Okutani
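    A rough sketch of the receiver-decision logic, assuming face detection and feature extraction are done elsewhere and that matching is a simple cosine-similarity threshold; the feature vectors, threshold, and names are illustrative assumptions, not from the patent.

        # Decide the receiver(s): identify the people in a captured image by
        # matching detected face features against registered ones, then exclude
        # the device owner from the identified persons.
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

        def identify(detected_features, registered, threshold=0.8):
            """Map each detected face feature vector to a registered person."""
            people = []
            for feat in detected_features:
                best = max(registered, key=lambda name: cosine(feat, registered[name]))
                if cosine(feat, registered[best]) >= threshold:
                    people.append(best)
            return people

        def decide_receivers(detected_features, registered, owner):
            return [p for p in identify(detected_features, registered) if p != owner]

        registered = {"owner": [1.0, 0.0], "alice": [0.0, 1.0]}
        detected = [[0.9, 0.1], [0.1, 0.95]]   # features from the captured image
        print(decide_receivers(detected, registered, owner="owner"))   # ['alice']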
  • Patent number: 8848082
    Abstract: An image capturing apparatus according to exemplary embodiments of the present invention includes a viewfinder and a display, switches between a display mode for displaying an image on the viewfinder and a display mode for displaying an image on the display, sets processing of speech input to a close-talking mode when an image is displayed on the viewfinder, sets processing of speech input to a non-close-talking mode when an image is displayed on the display, and inputs by speech a control command that has been set in advance according to the speech input mode that has been set.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: September 30, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hiroki Yamamoto, Yasuo Okutani
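    A minimal sketch of the coupling between the active display and the speech-input mode that the abstract describes; the enum and function names are illustrative.

        # Looking through the viewfinder implies the mouth is near the camera
        # microphone (close-talking); using the rear display implies it is not.
        from enum import Enum

        class DisplayMode(Enum):
            VIEWFINDER = "viewfinder"
            DISPLAY = "display"

        class SpeechMode(Enum):
            CLOSE_TALKING = "close-talking"
            NON_CLOSE_TALKING = "non-close-talking"

        def speech_mode_for(display_mode):
            return (SpeechMode.CLOSE_TALKING if display_mode is DisplayMode.VIEWFINDER
                    else SpeechMode.NON_CLOSE_TALKING)

        print(speech_mode_for(DisplayMode.VIEWFINDER))   # SpeechMode.CLOSE_TALKING
        print(speech_mode_for(DisplayMode.DISPLAY))      # SpeechMode.NON_CLOSE_TALKING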
  • Publication number: 20140108014
    Abstract: The present invention makes it possible to display, with a simple operation, a screen that includes the voice output position, even when another portion of the text that does not include the voice output position has been displayed by manipulation during voice output of the text. To this end, when an input unit 101 detects a user operation while a text is being output as voice, a display control unit executes processing corresponding to that operation, such as scrolling, and displays the designated part of the text. Thereafter, when the input unit 101 detects a further operation and the detected operation and the immediately previous operation are opposite to each other, the portion of the text that includes the current voice output position is displayed.
    Type: Application
    Filed: October 1, 2013
    Publication date: April 17, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Tomonori Tanaka, Yasuo Okutani
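    A small sketch of the "opposite operation" behaviour described above, assuming the operations are simple line scrolls; the class and attribute names are illustrative.

        # While text-to-speech is running, manual scrolling moves the view away
        # from the sentence being read; the opposite gesture right afterwards
        # jumps the view back to the current voice output position.
        OPPOSITES = {"scroll_up": "scroll_down", "scroll_down": "scroll_up"}

        class TextView:
            def __init__(self, speech_position=0):
                self.view_position = 0
                self.speech_position = speech_position   # line currently being read
                self.last_operation = None

            def on_operation(self, operation):
                if self.last_operation and OPPOSITES.get(operation) == self.last_operation:
                    # Opposite of the previous gesture: show the spoken text again.
                    self.view_position = self.speech_position
                elif operation == "scroll_down":
                    self.view_position += 1
                elif operation == "scroll_up":
                    self.view_position -= 1
                self.last_operation = operation

        view = TextView(speech_position=42)
        view.on_operation("scroll_down")   # user scrolls away
        view.on_operation("scroll_up")     # opposite gesture -> back to line 42
        print(view.view_position)          # 42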
  • Publication number: 20130329264
    Abstract: A reading apparatus includes a first generation unit, a determination unit, a second generation unit, and an association unit. The first generation unit generates data of a first target object obtained by using an image capture unit attached above a reading platen to read the first target object placed in a reading area of the reading platen. The determination unit determines, in a state where the first target object is placed in the reading area, whether a second target object has been placed in the reading area. The second generation unit generates data of a second target object obtained by using the image capture unit to read the second target object. The association unit associates, in response to determining that the second target object has been placed in the reading area, the data of the first target object with the data of the second target object.
    Type: Application
    Filed: May 30, 2013
    Publication date: December 12, 2013
    Inventor: Yasuo Okutani
  • Patent number: 8170874
    Abstract: A speech recognition apparatus which improves the sound quality of speech output as a speech recognition result is provided. The speech recognition apparatus includes a recognition unit, which recognizes speech based on a recognition dictionary, and a registration unit, which registers a dictionary entry of a new recognition word in the recognition dictionary. The recognition unit includes a generation unit, which generates a dictionary entry including speech of the new recognition word and feature parameters of that speech, and a modification unit, which makes a modification for improving the sound quality of the speech included in the dictionary entry generated by the generation unit. The recognition unit also includes a speech output unit, which outputs speech that is included in the dictionary entry corresponding to the recognition result of input speech and has been modified by the modification unit.
    Type: Grant
    Filed: July 1, 2008
    Date of Patent: May 1, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masayuki Yamada, Toshiaki Fukada, Yasuo Okutani, Michio Aizawa
  • Publication number: 20120050194
    Abstract: An information processing apparatus may include a detection unit and a switching unit. The detection unit detects an amount of change in the position of an object of interest during a predetermined time period. The switching unit uses the detected amount of change to switch between a first mode, in which a first operation position on a display surface is determined based on the position and direction of the object of interest, and a second mode, in which a second operation position on the display surface is determined based on a position where the object of interest is in contact with the display surface.
    Type: Application
    Filed: August 19, 2011
    Publication date: March 1, 2012
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Tomonori Tanaka, Yasuo Okutani, Toshiaki Fukada
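    A minimal sketch of the mode switching, assuming the detected amount of change is the distance the object moves during the observation window; which mode corresponds to fast movement is a guess here, not something the abstract states.

        # Fast movement -> interpret the object as a remote pointer (position and
        # direction projected onto the screen); near-still -> direct touch.
        def choose_mode(previous_pos, current_pos, threshold=5.0):
            """previous_pos / current_pos are (x, y) samples one period apart."""
            dx = current_pos[0] - previous_pos[0]
            dy = current_pos[1] - previous_pos[1]
            change = (dx * dx + dy * dy) ** 0.5
            return "pointing" if change > threshold else "touch"

        print(choose_mode((10, 10), (60, 40)))   # large change -> 'pointing'
        print(choose_mode((10, 10), (11, 10)))   # small change -> 'touch'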
  • Patent number: 8041569
    Abstract: A language processing unit identifies a word by performing language analysis on a text supplied from a text holding unit. A synthesis selection unit selects speech synthesis processing performed by a rule-based synthesis unit or speech synthesis processing performed by a pre-recorded-speech-based synthesis unit for a word of interest extracted from the language analysis result. The selected rule-based synthesis unit or pre-recorded-speech-based synthesis unit executes speech synthesis processing for the word of interest.
    Type: Grant
    Filed: February 22, 2008
    Date of Patent: October 18, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuo Okutani, Michio Aizawa, Toshiaki Fukada
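    A toy sketch of the per-word choice between pre-recorded and rule-based synthesis; the selection criterion used here (presence in a recorded lexicon) is an assumption, since the abstract does not say how the selection unit decides.

        # Hybrid text-to-speech: per word, prefer a pre-recorded waveform when one
        # exists and fall back to rule-based synthesis otherwise.
        RECORDED = {"hello": "hello.wav", "goodbye": "goodbye.wav"}   # recorded lexicon

        def synthesize_word(word):
            if word in RECORDED:
                return ("pre-recorded", RECORDED[word])
            return ("rule-based", f"synthesized<{word}>")

        def synthesize(text):
            return [synthesize_word(w) for w in text.lower().split()]

        print(synthesize("Hello brave new world"))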
  • Publication number: 20100134677
    Abstract: An image capturing apparatus according to exemplary embodiments of the present invention includes a viewfinder and a display, switches between a display mode for displaying an image on the viewfinder and a display mode for displaying an image on the display, sets processing of speech input to a close-talking mode when an image is displayed on the viewfinder, sets processing of speech input to a non-close-talking mode when an image is displayed on the display, and inputs by speech a control command that has been set in advance according to the speech input mode that has been set.
    Type: Application
    Filed: November 24, 2009
    Publication date: June 3, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Hiroki Yamamoto, Yasuo Okutani
  • Patent number: 7576786
    Abstract: In cases where at least one item of sound information has been associated with at least one image, at least one desired item of sound information is selected and the sound information is played back in a prescribed order. Accordingly, in an information processing apparatus, a playback sequence decision unit (103) reads in image data, as well as sound data that has been assigned within the image data, from an image/sound data storage unit (107), generates a still image in which the positions at which sound data has been recorded are denoted on the image, and displays the generated still image on an image display unit (106). A sound data specifying unit (102) searches the image/sound data storage unit (107) for sound data that has been associated with the interior of an image area specified by an input from a user. When applicable sound data is found to exist, the playback sequence decision unit (103) decides the order in which the applicable sound data is to be played back.
    Type: Grant
    Filed: February 7, 2005
    Date of Patent: August 18, 2009
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuo Okutani, Yasuhiro Komori
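    A small sketch of selecting the sound data associated with a user-specified image area and deciding a playback order, assuming each clip is tagged with a pixel position and the prescribed order is top-to-bottom, then left-to-right; the actual ordering rule is not given in the abstract.

        # Sound clips are attached to positions inside an image; clips falling
        # inside the user-specified rectangle are collected and ordered.
        sounds = [
            {"clip": "wave.wav",  "pos": (120, 300)},
            {"clip": "birds.wav", "pos": (40, 80)},
            {"clip": "wind.wav",  "pos": (200, 60)},
        ]

        def sounds_in_area(sounds, area):
            """area = (x_min, y_min, x_max, y_max) specified by the user."""
            x0, y0, x1, y1 = area
            hits = [s for s in sounds
                    if x0 <= s["pos"][0] <= x1 and y0 <= s["pos"][1] <= y1]
            return sorted(hits, key=lambda s: (s["pos"][1], s["pos"][0]))

        for s in sounds_in_area(sounds, (0, 0, 250, 100)):
            print("play", s["clip"])          # wind.wav, then birds.wav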
  • Publication number: 20090012790
    Abstract: A speech recognition apparatus which improves the sound quality of speech output as a speech recognition result is provided. The speech recognition apparatus includes a recognition unit, which recognizes speech based on a recognition dictionary, and a registration unit, which registers a dictionary entry of a new recognition word in the recognition dictionary. The recognition unit includes a generation unit, which generates a dictionary entry including speech of the new recognition word and feature parameters of that speech, and a modification unit, which makes a modification for improving the sound quality of the speech included in the dictionary entry generated by the generation unit. The recognition unit also includes a speech output unit, which outputs speech that is included in the dictionary entry corresponding to the recognition result of input speech and has been modified by the modification unit.
    Type: Application
    Filed: July 1, 2008
    Publication date: January 8, 2009
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Masayuki Yamada, Toshiaki Fukada, Yasuo Okutani, Michio Aizawa
  • Publication number: 20080228487
    Abstract: A language processing unit identifies a word by performing language analysis on a text supplied from a text holding unit. A synthesis selection unit selects speech synthesis processing performed by a rule-based synthesis unit or speech synthesis processing performed by a pre-recorded-speech-based synthesis unit for a word of interest extracted from the language analysis result. The selected rule-based synthesis unit or pre-recorded-speech-based synthesis unit executes speech synthesis processing for the word of interest.
    Type: Application
    Filed: February 22, 2008
    Publication date: September 18, 2008
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yasuo Okutani, Michio Aizawa, Toshiaki Fukada
  • Patent number: 7418384
    Abstract: A data input device for inputting numeric data by voice includes a range prediction part, a history holding part, a speech recognition part, a recognition result holding part, a comparison part, a presentation part, and a result storing part. The range prediction part estimates a range of a value expected to be input on the basis of meter-reading history data held in the history holding part. The speech recognition part recognizes speech representing a meter reading and stores the recognition result in the recognition result holding part. The comparison part determines whether or not the meter reading for this month represented by the data stored in the recognition result holding part is within the prediction range. If the meter reading for this month is within the prediction range, the presentation part presents the recognition result to a user, and the speech recognition result is stored in the result storing part.
    Type: Grant
    Filed: October 20, 2003
    Date of Patent: August 26, 2008
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasuo Okutani
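    A minimal sketch of the range prediction and check, assuming the prediction range is derived from the average monthly increase in the reading history with a fixed margin; the actual prediction rule is not specified in the abstract.

        # Validate a spoken meter reading: estimate the expected increase from
        # past readings and accept the recognized value only if it falls in the
        # predicted range; otherwise it would be flagged for confirmation.
        def predicted_range(history, margin=0.5):
            """history: past cumulative meter readings, oldest first."""
            increases = [b - a for a, b in zip(history, history[1:])]
            avg = sum(increases) / len(increases)
            last = history[-1]
            return last + avg * (1 - margin), last + avg * (1 + margin)

        def check_recognized(history, recognized_value):
            low, high = predicted_range(history)
            return low <= recognized_value <= high

        history = [1200, 1260, 1315, 1380]        # previous monthly readings
        print(predicted_range(history))            # (1410.0, 1470.0)
        print(check_recognized(history, 1442))     # True  -> present to the user
        print(check_recognized(history, 1900))     # False -> ask for re-entry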
  • Publication number: 20080177548
    Abstract: A speech synthesis method includes selecting a segment, determining whether to conduct prosodic modification on the selected segment, calculating, based on a result of the determination, a target value of prosodic modification for a segment on which prosodic modification has been determined to be conducted, conducting prosodic modification such that the prosody of that segment takes the target value of prosodic modification, and concatenating the segment on which prosodic modification has been conducted or a segment on which prosodic modification has been determined not to be conducted.
    Type: Application
    Filed: May 29, 2006
    Publication date: July 24, 2008
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Masayuki Yamada, Yasuo Okutani, Michio Aizawa
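    A toy sketch of the selective prosodic modification, using segment duration as the only prosodic parameter and a relative-deviation threshold as the modification criterion; both simplifications are assumptions, not from the patent.

        # Only segments whose duration deviates too far from the target prosody
        # are modified (set to the target value) before concatenation.
        def needs_modification(segment, target, tolerance=0.15):
            return abs(segment["duration"] - target["duration"]) / target["duration"] > tolerance

        def modify(segment, target):
            modified = dict(segment)
            modified["duration"] = target["duration"]   # target value of the modification
            return modified

        def concatenate(segments, targets):
            return [modify(s, t) if needs_modification(s, t) else s
                    for s, t in zip(segments, targets)]

        segments = [{"phone": "a", "duration": 0.11}, {"phone": "i", "duration": 0.20}]
        targets  = [{"duration": 0.10},               {"duration": 0.12}]
        print(concatenate(segments, targets))   # only the second segment is modified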
  • Publication number: 20080159584
    Abstract: An information processing apparatus includes an image acquisition unit configured to acquire image data, an output unit configured to select a question from a plurality of questions stored on a storage element and output the selected question, a response acquisition unit configured to acquire response contents responding to the question, and a storage unit configured to relate the image data acquired by the image acquisition unit to the response contents acquired by the response acquisition unit and to store the related image data and response contents.
    Type: Application
    Filed: February 9, 2007
    Publication date: July 3, 2008
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Kazue Kaneko, Tsuyoshi Yagisawa, Yasuo Okutani
  • Patent number: 7318033
    Abstract: Even when a copying machine with a voice guidance function is used, a problem of wastefully copying wrong documents or documents with missing pages remains unsolved for visually impaired persons. To this end, a document image is read, character strings on the read document image are recognized, a character string indicating the contents of the document is chosen from the recognized character strings, the chosen character string is converted into speech, and synthetic speech is output.
    Type: Grant
    Filed: July 28, 2003
    Date of Patent: January 8, 2008
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuo Okutani, Tetsuo Kosaka
  • Publication number: 20070124148
    Abstract: A permission portion to permit application of fast-forward and an inhibition portion to inhibit application of fast-forward are discriminated in text. Upon speech synthesis of the text in a fast-forward setting, speech synthesis in the fast-forward setting is performed on the permission portion. Further, upon speech synthesis of the text in the fast-forward setting, regarding the inhibition portion, speech synthesis is performed in a manner different from that of the speech synthesis in the fast-forward setting, e.g., at a normal speaking rate.
    Type: Application
    Filed: November 16, 2006
    Publication date: May 31, 2007
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yasuo Okutani, Masayuki Yamada
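    A minimal sketch of rate control with fast-forward permission and inhibition portions; how the portions are marked up in real text is left out, and the rates are illustrative.

        # Speak at a fast-forward rate, except for portions flagged as
        # fast-forward-inhibited, which keep the normal speaking rate.
        NORMAL_RATE = 1.0
        FAST_RATE = 2.0

        def speaking_plan(portions, fast_forward=True):
            """portions: (text, ff_allowed) pairs; returns (text, rate) pairs."""
            return [(text, FAST_RATE if (fast_forward and ff_allowed) else NORMAL_RATE)
                    for text, ff_allowed in portions]

        portions = [
            ("The quarterly report is attached.", True),
            ("The total comes to 4,280 yen.", False),   # inhibit fast-forward here
        ]
        for text, rate in speaking_plan(portions):
            print(f"rate {rate}x: {text}")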
  • Publication number: 20060242331
    Abstract: An information processing apparatus is provided which includes a holding unit configured to hold a setting of the information processing apparatus; a resetting unit configured to reset the setting; a detecting unit configured to detect whether the resetting unit has been operated for a predetermined period of time; and a setting unit configured to set a speech mode when the resetting unit has been operated for the predetermined period of time.
    Type: Application
    Filed: April 14, 2006
    Publication date: October 26, 2006
    Applicant: Canon Kabushiki Kaisha
    Inventors: Masayuki Yamada, Yasuo Okutani, Satoshi Ookuma, Tsuyoshi Yagisawa, Kouhei Awaya
  • Publication number: 20060200352
    Abstract: In a phoneme-selection-type speech synthesis apparatus, deterioration of the sound quality when a suitable phoneme is not found is prevented without changing the input sentence. A plurality of pieces of reading prosody information are obtained. The cost when an optimum phoneme sequence is selected is calculated with respect to each of the plurality of pieces of reading prosody information. Speech is synthesized with respect to the reading prosody information for which the cost is minimized.
    Type: Application
    Filed: February 15, 2006
    Publication date: September 7, 2006
    Applicant: Canon Kabushiki Kaisha
    Inventors: Michio Aizawa, Yasuo Okutani
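    A toy sketch of choosing among several reading prosody candidates by unit-selection cost; the cost function used here (counting phonemes missing from the segment inventory) is a stand-in for the real selection cost, which the abstract does not define.

        # Generate several reading/prosody candidates for one sentence, score the
        # best phoneme sequence for each, and synthesize the cheapest candidate.
        def selection_cost(phonemes, inventory):
            return sum(0 if p in inventory else 1 for p in phonemes)

        def best_candidate(candidates, inventory):
            return min(candidates, key=lambda c: selection_cost(c["phonemes"], inventory))

        inventory = {"k", "o", "n", "i", "ch", "w", "a"}
        candidates = [
            {"reading": "konnichiwa", "phonemes": ["k", "o", "n", "n", "i", "ch", "i", "w", "a"]},
            {"reading": "konnichiha", "phonemes": ["k", "o", "n", "n", "i", "ch", "i", "h", "a"]},
        ]
        print(best_candidate(candidates, inventory)["reading"])   # 'konnichiwa'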
  • Patent number: 7054814
    Abstract: A speech segment search unit searches a speech database for speech segments that satisfy a phonetic environment, and an HMM learning unit computes the HMMs of phonemes on the basis of the search result. A segment recognition unit performs segment recognition of speech segments on the basis of the computed HMMs of the phonemes, and when the phoneme of the segment recognition result matches the phoneme of the source speech segment, that speech segment is registered in a segment dictionary.
    Type: Grant
    Filed: March 29, 2001
    Date of Patent: May 30, 2006
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuo Okutani, Yasuhiro Komori, Toshiaki Fukada
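    A simplified sketch of the dictionary-building loop, with a per-phoneme mean feature vector standing in for the HMMs of the abstract; the feature values and the distance measure are illustrative assumptions.

        # Collect candidate segments per phoneme, train a simple per-phoneme
        # model, re-recognize each candidate, and register it in the segment
        # dictionary only if the recognized phoneme matches its source label.
        def train_models(segments):
            """segments: list of (phoneme_label, feature_vector)."""
            sums, counts = {}, {}
            for label, feat in segments:
                acc = sums.setdefault(label, [0.0] * len(feat))
                sums[label] = [a + f for a, f in zip(acc, feat)]
                counts[label] = counts.get(label, 0) + 1
            return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

        def recognize(feat, models):
            dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
            return min(models, key=lambda label: dist(feat, models[label]))

        def build_dictionary(segments):
            models = train_models(segments)
            return [(label, feat) for label, feat in segments
                    if recognize(feat, models) == label]

        segments = [("a", [1.0, 0.1]), ("a", [0.9, 0.2]),
                    ("i", [0.1, 1.0]), ("a", [0.2, 0.9])]
        print(build_dictionary(segments))   # the mislabelled "a" segment is rejected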