Patents by Inventor Yasuharu Asano

Yasuharu Asano has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090089833
    Abstract: An information processing method includes the steps of: obtaining biometric information expressing biometric responses exhibited by a user during content playback; obtaining metadata for each content item for which biometric information is obtained; identifying, within the attributes included in the obtained metadata, the attributes linked to the biometric information and, for content whose identified attribute values differ but for which the user exhibits similar biometric responses during playback, identifying those differing values of the attribute as values that do not need to be distinguished; reconfiguring a profile by merging the information in the user profile relating to the values identified as not needing to be distinguished; identifying recommended content based on the reconfigured profile; and presenting the identified recommended content information to the user.
    Type: Application
    Filed: December 1, 2008
    Publication date: April 2, 2009
    Inventors: Mari Saito, Noriyuki Yamamoto, Mitsuhiro Miyazaki, Yasuharu Asano, Tatsuki Kashitani
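The merging step above can be sketched as follows. This is only an illustrative reconstruction, not the claimed implementation; the profile and response structures and the similarity threshold are assumptions made for the example.

```python
def merge_profile(profile, responses, threshold=0.1):
    """profile: {attribute_value: weight}; responses: {attribute_value: biometric score}.

    Attribute values whose playback elicited near-identical biometric
    responses are treated as one value, and their weights are summed.
    """
    merged = {}
    used = set()
    values = list(profile)
    for i, a in enumerate(values):
        if a in used:
            continue
        group = [a]
        for b in values[i + 1:]:
            if b not in used and abs(responses[a] - responses[b]) < threshold:
                group.append(b)
                used.add(b)
        merged["+".join(group)] = sum(profile[v] for v in group)
    return merged
```

With this sketch, two genres that provoke nearly the same response collapse into a single profile entry, so the recommendation step no longer distinguishes them.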
  • Publication number: 20090055006
    Abstract: An information processing apparatus performing a process for generating a playlist defining a reproduction sequence of contents includes: a model information holding part storing therein probability models corresponding to a time series pattern of content feature volumes being feature information about contents; a content feature extracting part acquiring a content feature volume corresponding to each of contents to be reproduced; a playlist generating part comparing a time series pattern of the content feature volumes extracted in the content feature extracting part corresponding to each of permutation patterns of a reproduction sequence of contents to be reproduced with a probability model held in the model information holding part, and generating a playlist in which a reproduction sequence of contents is set in accordance with a time series pattern of content feature volumes most analogous to the probability model; and a content reproducing part reproducing contents in accordance with the generated playlist.
    Type: Application
    Filed: August 8, 2008
    Publication date: February 26, 2009
    Inventor: Yasuharu Asano
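A toy version of the matching idea above, assuming one scalar feature per track and an independent per-slot Gaussian model (both are simplifications invented for this sketch, not the patent's actual model): score every permutation of the tracks against the model and keep the best-matching order.

```python
import itertools
import math

def best_playlist(features, model):
    """features: {track: scalar feature volume}; model: list of (mean, std) per slot.

    Scores each permutation's feature time series against the per-slot
    Gaussians and returns the ordering with the highest log-likelihood.
    """
    def loglik(order):
        return sum(
            -((features[t] - m) ** 2) / (2 * s * s) - math.log(s)
            for t, (m, s) in zip(order, model)
        )
    return max(itertools.permutations(features), key=loglik)
```

Exhaustive permutation scoring is factorial in the number of tracks, so a real system would need pruning or a sequential model; the sketch only shows the "most analogous pattern wins" principle.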
  • Patent number: 7321853
    Abstract: The present invention relates to a speech recognition apparatus and a speech recognition method for speech recognition with improved accuracy. A distance calculator 47 determines the distance from a microphone 21 to an uttering user. Data indicating the determined distance is supplied to a speech recognition unit 41B. The speech recognition unit 41B has plural sets of acoustic models produced from speech data obtained by capturing speeches uttered at various distances. From those sets of acoustic models, the speech recognition unit 41B selects a set of acoustic models produced from speech data uttered at a distance closest to the distance determined by the distance calculator 47, and the speech recognition unit 41B performs speech recognition using the selected set of acoustic models.
    Type: Grant
    Filed: February 24, 2006
    Date of Patent: January 22, 2008
    Assignee: Sony Corporation
    Inventor: Yasuharu Asano
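The model-selection step is simple enough to sketch directly; representing the model sets as a dictionary keyed by training distance is an assumption for the example.

```python
def select_acoustic_models(model_sets, measured_distance):
    """model_sets: {training_distance_m: acoustic model set}.

    Returns the set trained at the distance closest to the measured
    microphone-to-speaker distance.
    """
    d = min(model_sets, key=lambda t: abs(t - measured_distance))
    return model_sets[d]
```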
  • Publication number: 20080010060
    Abstract: Disclosed herein is an information processing apparatus configured to generate metadata corresponding to content, including a data output section, a data input section, a control section, an interaction execution section, an interaction metadata generation section, and a content control section.
    Type: Application
    Filed: June 8, 2007
    Publication date: January 10, 2008
    Inventors: Yasuharu Asano, Ugo Profio, Keiichi Yamada
  • Patent number: 7249017
    Abstract: In order to prevent degradation of speech recognition accuracy due to an unknown word, a dictionary database has stored therein a word dictionary in which are stored, in addition to words for the objects of speech recognition, suffixes, which are sound elements and a sound element sequence forming the unknown word, for classifying the unknown word by the part of speech thereof. Based on such a word dictionary, a matching section connects the acoustic models of a sound model database, and calculates the score using the series of features output by a feature extraction section on the basis of the connected acoustic model. Then, the matching section selects a series of the words, which represents the speech recognition result, on the basis of the score.
    Type: Grant
    Filed: February 24, 2004
    Date of Patent: July 24, 2007
    Assignee: Sony Corporation
    Inventors: Helmut Lucke, Katsuki Minamino, Yasuharu Asano, Hiroaki Ogawa
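A minimal illustration of classifying an unknown word by suffix. The patent works on sound-element sequences; the sketch below substitutes orthographic suffixes, and the suffix table is invented for the example.

```python
# Hypothetical suffix -> part-of-speech table (illustrative only).
SUFFIX_POS = {
    "tion": "noun",
    "ness": "noun",
    "ize": "verb",
    "ly": "adverb",
}

def classify_unknown(word):
    """Classify an out-of-vocabulary word by its longest matching suffix."""
    for n in range(len(word), 0, -1):
        pos = SUFFIX_POS.get(word[-n:])
        if pos:
            return pos
    return "unknown"
```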
  • Publication number: 20070160294
    Abstract: The present invention provides an image processing apparatus, including extraction means, parameter retaining means, context retaining means, and decision means. The extraction means is configured to extract a characteristic amount of a region in which a recognition object may possibly be included from within an image of a processing object. The parameter retaining means is configured to retain a parameter regarding the recognition object. The context retaining means is configured to retain a context regarding the recognition object. The decision means is configured to decide based on the characteristic amount extracted by the extraction means, the parameter retained in the parameter retaining means, and a result of arithmetic operation performed using the context retained in the context retaining means whether or not an image in the region is the recognition object.
    Type: Application
    Filed: December 13, 2006
    Publication date: July 12, 2007
    Inventor: Yasuharu Asano
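The decision step above fuses appearance evidence with context. One way to sketch it, assuming cosine similarity between the extracted characteristic amount and the retained parameter, weighted by a scalar context prior (both choices are assumptions for illustration):

```python
import math

def decide(features, parameters, context_prior, threshold=0.5):
    """features / parameters: equal-length vectors; context_prior in [0, 1].

    Returns True when appearance similarity, weighted by how plausible
    the object is in the current context, clears the threshold.
    """
    dot = sum(f * p for f, p in zip(features, parameters))
    norm = (math.sqrt(sum(f * f for f in features))
            * math.sqrt(sum(p * p for p in parameters)))
    similarity = dot / norm if norm else 0.0
    return similarity * context_prior >= threshold
```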
  • Patent number: 7240002
    Abstract: The present invention provides a speech recognition apparatus having high speech recognition performance and capable of performing speech recognition in a highly efficient manner. A matching unit 14 calculates the scores of words selected by a preliminary word selector 13 and determines a candidate for a speech recognition result on the basis of the calculated scores. A control unit 11 produces word connection relationships among words included in a word series employed as a candidate for the speech recognition result and stores them into a word connection information storage unit 16. A reevaluation unit 15 corrects the word connection relationships one by one. On the basis of the corrected word connection relationships, the control unit 11 determines the speech recognition result. A word connection managing unit 21 limits the times at which a boundary between words represented by the word connection relationships is allowed to be located.
    Type: Grant
    Filed: November 7, 2001
    Date of Patent: July 3, 2007
    Assignee: Sony Corporation
    Inventors: Katsuki Minamino, Yasuharu Asano, Hiroaki Ogawa, Helmut Lucke
  • Publication number: 20070033050
    Abstract: An information processing apparatus includes an obtaining unit that obtains meta-information concerning content; a predicting unit that predicts an emotion of a user who is viewing the content from the meta-information obtained by the obtaining unit; and a recognizing unit that recognizes an emotion of the user using the emotion predicted by the predicting unit and user information acquired from the user.
    Type: Application
    Filed: August 3, 2006
    Publication date: February 8, 2007
    Inventors: Yasuharu Asano, Noriyuki Yamamoto
  • Publication number: 20060217962
    Abstract: An information processing device includes a referring unit configured to refer to a table in which a characteristic of each piece of first information is expressed as distribution of model parameters in a plurality of semantic classes, in units of pieces of the first information; an obtaining unit configured to obtain second information to be searched for; a calculating unit configured to calculate similarities between the second information and the respective pieces of the first information; and a first reading unit configured to read the pieces of the first information from the table in descending order of the similarity.
    Type: Application
    Filed: March 6, 2006
    Publication date: September 28, 2006
    Inventor: Yasuharu Asano
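Reading each piece of first information as a distribution over semantic classes, the descending-similarity read-out might look like the sketch below; cosine similarity is an assumed choice of measure, since the abstract leaves the similarity calculation open.

```python
import math

def rank_by_similarity(table, query):
    """table: {item: distribution over semantic classes}; query: distribution.

    Returns the items sorted in descending order of similarity to the query.
    """
    def sim(dist):
        dot = sum(a * b for a, b in zip(dist, query))
        na = math.sqrt(sum(a * a for a in dist))
        nb = math.sqrt(sum(b * b for b in query))
        return dot / (na * nb) if na and nb else 0.0
    return sorted(table, key=lambda k: sim(table[k]), reverse=True)
```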
  • Publication number: 20060143006
    Abstract: The present invention relates to a speech recognition apparatus and a speech recognition method for speech recognition with improved accuracy. A distance calculator 47 determines the distance from a microphone 21 to an uttering user. Data indicating the determined distance is supplied to a speech recognition unit 41B. The speech recognition unit 41B has plural sets of acoustic models produced from speech data obtained by capturing speeches uttered at various distances. From those sets of acoustic models, the speech recognition unit 41B selects a set of acoustic models produced from speech data uttered at a distance closest to the distance determined by the distance calculator 47, and the speech recognition unit 41B performs speech recognition using the selected set of acoustic models.
    Type: Application
    Filed: February 24, 2006
    Publication date: June 29, 2006
    Inventor: Yasuharu Asano
  • Patent number: 7065490
    Abstract: A voice synthesizing unit performs voice synthesizing processing based on the state of emotion of a robot at an emotion/instinct model unit. For example, in the event that the emotion state of the robot represents “not angry”, synthesized sound of “What is it?” is generated at the voice synthesizing unit. On the other hand, in the event that the emotion state of the robot represents “angry”, synthesized sound of “Yeah, what?” is generated at the voice synthesizing unit, to express the anger. Thus, a robot with a high entertainment nature is provided.
    Type: Grant
    Filed: November 28, 2000
    Date of Patent: June 20, 2006
    Assignee: Sony Corporation
    Inventors: Yasuharu Asano, Hongchang Pao
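The emotion-conditioned wording choice reduces to a lookup; the table below simply restates the two examples given in the abstract, with the "calm" key and the fallback behavior being assumptions for the sketch.

```python
# Emotion state -> reply wording; entries taken from the abstract's examples.
RESPONSES = {
    "calm": "What is it?",
    "angry": "Yeah, what?",
}

def synthesize_reply(emotion_state):
    """Select response wording from the robot's current emotion state,
    falling back to the neutral reply for unmodeled states."""
    return RESPONSES.get(emotion_state, RESPONSES["calm"])
```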
  • Patent number: 7031917
    Abstract: The present invention relates to a speech recognition apparatus and a speech recognition method for speech recognition with improved accuracy. A distance calculator 47 determines the distance from a microphone 21 to an uttering user. Data indicating the determined distance is supplied to a speech recognition unit 41B. The speech recognition unit 41B has plural sets of acoustic models produced from speech data obtained by capturing speeches uttered at various distances. From those sets of acoustic models, the speech recognition unit 41B selects a set of acoustic models produced from speech data uttered at a distance closest to the distance determined by the distance calculator 47, and the speech recognition unit 41B performs speech recognition using the selected set of acoustic models.
    Type: Grant
    Filed: October 21, 2002
    Date of Patent: April 18, 2006
    Assignee: Sony Corporation
    Inventor: Yasuharu Asano
  • Patent number: 7013277
    Abstract: A preliminary word-selecting section selects one or more words following words which have been obtained in a word string serving as a candidate for a result of speech recognition; and a matching section calculates acoustic or linguistic scores for the selected words, and forms a word string serving as a candidate for a result of speech recognition according to the scores. A control section generates word-connection relationships between words in the word string serving as a candidate for a result of speech recognition, sends them to a word-connection-information storage section, and stores them in it. A re-evaluation section corrects the word-connection relationships stored in the word-connection-information storage section 16, and the control section determines a word string serving as the result of speech recognition according to the corrected word-connection relationships.
    Type: Grant
    Filed: February 26, 2001
    Date of Patent: March 14, 2006
    Assignee: Sony Corporation
    Inventors: Katsuki Minamino, Yasuharu Asano, Hiroaki Ogawa, Helmut Lucke
  • Patent number: 6961701
    Abstract: An extended-word selecting section calculates a score for a phoneme string formed of one or more phonemes corresponding to a user's speech, and searches a large-vocabulary dictionary for a word having one or more phonemes equal or similar to those of a phoneme string having a score equal to or higher than a predetermined value. A matching section calculates scores for the word searched for by the extended-word selecting section in addition to words selected by a preliminary word-selecting section. A control section determines a word string as the result of recognition of the speech uttered by the user.
    Type: Grant
    Filed: March 3, 2001
    Date of Patent: November 1, 2005
    Assignee: Sony Corporation
    Inventors: Hiroaki Ogawa, Katsuki Minamino, Yasuharu Asano, Helmut Lucke
  • Patent number: 6961705
    Abstract: Speech of a user is recognized in a speech recognizing unit. Based on a result of the speech recognition, a language processing unit, a dialog managing unit and a response generating unit cooperatively create a dialog sentence for exchanging a dialog with the user. Also, based on the speech recognition result, the dialog managing unit collects user information regarding, e.g., interests and tastes of the user. Therefore, the user information regarding the interests and tastes of the user can be easily collected.
    Type: Grant
    Filed: January 19, 2001
    Date of Patent: November 1, 2005
    Assignee: Sony Corporation
    Inventors: Seiichi Aoyagi, Yasuharu Asano, Miyuki Tanaka, Jun Yokono, Toshio Oe
  • Publication number: 20050240413
    Abstract: An information processing apparatus performing dialogue processing includes acquisition means for acquiring text data described in a natural language; example storage means for storing a plurality of examples each including an example statement and frame information described using a frame format and corresponding to the example statement; similarity calculation means for calculating a similarity between the text data and the example statement; example selection means for selecting an example corresponding to an example statement whose similarity to the text data is the highest from among the plurality of examples in accordance with the calculated similarity; text data processing means for processing the text data to convert the text data into the frame format in accordance with the frame information corresponding to the example selected by the example selection means; and dialogue processing means for performing the dialogue processing in accordance with the text data converted into the frame format.
    Type: Application
    Filed: April 12, 2005
    Publication date: October 27, 2005
    Inventors: Yasuharu Asano, Keiichi Yamada, Seiichi Aoyagi, Kazumi Aoyama
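The example-selection step above can be sketched with a crude word-overlap (Jaccard) similarity; the abstract leaves the similarity calculation open, so that measure, and the example data, are assumptions for illustration.

```python
def select_frame(text, examples):
    """examples: list of (example_statement, frame) pairs.

    Returns the frame of the example statement most similar to the input
    text, using Jaccard word overlap as the similarity measure.
    """
    def sim(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0
    stmt, frame = max(examples, key=lambda e: sim(text, e[0]))
    return frame
```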
  • Patent number: 6904334
    Abstract: A robot apparatus which may be turned to a sound source direction by a spontaneous whole-body concerted operation. With the possible range of rotation of the neck unit of a robot apparatus 1 of ±Y° and with the relative angle of the direction of a sound source S to the front side of the robot apparatus 1 of X°, the entire body trunk unit of the robot apparatus 1 is rotated through (X−Y)°, using the leg units, while the neck joint yaw axis of the robot apparatus is rotated through Y° to the direction of the sound source S, so that the robot apparatus is turned to the direction of the sound source S.
    Type: Grant
    Filed: March 17, 2003
    Date of Patent: June 7, 2005
    Assignee: Sony Corporation
    Inventors: Yasuharu Asano, Junichi Yamashita
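The (X−Y)° trunk / Y° neck split described in the abstract amounts to clamping the neck at its ±Y° limit and giving the trunk the remainder; the sketch below shows just that arithmetic.

```python
def turn_to_sound(source_angle, neck_limit):
    """Split a turn of source_angle degrees between trunk and neck.

    The neck yaw takes as much of the turn as its +/- neck_limit range
    allows; the trunk (via the leg units) covers the remainder.
    Returns (trunk_degrees, neck_degrees).
    """
    neck = max(-neck_limit, min(neck_limit, source_angle))
    trunk = source_angle - neck
    return trunk, neck
```

Small turns are handled by the neck alone (trunk stays at 0°), while larger turns engage the whole body, matching the "whole-body concerted operation" in the abstract.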
  • Publication number: 20050075877
    Abstract: A speech recognizing device for efficient processing while keeping a high speech recognizing performance. A matching unit (14) computes the score of a word preliminarily selected by a word preliminary selection unit (13) and determines candidates of the speech recognition result on the basis of the score. A control unit (11) creates a word connection relation between the words of a word sequence, which is a candidate of the speech recognition result and stores them in a word connection information storage unit (16). A revaluation unit (15) corrects the word connection relation serially, and the control unit (11) defines the speech recognition result on the basis of the word connection relation corrected. A word connection relation managing unit (21) limits the time corresponding to the boundary of a word expressed by the word connection relation, and a word connection relation managing unit (22) limits the starting time of the word preliminarily selected by the word preliminary selection unit (13).
    Type: Application
    Filed: November 7, 2001
    Publication date: April 7, 2005
    Inventors: Katsuki Minamino, Yasuharu Asano, Hiroaki Ogawa, Helmut Lucke
  • Publication number: 20050004710
    Abstract: Conventional robot apparatuses and the like cannot perform name learning naturally. Here, learning the name of an object is performed in such a manner that the name of a target object is obtained through dialog with a human being, the name is stored in association with plural items of different characteristic data detected for the target object, and when a new object is recognized based on the stored data and associative information, the name and characteristic data of the new person are obtained and this associative information is stored.
    Type: Application
    Filed: March 5, 2003
    Publication date: January 6, 2005
    Inventors: Hideki Shimomura, Kazumi Aoyama, Keiichi Yamada, Yasuharu Asano, Atsushi Okubo
  • Publication number: 20040167779
    Abstract: In order to prevent degradation of speech recognition accuracy due to an unknown word, a dictionary database has stored therein a word dictionary in which are stored, in addition to words for the objects of speech recognition, suffixes, which are sound elements and a sound element sequence forming the unknown word, for classifying the unknown word by the part of speech thereof. Based on such a word dictionary, a matching section connects the acoustic models of a sound model database, and calculates the score using the series of features output by a feature extraction section on the basis of the connected acoustic model. Then, the matching section selects a series of the words, which represents the speech recognition result, on the basis of the score.
    Type: Application
    Filed: February 24, 2004
    Publication date: August 26, 2004
    Applicant: SONY CORPORATION
    Inventors: Helmut Lucke, Katsuki Minamino, Yasuharu Asano, Hiroaki Ogawa