Patents by Inventor Haru Ando
Haru Ando has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140045162
Abstract: A device for structuring learning contents includes a lesson state data acquisition unit that acquires data on the state of a lesson given by displaying the learning content on an electronic blackboard, and a server connected through a network to the electronic blackboard, the lesson state data acquisition unit, and a database. The server has a lesson state data analysis unit that extracts a feature amount of the lesson from the lesson state data, a type classification unit that analyzes the feature amount and classifies the lesson by lesson type and attitude type, a content evaluation unit that evaluates the learning content from the feature amount, and a content-tag data generation unit that attaches a content tag, based on the analysis and evaluation results, to the learning content in order to structure the learning contents.
Type: Application
Filed: June 25, 2013
Publication date: February 13, 2014
Inventors: Haru Ando, Ryuji Mine, Masakazu Fujio
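The tagging pipeline described in the abstract above can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation: the function names, event model, and classification thresholds are all hypothetical.

```python
# Hypothetical sketch of the abstract's pipeline: extract feature amounts
# from lesson-state data, classify the lesson, and attach a content tag.

def extract_features(lesson_events):
    """Derive simple feature amounts from raw lesson-state events."""
    talk = sum(e["duration"] for e in lesson_events if e["kind"] == "teacher_talk")
    board = sum(e["duration"] for e in lesson_events if e["kind"] == "board_write")
    total = sum(e["duration"] for e in lesson_events) or 1.0
    return {"talk_ratio": talk / total, "board_ratio": board / total}

def classify_lesson(features):
    """Map feature amounts to a coarse lesson type (hypothetical rule)."""
    if features["talk_ratio"] > 0.6:
        return "lecture"
    if features["board_ratio"] > 0.4:
        return "board-centered"
    return "interactive"

def tag_content(content_id, features):
    """Build a content tag combining classification and a toy evaluation score."""
    return {"content": content_id,
            "lesson_type": classify_lesson(features),
            "score": round(1.0 - abs(features["talk_ratio"] - 0.5), 2)}
```

In the actual system the feature amounts would come from analyzing electronic-blackboard usage rather than a prepared event list; the sketch only shows the extract–classify–tag flow.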
-
Patent number: 8014716
Abstract: An information management server and information distribution system use a video of the class to create educational contents matching the learning conditions of the student, and also provide instruction using these same contents.
Type: Grant
Filed: December 8, 2003
Date of Patent: September 6, 2011
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Nobuhiro Sekimoto, Takashi Hasegawa
-
Publication number: 20090132637
Abstract: The invention, taking an electronic content utilization casing as its basis, extracts input activity on that casing; estimates data input positions within the content by calculating similarity and difference values among the activity data, and between the activity data and model data; estimates the input state of the user from the estimated input positions; and presents those estimates as the content utilization state.
Type: Application
Filed: November 21, 2008
Publication date: May 21, 2009
Inventor: Haru Ando
-
Patent number: 7308479
Abstract: The mailing system of the invention converts text information into movement information using animations, object information, and background information, and sends and receives the converted information as mail. The system analyzes the information input or selected by a user, creates an animation movement from the analyzed information, and selects an object and a background accordingly. The system sends and receives the created or selected animation movement information, object information, and background information as mail, and displays the mail as if the sender and receiver were engaged in a dialogue.
Type: Grant
Filed: June 24, 2003
Date of Patent: December 11, 2007
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Junichi Matsuda
-
Patent number: 7190770
Abstract: A user-requested command is indicated with both visual and auditory information, and a user-friendly method helps the user understand what content to input. The command is expressed in the form of a template sentence: the template part of the sentence is vocalized, while the slot area is signalled by a sound or voice. The user then speaks the content to be input into the slot area.
Type: Grant
Filed: July 29, 2002
Date of Patent: March 13, 2007
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Yoshito Nejime, Junichi Matsuda
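The template-and-slot prompting described above can be sketched in a few lines. This is a minimal illustration, not the patented method: `speak` and `beep` are hypothetical stand-ins for real text-to-speech and audio-cue calls, and the `{slot}` placeholder syntax is invented for the example.

```python
# Hypothetical sketch: vocalize the fixed parts of a template sentence and
# emit a sound where the slot sits, then fill the slot from recognized speech.

def render_prompt(template, speak, beep):
    """Speak fixed text; play a sound cue at each {slot} position."""
    parts = template.split("{slot}")
    for i, part in enumerate(parts):
        if part:
            speak(part)          # vocalized template part
        if i < len(parts) - 1:
            beep()               # sound marks the slot area

def fill_slot(template, recognized_text):
    """Insert the user's recognized utterance into the slot area."""
    return template.replace("{slot}", recognized_text)
```

The sound cue tells the user both where the slot is in the sentence and that the system is waiting for spoken input at that point.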
-
Publication number: 20050021665
Abstract: A content is delivered in consideration of the terminal used by a user, the ambient environment of the user and the terminal, and the characteristics and preferences of the user. A content delivery server has an input/output unit for transmitting and receiving information to and from a terminal, a content management unit for managing contents composed of modalities, and a control unit for controlling the input/output unit and the content management unit.
Type: Application
Filed: August 12, 2003
Publication date: January 27, 2005
Inventors: Nobuhiro Sekimoto, Haru Ando
-
Publication number: 20040254983
Abstract: An information management server and information distribution system use a video of the class to create educational contents matching the learning conditions of the student, and also provide instruction using these same contents.
Type: Application
Filed: December 8, 2003
Publication date: December 16, 2004
Applicant: Hitachi, Ltd.
Inventors: Haru Ando, Nobuhiro Sekimoto, Takashi Hasegawa
-
Publication number: 20040152060
Abstract: A program and system for judging the learning conditions of each user from the user's behavior information or living-body information, and for evaluating the contents or lesson details used for learning. A change of concentration during learning is judged from blood flow rates measured in time series by a near-infrared measuring device. The result of that judgment is then analyzed together with an analysis of the user's behavior (image recognition, voice recognition, and instrument input operation events). In this way the change in the user's attention is extracted and the user's learning conditions are judged. The true learning conditions of the user can be grasped in real time, and the learning contents or lesson details can be evaluated. Further, the result of the evaluation can be reflected in the next lesson to enhance the learning effect.
Type: Application
Filed: June 30, 2003
Publication date: August 5, 2004
Inventors: Haru Ando, Takeshi Hoshino, Nobuhiko Matsukuma
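The time-series judgment described above can be sketched as a simple baseline comparison. This is a deliberately crude illustration, not the patented algorithm: the window size, the 15% drop threshold, and the baseline choice are all hypothetical.

```python
# Hypothetical sketch: flag windows of a blood-flow time series whose mean
# falls a fixed fraction below a baseline taken from the start of the lesson,
# as a crude stand-in for detecting a change of concentration.

def concentration_changes(flow, window=3, drop=0.15):
    """Return start indices of windows whose mean is `drop` below baseline."""
    baseline = sum(flow[:window]) / window      # assume attentive at the start
    flagged = []
    for start in range(0, len(flow) - window + 1, window):
        mean = sum(flow[start:start + window]) / window
        if mean < baseline * (1.0 - drop):
            flagged.append(start)               # possible loss of concentration
    return flagged
```

In the system described by the abstract, such flags would then be cross-checked against behavior analysis (image and voice recognition, input events) rather than used alone.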
-
Publication number: 20040038670
Abstract: The mailing system of the invention converts text information into movement information using animations, object information, and background information, and sends and receives the converted information as mail. The system analyzes the information input or selected by a user, creates an animation movement from the analyzed information, and selects an object and a background accordingly. The system sends and receives the created or selected animation movement information, object information, and background information as mail, and displays the mail as if the sender and receiver were engaged in a dialogue.
Type: Application
Filed: June 24, 2003
Publication date: February 26, 2004
Applicant: Hitachi, Ltd.
Inventors: Haru Ando, Junichi Matsuda
-
Publication number: 20030156689
Abstract: A user-requested command is indicated with both visual and auditory information, and a user-friendly method helps the user understand what content to input. The command is expressed in the form of a template sentence: the template part of the sentence is vocalized, while the slot area is signalled by a sound or voice. The user then speaks the content to be input into the slot area.
Type: Application
Filed: July 29, 2002
Publication date: August 21, 2003
Inventors: Haru Ando, Yoshito Nejime, Junichi Matsuda
-
Patent number: 6570588
Abstract: An editing system having a dialogue-operation-type interface directs the next operation by referring to an operation history. User information is input using speech input/output, pointing by a finger, and 3-D CG. A human image representing the system is displayed as an agent on the image output device, and user errors, the availability of a queue, and the utilization environment are extracted by the system and reported to the user through the agent. The system responds to the user's intent by image display or speech output, with the agent as the medium, so that a user-friendly interface for graphics editing and image editing is provided.
Type: Grant
Filed: May 20, 1998
Date of Patent: May 27, 2003
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Nobuo Hataoka
-
Patent number: 6549887
Abstract: Input sign language word labels, together with editing items such as the speeds and positions of moving portions that specify the manual signs and/or sign gestures corresponding to each label, are displayed on an editing screen. The user modifies these editing items to add non-language information, such as emphasis or feeling, to the contents of communication, thereby generating modified sign language animation information data comprising the input sign language word label string with the added non-language information. For communication or interaction, the non-language information is extracted from the modified sign language animation information data and stored in a memory with the input sign language word label string. When a hearing-impaired person communicates or interacts with another person through text, the user can thus emphasize the contents of communication or convey his or her feelings about them to the other person.
Type: Grant
Filed: January 20, 2000
Date of Patent: April 15, 2003
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Hirohiko Sagawa, Masaru Takeuchi
-
Patent number: 6477495
Abstract: A prosodic parameter for an input text is computed by storing sentences of vocalized speech in a speech corpus memory, searching the speech corpus, with the input text as a key, for a stored text having a similar prosody, and modifying the prosodic parameter based upon the search results. Because a plurality of prosodic parameters are handled as linked data, a synthesized sound close to natural speech, with natural intonation and prosody, is produced.
Type: Grant
Filed: March 1, 1999
Date of Patent: November 5, 2002
Assignee: Hitachi, Ltd.
Inventors: Nobuo Nukaga, Yoshinori Kitahara, Keiko Fujita, Haru Ando, Shunichi Yajima
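The corpus-lookup idea described above can be sketched as a nearest-text search followed by parameter modification. This is an illustrative sketch, not Hitachi's implementation: the shared-character similarity measure, the flat corpus structure, and the length-based duration scaling are deliberately crude stand-ins.

```python
# Hypothetical sketch: find the corpus sentence most similar to the input
# text and reuse its prosodic parameters, scaling durations to the input.

def similarity(a, b):
    """Crude text similarity: shared-character overlap ratio (stand-in)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def prosody_for(text, corpus):
    """corpus: list of (sentence, {'f0': ..., 'durations': [...]}) entries."""
    best, params = max(corpus, key=lambda item: similarity(text, item[0]))
    scale = len(text) / max(len(best), 1)   # stretch durations to input length
    return {"f0": params["f0"],
            "durations": [d * scale for d in params["durations"]]}
```

A real system would key the search on prosodically relevant features (accent phrases, mora counts) rather than raw character overlap, but the lookup-then-modify flow is the same.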
-
Patent number: 5864808
Abstract: A user inputs voice through a voice recognition program, a microphone, and an A/D converter while pointing, by a pointing gesture, touch pen, or the like, at an image displayed on a display unit. For the recognition result of the input voice, the processing or display indicated by the candidate with the first rank of recognition reliability is performed, and an indication showing the candidates of the second and lower ranks is displayed in menu form on the display screen.
Type: Grant
Filed: September 22, 1997
Date of Patent: January 26, 1999
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Hideaki Kikuchi, Nobuo Hataoka, Yasumasa Matsuda, Shigeto Oheda, Tsukasa Hasegawa
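The ranked-candidate handling described above can be sketched briefly. This is a hypothetical illustration, not the patented flow: the `(text, score)` candidate format and the `execute` callback are invented for the example.

```python
# Hypothetical sketch: act on the top-ranked recognition candidate at once
# and return the remaining candidates for display as a correction menu.

def handle_recognition(candidates, execute):
    """candidates: list of (text, score). Runs the best; returns the menu."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    execute(ranked[0][0])                       # first-rank candidate acted on
    return [text for text, _ in ranked[1:]]     # shown to the user as a menu
```

Acting on the best candidate immediately keeps the interaction fast, while the menu of lower-ranked candidates gives the user a one-tap correction path when recognition was wrong.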
-
Patent number: 5777614
Abstract: An editing system having a dialogue-operation-type interface directs the next operation by referring to an operation history. User information is input using speech input/output, pointing by a finger, and 3-D CG. A human image representing the system is displayed as an agent on the image output device, and user errors, the availability of a queue, and the utilization environment are extracted by the system and reported to the user through the agent. The system responds to the user's intent by image display or speech output, with the agent as the medium, so that a user-friendly interface for graphics editing and image editing is provided.
Type: Grant
Filed: October 13, 1995
Date of Patent: July 7, 1998
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Nobuo Hataoka
-
Patent number: 5600765
Abstract: A method of accepting multimedia operation commands wherein a user, while pointing to a display object or a display position on the screen of a graphics display system through a pointing input device, commands the system through a voice input device to cause an event on the graphics display. The method comprises: a first step of allowing the user to perform a pointing gesture so as to enter a string of coordinate points surrounding an area for the display object or a desired display position; a second step of allowing the user to give the voice command together with the pointing gesture; a third step of recognizing the command content of the voice command by a speech recognition process; a fourth step of recognizing the command content of the pointing gesture in accordance with the result of the third step; and a fifth step of executing the event on the graphics display in accordance with the command contents of the voice command.
Type: Grant
Filed: October 19, 1993
Date of Patent: February 4, 1997
Assignee: Hitachi, Ltd.
Inventors: Haru Ando, Yoshinori Kitahara
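The first and fourth steps above, resolving which display objects a pointing gesture refers to, can be sketched with a standard point-in-polygon test. This is an illustrative sketch under invented assumptions, not the patented method: the data model (objects as named center points, the gesture as a closed coordinate string) is hypothetical.

```python
# Hypothetical sketch: the pointing gesture yields a closed string of
# coordinate points; objects whose centers fall inside that polygon become
# the targets of the recognized voice command.

def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the closed polygon poly?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]   # wrap to close the polygon
        if (y1 > y) != (y2 > y):             # edge crosses the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def targets_of_gesture(gesture_points, objects):
    """objects: dict name -> (x, y) center. Return names inside the gesture."""
    return [name for name, center in objects.items()
            if point_in_polygon(center, gesture_points)]
```

The recognized voice command (third step) would then be applied to exactly the objects this function returns.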