ROBOT AND CONTROL METHOD THEREOF

The present invention relates to a robot and a control method adapted for the robot. The robot stores audio data of stories, first relationships among the audio data, time periods, and key information of the stories, second relationships between the key information and actions, and actions. Each of the key information is assigned a time period indicating when the key information should be fetched. The method includes: a) telling a story; b) measuring time; c) fetching corresponding key information when an elapsed time reaches each time period of each of the key information; d) fetching an action according to the fetched key information; and e) performing the action.

Description
BACKGROUND

1. Technical Field

The disclosure relates to a robot and, more particularly, to a robot and a control method adapted for the robot.

2. Description of the Related Art

There are many electronic toys that play audio books, and many robots for entertainment that can perform various actions. What is needed, though, is a robot that can perform corresponding actions while it is telling a story.

BRIEF DESCRIPTION OF THE DRAWINGS

The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the robot. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.

FIG. 1 is a block diagram of a hardware infrastructure of a robot in accordance with an exemplary embodiment.

FIG. 2 is an example of an information story table of the robot of FIG. 1.

FIG. 3 is an example of an action information table of the robot of FIG. 1.

FIG. 4 is a flowchart illustrating a control method of the robot of FIG. 1.

DETAILED DESCRIPTION

FIG. 1 is a block diagram of a hardware infrastructure of a robot in accordance with an exemplary embodiment. The robot 1 includes a storage unit 10, an input unit 20, a processing unit 30, a digital-to-analog (D/A) converter 40, a speaker 50, and at least one actuator 60. The storage unit 10 stores an audio database 11, an information story table 12, an action information table 13, and an action database 14. The audio database 11 stores a list of audio data of stories that can be played by the robot 1.

FIG. 2 is an example of the information story table 12 of the robot of FIG. 1. Each audio data includes a plurality of key information, and each of the key information is assigned a time period indicating when the key information should be fetched. The information story table 12 stores relationships among audio data, time periods, and key information. The information story table 12 includes an audio data column, a time period column, and a key information column. The audio data column records a plurality of audio data of stories. The time period column records a plurality of time periods. The key information column records a plurality of key information. The key information is selected from the group consisting of words, phrases, and a combination of words and phrases.

For a better understanding of the relationships among audio data, time periods, and key information, an example is described as follows. The audio data of story “S1” includes the key information “A2” (e.g., a key word “walk”) and “A4” (e.g., a key phrase “sit down”); the key word “walk” is assigned a first time period and the key phrase “sit down” is assigned a second time period. Accordingly, when the elapsed playing time of the audio data of story “S1” reaches the first time period, the key word “walk” should be fetched, and when the elapsed playing time reaches the second time period, the key phrase “sit down” should be fetched.
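To make the structure of FIG. 2 concrete, the information story table 12 can be thought of as a per-story list of (time period, key information) pairs. The following Python sketch is illustrative only; the story identifiers, time values, and key information labels are hypothetical and are not taken from the figures.

```python
# Hypothetical sketch of the information story table 12 (FIG. 2).
# For each story, the entries pair a time period (seconds of elapsed
# playback) with the key information to be fetched at that time.
INFORMATION_STORY_TABLE = {
    "S1": [
        (12.0, "A2"),   # "A2": key word "walk" (time value is hypothetical)
        (47.5, "A4"),   # "A4": key phrase "sit down" (time value is hypothetical)
    ],
    "S2": [
        (8.0, "A1"),    # hypothetical entry for a second story
    ],
}
```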

FIG. 3 is an example of the action information table 13 of the robot of FIG. 1. Each of the key information of the audio data of stories is assigned one or more actions. The action information table 13 shows relationships between each of the key information and the associated actions, and includes a key information column and an action column. The key information column records a plurality of key information. The action column records a plurality of actions to be performed when the corresponding key information is fetched. For example, the key information “A1” is assigned the actions “XA11,” “XA12,” and “XA13.” “A1” is the key word “bye-bye”; the action “XA11” is “extend and wave the left hand,” the action “XA12” is “extend and wave the right hand,” and the action “XA13” is “blow a kiss.” The action database 14 stores a list of actions that can be performed by the robot 1.
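The action information table 13 can likewise be sketched as a mapping from each key information to its assigned action identifiers. The Python sketch below follows the “A1” example above; the remaining entries and action identifiers are hypothetical.

```python
# Hypothetical sketch of the action information table 13 (FIG. 3).
# Each key information is assigned one or more actions stored in the
# action database 14; one of them is fetched when the key information
# is fetched.
ACTION_INFORMATION_TABLE = {
    "A1": ["XA11", "XA12", "XA13"],  # "bye-bye": wave left hand, wave right hand, blow a kiss
    "A2": ["XA21"],                  # "walk": hypothetical walking action
    "A4": ["XA41"],                  # "sit down": hypothetical sitting action
}
```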

The input unit 20 is configured for generating instructions for determining a story to be played in response to user input. The processing unit 30 includes an audio fetching module 31, an audio outputting module 32, a timer 33, a relationship fetching module 34, and an action performing module 35. The audio fetching module 31 is configured for fetching audio data from the audio database 11 according to an instruction generated from the input unit 20 in response to the user input. The audio outputting module 32 is configured for outputting the fetched audio data. The D/A converter 40 is configured for converting the audio data into analog data. The speaker 50 outputs the analog data, which in this embodiment is a story.

The timer 33 starts measuring time when the speaker 50 begins outputting the story. The relationship fetching module 34 is configured for fetching each of the key information of the audio data from the information story table 12 when the elapsed time of the timer 33 reaches the time period assigned to that key information, and for fetching a corresponding action associated with each fetched key information from the action information table 13. The action performing module 35 is configured for fetching the action defined in the action database 14 and controlling the actuator 60 to perform the action. The actuator 60 performs the action by moving parts of the robot 1. In this embodiment, when a fetched key information is assigned more than one action, the relationship fetching module 34 fetches the corresponding action from the action information table 13 randomly. The action performing module 35 further judges whether all actions corresponding to the story have been performed. When all actions corresponding to the story have been performed, the robot 1 finishes telling the story. In other words, a user selects and inputs a story, and the robot 1 then tells the story while fetching and performing the actions associated with it. If the story has other key information and associated actions, those actions are also performed during the course of the story.
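Because a key information may be assigned several actions and the relationship fetching module 34 selects one of them randomly, the selection step can be sketched as follows. The function name and table layout are assumptions used only for illustration and do not come from the patent itself.

```python
import random

def fetch_action(key_info, action_table):
    """Randomly pick one of the actions assigned to a fetched key
    information, mirroring how the relationship fetching module 34 is
    described to consult the action information table 13."""
    return random.choice(action_table[key_info])
```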

FIG. 4 is a flowchart illustrating a control method of the robot of FIG. 1. In step S400, the audio fetching module 31 receives the instruction generated from the input unit 20 and fetches the audio data of a story from the audio database 11. In step S410, the audio outputting module 32 outputs the audio data, the D/A converter 40 converts the audio data into analog data, and the speaker 50 begins telling the story associated with the audio data. In step S420, the timer 33 starts measuring time. In step S430, the elapsed time of the timer 33 reaches a time period assigned to one of the key information of the audio data in the information story table 12. In step S440, the relationship fetching module 34 fetches the corresponding key information for that time period from the information story table 12. In step S450, the relationship fetching module 34 randomly fetches an action according to the fetched key information from the action information table 13. In step S460, the action performing module 35 controls the actuator 60 to perform the action.

In step S470, the action performing module 35 judges whether all actions corresponding to the story have been performed. If not all actions have been performed, the procedure returns to step S420; that is, when the elapsed time of the timer 33 reaches the next time period of the next key information of the audio data, the action performing module 35 controls the actuator 60 to perform the next action according to the next key information. In step S480, if all actions have been performed, that is, all the key information of the story has been fetched and all associated actions have been performed, the robot 1 finishes telling the story.
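Putting the pieces together, the flow of steps S400 through S480 can be sketched as a simple playback loop. In this sketch, `play_audio` and `perform_action` stand in for the speaker/D-A path and the actuator 60, and the table layouts match the hypothetical sketches above; none of these names come from the patent itself.

```python
import random
import time

def tell_story(story_id, story_table, action_table, play_audio, perform_action):
    """Illustrative sketch of the control method of FIG. 4."""
    schedule = sorted(story_table[story_id])      # (time period, key information) pairs
    play_audio(story_id)                          # S400-S410: fetch the audio data and begin telling the story
    start = time.monotonic()                      # S420: the timer starts measuring time
    for period, key_info in schedule:
        while time.monotonic() - start < period:  # S430: wait until the elapsed time reaches the time period
            time.sleep(0.05)
        action = random.choice(action_table[key_info])  # S440-S450: fetch key information and randomly pick an action
        perform_action(action)                          # S460: the actuator performs the action
    # S470-S480: all key information has been fetched and all actions performed;
    # the robot finishes telling the story
```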

It is understood that the invention may be embodied in other forms without departing from the spirit thereof. Thus, the present examples and embodiments are to be considered in all respects as illustrative and not restrictive, and the invention is not to be limited to the details given herein.

Claims

1. A robot, comprising:

a storage unit, configured for storing audio data of stories, first relationships among the audio data, time periods, and key information of the stories, second relationships between the key information and actions, and the actions, wherein each of the key information is assigned a time period indicating when the key information should be fetched;
a speaker, configured for telling a story associated with the audio data;
a timer, configured for measuring time when the speaker begins telling the story;
a relationship fetching module, configured for fetching corresponding key information when an elapsed time of the timer reaches each time period of each of the key information from the storage unit, and fetching an action according to the fetched key information from the storage unit; and
an actuator, configured for performing the action.

2. The robot as recited in claim 1, further comprising an input unit, configured for generating instructions for determining the story to be told in response to user input.

3. The robot as recited in claim 2, further comprising an audio fetching module, configured for fetching the audio data from the storage unit according to an instruction from the input unit.

4. The robot as recited in claim 3, further comprising an audio outputting module, configured for controlling the output of the audio data.

5. The robot as recited in claim 4, further comprising a digital-to-analog converter, configured for converting the audio data into analog data as the story.

6. The robot as recited in claim 1, further comprising an action performing module, configured for controlling the actuator to perform the action and judging whether all actions corresponding to the story are performed.

7. The robot as recited in claim 6, wherein when the action performing module has performed all actions associated with the key information of the audio data, the speaker finishes telling the story.

8. The robot as recited in claim 1, wherein the key information is selected from the group consisting of words, phrases, and a combination of words and phrases.

9. The robot as recited in claim 1, wherein the relationship fetching module fetches the action according to the fetched key information randomly.

10. A control method for a robot, the robot storing audio data of stories, first relationships among the audio data, time periods, and key information of the stories, second relationships between the key information and actions, and the actions, wherein each of the key information is assigned a time period indicating when the key information should be fetched, the method comprising:

telling a story;
measuring time;
fetching corresponding key information when an elapsed time reaches each time period of each of the key information;
fetching an action according to the fetched key information; and
performing the action.

11. The control method as recited in claim 10, further comprising:

receiving an instruction and fetching audio data of the story.

12. The control method as recited in claim 11, further comprising:

converting the audio data into analog data as the story.

13. The control method as recited in claim 10, further comprising:

fetching the action according to the fetched key information randomly.

14. The control method as recited in claim 10, further comprising:

judging whether all actions corresponding to the story are performed.

15. The control method as recited in claim 14, further comprising:

finishing telling the story, after all actions associated with each of the key information of the audio data are performed.

16. The control method as recited in claim 10, wherein the key information is selected from the group consisting of words, phrases, and a combination of words and phrases.

Patent History
Publication number: 20100048090
Type: Application
Filed: Apr 29, 2009
Publication Date: Feb 25, 2010
Applicant: HON HAI PRECISION INDUSTRY CO., LTD. (Tu-Cheng)
Inventors: CHUAN-HONG WANG (Tu-Cheng), HSIAO-CHUNG CHOU (Tu-Cheng), LI-ZHANG HUANG (Tu-Cheng)
Application Number: 12/432,685
Classifications
Current U.S. Class: Having Sounding Means (446/297)
International Classification: A63H 3/28 (20060101);