Interactive Hi-Tech doll


A high-tech doll produces human-like expressions, recognizes words, and is able to carry on a conversation with a living person, as example, addresses time based subjects, such as the food to eat at various times of the day, and expresses attitudes, emulating a living child. A child player acquires an emotional bond to the doll whilst the doll appears to bond to the child. The doll exhibits facial expressions produced concurrently with spoken words or separately to provide body language, representing emotions, such as happiness, sadness, grief, surprise, delight, curiosity, and so on, reinforcing the emulation of a living child. Additional features add to the play.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This non-provisional application for patent is related to an earlier-filed provisional application for patent of the inventors, Ser. No. 60/748,391, filed Dec. 7, 2005, entitled Interactive Hi-Tech Doll, the entire content of which is incorporated herein by reference in its entirety. Applicant claims the benefit under 35 U.S.C. 119(e) and 35 U.S.C. 120 based on the foregoing provisional application.

FIELD OF THE INVENTION

The present invention relates to high-tech dolls and, more particularly, to an interactive talking doll that in both speech and facial expression simulates the attitude and conversational capability of a very young child, chews food, asks questions, understands answers, sings songs, plays games and, all in all, is a great companion for a young child.

BACKGROUND

One plaything that's endured over the years is the doll. Typically, a young girl's parents will at some time purchase a doll as a gift for their daughter, fully expecting the gift to be received with appreciation. The parents are never disappointed. Often the girl's grandparents do the same for their granddaughter. They regard the play with dolls to be a wholesome activity and, importantly, an activity that's a good deal of entertainment for the child. One finds that young girls in that doll play develop a tender, nurturing relationship with the doll, often mimicking their mother's behavior toward themselves in “mothering” the doll for hours on end and/or treating the doll as a friend or confidant.

Today for the most part dolls are fabricated of better materials, are more real or life-like in appearance, feel and dress, and, importantly, are more sophisticated technologically than in the past. In other words, the dolls of today can be more “high-tech.” The addition of sophisticated technology is intended to increase the play value of the doll, giving the child additional challenges and better engaging the child's creativity. One recently marketed doll that significantly advanced the doll technology is the Amazing Amy doll, earlier marketed by the Playmates Company of California and described in my prior patent, U.S. Pat. No. 6,554,679, granted Apr. 29, 2003, entitled Interactive Virtual Character doll.

One expects that the term “doll” may have a different meaning to different persons. Hence, before proceeding further into the background to the present invention, it should prove helpful to provide some definition of “doll.” That should aid one to better understand the prior art and the invention, or at least ensure that the reader's understanding of the meaning is the same as the applicant's. Although the foregoing paragraph describes a particular doll that represents a small child in appearance, one realizes that the technology is not so limited in application. To avoid unnecessarily limiting the present invention, a broader definition is appropriate.

According to conventional thinking, a doll is a figure that in appearance looks like a person, usually a small girl or boy, and constitutes a plaything for a child, typically for a young girl. In more modern times that kind of plaything has de-facto expanded in definition in one direction to include small male figures, referred to as action figures, that are played with by young boys without the stigma of femininity (e.g. dolls for boys), and even virtual beings resident in an electronic game and visible to the child only as an image on the liquid crystal display of the electronic game. The scope of the doll category has also expanded in definition in another direction to include animals, cartoon characters and fantasy characters. Thus, the term doll as used herein is intended to encompass all such forms and should so be understood when reading this specification and the described technology that improves all those structures. That's true, even when the preferred embodiment of the present invention, as will be seen, is of the form of a small child, as later herein described.

In all of the foregoing forms described in the initial paragraphs, the doll contains a body or torso, a head and at least some appendages, such as arms and/or legs. Assuming the doll is to mimic a young female child, the arms, legs, torso and head are typically designed to appear natural and life-like in appearance and feel.

The discussion of the background in my prior patent, U.S. Pat. No. 6,554,679, granted Apr. 29, 2003, entitled Interactive Virtual Character doll (the “'679 patent”), provides an ample summary of the innovations that preceded that invention in an interactive virtual character doll, which is also of interest to the present invention. Indeed, the entire content of that prior patent is believed material to the present invention and that content is incorporated by reference herein in its entirety.

The '679 patent recognized various types of player inputs to the doll that resembled real life activities, such as feeding, grooming, playing, dressing and the like. That input was accomplished in part by inclusion of two sensors carried in the doll that, with the aid of an electronic controller in the doll, essentially recognized objects. One sensor was located in the mouth of the doll and recognized insertion of various simulated fruits and vegetables into the mouth of the doll. Each simulated fruit or vegetable carried an electrical resistance of a value that was unique to the object. That resistance was included in a series circuit between two electrical contacts on the simulated food object. By engaging electrical contacts carried on the sensor with the corresponding contacts on the simulated food object, the resistance was effectively placed in a resistance measuring circuit in the electronic controller in the doll. The resistance measuring circuit measured the value of the electrical resistance between those contacts, thereby identifying the particular food by the resistance value assigned thereto by the doll designers, and the controller processed that information. The second sensor, carried on the back of the doll, recognized the placement (e.g. dressing) of an article of clothing on the doll.

Further, a magnetic sensor, a magnetic switch, was also carried in the head of the doll. When the child swept a hair brush across the hair of the doll, a magnet carried in the hair brush actuated the magnetic switch, thereby identifying the presence of the brush and the brushing activity.

Many dolls (and other character toys) emulate the emotions of the kind of character the respective doll or character toy represents. The emulation uses sound effects and/or facial sculptures. Those sculptures may be of plastic, vinyl, porcelain, soft sculpted fabric or plush, or other means to depict the emotion. For example, the face of the doll is sculpted to appear sad. The child holding that doll may believe the doll character is unhappy, because the doll character has an appearance that, through experience, the child has learned is consistent with sadness or, in the case of an animal or other non-human form of doll, the character's anthropomorphic appearance of sadness. Those actions are recognized by the child as being associated with sadness. The illusion of sadness is reinforced if a crying or whimpering sound is introduced and broadcast from the loudspeaker on the doll.

Realism is further enhanced by introduction of mechanical animatronics, adding the opening and closing of the mouth of the doll, the opening and closing of the eyes/eyelids of the doll, and some additional movements of facial parts, such as eyebrows and lips, that were somewhat synchronized to the audio sounds originating from the doll.

However, until the present invention, no character doll was known that could be trained to at least recognize the child's voice. Then too, no character doll was known that was able to actually carry on a conversation with the child, interacting verbally like a person in a tete-a-tete by giving an appropriate response to the child as the conversation between the doll and the child proceeds back and forth. Although an illusion, the tete-a-tete appears very real to the child who's playing with the doll.

In addition, until the present invention no character doll was known that enabled the doll to react with emotions to what the child said or did not say, such as being upset, angry, sad, happy, surprised, exuberant, excited, sleepy, hungry, fussy, needy, lonesome, or in want of companionship, play, or its mother, through speech, phrases and sound effects that convey readily recognized emotions. That kind of conversational interactivity is combined for the first time with facial animatronics that actually emulate the virtual emotional feeling expressed by the character of the doll.

Through the Internet, applicant learned that academics at the Massachusetts Institute of Technology previously succeeded to some degree in reproducing the emotions of a small child within a robot, more specifically within a skin-less robot head. The MIT study focuses on the construction of robots that engage in meaningful social exchanges with humans.

Although the field of robotics is far removed from children's playthings and toys, it may be of interest in learning of the sophisticated open-ended learning procedures and the human emotions assertedly reproduced as facial expressions. The reader may access the internet and review the robotic head, referred to as Kismet, at the website www.ai.mit.edu/projects/kismet. The robot's vision system consists of four color CCD cameras mounted on a stereo active vision head. Two wide field of view (fov) cameras are mounted centrally and move with respect to the head. These are 0.25 inch CCD lipstick cameras with 2.2 mm lenses manufactured by Elmo Corporation. They are used to decide what the robot should pay attention to, and to compute a distance estimate. There is also a camera mounted within the pupil of each eye. These are 0.5 inch CCD foveal cameras with 8 mm focal length lenses, and are used for higher resolution post-attention processing, such as eye detection.

The foregoing appears to involve very sophisticated techniques to enable the robot "brain" to learn, as well as to convey reaction in a facial expression, using actuators that are controlled by a series of networked desktop computers to control the simulated lips and the eyes and eyelids of the robot. As example, happiness is represented in the robot head by lips turned upwardly at the sides and fully open eyes to produce what the robot developer perceives to be a happy face. However, the robot head is unable (e.g. unequipped) to talk or carry on a conversation with a living person, as its only auditory communication is babbles of no known human language. It is clear from the website that much thought and government-funded work has gone into developing the Kismet robot over the past ten years or so to the present stage of development, but the device still appears to be a work-in-progress to explore how socially stimulated learning is served by exploiting the types of interaction that arise between a nurturing caretaker and an immature learner. The sorts of capabilities targeted for learning are those social and communication skills exhibited by human infants within the first year of life. The demonstrated robot head is physically and electronically incomplete and is obviously unsuited as a plaything (and likely will never be a plaything). Even if the developers of Kismet someday succeed in developing a human-like robotic infant head that is able to learn on its own, such a device would hardly be capable of serving as a child's baby doll, companion and friend as does the present invention. Nor would it be affordable as a consumer product intended for use by a child.

Devices, such as lockets that transmit certain signals recognized by the character doll, rings containing magnets that are detected and recognized, and other such physical devices worn by a child, have been used in the past to enable the character doll to recognize the child wearing or holding the device or, in the case of a little girl's doll, the pretend mother of the doll. However, until the present invention, no inanimate character doll was able to recognize the voice of a child or the pretend mother as would represent a bonding of the doll with the person. The present invention enables the character of the doll to actually recognize the respective voice of the owner and/or player of the doll, the child, and/or the pretend mother of the child. Hence, the doll "knows" that the person to which the doll character is bonded is present.

Bonding is the close personal relationship that forms between people, such as parent and child. The person in that relationship experiences an emotional need for the other person and communicates that need, and the need for that person's presence, through expressions of love when that person is present. Additionally, no character doll is known that actually emulates the emotional bonding of the character doll to a child, emulating actual emotional attachment to the owner, player and/or child and a need for that person.

In the present invention the emulation or illusion of bonding is achieved by combining three elements: enabling the doll to know the bonded person is present; animatronics that express, through emotional facial expressions, the feelings of the character for the bonded person, the make-believe mother; and emotional conversational interaction with that person. For the player, owner and/or child player, the foregoing creates for the first time an intimacy and feeling of bonding with the character because of the actual appearance of real communication and understanding between the character and the bonded person. The doll may request something of the child and the child in response declines that request (e.g. says "no"), which the doll understands. In response, the demeanor of the doll changes to disappointment and sadness, which the child observes. The child recognizes thereby that it has "hurt" the doll and is "touched" thereby in the heart. How can this action not produce an emotional bonding?

The doll invention is able to recognize accessories without requiring the electrical contacts and electrical resistors used in the preferred embodiment of the prior '679 patent. RFID readers and RFID tags assist in accomplishing the function of identifying the accessory, without physical contact between the tag and reader. That structure is transparent to the child. As an advantage, the invention is more magical to a young child.

Additionally, in the prior '679 patent, the sensor inside the doll's mouth required direct physical contact with the simulated food (piece) placed in the doll's mouth in order to make the appropriate eating or drinking sounds when the food was inserted in the mouth of the doll. Using electrical resistors to simulate feeding an animatronic doll with a mouth that is undergoing the motion of chewing is quite difficult because of the need for the physical contact of the metal contact points connected to the resistor in the food accessory play piece and the two metal sensors in the mouth, when the mouth is moving. The prior Amazing Amy doll, hence, did not possess that capability. As an additional advantage, the present invention is able to easily detect the particular food that is inserted in the mouth, even though the mouth is opening and closing.

OBJECTS OF THE INVENTION

Accordingly, a principal object of the present invention is to provide an interactive high-tech doll that is able to preoccupy the interest of a young child.

A further object is to provide an interactive doll that is able to recognize at least its mother's voice so the doll can know with whom the doll is conversing.

A still further object of the invention is to provide a doll that may in one embodiment interact with such things as singing songs and playing games with anyone or in other embodiments only with those whose voices the doll recognizes.

And, an additional object of the invention is to create a doll that's capable of carrying on a limited conversation, such as by asking a question, recognizing the answer given in reply, and providing an additional response.

SUMMARY OF THE INVENTION

In accordance with the foregoing objects and advantages, a high-tech doll according to the present invention produces human-like expressions, recognizes words, is able to play games, carry on a conversation, and express attitudes, emulating a living child. Through the seeming magic of the doll appearing to actually respond emotionally to what a child says or does not say, or to what a child does or does not do when the doll verbally prompts the child, the child acquires an emotional bond to the doll. The doll is capable of bonding to the child and, conversely, the child is able to bond to the doll. The doll recognizes what the child says in response to prompts from the doll and is capable of carrying on a limited conversation with the child. In so doing, the doll exhibits facial expressions, produced by facial animators in synchronism with spoken words or separately, to effectively provide visible facial body language, representing any of happiness, sadness or grief, surprise, delight, curiosity, and so on to reinforce the emulation of a living child. The doll recognizes the voice of the child at particular points in the doll's programming.

A preferred embodiment of the invention may include the feature of voice recognition, wherein the doll is trained to recognize a word (or words) spoken in the child's voice (and that word spoken in that voice, e.g. a voice print, is categorized by the doll as the "mother" of the doll), the feature of speech recognition of words spoken to the doll by the mother and any others, or both of the foregoing features, e.g. a limited amount of voice recognition combined with mostly speech recognition. With a limited vocabulary, the doll recognizes whether it is being addressed by the pseudo-mother or by someone else. The doll creates a response to the child's verbal stimulus by answering the child with spoken words, usually accompanied by an appropriate facial expression or, alternatively, answers only with an appropriate facial expression as body language. Ideally, the child is taken in by the illusion that the doll is a living person.

In other more sophisticated embodiments, the voice recognition system of the doll is able to voice print members of the "family" in addition to the mother, such as grandma, grandpa, uncle, aunt and other family members, is able to identify which family member is addressing the doll, and provides individually tailored verbal messages or other responses to the particular family member identified. In other play patterns, the doll is able to establish relationships with respective family members, and those relationships differ in some respects from the relationship the doll has with the bonded person, the pseudo-mother.

Further, in accordance with another aspect of the invention, various simulated food accessories may be inserted in the mouth of the doll; the doll recognizes the food and simulates chewing that food. The food accessories contain RFID tags, electronically identifying the respective articles, while the doll includes RFID sensors, controlled by the controller of the doll, that read the RFID tags carried by the simulated food products that are inserted into the mouth of the doll. That RFID tag reading is transparent to the child. The doll may then broadcast spoken words indicative of knowledge of the particular simulated food product, further adding to the illusion of reality.

The foregoing and additional objects and advantages of the invention, together with the structure characteristic thereof, which were only briefly summarized in the foregoing passages, will become more apparent to those skilled in the art upon reading the detailed description of a preferred embodiment of the invention, which follows in this specification, taken together with the illustrations thereof presented in the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

FIGS. 1A and 1B are respective pictorials of the preferred embodiments of the doll viewed from the front with the doll wearing a dress and the doll unclothed, respectively;

FIGS. 2A and 2B are like pictorials of the respective views of the doll of FIGS. 1A and 1B, respectively, as viewed from the rear;

FIG. 3 is a block diagram of the electronic package carried in the embodiment of FIG. 1;

FIG. 4 is a pictorial of a potty accessory for use with the embodiment of FIG. 1;

FIG. 5A is a block diagram illustrating the actuators located in the head of the doll for controlling movement and/or positioning of various parts of the doll face, such as the lips, jaw, cheeks, eyebrows and the like;

FIG. 5B is an exploded view of a preferred mechanism for controlling movement of various parts of the doll face incorporated in a practical embodiment of a doll in lieu of that of FIG. 5A;

FIG. 5C is a side view of the mechanism of FIG. 5B as assembled viewed from one side and FIG. 5D is a side view of that mechanism from the opposite side;

FIG. 6A is a pictorial of a dress in which to clothe the embodiment of FIG. 1 illustrating the position of the associated RFID tag in that clothing article and FIG. 6B is a pictorial of a nighty in which the embodiment of FIG. 1 is dressed for bed-time and showing the respective RFID tag location;

FIG. 7A illustrates various food articles that are recognized and used by the doll of FIG. 1 in play incorporating RFID tags with the respective articles;

FIG. 7B illustrates a dish accessory for play with the embodiment of FIG. 1;

FIG. 7C illustrates a toothbrush accessory used in play with the embodiment of FIG. 1; and

FIGS. 8A through 8F illustrate a variety of facial expressions produced in the face of the doll embodiment of FIG. 1 during play.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The description that follows presents the best mode presently contemplated for carrying out the invention. Both the general principles of the invention and specific details for a practical embodiment of the invention are disclosed. Those disclosures should not be taken as limiting the scope of the invention, which is best determined from the numbered claims appended to the end of this specification. The invention is described in reference to a doll that emulates a living child. The doll interacts with a player and implements a virtual character, and, in the preferred embodiment, carries out the daily activities of an active child or incidents in a child's activities. It is understood that other characters both real and fictitious and even other figures and objects may be implemented in accordance with the invention, without departing from the scope of the invention.

FIGS. 1A and 1B are pictorial front views of one embodiment of the doll 10, dressed and undressed, respectively, and FIGS. 2A and 2B illustrate those dolls from the rear. Referring to FIG. 1B, doll 10 contains an outer shell or torso 12 and a head 14. That torso includes a number of parts that are attached together, such as sewn together with thread: arms and legs of padded vinyl, as example, Kraton® based soft skin, or rotocast polyvinyl chloride ("PVC") or the like, typically stuffed with a filler material, as example, polyfil, and shaped to resemble and flex like human skin; and a short-sleeved and short-pant sweater-like portion 11 of soft fabric material that attaches to one end of the arms and legs. The head 14 of the doll is formed of a hard stiff plastic skull covered by a molded-to-shape skin formed from a soft-to-the-feel thermoplastic elastomer (TPE) material or the like; that skin is molded to shape to fit over, overlie, the plastic skull (not visible in the figure) and contains openings for the eyes and mouth of the doll that are included in the underlying skull. The covering skin carries a head of hair 13, preferably formed of strands of Nylon, that's fastened to the skin. The doll head 14 is attached by a neck portion indirectly to the torso and is carried by that torso.

The interior of torso 12 of the doll is hollow and contains a plastic box 18, represented in dash lines, that houses the active electronic components, later more fully described, as well as a surround of bulk padding material to fill remaining space. Access to the interior is made by opening the Velcro-closed seam 19 on the backside of the doll in the sweater-like portion 11 of the torso to expose a cover panel 16 of the plastic torso box and allow access to the battery compartment should battery replacement become necessary. That seam is closed by a Velcro® fastener, not illustrated, that closes off access to the internal hardware and electronics, but which may be repeatedly pulled open and then re-closed. The shape of doll 10 and associated appendages are not particularly important conceptually to the invention, as becomes apparent from the description, but in this embodiment, preferably, conform with the proportions and appendages of a child. Due to the limpness of the sweater-like material 11, the attached appendages may be manually placed in a variety of positions, wherein the doll may be sitting or lying down. In other more complex embodiments the doll may be articulated if desired, either fully or partially, and/or have arms, legs, and the torso molded instead of partially made of fabric and, if so, may be placed in a greater variety of positions, including a standing or walking position. The overall size and weight of the torso 12 is shaped and sized to allow the player (i.e., typically a young child) to comfortably hold the doll in the child's arms without being unduly burdensome. As an example, the torso and head may be shaped as shown in the drawings, and may have an overall dimension (height×width×thickness) of about 20″×7″×3″ (inches).

The electronics package 18 is represented as a dash-line block in FIG. 1B and in a more detailed block diagram in FIG. 3, to which reference is now made. The package includes a programmable electronic controller 20, the doll controller. The doll controller is a programmed general purpose electronic processor or microprocessor (or microcomputer), also known as a microcontroller, preferably in the form of a single integrated circuit ("IC") chip, which integrates a speech-optimized digital to analog converter (DAC) and an analog to digital converter (ADC) into a single IC chip capable of accurate speech recognition as well as high quality low data-rate compressed speech. The IC preferably provides the further on-chip integration of features, including a microphone pre-amplifier, twin-DMA units, vector accelerator, hardware multiplier, three timers and 4.8 kbytes of RAM. In addition to read-only memory ("ROM"), the IC provides an interface for external ROM, input and output ports, and various utilities, and is controlled by a program, represented by dash-line block 21, that's installed in the external ROM 22. Controller 20 includes various inputs 26 and outputs 28 and is powered by an external battery 30, which may be rechargeable and/or replaceable. Electrical current from the battery is supplied through the main power switch 31, an on-off type switch. Switch 27 is a reset switch, which may be a momentary operate type switch. In addition to internal memory, the controller may include access to external or add-on memory 22 provided on a separate memory chip and electrically erasable programmable read only memory ("EEPROM"), not separately illustrated, which retains the memory contents even when electrical power is removed from the circuit. The inputs and outputs are electrically connected to various sensors and other components, such as the microphone and loudspeaker, later herein described, by electrical conductors or cabling, not illustrated in the figure to preserve clarity.

The controller is programmed to perform the various actions described herein, including the utility functions of a clock-calendar, speech synthesis and the processes of speaker-independent recognition (speech recognition) and speaker-dependent recognition (voice recognition). Although those and other utilities are included as stages in the programmed microcontroller 20, as an aid to understanding of the invention those functions may be separately represented in the figure in dash-line blocks associated with the programmed microcontroller 20, with speech and voice recognition illustrated as block 25 and the electronic clock-calendar as block 24.

Loudspeaker 39, also installed in the chest of the doll, connects to an output 28 of the controller 20. The loudspeaker converts any verbal information contained in the electrical signal that is output from controller 20 to an audible sound, suitably an analog sound, and acoustically broadcasts that sound to those near the doll who would be listening. As is conventional, that output from the controller may include the function of and/or be supplied from a digital-to-analog converter. If desired an amplifier or other sound reproducer may be coupled in between the digital to analog converter and the loudspeaker for enhanced sound. A microphone, also installed in the chest of the doll, connects to an input of controller 20. At particular points in the programmed play of the doll, the doll "listens" for an answer to a particular prompt or question that the doll has spoken. At that point and for a particular predetermined length of time, usually several seconds, the microphone receives responses and sound from those persons who are nearby, converts any audible verbal information or other sounds contained in the analog speech output from those nearby into a digital signal, and inputs that signal to controller 20. The speech recognition technology, programming, algorithms, and recognition nets/sets compare the converted analog, now digital, information received with the digital information that the doll was programmed to anticipate receiving and send to controller 20 a "yes," indicating the information expected was received, or a "no," indicating the information received was unexpected. The programming causes the doll to react verbally and animatronically by means of facial expression when the word is one that was expected by the programming, or to produce a different reaction when the word was one that was unexpected or unrecognized, as the case may be.
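Purely as an illustrative, non-limiting sketch of the prompt-listen-react cycle just described, the following Python fragment models a listening window of a few seconds, a comparison against a set of anticipated replies, and the resulting "yes"/"no" indication returned to the controller. The function names, the sample prompt vocabulary, the spoken phrases and the timing value are assumptions made for illustration and do not describe the doll's actual firmware.

```python
import time

# Hypothetical table of the words the programming anticipates for a given prompt.
EXPECTED_ANSWERS = {"breakfast_prompt": {"yes", "no", "pancakes", "juice"}}

def listen_for_reply(recognize_word, prompt_id, window_s=3.0):
    """Listen for a few seconds after the doll speaks a prompt and report
    whether an expected word was heard ("yes") or not ("no")."""
    deadline = time.time() + window_s
    while time.time() < deadline:
        word = recognize_word()        # returns a recognized word or None
        if word is None:
            continue                   # nothing recognized yet; keep listening
        return ("yes", word) if word in EXPECTED_ANSWERS[prompt_id] else ("no", word)
    return ("no", None)                # window expired with no recognizable reply

def react(result, word, speak, set_expression):
    """Branch to a verbal and animatronic reaction, as the text describes."""
    if result == "yes":
        speak(f"Mmm, I like {word}!")          # expected reply: cheerful response
        set_expression("smile")
    else:
        speak("I'm not sure what you said.")   # unexpected or unrecognized reply
        set_expression("sad")
```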

In effect, the doll appears to speak like a person. The doll issues verbal instructions and other information or requests to the child player by essentially speaking those instructions, which are played, that is, broadcast through the loudspeaker. In so speaking, the face of the doll is also animated: The mouth of the doll opens and closes, the eyelids may flutter and so on like a live person. The doll may also prompt the player at times during the course of an interactive conversation to respond with one of several possible words or phrases that the doll (or anyone else) might anticipate hearing in reply and, when recognized by the doll, responds with an intelligent reply or behavior.

The various sensors in the doll include a mouth sensor 32, located inside the doll mouth, that aids in identification of foods (and other objects) that are placed in the mouth of the doll; a hug sensor 33 (e.g. a push button switch), located in the front mid-section of the doll, that aids in determining if the doll is being hugged; a butt sensor 34, located in the rear buttock of the doll that aids in determining when the doll is seated on an accessory, such as seated on the simulated toilet 40 (of FIG. 4) to go “potty;” a clothing sensor 35, located beneath the lower neck, that aids in identifying the clothing worn by the doll; a brush sensor 37, located at the front top of the head of the doll, that aids in detecting whether the hair of the doll is being brushed. The latter sensor may be in the form of a magnetic switch. And, of course, there is the audio sensor 36, a microphone, located in the front chest of the doll that detects sound and converts sound to an electrical signal and thus aids in the electronic recognition of particular sounds and verbal information. A force sensor 38 is included in the right hand of the doll to detect if the right hand is being squeezed. Another force sensor 38B may be included in the left hand to detect if the left hand is being squeezed.
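The sensor complement just listed can be summarized, again only as an illustrative sketch, by the following enumeration and polling helper. The names and the snapshot-style polling are assumptions for clarity; as explained next, the raw outputs only "aid" identification until the controller program interprets them.

```python
from enum import Enum, auto

class Sensor(Enum):
    MOUTH_RFID = auto()     # sensor 32: food/object identification in the mouth
    HUG = auto()            # sensor 33: push-button switch in the mid-section
    BUTT_RFID = auto()      # sensor 34: seat/potty identification
    CLOTHING_RFID = auto()  # sensor 35: clothing identification below the neck
    MICROPHONE = auto()     # sensor 36: audio input in the chest
    BRUSH = auto()          # sensor 37: magnetic switch at the top of the head
    HAND_RIGHT = auto()     # sensor 38: force sensor in the right hand
    HAND_LEFT = auto()      # sensor 38B: force sensor in the left hand

def poll(read_raw):
    """Gather one snapshot of all sensor outputs; interpretation of the raw
    values is left to the controller program."""
    return {sensor: read_raw(sensor) for sensor in Sensor}
```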

The term “aid” was chosen to describe the action of the foregoing sensors, because in order for the action to occur, it is necessary to have the output signals of the respective sensors or switches interpreted by controller 20 before the identification or determination can be regarded as complete. The foregoing sensors and switches are not visible to the child during play with the doll so as not to detract from the illusion that the doll is a living person.

Mouth sensor 32, butt sensor 34 and clothing sensor 35 are formed of radio frequency identification tag readers ("RFID readers"), a technology that has previously found application in inventory control and in supermarkets. The terms RFID sensor, RFID reader and RFID tag reader (each essentially defining the respective sensor) are used synonymously herein. The RFID tag reader detects RFID tags that are placed on or in products. Those tags contain encoded information about the product to which the tag is attached. Such tags may be "smart," in which the information on the tag may be changed (e.g. reprogrammed), or "dumb," in which the information does not change, the latter being the kind preferred in the present application.

Very small in size, about one inch square, RFID tag reader 32 is installed in the upper roof of the mouth of the doll. That reader is coupled, wired, to an input and an output, 26 and 28, of controller 20 (FIG. 3), permitting the controller through the tag reader to interrogate an RFID tag during the course of the system program of controller 20 and receive the output, such as the identification information contained in an RFID tag (not illustrated) that is properly oriented to and positioned in the doll mouth adjacent reader 32.

The RFID tag reader is able to pick up information from the RFID tag on the food item even though that reader may be spaced by as much as an inch from the food item. Thus, even as the mouth of the doll opens and closes by a distance of less than one inch to simulate the act of chewing and varying the distance between the tag and reader, the RFID reader inside the doll's mouth is still able to read the RFID tag carried by the food accessory and glean the identity of the food accessory.

The inputs and outputs of the second RFID reader 34, installed in the lower back of the doll, specifically in the butt of the doll, are likewise coupled, that is, connected by electrical conductors or wired, to a respective output and input, 28 and 26, of controller 20. The second RFID reader permits the controller to interrogate an RFID tag during the running of the system program by the controller and receive the output, such as the identification information contained in an RFID tag 34′ that is installed within, as example, an accessory seat, such as a toilet seat of a child's potty 40, pictorially illustrated in FIG. 4. The tag is read when the RFID tag in the seat is properly oriented and positioned relative to RFID reader 34 in the butt region of the doll while the doll is in the seated position on the accessory toilet seat.

The third RFID reader 35, installed in the back of the upper right shoulder or neck of the doll, is similarly electrically connected to an input and output of controller 20 so that any RFID tag that is positioned closely adjacent to and properly oriented parallel to the reader, such as occurs when a tagged article of clothing is worn by the doll, may be polled for information by the controller when required by the program during the course of the doll's activity during play. As a result, the controller is able to identify the particular clothing article that is being worn. One article of clothing may be a dress 41 and another a nighty 43, as respectively pictorially illustrated from the rear in FIGS. 6A and 6B, to which reference is made. Dress 41 includes the identifying RFID tag 42 on the upper side by the collar of the dress so that the tag aligns with reader 35 when the dress is properly fitted on the doll. Likewise, nighty 43 includes the identifying RFID tag 44 that is interrogated by reader 35 when the nighty is properly fitted on the doll. The dress that appears on the doll in FIG. 1A is another article of clothing that may carry an RFID tag in the foregoing manner.

Referring again to FIG. 1, switch 23 serves as a position or movement sensor and is of any conventional design. The switch actuates if the doll is lifted or moved, causing a small metal ball inside the switch to move inside a metal cylinder, closing a switch contact at one end or the other of the metal cylinder. Switch actuation thereby provides a signal to doll controller 20, which the controller is programmed to interpret to determine whether the doll is lying down, sitting up or is upside down, and to take the appropriate next steps called for by the programming, such as later herein described.
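One possible, purely illustrative interpretation of that ball-and-cylinder switch is sketched below: the two end contacts are mapped to a coarse posture. The contact names, the particular mapping, and the absence of any debouncing are assumptions for illustration; the actual interpretation is a design choice of the doll program.

```python
def interpret_orientation(top_contact_closed, bottom_contact_closed):
    """Map the two end contacts of the ball-in-cylinder switch to a coarse
    posture of the doll (an assumed mapping, for illustration only)."""
    if top_contact_closed and not bottom_contact_closed:
        return "upside_down"
    if bottom_contact_closed and not top_contact_closed:
        return "sitting_up"
    return "lying_down"        # neither contact firmly closed
```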

The microphone 36 connects to an input 26 of controller 20, through which any audio picked up by the microphone is processed by the voice and/or speech recognition program 25 of the controller. The microphone picks up verbal information and applies that information via controller 20 as an electric signal to voice recognition processing. In a practical embodiment the foregoing function may be assisted by an analog-to-digital converter, a known device, if one desires or finds need to apply digital signals to the controller, and the result is then processed by the voice/speech recognition program 25.

Although voice recognition and speech recognition are used interchangeably by those in the field, for the purpose of this application speaker dependent speech recognition is referred to herein as voice recognition. This type of technology creates a "voice print" by requesting the user to speak a particular word multiple times, usually twice. That speaker dependent word now stored in memory is user specific, not word specific. That difference distinguishes what is herein called voice recognition from speech recognition. Voice recognition technology enables identification of a specific word (or words) spoken by a specific user. In contrast, speech recognition enables identification of a specific word (or words) of a particular language and dialect, such as American English, spoken by anyone, and is thus speaker independent, depending on how that speech recognition technology is configured.
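The distinction just drawn can be illustrated, as a minimal sketch only, by contrasting an enrollment step (the voice print) with a speaker-independent lookup. The matching callables (record_sample, match_template, match_vocabulary) stand in for whatever recognition engine is used; they, and the two-repetition default, are assumptions rather than the doll's actual implementation.

```python
voice_prints = {}   # word -> templates recorded in the enrolled child's voice

def enroll(word, record_sample, repetitions=2):
    """Speaker-dependent setup ("voice recognition"): the child repeats a
    word, usually twice, and the doll stores a user-specific voice print."""
    voice_prints[word] = [record_sample() for _ in range(repetitions)]

def is_bonded_speaker(word, utterance, match_template):
    """Does this utterance of 'word' match the enrolled voice print?
    The check is user specific, not merely word specific."""
    return word in voice_prints and match_template(utterance, voice_prints[word])

def recognize_any_speaker(utterance, match_vocabulary):
    """Speaker-independent "speech recognition": identify which word of a
    fixed vocabulary was spoken, regardless of who spoke it."""
    return match_vocabulary(utterance)
```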

The recognition algorithm of the program detects whether the voice is one that was programmed into the doll during the setup procedure, that is, for which the program stored a voice print, and, when the recognition is confirmed, allows the controller to recognize the word or words that were used to make the voice print when spoken in that voice. Speech recognition is quite common nowadays, and is widely used in equipment. As example, speech recognition is used by the telephone company computers to receive (and respond to) customer queries for service and the like over the telephone. Speech recognition is also found in PC programs for the retail market like ScanSoft's Dragon Naturally Speaking® program and ViaVoice® program of the IBM company.

One preferred form of controller 20 for a practical embodiment of the invention is the controller chip that is available from Sensory, Inc. as model RSC 4128, a microcontroller chip with, among other conventional elements, 128 kilobits onboard read-only memory (“ROM”) to store data and an external memory interface, enabling interface with additional memory 22 that may be desired to store a greater number of words, phrases, sentences, facial expression tables and programming, speech recognition nets, speech recognition programming of the RSC 4128, and the programming of the logic and resultant behavior of the doll. If desired, a separate chip of flash memory can be added to separately record speaker-dependent voice recognition.

FIG. 7A pictorially illustrates a number of the food accessories that the doll may be programmed to recognize when the accessory is inserted in the mouth of the doll, including a cookie 51, milk bottle 53, carrot 54, pizza 50, pancakes 58, and juice 59. As shown in FIG. 7B, a baby's feeding bowl 55 contains three separate areas for holding three different spoons with each area holding a small portion of the simulated food in the associated spoon. In other words, one spoon contains spaghetti 56, another contains macaroni 57 and the third contains cereal 52. The simulated food that is held in a respective spoon may also be recognized when inserted in the mouth of the doll. Additionally, as shown in FIG. 7C a toothbrush 60 may be recognized when inserted into the mouth of the doll after the doll requests the player to brush the teeth of the doll following the partaking of food or before bedtime. The foregoing accessories are suitably fabricated of plastic that is decorated with the appropriate visual appearance of the respective food product that's being simulated and/or implement that the child recognizes (or learns what the accessory is intended to represent).

Referring again to FIG. 7A and cookie accessory 51, an RFID tag 51′ is embedded in the plastic of the simulated cookie accessory and contains the information that identifies the accessory. Like tags are also illustrated in dash lines in the other accessories shown in the figure (and those in FIGS. 7B and 7C as well), but are not labeled. The RFID tag in a respective accessory is uniquely configured to enable the RFID reader inside the doll to wirelessly identify the unique configuration of information in the tag and communicate that information to the doll controller 20. By means of the controller programming, the controller is able to associate each unique configuration with a particular one of the accessories for the doll, and thus let the doll "know" the identity of the accessory inside the mouth of the doll, what the doll is wearing, or the object on which the doll is seated, and so on. In the preferred embodiment, the foregoing identification is deferred until the program run by controller 20 of the doll achieves a state at which the program requires the reader to determine if the sensor is detecting an RFID tag. Until that stage in the program is reached, the doll does not make any kind of verbal or behavioral indication of the identification.
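A hedged sketch of that association and deferred polling follows. The tag identifiers and the table contents are hypothetical stand-ins (the specification does not disclose the actual encodings); only the idea of mapping a unique tag reading to an accessory, and of polling the reader only when the program state calls for it, is taken from the description above.

```python
# Hypothetical association table between RFID tag contents and accessories.
TAG_TABLE = {
    0x01: "cookie",     0x02: "milk bottle", 0x03: "carrot",
    0x04: "pizza",      0x05: "pancakes",    0x06: "juice",
    0x07: "toothbrush", 0x10: "dress",       0x11: "nighty",
    0x20: "potty seat",
}

def poll_reader(read_tag, program_wants_identification):
    """Interrogate an RFID reader only when the program state requires it,
    mirroring the deferred identification described above. Returns the
    accessory name, or None when no readable tag is present."""
    if not program_wants_identification:
        return None                  # doll gives no indication yet
    tag_id = read_tag()              # None if no tag is in range
    return TAG_TABLE.get(tag_id)
```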

In practice RFID reader 32 is approximately one inch square in size and includes a small antenna, not illustrated, that is mounted parallel to and just above the vinyl skin in the top (roof) of the mouth of the doll. In order for the reader to read the RFID tag, the tag should be located directly below (e.g. adjacent) and parallel to the antenna in the RFID reader. Reader 32 senses the food accessory (e.g. senses the RFID tag in that accessory) when the accessory is inserted into the mouth of the doll. Because the very small sized reader 32 is located in the mouth of the doll, it is now possible to move the lips of the doll (varying the distance between the RFID tag and the tag reader) to simulate chewing when simulated food, such as the items illustrated in FIG. 7A, is inserted in the mouth of the doll. The RFID tag always remains in RF range, even though the mouth is open wide or is moving. Where the particular food article is especially thick, too thick for insertion into the mouth of the doll, then a portion of that article is formed with a reduced thickness portion decorated to simulate a piece of the food article that was previously eaten. That reduced thickness portion fits inside the mouth of the doll and contains the RFID tag. Cookie 51 and pancake 58 in FIG. 7A are shown to contain a simulated partially eaten portion.

Importantly, the mechanical mouth movement that emulates chewing (and sucking or sipping) does not interfere with the RFID reader's ability to read the identification of the food accessory, a novel feature. With the foregoing RFID tag and reader structure, the identification of the simulated food that's inserted in the mouth of the doll is transparent to the young child. That transparency reinforces the illusion that the doll is actually a live person or is magical.

The small size of the RFID reader 35 in the doll's back, more specifically, located at the back beneath the neck of the doll, enables the RFID tag on the clothing to be read transparently once the clothing is placed on the doll properly and the Velcro attachment on the clothing is closed to hold the RFID tag directly over the RFID reader.

Additionally, the RFID reader 34 in the doll's butt enables the controller to know (e.g. detect) when the doll is actually seated on an accessory that contains an RFID tag, and to identify that accessory. This is particularly important with the potty accessory 40 of FIG. 4. Once the doll is placed on the potty, the illusion that needs to be conveyed to the child, consistent with the illusion that the doll is alive, is that the doll knows whether or not she is sitting on the potty and that "she did it!" if a pee-pee or poo-poo sound occurred while the doll was so seated. The face of the doll in that event will concurrently present a satisfied demeanor. If the doll had a simulated bowel movement or urination before reaching the potty, then the doll had an "accident" and should be a little upset at her mistake. The demeanor of the doll face concurrently would be changed by the facial control actuators to represent that the doll is upset.
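As a short, non-limiting sketch of the branch just described: after the pee-pee/poo-poo sound effect plays, the controller checks whether the butt reader detected the potty seat and chooses the response accordingly. The spoken phrases and expression names are illustrative assumptions, not the doll's scripted dialogue.

```python
def potty_outcome(seated_on_potty, speak, set_expression):
    """Choose the doll's reaction once the pee-pee/poo-poo sound has played,
    based on whether the butt RFID reader detected the potty seat."""
    if seated_on_potty:
        speak("I did it!")                   # success: satisfied demeanor
        set_expression("satisfied")
    else:
        speak("Uh oh, I had an accident.")   # hypothetical phrasing
        set_expression("upset")
```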

As those skilled in the art appreciate, the incorporation of an RFID tag reader in the butt (e.g. behind) of a doll (or the backside thereof), so that the accessory on which the doll is seated, if any, may be identified, is novel. Further, the use of the foregoing identification of an accessory seat 40 in order to initiate conversational interactivity with the child is also novel. Applicant refers to this feature as "Virtual Conversational Interactivity™."

The facial expressions of the doll head are varied and controllable. Those expressions include those that accompany natural speech, referred to as natural speech expressions, as well as a smile, chewing expressions, listening, sleepiness, yawning, surprise, unhappiness, crying and excitement. Actuators, such as electromechanical or electromagnetic actuators, carried in the doll head and controlled by doll controller 20, are coupled to locations in the head of the doll that control: the movement of the flexible lips of the doll's mouth; the vertical spread of the jaw, hence the lips and mouth of the doll; the smiling (upturn) or unhappy (downturn) positioning of the right and left ends of the lips of the doll; the upward movement of the cheeks of the doll while the mouth of the doll is concurrently positioned in a smile; the upward movement of the inner portion of both eyebrows of the doll as the doll eyes open in an expression of wonderment or surprise; the projection of the lower lip of the doll mouth downward and forward in a pout; the slow opening of the mouth combined with sleepily closing eyelids in the expression of a yawn; the fluttering eyelids on wakening of the doll; and the eyebrows and the eyeballs of both eyes of the doll. The actuator that opens and closes the doll mouth can be called a mouth actuator; and, collectively, the actuators for the face of the doll may be said to constitute a facial actuator. The foregoing actuator arrangement is pictorially illustrated in FIG. 5A, to which reference is made.

Head 14 contains actuators 61, 63 and 65 for controlling the shape and position of the lips of the doll. The actuators are coupled to a respective output 28 of controller 20, not illustrated in this figure, and are either energized or de-energized by the computer in accordance with the computer application program that is being "run." Actuators 61 and 63, when energized, pull the ends of the lips, which effectively changes the appearance of the mouth from a normal one to a wide one. When de-energized, the skin material, which is elastic, restores to the normal size, and the mouth returns to a normal appearance. Actuator 65, when energized by the controller, spreads the upper and lower lips apart (e.g. raises the upper lip and lowers the lower jaw, hence the lower lip), effectively opening the mouth of the doll. When de-energized the lips (and jaws) are restored to a closed appearance, which is the normal appearance of the mouth. Thus, actuator 65 essentially serves as a mouth control actuator. Actuators 67 and 69 are coupled to respective outputs 28 of the controller and respectively control the left and right cheeks of the doll, which are formed in the elastic skin of the doll face. When energized by the controller, the actuators move the cheeks upward. When de-energized, the cheeks restore to the normal position. When the doll is to smile, actuators 61 and 63 spread the lips to a wide position and actuators 67 and 69 move the cheeks upward. Those movements produce a smile.

The actuators may be bi-directional, in which case the controller program directs the movement and direction of the respective actuator. Alternatively, the actuators, or some of them, may be of a unidirectional type that moves in one direction when energized, tensioning a spring (or the elasticity of the doll skin); when de-energized, the actuator is returned to the initial position by the energy stored in the spring.

Actuators 71 and 73 are coupled to respective outputs 28 of the controller and respectively control the right and left eyelids of the doll to thereby open (or close) the eyes of the doll. In the embodiment, the eyelid is attached to the eye of the doll, a spherical member, and covers a circumferential portion of that sphere. A shaft mounts that sphere in the eye socket in the doll face for rotational (e.g. pivotal) movement about an axis defined by the shaft. The latter actuators produce a rotational movement when energized. By pivoting the eye, the attached eyelid is pivoted counterclockwise, away from the front of the face to the rear, and the eyeball is concurrently pivoted to the normal position in which the eye appears to be looking straight ahead. When energized, actuators 71 and 73 essentially cause the doll to open the doll's eyes. Actuators 75 and 77 are also coupled to a respective output of the controller and simultaneously control the right and left eyebrows of the doll, respectively. When energized by the controller, actuators 75 and 77 push (or pull) on the elastic skin of the doll face to slightly stretch the skin. Since the skin carries the image of the eyebrows, stretching the skin effectively raises the eyebrows. When de-energized, the skin contracts and resiliently restores to the normal condition, moving the eyebrows back to the normal position. When expressing surprise, the doll's mouth would be opened, the eyes would be wide open and the eyebrows would be raised, a familiar appearance of a surprised person.
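The mapping from a named expression to the set of energized actuators of FIG. 5A can be summarized, purely as an illustrative sketch, in a small table. Which actuators belong to the smile and the surprise follows the description above; the table form, the "neutral" entry and the helper function are assumptions for clarity only.

```python
# Expression table keyed to the actuator reference numbers of FIG. 5A.
EXPRESSIONS = {
    "smile":    {61, 63, 67, 69},       # spread the lips, raise both cheeks
    "surprise": {65, 71, 73, 75, 77},   # open mouth, open eyes, raise brows
    "neutral":  set(),                  # everything de-energized (at rest)
}

ALL_ACTUATORS = (61, 63, 65, 67, 69, 71, 73, 75, 77)

def set_expression(name, energize):
    """Energize exactly the actuators belonging to the named expression and
    release the rest, so the elastic skin restores the remaining features."""
    active = EXPRESSIONS[name]
    for actuator in ALL_ACTUATORS:
        energize(actuator, actuator in active)   # True energizes, False releases
```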

When a simulated food article is inserted in the mouth of the doll, controller 20 indirectly detects the food article. The controller is programmed to have the doll, among other actions, taste and chew the food. For the foregoing action the controller is further programmed with an expression control module or subsidiary program, and one operational routine in that control module or program is the act of chewing. With the program giving the command for the doll to chew, the controller issues voice messages that broadcast chewing sounds from the loudspeaker 39, concurrently with issuing commands to the mouth actuator 65 and lip actuators 61 and 63 to move the jaws or lips to simulate the facial expressions that normally accompany chewing by a live person. Preferably, as the two lips come together and/or the lower jaw moves up and down repeatedly in a chewing cycle, if the food is potato chips or apples, the doll concurrently broadcasts a "munch" sound through the loudspeaker.
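One possible shape of that chewing routine is sketched below: the mouth actuator is cycled while the loudspeaker plays chewing (or "munch") sounds concurrently. The cycle timing, the sound names and the callable parameters are assumptions for illustration, not the actual expression control module.

```python
import time

def chew(cycles, open_mouth, close_mouth, play_sound, crunchy=False):
    """Cycle the mouth actuator while broadcasting chewing sounds, as the
    expression control routine described above is said to do."""
    for _ in range(cycles):
        open_mouth()                                  # actuator 65 energized
        play_sound("munch" if crunchy else "chew")    # concurrent sound effect
        time.sleep(0.4)                               # pacing chosen for illustration
        close_mouth()                                 # actuator 65 released
        time.sleep(0.4)
```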

Reference is made to FIGS. 8A through 8F. The actuator arrangement that is coupled to the skin of the doll face should be capable of contorting the mouth, eyelids and cheeks to produce various expressions. Some of those expressions are illustrated, such as expressions normally associated with yawning (FIG. 8A), sleepiness, attentive listening, a feigned smile (FIG. 8B), surprise (FIG. 8C), unhappiness, crying (FIG. 8D), excited speech (FIG. 8E), chewing, and waking with fluttering eyelids (FIG. 8F). The eyelids (FIG. 8F) of the doll can also be fluttered by the actuators, an action that may occur when a child is awakening. The number of possible physical manipulations of the doll face appears almost infinite, limited only by the imagination of the doll designer. Typically, the foregoing expressions are accompanied by verbal statements or sounds uttered by (i.e. audibly broadcast by) the doll, enhancing the impact of the expression on the player and reinforcing the meaning of the expression where there might be doubt. Those verbal statements are elsewhere herein discussed.

A control system that uses such individual electromagnetic actuators is more expensive at present than desired. Such a control system would necessitate a price for the resultant doll that at the present time is so high at both wholesale and retail that the doll would not sell to buyers or would have very limited distribution and low volume sales. Therefore, a specific mechanism was developed for the doll by others to whom the requirements for construction of a practical embodiment of the invention were made known; and that mechanism proved less complicated and less expensive than individual actuators for a practical embodiment. The device was also slightly less versatile. That device used two motors to move gears and/or a number of cams that could accomplish the foregoing facial movements within acceptable limits. The face of the doll is formed of a flexible, but strong, thermoplastic elastomeric material, a rubber-like plastic material that presents a human-like skin in feel and appearance. That material forms a skin that fits over and overlays a stiff plastic skull and contains suitable openings in the skin to accommodate the mouth and eyes carried in the skull, and the skull houses the controller-controlled actuator mechanism. The actuator mechanism should be able to move or distort that skin so as to produce facial expressions on the doll that appear realistic. This thermoplastic elastomeric skin is injection molded to shape and, when formed, stretches over and onto the rigid plastic skull. Preferably, small rigid plastic connector tabs are insert molded into the interior of the foregoing skull skin. Those connector tabs connect certain parts of the skin to respective actuators, enabling the actuators, when operated, to more positively push or pull the skin to change the appearance of the doll face.

In one practical embodiment, the mechanism is achieved with two motors and a double cam mechanism, later herein more fully illustrated and described. That design includes two double-sided cams, each operated by a single motor that is able to rotate the cams clockwise or counterclockwise. The cams, in combination with two gearboxes operating them, move levers that attach to respective ones of the rigid insert-molded plastic connectors that are molded into the inside surface of the skin material that covers the skull and forms the doll's face. The strategically configured placement and shape of the cam surfaces, together with the positions in the skin at which the actuator connectors are specifically molded into the skin in order to take advantage of this actuator, enable facial movements of the strategically selected portions of the doll's face. Those facial movements achieve far more realistic human movements, in the applicant's view, than any doll ever made or marketed previously.

Using two motors, each operating independently of the other but under control of the doll controller, the number of facial expressions that can be obtained is maximized with a minimum number of motors. Each of the two motors is bi-directional and is capable of rotating either clockwise or counterclockwise (forward or reverse); the motor speed can change to rotate fast or slow; and the duration of the rotation of the motor shaft occurs as directed by controller 20 (FIG. 3). The direction, speed and duration of rotation for each facial movement are determined by the doll controller, which in turn is controlled by the programming logic and game flow directing the actions.
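Purely by way of illustration, that control relationship may be sketched as follows. The expression names, numeric values and the motor interface in this Python sketch are assumptions made only to picture how a named expression could be translated into direction, speed and duration commands for the two motors; they are not taken from the actual doll firmware.

    # Hypothetical sketch: map a target expression to commands for the two
    # bi-directional cam motors (eye/eyebrow cam and mouth/cheek cam).
    # All names and values are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class MotorCommand:
        direction: str   # "forward" (clockwise) or "reverse" (counterclockwise)
        speed: str       # "slow" or "fast"
        seconds: float   # duration of rotation

    EXPRESSIONS = {
        #            eye/eyebrow cam                       mouth/cheek cam
        "surprise": (MotorCommand("forward", "fast", 0.4), MotorCommand("forward", "fast", 0.5)),
        "yawn":     (MotorCommand("reverse", "slow", 0.8), MotorCommand("forward", "slow", 1.2)),
        "smile":    (MotorCommand("forward", "slow", 0.3), MotorCommand("reverse", "slow", 0.4)),
    }

    def make_expression(name, eye_cam_motor, mouth_cam_motor):
        """Issue one command to each motor; the two motors run independently."""
        eye_cmd, mouth_cmd = EXPRESSIONS[name]
        eye_cam_motor.run(eye_cmd.direction, eye_cmd.speed, eye_cmd.seconds)
        mouth_cam_motor.run(mouth_cmd.direction, mouth_cmd.speed, mouth_cmd.seconds)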

The motors connect to gears and/or cams and/or levers that rotate the eyeballs of the doll to a closed position as the lower jaw of the doll is moved downwards, and then immediately rotate the eyeballs to the open position and move the jaw upwards. That action produces “blinking” of the eyes in a natural manner while the doll is “talking.” During the course of broadcasting voice statements the controller causes the doll eyes to blink repetitively, simulating the natural eyelid movement of a living person who is speaking. The motors can also open the eyelids and eyes of the doll to a wide open position and concurrently move the eyebrows of the doll upward to simulate a facial expression of surprise; lower the eyelids of the doll slowly while concurrently widening the mouth of the doll to simulate an expression of yawning; or close the eyelids to simulate the appearance of sleeping. The motors can also lift the cheeks of the face of the doll upwards and backwards and the corners of the mouth of the doll upwards and back to produce a smile on the doll face in a very human-like manner, or turn the corners of the mouth of the doll down and/or protrude the lower lip so that the doll face gives the expression of unhappiness or even appears to pout.

A more detailed picture of the mechanism described in the prior paragraph is presented in FIGS. 5B through 5D, to which reference is made. FIG. 5B illustrates the mechanism in an exploded view of the doll head from which the covering custom elastomeric skin is omitted. The front and rear head plates of rigid plastic, 100 and 101, join together to define an internal region in which the remaining parts illustrated in the figure are housed. The front head plate contains the various openings or windows for the eyes, cheeks, nose, and mouth, as illustrated. The two cams 103 and 104 are mounted side-by-side in the head for rotation about a common axis. Each cam is formed of a molded stiff plastic that is circular and generally flat in shape, disk-like, with the front and back faces of that disk containing at least one, and possibly two, cam tracks. Each cam track is formed between a parallel pair of raised ridges located on the face of the disk, which simulates and is equivalent to a grooved track. Those cam tracks are strategically configured irregular ovoid/elliptic shaped tracks about the center of the disk. The cam track controls the generally lateral position of a cam follower, formed of a short round peg or lever, as a function of the rotational position of the cam. One or more such cam tracks are included on each of the two faces of each cam; only one face of each cam is illustrated in the figure.

Cam 103 controls the movement and change of position of the eyes of the doll by means of the cam-track on one face of the cam and controls the eyebrows of the doll by means of the cam-track on the opposite face of that cam. Therefore cam 103 is sometimes referred to herein as the eye and eyebrow cam. Cam 104 controls the movement and change of the position of the mouth of the doll by means of the cam-track on one face of the cam and controls the cheeks of the doll by means of the cam-track on the opposite face of the cam. Therefore cam 104 is sometimes referred to herein as the mouth and cheek cam.

The doll head contains a pair of eyes 105, an eyebrow actuator connector 106, and an eye lever 107. The mouth of the doll is represented by structure 108. The head further contains additional eye levers 119 and 120. Eye lever 119 is connected to and follows a cam-track in cam 103. A pair of cheek actuators 109 connects to a cheek actuator connector 110. In turn a cheek lever 111 connects to cheek connector 110.

A battery operated DC motor 112 drives cam 104, indirectly, through an appropriate set of three gears 113, with power being supplied to the motor under control of the controller and, hence, the program being run by the controller. The motor attaches to a motor casing 114 enclosing the gears 113, and that casing includes a rotating joint 115, driven by the lower gear in the figure, that rotates the shaft that drives cam 104. The doll head also includes a motor housing 116 that houses a second DC motor, not illustrated, that drives cam 103, indirectly, through an appropriate set of gears, also not illustrated. Motor housing 116 includes a rotatable shaft 117 that is exposed on the far side. A wiper 118 is placed in abutment with shaft 117 and defines a shaft position indicator; position is sensed where the metal contacts of the wiper touch metal contacts that rotate with the shaft. Through appropriate wiring, not illustrated, wiper 118 reports the shaft position to the controller. That information, or feedback, enables the controller to properly turn the cams so that the mouth of the doll is opened wide in the case of expressing surprise, or is opened slightly when the doll is to express a smile.
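The use of that feedback may be pictured with the minimal sketch below. The read_wiper() and motor helpers are hypothetical stand-ins for the controller's actual inputs and outputs; the sketch only illustrates rotating a cam until the wiper reports the desired shaft position.

    # Hypothetical sketch: rotate a cam until the wiper reports the target
    # shaft position (e.g. mouth wide open for surprise, slightly open for a
    # smile).  read_wiper() and the motor object are illustrative assumptions.

    def rotate_cam_to(target_position, read_wiper, motor):
        current = read_wiper()
        if current == target_position:
            return
        direction = "forward" if target_position > current else "reverse"
        motor.start(direction, speed="slow")
        while read_wiper() != target_position:   # poll the shaft position contacts
            pass
        motor.stop()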

The mechanism includes eye levers 119 and 120; second and third cheek levers 121 and 122, in addition to cheek lever 111, earlier noted; and mouth levers 123 and 124. It also includes mouth extenders 125 and 126, mouth levers 127, 128 and 129, and an additional mouth extender 130. A lever plate 131 is included to support the aforementioned levers. The irregularly bent strip shown to the left of cam disk 104 constitutes the main crank 132. Casing 133 closes the rear end of the mechanism.

FIG. 5C shows many of the same elements as viewed from the same side as the exploded view of FIG. 5B, but assembled together, and FIG. 5D shows the elements as viewed from the opposite side of the skeletal head. As shown in FIG. 5C, cheek lever 122 and mouth lever 123 are coupled to a cam-track in cam 103. Suitably, those cam tracks are formed by parallel ridges formed on the sides of the disk-like cam that extend a small distance from the otherwise flat side of the disk. Forward motion of cheek lever 122 pushes the cheek actuators 109 (in FIG. 5B), and those actuators, coupled to the connector tabs attached to the inside surface of the elastic doll head skin in the region of the cheeks, in turn press on the covering elastic skin to raise or lower the cheek area of the doll face. Mouth lever 123 likewise follows a cam-track in the cam disk and, through the additional mouth levers 124, mouth extenders 125, 126 and 130, and mouth levers 127, 128 and 129, operates the mouth of the doll, not all of which are labeled in FIGS. 5C and 5D.

The foregoing mechanism accomplishes essentially the same functions as one that relies on the electromagnetic actuators described in connection with FIG. 5A, but is believed less expensive to fabricate. As one appreciates, it is not the purpose of this application to delve into the mechanical intricacies of mechanism components obtained from third-party sources, however ingenious the mechanical engineering may be, other than to note the availability of such mechanisms and the reasons why applicant prefers that mechanism for the embodiment of the doll. That double-motor, double-cam mechanism is found to possess the capability of properly controlling the “facial muscles” of the doll face to produce the contortions appropriate to each of the emotions noted above, such as those illustrated in FIGS. 8a through 8f, including, but not limited to, fluttering eyelids, yawning, sleepiness, attentive listening, smiling, chewing, speech, surprise, unhappiness, crying, and excited speech.

The strategic configuration of the irregular elliptic cam-track on the eye-movement face of cam 103, and of the cam-track for eyebrow movement on the opposite face of cam 103, creates three appropriate positions for both the eyes and the eyebrows as cam 103 rotates forward or backward, moving seamlessly either clockwise or counterclockwise for various distances and at various speeds to change the eyes and eyebrows from one position to another. In a like manner, the strategic configuration of the irregular elliptic cam-track on the cheek-movement face of cam 104, and of the cam-track for mouth movement on the opposite face of cam 104, creates appropriate positions for both the cheeks and the mouth as cam 104 rotates forward or backward, moving seamlessly either clockwise or counterclockwise for various distances and at various speeds to change the cheeks and mouth from one position to another. The movement of the eyes may occur concurrently with movement of the eyebrows or independently of eyebrow movement. The mechanism is mounted in the head of the doll and the electrical inputs to that mechanism are connected to the appropriate outputs of controller 20.

In applicant's view, the doll is able to produce emotional expressions that surpass anything previously attempted. The apparent ability of the actuators to achieve facial positions simulating various emotions, enabling the doll to respond emotionally through facial expression alone or accompanied by speech, in response to Conversational Interaction™ by means of voice and speech recognition when the player, whether mother or child, speaks to the doll, appears revolutionary.

Returning to FIG. 3, controller 20 is preferably implemented in the form of a battery operated programmable microprocessor or microcontroller, as variously termed, and associated memory, including voice ROM, a digital-to-analog converter, and appropriate input and output interface circuits. The microcontroller may also include an analog-to-digital converter and digital filters. The foregoing may be implemented in a custom semiconductor integrated circuit chip, although separate chips may be used as an alternative, all of which are known and have appeared heretofore in interactive toys. Optionally, the digital clock need not be a separate unit as earlier described, but instead is also integrally formed on the chip. The chip's inputs are connected to the respective sensors (and digital clock) described, and its outputs are connected to the loudspeaker. The microcontroller is programmed in accordance with the foregoing description and that program, the software, is stored in another portion of non-volatile memory or ROM.

As described, in the preferred embodiment the doll operates entirely on power supplied by batteries; that is, it is a self-contained battery operated unit. When the electrical battery (or batteries) 30 are initially inserted in the battery compartment inside the doll and the power switch is set to “on,” the right hand of the doll is squeezed, actuating switch 38, and the programmed set-up procedure for the doll commences. A few moments thereafter, the doll voices a yawn, contorts the doll face accordingly, and introduces herself as Amanda. The doll then asks the player to say the word “pizza.” When spoken, the word is recognized by the doll. The doll confirms that it heard the player state “pizza” and then asks the player to say the word “spaghetti.” When spoken by the player, that word is also recognized by the doll and the doll confirms aloud that it heard the word “spaghetti.” In this embodiment, with the voice recognition software analyzing the spoken words “pizza” and “spaghetti” and storing the analysis in memory as a code or pattern, the doll attaches the personage of “mom” to that analysis. In that way the child player becomes a de facto “mom.”
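The flow of that enrollment step may be pictured with the short sketch below. The say() and record_and_analyze() helpers are hypothetical stand-ins for the loudspeaker output and whatever analysis the voice recognition software actually performs; the sketch only illustrates binding the speaker's pattern to the personage of “mom.”

    # Hypothetical sketch of the set-up enrollment: two prompted words are
    # analyzed and the resulting patterns are stored as the voice print of
    # "mom".  say() and record_and_analyze() are illustrative stand-ins.

    def enroll_mom(say, record_and_analyze, memory):
        patterns = []
        for word in ("pizza", "spaghetti"):
            say("Say " + word)
            pattern = record_and_analyze()        # code/pattern for the spoken word
            say("I heard " + word)                # confirm the word aloud
            patterns.append(pattern)
        memory["mom_voice_print"] = patterns      # the speaker becomes the de facto "mom"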

When powered on for the first time, or when the mother receives a special command from the doll, the doll interrogates the mother in the most natural way possible in order to determine (and set) the current date and time on the clock of the controller; whether the doll should observe daylight savings time, the wake-up time for the doll, and bedtime. Each time an answer is given by the child, that answer is verbally confirmed by the doll before proceeding. Additional topics could be programmed into the controller program for interrogation, provided that the topic is relevant to a function of the doll.

The doll first speaks a particular year, such as 2005, and then asks the player if the year that was spoken is correct. If the player answers in the negative, the doll tells the player to squeeze the right hand of the doll and release that hand from the squeeze only when the correct year is spoken. The doll then broadcasts the various years in serial order. When the year 2005 is spoken, the player releases the right hand and the set-up procedure continues. If the player makes a mistake, the player is able to note the mistake when the doll tells the player the year and asks for verbal confirmation from the player, to which a “yes” or a “no” is spoken. If the answer is negative, the doll repeats the entire procedure until a correct year is confirmed by the player.

Once the year is settled, the doll recites the name of a month and asks if that is the correct current month, requesting an answer of either “yes” or “no.” If the player states “yes,” the answer is recognized and the set-up procedure next addresses the date in the month; if “no,” the doll recognizes that and commences to repeat the same procedure described above for an incorrect year, but for an incorrect month instead, which includes squeezing the right hand of the doll. The doll then speaks the date number for the day of the month and requests an answer from the player. The protocol for settling upon a correct date is the same as described for the year and the month. Once the correct date is settled, the doll makes a statement of the correct date: “It is now Nov. 4, 2005,” as example.

The program then moves to stating a wake-up time and a sleep time, which are negotiated with the player and settled upon. Alternatively, the program moves to stating a wake-up time and a sleep time and gives the player the option to choose those times or skip doing so. If the player wishes to select a wake-up and/or sleep time, those times of day are selected in the same manner previously described herein for selection of the year, month and day.

The foregoing information is programmed into the EEPROM (electrically erasable programmable read-only) memory during the set-up procedure and is retained even when the doll is powered off. The clock function continues running so long as the power switch to the DC power supply remains in the power-on position. When that main power switch is turned off the clock ceases to run, but the calendar information, that is, the time, day, month and year as of the time the power was turned off, is saved in memory. When the main power switch 31 is again turned to the “on” position the doll program causes the microcontroller to run a brief set-up procedure that enables the user to reset the time from the prior time that was saved. That information may be edited or changed, however, by actuating a reset key 27, shown in FIG. 3.

In an alternative embodiment, the doll may verbally query the child to speak the name of the doll so that the doll may obtain the required voice sample of the child to identify the child as the mommy. Many variations can be made in the set-up procedure. In one alternative, the doll broadcasts a question through the speaker 39 and asks: “Say my name mommy,” and may state that request twice. The doll will know when her make-believe “mommy” states the name of the doll, which in the preferred embodiment is Amanda. The voice recognition software within the controller 20 (e.g. virtual recognition process 25) analyzes the response and produces an electronic voice pattern of the “mommy.” The doll may ask again and, with the additional reply at this stage, the doll recognizes the person doing the speaking as “mommy” and is programmed to reply with a statement, such as “I love you mommy.” Thereafter, the doll continues with the set-up procedure as specified in the set-up program for the controller.

Verbal messages are broadcast from the loudspeaker 39 under control of the microcontroller by outputting the contents of various locations in the voice ROM, and applying that digital information to a digital-to-analog converter or equivalent virtual converter, not illustrated, forming a speech synthesizer. From there the sound information propagates to loudspeaker 39 and is broadcast. The digital form of the message is converted to the analog form that drives the loudspeaker and produces the desired verbalization of audible sounds, words and other voice messages.

The verbal messages and sounds are preferably human voices recorded as digital information in a portion of the ROM memory, which portion may be referred to as the voice ROM, using any standard technique. Those verbal messages, such as those earlier described, may be stored as complete sentences or, alternatively, as words and partial phrases, dependent in part on the amount of memory available or which one prefers to include. The inventor's script also contains commands for the dual-motor, dual-cam facial expression mechanism for producing on the face of the doll the facial expressions that the inventor wishes to accompany those verbal messages, including the verbal statements presented in this text.

An exemplary listing of additional verbal statements and the positions of the four controlled facial aspects that are to accompany the respective statements, namely the eyebrows, the eyes, the mouth and the cheeks, is presented in APPENDIX A to this specification. Appendix A contains statements that are spoken by the doll and the accompanying changes to the components of the doll face that change the facial expression of the doll. The latter are arranged in two columns. The right hand column recites the position of the eyebrows, eyes, mouth and cheeks prior to (and following) the change, and the left hand column describes the change to those components from the foregoing default condition. As example, when the doll broadcasts “Yea, Amanda loves dinner!” the eyebrows move from the normal position upward, the eyes remain at the normal configuration, that is, open, the mouth changes from the normal position to wide, and the cheeks of the doll move up. Each of those actions, voice and physical, is scripted by the designer and combined to produce a life-like and natural response that one might expect from a living child.

In addition to short phrases, the verbal messages may include songs that are sung by the doll, music and/or special sound effects. In the preferred embodiment the doll is capable of singing several songs. It is also capable of producing the “pee-pee” and “poo-poo” sound effects of a child eliminating on the toilet, sounds which amuse and excite young children.

Memory chips have become relatively inexpensive, which minimizes any necessity for reducing the size of memory included in the doll. Hence, the preferred approach is to store complete sentences of spoken words. That allows for a higher quality of sound reproduction. To minimize the amount of memory required, if desired, messages may be stored as individual words, partial phrases, and/or full expressions, and then pasted together for broadcast, i.e. concatenated. As an example, the verbal message “I want a banana” may be parsed into separate parts and stored in different areas of the memory as “I want a” and as “banana.” Under program control, when the message for a banana is called for during the course of the program, the microcontroller selects and consecutively outputs the two sections from the memory in proper order. Other verbal requests, for a pizza or Popsicle, as examples, may likewise be constructed using the same initial phrase “I want a,” thereby requiring storage space for that phrase only once. The individual words and sub-phrases may be used repeatedly, allowing them to be played back in various sequences. The quality of audio reproduction using those space saving techniques is less than that available by storing complete phrases; hence the space saving technique described is less preferred than storing and reproducing complete phrases.
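The concatenation scheme may be pictured with the small sketch below. The phrase table and helper are illustrative only and do not reflect the doll's actual voice-ROM layout; in hardware the two clips would be output consecutively through the digital-to-analog converter.

    # Hypothetical sketch of the space-saving concatenation described above:
    # a common stem is stored once and joined with a specific word at playback.

    VOICE_CLIPS = {
        "stem_want": "I want a",
        "banana": "banana",
        "pizza": "pizza",
        "popsicle": "Popsicle",
    }

    def speak_request(item_key):
        # Select the two stored sections and output them in proper order.
        return VOICE_CLIPS["stem_want"] + " " + VOICE_CLIPS[item_key]

    # speak_request("banana") -> "I want a banana"
    # speak_request("pizza")  -> "I want a pizza"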

The digitized audio may be compressed using any conventional compression algorithm during the recording process to further minimize memory requirements; and the program should then include implementation of an algorithm for decompressing that compressed digitized audio as it is played back.
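By way of illustration only, the compress-on-record, decompress-on-playback flow is sketched below using a generic lossless codec (zlib); an actual doll would use whatever speech codec is chosen for the voice ROM, so the codec here is purely an assumption.

    # Hypothetical sketch of recording-time compression and playback-time
    # decompression of a digitized voice clip.

    import zlib

    def store_clip(pcm_bytes):
        return zlib.compress(pcm_bytes)       # done once, while recording the clip

    def play_clip(stored_bytes):
        return zlib.decompress(stored_bytes)  # done by the program at playback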

The doll may be modified to incorporate a separate clock calendar, such as a digital clock calendar chip, in lieu of the clock in the prior embodiment. In a similar manner to the way in which the controller 20 is programmed with information enabling it to keep track of the daily passing of time (i.e., a clock function), the device may, in addition to the heretofore mentioned clock function, be programmed to keep track of the weekly, monthly, and yearly passing of time (i.e., a calendar function). The calendar would be set in a similar manner to the clock, where, as earlier described, in the “set the time” mode the clock would be set to the hour of the day, the minutes of the hour, and whether it was AM or PM. Once that information was set in the clock, such as by toggling the right hand sensor in the prior embodiment, the “set the date” mode would automatically occur on the program menu, permitting the parent to also set the month by inputting a number between 1 and 12, then set the day of the month by inputting a number between 1 and 31, and then set the year by inputting the appropriate four numbers for the current year, such as 2005.

The set-up procedure is accomplished verbally and manually, which avoids the necessity for inclusion of a visual display, such as an LCD. The doll instructs the child to squeeze the right hand of the doll and to release that squeeze when the child hears the doll broadcast the correct number, starting with the year. The doll then starts by speaking 2005, 2006, 2007 and so on, until the child releases the squeeze of the hand of the doll, thereby selecting the year. Then the doll makes the same request for the correct month, and speaks the months, one, two, and so on (recycling if necessary each time the figure twelve is passed), until the child releases the squeeze of the right hand of the doll; the doll then repeats the procedure for the day of the month. Thereafter the procedure continues with the hour and minutes of the day through squeezing and releasing the squeeze; and after that is set, the doll requests a verbal answer, yes or no, as to whether the time is a morning time (AM) or evening time (PM). The controller includes information in memory of the date on which daylight savings time commences in each of the U.S., the U.K., Australia and New Zealand, all of which are English speaking countries.
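The same squeeze-and-release selection loop is reused for the year, month, day, hour and minutes, and it may be pictured with the sketch below; the detailed protocol appears in the paragraphs that follow. The say(), hand_squeezed() and heard_yes() helpers are hypothetical stand-ins for the loudspeaker output, the right-hand pressure switch, and the yes/no speech recognition.

    # Hypothetical sketch of the generic squeeze-and-release selection loop.

    import itertools
    import time

    def select_by_squeeze(values, say, hand_squeezed, pause=1.0):
        """Recite values in serial order (recycling) while the hand is squeezed;
        return the value last recited when the squeeze is released."""
        last = None
        for value in itertools.cycle(values):
            if not hand_squeezed():          # squeeze released: selection made
                return last
            say(str(value))
            last = value
            time.sleep(pause)                # give the player time to release

    def select_with_confirmation(values, say, hand_squeezed, heard_yes):
        """Repeat the selection until the player confirms it with a spoken yes."""
        while True:
            choice = select_by_squeeze(values, say, hand_squeezed)
            say("Is " + str(choice) + " correct?")
            if heard_yes():
                return choice

    # e.g. year  = select_with_confirmation(range(2005, 2021), say, hand_squeezed, heard_yes)
    #      month = select_with_confirmation(range(1, 13), say, hand_squeezed, heard_yes)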

As a specific example, the doll broadcasts an instruction to the player to squeeze the right hand of the doll, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct year is broadcast from said speaker; thereafter the recital of the years is terminated when the squeeze is released and the year last recited is placed into a memory. The doll then broadcasts the year last recited and requests the player to confirm whether the year so broadcast is correct by answering yes or no. If the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES.

Alternatively, if one squeezes the right hand of the doll and immediately releases the squeeze, the doll broadcasts the first year, 2005, and with each further squeeze the doll broadcasts the next successive year. That continues until one stops squeezing the doll hand for a predetermined time, whereupon the doll repeats the year last broadcast and queries the player whether the year spoken was correct. The foregoing approach is in a sense forgiving. One can set the year using either of the two described protocols or any combination thereof.

When the answer to the foregoing query regarding the year is yes, an instruction is broadcast to the player to again squeeze the doll hand, wherein pressure is applied to the pressure operated switch inside the hand, and to release the squeeze when the correct month of the year is broadcast. The recital of the months of the year is terminated on release of the squeeze, and the name of the month last recited is placed in the memory. The month last recited is then broadcast in confirmation as the month selected and the player is requested to confirm its correctness by answering yes or no. If the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES. When the answer to the foregoing query is yes, an instruction is broadcast to the player to squeeze the doll hand and to release that squeeze when the correct day of the month is broadcast, followed by broadcasting the days of the month in serial order through said loudspeaker; the recital of the days of the month is terminated when the squeeze is released and the day of the month last recited is placed into a memory. The day of the month last recited is then broadcast in confirmation as the day selected and the player is requested to confirm whether the day so broadcast is correct by answering yes or no; if the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES.

If the answer is yes, the month, day, and year stored in the memory are broadcast by the doll and the player is requested to confirm whether the information broadcast is correct by answering yes or no; if the answer is no, the preceding steps of setting the year, month and date are repeated from the beginning until the final answer becomes yes. Next the clock time is set. The time set-up begins by broadcasting a statement to the player as to the positions of the small and large hands of a clock and requesting the player to confirm the correctness of that statement by answering yes or no. If the answer is no, an instruction is broadcast to the player to squeeze the doll hand, wherein pressure is applied to said pressure operated switch, and to release the squeeze when the correct hour of the day is broadcast from said speaker, followed by broadcasting the hours of the day in serial order through said loudspeaker. The recital of the hours of the day is terminated when the squeeze is released on attainment of the current hour, removing pressure from said pressure operated switch, and the hour of the day last recited is placed into a memory. Then an instruction is broadcast to the player to squeeze the hand and release that squeeze when the correct minutes of the hour are broadcast from said speaker; thereafter the minutes of the hour are broadcast in five minute increments in serial order through said loudspeaker. The recital of the minutes of the hour is terminated when the player releases the squeeze on attainment of the current minute increment, removing pressure from said pressure operated switch, and the minute increment last recited is placed into a memory. Either as the next step or when the answer to the foregoing query is yes, information is broadcast to the player on the meaning of AM and PM and the player is queried whether it is necessary to reset the AM/PM indication of the clock, by answering yes or no. When the answer to that query is yes, a query is broadcast to the player to answer whether the time of day is AM by answering yes or no; if the answer is yes, AM is placed in said memory, and if no, PM is placed in said memory.

Then the player is queried whether the player observes daylight savings time and is requested to answer yes or no. The answer given by the player is recognized and stored in memory. If the player answers yes, the controller is set to change the time at the next occurrence of either event: moving the time forward one hour on the particular day of the year on which daylight savings time begins, or moving the time back one hour on the particular day of the year on which daylight savings time ends. As a further stage in the set-up, the bedtime and wake-up times for the doll may be set in a similar manner, either by the hand squeeze approach or by a verbal yes or no answer to a question as to whether the player wishes to set a wake-up time and bedtime for the doll.

In addition, readily available information regarding the specific date of each year on which a holiday, such as Easter or Thanksgiving, falls when such days vary year by year can be stored in ROM memory and programmed into the controller. Such information can be stored for however many years into the future as desired, fifteen years in the case of the Amanda doll, as example. Similar information regarding all holidays that fall on the same date of a specific month each year, such as Christmas, can likewise be programmed into the device.
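That stored information may be pictured as two small lookup tables, one for fixed-date holidays and one keyed by year for movable holidays. The sketch below is illustrative only; the entries shown are examples and are not the doll's actual ROM contents.

    # Hypothetical sketch of the stored holiday calendar.

    FIXED_HOLIDAYS = {          # holidays falling on the same date every year
        (12, 25): "Christmas",
        (1, 1): "New Year's Day",
    }

    MOVABLE_HOLIDAYS = {        # holidays whose date varies year by year
        2005: {(3, 27): "Easter", (11, 24): "Thanksgiving"},
        2006: {(4, 16): "Easter", (11, 23): "Thanksgiving"},
        # ... continued for as many future years as desired
    }

    def holiday_for(year, month, day):
        """Return the holiday name for the given date, if any."""
        return (FIXED_HOLIDAYS.get((month, day))
                or MOVABLE_HOLIDAYS.get(year, {}).get((month, day)))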

Play Patterns

The preferred embodiment features a variety of play patterns. Those play patterns are defined in the following table and an abbreviated version as used in a practical embodiment appears in APPENDIX B to this application:

Amazing Amanda Play Patterns Categorized

Broad Category (Sub-Categories)
1 Breakfast
2 Lunch
3 Dinner
4 Snacks & Drinks
5 Potty
6 Wake Up
7 Go to Bed (Sleep; Nap Time; Bed Prep; Delay Tactics)
8 Games (Simon Says; Color Game; Shape Game; Find Something; Favorites; Sing a Song; Others)
9 Hugs & Kisses
10 Dressing & Grooming (Change from Jammies to Daytime Outfit; Change from Daytime Outfit to Jammies; Dress Up; Hair Play; Jewelry)
11 Go Out (Restaurant/Fast Food; Car Routine; Plane; Shopping Mall; Grocery Store (Dressing))
12 Exercise
13 Recognition (Mom; Grandpa (Papa); Grandma (Nanna); Best Friend; Great Grandma; Great Grandpa)
14 Daycare
15 Babysitter
16 Illness

The play patterns can be either child-initiated or doll-initiated.

Child-initiated Play. In child-initiated play, the child commands the doll to initiate a particular play pattern. The easiest way to think of the play is to visualize a hierarchical menu system, triggered via voice activation.

The figure presented in Appendix C to this specification shows how such a menu would look. Some of the play patterns (such as play) have an additional sub-menu of choices. As example of how the menu works consider the following script:

Amanda: “Mommy what should we do?”

Mom: “Let's Play.”

Amanda: “What should we play, Funny Face, Animal Talk, Let's Pretend?”

Mom: “Funny Face.”

The doll then begins the Funny Face game routine.

Doll-initiated Play. The internal clock keeps track of the date and the hours and minutes. The program of the controller uses the clock to determine the following behaviors. Eating: When told that “It's time to eat,” or when the doll decides it is “hungry,” between 6 AM and 10 AM the doll will ask for breakfast; between 10 AM and 3 PM the doll will ask for lunch; and between 3 PM and 8 PM the doll will ask for dinner. At other times the doll will ask for a snack. Each of the eating routines involves different logic and accessories, such as the simulated food stuffs elsewhere herein described.
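The time-of-day logic just described amounts to a simple comparison against the clock hour, as the minimal sketch below illustrates; the hour would be taken from the doll's internal clock.

    # Hypothetical sketch of the meal selection described above.

    def meal_request(hour):
        """hour is 0-23, taken from the doll's internal clock."""
        if 6 <= hour < 10:
            return "breakfast"
        if 10 <= hour < 15:
            return "lunch"
        if 15 <= hour < 20:
            return "dinner"
        return "a snack"

    # e.g. meal_request(8) -> "breakfast";  meal_request(21) -> "a snack"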

Waking and Sleeping. The doll will express a desire for bed at the bedtime specified by the user in the initial set-up procedure. The programmed child-like behavior includes some behavior in anticipation. As example, the doll may speak “Mommy, it's almost time for bed.” The doll may also be programmed to wake-up from the sleep condition at a set wake-up time specified by the user.

Greetings and Announcements. On waking, the doll may greet her “mommy” with an appropriate time-based phrase, such as “Good afternoon mommy!” Occasionally on waking, the doll will also announce the time of day.

Dressing. On awakening, the doll may ask for different clothing. In the morning the doll asks for her dress. In the evening, the doll asks for her nightie.

Holidays and Special Days. The doll knows (e.g. is programmed to recognize) a number of holidays. On those days the doll will occasionally speak out with “Happy (holiday) mommy!” or a similar phrase, where (holiday) is the given day. The doll is also programmed to anticipate a holiday, say a month in advance, and to speak in anticipation of it. As example, the doll may be programmed to speak “Mommy, Santa Claus is coming soon!” some weeks prior to Christmas.
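The anticipation behavior reduces to a date comparison, sketched below. The lead time and phrase are illustrative assumptions; the description above mentions anticipation beginning roughly a month, or in another passage two weeks, before the holiday.

    # Hypothetical sketch of the holiday anticipation check.

    import datetime

    ANTICIPATION_DAYS = 30     # assumed lead time for anticipation phrases

    def anticipation_phrase(today, holiday_date, phrase):
        """Return the anticipation phrase if the holiday is coming up soon."""
        days_until = (holiday_date - today).days
        if 0 < days_until <= ANTICIPATION_DAYS:
            return phrase      # e.g. "Mommy, Santa Claus is coming soon!"
        return None

    # e.g. anticipation_phrase(datetime.date(2005, 12, 10),
    #                          datetime.date(2005, 12, 25),
    #                          "Mommy, Santa Claus is coming soon!")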

Frequency of Activity. The doll keeps track of the frequency of performance of many behaviors and tries to avoid repeating a behavior too many times in a given interval of time. The doll also uses time intervals to create more realistic behavior. As example, if the doll has visited the potty recently and is asked to go to the potty again, the doll will speak: “Mommy, I just went potty. You want me to try again?”

The simulated day of the doll is divided into a series of ten sequential time windows. Within each window, certain play patterns may be initiated by the doll, occurring with a particular frequency. Some play patterns occur only within a particular time window, such as lunch or waking, while other play patterns are present in many windows. Each play pattern is assigned a percentage likelihood of occurring at intervals within a window.

There are some rules to keep in mind when establishing the patterns (a sketch of them follows this list):
    1. Ordination: Certain play patterns follow other play patterns within a given window. Dressing follows waking, for example.
    2. Frequency Limits: Play patterns will only occur so often. If the doll engaged in a play pattern five minutes ago, she must wait a certain period of time before initiating that play pattern again.
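A minimal sketch of those two rules follows. The pattern names, weights and repeat interval are invented for illustration and are not the doll's actual program values.

    # Hypothetical sketch of doll-initiated pattern selection: each pattern in
    # the current window carries a chance of occurring, and a recently used
    # pattern is skipped until its repeat interval has elapsed.

    import random
    import time

    REPEAT_INTERVAL = 5 * 60          # assumed seconds before a pattern may repeat
    last_used = {}                    # pattern name -> time it was last initiated

    def pick_pattern(window_patterns):
        """window_patterns maps pattern name -> probability weight."""
        now = time.time()
        eligible = {name: weight for name, weight in window_patterns.items()
                    if now - last_used.get(name, 0.0) > REPEAT_INTERVAL}
        if not eligible:
            return None               # nothing eligible right now
        choice = random.choices(list(eligible), weights=list(eligible.values()))[0]
        last_used[choice] = now
        return choice

    # e.g. pick_pattern({"I'm thirsty.": 0.2, "Let's play.": 0.5, "I have to go potty.": 0.3})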

To determine which play patterns might appear in a particular time window reference may be made to the table that follows:

Time Windows and Play Patterns

Time Window: Available Play Patterns
1 Wake Up: <Wake up sequence>; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
2 Breakfast: I'm hungry, can I have breakfast?; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
3 Between Time: I'm hungry, can I have a snack?*; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
4 Lunch: I'm hungry, can I have lunch?; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
5 Between Time: (see above)
6 Dinner: I'm hungry, can I have dinner?; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
7 Between Time: (see above)
8 Bed Prep: Need to get ready for bed.; I'm hungry, can I have a snack?*; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.
9 Bed Time: <Delay Tactics>; I'm hungry, can I have a snack?*; I'm thirsty.; I have to go potty.; I love you mommy.; Let's play.; Mommy do my hair?; My tummy hurts.; <Going to sleep.>
10 Sleep: <Wake response.>
*snack chosen randomly

Special Days. In addition to daily, time-based, doll-initiated play, the preferred doll embodiment also features annual, time-based, doll-initiated play. Programmed with a series of special days, the doll controller enters an “anticipation mode” two weeks prior to the special day, and then a “special day mode” on the day itself. While in these modes, certain aspects of the simulated doll behavior are changed. Some of the regular play patterns will be changed to reflect the special day. For example, upon waking on the little girl's birthday she would say “Happy birthday mommy!” instead of (or in addition to) her normal waking behavior.

Other new play patterns may be introduced by the special holiday. For example, Amanda might say something about the special day at any point in the day. One may think of these modes as overlays on the regular time window table.

Transition from Child-Initiated Play to Doll-Initiated Play. To see how Amanda moves between child- and doll-initiated play patterns, consider what happens when the doll character, Amanda, awakens from sleep. When wakened from a sleeping state via her right hand being squeezed or by being hugged, she asks for her mom (the little girl player). Upon hearing her mom speak her name, she asks mom what she wants to do. At this point Amanda is waiting in the child-initiated play mode. If mom does not give a response within a certain time period, Amanda will ask for something to do that is taken from the doll-initiated play patterns, depending on the time window in which the doll is then resident. The doll's clock is always running. If Amanda still receives no response, she goes into a sleep routine and essentially falls asleep.

Another way Amanda may enter doll-initiated play is after finishing a child-initiated sequence. Upon receiving no input for fifteen seconds, Amanda will suggest something from her list of options available in that time window, or she may ask again what her mom wants to do. If Amanda receives no response, she will fall asleep naturally.

Special Circumstances: Some special circumstances will alter the play patterns available in both child initiated and doll-initiated play.

The clothing that is worn by the doll will impact the “dressing” play pattern. Holidays and birthdays will impact doll-initiated play as outlined above in the “doll-initiated play” section. Child-initiated play will be affected by changes to the birthday and holiday menu. For example:

    • Mom: What is the next holiday?
    • Amanda: Umm, Christmas mommy!
    • Mom: What do you want from Santa Claus?
    • Amanda: I want a new dress <or some other after-market accessory> from Santa Claus.

Power-Down/Up: If, during the regular time-triggered play of the doll, the doll fails to receive a response from her mom, say within ten minutes, the doll will go to sleep, essentially entering a power-down mode, during which only the internal clock remains powered up and continues to operate. The foregoing allows the child to place the doll in bed and refrain from responding to the doll so the doll can go to sleep, that is, enter the sleep mode. Squeezing the hand of the doll or hugging the doll causes the doll to awaken. Actuation of the hand switch or hug switch is detected by the controller 20, which is programmed to recognize the input during the power down mode as requiring restoration of the electrical power and re-commencement of the doll activity program, signifying the awakening from slumber.

When the doll is in the sleep mode, the internal clock in the doll continues to run, and, as recalled, during the set-up procedure, the doll may have had a definite wake-up time set by the mother (or child). That wake-up time is usually some time during the morning, say 7:00 AM. Thus should the sleep mode continue through to that wake-up time, the controller detects attainment of the wake up time, and essentially wakes up the doll, placing the doll into the mode for normal activities, elsewhere herein described.

The doll also has, and can be placed in, a quiet mode when desired. As example, if the child is distracted by some other project and the doll constantly nags the child but has not yet gone to sleep, the child may wish temporary silence. By squeezing the right hand (thereby operating the right hand sensor) for a predetermined period of time, suitably seven seconds, the controller is programmed to interpret the action as a command for the doll to power down. Once the controller determines that action, the controller issues instructions to broadcast a verbal message, specifically, “O.K. Mommy, I'll be quiet now,” giving the child oral feedback of the activation, and then prevents further audible broadcasts from the doll. After four minutes in that condition, the doll broadcasts the query: “Can we talk now?” If the doll does not hear a response from the child within a short interval, the doll controller 20 powers down the electronics, placing the doll in the sleep mode, and waits for a wake-up command. In either the quiet mode or the sleep mode, the doll is reactivated or, as otherwise stated, awakened simply by either squeezing the right hand of the doll (e.g. operating the right hand sensor 38) or by giving the doll a hug (e.g. operating the hug switch 33).
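The quiet-mode timing may be pictured with the sketch below. The say(), wait(), heard_reply() and enter_sleep_mode() helpers are hypothetical stand-ins for the controller's actual services; only the sequence and the stated timings are taken from the description above.

    # Hypothetical sketch of the quiet-mode sequence: a long hand squeeze
    # silences the doll, and after a delay the doll asks to talk again before
    # powering down to the sleep mode.

    QUIET_HOLD_SECONDS = 7            # squeeze duration interpreted as "be quiet"
    QUIET_RECHECK_SECONDS = 4 * 60    # silent period before asking to talk again

    def enter_quiet_mode(say, wait, heard_reply, enter_sleep_mode):
        say("O.K. Mommy, I'll be quiet now")
        wait(QUIET_RECHECK_SECONDS)               # no audible broadcasts meanwhile
        say("Can we talk now?")
        if not heard_reply(timeout_seconds=10):   # no answer within a short interval
            enter_sleep_mode()                    # power down; the clock keeps running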

Response Variation: Depending on how much room there is for voice information, some of Amanda's responses can vary slightly. This should be done as much towards the top of her menu tree as possible. Amanda will also occasionally (rarely) disagree with her mom, suggesting another play pattern.

Discipline Routine: During child initiated play, Amanda may occasionally, but rarely, misbehave. The following exchange between the doll and the child can take place.

    • Amanda: No! I don't want to!
    • Mom: Amanda you are doing a bad thing.
    • Amanda: I'm sorry mommy.
    • Mom: Good girl Amanda.

Although the foregoing embodiment has been described in connection with the English language, it is appreciated that other languages may alternatively be employed in dolls intended for children in non-English speaking countries.

In other more expensive embodiments one can include additional voice ROM and store the same messages in additional languages to provide a multi-language doll. The start-up programming for the user would then include an additional set-up step presenting the available languages and requesting selection of the particular language for the doll to speak. Language selection is accomplished by toggling the left and right hand sensors in the same manner as in setting the wake and sleep times. Such a multi-language doll may be attractive also to parents who wish their child to learn a second language.

In addition to the incorporation of multiple languages as larger memories become practical for doll products, the doll can, as earlier noted, store and be programmed to sing songs and to accompany the speaking parts with music and/or sound effects, with or without the parsing of short messages described above and with or without digital compression.

In the foregoing description of a preferred embodiment there is included voice recognition that enables the doll to recognize the individual who initially turned power on and set up the doll as the “mother” of the doll. Indeed, in alternative embodiments the doll may be programmed to ask the player to “say my name mommy,” in which case the doll is able to confirm that the person who states the name of the doll has the same voice print pattern as that of the pseudo-mother. In still other embodiments of the invention, the doll may be programmed to recognize multiple persons and distinguish between those persons based on their different voice prints. As example, voice prints can be made of a person who is to be the grandmother, the grandfather, uncle or aunt of the doll, or any other family member, and the doll is able to recognize and distinguish between those persons.

If the invention is incorporated within a male action figure, bonding may represent camaraderie, where the child is leader of a band of heroes, or, conversely, a friend of the doll owner acts as leader of the enemy force. The foregoing form of bonding goes beyond bonding as a mere attachment to a loved and needed person.

As those skilled in the art appreciate, the foregoing implementation is illustrative and many other forms of specific semiconductor circuits may be substituted to accomplish the described functions of my invention.

As further example, depending on the voice (or voices) that the doll individually recognizes, the doll can be programmed to interact with different persons in different ways, depending on the voice that the doll recognizes is speaking. That is, the memory of the doll may hold different speech messages and say something different depending on the identity of the person doing the speaking with the doll. For example, the doll could say “grandma” every time the doll would otherwise say the word “mommy.” The doll could say something entirely different.

The preferred embodiment of the doll is of the form of a female child under thirteen years of age and, more specifically, about two years of age, and is of the appropriate facial anatomy of that age and is dressed as such child. The doll produces the sounds, phrases, and words of the kind typically spoken by a two (2) year old female child by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak. In alternative embodiments the doll could be of the form of a human toddler and would be dressed as a toddler. The doll produces the sounds, phrases, and words of the kind typically spoken by a toddler by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.

In still other embodiments the doll may be of the form of a teenage or adult female, possesses the facial appearance of a teenager and is dressed as such teen age female. The doll produces the sounds, phrases, and words of the kind typically spoken by a teen age female by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.

In still other embodiments the doll is of the form of a male child, toddler, two-year old, tween, teen and so on, respectively dressed as such respective character, and/or is of the appearance of a fantasy character and is dressed as such fantasy character. The doll produces the sounds, phrases, and words of the kind expected to be spoken by a fantasy character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak. The doll in still other embodiments could be fabricated in the form of an animal character, either one that produces animal sounds or one that is designed to speak like a human. The doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.

In still another embodiment, the doll is of the form of a male child infant or toddler, in which case the primary play is for the user to act as a caregiver for the doll. In an embodiment of the doll as an older male child below the age of thirteen, the primary play would be for the doll to serve as a best buddy for the child and do such things together, such as play games, tell stories, go on adventures, role play various characters together, and learn and play things together. In still other embodiments, the doll appears as a fantasy character and is dressed as such fantasy character and produces the sounds, phrases, and words expected of a fantasy character by means of which the fantasy character asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.

In an embodiment of the doll as a teenage male or adult male, the primary play is as an action figure, a heroic or villainous, good or evil character. The young man imagines and plays out situations, scenarios, and adventures in which he assumes the persona of the character with whom he is playing, i.e., mentally the boy “IS” the character. Male action figures and characters presently available for boys' play are not interactive. Some contain mechanical mechanisms that enable movement when the boy pushes a lever on the torso, appendages, or head of the action figure. Often weapons are carried by action figures, and projectiles are often launched to emulate the firing of weapons and so forth. Depending on the size of the action figure, electronics are sometimes included that produce lights and the sounds of weapons or short phrases made by the action figure.

For the first time ever in this embodiment, the action figure contains animatronics that moves the face into expressions of happiness, rage/anger, fierceness, puzzlement, excitement, boredom, fatigue, impatience, nobility, and so forth. Such visual expression of actual emotions in an action figure is also believed to be revolutionary. Even more revolutionary is for the action figure to respond to a user both emotionally and with words in conversation. The figure could become angry or feign anger if the wrong response occurred. An example of that anger might be that the action figure wanted to retrieve his weapon from the user, and the user would not turn the weapon over to the figure.

Perhaps even more revolutionary, is for an action figure to communicate with the boy as either an ally or a combatant. The voice recognition feature enables a male player to assume the role of ally or enemy, i.e., friend or foe, when play commences. The action figure then assumes the friend or foe relationship with the player and through Virtual Conversational Interactivity plays out battles, adventures, and other scenarios with the boy. The embodiment thus enables an entirely new level of play. The ability for the figure to recognize clothing, protective armor, battle gear, weapons, and so forth and interact conversationally with the player is entirely new. This enables the figure and the young player to strategize how to game or battle. It also enables the player to “command” or instruct the action figure how to battle, attack, fight, and so forth. When two brothers or friends play together, and each has an action figure of this invention, then one player and his figure might assume the role of “good guys” and the other player and his figure the role of the villains. For the first time, real camaraderie is enabled between both young players and their action figures. They are a “real team.” And the other “guys” are “real enemies.”

The additional technology enabling the figures to recognize either clothing or protective armor, weapons, playthings, vehicles in which it sits, and other play accessories, together with the speech recognition technology, animatronics, and programming, enables all embodiments of the invention to possess a form of artificial intelligence. That intelligence provides an additional basis for the initiation of virtual conversational interactivity and play scenarios.

In still another embodiment of the invention the doll is of the appearance of a real animal, insect, fish, crustacean, or other living creature, or presents that living creature as a cartoon, such as, but not limited to, a dog, cat, bear, bird, bunny, reptile, horse, or pony, in which the animal's visual animatronic movements of the face and/or body suggest to the user that the doll possesses human characteristics, such as feelings and emotions. People instinctively associate such human emotions with the supposed facial expression and body language of living animals and, particularly in the case of pets, believe that an animal may be excited, happy, angry, scared, sleepy, and so forth. The addition of sounds of the living animal to such an embodiment, such as, but not limited to, a bark for a dog, a chirp for a bird, a meow for a cat, and so forth, adds realism. Additionally, the tone of such sounds is varied appropriately through the doll's programming to communicate what the animal is thinking. One example is the whine of a dog to indicate that something is wrong, such as pain, a need for attention, or begging. Such is the present interactivity of a real animal with a human, and such interactivity is often back and forth in that a human might ask a real pet, “what's wrong?”, or “shut up!”, or “are you hungry?”, but not actually expect the pet to answer. In this case the dog may wag his tail and the human may believe that is a yes answer to the question, or the dog may continue to whine and the human may believe that the question did not identify the correct cause of the whining.

In like manner, the programming of the doll dog causes the dog to have a need, and the child (or adult) user must discover the need of the animal in order to satisfy it. Such back and forth interactivity, in addition to the use of the animal doll's (dog's) facial animatronics, is anthropomorphic in nature, i.e., it suggests to the child or user that the animal has feelings and emotions. I refer to that action as Anthropomorphic Virtual Interactivity™, “AVI™”. Such AVI interactivity is back and forth in a real way, much as humans communicate. Sensor devices to detect whether the animal has been fed or groomed are present, just as in the human embodiment. A sensor in the dog's butt will enable the puppy to know if it went potty on a newspaper, or, in a cat, if it is on the litter box, so that they can be toilet trained. Voice recognition is present to bond the animal doll with its owner or master (the child or user). Speech recognition is present to enable the animal doll to recognize what the human is saying, i.e., it enables the animal doll to be “trained,” to learn “commands,” and to learn what is good behavior (“good dog” or “good kitty”) or bad behavior (“bad dog!” or “bad kitty!”). Pet clothing can be put on the animal doll and the doll will know that a particular coat for warmth or a collar to go for a walk has been put in place, enabling the animal doll to “know” or assume what the intention of the owner was in play. The AVI technology of the animal doll can additionally cause the animal to express feelings to the owner.

The doll in still other embodiments could be fabricated in the form of an animal character that is designed to speak like a human. The doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.

A child learns disappointment early in life when “mom” denies the child's request to do or be given something with a sternly spoken “no.” The child feels the hurt of that rejection. It may cry because of the humiliation the child feels from rejection, or in any case give mom a sorrowful and sad look as if to say you don't love me anymore. So too, when the doll of this invention makes a request of its pseudo-mother, the child player may find herself called upon to draw the limit and say “no” to the doll's request. Hearing the word “no” through speech recognition, and being programmed to understand the meaning of the child's response, the doll will put on a sorrowful and sad face. The child, having personally experienced those same emotions, recognizes the humiliation that the doll should feel; that recognition gives the child an affirmation of sorts that the doll is a real person, even if only for the duration of the play period, and strengthens the emotional bond that should exist between the two.

The versatility of RFID programming adds to the capability for doll play in additional embodiments in which the RFID sensor is used to read at least two separate RFID tags at essentially the same time. This capability is illustrated in connection with the potty accessory 40 in FIG. 4, to which reference is again made.

As recalled, doll 10 includes an RFID sensor positioned in the butt end or rear, and that sensor reads the RFID tag contained in the doll potty 40, whereby the doll “knows” it is seated on the potty. One of the articles of clothing worn by the doll is a diaper, and that diaper would also contain an RFID tag so as to identify the diaper to the RFID sensor in the doll butt. In that way the doll “knows” it is wearing a diaper. The program, as earlier described in this specification, may call for the doll to request to go potty. If the doll is wearing a diaper at the time the call is made for the player to place the doll on the potty, the doll program has the doll request that the diaper be removed first so that the doll is able to go to the potty without interference from the diaper. After the diaper is removed, the doll broadcasts a “thank you, mommy” or like message and asks again to be placed on the potty. The doll will know if the doll is seated on the potty since the RFID sensor detects the RFID tag in the potty, and the doll will continue with the various programmed sounds of defecation earlier discussed.

However, if through error the player fails to remove the diaper and seats the doll on the potty wearing the diaper, the RFID sensor in the doll butt will sense and read both RFID tags. The doll thus knows that the player neglected to remove the diaper. Accordingly, the doll will provide a message, possibly correcting or berating the player, and will decline to defecate until the diaper is first removed.

By using a larger antenna on the RFID tag sensor, the sensor achieves wider detection coverage and is able to pick up RFID tags displaced at greater distances from the sensor. Thus the RFID tag in the diaper need not directly overlie the RFID tag in the potty when the doll is seated on the potty wearing the diaper. The sensor reads them both, and can read them in any order called for by the program.
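The two-tag decision just described can be summarized in the following hedged sketch; the tag identifiers and the on_butt_sensor_read helper are assumptions for illustration, and only the branching logic follows the text.

    # Sketch of the diaper-plus-potty decision: both tags read -> ask for the
    # diaper to come off first; potty tag alone -> proceed with the sound effects.
    POTTY_TAG, DIAPER_TAG = "POTTY-01", "DIAPER-01"   # hypothetical tag IDs

    def on_butt_sensor_read(tags):
        if POTTY_TAG in tags and DIAPER_TAG in tags:
            return "Please take my diaper off first, Mommy!"
        if POTTY_TAG in tags:
            return "<play the programmed sounds of defecation>"
        return "<wait: the doll is not seated on the potty>"

    print(on_butt_sensor_read({POTTY_TAG, DIAPER_TAG}))
    print(on_butt_sensor_read({POTTY_TAG}))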

As those skilled in the art recognize, an embodiment of a doll need not include each and every feature described herein. Some embodiments may contain a full complement of features, while other embodiments may omit one or more features and contain a lesser number of features. All such embodiments are included in the invention. These various embodiments of the invention may be addressed separately and/or in combination in the claims which follow in this specification.

It is believed that the foregoing description of the preferred embodiments of the invention is sufficiently detailed to enable one skilled in the art to make and use the invention without undue experimentation. However, it is expressly understood that the detail of the elements comprising the embodiments presented for the foregoing purpose is not intended to limit the scope of the invention in any way, inasmuch as equivalents to those elements and other modifications thereof, all of which come within the scope of the invention, will become apparent to those skilled in the art upon reading this specification. Thus, the invention is to be broadly construed within the full scope of the appended claims.

Appendix A Achieving Facial Expressions

Defined Expressions: Eating Charts (Dinner, Snack, Breakfast, Lunch). In the charts below, where two values are listed for a facial feature (e.g., UP/NORMAL), they are the two successive settings specified for that feature.

Dinner Chart

“Yea! Amanda loves dinner!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“Yea! Amanda loves Pizza!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“Yea! Amanda loves chicken nuggets!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“Yea! I love juice.” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“All done mommy!” (Eyebrows: UP; Eyes: NORMAL; Mouth: NORMAL; Cheeks: UP)
“That was the bestest dinner ever!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“Yea! I love to have my teeth brushed!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)

Snack Chart

<various phrases, all containing “Yea!”; mouth opens wide on “Yea!”> (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“That was the bestest snack ever!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“That was the bestest dinner ever!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)

Breakfast Chart

“Yea! Amanda loves breakfast!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“Yea! Amanda loves pancakes!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“Surprise me mommy - I'll close my eyes.” (Eyebrows: NORMAL; Eyes: CLOSED; Mouth: OPEN; Cheeks: NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“Surprise me mommy - I'll close my eyes.” (Eyebrows: NORMAL; Eyes: CLOSED; Mouth: OPEN; Cheeks: NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“Yea! I love to have my teeth brushed!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
<various responses to food surprise> (Eyebrows: UP/NORMAL; Eyes: BLINK TWICE/NORMAL; Mouth: WIDE/NORMAL; Cheeks: NORMAL/NORMAL)
<various responses to food surprise> (Eyebrows: UP/NORMAL; Eyes: BLINK TWICE/NORMAL; Mouth: WIDE/NORMAL; Cheeks: NORMAL/NORMAL)

Lunch Chart

“Yea! Amanda loves lunch!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“Yea! I love juice.” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: WIDE/NORMAL; Cheeks: UP/NORMAL)
“All done Mommy!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
“That was the bestest lunch ever!” (Eyebrows: UP/NORMAL; Eyes: NORMAL/NORMAL; Mouth: NORMAL/NORMAL; Cheeks: UP/NORMAL)
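For illustration only, the eating charts above could be held in the doll's memory as a table keyed by phrase. The sketch below assumes, beyond what the charts state, that the first value of each pair is the pose held while the line is spoken and the second is the pose returned to afterwards; the DINNER_CHART and poses_for names are hypothetical.

    # Assumed in-memory encoding of a chart: phrase -> (pose while speaking,
    # pose afterwards).  Two representative entries from the Dinner Chart.
    DINNER_CHART = {
        "Yea! Amanda loves dinner!": (
            {"eyebrows": "UP", "eyes": "NORMAL", "mouth": "WIDE", "cheeks": "UP"},
            {"eyebrows": "NORMAL", "eyes": "NORMAL", "mouth": "NORMAL", "cheeks": "NORMAL"},
        ),
        "All done Mommy!": (
            {"eyebrows": "UP", "eyes": "NORMAL", "mouth": "NORMAL", "cheeks": "UP"},
            {"eyebrows": "NORMAL", "eyes": "NORMAL", "mouth": "NORMAL", "cheeks": "NORMAL"},
        ),
    }

    def poses_for(phrase):
        """Look up the speaking pose and the return pose for a charted phrase."""
        return DINNER_CHART.get(phrase)

    print(poses_for("All done Mommy!"))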

APPENDIX B: Amazing Amanda Play Patterns, Categorized

Child & Doll-Initiated:
1. Eating (Breakfast, Lunch, Dinner, Snack)
2. Play (Songs; Pretend (Birthday or Tea Party); Animal Talk Game; Funny Face Game)
3. Potty
4. Sleep
5. Wake

Doll-Initiated:
1. Hugs
2. Sick
3. Love
4. Hair Play
5. Special Calendar Days
6. Tutorial

Optional Accessory Play:
1. Dress Up
2. Birthday Party*
3. Tea Party*
4. Stroller
5. Tote
*This play is separate and distinct from the “pretend” tea party and birthday party play patterns.

Claims

1. An interactive doll, said doll including a doll face and electronic means for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, said verbal tete-a-tete being accompanied with facial expressions on the face of the doll that visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete and/or constitute a visually expressed response to words spoken by said living person during said verbal tete-a-tete.

2. The interactive doll as defined in claim 1, wherein said interactive doll further comprises a self-contained battery powered doll.

3. The interactive doll as defined in claim 1 wherein said electronic means for initiating and conducting a verbal tete-a-tete includes:

a microphone;
a loudspeaker;
a processor for broadcasting verbal statements through said loudspeaker and for listening to verbal statements of said living person received through said microphone; and
a facial animator for producing an expression on said doll face that is indicative of a human emotion.

4. The interactive doll as defined in claim 1, wherein said at least one subject comprises a food selected for a meal.

5. The interactive self-contained battery powered doll defined in claim 3, wherein said electronic means further comprises:

a clock for keeping track of the time of day; and wherein said subject of a food selected for a meal is automatically selected by said electronic means based in part on the time-of-day tracked by said clock.

6. An interactive doll, said doll including a doll face and a programmed electronic processor for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, said verbal tete-a-tete being accompanied with facial expressions of the doll face that either visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete with said living person or constitute a visually expressed reply to words spoken by said living person during said verbal tete-a-tete.

7. The interactive doll as defined in claim 6, further including a microphone; and a loudspeaker; and wherein said programmed electronic processor for initiating and conducting a verbal tete-a-tete with a living person includes:

a first circuit, coupled to said microphone, for converting received verbal statements of said living person to an electrical signal that is output from said microphone;
a second circuit, coupled to said loudspeaker, for coupling electronic signals output from said electronic processor to said loudspeaker, said electronic signals output from said electronic processor being representative of verbal statements, whereby verbal statements are broadcast through said loudspeaker; and
facial actuators for producing an expression on said doll face that is indicative of a human emotion.

8. The interactive doll defined in claim 6, wherein said at least one subject comprises a food selected for a meal.

9. The interactive doll as defined in claim 8, wherein said interactive doll further comprises a self-contained battery powered doll.

10. The interactive self-contained battery powered doll defined in claim 8, wherein said electronic processor further comprises:

clock means for keeping track of the time of day; and wherein said subject of a food selected for a meal is selected by said electronic processor located in said doll based in part on the time-of-day tracked by said clock means.

11. An interactive doll that simulates chewing of a food product by a living creature comprising:

a doll, said doll comprising:
at least a torso and a head, said head including at least a mouth, and said mouth including at least a roof and a lower jaw;
said head further including a mouth control actuator for repetitively pivoting said lower jaw between a closed and open position to repetitively vary the size of said mouth opening between small and large to simulate chewing;
a radio frequency identification tag reader located in said roof of said mouth;
a memory for storing as digital data a chewing verbalization or sound produced by a person chewing;
a loudspeaker; and
electronic means coupled to said loudspeaker for reproducing and broadcasting through said loudspeaker said digital data concurrently with repetitive pivoting of said jaw between said open and closed positions by said mouth control actuator to both reproduce and broadcast said verbalization or sounds of a person that is chewing a food product and visually signify the act of chewing;
a controller;
said radio frequency identification tag reader for detecting the presence of a radio frequency identification tag of a simulated food product that is inserted into said mouth and supplying information in said tag to said controller; and wherein said controller, responsive to receiving information from said tag reader, for supplying a signal to said sound reproducer and to said mouth actuator to concurrently broadcast verbalizations of chewing from said doll and produce pivoting movement of said lower jaw to simulate chewing.
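A hedged sketch of the chewing interaction recited in claim 11. Here detect_tag, open_jaw, close_jaw and play_chewing_sound stand in for the RFID tag reader in the roof of the mouth, the mouth control actuator and the sound reproduction means; nothing below is taken from the actual product firmware.

    import time

    def chew(detect_tag, open_jaw, close_jaw, play_chewing_sound, cycles=4):
        """If a tagged food item is detected in the mouth, chew with sound."""
        tag = detect_tag()                  # RFID reader reports the inserted food
        if tag is None:
            return False                    # nothing in the mouth: no chewing
        play_chewing_sound(tag)             # broadcast stored chewing sounds
        for _ in range(cycles):             # repetitively pivot the lower jaw
            open_jaw()
            time.sleep(0.2)
            close_jaw()
            time.sleep(0.2)
        return True

    # Example with trivial stand-ins for the hardware calls:
    chew(lambda: "COOKIE-TAG", lambda: None, lambda: None,
         lambda t: print("chomp, chomp:", t))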

12. An interactive doll that simulates the body function of excreting body waste, comprising in combination:

a doll;
a potty;
said doll including at least a torso and a head;
said head including a face and facial control actuators for changing the demeanor of said face;
said torso including a butt, and said butt including at least a radio frequency identification tag reader;
said potty including a radio frequency identification tag for identifying said potty to a radio frequency identification tag reader;
a programmed controller;
a loudspeaker coupled to said programmed controller;
said radio frequency identification tag reader for reading said radio frequency identification tag located in said potty when said doll is seated on said potty and communicating information read from said tag to said programmed controller;
said programmed controller for broadcasting through said loudspeaker a verbal notice to a player of a need to eliminate body waste;
said programmed controller for also broadcasting through said loudspeaker a sound effect of body waste being excreted, said sound effect being broadcast within a predetermined period of time following broadcast of said notice to the player, irrespective of whether or not said doll is seated on said potty;
said programmed controller further for also broadcasting through said loudspeaker a verbal message when said torso was seated on said potty at the time of occurrence of said sound effect of body waste being excreted; and
said programmed controller still further for energizing said facial control actuators to change the demeanor of said face to a demeanor that represents a person that is upset when said torso was not seated on said potty at the time of broadcast of said sound effect of body waste being excreted.
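For illustration, the sequence of claim 12 might be ordered as in the sketch below; say, play, seated_on_potty, set_face and delay_done are hypothetical helpers standing in for the loudspeaker, the RFID reader in the butt, and the facial control actuators.

    def potty_episode(say, play, seated_on_potty, set_face, delay_done):
        say("I have to go potty!")            # verbal notice to the player
        while not delay_done():               # predetermined waiting period
            pass
        play("sound effect of body waste being excreted")
        if seated_on_potty():                 # RFID tag in the potty was read
            say("I went potty!")              # verbal message when seated in time
            set_face("satisfied")             # the claim 13 demeanor
        else:
            set_face("upset")                 # the accident upsets the doll

    ticks = iter([False, False, True])        # toy stand-in for the timer
    potty_episode(print, print, lambda: False,
                  lambda face: print("face:", face), lambda: next(ticks))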

13. The interactive doll as defined in claim 12, wherein said programmed controller is also for energizing said facial control actuators to change the demeanor of said face to present a satisfied demeanor when said torso is seated on said potty at the time of broadcast of said sound effect of waste being excreted.

14. An interactive electronic doll for producing and broadcasting voiced statements to a player, said doll including eyelids that blink repetitively during the broadcast of said voiced statements to simulate the naturally occurring eyelid movement of a living person during the course of speaking.

15. The interactive electronic doll as defined in claim 14, wherein said doll includes a doll head having a face, and further comprising:

a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, and an axis of rotation;
a shaft in each of said eye sockets for supporting said spherical member in respective ones of said eye sockets;
each of said spherical members being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
said face of said doll head including a mouth, said mouth including at least a lower jaw and said lower jaw being mounted to a pivot for pivoting movement;
a programmed microcontroller;
a mouth actuator for producing pivotal movement of said lower jaw responsive to energization by said microcontroller;
eyeball actuators for respectively rotating a respective one of said spherical members about a respective axis as directed by said programmed controller, either in one rotational direction to close said eyes or in an opposite rotational direction to open said eyes;
a loudspeaker coupled to said programmed microcontroller;
a sound reproducer, coupled to said loudspeaker, for reproducing and broadcasting through said loudspeaker selected verbalizations and/or sounds stored in ROM when energized by said microcontroller;
said programmed microcontroller energizing said eyeball and eyelid combination actuators to repetitively rotate said eyeball and eyelid combination to an angular position in which said eyelid lowers to cover said eye socket to simulate a closed eye, and immediately thereafter rotates that sphere in the opposite direction to lift said eyelid to a separate angular position and expose the eye to thereby produce a blink of said eye, and for concurrently energizing said sound reproducer, whereby verbalizations are broadcast through said loudspeaker, and for concurrently energizing said mouth actuator to repetitively pivot said lower jaw up and down and forward and backward to simulate speaking.
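By way of illustration only, the concurrent blinking, jaw movement and playback of claim 15 might be interleaved as follows; rotate_eyes, pivot_jaw and play are assumed placeholders for the actuators and the sound reproducer.

    def speak_with_blinks(phrase, rotate_eyes, pivot_jaw, play, blink_every=3):
        play(phrase)                               # start the stored verbalization
        for i, _word in enumerate(phrase.split()):
            pivot_jaw("down")                      # jaw pivots roughly once per word
            pivot_jaw("up")
            if i % blink_every == 0:
                rotate_eyes("closed")              # rotate the sphere to lower the lid
                rotate_eyes("open")                # rotate back: one blink

    speak_with_blinks("That was the bestest dinner ever",
                      lambda pos: print("eyes:", pos),
                      lambda pos: print("jaw:", pos),
                      lambda line: print("playing:", line))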

16. The interactive electronic doll as defined in claim 15, wherein

said programmed microcontroller commands said eyeball and eyelid combination actuators to produce multiple blinks of said eyes and concurrently reproduce and broadcast said voiced statements.

17. The interactive electronic doll as defined in claim 14, wherein said doll includes a doll head having a face, and further comprising:

a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, an axis of rotation, an eyelid, said eyelid applied to a circumferential portion of said spherical member, and an eyeball applied to another circumferential portion of said spherical member;
a shaft in each of said eye sockets for supporting said spheres in respective eye sockets;
each of said eyeball and eyelid combinations being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
a programmed controller;
eyeball and eyelid combination actuators for respectively rotating a respective one of said eyeball and eyelid combinations about a respective axis as directed by said controller, said rotating being in either a clockwise direction to close said eyes or in a counterclockwise direction to open said eyes;
said programmed controller energizing said eyeball and eyelid combination actuators to repetitively rotate said sphere to an angular position in which said eyelid covers said eye socket, simulating a closed eye, and immediately rotating that sphere in the opposite direction to retract said eyelid to a separate angular position for exposing the eye, to produce a simulated blink of said eye.

18. An interactive electronic doll, said doll including an electronic controller for controlling the functions performed by said doll, a loudspeaker and a microphone;

said electronic controller including:
a speaker dependent voice recognition program for enabling recognition of a player through the voice of the player, said voice recognition program for capturing a player's voice and assigning that voice with a title or name;
said voice recognition program for
broadcasting a verbal query from said doll requesting a person in the vicinity of the doll to speak a specified word,
receiving said specified word spoken in response to said query,
producing a first voice print from said received word,
storing said first voice print in a memory, and
assigning said stored first voice print with the title or name of an individual;
said voice recognition program additionally for
generating a follow-up verbal query requesting a person to again speak said specified word;
receiving a word spoken by said person in reply to said follow-up verbal query; and
producing a second voice print of said received word; and
comparing said second voice print to said voice print stored in memory under the name or title assigned to said individual, and, if the characteristics of said second voice print are essentially the same as said voice print stored in memory, broadcasting a verbal message that includes identifying said individual by name and/or title.
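A toy sketch of the enrollment-then-verification flow of claim 18. The “voice print” here is simply a hash of the raw audio bytes so that the control flow can be shown in a few lines; an actual voice print would be compared by similarity rather than by exact match, and the store, enroll and verify names are illustrative assumptions.

    import hashlib

    store = {}                                   # title/name -> stored voice print

    def make_print(audio):
        return hashlib.sha256(audio).hexdigest()

    def enroll(title, audio):
        """First pass: capture the specified word and file it under a title."""
        store[title] = make_print(audio)

    def verify(title, audio):
        """Second pass: compare a fresh print with the stored one."""
        if store.get(title) == make_print(audio):
            return "Hello, " + title + "!"       # identified by name or title
        return "You are not my " + title         # the claim 19 branch

    enroll("mommy", b"fake-audio-sample")
    print(verify("mommy", b"fake-audio-sample"))
    print(verify("mommy", b"someone-else"))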

19. The interactive electronic doll as defined in claim 18, wherein if the characteristics of said second voice print are not essentially the same as said voice print in memory, broadcasting a verbal message announcing that the person that produced the second voice print is not entitled to be addressed by said name and/or title of said individual assigned to said first voice print; and wherein said assigned name or title comprises a word that is or has the same meaning of the English word “mommy,” whereby to define a relationship between the doll and the apparent mother of the doll.

20. An interactive electronic doll, said doll including a programmed electronic controller for controlling the functions performed by said doll, a loudspeaker and a microphone;

said programmed electronic controller including:
a speaker dependent voice recognition program for enabling recognition of a player through the voice of the player, said voice recognition program for capturing a player's voice when voicing a specified word, assigning that voice with the title or name of an individual and recognizing the individual who again speaks the specified word when requested to do so;
a speech recognition program for enabling recognition of a set of spoken words;
a control program for initiating said speaker dependent voice recognition program to determine if the voice of the player is recognized by said speaker dependent voice recognition program, but, if not, initiating said speech recognition program and terminating said speaker dependent voice recognition program.

21. An interactive electronic doll, comprising:

a torso, a head supported on said torso, and at least one pair of appendages;
at least one of said appendages including a hand;
a loudspeaker;
a microphone;
a memory;
a programmed electronic controller for running programs stored in said memory, broadcasting verbal statements through said loudspeaker, and for listening to verbal statements of a player that are received through said microphone;
said programs defining functions carried out by said doll, including a clock program and a calendar program and a speech recognition program;
said memory further storing words and phrases in digital form;
a pressure operated switch located in said hand, wherein squeezing of said hand operates said pressure operated switch;
said programmed electronic controller monitoring the operation of said pressure operated switch to detect whether said switch is operated or not;
an interactive program for setting the initial year, month and date of said calendar program and the initial time of said clock program; said program including the steps of:
broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct year is broadcast from said speaker, and, thereafter,
terminating the recital of the years when said squeeze is released from said hand and placing the name of the year last recited into a memory;
broadcasting the year last recited, and requesting the player to confirm whether the year so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, when the answer to the foregoing query is yes, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct month of the year is broadcast from said speaker;
terminating the recital of the months of the year when said squeeze is released from said hand and placing the name of the month last recited into said memory;
broadcasting said month of the year last recited in confirmation as the month selected and requesting the player to confirm that the month so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, when the answer to the foregoing query is yes, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct day of the month is broadcast from said speaker, followed by broadcasting the days of the month through said loudspeaker in serial order;
terminating the recital of the days of the month when said squeeze is released from said hand and placing the day of the month last recited into a memory;
broadcasting said day of the month last recited in confirmation as the day selected and requesting the player to confirm that the day so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, if the answer is yes, broadcasting the month, day, and year stored in said memory and requesting the player to confirm that the information broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the preceding steps of setting the year, month and date from the beginning until the final answer is YES;
broadcasting a statement to the player as to the positions of the small and large hands of a clock and requesting the player to confirm the correctness or not of that statement by answering yes or no;
and, if the answer is no, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct hour of day is broadcast from said speaker, followed by broadcasting the hours of the day in serial order through said loudspeaker;
terminating the recital of the hours of the day when said squeeze is released from said hand on attainment of the current hour, removing pressure from said pressure operated switch, and placing the hour of the day that was last recited into a memory;
broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct minutes of the day is broadcast from said speaker, and, thereafter,
broadcasting the minutes of the day in five minute increments in serial order through said loudspeaker;
terminating the recital of the minutes of the hour when said squeeze is released from said hand on attainment of the current minute closest to the five minute increment of one through twelve on the clock, removing pressure from said pressure operated switch, and placing the minute increment of the hour that was last recited into a memory;
and, either as the next step or when the answer to the foregoing query is yes, broadcasting information to the player on the meaning of AM and PM and querying the player if it is necessary to reset to the present state of the clock from one of AM or PM to the other by answering yes or no;
and, when the answer to the foregoing query is yes, broadcasting a query to the player to answer if the time of day is AM by answering yes or no;
and, if the answer is yes, placing AM in said memory, but if no, then placing PM in said memory;
then, querying the player if they wish to observe daylight savings time and requesting the player to answer yes or no.
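The squeeze-and-release selection used repeatedly in claim 21 (for the year, month, day, hour and five-minute increment) reduces to a single loop. The sketch below is illustrative only; say, hand_is_squeezed and hear_yes are assumed stand-ins for the loudspeaker, the pressure operated switch in the hand, and the speech recognition of “yes”.

    def select_by_squeeze(choices, say, hand_is_squeezed, hear_yes):
        while True:
            picked = None
            for choice in choices:            # recite the choices in serial order
                say(str(choice))
                if not hand_is_squeezed():    # squeeze released: stop the recital
                    picked = choice
                    break
            say("Is " + str(picked) + " right? Say yes or no.")
            if hear_yes():                    # confirmed: place it in memory
                return picked                 # otherwise repeat the recital

    # Example: the player releases the hand on the third year recited.
    squeezes = iter([True, True, False])
    print(select_by_squeeze(range(2005, 2010), print,
                            lambda: next(squeezes), lambda: True))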

22. The interactive electronic doll as defined in claim 21, further comprising:

storing the answer to the last named query in memory and, when the answer is yes, automatically resetting the internal clock to move forward one hour in the spring on the day that daylight savings begins, and to move the clock backwards one hour in the fall when daylight savings time ends, whereby the doll is aware of whether or not the clock is in daylight savings time, and, thereafter, automatically change the time either forward 1 hour or backwards 1 hour as is appropriate at each daylight savings time event that occurs.

23. An electronic doll comprising:

a programmed electronic microcontroller:
said programmed electronic microcontroller including programs that are run by said programmed electronic microcontroller;
a memory;
speech phrases and sound effects stored in said memory;
a loudspeaker for enabling broadcast of sounds in the immediate environment of said electronic doll;
a microphone for receiving sounds from the immediate environment for processing by said programmed electronic microcontroller;
sensors to detect the presence of a force impressed on the electronic doll and/or an object placed in contact with the electronic doll and enable said electronic microcontroller to identify the respective force or object; and
wherein said programs run by said electronic controller include at least a speech recognition program to recognize speech received via said microphone and a program for broadcasting words, phrases and sound effects through said loudspeaker;
said electronic doll further including a torso and doll head, and said doll head including a doll face; and
doll face manipulators for manipulating said doll face to produce a variety of facial expressions that portray a variety of human-like feelings, said doll face manipulators being controlled by said electronic controller to produce facial expressions, concurrently with the broadcast of words or sounds of a tenor that are not inconsistent with the emotional feelings of a living person portrayed by the respective facial expressions.

24. The electronic doll as defined in claim 23, wherein said programs run by said electronic microcontroller include a range of human emotional facial expressions and verbal responses, each of which matches the tenor of at least one of the human emotion facial expressions, which facial expressions are activated by the electronic microcontroller when particular words or phrases spoken, or not spoken, by the player are “heard”, or “not heard”, by the electronic microcontroller by means of the microphone coupled to an analog-to-digital converter that sends the now-digital information to the electronic microcontroller, which data is understood by means of the speech recognition program, thereby causing, by means of the logic programmed into the memory of the doll, a particular emotional response to the reply, or lack thereof, of the player to a request or query that the doll made of the player;

and which particular emotional response is communicated by the doll visually, by the electronic microcontroller activating the actuators of the facial expression as set forth in the programmed logic controlling the signals of the electronic microcontroller to the actuators, and audibly, by causing the electronic microcontroller to broadcast a single verbal response or one of several verbal responses matching the tenor of the actuated human emotion facial expression, which verbal responses are stored in memory and are selected by the logic and programming of the electronic microcontroller;
wherein said reply or lack thereof of the player to a request or query that the doll made of the player is dependent on the electronic microcontroller “hearing or not hearing” a particular verbal response anticipated by the programming logic and speech recognition program, and also on detecting or not detecting non-verbal things such as
the pressing or not pressing of a requested switch, such as asking for a hug,
the presence or lack thereof of a requested food being fed to the doll by the presence or lack of a RFID tag in the doll's mouth which is read or not read by the RFID reader in the roof of the doll's mouth;
being placed or not placed on the potty when the doll has expressed a need to go potty, by means of the RFID reader in the butt detecting or not detecting the RFID tag in the potty,
the presence or absence of the requested dress or nightie, by means of the RFID reader in the shoulder of the doll detecting or not detecting the correct RFID tag in the clothing, or not detecting any RFID tag in any clothing, in which case the doll thinks she is not dressed and becomes cold;
not being allowed to have her own way or be given what she wants, all of which and/or such things in the programming and logic within the programs in the doll, create recognizable and understandable human emotional reactions of a human child the age of the embodiment of this doll, to how the player responds verbally or physically to a verbal query or request by the doll during a tete-a-tete between the doll and the player; or by the player not physically manipulating, putting, dressing, or in any other way handling the doll; and
wherein the said programs and associated logic run by said electronic microcontroller thus enable the player to affect the emotional state expressed visually by the doll by means of the doll changing the facial expressions and expressed verbally by the doll by means of the doll broadcasting words or sound effects that by their tenor and tone further emphasize and embellish the changing emotional state of the doll through verbal or physical interaction of the player with the doll.

25. The electronic doll described in claim 23, wherein said sensors include an electrical switch that operates in response to a squeezing force exerted by the player.

26. The electronic doll described in claim 23, wherein said sensors include an electrical switch located in the doll that operates in response to the doll being hugged by the player.

27. The electronic doll described in claim 23, wherein said sensors include an electrical switch that operates when the doll is moved in position and/or turned.

28. The electronic doll described in claim 23, wherein said sensors include a magnetic switch that operates in response to placement of a magnet in proximity to the magnetic switch.

29. The electronic doll described in claim 23, wherein said sensors include a brush sensor located in said doll head that recognizes the sweeping of a hairbrush over said doll head.

30. The electronic doll described in claim 28, wherein said sensors include a magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet.

31. The electronic doll described in claim 23, wherein said sensors include a resistance measuring circuit for connection to an electrical resistor carried by a simulated food product in which the value of the electrical resistor identifies the particular food product.

32. The electronic doll described in claim 23, wherein said sensors include an RFID reader for reading RFID tags that are placed in proximity to said RFID reader.

33. The electronic doll described in claim 32, wherein said doll face includes a mouth, and wherein said RFID reader is embedded in the roof of said mouth for reading RFID tags carried on simulated food objects that are placed inside said mouth.

34. The electronic doll described in claim 32, wherein said RFID reader is embedded in the butt of said doll torso for reading an RFID tag carried on a simulated toilet accessory.

35. The electronic doll described in claim 32, wherein said RFID reader is embedded in the upper back of the doll torso for reading an RFID tag carried in an article of clothing worn on the doll.

36. The electronic doll described in claim 23, wherein said sensors include:

a first electrical switch that operates in response to a squeezing force exerted by the player; a second electrical switch located in the doll that operates in response to the doll being hugged by the player; a third electrical switch that operates when the doll is moved in position and/or turned; a first magnetic switch that operates in response to placement of a magnet in proximity to the magnetic switch; a second magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet; a first RFID reader for reading RFID tags carried by simulated food objects that are placed inside the mouth of said doll face; a second RFID reader for reading an RFID tag carried on a simulated toilet accessory; and a third RFID reader for reading an RFID tag carried in an article of clothing worn on the doll.
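Purely for illustration, the sensor complement recited in claim 36 could be reported to the controller as a single snapshot; the SensorSnapshot structure and its field names below are assumptions, not part of the specification.

    from dataclasses import dataclass

    @dataclass
    class SensorSnapshot:
        hand_squeezed: bool      # first electrical switch (hand squeeze)
        hugged: bool             # second electrical switch (hug)
        moved_or_turned: bool    # third electrical switch (position/turn)
        magnet_nearby: bool      # first magnetic switch
        brushed: bool            # second magnetic switch (hairbrush magnet)
        mouth_tag: str = ""      # first RFID reader (simulated food in the mouth)
        seat_tag: str = ""       # second RFID reader (toilet accessory)
        clothing_tag: str = ""   # third RFID reader (article of clothing)

    snap = SensorSnapshot(False, True, False, False, False, seat_tag="POTTY-01")
    print(snap.hugged, snap.seat_tag)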

37. The electronic doll described in claim 23, wherein said programs includes means for enabling the doll to conduct a conversational tete-a-tete with a player on the subject of a simulated food object placed in the mouth of the doll.

38. A self-contained battery-powered interactive electronic doll in the form of a female child, said doll including:

a head; a torso; a pair of arms carried by said torso, each of said arms having a hand at the distal end of the arm;
a programmed electronic microcontroller;
stored speech phrases of the kind typically spoken by a female child; and
sound effects;
a speaker coupled to the microcontroller to enable the doll to broadcast appropriate verbalizations in the immediate environment by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player will speak;
a microphone for picking up audible sounds and words propagating in the immediate environment of the doll and supplying those sounds and words to said electronic microcontroller;
speech recognition technology controlled by said controller that enables the doll to recognize certain spoken words received through said microphone;
a plurality of sensors carried by the doll, said sensors including:
an RFID reader embedded in the mouth of the doll for reading RFID tags that are contained in simulated food objects placed inside the mouth of the doll to identify the respective simulated food object;
an RFID reader embedded in the back side of the doll for reading RFID tags carried on clothing worn by the doll to identify the respective article of clothing worn by the doll; and
an RFID reader embedded in the butt of the doll for reading RFID tags carried on utilitarian objects on which the doll may be seated to identify the respective object on which the doll is seated;
a first pressure operated electrical switch carried in a hand of the doll that operates in response to a squeezing force exerted on the hand by the player;
a second pressure operated electrical switch located in the doll that operates in response to the player hugging the doll; a third electrical switch that operates in response to movement of the doll in position and/or rotation;
a magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet;
said head including a doll face;
said programmed electronic processor further for initiating and conducting a verbal tete-a-tete with a living person at least on the subject of a simulated food object placed in the mouth of the doll, said verbal tete-a-tete being accompanied with facial expressions of the doll face that either visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete with said living person or constitute a visually expressed reply to words spoken by said living person during said verbal tete-a-tete;
a doll face manipulator operated by said programmed electronic controller for manipulating the face of the doll to produce a variety of facial expressions that portray human-like feelings and to produce those expressions concurrently with the utterance by the doll of words or sounds of a tenor not inconsistent with the feelings portrayed by the respective facial expression; and
wherein said program for said doll includes program means for enabling the doll to conduct a conversational tete-a-tete with a player at least on the subject of a simulated food object placed in the mouth of the doll;
a clock-calendar for keeping track of the time of day and date; and wherein said subject of a food selected for a meal is selected by said electronic processor located in said doll based in part on the time-of-day tracked by said clock means; a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, an axis of rotation, an eyelid, said eyelid applied to a circumferential portion of said spherical member, and an eyeball applied to another circumferential portion of said spherical member;
a shaft in each of said eye sockets for supporting said spherical member in respective ones of said eye sockets;
each of said eyeball and eyelid combinations being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
said face of said doll head including a mouth, said mouth including at least a lower jaw and said lower jaw being mounted to a pivot for pivoting movement;
a mouth actuator for producing pivotal movement of said lower jaw responsive to energization by said controller; and
eyeball and eyelid combination actuators for respectively rotating a respective one of said eyeball and eyelid combinations about a respective axis as directed by said programmed controller, either in one rotational direction to close said eyes or in an opposite rotational direction to open said eyes;
said programmed controller energizing said eyeball and eyelid combination actuators to repetitively rotate said eyeball and eyelid combination to an angular position in which said eyelid covers said eye socket to simulate a closed eye, and immediately thereafter rotating that sphere in the opposite direction to retract said eyelid to a separate angular position and expose the eye to produce a blink of said eye, and for concurrently energizing said sound reproducer, whereby verbalizations are broadcast through said loudspeaker, and for concurrently energizing said mouth actuator to repetitively pivot said lower jaw up and down while said lower jaw is pivoting back and forth to simulate speaking.

39. A doll, said doll including a head and a face on said head and a microcontroller for manipulating the appearance of said face to present at least the appearance of a happy child and, alternatively the appearance of a person who is emotionally saddened, for initiating verbal requests and for recognizing if a player verbally responds to a request with a positive or negative response, wherein said microcontroller in response to recognizing a negative response from a player following the issuance of a verbal request to the player produces said emotionally saddened appearance, whereby the player feels that the player has emotionally hurt the doll.

40. An interactive doll, said doll including a doll face and electronic means for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, with said living person's verbal responses to a query by said doll in the verbal tete-a-tete causing said doll to express facial expressions that are visually recognizable as human emotions and feelings communicated by particular configurations of the brows, eyes, eyelids, cheeks, mouth, and lips.

41. The interactive doll as defined in claim 40, wherein the facial expressions in response to words spoken by said living person to a query by said doll create particular feelings in said human towards said doll because of said doll's visual expression of the doll's feelings to said human regarding said human's responses during said verbal tete-a-tete.

42. The interactive doll as defined in claim 41, wherein the feelings in said human towards said doll, as the verbal tete-a-tete continues, become one of bonding with said doll with mothering and nurturing feelings, which feelings increase exponentially, and said human is able to create the appearance of feelings in said doll towards said human, i.e., the human's ability to see increasing feelings of love and happiness or surprise or sadness in said doll caused by said human's response to and interaction with said doll;

and in turn the human's ability to feel increasing feelings of love and happiness and concern for said doll's well being and happiness and contentment.

43. A doll and simulated food product combination that recognizes the simulated food product being fed to the doll selected from amongst multiple simulated food products, comprising in combination:

a plurality of different simulated food products, each of said food products including a respective RFID tag identifying the respective food product;
a doll torso and a doll head; said doll head including a mouth for receiving a simulated food product selected from amongst said plurality of different simulated food products and inserted in said mouth by a player;
a radio frequency identification (“RFID”) tag reader located in said doll;
a loudspeaker; and
a controller;
said RFID tag reader for wirelessly detecting the presence of an RFID tag of a simulated food product inserted into said doll mouth and receiving information from said RFID tag and supplying that information in said RFID tag to said controller;
said controller, responsive to receiving information from said RFID tag reader, for supplying output to said loudspeaker to broadcast information to the player relating to at least one food product.

44. The electronic doll described in claim 43, wherein said doll mouth includes a mouth roof, and wherein said RFID reader is embedded in said mouth roof for reading an RFID tag carried on a simulated food object that is placed inside said mouth.

45. An electronic doll comprising:

a torso and doll head;
a programmed electronic microcontroller, said programmed electronic microcontroller including programs that are run by said programmed electronic microcontroller and a memory;
at least one RFID tag sensor carried by said torso for wirelessly detecting the presence of an RFID tagged object placed in proximity to said tag sensor and receiving information therefrom and identifying that object to said electronic microcontroller; and
a plurality of articles, each respective article including a respective RFID tag identifying the respective article;
said RFID tag sensor wirelessly receiving information from any of said RFID tags placed in proximity to said tag sensor by a player.

46. The electronic doll as defined in claim 45, wherein said plurality of articles comprise a plurality of articles of clothing adapted to fit on said torso, each respective article including a respective RFID tag identifying the respective article, said RFID tag being placed in proximity to said RFID tag sensor when the article is placed on said torso by said player, whereby said electronic microcontroller receives information identifying that article.

47. The electronic doll as defined in claim 46, wherein said RFID tag sensor is embedded in the upper back of said torso of said doll.

48. The electronic doll as defined in claim 45, wherein said RFID tag sensor is embedded in said head of said doll, and wherein said articles comprise head-mounted articles, including head coverings and hair accessories.

49. The electronic doll as defined in claim 45, wherein said doll further comprises a pair of doll legs attached to said torso, and respective doll feet attached to respective ones of said doll legs; wherein said RFID tag reader is carried in at least one of said doll feet; and wherein said articles comprise shoes to fit on said respective doll feet.

Patent History
Publication number: 20070128979
Type: Application
Filed: Nov 20, 2006
Publication Date: Jun 7, 2007
Applicant:
Inventors: Judith Shackelford (Moorpark, CA), Adam Anderson (Ventura, CA), Jason Heller (Somis, CA)
Application Number: 11/602,882
Classifications
Current U.S. Class: 446/484.000
International Classification: A63H 29/22 (20060101);