Modular interaction device for toys and other devices

A modular interaction device includes a wireless identification reading module to receive a unique identifier from a wireless identification tag attached with an object. The unique identifier may be sent to a server. A voice prompt is received by the modular interaction device and at least a portion of the voice prompt is sent to the server, in accordance with at least one embodiment. An audio response may be received from the server and provided to an audio output device of the modular interaction device. The audio response is generated based at least in part on a character profile associated with the unique identifier, in accordance with at least one embodiment. The audio response may be responsive to the voice prompt.

Description
BACKGROUND

Multiple companies have developed virtual assistants or digital assistants that allow users to receive an audio response in response to speaking to the virtual assistant. Typically, a user will speak a statement or question and the virtual assistant will respond with an audio response that is relevant to the statement or question. For example, a user may say “tell me the weather” and the virtual assistant will respond with an audio weather report for the location of the user. Or, the user may ask, “how many people are there in the United States?” and the virtual assistant will respond with an audio response of “the population of the United States is approximately 310 million people.” Interactive experiences utilizing virtual assistant technology are oftentimes facilitated using a mobile device such as a smartphone or a tablet.

Toys have included buttons that trigger playback of a pre-programmed audio message on a speaker permanently attached to the toy. However, conventional toys have shortcomings with respect to cost, effectiveness, and/or efficiency. Some conventional toys are not capable of generating a response to a user's voice based on an analysis of the words spoken by the user. Furthermore, some conventional toys cannot change an interactive experience by sensing the presence of other toys.

BRIEF DESCRIPTION OF THE DRAWINGS

Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which:

FIG. 1 illustrates an example interactive system that includes an object, a Modular Interactive Device (“MID”), and a server;

FIG. 2 illustrates an example interactive system that includes an object, an MID, a device, and a server;

FIG. 3 is a block diagram of an example interactive engine that includes a character profile;

FIG. 4 is a block diagram of another example interactive engine;

FIG. 5 is a block diagram of yet another example interactive engine;

FIG. 6 depicts an illustrative flow chart demonstrating an example process for facilitating an interactive experience;

FIG. 7 depicts an illustrative flow chart demonstrating another example process for facilitating an interactive experience; and

FIG. 8 illustrates an environment in which various embodiments can be implemented.

DETAILED DESCRIPTION

In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.

Systems and methods described herein include using a unique identifier of an object (e.g. a physical item such as a toy or costume) to generate an interactive experience using an interactive engine. A modular interactive device may facilitate the interactive experience by reading the unique identifier from a wireless identification tag (e.g. an RFID tag) attached with the object and transmitting a voice prompt from a user of the object to a server that includes the interactive engine. In accordance with at least one embodiment, the modular interactive device is dimensioned to be inserted into or attached with the object.

In accordance with at least one embodiment, a stuffed figurine includes a wireless identification tag such as a passive (e.g., unpowered) RFID tag and also includes an opening or pouch to receive the modular interactive device. The modular interactive device can be reused by inserting it into, or attaching it with, other objects. The modular interactive device may include a wireless identification reader such as an RFID reader to read the wireless identification tag, a microphone to receive a voice prompt from a user, a wireless interface to connect to the server that includes the interactive engine, and a speaker to play a voice response generated by the interactive engine in response to receiving the voice prompt. The stuffed figurine may be in the likeness of a character from a book, television show, motion picture, or otherwise. The voice response generated by the interactive engine may be in the voice of the character. For example, if the stuffed figurine is a particular action hero from a motion picture, the voice response played on the speaker of the modular device (inserted inside the figurine) may be recorded audio from the actual motion picture in which the action hero was featured.

In accordance with at least one embodiment, Ana buys a modular device for a first toy used by her child, Ben. Ana inserts the modular device into the first toy. Ben is then able to talk to the toy and the toy talks back to Ben to provide an interactive experience. The modular device facilitates the interactive experience leveraging virtual assistant technology. Another parent, Catherine, purchases a modular device for a second toy used by her son, Diego. When Ben and Diego play together, the first and second toys may each detect the presence of the other toy based on sensing the RFID tag of the other toy. The interactive experience facilitated by the modular device may be adjusted based on the two toys being in proximity to each other. For example, the interactive experience may include voice responses that include and/or reference both toys. When Ben is tired of the first toy, the modular device can be reused in a new toy used by Ben. The interactive experience with the new toy can differ completely from that of the first toy, since the interactive experience will be specific to the RFID tag of the new toy rather than the RFID tag of the first toy.

FIG. 1 illustrates a system 100 that includes a Modular Interactive Device (“MID”) 120 and an object 110 to facilitate an interactive experience that utilizes an interactive engine. The interactive engine is executed by a server 190 connected to network 150, in accordance with at least one embodiment. In FIG. 1, object 110 is a stuffed animal—a stuffed bear to be specific. Object 110 may also be a plastic figurine, a stuffed figurine, a doll, a costume, or other object. FIG. 1 also illustrates a second object 115, which is a dwarf figurine, in the illustrated embodiment. Of course, object 115 may also be a stuffed figurine, a costume, a doll, or other object.

Objects 110 and 115 may bear a likeness to a character from a book, television show, motion picture, and/or internet-based video channel, for example. In accordance with at least one embodiment, objects 110 and 115 are intended for use by children in a given age range. In other embodiments, objects 110 and 115 are intended for use by adults or children above a given age (e.g. 13 years old). Throughout this disclosure, it is understood that descriptions that include object 110 and unique identifier 130 may also be applied to object 115 and unique identifier 132, even when object 115 and unique identifier 132 are not specifically mentioned.

Object 110 includes a unique identifier 130 that is attached to, attached with, or in some way physically linked to object 110. Similarly, object 115 includes a unique identifier 132 that is attached to, attached with, or in some way physically linked to object 115. In accordance with at least one embodiment, unique identifier 130 is included in a Radio-Frequency Identification (“RFID”) tag. Unique identifier 132 may also be included in an RFID tag. Unique identifiers 130 and 132 may take the form of a number or a set of alpha-numeric characters. For clarity, the examples of RFID tags and RFID readers are used throughout this description; however, embodiments may incorporate any suitable wireless identification tag and/or wireless identification tag reader. In accordance with at least one embodiment, unique identifiers described herein are unique among a set of identifiers associated with a set of character profiles (e.g., each of the set of character profiles may be uniquely associated with an identifier among the set of identifiers). The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are examples of organizations that set RFID standards. In accordance with at least one embodiment, ISO/IEC 14443 is used as the standard for communication between the described RFID tag and the described RFID reader.

In accordance with at least one embodiment, the RFID tag is a passive tag, in that it does not require power from a battery. A specific example of an RFID tag is a near-field communication (“NFC”) tag. ISO/IEC 14443 may be used as the standard for an NFC tag/reader pair. Passive RFID tags do not require power to send the unique identifier of the RFID tag to an RFID reader. Rather, an RFID reader (e.g. RFID reader 126) broadcasts or transmits an interrogation signal, such as interrogation signal 142. The passive RFID tag may generate a reflection signature in response to receiving interrogation signal 142. The reflection signature includes the unique identifier, and the RFID reader decodes the reflection signature to determine the unique identifier. Alternatively, a passive RFID tag may use energy from an interrogation signal received from the RFID reader to temporarily power a circuit that transmits a response signal that includes the unique identifier. For unique identifier 130, signal 144 may represent the reflection signature or the response signal, depending on the specific RFID technology used. Similarly, for unique identifier 132, signal 146 may represent the reflection signature or the response signal, depending on the specific RFID technology used.
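For illustration, the tag-read step might look like the following minimal sketch. The choice of an MFRC522-based reader driven by the hobbyist `mfrc522` Python library (e.g., on a Raspberry Pi) is an assumption made only for this example; the disclosure does not prescribe any particular reader chip or library.

```python
# Minimal sketch of the tag-read step, assuming an MFRC522-based reader and
# the hobbyist `mfrc522` library; neither is required by the disclosure.
from mfrc522 import SimpleMFRC522

reader = SimpleMFRC522()

def read_unique_identifier():
    # read() blocks while the reader broadcasts interrogation signals, then
    # returns the unique identifier once a passive tag responds.
    tag_id, _text = reader.read()
    return tag_id

if __name__ == "__main__":
    print(read_unique_identifier())
```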

In FIG. 1, MID 120 includes a controller 135, a speaker 122, a microphone 124, an RFID reader 126, and a wireless interface 128. Speaker 122 is an example of an audio output device. MID 120 may include a battery compartment and a power regulator (not illustrated) to power the different components of MID 120. RFID reader 126 may be configured to transmit interrogation signal 142 and receive signals 144 and/or 146 when an RFID tag that includes unique identifier 130 or 132 is proximate to RFID reader 126. The proximity may depend on the specific RFID technology implemented and the power used to broadcast interrogation signal 142. In accordance with at least one embodiment, the proximity required to send unique identifier 130 to RFID reader 126 is less than 12 inches.

Controller 135 is coupled to RFID reader 126, in FIG. 1. Controller 135 is configured to receive the unique identifier received by RFID reader 126 by way of signal 144 or 146. Controller 135 is also coupled to microphone(s) 124 to receive voice prompt 148. Voice prompt 148 is generated by a user of object 110 or 115. Voice prompt 148 may include an activation phrase or word (a.k.a. “wake word”) that marks the beginning of the voice prompt 148. There may be a default activation word that is stored in memory accessible to controller 135. The default activation word may be programmable and be changed by a user of MID 120. In accordance with at least one embodiment, the default activation word is based at least in part on the unique identifier. For example, where the unique identifier is associated with a character, the character's name may be the activation word. Controller 135 may execute instructions so that it is always receiving and analyzing audio from microphone 124, but only starts recording a voice prompt (for streaming or temporary storage purposes) when an activation word is received in audio from microphone 124. Controller 135 may execute instructions leveraging virtual assistant technology, as is known to those skilled in the art. Examples of conventional virtual assistants include Siri® (implemented by Apple, Inc.) and Alexa® (implemented by Amazon.com, Inc.).
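The wake-word gating described above can be sketched as follows. The callables `frames`, `detect_wake_word`, and `prompt_ended` are hypothetical stand-ins for the MID's audio front end, since the disclosure does not specify how the controller's firmware is written.

```python
# Sketch of wake-word gating: audio is analyzed continuously, but frames are
# only buffered (for streaming or temporary storage) after the activation
# word is heard. All three callables are hypothetical placeholders.
DEFAULT_WAKE_WORD = "bear"  # may be derived from the unique identifier

def capture_voice_prompt(frames, detect_wake_word, prompt_ended):
    buffer = []
    recording = False
    for frame in frames:                  # always receiving audio
        if not recording:
            recording = detect_wake_word(frame, DEFAULT_WAKE_WORD)
            continue                      # pre-activation audio is discarded
        buffer.append(frame)              # record only after activation
        if prompt_ended(frame):           # e.g., trailing silence detected
            return b"".join(buffer)
    return None
```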

Voice prompt 148 may be transformed into a suitable digital audio format for sending to server 190. Similarly, audio response 168 may be generated from a suitable digital audio format. Voice prompt 148 may refer to the voice of a user and/or an electronic representation thereof, as will become apparent by the context.

Once the voice prompt is received, the unique identifier 130 and the voice prompt may be sent to wireless interface 128 by controller 135, which is coupled to wireless interface 128. Wireless interface 128 sends both the unique identifier 130 and the voice prompt to server 190 via network 150. Wireless interface 128 is configured to provide a wireless connection 172 to network 150. Wireless interface 128 may be configured to use an Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard to communicate with a wireless router/switch with an internet connection to network 150, for example. In some examples, network 150 may include any one or a combination of many different types of networks, such as cable networks, the Internet, wireless networks, cellular networks, and other private and/or public networks.

The unique identifier 130 may be sent to server 190 prior to the voice prompt 148 being sent. Server 190 includes an interactive engine that uses a character profile to generate a response to voice prompt 148. Routing information and any necessary passwords to access server 190 may be stored in a memory accessible to controller 135. The character profile used to generate a response to voice prompt 148 depends on the unique identifier. In accordance with at least one embodiment, audio response 168 is responsive to voice prompt 148 in that audio response 168 is related to a subject matter of words included in the voice prompt. In accordance with at least one embodiment, audio response 168 is considered responsive to voice prompt 148 when the audio response is played by MID 120 within one second of the end of the voice prompt. In accordance with at least one embodiment, the character profile used is associated with the unique identifier so that the audio response is relevant to the object that includes the unique identifier. For example, where object 110 is a figurine of an action hero character, the audio response may be relevant to the action hero character because the unique identifier is linked to the action hero character. The audio response may be in a voice specific to the action hero character in that it includes any accents, cadence, and/or pitch that are unique to the action hero character. In accordance with at least one embodiment, the audio responses are recordings from the same actor that played the action hero (or lent their voice to an animated action hero character) in a motion picture.
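As an illustration, the round trip from MID to server might be sketched with the `requests` library as below. The endpoint URL and payload shape are assumptions; the disclosure only requires that the unique identifier and voice prompt reach the interactive engine and that an audio response come back.

```python
import requests

# Hypothetical endpoint; the disclosure does not define the server's API.
SERVER_URL = "https://interactive-engine.example.com/prompt"

def get_audio_response(unique_identifier, voice_prompt_bytes):
    # Send the unique identifier alongside the recorded voice prompt.
    resp = requests.post(
        SERVER_URL,
        data={"unique_identifier": str(unique_identifier)},
        files={"voice_prompt": ("prompt.wav", voice_prompt_bytes, "audio/wav")},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.content  # digital audio for controller 135 to play on speaker 122
```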

Controller 135 receives the audio response from the interactive engine of server 190 via network 150 and wireless interface 128. In accordance with at least one embodiment, controller 135 is configured to drive speaker 122 to output the audio response 168 so that the user can hear audio response 168. Audio response 168 may be a sentence or a sound effect that is relevant to a character embodied by object 110.

Controller 135 may be a processor, microprocessor, field-programmable gate array (FPGA), or other programmable logic device. Controller 135 may include internal memory and/or external memory (not illustrated) to store executable instructions, unique identifiers, digital versions of voice prompt(s) 148, and/or digital versions of audio response(s) 168. Storing versions of voice prompt(s) 148 and audio response(s) 168 may include storing in a temporary buffer for streaming purposes rather than long-term storage.

In contexts where two (or more) unique identifiers (e.g. 130 and 132) are proximate to MID 120 or have recently been proximate to MID 120, MID 120 may send both unique identifiers to server 190. Audio response 168 may then be generated based at least in part on a second character profile associated with the second unique identifier. In accordance with at least one embodiment, object 110 and object 115 are related in that they bear a likeness to two characters that are part of a same book, television show, or motion picture. Hence, the interactive engine of server 190 may generate an audio response 168 that accounts for the related nature of objects 110 and 115.

In accordance with at least one embodiment, a first unique identifier is included in a toy and a second unique identifier is included in an accessory related to the toy. The accessory may be any suitable object such as an object associated with the toy from a movie, book, or other media (e.g. luggage, hairbrush, tool, weapon, sword, hammer). In accordance with at least one embodiment, when both the first unique identifier from the toy and the second unique identifier from the accessory are sent to server 190, audio response 168 is generated based at least in part on a second character profile associated with both the first and second unique identifiers. The second character profile may be an expanded version of a first character profile where the second character profile includes audio responses that incorporate the presence of the accessory.

FIG. 2 illustrates an example interactive system 200 that includes an object 210, an MID 220, a unique identifier 230, a device 270, a network 250, and a server 290. Object 210, MID 220, unique identifier 230, network 250, wireless connection 272, and server 290 are examples of object 110, MID 120, unique identifier 130, network 150, wireless connection 172, and server 190, respectively, of FIG. 1. Device 270 may be any suitable type of computing device such as, but not limited to, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a server computer, a thin-client device, a tablet PC, etc.

Device 270 may establish a wireless connection 274 directly to MID 220 via a wireless interface (e.g. wireless interface 128) of MID 220. Wireless connection 274 may implement a Bluetooth® or IEEE 802.11 protocol, for example. Wireless connection 274 may be used to configure MID 220 to deliver a customized interactive experience. Device 270 may also establish a wired or wireless connection 276 to network 250 to configure MID 220 to deliver a customized interactive experience, which will be discussed in more detail below. Configuring MID 220 using device 270 may include utilizing a web browser or mobile application.

FIG. 3 illustrates a high-level block diagram of an interactive engine 300 that includes character profile 310. Character profile 310 may be associated with unique identifier 130 of FIG. 1, for example. A server that stores interactive engine 300 may use interpretation and response technology that is utilized for virtual assistants, as is known by those skilled in the art. Character profile 310 includes responses, each stored as an audio response or as text for generating an audio response. Character profile 310 may be initially populated with such responses and updated from time to time. Updating character profile 310 may include adding or subtracting responses from character profile 310. In one example, a sequel to a motion picture is released and character profile 310 is updated to include responses that incorporate themes, storylines, or dialogue introduced by the sequel. A studio producing the motion picture may have access to populate, repopulate, and update responses of character profile 310. In accordance with at least one embodiment, the voice prompt is translated into text and a search is done of text representations of audio responses in character profile 310 to select an audio response 368. In accordance with at least one embodiment, audio response 368 is selected randomly from an array of audio responses in character profile 310.
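For illustration only, a character profile might be represented as below. The dictionary layout, with clips grouped into subsets and flagged by language and maturity, is an assumption (the disclosure leaves the storage format open); the same layout is reused in the filter and selector sketches that follow.

```python
import random

# Illustrative profile layout; clip names and flags are assumptions.
character_profile_310 = {
    "character": "Action Hero",
    "responses": {
        "joke": [
            {"clip": "joke_take1.wav", "language": "en", "mature": False},
            {"clip": "joke_take2.wav", "language": "en", "mature": True},
        ],
        "greeting": [
            {"clip": "hello_there.wav", "language": "en", "mature": False},
        ],
    },
}

def update_profile(profile, subset, new_responses):
    # e.g., a studio adds dialogue introduced by a sequel.
    profile["responses"].setdefault(subset, []).extend(new_responses)

def pick_random_response(profile, subset):
    # Random selection from an array of responses, per the description.
    return random.choice(profile["responses"][subset])["clip"]
```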

Executable instructions for interactive engine 300 and audio responses associated with character profile 310 may be stored in memory accessible to server 190 or 290, of FIG. 1 and FIG. 2, for example. In FIG. 3, interactive engine 300 receives voice prompt 348 and generates an audio response 368 based at least in part on receiving voice prompt 348. Voice prompt 348 and audio response 368 are examples of voice prompt 148 and audio response 168. Like voice prompt 348, filter element 304 may be an input to interactive engine 300. Examples of filter element 304 are the age of the user of object 110 and the location of MID 120. If the age of the user of object 110 is below a certain threshold (e.g. 13 years old), certain responses may be removed from character profile 310 due to their intellectual complexity or due to their mature content, for example. The location of MID 220 or device 270 as a filter element 304 may influence the language of the audio responses 368, for example. Ambient factors such as time of day and/or weather derived from the location of MID 220 or device 270 may also be filter elements. Filter element 304 may be sent from MID 120 as a digital word to server 190 via network 150.
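A sketch of applying filter elements to a set of candidate responses, reusing the illustrative profile layout above; the age threshold matches the example in the description, while the flag names are assumptions.

```python
AGE_THRESHOLD = 13  # example threshold from the description

def apply_filter_elements(responses, age=None, language=None):
    # Withhold mature responses for young users, and narrow to the language
    # implied by the location of MID 220 or device 270.
    candidates = responses
    if age is not None and age < AGE_THRESHOLD:
        candidates = [r for r in candidates if not r["mature"]]
    if language is not None:
        candidates = [r for r in candidates if r["language"] == language]
    return candidates
```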

The age of the user of object 110 in FIG. 1 may be inputted to system 200 in FIG. 2 by the user of object 110 or by a parent/guardian/caretaker of the user of object 110. In accordance with at least one embodiment, the age of the user is inputted into a graphical interface (e.g. web browser or mobile application) of device 270 and transmitted to MID 220 via connection 274. MID 220 then reports the age as a filter element 304 to interactive engine 300 via network 250 and connection 272. Alternatively, the age of the user may be inputted into the graphical interface of device 270 and transmitted to interactive engine 300 via network 250 and connection 276. In this case, the age of the user bypasses MID 220 and therefore can be sent even if device 270 is remote from MID 220.

FIG. 4 illustrates a block diagram of an example interactive engine 400, in accordance with at least one embodiment. Interactive engine 400 may be stored on server 190/290 of FIGS. 1 and 2. Interactive engine 400 includes a personal assistant audio interpretation engine (PAAIE) 403 and a customized character engine 407. PAAIE 403 utilizes virtual assistant technology to generate an interpreted response 458 in response to receiving voice prompt 448. Voice prompt 448 is an example of voice prompt 148. In accordance with at least one embodiment, PAAIE 403 identifies key words included in voice prompt 448 using any suitable speech recognition technique and generates interpreted response 458 based at least in part on the identified key words. Interpreted response 458 may include a digital representation of a voice of a digital assistant in XML (Extensible Markup Language) format, in accordance with at least one embodiment. Customized character engine 407 generates audio response 468 in response to receiving interpreted response 458, in FIG. 4. Audio response 468 is an example of audio response 168. Customized character engine 407 transforms interpreted response 458 into the voice that is specific to a character depicted by a toy or item, in accordance with at least one embodiment. In FIG. 4, customized character engine 407 receives one or more unique identifier(s) 495. Unique identifier(s) 495 are examples of unique identifiers 130 and 132. Customized character engine 407 customizes interpreted response 458 to match the unique identifier(s) 495. In one example, customized character engine 407 lays a voice profile over interpreted response 458 to generate audio response 468 in the voice of the character depicted by a toy identified by unique identifier(s) 495. In accordance with at least one embodiment, audio response 468 is a modified version of an XML file received as interpreted response 458.
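The two-stage flow of FIG. 4 can be sketched as a simple pipeline. Here `paaie` and `render_in_character_voice` are hypothetical placeholders for the virtual-assistant interpretation stage and the voice-overlay stage, neither of which the disclosure implements in detail.

```python
def interactive_engine_400(voice_prompt, unique_identifiers,
                           paaie, render_in_character_voice):
    # PAAIE 403: interpret the prompt into a generic assistant response,
    # e.g., an XML representation of the answer.
    interpreted_response_458 = paaie(voice_prompt)
    # Customized character engine 407: overlay the voice profile of the
    # character associated with the unique identifier(s).
    audio_response_468 = render_in_character_voice(
        interpreted_response_458, unique_identifiers)
    return audio_response_468
```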

FIG. 5 illustrates a block diagram of an example interactive engine 500. Interactive engine 500 is one example that may be implemented as interactive engine 300 of FIG. 3. Interactive engine 500 may be stored on server 190/290 of FIGS. 1 and 2. Interactive engine 500 includes interpretation engine 560, selector 570, customizer block 580, and character profile 510. Character profile 510 includes response subsets 520, 530, and 540. It is understood that although three response subsets are shown in character profile 510, a given character profile may include many more (or fewer) response subsets. Each response subset includes at least one response. Response subset 520 includes five responses; response subset 530 includes three responses; and response subset 540 includes one response, in the illustrated embodiment. Each response subset may have more or fewer responses than are illustrated.

Interpretation engine 560 receives voice prompt 548 from network 150/250. Voice prompt 548 is one example of voice prompts 148 and 348 of FIGS. 1 and 3. Interpretation engine 560 may convert voice prompt 548 to text and pass the text to selector 570. Selector 570 may analyze the received text from interpretation engine 560 and select a response subset based on the text. In one example, the voice prompt includes the phrase, “tell me a joke” and interpretation engine 560 converts the audio to the text, “tell me a joke.” Selector 570 may have at least two functions. First, selector 570 may select the character profile associated with the one or more unique identifiers 595 received from MID 220. Unique identifier 595 is an example of unique identifiers 130, 132, and 230, and specifies which character profile is used. Second, selector 570 may analyze the text received from interpretation engine 560. For example, when selector 570 finds the word “joke” in the received text, selector 570 may select response subset 520 based on finding the word “joke” in the text. The dashed arrow illustrated in FIG. 5 to the right of selector 570 represents the ability of selector 570 to select among the illustrated response subsets. Response subset 520 may be populated with an array of responses that are audio recordings of jokes. The audio recordings may be in the voice of a character depicted by object 110/210. An audio recording of a joke stored as a response in response subset 520 may be selected randomly and outputted by interactive engine 500 as audio response 568. Audio response 568 is one example of audio response 168/368.
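Selector 570's keyword routing might be sketched as below, reusing the illustrative profile layout from the FIG. 3 discussion; the keyword table is an assumption made for this example.

```python
import random

# Hypothetical mapping of trigger words to response subsets.
SUBSET_KEYWORDS = {"joke": "joke", "hello": "greeting", "hi": "greeting"}

def select_response_clip(profile, prompt_text):
    # Scan the transcribed prompt for a trigger word, then draw randomly
    # from the matching subset, per the description above.
    for word in prompt_text.lower().split():
        subset = SUBSET_KEYWORDS.get(word)
        if subset in profile["responses"]:
            return random.choice(profile["responses"][subset])["clip"]
    return None  # no keyword matched; fall back to a default subset

# e.g., select_response_clip(character_profile_310, "tell me a joke")
```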

Filter element 504 may influence the responses in a given subset that are available as audio response 568, as discussed in the description of filter element 304. Staying with the joke example, jokes that involve a certain level of complexity or abstract thinking may be eliminated if the user is under a certain age, for example. Or, if a location is included as a filter element 504, the response subset 520 may be limited to responses that are jokes told in a language that is predominantly used in the location of the MID 220 or device 270 of FIG. 2. The location of MID 220 or device 270 may be inputted to system 200 by way of a GPS receiver of device 270 or MID 220 (not illustrated). In the alternative to a location filter element, interpretation engine 560 may send a language indicator 562 to character profile 510 that influences or filters the responses available as audio response 568, as shown in FIG. 5. For instance, if interpretation engine 560 recognizes voice prompt 548 as a Spanish language voice prompt, the responses available in the response subsets will be narrowed to Spanish language responses.

Character profile 510 is illustrated as having access to customizer block 580. Custom data 506 populates customizer block 580 with user inputs to interactive engine 500. In accordance with at least one embodiment, the user or parent/guardian/caretaker can type the user's name into the graphical user interface of device 270 in FIG. 2 for transmission to interactive engine 500 (via connection 276) as custom data 506. The user's name can then be incorporated into the audio response(s) 568 sent from interactive engine 500 to MID 120/220. In accordance with at least one embodiment, the user's name is recorded by the user or parent/guardian/caretaker using device 270 and sent as an audio file of custom data 506 to customizer block 580 via network 250. Staying with the joke example, if the user's name is Jason and the audio prompt includes the word “joke,” the response may start with, “Ok, Jason” followed by the joke, where “Jason” is the recording of the user saying their own name, a parent/guardian/caretaker saying Jason's name, or a reading of the name Jason based on the text input typed into the graphical user interface of device 270. The reading of the text “Jason” may be done in a voice specific to a character depicted by object 110 of FIG. 1. The customizer block 580 may apply any suitable audio customization to an audio response, or may supply additional custom audio response subsets and/or audio responses.
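The personalization performed by customizer block 580 can be sketched as a splice of the user's name ahead of the selected response. Clip names are illustrative, and audio concatenation is reduced to list concatenation for this sketch.

```python
# Sketch of customizer block 580: a recorded or synthesized clip of the
# user's name (custom data 506) is spliced ahead of the selected response,
# yielding, e.g., "Ok, Jason" followed by the joke.
def customize_response(response_clips, name_clip=None):
    if name_clip is None:
        return response_clips
    return ["ok.wav", name_clip] + response_clips

# customize_response(["joke_take1.wav"], name_clip="jason.wav")
# -> ["ok.wav", "jason.wav", "joke_take1.wav"]
```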

Referring back to FIG. 1, two or more objects (e.g. 110 and 115) may be proximate to MID 120 such that RFID reader 126 can read their unique identifiers (e.g. 130 and 132). As mentioned above, the two or more objects may be related in that they bear the likeness of, or depict, characters that are part of a same book, television show, motion picture, or other media content. When two or more objects are proximate to MID 120 such that RFID reader 126 has received the unique identifiers of the objects, the two or more unique identifiers may be sent to interactive engine 500 as unique identifiers 595 of FIG. 5. In this case, audio response 568 is generated based at least in part on a second character profile associated with the two or more unique identifiers. In accordance with at least one embodiment, when selector 570 receives two or more unique identifiers from MID 220, selector 570 selects a combined character profile that is different from a character profile associated with just a single unique identifier. For example, the combined character profile may include responses from both characters or audio responses that include audio of dialogue between the characters represented by the two or more objects. In accordance with at least one embodiment, the combined character profile includes the entire character profiles associated with the first and second unique identifiers (e.g. 130 and 132) in addition to audio responses that include audio of dialogue between the characters represented by objects 110 and 115.
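A sketch of combined-profile selection: when two tags have recently been read (e.g., unique identifiers 130 and 132), an order-independent key selects a combined profile holding cross-character dialogue. The table contents and profile names are illustrative.

```python
# Hypothetical lookup table of combined character profiles.
COMBINED_PROFILES = {
    frozenset({130, 132}): "bear_and_dwarf_dialogue_profile",
}

def select_character_profile(unique_identifiers, single_profiles):
    key = frozenset(unique_identifiers)
    if len(key) > 1 and key in COMBINED_PROFILES:
        return COMBINED_PROFILES[key]     # combined profile for both toys
    return single_profiles[next(iter(key))]  # fall back to single profile
```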

FIG. 6 depicts an illustrative flow chart demonstrating an example process 600 for facilitating an interactive experience. The process 600 is illustrated as a logical flow diagram, each operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be omitted or combined in any order and/or in parallel to implement this process and any other processes described herein.

Some or all of the process 600 (or any other processes described herein, or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications). In accordance with at least one embodiment, the process 600 of FIG. 6 may be performed by MID 120 or 220 of FIG. 1 and FIG. 2. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program including a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory.

In process block 602, a voice prompt (e.g. voice prompt 148) is received. The voice prompt may be received by a microphone (e.g. microphone 124) included with MID 120. The voice prompt is sent to the server in process block 604. A unique identifier (e.g. 130 or 132) is received from an RFID tag in process block 606. The unique identifier may be received by an RFID reader such as RFID reader 126 of FIG. 1. In process block 608, the unique identifier is sent to a server (e.g. server 190 or 290). In accordance with at least one embodiment, the unique identifier and the voice prompt are sent to the server at substantially the same time. In accordance with at least one embodiment, the voice prompt is sent to the server via a wireless interface (e.g. wireless interface 128). In process block 610, an audio response (e.g. audio response 168) is received from the server. The audio response is provided to a speaker in process block 612. In accordance with at least one embodiment, the audio response is provided to, and played on, speaker 122.

FIG. 7 depicts an illustrative flow chart demonstrating another example process 700 for facilitating an interactive experience. The process 700 is illustrated as a logical flow diagram, each operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be omitted or combined in any order and/or in parallel to implement this process and any other processes described herein.

Some or all of the process 700 (or any other processes described herein, or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications). In accordance with at least one embodiment, the process 700 of FIG. 7 may be performed by server 190 or 290. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program including a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory.

In process block 702, a voice prompt (e.g. 148) and a unique identifier (e.g. 130) are received. The voice prompt and the unique identifier may be sent by MID 120 of FIG. 1, for example. The unique identifier is included in an RFID tag read by an RFID reader of MID 120, in accordance with at least one embodiment. An interpreted response (e.g. 458) is generated from the voice prompt in process block 704. The interpreted response may be generated by an audio interpretation engine (e.g. 403). A character profile is selected based at least in part on the received unique identifier, in process block 706. The character profile may be selected by a customized character engine (e.g. 407) that receives the unique identifier. In process block 708, the selected character profile customizes the interpreted response to generate an audio response (e.g. 468). The customization may include overlaying the interpreted response with a voice profile specific to a character depicted by a toy associated with the unique identifier. In process block 710, the audio response is sent to a modular interactive device (e.g. 120).

FIG. 8 illustrates aspects of an example environment 800 for implementing aspects in accordance with various embodiments. As will be appreciated, although a Web-based environment is used for purposes of explanation, different environments may be used, as appropriate, to implement various embodiments. The environment includes an electronic client device 802 operable to send and receive requests, messages or information over an appropriate network 804. MID 120 and 220 are examples of electronic client device 802. Network 804 can include any appropriate network, including an intranet, the Internet, a cellular network, a local area network or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network can be enabled by wired or wireless connections and combinations thereof. In this example, the network includes the Internet, as the environment includes a Web server 806 for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. Network 804 is an example of network 150 or 250.

The illustrative environment includes at least one application server 808 and a data store 810. Interactive engines 300, 400, and 500 may be implemented by application server 808. It should be understood that there can be several application servers, layers, or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”) or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 802 and the application server 808, can be handled by the Web server. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.

The data store 810 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 812 and user information 816, which can be used to serve content for the production side. The data store also is shown to include a mechanism for storing log data 814, which can be used for reporting, analysis or other such purposes. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 810. The data store 810 is operable, through logic associated therewith, to receive instructions from the application server 808 and obtain, update or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type.

Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.

The environment in accordance with at least one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in FIG. 8. Thus, the depiction of the system 800 in FIG. 8 should be taken as being illustrative in nature and not limiting to the scope of the disclosure.

The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.

Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), Open System Interconnection (“OSI”), File Transfer Protocol (“FTP”), Universal Plug and Play (“UPnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof.

In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM®.

The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU”), at least one input device (e.g., a mouse, keyboard, controller, touch screen or keypad) and at least one output device (e.g., a display device, printer or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.

Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.) and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.

Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.

Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims.

The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.

Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.

Preferred embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.

All references, including publications, patent applications and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.

Claims

1. A system for interactive toys, the system comprising:

a toy including a radio-frequency identification (RFID) tag attached with the toy, wherein the toy bears a likeness of a character; and
a modular interaction device dimensioned to be inserted into the toy, wherein the toy is configured at least to accept the modular interaction device, the modular interaction device including a wireless interface, an RFID reading module, a microphone, and a speaker, wherein the modular interaction device is configured to, at least:
receive, with the microphone, a voice prompt from a user of the toy;
send, with the wireless interface, the voice prompt to a server;
receive, with the RFID reading module, a unique identifier from the RFID tag;
send, with the wireless interface, the unique identifier to the server;
receive, with the wireless interface, an audio response from the server, wherein the audio response is generated by the server based at least in part on a character profile associated with the unique identifier and the character, and wherein the audio response is in a voice specific to the character and is responsive to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt; and
output the audio response through the speaker.

2. The system of claim 1, wherein the modular interaction device is further configured to:

receive, with the RFID reading module, a second unique identifier from a second RFID tag attached to a second toy of a second character; and
send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.

3. The system of claim 1, wherein the modular interaction device is further configured to:

receive, with the RFID reading module, a second unique identifier from a second RFID tag attached with an accessory relating to the character; and
send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.

4. The system of claim 1, wherein the voice prompt includes an activation phrase or word, and wherein the activation phrase or word marks the beginning of the voice prompt.

5. A computer-implemented method, comprising:

receiving, by an audio input device of a modular device associated with an object, a voice prompt;
sending at least a portion of the voice prompt to a server;
receiving, with a wireless identification reading module of the modular device, a unique identifier from a wireless identification tag attached with the object;
sending the unique identifier to the server;
receiving, by the modular device, an audio response from the server, wherein the audio response is generated based at least in part on a character profile associated with the unique identifier, and wherein the audio response is in a voice associated with the character profile and responsive to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt; and
outputting the audio response by an audio output device of the modular device.

6. The method of claim 5, further comprising:

receiving, with the wireless identification reading module of the modular device, a second unique identifier from a second wireless identification tag attached with a second object; and
sending the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.

7. The method of claim 5, wherein the unique identifier and the at least a portion of the voice prompt are sent to the server at substantially a same time that is subsequent to receiving the unique identifier and the at least a portion of the voice prompt.

8. The method of claim 5, wherein sending the unique identifier and the at least a portion of the voice prompt to the server includes transmitting the unique identifier and the at least a portion of the voice prompt over a wireless interface of the modular device, and wherein receiving the audio response from the server includes receiving the audio response from the server over the wireless interface of the modular device.

9. (canceled)

10. The method of claim 5, further comprising:

receiving at least one filter element, wherein the at least one filter element causes customization of at least one audio response of an array of audio responses available in the character profile, the audio response selected from the array of audio responses.

11. The method of claim 10, wherein the at least one filter element includes at least one of a location of the modular device or an age of a user of the object.

12. The method of claim 10, wherein the at least one filter element includes an ambient factor derived from a location of the modular device.

13. The method of claim 5, further comprising transmitting, with the wireless identification reading module, an interrogation signal that activates the wireless identification tag.

14. The method of claim 5, wherein the audio response is responsive to the voice prompt by the audio response being relevant to a subject matter of words included in the voice prompt.

15. The method of claim 5, wherein the object is a toy, the method further comprising:

receiving, with the wireless identification reading module, a second unique identifier from a second wireless identification tag attached with an accessory relating to the toy; and
sending the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.

16. A system comprising:

a modular device dimensioned to be inserted into an item, the modular device including a wireless identification reader and a wireless interface, wherein the modular device is configured to, at least:
receive a voice prompt;
transmit, with the wireless identification reader, an interrogation signal;
receive a unique identifier from a wireless identification tag associated with the item in response to the wireless identification reader transmitting the interrogation signal;
send, with the wireless interface, the unique identifier and the voice prompt to a server; and
receive, for presentation to a user, with the wireless interface from the server, an audio response generated by the server based at least in part on a character profile associated with the unique identifier and the item, wherein the audio response is in a voice associated with the character profile and relevant to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt.

17. The system of claim 16, wherein the modular device is further configured to:

receive, with the wireless identification reader, a second unique identifier from a second wireless identification tag; and
send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.

18. The system of claim 16, wherein the voice prompt includes an activation phrase or word, and wherein the activation phrase or word marks the beginning of the voice prompt.

19. The system of claim 16, wherein the wireless identification tag is attached with the item bearing a likeness of a character, and wherein the audio response is in a voice specific to the character.

20. The system of claim 16, wherein the wireless identification reader includes a radio-frequency identification (RFID) reader, and wherein the wireless identification tag includes an RFID tag.

21. The system of claim 1, further comprising a user device separate from the modular interaction device configured to, at least:

receive custom data from the user; and
transmit the custom data to at least one of the modular interaction device or the server, such that the audio response is generated to include a presentation of the custom data.
Patent History
Publication number: 20180272240
Type: Application
Filed: Dec 23, 2015
Publication Date: Sep 27, 2018
Inventors: Peter Milos Soudek (Seattle, WA), Hau Wing Calvin Kwok (Seattle, WA), Benjamin Guy Hills (Kirkland, WA)
Application Number: 14/757,823
Classifications
International Classification: A63H 3/36 (20060101); A63H 3/28 (20060101);