Modular interaction device for toys and other devices
A modular interaction device includes a wireless identification reading module to receive a unique identifier from a wireless identification tag attached with an object. The unique identifier may be sent to a server. A voice prompt is received by the modular interaction device and at least a portion of the voice prompt is sent to the server, in accordance with at least one embodiment. An audio response may be received from the server and provided to an audio output device of the modular interaction device. The audio response is generated based at least in part on a character profile associated with the unique identifier, in accordance with at least one embodiment. The audio response may be responsive to the voice prompt.
Multiple companies have developed virtual assistants or digital assistants that allow users to receive an audio response in response to the user speaking to the virtual assistant. Typically, a user will speak a statement or question and the virtual assistant will respond with an audio response that is relevant to the statement or question. For example, a user may say “tell me the weather” and the virtual assistant will respond with an audio weather report for the location of the user. Or, the user may ask, “how many people are there in the United States?” and the virtual assistant will respond with an audio response of “the population of the United States is approximately 310 million people.” Interactive experiences utilizing virtual assistant technology are often facilitated using a mobile device such as a smartphone or a tablet.
Toys have included buttons that activate a pre-programmed audio message played on a speaker that is permanently attached to the toy. However, conventional toys have shortcomings with respect to cost, effectiveness, and/or efficiency, for example. Some conventional toys are not capable of generating a response to a user's voice based on an analysis of words spoken by a user of the toy. Furthermore, some conventional toys do not change an interactive experience by sensing the presence of other toys.
Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which:
In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.
Systems and methods described herein include using a unique identifier of an object (e.g. a physical item such as a toy or costume) to generate an interactive experience using an interactive engine. A modular interactive device may facilitate the interactive experience by reading the unique identifier (e.g. an RFID tag) of the object and transmitting a voice prompt from a user of the object to a server that includes the interactive engine. In accordance with at least one embodiment, the modular interactive device is dimensioned to be inserted into or attached with the object.
In accordance with at least one embodiment, a stuffed figurine includes a wireless identification tag such as a passive (e.g., unpowered) RFID tag and also includes an opening or pouch to receive the modular interactive device. The modular device can be reused by inserting it into or attaching it with other objects. The modular interactive device may include a wireless identification reader such as an RFID reader to read the wireless identification tag, a microphone to receive a voice prompt from a user, a wireless interface to connect to the server that includes the interactive engine, and a speaker to play a voice response generated by the interactive engine in response to receiving the voice prompt. The stuffed figurine may be in the likeness of a character from a book, television show, motion picture, or otherwise. The voice response generated by the interactive engine may be in the voice of the character. For example, if the stuffed figurine is a particular action hero from a motion picture, the voice response played on the speaker of the modular device (inserted inside the figurine) may be recorded audio from the actual motion picture in which the action hero was featured.
In accordance with at least one embodiment, Ana buys a modular device for a first toy used by her child, Ben. Ana inserts the modular device into the first toy. Ben is then able to talk to the toy and the toy talks back to Ben to provide an interactive experience. The modular device facilitates the interactive experience leveraging virtual assistant technology. Another parent Catherine purchases a modular device for a second toy used by her son, Diego. When Ben and Diego play together, the first and second toy may detect the presence of the other toy based on sensing the RFID tag of the other toy. The interactive experience facilitated by the modular device may be adjusted based on the two toys being in proximity to each other. For example, the interactive experience may include voice responses that include and/or reference both toys. When Ben is tired of the first toy, the modular device can be reused in a new toy used by Ben. The interactive experience with the new toy can change completely from the first toy since the particular interactive experience in the new toy will be specific to the RFID tag in the new toy rather than the RFID tag of the first toy.
Objects 110 and 115 may bear a likeness to a character from a book, television show, motion picture, and/or internet-based video channel, for example. In accordance with at least one embodiment, objects 110 and 115 are intended for use by children in a given age range. In other embodiments, objects 110 and 115 are intended for use by adults or children above a given age (e.g. 13 years old). Throughout this disclosure, it is understood that descriptions that include object 110 and unique identifier 130 may also be applied to object 115 and unique identifier 132, even when object 115 and unique identifier 132 are not specifically mentioned.
Object 110 includes a unique identifier 130 that is attached to, attached with, or in some way physically linked to object 110. Similarly, object 115 includes a unique identifier 132 that is attached to, attached with, or in some way physically linked to object 115. In accordance with at least one embodiment, unique identifier 130 is included in a Radio-Frequency Identification (“RFID”) tag. Unique identifier 132 may also be included in an RFID tag. Unique identifiers 130 and 132 may take the form of a number or a set of alpha-numeric characters. For clarity, the examples of RFID tags and RFID readers are used throughout this description; however, embodiments may incorporate any suitable wireless identification tag and/or wireless identification tag reader. In accordance with at least one embodiment, unique identifiers described herein are unique among a set of identifiers associated with a set of character profiles (e.g., each of the set of character profiles may be uniquely associated with an identifier among the set of identifiers). The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are examples of organizations that set RFID standards. In accordance with at least one embodiment, ISO/IEC 14443 is used as the standard for communication between the described RFID tag and the described RFID reader.
In accordance with at least one embodiment, the RFID tag is a passive tag, in that it does not require power from a battery. A specific example of an RFID tag is a near-field communication (“NFC”) tag. ISO/IEC 14443 may be used as the standard for an NFC tag/reader pair. Passive RFID tags do not require power to send the unique identifier of the RFID tag to an RFID reader. Rather, an RFID reader (e.g. RFID reader 126) broadcasts or transmits an interrogation signal, such as interrogation signal 142. The passive RFID tag may generate a reflection signature in response to receiving interrogation signal 142. The reflection signature includes the unique identifier and the RFID reader decodes the reflection signature to determine the unique identifier. Alternatively, a passive RFID tag may use energy from an interrogation signal received from the RFID reader to temporarily power a circuit that transmits a response signal that includes the unique identifier. For unique identifier 130, signal 144 may represent the reflection signature or the response signal, depending on the specific RFID technology used. Similarly, for unique identifier 132, signal 146 may represent the reflection signature or the response signal, depending on the specific RFID technology used.
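The interrogate/respond exchange described above can be sketched as follows. This is a minimal illustrative model, not an implementation of ISO/IEC 14443; the class names, the identifier value, and the string-based signals are assumptions introduced purely for illustration.

```python
# Hypothetical sketch of a passive-tag read: the reader broadcasts an
# interrogation signal and the tag answers with its unique identifier.
# None of these names come from a real RFID library.

class PassiveRfidTag:
    """Models a passive tag (e.g. the tag holding unique identifier 130)."""

    def __init__(self, unique_identifier: str):
        self.unique_identifier = unique_identifier

    def respond(self, interrogation_signal: str) -> str:
        # A real passive tag harvests energy from the interrogation signal
        # (or modulates a reflection signature); here we simply return the
        # stored identifier as the response signal.
        return self.unique_identifier


class RfidReader:
    """Models a reader (e.g. RFID reader 126) decoding a tag's response."""

    def read(self, tag: PassiveRfidTag) -> str:
        # Broadcast an interrogation signal and decode the response.
        return tag.respond("INTERROGATE")


reader = RfidReader()
tag = PassiveRfidTag("ID-130")  # "ID-130" is an illustrative identifier value
print(reader.read(tag))  # -> ID-130
```

The key property modeled here is that the tag contributes no power of its own: it only ever acts in response to an interrogation from the reader.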
In
Controller 135 is coupled to RFID reader 126, in
Voice prompt 148 may be transformed into a suitable digital audio format for sending to server 190. Similarly, audio response 168 may be generated from a suitable digital audio format. Voice prompt 148 may refer to the voice of a user and/or an electronic representation thereof, as will become apparent by the context.
Once the voice prompt is received, the unique identifier 130 and the voice prompt may be sent to wireless interface 128 by controller 135, which is coupled to wireless interface 128. Wireless interface 128 sends both the unique identifier 130 and the voice prompt to server 190 via network 150. Wireless interface 128 is configured to provide a wireless connection 172 to network 150. Wireless interface 128 may be configured to use an Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard to communicate with a wireless router/switch with an internet connection to network 150, for example. In some examples, network 150 may include any one or a combination of many different types of networks, such as cable networks, the Internet, wireless networks, cellular networks, and other private and/or public networks.
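The transmission of the unique identifier together with the voice prompt might be packaged as shown below. The wire format is not specified in this description, so the JSON layout, field names, and base64 encoding here are assumptions for illustration only.

```python
# Hypothetical sketch of packaging unique identifier 130 and a digitized
# voice prompt for transmission to server 190. The payload layout is an
# assumption; digital audio is binary, so it is base64-encoded for JSON.

import base64
import json

def build_payload(unique_identifier: str, voice_prompt_bytes: bytes) -> str:
    return json.dumps({
        "unique_identifier": unique_identifier,
        "voice_prompt": base64.b64encode(voice_prompt_bytes).decode("ascii"),
    })

# Illustrative values: "ID-130" stands in for unique identifier 130, and
# the byte string stands in for digitized audio of voice prompt 148.
payload = build_payload("ID-130", b"\x00\x01audio")
print(json.loads(payload)["unique_identifier"])  # -> ID-130
```

A streaming implementation could instead send the audio in chunks from a temporary buffer, as suggested by the buffering discussion later in this description.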
The unique identifier 130 may be sent to server 190 prior to the voice prompt 148 being sent. Server 190 includes an interactive engine that uses a character profile to generate a response to voice prompt 148. Routing information and any necessary passwords to access server 190 may be stored in a memory accessible to controller 135. The character profile used to generate a response to voice prompt 148 depends on the unique identifier. In accordance with at least one embodiment, audio response 168 is responsive to voice prompt 148 in that audio response 168 is related to a subject matter of words included in the voice prompt. In accordance with at least one embodiment, audio response 168 is considered responsive to voice prompt 148 when the audio response is played by MID 120 within one second of the end of the voice prompt. In accordance with at least one embodiment, the character profile used is associated with the unique identifier so that the audio response is relevant to the object that includes the unique identifier. For example, where object 110 is a figurine of an action hero character, the audio response may be relevant to the action hero character because the unique identifier is linked to the action hero character. The audio response may be in a voice specific to the action hero character in that it includes any accents, cadence, and/or pitch that are unique to the action hero character. In accordance with at least one embodiment, the audio responses are recordings from the same actor that played the action hero (or lent their voice to an animated action hero character) in a motion picture.
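The mapping from unique identifier to character profile can be sketched as a simple lookup, as below. The profile table, identifier values, and placeholder response generation are assumptions; an actual interactive engine would involve speech recognition and selection among recorded clips.

```python
# Hypothetical sketch of selecting a character profile from a unique
# identifier before generating a response. Identifiers and profile
# contents are illustrative only.

CHARACTER_PROFILES = {
    "ID-130": {"character": "Action Hero", "voice": "hero_voice"},
    "ID-132": {"character": "Sidekick", "voice": "sidekick_voice"},
}

def select_profile(unique_identifier: str) -> dict:
    profile = CHARACTER_PROFILES.get(unique_identifier)
    if profile is None:
        raise KeyError(f"no character profile for {unique_identifier}")
    return profile

def generate_response(unique_identifier: str, voice_prompt: str) -> str:
    profile = select_profile(unique_identifier)
    # A real engine would analyze the words of the prompt and return a
    # recorded clip in the character's voice; a tagged string stands in.
    return f"[{profile['voice']}] response to: {voice_prompt}"

print(generate_response("ID-130", "tell me a joke"))
# -> [hero_voice] response to: tell me a joke
```

Because the lookup is keyed on the identifier rather than on the modular device, moving the device into a different object changes which profile answers, which is the reuse behavior described earlier.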
Controller 135 receives the audio response from the interactive engine of server 190 via network 150 and wireless interface 128. In accordance with at least one embodiment, controller 135 is configured to drive speaker 122 to output the audio response 168 so that the user can hear audio response 168. Audio response 168 may be a sentence or a sound effect that is relevant to a character embodied by object 110.
Controller 135 may be a processor, microprocessor, field-programmable gate array (FPGA), or other programmable logic device. Controller 135 may include internal memory and/or external memory (not illustrated) to store executable instructions, unique identifiers, digital versions of voice prompt(s) 148, and/or digital versions of audio response(s) 168. Storing versions of voice prompt(s) 148 and audio response(s) 168 may include storing in a temporary buffer for streaming purposes rather than long-term storage.
In contexts where two (or more) unique identifiers (e.g. 130 and 132) are proximate to MID 120 or have recently been proximate to MID 120, MID 120 may send both unique identifiers to server 190. Audio response 168 may then be generated based at least in part on a second character profile associated with the second unique identifier. In accordance with at least one embodiment, object 110 and object 115 are related in that they bear a likeness to two characters that are part of a same book, television show, or motion picture. Hence, the interactive engine of server 190 may generate an audio response 168 that accounts for the related nature of objects 110 and 115.
In accordance with at least one embodiment, a first unique identifier is included in a toy and a second unique identifier is included in an accessory related to the toy. The accessory may be any suitable object such as an object associated with the toy from a movie, book, or other media (e.g. luggage, hairbrush, tool, weapon, sword, hammer). In accordance with at least one embodiment, when both the first unique identifier from the toy and the second unique identifier from the accessory are sent to server 190, audio response 168 is generated based at least in part on a second character profile associated with both the first and second unique identifiers. The second character profile may be an expanded version of a first character profile where the second character profile includes audio responses that incorporate the presence of the accessory.
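One way the engine might prefer an expanded profile when two identifiers arrive together is sketched below. The pair-profile table and the identifier "ID-131" for the accessory are hypothetical; the fallback rule (use the expanded profile if one covers all co-present identifiers, otherwise the single profile) is an assumption about one possible design.

```python
# Hypothetical sketch of profile selection when multiple identifiers are
# proximate to the MID. Identifiers and profile names are illustrative.

# Expanded ("second") profiles keyed by the set of co-present identifiers.
PAIR_PROFILES = {
    frozenset({"ID-130", "ID-131"}): "hero_with_hammer",  # toy + accessory
}

# Single-object ("first") profiles.
SINGLE_PROFILES = {"ID-130": "hero", "ID-131": "hammer"}

def pick_profile(identifiers: set) -> str:
    # Prefer a profile that accounts for every identifier sent together.
    expanded = PAIR_PROFILES.get(frozenset(identifiers))
    if expanded is not None:
        return expanded
    # Otherwise fall back to the single-object profile.
    return SINGLE_PROFILES[next(iter(identifiers))]

print(pick_profile({"ID-130", "ID-131"}))  # -> hero_with_hammer
print(pick_profile({"ID-130"}))            # -> hero
```

Using a `frozenset` key makes the lookup order-independent, matching the idea that either object may be read first by the RFID reading module.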
Device 270 may establish a wireless connection 274 directly to MID 220 via a wireless interface (e.g. wireless interface 128) of MID 220. Wireless connection 274 may implement a Bluetooth® or IEEE 802.11 protocol, for example. Wireless connection 274 may be used to configure MID 220 to deliver a customized interactive experience. Device 270 may also establish a wired or wireless connection 276 to network 250 to configure MID 220 to deliver a customized interactive experience, which will be discussed in more detail below. Configuring MID 220 using device 270 may include utilizing a web browser or mobile application.
Executable instructions for interactive engine 300 and audio responses associated with character profile 310 may be stored in memory accessible to server 190 or 290, of
The age of the user of object 110 in
Interpretation engine 560 receives voice prompt 548 from network 150/250. Voice prompt 548 is one example of voice prompts 148 and 348 of
Filter element 504 may influence the responses in a given subset that are available as audio response 568, as discussed with the description associated with filter element 304. Staying with the joke example, jokes that include a certain level of complexity or abstract thinking may be eliminated if the user is under a certain age, for example. Or, if a location is included as a filter element 504, the response subset 520 may be limited to responses that are jokes told in a language that is predominantly used in the location of the MID 220 or device 270 of
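The age and location filtering described above can be sketched as a predicate over a response subset. The joke records, the `min_age` field, and the language tags below are assumptions introduced for illustration; they are not fields defined elsewhere in this description.

```python
# Hypothetical sketch of filter element 504 narrowing a response subset
# (e.g. response subset 520) by user age and location language.

JOKES = [
    {"text": "simple pun", "min_age": 0, "language": "en"},
    {"text": "abstract wordplay", "min_age": 10, "language": "en"},
    {"text": "chiste sencillo", "min_age": 0, "language": "es"},
]

def filter_responses(responses, user_age, location_language):
    # Keep only responses appropriate for the user's age and told in the
    # language predominantly used at the user's location.
    return [
        r for r in responses
        if user_age >= r["min_age"] and r["language"] == location_language
    ]

print([r["text"] for r in filter_responses(JOKES, 6, "en")])
# -> ['simple pun']
```

Additional filter elements (e.g. time of day) could be added as further conjuncts in the same predicate without changing the overall structure.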
Character profile 510 is illustrated as having access to customizer block 580. Custom data 506 populates customizer block 580 with user inputs to interactive engine 500. In accordance with at least one embodiment, the user or parent/guardian/caretaker can type the user's name into the graphical user interface of device 270 in
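The substitution of user-supplied custom data (such as the user's name) into a response could work as sketched below. The template syntax and field names are assumptions; the description does not specify how customizer block 580 merges custom data 506 into a response.

```python
# Hypothetical sketch of customizer block 580: custom data entered through
# a graphical user interface is substituted into a response template.

def customize(template: str, custom_data: dict) -> str:
    # Substitute each {field} placeholder with the matching custom value.
    return template.format(**custom_data)

print(customize("Hello {name}, ready for an adventure?", {"name": "Ben"}))
# -> Hello Ben, ready for an adventure?
```

In a voice-based system the substituted text would then be rendered in the character's voice, for example by splicing a recorded or synthesized name into a pre-recorded phrase.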
Referring back to
Some or all of the process 600 (or any other processes described herein, or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications). In accordance with at least one embodiment, the process 600 of
In process block 602, a voice prompt (e.g. voice prompt 148) is received. The voice prompt may be received by a microphone (e.g. microphone 124) included with MID 120. The voice prompt is sent to the server in process block 604. A unique identifier (e.g. 130 or 132) is received from an RFID tag in process block 606. The unique identifier may be received by an RFID reader such as RFID reader 126 of
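The client-side flow of process 600 can be sketched end to end with stubbed hardware and network, as below. Every class and function name here is illustrative; this is not an API of the MID, and the stub server's canned response stands in for an audio response generated from a character profile.

```python
# Hypothetical, stubbed sketch of the MID-side flow of process 600.

class StubServer:
    """Stands in for server 190; records what the MID sends it."""

    def __init__(self):
        self.received = []

    def send_voice_prompt(self, prompt):
        self.received.append(("prompt", prompt))

    def send_identifier(self, identifier):
        self.received.append(("id", identifier))

    def receive_response(self):
        # A real server would generate this from the character profile
        # associated with the identifier it received.
        return "audio-response-bytes"


def run_interaction(record_prompt, read_tag, server, play):
    voice_prompt = record_prompt()           # process block 602: receive prompt
    server.send_voice_prompt(voice_prompt)   # process block 604: send to server
    unique_id = read_tag()                   # process block 606: read RFID tag
    server.send_identifier(unique_id)        # send the unique identifier
    play(server.receive_response())          # play the returned audio response


played = []
server = StubServer()
run_interaction(lambda: "tell me a joke",   # stands in for microphone 124
                lambda: "ID-130",           # stands in for RFID reader 126
                server,
                played.append)              # stands in for speaker 122
print(played)  # -> ['audio-response-bytes']
```

Note that the ordering shown (prompt first, identifier second) follows the order of the process blocks as listed; as the description notes elsewhere, the identifier may instead be sent before the prompt.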
Some or all of the process 700 (or any other processes described herein, or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications). In accordance with at least one embodiment, the process 700 of
In process block 702, a voice prompt (e.g. 148) and a unique identifier (e.g. 130) are received. The voice prompt and the unique identifier may be sent by MID 120 of
The illustrative environment includes at least one application server 808 and a data store 810. Interactive engines 300, 400, and 500 may be implemented by application server 808. It should be understood that there can be several application servers, layers, or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”) or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 802 and the application server 808, can be handled by the Web server. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
The data store 810 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 812 and user information 816, which can be used to serve content for the production side. The data store also is shown to include a mechanism for storing log data 814, which can be used for reporting, analysis or other such purposes. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 810. The data store 810 is operable, through logic associated therewith, to receive instructions from the application server 808 and obtain, update or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type.
Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
The environment in accordance with at least one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in
The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), Open System Interconnection (“OSI”), File Transfer Protocol (“FTP”), Universal Plug and Play (“UPnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof.
In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including Hypertext Transfer Protocol (“HTTP”) servers, FTP servers, Common Gateway Interface (“CGI”) servers, data servers, Java servers and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM®.
The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU”), at least one input device (e.g., a mouse, keyboard, controller, touch screen or keypad) and at least one output device (e.g., a display device, printer or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.
Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.) and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.
Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
All references, including publications, patent applications and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
Claims
1. A system for interactive toys, the system comprising:
- a toy including a radio-frequency identification (RFID) tag attached with the toy, wherein the toy bears a likeness of a character; and
- a modular interaction device dimensioned to be inserted into the toy, wherein the toy is configured at least to accept the modular interaction device, the modular interaction device including a wireless interface, an RFID reading module, a microphone, and a speaker, wherein the modular interaction device is configured to, at least: receive, with the microphone, a voice prompt from a user of the toy; send, with the wireless interface, the voice prompt to a server; receive, with the RFID reading module, a unique identifier from the RFID tag; send, with the wireless interface, the unique identifier to the server; receive, with the wireless interface, an audio response from the server, wherein the audio response is generated by the server based at least in part on a character profile associated with the unique identifier and the character, and wherein the audio response is in a voice specific to the character and is responsive to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt; and output the audio response through the speaker.
2. The system of claim 1, wherein the modular interaction device is further configured to:
- receive, with the RFID reading module, a second unique identifier from a second RFID tag attached to a second toy of a second character; and
- send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.
3. The system of claim 1, wherein the modular interaction device is further configured to:
- receive, with the RFID reading module, a second unique identifier from a second RFID tag attached with an accessory relating to the character; and
- send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.
4. The system of claim 1, wherein the voice prompt includes an activation phrase or word, and wherein the activation phrase or word marks the beginning of the voice prompt.
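The interaction loop recited in claim 1 can be modeled end to end in software. The following is a minimal illustrative sketch, not an implementation from the application: the names `CHARACTER_PROFILES`, `server_respond`, and `interaction_cycle`, and the use of a plain dictionary keyed by the tag identifier, are assumptions introduced here for clarity.

```python
# Hypothetical model of the claim-1 flow: the device forwards the RFID
# unique identifier and the microphone-captured voice prompt to a server,
# and the server generates a response in the identified character's voice.

CHARACTER_PROFILES = {
    # Assumed structure: unique identifier -> character profile.
    "tag-001": {"character": "Pirate Captain", "voice": "pirate"},
}

def server_respond(unique_id, voice_prompt):
    """Simulates the server: selects the character profile associated with
    the unique identifier and generates a response, in that character's
    voice, related to the subject matter of the prompt."""
    profile = CHARACTER_PROFILES[unique_id]
    # A real server would run speech recognition and language analysis on
    # the prompt; here the response simply echoes the prompt's topic.
    return f"[{profile['voice']} voice] Ye asked about: {voice_prompt}"

def interaction_cycle(unique_id, voice_prompt):
    """Models the modular interaction device: send the identifier and
    prompt, then return the audio response for output on the speaker."""
    return server_respond(unique_id, voice_prompt)

print(interaction_cycle("tag-001", "the weather"))
```

The point of the sketch is the ordering of the claimed operations: the identifier and the prompt both reach the server before any response is generated, so the response can depend on both.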
5. A computer-implemented method, comprising:
- receiving, by an audio input device of a modular device associated with an object, a voice prompt;
- sending at least a portion of the voice prompt to a server;
- receiving, with a wireless identification reading module of the modular device, a unique identifier from a wireless identification tag attached with the object;
- sending the unique identifier to the server;
- receiving, by the modular device, an audio response from the server, wherein the audio response is generated based at least in part on a character profile associated with the unique identifier, and wherein the audio response is in a voice associated with the character profile and responsive to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt; and
- outputting the audio response by an audio output device of the modular device.
6. The method of claim 5, further comprising:
- receiving, with the wireless identification reading module of the modular device, a second unique identifier from a second wireless identification tag attached with a second object; and
- sending the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.
7. The method of claim 5, wherein the unique identifier and the at least a portion of the voice prompt are sent to the server at substantially a same time that is subsequent to receiving the unique identifier and the at least a portion of the voice prompt.
8. The method of claim 5, wherein sending the unique identifier and the at least a portion of the voice prompt to the server includes transmitting the unique identifier and the at least a portion of the voice prompt over a wireless interface of the modular device, and wherein receiving the audio response from the server includes receiving the audio response from the server over the wireless interface of the modular device.
9. (canceled)
10. The method of claim 5, further comprising:
- receiving at least one filter element, wherein the at least one filter element causes customization of at least one audio response of an array of audio responses available in the character profile, the audio response selected from the array of audio responses.
11. The method of claim 10, wherein the at least one filter element includes at least one of a location of the modular device or an age of a user of the object.
12. The method of claim 10, wherein the at least one filter element includes an ambient factor derived from a location of the modular device.
13. The method of claim 5, further comprising transmitting, with the wireless identification reading module, an interrogation signal that activates the wireless identification tag.
14. The method of claim 5, wherein the audio response is responsive to the voice prompt by the audio response being relevant to a subject matter of words included in the voice prompt.
15. The method of claim 5, wherein the object is a toy, the method further comprising:
- receiving, with the wireless identification reading module, a second unique identifier from a second wireless identification tag attached with an accessory relating to the toy; and
- sending the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.
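Claims 10 through 12 recite filter elements, such as a location of the modular device or an age of the user, that customize which audio response is selected from an array of responses available in the character profile. A minimal selection sketch follows; the `PROFILE_RESPONSES` structure and the `min_age`/`region` fields are assumptions chosen for illustration, not details from the application.

```python
# Illustrative filter-based selection (claims 10-12): each candidate
# response in the character profile carries constraints, and the filter
# elements received by the device determine which candidate is chosen.

PROFILE_RESPONSES = [
    {"text": "Let's go to the beach!", "min_age": 0, "region": "coastal"},
    {"text": "Let's build a snow fort!", "min_age": 5, "region": "northern"},
]

def select_response(responses, filters):
    """Return the first response whose constraints match the filter
    elements; fall back to the first response if none match."""
    for r in responses:
        if filters.get("age", 0) >= r["min_age"] and \
           filters.get("region") == r["region"]:
            return r["text"]
    return responses[0]["text"]

print(select_response(PROFILE_RESPONSES, {"age": 7, "region": "northern"}))
```

Under this scheme the same toy gives different responses in different locations or for users of different ages, which is the customization the filter-element claims describe.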
16. A system comprising:
- a modular device dimensioned to be inserted into an item, the modular device including a wireless identification reader and a wireless interface, wherein the modular device is configured to, at least:
  - receive a voice prompt;
  - transmit, with the wireless identification reader, an interrogation signal;
  - receive a unique identifier from a wireless identification tag associated with the item in response to the wireless identification reader transmitting the interrogation signal;
  - send, with the wireless interface, the unique identifier and the voice prompt to a server; and
  - receive, for presentation to a user, with the wireless interface from the server, an audio response generated by the server based at least in part on a character profile associated with the unique identifier and the item, and wherein the audio response is in a voice associated with the character profile and relevant to the voice prompt such that the audio response comprises language related to a subject matter of words included in the voice prompt.
17. The system of claim 16, wherein the modular device is further configured to:
- receive, with the wireless identification reader, a second unique identifier from a second wireless identification tag; and
- send, with the wireless interface, the second unique identifier to the server, wherein the audio response is generated based at least in part on a second character profile associated with both the unique identifier and the second unique identifier.
18. The system of claim 16, wherein the voice prompt includes an activation phrase or word, and wherein the activation phrase or word marks the beginning of the voice prompt.
19. The system of claim 16, wherein the wireless identification tag is attached with the item bearing a likeness of a character, and wherein the audio response is in a voice specific to the character.
20. The system of claim 16, wherein the wireless identification reader includes a radio-frequency identification (RFID) reader, and wherein the wireless identification tag includes an RFID tag.
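The two-tag claims (claims 2, 6, and 17) describe an audio response generated from a second character profile associated with both unique identifiers, so that sensing a second toy or accessory changes the interactive experience. One way to sketch that lookup is with a profile table keyed by the set of detected identifiers; the `frozenset` key and the profile names below are assumptions for illustration only.

```python
# Sketch of the multi-tag behavior: when the interrogation signal returns
# two identifiers, a combined profile keyed by the pair takes precedence
# over the single-tag character profile.

PAIR_PROFILES = {
    frozenset({"tag-001", "tag-002"}): "pirate-and-parrot dialogue",
}
SINGLE_PROFILES = {"tag-001": "pirate monologue"}

def choose_profile(tag_ids):
    """Prefer a profile associated with all detected identifiers; fall
    back to the first tag's individual character profile."""
    key = frozenset(tag_ids)
    if key in PAIR_PROFILES:
        return PAIR_PROFILES[key]
    return SINGLE_PROFILES[tag_ids[0]]

print(choose_profile(["tag-001"]))
print(choose_profile(["tag-001", "tag-002"]))
```

Because the key is order-insensitive, the combined profile is selected regardless of which tag the reader detects first.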
21. The system of claim 1, further comprising a user device separate from the modular interaction device configured to, at least:
- receive custom data from the user; and
- transmit the custom data to at least one of the modular interaction device or the server, such that the audio response is generated to include a presentation of the custom data.
Type: Application
Filed: Dec 23, 2015
Publication Date: Sep 27, 2018
Inventors: Peter Milos Soudek (Seattle, WA), Hau Wing Calvin Kwok (Seattle, WA), Benjamin Guy Hills (Kirkland, WA)
Application Number: 14/757,823