INTERACTIVE TOY DRESSING SYSTEM

Systems and methods for providing an interactive toy dressing system. The system is designed not only to provide coordinating toy outfit purchasing advice, but also to provide a play environment which assists users in determining which coordinating clothing options are available and locating them for purchase.

Description
CROSS REFERENCE TO RELATED APPLICATION(S)

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/697,530 filed Sep. 6, 2012, the entire disclosure of which is incorporated herein by reference.

BACKGROUND

1. Field of the Invention

This disclosure relates to systems and methods for providing an interactive matching or dressing system, particularly for providing clothing for plush toys, that utilizes an interactive interface to provide outfit coordination and matching advice.

2. Description of Related Art

As the world has become more advanced, virtually every aspect of human existence has incorporated new technologies. Toys have been around for much of recorded human history; they are generally objects used for play to train children for future life and are often designed to be versions of objects used by adults. Because toys are often designed to teach intended behavior, toys have changed as the world has changed, allowing children to play with objects that have become commonplace.

In the last 15 years, the manufacturing process of toys has become additionally personalized through the advent of toy stores where the toy is not just purchased off a rack, but is, at least partially, manufactured by the user himself or herself. One such type of store is the Build-a-Bear Workshop™ store, where a person can construct a plush toy from various provided components such as a shell and stuffing. Part of the entertainment value of the toy is the ability of the user to be part of the toy's process of manufacture. In these types of on-demand and self-service manufacturing methodologies, the user is present for the toy's creation and construction, and the toy is often more personalized because the user has made personal decisions about its design, including what additional components or functionality it includes.

In effect, the toy becomes more capable of reflecting the toy's owner because its owner is also, in many respects, its creator and builder. This is beneficial both when children make customized toys for themselves and when such toys are given to a child. In the latter case, the personalization often gives the toy a stronger personal connection, as the child associates it with the person who made and gave the toy. Further, the very process of building a toy is “play” which emulates modern manufacturing and construction techniques and can provide entertainment and learning as well. Toy stores are also increasingly becoming play destinations where the toy is viewed as a “friend” or “companion” allowed to take part in the child's activities instead of as an inanimate object.

Personalization and anthropomorphism of toys by children as part of their play is not new. The “reality” of toys as things other than inanimate objects has been fertile ground for children's literature and entertainment for many years and virtually every child, at some time, sees a toy as more than just an object. It has long been recognized that children have a more difficult time separating fantasy from reality than adults and, therefore, such anthropomorphism is easily understood. Further, anthropomorphism of toys can allow a parent to use a child's imagination to assist in dealing with problems created by a child's imagination. Child-rearing books are filled with examples of using a child's plush toy, and a child's imagination, as a powerful hero that can defend the child from a child's imagined “monsters under the bed.”

Particularly when it comes to plush toys, the desire of children to anthropomorphize the toys can be strong. Such toys are very often comfort objects for children and are often used to calm and reassure them. A teddy bear going through an X-ray scanner ahead of a child is a common image. Thus, there is often a natural tendency for plush toys to be seen by children as real “people.” This particular anthropomorphism, and the specificity with which it is associated with a particular plush toy, leads to a need for play stations and a toy assembly workshop where the particular plush toy, regardless of its construction, is able to interact with the play environment as a “person” instead of a “thing”.

Part of the anthropomorphism of a toy, and particularly a plush toy, is to dress it. A toy such as a doll which is representative of a human figure is logically dressed as, in most cultures, humans are regularly clothed and the doll is supposed to represent a human. Learning to dress and feed an infant by dressing and feeding a doll designed to represent an infant fits the purpose of the toy as an object for children to learn behaviors they will use as adults. Even adults learning to care for infants are regularly provided with an infant doll in childbirth classes to get used to how to hold and carry an infant. Similarly, a doll representing a child or adult in specific situations (for example, a soldier, ballerina, or fireman) helps teach children where components of clothing are worn, specifics and uses of different clothing types, how to recognize individuals in certain professions, and how to coordinate outfits. This all fits with the educational purpose of toys to teach certain forms of behavior.

There is not so clear an educational reason, however, for clothing a plush toy animal (such as, but not limited to, a teddy bear, dog, cat, or monkey). While these toys often have a vaguely human shape and are often quite different in appearance from their animal-world counterparts, the animals they represent generally do not wear clothes in any circumstance outside of the toy realm. Further, while the anatomic similarity between many teddy bears and a human is undeniable, the ability of a person to correctly clothe a teddy bear provides only some of the teaching benefit of clothing a human doll.

Instead of being a training tool for correct dressing and child care, the clothing of plush toys often serves to give them a more individualized personality, to reflect the personality of the toy's owner, and to increase the anthropomorphism of the toy by the owner. It is easier to see a teddy bear as a “real” superhero when it is dressed like a superhero with whom the child is familiar from comic books or television programs. Similarly, clothing a toy can allow the toy to play “dress up” with the child. The plush toy can also take on a particular role based upon its clothing, allowing it to be a playmate. For example, a teddy bear can be dressed in a dress for a tea party or dressed as a fireman to rescue a second plush animal from a burning bookcase.

The purchase of clothing for a toy can, however, be a challenge for a parent, guardian, or grandparent. Many times, clothing for plush animals is licensed so that it allows the toy to specifically resemble a chosen character from television, movies, or books. Further, many clothing items are designed to go together to create a coherent outfit. However, a parent may not be entirely familiar with the subject matter being licensed or with how the outfits are supposed to coordinate. The parent may not know that the child's desired character is the one that wears red shoes, not the one that wears blue shoes, and may inadvertently purchase parts of two different outfits when the child is interested in a particular one. As any parent knows, such a fashion faux pas can result in a great deal of drama.

Further, even if the parent knows what they are looking for, or the child is present and can make sure there are no major mistakes from misunderstanding a desired outfit, locating the desired clothing on a plush toy clothing rack can in and of itself be difficult. Plush toy clothing today often does not involve a single item but, much like real clothing, involves an entire outfit and coordinating accessories that are sold individually instead of as a complete package. This provides for multiple price points and the ability to mix and match if the child desires, but it can increase the chance that a coordinating item is not readily found and can frustrate the child and parent trying to find items that match.

Thus, even if a parent has the shirt, they may not be able to find matching pants on the rack. As plush toy clothing becomes more sophisticated, both the number of components and the number of possible ways to coordinate them increase, making it all the more difficult to figure out coordinating outfits.

SUMMARY

Because of these and other problems in the art, described herein, among other things, are systems and methods for providing an interactive toy dressing system. The system is designed not only to provide coordinating toy outfit purchasing advice, but also to provide a play environment which assists users in determining which coordinating clothing options are available and locating them for purchase. In this way, a user can be pleased that the outfit they selected looks coherent, coordinated, and stylish.

There is described herein, an interactive dressing system comprising: a screen; a scanner; and a plush toy; wherein, the scanner identifies the plush toy; and wherein, based on the identification, the screen displays a corresponding outfit for the plush toy.

In an embodiment of the interactive dressing system, the scanner comprises a bar code reader.

In an embodiment of the interactive dressing system, the scanner identifies the plush toy by scanning a bar code on a tag attached to the plush toy.

In an embodiment of the interactive dressing system, the screen comprises a touchscreen.

In an embodiment of the interactive dressing system, the screen display comprises one of a plurality of displays which may be selected via the touchscreen.

In an embodiment of the interactive dressing system, the screen display displays the plush toy in the outfit.

In an embodiment, the interactive dressing system further comprises a housing for the screen.

In an embodiment of the interactive dressing system, the housing is shaped as an armoire.

In an embodiment of the interactive dressing system, the screen is part of a mobile phone.

In an embodiment of the interactive dressing system, the scanner comprises a camera.

There is also described herein an interactive dressing system comprising: a screen; a scanner; and an initial item of clothing for a plush toy; wherein, the scanner identifies the item of clothing; and wherein, based on the identification, the screen displays at least one additional item of clothing.

In an embodiment of the interactive dressing system, the at least one additional item of clothing is part of an outfit of which the initial item of clothing is also a part.

In an embodiment of the interactive dressing system, the at least one additional item of clothing is selected to be displayed because the at least one additional item of clothing has been recently sold with the initial item of clothing.

In an embodiment of the interactive dressing system, the at least one additional item of clothing and the initial item of clothing are displayed on a plush toy.

In an embodiment, the interactive dressing system further comprises: a plush toy; wherein the plush toy is also identified by the scanner; and wherein the at least one additional item of clothing and the initial item of clothing are displayed on the plush toy.

In an embodiment of the interactive dressing system, the scanner comprises a bar code reader.

In an embodiment of the interactive dressing system, the scanner identifies the initial item of clothing by scanning a bar code on a tag attached to the initial item of clothing.

In an embodiment of the interactive dressing system, the screen comprises a touchscreen.

In an embodiment of the interactive dressing system, the additional item of clothing can be purchased from the screen.

In an embodiment of the interactive dressing system, the additional item of clothing can be identified in an electronic communication sent from the screen.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the embodiments described herein and to show more clearly how they may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings which show at least one exemplary embodiment.

FIG. 1 provides a perspective view of an embodiment of an interactive toy dressing system.

FIG. 2 provides an embodiment of a screenshot of a model toy dressed in a sample outfit.

FIG. 3 provides an embodiment of a screenshot of a scroll or browse function.

FIG. 4 provides an embodiment of a screenshot of an outfit selection function.

FIG. 5 provides an embodiment of a screenshot of a view of outfit components.

FIG. 6 provides an embodiment of a screenshot of an item selection function.

FIG. 7 provides an embodiment of a screenshot of an item detail screen.

FIG. 8 provides an embodiment of a screenshot of an item scan match showing a model toy.

FIG. 9 provides an embodiment of a screenshot of an outfit selection function coming from an item scan match.

FIG. 10 provides an embodiment of a screenshot of outfit components matching a scanned item.

FIG. 11 provides an embodiment of a screenshot of an item detail screen including a future recall section.

FIG. 12 provides an embodiment of a screenshot of a coordinating outfit selection from a toy scan.

FIG. 13 provides an embodiment of a printout of a personalized clothing wish list.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1 provides an embodiment of a dressing system (100) which is designed to resemble an armoire or other piece of furniture where clothing is traditionally stored and selected. This is not required, but can assist in the play value of the dressing system (100) by linking it to fashion and clothing. The dressing system (100) will generally be positioned in a retail environment in close proximity to racks of clothing for toys. This may be as part of an interactive toy assembly workshop such as, but not limited to, that described in U.S. Provisional Patent Application Ser. No. 61/684,420, the entire disclosure of which is herein incorporated by reference, or may be part of a more traditional retail store.

The dressing system (100) will generally comprise a computer having various pieces of computer hardware including, but not limited to, digital processors, display devices, input devices, local storage, and communication hardware, which hardware is effectively concealed in the station to enhance seamless data collection and to eliminate the need to utilize traditional computer interface tools to the extent possible. The computer at the dressing system (100) may be in communication with other stations in the workshop or retail environment and may be in communication with remote computing tools such as storage devices and more powerful network machines in a manner well known to those of ordinary skill. It may also be in communication with other networks such as, but not limited to, the Internet. These types of computer networks are well understood by those of ordinary skill. In an alternative embodiment, the dressing system computer can comprise a mobile device or client computer, with the dressing system being provided in software functionality as a mobile “app” or Internet-based application.

The embodiment of the dressing system (100) shown in FIG. 1 (which is designed for use in a retail environment) includes functional components of an armoire including a hook (103) which may be used to store accessories a user has already picked out prior to purchase, and a mirror (105). These are not required but can be useful to provide for an enhanced reality experience where the digital content provided by the dressing system (100) is integrated with “hands on” components. In this case, the mirror (105), for example, can be used to view the plush toy (53) after it is dressed, in the same manner as a dressing room, and the hook can be used to hold hangers and related items while the plush toy (53) is dressed so it is not necessary for the user to place these back on clothing racks.

The dressing system (100) will generally comprise a screen (101), which, in the depicted embodiment, is a touch screen to provide for a simplified digital interface and can also include speakers to provide for sound. It is preferred that the dressing system (100) have content running at all times on the screen (101) to provide for a pleasant appearance and to eliminate the appearance of the screen (101) as a “black void” when not in use. Further, this can enhance the recognition that the touchscreen (101) is an interface device. The content presented may be simple digital advertising, may provide for a welcome type of screen, may provide a sampling of functional screens, or may provide a static display indicative of the object that the dressing system (100) represents.

The dressing system (100) may include a scanner (307) for reading a machine readable indicia such as, but not limited to, a standard 2-D bar code, a 3-D bar code, a QR code, or any other machine readable code. The scanner (307) can be used to allow the dressing system (100) computer, and any associated network, to take in information from a user and to identify an object provided to it in a concise fashion. Other identification methods, such as recognizing images captured through a camera interface as particular objects, could alternatively or additionally be used. This information may come in a variety of forms. In the most basic format, the scanner (307) is capable of reading information about a toy (53) or item of clothing by scanning a hang tag or other indicator, which allows information about the toy (53) to be transferred to the dressing system (100). The scanner (307), however, may also be used for enhanced marketing or user detection.
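
By way of illustration only, the following minimal sketch shows how a scanned hang-tag value might be resolved to a toy or clothing record. The payload format, SKU values, and in-memory catalog are assumptions invented for the example and are not described in this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatalogEntry:
    sku: str
    kind: str   # "toy" or "clothing"
    name: str

# Hypothetical catalog keyed by the value encoded in the bar code / QR code on the hang tag.
CATALOG = {
    "T-1001": CatalogEntry("T-1001", "toy", "Light brown bear"),
    "C-2001": CatalogEntry("C-2001", "clothing", "Stars and stripes headband"),
}

def resolve_scan(payload: str) -> Optional[CatalogEntry]:
    """Map the raw text read from a hang tag to a catalog entry, if known."""
    return CATALOG.get(payload.strip())
```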

In an embodiment, the scanner (307) may be used for marketing purposes apart from toys (53). Users may be provided with marketing mailers, emails, or other content which may include a machine readable indicia. This material could be provided to promote a certain item, as a reward, or to recognize an event (such as, but not limited to, a birthday). The content could be brought in and scanned by the user (e.g., from a paper printout or from a screen display on an item such as, but not limited to, a smartphone).

Upon scanning, the dressing system (100) could present individualized or semi-individualized content. For example, for birthday related content, the dressing system (100) could provide birthday related imagery and wish the user a “happy birthday.” This content is semi-individualized as, while it acknowledges a specific event related to that user, it is not specific to that user, and any user with birthday content could receive the identical message. In personalized content, the specific birthdate or user's name could be displayed so that the message is not the same for all birthdays, but is specific to that user.
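
The distinction between semi-individualized and personalized content could be sketched roughly as follows; the payload field names here are hypothetical.

```python
def birthday_greeting(scan_data: dict) -> str:
    """Return personalized content when user-specific data is present, else semi-individualized content."""
    name = scan_data.get("name")            # assumed to be present only for personalized content
    if name:
        return f"Happy birthday, {name}!"   # personalized: specific to this user
    return "Happy birthday!"                # semi-individualized: identical for every birthday scan
```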

In a still further embodiment, the dressing system (100) could react to a frequent purchaser card being scanned. This could allow specific information about the user to be displayed and the enhanced reality function to interact with the user individually. For example, the user's name could be displayed, or the dressing system could inquire as to whether an outfit is being sought for a particular toy (53) the user previously purchased, or whether items matching a previously purchased clothing item are desired. If the answer is yes, the pre-purchased toy's avatar imagery could be located and used in future queries, or matches could be automatically retrieved.

As indicated, in most cases, the scanner (307) will be used to provide information based upon a user's current intended purchase of clothes for a plush toy (53). As such, the dressing system (100) will generally act on three different potential sources of information. In the first, nothing is scanned (FIGS. 2-7) but the dressing system will still interact with a user. In the second, an article of clothing is scanned (FIGS. 8-11), or otherwise identified to the dressing system (100), which implies that the user is looking for other articles coordinating with that one. In the third mode, a plush toy (53) is scanned (FIG. 12), or otherwise identified to the dressing system (100), implying the user is looking for dressing suggestions for that toy (53). It should be recognized that these three modes are all similar, in that the goal is to help the user obtain a matching coordinated outfit. However, these three modes of operation will be discussed individually as they can provide slightly different forms of information to the user.
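
A sketch of this three-way dispatch is shown below; only the branching follows the description above, while the function names and the scan-record fields are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Scan:
    kind: str   # "toy" or "clothing"
    name: str

def run_model_mode() -> str:
    return "model mode: browse operator-selected toy and outfit images"

def run_clothing_matching_mode(scan: Scan) -> str:
    return f"clothing matching mode: show items coordinating with {scan.name}"

def run_toy_scan_mode(scan: Scan) -> str:
    return f"toy scan mode: suggest outfits for {scan.name}"

def handle_interaction(scan: Optional[Scan]) -> str:
    if scan is None:                        # nothing scanned: default browsing experience
        return run_model_mode()
    if scan.kind == "clothing":             # an article of clothing was scanned
        return run_clothing_matching_mode(scan)
    if scan.kind == "toy":                  # a plush toy was scanned
        return run_toy_scan_mode(scan)
    raise ValueError(f"unrecognized scan kind: {scan.kind}")
```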

The first mode of operation will be where a user has not scanned any items at the dressing system (100) but the dressing system (100) is intended to interact with the user. This mode of operation may be the one used as a default with the dressing system (100) operating in this mode unless and until something is scanned or identified. Alternatively, this mode of operation may initiate when a sleep, or similar, mode is ended, for example, by a user touching the screen (101).

In this mode, which is called “model mode,” the dressing system (100) will generally provide a variety of outfits which can be selected, and on which further information can be obtained, as a general “fashion show”. Generally, the model mode will run with an image of a toy (53) which is dressed in an entire coordinating outfit (55) (a “model”) as shown in FIG. 2. It is important to recognize that, in model mode, the toy images (153) are selected based upon decisions of the operator of the dressing system (100) and not of the user of the dressing system. Thus, the images (153) provided can be of wholly coordinated combinations where both the plush toy (53) and the outfit (55) change as different models are considered. In this arrangement, the model may have no overlap with the specific plush toy (53) for which the user is attempting to locate an outfit (55).

In the embodiment shown in FIG. 2, the outfit (55) and toy (53) making up the image (153) will generally be selected based upon external factors. For example, the display shown would be appropriate in the summertime (as the shirt is short-sleeved and the outfit includes shorts). Further, factors such as external events may also influence the choice of outfit (55) shown. In the case of FIG. 2, the proximity to the games of the XXX Olympiad could potentially influence interest in British themed clothing as could having the dressing system (100) in an area of London frequented by tourists with children.

In the embodiment of FIG. 2, the clothing outfit (55) is shown on a plush toy (53). This provides for a better indication of how the outfit (55) will look in use than simply providing a picture of it. It also provides an indication of the result when the entire outfit is purchased. As no specific plush toy (53) has been specified (scanned) in model mode, the plush toy (53) shown with the clothing (55) will generally be selected from available plush toys (53) and may be selected as it works particularly well with, or coordinates with, the particular outfit (55) the plush toy (53) is shown wearing. The plush toy (53) may also be selected because it is a newly released toy. For example, in FIG. 2, a lighter colored toy (53) works better with this particular outfit (55), which is predominantly dark colored.
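
One way the operator-driven selection of model images described above could work is sketched below: each model image is tagged with the external factors mentioned (season, events, coordinating toy), and the rotation simply filters on the operator's currently active tags. The tag names and model data are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ModelImage:
    outfit: str
    toy: str
    tags: set = field(default_factory=set)

MODELS = [
    ModelImage("London city outfit", "light brown bear", {"summer", "olympics"}),
    ModelImage("Snow day outfit", "polar bear", {"winter"}),
]

def models_for_context(active_tags: set) -> list:
    """Return model images whose tags overlap the operator's active context tags."""
    return [m for m in MODELS if m.tags & active_tags]

# e.g. models_for_context({"summer"}) keeps the London city outfit on the light brown bear.
```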

While a single toy (53) and outfit (55) are shown on the screen in FIG. 2, the user is intended to advance through or browse different screens, such as by swiping the screen (101), to obtain another image (153) of an outfit (55) and plush toy (53) combination. An embodiment of such advancing or browsing is shown in FIG. 3. This model mode allows the dressing system (100) to operate in the form of a fashion catalog or fashion show where clothing is provided on models which serve to enhance the clothing's appearance, but which models do not necessarily reflect the way it will appear when placed on a different plush toy (53).

Once a user has found a particular image (153) they find appealing and an outfit (55) they want more information on, the user can select that particular image as shown in FIG. 4, in this case by tapping the screen. Generally, the user would select the image (153) because it includes an outfit (55) they are interested in acquiring. Upon the selection being processed, the screenshot of FIG. 5 can be provided. The screenshot of FIG. 5 moves from an image (153) which is modeling the clothing to an image (155) which is a component view, or breakdown, of all the clothing items that are part of the selected outfit (55), along with descriptions and pricing information. Descriptions and pricing may not be provided in an alternative embodiment, but may be provided in an embodiment to allow for budgeting.

As the clothing outfit (55) may be seen as a coherent whole based on the display in the model image (153), it should be recognized that the outfit (55) components may be titled to show that they go together. This screen then generally provides a user with the cost of a total outfit, along with the components used to create it. FIG. 5 also includes an avatar image (161) of a toy. In this embodiment, the avatar image (161) shows the same toy (53) and clothing (55) as the model image (153) that was selected. Thus, the user can see how the particular articles of clothing are arranged on the model.
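
A minimal sketch of the component breakdown of FIG. 5 follows, assuming an outfit is simply a titled list of priced items whose prices the screen totals for budgeting; the item names and prices are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ClothingItem:
    name: str
    price: float

@dataclass
class Outfit:
    title: str
    items: list

    def total(self) -> float:
        """Sum the per-item prices shown alongside the component descriptions."""
        return sum(item.price for item in self.items)

london = Outfit("London City", [ClothingItem("Union Jack tee", 8.50),
                                ClothingItem("Khaki shorts", 7.00),
                                ClothingItem("Sneakers", 6.50)])
# london.total() -> 22.0, displayed with the individual components that make up the outfit.
```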

In order to get more detail on any particular clothing item, a user may indicate a particular article of clothing, for example, indicating the shorts (166) shown in FIG. 6, and that indication can load a detail screen (157) with more information about that particular item (166) as shown in FIG. 7. When a particular item (166) has been selected, additional functionality may be provided including a future recall option as discussed in conjunction with FIG. 11. In an alternative embodiment, location information for that particular item may be provided or an online electronic ordering system may be provided.

In an embodiment, once a particular item has been selected, the dressing system (100) can interact with inventory control or other software to provide an indication of where an item is. For example, in conjunction with the particular shorts (166) selected, the system may indicate a particular location in the store where the shorts (166) can be located. For example, if the clothing is arranged on a vertical wall display in a grid pattern, an alphanumeric indicator of the grid square may be provided. Alternatively, a particular header may be indicated under which the shorts appear. In a further embodiment, if the store currently does not have any of the shorts (166) available, the system may indicate that the item is out of stock and may offer the user an alternative item the store does have which can work as a substitute. Alternatively, the system may provide the user with information for ordering the item online or obtaining it from an alternative store location.
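
The inventory interaction described above might look roughly like the following sketch; the grid-location scheme, stock table, and substitute mapping are assumptions for the example rather than anything specified here.

```python
# Hypothetical in-store inventory keyed by SKU, with a wall-grid location and stock count.
INVENTORY = {
    "khaki-shorts": {"location": "B3", "in_stock": 0},
    "denim-shorts": {"location": "B4", "in_stock": 12},
}
SUBSTITUTES = {"khaki-shorts": "denim-shorts"}

def locate_item(sku: str) -> str:
    """Tell the user where an item is, or offer a substitute / online ordering if out of stock."""
    record = INVENTORY.get(sku)
    if record is None:
        return "Item not carried at this store; see online ordering."
    if record["in_stock"] > 0:
        return f"Available at wall grid {record['location']}."
    alt = SUBSTITUTES.get(sku)
    if alt and INVENTORY.get(alt, {}).get("in_stock", 0) > 0:
        return f"Out of stock; a similar item is available at grid {INVENTORY[alt]['location']}."
    return "Out of stock; available online or at another store location."
```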

As should be apparent from FIGS. 2-7, the dressing system (100) provides a simple and easy way for a user to browse outfits that does not require them to look through disconnected racks of clothing. Instead, the user is provided with model presentations of the clothing which show the clothing as outfits (55) and as it would appear on a plush toy (53). When the user sees a final outfit that they like, the user can easily get more information and understand all of the coordinating pieces that make it up, and can obtain additional information about each of the coordinating pieces so that the user can find them. This can provide a much more pleasant shopping experience than going through racks of disconnected clothing.

In the model mode of operation, the user generally is searching without having already settled on any item, or is not necessarily looking for an outfit (55) that coordinates with the plush toy (53) they have, but instead is browsing available items to see what they like. As such, it provides for a shopping experience that connects the clothing items both to each other and to a particular plush toy (53) in a way that can make the clothing options more appealing.

In the second mode of operation, the user has located a particular clothing item (801) (or any item including, but not limited to, a toy accessory, a toy companion, or a smaller plush animal) and is looking for coordinating items or other items that may be appropriately purchased together for any reason. This mode of operation is called “clothing matching mode” and can occur because the user has already selected a particular item of clothing that they like and want, but needs to determine what other articles go with it. Alternatively, the mode could be used because the user is trying to complete an outfit of which they already have a portion.

In FIG. 8, the user utilizes the scanner (307) to scan a particular clothing item (801). In this case, it is a stars and stripes headband, and an image (803) of the item (801) is shown in conjunction with the scanning. This is generally preferred so the user knows that they scanned the right item, as tags connected to items may become separated or mismatched. In an alternative embodiment, alternative methods for locating a chosen item (801) may be provided instead of just scanning. For example, a user may be provided with a catalog of items on the screen they can scroll through. These may be organized by type (e.g., all headbands are together) so that, if the user has an item which cannot be scanned (e.g., it no longer has a hang tag), they may still be able to locate the item as the initial starting point relatively quickly.

Upon an item (801) being scanned, the dressing system (100), in an embodiment, will locate the outfit that the particular item (801) is a part of and then obtain the model image of it. This is shown in FIG. 8 with the headband being part of a stars and stripes outfit shown in image (253). It should be noted that the image (253) provided in this embodiment can be the same one as is provided as one of the models in the scroll of FIG. 3, or an image of a model that has been discontinued in the model mode. However, in this case, the user need not scroll through other outfits to find this outfit and additional information (such as the header (243) showing that the item is part of this outfit) can be provided. Further, the image (253) need not be the only image provided. If this item goes with multiple outfits (55), or if its corresponding outfit (55) is shown on multiple plush toys (53), all these images (253) may be provided with a scroll function as indicated in the model mode. Thus, this clothing matching mode can be seen as providing a subset of the images (153) of the model mode where the subset is selected based on the item (801) that was scanned. Alternatively, the clothing matching mode can provide images that are retired from active model mode status, or have unique images (253) similar to those of model mode, but specific to this mode.
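
The reverse lookup behind clothing matching mode could be sketched as below: from a scanned item to the outfit(s) it belongs to, and from there to their model images, which become the browsable subset. The outfit table, SKUs, and image names are invented for the example.

```python
OUTFITS = {
    "stars-and-stripes": {"items": {"ss-headband", "ss-dress", "ss-shoes"},
                          "model_images": ["stars_stripes_bear.png"]},
    "london-city":       {"items": {"uk-tee", "khaki-shorts", "sneakers"},
                          "model_images": ["london_bear.png"]},
}

def outfits_containing(item_sku: str) -> list:
    """Return the name of every outfit that includes the scanned item."""
    return [name for name, outfit in OUTFITS.items() if item_sku in outfit["items"]]

# outfits_containing("ss-headband") -> ["stars-and-stripes"]; that outfit's model images
# are then shown instead of the full model-mode scroll.
```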

In an alternative embodiment, the outfit (55) may not be presented with a model or coordinated outfit, but may be chosen based on crowd-sourced information related to sales of the particular item (801). For example, if the user were to pick out the headband item (801) which is a part of a stars and stripes outfit, instead of presenting that particular coordinated outfit, the system may determine that, recently, the particular headband (801) scanned has been increasingly sold with the London city shirt shown in FIG. 2 because customers have been making an “American Olympian” of their own design. In an embodiment, this crowd-sourced connection of outfits can be provided. Similarly to using crowd-sourced information, matching items can be selected based upon marketing objectives, or based on available inventory, to make sure that a user can obtain the piece of clothing they may decide to seek out.
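
One simple way to realize the crowd-sourced suggestion is to count how often other items appear in recent transactions alongside the scanned item and surface the most frequent companions, as in the sketch below; the transaction data is invented for the example.

```python
from collections import Counter

# Hypothetical recent transactions, each represented as the set of SKUs sold together.
RECENT_TRANSACTIONS = [
    {"ss-headband", "london-tee"},
    {"ss-headband", "london-tee", "sneakers"},
    {"ss-headband", "ss-dress"},
]

def co_purchased_with(item_sku: str, top_n: int = 3) -> list:
    """Rank other items by how often they were recently sold with the scanned item."""
    counts = Counter()
    for basket in RECENT_TRANSACTIONS:
        if item_sku in basket:
            counts.update(basket - {item_sku})
    return [sku for sku, _ in counts.most_common(top_n)]

# co_purchased_with("ss-headband") ranks "london-tee" first, so the London shirt could be
# suggested ahead of the item's own coordinated outfit.
```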

Once the outfit has been provided in image (253), the user can again select the outfit (55) as shown in FIG. 9 to obtain additional information about the components. This will pull up the component view of FIG. 10. In an alternative embodiment, the image of FIG. 10 could be provided initially, for example, if there was no model image. As in the image of FIG. 5, the component view again shows the various components of the outfit (55). In this embodiment, however, there are a couple of differences in presentation. In the first instance, this embodiment does not provide pricing and description information, but that is by no means required.

One difference is that, since the screen of FIG. 10 is trying to provide for matching, the scanned object (although part of the outfit) may not be shown. Specifically, FIG. 10 shows only the items of the outfit (shoes (866) and a dress (868)) other than the headband (801) scanned. This assumes that the user likely already has the scanned object in their hand and does not need to locate it. Instead, they are looking for what goes with the item (801) and are not considering the outfit (55) in total. By not showing the headband (801) as a component (although it is still shown as the item (801) scanned), the screen is simplified to provide only the coordinating, or otherwise selected, outfit components as shown in FIG. 10.

While the embodiment of FIG. 10 provides for only a single coordinating outfit, it should be understood that any item (801) may actually have a number of corresponding outfits. Should this be the case, not showing the selected item in the component item display can provide for a better understanding of what may coordinate. In this scenario, the components of outfits may be more “mix-and-match” where there is not a single coordinating outfit, but a number of coordinating outfits formed from similar components.

As in FIG. 6, one of the outfit components may again be selected to provide further information, as shown in FIG. 11 with the dress of FIG. 10 having been selected. FIG. 11 also provides for additional screen displays related to the future recall of information. In this case, the user has selected the “email this screen” portion of the display in FIG. 11, which has opened a screen for the entry of email information (901) via a virtual keyboard (903). An option to print the screen (905) is also provided. An example of such a printed screen is provided in FIG. 13. The printout of FIG. 13 is designed to look like a personalized “wish list” and includes pricing information (1301), a picture of the bear this outfit is for (and its name) (1303), an indication of the bear's owner (1305), and the location (1307) where the wish list was printed or where the items would be available.

This future recall can be useful in a variety of circumstances. For example, the printout could be printed out and carried around by the user in the store (or shown to an employee) to use as a picture reference to locate the clothing item. Alternatively, the user could bring in a piece of clothing that they have and determine the rest of a coordinating outfit, printing out all the remaining components to use as a potential gift list for a relative. The specific clothing items could also be sent via email to a different person who may be interested in purchasing the clothing items as a gift.

In this way, the information on the specific clothing item can be sent to the appropriate person and that person can order it directly from an online ordering system, or can print the email and bring it to the store to make sure they get the correct item. In order to facilitate the item being correctly purchased, the email could include a machine readable indicia which is suitable for use with the scanner (307) so that, if a printout was brought in, the printout could be scanned into the dressing system (100) and the specific item be shown on the screen along with associated purchase and/or location information as discussed previously.
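
A minimal sketch of the “email this screen” recall feature, using only the Python standard library, is shown below. The message fields and the convention of reusing the SKU as the value a machine readable code would encode are assumptions; actually rendering a bar code image and sending the message are outside the sketch.

```python
from email.message import EmailMessage

def build_wishlist_email(recipient: str, item_name: str, item_sku: str) -> EmailMessage:
    """Build an email describing a selected item so the recipient can locate or order it."""
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = f"Wish list item: {item_name}"
    # The SKU doubles as the value a machine readable code would encode, so a printout of
    # this email could later be scanned back into the dressing system.
    msg.set_content(f"{item_name}\nScan code value: {item_sku}\n"
                    "Bring this printout to the store or order online.")
    return msg
```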

FIG. 12 provides for an alternative mode also based on a scan. However, in this mode, the scan is of a plush toy (53) instead of an article of clothing, and the mode is thus referred to as a “toy scan mode.” Based on this scan, the system presents at least one outfit that is designed to coordinate with this toy (53). In the depicted image, there are four outfits pictured which are seen to coordinate with this toy (53). As discussed above, the outfits may be selected based on a variety of criteria including, but not limited to, crowd-sourced information, purposeful marketing, or product availability.

In order to show another embodiment of the display, each of the outfits in FIG. 12 does not include all of the accessories for that outfit (55), but only a basic indication of the main components of the outfit (55). As has been indicated, the specific information displayed on each screen is interchangeable between modes. Further, the outfits (55) may not be shown on a plush toy (53) but may instead be shown alone. The outfits (55) of the mode of FIG. 12 may be selected for a variety of reasons, including those discussed above in conjunction with a scanned clothing item.

Further, depending on the design of the toy (53), the toy (53) may only have certain outfits that are suitable for use with it. For example, a dog toy may not fit into clothes designed for a bear toy. Thus, the initial selection may generally be for outfits (55) suitable for the specific type of toy (53). A second level of decision may be made based on outfits (55) that coordinate with the toy (53) or where there are pictures available of that particular toy (53) in a particular outfit. In this way, the standard image of the toy (53) in an outfit (55) can be presented if the user selects any of the outfits (55).
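
A rough sketch of this two-level selection follows: first keep only outfits cut for the scanned toy's body type, then rank those with imagery of (or tagged as coordinating with) that specific toy ahead of the rest. The table of fits, body types, and toy identifiers are invented for the example.

```python
OUTFIT_FITS = {
    "stars-and-stripes": {"fits": {"bear"}, "imagery_for": {"floral-bear", "brown-bear"}},
    "fire-chief":        {"fits": {"bear", "dog"}, "imagery_for": {"dalmatian-dog"}},
}

def outfits_for_toy(toy_id: str, body_type: str) -> list:
    """Filter by body type, then prefer outfits with existing imagery of this exact toy."""
    suitable = [name for name, o in OUTFIT_FITS.items() if body_type in o["fits"]]
    return sorted(suitable, key=lambda name: toy_id not in OUTFIT_FITS[name]["imagery_for"])

# outfits_for_toy("floral-bear", "bear") -> ["stars-and-stripes", "fire-chief"]:
# both fit a bear, but only the first has pictures of this particular toy.
```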

In a still further embodiment, other matching criteria can be used. For example, as can be seen in FIG. 12, the bear selected has a floral pattern. As such, it is more likely that this plush toy (53) has been selected by a female child, and therefore the matches depicted are generally representative of female clothing. The assumption that the plush toy (53) has been given a female gender (which is more likely if a female child selected it, but is also possible if a male child had selected this toy) is not necessarily correct, but as an initial guide for selecting which outfits are pictured, it may be useful in certain embodiments. Further, the gender of the owner of the toy (53) can be obtained from other sources, including over the network of which the dressing system (100) is a part, to verify the assumption.

It should be recognized that such assumed criteria may only be partially applied in an embodiment. For example, in FIG. 12, instead of four generally female outfits, three generally female outfits and one gender-neutral or male outfit for the toy may be provided. If the male outfit were selected, further alternatives comporting with a male gender may be provided. Also, as in other modes, the ability to browse through multiple screens of information is provided.

When a user selects one of the outfits (55) of FIG. 12, the dressing system (100) may provide a screen similar to that of FIG. 2 which may show the specific scanned toy (53) in the outfit (55), or a model toy (53) in the outfit (55), and proceed in accordance with the selection process of the model mode. Alternatively, selection of an outfit (55) may provide an indication of matching accessories in accordance with the clothing matching mode of operation. In a still further embodiment, the selection may result in the system bypassing a presentation of the plush toy (53) in the outfit (55), and go straight to a component display screen as is shown in FIGS. 5 and 11.

In this embodiment, the plush toy avatar (161) provided in the corner of the various screens may be selected to correspond with the scanned toy (53) as opposed to the model toy (53). In this way, the connection between the outfit (55) and the scanned toy (53) may be maintained, even if no imagery exists of this toy (53) in this outfit (55).

While the invention has been disclosed in conjunction with a description of certain embodiments, including those that are currently believed to be the preferred embodiments, the detailed description is intended to be illustrative and should not be understood to limit the scope of the present disclosure. As would be understood by one of ordinary skill in the art, embodiments other than those described in detail herein are encompassed by the present invention. Modifications and variations of the described embodiments may be made without departing from the spirit and scope of the invention.

Claims

1. An interactive dressing system comprising:

a screen;
a scanner; and
a plush toy;
wherein said scanner identifies said plush toy; and
wherein, based on said identification, said screen displays a corresponding outfit for said plush toy.

2. The interactive dressing system of claim 1, wherein said scanner comprises a bar code reader.

3. The interactive dressing system of claim 2, wherein said scanner identifies said plush toy by scanning a bar code on a tag attached to said plush toy.

4. The interactive dressing system of claim 1, wherein said screen comprises a touchscreen.

5. The interactive dressing system of claim 4, wherein said screen comprises one of a plurality of displays which may be selected via said touchscreen.

6. The interactive dressing system of claim 1, wherein said screen displays said plush toy in said outfit.

7. The interactive dressing system of claim 1, further comprising a housing for said screen.

8. The interactive dressing system of claim 7, wherein said housing is shaped as an armoire.

9. The interactive dressing system of claim 1, wherein said screen is part of a mobile phone.

10. The interactive dressing system of claim 1, wherein said scanner comprises a camera.

11. An interactive dressing system comprising:

a screen;
a scanner; and
an initial item of clothing for a plush toy;
wherein, said scanner identifies said item of clothing; and
wherein, based on said identification, said screen displays at least one additional item of clothing.

12. The interactive dressing system of claim 11, wherein said at least one additional item of clothing is part of an outfit of which said initial item of clothing is also a part.

13. The interactive dressing system of claim 11, wherein said at least one additional item of clothing is selected to be displayed because said at least one additional item of clothing has been recently sold with said initial item of clothing.

14. The interactive dressing system of claim 11, wherein said at least one additional item of clothing and said initial item of clothing are displayed on a plush toy.

15. The interactive dressing system of claim 11 further comprising:

a plush toy;
wherein said plush toy is also identified by said scanner; and
wherein said at least one additional item of clothing and said initial item of clothing are displayed on said plush toy.

16. The interactive dressing system of claim 11, wherein said scanner comprises a bar code reader.

17. The interactive dressing system of claim 16, wherein said scanner identifies said initial item of clothing by scanning a bar code on a tag attached to said initial item of clothing.

18. The interactive dressing system of claim 11, wherein said screen comprises a touchscreen.

19. The interactive dressing system of claim 18, wherein said additional item of clothing can be purchased from said screen.

20. The interactive dressing system of claim 18, wherein said additional item of clothing can be identified in an electronic communication sent from said screen.

Patent History
Publication number: 20140061295
Type: Application
Filed: Mar 13, 2013
Publication Date: Mar 6, 2014
Applicant: BUILD-A-BEAR WORKSHOP, INC. (St. Louis, MO)
Inventor: Brandon Elliott (St. Charles, MO)
Application Number: 13/802,166
Classifications
Current U.S. Class: Systems Controlled By Data Bearing Records (235/375)
International Classification: A63H 3/36 (20060101);