MOTION-BASED MEDIA CREATION
A system for motion-based media creation includes an article of footwear or apparel having at least one accelerometer or inertial measurement unit operative to monitor spatial motion of at least a portion of the article of footwear or apparel and generate a data stream indicative of the monitored spatial motion. The system further includes a processor in networked wireless communication with the article of footwear or apparel. The processor is configured to receive the data stream from the article of footwear or apparel; identify at least one motion primitive from the received data stream; and trigger the playback of an audio sample or a visual effect in response to the identified at least one motion primitive.
The present application is a continuation of International Patent Application No. PCT/US2020/061882, filed Nov. 23, 2020, which claims the benefit of priority from U.S. Provisional Patent Application Nos. 62/939,309, filed Nov. 22, 2019, and 63/032,689, filed May 31, 2020, both of which are incorporated by reference in their entirety.
TECHNICAL FIELD

The present disclosure relates to a system for converting a user's real-time motion, as sensed by an electronic article of footwear or apparel, into an audio and/or visual composition.
BACKGROUND

In the music industry, it is common for composers or DJs to layer electronic tracks to produce a musical composition. In creating these compositions, the artist may rely on discrete audio samples that may be triggered by a keyboard or other Musical Instrument Digital Interface (MIDI) controller. While the resulting output may be creatively expressive and may require exceptional skill to arrange, it typically lacks any performance attributes.
When considering physical performances (e.g., a dance performance), an artist may often respond to or perform in synchrony with a musical composition; however, the movement is typically only a choreographed response to the music.
SUMMARY

The present disclosure generally relates to a system that can capture a user's motion and convert the sensed motion into a form of creative expression. The system may include an article of footwear or apparel having at least one accelerometer or inertial measurement unit operative to monitor spatial motion of at least a portion of the article of footwear or apparel and generate a data stream indicative of the monitored spatial motion. The system further includes a processor in networked wireless communication with the article of footwear or apparel. The processor is configured to receive the data stream from the article of footwear or apparel; identify at least one motion primitive from the received data stream; and trigger the playback of an audio sample or a visual effect in response to the identified at least one motion primitive.
The following discussion and accompanying figures disclose a system that uses directly sensed body motion to trigger the playback of one or more audio samples or visual effects. This technology is intended to create a new form of expression, whereby the movement of a dancer or athlete can drive the creation of an electronic multimedia experience.
In addition to simply providing the tools to compose an audio/visual (A/V) experience, some embodiments of the present technology may enable a social collaboration between a plurality of users. For example, in some configurations, multiple users of the system may collaborate in a local or networked manner to make joint/collaborative A/V compositions. In another example, multiple networked users may issue and/or respond to motion-based challenges from each other.
In a collaborative context, some embodiments of the present technology may enable multiple members of a dance troupe or social network to collaborate in the creation of an A/V composition, much in the same way a symphony performs. In particular, each user or small group of users may have unique sounds or visual effects associated with their movement. During the performance, the combined sound output resulting from each member's movement may produce a performance-based A/V composition where the various users' bodies become the “instruments.”
Regarding challenges in a social media context, it has become increasingly popular for individuals to make online challenges to each other via various social media platforms. These challenges often involve users performing one or more actions or dancing to a specific audio clip. One example of such a challenge involved users videotaping themselves dumping icy water on their head and then issuing the same challenge to another user. Other challenges involve performing specific or improvised dance sequences to a portion of a song. In each instance, users may videotape themselves performing the dance/action and post the resulting video clip to an online video hosting service. Examples of such hosting services include TIKTOK and DOUYIN, both operated by Beijing ByteDance Technology Co., Ltd., or YOUTUBE, which is operated by YouTube, LLC, a subsidiary of Google, LLC. As will be described below, the present technology may be well suited for similar “challenges.”
In some embodiments, the output of the user's expression may exist solely within a separate medium and/or solely for consumption by others (i.e., “viewers”) who are remote to the user. For example, the user's motions may be used to trigger one or more audio and/or visual effects within a virtual environment, such as may exist within a video game. Alternatively, the effects may be presented in an augmented reality (AR) environment, where they may be overlaid on a natural perception of the real world. In such an AR context, the effects may either be delivered to a user device of a person attending the event live, such that they are superposed on that person's real-world view (as with AR display glasses), or overlaid on a captured video feed, such as a streamed video (internet or television) broadcast, which may be viewed on a user device such as a mobile phone or television.
The communications circuitry 24 coupled with the motion sensing circuit 22 is configured to outwardly transmit the data stream to the user console 30. The communications circuitry 24 may include one or more transceivers, antennas, and/or memory that may be required to facilitate the data transmission in real time or near real time. This data transmission may occur according to any suitable wireless standard; particularly suited communication protocols include those according to any of the following standards or industry-recognized protocols: IEEE 802.11, 802.15, 1914.1, 1914.3; BLUETOOTH or BLUETOOTH LOW ENERGY (or other similar protocols/standards set by the Bluetooth SIG); 4G LTE cellular, 5G, 5G NR; or similar wireless data communication protocols.
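By way of illustration only, the sketch below (Python) shows one plausible way a wearable might serialize motion samples for such a link. The packet layout, field names, and scale factors are assumptions for discussion, not a wire format specified by this disclosure.

```python
# Hypothetical format for one IMU sample sent from the wearable to the user
# console (e.g., as a BLE notification payload). All field choices and scale
# factors here are illustrative assumptions.
import struct

# < little-endian | I: timestamp in ms | 6h: ax, ay, az, gx, gy, gz raw counts
PACKET_FMT = "<I6h"
ACCEL_SCALE = 9.81 * 4 / 32768.0   # assumed +/-4 g full scale, m/s^2 per count
GYRO_SCALE = 500.0 / 32768.0       # assumed +/-500 dps full scale, deg/s per count

def pack_sample(t_ms, accel_counts, gyro_counts):
    """Serialize one motion sample on the wearable side."""
    return struct.pack(PACKET_FMT, t_ms, *accel_counts, *gyro_counts)

def unpack_sample(payload):
    """Recover the sample in scaled units on the console side."""
    t_ms, ax, ay, az, gx, gy, gz = struct.unpack(PACKET_FMT, payload)
    accel = tuple(c * ACCEL_SCALE for c in (ax, ay, az))   # m/s^2
    gyro = tuple(c * GYRO_SCALE for c in (gx, gy, gz))     # deg/s
    return t_ms, accel, gyro
```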
The user interface 36 may be configured to provide the user with the ability to view and/or hear available A/V effects while also enabling the user to build out a correspondence table. In some embodiments, the user interface 36 may include one or more displays 38 operative to output visual information to a user and/or one or more user input devices 40 operative to receive an input from the user. Examples of user input devices 40 include a touch screen/digitizer, a mouse, a keyboard, a control panel having a plurality of rotary knobs and/or buttons, a camera, a gesture-based input device, AR/VR virtual selection, etc.
The one or more audio output devices 46 may include one or more speakers, amplifiers, headphones, or other devices operable to broadcast audible sound in response to a received digital or analog audio signal. It is through these audio output devices 46 that the user console 30 will output one or more audio samples in response to the sensed movement. The visual effect controllers 48 may include one or more devices operable to illuminate one or more lights, initiate one or more lighting sequences, initiate one or more pyrotechnic effects, or the like. In some embodiments, a visual effect controller 48 may be resident on the wearable 20, such as to drive an LED or fiber optic visual illumination. Examples of such implementations are described in further detail below.
In operation, the user console 30 may serve to translate one or more sensed motions (i.e., of the wearable 20) into one or more audio or visual effects. By chaining various motions together, the user may be able to “play” a sequence of sounds or visual effects.
Once a motion correspondence table is established, the console 30 may receive a data stream from the article of footwear or apparel (at 64) that is indicative of the motion of the wearable. The user console 30 may continuously analyze this data stream in an effort to identify at least one motion primitive (at 66). As used herein, a “motion primitive” is a defined “block” of motion representing a discrete user action. Examples of possible motion primitives may include sensed linear translation, arcuate translation, sinusoidal/periodic translation, rotation, acceleration, jerk, and/or impact. In a general sense, motion primitives may include any combination of pre-programmed motions, user defined motions, and/or auto detected motions.
If one or more motion primitives are recognized from the data stream (at 66), the console 30 may trigger the playback (at 68) of an audio sample or visual effect that has been previously associated with that motion primitive. In some embodiments, the audio sample or visual effect that has been previously associated with that motion primitive may be a sequence of audio samples or visual effects, or a repeating sequence of audio samples or visual effects. For example, upon detection of a user stomping their foot (e.g., a motion primitive characterized by a downward velocity followed by a sudden deceleration), the user console 30 may trigger the playback of a single bass beat. Alternatively, it may trigger the playback of a plurality of bass beats, and/or may trigger the playback of a looped sequence of bass beats. In this manner, the system provides extreme flexibility for the user to define what audio or visual effect each movement (or sequence of movements) may cause/initiate.
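To make the stomp example concrete, the following is a minimal sketch of a detector for that primitive, assuming a gravity-compensated vertical acceleration stream; the thresholds, sample rate, and integration scheme are illustrative guesses rather than the disclosure's algorithm.

```python
# Toy detector for the "stomp" primitive described above: sustained downward
# velocity followed by a sharp deceleration spike. All constants are assumed.
SAMPLE_HZ = 200
STOMP_VEL = -1.5      # m/s of downward velocity required before impact
IMPACT_ACCEL = 30.0   # m/s^2 spike treated as ground contact

def detect_stomp(vertical_accel):
    """vertical_accel: gravity-compensated samples in m/s^2; returns hit indices."""
    dt = 1.0 / SAMPLE_HZ
    velocity, hits = 0.0, []
    for i, a in enumerate(vertical_accel):
        # Crude velocity integration; a real detector would manage drift.
        velocity += a * dt
        if velocity < STOMP_VEL and a > IMPACT_ACCEL:
            hits.append(i)
            velocity = 0.0  # reset after the impact
    return hits

# Synthetic stream: a brief descent followed by a hard impact spike.
stream = [-9.0] * 40 + [45.0] + [0.0] * 10
correspondence = {"stomp": "bass_kick.wav"}     # stand-in for the table
for _ in detect_stomp(stream):
    print("trigger", correspondence["stomp"])   # stand-in for audio playback
```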
In the simplest embodiment, the motion catalog 114 may simply be imported from a pre-existing movement database 112, which may identify typical or common movements within dance sequences, sports, activities, and the like. If a generic movement database would prove too unwieldy (i.e., too many inapplicable movement options) or not specific enough to the desired activity or dance, then, in some embodiments, the processor 32 may be capable of building out the motion catalog either by parsing movements of a connected wearable 20 (i.e., via a received data stream), or by extracting movement information from a supplied video source 42.
In an embodiment where movement information is extracted from a video, the video source 42 may include, for example, a live and locally captured video, a pre-recorded video, and/or a networked or internet video feed/stream. The video source 42 may be passed through an object recognition and tracking module 116 to recognize and estimate the three-dimensional motion of a depicted wearable (or portion of an individual's body). In one embodiment, the object recognition and tracking module 116 may utilize image processing techniques such as boundary/edge detection, pattern recognition, and/or machine learning techniques to recognize the wearable 20 and to gauge its movement relative to its environment or in a more object-centered coordinate system.
Once the processor 32 has either received the raw data stream from the wearable 20 or recognized the motion of the depicted wearable from the video stream, it may then pass the raw motion through a primitive detection module 118. In this module, the processor 32 may examine the raw motion for one or more motion primitives or sequences of primitives. For each new primitive or sequence detected, the processor 32 may catalog a new general motion or motion type, a new specific motion, or a new motion sequence in the motion catalog 114. A general motion or motion type may be, for example, a translation (e.g., any translation) or an impact. A specific motion may, for example, be a specific translation of the wearable 20 in a particular direction (e.g., a translation of an article of footwear in a medial direction or an impact of a left foot). Finally, a motion sequence may be, for example, multiple primitives sequenced together (e.g., translation in a lateral direction followed by translation in a medial direction).
Once the motion catalog 114 is established, either by direct import, active motion sensing, or video analysis and deconstruction, it may then be presented to a user, along with a collection of available audio samples and/or visual effects 120, via an interactive user interface 36. The user interface 36 may receive a user input 124 that is operative to link one or more cataloged motions (i.e., motion types, specific motions, or motion sequences from the motion catalog 114) with one or more audio samples and/or visual effects from the collection of available audio samples and/or visual effects 120. These established relationships between the motions and the audio samples and/or visual effects may then be stored in a correspondence table 110. In effect, the correspondence table 110 may be the translator that converts future movements into sound or light effects. In addition to making a correspondence between motion and sound/light, the correspondence table 110 may further link one or more motion primitives with haptic responses, such that a spectator, if equipped with the proper hardware in mechanical communication with their body, could feel a prescribed response following a certain motion primitive.
In some embodiments, the correspondence table 110 need not be an entirely separate data construct from the motion catalog 114, but instead may simply include a plurality of pointers, each being appended to a different respective motion entry within the motion catalog 114 and referencing a different effect. Additionally, in some embodiments, the correspondence table may further include a set of rules that modify the prescribed output according to, for example, timing considerations such as the rhythm, tempo, or flow, of the motion. In this example, the timing parameters may be employed to alter the pitch, tone, tempo, or speed of the prescribed output (or color, brightness, persistence or timing of a visual output).
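The following sketch suggests one possible shape for such a correspondence table, with entries that point back into the motion catalog 114 and optional timing rules that modulate the output. The field names and the example rule are assumptions made for illustration.

```python
# Sketch of a correspondence table: each entry points from a cataloged motion
# to an effect, plus an optional timing rule that modifies the output.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Effect:
    kind: str                 # "audio" or "visual"
    asset: str                # sample/effect identifier
    pitch: float = 1.0
    tempo: float = 1.0

@dataclass
class CorrespondenceEntry:
    motion_id: str            # pointer back into the motion catalog 114
    effect: Effect
    # Optional rule adapting the effect to the motion's rhythm/tempo.
    timing_rule: Optional[Callable[[Effect, float], Effect]] = None

def faster_motion_raises_tempo(effect: Effect, motion_bpm: float) -> Effect:
    """Example rule: scale playback tempo with the detected motion tempo."""
    effect.tempo = motion_bpm / 120.0   # 120 BPM as an assumed nominal reference
    return effect

table = [
    CorrespondenceEntry("stomp_left", Effect("audio", "bass_kick.wav"),
                        timing_rule=faster_motion_raises_tempo),
    CorrespondenceEntry("arm_wave", Effect("visual", "led_ripple")),
]
```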
Once the correspondence table 110 is created and initialization 100 is complete, the system 10 may then be set to a media generation mode 102. In the media generation mode 102, the processor 32 may be operative to receive a data stream 130 from the wearable 20 that is indicative of real-time sensed motion of at least a portion of the wearable 20. The data stream 130 may be received via communications circuitry associated with the processor 32, and may be made available to the processor in real time or near real time. From this received data stream 130, the processor 32 may analyze the motion with a primitive detector 132 (which may be similar or identical to the primitive detection module 118 used during initialization 100). As with above, the primitive detector 132 may examine the raw motion represented by the data stream 130 and detect one or more motion primitives or sequences of primitives.
In order to minimize processing time, and thus improve the responsiveness of the system 10, the primitive detector 132 may be configured to only look for motion primitives within the data stream 130 that have been previously defined in the motion catalog 114 and/or assigned with an associated audio sample and/or visual effect in the correspondence table 110. Upon detection of a primitive, the processor 32 may consult the correspondence table 110 and then may trigger or initiate the playback 134 of an audio sample or a visual effect in response to the identified at least one motion primitive.
In some embodiments, the collection of available audio samples and/or visual effects 120 may be populated from a pre-existing library that may be supplied with the software, or that may be downloaded from a connected distributed computing network (e.g., the Internet). In one embodiment, however, the user may be capable of populating or adding to the collection, such as by directly uploading one or more audio samples from a connected audio source 44 (e.g., a personal computer, a digital music player, a synthesizer, a keyboard, or the like), or by recording one or more sounds or sound sequences generated by the system 10 as a result of user movements. More particularly, if the user creates a particular beat/rhythm/composition via movement, they may be able to save that created work in the collection 120 for future playback/triggering by a single discrete motion. In this manner, it may be possible to layer different sounds/compositions for increased effect.
In one configuration, the user may have the ability to alter a degree of smoothing or auto-tuning between the various audio/video effects. In this manner, beginner users may be able to create compositions that sound or appear well produced, even if their movements are not perfectly complete or in time. Similarly, the wearable 20 may be configured to anticipate motion primitives based on prior or preceding motion if the user is behind in their timing. Conversely, more advanced users may lessen the smoothing/auto-tuning to get more direct control over the output. In some embodiments, the smoothing/auto-tuning can use machine learning and artificial intelligence techniques to blend audio/video elements together, which may rely on the interspersing of additional beats, drawing out notes, predicting subsequent movements based on early motion indications, etc.
In one configuration, the user console 30 may be configured to take a previously recorded/produced audio track/song and divide it into a plurality of discrete segments (auto-segmentation). The nature, duration, and/or segmentation of the various segments may be customized or even separately created by the user (manual segmentation/segment modification). The user console 30 may then either automatically assign motion primitives to each segment, or may prompt the user to assign their own motion primitives to each segment. This may amount to choreographing a dance routine to a chosen song. Then, by performing the scripted motions in time, the user (or group of users) may initiate the playback of the song solely through their movements—segment by segment. In one embodiment, instead of having an absolute correspondence table, the correspondence table 110 may be conditional upon other factors. For example, a given motion primitive may initiate the replay of a first audio segment (of a song) if performed between time 00:00:00 and 00:00:10, or if performed as an initial move; however, the same motion primitive may initiate the replay of a second audio segment if performed between time 00:01:40 and 00:02:00, or if it is between the 10th and 20th recognized primitive, or if it follows the recognition of a different primitive or playback of a prescribed audio segment. In a further embodiment, the varying amount of smoothing/auto-tuning may also be applied to this sequential playback of music segments, whereby the segments may be blended or final notes may be extended to produce an output that flows from one segment to the next without sounding choppy.
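A minimal sketch of such a conditional lookup appears below; the time windows mirror the example just given, while the rule structure and function names are assumptions.

```python
# Conditional correspondence: the same primitive maps to different song
# segments depending on elapsed time or how many primitives have already
# been recognized. The lookup structure itself is an assumption.
def resolve_segment(primitive, elapsed_s, primitive_index, conditional_table):
    for rule in conditional_table.get(primitive, []):
        t0, t1 = rule.get("time_window", (0.0, float("inf")))
        i0, i1 = rule.get("index_window", (0, 10**9))
        if t0 <= elapsed_s < t1 and i0 <= primitive_index < i1:
            return rule["segment"]
    return None

conditional_table = {
    "spin": [
        {"time_window": (0, 10), "segment": "segment_01"},     # 00:00:00-00:00:10
        {"time_window": (100, 120), "segment": "segment_07"},  # 00:01:40-00:02:00
        {"index_window": (10, 20), "segment": "segment_04"},   # 10th-20th primitive
    ]
}

print(resolve_segment("spin", 5.0, 2, conditional_table))   # -> segment_01
```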
In the case of choreographing existing songs, the user console 30 may include a local memory that has the song stored thereon, or the console may be in communication with an internet-based streaming audio source to which the user may separately subscribe.
In addition to simply outputting basic sounds or beats, or outputting segments of a pre-recorded track/song, in some embodiments, the correspondence table may include one or more motion primitives that are linked to audio action/control commands. For example, primitives may be used to initiate playback of a full song, initiate playback of the next song in a list, pause the song, rewind the song, alter the beat of the song, alter the tone of the song, alter the volume of the playback, fade in/out, etc. In this manner, a user, through his/her motions, may act as a DJ or producer to play back pre-recorded audio from a local or networked source. Further, the audio action/control commands may operate in conjunction with the display of the user console. For example, primitives may be used to scroll through a list (e.g., to find a certain song/track).
For the purpose of any use-case examples described herein, it should be understood that any audio playback may include discrete sounds, collections of sounds, pre-recorded sounds, segments of audio tracks/songs stored on a local memory, full audio tracks/songs stored on a local memory, segments of audio tracks/songs drawn from an internet-based source (i.e., including songs that may be accessed via a subscription-based login), or full tracks/songs drawn from an internet-based source, and the like.
Additionally, it should be noted that in each embodiment described herein, some or all of the user console may be physically integrated with the wearable 20. For example, in one configuration, the remote user console 230 may not be a separate computing device, but may instead be a smart/connected wearable 220, such as a smart watch. In this embodiment, the smart/connected wearable 220 may include enough processing functionality and communications capability to transmit motion data to the distributed network, and may even be able to communicate with one or more audio output devices 46 or visual effect controllers 48 through a communication protocol such as BLUETOOTH. Likewise, the smart/connected wearable 220 may also serve as the user console 30 for one or more other wearables 20 that lack additional processing capabilities.
Building upon this notion of remote users collaborating over a distributed network, in some embodiments, the present system may be utilized in a game-like context whereby users may challenge each other to perform and/or create certain predetermined works of art, or to reproduce certain dance sequences. For example, in one context, a user may be presented with a sequence of movements on a display (e.g., an ordered sequence of motion primitives). Users may then attempt to replicate those movements in time; if accurately performed, this may generate a pre-composed audio or visual output. Deviation in timing or completeness of the user's movements from what was intended/displayed could alter the audible or visual output. Furthermore, in some embodiments, the processor 32 may be configured to determine, from the received data stream, an accuracy metric that represents a correspondence between the monitored spatial motion of the wearable 20 and the ordered sequence of motion primitives. The accuracy metric may, for example, reflect deviations in timing, magnitude, and/or completeness of the user's motions relative to the presented sequence. In some embodiments, the presented sequence may vary in difficulty or complexity based on the experience of the user.
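One simple way such an accuracy metric could be computed is sketched below; the matching strategy, timing tolerance, and weighting are illustrative assumptions, since the disclosure does not prescribe a formula.

```python
# Toy accuracy metric: compare performed primitives (with timestamps) against
# the challenge's ordered sequence, penalizing timing error and missed moves.
def accuracy_metric(expected, performed, time_tolerance_s=0.25):
    """expected/performed: ordered lists of (primitive_name, time_s)."""
    matched, timing_error = 0, 0.0
    j = 0
    for name, t_expected in expected:
        # Scan forward for the next performed primitive of the right type.
        while j < len(performed) and performed[j][0] != name:
            j += 1
        if j < len(performed):
            matched += 1
            timing_error += min(abs(performed[j][1] - t_expected), time_tolerance_s)
            j += 1
    completeness = matched / len(expected) if expected else 0.0
    timeliness = (1.0 - timing_error / (matched * time_tolerance_s)) if matched else 0.0
    return 0.7 * completeness + 0.3 * timeliness   # assumed weighting
```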
Following completion of a composition or challenge, the user console 30 may transmit an audio or video capture 148 of the composition and/or the accuracy metric/scoring 146 from the challenge to the remote computing device 144, where it may be hosted to be watched by one or more viewers 150 across the distributed network 50. Additionally, the first user 140 may then issue a subsequent challenge 152 to a second user 154 either directly or via the remote computing device 144 and/or distributed network 50.
In another collaborative/competitive example, instead of issuing direct challenges to other users, the present technology may be used to bring physically separated users together and/or gamify a video-based workout streamed over the internet. For example, during an online/streamed kickboxing class, each user watching or listening to the virtual class may be instructed to perform a repeating series of movements or actions. A user's wearable 20 (or the connected user console 30) may sense the motion primitives associated with the user's respective actions, and may cause one or more visual effects to be overlaid onto or displayed on the user's display 38 in conjunction with the instructional video. In one embodiment, the color or nature of the visual effect displayed on the display 38 may be altered according to the similarity, completeness, or timing of the sensed primitive when compared with what is expected or instructed. For example, the user may have green stars appearing or confetti raining down from the top of the display if the user is achieving a new personal best or is above a predefined accuracy threshold. Conversely, the display may provide one or more visual motivators to the user if significant deviation is recognized. In one embodiment, the user's movements may be scored according to the accuracy or completeness of their moves when compared to those of an instructor or idealized reference. This accuracy score may be a moving average of accuracy over a predetermined period of time, and may be displayed on the user's display 38 for personal reference/goal setting. In a situation where a plurality of users are distributed over a network, each watching the video, each user's accuracy score could be broadcast to the collective group (possibly anonymously) so that each user may know where he or she ranks. Likewise, the user's accuracy score may be displayed on the display 38 along with a score of a known influencer, professional athlete, or friend on a social network.
In still another embodiment, the visual traces may be scored with music that is stored in memory associated with the user's device 200, or accessible to the user's device 200 via a streaming media service/subscription. In this embodiment, the user's device may understand the timing and tempo of the motion and created visual output and may select an audio/music track that has a similar beat or rhythm. In instances where the available audio slightly mismatches the beat or timing of the motion, the user's device may be operative to select the closest possible audio and then modify the beat or tempo to match.
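As an illustration of this closest-match-then-adjust idea, the sketch below picks the library track whose tempo best fits the detected motion tempo and computes a playback-rate correction; the example library and the linear rate adjustment are assumptions.

```python
# Pick the track whose BPM is closest to the motion tempo, then compute the
# playback-rate adjustment needed to close the remaining gap.
def match_track(motion_bpm, library):
    track = min(library, key=lambda t: abs(t["bpm"] - motion_bpm))
    rate = motion_bpm / track["bpm"]     # e.g., 1.05 -> play 5% faster
    return track["title"], rate

library = [{"title": "Track A", "bpm": 96}, {"title": "Track B", "bpm": 128}]
print(match_track(122.0, library))       # -> ('Track B', 0.953125)
```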
The use of electronic, motion-sensing apparel or footwear in the present manner enables a new form of expression and creativity that is not possible through other, more traditional electronic inputs. Each sport, and each player within a sporting event, may create their own auditory or visual experience that is unique to that athlete's style and performance. This experience may be broadcast to users either directly (e.g., via speakers or lights within the facility), or via one or more handheld devices that are in communication with the user console paired with the athlete's wearables. In a non-sporting sense, the present technology provides performers (both professionals and those streaming on the internet at home) with a new means of digitally augmented expression, with their own movements being the direct trigger for an A/V experience.
Pre-set themes or genres may be used or applied in any of the above-referenced examples to introduce new suites of expression while altering the sounds or visual expressions according to a pre-defined rule. For example, an island/beach theme may alter the available sounds to a more calypso-themed suite of sounds and/or may alter the available colors and/or visual effects to ones within a blue/green/tan color palette.
In the example of a flash mob, the wearable 20 worn by each user may have connectivity to a common distributed computing network (WAN, wireless LAN, etc.), with each wearable being connected directly or through a smart phone, watch, or other connected device. The conversion from sensed motion primitives to a triggered response may occur either locally to each wearable 20, or more centrally on a networked server 304/data aggregation device. The collective user output may then be converted to an audio or visual output either via a local A/V system 306 (e.g., for playback by a local speaker 308 or visual effect device) or may be passed via the network to a third-party device 310 where it may be output as audio (via headphones or speakers) or displayed visually on a screen in augmented or mixed reality.
As noted above, in some embodiments, one or more visual effects may be triggered in response to the sensed motion primitive.
In the configuration where the light emitting element 400 shines through the ground-facing outsole surface 424 of the article of footwear, the projected light would be visible to surrounding observers only if the wearer of the article lifted their foot off the ground. To conserve power, in one configuration, the light emitting element 400 may be controlled to illuminate only when the article of footwear detects that it is in motion off of the ground, or when it may be detected that the foot has moved a sufficient distance away from the ground for the broadcast light to be visible to an observer (i.e., where position may be derived from sensed acceleration data). In still another embodiment, the light emitting element 400 may be controlled to illuminate only if the wearable 20/motion sensing circuit 22 senses an upward acceleration above a certain threshold. Such an acceleration may indicate a jump, or a jump of a particular magnitude, and may further aid in ensuring a certain minimum duration for the illumination or that an ideal focal distance is achieved for the projector 402 to project an image of sufficient clarity.
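A toy controller for this jump-gated illumination might look like the following; the acceleration threshold and hold time are assumed values chosen only to illustrate the gating logic.

```python
# Jump-gated illumination: light only when sensed upward acceleration exceeds
# a threshold suggesting takeoff. Threshold and hold duration are assumptions.
JUMP_ACCEL = 12.0      # m/s^2 of upward acceleration beyond gravity
LIGHT_HOLD_S = 0.4     # keep the projector lit briefly after takeoff

class SoleLightController:
    def __init__(self):
        self.lit_until = 0.0

    def update(self, t_s, vertical_accel):
        if vertical_accel > JUMP_ACCEL:   # likely takeoff
            self.lit_until = t_s + LIGHT_HOLD_S
        return t_s < self.lit_until      # True -> drive the LED/projector
```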
The upper 462 may include a variety of provisions for receiving and covering a foot, as well as for securing the article 444 to the foot. The upper 462 includes an opening 466 that provides entry for the foot into an interior cavity of the upper 462. Some embodiments may include fastening provisions, including, but not limited to: laces, cables, straps, buttons, and zippers, as well as any other provisions known in the art for fastening articles.
In some embodiments, the article of footwear 460 may include one or more illuminated design elements 442. These design elements 442 may include spotlighted features or discrete elements as opposed to more generally illuminated areas, such as the lighted panel 440. In some embodiments, however, there may be overlap between the two (i.e., a design element may be a panel-illuminated logo or accent). In this vein, the one or more illuminated design elements 442 may include illuminated logos or embellishments such as described in U.S. Pat. No. 8,056,269, illuminated straps such as described in U.S. Pat. No. 10,004,291, illuminated strands such as described in U.S. Pat. No. 8,813,395, and illuminated cavities or fluid-filled cushioning components such as described in U.S. Pat. No. 8,356,430 and US Patent Publication No. 2009/0158622, each of these references being incorporated by reference in their entirety.
In some configurations, the light emitting element 400 may also be incorporated into an article of apparel, such as a body suit or jacket.
When using the present technology to compose an audio/visual performance, it may be particularly challenging to stay on beat in the absence of an external rhythm. More specifically, in such a use case, the user may lack many of the external cues (e.g., a bass beat) that may otherwise be used to set rhythm/timing. This problem may be particularly evident in a collaborative group composition, which may have otherwise relied on the beat of a song to synchronize the tempo and rhythm of the performers.
To aid the one or more users in beat tracking and/or staying synchronized with other collaborating individuals, the wearable 20 may include a vibration transducer 500 that is operative to transmit a tactile sensation to the body of the user.
The vibration transducer 500 may operate at the direction of the processor 28 to convey a beat or other tactile timing signal to the user. In a multi-user environment, this timing signal could then be synchronized across each of a plurality of users to aid in creating a properly timed composition (e.g., during a flash mob). In one configuration, the vibration transmitted to the user by the vibration transducer 500 may be a switched or compound vibration that comprises a tactile waveform that is switched according to a specified beat. The tactile waveform may be a vibration having a frequency in the range of, for example and without limitation, about 100 Hz to about 300 Hz, or in the range of about 140 Hz to about 210 Hz, or in the range of about 230 Hz to about 260 Hz. This vibration may be selected so that the user may most easily perceive the notification from the vibration transducer 500 when such a notification is provided.
The intended beat of the song or composition may be represented by a periodic transmission of the tactile waveform, with the periodic transmission having a duty cycle of less than about 50%, or more preferably less than about 30%, or in the range of about 5% to about 25%. The beat may have a transmission frequency of between about 0.1 Hz and about 3 Hz. Expressed in beats per minute (BPM), the switched beat may transmit the tactile waveform approximately 30 to about 160 discrete times per minute. In some embodiments, every beat of a composition need not be expressed. Rather, only certain synchronizing beats might be expressed (e.g., one of 4 or one of 8 consecutive beats in a composition). Regardless of the specific frequency of the beat, the tactile waveform may represent a short-duration buzz, where the beat is the timing on which those short-duration buzzes occur.
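The sketch below renders this switched-beat idea directly, gating a carrier waveform on for a fraction of each beat period. The 200 Hz carrier, 120 BPM beat, and 15% duty cycle are example values within the ranges given above, and the transducer drive interface is left abstract.

```python
# Switched tactile beat: a ~200 Hz carrier gated on for a short duty cycle
# at a musical BPM. All numeric choices are examples from the stated ranges.
import math

def beat_gate(t_s, bpm=120, duty=0.15):
    """True while the tactile waveform should be switched on for this beat."""
    period = 60.0 / bpm
    return (t_s % period) < duty * period

def tactile_sample(t_s, carrier_hz=200.0):
    """One sample of the gated vibration waveform in [-1, 1]."""
    if not beat_gate(t_s):
        return 0.0
    return math.sin(2 * math.pi * carrier_hz * t_s)

# e.g., stream tactile_sample(t) to the transducer driver at its sample rate
```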
In some embodiments, instead of a periodic vibration being conveyed by a vibration transducer, a similar beat may be conveyed by constricting or tightening a portion of the wearable 20 (or an article of footwear or apparel in communication with the wearable) about the body of the user. To accomplish the constriction, the wearable 20 (or other article) may include one or more tensioning mechanisms that are operative to tension one or more fibers, cords, laces, closure mechanisms, or other fit-adjusting aspects of the article in a tangential direction around a portion of the wearer's body. In doing so, the tension in the article may then urge the article to constrict or reduce in size in a radial dimension, which would impart a compressive force against the wearer. Articles that may be particularly adapted for such a compression include shoes (i.e., adaptive lace tensioning), compression sleeves/garments, and watches/bracelets.
In some embodiments, the tensioning mechanism may include a motor that is operative to spool/unspool a tensile fiber embedded within the article in response to an actuation signal. In other embodiments, the tensioning mechanism may include one or more linear actuators, fast response active materials (e.g., piezo actuators), or micro-electromechanical systems (MEMS). The tensioning mechanism may be configured to respond to a provided switched beat to periodically induce a momentary compression against the user's foot/body. Additional descriptions of the tensioning mechanism in a footwear context are provided in U.S. Pat. No. 10,448,707, U.S. Patent Publications 2018/0199673, 2017/0265583, and/or U.S. application Ser. No. 16/694,306, each of which is incorporated by reference in its entirety.
When used as a party light, the energy storage device 614 (e.g., a battery) may supply the electrical power needed to illuminate the lighted elements 608, as well as the power required to facilitate any external communications or drive any powered aspects of the movement mechanism 616. The energy storage device 614 may be charged concurrently with the wearable 20 when the charging device is plugged into an AC power source. Conversely, the energy storage device 614 may expend energy when the charging device 600 is operated as a party light.
The movement mechanism 616 may be configured to induce movement of the charging pucks 604, 606 and/or lighted elements 608 to provide a visual effect similar to a disco ball or other moving spotlight. The movement mechanism may include, for example, one or more motors, wind-up/spring-driven mechanisms, or articulating mirrors/lenses. The movement mechanism 616 may generally create a rotation or oscillation of at least a portion of the charging puck or lighted element to alter how the light is projected.
The lighting controller 618 may be responsible for illuminating the one or more lighted elements 608 to create the visual effect. In one configuration, the lighting controller 618 may simply illuminate the lighted elements 608 in a repeating pattern that may be pre-programmed and/or user controlled. For example, a predefined collection of illumination patterns and sequences may be pre-programmed into the lighting controller at the time of manufacture. The user may then select the desired pattern/sequence, for example, by clicking a button on the hub 612 or by selecting it via a connected wireless device (e.g., remote, smartphone, computer, etc.).
In another configuration, the lighting controller 618 may illuminate the lighted elements 608 in response to audio that may be sensed via an integrated or connected microphone. The lighting controller may, for example, pulse lights in response to a sensed low-frequency beat, or may alter lighting patterns or colors according to sensed pitch or frequency. In one configuration, the microphone may be integrated into a connected wireless device, which may transmit either raw sensed audio data, audio summary data (e.g., beat or pitch data), or program commands based on sensed raw audio or audio summary data.
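One common way to realize this kind of audio-reactive pulsing is to track low-frequency energy per audio frame and flag frames that spike above a rolling average, as sketched below; the cutoff frequency, threshold factor, and history length are illustrative assumptions.

```python
# Beat-reactive lighting sketch: pulse when bass energy spikes above its
# recent average. Frame size, cutoff, and threshold factor are assumed.
import numpy as np

def low_freq_energy(frame, sample_rate=44100, cutoff_hz=150.0):
    """Energy below cutoff_hz for one frame of audio samples."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    return float(np.sum(spectrum[freqs < cutoff_hz] ** 2))

def beat_pulses(frames, threshold_factor=1.8, history=43):
    """Yield True for frames whose bass energy spikes above the rolling mean
    (history=43 is roughly one second of 1024-sample frames at 44.1 kHz)."""
    energies = []
    for frame in frames:
        e = low_freq_energy(frame)
        recent = energies[-history:]
        avg = sum(recent) / len(recent) if recent else e
        energies.append(e)
        yield e > threshold_factor * avg
```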
In still another configuration, the lighting controller 618 may receive a digital signal from the wearable 20 and may illuminate the lighted elements 608 on the basis of one or more sensed motion primitives, as discussed above. As such, the lighting controller 618 may include, or may be in communication with, the visual effect controller 48, and the charging device 600 may serve as a means for displaying the visual expression of the user's movement (or else may complement the user's visual movement). In this manner, it may be possible for the lighted elements 608 to synchronize their movement and/or flashing with the rhythmic movement of one or more people dancing near the light.
In some embodiments, instead of triggering the playback of an audio sample or a visual effect in response to the identified at least one motion primitive, the motion primitive may serve as the input to an external game. For example, in some embodiments, the shoe may serve as the input to a dance-based video game where a sequence of movements is displayed on a video display, and a user must do his or her best to replicate the movements to achieve a high score. In another embodiment, the motion primitive may serve as an input to a retail experience, such as the kiosk 700 described below.
The wireless computing device 706 may further include the ability to authenticate a user's identity, such as by including an application that requires the user to enter a password, to present an identifying biometric feature (e.g., fingerprint or facial recognition), or to otherwise be securely logged in to the application. At step 804, the user's wireless computing device 706 may authenticate the user's identity and location as being physically present within a predefined proximity or geofence of the kiosk 700.
Once the user is authenticated and recognized in proximity to the kiosk 700, the server 708 may present the user with a particular challenge (at 806). In one configuration, the challenge may be presented to the user 702 via a display 712 that is coupled to, or in communication with the kiosk 700. In another embodiment, the challenge may be presented to the user via the communication network 710 and a display provided on their wireless computing device 706. In general, the challenge may comprise a series of movements, motions, or actions that the user is asked to perform in sequence. For example, the challenge may comprise a dance sequence, a set of exercises such as jumping jacks, lunges, burpees, high knees, kicks, or a series of yoga poses.
Once the user is presented with the challenge at 806, the user 702 may perform the requested physical activity and the server node 708 may determine that the challenge has been successfully completed (at 808). This determination may come either from direct observation (e.g., via a camera or pressure mat provided with the kiosk 700) or by receipt of a digital indication that the challenge has been performed. In one configuration, the received digital indication may be provided by a wearable 20 as a transmitted sequence of motion primitives, or else as a simple indication that the wearable 20 has detected a sequence of motion primitives that match or closely approximate what would be expected during successful completion of the challenge.
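As an illustration of the matching step at 808, the sketch below compares the reported primitive sequence against the expected one using a generic sequence-similarity measure; the use of difflib and the 0.8 acceptance threshold are assumptions, not the disclosure's method.

```python
# Server-side check: accept the challenge as complete when the wearable's
# reported primitive sequence closely matches the expected sequence.
from difflib import SequenceMatcher

def challenge_completed(expected_primitives, reported_primitives, min_ratio=0.8):
    ratio = SequenceMatcher(None, expected_primitives, reported_primitives).ratio()
    return ratio >= min_ratio

expected = ["jumping_jack"] * 5 + ["lunge"] * 4
reported = ["jumping_jack"] * 5 + ["lunge"] * 3 + ["unknown"]
print(challenge_completed(expected, reported))   # True -> issue redemption code
```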
Following a determination that the user 702 has successfully completed the challenge (at 808), the kiosk 700 may then present the user with a code (at 810) that may be redeemed for a retail product (at 812), or combined with other codes to be collectively redeemed for a retail product (or limited release digital object).
In one embodiment, the kiosk 700 may be located at a retail store, and the code may be handed to a retail associate or entered into an internet interface to redeem a retail product of similar style or design as the product 704 displayed in the kiosk 700. In another embodiment, a user may need to acquire multiple codes, each from a different kiosk, to qualify for the product. In this multi-kiosk scenario, the plurality of kiosks may be distributed over a geographic area and/or at a plurality of retail stores, and the user may be required to search them out, such as described in U.S. Patent Publication 2019/0213619 and U.S. patent application Ser. No. 16/874,944, which are incorporated by reference in their entirety. In this configuration, the user may be sent on a scavenger hunt to accumulate the codes required for the product unlock. As generally discussed in U.S. patent application Ser. No. 16/874,944, in one embodiment, the user may be guided to the next kiosk through the use of turn-by-turn navigation built into the shoes (e.g., by selectively tensioning the laces, or through selective actuation of the haptic transducer described above).
While most of the disclosure provided above has been primarily focused on creating audio and/or visual effects within a live environment, similar experiences may also be created within a virtual world.
In some embodiments, in addition to movement triggers, the generation of the visual effect may be further conditioned upon external factors, such as timing within a game, recent scoring activity, or the relative positioning of other players on the court/field. Furthermore, in some embodiments the existence, nature, or color of the effect may vary based on one or more preferences or attributes provided by the viewing user (i.e., via the user's respective web-enabled computing device 960). In such an embodiment, the user may first pre-select a favorite team or player, which may dynamically assign different motion correspondence tables to different players/teams. During the video broadcast, the pre-selected favorite team or player may then be augmented by a first set of sound/graphics, whereas other players (and in particular opposing players/teams) may be augmented by a second set of sound/graphics. The difference in treatment between the designated favorite player/team and the opponent may have the effect of casting the pre-selected favorite player as the hero and casting the opposing player/team as the villain. This may be accomplished, for example, through the use of differing color palettes, differing graphics, and/or differing sound effects.
Much like the enhanced sports broadcast 952, visual effects driven by sensed motion primitives may also be ported into a virtual environment 902.
In addition to simply changing a visual or audio effect, it may also be possible for the output of the correspondence table to be transmitted to the user in the form of one or more haptic signals. Each haptic signal may direct a haptic device on the body of the user to provide a tactile response. These tactile responses may be in the form of a vibration or constriction of an article of footwear or apparel disposed on the user, and may be synchronized with one or more of the visual or audio effects. In this manner, the present technology may engage multiple senses to provide a more immersive user experience. In one embodiment, the haptic signal may further attempt to convey or impart one or more tactile sensations that may resemble those being experienced by the live athlete.
Various features and methods of operation of the presently described technology are set forth in the following clauses:
Clause 1: A system for motion-based media creation comprises an article of footwear or apparel having at least one accelerometer or inertial measurement unit operative to monitor spatial motion of at least a portion of the article of footwear or apparel, and generate a data stream indicative of the monitored spatial motion; a processor in networked wireless communication with the article of footwear or apparel, the processor configured to: receive the data stream from the article of footwear or apparel; identify at least one motion primitive from the received data stream; and trigger the playback of an audio sample or a visual effect in response to the identified at least one motion primitive.
Clause 2: The system of clause 1, wherein the identified at least one motion primitive includes a first motion primitive and a second motion primitive, and the audio sample or visual effect is a first audio sample or first visual effect and is triggered in response to the first motion primitive; and the processor further configured to: trigger the playback of a second audio sample or a second visual effect in response to the second identified motion primitive; and wherein the first audio sample or first visual effect is different than the second audio sample or second visual effect.
Clause 3: The system of any of clauses 1-2, further comprising a user console having a display and a user input device in communication with the processor, and further wherein the processor is further configured to: maintain a library of a plurality of audio samples; associate, on the basis of a received input from the user input device, a selected audio sample from the plurality of audio samples with a predefined motion primitive; match the identified motion primitive with the predefined motion primitive; and wherein the triggering of the playback of the audio sample or the visual effect in response to the identified motion primitive comprises outputting the selected audio sample in response to the matching of the identified motion primitive with the predefined motion primitive.
Clause 4: The system of any of clauses 1-3, further comprising a user console having a display and a user input device in communication with the processor, and further wherein the processor is further configured to: maintain a library of a plurality of visual effects; associate, on the basis of a received input from the user input device, a selected visual effect from the plurality of visual effects with a predefined motion primitive; match the identified motion primitive with the predefined motion primitive; and wherein the triggering of the playback of the audio sample or the visual effect in response to the identified motion primitive comprises outputting the selected visual effect in response to the matching of the identified motion primitive with the predefined motion primitive.
Clause 5: The system of clause 4, wherein the article of footwear or apparel further comprises at least one light; and wherein outputting the selected visual effect comprises illuminating the at least one light on the article of footwear or apparel.
Clause 6: The system of any of clauses 1-5, wherein the article of footwear or apparel is an article of footwear.
Clause 7: The system of any of clauses 1-6, wherein the user console is in communication with a distributed network; and the processor is further configured to: receive a challenge from the distributed network, the challenge comprising an ordered number of motion primitives; display the challenge to a user via a user display device; and determine, from the received data stream, an accuracy metric representing a correspondence between the monitored spatial motion of the article of footwear or apparel and the ordered number of motion primitives.
Clause 8: The system of clause 7, wherein the processor is further configured to transmit the accuracy metric to the distributed network.
Clause 9: The system of any of clauses 1-8, wherein the processor is configured to trigger the playback of the audio sample or the visual effect in a virtual environment.
Clause 10: The system of clause 9, wherein the playback of the audio sample or the visual effect is the movement of an avatar within the virtual environment.
Clause 11: The system of any of clauses 1-8, wherein the processor is configured to trigger the playback of the audio sample or the visual effect as an overlay on a video feed.
Clause 12: The system of any of clauses 1-11, wherein the processor is further configured to determine a distance between the article of footwear or apparel and a predetermined location, and wherein the processor is configured to trigger the playback of an audio sample or a visual effect only if the distance is below a predetermined threshold.
Clause 13: A method of creating an entertainment experience from user motion, the method comprising: receiving a plurality of data streams, each data stream from a different article of footwear or apparel and representative of a spatial motion of the respective article; and triggering the playback of an audio sample or a visual effect in response to each of the received plurality of data streams.
Clause 14: The method of clause 13, further comprising identifying a plurality of motion primitives from the plurality of data streams; and wherein triggering the playback of an audio sample or a visual effect includes triggering the playback of a different audio sample or visual effect for each different one of the identified plurality of motion primitives.
Clause 15: The method of clause 14, further comprising mapping a plurality of predefined motion primitives to a plurality of different audio samples or visual effects.
Clause 16: The method of clause 15, wherein the mapping includes mapping each of the plurality of motion primitives to a different one of the plurality of different audio samples or visual effects.
Clause 17: The method of any of clauses 14-16, further comprising: receiving a challenge from a distributed network, the challenge comprising an ordered number of motion primitives; displaying the challenge to one or more users via one or more user display devices; and determining, from the received plurality of data streams, an accuracy metric representing a correspondence between the identified plurality of motion primitives and the ordered number of motion primitives.
Clause 18: The method of any of clauses 13-17, further comprising: maintaining a library of a plurality of audio samples and a library of a plurality of predefined motion primitives; associating, on the basis of a received input from a user input device, a different selected audio sample from the plurality of audio samples with each of the plurality of predefined motion primitives; identifying a plurality of motion primitives from the plurality of data streams; matching each of the identified motion primitives with one of the predefined motion primitives; and wherein the triggering of the playback of the audio sample or visual effect comprises outputting the audio sample associated with each predefined motion primitive that is matched with an identified motion primitive.
Clause 19: The method of any of clauses 13-17, further comprising: maintaining a library of a plurality of visual effects and a library of a plurality of predefined motion primitives; associating, on the basis of a received input from a user input device, a different selected visual effect from the plurality of visual effects with each of the plurality of predefined motion primitives; identifying a plurality of motion primitives from the plurality of data streams; matching each of the identified motion primitives with one of the predefined motion primitives; and wherein the triggering of the playback of the audio sample or visual effect comprises outputting the visual effect associated with each predefined motion primitive that is matched with an identified motion primitive.
Clause 20: The method of clause 19, wherein outputting the visual effect comprises illuminating at least one light on the article of footwear or apparel.
Clause 21: The method of any of clauses 13-20, wherein receiving the plurality of data streams includes receiving the data streams via a wireless digital communication protocol.
Clause 22: The method of any of clauses 13-21, further comprising identifying a first motion primitive from a first data stream of the plurality of data streams and identifying a second motion primitive from a second data stream of the plurality of data streams, wherein the first motion primitive and the second motion primitive are the same; and wherein triggering the playback of an audio sample or a visual effect includes triggering the playback of a first audio sample or visual effect in response to the identifying of the first motion primitive and triggering the playback of a second audio sample or visual effect in response to the identifying of the second motion primitive; and wherein the first audio sample or visual effect is different than the second audio sample or visual effect.
Claims
1. A system for motion-based media creation comprising:
- an article of footwear or apparel comprising at least one accelerometer or inertial measurement unit operative to monitor spatial motion of at least a portion of the article of footwear or apparel, and to generate a data stream indicative of the monitored spatial motion;
- a processor in networked wireless communication with the article of footwear or apparel, the processor configured to: receive the data stream from the article of footwear or apparel; identify at least one motion primitive from the received data stream; and trigger the playback of an audio sample or a visual effect in response to the identified at least one motion primitive.
Type: Application
Filed: May 23, 2022
Publication Date: Sep 8, 2022
Applicant: NIKE, Inc. (Beaverton, OR)
Inventors: Christopher Andon (Portland, OR), Bobby LeGaye (Portland, OR), Hien Tommy Pham (Beaverton, OR), Adam Tenuta (Portland, OR)
Application Number: 17/751,585