INTERACTIVE PODCAST PLATFORM WITH INTEGRATED ADDITIONAL AUDIO/VISUAL CONTENT
An interactive podcast with integrated additional audio/visual content. In one embodiment, a computing device comprising: an electronic processor; and a memory including an interactive podcast program that, when executed by the electronic processor, causes the electronic processor to perform a set of operations including retrieving, from a server, a podcast, one or more links to additional audio/visual content that is contextually related to the podcast, and metadata, and generating an interactive podcast by generating a plurality of graphical user interfaces based on the podcast, the one or more links to the additional audio/visual content, and the metadata. The plurality of graphical user interfaces is configured to provide a user interaction interface between the podcast and the one or more links to the additional audio/visual content.
This application is a continuation of U.S. patent application Ser. No. 16/808,059, filed Mar. 3, 2020, which claims priority to and benefit of U.S. Provisional Application No. 62/813,553, filed on Mar. 4, 2019, the entire contents of which are incorporated herein by reference.
FIELD

The present disclosure relates generally to interactive content. More specifically, the present disclosure relates to an interactive podcast platform with integrated additional audio/visual content that is contextually related to a specific podcast.
SUMMARY

In embodiments of the present disclosure, the interactive podcast platform is a software platform for a creator/expert to create an interactive “podcast” and deliver that podcast to end users. The platform includes various interactive features relative to the underlying podcast to create an “interactive podcast.”
Also in embodiments of the present disclosure, the interactive podcast is an audio-centric, audio-guided, expert-curated, audio/visual learning experience, within a mobile application, a web module, or an embeddable media player, that guides a user through a series of curated third-party links, resources, articles, videos, educational instructions, or other suitable information. The user interface design is rooted in a conventional media player design, with play/pause, rewind, and fast forward controls. However, unlike conventional media player designs, the interactive graphical user interfaces described herein are enhanced with various interactive features as described in greater detail below. For example, one interactive feature is a graphical user interface element embedded in the podcast graphical user interface that links the podcast to contextually-related third-party content based on the content of the podcast. Additionally, for example, another interactive feature is a graphical user interface element embedded in the podcast graphical user interface that enables two-way interaction, such as a question that is asked of the listener or an advertisement or call to action on the screen that the listener can interact with while listening to the podcast audio.
One example embodiment of the present disclosure includes a computing device including an electronic processor and a memory. The memory includes an interactive podcast program that, when executed by the electronic processor, causes the electronic processor to perform a set of operations. The set of operations includes retrieving, from a server, a podcast, one or more links to additional audio/visual content that is contextually related to the podcast, and metadata. The set of operations also includes generating an interactive podcast by generating a plurality of graphical user interfaces based on the podcast, the one or more links to the additional audio/visual content, and the metadata. The plurality of graphical user interfaces is configured to provide a user interaction interface between the podcast and the one or more links to the additional audio/visual content.
Another example embodiment of the present disclosure includes a non-transitory computer-readable medium storing instructions that, when executed by an electronic processor, cause the electronic processor to perform a set of operations. The set of operations includes retrieving, from a server, a podcast, one or more links to additional audio/visual content that is contextually related to the podcast, and metadata. The set of operations also includes generating an interactive podcast by generating a plurality of graphical user interfaces based on the podcast, the one or more links to the additional audio/visual content, and the metadata. The plurality of graphical user interfaces is configured to provide a user interaction interface between the podcast and the one or more links to the additional audio/visual content.
Yet another example embodiment of the present disclosure includes a method for authoring an interactive podcast. The method includes recording, with an electronic processor, audio. The method includes converting, with the electronic processor, the audio into textual information. The method includes dividing, with the electronic processor, the audio into segments. The method includes creating, with the electronic processor, an interactive podcast by uploading additional material that is then linked to one or more timestamps in the audio. The method also includes uploading, with the electronic processor, the interactive podcast to a server.
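For illustration only, the authoring flow summarized above (record, transcribe, segment, link additional material to timestamps) might be sketched as follows. The function names and data shapes are assumptions for the sketch and are not part of the disclosure; a real system would call an actual speech-to-text service.

```python
# Illustrative sketch of the authoring flow; names and shapes are assumptions.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text step (a real system would call an ASR service)."""
    return f"transcript of {len(audio)} bytes"

def divide_into_segments(audio: bytes, n: int) -> list[bytes]:
    """Split the recorded audio into n roughly equal segments."""
    size = max(1, len(audio) // n)
    return [audio[i:i + size] for i in range(0, len(audio), size)]

def author_interactive_podcast(audio: bytes, material: dict[float, str]) -> dict:
    """Build an interactive podcast object: transcript, segments, and
    additional material keyed by timestamp (seconds into the audio)."""
    return {
        "transcript": transcribe(audio),
        "segments": divide_into_segments(audio, 4),
        "links_by_timestamp": dict(sorted(material.items())),
    }

podcast = author_interactive_podcast(b"\x00" * 100, {42.0: "https://example.com/article"})
```

The resulting object could then be serialized and uploaded to the server as the final step of the method.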
Before any embodiments of the present disclosure are explained in detail, it is to be understood that the present disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The present disclosure is capable of other embodiments and of being practiced or of being carried out in various ways.
In the example of
The electronic processor 102 executes machine-readable instructions stored in the memory 104. For example, the electronic processor 102 may execute instructions stored in the memory 104 to perform the functionality described herein.
The memory 104 may include a program storage area (for example, read only memory (ROM)) and a data storage area (for example, random access memory (RAM), and other non-transitory, machine-readable medium). In some examples, the program storage area may store machine-executable instructions regarding an interactive podcast program 106 (e.g., an application programming interface (API)). In some examples, the data storage area may store data regarding a podcast database 108 and an additional audio/visual content database 110.
In some examples, the podcast database 108 is a database storing various podcasts that have been uploaded for retrieval by any one of the plurality of mobile computing devices 120A-120N via the interactive podcast program 106. Similarly, in some examples, the additional audio/visual content database 110 is a database storing, among other things, links to various third-party audio/visual content or links to various author-created audio/visual content that is contextually related to one or more podcasts in the podcast database 108, and has been uploaded for retrieval by any one of the plurality of mobile computing devices 120A-120N via the interactive podcast program 106. Additionally, in some examples, the additional audio/visual content database 110 may further include audio notes (as described in greater detail below) and other open source information that is contextually-related to one or more podcasts in the podcast database 108.
The links to various third-party audio/visual content may include a third-party hyperlink, a third-party document, a third-party image, a third-party highlight, a third-party quote, a third-party audio, a third-party quiz, a third-party video, or a combination thereof. The links to various author-created audio/visual content may include an author-created hyperlink, an author-created document, an author-created image, an author-created highlight, an author-created quote, an author-created audio, an author-created quiz, an author-created video, or a combination thereof.
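For illustration only, the link types enumerated above could be modeled with a small data structure; the enum values and field names below are assumptions for the sketch, not part of the disclosure.

```python
# A minimal data model for the enumerated link types; names are illustrative.
from dataclasses import dataclass
from enum import Enum

class ContentKind(Enum):
    HYPERLINK = "hyperlink"
    DOCUMENT = "document"
    IMAGE = "image"
    HIGHLIGHT = "highlight"
    QUOTE = "quote"
    AUDIO = "audio"
    QUIZ = "quiz"
    VIDEO = "video"

@dataclass(frozen=True)
class ContentLink:
    kind: ContentKind
    url: str
    author_created: bool  # False for third-party content, True for author-created

link = ContentLink(ContentKind.VIDEO, "https://example.com/clip", author_created=False)
```

A single flag distinguishes third-party from author-created content, since the two groups otherwise enumerate the same kinds.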
The term “audio/visual content” is defined as content that includes audio content, visual content, or a combination of audio and visual content. When the “audio/visual content” includes visual content, then the interactive podcast is “multimodal” in the sense that a user is experiencing the podcast audio content in combination with visual content to render a multimodal experience.
In some examples, the electronic processor 102, when executing the interactive podcast program 106, retrieves a specific podcast from the podcast database 108 along with links to the contextually-related content from the additional audio/visual content database 110, and integrates the specific podcast and the links to the contextually-related content together into the “interactive podcast” as described herein. Additionally, in some examples, when executing the interactive podcast program 106, the server 100 may receive user input, provide user output, or both by communicating with an external device (e.g., the mobile computing device 120A) over a wired or wireless connection. For example, the graphical user interfaces of the interactive podcast platform application 126 as described herein may instead be graphical user interfaces generated by the interactive podcast program 106 (e.g., a website hosted by the server 100) and receive inputs from mobile computing device 120A via the network 180. In some examples, the interactive podcast platform application 126 includes a mobile app, a web module, an embedded audio player, or a combination thereof.
The communication interface 112 receives data from and provides data to devices external to the server 100, such as the mobile computing device 120A via the network 180. For example, the communication interface 112 may include a port or connection for receiving a wired connection (for example, an Ethernet cable, fiber optic cable, a telephone cable, or the like), a wireless transceiver, or a combination thereof. In some examples, the network 180 is the Internet.
The optional input/output interface 114 receives inputs from one or more input mechanisms (for example, a touch screen, a keypad, a button, a knob, and the like) and provides outputs to one or more output mechanisms (for example, a speaker, and the like), or a combination thereof. The optional input/output interface 114 may receive input from an administrative user, provide output to an administrative user, or a combination thereof.
For ease of understanding, description of the plurality of mobile computing devices 120A-120N is limited to the mobile computing device 120A. However, the description of the mobile computing device 120A is equally applicable to the other mobile computing devices in the plurality of mobile computing devices 120A-120N.
Additionally, for ease of understanding, the description refers to the mobile computing device 120A even though the computing devices in the plurality 120A-120N need not be mobile. The description of the mobile computing device 120A is equally applicable to “non-mobile” computing devices, for example, a desktop computer as the optional computing device 160 that implements a web module or an embeddable interactive, multimodal player. Therefore, while the mobile computing device 120A may be a smartphone, the description with respect to the mobile computing device 120A may also be applied to a desktop computer as the optional computing device 160.
In the example of
The electronic processor 122 executes machine-readable instructions stored in the memory 124. For example, the electronic processor 122 may execute instructions stored in the memory 124 to perform the functionality described herein.
The memory 124 may include a program storage area (for example, read only memory (ROM)) and a data storage area (for example, random access memory (RAM), and other non-transitory, machine-readable medium). The program storage area includes an interactive podcast platform application 126. In some examples, the interactive podcast platform application 126 may be a standalone application. In other examples, the interactive podcast platform application 126 is a feature that is part of a separate application. The data storage area includes a podcast cache 128 and an additional audio/visual content cache 130.
The interactive podcast platform application 126 generates various graphical user interfaces to provide different interactive features with a specific podcast and audio/visual content that is contextually related to the podcast. Additionally, the interactive podcast platform application 126 generates various graphical user interfaces to provide a content creating toolkit feature for creating an interactive podcast with one or more links to the audio/visual content that are contextually related to a specific podcast.
In some examples, the podcast cache 128 stores various podcasts that have been downloaded from the server 100 via the interactive podcast program 106. Similarly, in some examples, the additional audio/visual content cache 130 stores, among other things, various links to content that is contextually related to one or more podcasts in the podcast cache 128, and has been downloaded from the server 100 via the interactive podcast program 106. In some examples, the electronic processor 122, when executing the interactive podcast platform application 126, retrieves a specific podcast from the podcast cache 128 (or the podcast database 108) along with the links to the contextually-related content from the additional audio/visual content cache 130 (or the additional audio/visual content database 110), and integrates the specific podcast and the links to the contextually-related content together into the “interactive podcast” as described herein.
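For illustration only, the cache-first retrieval described above (check the local cache 128/130, fall back to the server databases 108/110, and populate the cache on a miss) might be sketched as follows; the class and attribute names are assumptions for the sketch.

```python
# Sketch of cache-first retrieval with server fallback; names are illustrative.

class PodcastStore:
    def __init__(self, cache: dict, server_db: dict):
        self.cache = cache          # local cache (analogous to podcast cache 128)
        self.server_db = server_db  # remote store (analogous to podcast database 108)

    def get(self, podcast_id: str):
        if podcast_id in self.cache:
            return self.cache[podcast_id]         # served locally
        podcast = self.server_db[podcast_id]      # simulated download from the server
        self.cache[podcast_id] = podcast          # populate the cache for next time
        return podcast

store = PodcastStore(cache={}, server_db={"ep1": {"title": "Episode 1"}})
first = store.get("ep1")   # fetched from the server database
second = store.get("ep1")  # now served from the cache
```

The same pattern would apply to the additional audio/visual content cache 130 and database 110.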
In some examples, the mobile computing device 120A is a smartphone or other suitable computing device that includes a presence-sensitive display screen. In other examples, the mobile computing device 120A is a laptop, a desktop computer, or other suitable computing device that includes or is connected to an input mechanism. In these examples, the user may select one of the graphical elements corresponding to the integrated additional audio/visual content that is contextually related to the specific podcast via the presence-sensitive display screen or the input device. Upon selecting the graphical element, the user is able to “interact” and “drill down” into specific information regarding the selected additional audio/visual content.
In some examples, the mobile computing device 120A includes one or more user interfaces (not shown). The one or more user interfaces include one or more input mechanisms (for example, a touch screen, a keypad, a button, a knob, and the like), one or more output mechanisms (for example, a speaker, and the like), or a combination thereof. The one or more optional user interfaces receive input from a user, provide output to a user, or a combination thereof. In some embodiments, as an alternative to or in addition to managing inputs and outputs through the one or more optional user interfaces, the mobile computing device 120A may receive user input, provide user output, or both by communicating with an external device (e.g., the server 100) over a wired or wireless connection.
The communication interface 132 receives data from and provides data to devices external to the mobile computing device 120A, e.g., the server 100. For example, the communication interface 132 may include a port or connection for receiving a wired communication link (for example, an Ethernet cable, fiber optic cable, a telephone cable, or the like), a wireless transceiver for receiving a wireless communication link, or a combination thereof.
The display screen 134 is an array of pixels that generates and outputs images including the graphical user interfaces as described herein to a user. In some examples, the display screen 134 is one of a liquid crystal display (LCD) screen, a light-emitting diode (LED) and liquid crystal display (LCD) screen, a quantum dot light-emitting diode (QLED) display screen, an interferometric modulator display (IMOD) screen, a micro light-emitting diode (mLED) display screen, a virtual retinal display screen, or other suitable display screen. The electronic processor 122 controls the display screen 134 to display the graphical user interfaces as described herein when executing the interactive podcast platform application 126.
The microphone 136 includes an audio sensor that generates and outputs audio data. The electronic processor 122 receives the audio data that is output by the microphone 136 and stores the audio as part of a podcast in the podcast cache 128 before being uploaded to the podcast database 108.
As explained above, unlike the conventional media player design, the interactive graphical user interfaces described herein are enhanced with the graphical user interface element 302 that links to contextually-related third-party content based on the content and step the user experiences within the interactive podcast. The element 302, on initial content load, is de-activated. In the course of the audio overview, as soon as the expert references the third-party content, the button activates. As illustrated in
In some examples, the element 302 may be a static element that links to only one piece of audio/visual content. In other examples, the element 302 may be a dynamic element that updates over the course of the audio overview to link to more than one piece of audio/visual content.
The user taps the element 302, which pauses the audio and launches the third-party link in a modal window, allowing the user to read, view, or listen to the additional audio/visual content that is contextually related to the podcast.
The web and mobile-based content creation toolkit provided by the graphical user interfaces 500-504 guides the content creator (e.g., a teacher or a client) through the process of creating content for both audio-only and audio/visual modes. As illustrated in
The fourth example graphical user interface 500 prompts for audio recordings of teaching scenarios that acknowledge the learner's learning mode—either audio/visual, or audio-only. The fourth example graphical user interface 500 in combination with the microphone 136 of the mobile computing device 120A may record audio files. The fourth example graphical user interface 500 may also link to a custom audio editing tool, library of audio assets, a form to upload links to third party images and videos, and a preview mode.
As illustrated in
Additionally, in some examples, a content creator may continually update a given podcast with relevant and up-to-date content. When the content creator finds a new article that is related to a particular topic in the podcast, the content creator may use the fourth example graphical user interface 500 to link the new article.
For example, through the fourth example graphical user interface 500, the content creator may add the new link with the new link button 512, add an accompanying 30-second audio preview to the link with the record button 506, and tap “save” with the publish button 516. Upon saving, the mobile computing device 120A appends the new content to the existing content by uploading the new content to the additional audio/visual content database 110 from the additional audio/visual content cache 130. In some examples, the mobile computing device 120A may also control the server 100 to transmit an in-app message to any users who have listened to the updated interactive podcast, letting the users know that new content is available. The user taps a link in the notification and is taken directly to the new content in the interactive podcast platform application.
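For illustration only, the update flow described above (append a new timestamped link to an existing interactive podcast and send an in-app message to users who have listened to it) might be sketched as follows; the function name and data shapes are assumptions for the sketch.

```python
# Sketch of appending new content and notifying prior listeners; names are illustrative.

def append_content(podcast: dict, timestamp: float, url: str,
                   listeners: list[str]) -> list[str]:
    """Link new material to a timestamp and build one in-app message per listener."""
    podcast["links_by_timestamp"][timestamp] = url
    return [f"New content available in '{podcast['title']}'" for _ in listeners]

podcast = {"title": "Intro to Jazz", "links_by_timestamp": {}}
messages = append_content(
    podcast, 95.0, "https://example.com/new-article", ["user1", "user2"]
)
```

In the described flow, tapping the link in such a message would take the user directly to the new content in the interactive podcast platform application.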
The method 600 includes recording audio and/or providing text with the mobile device 120A (at block 602). For example, the audio may be recorded and the text may be provided by a smartphone or a desktop computer. As described and illustrated by graphical user interfaces 1600-1800 and
The method 600 includes converting audio to text and/or converting text to audio with the mobile device 120A (at block 603). For example, the audio is transcribed into text and the text is converted into computer-generated audio. As described and illustrated by graphical user interface 1900 and
The method 600 includes editing and dividing the audio and/or text with the mobile device 120A (at block 604). As described and illustrated by graphical user interface 2000 and
The method 600 includes creating an interactive podcast by uploading additional material (e.g., topic references, links, articles, exercises) that is then linked to a timestamp in the audio with the mobile device 120A (at block 606). Optionally, in some examples, the method 600 may also include determining additional content relative to a topic by using machine learning on the additional material that is uploaded with the mobile device 120A (at block 608). As described and illustrated by graphical user interfaces 2100-2400 and
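For illustration only, the optional step at block 608 (determining additional content relative to a topic) might be sketched as follows, where a deliberately naive keyword overlap stands in for the machine learning described above; all names are assumptions for the sketch.

```python
# Naive stand-in for block 608: suggest catalog items related to a topic.
# A keyword overlap replaces the machine-learning step; names are illustrative.

def suggest_related(topic: str, catalog: dict[str, str]) -> list[str]:
    """Return catalog URLs whose description shares a word with the topic."""
    topic_words = set(topic.lower().split())
    return [url for url, desc in catalog.items()
            if topic_words & set(desc.lower().split())]

catalog = {
    "https://example.com/bebop": "history of bebop jazz",
    "https://example.com/rock": "origins of rock",
}
suggestions = suggest_related("jazz improvisation", catalog)
```

A real implementation would presumably learn from the uploaded material itself rather than from literal word overlap.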
The method 600 includes uploading the interactive podcast to the server 100 for administrative review before access by other users with the mobile device 120A (at block 610). As described and illustrated by graphical user interface 2500 and
Lastly, after the administrative review is completed, a mobile device 120N may search the server 100 and retrieve the interactive podcast from the server 100 (at block 612). As described and illustrated by graphical user interface 1500 and
As illustrated in
Alternatively, when in audio-only mode, the user hears the audio-based content, and the interactive podcast platform application 126 points out visual-based content based on an audio toggle of the element 704. However, in this instance, the interactive podcast platform application 126 encourages the user to return later to select the visual toggle and access the visual content.
Upon selecting “Visual,” the method 800 includes pausing the audio while the user consumes the visual content (at block 806). After consuming the visual content, the method 800 includes continuing with the audio playback (at block 808).
Upon selecting “Audio,” the method 800 includes playing the alternative audio that describes the visual content (at block 810). After consuming the alternative audio content, the method 800 includes reading the visual content to the user via text-to-audio (at block 812). After reading the visual content to the user, the method 800 includes continuing with the audio playback (at block 808).
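For illustration only, the branching of the method 800 described above might be sketched as follows, with each block represented as a named step; the step names are assumptions for the sketch.

```python
# Sketch of the method 800 branch: "Visual" pauses the audio while the user
# consumes the visual content; "Audio" plays alternative audio describing the
# visual content, then reads it via text-to-audio. Step names are illustrative.

def handle_prompt(choice: str) -> list[str]:
    if choice == "visual":
        # blocks 806 and 808
        return ["pause_audio", "show_visual", "resume_audio"]
    if choice == "audio":
        # blocks 810, 812, and 808
        return ["play_alternative_audio", "text_to_audio", "resume_audio"]
    raise ValueError(f"unknown choice: {choice}")

visual_flow = handle_prompt("visual")
audio_flow = handle_prompt("audio")
```

Both branches converge on continuing the audio playback (block 808).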
In the example of
In the example of
Additionally, in some examples, the interactive podcast platform application 126 includes an audio-hyperlinks feature. For example, when the user is presented either visually or aurally with information that is not understood by the user, the interactive podcast platform application 126 may receive a trigger word or phrase from the user followed by “what is X?” The interactive podcast platform application 126 may perform an online or local search and respond with either a visual or aural prompt regarding X.
Additionally, in some examples, the interactive podcast platform application 126 includes a machine learning engine that learns based on a user's history. For example, the interactive podcast platform application 126 keeps a history of what a user listens to—what podcast and what steps; when the user rewinds, fast forwards, skips ahead, searches, repeats steps; what third-party content the user reads/listens to; what the user shares; and what the user comments on in the discussion group. The machine learning engine, as part of the interactive podcast program 106, analyzes this information from the interactive podcast platform application 126, compares the user's history to the behaviors and actions of other users, and controls the interactive podcast platform application 126 to present new learning recommendations based on these learnings. The interactive podcast program 106 understands the different modes of learning that are preferred by listeners.
The content creation section 1502 provides a means for an author to perform operations 602-610 as described above. The content repository section 1504 provides a means for a user to perform operation 612 as described above.
As illustrated in
As illustrated in
As illustrated in
As illustrated in
Lastly, as illustrated in
The first name field 2504 receives the first name of the author of the content. The last name field 2506 receives the last name of the author of the content. The profile field 2508 receives information regarding the profile of the author of the content. The biography field 2510 receives information regarding the biography of the author of the content.
The thumbnail selection section 2512 includes a thumbnail upload component 2516 and a thumbnail search component 2518. The thumbnail upload component 2516 provides a means for the author of the content to upload an image for association with the content. The thumbnail search component 2518 provides a means for the author of the content to search for an image to associate with the content. Lastly, the publish GUI element 2514 associates the information provided in the author information section 2502 with the content created through the flows with the graphical user interfaces 1600-2400 and stores the authored content for administrative review.
As illustrated in
As illustrated in
As illustrated in
As illustrated in
As illustrated in
As illustrated in
As illustrated in
The following are enumerated examples of the computing devices, non-transitory computer-readable media, and methods for authoring an interactive podcast.

Example 1: A computing device comprising: an electronic processor; and a memory including an interactive podcast program that, when executed by the electronic processor, causes the electronic processor to perform a set of operations including retrieving, from a server, a podcast, one or more links to additional audio/visual content that is contextually related to the podcast, and metadata; and generating an interactive podcast by generating a plurality of graphical user interfaces based on the podcast, the one or more links to the additional audio/visual content, and the metadata, wherein the plurality of graphical user interfaces is configured to provide a user interaction interface between the podcast and the one or more links to the additional audio/visual content.
Example 2: The computing device of Example 1, wherein the metadata includes a first timestamp of the podcast that is associated with and contextually related to a first link of the one or more links to the additional audio/visual content.
Example 3: The computing device of Example 2, wherein the metadata includes a second timestamp of the podcast that is associated with and contextually related to a second link of the one or more links to the additional audio/visual content that is different from the first link.
Example 4: The computing device of Example 3, wherein the metadata includes a third timestamp of the podcast that is associated with and contextually related to a third link of the one or more links to the additional audio/visual content that is different from the first link and the second link.
Example 5: The computing device of Example 4, wherein the additional audio/visual content includes at least one content from a group consisting of: a third-party hyperlink, a third-party document, a third-party image, a third-party highlight, a third-party quote, a third-party audio, a third-party quiz, a third-party video, an author-created hyperlink, an author-created document, an author-created image, an author-created highlight, an author-created quote, an author-created audio, an author-created quiz, and an author-created video.
Example 6: The computing device of any of Examples 1-5, further comprising: a display screen, wherein the set of operations further includes controlling the display screen to display the interactive podcast.
Example 7: The computing device of any of Examples 1-5, wherein the set of operations further includes controlling a webpage to display the interactive podcast on a web browser.
Example 8: A non-transitory computer-readable medium storing instructions that, when executed by an electronic processor, cause the electronic processor to perform a set of operations, the set of operations comprising: retrieving, from a server, a podcast, one or more links to additional audio/visual content that is contextually related to the podcast, and metadata; and generating an interactive podcast by generating a plurality of graphical user interfaces based on the podcast, the one or more links to the additional audio/visual content, and the metadata, wherein the plurality of graphical user interfaces is configured to provide a user interaction interface between the podcast and the one or more links to the additional audio/visual content.
Example 9: The non-transitory computer-readable medium of Example 8, wherein the metadata includes a first timestamp of the podcast that is associated with and contextually related to a first link of the one or more links to the additional audio/visual content.
Example 10: The non-transitory computer-readable medium of Example 9, wherein the metadata includes a second timestamp of the podcast that is associated with and contextually related to a second link of the one or more links to the additional audio/visual content that is different from the first link.
Example 11: The non-transitory computer-readable medium of Example 10, wherein the metadata includes a third timestamp of the podcast that is associated with and contextually related to a third link of the one or more links to the additional audio/visual content that is different from the first link and the second link.
Example 12: The non-transitory computer-readable medium of Example 11, wherein the additional audio/visual content includes at least one content from a group consisting of: a third-party hyperlink, a third-party document, a third-party image, a third-party highlight, a third-party quote, a third-party audio, a third-party quiz, a third-party video, an author-created hyperlink, an author-created document, an author-created image, an author-created highlight, an author-created quote, an author-created audio, an author-created quiz, and an author-created video.
Example 13: The non-transitory computer-readable medium of any of Examples 8-12, further comprising: controlling a display screen of a smartphone to display the interactive podcast.
Example 14: The non-transitory computer-readable medium of any of Examples 8-12, further comprising: controlling a webpage to display the interactive podcast on a web browser.
Example 15: The non-transitory computer-readable medium of Example 14, wherein controlling the webpage to display the interactive podcast on the web browser further includes embedding a media player into the webpage, wherein the media player is configured to playback the audio, and change between the plurality of graphical user interfaces to playback the additional audio/visual content in combination with the playback of the audio.
Example 16: A method for authoring an interactive podcast, the method comprising: recording, with an electronic processor, audio; converting, with the electronic processor, the audio into textual information; dividing, with the electronic processor, the audio into segments; creating, with the electronic processor, an interactive podcast by uploading additional material that is then linked to one or more timestamps in the audio; and uploading, with the electronic processor, the interactive podcast to a server.
Example 17: The method of Example 16, further comprising: receiving, with the electronic processor, an input of second textual information that is in addition to the textual information that is converted from the audio, wherein the audio and the second textual information are divided into the segments.
Example 18: The method of Examples 16 or 17, wherein creating, with the electronic processor, an interactive podcast by uploading additional material that is then linked to one or more timestamps in the audio further includes uploading a first piece of material that is then linked to a first timestamp in the audio; and uploading a second piece of material that is then linked to a second timestamp in the audio, wherein the second piece of material is different from the first piece of material, and wherein the second timestamp is different from the first timestamp.
Example 19: The method of Example 18, wherein the first piece of material includes at least one from a first group consisting of: a third-party hyperlink, a third-party document, a third-party image, a third-party highlight, a third-party quote, a third-party audio, a third-party quiz, a third-party video, an author-created hyperlink, an author-created document, an author-created image, an author-created highlight, an author-created quote, an author-created audio, an author-created quiz, and an author-created video.
Example 20: The method of Example 18, wherein the second piece of material includes at least one from a second group consisting of: a third-party hyperlink, a third-party document, a third-party image, a third-party highlight, a third-party quote, a third-party audio, a third-party quiz, a third-party video, an author-created hyperlink, an author-created document, an author-created image, an author-created highlight, an author-created quote, an author-created audio, an author-created quiz, and an author-created video.
Thus, the present disclosure provides, among other things, an interactive podcast with integrated additional audio/visual content. Various features and advantages of the invention are set forth in the following claims.
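The authoring flow recited in Example 16 (record, convert to text, divide into segments, link additional material to timestamps, upload) could be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the data model, the `transcribe` stub, and all names are hypothetical, and the speech-to-text and server-upload steps are stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class Attachment:
    timestamp: float  # seconds into the audio
    material: str     # e.g. a hyperlink, document, image, quiz, or video reference

@dataclass
class InteractivePodcast:
    audio: bytes
    transcript: str
    segments: list                       # (start, end) pairs in seconds
    attachments: list = field(default_factory=list)

    def link_material(self, timestamp: float, material: str) -> None:
        """Link a piece of additional material to a timestamp in the audio."""
        self.attachments.append(Attachment(timestamp, material))

def transcribe(audio: bytes) -> str:
    """Stub for the 'converting the audio into textual information' step."""
    return "placeholder transcript"

def divide_into_segments(duration: float, segment_len: float) -> list:
    """Divide the audio timeline into fixed-length segments."""
    bounds, start = [], 0.0
    while start < duration:
        bounds.append((start, min(start + segment_len, duration)))
        start += segment_len
    return bounds

# Authoring flow: record -> transcribe -> segment -> link material -> upload.
audio = b"\x00" * 16  # stand-in for recorded audio
podcast = InteractivePodcast(
    audio=audio,
    transcript=transcribe(audio),
    segments=divide_into_segments(duration=90.0, segment_len=30.0),
)
podcast.link_material(12.5, "https://example.com/article")  # first piece, first timestamp
podcast.link_material(61.0, "https://example.com/quiz")     # second piece, second timestamp
```

As in Example 18, each piece of material is distinct and carries its own timestamp, so a player can surface the right attachment as playback crosses each point in the audio.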
Claims
1. A method for creating an interactive audio/visual experience comprising:
- receiving, with an electronic processor, textual content for the interactive audio/visual experience;
- recording, with the electronic processor, audio for the interactive audio/visual experience, the audio being based on the textual content;
- converting, with the electronic processor, the audio into textual information;
- selecting, with the electronic processor, a selected text within the textual information;
- linking, with the electronic processor, additional material to the selected text within the textual information; and
- generating, with the electronic processor, the interactive audio/visual experience including the audio and the additional material.
2. The method of claim 1, wherein:
- the audio and the textual content are divided into segments.
3. The method of claim 1, wherein the additional material includes at least one from a group consisting of:
- a hyperlink,
- a document,
- an image,
- a highlight,
- a quote,
- an audio,
- a quiz, and
- a video.
4. The method of claim 1, further comprising:
- receiving, with the electronic processor, title information for the audio/visual experience.
5. The method of claim 1, further comprising:
- receiving, with the electronic processor, chapter information for the audio/visual experience.
6. The method of claim 1, further comprising:
- receiving, with the electronic processor, author information for an author of the audio.
7. The method of claim 1, further comprising:
- uploading, with the electronic processor, the additional material.
8. The method of claim 1, further comprising:
- uploading, with the electronic processor, the interactive audio/visual experience to a server.
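Unlike the timestamp linking of Example 16, claim 1 anchors additional material to a selected span of text within the converted transcript. A minimal sketch of that text-anchored linking, using hypothetical names and a toy transcript, might look like:

```python
# Hypothetical sketch of the claim-1 flow: locate a selected text within the
# textual information, link additional material to that span, and assemble
# the interactive audio/visual experience.

def select_text(transcript: str, phrase: str) -> tuple:
    """Return the (start, end) character span of the selected text."""
    start = transcript.find(phrase)
    if start == -1:
        raise ValueError("selected text not found in transcript")
    return (start, start + len(phrase))

def link_material(links: dict, span: tuple, material: str) -> None:
    """Associate additional material with a text span."""
    links[span] = material

transcript = "Today we discuss the history of jazz and its great recordings."
links = {}
span = select_text(transcript, "history of jazz")
link_material(links, span, "https://example.com/jazz-timeline")

# The generated experience combines the audio with the text-anchored material.
experience = {"audio": "episode.mp3", "transcript": transcript, "links": links}
```

A player rendering this structure could highlight the linked span in the transcript and open the associated material when the user taps it.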
Type: Application
Filed: May 19, 2022
Publication Date: Sep 1, 2022
Applicant: Giide Audio, Inc. (Boulder, CO)
Inventors: Scott Prindle (Boulder, CO), Allison Kent-Smith (Boulder, CO)
Application Number: 17/664,176