AUDIO SEGMENT BASED AND/OR COMPILATION BASED SOCIAL NETWORKING PLATFORM
A device includes a transceiver, a storage device, and a processor. The transceiver receives an audio segment from a remote device, receives a request to communicate the audio segment to another remote device, and communicates the audio segment to the another remote device in response to the request to communicate the audio segment to the another remote device, the audio segment including at least one audio feature extracted from audio recorded by the remote device. The storage device stores the audio segment. The processor retrieves the audio segment from the storage device in response to the request to communicate the audio segment to the another remote device.
The present application is a continuation of U.S. patent application Ser. No. 16/458,899, filed Jul. 1, 2019, entitled “AUDIO SEGMENT BASED AND/OR COMPILATION BASED SOCIAL NETWORKING PLATFORM”, the entire specification of which is hereby incorporated by reference.
BACKGROUND OF THE DISCLOSURE

1. Field of the Disclosure

The disclosure relates in general to a social networking platform, and more particularly, to an audio segment and/or compilation based social networking platform.
2. Background Art

Social media platforms, such as the leading Instagram platform, suffer from an overload of photo editing and staging of photos. Automated beautifying photo apps dominate app stores, such as the Apple App Store. Even the staging of user photos has become common, as users have raised their expectations through their presence on photo-sharing networks. A good case study is the rise of Instagram pop-ups, which are simply unusual backgrounds for users. Users are limited in how they can reinvent their visual content. However, acts such as going to one of these pop-ups, waiting in line, and paying a premium just to have a distinctive background show that, at the same time, users are desperate for reinvention. Users torture themselves, dedicating substantial time and energy to keeping up a high level by farming for a desired number of likes. Fatigue is also evident for traditional social media users, because the staging, editing, and acknowledgment options within these platforms take a toll on the user with regard to factors such as the time required to keep up with continuous updates.
Furthermore, as limited capability devices such as smart speaker devices have appeared, traditional social networks have had trouble adapting. For example, visual segments could not translate over to the smart speaker devices. Not only could visual content not be consumed, but visual content could not be created. Thus, such limited capability devices have for the most part been excluded from social media platforms.
SUMMARY OF THE DISCLOSURE

The disclosure is directed to a method that comprises receiving, by a transceiver, an audio segment from a remote device, the audio segment including at least one audio feature extracted from audio recorded by the remote device, and storing, by a storage device, the audio segment. The method further includes receiving, by the transceiver, a request to communicate the audio segment to another remote device and retrieving, by a processor, the audio segment from the storage device in response to the request to communicate the audio segment to the another remote device. The method yet further includes communicating, by the transceiver, the audio segment to the another remote device in response to the request to communicate the audio segment to the another remote device.
In some configurations, the audio segment is one of a plurality of audio segments, the method further comprising formulating an audio compilation including the plurality of audio segments and receiving, by the transceiver, a request to communicate the audio compilation to the another remote device. The method even further comprises communicating, by the transceiver, the audio compilation to the another remote device.
In some configurations, a server computer formulates the audio compilation.
In some configurations, the remote device formulates the audio compilation.
In some configurations, the at least one audio feature is extracted from the audio recorded by the remote device based on at least one of word recognition and sound recognition, the audio segment including audio surrounding at least one of a recognized word and a recognized sound.
In some configurations, the audio segment includes associated geolocation information indicating where the remote device was located when recording the audio segment, the method further comprising receiving location information associated with the another remote device and retrieving the audio segment based on the location information associated with the audio segment and the location information associated with the another remote device.
In some configurations, the audio segment is one of a plurality of audio segments, the method further comprising filtering the plurality of audio segments based on a time of creation of the plurality of audio segments and communicating the filtered plurality of the audio segments to the another remote device.
In some configurations, the communicating of the method comprises at least one of streaming the audio segment to the another remote device and uploading the audio segment to the another remote device.
In some configurations, the remote device and the another remote device are at least one of a smart phone, a smart speaker, a portable gaming device, a tablet computer, a personal computer, and a smartwatch.
In some configurations, a server computer implements the method.
In some configurations, the method further comprises receiving geographic information from the remote device and the another remote device, and establishing at least one of a call and connection between the remote device and the another remote device based on the received geographic information.
The disclosure is also directed to a device that comprises a transceiver, a storage device, and a processor. The transceiver receives an audio segment from a remote device, receives a request to communicate the audio segment to another remote device, and communicates the audio segment to the another remote device in response to the request to communicate the audio segment to the another remote device, the audio segment including at least one audio feature extracted from audio recorded by the remote device. The storage device stores the audio segment. The processor retrieves the audio segment from the storage device in response to the request to communicate the audio segment to the another remote device.
In some configurations, the audio segment is one of a plurality of audio segments, wherein the processor is further to formulate an audio compilation including the plurality of audio segments and communicate the audio compilation to the another remote device, and the transceiver is further to receive a request to communicate the audio compilation to the another remote device.
In some configurations, a server computer formulates the audio compilation.
In some configurations, the device formulates the audio compilation.
In some configurations, the processor further to extract at least one audio feature from the audio recorded by the device based on at least one of word recognition and sound recognition, the audio segment including audio surrounding at least one of a recognized word and a recognized sound.
In some configurations, the audio segment includes associated geolocation information indicating where the remote device was located when recording the audio segment, the transceiver further to receive location information associated with the another remote device and the processor further to retrieve the audio segment based on the location information associated with the audio segment and the location information associated with the another remote device.
In some configurations, the audio segment is one of a plurality of audio segments, the processor further to filter the plurality of audio segments based on a time of creation of the plurality of audio segments and the transceiver further to communicate the filtered plurality of the audio segments to the another remote device.
In some configurations, the transceiver at least one of streams the audio segment to the another remote device and uploads the audio segment to the another remote device.
In some configurations, the remote device and the another remote device are at least one of a smart phone, a smart speaker, a portable gaming device, a tablet computer, a personal computer, and a smartwatch.
In some configurations, the device is a server computer.
In some configurations, the transceiver further receives geographic information from the remote device and the another remote device, and establishes at least one of a call and connection between the remote device and the another remote device based on the received geographic information.
The disclosure will now be described with reference to the drawings wherein:
While this disclosure is susceptible of embodiment(s) in many different forms, there is shown in the drawings and described herein in detail a specific embodiment(s) with the understanding that the present disclosure is to be considered as an exemplification and is not intended to be limited to the embodiment(s) illustrated.
It will be understood that like or analogous elements and/or components, referred to herein, may be identified throughout the drawings by like reference characters. In addition, it will be understood that the drawings are merely schematic representations of the invention, and some of the components may have been distorted from actual scale for purposes of pictorial clarity.
Referring now to the drawings and in particular to
The social networking system 1000 addresses a need within the art for a social networking platform 1012, such as an “app”, that functions with anonymity, so that a personal interface is the only way to unlock the identity of a user, but that identity can be shared when the user wants to be judged for popularity. The remote device 1010 executes such an app performing such functionality. The remote device 1010 executes the social networking platform 1012 disclosed herein without requiring editing of content by a user on the remote device 1010. Even further, the remote device 1010 executes the social networking platform 1012 disclosed herein without staging; that is, the social networking platform 1012 decides what content is communicated with the other remote devices 1015a, 1015b, 1015c.
The remote device 1010 executes the social networking platform 1012 disclosed herein, which captures audio by implementing background recording and creates audio segments from that background recording that are communicated to one or more of the other remote devices 1015a, 1015b, 1015c for other users to hear. For example, the server computer 1030 receives these audio segments and creates audio compilations and/or shorter audio segments (e.g., removes silent periods from the received audio segments), which are made available for the user that created the audio segments to hear and, in at least one embodiment, made available for others to hear. In at least one other embodiment, the remote device 1010 creates these audio segments and/or the audio compilations. While the social networking platform 1012 is recording, an algorithm extracts audio features from the recorded audio and, if a criterion is met, the social networking platform 1012 isolates, stores, and uploads audio segments based on the algorithm. The remote device 1010 records and chooses these recorded audio segments, via the social networking platform 1012, automatically once the social networking platform 1012 is executed, without requiring the user to pay attention to their remote device 1010, which is typically nearby at all times. The social networking platform 1012 also includes socializing functionality as well as messaging functionality, e.g., data exchange functionality such as text messaging, pictures, other audio, and/or any other data exchange, together with the audio form disclosed herein. The social networking platform 1012 also requests audio segments and/or audio compilations for streaming to the one or more of the other remote devices 1015a, 1015b, 1015c.
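The removal of silent periods mentioned above can be sketched as follows; the sample representation, frame size, and energy threshold are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch: trim silent periods from a recorded audio segment
# to form a shorter segment. The frame size and the mean-square energy
# threshold below are assumptions for illustration only.

def remove_silence(samples, frame_size=1600, threshold=0.01):
    """Return samples with low-energy (silent) frames dropped.

    samples: list of floats in [-1.0, 1.0]; frame_size: samples per frame.
    """
    kept = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(s * s for s in frame) / len(frame)  # mean-square energy
        if energy >= threshold:
            kept.extend(frame)
    return kept
```

A server such as the server computer 1030 could apply this pass before assembling segments into a compilation.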
Thus, the social networking platform 1012 can send created audio segments and/or audio compilations to the server computer 1030 and request audio segments and/or audio compilations from the server computer 1030, dependent upon whether the user is recording and uploading audio segments or desires to listen to audio segments and/or audio compilations, respectively. Further functionality of the remote device 1010 and the one or more of the other remote devices 1015a, 1015b, 1015c is disclosed below with reference to
With reference to
With reference to
The general-purpose computing device 100 also typically includes computer readable media, which can include any available media that can be accessed by computing device 100. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the general-purpose computing device 100. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
When using communication media, the general-purpose computing device 100 may operate in a networked environment via logical connections to one or more remote computers. The logical connection depicted in
The general-purpose computing device 100 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
With reference to
The social networking platform 1012 comprises a user interface which can configure the remote device 1010 and the other remote devices 1015a, 1015b, 1015c. In many instances, the remote device 1010 and/or the other remote devices 1015a, 1015b, 1015c comprise a keypad and a display that is connected through a wired connection with the central processing unit 120. The remote device 1010 and/or the other remote devices 1015a, 1015b, 1015c can participate in the audio social network disclosed herein, either with limited functionality (e.g., smartwatch, smart speaker, etc.) or full functionality (e.g., smart television, personal computer, etc.). The server computer 1030 syncs the audio segments and/or audio compilations across all remote devices 1015 available to a user, whether with limited functionality or full functionality. Of course, with the different communication protocols associated with the network interface 170, the network interface 170 may comprise a wireless device that communicates with the communication network 1020 through a wireless communication protocol (e.g., Bluetooth, RF, Wi-Fi, etc.). In other embodiments, the social networking platform 1012 may comprise a virtual programming module in the form of software that is on, for example, a smartphone in communication with the network interface 170. In still other embodiments, such a virtual programming module may be located in the cloud (or be web based), with access thereto through any number of different computing devices. Advantageously, with such a configuration, a user may be able to communicate with the social networking system 1000 remotely, with the ability to change functionality.
With reference to
Process 430 isolates an audio segment triggered by the audio feature(s) detected in process 420. For example, the remote device 1010 can store a predetermined length of audio on either side of the recognized word to isolate audio surrounding the recognized word, such as, for example, 10 seconds before and 10 seconds after the recognized word, to create the audio segment. Process 430 proceeds to process 440. Process 440 uploads the audio segment isolated in process 430. In an embodiment, the audio segment is uploaded to the server computer 1030. In at least one embodiment, the social networking platform 1012 creates and presents a user with a summary of significant audio segments from the user's day, and in at least one embodiment, in the form of an audio compilation created from these audio segments.
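The windowing of process 430 can be sketched as follows, assuming the recording is held as an in-memory sample array; the sample rate and the trigger time are hypothetical inputs, and the 10-second window on each side follows the example above.

```python
# Illustrative sketch: isolate an audio segment around a recognized word
# or sound. The sample rate and window length are assumptions.

def isolate_segment(samples, trigger_s, rate=16000, window_s=10):
    """Return the samples from window_s seconds before the trigger time
    to window_s seconds after it, clamped to the recording bounds."""
    start = max(0, int((trigger_s - window_s) * rate))
    end = min(len(samples), int((trigger_s + window_s) * rate))
    return samples[start:end]
```

For a trigger near the start of the recording, the window is simply clamped rather than padded.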
With reference to
Process 540 receives a request from a user to publish an audio segment(s) and/or an audio compilation(s). For example, the social networking platform 1012 that created audio segments and communicated the audio segments to the server computer 1030 can send a request to the server computer 1030 to publish and/or unpublish the audio segments and/or audio compilations, which were stored in process 530. In at least one embodiment, the server computer 1030 can stream the audio segments and/or audio compilations, that is either as a whole compilation or as individual audio segment(s) from the audio compilation, to the one or more of the other remote devices 1015a, 1015b, 1015c.
With reference to
Process 520 streams published audio compilations, that is two or more audio segments associated with particular users, respectively, based on geolocation information, time information, and/or any other user and system data. In at least one embodiment, individual audio segments and/or audio compilations are published, uploaded and/or streamed, for others to hear only after a user has reviewed and approved the audio segments and/or audio compilations for publication. For example, the server computer 1030 can formulate audio compilations based on at least one of the geolocation information and time information. The server computer 1030 stores a number of audio segments and/or audio compilations in a storage device, such as in database 1040 (
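One way to sketch the geolocation- and time-based formulation described above follows; the segment record fields, the distance approximation, and the radius and age thresholds are all assumptions for illustration.

```python
import math
import time

# Illustrative sketch: formulate an audio compilation from stored segments
# filtered by recency and by distance to the listener. Field names and the
# threshold defaults below are assumptions, not part of the disclosure.

def approx_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for short distances.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def formulate_compilation(segments, lat, lon, radius_km=5.0,
                          max_age_s=86400, now=None):
    """Pick segments created within max_age_s and radius_km, oldest first."""
    now = time.time() if now is None else now
    picked = [s for s in segments
              if now - s["created"] <= max_age_s
              and approx_km(lat, lon, s["lat"], s["lon"]) <= radius_km]
    return sorted(picked, key=lambda s: s["created"])
```

A server could run such a selection per listener and stream the resulting ordered segments as one compilation.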
In at least one embodiment, the flowchart 500 can further include a process 530. Process 530 includes receiving a “lift” request from a user listening to the streamed audio compilations. In at least one embodiment, the lift request indicates that the user listening to a particular audio segment and/or compilation in process 520, from a particular other user, would like to listen to future audio segments and/or audio compilations from that particular user. For example, the particular user can be associated with a unique identifier (ID). In at least one embodiment, the user wanting to listen to further audio segments and/or audio compilations from the desired particular other user can select that particular user's ID with their social networking platform 1012. Process 530 proceeds to process 540. Process 540 adds the “lift” to the particular user owning the streamed audio compilation such that future audio segments and/or audio compilations will be automatically made available to the user submitting the lift request, irrespective of time information and geolocation information associated with such further audio segments and/or audio compilations.
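The lift mechanism described above can be sketched with a simple subscription table; the identifiers, data structures, and filter callback are assumptions for illustration.

```python
# Illustrative sketch: a "lift" subscribes a listener to a creator's unique
# ID so that the creator's future segments bypass the time and geolocation
# filters. The identifiers below are hypothetical.

lifts = {}  # listener_id -> set of lifted creator IDs

def add_lift(listener_id, creator_id):
    """Record that listener_id wants all future audio from creator_id."""
    lifts.setdefault(listener_id, set()).add(creator_id)

def visible_segments(listener_id, segments, passes_filters):
    """Segments passing the normal filters, plus all from lifted creators."""
    lifted = lifts.get(listener_id, set())
    return [s for s in segments
            if s["creator"] in lifted or passes_filters(s)]
```

Here `passes_filters` stands in for whatever geolocation/time selection the platform applies to non-lifted creators.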
With reference to
In at least one embodiment, the flowchart 700 can include a process 730. Process 730 includes interrupting a call, e.g., ending a call, such as the call set up in process 720, between users based on call duration or based on geolocation information. For example, the server computer 1030 monitors the length of the call established in process 720. The server computer 1030 further monitors real-time geolocation information for the remote device 1010 and the one or more other remote devices 1015a, 1015b, 1015c for which the call was established in process 720. If the call lasts longer than a time threshold, or if the geographic distance between the remote devices exceeds a distance threshold, the server computer 1030 interrupts the call established in process 720.
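The interruption check of process 730 can be sketched as follows; the haversine distance computation and both threshold values are assumptions for illustration.

```python
import math

# Illustrative sketch of the call-interruption check: end the call once it
# exceeds a duration threshold or the two devices move farther apart than
# a distance threshold. Threshold defaults are hypothetical.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def should_interrupt(duration_s, pos_a, pos_b,
                     max_duration_s=300, max_distance_km=2.0):
    """True when the call exceeded its time threshold or the parties'
    real-time geolocations exceed the distance threshold."""
    apart = haversine_km(pos_a[0], pos_a[1], pos_b[0], pos_b[1])
    return duration_s > max_duration_s or apart > max_distance_km
```

A server could poll this check periodically during an established call and tear the call down when it returns True.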
Although the remote device 1010 is described herein as creating audio segments, one skilled in the art would recognize that other devices within the social networking system 1000 can create such audio segments, without departing from the scope of the embodiments. For example, the remote device 1010 can send non-segmented audio to the server computer 1030. In such an embodiment, the server computer 1030 includes functionality disclosed herein for the social networking platform 1012 to create the audio segments. Such a configuration can be implemented for devices with limited processing power, such as for smart speakers, to offload the creation of the audio segments described herein.
Process 2320 includes storing, by a storage device such as the database 1040 and/or the hard disk drive 141, the audio segment from process 2310. Process 2320 proceeds to process 2330. Process 2330 includes receiving, such as by the transceiver 1070, a request to communicate the audio segment received in process 2310 to another remote device, such as one or more of the other remote devices 1015a, 1015b, 1015c. Process 2330 proceeds to process 2340.
Process 2340 includes retrieving, by a processor such as the queue processor and/or the CPU 120, the audio segment from the storage device, such as the database 1040 and/or the hard disk drive 141, in response to the request to communicate the audio segment to the another remote device in process 2330. Process 2340 proceeds to process 2350.
Process 2350 includes communicating, such as by the transceiver 1070, the audio segment to the another remote device, such as one or more of the other remote devices 1015a, 1015b, 1015c, in response to the request in process 2330 to communicate the audio segment to the another remote device.
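Processes 2310 through 2350 can be sketched end to end with an in-memory dictionary standing in for the storage device; the identifiers and API shape are assumptions for illustration.

```python
# Illustrative sketch of the server-side flow of processes 2310-2350, with
# a dictionary standing in for the storage device (database 1040 and/or
# hard disk drive 141). Identifiers are hypothetical.

store = {}  # segment_id -> audio bytes

def receive_and_store(segment_id, audio_bytes):
    """Processes 2310-2320: receive an audio segment and store it."""
    store[segment_id] = audio_bytes

def handle_communicate_request(segment_id):
    """Processes 2330-2350: on a request, retrieve the stored segment for
    communication to the requesting remote device (None if unknown)."""
    return store.get(segment_id)
```

In a deployment, the retrieval result would be handed to the transceiver for streaming or upload to the other remote device.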
The foregoing description merely explains and illustrates the disclosure and the disclosure is not limited thereto except insofar as the appended claims are so limited, as those skilled in the art who have the disclosure before them will be able to make modifications without departing from the scope of the disclosure.
Claims
1-22. (canceled)
23. A method, comprising:
- receiving, by a transceiver and from a first remote device, a plurality of automatically extracted audio segments created by the first remote device, the plurality of automatically extracted audio segments being automatically extracted from audio background recorded by the first remote device;
- storing, by a storage device, the plurality of automatically extracted audio segments;
- formulating, from the audio background recorded by the first remote device, an audio summary compilation;
- receiving, by the transceiver, a request to communicate the audio summary compilation to a second remote device;
- retrieving, by a processor, the audio summary compilation from the storage device in response to the request to communicate the audio summary compilation to the second remote device; and
- communicating, by the transceiver, the audio summary compilation to the second remote device in response to the request to communicate the audio summary compilation to the second remote device.
24. The method according to claim 23, wherein the plurality of automatically extracted audio segments include at least one audio feature and the audio summary compilation includes the at least one audio feature.
25. The method according to claim 24, wherein the at least one audio feature is automatically extracted from the audio recorded by the first remote device based on at least one of word recognition and sound recognition, the plurality of automatically extracted audio segments including audio surrounding at least one of a recognized word and a recognized sound based on at least one of the word recognition and the sound recognition, respectively.
26. The method according to claim 23, wherein a server computer formulates the audio summary compilation.
27. The method according to claim 23, wherein the first remote device formulates the audio summary compilation.
28. The method according to claim 23, wherein the plurality of automatically extracted audio segments include associated geolocation information indicating where the first remote device was located when recording the plurality of automatically extracted audio segments, the method further comprising:
- receiving location information associated with the second remote device; and
- retrieving the plurality of automatically extracted audio segments based on the location information associated with the plurality of automatically extracted audio segments and the location information associated with the second remote device.
29. The method according to claim 23, further comprising:
- filtering the plurality of automatically extracted audio segments based on a time of creation of the plurality of automatically extracted audio segments; and
- communicating the filtered plurality of the automatically extracted audio segments to the second remote device.
30. The method according to claim 23, wherein the communicating comprises at least one of streaming the plurality of automatically extracted audio segments to the second remote device and uploading the plurality of automatically extracted audio segments to the second remote device.
31. The method according to claim 23, wherein the first remote device and the second remote device are at least one of a smart phone, a smart speaker, a portable gaming device, a tablet computer, a personal computer, and a smartwatch.
32. A server computer implementing the method according to claim 23.
33. The method according to claim 23, further comprising:
- receiving geographic information from the first remote device and the second remote device; and
- establishing at least one of a call and connection between the first remote device and the second remote device based on the received geographic information.
34. A device, comprising:
- a transceiver to receive a plurality of automatically extracted audio segments created by a first remote device, receive a request to communicate an audio summary compilation to a second remote device, and communicate the audio summary compilation to the second remote device in response to the request to communicate the audio summary compilation to the second remote device, the plurality of audio segments being automatically extracted from audio background recorded by the first remote device;
- a storage device to store the plurality of audio segments; and
- a processor to formulate, from the audio background recorded by the first remote device, the audio summary compilation and to retrieve the audio summary compilation from the storage device in response to the request to communicate the audio summary compilation to the second remote device.
35. The device according to claim 34, wherein the plurality of audio segments include at least one audio feature and the audio summary compilation includes the at least one audio feature.
36. The device according to claim 35, wherein the processor further to automatically extract the at least one audio feature from the audio recorded by the device based on at least one of word recognition and sound recognition, the plurality of automatically extracted audio segments including audio surrounding at least one of a recognized word and a recognized sound based on at least one of the word recognition and the sound recognition, respectively.
37. The device according to claim 34, wherein the device is a server computer.
38. The device according to claim 34, wherein the plurality of automatically extracted audio segments include associated geolocation information indicating where the first remote device was located when recording the plurality of automatically extracted audio segments, wherein:
- the transceiver further to receive location information associated with the second remote device; and
- the processor further to retrieve the plurality of automatically extracted audio segments based on the location information associated with the plurality of automatically extracted audio segments and the location information associated with the second remote device.
39. The device according to claim 34, wherein:
- the processor further to filter the plurality of automatically extracted audio segments based on a time of creation of the plurality of automatically extracted audio segments; and
- the transceiver further to communicate the filtered plurality of the automatically extracted audio segments to the second remote device.
40. The device according to claim 34, wherein the transceiver at least one of streams the plurality of automatically extracted audio segments to the second remote device and uploads the plurality of automatically extracted audio segments to the second remote device.
41. The device according to claim 34, wherein the first remote device and the second remote device are at least one of a smart phone, a smart speaker, a portable gaming device, a tablet computer, a personal computer, and a smartwatch.
42. The device according to claim 34, wherein the transceiver further receives geographic information from the first remote device and the second remote device, and establishes at least one of a call and connection between the first remote device and the second remote device based on the received geographic information.
Type: Application
Filed: Nov 5, 2020
Publication Date: Feb 25, 2021
Inventors: Bosko Ilic (Belgrade), Vanja Jovicevic (Belgrade), Nemanja Zbiljic (Belgrade), Stefan Brajkovic (Belgrade)
Application Number: 17/090,676