VIDEO-RELATED SYSTEM, METHOD AND DEVICE INVOLVING A MAP INTERFACE
A video-related method, system and device are disclosed herein. The method, system and device, in an embodiment, involve processing geographic information associated with participants and processing rating data related to videos. The method, system and device also involve displaying a map interface that displays symbols representing the participants. The symbols vary based, at least in part, on differences in the rating data.
This application is a continuation of, and claims the benefit and priority of, U.S. patent application Ser. No. 15/855,275 filed on Dec. 27, 2017. The entire contents of such application are hereby incorporated herein by reference.
BACKGROUND

It is popular to use mobile devices, such as smartphones, to record videos of various events. For example, people use smartphones to record family trips and activities, sports games, ceremonies, and performances of family members, friends and others in the fields of athletics, education, entertainment and business. Many of these events involve interesting moments that occur over long stretches of time. During the events, it can be difficult to anticipate or predict when these interesting moments will occur. Consequently, even though a viewer may wish to only capture the interesting moments, the viewer must record the entire event to avoid missing the interesting moments. To develop highlight videos, the viewers must edit these videos after the recording, which can be painstaking, time consuming and labor intensive.
Also, while recording the video, it can be difficult to take note of important information. Conventionally, this requires the use of at least two separate tools—the smartphone's video recorder and a separate software program or paper. The viewer operates the video recorder to record the event. Another person, such as a friend or statistician, uses the software program or paper to note the important information regarding the interesting moments.
For example, the statistician might note that a specific participant scored a point or made a particular action.
It can be challenging for two people to manage these separate tools especially in high-paced events. If there is only one person available to view an event, the person may decide not to use one of the tools, losing the opportunity to gain valuable video or event information. Alternatively, the person may attempt to manage both of these tools at the same time. This can cause difficulty, stress, errors and oversights in the video recording process and note-taking process.
Furthermore, there are several shortcomings in the known processes for recording, storing, publishing, finding, rating and acting upon videos of participants in events. The shortcomings include, but are not limited to, the burdens of labor and time required to edit videos after they are recorded, inefficiencies in the processes of the human machine interface, the difficulty to find videos of a desired category, the overuse of data storage centers, the loss of data storage capacity on mobile devices such as smartphones, and the inaccuracies in the event information that is published in connection with videos. These shortcomings result in disadvantages and lost opportunities for viewers who record videos, the event participants and the viewers who watch videos.
The foregoing background describes some, but not necessarily all, of the problems, disadvantages and challenges related to video recording, video management, video access, video-related activities, event reporting, and the pursuits of event participants and viewers.
As illustrated in
The system 10 includes a plurality of computer-readable instructions, software, computer code, computer programs, logic, algorithms, data, data libraries, data files, graphical data and commands that are executable by the processor 14 and the electronic devices 20. In operation, the processor 14 and the electronic devices 20 cooperate with the system 10 to perform the functions described in this description.
In an embodiment, the system 10 includes a video generator 28, interface module 30, publication module 31, participant module 32, verification module 34 and connector module 36. The one or more data storage devices 12 store the system 10 for execution by the processor 14. The electronic devices 20 can access the system 10 over the network 16 to enable users to provide inputs and receive outputs as described below.
In addition, the one or more data storage devices 12 store a downloadable system 11. In an embodiment, the downloadable system 11 includes part or all of the system 10 in a format that is configured to be downloaded and installed onto the electronic devices 20. For example, in an embodiment, the downloadable system 11 includes: (a) a mobile app version of the system 10 that is compatible with the iOS™ mobile operating system; and (b) a mobile app version of the system 10 that is compatible with the Android™ mobile operating system. In an embodiment, the data sources 18 include databases of schools 38, databases of healthcare providers 40, databases of testing organizations 42, databases of benefit sources 44 and databases of sponsors 46.
From time to time in this description, the system 13, which includes the systems 10 and 11 or portions thereof, may be described as performing various functions with the understanding that such functions involve execution by the processor 14, another processor or the electronic devices 20. Depending upon the embodiment, the processor 14 and the electronic devices 20 can include one or more microprocessors, circuits, circuitry, controllers or other data processing devices. Because the system 13 is operable to control the input and output devices of the electronic devices 20, the system 13 may be described herein as generating outputs, displaying interfaces and receiving inputs.
The electronic devices 20 are configured to download, store and execute the downloadable system 11. As illustrated in
There are a variety of different types of users of the programmed devices 120 and the system 13, including, but not limited to, event participants (e.g., students and athletes), family members and friends of event participants, news media professionals and journalists, video producers, schools, colleges, coaches, sponsors of event participants, merchants (e.g., restaurants) and providers (e.g., sports clubs/teams, camp hosts, college recruiters, physical therapists, sports agents, trainers, academic tutors and others).
In an embodiment, the programmed device 120 includes an imaging device configured to record videos and generate images or photographs. The imaging device can include dual cameras or a camera unit with dual lenses (one for front imaging and one for rear imaging) to detect the user's gestures at the front while recording videos of action at the rear. In an embodiment, the imaging device has auto-zoom (zoom-in and zoom-out) functionality to maximize the capture of a tracked participant or wearable item (e.g., the bracelet 508 or shoestring tag 516 described below) that is paired with the programmed device 120.
As illustrated in
Once logged in, the programmed device 120 displays the home interface 54 as illustrated in
In addition, the home interface 54 includes a plurality of icons or symbols at the bottom of the home interface 54. In the example shown, the home interface 54 displays a home symbol 72 that, upon selection, causes the programmed device 120 to display the home interface 54. The home interface 54 also displays a participant map symbol 74, a people follower symbol 76 enabling the user to search for, select and follow other users (e.g., athletes or participants), a video camera symbol 78, and a connection symbol 80, each of which is described below.
It should be appreciated that the home interface 54 can be a mobile app interface, a website, or another online or network-accessible portal or medium, including, but not limited to, a social media, cloud-based platform. For example, the home interface 54 can be the front interface of the YouTube™ online video platform.
As illustrated in
In the embodiment shown in
Also, the user can select the update filter element 97. In response to the user's selection of the update filter element 97, the programmed device 120 displays the update filter interface 105 as illustrated in
In response to the user's selection of one of these event elements, the system 13 changes the event element to correspond to the selected event element. In the example shown, the user selected basketball element 102, the programmed device 120 highlighted the basketball element 98, and the programmed device 120 displayed the basketball element 98 at the top of the event strip 121. In response to the user's selection of one of the gender elements, the system 13 changes the gender element to correspond to the selected gender element. In the example shown, the user selected female element 131, the programmed device 120 highlighted the female element 131, and the programmed device 120 displayed the female element 131 at the top of the gender strip 123. In response to the user's selection of one of the minimum age elements, the system 13 changes the minimum age element to correspond to the selected minimum age element. In the example shown, the user selected minimum age fifteen, the programmed device 120 highlighted the numeral fifteen, and the programmed device 120 displayed the numeral fifteen at the top of the minimum age strip 125. In response to the user's selection of one of the maximum age elements, the system 13 changes the maximum age element to correspond to the selected maximum age element. In the example shown, the user selected maximum age seventeen, the programmed device 120 highlighted the numeral seventeen, and the programmed device 120 displayed the numeral seventeen at the top of the maximum age strip 127. Accordingly, in this example, the user set a custom filter for videos that involve basketball and female participants (i.e., female basketball players) having an age within the range of fifteen to seventeen years old. The update filter interface 105 (
It should be appreciated that the search interface 312 can include or be operatively coupled to a plurality of descriptor categories other than those illustrated in
Returning to the home interface 54 (
As illustrated in
The search interface 312 (
Using conventional (prior art) video platforms like YouTube™, it can be difficult, burdensome and time consuming for recruiters and sports enthusiasts to identify athletes who match a desired profile, such as age, gender, sport type, performance statistic, height, weight, GPA or other descriptors of various descriptor categories. For example, a YouTube™ search for “top 17 year old high school girl basketball players in Cleveland, Ohio” may result in 83,900 results with the first five including: (a) The Best High School Basketball Player From Every State; (b) 7′7 freshman makes varsity debut; (c) 7-Foot-7 190 lbs Freshman; (d) 7′7″ basketball player in Ohio; and (e) Chagrin Falls' senior Hallie Thome named Cleveland.com's Girls Basketball Player of the Year. Four of the top five results do not even involve girl basketball players, and the fifth result involves an eighteen year old girl basketball player. The sought-after player may be buried in the 83,900 results, requiring searchers to spend hours to identify 17 year old girl basketball players in Cleveland, Ohio. The system 13 provides an improvement that overcomes or decreases the effects of this problem. In particular, the search interface 312 (
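The descriptor-driven search described above can be approximated in code. The following Python sketch is a hypothetical illustration only (the field names, sample profiles and filter structure are illustrative assumptions, not taken from the disclosure); it shows how structured descriptor filters narrow results far more precisely than a free-text keyword match:

```python
# Hypothetical sketch of descriptor-based filtering. Field names such as
# "sport", "gender", "age" and "city" are illustrative; the disclosed
# system 13 may organize its descriptor categories differently.

def matches(profile, filters):
    """Return True if a participant profile satisfies every descriptor filter."""
    return (profile["sport"] == filters["sport"]
            and profile["gender"] == filters["gender"]
            and filters["min_age"] <= profile["age"] <= filters["max_age"]
            and profile["city"] == filters["city"])

profiles = [
    {"name": "A", "sport": "basketball", "gender": "female", "age": 17, "city": "Cleveland"},
    {"name": "B", "sport": "basketball", "gender": "male",   "age": 17, "city": "Cleveland"},
    {"name": "C", "sport": "soccer",     "gender": "female", "age": 16, "city": "Cleveland"},
]

filters = {"sport": "basketball", "gender": "female",
           "min_age": 15, "max_age": 17, "city": "Cleveland"}

results = [p["name"] for p in profiles if matches(p, filters)]
print(results)  # only profile "A" satisfies all descriptor filters
```

Because every record either satisfies all descriptor filters or is excluded, a searcher receives only matching participants rather than thousands of loosely related keyword hits.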
The map interface 108 enables recruiters to conveniently investigate the athletes within a desired geography. For example, without the map interface 108, recruiters might avoid traveling to a small town to view a single athlete. With the improvement and advantage provided by the map interface 108, a recruiter can virtually visit small towns and view the videos and information regarding the athletes there. In addition, as described above, the search interface 312 (
As illustrated in
If the user selects the standard mode element 112, the programmed device 120 automatically activates the standard cutback 116 and standard cutforward 120. The standard cutback 116 and standard cutforward 120 are the default values. In the example shown, the value of the standard cutback 116 is set at five seconds, and the value of the standard cutforward 120 is set at two seconds. It should be appreciated that these values can be adjusted by the implementor of the system 13.
If the user selects the manual mode element 114, the programmed device 120 deactivates the default cutback 116 and default cutforward 120, and the programmed device 120 enables the user to enter the desired data (e.g., time values in seconds) in the custom cutback field 118 and custom cutforward field 122. As described further below, the time values established in the recording options interface 110 affect the video clipping process.
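The effect of the cutback and cutforward values on clipping can be sketched as follows. This is an illustrative interpretation only (the disclosed clipping process is described later in the specification), assuming the user's capture input arrives at a known timestamp within the recording:

```python
def clip_window(input_time, cutback=5.0, cutforward=2.0, recording_start=0.0):
    """Compute the start/end times (seconds) of a video clip around a capture input.

    The clip reaches back `cutback` seconds before the input, so footage of a
    pivotal moment that has already happened is retained, and extends
    `cutforward` seconds after it. The defaults mirror the standard-mode
    values of five and two seconds; manual mode would supply custom values.
    """
    start = max(recording_start, input_time - cutback)  # clamp at recording start
    end = input_time + cutforward
    return start, end

print(clip_window(12.0))  # (7.0, 14.0)
print(clip_window(3.0))   # (0.0, 5.0) -- cutback clamped at the recording start
```

A longer custom cutback simply widens the window backward, which is why the manual mode lets a user tune how much pre-input footage each clip preserves.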
In response to the user's selection of the recording features element 124, the programmed device 120 displays the recording features interface 126 as illustrated in
In response to the user's selection of the basic mode element 128, the system 13 activates a basic recording mode 140 as illustrated in
- (a) To activate the recording function of the programmed device 120, the user presses or taps the video camera symbol 78 as illustrated in FIGS. 3A and 6A-6B. In response, the programmed device 120 displays a recording interface 142 as illustrated in FIG. 8.
- (b) To start recording, the user presses and holds the start/stop element 144 (FIG. 7) which, in the example shown, is a wheel symbol. After the user continuously presses the start/stop element 144 for a designated period (e.g., one second), the programmed device 120 animates the start/stop element 144 and starts the recording of the event. In the example shown, the programmed device 120 causes the wheel symbol to spin or rotate. The continuous motion of the wheel symbol indicates that recording is in progress. It should be appreciated that, in other embodiments, the start/stop element 144 can include other animated symbols, such as a spinning basketball, spinning football, spinning baseball, spinning soccer ball, another spinning or moving sports object associated with a particular sport, or a dot or ball that travels clockwise around the perimeter (the path of flash 150).
- (c) To capture video footage 146 (FIG. 8) of the recorded event, the user presses and holds one or more fingers (or another part of the user's body) on the touchscreen 148 (FIG. 9) of the programmed device 120 until the system 13 displays a relatively bright flash 150 (FIG. 10A) located at the perimeter of the recording interface 142. In this embodiment, the programmed device 120 has a designated confirmation period, such as two seconds. The programmed device 120 checks to determine whether the user has made a continuous, intentional input onto the touchscreen 148 for the confirmation period. Once the programmed device 120 confirms that the user has satisfied this condition, the programmed device 120 proceeds to generate the flash 150 and capture the video footage 146. It should be appreciated that, in other embodiments, the programmed device 120 is configured to receive other types of actions or inputs to generate the desired video footage 146, including, but not limited to, voice, audible, retinal, biometric and gesture inputs, user actions, movements of the programmed device 120 relative to other objects, and electronic signals from ancillary devices, sensors or accessories. The flash 150 (FIG. 10A) indicates to the user that the programmed device 120 has successfully received the user's input to generate the desired video footage 146. In an embodiment, the flash 150 is bright white, silver, yellow, orange or red. In another embodiment, the flash 150 is a graphical animation of a rectangular path or line of fire showing a line of red and orange flames in motion. In yet another embodiment, the programmed device 120 displays a sequence of flashes 150 in which the flash 150 quickly changes between illuminated and non-illuminated appearances. After the flashing or flash period ends, the programmed device 120 deactivates the flash 150, returning to the recording interface 142 shown in FIG. 10B.
- (d) To pause or stop the recording, the user presses and holds the start/stop element 144. After the user continuously presses the start/stop element 144 for a designated period (e.g., one second), the programmed device 120 stops the animation of the start/stop element 144 and stops the recording of the event. In the example shown, the programmed device 120 stops the spinning and rotation of the wheel symbol. The stationary display of the wheel symbol indicates that recording has stopped or paused.
- (e) To wrap-up, end or terminate the recording session, the user presses or selects the recording exit element 145. In addition, the user can use his/her hand 152 to cover the rear camera lens 154 of the programmed device 120 as illustrated in FIGS. 11-12. The programmed device 120 checks to determine whether the user has made a continuous, intentional covering of the lens 154 for a confirmation period, such as one second. Once the programmed device 120 confirms that the user has satisfied this condition, the programmed device 120 recognizes an exit input. In an embodiment, in response to an exit input through the exit element 145 or the rear camera lens 154, the programmed device 120 automatically displays a publish decision interface 156 as illustrated in FIG. 13A. The publish decision interface 156 displays a continue recording element 158 and a publish now element 160. Depending upon the embodiment, the publish decision interface 156 can cover or replace the entire recording interface 142, or the publish decision interface 156 can be a pop-up window that overlays only part of the recording interface 142. If the user selects the continue recording element 158, the programmed device 120 displays the recording interface 142. If the user selects the publish now element 160, the programmed device 120 automatically publishes a highlight video having a compilation of select video clips of the video footage 146, or the programmed device 120 enables the user to add information before publishing such video, as described further below. The publish decision interface 156 provides a secondary safeguard against an unintentional stoppage of recording. The confirmation period for the lens covering can serve as a primary safeguard.
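The press-and-hold confirmation used in the steps above can be sketched as a simple timing check. The function below is a hypothetical illustration, not the disclosed implementation; it assumes the touch subsystem reports when a continuous press began:

```python
import time

CONFIRMATION_PERIOD = 2.0  # seconds of continuous input required before capture

def press_confirmed(press_start, now=None, period=CONFIRMATION_PERIOD):
    """Return True once a press has been held continuously for `period` seconds.

    The device would call this repeatedly while the press is held; only after
    it returns True would it show the flash and capture the video footage.
    A brief accidental touch therefore never triggers a capture.
    """
    if now is None:
        now = time.monotonic()
    return (now - press_start) >= period

# A press held for 2.5 seconds satisfies the confirmation period:
print(press_confirmed(press_start=10.0, now=12.5))  # True
# A 1-second tap does not:
print(press_confirmed(press_start=10.0, now=11.0))  # False
```

The same pattern, with a one-second period, covers the start/stop hold and the lens-covering exit input: each is a continuous, intentional action checked against its own confirmation period.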
In response to the user's selection of the advanced mode element 130 (
- (a) To activate the recording function of the programmed device 120, the user presses or taps the video camera symbol 78 as illustrated in FIGS. 3A and 6A-6B. In response, the programmed device 120 displays a recording interface 142 as illustrated in FIG. 16.
- (b) To start recording, the user presses and holds the start/stop element 144 (FIG. 16) which, in the example shown, is a wheel symbol. After the user continuously presses the start/stop element 144 for a designated period (e.g., one second), the programmed device 120 animates the start/stop element 144 and starts the recording of the event. In the example shown, the programmed device 120 causes the wheel symbol to spin or rotate. The continuous motion of the wheel symbol indicates that recording is in progress. It should be appreciated that, in other embodiments, the start/stop element 144 can include other animated symbols, such as a spinning basketball, spinning football, spinning baseball, spinning soccer ball or another spinning or moving sports object associated with a particular sport.
- (c) To generate or capture a video clip while, at the same time, recording the statistic associated with the video clip, the user provides one of the clip-stat commands 164 (FIG. 14), which are multi-functional commands. As shown in FIG. 15, the programmed device 120 stores a plurality of correlations 166 related to the clip-stat commands 164.
- (d) As illustrated in FIG. 16, if the user presses or taps one finger at any single spot 168 on the touchscreen 148, this single-finger input has a one input characteristic associated with a scoring of one point (e.g., a basketball free throw or soccer goal). This causes the programmed device 120 to simultaneously save or record one point and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 16, if the user presses or taps one finger at any single spot 168 on the touchscreen 148, the programmed device 120 simultaneously: (i) saves or records one point; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as a "1" appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (e) As illustrated in FIG. 17, if the user simultaneously presses or taps two fingers at any two spots 170, 172 on the touchscreen 148, this two-finger input has a two input characteristic associated with a scoring of two points (e.g., a basketball field goal). This causes the programmed device 120 to simultaneously save or record two points and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 17, if the user simultaneously presses or taps two fingers on any two spots 170, 172 on the touchscreen 148, the programmed device 120 simultaneously: (i) saves or records two points; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as a "2" appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 17.
- (f) As illustrated in FIG. 18, if the user simultaneously presses or taps three fingers at any three spots 174, 176, 178 on the touchscreen 148, this three-finger input has a three input characteristic associated with a scoring of three points (e.g., a basketball field goal behind the three point arc). This causes the programmed device 120 to simultaneously save or record three points and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 18, if the user simultaneously presses or taps three fingers on any three spots 174, 176, 178 on the touchscreen 148, the programmed device 120 simultaneously: (i) saves or records three points; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as a "3" appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (g) As illustrated in FIG. 19, if the user laterally drags or swipes one or more fingers from left to right or right to left on the touchscreen 148 along a lateral or substantially lateral path 180, the lateral swiping input has a lateral or horizontal input characteristic associated with a lateral or horizontal path of a passed ball (e.g., the passing of a basketball from one player to another player who scores). In an embodiment, this lateral or horizontal input characteristic is associated with the passing or movement of a ball or sports object substantially laterally or horizontally across a court or sports area. In basketball, the user could provide this input when a player passes a ball that results in an assist. This input causes the programmed device 120 to simultaneously save or record one assist and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 19, if the user drags one or more fingers along the substantially lateral path 180, the programmed device 120 simultaneously: (i) saves or records one assist; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as "ASSIST" appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (h) As illustrated in FIG. 20, if the user vertically drags or swipes one or more fingers upward on the touchscreen 148 along an upward or substantially upward path 182, the upward swiping input has a rise, jumping, vertical or upward input characteristic associated with the substantially upward path 182 of the rising motion of a player jumping upward (e.g., the upward jumping of a basketball player to rebound a ball). In an embodiment, this upward input characteristic is associated with the rebounding of a ball or sports object. In basketball, the user could provide this input when a player successfully rebounds a ball. This input causes the programmed device 120 to simultaneously save or record one rebound and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 20, if the user drags one or more fingers along the substantially upward path 182, the programmed device 120 simultaneously: (i) saves or records one rebound; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as "REBOUND" or a symbol thereof appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (i) As illustrated in FIG. 21, if the user simultaneously presses or taps all four fingers (and, optionally, the thumb) at any four spots 184, 186, 188, 190 or more on the touchscreen 148, this four-finger input has a hand input characteristic associated with an entire hand that is typically involved in stealing a ball from an opponent (e.g., a steal in basketball). This input causes the programmed device 120 to simultaneously save or record one steal and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 21, if the user simultaneously presses or taps four fingers on any four spots 184, 186, 188 and 190 on the touchscreen 148, the programmed device 120 simultaneously: (i) saves or records one steal; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as "STEAL" or a symbol thereof appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (j) As illustrated in FIG. 22, if the user presses or taps the palm or base 192 of a fist at any spot 194 on the touchscreen 148, this large surface or fist-shaped input has a powerful or protective input characteristic associated with a fight or action to block or reject an opponent (e.g., a block in basketball). This input causes the programmed device 120 to simultaneously save or record one block and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 22, if the user presses or taps the base 192 of the hand on any spot 194 on the touchscreen 148, the programmed device 120 simultaneously: (i) saves or records one block; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as "BLOCK" or a symbol thereof appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
- (k) As illustrated in FIG. 23, if the user drags or swipes one or more fingers to draw an X by swiping along intersecting paths 196, 198, the X-shaped input has a cancel, error or negative input characteristic associated with a mistake, error or underperformance of a player (e.g., a turnover by a basketball player). In an embodiment, this negative input characteristic is associated with a turnover caused by a basketball, football or other athlete. In basketball, the user could provide this input when a player loses the ball or otherwise performs a turnover. This input causes the programmed device 120 to simultaneously save or record one turnover and generate or capture the associated video clip, as described below. In an embodiment illustrated in FIGS. 14 and 23, if the user drags one or more fingers along the intersecting paths 196, 198, the programmed device 120 simultaneously: (i) saves or records one turnover; (ii) generates or captures the associated video clip, as described below; and (iii) displays a statistics capture confirmation, such as "TURNOVER" or a symbol thereof appearing momentarily on the touchscreen 148 and then disappearing as indicated in FIG. 14.
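The correlations 166 between clip-stat commands and statistics can be modeled as a lookup table. The sketch below is a hypothetical rendering of that mapping (gesture classification and clip capture are stubbed out; the gesture names and data shapes are illustrative assumptions, not the disclosed implementation):

```python
# Hypothetical table of correlations 166: each input characteristic maps to
# the statistic recorded alongside the captured clip (items (d)-(k) above).
CLIP_STAT_COMMANDS = {
    "tap_1_finger":  ("points", 1),     # one-finger tap   -> one point
    "tap_2_fingers": ("points", 2),     # two-finger tap   -> two points
    "tap_3_fingers": ("points", 3),     # three-finger tap -> three points
    "swipe_lateral": ("assists", 1),    # lateral swipe    -> assist
    "swipe_upward":  ("rebounds", 1),   # upward swipe     -> rebound
    "tap_4_fingers": ("steals", 1),     # whole-hand tap   -> steal
    "press_palm":    ("blocks", 1),     # palm/fist press  -> block
    "swipe_x_shape": ("turnovers", 1),  # X-shaped swipe   -> turnover
}

def handle_input(gesture, stats, input_time):
    """Record the statistic for one gesture and return the running tallies.

    Clip capture itself is stubbed: the device would also cut footage around
    input_time using the cutback/cutforward values. The returned confirmation
    tuple stands in for the brief on-screen statistics capture confirmation.
    """
    category, amount = CLIP_STAT_COMMANDS[gesture]
    stats[category] = stats.get(category, 0) + amount
    return stats, (category.upper(), amount)

stats, confirmation = handle_input("tap_2_fingers", {}, input_time=42.0)
print(stats)         # {'points': 2}
print(confirmation)  # ('POINTS', 2)
```

A single table lookup is what makes the commands multi-functional: one gesture simultaneously drives the statistic tally, the clip capture and the confirmation display.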
There are several challenges and difficulties that event attendees encounter when video recording events (e.g., games) while, at the same time, trying to document important statistics regarding the events. First, the attendee experiences a series of emotional rises and falls throughout the event. Often, the pivotal moments in the event can cause the attendee to momentarily lose focus on the video recording or the statistics. These emotions make it more difficult to reliably record all of the important footage of a designated player while also reliably recording all of the important statistics of such player.
The programmed device 120 overcomes or substantially decreases this difficulty by providing several technical advantages. As described further below, the video generator 28 of the programmed device 120 has a clipping logic that enables the attendee to capture important footage after the pivotal moments have occurred. This avoids the burden of trying to remember to cut or clip pivotal moments while the moments are occurring. Also, the correlations 166 of the advanced recording mode 162, described above, enable the attendee to seamlessly capture a video clip and the associated statistic at the same time based on a single input. In addition, the characteristic of the input resembles or relates to the statistic. For example, a tap of one finger relates to a statistic of one point. This provides a cognitive learning and memory advantage by making it easier to remember which type of input to provide for a given statistic. This enhanced human machine interface simplifies the overall process of capturing important video clips and recording important statistics related to the video clips.
In another embodiment illustrated in
In another embodiment illustrated in
In an embodiment, the recording interface 214 enables the user to generate video clips while recording statistics through use of the statistics symbols 216. Depending upon the embodiment, the recording interface 214: (a) displays the solid images of the statistics symbols 216 on top of the recorded imagery; or (b) displays the translucent or partially transparent images of the statistics symbols 216 on top of the recorded imagery.
In an embodiment, the recording interface 214 includes and displays a statistics icon (not shown), such as an image of a clipboard or statistics book. During the recording session, the recording interface 214 displays such statistics icon, and the default is to hide (or otherwise not display) the statistics symbols 216. When the user presses the statistics icon, the recording interface 214 displays or pops-up the statistics symbols 216. This enables the user to select the appropriate statistics symbols 216 to record the applicable statistic.
In various embodiments described above, the type of inputs from the user to the programmed device 120 involves a touching or tapping of the touchscreen 148. It should be appreciated that, in other embodiments, the user can provide alternate types of inputs. In such embodiments, it is not necessary for the programmed device 120 to have a touchscreen 148.
In an embodiment, the system 13 enables the programmed device 120 to receive audio or sound inputs for voice commands. In a setup process, the programmed device 120 enables the user to train the programmed device 120 to recognize sound signatures or unique voice sounds produced by the user. For example, the user can output different oral statements into the microphone of the programmed device 120. The oral statements correspond to different types of statistics, such as “ONE,” “TWO,” “THREE,” “ASSIST,” “REBOUND,” “STEAL,” “BLOCK,” and “TURNOVER.”
In this embodiment, the programmed device 120 includes a comparator that compares the user's unique voice to the environmental sounds, such as the roars of the crowd and voice commands of other attendees in the audience who are using programmed devices 120 on their electronic devices. The comparator identifies the user's voice so that the programmed device 120 does not register non-user sounds as voice commands by the user. In an embodiment, the programmed device 120 includes a sound confusion inhibitor that enables the user to record a unique voice activation sound, such as the first name, last name, initial or jersey number of the particular player for whom the user is recording statistics. For example, the voice activation sound could be “JOHN,” “JUSTICE” or “J.” In such example, the oral statements corresponding to the different types of statistics could be as follows: “J ONE,” “J TWO,” “J THREE,” “J ASSIST,” “J REBOUND,” “J STEAL,” “J BLOCK,” and “J TURNOVER.” If the user does not speak “J” before speaking the applicable statistic, the system 13 will not record such statistic.
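The activation-sound gating described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation; the function name, the keyword table and the assumption that the speech recognizer has already produced a text phrase are all hypothetical.

```python
# Hypothetical sketch: parse a recognized voice phrase into a statistic,
# requiring the unique activation sound (e.g., "J") before the keyword.
STAT_KEYWORDS = {
    "ONE": 1, "TWO": 2, "THREE": 3,
    "ASSIST": "assist", "REBOUND": "rebound",
    "STEAL": "steal", "BLOCK": "block", "TURNOVER": "turnover",
}

def parse_voice_command(phrase, activation="J"):
    """Return the statistic for a phrase like 'J TWO', or None if the
    activation sound is missing or the keyword is unknown."""
    words = phrase.strip().upper().split()
    if len(words) != 2 or words[0] != activation:
        return None  # no activation prefix: treat as a non-user sound
    return STAT_KEYWORDS.get(words[1])
```

In this sketch, a phrase such as “TWO” spoken without the activation sound is ignored, consistent with the behavior described above.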
In an embodiment, the programmed device 120 displays a pop-up or confirmation of the recorded statistic to confirm the statistic that the user input through his/her voice. For example, the system 13 can cause the programmed device 120 to display “ONE POINT” by itself or “ONE POINT” adjacent to a garbage symbol, in which case the user can press the garbage symbol if such statistic is wrong. If the user taps the garbage symbol, the programmed device 120 discards or otherwise does not record such erroneous statistic.
In another embodiment, the programmed device 120 enables the user to provide inputs through physical interaction with the programmed device 120, such as by applying forces to the programmed device 120, accelerating or moving the programmed device 120 or changing the orientation or position of the programmed device 120 (e.g., rotating or twisting the programmed device 120). In such embodiment, the programmed device 120 includes one or more sensors (including, but not limited to, accelerometers) configured to sense or detect forces, light changes, movement or positional change of the programmed device 120. For example, to start or stop a recording session, the system 13 can enable the user to quickly turn the programmed device 120 face up (to start) or face down (to stop). In another example, the system 13 can enable the user to record inputs for different statistics by: (a) sharply tapping the back case of the programmed device 120 one time to record one point; (b) sharply tapping the back case of the programmed device 120 two times to record two points; and (c) sharply tapping the back case of the programmed device 120 three times to record three points.
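The tap-based statistic example above can be sketched as a simple mapping. This is an assumption-laden illustration: it presumes a separate sensing layer has already counted the sharp back-case taps in a burst, and the function name is hypothetical.

```python
# Hypothetical sketch: map a sensed burst of back-case taps to points,
# per the example of one, two or three taps recording one, two or
# three points.
def points_from_taps(tap_count):
    """Return the points statistic for a burst of back-case taps."""
    if tap_count in (1, 2, 3):
        return tap_count
    return None  # unrecognized gesture: record nothing
```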
As described above, the recording options 110 (
In an embodiment, when the user provides an input to generate a video clip, the programmed device 120 displays a cutback pop-up 234 as illustrated in
Referring to
In the examples described, the time increments are seconds. It should be appreciated, however, that the time increments can be milliseconds or any other suitable increment. Also, the programmed device 120 is operable to generate and store the video track 238 at a capture rate within the range of thirty to one thousand frames per second (FPS) or at any other suitable capture rate.
In the example shown, once the recording session starts, the programmed device 120 generates and stores a continuous stream, track or series of timestamps in chronological order based on a suitable time increment. In the example shown, the increment is seconds, and the programmed device 120 generated timestamps one through twenty-three. In this example, the user provided a first clip input at the point of twelve seconds, as indicated by the first arrow A1 shown in
As illustrated in
In another embodiment, the clipping process involves look-rearward and look-forward steps. In the example shown in
Later, the user provided a second clip input at the point of twenty seconds, as indicated by the second arrow B4 shown in
As illustrated in
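The look-rearward and look-forward marking in the example above (a clip input at twelve seconds marking the seven second and fourteen second points; a clip input at twenty seconds marking the fifteen second and twenty-two second points) can be sketched as follows, assuming a rearward window of five seconds and a forward window of two seconds. The function name is hypothetical.

```python
# Sketch of the look-rearward / look-forward marking step, assuming
# R = 5 seconds rearward and F = 2 seconds forward of the clip input.
def mark_clip(input_time, rearward=5, forward=2):
    """Return (rearward_point, forward_point) for a clip input,
    clamping the rearward point at the start of the recording."""
    return (max(0, input_time - rearward), input_time + forward)
```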
In another embodiment, the clipping process involves interference management in addition to the look-rearward and look-forward steps described above. In the example shown in
Later, the user provided a second clip input at the point of fourteen seconds, as indicated by the second arrow C4 shown in
Accordingly, in response to the second clip input at C4, the programmed device 120 checks to determine whether any forward point timestamp has been marked that occurs in time less than five seconds before the second clip input C4. In this case, five seconds before C4 is the nine second point, and the first forward point C3 occurs at the twelve second point. Consequently, the programmed device 120 uses the marker C3 as the data marker for the second rearward point. Therefore, the data marker C3 is associated with both a forward point and a rearward point. At the same time or thereafter, the programmed device 120 flagged, marked or bookmarked the sixteen second point by storing a suitable data marker C5 (
As illustrated in
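The interference-management check described above, in which the marker C3 serves as both a forward point and a rearward point, can be sketched as follows. This is an illustrative sketch under stated assumptions (a five second rearward window, one preceding clip), not the claimed logic.

```python
# Sketch of the interference-management check: if the previous clip's
# forward point falls inside the rearward window of a new clip input,
# reuse that forward point as the new clip's rearward point so that no
# footage between the two clips is mistakenly deleted.
def rearward_point(input_time, prev_forward=None, rearward=5):
    """Return the rearward point for a clip input at `input_time`."""
    window_start = input_time - rearward
    if prev_forward is not None and window_start < prev_forward <= input_time:
        return prev_forward  # marker serves as forward AND rearward point
    return max(0, window_start)
```

Applied to the example above, a second clip input at fourteen seconds with a prior forward point at twelve seconds yields a rearward point of twelve seconds rather than nine.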
Referring to
During the recording session, the programmed device 120 determines whether the user has provided a stop input as indicated by the decision diamond 258. If the answer is yes, the programmed device 120 pauses or stops the recording session, as indicated by step 260, and then awaits another start input as indicated by step 254. If the answer is no, the programmed device 120 continues the recording session.
During the recording session, the programmed device 120 is operable to receive a plurality of different statistic inputs from the user as indicated by step 262. The programmed device 120 stores the statistics (e.g., statistical data) associated with the statistic inputs. The programmed device 120 can save the statistics within a memory device component of the programmed device 120, within a data storage disk operatively coupled to the programmed device 120, or within a data storage device that is remote from the programmed device 120, such as a webserver or data storage device 12 (
Next, the programmed device 120 receives a clip input at an input time point as indicated by step 264. Next, as indicated by step 266, the programmed device 120 performs the following steps: (a) flags or bookmarks the input time point; (b) flags or bookmarks a rearward time point at R seconds (e.g., five seconds) before the input time point; and (c) flags or bookmarks a forward time point at F seconds (e.g., two seconds) after the input time point.
The automatic marking rearward in time and the automatic marking forward in time solve a pervasive problem experienced by typical users of prior art (conventional) recording devices. Users often miss important footage because they start or stop the video recording at the wrong times. For example, to save data storage capacity, users manually decide when to start and stop recording. When distracted, they often press the start button too late, so that the first part of the important footage is lost. Also, they often press the stop button too early, cutting off important footage. The programmed device 120 solves this problem by enabling the user to continuously record, taking advantage of the auto-deletion function described below. While recording, the programmed device 120 automatically captures the valuable moments by causing the clip marking to occur rearward and forward of the user's input time point.
After step 266, the programmed device 120 determines whether the rearward time point precedes the forward time point of the previous video clip, if any, as indicated by decision diamond 268. This step is important to avoid the undesired deletion of previously saved video clips, as described above. If the answer is no, the programmed device 120 proceeds to step 270. If the answer is yes, the programmed device 120 proceeds to step 272.
The answer may be no because there were no previously saved video clips. Also, the answer may be no because the forward time point of the most recently saved video clip is before the rearward time point. In any case, if the answer is no, the programmed device 120 automatically deletes the entire portion of the video track 238 that occurs between the rearward time point and the forward time point of the most recent, preceding video clip as indicated by step 270. If there are no previously saved video clips, the programmed device 120 automatically deletes the entire portion of the video track 238 that occurs before the rearward time point.
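The deletion step 270 can be sketched as computing the span of excess track between the previous clip's forward point and the new rearward point. This is a minimal sketch under the assumptions stated above (time points in seconds, at most one immediately preceding clip); the function name is hypothetical.

```python
# Sketch of auto-deletion step 270: return the (start, end) span of
# the video track 238 to delete, i.e., the excess footage between the
# previous clip's forward point and the new clip's rearward point.
def excess_span(rearward_point, prev_forward=None):
    """Return the span to delete, or None when the new clip abuts or
    overlaps the preceding clip (no excess footage between them)."""
    start = 0 if prev_forward is None else prev_forward
    if start >= rearward_point:
        return None  # nothing between the clips to delete
    return (start, rearward_point)
```

For the first clip, everything before the rearward point is deleted; when the rearward point coincides with the previous forward point, nothing is deleted, consistent with decision diamond 268.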
The programmed device 120 achieves several technical advantages by performing this auto-deletion function. Many events involve one or more relatively short, valuable actions or moments nested among dull, uninteresting or unimportant moments. For example, this is often the case for sports games, school debates, personal interviews and other events that are relatively long in duration. The prior art (conventional) process of editing a video after the recording is finished can be time consuming, painstaking and burdensome. For example, using the prior art process, editing the video tracks of an athlete's performance in a single game to produce a highlight video can take hours. Consequently, many videos with valuable moments are rarely viewed. People do not have the time or patience to watch long videos only to see a few valuable moments in the video. Nonetheless, for the sake of saving the valuable moments, users commonly save the full length of the videos on their prior art (conventional) mobile devices or on prior art (conventional) web servers.
This causes their prior art (conventional) mobile devices to reach maximum storage capacity, often in the midst of an event. Also, when users upload the full length videos to prior art (conventional) webservers, the webserver data centers consume substantial amounts of energy. For example, it has been reported that the data centers of Facebook®, YouTube® and others consume the equivalent of the energy output of numerous coal-fired power plants. Much of this energy goes to powering the webservers and keeping them cool. This energy is known to cause greenhouse gas emissions, resulting in a rising level of global pollution.
As described above, the auto-deletion function of the system 13 helps free-up data storage capacity in electronic devices 120 (e.g., smartphones) and in data storage devices 12 (e.g., webservers). In an embodiment, while the user records an event, the programmed device 120 purges or deletes the portions of the video track that contain dull, uninteresting or unvaluable footage. In such embodiment, the programmed device 120 performs this deletion dynamically during and throughout the recording session. By automatically deleting the excess tracks during the recording session, the programmed device 120 is less likely to reach maximum data storage capacity.
After the deletion step 270, the programmed device 120 proceeds to step 272. At step 272, the programmed device 120 retains or otherwise saves a video clip that is the portion of the video track 238 between the rearward time point and the forward time point. Accordingly, the programmed device 120 captures the applicable video clip of interest to the user. In an embodiment, the programmed device 120 retains such video clip within the video track 238 that is saved by the programmed device 120. In another embodiment, the programmed device 120 generates and saves a copy of such video clip and then deletes the original video clip from the video track 238.
As the recording session continues, the programmed device 120 receives another clip input at another input time point as indicated by step 274. Eventually, the user will be ready to end the recording session, such as at the end of the event. To do so, as indicated by step 276, the user provides a publish input or finish input by providing an input associated with the wrap-up, finalization or publication of a compilation video. Depending upon the embodiment, the user can provide this finish input by pressing the exit element 145 (
In response to the finish input, the programmed device 120 performs the following steps as indicated by step 278: (a) combines and consolidates all of the saved video clips X1, X2, X3 (
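The consolidation of the saved clips into a single compilation can be sketched as mapping each saved span, in chronological order, onto a continuous compilation timeline. This is an illustrative sketch only; the function name and the span representation are hypothetical.

```python
# Sketch of the consolidation step: place each saved clip span from
# the video track 238, in chronological order, onto a continuous
# compilation timeline.
def compile_clips(clips):
    """Given (start, end) spans in track time, return a list mapping
    each source span to its position in the compilation video."""
    timeline, cursor = [], 0
    for start, end in sorted(clips):
        length = end - start
        timeline.append({"source": (start, end),
                         "compilation": (cursor, cursor + length)})
        cursor += length
    return timeline
```

For example, saved clips spanning seconds 7–14 and 15–22 of the track would occupy seconds 0–7 and 7–14 of the compilation video.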
It should be appreciated that, depending upon the embodiment, the programmed device 120 can perform the auto-deletion function during or after the recording session. For example, in an embodiment, the programmed device 120 deletes the track portions EXCESS 1, EXCESS 2, and EXCESS 3 after the recording session ends in response to the finish input provided by the user. Such embodiment addresses the possibility that deleting the excess tracks during the recording session can overload or impair the processor of the programmed device 120 depending upon the power of the processor. For example, by bookmarking during the recording without deleting, the processor of the programmed device 120 will have more power availability to generate the video track 238. By automatically deleting the excess tracks after the recording session, the programmed device 120 is less likely to reach maximum data storage capacity during subsequent recording sessions.
As illustrated in
In an embodiment, the programmed device 120 requires the user or video submitter to input at least one descriptor or a minimum amount of descriptors through the primary video categorizer interface 287. If the video submitter fails to do so, the programmed device 120 blocks, prevents or disables the distribution of the applicable compilation video to the home interface 54 (
In another embodiment, the programmed device 120 requires the user or video submitter to input a minimum amount of descriptors through the primary video categorizer interface 287 and the secondary video categorizer interface 288. If the video submitter fails to do so, the programmed device 120 blocks, prevents or disables the distribution of the applicable compilation video to the home interface 54 (
Referring again to
The public publication interface 290 also displays a sound field or sound symbol. By selecting the sound symbol, the user can upload, download or otherwise capture a desired sound track or musical recording. Depending upon the embodiment, the source of the sound track can be the local data storage of the programmed device 120 or a web server. In an embodiment, once the user captures the sound track, the programmed device 120 automatically: (a) cuts or trims the length of the sound track to match the length of the compilation video 280; and (b) incorporates the sound track into the compilation video 280, replacing the original audio of the compilation video 280 with the sound track.
After these steps, the user can press the public post element 294. In response, the programmed device 120 generates the front video interface 296 as illustrated in
In an embodiment, the participant center interface 308 (
In publishing the compilation video 280, the programmed device 120 transfers the compilation video 280 to the one or more data storage devices 12 (
When the user clicks or selects a compilation video, such as the compilation video 60 (
When the user taps, pauses or finishes watching the compilation video 60, the programmed device 120 displays a flame rating interface 326 as illustrated in
In an embodiment, the system 13 calculates a fire rating 390 (
In an embodiment, the system 13 includes a video auto-deletion function to automatically purge the one or more data storage devices 12 of redundant videos—videos that highlight the same athlete in the same event. This video auto-deletion function reduces clutter and saves storage space in the one or more data storage devices 12. Also, this video auto-deletion function simplifies the home interface 54 (
In an embodiment, the system 13 automatically blocks the publication of compilation videos 280 of such video profile once the time window ends or closes. In this case, the programmed device 120 automatically displays a closed indicator (e.g., “POSTING TIME ENDED” or “CLOSED”) when the user enters enough data in the public publication interface 290 (
In an embodiment, the system 13 enables the athlete highlighted in the winning compilation video 280 to replace such compilation video 280 with an alternate compilation video 280 published by the athlete. This may be desirable, for example, if such athlete is displeased with the quality of the winning compilation video 280. Depending upon the embodiment, the system 13 can also enable such athlete to takedown or delete winning compilation videos 280 that emphasize such player's mistakes or poor or unflattering performance.
In the example illustrated in
In many cases, relatively low profile events, such as amateur sports games and high school games, receive little, if any, media coverage. Many of the events are not broadcast live by news channels. As a result, the participants do not receive timely exposure from the events, which can result in lost opportunities. Furthermore, the information that does circulate, such as a player's statistics or performance, can be inaccurate. For example, a high school team may have a game that is not covered by the local news media. When the game finishes at 9:00 pm on a Friday, a spectator might publish a Twitter™ message, such as “Chris Carlson scores 34 in Brightmore Tigers' win over Glendale Bears!” In this example, such information is false or fake news. The truth is that Chris Carlson scored 22 points, and the Glendale Bears won the game. Such misinformation can cause harm to the reputation and opportunities of the event participants.
In an embodiment, the verification module 34 (
As described above, the public publication interface 290 (
In an embodiment, the verification module 34 includes verification logic that is executable to compare the event data provided by one user for a certain video profile to the event data provided by the other users for the same video profile. If the system 13 determines that the event data of a designated quantity of users match, the system 13 confirms such event data as verified and indicates the verification by displaying a verification indicator 330 (
For example, thirty users may submit thirty compilation videos 280 with the same video profile within one hour after the end of a Friday night high school basketball game, resulting in a sequence of event data submissions one through thirty as follows:
In this example, the system 13 includes a verification factor that requires a minimum of five final score submissions to match each other. Once the first five submissions have matching final scores, the system 13 designates the final score as verified or confirmed. Then, the system 13 automatically either: (a) adds the confirmed event data 316 (
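The matching-submission verification in this example can be sketched as follows, assuming a verification factor of five and that submissions are processed in chronological order. The function name and the string representation of a final score are hypothetical.

```python
# Sketch of the verification logic: confirm the final score once the
# verification factor (e.g., five matching submissions for the same
# video profile) is reached.
from collections import Counter

def verify_final_score(submissions, factor=5):
    """Return the verified final score once `factor` submissions
    agree, or None if no score reaches the threshold."""
    counts = Counter()
    for score in submissions:  # chronological order of submissions
        counts[score] += 1
        if counts[score] >= factor:
            return score
    return None
```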
In another embodiment illustrated in
Next, as indicated by decision block 346, the system 13 determines whether the text extracted from the outcome indicator 342 indicates: (a) zero seconds of remaining game time 347; and (b) a home score 348 and visitor score 350 that match the corresponding data reported with the compilation video 280 submitted by the user. If the answer is no, the programmed device 120 indicates that the confirmation or verification is incomplete as indicated by step 338 and verification failure indicator 339 (
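Decision block 346 can be sketched as a comparison of the extracted scoreboard fields against the user-reported data. This sketch assumes a prior OCR step has already extracted the fields into a dictionary; the field names and function name are hypothetical.

```python
# Sketch of decision block 346: verify that the scoreboard image shows
# zero remaining game time and scores matching the data reported with
# the submitted compilation video 280.
def outcome_verified(ocr_fields, reported_home, reported_visitor):
    """Return True when the extracted outcome indicator confirms the
    reported final score."""
    return (ocr_fields.get("time_remaining") == 0
            and ocr_fields.get("home") == reported_home
            and ocr_fields.get("visitor") == reported_visitor)
```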
In another embodiment illustrated in
The mascot name 364 can be indicated on a banner, on a painted section of a wall, on the outcome indicator 342 or on another physical display medium 366 (
In the example illustrated in
Referring back to
Next, as indicated by decision block 373, the system 13 determines whether the text extracted from the outcome indicator 342 indicates: (a) zero seconds of remaining game time 347; and (b) a home score 348 and visitor score 350 that match the corresponding data reported with the compilation video 280 submitted by the user. If the answer is no, the programmed device 120 indicates that the confirmation or verification is incomplete as indicated by step 367 and verification failure indicator 339 (
As illustrated in
Many types of events, such as sports games, talent shows, theatrical plays and concerts, have venues where relatively large numbers of people attend. At the end of the event, the participants, their friends in the audience and other attendees often are hungry and wish to visit a local restaurant or eatery for a meal or snack. The food providers or restaurants compete with each other for these customers. Oftentimes, restaurants located farther from the venue receive fewer customers from the event than restaurants located closer to the venue.
At or after the end of the event, the system 13 receives, verifies and transfers the event outcome data to the participant module 32 as described above. Referring to
The winner benefit interface 341 displays: (a) the verified event outcome data 344; (b) a win indicator 349, such as “Enjoy a treat for your win!”; (c) an expiration notice 348, such as “Expires at 11:37 pm”; (d) a plurality of award indicators or benefit indicators 350, such as free food items offered by various fast food restaurants; and (e) benefit terms 352, such as “Good for you and 4 friends!”
The loser benefit interface 343 displays: (a) the verified event outcome data 344; (b) a win indicator 354, such as “Enjoy a treat for your effort!”; (c) an expiration notice 348, such as “Expires at 11:37 pm”; (d) a plurality of award indicators or benefit indicators 356, such as food discounts and free food items offered by various fast food restaurants; and (e) benefit terms 358, such as “Good for you and 2 friends!” In this example, the value of the benefit indicators 356 is less than the value of the benefit indicators 350. Similarly, the benefit terms 358 are less favorable than the benefit terms 352. It should be appreciated that, in other examples, the interfaces 341, 343 can have different expiration notices 348 and other differences that grant more favor to the winning registered player than the losing registered player.
With the benefits indicated by the winner benefit interface 341 or the loser benefit interface 343, as applicable, the registered athlete can visit the applicable restaurant, before the applicable expiration time, with companions or friends. Upon arrival, for example, a winning athlete can obtain five orders of large fries for the athlete and four friends. The transaction can be performed through different methods. In an embodiment, the programmed device 120 displays a unique code, such as a unique numeric or alphanumeric code or a scannable code (e.g., a 1D or 2D barcode, such as a QR code or Data Matrix code). In another embodiment, the programmed device 120 generates a signal, such as a radio frequency (“RF”) or infrared radiation (“IR”) signal. In yet another embodiment, to enroll for the benefits indicated at the winner benefit interface 341 and the loser benefit interface 343, the benefit providers or restaurants require the participants to create loyalty card accounts with the restaurants, associating the participants' phone numbers with their accounts. Depending upon the embodiment, the cashiers of the restaurants can ascertain the benefits awarded to the participants by: (a) entering codes provided by the participants; (b) scanning barcodes displayed on the participants' programmed devices 120; (c) establishing an electronic communication between the point of sale machines and the programmed devices 120 to receive signals from the programmed devices 120; (d) entering the participants' phone numbers; or (e) any other suitable benefit transfer method. In an embodiment, each benefit provider (e.g., restaurant) has a webserver, database or benefit source 44 (
In an embodiment, the programmed devices 120 are enabled for near-field communication (“NFC”). For example, the programmed devices 120 can have RF transceivers, NFC protocols and NFC code operable to perform NFC with the point of sale devices of restaurants and other providers. For example, the NFC code can include a mobile wallet app such as Google Wallet™ or Apple Pay™. In another embodiment, the participant module 32 (
As described above, the user can tap or activate the menu element 81 to cause the programmed device 120 to display the features interface 82 (
The system 13 publishes the public zone 360 to the public, and the system 13 blocks public access to the private zone 362. The programmed device 120 enables the participant to provide select people (e.g., trainers, coaches, family members or recruiters) with access to the private zone 362. It should be understood that the video generator 28 (
As illustrated in
As illustrated in
In response to the participant's activation of the highlight video element 378 (
As illustrated in
As illustrated in
As illustrated in
Referring back to
In response to the participant's activation of the lowlight video element 414 (
As illustrated in
As illustrated in
As illustrated in
In response to the get sponsored element 446, the programmed device 120 displays the sponsors interface 448 as illustrated in
It can be difficult for event participants to find suitable partners or assistants for the pursuit of their objectives. For example, it can be challenging for athletes to find suitable AAU teams, sports camps, college recruiters, trainers and other partners. The connector module 36 (
The connector interface 456, shown in
In response to the user's selection of the connection facilitator element 460, the programmed device 120 displays a connection search interface 464 as illustrated in
In the example shown in
Continuing with this example, the user selected the Chicago Blaze club 478. In response, the programmed device 120 displayed the provider interface 480 as illustrated in
In the example shown in
For parents of participants under the age of eighteen, it can be difficult to research and identify suitable organizations for their children. For example, most parents of student athletes rely on word-of-mouth information regarding AAU teams. This is because there is little online information regarding many of these teams, and there is no readily-accessible, reliable resource that provides transparency into the team activities or otherwise facilitates the integrity, accuracy and objectivity of the information. Consequently, parents often mistakenly select AAU teams that are led or coached by adults who are lacking in ethics and competence or who engage in nepotism. This exposes children and youth to hostile environments involving bullying by coaches, embarrassment or ridicule by coaches, poor role models of coaches engaged in fighting, profanity and confrontations with referees and others, physical and psychological abuse by coaches and other acts that are harmful to the self-esteem and development of children and youth. The provider interface 480 (
If the user is interested in matching-up with, contracting with, joining or otherwise connecting with a provider who is listed through listing element 458 (
Conventionally, many providers, such as AAU clubs, are not equipped to accept credit card or electronic payments. They require cash payments. The lack of receipts and handling of cash can cause security and fraud risks for payers. In an embodiment, the user can make one-time payments and periodic payments to the listed providers through the provider profile 500. This provides an improvement in security and convenience for athletes, participants and parents.
In an embodiment, the programmed device 120 is operable to display an item order interface 506 as illustrated in
In an embodiment, the programmed device 120 is operable to display an item order interface 514 as illustrated in
In this embodiment, the shoestring tag 516 includes a body 522 that defines a plurality of fasteners or couplers which, in the example shown, include string receiving holes 524, 526. The body 522 has a downwardly-curved, arc shape as shown. It should be appreciated, however, that the body 522 can be flat, wavy or have any other suitable shape. As shown in
In an embodiment, the electrical element 510 includes: (a) an antenna, transmitter or radiator operable to generate a wireless signal, such as a suitable RF; (b) a receiver operable to receive such a wireless signal; (c) a transceiver operable to generate and receive such a wireless signal; (d) a sensor operable to monitor or detect events and conditions related to the user who is wearing the bracelet 508 or shoestring tag 516 or the environment in which the user is running, walking, standing or participating; or (e) a memory unit operable to store data. In an embodiment, the electrical element 510 includes any suitable combination of the foregoing components. In an embodiment, the sensor has circuitry, including a data processor and memory, configured to sense foot speed, acceleration, impact, stress, fastest speed, the heights of jumps, biometric activity of the wearer and other performance-related factors that occur throughout the game or event.
In an embodiment, the electrical element 510 has circuitry coupled to a miniature battery power source. In another embodiment, the electrical element 510 includes a passive radio-frequency identification (“RFID”) module having: (a) a circuit configured to store and process information and to modulate and demodulate external RF signals; (b) a power receiver operable to receive electrical power from the external RF signals; and (c) a transceiver operable to receive and transmit the RF signals.
The electrical element 510 is configured to communicate with or transmit signals to one or more external transceivers. Depending upon the embodiment, the external transceivers can be components of one or more programmed devices 120 or components of one or more sensors installed in the facility where the wearer is performing. In an embodiment, each external transceiver includes an RF transceiver operable to send high frequency RF signals to, and receive high frequency RF signals from, the electrical element 510.
In operation of an example, an athlete installs the shoestring tag 516 on the athlete's shoe 534 as illustrated in
In another embodiment, the electrical element 510 is configured to generate an energy signature, such as an RF signature, infrared light or other light within the invisible spectrum. In this embodiment, the programmed device 120 has a thermal imaging device, infrared radiation reader, video camera or other sensor that is configured to continuously track and detect the energy signature. Using the energy signature, the video generator 28 (
In an embodiment illustrated in
Depending upon the embodiment, the network 16 can include one or more of the following: a wired network, a wireless network, a LAN, an extranet, an intranet, a WAN (including, but not limited to, the Internet), a virtual private network (“VPN”), an interconnected data path across which multiple devices may communicate, a peer-to-peer network, a telephone network, portions of a telecommunications network for sending data through a variety of different communication protocols, a Bluetooth® communication network, an RF data communication network, an IR data communication network, a satellite communication network or a cellular communication network for sending and receiving data through short messaging service (“SMS”), multimedia messaging service (“MMS”), hypertext transfer protocol (“HTTP”), direct data connection, Wireless Application Protocol (“WAP”), email or any other suitable message transfer service or format.
In an embodiment, such one or more processors (e.g., processor 14) can include a data processor or a central processing unit (“CPU”). Each such one or more data storage devices can include, but is not limited to, a hard drive with a spinning magnetic disk, a Solid-State Drive (“SSD”), a floppy disk, an optical disk (including, but not limited to, a CD or DVD), a Random Access Memory (“RAM”) device, a Read-Only Memory (“ROM”) device (including, but not limited to, programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”) and electrically erasable programmable read-only memory (“EEPROM”)), a magnetic card, an optical card, a flash memory device (including, but not limited to, a USB key with non-volatile memory), any type of media suitable for storing electronic instructions or any other suitable type of computer-readable storage medium. In an embodiment, an assembly includes a combination of: (a) one or more of the databases 12 that store the system 13; and (b) one or more of the foregoing processors (e.g., processor 14).
Referring to
In an embodiment, the system 13 includes computer-readable instructions, algorithms and logic that are implemented with any suitable programming or scripting language, including, but not limited to, C, C++, Java, COBOL, assembler, PERL, Visual Basic, SQL Stored Procedures or Extensible Markup Language (XML). The system 13 can be implemented with any suitable combination of data structures, objects, processes, routines or other programming elements.
In an embodiment, the interfaces displayable by the devices 20 can include GUIs structured based on any suitable programming language. Each GUI can include, in an embodiment, multiple windows, pull-down menus, buttons, scroll bars, iconic images, wizards, the mouse symbol or pointer, and other suitable graphical elements. In an embodiment, the GUIs incorporate multimedia, including, but not limited to, sound, voice, motion video and virtual reality interfaces to generate outputs of the system 13 or the device 20.
In an embodiment, the memory devices and data storage devices described above can be non-transitory mediums that store or participate in providing instructions to a processor for execution. Such non-transitory mediums can take different forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media can include, for example, optical or magnetic disks, flash drives, and any of the storage devices in any computer. Volatile media can include dynamic memory, such as main memory of a computer. Forms of non-transitory computer-readable media therefore include, for example, a floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. In contrast with non-transitory mediums, transitory physical transmission media can include coaxial cables, copper wire and fiber optics, including the wires that comprise a bus within a computer system, a carrier wave transporting data or instructions, and cables or links transporting such a carrier wave. Carrier-wave transmission media can take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during RF and IR data communications.
It should be appreciated that at least some of the subject matter disclosed herein includes or involves a plurality of steps or procedures. In an embodiment, as described, some of the steps or procedures occur automatically or autonomously as controlled by a processor or electrical controller without relying upon a human control input, and some of the steps or procedures can occur manually under the control of a human. In another embodiment, all of the steps or procedures occur automatically or autonomously as controlled by a processor or electrical controller without relying upon a human control input. In yet another embodiment, some of the steps or procedures occur semi-automatically as partially controlled by a processor or electrical controller and as partially controlled by a human.
It should also be appreciated that aspects of the disclosed subject matter may be embodied as a method, device, assembly, computer program product or system. Accordingly, aspects of the disclosed subject matter may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all, depending upon the embodiment, generally be referred to herein as a “service,” “circuit,” “circuitry,” “module,” “assembly” and/or “system.” Furthermore, aspects of the disclosed subject matter may take the form of a computer program product embodied in one or more computer readable mediums having computer readable program code embodied thereon.
Aspects of the disclosed subject matter are described herein in terms of steps and functions with reference to flowchart illustrations and block diagrams of methods, apparatuses, systems and computer program products. It should be understood that each step, function and block of the flowchart illustrations and block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create results and output for implementing the functions described herein.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the functions described herein.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions described herein.
Additional embodiments include any one of the embodiments described above, where one or more of its components, functionalities or structures is interchanged with, replaced by or augmented by one or more of the components, functionalities or structures of a different embodiment described above.
It should be understood that various changes and modifications to the embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present disclosure and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
Although several embodiments of the disclosure have been disclosed in the foregoing specification, it is understood by those skilled in the art that many modifications and other embodiments of the disclosure will come to mind to which the disclosure pertains, having the benefit of the teaching presented in the foregoing description and associated drawings. It is thus understood that the disclosure is not limited to the specific embodiments disclosed herein above, and that many modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although specific terms are employed herein, as well as in the claims which follow, they are used only in a generic and descriptive sense, and not for the purposes of limiting the present disclosure, nor the claims which follow.
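By way of a purely illustrative, non-limiting sketch of the filter-driven map interface described in this disclosure (a filter interface determining which participant symbols the map interface displays), the behavior might be modeled as follows. All names in this sketch (`Participant`, the `sport` field, `visible_symbols`, the sample data) are hypothetical and are not drawn from the disclosed embodiments:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    sport: str          # hypothetical biographic information piece
    latitude: float     # geographic information piece
    longitude: float    # geographic information piece
    rating: float       # rating associated with the participant's video

def visible_symbols(participants, filter_settings):
    """Return only the participants whose attributes match every filter
    setting, mirroring a filter interface that determines which of the
    symbols are displayed by the map interface."""
    return [
        p for p in participants
        if all(getattr(p, key) == value for key, value in filter_settings.items())
    ]

# Hypothetical sample data; positions would drive symbol placement on the map.
players = [
    Participant("A", "soccer", 43.0, -76.1, 4.8),
    Participant("B", "soccer", 43.1, -76.2, 3.1),
    Participant("C", "tennis", 42.9, -76.0, 4.5),
]
shown = visible_symbols(players, {"sport": "soccer"})
```

In such a sketch, selecting one of the returned symbols could then trigger the described outputs, such as playing the associated video or indicating the associated biographic information.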
Claims
1. A method comprising:
- providing a plurality of computer-readable instructions that are executable to cause one or more processors to perform a plurality of steps, wherein the steps comprise: receiving a plurality of biographic information pieces, wherein each of the biographic information pieces is associated with a participant related to one or more events; receiving a plurality of geographic information pieces, wherein each of the geographic information pieces is associated with one of the participants; processing video data associated with a plurality of videos, wherein each of the videos displays one of the participants involved in one of the events; receiving a plurality of rating information pieces, wherein each of the rating information pieces is associated with a rating of one of the videos; causing a search interface to be displayed; receiving data that is input into the search interface;
- based on the data, causing a map interface to be displayed, wherein: the map interface displays a plurality of symbols representing a plurality of the participants; the symbols are positioned relative to each other based on the geographic information pieces; and each of the symbols comprises a characteristic associated with one of the ratings, wherein the characteristics vary depending on a difference between the ratings;
- receiving a selection of one of the symbols; and
- based on the selection, causing one or more outputs to be generated, wherein the one or more outputs comprise one of: a playing of the video associated with the participant represented by the selected symbol; and an indication of the biographic information piece associated with the participant represented by the selected symbol.
2. The method of claim 1, wherein each of the biographic information pieces comprises personal information describing one of the participants.
3. The method of claim 1, wherein each of the geographic information pieces comprises personal information describing a location of one of the participants.
4. The method of claim 1, wherein each of the ratings depends, at least in part, on a liking of one of the videos.
5. The method of claim 1, wherein:
- the search interface comprises a filter interface;
- the computer-readable instructions are executable to cause the one or more processors to receive a plurality of filter settings that are input into the filter interface; and
- depending on the filter settings, determine which of the symbols are displayed by the map interface.
6. The method of claim 1, wherein the characteristics comprise one of size, shape and color.
7. The method of claim 1, wherein:
- a first part of the computer-readable instructions are configured to be stored by a server that comprises one of the processors; and
- a second part of the computer-readable instructions are configured to be stored by an electronic device that comprises another one of the processors.
8. One or more data storage devices comprising:
- a plurality of computer-readable instructions that are executable to cause one or more processors to: process geographic information associated with a plurality of participants; process video data associated with a plurality of videos, wherein each of the videos is associated with one of the participants; process rating data, resulting in a rating associated with each of the videos; receive a search input;
- based on the search input, cause a map interface to be displayed, wherein: the map interface displays a plurality of symbols representing a plurality of the participants; and the symbols vary depending on a difference between the ratings;
- receive a selection of one of the symbols; and cause one or more outputs, based, at least in part, on the selection, wherein the one or more outputs comprise a playing of the video associated with the participant represented by the selected symbol.
9. The one or more data storage devices of claim 8, wherein:
- each of the videos displays one of the participants involved in an event; and
- the geographic information comprises location information describing a location of each of the participants.
10. The one or more data storage devices of claim 9, wherein the computer-readable instructions are executable to cause the one or more processors to:
- process biographic information describing each of the participants; and
- cause the one or more outputs to indicate the biographic information associated with the participant represented by the selected symbol.
11. The one or more data storage devices of claim 9, wherein each of the ratings depends, at least in part, on a liking of one of the videos.
12. The one or more data storage devices of claim 9, wherein the computer-readable instructions are executable to cause the one or more processors to:
- cause a search interface to be displayed;
- receive, through the search interface, at least one of a plurality of filter settings; and
- depending on the at least one filter setting, determine which of the symbols are displayed by the map interface.
13. The one or more data storage devices of claim 9, wherein the computer-readable instructions are executable to cause the one or more processors to:
- establish a first size for a first one of the symbols that is associated with a first one of the ratings; and
- establish a second size for a second one of the symbols that is associated with a second one of the ratings,
- wherein the first size is greater than the second size,
- wherein the first rating is higher than the second rating.
14. A method comprising:
- configuring a plurality of computer-readable instructions so that the computer-readable instructions are executable to cause one or more processors to: process geographic information associated with a plurality of participants; process video data associated with a plurality of videos, wherein each of the videos is associated with one of the participants; process rating data, resulting in a rating associated with each of the videos; receive a search input;
- based on the search input, cause a map interface to be displayed, wherein: the map interface displays a plurality of symbols representing a plurality of the participants; and the symbols vary depending on a difference between the ratings;
- receive a selection of one of the symbols; and
- cause one or more outputs, based, at least in part, on the selection, wherein the one or more outputs comprise a playing of the video associated with the participant represented by the selected symbol.
15. The method of claim 14, wherein:
- each of the videos displays one of the participants involved in an event; and
- the geographic information comprises location information describing a location of each of the participants.
16. The method of claim 14, comprising configuring the computer-readable instructions so that the computer-readable instructions are executable to cause the one or more processors to:
- process biographic information describing each of the participants; and
- cause the one or more outputs to indicate the biographic information associated with the participant represented by the selected symbol.
17. The method of claim 16, wherein each of the ratings depends, at least in part, on a liking of one of the videos.
18. The method of claim 17, comprising configuring the computer-readable instructions so that the computer-readable instructions are executable to cause the one or more processors to:
- cause a search interface to be displayed;
- receive, through the search interface, at least one of a plurality of filter settings; and
- depending on the at least one filter setting, determine which of the symbols are displayed by the map interface.
19. The method of claim 14, comprising configuring the computer-readable instructions so that the computer-readable instructions are executable to cause the one or more processors to:
- establish a first size for a first one of the symbols that is associated with a first one of the ratings; and
- establish a second size for a second one of the symbols that is associated with a second one of the ratings,
- wherein the first size is greater than the second size,
- wherein the first rating is higher than the second rating.
20. The method of claim 14, comprising:
- configuring a first part of the computer-readable instructions to be stored by a server that comprises one of the processors; and
- configuring a second part of the computer-readable instructions to be stored by an electronic device that comprises another one of the processors.
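By way of a purely illustrative, non-limiting sketch of the symbol-sizing limitations recited above (a first symbol associated with a higher rating receiving a greater size than a second symbol associated with a lower rating), a rating-to-size mapping might be expressed as follows. The pixel range and the five-point rating scale are illustrative assumptions, not taken from the disclosure:

```python
def symbol_size(rating, min_size=10.0, max_size=40.0, max_rating=5.0):
    """Map a video rating onto a symbol size, so that a symbol associated
    with a higher rating is established at a greater size than a symbol
    associated with a lower rating. The size and rating bounds are
    hypothetical defaults."""
    rating = max(0.0, min(rating, max_rating))  # clamp to the rating scale
    return min_size + (max_size - min_size) * (rating / max_rating)

# A first symbol with a higher rating receives a greater size than a
# second symbol with a lower rating.
first_size = symbol_size(4.8)
second_size = symbol_size(3.1)
```

Any monotonically increasing mapping would satisfy the recited ordering; a linear interpolation is used here only for concreteness.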
Type: Application
Filed: Dec 7, 2019
Publication Date: Apr 9, 2020
Inventors: Power P. Bornfreedom (Syracuse, NY), Renato L. Smith-Bornfreedom (Syracuse, NY)
Application Number: 16/706,706