INTUITIVE INTERFACES FOR REAL-TIME COLLABORATIVE INTELLIGENCE
Systems and methods for user interfaces for use on a computing device of a real-time collaborative computing system. A collaborative application runs on the computing device and displays information and data regarding the collaboration system and also receives user input via the user interface. The display interface arrangement may vary based on a type of collaborative session. Graphical user interfaces include a user interface based on a magnetic force model.
This application claims the benefit of U.S. Provisional Application No. 62/012,403 entitled AN INTUITIVE INTERFACE FOR REAL-TIME COLLABORATIVE CONTROL, filed Jun. 15, 2014, which is incorporated in its entirety herein by reference.
This application is a continuation-in-part of U.S. application Ser. No. 14/668,970 entitled METHODS AND SYSTEMS FOR REAL-TIME CLOSED-LOOP COLLABORATIVE INTELLIGENCE, filed Mar. 25, 2015, which in turn claims the benefit of U.S. Provisional Application 61/970,855 entitled METHOD AND SYSTEM FOR ENABLING A GROUPWISE COLLABORATIVE CONSCIOUSNESS, filed Mar. 26, 2014, both of which are incorporated in their entirety herein by reference.
This application is a continuation-in-part of U.S. application Ser. No. 14/708,038 entitled MULTI-GROUP METHODS AND SYSTEMS FOR REAL-TIME MULTI-TIER COLLABORATIVE INTELLIGENCE, filed May 8, 2015, which in turn claims the benefit of U.S. Provisional Application 61/991,505 entitled METHOD AND SYSTEM FOR MULTI-TIER COLLABORATIVE INTELLIGENCE, filed May 10, 2014, both of which are incorporated in their entirety herein by reference.
BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates generally to systems and methods for group collaboration, and more specifically to systems and methods for closed-loop, dynamic group collaboration.
2. Discussion of the Related Art
Portable computing devices, such as cell phones, personal digital assistants, and portable media players have become popular personal devices due to their highly portable nature, their ability to provide accessibility to a large library of stored media files, their interconnectivity with existing computer networks, and their ability to pass information to other portable computing devices and/or to centralized servers through phone networks, wireless networks and/or through local spontaneous networks such as Bluetooth networks. Many of these devices also provide the ability to store and display media, such as songs, videos, podcasts, ebooks, maps, and other related content and/or programming. Many of these devices are also used as navigation tools, including GPS functionality. Many of these devices are also used as personal communication devices, enabling phone, text, picture, and video communication with other similar portable devices. Many of these devices include touch screens, tilt interfaces, voice recognition, and other modern user input modes. As a result, the general social trend within industrial societies is that every person now maintains, or soon will maintain, at least one such multi-purpose electronic device upon his or her person at most times, especially when out and about.
While such devices allow accessing information and person to person communication, they do not provide any unique tools and infrastructure that specifically enable groups of electronically networked individuals to have a real-time group-wise experience that evokes the group's collaborative intent and intelligence (Collaborative Consciousness). Hence, there is a substantial need to provide tools and methods by which groups of individuals, each having a portable computing device upon his or her person, can more easily contribute their personal will/intent to an emerging collaborative consciousness, allowing the group to collectively answer questions or otherwise express their groupwise will in real-time. Furthermore, there is a need to provide tools and methods that enable groups of users to be informed of the group-wise will that is emerging in real-time. The present invention, as described herein, addresses these and other deficiencies present in the art.
SUMMARY OF THE INVENTION

Several embodiments of the invention advantageously address the needs above as well as other needs by providing a display interface displayed by a collaborative software application running on a computing device of a real-time collaborative control system, the display interface comprising: a target board including a plurality of input choices arranged on the target board; a pointer, wherein a location of the pointer on the target board is updated by the collaborative software application; wherein the collaborative software application is configured to repeatedly perform the steps of: receiving user input from a user of the computing device, the user input indicating a user intent for selecting one of the input choices; sending the user input to a central collaboration server communicatively coupled to the computing device; receiving an updated coordinate location of the pointer on the target board from the central collaboration server; and displaying the updated coordinate location of the pointer on the target board.
In another embodiment, the invention can be characterized as a graphical pointer interface for a display interface of a computing device, comprising: a collaborative application running on the computing device and configured to receive user input via the display interface and update the display interface; a pointer having a center and displayed on the display interface, whereby a coordinate location of the pointer is repeatedly updated by the application; and a user input icon displayed on the display interface and configured to receive user input indicating a magnitude and a direction of movement of the pointer.
The above and other aspects, features and advantages of several embodiments of the present invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings.
Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention.
DETAILED DESCRIPTION

The following description is not to be taken in a limiting sense, but is made merely for the purpose of describing the general principles of exemplary embodiments. The scope of the invention should be determined with reference to the claims.
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
As referred to in this specification, “media items” refers to video, audio, streaming and any combination thereof. In addition, the audio subsystem is envisioned to optionally include features such as graphic equalization, volume, balance, fading, bass and treble controls, surround sound emulation, and noise reduction. One skilled in the relevant art will appreciate that the above-cited list of features is not intended to be all-inclusive.
Real-time occurrences as referenced herein are those that are substantially current within the context of human perception and reaction.
As described in related patent application Ser. Nos. 14/668,970 and 14/708,038, the massive connectivity provided by the Internet is used to create a real-time closed-loop collaborative consciousness (or emergent group-wise intelligence) by collecting real-time input from large numbers of people through a novel user interface and processing the collected input from that large number of users into a singular group intent that can answer questions or otherwise take actions or convey will in real-time. The methods use intervening software and hardware to moderate the process, closing the loop around the disparate input from each of the many individual participants and the singular output of the group. In one embodiment, each individual user (“participant”) engages the user interface on a portable computing device 104, conveying his or her individual real-time will in response to a prompt such as a textually displayed (or audibly displayed) question as well as in response to real-time feedback provided to the user of the group's emerging real-time intent. This closes the loop around each user, for he is conveying individual intent while also reacting to the group's emerging intent. Thus each user must be able to see not only the prompt that begins a session, but the real-time group intent as it is forming. For example, if the intent is being conveyed as words, the user will see those words form, letter by letter. If the intent is being conveyed as a direction, the user sees the direction form, degree by degree. If the intent is being conveyed as a choice among objects, the user sees a graphical pointer 210 get closer and closer to a particular chosen object. Thus, the user is seeing the group's will emerge before his eyes, reacting to that will in real-time, and thus contributing to it. This closes the loop, not just around one user, but around all users who have a similar experience on their own individual computing device 104. While the embodiments described generally refer to portable computing devices, it will be understood that non-portable computing devices, such as desktop computers, may also be used.
A collaboration system has been developed that allows a group of users to collaboratively control the graphical pointer 210 in order to collaboratively answer questions or otherwise respond to prompts.
Embodiments of the plurality of portable computing devices 104 and the interaction of the computing devices 104 with the system 100 are previously disclosed in the related patent applications.
The collaboration system 100 comprises a Central Collaboration Server (CCS) 102 that is in communication with the plurality of portable computing devices 104, each portable computing device 104 running the Collaborative Intent Application (CIA), such that the plurality of individual users, each user interacting with one of the plurality of computing devices 104, can provide user input representing a user intent (i.e. the will of the user). The plurality of user inputs is numerically combined to result in a group intent, thus enabling collaborative control of the pointer 210 (or other graphical representation of the group intent) that is manipulated by the group intent to select a target from a group of elements (i.e. input choices) and thereby form collaborative responses. The portable computing devices 104 are in communication with the CCS 102 as shown by the data exchanges 106. In some embodiments, such as a multi-tier architecture, the portable computing devices 104 may communicate with each other. The CCS 102 includes software and additional elements as necessary to perform the required functions. In this application, it will be understood that the term “CCS” may be used to refer to the software of the CCS 102 or other elements of the CCS 102 that are performing the given function.
As disclosed in the related patent applications, in one embodiment each user views a target area 206 as shown below (also referred to as a target board) on a display of his portable computing device 104. Display of the target area 206 is enabled by the CIA of the device 104. In some embodiments the target area 206 comprises the plurality of input choices (e.g. letters, numbers, words, etc.) that can be selected to form a response to a posed query.
In another embodiment, also displayed on the target area 206 is the graphical pointer 210 that selectively moves in relation to the input choices displayed on the target area 206, said motion executed in response to the group intent input of the plurality of users. By collaboratively moving the pointer 210, said plurality of users is enabled to sequentially select targets from the input choices 208 of the target area 206 and thereby produce the collaborative response to the posed query or prompt. In some embodiments, the selection is made when the pointer 210 is positioned on or near the input choice 208 for more than a threshold amount of time. In some embodiments, the pointer 210 is determined to be on or near the input choice 208 if it is within a threshold proximity of the input choice 208. When the target is selected it is added to the emerging answer.
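By way of illustration only, the dwell-time selection logic described above might be implemented as in the following Python sketch; the proximity and time thresholds, and all names shown, are illustrative assumptions rather than values taken from this disclosure:

    import math
    import time

    PROXIMITY = 30.0    # assumed selection radius, in pixels
    DWELL_TIME = 2.0    # assumed time threshold, in seconds

    def update_selection(pointer, choice, state, now=None):
        # Select `choice` once the pointer has stayed within PROXIMITY
        # of it for more than DWELL_TIME seconds; leaving resets the clock.
        now = time.monotonic() if now is None else now
        dist = math.hypot(pointer[0] - choice[0], pointer[1] - choice[1])
        if dist > PROXIMITY:
            state["entered"] = None
            return False
        if state["entered"] is None:
            state["entered"] = now
        return (now - state["entered"]) > DWELL_TIME

    # Called once per display update:
    state = {"entered": None}
    if update_selection((100.0, 98.0), (102.0, 101.0), state):
        pass  # add the selected input choice to the emerging answer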
More specifically, embodiments of the current system 100 enable each of the plurality of users to view, on their own portable computing device 104, the graphical pointer 210 and the target area 206, and enable each of said users to convey the user intent as to the desired direction (and optionally magnitude) of motion of the pointer 210, so as to select one of a plurality of input choices 208 displayed on the target area 206.
The user input is typically represented as a user intent vector, including both a direction and magnitude of the user input. The user intent vector can be input by the user, for example, by tilting his or her computing device 104 in the desired direction. In other embodiments the user intent vector is input by swiping on a touchscreen. The user intent vector is communicated by the CIA running on the user's portable computing device 104 to the Central Collaboration Server (CCS) 102.
The CCS 102 receives the user intent vectors from the plurality of users, and then derives a group intent vector that represents the collective will of the group at that time.
The group intent vector is then used to compute an updated location of the pointer 210 with respect to the target area 206 and input choices 208, the updated location reflecting the collective will of the group.
The updated pointer location is then sent to each of the plurality of computing devices 104 over the network and is used by the CIA software running on said computing devices 104 to update the displayed location of the pointer 210. The result is that each of the plurality of users can watch the pointer 210 move, not based on their own individual input, but based on the overall collective intent of the group.
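The repeated client-side cycle just described (send the user's intent, receive the group-computed pointer location, redraw) can be summarized in the following minimal Python sketch; the network exchange is reduced to stub functions, and the message fields and function names are illustrative assumptions, not the actual CIA implementation:

    import json

    def read_user_input():
        # Stand-in for tilt, swipe, or mouse input on the computing device;
        # returns a user intent vector as (direction_x, direction_y, magnitude).
        return (1.0, 0.0, 0.5)

    def send_to_ccs(message):
        # Stand-in for the network send to the Central Collaboration Server.
        print("to CCS:", json.dumps(message))

    def receive_from_ccs():
        # Stand-in for the network receive; the CCS replies with the
        # group-determined pointer coordinates.
        return {"pointer": {"x": 312.5, "y": 188.0}}

    def draw_pointer(x, y):
        print("pointer drawn at", (x, y))

    def client_update_cycle(session_id):
        # One iteration of the closed loop: send this user's intent,
        # receive the group-computed pointer location, and redraw.
        dx, dy, mag = read_user_input()
        send_to_ccs({"session": session_id,
                     "intent": {"dx": dx, "dy": dy, "mag": mag}})
        reply = receive_from_ccs()
        draw_pointer(reply["pointer"]["x"], reply["pointer"]["y"])

    for _ in range(3):   # in practice this repeats many times per second
        client_update_cycle("room-1")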
The CIA software running on each computing device 104 is configured to display a graphical user interface (also referred to as a display interface or a decoupled control interface) that includes at least one graphical pointer 210 and a plurality of input choices 208. In some embodiments, the graphical pointer 210 is configured to look like a “puck” with a central viewing area that is partially transparent. When the pointer 210 is positioned over one of the input choices 208 such that a targeted input choice is substantially within the viewing area for more than a threshold amount of time, that input choice 208 is selected.
The display interface 200 displayed by the CIA includes the target area 206 on which the plurality of input choices 208 are spatially arranged, the prompt bar 202 in which the current question or prompt is displayed, the response bar 204 in which the emerging response is displayed, and the information bar 212.
As disclosed in related applications, if the pointer 210 is positioned over one input choice 208 for more than a threshold amount of time, that input choice 208 is selected. The same is true for each of the target words (yes, no, maybe). In addition, a “done” target is included, which when selected, indicates that the response being formed by selecting targets is now complete.
The system 100 is configured such that groups of users are enabled to collaboratively control the pointer 210 in response to prompts (for example, questions) that are posed to the group.
In some embodiments a time limit is moderated by the CIA/CCS software such that the users are given a limited amount of time to answer a posed question. In such embodiments, a timer (either numerical or graphical) may be displayed to the users on the display interface indicating how much time is left to answer the given question or prompt. For embodiments where users may be selecting a sequence of letters or words to compose the response, the time limit may be associated with each input choice target selection, thus giving the users a specified time limit for collaboratively making each target selection in the sequence. A second timer may be associated with the collaborative formulation of the complete response. In this way, the system 100 can employ a first targeting timer that limits the time allowed for targeting each input choice 208, and the second response timer that limits the amount of time allowed for the complete response.
The exemplary response timer 222 is displayed in the information bar 212.
Also displayed in the information bar 212 is the synchronicity indication 216 including a synchronicity value for that individual user, the value indicating how collaborative the user is being with respect to other users (as previously described in the related application Ser. Nos. 14/668,970 and 14/708,038).
In addition, each user can be assigned a rank value, the rank value displayed in the rank indication 220 shown in the information bar 212, the rank value being an indication of standing of that user with respect to other users on one or more performance metrics (as disclosed in the related applications). In some embodiments the performance metrics used for computing the rank value include the user score for each user and the user synchronicity for each user. In some embodiments, the number of questions a user has participated in is also used in computing the rank value and/or score value.
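The disclosure leaves the exact combination of these performance metrics open. Purely for illustration, the following Python sketch assumes a simple weighted combination of normalized score, synchronicity, and session count; the weights and normalization are hypothetical:

    def rank_value(score, synchronicity, sessions,
                   w_score=0.5, w_sync=0.4, w_exp=0.1):
        # Hypothetical weighting of the performance metrics named above;
        # score and synchronicity are assumed normalized to the 0..1 range.
        experience = min(sessions / 100.0, 1.0)   # saturates after 100 sessions
        return w_score * score + w_sync * synchronicity + w_exp * experience

    print(rank_value(score=0.8, synchronicity=0.9, sessions=42))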
Using the display interface combined with the means for user input, the plurality of users is enabled to collaboratively control the motion of the pointer 210 to select one or more targets from the input choices 208 in response to prompts, thereby formulating an answer through synchronous real-time collaboration. As disclosed in the related patent applications, the plurality of users, each interacting with one of the plurality of computing devices 104, provides user input, which is numerically combined to enable collaborative control of the pointer 210 that is manipulated by group intents to select targets. As further disclosed in the related patent applications, each user views the target board on the display interface of his own computing device 104, as displayed by the CIA application running on the device, the target board comprising at least the target area 206, the prompt bar 202, and the response bar 204.
After the target selection, the pointer location can be reset at a center of the screen, and the process repeats, allowing the users to select additional letters, numbers, words, etc., building the complete response. Once the response is complete, in some embodiments the response is shown on the session log display interface 400. In some embodiments, the users are shown a rating display interface for providing user input regarding rating of the answer (i.e., expressing their satisfaction with it). In other embodiments, a Tweet or other social media update may be sent out by the CCS software that includes the question, answer and/or statistics or other information or data about the session.
The exemplary session log display interface 400 includes a list of log entries 404, each log entry 404 including a session number 406, a session prompt 408, a session response 410, a user count 412, and a rating 414.
For example, the top log entry 404 includes the session number 406 “00001”, indicating a first session. The session prompt 408 includes the text Q: “What is your favorite color?” The response, as indicated by the session response 410, is A: “Red”. The user count 412 for this session is “244”, indicating that 244 users participated in that session. The rating 414 given to the response is 48%.
The example session log display interface 400 can be a display interface generated by the local CIA on each computing device 104. Alternatively, the session log display interface 400 may be a web page that is accessible to users from around the world. The session log display interface 400 lists the questions and answers from previous sessions, so users can browse and see the responses for the various sessions. This can be a simple list.
The answers can be ordered sequentially (by time and date of the session). In some versions, the answers can be ordered by rating, thus letting people easily browse the highly rated answers. In some versions, all registered users can add to the ratings by browsing the page, not just the users who participated in that particular question/answer session. The rating in some embodiments can be a “thumbs up” indication.
In this way, the session log display interface 400 is a source of entertaining information for users, allowing them to see the historical responses produced by the group-wise collaborative intelligence. In many embodiments, the users who can access the session log display interface 400 and view the content are not limited to those who collaboratively produced answers to questions, thereby allowing a wider pool of users to enjoy the output from the collaborative sessions. In addition, embodiments can be configured in which many “collaboration rooms” operate in parallel, each with a different group of users, each group including users who control the pointer 210 for that group and engage in collaborative decision making. With many groups, each generating their own questions and producing their own collaborative responses, the session log display interface 400 can be configured to post the output from a plurality of groups in a centralized place. This allows a wide range of users to see the collaborative thinking that emerged from the plurality of groups in a fast and easy way. In such embodiments, the session log display interface 400 can display additional data along with each question/answer pair, for example a name of the specific “collaboration room” from which it emerged, a number of users who contributed to the answer, an elapsed time used to collaboratively generate the answer, and one or more measures of synchronicity among the group who produced that answer while producing that answer.
The session log display interface 400 may additionally be configured to allow users who view the session log display interface 400 to rate answers shown on the displayed session log display interface 400 through simple asynchronous polling methods. In this way, the system 100 can employ a combination of the novel synchronous collaboration to generate answers along with more traditional asynchronous rating/polling to let users rate, rank, or otherwise subjectively quantify the quality of the answers.
As described above and in the related applications, the CIA and CCS software are configured to allow users to form collaborative groups enabled to answer the prompt collaboratively through the group-wise, real-time synchronous control method. In some embodiments the CIA/CCS system 100 is enabled to automatically ask questions to the group, selecting from a store of pre-defined questions. This is useful in getting the group started, or when no member of the group poses a question within a certain time limit. Conversely, in many situations the users are eager to ask questions and because only one can be answered at a time (in a particular collaboration room), there can be a backlog of questions and/or competition to get questions asked. Thus, because there may be many collaborating users who may wish to ask a question at any given time, the system 100 can be configured to store pending questions in a question queue. This may be configured as a displayed list of questions, ordered, for example, such that the question at the top is answered next and proceeding downward. In this way, users can pose the question and see where it sits on the list over time, as previously asked questions get answered. This has the benefit of encouraging users to participate for long periods, waiting for their question to reach the top of the queue, at which point it becomes the active question for the group, an indication of which is sent to all the users.
For embodiments that support question queuing functionality, the system 100 can also be configured to order the questions based on factors other than the order in which the questions were submitted. For example, in a preferred embodiment, the system 100 can be configured to give question-asking priority to users who have earned a high score or achieved high ranking during collaborative control sessions, their questions boosted up the queue based on their score. As described in the related patent applications, such scores and/or rankings are generally based on how collaborative the user has been during prior collaborative sessions.
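One plausible realization of such score-boosted question queuing, sketched in Python for illustration only (the specific ordering rule shown is an assumption), is a priority queue keyed on the asker's score with submission order as the tie-breaker:

    import heapq
    import itertools

    _submission_order = itertools.count()   # tie-breaker: first come, first served

    def push_question(queue, question, asker_score):
        # heapq is a min-heap, so negate the score to pop high scorers first.
        heapq.heappush(queue, (-asker_score, next(_submission_order), question))

    def next_question(queue):
        _, _, question = heapq.heappop(queue)
        return question

    queue = []
    push_question(queue, "Will it rain tomorrow?", asker_score=40)
    push_question(queue, "Who wins the big game?", asker_score=75)
    print(next_question(queue))   # the higher-scoring asker's question is next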
In many instances, the user may pose a bad question to the group, because the question is not appropriate or not coherent or simply not interesting to the other users of the group. To address such situations, the system 100 can be configured to allow the group-wise intelligence to select a “bad questions” response to the prompt. This is a spatially arranged element that can be selected by the pointer 210, under group-wise control, and when selected indicates to the system 100 that the group does not want to answer the question. The question is then skipped, so another question can be asked (or pulled off the question queue).
This feature encourages users to ask quality questions. The user knows that if he does not ask a quality question, then the group-wise intelligence may immediately decide to deem it a “bad question” and remove it.
To further encourage users to ask quality questions, the system 100 can also be configured to subtract points from users who ask questions that are deemed “bad questions” by the group. In this way, there is a penalty associated with asking the bad question. Further, if the user's score is used by the system 100 to award the right to ask questions, a user who repeatedly asks bad questions and loses points will get fewer and fewer opportunities to ask additional questions. This enables the collaborative intelligence system 100 to “silence” individual users who are not asking quality questions (or who are deliberately being disruptive).
As was described above, different questions call for different sets of input choices, and thus for different target areas 206.
To create a flexible system that enables this in a clean, intuitive, and easily adjustable way, a novel framework has been devised that employs a plurality of selectable target areas 206, each of said target areas 206 having a different set of input choices that can be collaboratively selected from. In some embodiments the user who asks the question can also indicate which of the selectable target areas 206 should be used to answer the question. In some embodiments, the collaborative group itself is given the ability to select among the selectable target areas 206, thus taking control not just of the selected answer but also of the palette of possible answers.
In many preferred embodiments, both methods are employed such that the user who asks the question can optionally specify which selectable target area 206 to use to answer the question, while at the same time the group can collaboratively override the recommendation and choose a different selectable target area 206.
Exemplary embodiments of the collaboration display interface 500 will now be described.
Once logged into the server 102, in some embodiments the user can join one of the plurality of collaboration rooms, each collaboration room being a separately hosted group of users engaged in the collaborative experience. For example, the server 102 might allow the user to join one of 200 collaboration rooms, each of said rooms supporting up to 30 users who can chat, ask questions, and collaboratively answer questions among themselves. In some embodiments, the rooms are filled in a first-come, first-served manner, new rooms being created when a current room is filled with the maximum number of users. In some embodiments, rooms can be assigned a theme, which is a guideline for the topic to be debated (with questions and answers). For example, some collaboration rooms can be general purpose, some can be sports-related, some can be media-related, some can be finance-related, some can be political, some can be issue-related, etc. In some embodiments, collaboration rooms can be public or private. A public room can be filled with strangers who join in at will. A private room can be filled by invitation. In some embodiments, the user can invite his or her Facebook friends for participation in a custom room. Such a room is ideal for a group of friends asking personal questions. In some embodiments, there is also a single large room that can support hundreds, or thousands, or even millions of users, which is thereby a much larger experience than the small rooms that support 30 users. This large room creates a genuine global collective intelligence and can be assigned a unique name, for example “UNUM” (Unum is Latin for “the one”).
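The first-come, first-served room-filling behavior described above can be sketched in Python as follows; the capacity of 30 users comes from the example above, while the function and variable names are illustrative assumptions:

    ROOM_CAPACITY = 30   # per the 30-user example above

    def join_room(rooms, user):
        # Place the user in the first room with space, creating a new
        # room when every existing room is full.
        for room in rooms:
            if len(room) < ROOM_CAPACITY:
                room.append(user)
                return room
        new_room = [user]
        rooms.append(new_room)
        return new_room

    rooms = []
    for i in range(65):
        join_room(rooms, "user%d" % i)
    print(len(rooms))   # 3 rooms: 30 + 30 + 5 users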
In some embodiments, themed rooms can be designed with themed target areas that are specific to the topic of discussion in the room. For example, a finance related room could employ a specialized target area 206 that includes input choices 208 such as “buy”, “sell”, “hold”, and “short”.
In some embodiments, the CCS 102 stores historical values related to each registered user, said historical values including the number of past sessions that the user participated in, user scores and/or synchronicity values for those sessions, and/or other pieces of data that indicate the user's skill in collaborating. In some such embodiments, certain collaboration rooms are restricted to users who have achieved scores or other metrics that surpass a defined threshold. In this way, some rooms can be filled by the CCS 102 with novice collaborators while other rooms can be filled with experienced collaborators. In some embodiments, users can name the collaboration room, which can also be used as the name of the collaborative intelligence that emerges from that room. In some such embodiments, collaboration rooms of one name can compete with collaboration rooms of another name.
By allowing collaboration rooms to be populated by unique groups of users, each room uniquely named, the system 100 can be configured to allow a first collaboration room to ask a question that is directed at a second collaboration room. That second collaboration room can then answer the question as a group. In this way, two collaboration rooms can hold a conversation and/or debate. This allows one collective intelligence to communicate with and/or debate against another collective intelligence.
In some embodiments, collaboration rooms can be populated by selecting users based in part on personal profile data that is stored upon registration. For example, one collaboration room could be populated by users who self-identify as Democrat. Similarly, one collaboration room can be populated by users who self-identify as Republican. These two collaboration rooms can then be enabled through moderation by the CCS 102 to send questions and/or answers to each other, using the methods disclosed herein. In this way, a “Democratic Collaborative Intelligence” emerging from one collaboration room can hold a conversation with and/or hold a debate against, a “Republican Collaborative Intelligence” emerging from another collaboration room. Similarly, a room filled with Raiders fans can be enabled to hold a sports related conversation with, or hold a sports related debate against a room filled with 49er fans. Similarly a room filled with Stanford alumni can be enabled to hold a conversation with or engage in a debate against a room filled with Harvard alumni. In this way, the present invention allows for groups of likeminded people to pool their intelligence and converse with (and/or argue against) groups of other people, thereby creating an entirely new form of human communication.
As soon as two or more users are present in the collaboration room (i.e. have joined the current session), the users have the ability to chat with each other by typing a message in the user communication area 502 at the top of the prompt bar 202. Any message typed in will be sent to all other users, with an indicator of who said it. This allows groups of people to chat using standard functionality. In addition, users can ask questions, to the whole group, that are intended to be answered collaboratively. The software indicates a time period when the question can be asked by lighting up the ask light icon 506 that is positioned near the message bar. If the ask light icon 506 is shown as lit, the user can enter the question into the user communication area 502, then click the ask light icon 506, and the question is sent to all users. The question appears in the prompt bar 202.
Once the question (i.e. prompt) appears in the prompt bar 202 for all users of the group, the users are instructed to collaboratively control the pointer 210 to answer the question, each providing user input that aims to move the pointer 210 towards one of the provided input choices.
In the yes/no target area 508, input choices such as “yes”, “no”, and “maybe” are spatially arranged for collaborative selection by the group.
Of course, not all questions are yes/no in nature, and thus the present invention provides for other types of target areas 206 that are selectable by the user who asked the question and/or by the groupwise control of the pointer 210. If a yes/no-type question is asked to the group using the display interface 500, the yes/no target area 508 may be used to collaboratively form the answer.
In one embodiment of the present invention, selection by the user of the specific target area 206 employs simple command codes added to the end of the question. For example, the user could type in the question “What do you think of the Rolling Stones?” and then add the command code “/rate” to the end of the string. This command code would be previously set in the CCS software to indicate that the “rate it” target area should be used. Alternatively, target area selection buttons could be provided on the display interface 500 for the user to select.
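Purely as an illustrative Python sketch of such command-code handling (only the “/rate” code is named above; the other codes, the mapping, and the default board are assumptions):

    # Assumed code-to-board mapping; only "/rate" is named in the text above.
    BOARDS = {"rate": "rate it", "spell": "spell it", "yesno": "yes/no"}
    DEFAULT_BOARD = "yes/no"

    def parse_question(text):
        # Split a trailing command code (e.g. "/rate") off the question
        # string and map it to a target-area selection.
        body, _, code = text.rpartition("/")
        if code in BOARDS and body:
            return body.strip(), BOARDS[code]
        return text.strip(), DEFAULT_BOARD

    question, board = parse_question(
        "What do you think of the Rolling Stones? /rate")
    print(question)   # What do you think of the Rolling Stones?
    print(board)      # rate it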
Now the users can answer the question by collaboratively moving the pointer 210 to one of the spatially arranged “rate it” input choices 604. In one example, the pointer 210 moves to a “5 stars” input choice 604 and that answer is broadcast to all the users, as well as, in some embodiments, added to the session log. The answer could optionally be Tweeted® out by the software. It should be noted that this rating is not the average of a number of asynchronous ratings as would be achieved by a simple poll, but is a jointly derived rating that happens through a physical negotiation among the users, arriving at a consensus, not an average. This consensus is a genuine group opinion and not merely the average of a set of individual opinions, thus achieving a true collaborative intelligence.
In this way, questions can be asked that are associated with a group of input choices that are spatially arranged. A sports question could have answers “win”, “lose”, “tie”, “too close to call”, “blow out”, for example. In some embodiments, as previously described, the “bad question” input choice may be included so users can collaboratively reject bad questions.
In yet other embodiments, the system 100 employs the “spell it” target area 702 including the “spell it” input choices 704 that can be selectively chosen, either by the user who asks the question, or by the group (for example, by the group collaboratively moving the pointer 210 over the spell it option in the board menu area 512).
The “spell it” target area 702 includes input choices 704 representing the letters of the alphabet, enabling the group to spell out a response letter by letter.
The “spell it” target area 702 in one embodiment includes punctuation as well as space and backspace, allowing users to write multiple words, or erase letters, through collaborative action. The “spell it” target area 702 includes a “done” input choice so the group can collaboratively decide when the sequence of chosen letters is complete.
In some cases a question posed by one user might not fit any of the predefined sets of input choices provided by any of the available target areas 206, and yet the user does not want to leave the input choice selection open-ended, as with the “spell it” target area 702. To solve this problem, a novel solution has been derived that allows users to quickly ask the question while easily specifying the custom set of input choices 804 to be spatially arranged on the custom target area 802 for selection by the group. This is the “custom board” target area 802.
In use, the user enters the question into the user communication area 502, along with the custom set of input choices to be presented to the group.
The CIA software running on the user's local computer sends a representation of this text to the CCS 102. In response, the CCS 102 sends the question portion of the text to the computing devices 104 of each participating user, for display in the prompt bar 202 on their screen. In addition, the CCS/CIA software crafts the custom target area 802 that is displayed on the display interface 200 of each computing device 104, said custom target area 802 including the custom input choices 804 in a spatially arranged format.
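Since the exact entry syntax is not fixed by the passage above, the following Python sketch assumes, purely for illustration, that the custom input choices follow the question after a hypothetical “/custom” command code, comma-separated:

    def parse_custom_question(text, code="/custom"):
        # Illustrative only: assumes the asker appends a hypothetical
        # "/custom" code followed by a comma-separated list of choices.
        question, _, raw = text.partition(code)
        choices = [c.strip() for c in raw.split(",") if c.strip()]
        return question.strip(), choices

    question, choices = parse_custom_question(
        "Where should we meet? /custom the beach, the park, the mall")
    print(question)   # Where should we meet?
    print(choices)    # ['the beach', 'the park', 'the mall']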
As described in detail in the aforementioned related patent applications, the CIA/CCS software enables the group of users to each impart their own individual input so as to collaboratively control the motion of the graphical pointer 210, said pointer 210 moving under group-wise control to answer questions or otherwise respond to prompts.
In a preferred embodiment, a physically intuitive metaphor is employed such that the pointer 210 is assigned a simulated mass and a simulated damping (friction) with respect to the target area 206. Each user is told that his or her personal user input applies a simulated pull force upon the group-wise pointer 210, said force based on the user intent vector described in the related applications. The pointer 210 then moves in response to a vector sum of the applied forces, the group force vector. The group force vector can be a simple sum (or average) in which each user input is counted equally, or it can be a weighted sum (or average) in which the input from some users has more impact than others. As described in the related applications, the weighting process can be based on user scores earned during previous sessions.
Thus the intuitive conceptual model is provided to users wherein the plurality of user force vectors are applied to the pointer 210 based upon input conveyed by each user into their individual computing device 104. This is achieved by computing and imparting the group force vector upon the pointer 210 that is the sum or average of the user input force vectors. The computing and imparting is performed by the CCS 102, which collects the real-time input from the users, computes the resultant vector, and applies it to a physics-based model controlling the graphical movement of the displayed pointer 210. The physics-based model considers a pointer mass, an environmental damping coefficient, and a current vector motion (velocity and acceleration) of the pointer 210, and determines an updated vector motion of the pointer 210 resulting from the current group force vector. Because the users are continually providing user inputs, the group force vector is repeatedly calculated, the group force vector repeatedly applied, and the vector motion of the pointer 210 repeatedly updated. In some embodiments, this is performed at rates of at least 10 updates per second, but ideally 30 to 60 updates per second. In some embodiments pointer motion is interpolated between updates based on the physics model. Even when no forces are applied by the users, the pointer 210 may maintain momentum and will continue to move for a period of time before stopped by damping.
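A minimal Python sketch of this physics-based update follows; the mass, damping coefficient, update rate, and equal weighting are arbitrary illustrative choices, not values taken from this disclosure:

    MASS = 1.0            # simulated pointer mass (arbitrary)
    DAMPING = 0.8         # environmental damping coefficient (arbitrary)
    DT = 1.0 / 30.0       # 30 updates per second, within the range above

    def group_force(intent_vectors, weights=None):
        # Combine the user intent vectors into one group force vector,
        # as an equal or weighted average.
        weights = weights or [1.0] * len(intent_vectors)
        total = sum(weights)
        fx = sum(w * v[0] for w, v in zip(weights, intent_vectors))
        fy = sum(w * v[1] for w, v in zip(weights, intent_vectors))
        return fx / total, fy / total

    def step(pos, vel, force):
        # F = m * a, with a damping force opposing the current velocity.
        ax = (force[0] - DAMPING * vel[0]) / MASS
        ay = (force[1] - DAMPING * vel[1]) / MASS
        vel = (vel[0] + ax * DT, vel[1] + ay * DT)
        pos = (pos[0] + vel[0] * DT, pos[1] + vel[1] * DT)
        return pos, vel

    pos, vel = (0.0, 0.0), (0.0, 0.0)
    intents = [(1.0, 0.0), (0.7, 0.7), (0.0, -1.0)]   # three users' inputs
    for _ in range(30):                               # one simulated second
        pos, vel = step(pos, vel, group_force(intents))
    print(pos)   # the pointer drifts toward the net group intent

Note that if the users stop applying force, the damping term gradually brings the moving pointer to rest, matching the momentum behavior described above.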
Providing the intuitive conceptual model for group-wise control of the single pointer 210 is helpful, but there is still a need for an intuitive graphical user interface that supports the model, making it natural, intuitive, and fun. The challenge of the pointer interface is that unlike traditional user interfaces where a user's action has a direct and apparent impact on the object they are intending to control (e.g. the pointer 210), this collaborative system 100 is such that the motion of the pointer 210 is not based on the user's input but is based on the group input. Because of this, the user may impart a desire for the pointer 210 to move left at a given moment, but if the group intent is determined from the group of users as a desire for the pointer 210 to move right, the pointer 210 will move right. This can be disconcerting to the user, for the user's input and the motion of the pointer 210 can be significantly misaligned. In fact, users may even wonder if their user input is being considered by the system 100 at all if each user sees no direct evidence of their user input; each user sees only the pointer 210 moving in ways that appear to have no relation to the individual user intent. This is especially true when large numbers of users collaborate, for one user's input may have a very small contribution to the overall group intent. Thus, a significant need exists for intuitive graphical user interface methodologies that allow the individual user to see a result of his or her input, while also making the overall physical metaphor as clear and simple and intuitive as possible. More specifically, there is a substantial need to create a new type of user interface that intuitively links but substantially decouples the representation of each user's personal input from the motion of the collaboratively controlled pointer 210. Some embodiments of intuitive graphical user interface methodologies have been described in the related patent applications.
The graphical magnet pointer interface 900 is a methodology for user input that supports a physically intuitive model for group-wise control of the graphical pointer 210. It employs the magnet icon 904 that is provided to each user for display on their personal computing device 104 (as controlled by the instance of the CIA software running on the user's personal computing device 104). In the embodiment shown, the magnet icon 904 is a “U” shaped magnet icon, but other types of magnet icons can be used, and/or other elements that graphically represent a physical pull force. In this way, each user can see his own magnet on his own screen, said magnet icon 904 being directly responsive to the user input provided by said user. Because the control of the magnet icon 904 is handled locally by the personal computing device 104, the graphical magnet pointer interface is highly responsive and not impacted by communication lag with the CCS 102, thus allowing each user to feel like he has a high-bandwidth highly responsive link into the system 100. The position of the magnet icon 904 on the user's display interface 200 may be controlled by a mouse coupled to the computing device 104 and used by the user, with a conventional mouse arrow icon changing to the magnet icon 904 when the mouse cursor nears the graphical pointer 210 that is also displayed on the display interface 200. The magnet icon 904 is displayed at the location of the mouse arrow icon, but is configured in the software to always point towards the center 910 of the circular pointer 210. Thus as the magnet icon 904 approaches the pointer 210, the magnet icon 904 appears to aim at the pointer center 910 as if the magnet icon 904 is magnetically attracted to the pointer 210.
In addition, the software controlling the magnet icon 904 may be configured to increase the size of the magnet icon 904 as the magnet icon 904 moves closer to the pointer 210, which would imply a larger magnetic force between the magnet icon 904 and the pointer 210. Thus, with a very simple graphical metaphor, the user understands without instruction that he can apply a virtual pull force on the pointer 210 (representing his user intent vector) that aims from the pointer center 910 to the location of the cursor (i.e. the magnet icon 904) controlled by the mouse.
In some embodiments, the magnitude of the user input can be graphically conveyed by how close or how far the user positions the magnet icon 904 relative to the pointer 210. The closer the magnet icon 904 is to the pointer center 910, the stronger the magnitude of the user input (i.e. the “magnetic force”). To make this visually intuitive, the magnet icon 904 increases in size as the magnet icon 904 moves closer to the pointer center 910. Once the magnet icon 904 overlaps the pointer 210, the magnet icon 904 may be limited from getting too close to the pointer center 910 (i.e. from covering a central targeting area of the pointer 210). Thus the magnet icon 904 appears when the input cursor gets within a certain proximity of the pointer 210, increases in size as the cursor nears the pointer 210, and disappears if the cursor gets too close to the pointer center 910.
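One way to realize this appear/grow/disappear behavior is sketched below in Python; the radii and maximum scale are illustrative assumptions:

    import math

    SHOW_RADIUS = 200.0   # assumed: magnet appears within this distance
    HIDE_RADIUS = 25.0    # assumed: magnet hides over the central target area
    MAX_SCALE = 2.0       # assumed maximum draw scale

    def magnet_scale(cursor, pointer_center):
        # Return a draw scale for the magnet icon, or None when it should
        # not be drawn: invisible far away, growing as the cursor nears
        # the pointer, hidden when it would cover the pointer's center.
        d = math.hypot(cursor[0] - pointer_center[0],
                       cursor[1] - pointer_center[1])
        if d > SHOW_RADIUS or d < HIDE_RADIUS:
            return None
        t = (SHOW_RADIUS - d) / (SHOW_RADIUS - HIDE_RADIUS)   # 0 far, 1 near
        return 1.0 + t * (MAX_SCALE - 1.0)

    print(magnet_scale((150.0, 0.0), (0.0, 0.0)))   # modest size far out
    print(magnet_scale((40.0, 0.0), (0.0, 0.0)))    # larger: stronger pull

The same proximity value could also serve as the magnitude of the user intent vector, since in this interface the closeness of the magnet and the strength of the conveyed input are linked.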
The CCS 102 sums the user intent vectors from the plurality of users, computes the group intent vector, uses the group intent vector to apply the group force vector to the simulated physical model of the pointer 210 (mass, damping, etc.), and, based on the physics model, sends updated pointer 210 coordinate information to each computing device 104, each of which then updates the displayed location of the pointer 210.
The result is a satisfying, intuitive, informative, and fun method by which individual users can convey their intent/will upon the graphical pointer 210 that is being controlled not by any one of them individually, but by the group of users who are all applying real-time synchronous control input.
As described previously, some embodiments weight the input from all users equally. In such embodiments, the magnet icons 904 on the display interfaces of all individual users can employ the same mapping between size and distance to the pointer 210. However, for embodiments that weight users differently, magnet size can be scaled accordingly. In this way, the user who is being granted a higher contribution rate to the group due to earning points can see a larger magnet icon 904 on their screen than the user who has been granted a lower contribution rate to the group. This provides visual intuition.
In general, users only see their individual magnet icon 904 on their screen. In some embodiments, however, the system 100 can be configured to allow the user to see a representation of the magnets controlled by other users. In such embodiments “ghost magnet” icons representing user inputs from other users are employed. The ghost magnet icons are largely transparent, thus making the ghost magnet icons easily distinguishable from the user's own magnet icon, and thus preventing the ghost magnet icons from obscuring other important elements on the display interface. If the user is collaborating along with 19 other users, the user might thus see one solid magnet icon 904 (under his own control) and 19 ghost magnet icons that represent the real-time user input being conveyed by the other users. The ghost magnet icon for one of the other users would only appear when that user is positioning his mouse near the representation of the pointer 210 on his display interface 200. The ghost magnet icons in some embodiments may resemble a swarm of bugs hovering around the pointer 210. When all ghost magnet icons are evenly distributed around the pointer 210 (accounting for both magnitude and direction), the net effect cancels out and the pointer 210 does not move. But as the group finds consensus, a majority of the magnet icons would be seen to group themselves on one side of the pointer 210, and the pointer 210 will move. Such a display helps to convey the group-wise behavior of the users which in many ways emulates swarms of bugs or flocks of birds. The ghost magnet paradigm is a graphical representation of this swarm-like behavior.
That said, seeing the ghost magnet icons during the collaborative session could disrupt performance of each individual user, giving each user too much insight into the behavior of the other users, and could even enable the user to game the system 100. Thus, another innovative method is not to show the ghost magnet icons in real-time, during the control of the pointer 210 to answer the question, but instead to store a history of the motion of the plurality of ghost magnet icons and the magnet icon 904 in the CCS 102 and to allow users to see a replay of the session with all instances of magnet icons visible. In this way, the user can participate in the session, seeing only his own magnet icon 904 (representing his user input) and the group-wise pointer 210 that represents the will of the group. The pointer 210 will move (if consensus is achieved) and answer the question. Then, after the group-wise response is crafted and posted for all to see, individual users can ask to see the replay of the session, and in that replay view the history of the magnet icons, showing how the group came to the consensus, thus forming the collaborative intelligence that answered the question.
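A minimal Python sketch of such session recording and replay follows; the storage format and function names are illustrative assumptions, not the CCS's actual implementation:

    import bisect

    def record(history, t, user_id, cursor):
        # Append one time-stamped magnet position; entries are assumed to
        # arrive in time order, as they would during a live session.
        history.append((t, user_id, cursor))

    def replay_window(history, t0, t1):
        # Return every recorded magnet position inside a replay window so
        # a client can re-draw all users' magnets after the session ends.
        times = [entry[0] for entry in history]
        lo = bisect.bisect_left(times, t0)
        hi = bisect.bisect_right(times, t1)
        return history[lo:hi]

    history = []
    record(history, 0.00, "user7", (120, 80))
    record(history, 0.03, "user2", (95, 140))
    for t, uid, pos in replay_window(history, 0.0, 1.0):
        print(t, uid, pos)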
Viewing all magnet icons during the replay (or in real time) has the benefit of revealing to the users how different the real-time group-wise synchronous control system 100 is from an asynchronous poll, for the motion of the group of magnet icons reveals a collaborative process that is not a simple vote but instead a negotiation, with users adjusting their views in real time to converge on a solution that is highly agreeable to the participants. In this way, the current system 100 does not merely collect views and average them, the way a vote would, but encourages the formation of a totally new “group view” that may not reflect the will of any particular individual, but does reflect the view of the group. As a result, the unique system 100 disclosed here can be seen as creating an artificial sentience with its own views and opinions and personality traits that emerge in real time through dynamic negotiation.
While many embodiments are described herein, it is appreciated that this invention can have a range of variations that practice the same basic methods and achieve the novel collaborative capabilities that have been disclosed above. Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
While the invention herein disclosed has been described by means of specific embodiments, examples and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
Claims
1. A collaborative intelligence system comprising:
- a plurality of computing devices each comprising a communications infrastructure coupled to each of a processor, a memory, a display, and a user interface;
- a collaborative software application stored on each memory and configured to run on each processor to: display a target board including a plurality of input choices arranged spatially, and a text prompt received from a collaboration server in networked communication with each of the plurality of computing devices; receive, repeatedly, from the user interface, user input representing a user intent vector; send, repeatedly, a representation of the user intent vector to the collaboration server; receive, repeatedly, a pointer location from the collaboration server; and present, repeatedly, a graphical pointer, wherein a location of the graphical pointer on the target board is updated based on the pointer location;
- wherein the collaboration server is configured to run a collaboration mediation application, the collaboration mediation application configured to: send the text prompt to the plurality of computing devices; receive, repeatedly, the representation of the user intent vector from each of the plurality of computing devices; responsively determine, repeatedly in real-time, the pointer location from the representations of the user intent vectors received from each of the plurality of computing devices; and send, repeatedly in real-time, the pointer location to the plurality of computing devices;
- whereby a closed-loop system is formed between the collaboration server and each collaborative software application.
2. The collaborative intelligence system of claim 1, wherein the text prompt is a question received by the collaboration server from a first user of a first computing device of the plurality of computing devices, said question having been entered by the first user of the first computing device.
3. The collaborative intelligence system of claim 1, wherein the text prompt is a question selected from a queue of questions stored in memory accessible to the collaboration server.
4. The collaborative intelligence system of claim 3, wherein the location of each question in the queue of questions is based at least in part upon a score, a rank, or a performance history of a first user of a first computing device of the plurality of computing devices, the first computing device used to send the question to the collaboration server.
5. The collaborative intelligence system of claim 1, wherein the plurality of input choices are spatially arranged such that they are approximately equidistant from a starting location of the graphical pointer.
6. The collaborative intelligence system of claim 1, wherein the plurality of input choices is received by each of the computing devices from the collaboration server.
7. The collaborative intelligence system of claim 1, wherein the plurality of input choices is selected, by a user of one computing device of the plurality of computing devices, from a menu of a plurality of different sets of input choices.
8. The collaborative intelligence system of claim 1, wherein the system represents dynamics of a simulated physics model including a mass parameter associated with the graphical pointer and force values associated with the user input.
9. The collaborative intelligence system of claim 1, wherein the plurality of input choices includes at least one input choice that when collaboratively selected indicates that the text prompt is rejected.
10. The collaborative intelligence system of claim 1, wherein the plurality of input choices includes a delete answer choice that when collaboratively selected indicates at least a portion of a collaborative response should be deleted.
11. The collaborative intelligence system of claim 1, wherein the plurality of input choices includes a response complete answer choice that when collaboratively selected indicates that a current collaborative response to the text prompt is complete.
12. The collaborative intelligence system of claim 1, wherein spatial locations of the input choices on the target board are based at least in part on information received by each of the computing devices from the collaboration server.
13. The collaborative intelligence system of claim 1, wherein the collaboration server is further configured to determine that one of the plurality of input choices is collaboratively selected from the plurality of input choices if the graphical pointer is collaboratively positioned on or near the one of the plurality of input choices for more than a threshold amount of time.
14. The collaborative intelligence system of claim 13, wherein the graphical pointer is displayed as a graphical puck with a central area and wherein the graphical pointer is determined to be positioned on or near a displayed input choice if the central area is positioned substantially over a location associated with that input choice.
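The dwell-based selection of claims 13 and 14 reduces to a timed proximity test. In the sketch below, the dwell threshold, the "on or near" radius, and the function name are illustrative assumptions; the claims leave those values open.

```python
import math

def is_collaboratively_selected(pointer_center, choice_location, dwell_seconds,
                                near_radius=20.0, threshold_seconds=3.0):
    """Claims 13-14: a choice is selected once the puck's central area has
    been positioned on or near it for more than a threshold amount of time.
    The radius and threshold values here are assumed for illustration."""
    distance = math.dist(pointer_center, choice_location)
    return distance <= near_radius and dwell_seconds > threshold_seconds

print(is_collaboratively_selected((98.0, 2.0), (100.0, 0.0), dwell_seconds=3.2))  # True
```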
15. The collaborative intelligence system of claim 1, wherein each of the plurality of computing devices further includes a tilt sensor coupled to the processor, wherein the collaborative software application is further configured to receive the user input representing the user intent vector using the tilt sensor.
16. The collaborative intelligence system of claim 1, wherein each of the plurality of computing devices further includes a touchscreen coupled to the processor, wherein the collaborative software application is further configured to receive the user input representing the user intent vector using the touchscreen.
17. The collaborative intelligence system of claim 16, wherein the user intent vector is determined based at least in part upon a swipe gesture imparted by a user on the touchscreen.
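A swipe gesture per claim 17 can be reduced to a user intent vector by differencing the touch-down and touch-up coordinates. The saturation length below (`full_swipe_px`) is an assumed calibration constant, not a claimed value.

```python
import math

def swipe_to_intent(start, end, full_swipe_px=100.0):
    """Claim 17: derive the user intent vector from a swipe gesture. The
    magnitude saturates at 1.0 for swipes of full_swipe_px or longer,
    an illustrative normalization choice."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return (0.0, 0.0)
    strength = min(length / full_swipe_px, 1.0)
    return (dx / length * strength, dy / length * strength)

print(swipe_to_intent((10, 10), (110, 10)))  # rightward swipe -> (1.0, 0.0)
```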
18. The collaborative intelligence system of claim 1, wherein the collaborative software application is further configured to display a board menu configured for selecting from a plurality of target board arrangements.
19. The collaborative intelligence system of claim 18, wherein the plurality of target board arrangements includes a board designed for YES/NO questions, a board designed for rating questions, and a board designed for number-line questions.
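The board menu of claims 18 and 19 might be backed by a simple table keyed by board type. The entries below use the three board types named in claim 19; the labels and choice sets are assumptions for illustration.

```python
# Hypothetical table of target board arrangements (claims 18-19).
BOARD_ARRANGEMENTS = {
    "yes_no":      {"choices": ["YES", "NO"]},
    "rating":      {"choices": ["1", "2", "3", "4", "5"]},
    "number_line": {"choices": [str(n) for n in range(0, 11)]},
}

def select_board(board_type):
    """Return the target board arrangement picked from the board menu."""
    return BOARD_ARRANGEMENTS[board_type]

print(select_board("yes_no")["choices"])  # ['YES', 'NO']
```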
20. The collaborative intelligence system of claim 1, wherein the collaboration mediation application is further configured to assign the graphical pointer a simulated mass, said simulated mass used in determining the updated location of the graphical pointer in combination with the user intent vectors received from each of the plurality of computing devices.
21. The collaborative intelligence system of claim 20, wherein each of the user intent vectors received from each of the plurality of computing devices is used to apply a simulated force on the simulated mass, a resulting motion of the simulated mass being used to update the location of the graphical pointer in response to the user intent vectors.
22. The collaborative intelligence system of claim 20, wherein the collaboration mediation application is further configured to assign the graphical pointer a simulated damping, said simulated damping used in determining the updated location of the graphical pointer in response to the user intent vectors received from each of the plurality of computing devices.
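Claims 8 and 20 through 22 recite a simulated physics model in which user intent vectors act as forces on a pointer mass subject to damping. A minimal reading is forward-Euler integration of a damped point mass, sketched below; the mass, damping coefficient, and time step are assumed values.

```python
class PointerPhysics:
    """Damped point-mass model of the graphical pointer (claims 8, 20-22).
    Each user intent vector applies a simulated force on the simulated mass;
    simulated damping opposes the velocity. Constants are illustrative."""

    def __init__(self, mass=1.0, damping=0.8):
        self.mass, self.damping = mass, damping
        self.pos = [0.0, 0.0]
        self.vel = [0.0, 0.0]

    def step(self, intent_vectors, dt=0.05):
        # Net force: sum of the users' forces minus a velocity-proportional damping term.
        fx = sum(v[0] for v in intent_vectors) - self.damping * self.vel[0]
        fy = sum(v[1] for v in intent_vectors) - self.damping * self.vel[1]
        self.vel[0] += (fx / self.mass) * dt   # a = F / m, forward Euler
        self.vel[1] += (fy / self.mass) * dt
        self.pos[0] += self.vel[0] * dt
        self.pos[1] += self.vel[1] * dt
        return tuple(self.pos)                 # updated pointer location

sim = PointerPhysics()
print(sim.step([(1.0, 0.0), (1.0, 0.5)]))      # the group's forces move the mass
```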
23. A decoupled control interface for user interaction with a collaboratively controlled pointer on a computing device, the decoupled control interface comprising:
- a collaborative application running on the computing device and configured to receive user input via a user interface, update a display interface of the computing device, and receive collaboration data over a communication link;
- the collaboratively controlled pointer having a center and being displayed on the display interface, whereby a displayed location of the collaboratively controlled pointer is repeatedly updated by the application in response to the received collaboration data;
- a user input icon having a position and orientation on the display interface, whereby the position and orientation are repeatedly updated by the application in response to the user input; and
- a user interface process configured to determine a desired magnitude and a direction of movement of the collaboratively controlled pointer based on the relative positioning of the user input icon with respect to the collaboratively controlled pointer, wherein the collaboration data received over the communication link originates from a collaboration server that repeatedly updates the collaboration data in response to a desired magnitude and direction of movement received from each of a plurality of computing devices.
24. The decoupled control interface of claim 23, wherein the collaboratively controlled pointer is a circular puck shape and the user input icon is a circular puck shape located within the collaboratively controlled pointer.
25. The decoupled control interface of claim 24, wherein the magnitude is determined based on a distance between the center of the collaboratively controlled pointer and a center of the user input icon.
26. The decoupled control interface of claim 24, wherein the direction is the direction from the center of the collaboratively controlled pointer to the user input icon.
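For the concentric-puck embodiment of claims 24 through 26, the desired magnitude and direction follow directly from the two centers. The sketch below returns the angle in radians from the positive x axis; the unit conventions and function name are assumptions.

```python
import math

def puck_intent(pointer_center, icon_center):
    """Claims 25-26: magnitude is the distance between the collaboratively
    controlled pointer's center and the user input icon's center; direction
    points from the pointer's center toward the icon."""
    dx = icon_center[0] - pointer_center[0]
    dy = icon_center[1] - pointer_center[1]
    magnitude = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)   # radians from the +x axis
    return magnitude, direction

print(puck_intent((0.0, 0.0), (3.0, 4.0)))  # (5.0, ~0.927 rad)
```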
27. The decoupled control interface of claim 23, wherein the user input is based on a physics-based model.
28. The decoupled control interface of claim 27, wherein the physics-based model is a magnetic model.
29. The decoupled control interface of claim 27, wherein the physics-based model is a model including a pointer mass.
30. The decoupled control interface of claim 23, wherein the user input icon is a magnet icon having a longitudinal axis, wherein the longitudinal axis intersects the center of the collaboratively controlled pointer.
31. The decoupled control interface of claim 30, wherein the magnitude is determined based on a distance between the magnet icon and the center of the collaboratively controlled pointer.
32. The decoupled control interface of claim 23, wherein the direction of movement is indicated by an angle between the user input icon and a reference line passing through the center of the collaboratively controlled pointer.
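For the magnet-icon embodiment of claims 30 through 32, one plausible reading is sketched below: the magnet's longitudinal axis is assumed to aim at the pointer's center, so the direction is the angle of the center-to-magnet line against a horizontal reference line, and the magnitude is assumed to grow as the magnet nears the pointer, in keeping with a magnetic force model. The inverse-distance scaling and `max_distance` constant are illustrative assumptions, not claimed values.

```python
import math

def magnet_intent(pointer_center, magnet_pos, max_distance=150.0):
    """Magnet-icon reading of claims 30-32: direction from the angle of the
    center-to-magnet line versus a horizontal reference line through the
    pointer's center; magnitude based on the magnet's distance, assumed
    here to pull harder as the magnet gets closer."""
    dx = magnet_pos[0] - pointer_center[0]
    dy = magnet_pos[1] - pointer_center[1]
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)                           # vs. the reference line
    magnitude = max(0.0, 1.0 - distance / max_distance)  # closer magnet, stronger pull
    return magnitude, angle

print(magnet_intent((0.0, 0.0), (30.0, 40.0)))  # pull of about 2/3 toward the magnet
```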
Type: Application
Filed: Jun 12, 2015
Publication Date: Nov 3, 2016
Patent Grant number: 9940006
Inventor: Louis B. Rosenberg (San Luis Obispo, CA)
Application Number: 14/738,768