COORDINATING A DISPLAY FUNCTION BETWEEN A PLURALITY OF PROXIMATE CLIENT DEVICES
In an embodiment, a control device registers proximate client devices to a coordinated display group and obtains display capability information for each registered client device. The control device determines to initiate a coordinated display session for outputting visual data via the coordinated display group. The registered proximate client devices execute a synchronization procedure to obtain synchronization information by which the control device can derive current relative orientation and position data for each registered proximate client device. The control device maps a different portion of the visual data to respective display screens of the registered proximate client devices based on the display capability information and the synchronization information. The control device delivers the mapped portions of the visual data to the registered proximate client devices for presentation thereon.
The present application for patent claims priority to Provisional Application No. 61/813,891, entitled “COORDINATING A DISPLAY FUNCTION BETWEEN A PLURALITY OF PROXIMATE CLIENT DEVICES”, filed Apr. 19, 2013, by the same inventors as the subject application, assigned to the assignee hereof and hereby expressly incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
Embodiments of the invention relate to coordinating a display function between a plurality of proximate client devices.
2. Description of the Related Art
Wireless communication systems have developed through various generations, including a first-generation analog wireless phone service (1G), a second-generation (2G) digital wireless phone service (including interim 2.5G and 2.75G networks) and a third-generation (3G) high speed data, Internet-capable wireless service. There are presently many different types of wireless communication systems in use, including Cellular and Personal Communications Service (PCS) systems. Examples of known cellular systems include the cellular Analog Advanced Mobile Phone System (AMPS), and digital cellular systems based on Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), the Global System for Mobile access (GSM) variation of TDMA, and newer hybrid digital communication systems using both TDMA and CDMA technologies.
It is typical for client devices (e.g., laptops, desktops, tablet computers, cell phones, etc.) to be provisioned with one or more display screens. However, during playback of visual data (e.g., image data, video data, etc.) each client device is usually limited to outputting the visual data via its own display screen(s). Even where one client device forwards the visual data to another client device, the output of the visual data is typically constrained to the set of display screens connected to one particular client device.
For example, if a group of users has access to multiple display devices (e.g., iPhones, Android phones, iPads, etc.) and the group of users wants to display a big image or video, the group of users must typically use the display device with the biggest display screen. For example, if the group of users collectively has four (4) smart phones and three (3) tablet computers, the group of users will probably select one of the tablet computers for displaying the video or image. As will be appreciated, many of the available display screens go unused in this scenario.
SUMMARY
In an embodiment, a control device registers proximate client devices to a coordinated display group and obtains display capability information for each registered client device. The control device determines to initiate a coordinated display session for outputting visual data via the coordinated display group. The registered proximate client devices execute a synchronization procedure to obtain synchronization information by which the control device can derive current relative orientation and position data for each registered proximate client device. The control device maps a different portion of the visual data to respective display screens of the registered proximate client devices based on the display capability information and the synchronization information. The control device delivers the mapped portions of the visual data to the registered proximate client devices for presentation thereon.
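The registration, synchronization, and mapping flow summarized above can be sketched at a high level as follows. This is a minimal illustration only; the `ClientDevice` and `CoordinatedDisplayGroup` names and their field layouts are hypothetical and not taken from the disclosure:

```python
from dataclasses import dataclass, field


@dataclass
class ClientDevice:
    """One registered proximate client device and its reported capabilities."""
    device_id: str
    width_px: int          # reported display capability
    height_px: int
    x: float = 0.0         # relative position, filled in by synchronization
    y: float = 0.0
    rotation_deg: float = 0.0


@dataclass
class CoordinatedDisplayGroup:
    devices: dict = field(default_factory=dict)

    def register(self, device: ClientDevice) -> None:
        # Registration step: record the device and its display capabilities.
        self.devices[device.device_id] = device

    def synchronize(self, sync_info: dict) -> None:
        # Apply relative orientation and position data obtained by the
        # synchronization procedure to each registered device.
        for device_id, (x, y, rotation_deg) in sync_info.items():
            d = self.devices[device_id]
            d.x, d.y, d.rotation_deg = x, y, rotation_deg
```

With the group populated and synchronized in this way, the control device has everything it needs to map a different portion of the visual data to each screen.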
A more complete appreciation of embodiments of the invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the invention, and in which:
Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
The words “exemplary” and/or “example” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” and/or “example” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiments of the invention” does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.
Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, “logic configured to” perform the described action.
A client device, referred to herein as a user equipment (UE), may be mobile or stationary, and may communicate with a radio access network (RAN). As used herein, the term “UE” may be referred to interchangeably as an “access terminal” or “AT”, a “wireless device”, a “subscriber device”, a “subscriber terminal”, a “subscriber station”, a “user terminal” or UT, a “mobile terminal”, a “mobile station” and variations thereof. Generally, UEs can communicate with a core network via the RAN, and through the core network the UEs can be connected with external networks such as the Internet. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over wired access networks, WiFi networks (e.g., based on IEEE 802.11, etc.) and so on. UEs can be embodied by any of a number of types of devices including but not limited to PC cards, compact flash devices, external or internal modems, wireless or wireline phones, and so on. A communication link through which UEs can send signals to the RAN is called an uplink channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). A communication link through which the RAN can send signals to UEs is called a downlink or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to either an uplink/reverse or downlink/forward traffic channel.
Referring to
Referring to
While internal components of UEs such as the UEs 200A and 200B can be embodied with different hardware configurations, a basic high-level UE configuration for internal hardware components is shown as platform 202 in
Accordingly, an embodiment of the invention can include a UE (e.g., UE 200A, 200B, etc.) including the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor or any combination of software and hardware to achieve the functionality disclosed herein. For example, ASIC 208, memory 212, API 210 and local database 214 may all be used cooperatively to load, store and execute the various functions disclosed herein and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component. Therefore, the features of the UEs 200A and 200B in
The wireless communication between the UEs 200A and/or 200B and the RAN 120 can be based on different technologies, such as CDMA, W-CDMA, time division multiple access (TDMA), frequency division multiple access (FDMA), Orthogonal Frequency Division Multiplexing (OFDM), GSM, or other protocols that may be used in a wireless communications network or a data communications network. As discussed in the foregoing and known in the art, voice transmission and/or data can be transmitted to the UEs from the RAN using a variety of networks and configurations. Accordingly, the illustrations provided herein are not intended to limit the embodiments of the invention and are merely to aid in the description of aspects of embodiments of the invention.
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Generally, unless stated otherwise explicitly, the phrase “logic configured to” as used throughout this disclosure is intended to invoke an embodiment that is at least partially implemented with hardware, and is not intended to map to software-only implementations that are independent of hardware. Also, it will be appreciated that the configured logic or “logic configured to” in the various blocks are not limited to specific logic gates or elements, but generally refer to the ability to perform the functionality described herein (either via hardware or a combination of hardware and software). Thus, the configured logics or “logic configured to” as illustrated in the various blocks are not necessarily implemented as logic gates or logic elements despite sharing the word “logic.” Other interactions or cooperation between the logic in the various blocks will become clear to one of ordinary skill in the art from a review of the embodiments described below in more detail.
The various embodiments may be implemented on any of a variety of commercially available server devices, such as server 400 illustrated in
It is typical for client devices (e.g., laptops, desktops, tablet computers, cell phones, etc.) to be provisioned with one or more display screens. However, during playback of visual data (e.g., image data, video data, etc.) each client device is usually limited to outputting the visual data via its own display screen(s). Even where one client device forwards the visual data to another client device, the output of the visual data is typically constrained to the set of display screens connected to one particular client device.
For example, if a group of users has access to multiple display devices (e.g., iPhones, Android phones, iPads, etc.) and the group of users wants to display a big image or video, the group of users must typically use the display device with the biggest display screen. For example, if the group of users collectively has four (4) smart phones and three (3) tablet computers, the group of users will probably select one of the tablet computers for displaying the video or image due to their larger display screen area. As will be appreciated, many of the available display screens go unused in this scenario.
Embodiments of the invention are directed to methods for quickly forming and utilizing ad-hoc aggregated displays based upon dynamically discovering relative position and orientation information pertaining to the individual displays in the ad-hoc created display group. More specifically, embodiments are directed to client applications configured for execution on a set of proximate client devices for implementing a coordinated display session, and a master application running on a “control device” for managing the coordinated display session (e.g., a central server, one of the proximate client devices that is engaged in the coordinated display session or another proximate client device that is not engaged in the coordinated display function). For example, modern mobile devices (e.g., large-display smartphones, tablets, etc.) can be kept or held adjacent to each other to form a large aggregate display screen. The master application can utilize this large aggregate display screen to facilitate a group-render function where visual data spans across the large aggregate display screen as if it were a single display screen, as will be explained in more detail below.
Referring to
In the embodiment of
At some later point in time, the master application identifies visual data to be displayed in proximity to the client devices 1 . . . N by the coordinated display group via a coordinated display session, 530. For example, at 530, a user of one or more of the client devices 1 . . . N may desire to output a video via an aggregated display screen that leverages the display screens on two or more of the client devices 1 . . . N. In a further example, while not shown in
In response to the determination to implement the coordinated display session via the coordinated display group at 530, the master application receives synchronization information that indicates current relative orientation and position data for each of client devices 1 . . . N, 535. Examples of how the master application can obtain the synchronization information at 535 are described below in more detail with respect to
After obtaining the synchronization information at 535, the master application maps a different portion of the visual data to a respective display screen of client devices 1 . . . N based on (i) the display capability information of client devices 1 . . . N as reported at 505 and 520, and (ii) the synchronization information received at 535, 540. In an example, client devices 1 . . . N may correspond to less than all of the client devices that previously registered to the coordinated display group between 500-525. For example, one or more registered client devices may be out of position or have moved out of proximity so as to fail to satisfy a proximity condition (e.g., see
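The mapping at 540 can be illustrated with a short sketch. It assumes (purely for illustration) that the synchronization procedure yields, for each device, a rectangle `(x, y, w, h)` in a shared physical coordinate space, and that the visual data is scaled to cover the aggregate bounding box, so each device receives the sub-rectangle of the frame overlapping its own screen; the function and field names are hypothetical:

```python
def map_frame_portions(devices, frame_w, frame_h):
    """Map a different portion of a frame to each device's display screen.

    `devices` maps device_id -> {"x", "y", "w", "h"}, a rectangle in the
    shared coordinate space established by synchronization.  Returns
    device_id -> (left, top, width, height) crop rectangles in frame pixels.
    """
    # Aggregate bounding box of all participating display screens.
    min_x = min(d["x"] for d in devices.values())
    min_y = min(d["y"] for d in devices.values())
    max_x = max(d["x"] + d["w"] for d in devices.values())
    max_y = max(d["y"] + d["h"] for d in devices.values())
    agg_w, agg_h = max_x - min_x, max_y - min_y

    portions = {}
    for device_id, d in devices.items():
        # Normalized position of this screen within the aggregate area,
        # converted to a pixel crop rectangle within the source frame.
        left = (d["x"] - min_x) / agg_w
        top = (d["y"] - min_y) / agg_h
        portions[device_id] = (
            round(left * frame_w),
            round(top * frame_h),
            round(d["w"] / agg_w * frame_w),
            round(d["h"] / agg_h * frame_h),
        )
    return portions
```

For two equally sized screens placed side by side, this splits a frame into matching left and right halves; differing screen sizes and offsets simply produce proportionally different crop rectangles.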
As will be appreciated, if the control device corresponds to one of client devices 1 . . . N in the embodiment of
In context with 535 of
Further, the swipes from
Yet another option is that a picture of the coordinated display group can be snapped (by some other camera device), reported to the master application and then analyzed to identify where tablet computers 1 . . . 8 are relative to each other. In a further example, to facilitate the picture-based synchronization for the relative position and orientation of the coordinated display group, the master application can deliver a unique image (e.g., a number, a color, a QR Code, etc.) to display while the camera device snaps the picture of the coordinated display group. The master can then identify the relative position and orientation data based upon detection of the unique images in the snapped image.
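One possible form of the marker-based analysis is sketched below. Assuming a detector has already located the pixel center of each device's unique image in the snapped picture, the centers can be clustered into grid rows and columns to recover relative positions; the function name and the pixel tolerance are illustrative assumptions, not part of the disclosure:

```python
def assign_grid_positions(detections, tol=50):
    """Cluster marker centers from a snapped picture into grid positions.

    `detections` maps device_id -> (cx, cy), the pixel center at which that
    device's unique image (number, color, QR code, etc.) was detected.
    Returns device_id -> (row, col).  Centers within `tol` pixels on an
    axis are treated as the same row or column.
    """
    def cluster(values):
        # One representative coordinate per row/column of the grid.
        centers = []
        for v in sorted(values):
            if not centers or v - centers[-1] > tol:
                centers.append(v)
        return centers

    xs = cluster([c[0] for c in detections.values()])
    ys = cluster([c[1] for c in detections.values()])

    def nearest(v, centers):
        return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

    return {dev: (nearest(cy, ys), nearest(cx, xs))
            for dev, (cx, cy) in detections.items()}
```

The resulting (row, col) assignments give the master application the relative position data it needs for the mapping step, without any user swipes.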
It will be appreciated that requiring a user to swipe his/her finger across the display screens of the coordinated display group can become impractical for medium or large aggregate screen sizes, or for coordinated display groups that include some client devices without touch-screen capability, as illustrated in
In these cases, another option is to strobe a light beam or sound wave across the coordinated display group and then gauge the relative positions and orientations of its constituent client devices based on differences in timing and/or angle of detection relative to the strobe. In the sound wave example, for a medium size display (e.g., with an aggregate size of a few feet across, as shown in
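A simplified version of the sound-strobe timing calculation might look as follows. It assumes, for illustration only, that the participating devices share a common clock and each reports the timestamp at which its microphone detected the strobe; the arrival-time difference versus a reference device then translates into a relative path-length difference:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air


def relative_distances(arrival_times_s, reference_device):
    """Estimate each device's extra path length from a sound strobe.

    `arrival_times_s` maps device_id -> detection timestamp in seconds on a
    common clock.  Returns device_id -> meters of additional distance from
    the strobe source, relative to `reference_device`.
    """
    t0 = arrival_times_s[reference_device]
    return {dev: (t - t0) * SPEED_OF_SOUND_M_S
            for dev, t in arrival_times_s.items()}
```

On its own this yields only a relative ordering along the strobe's path; combining measurements from strobes in different directions (or angle-of-detection data, as noted above) would be needed to resolve full two-dimensional positions.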
For a very large aggregate display (e.g., thousands of client devices held by users in a stadium), the users can be asked to take a picture of a fixed object (e.g., a three dimensional object) that is present in each user's view while being relatively close to the respective users. For example, in
In conjunction with registering client devices X . . . Z, the master application receives updated synchronization information that indicates current relative orientation and position data for each of client devices 1 . . . N with respect to client devices X . . . Z, 925A (e.g., similar to 535 of
After obtaining the updated synchronization information at 925A, the master application updates the mapping of the visual data based on (i) the display capability information of client devices 1 . . . N and X . . . Z as reported at 505, 520 and 915A, and (ii) the updated synchronization information received at 925A, in order to incorporate the respective display screens of client devices X . . . Z into the aggregated display screen area, 930A. In context with
Later, during the coordinated display session, the master application determines to remove one or more client devices from the coordinated display session, 1015A. For convenience of explanation, assume that the master application determines at 1015A to remove client devices 1 and 2 from the coordinated display group while permitting client devices 3 . . . N to remain in the coordinated display group. The determination of 1015A can be reached in a variety of different ways. For example, users of client devices 1 and 2 may physically move client devices 1 and 2 away from the aggregated display screen area, client devices 1 and 2 may experience a low battery condition (even if they are not moved) and so on.
In conjunction with removing client devices 1 and 2 from the coordinated display group, the master application obtains updated synchronization information that indicates current relative orientation and position data for each of client devices 3 . . . N, 1020A (e.g., similar to 535 of
After obtaining the updated synchronization information at 1020A, the master application updates the mapping of the visual data based on (i) the display capability information of client devices 3 . . . N as reported at 520, and (ii) the updated synchronization information obtained at 1020A, in order to adapt the aggregated display screen area based on the departure of client devices 1 and 2, 1025A. In context with
In the embodiments described above with respect to
Later, during the coordinated display session, the master application determines to transition the master application function from client device 1 to a different device, 1125A. In the embodiment of
After determining to transition the master application function away from client device 1 at 1125A, client device 1 negotiates with client devices 2 . . . N in order to identify a target client device for the master application function transfer, 1130A. For convenience of explanation, in the embodiment of
In conjunction with transitioning the master application function from client device 1 to client device 2, the master application (now on client device 2) obtains updated synchronization information that indicates current relative orientation and position data for each of client devices 1 . . . N, 1145A (e.g., similar to 535 of
After obtaining the updated synchronization information at 1145A, the master application updates the mapping of the visual data based on (i) the display capability information of client devices 1 . . . N as reported at 520, and (ii) the updated synchronization information obtained at 1145A, in order to adapt to any changes to the aggregated display screen area, 1150A. In context with
Referring to
In a further example, the target client devices to which the audio data is mapped can be based in part upon the content of the visual data that is being presented. For example, the aggregate display screen area in
Referring to
The set of audio parameters configured at 1300 can relate to any audio characteristic associated with the coordinated display session (e.g., which client devices are asked to output audio for the session, the volume level or amplitude at which one or more of the client devices are asked to output audio for the session, settings such as bass, treble and/or fidelity associated with audio to be output by one or more of the client devices, an audio orientation for the session such as 2.1 surround sound or 5.1 surround sound, etc.). In another example, the set of audio parameters can include how an equalizer function is applied to audio to be output for the coordinated display session (e.g., how the audio is processed through an enhancing or attenuation/de-emphasizing equalizer function). In the equalizer function example, if motion vectors (e.g., see
The configuration of the set of audio parameters can occur at the beginning of the coordinated display session in an example, and/or at the start-point of audio for the coordinated display session. While not shown explicitly in
After the set of audio parameters is configured at 1300, assume that the coordinated display session continues for a period of time with the audio component being output in accordance with the configured set of audio parameters. During the coordinated display session, the master application evaluates video content data within one or more mapped portions of the video content, 1305. For example, the evaluated video content data can include one or more motion vectors (e.g., see
For example, at 1405, video frame motion vectors that correspond to the video being collectively output by the client devices participating in the coordinated display session are measured in real-time by the master application. The video frame motion vectors can then be analyzed to detect an object (or objects) with the highest relative motion vector (1410). Then, the audio focus can shift (1415-1420) to focus on the identified high-motion object (or objects) by reconfiguring the set of audio parameters so that a client device outputting the mapped video portion with the detected object(s) outputs audio at a higher relative volume and/or amplification, by temporarily muting or lowering the volume output by other client devices, and so on. In a specific example, a given client device outputting the mapped video portion with the detected object(s) can have its volume raised 50%, each adjacent client device to the given client device can have their respective volume raised 25%, each client device that is two-screens (or two-positions) away from the given client device can play at a normal or default volume level, and each other client device can have their respective volume temporarily muted or lowered by some percentage.
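The specific volume scheme in this example can be expressed as a distance-based gain function. The sketch below assumes devices are addressed by (row, column) grid positions and uses Chebyshev distance, so diagonal neighbors count as adjacent; treating "muted or lowered" as full muting is one possible reading of the example, not the only one:

```python
def volume_scale(focus_pos, device_pos):
    """Volume multiplier based on screen distance from the high-motion object.

    Implements the example scheme: the device whose mapped portion contains
    the detected object plays 50% louder, adjacent devices 25% louder,
    devices two positions away play at the default level, and all other
    devices are temporarily muted.
    """
    # Chebyshev (chessboard) distance in grid positions.
    dist = max(abs(focus_pos[0] - device_pos[0]),
               abs(focus_pos[1] - device_pos[1]))
    if dist == 0:
        return 1.5   # +50%: device showing the detected object
    if dist == 1:
        return 1.25  # +25%: adjacent devices
    if dist == 2:
        return 1.0   # default volume two positions away
    return 0.0       # temporarily muted (could instead be lowered)
```

As the detected object moves between mapped portions, re-evaluating this function for each device shifts the audio focus across the aggregate display.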
For example, at 1605, video frames from the video being collectively output by the client devices participating in the coordinated display session are measured in real-time by the master application. The video frames can then be analyzed to detect an object (or objects) with the highest relative object focus (1610). Then, the audio focus can shift (1615-1620) to focus on the identified in-focus object (or objects) by reconfiguring the set of audio parameters so that a client device outputting the mapped video portion with the detected object(s) outputs audio at a higher relative volume and/or amplification, by temporarily muting or lowering the volume output by other client devices, and so on. In a specific example, a given client device outputting the mapped video portion with the detected object(s) can have its volume raised 50%, each adjacent client device to the given client device can have their respective volume raised 25%, each client device that is two-screens (or two-positions) away from the given client device can play at a normal or default volume level, and each other client device can have their respective volume temporarily muted or lowered by some percentage.
Further, while
Further, as the coordinated display session is implemented, the process of
Further, while the client devices shown in
While
Referring to
At 1810, in a first embodiment, assume that the set of eye tracking devices corresponds to a single master eye tracking device that is responsible for tracking the eye movements of each viewer in the viewing population. In this case, the master eye tracking device can execute a “baselining” operation which establishes the central eye position on the horizontal axis and vertical axis. The “baselining” operation could be triggered as a dedicated “calibration step/moment/time window” during setup of the coordinated display session, irrespective of where the viewing population is expected to be looking at that particular time. Alternatively, the baselining operation can be triggered in association with a prompt that is expected to draw the gazes of the viewing population. For example, a “play/start” touch-screen option may be output by one of the video presentation devices in the viewing population, such as the device designated as the master eye tracking device. In this case, when a viewer presses the play/start button being displayed on the master eye tracking device, the viewer can reasonably be expected to be looking at the play/start button, which can assist in eye tracking calibration. Eye movement along the horizontal axis (left/right) and vertical axis (up/down) can thereafter be measured by the master eye tracking device and conveyed back to the master application as the eye movement monitoring feedback at 1810. In a further example, a max threshold of eye movement can be established beyond which the eye tracking deviations would be ignored (e.g., either omitted from the eye movement monitoring feedback by the master eye tracking device, or included in the eye movement monitoring feedback by the master eye tracking device and then discarded by the master application).
For example, the max threshold can include max values for horizontal and vertical movement “delta” from the baseline, whereby the delta is the angular deviation for the stare relative to the baseline.
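The baseline-plus-threshold rule might be sketched as follows, with gaze samples expressed as (horizontal, vertical) angles in degrees; the default threshold values are chosen purely for illustration:

```python
def gaze_delta(baseline, sample, max_delta_deg=(30.0, 20.0)):
    """Angular deviation of the current gaze from the baselined center.

    `baseline` and `sample` are (horizontal_deg, vertical_deg) gaze angles.
    Returns the (horizontal, vertical) delta from the baseline, or None when
    either axis exceeds its max threshold, in which case the deviation is
    ignored per the max-threshold rule described above.
    """
    dh = sample[0] - baseline[0]
    dv = sample[1] - baseline[1]
    if abs(dh) > max_delta_deg[0] or abs(dv) > max_delta_deg[1]:
        return None  # beyond the max threshold: discard this sample
    return (dh, dv)
```

Samples returning None can either be omitted by the eye tracking device itself or discarded later by the master application, matching the two alternatives noted above.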
At 1810, in a second embodiment, instead of designating a single master eye tracking device, a distributed eye tracking solution can be implemented. In this case, two or more client devices (e.g., potentially all of the video presentation devices participating in the coordinated display session) are designated to perform eye tracking and the two or more designated eye tracking devices establish the horizontal and vertical deviation of the viewer's stare/gaze from the principal and perpendicular axes. Each of the two or more designated eye tracking devices independently acts on the deviation measures therein and attenuates or amplifies the audio stream. In an example, in the distributed eye tracking mode, if there is a 3×3 array (not shown) of video presentation devices and the viewer is looking at the top-right device, other devices would measure horizontal and vertical axis stare/gaze deviation increasing from right to left as well as from top to bottom. In another example, in the distributed eye tracking mode, if there is a 2×4 array of video presentation devices and the viewer is looking at the top-right device (e.g., see Viewer 5 in
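In the distributed mode, each designated device could independently convert its own measured gaze deviation into an audio gain, for example with a linear roll-off. The roll-off constant below is an assumed tuning value, not specified in the disclosure:

```python
import math


def audio_gain_from_deviation(h_dev_deg, v_dev_deg, rolloff_deg=40.0):
    """Per-device audio gain in the distributed eye-tracking mode.

    Each designated device independently measures the horizontal and
    vertical deviation (in degrees) of the viewer's gaze from its own axes
    and attenuates its audio stream as the deviation grows; the device the
    viewer is looking at (near-zero deviation) plays at full gain.
    `rolloff_deg` is the combined deviation at which the gain reaches zero.
    """
    deviation = math.hypot(h_dev_deg, v_dev_deg)
    return max(0.0, 1.0 - deviation / rolloff_deg)
```

Because every device applies this rule locally, a viewer looking at the top-right device of an array naturally produces gains that fall off from right to left and from top to bottom, matching the deviation pattern described above.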
After obtaining the eye movement monitoring feedback from the designated set of eye tracking devices at 1810, the master application determines whether to modify one or more session parameters associated with the coordinated display session, 1815. If the master application determines not to modify the one or more session parameters at 1815, the coordinated display session continues using the current session parameter configuration and the process returns to 1810 where the master application continues to obtain eye movement monitoring feedback from the designated set of eye tracking devices. Otherwise, if the master application determines to modify the one or more session parameters at 1815, the process advances to 1820. At 1820, the master application modifies the one or more session parameters associated with the coordinated display session based on the eye movement monitoring feedback, after which the coordinated display session continues using the modified session parameters and the process returns to 1810 where the master application continues to obtain eye movement monitoring feedback from the designated set of eye tracking devices.
After obtaining the eye movement monitoring feedback from the designated set of eye tracking devices at 1910, the master application determines whether to modify an audio component (e.g., the set of audio parameters previously configured at 1910) of the coordinated display session based on the eye movement monitoring feedback, 1915. If the master application determines not to modify the audio component of the coordinated display session at 1915, the coordinated display session does not modify the audio component and instead continues using the current set of audio parameters and then advances to 1925. Otherwise, if the master application determines to modify the audio component at 1915, the master application modifies the audio component by reconfiguring the set of audio parameters based on the eye movement monitoring feedback from 1910 (e.g., by adjusting volume levels being output by one or more of the client devices in the session, changing an audio orientation for the session, modifying how enhancing or de-emphasizing equalizer functions are applied to audio being mapped to one or more client devices in the session, etc.) and then advances to 1925. Examples of how the audio component can be modified based on the eye movement monitoring feedback are provided below in more detail.
At 1925, the master application determines whether to modify an eye tracking component of the coordinated display session based on the eye movement monitoring feedback from 1910. The eye tracking component relates to any parameter associated with how the eye movement monitoring feedback is obtained. For example, at 1925, the master application can determine whether to modify how client devices are allocated to the set of eye tracking devices, the master application may determine whether to ask the set of eye tracking devices to initiate a calibration (or baselining) procedure, the master application may determine whether to toggle eye tracking off or on for the coordinated display session, the master application can determine whether a priority viewer has been detected in the viewing population and, if so, order the set of eye tracking devices to focus on the priority viewer, and so on. If the master application determines not to modify the eye tracking component of the coordinated display session at 1925, the coordinated display session continues without modifying the eye tracking component and then advances to 1935. Otherwise, if the master application determines to modify the eye tracking component at 1925, the master application modifies the eye tracking component based on the eye movement monitoring feedback from 1910, 1930, and then advances to 1935. Examples of how the eye tracking component can be modified based on the eye movement monitoring feedback are provided below in more detail.
At 1935, the master application determines whether to modify a video component associated with the coordinated display session based on the eye movement monitoring feedback from 1910. For example, at 1935, the master application can determine whether to expand a particular mapped video portion so that a bigger version of that mapped video portion is displayed across multiple (or even all) of the video presentation devices participating in the coordinated display session (e.g., a full-screen mode or zoomed-in mode). In another example, at 1935, the master application can determine whether to duplicate a particular mapped video portion so that a same-sized version of that mapped video portion is displayed across multiple (or even all) of the video presentation devices participating in the coordinated display session (e.g., a screen-copy or multi-view mode). If the master application determines not to modify the video component for the coordinated display session at 1935, the coordinated display session continues without modifying the video component and the process returns to 1910 where the master application continues to obtain eye movement monitoring feedback (e.g., potentially in a modified form if the eye tracking component is modified at 1930, or even stopped altogether if the eye tracking component modification toggles eye tracking to an off mode or disabled mode). Otherwise, if the master application determines to modify the video component for the coordinated display session at 1935, the master application modifies the video component based on the eye movement monitoring feedback from 1910, 1940. After 1940, the process returns to 1910 where the master application continues to obtain eye movement monitoring feedback (e.g., potentially in a modified form if the eye tracking component is modified at 1930, or even stopped altogether if the eye tracking component modification toggles eye tracking to an off mode or disabled mode).
Additional examples of how the video component can be modified based on the eye movement monitoring feedback are provided below in more detail.
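The decision loop described above (obtain feedback at 1910, then conditionally modify the audio, eye tracking and video components at 1915/1925/1935) can be summarized as a simple dispatch over one round of feedback. The following is a minimal, hypothetical sketch; the dictionary keys, threshold names and returned action labels are illustrative assumptions and are not part of the disclosure itself.

```python
def process_feedback(feedback, thresholds):
    """Return the session modifications triggered by one round of eye
    movement monitoring feedback (mirroring decisions 1915/1925/1935)."""
    actions = []
    # 1915: a tracked gaze landing on a screen can trigger an audio change (1920).
    if feedback.get("gaze_screen") is not None:
        actions.append("modify_audio")
    # 1925: e.g., too many viewers to track usefully triggers an eye
    # tracking component change (1930), such as toggling tracking off.
    if feedback.get("viewer_count", 1) > thresholds["max_tracked_viewers"]:
        actions.append("modify_eye_tracking")
    # 1935: a sufficiently long dwell on one screen can trigger a video
    # change (1940), such as a zoomed-in or screen-copy mode.
    if feedback.get("dwell_s", 0) >= thresholds["zoom_dwell_s"]:
        actions.append("modify_video")
    return actions
```

In a real session this dispatch would run inside the loop that returns to 1910 after each pass, with each action handled by its own reconfiguration routine.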
In the embodiment of
Table 1 (below) illustrates a variety of implementation examples whereby different session parameters (e.g., the audio component, the eye tracking component, the video component, etc.) are modified at 1820 of
As will be appreciated from a review of examples provided in Table 1 (above), different types of monitoring feedback can trigger different session parameter changes.
Referring to Example 1A from Table 1, a viewing population with a single viewer (“Viewer 1”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected as looking at Screen 2 (e.g., for more than a nominal threshold period of time, etc.) for a coordinated display session with a session state that is characterized by a single video+audio source (or feed) being collectively output by the coordinated display group by Screens 1 . . . 8. In Example 1A, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to increase the relative speaker volume being output by Screen 2 (more specifically, by an audio output device coupled to the video presentation device with Screen 2) and/or by other screens in proximity to Screen 2 (more specifically, by other audio output devices coupled to the video presentation devices with the other screens in proximity to Screen 2). As used herein, referring to a “screen” in context with audio output will be recognized as referring to an audio output device that is coupled to or associated with that particular screen. For example, in
Referring to Example 1B from Table 1, assume that the session modification from Example 1A has already occurred and the audio component for the coordinated display session has been updated based on Viewer 1 being detected as looking at Screen 2. Now in Example 1B, at some later point in time, assume that Viewer 1 is detected by the set of eye tracking devices as either looking away from Screen 2 (e.g., for more than a threshold period of time, so that minor eye deviations such as blinking by Viewer 1 will not trigger an audio component modification for the coordinated display session) or physically moving out of range of the set of eye tracking devices. In this case, the session parameter modification is to revert the audio configuration to previous settings and/or a previous audio configuration state. For example, the speaker volume for each of Screens 1 . . . 8 can be returned to 25%. In another example, the previous audio configuration state could be configured differently, for example, as 2.1 pseudo-surround sound, 5.1 pseudo-surround sound, or some other static-playout mode that is not dictated by eye tracking.
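Examples 1A and 1B together describe a boost-then-revert volume policy: raise the volume of the watched screen (and optionally its neighbors) while a gaze is held, and restore a uniform level when the viewer looks away or leaves range. A minimal sketch follows; the adjacency map, the 75%/25% levels and the intermediate neighbor level are illustrative assumptions.

```python
def apply_gaze_volume(screens, watched, adjacency, boost=0.75, ambient=0.25):
    """Example 1A: raise relative volume on the watched screen; give
    screens adjacent to it an intermediate level; leave the rest ambient."""
    out = {s: ambient for s in screens}
    for s in adjacency.get(watched, []):
        out[s] = (boost + ambient) / 2   # intermediate level for neighbors
    out[watched] = boost
    return out

def revert_volumes(screens, ambient=0.25):
    """Example 1B: viewer looked away or moved out of range; restore the
    uniform 25% configuration (or another stored static-playout state)."""
    return {s: ambient for s in screens}
```

A session controller would call `apply_gaze_volume` when the gaze dwell threshold is met and `revert_volumes` when the look-away threshold is met, matching the hysteresis described above.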
Referring to Example 1C from Table 1, similar to Example 1A, a viewing population with a single viewer (“Viewer 1”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected as looking at Screen 2 (e.g., for more than a nominal threshold period of time, etc.) for a coordinated display session with a session state that is characterized by a single video+audio source (or feed) being collectively output by the coordinated display group by Screens 1 . . . 8. In Example 1C, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to apply an enhancing equalizer function to audio being output by Screen 2 (more specifically, by an audio output device coupled to the video presentation device with Screen 2) and/or by other screens in proximity to Screen 2 (more specifically, by other audio output devices coupled to the video presentation devices with the other screens in proximity to Screen 2, such as adjacent Screens 1, 3, 5, 6 and 7). Also, a de-emphasizing (or inverse) equalizer function can be applied to audio being output by one or more screens that are not in proximity to Screen 2 (e.g., Screens 4 and 8 which are not adjacent to Screen 2, or even the adjacent Screens 1, 3, 5, 6 and 7). In one example, the enhancing equalizer function is applied to Screen 2, while Screens 1 and 3-8 do not have their audio modified. In another example, the enhancing equalizer function is applied to Screens 1 . . . 3 and 5 . . . 7 (e.g., Screen 2 plus adjacent screens), while Screens 4 and 8 do not have their audio modified. In another example, the enhancing equalizer function is applied to Screen 2 only, Screens 1, 3 and 5 . . . 7 do not have their audio modified and a de-emphasizing (or inverse) equalizer function is applied to Screens 4 and 8. 
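The three equalizer variants in Example 1C differ only in how the enhancing and de-emphasizing functions are distributed around the watched screen. A hypothetical planner is sketched below; the mode names and the string labels are assumptions introduced for illustration, not terms from the disclosure.

```python
def assign_equalizers(screens, watched, adjacency, mode):
    """Example 1C: decide which screens get the enhancing equalizer
    function, which get the de-emphasizing (inverse) function, and which
    are left unmodified. mode is one of "watched_only",
    "watched_plus_adjacent" or "inverse_far" (hypothetical names)."""
    adjacent = set(adjacency.get(watched, []))
    plan = {}
    for s in screens:
        if s == watched:
            plan[s] = "enhance"                      # always enhance the watched screen
        elif s in adjacent and mode == "watched_plus_adjacent":
            plan[s] = "enhance"                      # enhance neighbors too
        elif s not in adjacent and mode == "inverse_far":
            plan[s] = "de-emphasize"                 # suppress distant screens
        else:
            plan[s] = "unmodified"
    return plan
```

With Screen 2 watched and Screens 1, 3, 5, 6 and 7 adjacent, `"watched_only"` reproduces the first variant, `"watched_plus_adjacent"` the second, and `"inverse_far"` the third (Screens 4 and 8 de-emphasized).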
It will be appreciated that while the audio component modifications in other examples from Table 1 pertain primarily to volume levels and/or audio configuration, any of these examples could be implemented with respect to modifications to other audio parameter types (e.g., equalizer functions, treble, bass and/or fidelity modifications, etc.) in other scenarios based on similar feedback.
Referring to Example 2A from Table 1, a viewing population with multiple viewers (“Viewers 1 . . . 5”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected with Viewers 1 . . . 3 looking at Screen 2, Viewer 4 looking at Screen 7 and Viewer 5 looking at Screen 4, for a coordinated display session with a session state that is characterized by a single video+audio source (or feed) being collectively output by the coordinated display group by Screens 1 . . . 8. In each case, some nominal threshold of time of eye-to-screen contact can be required before any particular viewer qualifies as “looking” at that particular screen. In Example 2A, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to stop eye tracking so long as multiple viewers are present and to transition the audio configuration state to a default audio configuration state (e.g., the all-25% speaker volume state, 2.1 pseudo-surround sound, 5.1 pseudo-surround sound). Example 2A from Table 1 is not expressly illustrated in the FIGS. In other words, in Example 2A from Table 1, the master application assumes that it will be difficult to track eye movements across a large viewing population so as to provide relevant eye movement-based audio to every viewer, and therefore decides to supply the viewing population with basic or default audio.
Referring to Example 2B from Table 1, a viewing population with multiple viewers (“Viewers 1 . . . 5”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected with Viewers 1 . . . 3 looking at Screen 2, Viewer 4 looking at Screen 7 and Viewer 5 looking at Screen 4, for a coordinated display session with a session state that is characterized by a single video+audio source (or feed) being collectively output by the coordinated display group by Screens 1 . . . 8. In each case, some nominal threshold of time of eye-to-screen contact can be required before any particular viewer qualifies as “looking” at that particular screen. In Example 2B, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to have each eye tracking device in the set of eye tracking devices monitor eye movements for each viewer in its respective range, and to selectively increase the relative speaker volume being output by each screen being watched by a threshold number of viewers (e.g., 1, 3, etc.) and screens in proximity to one of the “watched” screens. For example, in
Referring to Example 2C from Table 1, a viewing population with multiple viewers (“Viewers 1 . . . 5”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected with Viewers 1 . . . 3 looking at Screen 2, Viewer 4 looking at Screen 7 and Viewer 5 looking at Screen 4, for a coordinated display session with a session state that is characterized by a single video+audio source (or feed) being collectively output by the coordinated display group by Screens 1 . . . 8. In each case, some nominal threshold of time of eye-to-screen contact can be required before any particular viewer qualifies as “looking” at that particular screen. In Example 2C, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to calculate a weighted score for each screen based on screen-specific viewing metrics, and then to configure a target audio configuration state for the coordinated display session based on the screen-specific viewing metrics. For example, the screen-specific viewing metrics can include (i) a number of viewers watching each screen, (ii) a proximity of a “non-watched” screen from a “watched” screen, (iii) a number of “watched” screens to which a “non-watched” screen is adjacent, (iv) a duration that one or more viewers have been watching a particular screen (e.g., an average duration that viewers historically watch a particular screen compared with other screens, etc.) and/or (v) any combination thereof.
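Example 2C leaves the exact scoring formula open; one natural reading is a linear combination of the listed screen-specific viewing metrics. The sketch below is an assumption-laden illustration: the weight names and the particular linear form are not specified in the disclosure.

```python
def screen_scores(screens, viewers_on, adjacency, dwell_s, weights):
    """Example 2C: compute a weighted score per screen from (i) how many
    viewers watch it, (iii) how many watched screens it is adjacent to,
    and (iv) how long it has been watched. Higher scores would then drive
    the target audio configuration state."""
    scores = {}
    for s in screens:
        # metric (iii): count of adjacent screens that are being watched
        watched_neighbors = sum(
            1 for n in adjacency.get(s, []) if viewers_on.get(n, 0) > 0)
        scores[s] = (weights["viewers"] * viewers_on.get(s, 0)
                     + weights["adjacency"] * watched_neighbors
                     + weights["dwell"] * dwell_s.get(s, 0.0))
    return scores
```

A "non-watched" screen adjacent to several "watched" screens thus still earns a nonzero score through the adjacency term, which matches metric (ii)/(iii) above.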
For example, in
Referring to Example 3A from Table 1, a viewing population with a single viewer (“Viewer 1”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected as looking at Screen 3 (e.g., for more than a nominal threshold period of time, etc.) for a coordinated display session with a session state that is characterized by a single video source (or feed), which may optionally include audio, being collectively output by the coordinated display group by Screens 1 . . . 8. In Example 3A, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to zoom-in (or blow-up) the mapped video portion being output by the screen being watched by the viewer. For example, in
Referring to Example 3B from Table 1, unlike Examples 1A-3A, the coordinated display session has a session state that is characterized by multiple video sources (or feeds), each of which may optionally include audio, being collectively output by the coordinated display group by Screens 1 . . . 8. As shown in
In Example 3B, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to duplicate the mapped video portion being output by the screen being watched by any viewer for more than t2 onto one or more other screens, temporarily blocking other feeds that were previously mapped to those screens. For example, based on Viewer 2 staring at Feed 7 on Screen 7 for more than t2 as shown in
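The duplication behavior in Example 3B amounts to: once a dwell exceeds t2, copy the watched feed onto chosen target screens while remembering the feeds that were displaced so they can be restored later. A minimal sketch, with hypothetical data shapes (a screen-to-feed dictionary), follows.

```python
def duplicate_watched_feed(mapping, watched_screen, dwell_s, t2, targets):
    """Example 3B: if a viewer has stared at one screen's feed for more
    than t2 seconds, duplicate that feed onto the target screens,
    temporarily blocking the feeds previously mapped there.
    Returns (new_mapping, blocked_feeds) so the block can be undone."""
    if dwell_s <= t2:
        return dict(mapping), {}
    feed = mapping[watched_screen]
    blocked = {s: mapping[s] for s in targets}   # remember displaced feeds
    new_mapping = dict(mapping)
    for s in targets:
        new_mapping[s] = feed
    return new_mapping, blocked
```

Restoring the session after the duplication period would simply merge `blocked_feeds` back into the mapping.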
Referring to Example 4A from Table 1, similar to Example 3B, the coordinated display session has a session state that is characterized by multiple video sources (or feeds), each of which may optionally include audio, being collectively output by the coordinated display group by Screens 1 . . . 8. As shown in
In Example 4A, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to zoom-out (or merge) multiple mapped video portions being viewed habitually by a particular viewer over time (i.e., more than t3) so as to produce a merged feed that is output by at least one of the habitually viewed screens. So, it is possible that each feed being viewed habitually by Viewer 3 can be updated to output the merged feed, or alternatively that only a few (or even one) of the habitually viewed screens is affected.
Referring to Example 4B from Table 1, the coordinated display session has a session state that is characterized by a single video source (or feed), which may optionally include audio, being collectively output by the coordinated display group by Screens 1 . . . 8. In Example 4B, a viewing population with a single viewer (“Viewer 1”) being actively eye-tracked (or monitored) by the set of eye tracking devices is detected with Viewer 1 having a history of alternating between Screens 3-4 and 7-8 for more than the time threshold (t3) (e.g., Viewer 1 watches Screen 3 for 10 seconds, then Screen 4 for 19 seconds, then Screen 7 for 18 seconds, then Screen 8 for 20 seconds, then Screen 3 again for 8 seconds, and so on, so it is clear that Viewer 1 keeps returning to these four particular screens habitually).
In Example 4B, similar to Example 4A, an example session parameter modification that can be triggered by the eye movement monitoring feedback is to zoom-out (or merge) multiple mapped video portions being viewed habitually by a particular viewer over time (i.e., more than t3) so as to produce a merged feed that is output by at least one of the habitually viewed screens. So, it is possible that each screen being viewed habitually by Viewer 1 can be updated to output the merged feed, or alternatively that only a few (or even one) of the habitually viewed screens is affected.
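Examples 4A and 4B both hinge on detecting which screens a viewer returns to "habitually" over time. One simple interpretation, sketched below under stated assumptions, accumulates fixation time per screen from a gaze history and flags screens whose total exceeds t3; the data shape (a list of (screen, seconds) fixations) and the per-screen accumulation rule are illustrative, not dictated by the disclosure.

```python
def habitual_screens(gaze_history, t3):
    """Examples 4A/4B: identify screens a viewer keeps returning to.
    gaze_history is a list of (screen, seconds) fixations in time order;
    a screen counts as habitually viewed once its accumulated viewing
    time exceeds t3 seconds."""
    totals = {}
    for screen, seconds in gaze_history:
        totals[screen] = totals.get(screen, 0.0) + seconds
    return {s for s, total in totals.items() if total > t3}
```

The resulting set would then drive the zoom-out (or merge) modification: the mapped portions for those screens are merged into a single feed output on one or more of them.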
In accordance with any of the session parameter modifications discussed above with respect to Table 1 and/or
Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., UE). In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Claims
1. A method of operating a master application configured for execution on a control device, comprising:
- registering a plurality of proximate client devices to a coordinated display group that is managed by the master application, wherein the plurality of proximate client devices includes at least one mobile client device and wherein the registering includes receiving display capability information associated with each of the plurality of proximate client devices;
- determining to initiate a coordinated display session for outputting visual data via the coordinated display group;
- receiving synchronization information that indicates current relative orientation and position data for each of the plurality of proximate client devices, the synchronization information including (i) a captured image of the plurality of proximate client devices, (ii) feedback related to how each of the plurality of proximate client devices detects a beacon that is directed towards the plurality of proximate client devices by an external device, (iii) feedback related to how each of the plurality of proximate client devices detects user movement in response to a prompt configured to request that a specified movement pattern be implemented in proximity to each of the plurality of proximate client devices and/or (iv) one or more captured images of a target object taken by each of the plurality of proximate client devices;
- mapping, for each proximate client device in a set of the plurality of proximate client devices, a different portion of the visual data to a respective display screen based on (i) the display capability information of the proximate client device, and (ii) the synchronization information for the plurality of proximate client devices; and
- delivering the mapped portions of the visual data to the set of the plurality of proximate client devices for presentation by a set of respective display screens.
2. The method of claim 1, wherein the control device corresponds to one of the plurality of proximate client devices, another proximate client device that does not belong to the coordinated display group or a remote server device that is separate from the plurality of proximate client devices.
3. The method of claim 1, wherein the set of the plurality of proximate client devices includes each client device from the plurality of proximate client devices.
4. The method of claim 1, wherein the set of the plurality of proximate client devices excludes at least one client device from the plurality of proximate client devices.
5. The method of claim 4, wherein the excluded at least one client device is excluded based on a failure of the at least one client device to satisfy a proximity condition for the coordinated display session, an orientation condition for the coordinated display session or a display capability condition for the coordinated display session.
6. The method of claim 1, wherein the display capability information indicates resolution and/or screen-size.
7. The method of claim 1,
- wherein the synchronization information includes (i) the captured image of the plurality of proximate client devices, and
- wherein the captured image is taken while each of the plurality of proximate client devices is displaying a unique image.
8. The method of claim 7, wherein the mapping includes:
- identifying each of the unique images in the captured image;
- associating each unique identified image in the captured image with a corresponding one of the plurality of proximate client devices; and
- determining the current relative orientation and position data for each of the plurality of proximate client devices based upon the associating.
9. The method of claim 1,
- wherein the synchronization information includes (ii) the feedback related to how each of the plurality of proximate client devices detects the beacon,
- wherein the beacon corresponds to light or sound that is strobed across the plurality of proximate client devices, wherein the mapping includes:
- determining the current relative orientation and position data for each of the plurality of proximate client devices based upon differences in timings and/or angles of arrival at which the plurality of proximate client devices detects the beacon.
10. The method of claim 1,
- wherein the synchronization information includes (iii) the feedback related to how each of the proximate client devices detects the user movement in response to the prompt, wherein the mapping includes:
- determining the current relative orientation and position data for each of the plurality of proximate client devices based upon timing characteristics related to the user movement detected by the proximate client devices.
11. The method of claim 1,
- wherein the synchronization information includes (iv) the one or more captured images of the target object taken by each of the plurality of proximate client devices, wherein the mapping includes:
- determining the current relative orientation and position data for each of the plurality of proximate client devices based on image processing that is performed on the one or more captured images of the target object taken by each of the plurality of proximate client devices.
12. The method of claim 1, wherein the mapped portions of the visual data correspond to non-overlapping portions of the visual data that, upon presentation by the plurality of proximate client devices, collectively function to reproduce the visual data.
13. The method of claim 1, further comprising:
- receiving updated synchronization information that indicates updated display capability information for one or more of the set of the plurality of proximate client devices, wherein the updated display capability information indicates that the one or more proximate client devices is no longer capable of outputting its mapped portion of the visual data;
- modifying the mapping of the visual data so as to exclude mapping any portion of the visual data to the one or more proximate client devices based on the updated display capability information; and
- delivering the modified mapped portions of the visual data to their corresponding proximate client devices for presentation by their respective display screens.
14. The method of claim 1, further comprising:
- receiving updated synchronization information that indicates updated relative orientation and position data for at least one of the plurality of proximate client devices;
- modifying the mapping of the visual data to the respective display screens of one or more of the plurality of proximate client devices based on (i) the display capability information of the one or more proximate client devices, and (ii) the updated synchronization information for the at least one proximate client device; and
- delivering the modified mapped portions of the visual data to their corresponding proximate client devices for presentation by their respective display screens.
15. The method of claim 14,
- wherein the updated synchronization information indicates that the at least one proximate client device no longer satisfies an orientation condition or a proximity condition for the coordinated display session,
- wherein the modifying updates the mapping of the visual data so as to exclude mapping any portion of the visual data to the at least one proximate client device that no longer satisfies the orientation condition or the proximity condition, and
- wherein the modified mapped portions of the visual data are not delivered to the at least one proximate client device.
16. The method of claim 14,
- wherein the updated synchronization information indicates that the at least one proximate client device is at least one new proximate client device that satisfies an orientation condition and a proximity condition for the coordinated display session,
- wherein the modifying updates the mapping of the visual data so as to map at least one portion of the visual data to the new proximate client device, and
- wherein the updated mapped portions of the visual data are delivered to the at least one new proximate client device in addition to the set of the plurality of proximate client devices.
17. The method of claim 14,
- wherein the updated synchronization information indicates that the at least one proximate client device has changed its position and/or orientation while still satisfying an orientation condition and a proximity condition for the coordinated display session, and
- wherein the modifying updates the mapping of the visual data based on the changed position and/or orientation of the at least one proximate client device.
18. The method of claim 1, wherein the registering further includes receiving audio output capability information associated with each of the plurality of proximate client devices, further comprising:
- identifying audio data to be output in conjunction with the visual data;
- mapping, for one or more of the set of the plurality of proximate client devices, a different portion of the audio data to a respective audio output device based on (i) the audio output capability information of the proximate client device, and (ii) the synchronization information for the plurality of proximate client devices; and
- delivering the mapped portions of the audio data to their corresponding proximate client devices for output by their respective audio output devices.
19. The method of claim 1, further comprising:
- determining to transfer management for the coordinated display session from the master application on the control device to a different device;
- negotiating with one or more other devices to identify a target device for transferring the management;
- transferring the management for the coordinated display session to the identified target device; and
- ceasing execution of the master application on the control device in conjunction with the management transfer.
20. A method of operating a client device with a display screen, comprising:
- registering, by a client application configured for execution on the client device, to a coordinated display group that is managed by a master application configured for execution on a control device, wherein the coordinated display group includes the client device and at least one other proximate client device, wherein at least one of the client device and/or the at least one other proximate client device corresponds to a mobile client device;
- reporting display capability information associated with a display screen of the client device in conjunction with the registering;
- executing a synchronization procedure configured to obtain synchronization information by which the master application can derive current relative orientation and position data for the client device, the synchronization procedure including (i) displaying a unique image via the display screen in conjunction with at least one unique image being displayed by at least one display screen of the at least one other proximate client device to facilitate a captured image showing each of the unique images by an external device, (ii) detecting a beacon that is directed towards the client device and the at least one other proximate client device and reporting beacon detection feedback to the master application, (iii) presenting a prompt to request that a specified movement pattern be implemented in proximity to the client device and the at least one other proximate client device and reporting feedback related to how the user movement is detected by the client device and/or (iv) capturing one or more images of a target object and reporting the one or more captured images to the master application; and
- receiving, from the master application, a portion of visual data for presentation on the client device, wherein different portions of the visual data are collectively configured for presentation on the client device and the at least one other proximate client device during a coordinated display session, wherein the received portion of the visual data to be presented on the client device is based upon (i) the reported display capability information of the client device, and (ii) the reported synchronization information.
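The client-side sequence in claim 20 (register, report display capabilities, run a synchronization step, receive a mapped portion) can be sketched as a simple message exchange. The transport class, message shapes, and field names below are invented for illustration; the patent leaves the control channel (e.g. Wi-Fi Direct, Bluetooth, LAN) unspecified, and this sketch exercises only synchronization option (i), displaying a unique image for an external captured photo.

```python
class ScriptedTransport:
    """Minimal stand-in for the control channel between the client
    application and the master application."""
    def __init__(self, inbound):
        self.inbound = list(inbound)   # messages the master will "send"
        self.outbound = []             # messages this client sends
    def send(self, msg):
        self.outbound.append(msg)
    def recv(self):
        return self.inbound.pop(0)

def client_session(transport, width_px, height_px):
    """Client-side flow: register to the coordinated display group,
    report display capabilities, acknowledge the assigned unique image
    (sync option (i)), then receive the mapped portion of visual data."""
    transport.send({"type": "register"})
    transport.send({"type": "display_capabilities",
                    "resolution": (width_px, height_px)})
    sync = transport.recv()                   # master assigns a marker image
    transport.send({"type": "sync_ack",
                    "displayed_image": sync["unique_image_id"]})
    return transport.recv()                   # e.g. a crop rectangle to show
```

In a real session the final message would reference a media stream rather than carry the pixels themselves.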
21. The method of claim 20, further comprising:
- presenting the received portion of the visual data on the display screen of the client device.
22. The method of claim 20, wherein the control device corresponds to the client device, the at least one other proximate client device or a remote server device that is separate from the client device and the at least one other proximate client device.
23. The method of claim 20,
- wherein the synchronization procedure includes (ii) detecting the beacon that is directed to the client device and the at least one other proximate client device, and
- wherein the beacon corresponds to light or sound that is strobed across the client device and the at least one other proximate client device.
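One way the strobed beacon of claim 23 could yield relative position data is by ordering devices on their detection timestamps: a light or sound source swept at roughly constant speed reaches devices in spatial order, so sorting the reported times recovers the layout along the sweep axis. This inference is an assumption of the sketch, not a method stated in the claim.

```python
def order_from_strobe(detection_times):
    """Recover the spatial ordering of devices from a strobed beacon.

    `detection_times` maps device id -> reported detection time (seconds).
    Under a constant-velocity sweep, earlier detection means the device
    sits earlier along the sweep direction.
    """
    return [dev for dev, _ in sorted(detection_times.items(),
                                     key=lambda kv: kv[1])]
```

The master application could then treat the returned list as, say, the left-to-right arrangement of screens.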
24. The method of claim 20,
- wherein the synchronization procedure includes (iii) presenting the prompt to request that the specified movement pattern be implemented in proximity to the client device, wherein the synchronization procedure further comprises:
- detecting a user swiping one or more fingers in proximity to the client device in response to the prompt,
- wherein the detected user swipe is reported as the feedback related to how the user movement is detected by the client device.
25. The method of claim 20, wherein the different portions correspond to non-overlapping portions of the visual data that, upon presentation by the client device and the at least one other proximate client device, collectively function to reproduce the visual data.
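The non-overlapping partition of claim 25 can be illustrated with a simple column split: each screen, taken left to right, receives a contiguous vertical band proportional to its width, and the bands tile the source frame exactly. The proportional-width rule is one possible mapping chosen for the example, not the only one the claim covers.

```python
def partition_columns(frame_w, frame_h, screen_widths):
    """Split a source frame into contiguous, non-overlapping vertical
    bands, one per screen left-to-right. Returns (x, y, w, h) crops
    that tile the frame with no gap or overlap, so the portions
    collectively reproduce the visual data when shown side by side."""
    total = sum(screen_widths)
    crops, x = [], 0
    for i, w in enumerate(screen_widths):
        # The last band absorbs any rounding remainder so there is no gap.
        if i == len(screen_widths) - 1:
            band = frame_w - x
        else:
            band = round(frame_w * w / total)
        crops.append((x, 0, band, frame_h))
        x += band
    return crops
```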
26. The method of claim 20, further comprising:
- reporting updated synchronization information to the master application that indicates that the client device is no longer capable of outputting its mapped portion of the visual data,
- wherein the receiving step stops receiving the portion of the visual data for presentation on the client device in response to the reporting of the updated synchronization information.
27. The method of claim 20, further comprising:
- reporting updated synchronization information to the master application that indicates updated relative orientation and position data for at least one of the plurality of proximate client devices; and
- receiving a modified version of the portion of the visual data for presentation on the client device in response to the reporting of the updated synchronization information.
28. The method of claim 20, further comprising:
- reporting audio output capability information associated with an audio output device of the client device in conjunction with the registering; and
- receiving, in response to the reporting of the audio output capability information, a portion of audio data to be output in conjunction with the portion of the visual data by the client device.
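Claim 28's pairing of audio output with the visual portions could, for example, assign stereo channels by position: the leftmost audio-capable device plays the left channel and the rightmost plays the right. This channel-assignment policy is purely an assumption for the sketch; the claim only requires that some portion of audio data accompany the visual portion.

```python
def map_audio_channels(ordered_devices, audio_capable):
    """Assign stereo channels by reported position and audio capability.

    `ordered_devices` is the left-to-right device ordering; devices in
    `audio_capable` reported an audio output device at registration.
    A lone capable device plays mono; the rest stay silent (None).
    """
    capable = [d for d in ordered_devices if d in audio_capable]
    mapping = {d: None for d in ordered_devices}
    if len(capable) == 1:
        mapping[capable[0]] = "mono"
    elif capable:
        mapping[capable[0]] = "left"
        mapping[capable[-1]] = "right"
    return mapping
```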
29. A control device configured to execute a master application, comprising:
- means for registering a plurality of proximate client devices to a coordinated display group that is managed by the master application, wherein the plurality of proximate client devices includes at least one mobile client device and wherein the registering includes receiving display capability information associated with each of the plurality of proximate client devices;
- means for determining to initiate a coordinated display session for outputting visual data via the coordinated display group;
- means for receiving synchronization information that indicates current relative orientation and position data for each of the plurality of proximate client devices, the synchronization information including (i) a captured image of the plurality of proximate client devices, (ii) feedback related to how each of the plurality of proximate client devices detects a beacon that is directed towards the plurality of proximate client devices by an external device, (iii) feedback related to how each of the plurality of proximate client devices detects user movement in response to a prompt configured to request that a specified movement pattern be implemented in proximity to each of the plurality of proximate client devices and/or (iv) one or more captured images of a target object taken by each of the plurality of proximate client devices;
- means for mapping, for each proximate client device in a set of the plurality of proximate client devices, a different portion of the visual data to a respective display screen based on (i) the display capability information of the proximate client device, and (ii) the synchronization information for the plurality of proximate client devices; and
- means for delivering the mapped portions of the visual data to the set of the plurality of proximate client devices for presentation by a set of respective display screens.
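The master-side mapping step of claim 29, combining display capability information with the synchronization-derived layout, can be sketched as follows. Here each device's left edge and width come from a shared coordinate frame (e.g. measured off the external captured image); the coordinate convention and the `layout` structure are assumptions of the example.

```python
def map_by_layout(frame_w, frame_h, layout):
    """Map frame regions onto screens using the synchronization-derived
    layout. `layout` maps device id -> (left_edge, width) in a shared
    coordinate frame. The source frame is scaled across the group's
    overall span, and each device receives the sub-rectangle its
    screen physically covers, as an (x, y, w, h) crop."""
    left = min(x for x, w in layout.values())
    span = max(x + w for x, w in layout.values()) - left
    crops = {}
    for dev, (x, w) in layout.items():
        cx = round((x - left) / span * frame_w)
        cw = round(w / span * frame_w)
        crops[dev] = (cx, 0, cw, frame_h)
    return crops
```

Delivery would then stream each crop to its device, which is the final step the claim recites.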
30. The control device of claim 29, wherein the control device corresponds to one of the plurality of proximate client devices, another proximate client device that does not belong to the coordinated display group or a remote server device that is separate from the plurality of proximate client devices.
31. A client device with a display screen, comprising:
- means for registering, by a client application configured for execution on the client device, to a coordinated display group that is managed by a master application configured for execution on a control device, wherein the coordinated display group includes the client device and at least one other proximate client device, wherein at least one of the client device and/or the at least one other proximate client device corresponds to a mobile client device;
- means for reporting display capability information associated with a display screen of the client device in conjunction with the registering;
- means for executing a synchronization procedure configured to obtain synchronization information by which the master application can derive current relative orientation and position data for the client device, the synchronization procedure including (i) displaying a unique image via the display screen in conjunction with at least one unique image being displayed by at least one display screen of the at least one other proximate client device to facilitate a captured image showing each of the unique images by an external device, (ii) detecting a beacon that is directed towards the client device and the at least one other proximate client device and reporting beacon detection feedback to the master application, (iii) presenting a prompt to request that a specified movement pattern be implemented in proximity to the client device and the at least one other proximate client device and reporting feedback related to how the user movement is detected by the client device and/or (iv) capturing one or more images of a target object and reporting the one or more captured images to the master application; and
- means for receiving, from the master application, a portion of visual data for presentation on the client device, wherein different portions of the visual data are collectively configured for presentation on the client device and the at least one other proximate client device during a coordinated display session, wherein the received portion of the visual data to be presented on the client device is based upon (i) the reported display capability information of the client device, and (ii) the reported synchronization information.
32. The client device of claim 31, wherein the control device corresponds to the client device, the at least one other proximate client device or a remote server device that is separate from the client device and the at least one other proximate client device.
33. A control device configured to execute a master application, comprising:
- logic configured to register a plurality of proximate client devices to a coordinated display group that is managed by the master application, wherein the plurality of proximate client devices includes at least one mobile client device and wherein the registering includes receiving display capability information associated with each of the plurality of proximate client devices;
- logic configured to determine to initiate a coordinated display session for outputting visual data via the coordinated display group;
- logic configured to receive synchronization information that indicates current relative orientation and position data for each of the plurality of proximate client devices, the synchronization information including (i) a captured image of the plurality of proximate client devices, (ii) feedback related to how each of the plurality of proximate client devices detects a beacon that is directed towards the plurality of proximate client devices by an external device, (iii) feedback related to how each of the plurality of proximate client devices detects user movement in response to a prompt configured to request that a specified movement pattern be implemented in proximity to each of the plurality of proximate client devices and/or (iv) one or more captured images of a target object taken by each of the plurality of proximate client devices;
- logic configured to map, for each proximate client device in a set of the plurality of proximate client devices, a different portion of the visual data to a respective display screen based on (i) the display capability information of the proximate client device, and (ii) the synchronization information for the plurality of proximate client devices; and
- logic configured to deliver the mapped portions of the visual data to the set of the plurality of proximate client devices for presentation by a set of respective display screens.
34. The control device of claim 33, wherein the control device corresponds to one of the plurality of proximate client devices, another proximate client device that does not belong to the coordinated display group or a remote server device that is separate from the plurality of proximate client devices.
35. A client device with a display screen, comprising:
- logic configured to register, by a client application configured for execution on the client device, to a coordinated display group that is managed by a master application configured for execution on a control device, wherein the coordinated display group includes the client device and at least one other proximate client device, wherein at least one of the client device and/or the at least one other proximate client device corresponds to a mobile client device;
- logic configured to report display capability information associated with a display screen of the client device in conjunction with the registering;
- logic configured to execute a synchronization procedure configured to obtain synchronization information by which the master application can derive current relative orientation and position data for the client device, the synchronization procedure including (i) displaying a unique image via the display screen in conjunction with at least one unique image being displayed by at least one display screen of the at least one other proximate client device to facilitate a captured image showing each of the unique images by an external device, (ii) detecting a beacon that is directed towards the client device and the at least one other proximate client device and reporting beacon detection feedback to the master application, (iii) presenting a prompt to request that a specified movement pattern be implemented in proximity to the client device and the at least one other proximate client device and reporting feedback related to how the user movement is detected by the client device and/or (iv) capturing one or more images of a target object and reporting the one or more captured images to the master application; and
- logic configured to receive, from the master application, a portion of visual data for presentation on the client device, wherein different portions of the visual data are collectively configured for presentation on the client device and the at least one other proximate client device during a coordinated display session, wherein the received portion of the visual data to be presented on the client device is based upon (i) the reported display capability information of the client device, and (ii) the reported synchronization information.
36. The client device of claim 35, wherein the control device corresponds to the client device, the at least one other proximate client device or a remote server device that is separate from the client device and the at least one other proximate client device.
37. A non-transitory computer-readable medium containing instructions stored thereon, which, when executed by a control device configured to execute a master application, cause the control device to perform operations, the instructions comprising:
- at least one instruction to cause the control device to register a plurality of proximate client devices to a coordinated display group that is managed by the master application, wherein the plurality of proximate client devices includes at least one mobile client device and wherein the registering includes receiving display capability information associated with each of the plurality of proximate client devices;
- at least one instruction to cause the control device to determine to initiate a coordinated display session for outputting visual data via the coordinated display group;
- at least one instruction to cause the control device to receive synchronization information that indicates current relative orientation and position data for each of the plurality of proximate client devices, the synchronization information including (i) a captured image of the plurality of proximate client devices, (ii) feedback related to how each of the plurality of proximate client devices detects a beacon that is directed towards the plurality of proximate client devices by an external device, (iii) feedback related to how each of the plurality of proximate client devices detects user movement in response to a prompt configured to request that a specified movement pattern be implemented in proximity to each of the plurality of proximate client devices and/or (iv) one or more captured images of a target object taken by each of the plurality of proximate client devices;
- at least one instruction to cause the control device to map, for each proximate client device in a set of the plurality of proximate client devices, a different portion of the visual data to a respective display screen based on (i) the display capability information of the proximate client device, and (ii) the synchronization information for the plurality of proximate client devices; and
- at least one instruction to cause the control device to deliver the mapped portions of the visual data to the set of the plurality of proximate client devices for presentation by a set of respective display screens.
38. The non-transitory computer-readable medium of claim 37, wherein the control device corresponds to one of the plurality of proximate client devices, another proximate client device that does not belong to the coordinated display group or a remote server device that is separate from the plurality of proximate client devices.
39. A non-transitory computer-readable medium containing instructions stored thereon, which, when executed by a client device with a display screen, cause the client device to perform operations, the instructions comprising:
- at least one instruction to cause the client device to register, by a client application configured for execution on the client device, to a coordinated display group that is managed by a master application configured for execution on a control device, wherein the coordinated display group includes the client device and at least one other proximate client device, wherein at least one of the client device and/or the at least one other proximate client device corresponds to a mobile client device;
- at least one instruction to cause the client device to report display capability information associated with a display screen of the client device in conjunction with the registering;
- at least one instruction to cause the client device to execute a synchronization procedure configured to obtain synchronization information by which the master application can derive current relative orientation and position data for the client device, the synchronization procedure including (i) displaying a unique image via the display screen in conjunction with at least one unique image being displayed by at least one display screen of the at least one other proximate client device to facilitate a captured image showing each of the unique images by an external device, (ii) detecting a beacon that is directed towards the client device and the at least one other proximate client device and reporting beacon detection feedback to the master application, (iii) presenting a prompt to request that a specified movement pattern be implemented in proximity to the client device and the at least one other proximate client device and reporting feedback related to how the user movement is detected by the client device and/or (iv) capturing one or more images of a target object and reporting the one or more captured images to the master application; and
- at least one instruction to cause the client device to receive, from the master application, a portion of visual data for presentation on the client device, wherein different portions of the visual data are collectively configured for presentation on the client device and the at least one other proximate client device during a coordinated display session, wherein the received portion of the visual data to be presented on the client device is based upon (i) the reported display capability information of the client device, and (ii) the reported synchronization information.
40. The non-transitory computer-readable medium of claim 39, wherein the control device corresponds to the client device, the at least one other proximate client device or a remote server device that is separate from the client device and the at least one other proximate client device.
Type: Application
Filed: Apr 17, 2014
Publication Date: Oct 23, 2014
Applicant: QUALCOMM Incorporated (San Diego, CA)
Inventors: Amit GOEL (San Diego, CA), Sandeep SHARMA (San Diego, CA), Mohammed Ataur Rahman SHUMAN (San Diego, CA)
Application Number: 14/255,869
International Classification: G06F 3/14 (20060101);