Methods, Systems, and Products for Managing Multiple Data Sources
Methods, systems, and products manage multiple data sources or networks connected to a single device. A plurality of signals, which may include video signals, is received at the device from multiple local data sources. The device may further include an output module, which may output the plurality of signals onto a single display device.
This application is a continuation of U.S. application Ser. No. 11/611,795, filed Dec. 15, 2006, now issued as U.S. Pat. No. ______, and incorporated herein by reference in its entirety.
BACKGROUND
Computer users perform a variety of tasks with the use of computing devices and the like. For instance, users may at times use a computer to perform work, surf the web, play games, or watch movies or the like. Some employers may, however, limit their employees' use of the employer's computers or networks. For example, employers may block access to non-work related web sites. This may cause some employee computer users to use multiple computers for multiple different tasks. For instance, an employee computer user may use a work computer or network to perform work, while using a personal computer to navigate the web. Using multiple computers, however, can prove to be confusing and spatially inefficient.
These problems may be exacerbated for a teleworking computer user. This is because a teleworking computer user may desire a reliable link into an employer's network, in addition to the ability to perform work-related and non-work related tasks on one or more computers. Currently, the teleworker's only solution is to keep multiple computers and other desired electronics on her desk in order to accomplish all of her tasks. Again, this solution is less than ideal.
The description below addresses these and other shortcomings in the present art.
SUMMARY
Devices for managing multiple data sources or networks are described herein. One such device may include a data source input module configured to input a plurality of signals, which may include video signals, from multiple local data sources. The device may further include an output module, which may receive the signals from the data source input module and output the signals onto a single display device.
Another device may include computer-readable media having instructions for selecting, from multiple data source displays each outputted from a respective local data source and each located on a single display device, one of the multiple data source displays. Further instructions may include accessing the selected local data source to allow for use of the local data source, in response to the selecting of the local data source.
A method described herein may include receiving data, including video data, from multiple local data sources. These local data sources may include some first local data sources that couple to a first network and other second local data sources that couple to a second network that is independent of the first network. This method may also include outputting some or all of the video data received from both the first and second local data sources onto a single display device.
Other systems, methods, and/or computer program products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional systems, methods, and/or computer program products be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
The teachings herein are described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
In some implementations, input/output devices 112 may be located at a single user station 114. As such, manager module 110 may be configured to accept signal inputs from a plurality of data sources 102(1)-(N) and output those signals to single user station 114. In other implementations, devices 112 may be located at a multitude of locations, and may thus not comprise single user station 114. In any instance, the resulting system may thus comprise an environment capable of managing information to and from a plurality of data sources, as described in greater detail below.
Furthermore, data sources 102(1)-(N) may all be put to the same use or such uses may vary. For instance, in some implementations data source 102(1) may be used to carry broadband entertainment signals, while data source 102(2) may be used to connect a virtual private network (VPN) signal. The latter signal may be used, for example, to connect to a work account that requires relatively high security. Meanwhile, in some instances data source 102(N) may be used for a connection to the internet. The internet connection may be an open internet connection, as compared to some instances of the VPN connection over data source 102(2). Because the VPN connection may be used, in some instances, to connect to a work account or the like, the managing employer may limit internet access over the network. The open internet connection of data source 102(N) may remedy this limitation in some implementations.
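By way of illustration only, the division of duties among data sources 102(1)-(N) described above may be sketched as a simple source registry; the names, fields, and mapping below are hypothetical assumptions, not part of the described system:

```python
# Hypothetical registry mapping each local data source to its network role
# and intended use; the source names and field values are illustrative only.
DATA_SOURCES = {
    "source_1": {"network": "shared", "use": "broadband entertainment"},
    "source_2": {"network": "shared", "use": "VPN to employer (restricted internet)"},
    "source_n": {"network": "shared", "use": "open internet"},
}

def sources_for_use(registry, use_keyword):
    """Return the names of sources whose declared use mentions a keyword."""
    return [name for name, info in registry.items()
            if use_keyword in info["use"]]
```

With this sketch, an open internet source can stand in when the VPN source's network restricts access, as the passage above describes.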
An independent network data source 216 may also provide input signals to manager module 110. Independent network data source 216 may transmit and receive data via a network 222 that is independent of network 218. If multiple independent network data sources exist, then one or more such networks may exist. Due to the independence of network 222 from network 218, independent network 222, and thus independent network data source 216, may be relatively free from any problems or encumbrances that network 218 may experience. For instance, if network 218 congests or crashes, signals traveling via independent network 222 may be unaffected. Thus, the content itself of independent network data source 216 may be relatively more secure as compared to a data source that shares a network with other data sources. Similar to network 218, independent network 222 may comprise a land-line telephone-based dial up connection, a digital subscriber line (DSL), a cable-based network, a satellite network, or the like.
Similar to data sources 102(1)-(N), independent network data source 216 may comprise a plurality of different devices. Such non-limiting examples may include personal computers such as towers and laptops, server devices, personal digital assistants (PDAs), mobile telephones, land-based telephone lines, external memory devices, cameras, or any other electronic device. In some implementations, however, independent network data source 216 may comprise devices that define a telepresence module. Such a telepresence module may operate to provide, to a local or remote location, audio and/or video signals of a user when that user is present at a certain location, such as user station 114. A telepresence module may also operate to receive audio and/or video signals from a remote or local location. When a user is present at the desired location, such as the user station 114, the telepresence module may automatically or manually activate.
Stated otherwise, this resulting telepresence module may serve as a reliable telepresence link between two or more locations. For instance, the telepresence link could serve as a link between user station 114, comprised of input/output devices 112, and a user's place of work. This place of work may, in some instances, be the user's office located at his or her employer's building. This link may provide audio and/or video signals to the two or more locations, such as user station 114 and the user's office at work. As such, independent network 222 and independent network data source 216 may provide a reliable telepresence signal without concern for any congestion or failure that network 218 may encounter.
Furthermore, independent network 222 and independent network data source 216 may provide always-on point-to-point or point-to-multipoint connectivity. Such a reliable and always-on system may serve not only to avoid network errors, discussed above by way of congestion and/or failures, but may also serve to avoid operational error by a user by simplifying the system. An always-on system may also avoid the possibility of a data source entering “sleep mode” at an undesirable time.
Environment 200 may also comprise a telephone line 224, which may also provide input signals to manager module 110. Telephone line 224 may be a traditional land-based telephone line, or it may comprise packet (IP), mobile or satellite technology. Signals from telephone line 224 may also output to some or all of input/output devices 112.
Input/output devices 112 may comprise one or more displays 226, an audio system 228, cursor controllers 230, cameras 232, microphones 234 and/or sensors 236. Input/output devices 112 may serve to input or output signals to or from data sources 102(1)-(N) or independent network data source 216. As illustrated, input/output devices 112 may be configured to connect to manager module 110. Furthermore, input/output devices 112 may be located in a plurality of locations, or in some implementations may be located at single user station 114. Some components may also serve more than one purpose, or a single device could comprise all or nearly all of input/output devices 112.
One or more displays 226 may receive and display video signals from data sources 102(1)-(N) and/or independent network data source 216. If displays 226 comprise a single display, then the video signals, and hence the data source displays, may be displayed on the single display. This is discussed in detail below.
In some implementations, display 226 may comprise a cylindrical or spherical monitor, which may span approximately 180° around a user. Display 226 may comprise a liquid crystal display (LCD) screen or the like, or may comprise a projector screen. If display 226 comprises a projector screen, then input/output devices 112 may further comprise a projector for outputting video signals onto display 226. Alternatively, manager module 110 could itself comprise a projector. If the projector screen is cylindrical or spherical, the projector may be capable of spreading the data source displays across the screen, again as discussed in detail below.
As mentioned above, input/output devices 112 may also comprise one or more audio systems 228. Audio system 228 may comprise speakers or the like for outputting audio signals from data sources 102(1)-(N) and/or independent network data source 216. Audio system 228 may further comprise a multi-channel audio reproduction system, which may comprise an automated mixer(s), amp(s), headphones, or the like. An automated mixer may mix audio signals from a plurality of data sources as well as incoming telephone calls.
Input/output devices 112 may also comprise one or more cursor controllers 230. Cursor controllers may include, without limitation, a text input device such as a keyboard, a point-and-select device such as a mouse, and/or a touch screen. Input/output devices 112 may also include one or more cameras 232. Cameras 232 may comprise, for example, one or more video cameras. In some implementations, camera 232 may be configured for use with a telepresence module discussed above. For example, a video camera signal may capture the image of a user and provide the signal to manager module 110. Manager module 110 may then transmit the signal to one or more locations, one of which may be the user's work office. The transmitted video signal may be streaming in some instances, so that an image of a user is projected at all times at one or more locations, such as the user's work office. This image may, for example, be displayed on a display device at one or more locations such as the user's work office. Thus, with the use of the one or more cameras 232, the user may be able to work at a remote user station, such as user station 114, while the user's coworkers or superiors can view the user and his or her activities. Of course, teleworking is merely one example of an activity for which the one or more cameras 232 may be put to use.
Input/output devices 112 may further comprise, in some instances, one or more microphones 234 or other multi-channel audio sensing equipment. Microphone 234 may be configured to provide audio input signals to manager module 110. For example, microphone 234 may be configured for use with the telepresence module discussed above. In this implementation, microphone 234 may be configured to capture audio signals from a user and provide these signals to manager module 110. Manager module 110 may then transmit these signals to one or more locations, one of which may again be the user's work office. The transmitted audio signal may be streaming, so that the sounds of the user are audible at one or more locations at all times. Also, microphone 234 may be noise-gated and set with a threshold value. This may be of particular importance in implementations with multiple microphones, so as to only send user-inputted audio signals from a certain microphone to manager module 110 when the user intends to do so. Microphone 234 may also act in conjunction with manager module 110 to enable voice recognition. In some implementations, these components may require that a user's voice be recognized before allowing the user to use microphone 234 and hence manager module 110.
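The noise-gating behavior described above, in which a microphone's audio is forwarded only when a threshold is exceeded, may be sketched as follows; the frame representation and threshold value are illustrative assumptions:

```python
def noise_gate(samples, threshold):
    """Pass a frame of audio through only when its peak amplitude
    exceeds the gate threshold; otherwise emit silence.
    A minimal sketch of the noise-gating behavior described above."""
    peak = max(abs(s) for s in samples)
    if peak >= threshold:
        return list(samples)          # gate open: forward the frame
    return [0.0] * len(samples)       # gate closed: suppress the frame
```

In a multi-microphone setup, each microphone would run its own gate so that only intentionally spoken audio reaches the manager module.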
Furthermore, implementations with multiple microphones may utilize stereophonic sound techniques. For instance, if a user conducts multiple videoconference and/or teleconference meetings simultaneously, the system may dedicate a microphone to each. As a result, if a user turns and speaks into a left microphone, a recipient represented on the data source display on the left may receive the sound at greater volumes. At the same time, the microphone on the right may receive less input and, hence, the recipient depicted in the data source display on the right may receive the sound at lesser volumes.
In the office example, microphone 234 could be coupled with camera 232 so as to provide the image and sounds of the user at the user's work office at all times. The user's work office may thus not only have a display depicting the user's image, but may also have one or more speakers to output the audio signals captured by microphone 234 and transmitted by manager module 110. Furthermore, in some instances the user's work office (or other exemplary locations) may contain one or more cameras, such as a video camera, and one or more microphones so that two-way audio and visual communication may occur between the office and the remote user station, such as user station 114.
In the teleworking example, this may provide adequate availability of the user in the office, despite the fact that the user may be remotely situated. For example, a co-worker may be able to walk over to the user's work office and see that the user is working at a remote user station. The co-worker may confront a monitor or the like displaying the user at a remote user station. The co-worker may also be able to ask the user questions and the like with use of the office microphone(s), and the two may be able to communicate as efficiently or nearly as efficiently as if the user were physically present at the work office. Furthermore, while this telepresence link may be secure, it may also be accessible by others. A user's boss, for instance, may be able to bridge into the telepresence signal rather than have to walk to the user's work office to communicate with the user.
Input/output devices 112 may further comprise one or more sensors 236. Sensor 236 may, in some instances, sense when a user is present at a user station, such as user station 114. In some implementations, sensor 236 may comprise a weight-based sensor that may be configured to situate adjacent to or integral with a user's chair. Thus, in some instances when a user sits down on his or her chair, sensor 236 may detect that the user is present. Furthermore, sensor 236 may be capable of differentiating between users. In the weight-based sensor example, sensor 236 may be capable of differentiating between a first user (User #1) and a second user (User #2) based on each user's weight. It is noted, however, that other sensors are envisioned. For example, camera 232 may serve to sense when a user is present, as may microphone 234.
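A minimal sketch of how a weight-based sensor might differentiate between users is shown below; the registered weights and matching tolerance are hypothetical values, not part of the described system:

```python
def identify_user(measured_weight, known_weights, tolerance=5.0):
    """Match a weight-sensor reading (pounds) to the nearest registered
    user within a tolerance; return None if nobody matches.
    The tolerance and registry are illustrative assumptions."""
    best_user, best_diff = None, tolerance
    for user, weight in known_weights.items():
        diff = abs(measured_weight - weight)
        if diff <= best_diff:
            best_user, best_diff = user, diff
    return best_user
```

A camera or microphone could feed a similar matcher using face or voice features instead of weight.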
Furthermore, sensor 236 may collect various pieces of information, which it may use to determine the presence of a user. Sensor 236 may also store and/or log this collected information. For instance, sensor 236 may collect information such as the weight or height of a user, as well as other user-specific data. Furthermore, sensor 236 may record the time at which certain data was collected. In some implementations, sensor 236 may calculate and save the amount of time that a user spent at the user station, as well as the amount of time that the user was away. This collected information may be provided only to the user, or it could be made available to others, such as the user's boss in the teleworking examples.
Along with sensor 236, input/output devices 112 may also collect various pieces of information. For instance, camera 232 may collect location information pertaining to a present user. Similarly, microphone 234 may collect information regarding the sounds emanating from the present user. Sensor 236, camera 232, microphone 234, and other input/output devices 112 may not only collect the actual content of the user's actions, but may also collect the “recipe” of the content of the user's actions. These devices may create a chronological list of events that comprise the user's actions, which may be sent to remote locations for synthesis. Using this events list or recipe, remote locations may synthesize the user's actions. This may apply to audio, video, smell or any other characteristic that may be present at user station 114 or the like. While this synthesis may not be as real as if the remote location merely receives and displays streaming video, audio, or the like, the size of the file transporting this information may be orders of magnitude smaller.
For instance, such a chronological events list created by various pieces of information collected by input/output devices 112 may look like the following:
- At time 21:27:06 GMT, 28 Nov. 2006; weight sensor reading=151.5 pounds; audio amplitude=13.7 decibels; audio content=“I'm working from home today” using Bob Jones' voice font; playback rate=7 words per minute; video content=Bob Jones in chair, display cartoon face item #47 for 50 milliseconds
- At time 21:27:08 GMT, 28 Nov. 2006; weight sensor reading=000.3 pounds; audio amplitude=0.0 decibels; video content=empty chair, display empty chair
- At time . . . .
As discussed above, this events list may be transmitted to remote locations, which may then synthesize the content according to the recipe. Using the above events list, for example, the remote location may initially show a cartoon face or the like of a generic man, or a more realistic picture of “Bob Jones”. The remote location system, such as a computer at the user's work, may also project a voice, and possibly Bob's voice, that says “I'm working from home today”. The projected voice may be approximately equal to the volume spoken by Bob, and may also be approximately equal to the words' spoken cadence. Furthermore, at time=21:27:08 GMT, the remote location may display an empty chair representing that Bob is no longer present at user station 114. Such an events list may serve to decrease file size being sent along the networks while still maintaining a realistic experience at the remote location.
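A minimal sketch of such a recipe-based synthesis appears below; the field names, the presence threshold, and the frame descriptions are illustrative assumptions rather than the disclosed format:

```python
# One entry of the hypothetical "recipe" events list from the example above;
# field names are illustrative only.
event = {
    "time": "21:27:06 GMT, 28 Nov 2006",
    "weight_lb": 151.5,
    "audio_amplitude_db": 13.7,
    "audio_content": "I'm working from home today",
    "video_content": "user in chair",
}

def synthesize(events):
    """Replay a recipe at the remote location: decide, per event,
    whether to render the user or an empty chair. Minimal sketch;
    the 10-pound presence threshold is an assumption."""
    frames = []
    for e in events:
        present = e.get("weight_lb", 0.0) > 10.0
        frames.append("user present" if present else "empty chair")
    return frames
```

Because only these small event records cross the network, the transported file can be far smaller than streaming video, as the passage notes.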
Sensor 236 may also serve other purposes. In some implementations, sensor 236 may adjust the sizes of data source displays or data source display windows, discussed in detail below, depending on a current angle of rotation of a user's chair. Sound volumes emanating from different data source displays may also be adjusted based on chair rotation, as may microphone volumes. For instance, if a user's chair is currently rotated to the left, suggesting that the user's attention currently focuses on the data source display to the left, the volume of that data source may be increased, as may be the window size. It is also envisioned that sensor 236 could make other similar adjustments in this manner.
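One possible sketch of splitting attention between a left and a right data source display from chair rotation is shown below; the linear mapping and the angle range are illustrative assumptions:

```python
def attention_weights(chair_angle_deg):
    """Split attention between a left and a right data source display
    from the chair's rotation: -90 degrees is full left, +90 full right.
    The linear mapping is an illustrative assumption."""
    angle = max(-90.0, min(90.0, chair_angle_deg))   # clamp to range
    right = (angle + 90.0) / 180.0
    return {"left": 1.0 - right, "right": right}
```

The resulting weights could scale both window sizes and playback volumes, so that rotating left enlarges and amplifies the left display.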
Furthermore, in some implementations, sensor 236 may cause one or more of data sources 102(1)-(N) and/or independent network data source 216 to turn on or off, or perform one or more other functions. In the implementation of the weight-based chair sensor, sensor 236 may cause a telepresence module to turn on when a user sits in his or her chair. This may thereby establish a reliable telepresence link. Again, in some implementations this could automatically turn on a telepresence link to a work office, which may comprise turning on a display, camera, and/or microphone at the work office. In this implementation, sensor 236 may also notify others located at the work office (or other exemplary location) when the user has stepped out of the remote user station, such as user station 114. When the user gets up from his or her chair, for instance, the work office display may present an away message indicating that the user has recently left the user station, but may soon return. Furthermore, sensor 236 may recognize users in order to automatically provide user preferences tailored to the detected user. These user preferences may comprise audio or video preferences or the like, which may be comprised of the different audio and video capabilities discussed below.
As noted above, telephone line 224 may input to manager module 110. Thus, input/output devices 112 may comprise a telephone. Also, some or all of input/output devices 112 described above may serve telephone functionality over telephone line 224. For instance, audio system 228 may output sound from telephone line 224, while microphone 234 may input audio signals into telephone line 224. Furthermore, display 226 may exhibit a video signal transmitted over telephone line 224, such as for a video teleconferencing call. Camera 232 may accordingly also provide video signals, such as of the image of the user, over telephone line 224. Other components may be included to achieve full telephone capabilities, such as a speakerphone, a headset, a handset, or the like.
As mentioned above, manager module 110 may include user input module 342, which may be configured to receive user input signals or other user instructions. In some instances, user input module 342 may be configured to receive input from one or more of the one or more cursor controllers 230, cameras 232, microphones 234, and/or sensors 236. Signals received from cursor controller 230 may, for example, be received for the purpose of accessing a data source or modifying, activating or using a program or application running on a data source. User input module 342 may also be configured to receive signals from camera 232, which may comprise images (e.g. video images) of a user located at user station 114. Similarly, user input module 342 may receive user-inputted audio signals from microphone 234. User input module 342 may further be configured to receive signals from sensor 236, which may, in some instances, serve to indicate that a user is present at user station 114. For instance, user input module 342 may receive a signal from sensor 236 indicating that a user is sitting in the user station chair and/or facing a certain direction as discussed above. All of the aforementioned signals may be relayed from user input module 342, and hence manager module 110, to the respective data source 102(1)-(N) or independent network data source 216 destination.
Data source displays 444(1)-(N) and/or independent network data source display 646, however, need not be arranged with such uniformity.
Returning to the example discussed above, data source 102(1) may comprise a personal computer, which may be used to carry broadband entertainment signals. Similarly, data source 102(2) may comprise a computer as well, with its purpose being to provide a VPN connection to a user's work account. Data source 102(N), meanwhile, may also comprise a personal computer, such as a laptop computer, with its purpose being to provide an open internet connection for the user's navigating convenience. Furthermore, independent network data source 216 may comprise a telepresence signal so that a user may work at a remote user station, such as user station 114, while broadcasting audio and/or video signals to the user's work office, for instance.
If manager module 110 is to have the capability to dynamically select, arrange, and modify data source displays 444(1)-(N) and/or independent network data source display 646, one or more of data sources 102(1)-(N) or independent network data source 216 may be chosen, selected, highlighted, or the like. In some implementations, cursor controller 230 may help to provide this capability.
Furthermore, point-and-select cursor 848 may serve to navigate over the output of display 226 and select one or more of a plurality of data source displays 444(1)-(N) and 646. A data source, such as the data sources 102(1)-(N) and 216, may be selected by clicking a portion of a cursor controller 230 or by merely moving cursor 848 over a certain data source display 444(1)-(N) and 646. Once selected, the user may have access to that data source (including its video and audio signals) and may also now have the ability to modify and/or arrange the data source display.
Continuing the example discussed immediately above, data source display 444(2) may correspond to data source 102(2), which may be performing work-related VPN operations. For example, the user may have a work-related spreadsheet open on data source 102(2). Nevertheless, the user may choose to focus on the contents of independent network data source 216 and may thus move data source display 444(2), corresponding to the spreadsheet, away from the center of display 226. Again, the user may also choose to lessen the size and possibly the resolution of the display 444(2). Other data source displays, such as the data source displays 444(1), 444(N), and 646 may likewise be selected, arranged, re-arranged, and modified.
Furthermore, selecting data source 102(2) may also serve to allow for use or modification of the data source. For example, if the user in the current example selects data source display 444(2) with point-and-select cursor 848, then the user may be able to operate on the work-related spreadsheet. Similarly, selection of another data source display, such as the data source display 444(1), 444(N), or 646, may allow for operation of the selected data source.
Data source display windows 950(1)(1)-(N)(N) may be modified, selected, arranged, and re-arranged in display 226 in many of the ways discussed above in regards to data source displays 444(1)-(N). For instance, once a data source 102(1)-(N) is selected in the manner discussed above, its data source display windows may likewise be arranged and modified.
Manager module 110, in conjunction with audio system 228, may be further configured to manage audio signals from data sources 102(1)-(N) and/or independent network data source 216. Sound from the multiple data sources may project in unison, singly, or in any user-chosen combination. In some implementations, the sound emanating from one data source will project from the direction of the location of the corresponding data source display.
Alternatively, sound may appear to originate from the direction that a user is looking. This may be accomplished by the conjunction of manager module 110, audio system 228, as well as camera 232, which may serve to notify manager module 110 of the user's current head orientation. This implementation may also be accomplished with the help of a user's chair. For instance, the direction from which sound emanates may be related to the current rotation of a user's chair.
Manager module 110 and audio system 228 may also manage sound volumes in a multitude of ways. For instance, the volume of a data source may be related—possibly directly related—to the size of the corresponding data source display.
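The "directly related" volume policy described above may be sketched as follows; the proportional mapping between window area and volume is an illustrative assumption:

```python
def volume_for_window(window_area, total_area, max_volume=1.0):
    """Scale a data source's playback volume in direct proportion to
    the fraction of the display its window occupies. A minimal sketch
    of the size-to-volume policy described above."""
    if total_area <= 0:
        return 0.0
    return max_volume * min(1.0, window_area / total_area)
```

Enlarging a data source display window would then automatically raise that source's volume, and shrinking it would lower it.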
Furthermore, it is noted that manager module 110 may store user preferences, as discussed above. Each user may have one or more stored preference settings, which may comprise audio or video preferences, or the like. Thus, if sensor 236, which may comprise a weight-based chair sensor, recognizes User #1, then User #1's preference settings may be activated. Depending on the time of day or possibly User #1's selection, one of a plurality of different preference settings may be selected. For instance, User #1 may have a work preference setting and a recreational preference setting. In the work preference setting, display 226 may enlarge a work-related data source display and may lessen sizes and/or resolutions of others. In a recreational preference setting, all data source displays may be enlarged. If, for example, sensor 236 detects User #1 during the daytime, then the work preference setting may be enabled. Alternatively, User #1 may choose his or her own preference setting, such as recreational. While these implementations involve automatically setting preferences, it is to be understood that preferences may also be manually configured.
Other preference settings may be default settings. For instance, when a videoconference call is received, a data source display window associated with the call may increase in size, while others may decrease in size. When such a video call is received, all other data source displays may also disappear, so as to limit the calling party's visual access to the user's data. This may be used, for example, if one or more of data source displays comprise proprietary information. Furthermore, when a video or an audio phone call is received, all other sound coming from other data sources may be muted. It is to be understood that these specific capabilities are but some non-limiting examples of possible configurations.
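A minimal sketch of the default incoming-call behavior described above, hiding and muting every other data source while the call is active, appears below; the window names and state fields are hypothetical:

```python
def apply_call_preferences(windows, call_window):
    """On an incoming video call, show only the call's window and mute
    every other source, limiting the caller's view of the user's data.
    A sketch of the default setting described above; field names are
    illustrative assumptions."""
    result = {}
    for name in windows:
        is_call = (name == call_window)
        result[name] = {"visible": is_call, "muted": not is_call}
    return result
```

When the call ends, the manager module could restore the user's prior arrangement from stored preferences.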
It is noted that the various modules shown herein may be implemented in hardware, software, or any combination thereof. Additionally, these modules are shown as separate items only for convenience of reference and description, and these representations do not limit possible implementations of the teachings herein. Instead, various functions described with these modules could be combined or separated as appropriate in a given implementation, without departing from the scope and spirit of the description herein.
CONCLUSION
Although techniques and devices for managing data from multiple data sources have been described in language specific to certain features and methods, it is to be understood that the features defined in the appended claims are not necessarily limited to the specific features and methods described. Rather, the specific features and methods are disclosed as illustrative forms of implementing the claimed subject matter.
Claims
1. A system, comprising:
- a processor; and
- memory for storing code that when executed causes the processor to perform operations, the operations comprising:
- receiving broadband signals from a broadband connection to a first network;
- receiving virtual private network signals over a separate connection to a second network;
- determining a current location matches a telepresence location;
- activating a telepresence module when the current location matches the telepresence location;
- receiving telepresence signals over an independent connection to a third network when the telepresence module is activated;
- selecting a single display device of multiple displays communicating with the processor; and
- outputting the broadband signals, the virtual private network signals, and the telepresence signals to a single connection to the single display device.
2. The system according to claim 1, wherein the operations further comprise receiving a sensor measurement at the telepresence module.
3. The system according to claim 1, wherein the operations further comprise generating a tiled presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
4. The system according to claim 1, wherein the operations further comprise proportionally displaying the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
5. The system according to claim 1, wherein the operations further comprise generating an aligned presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
6. The system according to claim 1, wherein the operations further comprise minimizing a presentation of one of the broadband signals, the virtual private network signals, and the telepresence signals output to the single display device.
7. The system according to claim 1, wherein the operations further comprise expanding a presentation of one of the broadband signals, the virtual private network signals, and the telepresence signals output to the single display device.
8. A method, comprising:
- receiving, at a device, broadband signals from a broadband connection to a first network;
- receiving, at the device, virtual private network signals over a separate connection to a second network;
- determining a current location of the device matches a telepresence location;
- activating a telepresence module stored in memory of the device when the current location matches the telepresence location;
- receiving, at the device, telepresence signals over an independent connection to a third network when the telepresence module is activated;
- receiving a user input at the device that selects a single display device of multiple display devices connected to the device; and
- outputting the broadband signals, the virtual private network signals, and the telepresence signals to a single connection from the device to the single display device.
9. The method according to claim 8, further comprising receiving a sensor measurement at the telepresence module.
10. The method according to claim 8, further comprising generating a tiled presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
11. The method according to claim 8, further comprising proportionally displaying the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
12. The method according to claim 8, further comprising generating an aligned presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
13. The method according to claim 8, further comprising minimizing a presentation of one of the broadband signals, the virtual private network signals, and the telepresence signals output to the single display device.
14. The method according to claim 8, further comprising expanding a presentation of one of the broadband signals, the virtual private network signals, and the telepresence signals output to the single display device.
15. A computer readable memory storing instructions that when executed cause a processor to perform operations, the operations comprising:
- receiving, at a device, broadband signals from a broadband connection to a first network;
- receiving, at the device, virtual private network signals over a separate connection to a second network;
- determining a current location of the device matches a telepresence location;
- activating a telepresence module stored in memory of the device when the current location matches the telepresence location;
- receiving, at the device, telepresence signals over an independent connection to a third network when the telepresence module is activated;
- receiving a user input at the device that selects a single display device of multiple display devices connected to the device; and
- outputting the broadband signals, the virtual private network signals, and the telepresence signals to a single connection from the device to the single display device.
16. The computer readable memory according to claim 15, wherein the operations further comprise receiving a sensor measurement at the telepresence module.
17. The computer readable memory according to claim 15, wherein the operations further comprise generating a tiled presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
18. The computer readable memory according to claim 15, wherein the operations further comprise proportionally displaying the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
19. The computer readable memory according to claim 15, wherein the operations further comprise generating an aligned presentation of the broadband signals, the virtual private network signals, and the telepresence signals on the single display device.
20. The computer readable memory according to claim 15, wherein the operations further comprise minimizing a presentation of one of the broadband signals, the virtual private network signals, and the telepresence signals output to the single display device.
Type: Application
Filed: Jan 24, 2013
Publication Date: May 30, 2013
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P. (Atlanta, GA)
Application Number: 13/748,621
International Classification: H04L 29/08 (20060101);