PRESENTATION OF A VIRTUAL KEYBOARD ON A MULTIPLE DISPLAY DEVICE
Methods and systems for presenting a user interface that includes a virtual keyboard are provided. More particularly, a virtual keyboard can be presented using one or more touch screens included in a multiple display device. The content of the virtual keyboard can be controlled in response to user input. Configurable portions of the virtual keyboard include selectable rows of virtual keys. In addition, whether selectable rows of virtual keys and/or a suggestion bar are displayed together with the standard character and control keys of the virtual keyboard can be determined in response to context or user input.
The present application claims the benefit of and priority, under 35 U.S.C. §119(e), to U.S. Provisional Application Ser. No. 61/539,884, filed Sep. 27, 2011, entitled “MOBILE DEVICE,” which is incorporated herein by reference in its entirety for all that it teaches and for all purposes.
BACKGROUND
A substantial number of handheld computing devices, such as cellular phones, tablets, and E-Readers, make use of a touch screen display not only to deliver display information to the user but also to receive inputs from user interface commands. While touch screen displays may increase the configurability of the handheld device and provide a wide variety of user interface options, this flexibility typically comes at a price. The dual use of the touch screen to provide content and receive user commands, while flexible for the user, may obfuscate the display and cause visual clutter, thereby leading to user frustration and loss of productivity.
The small form factor of handheld computing devices requires a careful balancing between the displayed graphics and the area provided for receiving inputs. On the one hand, the small display constrains the display space, which may increase the difficulty of interpreting actions or results. On the other hand, a virtual keypad or other user interface scheme is superimposed on or positioned adjacent to an executing application, requiring the application to be squeezed into an even smaller portion of the display.
This balancing act is particularly difficult for single display touch screen devices. Single display touch screen devices are crippled by their limited screen space. When users are entering information into the device, through the single display, the ability to interpret information in the display can be severely hampered, particularly when a complex interaction between display and interface is required.
SUMMARY
There is a need for a dual multi-display handheld computing device that provides for enhanced power and/or versatility compared to conventional single display handheld computing devices. These and other needs are addressed by the various aspects, embodiments, and/or configurations of the present disclosure. Also, while the disclosure is presented in terms of exemplary embodiments, it should be appreciated that individual aspects of the disclosure can be separately claimed.
Additionally, it is desirable to have a multi-display device that can selectively display a virtual keyboard in different device orientations. Moreover, it is desirable to provide a virtual keyboard with configurable keys or sets of keys. In accordance with embodiments of the present disclosure, the set of keys selected at a particular time can be determined by context or user input. Moreover, user input can include a swipe gesture entered on the set of keys, or a press of a specially provided key. In addition, it is desirable to provide a suggestion bar to assist a user in entering words while typing.
In some embodiments, a method for presenting a user interface on a multiple display device is provided, the method comprising:
presenting a virtual keyboard, wherein the virtual keyboard is presented in at least a first screen of the multiple display device;
presenting a selected first one of a plurality of slider bars above a first row of the keyboard, wherein the first one of the plurality of slider bars includes a first set of virtual keys;
receiving input selecting a second one of the plurality of slider bars;
in response to the input selecting the second one of the plurality of slider bars, presenting the selected second one of the plurality of slider bars above the first row of the keyboard, wherein the selected second one of the plurality of slider bars includes a second set of virtual keys;
and presenting an input field, wherein at least a portion of the input field is displayed in at least a second screen of the multiple display device.
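By way of illustration only, the following minimal Java sketch shows one way the slider-bar selection recited above could be implemented. All identifiers (VirtualKeyboardController, SliderBar, onSliderBarSelected, render) are assumptions introduced for this sketch and are not identifiers from the disclosure.

    import java.util.List;

    public class VirtualKeyboardController {

        /** A selectable row of virtual keys presented above the keyboard's top row. */
        public static class SliderBar {
            final String name;
            final List<String> keys;  // the set of virtual keys carried by this bar

            SliderBar(String name, List<String> keys) {
                this.name = name;
                this.keys = keys;
            }
        }

        private final List<SliderBar> sliderBars;  // the plurality of slider bars
        private SliderBar current;                 // the bar currently presented

        public VirtualKeyboardController(List<SliderBar> sliderBars) {
            this.sliderBars = sliderBars;
            this.current = sliderBars.get(0);      // present a selected first bar
        }

        /** Called when input selects a second one of the plurality of slider bars. */
        public void onSliderBarSelected(int index) {
            current = sliderBars.get(index);
            render();
        }

        private void render() {
            // Present current.keys above the first (top) row of the keyboard on
            // the first screen; the input field is presented, at least in part,
            // on the second screen of the multiple display device.
        }
    }

In such a sketch, a swipe gesture entered on the bar or a press of a specially provided key, as described above, would map to a call to onSliderBarSelected.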
In some embodiments, a device is provided, the device comprising:
a screen, including:
a first touch screen display;
a second touch screen display;
memory;
a processor;
application programming stored in the memory and executed by the processor, wherein the application programming is operable to:
- present a virtual keyboard, wherein in a dual landscape orientation the virtual keyboard is presented within the first touch screen display, wherein in a dual portrait orientation the keyboard is presented within a portion of the first touch screen display and within a portion of the second touch screen display;
- in response to input from a user, one of:
- present one of a plurality of selectable slider bars above a top row of keys included in the virtual keyboard;
- present one of the plurality of selectable slider bars above a top row of keys included in the virtual keyboard and a suggestion bar above the one of the plurality of selectable slider bars;
- present neither the one of a plurality of selectable slider bars nor the suggestion bar.
In some embodiments, a computer-readable medium having stored thereon computer-executable instructions is provided, the computer-executable instructions causing a processor to execute a method for presenting a user interface, the computer-executable instructions comprising:
instructions to display a keyboard comprising a plurality of rows of virtual keys, wherein in a first operating mode the keyboard is displayed within a first touch screen display, and wherein in a second operating mode a first part of the keyboard is displayed within a portion of the first touch screen display and a second part of the keyboard is displayed within a portion of a second touch screen display;
instructions to display at least one of a slider bar and a suggestion bar above a top row of virtual keys of the keyboard.
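A minimal sketch of the two operating modes recited above follows, assuming a simple Display abstraction; all names are illustrative assumptions rather than identifiers from the disclosure.

    // Display is a stand-in abstraction assumed for this sketch.
    interface Display {
        void draw(String content);
    }

    class KeyboardLayoutPolicy {
        enum Mode { DUAL_LANDSCAPE, DUAL_PORTRAIT }

        /** Lay out the keyboard's rows of virtual keys according to the mode. */
        void layoutKeyboard(Mode mode, Display first, Display second) {
            if (mode == Mode.DUAL_LANDSCAPE) {
                // First operating mode: the entire keyboard within the first display.
                first.draw("keyboard");
            } else {
                // Second operating mode: a first part of the keyboard on a portion
                // of the first display, a second part on a portion of the second.
                first.draw("keyboard: left part");
                second.draw("keyboard: right part");
            }
            // A slider bar and/or suggestion bar, when enabled, is drawn above
            // the top row of virtual keys.
        }
    }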
The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably.
The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material”.
The term “computer-readable medium” as used herein refers to any tangible storage and/or transmission medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium.
When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
The term “desktop” refers to a metaphor used to portray systems. A desktop is generally considered a “surface” that typically includes pictures, called icons, widgets, folders, etc. that can activate and/or show applications, windows, cabinets, files, folders, documents, and other graphical items. The icons are generally selectable to initiate a task through user interface interaction to allow a user to execute applications or conduct other operations.
The term “screen,” “touch screen,” or “touchscreen” refers to a physical structure that includes one or more hardware components that provide the device with the ability to render a user interface and/or receive user input. A screen can encompass any combination of gesture capture region, a touch sensitive display, and/or a configurable area. The device can have one or more physical screens embedded in the hardware. However, a screen may also include an external peripheral device that may be attached to and detached from the device. In embodiments, multiple external devices may be attached to the device. Thus, in embodiments, the screen can enable the user to interact with the device by touching areas on the screen and can provide information to a user through a display. The touch screen may sense user contact in a number of different ways, such as by a change in an electrical parameter (e.g., resistance or capacitance), acoustic wave variations, infrared radiation proximity detection, light variation detection, and the like. In a resistive touch screen, for example, normally separated conductive and resistive metallic layers in the screen pass an electrical current. When a user touches the screen, the two layers make contact in the contacted location, whereby a change in electrical field is noted and the coordinates of the contacted location calculated. In a capacitive touch screen, a capacitive layer stores electrical charge, which is discharged to the user upon contact with the touch screen, causing a decrease in the charge of the capacitive layer. The decrease is measured, and the contacted location coordinates determined. In a surface acoustic wave touch screen, an acoustic wave is transmitted through the screen, and the acoustic wave is disturbed by user contact. A receiving transducer detects the user contact instance and determines the contacted location coordinates.
The term “display” refers to a portion of one or more screens used to display the output of a computer to a user. A display may be a single-screen display or a multi-screen display, referred to as a composite display. A composite display can encompass the touch sensitive display of one or more screens. A single physical screen can include multiple displays that are managed as separate logical displays. Thus, different content can be displayed on the separate displays although part of the same physical screen.
The term “displayed image” refers to an image produced on the display. A typical displayed image is a window or desktop. The displayed image may occupy all or a portion of the display.
The term “display orientation” refers to the way in which a rectangular display is oriented by a user for viewing. The two most common types of display orientation are portrait and landscape. In landscape mode, the display is oriented such that the width of the display is greater than the height of the display (such as a 4:3 ratio, which is 4 units wide and 3 units tall, or a 16:9 ratio, which is 16 units wide and 9 units tall). Stated differently, the longer dimension of the display is oriented substantially horizontal in landscape mode while the shorter dimension of the display is oriented substantially vertical. In the portrait mode, by contrast, the display is oriented such that the width of the display is less than the height of the display. Stated differently, the shorter dimension of the display is oriented substantially horizontal in the portrait mode while the longer dimension of the display is oriented substantially vertical.
The term “composite display” refers to a logical structure that defines a display that can encompass one or more screens. A multi-screen display can be associated with a composite display that encompasses all the screens. The composite display can have different display characteristics based on the various orientations of the device.
The term “gesture” refers to a user action that expresses an intended idea, action, meaning, result, and/or outcome. The user action can include manipulating a device (e.g., opening or closing a device, changing a device orientation, moving a trackball or wheel, etc.), movement of a body part in relation to the device, movement of an implement or tool in relation to the device, audio inputs, etc. A gesture may be made on a device (such as on the screen) or with the device to interact with the device.
The term “module” as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element.
The term “gesture capture” refers to the sensing or other detection of an instance and/or type of user gesture. The gesture capture can occur in one or more areas of the screen. A gesture region can be on the display, where it may be referred to as a touch sensitive display, or off the display, where it may be referred to as a gesture capture area.
A “multi-screen application” or “multiple-display application” refers to an application that is capable of multiple modes. The multi-screen application mode can include, but is not limited to, a single screen mode (where the application is displayed on a single screen) or a composite display mode (where the application is displayed on two or more screens). A multi-screen application can have different layouts optimized for the mode. Thus, the multi-screen application can have different layouts for a single screen or for a composite display that can encompass two or more screens. The different layouts may have different screen/display dimensions and/or configurations on which the user interfaces of the multi-screen applications can be rendered. The different layouts allow the application to optimize the application's user interface for the type of display, e.g., single screen or multiple screens. In single screen mode, the multi-screen application may present one window pane of information. In a composite display mode, the multi-screen application may present multiple window panes of information or may provide a larger and a richer presentation because there is more space for the display contents. The multi-screen applications may be designed to adapt dynamically to changes in the device and the mode depending on which display (single or composite) the system assigns to the multi-screen application. In alternative embodiments, the user can use a gesture to request the application transition to a different mode, and, if a display is available for the requested mode, the device can allow the application to move to that display and transition modes.
A “single-screen application” refers to an application that is capable of single screen mode. Thus, the single-screen application can produce only one window and may not be capable of different modes or different display dimensions. A single-screen application may not be capable of the several modes discussed with the multi-screen application.
The term “window” refers to a, typically rectangular, displayed image on at least part of a display that contains or provides content different from the rest of the screen. The window may obscure the desktop.
The terms “determine”, “calculate” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
It shall be understood that the term “means” as used herein shall be given its broadest possible interpretation in accordance with 35 U.S.C., Section 112, Paragraph 6. Accordingly, a claim incorporating the term “means” shall cover all structures, materials, or acts set forth herein, and all of the equivalents thereof. Further, the structures, materials or acts and the equivalents thereof shall include all those described in the summary of the invention, brief description of the drawings, detailed description, abstract, and claims themselves.
The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and/or configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and/or configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a letter that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
DETAILED DESCRIPTION
Presented herein are embodiments of a device. The device can be a communications device, such as a cellular telephone, or other smart device. The device can include two screens that are oriented to provide several unique display configurations. Further, the device can receive user input in unique ways. The overall design and functionality of the device provides for an enhanced user experience, making the device more useful and more efficient.
Mechanical Features:
Primary screen 104 also includes a configurable area 112 that has been configured for specific inputs when the user touches portions of the configurable area 112. Secondary screen 108 also includes a configurable area 116 that has been configured for specific inputs. Areas 112a and 116a have been configured to receive a “back” input indicating that a user would like to view information previously displayed. Areas 112b and 116b have been configured to receive a “menu” input indicating that the user would like to view options from a menu. Areas 112c and 116c have been configured to receive a “home” input indicating that the user would like to view information associated with a “home” view. In other embodiments, areas 112a-c and 116a-c may be configured, in addition to the configurations described above, for other types of specific inputs including controlling features of device 100, some non-limiting examples including adjusting overall system power, adjusting the volume, adjusting the brightness, adjusting the vibration, selecting displayed items (on either of screen 104 or 108), operating a camera, operating a microphone, and initiating/terminating telephone calls. Also, in some embodiments, areas 112a-c and 116a-c may be configured for specific inputs depending upon the application running on device 100 and/or information displayed on touch sensitive displays 110 and/or 114.
In addition to touch sensing, primary screen 104 and secondary screen 108 may also include areas that receive input from a user without requiring the user to touch the display area of the screen. For example, primary screen 104 includes gesture capture area 120, and secondary screen 108 includes gesture capture area 124. These areas are able to receive input by recognizing gestures made by a user without the need for the user to actually touch the surface of the display area. In comparison to touch sensitive displays 110 and 114, the gesture capture areas 120 and 124 are commonly not capable of rendering a displayed image.
The two screens 104 and 108 are connected together by a hinge 128.
Device 100 also includes a number of buttons 158.
There are also a number of hardware components within device 100.
The overall design of device 100 allows it to provide additional functionality not available in other communication devices. Some of the functionality is based on the various positions and orientations that device 100 can have. One such position is an “open” position, in which the primary screen 104 and the secondary screen 108 are generally on the same plane.
In addition to the open position, device 100 may also have a “closed” position, in which the primary screen 104 and the secondary screen 108 are back-to-back in different planes.
Device 100 can also be used in an “easel” position, in which the primary screen 104 and the secondary screen 108 are angled with respect to one another such that the device 100 can sit on a surface in an easel-like configuration.
Transitional states are also possible. When the position sensors 172A and 172B and/or accelerometer indicate that the screens are being closed or folded (from open), a closing transitional state is recognized. Conversely, when the position sensors 172A and 172B indicate that the screens are being opened or unfolded (from closed), an opening transitional state is recognized. The closing and opening transitional states are typically time-based, or have a maximum time duration from a sensed starting point. Normally, no user input is possible when one of the closing and opening states is in effect. In this manner, incidental user contact with a screen during the closing or opening function is not misinterpreted as user input. In embodiments, another transitional state is possible when the device 100 is closed. This additional transitional state allows the display to switch from one screen 104 to the second screen 108 when the device 100 is closed based on some user input, e.g., a double tap on the screen 110, 114.
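The time-based nature of these transitional states can be illustrated with a short sketch in which touch input is discarded for an assumed maximum duration after a fold or unfold is sensed. The 700 ms bound and all names are assumptions, not values from the disclosure.

    class TransitionGuard {
        private static final long MAX_TRANSITION_MS = 700; // assumed maximum duration
        private long transitionStartMs = -1;

        /** Called when the position sensors/accelerometer sense folding or unfolding. */
        void onTransitionSensed(long nowMs) {
            transitionStartMs = nowMs;
        }

        /** Touch input is discarded while a closing/opening transition is in effect. */
        boolean shouldAcceptTouch(long nowMs) {
            return transitionStartMs < 0 || nowMs - transitionStartMs > MAX_TRANSITION_MS;
        }
    }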
As can be appreciated, the description of device 100 is made for illustrative purposes only, and the embodiments are not limited to the specific mechanical features described above.
Hardware Features:
A third region of the touch sensitive screens 104 and 108 may comprise a configurable area 112, 116. The configurable area 112, 116 is capable of receiving input and has display or limited display capabilities. In embodiments, the configurable area 112, 116 may present different input options to the user. For example, the configurable area 112, 116 may display buttons or other relatable items. Moreover, the identity of displayed buttons, or whether any buttons are displayed at all within the configurable area 112, 116 of a touch sensitive screen 104 or 108, may be determined from the context in which the device 100 is used and/or operated. In an exemplary embodiment, the touch sensitive screens 104 and 108 comprise liquid crystal display devices extending across at least those regions of the touch sensitive screens 104 and 108 that are capable of providing visual output to a user, and a capacitive input matrix over those regions of the touch sensitive screens 104 and 108 that are capable of receiving input from the user.
One or more display controllers 216a, 216b may be provided for controlling the operation of the touch sensitive screens 104 and 108, including input (touch sensing) and output (display) functions. In an exemplary embodiment, a separate display controller 216a, 216b is provided for each touch sensitive screen 104 and 108.
The processor 204 may comprise a general purpose programmable processor or controller for executing application programming or instructions. In accordance with at least some embodiments, the processor 204 may include multiple processor cores, and/or implement multiple virtual processors. In accordance with still other embodiments, the processor 204 may include multiple physical processors. As a particular example, the processor 204 may comprise a specially configured application specific integrated circuit (ASIC) or other integrated circuit, a digital signal processor, a controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like. The processor 204 generally functions to run programming code or instructions implementing various functions of the device 100.
A communication device 100 may also include memory 208 for use in connection with the execution of application programming or instructions by the processor 204, and for the temporary or long term storage of program instructions and/or data. As examples, the memory 208 may comprise RAM, DRAM, SDRAM, or other solid state memory. Alternatively or in addition, data storage 212 may be provided. Like the memory 208, the data storage 212 may comprise a solid state memory device or devices. Alternatively or in addition, the data storage 212 may comprise a hard disk drive or other random access memory.
In support of communications functions or capabilities, the device 100 can include a cellular telephony module 228. As examples, the cellular telephony module 228 can comprise a GSM, CDMA, FDMA and/or analog cellular telephony transceiver capable of supporting voice, multimedia and/or data transfers over a cellular network. Alternatively or in addition, the device 100 can include an additional or other wireless communications module 232. As examples, the other wireless communications module 232 can comprise a Wi-Fi, BLUETOOTH™, WiMax, infrared, or other wireless communications link. The cellular telephony module 228 and the other wireless communications module 232 can each be associated with a shared or a dedicated antenna 224.
A port interface 252 may be included. The port interface 252 may include proprietary or universal ports to support the interconnection of the device 100 to other devices or components, such as a dock, which may or may not include additional or different capabilities from those integral to the device 100. In addition to supporting an exchange of communication signals between the device 100 and another device or component, the docking port 136 and/or port interface 252 can support the supply of power to or from the device 100. The port interface 252 can also comprise an intelligent element, including a docking module, for controlling communications or other interactions between the device 100 and a connected device or component.
An input/output module 248 and associated ports may be included to support communications over wired networks or links, for example with other communication devices, server devices, and/or peripheral devices. Examples of an input/output module 248 include an Ethernet port, a Universal Serial Bus (USB) port, Institute of Electrical and Electronics Engineers (IEEE) 1394, or other interface.
An audio input/output interface/device(s) 244 can be included to provide analog audio to an interconnected speaker or other device, and to receive analog audio input from a connected microphone or other device. As an example, the audio input/output interface/device(s) 244 may comprise an associated amplifier and analog to digital converter. Alternatively or in addition, the device 100 can include an integrated audio input/output device 256 and/or an audio jack for interconnecting an external speaker or microphone. For example, an integrated speaker and an integrated microphone can be provided, to support near talk or speaker phone operations.
Hardware buttons 158 can be included, for example, for use in connection with certain control operations. Examples include a master power switch, volume control, etc.
The device 100 can also include a global positioning system (GPS) receiver 236. In accordance with embodiments of the present invention, the GPS receiver 236 may further comprise a GPS module that is capable of providing absolute location information to other components of the device 100. An accelerometer(s) 176 may also be included. For example, in connection with the display of information to a user and/or other functions, a signal from the accelerometer 176 can be used to determine an orientation and/or format in which to display that information to the user.
Embodiments of the present invention can also include one or more position sensor(s) 172. The position sensor 172 can provide a signal indicating the position of the touch sensitive screens 104 and 108 relative to one another. This information can be provided as an input, for example to a user interface application, to determine an operating mode, characteristics of the touch sensitive displays 110, 114, and/or other device 100 operations. As examples, a screen position sensor 172 can comprise a series of Hall effect sensors, a multiple position switch, an optical switch, a Wheatstone bridge, a potentiometer, or other arrangement capable of providing a signal indicating which of multiple relative positions the touch screens are in.
Communications between various components of the device 100 can be carried by one or more buses 222. In addition, power can be supplied to the components of the device 100 from a power source and/or power control module 260. The power control module 260 can, for example, include a battery, an AC to DC converter, power control logic, and/or ports for interconnecting the device 100 to an external source of power.
Device State:
In state 304, the device is in a closed state, with the device 100 generally oriented in the portrait direction and the primary screen 104 and the secondary screen 108 back-to-back in different planes.
In the closed state, the device can also move to a transitional state where the device remains closed but the display is moved from one screen 104 to another screen 108 based on a user input, e.g., a double tap on the screen 110, 114. Still another embodiment includes a bilateral state. In the bilateral state, the device remains closed, but a single application displays at least one window on both the first display 110 and the second display 114. The windows shown on the first and second display 110, 114 may be the same or different based on the application and the state of that application. For example, while acquiring an image with a camera, the device may display the view finder on the first display 110 and display a preview for the photo subjects (full screen and mirrored left-to-right) on the second display 114.
In state 308, a transition state from the closed state 304 to the semi-open or easel state 312, the device 100 is shown opening, with the primary screen 104 and the secondary screen 108 being rotated about an axis coincident with the hinge. Upon entering the easel state 312, the primary screen 104 and the secondary screen 108 are separated from one another such that, for example, the device 100 can sit in an easel-like configuration on a surface.
In state 316, known as the modified easel position, the device 100 has the primary screen 104 and the secondary screen 108 in a similar relative relationship to one another as in the easel state 312, with the difference being that one of the primary screen 104 or the secondary screen 108 is placed on a surface.
State 320 is the open state where the primary screen 104 and the secondary screen 108 are generally on the same plane. From the open state, the device 100 can transition to the docked state 344 or the open landscape state 348. In the open state 320, the primary screen 104 and the secondary screen 108 are generally in a portrait-like orientation, while in the landscape state 348 the primary screen 104 and the secondary screen 108 are generally in a landscape-like orientation.
State 324 is illustrative of a communication state, such as when an inbound or outbound call is being received or placed, respectively, by the device 100. While not illustrated for clarity, it should be appreciated that the device 100 can transition to the inbound/outbound call state 324 from any other state.
Transition state 322 illustratively shows primary screen 104 and the secondary screen 108 being closed upon one another for entry into, for example, the closed state 304.
As discussed, in the center portion of the chart 376, the inputs that are received enable the detection of a transition from, for example, a portrait open state to a landscape easel state—shown in bold—“HAT.” For this exemplary transition from the portrait open to the landscape easel state, a Hall Effect sensor (“H”), an accelerometer (“A”) and a timer (“T”) input may be needed. The timer input can be derived from, for example, a clock associated with the processor.
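The following sketch illustrates how the Hall effect (“H”), accelerometer (“A”), and timer (“T”) inputs might be combined to confirm such a transition. The tilt test, the 300 ms settling time, and all names are assumptions introduced for illustration, not values from the disclosure.

    class StateDetector {
        private long hingeEventMs = -1;

        /** "H": a Hall effect sensor reports hinge movement. */
        void onHallEffectChange(long nowMs) {
            hingeEventMs = nowMs;
        }

        /** Combine H, A, and T inputs to confirm a landscape easel state. */
        boolean isLandscapeEasel(float[] gravity, long nowMs) {
            boolean hingeMoved = hingeEventMs >= 0;                          // H
            boolean landscapeTilt = Math.abs(gravity[0]) > Math.abs(gravity[1]); // A
            boolean settled = hingeMoved && nowMs - hingeEventMs > 300;      // T
            return hingeMoved && landscapeTilt && settled;
        }
    }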
In addition to the portrait and landscape states, a docked state 368 is also shown that is triggered based on the receipt of a docking signal 372.
User Interaction:
The device 100 can receive a variety of gestures from a user, including a tap, a drag, a flick, a pinch, and a spread. These gestures may also be combined in any manner to produce additional functional results.
The functional result of receiving a gesture can vary depending on a number of factors, including a state of the device 100, display 110, 114, or screen 104, 108, a context associated with the gesture, or sensed location of the gesture. The state of the device commonly refers to one or more of a configuration of the device 100, a display orientation, and user and other inputs received by the device 100. Context commonly refers to one or more of the particular application(s) selected by the gesture and the portion(s) of the application currently executing, whether the application is a single- or multi-screen application, and whether the application is a multi-screen application displaying one or more windows in one or more screens or in one or more stacks. Sensed location of the gesture commonly refers to whether the sensed set(s) of gesture location coordinates are on a touch sensitive display 110, 114 or a gesture capture region 120, 124, whether the sensed set(s) of gesture location coordinates are associated with a common or different display or screen 104,108, and/or what portion of the gesture capture region contains the sensed set(s) of gesture location coordinates.
A tap, when received by a touch sensitive display 110, 114, can be used, for instance, to select an icon to initiate or terminate execution of a corresponding application, to maximize or minimize a window, to reorder windows in a stack, and to provide user input such as by keyboard display or other displayed image. A drag, when received by a touch sensitive display 110, 114, can be used, for instance, to relocate an icon or window to a desired location within a display, to reorder a stack on a display, or to span both displays (such that the selected window occupies a portion of each display simultaneously). A flick, when received by a touch sensitive display 110, 114 or a gesture capture region 120, 124, can be used to relocate a window from a first display to a second display or to span both displays (such that the selected window occupies a portion of each display simultaneously). Unlike the drag gesture, however, the flick gesture is generally not used to move the displayed image to a specific user-selected location but to a default location that is not configurable by the user.
The pinch gesture, when received by a touch sensitive display 110, 114 or a gesture capture region 120, 124, can be used to minimize or otherwise decrease the displayed area or size of a window (typically when received entirely by a common display), to switch windows displayed at the top of the stack on each display to the top of the stack of the other display (typically when received by different displays or screens), or to display an application manager (a “pop-up window” that displays the windows in the stack). The spread gesture, when received by a touch sensitive display 110, 114 or a gesture capture region 120, 124, can be used to maximize or otherwise increase the displayed area or size of a window, to switch windows displayed at the top of the stack on each display to the top of the stack of the other display (typically when received by different displays or screens), or to display an application manager (typically when received by an off-screen gesture capture region on the same or different screens).
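A simplified dispatch of these functional results might look as follows. The two-way common-display test is a simplification of the state, context, and location factors described above, and all names are illustrative assumptions.

    enum Gesture { TAP, DRAG, FLICK, PINCH, SPREAD }

    class GestureDispatcher {
        /** Map a sensed gesture to a functional result (state/context elided). */
        void dispatch(Gesture g, boolean receivedOnCommonDisplay) {
            switch (g) {
                case TAP:   selectOrToggle();        break; // e.g., select an icon
                case DRAG:  moveToUserLocation();    break; // user-chosen target
                case FLICK: moveToDefaultLocation(); break; // fixed, non-configurable target
                case PINCH:
                    if (receivedOnCommonDisplay) minimizeWindow();
                    else                         switchTopWindowsBetweenStacks();
                    break;
                case SPREAD:
                    if (receivedOnCommonDisplay) maximizeWindow();
                    else                         showApplicationManager();
                    break;
            }
        }

        private void selectOrToggle() {}
        private void moveToUserLocation() {}
        private void moveToDefaultLocation() {}
        private void minimizeWindow() {}
        private void switchTopWindowsBetweenStacks() {}
        private void maximizeWindow() {}
        private void showApplicationManager() {}
    }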
Firmware and Software:
The memory 508 of the device 100 can store one or more software components, which can include an operating system (OS) 516 with a kernel 518, device drivers 512, a framework 520, and one or more applications 564, described below.
The applications 564 can be any higher level software that executes particular functionality for the user. Applications 564 can include programs such as email clients, web browsers, texting applications, games, media players, office suites, etc. The applications 564 can be stored in an application store 560, which may represent any memory or data storage, and the management software associated therewith, for storing the applications 564. Once executed, the applications 564 may be run in a different area of memory 508.
The framework 520 may be any software or data that allows the multiple tasks running on the device to interact. In embodiments, at least portions of the framework 520 and the discrete components described hereinafter may be considered part of the OS 516 or an application 564. However, these portions will be described as part of the framework 520, but those components are not so limited. The framework 520 can include, but is not limited to, a Multi-Display Management (MDM) module 524, a Surface Cache module 528, a Window Management module 532, an Input Management module 536, a Task Management module 540, an Application Model Manager 542, a Display Controller, one or more frame buffers 548, a task stack 552, one or more window stacks 550 (which is a logical arrangement of windows and/or desktops in a display area), and/or an event buffer 556.
The MDM module 524 includes one or more modules that are operable to manage the display of applications or other data on the screens of the device. An embodiment of the MDM module 524 is described below.
The Surface Cache module 528 includes any memory or storage, and the software associated therewith, to store or cache one or more images of windows. A series of active and/or non-active windows (or other display objects, such as a desktop display) can be associated with each display. An active window (or other display object) is currently displayed. Non-active windows (or other display objects) were opened and, at some time, displayed, but are now not displayed. To enhance the user experience, before a window transitions from an active state to an inactive state, a “screen shot” of a last generated image of the window (or other display object) can be stored. The Surface Cache module 528 may be operable to store a bitmap of the last active image of a window (or other display object) not currently displayed. Thus, the Surface Cache module 528 stores the images of non-active windows (or other display objects) in a data store.
In embodiments, the Window Management module 532 is operable to manage the windows (or other display objects) that are active or not active on each of the displays. The Window Management module 532, based on information from the MDM module 524, the OS 516, or other components, determines when a window (or other display object) is visible or not active. The Window Management module 532 may then put a non-visible window (or other display object) in a “not active state” and, in conjunction with the Task Management module 540, suspend the application's operation. Further, the Window Management module 532 may assign, through collaborative interaction with the MDM module 524, a display identifier to the window (or other display object) or manage one or more other items of data associated with the window (or other display object). The Window Management module 532 may also provide the stored information to the application 564, the Task Management module 540, or other components interacting with or associated with the window (or other display object). The Window Management module 532 can also associate an input task with a window based on window focus and display coordinates within the motion space.
The Input Management module 536 is operable to manage events that occur with the device. An event is any input into the window environment, for example, a user interface interaction with a user. The Input Management module 536 receives the events and logically stores the events in an event buffer 556. Events can include such user interface interactions as a “down event,” which occurs when a screen 104, 108 receives a touch signal from a user, a “move event,” which occurs when the screen 104, 108 determines that a user's finger is moving across a screen(s), an “up event,” which occurs when the screen 104, 108 determines that the user has stopped touching the screen 104, 108, etc. These events are received, stored, and forwarded to other modules by the Input Management module 536. The Input Management module 536 may also map screen inputs to a motion space, which is the culmination of all physical and virtual displays available on the device.
The motion space is a virtualized space that includes all touch sensitive displays 110, 114 “tiled” together to mimic the physical dimensions of the device 100. For example, when the device 100 is unfolded, the motion space size may be 960×800, which may be the number of pixels in the combined display area for both touch sensitive displays 110, 114. If a user touches the first touch sensitive display 110 at location (40, 40), a full screen window can receive a touch event with location (40, 40). If a user touches the second touch sensitive display 114 at location (40, 40), the full screen window can receive a touch event with location (520, 40), because the second touch sensitive display 114 is to the right of the first touch sensitive display 110, so the device 100 offsets the touch by the first touch sensitive display's 110 width, which is 480 pixels. When a hardware event occurs with location info from a driver 512, the framework 520 can up-scale the physical location to the motion space because the location of the event may be different based on the device orientation and state. The motion space may be as described in U.S. patent application Ser. No. 13/187,026, filed Jul. 20, 2011, entitled “Systems and Methods for Receiving Gesture Inputs Spanning Multiple Input Devices,” which is hereby incorporated by reference in its entirety for all that it teaches and for all purposes.
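The 960×800 example above translates directly into a small coordinate-mapping sketch; the MotionSpace name and method signature are assumptions.

    class MotionSpace {
        static final int FIRST_DISPLAY_WIDTH = 480; // per the example above

        /** Map a per-display touch (x, y) to motion-space coordinates. */
        static int[] toMotionSpace(int displayIndex, int x, int y) {
            // The second display is tiled to the right of the first, so its
            // touches are offset by the first display's width.
            int offsetX = (displayIndex == 1) ? FIRST_DISPLAY_WIDTH : 0;
            return new int[] { x + offsetX, y };
        }
    }

    // toMotionSpace(0, 40, 40) yields (40, 40);
    // toMotionSpace(1, 40, 40) yields (520, 40), matching the example above.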
A task can be an application and a sub-task can be an application component that provides a window with which users can interact to do something, such as dial the phone, take a photo, send an email, or view a map. Each task may be given a window in which to draw a user interface. The window typically fills a display (for example, touch sensitive display 110,114), but may be smaller than the display 110,114 and float on top of other windows. An application usually consists of multiple sub-tasks that are loosely bound to each other. Typically, one task in an application is specified as the “main” task, which is presented to the user when launching the application for the first time. Each task can then start another task or sub-task to perform different actions.
The Task Management module 540 is operable to manage the operation of one or more applications 564 that may be executed by the device. Thus, the Task Management module 540 can receive signals to launch, suspend, terminate, etc. an application or application sub-tasks stored in the application store 560. The Task Management module 540 may then instantiate one or more tasks or sub-tasks of the application 564 to begin operation of the application 564. Further, the Task Management Module 540 may launch, suspend, or terminate a task or sub-task as a result of user input or as a result of a signal from a collaborating framework 520 component. The Task Management Module 540 is responsible for managing the lifecycle of applications (tasks and sub-task) from when the application is launched to when the application is terminated.
The processing of the Task Management Module 540 is facilitated by a task stack 552, which is a logical structure associated with the Task Management Module 540. The task stack 552 maintains the state of all tasks and sub-tasks on the device 100. When some component of the operating system 516 requires a task or sub-task to transition in its lifecycle, the OS 516 component can notify the Task Management Module 540. The Task Management Module 540 may then locate the task or sub-task, using identification information, in the task stack 552, and send a signal to the task or sub-task indicating what kind of lifecycle transition the task needs to execute. Informing the task or sub-task of the transition allows the task or sub-task to prepare for the lifecycle state transition. The Task Management Module 540 can then execute the state transition for the task or sub-task. In embodiments, the state transition may entail triggering the OS kernel 518 to terminate the task when termination is required.
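A minimal sketch of this lifecycle handling follows; the LifecycleState values and all names are assumptions beyond what the text states.

    import java.util.HashMap;
    import java.util.Map;

    class TaskStack {
        enum LifecycleState { LAUNCHED, RUNNING, SUSPENDED, TERMINATED }

        // Maintains the state of all tasks and sub-tasks on the device.
        private final Map<String, LifecycleState> tasks = new HashMap<>();

        void launch(String taskId) {
            tasks.put(taskId, LifecycleState.LAUNCHED);
        }

        /** The OS notifies the Task Management Module that a task must transition. */
        void requestTransition(String taskId, LifecycleState target) {
            if (!tasks.containsKey(taskId)) return; // locate the task by identifier
            // Signal the task so it can prepare (e.g., save state before suspension).
            notifyTask(taskId, target);
            tasks.put(taskId, target);              // execute the state transition
            if (target == LifecycleState.TERMINATED) {
                tasks.remove(taskId);               // e.g., via the OS kernel
            }
        }

        private void notifyTask(String taskId, LifecycleState target) {
            // Deliver the lifecycle signal to the task or sub-task.
        }
    }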
Further, the Task Management module 540 may suspend the application 564 based on information from the Window Management Module 532. Suspending the application 564 may maintain application data in memory but may limit or stop the application 564 from rendering a window or user interface. Once the application becomes active again, the Task Management module 540 can again trigger the application to render its user interface. In embodiments, if a task is suspended, the task may save the task's state in case the task is terminated. In the suspended state, the application task may not receive input because the application window is not visible to the user.
The frame buffer 548 is a logical structure(s) used to render the user interface. The frame buffer 548 can be created and destroyed by the OS kernel 518. However, the Display Controller 544 can write the image data, for the visible windows, into the frame buffer 548. A frame buffer 548 can be associated with one screen or multiple screens. The association of a frame buffer 548 with a screen can be controlled dynamically by interaction with the OS kernel 518. A composite display may be created by associating multiple screens with a single frame buffer 548. Graphical data used to render an application's window user interface may then be written to the single frame buffer 548, for the composite display, which is output to the multiple screens 104, 108. The Display Controller 544 can direct an application's user interface to a portion of the frame buffer 548 that is mapped to a particular display 110, 114, thus displaying the user interface on only one screen 104 or 108. The Display Controller 544 can extend the control over user interfaces to multiple applications, controlling the user interfaces for as many displays as are associated with a frame buffer 548 or a portion thereof. This approach compensates for the multiple physical screens 104, 108 that are in use by the software components above the Display Controller 544.
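The frame buffer association described above might be sketched as follows, assuming screens tiled left to right in equal-width portions; the tiling rule and all names are assumptions.

    class FrameBuffer {
        final int width, height;
        final int[] pixels;
        final int[] screenIds;  // one screen, or several for a composite display

        FrameBuffer(int width, int height, int... screenIds) {
            this.width = width;
            this.height = height;
            this.pixels = new int[width * height];
            this.screenIds = screenIds;
        }

        /** Write one row of a window's image into the portion mapped to a screen. */
        void writeRow(int screenId, int row, int[] rowPixels) {
            int panel = indexOf(screenId);
            int xOffset = panel * (width / screenIds.length); // left-to-right tiling
            System.arraycopy(rowPixels, 0, pixels, row * width + xOffset,
                    rowPixels.length);
        }

        private int indexOf(int screenId) {
            for (int i = 0; i < screenIds.length; i++) {
                if (screenIds[i] == screenId) return i;
            }
            throw new IllegalArgumentException("screen not mapped to this buffer");
        }
    }

    // A composite display: one 960x800 buffer driving both screens 104 and 108,
    // e.g., new FrameBuffer(960, 800, 104, 108); a single-screen association
    // would be new FrameBuffer(480, 800, 104).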
The Application Manager 562 is an application that provides a presentation layer for the window environment. Thus, the Application Manager 562 provides the graphical model for rendering by the Task Management Module 540. Likewise, the Desktop 566 provides the presentation layer for the Application Store 560. Thus, the desktop provides a graphical model of a surface having selectable application icons for the Applications 564 in the Application Store 560 that can be provided to the Window Management Module 532 for rendering.
Further, the framework can include an Application Model Manager (AMM) 542. The Application Manager 562 may interface with the AMM 542. In embodiments, the AMM 542 receives state change information from the device 100 regarding the state of applications (which are running or suspended). The AMM 542 can associate bit map images from the Surface Cache Module 528 to the tasks that are alive (running or suspended). Further, the AMM 542 can convert the logical window stack maintained in the Task Manager Module 540 to a linear (“film strip” or “deck of cards”) organization that the user perceives when using the off-screen gesture capture area 120 to sort through the windows. Further, the AMM 542 may provide a list of executing applications to the Application Manager 562.
An embodiment of the MDM module 524 can include a Display Configuration Module 568, a Preferences Module 572, a Device State Module 574, a Gesture Module 576, a Requirements Module 580, an Event Module 584, and a Binding Module 588, each of which is described below.
The Display Configuration Module 568 determines the layout for the display. In embodiments, the Display Configuration Module 568 can determine the environmental factors. The environmental factors may be received from one or more other MDM modules 524 or from other sources. The Display Configuration Module 568 can then determine from the list of factors the best configuration for the display. Some embodiments of the possible configurations and the factors associated therewith are described below.
The Preferences Module 572 is operable to determine display preferences for an application 564 or other component. For example, an application can have a preference for Single or Dual displays. The Preferences Module 572 can determine an application's display preference (e.g., by inspecting the application's preference settings) and may allow the application 564 to change to a mode (e.g., single screen, dual screen, max, etc.) if the device 100 is in a state that can accommodate the preferred mode. However, some user interface policies may disallow a mode even if the mode is available. As the configuration of the device changes, the preferences may be reviewed to determine if a better display configuration can be achieved for an application 564.
The Device State Module 574 is operable to determine or receive the state of the device. The state of the device can be as described above.
The Gesture Module 576 is shown as part of the MDM module 524, but, in embodiments, the Gesture module 576 may be a separate Framework 520 component that is separate from the MDM module 524. In embodiments, the Gesture Module 576 is operable to determine if the user is conducting any actions on any part of the user interface. In alternative embodiments, the Gesture Module 576 receives user interface actions from the configurable area 112,116 only. The Gesture Module 576 can receive touch events that occur on the configurable area 112,116 (or possibly other user interface areas) by way of the Input Management Module 536 and may interpret the touch events (using direction, speed, distance, duration, and various other parameters) to determine what kind of gesture the user is performing. When a gesture is interpreted, the Gesture Module 576 can initiate the processing of the gesture and, by collaborating with other Framework 520 components, can manage the required window animation. The Gesture Module 576 collaborates with the Application Model Manager 542 to collect state information with respect to which applications are running (active or paused) and the order in which applications must appear when a user gesture is performed. The Gesture Module 576 may also receive references to bitmaps (from the Surface Cache Module 528) and live windows so that when a gesture occurs it can instruct the Display Controller 544 how to move the window(s) across the display 110,114. Thus, suspended applications may appear to be running when those windows are moved across the display 110,114.
Further, the Gesture Module 576 can receive task information either from the Task Management Module 540 or the Input Management module 536. The gestures may be as defined above.
The Requirements Module 580, similar to the Preferences Module 572, is operable to determine display requirements for an application 564 or other component. An application can have a set display requirement that must be observed. Some applications require a particular display orientation. For example, the application “Angry Birds” can only be displayed in landscape orientation. This type of display requirement can be determined or received, by the Requirements Module 580. As the orientation of the device changes, the Requirements Module 580 can reassert the display requirements for the application 564. The Display Configuration Module 568 can generate a display configuration that is in accordance with the application display requirements, as provided by the Requirements Module 580.
The Event Module 584, similar to the Gesture Module 576, is operable to determine one or more events occurring with an application or other component that can affect the user interface. Thus, the Event Module 584 can receive event information either from the event buffer 556 or the Task Management module 540. These events can change how the tasks are bound to the displays. The Event Module 584 can collect state change information from other Framework 520 components and act upon that state change information. In an example, when the phone is opened or closed or when an orientation change has occurred, a new message may be rendered in a secondary screen. The state change based on the event can be received and interpreted by the Event Module 584. The information about the events then may be sent to the Display Configuration Module 568 to modify the configuration of the display.
The Binding Module 588 is operable to bind the applications 564 or the other components to the configuration determined by the Display Configuration Module 568. A binding associates, in memory, the display configuration for each application with the display and mode of the application. Thus, the Binding Module 588 can associate an application with a display configuration for the application (e.g., landscape, portrait, multi-screen, etc.). Then, the Binding Module 588 may assign a display identifier to the display. The display identifier associates the application with a particular display of the device 100. This binding is then stored and provided to the Display Controller 544, the other components of the OS 516, or other components to properly render the display. The binding is dynamic and can change or be updated based on configuration changes associated with events, gestures, state changes, application preferences or requirements, etc.
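A binding of this kind can be sketched as a simple in-memory association; the record shape and all names are assumptions introduced for illustration.

    import java.util.HashMap;
    import java.util.Map;

    class BindingModule {
        enum Config { PORTRAIT, LANDSCAPE, MULTI_SCREEN }

        static class Binding {
            final Config config;
            final int displayId;  // associates the app with a particular display
            Binding(Config config, int displayId) {
                this.config = config;
                this.displayId = displayId;
            }
        }

        private final Map<String, Binding> bindings = new HashMap<>();

        /** Bind an application to the configuration chosen for it. */
        void bind(String appId, Config config, int displayId) {
            bindings.put(appId, new Binding(config, displayId)); // stored for rendering
        }

        /** Bindings are dynamic: update on events, gestures, or state changes. */
        void rebind(String appId, Config config, int displayId) {
            bind(appId, config, displayId);
        }
    }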
User Interface Configurations:
When the device 100 is in a first state, data can be displayed in a first portrait configuration 604 or a second portrait configuration 608.
It may be possible to display similar or different data in either the first or second portrait configuration 604, 608. It may also be possible to transition between the first portrait configuration 604 and the second portrait configuration 608 by providing the device 100 a user gesture (e.g., a double tap gesture), a menu selection, or other means. Other suitable gestures may also be employed to transition between configurations. Furthermore, it may also be possible to transition the device 100 from the first or second portrait configuration 604, 608 to any other configuration described herein, depending upon which state the device 100 is moved into.
An alternative output configuration may be accommodated by the device 100 being in a second state.
The device 100 manages desktops and/or windows with at least one window stack 700, 728.
A window stack 700, 728 may have various arrangements or organizational structures. In one arrangement, each touch sensitive display is associated with its own window stack 700, 728, forming a dual-stack system. In yet another arrangement, a single window stack 760 is maintained for a composite display 764, as described below.
In the embodiment shown, the desktop 786 is the lowest display or “brick” in the window stack 760. Thereupon, window 1 782, window 2 782, window 3 768, and window 4 770 are layered. Window 1 782, window 3 768, window 2 782, and window 4 770 only occupy a portion of the composite display 764. Thus, another part of the stack 760 includes window 8 774 and windows 5 through 7 shown in section 790. Only the top window in any portion of the composite display 764 is actually rendered and displayed.
When a new window is opened, the newly activated window is generally positioned at the top of the stack. However, where and how the window is positioned within the stack can be a function of the orientation of the device 100, the context of what programs, functions, software, etc. are being executed on the device 100, how the stack is positioned when the new window is opened, etc. To insert the window in the stack, the position in the stack for the window is determined and the touch sensitive display 110, 114 to which the window is associated may also be determined. With this information, a logical data structure for the window can be created and stored. When user interface or other events or tasks change the arrangement of windows, the window stack(s) can be changed to reflect the change in arrangement. It should be noted that these same concepts described above can be used to manage the one or more desktops for the device 100.
A logical data structure 800 for managing the arrangement of windows or desktops in a window stack is shown in
A window identifier 804 can include any identifier (ID) that uniquely identifies the associated window in relation to other windows in the window stack. The window identifier 804 can be a globally unique identifier (GUID), a numeric ID, an alphanumeric ID, or other type of identifier. In embodiments, the window identifier 804 can be one, two, or any number of digits based on the number of windows that can be opened. In alternative embodiments, the size of the window identifier 804 may change based on the number of windows opened. While the window is open, the window identifier 804 may be static and remain unchanged.
Dimensions 808 can include dimensions for a window in the composite display 760. For example, the dimensions 808 can include coordinates for two or more corners of the window or may include one coordinate and dimensions for the width and height of the window. These dimensions 808 can delineate what portion of the composite display 760 the window may occupy, which may be the entire composite display 760 or only part of the composite display 760. For example, window 4 770 may have dimensions 808 that indicate that the window 770 will occupy only part of the display area for composite display 760, as shown in
A stack position identifier 812 can be any identifier that can identify the position in the stack for the window or may be inferred from the window's control record within a data structure, such as a list or a stack. The stack position identifier 812 can be a GUID, a numeric ID, an alphanumeric ID, or other type of identifier. Each window or desktop can include a stack position identifier 812. For example, as shown in
A display identifier 816 can identify that the window or desktop is associated with a particular display, such as the first display 110 or the second display 114, or the composite display 760 composed of both displays. While this display identifier 816 may not be needed for a multi-stack system, as shown in
Similar to the display identifier 816, an active indicator 820 may not be needed with the dual stack system of
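Collecting the fields described above, the logical data structure 800 can be sketched as a single record. The Python encoding below is a minimal illustration, assuming GUID-style window identifiers and rectangle dimensions; the field names are invented to mirror the numbered elements and are not taken from the disclosure.

```python
import uuid
from dataclasses import dataclass, field
from typing import Tuple

# Hypothetical rendering of logical data structure 800; one record per window or desktop.
@dataclass
class WindowRecord:
    window_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # 804: e.g., a GUID
    dimensions: Tuple[int, int, int, int] = (0, 0, 0, 0)               # 808: x, y, width, height
    stack_position: int = 0   # 812: position of the window within the window stack
    display_id: int = 0       # 816: first display, second display, or composite display
    active: bool = False      # 820: set when the window is actually rendered and displayed

# Example: a window occupying part of a composite display, second from the top of the stack.
w4 = WindowRecord(dimensions=(480, 0, 480, 800), stack_position=1, display_id=2, active=True)
print(w4)
```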
An embodiment of a method 900 for creating a window stack is shown in
A multi-screen device 100 can receive activation of a window, in step 908. In embodiments, the multi-screen device 100 can receive activation of a window by receiving an input from the touch sensitive display 110 or 114, the configurable area 112 or 116, a gesture capture region 120 or 124, or some other hardware sensor operable to receive user interface inputs. The processor, in executing the Task Management Module 540, may receive the input. The Task Management Module 540 can interpret the input as requesting an application task to be executed that will open a window in the window stack.
In embodiments, the Task Management Module 540 places the user interface interaction in the task stack 552 to be acted upon by the Display Configuration Module 568 of the Multi-Display Management Module 524. Further, the Task Management Module 540 waits for information from the Multi-Display Management Module 524 to send instructions to the Window Management Module 532 to create the window in the window stack.
The Multi-Display Management Module 524, upon receiving instruction from the Task Management Module 540, determines to which portion of the composite display 760 the newly activated window should be associated, in step 912. For example, window 4 770 is associated with a portion of the composite display 764. In embodiments, the device state module 574 of the Multi-Display Management Module 524 may determine how the device is oriented or what state the device is in, e.g., open, closed, portrait, etc. Further, the preferences module 572 and/or requirements module 580 may determine how the window is to be displayed. The gesture module 576 may determine the user's intentions about how the window is to be opened based on the type of gesture and the location where the gesture is made.
The Display Configuration Module 568 may use the input from these modules and evaluate the current window stack 760 to determine the best place and the best dimensions, based on a visibility algorithm, to open the window. Thus, the Display Configuration Module 568 determines the best place to put the window at the top of the window stack 760, in step 916. The visibility algorithm, in embodiments, determines, for all portions of the composite display, which windows are at the top of the stack. For example, the visibility algorithm determines that window 3 768, window 4 770, and window 8 774 are at the top of the stack 760 as viewed in
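The visibility determination can be sketched as follows. This is a minimal illustration, assuming axis-aligned rectangular windows and a stack ordered top-first; the function names and the example geometry are assumptions, not details taken from the disclosure.

```python
from typing import Dict, List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # x, y, width, height

def overlaps(a: Rect, b: Rect) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def top_windows(stack: List[Tuple[str, Rect]],
                portions: Dict[str, Rect]) -> Dict[str, Optional[str]]:
    # The stack is ordered top-first; the first window covering a portion "wins"
    # and is the only window actually rendered for that portion.
    return {name: next((wid for wid, rect in stack if overlaps(rect, portion)), None)
            for name, portion in portions.items()}

# Assumed geometry: a composite display made of two 480x800 screen portions.
portions = {"screen_1": (0, 0, 480, 800), "screen_2": (480, 0, 480, 800)}
stack = [("window_3", (0, 0, 480, 800)),    # top of the stack
         ("window_4", (480, 0, 480, 800)),
         ("window_1", (0, 0, 960, 800))]    # lower in the stack, fully obscured
print(top_windows(stack, portions))  # {'screen_1': 'window_3', 'screen_2': 'window_4'}
```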
In embodiments, the Task Management Module 540 sends the window stack information and instructions to render the window to the Window Management Module 532. The Window Management Module 532 and the Task Management Module 540 can create the logical data structure 800, in step 924. Both the Task Management Module 540 and the Window Management Module 532 may create and manage copies of the window stack. These copies of the window stack can be synchronized or kept similar through communications between the Window Management Module 532 and the Task Management Module 540. Thus, the Window Management Module 532 and the Task Management Module 540, based on the information determined by the Multi-Display Management Module 524, can assign dimensions 808, a stack position identifier 812 (e.g., window 1 782, window 4 770, etc.), a display identifier 816 (e.g., touch sensitive display 1 110, touch sensitive display 2 114, composite display identifier, etc.), and an active indicator 820, which is generally always set when the window is at the “top” of the stack. The logical data structure 800 may then be stored by both the Window Management Module 532 and the Task Management Module 540. Further, the Window Management Module 532 and the Task Management Module 540 may thereafter manage the window stack and the logical data structure(s) 800.
Demand for portable electronic devices with high levels of functionality continues to rise, and personal electronic devices continue to become increasingly portable. While the computing power, battery life, screen size, and overall functionality of portable phones and smart phones continue to increase, user reliance on these devices increases as well. Many users of such devices rely heavily on such devices for general communication, accessing the internet, cloud computing, and accessing various locally stored information such as contact information, files, music, pictures, and the like. It is often desirable, therefore, to connect such heavily relied-on devices to an additional computing device or display, such as a monitor or tablet device, such as a SmartPad (SP) 1000 (see
Accordingly, it is desirable for the device 100 to be able to interface with an additional device, such as the SmartPad 1000, that enables functionality similar to, for example, both a tablet computer system and a smart phone. Furthermore, a need exists for the above-described device to allow for various pre-existing features of both devices, such as sending and receiving phone calls, and to allow access to applications running on the device 100. A need also exists for the above device 100 to provide the benefits of both a tablet computer system and a cellular phone in one integrated device by allowing for common operations and functionality without compromising the form factor of the device.
One exemplary embodiment is directed toward a selectively removable device and smartpad system. The smartpad system is discussed in greater detail hereinafter, and can have various features for complementing the communications device, such as a smart phone or device 100. For example, the smartpad may supplement the device 100 by providing increased screen size, increased processor size, increased battery or power supply, or the like. Similarly, the device 100 may complement the SP 1000 by providing connectivity through one or more wireless networks, access to various stored information, and the like. It will therefore be expressly recognized that two or more devices of the present invention may be provided in a connected or docked and generally symbiotic relationship. It will further be recognized that the devices provide various features, benefits and functionality in their independent state(s).
In accordance with one exemplary embodiment, the device 100 is capable of being received by the SP 1000 through a recessed feature of the SP 1000 having corresponding dimensions to the device 100. In one exemplary embodiment, the SP 1000 is provided and preferably sized for receiving a predetermined device 100. In alternative embodiments, however, it is contemplated that the SP 1000 is capable of receiving a plurality of communications devices of different sizes. In such embodiments, the SP 1000 may receive communications devices of various sizes by, for example, the inclusion of additional elements, such as spacers and various adjustable features.
In accordance with one exemplary embodiment, the device 100 and SP 1000 have a docking relationship that is established when the device 100 is connected to the SP 1000 during various modes of operation. For example, in one embodiment, a system is provided comprising the SP 1000 and the device 100, the SP 1000 capable of physically receiving the device 100, wherein the device 100 is operable as the primary computing device. In such an embodiment, the SP 1000 may, for example, simply provide enhanced audio and visual features for the device 100, which comprises its own CPU, memory, and the like. It is further contemplated that the system can be placed in a mode of operation wherein the device 100, docked to the SP 1000, operates in a more passive mode where, for example, the device 100 draws power from the SP 1000, such as to recharge a battery of the device 100.
In accordance with another exemplary embodiment, the device 100 and SP 1000 are provided wherein the device 100 is received or docked with the SP 1000 and wherein a substantial area of the device 100 is positioned within one or more compartments of the SP 1000. For example, whereas various known devices comprise docking features which require or result in the docked item being generally exposed, thereby substantially altering the external dimensions of the host device and/or creating a potential for damaging one or both devices upon impact, an exemplary embodiment contemplates the SP 1000 receiving the device 100 in a manner such that the external dimensions of the SP 1000 are not substantially altered when the devices are connected. In such an arrangement, the device 100 and associated connection means are generally protected and the SP 1000 is allowed to substantially maintain its original shape. In accordance with one exemplary embodiment, the SP 1000 is capable of receiving and/or docking the device 100 wherein the device 100 is received in lockable association with the SP 1000. As used herein, the term “lockable” is not intended to designate or limit the arrangement to any particular mechanism. Rather, lockable is intended to refer to various embodiments as described herein and will be recognized by one of ordinary skill in the art. In one embodiment, the device 100 is connectable to the SP 1000 wherein the SP 1000 comprises extension springs for selectively securing the device 100 in a docked manner and an ejection feature for releasing the device 100 from the SP 1000. Moreover, as will be described in greater detail below, it should be appreciated that the device 100 and SP 1000 can communicate using wired and/or wireless technology(ies) with equal success. Moreover, and in accordance with another exemplary embodiment, the hinged device 100 is selectively connectable to the SP 1000 wherein the device 100 is received by the SP 1000 in an open position and wherein one or more preexisting ports of the device 100 correspond with internal receiving features of the SP 1000, such that the device 100 and the SP 1000 may be operated simultaneously in various modes of use.
In accordance with some exemplary embodiments, the SP 1000 is provided with an eject or release button to facilitate the removal of a stored or docked device 100.
While the following description uses the term “smart” in conjunction with the display device 1000, it is to be appreciated that this term does not necessarily connote that there is intelligence in the SmartPad. Rather, it is to be appreciated that there can be “intelligence,” including one or more of a processor(s), memory, storage, display drivers, etc., in the SmartPad, and/or one or more of these elements shared with the device 100 via, for example, one or more of a port, bus, connection, or the like. In general, any one or more of the functions of the device 100 is extendable to the SmartPad 1000 and vice versa.
The exemplary SmartPad 1000 includes a screen 1004, a SP touch sensitive display 1010, a SP configurable area 1008, SP gesture capture region(s) 1012 and a SP camera 1016. The SP 1000 also includes a port (not visible in this orientation) adapted to receive the device 100 as illustrated at least in
The device 100 docks with the SmartPad 1000 via the port on the SP 1000 and the corresponding port 136 on device 100. As discussed, port 136 in some embodiments is an input/output port (I/O port) that allows the device 100 to be connected to other peripheral devices, such as a display, keyboard, printing device and/or SP 1000. In accordance with one exemplary embodiment, the docking is accomplished by the device 100 sliding into the left-hand side of the SP 1000, with the device 100 being in an open state and the device 100 engaging a port in the SP 1000 corresponding to port 136. In accordance with one exemplary embodiment, the device 100 engages a doored cassette-like slot in the SP 1000 into which the device 100 slides. (See for example
The SP 1000 includes a screen 1004. In some embodiments, the entire front surface of the SP 1000 may be touch sensitive and capable of receiving input by a user touching the front surface of the screen 1004. The screen 1004 includes touch sensitive display 1010, which, in addition to being touch sensitive, is also capable of displaying information to a user.
The screen 1004 also includes a configurable area 1008 that has been configured for specific inputs when the user touches portions of the configurable area 1008. Area 1012a is configured to receive a “back” input indicating that a user would like to view information previously displayed. Area 1012b is configured to receive a “menu” input indicating that the user would like to view options from a menu. Area 1012c is configured to receive a “home” input indicating that the user would like to view information associated with a “home” view.
In other embodiments, areas 1012a-c may be configured, in addition to the configurations described above, for other types of specific inputs including controlling features of device 100 and/or device 1000, some non-limiting examples including adjusting overall system power, adjusting the volume, adjusting the brightness, adjusting the vibration, selection of displayed items on screen 1004, operating the SP camera 1016, operating a microphone, and initiating/terminating telephone calls. Also, in some embodiments, areas 1012a-c may be configured for specific inputs depending upon the application running on device 100/SP 1000 and/or information displayed on the touch sensitive display 1010.
In addition to touch sensing, screen 1004 may also include areas that receive input from a user without requiring the user to touch the display area of the screen. For example, screen 1004 can include gesture capture area 1012. These areas are able to receive input by recognizing gestures made by a user without the need for the user to actually touch the surface of the display area. In comparison to the touch sensitive display 1010, the gesture capture area 1012 may not be capable of rendering a displayed image.
While not illustrated, there may also be a number of hardware components within SP 1000. As illustrated in
In general, the touch sensitive display 1010 may comprise a full color, touch sensitive display. A second area within each touch sensitive screen 1004 may comprise the SP gesture capture region 1012. The SP gesture capture region 1012 may comprise an area or region that is outside of the SP touch sensitive display 1010 area that is capable of receiving input, for example in the form of gestures provided by a user. However, the SP gesture capture region 1012 does not necessarily include pixels that can perform a display function or capability.
A third region of the SP touch sensitive screen 1004 may comprise the configurable area 1012. The configurable area 1012 is capable of receiving input and has display or limited display capabilities. In embodiments, the configurable area 1012 may present different input options to the user. For example, the configurable area 1012 may display buttons or other relatable items. Moreover, the identity of displayed buttons, or whether any buttons are displayed at all within the configurable area 1012 of the SP touch sensitive screen 1004, may be determined from the context in which the device 1000 is used and/or operated. In an exemplary embodiment, the touch sensitive screen 1004 comprises liquid crystal display devices extending across at least those regions of the touch sensitive screen 1004 that are capable of providing visual output to a user, and a capacitive input matrix over those regions of the touch sensitive screen 1004 that are capable of receiving input from the user.
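One way to picture the three regions of the SP touch sensitive screen 1004 is as a simple hit test on a touch coordinate. The sketch below is hypothetical; the band heights and names are invented for illustration and do not reflect actual SP 1000 geometry.

```python
# Invented vertical layout: display region on top, then a gesture capture
# band, then the configurable area. Extents are arbitrary example values.
def classify_touch(y: int, display_height: int = 800,
                   gesture_band: int = 60) -> str:
    if y < display_height:
        return "touch_sensitive_display"  # SP touch sensitive display 1010
    if y < display_height + gesture_band:
        return "gesture_capture_region"   # input only, no display capability
    return "configurable_area"            # limited display, context-dependent buttons

for y in (400, 830, 900):
    print(y, classify_touch(y))
```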
As discussed above with reference to
In addition to the above, the SP touch sensitive screen 1004 may also have an area that assists a user with identifying which portion of the screen is in focus. This could be a bar of light or, in general, an indicator that identifies which one or more portions of the SP touch sensitive screen 1004 are in focus. (See for example,
One or more display controllers (such as display controllers 216a, 216b and/or dedicated display controller(s) on the SP 1000) may be provided for controlling the operation of the touch sensitive screen 1004 including input (touch sensing) and output (display) functions.
In accordance with one exemplary embodiment, a separate touch screen controller is provided for the SP 1000 in addition to each of the controllers for the touch screens 104 and 108. In accordance with alternate embodiments, a common or shared touch screen controller may be used to control any one or more of the touch sensitive screens 104 and 108, and/or 1004. In accordance with still other embodiments, the functions of the touch screen controllers may be incorporated into other components, such as a processor and memory or dedicated graphics chip(s).
In a similar manner, the SP 1000 may include a processor complementary to the processor 204, either of which may comprise a general purpose programmable processor or controller for executing application programming or instructions. In accordance with at least some embodiments, the processors may include multiple processor cores, and/or implement multiple virtual processors. In accordance with still other embodiments, the processors may include multiple physical processors. As a particular example, the processors may comprise a specially configured application specific integrated circuit (ASIC) or other integrated circuit, a digital signal processor, a controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like. The processors generally function to run programming code or instructions implementing various functions of the device 100 and/or SP 1000.
The SP 1000 can also optionally be equipped with an audio input/output interface/device(s) (not shown) to provide analog audio to an interconnected speaker or other device, and to receive analog audio input from a connected microphone or other device. As an example, the audio input/output interface/device(s) 256 may comprise an associated amplifier and analog to digital converter usable with SP 1000. Alternatively or in addition, the device 100 can include an integrated audio input/output device 256 and/or an audio jack for interconnecting an external speaker or microphone via SP 1000. For example, an integrated speaker and an integrated microphone can be provided, to support near talk or speaker phone operations.
Hardware buttons (not shown), similar to hardware buttons 158, can be included, for example, for use in connection with certain control operations. Examples include a master power switch, volume control, etc., as described in conjunction with
Communications between various components of the device 100 and SP 1000 can be carried by one or more buses and/or communications channels. In addition, power can be supplied to one or more of the components of the device 100 and SP 1000 from a power source and/or power control module 260. The power control module 260 and/or device 100 and/or SP 1000 can, for example, include a battery, an AC to DC converter, power control logic, and/or ports for interconnecting the device 100/1000 to an external source of power.
The middleware 520 may also be any software or data that allows the multiple processes running on the devices to interact. In embodiments, at least portions of the middleware 520 and the discrete components described herein may be considered part of the OS 516 or an application 564. However, these portions will be described as part of the middleware 520, but those components are not so limited. The middleware 520 can include, but is not limited to, a Multi-Display Management (MDM) class 524, a Surface Cache class 528, a Window Management class 532, an Activity Management class 536, an Application Management class 540, a display control block, one or more frame buffers 548, an activity stack 552, and/or an event buffer 556—all of the functionality thereof extendable to the SP 1000. A class can be any group of two or more modules that have related functionality or are associated in a software hierarchy.
The MDM class 524 also includes one or more modules that are operable to manage the display of applications or other data on the screen of the SP 1000. An embodiment of the MDM class 524 is described in conjunction with
In conjunction with the docking of device 100 with SP 1000, one or more of the devices can begin power management. For example, one or more of the device 100 and SP 1000 can include power supplies, such as batteries, solar cells, or in general any electrical supply, any one or more of which is usable to supply power to one or more of the device 100 and SP 1000. Furthermore, through the use of, for example, an AC power adaptor connected to port 1208, the SP 1000 can supply power to device 100, such as to charge device 100. It will be appreciated that the power management functionality described herein can be distributed between one or more of the device 100 and SP 1000, with power being sharable between the two devices.
In addition to power management functions, upon the device 100 being docked with the SP 1000, the displays on device 100 can be turned off to, for example, save power. Furthermore, electrical connections are established between the device 100 and SP 1000 such that the speaker, microphone, display, input capture region(s), inputs, and the like, received by SP 1000 are transferable to device 100. Moreover, the display on device 1000 is enabled such that information that would have been displayed on one or more of the touch sensitive displays 110 and 114 is displayed on touch sensitive display 1010. As will be discussed in greater detail herein, the SP 1000 can emulate the dual display configuration of the device 100 on the single display 1010.
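The docking behavior described above can be summarized in a small, hypothetical handler. Every name and field below is an invented stand-in; the sketch merely restates the sequence: device displays off, SP display on, inputs routed to the device 100.

```python
# Invented device/SP state dictionaries; the keys summarize the dock sequence.
def on_dock(device: dict, smartpad: dict) -> tuple:
    device["displays_on"] = False           # displays 110/114 turned off to save power
    smartpad["display_1010_on"] = True      # display 1010 takes over rendering
    smartpad["input_sink"] = "device_100"   # SP inputs are transferred to device 100
    smartpad["emulation"] = "dual_display"  # SP can emulate the dual display configuration
    return device, smartpad

print(on_dock({"displays_on": True}, {"display_1010_on": False}))
```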
The SP 1000 can optionally be equipped with the headphone jack 1212 and power button 1216. Moreover, any hardware buttons or user input buttons on the device 100 could be extended to and replicated on the SP 1000.
This dock event between the device 100 and SP 1000 can be seen as states 336 or 344 in
In accordance with one exemplary embodiment, the accelerometer 176 on device 100 is used to determine the orientation of both the device 100 and SP 1000, and consequently the orientation of the touch screen display 1010. Therefore, the accelerometer(s) 176 outputs a signal that is used in connection with the display of information to control the orientation and/or format in which information is to be displayed to the user on display 1010. As is to be appreciated, reorientation can include one or more of a portrait to landscape conversion, a landscape to portrait conversion, a resizing, a re-proportioning and/or a redrawing of the window(s) associated with the application(s).
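As a minimal sketch, and assuming a two-axis gravity reading from the accelerometer 176, orientation can be derived by comparing axis magnitudes. The function names and threshold logic are assumptions for illustration only.

```python
# Assumed two-axis gravity reading from accelerometer 176 (units arbitrary).
def orientation_from_accelerometer(ax: float, ay: float) -> str:
    # Gravity dominates the axis along which the device is held upright.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

def on_accelerometer_reading(ax: float, ay: float, current: str) -> str:
    new = orientation_from_accelerometer(ax, ay)
    if new != current:
        # Reorientation may involve a resizing, re-proportioning, and/or
        # redrawing of the window(s) shown on display 1010.
        print(f"re-rendering display 1010 in {new} orientation")
    return new

state = on_accelerometer_reading(0.1, 9.7, current="landscape")  # -> portrait
```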
On reorienting of the running application(s), the application(s) is displayed on display 1010 on SP 1000.
In accordance with an optional exemplary embodiment, priority can be given to the application that is in focus. For example, and using again applications “B” and “C” as illustrated in
In accordance with another optional embodiment, the application in focus could be displayed in full-screen mode on display 1010 with the application(s) not in focus placed into a window stack that is, for example, in a carousel-type arrangement as discussed hereinafter.
Displaying of the application(s) is managed by one or more of the display controller 544, framework 520, window management module 532, display configuration module 568, as well as middleware 520 and associated classes. In single application mode, all dual screen capable applications can be launched in either a dual screen or max mode, where the application is displayed substantially filling the display 1010. This is applicable to when the SP 1000 is either in the portrait mode, as illustrated in
Therefore, in one exemplary embodiment, when a single application is executed, a single application can launch in the full screen mode and can be correlated to the max mode as discussed in relation to
This resizing can occur regardless of whether a native application on the device 100 actually supports the orientation of the SP 1000. Therefore, even if the application does not support a particular orientation on device 100, the display configuration module 568 can appropriately re-render and/or re-size the window for the application for appropriate display on the SP 1000.
In accordance with a first example, the first portion is allocated one third of the screen 1010's resolution, while the second portion 1708 is allocated two thirds of the screen real estate.
In accordance with another example, the screen 1010 is split 50/50. In accordance with yet another example, the first portion could be allocated 70% of the screen 1010's real estate, while the second portion 1708 could be allocated 30%. The managing and resizing of these windows can again be done in cooperation with the display configuration module 568, as well as the windows management module 532 and display controllers for successful rendering of the location of the window(s) on the SP 1000.
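The example allocations above (one third/two thirds, 50/50, and 70/30) amount to partitioning the width of the display 1010. The following sketch computes the pixel widths for each split; the 1280-pixel total width is an assumed value, not a specification of the SP 1000.

```python
# Compute the pixel widths of the first and second portions of display 1010
# for each example split; the 1280-pixel total width is an assumed value.
def split_widths(total_width: int, first_fraction: float) -> tuple:
    first = round(total_width * first_fraction)
    return first, total_width - first

for fraction in (1 / 3, 0.5, 0.7):   # 1/3-2/3, 50/50, and 70/30 splits
    print(f"{fraction:.2f} -> {split_widths(1280, fraction)}")
```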
As will be appreciated, and in a manner similar to the operation of device 1000, should the SP 1000 change orientation (e.g., from landscape to portrait or vice versa) the window(s) for the application(s) can be redrawn in the appropriate orientation taking into account window prioritization based on whether a particular application and current focus is for a dual screen application or a single screen application.
Focus can also be taken into consideration when determining which window of the application should be displayed when the SP 1000 is in the portrait position. For example, if the application is an e-mail client, and the application is natively displayed on dual screens on device 100 (a first screen being directed toward showing inbox content, and the second screen being a preview window for a specific item in the inbox), the system can evaluate which window is currently in focus, and ensure that window is displayed in the portrait max mode when the SP 1000 is in the portrait orientation.
In
Some other exemplary embodiments of windows management within the SP 1000 upon the device 100 docking with the SP 1000 are as follows: For example, when a device 100 is docked to the SP 1000 with the SP 1000 in a portrait orientation and two single-screen applications are running on the device 100, the application in focus is placed in a lower portion of the display 1010, and the application not in focus is placed in an upper portion of the display 1010. In another exemplary scenario, where the device 100 is docked to a portrait-oriented SP 1000, one dual-screen application is running on the device 100, and the SP 1000 is in a dual application mode, gravity drop is applied as discussed herein.
In another exemplary scenario, where the device 100 is running two single-screen applications, and the SP 1000 is in a landscape dual application mode, the first application is assigned to a first portion of the display 1010 and the second application is assigned to a second portion of the display 1010.
In yet another exemplary scenario where the device 100 is running one dual-screen application and the SP 1000 is in dual application landscape mode, both screens of the dual screen application can be shown on the SP 1000.
Stickiness can also apply to the SP 1000 such that, for example, when a first application is in focus, upon docking to a single application mode SP 1000, that application remains visible after docking. As another example of stickiness, if a second application is in focus upon docking to a single application mode SP 1000, the second application remains visible after docking.
In accordance with another example, when the device 100, running one dual-screen application, is docked to a landscape-oriented SP 1000 in max mode, the windows are re-oriented to be side-by-side, as opposed to one above the other.
In
In general, in the embodiments illustrated in
In this mode, each application has the ability to determine how the application appears in each orientation (e.g., portrait and landscape).
To change focus, a user could use any of the gestures discussed herein or could, for example, simply touch the area where application C is displayed, thereby changing focus to application C, at which point the focus indicator 2616 would be correspondingly relocated adjacent to application C.
In the multiple application mode, in both portrait and landscape orientations, each application could have its own associated window stack as shown in
In
In step S4510, carousel movement of the “panels” shown in the display can be initiated through user input, such as a gesture. Control then continues to step S4512 where the control sequence ends.
In step S4614, carousel movement of the panels can be effected by, for example, an input of a gesture by the user. Control then continues to step S4616 where the control sequence ends.
To avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Specific details are set forth to provide an understanding of the present disclosure. It should however be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein.
For example, the smartpad could have multiple physical and/or logical screens/displays. Additionally, the smartpad could be used with one or more input devices such as a stylus, mouse, or the like. Moreover, the smartpad could be populated with a processor, memory, communications means and the like that would allow for stand-alone operation. Even further, the smartpad could be associated or docked with other types of communications devices such as a smartphone such that the smartpad could be used as a display and/or I/O interface therefor.
An embodiment of a method 4900 for executing an application is shown in
An application is executed, in step 4908. In embodiments, a processor 204 receives indication to execute an application through a user interface 110, 114, 112, 116, etc. The indication can be a selection of an icon associated with the application. In other embodiments, the indication can be a signal generated from another application or event, such as receiving an e-mail or other communication, which causes the application to execute automatically. The processor 204 can retrieve the application 564a from the application store 560 and begin its execution. In executing the application 564a, a user interface can be generated for a user.
In creating a user interface, the application 564a can begin executing to create a manifest, in step 4912. A manifest is a data structure that indicates the capabilities of the application 564a. The manifest can generally be created from the resources in the resources directory of the application 564a. The resources directory can indicate the types of modes, locations, or other indications for how the user interface should be configured in the multi-display device 100. For example, the several modes can include: “classic mode” that indicates that the application 564a is capable of being displayed on a single screen or display 110/114; “dual mode” that indicates that the application 564a is capable of being displayed on two or more displays 110 and 114; “max mode” that indicates the application 564a is capable of being displayed or desires to be displayed across multiple displays 110 and 114; and/or “bilateral mode” that indicates that the application 564a is capable of being displayed on 2 or more displays 110 and 114 when the device 100 is in easel mode (see
Similarly, the manifest can include a desired or allowed location within the displays 110/114. The possible locations can include: “left”, which indicates that the application 564a desires to be displayed on the left display 110; “right”, which indicates that the application 564a desires to be displayed on the right display 114; and/or other indications of where a location should be including possible “top” and/or “bottom” of one or more of the displays 110/114.
The application 564a can also indicate that it desires to be displayed in a “minimum” window, which is a window that occupies less than the full area of a single display. There may be other modes possible for the application 564a, which may be included in the manifest. The manifest can be sent from the application 564a to the multi-display management module 524.
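The disclosure does not give a concrete encoding for the manifest, so the dictionary form below is an assumption. It simply collects the modes, location preference, and minimum-window indication described above before handing them to a stand-in for the multi-display management module 524.

```python
# Assumed dictionary encoding of a manifest for application 564a.
manifest = {
    "application": "564a",
    "modes": ["classic", "dual", "max", "bilateral"],  # capabilities from the resources directory
    "location": "left",      # or "right", "top", "bottom"
    "minimum_window": True,  # occupies less than the full area of a single display
}

def send_manifest(manifest: dict) -> None:
    # Stand-in for delivery to the multi-display management module 524.
    print("manifest sent:", manifest)

send_manifest(manifest)
```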
The multi-display management module 524 can receive the manifest, in step 4916. In receiving the manifest, the multi-display management module 524 can use the information to determine a display binding for the application 564a. The manifest may be received more than once from the application 564a based on changes in how the application 564a is being executed, where the application 564a desires to have a different display setting for the new mode. Thus, with the manifest, the application 564a can indicate to the multi-display management module 524 what is desired for the application's user interface and how best to display it. The multi-display management module 524 can use the information in the manifest to determine the best fit for the user interface depending on how the device 100 is currently configured.
The multi-display management module 524 can determine the application display mode, in step 4920. Here the multi-display management module 524 receives or retrieves an indication of the device 100 configuration. For example, the multi-display management module 524 can determine if the device is in single display configuration (see
Further, the multi-display management module 524 can determine if the device 100 is in a portrait or landscape orientation. With this information, the multi-display management module 524 may then consider the capabilities or preferences listed for the application 564a in the received manifest. The combined information may then allow the multi-display management module 524 to determine a display binding. The display binding can include which of the one or more displays 110 and/or 114 are going to be used to display the application's user interface(s). For example, the multi-display management module 524 can determine that the primary display 110, the secondary display 114, or all displays 110 and 114 of the device 100 will be used to display the application's user interface.
The display mode setting can be assigned by creating or setting a number in the display binding. This number can be “0” for the primary display 110, “1” for the secondary display 114, or “2” for dual displays 110 and 114. The display mode setting can also indicate if the application 564a should display the user interface in portrait or landscape orientation. Further, there may be other settings, for example, providing a max mode or other setting that may indicate how the application 564a is to be displayed on the device. The display binding information is stored in a data structure to create and set a binding, in step 4924.
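The numbering described above (“0” primary, “1” secondary, “2” dual) can be sketched as follows. The selection logic shown is an invented illustration of one plausible policy, not the module's actual algorithm.

```python
# The display mode numbers described above; the names are invented.
PRIMARY, SECONDARY, DUAL = 0, 1, 2  # "0", "1", "2" in the binding

def choose_display_setting(device_open: bool, manifest_modes: list) -> int:
    # One plausible (hypothetical) policy: use both displays when the device
    # is open and the application declared dual or max capability.
    if device_open and ({"dual", "max"} & set(manifest_modes)):
        return DUAL
    return PRIMARY

binding = {
    "displays": choose_display_setting(True, ["classic", "max"]),  # -> 2 (dual)
    "orientation": "landscape",  # portrait or landscape, also carried in the binding
}
print(binding)
```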
The established display binding may then be provided, by the multi-display management module 524, to the application 564a, in step 4928. The provided display binding data structure can become an attribute of the application 564a. An application 564a may thereafter store the display binding attribute in the memory of the device 100. The application 564a with the display binding may then generate a user interface based on this display binding. The application 564a may be unaware of the position of the display 110/114 but may, from the display binding, be able to determine the size of the available user interface to generate a window that has particular characteristics for that display setting.
When a configuration change happens to the device 100, the multi-display management module 524 may change the display binding and send a new display binding to the application 564a. In embodiments, the multi-display management module 524 may indicate to the application 564a that there is a new binding or, in other embodiments, the application 564a may request a display configuration change or a new display binding, in which case the multi-display management module 524 may send a new display binding to the application 564a. Thus, the multi-display management module 524 can change the configuration of the display for the application 564a by altering the display binding for the application 564a during the execution of that application 564a.
The multi-display management module 524 thereafter, while the application 564a is executing, can determine if there has been a configuration change to the device 100, in step 4932. The configuration change may be an event (see
In step 4936, a new application mode change may be determined. Application mode changes can also occur in the application 564a, and thus, the application 564a can determine if something has occurred within the application 564a that requires a different display setting. The mode change can create a desire to change the display 110/114, and thus, require the application 564a to generate a new manifest. If the application 564a does sense a mode change or an event has occurred that requires a change in display setting, the method 4900 proceeds YES back to step 4912. At step 4912, a new manifest or preference is created by the application 564a that may be received by the multi-display management module 524 to determine if the multi-display management module 524 can change the display binding. If it is possible to provide the preferred display, the multi-display management module 524 can create a new display binding and send the display binding back to the application 564a and allow the application 564a to alter its user interface. If no mode change is sensed or an event is not received to create a mode change, the method 4900 proceeds NO to end operation 4940.
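Steps 4932 through 4940 can be pictured as a small event loop. The event names, queue, and binding updates below are hypothetical stand-ins for the device's actual signaling; the sketch only illustrates the two branches: a device configuration change and an application mode change.

```python
# Hypothetical stand-ins for the two branches described in steps 4932-4940.
def handle_events(events, binding):
    for event in events:
        if event == "configuration_change":
            # Step 4932: the device was opened, closed, or rotated; the
            # multi-display management module 524 issues a new display binding.
            binding = {**binding, "displays": 2, "orientation": "landscape"}
        elif event == "application_mode_change":
            # Step 4936: the application senses a mode change, generates a new
            # manifest (back to step 4912), and may receive a new binding.
            binding = {**binding, "orientation": "portrait"}
    return binding  # no further events: end operation 4940

print(handle_events(["configuration_change", "application_mode_change"],
                    {"displays": 0, "orientation": "portrait"}))
```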
Comparing the virtual keyboard 3704 presented using portions of the screens 104, 108 in a portrait mode, such that the virtual keyboard 3704 spans the two screens 104, 108, to the mode of operation in which the virtual keyboard 3704 is presented on a single screen 104 or 108 in a landscape mode, highlights changes that can be made to accommodate the line of separation 3708. In particular, in the embodiment illustrated in
In the example of
In accordance with embodiments of the present invention, a virtual keyboard 3704 can be displayed by one or both screens 104, 108 of a dual screen device 100 as described herein. For example, in a single screen landscape mode, the virtual keyboard 3704 and the active set of selectable virtual keys 3912 can be displayed across a portion of one of the screens 104 or 108. As another example, in a dual screen landscape mode, the virtual keyboard 3704 and the active selectable set of virtual keys 3912 can be displayed across one of the screens 104 or 108, while additional application information is displayed in a second one of the screens 104 or 108. As yet another example, and as shown in
As can also be appreciated by one of skill in the art after consideration of the present disclosure, if the drag 400 or flick 404 gesture entered on the area of the display 104 and/or 108 in which the first selectable set of virtual keys 3912a was displayed had been to the left in the above example, the first selectable set of virtual keys 3912a would have been replaced by the third selectable set of virtual keys 3912c. In accordance with embodiments of the present disclosure, a selectable set of virtual keys 3912 must be dragged by at least one third (i.e., by about 33%) of its width in order for the swiped set of virtual keys 3912 to be replaced by the set of virtual keys 3912 following the swiped selectable set of virtual keys 3912 in the sequence. In accordance with still other embodiments of the present disclosure, the individual keys within a selectable set of virtual keys 3912 may behave like any other virtual keys within a virtual keyboard 3704, except that the entire bar or row of virtual keys 3912 is moved by a swipe gesture, instead of having the effect of changing an individual key.
The particular set of virtual keys 3912 that is displayed, at least initially, can be determined according to the context in which the device 100 is operating. For example, the Internet/position bar 3912a, which contains common character keys used in URLs, email addresses, and instant messaging, as well as arrow keys for precise caret control, can be presented when the device 100 is executing a web browser application. When the arrow keys within the Internet/position bar 3912a are used to control the position of the cursor, the selection of an input field will not change the selectable set of virtual keys to another set. The number bar 3912b contains number characters, and can be displayed when the device 100 is executing a spreadsheet or other application, according to the rules of the application. The punctuation bar 3912c contains extra punctuation characters, and can be displayed when the device 100 is executing a word processing application. The default selectable virtual keys 3912 for an application or for a context are typically displayed upon the first instantiation of the virtual keyboard 3704 within the application or context, after which the user can select a different selectable set of virtual keys 3912 by swiping the displayed row of selectable virtual keys 3912, by opening a new application or changing the context, or by making a selection from a menu.
In accordance with further embodiments, different numbers of selectable sets of virtual keys 3912 can be provided. For example, if two selectable sets of virtual keys 3912 are provided, while one of the selectable sets 3912 is on display, a user can select the other of the selectable sets 3912 by entering a swipe gesture on the displayed selectable set 3912 in either direction. In accordance with other embodiments, where three or more selectable sets of virtual keys 3912 are available, no more than portions of two adjacent selectable sets of virtual keys 3912 may be presented as part of the keyboard 3704 at any one point in time. The other selectable sets of virtual keys 3912 can be maintained in a stack in memory, and selected through the successive entry of swipe gestures by the user, until the desired selectable set of virtual keys 3912 has been reached and is displayed as part of the virtual keyboard 3704.
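The stack of selectable key rows and the one-third drag threshold can be modeled as below. The class and row names are hypothetical; note that, consistent with the example above, a leftward drag on the first row wraps around to the third row.

```python
# Hypothetical model of a stack of selectable key rows 3912.
class SelectableKeyRows:
    def __init__(self, rows, row_width_px: int):
        self.rows = list(rows)   # e.g., internet/position, numbers, punctuation
        self.index = 0           # the row currently displayed in the keyboard 3704
        self.row_width_px = row_width_px

    def on_drag_end(self, dx_px: int) -> str:
        # The swiped row is replaced only if dragged at least ~33% of its width.
        if abs(dx_px) >= self.row_width_px / 3:
            step = -1 if dx_px < 0 else 1  # a leftward drag wraps 1st -> 3rd row
            self.index = (self.index + step) % len(self.rows)
        return self.rows[self.index]

rows = SelectableKeyRows(["internet/position", "numbers", "punctuation"], row_width_px=600)
print(rows.on_drag_end(-250))  # 250 px >= 200 px threshold: row replaced ("punctuation")
print(rows.on_drag_end(-100))  # below threshold: displayed row is unchanged
```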
With reference to
Whether the suggestion bar 4105 is displayed can be controlled by providing user input at the slider bar toggle button 3824. For example, a default configuration of a virtual keyboard 3704 can present both the suggestion bar 4105 and a selectable set of virtual keys 3912 when an application is being accessed or while the device is in a context that makes use of the suggestion bar 4105. A first press of the slider bar toggle button 3824 can remove the suggestion bar 4105, leaving the active set of selectable virtual keys 3912 visible. A second press of the slider bar toggle button 3824 can remove the row of selectable virtual keys, such that the virtual keyboard 3704 ends with the top row of letter characters. A third press of the slider bar toggle button 3824 can restore the visibility of both the suggestion bar 4105 and the selectable set of virtual keys 3912. Continued presses of the slider bar toggle button can continue to move through the cycle in the same order. In accordance with embodiments of the present disclosure, the current display state can be visually indicated by the three bars included in the slider bar toggle button 3824 itself. As previously noted, in accordance with at least some embodiments of the present disclosure, the slider bar toggle button 3824 is only displayed in the dual portrait orientation. In such embodiments, when the device 100 is turned to the dual landscape orientation, the slider bar toggle button 3824 is no longer visible. Moreover, a virtual keyboard 3704 display state selected using the slider bar toggle button 3824 can control the virtual keyboard 3704 display state when the device is placed in a dual landscape orientation, even though the slider bar toggle button 3824 is not displayed in the dual landscape orientation. When the device 100 is in a dual landscape orientation, a menu selection, context, or other means can be provided for selecting the visibility of the suggestion bar 4105 and/or the selectable set of virtual keys. Upon returning the device to the dual portrait orientation, the state selected using the slider bar toggle button when the device 100 was last in the dual portrait orientation can be applied, at least until the selected visibility state is changed.
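The three display states cycled by the slider bar toggle button 3824 form a simple state machine, sketched below. The state representation is an assumption; only the cycle order (both shown, suggestion bar hidden, both hidden, then restored) comes from the description above.

```python
# Assumed representation of the three keyboard display states.
STATES = [
    {"suggestion_bar": True,  "selectable_keys": True},   # default: both displayed
    {"suggestion_bar": False, "selectable_keys": True},   # first press: bar removed
    {"suggestion_bar": False, "selectable_keys": False},  # second press: letters only
]

class SliderBarToggle:
    def __init__(self):
        self.state = 0  # the selected state persists across orientation changes

    def press(self) -> dict:
        self.state = (self.state + 1) % len(STATES)  # third press restores both
        return STATES[self.state]

toggle = SliderBarToggle()
for press in range(1, 4):
    print(press, toggle.press())
```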
With reference now to
At step 4212, a determination is made as to whether input to change the displayed selectable set of virtual keys 3912 has been received. If input to change the displayed set of virtual keys has been received, a next selectable set of virtual keys 3912 is identified (step 4216). The identification of the next selectable set of virtual keys 3912 to display can be made in view of various considerations. These include but are not limited to the number of selectable sets of virtual keys 3912 available, their relative location in a stack of selectable sets of virtual keys 3912 maintained in memory 208, 508, and the direction of a swiping motion received as the input from the user. Once the next selectable set of virtual keys 3912 has been identified, the next set of selectable virtual keys 3912 is displayed (step 4220). The process can then return to step 4212, where, for example, input selecting a next selectable set of virtual keys 3912 in a sequence can be received.
If input to change the displayed selectable set of virtual keys 3912 is not received at step 4212, a determination can be made as to whether input to modify a display of the virtual keyboard 3704 has been received (step 4224). For example, in a default or initial configuration or state, the virtual keyboard 3704 can include a suggestion bar 4105 and a selectable set of virtual keys 3912, in addition to the standard character and control keys of the virtual keyboard 3704. If input to modify the virtual keyboard 3704 is received, such modification can include placing the virtual keyboard 3704 in a second state, in which the suggestion bar 4105 is not displayed, and in which the selectable set of virtual keys 3912 and the standard character and control keys of the virtual keyboard 3704 continue to be displayed. A modification to the display of the virtual keyboard 3704 can further include dismissing both the suggestion bar 4105 and the selectable set of virtual keys 3912, so that only the standard character and control keys of the virtual keyboard 3704 are displayed. Input to change or modify the presentation of the virtual keyboard 3704 can include entering input at the slider bar toggle button 3824 while the device 100 is in a dual portrait orientation, entering input through a menu, changing the context of operation of the device to one in which a different virtual keyboard 3704 configuration is applied, or changing the currently visible application to one that uses a different virtual keyboard 3704. If input to change the display of the virtual keyboard 3704 is received, a revised virtual keyboard 3704 is displayed (step 4228). The process can then return to step 4224, where, for example, further input changing the virtual keyboard 3704 can be received.
If at step 4224 input to modify the display of the virtual keyboard 3704 is not received, a determination can next be made as to whether to continue to display the virtual keyboard 3704 (step 4232). For example, user input or the operation or execution of application programming can signal that the display of the virtual keyboard 3704 should be discontinued. If the display is to be continued, the process can return to step 4208. Alternatively, if the display of the virtual keyboard 3704 is to be discontinued, the process may end.
In accordance with embodiments of the present invention, a virtual keyboard 3704 including the selectable set of virtual keys 3912 and suggestion bar 4105 features described herein can be implemented by application programming or instructions executed by an associated device 100. For example, the provision and operation of a virtual keyboard 3704, selectable sets of virtual keys 3912, and a suggestion bar 4105 can be implemented through or in association with one or more applications stored in memory 208, 508 and executed by a processor 204, 504. Moreover, the provision of a virtual keyboard 3704 and associated features can be implemented using screens 104, 108 including touch sensitive display 110, 114 regions and the operation of one or more associated display controllers 216. Moreover, the provision of a virtual keyboard 3704 in accordance with embodiments of the present invention can be in view of inputs from various applications and components of the device 100. For instance, a basic virtual keyboard function can be provided by the operating system 516 and/or framework 520. Moreover, the available selectable sets of virtual keys 3912 and/or the content of a suggestion bar 4105 can be determined, at least in part, by a particular application 564 being executed and for which user input is provided using the virtual keyboard 3704.
The exemplary systems and methods of this disclosure have been described in relation to swipeable key lines or selectable sets of virtual keys. However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Specific details are set forth to provide an understanding of the present disclosure. It should however be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein.
In accordance with embodiments of the present invention, methods disclosed herein can be performed by the execution of application programming stored in memory 208, 508 and by a processor 204, 504. For example, a windows management module or class 532 can include functionality to receive and act on input received from a user. Moreover, such input can include window management input, and can include the execution of steps to open or close applications 564, or windows presenting pages associated with such application 564, for example in connection with minimization and maximization operations. In addition, although certain embodiments have been described in connection with operation on a device 100 having first 104 and second 108 screens, the invention is not limited to operation on such a device. For example, embodiments of the present invention can be performed on a device or a combination of devices operating in concert with one another that have more than two screens, and/or in connection with a screen comprising virtual screens or windows.
Furthermore, while the exemplary aspects, embodiments, and/or configurations illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a tablet-like device, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the various components can be located in a switch such as a PBX and media server, gateway, in one or more communications devices, at one or more users' premises, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a telecommunications device(s) and an associated computing device.
Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Also, while the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects.
In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, a special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Exemplary hardware that can be used for the disclosed embodiments, configurations and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Although the present disclosure describes components and functions implemented in the aspects, embodiments, and/or configurations with reference to particular standards and protocols, the aspects, embodiments, and/or configurations are not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure.
The present disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, subcombinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations after understanding the present disclosure. The present disclosure, in various aspects, embodiments, and/or configurations, includes providing devices and processes in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease of implementation, and/or reducing cost of implementation.
The foregoing discussion has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Moreover, though the description has included one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps to those claimed, whether or not such alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
Claims
1. A method for presenting a user interface on a multiple display device, comprising:
- presenting a virtual keyboard, wherein the virtual keyboard is presented in at least a first screen of the multiple display device;
- presenting a selected first one of a plurality of slider bars above a first row of the keyboard, wherein the first one of the plurality of slider bars includes a first set of virtual keys;
- receiving input selecting a second one of the plurality of slider bars;
- in response to the input selecting the second one of the plurality of slider bars, presenting the selected second one of the plurality of slider bars above the first row of the keyboard, wherein the selected second one of the plurality of slider bars includes a second set of virtual keys;
- presenting an input field, wherein at least a portion of the input field is displayed in at least a second screen of the multiple display device.
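By way of illustration and not limitation, the following Java sketch traces the method of claim 1 (with the default selection of claim 2) in software. Every class, method, and key-set name is a hypothetical stand-in, and the two screens are stubbed as console output; this is a reading aid, not the claimed implementation.

```java
import java.util.List;

/** Illustrative sketch of claim 1: a keyboard on a first screen, a selected
 *  slider bar above its top row, and an input field on a second screen. */
public class DualScreenKeyboard {
    private final List<List<String>> sliderBars; // each bar is one row of extra keys
    private int selected = 0;                    // first bar selected by default (claim 2)

    DualScreenKeyboard(List<List<String>> sliderBars) { this.sliderBars = sliderBars; }

    void present() {
        System.out.println("[screen 2] input field: |");             // text entry target
        System.out.println("[screen 1] slider bar: " + sliderBars.get(selected));
        System.out.println("[screen 1] character and control keys"); // standard keyboard
    }

    void select(int index) {                     // input selecting another slider bar
        selected = index;
        present();                               // re-present with the new row of keys
    }

    public static void main(String[] args) {
        DualScreenKeyboard kb = new DualScreenKeyboard(List.of(
            List.of("!", "?", ",", "."),          // punctuation bar
            List.of("1", "2", "3", "4"),          // number bar
            List.of(".com", "www.", "/", "@")));  // Internet bar (claim 13)
        kb.present();
        kb.select(1);                             // user input selects the number bar
    }
}
```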
2. The method of claim 1, wherein the first one of the plurality of slider bars is selected by default.
3. The method of claim 1, further comprising:
- detecting an orientation of the multiple display device;
- in response to determining that the multiple display device is in a dual landscape orientation, presenting the virtual keyboard within the first screen of the multiple display device;
- in response to determining that the multiple display device is in a dual portrait orientation, presenting a first part of the virtual keyboard within a first portion of the first screen of the multiple display device and presenting a second part of the virtual keyboard within a first portion of the second screen of the multiple display device.
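A minimal sketch of the orientation branch recited in claim 3 follows, again with hypothetical names and console output standing in for actual layout; how a device detects its orientation is outside the claim and is assumed here.

```java
/** Illustrative sketch of claim 3: the whole keyboard occupies the first
 *  screen in dual landscape; in dual portrait it is split across both. */
public class OrientationLayout {
    enum Orientation { DUAL_LANDSCAPE, DUAL_PORTRAIT }

    static void layoutKeyboard(Orientation o) {
        switch (o) {
            case DUAL_LANDSCAPE:
                System.out.println("keyboard -> within the first screen");
                break;
            case DUAL_PORTRAIT:
                System.out.println("first part of keyboard  -> portion of first screen");
                System.out.println("second part of keyboard -> portion of second screen");
                break;
        }
    }

    public static void main(String[] args) {
        layoutKeyboard(Orientation.DUAL_PORTRAIT); // e.g., after a detected rotation
    }
}
```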
4. The method of claim 3, further comprising:
- while the second one of the plurality of slider bars is presented, receiving an input selecting the first one of the plurality of slider bars, wherein the input is a gesture entered along a line corresponding to the line of virtual keys included in the second one of the plurality of slider bars, and wherein the gesture includes a movement in a first direction;
- in response to the input selecting the first one of the plurality of slider bars, presenting the selected first one of the plurality of slider bars above the first row of the keyboard.
5. The method of claim 4, further comprising:
- while the first one of the plurality of slider bars is presented, receiving an input selecting a third one of the plurality of slider bars, wherein the input is a gesture entered along a line corresponding to the line of virtual keys included in the first one of the plurality of slider bars, and wherein the gesture includes a movement in the first direction;
- in response to the input selecting the third one of the plurality of slider bars, presenting the selected third one of the plurality of slider bars above the first row of the keyboard, wherein the selected third one of the plurality of slider bars includes a third set of virtual keys.
6. The method of claim 5, further comprising:
- while the third one of the plurality of slider bars is presented, receiving an input selecting the second one of the plurality of slider bars, wherein the input is a gesture entered along a line corresponding to the line of virtual keys included in the third one of the plurality of slider bars, and wherein the gesture includes a movement in the first direction.
7. The method of claim 6, further comprising:
- while the second one of the plurality of slider bars is presented, receiving an input selecting the third one of the plurality of slider bars, wherein the input is a gesture entered along a line corresponding to the line of virtual keys included in the second one of the plurality of slider bars, and wherein the gesture includes a movement in a second direction, wherein the first direction is opposite the second direction.
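Claims 4 through 7 together describe cycling among three slider bars by gestures entered along the presented bar, with the opposite gesture direction reversing the cycle. One plausible reading of that behavior is simple modular indexing over the set of bars, sketched below with illustrative bar names only.

```java
/** Illustrative sketch of claims 4-7: a gesture along the presented slider
 *  bar advances to the next bar in one direction, wrapping around; a gesture
 *  in the opposite direction steps back (claim 7). */
public class SliderBarCycler {
    private final String[] bars = {"punctuation", "number", "Internet"};
    private int current = 0;

    /** @param forward true for a gesture in the first direction,
     *                 false for the opposite, second direction. */
    String onGesture(boolean forward) {
        int n = bars.length;
        current = forward ? (current + 1) % n : (current + n - 1) % n;
        return bars[current]; // bar now presented above the keyboard's first row
    }

    public static void main(String[] args) {
        SliderBarCycler c = new SliderBarCycler();
        System.out.println(c.onGesture(true));  // punctuation -> number
        System.out.println(c.onGesture(true));  // number -> Internet
        System.out.println(c.onGesture(false)); // Internet -> number (reversed)
    }
}
```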
8. The method of claim 1, further comprising:
- in response to input including a letter key press, presenting a word suggestion bar, wherein the word suggestion bar is presented in at least the first screen of the multiple display device, and wherein the word suggestion bar is presented above the selected slider bar.
9. The method of claim 8, wherein a first word suggestion included in the word suggestion bar consists of a last sequence of typed characters, and wherein additional word suggestions are displayed following the last sequence of typed characters.
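By way of example only, the ordering of claim 9, in which the literal typed characters lead the word suggestion bar and candidate words follow, can be sketched as a prefix lookup; the in-memory dictionary here is a hypothetical stand-in for whatever prediction engine a device may use.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of claim 9: the suggestion bar starts with the last
 *  sequence of typed characters, followed by candidate completions. */
public class SuggestionBar {
    static List<String> suggestions(String typed, List<String> dictionary) {
        List<String> bar = new ArrayList<>();
        bar.add(typed);                          // first entry: the literal input
        for (String word : dictionary) {
            if (word.startsWith(typed) && !word.equals(typed)) {
                bar.add(word);                   // completions follow the typed text
            }
        }
        return bar;
    }

    public static void main(String[] args) {
        System.out.println(suggestions("keyb",
            List.of("keyboard", "keyboards", "keypad")));
        // -> [keyb, keyboard, keyboards]
    }
}
```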
10. The method of claim 9, wherein with the multiple display device in a dual portrait orientation the word suggestion bar extends across a seam separating the first and second screens.
11. The method of claim 10, wherein in response to an input selecting a word in the word suggestion bar, the word selected by the input is entered.
12. The method of claim 11, further comprising:
- receiving a first input comprising a press of a slider bar toggle button, wherein each press of the slider bar toggle button advances a visibility of the slider bar and the suggestion bar to a next one of at least three states, the at least three states including:
- a state in which both the slider bar and the suggestion bar are presented;
- a state in which only the slider bar is presented; and
- a state in which neither the slider bar nor the suggestion bar is presented.
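The three-state visibility cycle of claim 12 reduces to a small state machine, one state per press of the slider bar toggle button. The sketch below is illustrative only and assumes the state order listed in the claim.

```java
/** Illustrative sketch of claim 12: each toggle press advances the joint
 *  visibility of the slider bar and suggestion bar through three states. */
public class VisibilityToggle {
    enum State { BOTH, SLIDER_ONLY, NEITHER }
    private State state = State.BOTH;

    State onTogglePress() {
        switch (state) {
            case BOTH:        state = State.SLIDER_ONLY; break; // hide suggestion bar
            case SLIDER_ONLY: state = State.NEITHER;     break; // hide slider bar too
            case NEITHER:     state = State.BOTH;        break; // restore both bars
        }
        return state;
    }

    public static void main(String[] args) {
        VisibilityToggle t = new VisibilityToggle();
        for (int press = 1; press <= 4; press++) {
            System.out.println("press " + press + " -> " + t.onTogglePress());
        }
    }
}
```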
13. The method of claim 1, wherein the first one of the plurality of slider bars is selected according to an operating mode of the multiple display device, and wherein the plurality of slider bars includes at least a punctuation bar, a number bar, and an Internet bar.
14. The method of claim 1, further comprising:
- receiving a first input at a slider bar toggle button;
- in response to the first input at the slider bar toggle button, presenting the second one of the plurality of slider bars;
- while the second one of the plurality of slider bars is presented, receiving a second input at the slider bar toggle button;
- in response to the second input at the slider bar toggle button, presenting a third one of the plurality of slider bars;
- while the third one of the plurality of slider bars is presented, receiving a third input at the slider bar toggle button;
- in response to the third input at the slider bar toggle button, presenting the first one of the plurality of slider bars.
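The slider bar toggle button of claim 14 suggests the same modular cycling, here through the bar contents rather than their visibility, one bar per press; a purely illustrative sketch follows.

```java
/** Illustrative sketch of claim 14: each press of the slider bar toggle
 *  button presents the next slider bar, wrapping after the third. */
public class ToggleButtonCycler {
    private final String[] bars = {"punctuation", "number", "Internet"};
    private int current = 0;                   // the first bar is presented initially

    String onButtonPress() {
        current = (current + 1) % bars.length; // first -> second -> third -> first
        return bars[current];
    }

    public static void main(String[] args) {
        ToggleButtonCycler c = new ToggleButtonCycler();
        for (int i = 0; i < 3; i++) System.out.println(c.onButtonPress());
        // -> number, Internet, punctuation (back to the first bar)
    }
}
```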
15. A device, comprising:
- a first touch screen display;
- a second touch screen display;
- memory;
- a processor;
- application programming stored in the memory and executed by the processor, wherein the application programming is operable to:
- present a virtual keyboard, wherein in a dual landscape orientation the virtual keyboard is presented within the first touch screen display, and wherein in a dual portrait orientation the keyboard is presented within a portion of the first touch screen display and within a portion of the second touch screen display; and
- in response to input from a user, one of:
- present one of a plurality of selectable slider bars above a top row of keys included in the virtual keyboard;
- present one of the plurality of selectable slider bars above the top row of keys included in the virtual keyboard and a suggestion bar above the one of the plurality of selectable slider bars; or
- present neither the one of the plurality of selectable slider bars nor the suggestion bar.
16. The device of claim 15, wherein while a first one of the plurality of selectable slider bars is presented, the application programming is further operable to:
- in response to input in the form of a gesture in a first direction entered in an area of at least one of the first and second screens presenting the first selectable slider bar, present a second one of the plurality of selectable slider bars.
17. The device of claim 16, wherein while the second one of the plurality of selectable slider bars is presented, the application programming is further operable to:
- in response to input in the form of a gesture in the first direction entered in an area of at least one of the first and second screens presenting the second selectable slider bar, present a third one of the plurality of selectable slider bars;
- while the third one of the plurality of selectable slider bars is presented, the application programming is further operable to:
- in response to input in the form of a gesture in the first direction entered in an area of at least one of the first and second screens presenting the third selectable slider bar, present the first selectable slider bar.
18. A computer readable medium having stored thereon computer-executable instructions, the computer-executable instructions causing a processor to execute a method for presenting a user interface, the computer-executable instructions comprising:
- instructions to display a keyboard comprising a plurality of rows of virtual keys, wherein in a first operating mode the keyboard is displayed within a first touch screen display, and wherein in a second operating mode a first part of the keyboard is displayed within a portion of the first touch screen display and a second part of the keyboard is displayed within a portion of a second touch screen display;
- instructions to display at least one of a slider bar and a suggestion bar above a top row of virtual keys of the keyboard.
19. The computer readable medium of claim 18, wherein a first slider bar is displayed, and wherein the first slider bar includes a row of virtual keys comprising a first set of keys, the computer-executable instructions further comprising:
- instructions to replace, in response to a first input, the first set of keys included in the first slider bar with a second set of keys included in a second slider bar.
20. The computer readable medium of claim 19, the computer-executable instructions further comprising:
- instructions to present, in the second operating mode, a slider bar toggle key as one of the virtual keys, wherein, while in the second operating mode and in response to a key press input received at the slider bar toggle key, one of the following display states can be selected:
- a first state that includes a display of the slider bar and the suggestion bar;
- a second state that includes the slider bar without the suggestion bar; and
- a third state that includes neither the slider bar nor the suggestion bar.
21. The computer readable medium of claim 20, wherein the first input is entered while the first slider bar is displayed and while in the second operating mode, wherein the first input includes a gesture entered in an area of one of the first and second touch screen displays corresponding to a display of a portion of the first slider bar along the row of virtual keys included in the first slider bar, and wherein the gesture includes dragging the row of virtual keys included in the first slider bar for 33% of the width of the row of virtual keys included in the first slider bar.
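The 33% drag distance recited in the preceding claim can be read as a threshold test against the width of the slider bar's row of keys. The sketch below assumes pixel measurements and treats the figure as a minimum; only the 33% value comes from the claim, and all names are illustrative.

```java
/** Illustrative sketch of the preceding claim's gesture test: a drag along
 *  the slider bar's row of keys changes bars once it covers 33% of that
 *  row's width. */
public class DragThreshold {
    static final double THRESHOLD = 0.33; // fraction of the slider bar row width

    static boolean isBarChangeGesture(float dragDistancePx, float rowWidthPx) {
        return Math.abs(dragDistancePx) >= THRESHOLD * rowWidthPx;
    }

    public static void main(String[] args) {
        System.out.println(isBarChangeGesture(200f, 540f)); // ~37% of width -> true
        System.out.println(isBarChangeGesture(100f, 540f)); // ~19% of width -> false
    }
}
```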
International Classification: G06F 3/0489 (20060101);