Abstract: A mobile telephone (1) includes a multimodal user interface and a rendering unit (10) that can be used to display icons on the display screen (3) of the mobile telephone (1). The rendering unit (10) receives inputs from a number of status factor determiners of the mobile telephone (1), such as an environmental quality assessment unit (7), a data quality of service unit (8), a network signal strength unit (9), an application engine (5), multimodal interface components (11), and an automatic speech recognition unit of a speech engine (22). The rendering unit (10) uses the status information that it receives to select an icon to be displayed to convey information about the current status of the mobile telephone (1) to the user. The icon that is displayed by the rendering unit (10) is in the form of a human face that can show varying expressions and emotions.
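The status-to-icon mapping described in the abstract can be sketched as follows. This is an illustrative assumption only: the expression names, input weights, and scoring scheme are hypothetical, since the abstract does not specify how the rendering unit combines its status inputs.

```python
# Hypothetical sketch: the rendering unit aggregates status inputs
# (signal strength, data quality of service, environmental quality)
# and maps the combined score to a facial expression for display.

EXPRESSIONS = ["distressed", "unhappy", "neutral", "content", "happy"]

def select_expression(signal_strength, data_qos, env_quality):
    """Map status inputs (each normalized to 0.0-1.0) to a face icon.

    The weights below are illustrative assumptions; the abstract does
    not disclose a particular scoring scheme.
    """
    score = 0.4 * signal_strength + 0.3 * data_qos + 0.3 * env_quality
    # Bucket the weighted score into one of the five expressions.
    index = min(int(score * len(EXPRESSIONS)), len(EXPRESSIONS) - 1)
    return EXPRESSIONS[index]
```

Under this sketch, strong signal and good data service yield a "happy" face, while uniformly poor conditions yield a "distressed" one.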
Abstract: An interaction engine (4) monitors a user's use of a mobile telephone (1) and of applications running on the mobile telephone (1), and provides information regarding those interactions to a user expertise calculation module (7). The user expertise calculation module (7) uses that information to determine the current level of expertise of the user of the device. The interaction engine (4) uses the determined level of user expertise to determine a set of user prompts to be used for the current user. The selected set of prompts is provided to a prompt selection module (6) of the interaction engine (4). The prompt selection module (6) selects a prompt from the provided set of prompts based on the current status of the application or of the user's interaction; the selected prompt is then automatically provided to the user by the interaction engine (4) via a speech engine (2) or visual user interface elements (3).
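The expertise-based prompt selection described above can be sketched as follows. All names, thresholds, and prompt texts are hypothetical assumptions; the abstract does not disclose how expertise is calculated or how prompt sets are organized.

```python
# Hypothetical sketch: a user expertise calculator tallies the user's
# interactions, and the prompt selection logic picks a prompt set
# (verbose for novices, terse for experts) matching the current
# application state.

PROMPT_SETS = {
    "novice": {"dial": "Please say or type the number you wish to call."},
    "expert": {"dial": "Number?"},
}

class UserExpertiseCalculator:
    """Illustrative expertise model: counts successful interactions."""

    def __init__(self):
        self.successful_interactions = 0

    def record(self, success):
        # Only successful interactions count toward expertise here;
        # the real module could weigh many interaction factors.
        if success:
            self.successful_interactions += 1

    def level(self, threshold=20):
        """Classify the user as 'expert' once enough successes accrue."""
        if self.successful_interactions >= threshold:
            return "expert"
        return "novice"

def select_prompt(calculator, application_state):
    """Pick the prompt matching the user's expertise and app state."""
    return PROMPT_SETS[calculator.level()][application_state]
```

In this sketch a new user receives the full novice prompt, and after sufficient successful interactions the same application state elicits the shorter expert prompt.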