Configurable interface for devices

A keyboard apparatus for information entry with means for dynamically configuring a legend on a key of the keyboard apparatus. The keyboard apparatus includes means for detecting a selection of the key, and means for associating the selection of the key with the legend on the key. The legend is displayed on an LCD device forming a part of the key or on an LCD device forming the keyboard. Alternatively, the key may be selected by using a touch sensitive LCD display.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a continuation-in-part of co-pending application entitled INTELLIGENT KEYBOARD SYSTEM, Ser. No. 09/281,739, filed Jun. 4, 1999, which is a continuation-in-part application of now abandoned application entitled A SYSTEM LEVEL SCHEME TO CONTROL INTELLIGENT APPLIANCES, Ser. No. 08/764,903 filed Dec. 16, 1996.

BACKGROUND OF THE INVENTION

Currently the key pad buttons on a cellular telephone/mobile device (CT/MD) pose a limitation in inputting broad based queries. There are only 12 non-control buttons on many CT/MDs. Even where there are more, there are so few that inputting even as little as the letter-number ASCII set is not really practical. For example, in the present art there have been attempts to expand the number of keys, such as treating the numeric keys as numbers unless a code is entered, such as "*#" or the like, then treating a "2" as an "a", "2-2" as a "b", and "2-2-2" as a "c". Entering "2" three times to form a "c" is both confusing and slow, and such approaches have not been popular. If a mixed string of letters and numbers is desired, the three "2"s may have to be delimited with, for example, "*#", and the process becomes increasingly more unwieldy. There has been some success in using a computer, especially a computer operating with "fuzzy" logic, to extract the probable combination of letters in a numeric string, exemplified by an interactive directory for finding the telephone extension number of an employee by "spelling" the employee's name on a numeric key pad. This is a satisfactory solution only in limited cases. Numeric reduction of this type has not been generally used except for telephone directories and similar purposes.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a scheme by which the limitations of a key pad are overcome and the key pad is enhanced. The scheme uses a local or network server.

The protocols for configuring each key to a specific function or variable set of functions are stored in a Server C. The protocols for all keys may be stored on Server C similarly. The menu for any macro function can be stored on this Server C. Server C may be part of a local loop or located on the internet.
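
By way of illustration only, the following sketch (in Python, with hypothetical names such as KeyConfig, KeypadMenu, and fetch_menu that are not part of the disclosure) shows one possible way the per-key protocols and menus held on Server C could be represented and fetched by a CT/MD; it is a minimal sketch, not the disclosed implementation:

```python
# Minimal sketch (hypothetical names) of per-key protocols and menus held on
# Server C; an in-memory dictionary stands in for a local-loop or internet server.
from dataclasses import dataclass, field


@dataclass
class KeyConfig:
    key_id: str    # physical key identifier, e.g. "K2"
    legend: str    # legend shown on the key's display
    action: str    # value or macro produced when the key is pressed


@dataclass
class KeypadMenu:
    name: str
    keys: list[KeyConfig] = field(default_factory=list)


# Server C's store of menus, keyed by menu name.
SERVER_C_MENUS: dict[str, KeypadMenu] = {
    "english": KeypadMenu("english", [KeyConfig("K2", "A", "a")]),
    "japanese": KeypadMenu("japanese", [KeyConfig("K2", "あ", "あ")]),
    "home_macros": KeypadMenu("home_macros",
                              [KeyConfig("K2", "Garage", "open_garage_door")]),
}


def fetch_menu(menu_name: str) -> KeypadMenu:
    """Return the requested menu as Server C would deliver it to a CT/MD."""
    return SERVER_C_MENUS[menu_name]


print(fetch_menu("japanese").keys[0].legend)   # -> あ
```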

In an embodiment of the present invention, displays, such as small LCD displays, are mounted on the top of the keys and connected to a matrix addressing system. When a key is reconfigured, such as from an English-language "A" to a Japanese character, the legend displayed on the key with the small display is changed accordingly.

In another embodiment of the present invention, the keyboard is displayed in the display window of a computing device, such as a hand held wireless device. The term wireless device includes entertainment/game machines. The screen of the wireless device is touch sensitive, so the user can type on the screen as if it were a standard keyboard.

In another embodiment of the present invention, the keyboard is displayed on a separate screen in the position of and replacing the keyboard on a device, such as a hand held wireless device. This screen is touch sensitive, so the user may type on it as if it were a keyboard.

In another embodiment of the present invention, the keys on any of the above keyboards, as well as on keyboards of the present invention generally, have a sound output, such as a voice output. In this way, visually impaired persons, or persons with similar needs, can hear which keys are being depressed.

Other objects, features and advantages of the present invention will become apparent from the following detailed description when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

The accompanying drawings, being incorporated in and forming a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the present invention:

FIG. 1 is an embodiment of the present invention showing a CT/MD with a reconfigurable keyboard communicating with a Central Server C.

FIG. 2 is an embodiment of the present invention showing a CT/MD with display devices on the keys for defining the function of the key dynamically.

FIG. 3 is an embodiment of the present invention showing a key with a screen or display thereon for containing a legend.

FIG. 4 is an embodiment of the present invention showing a wireless device having a screen for containing a keypad which is accessed by a pointer, such as a stylus.

FIG. 5 is an embodiment of the present invention showing a wireless device having a microphone for allowing voice entries for language translation.

FIG. 6 is an embodiment of the present invention showing how users of the present invention who are physically separated can collaborate in a signing ceremony.

DETAILED DESCRIPTION OF THE INVENTION

The present invention provides means for more easily and intuitively assigning key values on a wireless device, for example to a key associated with the wireless device. The present invention also provides means for compressing or expanding the keys on an entry system, such as a wireless device or wireless computing device, to more efficiently provide the keys needed for entry or for other purposes, such as sound, in a desired space.

The present invention uses a Central Server C providing the software routines and other support for realizing the improved input key means for a wireless device or for a wireless computing device.

Thus the Server C contains a number of menus for different applications, each comprising assigned values for each key function, as sketched after the list below:

1. Individual Key->may take one or more values that are programmable.

2. Full set or subset of keys->may take one or more values that are programmable.

3. The individual or subset or full set of keys->is programmable to perform assigned functions.

4. The above individual or subset or full set of keys in combination may comprise a menu to perform various customizable functions.

5. The identity of each programmed value for a key, set of keys or full set of keys is stored in the Server C.

6. The menus, sub menus and individual key functions are stored in Server C and may be accessed for use by wired or wireless means. They can be dynamically changed as defined by the user's needs.

7. The user may easily go from one set of functions or menus to another set of functions or menus by selecting an option from the CT/MD.

8. The menus or functions may coexist on the CT/MD. One function or menu may go to the background and one may be in the foreground. One set may be primary and the others secondary or a hierarchy of functions/menus may be maintained, such as with a windowing of templates, where the user may change templates in the same manner as changing windows on a personal computer (PC).

9. Server C manages the delivery of these functions to the CT/MD and also maintains a history.

10. This same process is extendible to pen based inputs where certain figures or icons or strokes may be designated to indicate certain functions or menus that are stored on the Server C and delivered as needed by a command from the CT/MD.

11. This same process is extendible to voice based input commands and output where each voice command or output means a certain function or a menu that is stored in Server C. The voice recognition function in addition may add more functionality to respond to a given voice. The voices may be in different languages.

12. The same process may be extendible to sounds rather than voice; for example, the sound of a bell.
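
As a hedged illustration of items 7 through 9 above (the class and method names are assumptions, not the disclosed design), a small session object could model Server C delivering menus to the CT/MD, keeping one menu in the foreground and the rest in the background, and maintaining a delivery history:

```python
# Sketch of Server C delivering menus to a CT/MD, with foreground/background
# switching and a delivery history (items 7-9 above). Names are hypothetical.
class ServerCSession:
    def __init__(self, menus: dict[str, list[str]]):
        self._menus = menus            # menu name -> list of key assignments
        self._history: list[str] = []  # delivery history kept on Server C
        self._stack: list[str] = []    # foreground menu is the last element

    def deliver(self, menu_name: str) -> list[str]:
        """Deliver a menu to the CT/MD and bring it to the foreground."""
        menu = self._menus[menu_name]
        self._history.append(menu_name)
        if menu_name in self._stack:
            self._stack.remove(menu_name)
        self._stack.append(menu_name)
        return menu

    def foreground(self) -> str:
        return self._stack[-1]

    def background(self) -> list[str]:
        return self._stack[:-1]

    def history(self) -> list[str]:
        return list(self._history)


session = ServerCSession({
    "dialer": ["1", "2", "3"],
    "messaging": ["a", "b", "c"],
})
session.deliver("dialer")
session.deliver("messaging")     # messaging moves to the foreground
assert session.foreground() == "messaging"
assert session.background() == ["dialer"]
```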

In addition the CT/MD may contain electronics and processing capability to internally store the various programmable key functions or menus such that different functions and menus may be chosen as the need arises.

In addition, the web server may be shrunk into a microchip that can be part of the internal electronics of the CT/MD, in which case a local or network server may or may not be needed. In this event the features described above for programming and describing each key or input/output could be handled by the internal web server independently or in conjunction with a local or network Server C.
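
A minimal sketch, assuming a hypothetical resolve_menu helper, of how a configuration could be served by the CT/MD's internal web server first and by a local or network Server C only as a fallback:

```python
# Sketch of resolving a keypad configuration either from the CT/MD's internal
# (on-chip) web server or from a local/network Server C. All names are
# illustrative assumptions, not the disclosed design.
def resolve_menu(menu_name: str,
                 internal_store: dict,
                 network_fetch=None):
    """Prefer the internal store; fall back to Server C if a link is available."""
    if menu_name in internal_store:
        return internal_store[menu_name]
    if network_fetch is not None:
        return network_fetch(menu_name)     # query local or network Server C
    raise KeyError(f"menu '{menu_name}' not available on-device or remotely")


menu = resolve_menu("dialer",
                    internal_store={"dialer": ["1", "2", "3"]},
                    network_fetch=None)
```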

If a user initiates communication with a particular device, i.e., if a user selects a particular device, the system may understand the context and may change the keypad automatically. Thus the system may perform context-aware keypad changes. This context may be based upon location, the devices communicated with, devices present in its local environment, or other factors.
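
For illustration, a context-aware keypad change of this kind could be sketched as a simple selection function; the context keys and layout names below are assumptions rather than disclosed rules:

```python
# Hypothetical sketch of a context-aware keypad change: when the user selects
# a device, or the local environment changes, the system picks a keypad layout.
def select_keypad(context: dict) -> str:
    """Return the keypad layout name for the given context (illustrative rules)."""
    device = context.get("selected_device")
    if device == "garage_door":
        return "garage_macros"       # single open/close macro keypad
    if device == "television":
        return "remote_control"
    if "printer" in context.get("nearby_devices", []):
        return "print_controls"
    if context.get("location") == "car":
        return "hands_free"
    return "standard_alphanumeric"


# Selecting the garage door switches the keypad automatically.
assert select_keypad({"selected_device": "garage_door"}) == "garage_macros"
```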

FIG. 1 illustrates a wireless system 100 with a CT/MD 102 having a dynamically reconfigurable keypad 104. Such a keypad 104 provides the ability to define macro keys not included with the standard alphanumeric keypad. In FIG. 1, a CT/MD 102 which appears standard has display devices mounted on each key 106, so that the legend appearing on the key 106 is configurable in software, such as from Central Server C 108, without requiring external physical changes.

FIG. 2 illustrates a wireless device 200 such as a CT/MD having a display 202 and a key pad 204. The key pad 204 has keys such as key 206 which are assignable as desired in software.

The user may choose to reassign a key on the wireless device to represent a particular function. For example, the user could assign a key to serve as a garage door opener. The user may also use this functionality for universal language capability, such as to change an English keypad to serve as a Japanese keypad. The display mounted on the key may be used to change the keypad template, such as by introducing a Japanese character replacing the English letter "A", or a macro legend such as "open garage door".
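
A minimal sketch, with hypothetical names, of reassigning a single key so that both its action and the legend shown on its key-mounted display change in software:

```python
# Sketch (hypothetical names) of reassigning one key and updating the legend
# shown on the key-mounted display when the keypad template changes.
class ReconfigurableKey:
    def __init__(self, key_id: str, legend: str, action: str):
        self.key_id = key_id
        self.legend = legend      # text shown on the small display on the key
        self.action = action      # character or macro emitted on press

    def reassign(self, legend: str, action: str) -> None:
        """Change what the key does and what it shows, in software only."""
        self.legend = legend
        self.action = action


key = ReconfigurableKey("K1", legend="A", action="a")
key.reassign(legend="開", action="open_garage_door")   # macro legend example
```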

FIG. 3 shows an embodiment of the present invention in the form of a key 300 such as a key that might be found on a multifunction keyboard. In FIG. 3, the key 300, such as a key from a multi-function keypad, is composed of a liquid crystal display (LCD) which can be modified with electrical inputs only. In this manner, as new templates are used, the key 300 will immediately reflect these changes. Thus, when a key 300 is reassigned a new name and function, the key's new name can become apparent to the user as a legend 302 on the key 300 itself.

The LCD or similar display need not form a part of the key. A clear button made of, for example, plastic may encase an LCD type display which may or may not be touch sensitive; that is, a touch sensitive LCD. As new templates are loaded, the LCD display is modified to reflect these changes.

FIG. 4 shows an embodiment of the present invention with a CT/MD 400. FIG. 4 shows the CT/MD 400 having a dynamic key pad 402 such as a touch sensitive LCD panel. The CT/MD 400 optionally includes a liquid crystal display (LCD) 404. If a writing area is present then new templates can be loaded with, for example, selectable icons, and a stylus 406 can be used to choose the various keys.

Server Based, Remote Handwriting Recognition.

Handwriting recognition may be processing intensive. Wireless devices may not have the processing capability to perform advanced handwriting recognition techniques within a reasonable time. The wireless devices can offload handwriting recognition functions to a central server. The server may then transmit the recognized characters back to the wireless device for display, such as on the screen 402.
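
One possible shape of this offloading, sketched with stand-in functions (the transport and recognizer are assumptions, not a disclosed protocol): the device captures a bitmap, hands it to the server link, and receives recognized characters back:

```python
# Sketch of offloading handwriting recognition from a wireless device to a
# central server. The transport and recognizer below are stand-ins only.
from typing import Callable


def recognize_remotely(bitmap: bytes,
                       send_to_server: Callable[[bytes], str]) -> str:
    """Capture -> transmit the image -> receive the recognized characters."""
    return send_to_server(bitmap)


# Stand-in for Server C's recognition service (assumption for illustration).
def fake_server_recognizer(bitmap: bytes) -> str:
    return "hello"               # pretend the strokes spelled "hello"


text = recognize_remotely(b"\x00\x01", fake_server_recognizer)
assert text == "hello"
```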

This could serve also as a signature authentication or finger print authentication mechanism. A scanner could be used to perform finger print authentication. Such authentication could take place remotely on a Central Server C 108.

FIG. 5 illustrates a wireless system 500 which is an embodiment of the present invention. In FIG. 5, a wireless device 502 transmits an image of the text that has been captured from the writing area 504. This may be a bit map image or it could be in a standard format that both the wireless device 502 and Central Server C 508 understand.

The wireless device 502 establishes a wireless connection with the Central Server C 508 and transmits the image in a standard format. The Server C 508 then performs the processing on the image and converts it into a format of standard recognized characters which the wireless device 502 understands. The server 508 thus takes an image format of the inputted information and converts it into another format of known characters. After this processing is complete the server C 508 can then transmit the converted format back to the wireless device 502. The server C 508 could also perform language translation on the inputted information. A microphone 506 at the wireless device 502 accepts voice. Voice clips may be transferred to the server 508 and converted to text using voice recognition software at the server 508. Alternatively, language translation may be performed on the voice file for voice based language translation. After the server 508 has performed these processing steps, voice files or text may be sent back to the wireless device 502.
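
The server-side processing just described could be sketched, under the assumption of pluggable recognition and translation hooks (hypothetical names, not the disclosed engines), as a single conversion step followed by an optional translation step:

```python
# Sketch of the Server C side of FIG. 5: accept an image or a voice clip,
# convert it to text, optionally translate, and return the result. The
# recognizer/translator hooks are assumptions; real engines would be plugged in.
from typing import Callable, Optional


def process_submission(payload: bytes,
                       recognize: Callable[[bytes], str],
                       translate: Optional[Callable[[str], str]] = None) -> str:
    """Server C: convert an image or voice clip to text, optionally translate."""
    text = recognize(payload)         # image-to-text or voice-to-text engine
    if translate is not None:
        text = translate(text)        # optional language translation step
    return text                       # result sent back to the wireless device


# Example: a voice clip recognized as "hello" and translated (stand-ins only).
reply = process_submission(b"...",
                           recognize=lambda clip: "hello",
                           translate=lambda s: "konnichiwa")
assert reply == "konnichiwa"
```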

The system 500 can also be used for user authentication such as with finger print, eye print, or password authentication.

Authentication:

Additionally, the key pad 402/stylus 406 interface could be redefined so that a finger print could be taken for image authentication. This image would be used, for example, for user authentication. The software for recognizing a finger print could reside on a network server 508 or on the hand held device 502.

The present invention allows for handwriting recognition and can be used for authentication. The recognition software can be on the network server or on the hand held device. The present invention also allows for the person to speak to a cell phone/hand held device and access remote macros, for example by stating "open garage". This command could connect to a network server 508, which would then authenticate the voice. Since voice recognition could be burdensome, this operation could be performed on a networked server 508 or on the hand held device 502. Once the voice has been recognized through voice recognition software, the command will be performed.
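
A hedged sketch of the "open garage" flow described above, with all function names assumed for illustration: the voice clip is authenticated first, then recognized, and only then is the matching macro executed:

```python
# Sketch of the "open garage" voice-macro flow: the spoken command is sent to
# a server, the voice is authenticated, and only then is the macro executed.
# All function names are illustrative assumptions.
def handle_voice_command(voice_clip: bytes,
                         authenticate_voice,   # returns True if the speaker is authorized
                         recognize_command,    # returns the command text
                         macros: dict) -> str:
    if not authenticate_voice(voice_clip):
        return "rejected: voice not recognized"
    command = recognize_command(voice_clip)    # e.g. "open garage"
    action = macros.get(command)
    if action is None:
        return f"no macro for '{command}'"
    action()
    return f"executed: {command}"


opened = []
status = handle_voice_command(
    b"...",
    authenticate_voice=lambda clip: True,
    recognize_command=lambda clip: "open garage",
    macros={"open garage": lambda: opened.append(True)},
)
assert status == "executed: open garage" and opened == [True]
```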

In FIG. 6, an embodiment of an input pad, such as a touch sensitive screen 600, of another part of the invention allows for collaboration. The present invention allows screens such as screen 600 to be viewed and interacted with from separate devices. For example, if three screens such as screens 602-1, 602-2, 602-3 are used to sign a document from different places, signatures can be entered on the separate screens and optionally displayed on the other screens as well. Each screen can be watched separately, with signing being done in parallel or sequentially on the separate screens. This allows the signatures entered on screens 602-1, 602-2, 602-3 to be placed on a virtual document 604 for interactive verification. Each signature can have a different trust level. The escrow agent is Server C 508.
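
As an illustrative sketch only (the Signature and VirtualDocument names and fields are assumptions), the signing ceremony could be modeled as Server C, acting as escrow agent, collecting signatures from separate screens onto one virtual document, each with its own trust level:

```python
# Sketch of the collaborative signing ceremony of FIG. 6: signatures entered
# on separate screens are collected by Server C (the escrow agent) onto one
# virtual document, each with its own trust level. Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Signature:
    signer: str
    strokes: bytes          # captured from the signer's touch screen
    trust_level: str        # e.g. "notarized", "witnessed", "self-asserted"


@dataclass
class VirtualDocument:
    title: str
    signatures: list[Signature] = field(default_factory=list)

    def add_signature(self, sig: Signature) -> None:
        """Server C places each incoming signature on the shared document."""
        self.signatures.append(sig)

    def all_signed(self, required: set[str]) -> bool:
        return required <= {s.signer for s in self.signatures}


doc = VirtualDocument("Purchase agreement")
doc.add_signature(Signature("alice", b"...", "notarized"))
doc.add_signature(Signature("bob", b"...", "witnessed"))
assert doc.all_signed({"alice", "bob"})
```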

The present invention has been described with a number of features and advantages. For example, one embodiment of the present invention provides a keyboard device including a plurality of configurable keys and a central server, where the central server includes means for dynamically configuring a legend on a selected key from the configurable keys, means for detecting an actuation (selection) of the selected key with the legend, and means for associating the actuation of the selected key with the legend on the selected key. The central server could be remote or local to the keyboard device.

The keys in the keyboard typically could be LCDs for displaying the respective legends, and desirably are touch sensitive.

The keyboard device could be voice based, sound based or macro based, including key, sound or voice. The keyboard device could be wireless, such as a cellular telephone or mobile device. The keyboard device could be non-wireless.

The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and it should be understood that many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present invention and its practical application, to thereby enable others skilled in the art to best utilize the present invention and various embodiments, with various modifications, as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.

Claims

1-12. (canceled)

13. A device comprising:

a processor coupled to a transceiver and a memory device; and
an interface coupled to the processor, wherein the interface includes a key set and a touch screen, wherein the processor is configured to dynamically configure the interface in a configuration that includes configuring the key set as an input and output device and configuring the touch screen as an input and output device.

14. The device of claim 13, wherein the device is configured to include one or more of servers, portable computers (PCs), multifunction communication devices, cellular telephones, and personal digital assistants (PDAs).

15. The device of claim 13, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an input and output device.

16. The device of claim 13, wherein the configuration includes configuring the key set as an input and output device and configuring the touch screen as an output device.

17. The device of claim 13, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an input device.

18. The device of claim 13, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an input device.

19. The device of claim 13, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an output device.

20. The device of claim 13, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an output device.

21. The device of claim 13, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an input and output device.

22. The device of claim 13, wherein the configuration includes configuring the key set as an input and output device and configuring the touch screen as an input device.

23. The device of claim 13, wherein the configuration includes an authentication data receiver that receives authentication data.

24. The device of claim 23, wherein the authentication data includes one or more of audio, handwriting, user profiles, passwords, finger prints, retinal scans, photos, images, and video.

25. The device of claim 23, wherein the processor is configured to process the authentication data by comparing the authentication data to data samples and using information of the comparing to determine a match condition of the authentication data to one or more of a person and a location.

26. The device of claim 23, wherein the processor is configured to process the authentication data by transferring the authentication data to one or more remote devices and comparing the authentication data to data samples stored in the remote devices, wherein the processor is configured to process the authentication data by using information of the comparing to determine a match condition of the authentication data to one or more of a person and a location.

27. The device of claim 23, wherein the authentication data includes audio data.

28. The device of claim 27, wherein the audio data includes one or more of voice data of a person and sound data of a location.

29. The device of claim 28, wherein the processor is configured to convert the audio data to a text format.

30. The device of claim 27, wherein the interface includes a microphone.

31. The device of claim 30, wherein the audio data includes one or more of a voice clip and a voice file received via the microphone, the processor configured to process the audio data, the processing including executing one or more functions that include storing the data within the device and communicating the data to a remote device.

32. The device of claim 31, wherein the processing includes converting the audio data from a first language to a second language.

33. The device of claim 30, wherein the audio data includes one or more of a voice clip and a voice file received via the microphone, the processor configured to process the audio data, wherein the processing includes converting the audio data to text data in at least one of a plurality of languages.

34. The device of claim 23, wherein the authentication data includes handwriting data.

35. The device of claim 23, wherein the processor is configured to process the authentication data to determine a match condition of the authentication data to one or more of a person and a location.

36. The device of claim 35, wherein the processor controls the transceiver according to the match condition.

37. The device of claim 35, wherein the processor controls access by a user to the device according to the match condition.

38. The device of claim 35, wherein the processor controls access to services of a network via the device according to the match condition.

39. The device of claim 35, wherein the processor controls access to a vehicle via the device according to the match condition.

40. The device of claim 35, wherein the processor controls a transaction via the device according to the match condition.

41. The device of claim 13, wherein the processor receives from the memory device configuration data that includes the configuration.

42. The device of claim 13, wherein the processor receives via the transceiver configuration data that includes the configuration.

43. The device of claim 13, wherein the processor automatically selects the configuration in response to an input received from a user.

44. The device of claim 13, wherein the processor automatically selects the configuration in response to context information.

45. The device of claim 44, wherein the context information includes one or more of a location, recent function configurations, and information received from at least one remote device via the transceiver.

46. The device of claim 13, wherein the transceiver includes one or more of a wired transceiver and a wireless transceiver.

47. The device of claim 13, wherein configuring the key set as the input device includes configuring at least one first key of the key set for a first function of a plurality of functions and configuring at least one second key of the key set for a second function of a plurality of functions.

48. The device of claim 13, wherein the processor is configured to display a keyboard on the touch screen, wherein the keyboard is touch-sensitive.

49. The device of claim 13, wherein the interface includes a display, wherein the key set includes a keyboard displayed on the display.

50. The device of claim 49, wherein one or more of the key set and the keyboard can be moved from a first location selected by a user to a second location selected by the user within the display.

51. The device of claim 49, wherein at least a portion of the display is used for one or more of receiving character inputs and viewing.

52. The device of claim 13, wherein the configuration includes configuring the key set in one of a plurality of languages.

53. The device of claim 13, wherein the key set configured as the output device is configured to provide audible feedback in response to inputs of each key of the key set, wherein the audible feedback is in one or more of a plurality of languages and sounds.

54. The device of claim 13, wherein the key set and the touch screen are co-located in a region.

55. A system comprising:

a network; and
a plurality of client devices coupled to the network, each of the client devices including a processor coupled to a transceiver and an interface, wherein the interface includes a key set and a touch screen, wherein the processor is configured to dynamically configure the interface in a configuration that includes configuring the key set as an input and output device and configuring the touch screen as an input and output device.

56. The system of claim 55, wherein the client devices include one or more of servers, portable computers (PCs), multifunction communication devices, cellular telephones, and personal digital assistants (PDAs).

57. The system of claim 55, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an input and output device.

58. The system of claim 55, wherein the configuration includes configuring the key set as an input and output device and configuring the touch screen as an output device.

59. The system of claim 55, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an input device.

60. The system of claim 55, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an input device.

61. The system of claim 55, wherein the configuration includes configuring the key set as an input device and configuring the touch screen as an output device.

62. The system of claim 55, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an output device.

63. The system of claim 55, wherein the configuration includes configuring the key set as an output device and configuring the touch screen as an input and output device.

64. The system of claim 55, wherein the configuration includes configuring the key set as an input and output device and configuring the touch screen as an input device.

65. The system of claim 55, wherein the configuration includes configuring the interface as an authentication data receiver, wherein the authentication data includes one or more of audio data, handwriting data, user profiles, passwords, finger prints, retinal scans, photos, images, and video.

66. The system of claim 65, wherein the processor is configured to process the authentication data by comparing the authentication data to data samples and using information of the comparing to determine a match condition of the authentication data to one or more of a person and a location.

67. The system of claim 65, wherein the processor of a first client device of the plurality of client devices is configured to process the authentication data by transferring the authentication data to a second client device of the plurality of client devices and comparing the authentication data to data samples stored in the second client device, wherein the processor is configured to process the authentication data by using information of the comparing to determine a match condition of the authentication data to one or more of a person and a location.

68. The system of claim 65, wherein the processor is configured to process the authentication data to determine a match condition of the authentication data to one or more of a person and a location.

69. The system of claim 55, wherein the processor automatically selects the configuration in response to one or more of context information and an input received from a user.

70. The system of claim 55, wherein the network includes one or more of a wired network and a wireless network.

71. The system of claim 55, wherein the interface includes a microphone configured to receive audio data that includes one or more of a voice clip and a voice file, wherein a first client device receiving the audio data is configured to transfer the audio data to a second client device.

72. The system of claim 71, wherein the processor of the second client device is configured to:

convert the audio data to text data;
transfer the text data to the first client device, wherein the interface of the first client device is configured to display the text data.

73. The system of claim 55, wherein the processor of at least one client device is configured to communicate using a plurality of languages with at least one other client device.

74. The system of claim 55, wherein a first set of the plurality of client devices includes communication devices and a second set of the plurality of client devices includes a server, wherein communication among the client devices of the first set is routed through the server.

75. A method comprising:

forming a coupling among a plurality of client devices;
receiving at a first client device configuration data from a second client device; and
dynamically configuring the first client device using the configuration data, the configuring including configuring an interface of the first client device, wherein the interface includes a key set and a touch screen, wherein the configuring includes configuring the key set as an input and output device and configuring the touch screen as an input and output device.

76. The method of claim 75, wherein the configuring includes configuring the key set as an input device and configuring the touch screen as an input and output device.

77. The method of claim 75, wherein the configuring includes configuring the key set as an input and output device and configuring the touch screen as an output device.

78. The method of claim 75, wherein the configuring includes configuring the key set as an output device and configuring the touch screen as an input device.

79. The method of claim 75, wherein the configuring includes configuring the key set as an input device and configuring the touch screen as an input device.

80. The method of claim 75, wherein the configuring includes configuring the key set as an input device and configuring the touch screen as an output device.

81. The method of claim 75, wherein the configuring includes configuring the key set as an output device and configuring the touch screen as an output device.

82. The method of claim 75, wherein the configuring includes configuring the key set as an output device and configuring the touch screen as an input and output device.

83. The method of claim 75, wherein the configuring includes configuring the key set as an input and output device and configuring the touch screen as an input device.

84. The method of claim 75, wherein the configuring includes configuring at least one first key of the key set for a first function of a plurality of functions and configuring at least one second key of the key set for a second function of a plurality of functions.

85. The method of claim 75, wherein the configuring includes configuring the interface as an authentication data receiver that receives authentication data, wherein the authentication data includes one or more of audio data, handwriting data, user profiles, passwords, finger prints, retinal scans, photos, images, and video.

86. The method of claim 85, comprising:

comparing the authentication data to data samples;
determining a match condition of the authentication data to one or more of a person and a location using information of the comparing.

87. The method of claim 85, comprising:

transferring the authentication data to one or more remote devices;
comparing the authentication data to data samples stored in the remote devices;
determining a match condition of the authentication data to one or more of a person and a location using information of the comparing.

88. The method of claim 85, comprising determining a match condition of the authentication data to one or more of a person and a location.

89. The method of claim 88, comprising controlling access to the coupling among the plurality of client devices according to the match condition.

90. The method of claim 75, comprising automatically selecting the configuration data in response to an input received from a user.

91. The method of claim 75, comprising automatically selecting the configuration data in response to context information, wherein the context information includes one or more of a location, recent function configurations, and information received from at least one remote device via the transceiver.

Patent History
Publication number: 20080146283
Type: Application
Filed: Nov 14, 2006
Publication Date: Jun 19, 2008
Inventors: Raman K. Rao (Palo Alto, CA), Sunil Kaliputnam Rao (Palo Alto, CA), Sanjay Kaliputnam Rao (Palo Alto, CA)
Application Number: 11/599,715
Classifications
Current U.S. Class: Restrictive Dialing Circuitry (455/565); Auto-dialing Or Repertory Dialing (e.g., Using Bar Code, Etc.) (455/564)
International Classification: H04B 1/38 (20060101); H04M 1/00 (20060101);