MULTI-DIMENSIONAL TOUCH INPUT VECTOR SYSTEM FOR SENSING OBJECTS ON A TOUCH PANEL
A touch panel system allows multiple simultaneous touch objects on a touch panel to be distinguished. The touch panel includes a first plurality of light transmitters and a second plurality of light sensors, each positioned around at least a portion of a perimeter of the touch panel. A processor in communication with the light sensors acquires light intensity data from them, wherein any touch object placed within a touch-detectable region of the panel interrupts at least a subset of the light paths between the transmitters and sensors. Based on the interrupted light paths, the processor generates a touch input vector that represents the placement of each touch object on the touch panel.
This application is a continuation of U.S. patent application Ser. No. 12/910,704, filed Oct. 22, 2010, now U.S. Pat. No. 8,605,046, granted Dec. 10, 2013.
TECHNICAL FIELD
The present invention relates to a system for sensing multiple touch objects placed on the surface of a touch panel.
BACKGROUND ART
A touch panel is a type of user interface device that may be attached to a surface of a display device or a projection surface. Traditional touch panels, although widely used, are unable to detect multiple fingers or objects on a surface. If two or more fingers touch the surface simultaneously, the touch panel may stop working, or it may report only one of the touch positions or an inaccurate phantom touch position.
The interface of controlling a machine with multiple fingers has a long history that predates the computer (e.g., the piano, the control panel, the DJ mixer). As computers have become available to more people and increasingly powerful, the human-computer interface has evolved toward interactions that feel more natural, mirroring interactions in our physical world. In the early days, computers used punched cards as input. Later, the console and keyboard interface was introduced. At that time, however, computer users were still limited to programmers and trained staff, because users had to remember all the commands and parameters in order to interact with computers. The use of WIMP (Window, Icon, Menu, Pointing device) greatly simplified the task. Virtual buttons, including icons and menus, represent physical operations in a 2D graphical way, so that a pointing device (e.g., a mouse or touch screen) can simulate clicks on them intuitively. Unlike interactions in the real world, however, WIMP is limited to a single-point input device: the user can point to only a single location at one time. This limitation results in serious inefficiency. One logical operation may require a series of mouse clicks and mouse moves; for example, a user may click through multiple levels of menus and move across the screen to reach buttons and icons in order to perform one logical operation. If we had only one finger instead of ten in everyday life, life would be very difficult. In addition, WIMP is limited to a single user; multi-user operation is not possible because only one mouse or pointing device is available system-wide. A multi-point input device (e.g., a touch screen that can detect the locations of multiple fingers simultaneously) can significantly increase interaction efficiency and allows multi-user collaboration.
A traditional infrared touch panel comprises an array of light transmitters on two adjacent sides of the touch panel and an array of light detectors on the other two sides. Each light transmitter corresponds to one light detector at the opposite position. This transmitter and detector layout forms X and Y light beam paths, where a single finger touch on the surface blocks one X light beam and one Y light beam. The touch coordinates and the size of the touch area can then be determined from the intersection of the blocked X beam and Y beam. The problem with such a light beam matrix touch screen is that it cannot accurately detect multiple touch positions simultaneously. For example, if two fingers touch the surface at the same time, four light beam intersection points will be found, two of which are phantom points. The actual touch positions cannot be determined on such a light beam matrix touch panel.
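The phantom-point ambiguity described above can be sketched in a few lines (an illustrative example, not from the patent; the beam indices and finger positions are hypothetical):

```python
# Why an X/Y beam-matrix panel cannot distinguish two simultaneous
# touches: every intersection of a blocked X beam and a blocked Y beam
# is a candidate touch point, so two fingers yield four candidates.

def candidate_points(blocked_x, blocked_y):
    """All intersections of blocked X beams and blocked Y beams."""
    return {(x, y) for x in blocked_x for y in blocked_y}

# Two fingers at (2, 7) and (5, 3) block X beams {2, 5} and Y beams {3, 7}.
points = candidate_points({2, 5}, {3, 7})
# Four candidates: (2, 3), (2, 7), (5, 3), (5, 7) -- two are phantoms.
```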
In short, the input vector generated by a traditional touch panel is a singleton <P1>, where P1 is the location of a touch point. It is therefore an object of the present invention to provide more dimensions and features of the touch properties, allowing finer control and a new generation of interaction.
Singleton touch input vector generated by a traditional infrared touch panel:
<P1>
Multi-dimensional touch input vector generated by the present invention:
The present invention provides an apparatus to detect ID, position, size and the convex contour of one or more objects placed on the surface of a touch panel.
The system of the invention uses a method for detecting an ID, position, size and convex contour of at least one touch object placed on a touch region W within a perimeter of a touch panel, the touch panel including on its periphery at least one light transmitter and at least one light sensor. Steps of the method involve the following:
(a) acquiring light intensity data from a subset of light paths L between at least one light transmitter and at least one light sensor of the touch panel, at least one of the light paths being interrupted by placement of at least one touch object within the touch region W;
(b) computing hot regions H={hi: i≤NH}, where hi is the ith hot region, from a subset of said light intensity data by calculating the shape and boundary of interrupted light paths;
(c) computing expected object area S by overlaying said hot regions H and comparing it with a predetermined overlay region P;
(d) deriving totally disconnected expected object area S′ from S;
(e) computing spatial properties, including position, size and convex contour, of said totally disconnected expected object area S′;
(f) associating touch objects with a subset of said totally disconnected expected object area S′; and
(g) assigning to each of said touch objects an ID and said spatial properties as a touch input vector representing the placement of each touch object on the touch panel.
The touch system apparatus for detecting objects placed on a surface within a perimeter of a touch panel includes at least one light transmitter positioned around at least a portion of the perimeter of said touch panel and at least one light sensor positioned around at least a portion of the perimeter of said touch panel, wherein said at least one light sensor is of L-shape or linear shape, wherein at least one touch object placed on the surface within the perimeter of the touch panel interrupts at least a subset of light paths between said at least one light transmitter and said at least one light sensor.
According to one embodiment, the light sensor is a CIS (contact image sensor) module in L-shape or linear shape positioned around at least a portion of the perimeter of the touch panel.
According to one embodiment, the light transmitter comprises an LED semiconductor die and a lens, wherein said lens has a wider x-axis view angle than y-axis view angle. This structure allows more energy to be focused and directed toward the light sensor array and reduces energy wasted in other directions.
For this invention, a method for detecting an ID, position, size and convex contour of at least one object placed on a touch region W within a perimeter of a touch panel, the touch panel including on its periphery at least one light transmitter and at least one light sensor, comprises the following steps:
(a) acquiring light intensity data from a subset of light paths L between at least one light transmitter and at least one light sensor of the touch panel, at least one of the light paths being interrupted by placement of at least one touch object within the touch region W;
(b) computing hot regions H={hi: i≤NH}, where hi is the ith hot region, from a subset of said light intensity data by calculating the shape and boundary of interrupted light paths;
(c) computing expected object area S by overlaying said hot regions H and comparing it with a predetermined overlay region P;
(d) deriving totally disconnected expected object area S′ from S;
(e) computing spatial properties, including position, size and convex contour, of said totally disconnected expected object area S′;
(f) associating touch objects with a subset of said totally disconnected expected object area S′;
(g) assigning to each of said touch objects an ID and said spatial properties as a touch input vector representing the placement of each touch object on the touch panel.
The first step is to acquire light intensity data from a subset of light paths L. In a preferred embodiment, such subset of light paths can be predefined based on the view angles of the light transmitters and light sensors. For example, in
In a preferred embodiment of this invention where the touch accuracy is the first priority, it is best that the subset of light paths L contains all the light paths that are within the view angles of light transmitters and light sensors. For example, in
In another preferred embodiment of this invention where detection speed is the first priority, it is best that the subset of light paths L contains the fewest light paths that are sufficient for touch object detection. For example, the subset of light paths L can be dynamically reconfigured so that the locality property of previous frames and future frames can be used to reduce the number of light paths needed to detect touch objects.
The first step of (a) acquiring a subset of light intensity data from said at least one light sensor further comprises the steps of:
(1) switching on each of said at least one light transmitter at least once for a calculated duration;
(2) reading electrical signals at least once from each light sensor of said subset during the switch-on time and/or before the switch-on time.
Typically, only one light transmitter is switched on at a time while multiple signals are read from different light sensors. The switch-on duration depends on the response time of the light sensors, which is dynamically affected by signal strength, ambient light, and other factors. For example, the response time is shorter in an environment with ambient light than in a dark room. Depending on the configuration, the switch-on time can be set to a constant value or adjusted dynamically by a processor in communication with the light transmitters and light sensors.
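The scan loop of step (a) can be sketched as follows. This is a minimal simulation, not the patent's implementation; the hardware interface (the `FakeSensor` class and its fields) is hypothetical, standing in for real sensor reads taken before and during each transmitter's switch-on time:

```python
# Step (a) sketch: switch on one transmitter at a time, read each
# sensor before switch-on (ambient only) and during switch-on, and
# subtract the ambient reading to get the light-path intensity.

class FakeSensor:
    """Simulated sensor: reads ambient light plus signal when lit."""
    def __init__(self, name, ambient, signal):
        self.name, self.ambient, self.signal = name, ambient, signal
        self.lit = False
    def read(self):
        return self.ambient + (self.signal if self.lit else 0)

def acquire_light_paths(transmitter_names, sensors):
    """Return {(tx, rx): intensity} with ambient light subtracted."""
    data = {}
    for tx in transmitter_names:
        ambient = {s.name: s.read() for s in sensors}  # before switch-on
        for s in sensors:                              # transmitter on
            s.lit = True
        for s in sensors:                              # during switch-on
            data[(tx, s.name)] = s.read() - ambient[s.name]
        for s in sensors:                              # transmitter off
            s.lit = False
    return data

sensors = [FakeSensor("D1", ambient=10, signal=50),
           FakeSensor("D2", ambient=12, signal=0)]  # D2's path is blocked
paths = acquire_light_paths(["L1"], sensors)
# paths[("L1", "D1")] == 50 (clear path); paths[("L1", "D2")] == 0 (blocked)
```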
Next, hot regions are computed from the light intensity data by calculating the shape and boundary of the interrupted light paths. In a preferred embodiment of this invention where dynamically reconfiguring the subset of light paths L is expensive, hot regions can be computed from a subset of said light intensity data instead of reconfiguring the subset of light paths.
In order to illustrate the method of this invention step by step, a simplified subset of light paths L is chosen for illustration purposes. FIG. 5 shows the subset of light paths used in this particular example. The same subset of light paths L is used in
- {L2-D2, L2-D9,
- L3-D3, L3-D10,
- L4-D4, L4-D11, L4-D19,
- L5-D5, L5-D20,
- L6-D6, L6-D21,
- L7-D7, L7-D22,
- L8-D1, L8-D8, L8-D23,
- L9-D1, L9-D9, L9-D24,
- L10-D3, L10-D10,
- L11-D11, L11-D4,
- L12-D12, L12-D5,
- L13-D13, L13-D6,
- L14-D14, L14-D7,
- L15-D15, L15-D8,
- L16-D16, L16-D9,
- L17-D17, L17-D10,
- L18-D18, L18-D11,
- L19-D19, L19-D12,
- L20-D20, L20-D13,
- L21-D21,
- }
In
- h1=AA_BA_BZ_AZ;
- h2=BA_CA_CZ_BZ;
- h3=GA_IA_AZ_GZ;
- h4=JA_KA_KZ_JZ;
- h5=KA_LA_GZ_KZ;
- h6=MA_NA_BZ_AZ;
- h7=NA_OA_CZ_BZ;
- h8=DA_EA_EZ_DZ;
- h9=EA_FA_FZ_EZ;
- h10=JA_KA_NZ_MZ;
- h11=FA_GA_HZ_FZ;
Now we have hot regions H computed.
Next, step (c) is to compute expected object area S by overlaying said hot regions H and comparing it with a predetermined overlay region P.
For the first preferred embodiment, overlay region Ri is calculated as:
R0 = { }; Ri = Overlay(Ri-1, hi ∩ F) for i = 1, . . . , NH,
where F is a filter region, and Overlay(Ri-1, hi) increments by one the label c of each overlay region of Ri-1 lying within hi ∩ F and adds any uncovered portion of hi ∩ F as a new overlay region with label 1.
Overlay region Ri is the data structure representing the 1st, . . . , ith hot regions overlaid all together. Overlay regions R1 through R11 are illustrated in separate
For this example, filter region F is set to be the whole touch region W, as shown in
For example in
R0 is initialized to be { }
To overlay hot region h1(AA_BA_BZ_AZ) on R0:
R1={<AA_BA_BZ_AZ,1>}
To overlay hot region h2 (BA_CA_CZ_BZ) on R1:
R2={<AA_BA_BZ_AZ,1>,<BA_CA_CZ_BZ,1>}
Please note that R2: {<AA_BA_BZ_AZ,1>, <BA_CA_CZ_BZ,1>} is also considered equivalent to {<AA_CA_CZ_AZ,1>}, which describes the same overlay regions in one big piece instead of two smaller pieces. In short, overlay regions <x1, c> and <x2, c> together are considered equivalent to the single overlay region <x1+x2, c>.
To overlay hot region h3 (GA_IA_AZ_GZ) on R2
To overlay hot region h4 (JA_KA_KZ_JZ) on R3
To overlay hot region h11 (FA_GA_HZ_FZ) on R9
The final overlay region R of all hot regions: R=R11.
Regions (e.g., hi) and overlay regions (e.g., Ri) can be stored in a processor's memory, a computer's main memory, or graphics card memory using vector, raster, and/or 3D z-order data structures. The vector format represents regions and overlay regions with high precision, consumes less memory, and allows fast geometry calculation. Raster or 3D z-order formats can also be used for graphics card acceleration. Different data structures representing the same regions and overlay regions are considered equivalent to each other.
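The overlay bookkeeping above can be sketched on a raster grid, one of the representations the text mentions: each hot region is a set of cells, overlaying increments a per-cell count, and cells sharing a count form one overlay region <x, c>. This is an illustrative sketch; the region shapes are hypothetical, not the h1..h11 of the example:

```python
# Overlay hot regions on a raster: each cell's count is the number of
# hot regions covering it, restricted to the filter region F.

from collections import defaultdict

def overlay(hot_regions, filter_region):
    """Return {cell: count} after overlaying all hot regions within F."""
    counts = defaultdict(int)
    for cell in filter_region:
        counts[cell] = 0
    for h in hot_regions:
        for cell in h & filter_region:  # apply filter region F
            counts[cell] += 1
    return dict(counts)

F = {(x, y) for x in range(4) for y in range(4)}  # touch region W as filter
h1 = {(x, y) for x in range(4) for y in (1, 2)}   # horizontal band
h2 = {(x, y) for x in (1, 2) for y in range(4)}   # vertical band
R = overlay([h1, h2], F)
# Cells covered by both bands get count 2 (e.g. R[(1, 1)] == 2); cells
# in only one band get count 1; the untouched corners keep count 0.
```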
A predetermined overlay region Q is calculated by overlaying a set of hot regions G pre-calculated from said subset of light intensity data, wherein the light intensity values are filled with zeros or a value below a predefined threshold. The pre-calculated hot regions G are outlined in
In one preferred embodiment (such as this example), G=L, which means all the light paths are hot regions. As shown in
The calculation is similar to the calculation of overlay regions except that the filter region F is not applied. Overlay regions Q can be seen in
The next thing to do in step (c) is to calculate expected object area S by comparing R and a predetermined overlay region P. In this first preferred embodiment, P is set to be Q which is previously calculated.
S = U(i=1..NP) SelectCompare(xi, ci, ε, R),
where P = {<x1,c1>, . . . , <xNP,cNP>}, ε = 0 or a small integer, and SelectCompare(xi, ci, ε, R) selects the overlay regions of R that coincide with xi and whose labels match ci within ε.
In this embodiment, we set ε=0. However, in other embodiments, ε can be a small integer such as 1 or 2. Ideally, the light intensity data are all acquired in one shot or within a very small duration. In some cases, however, the acquisition time cannot be ignored. For example, when a touch object moves extremely fast, the light intensity acquired by different light sensors is sampled at different times, one after another, which causes some of the hot regions to shift away from their actual positions during the elapsed time. By increasing the ε value, this invention becomes more robust at detecting fast-moving objects.
In this embodiment, we compare the overlay regions R and Q to find the common regions labeled with the same c. For example, the overlay region <BG_CJ_CK,3> is in R (or R9) and the overlay region <bc,3> is in Q. The region BG_CJ_CK and the region bc are equivalent because they mark the same region in
In this embodiment, for example, the common regions are {<ad,3>, <bd,3>, <bc,3>, <ae,3>, <hg,3>, <fh,3>, <hh,3>, <fi,3>, <ej,3>, <ek,3>, <u5,2>}, so S={ad, bd, bc, ae, hg, fh, hh, fi, ej, ek, u5}, as seen in
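On the raster representation, the compare of step (c) reduces to keeping the cells whose overlay count in R matches the count in the predetermined overlay P within the tolerance ε. A minimal sketch, using hypothetical toy grids rather than the example's R11 and Q:

```python
# SelectCompare sketch on a raster: a cell belongs to the expected
# object area S when its label (count) in R agrees with its label in
# the predetermined overlay P within eps.

def select_compare(R, P, eps=0):
    """Return the set of cells whose labels in R and P agree within eps."""
    return {cell for cell, c in P.items()
            if abs(R.get(cell, 0) - c) <= eps}

P = {(0, 0): 2, (0, 1): 2, (1, 0): 3, (1, 1): 3}  # counts with no touch
R = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 2}  # counts of hot regions
S = select_compare(R, P)
# Only (0, 1) and (1, 0) keep their full count, so S = {(0, 1), (1, 0)}.
```

A larger `eps` would also admit cells whose counts drifted by one, which is how the ε tolerance described above makes fast-moving objects easier to detect.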
In a second preferred embodiment, step (c) is processed in a different way, which is slightly faster. For the second preferred embodiment, overlay region Ri is calculated as:
R0 = Q filtered by F; Ri = Overlay'(Ri-1, hi ∩ F) for i = 1, . . . , NH,
where F is a filter region, and Overlay'(Ri-1, hi) decrements by one the label c of each overlay region of Ri-1 lying within hi ∩ F.
The different overlay regions R1 through R11 are shown in
In this embodiment, R0 is initialized to be the precalculated set Q filtered by F. In this example we use the whole touch region W as the filter so that R0=Q.
R0={<a1,2>,<a2,2>,<a3,2>,<a4,2>,<a5,3>, . . . <b1,2>,<b2,2>,<b3,2>,<b4,3>, . . . }
To overlay hot region h1 (A1_B1_B2_A2) on R0:
R1={<a1,1>,<a2,1>,<a3,1>,<a4,1>,<a5,2>, . . . <b1,2>,<b2,2>,<b3,2>,<b4,3>, . . . }
To overlay hot region h2 (B1_C1_C2_B2) on R1:
R2={<a1,1>,<a2,1>,<a3,1>,<a4,1>,<a5,2>, . . . <b1,1>,<b2,1>,<b3,1>,<b4,2>, . . . }
To overlay hot region h9 (F1_G1_H2_F2) on R8
The next thing to do in step (c) is to calculate expected object area S by comparing R and a predetermined overlay region P. In this embodiment, P is set to be {<W, 0>}. Expected object area S is computed as:
S = U(i=1..NR) SelectCompare(xi, ci, ε, P),
where R = {<x1,c1>, . . . , <xNR,cNR>}, ε = 0 or a small integer, and SelectCompare(xi, ci, ε, P) selects the overlay regions of P that coincide with xi and whose labels match ci within ε.
In this embodiment, we set ε=0 and compare the overlay regions R and P (={<W,0>}) to find the common regions labeled with the same c. Since the labels in P are zeros, the expected object area consists of the selected regions whose labels are zeros. The compared common regions are {<ad,0>, <bd,0>, <bc,0>, <ae,0>, <hg,0>, <fh,0>, <hh,0>, <ej,0>, <fi,0>, <ek,0>, <u5,0>}, so S={ad, bd, bc, ae, hg, fh, hh, ej, fi, ek, u5}.
Now that we have the expected object area S, the next step is to compute the totally disconnected expected object area S′. In this example, S={ad, bd, bc, ae, hg, fh, hh, ej, fi, ek, u5}, where regions ad, bd, bc, ae are connected; regions hg, fh, hh, ej, fi, ek are connected; and u5 is connected by itself.
The totally disconnected expected object area S′={ad+bd+bc+ae, hg+fh+hh+ej+fi+ek, u5} (shown in
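Step (d) can be sketched as connected-component grouping: on a raster, cells of S that touch (here via 4-neighborhood adjacency, an assumption of this sketch) merge into one component, and the components are the totally disconnected areas. The sample cells are hypothetical, not the example's regions:

```python
# Step (d) sketch: split the expected object area S into its connected
# components using a flood fill over 4-neighbor adjacency.

def disconnect(cells):
    """Split a set of grid cells into connected components."""
    cells, components = set(cells), []
    while cells:
        stack = [cells.pop()]
        comp = set(stack)
        while stack:
            x, y = stack.pop()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in cells:
                    cells.remove(n)
                    comp.add(n)
                    stack.append(n)
        components.append(comp)
    return components

S = {(0, 0), (0, 1), (5, 5), (5, 6), (6, 6), (9, 0)}
S_prime = disconnect(S)
# Three components: {(0,0),(0,1)}, {(5,5),(5,6),(6,6)}, and {(9,0)}.
```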
The last step (g) is to assign an ID and said spatial properties (e.g., position, size and convex contour) to each of the touch objects on the touch panel. The assignment generates a multi-dimensional touch input vector that can be used in the same way as touch input data from prior single-dimensional touch panels:
In order to assign a consistent ID to the same touch object, a temporal and spatial analysis is performed to identify the same touch object at slightly different locations detected at different times. For example, a recursive function can be defined to enumerate all possible ID-to-object mappings in order to find the mapping that minimizes the global movement difference between the previous frame and the current frame.
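The ID assignment just described can be sketched by enumerating mappings between previous-frame objects and current-frame detections and keeping the one with the smallest total movement. The enumeration below is exponential and only illustrative; a production implementation would likely use an assignment algorithm such as the Hungarian method, and the positions are hypothetical:

```python
# Frame-to-frame ID assignment sketch: try every mapping of known IDs
# onto current detections and keep the minimum-total-movement mapping.

from itertools import permutations

def assign_ids(prev, curr):
    """prev: {id: (x, y)}; curr: list of (x, y). Return {id: (x, y)}."""
    ids = list(prev)
    best, best_cost = None, float("inf")
    for perm in permutations(curr, len(ids)):
        cost = sum((px - cx) ** 2 + (py - cy) ** 2
                   for (px, py), (cx, cy) in zip((prev[i] for i in ids), perm))
        if cost < best_cost:
            best, best_cost = dict(zip(ids, perm)), cost
    return best

prev = {1: (10, 10), 2: (50, 50)}
curr = [(52, 49), (11, 12)]
# Object 1 is matched to (11, 12) and object 2 to (52, 49).
```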
For each frame, the steps (a), (b), (c), (d), (e), (f), (g) are performed. A typical implementation of the present invention performs 60 frames per second in order to continuously capture touch objects movement and assign a correct and consistent ID to the same touch object.
A further improvement in the specific LED or light transmitter design for the present invention involves coating with a reflective material around at least a portion of the surface of the light transmitter. This allows light energy previously escaping to other directions to be bounced back and forth until reaching a proper escaping direction. Thus, light energy is more focused and directed towards the light sensors in the present invention.
In another preferred embodiment, there is at least one internal processor in communication with said at least one light sensor so as to obtain light intensity data. Further, at least one external processor is configured to communicate with said at least one internal processor to accelerate the calculation of overlays. Such an external processor can be a computer processor and/or a computer graphics card. The communication protocol needs to be high-bandwidth and low-latency; in one preferred embodiment, the protocol can be USB or Ethernet.
Claims
1. A touch system for detecting an object placed on a surface within a perimeter of a touch panel having an x-axis and a y-axis comprising:
- a first plurality of light transmitters in optical communication with a second plurality of light sensors, each light sensor positioned around at least a portion of the perimeter of said touch panel, each light transmitter comprising a LED semiconductor die and a lens wherein said lens has a wider x-axis view angle than y-axis view angle.
2. The touch system as in claim 1, wherein the cross section of said lens is an ellipse.
3. The touch system as in claim 1, wherein at least one light transmitter is coated with a reflective material around at least a portion of the surface of said at least one light transmitter.
4. A touch system for detecting an object placed on a surface within a perimeter of a touch panel having four corners comprising:
- a first plurality of light transmitters positioned around at least a portion of the perimeter of said touch panel; and
- a second plurality of light sensors positioned around at least a portion of the perimeter of said touch panel, wherein at least some of the second plurality of light sensors have an L-shape or linear shape, wherein at least one touch object placed on the surface within the perimeter of the touch panel interrupts at least a subset of light paths between at least one of the light transmitters and one of the light sensors.
5. The touch system as in claim 4 further comprising at least one processor in communication with at least some of the second plurality of light sensors so as to obtain light intensity data therefrom, the processor configured to locate and distinguish one or more touch objects placed on the touch panel based on the interrupted light paths.
6. The touch system as in claim 4, wherein at least one light sensor is a CIS module.
7. The touch system as in claim 4, wherein four light transmitters among the first plurality of light transmitters are positioned at the four corners of said touch panel.
8. A touch system for detecting an id, position, size, and convex contour of at least one object placed on a surface within a perimeter of a touch panel comprising:
- a first plurality of light transmitters positioned around at least a portion of the perimeter of said touch panel; a second plurality of light sensors positioned around at least a portion of the perimeter of said touch panel;
- at least one light sensor from the second plurality of light sensors providing light intensity data from a subset of light paths between at least one light transmitter in the first plurality of light transmitters and said at least one light sensor, one or more of said light paths being interrupted by placement of at least one touch object onto the surface of the touch panel;
- at least one internal processor in communication with said at least one light sensor so as to obtain light intensity data;
- means for computing hot regions H={hi: i≤NH}, where hi is the ith hot region and NH is the number of hot regions, from a subset of said light intensity data by calculating the shape and boundary of interrupted light paths;
- means for computing expected object area S by overlaying said hot regions H and comparing it with a predetermined overlay region P;
- means for deriving totally disconnected expected object area S′ from S;
- means for computing spatial properties, including position, size and convex contour, of said totally disconnected expected object area S′;
- means for associating touch objects with a subset of said totally disconnected expected object area S′; and
- means for assigning to each said touch objects an ID and said spatial properties as a touch input vector representing the placement of each touch object on the touch panel.
9. The touch system as in claim 8, further comprising:
- at least one external processor in communication with said at least one internal processor to accelerate the calculation of overlays.
Type: Application
Filed: Dec 9, 2013
Publication Date: Jun 19, 2014
Applicant: PQ Labs, Inc. (San Jose, CA)
Inventor: Fei Lu (San Jose, CA)
Application Number: 14/101,095
International Classification: G06F 3/03 (20060101);