Method and apparatus for inputting information including coordinate data
A method, computer readable medium, and apparatus for inputting information, including coordinate data, include: extracting a predetermined object from an image that includes the predetermined object above a plane; detecting motion of the predetermined object while the predetermined object is within a predetermined distance from the plane; and determining whether to input predetermined information.
This document is a continuation of U.S. application Ser. No. 09/698,031 filed on Oct. 30, 2000, and is based on Japanese patent application No. 11-309412 filed in the Japanese Patent Office on Oct. 29, 1999, the entire contents of each of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and apparatus for inputting information including coordinate data. More particularly, the present invention relates to a method and apparatus for inputting information including coordinate data of a location of a coordinate input member, such as a pen, a human finger, etc., on an image displayed on a relatively large screen.
2. Discussion of the Background
Lately, presentation systems, electronic copy boards, and electronic blackboard systems provided with a relatively large screen display device, such as a plasma display panel, a rear projection display, etc., are coming into wide use. Certain types of presentation systems also provide a touch input device disposed in front of the screen for inputting information related to the image displayed on the screen. Such a touch input device is also referred to as an electronic tablet, an electronic pen, etc.
As to such a presentation system, for example, when a user of the system touches an icon on a display screen, a touch input device detects and inputs the touching motion and the coordinates of the touched location. Similarly, when the user draws a line, the touch input device repetitively detects and inputs a plurality of coordinates as a locus of the drawn line.
As an example, Japanese Laid-Open Patent Publication No. 11-85376 describes a touch input apparatus provided with light reflecting devices disposed around a display screen, light beam scanning devices, and light detectors. The light reflecting devices have a characteristic of reflecting incident light back toward a direction close to that of the incident light. During operation of the apparatus, scanning light beams emitted by the light beam scanning devices are reflected by the light reflecting devices and then received by the light detectors. When a coordinate input member, such as a pen, a user's finger, etc., touches the surface of the screen at a location, the coordinate input member interrupts the path of the scanning light beams, and the light detectors are thereby able to detect the touched location as an absence of the scanning light beams at that location.
In this apparatus, when a certain location-detecting accuracy in a direction perpendicular to the screen is required, the scanning light beams need to be thin and to scan in a plane close enough to the screen. Meanwhile, when the surface of the screen is contorted, the contorted surface may interfere with the transmission of the scanning light beams, and consequently a coordinate input operation might be impaired. As a result, for example, a double-click operation might not be properly detected, freehand lines and characters might be erroneously detected, and so forth.
As another example, Japanese Laid-Open Patent Publication No. 61-196317 describes a touch input apparatus provided with a plurality of television cameras. In the apparatus, the plurality of television cameras detect three-dimensional coordinates of a moving object, such as a pen, as a coordinate input member. Because the apparatus detects three-dimensional coordinates, the television cameras desirably capture images of the moving object at a relatively high frame rate.
As a further example, a touch input apparatus provided with an electromagnetic tablet and an electromagnetic stylus is known. In this apparatus, the location of the stylus is detected based on electromagnetic induction between the tablet and the stylus. Therefore, the distance between the tablet and the stylus tends to be limited to a rather short distance, for example, eight millimeters; otherwise, a larger stylus or a battery-powered stylus must be used.
SUMMARY OF THE INVENTION
The present invention has been made in view of the above-discussed and other problems associated with the background methods and apparatus, and to overcome those problems. Accordingly, an object of the present invention is to provide a novel method and apparatus that can input information including coordinate data even when the surface of a display screen is contorted to a certain extent, and without using a light scanning device.
Another object of the present invention is to provide a novel method and apparatus that can input information including coordinate data using a plurality of coordinate input members, such as a pen, a human finger, a stick, a rod, a chalk, etc.
Another object of the present invention is to provide a novel method and apparatus that can input information including coordinate data with a plurality of background devices, such as a chalkboard, a whiteboard, etc., in addition to a display device, such as a plasma display panel, a rear projection display.
To achieve these and other objects, the present invention provides a method, computer readable medium, and apparatus for inputting information including coordinate data that include extracting a predetermined object from an image including the predetermined object above a plane, detecting a motion of the predetermined object while the predetermined object is within a predetermined distance from the plane, and determining whether to input predetermined information.
A more complete appreciation of the present invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, and more particularly to
The display panel 12 displays an image with, for example, a 48 by 36 inch screen (diagonally 60 inches) and 1024 by 768-pixel resolution, which is referred to as an XGA screen. For example, a plasma display panel, a rear projection display, etc., may be used as the display panel 12. Each of the first electronic camera 10 and the second electronic camera 11 implements a two-dimensional imaging device with a resolution sufficient to enable operations such as selecting an item in a menu window, drawing freehand lines, letters, etc. A two-dimensional imaging device is also referred to as an area sensor.
The two-dimensional imaging device preferably has a variable output frame rate capability. The two-dimensional imaging device also preferably has a random access capability that allows any imaging cell therein to be randomly accessed to obtain an image signal from that cell. Such a random access capability is sometimes also referred to as random addressability. As an example of such a random access two-dimensional imaging device, a complementary metal oxide semiconductor (CMOS) sensor may be utilized.
The electronic camera 10 also includes a wide-angle lens 50, which covers around 90 degrees or a wider angle, and an analog to digital converter. Likewise, the electronic camera 11 also includes a wide-angle lens 52, which covers around 90 degrees or a wider angle, and an analog to digital converter. The first electronic camera 10 is disposed at an upper corner of the display panel 12 such that the optical axis of the wide-angle lens 50 forms an angle of approximately 45 degrees with the horizontal edge of the display panel 12. The second electronic camera 11 is disposed at the other upper corner of the display panel 12 such that the optical axis of the wide-angle lens 52 forms an angle of approximately 45 degrees with the horizontal edge of the display panel 12.
Further, the optical axis of each of the electronic cameras 10 and 11 is disposed approximately parallel to the display screen surface of the display panel 12. Thus, each of the electronic cameras 10 and 11 can capture the whole display screen surface of the display panel 12. Each of the captured images is converted into digital data, and the digital image data is then transmitted to the control apparatus 2.
The control apparatus 2 also includes a local area network controller (LAN controller) 32, a LAN interface 33, a floppy disk controller (FD controller) 34, a FD drive 35, a compact disc read only memory controller (CD-ROM controller) 36, a CD-ROM drive 37, a keyboard controller 38, a mouse interface 39, a real time clock generator (RTC generator) 40, a CPU bus 41, a PCI bus 42, an internal X bus 43, a keyboard 44, and a mouse 45.
The CPU 20 executes a boot program, a basic input and output control system (BIOS) program stored in the ROM 24, an operating system (OS), application programs, etc. The main memory 21 may be structured by, e.g., a dynamic random access memory (DRAM), and is utilized as a work memory for the CPU 20. The clock generator 22 may be structured by, for example, a crystal oscillator and a frequency divider, and supplies a generated clock signal to the CPU 20, the bus controller 23, etc., to operate those devices at the clock speed.
The bus controller 23 controls data transmission between the CPU bus 41 and the internal X bus 43. The ROM 24 stores a boot program, which is executed immediately after the coordinate data input system 1S is turned on, device control programs for controlling the devices included in the system 1S, etc. The PCI bridge 25 is disposed between the CPU bus 41 and the PCI bus 42 and transmits data between the PCI bus 42 and devices connected to the CPU bus 41, such as the CPU 20, through the use of the cache memory 26. The cache memory 26 may be configured by, for example, a DRAM.
The hard disk 27 stores system software such as an operating system, a plurality of application programs, and various data for multiple users of the coordinate data input system 1S. The hard disk (HD) controller 28 implements a standard interface, such as an integrated drive electronics (IDE) interface, and transmits data between the PCI bus 42 and the hard disk 27 at a relatively high data transmission rate.
The display controller 29 converts digital letter/character data and graphic data into an analog video signal, and controls the display panel 12 of the coordinate data input apparatus 1 so as to display an image of the letters/characters and graphics thereon according to the analog video signal.
The first image processing circuit 30 receives digital image data output from the first electronic camera 10 through a digital interface, such as an RS-422 interface. The first image processing circuit 30 then executes an object extraction process, an object shape recognition process, a motion vector detection process, etc. Further, the first image processing circuit 30 supplies the first electronic camera 10 with a clock signal and an image transfer pulse via the above-described digital interface.
Similarly, the second image processing circuit 31 receives digital image data output from the second electronic camera 11 through a digital interface, such as an RS-422 interface. The second image processing circuit 31 is configured with substantially the same hardware as the first image processing circuit 30, and operates substantially the same as the first image processing circuit 30. That is, the second image processing circuit 31 also executes an object extraction process, an object shape recognition process, and a motion vector detection process, and supplies a clock signal and an image transfer pulse to the second electronic camera 11 as well.
In addition, the clock signal and the image transfer pulse supplied to the first electronic camera 10 and those signals supplied to the second electronic camera 11 are maintained in synchronization.
The LAN controller 32 controls communications between the control apparatus 2 and external devices connected to a local area network, such as an Ethernet, via the LAN interface 33 according to the protocol of the network. As an example of an interface protocol, the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standard may be used.
The FD controller 34 transmits data between the PCI bus 42 and the FD drive 35. The FD drive 35 reads and writes a floppy disk therein. The CD-ROM controller 36 transmits data between the PCI bus 42 and the CD-ROM drive 37. The CD-ROM drive 37 reads a CD-ROM disc therein and sends the read data to the CD-ROM controller 36. The CD-ROM controller 36 and the CD-ROM drive 37 may be connected with an IDE interface.
The keyboard controller 38 converts serial key input signals generated at the keyboard 44 into parallel data. The mouse interface 39 is provided with a mouse port to be connected with the mouse 45 and is controlled by mouse driver software or a mouse control program. In this example, the coordinate data input apparatus 1 functions as a data input device, and therefore the keyboard 44 and the mouse 45 may be omitted from the coordinate data input system 1S in normal operation, except during a maintenance operation for the coordinate data input system 1S. The RTC generator 40 generates and supplies calendar and time data, such as day, hour, and minute, and is battery backed up.
Now, a method for determining a location where a coordinate input member has touched on or come close to the image display surface of the display panel 12 is described.
As stated above, the first and second electronic cameras 10 and 11 are disposed such that the optical axes of the wide-angle lenses 50 and 52, i.e., the optical axes of incident lights to the cameras, are parallel to the display surface of the display panel 12. Further, the first and second electronic cameras 10 and 11 are disposed such that each of the angles of view of the electronic cameras 10 and 11 covers substantially a whole area where the coordinate input member can come close and touch the display panel 12.
In
The symbol P denotes a point where an image of the contacting point A(x, y) is formed on the CMOS image sensor 51. The point P is referred to as a projected point P of the contacting point A(x, y). The symbol h denotes a distance between the point P and the point Q. The symbol α denotes an angle which the optical axis of the wide-angle lens 50 forms with the X-line, and the symbol θ denotes an angle which the optical axis of the wide-angle lens 50 forms with a line connecting the contacting point A(x, y) and the point P.
Referring to
θ = arctan(h/f) (1)
β1 = α − θ (2)
Here, the angle α and the distance f are constant values, because these values are determined by the relative mounting positions of the wide-angle lens 50 and the CMOS image sensor 51 and by the mounting angle of the wide-angle lens 50 to the X-line at a manufacturing plant. Therefore, when the distance h is given, the angle β1 is solved. Regarding the second electronic camera 11, similar equations hold, and thus the angle β2 is solved.
After the angle β1 and the angle β2 are obtained, the coordinates of the contacting point A(x, y) are calculated by the following equations based on the principle of triangulation:
x = L × tan β2/(tan β1 + tan β2) (3)
y = x × tan β1 (4)
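For illustration only, the following Python sketch shows one way equations (1) through (4) could be evaluated; the names focal_len, alpha, h1, h2, and cam_distance are hypothetical and stand for f, α, the measured distance h of each camera, and the lens spacing L, respectively.

```python
import math

def beta_from_h(h, focal_len, alpha):
    # Equations (1) and (2): theta = arctan(h/f), beta = alpha - theta.
    theta = math.atan2(h, focal_len)
    return alpha - theta

def contact_point(h1, h2, focal_len, alpha, cam_distance):
    # cam_distance corresponds to L, the spacing between the two wide-angle lenses.
    beta1 = beta_from_h(h1, focal_len, alpha)
    beta2 = beta_from_h(h2, focal_len, alpha)
    # Equations (3) and (4): triangulate the contacting point A(x, y).
    x = cam_distance * math.tan(beta2) / (math.tan(beta1) + math.tan(beta2))
    y = x * math.tan(beta1)
    return x, y
```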
Next, a relation between the CMOS image sensor 51 and an image of the edges of the display panel 12 formed on the CMOS image sensor 51 is described. Each of the CMOS image sensors 51 and 53 has a two-dimensional array, or matrix, of imaging picture elements (pixels) or imaging cells. When the number of imaging cells in one direction differs from the number of imaging cells in the other direction, the CMOS image sensors 51 and 53 are disposed such that the side having the larger number of imaging cells is parallel to the surface of the display panel 12.
Regarding the CMOS image sensors 51 and 53, the coordinate axis along the direction having the larger number of imaging cells is represented by the Ycamera axis. The coordinate axis along the direction having the smaller number of imaging cells, i.e., the direction perpendicular to the Ycamera axis, is represented by the Xcamera axis. Thus, images of the edges or margins of the display panel 12 that are formed on the CMOS image sensors 51 and 53 become a line parallel to the Ycamera axis and perpendicular to the Xcamera axis. A projection of the surface of the display panel 12 on the CMOS image sensors 51 and 53 is formed as substantially the same line on the CMOS image sensors 51 and 53. Accordingly, such a line formed on the CMOS image sensors 51 and 53 is hereinafter referred to as “a formed line of the surface of the display panel 12,” “a projected line of the surface of the display panel 12,” or simply “the surface of the display panel 12.”
Suppose points A(x1c, y1c), B(x2c, y2c), and C(x3c, y3c) are arbitrary points on the projected line of the surface of the display panel 12. The angle δ between the Ycamera axis and a line connecting each point and the origin of the coordinate system is stated as follows:
δ1=arctan(x1c/y1c) (5)
δ2=arctan(x2c/y2c) (6)
δ3=arctan(x3c/y3c) (7)
After that, the tilted angle δ is obtained as an average value of those angles;
δ=(δ1+δ2+δ3)/3 (8)
When the surface of the display panel 12 is tilted relative to the Ycamera axis, a tilted coordinate system, which is tilted by the angle δ relative to the original coordinate system (Xcamera, Ycamera), may also be conveniently utilized to obtain a location of a coordinate input member and a motion vector thereof. The tilted coordinate system corresponds to a rotation of the original coordinate system by the angle δ. When the surface of the display panel 12 tilts clockwise, the tilted coordinate system is obtained by rotating counterclockwise, and vice versa. The relations between the original coordinate system (Xcamera, Ycamera) and the tilted coordinate system, which is denoted by (X1camera, Y1camera), are the following:
X1camera = Xcamera × cos δ + Ycamera × sin δ (9)
Y1camera = Ycamera × cos δ − Xcamera × sin δ (10)
When the surface of the display panel 12 is not tilted relative to the Ycamera axis, e.g., as a result of an adjusting operation on the electronic cameras 10 and 11 at a production factory or an installation and maintenance operation at a customer office, these coordinate conversions are not needed.
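As a purely illustrative sketch of the tilt correction of equations (5) through (10), assuming the sample points are given as (Xcamera, Ycamera) pairs:

```python
import math

def tilt_angle(points):
    # Equations (5)-(8): average the angle each sample point on the projected
    # line of the panel surface forms with the Ycamera axis.
    angles = [math.atan2(xc, yc) for (xc, yc) in points]
    return sum(angles) / len(angles)

def to_tilted(xc, yc, delta):
    # Equations (9) and (10): rotate the original (Xcamera, Ycamera) system by delta.
    x1 = xc * math.cos(delta) + yc * math.sin(delta)
    y1 = yc * math.cos(delta) - xc * math.sin(delta)
    return x1, y1
```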
With reference to
In step S102, the first image processing circuit 30 measures plural distances between the object and the projected line of the surface of the display panel 12 on the CMOS image sensor 51. For measuring a distance, the first image processing circuit 30 counts the pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 on the CMOS image sensor 51. The image-forming reduction ratio on the CMOS image sensor 51 is fixed and the pixel pitch of the CMOS image sensor 51 (i.e., the interval between imaging cells) is known. As a result, the number of pixels between two points determines the distance between the two points.
For measuring plural distances between the object and the surface of the display panel 12, the first image processing circuit 30 counts pixels as regards plural distances between the contours of the extracted object and the projected line of the surface of the display panel 12.
In step S103, the first image processing circuit 30 extracts the least number of pixels among the plural numbers of pixels counted for measuring plural distances in step S102. A symbol Nmin denotes the least number of pixels among the plural numbers of pixels. Consequently, the distance being the minimum value Nmin corresponds to a nearest point of the object to the surface of the display panel 12. The first image processing circuit 30 then determines whether the minimum value Nmin is smaller than a predetermined number M0. When the minimum value Nmin is smaller than the predetermined number M0, i.e., YES in step S103, the process proceeds to step S104, and when the minimum value Nmin is not smaller than the predetermined number M0, i.e., NO in step S103, the process returns to step S101.
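A minimal sketch of the proximity test of steps S102 and S103 follows; it assumes, for illustration, that the contour points and the projected panel line are given as paired pixel positions measured along the Xcamera direction.

```python
def min_pixel_count(contour_px, panel_line_px):
    # Step S102: count the pixels between each contour point and the projected
    # line of the panel surface, and keep the smallest count (Nmin).
    return min(abs(c - p) for c, p in zip(contour_px, panel_line_px))

def object_is_near(contour_px, panel_line_px, m0):
    # Step S103: continue to motion-vector calculation only when Nmin < M0.
    return min_pixel_count(contour_px, panel_line_px) < m0
```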
In step S104, the first image processing circuit 30 calculates motion vectors regarding predetermined plural points on the extracted contours of the object including the nearest point to the display panel 12. For this calculation, the first image processing circuit 30 uses the same frame image data used for extracting the contours and the next frame of image data received from the first electronic camera 10.
In this example, the first image processing circuit 30 obtains optical flows, i.e., velocity vectors, by calculating a rate of temporal change of a pixel image density. The first image processing circuit 30 also obtains a rate of spatial change of image densities of pixels in the vicinity of the pixel used for calculating the rate of temporal change of the pixel image density. The motion vectors are expressed in the coordinate system (Xcamera, Ycamera), whose axes correspond to the projected line of the surface of the display panel 12 on the CMOS image sensor 51 (i.e., the Ycamera axis) and the direction perpendicular to the surface of the display panel 12 (i.e., the Xcamera axis).
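The specification does not prescribe a particular optical-flow algorithm; the following sketch is one common gradient-based (Lucas-Kanade style) estimate of a motion vector from the temporal and spatial rates of change of pixel densities in two consecutive frames, with the window handling simplified for illustration.

```python
import numpy as np

def motion_vector(prev, curr, y, x, win=3):
    """Estimate (Vx, Vy) at pixel (y, x) from two consecutive frames,
    assuming the window lies entirely inside the image."""
    ys, xs = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    gy, gx = np.gradient(prev.astype(float))        # spatial change of density
    gt = curr.astype(float) - prev.astype(float)    # temporal change of density
    a = np.stack([gx[ys, xs].ravel(), gy[ys, xs].ravel()], axis=1)
    b = -gt[ys, xs].ravel()
    (vx, vy), *_ = np.linalg.lstsq(a, b, rcond=None)
    return vx, vy
```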
In this example, the nearest point of the pen to the display panel 12, which is marked by the black dot at the tip of the pen in
Referring back to
In step S106, the CPU 20 determines whether the extracted object, such as the pen in
In step S108, the CPU 20 measures the distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object. When the extracted object is physically soft, such as a human finger, the extracted object may contact over an area rather than at a point. In such a case, the contacting point A(x, y) can be replaced with the center of the contacting area. In addition, as stated earlier, the term contacting point A(x, y) applies not only to a state in which the object contacts the display panel 12, but also to a state in which the object is adjacent to the display panel 12.
A range from the optical axis crossing point Q to an end of the CMOS image sensor 51 contains a fixed number of pixels (denoted by N1), which depends only upon the relative mounting positions of the wide-angle lens 50 and the CMOS image sensor 51.
On the other hand, a range from the point P to the end of the CMOS image sensor 51 contains a variable number of pixels (denoted by N2), which varies depending upon the location of the contacting point A(x, y) of the object. Therefore, the range between the point Q and the point P contains |N1 − N2| pixels, and the distance between the point Q and the point P in the Ycamera direction, i.e., the distance h, is determined as |N1 − N2| × the pixel pitch.
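In code form, this measurement reduces to the following sketch; pixel_pitch is the cell spacing of the CMOS image sensor 51, and the function and argument names are illustrative.

```python
def distance_h(n1, n2, pixel_pitch):
    # N1: fixed pixel count from the optical-axis crossing point Q to the sensor edge.
    # N2: pixel count from the projected point P to the same sensor edge.
    return abs(n1 - n2) * pixel_pitch
```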
Referring back again to
In step S111, the CPU 20 solves the coordinates x and y of the object on the display panel 12 by using the equations (3) and (4), with known quantities L, and the solved angles β1 and β2.
In step S109, the CPU 20 determines whether the object is still within the predetermined region above the display panel 12 using the trace data of motion vector components Vx of the object. When the object is in the predetermined region, i.e., YES in step S109, the process returns to step S104 to obtain motion vectors again, and when the object is out of the predetermined region, i.e., NO in step S109, the process returns to step S101.
As stated above, for solving β1, β2, x and y by using equations (1), (2), (3) and (4), the calculating operations are executed by the CPU 20. However, the angles β1 and β2 may also be solved by the first image processing circuit 30 and the second image processing circuit 31, respectively, and the obtained β1 and β2 then transferred to the CPU 20 to solve the coordinates x and y.
In addition, the CPU 20 may also execute the above-described contour extracting operation in step S101, the distance measuring operation in step S102, and the least-number extracting and comparing operations in steps S103 and S104 in place of the first image processing circuit 30. When the CPU 20 executes these operations, the hard disk 27 may initially store the program codes, and the program codes are loaded into the main memory 21 for execution each time the system 1S is booted up.
When the coordinate data input system 1S is in a writing input mode or a drawing input mode, the CPU 20 generates display data according to the obtained plural sets of coordinates x and y of the object, i.e., the locus data of the object, and sends the generated display data to the display controller 29. Thus, the display controller 29 displays an image corresponding to the locus of the object on the display panel 12 of the coordinate data input apparatus 1.
A certain type of display panel, such as a rear projection display, has a relatively elastic surface, such as a plastic sheet screen.
Accordingly, when the method of
With reference to
In step S202, the first image processing circuit 30 or the CPU 20 first extracts geometrical features of the shape of the extracted contours of the object. For extracting geometrical features, the first image processing circuit 30 or the CPU 20 determines the position of the barycenter of the contours of the object, then measures distances from the barycenter to plural points on the extracted contours in all radial directions, like the spokes of a wheel. Then, the CPU 20 extracts geometrical features of the contour shape of the object based on the relations between each direction and the respective distance. Japanese Laid-Open Patent Publication No. 8-315152 may also be referred to for executing the above-stated feature extraction method.
After that, the CPU 20 compares the extracted geometrical features of the contour shape of the object with features of cataloged shapes of potential coordinate input members one after the other. The shapes of potential coordinate input members may be stored in the ROM 24 or the hard disk 27 in advance.
When the operator of the coordinate data input system 1S points to an item on a menu or an icon, or draws a line, etc., with a coordinate input member, the axis of the coordinate input member may tilt in any direction at various tilting angles. Therefore, the CPU 20 may rotate the contour shape of the object by predetermined angles to compare it with the cataloged shapes.
Instead of such a rotating operation of the contour shape, the shapes of potential coordinate input members may be rotated at plural angles in advance, and the rotated shapes stored in the ROM 24 or the hard disk 27. Thus, the real-time rotating operation of the contour shape is not needed; consequently, execution time for the coordinate data inputting operation is further saved.
By this method, not all the cataloged shapes of potential coordinate input members are required to be stored in the ROM 24 or the hard disk 27; therefore storage capacity thereof is saved. As an example, the axial symmetry may be determined based on distances from the barycenter to plural points on the extracted contours.
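The following sketch illustrates one possible realization of the barycenter-and-radial-distance comparison described above; handling rotation by circularly shifting the signature is an assumption made here for brevity, not a requirement of the specification.

```python
import numpy as np

def radial_signature(contour, n_dirs=36):
    """Distances from the barycenter to the contour, sampled over n_dirs directions."""
    pts = np.asarray(contour, dtype=float)     # (N, 2) contour points
    center = pts.mean(axis=0)                  # barycenter of the contour
    offsets = pts - center
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    dists = np.linalg.norm(offsets, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
    sig = np.zeros(n_dirs)
    for b, d in zip(bins, dists):
        sig[b] = max(sig[b], d)
    return sig / (sig.max() if sig.max() > 0 else 1.0)   # scale-normalized

def matches_catalog(sig, catalog, tol=0.15):
    """Compare against cataloged signatures at every rotation (circular shift)."""
    return any(np.abs(np.roll(sig, s) - ref).mean() < tol
               for ref in catalog for s in range(len(sig)))
```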
Referring back to
In step S204, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 with respect to each of the plural distances. A distance between two points is obtained as the product of the pixel pitch of the CMOS image sensor 51 and the number of pixels between the points.
In step S205, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels, which is denoted by Nmin, among the plural numbers of pixels counted in step S204, and determines whether the minimum value Nmin is smaller than a predetermined number M0. When the minimum value Nmin is smaller than the predetermined number M0, i.e., YES in step S205, the process proceeds to step S206, and when the minimum value Nmin is not smaller than the predetermined number M0, i.e., NO in step S205, the process returns to step S201.
In step S206, the first image processing circuit 30 or the CPU 20 calculates motion vectors (Vx, Vy) regarding predetermined plural points on the extracted contours of the object including the nearest point to the display panel 12. The component Vx is a vector component along the Xcamera axis, i.e., a direction perpendicular to the projected line of the surface of the display panel 12, and the component Vy is a vector component along the Ycamera axis, i.e., a direction along the surface of the display panel 12. For calculating the motion vectors, the first image processing circuit 30 or the CPU 20 uses two consecutive frames and utilizes the optical flow method stated above.
In step S207, the CPU 20 successively stores motion vector components along the direction of Xcamera (i.e., Vx) of the calculated motion vectors of frames in the main memory 21 as trace data. In step S208, the CPU 20 determines whether the extracted object has made an attempt to input coordinates on the display panel 12 based on the trace data of motion vectors. When the object has made an attempt to input coordinates, i.e., YES in step S209, the process branches to step S211, and when the object has not made such an attempt, i.e., NO in step S209, the process proceeds to step S210.
In step S210, the CPU 20 determines whether the object is within a predetermined region above the display panel 12 using the trace data of motion vector components Vx of the object. When the object is in the predetermined region, i.e., YES in step S210, the process returns to step S206 to obtain new motion vectors again, and when the object is out of the predetermined region, i.e., NO in step S210, the process returns to step S201.
In step S211, the first image processing circuit 30 or the CPU 20 measures a distance h on the CMOS image sensor 51 between the optical axis crossing point Q and a projected point P of a contacting point A(x, y). In step S212, with reference to
In step S213, referring to
As described, the CPU 20 only inputs coordinates of an object that coincides with one of cataloged shapes of potential coordinate input members. Accordingly, the coordinate data input system 1S can prevent an erroneous or unintentional inputting operation, e.g., inputting coordinates of an operator's arm, head, etc.
With reference to
In step S303, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 regarding each of the distances. A distance between two points is obtained as the product of the pixel pitch of the CMOS image sensor 51 and the number of pixels between the points.
In step S304, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels Nmin among the plural numbers of pixels counted in step S303, and determines whether the minimum value Nmin is smaller than a predetermined number M0. When the minimum value Nmin is smaller than the predetermined number M0, i.e., YES in step S304, the process proceeds to step S305, and when the minimum value Nmin is not smaller than the predetermined number M0, i.e., NO in step S304, the process returns to step S301.
In step S305, the first image processing circuit 30 or the CPU 20 calculates motion vectors (Vx, Vy) regarding predetermined plural points on the extracted contours of the object including the nearest point to the display panel 12. The component Vx is a vector component along the Xcamera axis, i.e., a direction perpendicular to the projected line of the surface of the display panel 12, and the component Vy is a vector component along the Ycamera axis, i.e., a direction along the surface of the display panel 12. For calculating the motion vectors, the first image processing circuit 30 or the CPU 20 uses two consecutive frames of image data and utilizes the optical flow method stated above.
In step S306, the CPU 20 successively stores motion vector components along the direction Xcamera, i.e., component Vx, of plural frames in the main memory 21 as trace data.
In step S307, the CPU 20 determines whether the moving direction of the extracted object has been reversed from an advancing motion toward the display panel 12 to a leaving motion away from the panel 12 based on the trace data of motion vectors. When the moving direction of the extracted object has been reversed, i.e., YES in step S307, the process branches to step S309, and when the moving direction has not been reversed, i.e., NO in step S307, the process proceeds to step S308.
In step S308, the first image processing circuit 30 or the CPU 20 determines whether the object is within a predetermined region above the display panel 12 using the trace data of motion vector components Vx of the object. When the object is in the predetermined region, i.e., YES in step S308, the process returns to step S305 to obtain new motion vectors again, and when the object is out of the predetermined region, i.e., NO in step S308, the process returns to step S301.
In step S309, the first image processing circuit 30 or the CPU 20 measures a distance h on the CMOS image sensor 51 between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object. As the projected point P, for example, the starting point of the centermost motion vector among the plural motion vectors whose direction has been reversed is used.
In step S310, referring to
In step S311, referring to
Referring to
In step S403, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 for each of the distances. A distance between two points is obtained as the product of the pixel pitch of the CMOS image sensor 51 and the number of pixels between the points.
In step S404, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels Nmin among the plural numbers of pixels counted in step S403, and determines whether the minimum value Nmin is smaller than a predetermined number M0. When the minimum value Nmin is smaller than the predetermined number M0, i.e., YES in step S404, the process proceeds to step S405, and when the minimum value Nmin is not smaller than the predetermined number M0, i.e., NO in step S404, the process returns to step S401.
In step S405, the first image processing circuit 30 or the CPU 20 calculates motion vectors (Vx, Vy) regarding predetermined plural points on the extracted contours of the object including the nearest point to the display panel 12. Vx is a vector component along the Xcamera axis, i.e., a direction perpendicular to the projected line of the surface of the display panel 12, and Vy is a vector component along the Ycamera axis, i.e., a direction along the surface of the display panel 12. For calculating the motion vectors, the first image processing circuit 30 or the CPU 20 uses two consecutive frames and utilizes the optical flow method stated above.
In step S406, the CPU 20 successively stores motion vector components along the direction Xcamera of the calculated vectors, i.e., the component Vx, in the main memory 21 as trace data.
In step S407, the CPU 20 determines whether the vector component Vx, which is perpendicular to the plane of the display panel 12, has become zero after an advancing motion toward the display panel 12. When the component Vx of the motion vector has become practically zero, i.e., YES in step S407, the process branches to step S409, and when the component Vx has not yet become zero, i.e., NO in step S407, the process proceeds to step S408.
In step S408, the CPU 20 determines whether the object is located within a predetermined region above the display panel 12 using the trace data of motion vector components Vx of the object. When the object is located in the predetermined region, i.e., YES in step S408, the process returns to step S405 to obtain new motion vectors again, and when the object is out of the predetermined region, i.e., NO in step S408, the process returns to step S401.
In step S409, the CPU 20 determines that a coordinate inputting operation has been started, and transits the state of the coordinate data input system 1S to a coordinate input state. In step S410, the first image processing circuit 30 or the CPU 20 measures a distance h between the optical axis crossing point Q and the projected point P of a contacting point A(x, y) of the object on the CMOS image sensor 51.
In step S411, referring to
In step S413, the CPU 20 determines whether the motion vector component Vy at the point P has changed while the other motion vector component Vx is zero. In other words, the CPU 20 determines whether the object has moved in any direction along the surface of the display panel 12. When the motion vector component Vy has changed while the other motion vector component Vx is zero, i.e., YES in step S413, the process returns to step S410 to obtain the coordinates x and y of the object at the moved location. When the motion vector component Vy has not changed, i.e., NO in step S413, the process proceeds to step S414.
Further, the CPU 20 may also evaluate the motion vector component Vy under a condition that the other component Vx is a positive value, which represents a direction approaching the display panel 12, in addition to the above-described condition that the component Vx is zero.
In step S414, the CPU 20 determines whether the motion vector component Vx regarding the point P has become a negative value, which represents a direction leaving from the display panel 12. When the motion vector component Vx has become a negative value, i.e., YES in step S414, the process proceeds to step S415, and if NO, the process returns to step S410. In step S415, the CPU 20 determines that the coordinate inputting operation has been completed, and terminates the coordinate input state of the coordinate data input system 1S.
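Condensed into code, the decisions of steps S407 through S415 behave like the small state machine sketched below; the sign convention (positive Vx approaching the panel, negative Vx leaving it) follows the description above, and the class and threshold names are illustrative.

```python
class TouchTracker:
    """Idle -> touching (Vx reaches zero after approaching) -> released (Vx < 0)."""

    def __init__(self, eps=1e-3):
        self.eps = eps
        self.touching = False
        self.approaching = False

    def update(self, vx, vy):
        """Feed one motion-vector sample; returns 'down', 'move', 'up' or None."""
        if not self.touching:
            if vx > self.eps:
                self.approaching = True            # advancing toward the panel
            elif self.approaching and abs(vx) <= self.eps:
                self.touching = True               # steps S407/S409: input started
                return "down"
            return None
        if vx < -self.eps:                         # step S414: leaving the panel
            self.touching = False
            self.approaching = False
            return "up"                            # step S415: input completed
        if abs(vy) > self.eps:                     # step S413: sliding along the panel
            return "move"
        return None
```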
Thus, the CPU 20 can generate display data according to the coordinate data obtained during the above-described coordinate input state, and transmit the generated display data to the display controller 29 to display an image of the input data on the display panel 12.
When a coordinate input member is within a predetermined distance, the frame rate output from each of the CMOS image sensors 51 and 53 is increased to capture the motion of the coordinate input member in further detail. When the coordinate input member is out of the predetermined distance, the output frame rate is decreased to reduce the loads on the other devices in the coordinate data input system 1S, such as the first image processing circuit 30, the second image processing circuit 31, the CPU 20, etc.
The frame rate of each of the first and second electronic cameras 10 and 11, i.e., the frame rate of each of the CMOS image sensors 51 and 53, is capable of being varied as necessary between at least two frame rates, one referred to as a high frame rate and the other referred to as a low frame rate. The data size per unit time input to the first image processing circuit 30 and the second image processing circuit 31 varies depending on the frame rate of the image data. When the coordinate data input system 1S is powered on, the low frame rate is initially selected as the default frame rate.
Referring now to
In step S503, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 regarding each of the distances. A distance between two points is obtained as the product of the pixel pitch of the CMOS image sensor 51 and the number of pixels between the points.
In step S504, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels Nmin among the plural numbers of pixels counted in step S503, and determines whether the minimum value Nmin is smaller than a first predetermined number M1. When the minimum value Nmin is smaller than the first predetermined number M1, i.e., YES in step S504, the process proceeds to step S505, and when the minimum value Nmin is not smaller than the first predetermined number M1, i.e., NO in step S504, the process returns to step S501.
The first predetermined number M1 in step S504 is larger than a second predetermined number M0, which is used in the following steps for starting the trace of motion vector data.
In step S505, the first image processing circuit 30 sends a command to the first electronic camera 10 to request increasing the output frame rate of the CMOS image sensor 51. Such a command for switching the frame rate, i.e., from the low frame rate to the high frame rate or from the high frame rate to the low frame rate, is transmitted through a cable that also carries image data. When the first electronic camera 10 receives the command, the first electronic camera 10 controls the CMOS image sensor 51 to increase the output frame rate thereof. As an example for increasing the output frame rate of the CMOS image sensor 51, the charge time of each of photoelectric conversion devices, i.e., the imaging cells, in the CMOS image sensor 51 may be decreased.
In step S506, the CPU 20 determines whether the object is in a second predetermined distance from the display panel 12 to start a tracing operation of motion vectors of the object. In other words, the CPU 20 determines if the minimum value Nmin is smaller than the second predetermined number M0, which corresponds to the second predetermined distance, and if YES, the process proceeds to step S507, and if No, the process branches to step S508.
In step S507, the CPU 20 traces the motion of the object and generates coordinate data of the object according to the traced motion vectors. As stated earlier, the second predetermined number M0 is smaller than the first predetermined number M1; therefore, the spatial range for tracing motion vectors of the object is smaller than the spatial range for outputting image data with the high frame rate from the CMOS image sensor 51.
In step S508, the first image processing circuit 30 determines whether the minimum value Nmin is still smaller than the first predetermined number M1, i.e., the object is still in the range of the first predetermined number M1. When the minimum value Nmin is still smaller than the first predetermined number M1, i.e., YES in step S508, the process returns to step S506, and when the minimum value Nmin is no longer smaller than the first predetermined number M1, i.e., NO in step S508, the process proceeds to step S509.
In step S509, the first image processing circuit 30 sends a command to the first electronic camera 10 to request decreasing the output frame rate of the CMOS image sensor 51, and then the process returns to the step S501. Receiving the command, the first electronic camera 10 controls the CMOS image sensor 51 to decrease again the output frame rate thereof.
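As a rough sketch, the frame-rate switching of steps S504 through S509 amounts to the hysteresis below; camera.set_frame_rate is an assumed interface, since the specification only states that a switching command is sent over the cable that also carries image data.

```python
def update_frame_rate(camera, n_min, m1, high_rate_active):
    # Raise the rate when the object comes within the range corresponding to M1
    # pixels (step S505); lower it again when the object leaves that range (step S509).
    if n_min < m1 and not high_rate_active:
        camera.set_frame_rate("high")
        return True
    if n_min >= m1 and high_rate_active:
        camera.set_frame_rate("low")
        return False
    return high_rate_active
```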
In the above-described operational steps, the second electronic camera 11 and the second image processing circuit 31 operate substantially the same as the first electronic camera 10 and the first image processing circuit 30 operate.
In this example, while the coordinate input member is at a distance from the display panel 12, the first electronic camera 10 and the second electronic camera 11 operate at the low frame rate and output a relatively small quantity of image data to the other devices. Consequently, the power consumption of the coordinate data input system 1S is decreased.
The pixels in each of the CMOS image sensors 51 and 53 can be randomly accessed by pixel, i.e., the pixels in the CMOS image sensors 51 and 53 can be randomly addressed to output the image signal thereof. This random accessibility enables the above-stated output image area limitation. When the coordinate data input system 1S is powered on, the output image area is set to cover a region surrounded by a whole horizontal span of and a predetermined altitude range above the display panel 12 as a default image area.
Referring now to
In step S603, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 for each of the distances. A distance between two points is obtained as the product of the pixel pitch of the CMOS image sensor 51 and the number of pixels between the two points.
In step S604, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels Nmin among the plural numbers of pixels counted in step S603, and determines whether the minimum value Nmin is smaller than a predetermined number K. When the minimum value Nmin is smaller than the predetermined number K, i.e., YES in step S604, the process proceeds to step S605, and when the minimum value Nmin is not smaller than the predetermined number K, i.e., NO in step S604, the process returns to step S601.
Referring back to
Such a command for limiting the output image area is transmitted through a common cable that carries image data. When the first electronic camera 10 receives the command, the first electronic camera 10 controls the CMOS image sensor 51 so as to limit the output image area thereof.
Referring back to
In step S607, the first image processing circuit 30 sends a command to the first electronic camera 10 to limit the output image area of the CMOS image sensor 51 in the distance λ around the moved location ym1 of the object as illustrated in
In step S608, the CPU 20 determines whether the object is within a predetermined distance from the display panel 12 to start a tracing operation of motion vectors of the object. In other words, the CPU 20 determines if the minimum value Nmin is smaller than the predetermined number M0, which corresponds to the predetermined distance, and if YES in step S608, the process proceeds to step S609, and if No in step S608, the process branches to step S610.
In step S609, the CPU 20 traces motion vectors of the object, and inputs coordinate data of the object according to traced motion vectors.
In step S610, the CPU 20 determines whether the object is still within the predetermined altitude K above the display panel 12 for outputting image data limited in the range 2λ. When the object is within the predetermined altitude K, i.e., YES in step S610, the process returns to step S608, and when the object is no longer within the predetermined altitude K, i.e., NO in step S610, the process proceeds to step S611.
In step S611, the first image processing circuit 30 sends a command to the first electronic camera 10 to expand the output image area of the CMOS image sensor 51 to cover the whole area of the display panel 12, and then the process returns to step S601. When the first electronic camera 10 receives the command, the first electronic camera 10 controls the CMOS image sensor 51 to expand the output image area to cover the whole area of the display panel 12, i.e., the same state as when the coordinate data input system 1S is turned on.
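A compact sketch of the output-area limitation of steps S605 through S611 follows; camera.set_window is an assumed interface, with the window expressed along the panel (Ycamera) direction and the altitude K perpendicular to the panel.

```python
def limit_output_area(camera, ym, lam, panel_width, k):
    # ym: detected object location along the panel; lam: half-width of the window.
    # Pass ym=None to restore the default area covering the whole panel width.
    if ym is None:
        camera.set_window(0.0, panel_width, k)               # step S611
    else:
        left = max(0.0, ym - lam)
        right = min(panel_width, ym + lam)
        camera.set_window(left, right, k)                    # steps S605/S607
```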
In the above-described operational steps, the second electronic camera 11 and the second image processing circuit 31 operate substantially the same as the first electronic camera 10 and the first image processing circuit 30 operate.
Present-day large screen display devices on the market, such as a plasma display panel (PDP) or a rear projection display, generally have a 40-inch to 70-inch screen with 1024 by 768-pixel resolution, which is known as an XGA screen. To take full advantage of such performance in a coordinate data input system, image sensors such as the CMOS image sensors 51 and 53 desirably have about 2000 imaging cells (pixels) in one direction. Against this backdrop, the following examples according to the present invention are configured to further reduce the cost of a coordinate data input system.
The linear sensor camera may also be referred to as a line sensor camera, a one-dimensional sensor camera, a 1-D camera, etc., and the area sensor camera may also be referred to as a video camera, a two-dimensional camera, a two-dimensional electronic camera, a 2-D camera, a digital still camera, etc.
Each of the first linear sensor camera 70 and the second linear sensor camera 71 includes a wide-angle lens, which covers 90 degrees or more, and a charge coupled device (CCD) linear image sensor. The first linear sensor camera 70 and the second linear sensor camera 71 output image data as analog signals. The CCD linear image sensor is provided with, for example, 2000 imaging cells (pixels), i.e., photoelectric converters, such as photodiodes. Thus, the first linear sensor camera 70 and the second linear sensor camera 71 have sufficient image resolution for repetitively reading an image on an XGA screen display in the direction along the array of the imaging cells.
Further, the two linear sensor cameras are disposed with an appropriate crossing angle between their optical axes, and therefore enable inputting various information including two-dimensional coordinates, such as information on a selecting operation of an item in a menu window, a drawing operation of freehand lines and letters, etc.
The area sensor camera 72 includes a wide-angle lens, which covers 90 degrees or more, a two-dimensional CMOS image sensor, and an analog to digital converter. The two-dimensional CMOS image sensor has enough imaging cells and a sufficient output frame rate to enable recognizing the motion of a coordinate input member. As the two-dimensional CMOS image sensor, for example, a sensor having 640 by 480 imaging cells, which is referred to as VGA resolution, may be used. The area sensor camera 72 outputs image data as a digital signal, the data being converted by the embedded analog to digital converter.
Each of the first linear sensor camera 70, the second linear sensor camera 71, and the area sensor camera 72 includes a smaller number of imaging pixels compared to the two-dimensional image sensor used in the coordinate data input system 1S of
The first linear sensor camera 70 and the area sensor camera 72 are disposed at an upper left corner of the display panel 73 such that the optical axis of each of their wide-angle lenses forms an angle of approximately 45 degrees with a horizontal edge of the display panel 73. The second linear sensor camera 71 is disposed at an upper right corner of the display panel 73, such that the optical axis of its wide-angle lens forms an angle of approximately 45 degrees with a horizontal edge of the display panel 73. Further, the optical axis of each of the cameras 70, 71 and 72 is disposed approximately parallel to the display surface of the display panel 73. Thus, each of the cameras 70, 71 and 72 can capture the whole display screen area of the display panel 73, and transmit the captured image data to the control apparatus 61.
The display panel 73 displays an image with, for example, a 48 by 36 inch screen and 1024 by 768-pixel resolution. For example, a plasma display panel, a rear projection liquid crystal display, a rear projection CRT display, etc., may be used as the display panel 73.
The frame 74 preferably has a surface of a material with a low optical reflection coefficient, such as black painted or plated metal or black resin. The frame 74 is mounted along the left side, bottom, and right side circumferences of the display panel 73. In the direction perpendicular to the surface of the display panel 73, the frame 74 protrudes above the surface of the display panel 73. The amount of the protrusion may be equal to or greater than the range covered by the angle of view of the first linear sensor camera 70 and the second linear sensor camera 71 in the direction perpendicular to the surface of the display panel 73.
Accordingly, when no coordinate input member exists in the vicinity of the surface of the display panel 73, the first linear sensor camera 70 and the second linear sensor camera 71 capture the frame 74 and output image data thereof, i.e., black image data, respectively.
The control apparatus 61 also includes a local area network (LAN) controller 32, a LAN interface 33, a floppy disk (FD) controller 34, a FD drive 35, a compact disc read only memory (CD-ROM) controller 36, a CD-ROM drive 37, a keyboard controller 38, a mouse interface 39, a real time clock (RTC) generator 40, a CPU bus 41, a PCI bus 42, an internal X bus 43, a keyboard 44, and a mouse 45.
In
Referring to
The second image processing circuit 91 includes an analog to digital converting circuit, and receives the analog image signal output from the first linear sensor camera 70 via a coaxial cable. Then, the second image processing circuit 91 detects a linear (one-dimensional) location of an object based on the received image signal. Further, the second image processing circuit 91 supplies the first linear sensor camera 70 with a clock signal and an image transfer pulse via a digital interface, such as an RS-422 interface.
The third image processing circuit 92 is configured with substantially the same hardware as the second image processing circuit 91, and operates substantially the same as the second image processing circuit 91 operates. That is, the third image processing circuit 92 includes an analog to digital converting circuit, and receives the analog image signal output from the second linear sensor camera 71 via a coaxial cable. Then, the third image processing circuit 92 detects a linear location of the object based on the image signal received from the second linear sensor camera 71. The third image processing circuit 92 also supplies the second linear sensor camera 71 with a clock signal and an image transfer pulse via a digital interface, such as an RS-422 interface.
In addition, the clock signal and the image transfer pulse supplied to the first linear sensor camera 70 and those supplied to the second linear sensor camera 71 are maintained in synchronization.
The PEDESTAL LEVEL of the waveform corresponds to the output voltage of a captured image of the black frame 74. A positive pulse in the waveform corresponds to a captured image of a coordinate input member having a relatively high optical reflection coefficient, e.g., white, red, gray, etc. Lighting fixtures and/or sunlight flooding in from windows irradiate both the black frame 74 and a coordinate input member; however, the black frame 74 reflects little light while the coordinate input member reflects more light, and thereby the linear CCD image sensors in the linear sensor cameras 70 and 71 generate such a waveform having a pulse.
The height of the pulse is proportional to the optical reflection coefficient of the coordinate input member. Further, the height and width of the pulse are affected by the size of the coordinate input member and its distance from the first linear sensor camera 70 and the second linear sensor camera 71. For example, when the coordinate input member is thin and located far from the first linear sensor camera 70 and the second linear sensor camera 71, the pulse in the output voltage waveform generally becomes shorter and narrower.
Furthermore, the height and width of the pulse are affected by the location of the coordinate input member in the direction perpendicular to the surface of the display panel 73. For example, when the coordinate input member is contacting the display panel 73, the pulse appears with its maximum height and width. As the coordinate input member moves away from the display panel 73, the pulse becomes shorter and narrower. When the coordinate input member moves out of the angle of view of the first linear sensor camera 70 and the second linear sensor camera 71, the pulse disappears.
The alternate long and short dash line denoted by THRESHOLD LEVEL represents a threshold voltage used for discriminating or slicing a pulse portion of the waveform signal. When a pulse portion of the signal is above the threshold level, the location of the peak of the pulse along the time axis is utilized for identifying the location of the coordinate input member on the display panel 73.
As described, the height and width of the pulse are affected by the various factors noted above; therefore, the threshold level may be determined experimentally. Further, the threshold level may be readjusted according to the illumination of the room in which the coordinate data input system 60S is installed for use.
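As a minimal sketch of the slicing operation described above, the following function finds the peak of a pulse that exceeds the threshold level in a sampled sensor waveform. The function name, the array representation of the waveform, and the parameters are illustrative and are not taken from the patent.

```python
import numpy as np

def find_pulse_peak(samples, pedestal, threshold):
    """Return the index of the pulse peak above the threshold, or None.

    samples:   1-D array of output voltages from the linear CCD image sensor
    pedestal:  output voltage for the black frame 74 (PEDESTAL LEVEL)
    threshold: slicing level for discriminating a pulse (THRESHOLD LEVEL)
    """
    above = samples > threshold
    if not np.any(above):
        return None                      # no coordinate input member detected
    # location of the peak along the sensor axis, ignoring sub-threshold samples
    return int(np.argmax(np.where(above, samples, pedestal)))
```

The returned index corresponds to the location of the peak along the time axis of the waveform, which is then used to identify the location of the coordinate input member.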
Referring back to
The above-stated points P and Q, and distance h substantially correspond to those symbols shown in
Similarly, the third image processing circuit 92 detects a peak of a pulse in an image signal output from the CCD linear image sensor of the second linear sensor camera 71 as a projected point P of the contacting point of the coordinate input member. Then, the third image processing circuit 92 measures a distance h between the optical axis crossing point Q of the second linear sensor camera 71 and the detected point P on the CCD linear image sensor. Accordingly, the angle β2 is also obtained. In addition, the distance L, which is the distance between the wide-angle lenses of the first linear sensor camera 70 and the second linear sensor camera 71, is known. Finally, a contacting point A(x, y) of the coordinate input member is solved.
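For illustration only, a standard two-camera triangulation consistent with this description is sketched below. It assumes the two wide-angle lenses lie on a common axis separated by the distance L, with β1 and β2 measured between that axis and the respective lines of sight to the contacting point; the patent's own equation (3) may be expressed differently.

```python
import math

def triangulate(beta1, beta2, baseline_l):
    """Hypothetical triangulation of the contacting point A(x, y).

    Assumes the lenses of the two linear sensor cameras lie on the x axis
    separated by baseline_l, with beta1 and beta2 measured between that
    axis and the respective lines of sight.  This is generic two-camera
    triangulation, not necessarily the exact form of equation (3).
    """
    t1, t2 = math.tan(beta1), math.tan(beta2)
    x = baseline_l * t2 / (t1 + t2)
    y = x * t1
    return x, y
```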
First, the area sensor camera 72 limits the image area in the direction perpendicular to the display panel 73 so that, as necessary, only image data within a predetermined distance from the display panel 73 is output to the first image processing circuit 90. In other words, the area sensor camera 72 clips an upper and/or lower portion of the analog image signal output from its CMOS area sensor. Then, the area sensor camera 72 converts the analog image signal of the remaining portion into digital data, and sends out the digital image data as frame image data to the first image processing circuit 90.
With reference to
In step S702, the first image processing circuit 90 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 73. For measuring those distances, the first image processing circuit 90 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 73 for each of the measuring distances. A pixel pitch of the CMOS image sensor is known, and therefore the number of pixels between two points determines the distance between the two points.
In step S703, the first image processing circuit 90 or the CPU 20 extracts the smallest pixel count, denoted Nmin, among the pixel counts obtained in step S702, and determines whether the minimum value Nmin is smaller than a predetermined number M0. When Nmin is smaller than M0, i.e., YES in step S703, the process proceeds to step S704; when Nmin is not smaller than M0, i.e., NO in step S703, the process returns to step S701.
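A minimal sketch of steps S702 and S703 follows. It assumes the projected line of the panel surface appears as a single pixel row in the frame image; the pixel pitch value and the helper name are illustrative, not values from the patent.

```python
PIXEL_PITCH_MM = 0.01   # illustrative CMOS pixel pitch; not a value from the patent

def nearest_distance(contour_rows, panel_row, m0):
    """For each contour point, count the pixels between it and the projected
    line of the display panel surface (assumed here to be one row index),
    then test the minimum count Nmin against the threshold M0."""
    pixel_counts = [abs(row - panel_row) for row in contour_rows]
    n_min = min(pixel_counts)
    distance_mm = n_min * PIXEL_PITCH_MM    # pixel pitch is known, so counts give distance
    return n_min, distance_mm, n_min < m0
```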
In step S704, the first image processing circuit 90 or the CPU 20 calculates motion vectors for predetermined plural points on the extracted contours of the object, including the point nearest to the display panel 73, which corresponds to the minimum value Nmin. For the calculation, the first image processing circuit 90 or the CPU 20 uses the same frame image data used for extracting the contours and the next frame image data received from the area sensor camera 72.
In this example, to calculate the motion vectors, the first image processing circuit 90 or the CPU 20 first obtains optical flows, i.e., velocity vectors, by calculating the rate of temporal change of the image density at a pixel and the rate of spatial change of the image density of the pixels surrounding that pixel. The motion vectors are expressed in the coordinate system (Xcamera, Ycamera), in which Ycamera runs along the line of the surface of the display panel 73 focused on the CMOS area sensor and Xcamera is the coordinate perpendicular to the display panel 73.
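The following is a minimal gradient-based optical-flow sketch of the computation just described, in the spirit of a Lucas-Kanade estimate at a single contour point. The window size, frame types, axis mapping, and function name are assumptions for illustration.

```python
import numpy as np

def optical_flow_at(prev_frame, next_frame, x, y, win=2):
    """Estimate a velocity vector (vx, vy) at contour point (x, y).

    The temporal change of image density at the pixel and the spatial
    changes over a small surrounding window are combined in a
    least-squares sense, one common way of obtaining the optical flow
    described above.
    """
    prev = prev_frame.astype(float)
    nxt = next_frame.astype(float)
    ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0  # spatial change along columns
    iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0  # spatial change along rows
    it = nxt - prev                                                    # temporal change of density

    window = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    a = np.stack([ix[window].ravel(), iy[window].ravel()], axis=1)
    b = -it[window].ravel()
    (vx, vy), *_ = np.linalg.lstsq(a, b, rcond=None)
    return vx, vy
```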
In step S705, the CPU 20 stores the calculated motion vector components along the direction Xcamera, such as Vx, in the main memory 21. The CPU 20 stores the components obtained from each frame image data in succession. The successively stored data is referred to as trace data of the motion vectors.
In step S706, the CPU 20 determines whether the extracted object has made an attempt to input coordinates on the display panel 73 based on the trace data. As a determining method, the method illustrated in
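The determining method itself is given in the referenced illustration. Purely as an illustration of how the trace data of Vx might be tested, one plausible heuristic is sketched below; the criterion and all thresholds are assumptions, not the patent's method.

```python
def attempted_input(vx_trace, approach_v=-1.0, settle_v=0.2, settle_frames=3):
    """Hypothetical touch-attempt test on the trace data of Vx (the motion
    component perpendicular to the display panel): the object first moves
    clearly toward the panel (Vx below approach_v under an assumed sign
    convention) and then nearly stops for a few frames."""
    approached = any(vx < approach_v for vx in vx_trace)
    settled = len(vx_trace) >= settle_frames and all(
        abs(vx) < settle_v for vx in vx_trace[-settle_frames:]
    )
    return approached and settled
```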
In step S708, referring to
In step S709, the second image processing circuit 91 or the CPU 20 solves the angle β1 by using the equations (1) and (2), with known quantities f and α, and the measured distance h. As regards image data received from the second linear sensor camera 71, the third image processing circuit 92 or the CPU 20 solves the angle β2 in a similar manner.
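Equations (1) and (2) are defined earlier in the specification with reference to its figures; one common formulation consistent with this step, offered here only as an assumption, is sketched below.

```python
import math

def beta_from_sensor_offset(h, f, alpha):
    """Hypothetical form of equations (1) and (2): the offset h of the
    projected point P from the optical-axis crossing point Q on the CCD,
    together with the focal length f of the wide-angle lens, gives the
    angle theta between the optical axis and the line of sight; combining
    theta with the known mounting angle alpha yields beta.  The sign
    (alpha - theta versus alpha + theta) depends on which side of the
    optical axis P lies, and the patent's exact equations may differ."""
    theta = math.atan(h / f)     # corresponds to an assumed equation (1)
    return alpha - theta         # corresponds to an assumed equation (2)
```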
In step S711, referring to
In step S710, the CPU 20 determines whether the object is within the predetermined region above the display panel 73 using the trace data of motion vector components Vx of the object. In other words, the CPU 20 determines whether the minimum value Nmin among plural distances is still smaller than the predetermined number M0. When the object is in the predetermined region, i.e., YES in step S710, the process returns to step S704 to obtain motion vectors again. When the object is out of the predetermined region, i.e., NO in step S710, the process returns to step S701.
With reference to FIG. 21, in step S801, the first image processing circuit 90 or the CPU 20 extracts contours of an object as a coordinate input member from the frame image data received from the area sensor camera 72.
In step S802, the first image processing circuit 90 or the CPU 20 first extracts features of the shape of the extracted contours of the object. To extract the shape features, the first image processing circuit 90 or the CPU 20 determines the position of the barycenter of the contours of the object, and then measures the distances from the barycenter to plural points on the extracted contours in all radial directions, like the spokes of a wheel. After that, the CPU 20 characterizes the contour shape of the object based on the relation between each direction and the respective distance.
After that, the first image processing circuit 90 or the CPU 20 compares the characterized contour shape of the object with cataloged shapes of potential coordinate input members. The shapes of potential coordinate input members may be stored in the ROM 24 or the hard disk 27 in advance.
When an operator of the coordinate data input system 60S points to an item in a menu or an icon, draws a line, etc., by using a coordinate input member, the axis of the coordinate input member may tilt in any direction at various tilting angles. Therefore, the first image processing circuit 90 or the CPU 20 may compare the contour shape of the object, after rotating it at various angles, with the cataloged shapes.
Instead of rotating the contour shape, the shapes of potential coordinate input members may be rotated at plural angles in advance, and the rotated shapes stored in the ROM 24 or the hard disk 27. Thus, the real-time rotation of the contour shape is not needed, and consequently execution time is saved.
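As an illustration of the barycenter-based shape characterization and the rotation-tolerant comparison described above, the following sketch builds a radial-distance signature and matches it against cataloged signatures by circular shifting (a rotation of the object corresponds approximately to a shift of its signature). The bin count, tolerance, and names are assumptions.

```python
import numpy as np

def radial_signature(contour_xy, n_bins=36):
    """Characterize a contour by the distances from its barycenter to the
    contour in evenly spaced radial directions (the 'spokes of a wheel')."""
    pts = np.asarray(contour_xy, dtype=float)
    center = pts.mean(axis=0)                      # barycenter approximated by the contour mean
    d = pts - center
    angles = np.arctan2(d[:, 1], d[:, 0])
    radii = np.hypot(d[:, 0], d[:, 1])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sig = np.zeros(n_bins)
    for b, r in zip(bins, radii):
        sig[b] = max(sig[b], r)                    # farthest contour point per direction
    return sig / (sig.max() or 1.0)                # scale-normalized signature

def matches_catalog(signature, catalog, tol=0.15):
    """Compare against cataloged shapes; rotating the object corresponds to
    circularly shifting its signature, so every shift is tried."""
    for cataloged in catalog:
        for shift in range(len(signature)):
            if np.max(np.abs(np.roll(signature, shift) - cataloged)) < tol:
                return True
    return False
```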
In step S803, the first image processing circuit 90 or the CPU 20 determines whether the contour shape of the object coincides with one of the cataloged shapes of potential coordinate input members. When the identified contour shape coincides with one of the cataloged shapes, i.e., YES in step S803, the process proceeds to step S804, and when the identified contour shape does not coincide with any of the cataloged shapes, i.e., NO in step S803, the process returns to step S801.
In step S804, the first image processing circuit 90 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 73. For measuring those distances, the first image processing circuit 90 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 73 as regards each of the measuring distances.
In step S805, the first image processing circuit 90 or the CPU 20 extracts the smallest pixel count, i.e., Nmin, among the pixel counts obtained in step S804, and determines whether the minimum value Nmin is smaller than a predetermined number M0. When Nmin is smaller than M0, i.e., YES in step S805, the process proceeds to step S806; when Nmin is not smaller than M0, i.e., NO in step S805, the process returns to step S801.
In step S806, the first image processing circuit 90 or the CPU 20 calculates motion vectors for predetermined plural points on the extracted contours of the object, including the point nearest to the display panel 73, by using the same frame image data used for extracting the contours and the next frame image data received from the area sensor camera 72.
In this example, to calculate the motion vectors, the first image processing circuit 90 or the CPU 20 first obtains optical flows, i.e., velocity vectors, by calculating the rate of temporal change of the image density at a pixel and the rate of spatial change of the image density of the pixels surrounding that pixel. The motion vectors are expressed in the coordinate system (Xcamera, Ycamera).
In step S807, the CPU 20 stores motion vector components along the direction Xcamera of the calculated vectors, such as Vx, in the main memory 21. The CPU 20 stores those components obtained from each frame image data in succession as trace data of the motion vectors.
In step S808, the CPU 20 determines whether the extracted object has made an attempt to input coordinates on the display panel 73 based on the trace data. As a determining method, the method of
In step S810, the CPU 20 determines whether the object is within a predetermined region above the display panel 73 using the trace data of motion vector components Vx of the object. When the object is in the predetermined region, i.e., YES in step S810, the process returns to step S806 to obtain motion vectors again, and when the object is out of the predetermined region, i.e., NO in step S810, the process returns to step S801.
In step S811, the second image processing circuit 91 or the CPU 20 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the first linear sensor camera 70. Similarly, the third image processing circuit 92 or the CPU 20 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the second linear sensor camera 71.
In step S812, the second image processing circuit 91 or the CPU 20 solves the angle β1 by using the equations (1) and (2), with known quantities f and α, and the measured distance h. As regards image data received from the second linear sensor camera 71, the third image processing circuit 92 or the CPU 20 solves the angle β2 in a similar manner.
In step S813, referring to
In this example, the first linear sensor camera 70 and the second linear sensor camera 71 output image data only while a coordinate input member is in proximity to the display panel 73, which reduces the load on the other devices in the coordinate data input system 60S.
Referring to
In step S902, the second image processing circuit 91 sends a command to the first linear sensor camera 70 to start imaging operation. Likewise, the third image processing circuit 92 sends a command to the second linear sensor camera 71 to start an imaging operation. Those commands are transmitted via digital interfaces. According to the commands, the first linear sensor camera 70 starts an imaging operation and sends the taken image data to the second image processing circuit 91. The second linear sensor camera 71 also starts an imaging operation and sends the taken image data to the third image processing circuit 92.
In step S903, the second image processing circuit 91 and the third image processing circuit 92 trace the coordinate input member and input coordinates of the coordinate input member on the display panel 73, respectively.
In step S904, the first image processing circuit 90 or the CPU 20 determines whether the coordinate input member is out of the predetermined region for tracing motion vectors thereof. When the coordinate input member is out of the predetermined region, i.e., YES in step S904, the process proceeds to step S905, and when the coordinate input member is still in the predetermined region, i.e., NO in step S904, the process returns to step S903.
In step S905, the second image processing circuit 91 sends a command to the first linear sensor camera 70 to halt the imaging operation. Likewise, the third image processing circuit 92 sends a command to the second linear sensor camera 71 to halt the imaging operation. According to the commands, the first linear sensor camera 70 and the second linear sensor camera 71 halt the imaging operation, respectively.
In the above example, the predetermined region above the display panel 73 is commonly used for both starting imaging operations and tracing motion vectors. However, a predetermined region for starting imaging operations by the first linear sensor camera 70 and the second linear sensor camera 71 may be greater than a predetermined region for tracing motion vectors of a coordinate input member.
With reference to
In step S1002, the first image processing circuit 30 or the CPU 20 measures plural distances between points on the contours of the extracted object and points on the projected line of the surface of the display panel 12. For measuring those distances, the first image processing circuit 30 or the CPU 20 counts pixels included between a point on the contours of the extracted object and a point on the projected line of the surface of the display panel 12 for each of the measuring distances. The number of pixels between two points determines the distance between the two points.
In step S1003, the first image processing circuit 30 or the CPU 20 extracts the least number of pixels, i.e., Nmin, among the plural numbers of pixels counted in step S1002. Then, the first image processing circuit 30 or the CPU 20 determines whether the minimum value Nmin is larger than a first predetermined number M1 and equal to or smaller than a second predetermined number M2.
The REGION 1 is assigned for tracing motion vectors of the coordinate input member, and the REGION 2 is assigned for moving a cursor, inputting a gesture command, etc. For example, a pen as a coordinate input member is illustrated in the REGION 2 in
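For illustration, the region test used in steps S1003 and S1008 can be sketched as below; the threshold values M1 and M2 are illustrative pixel counts and the function name is hypothetical.

```python
def classify_region(n_min, m1, m2):
    """REGION 1 (nearest the display panel) triggers motion-vector tracing
    and coordinate input; REGION 2 drives cursor movement or gesture
    input; beyond M2 the object is ignored.  Requires m1 < m2."""
    if n_min <= m1:
        return "REGION 1"      # trace motion vectors / input coordinates
    if n_min <= m2:
        return "REGION 2"      # move a cursor, input a gesture command
    return None                # outside both regions
```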
Referring back to
In step S1004, the first image processing circuit 30 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the first electronic camera 10. Similarly, the second image processing circuit 31 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the second electronic camera 11.
In step S1005, the first image processing circuit 30 solves angle β1 by using the equations (1) and (2), with known quantities f and α, and the measured distance h. As regards image data received from second electronic camera 11, the second image processing circuit 31 solves angle β2 in a similar manner.
In step S1006, referring to
In step S1007, the CPU 20 generates display data of a cursor at a location according to the obtained coordinates x and y of the object, and sends the generated display data to the display controller 29. The CPU 20 may also send a cursor command to display a cursor at the location. Thus, the display controller 29 can display a cursor at the location where the coordinate input member exists on the display panel 12. After that, the process returns to step S1001. Thus, as long as the coordinate input member moves in the REGION 2, the displayed cursor follows the coordinate input member.
In step S1008, the first image processing circuit 30 determines whether the minimum value Nmin is equal to or smaller than the first predetermined number M1. That is to say, the first image processing circuit 30 determines whether the coordinate input member is in the REGION 1. When the result of the determination is true, i.e., YES in step S1008, the process proceeds to step S1009, and when the result is false, i.e., NO in step S1008, the process returns to step S1001.
In step S1009, the first image processing circuit 30 calculates motion vectors regarding predetermined plural points on the extracted contours of the object including the nearest point to the display panel 12 by using the identical frame image data used for extracting the contours and the next following frame image data received from the first electronic camera 10. After that, the CPU 20 determines whether the extracted object has made an attempt to input coordinates on the display panel 12 based on the trace data of the calculated motion vectors.
When the CPU 20 determines that the object has made an attempt to input coordinates, the first image processing circuit 30 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the first electronic camera 10. Similarly, the second image processing circuit 31 measures a distance h between the optical axis crossing point Q and a projected point P of a contacting point A(x, y) of the object according to the image data received from the second electronic camera 11.
Then, the first image processing circuit 30 solves angle β1 by using the equations (1) and (2), with known quantities f and α, and the measured distance h. As regards image data received from the second electronic camera 11, the second image processing circuit 31 solves angle β2 in a similar manner.
After that, referring to
In the above-described example, the CPU 20 solves the coordinates x and y of the object on the display panel 12 for every input frame image. However, the CPU 20 may instead solve the coordinates x and y once for every several frames of images.
In addition, in the above-described example, the obtained coordinates x and y on the display panel 12 in the REGION 2 are used for moving a cursor. However, the obtained coordinates x and y may also be used for another purpose, such as inputting a gesture command. For inputting a gesture command, the CPU 20 may store plural sets of coordinate data, i.e., trace data of coordinate data including time stamps thereof. Then, the CPU 20 analyzes the trace data of coordinate data and tests whether the trace data coincides with one of a plurality of defined command loci, which may be stored in the hard disk 27 in advance.
As an example, Japanese Laid-Open Patent Publication No. 5-197810 describes a matching method. The method first obtains a set of a temporal combination and a spatial combination of motion vectors extracted from input images. The method then verifies the obtained set of temporal combination and spatial combination with patterns in a command pattern dictionary provided in advance. Thus, the method identifies the input command as a specific one in the command pattern dictionary.
As an example of gesture commands, when an operator strokes a pen downward within a predetermined range of velocity in the REGION 2 above the display panel 12, the CPU 20 may recognize the stroke as a scroll command. When the CPU 20 recognizes the stroke as a scroll command, the CPU 20 scrolls the image displayed on the display panel 12 downward by a predetermined length, for example, the same length as the input stroke.
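A minimal sketch of such a scroll-gesture test follows. The trace format, velocity window, and the assumption that the y coordinate increases downward are illustrative, not values from the patent.

```python
def detect_scroll_gesture(trace, v_min=50.0, v_max=400.0):
    """trace: list of (t, x, y) samples of the coordinate input member in
    REGION 2.  A predominantly downward stroke whose average speed falls
    inside the velocity window is reported as a scroll command together
    with the stroke length, so the display can be scrolled by the same
    amount."""
    if len(trace) < 2:
        return None
    (t0, x0, y0), (t1, x1, y1) = trace[0], trace[-1]
    dt = t1 - t0
    if dt <= 0:
        return None
    dx, dy = x1 - x0, y1 - y0
    if dy <= 0 or abs(dx) > abs(dy):          # must move mainly downward
        return None
    speed = (dx * dx + dy * dy) ** 0.5 / dt
    if v_min <= speed <= v_max:
        return ("scroll_down", dy)            # scroll by the stroke length
    return None
```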
Further, whether a gesture command or coordinate data is input may be distinguished according to the figure of the coordinate input member. For example, when a human hand or finger draws a figure on the display panel 12, the coordinate data input system 1S may recognize the motion as a gesture command, and when a symmetrical object, such as a pen, draws, the system 1S may input the coordinates of the symmetrical object.
In the network system 200, each of the coordinate data input systems 1SA, 1SB, 1SC and 60SB transmits detected coordinate data of a coordinate input member and related information, such as a gesture command, together with accompanying control signals, according to a transmission control protocol, to the other coordinate data input systems via the PSTN 210 and the LAN 220.
Further, each of the coordinate data input systems 1SA, 1SB, 1SC and 60SB displays images on the display panel 12 of
Therefore, all the coordinate data input systems 1SA, 1SB, 1SC and 60SB can share identical information and display an identical image on the display panel 12 or 73. In other words, people in different places can input information including coordinate data to a coordinate data input system implemented in each of the different places, and watch substantially the same image on the each display panel.
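As a minimal sketch of this sharing of input information, each system might serialize a detected coordinate (and any related information, such as a gesture command) and send it to its peers so that all systems can render the same image. The peer addresses, port, and message format below are assumptions for illustration only.

```python
import json
import socket

PEER_ADDRESSES = [("192.0.2.10", 5000), ("192.0.2.11", 5000)]   # illustrative peer systems

def broadcast_coordinates(x, y, info=None):
    """Send one detected coordinate and related information to every peer
    coordinate data input system over a TCP connection."""
    message = json.dumps({"x": x, "y": y, "info": info}).encode("utf-8")
    for address in PEER_ADDRESSES:
        with socket.create_connection(address, timeout=1.0) as sock:
            sock.sendall(message)
```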
The server 230 stores programs to be executed by the CPU 20 of
When a manufacturer of the coordinate data input systems revises a program of the systems, the manufacturer stores the revised program and informs users of the systems of the new program revision. Then, the users of the coordinate data input systems can download the revised program into hard disk 27 of
As described above, the novel method and apparatus according to the present invention can input information including coordinate data without using a light scanning device even when the surface of a display screen is contorted to a certain extent.
Further, the novel method and apparatus according to the present invention can input information including coordinate data using a plurality of coordinate input members, such as a pen, a human finger, a stick, etc.
Furthermore, the novel method and apparatus according to the present invention can input information including coordinate data with a plurality of background devices, such as a chalkboard, a whiteboard, etc., in addition to a display device, such as a plasma display panel, a rear projection display.
Numerous modifications and variations of the present invention are possible in light of the above teachings. For example, features described for certain embodiments may be combined with other embodiments described herein. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
Claims
1. A method for inputting information including coordinate data, comprising:
- providing at least two cameras at respective corners of a display;
- extracting, based on outputs from the at least two cameras, a predetermined object from an image including the predetermined object above a plane of the display and a plane of the display;
- determining whether the predetermined object is within a predetermined distance from the plane of the display;
- detecting, based on outputs from the at least two cameras, a position of the predetermined object while the predetermined object is determined to be within a predetermined distance from the plane;
- calculating angles of views of each of the at least two cameras to the detected position; and
- calculating coordinates of the predetermined object on the display panel utilizing the calculated angles.
2. A method for inputting information including coordinate data according to claim 1, wherein the at least two cameras are in opposite corners of the display.
3. A device for inputting information including coordinate data, comprising:
- at least two cameras at respective corners of a display;
- an object extracting device configured to extract a predetermined object from an image including the predetermined object above a plane of the display and a plane of the display, and to determine whether the predetermined object is within a predetermined distance from the plane of the display;
- a detector device configured to detect a position of the predetermined object while the predetermined object is within a predetermined distance from the plane; and
- a controller configured to calculate angles of views of each of the at least two cameras to the detected position and to calculate coordinates of the predetermined object on the display panel utilizing the calculated angles.
4. A device for inputting information including coordinate data according to claim 3, wherein the at least two cameras are in opposite corners of the display.
5. A device for inputting information including coordinate data, comprising:
- at least two imaging means at respective corners of a display;
- means for extracting, based on outputs from the at least two imaging means, a predetermined object from an image including the predetermined object above a plane of the display and a plane of the display, and for determining whether the predetermined object is within a predetermined distance from the plane of the display;
- means for detecting, based on outputs from the at least two imaging means, a position of the predetermined object while the predetermined object is within a predetermined distance from the plane;
- means for calculating angles of view of each of the at least two imaging means and for calculating coordinates of the predetermined object on the display panel utilizing the calculated angles.
6. A device for inputting information including coordinate data according to claim 5, wherein the at least two imaging means are in opposite corners of the display.
7. Apparatus usable with at least one processing structure for inputting information including coordinate data, comprising:
- a display device having at least two cameras at respective corners thereof; and
- at least one non-transitory computer readable medium having program code configured to cause the at least one processing structure to: (i) extract, based on outputs from the at least two cameras, a predetermined object from an image including the predetermined object above a plane of the display device and a plane of the display device; (ii) determine whether the predetermined object is within a predetermined distance from the plane of the display device; (iii) detect, based on outputs from the at least two cameras, a position of the predetermined object while the predetermined object is determined to be within a predetermined distance from the plane; (iv) calculate angles of views of each of the at least two cameras to the detected position; and (v) calculate coordinates of the predetermined object on the display device utilizing the calculated angles.
8. Apparatus usable with at least one processing structure for inputting information including coordinate data according to claim 7, wherein the at least two cameras are disposed at opposite corners of the display device.
9. Apparatus usable with at least one processing structure for inputting information including coordinate data, comprising:
- a display having at least two cameras at respective corners thereof; and
- at least one non-transitory computer readable medium configured to cause the at least one processing structure to: (i) extract a predetermined object from an image including the predetermined object above a plane of the display and a plane of the display, and to determine whether the predetermined object is within a predetermined distance from the plane of the display; (ii) detect a position of the predetermined object while the predetermined object is within a predetermined distance from the plane; and (iii) calculate angles of views of each of the at least two cameras to the detected position and to calculate coordinates of the predetermined object on the display panel utilizing the calculated angles.
10. Apparatus usable with at least one processing structure for inputting information including coordinate data according to claim 9, wherein the at least two cameras are disposed at opposite corners of the display.
11. Apparatus usable with at least one processing structure for inputting information including coordinate data, comprising:
- a display panel having at least two imaging devices at respective corners thereof; and
- at least one non-transitory computer readable medium configured to cause the at least one processing structure to: (i) extract, based on outputs from the at least two imaging devices, a predetermined object from an image including the predetermined object above a plane of the display panel and a plane of the display panel, and for determining whether the predetermined object is within a predetermined distance from the plane of the display panel; (ii) detect, based on outputs from the at least two imaging devices, a position of the predetermined object while the predetermined object is within a predetermined distance from the plane; and (iii) calculate angles of view of each of the at least two imaging devices, and calculate coordinates of the predetermined object on the display panel utilizing the calculated angles.
12. Apparatus usable with at least one processing structure for inputting information including coordinate data according to claim 11, wherein the at least two imaging devices are disposed at opposite corners of the display panel.
4107522 | August 15, 1978 | Walter |
4144449 | March 13, 1979 | Funk et al. |
4247767 | January 27, 1981 | O'Brien et al. |
4507557 | March 26, 1985 | Tsikos |
4558313 | December 10, 1985 | Garwin et al. |
4672364 | June 9, 1987 | Lucas |
4737631 | April 12, 1988 | Sasaki et al. |
4742221 | May 3, 1988 | Sasaki et al. |
4746770 | May 24, 1988 | McAvinney |
4762990 | August 9, 1988 | Caswell et al. |
4782328 | November 1, 1988 | Denlinger |
4818826 | April 4, 1989 | Kimura |
4820050 | April 11, 1989 | Griffin |
4822145 | April 18, 1989 | Staelin |
4831455 | May 16, 1989 | Ishikawa |
4868912 | September 19, 1989 | Doering |
4980547 | December 25, 1990 | Griffin |
5025314 | June 18, 1991 | Tang et al. |
5097516 | March 17, 1992 | Amir |
5109435 | April 28, 1992 | Lo et al. |
5130794 | July 14, 1992 | Ritchey |
5140647 | August 18, 1992 | Ise et al. |
5162618 | November 10, 1992 | Knowles |
5168531 | December 1, 1992 | Sigel |
5196835 | March 23, 1993 | Blue et al. |
5239373 | August 24, 1993 | Tang et al. |
5317140 | May 31, 1994 | Dunthorn |
5359155 | October 25, 1994 | Helser |
5374971 | December 20, 1994 | Clapp et al. |
5414413 | May 9, 1995 | Tamaru et al. |
5448263 | September 5, 1995 | Martin |
5483261 | January 9, 1996 | Yasutake |
5483603 | January 9, 1996 | Luke et al. |
5484966 | January 16, 1996 | Segen |
5490655 | February 13, 1996 | Bates |
5502568 | March 26, 1996 | Ogawa et al. |
5525764 | June 11, 1996 | Junkins et al. |
5528263 | June 18, 1996 | Platzker et al. |
5528290 | June 18, 1996 | Saund |
5537107 | July 16, 1996 | Funado |
5554828 | September 10, 1996 | Primm |
5581276 | December 3, 1996 | Cipolla et al. |
5581637 | December 3, 1996 | Cass et al. |
5594469 | January 14, 1997 | Freeman et al. |
5594502 | January 14, 1997 | Bito et al. |
5617312 | April 1, 1997 | Iura et al. |
5638092 | June 10, 1997 | Eng et al. |
5670755 | September 23, 1997 | Kwon |
5686942 | November 11, 1997 | Ball |
5729704 | March 17, 1998 | Stone et al. |
5734375 | March 31, 1998 | Knox et al. |
5736686 | April 7, 1998 | Perret, Jr. et al. |
5737740 | April 7, 1998 | Henderson et al. |
5745116 | April 28, 1998 | Pisutha-Arnond |
5764223 | June 9, 1998 | Chang et al. |
5771039 | June 23, 1998 | Ditzik |
5790910 | August 4, 1998 | Haskin |
5801704 | September 1, 1998 | Oohara et al. |
5818421 | October 6, 1998 | Ogino et al. |
5818424 | October 6, 1998 | Korth |
5819201 | October 6, 1998 | DeGraaf |
5825352 | October 20, 1998 | Bisset et al. |
5831602 | November 3, 1998 | Sato et al. |
5911004 | June 8, 1999 | Ohuchi et al. |
5914709 | June 22, 1999 | Graham et al. |
5920342 | July 6, 1999 | Umeda et al. |
5936615 | August 10, 1999 | Waters |
5943783 | August 31, 1999 | Jackson |
5963199 | October 5, 1999 | Kato et al. |
5982352 | November 9, 1999 | Pryor |
5988645 | November 23, 1999 | Downing |
6002808 | December 14, 1999 | Freeman |
6008798 | December 28, 1999 | Mato, Jr. et al. |
6031531 | February 29, 2000 | Kimble |
6061177 | May 9, 2000 | Fujimoto |
6075905 | June 13, 2000 | Herman et al. |
6100538 | August 8, 2000 | Ogawa |
6104387 | August 15, 2000 | Chery et al. |
6118433 | September 12, 2000 | Jenkin et al. |
6122865 | September 26, 2000 | Branc et al. |
6128003 | October 3, 2000 | Smith et al. |
6141000 | October 31, 2000 | Martin |
6147678 | November 14, 2000 | Kumar et al. |
6153836 | November 28, 2000 | Goszyk |
6161066 | December 12, 2000 | Wright et al. |
6179426 | January 30, 2001 | Rodriguez, Jr. et al. |
6188388 | February 13, 2001 | Arita et al. |
6191773 | February 20, 2001 | Maruno et al. |
6208329 | March 27, 2001 | Ballare |
6208330 | March 27, 2001 | Hasegawa et al. |
6209266 | April 3, 2001 | Branc et al. |
6226035 | May 1, 2001 | Korein et al. |
6229529 | May 8, 2001 | Yano et al. |
6252989 | June 26, 2001 | Geisler et al. |
6256033 | July 3, 2001 | Nguyen |
6262718 | July 17, 2001 | Findlay et al. |
6310610 | October 30, 2001 | Beaton et al. |
6323846 | November 27, 2001 | Westerman et al. |
6328270 | December 11, 2001 | Elberbaum |
6335724 | January 1, 2002 | Takekawa et al. |
6337681 | January 8, 2002 | Martin |
6339748 | January 15, 2002 | Hiramatsu |
6353434 | March 5, 2002 | Akebi et al. |
6359612 | March 19, 2002 | Peter et al. |
6414671 | July 2, 2002 | Gillespie et al. |
6414673 | July 2, 2002 | Wood et al. |
6421042 | July 16, 2002 | Omura et al. |
6427389 | August 6, 2002 | Branc et al. |
6429856 | August 6, 2002 | Omura et al. |
6496122 | December 17, 2002 | Sampsell |
6497608 | December 24, 2002 | Ho et al. |
6498602 | December 24, 2002 | Ogawa |
6507339 | January 14, 2003 | Tanaka |
6512838 | January 28, 2003 | Rafii et al. |
6517266 | February 11, 2003 | Saund |
6518600 | February 11, 2003 | Shaddock |
6522830 | February 18, 2003 | Yamagami |
6529189 | March 4, 2003 | Colgan et al. |
6530664 | March 11, 2003 | Vanderwerf et al. |
6531999 | March 11, 2003 | Trajkovic |
6545669 | April 8, 2003 | Kinawi et al. |
6559813 | May 6, 2003 | DeLuca et al. |
6563491 | May 13, 2003 | Omura |
6567078 | May 20, 2003 | Ogawa |
6567121 | May 20, 2003 | Kuno |
6570612 | May 27, 2003 | Saund et al. |
6577299 | June 10, 2003 | Schiller et al. |
6587099 | July 1, 2003 | Takekawa |
6594023 | July 15, 2003 | Omura et al. |
6597348 | July 22, 2003 | Yamazaki et al. |
6608619 | August 19, 2003 | Omura et al. |
6626718 | September 30, 2003 | Hiroki |
6630922 | October 7, 2003 | Fishkin et al. |
6633328 | October 14, 2003 | Byrd et al. |
6650822 | November 18, 2003 | Zhou |
6674424 | January 6, 2004 | Fujioka |
6683584 | January 27, 2004 | Ronzani et al. |
6690357 | February 10, 2004 | Dunton et al. |
6690363 | February 10, 2004 | Newton |
6690397 | February 10, 2004 | Daignault, Jr. |
6710770 | March 23, 2004 | Tomasi et al. |
6736321 | May 18, 2004 | Tsikos et al. |
6741250 | May 25, 2004 | Furlan et al. |
6747636 | June 8, 2004 | Martin |
6756910 | June 29, 2004 | Ohba et al. |
6760009 | July 6, 2004 | Omura et al. |
6760999 | July 13, 2004 | Branc et al. |
6774889 | August 10, 2004 | Zhang et al. |
6803906 | October 12, 2004 | Morrison et al. |
6864882 | March 8, 2005 | Newton |
6911972 | June 28, 2005 | Brinjes |
6919880 | July 19, 2005 | Morrison et al. |
6933981 | August 23, 2005 | Kishida et al. |
6947032 | September 20, 2005 | Morrison et al. |
6954197 | October 11, 2005 | Morrison et al. |
6972401 | December 6, 2005 | Akitt et al. |
6972753 | December 6, 2005 | Kimura et al. |
7007236 | February 28, 2006 | Dempski et al. |
7015418 | March 21, 2006 | Cahill et al. |
7030861 | April 18, 2006 | Westerman et al. |
7084868 | August 1, 2006 | Farag et al. |
7098392 | August 29, 2006 | Sitrick et al. |
7121470 | October 17, 2006 | McCall et al. |
7176904 | February 13, 2007 | Satoh |
7184030 | February 27, 2007 | McCharles et al. |
7187489 | March 6, 2007 | Miles |
7190496 | March 13, 2007 | Klug et al. |
7202860 | April 10, 2007 | Ogawa |
7232986 | June 19, 2007 | Worthington et al. |
7236162 | June 26, 2007 | Morrison et al. |
7274356 | September 25, 2007 | Ung et al. |
7355593 | April 8, 2008 | Hill et al. |
7414617 | August 19, 2008 | Ogawa |
7619617 | November 17, 2009 | Morrison et al. |
7692625 | April 6, 2010 | Morrison et al. |
20010019325 | September 6, 2001 | Takekawa |
20010022579 | September 20, 2001 | Hirabayashi |
20010026268 | October 4, 2001 | Ito |
20010033274 | October 25, 2001 | Ong |
20020036617 | March 28, 2002 | Pryor |
20020050979 | May 2, 2002 | Oberoi et al. |
20020067922 | June 6, 2002 | Harris |
20020080123 | June 27, 2002 | Kennedy et al. |
20020145595 | October 10, 2002 | Satoh |
20020163530 | November 7, 2002 | Takakura et al. |
20030001825 | January 2, 2003 | Omura et al. |
20030025951 | February 6, 2003 | Pollard et al. |
20030043116 | March 6, 2003 | Morrison et al. |
20030046401 | March 6, 2003 | Abbott et al. |
20030063073 | April 3, 2003 | Geaghan et al. |
20030071858 | April 17, 2003 | Morohoshi |
20030085871 | May 8, 2003 | Ogawa |
20030095112 | May 22, 2003 | Kawano et al. |
20030142880 | July 31, 2003 | Hyodo |
20030151532 | August 14, 2003 | Chen et al. |
20030151562 | August 14, 2003 | Kulas |
20040021633 | February 5, 2004 | Rajkowski |
20040031779 | February 19, 2004 | Cahill et al. |
20040046749 | March 11, 2004 | Ikeda |
20040108990 | June 10, 2004 | Lieberman |
20040149892 | August 5, 2004 | Akitt et al. |
20040150630 | August 5, 2004 | Hinckley et al. |
20040169639 | September 2, 2004 | Pate et al. |
20040178993 | September 16, 2004 | Morrison et al. |
20040178997 | September 16, 2004 | Gillespie et al. |
20040179001 | September 16, 2004 | Morrison et al. |
20040189720 | September 30, 2004 | Wilson et al. |
20040252091 | December 16, 2004 | Ma et al. |
20050052427 | March 10, 2005 | Wu et al. |
20050057524 | March 17, 2005 | Hill et al. |
20050083308 | April 21, 2005 | Homer et al. |
20050151733 | July 14, 2005 | Sander et al. |
20050190162 | September 1, 2005 | Newton |
20050248540 | November 10, 2005 | Newton |
20050276448 | December 15, 2005 | Pryor |
20060022962 | February 2, 2006 | Morrison et al. |
20060158437 | July 20, 2006 | Blythe et al. |
20060202953 | September 14, 2006 | Pryor et al. |
20060227120 | October 12, 2006 | Eikman |
20060274067 | December 7, 2006 | Hikai |
20070019103 | January 25, 2007 | Lieberman et al. |
20070075648 | April 5, 2007 | Blythe et al. |
20070075982 | April 5, 2007 | Morrison et al. |
20070116333 | May 24, 2007 | Dempski et al. |
20070126755 | June 7, 2007 | Zhang et al. |
20070139932 | June 21, 2007 | Sun et al. |
20070236454 | October 11, 2007 | Ung et al. |
20080062149 | March 13, 2008 | Baruk |
20080129707 | June 5, 2008 | Pryor |
2412878 | January 2002 | CA |
2493236 | December 2003 | CA |
198 10 452 | December 1998 | DE |
0 279 652 | August 1988 | EP |
0 347 725 | December 1989 | EP |
0 657 841 | June 1995 | EP |
0 762 319 | March 1997 | EP |
0 829 798 | March 1998 | EP |
1 450 243 | August 2004 | EP |
1 297 488 | November 2006 | EP |
2204126 | November 1988 | GB |
57-211637 | December 1982 | JP
61-196317 | August 1986 | JP
61-260322 | November 1986 | JP
03-054618 | March 1991 | JP
04-350715 | December 1992 | JP
04-355815 | December 1992 | JP
05-181605 | July 1993 | JP
05-189137 | July 1993 | JP
05-197810 | August 1993 | JP
07-110733 | April 1995 | JP
07-230352 | August 1995 | JP
08-016931 | February 1996 | JP
08-108689 | April 1996 | JP
08-240407 | September 1996 | JP
08-315152 | November 1996 | JP
09-091094 | April 1997 | JP
09-224111 | August 1997 | JP
09-319501 | December 1997 | JP
10-105324 | April 1998 | JP
11-051644 | February 1999 | JP
11-064026 | March 1999 | JP
11-085376 | March 1999 | JP
11-110116 | April 1999 | JP
2000-105671 | April 2000 | JP
2000-132340 | May 2000 | JP
2001-075735 | March 2001 | JP
2001-282456 | October 2001 | JP
2001-282457 | October 2001 | JP
2002-236547 | August 2002 | JP
2003-158597 | May 2003 | JP
2003-167669 | June 2003 | JP
2003-173237 | June 2003 | JP
98/07112 | February 1998 | WO |
99/08897 | February 1999 | WO |
99/21122 | April 1999 | WO |
99/28812 | June 1999 | WO |
99/40562 | August 1999 | WO |
02/03316 | January 2002 | WO |
02/07073 | January 2002 | WO |
02/27461 | April 2002 | WO |
03/105074 | December 2003 | WO |
2005/106775 | November 2005 | WO |
2007/003196 | January 2007 | WO |
2007/064804 | June 2007 | WO |
- International Search Report and Written Opinion for PCT/CA2004/001759 mailed Feb. 21, 2005 (7 Pages).
- International Search Report for PCT/CA01/00980 mailed Oct. 22, 2001 (3 Pages).
- International Search Report and Written Opinion for PCT/CA2009/000773 mailed Aug. 12, 2009 (11 Pages).
- European Search Report for EP 06 01 9269 dated Nov. 9, 2006 (4 pages).
- European Search Report for EP 06 01 9268 dated Nov. 9, 2006 (4 pages).
- European Search Report for EP 04 25 1392 dated Jan. 11, 2007 (2 pages).
- European Search Report for EP 02 25 3594 dated Dec. 14, 2005 (3 pages).
- Partial European Search Report for EP 03 25 7166 dated May 19, 2006 (4 pages).
- May 12, 2009 Office Action for Canadian Patent Application No. 2,412,878 (4 pages).
- Förstner, Wolfgang, “On Estimating Rotations”, Festschrift für Prof. Dr. -Ing. Heinrich Ebner Zum 60. Geburtstag, Herausg.: C. Heipke und H. Mayer, Lehrstuhl für Photogrammetrie und Fernerkundung, TU München, 1999, 12 pages. (http://www.ipb.uni-bonn.de/papers/#1999).
- Funk, Bud K., CCD's in optical panels deliver high resolution, Electronic Design, Sep. 27, 1980, pp. 139-143.
- Hartley, R. and Zisserman, A., “Multiple View Geometry in Computer Vision”, Cambridge University Press, First published 2000, Reprinted (with corrections) 2001, pp. 70-73, 92-93, and 98-99.
- Kanatani, K., “Camera Calibration”, Geometric Computation for Machine Vision, Oxford Engineering Science Series, vol. 37, 1993, pp. 56-63.
- Tapper, C.C., et al., “On-Line Handwriting Recognition—A Survey”, Proceedings of the International Conference on Pattern Recognition (ICPR), Rome, Nov. 14-17, 1988, Washington, IEEE Comp. Soc. Press. US, vol. 2 Conf. 9, Nov. 14, 1988 (Nov. 14, 1988), pp. 1123-1132.
- Wang, F., et al., “Stereo camera calibration without absolute world coordinate information”, SPIE, vol. 2620, pp. 655-662, Jun. 14, 1995.
- Wrobel, B., "Minimum Solutions for Orientation", Calibration and Orientation of Cameras in Computer Vision, Springer Series in Information Sciences, vol. 34, 2001, pp. 28-33.
- Press Release, “IntuiLab introduces IntuiFace, An interactive table and its application platform” Nov. 30, 2007.
- Overview page for IntuiFace by IntuiLab, Copyright 2008.
- NASA Small Business Innovation Research Program: Composite List of Projects 1983-1989, Aug. 1990.
- Jul. 5, 2010 Office Action, with English translation, for Japanese Patent Application No. 2005-000268 (6 pages).
- Touch Panel, vol. 1 No. 1 (2005).
- Touch Panel, vol. 1 No. 2 (2005).
- Touch Panel, vol. 1 No. 3 (2006).
- Touch Panel, vol. 1 No. 4 (2006).
- Touch Panel, vol. 1 No. 5 (2006).
- Touch Panel, vol. 1 No. 6 (2006).
- Touch Panel, vol. 1 No. 7 (2006).
- Touch Panel, vol. 1 No. 8 (2006).
- Touch Panel, vol. 1 No. 9 (2006).
- Touch Panel, vol. 1 No. 10 (2006).
- Touch Panel, vol. 2 No. 1 (2006).
- Touch Panel, vol. 2 No. 2 (2007).
- Touch Panel, vol. 2 No. 3 (2007).
- Touch Panel, vol. 2 No. 4 (2007).
- Touch Panel, vol. 2 No. 5 (2007).
- Touch Panel, vol. 2 No. 6 (2007).
- Touch Panel, vol. 2 No. 7-8 (2008).
- Touch Panel, vol. 2 No. 9-10 (2008).
- Touch Panel, vol. 3 No. 1-2 (2008).
- Touch Panel, vol. 3 No. 3-4 (2008).
- Touch Panel, vol. 3 No. 5-6 (2009).
- Touch Panel, vol. 3 No. 7-8 (2009).
- Touch Panel, vol. 3 No. 9 (2009).
- Touch Panel, vol. 4 No. 2-3 (2009).
- Villamor et al. “Touch Gesture Reference Guide”, Apr. 15, 2010.
Type: Grant
Filed: Mar 11, 2010
Date of Patent: Jan 10, 2012
Assignee: SMART Technologies ULC
Inventor: Susumu Fujioka (Zama)
Primary Examiner: Richard Hjerpe
Assistant Examiner: Dorothy Harris
Attorney: Katten Muchin Rosenman LLP
Application Number: 12/722,345
International Classification: G06F 3/042 (20060101); G06F 3/041 (20060101);