Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom
Disclosed is a multiple coordinate controller device having a three-dimensional body with a first surface portion and a second surface portion, where the second surface portion is not coplanar with the first surface portion. A first transducer with a first sensing surface is coupled to the first surface portion of the body and is capable of detecting both positions and a range of pressure forces at positions on the first sensing surface. The first transducer is further capable of providing a first range of z coordinates at a detected x,y coordinate in response to the range of pressure forces on the first sensing surface. A second transducer having a second sensing surface is coupled to the second surface portion of the body and is capable of detecting both positions and a range of pressure forces at positions on the second sensing surface. The second transducer is further capable of providing a second range of z coordinates, of opposite polarity to the first range of z coordinates, in response to the range of forces on the second sensing surface.
The present application is a continuation-in-part of U.S. patent application Ser. No. 08/696,366 filed on Aug. 13, 1996, now abandoned, which is a continuation-in-part of U.S. patent application Ser. No. 08/509,797 filed on Aug. 1, 1995, now U.S. Pat. No. 5,729,249, which is a continuation of U.S. patent application Ser. No. 08/238,257 filed on May 3, 1994, now abandoned, which is a continuation-in-part of U.S. patent application Ser. No. 07/798,572 filed on Nov. 26, 1991, now U.S. Pat. No. 5,335,557, all of which are incorporated herein by reference. The present application also claims the benefit of U.S. Provisional Application No. 60/086,036, filed May 19, 1998, which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of input control devices. More specifically, it relates to force-sensitive input-control devices with multiple surfaces capable of providing intuitive input in one to thirty-six degrees of freedom.
2. Description of the Related Art
(a) Prior Art 3D and 6D Input Control Devices
Two-dimensional input control devices such as mice, joysticks, trackballs, light pens and tablets are commonly used for interactive computer graphics. These devices are refined, accurate and easy to use. Three-dimensional (“3D”) devices allow for the positioning of cursors or objects relative to conventional X, Y and Z coordinates. Six-dimensional (“6D”) devices are also capable of orienting or rotating objects. More specifically, 6D devices may provide position information as in a 3D device and further provide rotational control about each of three axes, commonly referred to as roll, pitch and yaw. However, current 3D and 6D input devices do not exhibit the refinement, accuracy or ease of use characteristic of existing 2D input devices. In fact, existing 3D/6D input devices are typically cumbersome, inaccurate, non-intuitive, tiring to use, and limited in their ability to manipulate objects.
One well known category of 3D computer controllers is the "computer glove," such as the Power Glove controller distributed by Mattel, Inc. Similar devices include the Exos Dextrous Hand Master by Exos, Inc., and the Data Glove by VPL Research, Inc. These controllers are worn as a glove and variously include sensors for determining the position and orientation of the glove and the bend of the various fingers. Position and orientation information is provided by ranging information between multiple electromagnetic or acoustic transducers on a base unit and corresponding sensors on the glove. However, the user is required to wear a bulky and awkward glove and movement of these awkward controllers in free space is tiring. Further, these devices are typically affected by electromagnetic or acoustic interference, and they are limited in their ability to manipulate objects because of the inherent dissimilarity between the free-form movement of a glove and the more constrained movement of manipulated objects.
A second category of 3D/6D controllers are referred to as "flying mice." The Bird controller by Ascension Technology Corp. of Burlington, Vt. tracks position and orientation in six dimensions using pulsed (DC) magnetic fields. However, it is affected by the presence of metals and also requires manipulating the controller in free space. The 2D/6D Mouse of Logitech Inc. is similar in function, but uses acoustic ranging similar to the Mattel device. The 3SPACE sensor from Polhemus, described in U.S. Pat. No. 4,017,858, issued to Jack Kuipers Apr. 12, 1977, uses electromagnetic coupling between three transmitter antennas and three receiver antennas. Three transmitter antenna coils are orthogonally arranged as are three receiver antennas, and the nine transmitter/receiver combinations provide three-dimensional position and orientation information. However, all "flying mouse" devices require the undesirable and tiring movement of the user's entire arm to manipulate the controller in free space. Further, these devices are either tethered by a cord or sensitive to either electromagnetic or acoustic noise.
A device similar to the flying mice is taught in U.S. Pat. No. 4,839,838. This device is a 6D controller using 6 independent accelerometers in an “inertial mouse.” However, the device must still be moved in space, and the use of accelerometers rather than ranging devices limits the accuracy. Another inertial mouse system is taught in U.S. Pat. No. 4,787,051 issued to Lynn T. Olson.
A third category of 3D/6D controllers includes 3D/6D joysticks and trackballs. Spaceball of Spatial Systems, Inc. is a rigid sphere containing strain gauges or optical sensors to measure the forces and torques applied to a motionless ball. The user pushes, pulls or twists the ball to generate 3D translation and orientation control signals. Spaceball is described in detail in U.S. Pat. No. 4,811,608 issued to John A. Hilton Mar. 14, 1989. Similarly, the DIMENSION 6/Geoball controller distributed by CiS Graphics Inc. incorporates a 6-axis optical torque sensor housed in a spherical enclosure. The device measures translational forces and rotational torques. However, these devices are subject to a number of disadvantages. For example, it is difficult to provide for precise positioning, as there is no provision for the use of a stylus. Further, these devices are primarily controlled with hand muscles, rather than with the more precise finger muscles. Further still, these devices provide only relative control and have no provision for an absolute origin or absolute positions. They are therefore not suitable for providing closure in digitized 3D inputs. Finally, they are limited in their ability to provide an intuitive feel for 3D manipulation of a controlled object not specified in the Cartesian coordinate system. For example, they are not readily adaptable to spherical or cylindrical coordinate systems.
(b) Prior Art Force-sensitive Transducers
Force-sensitive transducers are characterized in that they do not require a significant amount of motion in order to provide a control input. These devices have appeared in a number of configurations, some of which are capable not only of sensing the presence or absence of the touch of a user's finger or stylus, but also of quantitatively measuring the amount of force applied. One such device is available from Tekscan, Inc. of Boston, Mass. This device includes several force-sensitive pads in a grid-based matrix that can detect the force and position of multiple fingers at one time. Another force-sensitive device is available from Intelligent Computer Music Systems, Inc. of Albany, N.Y. under the TouchSurface trademark. The TouchSurface device can continuously follow the movement and pressure of a fingertip or stylus on its surface by responding to the position (X and Y) at which the surface is touched and to the force (Z) with which it is touched. Further, if two positions are touched simultaneously on the TouchSurface device, an average position of the two positions is provided. However, these devices are currently limited to manipulating objects in 2.5 dimensions, i.e., X-position, Y-position, and positive Z-direction, and are not available in any intuitive controllers.
Force-sensitive transducers have been used in two-dimensional applications in place of spring-loaded joysticks. For example, U.S. Pat. No. 4,719,538 issued to John D. Cox teaches using force-responsive capacitive transducers in a joystick-type device. However, these devices do not typically provide for 3D/6D inputs. An augmented 2D controller using force-sensitive devices is taught in U.S. Pat. No. 4,896,543 issued to Larry S. Gullman. Gullman describes a three-axis force-measuring stylus used as a computer input device wherein the forces sensed by the stylus are used for recognizing ciphers, selecting colors, or establishing line widths and line densities. However, this device does not provide inputs for roll, yaw or pitch, and provides no negative Z input (i.e., there is no input once the stylus is lifted). Thus, it is limited in its ability to provide 3D positioning information, as this would require an undesirable bias of some sort.
(c) Prior Art 3D/6D Field Controllers
3D/6D controllers are found in many field applications, such as controllers for heavy equipment. These devices must be rugged, accurate and immune from the effects of noise. Accordingly, many input control devices used for interactive computer graphics are not suitable for use in field applications. As a result, heavy equipment controllers typically consist of a baffling array of heavy-but-reliable levers which have little if any intuitive relationship to the function being performed. For example, a typical heavy crane includes separate lever controls for boom rotation (swing), boom telescope (extension), boom lift and hook hoist. This poor user interface requires the operator to select and pull one of a number of levers corresponding to the boom rotation control to cause the boom to rotate to the left. Such non-intuitive controls make training difficult and time-consuming and increase the likelihood of accidents.
Accordingly, it is desirable to provide a 3D/6D controller that is easy to use, inexpensive, accurate, intuitive, not sensitive to electromagnetic or acoustic interference, and flexible in its ability to manipulate objects. Specifically, a substantial need exists for a graphical input device capable of providing for the precision manipulation of position and spatial orientation of an object. It is desirable that the device accept intuitive and simple input actions, such as finger motion, to manipulate position and orientation, and that it not require manipulation of a controller in free space or otherwise cause fatigue. It is desirable that the device provide the dual functionality of both absolute and relative inputs, that is, inputs similar to a data tablet or touch panel that provide for absolute origins and positions, and inputs similar to mice and trackballs that report changes from former positions and orientations. It is desirable that the device recognize multiple points for versatile positioning and spatial orientation of one or more objects and allow the use of multiple finger touch to point or move a controlled object in a precise manner.
SUMMARY OF THE INVENTION
An input controller of the present invention incorporates multiple force/touch-sensitive input elements and provides intuitive input in up to 36 degrees of freedom, including position and rotation, in a Cartesian, cylindrical or spherical coordinate system. Input can be provided in the available degrees of freedom without requiring movement of the controller, so that the controller is suitable both for controlling cursors or other computer objects in an interactive computer system and for controlling equipment such as heavy cranes and fork lift trucks.
More specifically, the preferred embodiment of the present invention provides a substantially cube-shaped input controller which includes a sensor on each of the six faces of the controller. The sensors are sensitive to the touch of a user's finger or other pointing object. In various embodiments, a controlled object may be translated by either a "pushing" or "dragging" metaphor on various faces of the controller. A controlled object may be rotated by a "pushing," "twisting," or "gesture" metaphor on various faces of the controller. In certain embodiments, the same sensor is used for both position and rotational inputs, and the two are differentiated by the magnitude of the force applied to the sensor. Preferably, each sensor includes a main sensor located near the center portion of each face of the controller, and a number of edge sensors surrounding the main sensor and located proximate to the edges of each face of the controller.
According to one embodiment, each face of the controller can be used to provide input in six degrees of freedom to control an object. If every face of the controller is used, a total of thirty-six degrees of freedom may be utilized. This allows the simultaneous control of multiple objects. In one embodiment, a computer-generated object displayed on a computer system includes a virtual hand. The entire hand and individual fingers of the hand may be moved simultaneously in several degrees of freedom by the user providing input on multiple faces of the controller at the same time. In other embodiments, sets of faces can each control a separate object. For example, two opposing faces on the controller can command the translation and rotation of one object, while two different opposing faces can command the translation and rotation of a second object.
In a different embodiment, the controller of the present invention can be used to provide input to an application program implemented by a computer system, such as a computer aided design (CAD) program. A front face on the controller can be used to control a cursor in the program, and left and right faces can provide commands equivalent to left and right buttons on a mouse or other pointing device typically used with the program. An object displayed by the CAD program can be manipulated by using two touch points simultaneously. An object can be deformed, such as twisted, shrunk, or stretched, by providing input on the edge sensors of the controller. Two points of an object can be simultaneously deformed using separate faces of the controller.
In another embodiment, "pseudo force feedback" is provided to the user when the user controls a computer-generated object in a virtual environment. When a user-controlled computer object, such as a virtual hand, engages another object in the virtual environment, such as an obstacle, the user-controlled object is not allowed to move further in the direction of the obstacle object. The user thus feels the surface of the controller as if it were the surface of the obstacle, and receives visual feedback confirming this pseudo-sensation. In another embodiment, active tactile feedback can be provided to the user with the use of tactile sensation generators, such as vibratory diaphragms, placed on the controller or on surfaces peripheral to the controller.
The present invention provides an intuitive, inexpensive, and accurate controller for providing input in 3 or more degrees of freedom. The controller is flexible in its ability to manipulate objects and provides a relatively large number of degrees of freedom, such that a user can manipulate multiple objects simultaneously. This allows realistic control of objects such as virtual hands in a simulated environment. In addition, the controller is not manipulated in free space and thus does not cause hand fatigue. The multiple dimensions of input can be generated without requiring movement of the controller, which provides a controller suitable for controlling both cursors and displayed objects in an interactive computer system. Further, the controller is insensitive to acoustic or electromagnetic noise and is thus suitable for controlling equipment such as heavy cranes and forklift trucks.
These and other advantages of the present invention will become apparent to those skilled in the art upon a reading of the following specification of the invention and a study of the several figures of the drawing.
FIGS. 33a1, 33a2, 33b1, 33b2, 33c1, 33c2, 33d1, 33d2, 33d3, 33d4, 33d5, 33d6, 33e1, and 33e2 illustrate the interpretation of various gestures;
Controller 105 is operated by pressing on any of the six force-sensitive pads. This pressure is preferably applied with one or more of the user's fingers. Alternatively, other objects can be used to apply pressure, such as a stylus or other article. The sensor pads can detect even a small amount of pressure so that the user need only touch the pads. In the described embodiment, the planar faces and the sensor pads of the controller 105 are rigid and do not substantially deform under the pressure from the user. Thus, accurate x, y, and z-axis commands, referenced to the faces of the controller, can be provided at any point touched on the sensor pads.
The user interface is intuitive since a real or computer generated object will move as if it is responding to the pressure (i.e., force) on controller 105. For example, pressing down on force-sensitive pad 120, positioned on the top of controller 105, will cause a controlled object to move downward (−Y). Similarly, pressing up on force-sensitive pad 135, positioned on the bottom of controller 105, will cause the object to move upward (+Y). Pressing the controller towards the user, by pressing on force-sensitive pad 130, positioned on the back of controller 105, will cause the object to move towards the user (−Z). Pressing the controller away from the user, by pressing on force-sensitive pad 110, positioned on the front of controller 105, will cause the object to move away from the user (+Z). Pressing the controller to the left, by pressing on force-sensitive pad 115 on the right side of controller 105, will cause the object to move to the left (−X). Similarly, pressing the controller to the right, by pressing on force-sensitive pad 125, positioned on the left side of controller 105, will cause the object to move to the right (+X).
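As a minimal sketch of this "pushing" metaphor, the face-to-direction mapping can be written as a lookup table. The pad reference numbers follow the description above, while the function, data structure, and constants are illustrative assumptions rather than the patent's implementation:

```python
# Illustrative sketch (not the patent's implementation) of the "pushing"
# metaphor: each force-sensitive pad of controller 105 maps to a unit
# translation along one Cartesian axis.

PAD_TO_DIRECTION = {
    120: (0, -1, 0),   # top pad pressed down        -> -Y
    135: (0, +1, 0),   # bottom pad pressed up       -> +Y
    130: (0, 0, -1),   # back pad, toward the user   -> -Z
    110: (0, 0, +1),   # front pad, away from user   -> +Z
    115: (-1, 0, 0),   # right-side pad, push left   -> -X
    125: (+1, 0, 0),   # left-side pad, push right   -> +X
}

def translation_for(pads_pressed):
    """Sum the unit directions of all currently pressed pads."""
    x = y = z = 0
    for pad in pads_pressed:
        dx, dy, dz = PAD_TO_DIRECTION[pad]
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)

print(translation_for({120, 115}))  # top + right-side pads -> (-1, -1, 0)
```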
One advantage of the controller 105 is that it exhibits a zero neutral force, i.e., the controller does not require a force on any sensors or mechanical members to maintain a neutral position. The user merely stops applying pressure to the sensors, and the controller is in a neutral state that does not input movement signals to the computer 220.
In the preferred first embodiment, controller 105 is sensitive to the presence of a touch input, and A/D converter 205 provides a binary signal output to integrator 210 for each force-sensitive pad. This yields a controller with a single "speed": activation of a force-sensitive pad results in the cursor, object or equipment moving in the desired direction at a fixed speed. Alternatively, force-sensitive pads 110, 115, 120, 125, 130 and 135 can be of the type that provide analog outputs responsive to the magnitude of the applied force, A/D converter 205 can be of the type that provides a multi-bit digital signal, and integrator 210 can be of the type that integrates multi-bit values. The use of multi-bit signals allows for multiple "speeds," that is, the speed of the cursor or object movement in a given direction will be responsive to the magnitude of the force applied to the corresponding force-sensitive pads.
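The single-speed and multi-speed behaviors can be illustrated with a brief sketch of the integration step; the gain, sample period, and names below are assumptions for illustration only, not the patent's circuit:

```python
# Illustrative sketch of the integration behind the two modes above:
# binary (single "speed") versus multi-bit (speed tracks force magnitude).

def step_displacement(force, binary=True, gain=0.01, dt=0.02):
    """Displacement contributed by one sample period for one pad.

    force  -- A/D reading for the pad (0 means no touch)
    binary -- True: one fixed speed when touched; False: speed ~ force
    """
    if binary:
        speed = gain if force > 0 else 0.0   # single-speed mode
    else:
        speed = gain * force                 # multi-speed mode
    return speed * dt                        # one integrator step

position = 0.0
for sample in [0, 40, 80, 80, 0]:            # simulated A/D readings
    position += step_displacement(sample, binary=False)
print(position)                              # ~0.04
```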
In operation, sensors 310, 315 and 320 provide redundant X, Y and Z position control of a cursor, object or equipment. That is, Y-position information can be entered on either sensor 310 or 315, X-position information can be entered on either sensor 310 or 320, and Z-position information can be entered on either sensor 315 or 320. The two X inputs are summed to provide the final X position information. Y and Z information is obtained in the same manner. Thus a change in position on a sensor is interpreted as a change of position of the real or computer-generated object, with a fixed or programmable gain.
For applications requiring six degree-of-freedom input, such as manipulating the orientation of an object or equipment, sensors 310, 315 and 320 also provide pitch, yaw and roll control. Specifically, the third signal provided by each sensor is used to differentiate "light" from "strong" pressures on each sensor. Threshold detector 535, illustrated in
Orientation interpreter 545 interprets rotational "gestures" as rotational control signals. More specifically, when a user applies pressure above the threshold pressure as detected by threshold detector 535, the analog information from the affected sensor is coupled to orientation interpreter 545 and interpreted as an orientation or rotation about the axis perpendicular to that sensor. The angular position of the pressure point is calculated with reference to the center point of the sensor. In a relative operating mode, any angular changes are interpreted as rotations. The rotation can be modified by a programmable gain if desired. Orientation interpreter 545 can also operate in an absolute mode. In an absolute mode, the orientation is determined from the two signals from each sensor by determining the angular position of the input relative to the center point of the sensor.
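A hedged sketch of the relative operating mode follows: the angular position of a strong-pressure touch about the sensor center is tracked per sample, and its change, scaled by a programmable gain, is read as rotation about the axis perpendicular to the sensor. The threshold value and all names are assumptions:

```python
# Sketch of the orientation interpreter's relative mode (illustrative).

import math

PRESSURE_THRESHOLD = 50.0   # assumed units; separates "light" from "strong"
GAIN = 1.0                  # programmable rotation gain

def rotation_delta(prev_xy, cur_xy, force, center=(0.0, 0.0)):
    """Rotation (radians) about the sensor's perpendicular axis, or None
    if the pressure is below the rotation threshold (translation mode)."""
    if force <= PRESSURE_THRESHOLD:
        return None
    cx, cy = center
    a1 = math.atan2(prev_xy[1] - cy, prev_xy[0] - cx)
    a2 = math.atan2(cur_xy[1] - cy, cur_xy[0] - cx)
    return GAIN * (a2 - a1)

print(rotation_delta((1.0, 0.0), (0.0, 1.0), force=80.0))  # ~ +1.571 (pi/2)
```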
The X, Y and Z position data and the orientation data are derived in the same way as described with reference to controller 305 illustrated in
Alternatively, rotation commands can be generated by another technique using the 6-sided controller of FIG. 6. Specifically, a rotation command is generated by simultaneously dragging a finger on one panel in a first direction, and dragging another finger on the opposite panel in the opposite direction. For example, as illustrated in
A fourth embodiment of a 6D controller 705 is illustrated in
Knobs 740, 750 and 760 provide the orientation information for pitch, roll and yaw, respectively. Specifically, knob 740 provides pitch information about the Y-axis, knob 750 provides roll information about the X-axis, and knob 760 provides yaw information about the Z-axis.
As illustrated with regards to knob 740, each knob includes at least one sensor pad that can detect one dimensional information about the circumference of the knob. Preferably, each sensor can average two inputs. Movement of one or two pressure points on a sensor is interpreted as rotation about the axis of that sensor. Thus each knob generates orientation information about one axis in response to twisting of a thumb and finger about that knob. Specifically, sensor 745 on knob 740 provides one-dimensional position information about the circumference of knob 740. In the case of two inputs applied to a sensor, the average position of the two inputs is interpreted in a relative mode, and a programmable gain is provided. More specifically, the rotational command (the change in rotation) is calculated as follows:
θ=G*360°*dl/L
Where θ is the rotational command; G is the programmable gain; dl is the change in the average position of the fingers; and L is the circumference of the knob.
For example, twisting the thumb and finger one centimeter on knob 740 is interpreted as 90° of rotation about the Y-axis. Alternatively, the gain can be increased or decreased as desired.
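The formula can be transcribed directly into code; the 4 cm circumference and unity gain below are assumed values chosen to reproduce the one-centimeter, 90° example above:

```python
# Direct transcription of the knob formula; variable names mirror the text.

def knob_rotation(dl, L, G=1.0):
    """theta = G * 360 * dl / L, in degrees."""
    return G * 360.0 * dl / L

print(knob_rotation(dl=1.0, L=4.0))  # 90.0 degrees about the knob's axis
```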
Another embodiment of a touch cylinder 900 is illustrated in
Operation of touch cylinder 900 is described with reference to a "push" mode. Specifically, rotational information is provided by "pushing" sensors positioned on the sides of the cylinders: the object rotates about one of the axes other than the axis of the enabled sensor's cylinder, as if the object had been "pushed" in the same direction as the controller. This is more easily explained by illustration. Referring to
A third embodiment of a touch cylinder 1000 is illustrated in
Alternatively, Z-position can be responsive to the force of signals applied to sensors 1105 and 1115 in a manner similar to controller 105. Theta information can be obtained in a manner similar to that used for rotation information in controller 305. Radial information can be obtained from the force of the pressure applied to sensor 1110.
The raised edges of the controller provide an area of the sensor tactilely distinguished from flat surface 1915, which operates in a different mode. When computer system 220 reads input signals from coordinates of the edge sensor areas, it can distinguish this input as a different command from input entered on the main sensor areas. For example, in a relative mode for X and Y-position, a change in position on sensor area 1915 is interpreted as a proportional change in cursor or object position on a display device of the computer 220. Once the operator's finger reaches edge sensor 1910, a steady force (without substantial movement) on edge sensor 1910 is interpreted as a continuation of the cursor movement. Cursor movement can be continued at either the most recent velocity along an axis, or at a preset speed, as long as a force is detected on the portion of edge sensor 1910 on that axis, such as portion 1920 with regard to movement in the positive X-direction. Alternatively, the speed of the cursor movement along an axis could be proportional to the amount of force applied to edge sensor 1910 on that axis. Thus, area 1920 would provide control of +X cursor speed, area 1925 would provide control of +Y cursor speed, area 1930 would provide control of −X cursor speed, and 1935 would provide control of −Y cursor speed. In any case, the operator is provided with the advantages of two alternative operating modes and the ability to combine the two modes in order to continue object movements in a desired direction after reaching the edge of main sensor area 1915.
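A small sketch of this continuation behavior, covering both the last-velocity and force-proportional variants; the constants and names are illustrative assumptions:

```python
# Illustrative sketch of edge-sensor continuation: while force is held on
# an edge region, cursor motion continues at either the most recent
# velocity or a speed proportional to the edge force.

def continued_velocity(edge_force, last_velocity, proportional=False, k=0.5):
    """Velocity to apply while a force is detected on an edge sensor."""
    if edge_force <= 0:
        return 0.0                    # no touch: no continuation
    if proportional:
        return k * edge_force         # speed follows edge force
    return last_velocity              # continue most recent velocity

print(continued_velocity(edge_force=10.0, last_velocity=3.0))   # 3.0
print(continued_velocity(10.0, 3.0, proportional=True))         # 5.0
```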
When a user presses an edge sensor area without previously entering translation input on the adjacent main sensor, then the edge sensor input can be interpreted as a separate command and not as a continuation command. For example, an object or cursor can be rotated using the edge sensors, as described in greater detail below. In an alternative embodiment, only the edge sensors are used, and the main sensor area does not provide input when touched.
Four edge sensors 2520 surround and are immediately adjacent to each of the main sensors 2510 so that a user's finger may move continuously from a main sensor 2510 to an edge sensor 2520. Each of the edge sensors 2520 is inclined and raised relative to the adjacent main sensor to tactilely distinguish it from the associated main sensor 2510. Alternatively, edge sensors 2520 could be otherwise tactilely distinguished, such as by the use of a texture different from that used on the adjacent main sensor 2510. One function of the edge sensors 2520 is to provide a continuation command as described above with regard to the operation of FIG. 19. In addition, edge sensors 2520 may be used to provide rotation commands. Specifically, the eight edge sensors 2520(x) parallel to the X-axis may be used to provide rotation commands about the X-axis. As illustrated in
The protocol for rotation command generation is illustrated in
Rotation commands are distinguished from translation commands by determining if a touch on a main sensor 2510 at a position immediately adjacent to an edge sensor 2520 occurred immediately prior to or simultaneously with the initiation of the touch of an edge sensor 2520. If touch points are detected on an edge sensor 2520 and on a main sensor 2510, and the touch points are continuous in time and position, the user's intention is interpreted as a continuation of a translation command. If touch points are detected on edge sensors 2520 only, without a prior and adjacent detection on the adjacent main sensor, then the magnitude of the force signal on the edge will be interpreted as a rotational command. It is preferable that a certain amount of "hysteresis" is provided in the command interpretation, such that if a user partially touches a main sensor 2510 while applying a rotation gesture, it is not interpreted as a continuation of a translation command. This is easily accomplished, as a continuation of a translation command cannot occur unless a translation command had been previously provided, and that previous translation command is smoothly continued by the candidate continuation command. This is described more fully below in the section titled Gesture Interpretation.
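The interpretation rule can be summarized in a short sketch; the boolean conditions mirror the criteria just described, though a real implementation would track touch histories in more detail:

```python
# Hedged sketch of the edge-touch interpretation rule; these booleans
# stand in for the patent's time-and-position continuity checks.

def interpret_edge_touch(main_touched_first, continuous_in_time_and_position,
                         prior_translation):
    """Classify a touch on an edge sensor 2520."""
    if (main_touched_first and continuous_in_time_and_position
            and prior_translation):
        return "continuation-of-translation"
    return "rotation"   # edge-only touch: force magnitude sets the rate

print(interpret_edge_touch(True, True, True))    # continuation-of-translation
print(interpret_edge_touch(False, False, False)) # rotation
```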
The rotation and continuous-translation input modes are very intuitive. The rotation mode is especially intuitive because the user's pushing action (one finger) or "twisting gesture" (pushing two diagonally opposite edge sensors with two fingers) causes a controlled object to rotate in the pushing or twisting direction.
Rotation commands about an arbitrary axis may also be generated using a controller 2500′ similar to the controller 2500 illustrated in FIG. 25a. Specifically, in this alternative embodiment, edge sensors 2520 are replaced with edge sensors 2520′ capable of providing a signal responsive to the position at which they are touched. For example, edge sensors 2520′ along the X-axis provide a signal corresponding to the position along the X-axis at which a touch occurs. Similarly, the edge sensors 2520′ along the Y- (and Z-) axes provide a signal corresponding to the position along the Y- (and Z-) axis. Such position detection on the edge sensors gives the user a greater degree of control over the movement and manipulation of an object.
Gesture Interpretation
Gestures applied to the controllers, such as controllers 2500 and 2500′, may be interpreted in a number of different ways by a computer interface and used to control the movement of display objects on an interactive computer display or used to control the movement of a physical piece of equipment, such as an industrial crane. The interpretation of gestures can be broken down into three cases.
In case 1, there is no detection of pressure or touch on main sensors 2510, but there is detection of pressure on edge sensors 2520. This case is interpreted as rotation of the camera view, as illustrated in the flow chart of FIG. 30. Referring to
In case 2, there is a detection of a single touch or pressure point on main sensors 2510. This case is interpreted as a cursor manipulation or camera view rotation as illustrated in the flow chart of FIG. 31. Referring to
In case 3, there is a detection of multiple touch points on main sensors 2510. This case is interpreted as an object manipulation as illustrated in the flow chart of FIG. 32. Referring to
Returning to step 3210, if touch points are detected on edge sensors 2520, a test is made in step 3240 to determine if there is only one touch point on edge sensor 2520. If yes, the gesture is interpreted as an object grasp and rotation in step 3245, as illustrated in FIGS. 33b1 and 33b2. If no, a test is made in step 3250 to determine if the edges touched are parallel and if the touch points on the main sensor panel 2510 are within a specified region adjacent to the edge and whether there was a translation command just previously generated (similar to step 3130 of FIG. 31). If these tests are not all met, the gesture is interpreted as a camera view rotation in step 3255. If the conditions of step 3250 are met, a test is made in step 3260 to determine if three touch points occur on edge sensors 2520. If yes, the gesture is interpreted as a continuation of object translation and object rotation in step 3265, as illustrated in FIGS. 33c1 and 33c2. If no, the gesture is interpreted as a continuation of object translation in step 3270.
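A condensed sketch of the three-case dispatch follows; it collapses the flow charts of FIGS. 30 through 32 into a single function and omits several of their conditions (such as the parallel-edge and adjacency tests), so it should be read as an approximation rather than the patent's protocol:

```python
# Approximate restatement of the three gesture-interpretation cases.

def classify_gesture(main_points, edge_points, prior_translation=False):
    if main_points == 0 and edge_points > 0:
        return "camera view rotation"                        # case 1
    if main_points == 1:
        return "cursor manipulation or camera view rotation" # case 2
    if main_points >= 2:                                     # case 3
        if edge_points == 0:
            return "object manipulation"
        if edge_points == 1:
            return "object grasp and rotation"
        if prior_translation:
            return "continuation of object translation (and rotation)"
        return "camera view rotation"
    return "no gesture"

print(classify_gesture(main_points=2, edge_points=1))  # object grasp and rotation
```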
The user's finger 3504 can be pushed against the main sensor 3508 in the direction of the z-axis shown by arrow 3518 to provide input in the z degree of freedom. A threshold pressure, greater than the pressure needed for movement in the x- and y-degrees of freedom, preferably commands the z-axis input, as described in greater detail below in FIG. 35e. As shown in
The six degrees of freedom provided by a single face 3502 of controller 3500 can be multiplied by the number of active faces on the cube to achieve the total number of degrees of freedom in which the user may simultaneously provide input to a computer system or controlled device, e.g., when all six faces are used, there are 36 degrees of freedom. By using multiple fingers simultaneously on different faces of the controller, the user can independently and simultaneously control multiple sets of six degrees of freedom.
If the force F is greater than threshold #1 in step 3534, then in step 3536, the process checks whether the force F is between the first threshold and a second force threshold (threshold #2). If so, the force F is used to implement bi-directional z-axis movement, as described for
If the force F does not fit in the range of step 3536, the force F must be greater than threshold #2 (a check for F being greater than threshold #2 can be provided in alternate embodiments). Thus, in step 3539, the x- and y-data of the touch point is used to determine the amount of roll that is commanded by the user, as described in FIG. 35d. The F data is typically not needed to determine the change in angle of roll of the controlled object. A preferred method of calculating the roll uses the following formula:
Δθ = tan⁻¹(Y1/X1) − tan⁻¹(Y2/X2)
where Δθ is the change in angle of roll of the controlled object, (X1, Y1) is the starting touch point of the roll gesture, and (X2, Y2) is the ending point of the roll gesture.
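In code, the same computation is more robustly expressed with atan2, which handles all quadrants (the tan⁻¹ form is undefined at X = 0); measuring the touch coordinates from the sensor center is an assumption here:

```python
# Roll-angle change from a roll gesture, per the formula above.

import math

def roll_delta(start, end):
    """Change in roll angle (degrees); start and end are (x, y) touch
    points relative to the sensor center."""
    x1, y1 = start
    x2, y2 = end
    return math.degrees(math.atan2(y1, x1) - math.atan2(y2, x2))

print(roll_delta((1.0, 0.0), (0.0, 1.0)))  # -90.0: a quarter-turn gesture
```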
In other embodiments, each finger 3564, 3566, and 3568 can be controlled independently of the other fingers by a separate face of the controller. For example, pinky finger 3564 can be controlled by the left face of cube 3500, ring finger 3566 can be controlled by the bottom face of cube 3500, and the middle finger 3568 can be controlled by the back face 3548 of controller 3500. However, such an arrangement is somewhat awkward for the user to manipulate with one hand, so that the user finger-virtual finger correspondence would be difficult to maintain.
In step 3616, the process checks whether any touch points have been detected from the user pressing fingers (or other objects) on the sensor pads. If a single touch point has been detected, i.e., the user is pressing only one sensor pad, then the process continues to step 3618, in which a camera view control command is generated. This camera view control command rotates or translates the view as seen by the user in a display such as display screen 3560. The control command is sent to the appropriate destination to implement the command. For example, a microprocessor in the controlling computer system 220 can receive the control command and generate a proper response by rotating or translating the camera view on display screen 3560. Step 3618 is described in greater detail with respect to FIG. 38a. The process then returns to step 3614 to read the six sensor pads.
If the process determines that two touch points have been detected in step 3616, then in step 3620, a virtual hand movement command is generated. This type of command causes the entire virtual hand 3562 to move in three-dimensional space (the simulated space may have less than three dimensions if the simulation is so constrained). The virtual hand command is then implemented, e.g., the computer system moves the hand 3562 to correspond to the current position of the user's finger on a main sensor pad, or continues to move the hand if the user's finger is on an edge sensor after a translation command, as described in the embodiments above. The generation of virtual hand control commands is described in greater detail with respect to FIG. 38b. The process then returns to step 3614 to read the six sensor pads.
If the process determines that three or more touch points have been detected in step 3616, then the process continues to step 3622, where a virtual finger movement command is generated. This type of command causes one or more fingers of hand 3562 to move in three-dimensional space. The command is implemented, e.g., by the computer displaying the finger moving in the appropriate manner. The generation of virtual finger controls is described in greater detail with respect to FIG. 38c. The process then returns to step 3614 to read the sensor pads.
If the touch point is not on an edge sensor in step 3628, then the process continues to step 3632, where a translation command for the camera view is implemented corresponding to the trajectory of the touch point on the sensor pad. For example, the last-processed touch point on the pad is examined and compared to the current touch point. From these two touch points, a vector can be determined and the view shown on the display device is translated along the vector, as if a camera were being translated by which the user was viewing the scene. The process is then complete at 3634 and returns to the process of FIG. 38.
If the detected touch points are not on diagonally-located edge sensors in step 3642, then, in step 3646, a translation command for the virtual hand is implemented that corresponds to the trajectory of both touch points on the controller. The virtual hand is moved in directions corresponding to the touch points. For example, as shown above in FIGS. 33d5 and 33d6, the two fingers on opposite faces of the controller cause the hand to translate in a plane. This is typically the most common form of input method to translate the virtual hand. In another scenario, if one of a user's fingers is dragged along the y-direction on the front face 3540, and another finger is dragged in the x-direction along the top face 3544, then the virtual hand is moved along a vector resulting from corresponding component vectors along the x- and y-axes. If one finger is not moved and the other finger is dragged, then the virtual hand is translated according to the one finger that is being dragged. After step 3646, the process is complete at 3648 and returns to the main process of FIG. 38.
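The combination of drags on two faces into one hand translation can be sketched as vector addition; the face-to-axis assignments below are assumptions chosen to match the scenario just described:

```python
# Illustrative sketch: a y-drag on the front face and an x-drag on the
# top face yield a single resultant translation vector for the hand.

def hand_translation(front_drag=(0.0, 0.0), top_drag=(0.0, 0.0)):
    """front_drag = (dx, dy) on the front face; top_drag = (dx, dz) on
    the top face. Shared x components are summed."""
    fx, fy = front_drag
    tx, tz = top_drag
    return (fx + tx, fy, tz)

# Drag up on the front face while dragging right on the top face:
print(hand_translation(front_drag=(0.0, 1.0), top_drag=(1.0, 0.0)))
# -> (1.0, 1.0, 0.0): the hand moves along the resultant vector
```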
The process also checks if the force of the user's touch points on main sensors is less than the user-defined threshold value at step 3654. As explained above, multiple fingers can be simultaneously dragged on the main sensors of different faces of the controller. If the touch point is less than the threshold, then step 3660 is performed, in which the process checks if the touch trajectory is along the x-axis and/or the y-axis of the controller. If along the x-axis, step 3662 is performed, in which a bending control command is generated to bend the two (or more) joints of the appropriate virtual finger(s) about the z-axis, thus providing x-axis translation of the tip of the virtual finger. An example of this motion is shown in
The process also checks in step 3654 if any of the detected touch points are on an edge sensor of the controller that is predetermined to correspond with a virtual finger. As explained above with reference to
The above process provides a large and flexible range of virtual hand and virtual finger motions to the user with the intuitive use of the controller. Unlike other, more limited input devices, the controller allows the fingers and the hand to be controlled simultaneously and independently of each other, allowing a user to perform virtual actions and interact with virtual objects in a highly realistic manner.
In addition, other functions can also be provided using the controller. For example, the right face 3806 and the left face 3808 can be used to select functions normally selected by the right and left mouse buttons, respectively. Thus, the left face 3808 can be pressed by the user to select an object 3822 that has been modeled or drawn using the CAD program. These functions are described in greater detail below with respect to the process of FIG. 43.
If object deformation mode is selected in step 3836, then the process checks in step 3844 if the touch point is on an edge sensor of the controller. If so, the process implements a twisting deformation of the displayed object in step 3846, as described in greater detail with respect to FIG. 43c. The process then returns to step 3834. If the touch point is not on the edge, it is on a main sensor of the controller, and the displayed object is shrunk or stretched in accordance with the user's input in step 3848, as described in greater detail with respect to FIG. 43d. The process then returns to step 3834.
If the detected touch point was not on the front sensor pad, then the process checks whether the detected touch point is positioned on the left sensor pad (relative to the front sensor pad) in step 3860. If so, then a left "click" command, equivalent to the click of a left button on a pointing device, is provided in step 3862. Typically, the left button on a mouse, trackball, touch tablet, or other pointing device is the main button used to select objects or items displayed on the screen. Any functions selectable with the left mouse button can preferably be selected using the left face 3808 of the controller. For example, a "double click" of the left mouse button is often used to execute a program or perform a function that is different when only a single click is input. The left face of the controller can be touched twice in succession to perform the double click. Other buttons or controls on standard input devices can be associated with the left face 3808 of the controller in other embodiments. The process is then complete at 3858.
If the touch point is not detected on the left sensor pad in step 3860, then in step 3864 the process checks if the touch point is detected on the main sensor pad of the right face 3806. If so, a right click command is implemented in step 3866. This command is equivalent to the command generated if the user selected the right mouse button (or equivalent control) on a mouse or other input pointing device. This step is thus similar to step 3862 for the left button of the mouse. Other buttons or controls on standard input devices can be associated with the right face 3806 of the controller in other embodiments. The process is then complete at 3858.
In other embodiments, the tactile sensation generators can be placed on other portions of each face of the controller, such as in the center of each face. Also, the tactile sensation generators can be of different sizes, e.g., a tactile sensation generator can cover an entire main sensor 3976 or an entire face of the controller 3962. In other embodiments, additional tactile sensation generators can be provided, such as a generator on each edge sensor and on the main sensor of a face. Also, the tactile sensation generator 3946 as shown in
The x,y translation signal produced by the first transducer at the first position is determined by the position of the object. When the user moves her finger, the x,y coordinates are changed by an x,y translation signal generated by the first transducer based on the direction of finger movement as follows: towards top edge 4102, the y coordinates are increased; towards bottom edge 4104, the y coordinates are decreased; towards left edge 4106, the x coordinates are decreased; and towards right edge 4108, the x coordinates are increased. That is, the object is moved in a relative, as opposed to absolute, fashion in relationship to the movement of the finger on the sensing surface.
When the user moves her finger, a second transducer coupled to second sensing surface 4116 will transmit a pitch and a yaw rotation signal. If the user moves her finger towards: the top edge 4102, a positive pitch signal will be transmitted, towards the bottom edge 4104, a negative pitch signal will be transmitted, towards the left edge 4106, a negative yaw signal will be transmitted, and towards the right edge, 4108, a positive yaw signal will be transmitted.
After operating controller 4000 as indicated in the methods with reference to
Controller 4200 further includes a second sensor 4220 having a second sensing surface 4221 that may be located on either right surface 4222 or left surface 4247 of wedge shaped body 4205, depending on whether the user is right handed or left handed, respectively. For purposes of illustration, second sensor 4220 is located on right surface 4222 of wedge shaped body 4205. A second edge sensor 4225 having a second edge sensing surface 4226 is positioned around the periphery of second sensor 4220 to generate a continuation command signal.
The x,y and y,z translation signals produced at the first position are determined by the position of the object being moved. When the user moves her finger on the first sensing surface, the x,y coordinates are changed by an x,y translation signal generated by the first transducer based on the direction of finger movement on the first sensing surface as follows: towards top surface 4272, the y coordinates are increased; towards bottom surface 4290, the y coordinates are decreased; towards left surface 4247, the x coordinates are decreased; and towards right surface 4222, the x coordinates are increased.
When the user moves her finger on second sensing surface 4221, the y,z coordinates are changed by a y,z translation signal generated by the second transducer based on the direction of finger movement on second sensing surface 4221 as follows: towards top surface 4272, the y coordinates are increased, towards bottom surface 4290, the y coordinates are decreased, towards front surface 4212, the z coordinates are decreased, and towards rear surface 4285, the z coordinates are increased.
If a finger is dragged on first sensing surface 4211: towards top surface 4272, then a positive pitch signal is generated, towards bottom surface 4290, then a negative pitch signal is generated, towards right surface 4222, then a positive yaw signal is generated, towards left surface 4247, then a negative yaw signal is generated. If a finger is dragged on second sensing surface 4221: towards top surface 4272, then a positive roll signal is generated, towards bottom surface 4290, then a negative roll signal is generated, towards front surface 4212, then a negative yaw signal is generated, towards rear surface 4285, then a positive yaw signal is generated.
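The drag-direction mappings of the last three paragraphs can be collected into a single table. How the controller distinguishes translation drags from rotation drags (for example, by force range or operating mode) is not restated here, so the sketch models it as an explicit mode argument, which is an assumption:

```python
# Hedged encoding of the wedge controller's drag-to-signal mappings.

SIGNALS = {
    ("translate", "first",  "toward-top"):    "+y",
    ("translate", "first",  "toward-bottom"): "-y",
    ("translate", "first",  "toward-left"):   "-x",
    ("translate", "first",  "toward-right"):  "+x",
    ("translate", "second", "toward-top"):    "+y",
    ("translate", "second", "toward-bottom"): "-y",
    ("translate", "second", "toward-front"):  "-z",
    ("translate", "second", "toward-rear"):   "+z",
    ("rotate",    "first",  "toward-top"):    "+pitch",
    ("rotate",    "first",  "toward-bottom"): "-pitch",
    ("rotate",    "first",  "toward-right"):  "+yaw",
    ("rotate",    "first",  "toward-left"):   "-yaw",
    ("rotate",    "second", "toward-top"):    "+roll",
    ("rotate",    "second", "toward-bottom"): "-roll",
    ("rotate",    "second", "toward-front"):  "-yaw",
    ("rotate",    "second", "toward-rear"):   "+yaw",
}

def wedge_signal(mode, surface, direction):
    """Look up the signal for a drag on the first or second sensing surface."""
    return SIGNALS.get((mode, surface, direction))

print(wedge_signal("rotate", "second", "toward-rear"))  # +yaw
```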
In a preferred embodiment, the method further includes an operation in which a user presses a finger within a second range of force against either the second sensing surface 4221, to generate an x− translation signal, or the third sensing surface 4246, to generate an x+ translation signal. Preferably, the second range of force is greater than the first range of force used in the method. Again, the third edge sensor 4250 may be used to generate a continuation control signal as described above.
For example, if a user wants to generate an x translation signal, she must swipe her finger along a surface of an available sensor located on a surface of cube shaped body 4320 in the direction of the x axis 4330. For example, a user may execute a finger swipe on the front surface 4321 or the rear surface 4322 of controller 4315b in the direction of x-axis 4330 to generate an x translation signal. If a user wanted to generate a y translation signal from controller 4315f, she would execute a finger swipe in the direction of y-axis 4335 on any of the faces of controller 4315 except for the top surface 4323.
For example, if a user wants to generate a pitch rotation signal, she must swipe her finger along a surface of an available sensor located on a surface of cube shaped body 4320 in the direction of the pitch rotation around x axis 4330. For example, a user may execute a finger swipe on the front surface 4321 or the rear surface 4322 of controller 4315b in the direction of pitch rotation around x axis 4330 while holding another finger against any other available sensor to generate a pitch rotation signal.
The sensors and edge sensors located on left front surface 4425 and right rear surface 4455 may be used to generate an x″ rotation signal, which commands the rotation of an object around an x″ axis. The x″ axis is defined at negative 45 degrees from the x-axis and located on the x,z plane. Each sensor of controller 4405 may be operated to generate a rotation signal by sliding an object on the sensor in the desired direction while touching a second sensor with another object.
The invention has been described herein in terms of several preferred embodiments. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. For example, a variety of types of pressure-sensitive sensors can be utilized with the present invention. Various configurations and combinations of input gestures and commands can be detected by the controller in various embodiments as necessary for a particular application. Also, various types of computer-generated objects and real objects can be controlled with the present invention and be commanded to interact with other objects in an environment. Furthermore, certain terminology has been used for the purposes of descriptive clarity, and not to limit the present invention. The embodiments and preferred features described above should be considered exemplary, with the invention being defined by the appended claims.
Claims
1. A multiple coordinate controller device comprising:
- a three-dimensional body having a first surface portion and a second surface portion which is not coplanar with said first surface portion;
- a first transducer having a first sensing surface, said first transducer being coupled to said first surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said first sensing surface, wherein said first transducer is further capable of providing a first range of z coordinates at a detected x,y coordinate in response to said range of pressure forces on said first sensing surface, said first range of z coordinates provided only if said range of pressure forces are greater than a first threshold pressure;
- a second transducer having a second sensing surface, said second transducer being coupled to said second surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said second sensing surface, wherein said second transducer is further capable of providing a second range of z coordinates of opposite polarity to said first range of z coordinates in response to said range of forces on said second sensing surface, said second range of z coordinates provided only if said range of pressure forces are greater than a second threshold pressure.
2. The multiple coordinate controller device as recited in claim 1 wherein said first transducer detects a first position on said first sensing surface producing a first x,y coordinate and a second position on said first sensing surface producing a second x,y coordinate.
3. The multiple coordinate controller device as recited in claim 2 further comprising a first edge transducer having a first edge sensing surface positioned at least partially around a periphery of said first sensing surface, said first edge transducer being coupled to said first surface portion of said body and being capable of detecting a force on said first edge sensing surface.
4. The multiple coordinate controller device as recited in claim 3 further comprising a second edge transducer having a second edge sensing surface positioned at least partially around a periphery of said second sensing surface, said second edge transducer being coupled to said second surface portion of said body and being capable of detecting a force on said second edge sensing surface.
5. The multiple coordinate controller device as recited in claim 4, wherein said first edge transducer provides a continuation control signal in response to said force applied to said first edge sensing surface, wherein said continuation control signal commands a continuation of movement in a direction determined by said first detected x,y coordinate and said second detected x,y coordinate.
6. The multiple coordinate controller device as recited in claim 5 wherein said first and second sensing surfaces and said first and second edge sensing surfaces are of approximately rectangular shape.
7. The multiple coordinate controller device as recited in claim 6, wherein said first edge sensing surface is tactilely distinguished from said first sensing surface and said second edge sensing surface is tactilely distinguished from said second sensing surface.
8. The multiple coordinate controller device as recited in claim 6, wherein said first edge sensing surface is raised from said first sensing surface and said second edge sensing surface is raised from said second sensing surface.
9. The multiple coordinate controller device as recited in claim 6 wherein said second transducer detects a third and fourth position on said second sensing surface.
10. A multiple coordinate controller device comprising:
- a three-dimensional body having a first surface portion and a second surface portion which is not coplanar with said first surface portion; and
- a sensor consisting essentially of: a first transducer having a first sensing surface, said first transducer being coupled to said first surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said first sensing surface, wherein said first transducer is further capable of providing a first range of z coordinates at a detected x,y coordinate in response to said first range of forces, said first range of z coordinates provided only if said range of pressure forces are greater than a first threshold pressure; a second transducer having a second sensing surface, said second transducer being coupled to said second surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said second sensing surface, wherein said second transducer is further capable of providing a second range of z coordinates of opposite polarity to said first range of z coordinates in response to said second range of forces, said second range of z coordinates provided only if said range of pressure forces are greater than a second threshold pressure;
- whereby said sensor is capable of providing x,y and z coordinates from said first transducer and said second transducer, and
- whereby said first sensing surface and said second sensing surface do not substantially deform under pressure.
11. A two sided controller comprising:
- a body having a first surface and an opposing second surface, said first surface and said second surface having dimensions that are substantially greater than a separation between said first surface and said second surface;
- a first sensor assembly supported by said first surface and including a first generally flat pressure sensor surrounded, at least in part, by a first generally flat edge pressure sensor;
- a second sensor assembly supported by said second surface and including a second generally flat pressure sensor surrounded, at least in part, by a second generally flat edge pressure sensor;
- wherein said body is sized to be contacted on said first sensor assembly with the thumb of a hand and simultaneously on said second sensor assembly with a finger of said hand.
12. A wedge shaped controller comprising:
- a body having a front edge surface having a first area, a back edge surface having a second area less than said first area, and a pair of side edge surfaces coupling said front edge surface to said back edge surface, whereby said body has a wedge shape with angled side edges;
- a first sensor assembly supported by said front edge surface and including a first generally flat pressure sensor surrounded, at least in part, by a first generally flat edge pressure sensor; and
- a second sensor assembly supported by one of said pair of side edge surfaces and including a second generally flat pressure sensor surrounded, at least in part, by a second generally flat edge pressure sensor.
13. A wedge shaped controller as recited in claim 12 further comprising:
- a third sensor assembly supported by the other of said pair of side edge surfaces and including a third generally flat pressure sensor surrounded, at least in part, by a third generally flat edge pressure sensor.
14. A wedge shaped controller as recited in claim 12 wherein said body further has a top surface and a bottom surface, and is provided with a pressure sensor on at least one of said top surface and said bottom surface.
15. A touch-sensitive manually operable controller for providing position control information relative to three axes, the controller comprising:
- a top surface, a bottom surface, and a peripheral side surface;
- a first sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a first Y-signal in response to the position of a force applied to the sensor along the Y-axis and a first Z-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor positioned on the top surface of the controller and generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a first X-signal in response to the position of a force applied to the sensor along the X-axis and a second Z-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second X-signal in response to the position of a force applied to the sensor along the X-axis and a second Y-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor positioned on the side surface of the controller opposite the first sensor and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a third Y-signal in response to the position of a force applied to the sensor along the Y-axis and a third Z-signal in response to the position of a force applied to the sensor along the Z-axis.
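For illustration only: claim 15's four sensors yield redundant axis signals, namely two X-signals, three Y-signals, and three Z-signals. The sketch below shows one way host software might fuse them into a single position report by simple averaging; the key names and the averaging rule are illustrative assumptions, not the patent's method.

# Minimal sketch: fuse the redundant axis signals of claim 15 by averaging.
from statistics import mean

def combine_position_signals(s: dict) -> tuple:
    """Fuse per-sensor axis signals into one (x, y, z) position report."""
    x = mean([s["x1"], s["x2"]])           # X-signals from sensors 2 and 3
    y = mean([s["y1"], s["y2"], s["y3"]])  # Y-signals from sensors 1, 3, 4
    z = mean([s["z1"], s["z2"], s["z3"]])  # Z-signals from sensors 1, 2, 4
    return (x, y, z)

# Example: forces biased toward +x, roughly centered in y and z.
print(combine_position_signals(
    {"x1": 0.4, "x2": 0.6,
     "y1": 0.0, "y2": 0.1, "y3": -0.1,
     "z1": 0.2, "z2": 0.3, "z3": 0.1}))  # fused (x, y, z) report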
16. A touch-sensitive manually operable controller for providing position control information relative to three axes, the controller comprising:
- a top surface, a bottom surface, and a peripheral side surface;
- a first sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a first roll-signal in response to the position of a force applied to the sensor along the Y-axis and a first yaw-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor positioned on the top surface of the controller and generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a second roll-signal in response to the position of a force applied to the sensor along the X-axis and a first pitch-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second pitch-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor positioned on the side surface of the controller opposite the first sensor and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a third roll-signal in response to the position of a force applied to the sensor along the Y-axis and a second yaw-signal in response to the position of a force applied to the sensor along the Z-axis.
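For illustration only: claim 16 maps the same four-sensor geometry to rotation, producing three roll-signals, two pitch-signals, and two yaw-signals. The sketch below shows one plausible fusion, assuming each group is averaged and scaled by a gain; the gain and the sign conventions are assumptions.

# Minimal sketch: average each group of redundant rotation signals (claim 16).
def rotation_from_signals(roll_sigs, pitch_sigs, yaw_sigs, gain=1.0):
    """Return (roll, pitch, yaw) rates from the grouped sensor signals."""
    roll = gain * sum(roll_sigs) / len(roll_sigs)     # three roll-signals
    pitch = gain * sum(pitch_sigs) / len(pitch_sigs)  # two pitch-signals
    yaw = gain * sum(yaw_sigs) / len(yaw_sigs)        # two yaw-signals
    return roll, pitch, yaw

# Example: strokes along Y on the first and fourth sensors command pure roll;
# pitch and yaw stay at 0.0.
print(rotation_from_signals([0.5, 0.0, 0.5], [0.0, 0.0], [0.0, 0.0]))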
17. A three dimensional controller comprising:
- a body having multiple faces wherein a first, a second and a third face of the multiple faces meet at a common apex;
- a first axis controller, positioned on the first face, which is generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first axis controller adapted for providing a first Y-signal in response to the position of a force applied to the first axis controller along the Y-axis and a first Z-signal in response to the position of a force applied to the first axis controller along the Z-axis;
- a second axis controller, positioned on the second face, which is generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second axis controller adapted for providing a first X-signal in response to the position of a force applied to the second axis controller along the X-axis and a second Z-signal in response to the position of a force applied to the second axis controller along the Z-axis; and
- a third axis controller, positioned on the third face, which is generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third axis controller adapted for providing a second X-signal in response to the position of a force applied to the third axis controller along the X-axis and a second Y-signal in response to the position of a force applied to the third axis controller along the Y-axis.
18. A three dimensional controller as recited in claim 17 wherein the first, the second and the third axis controllers comprise trackballs.
19. A three dimensional controller as recited in claim 17 wherein the first, the second and the third axis controllers comprise stick sensors.
20. A three dimensional controller as recited in claim 17 wherein the first, the second and the third axis controllers comprise zone sensors.
21. A three dimensional controller comprising:
- a body having multiple faces wherein a first, a second and a third face of the multiple faces meet at a common apex;
- a first axis controller, positioned on the first face, which is generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first axis controller adapted for providing a first Y-signal in response to the position of a force applied to the first axis controller along the Y-axis and a first Z-signal in response to the position of a force applied to the first axis controller along the Z-axis;
- a second axis controller, positioned on the second face, which is generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second axis controller adapted for providing a first X-signal in response to the position of a force applied to the second axis controller along the X-axis and a second Z-signal in response to the position of a force applied to the second axis controller along the Z-axis;
- a third axis controller, positioned on the third face, which is generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third axis controller adapted for providing a second X-signal in response to the position of a force applied to the third axis controller along the X-axis and a second Y-signal in response to the position of a force applied to the third axis controller along the Y-axis; and
- a fourth axis controller, positioned on the third face, which is generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth axis controller adapted for providing a third Y-signal in response to the position of a force applied to the fourth axis controller along the Y-axis and a third Z-signal in response to the position of a force applied to the fourth axis controller along the Z-axis.
22. A three dimensional controller as recited in claim 21 wherein the first, the second, the third and the fourth axis controllers comprise trackballs.
23. A three dimensional controller as recited in claim 21 wherein the first, the second, the third and the fourth axis controllers comprise stick sensors.
24. A three dimensional controller as recited in claim 21 wherein the first, the second, the third and the fourth axis controllers comprise zone sensors.
25. A multiple coordinate controller device comprising:
- a three-dimensional movable body having a first surface portion, a second surface portion which is not coplanar with said first surface portion, and a tracking surface adapted to engage a reference surface;
- a first transducer having a first sensing surface, said first transducer being coupled to said first surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said first sensing surface, wherein said first transducer is further capable of providing a first range of z coordinates at a detected x,y coordinate in response to said range of pressure forces on said first sensing surface, said first range of z coordinates provided only if said range of pressure forces is greater than a first threshold pressure;
- a second transducer having a second sensing surface, said second transducer being coupled to said second surface portion of said body and being capable of detecting both positions and a range of pressure forces at said positions on said second sensing surface, wherein said second transducer is further capable of providing a second range of z coordinates of opposite polarity to said first range of z coordinates in response to said range of forces on said second sensing surface, said second range of z coordinates provided only if said range of pressure forces is greater than a second threshold pressure; and
- a mouse sensor mechanism supported by said body and adapted to engage said reference surface as said body is moved over said reference surface, wherein said first and second sensing surfaces can be engaged by a finger as said body is engaged by a hand of a user.
26. The multiple coordinate controller device as recited in claim 25 wherein said first transducer detects a first position on said first sensing surface producing a first x,y coordinate and a second position on said first sensing surface producing a second x,y coordinate.
27. The multiple coordinate controller device as recited in claim 26 further comprising a first edge transducer having a first edge sensing surface positioned at least partially around a periphery of said first sensing surface, said first edge transducer being coupled to said first surface portion of said body and being capable of detecting a force on said first edge sensing surface.
28. The multiple coordinate controller device as recited in claim 27 further comprising a second edge transducer having a second edge sensing surface, said second edge transducer being coupled to said second surface portion of said body and being capable of detecting a force on said second edge sensing surface.
29. The multiple coordinate controller device as recited in claim 28, wherein said first edge transducer provides a continuation control signal in response to said force applied to said first edge sensing surface, wherein said continuation control signal commands a continuation of movement in a direction determined by said first x,y coordinate and said second x,y coordinate.
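For illustration only: the continuation control of claim 29 lets a press on the edge transducer keep motion going along the direction defined by the last two detected x,y coordinates. The class below sketches that behavior; the class name, step size, and update scheme are illustrative assumptions, not the patent's implementation.

# Minimal sketch of claim 29's continuation control: a press on the edge
# sensor continues movement along the direction of the last stroke.
import math

class ContinuationTracker:
    def __init__(self, step: float = 1.0):
        self.step = step
        self.direction = (0.0, 0.0)  # unit vector of the last stroke

    def stroke(self, x1: float, y1: float, x2: float, y2: float) -> None:
        """Record a stroke from the first to the second detected x,y coordinate."""
        dx, dy = x2 - x1, y2 - y1
        norm = math.hypot(dx, dy)
        if norm > 0:
            self.direction = (dx / norm, dy / norm)

    def edge_pressed(self, pos: tuple) -> tuple:
        """While the edge sensor is pressed, advance along the stroke direction."""
        return (pos[0] + self.step * self.direction[0],
                pos[1] + self.step * self.direction[1])

# Example: drag right across the pad, then hold the edge to keep scrolling right.
t = ContinuationTracker(step=2.0)
t.stroke(0, 0, 10, 0)
print(t.edge_pressed((100.0, 50.0)))  # -> (102.0, 50.0)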
30. The multiple coordinate controller device as recited in claim 29 wherein said first and second sensing surfaces and said first and second edge sensing surfaces are each approximately rectangular in shape.
31. The multiple coordinate controller device as recited in claim 30, wherein said first edge sensing surface is tactilely distinguished from said first sensing surface and said second edge sensing surface is tactilely distinguished from said second sensing surface.
32. The multiple coordinate controller device as recited in claim 30, wherein said first edge sensing surface is raised from said first sensing surface and said second edge sensing surface is raised from said second sensing surface.
33. The multiple coordinate controller device as recited in claim 30 wherein said second transducer detects a third and fourth position on said second sensing surface.
34. A three dimensional controller comprising:
- a body having multiple faces wherein a first, a second and a third face of the multiple faces meet at a common apex;
- a first sensor, positioned on the first face, which is generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a first roll-signal in response to the position of a force applied to the sensor along the Y-axis and a first yaw-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor, positioned on the second face, which is generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a second roll-signal in response to the position of a force applied to the sensor along the X-axis and a first pitch-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor, positioned on the third face, which is generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second pitch-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor, positioned on the third face, which is generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a third roll-signal in response to the position of a force applied to the sensor along the Y-axis and a second yaw-signal in response to the position of a force applied to the sensor along the Z-axis.
35. A three dimensional controller as recited in claim 34 wherein the first, the second, the third and the fourth sensors comprise trackballs.
36. A three dimensional controller as recited in claim 34 wherein the first, the second, the third and the fourth sensors comprise stick sensors.
37. A three dimensional controller as recited in claim 34 wherein the first, the second, the third and the fourth sensors comprise zone sensors.
38. A touch-sensitive manually operable controller for providing position control information relative to three axes, the controller comprising:
- a top surface, a front surface, and a peripheral side surface;
- a first sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a first Y-signal in response to the position of a force applied to the sensor along the Y-axis and a yaw-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor positioned on the top surface of the controller and generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a first X-signal in response to the position of a force applied to the sensor along the X-axis and a first Z-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second X-signal in response to the position of a force applied to the sensor along the X-axis and a second Y-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor positioned on the side surface of the controller opposite the first sensor and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a third Y-signal in response to the position of a force applied to the sensor along the Y-axis and a second Z-signal in response to the position of a force applied to the sensor along the Z-axis.
39. An input device comprising:
- a plurality of pressure sensitive sensors on a body wherein each pressure sensitive sensor of the plurality of pressure sensitive sensors can be manipulated by a finger;
- wherein said body is a mouse body having a lower surface for engagement with a reference surface for relative movement with respect thereto, said lower surface of said mouse body being provided with at least one sensor for sensing x and y degrees of freedom, said mouse body further having an upper surface;
- wherein said pressure sensitive sensors are each at least one of a button, touch tablet, trackball and joystick and are accessible from said upper surface of said mouse body; and
- wherein at least one of said pressure sensitive sensors senses a degree of freedom other than the x and y degrees of freedom.
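For illustration only: claim 39 combines conventional x/y mouse tracking on the lower surface with upper-surface pressure sensitive sensors that contribute degrees of freedom beyond x and y. The sketch below merges the two into one report and also reflects the behavior later recited in claim 44 (no signal without pressure) via a deadband; the report layout, field names, and deadband value are illustrative assumptions.

# Minimal sketch: merge lower-surface x/y tracking with an upper-surface
# pressure sensor that adds a further degree of freedom (claims 39 and 44).
def build_report(mouse_dx: int, mouse_dy: int, top_pressure: float,
                 deadband: float = 0.25) -> dict:
    """Combine x/y motion with a z value from the upper pressure sensor."""
    report = {"dx": mouse_dx, "dy": mouse_dy, "dz": 0.0}
    if top_pressure > deadband:                 # sensor is silent below deadband
        report["dz"] = top_pressure - deadband  # extra degree of freedom
    return report

print(build_report(5, -3, 0.75))  # motion plus pressure -> nonzero dz (0.5)
print(build_report(5, -3, 0.1))   # pressure below deadband -> dz stays 0.0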
40. An input device comprising:
- a movable mouse body having a tracking surface adapted to engage a reference surface and a curved upper surface provided with at least two two-degrees-of-freedom pressure sensitive sensors to provide multiple degrees of freedom; and
- a sensor mechanism supported by said mouse body and adapted to engage said reference surface as said body is moved over said reference surface to track at least two degrees of freedom.
41. The input device as recited in claim 40 wherein the at least two pressure sensitive sensors comprise two pressure sensitive sensors.
42. The input device as recited in claim 40 wherein the at least two pressure sensitive sensors comprise three pressure sensitive sensors.
43. The input device as recited in claim 40 wherein the at least two pressure sensitive sensors comprise four pressure sensitive sensors.
44. The input device as recited in claim 40, wherein said pressure sensitive sensors do not transmit a signal when a pressure is not present on said pressure sensitive sensors.
45. A touch-sensitive manually operable controller for providing position control information relative to three axes, the controller comprising:
- a top surface, a front surface, and a peripheral side surface;
- a first sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a roll-signal in response to the position of a force applied to the sensor along the Y-axis and a yaw-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor positioned on the top surface of the controller and generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a first X-signal in response to the position of a force applied to the sensor along the X-axis and a first Z-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second X-signal in response to the position of a force applied to the sensor along the X-axis and a first Y-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor positioned on the side surface of the controller opposite the first sensor and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a second Y-signal in response to the position of a force applied to the sensor along the Y-axis and a second Z-signal in response to the position of a force applied to the sensor along the Z-axis.
46. A touch-sensitive manually operable controller for providing position control information relative to three axes, the controller comprising:
- a top surface, a front surface, and a peripheral side surface;
- a first sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the first sensor adapted for providing a first Y-signal in response to the position of a force applied to the sensor along the Y-axis and a yaw-signal in response to the position of a force applied to the sensor along the Z-axis;
- a second sensor positioned on the top surface of the controller and generally aligned on and orthogonal relative to a Y-axis of a Cartesian coordinate system, the second sensor adapted for providing a first X-signal in response to the position of a force applied to the sensor along the X-axis and a pitch-signal in response to the position of a force applied to the sensor along the Z-axis;
- a third sensor positioned on the side surface of the controller and generally aligned on and orthogonal relative to a Z-axis of a Cartesian coordinate system, the third sensor adapted for providing a second X-signal in response to the position of a force applied to the sensor along the X-axis and a second Y-signal in response to the position of a force applied to the sensor along the Y-axis; and
- a fourth sensor positioned on the side surface of the controller opposite the first sensor and generally aligned on and orthogonal relative to an X-axis of a Cartesian coordinate system, the fourth sensor adapted for providing a third Y-signal in response to the position of a force applied to the sensor along the Y-axis and a first Z-signal in response to the position of a force applied to the sensor along the Z-axis.
47. An input device comprising:
- a mouse body having a lower surface and an upper surface;
- an x-y position sensor associated with said lower surface of said mouse body and providing output signals representing two degrees of freedom; and
- a plurality of touch panels spaced apart on said upper surface of said mouse body for providing a plurality of output signals representing a plurality of additional degrees of freedom.
48. The input device as recited in claim 47 wherein the additional degrees of freedom include a pitch.
49. The input device as recited in claim 47 wherein the additional degrees of freedom include a yaw.
50. The input device as recited in claim 47 wherein the additional degrees of freedom include a roll.
51. The input device as recited in claim 47 wherein the additional degrees of freedom include at least pitch, yaw and roll.
52. The input device as recited in claim 47 wherein the additional degrees of freedom consist of pitch, yaw and roll.
53. The input device as recited in claim 52 wherein the body includes a bottom surface, a top surface, and two opposite side surfaces both of which are in contact with the top surface and the bottom surface, and wherein the pitch, yaw and roll touch panels are located on the body such that one touch panel is located on each of the two side surfaces and the top surface.
54. An input device comprising:
- a body;
- an x-y position sensor providing output signals representing two degrees of freedom; and
- a plurality of touch panels spaced apart on said body for providing a plurality of output signals representing a plurality of additional degrees of freedom;
- wherein the additional degrees of freedom consist of pitch, yaw and roll, and wherein the body includes a bottom surface and a top surface, where the pitch, yaw and roll touch panels are each located on the top surface.
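For illustration only: claims 47 through 54 dedicate separate touch panels to pitch, yaw, and roll. One plausible host-side mapping is sketched below, reading a normalized finger position from each panel and scaling it into a rotation command; the panel names, the normalization, and the gain are illustrative assumptions.

# Minimal sketch: map three dedicated touch panels to pitch/yaw/roll commands.
def orientation_from_panels(panels: dict, gain: float = 90.0) -> dict:
    """Scale normalized panel positions (-1.0 .. 1.0) into degrees."""
    return {
        "pitch": gain * panels.get("pitch_panel", 0.0),
        "yaw": gain * panels.get("yaw_panel", 0.0),
        "roll": gain * panels.get("roll_panel", 0.0),
    }

# Example: a half-stroke on the yaw panel only.
print(orientation_from_panels({"yaw_panel": 0.5}))
# -> {'pitch': 0.0, 'yaw': 45.0, 'roll': 0.0}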
U.S. Patent Documents
3490059 | January 1970 | Paulsen et al. |
4017858 | April 12, 1977 | Kuipers |
4216467 | August 5, 1980 | Colston |
4302011 | November 24, 1981 | Pepper, Jr. |
4313113 | January 26, 1982 | Thornburg |
4394773 | July 19, 1983 | Ruell |
4448083 | May 15, 1984 | Hayashi |
4550221 | October 29, 1985 | Mabusth |
4550617 | November 5, 1985 | Fraignier et al. |
4601206 | July 22, 1986 | Watson |
4684801 | August 4, 1987 | Carroll et al. |
4704909 | November 10, 1987 | Grahn et al. |
4720805 | January 19, 1988 | Vye |
4763100 | August 9, 1988 | Wood |
4787051 | November 22, 1988 | Olson |
4798919 | January 17, 1989 | Miessler et al. |
4811608 | March 14, 1989 | Hilton |
4823634 | April 25, 1989 | Culver |
4839838 | June 13, 1989 | LaBiche et al. |
4954817 | September 4, 1990 | Levine |
4983786 | January 8, 1991 | Stevens et al. |
4988981 | January 29, 1991 | Zimmerman et al. |
5095303 | March 10, 1992 | Clark et al. |
5128671 | July 7, 1992 | Thomas, Jr. |
5165897 | November 24, 1992 | Johnson |
5178012 | January 12, 1993 | Culp |
5185561 | February 9, 1993 | Good et al. |
5262777 | November 16, 1993 | Low et al. |
5327161 | July 5, 1994 | Logan et al. |
5335557 | August 9, 1994 | Yasutake |
5354162 | October 11, 1994 | Burdea et al. |
5376948 | December 27, 1994 | Roberts |
5389865 | February 14, 1995 | Jacobus et al. |
5408407 | April 18, 1995 | Lefkowitz et al. |
5429140 | July 4, 1995 | Burdea et al. |
5440476 | August 8, 1995 | Lefkowitz et al. |
5459382 | October 17, 1995 | Jacobus et al. |
5483261 | January 9, 1996 | Yasutake |
5506605 | April 9, 1996 | Paley |
5543590 | August 6, 1996 | Gillespie et al. |
5555894 | September 17, 1996 | Doyama et al. |
5565891 | October 15, 1996 | Armstrong |
5703623 | December 30, 1997 | Hall et al. |
5717423 | February 10, 1998 | Parker |
5729249 | March 17, 1998 | Yasutake |
5774113 | June 30, 1998 | Barnes |
5778885 | July 14, 1998 | Doyama et al. |
6087599 | July 11, 2000 | Knowles |
6091406 | July 18, 2000 | Kambara et al. |
6597347 | July 22, 2003 | Yasutake |
Foreign Patent Documents
2060173 | April 1981 | GB |
2254911 | October 1992 | GB |
60-95331 | May 1985 | JP |
60-129635 | July 1985 | JP |
61-292028 | December 1986 | JP |
1244515 | July 1986 | SU |
WO 92/08208 | May 1992 | WO |
WO 93/11526 | June 1993 | WO |
WO 95/20787 | August 1995 | WO |
WO 95/20788 | August 1995 | WO |
Other Publications
- Krueger, "Artificial Reality: Perceptual Systems," pp. 54-75 (1983).
- Kameyama, Ken-ichi, and Ohtomi, Koichi, "A Shape Modeling System with a Volume Scanning Display and Multisensory Input Device," Presence, vol. 2, No. 2, pp. 104-111 (Spring 1993).
- Murakami, Tamotsu, and Nakajima, Naomasa, "Direct and Intuitive Input Device for 3-D Shape Deformation," Human Factors in Computing Systems (Apr. 24-28, 1994).
Type: Grant
Filed: Jul 22, 2005
Date of Patent: Sep 1, 2009
Assignee: Sandio Technology Corp. (San Jose, CA)
Inventor: Taizo Yasutake (Cupertino, CA)
Primary Examiner: Bipin Shalwala
Assistant Examiner: Vincent E. Kovalick
Attorney: TIPS Group
Application Number: 11/188,284
International Classification: G08C 21/00 (20060101);