Method and system for making natural movement in displayed 3D environment
Techniques for rendering the motions of a selected object as naturally as possible in a 3D environment are disclosed. According to one aspect of the techniques, relative changes in position of a controller in the physical world are used to control the motion of a selected (target) object in a virtual world by imparting inertia into the selected object in relation to the changes in speed and duration of the controller's motion. As a result, the movements of the object are rendered naturally in a displayed scene in accordance with the changes in motion or position of the controller.
This is a continuation of co-pending U.S. application Ser. No. 12/835,755, entitled “Method and system for making a selection in 3D virtual environment”, now U.S. Pat. No. 8,384,665, which is a continuation-in-part of co-pending U.S. application Ser. No. 12/020,431, entitled “Self-Contained Inertial Navigation System for Interactive Control Using Movable Controllers”, which claims the priority of a provisional application Ser. No. 60/990,898, filed Nov. 28, 2007, and is a continuation-in-part of U.S. application Ser. No. 11/486,997, filed Jul. 14, 2006, now U.S. Pat. No. 7,702,608.
BACKGROUND OF THE INVENTION
1. Technical Field
The invention generally relates to the area of human-computer interaction, and more particularly relates to techniques for selecting objects being displayed or controlling motion and configuration of a virtual object being displayed. With one embodiment of the present invention, inputs from a hand-held controller containing inertial sensors allow a user to control an object within a two or three dimensional representation shown to the user, and allow the user to directly manipulate one or more chosen objects by mapping their location and orientation in a virtual space to those of the user in a physical world. Various embodiments of the present invention may be used in computer applications, video games, or on-screen controls for electronic devices.
2. Related Art
There are a number of man-machine interface devices, such as computer mice, joysticks, remote controllers and trackballs, for controlling computer applications and video games. Each of these devices is well understood in the art and primarily focuses on converting the motions of a human being into an analog motion represented on a two-dimensional screen. For example, a joystick translates the position of the control stick relative to a center into a velocity that is applied to a point located on a two-dimensional screen. According to an established convention, a left-right motion of the control stick corresponds to left-right motion on the screen and a forward-backward motion of the control stick corresponds to up-down motion on the screen.
This basic approach of remapping inputs to control motions has been extended to cover three-dimensional computer applications using objects such as 3D mice and 3D joysticks. One approach of doing this is described in U.S. Pat. No. 5,898,421. Most of these approaches available today, however, have the disadvantage that the users must learn an artificial convention for how their motions in the physical world correspond to the motions of a pointer in a computer representation. In general, users prefer natural interactions with a computer application.
A natural interaction for a user would be to have direct control over the motion of an object in a displayed scene. For example, in a sword-fighting game, a natural control for the user would be to have the sword displayed in the game with the same orientation and position as the motion controller in his/her hand. Currently this is possible by having an external system that measures the exact position and/or orientation of the controller in the physical world. A system for doing this is described in U.S. Pat. No. 4,862,152, but it requires the addition of bulky sonic sensors and emitters in the vicinity of the user. Essentially, the system restricts the motions of the user to a predefined range.
Another natural interaction users desire is the ability to directly point at objects by using their hand to point at the image shown on the display. A two-dimensional solution to this particular style of interaction was introduced by Nintendo in the Wii system (US Patent Publication No. US20070060384); however, it requires additional modification of the user's environment: a sensor bar must be added to define a limited range and field of view, restricting the movements of the user to a small area in front of the sensor bar. It would be desirable to have an approach that requires less modification to the user's environment and allows natural three-dimensional pointing interactions.
There is thus a need for techniques that facilitate full control of the motions of displayed objects, in both position and orientation, in six degrees of freedom. Such techniques shall also work in situations in which there are no additional sensors or emitters, or in which a motion controller cannot be detected by some or all of the sensors. There is another need for techniques that provide the ability for users to directly select or point at a portion of a displayed virtual environment in 3D, where the portion of the displayed virtual environment may be an object or a part of a scene in the virtual environment.
SUMMARY OF INVENTION
This section summarizes some aspects of the present invention and briefly introduces some preferred embodiments. Simplifications or omissions in this section as well as in the abstract or the title of this description may be made to avoid obscuring the purpose of this section, the abstract and the title. Such simplifications or omissions are not intended to limit the scope of the present invention.
Generally speaking, the present invention describes techniques for interpreting user motions of a motion controller in order to allow natural and intuitive interfaces for controlling a computer application or video game. According to one aspect of the present invention, a motion-sensitive device, also referred to as a motion controller herein, held by a user contains inertial sensors providing sensor signals sufficient to derive position and orientation of the controller in six degrees of freedom. Depending on implementation, the user may or may not be in the field of view of a camera. The position and orientation of the motion controller in six degrees of freedom is tracked by analyzing sensor data from the inertial sensors in conjunction with video images, if available, from the camera. This position and orientation are then used for fine control of one or more objects rendered on a display shown to the user. Large motions of the controlled object(s) can then be indicated by the use of specific gestures and button combinations via the motion controller.
According to another aspect of the present invention, the position and orientation of the motion controller are used to control a virtual ray that is used to select one or more objects in a three-dimensional (3D) scene shown on a display, as if the user had a real laser pointer that crosses from the physical world into the 3D virtual scene being displayed. One embodiment of this aspect allows the user to optionally use a defined ray to select one or more points or objects in a 3D space by using a secondary input device to control a distance along the ray being used.
According to still another aspect of the present invention, the relative changes in position of the controller in the physical world are used to control the motion of a selected (target) object in a virtual world by imparting inertia into the selected object in relation to the changes in speed and duration of the controller's motion. As a result, the movements of the target object are rendered naturally in a displayed scene in accordance with the changes in motion or position of the controller.
The present invention may be implemented in different forms, including an apparatus, a method or a part of a system. According to one embodiment, the present invention is a system for a user to interact with a virtual environment, the system comprises: a controller including a plurality of inertia sensors providing sensor signals sufficient to derive position and orientation of the controller in six degrees of freedom; a processing unit, receiving the sensor signals, configured to derive the position and orientation of the controller from the sensor signals, map movements of the controller to movements of at least one object in the virtual environment, and allow a mode of operation in which a velocity of the controller is mapped to a rate of change from a scene of the virtual environment to another scene of the virtual environment. As a result, the scene of the virtual environment being displayed is caused to drift over a period of time to a different scene of the virtual environment, after the user activates a mechanism on the controller to cause the scene of the virtual environment to have a sudden movement.
According to another embodiment, the present invention is a method for a user to interact with a virtual environment, the method comprises: receiving sensor signals from a controller sufficient to derive position and orientation of the controller in six degrees of freedom, wherein the controller includes a plurality of inertia sensors that generate the sensor signals when being manipulated by the user; deriving the position and orientation of the controller from the sensor signals; mapping movements of the controller to movements of at least one object in the virtual environment; and allowing a mode of operation in which a velocity of the controller is mapped to a rate of change from a scene of the virtual environment to another scene of the virtual environment.
According to still another embodiment, the present invention is a system for a user to select a portion of a 3D virtual environment being displayed, the system comprises: a controller including a plurality of inertia sensors providing sensor signals sufficient to derive changes in position and orientation of the controller in six degrees of freedom; a processing unit, receiving the sensor signals, configured to derive the position and orientation of the controller from the sensor signals, and generate a ray originating from a position selected by an application to an intersection with a display screen provided to display the virtual environment, wherein the ray is further projected into the virtual environment by a ray tracing technique. Depending on implementation, the position selected to originate the ray may be that of a controller used by the user to interact with the 3D virtual environment or of a secondary device (e.g., a joystick or another controller).
According to yet another embodiment, the present invention is a method for a user to select a portion of a 3D virtual environment being displayed, the method comprises: receiving sensor signals from a controller sufficient to derive position and orientation of the controller in six degrees of freedom, wherein the controller includes a plurality of inertia sensors that generate the sensor signals when being manipulated by the user; deriving the position and orientation of the controller from the sensor signals; and generating a ray originating from the controller to an intersection with a display screen provided to display the 3D virtual environment, wherein the ray is further projected into the 3D virtual environment by a ray tracing technique.
Other objects, features, benefits and advantages, together with the foregoing, are attained in the exercise of the invention in the following description and resulting in the embodiment illustrated in the accompanying drawings.
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
The detailed description of the invention is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will become obvious to those skilled in the art that the invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the present invention.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the order of blocks in process flowcharts or diagrams representing one or more embodiments of the invention does not inherently indicate any particular order, nor does it imply any limitation of the invention.
Referring now to the drawings, in which like numerals refer to like parts throughout the several views.
According to one embodiment, the controller 104 and the processing unit 106 are integrated as a single device, in which case the processing unit 106 is configured to send instructions to cause the display 101 to render a virtual environment for the user 103 to interact with. In the following description, it is assumed that the controller 104 is held by the user 103 while the processing unit 106 communicates with the controller 104 wirelessly. As shown in the figure, the user 103 is using the controller 104 to perform some movements, referred to as source motion 105, in reaction to a virtual environment being displayed on the display 101. Depending on the application, the virtual environment being displayed may be in 2D or 3D. The display 101 may also be a 3D display device or a 3D projector. The source motion 105 in this embodiment is the natural motion performed by the user 103 through his or her movements in six degrees of freedom, including three translational movements and three rotational movements.
The motion 105 is sensed by inertial sensors embedded in the controller 104 and also captured by the camera 102 in one embodiment. The sensor signals from the controller 104 and the camera 102 are coupled to or transmitted to the processing unit 106. According to one embodiment, the processing unit 106 is loaded with a module that is executed therein to derive the position and orientation of the controller 104 from the sensor signals, with or without the image signals from the camera 102. The derived position and orientation (motion) of the controller 104 are in turn used to control the motion of a selected object 107 in a virtual environment. One of the important features, objectives and advantages of this invention is that a target object is controlled with full six degrees of freedom, thus enabling functions of the target object that depend on its orientation.
As an example, consider a case in which a target object represents a flashlight: the motion of the body of the flashlight can be controlled in accordance with the motion 108 derived from the controller 104, while the orientation of the target object determines which areas 109 within the virtual environment are illuminated. As a second example, a target object may represent a pointing device, such as a virtual laser pointer or a virtual rifle with a laser sight, and that device could be controlled by the user in order to select other objects within the virtual environment.
It shall be noted that while there is a one-to-one mapping between the original motion of the controller 104 and the motion of a target object, linear and non-linear transformations may be applied when appropriate for a particular application the user is engaged in. As detailed further below, such a mapping relationship may be transformed linearly or nonlinearly to optimize the movements by the user in a physical world and corresponding movements of the target object in a virtual world being displayed.
In operation, the user manipulates the controller by waving it, performing other actions, or entering commands in response to a scene on a display. The signals from the inertia sensors as well as from the camera are transported to the processing unit at 204. The processing unit is configured to determine the motion from the signals. According to one embodiment, a module is configured and executed in the processing unit as a controller tracker, or simply a tracker. Upon the activation of the tracker at 202, the tracker starts to track the motion of the controller in six degrees of freedom at 206.
According to one embodiment, the controller includes a plurality of self-contained inertial sensors that are capable of sensing along six axes: three for linear acceleration along the three linear axes, and three for angular motion. For example, a combination of one tri-axial accelerometer and one tri-axial gyroscope in a controller will function effectively. However, those skilled in the art will be aware that various other combinations of sensors will also function effectively.
At 206, upon receiving the (sensor and video) signals, the processing unit is configured to integrate and combine gyroscope and accelerometer readings to provide estimates of changes in the controller over a period of time.
Referring back to 206, the estimates of orientation, velocity and position of the controller may be updated at each time step as follows:
orientation(t+dt)=orientation(t)+Gyro(t)*dt (1)
velocity(t+dt)=velocity(t)+(orientation(t)*(Acc(t)−(Centripetal Accelerations from rotation at time t))−Gravity)*dt; (2)
position(t+dt)=position(t)+velocity(t+dt)*dt (3)
In equation (1) above, Gyro(t) includes three orthogonal readings of angular velocity at time t. Multiplying by dt, the time elapsed since the previous readings, gives the angular change around each axis since the previous readings. This change can be applied to the previous estimate of orientation. Embodiments for making these computations depend on the form in which the orientation information is stored. In the games industry, quaternions are commonly used for this purpose, in which case the angular change from the Gyro(t)*dt term can be converted to a quaternion rotation and added using quaternion arithmetic.
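As a minimal sketch only (not the patent's implementation), the quaternion-based update implied by equation (1) may look as follows in Python, assuming the gyroscope reading is a three-component vector of angular velocities in radians per second:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_from_rotation_vector(rv):
    """Convert a rotation vector (axis * angle, radians) to a unit quaternion."""
    angle = np.linalg.norm(rv)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rv / angle
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def update_orientation(orientation, gyro, dt):
    """Equation (1): integrate the angular change Gyro(t)*dt into the orientation."""
    dq = quat_from_rotation_vector(np.asarray(gyro) * dt)
    q = quat_mul(orientation, dq)      # apply body-frame rotation (a convention choice)
    return q / np.linalg.norm(q)       # renormalize to limit numerical drift
```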
In equation (2) above, Acc(t) includes three orthogonal readings of acceleration at time t in the frame of reference of the object. If the accelerometers are not physically co-located with the gyroscopes, the computation first subtracts any accelerations resulting from the accelerometers rotating around the location of the gyroscopes. For example, if the accelerometers are displaced along the z-axis of the object, the following adjustments would need to be made to the accelerometer readings (since Acc(t) and Gyro(t) are vectors, [0], [1], and [2] are used below to refer to their individual scalar components):
Increase Acc(t+dt)[0] by AA[1]*zOffset−(Gyro(t+dt)[0]*Gyro(t+dt)[2])*zOffset (4)
Increase Acc(t+dt)[1] by −AA[0]*zOffset−(Gyro(t+dt)[1]*Gyro(t+dt)[2])*zOffset (5)
Increase Acc(t+dt)[2] by (Gyro(t+dt)[0]^2+Gyro(t+dt)[1]^2)*zOffset (6)
where
AA[0]=(Gyro(t+dt)[0]−Gyro(t)[0])/dt (7)
AA[1]=(Gyro(t+dt)[1]−Gyro(t)[1])/dt (8)
The adjusted accelerometer readings are translated from the object frame to the world frame using the current orientation of the object. Acceleration due to gravity (approximately 9.8 m/s/s on planet Earth's surface) is subtracted. The changes in each of the three dimensions of the object position can be found by multiplying by dt*dt.
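The following is a hedged sketch of equations (2) through (8), assuming the current orientation is available as a world-from-body rotation matrix, gravity points along the world z-axis at 9.8 m/s², and the accelerometers are displaced from the gyroscopes by zOffset along the body z-axis as in the text; the sign conventions are assumptions:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.8])  # m/s^2, world frame (sign convention is an assumption)

def lever_arm_correction(acc, gyro, gyro_prev, dt, z_offset):
    """Equations (4)-(8): remove accelerations caused by the accelerometers
    rotating about the gyroscope location, for a displacement along body z."""
    acc = np.array(acc, dtype=float)
    aa0 = (gyro[0] - gyro_prev[0]) / dt                        # eq. (7)
    aa1 = (gyro[1] - gyro_prev[1]) / dt                        # eq. (8)
    acc[0] += aa1 * z_offset - gyro[0] * gyro[2] * z_offset    # eq. (4)
    acc[1] += -aa0 * z_offset - gyro[1] * gyro[2] * z_offset   # eq. (5)
    acc[2] += (gyro[0] ** 2 + gyro[1] ** 2) * z_offset         # eq. (6)
    return acc

def update_velocity_position(position, velocity, rot_world_from_body, acc_body, dt):
    """Equations (2) and (3): rotate the corrected body-frame acceleration into
    the world frame, remove gravity, then integrate velocity and position."""
    acc_world = rot_world_from_body @ acc_body - GRAVITY
    velocity = velocity + acc_world * dt
    position = position + velocity * dt
    return position, velocity
```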
With these or equivalent computations, the processing unit can generate estimates of position and orientation of the controller in six degrees of freedom. Due to the accumulation of errors in sensor readings, e.g., caused by noise, limited precision, or other factors, or possibly due to errors in transmission of the time series data, the estimates of position and orientation are likely to differ at least somewhat from the actual position and orientation of the self-tracking object. Over time the difference may become large enough to be relevant to the control of a target object and the underlying application (e.g., a video game). For example, the difference may become large enough that an animation generated from the inferred position and orientation estimates appears more and more unrealistic to the player as time progresses.
From time to time, the processing unit receives additional information regarding position and orientation of the controller that becomes available at an identifiable time, with the effect that the module in the processing unit is able to determine a new instantaneous position and orientation. For example, this can happen if a player stops moving the controller, with the effect that an identifiable period of quiescence is entered. According to one embodiment, the images from the camera can be analyzed to infer the new instantaneous position and orientation of the controller at that time.
When more precise information or other corrective information becomes available, the information can be used for more than just obtaining more reliable estimates at that moment in time. In particular, the information can be used to infer something about the errors over at least some portion of the recent history of sensor readings. By taking those error estimates into account, a new trajectory of the controller can be calculated. Additional details of determining the motion of the controller may be found in co-pending U.S. application Ser. No. 12/020,431.
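The details of this trajectory correction are deferred to the co-pending application. Purely as an illustration of the idea, the drift observed at a correction time could be redistributed across the buffered history under the (assumed) simplification that the error accumulated roughly uniformly over time:

```python
import numpy as np

def redistribute_drift(buffered_positions, corrected_latest_position):
    """Illustrative only: spread the observed position error linearly across the
    recent history, assuming the drift accumulated roughly uniformly over time.
    buffered_positions: array-like of shape (N, 3), oldest sample first."""
    positions = np.asarray(buffered_positions, dtype=float)
    error = np.asarray(corrected_latest_position, dtype=float) - positions[-1]
    weights = np.linspace(0.0, 1.0, len(positions))[:, None]  # 0 at oldest, 1 at newest
    return positions + weights * error
```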
At 208, one or more transformations are determined for transforming the derived motion of the controller into motion of one or more target objects. According to one embodiment, the tracker is configured to maintain a mapping between the position and orientation of the controller and the output configuration of the target object. This mapping is initially specified per application, depending on which target object is selected at a given time for that application. The details of this mapping are highly application-dependent but generally include some set of the following transformations.
A) Scaling: The natural comfortable motions of the controller for an end user are likely confined to a fairly small area, such as a one-foot cube, while the desired range of possible positions for a target object may form a much larger area in the virtual environment. This scaling factor may be a linear factor of the motions from the user; for example, each 1 cm motion of the controller corresponds to a 1 m movement of a target object in the virtual environment. For other applications a non-linear mapping may be more appropriate: if the user moves his/her hand further from a natural rest position, the corresponding changes in position of the target object could be much larger. For example, moving the controller 10 cm from the rest position may correspond to moving a target object 1 m in the virtual environment, while a motion of 20 cm from the rest position may move the same target object 4 m (see the sketch following these transformations). The rest position can be assumed by default to be the location at which the trigger to track was activated. Alternately, additional calculations of the location of the controller relative to the body of the end user could be used to estimate the natural rest position for a human.
B) Orientation transformations: For some applications, a direct mapping from the controller orientation to the target orientation may be desirable. There may, however, be situations in which a scaling of the rotations is desired, either to reduce the amount of wrist motion required of the user or to give finer control over the rotations of a virtual object or tool. Other applications may wish to disallow certain orientations of the target object because they would correspond to undesirable configurations in the virtual world (e.g., because of mechanical constraints or intersections with other objects). As an example, a golf game may wish the club to correspond to the controller being used by a user, but disallow orientations that would place the club head into avatars representing players in the game. This could be handled by representing the orientation of the club as the allowable orientation closest to the actual orientation of the controller.
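The following sketch illustrates the scaling transformation in A) above, assuming a simple power-law mapping of the controller's offset from its rest position. With the illustrative constants shown it reproduces the 10 cm to 1 m and 20 cm to 4 m example, but the gain and exponent are assumptions, not values prescribed by the text:

```python
import numpy as np

def map_offset_to_virtual(controller_offset_m, linear_gain=100.0, exponent=2.0):
    """Map the controller's offset from its rest position (meters) to a target-object
    offset in the virtual world. With exponent=1 this is plain linear scaling; with
    exponent=2 and gain=100 it reproduces the example in the text
    (10 cm -> 1 m, 20 cm -> 4 m). The constants are illustrative assumptions."""
    offset = np.asarray(controller_offset_m, dtype=float)
    distance = np.linalg.norm(offset)
    if distance < 1e-9:
        return np.zeros(3)
    scaled_distance = linear_gain * distance ** exponent
    return offset / distance * scaled_distance
```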
Once the transformation(s) is determined at 208, it is applied to mapping from the motion of the controller to the motion of the selected object at 210. In other words, the controller configuration is mapped to the target object at 210 and the resulting target object configuration is sent to the application at 212. The display is then updated to show the motion of the target object in a virtual environment at 214.
In many applications, it is desirable to give the user some direct control over the mapping being used. In one embodiment, the user is allowed to control the strength of the linear scaling so as to have fine control of the target object during a motion. For example, a user can specify the scale using a second controller when one is available. Separate gestures for zooming in and out can be defined for the second controller. These gestures are recognized using technology such as that found in U.S. Pat. No. 7,702,608, which is hereby incorporated by reference. Alternately, the user can select, with a button or other means, to adjust the scaling and then move either the primary controller or a secondary controller closer to or further from the screen to adjust the amount of zoom. A simple mapping would involve moving towards the screen to zoom in, causing motions to move a smaller distance in the world space, and moving away to zoom out, causing motions to move larger distances in the world space.
According to one embodiment, the user can control the movement of the target object relative to its environment by imparting a continuing impulse, or inertia, to the target object. This allows the location of the target object within the virtual environment to keep changing even after the trigger is no longer active and the user has stopped moving the controller. There are two main approaches to doing this; which one is more appropriate is application-specific.
In the first approach, after the user enters a command (e.g., pressing down a button), a portion of the virtual environment being displayed at the moment is grabbed. The motion of the controller is then mapped into an impulse that is imparted onto that portion of the environment when the motion trigger is released. This impulse then causes the portion of the environment to drift, relative to the target object, for some time. This is visually similar to an interaction used with some touch screen devices in which a user makes a rapid motion and then releases his/her finger or stylus in order to cause the screen display to move rapidly before drifting to a stop.
In the second approach, no additional command is required to “grab” the environment. Instead, if the motion trigger is released while the target object still has a significant velocity, the target object is “thrown” and continues in the same direction afterwards for some time.
It shall be noted that, after releasing the trigger, both approaches can be regarded as equivalent by inverting the direction of the impulse implied in the first approach and applying it to the target object instead of the environment. This inertia would naturally decay over time at some rate similar to the action of friction in the real world. In both approaches, if the user desires a more sudden stop, the system will respond to a new trigger of the controller by stopping the drift of the target object or the environment and letting the user resume control from the current position.
As an example, one objective of a game is to build different forms of military units and direct them to defend or attack opponents at different locations on a map. In a setting like this, one embodiment described in the present invention is used to rapidly select individual equipment, e.g., simply using a controller to point at a plane and then taking control of positioning the plane. Small adjustments to the position of the plane can be accomplished by directly mapping the movements of the controller to the movements of the plane, while large adjustments to the position can be accomplished rapidly by using the second approach described above to “throw” the plane towards a desired location, leaving the user free to select another. In a real-time game like this there are occasions where the user would like to change their viewpoint from one portion of the map to another rapidly. To accomplish this, the user could “grab” the map being displayed on the screen and spin it to one side or another in order to rapidly move their viewpoint to a new location. Thus, if the user wishes to look at a location far to the east of the current view, the user could “grab” the current view, make a rapid motion to the left (west) to start the map spinning, and then grab the map again when the location of interest is at the center of the view in order to stop the map from sliding further.
The above approach of inertia control of the target object is illustrated by a flowchart or process 260.
Accordingly, the inertia (the current velocity of the target object relative to its environment) is recorded at 264. In other words, the visual velocity of a selected object just before the user releases the motion trigger is captured. This visual velocity serves as the initial velocity, which is then run down gradually to effectuate the sudden movement so as to show a visually smooth and natural transition. Thus the target object (e.g., an avatar) is set in motion according to the controller and the inertia recorded at 264, based on the velocity of the controller at the moment the user first activates a trigger to make a sudden movement (e.g., change a scene). In one embodiment, velocities below some minimum threshold will not impart inertia to the target object, in order to avoid unintended invocation of this approach. When the controller trigger is no longer active, the inertia will decrease over time from its captured value at 262 according to a predefined decay function. An example decay function is a proportionate decay in which the previous velocity/inertia is scaled downwards at each time step by a constant factor, alpha, where alpha is between 0 and 1.
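A sketch of this throw-and-drift behavior is given below; the decay factor alpha, the minimum-velocity threshold, and the class structure are illustrative assumptions rather than values from the text:

```python
class InertiaController:
    """Sketch of throw-and-drift: capture the object's velocity when the motion
    trigger is released, then decay it proportionately each frame."""

    def __init__(self, alpha=0.95, min_speed=0.05):
        self.alpha = alpha            # per-step decay factor, 0 < alpha < 1
        self.min_speed = min_speed    # below this speed, no inertia is imparted
        self.velocity = (0.0, 0.0, 0.0)

    def on_trigger_released(self, object_velocity):
        """Record the object's visual velocity at the moment of release."""
        speed = sum(v * v for v in object_velocity) ** 0.5
        self.velocity = tuple(object_velocity) if speed >= self.min_speed else (0.0, 0.0, 0.0)

    def on_trigger_pressed(self):
        """A new trigger stops the drift and lets the user resume direct control."""
        self.velocity = (0.0, 0.0, 0.0)

    def step(self, position, dt):
        """Advance the drifting object and apply the proportionate decay."""
        position = tuple(p + v * dt for p, v in zip(position, self.velocity))
        self.velocity = tuple(v * self.alpha for v in self.velocity)
        return position
```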
A linear offset is applied at 268 (also applicable to 208 described above) to the mapping between the physical space of the controller and the virtual environment to produce a refined mapping; the linear offset is essentially a ratio of what happens in the physical space to what is being displayed in the virtual environment.
It shall be noted that although the approach described above describes how the motion of the user affects a target object in three-dimensional space, some applications may wish to restrict the user to a two-dimensional space. This can be done by use of the linear scaling input and simply setting all positions along one of the axes to be mapped to the same value in the world space. Adjustments along this fixed dimension could be controlled as described for changing the scaling sensitivity or by having a threshold for the amount of displacement in the direction perpendicular to the restricted plane and having large displacements revert to a 3D mapping with scaling along all three axes.
In one embodiment, instead of a one-to-one mapping between the controller position and the target object, the approach maps the movements of the controller into movements of the target object in a non-linear fashion. This can allow small motions to generate more precision for small adjustments, while large motions generate rapid changes in position. To see how this differs from the non-linear scaling described above, two ways of moving the controller 20 cm to the right may be considered. In one case, the user rapidly moves in one motion, while in the other case, the user makes 4 small slow motions in the same direction. Under the non-linear mapping, the last of the 4 small motions will travel further than the first but both approaches of moving 20 cm will result in the same target object position. In the alternate movement mapping method described here, all 4 of the small motions will each result in roughly the same change in the target object's position, but the target object will have moved a much smaller distance in total than it did when one large motion was made.
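One hedged way to realize this alternate mapping is to scale each frame's controller displacement by a speed-dependent gain before accumulating it into the target-object position, much like pointer acceleration on a computer mouse; the gain curve and constants below are assumptions:

```python
import numpy as np

def apply_delta_mapping(target_position, controller_delta, dt,
                        base_gain=10.0, accel_gain=40.0):
    """Accumulate a non-linearly scaled controller displacement into the target
    position. Faster motions (larger delta/dt) get a disproportionately larger
    gain, so repeated slow motions cover less total distance than one fast motion
    of the same combined length. Gains are illustrative assumptions."""
    delta = np.asarray(controller_delta, dtype=float)
    speed = np.linalg.norm(delta) / max(dt, 1e-6)   # controller speed, m/s
    gain = base_gain + accel_gain * speed           # gain grows with speed
    return np.asarray(target_position, dtype=float) + gain * delta
```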
Once this yaw is determined, it is possible to estimate the depth 340 of each of the 2D points from the images, resulting in a set of consecutive points 341 in a 3D space determined by the camera. As a result, the points 338 in the tracker space, the points 341 in the camera space, and the angle 339 between them can be synthesized at 342 to produce a new set of consecutive points 343 that incorporates much less error than the original sets of points 338 and 341.
As described above, the position and orientation of the controller can be used to define a ray from the controller to an intersection with the display screen, and the ray is then projected into the virtual environment by a ray tracing technique.
There are many ways to perform the ray tracing. Details of various ray tracing techniques are described in “Computer Graphics: Principles and Practice” by Foley, van Dam, Feiner, and Hughes, 1990, which is hereby incorporated by reference. For a 2D application, this geometry corresponds to a plane. Using the appropriate distance from the user to the display gives a natural mapping between what the user is pointing at and the corresponding position on the screen. One possibility for determining this plane is to have an initial calibration step during which the user uses the motion controller to point at each of the four corners of the screen. This calibration can then be used to find the actual size and location of the screen with respect to the camera being used to find the absolute position of the controller.
According to one embodiment, no assumptions are made about the screen other than that it is flat; a minimum of eight measurements may then be taken to calibrate the position, orientation and size of the screen. The location of the four corners of the screen can be represented mathematically as (c_x+/−width/2, c_y+/−height/2, c_z)*R, where (c_x, c_y, c_z) is the position of the center of the screen in the camera's coordinate system, width and height are the size of the screen, and R is a 3×3 rotation matrix produced from the rotations around the x, y, and z axes in the camera's coordinate system. When the user points the controller at a given corner, the angle between the known location of the controller and the screen corner point can be calculated as an equation of these eight variables. Taking the difference between this angle and the orientation of the controller yields an error measure for that reading. Doing this for two different controller locations (such as the minimum and maximum playing range for the user), and making sure the two points are not co-linear with any of the corners of the screen, yields a series of eight independent equations. Numerous methods and tools are available for optimizing such simultaneous equations, including the well-known program Matlab. More accurate estimates can be obtained by taking additional measurements or making additional assumptions about the orientation of the screen, such as assuming that its bottom edge is aligned with the ground plane and that the screen is aligned vertically with gravity.
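As one possible realization (an assumption, not a solver prescribed by the text), the eight screen parameters can be estimated with a nonlinear least-squares fit over the corner-pointing measurements, for example using SciPy; the measurement format, corner ordering, rotation convention, and initial guess below are placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(rx, ry, rz):
    """Rotations about the camera's x, y, z axes, applied in that order (a convention choice)."""
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def screen_corners(params):
    """Corners of a flat screen described by the eight calibration parameters."""
    c_x, c_y, c_z, width, height, rx, ry, rz = params
    R = rotation_matrix(rx, ry, rz)
    return [R @ np.array([c_x + sx * width, c_y + sy * height, c_z])
            for sx in (-0.5, 0.5) for sy in (-0.5, 0.5)]

def residuals(params, measurements):
    """One residual per measurement: angle between the controller's measured pointing
    direction and the direction from the controller to the predicted corner.
    measurements: list of (controller_position, pointing_direction, corner_index)."""
    corners = screen_corners(params)
    errs = []
    for controller_pos, pointing_dir, corner_index in measurements:
        to_corner = corners[corner_index] - np.asarray(controller_pos, dtype=float)
        to_corner /= np.linalg.norm(to_corner)
        d = np.asarray(pointing_dir, dtype=float)
        d /= np.linalg.norm(d)
        errs.append(np.arccos(np.clip(np.dot(to_corner, d), -1.0, 1.0)))
    return errs

# Usage (placeholders): gather measurements from at least two non-collinear controller
# locations, then fit starting from a rough initial guess x0:
# result = least_squares(residuals, x0=np.array([0, 0, 2.0, 1.0, 0.6, 0, 0, 0]),
#                        args=(measurements,))
```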
This calibration process can then form a basis for the plane in the virtual world corresponding to the display screen. In a 3D application, the geometry used can instead be a 3D model of the objects being displayed, where the location of these objects may be arranged relative to the position of the display device. When intersecting with 3D objects, the object selected may not necessarily match the object that is displayed on the screen at the point where the ray intersects the screen: the 3D objects in the scene are collapsed according to the viewing angle of the camera, and if the angle of the motion controller is different, the ray will diverge from its entry point as it proceeds deeper into the scene. This also allows the application to render the ray itself, if so desired, and could allow the user to select objects that are obscured from view by angling the controller around his or her viewpoint. In an environment in which the display itself is 3D, this is expected to result in much more natural pointing and selection.
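The sketch below illustrates projecting the controller ray into the scene: the ray is intersected with the calibrated screen plane to find its entry point (useful for rendering the ray), and the same ray is tested against scene objects, which are approximated here as bounding spheres purely for illustration:

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Point where the ray hits the plane, or None if parallel or behind the origin."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t > 0 else None

def intersect_ray_sphere(origin, direction, center, radius):
    """Smallest positive distance along the ray to the sphere, or None."""
    direction = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

def pick_object(controller_pos, controller_dir, objects):
    """Select the nearest object (obj_id, center, radius) hit by the controller's ray,
    regardless of what appears under the ray's entry point on the screen."""
    best = None
    for obj_id, center, radius in objects:
        t = intersect_ray_sphere(controller_pos, controller_dir, np.asarray(center), radius)
        if t is not None and (best is None or t < best[1]):
            best = (obj_id, t)
    return best
```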
In another embodiment, the display shows three dimensional images to a user (e.g. 3D TVs or 3D motion projectors). In this embodiment, the application-specific geometry 407 can extend both forward and back from the display in order to allow the user to point at objects that appear to be at locations within his/her local (physical) space. It should be noted that in this case the intersection 408 of the ray with the virtual environment may be closer to the user than the display, and the ray from the user may not intersect the display at all.
The embodiments described above assume that the actual physical position of the motion controller has been determined. Similar levels of control can be achieved by having the ray originate from a position that is selected by an application. Short-duration motion around that point is then possible using the relative position and orientation of the controller, which can be determined without images from the camera; alternatively, the position changes can be ignored and all selection can be done through changes in the orientation of the controller. The assumed position of the motion controller could also be specified by the user through another method of control, such as an analog joystick.
The technique of selecting 3D positions described above requires the existence of a predefined object in the application, whether a 2D plane or a set of virtual objects. In order to allow the user to select an arbitrary 3D point in a 3D space, an additional input is required from the user to specify how far that point should be along the ray. A preferred method of providing this input involves using a second controller. This second controller can be used in a number of ways to provide the depth information for specifying an arbitrary 3D point, as briefly enumerated below.
A) Use the distance of the second controller from a reference point to determine the depth of the point along the ray specified by the first controller. Possible reference points would include the location of the screen, the location of the camera, or the location of the first controller.
B) Use gestures from the second controller to zoom in or out an application-specific distance.
C) Calculate another ray from the second controller using the same methods as described for the first controller and use this to determine the geometry to intersect the first ray with. One possibility would be to define a plane aligned with this second ray and having the same vertical (y-axis) alignment as the screen. This plane could then be used in place of the virtual representation of the screen to determine an intersection point. Pointing the second ray closer to the position of the first controller will move the intersection point closer to the user in the virtual world while moving the ray closer to parallel will move the intersection point further away in the virtual world. A second possibility would be to choose the intersection point as the closest point between the two rays.
The points can be parameterized along the ray by a length variable, giving a series of points (a_i*l_i, b_i*l_i, c_i*l_i) for ray i, where a_i, b_i, and c_i represent the slopes with respect to the x, y, and z axes respectively. The closest point between the two rays can then be represented by minimizing the equation (a_1*l_1−a_2*l_2)^2+(b_1*l_1−b_2*l_2)^2+(c_1*l_1−c_2*l_2)^2 with respect to the two variables l_1 and l_2. A number of methods and tools exist for solving such equations, such as the software program Matlab.
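A sketch of the closest-approach computation is shown below. It is written with explicit ray origins for generality (the text's parameterization corresponds to both origins being at a common reference point) and assumes the rays are not parallel:

```python
import numpy as np

def closest_points_between_rays(o1, d1, o2, d2):
    """Minimize |(o1 + l1*d1) - (o2 + l2*d2)|^2 over (l1, l2) by solving the 2x2
    normal equations. With o1 = o2 = 0 this reduces to the minimization stated in
    the text. Assumes the rays are not parallel (otherwise the system is singular)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    w = o1 - o2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    A = np.array([[a, -b], [-b, c]])
    rhs = np.array([-np.dot(d1, w), np.dot(d2, w)])
    l1, l2 = np.linalg.solve(A, rhs)
    midpoint = ((o1 + l1 * d1) + (o2 + l2 * d2)) / 2.0   # point of closest approach
    return l1, l2, midpoint
```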
The above embodiments assume the existence of a camera and use that camera to obtain an absolute position of the controller relative to that camera. The following alternate embodiments remove the requirement of absolute position and give similar control abilities for the user.
First, the inertia commands can be used as one way to change the position of a target object, requiring only a short measurement of the relative change in position or the accelerations of the controller to determine the desired inertia, as described above. For additional precision, short-term relative tracking in accordance with the mapping described above can be used, with the center of the mapping volume always assumed to be the location of the motion controller when tracking is triggered, and with the additional requirement that tracking only be triggered while the controller is at rest.
As an alternative to using inertia commands for large motions in the virtual space, the relative motions of the user can be mapped to changes in position of the target object, but in a non-linear way such that longer and faster motions will result in disproportionately larger displacements of the target object. Fine control can then be achieved by making brief short motions once the gross location of the target object has been set. A similar approach is used in computer mice in order to allow a combination of quick relocations on the screen and fine control within a limited area. Note that this non-linear mapping can be applied to either the relative displacements in position of the motion controller or the time series of impulses (accelerations) of the motion controller itself.
As a general note, the above embodiments always refer to a 6D configuration, but for many applications the roll of the target object may be irrelevant (e.g., for a flashlight or laser pointer). Thus, for those applications, a 5D configuration would function identically to what is described above if for some reason the final measurement of the roll of the object were unavailable.
The present invention has been described in sufficient detail with a certain degree of particularity. It is understood by those skilled in the art that the present disclosure of embodiments has been made by way of example only and that numerous changes in the arrangement and combination of parts may be resorted to without departing from the spirit and scope of the invention as claimed. Accordingly, the scope of the present invention is defined by the appended claims rather than by the foregoing description of embodiments.
Claims
1. A method for controlling an object in a 3D environment being displayed on a display, the method comprising:
- selecting the object with a hand-held controller including one or more self-contained inertial sensors generating sensor signals;
- computing position and orientation of the controller relative to the display, responsive to the sensor signals;
- capturing inertia of the object relative to the 3D environment at a moment when the object is caused to make a sudden movement;
- updating the inertia of the object with the controller and the captured inertia; and
- effectuating the sudden movement so as to show visually a smooth and natural transition of the object relatively in the 3D environment.
2. The method as recited in claim 1, wherein said effectuating the sudden movement comprises: applying a linear offset to a mapping between a physical space of the controller and the 3D environment to produce a refined mapping.
3. The method as recited in claim 2, wherein the linear offset is a ratio of what happens in the physical space to what is being displayed in the 3D environment.
4. The method as recited in claim 3, further comprising: mapping movements of the controller into movements of the object in a non-linear fashion to allow small motions to generate more precision for small adjustments, while large motions generate rapid changes in position.
5. The method as recited in claim 4, wherein said effectuating the sudden movement further comprises: causing the object to move relatively the same distance in the 3D environment when the controller is moved suddenly in one motion or in several smaller motions.
6. The method as recited in claim 4, wherein said effectuating the sudden movement further comprises: causing the object to move more in the 3D environment when the controller is moved suddenly in one motion instead of in several smaller motions.
7. The method as recited in claim 1, wherein said effectuating the sudden movement comprises:
- detecting a location of the object in the 3D environment; and
- allowing the location of the object within the 3D environment to keep changing when the controller has released a control on the object.
8. The method as recited in claim 1, further comprising:
- determining inertia of the controller; and
- imparting the inertia of the controller into the object in a relationship to changes in speed and duration of the controller.
9. The method as recited in claim 1, wherein relative changes in position and orientation of the controller are used to control motions of the object in the 3D environment by imparting inertia of the controller into the object in a relationship to changes in speed and duration of the controller.
10. A system for controlling an object in a 3D environment being displayed on a display, the system comprising:
- a controller used to select the object, wherein the controller includes one or more self-contained inertial sensors generating sensor signals;
- a computing unit configured to receive the sensor signals from which position and orientation of the controller relative to the display are computed, wherein the computing unit is further configured to: capture inertia of the object relative to the 3D environment at a moment when the object is caused to make a sudden movement; update the inertia of the object with the controller and the captured inertia; and effectuate the sudden movement so as to show visually a smooth and natural transition of the object relatively in the 3D environment.
11. The system as recited in claim 10, wherein the computing unit is caused to apply a linear offset to a mapping between a physical space of the controller and the 3D environment to produce a refined mapping.
12. The system as recited in claim 11, wherein the linear offset is a ratio of what happens in the physical space to what is being displayed in the 3D environment.
13. The system as recited in claim 12, wherein the computing unit is caused to map movements of the controller into movements of the object in a non-linear fashion to allow small motions to generate more precision for small adjustments, while large motions generate rapid changes in position.
14. The system as recited in claim 13, wherein the computing unit is caused to move the object relatively the same distance in the 3D environment when the controller is moved suddenly in one motion or in several smaller motions.
15. The system as recited in claim 13, wherein the computing unit is caused to move the object more in the 3D environment when the controller is moved suddenly in one motion instead of in several smaller motions.
16. The system as recited in claim 10, wherein the computing unit is caused to:
- detect a location of the object in the 3D environment; and
- allow the location of the object within the 3D environment to keep changing when the controller has released a control on the object.
17. The system as recited in claim 10, wherein the computing unit is caused to:
- determine inertia of the controller; and
- impart the inertia of the controller into the object in a relationship to changes in speed and duration of the controller.
18. The system as recited in claim 10, wherein relative changes in position and orientation of the controller are used to control motions of the object in the 3D environment by imparting inertia of the controller into the object in a relationship to changes in speed and duration of the controller.
Type: Application
Filed: Oct 6, 2015
Publication Date: Feb 18, 2016
Inventors: William Robert POWERS, III (San Francisco, CA), Charles MUSICK, JR. (Belmont, CA), Dana WILKINSON (Mountain View, CA)
Application Number: 14/876,684