6DoF INSIDE-OUT TRACKING GAME CONTROLLER INITIAL REGISTRATION
Methods and apparatus are provided for 6DoF inside-out tracking game control. In one novel aspect, a multi-processor architecture is used for VI-SLAM. In one embodiment, the apparatus obtains overlapping image frames and sensor inputs of an apparatus, wherein the sensor inputs comprise gyrometer data, accelerometer data, and magnetometer data, splits computation work onto a plurality of vector processors to obtain six degree of freedom (6DoF) outputs of the apparatus based on a splitting algorithm, and performs a localization process to generate 6DoF estimations and a mapping process to generate a cloud of three-dimensional points associated with the descriptors of the map. In one embodiment, the localization process and the mapping process are configured to run sequentially. In another embodiment, the localization process and the mapping process are configured to run in parallel.
This application is a continuation-in-part, and claims priority under 35 U.S.C. § 120 from nonprovisional U.S. patent application Ser. No. 17/075,853, entitled “6DOF INSIDE-OUT TRACKING GAME CONTROLLER”, filed on Oct. 21, 2020, the subject matter of which is incorporated herein by reference. Application Ser. No. 17/075,853, in turn, claims priority under 35 U.S.C. § 120 from nonprovisional U.S. patent application Ser. No. 15/874,842, entitled “6DOF INSIDE-OUT TRACKING GAME CONTROLLER”, filed on Jan. 18, 2018, the subject matter of which is incorporated herein by reference. Application Ser. No. 15/874,842, in turn, claims priority under 35 U.S.C. § 119 from U.S. Provisional Application No. 62/447,867, entitled “A MULTI AGENT STEROSCOPIC CAMERA BASED POSITION AND POSTURE TRACKING SYSTEM FOR PORTABLE DEVICE”, filed on Jan. 18, 2017, the subject matter of which is incorporated herein by reference.
TECHNICAL FIELD
The present invention relates generally to VR/AR systems, and more particularly to a 6DoF inside-out tracking game controller, head mount device inside-out tracking, and multi-agent interaction; and to robot position and posture tracking, route planning, and collision avoidance.
BACKGROUND
Virtual reality (VR) and augmented reality (AR) are expected to continue to grow rapidly. The development of new technology in both hardware and software could help the AR/VR market grow even faster. As more applications use the technology, the system is required to run faster, be more accurate, and perform localization without any drift.
The SLAM (simultaneous localization and mapping) algorithm is widely adopted to improve such systems. However, there are three issues with the SLAM algorithm: the scale factor, the drift problem (even with a stereo camera), and the long processing time. The state-of-the-art solutions for the drift are on-line loop closure and on-line re-localization (used in ORB-SLAM). Both are based on a bag-of-words approach (which stores every patch). But updating this bag of words is very time- and CPU-consuming.
Further, six degree of freedom (6DoF) data of a game controller are needed for the AR/VR system. However, today's game controllers are not efficient and fast enough to produce the 6DoF data in real time. The three dimensions of translation in 3D space are not obtained by game controllers on the market.
Enhancement and improvement are required for the tracking game controller.
SUMMARY
Methods and apparatus are provided for 6DoF inside-out tracking game control. In one novel aspect, a multi-processor architecture is used for VI-SLAM. In one embodiment, the apparatus obtains overlapping image frames and sensor inputs of an apparatus, wherein the sensor inputs comprise gyrometer data, accelerometer data, and magnetometer data, splits computation work onto a plurality of vector processors to obtain six degree of freedom (6DoF) outputs of the apparatus based on a splitting algorithm, and performs a localization process to generate 6DoF estimations and a mapping process to generate a cloud of three-dimensional points associated with the descriptors of the map. In one embodiment, the splitting algorithm involves: dividing a current frame into N equal parts; and each of a set of selected vector processors processing a portion of the current frame based on a split-by-corner rule, wherein the split-by-corner rule determines whether each pixel is a corner and classifies each pixel determined to be a corner into a compressed descriptor by converting a sub-image centered on the pixel to a 16-float descriptor using a base matrix. In one embodiment, the localization process and the mapping process are configured to run sequentially, wherein the localization process is split over all of the vector processors and the mapping process is split over all of the vector processors. In another embodiment, the localization process and the mapping process are configured to run in parallel, wherein the localization process is split over a first subset of the vector processors and the mapping process is split over the remaining subset of the vector processors.
In one embodiment, the 6DoF outputs are in one format selected from an output format group comprising: six floating-point values, with three for the translation in 3D space and three for the rotation space; twelve floating-point values, with three for the translation in 3D space and nine for the rotation space; six fixed-point values, with three for the translation in 3D space and three for the rotation space; and twelve fixed-point values, with three for the translation in 3D space and nine for the rotation space.
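As an illustrative sketch (not part of the claimed subject matter), the six-float and twelve-float output formats could be packed as follows, assuming the three rotation values of the six-float format are an axis-angle vector and the nine rotation values of the twelve-float format are a row-major rotation matrix; the function names are hypothetical:

```python
import numpy as np

def pack_6dof_six_floats(translation, rotation_matrix):
    """Six-float format: 3 translation values + 3 rotation values (axis-angle)."""
    R = np.asarray(rotation_matrix, dtype=float)
    cos_theta = (np.trace(R) - 1.0) / 2.0
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        axis_angle = np.zeros(3)  # no rotation
    else:
        # Rotation axis from the skew-symmetric part of R, scaled by the angle
        # (the theta = pi edge case is omitted for brevity).
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        axis_angle = axis * theta
    return np.concatenate([np.asarray(translation, dtype=float), axis_angle])

def pack_6dof_twelve_floats(translation, rotation_matrix):
    """Twelve-float format: 3 translation values + 9 rotation-matrix entries."""
    return np.concatenate([np.asarray(translation, dtype=float),
                           np.asarray(rotation_matrix, dtype=float).ravel()])
```

The fixed-point variants would quantize the same values to integer representations.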
In one novel aspect, a map of the background environment is generated in advance. This reference map is a batch of visual features with pre-estimated 3D positions and visual feature descriptions. The map is used for real-time localization. During the localization process, the 3D positions of the features are not updated, so the map is static. Because the map is known, there is no need to map the environment constantly. And because the map is static, the localization will not drift. The potential issue with this approach is a failure of localization when the device moves too far from the reference map. This problem is solved using a light SLAM algorithm.
In one embodiment, a client-server topology is used to deploy the mapping and localization technology, which makes the client lighter in computing and less power-hungry. There could be one or more clients working on the server network. Alternatively, the client works on its own without a server, at the cost of power consumption.
In another embodiment, tracking and localization are based on a known map. This allows a fast processing speed, which is useful for VR/AR applications. A calibrated stereo camera is provided in this approach to fix the scale factor problem.
Other embodiments and advantages are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.
The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings.
Game controller 100 also includes an inertial measurement unit (IMU) 131, an optional external memory card (SD Card) 132, and one or more wireless interfaces 133, such as a WiFi interface and a Bluetooth interface. An interface module 111 communicates with and controls the sensors, IMU 131, SD 132, and the wireless interfaces, such as WiFi 133 and Bluetooth 134. A hardware accelerator and image signal processing unit 112 helps image processing of the sensor inputs. IMU 131 detects movements, rotations, and the magnetic heading of game controller 100. In one embodiment, IMU 131 is an integrated 9-axis sensor for the detection of movements, rotations, and magnetic heading. It comprises a triaxial, low-g acceleration sensor, a triaxial angular rate sensor, and a triaxial geomagnetic sensor. IMU 131 senses the orientation, angular velocity, and linear acceleration of game controller 100. In one embodiment, game controller 100 processes IMU data at a frame rate of at least 500 Hz.
In one embodiment, a plurality of cameras are mounted on the outer case of the game controller to generate overlapping views for the game controller. Using multiple cameras with overlapping views has many advantages compared to a monocular solution: the scale factor of the 3D motion does not drift; 3D points seen in the overlapping area can be triangulated without any motion of the device; matching in the overlapping area is faster and more accurate using epipolar geometry; and the global field of view is wider, which increases accuracy and reduces jittering.
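To illustrate the triangulation-without-motion advantage, a minimal sketch follows, assuming a calibrated, rectified stereo pair with focal length f (in pixels), baseline B (in meters), and pixel coordinates already centered at the principal point; the function name and parameters are illustrative, not from the original disclosure:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, baseline):
    """Back-project one matched pixel pair from a rectified stereo frame
    into a 3D point in the left-camera frame, without any device motion.
    xl, xr: horizontal pixel coordinates in the left/right images
    y:      vertical pixel coordinate (same row in both images after rectification)
    f:      focal length in pixels; baseline: camera separation in meters."""
    disparity = xl - xr            # epipolar geometry: same row, shifted column
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # pinhole back-projection
    Y = y * Z / f
    return np.array([X, Y, Z])
```

A monocular system would need two poses (and thus motion) to triangulate the same point.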
In one novel aspect, the VI-SLAM algorithm is split to run on a plurality of processors based on a splitting algorithm and the sensor inputs.
In one embodiment, the feature detection and extraction procedure 510 is split to run on N vector processors following the splitting rule. Step 511 divides the current frame to be processed into N equal parts. Step 512 assigns each frame part to a corresponding vector processor. Each processor processes one part of the frame following a predefined algorithm. First, corners are determined. For each pixel pi, described by a 2D coordinate in the image, and an adjustable threshold t, pi is determined to be a corner if there exists a set of K contiguous pixels in the neighbor circle which are all brighter than (pi+t) or all darker than (pi−t). In some embodiments, threshold t is in the range 5&lt;t&lt;200. In another embodiment, K is in the range 5&lt;K&lt;13. In yet another embodiment, the neighbor circle has a radius of three pixels. Subsequently, at the second step, each corner pixel pi is classified, using an n×n sub-image centered on pi, into a compressed descriptor. This is done using a base matrix to convert each sub-image to a 16-float descriptor. The base matrix is computed with a singular value decomposition on a large set of selected features. In one embodiment, the n×n sub-image is 11×11. Let P=(p1, . . . , pn) be the list of feature points (2D coordinates in the image) detected from the current frame. Let D=(d1, . . . , dn) be the list of descriptors, pairing each feature point with its associated descriptor.
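The corner test and descriptor compression described above can be sketched as follows. This is a minimal illustration assuming a FAST-style 16-pixel circle of radius three and a precomputed base matrix; the circle offsets, function names, and default values of t and K are illustrative:

```python
import numpy as np

# Offsets of a 16-pixel Bresenham circle of radius 3 around a candidate pixel.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=20, k=9):
    """Pixel (x, y) is a corner if K contiguous circle pixels are all
    brighter than p+t or all darker than p-t."""
    p = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    # Duplicate the ring so a contiguous run may wrap around the circle.
    run_bright = run_dark = best_bright = best_dark = 0
    for v in ring + ring:
        run_bright = run_bright + 1 if v > p + t else 0
        run_dark = run_dark + 1 if v < p - t else 0
        best_bright = max(best_bright, run_bright)
        best_dark = max(best_dark, run_dark)
    return best_bright >= k or best_dark >= k

def compress_descriptor(patch, base_matrix):
    """Project a flattened n-by-n sub-image onto a 16-row base matrix
    (e.g. derived from an SVD over a large set of training patches),
    yielding a 16-float compressed descriptor."""
    return base_matrix @ np.asarray(patch, dtype=float).ravel()
```

For an 11×11 sub-image, the base matrix would be 16×121, so each 121-pixel patch compresses to 16 floats.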
In another embodiment, the matching procedure 520 is split onto N vector processors. Step 521 splits the descriptor list into N parts. In one embodiment, the descriptor list is equally split into N parts. Step 522 performs descriptor matching for each descriptor range Di by matching Di with a subset of the map descriptors. The descriptors are split into N equal ranges. For each vector processor i, a matching algorithm applies to Di. Processor i (0&lt;i&lt;N+1) runs the matching algorithm on the range Di. The descriptors Di are matched with a subset of the descriptors of the map, LocalMap (a subset of the map), using the cross-matching method: each match is a pair of descriptors (da, db) such that da is the best candidate for db among the descriptors Di of the current frame and db is the best candidate for da among the descriptors of the map LocalMap. Some of the descriptors of the map are associated with 3D points geo-referenced in the world (this 3D estimation is performed by the mapping algorithm). So the matching associates each descriptor di of D with a 3D point p3d of the LocalMap. The output of the matching is a list of descriptor pairs associating the feature points P to the 3D points of the map: Mi=((p1,p3d1), . . . , (pn,p3dn)).
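The cross-matching rule can be sketched as follows, assuming Euclidean distance between 16-float descriptors; the function name and brute-force distance computation are illustrative (a real implementation would restrict each processor to its own range Di):

```python
import numpy as np

def cross_match(desc_frame, desc_map):
    """Cross-matching: keep pair (a, b) only if map descriptor b is the
    nearest neighbor of frame descriptor a AND a is the nearest frame
    descriptor of b. Inputs are (num, 16) arrays; returns index pairs."""
    # Pairwise Euclidean distances between every frame and map descriptor.
    d = np.linalg.norm(desc_frame[:, None, :] - desc_map[None, :, :], axis=2)
    best_map = d.argmin(axis=1)    # for each frame descriptor, best map index
    best_frame = d.argmin(axis=0)  # for each map descriptor, best frame index
    # Keep only mutually-best pairs.
    return [(a, b) for a, b in enumerate(best_map) if best_frame[b] == a]
```

Each retained pair then links a feature point to the 3D map point associated with the matched map descriptor.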
In yet another embodiment, the 6DoF estimation procedure 530 is split onto N processors. The input of this step is the N lists Mi (from the matching). The 6DoF estimation minimizes, for each pair (pi,p3di) in M, the 2D difference between the projection of p3di into the current frame and pi. This minimization is performed with the non-linear least squares algorithm of Levenberg-Marquardt combined with the M-estimator (robust method) of Geman-McClure. The robust Levenberg-Marquardt method is run on N processors. Once split, each processor i computes the reprojection errors of all the elements of Mi, denoted Ei, and the Jacobian of the error function of all elements of Mi, denoted Ji. Subsequently, the N lists Ei are merged by concatenation into E, and the N lists Ji into J. The median absolute deviation of E (MAD) is computed. The 6DoF estimation is obtained by solving the linear system (JᵀJ)X = JᵀE·MAD, where X is the update of the 6DoF.
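A minimal sketch of the merged solve step follows, assuming E and J have already been concatenated from the per-processor results; the damping term and function name are illustrative additions, and the Geman-McClure weighting itself is not reproduced here:

```python
import numpy as np

def robust_solve_step(E, J, damping=1e-3):
    """One damped update X for the 6DoF pose, solving (J^T J) X = J^T E * MAD.

    E: (m,) stacked reprojection errors (concatenation of the Ei lists)
    J: (m, 6) stacked Jacobian rows (concatenation of the Ji lists)
    The MAD of the residuals scales the right-hand side, reducing the
    influence of outlier matches on the update."""
    mad = np.median(np.abs(E - np.median(E)))
    A = J.T @ J + damping * np.eye(J.shape[1])  # damping keeps A invertible
    b = J.T @ E * mad
    return np.linalg.solve(A, b)                # X: 6-vector pose update
```

In an iterative scheme, X would update the pose estimate and the errors and Jacobians would be recomputed until convergence.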
In one novel aspect, using the multi-processor architecture, the efficiencies of the localization process and the mapping process are greatly improved.
The 6DoF tracking method provides an efficient way to track the 6DoF pose for an AR/VR system. One configuration of the AR/VR system involves tracking one device in the AR/VR environment. Another configuration includes a head mount device (HMD) and one or more handheld/hand-control game controllers. In an HMD tracking system, more than one 6DoF tracking device is present. A world coordinate system is used to coordinate the multiple devices. The world coordinate system is a common coordinate system shared by multiple devices in a system. With multiple devices sharing the same coordinate system, an origin/starting point is needed.
Each tracking device tracks 6DoF poses in a coordinate system 930. Each device generates 6DoF outputs, which include six dimensions of the apparatus: three dimensions (3D) of an orientation in a rotation space 931 and three dimensions of translation in a 3D space 932. Each AR/VR tracking device, such as HMD 901 and game controller 902, obtains and maintains a map. The map is a collection of 2D and 3D points in the space relative to a coordinate system. In an AR system, both real objects and virtual objects are presented in the coordinate system. For example, real objects, such as a table 935 and a plate 936, and virtual objects, such as cheese 937, are both presented in the AR system in reference to the coordinate system 930. Each tracking device collects local information through its sensor unit. Each map is generated based on the local information collected.
With multiple tracking devices in the HMD tracking system, each device tracks its own locale information relative to its own coordinate system, which may or may not be the same as the coordinate system 930. The purpose of the registration is to synchronize the coordinate system of each individual device to a world coordinate system. A world coordinate system is a common coordinate system shared by multiple devices in a system. In an AR/VR environment, its origin is typically the starting position of the HMD device. In one embodiment, one base map is shared by multiple devices. One or more devices receive a base map and generate their 6DoF relative to the base map received. The base map may be generated by the HMD and shared with the game controller. In another embodiment, the base map may be generated by the game controller and shared with the HMD. The base map may also be generated remotely by other devices in the AR/VR system. A single base map is shared by multiple devices in the AR/VR system. The base map is in reference to the world coordinate system for the HMD tracking system. Each 6DoF generated against the base map by each tracking device, therefore, is in reference to the world coordinate system. In another embodiment, each device generates its map in its own coordinate system or in reference to the same coordinate system. In the first scenario, when different maps are generated in reference to the same coordinate system, the initialization process synchronizes the maps such that the 6DoF generated by each device are synchronized. In the second scenario, when different maps are generated in reference to different coordinate systems, the different coordinate systems of the devices are synchronized to the world coordinate system at the initialization process. In other embodiments, with multiple tracking devices, a combination of the shared map and individual maps/coordinate systems may be used for different devices.
The synchronization methods presented below apply to the combination scenarios. A world coordinate system is initialized/synchronized to be shared by multiple devices.
In one novel aspect, the virtual view of the world coordinate system 930 is streamed to one or more viewing devices 960, such as a mobile device 961 and/or a desktop/laptop 962. In one embodiment, the virtual view of 930 is streamed in real time. In another embodiment, the virtual view is broadcast to multiple remote viewing devices.
In another novel aspect, a 2D 6DoF tag is used to register the initial 6DoF in the HMD system.
Although the present invention has been described in connection with certain specific embodiments for instructional purposes, the present invention is not limited thereto. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.
Claims
1. An apparatus operating in an augmented reality/virtual reality (AR/VR) system comprising:
- a sensor unit that tracks a series of motions of the apparatus and collects locale information of the apparatus relative to a world coordinate system shared by a plurality of devices;
- an initialization button that registers an initial six degree of freedom (6DoF) pose of the apparatus when pressed, wherein the initial 6DoF pose comprises a predefined three dimensions (3D) coordinate and an initial pose of the apparatus relative to the world coordinate system; and
- one or more processors configured to communicate with the AR/VR system and to generate a series of 6DoF outputs of the apparatus based on the series of motions and the locale information, wherein each 6DoF output comprises six dimensions of the apparatus including 3D of an orientation in a rotation space and three dimensions of translation in a 3D space.
2. The apparatus of claim 1, wherein the sensor unit comprises one or more cameras and an inertial measurement unit (IMU), and wherein the IMU detects movements, rotations, and magnetic headings of the apparatus.
3. The apparatus of claim 1, wherein the sensor unit tracks motions of the apparatus and generates pose changes relative to the predefined coordinate in the world coordinate system.
4. The apparatus of claim 1, further comprising a map module that obtains map data to generate a base map in reference to the world coordinate system and register the initial pose to the base map in reference to the world coordinate system.
5. The apparatus of claim 4, wherein the apparatus determines real-time 6DoF outputs based on the map data and locale information relative to the initial pose.
6. A system comprising:
- a head mount device (HMD) with an HMD sensor unit and one or more processors configured to communicate with the system and to generate a series of six degree of freedom (6DoF) outputs of the HMD relative to a world coordinate system shared by a plurality of devices;
- a game controller with a controller sensor unit and one or more processors configured to communicate with the system and to generate a series of 6DoF outputs of the controller relative to the world coordinate system; and
- a two dimensional (2D) 6DoF tag attached to a first device selected from the HMD and the game controller, wherein a second device determines an initial relative pose to the first device when the 6DoF tag is in the field of view of the second device.
7. The system of claim 6, wherein the HMD sensor unit comprises one or more HMD cameras and an HMD inertial measurement unit (IMU) and the controller sensor unit comprises one or more controller cameras and a controller IMU.
8. The system of claim 6, wherein the 6DoF tag is an AprilTag.
9. The system of claim 6, wherein the controller sensor unit tracks motions of the game controller and generates relative pose changes relative to the initial relative pose.
10. The system of claim 6, wherein the one or more processors of the HMD device is further configured to split computation work onto a plurality of vector processors to obtain 6DoF outputs based on a splitting algorithm.
11. A method for an augmented reality/virtual reality (AR/VR) system, comprising:
- collecting locale information of an apparatus, wherein a series of six degree of freedom (6DoF) outputs of the apparatus is generated based on the locale information, and wherein each 6DoF output comprises six dimensions of the apparatus including three dimensions of an orientation in a rotation space and three dimensions of translation in a 3D space;
- obtaining a map in reference to a world coordinate system of the AR/VR system, wherein the map in reference to the world coordinate system is generated based on locale information collected by the apparatus;
- initializing an initial 6DoF position of the apparatus relative to the map in reference to the world coordinate system by an initializing process selected from pressing a physical button on the apparatus or capturing a two-dimensional (2D) 6DoF tag; and
- performing a localization process to generate localization information of the apparatus based on the initial position of the apparatus.
12. The method of claim 11, wherein the initializing is performed by pressing the physical button on the apparatus, and wherein the initializing registers the apparatus to a predefined coordinate on the map in reference to the world coordinate system.
13. The method of claim 11, wherein the initializing is performed by capturing a 2D 6DoF tag, and wherein the 2D 6DoF tag is attached to a second apparatus.
14. The method of claim 13, wherein the apparatus determines an initial relative pose to the second apparatus when the 2D 6DoF tag is in the field of view of the apparatus.
15. The method of claim 11, further comprising: receiving real-time map data generated by the AR/VR system.
16. The method of claim 15, wherein the apparatus determines 6DoF outputs based on the received real-time map data.
17. The method of claim 11, further comprising: splitting computation work onto a plurality of vector processors to obtain six degree of freedom (6DoF) outputs of the apparatus based on a splitting algorithm.
18. The method of claim 17, wherein the splitting algorithm involves: dividing a current frame into N equal parts; and each of a set of selected vector processors processing a portion of the current frame based on a split-by-corner rule, wherein the split-by-corner rule determines whether each pixel is a corner and classifies each pixel determined to be a corner into a compressed descriptor by converting a sub-image centered on the pixel to a 16-float descriptor using a base matrix.
19. The method of claim 11, wherein the localization process and mapping process are configured to run sequentially, wherein the localization process is split over all of the vector processors and the mapping process is split over all the vector processors.
20. The method of claim 11, wherein the localization process and mapping process are configured to run in parallel, wherein the localization process is split over a first subset of the vector processors and the mapping process is split over the remaining subset of the vector processors.
Type: Application
Filed: Sep 27, 2022
Publication Date: Jan 19, 2023
Inventors: Datta Ramadasan (Cournon d'Auvergne), Qiong Lin (Fremont, CA), Hao Ye (San Jose, CA)
Application Number: 17/935,897