METHOD FOR ACTING ON AUGMENTED REALITY VIRTUAL OBJECTS

The invention relates to methods for acting on augmented reality virtual objects. According to the invention, the coordinates of a device for creating and viewing augmented reality are determined in relation to a real-world physical marker by means of analysis of an image from a camera of the device; a virtual camera is positioned in calculated coordinates of the device in relation to a physical base coordinate system in such a way that the marker, which is visible to the virtual camera, is positioned in the field of vision thereof, just as the physical marker is positioned in the field of vision of the device camera; a vector is calculated, which corresponds to a direction from the marker to the virtual camera in real-time mode; information is generated relating to all of the movements of the camera in relation to the marker, i.e. rotation, zoom, tilt.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present patent application is a National Stage application of the PCT application PCT/RU2016/050070 filed Nov. 17, 2016 which claims priority to Russian patent application RU2015149499 filed Nov. 18, 2015.

TECHNICAL FIELD OF THE INVENTION

The present invention relates to methods of influencing augmented reality virtual objects, wherein markers of a real three-dimensional space are determined from images obtained from a video camera of a device adapted to create and view augmented reality; a physical base coordinate system tied to the spatial position of the markers of the real three-dimensional space is formed; the coordinates of the device adapted to create and view augmented reality are determined relative to the base coordinate system; the coordinates of the three-dimensional virtual objects of augmented reality are specified in the base coordinate system; and specified actions for modifying the virtual objects are performed, by means of the user's motion, for all or part of the objects from the generated set of virtual objects of augmented reality.

The following terms are used in this description.

A virtual object is an object created by technical means, transmitted to a person through his senses: sight, hearing, and others.

Point of interest (a characteristic point)—a point of the image with high local informativeness. Various formal criteria, called interest operators, have been proposed as numerical measures of this informativeness. The interest operator must ensure sufficiently accurate positioning of the point in the image plane. It is also necessary that the position of the point of interest be sufficiently resistant to photometric and geometric distortions of the image, including uneven changes in brightness, shift, rotation, change of scale, and angular distortions.

The Kalman filter is an effective recursive filter that estimates the state vector of a dynamic system using a series of incomplete and noisy measurements.

Image pyramids are a collection of images obtained from the original image by successive downscaling until a stopping point is reached (in the limit, the final level may be a single pixel).

Smartphone (from English "smart phone")—a phone supplemented with the functionality of a pocket personal computer.

BACKGROUND

Currently, an increasing number of people use various electronic devices and interact with virtual objects. This happens not only in computer games, but also in the learning process and, for example, in the remote trade of goods, when the buyer makes a purchase decision using a virtual model of the goods.

There is a well-known method of influencing the virtual objects of augmented reality, in which markers of real three-dimensional space are determined from the images obtained from the video camera of a device adapted to create and view augmented reality; a physical base coordinate system tied to the spatial position of the markers of the real three-dimensional space is formed; the coordinates of the device adapted to create and view augmented reality are determined relative to the base coordinate system; the coordinates of the three-dimensional virtual objects of augmented reality are specified in the base coordinate system; and specified actions for modifying the virtual objects are performed for all or part of the objects from the generated set of virtual objects of augmented reality; see the description of Russian patent No. 2451982 of May 27, 2012.

This method is the closest to the claimed one in technical essence and achieved technical result, and it is chosen as the prototype of the proposed invention.

The disadvantage of this prototype is that interaction with virtual objects is performed using a separate device that determines the position of the user in space and must respond to changes in that position. Simply changing the position in space of the device for creating and viewing augmented reality does not change the virtual object, apart from changing its orientation on the device's display.

SUMMARY

Based on this original observation, the present invention is mainly aimed at proposing a method for influencing augmented reality virtual objects in which markers of real three-dimensional space are determined from images obtained from a video camera of a device adapted to create and view augmented reality; a physical base coordinate system tied to the spatial position of the markers of the real three-dimensional space is formed; the coordinates of the device adapted to create and view augmented reality are determined relative to the base coordinate system; the coordinates of the three-dimensional virtual objects of augmented reality are specified in the base coordinate system; and the said actions for modifying the virtual objects are performed, by means of user motion, for all or part of the objects from the generated set of virtual objects of augmented reality. The method makes it possible to smooth out at least one of the above-mentioned shortcomings of the prior art, namely to achieve additional interaction with virtual objects by changing the position of the device adapted to create and view augmented reality, where that change of position is associated with additional reactions of the virtual object beyond simply changing the orientation of the virtual object on the device's display, thereby achieving the technical objective.

In order to achieve this objective, the coordinates of the device adapted to create and view augmented reality are determined relative to the actual physical marker by analyzing the image from the device's camera; a virtual camera is placed at the calculated coordinates of the device relative to the physical base coordinate system, so that the marker located in the virtual camera's field of view is visible in the same way as the physical marker located in the field of view of the physical camera of the device; the vector corresponding to the direction from the marker to the virtual camera is calculated in real time; and information about all movements of the camera relative to the marker, i.e. turning, approaching and tilting, is generated by successive iteration in real time.
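As an illustration only (the patent does not prescribe a specific pose estimation algorithm), the following sketch shows one common way to obtain the device coordinates relative to a physical marker and the marker-to-camera vector, using OpenCV's solvePnP; the marker size, corner ordering and camera intrinsics are assumptions.

```python
# Hypothetical sketch (not the patented implementation): estimating the pose of the
# viewing device relative to a square marker, then deriving the vector from the
# marker to the (virtual) camera.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker side length in metres (assumed)

# 3D corners of the marker in the physical base (marker) coordinate system OmXmYmZm
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

def camera_pose_from_marker(image_corners, camera_matrix, dist_coeffs):
    """Return the device position in marker coordinates, which is also the vector
    from the marker origin to the (virtual) camera."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)             # marker-to-camera rotation matrix
    cam_position = (-rot.T @ tvec).ravel()   # camera origin expressed in the marker frame
    return cam_position                      # equals the marker-to-camera vector R
```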

Thanks to these advantageous characteristics, it becomes possible to provide additional interaction with virtual objects by changing the position of the device adapted to create and view augmented reality, said interaction being associated with additional reactions of the virtual object and provided in addition to simply changing the orientation of the virtual object on the device's display. This is because it becomes possible to accurately determine the position of the device adapted to create and view augmented reality, including the direction in which it is pointed. It therefore becomes possible to perform the specified actions for modifying virtual objects, for all or part of the objects of the generated set of virtual objects of augmented reality, in that specific direction.

Note that the vector can be specified in any convenient way: not only by its direction, but also by three coordinates, by one or more coordinates together with one or more angles, by polar coordinates, by Euler angles, or by quaternions.
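For illustration, the same marker-to-camera vector can be expressed in several of these representations; the sketch below converts a Cartesian vector into spherical (polar) coordinates and into a "shortest arc" quaternion. This is standard geometry, not the patented computation, and the reference axis is an assumption.

```python
# Equivalent representations of the marker-to-camera vector.
import numpy as np

def to_spherical(v):
    """Cartesian (x, y, z) -> (radius, polar angle from Z, azimuth in XY plane)."""
    r = np.linalg.norm(v)
    theta = np.arccos(v[2] / r)
    phi = np.arctan2(v[1], v[0])
    return r, theta, phi

def quaternion_from_direction(v, reference=(0.0, 0.0, 1.0)):
    """Unit quaternion (w, x, y, z) rotating the reference axis onto v's direction."""
    a = np.asarray(reference, dtype=float)
    b = np.asarray(v, dtype=float) / np.linalg.norm(v)
    axis = np.cross(a, b)
    w = 1.0 + np.dot(a, b)
    if w < 1e-9:                      # opposite directions: any perpendicular axis works
        return np.array([0.0, 1.0, 0.0, 0.0])
    q = np.array([w, *axis])
    return q / np.linalg.norm(q)
```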

There is an embodiment of the invention in which information about all movements of the camera relative to the marker is generated by analyzing the video stream received from the device adapted to create and view augmented reality.

Thanks to this advantageous characteristic, it becomes possible to calculate in real time the direction in which the device adapted to create and view augmented reality is pointed and, at each subsequent moment, to calculate corrections to the previously calculated position.

There is an embodiment of the invention in which analysis of the image from the device camera is performed by means of an algorithm for searching for points of interest.

Thanks to this advantageous characteristic, it becomes possible to use specialized methods of searching for points of interest, namely:

The SIFT (Scale Invariant Feature Transform) method detects and describes local features of the image. The features it produces are invariant to scale and rotation and are resistant to a number of affine transformations and to noise. The method builds a Gaussian pyramid for the image; the images are then resampled to the same size and their differences are computed. Only those pixels that differ strongly from the others are selected as candidate points of interest; this is done, for example, by comparing each pixel with several neighbors at the same scale and with the corresponding neighbors at a larger and a smaller scale. A pixel is selected as a point of interest only if its brightness is an extremum (a local maximum or minimum) among all of these neighbors.
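A minimal sketch of detecting SIFT points of interest with OpenCV (assuming an OpenCV build that includes SIFT, e.g. opencv-python 4.4+); the input file name is illustrative.

```python
# Detect and describe SIFT points of interest in a single camera frame.
import cv2

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative input
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
print(f"found {len(keypoints)} points of interest")
```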

The PCA-SIFT descriptor (PCA, Principal Component Analysis) is a variation of SIFT in which the descriptor dimensionality is reduced by principal component analysis. This is achieved by finding a space of eigenvectors onto which the feature vectors are subsequently projected.

SURF (Speeded Up Robust Features) is several times faster than SIFT. In this approach, integral images are used to accelerate the search for points of interest. The value at each point of the integral image is computed as the sum of the value at that point and the values of all points above and to the left of it. Using integral images, so-called rectangular filters, which consist of several rectangular regions, are computed in constant time.

The MSER and LLD methods are the most invariant to affine transformations and changes of scale. Both methods normalize the six parameters of affine distortion. MSER ("maximally stable extremal regions") is considered here in more detail; the name comes from sorting the special points by intensity (at the lower and upper levels). A pyramid is constructed in which the initial level, corresponding to the minimum intensity value, is an entirely white image, and the last level, corresponding to the maximum intensity value, is black.

Harris-Affine normalizes the parameters of affine transformations. Harris uses corners as the special regions and identifies key points in scale space using the approach proposed by Lindeberg. Affine normalization is carried out by an iterative procedure that estimates and normalizes the parameters of an elliptical region. At each iteration of the elliptical-region estimation: the difference between the eigenvalues of the second-moment matrix of the selected region is minimized; the elliptical region is normalized to a circular one; and the key point and its scale in scale space are re-estimated.

Hessian-Affine uses blobs instead of corners as the special regions. Local maxima of the determinant of the Hessian matrix are used as the base points. The rest of the method is the same as in Harris-Affine.

ASIFT builds on the idea of combining simulation with the normalization performed by the SIFT method: SIFT itself normalizes rotation, translation and scale, while ASIFT additionally simulates the remaining camera-axis orientation distortions for all images, on both the search side and the query side.

GLOH (Gradient Location-Orientation Histogram) is a modification of the SIFT descriptor built to improve reliability. In effect a SIFT descriptor is computed, but a log-polar grid is used to partition the neighborhood into bins.

DAISY was originally introduced to solve the problem of matching images under significant external changes. This descriptor, in contrast to those discussed previously, operates on a dense set of pixels of the entire image.

The BRIEF descriptor (Binary Robust Independent Elementary Features) provides recognition of identical parts of an image taken from different points of view, while minimizing the number of computations performed. The recognition algorithm reduces to building a random forest (randomized classification trees) or a naive Bayesian classifier on a training set of images and then classifying areas of the test image.
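As a hedged illustration of BRIEF-style binary descriptors in practice, the sketch below uses OpenCV's ORB detector (which builds on BRIEF) together with a Hamming-distance brute-force matcher to find identical parts of two views; file names and parameters are assumptions.

```python
# Match identical image regions seen from two points of view using binary descriptors.
import cv2

img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # illustrative file names
img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)
print(f"{len(matches)} matched regions between the two views")
```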

There is also an embodiment of the invention wherein image analysis from the device camera is performed by an image classifier algorithm.

There is an embodiment of the invention wherein image analysis from the device camera is performed by the Kalman Filter algorithm.

Thanks to this advantageous characteristic, it becomes possible to analyze incomplete and noisy images using an effective recursive filter that estimates the state vector of a dynamic system from a series of incomplete and noisy measurements. The idea of the Kalman filter in this case is to obtain the best approximation to the true image coordinates from inaccurate camera measurements and the predicted positions of the image boundaries. The longer the filter runs, the more accurate its estimates become, which improves the stability of the image output on subsequent frames.
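The patent does not give the filter equations; the sketch below is a minimal constant-velocity Kalman filter that could smooth the per-frame camera position measurements, with process and measurement noise values chosen purely for illustration.

```python
# Minimal constant-velocity Kalman filter for smoothing a noisy 3D camera position.
import numpy as np

class PositionKalman:
    def __init__(self, dt=1 / 30, q=1e-3, r=1e-2):
        # state vector: [x, y, z, vx, vy, vz]
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is measured
        self.Q = q * np.eye(6)                              # process noise (assumed)
        self.R = r * np.eye(3)                              # measurement noise (assumed)

    def update(self, measured_position):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the noisy measurement
        y = np.asarray(measured_position, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                                   # smoothed position estimate
```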

There is an embodiment of the invention wherein the image analysis from the camera of the device is performed by means of the "image pyramids" algorithm.

Due to this advantageous characteristic, it becomes possible to shorten the image processing time and to determine more accurate initial approximations for processing the lower pyramid levels based upon the results of processing the upper levels.
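A short sketch of building such an image pyramid by repeated Gaussian downsampling with OpenCV; the minimum level size is an assumed stopping criterion.

```python
# Build a Gaussian image pyramid by repeated blur-and-halve downsampling.
import cv2

def build_pyramid(image, min_size=8):
    levels = [image]
    while min(levels[-1].shape[:2]) // 2 >= min_size:
        levels.append(cv2.pyrDown(levels[-1]))  # blur and halve each dimension
    return levels  # levels[0] is the original image, levels[-1] the coarsest
```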

BRIEF DESCRIPTION OF THE DRAWINGS

Other features and advantages of this group of inventions clearly follow from the description given below for illustration and not being limiting, with reference to the accompanying drawings wherein:

FIG. 1 shows a diagram of an apparatus for interacting with virtual objects according to the invention,

FIG. 2 schematically shows the steps of a method of interacting with virtual objects according to the invention.

The object marker is designated as 1. The device adapted for creating and viewing augmented reality is designated as 2; the figure further shows its video camera 21 and its display 22.

Devices such as a smartphone, a tablet computer, or augmented reality glasses can be used as the device adapted for creating and viewing augmented reality.

The image obtained from the video camera of the device adapted to create and view augmented reality is shown as 23.

The physical base coordinate system associated with the marker is designated OmXmYmZm.

The coordinates of the device 2 adapted for creating and viewing augmented reality are determined relative to the base coordinate system, while the device 2 itself has its own coordinate system OnXnYnZn.

A vector corresponding to the direction from marker 1 to virtual camera 21 is designated as R.

DETAILED DESCRIPTION OF THE INVENTION

The device for interacting with virtual objects works as follows. The most exhaustive example of the implementation of the invention is provided below, bearing in mind that this example does not limit the application of the invention.

According to FIG. 2:

Step A1. Identify the markers of real three-dimensional space from the images obtained from the video camera of the device adapted to create and view augmented reality. In general, a marker can be any figure or object. In practice, however, we are limited by the resolution of the webcam (phone camera), by color rendering, lighting and the processing power of the equipment, since everything happens in real time and must therefore be processed quickly; for this reason, a black-and-white marker of simple form is usually selected.
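The patent does not name a particular marker system; as one possible implementation of step A1, a simple black-and-white square marker can be detected with OpenCV's ArUco module (API as in opencv-contrib-python 4.7+), whose detected corners could then feed the pose estimation sketched earlier.

```python
# Detect a simple black-and-white square marker in one camera frame.
import cv2

frame = cv2.imread("camera_frame.png")            # illustrative input frame
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(frame)   # corners feed camera_pose_from_marker()
```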

Step A2. Form a physical base coordinate system tied to the spatial position of the markers of a real three-dimensional space.

Step A3. Specify the coordinates of three-dimensional virtual objects of augmented reality in the base coordinate system.

Step A4. Determine coordinates of the device adapted to create and view the augmented reality relative to the base coordinate system by analyzing the image from the camera of the device.

    • i. Step A41. To do this, a virtual camera is set in the calculated coordinates of the device adapted to create and view the augmented reality relative to the physical base coordinate system so that the marker visible by the virtual camera is located in its field of view in the same way as the physical marker is located in the field of view of the camera of the device adapted for creating and viewing augmented reality.
    • ii. Step A42. The vector corresponding to the direction from the marker to the virtual camera in real time is calculated.
    • iii. Step A43. Form in real time information about all movements of the camera relative to the marker—turning, approaching, tilting by sequential iteration.
    • iv. Step A44. Alternatively, generate information about all movements of the camera relative to the marker by analyzing the video stream received from the device to create and view the augmented reality.

Step A5. The above actions are repeated at each iteration of the computing module of the device adapted to create and view augmented reality. Aggregating the directions obtained at each iteration forms information about all the camera movements relative to the marker—turning, approaching, tilting, etc.
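Purely as an illustration of step A5, the sketch below aggregates two consecutive marker-to-camera positions into turn, approach and tilt values; the particular angle decomposition is an assumption, not the patented formula.

```python
# Aggregate two consecutive marker-to-camera positions into movement information.
import numpy as np

def movement_between(prev_pos, curr_pos):
    """Describe how the camera moved relative to the marker between two iterations."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    curr_pos = np.asarray(curr_pos, dtype=float)
    prev_dist, curr_dist = np.linalg.norm(prev_pos), np.linalg.norm(curr_pos)
    approach = prev_dist - curr_dist                      # > 0 means the camera came closer
    # change of heading around the marker, projected on the marker plane (turn)
    turn = np.degrees(np.arctan2(curr_pos[1], curr_pos[0]) -
                      np.arctan2(prev_pos[1], prev_pos[0]))
    # change of elevation above the marker plane (tilt)
    tilt = np.degrees(np.arcsin(curr_pos[2] / curr_dist) -
                      np.arcsin(prev_pos[2] / prev_dist))
    return {"turn_deg": turn, "approach": approach, "tilt_deg": tilt}
```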

Step A6. Perform, with the help of user motion, the specified actions for modifying virtual objects for all or part of the objects of the formed set of virtual objects of augmented reality.

The sequence of steps is exemplary; operations can be rearranged, removed, added or performed simultaneously without losing the ability to interact with virtual objects. Examples of such operations are:

    • calculation of the movement in space of the device adapted to create and view augmented reality with corrections that compensate for vibration of the user's client device; for example, vibration compensation of the user's device is performed using a Kalman filter;
    • use of an artificial neural network model when calculating the movement in space of the device adapted to create and view augmented reality.

Example 1

A character created as an augmented reality object (a person or an animal) can follow with its eyes the direction toward the device adapted to create and view augmented reality, creating for the user the illusion that this person or animal is watching him, just as a real person or animal would do. When the user tries to walk around the character from behind, the character can react accordingly, turning its body towards the user.
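A hedged sketch of how Example 1 could be driven by the marker-to-camera vector: the character's facing yaw is taken from the camera direction, and the body turns only when the angle exceeds what eye movement alone can plausibly cover; the threshold and function names are assumptions.

```python
# Make a virtual character face the viewing device using the marker-to-camera vector.
import numpy as np

def character_yaw_towards_camera(camera_pos_in_marker_frame):
    """Yaw angle (degrees) the character should face so that it 'looks at' the user."""
    x, y, _ = camera_pos_in_marker_frame
    return np.degrees(np.arctan2(y, x))

def should_turn_body(current_yaw_deg, target_yaw_deg, eye_limit_deg=45.0):
    """Eyes alone track small angles; beyond the limit the whole body turns."""
    delta = (target_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) > eye_limit_deg
```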

Example 2

An interactive game, wherein the augmented reality content anchored at the marker plays the role of a virtual opponent that shoots slow-moving missiles toward the user. To win the game, the user must "dodge" the missiles by shifting the camera of the device adapted to create and view augmented reality away from their trajectory.
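An illustrative sketch of the dodge check in Example 2: the user has avoided a missile if the camera position lies farther than a hit radius from the missile's straight-line trajectory; the radius is an assumed gameplay parameter.

```python
# Decide whether the user has dodged a missile by moving the device's camera.
import numpy as np

def dodged(camera_pos, missile_origin, missile_direction, hit_radius=0.05):
    """True if the camera (user) is farther than hit_radius from the missile's path."""
    c = np.asarray(camera_pos, dtype=float)
    o = np.asarray(missile_origin, dtype=float)
    d = np.asarray(missile_direction, dtype=float)
    d /= np.linalg.norm(d)
    closest = o + np.dot(c - o, d) * d        # nearest point on the trajectory line
    return np.linalg.norm(c - closest) > hit_radius
```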

INDUSTRIAL APPLICABILITY

The proposed method of interaction with virtual objects can be carried out by a person skilled in the art and, when implemented, ensures the achievement of the claimed purpose, which allows the conclusion that the criterion of "industrial applicability" is met for the invention.

In accordance with the present invention, a prototype of a device for interacting with virtual objects is made in the form of a computer tablet having a display and a video camera.

Tests of the prototype system showed that it provides the following capabilities:

    • determination of markers of real three-dimensional space from images obtained from the video camera of the device adapted to create and view augmented reality,
    • formation of a physical base coordinate system tied to the spatial position of the markers of the real three-dimensional space,
    • determination of the coordinates of the device adapted to create and view augmented reality relative to the base coordinate system,
    • assignment of coordinates of three-dimensional virtual objects of augmented reality in the base coordinate system,
    • determination of the coordinates of the device adapted to create and view augmented reality relative to the real physical marker by analyzing the image from the device's camera,
    • placement of the virtual camera at the calculated coordinates of the device adapted to create and view augmented reality relative to the physical base coordinate system, so that the marker visible to the virtual camera is located in its field of view in the same way as the physical marker is located in the field of view of the camera of the device adapted to create and view augmented reality,
    • calculation of the vector corresponding to the direction from the marker to the virtual camera in real time,
    • generation of information about all movements of the camera relative to the marker—rotation, zoom, tilt—by sequential iteration in real time,
    • performance, with the help of user motion, of the specified actions for modifying virtual objects for all or part of the objects from the generated set of virtual objects of augmented reality.

Thus, the present invention achieves the stated objective: it provides an additional ability to interact with virtual objects by changing the position of the device adapted to create and view augmented reality, with that change of position associated with additional reactions of the virtual object beyond simply changing the orientation of the virtual object on the device's display.

Claims

1.-6. (canceled)

7. A method for influencing virtual objects of augmented reality, said method comprising the following steps:

obtaining images of a real three-dimensional space by a camera of a device adapted to create and view augmented reality,
identifying one or more markers of the real three-dimensional space based upon said images obtained by the camera,
forming a base coordinate system tied to a spatial position of the markers of the real three-dimensional space,
determining coordinates of the device adapted to create and view augmented reality relative to the base coordinate system,
specifying coordinates of virtual objects of augmented reality in the base coordinate system,
modifying at least some of the virtual objects of augmented reality based upon an identified motion of a user,
setting a virtual camera in the determined coordinates of the device adapted to create and view augmented reality relative to the base coordinate system so that a virtual marker visible by the virtual camera is located in a field of view of the virtual camera in the same way that a marker of the real three-dimensional space is located in a field of view of the camera of the device adapted to create and view augmented reality,
calculating a vector corresponding to a direction from the virtual marker to the virtual camera in real time using quaternions, and
generating information about all movements of the virtual camera relative to the markers of the real three-dimensional space, said movements comprising rotation, approximation, and tilt, by sequential iteration and in real time.

8. The method of claim 7, wherein the quaternions of the vector corresponding to the direction from the virtual marker to the virtual camera are calculated using at least one or more coordinates and one or more angles.

9. The method of claim 7, wherein the quaternions of the vector corresponding to the direction from the virtual marker to the virtual camera are calculated using polar coordinates.

10. The method of claim 7, wherein the quaternions of the vector corresponding to the direction from the virtual marker to the virtual camera are calculated using a Euler angle method.

11. The method of claim 7, further comprising generating information regarding all movements of the virtual camera relative to the markers of the real three-dimensional space by analyzing a video stream received from the camera of the device adapted to create and view augmented reality.

12. The method of claim 11, wherein an analysis of an image from the camera of the device adapted to create and view augmented reality is performed by an algorithm for searching for points of interest.

13. The method of claim 11, wherein an analysis of an image from the camera of the device adapted to create and view augmented reality is performed by an image classifier algorithm.

14. The method of claim 11, wherein an analysis of an image from the camera of the device adapted to create and view augmented reality is performed by a Kalman Filter algorithm.

15. The method of claim 11, wherein an analysis of an image from the camera of the device adapted to create and view augmented reality is performed by an Image Pyramid algorithm.

Patent History
Publication number: 20180321776
Type: Application
Filed: Nov 17, 2016
Publication Date: Nov 8, 2018
Inventors: Vitaly Vitalyevich AVERYANOV (Tula), Andrey KOMISSAROV (Tula)
Application Number: 15/773,248
Classifications
International Classification: G06F 3/048 (20060101); G06T 19/00 (20060101);