CAMERA ANGLE ESTIMATION METHOD FOR AROUND VIEW MONITORING SYSTEM

A camera angle estimation method for an around view monitoring system includes uniformizing and extracting, by a control unit of the around view monitoring system, feature points from an image of each of at least three cameras, acquiring, by the control unit, corresponding points by tracking the extracted feature points, integrating, by the control unit, the corresponding points acquired from the image of each of the cameras with one another, estimating, by the control unit, vanishing points and vanishing lines by using the integrated corresponding points, and estimating an angle of each of the cameras on the basis of the estimated vanishing points and vanishing lines.

Description
CROSS-REFERENCE TO RELATED APPLICATION

The present application claims priority under 35 U.S.C. § 119(a) to Korean Patent Application Nos. 10-2017-0074556 and 10-2017-0132250, filed on Jun. 14, 2017 and Oct. 12, 2017, respectively, which are incorporated herein by reference in their entirety.

BACKGROUND

1. Technical Field

Embodiments of the present disclosure relate to a camera angle estimation method for an around view monitoring system, and more particularly, to a camera angle estimation method for an around view monitoring system in which, in order to correct an image of a camera for generating an around view image in an around view monitoring system of a vehicle, feature points are extracted from a peripheral image captured by the camera even though no correction patterns are installed in the vicinity of the vehicle in which the camera is installed, so that an angle of the camera can be estimated on the basis of corresponding points acquired by tracking the feature points.

2. Related Art

Recently sold vehicles are increasingly equipped with an advanced driver assistance system (ADAS) that helps the driver drive safely.

As part of such advanced driver assistance systems (ADAS), the number of vehicles equipped with an ultrasonic sensor or a rear view camera has increased in order to reduce accidents caused by blind spots. Recently, the number of vehicles equipped with an around view monitoring (AVM) system has also increased.

In particular, the around view monitoring (AVM) system attracts attention because it can monitor all 360° directions around a vehicle. However, correcting the cameras after the system is installed requires a wide space in which a specific facility (for example, a lattice or a lane pattern for camera correction) is installed, as well as a person skilled in the correction work, which is disadvantageous in terms of cost and time. Therefore, there is a limitation on the wide use of the around view monitoring (AVM) system.

The background technology of the present invention is disclosed in Korean Unexamined Patent Publication No. 2016-0056658 (published on May 20, 2016, and entitled "Around view monitoring system and control method thereof").

SUMMARY

Various embodiments are directed to a camera angle estimation method for an around view monitoring system in which, in order to correct an image of a camera for generating an around view image in an around view monitoring system of a vehicle, feature points are extracted from a peripheral image captured by the camera even though no correction patterns are installed in the vicinity of the vehicle in which the camera is installed, so that an angle of the camera can be estimated on the basis of corresponding points acquired by tracking the feature points.

In an embodiment, a camera angle estimation method for an around view monitoring system includes: uniformizing and extracting, by a control unit of the around view monitoring system, feature points from an image of each of at least three cameras; acquiring, by the control unit, corresponding points by tracking the extracted feature points; integrating, by the control unit, the corresponding points acquired from the image of each of the cameras with one another; estimating, by the control unit, vanishing points and vanishing lines by using the integrated corresponding points; and estimating an angle of each of the cameras on the basis of the estimated vanishing points and vanishing lines.

In an embodiment, the feature points are points easily distinguished from a surrounding background in the image of each of the cameras, and are points easily distinguished even when a shape, a size, or a position of an object is changed and easily found from the image even when a point of view of the camera or illumination is changed.

In an embodiment, in the uniformizing and extracting of the feature points, the control unit divides the image of each of the cameras into a plurality of preset areas in order to uniformize the feature points, and allows a predetermined number of feature points to be forcibly extracted from each of the divided areas.

In an embodiment, the image of each of the at least three cameras is an image continuously or sequentially captured, and includes an image captured immediately subsequent to a previously captured image or a camera image having a temporal difference of a specific number of frames or more.

In an embodiment, in order to estimate the vanishing points and the vanishing lines, the control unit draws virtual straight lines, along which two corresponding points extend in a longitudinal direction, to estimate one vanishing point at a spot at which the virtual straight lines cross each other, and draws virtual straight lines, which respectively connect both ends of the two corresponding points to each other, to estimate a remaining vanishing point at a spot at which the virtual straight lines extend and cross each other, thereby estimating a vanishing line that connects the two vanishing points to each other.

In an embodiment, in the estimating of the angle of each of the cameras, the control unit estimates, as the angle of each of the cameras, a rotation matrix Re for converting a coordinate system of a real world road surface to an image coordinate system on which a distortion-corrected image is displayed.

In accordance with an embodiment, in order to correct an image of a camera for generating an around view image in an around view monitoring system of a vehicle, feature points are extracted from a peripheral image captured by the camera even though no correction patterns are installed in the vicinity of the vehicle in which the camera is installed, and an angle of the camera is estimated on the basis of corresponding points acquired by tracking the feature points. The image of the camera is then automatically corrected on the basis of the estimated camera angle, so that an exact around view image can be generated more simply.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an exemplary diagram illustrating a schematic configuration of an around view monitoring system in accordance with an embodiment.

FIG. 2 is a flowchart for explaining a camera angle estimation method for an around view monitoring system in accordance with an embodiment.

FIG. 3 is an exemplary diagram for explaining one of feature point detection methods in the related art as compared with a feature point detection method in accordance with an embodiment.

FIG. 4 is an exemplary diagram for explaining a feature point extraction method in accordance with an embodiment.

FIG. 5 is an exemplary diagram for explaining a method for acquiring corresponding points through feature point tracking in a continuous image in FIG. 2.

FIG. 6 is an exemplary diagram illustrating a result obtained by integrating corresponding points acquired by tracking feature points in a plurality of continuously captured images in FIG. 2.

FIG. 7 is an exemplary diagram for additionally explaining a method explained in FIG. 6, in which corresponding points acquired from a continuous camera image are integrated with one another.

FIG. 8 is an exemplary diagram for explaining a method for estimating vanishing points and vanishing lines by using acquired corresponding points in FIG. 2.

FIG. 9 is an exemplary diagram for explaining a relation between a camera angle and vanishing points related to an embodiment.

FIG. 10 is an exemplary diagram illustrating an around view image generated using an image corrected through a camera angle estimation method for an around view monitoring system in accordance with an embodiment.

DETAILED DESCRIPTION

Hereinafter, a camera angle estimation method for an around view monitoring system will be described below with reference to the accompanying drawings through various examples of embodiments.

FIG. 1 is an exemplary diagram illustrating a schematic configuration of an around view monitoring system in accordance with an embodiment.

As illustrated in FIG. 1, the around view monitoring system in accordance with the embodiment includes one or more cameras 120, 121, and 122 installed on a vehicle and a control unit 110 that outputs an around view image, obtained by processing the images captured by the cameras 120, 121, and 122, to a screen of an audio video navigation (AVN) device (not illustrated).

The control unit 110 estimates installation angles (or camera angles) of the cameras 120, 121, and 122, and automatically corrects the camera images on the basis of the estimated camera installation angles (or camera angles) (see FIG. 2).

Then, the control unit 110 outputs the around view image obtained by processing the corrected camera images to the screen of the audio video navigation (AVN) device (not illustrated).

When the camera angles are estimated, the camera images can be easily corrected on the basis of the estimated camera angles. Hereinafter, in the present embodiment, a method for estimating the camera installation angles (or the camera angles) will be described in detail with reference to FIG. 2 to FIG. 10.

As illustrated in FIG. 1, in the case of the around view monitoring system, at least four cameras may be installed at the front, rear, right, and left sides of a vehicle.

For example, six cameras may be installed at the front, rear, right (for example, at a right side view mirror), left (for example, at a left side view mirror), inner front (for example, at a rear view mirror), and inner rear sides of a vehicle.

FIG. 2 is a flowchart for explaining the camera angle estimation method for the around view monitoring system in accordance with the embodiment.

As illustrated in FIG. 2, the control unit 110 uniformizes and extracts feature points (for example, points easily distinguished from a surrounding background in image matching, and points easily distinguished even when the shape, the size, or the position of an object is changed and easily found from an image even when the point of view of a camera or illumination is changed) from the camera images (S101).

The uniformization and extraction of the feature points represent extraction of feature points uniformly distributed in an entire area of the camera image.

Furthermore, the control unit 110 tracks the uniformized feature points and acquires corresponding points (S102); in this case, since the corresponding points are acquired by tracking the feature points, the corresponding points may be linear, corresponding to the tracks of the feature points. The control unit 110 then integrates the acquired corresponding points with the corresponding points acquired from a plurality of images (for example, camera images of three frames or more) captured continuously or sequentially (S103).

Furthermore, the control unit 110 estimates vanishing points and vanishing lines by using the acquired corresponding points and estimates camera angles on the basis of the estimated vanishing points and vanishing lines (S104).

Furthermore, the control unit 110 corrects images of the cameras on the basis of the estimated camera angles (S105).

After the camera images are corrected as described above, the control unit 110 generates an around view image by combining the corrected camera images with one another according to a predetermined around view algorithm.

Hereinafter, in the present embodiment, the method according to steps of FIG. 2 for estimating the camera angles will be described in more detail with reference to FIG. 3 to FIG. 10.

FIG. 3 is an exemplary diagram for explaining one of feature point detection methods in the related art as compared with the feature point detection method in accordance with the embodiment.

In extracting (or detecting) the feature points in FIG. 2, the control unit 110 detects positions (that is, feature points) which are characteristic in the camera images.

However, the feature points may not be uniformly distributed depending on the capturing environment, and thus the estimation error of the camera angles may increase. In this regard, in the present embodiment, a process for uniformizing the distribution of the feature points is performed.

As well-known methods for detecting feature points, there are various methods such as the Harris corner detector, the Shi-Tomasi corner detector, FAST, and DoG.

For example, FIG. 3 illustrates a feature point extraction result before uniformization, obtained using the Shi-Tomasi corner detector among the well-known feature point detection methods; about 320 feature points (red points) are extracted. However, when a prescribed number of feature points are extracted as illustrated in FIG. 3, it is highly probable that the feature points will be mainly detected on a short-distance road surface or around a remote obstacle.

In this regard, in the present embodiment, a camera image is divided into a plurality of preset areas in order to uniformly distribute the feature points (see (a) of FIG. 4), and a prescribed number of feature points are forcibly extracted (or detected) in each of the divided areas. That is, in the present embodiment, the feature points are extracted from the camera image through uniformization.

FIG. 4 is an exemplary diagram for explaining the feature point extraction method in accordance with the embodiment, wherein (a) of FIG. 4 illustrates an example in which the camera image is uniformly divided into a 4×8 grid of areas and (b) of FIG. 4 illustrates a result (for example, 320 feature points) obtained by extracting (or detecting) a prescribed number (for example, 10) of feature points in each of the divided areas.

Accordingly, when the feature point extraction result illustrated in (b) of FIG. 4 is compared with that illustrated in FIG. 3, it can be confirmed that the distribution of the feature points is considerably more uniform.
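For illustration only (the disclosure itself provides no code), the grid-based uniformization can be sketched in Python, assuming OpenCV's Shi-Tomasi detector (cv2.goodFeaturesToTrack) and borrowing the 4×8 grid and 10 points per area from the FIG. 4 example; the function and parameter names are hypothetical, and qualityLevel and minDistance are ordinary detector thresholds rather than values taken from the disclosure:

import cv2
import numpy as np

def extract_uniform_features(gray, rows=4, cols=8, per_cell=10):
    # Divide the image into a rows x cols grid and detect up to per_cell
    # Shi-Tomasi corners in each cell, so that the extracted feature
    # points are uniformly distributed over the whole image (S101).
    h, w = gray.shape
    points = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            corners = cv2.goodFeaturesToTrack(
                gray[y0:y1, x0:x1], maxCorners=per_cell,
                qualityLevel=0.01, minDistance=5)
            if corners is not None:
                # Shift cell-local coordinates back to full-image coordinates.
                points.append(corners.reshape(-1, 2) + np.array([x0, y0]))
    return (np.vstack(points).astype(np.float32)
            if points else np.empty((0, 2), np.float32))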

Referring again to FIG. 2, in the step of tracking the uniformized feature points and acquiring the corresponding points (S102), the control unit 110 tracks the extracted feature points from a continuous image (that is, a continuously captured camera image) in order to acquire the corresponding points (see FIG. 5).

FIG. 5 is an exemplary diagram for explaining the method for acquiring the corresponding points through feature point tracking in the continuous image in FIG. 2. Typically, in the related art, feature points are matched with one another by using only two images (or two frames) to acquire corresponding points. However, in the present embodiment, in order to stably secure a larger number of corresponding points from a road surface having few characteristics, a method for tracking feature points across a continuous image (for example, three frames or more) is used.

In this case, in tracking the feature points, various well-known optical flow tracking methods may be used, such as the census transform (CT, a method for comparing the brightness change of the area surrounding one pixel with the brightness of the center pixel) and Kanade-Lucas-Tomasi (KLT) tracking. FIG. 5 shows a result in which corresponding points are acquired by tracking the feature points detected in (b) of FIG. 4.
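As a minimal sketch of this step (again, not the disclosure's own implementation), pyramidal KLT tracking is available in OpenCV as cv2.calcOpticalFlowPyrLK; the window size and pyramid depth below are illustrative defaults:

def track_features(prev_gray, next_gray, prev_pts):
    # Track the uniformized feature points from one frame into the next
    # with pyramidal KLT optical flow; the surviving (start, end) pairs
    # are the corresponding points of step S102.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1
    return prev_pts.reshape(-1, 2)[ok], next_pts.reshape(-1, 2)[ok]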

FIG. 6 is an exemplary diagram illustrating a result obtained by integrating the corresponding points acquired by tracking the feature points in the plurality of continuously captured images in FIG. 2. In order to improve the accuracy of the camera angle estimation, it is advantageous to acquire and use as many corresponding points as possible. However, acquiring a large number of corresponding points requires simultaneously extracting and tracking a large number of feature points, which requires much execution time.

In this regard, the present embodiment uses a method of extracting and tracking a small number of feature points from each camera image of a continuous camera image sequence, acquiring corresponding points from each camera image, and integrating the corresponding points acquired from the camera images with one another.

For example, as illustrated in FIG. 6, consider a method of extracting and tracking feature points from camera images of three continuous frames ((a), (b), and (c) of FIG. 6), acquiring corresponding points from each of the camera images, and then integrating the corresponding points acquired from the camera images with one another ((d) of FIG. 6). Compared with a method of acquiring all the corresponding points (for example, 2,000 to 3,000) from one camera image, the method in which a small number of corresponding points (for example, 400 to 600) are acquired from the continuously captured camera images and integrated with one another is more effective in terms of time while obtaining substantially the same result (for example, 2,000 to 3,000 corresponding points). Moreover, even when the corresponding points exist only in a part of each camera image (see (a), (b), and (c) of FIG. 6), the integration of the corresponding points secures corresponding points uniformly distributed over the entire camera image ((d) of FIG. 6). Consequently, it is possible to improve the stability of the estimation of the parameters (that is, the parameters for estimating the camera angles).

FIG. 7 is an exemplary diagram for additionally explaining the method of FIG. 6, which integrates the corresponding points acquired from the continuous camera image with one another, wherein (a) of FIG. 7 illustrates a result obtained by extracting and tracking 960 feature points from one camera image at one time and (b) of FIG. 7 illustrates a result obtained by extracting and tracking 320 feature points from each camera image of three continuously captured frames and integrating them with one another.

It can be understood that the computation amount (that is, the computational load) used at one time to obtain the result of (b) of FIG. 7 is much smaller than that of (a) of FIG. 7, and that feature points uniformly distributed over the entire camera image can be secured.

In the present embodiment, the continuously captured image (or the continuous image) is not limited to an image captured immediately subsequent to a previously captured image. That is, in the present embodiment, when a plurality of images are used, the images do not necessarily have to be consecutive, and camera images having a temporal difference of a specific number of frames or more may also be used. This means that, for example, currently acquired corresponding points and corresponding points acquired 10 minutes earlier may be integrated with each other for use.
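Pooling the per-frame-pair correspondences is then a simple concatenation. The sketch below reuses the hypothetical helpers from the previous sketches and assumes grayscale frames; per the paragraph above, consecutive frames could equally be replaced by frames captured minutes apart:

def integrate_correspondences(gray_frames):
    # Extract a small number of features in each frame, track them into
    # the following frame, and pool the correspondences of all frame
    # pairs into one uniformly distributed set (S103, (d) of FIG. 6).
    starts, ends = [], []
    for prev, nxt in zip(gray_frames[:-1], gray_frames[1:]):
        pts = extract_uniform_features(prev)
        p0, p1 = track_features(prev, nxt, pts)
        starts.append(p0)
        ends.append(p1)
    return np.vstack(starts), np.vstack(ends)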

FIG. 8 is an exemplary diagram for explaining a method for estimating vanishing points and vanishing lines by using the acquired corresponding points in FIG. 2.

In the present embodiment, the control unit 110 estimates vanishing points and vanishing lines by using the corresponding points acquired from camera images, and estimates a rotation matrix between the ground and the camera.

The rotation matrix is a matrix for obtaining the coordinates of a new point when one point in a two-dimensional or three-dimensional space is rotated counterclockwise about the origin by a desired angle.
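For reference, in the familiar two-dimensional case this is the standard matrix

$$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad \begin{bmatrix} x' \\ y' \end{bmatrix} = R(\theta) \begin{bmatrix} x \\ y \end{bmatrix},$$

which rotates the point (x, y) counterclockwise about the origin by the angle θ; the three-dimensional rotation Re estimated below can be decomposed into such rotations about the coordinate axes.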

For example, when a vehicle moves straight, two acquired corresponding points should be parallel to each other and should have substantially the same length on the real world road surface (a bird's-eye view) as illustrated in (a) of FIG. 8.

However, when the corresponding points are captured as a camera image, the directions and lengths of the corresponding points are not substantially equal to each other due to perspective distortion as illustrated in (b) of FIG. 8.

Since the two corresponding points form a parallelogram in the real world, the control unit 110 draws virtual straight lines along which the corresponding points extend in the longitudinal direction, and obtains one vanishing point at the spot at which these virtual straight lines cross each other. The control unit 110 also draws virtual straight lines that respectively connect both ends of the corresponding points to each other, and obtains the other vanishing point at the spot at which these virtual straight lines extend and cross each other. In this way, it is possible to obtain a vanishing line that connects the two vanishing points to each other, as illustrated in (c) of FIG. 8.
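In homogeneous coordinates this construction has a compact algebraic form: the line through two points, and the intersection point of two lines, are both cross products. The following numpy sketch (an illustration, not code from the disclosure) applies it to two corresponding-point segments a0→a1 and b0→b1:

import numpy as np

def to_h(p):
    # Lift an image point (x, y) to homogeneous coordinates (x, y, 1).
    return np.array([p[0], p[1], 1.0])

def vanishing_geometry(a0, a1, b0, b1):
    # Motion-direction lines of the two segments; their intersection is
    # the first vanishing point.
    v1 = np.cross(np.cross(to_h(a0), to_h(a1)), np.cross(to_h(b0), to_h(b1)))
    # Lines joining the corresponding endpoints; their intersection is
    # the second vanishing point.
    v2 = np.cross(np.cross(to_h(a0), to_h(b0)), np.cross(to_h(a1), to_h(b1)))
    # The vanishing line connects the two vanishing points.
    vline = np.cross(v1, v2)
    return v1, v2, vline  # unnormalized homogeneous 3-vectors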

FIG. 9 is an exemplary diagram for explaining a relation between a camera angle and vanishing points related to the embodiment.

As illustrated in FIG. 9, the control unit 110 estimates, as a camera angle, a rotation matrix Re for converting a coordinate system of the real world road surface to an image coordinate system on which a distortion-corrected image is displayed. That is, in the present embodiment, it is noted that a camera angle to be estimated indicates the rotation matrix Re for converting the coordinate system of the real world road surface to the image coordinate system on which the distortion-corrected image is displayed.

That is, the coordinate system of the real world road surface as illustrated in (a) of FIG. 9 is converted to the image coordinate system on which the distortion-corrected image is displayed as illustrated in (b) of FIG. 9 (since a straight line may appear slightly curved in an actual camera image, the distortion-corrected image refers to an image in which such curves have been corrected back into straight lines).

Accordingly, for convenience of estimation, the angle Re of a camera may be estimated through the reverse conversion (that is, the conversion of the distortion-corrected image coordinate system to the coordinate system of the real world road surface). That is, in FIG. 9, when the image is rotated by ReT (the rotation of (b) of FIG. 9 to (a) of FIG. 9), the two vanishing points v1 and v2 estimated in (b) of FIG. 9 are converted to points p1 and p2 at infinity.

For example, the conversion operation illustrated in FIG. 9 is expressed by Equation 1 below.

$$
\begin{aligned}
K R_e^T K^{-1} v_1 &= p_1, & K R_e^T K^{-1} v_2 &= p_2 \\
R_e^T K^{-1} v_1 &= K^{-1} p_1, & R_e^T K^{-1} v_2 &= K^{-1} p_2 \\
R_e^T v_1' &= p_1', & R_e^T v_2' &= p_2'
\end{aligned}
\tag{Equation 1}
$$

In Equation 1 above, K denotes a matrix including the camera internal variables, and v1′ and v2′ denote the results obtained by multiplying the vanishing points v1 and v2 by K−1 (p1′ and p2′ are likewise the results of multiplying p1 and p2 by K−1), wherein K is a value obtainable in advance.
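Assuming the standard pinhole form of the intrinsic matrix (the disclosure does not spell it out), the normalization reads

$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad v_1' = K^{-1} v_1, \qquad v_2' = K^{-1} v_2,$$

where fx and fy are the focal lengths in pixels, (cx, cy) is the principal point, and s is the skew.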

Furthermore, since p1′ denotes the straight-ahead direction of the vehicle in the image, p1′ is [1 0 0]T in the case of a side camera and [0 1 0]T in the case of the front/rear cameras.

Accordingly, in the case of the side camera, the first vanishing point v1′ coincides with r1, the first column vector of the camera angle Re to be estimated, as expressed by Equation 2 below.

$$
R_e^T v_1' = p_1' = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
\;\Rightarrow\;
v_1' = R_e \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} r_1 & r_2 & r_3 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
= r_1
\tag{Equation 2}
$$

Meanwhile, it is not possible to fix the exact position of p2′, but since this point lies at infinity, it may be expressed as [a b 0]T.

When v2′ is transformed by ReT, p2′ is obtained, so ① of Equation 3 below may be obtained.

$$
\begin{aligned}
&\text{①}\quad R_e^T v_2' = p_2' = \begin{bmatrix} a \\ b \\ 0 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} r_1^T \\ r_2^T \\ r_3^T \end{bmatrix} v_2' = \begin{bmatrix} a \\ b \\ 0 \end{bmatrix}
\;\Rightarrow\; r_3^T v_2' = 0 \\
&\text{②}\quad R_e^T v_1' = p_1'
\;\Rightarrow\; r_3^T v_1' = 0
\quad (\text{the third component of } p_1' \text{ is also } 0) \\
&\text{③}\quad \begin{bmatrix} v_1'^T \\ v_2'^T \end{bmatrix} r_3 = 0
\end{aligned}
\tag{Equation 3}
$$

Furthermore, when Equation 2 above is transformed in substantially the same manner, ② of Equation 3 above may be obtained. Finally, when ① and ② of Equation 3 above are combined with each other, ③ of Equation 3 above may be obtained.

Furthermore, ③ of Equation 3 above represents that r3 is the parameter of the vanishing line, which is the straight line geometrically connecting the two vanishing points v1′ and v2′ to each other.

Meanwhile, since r1 is v1′ by Equation 2 above, r1 may be calculated by Equation 4 below, which is the equation for calculating vanishing points.

In Equation 4 below, (xi, yi) and (xi′, yi′) denote the coordinates of the ith corresponding point (that is, its start and end points).

$$
A r_1 = 0:\quad
\begin{bmatrix}
y_1 - y_1' & x_1' - x_1 & x_1 y_1' - x_1' y_1 \\
\vdots & \vdots & \vdots \\
y_N - y_N' & x_N' - x_N & x_N y_N' - x_N' y_N
\end{bmatrix}
\begin{bmatrix} r_{11} \\ r_{12} \\ r_{13} \end{bmatrix}
= \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}
\tag{Equation 4}
$$
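Equation 4 is a homogeneous least-squares problem and is conventionally solved with the SVD: r1 is the right singular vector associated with the smallest singular value. A sketch under the assumption that the correspondences have already been normalized by K−1:

def estimate_r1(starts, ends):
    # Each normalized correspondence (x, y) -> (x', y') contributes the
    # homogeneous line through its two endpoints, as in Equation 4;
    # r1 (= v1') is their common intersection in the least-squares sense.
    x, y = starts[:, 0], starts[:, 1]
    xp, yp = ends[:, 0], ends[:, 1]
    A = np.stack([y - yp, xp - x, x * yp - xp * y], axis=1)
    _, _, Vt = np.linalg.svd(A)
    r1 = Vt[-1]
    return r1 / np.linalg.norm(r1)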

Furthermore, r3 denotes a parameter of a vanishing line.

Accordingly, a plurality of vanishing points v2 are calculated from combinations of the corresponding points, and straight line estimation is performed on the basis of the plurality of vanishing points v2, so that r3 can be calculated.

In this case, when the constraint that the straight line (that is, the vanishing line r3) should pass through r1 (v1) is imposed, r3 may be obtained as expressed by Equation 5 below.

$$
\begin{bmatrix}
v_{2,1}^x - v_1^x & v_{2,1}^y - v_1^y \\
\vdots & \vdots \\
v_{2,N}^x - v_1^x & v_{2,N}^y - v_1^y
\end{bmatrix}
\begin{bmatrix} r_{31} \\ r_{32} \end{bmatrix}
= \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix},
\qquad
r_3 = \begin{bmatrix} r_{31} \\ r_{32} \\ -r_{31} v_1^x - r_{32} v_1^y \end{bmatrix}
\tag{Equation 5}
$$

In Equation 5 above, v1x and v1y denote the coordinates of the vanishing point v1 calculated through Equation 4 above, and (v2,ix, v2,iy) denote the coordinates of the ith vanishing point v2 obtained through the combinations of the corresponding points. As described above, after r1 and r3 are calculated, r2 is calculated through a cross product of r1 and r3, so that the desired camera angle Re is estimated.
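The last two steps can be sketched in the same spirit; note that the disclosure does not fix the order of the cross product, so the sign convention for r2 below is an assumption resolved by requiring det Re = +1:

def estimate_r3(v1, v2_candidates):
    # Equation 5: fit the vanishing line through the inhomogeneous
    # vanishing point v1 = (v1x, v1y) and the candidate points v2,i.
    d = np.asarray(v2_candidates) - np.asarray(v1)
    _, _, Vt = np.linalg.svd(d)
    r31, r32 = Vt[-1]
    return np.array([r31, r32, -r31 * v1[0] - r32 * v1[1]])

def assemble_Re(r1, r3):
    # r2 is a cross product of r1 and r3; the order is chosen so that
    # Re = [r1 r2 r3] is a proper rotation (determinant +1).
    r1 = r1 / np.linalg.norm(r1)
    r3 = r3 / np.linalg.norm(r3)
    Re = np.column_stack([r1, np.cross(r3, r1), r3])
    if np.linalg.det(Re) < 0:
        Re[:, 1] = -Re[:, 1]
    return Re

Here v1 in estimate_r3 denotes the inhomogeneous coordinates of the first vanishing point, that is, the first two components of r1 divided by its third.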

FIG. 10 is an exemplary diagram illustrating an around view image generated using an image corrected through the camera angle estimation method for the around view monitoring system in accordance with the embodiment.

(a) of FIG. 10 is a camera image captured in a place where an experiment was performed, and (b) of FIG. 10 is an around view image generated by automatically performing calibration using the camera angles of the around view monitoring system estimated from the captured camera images.

As illustrated in FIG. 10, in the present embodiment, it can be seen that calibration is performed by automatically estimating the angle of a camera even in a situation in which there are almost no characteristics such as lanes and patterns (a situation in which only a mark left by removing a lane from the ground and a distant building are present), as illustrated in (a) of FIG. 10.

As described above, in the present embodiment, it is possible to automatically estimate a camera angle of an around view monitoring (AVM) system on the basis of an image captured by a camera. Particularly, it is possible to estimate the camera angle even in a situation in which there is no special pattern (a special pattern for camera correction) such as a lattice and a lane on the ground.

Consequently, in the present embodiment, since the angle of a camera is automatically estimated when a general driver, rather than a professional engineer, drives the vehicle on an arbitrary road (anywhere), the convenience of the AVM system is improved and the installation cost is reduced. Furthermore, since the limitation conditions on the system operation environment and on correction (such as a special facility like a tolerance correction device) are reduced, utilization is improved, so that barriers to overseas export can be lowered.

While various embodiments have been described above, it will be understood to those skilled in the art that the embodiments described are by way of example only. Accordingly, the camera angle estimation method for an around view monitoring system described herein should not be limited based on the described embodiments.

Claims

1. A camera angle estimation method for an around view monitoring system, comprising:

uniformizing and extracting, by a control unit of the around view monitoring system, feature points from an image of each of at least three cameras;
acquiring, by the control unit, corresponding points by tracking the extracted feature points;
integrating, by the control unit, the corresponding points acquired from the image of each of the cameras with one another;
estimating, by the control unit, vanishing points and vanishing lines by using the integrated corresponding points; and
estimating an angle of each of the cameras on the basis of the estimated vanishing points and vanishing lines.

2. The camera angle estimation method for an around view monitoring system of claim 1, wherein the feature points are points easily distinguished from a surrounding background in the image of each of the cameras, and are points easily distinguished even when a shape, a size, or a position of an object is changed and easily found from the image even when a point of view of the camera or illumination is changed.

3. The camera angle estimation method for an around view monitoring system of claim 1, wherein, in the uniformizing and extracting of the feature points, the control unit divides the image of each of the cameras into a plurality of preset areas in order to uniformize the feature points, and allows a predetermined number of feature points to be forcibly extracted from each of the divided areas.

4. The camera angle estimation method for an around view monitoring system of claim 1, wherein the image of each of the at least three cameras is an image continuously or sequentially captured, and includes an image captured immediately subsequent to a previously captured image or a camera image having a temporal difference of a specific number of frames or more.

5. The camera angle estimation method for an around view monitoring system of claim 1, wherein, in order to estimate the vanishing points and the vanishing lines, the control unit draws virtual straight lines, along which two corresponding points extend in a longitudinal direction, to estimate one vanishing point at a spot at which the virtual straight lines cross each other, and draws virtual straight lines, which respectively connect both ends of the two corresponding points to each other, to estimate a remaining vanishing point at a spot at which the virtual straight lines extend and cross each other, thereby estimating a vanishing line that connects the two vanishing points to each other.

6. The camera angle estimation method for an around view monitoring system of claim 1, wherein, in the estimating of the angle of each of the cameras, the control unit estimates, as the angle of each of the cameras, a rotation matrix Re for converting a coordinate system of a real world road surface to an image coordinate system on which a distortion-corrected image is displayed.

Patent History
Publication number: 20180365857
Type: Application
Filed: Jun 14, 2018
Publication Date: Dec 20, 2018
Inventors: Sung Joo LEE (Seoul), Hoon Min KIM (Anyang-si), Dong Wook JUNG (Seoul), Kyoungtaek CHOI (Hwaseong-si), Jae Kyu SUHR (Incheon), Ho Gi JUNG (Seoul)
Application Number: 16/008,812
Classifications
International Classification: G06T 7/80 (20060101); H04N 7/18 (20060101); G06T 7/536 (20060101);