PORTABLE STEREOSCOPIC IMAGE CAPTURING CAMERA AND SYSTEM
A portable stereoscopic image capturing camera of the present invention comprises: a case including an opening; a plurality of lenses disposed in a line at a first interval within the case; and a driving module for rotating each of the lenses, wherein the driving module includes a rotating body combined with the lens to rotate the lens, and rotates each of the lenses toward a subject to form a projection intersection point.
The present invention relates to a stereoscopic image capturing camera and a stereoscopic image capturing system, and more particularly, to an efficient method for capturing and producing stereoscopic images using a ‘lenticular’ or ‘parallax barrier’ method.
Background of the Related Art

A method of observing stereoscopic images without wearing glasses or a VR headset is called a glasses-free stereoscopic method. In order to develop the glasses-free stereoscopic method for commercial purposes, first, the most efficient stereoscopic method should be selected, and second, the application of an efficient method of capturing and displaying stereoscopic images should be considered.
A ‘lenticular’ method, a ‘hologram’ method, and an ‘integral photography’ method are representative stereoscopic expression methods of the glasses-free stereoscopic method.
In particular, since production of a stereoscopic video requires a ‘3D video editing work’ after capturing multi-view (multi-angle) images, the ‘lenticular method’ may be selected as the most widely used method from the aspect of efficiency.
The lenticular method displays images using a lenticular lens array sheet (plate) and is a generalized technique; since the ‘parallax barrier method’ is a derived technique based on a similar principle, detailed description thereof will be omitted.
However, the prior art for capturing stereoscopic images mainly uses expensive cameras or broadcasting equipment. As professional devices are required, it is economically difficult for general people to use, and professional skills are also required.
In addition, ‘stereoscopic image capturing assistant devices’ for capturing stereoscopic images have been developed in the prior art, and they also have been developed to be able to arrange and mount multiple cameras to be used as a ‘rig’ for multi-view shooting or a ‘multi-view stereoscopic image capturing device’. However, since these devices are too complicated and require professional skills for operation and setting, it is very difficult for general people to use.
(Patent Document 0001) Korean Patent Application No. 10-2017-0147685, Multi 3D stereoscopic image capturing device
(Patent Document 0002) Korean Patent Application No. 10-2014-0068214, Rig for multi-view shooting
(Patent Document 0003) Korean Patent Application No. 10-2015-0078315, Camera rig device for multi-view shooting and image processing method applying the same
SUMMARY OF THE INVENTION

Since the present invention is not a method of observing stereoscopic images while wearing glasses, i.e., not a ‘dual camera image capturing method’, it is essential to capture multi-view videos using three or more cameras. The method of capturing stereoscopic images using a ‘lenticular’ or ‘parallax barrier’ method in the prior art is therefore as follows.
The prior art relates to a method of capturing images using a ‘multi-camera set’ in which multiple cameras for capturing images are prepared and installed. As a result, there is a problem in that the cameras should be handled and set individually.
Particularly, since the capturing conditions must be satisfied and the capturing directions and distances of the cameras must be set uniformly, there is a problem in that a separate setting device is required to handle the cameras.
Therefore, professional skills are required to handle these devices, and since a very detailed and prompt handling method is required, there is a problem that it is difficult for general people to approach.
In addition, such a multi-camera has a problem in that it is very difficult to adopt a method of capturing images while moving like a video camera since the volume of the set inevitably increases as multiple cameras should be installed.
Therefore, it is impractical to prepare a large number of cameras (ten or more) for multi-view shooting without a plan, since the cost and the process of converting a large amount of stereoscopic images, which requires processing as much data as the number of cameras, are a big problem.
A portable stereoscopic image capturing camera according to an embodiment of the present invention comprises: a case including an opening; a plurality of lenses disposed in a line at a first interval within the case; and a driving module for rotating each of the lenses, wherein the driving module includes a rotating body combined with the lens to rotate the lens, and may rotate each of the lenses toward a subject to form a projection intersection point.
The case may include an upper case having an opening, a lower case in which a display is disposed, and a substrate on which the plurality of lenses is disposed.
The portable stereoscopic image capturing camera may include a distance sensor disposed on the substrate at the center of an array where the plurality of lenses is disposed to detect a distance to the subject.
The rotating body may include at least one among a motor, a voice coil motor (VCM), an encoder, and a piezoelectric motor.
The driving module may include a power body for adjusting the first interval.
The rotation angle of the lens may satisfy the following equation: θ = [arctan(X/f)] * 180/π (In the above equation, θ denotes the rotation angle of the lens, X denotes a distance to the lens from a second reference line extended from the center position of the plurality of lenses toward the subject, and f denotes a distance from the projection intersection point to a first reference line connecting the plurality of lenses while the plurality of lenses is aligned in the forward direction.)
A stereoscopic image capturing system according to an embodiment of the present invention may comprise: a control module for interlacing an image input from a portable stereoscopic image capturing camera; a display layer on which the interlaced image is arranged; a transparent thick layer disposed on one side of the display layer; and a lenticular lens array disposed on one side of the transparent thick layer and including a lens array having a plurality of micro lenses, wherein the control module repeatedly arranges a set of captured images, in which the images captured by the plurality of lenses are sequentially arranged, on the display layer, and each set of the captured images includes an alpha image disposed on at least one of both ends.
The alpha image may be one among a monochromatic image, a color image, a gradation image, an image captured by a lens disposed at one end among the plurality of lenses, and an image captured by a lens disposed at the other end among the plurality of lenses.
The control module may arrange the captured image set to match the micro lenses of the lenticular lens array.
A stereoscopic image capturing system according to an embodiment of the present invention comprises: a control module for interlacing an image input from a portable stereoscopic image capturing camera; a display layer on which the interlaced image is arranged; a transparent thick layer disposed on one side of the display layer; and a parallax barrier film disposed on one side of the transparent thick layer and including a mask part that blocks light and a slit part that transmits light, wherein the control module repeatedly arranges a set of captured images, in which the images captured by the plurality of lenses are sequentially arranged, on the display layer, and each set of the captured images includes an alpha image disposed on at least one of both ends.
The alpha image may be one among a monochromatic image, a color image, a gradation image, an image captured by a lens disposed at one end among the plurality of lenses, and an image captured by a lens disposed at the other end among the plurality of lenses.
The control module may arrange the captured image set to match the slits of the parallax barrier film.
The present invention solves the problems of the prior art and provides a miniaturized and lightweight stereoscopic image capturing camera. Since stereoscopic images can be captured by simultaneously controlling micro lenses installed in one camera, the conventional problem of adjusting the shooting conditions by handling a plurality of cameras one by one is solved, and neither a separate setting device nor a method of operating such a setting device is required.
In addition, it is possible to capture images while moving quickly owing to the miniaturized camera, and it is not a special device operated by experts to capture images, but has an advantage as a product that can be mass-produced, with which ordinary people may capture images.
Meanwhile, although a game company and a cellular phone manufacturer have recently developed a technique of capturing and viewing ‘stereoscopic images’ using a dual lens, it has not been developed further commercially since the ‘width of the observation angle of view’ within which stereoscopic images can be recognized is too narrow by the nature of a dual camera.
Therefore, since it is desirable to use three or more cameras in order to stably secure the observation angle of view, the multi-camera should be easy and convenient to handle.
Therefore, in the present invention, the stereoscopic image capturing camera is manufactured using a micro lens embedded in a cellular phone, a lens for an image capturing camera, or the like, and as the lens quality of a micro camera has been improved recently, the functionality and efficiency of the stereoscopic camera of the present invention can be improved.
A small lens module is one of the core IT parts adopted in cellular phones, tablet PCs, notebook computers, web cameras, dashboard cameras, game consoles, and the like, and is used for various purposes such as motion recognition, as well as acquisition of images and videos. Accordingly, in the present invention, stereoscopic images can be easily produced using a ‘multi-view stereoscopic image capturing camera’ that general people can easily access owing to its compact size and simple adjustment method.
Hereinafter, the present invention will be described in detail with reference to the drawings.
Therefore, it is natural that the stereoscopic image capturing camera of the present invention is controlled by software (control system) and hardware (electrical device, sensor device, lens module, electronic circuit board, and the like) capable of controlling operation and performance for capturing stereoscopic images.
In addition, the circuit board 400 for capturing stereoscopic images may include an arithmetic unit, a storage device, and the like to process image data captured by each lens, and may also include a wired and/or wireless transmission/reception device as a communication device.
In addition, before describing the present invention in detail, it is necessary to clarify the terms. Briefly, a ‘camera’ is configured of a lens (aperture, shutter speed), a photographic exposure device (film), and a handling body, and a ‘digital camera’ is configured of a lens, an imaging device (CCD or CMOS), and a handling case (memory, storage device, or transmission device).
Therefore, the camera used in the present invention is manufactured to handle (control) multiple lenses 200 within one handling system, and the multiple lenses used therein are ‘micro lenses’ mainly used for an imaging camera of a cellular phone or an electronic device.
However, instead of using two or three lenses (telephoto, standard, wide angle) having different angles of view as in a cellular phone, it is configured of lenses 200 having the same angle of view and performance.
In addition, it will be characterized in that the lenses are manufactured to be connected to a rotating body or a driving device and rotate at a specific angle in a given direction according to the position of the subject.
Therefore, the difference between the present invention and the conventional method and the specific problems will be described first below.
Generally, glasses-free stereoscopic image capturing is intended mainly for movies or advertisement media. However, it is actually used only in some fields since capturing and producing the images is very complicated and expensive.
Since the glasses-free stereoscopic image capturing method arranges (disposes) multiple high-performance cameras and captures images only within the space (radius) of the arranged cameras, it is complicated and limited in space utilization and mobile shooting. In addition, it requires a process of simultaneously shooting a subject from multiple multi-view directions (multi-angle), collecting and editing the images captured by the multiple cameras, and creating the final stereoscopic image data.
As shown in the accompanying drawings, the shooting direction of each camera in the conventional arc array is set toward the focusing point (subject). However, the problem is that the focusing point r2 needs to be continuously changed from the distance f of the original arc array to the distance f2 of an arc array at a nearby place when the subject 211 moves, and in practice it is very troublesome to instantly align and handle the plurality of cameras 800 placed in the arc array, which acts as a factor that hinders prompt shooting.
In addition, in another prior art, there is a method of manufacturing and using a ‘rig’, a device for assembling multiple cameras at regular intervals in an arc or horizontal array. This device is also designed to manually orient the cameras toward the subject considering the orientation, interval, and distance to the subject. However, since the cameras should be arranged to adjust the shooting direction toward the focusing point (subject), this method also has a problem in that each camera should be individually adjusted whenever the distance to the subject changes.
In addition, another problem of the conventional method is that the images captured by each camera should be individually checked and handled one by one. ‘Multi-view stereoscopic image capturing’ is basically a method of producing a stereoscopic video screen by performing a stereoscopic (interlacing) editing process on the images captured and collected by multiple cameras. Since the images are captured by multiple cameras, the starting points of the captured images, i.e., the timelines, are inevitably different from each other, the calibration values of the cameras are different, and the light exposure values, i.e., a surrounding environmental factor, are also different, so editing the images into a stereoscopic image is very complicated and inconvenient.
It is very complicated and time-consuming to set up the cameras before capturing stereoscopic images, and since a complicated step of checking the captured images one by one should be performed to correct the images even after capturing the images, this is a serious problem.
In addition, in multi-view shooting for stereoscopic images, the timeline capturing the movement of the subject should be consistent when editing the video. However, since the images are captured by multiple cameras in multi-view directions, the switches of the cameras should be pressed at the same time, and since the reaction speed to a start signal inevitably varies, a lack of synchronization of even 0.1 second acts as a factor that hinders the stereoscopic effect due to the difference between the images perceived by the two eyes.
Therefore, the present invention will be described together with the drawings illustrated below as a method for solving problems of the prior art.
Since the stereoscopic camera of the present invention is a method for controlling and using multiple ‘micro lenses’ arranged in one camera, the problems generated by the conventional handling method and stereoscopic image production can be solved.
It has been mentioned above that the lenses of the present invention are configured of micro lenses 200. Therefore, since the interval between the arranged lenses is inevitably much smaller than the interval between the cameras 800 of the conventional method, i.e., the interval determined by the body size of the arranged cameras, the lenses can be controlled promptly, and although they are formed in a horizontal array rather than an arc array, there is no problem in simultaneously capturing images.
Actually, the arc array facing a subject located on the front side is an important factor in the arrangement of cameras for ‘multi-view stereoscopic image capturing’. The difference between the arc array and the horizontal array is that in the horizontal array the cameras obviously differ in their distance to the subject; in particular, since the distance of a camera located at an end is greater than the distance of a camera located at the center, there may be a difference in focus. The larger this error, the more an unnatural distortion phenomenon occurs when a stereoscopic image is produced, since the subject and the surrounding background appear blurry or different in size with respect to the focus.
However, unlike this phenomenon, the present invention, which uses a horizontal array structure of micro lenses, will be comparatively described below.
When a general camera for video shooting in the conventional method is assumed to have a DSLR size, such a size requires an array pitch interval of at least 13 to 15 cm no matter how closely the cameras are placed, and therefore an arc array with a large interval within the radius of curvature centered on the subject is inevitable.
On the other hand, since the lenses of the present invention have a diameter of 1 cm or less, the pitch interval between the lenses can be made incomparably small, i.e., 1 to 5 cm or less, and therefore, although a horizontal array is used, the error does not cause a problem, unlike the case of large cameras that require an arc array.
Accordingly, a numerical comparison between the errors generated by the arc array and the horizontal array of DSLR cameras and micro lenses is described below.
For the sake of simple comparison, as shown in the accompanying drawings, when it is assumed that the distance f between the micro lenses 200 and the subject 211 or the projection intersection point 210 is about 1 meter (1,000 mm) to 2 meters (2,000 mm), the measurement distance should be the distance to the position of lens ‘No. 4’, and therefore the distance should in practice be measured from the point X2, 45 mm away from the center line 250 toward the outside. That is, the distance n from the subject 211 to the end point (lens No. 4) of the horizontal array is measured, and its deviation from the distance f from the subject 211 to the arc point, i.e., the radius r, is 0.45 to 0.224 mm.
Therefore, since the error deviation of the focal length (distance to the intersection point) is 0.0445 to 0.0112%, it falls within the error level of the lens. In particular, considering the size and interval of the lenses used in a cellular phone, where the interval is only about 1 cm or less at the minimum, the error range is further reduced to about one third, so that the distance error is negligible as a result.
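To make the comparison reproducible, the geometric relationship can be written out as a short calculation. The following Python sketch is illustrative only (the function name and sample values are assumptions, not part of the specification): it computes the straight-line distance n from an off-center lens to the subject and its deviation from the arc-array radius f.

import math

def horizontal_array_error(offset_x_mm: float, distance_f_mm: float) -> tuple[float, float]:
    # Straight-line distance n from a lens offset X from the center line to the subject,
    # and its deviation from the arc radius f (every camera on an arc is exactly f away).
    n = math.hypot(distance_f_mm, offset_x_mm)
    error_mm = n - distance_f_mm
    return error_mm, 100.0 * error_mm / distance_f_mm

# Example call with assumed values (offset and subject distance in millimeters);
# the actual offset depends on the lens pitch used.
err_mm, err_pct = horizontal_array_error(offset_x_mm=45.0, distance_f_mm=1000.0)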
Therefore, since the multi-view shooting conditions of general ‘cameras’ and of ‘micro lenses’ vary greatly according to the arrangement interval as described above, the error range further increases when multi-view shooting is assumed based on an array of more cameras or lenses, e.g., an array of ten rather than four, and therefore only the horizontal array of micro lenses of the present invention is practical.
Accordingly, a plurality of lenses disposed in the case may be arranged in a line at a first interval. Here, the first interval may be greater than or equal to 25 mm and less than or equal to 35 mm. It goes without saying that the first interval may be changed according to the size of the case.
Therefore, when mass production and assembly process are taken into account considering generalization of stereoscopic shooting, it is most efficient to rotate (drive) the lenses in one camera toward one point 210 of the subject since both the economic feasibility and functionality should be considered.
Therefore, when the lenses in the horizontal array of the present invention are projected toward the subject to capture images as shown in the accompanying drawings, a projection intersection point 210 is formed.
The projection intersection point, or the position of the subject, is an element that should be frequently adjusted while shooting, since the user needs to handle all the lenses at once according to the movement or change in the position of the subject. That is, it is very inefficient to stop shooting, handle the lenses to adjust the position of the projection intersection point, and then start shooting again. As a result, shooting should allow dynamic stereoscopic production without discontinuity of the video by simultaneously controlling changes of the intersection point according to the intention of the user even in the middle of shooting.
Accordingly, the method of ‘adjusting the projection intersection point’ of the present invention, i.e., frequently adjusting to control all at once, is an important key factor for stereoscopic shooting, and a method of finding the ‘projection intersection point’ based on rotation values of the lenses is required.
The diameter of the micro lenses used in the present invention is less than 1 cm in most cases, and three or more lenses are arranged horizontally. Although ten or more lenses may be arranged for precise multi-view shooting, the number may be adjusted according to the intention of the user.
First, when an even number of lenses are arranged in an array, the same number of lenses will be arranged on the left and right sides of the center, and when an odd number of lenses are arranged, a lens is naturally provided at the center and the same number of lenses are arranged on the left and right sides in a balanced manner.
Accordingly, as shown in the accompanying drawings, the arranged lenses are equally distributed on the left and right sides of the ‘second reference line 250’, i.e., the line perpendicular to the first reference line at the center position of the arranged lenses and facing the subject, and an example is shown in which a lens ‘a’ on the left side of ‘lens No. 1’ and a lens ‘b’ on the right side of ‘lens No. 4’ may additionally be arranged in the same manner.
Therefore, since the intervals of all the lenses are symmetrical and equal around the center, the positions of ‘lens No. 2’ and ‘lens No. 3’ are ‘(−)X1’ and ‘(+)X1’ at a distance of minus (−) ½ pitch P and at a distance of plus (+) ½ pitch P, and the positions of ‘lens No. 1’ and ‘lens No. 4’ are ‘(−)X2’ and ‘(+)X2’ at a distance of ‘−(½ P+P)’ and ‘+(½ P+P)’ to be placed at symmetrical positions.
Accordingly, when the positions of ‘lens No. 3’, ‘lens No. 4’, and ‘lens b’ are determined, the positions of ‘lens No. 1’, ‘lens No. 2’, and ‘lens a’ are determined symmetrically. As a result, since each of the lenses exists at a different position, each inevitably has a different rotation value toward the intersection point.
Therefore, since the projection angle θ of each lens is in a trigonometric function relationship according to the intervals p of the lenses, the vertical distance f to the intersection point, and the distance n from the horizontally arranged lenses to the projection intersection point, the rotation angle of each lens can be controlled according to ‘Equation 1’ shown below.
θ = [arctan(X/f)] * 180/π (Equation 1)
Accordingly, the rotation value of each lens is as follows.
- Projection angle of lens No. 1: (−)θ2 = (−)[arctan(X2/f)] * 180/π
- Projection angle of lens No. 2: (−)θ1 = (−)[arctan(X1/f)] * 180/π
- Projection angle of lens No. 3: (+)θ1 = [arctan(X1/f)] * 180/π
- Projection angle of lens No. 4: (+)θ2 = [arctan(X2/f)] * 180/π
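The projection angles listed above can be computed directly from Equation 1. The following Python sketch is a minimal illustration (the function names and the sample pitch are assumptions, not part of the specification) that derives the signed offsets X for a symmetric horizontal array and the corresponding rotation angles.

import math

def lens_offsets(num_lenses: int, pitch_mm: float) -> list[float]:
    # Signed offsets X from the second reference line (array center):
    # +/- P/2, +/- 3P/2, ... for an even count; the center lens sits at 0 for an odd count.
    center = (num_lenses - 1) / 2.0
    return [(i - center) * pitch_mm for i in range(num_lenses)]

def rotation_angle_deg(offset_x_mm: float, distance_f_mm: float) -> float:
    # Equation 1: theta = [arctan(X / f)] * 180 / pi, signed toward the intersection point.
    return math.degrees(math.atan(offset_x_mm / distance_f_mm))

# Example: four lenses at an assumed 30 mm pitch with the intersection point 1 m ahead.
for x in lens_offsets(4, 30.0):
    print(f"X = {x:+.1f} mm -> theta = {rotation_angle_deg(x, 1000.0):+.3f} deg")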
The method of adjusting the lenses toward the projection intersection point may be divided into a ‘depth focus first adjustment method’ and a ‘projection intersection point first adjustment method’ according to the intention of the user.
For example, in the ‘depth focus first adjustment method’, when it is assumed that a subject is positioned at a distance f of two meters ahead, any one lens functions as a ‘measurement lens’ and adjusts the depth focus n according to the position of the subject; the resulting data value is transmitted to all the other lenses as a signal, and the depth focus n of all the lenses is set equally. In addition, since the digitized distance of the depth focus n makes it possible to know the ‘vertical distance to the intersection point’ f, the lenses simultaneously rotate toward the ‘projection intersection point’ according to this value.
Since any one of the lenses is determined as the measurement lens, in the case of an odd number of lenses, the lens located at the center functions as the ‘measurement lens’, and in the case of an even number of lenses, it is desirable to select one of the two lenses located in the middle, e.g., ‘lens No. 2’ or ‘lens No. 3’ in the example described above.
In addition, as another method, as shown in the accompanying drawings, a distance sensor 420 disposed at the center of the lens array may be used to measure the distance to the subject. The distance sensor 420 may be (1) an ultrasonic sensor (sound waves), (2) an infrared sensor (infrared light), (3) a LiDAR sensor (laser), (4) a radar sensor (radio waves), (5) a camera sensor (passive sensor, visible light), or the like, and can be selected and applied according to the intention of the user. Therefore, the rotation of the lenses toward the ‘depth focus’ and the ‘projection intersection point’ can be collectively controlled according to the measured distance value.
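As an illustration of this collective control flow, the sketch below shows the ‘depth focus first adjustment method’ in Python; the function and field names are hypothetical, since the actual sensor and motor interfaces are not specified in this document.

import math

def adjust_all_lenses(measured_distance_mm: float, offsets_mm: list[float]) -> list[dict]:
    # One measured distance f is broadcast to every lens as the common depth focus,
    # then each lens is rotated by Equation 1 toward the projection intersection point.
    commands = []
    for x in offsets_mm:
        commands.append({
            "offset_x_mm": x,
            "depth_focus_mm": measured_distance_mm,                            # same focus for all lenses
            "rotation_deg": math.degrees(math.atan(x / measured_distance_mm)),  # Equation 1
        })
    return commands

# Hypothetical usage: the center distance sensor reports 1,500 mm and four lenses
# sit at +/-15 mm and +/-45 mm from the center line.
commands = adjust_all_lenses(1500.0, [-45.0, -15.0, 15.0, 45.0])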
As a result, in a stereoscopic image, objects around the subject (projection intersection point) are shown three-dimensionally, objects in front of the subject appear ‘projected’, and objects behind the subject are perceived with a ‘sense of depth’. In addition, when the depth focus is continuously adjusted according to the movement of the subject, the continuously changing surrounding environment may produce, together with the subject, a dynamic stereoscopic image.
In addition, the ‘projection intersection point first adjustment method’ is used when the projection intersection point is arbitrarily set in advance by the user to the distance of a specific location before shooting, and it is a method of three-dimensionally showing a subject that moves forward and backward within a stereoscopic environment set around the projection intersection point. That is, the subject looks increasingly ‘protruding’ when it moves forward from the projection intersection point, and is produced with an increasing ‘sense of depth’ when it moves backward from the projection intersection point.
As a result, it is a method of shooting by directing a subject that moves around a given distance to look dynamic and stereoscopic.
First, a driving motor that adjusts the ‘depth focus’ and a ‘lens assembly’ in which several lens sheets are overlapped in layers are configured inside the micro lens of the present invention. The ‘driving motor’ adjusts the depth focus by adjusting the intervals between the lens sheets. A voice coil motor (VCM), an encoder, a piezoelectric motor, or the like is used as the motor for adjusting the depth focus, and such motors are mainly used in ‘cellular phone’ cameras.
In addition, among the ‘micro lenses’ usable in the present invention, there also exists an ‘infinity focus lens’ that does not have a depth focus control function, and as long as the ‘projection intersection point’ is properly formed when capturing stereoscopic images, using an ‘infinity focus lens’ is not a problem. However, it is natural that the quality is lower than that of a lens having a depth focus control function.
In addition, in the present invention using several small lenses 200, each of the lenses rotates as much as a predetermined angle by a ‘rotating body module 300’ or a rotating body 500 combined with the lens.
In addition, in the embodiment shown in the accompanying drawings, worm gears 305 are inserted at regular intervals on the rotation shaft 320 connected to the motor, and lens holders 310 devised to fix and rotate each of the lenses, together with gearwheels 321 and 322 inserted on the holder rotation shafts 311, are connected to the worm gears 305. It can be seen that the sizes of the gearwheels 321 and 322 are different; this is because the ‘gear ratio’ is adjusted according to the ‘size’ or the ‘number of teeth’ of the gearwheels so that the lenses may rotate simultaneously at a specific angle θ.
However, since each lens needs to move at a very precise angle of less than 0.1 degree, although a worm gear and a gearwheel may be used, it is preferable to use the rotating body 500 or a small motor that moves each lens individually, as shown in the accompanying drawings.
The lens 200 according to this embodiment is combined with a rotation driving body 500. This is a method that may use a voice coil motor (VCM), an encoder, an ultrasonic piezoelectric motor, or the like to control precise rotation; that is, it is a driving body that may rotate or move linearly by ultrasonic or electromagnetic force.
Accordingly, since the ‘rotation driving body 500’ may be a rotating body surrounding the body of the ‘lens’ according to selection of the user or a micro rotating body moving inside the lens, the configuration method may vary according to the intention of the user.
In addition, a configuration for rotating each lens and for increasing or decreasing the distance between the lenses at regular intervals is provided. This is a method for compensating for the weakening of the stereoscopic effect perceived by the naked eye when viewing a stereoscopic image captured while the subject or the projection intersection point is located at a distance.
Generally, in the case of a small lens, a distance of about 1 meter does not cause a big problem since the shooting angle of view is relatively wide. However, at distances greater than 1 meter, the stereoscopic effect gradually decreases. Since this is due to the difference in the projection angle θ of the subject captured by each lens, a distant subject 210 and 211 shows only a slight change (difference in the angle of view) in the captured image shown on the screen, whereas a subject 210 and 211 captured at a close distance shows an obvious change (difference in the angle of view). This appears even more severe with a wide-angle lens than with a telephoto lens, and only a weak sense of depth can be perceived from the distant subject 210 and 211, so a rich stereoscopic effect is hardly felt.
Therefore, in this case, the projection angle θ is secured by increasing the interval X between the lenses in proportion to the increased distance f to the projection intersection point, so that the stereoscopic effect can be sufficiently felt, and as a result it is possible to shoot a subject so that a ‘sense of depth’ and a ‘sense of protrusion’ can be observed even when the subject is placed at a distance.
Accordingly, as the linear movement method of the present invention for adjusting the first interval, which is the interval between a plurality of lenses, a ‘worm gear’ rotation method by a small motor or an ‘ultrasonic piezoelectric motor’ method that moves based on micro-vibration of frequency can be used.
Accordingly, as shown in the accompanying drawings, the worm gear 305 connected to the driving body 510 may be controlled to move along the linear saw teeth 511 while rotating, and when the driving body 510 is configured of an ‘ultrasonic piezoelectric motor’, it is natural that linear movement, in addition to more precise control, is possible without the worm gear and the linear saw teeth.
In addition, as shown in the accompanying drawings, the lens used in the present invention may be configured as a lens capable of a ‘zoom’ function; this is manufactured as one ‘zoom lens module’ configured of a wide-angle lens 201, a standard lens 202, and a telephoto lens 203, and the ‘zoom lens module’ may be manufactured to be combined with a rotating body so as to rotate toward the projection intersection point. This is a function that allows a user to conveniently capture more dynamic stereoscopic images.
The stereoscopic image capturing camera and the stereoscopic image capturing system of the present invention can be manufactured in connection with a cellular phone, and it is a method that can save much hardware manufacturing cost since the power supply, display, and arithmetic unit of the cellular phone can be used as they are.
This allows a camera configured of a minimum stereoscopic image capturing device to be connected with a cellular phone through wired or wireless communication, and it can be manufactured using a portable tablet PC device or the like, instead of the cellular phone. Therefore, the conditions and functions for capturing stereoscopic images can be adjusted by executing the software (or a dedicated app) that may handle the camera on the cellular phone, while seeing the cellular phone screen.
In addition, when the arithmetic unit of the cellular phone is utilized, this may function as an innovative method that stores the images (data) captured by the camera in the cellular phone, converts the captured images into stereoscopic images (interlacing), and transmits and views the stereoscopic image data.
‘3D dimensioning (interlacing)’ is generally an essential work for producing a stereoscopic screen: it is a process of compressing images captured at various angles into a single image, and this process of converting the data to be seen three-dimensionally is referred to as ‘3D dimensioning’ or ‘interlacing’ work.
Accordingly, the present invention requires a work process for making ‘image data’ of various angles captured by the lenses into one ‘image data’, and the result should be visible through the screen. Depending on how many lenses (points of view) are used to capture the image, the processing speed and result values, such as capacity of data compression, stability of stereoscopic perspective angle of view, quality of resolution, and the like, appear differently.
For example, data should be processed and stored or transmitted within the shortest possible time (or immediately), and the capacity of processing data appears to be different according to first, how many multi-view screens will be applied, second, how many types of correction works (timeline, color, brightness, position, angle of view, etc.) will be used, third, how many points of view will be used as a basis for the observer's (perspective) point of view, and fourth, how much sharpness of a stereoscopic observation perception resolution will be maintained in order to perform the interlacing work.
For example, when an image is captured by ten cameras or lenses (10-viewpoint shooting), the ten images are split, classified, combined, and repeatedly arranged within one lenticular valley (lens) pitch. This is explained below in comparison with a case of applying an image captured by five cameras, on the assumption that the screen is observed on a 400 lpi display.
Since the display observation resolution of an image captured by 10-viewpoint shooting is based on 400 lpi, it is the same as watching a screen divided into ten captured images on a 400 lpi display. As a result, ten images of 40 lpi each are displayed to be seen according to the perspective angle, which amounts to seeing a stereoscopic image configured of a screen having a resolution of 40 lines (40 lpi).
In addition, since 5-viewpoint shooting gives 400/5 = 80 lpi, a stereoscopic image expressed at 80 lines per inch (80 lpi) is perceived. Comparing in detail, although the original resolution of the display screen is 400 lpi and the thickness of one line is 0.0635 mm, the screen viewed with ten viewpoints is 40 lpi, which is the same as seeing a screen configured of pixels with a thickness of 0.635 mm, while the screen viewed with five viewpoints is 80 lpi, which is the same as seeing a screen configured of pixels with a thickness of 0.3175 mm.
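The figures above follow from dividing the display resolution by the number of viewpoints (25.4 mm per inch); the short Python sketch below reproduces that arithmetic and is illustrative only.

def per_view_resolution(display_lpi: float, num_viewpoints: int) -> tuple[float, float]:
    # Lines per inch allotted to one viewpoint and the resulting line thickness in mm.
    view_lpi = display_lpi / num_viewpoints
    return view_lpi, 25.4 / view_lpi

# 400 lpi display: 10 viewpoints -> 40 lpi (0.635 mm), 5 viewpoints -> 80 lpi (0.3175 mm).
for n in (10, 5):
    lpi, mm = per_view_resolution(400.0, n)
    print(f"{n} viewpoints: {lpi:.0f} lpi, {mm:.4f} mm per line")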
A problem occurs here. It is desirable that the pixel size of a natural video perceived by the eye stays within about 0.35 mm, although this varies more or less depending on the viewing distance; when the pixel becomes larger than this, the perceived resolution is lowered and the screen becomes uncomfortable to see. Accordingly, it is preferable to see a stereoscopic screen of a resolution comfortable to the eye, even though the angle of view is as narrow as five viewpoints, rather than the wider multi-view angle obtained by 10-viewpoint shooting.
In particular, in the case of watching an image on a small display product such as a cellular phone, the angle of view does not need to be wide since, by the nature of the product, it is observed from the viewpoint of one person at a time. As a result, it is desirable to minimize the number of perspective viewpoints so that the image can be observed at an optimal resolution.
However, no matter how much the per-view resolution is increased within a given display resolution by minimizing the number of perspective viewpoints, when fewer than three viewpoints are used, watching the screen is dizzying and uncomfortable due to a ‘jumping phenomenon’ that appears repeatedly even with a slight movement.
The ‘jumping phenomenon’ is a phenomenon in which the screen, although shown three-dimensionally within a certain viewing angle, is not shown three-dimensionally at other angles; it appears at the point where the screen looks dizzy, outside the sequential array of images that looks three-dimensional on the screen.
As shown in the accompanying drawings, the captured images are repeatedly displayed as pixel images at regular intervals according to the resolution (lpi) condition of the display. As a result, the displayed images can be seen (observed) through the lenticular lens array 30 or a parallax barrier 40.
As shown in the accompanying drawings, in the conventional arrangement a stable stereoscopic screen may be seen at the following perspective view positions: viewpoint 1 (images 1 and 2), viewpoint 2 (images 2 and 3), viewpoint 3 (images 3 and 4), and viewpoint 4 (images 4 and 5). As a result, a phenomenon occurring at the points where the view looks three-dimensional and then dizzy according to the perspective view position, i.e., the ‘jumping phenomenon’, is generated repeatedly.
Accordingly, in the present invention, the above problems can be solved as shown in the accompanying drawings and described below.
A portable stereoscopic image capturing system according to the present invention may include a control module for 3D dimensioning an image captured by a portable stereoscopic image capturing camera. The control module may convert the captured 5-viewpoint images (five moving images) into images of six or more viewpoints during the interlacing process. The control module may produce an image that does not look dizzy by performing interlacing so as to include an alpha image 15, ‘α’ and/or ‘β’, together with the images 1, 2, 3, 4, and 5 captured by the lenses A1, A2, A3, A4, and A5.
Referring to the accompanying drawings, the alpha image 15 is a third, general image that is not seen three-dimensionally. It can be generated using a monochrome image, a color image, a gradation image, or another picture, or a copy of a captured image, i.e., ‘image No. 1’ or ‘image No. 5’, may be repeated and used as the alpha image 15. The alpha image and the captured images are then arranged in order so that the left eye or the right eye may see them.
For example, as shown in the drawing, when viewpoints are classified according to the perspective angle of observation, an array of seven viewpoints is created in order of viewpoint 1 (α and 1), viewpoint 2 (1 and 2), viewpoint 3 (2 and 3), viewpoint 4 (3 and 4), viewpoint 5 (4 and 5), viewpoint 6 (5 and β), and viewpoint 7 (α and β). In the case where the alpha image 15 is white (or a black or monochrome image), ‘α’ or ‘β’ is seen overlapped with captured image No. 1 or No. 5 only when viewed from ‘viewpoint 1’ or ‘viewpoint 6’. That is, as the left and right eyes see the captured image and the white image overlapped with each other, an image that gradually gets brighter and less dizzy is seen.
The reason is that when the right eye sees image ‘No. 1’ and the left eye sees image ‘α’, it will feel like a process of changing the subject screen to white, and as a result, although a white image is seen instantaneously, it does not look dizzy.
In addition, since it is recognized that the stereoscopic screen is completely changed to a white screen at viewpoint 7 (α and β) that is seen immediately thereafter, the white screen is instinctively perceived as a boundary surface (boundary angle of view) when a person sees it. Accordingly, the person will try to move himself or herself to ‘viewpoints 2 to 5’ where he or she can see well three-dimensionally, i.e., to the center where he or she can see the best three-dimensionally, through the instinctive learning experience, and will instinctively try to move the cellular phone screen or the like held in the hand to the ‘position where he or she can see well three-dimensionally ’ to see the screen. As a result, the ‘alpha image 15’ functions as an important criterion for unconscious self-correction of the perspective angle of view by the instinctive behavioral dynamics that a human has.
Therefore, as shown in the accompanying drawings, the viewpoints are perceived in the following order:
- viewpoint 1 recognizing α&1=blurred white screen,
- viewpoint 2 recognizing 1&2=stereoscopic screen,
- viewpoint 3 recognizing 2&3=stereoscopic screen,
- viewpoint 4 recognizing 3&4=stereoscopic screen,
- viewpoint 5 recognizing 4&5=stereoscopic screen,
- viewpoint 6 recognizing 5&β=blurred white screen,
- viewpoint 7 recognizing α&β=white screen,
- viewpoint 1 recognizing α&1=blurred white screen,
- viewpoint 2 recognizing 1&2=stereoscopic screen, and
- viewpoint 3 recognizing 2&3=stereoscopic screen, and
- this is repeated thereafter.
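The seven-viewpoint cycle above can be sketched in code. The following Python fragment is a simplified illustration of the arrangement logic only (the names are assumptions and it is not the actual interlacing software of the invention): one image set per lenticular or barrier pitch is built by attaching the alpha images to the captured views, and the perspective viewpoints correspond to adjacent pairs in the repeating sequence.

def build_image_set(captured: list[str], alpha_front: str = "alpha", alpha_back: str = "beta") -> list[str]:
    # One repeating set of sub-columns per pitch: leading alpha image, the captured
    # views in order, and a trailing alpha image.
    return [alpha_front, *captured, alpha_back]

def perspective_viewpoints(image_set: list[str]) -> list[tuple[str, str]]:
    # Adjacent left/right pairs seen while moving across the viewing zone,
    # including the wrap-around pair between neighboring sets.
    n = len(image_set)
    return [(image_set[i], image_set[(i + 1) % n]) for i in range(n)]

# Five captured views plus 'alpha' and 'beta' give the seven viewpoints described above.
pairs = perspective_viewpoints(build_image_set(["1", "2", "3", "4", "5"]))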
In addition, as shown in the accompanying drawings, a display array structure of a total of eight viewpoints is exemplified, configured of four images (1, 2, 3, 4) captured by four lenses, two monochromatic images β, and two copy images α1 & α4 as the alpha image 15.
This will be seen in the following order:
- viewpoint 1 recognizing β&α1=white and copy of image No. 1=blurred white,
- viewpoint 2 recognizing β&1=white and image No. 1=blurred white,
- viewpoint 3 recognizing α1&2=copy of image No. 1 and image No. 2=stereoscopic screen,
- viewpoint 4 recognizing 1&3=image No. 1 and image No. 3=stereoscopic screen,
- viewpoint 5 recognizing 2&4=image No. 2 and image No. 4=stereoscopic screen,
- viewpoint 6 recognizing 3&α4=image No. 3 and copy of image No. 4=blurred white screen,
- viewpoint 7 recognizing 4&β=image No. 4 and white=blurred white screen,
- viewpoint 8 recognizing α4&β=copy of image No. 4 and white=blurred white screen,
- viewpoint 1 recognizing β&α1=white and copy of image No. 1=blurred white, and
- viewpoint 2 recognizing β&1=white and image No. 1=blurred white, and
- this is repeated thereafter.
Accordingly, this is a method of securing four reliable viewpoints and angles of view that can observe a stereoscopic object without dizziness even when an image is captured using a small number of lenses according to a combination of arranging the alpha images.
In addition, according to the intention of the user, there may be a case where only one alpha image 15, i.e., ‘α’, is used, and since an array of six viewpoints of perspective position can be made in order of α&1, 1&2, 2&3, 3&4, 4&5, and 5&α, the arranging method can be sufficiently applied.
Therefore, even though an array of at least three or more lenses is used, this method removes the disadvantage of the unpleasant dizziness that occurs in the conventional lenticular or parallax barrier method, and the application of the alpha image 15 is an innovative way for the stereoscopic image capturing camera of the present invention to provide a wider perspective angle of view and to improve the data transmission and interlacing processing speed although a minimum number of lenses is used.
In addition, according to the present invention, interlacing is performed by including the alpha image 15 together with a stereoscopic image, and as a result thereof, the stereoscopic image data produced can be stored or transmitted, and the data can be reproduced.
The data processing and display method is implemented in the same manner as the structure of the lenticular lens array 30 described above, and the parallax barrier 40 described below may also be used.
As a generalized technique, the parallax barrier method, unlike the lenticular lens array, shows an image three-dimensionally by blocking and transmitting the light of the display on the rear side so that the perspective angles of both eyes become different, and it is implemented to be sensitive to the pixel size of the display.
The parallax barrier film 40 is divided into a mask portion 41 that blocks light and a slit portion 42 that transmits light, and is manufactured at pitch intervals 16 of a repeated pattern. The size of the slit 42 through which light is transmitted is mainly determined to be around the ‘width’ of one display pixel, and since the area of the mask 41 generally increases as the number of viewpoints to be observed increases, the area of the slit 42 decreases relatively. Accordingly, this has the adverse effect of darkening the entire screen.
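Since the slit is about one pixel wide and the rest of each pitch is masked, the transmitted light falls roughly as the inverse of the number of viewpoints. The Python snippet below is only a first-order estimate under that assumption, not a photometric model.

def barrier_open_fraction(num_viewpoints: int) -> float:
    # First-order estimate: one pixel-wide slit per pitch of num_viewpoints pixel columns.
    return 1.0 / num_viewpoints

# Roughly 20% of the backlight passes for 5 viewpoints, about 10% for 10 viewpoints.
for n in (5, 10):
    print(f"{n} viewpoints -> ~{barrier_open_fraction(n):.0%} transmitted")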
For example, as shown in the accompanying drawings, when a multi-view image is produced with ten or more viewpoints, the visibility is lowered as the display gets dark, and the loss due to excessive power consumption increases beyond the benefit obtained by widening the observer's perspective angle of view, so that the stereoscopic screen leads to a dark view.
Accordingly, when a case of seeing with a tool such as a small personal display or a cellular phone screen is considered, it is preferable to use three to six viewpoints for the images captured by the lenses, and produce an image with a total of ten viewpoints or less, including at least one alpha image 15.
Although the technical spirits of the present invention have been described above together with the accompanying drawings, this is an embodiment of the present invention described as an example and is not intended to limit the present invention, and it is obvious that anyone skilled in the art can make various modifications and imitations without departing from the scope of the technical spirits of the present invention, and such modifications and imitations fall within the scope of the technical spirits of the present invention.
Claims
1. A portable stereoscopic image capturing camera comprising:
- a case including an opening;
- a plurality of lenses disposed in a line at a first interval within the case; and
- a driving module for rotating each of the lenses, wherein the driving module includes a rotating body combined with the lens to rotate the lens, and rotates each of the lenses toward a subject to form a projection intersection point, and a rotation angle of the lens satisfies the following equation: θ=[arc tan (X/f)]*180/π
- (In the above equation, θ denotes the rotation angle of the lens, X denotes a distance to the lens from a second reference line extended from a center position of the plurality of lenses toward the subject, and f denotes a distance from the projection intersection point to a first reference line connecting the plurality of lenses while the plurality of lenses is aligned in a forward direction.)
2. The camera according to claim 1, wherein the case includes an upper case having an opening, a lower case in which a display is disposed, and a substrate on which the plurality of lenses is disposed.
3. The camera according to claim 2, comprising a distance sensor disposed on the substrate at a center of an array where the plurality of lenses is disposed to detect a distance to the subject.
4. The camera according to claim 1, wherein the rotating body includes at least one among a motor, a voice coil motor (VCM), an encoder, and a piezoelectric motor.
5. A system according to claim 11, further comprising:
- a transparent thick layer disposed on one side of the display layer; and
- a lenticular lens array disposed on one side of the transparent thick layer and including a lens array having a plurality of micro lenses.
6. The system according to claim 5, wherein the alpha image is one among a monochromatic image, a color image, a gradation image, an image captured by a lens disposed at one end among the plurality of lenses, and an image captured by a lens disposed at the other end among the plurality of lenses.
7. The system according to claim 6, wherein the control module arranges the captured image set to match the micro lenses of the lenticular lens array.
8. A stereoscopic image capturing system comprising:
- a control module for interlacing an image input from a portable stereoscopic image capturing camera;
- a display layer on which the interlaced image is arranged;
- a transparent thick layer disposed on one side of the display layer; and
- a parallax barrier film disposed on one side of the transparent thick layer and including a mask part that blocks light and a slit part that transmits light, wherein
- the portable stereoscopic image capturing camera includes a plurality of lenses disposed in a line at a first interval, and a driving module for rotating each of the lenses toward a subject to form a projection intersection point, wherein
- the control module repeatedly arranges a set of captured images, in which the images captured by the plurality of lenses are sequentially arranged, on the display layer, and each set of the captured images includes an alpha image disposed on at least one of both ends.
9. The system according to claim 8, wherein the alpha image is one among a monochromatic image, a color image, a gradation image, an image captured by a lens disposed at one end among the plurality of lenses, and an image captured by a lens disposed at the other end among the plurality of lenses.
10. The system according to claim 8, wherein the control module arranges the captured image set to match the slits of the parallax barrier film.
11. A stereoscopic image capturing system comprising:
- a control module for interlacing an image input from a portable stereoscopic image capturing camera; and
- a display layer on which the interlaced image is arranged, wherein
- the portable stereoscopic image capturing camera includes a plurality of lenses disposed in a line at a first interval, and a driving module for rotating each of the lenses toward a subject to form a projection intersection point, wherein
- the control module repeatedly arranges a set of captured images, in which the images captured by the plurality of lenses are sequentially arranged, on the display layer, and each set of the captured images includes an alpha image disposed on at least one of both ends.
12. The system according to claim 11, wherein the alpha image is one among a monochromatic image, a color image, a gradation image, an image captured by a lens disposed at one end among the plurality of lenses, and an image captured by a lens disposed at the other end among the plurality of lenses.
Type: Application
Filed: Mar 14, 2023
Publication Date: Sep 14, 2023
Inventor: Hyunin CHUNG (Seoul)
Application Number: 18/183,366