ELECTRONIC SYSTEM AND METHOD FOR GENERATING PANORAMIC LIGHT FIELDS

An electronic system for generating panoramic light fields is provided. The electronic system includes a camera, a depth estimation circuit, and a light field generation circuit. The camera is configured to capture a panoramic image of a scene. The depth estimation circuit is configured to estimate panoramic depth information of the scene based on the panoramic image captured by the camera. The light field generation circuit is configured to generate a panoramic light field based on the estimated panoramic depth information.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/286,036, filed Dec. 4, 2021, the entirety of which is incorporated by reference herein.

BACKGROUND OF THE INVENTION

Field of the Invention

The present disclosure relates in general to an electronic system, and it relates in particular to an electronic system and a method for generating panoramic light fields.

Description of the Related Art

Three-dimensional (3D) visualization devices, such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) devices, generate 3D sensation based on the stereoscopic vision principle, and render a panoramic scene (i.e., a scene that incorporates a wide viewing angle) at a single depth. Since the distance between the display panel and the eye of the viewer is fixed, the accommodation of the eyes does not change with the vergence. This leads to the effect of vergence accommodation conflict (VAC), which can cause discomfort such as visual fatigue and eye strain to viewers, or even disorient viewers who are not accustomed to the 3D visualization effect.

A light field display device is a display device that uses light field technology, allowing the observer to see a light field with depth perception. The light field with depth perception can avoid the impact of vergence accommodation conflict. Therefore, there is a need for an electronic system and a method capable of generating a panoramic light field for the light field display device to display a more comfortable scene to the observer.

BRIEF SUMMARY OF THE INVENTION

An electronic system for generating panoramic light fields is provided by an embodiment of the present disclosure. The electronic system includes a camera, a depth estimation circuit, and a light field generation circuit. The camera is configured to capture a panoramic image of a scene. The depth estimation circuit is configured to estimate panoramic depth information of the scene based on the panoramic image captured by the camera. The light field generation circuit is configured to generate a panoramic light field based on the estimated panoramic depth information.

In some embodiments, the camera is mounted on a rotational mechanism used for revolving the camera. The camera is configured to capture a sequence of image sets of the scene from a plurality of points on a first trajectory of the camera. The camera is further configured to stitch images in each image set to obtain a stitched image. The camera is further configured to transform the stitched image to the rectangular panoramic image using an equirectangular projection method.

In some embodiments, the rotational mechanism is further used for lifting or lowering the camera. The camera is further configured to capture another sequence of image sets of the scene from a plurality of points on a second trajectory of the camera.

In further embodiments, the first trajectory and the second trajectory are circles. The first trajectory and the second trajectory may have different radiuses.

In some embodiments, the depth estimation circuit is further configured to estimate the depth information using a convolutional neural network-based model. In further embodiments, the convolutional neural network uses a distance-based kernel.

In some embodiments, the light field generation circuit is further configured to generate the panoramic light field using a convolutional neural network-based model.

In an embodiment, each light ray in the panoramic light field is addressed by two sets of cylindrical coordinates. In another embodiment, each light ray in the panoramic light field is addressed by two sets of spherical coordinates.

An electronic system for generating panoramic light fields is provided by another embodiment of the present disclosure. The electronic system includes a camera and a processor. The camera is configured to capture a panoramic image of a scene. The processor is configured to estimate panoramic depth information of the scene based on the panoramic image captured by the camera, and to generate a panoramic light field based on the estimated panoramic depth information.

A method for generating panoramic light fields is provided by an embodiment of the present disclosure. The method is for use in an electronic system that includes a camera. The method includes the step of capturing a panoramic image of a scene by the camera. The method further includes the step of estimating panoramic depth information of the scene based on the panoramic image. The method further includes the step of generating a panoramic light field based on the panoramic depth information.

The embodiments of the present disclosure provide a panoramic light field for the light field display device to directly project light rays from various directions into the viewer's eyes, which is more in line with the way humans observe the world, so that the effect of VAC can be mitigated.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure can be better understood by reading the subsequent detailed description and examples with references made to the accompanying drawings. Additionally, it should be appreciated that in the flow diagrams of the present disclosure, the order of execution of the blocks can be changed, and/or some of the blocks can be changed, eliminated, or combined.

FIG. 1A is a block diagram illustrating an electronic system for generating panoramic light fields, according to an embodiment of the present disclosure.

FIG. 1B is a block diagram illustrating an electronic system for generating panoramic light fields, according to another embodiment of the present disclosure.

FIG. 2 is a flow diagram illustrating a method for generating panoramic light fields, according to an embodiment of the present disclosure.

FIG. 3 shows an exemplary rotational mechanism on which a camera is mounted, according to an embodiment of the present disclosure.

FIG. 4 shows an exemplary top view of the camera revolving around the rotation center.

FIG. 5 is a flow diagram illustrating the steps for capturing the panoramic images in greater detail, according to an embodiment of the present disclosure.

FIG. 6 shows an exemplary equirectangular projection, according to an embodiment of the present disclosure.

FIG. 7 is a schematic diagram illustrating the operation of the CNN-based depth estimation model, according to an embodiment of the present disclosure.

FIG. 8A shows the spherical representation of an exemplary light ray, according to an embodiment of the present disclosure.

FIG. 8B shows the two-cylinder representation of an exemplary light ray, according to an embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE INVENTION

The following description provides embodiments of the invention, which are intended to describe the basic spirit of the invention, but is not intended to limit the invention. For the actual inventive content, reference must be made to the scope of the claims.

In each of the following embodiments, the same reference numbers represent identical or similar elements or components.

It must be understood that the terms “including” and “comprising” are used in the specification to indicate the existence of specific technical features, numerical values, method steps, process operations, elements and/or components, but do not exclude additional technical features, numerical values, method steps, process operations, elements, components, or any combination of the above.

Ordinal terms used in the claims, such as “first,” “second,” “third,” etc., are only for convenience of explanation, and do not imply any precedence relation between one another.

The term “panoramic” used herein is intended to cover any wide-angle view, such as a 360-degree view, a 300-degree view, a 280-degree view, or the like; the present disclosure is not limited thereto.

FIG. 1A is a block diagram illustrating the electronic system 10A for generating panoramic light fields, according to an embodiment of the present disclosure. As shown in FIG. 1A, the electronic system 10A may include a camera 11, a depth estimation circuit 14A, and a light field generation circuit 15A.

The camera 11 may include a plurality of lenses, each of which is used for capturing images with a specific angle of view. In the example of FIG. 1A, the camera 11 is drawn as a dual-fisheye camera having two lenses, namely the lens 111 and the lens 112. Each of the two lenses 111 and 112 produces strong visual distortion and is intended to capture images with a 180-degree (or substantially 180-degree) angle of view. The orientation of the lens 111 can be opposite to that of the lens 112, so that the two lenses together can complementarily capture a 360-degree (or substantially 360-degree) image. However, it should be noted that the number of lenses of the camera 11 is not limited by the present disclosure. For example, the camera 11 can alternatively be a quad camera having four lenses, each of which is used for capturing images with a 90-degree (or substantially 90-degree) angle of view.

The camera 11 may further include an image processing unit 113. The image processing unit 113 can be a specialized microprocessor dedicated to performing specific image processing tasks, such as stitching a set of images (also referred to as an “image set” herein) captured by the lens 111 and the lens 112.

The depth estimation circuit 14A is a specifically designed hardware that can be implemented by one or more electronic circuits, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or the like, but the present disclosure is not limited thereto. The depth estimation circuit 14A is electrically connected to the camera 11, so as to obtain the panoramic image captured by the camera 11. The functionalities of the depth estimation circuit 14A will be described later.

The light field generation circuit 15A is a specifically designed hardware that can be implemented by one or more electronic circuits, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or the like, but the present disclosure is not limited thereto. The light field generation circuit 15A is electrically connected to the depth estimation circuit 14A, so as to obtain the depth information of the scene estimated by the depth estimation circuit 14A. The functionalities of the light field generation circuit 15A will be described later.

In an embodiment, the depth estimation circuit 14A and the light field generation circuit 15A can be integrated into electronic circuitry 145, which can be implemented by an integrated circuit, an FPGA, or a system-on-chip (SoC), but the present disclosure is not limited thereto.

FIG. 1B is a block diagram illustrating the electronic system 10B for generating panoramic light fields, according to another embodiment of the present disclosure. As shown in FIG. 1B, the electronic system 10B may include a camera 11, a processor 12, and a storage device 13.

The camera 11 in the electronic system 10B is substantially the same as the camera 11 in the electronic system 10A, which has been described previously, and thus the description is not repeated here.

The processor 12 can be a central processing unit (CPU) or a graphics processing unit (GPU) capable of executing instructions and performing high-speed computation. The storage device 13 may include a non-volatile memory device, such as a hard disk drive, a solid-state disk, a flash memory, or a read-only memory, but the present disclosure is not limited thereto. The storage device 13 stores the depth estimation module 14B and the light field generation module 15B, each of which is a software module that includes a set of instructions executable by the processor 12. The processor 12 may be connected to the storage device 13 through a system bus or a network, such as a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, or any combination thereof. The processor 12 is configured to load the depth estimation module 14B and the light field generation module 15B from the storage device 13 to execute corresponding steps or operations of the disclosed method for generating panoramic light fields, which will be introduced later.

In some embodiments, the processor 12 and the storage device 13 may be included in a computer device 123. The computer device 123 can be a personal computer (e.g., a laptop computer or a notebook computer) or a server computer running an operating system (e.g., Windows, Mac OS, Linux, UNIX, etc.). The computer device 123 may communicate with the camera 11 through wired transmission interfaces and/or wireless transmission interfaces (not shown in FIG. 1B), in order to obtain the images captured by the camera 11. The wired transmission interfaces may include High Definition Multimedia Interface (HDMI), DisplayPort (DP) interface, embedded DisplayPort (eDP) interface, Universal Serial Bus (USB) interface, USB Type-C interface, Thunderbolt interface, Digital Visual Interface (DVI), or a combination thereof. The wireless transmission interfaces may include a fifth generation (5G) wireless system, Bluetooth, Wi-Fi, Near Field Communication (NFC) interface, etc., but the present disclosure is not limited thereto.

FIG. 2 is a flow diagram illustrating the method 200 for generating panoramic light fields, according to an embodiment of the present disclosure. The method 200 is for use in the electronic system 10A of FIG. 1A or in the electronic system 10B of FIG. 1B. As shown in FIG. 2, the method 200 may include steps 201-203. Step 201 is executed by the camera 11 in FIG. 1A or FIG. 1B. Steps 202 and 203 are executed by the depth estimation circuit 14A and the light field generation circuit 15A in FIG. 1A, respectively. Alternatively, steps 202 and 203 can be performed by software modules, namely the depth estimation module 14B and the light field generation module 15B in FIG. 1B, respectively.

The method 200 starts from step 201. In step 201, a panoramic image of a scene (or a scenario) is captured by the camera 11. Then, the method 200 proceeds to step 202.

In step 202, panoramic depth information of the scene is estimated based on the panoramic image captured by the camera 11. Then, the method 200 proceeds to step 203.

In step 203, a panoramic light field is generated based on the panoramic depth information estimated in step 202.

In some embodiments, the depth information can be a depth map containing information relating to the distance of the surfaces of objects in the scene from a viewpoint, namely the location of the camera 11.

One panoramic image alone is not sufficient to provide the information about the light field of the scene, because the construction of the light field requires sampling the radiance of light rays emitted from the scene along different directions. In fact, sampling a large number of panoramic images from different angles of view is required to gather the information about the light field. The techniques for sampling the panoramic images of the scene are introduced herein.

FIG. 3 shows an exemplary rotational mechanism 30 on which a camera 11 is mounted, according to an embodiment of the present disclosure. As shown in FIG. 3, the rotational mechanism 30 may include a supporting part 31, a rotation part 32, and a leg part 33. The supporting part 31 may include, for example, a plate or a platform, for supporting the camera 11 and maintaining the posture of the camera 11 (i.e., preventing tilting or collision). The rotation part 32 may include, for example, a rotation shaft, controlled (e.g., manually or electrically) to rotate the supporting part 31, causing the camera 11 mounted on the supporting part 31 to revolve around the rotation axis 302. The leg part 33 may include, for example, the legs of a tripod, for supporting the rotation part 32 and the supporting part 31. Each of the supporting part 31, the rotation part 32, and the leg part 33 may further include male elements and female elements for linking and securing to one another. Though the rotation is drawn to be clockwise, it can alternatively be counter-clockwise; the present disclosure is not limited thereto. The trajectory of the revolving camera 11 can be a circle, an ellipse, a polygon, or any other shape, but the present disclosure is not limited thereto. The plane on which the trajectory lies is typically parallel to the ground surface, but the present disclosure is not limited thereto. Furthermore, though only one camera (i.e., the camera 11) is drawn in FIG. 3, the number of cameras mounted on the rotational mechanism is not limited to one by the present disclosure. In some alternative implementations, an array of wide-angle cameras can be mounted on the rotational mechanism in order to capture more panoramic images at the same time.

In the example shown in FIG. 3, the camera 11 is a dual-fisheye camera (but the present disclosure is not limited thereto), and the lens 111 of the camera 11 is facing inward (i.e., toward the rotation axis 302) to capture images with the 180-degree field of view (FoV) spreading from the vertical plane of the camera 11 toward the rotation axis 302. It should be appreciated that the other lens (i.e., the lens 112, not shown in FIG. 3) of the camera 11 is facing outward (i.e., away from the rotation axis 302) to capture images with the other 180-degree FoV spreading from the vertical plane of the camera 11 away from the rotation axis 302.

FIG. 4 shows an exemplary top view of the camera 11 mounted on the rotational mechanism 30 of FIG. 3 and revolving around the rotation center 400 (i.e., the top view of the rotation axis 302 in FIG. 3). In FIG. 4, the fisheye icons on the circle around the rotation center 400 denote (not all of) the possible points where the camera 11 captures the panoramic images as samples. The pattern of the rotation can be regular or random; the present disclosure is not limited thereto. For example, the camera 11 can revolve by a fixed angular increment (e.g., 15 degrees, 20 degrees, 30 degrees, etc.) each time. Alternatively, the points on the circle where the camera 11 captures the panoramic images can be randomly selected, or in other words, the camera 11 can revolve by a variable angular increment.
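
By way of a non-limiting illustration only, the following Python sketch shows one possible way to generate the sampling points of FIG. 4 for a circular trajectory, using either a fixed or a random angular increment. The function name, the parameters, and the 24-point default are illustrative assumptions and are not part of the disclosed system.

    import math
    import random

    def sample_points_on_circle(radius_m, increment_deg=None, num_points=24):
        """Return (x, y, angle) tuples for camera positions on a circular trajectory.

        If increment_deg is given, the camera revolves by a fixed angular increment
        (e.g., 15, 20, or 30 degrees); otherwise the angles are drawn at random.
        """
        if increment_deg is not None:
            angles = [math.radians(i * increment_deg)
                      for i in range(int(360 / increment_deg))]
        else:
            angles = sorted(random.uniform(0.0, 2.0 * math.pi) for _ in range(num_points))
        points = []
        for a in angles:
            x = radius_m * math.cos(a)
            y = radius_m * math.sin(a)
            points.append((x, y, a))  # a: angle of the camera around the rotation center
        return points

    # Example: capture positions every 30 degrees on a 0.5-meter trajectory.
    positions = sample_points_on_circle(radius_m=0.5, increment_deg=30)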

It should be appreciated that the implementations provided in FIGS. 3 and 4 are for illustrative purposes only and not meant to be limiting. In other implementations, the motion of the camera 11 does not have to be strictly on a circle. As long as the camera can move around the scene to capture light rays in the various directions needed for constructing the light field, the trajectory of the camera 11 is not limited by the present disclosure. For example, the trajectory of the camera 11 can be a circle, an ellipse, a polygon, or any other shape. In yet another embodiment, the camera 11 can be manipulated and moved by a human in a handheld manner, and the trajectory of the camera 11 can be irregular.

FIG. 5 is a flow diagram illustrating the sub-steps of step 201 in FIG. 2 in greater detail, according to an embodiment of the present disclosure. As shown in FIG. 5, step 201 in FIG. 2 may include steps 501-503.

In step 501, the camera 11 is revolved (e.g., manually controlled or electrically controlled), and a sequence of image sets of the scene from a plurality of points (as shown in FIG. 4) on the trajectory of the camera 11 is captured. Then, the method proceeds to step 502.

In step 502, the images in each image set are stitched (i.e., combined into an image) to obtain a stitched image (e.g., by the image processing unit 113 in FIG. 1A or FIG. 1B). Then, the method proceeds to step 503.

In step 503, the stitched image is transformed into the panoramic image using the equirectangular projection method (e.g., by the image processing unit 113 in FIG. 1A or FIG. 1B). Thus, the panoramic image resulting from the equirectangular projection method is rectangular.
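
By way of a non-limiting illustration only, sub-steps 501-503 could be prototyped along the following lines. The sketch uses OpenCV's generic stitcher merely as a stand-in for the image processing unit 113, and grab_image_set and project_equirectangular are hypothetical placeholders for the capture routine and for the projection of FIG. 6, respectively; none of these choices is mandated by the disclosure.

    import cv2

    def capture_panoramas(camera_positions, grab_image_set, project_equirectangular):
        """Sketch of steps 501-503: capture, stitch, and project each image set.

        grab_image_set(pos) and project_equirectangular(img) are placeholders for
        the camera capture routine and the equirectangular projection, respectively.
        """
        stitcher = cv2.Stitcher_create()      # generic stitcher as a stand-in for unit 113
        panoramas = []
        for pos in camera_positions:
            image_set = grab_image_set(pos)   # step 501: images from one point on the trajectory
            status, stitched = stitcher.stitch(image_set)  # step 502: combine into one image
            if status != cv2.Stitcher_OK:
                continue                      # skip image sets that fail to stitch
            panoramas.append(project_equirectangular(stitched))  # step 503: rectangular panorama
        return panoramas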

In some embodiments, the rotation part 32 of the rotational mechanism 30 in FIG. 3 may be further used to lift or lower the supporting part 31 as well as the camera 11 mounted on the supporting part 31. Thus, the height (e.g., the vertical distance from the ground) of the camera 11 and of the trajectory thereof can be changed. For example, if the height of the first trajectory of the camera 11 is 100 centimeters, the height of the second trajectory of the camera 11 after the camera 11 is lifted or lowered can be 110 or 90 centimeters, respectively. This means that new samples of panoramic images of the scene can be gathered by performing steps 501-503 with the camera 11 at different heights. Furthermore, the calculation of the panoramic depth information may not only be based on the sequence of panoramic images captured at the original height, but may also be based on the other sequence of panoramic images captured at the new height. In an embodiment, the radius of the trajectory of the camera 11 can also be changed as the height is changed. In another embodiment, as the height of the camera 11 is changed, the radius of the trajectory of the camera 11 remains the same.

FIG. 6 shows an exemplary equirectangular projection 600, according to an embodiment of the present disclosure. As shown in FIG. 6, the equirectangular projection 600 maps each point of a spherical panoramic view from the unit sphere 601 to a cylindrical plane 602 circumscribing the unit sphere 601. The north/south pole of the unit sphere 601 becomes the top/bottom edge of the cylindrical plane 602, and the equator of the unit sphere 601 becomes a circle in the middle of the cylindrical plane 602. The corresponding coordinates of a point on the sphere and on the cylindrical plane are denoted by (Φ,θ) and (x,y), respectively. For easier understanding, the concepts of Φ and θ can be regarded as analogous to the longitude and the latitude of the earth, respectively. A point P(Φ,θ) on the unit sphere 601 is mapped to a pixel I(x,y) on the cylindrical plane 602. The mapping can be mathematically described by {x=Φ cos θ, y=θ}.
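
By way of a non-limiting illustration only, the mapping described above can be evaluated numerically as follows. The continuous mapping follows the expression {x=Φ cos θ, y=θ} given in the text, while the conversion to integer pixel indices and the image dimensions are illustrative assumptions.

    import math

    def sphere_to_pixel(phi, theta, width, height):
        """Map a point P(phi, theta) on the unit sphere to a pixel I(x, y).

        phi (longitude) lies in [-pi, pi] and theta (latitude) in [-pi/2, pi/2].
        The continuous mapping follows the text: x = phi * cos(theta), y = theta.
        The conversion to integer pixel indices is an illustrative assumption.
        """
        x = phi * math.cos(theta)
        y = theta
        col = int((x + math.pi) / (2.0 * math.pi) * (width - 1))
        row = int((math.pi / 2.0 - y) / math.pi * (height - 1))
        return col, row

    # Example: a point on the equator at longitude 90 degrees.
    print(sphere_to_pixel(math.pi / 2, 0.0, width=2048, height=1024))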

In some embodiments, the estimation of the panoramic depth information uses a convolutional neural network (CNN)-based model. FIG. 7 is a schematic diagram illustrating the operation of the CNN-based depth estimation model 70, according to an embodiment of the present disclosure. As shown in FIG. 7, the CNN-based depth estimation model 70 includes the feature extraction layers 71 and cost aggregation layers 72.

The feature extraction layers 71 may include a plurality of convolution layers for extracting the feature representation (or feature maps) 702 of each of the input panoramic images 701. The number of convolution layers is not limited by the present disclosure. In an embodiment, each of the panoramic images 701 is divided into two polar regions (i.e., the north pole region and the south pole region) and a central region before being input to the feature extraction layers 71. The distortion effect of the equirectangular projection varies from region to region. Specifically, the distortion level of a polar region is typically higher than the distortion level of the central region. In order to compensate for this distortion effect of the equirectangular projection, each of the convolution layers of the feature extraction layers 71 uses a distance-based kernel, which means the size of the convolution kernel for each of the regions is determined by the distortion level of the region. The size of the kernel used by the feature extraction layers 71 for a polar region is configured to be larger than the size of the kernel used by the feature extraction layers 71 for the central region, so as to enhance the training for the polar regions of the panoramic image.
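
By way of a non-limiting illustration only, the following PyTorch sketch captures the distance-based kernel idea by splitting an equirectangular feature map into two polar bands and a central band and applying a larger kernel to the polar bands. The band boundaries, kernel sizes, and class name are illustrative assumptions, not the disclosed implementation.

    import torch
    import torch.nn as nn

    class DistanceBasedConv(nn.Module):
        """Applies larger kernels near the poles of an equirectangular feature map."""

        def __init__(self, in_ch, out_ch, polar_kernel=7, central_kernel=3):
            super().__init__()
            self.polar_conv = nn.Conv2d(in_ch, out_ch, polar_kernel, padding=polar_kernel // 2)
            self.central_conv = nn.Conv2d(in_ch, out_ch, central_kernel, padding=central_kernel // 2)

        def forward(self, x):
            h = x.shape[2]
            band = h // 4                       # top/bottom quarter treated as polar regions (assumption)
            north = self.polar_conv(x[:, :, :band, :])
            center = self.central_conv(x[:, :, band:h - band, :])
            south = self.polar_conv(x[:, :, h - band:, :])
            return torch.cat([north, center, south], dim=2)  # reassemble along the height

    # Example: a 3-channel equirectangular feature map of size 128x256.
    out = DistanceBasedConv(3, 16)(torch.randn(1, 3, 128, 256))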

The cost aggregation layers 72 use the extracted feature maps 702 to compute the cost volume for calculating the depth information 703. The cost volume denotes the data matching costs for associating a pixel in one panoramic image with its corresponding pixels in another panoramic image. In an embodiment, the cost aggregation layers 72 apply a semi-global aggregation (SGA) layer and a local guided aggregation (LGA) layer to refine the edges of objects and compensate for the accuracy degradation caused by the down-sampling of the cost volume. The depth information 703 is obtained by minimizing the cost volume.
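
By way of a non-limiting illustration only, the following simplified sketch builds a cost volume from two panoramic feature maps and recovers a per-pixel disparity index by minimizing it. The horizontal wrap-around shifts and the plain argmin stand in for the full SGA/LGA aggregation described above and are illustrative assumptions.

    import torch

    def depth_from_cost_volume(feat_ref, feat_src, max_disp=32):
        """feat_ref, feat_src: (C, H, W) feature maps of two panoramic views.

        The cost for disparity d is the L1 feature difference after shifting the
        source horizontally by d pixels (with wrap-around, since the panorama spans
        360 degrees). The depth index is taken as the argmin over the cost volume.
        """
        costs = []
        for d in range(max_disp):
            shifted = torch.roll(feat_src, shifts=d, dims=2)       # horizontal wrap-around shift
            costs.append((feat_ref - shifted).abs().mean(dim=0))   # (H, W) matching cost
        cost_volume = torch.stack(costs, dim=0)                    # (max_disp, H, W)
        return cost_volume.argmin(dim=0)                           # per-pixel disparity index

    # Example with random features.
    disp = depth_from_cost_volume(torch.randn(16, 64, 128), torch.randn(16, 64, 128))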

In some embodiments, the generation of the panoramic light field in step 203 may use a plurality of 3D convolutional layers (e.g., another CNN-based model) for resampling. First, a coarse light field is warped from the panoramic images based on the panoramic depth information (e.g., a panoramic depth map) to generate a warped new light field. Then, the warped new light field is refined by the light field resampling layers to generate the light field output by the light field generation circuit 15A or the light field generation module 15B.
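
By way of a non-limiting illustration only, the following NumPy sketch shows a depth-based forward warp of a single equirectangular view to a translated viewpoint, which is the kind of operation from which a coarse light field could be assembled before refinement. The nearest-neighbor splatting, the handling of holes, and the function interface are illustrative assumptions.

    import numpy as np

    def warp_equirect(image, depth, translation):
        """Forward-warp an equirectangular image (H, W, 3) with per-pixel depth (H, W)
        to a camera displaced by `translation` (3,), using nearest-neighbor splatting."""
        h, w = depth.shape
        v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        lon = (u / w) * 2.0 * np.pi - np.pi                 # longitude in [-pi, pi)
        lat = np.pi / 2.0 - (v / h) * np.pi                 # latitude in [-pi/2, pi/2]
        dirs = np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)             # unit ray directions
        points = dirs * depth[..., None]                    # 3D points in the scene
        rel = points - np.asarray(translation)              # points seen from the new viewpoint
        r = np.linalg.norm(rel, axis=-1)
        new_lon = np.arctan2(rel[..., 1], rel[..., 0])
        new_lat = np.arcsin(np.clip(rel[..., 2] / np.maximum(r, 1e-6), -1.0, 1.0))
        nu = np.clip(((new_lon + np.pi) / (2.0 * np.pi) * w).astype(int), 0, w - 1)
        nv = np.clip(((np.pi / 2.0 - new_lat) / np.pi * h).astype(int), 0, h - 1)
        out = np.zeros_like(image)
        out[nv, nu] = image[v, u]                           # splat; holes are left for refinement
        return out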

In some embodiments, the weights used in the CNN models for depth estimation and light field generation can be trained simultaneously by imposing spatiotemporal consistency, which computes the error between the predicted panoramic images at certain angular coordinates and the ground truth. The loss value of the spatiotemporal consistency can be minimized using a gradient descent method such as the Stochastic Gradient Descent (SGD) method or the adaptive moment estimation (Adam) algorithm, but the present disclosure is not limited thereto.
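
By way of a non-limiting illustration only, the following sketch shows one possible joint training step with an L1 spatiotemporal-consistency loss between predicted panoramic views and ground-truth views at the same angular coordinates. The model interfaces depth_net and lightfield_net, the loss form, and the optimizer settings are illustrative assumptions.

    import torch

    def train_step(depth_net, lightfield_net, optimizer, panoramas, gt_views, view_coords):
        """One joint optimization step for the depth and light field models."""
        depth = depth_net(panoramas)                                # panoramic depth estimate
        predicted = lightfield_net(panoramas, depth, view_coords)   # views at given angular coordinates
        loss = torch.nn.functional.l1_loss(predicted, gt_views)     # spatiotemporal consistency error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                            # e.g., Adam or SGD, as mentioned above
        return loss.item()

    # Example optimizer covering the weights of both models (Adam, as one of the named options):
    # optimizer = torch.optim.Adam(
    #     list(depth_net.parameters()) + list(lightfield_net.parameters()), lr=1e-4)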

FIG. 8A shows the spherical representation of an exemplary light ray 800A, according to an embodiment of the present disclosure. In the example shown in FIG. 8A, the light ray 800A (the same applies to other light rays even though they are not shown) in the panoramic light field is addressed by two sets of spherical coordinates, (α,β) and (Φ,θ), to describe the angular coordinates and the spatial coordinates (representing the location of the light ray 800A) of the light ray 800A, respectively. Specifically, the light ray 800A emitted from the scene intersects the sphere 802 at angular coordinates (α,β) and intersects the sphere 801 at spatial coordinates (Φ,θ).

FIG. 8B shows the two-cylinder representation of an exemplary light ray 800B, according to an embodiment of the present disclosure. In the example shown in FIG. 8B, equirectangular projection with two cylinders 803 and 804 is used. Thereby, the representation of the angular coordinates of the panoramic light field is consistent with that of the spatial coordinates. The light ray 800B in the panoramic light field is addressed by two sets of cylindrical coordinates, (α,β) and (Φ,θ), to describe the angular coordinates (representing which direction the light ray 800B comes from) and the spatial coordinates (representing the location of the light ray 800B) of the light ray 800B, respectively. Specifically, the light ray 800B emitted from the scene intersects the cylinder 804 at angular coordinates (α,β) and intersects the cylinder 803 at spatial coordinates (Φ,θ). In further embodiments, the coordinates of the light ray 800B can be denoted by (Φ, θ, α, β), where −π≤Φ,α≤π and −π/2≤θ,β≤π/2.
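
By way of a non-limiting illustration only, the following sketch addresses a discretized panoramic light field by the four coordinates (Φ, θ, α, β) within the ranges given above. The grid resolution and the nearest-bin lookup are illustrative assumptions.

    import math
    import numpy as np

    def ray_to_index(phi, theta, alpha, beta, n_phi=64, n_theta=32, n_alpha=64, n_beta=32):
        """Map a ray (phi, theta, alpha, beta), with -pi <= phi, alpha <= pi and
        -pi/2 <= theta, beta <= pi/2, to indices into a 4D light field grid."""
        def to_idx(value, low, high, bins):
            t = (value - low) / (high - low)
            return min(int(t * bins), bins - 1)
        return (to_idx(phi,   -math.pi,     math.pi,     n_phi),
                to_idx(theta, -math.pi / 2, math.pi / 2, n_theta),
                to_idx(alpha, -math.pi,     math.pi,     n_alpha),
                to_idx(beta,  -math.pi / 2, math.pi / 2, n_beta))

    # Example: a 4D grid of RGB radiance samples and a lookup of one ray.
    lightfield = np.zeros((64, 32, 64, 32, 3), dtype=np.float32)
    radiance = lightfield[ray_to_index(0.1, 0.0, -0.2, 0.05)]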

It should be appreciated that the implementations provided in FIGS. 8A and 8B are for illustrative purposes only and not meant to be limiting. In other implementations, the coordinate system used for representing the spatial and angular coordinates of a light ray is not limited to the dual cylindrical coordinate system or the dual spherical coordinate system. Any coordinate system that is capable of representing the panoramic light field can be adopted.

The embodiments of the present disclosure provide a panoramic light field for the light field display device to directly project light rays from various directions into the viewer's eyes, which is more in line with the way humans observe the world, so that the effect of VAC can be mitigated.

The above paragraphs describe multiple aspects. Obviously, the teachings of the specification may be performed in multiple ways. Any specific structure or function disclosed in the examples is only representative. Based on the teachings of the specification, it should be noted by those skilled in the art that any aspect disclosed may be performed individually, or that two or more aspects could be combined and performed.

While the invention has been described by way of example and in terms of the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims

1. An electronic system for generating panoramic light fields, comprising:

a camera, configured to capture a panoramic image of a scene;
a depth estimation circuit, configured to estimate panoramic depth information of the scene based on the panoramic image captured by the camera; and
a light field generation circuit, configured to generate a panoramic light field based on the estimated panoramic depth information.

2. The electronic system as claimed in claim 1, wherein the camera is mounted on a rotational mechanism used for revolving the camera;

wherein the camera is further configured to capture a sequence of image sets of the scene from a plurality of points on a first trajectory of the camera;
wherein the camera is further configured to stitch images in each image set to obtain a stitched image;
wherein the camera is further configured to transform the stitched image to the panoramic image using an equirectangular projection method; and
wherein the panoramic image is rectangular.

3. The electronic system as claimed in claim 2, wherein the rotational mechanism is further used for lifting or lowering the camera;

wherein the camera is further configured to capture another sequence of image sets of the scene from a plurality of points on a second trajectory of the camera.

4. The electronic system as claimed in claim 3, wherein the first trajectory and the second trajectory are circles.

5. The electronic system as claimed in claim 4, wherein the first trajectory and the second trajectory have different radiuses.

6. The electronic system as claimed in claim 2, wherein the depth estimation circuit is further configured to estimate the panoramic depth information using a convolutional neural network-based model.

7. The electronic system as claimed in claim 6, wherein the convolutional neural network uses a distance-based kernel.

8. The electronic system as claimed in claim 1, wherein the light field generation circuit is further configured to generate the panoramic light field using a convolutional neural network-based model.

9. The electronic system as claimed in claim 1, wherein each light ray in the panoramic light field is addressed by two sets of cylindrical coordinates.

10. The electronic system as claimed in claim 1, wherein each light ray in the panoramic light field is addressed by two sets of spherical coordinates.

11. An electronic system for generating panoramic light fields, comprising:

a camera, configured to capture a panoramic image of a scene; and
a processor, configured to estimate panoramic depth information of the scene based on the panoramic image captured by the camera, and to generate a panoramic light field based on the estimated panoramic depth information.

12. The electronic system as claimed in claim 11, wherein the camera is mounted on a rotational mechanism used for revolving the camera;

wherein the camera is further configured to capture a sequence of image sets of the scene from a plurality of points on a first trajectory of the camera;
wherein the camera is further configured to stitch images in each image set to obtain a stitched image;
wherein the camera is further configured to transform the stitched image to the panoramic image using an equirectangular projection method; and
wherein the panoramic image is rectangular.

13. A method for generating panoramic light fields, for use in an electronic system, wherein the electronic system comprises a camera, the method comprising:

capturing a panoramic image of a scene by the camera;
estimating panoramic depth information of the scene based on the panoramic image; and
generating a panoramic light field based on the panoramic depth information.

14. The method as claimed in claim 13, wherein the step of capturing the panoramic image of the scene by the camera comprises:

revolving the camera, and capturing a sequence of image sets of the scene from a plurality of points on a first trajectory of the camera;
stitching images in each image set to obtain a stitched image; and
transforming the stitched image in the sequence to the panoramic image using an equirectangular projection method;
wherein the panoramic image is rectangular.

15. The method as claimed in claim 14, further comprising:

lifting or lowering the camera; and
revolving the camera, and capturing another sequence of image sets of the scene from a plurality of points on a second trajectory of the camera.

16. The method as claimed in claim 14, further comprising:

estimating the panoramic depth information using a convolutional neural network-based model.

17. The method as claimed in claim 16, wherein the convolutional neural network uses a distance-based kernel.

18. The method as claimed in claim 13, further comprising:

generating the panoramic light field using a convolutional neural network-based model.

19. The method as claimed in claim 13, wherein each light ray in the panoramic light field is addressed by two sets of cylindrical coordinates.

20. The method as claimed in claim 13, wherein each light ray in the panoramic light field is addressed by two sets of spherical coordinates.

Patent History
Publication number: 20230177710
Type: Application
Filed: Nov 9, 2022
Publication Date: Jun 8, 2023
Inventors: I-Chan LO (New Taipei City), Homer CHEN (New Taipei City)
Application Number: 18/053,988
Classifications
International Classification: G06T 7/557 (20060101);