SYSTEMS FOR PROVIDING IMAGE OR VIDEO TO BE DISPLAYED BY PROJECTIVE DISPLAY SYSTEM AND FOR DISPLAYING OR PROJECTING IMAGE OR VIDEO BY A PROJECTIVE DISPLAY SYSTEM

A system for providing image or video to be displayed by a projective display system includes an encoding subsystem and a packing subsystem. The encoding subsystem is configured to encode at least one image or video of a subject to generate encoded image data. The packing subsystem is coupled to the encoding subsystem, and configured to pack the encoded image data with projection configuration information regarding the projective display system to generate packed image data. The projective display system comprises a projection source device and a projection surface, and the projection source device projects the image or video onto the projection surface according to the packed image data.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/003,260, filed on May 27, 2014, and U.S. Provisional Application No. 62/034,952, filed on Aug. 8, 2014. The entire contents of the related applications are incorporated herein by reference.

TECHNICAL FIELD

The present invention relates generally to projective display, and more particularly, to a transmission system including a capture and transmitting system and a display and receiving system that is capable of generating images of a subject and having the images displayed by a projective display system.

BACKGROUND

A stereo display is a display device capable of conveying depth perception to the viewer and reproducing real-world viewing experiences. The stereo display can be implemented with different technologies; however, each of today's technologies has disadvantages. Stereoscopic display technology requires the viewer to be positioned in a well-defined spot to experience the 3D visual effect, and it reduces both the effective horizontal pixel count viewable by each eye and the luminance for each eye by one half. In addition, glasses-free stereoscopic display is desirable, but it currently leads to a poor user experience. Holographic display technology offers a great viewing experience, but its cost and size are too high for mobile devices.

SUMMARY

According to a first aspect of the present invention, a system for providing image or video to be displayed by a projective display system is provided. The system comprises an encoding subsystem and a packing subsystem. The encoding subsystem is configured to encode at least one image or video of a subject to generate encoded image data. The packing subsystem is coupled to the encoding subsystem, and configured to pack the encoded image data with projection configuration information regarding the projective display system to generate packed image data. The projective display system comprises a projection source device and a projection surface, and the projection source device projects the image or video onto the projection surface according to the packed image data.

According to a second aspect of the present invention, a system for displaying or projecting image or video by a projective display system comprises: a de-packing subsystem, a decoding subsystem and a display subsystem. The de-packing subsystem is configured to derive packed image data and de-pack the derived packed image data to obtain encoded image data and projection configuration information. The decoding subsystem is coupled to the de-packing subsystem, and configured to decode the encoded image data to generate at least one image or video. The projection display subsystem is coupled to the decoding subsystem, and configured to project or display the at least one image or video by a projection source component of the projective display system according to the projection configuration information.

These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 illustrates a projective display system in a single-view configuration according to one embodiment of the present invention.

FIG. 2A illustrates a projective display system in a multi-view configuration according to one embodiment of the present invention.

FIG. 2B illustrates source areas for displaying source images on a display panel of a projective display system according to one embodiment of the present invention.

FIG. 3A illustrates a capture and transmitting system according to one embodiment of the present invention.

FIG. 3B illustrates a display and receiving system according to one embodiment of the present invention.

FIG. 4 illustrates an implementation of a capture subsystem according to one embodiment of the present invention.

FIG. 5 illustrates combination patterns for captured images/videos.

FIG. 6 illustrates a flowchart regarding how the capture subsystem captures images or videos according to one embodiment of the present invention.

FIG. 7 illustrates a flowchart regarding how the captured images/videos or render images/videos are encoded according to one embodiment of the present invention.

FIG. 8 illustrates projection configuration information to be packed with encoded image data according to one embodiment of the present invention.

DETAILED DESCRIPTION

Certain terms are used throughout the following descriptions and claims to refer to particular system components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not differ in function. In the following discussion and in the claims, the terms “include”, “including”, “comprise”, and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” The terms “couple” and “coupled” are intended to mean either an indirect or a direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.

Projective Display System Single-View Configuration

FIG. 1 illustrates a projective display system in a single-view configuration according to one embodiment of the present invention. As shown in the figure, a projective display system 100 comprises at least one of a base 110, a projection source device 120, a projection surface 130 and an optional optical adjustment element 140. In some embodiments, the projection source device 120 may be rotatably mounted, for example, on the base 110. In some other embodiments, the projection source device 120 may be placed in a carrier, which may be rotatably mounted, for example, on the base 110. The projection source device 120 may be a portable device, such as a smartphone, a tablet, a touch-controlled device, or any other electronic device with a display panel or a projection source component. The projection source device 120 comprises a projection source component 122 and a projection processor 124.

The projection source component 122 is configured to project at least one source image. In some embodiments, the projection source component 122 may be an organic light-emitting diode display panel, a liquid crystal display panel, or any other passive or active display panel. In some other embodiments, the projection source component 122 may be a solid-state (laser or LED) light or any other type of light source. The projection processor 124 is configured to adaptively control an image adjustment on an input image to generate the source image. One purpose of the projection processor 124 is to maintain a projection quality of the projective display system 100. The projection processor 124 could be implemented with a general-purpose processor or dedicated hardware. Please note that the position of the projection processor 124 in FIG. 1 is for the purpose of illustration rather than limitation. The projection surface 130 could be made of transflective material or non-opaque material (e.g., transparent/semitransparent material). The projection surface 130 may be rotatably attached to the base 110. Also, the projection surface 130 could be flat or curved. The projection surface 130 is configured to mirror or partially reflect the source image that is projected from a first side of the projection surface 130 to form a virtual image on a second side that is opposite to the first side, thereby forming a stereo viewing effect. In detail, a portion of the light of the source image may pass through the projection surface 130, and another portion may be reflected by the projection surface 130, such that the projection surface 130 partially reflects the source image. As a result, a user may see the virtual image displayed on the projection surface 130 or floating behind the projection surface 130, thereby forming a stereo viewing effect, especially for a source image with a 3D effect.

The optical adjustment element 140 may be rotatably attached to and detachable from the base 110. The optical adjustment element 140 may optically adjust the forming of the virtual image on the projection surface 130. In various embodiments of the present invention, the optical adjustment element 140 could be a single lens or a compound lens.

Multi-View Configuration

FIG. 2A illustrates the projective display system 100 in a multi-view configuration according to another embodiment of the present invention. In this embodiment, the base 110 of the projective display system 100 may be placed at the center of the projection source component 122 of the projection source device 120. A polyhedron may be formed by the projection surface 130 and may be put on the top of the base 110. In addition, multiple optical adjustment elements 140 may optionally be attached to different sides of the surface of the base 110.

The projection surface 130 has four viewable sides P1-P4. The polyhedral projection surface 130 may be formed by folding the projection surface 130 of FIG. 1 or by combining parts of the projection surface 130 of FIG. 1 together. Please note that the shape and the number of viewable sides of the projection surface 130 illustrated by FIG. 2A are not limitations of the present invention. Multiple viewable sides of the projection surface 130 allow the virtual images to be viewed from different sides. The viewable sides P1-P4 respectively correspond to four source areas SA1-SA4 on the projection source component 122 of the projection source device 120 (the source area SA4 is not shown). Source images shown on the source areas SA1-SA4 are mirrored or partially reflected by the four viewable sides P1-P4, respectively, thereby forming virtual images to users' eyes.

Each of the optical adjustment elements 140 is detachable from the base 110. Hence, in various embodiments of the present invention, not every optical adjustment element 140 shown in FIG. 2A is attached to the base 110. When they are attached to the base 110, the optical adjustment elements 140 can optically adjust the forming of the virtual image on the projection surface 130. In various embodiments of the present invention, each optical adjustment element 140 could be a single lens or a compound lens.

FIG. 2B illustrates a layout of source areas for displaying the source images on the projection source component 122 when the projective display system 100 is in a multi-view configuration. The arrangement of the source areas may be associated with the shape of the projection surface 130. The illustrated arrangement in FIG. 2B is not a limitation of the present invention.

Transmission System for Projective Display System

According to one embodiment of the present invention, a processing system for the projective display system 100 is provided. The processing system of the present invention may include a capture and transmitting system 300 as illustrated by FIG. 3A and a display and receiving system 400 as illustrated by FIG. 3B. The capture and transmitting system 300 and the display and receiving system 400 could be implemented in a single device or in different devices.

The capture and transmitting system 300 may include a capture subsystem 310, an encoding subsystem 320, and a packing subsystem 330. Packed image data generated by the capture and transmitting system 300 may be stored in a storage device 340 or be sent to a channel coding and modulation device 350 or any other device. Afterwards, the coded and modulated data stream may be sent to the display and receiving system 400 with wired or wireless transmission.

The display and receiving system 400 includes a display subsystem 410, a decoding subsystem 420, and a de-packing subsystem 430. The display and receiving system 400 receives the packed data generated by the capture and transmitting system 300 through the storage device 340 or wired/wireless transmission. A channel decoding and demodulation device 600 channel-decodes and demodulates the packed image data that is processed by the channel coding and modulation device 350.

Operations of the capture and transmitting system 300 and the display and receiving system 400 will be illustrated later in further details.

Capture Subsystem

The capture subsystem 310 of the capture and transmitting system 300 may include one or multiple camera devices. Please refer to FIG. 4. In this embodiment, the capture subsystem 310 may comprise camera devices of different electronic devices 310_1-310_4. The camera devices of the electronic devices 310_1-310_4 capture images or videos of the subject from different capturing views CV1-CV4. In some embodiments, images or videos IMG1-IMG4 corresponding to the different views CV1-CV4 may be combined by the capture subsystem 310 to generate one single image or video in patterns (a)-(c) as shown in FIG. 5. In pattern (a), the images or videos IMG1-IMG4 corresponding to the views CV1-CV4 are horizontally arranged in one image or video. In pattern (b), the images or videos IMG1-IMG4 corresponding to the views CV1-CV4 are arranged as a 2×2 square in one image or video. In pattern (c), the images or videos IMG1-IMG4 corresponding to the views CV1-CV4 are vertically arranged in one image or video. Alternatively, in some other embodiments, the images or videos IMG1-IMG4 may be stored or sent separately as shown in FIG. 5 (d), not combined into a single image or video.
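The three combination patterns above can be sketched as follows. This is a minimal illustration only, assuming each captured image is a two-dimensional list of pixel values of equal dimensions; the function names are illustrative and not taken from the disclosure.

```python
# Sketch of combination patterns (a)-(c) of FIG. 5 for four equally
# sized images, each represented as a 2-D list of pixel values.

def combine_horizontal(images):
    """Pattern (a): place the images side by side in one image."""
    height = len(images[0])
    return [sum((img[r] for img in images), []) for r in range(height)]

def combine_vertical(images):
    """Pattern (c): stack the images top to bottom in one image."""
    return [row for img in images for row in img]

def combine_square(images):
    """Pattern (b): arrange four images as a 2x2 square."""
    top = combine_horizontal(images[:2])       # IMG1 | IMG2
    bottom = combine_horizontal(images[2:])    # IMG3 | IMG4
    return top + bottom

# Example: four 2x2 single-channel "images" standing in for IMG1-IMG4.
imgs = [[[v, v], [v, v]] for v in (1, 2, 3, 4)]
wide = combine_horizontal(imgs)   # 2 rows x 8 columns
tall = combine_vertical(imgs)     # 8 rows x 2 columns
grid = combine_square(imgs)       # 4 rows x 4 columns
```

Pattern (d), in contrast, needs no combination step: each image or video is stored or transmitted as its own stream.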

In one embodiment, the camera devices of the electronic devices 310_1-310_4 may utilize wide-angle lenses or fish-eye lenses to cover a wide scene.

The number of the views for capturing the subject can be determined according to the number of the viewable sides of the projection surface 130. For example, in the embodiment of FIG. 2, the number of the viewable sides is four. Thus, the capture subsystem 310 could capture the images or videos of the subject in four different views.

The capture subsystem 310 may include a single camera device. In such a case, the single camera device of the capture subsystem 310 needs to shoot the subject several times, once from each view, to satisfy the projective display system 100 in the multi-view configuration.

In one embodiment, the capture subsystem 310 could generate images or videos by utilizing a graphic engine to render 3D objects in single view or multiple views.

FIG. 6 illustrates a flowchart regarding image/video capture of the present invention. At step 610, the flow starts. At step 620, the user may select to enable the projection function of the projective display system 100. In step 630, it is determined whether the projection surface 130 is in a single-view configuration or in a multi-view configuration. For example, the projection surface 130 in the embodiment of FIG. 1 is in the single-view configuration, while the projection surface 130 in the embodiment of FIG. 2A is in the multi-view configuration. In some embodiments, the determination of step 630 may be based on information inputted by the user. In some other embodiments, the determination of step 630 may be performed automatically based on information related to the projection surface 130. For example, in some embodiments of the present invention, magnetic hinges may be used to connect different sides of a collapsible projection surface 130. By detecting the magnetic hinges, information about the number of viewable sides of the projection surface 130 in the multi-view configuration can be obtained.

If the projection surface 130 is in the single-view configuration, the flow goes to step 640, where the camera device(s) of the capture subsystem 310 is instructed to shoot once in front of the subject. If the projection surface 130 is in the multi-view configuration, the flow may go to step 650, where it is determined whether the subject is still. If yes, the flow may go to step 660; otherwise, the flow may go to step 674, where a notice message may be shown to remind the user that the capturing has failed, because when the subject is moving, it is not easy to derive images of the subject in multiple views.

At step 660, it is determined whether the depth information of the subject is obtained. If yes, the flow may go to step 672, where depth synthesis is performed. Through depth synthesis, the images of the subject in other views can be generated according to the depth information. If not, the flow may go to step 670, where multiple shots are conducted by the camera device(s) to capture the images or videos of the subject in multiple views. Afterwards, the flow may go to step 680, where multiple-view synthesis is performed. The multiple-view synthesis may combine the images or videos captured from different views into one, or adjust the relative positions of the images or videos captured from different views on the projection source component 122. For example, images or videos captured from different views could be combined into a single image or video like the patterns shown by FIG. 5 or FIG. 2A.
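The decision logic of the FIG. 6 flow can be summarized in a short sketch. The configuration detection, stillness check, and depth-availability check are assumed to be supplied by the platform and are modeled here as plain boolean inputs; the function and return-value names are illustrative.

```python
# Compact sketch of the capture decisions of FIG. 6.

def capture_flow(multi_view, subject_still=True, has_depth=False):
    """Return the capture action chosen by the FIG. 6 flow."""
    if not multi_view:
        return "single_shot"                 # step 640: shoot once
    if not subject_still:
        return "notice_capture_failed"       # step 674: moving subject
    if has_depth:
        return "depth_synthesis"             # step 672: derive other views
    return "multi_shot_then_view_synthesis"  # steps 670 and 680

# Examples of the four outcomes:
capture_flow(multi_view=False)
capture_flow(multi_view=True, subject_still=False)
capture_flow(multi_view=True, has_depth=True)
capture_flow(multi_view=True)
```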

Encoding/Decoding Subsystem

The encoding subsystem 320 of the capture and transmitting system 300 encodes the captured images generated by the capture subsystem 310 with respect to a single view or multiple views. FIG. 7 illustrates a flowchart regarding image/video encoding performed by the encoding subsystem 320. At step 710, the flow starts. At step 720, the user may select to enable the projection function of the projective display system 100. Then, in step 730, it may be determined whether the projection surface 130 is in the single-view configuration or in the multi-view configuration. The determination of step 730 can be based on information inputted by the user. In some other embodiments, the determination of step 730 may be performed automatically based on information related to the projection surface 130. For example, in some embodiments of the present invention, magnetic hinges may be used to connect different sides of a collapsible projection surface 130. By detecting the magnetic hinges, information about the number of viewable sides of the projection surface 130 in the multi-view configuration can be obtained.

If the projection surface 130 is in the single-view configuration, the flow may go to step 750; if the projection surface 130 is in the multi-view configuration, the flow may go to step 740, where multi-view coding may be conducted to remove data redundancy between captured images/videos (by the camera device) or rendered images/videos (by the graphic engine) with respect to multiple views. In step 750, it is determined whether the projection processor 124 of the projection source device 120 has been applied before. If the projection processor 124 of the projection source device 120 has been applied, this means the projection processor 124 has obtained the depth information of the subject. Therefore, if the result of step 750 is yes, the flow may go to step 760, in which shape/depth coding is performed, where the projection processor 124 provides the shape/depth mapping information to the encoding subsystem 320 for performing the shape/depth encoding; otherwise, the flow may go to step 770, in which texture coding is performed.
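The coding decisions of FIG. 7 can likewise be sketched as a small selection function. The two inputs are assumed to be known before encoding starts; the step names in the comments map back to the flowchart, while the function name and returned labels are illustrative.

```python
# Sketch of the coding-path selection of FIG. 7.

def select_coding_steps(multi_view, projection_processor_applied):
    """Return the ordered coding steps the FIG. 7 flow would perform."""
    steps = []
    if multi_view:
        # Step 740: remove inter-view redundancy before further coding.
        steps.append("multi_view_coding")
    if projection_processor_applied:
        # Step 760: depth information is available, so shape/depth coding.
        steps.append("shape_depth_coding")
    else:
        # Step 770: texture coding (e.g., JPEG for images, H.264 for video).
        steps.append("texture_coding")
    return steps
```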

In one embodiment, the shape/depth coding performed in step 760 may include MPEG-4 shape coding. In the case of image encoding, the texture coding may comprise JPEG, GIF, PNG, the still profile of MPEG-1, MPEG-2, MPEG-4, WMV, AVS, H.261, H.263, H.264, H.265, VP6, VP8, VP9, any other texture coding, or a combination thereof. In the case of video encoding, the texture coding may comprise motion JPEG, MPEG-1, MPEG-2, MPEG-4, WMV, AVS, H.261, H.263, H.264, H.265, VP6, VP8, VP9, any other texture coding, or a combination thereof.

The decoding subsystem 420 in the display and receiving system 400 is used to decode the encoded data generated by the encoding subsystem 320 based on how the data is encoded as described above.

Packing/De-Packing Subsystem

After the captured or rendered images are encoded by the encoding subsystem 320 to generate the encoded image data, the packing subsystem 330 of the capture and transmitting system 300 packs the encoded image data with projection configuration information. The projection configuration information may be in the form of H.264 SEI (Supplemental Enhancement Information) or any other configuration format. In one embodiment, the projection configuration information includes at least one of the number of the viewable sides of the projection surface 130 and the enablement of the projection function of the projective display system 100. An example of the information to be packed with the encoded image data is illustrated in FIG. 8.

The universally unique identifier (UUID) identifies the following fields, which carry information of the number of the viewable sides of the projection surface 130 and information of the enablement of the projection function of the projective display system 100. In this embodiment, the UUID is 128 bits long, the field corresponding to the number of the viewable sides of the projection surface 130 is one byte long, and the field corresponding to the enablement of the projection function of the projective display system 100 is also one byte long. However, this is just one possible way to pack the projection configuration information with the encoded image data, rather than a limitation of the present invention.
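The layout just described (a 128-bit UUID followed by two one-byte fields) can be sketched as a pack/de-pack round trip. Prepending the header directly to the encoded payload, the field order, and the function names are all illustrative assumptions; an actual implementation would carry these fields inside an SEI message or another container.

```python
# Sketch of the FIG. 8 packing layout: 16-byte UUID, one-byte viewable-
# side count, one-byte projection-enable flag, then the encoded data.
import struct
import uuid

def pack_image_data(encoded, num_sides, projection_enabled, payload_uuid):
    """Pack encoded image data with the projection configuration fields."""
    header = payload_uuid.bytes + struct.pack(
        "BB", num_sides, int(projection_enabled))
    return header + encoded

def depack_image_data(packed):
    """Recover (uuid, num_sides, enabled, encoded) from packed data."""
    payload_uuid = uuid.UUID(bytes=packed[:16])
    num_sides, enabled = struct.unpack("BB", packed[16:18])
    return payload_uuid, num_sides, bool(enabled), packed[18:]

# Round trip: four viewable sides, projection enabled.
uid = uuid.uuid4()
packed = pack_image_data(b"\x01\x02\x03", 4, True, uid)
assert depack_image_data(packed) == (uid, 4, True, b"\x01\x02\x03")
```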

The de-packing subsystem 430 of the display and receiving system 400 derives the packed image data from the storage device 340 or from the wired/wireless transmission between the capture and transmitting system 300 and the display and receiving system 400. Accordingly, the de-packing subsystem 430 de-packs the received packed image data to derive the encoded image data (which will be decoded by the decoding subsystem 420) and the projection configuration information.

Display Subsystem

The display subsystem 410 of the display and receiving system 400 is configured to display captured or rendered images/videos provided by the capture subsystem 310 on one or more source areas, such as the source areas SA1-SA4 on the projection source component 122. The number of the source areas on which the display subsystem 410 displays the images is determined according to the number of the viewable sides of the projection surface 130 (which could be provided by the de-packing subsystem 430). For example, in the embodiment of FIG. 2A, the display subsystem 410 has the images/videos shown on the four source areas SA1-SA4 corresponding to the four viewable sides P1-P4 of the projection surface 130. However, if the projection surface 130 has fewer viewable sides in another embodiment, the display subsystem 410 could have the images/videos shown on fewer source areas of the projection source component 122, such as two or three.

For different source areas of the projection source component 122, the display subsystem 410 could project or display identical or different images thereon. This depends on the number of views for capturing the subject and the number of the viewable sides of the projection surface.

When the number of views for capturing the subject is smaller than the number of the viewable sides of the projection surface, the display subsystem 410 projects or displays duplicated captured images/videos on the different source areas of the projection source component 122 of the projection source device 120. For example, if the image is captured in a single view, the number of views for capturing the subject is 1. Hence, if the captured image is projected to the projection surface 130 in FIG. 2A, which has four viewable sides, the display subsystem 410 may have identical images/videos shown on all of the source areas SA1-SA4. In addition, when the number of views for capturing the subject is identical to the number of the viewable sides of the projection surface, the display subsystem 410 displays each of the images/videos on one of the source areas of the projection source component 122.
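The source-area assignment described above can be sketched as follows. When there are fewer capturing views than viewable sides, images are duplicated across source areas; when the counts match, each view gets its own area. The round-robin duplication rule and the function name are assumptions for illustration only; the disclosure specifies only the two boundary cases.

```python
# Sketch of assigning view images to source areas SA1..SAn.

def assign_source_areas(view_images, num_viewable_sides):
    """Map captured/rendered view images onto the source areas.

    Duplicates views round-robin when there are fewer views than
    viewable sides; maps one view per area when the counts match.
    """
    if not view_images:
        raise ValueError("at least one view image is required")
    return [view_images[i % len(view_images)]
            for i in range(num_viewable_sides)]

# Single-view capture shown on a four-sided projection surface:
assert assign_source_areas(["IMG1"], 4) == ["IMG1"] * 4
# Four views on four sides: one image per side.
assert assign_source_areas(["IMG1", "IMG2", "IMG3", "IMG4"], 4) == [
    "IMG1", "IMG2", "IMG3", "IMG4"]
```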

Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least an implementation. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.

Circuits in the embodiments of the invention may include functions that may be implemented as software executed by a processor, hardware circuits or structures, or a combination of both. The processor may be a general-purpose or dedicated processor. The software may comprise programming logic, instructions or data to implement certain functions for an embodiment of the invention. The software may be stored in a medium accessible by a machine or computer-readable medium, such as read-only memory (ROM), random-access memory (RAM), magnetic disk (e.g., floppy disk and hard drive), optical disk (e.g., CD-ROM) or any other data storage medium. In one embodiment of the invention, the media may store programming instructions in a compressed and/or encrypted format, as well as instructions that may have to be compiled or installed by an installer before being executed by the processor. Alternatively, an embodiment of the invention may be implemented as specific hardware components that contain hard-wired logic, a field programmable gate array, a complex programmable logic device, or an application-specific integrated circuit, for performing the recited functions, or by any combination of programmed general-purpose computer components and custom hardware components.

Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims

1. A system for providing image or video to be displayed by a projective display system, comprising:

an encoding subsystem, configured to encode at least one image or video of a subject to generate encoded image data; and
a packing subsystem, coupled to the encoding subsystem, and configured to pack the encoded image data with projection configuration information regarding the projective display system to generate packed image data,
wherein the projective display system comprises a projection source device and a projection surface, the projection source device projects the image or video to the projection surface according to the packed image data.

2. The system of claim 1, further comprising a capture subsystem, configured to generate the at least one image or video of a subject in a single view or multiple views.

3. The system of claim 2, wherein the capture subsystem comprises a plurality of camera devices.

4. The system of claim 2, wherein the camera devices are disposed on different electronic devices.

5. The system of claim 2, wherein the capture subsystem captures a plurality of the images or videos of the subject and combines images or videos in a specific arrangement pattern.

6. The system of claim 2, wherein the capture subsystem captures a plurality of images or videos of the subject and the encoding subsystem encodes the images or the videos to reduce data redundancy between the images or videos.

7. The system of claim 1, further comprising a graphic engine configured to render at least one object in single view or multiple views to generate the at least one image or video.

8. The system of claim 1, wherein the encoding subsystem performs shape/depth encoding to encode the at least one image or video.

9. The system of claim 1, wherein the projection configuration information indicates the number of viewable sides of a projection surface of the projective display system or enablement of projection function of the projective display system.

10. A system for displaying or projecting image or video by a projective display system, comprising:

a de-packing subsystem, configured to derive packed image data and de-pack the derived packed image data to obtain encoded image data and projection configuration information;
a decoding subsystem, coupled to the de-packing subsystem, and configured to decode the encoded image data to generate at least one image or video; and
a projection display subsystem, coupled to the decoding subsystem, and configured to project or display the at least one image or video by a projection source component of the projective display system according to the projection configuration information, and a projection surface of the projective display system mirrors or partially reflects the at least one projected or displayed image.

11. The system of claim 10, wherein the projection configuration information indicates the number of viewable sides of a projection surface of the projective display system or enablement of projection function of the projective display system.

12. The system of claim 10, wherein the decoding subsystem decodes the encoded image data to generate a plurality of images or videos of a subject with respect to different views, and the display subsystem simultaneously displays the images or videos on a plurality of different areas of the projection source component.

13. The system of claim 10, wherein when a number of views of images of the subject is smaller than the number of viewable sides of a projection surface of the projective display system, the projection display subsystem projects or displays an identical image on several areas of the display panel.

14. The system of claim 10, wherein the de-packing subsystem derives the packed image data from a storage device.

15. The system of claim 10, wherein the de-packing subsystem derives the packed image data from a wired/wireless transmission between the capture and transmitting system and the display and receiving system.

Patent History
Publication number: 20160165208
Type: Application
Filed: May 27, 2015
Publication Date: Jun 9, 2016
Inventors: Tsu-Ming Liu (Hsinchu City), Chih-Kai Chang (Taichung City), Chi-Cheng Ju (Hsinchu City), Chih-Ming Wang (Hsinchu County)
Application Number: 14/904,700
Classifications
International Classification: H04N 13/00 (20060101); H04N 13/02 (20060101); H04N 13/04 (20060101);