System and Method of Video Wall Setup and Adjustment Using Automated Image Analysis
A system is disclosed for identifying, placing and configuring a physical arrangement of a plurality of displays via image analysis of captured digital camera images depicting unique configuration images output to said displays, to facilitate uniform operation of said plurality of displays as a single display area, for example as a video wall. The system pairs and configures displays depicted in the captured images to individual displays within the physical arrangement by controlling and analyzing the output of said displays as captured in said images. A method and computer readable medium are also disclosed that operate in accordance with the system.
This application claims priority from U.S. Provisional Patent Application No. 61/926,295 filed on Jan. 12, 2014, which is hereby incorporated by reference.
FIELD OF INVENTION
Large electronic displays may be formed from an array of monitors referred to as a "video-wall". For example, a video-wall might be comprised of a 3 by 3 array of nine monitors, each monitor simultaneously displaying a segment of a single image, thereby creating the appearance of a single large display comprised of rectangular portions.
The present invention relates generally to improving the setup and operation of large displays and particularly to network addressable video-wall displays.
BACKGROUND OF THE INVENTION
The present invention relates generally to improving the setup and operation of video-wall displays and particularly to network addressable displays.
A video-wall display system is a way to overcome the cost of manufacturing and installing very large displays by assembling a large display from multiple smaller displays arranged and working together. By dividing a single image into several sub-images and displaying the sub-images on an appropriately arranged array of display devices, a larger display with higher resolution can be created.
Because the plurality of display devices needs to be operated together to display a single image or canvas across a video-wall (rather than a separate independent image for each display), the set-up of the output displays is critical and their fine tuning can be demanding. Informing the server of the initial positioning of each display (so that the image segments are sent to the appropriate displays); the precise cropping of each of the sub-images (to allow the eye to interpret continuity of the total image across the bezels of the displays, where no image can appear); and the adjustment of the color of the sub-segments of the image to provide equal luminosity, color and intensity/brightness ranges across the whole array of displays within the video-wall are all essential to providing the optimal viewing experience. With conventional approaches to video-wall setup these tasks can be laborious. This invention offers methods of automating the setup process to improve the ease and speed of video-wall setup.
DESCRIPTION OF THE INVENTION
A video wall server splits source video into sub-images and distributes these sub-images to multiple listening display devices. Built-in algorithms optimize, parse and scale the individual video-wall segments. To accomplish this splitting efficiently it is beneficial to create a configuration file, stored in a computer readable medium, containing information on the position, configuration and settings of each individual physical display and how they relate to the video-wall canvas. Using such a configuration file allows the video wall server to efficiently create a seamless canvas across the display units. This invention deals with methods of supplying the information for the creation of such files by means of feedback based on test canvasses, and with sequentially changing the configuration file before redeploying a test canvas to further improve the overall viewer image.
Configuration of Displays: This invention provides methods of equipping the server with a configuration file containing:
- the overall shape of the video wall;
- the ordering of the sub-images within the video wall;
- any further rotation or displacement of displays required to form the appropriate canvas on the video wall;
- the fine-tuned positioning and bezel width of the displays, established interactively, to achieve precise alignment across display monitor bezels;
- the color intensity adjustments of displays needed to achieve a uniform color across the video-wall.
Once this information is established it is stored in the server's configuration files, as sketched below.
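As a concrete illustration only (and not a required format), such a configuration file might be written as JSON from Python roughly as follows; every field name here is an assumption introduced for the sketch.

```python
# Illustrative sketch of a video-wall configuration file; field names and
# values are assumptions for this example, not a format required by the system.
import json

video_wall_config = {
    "wall": {"rows": 3, "columns": 3, "canvas_width": 5760, "canvas_height": 3240},
    "displays": [
        {
            "id": "display-1",            # which connection/output feeds this position
            "grid_position": [0, 0],      # [row, column] ordering within the wall
            "rotation_degrees": 0,        # rotation needed to form the canvas
            "offset_px": [0, 0],          # fine-tuning shift to compensate for bezels
            "bezel_px": {"left": 12, "right": 12, "top": 12, "bottom": 12},
            "color": {"gain_rgb": [1.0, 0.97, 1.02], "brightness": 0.0},
        },
        # ... one entry per display in the wall
    ],
}

with open("videowall.json", "w") as handle:
    json.dump(video_wall_config, handle, indent=2)
```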
The automated methods presented to achieve the five types of adjustments outlined above typically involve a user interacting with the server via a GUI containing instructions, and a camera in communication with the server. In a typical usage the user would have a smart-phone, tablet, laptop or similar device interacting with the server via the web. The user gives permission to the server to use that camera to obtain digital images of the canvas as displayed across the video wall, and the server gives instructions to the user about positioning the camera and, where required, asks the user to supply eye-based evaluation of the correctness of any changes made to the displays.
The server knows (via DPMS and EDID) certain details about each display (aspect ratio, number of pixels, etc.). Using these in conjunction with the image captured from the camera gives a unique ability to identify the exact positioning of the display.
The ordering and overall shape. Once the display units have been mounted to form the wall and connected to the server, the server will know the number of display units involved and will analyze for shape. This can be accomplished by sending each display a unique computer-recognizable image. This could, for example, be a specialized "bar code" designed for image recognition software (similar to a QR code). The image should include special symbols used to identify the exact spatial location of the corner pixels of each display. Next a message would be sent requesting the user to point the camera at the displays in the wall. Digital analysis of the image, in comparison to the information as displayed, allows the server to determine which displays are in the wall (some may be displaying in a different room), to identify the geometric placement of the displays (rectangular, linear or "artistic", meaning an informal non-geometric setup), and to determine the position in which each signal sent appears in the wall (which Ethernet or other connection leads to each display position). In addition it determines the rotation required (whether the images need to be rotated through 90 or 180 degrees, and what rotations are needed for non-standard setups).
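As a non-authoritative sketch of how such an identification step might be implemented, the following Python uses OpenCV's multi-QR detector to pair decoded display identifiers with their positions in the photograph; the payload strings, the helper name locate_displays, and the assumption that each configuration image carries its display's identifier as a QR code are all illustrative.

```python
# Sketch: pair displays depicted in a captured photo with physical outputs by
# decoding the unique QR-style identifiers shown on each screen.
# Assumes OpenCV >= 4.3 for detectAndDecodeMulti; payloads are illustrative.
import cv2

def locate_displays(captured_image_path):
    image = cv2.imread(captured_image_path)
    detector = cv2.QRCodeDetector()
    found, payloads, corner_points, _ = detector.detectAndDecodeMulti(image)
    if not found:
        return {}

    placements = {}
    for payload, corners in zip(payloads, corner_points):
        if not payload:
            continue  # unreadable code; the user may be asked to re-shoot
        xs, ys = corners[:, 0], corners[:, 1]
        # Record where this display's code appears in the photograph.
        placements[payload] = {
            "center": (float(xs.mean()), float(ys.mean())),
            "bounds": (float(xs.min()), float(ys.min()),
                       float(xs.max()), float(ys.max())),
        }
    return placements

# Sorting placements by their centers yields the row/column ordering of the
# wall, and identifiers that never appear reveal displays outside the wall.
```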
Once the digital image analysis has been completed the server would re-adjust the canvas presented across the screens and instruct the user to ready the camera for another image. This correction process would continue until the server's digital analysis is satisfied with the overall alignment; in addition it might ask for a by-eye evaluation to confirm the result.
Interactive fine tuning of placement and rotation. Generally the canvas on the video wall will appear to be interrupted by the bezels making up the edges of each display monitor. Fine tuning is used to minimize this bezel effect by moving each display's sub-image a few pixel widths horizontally or vertically. For example, this could be achieved by displaying a test canvas of diagonal lines on the video wall. The digital analysis, being aware of the exact location of these lines in the canvas sent to the displays, can examine the lines in the captured digital image very precisely for alignment and calculate the number of pixels each display must be moved vertically or horizontally to achieve alignment, as sketched below. Once these corrections have been made and a new canvas displayed, the result can be checked digitally and by eye.
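A minimal sketch of that pixel-offset calculation, assuming the expected and observed line positions (in camera pixels) and a camera-to-display pixel scale have already been extracted by the image analysis step; the function name and example numbers are illustrative.

```python
# Sketch: estimate the per-display shift needed to bring a test line back into
# alignment across a bezel. Inputs are in camera pixels; the scale converts
# camera pixels to display pixels. All values are illustrative.

def alignment_correction(expected_xy, observed_xy, camera_px_per_display_px):
    """Return (dx, dy) in display pixels to shift this display's sub-image."""
    dx_cam = expected_xy[0] - observed_xy[0]
    dy_cam = expected_xy[1] - observed_xy[1]
    return (round(dx_cam / camera_px_per_display_px),
            round(dy_cam / camera_px_per_display_px))

# Example: a diagonal line should cross a display edge at camera x=1412 but is
# detected at x=1425, with roughly 2.1 camera pixels per display pixel.
dx, dy = alignment_correction((1412, 300), (1425, 300), 2.1)
# dx == -6: shift the sub-image 6 display pixels left, redisplay, and re-test.
```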
Adjusting color intensity across the canvas. In a typical embodiment the next stage would be to check for color. The canvas might be such that each display contains the same pattern of rectangles, each of a different color (perhaps red, blue and green), displayed with a range of intensities. The analysis then compares each color intensity across all of the displays, so that any fine difference in how a particular display treats a particular color/intensity combination can be adjusted for. Other tests of a similar nature can be used for particular differences between displays.
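One way such a comparison could be sketched (under the assumption that the placement step has already located each display's reference patch in the photograph) is to average the pixel values inside each patch and compare the displays against the wall-wide mean; the function names and region format are illustrative.

```python
# Sketch: compare how each display renders the same reference color patch and
# derive a relative per-channel gain. Patch regions are assumed to come from
# the earlier placement analysis; names and formats are illustrative.
import numpy as np

def patch_means(image, patch_regions):
    """image: HxWx3 array; patch_regions: {display_id: (x0, y0, x1, y1)}."""
    return {d: image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
            for d, (x0, y0, x1, y1) in patch_regions.items()}

def relative_gains(means):
    """Per-display gain that would bring each patch to the wall-wide average."""
    target = np.mean(list(means.values()), axis=0)
    return {d: target / np.maximum(m, 1e-6) for d, m in means.items()}
```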
In an alternative and potentially complementary method of calibration, a moving image is output to the video-wall (for example horizontal and vertical lines moving across the video-wall canvas); the camera captures and communicates the frames in real time, and image analysis software interprets the captured frames to determine positioning.
The stage-wise process. The methods outlined above are carried out in stages, and at each stage the configuration file being used by the server is updated based on the newly calculated adjustments, so that the end result is a file that can be used to produce an accurate display of any video file presented to the server. Color calibration can be achieved in two possible ways.
In one embodiment of the invention color calibration is done by controlling monitor settings, the centralized server software being in communication with the display settings (potentially via an RS232 or other interface) while a uniform image canvas is output to the displays. In an alternative embodiment the color adjustments are stored in the server software and applied by the server to the image data as it is output to the display itself. In the first of these cases the display settings are permanently stored on the server in a configuration file.
In one realization the same color is output on each of the displays within the video-wall and, after each change to the display output, an image is captured for analysis. This image analysis detects relative differences between the displays and adjusts the color output characteristics of individual displays within the video-wall, successively adjusting the hue, intensity and brightness of the individual images so that the same high and low values are achievable by each of the individual displays within the video-wall, making the fine adjustments necessary to the color output characteristics and settings of each individual display.
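A sketch of the second (server-side) alternative, in which a stored per-display gain and brightness offset are applied to each sub-image before output; the function name and parameter values are assumptions for illustration.

```python
# Sketch: apply stored per-display color adjustments on the server just before
# a sub-image is output, rather than changing monitor settings. Gains and
# offsets would come from the analysis loop; values here are illustrative.
import numpy as np

def apply_display_correction(sub_image, gain_rgb, brightness_offset=0.0):
    """sub_image: HxWx3 uint8 array for one display; returns a corrected copy."""
    corrected = sub_image.astype(np.float32) * np.asarray(gain_rgb, np.float32)
    corrected += brightness_offset
    return np.clip(corrected, 0, 255).astype(np.uint8)
```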
In one embodiment of the invention the computer-recognizable images output to each of the displays include a right-angled line in various corners of the displays comprising the video-wall, to aid in detecting the exact placement of these corners in relation to other display corners within the video-wall.
In another embodiment of the invention, component displays within the video-wall provide instructions to the user on how to connect their camera to the display (for example by providing a URL to visit on their network-connected or Internet-connected device).
Visual prompting and status indicators to assist during video-wall setup. As displays are linked into a video-wall it is helpful to the individual setting up the video-wall to receive visual feedback from the displays themselves as screens are added to or removed from the video-wall. In one embodiment of the invention, visual status indicators show progress as each display's position within the video-wall is successfully identified and the display is "linked into" the video-wall. For example, a line, pattern, color change, picture, or animated effect is used to differentiate monitors which have been added or positioned within the video-wall from those that have not. A different status indicator such as an image, icon, or input prompt could be output to those displays which are being output to by the video-wall server but are still awaiting placement/assignment/relative-positioning within the video-wall. In one embodiment, once an adjacency relationship is established between edges of displays within the video-wall, a status indicator shows that the edges of both displays have been successfully linked. In one embodiment, once the full video-wall has been set up, the system shows a visual success indicator spanning the full video-wall.
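As an illustration only, the per-display states that could drive such status indicators might be modeled as follows; the state names are assumptions, not terminology from the disclosure.

```python
# Sketch: per-display setup states that could drive the visual status
# indicators described above. State names are illustrative.
from enum import Enum, auto

class DisplayStatus(Enum):
    AWAITING_PLACEMENT = auto()  # receiving server output but not yet positioned
    EDGE_LINKED = auto()         # at least one adjacency relationship established
    LINKED = auto()              # position within the wall confirmed
    WALL_COMPLETE = auto()       # all displays placed; show the spanning success image
```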
In one embodiment of the invention, in addition to the image data, the digital camera device also provides meta-data about the image, such as: camera orientation, detected ambient light, detected distance from the subject, focal length, shutter speed, flash settings, camera aperture, detected camera rotation angle relative to the horizon, and GPS location. This additional data can be used to increase the accuracy or speed of image analysis or to provide additional details about the video wall.
In one embodiment of the invention, a smart phone or other mobile device with an embedded camera is in wireless communication with a video wall control server (which is in turn in communication with the video wall displays). The video wall control server outputs one or more optimized configuration images to the video-wall displays. Application code executed on the mobile device (either by the browser or by a native mobile device application) captures image data from said camera (this could be a still image, a stream of video data, or a sequence of still images) and forwards this image data over a wireless connection to the server.
An image analysis module (which could be executed on the server or on the mobile device, or parts of the analysis could be performed by each) processes the captured image data, determining the identity of each display and placing each within the captured image, then subsequently assessing differences in display placement, rotation, color, brightness, contrast, and other attributes of the various displays present within the captured image data. Via these comparisons the automated image analysis module is able to determine any adjustments required to the mappings of various ones of the displays captured in the image and subsequently translate these adjustments into changes to the video wall configuration mapping file(s) or data stores. In response to these changes the updated mapping is communicated by the control module to the server; the server then updates the test images or advances to the next test image in the sequence, repeating any failed steps as necessary and moving to subsequent configuration tests as successful calibration of each unique configuration image is achieved.
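The overall capture-analyze-adjust cycle described above might be organized roughly as follows; the helper names (output_test_canvas, capture_from_mobile_camera, analyze, update_mapping) stand in for the modules discussed in the text and are not a real API.

```python
# Sketch of the calibration feedback loop: output a test canvas, capture it via
# the mobile camera, analyze, update the mapping, and repeat per stage until
# the analysis is satisfied. All helper functions are placeholders.

MAX_ROUNDS = 10

def calibrate(server, displays, mapping):
    for stage in ("identify", "align", "color"):
        for _ in range(MAX_ROUNDS):
            output_test_canvas(server, displays, mapping, stage)
            photo = capture_from_mobile_camera()         # sent over the wireless link
            result = analyze(photo, mapping, stage)      # image analysis module
            if result.within_tolerance:
                break                                    # stage calibrated; next test
            mapping = update_mapping(mapping, result)    # adjust and redeploy the canvas
    return mapping                                       # persisted to the config file(s)
```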
In one embodiment the user is visiting a web-page with their mobile device (equipped with a camera), and the server is a web-server. That web-server is also in communication with (able to send controlling signals to) the displays comprising the video wall, the displays being controlled by the web-server to output the configuration images.
With the above embodiments in mind, it should be understood that the embodiments might employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing. Any of the operations described herein that form part of the embodiments are useful machine operations. The embodiments also relate to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The embodiments can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, solid state drives (SSD), network attached storage (NAS), read-only memory, random-access memory, optical discs (CD/DVD/Blu-ray/HD-DVD), magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Although the method operations were described in a specific order, it should be understood that other operations may be performed in between described operations, described operations may be adjusted so that they occur at slightly different times or the described operations may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing.
While the system and method has been described in conjunction with several specific embodiments, it is evident to those skilled in the art that many further alternatives, modifications and variations will be apparent in light of the foregoing description. Thus, the embodiments described herein are intended to embrace all such alternatives, modifications, applications and variations as may fall within the spirit and scope of the appended claims.
Embodiments will now be described more fully with reference to the accompanying drawings in which:
The preliminary steps in the setup of the video wall system are now complete and the system is ready to process and deliver content to the video wall displays. It can now receive content for display, the primary GPU processing application outputting it frame by frame to the frame buffer (78), and process it (e.g., crop/split/rotate/resize/color-convert), based on the stored identity, placement, and calibration settings, into individual sub-image portions (79) to be encoded and sent to the appropriate devices for output to the appropriate secondary display adapters, which in turn output the transformed image data to the corresponding displays (710), together displaying the video wall image across all displays; a sketch of this path follows. This decoding and displaying process continues while the video-wall is in use (711), and ends when terminated (712).
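A sketch of that per-frame path, reusing the illustrative configuration fields from the earlier file sketch; rotate and send_to_display are placeholders, and apply_display_correction is the color sketch shown earlier.

```python
# Sketch: crop each source frame into per-display sub-images using the stored
# mapping, apply rotation and color correction, and hand each sub-image to the
# sender for its display. Helper names are placeholders.

def output_frame(frame, config):
    wall = config["wall"]
    tile_w = wall["canvas_width"] // wall["columns"]
    tile_h = wall["canvas_height"] // wall["rows"]
    for d in config["displays"]:
        row, col = d["grid_position"]
        dx, dy = d["offset_px"]                          # bezel fine-tuning shift
        x = max(col * tile_w + dx, 0)
        y = max(row * tile_h + dy, 0)
        sub = frame[y:y + tile_h, x:x + tile_w]          # crop/split
        sub = rotate(sub, d["rotation_degrees"])         # per-display rotation
        sub = apply_display_correction(sub, d["color"]["gain_rgb"])
        send_to_display(d["id"], sub)                    # encode and transmit
```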
Claims
1. A system of adjusting, within a video-wall canvas, ones of identity, placement, color characteristics and configuration of individual ones of a plurality of displays to facilitate the operation of said plurality of displays as a video wall, the system comprising: a control module in communication with each of said plurality of displays, configured to receive display information from, and provide output commands to, individual ones of the plurality of displays;
- unique configuration images, designed to be interpretable via computerized image analysis of their captured output, to provide information on ones of identity, edges, corners, color characteristics, settings, size and placement of individual displays. Said unique configuration images being output to individual ones of the plurality of displays, in response to commands from the control module;
- a digital camera device in communication with the control module, being configured to capture and send for analysis digital camera images depicting ones of the plurality of displays including the unique configuration images output thereupon at the time of capture;
- an automated image analysis module, in communication with the control module, for receiving and analyzing said digital camera images, said analyzing comprising: isolating image data from the unique configuration images output thereupon; pairing ones of the depicted displays in digital camera images to corresponding ones of the plurality of displays; deriving individual display mapping data relative to ones of identity, position, placement, rotation, settings and color for ones of the displays within the video wall;
- said mapping data being stored in computer readable memory and applied to facilitate uniformity of output of said plurality of displays.
2. The system of claim 1, further comprising a Graphical User Interface (GUI) module consisting of a user interacting with a web-page being rendered by a web-browser running on a web-browsing device comprising a digital camera, the web-browser in communication with the control module, being configured to request the user to grant camera access and capture digital images of the plurality of displays.
3. The system of claim 2, further comprising the automated method being used in conjunction with a GUI controlled by a user, certain ones of the setup and configuration information required being provided by the user, others being provided via the automated image analysis module.
4. The system of claim 3, further comprising the GUI being configured to display a graphical representation of the mapping comprising a plurality of blocks, each block representing, and corresponding to, one of the devices comprising the video wall, the user being able to manipulate elements of the display to further adjust the mapping data.
5. The system of claim 2, where the digital camera device is embedded within a smart-phone device and the GUI is provided by a native smart-phone application in communication with the control module over a wireless network connection.
6. The system of claim 2, wherein the user is interacting with the GUI via ones of:
- a web browser;
- a laptop;
- a smartphone;
- a tablet;
- a personal computer;
- a mobile device;
- a touch-screen;
- a mouse;
- a keyboard;
- an input device;
- voice commands;
- gesture input;
- touch input.
7. The system of claim 1, wherein the control module further comprises a web-server running on an embedded PC housed within at least one of the plurality of displays.
8. The system of claim 1, further comprising the plurality of displays being updated to output, using the mapping data, at least one image spanning the plurality of displays.
9. The system of claim 1, being further configured to perform the outputting (of the unique configuration images), capturing (via a digital camera device), and analyzing (to derive mapping data) multiple times in sequence, each time utilizing the updated mapping data and each time further facilitating uniformity of output to the plurality of displays, the output of subsequent unique configuration images being controlled by the system.
10. The system of claim 1 wherein, the updating of the mapping data based on digital image analysis performed by the automated image analysis module includes ones of:
- adjusting the aspect ratio or size of the video-wall canvas to match one or more of the bounding edges of the total display canvas captured by the camera;
- spatially positioning (shifting and rotating) of ones of the displays based on detected markers;
- adjusting the relative size of each display based on detected locations of display corner markers;
- modifying the positioning and scaling of the images in response to detected physical display sizes;
- increasing or decreasing the relative brightness settings for image data sent to individual ones of the displays;
- increasing or decreasing various color settings for image data sent to individual ones of the displays;
- increasing or decreasing various color settings in communication with the display itself via a communications protocol;
- detecting the size of the bezel for ones of the displays.
11. The system of claim 1, wherein, the sending for analysis of the captured images comprises wireless transmission of image data from the digital camera device over a wireless communication network.
12. The system of claim 1, further comprising the digital camera device supplying additional meta-data about the captured image comprising ones of: camera orientation, detected ambient light, detected distance from the subject, focal length, shutter speed, flash settings, camera aperture, detected camera rotation angle relative to the horizon, and GPS location, these additional data being used to increase the accuracy or speed of image analysis or to provide additional details about the video wall.
13. The system of claim 1, wherein visual elements, being specific unique identification symbols, are used in the configuration images to facilitate assessing ones of the identity, relative position, rotation and color of the displays, these visual elements being ones of:
- embedded QR codes;
- specific corner markers to facilitate spatial location of corners of the display;
- specific edge markers;
- linear patterns across the canvas as a way of assessing continuity across bezel edges between different displays;
- individual pixels at the edge of each display are illuminated to ensure they are visible within the canvas providing an edge-check method;
- specific color(s) as a means of assessing color uniformity between multiple ones of the displays;
- QR code indicating embedded display identity within the image;
- lines proximal to display edges indicating display edges;
- markers proximal to display corners indicating display corners;
- solid blocks of color depicting color characteristics or settings;
- corner and edge markers depicting relative display size;
- a sequence of lines spanning the multiple displays within the video wall canvas facilitating precise positioning of displays;
- a uniform color across all displays.
14. The system of claim 1, where the image analysis software corrects for planar spatial analysis based on the position and angle of the camera.
15. The system of claim 1, where the display information received from the display via the control module includes display sizing and resolution information, the automated image analysis module being further configured to use this sizing and resolution information to assist in pairing ones of the depicted displays.
16. The system of claim 1, where several images are used in rotation to precisely determine alignment, the images comprising:
- at least one identification image to determine the identity of each display;
- at least one corner-coordinates image to determine the spacing, rotation and placement of displays;
- at least one color calibration image to match and calibrate color amongst multiple displays.
17. The system of claim 1, further comprising error checks being performed on the captured image data either prior to or after sending for analysis, where checks are performed on the captured image, feedback to the camera operator forms part of the process, and error messages are generated for output to the user, said detected error conditions comprising ones of:
- the detected number of displays in the captured image not matching the detected number of displays in communication with the server;
- the incidence angle of the captured image deviating too far from the recommended 90 degree angle;
- the clarity, contrast, and resolution of the captured image being sub-optimal for automated detection routines;
- the image being captured too far from or too close to the video wall;
- light or flash reflections being too strong for image detection.
18. A computer implemented method of adjusting, within a video-wall canvas, ones of identity, placement, color characteristics and configuration of individual ones of a plurality of displays by a control module in communication with each of said plurality of displays, the control module also being in communication with an image analysis module, the image analysis module also being in communication with a digital camera device, in order to facilitate the operation of said plurality of displays as a video-wall, the method comprising:
- detecting the plurality of displays;
- retrieving information from said displays;
- generating unique configuration images, the configuration images having been designed to communicate, via computerized image analysis, ones of the corresponding display's identity, edges and corners, placement within the canvas and color calibration;
- creating a test canvas based on said configuration images for outputting said unique configuration images to individual ones of the plurality of displays;
- outputting said test canvas to the displays;
- capturing, via the digital camera device, digital images of the plurality of displays including the unique configuration images output thereupon;
- retrieving by the image analysis module over a network said digital images for analysis;
- analyzing, by the image analysis module, the received digital images;
- pairing ones of the depicted displays in digital camera images to corresponding ones of the plurality of displays;
- deriving individual display mapping data relative to ones of identity, position, placement, rotation, settings and color for ones of the displays within the video wall;
- adjusting in response to the analyzing said identification, placement, and configuration for individual ones of the plurality of displays;
- storing said settings in computer readable memory;
- applying the updated settings to facilitate uniformity of output through an updated canvas.
19. The method of claim 18, further comprising a Graphical User Interface (GUI) consisting of a user interacting with a web-page being rendered by a web-browser running on a web-browsing device comprising a digital camera, the web-browser in communication with the control module, being configured to request the user to grant camera access and capture digital images of the plurality of displays.
20. A computer-readable medium storing one or more computer readable instructions configured to cause one or more processors to:
- display, via a control module in communication with each of a plurality of displays, unique configuration images, said images designed to be interpretable via computerized image analysis of their captured output, to provide information on ones of identity, edges, corners, color characteristics, settings, size and placement of individual displays in the form of a test canvas;
- receive, via the control module, display information from individual ones of the plurality of displays;
- receive, via the control module, images from a digital camera device configured to capture, and send for analysis, digital images depicting ones of the plurality of displays including the unique configuration images output thereupon at the time of capture;
- deliver via the control module both said digital images and the said test canvas as displayed to an automated image analysis module, for analysis of the digital camera images;
- analyze the images in the automated image analysis module, said analyzing comprising: isolating image data from the unique configuration images output thereupon; pairing ones of the depicted displays in digital camera images to corresponding ones of the plurality of displays; deriving individual display mapping data relative to ones of identity, position, placement, rotation, settings and color for ones of the displays within the physical arrangement;
- retrieve, via the control module, the individual display mapping data; and
- write, via the control module, a configuration file to permanent storage.
Type: Application
Filed: Jan 12, 2015
Publication Date: Oct 1, 2015
Applicants: Userful Corporation (Calgary)
Inventors: Timothy Griffin (Calgary), Adam Ryan McDaniel (Calgary)
Application Number: 14/595,203