Abstract: Novel tools and techniques are provided for implementing media content streaming or playback, and, more particularly, for implementing media stream synchronization. In some embodiments, a synchronization system might receive a first signal that is output from a first device, which receives an original video signal from a video source and outputs a first video signal. The synchronization system might analyze the first signal to determine a first frame buffer delay, generate a delay adjustment signal based on that determination, and send the delay adjustment signal to a frame buffer delay device. The frame buffer delay device and the first device might concurrently receive the original video signal from the video source. The delay adjustment signal causes the frame buffer delay device to apply the first frame buffer delay to the original video signal to produce a second video signal that is synchronized with the first video signal.
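The core of the frame buffer delay device described above is a fixed-depth delay line: frames from the original video signal are buffered and released N frames later so the parallel path lines up with the slower device. A minimal Python sketch, with the class name and API being illustrative assumptions rather than anything taken from the patent:

```python
from collections import deque

class FrameBufferDelay:
    """Hypothetical sketch: delay a video path by a fixed number of frames
    so it stays synchronized with the output of a slower first device."""

    def __init__(self, delay_frames: int):
        self.buf = deque()
        self.delay = delay_frames  # value of the delay adjustment signal

    def push(self, frame):
        """Accept one frame of the original signal; return the frame that is
        now delay_frames old, or None while the buffer is still filling."""
        self.buf.append(frame)
        if len(self.buf) > self.delay:
            return self.buf.popleft()
        return None
```

With a measured delay of two frames, the device outputs nothing for the first two frames and thereafter emits each frame exactly two frame periods after it arrived.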
Abstract: A projector includes a projection section that has a light source, a light modulation device, and a projection lens, modulates light emitted from the light source with the light modulation device, and projects the modulated light through the projection lens to form a projection image on a screen; a projection optical system drive section for changing a projection position of the projection image; a projection control section for making the light modulation device display a reduced input image; and a shift control section for controlling the projection optical system drive section so that the projection position of the projection image when the reduced input image is displayed at a central display position and at a shift display position remains the same when the projection control section changes the display position of the reduced input image from the central display position to the shift display position in a display area of the light modulation device.
Abstract: A video display device includes: a receiver receiving video information including a bit string which defines a gradation value of an input image at a first frame rate; a video generator generating output images including multiple frames corresponding to a one-frame input image, based on the received video information; and a display displaying the generated output images for each frame. The bit string includes an upper bit, a middle bit, and a lower bit which are arranged in descending order. The upper bit corresponds to a number of gradations per frame of the output images. The video generator varies the gradation value among multiple frames in the output images according to the middle bit of the input image, and performs spatial dithering to correct the output images according to the lower bit of the input image.
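The bit-field split and the temporal variation of the middle bits can be illustrated concretely. The field widths below (4 upper, 2 middle, 2 lower bits of an 8-bit value, with 4 output frames per input frame) are assumptions chosen for illustration; the abstract does not fix them:

```python
def split_gradation(value: int, upper=4, middle=2, lower=2):
    """Split a gradation value into (upper, middle, lower) bit fields,
    most significant first. Field widths are illustrative assumptions."""
    low = value & ((1 << lower) - 1)
    mid = (value >> lower) & ((1 << middle) - 1)
    up = value >> (lower + middle)
    return up, mid, low

def temporal_frames(value: int, n_frames=4, upper=4, middle=2, lower=2):
    """Emit per-frame gradation levels: the upper bits set the base level,
    and the middle bits raise that level by one on that many of the
    n_frames frames, so the time-averaged level carries the middle bits.
    (The lower bits would be handled separately by spatial dithering.)"""
    up, mid, _ = split_gradation(value, upper, middle, lower)
    return [up + (1 if i < mid else 0) for i in range(n_frames)]
```

For an input value of 0b10110110, the upper field is 11 and the middle field is 1, so one of the four frames displays level 12 and three display level 11, averaging 11.25.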
Abstract: Systems and methods for converting live action alphanumeric text to re-rendered and embedded pixel information for video overlay. For example, a computer-implemented method may include converting a captured image of alphanumeric characters into ASCII code, transmitting the ASCII code to a hub, capturing a video stream, generating a first output video stream, wherein the first output video stream includes the captured video stream and an overlay including the ASCII code, converting a captured image of a second set of alphanumeric characters into an ASCII code, transmitting the ASCII code to the hub, generating an updated output video stream, wherein the updated output video stream includes the captured video stream and an updated overlay including the ASCII code, and transmitting the updated output video stream for display.
Abstract: In some aspects, the disclosure is directed to methods and systems for transformation between media formats, such as between standard dynamic range (SDR) and high dynamic range (HDR) media or between HDR media formats, without undesired hue shifting, via one or both of a luminance mapping ratio technique and a direct color component mapping technique.
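The luminance mapping ratio technique avoids hue shift by computing one scalar ratio from the luminance transform and applying it identically to all three color components. A minimal sketch, assuming Rec. 709 luma weights and an arbitrary scalar tone-mapping function (both assumptions; the disclosure does not fix either choice):

```python
def map_luminance_preserving_hue(rgb, tone_map):
    """Luminance-mapping-ratio sketch: derive luminance Y from the input,
    tone-map Y, then scale R, G, and B by the same ratio tone_map(Y)/Y so
    the ratios between channels (and hence the hue) are unchanged."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma weights
    if y == 0:
        return (0.0, 0.0, 0.0)  # black maps to black
    ratio = tone_map(y) / y
    return (r * ratio, g * ratio, b * ratio)
```

Because a single ratio scales every channel, a pixel that was twice as red as it was green stays exactly twice as red after the mapping, which is the hue-preservation property the abstract describes.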
October 30, 2018
Date of Patent: February 25, 2020
Avago Technologies International Sales Pte. Limited
Abstract: Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the capturing of an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used with the images captured by each of the imaging components associated with the reference component to generate calibration information for those associated imaging components.
Abstract: A projection system includes an invisible light projector, an imaging unit, an image generator, and a visible light projector. The invisible light projector projects a predetermined invisible light image (a measurement pattern) onto an object via invisible light. The imaging unit captures an image of the invisible light projected from the invisible light projector. The image generator measures a shape of the object based on the image captured by the imaging unit to generate image data showing image content for projection onto the object in accordance with the measured shape. The visible light projector projects the image content shown by the image data onto the object via visible light. The invisible light projector emits pulsed invisible light to project the measurement pattern. The image generator generates the image data based on an image captured in accordance with a timing of the pulsed light emission.
Abstract: A synchronized video stream switching method for a video wall system. The video wall system includes a transmitter having first and second channels, and a receiver having third and fourth channels. The method includes: (a) After receiving a control command, the transmitter uses the first channel to transmit the first video stream to the third channel, and uses the second channel to transmit a pre-switch command to the fourth channel. (b) After receiving the pre-switch command, the receiver preserves the first video stream in the third channel. (c) After a switching event occurs at the transmitter in response to the control command, the transmitter transmits the second video stream via the first channel, but the receiver continues to preserve the first video stream in the third channel within a predetermined time period. (d) After the predetermined time period, the receiver accepts the second video stream in the third channel.
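The receiver-side behavior in steps (b) through (d) is essentially a hold-off timer: after the pre-switch command arrives, the receiver keeps presenting the preserved stream for a fixed period before accepting new frames. A sketch of that state machine, counting the hold period in frames and with all names being illustrative assumptions:

```python
class HoldOffReceiver:
    """Sketch of the receiver-side hold-off: after a pre-switch command,
    keep presenting the preserved (old) stream for hold_frames frames,
    then accept whatever stream arrives on the channel."""

    def __init__(self, hold_frames: int):
        self.hold_frames = hold_frames
        self.hold = 0          # remaining frames of the hold-off period
        self.current = None    # last accepted frame (the preserved stream)

    def on_pre_switch(self):
        """Handle the pre-switch command from the transmitter."""
        self.hold = self.hold_frames

    def on_frame(self, frame):
        """Handle one incoming frame; return the frame to present."""
        if self.hold > 0:
            self.hold -= 1
            return self.current  # preserve the first video stream
        self.current = frame     # hold-off expired: accept the new stream
        return frame
```

During the hold-off, frames of the second stream that arrive early (before the switch settles) are simply not presented, which is what makes the switch appear seamless across the wall.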
Abstract: A broadcast receiving apparatus includes: a first receiving unit receiving broadcast data for a first transmission method; a second receiving unit receiving broadcast data for a second transmission method; a first decoding unit decoding the broadcast data received by the first receiving unit; a second decoding unit decoding the broadcast data received by the second receiving unit; a video output unit outputting video, the video being generated based on data decoded by the first or second decoding unit; an operation input unit receiving control information from a remote controller; and a control unit controlling the units based on the control information received by the operation input unit. The remote controller includes a button to output control information for switching between a display state and a non-display state of service, the service being linked with a broadcasting program.
Abstract: A method for calibrating an imaging device includes calculating attitude information of the imaging device relative to a screen based at least in part on an image captured by the imaging device, generating a calibration signal based at least in part on the attitude information, displaying the calibration signal on the screen, and displaying a guiding signal on the screen.
Abstract: A method and system for visualizing micro-contrast. At least one processor obtains a selected root image from a digital video, generates an assistive image, and blends the assistive image with the selected root image to obtain a mutated image, which is configured to be displayed by a display device. The selected root image includes root pixels each associated with color values. The assistive image is generated based on a plurality of micro-contrast scores comprising a micro-contrast score calculated for each of at least a portion of the root pixels. The micro-contrast score is calculated for a selected one of the root pixels by identifying a submatrix centered at the selected root pixel, and calculating the micro-contrast score for the selected root pixel based on the color values associated with only sample pixels positioned one at each corner of the submatrix.
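The distinctive detail above is that only the four corner pixels of the submatrix are sampled, not the whole neighborhood. A minimal grayscale sketch, where scoring the corners as max minus min is an assumption (the abstract only says the score is computed from the corner color values):

```python
def micro_contrast_score(img, x, y, half=1):
    """Corner-sampled micro-contrast sketch: take the four corner pixels of
    the (2*half+1)-square submatrix centered at (x, y) in a row-major
    grayscale image, and score the spread of their values.
    The max-minus-min spread is an illustrative assumption."""
    corners = [
        img[y - half][x - half], img[y - half][x + half],
        img[y + half][x - half], img[y + half][x + half],
    ]
    return max(corners) - min(corners)
```

Sampling only four pixels per score keeps the per-pixel cost constant regardless of submatrix size, which matters when a score is computed for every root pixel of a video frame.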
Abstract: A multi-panel video wall and system is disclosed having a computer with a memory, or access to a public or private cloud, containing a video file; a processor for executing the video file; and a plurality of video display screens interconnected to one another and to the computer via wired or wireless transmission, each of the plurality of video display screens configured to work together to display video content generated from the video file that extends across all of the plurality of video display screens. Upon user interaction or detection of a user, one or more of the plurality of video display screens seamlessly transitions away from the video content to display a separate video content for interaction with the user.
June 23, 2019
Date of Patent: January 21, 2020
PanoScape Holdings, LLC
Jeremiah Fitzgerald, Matthew Mascheri, John Clarke
Abstract: A technology is provided for preventing a viewer's perception of brightness from changing significantly when content is switched. A video display device (1) includes a calculation unit (23) that calculates a video feature relating to a display video, and a mute video display unit that displays a mute video after a first display video, in which the luminance of the mute video corresponds to a value of the video feature relating to the first display video.
Abstract: Transmission of HDR image data is satisfactorily performed between apparatuses. A transmission apparatus (source apparatus) transmits the HDR image data to a reception apparatus (sink apparatus) over a transmission path. At that time, the transmission apparatus transmits information on a transmission method for and/or information on gamma correction for the HDR image data to the reception apparatus over the transmission path. The reception apparatus performs processing (decoding processing, gamma correction processing, and the like) on the received HDR image data, based on the information on the transmission method and/or the information on the gamma correction that are received. For example, the transmission apparatus receives from the reception apparatus information on the transmission methods and/or gamma corrections that the reception apparatus can support, selects a method that the reception apparatus can support, and uses the selected method.
Abstract: A projector, a projecting system and a transmission delay detection method thereof are provided. The projector includes an image processor and a delay calculator. The image processor, in a test mode, receives a first test image signal through an input end during a first time period, generates a first processed image signal through an output end, receives a second test image signal through the input end during a second time period, and generates a second processed image signal through the output end. The delay calculator detects a first time point at which the input end of the image processor receives the first test image signal and a second time point at which the signal on the output end of the image processor transitions from the first processed image signal to the second processed image signal, and generates a first transmission delay according to the first and second time points.
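The delay calculator's job reduces to two timestamp measurements: when a test signal appears at the input, and when the output transitions between processed signals. A sketch of that logic on timestamped samples, where the (timestamp, signal_id) event representation is an assumption for illustration:

```python
def detect_transition(samples):
    """Return the timestamp of the first sample whose value differs from
    the initial value, i.e. when the output transitions from one processed
    signal to the next. samples: list of (timestamp, value) pairs."""
    first_value = samples[0][1]
    for t, v in samples:
        if v != first_value:
            return t
    return None  # no transition observed

def transmission_delay(input_samples, output_samples):
    """Hypothetical delay calculator: t1 is when the first test signal is
    seen at the input end, t2 is when the output end transitions; the
    transmission delay is derived from the two time points."""
    t1 = input_samples[0][0]
    t2 = detect_transition(output_samples)
    return t2 - t1
```

Driving the processor with two distinguishable test images is what makes the output transition detectable at all; with a single static test image there would be no edge to timestamp.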
Abstract: An audio and video transmitting device includes a source-end connecting unit configured to receive an audio and video signal from a video source; a transmitting-end processing unit configured to process the audio and video signal and generate a processed audio and video signal; a first wireless unit configured to modulate the processed audio and video signal and generate a modulated audio and video signal; a transmitting-end antenna configured to send the modulated audio and video signal; and a transmitting-end peripheral interface configured to receive first peripheral information from a source-end peripheral device and send the first peripheral information to the transmitting-end processing unit, which sends the first peripheral information to the video source through the source-end connecting unit.
Abstract: An image projection system includes a projector and a pointing element. The projector includes: an image projection unit which projects an image; an image pickup unit; a detection unit which detects a pointed position of the pointing element, based on an image picked up by the image pickup unit; a synchronization signal transmission unit which transmits a synchronization signal to the pointing element; a screen size acquisition unit which acquires a screen size of the image projected from the image projection unit; and a synchronization signal adjustment unit which sets a light emission intensity of the synchronization signal to a first intensity if the acquired screen size is a first size and which sets the light emission intensity of the synchronization signal to a second intensity that is lower than the first intensity if the acquired screen size is a second size that is smaller than the first size.
Abstract: A dual fisheye lens calibration method includes: generating a first pair of fisheye images of two calibration chessboards located directly in front of two fisheye lenses and one or more random feature point calibration boards located in overlapping regions of the two fisheye lenses; adjusting positions of optical centers of the two fisheye lenses based on differences between projection coordinates of corner points on fisheye images and coordinates of images of the corner points on the first pair of fisheye images, to obtain optimal optical centers; generating a second pair of fisheye images of the calibration chessboards and random feature point calibration boards, based on the optimal optical centers; generating a panoramic image based on the second pair of fisheye images; and adjusting distortion polynomial coefficients and extrinsic parameters of the two fisheye lenses based on differences between the panoramic-image coordinates of the points in each matching feature point pair.
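The optical-center adjustment step is an error-minimization loop: nudge the center estimate until projected corner points best match their observed image coordinates. A simple coordinate-descent sketch, where the optimizer choice and the `project(center, point)` interface are assumptions (the method only says the centers are adjusted based on the projection differences):

```python
import math

def refine_optical_center(center, points, observed, project, step=0.5, iters=200):
    """Hypothetical coordinate-descent sketch: move the optical center in
    axis-aligned steps whenever doing so reduces the mean distance between
    project(center, point) and the observed image coordinates; halve the
    step when no move helps, until the step is negligible."""
    cx, cy = center
    def err(cx_, cy_):
        total = 0.0
        for p, (ox, oy) in zip(points, observed):
            px, py = project((cx_, cy_), p)
            total += math.hypot(px - ox, py - oy)
        return total / len(points)
    for _ in range(iters):
        best = err(cx, cy)
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if err(cx + dx, cy + dy) < best:
                cx, cy = cx + dx, cy + dy
                best = err(cx, cy)
                moved = True
        if not moved:
            step /= 2.0
            if step < 1e-6:
                break
    return cx, cy
```

A real implementation would use the fisheye projection model for `project`; the sketch only shows the structure of the search over center positions.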
Abstract: A temporal alignment system and method, for example, for detecting temporal misalignment in video frames when the frames are divided for transport, using a signal divider to divide a single signal S into portions S1 . . . SN, and using average picture level to determine whether data sets within a particular frame are misaligned.
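Average picture level (APL) is simply the mean pixel value of a portion, and a misalignment check can compare the APLs of the portions S1 . . . SN that should belong to the same frame. A sketch, where the outlier threshold is an assumption (the abstract does not specify how the APLs are compared):

```python
def average_picture_level(portion):
    """Mean pixel value (APL) of one frame portion, given as rows of pixels."""
    flat = [p for row in portion for p in row]
    return sum(flat) / len(flat)

def misaligned(portions, tol=5.0):
    """Sketch: flag temporal misalignment when any portion's APL deviates
    from the group mean by more than tol. The threshold value is an
    illustrative assumption."""
    apls = [average_picture_level(p) for p in portions]
    mean = sum(apls) / len(apls)
    return any(abs(a - mean) > tol for a in apls)
```

The idea is that portions cut from the same frame have statistically similar brightness, so a portion carrying data from a different frame (one transport path running early or late) tends to stand out in APL.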
Abstract: A system and a method for activating a widget based on sound volume adjustment are provided. The system includes a remote controller, an electronic device and a display device. When a trigger button is pressed, the remote controller outputs a first sound volume adjusting signal, which is transmitted to the display device through the electronic device to control the display device to adjust a sound volume of an original video signal and output a first sound source signal, and may output a reminder signal. When the trigger button is pressed again, the remote controller outputs a second sound volume adjusting signal for controlling the display device to adjust the first sound source signal and output a second sound source signal, and the electronic device activates the widget and controls the display device to display an operation interface of the widget.