Abstract: A display device includes: a display panel with pixels arranged in a first direction and a second direction; and a light source. Each pixel includes: a first sub-pixel including a color filter for a first color; and a second sub-pixel arranged adjacent to the first sub-pixel in the first direction and including a color filter for a complementary color between a second color and a third color. The light source includes: a first light emitter configured to emit light in the first color; a second light emitter configured to emit light in the second color; and a third light emitter configured to emit light in the third color. One frame period includes: a first light emission period during which the first and second light emitters are caused to emit light simultaneously; and a second light emission period during which the first and third light emitters are caused to emit light simultaneously.
Abstract: A system and a method for determining an aiming point and/or six degrees of freedom of an aiming device. The system includes the aiming device, a camera mounted thereon, and a computing device. The aiming point is the intersection between a line of interest (LOI) of the aiming device and the surface of an aimed object. A parallel auxiliary line (PAL) is a line starting from the camera and parallel to the LOI. The PAL appears as a point (PAL-T) in a template image captured by the camera. The computing device is configured to: provide a reference image; map the template image to the reference image; project the PAL-T to the reference image using the mapping relation to obtain a reference point (PAL-R); determine the 3D coordinates (PAL-3D) of the PAL-R; and determine the 3D coordinates of the aiming point based on the PAL-3D and a relationship between the LOI and the PAL.
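As a rough illustration of the mapping step only (not the patented pipeline), the Python/NumPy sketch below projects a PAL-T pixel into the reference image with an assumed template-to-reference homography and applies an assumed fixed LOI/PAL offset to the looked-up 3D point; the homography, 3D map, and offset values are all hypothetical.

```python
# Minimal geometric sketch: project the PAL-T pixel into the reference image with an
# assumed homography H, then shift the looked-up 3D point by the known LOI<->PAL offset.
import numpy as np

def project_point(H, pt):
    """Apply a 3x3 homography H to a 2D pixel (homogeneous projection)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Assumed inputs (all hypothetical): template->reference homography, PAL-T pixel,
# a dense 3D map for the reference image, and the fixed camera-to-LOI offset.
H = np.array([[1.02, 0.01, 5.0],
              [0.00, 0.98, -3.0],
              [0.00, 0.00, 1.0]])
pal_t = (320.0, 240.0)
ref_xyz = np.zeros((480, 640, 3))          # stand-in 3D map (meters per pixel)
loi_offset = np.array([0.05, -0.03, 0.0])  # camera-to-LOI translation (meters)

pal_r = project_point(H, pal_t)             # PAL-T mapped into the reference image
u, v = int(round(pal_r[0])), int(round(pal_r[1]))
pal_3d = ref_xyz[v, u]                      # 3D coordinates of PAL-R
aim_point = pal_3d + loi_offset             # crude parallel-line correction
print(pal_r, aim_point)
```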
Abstract: The invention discloses a system and methods for predicting attributes of effective television ads, digital video, and other audiovisual content before production. Output comprises instructions for producing videos optimized to achieve at least one performance objective. Some embodiments predict the performance of existing video, such as predicting winning Super Bowl ads before they air. The system comprises one or more processors and a memory configured to: receive at least one data stream of audiovisual content in at least one public and/or private domain; analyze said data streams to determine one or more attributes associated with said data streams; analyze said data streams to determine one or more performance scores associated with said data streams; attribute at least a portion of one said performance score to at least a portion of one said attribute associated with said data streams; and output, to a memory, any or all combinations thereof.
Abstract: Described are systems and methods for substituting graph information that goes missing during the transmission of graph information from one entity to another. In one example, a system includes a processor and a memory with machine-readable instructions that cause the processor to: receive an incoming graph stream comprising a plurality of graphs having detected object information and at least one missing graph; transform the plurality of graphs into incoming embedded vectors having values that represent the detected object information of the incoming graph stream; assign the embedded vector that corresponds to the at least one missing graph a missing graph value; and substitute the missing graph value with a substitute value using a recurrent neural network that interpolates the substitute value from the incoming embedded vectors that are not equal to the missing graph value.
Type:
Grant
Filed:
August 31, 2021
Date of Patent:
May 28, 2024
Assignee:
Toyota Motor Engineering & Manufacturing North America, Inc.
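The missing-graph substitution described in the abstract above can be illustrated with a minimal PyTorch sketch; the sentinel value, embedding dimension, and choice of a GRU are assumptions for illustration, not details from the patent.

```python
# Embeddings for missing graphs are flagged with a sentinel value, masked to zero,
# and the recurrent network's output at those positions is used as the substitute.
import torch
import torch.nn as nn

EMBED_DIM = 16
MISSING = -1.0  # sentinel "missing graph value" (assumed, for illustration only)

gru = nn.GRU(input_size=EMBED_DIM, hidden_size=EMBED_DIM, batch_first=True)

# Toy incoming stream: 10 embedded vectors, two of which are missing.
stream = torch.randn(1, 10, EMBED_DIM)
stream[0, 3] = MISSING
stream[0, 7] = MISSING

missing_mask = (stream == MISSING).all(dim=-1)   # [1, 10] True where a graph is missing
observed = torch.where(missing_mask.unsqueeze(-1), torch.zeros_like(stream), stream)

with torch.no_grad():
    interpolated, _ = gru(observed)              # GRU output per time step

# Substitute only the missing positions; keep observed embeddings untouched.
completed = torch.where(missing_mask.unsqueeze(-1), interpolated, stream)
print(completed.shape)  # torch.Size([1, 10, 16])
```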
Abstract: Systems, methods and apparatus for processing video can include a processor. The processor can be configured to perform object detection to detect visual indications of potential objects of interest in a video scene, to receive a selection of an object of interest from the potential objects of interest, and to provide enhanced video content within the video scene for the object of interest indicated by the selection.
Type:
Grant
Filed:
July 12, 2021
Date of Patent:
May 14, 2024
Assignee:
Avago Technologies International Sales Pte. Limited
Abstract: A system for automatically placing virtual advertisements in sports videos includes a shot detection module, a background extraction module, a calibration module, and an asset placement module. The shot detection module detects the target shot of a sports video via a first trained model. The background extraction module performs background extraction on the first frame of the target shot to obtain a first background mask. The calibration module performs camera calibration to detect a first transformation relation, between the first frame and a sport field template, via a second trained model. The asset placement module transforms an advertisement asset according to the first transformation relation to obtain a first transformed asset, and executes asset placement to place the first transformed asset onto the first frame according to the first background mask to obtain a first image frame with a placed advertisement.
Type:
Grant
Filed:
December 17, 2021
Date of Patent:
May 7, 2024
Assignee:
INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors:
Sheng-Han Wu, Feng-Sheng Lin, Ruen-Rone Lee
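The final placement step in the abstract above reduces, in essence, to masked compositing once the asset has been warped by the first transformation relation. A minimal NumPy sketch, with all data assumed:

```python
# Compositing sketch only: the perspective transform that maps the asset into frame
# coordinates is taken as given; the ad is then pasted only where the background mask
# allows it, so foreground objects (e.g., players) stay on top.
import numpy as np

h, w = 720, 1280
frame = np.zeros((h, w, 3), dtype=np.uint8)              # first frame of the target shot
transformed_asset = np.zeros((h, w, 3), dtype=np.uint8)  # asset already warped into frame coords
asset_alpha = np.zeros((h, w), dtype=bool)                # where the warped asset has content
background_mask = np.ones((h, w), dtype=bool)             # True = background (field), False = players

paste = asset_alpha & background_mask                      # only background pixels covered by the ad
composited = frame.copy()
composited[paste] = transformed_asset[paste]
```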
Abstract: An example device for coding point cloud data includes a memory configured to store data representing points of a point cloud, and one or more processors implemented in circuitry and configured to: determine height values of points in a point cloud; classify the points into a set of ground points or a set of object points according to the height values; and code the ground points and the object points according to the classifications. The one or more processors may determine top and bottom thresholds and classify the ground and object points according to the top and bottom thresholds. The one or more processors may further code a data structure, such as a geometry parameter set (GPS), including data representing the top and bottom thresholds.
Type:
Grant
Filed:
December 21, 2021
Date of Patent:
April 2, 2024
Assignee:
QUALCOMM Incorporated
Inventors:
Luong Pham Van, Adarsh Krishnan Ramasubramonian, Bappaditya Ray, Geert Van der Auwera, Marta Karczewicz
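The classification step in the abstract above can be sketched in a few lines of NumPy; the threshold values are assumptions, and the signalling of the thresholds in a geometry parameter set is not modeled here.

```python
# Simplified classification sketch: points whose height lies between assumed bottom and
# top thresholds are treated as ground points, everything else as object points.
import numpy as np

points = np.random.uniform(low=[-50, -50, -2], high=[50, 50, 10], size=(1000, 3))
bottom_threshold, top_threshold = -0.5, 0.5   # assumed height bounds for "ground"

heights = points[:, 2]
is_ground = (heights >= bottom_threshold) & (heights <= top_threshold)

ground_points = points[is_ground]
object_points = points[~is_ground]
print(len(ground_points), len(object_points))
```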
Abstract: An electronic device includes at least one processor or circuit configured to function as a setting unit, an acquisition unit and a processing unit. The setting unit sets an object region in an image acquired from an imaging unit. The acquisition unit acquires a distance map including distance information corresponding to a pixel included in the object region. The processing unit determines object distance information indicating a distance to an object included in the object region, based on the distance information corresponding to the pixel included in the object region.
Abstract: A method, computer program, and computer system are provided for coding video data. Video data including a reference view and a current view is received. A co-located block in the reference view is identified for a current block in the current view. A predicted block vector is calculated based on an offset vector between the current block and the co-located block, and a disparity vector between the co-located block and a reference block in the reference view. The video data is encoded/decoded based on the calculated predicted block vector.
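One plausible reading of the prediction step is a simple vector sum, sketched below with assumed 2D integer vectors; the abstract itself only states that the prediction is based on both vectors.

```python
# Arithmetic sketch of the prediction step (not the codec's actual syntax or derivation):
# combine the current-to-co-located offset with the co-located-to-reference disparity.
import numpy as np

offset_vector = np.array([4, -2])      # current block -> co-located block
disparity_vector = np.array([-7, 0])   # co-located block -> reference block

predicted_block_vector = offset_vector + disparity_vector
print(predicted_block_vector)          # [-3 -2]
```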
Abstract: The present technology enables a transmitter device to read appropriate device information (EDID) from a receiver device with a reduced burden on the user. A memory unit that stores first device information and second device information, and a communication unit that communicates with an external device, are included. A control unit determines to cause the external device to read the second device information on the basis of reception of a rewrite signal for the first device information from the external device. It is thus possible to cause the external device (a transmitter device) to read appropriate device information (EDID) from a receiver device (a reception device) with a reduced burden on the user, increasing user-friendliness.
Abstract: A video streaming transmission system and a transmitter device are provided. The video streaming transmission system includes a receiver device and a transmitter device. The receiver device is coupled to a display module. The transmitter device is coupled to an image providing device through a transmission line. The transmitter device includes a processing unit, a storage unit, and a wireless communication module. The storage unit is coupled to the processing unit. The wireless communication module is coupled to the processing unit, and is coupled to the receiver device in a wireless communication manner. The processing unit generates virtual extended display identification data and stores them in the storage unit, and provides the virtual extended display identification data to the image providing device through the transmission line, so that the image providing device recognizes multiple virtual displays coupled thereto, and outputs multiple video streams through the transmission line.
Abstract: The system includes a processor and a memory. The processor is configured to receive a sensor data set and a video data set associated with a vehicle; determine, using a reduction model, a compressed video data set based at least in part on the sensor data set and the video data set; and transmit or store the compressed video data set. The memory is coupled to the processor and configured to provide the processor with instructions.
Abstract: A decrease in the color gamut in a low gradation region is reduced to improve image quality in a liquid crystal panel. To this end, a color image signal for a liquid crystal display panel, in which a display image is generated by light passing through a rear liquid crystal cell and a front liquid crystal cell, is converted into a black-and-white image signal using a predetermined coefficient. The black-and-white image signal thus obtained is then subjected to gradation value conversion so that it performs gradation expression in the gradation region set as the gradation range in which the color gamut changes greatly when the front liquid crystal cell is driven by the color image signal; a rear image signal, serving as the black-and-white image signal for the rear liquid crystal cell, is thereby generated.
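A minimal sketch of the two conversion steps, with assumed numbers (BT.709 luma weights standing in for the "predetermined coefficient" and a simple power curve standing in for the gradation value conversion):

```python
# Illustrative sketch of color-to-black-and-white conversion followed by a gradation
# curve that stretches the low-gradation range; both choices are assumptions.
import numpy as np

rgb = np.random.rand(1080, 1920, 3)                    # color image signal, 0..1

# Step 1: convert the color signal to a black-and-white (luminance) signal.
luma_coeff = np.array([0.2126, 0.7152, 0.0722])        # assumed coefficients
bw = rgb @ luma_coeff

# Step 2: gradation value conversion emphasizing the low-gradation region,
# where the front cell's color gamut changes most (curve shape is an assumption).
rear_signal = np.clip(bw, 0.0, 1.0) ** 0.45
```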
Abstract: The present disclosure relates to methods and apparatus for graphics processing. Aspects of the present disclosure can determine at least one scene including one or more viewpoints. Also, aspects of the present disclosure can divide the at least one scene into a plurality of zones based on each of the one or more viewpoints. Further, aspects of the present disclosure can determine whether a zone based on one viewpoint of the one or more viewpoints is substantially similar to a zone based on another viewpoint of the one or more viewpoints. Aspects of the present disclosure can also generate a geometry buffer for each of the plurality of zones based on the one or more viewpoints. Moreover, aspects of the present disclosure can combine the geometry buffers for each of the plurality of zones based on the one or more viewpoints.
Type:
Grant
Filed:
June 29, 2021
Date of Patent:
April 18, 2023
Assignee:
QUALCOMM Incorporated
Inventors:
Dieter Schmalstieg, Bernhard Kerbl, Philip Voglreiter
Abstract: A method, system, and computer-readable instructions for video encoding comprising determining one or more region of interest (ROI) parameters for pictures in a picture stream and a temporal down-sampling interval. One or more areas outside the ROI in a picture in the picture stream are temporally down-sampled according to the interval. The resulting temporally down-sampled picture is then encoded, and the encoded temporally down-sampled picture is transmitted. Additionally, a picture encoded in this way in an encoded picture stream may be decoded, and areas outside an ROI of the picture may be temporally up-sampled. The temporally up-sampled areas outside the ROI are inserted into the decoded picture stream.
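A conceptual NumPy sketch of the temporal down-sampling outside the ROI, under the assumption that non-ROI pixels are simply refreshed once per interval and repeated in between:

```python
# Inside the ROI every frame is kept; outside the ROI, pixels are refreshed only once
# per down-sampling interval and otherwise repeated from the last refreshed frame.
import numpy as np

num_frames, h, w = 30, 240, 320
frames = np.random.rand(num_frames, h, w)
roi = (slice(80, 160), slice(100, 220))   # assumed rectangular ROI
interval = 4                              # temporal down-sampling interval

out = np.empty_like(frames)
last_background = frames[0]
for t in range(num_frames):
    if t % interval == 0:
        last_background = frames[t]       # refresh the area outside the ROI
    out[t] = last_background
    out[t][roi] = frames[t][roi]          # the ROI stays at full temporal rate
```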
Abstract: A videoconferencing calibration system includes a first codec, a second codec connected to the first codec via a videoconferencing connection, and a first controller in communication with the first codec. The first controller is configured to control a videoconferencing component to transmit a videoconferencing signal to the second codec though the videoconferencing connection. The system also includes a second controller in communication with the second codec. The second controller is configured to analyze the videoconferencing signal transmitted though the videoconferencing connection to determine a calibration adjustment value according to at least one calibration adjustment rule, and to transmit the determined calibration adjustment value to the first controller. The first controller is configured to adjust a signal level setting of the first codec according to the calibration adjustment value transmitted by the second controller.
Abstract: A videoconferencing calibration system includes first and second videoconferencing components, a first codec connected with a second codec via a videoconferencing connection, and first and second controllers. The first controller is configured to control the first videoconferencing component to transmit a videoconferencing signal to the second codec, and the second controller is configured to analyze the transmitted videoconferencing signal to determine a calibration adjustment value by comparing at least one signal level value of the videoconferencing signal to a calibration target according to at least one calibration adjustment rule, and to transmit the determined calibration adjustment value to the first controller. The first controller is configured to adjust a signal level setting of the first codec according to the calibration adjustment value transmitted by the second controller.
Abstract: A display device is provided. The display device includes: an image processor that image-processes contents and scales the contents at a first scaling magnification; a controller that determines a second scaling magnification on the basis of the resolution of the contents scaled at the first scaling magnification and the output resolution of the display device; and a display that scales the contents scaled at the first scaling magnification according to the second scaling magnification and displays the resulting contents.
Type:
Grant
Filed:
August 28, 2015
Date of Patent:
November 27, 2018
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Min-woo Lee, Jong-ho Kim, Hyun-hee Park, Jae-hun Cho, Ho-jin Kim, Yong-man Lee, Won-hee Choe, Dong-kyoon Han
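The relationship between the two scaling magnifications in the abstract above can be illustrated with simple arithmetic; all numbers below are assumed.

```python
# The display's second magnification is chosen so that content scaled at the first
# magnification ends up filling the output resolution.
content_width = 1920
first_scaling_magnification = 0.5
output_width = 3840

scaled_width = content_width * first_scaling_magnification   # 960 after the image processor
second_scaling_magnification = output_width / scaled_width   # 4.0, applied by the display
print(second_scaling_magnification)
```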
Abstract: A device comprises an input unit, a motion acquisition unit, a matrix operation unit, and a drawing unit. The input unit sequentially receives a first frame image and a second frame image. The motion acquisition unit acquires motion data between the first frame image and the second frame image. The matrix operation unit calculates a projection matrix for projecting an output frame image to the second frame image from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a parallel translation component in directions perpendicular to an image-shooting direction and a rotation component with respect to the image-shooting direction, and an auxiliary matrix including a motion component not included in the first matrix or the second matrix. The drawing unit generates the output frame image from the second frame image by using the projection matrix.
Abstract: In one aspect, a method for generating higher resolution volumetric image data from lower resolution volumetric image data includes receiving volumetric image data of a scanned subject, wherein the volumetric image data includes data representing a periodically moving structure of interest of the scanned subject, and wherein the volumetric image data covers multiple motion cycles of the periodically moving structure of interest. The method further includes estimating inter-image motion between neighboring images of the received volumetric image data. The method further includes registering the received volumetric image data based at least on the estimated inter-image motion. The method further includes generating the higher resolution volumetric image data based on the registered volumetric image data, a super resolution post-processing algorithm, and a point spread function of an imaging system that generated the volumetric image data.
Abstract: Texture features of images are calculated for recognizing pixel sets in the images as one category among multiple candidate categories. For example, a wavelet transformation is applied to obtain a wavelet vector. By analyzing components of the wavelet vector, one pixel set may be recognized as part of an architecture object or a natural plant object. In addition, line segments within the pixel sets may be calculated, and their statistics may be used for recognizing different objects.
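As a toy illustration of wavelet-based texture features (not the specific transform or classifier of the patent), a single level of a Haar-like decomposition with sub-band energies used as a small feature vector:

```python
# One level of a Haar-like 2x2 wavelet decomposition; sub-band energies serve as features.
import numpy as np

def haar_features(patch):
    """patch: 2D array with even height/width; returns energies of LL, LH, HL, HH bands."""
    a = patch[0::2, 0::2]
    b = patch[0::2, 1::2]
    c = patch[1::2, 0::2]
    d = patch[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0   # horizontal-edge response
    hl = (a - b + c - d) / 4.0   # vertical-edge response
    hh = (a - b - c + d) / 4.0   # diagonal response
    return np.array([np.mean(band ** 2) for band in (ll, lh, hl, hh)])

patch = np.random.rand(64, 64)
print(haar_features(patch))   # strong LH/HL energy suggests man-made straight edges
```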
Abstract: Embodiments of the present invention comprise systems, methods and devices for increasing the perceived brightness of an image. In some embodiments this increase compensates for a decrease in display light source illumination.
Type:
Grant
Filed:
June 15, 2005
Date of Patent:
December 16, 2014
Assignee:
Sharp Laboratories of America, Inc.
Inventors:
Louis Joseph Kerofsky, Scott James Daly
Abstract: A method for encoding pictures within a group of pictures using prediction, where a first reference picture from a group of pictures and a second reference picture from the subsequent group of pictures are used in predicting pictures in the group of pictures associated with the first reference picture. A plurality of anchor pictures in the group of pictures associated with the first reference picture may be predicted using both the first and second reference pictures to ensure a smooth transition between different groups of pictures within a video frame.
Abstract: A single-ended blur detection probe and method with a local sharpness map for analyzing a video image sequence uses two sets of edge filters, one for “fast edges” and the other for “slow edges.” Each set of edge filters includes a horizontal bandpass filter, a vertical bandpass filter and a pair of orthogonal diagonal filters, where the frequency response of the fast edge filters overlaps the frequency response of the slow edge filters. The video image sequence is input to each filter of each set, and the output absolute values are combined with weighting factors to produce a slow edge weighted sum array and a fast edge weighted sum array. The respective weighted sum arrays are then decimated to produce a slow edge decimated array and a fast edge decimated array.
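A loose SciPy/NumPy sketch of the two-scale idea, using derivative-of-Gaussian filters in place of the patent's specific bandpass and diagonal kernels, with assumed weighting factors and decimation:

```python
# A small sigma stands in for the "fast edge" filter set and a larger sigma for the
# "slow edge" set; absolute responses are combined with weights and decimated.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_energy(img, sigma):
    gx = gaussian_filter(img, sigma, order=(0, 1))   # horizontal edges
    gy = gaussian_filter(img, sigma, order=(1, 0))   # vertical edges
    return np.abs(gx) + np.abs(gy)

frame = np.random.rand(480, 640)
w_fast, w_slow = 1.0, 1.0                            # assumed weighting factors

fast_sum = w_fast * edge_energy(frame, sigma=1.0)    # "fast" (sharp) edges
slow_sum = w_slow * edge_energy(frame, sigma=3.0)    # "slow" (soft) edges

fast_decimated = fast_sum[::8, ::8]
slow_decimated = slow_sum[::8, ::8]
# One possible local sharpness indication: blurred regions show slow-edge energy
# without corresponding fast-edge energy.
sharpness_map = fast_decimated / (slow_decimated + 1e-6)
```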
Abstract: An apparatus for driving a display panel includes: a time variant signal (TVS) generator configured to generate a time variant signal group; a common pulse signal generator configured to generate a plurality of pulse signals; a selector configured to receive the time variant signal, the plurality of the pulse signals, and video data and select a grayscale voltage corresponding to the video data; and a buffer configured to buffer and transfer an output of the selector. Herein, the selector and the buffer are provided to each of a plurality of channels, and the time variant signal and the plurality of the pulse signals are inputted in common to the selector of each channel.
Type:
Grant
Filed:
August 24, 2010
Date of Patent:
February 18, 2014
Assignee:
Magnachip Semiconductor, Ltd.
Inventors:
Beom-Jin Kim, Hee-Jung Kim, Dae-Ho Lim, Ki-Seok Cho
Abstract: A noise reduction method, medium, and system. The noise reduction method includes calculating a noise level of an input image and removing noise from a central pixel within a window of a predetermined size in the input image using a weight determined based on a difference between signal intensities of the central pixel and a plurality of adjacent pixels within the window and the calculated noise level.
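A simplified single-window sketch of such a weight, where the decay of each neighbor's weight is scaled by the estimated noise level; the exact weight formula and noise estimator are assumptions.

```python
# The weight of each neighbor decays with its intensity difference from the central pixel,
# with the decay scaled by the calculated noise level (bilateral-style range weighting).
import numpy as np

def denoise_pixel(window, noise_level):
    """window: square neighborhood; the central pixel is replaced by a weighted mean."""
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    diff = window - center
    weights = np.exp(-(diff ** 2) / (2.0 * (noise_level ** 2) + 1e-12))
    return np.sum(weights * window) / np.sum(weights)

window = np.array([[10, 12, 11],
                   [ 9, 20, 10],    # slightly noisy central pixel
                   [11, 10, 12]], dtype=float)
print(denoise_pixel(window, noise_level=10.0))   # pulled toward its neighbors (~12)
```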
Abstract: A signal processing apparatus includes a lower-level region data detecting section detecting, in luminance data of an input video signal, luminance data corresponding to a value in a set lower-level region, and a data converting section converting a value of the luminance data corresponding to the lower-level region to a set conversion value.
Abstract: An image processing apparatus reads a unit of first image data corresponding to a first region, a unit of pixels at a time, from a first memory storing image data in a band area and reads second image data to be used in processing of multiple processing object pixels corresponding to the first region from a second memory. The image processing apparatus processes each of the multiple processing object pixels by using pixels in a second region containing each of the multiple processing object pixels and stores data of pixels contained in the first image data to be used in processing of other multiple pixels in the second memory.