Abstract: The focus position of each projection apparatus is changed depending on the configuration state of a projection system comprising a plurality of projection apparatuses.
Abstract: A method and system for non-linear blending in motion-based video processing are described. Aspects of a system for processing images may include circuitry within a chip that computes a blending factor, with a nonlinear relationship to a motion metric, which defines an amount of motion between a current video picture and at least one preceding video picture and/or at least one subsequent video picture. At least one pixel in the current video picture may be adjusted based on the computed blending factor. Aspects of a method for processing images may include computing a blending factor, with a nonlinear relationship to a motion metric, which defines an amount of motion between a current video picture and at least one preceding video picture and/or at least one subsequent video picture. At least one pixel in the current video picture may be adjusted based on the computed blending factor.
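A minimal sketch of the blending described above, assuming a mean-absolute-difference motion metric and a sigmoid nonlinearity; the function names and the knee/steepness parameters are hypothetical and not taken from the patent.

```python
# Illustrative sketch (not the patented implementation): temporal blending where
# the blend factor depends nonlinearly on a per-pixel motion metric.
import numpy as np

def motion_metric(current, previous, following):
    """Per-pixel motion estimate: mean absolute difference against both neighbours."""
    d_prev = np.abs(current.astype(np.float32) - previous.astype(np.float32))
    d_next = np.abs(current.astype(np.float32) - following.astype(np.float32))
    return (d_prev + d_next) / 2.0

def blend_factor(metric, knee=8.0, steepness=0.5):
    """Map the motion metric through a nonlinear (sigmoid) curve:
    low motion -> strong temporal averaging, high motion -> keep the current pixel."""
    return 1.0 / (1.0 + np.exp(-steepness * (metric - knee)))

def temporal_blend(current, previous, following):
    alpha = blend_factor(motion_metric(current, previous, following))
    average = (previous.astype(np.float32) + following.astype(np.float32)) / 2.0
    out = alpha * current + (1.0 - alpha) * average
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev_f, cur_f, next_f = (rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(3))
    print(temporal_blend(cur_f, prev_f, next_f))
```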
Abstract: An apparatus and method for generating an image are provided, in which images having different exposure amounts are generated. The apparatus and method synthesize the generated images so that a high-sensitivity (or high-quality) image can be generated. The apparatus to generate an image includes an exposure adjustment unit to adjust an exposure amount, an image generation unit to generate a plurality of images of different exposure amounts and different resolutions, and an image synthesis unit to synthesize the plurality of generated images.
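A hedged sketch of the synthesis step described above, assuming the bracketed frames are already aligned and at a common resolution (the Gaussian weighting function is an assumption, not the patented synthesis rule).

```python
# Illustrative sketch: fuse frames captured at different exposure amounts by
# weighting well-exposed pixels more heavily.
import numpy as np

def exposure_weight(frame):
    """Weight pixels by how far they are from under- or over-exposure."""
    f = frame.astype(np.float32) / 255.0
    return np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2))

def fuse_exposures(frames):
    frames = [f.astype(np.float32) for f in frames]
    weights = [exposure_weight(f) for f in frames]
    total = np.sum(weights, axis=0) + 1e-6          # avoid division by zero
    fused = sum(w * f for w, f in zip(weights, frames)) / total
    return np.clip(fused, 0, 255).astype(np.uint8)
```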
Abstract: By setting an area for displaying OSD data, a high-intensity part of this area is highlighted and an area which is not to be highlighted is set. Also, by performing translucent display of the OSD data and natural-image data, an area which is not to be highlighted can be set.
Abstract: A method for switching a channel of an image display device and an apparatus adopting the method are disclosed. The method for switching a channel includes switching a currently displayed channel to a major channel, which is adjacent to the currently displayed channel, if a channel switch command is input using a first direction key, and switching the currently displayed channel to a minor channel, which is adjacent to the currently displayed channel, if a channel switch command is input using a second direction key. Accordingly, a user can switch to a desired channel, thereby increasing user convenience, and a list of minor channels pertaining to a current channel is displayed, so that all channels provided by the broadcasting station of the current channel can be identified.
Abstract: Certain aspects of a method and system for motion compensated temporal filtering using both finite impulse response (FIR) and infinite impulse response (IIR) filtering may include blending at least one finite impulse response (FIR) filtered output picture of video data and at least one infinite impulse response (IIR) filtered output picture of video data to generate at least one blended non-motion compensated output picture of video data. A motion compensated picture of video data may be generated utilizing at least one previously generated output picture of video data and at least one current input picture of video data. A motion compensated picture of video data may be blended with at least one current input picture of video data to generate a motion compensated output picture of video data. The generated motion compensated output picture of video data and the generated non-motion compensated output picture of video data may be blended to generate at least one current output picture of video data.
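The blending structure described above can be sketched as three linear blends; the blend weights and helper names below are hypothetical placeholders, not the patented filter design.

```python
# Illustrative sketch of blending FIR/IIR filtered pictures with a
# motion-compensated picture to produce the current output picture.
import numpy as np

def blend(a, b, k):
    """Linear blend: k * a + (1 - k) * b."""
    return k * a.astype(np.float32) + (1.0 - k) * b.astype(np.float32)

def mctf_output(fir_out, iir_out, mc_pred, current,
                k_fir_iir=0.5, k_mc=0.5, k_final=0.5):
    non_mc = blend(fir_out, iir_out, k_fir_iir)   # blended non-motion-compensated picture
    mc_out = blend(mc_pred, current, k_mc)        # motion-compensated picture blended with current input
    return blend(mc_out, non_mc, k_final)         # final blend of MC and non-MC outputs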
Abstract: A method and apparatus is provided for collecting data and generating synthesized data from the collected data. For example, a request for an image may be received from a requestor and at least one data capture device may be identified as capable of providing at least a portion of the requested image. A request may be sent to identified data capture devices to obtain an image corresponding to the requested image. Multiple images may be received from the data capture devices and may further be connected or stitched together to provide a panoramic, 3-dimensional image of requested subject matter.
Type:
Grant
Filed:
November 17, 2006
Date of Patent:
July 30, 2013
Assignee:
Microsoft Corporation
Inventors:
Ruston John David Panabaker, Eric Horvitz, Johannes Klein, Gregory Baribault, Feng Zhao
Abstract: Methods, computer-readable media, and systems are provided for combining multiple video streams. One method for combining the multiple video streams includes extracting a sequence of media frames (224-1/224-2) from presenter (222-1) video and from shared digital rich media (222-2) video (340). The media frame (224-1/224-2) content is analyzed (226) to determine a set of space and time varying alpha values (228/342). A compositing operation (230) is performed to produce the combined video frames (232) based on the content analysis (226/344).
Type:
Application
Filed:
October 8, 2010
Publication date:
July 25, 2013
Applicant:
HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Abstract: A method for generating a combined image comprises, during capture of a sequence comprising a plurality of initial images, detecting a subset of the plurality of images having no substantial motion among them, combining data from the subset to produce a combined image, and outputting the combined image.
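A minimal sketch of the no-motion detection and combining steps, assuming a mean-absolute-difference motion test and simple averaging as the combining operation; the threshold value is hypothetical.

```python
# Illustrative sketch: find the longest run of consecutive frames with no
# substantial motion between them, then average that subset.
import numpy as np

def no_motion_subset(frames, threshold=4.0):
    """Return the longest run of consecutive frames whose pairwise MAD stays below threshold."""
    best, start = (0, 0), 0
    for i in range(1, len(frames)):
        mad = np.mean(np.abs(frames[i].astype(np.float32) - frames[i - 1].astype(np.float32)))
        if mad > threshold:
            start = i
        if i - start > best[1] - best[0]:
            best = (start, i)
    return frames[best[0]:best[1] + 1]

def combine(frames):
    return np.mean(np.stack([f.astype(np.float32) for f in frames]), axis=0).astype(np.uint8)
```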
Abstract: A system for video compositing is comprised of a storage device for storing a composite timeline file. A timeline manager reads rendering instructions and compositing instructions from the stored file. A plurality of filter graphs, each receiving one of a plurality of video streams, renders frames therefrom in response to the rendering instructions. A uniform resource locator (URL) incorporator generates URL based content. Hardware is responsive to the rendered frames, URL based content, and compositing instructions for creating a composite image. A frame scheduler is responsive to the plurality of filter graphs for controlling a frequency at which the hardware creates a new composite image. An output is provided for displaying the composite image. Methods of generating a composite work and methods of generating the timeline file are also disclosed. Because of the rules governing abstracts, this Abstract should not be used to construe the claims.
Abstract: A BD-ROM stores a video stream and a graphics stream. The video stream represents a moving picture. The graphics stream is used for overlaying a multi-page menu on the moving picture, and includes interactive control information (ICS) and graphics data (ODS) used for generating the multi-page menu. A PTS attached to a PES packet containing the interactive control information shows timing for the first presentation of a main page of the multi-page menu in accordance with the proceeding of the video stream playback. In addition, the interactive control information includes information (selection_time_out_pts) showing a timeout upon which a button on a page of the multi-page menu is automatically activated, and information (user_time_out_duration) showing a timeout upon which a sub-page of the multi-page menu is automatically removed.
Type:
Grant
Filed:
October 28, 2009
Date of Patent:
July 16, 2013
Assignee:
Panasonic Corporation
Inventors:
Joseph McCrossan, Tomoyuki Okada, Masayuki Kozuka
Abstract: A display device includes a plurality of reception units receiving a plurality of content, a storage unit, a plurality of scaler units reducing data sizes of the plurality of content, storing the respective content with the reduced data sizes in the storage unit, and reading the respective content stored in the storage unit according to an output timing, a plurality of frame rate conversion units converting frame rates of the respective read content, and a video output unit combining and displaying the respective content output from the plurality of frame rate conversion units. Accordingly, the resources can be minimized.
Type:
Application
Filed:
September 13, 2012
Publication date:
July 4, 2013
Applicant:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Jin-ho CHOO, Tae-sung KIM, Hak-hun CHOI, Jung-min KIM, Hyeong-gil KIM, Choon-sik JUNG, Soon-jae CHO, Cheul-hee HAHM
Abstract: The systems and methods disclosed transmit a composite channel to a receiver. The composite channel may be a static channel that contains different original channels of content in different locations on a displayed page, or may be a dynamic channel that is processed by the receiver to display multiple different video streams on a single display device.
Type:
Grant
Filed:
July 7, 2010
Date of Patent:
May 28, 2013
Assignee:
EchoStar Technologies L.L.C.
Inventors:
Greg Goldey, Casey Manuel Paiz, Kerry Phillip Langloys Miller, John Card, II, David Christopher St. John-Larkin, Scott Higgins, Hugh Aaron Selway, Daniel Mark Overbaugh
Abstract: In a method for the video coding of image sequences, images in the image sequence are coded in a scaled manner, in such a way that the video data produced contains information which permits the images to be represented in a plurality of differing stages of image resolution, the latter being defined by the number of pixels per image representation. The coding is block-based, in such a way that to describe a displacement of parts of one of the images, said displacement being contained in the image sequence, at least one block structure that describes the displacement is created. Said block structure is configured from one block, which is subdivided into sub-blocks, whereby some of the sub-blocks are further subdivided into successively smaller sub-blocks. A first block structure is temporarily created for at least one first resolution stage and a second block structure is created for a second resolution stage, the first resolution stage having a lower number of pixels than the second resolution stage.
Type:
Grant
Filed:
July 27, 2005
Date of Patent:
April 23, 2013
Assignee:
Siemens Aktiengesellschaft
Inventors:
Peter Amon, Andreas Hutter, Benoit Timmermann
Abstract: A system for providing stitched video from a first camera and a second camera to an electronic display system includes a processing circuit configured to associate a view of a first camera with an approximate location. The processing circuit is further configured to build relationship data between the first camera and a second camera using the approximate location. The processing circuit is further configured to transform video from the first camera relative to video from the second camera, the transformation based on the relationship data. The processing circuit is further configured to use the transformed video to cause the stitched video to be provided to the electronic display system.
Abstract: A video processor and method for swiftly detecting swaps occurring between links during transmission of images by the dual-link system. A first image combiner unit combines an image D1 of a first link as an odd-numbered image, with an image of a second link, to generate a first combination image. A second image combiner unit combines an image of a second link as an odd-numbered image, with the image of the first link to generate a second combination image. An edge detector unit detects the horizontal edge of the first combination image and the second combination image. A judgment unit compares the number of triple edges in the first combination image and second combination image, and judges the combination image having more triple edges as the error image. The triple edges contain three consecutive edges along the horizontal direction, and the rising edges and falling edges are arrayed alternately.
Abstract: A method for displaying a video image includes acquiring foreground information about a video image to be output, where the foreground information includes information that defines a size of a foreground picture. The method further includes determining an adjustment coefficient for the foreground picture according to the size of the foreground picture, a size and a resolution of a display device, and a preset adjustment rule. The preset adjustment rule indicates that the product of the adjustment coefficient for the foreground picture and a zooming multiple for display on the display device is equal to a fixed constant. The method also includes adjusting the video image to be output according to the adjustment coefficient for the foreground picture, and outputting, to the display device for display, the video image after adjustment.
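The stated rule (adjustment coefficient multiplied by the display zooming multiple equals a fixed constant) can be illustrated with a one-line calculation; the constant value below is hypothetical.

```python
# Worked sketch of the preset adjustment rule: coefficient * zoom_multiple == constant,
# so the foreground picture keeps a fixed apparent size on the display.
def foreground_coefficient(zoom_multiple, constant=1.0):
    return constant / zoom_multiple

# e.g. if the display zooms the picture 2x, the foreground is pre-scaled by 0.5
assert foreground_coefficient(2.0) == 0.5
```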
Abstract: As information to be processed at an object-based video or audio-visual (AV) terminal, an object-oriented bitstream includes objects, composition information, and scene demarcation information. Such bitstream structure allows on-line editing, e.g. cut and paste, insertion/deletion, grouping, and special effects. In the interest of ease of editing, AV objects and their composition information are transmitted or accessed on separate logical channels (LCs). Objects which have a lifetime in the decoder beyond their initial presentation time are cached for reuse until a selected expiration time. The system includes a de-multiplexer, a controller which controls the operation of the AV terminal, input buffers, AV objects decoders, buffers for decoded data, a composer, a display, and an object cache.
Type:
Grant
Filed:
March 20, 2007
Date of Patent:
April 16, 2013
Assignee:
The Trustees of Columbia University in the City of New York
Abstract: A rendering control unit determines movie and graphic display modes with reference to a rendering processing command, and acquires maximum speed information indicating the maximum value of a read/write speed allowed for a memory. The rendering control unit decides a speed to be distributed to a read/write speed of the movie data and a speed to be distributed to a read/write speed of the graphic data with respect to the memory, of a maximum speed indicated by the maximum speed information, based on the determination result. The rendering control unit controls a read/write access of an image with respect to the memory based on the rendering processing command, in accordance with the decided speeds.
Abstract: Systems and methods for providing custom video mosaic pages are provided. The custom pages may be locally-generated, remotely-generated, or partially locally-generated and partially remotely-generated. The custom pages may include local content, such as content recorded to a digital video recorder (DVR), overlaid on a multi-video composite feed. A local compositing system may render the mosaic pages and dynamically customize the pages based on user profile data, user preferences, and active user monitoring.
Abstract: A video image transfer device includes a transfer section arranged to selectively transfer to a display device a plurality of video signals acquired from at least one image pickup device, an assigning section arranged to divide a refresh rate of the display device into portions and assign the portions among the plurality of video signals, and a transfer controller arranged to control the transfer section in such a manner that each of the video signals is transferred to the display device at a timing according to the portion of the refresh rate assigned to each of the video signals. This makes it possible to prevent a dropped frame and an insufficient resolution of an important video image.
Abstract: Disclosed are various embodiments of high dynamic range (HDR) video. In one embodiment a method includes obtaining first and second frames of a series of digital video frames, where the first and second frames have different exposure levels. The second frame is reregistered with respect to the first frame based at least in part upon motion estimation, where the motion estimation accounts for the different exposure levels of the first and second frames, and the first frame is combined with the reregistered second frame to generate an HDR frame. In another embodiment, a video device includes means for attenuating the exposure of a video frame captured by an image capture device and an HDR converter configured to combine a plurality of digital video frames to generate an HDR frame, where each digital video frame combined to generate the HDR frame has a different exposure level.
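A hedged sketch of combining two differently exposed frames, assuming grayscale inputs, a brute-force global translation search, and exposure compensation by a known exposure ratio; the search window and averaging rule are assumptions, not the patented pipeline.

```python
# Illustrative sketch: compensate the exposure ratio before registration,
# estimate a global shift, then merge the registered frames.
import numpy as np

def estimate_shift(ref, moving, max_shift=4):
    """Brute-force global translation estimate over a small search window."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps at the borders; acceptable for a small illustrative shift
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean(np.abs(ref - shifted))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def merge_hdr(frame_a, frame_b, exposure_ratio):
    """frame_b was captured with exposure_ratio times the exposure of frame_a."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32) / exposure_ratio   # account for the exposure difference
    dy, dx = estimate_shift(a, b)
    b_registered = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    return (a + b_registered) / 2.0                   # simple average in linear radiance
```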
Abstract: An apparatus which outputs an image using a plurality of chroma-key colors is provided. The apparatus includes a chroma-key-color-storage unit that stores multiple chroma-key colors; a microprocessor unit (MPU) that sets a block where the chroma-key color is applied using a pixel address of a foreground image; and a video controller that composes a background image and the foreground image using the block set by the MPU, and displays the composed image in the display unit.
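A minimal sketch of compositing with multiple chroma-key colors restricted to a designated block; the block coordinates and exact-match keying are illustrative assumptions.

```python
# Illustrative sketch: within the designated block, pixels matching any stored
# chroma-key color are replaced by the background.
import numpy as np

def compose(foreground, background, chroma_keys, block):
    """block = (y0, y1, x0, x1); chroma-keying is applied only inside that block."""
    out = foreground.copy()
    y0, y1, x0, x1 = block
    region = foreground[y0:y1, x0:x1]
    mask = np.zeros(region.shape[:2], dtype=bool)
    for key in chroma_keys:
        mask |= np.all(region == np.asarray(key, dtype=region.dtype), axis=-1)
    out[y0:y1, x0:x1][mask] = background[y0:y1, x0:x1][mask]
    return out
```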
Abstract: In one embodiment, a method includes identifying priority objects in a composite image created from one or more input video streams, processing the composite image, and generating a plurality of output video streams. The output video streams correspond to display screens available for viewing the output video streams and the number of output video streams is different than the number of input video streams. Processing the composite image includes positioning the priority objects to prevent placement of the priority objects at a location extending over two of the display screens. An apparatus is also disclosed.
Abstract: A method of processing on screen display data with an image post processor includes receiving a data stream from a video processor at a post processing device having at least one port, the data stream including on screen display data overlaid on a white background and the on screen display data overlaid on a black background, finding a difference between the on screen display data overlaid on white and the on screen display data overlaid on black, using the difference to determine a complement of an alpha blend value, performing image processing on the image data with the post processor by applying the complement of an alpha blend value to the image data to produce processed image data, and transmitting the processed image data and the on screen display data through a display port.
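The arithmetic implied above can be sketched directly: compositing the same OSD over white and over black backgrounds, the per-pixel difference equals (1 − alpha) × 255, i.e. the complement of the alpha blend value. The function names below are hypothetical.

```python
# Illustrative sketch of recovering the alpha complement and applying it in post-processing.
import numpy as np

def alpha_complement(osd_over_white, osd_over_black):
    diff = osd_over_white.astype(np.float32) - osd_over_black.astype(np.float32)
    return np.clip(diff / 255.0, 0.0, 1.0)            # (1 - alpha) per pixel

def post_process_blend(image, osd_over_black, osd_over_white):
    one_minus_alpha = alpha_complement(osd_over_white, osd_over_black)
    # osd_over_black already equals alpha * osd, so it can be added back directly
    out = one_minus_alpha * image.astype(np.float32) + osd_over_black
    return np.clip(out, 0, 255).astype(np.uint8)
```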
Abstract: Systems and methods provide drag-and-drop pasting for seamless image composition. In one implementation, a user casually outlines a region of a source image that contains a visual object to be pasted into a target image. An exemplary system automatically calculates a new boundary within this region, such that when pasted at this boundary, visual seams are minimized. The system applies a shortest path calculation to find the optimal pasting boundary. The best path has minimal color variation along its length, thus avoiding structure and visual objects in the target image and providing the best chance for seamlessness. Poisson image editing is applied across this optimized boundary to blend colors. When the visual object being pasted has fine structure at its border that could be truncated by the Poisson editing, the exemplary system integrates the alpha matte of the visual object into the Poisson equations to protect the fine structure.
Abstract: A splitter circuit means for use with a CATV network comprising: a first signal input for receiving a CATV signal; a first splitter for splitting the CATV signal into a first split signal and a second split signal; a second signal input for receiving a MoCA signal; a second splitter for splitting the MoCA signal into a third split signal and a fourth split signal; a first diplex filter arranged to lowpass filter the first split signal and highpass filter the third split signal and to combine said filtered signals into a first combined signal to be supplied in a first output; and a second diplex filter arranged to lowpass filter the second split signal and highpass filter the fourth split signal and to combine said filtered signals into a second combined signal to be supplied in a second output.
Abstract: Aspects of the invention are directed towards an apparatus and method for detecting local video pixels in mixed cadence video. The local video detector comprises a comb detector that is adaptive to the contour of moving objects and local contrast, a motion detector that is robust to false motion due to vertical details, and a fader value estimator that provides a video confidence value to a fader that combines film mode and video mode processing results. The coupling of the local video detector to a film mode detector increases the robustness, accuracy, and efficiency of local film/video mode processing as compared to the prior art.
Type:
Application
Filed:
June 30, 2011
Publication date:
January 3, 2013
Applicant:
STMicroelectronics Asia Pacific Pte Ltd.
Abstract: In a case of changing the resolution of a synthesized image signal obtained by synthesizing a first image signal and a second image signal, identification information for identifying whether or not a character signal is contained is added to the synthesized image signal based on presence/absence of the character signal, by a character signal creating unit and a character signal synthesizing unit. Then, based on the identification information, a resolution changing unit separates the synthesized image signal into a third image signal containing a component of the character signal and a fourth image signal containing no component of the character signal, changes the resolutions of the third image signal and the fourth image signal, and synthesizes the third image signal and the fourth image signal with the resolutions changed, based on the identification information.
Abstract: Disclosed is an image signal generating apparatus that includes a video information obtaining unit that obtains a plurality of video information, a characteristic information obtaining unit that obtains a plurality of predetermined characteristic information from each of the plurality of video information obtained by the video information obtaining unit, and a sorting unit that changes an order of displaying the plurality of the video information based on each of the plurality of characteristic information obtained from the characteristic information obtaining unit. The image signal generating apparatus further includes a display image signal generating unit that generates a video signal to display the plurality of video information based on information obtained, as a result of changing the order of displaying the plurality of the video information, from the sorting unit.
Type:
Grant
Filed:
June 10, 2008
Date of Patent:
December 25, 2012
Assignee:
Sony Corporation
Inventors:
Tetsujiro Kondo, Yoshinori Watanabe, Tsuyoshi Tanaka, Takuro Ema, Yusuke Akiwa
Abstract: A signal processing apparatus, comprising: an input section; a storage section; first and second signal processing sections; and a control section.
Abstract: Disclosed are a system and a method for computerized automatic placement of objects in media files in post-production. Embodiments of the present invention enable the automatic placement of objects which appear in a media file, such as a digital video file. According to one embodiment of the present invention, the disclosed system and method allow the replacement of a specific pattern which appears in a given video file with a new image, in a fully transparent manner. According to embodiments of the present invention, the makers of the media file place a designated pattern in the media file, such as a sticker on an object. Embodiments of the present invention enable replacing the designated pattern on the sticker with a new image.
Abstract: A mobile terminal including a plurality of cameras and a method of processing images acquired in a plurality of cameras is provided. The image processing method includes simultaneously operating a plurality of cameras, outputting a synchronous signal during an inactive time period and a data image signal during an active time period, wherein the active time period during which one camera of the plurality of cameras provides the data image signal occurs during the inactive time period of the other camera or cameras of the plurality of cameras.
Abstract: Various disclosed embodiments include systems and methods which allow two persons in different locations to enjoy a synchronized and shared viewing experience of original content that has been edited differently for broadcast in each location. Closed captioning text data is used to identify synchronization points in the broadcasts and to provide synchronization services between the two broadcasts.
Type:
Grant
Filed:
August 26, 2009
Date of Patent:
December 4, 2012
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Toshiro Ozawa, Dang Van Tran, Praveen Kashyap, Fei Xie
Abstract: An apparatus processes a video stream signal. The video stream signal represents a succession of frames, comprising a plurality of dependent frames that each provide for prediction of visual content using selectable ones of a preceding and following anchor frame that precede and follow the plurality of frames respectively. When a shortage of transmission bandwidth occurs, the apparatus generates a processed version of the video stream signal wherein a bit-rate alteration is performed starting from one of the dependent frames before an end of said plurality. Said alteration is executed by substituting a standard prediction from the following anchor frame at least for data in the video stream signal that encodes a region in said one of the dependent frames.
Abstract: To provide a technology for preferably achieving high resolution of a video signal or moving video with a small number of frames, a video signal processing apparatus comprises an input unit to which a plurality of video frames are input; a resolution converter unit, having resolution-converting characteristics that differ from each other depending on direction, which composes two of the input video frames, thereby increasing the number of pixels making up the video frames; and a mixer unit for obtaining the output video frame by mixing output results of the resolution converter unit.
Abstract: A device and a method for obtaining a clear image are provided; the method is executed by a digital signal processor (DSP) chip or a microprocessor. By merging clear parts of two images with different focal lengths, a single clear image is obtained. The image is divided into a plurality of blocks, and then edge detection is performed to obtain an edge image. Blocks having more complete edge information are selected as clear blocks. Then, the clear blocks are further merged into a single clear image. Once the images are processed with the method, the depth of field of the image can be adjusted without adding hardware elements to a digital camera, such as a variable diaphragm.
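A hedged sketch of the block-wise merge described above, assuming single-channel (grayscale) inputs of equal size and gradient energy as the edge-completeness score; the block size and scoring are assumptions, not the patented DSP routine.

```python
# Illustrative sketch: split both images into blocks, score each block by edge
# energy, and keep the sharper (more in-focus) block in the merged result.
import numpy as np

def edge_energy(block):
    gy, gx = np.gradient(block.astype(np.float32))
    return np.sum(gx * gx + gy * gy)

def merge_by_focus(img_a, img_b, block_size=16):
    out = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            a = img_a[y:y + block_size, x:x + block_size]
            b = img_b[y:y + block_size, x:x + block_size]
            if edge_energy(b) > edge_energy(a):
                out[y:y + block_size, x:x + block_size] = b
    return out
```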
Abstract: A video layer effects processing system which receives normal video and special effects information on separate layers is presented. The system selectively mixes various video layers to transmit a composite video signal for a video display such as a television or a virtual reality system. Special effects include spotlights, zooming, etc. Additional special effects such as shaping of objects and ghost effects are created by masking and superimposing selected video layers. The selective mixing, for example, to enable or disable, strengthen or weaken, or otherwise arrange special effects, can be directed from a remote source or locally by a user through real-time control or prior setup. The video layer effects processing system can also be incorporated into a set-top-box or a local consumer box.
Abstract: A graphics integrated circuit chip is used in a set-top box for controlling a television display. The graphics chip processes analog video input, digital video input, and graphics input. The chip includes a single polyphase filter that preferably provides both anti-flutter filtering and scaling of graphics. Anti-flutter filtering may help reduce display flicker due to the interlaced nature of television displays. The scaling of graphics may be used to convert the normally square pixel aspect ratio of graphics to the normally rectangular pixel aspect ratio of video.
Type:
Application
Filed:
April 23, 2012
Publication date:
October 25, 2012
Applicant:
BROADCOM CORPORATION
Inventors:
Alexander G. MacInnis, Chengfuh Jeffrey Tang, Xiaodong Xie, James T. Patterson, Greg A. Kranawetter
Abstract: An image processing device comprising an acquisition interface for acquiring recorded image data or recorded image signals, and a graphics interface for a display device, is constructed in such a way that a temporal sequence of recorded images can be acquired via the acquisition interface and an image data acquisition device connected to it, and a temporal sequence of display images can be generated from the recorded image sequence, preferably with a smaller number of display images over the period of time in which the recorded image sequence is acquired. A display image of the display image sequence is generated from a partial sequence of at least two already acquired recorded images of the recorded image sequence, this partial sequence being associated with that display image, and the display images can be sent to the display device via the graphics interface.
Abstract: A combined video image is created from a plurality of video images. Each video image has a plurality of video image components, and each video image component has an image component header. The image header is removed from each video image to be included in the combined video image, and a new image header is generated for the combined video image. The image component header of each video image component to be included in the combined video image is altered to set an image position for the video image component within the combined video image. The combined video image is generated by concatenating the new image header with the plurality of video images having no image headers and the video image components having the altered image component headers.
Abstract: A dynamic region, such as subtitles, is detected in a stream of digital video, and displayed along with a static region also in the stream, such as a video region, so that nearly all of the total vertical display area of a monitor displaying the dynamic and static regions is filled. For example, when the dynamic region is detected, the vertical size of the static region is adjusted to allow the vertical display of the dynamic and static region on the monitor simultaneously, without extending beyond or reducing to less than the total vertical display size of the monitor. Also, when the dynamic region is not detected, the vertical height of the static region is adjusted to fill the total vertical display size. Moreover, iterative increase and decrease in the vertical sizes of the regions may allow for a more pleasant viewer experience.
Abstract: It is possible to display a caption with an aspect ratio independent from the aspect ratio of a main video. When a flag indicating that the aspect ratio of the caption is 16:9 is set, the caption video image frame size (720×480) is converted so as to match the aspect ratio of 16:9 and the caption video obtained as the result is superimposed on the main video and displayed. That is, when the main video has an aspect ratio of 4:3, as shown in FIG. 19, reduction in the lateral direction is performed and the main video is displayed with addition of black tone at the right and left but the caption video is displayed with the aspect ratio of 16:9.
Type:
Grant
Filed:
November 30, 2005
Date of Patent:
October 2, 2012
Assignees:
Sony Corporation, Sony Computer Entertainment Inc.
Abstract: In an image processing apparatus, an image processor includes: a matrix switcher, wherein intersecting input and output lines are connected by crosspoint switches; a signal processing block connected to output lines on the upstream side, and input lines on the downstream side by reentry paths; and an output block connected to output lines on the upstream side. An external reentry settings unit sets a first and second port of the matrix switcher as external reentry output ports, the first port being a matrix switcher output port, and the second port being a matrix switcher input port. A reentry stage information generator generates reentry stage information, which indicates the stage of the internal signal processing path where a special function unit is logically positioned, and wherein the special function unit corresponds to the external reentry output ports of the signal processing block and the output block.
Abstract: A method for merging first and second images includes determining a pixel difference image from the first and the second images, determining first and second locations of the foreground subject from the pixel difference image, determining a minimum path of values from the pixel difference image for a region between the first and the second locations of the foreground subject, forming a merged image by stitching the first and the second images along the minimum path, and adjusting pixels of the merged image within a width of the minimum path.
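A hedged sketch of the minimum-path stitching step, assuming grayscale inputs and a vertical seam found by dynamic programming over the pixel-difference image; locating the foreground subject and limiting the path to the region between its two locations are omitted for brevity.

```python
# Illustrative sketch: compute a pixel-difference image, find the vertical path of
# minimal cumulative difference, and stitch the two images along that path.
import numpy as np

def minimum_path(diff):
    """Vertical path of minimal cumulative difference, one column index per row."""
    cost = diff.astype(np.float32).copy()
    for y in range(1, cost.shape[0]):
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    path = np.zeros(cost.shape[0], dtype=int)
    path[-1] = int(np.argmin(cost[-1]))
    for y in range(cost.shape[0] - 2, -1, -1):
        lo = max(path[y + 1] - 1, 0)
        hi = min(path[y + 1] + 2, cost.shape[1])
        path[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return path

def stitch(img_a, img_b):
    diff = np.abs(img_a.astype(np.float32) - img_b.astype(np.float32))
    path = minimum_path(diff)
    out = img_a.copy()
    for y, x in enumerate(path):
        out[y, x:] = img_b[y, x:]          # take img_b to the right of the seam
    return out
```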
Abstract: The present invention provides a true-scale, coordinate-matched, linked in real-time, dual three-dimensional/two-dimensional visual display/viewer. The display simultaneously shows a 3D digital image and an associated 2D digital image of a selected drawing. The display of the present invention allows a user to visualize an asset's location, surrounding environment and hazards and true scale structural details for interior or external structural scenes. Using the display and associated tools, the user can obtain real-time information of an environment, true-scale measurement, plan ingress/egress paths, shortest paths between points and the number of doorways in a structure and track objects within the displayed environment. The intelligence gained using the tools and 3D/2D display may be used and further manipulated by a single user or may be distributed to other users.
Abstract: A system for providing stitched video from a first camera and a second camera to an electronic display system includes a processing circuit configured to associate a view of a first camera with an approximate location. The processing circuit is further configured to build relationship data between the first camera and a second camera using the approximate location. The processing circuit is further configured to transform video from the first camera relative to video from the second camera, the transformation based on the relationship data. The processing circuit is further configured to use the transformed video to cause the stitched video to be provided to the electronic display system.
Abstract: A method of raster-scan search for multi-region OSD and a system using the same are provided. The multi-region OSD is to be displayed on a screen after an alpha-blending of a mixer. The method includes at least the following procedures. First, a global header search is executed in a first memory module for each or a portion of a plurality of search lines so as to determine a blending region and store header addresses of OSD regions in a second memory module. Next, whether there is a dummy region at the search line is determined. In addition, an alpha value for the dummy region, a dummy data of the dummy region and the OSD data of the OSD regions at the search line are transmitted to the mixer.
Abstract: An imaging apparatus that can output images with a plurality of resolutions is provided. The imaging apparatus comprises an imaging section (101) for imaging a subject using an imaging device (103) to generate an image signal and a resolution conversion section (110) for converting a resolution of the image signal captured by the imaging section (101) and outputting it. The resolution conversion section (110) reduces the image inputted from the imaging section (101) to a plurality of images of different sizes and outputs an embedded image in which the size-reduced images of different sizes are embedded in the input image size from the imaging section (101). Thereby, images of a plurality of image sizes can be outputted at the same time.
Abstract: The invention includes a system and the associated method for decoding multiple video signals. The video signals may be component video, composite video or s-video signals each having multiple portions using a multimode video decoder. A selection stage may combine the multiple video signals and select some of their video signal portions for processing. The selection stage may time-multiplex some of the video signal portions. An analog to digital conversion stage may be shared by the time-multiplexing of the video signals. A decoder stage may decode the various signal portions and provide decoded output video signals. These features may reduce the overall cost of the system. Various clock signals may be used to operate various stages of a multimode video decoder. Some of the clock signals may run at different frequencies and others may operate at a different phase.
Type:
Grant
Filed:
April 17, 2007
Date of Patent:
September 11, 2012
Assignee:
Marvell World Trade Ltd.
Inventors:
Sanjay Garg, Bipasha Ghosh, Nikhil Balram, Kaip Sridhar, Shilpi Sahu, Richard Taylor, Gwyn Edwards, Loren Tomasi, Vipin Namboodiri