Abstract: A processing device includes a signal processing unit configured to: input first and second pixel signals from an image sensor having pixels for receiving light from a subject illuminated with pulsed light and generating a pixel signal, the first pixel signal being a one-frame signal read at read timing at least a part of which is included in an illumination period of the pulsed light, the second pixel signal being a one-frame signal read after the one frame of the first pixel signal; and generate a one-frame third pixel signal by synthesizing first and second overlap pixel signals, the first overlap pixel signal being defined as the first pixel signal corresponding to an overlap line of the pixels in which the illumination period of the pulsed light is overlapped with the read timing, the second overlap pixel signal being defined as the second pixel signal corresponding to the overlap line.
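The synthesis step described above can be sketched minimally in Python, assuming per-line pixel signals are lists of numbers and a simple additive model for combining the first and second overlap pixel signals (the abstract does not specify the synthesis operation, so the line-sum is an illustrative assumption):

```python
def synthesize_overlap(first_frame, second_frame, overlap_lines):
    """Combine two successive frames on the lines where the pulsed-light
    illumination period overlapped the read timing.

    first_frame, second_frame: lists of rows (lists of pixel values).
    overlap_lines: set of row indices that are overlap lines.
    Hypothetical model: overlap lines are synthesized by summing the two
    overlap pixel signals; other lines are taken from the first frame.
    """
    third = []
    for i, (a, b) in enumerate(zip(first_frame, second_frame)):
        if i in overlap_lines:
            # synthesize the third pixel signal from both overlap signals
            third.append([x + y for x, y in zip(a, b)])
        else:
            third.append(list(a))
    return third
```

A usage example: with `first_frame = [[1, 1], [2, 2]]`, `second_frame = [[3, 3], [4, 4]]`, and line 1 as the overlap line, only that line is summed across frames.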
Abstract: In an intra-picture, a predetermined region is set as a normal encoding region, and a region other than the predetermined region is set as a simplified encoding region. In a subsequent picture, the normal encoding region is set as a larger region including a normal encoding region of a previous picture, and a region other than the normal encoding region is set as a simplified encoding region. In each picture, normal encoding is performed on a block of the normal encoding region, and simplified encoding in which a generated code amount and a computation amount are smaller than in the normal encoding is performed on a block of the simplified encoding region.
January 22, 2013
Date of Patent: August 14, 2018
Nippon Telegraph And Telephone Corporation
Masaki Kitahara, Atsushi Shimizu, Ken Nakamura, Seisuke Kyochi, Naoki Ono
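The region-growing encoding scheme in the abstract above can be sketched as a hypothetical Python model; the rectangle coordinates, block grid, and per-picture growth amount are illustrative assumptions, not details from the patent:

```python
def plan_encoding_regions(width, height, initial_region, num_pictures, grow):
    """Per picture, tag each block as "normal" or "simplified".

    initial_region: (x0, y0, x1, y1) normal-encoding rectangle, in blocks.
    grow: hypothetical per-picture expansion of the normal region, in blocks.
    Blocks outside the normal region get the simplified (cheaper) encoding.
    """
    x0, y0, x1, y1 = initial_region
    plan = []
    for _ in range(num_pictures):
        tags = [["normal" if x0 <= x <= x1 and y0 <= y <= y1 else "simplified"
                 for x in range(width)] for y in range(height)]
        plan.append(tags)
        # next picture: normal region is a larger region including this one
        x0, y0 = max(0, x0 - grow), max(0, y0 - grow)
        x1, y1 = min(width - 1, x1 + grow), min(height - 1, y1 + grow)
    return plan
```

Over successive pictures the normal-encoding region expands until it covers the picture, matching the abstract's description of each picture's normal region including the previous picture's.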
Abstract: A mobile terminal including a main body; a camera mounted in the main body and configured to capture an external environment; a display unit configured to display an image obtained by the camera in real time; a sensing unit configured to sense a motion of the main body while the camera is activated; a memory configured to store therein the image displayed on the display unit in response to a capture control command being applied; and a controller configured to generate a processed image formed by continuous images displayed on the display unit when the motion of the main body forms a continuous virtual track while displaying the image.
Abstract: A trailer monitoring system is provided herein. The system includes an imager configured to image a scene rearward of a vehicle and containing a target disposed on a trailer attached to the vehicle. The imager is configured to separate the images into a first portion and a second portion. A display is configured to display the first portion of images. A controller is configured to analyze the second portion of images, adjust an image capture setting of the imager based on a status input, and modify each image in the second portion to increase the size of the imaged target relative to the total size of the captured image in order to determine at least one piece of trailer-related information.
Abstract: A method including: obtaining a first plurality of decode-independent segments corresponding to an original video bitstream; re-encoding each one of the first plurality of decode-independent segments according to a quality criterion, giving rise to a second plurality of decode-independent segments; and generating an output video bitstream including a third plurality of decode-independent segments, wherein each segment in the third plurality of decode-independent segments is selected from either the first plurality of decode-independent segments or from the second plurality of decode-independent segments using a selection criterion that is related at least to a segment's bit-rate.
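One way the per-segment selection in the abstract above might look, assuming the criterion is simply "keep the lower bit-rate version" (the abstract only says the criterion relates at least to a segment's bit-rate, so this rule is an assumption):

```python
def select_segments(original_segments, reencoded_segments):
    """For each position, choose the original or re-encoded segment,
    whichever has the lower bit-rate (hypothetical selection criterion).

    Each segment is a dict with at least a "bitrate" key; the two lists
    are aligned segment-for-segment.
    """
    return [min(orig, reenc, key=lambda seg: seg["bitrate"])
            for orig, reenc in zip(original_segments, reencoded_segments)]
```

Because each segment is decode-independent, the output bitstream can mix versions freely without breaking decoding across segment boundaries.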
Abstract: An image coding method includes: adding, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used for coding the current motion vector; selecting the predicted motion vector from the candidate list; and coding the current motion vector, wherein in the adding, the first adjacent motion vector indicating a position in a first reference picture included in a first reference picture list is added to the candidate list for the current motion vector indicating a position in a second reference picture included in a second reference picture list.
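A toy sketch of the predictor selection and coding steps above (not the standard's actual derivation rules; the closest-candidate heuristic and the tuple representation of motion vectors are assumptions for illustration):

```python
def code_motion_vector(current_mv, neighbor_mvs):
    """Build the candidate list from adjacent motion vectors, pick the
    closest predictor, and code the current MV as (index, difference).

    Per the abstract, a neighbor's MV is added to the candidate list even
    when it indexes into the other reference picture list, so all
    neighbors are appended here without filtering by list.
    """
    candidates = list(neighbor_mvs)

    def dist(cand):
        # L1 distance as a hypothetical "best predictor" measure
        return abs(cand[0] - current_mv[0]) + abs(cand[1] - current_mv[1])

    idx = min(range(len(candidates)), key=lambda i: dist(candidates[i]))
    pred = candidates[idx]
    diff = (current_mv[0] - pred[0], current_mv[1] - pred[1])
    return idx, diff
```

The decoder would recover the current MV by adding the coded difference back to the candidate at the signaled index.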
Abstract: Systems and methods are provided in which eyewear with time shared viewing is capable of supporting delivery of differing content to multiple viewers/users. The content that is delivered to each viewer includes a respective frame sequence that is displayed on a screen. The frame sequences are mixed when they are displayed on the screen. A lens assembly may be used by each viewer to view the frame sequence that is delivered to that viewer. For instance, a first lens assembly may pass a first frame sequence but not frame sequences other than the first frame sequence. A second lens assembly may pass a second frame sequence but not frame sequences other than the second frame sequence, and so on. The content that is delivered to a viewer may depend on a maturity of the viewer (e.g., whether the viewer's maturity is less than a maturity threshold).
December 30, 2010
Date of Patent: May 22, 2018
Avago Technologies General IP (Singapore) Pte. Ltd.
James D. Bennett, Nambirajan Seshadri, Jeyhan Karaoguz, Adil Jagmag
Abstract: An imaging apparatus is capable of preventing unintended shooting setting when a through-image is displayed at a zoom position different from a field angle for actual shooting. The imaging apparatus includes a recording control unit configured to record a zoom position taken before a start of a function for temporarily changing the zoom position as a first position, a zoom control unit configured to perform control to move the zoom position from the first position to a second position when the function is started, and from the second position to the first position when the function is ended, and a control unit configured to perform control, when the zoom position is at the second position by the function, not to make any changes according to an instruction for changing specific shooting setting.
Abstract: A moving picture coding method includes: performing context adaptive binary arithmetic coding in which a variable probability value is used, on first information among multiple types of sample adaptive offset (SAO) information used for SAO that is a process of assigning an offset value to a pixel value of a pixel included in an image generated by coding the input image; and continuously performing bypass arithmetic coding in which a fixed probability value is used, on second information and third information among the multiple types of the SAO information, wherein the coded second and third information are placed after the coded first information in the bit stream.
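The bit-stream placement described above can be sketched as a coding plan; the arithmetic coding itself is omitted, and only the context-versus-bypass ordering stated in the abstract is modeled:

```python
def order_sao_syntax(sao_info):
    """Return the coding plan for the SAO syntax elements.

    sao_info: dict with "first", "second", "third" SAO information values
    (hypothetical keys). The first information is context-coded; the
    second and third are bypass-coded continuously after it, so a decoder
    can switch to bypass mode once and stay there.
    """
    return [("context", sao_info["first"]),
            ("bypass", sao_info["second"]),
            ("bypass", sao_info["third"])]
```

Grouping the bypass-coded elements back to back is what allows the "continuously performing bypass arithmetic coding" in the abstract: no mode switches are needed between the second and third information.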
Abstract: A display apparatus includes a display panel configured to display an image frame that includes a right-eye image and a left-eye image, and a touch panel configured to sense a user touch, wherein the touch panel includes a polarizing switch panel configured to switch a direction of polarization of light emitted from the display panel, and a parallax realization layer which is formed on one side of the polarizing switch panel and is configured to provide a binocular disparity image by using light emitted from the polarizing switch panel.
Abstract: A dump truck having a periphery monitoring apparatus is adapted to display an image of surroundings of the dump truck on a display apparatus. A rearward camera is attached to a vehicle body of the dump truck underneath a vessel of the dump truck and positioned forward of a rear edge portion of the vessel. A rearward camera image captured by the rearward camera includes the rear edge portion of the vessel, an area below the vessel, and an area rearward of the vehicle body. A display control section is configured to display the rear edge portion of the vessel in an upper region of the rearward camera image, to display a ground surface below the vessel in a lower region of the rearward camera image, and to display a vehicle body outer edge line obtained by vertically projecting an outer edge of the vessel onto the ground surface.
Abstract: An image decoding device includes an illumination compensation flag derivation section that derives an illumination compensation flag indicating whether illumination compensation is executed and an illumination compensation section that generates a predicted image of a target prediction unit using illumination change parameters derived from an image in a neighbor of the target prediction unit on a target image and a reference region image of a reference image corresponding to the target prediction unit in a case where the illumination compensation flag is a value indicating validity. In a case where a prediction unit which is a generation target of the predicted image is in a merge mode, the illumination compensation flag derivation section decodes the illumination compensation flag from coded data.
Abstract: An image coding method includes: generating a predicted block; calculating a residual block; calculating quantized coefficients by performing transform and quantization on the residual block; calculating a coded residual block by performing inverse quantization and inverse transform on the quantized coefficients; generating a temporary coded block; determining whether or not an offset process is required, to generate first flag information indicating a result of the determination; executing the offset process on the temporary coded block when it is determined that the offset process is required; and performing variable-length coding on the quantized coefficients and the first flag information.
Abstract: An apparatus for decoding a current block from an encoded bitstream includes a memory and a processor. The processor is configured to execute instructions stored in the memory to decode, from the encoded bitstream, a prediction mode of the current block and decode the current block using a transform type selected from a set that includes only a symmetrical discrete sine transform (SDST) and a two-dimensional discrete cosine transform (2D DCT). If the prediction mode is an inter prediction mode, the transform type used is the SDST. If the prediction mode is an intra prediction mode, the transform type used is the 2D DCT.
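The transform selection rule stated in the abstract above reduces to a single branch on the decoded prediction mode; a direct sketch (the string mode names are illustrative, not the bitstream's actual encoding):

```python
def select_transform(prediction_mode):
    """Pick the transform type for the current block per the stated rule:
    inter-predicted blocks use the symmetrical discrete sine transform,
    intra-predicted blocks use the two-dimensional discrete cosine
    transform. The set contains only these two types.
    """
    if prediction_mode == "inter":
        return "SDST"
    if prediction_mode == "intra":
        return "2D_DCT"
    raise ValueError("unknown prediction mode: %r" % (prediction_mode,))
```

Because the transform type is fully determined by the prediction mode, no extra transform-type syntax needs to be signaled in the bitstream for this scheme.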
Abstract: Disclosed herein is an imaging device including: a lens case on which a lens is installed; a columnar member having a central axis coincidental with a rotational axis of the lens case and formed on a lateral surface of the lens case, the rotational axis extending in a tilt direction being an in-plane direction perpendicular to a horizontal direction; a base supporting the lens case so as to be turnable in the tilt direction; and a plate spring secured to the base and pressing a circumferential surface of the columnar member toward the base by elastic force.
Abstract: One image processing method has at least the following steps: receiving an image input in a device, wherein the image input is composed of at least one source image; receiving image selection information; regarding a source image included in the image input, checking the image selection information to determine whether the source image is selected or skipped; and performing an object oriented image processing operation upon each selected source image. Another image processing method has at least the following steps: receiving an image input in a device, wherein the image input is composed of at least one source image; receiving algorithm selection information; and regarding a source image included in the image input, checking the algorithm selection information to determine a selected image processing algorithm from a plurality of different image processing algorithms, and performing an image processing operation upon the source image based on the selected image processing algorithm.
September 15, 2013
Date of Patent: March 27, 2018
Cheng-Tsai Ho, Ding-Yun Chen, Chi-Cheng Ju
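The two image processing methods in the abstract above (image selection and algorithm selection) can be combined in one hypothetical dispatch sketch; the dictionary-based selection information and the algorithm names are assumptions for illustration:

```python
def process_images(images, selected_ids, algorithms, algorithm_choice):
    """Process an image input composed of one or more source images.

    images: list of (image_id, image) pairs making up the image input.
    selected_ids: image selection information; images not listed are skipped.
    algorithms: mapping from algorithm name to a callable.
    algorithm_choice: algorithm selection information, mapping each image
    id to the name of the algorithm to apply to it.
    """
    results = []
    for img_id, img in images:
        if img_id not in selected_ids:
            continue  # selection information says: skip this source image
        algo = algorithms[algorithm_choice[img_id]]
        results.append((img_id, algo(img)))
    return results
```

This mirrors the abstract's two checks: first whether a source image is selected or skipped, then which of a plurality of different image processing algorithms to run on it.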
Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. Via the moving light pattern captured over a set of frames, e.g., by a stereo camera system, and by estimating light intensity at sub-pixel locations in each stereo frame, depth information may be computed at a sub-pixel level, i.e., at a resolution higher than the native camera resolution.
Abstract: A method for generating a 360 degree view. N input images are captured from N cameras fixed at a baseline height equidistantly about a circle in an omnipolar camera setup where N≥3. Two epipolar lines are defined per field of view from the cameras to divide the field of view of each one of the cameras into four parts. Image portions from each one of the cameras are stitched together along vertical stitching planes passing through the epipolar lines. Regions corresponding to a visible camera are removed from the image portions using deviations performed along the epipolar lines. Output images are formed for left and right eye omnistereo views by stitching together the first one of the four parts from each one of the fields of view and the second one of the four parts from each one of the fields of view, respectively.
Abstract: A method for decoding an image is discussed. The method can include generating a residual block; reconstructing an intra prediction mode group indicator and a prediction mode index of a prediction unit; constructing a first group including three intra prediction modes using available intra prediction modes of left and top blocks of the prediction unit; determining the intra prediction mode corresponding to the prediction mode index in the first group as the intra prediction mode of the prediction unit when the intra prediction mode group indicator indicates the first group; generating a prediction block on the basis of the determined intra prediction mode of the prediction unit; generating a reconstructed block using the residual block and the prediction block; and performing deblocking filtering on a reconstructed picture.
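The first-group construction step in the abstract above can be sketched hypothetically; the actual derivation rules in video coding standards are more detailed, and the default modes below are placeholders, not values from the patent:

```python
def build_first_group(left_mode, top_mode):
    """Construct the three-mode first group from the intra prediction
    modes of the left and top blocks of the prediction unit.

    An unavailable block's mode is passed as None. Distinct available
    modes are taken first, then the list is padded with placeholder
    default modes (0 = planar, 1 = DC, 26 = vertical are assumptions).
    """
    DEFAULTS = [0, 1, 26]
    group = []
    for mode in (left_mode, top_mode, *DEFAULTS):
        if mode is not None and mode not in group:
            group.append(mode)
        if len(group) == 3:
            break
    return group
```

When the intra prediction mode group indicator signals the first group, the decoder then uses the prediction mode index to pick one of these three modes as the prediction unit's intra prediction mode.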