Abstract: Provided is a method for a plurality of processing elements to filter a plurality of pixel blocks in a plurality of picture partitions for a single frame image. The method for filtering pixel blocks includes: checking the status of a second boundary pixel block adjacent to a picture partition boundary, the second boundary pixel block being one of a plurality of pixel blocks in a second picture partition and neighboring a first boundary pixel block in a first picture partition, the first boundary pixel block neighboring the picture partition boundary; selecting a filtering area for the first boundary pixel block based on the status of the second boundary pixel block; and filtering the filtering area for the first boundary pixel block.
Type:
Grant
Filed:
May 23, 2014
Date of Patent:
November 29, 2016
Assignee:
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors:
Seunghyun Cho, Hyun Mi Kim, Kyung Jin Byun, Nak Woong Eum
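The filtering-area selection described in the abstract above can be illustrated with a small sketch. All names and the (this side, far side) width convention are assumptions for illustration, not the patented implementation:

```python
def select_filtering_area(second_block_ready, overlap=4):
    """Pick how many pixel columns on each side of the picture
    partition boundary the deblocking filter may touch.

    second_block_ready: whether the neighbouring boundary block in
    the second partition has already been processed and is available.
    Returns (pixels on this side, pixels on the far side).
    """
    if second_block_ready:
        # Neighbour across the boundary is available: filter across it.
        return (overlap, overlap)
    # Neighbour not yet available: restrict filtering to this side only.
    return (overlap, 0)
```

A processing element would call this per boundary block, so partitions can be filtered in parallel without waiting on each other unless the neighbour is ready.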
Abstract: A video coding device, such as a video decoder, may be configured to derive at least one of a coded picture buffer (CPB) arrival time and a CPB nominal removal time for an access unit (AU) at both an access unit level and a sub-picture level regardless of a value of a syntax element that defines whether a decoding unit (DU) is the entire AU. The video coding device may further be configured to determine a removal time of the AU based at least in part on one of the CPB arrival time and a CPB nominal removal time and decode video data of the AU based at least in part on the removal time.
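As a rough illustration of the CPB timing arithmetic involved, the sketch below computes a nominal removal time in a heavily simplified hypothetical-reference-decoder model; the function name and parameter shapes are assumptions, and real HRD timing involves considerably more state:

```python
def au_nominal_removal_time(first_arrival, initial_cpb_removal_delay,
                            cpb_removal_delay, clock_tick):
    """Simplified nominal CPB removal time for an access unit.

    The first AU is removed initial_cpb_removal_delay clock ticks
    after its first bit arrives; each later AU is offset from that
    first removal time by its signalled cpb_removal_delay (in ticks).
    """
    first_removal = first_arrival + initial_cpb_removal_delay * clock_tick
    return first_removal + cpb_removal_delay * clock_tick
```

With a 90 kHz clock, an initial delay of 9000 ticks puts the first removal 100 ms after arrival; the same formula could be evaluated per DU at the sub-picture level.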
Abstract: A method is described for generating an image view from at least one input image for a 3D display, using backward processing that enables post-processing to handle holes in the image view.
Type:
Grant
Filed:
December 21, 2011
Date of Patent:
November 15, 2016
Assignee:
ST-ERICSSON SA
Inventors:
Laurent Pasquier, Yves Mathieu, Jean Gobert
Abstract: A viewing system includes a detector of electromagnetic radiation (EMR); and a control system connected to receive signals from the detector, and configured to identify an image locator, disposed within a field of view of the detector in use, and select for detection by the detector the image locator or an image located by the image locator. A viewing method includes detecting electromagnetic radiation from an image locator disposed within a field of view of a detector; identifying the image locator with a control system; and displaying the image locator or an image located by the image locator on one or more displays.
Abstract: A video coding device, such as a video decoder, may be configured to decode a buffering period supplemental enhancement information (SEI) message associated with an access unit (AU). The video decoder is further configured to decode a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in the AU and CPB removal time of a second DU from the buffering period SEI message, wherein the AU has a TemporalId equal to 0. The video decoder is configured to determine a removal time of the first DU based at least in part on the decoded duration and decode video data of the first DU based at least in part on the removal time.
Abstract: A method is provided for a surveillance system including a remote sensor apparatus and a portable receiver. The method includes receiving, at the portable receiver, image data collected simultaneously from a plurality of image sensors installed in the remote sensor apparatus. The method also includes processing, at the portable receiver, the received image data by, for each of the first plurality of captured images, detecting image features and performing feature matches by matching control points across neighboring images based on a control-point assumption that the control points are relevant only in areas of field of view overlap. The areas of field of view are determined by positions of the plurality of image sensors and a field of view of a wide-angle lens fitted to each of the plurality of image sensors. The method further includes generating a panoramic view by blending the processed first plurality of captured images.
Type:
Grant
Filed:
March 13, 2013
Date of Patent:
October 25, 2016
Assignee:
BOUNCE IMAGING, INC.
Inventors:
Francisco Aguilar, Pablo Alvarado-Moya, Mikhail Fridberg
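The overlap restriction described in the abstract above, by which control points matter only where sensor fields of view overlap, can be sketched geometrically. The function below is an illustrative assumption (a horizontal-angle-only model), not the patented computation:

```python
def angular_overlap(yaw_a, yaw_b, fov):
    """Angular overlap, in degrees, between two image sensors with the
    same horizontal field of view, pointing at headings yaw_a and yaw_b.

    Feature matching between the two sensors' images would only be
    attempted when this overlap is positive.
    """
    # Smallest angular separation between the two headings, handling
    # wrap-around at 360 degrees.
    sep = abs((yaw_a - yaw_b + 180) % 360 - 180)
    return max(0.0, fov - sep)
```

For a throwable multi-sensor ball, sensor positions are fixed by the housing, so these overlap regions can be precomputed once and reused for every captured frame.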
Abstract: An image processing apparatus synthesizes a synthesized image by taking pixel values of pixels in the synthesized image corresponding to each point on the three-dimensional projection plane, as viewed from a specific viewpoint position, as pixel values of corresponding pixels of the first image based on the first correspondence relationship, and taking pixel values of each pixel in the synthesized image corresponding to pixels identified in the first image as representing a solid object, as pixel values of corresponding pixels of the second image based on the second correspondence relationship.
Abstract: A method for performing point-to-point measurements includes (i) determining a distance to a first point and obtaining an image at a first pose, and (ii) determining a distance to a second point and obtaining an image at a second pose. The images have an overlapping portion. A change in pose between the first pose and the second pose is determined using observed changes between common features in the overlapping portion of the images and a scale associated with the images. A distance between the first point and the second point is determined based on the first distance, the second distance, and the change in pose between the first pose and the second pose.
Type:
Grant
Filed:
November 12, 2013
Date of Patent:
October 18, 2016
Assignee:
Trimble Navigation Limited
Inventors:
Kurtis Maynard, Gregory C. Best, Robert Hanks, Hongbo Teng
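The point-to-point computation described above reduces to placing both measured points in a common frame and taking the Euclidean distance. The 2-D sketch below is an illustrative simplification (the patented method recovers the pose change from image features; here the pose change is given directly):

```python
import math

def point_to_point(d1, d2, pose_rotation, pose_translation):
    """Distance between two measured points, 2-D sketch.

    d1: range to point 1 along the instrument axis at pose 1.
    d2: range to point 2 along the instrument axis at pose 2.
    pose_rotation: heading change (radians) from pose 1 to pose 2.
    pose_translation: (x, y) of the pose-2 origin in pose-1 coordinates.
    """
    p1 = (d1, 0.0)  # point 1 lies on the pose-1 instrument axis
    tx, ty = pose_translation
    # Point 2 lies d2 along the rotated instrument axis, offset by the
    # translation between the two poses.
    p2 = (tx + d2 * math.cos(pose_rotation),
          ty + d2 * math.sin(pose_rotation))
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])
```

In the patented setup, `pose_rotation` and `pose_translation` would come from matched features in the overlapping image portions, scaled by the known image scale.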
Abstract: A method and an apparatus of encoding/decoding intra prediction mode using a plurality of candidate intra prediction modes are disclosed. A method of decoding an intra prediction mode can comprise deriving three candidate intra prediction modes about a current block and deriving an intra prediction mode of the current block. Therefore, by predicting an intra prediction mode of the current block based on a plurality of candidate intra prediction modes, video encoding efficiency can be improved.
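Deriving three candidate intra prediction modes from neighbouring blocks can be sketched in the style of the HEVC most-probable-mode (MPM) rules; the abstract does not specify this exact derivation, so treat the rules below as an illustrative assumption:

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # HEVC-style mode indices

def candidate_intra_modes(left, above):
    """Three candidate intra modes from the left and above neighbours."""
    if left == above:
        if left < 2:  # both Planar or DC: use fixed defaults
            return [PLANAR, DC, VERTICAL]
        # Both angular: that mode plus its two nearest angular modes
        # (indices 2..34 wrap around within the angular range).
        return [left, 2 + (left - 2 - 1) % 32, 2 + (left - 2 + 1) % 32]
    # Distinct neighbours: both, plus the first default not yet used.
    cands = [left, above]
    for m in (PLANAR, DC, VERTICAL):
        if m not in cands:
            cands.append(m)
            break
    return cands
```

The decoder then signals either an index into this short list or, with more bits, one of the remaining modes, which is where the coding-efficiency gain comes from.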
Abstract: Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display by detecting depth values occurring in the 3D image data and setting auxiliary depth values for the auxiliary graphical data adaptively in dependence of the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First an area of attention in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence of the depth pattern.
Type:
Grant
Filed:
February 9, 2010
Date of Patent:
September 6, 2016
Assignee:
Koninklijke Philips N.V.
Inventors:
Philip Steven Newton, Gerardus Wilhelmus Theodorus Van Der Heijden, Wiebe De Haan, Johan Cornelis Talstra, Wilhelmus Hendrikus Alfonsus Bruls, Georgios Parlantzas, Marc Helbing, Christian Benien, Vasanth Philomin, Christiaan Varekamp
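Setting the auxiliary depth from a detected depth pattern and then combining by depth can be shown with a minimal sketch. The convention (smaller value = closer to the viewer) and the `margin` parameter are assumptions for illustration:

```python
def auxiliary_depth(attention_depths, margin=1):
    """Place auxiliary graphics (e.g. subtitles) just in front of the
    nearest depth found in the detected area of attention."""
    return min(attention_depths) - margin

def compose_pixel(image_px, image_depth, aux_px, aux_depth):
    """Depth-based combination: the closer element wins at each pixel."""
    return aux_px if aux_depth < image_depth else image_px
```

Adapting `auxiliary_depth` per scene keeps subtitles from colliding with foreground objects while avoiding placing them needlessly far in front of the screen plane.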
Abstract: A disparity calculating device 100 obtains a base image from a first camera 1 and a reference image from a second camera 2. An image dividing unit 101 divides each of the base image and the reference image into multiple regions in accordance with dividing conditions set by a condition setting unit 102. A correction amount determining unit 103 calculates an image shift amount for each of the divided regions and determines a correction amount based on those image shift amounts. A disparity calculating unit 104 corrects each region of the divided base and reference images obtained from the image dividing unit 101, based on the correction amount determined by the correction amount determining unit 103, and obtains disparity from the corrected images.
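The disparity calculation at the core of the abstract above is classically done by block matching; the 1-D sum-of-absolute-differences sketch below illustrates the idea (names and parameters are assumptions, and the patented per-region correction step is omitted):

```python
def disparity_1d(base_row, ref_row, x, block=3, max_d=4):
    """Disparity of the pixel at index x in base_row, found by
    minimising the sum of absolute differences (SAD) against
    candidate positions in ref_row."""
    half = block // 2
    patch = base_row[x - half: x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        start = x - d - half
        if start < 0:  # candidate window would fall off the row
            continue
        cand = ref_row[start: start + block]
        if len(cand) < block:
            continue
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Correcting each region's shift before matching, as the abstract describes, keeps the search range `max_d` small even when the cameras are imperfectly aligned.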
Abstract: A method and apparatus for operating a security system are provided. The method includes the steps of the security system monitoring a secured area, a video camera obtaining images of the secured area, detecting at least one person within the obtained images, and identifying a predetermined context associated with the detected at least one person within the obtained images.
Type:
Grant
Filed:
November 19, 2010
Date of Patent:
August 30, 2016
Assignee:
HONEYWELL INTERNATIONAL INC.
Inventors:
Eric Oh, David S. Zakrewski, Mi Suen Lee
Abstract: A method of providing video feeds from a plurality of cameras to a plurality of screens including determining a plurality of constraints on a centralized processor processing the video feeds, determining a camera semantic classification for each of the plurality of cameras, determining historical events captured by each of the plurality of cameras, and providing at least one video feed to at least one of the screens according to the plurality of constraints on the centralized processor, the camera semantic classifications and the historical events.
Type:
Grant
Filed:
July 9, 2013
Date of Patent:
August 16, 2016
Assignee:
GLOBALFOUNDRIES INC.
Inventors:
Wei Shan Dong, Ning Duan, Arun Hampapur, Ke Hu, Hongfei Li, Li Li, Wei Sun
Abstract: A method for real time luminance correction and detail enhancement of a video image is provided, including the steps of: extracting a luminance component from a video image; separating the luminance component into an illumination layer and a scene reflectivity layer, the illumination layer having a dynamic range; compressing the dynamic range of the illumination layer to generate a corrected illumination layer; filtering the reflectivity layer to generate an enhanced reflectivity layer; and combining the corrected illumination layer with the enhanced reflectivity layer to generate an enhanced luminance image. A system for real time luminance correction and detail enhancement of a video image is also provided.
Type:
Grant
Filed:
December 31, 2012
Date of Patent:
August 9, 2016
Assignee:
Karl Storz Imaging, Inc.
Inventors:
Martin Steiner, Jonathan Bormet, Gaurav Sharma
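The illumination/reflectivity separation in the abstract above is in the spirit of Retinex-style processing. The 1-D sketch below illustrates it under simple assumptions (box-blur illumination estimate, gamma compression, linear detail gain); none of these choices come from the patent:

```python
def enhance_luminance(lum, kernel=3, gamma=0.5, detail_gain=1.5):
    """Retinex-style sketch on a 1-D luminance signal."""
    half = kernel // 2
    # Illumination layer: local mean (box blur) of the luminance.
    illum = []
    for i in range(len(lum)):
        window = lum[max(0, i - half): i + half + 1]
        illum.append(sum(window) / len(window))
    out = []
    for l, il in zip(lum, illum):
        refl = l / il if il else 0.0                 # reflectivity layer
        compressed = il ** gamma                     # compress dynamic range
        enhanced = 1.0 + detail_gain * (refl - 1.0)  # boost local detail
        out.append(compressed * enhanced)
    return out
```

On a flat signal the reflectivity layer is 1 everywhere, so only the gamma compression acts; real detail enhancement shows up wherever luminance deviates from its local mean.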
Abstract: The embodiments reduce output delay for pictures by determining, after a current picture has been decoded and stored in a decoded picture buffer (DPB), a number of pictures in the DPB that are marked as needed for output. This number is compared against a value derived from at least one syntax element present, or to be present, in a bitstream representing pictures of a video sequence. If this number is greater than the value, the picture that is first in output order among the pictures in the DPB marked as needed for output is preferably output and marked as not needed for output.
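The bumping loop described in the abstract above can be sketched directly; the data shapes here (a list of `(output_order, needed_for_output)` pairs) are an illustrative assumption:

```python
def bump_pictures(dpb, max_needed_for_output):
    """Output pictures until no more than max_needed_for_output
    pictures in the DPB are still marked as needed for output.

    dpb: list of (output_order, needed_for_output) pairs.
    Returns (output orders emitted, updated dpb).
    """
    dpb = list(dpb)
    emitted = []
    while sum(1 for _, needed in dpb if needed) > max_needed_for_output:
        # The picture that is first in output order is output next.
        first = min((p for p in dpb if p[1]), key=lambda p: p[0])
        emitted.append(first[0])
        dpb[dpb.index(first)] = (first[0], False)
    return emitted, dpb
```

Running this check immediately after each picture is decoded, rather than waiting, is what shortens the output delay.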
Abstract: An unused value of the general_profile_space syntax element can be used to indicate that a layer with a non-zero value of nuh_layer_id in a multi-layer bitstream is otherwise conforming to a profile.
Abstract: With an optical inspection tool, images of a plurality of patches of a plurality of dies of a reticle are obtained. The patch images are obtained so that each patch image is positioned relative to the same reference position within its respective die as its die-equivalent patch images in each of the other dies. For each patch image, an integrated value is determined for an image characteristic of sub-portions of that patch image. For each patch image, a reference value is determined based on the integrated values of the patch image's corresponding die-equivalent patch images. For each patch image, a difference between that patch image's integrated value and an average or median value of its die-equivalent patch images is determined, whereby a significant difference indicates a variance between a pattern characteristic of a patch and the average or median pattern characteristic of its die-equivalent patches.
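The die-to-die comparison described above amounts to flagging patches whose integrated value deviates from the median of their die-equivalents; a minimal sketch, with assumed data layout and a hypothetical threshold parameter:

```python
import statistics

def patch_variances(integrated, threshold):
    """Flag patches deviating from their die-equivalent median.

    integrated[d][p]: integrated image-characteristic value for
    patch p of die d.  Returns (die, patch) pairs whose value differs
    from the median over all dies by more than threshold.
    """
    n_dies, n_patches = len(integrated), len(integrated[0])
    flagged = []
    for p in range(n_patches):
        ref = statistics.median(integrated[d][p] for d in range(n_dies))
        for d in range(n_dies):
            if abs(integrated[d][p] - ref) > threshold:
                flagged.append((d, p))
    return flagged
```

Using the median rather than the mean keeps a single defective die from dragging the reference value toward itself.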
Abstract: A computer-implemented method may include receiving, via a mobile access network, surveillance data from one or more mobile surveillance devices, wherein the one or more mobile surveillance devices are associated with a monitored location, system, or group. An event condition associated with the monitored location, system, or group is identified based on the received surveillance data, wherein the event condition corresponds to at least one of the one or more mobile surveillance devices. An alert notification is generated and transmitted to one or more user devices based on the identified event condition. A request to view at least a portion of the surveillance data is received from a user device in response to the alert notification. At least the portion of the surveillance data is transmitted to the user device in response to the request.
Type:
Grant
Filed:
June 24, 2013
Date of Patent:
July 12, 2016
Assignee:
Cellco Partnership
Inventors:
Kevin Lim, Rahim Charania, Ramesh Marimuthu, Alex Hoyos
Abstract: An imaging system having a first laser emitting a light beam to illuminate an object is provided. The system includes first and second beam splitters. The first beam splitter combines a first light beam portion and a third light beam portion emitted from a second laser to form a first interference pattern. The second beam splitter combines a second light beam portion and a fourth light beam portion to form a second interference pattern. The system includes digital cameras generating raw image data based on the first and second interference patterns, and a computer processing the raw image data to obtain synthetic image plane data.
Type:
Grant
Filed:
July 16, 2014
Date of Patent:
July 12, 2016
Assignee:
Lockheed Martin Corporation
Inventors:
Melvin S. Ni, Joseph Marron, James Wes Irwin