Abstract: A controller and method therein for controlling encoding of a set of images to enable blending of an overlapping area, where a first image and a second image overlap each other, are disclosed. The controller encodes macroblocks of the non-overlapping area in the first image using a set of base quantization parameter values, QP-values, and adds the same set of base QP-values to a header of each macroblock. The controller encodes macroblocks of the overlapping area in the first image using a set of first QP-values, and adds a modified set of the first QP-values to a header of each macroblock. The controller encodes macroblocks of the overlapping area in the second image using a set of second QP-values, and adds a modified set of the second QP-values to a header of each macroblock.
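The per-macroblock split between the QP actually used for encoding and the modified QP written to the header can be sketched as follows. This is a minimal illustration, not the patented scheme: the rectangular overlap region, the concrete QP values, and the fixed header offset are all assumptions made for the example.

```python
def plan_macroblock(x, y, overlap, base_qp, first_qp, header_offset=2):
    """Return (qp_used_for_encoding, qp_written_to_header) for one macroblock.

    Non-overlapping macroblocks encode and signal the same base QP.
    Overlapping macroblocks encode with the first QP but signal a modified
    QP, so the decoder rescales the residuals and the overlapping streams
    can be blended.  `header_offset` is a hypothetical modification.
    """
    x0, y0, x1, y1 = overlap  # overlap rectangle in macroblock coordinates
    if x0 <= x <= x1 and y0 <= y <= y1:
        return first_qp, first_qp + header_offset
    return base_qp, base_qp
```

The same function would be called with the set of second QP-values when encoding the second image's overlapping macroblocks.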
Abstract: A method and an image processing entity for applying a convolutional neural network to an image are disclosed. The image processing entity processes the image while using a feature kernel to render a feature map, whereby a second feature map size of the feature map is greater than a first feature map size of the feature maps with which the feature kernel was trained. Furthermore, the image processing entity repeatedly applies the feature kernel to the feature map in a stepwise manner, wherein the feature kernel was trained to identify the feature based on feature maps of the first feature map size, and wherein the feature kernel has the first feature map size.
Type:
Grant
Filed:
December 4, 2018
Date of Patent:
November 10, 2020
Assignee:
AXIS AB
Inventors:
Niclas Danielsson, Simon Molin, Markus Skans
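The stepwise application described in the abstract above amounts to sliding a small trained kernel over a feature map larger than the maps it was trained on. A minimal sketch, assuming stride 1 and plain nested lists (the actual kernel sizes and stride are not specified here):

```python
def apply_kernel_stepwise(feature_map, kernel):
    """Slide a kernel (trained on maps of the kernel's own size) over a
    larger feature map, producing one response per kernel position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(feature_map), len(feature_map[0])
    return [
        [
            sum(feature_map[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(w - kw + 1)
        ]
        for i in range(h - kh + 1)
    ]
```

Applying a 2x2 kernel to a 3x3 feature map yields a 2x2 grid of responses, one per position the kernel visits.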
Abstract: Apparatus, systems and techniques associated with battery powered wireless camera systems are provided. In an example, a system includes a battery powered wireless camera including an internal battery to provide energy and an image capture module to capture images. Further, the battery powered wireless camera includes a low-bandwidth radio transceiver which may wirelessly communicate with a base station and a micro switch, and may receive commands for operation of the battery powered wireless camera. The battery powered wireless camera also includes a high-bandwidth radio transceiver which may wirelessly communicate with the base station. If the high-bandwidth radio transceiver is powered down, the image capture module may store captured images. If the micro switch triggers activation, the high-bandwidth radio transceiver may power up and transmit the captured images to the base station.
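The power-state behaviour described above (buffer while the high-bandwidth radio is down, transmit the backlog on activation) can be sketched as a small state machine. The class and method names are hypothetical; real firmware would of course involve actual radio drivers.

```python
class WirelessCamera:
    """Illustrative sketch of the buffering/transmission logic only."""

    def __init__(self):
        self.stored = []        # images held while high-bandwidth radio is down
        self.sent = []          # images transmitted to the base station
        self.high_bw_on = False

    def capture(self, image):
        if self.high_bw_on:
            self.sent.append(image)
        else:
            # High-bandwidth transceiver powered down: store locally.
            self.stored.append(image)

    def micro_switch_trigger(self):
        # Activation: power up the high-bandwidth transceiver and
        # transmit the stored backlog to the base station.
        self.high_bw_on = True
        self.sent.extend(self.stored)
        self.stored.clear()
```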
Abstract: A method of adding comfort noise to a video sequence comprising setting parameters of a deblocking filter of a video encoder to change values during the video sequence, encoding frames of the video sequence using the parameters of the deblocking filter that are set to change values during the video sequence, thereby introducing comfort noise in the video sequence, and including the encoded frames in a bitstream together with an indication of which parameters of the deblocking filter were used when encoding the frames of the video sequence.
Type:
Grant
Filed:
December 18, 2018
Date of Patent:
November 10, 2020
Assignee:
AXIS AB
Inventors:
Alexander Toresson, Viktor Edpalm, Fredrik Pihl
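The comfort-noise method above boils down to varying deblocking-filter parameters across frames and recording which parameters were used. A minimal sketch, assuming a simple cyclic schedule of filter offsets (the concrete parameter names and values are assumptions, loosely modelled on H.26x-style filter offsets):

```python
import itertools

def deblocking_schedule(num_frames, offsets=(-2, 0, 2)):
    """Assign cycling deblocking-filter offsets to frames.

    The changing filter strength produces slight frame-to-frame variation
    (comfort noise); the returned records double as the bitstream's
    indication of which parameters were used for each frame.
    """
    cycle = itertools.cycle(offsets)
    return [{"frame": n, "beta_offset": (off := next(cycle)), "tc_offset": off}
            for n in range(num_frames)]
```

An encoder loop would read the offset for each frame before encoding it, then multiplex the records into the output bitstream alongside the encoded frames.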
Abstract: A method for identifying events in a scene captured by a motion video camera comprises two identification processes, a temporary identification process and a long-term identification process. The temporary process includes: analyzing pixel data from captured image frames and identifying events; registering camera processing data relating to each image frame subjected to the identification of events; and adjusting weights belonging to an event identifying operation, wherein the weights are adjusted for achieving high correlation between the result from the event identifying operation and the result from the identification based on analysis of pixels from captured image frames of the captured scene. The long-term identification process includes: identifying events in the captured scene by inputting registered camera processing data to the event identifying operation.
Type:
Grant
Filed:
December 21, 2017
Date of Patent:
November 3, 2020
Assignee:
AXIS AB
Inventors:
Viktor Edpalm, Erik Andersson, Song Yuan
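The weight adjustment in the temporary process above can be illustrated with a simple perceptron-style update: camera processing data is scored by a linear model whose weights are nudged until its output correlates with the pixel-analysis result. This is an assumed, deliberately simple learning rule; the patent does not specify the event identifying operation.

```python
def train_event_weights(samples, lr=0.1, epochs=20):
    """samples: list of (camera_processing_features, pixel_based_label).

    Adjust weights so a linear score over camera processing data agrees
    with the pixel-analysis ground truth (the temporary process).
    """
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for feats, label in samples:
            pred = 1 if sum(wi * f for wi, f in zip(w, feats)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * f for wi, f in zip(w, feats)]
            b += lr * err
    return w, b

def identify_event(feats, w, b):
    # Long-term process: identify events from camera processing data alone.
    return sum(wi * f for wi, f in zip(w, feats)) + b > 0
```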
Abstract: The present invention relates to the field of image encoding. In particular, it relates to methods and devices where the concept of auxiliary frames may be employed to reduce or remove the need of copying data, for reference encoding purposes, between encoders which encode different parts of an image frame. This purpose is achieved by spatially modifying (S104) original image data before encoding (S106, S108) it using the encoders, and using (S110) the encoded image data as image data of an auxiliary frame. The auxiliary frame is referenced by an inter frame comprising motion vectors corresponding to a restoration of the auxiliary frame image data back to a spatial arrangement of the original image data.
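The spatial modification and its restoration can be sketched with a toy rearrangement: the bottom half of the image is placed beside the top half so each encoder gets one contiguous tile, and the restoration (which the inter frame's motion vectors would encode) moves the data back. The half-and-half split is an assumption for illustration.

```python
def spatially_modify(image, split):
    """Place the bottom half beside the top half: one 'tile' per encoder,
    with no reference data shared between encoders."""
    return [top + bot for top, bot in zip(image[:split], image[split:])]

def restore(aux_frame, width, split):
    """Inverse rearrangement, i.e. what the inter frame's motion vectors
    express: move auxiliary-frame data back to the original layout."""
    top = [row[:width] for row in aux_frame]
    bot = [row[width:] for row in aux_frame]
    return top + bot
```

Round-tripping an image through both functions returns the original pixel arrangement.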
Abstract: There is provided a method and system for tracking a plurality of objects in a sequence of images. The method comprises: receiving an image from a sequence of images; calculating, based on the received image, a detection probability map which, for each state of a predefined set of states of an object, specifies a probability that any of the plurality of objects is detected in that state in the received image; updating, based on the calculated detection probability map, an object identity map and an accumulated probability map recursively from a previous object identity map and a previous accumulated probability map corresponding to a previously received image of the sequence of images, and tracking each object in the received image based on the updated object identity map and the updated accumulated probability map.
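One recursive update step of the kind described above can be sketched with dictionaries keyed by state. The exponential decay factor and the identity-seeding rule are assumptions made for the example; the patent abstract only specifies that the maps are updated recursively from their previous values.

```python
def update_maps(det_prob, prev_acc, prev_ids, decay=0.9):
    """One recursive update from the previous image's maps.

    det_prob : probability that some object is detected in each state.
    prev_acc : accumulated probability map for the previous image.
    prev_ids : object identity associated with each state so far.
    """
    acc, ids = {}, {}
    for state, p in det_prob.items():
        carried = decay * prev_acc.get(state, 0.0)
        acc[state] = carried + p
        # Keep the identity already accumulated in this state, or seed a
        # new track where the state had no prior support.
        ids[state] = prev_ids.get(state, "new-" + str(state))
    return acc, ids
```

Tracking then reads off, per object, the state where its accumulated probability is highest.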
Abstract: A power control system for controlling operation of a first battery-powered device positioned at a remote location from a camera. The power control system includes a receiving module, a retrieving module, a comparison module, and a transmission module. The receiving module is configured to receive at least one of a pan angle and a tilt angle of the camera indicative of a field of view. The retrieving module is configured to retrieve a position of the first battery-powered device. The comparison module is configured to compare the pan angle and the tilt angle with the position of the first battery-powered device, and determine whether the first battery-powered device is positioned within the field of view of the camera based on the comparison. The transmission module is configured to transmit instructions to control power consumption of the first battery-powered device, based on the determination.
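The comparison step above reduces to an angular containment test: is the device's direction within the camera's field of view for the current pan and tilt? A minimal sketch, assuming a symmetric field of view of hypothetical half-angle 30 degrees and pan angles that wrap at 360 degrees:

```python
def in_field_of_view(pan, tilt, device_pan, device_tilt, half_fov=30.0):
    """Compare camera pan/tilt with the device's angular position."""
    d_pan = (device_pan - pan + 180) % 360 - 180  # wrap to [-180, 180)
    d_tilt = device_tilt - tilt
    return abs(d_pan) <= half_fov and abs(d_tilt) <= half_fov

def power_command(pan, tilt, device_pan, device_tilt):
    # Power the battery-powered device only while the camera can see it.
    if in_field_of_view(pan, tilt, device_pan, device_tilt):
        return "power_on"
    return "power_off"
```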
Abstract: A method and a camera system for stitching video data from two image sensors arranged to each capture video data of overlapping camera views comprises detecting motion in an area in the camera views corresponding to the overlapping camera views, determining an activity distance, being the distance from a position at the location of the two image sensors to an activity position including the detected motion, positioning in a three-dimensional coordinate system a predefined projection surface at a position having a distance between the position at the location of the image sensors and a position of the projection of the activity onto the projection surface that corresponds to the determined activity distance, projecting the video data from each of the image sensors onto the predefined projection surface that have been positioned at the activity distance, and outputting a two-dimensional video corresponding to the projection onto the projection surface.
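The key quantity in the stitching method above is the activity distance, which sets where the projection surface is positioned. A minimal sketch, assuming a spherical projection surface centered at the sensors (the patent speaks only of a predefined surface; the sphere is an illustrative choice):

```python
import math

def activity_distance(sensor_pos, activity_pos):
    """Euclidean distance from the sensor location to the detected motion."""
    return math.dist(sensor_pos, activity_pos)

def position_projection_surface(sensor_pos, activity_pos):
    """Place a spherical projection surface so that the projection of the
    activity lies at the activity distance, minimizing parallax there."""
    return {"center": sensor_pos,
            "radius": activity_distance(sensor_pos, activity_pos)}
```

Projecting both sensors' video onto a surface placed at the activity distance makes the moving object line up in the stitched output, at the cost of parallax at other depths.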
Abstract: A method for detecting an object in a first distorted image using a sliding window algorithm, comprising: receiving an inverse of a mathematical representation of a distortion of the first distorted image; wherein the detection of an object comprises sliding a sliding window over the first distorted image, the sliding window comprising a feature detection pattern, and for each position of a plurality of positions in the first distorted image: transforming the sliding window based on the inverse of the mathematical representation of the distortion at the position, wherein the step of transforming the sliding window comprises transforming the feature detection pattern of the sliding window such that a resulting distortion of the feature detection pattern of the transformed sliding window corresponds to the distortion of the first distorted image at the position; and using the transformed sliding window comprising the transformed feature detection pattern in the sliding window algorithm.
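The window transformation above can be sketched by mapping each offset of the feature-detection pattern through the inverse distortion model evaluated around the current position. The pattern-as-offset-list representation and the callable distortion model are assumptions for the example.

```python
def transform_pattern(pattern, position, inverse_distortion):
    """Warp each (dx, dy) offset of a feature-detection pattern so the
    transformed pattern matches the image's local distortion at `position`.

    `inverse_distortion` is any callable (x, y) -> (x', y') representing
    the inverse of the mathematical representation of the distortion.
    """
    px, py = position
    return [inverse_distortion(px + dx, py + dy) for dx, dy in pattern]
```

With the identity model the pattern is simply translated to the window position; a real model (e.g. a radial lens model) would additionally bend the offsets.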
Abstract: A method and an encoder for encoding a video stream in a video coding format supporting auxiliary frames which includes receiving first image data captured by a video capturing device, using the first image data as image data of a first auxiliary frame, encoding the first auxiliary frame as an intra frame, and encoding a first frame as an inter frame referencing the first auxiliary frame, wherein motion vectors of the first frame are representing a first image transformation to be applied to the first image data.
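The structure above (an intra-coded auxiliary frame referenced by an inter frame whose motion vectors encode a transformation) can be sketched for the simplest case, a global translation. The block grid, dict representation, and translation transform are illustrative assumptions.

```python
def translation_motion_vectors(blocks_w, blocks_h, dx, dy):
    """One identical motion vector per block: decoding the inter frame
    then applies a global translation to the auxiliary frame."""
    return [[(dx, dy)] * blocks_w for _ in range(blocks_h)]

def encode_with_aux(first_image, dx, dy, blocks_w=2, blocks_h=2):
    """Encode image data as a non-displayed intra auxiliary frame plus a
    displayed inter frame whose motion vectors apply the transformation."""
    aux = {"type": "auxiliary", "coding": "intra", "data": first_image}
    display = {"type": "display", "coding": "inter", "ref": aux,
               "motion_vectors": translation_motion_vectors(
                   blocks_w, blocks_h, dx, dy)}
    return [aux, display]
```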
Abstract: There is provided a method, a device (104), and a system (100) for enhancing changes in an image (103a) of an image sequence (103) captured by a thermal camera (102). An image (103a) which is part of the image sequence (103) is received (S02) and pixels (408) in the image that have changed in relation to another image (103b) in the sequence are identified (S04). Based on the intensity values of the identified pixels, a function (212, 212a, 212b, 212c, 212d, 212e) which is used to redistribute intensity values of changed as well as non-changed pixels in the image is determined (S06). The function has a maximum (601) for a first intensity value (602) in a range (514) of the intensity values of the identified pixels, and decays with increasing distance from the first intensity value.
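A function with the stated shape (maximal at an intensity representative of the changed pixels, decaying with distance from it) can be sketched with Gaussian decay. The choice of the median as the peak intensity and of Gaussian decay are assumptions; the patent only requires a maximum in the changed-pixel range and decay away from it.

```python
import math
import statistics

def redistribution_function(changed_intensities, sigma=10.0):
    """Return f(v): weighting used to redistribute intensity values.

    f is 1.0 at the median intensity of the changed pixels and decays
    (here as a Gaussian) with increasing distance from that intensity.
    """
    peak = statistics.median(changed_intensities)
    return lambda v: math.exp(-((v - peak) ** 2) / (2 * sigma ** 2))
```

Applying such a weighting concentrates contrast around the intensities where change occurred, enhancing the changed pixels relative to the static background.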
Abstract: The present invention relates to the field of image encoding. In particular, it relates to methods and devices where the concept of auxiliary frames may be employed to reduce or remove the need of copying data, for reference encoding purposes, between encoders which encode different parts of an image frame. This purpose is achieved by spatially modifying (S104) original image data before encoding (S106, S108) it using the encoders, and using (S110) the encoded image data as image data of an auxiliary frame. The auxiliary frame is referenced by an inter frame comprising motion vectors corresponding to a restoration of the auxiliary frame image data back to a spatial arrangement of the original image data.
Inventors:
Joakim Veberg, Henrik Svedberg, Johan Persson, Johan Widerdal, Jonas Sjögren, Mathias Walter, Mariano Vozzi, Ola Andersson, Christian Jacobsson, Daniel Ahman