Abstract: Storing and retrieving media recordings in an object store comprises a first method of storing a media recording, comprising the following steps performed at an ingest component of a system: assigning a recording ID to the media recording; storing media data in one or more data objects in an object store, each carrying the recording ID; storing media metadata in an attribute object carrying the recording ID; computing a hash of the metadata; and storing the hash, the recording ID, a recording interval and a recording source in an index object. The first method further comprises concatenating the index object with an existing index object using a maintenance component. A second method is suitable for retrieving a media recording stored in an object store using the first method. A third method is suitable for performing maintenance on the object store by concatenating specific groups of index objects.
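A minimal sketch of the ingest steps described in the abstract above, using a toy in-memory object store; the class names, key layout and JSON representation are illustrative assumptions, not the disclosed format.

```python
# Hedged sketch of the ingest flow: assign a recording ID, store data objects,
# store an attribute (metadata) object, hash the metadata, write an index object.
import hashlib
import json
import uuid


class ObjectStore:
    """Toy in-memory object store keyed by object name (assumption)."""
    def __init__(self):
        self.objects = {}

    def put(self, name, payload):
        self.objects[name] = payload


def ingest_recording(store, media_chunks, metadata, interval, source):
    recording_id = str(uuid.uuid4())            # assign a recording ID

    # Store media data in one or more data objects, each carrying the ID.
    for i, chunk in enumerate(media_chunks):
        store.put(f"data/{recording_id}/{i}", chunk)

    # Store media metadata in an attribute object carrying the ID.
    attribute_blob = json.dumps({"recording_id": recording_id, **metadata})
    store.put(f"attr/{recording_id}", attribute_blob.encode())

    # Compute a hash of the metadata and store it in an index object
    # together with the recording ID, recording interval and recording source.
    metadata_hash = hashlib.sha256(attribute_blob.encode()).hexdigest()
    index_entry = {
        "recording_id": recording_id,
        "metadata_hash": metadata_hash,
        "interval": interval,
        "source": source,
    }
    store.put(f"index/{recording_id}", json.dumps(index_entry).encode())
    return recording_id


store = ObjectStore()
rid = ingest_recording(store, [b"frame-bytes"], {"codec": "h264"},
                       interval=("2023-06-01T00:00", "2023-06-01T00:05"),
                       source="camera-01")
```

The index concatenation performed by the maintenance component, and the retrieval method, are not sketched here.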
Abstract: A device and a method of signing an encoded video sequence comprising: obtaining an encoded video sequence composed of encoded image frames; generating a set of one or more frame fingerprints for each encoded image frame; generating a document comprising a header of a supplemental information unit, and a representation of the generated sets of one or more frame fingerprints; generating a document signature by digitally signing the document; generating the supplemental information unit to only consist of the document, the document signature and an indication of an end of the supplemental information unit; and signing the encoded video sequence by associating the generated supplemental information unit with the encoded video sequence.
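A hedged sketch of the signing flow in the abstract above; an HMAC stands in for a real digital signature, and the header fields and end marker are illustrative, not the disclosed bitstream layout.

```python
# Sketch: per-frame fingerprints -> document -> document signature ->
# supplemental information unit (document + signature + end indication).
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"          # placeholder for a private signing key
END_MARKER = b"\x00END"            # illustrative end-of-unit indication


def frame_fingerprints(encoded_frame: bytes) -> list:
    # One fingerprint per frame in this sketch; the method allows a set.
    return [hashlib.sha256(encoded_frame).hexdigest()]


def sign_sequence(encoded_frames: list) -> bytes:
    # Document: a header plus a representation of all frame fingerprints.
    document = json.dumps({
        "header": {"unit_type": "supplemental", "version": 1},
        "fingerprints": [frame_fingerprints(f) for f in encoded_frames],
    }).encode()

    # Document signature (stand-in for an asymmetric digital signature).
    signature = hmac.new(SIGNING_KEY, document, hashlib.sha256).digest()

    # Supplemental information unit consisting only of the document, the
    # signature and the end indication; associating it with the video
    # sequence constitutes the signing.
    return document + signature + END_MARKER


unit = sign_sequence([b"frame0-bytes", b"frame1-bytes"])
print(len(unit), "bytes in the supplemental information unit")
```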
Abstract: The present disclosure generally relates to the field of camera surveillance, and in particular to a method and control unit for controlling a bitrate of a video stream captured with an image acquisition device.
Abstract: An encoding device and a method for encoding an image frame comprising a plurality of pixel blocks are provided. A respective offset compression value is set for each of the plurality of pixel blocks based on an interest level associated with the pixel block, wherein each offset compression value defines an offset in relation to a reference compression value set for the image frame. In the image frame, one or more low contrast regions having a contrast which is below a predefined contrast threshold are identified. For pixel blocks within the one or more low contrast regions whose offset compression values, set based on the associated interest levels, are higher than a predefined offset compression threshold, the set offset compression values are selectively restricted to be at most equal to the predefined offset compression threshold. The image frame is then encoded using the set offset compression values.
Type:
Application
Filed:
December 2, 2022
Publication date:
June 8, 2023
Applicant:
Axis AB
Inventors:
Viktor EDPALM, Alexander TORESSON, Johan PALMAEUS
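A schematic sketch of the offset-restriction step from the encoding abstract above (the one on per-block offset compression values); the block layout, the interest-to-offset mapping and all threshold numbers are made-up assumptions.

```python
# Sketch: set a per-block offset from the interest level, then clamp offsets
# that exceed the offset threshold, but only inside low-contrast regions.

REFERENCE_COMPRESSION = 30          # e.g. a base quantisation parameter
OFFSET_THRESHOLD = 5                # predefined offset compression threshold
CONTRAST_THRESHOLD = 10.0           # predefined contrast threshold


def offset_for_interest(interest_level: float) -> int:
    # Higher interest -> negative offset (less compression); lower interest
    # -> positive offset (more compression). Purely illustrative mapping.
    return round((0.5 - interest_level) * 20)


def restricted_offsets(blocks):
    """blocks: list of dicts with 'interest' and 'contrast' per pixel block."""
    offsets = []
    for block in blocks:
        offset = offset_for_interest(block["interest"])
        # Restrict the offset only for blocks inside low-contrast regions.
        if block["contrast"] < CONTRAST_THRESHOLD and offset > OFFSET_THRESHOLD:
            offset = OFFSET_THRESHOLD
        offsets.append(offset)
    return offsets


blocks = [
    {"interest": 0.9, "contrast": 40.0},   # interesting, high contrast
    {"interest": 0.1, "contrast": 3.0},    # uninteresting, low contrast
]
per_block_compression = [REFERENCE_COMPRESSION + o for o in restricted_offsets(blocks)]
print(per_block_compression)
```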
Abstract: A method for determining whether or not a transparent protective cover of a video camera comprising a lens-based optical imaging system is partly covered by a foreign object is disclosed. The method comprises: obtaining (402) a first captured image frame captured by the video camera with a first depth of field; obtaining (404) a second captured image frame captured by the video camera with a second depth of field which differs from the first depth of field; and determining (406) whether or not the protective cover is partly covered by the foreign object by analysing whether or not the first and second captured image frames are affected by the presence of the foreign object on the protective cover such that the difference between the first depth of field and the second depth of field results in a difference in a luminance pattern of corresponding pixels of the first and second captured image frames.
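A rough sketch of the comparison in step (406), assuming grayscale frames of equal size given as nested lists; the mean-luminance-difference metric and its decision threshold are assumptions.

```python
# Sketch: a foreign object on the cover is blurred very differently under the
# two depths of field, so it shows up as a luminance difference between the
# two captured frames at corresponding pixels.

DIFFERENCE_THRESHOLD = 12.0   # hypothetical mean luminance-difference limit


def cover_partly_covered(frame_first_dof, frame_second_dof) -> bool:
    total, count = 0.0, 0
    for row_a, row_b in zip(frame_first_dof, frame_second_dof):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    mean_difference = total / count
    return mean_difference > DIFFERENCE_THRESHOLD


frame1 = [[100, 100], [100, 100]]     # captured with the first depth of field
frame2 = [[100, 40], [100, 100]]      # captured with the second depth of field
print(cover_partly_covered(frame1, frame2))
```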
Abstract: A device and a method encode a view area within a current image frame of a video into an encoded video area frame. The view area is a respective subarea of each image frame, each image frame comprising first and second image portions, and between the previous and current image frames, the view area moves across a boundary between the first and second image portions. First and second encoders encode image data of the first and second image portions, respectively. First, second and third portions of the view area are identified based on their respective locations in the previous and current image frames. Image data of the first and third portions are inter-coded as first and third encoded slices/tiles. Image data of the second portion of the view area in the current image frame are intra-coded as a second encoded slice/tile. The encoded slices/tiles are merged into the encoded video area frame.
Type:
Application
Filed:
November 3, 2022
Publication date:
June 1, 2023
Applicant:
Axis AB
Inventors:
Viktor EDPALM, Song YUAN, Toivo HENNINGSSON, Johan PALMAEUS
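One plausible reading of the three-way split from the view-area encoding abstract above, reduced to columns of the view area; the identification rule used here, and the assumption that the view moves rightwards across a vertical boundary, are guesses for illustration only.

```python
# Sketch: columns still in the first image portion stay inter-coded (first
# slice/tile), columns of the second image portion that were already visible
# stay inter-coded (third slice/tile), and newly entered columns are
# intra-coded (second slice/tile).

BOUNDARY_X = 1920      # boundary between the first and second image portions


def classify_column(x, prev_view_right_edge):
    """Coding decision for column x of the view area in the current frame."""
    if x < BOUNDARY_X:
        # First portion: inside the first image portion -> first encoder, inter.
        return "first/inter"
    if x < max(prev_view_right_edge, BOUNDARY_X):
        # Third portion: already shown from the second image portion -> inter.
        return "third/inter"
    # Second portion: newly crossed the boundary, no usable reference in the
    # merged stream -> intra-coded.
    return "second/intra"


# Previous view ended at x = 2050; current view spans x = 1800 .. 2200.
current_view_columns = range(1800, 2200, 100)
print({x: classify_column(x, prev_view_right_edge=2050) for x in current_view_columns})
```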
Abstract: A method of controlling an operating temperature of a data storage device is disclosed. A threshold temperature for the storage device is set. Over time, during operation of the data storage device, an operating temperature of the storage device is measured at a plurality of points in time. A plurality of temperature measurements as a function of time are thereby obtained. Above threshold temperature measurements are accumulated over time to form a high temperature accumulation value (Vhigh), and below threshold temperature measurements are accumulated to form a low temperature accumulation value (Vlow). The low temperature accumulation value (Vlow) and the high temperature accumulation value (Vhigh) are compared. If an outcome of the comparison is that the high temperature accumulation value (Vhigh) is too high in relation to the low temperature accumulation value (Vlow), an operating temperature lowering action is initiated.
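A sketch of the accumulate-and-compare rule described in the abstract above; the way excess temperature is accumulated and the "too high" criterion (a simple ratio) are assumptions.

```python
# Sketch: accumulate above-threshold measurements into Vhigh and
# below-threshold measurements into Vlow, then compare them.

THRESHOLD_C = 70.0        # threshold temperature for the storage device
RATIO_LIMIT = 0.25        # Vhigh may be at most 25 % of Vlow in this sketch


def needs_cooling_action(temperature_samples):
    v_high = 0.0          # high temperature accumulation value (Vhigh)
    v_low = 0.0           # low temperature accumulation value (Vlow)
    for t in temperature_samples:
        if t > THRESHOLD_C:
            v_high += t - THRESHOLD_C
        else:
            v_low += THRESHOLD_C - t
    # If Vhigh is too high in relation to Vlow, initiate a lowering action.
    return v_high > RATIO_LIMIT * v_low


samples = [65, 66, 72, 75, 68, 64, 71]    # temperature measurements over time
if needs_cooling_action(samples):
    print("initiate operating temperature lowering action")
```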
Abstract: An exposure time controller for controlling an exposure time (ET) variable of a video camera, which is associated with an auto-exposure algorithm configured to reduce an exposure mismatch (ΔE) by incrementing and decrementing the ET variable, the controller comprising: a memory for recording ET values applied while the video camera is imaging a scene and the algorithm is active; and processing circuitry configured to: determine that the exposure mismatch exceeds a threshold while the video camera is imaging the scene; estimate a distribution of the recorded ET values; based on the estimated distribution, identify multiple relatively most frequent ET values; and, in reaction to determining that the exposure mismatch exceeds the threshold, assign one of the identified ET values to the ET variable.
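A sketch of the recovery step in the abstract above; estimating the distribution with a simple histogram (`collections.Counter`) and the rule for picking among the most frequent values are assumptions.

```python
# Sketch: when the exposure mismatch exceeds a threshold, assign one of the
# historically most frequent exposure-time (ET) values instead of stepping.
from collections import Counter

MISMATCH_THRESHOLD = 0.5          # illustrative mismatch threshold


def fallback_exposure_time(recorded_et_values, current_mismatch, current_et):
    if current_mismatch <= MISMATCH_THRESHOLD:
        return current_et                   # let the auto-exposure loop run
    # Estimate the distribution of recorded ET values with a histogram.
    histogram = Counter(round(v, 3) for v in recorded_et_values)
    # Identify multiple relatively most frequent ET values ...
    most_frequent = [v for v, _ in histogram.most_common(3)]
    # ... and assign one of them to the ET variable (here: the most common
    # one that differs from the current value, else the most common one).
    for candidate in most_frequent:
        if candidate != round(current_et, 3):
            return candidate
    return most_frequent[0]


history = [0.01, 0.02, 0.02, 0.02, 0.04, 0.02, 0.01, 0.04, 0.04]
print(fallback_exposure_time(history, current_mismatch=0.8, current_et=0.01))
```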
Abstract: A method of configuring a camera comprising: collecting video data with the camera; providing a plurality of imaging profiles, each imaging profile being associated with a set of scene characteristics; for each imaging profile, generating spatial output data in dependence on the video data and the set of scene characteristics of the imaging profile, wherein the spatial output data is indicative of spatial locations of events detected in the video data matching one or more of the scene characteristics; performing a comparison of the spatial output data of each of the plurality of imaging profiles; selecting a preferred imaging profile in dependence on the comparison; and configuring the camera to operate according to the preferred imaging profile.
Type:
Grant
Filed:
August 31, 2021
Date of Patent:
May 16, 2023
Assignee:
Axis AB
Inventors:
Björn Benderius, Jimmie Jönsson, Johan Jeppsson Karlin, Niclas Svensson
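A very schematic sketch of the profile-selection loop from the camera-configuration abstract above; the event detections, the spatial grid and the comparison metric (most matching detections wins) are all placeholder assumptions.

```python
# Sketch: build spatial output data (a grid of matching-event counts) per
# imaging profile, compare the profiles, and pick the preferred one.

GRID = (4, 4)    # spatial output data: counts of matching events per cell


def spatial_output(video_events, profile):
    """video_events: list of (x, y, characteristic) tuples in [0, 1) coords."""
    grid = [[0] * GRID[1] for _ in range(GRID[0])]
    for x, y, characteristic in video_events:
        if characteristic in profile["scene_characteristics"]:
            grid[int(y * GRID[0])][int(x * GRID[1])] += 1
    return grid


def select_profile(video_events, profiles):
    # Compare the spatial output data of the profiles; here the profile with
    # the most matching detections wins (the comparison rule is assumed).
    def score(profile):
        return sum(sum(row) for row in spatial_output(video_events, profile))
    return max(profiles, key=score)


profiles = [
    {"name": "traffic", "scene_characteristics": {"vehicle"}},
    {"name": "retail", "scene_characteristics": {"person", "face"}},
]
events = [(0.1, 0.2, "vehicle"), (0.5, 0.5, "vehicle"), (0.8, 0.9, "person")]
preferred = select_profile(events, profiles)
print("configure camera with profile:", preferred["name"])
```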
Abstract: A method for suppressing chromatic aberration, especially blue or red fringing, in a digital image with multiple color channels is disclosed. The method comprises negatively correcting a first color channel by subtracting an overshoot component of the first color channel. The subtraction is subject to a lower threshold, which is dependent on a local value of at least one further color channel.
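A sketch of the negative correction for a single pixel; the way the overshoot and the lower threshold are derived from the other channel is an assumption, not the disclosed rule.

```python
# Sketch: subtract the overshoot of the first colour channel, but never
# correct below a lower threshold that depends on another colour channel.

def correct_pixel(first_channel, other_channel, local_baseline):
    overshoot = max(0.0, first_channel - local_baseline)
    corrected = first_channel - overshoot
    # Lower threshold dependent on a local value of a further colour channel.
    lower_threshold = 0.8 * other_channel          # illustrative factor
    return max(corrected, lower_threshold)


# Example: a blue value overshooting its local baseline near a bright edge;
# the lower threshold limits how far the correction may pull it down.
print(correct_pixel(first_channel=220.0, other_channel=140.0, local_baseline=100.0))
```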
Abstract: Upon detecting an event indicating enabling of security sensitive functionality of an electronic device, a value previously unknown to the electronic device is obtained, and the current content of the data storage is updated to a new current content of the data storage according to an updating function based on the current content of the data storage and the value, wherein, without privileged access, the current content of the data storage can only be updated using the updating function. The value is further obtained in a management module, and an expected new current content of the data storage is determined in the management module according to the updating function based on known original content of the data storage and the value. Upon determining that the new current content of the data storage differs from the expected new current content of the data storage, a validation module generates a security notification.
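A sketch of the update-and-validate flow described above; a hash chain plays the role of the updating function, and the device, management and validation modules are collapsed into plain functions and variables for illustration.

```python
# Sketch: the device advances its storage content through a one-way updating
# function; the management side computes the expected content from the known
# original content and the same value, and a mismatch triggers a notification.
import hashlib
import os


def updating_function(current_content: bytes, value: bytes) -> bytes:
    # Without privileged access, the content can only move "forward" through
    # this one-way function, never be set directly.
    return hashlib.sha256(current_content + value).digest()


# --- on the electronic device --------------------------------------------
device_storage = b"factory-content"           # current content of the storage
value = os.urandom(16)                        # value previously unknown to it
device_storage = updating_function(device_storage, value)

# --- in the management / validation modules ------------------------------
known_original_content = b"factory-content"
expected = updating_function(known_original_content, value)

if device_storage != expected:
    print("security notification: storage content deviates from expectation")
else:
    print("storage content matches the expected new content")
```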
Abstract: An imaging system is described having at least three cameras and a processing unit. The at least three cameras have a common field of view and camera centres positioned along a line. The at least three cameras are configured to image a calibration object to generate a set of calibration object images, wherein the calibration object is located near the at least three cameras and in the common field of view. The at least three cameras are further configured to image a scene comprising a set of distant scene position points to generate a set of position point images. The processing unit is configured to generate a set of calibration parameters in dependence on the set of calibration object images and the set of position point images.
Type:
Grant
Filed:
December 4, 2020
Date of Patent:
May 2, 2023
Assignee:
Axis AB
Inventors:
Håkan Ardö, Mikael Nilsson, Karl Erik Åström, Martin Ahrnbom
Abstract: There are provided encoding and decoding methods, and corresponding systems which are beneficial in connection to performing a search among regions of interest, ROIs, in encoded video data. In the encoded video data, there are independently decodable ROIs. These ROIs and the encoded video frames in which they are present are identified in metadata which is searched responsive to a search query. The encoded video data further embeds information which associates the ROIs with sets of coding units, CUs, that spatially overlap with the ROIs. In connection to independently decoding the ROIs found in the search, the embedded information is used to identify the sets of CUs to decode.
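A sketch of the search-then-decode flow; the metadata layout and the ROI-to-coding-unit mapping shown here are illustrative stand-ins, not the bitstream or metadata format from the disclosure.

```python
# Sketch: search the ROI metadata for a query, then use the embedded
# ROI -> coding-unit (CU) mapping to find the CUs to decode independently.

# Metadata: per encoded frame, which independently decodable ROIs it contains.
metadata = {
    "frame_0042": [{"roi_id": "roi_7", "label": "person"}],
    "frame_0043": [{"roi_id": "roi_8", "label": "vehicle"}],
}

# Information embedded in the encoded video data: ROI -> set of CUs that
# spatially overlap with the ROI (CU addresses as (column, row) here).
roi_to_cus = {
    "roi_7": {(12, 8), (13, 8), (12, 9)},
    "roi_8": {(3, 2), (4, 2)},
}


def search(label):
    """Search the metadata for frames containing ROIs matching the query."""
    return [(frame, roi["roi_id"])
            for frame, rois in metadata.items()
            for roi in rois if roi["label"] == label]


def cus_to_decode(hits):
    """Use the embedded information to identify the CUs to decode."""
    return {roi_id: roi_to_cus[roi_id] for _, roi_id in hits}


hits = search("person")
print(hits, cus_to_decode(hits))
```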
Abstract: The present disclosure relates to a method of providing a video stream from a system comprising a main unit and a plurality of sensors, wherein the main unit is configured to receive data from the plurality of sensors, the method comprising the steps of: transmitting a multi-view video stream from the main unit to a client, wherein the multi-view video stream represents a multi-view composed of sensor data views from the plurality of sensors in the system; receiving, in the main unit, a command from the client representing a zoom-in operation; computing an updated multi-view according to the received command; evaluating if the updated multi-view includes an area outside a dominating sensor data view in the updated multi-view that is greater than a predetermined threshold; if the area outside the dominating sensor data view is greater than the predetermined threshold, transmitting a multi-view video stream representing the updated multi-view; if the area outside the dominating sensor data view is less than, or
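A sketch of the zoom-in decision from the abstract above, with views reduced to normalised rectangles; the area metric is an assumption, and the behaviour of the "less than" branch is left unspecified because the abstract is cut off at that point.

```python
# Sketch: after a zoom-in command, measure how much of the updated multi-view
# falls outside the dominating sensor data view and compare it to a threshold.

AREA_THRESHOLD = 0.15      # predetermined threshold (fraction of the view)


def area(rect):
    x0, y0, x1, y1 = rect
    return max(0.0, x1 - x0) * max(0.0, y1 - y0)


def intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))


def outside_dominating_view_fraction(updated_view, dominating_sensor_view):
    inside = area(intersection(updated_view, dominating_sensor_view))
    return (area(updated_view) - inside) / area(updated_view)


def handle_zoom_command(updated_view, dominating_sensor_view):
    if outside_dominating_view_fraction(updated_view, dominating_sensor_view) > AREA_THRESHOLD:
        return "transmit multi-view stream representing the updated multi-view"
    return "other branch (truncated in the abstract above)"


print(handle_zoom_command((0.2, 0.2, 0.9, 0.8), (0.0, 0.0, 0.7, 1.0)))
```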
Abstract: A first image is captured by the camera, using a first focus setting and a first aperture size. A first protrusion focus measure in a protrusion area of an object in the first image and a first recess focus measure in a recess area of the object in the first image are determined. A second image is captured by the camera, using the first focus setting and a second aperture size, and the object is detected. A second protrusion focus measure and a second recess focus measure are determined in the second image. A protrusion focus difference between the first and second protrusion focus measures, and a recess focus difference between the first and second recess focus measures are calculated. The protrusion focus difference and the recess focus difference are compared and if they differ by less than a predetermined threshold amount, it is determined that the object is fake.
Type:
Application
Filed:
September 28, 2022
Publication date:
April 20, 2023
Applicant:
Axis AB
Inventors:
Björn BENDERIUS, Jimmie JÖNSSON, Johan Jeppsson KARLIN, Niclas SVENSSON, Andreas MUHRBECK
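A sketch of the final comparison step from the fake-detection abstract above; the focus measures themselves (e.g. some local sharpness metric per area) are taken as given numbers, and the threshold is illustrative.

```python
# Sketch: compare how the protrusion-area and recess-area focus measures
# change between the two aperture sizes; nearly equal changes suggest a flat
# (fake) object rather than a three-dimensional one.

FAKE_THRESHOLD = 0.05      # predetermined threshold on the difference


def is_fake(protrusion_focus_1, recess_focus_1,
            protrusion_focus_2, recess_focus_2) -> bool:
    # Focus differences between the two images (first vs. second aperture),
    # for the protrusion area (e.g. the nose) and the recess area (e.g. an
    # eye socket) of the detected object.
    protrusion_diff = protrusion_focus_2 - protrusion_focus_1
    recess_diff = recess_focus_2 - recess_focus_1
    return abs(protrusion_diff - recess_diff) < FAKE_THRESHOLD


print(is_fake(0.80, 0.60, 0.70, 0.52))   # differences nearly equal -> fake
```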
Inventors:
Jakob Beckerot, Kristoffer Neutel, Jake Snowdon, Charlotte Gunsjö, Jonas Sjögren, Mariano Vozzi, Mathias Walter, Sebastian Engwall, Dan Carlberg