Abstract: A method for encoding a video stream for the provision of prunable video data, comprising identifying, in the video stream, a first event-generating occurrence in one or more frames of the video stream, and, in an encoder, using the event-generating occurrence to initiate a hierarchical branch extending from a base-layer when encoding the video stream.
Type:
Grant
Filed:
August 10, 2021
Date of Patent:
September 26, 2023
Assignee:
Axis AB
Inventors:
Sebastian Hultqvist, Viktor Edpalm, Axel Keskikangas, Anton Eliasson
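The prunable-branch idea in the abstract above can be pictured as a per-frame layer assignment: a sparse base layer is always kept, and an event-generating occurrence opens a temporary branch of extra frames that can later be pruned without breaking the base layer. The labels, base interval, and branch length below are illustrative assumptions, not the patented encoder logic.

```python
def assign_layers(num_frames, event_frames, base_interval=4, branch_length=8):
    """Label each frame 'base', 'branch', or 'skip'.

    Every `base_interval`-th frame goes to the base layer; an event opens a
    prunable branch covering the next `branch_length` frames; other frames
    are not encoded at all in this sketch.
    """
    open_until = -1
    labels = []
    for i in range(num_frames):
        if i in event_frames:          # event-generating occurrence
            open_until = i + branch_length
        if i % base_interval == 0:
            labels.append("base")      # always-kept base layer
        elif i <= open_until:
            labels.append("branch")    # prunable hierarchical branch
        else:
            labels.append("skip")
    return labels
```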
Abstract: A method of masking or marking an object in an image stream is provided, including: generating one or more output image streams by processing an input image stream capturing a scene, including discarding pixel information about the scene provided by pixels of the input image stream, such that the discarded pixel information is not included in any output image stream; and detecting an object in the scene using the discarded pixel information, wherein generating the one or more output image streams includes masking or marking the detected object in at least one output image stream upon deciding that the object is at least partially visible within that output image stream. A corresponding device, computer program and computer program product are also provided.
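One way to picture the abstract above: the output stream is, for example, a crop of the input, detection runs on the pixel information the crop discards, and a detected object is masked in the output once its bounding box is at least partially inside the crop. The rectangle convention and function names below are assumptions of this sketch.

```python
def overlaps(a, b):
    """True if axis-aligned rectangles a and b, given as (x0, y0, x1, y1),
    intersect with nonzero area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def boxes_to_mask(crop, detections_from_discarded):
    """Return detection boxes (found using discarded pixel information)
    that are at least partially visible inside the cropped output and
    therefore must be masked there."""
    return [box for box in detections_from_discarded if overlaps(box, crop)]
```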
Abstract: A method of masking in an output image stream includes receiving an input image stream capturing a scene and processing the input image stream to generate the output image stream, including using a detector to detect objects in the scene and a tracker to track objects in the scene based on information provided by the detector. A particular output image of the output image stream is generated by checking whether there exists a particular area in the scene in which an evaluation of a historical performance of the detector and/or tracker fulfills at least one condition and, if such a particular area exists, masking that area of the scene in the particular output image. A corresponding device, computer program, and computer program product are also provided.
Type:
Application
Filed:
February 22, 2023
Publication date:
September 21, 2023
Applicant:
Axis AB
Inventors:
Ludvig Hassbring, Jessica Nilsson, Song Yuan
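The per-area check in the abstract above can be sketched with a simple historical-performance record: each scene area accumulates detection attempts and misses, and an area whose miss rate exceeds a threshold is masked. The statistics layout and threshold are illustrative assumptions, not the patented evaluation.

```python
def areas_to_mask(history, max_miss_rate=0.2):
    """history maps area_id -> (misses, attempts) for the detector/tracker.

    Returns the areas whose historical miss rate fulfills the masking
    condition (here: miss rate above `max_miss_rate`).
    """
    masked = []
    for area, (misses, attempts) in history.items():
        if attempts > 0 and misses / attempts > max_miss_rate:
            masked.append(area)
    return masked
```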
Abstract: A method for image stabilization of a video stream captured by a panable and/or tiltable video camera, the method comprising: generating a motor position signal, Y1, of a pan/tilt motor of the video camera; generating a gyro signal, Y2, of a gyroscopic sensor of the video camera; generating a reference signal from a predetermined movement curve of the pan/tilt motor of the video camera, the reference signal representing how a pan/tilt operation of the video camera proceeds without shaking of the video camera; from the motor position signal, Y1, and the gyro signal, Y2, generating a combined signal, Y, according to Y = F1*Y1 + F2*Y2, wherein F1 is a low-pass filter and F2 is a high-pass filter; and performing image stabilization on the video stream based on a difference between the combined signal, Y, and the reference signal.
Type:
Grant
Filed:
November 22, 2021
Date of Patent:
September 19, 2023
Assignee:
Axis AB
Inventors:
Tor Nilsson, Johan Förberg, Toivo Henningsson, Johan Nyström
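The combined signal Y = F1*Y1 + F2*Y2 in the stabilization abstract above is essentially a complementary filter: the motor position supplies the low-frequency trajectory and the gyro supplies the high-frequency shake. A minimal sketch, assuming first-order IIR filters and a single shared coefficient (both assumptions, not the patented filters):

```python
def complementary_filter(y1_samples, y2_samples, alpha=0.9):
    """Combine motor-position samples (Y1) and gyro samples (Y2).

    `alpha` close to 1 trusts the slow motor signal at low frequencies,
    while the high-pass branch passes only fast gyro variations.
    """
    combined = []
    low = y1_samples[0]       # low-pass state, F1 * Y1
    high = 0.0                # high-pass state, F2 * Y2
    prev_y2 = y2_samples[0]
    for y1, y2 in zip(y1_samples, y2_samples):
        low = alpha * low + (1 - alpha) * y1      # F1: low-pass of Y1
        high = alpha * (high + y2 - prev_y2)      # F2: high-pass of Y2
        prev_y2 = y2
        combined.append(low + high)               # Y = F1*Y1 + F2*Y2
    return combined
```

Stabilization would then offset each frame by the difference between this combined signal and the reference movement curve.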
Abstract: A wearable form factor wireless camera may include an image sensor, coupled to an infrared detection module, which captures infrared video. The wearable form factor wireless camera may attach to clothing worn on a user and be ruggedized. A storage device may store the captured infrared video at a first fidelity. The stored infrared video may be capable of being transmitted at the first fidelity and at a second fidelity, with the first fidelity providing a higher frame rate than the second fidelity. A burst transmission unit may transmit the stored infrared video at the second fidelity via a cellular network. The infrared detection module, the image sensor, the storage device and the burst transmission unit may be powered by a battery. The image sensor, the infrared detection module, the battery, the storage device and the burst transmission unit may be internal to the wearable form factor wireless camera.
Abstract: A method of transfer learning an object recognition neural network comprises acquiring a set of image frames; determining, by a first object recognition algorithm implementing an object recognition neural network, a plurality of object recognitions in the set of image frames; determining verified object recognitions by evaluating the plurality of object recognitions by a second object recognition algorithm, different from the first, wherein an object recognition with a positive outcome in said evaluating forms a verified object recognition; forming a training set of annotated images comprising image frames associated with the verified object recognitions; and performing transfer learning of the object recognition neural network based on the training set of annotated images.
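The cross-verification step in the abstract above can be sketched as: detections from the first recognizer survive only if a second, independent recognizer agrees, and the surviving frame/label pairs form the annotated training set. The recognizers here are stand-in callables returning label sets; all names are assumptions of this sketch.

```python
def build_verified_training_set(frames, recognize_a, recognize_b):
    """Return (frame, label) pairs where both recognition algorithms agree.

    `recognize_a` produces candidate recognitions; `recognize_b` is the
    second, different algorithm used to verify them.
    """
    training_set = []
    for frame in frames:
        for label in recognize_a(frame):        # candidate recognitions
            if label in recognize_b(frame):     # positive evaluation outcome
                training_set.append((frame, label))
    return training_set
```

Transfer learning would then fine-tune the first network on `training_set`.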
Abstract: The present invention relates to the field of video encoding. In particular, it relates to a method 300 of encoding images captured by a camera and to an image processing device. An image sequence captured with an image sensor of the camera is obtained S310, and an oscillation frequency of a periodic movement of the camera during capturing of the image sequence is determined S320. A base subset of images of the image sequence corresponding to the oscillation frequency is identified S330, and the base subset of images is encoded S340 into an encoded video stream comprising intra frames and inter frames.
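The selection step S330 above can be pictured as follows: if the camera oscillates at f_osc and captures at f_cap, taking every round(f_cap / f_osc)-th frame yields images at (nearly) the same phase of the periodic movement, which encode efficiently as inter frames. The rounding strategy below is an assumption of this sketch.

```python
def base_subset_indices(num_frames, capture_fps, oscillation_hz):
    """Indices of the base subset: roughly one frame per oscillation period,
    so consecutive selected frames show the camera at the same phase."""
    step = max(1, round(capture_fps / oscillation_hz))
    return list(range(0, num_frames, step))
```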
Abstract: Techniques are provided for setting frame rates of a camera. The camera comprises a plurality of image sensors arranged to capture images to be stitched into a panorama image of a scene. Information is obtained on a respective angle between the optical axis of each of the plurality of image sensors and a direction along which a straight path structure, to be monitored by the camera, extends through the scene. The plurality of image sensors are divided into at least two groups as a function of the angles, such that all angles of the image sensors within each group fall within that group's own continuous angle interval. One frame rate is set for each of the at least two groups, and the frame rate decreases from the group with the highest continuous angle interval towards the group with the lowest continuous angle interval.
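The grouping above can be sketched as a simple threshold mapping: a sensor whose optical axis is closer to perpendicular to the path (larger angle) sees faster apparent motion and gets a higher frame rate, while a sensor looking along the path gets a lower one. The thresholds and rates below are illustrative assumptions.

```python
def assign_frame_rates(sensor_angles, thresholds=(30.0, 60.0),
                       rates=(10, 20, 30)):
    """Map each sensor's angle to the path (degrees) to a frame rate.

    Sensors are grouped into continuous angle intervals by `thresholds`,
    and each group gets one rate, increasing with the angle interval.
    """
    out = []
    for angle in sensor_angles:
        group = sum(angle >= t for t in thresholds)   # 0, 1, or 2
        out.append(rates[group])
    return out
```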
Abstract: A method of encoding a video stream including an overlay is provided, including: capturing a first image; adding an overlay to the first image at a first position, and encoding the first image in a first frame of a video stream; capturing a second image of the scene; determining a desired position of the overlay in the second image; encoding the second image in a second frame marked as a no-display frame, and generating and encoding a third frame including temporally predicted macroblocks at the desired position of the overlay referencing the first frame with motion vectors based on a difference between the desired position and the first position, and skip-macroblocks outside of the desired position of the overlay referencing the first frame. A corresponding device, computer program and computer program product are also provided.
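The motion-vector arithmetic in the overlay abstract above reduces to a position difference: macroblocks at the overlay's desired position in the third frame reference the overlay's first position in the first frame. Positions are (x, y) pixel coordinates; the sign convention and names are assumptions of this sketch.

```python
def overlay_motion_vector(first_pos, desired_pos):
    """Motion vector for the temporally predicted macroblocks at the
    overlay's desired position, pointing back to where the overlay sits
    in the reference (first) frame."""
    return (first_pos[0] - desired_pos[0], first_pos[1] - desired_pos[1])
```

Macroblocks outside the desired position would be encoded as skip-macroblocks referencing the first frame, so only the overlay region costs bits.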
Abstract: A method of performing better-controlled switching between day mode and night mode imaging in a camera is provided, in which the illuminants contributing to the ambient light in day mode are considered when determining the visible light during night mode. Characteristic values of these illuminants are mixed with several levels of IR light to simulate the presence of an IR illuminator, and the resulting characteristic values are compared to corresponding values derived from the color components of the ambient light in night mode in order to determine the IR proportion and, from that, the amount of visible light.
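The comparison step above can be sketched as a nearest-match search: the day-mode illuminant's characteristic values are mixed with each candidate IR level, and the mix closest to the measured night-mode color components gives the estimated IR proportion. The vector representation and squared-error metric are assumptions of this sketch.

```python
def estimate_ir_proportion(day_illuminant, ir_signature, night_measurement,
                           candidate_levels):
    """Return the candidate IR level whose simulated mix of day-mode
    illuminant and IR light best matches the night-mode measurement."""
    def mix(level):
        # characteristic values of day illuminant plus `level` units of IR
        return [d + level * i for d, i in zip(day_illuminant, ir_signature)]

    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, night_measurement))

    return min(candidate_levels, key=lambda lvl: dist(mix(lvl)))
```

The remaining (non-IR) part of the night measurement then indicates the amount of visible light for the mode-switch decision.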
Abstract: A method for forming a combined image frame of a combined video stream comprises: capturing image frames of first and second video streams; encoding image data of the image frames of the first and second video streams, wherein each image frame of the first and second video streams is respectively encoded into first and second encoded data comprising a plurality of rows, wherein each row has a height of a single coding unit and a width equal to a width of the image frame and is encoded as one or more slices; and combining the first and second encoded data into combined encoded data by interleaving rows of the first and second encoded data, the combined encoded data representing the combined image frame of the combined video stream.
Type:
Grant
Filed:
November 19, 2021
Date of Patent:
August 8, 2023
Assignee:
Axis AB
Inventors:
Viktor Edpalm, Alexander Toresson, Johan Palmaeus, Jonas Cremon
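The combining step in the abstract above can be sketched by modeling each encoded row (one coding unit high, one or more slices) as an opaque byte string and alternating rows from the two frames. The alternating geometry and names are assumptions of this sketch, not the patented bitstream layout.

```python
def interleave_rows(rows_a, rows_b):
    """Interleave the encoded rows of two frames into one combined frame.

    Each element is the encoded data (slices) of one coding-unit-high row;
    the result alternates a-row, b-row, a-row, b-row, ...
    """
    combined = []
    for ra, rb in zip(rows_a, rows_b):
        combined.append(ra)
        combined.append(rb)
    return combined
```

Because each row is self-contained as slices, the decoder can reconstruct the combined frame without re-encoding either source stream.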
Abstract: A dome camera has a base, two camera heads and a dome cover. The base is configured for mounting the dome camera to a surface. The two camera heads are arranged on the base. The dome cover is arranged over the camera heads and the base. The dome cover includes at least two dome-shaped sections, wherein each dome-shaped section covers a camera head, and a center section joining the dome-shaped sections. The dome cover is formed in a single piece.
Type:
Grant
Filed:
September 23, 2020
Date of Patent:
August 8, 2023
Assignee:
Axis AB
Inventors:
Andreas Hertzman, Åke Södergård, Magnus Ainetoft
Abstract: An image processing device 300, a non-transitory computer readable storage medium, a monitoring camera 200 and a method 100 of pre-processing images of a video stream before encoding the video stream are disclosed. The images are obtained S110, wherein the obtained images have a first resolution. The obtained images are subsampled S120 to intermediate images having a second resolution lower than the first resolution and lower than a third resolution. The intermediate images are upsampled S130 to output images having the third resolution, wherein the third resolution is the same for all images of the video stream.
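The pre-processing pipeline above (subsample S120, then upsample S130 to a common third resolution) can be sketched with nearest-neighbour scaling on 2-D lists; the real method would use proper resampling filters, so this is only an illustration of the resolution flow.

```python
def scale(image, out_h, out_w):
    """Nearest-neighbour resample of a 2-D list of pixel values."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def preprocess(image, intermediate, output):
    """Subsample to the `intermediate` (h, w) resolution, then upsample to
    the common `output` (h, w) resolution used for the whole stream."""
    small = scale(image, *intermediate)    # S120: subsample
    return scale(small, *output)           # S130: upsample
```

Subsampling first discards detail, so the upsampled output images are cheaper to encode than the originals while sharing one resolution.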
Abstract: An arrangement for determining an amount of light reaching an image sensor of a video camera is disclosed. The video camera comprises an imaging lens system guiding a beam path towards an image sensor and has an aperture plane where a variable aperture is arranged. The inventive arrangement comprises a light sensor arranged to probe light intensity continuously from a portion of the beam path, which portion is located in or near the aperture plane of the imaging lens system.
Inventors:
Ola Andersson, Kim Nordkvist, Johan Bergsten, Johan Widerdal, Jakob Holmquist, Jonas Sjögren, Sebastian Engwall, Mariano Vozzi, Christian Jacobsson, Mikael Persson