Abstract: A method, system, and computer-readable medium for synchronizing video are described. The system captures video data with a camera, the video data including a first video data segment and a second video data segment. When a network between the camera and a hub is insufficient to allow downstream real-time streaming of the video data, the system stores the first video data segment on a first storage. When the network is sufficient to allow downstream real-time streaming of the video data, the system transfers the second video data segment from the camera to the hub, reads the first video data segment from the first storage, and transfers the first video data segment to the hub. The system stores the video data segments onto a second storage such that a non-real-time playback from the second storage shows the first video data segment and the second video data segment in sequence.
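The store-and-forward behavior described in this abstract can be sketched as follows; the class and method names are illustrative, not taken from the patent:

```python
from collections import deque

class CameraBuffer:
    """Minimal sketch: buffer segments locally while the network is
    insufficient, forward them when it recovers (hypothetical API)."""
    def __init__(self):
        self.local_storage = deque()   # first storage, on or near the camera
        self.hub_storage = []          # second storage, at the hub

    def capture(self, segment, timestamp, network_ok):
        if network_ok:
            # Real-time path: transfer the segment downstream immediately,
            # then drain any segments buffered while the link was down.
            self.hub_storage.append((timestamp, segment))
            while self.local_storage:
                self.hub_storage.append(self.local_storage.popleft())
        else:
            # Link insufficient: keep the segment on the first storage.
            self.local_storage.append((timestamp, segment))

    def playback(self):
        # Non-real-time playback shows the segments in capture order,
        # regardless of the order in which they reached the hub.
        return [seg for _, seg in sorted(self.hub_storage)]
```

Ordering by capture timestamp at playback is what lets the late-arriving first segment still appear before the second segment, as the abstract requires.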
Abstract: There is provided a method performed in a multi-sensor video camera having a first and a second sensor with partly overlapping fields of view. A first and a second received video frame being simultaneously captured each has a non-overlapping portion and an overlapping portion. A frame of a first video stream is generated by joining together image data from the non-overlapping portions of the first and the second video frame with image data from the overlapping portion of the first video frame only, and a frame of a second video stream is generated to include image data from the overlapping portion of at least the second video frame. The frame of the first video stream and the frame of the second video stream are processed in parallel, wherein the processing of the frame of the second video stream includes preparing an overlay based on the image data from the overlapping portion of at least the second video frame.
Abstract: The present invention concerns a method for encoding a video sequence, comprising the following steps by a processing unit of an encoding device: splitting a digital image from the video sequence into blocks of values; for each block: transforming the values of the block into transform coefficients; organizing these transform coefficients into several sets of transform coefficients; quantizing the transform coefficients into quantized coefficients; encoding the block using the quantized coefficients; encoding the video sequence based on the encoding of the blocks; wherein the quantizing step further comprises, for quantized coefficients corresponding to one set of transform coefficients: comparing a sum value representing a result of summing magnitudes of the quantized coefficient values with a threshold, the threshold depending on the number of summed quantized coefficients, the quantized coefficient magnitudes lying within a predefined range; and setting the quantized coefficients to zero.
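The quantization sub-step can be illustrated with a small sketch; the per-coefficient threshold factor and the zeroing condition are assumptions for illustration only:

```python
def zero_low_energy_set(quantized, threshold_per_coeff):
    """Zero out a set of quantized coefficients when the sum of their
    magnitudes falls below a threshold that depends on how many
    coefficients were summed (illustrative policy)."""
    magnitude_sum = sum(abs(c) for c in quantized)
    # Threshold scales with the number of summed quantized coefficients.
    threshold = threshold_per_coeff * len(quantized)
    if magnitude_sum < threshold:
        return [0] * len(quantized)
    return quantized
```

Zeroing a whole low-energy set of coefficients costs almost nothing visually but lets the entropy coder spend far fewer bits on that block.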
Abstract: The present invention relates to a system and a method for encoding a video stream. The encoding includes determining a level of relevance for areas in an image frame to be included in the video stream, determining a block size value for coding blocks in the image frame, the block size value depending on the level of relevance determined for the area including each coding block, and encoding the image frame using coding block sizes based on the determined block size values for each of the coding blocks.
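The relevance-to-block-size mapping can be sketched as below; the three relevance levels and the specific block sizes are assumptions, not values from the patent:

```python
def choose_block_sizes(relevance_map):
    """Map a per-area relevance level to a coding-block size.
    Higher relevance -> smaller blocks, i.e. finer encoding granularity
    where the image content matters most (illustrative mapping)."""
    size_by_level = {0: 64, 1: 32, 2: 16}
    return [size_by_level[level] for level in relevance_map]
```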
Abstract: A method and system for monitoring video data based on gaze is disclosed. The method may include receiving a video stream generated by a camera and presenting the received video stream on a display. The method may further include receiving a notification signal triggered by an event and determining that the video stream is associated with the notification signal. The method may further include detecting a gaze point of an operator viewing the display and determining at least one parameter associated with the gaze point. The method may include controlling a state of an alarm associated with the notification signal based upon the at least one parameter associated with the gaze point.
Abstract: A video processing device which generates motion metadata for encoded video comprises a decoder configured to decode frames of an encoded video into image frames, and processing circuitry configured to execute a motion metadata deriving operation on image frames decoded by the decoder. The motion metadata deriving operation comprises: a dividing function configured to divide a current image frame into a mesh of cells, wherein each cell comprises multiple image pixels; a comparison function configured to determine a metric of change for each cell by comparing pixel data of each cell with pixel data of a correspondingly positioned cell of a previous and/or subsequent image frame; and a storing function configured to store the metric of change for each cell as the motion metadata related to the current image frame.
Type:
Application
Filed:
October 28, 2019
Publication date:
April 30, 2020
Applicant:
Axis AB
Inventors:
Viktor A Andersson, Moa Leonhardt, Jonas Hakansson
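The cell-mesh comparison in the motion-metadata abstract above can be sketched as follows; mean absolute pixel difference is used as the metric of change, which is one possible choice since the abstract leaves the metric open:

```python
def motion_metadata(current, previous, cell_h, cell_w):
    """Divide the current frame (2-D list of pixel values) into a mesh of
    cells and compute a per-cell metric of change against the
    correspondingly positioned cell of the previous frame."""
    rows, cols = len(current), len(current[0])
    metadata = {}
    for r0 in range(0, rows, cell_h):
        for c0 in range(0, cols, cell_w):
            # Mean absolute difference over the pixels of this cell.
            diffs = [abs(current[r][c] - previous[r][c])
                     for r in range(r0, min(r0 + cell_h, rows))
                     for c in range(c0, min(c0 + cell_w, cols))]
            metadata[(r0 // cell_h, c0 // cell_w)] = sum(diffs) / len(diffs)
    return metadata
```

Storing only one number per cell, rather than per pixel, keeps the metadata compact enough to attach to every frame.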
Abstract: A method, system, and computer program product of encoding a digital image comprising a privacy mask. Information representative of pixels in the digital image is received. The pixels are grouped into encoding units. Information representative of a privacy mask area in which a privacy mask is to be applied on the image is also received. All encoding units that are at least partially located within the privacy mask area are identified, and the privacy mask area is extended to be aligned with the identified encoding units. For each encoding unit, a respective quantization parameter to be used for encoding the image is determined. The privacy mask is applied in the extended privacy mask area of the image, and the image with the applied privacy mask is encoded using the determined quantization parameters. The digital image encoding system may be included in a camera.
Type:
Grant
Filed:
December 20, 2017
Date of Patent:
April 28, 2020
Assignee:
Axis AB
Inventors:
Viktor Edpalm, Song Yuan, Xing Danielsson Fan
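The privacy-mask extension step in the abstract above amounts to snapping a rectangle outward to the encoding-unit grid; the 16-pixel unit size here is an assumption, since the patent does not fix one:

```python
def align_mask_to_units(x0, y0, x1, y1, unit=16):
    """Extend a privacy-mask rectangle (x0, y0)-(x1, y1) so its edges
    align with the grid of encoding units, covering every unit the
    original mask touches even partially."""
    ax0 = (x0 // unit) * unit        # snap left edge down to the grid
    ay0 = (y0 // unit) * unit        # snap top edge down to the grid
    ax1 = -(-x1 // unit) * unit      # snap right edge up (ceiling division)
    ay1 = -(-y1 // unit) * unit      # snap bottom edge up (ceiling division)
    return ax0, ay0, ax1, ay1
```

Aligning the mask with whole encoding units is what makes it possible to assign those units their own quantization parameter.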
Abstract: A method for finding one or more candidate digital images that are likely to depict a specific object, comprising: receiving an object digital image depicting the specific object; determining, using a classification subnet of a convolutional neural network, a class for the specific object depicted in the object digital image; selecting, based on the determined class for the specific object depicted in the object digital image, a feature vector generating subnet from a plurality of feature vector generating subnets; determining, by the selected feature vector generating subnet, a feature vector of the specific object depicted in the object digital image; and locating one or more candidate digital images that are likely to depict the specific object by comparing the determined feature vector with feature vectors registered in a database, wherein each registered feature vector is associated with a digital image.
Type:
Grant
Filed:
September 6, 2018
Date of Patent:
April 28, 2020
Assignee:
Axis AB
Inventors:
Niclas Danielsson, Simon Molin, Markus Skans, Jakob Grundström
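The classify-then-match pipeline in the abstract above can be sketched as below; all callables, the database layout, and the use of Euclidean distance for the comparison are assumptions of this sketch:

```python
def find_candidates(query_image, classifier, subnets, database, top_k=3):
    """Classify the object, pick the class-specific feature-vector
    generating subnet, then rank database images by feature distance."""
    cls = classifier(query_image)          # classification subnet
    feature = subnets[cls](query_image)    # class-specific feature subnet

    def distance(vec):
        # Euclidean distance between feature vectors (one possible metric).
        return sum((a - b) ** 2 for a, b in zip(feature, vec)) ** 0.5

    # database: list of (image_id, feature_vector) pairs
    ranked = sorted(database, key=lambda entry: distance(entry[1]))
    return [image_id for image_id, _ in ranked[:top_k]]
```

Selecting a per-class subnet lets each feature extractor specialize, so distances are only ever compared between features produced by the same subnet.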
Abstract: A method includes receiving a first set of sensor data including data representing an object or an event in a monitored environment, and receiving a second set of sensor data representing the same time period as the first set of sensor data. Data representing the first set, including data representing the object or the event, is input to a tutor classifier, which generates a classification of the object or event. The second set of sensor data and the classification generated by the tutor classifier are received at an apprentice classifier training process, which trains the apprentice classifier using the second set of sensor data as input and the classification received from the tutor classifier as a ground truth for the classification of the second set of sensor data.
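The tutor/apprentice arrangement is a teacher-student training loop over time-aligned sensor data; `apprentice_update` below stands in for one training step and is an assumption of this sketch:

```python
def train_apprentice(tutor, apprentice_update, sensor_pairs):
    """For each pair of time-aligned sensor data sets, let the tutor
    classify the first set and use that classification as the ground
    truth for training the apprentice on the second set."""
    for first_set, second_set in sensor_pairs:
        label = tutor(first_set)              # classification from the tutor
        apprentice_update(second_set, label)  # train apprentice on second set
```

Because the two sets cover the same time period, the tutor's label for the first sensor's data is a plausible ground truth for the second sensor's data, even though the apprentice never sees the first set.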
Abstract: The present invention relates to an emergency notification system where indicators are mounted on or in a building in such a way that optical or thermal signals emitted from the indicators form a time-variant indication detectable outside the building of an emergency event taking place inside the building. Sensors detecting a predetermined sound are mounted inside the building and are each connected to a nearby indicator. When a sensor makes a detection, it sends event information to its associated indicator which will prompt the indicator to emit a first optical or thermal signal. Based on a signal from a timer connected to the indicator, a property of the first signal will change after a certain time has passed, thereby providing the time-variant indication.
Type:
Application
Filed:
September 24, 2019
Publication date:
April 23, 2020
Applicant:
Axis AB
Inventors:
Ingemar Larsson, Anders Hansson, Daniel Andersson
Abstract: A method for interference reduction in a stationary radar unit of a frequency-modulated continuous-wave (FMCW) type is provided. A sequence of beat signals is received, and a reference beat signal is calculated as an average or a median of one or more of the beat signals in the sequence. By comparing a difference between a beat signal and the reference beat signal, or a derivative of the difference, to one or more thresholds, a segment which is subject to interference is identified. The segment of the beat signal is replaced by one or more of a corresponding segment of an adjacent beat signal in the sequence, and a corresponding segment of the reference beat signal.
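The interference repair described above can be sketched on beat signals represented as lists of samples; this version uses the element-wise median as the reference and replaces deviating samples from the reference signal (the abstract also allows replacement from an adjacent beat signal in the sequence):

```python
import statistics

def suppress_interference(beats, index, threshold):
    """Repair beat signal `beats[index]` by comparing it against a
    reference beat signal (element-wise median of the sequence) and
    replacing any segment that deviates beyond `threshold`."""
    n = len(beats[index])
    reference = [statistics.median(b[i] for b in beats) for i in range(n)]
    repaired = list(beats[index])
    for i in range(n):
        if abs(repaired[i] - reference[i]) > threshold:
            # Segment subject to interference: replace with the reference.
            repaired[i] = reference[i]
    return repaired
```

A median reference is robust here because an interference burst corrupts only a few beat signals in the sequence, so it barely shifts the per-sample median.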
Abstract: A fastening arrangement (100), a fastening arrangement kit comprising such a fastening arrangement and an electronic device, and use of such a fastening arrangement for mounting the electronic device. The fastening arrangement comprises a mounting support (200) for attachment to a mounting surface. The electronic device is attached to the mounting support using a fastener portion. The fastener portion forms part of a connector (400), which is attached to the mounting support by securing an H-shaped attachment portion in an elongated opening. The H-shaped structure is rotated in place in the elongated opening and fixed using at least two stops positioned on either side of the attachment portion. The mounting support comprises one of the H-shaped attachment portion and the elongated opening, and the connector comprises the other.
Abstract: A method and an encoder for encoding a video stream in a video coding format supporting auxiliary frames, where such auxiliary frames, in conjunction with the frames that reference them, can be used for rate control. The image data of the auxiliary frames comprises a downscaled version of image data captured by a video capturing device, and motion vectors of the frame referring to the auxiliary frame are calculated to scale the downscaled image data back up to the intended resolution.