Abstract: A vibration measurement system comprises an image capturing apparatus, a distance measuring apparatus, a sensor that outputs a signal according to an inclination of the image capturing apparatus relative to the vertical direction, and a vibration measurement apparatus. The vibration measurement apparatus calculates, based on the signal output by the sensor, the angle formed by the normal of an image capturing surface of the image capturing apparatus and the normal of the measurement target surface that the image capturing apparatus shoots; uses the calculated angle to convert the captured image into the image that would be obtained were the normal of the measurement target surface coincident with the normal of the image capturing surface; and measures vibration of the structure using the converted image and the measured distance from the image capturing apparatus to the measurement target surface.
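The perspective conversion described in this abstract can be sketched as a homography built from the measured tilt angle. The sketch below assumes a simple pinhole camera model and a rotation about the horizontal image axis; the function names and intrinsic parameters (`f`, `cx`, `cy`) are illustrative, not from the patent.

```python
import numpy as np

def rectifying_homography(theta_rad, f, cx, cy):
    """Homography that warps the captured image toward the view that would
    be seen if the optical axis were aligned with the surface normal.
    Illustrative pinhole model: f is focal length in pixels, (cx, cy) the
    principal point."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    # Rotation by the measured tilt angle about the horizontal image axis.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, c, -s],
                  [0.0, s, c]])
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply the homography to one image point."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With zero tilt the homography reduces to the identity, which is a quick sanity check on the construction.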
Abstract: Methods and systems, including computer programs encoded on a computer storage medium, for training a detection model for surveillance devices using semi-supervised learning. In one aspect, the methods include receiving imaging data collected by a camera of a scene within a field of view of the camera. Annotated training data is generated from the imaging data and one or more detection models are trained using the annotated training data. Based on a set of performance parameters, an optimized detection model is selected of the one or more detection models, and the optimized detection model is provided to the camera.
Type:
Grant
Filed:
June 5, 2020
Date of Patent:
November 2, 2021
Assignee:
Objectvideo Labs, LLC
Inventors:
Allison Beach, Donald Gerard Madden, Narayanan Ramanathan
Abstract: Systems, devices, and methods redact one or more light-emitting screens in data recorded on a recording device. The redaction may include receiving recorded data comprising a plurality of pixel values. The redaction may include detecting one or more light-emitting screens in the received image. The redaction may include redacting a subset of the pixel values from the recorded data associated with the one or more detected light-emitting screens. The redaction may be commonly applied to multiple frames of recorded data through the use of a unique identifier assigned to a same detected light-emitting screen. A same light-emitting screen may be tracked across multiple non-sequential or sequential frames and assigned a same unique identifier.
Type:
Grant
Filed:
June 5, 2019
Date of Patent:
September 14, 2021
Assignee:
Axon Enterprise, Inc.
Inventors:
Anh Tuan Nguyen, Noah Spitzer-Williams, Jacob Hershfield
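The per-screen unique identifier and redaction steps in the abstract above can be sketched with a simple overlap-based tracker. The IoU threshold, box format, and class names below are illustrative assumptions, not details from the patent.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class ScreenTracker:
    """Assigns a persistent unique ID to the same detected screen
    across frames (illustrative: matching by bounding-box overlap)."""
    def __init__(self, iou_threshold=0.5):
        self.next_id = 0
        self.tracks = {}  # id -> last seen box
        self.iou_threshold = iou_threshold

    def assign(self, box):
        for tid, prev in self.tracks.items():
            if iou(prev, box) >= self.iou_threshold:
                self.tracks[tid] = box
                return tid
        tid = self.next_id
        self.next_id += 1
        self.tracks[tid] = box
        return tid

def redact(frame, box, value=0):
    """Overwrite the pixel values inside a detected screen region."""
    x1, y1, x2, y2 = box
    for y in range(y1, y2):
        for x in range(x1, x2):
            frame[y][x] = value
    return frame
```

Because the ID persists while the box overlaps its previous position, the same redaction can be applied consistently to sequential or non-sequential frames containing that screen.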
Abstract: A method of assessing a feature on a patient's skin surface includes capturing (i) an image of the patient's skin surface with a camera of a portable capture device and (ii) range information with an auxiliary range measurement device that is attached to the portable capture device. Based on the range information, a single range value can be determined between the capture device and the patient's skin surface. In some embodiments, a scale can be associated with the image based on the single range value.
Type:
Grant
Filed:
November 17, 2017
Date of Patent:
September 14, 2021
Assignee:
ARANZ Healthcare Limited
Inventors:
Oliver John Dickie, Philip John Barclay, Brent Stephen Robinson, Russell William Watson
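Associating a scale with the image from the single range value, as the abstract above describes, follows from the pinhole relationship between object distance and magnification. This is a minimal sketch assuming a pinhole model; the parameter names and numeric values are illustrative, not from the patent.

```python
def mm_per_pixel(range_mm, focal_length_mm, pixel_pitch_mm):
    """Physical size on the skin surface covered by one image pixel,
    under a simple pinhole model: scale = range * pitch / focal length."""
    return range_mm * pixel_pitch_mm / focal_length_mm

def feature_size_mm(pixel_extent, range_mm, focal_length_mm, pixel_pitch_mm):
    """Estimated physical extent of a skin feature that spans
    `pixel_extent` pixels in the captured image."""
    return pixel_extent * mm_per_pixel(range_mm, focal_length_mm, pixel_pitch_mm)
```

For example, at a 400 mm range with a 4 mm focal length and 2 µm pixels, each pixel covers 0.2 mm on the surface, so a 150-pixel feature is about 30 mm across.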
Abstract: A monitoring server of a platform-side monitoring system analyzes images taken by a plurality of platform monitoring cameras installed on a platform and, in the case where an object of attention is detected from an image, transmits information on the detection results to a car-side monitoring system. On the basis of the detection result information received from the platform-side monitoring system, a control unit of the car-side monitoring system causes the monitor to display the image taken by the car camera corresponding to the platform monitoring camera that captured the object of attention, in a mode different from that of the images taken by the other car cameras.
Abstract: A deep learning based compression (DLBC) system applies trained models to compress binary code of an input image to a target codelength. For a set of binary codes representing the quantized coefficients of an input image, the DLBC system applies a first model that is trained to predict feature probabilities based on the context of each bit of the binary codes. The DLBC system compresses the binary code via adaptive arithmetic coding based on the determined probability of each bit. The compressed binary code represents a balance between a reconstruction quality of a reconstruction of the input image and a target compression ratio of the compressed binary code.
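The core of the adaptive arithmetic coding step above is that each bit costs approximately -log2 of its predicted probability. The sketch below computes that ideal codelength with a trivial counting model standing in for the trained context model; all names and the smoothing scheme are illustrative assumptions.

```python
import math

def ideal_codelength(bits, prob_model):
    """Bits an ideal arithmetic coder would spend on `bits`, given a
    model returning P(bit == 1) from the context of earlier bits."""
    total = 0.0
    for i, b in enumerate(bits):
        p1 = prob_model(bits[:i])
        p = p1 if b == 1 else 1.0 - p1
        total += -math.log2(p)
    return total

def counting_model(context):
    """Laplace-smoothed frequency of ones seen so far; a toy stand-in
    for the trained per-bit probability model in the abstract."""
    ones = sum(context)
    return (ones + 1) / (len(context) + 2)
```

A highly predictable bitstream (e.g. all ones) costs far fewer bits under the adaptive model than the one-bit-per-symbol cost of a uniform model, which is exactly the gain adaptive arithmetic coding exploits.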
Abstract: A video decoding method includes determining whether an ultimate motion vector expression (UMVE) mode is allowed for an upper data unit including a current block, when the UMVE mode is allowed for the upper data unit, determining whether the UMVE mode is applied to the current block, when the UMVE mode is applied to the current block, determining a base motion vector of the current block, determining a correction distance and a correction direction for correction of the base motion vector, determining a motion vector of the current block by correcting the base motion vector according to the correction distance and the correction direction, and reconstructing the current block based on the motion vector of the current block.
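The correction of the base motion vector by a signalled distance and direction, as described above, can be sketched in a few lines. The direction/distance tables below are illustrative placeholders, not the values defined in any particular codec specification.

```python
# Illustrative signalling tables for a UMVE-style mode: four correction
# directions and a small set of correction distances (e.g. quarter-pel units).
DIRECTIONS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}
DISTANCES = [1, 2, 4, 8, 16]

def corrected_mv(base_mv, distance_idx, direction_idx):
    """Determine the current block's motion vector by correcting the
    base motion vector according to the signalled distance and direction."""
    dx, dy = DIRECTIONS[direction_idx]
    d = DISTANCES[distance_idx]
    return (base_mv[0] + d * dx, base_mv[1] + d * dy)
```

The decoder then reconstructs the current block using the corrected motion vector, exactly as a conventional inter prediction would.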
Abstract: Depth imagers can implement time-of-flight operations to measure depth or distance of objects. A depth imager can emit light onto a scene and sense light reflected back from the objects in the scene using an array of sensors. Timing of the reflected light hitting the array of sensors gives information about the depth or distance of objects in the scene. In some cases, corrupting light that is outside of a field of view of a pixel in the array of sensors can hit the pixel due to internal scattering or internal reflections occurring in the depth imager. The corrupting light can corrupt the depth or distance measurement. To address this problem, an improved depth imager can isolate and measure the corrupting light due to internal scattering or internal reflections occurring in the depth imager, and systematically remove the measured corrupting light from the depth or distance measurement.
Type:
Grant
Filed:
April 17, 2019
Date of Patent:
August 17, 2021
Assignee:
ANALOG DEVICES, INC.
Inventors:
Erik D. Barnes, Charles Mathy, Sefa Demirtas
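The removal step in the abstract above can be sketched by modelling the corrupting light as an additive component that is measured separately and subtracted before converting time of flight to distance. The additive assumption and function names below are illustrative, not the patent's actual signal model.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_round_trip(t_seconds):
    """Distance implied by a measured round-trip time of flight."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

def remove_corruption(measured_signal, corruption_estimate):
    """Systematically remove the isolated internal-scatter component
    from each pixel's raw measurement (assumed to add linearly)."""
    return [m - c for m, c in zip(measured_signal, corruption_estimate)]
```

Isolating the corruption first (for example, with pixels whose direct field of view is blocked) is what makes the per-pixel subtraction meaningful.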
Abstract: A surveillance method using a POS apparatus includes receiving, by a point-of-sale (POS) apparatus, transaction information from a user, transmitting, by the POS apparatus, the transaction information to a camera management apparatus, transmitting, by the camera management apparatus, the transaction information to a camera, capturing, by the camera, an image of a surveillance region corresponding to the transaction information based on receiving the transaction information from the camera management apparatus, transmitting, by the camera, the captured image to the camera management apparatus, and storing, by the camera management apparatus, the image transmitted by the camera and the transaction information, respectively, in a storage.
Type:
Grant
Filed:
April 10, 2019
Date of Patent:
August 17, 2021
Assignee:
HANWHA TECHWIN CO., LTD.
Inventors:
Kyung Duk Kim, Hyun Ho Kim, Min Jung Shim
Abstract: A method for providing content includes determining a viewing direction of a user viewing a content item comprising a plurality of video streams, selecting two or more video streams of the content item based on the viewing direction of the user and directional data associated with the plurality of video streams, decoding the two or more video streams to form two or more decoded video streams, stitching the two or more decoded video streams to form a combined image, and causing the combined image to be displayed to the user. Systems perform similar steps and non-transitory computer readable storage mediums each store one or more computer programs.
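The stream-selection step above, matching the user's viewing direction against each stream's directional data, can be sketched as a nearest-angle search. Representing directions as single azimuth angles is a simplifying assumption for illustration.

```python
def angular_distance(a_deg, b_deg):
    """Smallest absolute angle between two azimuths, in degrees."""
    d = abs(a_deg - b_deg) % 360
    return min(d, 360 - d)

def select_streams(view_deg, stream_directions, k=2):
    """Pick the k streams whose capture direction is closest to the
    user's viewing direction. `stream_directions` is a list of
    (stream_name, azimuth_deg) pairs (illustrative representation)."""
    ranked = sorted(stream_directions,
                    key=lambda s: angular_distance(view_deg, s[1]))
    return [name for name, _ in ranked[:k]]
```

The selected streams would then be decoded and stitched into the combined image, as the abstract describes; only decoding the streams actually in view is what saves work.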
Abstract: A method of encoding/decoding a picture of a video signal includes selecting a block of the picture for decoding, comparing a motion vector associated with the selected block to a motion vector associated with a neighbouring block that is adjacent to the block, and determining whether to use motion vectors associated with the neighbouring block in encoding/decoding of the block based on the comparison of the motion vector associated with the selected block and the motion vector associated with the neighbouring block.
Type:
Grant
Filed:
December 14, 2018
Date of Patent:
August 3, 2021
Assignee:
Telefonaktiebolaget LM Ericsson (publ)
Inventors:
Ruoyang Yu, Kenneth Andersson, Per Wennersten
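The comparison-based decision in the abstract above can be sketched as a similarity test between the two motion vectors. The L1 distance metric and threshold value here are illustrative choices, not the patent's specific criterion.

```python
def mv_difference(mv_a, mv_b):
    """L1 distance between two motion vectors given as (x, y) tuples."""
    return abs(mv_a[0] - mv_b[0]) + abs(mv_a[1] - mv_b[1])

def use_neighbour_mvs(current_mv, neighbour_mv, threshold=4):
    """Decide whether the neighbouring block's motion vectors are close
    enough to the current block's to be used in its encoding/decoding
    (threshold in the same units as the vectors; value illustrative)."""
    return mv_difference(current_mv, neighbour_mv) <= threshold
```

When the vectors are similar, reusing the neighbour's motion information is cheap to signal; when they diverge, the block is coded without it.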
Abstract: Disclosed herein are embodiments of imaging biological specimens. An imaging system can include a microscope for directly viewing the biological specimen and a multi-spectral imaging apparatus for outputting digitally enhanced images, near-video rate imaging, and/or videos of the specimen. An imaging system can include a digital scanner that digitally processes images to produce a composite image with enhanced color contrast of features of interest.
Abstract: Presented herein are techniques for a low-complexity process of generating an artificial frame that can be used for prediction. At least a first reference frame and a second reference frame of a video signal are obtained. A synthetic reference frame is generated from the first reference frame and the second reference frame. Reference blocks from each of the first reference frame and the second reference frame are combined to derive an interpolated block of the synthetic reference frame.
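The block combination step above, deriving an interpolated block of the synthetic frame from the two reference frames, can be sketched as a per-sample weighted blend. Equal weighting and co-located blocks are simplifying assumptions; the actual technique may use motion-projected reference blocks.

```python
def interpolated_block(block_ref0, block_ref1, w0=0.5):
    """Blend the corresponding reference blocks from the first and second
    reference frames into one block of the synthetic reference frame.
    Blocks are lists of rows of sample values; w0 weights the first frame."""
    w1 = 1.0 - w0
    return [[w0 * a + w1 * b for a, b in zip(row0, row1)]
            for row0, row1 in zip(block_ref0, block_ref1)]
```

Because the synthetic frame is built from data already at the decoder, it can serve as an extra prediction reference without transmitting any new picture.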
Abstract: An information processing apparatus generates, for two or more videos including an overlapping region that overlap each other, overlapping region information that indicates the overlapping region, and transmits the overlapping region information to a display control apparatus that generates a video by concatenating the two or more videos. The display control apparatus obtains the overlapping region information from another apparatus, generates a video by concatenating the two or more videos based on the overlapping region information, and causes the display unit to display the generated video.
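The concatenation step above can be sketched for the simple case of two horizontally adjacent videos whose overlapping region is a known number of columns. Treating the overlap as a fixed column count per scanline is an illustrative simplification of the overlapping region information.

```python
def stitch_rows(row_a, row_b, overlap):
    """Join one scanline from each video, keeping the columns in the
    overlapping region only once (overlap width taken from the
    overlapping region information; representation illustrative)."""
    return row_a + row_b[overlap:]

def stitch_frames(frame_a, frame_b, overlap):
    """Concatenate two frames of equal height along the horizontal axis."""
    return [stitch_rows(ra, rb, overlap) for ra, rb in zip(frame_a, frame_b)]
```

Transmitting only the overlap description, rather than pre-stitched video, lets the display control apparatus do the concatenation locally.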
Abstract: In one example, a method for video coding includes receiving input data associated with a current block in an image frame, generating an inter predictor of the current block, and generating an intra predictor of the current block based on samples of neighboring pixels and an intra prediction mode that locates the samples of neighboring pixels. The method further includes generating a final predictor of the current block by combining the inter predictor and the intra predictor according to one or more intra weight coefficients associated with the intra prediction mode, and encoding or decoding the current block based on the final predictor to output encoded video data or a decoded block. The one or more intra weight coefficients indicate one or more ratios that corresponding one or more portions of the intra predictor are combined with the inter predictor, respectively.
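The weighted combination of the inter and intra predictors described above can be sketched per sample. Applying one weight uniformly across the block and using an integer weight out of a power-of-two scale are illustrative simplifications; the abstract allows different weights for different portions of the intra predictor.

```python
def combined_predictor(inter, intra, intra_weight, scale=8):
    """Generate the final predictor by blending the inter and intra
    predictors sample by sample:
        final = (w * intra + (scale - w) * inter) / scale
    with integer weight w out of `scale` (values illustrative).
    Predictors are lists of rows of sample values."""
    return [[(intra_weight * pi + (scale - intra_weight) * pe) // scale
             for pe, pi in zip(row_e, row_i)]
            for row_e, row_i in zip(inter, intra)]
```

A larger `intra_weight` pulls the final predictor toward the intra hypothesis, which is typically useful for samples near the already-reconstructed neighbouring pixels.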
Abstract: A novel 3D mixed-reality space and experience construction sharing system is configured to provide a novel part-holographic and part-physical object-based immersive user experience environment that enables an intermixture of computer-generated lifelike holographic objects and real objects to be synchronized and correlated to a particular physical space (i.e. as a “mixed-reality” (MR) environment) for vividly-interactive user experiences during a user's visit to the particular physical space. Moreover, the novel 3D mixed-reality space and experience construction sharing system accommodates a user interaction designer to construct and configure a mixed-reality (MR) environment and various potential user interactivities for a geographic landmark, a museum, or another tourist destination, and subsequently share the MR environment with other user interaction designers and users (e.g. tourists) who visit that tourist destination.
Abstract: A system for capturing omni-stereo videos using a multi-sensor approach includes left cameras, right cameras and a viewing circle. A method of capturing omni-stereo videos using the multi-sensor approach includes the steps of: capturing images of a scene using the left cameras, capturing images of the scene using the right cameras, processing each image from the left cameras and right cameras using a computation method, and obtaining a final omni-stereo frame through the computation method.
Abstract: Methods, devices, apparatus, and systems for video encoding/decoding in which usage of a background picture is synchronized on an encoder side and a decoder side. In this solution, a background picture that is to be used as a reference picture is determined, background-picture indication information is used to indicate a time point from which the background picture is used as the reference picture, the encoder side encodes to-be-coded video pictures by using the background picture as the reference picture from the time point indicated by the background-picture indication information, to generate a primary bitstream, and the encoder side transmits a background-picture bitstream, the background-picture indication information, and the primary bitstream to the decoder side.