Abstract: A method and apparatus for video decoding includes decoding a binary coded syntax element carrying an identification of a picture segment in a high level syntax structure comprising fixed length codewords and reconstructing the picture segment.
Type: Grant
Filed: May 6, 2019
Date of Patent: October 5, 2021
Assignee: TENCENT AMERICA LLC
Inventors: Byeongdoo Choi, Stephan Wenger, Shan Liu
Abstract: A depth measurement assembly (DMA) includes an illumination source that projects pulses of light (e.g., structured light) at a temporal pulsing frequency into a local area. The DMA includes a sensor that captures images of the pulses of light reflected from the local area and determines, using one or more of the captured images, one or more time-of-flight (TOF) phase shifts for the pulses of light. The DMA includes a controller coupled to the sensor and configured to determine a first set of estimated radial distances to an object in the local area based on the one or more TOF phase shifts. The controller determines a second estimated radial distance to the object based on an encoding of structured light and at least one of the captured images. The controller selects an estimated radial distance from the first set of radial distances.
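A single-frequency phase measurement is ambiguous modulo the wrap-around range, which is why the controller obtains a set of candidate radial distances rather than a single one. A minimal sketch of that phase-to-distance step (the constants and function name are illustrative, not from the patent):

```python
# Sketch of TOF phase-to-distance conversion (illustrative only).
# A phase shift measured at one pulsing frequency is consistent with
# many radial distances, because the phase wraps every 2*pi.
import math

C = 299_792_458.0  # speed of light, m/s

def candidate_distances(phase_shift_rad, pulse_freq_hz, max_range_m):
    """Return all radial distances consistent with one TOF phase shift."""
    unambiguous = C / (2.0 * pulse_freq_hz)          # wrap-around range
    base = (phase_shift_rad / (2.0 * math.pi)) * unambiguous
    dists = []
    k = 0
    while base + k * unambiguous <= max_range_m:
        dists.append(base + k * unambiguous)
        k += 1
    return dists

print(candidate_distances(math.pi, 30e6, 15.0))
```

The structured-light estimate described in the abstract then serves to disambiguate among these candidates.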
Abstract: A method for detecting objects ejected from a cabin of a vehicle includes generating window image data of a vehicle window opening of the vehicle with a camera mounted on the vehicle; and processing the window image data with a controller operably connected to the camera to generate ejected object data corresponding to at least one object ejected from the cabin through the vehicle window opening. The method further includes associating the ejected object data with occupant data of an occupant of the vehicle with the controller.
Abstract: Systems and methods for a virtualized computing or cloud-computing network with distributed input devices and at least one remote server computer for automatically analyzing received video, audio and/or image inputs for providing social security and/or surveillance for a surveillance environment, surveillance event, and/or surveillance target.
Abstract: Systems and methods for cloud-based surveillance for an operation area are disclosed. At least two input capture devices, at least one safety control device and at least one user device are communicatively connected to a cloud-based analytics platform. The cloud-based analytics platform automatically generates 3-Dimensional (3D) surveillance data based on received 2-Dimensional (2D) video and/or image inputs, performs advanced analytics, and activates the at least one safety control device based on analytics data from the advanced analytics.
Abstract: An apparatus including a capture device, an illumination device and a processor. The capture device may be configured to generate pixel data corresponding to an exterior view from a vehicle. The illumination device may be configured to generate light for the exterior view. The processor may be configured to generate video frames from the pixel data, perform computer vision operations on the video frames to detect objects in the video frames and generate a control signal. The objects detected may provide data for a vehicle maneuver. The control signal may adjust characteristics of the light generated by the illumination device. The characteristics of the light may be adjusted in response to the objects detected by the processor. The light from the illumination device may facilitate detection of the objects by the processor.
Abstract: A method and apparatus are provided for encoding and decoding image information. The encoding comprises receiving a block of pixels; creating a set of motion vector prediction candidates for the block of pixels; and examining the set to determine if a motion vector prediction candidate is a temporal motion vector prediction, or a spatial motion vector prediction. If the motion vector prediction candidate is a temporal motion vector prediction, the motion vector prediction candidate is kept in the set. If the motion vector prediction candidate is a spatial motion vector prediction, it is examined whether the set comprises a motion vector prediction candidate corresponding with the spatial motion vector prediction; and if so, the motion vector prediction candidate is removed from the set. Once the set is created, one of the candidates from the set is selected to represent a motion vector prediction for the block of pixels.
Type: Grant
Filed: June 4, 2018
Date of Patent: September 14, 2021
Assignee: NOKIA TECHNOLOGIES OY
Inventors: Mehmet Oguz Bici, Jani Lainema, Kemal Ugur
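The candidate-pruning logic in the motion vector prediction abstract above can be sketched as follows (the data representation and function name are assumptions for illustration, not the actual codec implementation): temporal candidates are always kept, while a spatial candidate is dropped when a matching spatial candidate is already in the set.

```python
# Illustrative sketch of motion vector prediction candidate pruning.
# Each candidate is (mv_x, mv_y, kind), with kind 'T' (temporal)
# or 'S' (spatial).
def build_candidate_set(candidates):
    result = []
    seen_spatial = set()
    for mv_x, mv_y, kind in candidates:
        if kind == 'T':
            result.append((mv_x, mv_y, kind))       # temporal: always keep
        elif (mv_x, mv_y) not in seen_spatial:      # spatial: deduplicate
            seen_spatial.add((mv_x, mv_y))
            result.append((mv_x, mv_y, kind))
    return result

print(build_candidate_set([(1, 0, 'S'), (1, 0, 'S'), (1, 0, 'T'), (2, 3, 'S')]))
# -> [(1, 0, 'S'), (1, 0, 'T'), (2, 3, 'S')]: the duplicate spatial
#    candidate is removed, the temporal duplicate is kept
```

One of the surviving candidates is then selected to represent the motion vector prediction for the block.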
Abstract: A monitoring system detects an event happening to a detection target, and includes imaging devices, storage, and a controller. The imaging devices each capture an image of an imaging area including the detection target to generate captured image data indicating a captured image. The storage stores therein a detection range in the captured image. The controller detects a change to the captured image in the detection range based on the captured image data. The controller determines one of the imaging devices that has provided a captured image whose image exhibiting a portion of the detection target has a larger area among the captured images to be a main imaging device. Upon detecting a change to a captured image in the detection range captured by the main imaging device, the controller changes the detection range of the main imaging device so that the detection range encloses the detection target image.
Abstract: In a picture coding device, a significant coefficient information coding controller 706 and an arithmetic encoder 701 code significant difference coefficient information indicating that a difference coefficient value is not zero and significant for each of the difference coefficients in the partial region of the coding target. A difference coefficient value coding controller 707 and the arithmetic encoder 701 code difference coefficient values when significant difference coefficient information is significant for each of pixels in the partial region of the coding target. The significant coefficient information coding controller 706 decides a context for coding the significant difference coefficient information in the partial region of the coding target based on information indicating significance of the difference coefficient in the coded partial region.
Abstract: A method for calibrating image data of an imaging system for a vehicle combination, including a first part of the tractor trailer, which encompasses the imaging system, and a second part which encompasses a calibration object, the second part being mechanically coupled to the first part so as to be movable about at least one axis. The imaging system is configured to at least partially represent the second part. The method includes: providing at least one image of the imaging system, which represents the calibration object; identifying the calibration object within the image; identifying a characteristic variable of the calibration object within the image; determining a deviation of the characteristic variable from a stored characteristic setpoint variable; and generating calibrated image data by transforming the image data of the imaging system, depending on the deviation of the characteristic variable from the stored characteristic setpoint variable, to compensate for the deviation.
Abstract: A method of calculating depth information for a three-dimensional (3D) image includes generating a pattern based on the value of at least one cell included in a two-dimensional (2D) image, projecting the pattern, capturing a reflected image of the pattern, and calculating depth information based on the reflected image of the pattern.
Type: Grant
Filed: October 30, 2018
Date of Patent: August 17, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Andrii Volochniuk, Yong-Chan Keh, Jung-Kee Lee, Sung-Soon Kim, Sun-Kyung Kim, Andrii But, Andrii Sukhariev, Dmytro Vavdiyuk, Konstantin Morozov
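The depth calculation in the structured-light abstract above ultimately reduces to triangulation: depth follows from the disparity between where a pattern cell was projected and where it is observed in the reflected image. A minimal sketch assuming a pinhole model (the focal length and baseline values are hypothetical):

```python
# Illustrative structured-light triangulation sketch (not the patented
# method): depth = focal_length * baseline / disparity.
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.05):
    """Depth in metres from projector-camera disparity in pixels."""
    return (focal_px * baseline_m) / disparity_px

print(depth_from_disparity(10.0))  # -> 2.5 (metres)
```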
Abstract: An example method and hyperspectral imaging (HSI) system for imaging a scene are provided. The method is for imaging the scene with the HSI system including a sensor with a plurality of sensor pixels and a plurality of spectral filters, each of the spectral filters being associated with one of the sensor pixels. The method comprises obtaining a higher-resolution spatial image by illuminating the scene with a first set of wavelengths, wherein each spectral filter passes the first set of wavelengths to the sensor pixel it is associated with. The method further comprises obtaining a lower-resolution hyperspectral image by illuminating the scene with a second set of wavelengths, wherein each spectral filter passes only a subset of the second set of wavelengths to the sensor pixel it is associated with.
Abstract: A vehicle periphery monitoring device acquires first image information of a vehicle rear side and second image information of vehicle rear lateral sides. Second points at infinity of the second image information are made to approximately coincide with a first point at infinity of the first image information, and the first image information and the second image information are combined, and first composite image information is generated. When sensing a request to change a viewpoint toward a vehicle rear lateral side, second composite image information that is viewed with a virtual viewpoint having been moved toward a vehicle lateral side is generated. Further, when a path changing operation of an operation portion that is used in changing a path is completed, third composite image information that is viewed with a virtual viewpoint having been moved toward the side of the path change is generated.
Abstract: The disclosure extends to methods, systems, and computer program products for producing an image in light deficient environments with luminance and chrominance emitted from a controlled light source.
Abstract: A viewing direction may define an angle/visual portion of a spherical video at which a viewing window is directed. A trajectory of viewing direction may include changes in viewing directions for playback of spherical video. Abrupt changes in the viewing directions may result in jerky or shaky views of the spherical video. Changes in the viewing directions may be stabilized to provide stabilized views of the spherical video. Amount of stabilization may be limited by a margin constraint.
Type: Grant
Filed: May 19, 2020
Date of Patent: July 13, 2021
Assignee: GOPRO, INC.
Inventors: Daryl Stimm, William Edward MacDonald, Kyler William Schwartz
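The margin-constrained stabilization in the spherical-video abstract above can be sketched for a single viewing angle (the low-pass filter and parameter names are assumptions for illustration; the abstract does not specify a particular filter):

```python
# Sketch of margin-constrained viewing-direction smoothing (illustrative).
# The stabilized direction follows a running average of the raw viewing
# directions, but is never allowed to drift more than `margin` degrees
# from the actual direction (the margin constraint).
def stabilize(directions, alpha=0.2, margin=10.0):
    smoothed = []
    current = directions[0]
    for target in directions:
        current += alpha * (target - current)            # low-pass filter
        # clamp into [target - margin, target + margin]
        current = max(target - margin, min(target + margin, current))
        smoothed.append(current)
    return smoothed
```

With an abrupt jump from 0 to 40 degrees, the stabilized trajectory lags behind but stays within 10 degrees of the raw direction, trading jerkiness for a bounded, smooth catch-up.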
Abstract: A system and method of content adaptive pixel intensity processing are described. The method includes receiving a predefined set of processed video data configured from the processed video data, deriving range information associated with an original maximum value and an original minimum value for a predefined set of original video data, wherein the predefined set of processed video data is derived from the predefined set of original video data, and adaptively clipping pixel intensity of the predefined set of processed video data to a range derived from the range information, wherein the range information is incorporated in a bitstream and represented in a form of the original maximum value and the original minimum value, prediction values associated with a reference maximum value and a reference minimum value, or a range index associated with a predefined range set.
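The adaptive clipping step above reduces to clamping each processed pixel back into the range signalled in the bitstream. A minimal sketch with illustrative names:

```python
# Sketch of adaptive pixel clipping (names are illustrative, not from
# the standard): processed values are clamped into the signalled
# [original_min, original_max] range.
def clip_pixels(pixels, original_min, original_max):
    return [min(original_max, max(original_min, p)) for p in pixels]

print(clip_pixels([-5, 10, 300], 0, 255))  # -> [0, 10, 255]
```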
Abstract: A depth camera assembly for depth sensing of a local area includes an illumination source, an imaging device, and a controller. The illumination source illuminates a local area with light emitted in accordance with emission instructions generated by the controller. The illumination source includes an array of optical sources and an optical assembly. Operation of each optical source in the array is controllable based in part on the emission instructions. The optical assembly is configured to project the light into the local area. The imaging device captures one or more images of at least a portion of the light reflected from one or more objects in the local area. The controller determines depth information for the one or more objects based in part on the captured one or more images.
Abstract: Provided is a method of motion estimation for processing a video stream comprising a plurality of frames, the method including segmenting at least one frame, from among the plurality of frames, into a plurality of blocks, determining an event density factor for each block included in a frame, wherein the event density factor of the block corresponds to a number of events accumulated in the block across frames in a predetermined time duration, comparing the determined event density factor with a threshold value, estimating a motion vector of the block based on the comparison, and processing the block in the video stream based on the estimated motion vector of the block.
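The event-density gating described in the abstract above can be sketched as follows (the block layout, event representation, and names are assumptions for illustration, not the patented implementation): each block's density is the count of events it accumulated over the time window, and only blocks whose density reaches the threshold proceed to motion estimation.

```python
# Illustrative sketch of per-block event-density computation.
def dense_blocks(events, block_size, threshold):
    """events: list of (x, y) pixel coordinates of accumulated events.

    Returns the (block_x, block_y) indices whose accumulated event
    count reaches the threshold.
    """
    counts = {}
    for x, y in events:
        block = (x // block_size, y // block_size)
        counts[block] = counts.get(block, 0) + 1
    return [b for b, n in counts.items() if n >= threshold]

print(dense_blocks([(1, 1), (2, 3), (3, 2), (40, 40)], 16, 3))
# -> [(0, 0)]: only block (0, 0) accumulated >= 3 events
```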
Abstract: A drive recorder includes: an image acquisition interface that acquires image data captured by a camera mounted to a vehicle; an imaging controller that causes the camera to capture a first image, in which a first imaging condition is used, and a second image, in which a second imaging condition is used, in temporally different frames; an image recorder that records moving image data based on the first image; a passenger detector that detects a passenger in the vehicle; an image processor that generates, based on the second image including the passenger detected by the passenger detector, a passenger image in which visibility of an image portion including the passenger is higher than in the second image; and a display controller that causes a display device to display moving images based on the passenger image.
Abstract: An image decoding method that is performed by a decoding apparatus of the present invention comprises the steps of: receiving 360-degree video information; deriving a projection type of a projected picture based on the 360-degree video information; deriving a weight map of the projected picture based on the projection type; deriving quantisation processing units of the projected picture; deriving DAQP for the respective quantisation processing units based on the weight map; and decoding the respective quantisation processing units based on the DAQP.
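A hedged sketch of deriving a per-unit adaptive quantisation parameter from the projection weight map described above (the offset formula is an illustrative assumption, not the actual derivation, which is defined by the referenced method): low-weight regions, such as the stretched poles of an equirectangular projection, tolerate coarser quantisation, so the QP is raised there.

```python
# Illustrative per-unit adaptive QP derivation from a weight map.
import math

def unit_qp(base_qp, unit_weight):
    """Raise QP in low-weight (oversampled) regions of the projection."""
    offset = round(-3.0 * math.log2(max(unit_weight, 1e-6)))
    return base_qp + offset

print(unit_qp(32, 1.0))  # -> 32: full-weight region keeps the base QP
print(unit_qp(32, 0.5))  # -> 35: half-weight region quantised more coarsely
```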