Abstract: A method of video decoding performed in a video decoder includes receiving a coded video bitstream including signaling information for a current block. The method further includes determining block reconstruction information for the current block based on the signaling information. The method further includes reconstructing the current block using the determined block reconstruction information.
Abstract: The present disclosure relates to deblocking filtering, which may be advantageously applied for block-wise encoding and decoding of images or video signals. In particular, the present disclosure relates to an improved memory management in an automated decision on whether to apply or skip deblocking filtering for a block and to selection of the deblocking filter. The decision is performed on the basis of a segmentation of blocks in such a manner that memory usage is optimized. Preferably, the selection of appropriate deblocking filters is improved so as to reduce computational expense.
Type:
Grant
Filed:
July 27, 2020
Date of Patent:
January 19, 2021
Assignee:
SUN PATENT TRUST
Inventors:
Matthias Narroschke, Semih Esenlik, Thomas Wedi
Abstract: A system for processing a video obtains a prediction table for a reference frame of the video and codes one or more target frames of the video based on the prediction table. The prediction table is a Huffman table of difference values for reference pixels of the reference frame. The difference value for a reference pixel is determined based on an actual value of the reference pixel and a prediction value determined based on respective pixel values of one or more pixels adjacent to the reference pixel. The one or more target frames are coded based on the Huffman table of the reference frame and prediction values of the one or more target frames.
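The prediction scheme above can be sketched in Python: compute a difference value for each reference pixel from its adjacent pixels, then build a Huffman table over those differences. The choice of predictor (mean of left and top neighbours) and the canonical code-length construction are illustrative assumptions; the abstract does not specify either.

```python
import heapq
from collections import Counter

def predict_residuals(frame):
    """Difference value for each pixel: actual value minus a prediction
    from adjacent pixels (here, the integer mean of the left and top
    neighbours -- the exact predictor is an assumption)."""
    h, w = len(frame), len(frame[0])
    residuals = []
    for y in range(h):
        for x in range(w):
            neighbours = []
            if x > 0:
                neighbours.append(frame[y][x - 1])
            if y > 0:
                neighbours.append(frame[y - 1][x])
            prediction = sum(neighbours) // len(neighbours) if neighbours else 0
            residuals.append(frame[y][x] - prediction)
    return residuals

def huffman_code_lengths(symbols):
    """Huffman construction over symbol frequencies; returns a
    {symbol: code_length} table for the difference values."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freq)): 1}
    # heap entries: (frequency, unique tiebreaker, {symbol: depth})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]
```

Target frames would then be coded against this table using their own prediction values, as the abstract describes.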
Abstract: In a picture coding device, a significant coefficient information coding controller 706 and an arithmetic encoder 701 code significant difference coefficient information indicating that a difference coefficient value is not zero and significant for each of the difference coefficients in the partial region of the coding target. A difference coefficient value coding controller 707 and the arithmetic encoder 701 code difference coefficient values when significant difference coefficient information is significant for each of pixels in the partial region of the coding target. The significant coefficient information coding controller 706 decides a context for coding the significant difference coefficient information in the partial region of the coding target based on information indicating significance of the difference coefficient in the coded partial region.
Abstract: Modular audio/video entertainment system, said system comprising three modules: a screen unit, a loudspeaker unit and a control unit, where means are provided for assembling and disassembling the modules, where said means comprises one or more bracket assemblies, where one end of the one or more bracket assemblies is fastened in the loudspeaker unit, and another end of the one or more bracket assemblies engages means provided on the rear or bottom side of the screen, normally used for the screen's standard table or floor stand or wall mount.
Abstract: A computer-implemented method includes obtaining data from one or more non-visual sensors and a camera from a first monitoring system. The data includes non-visual data from the non-visual sensors and visual data obtained from the camera. The non-visual data from the non-visual sensors are paired with corresponding visual data from the camera. Data points of the non-visual data are synchronized with frames of the visual data based on a likelihood of an event indicated in the non-visual data. The synchronized data points of the non-visual data with the frames of the visual data are provided as labeled input to a neural network to train the neural network to detect the event. The trained neural network is provided to one or more cameras corresponding to one or more additional monitoring systems to detect the event in the visual data obtained by the one or more cameras.
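The synchronization step above can be sketched in Python: non-visual data points are filtered by an event likelihood and paired with the nearest video frame by timestamp. The likelihood-threshold rule and nearest-frame matching are assumptions for illustration; the abstract only states that synchronization is based on a likelihood of an event.

```python
import bisect

def synchronize(sensor_points, frame_times, likelihood_threshold=0.5):
    """Pair non-visual data points with video frames.
    sensor_points: list of (timestamp, event_likelihood, value) tuples;
    frame_times: sorted list of frame timestamps.
    Only points whose event likelihood passes the threshold are kept as
    labelled (frame_index, value) training pairs."""
    pairs = []
    for ts, likelihood, value in sensor_points:
        if likelihood < likelihood_threshold:
            continue
        i = bisect.bisect_left(frame_times, ts)
        # choose whichever neighbouring frame timestamp is closer
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
        frame_idx = min(candidates, key=lambda j: abs(frame_times[j] - ts))
        pairs.append((frame_idx, value))
    return pairs
```

The resulting pairs would serve as the labelled input for training the event-detection network.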
Abstract: A helmet with video acquisition device and display, comprising a helmet shell on which at least one video acquisition device is fixed, and a display arranged at one end of a front piece of the helmet; further comprising elements to allow the movement of the display laterally with respect to the active position in which the display is arranged completely in front of the eyes of a user.
Abstract: The present disclosure provides a verification method. The verification method includes: driving the movable component to extend out from the housing, wherein the movable component is received in the housing and is capable of extending out from the housing; determining whether the light entrance of the infrared camera is completely exposed from the housing, wherein the infrared camera is installed on the movable component and can be driven by the movable component; when the light entrance of the infrared camera is completely exposed from the housing, obtaining an infrared image by the infrared camera; and performing an infrared image verification based on the infrared image. The present disclosure also provides a verification device and an electronic device.
Type:
Grant
Filed:
May 28, 2019
Date of Patent:
November 10, 2020
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Abstract: An image display apparatus has: an imaging device (11L, 11R) that is placed at a door (15L, 15R) of a vehicle (1) and that is configured to image an external circumstance of the vehicle; a displaying device (14) that is configured to display an external image captured by the imaging device; and a controlling device (132, 133) that is configured to control a display aspect of the displaying device when it displays the external image. The controlling device is configured to control the display aspect such that the display aspect when the door is in an opened state differs from the display aspect when the door is in a closed state.
Abstract: Systems, devices, and techniques related to thin form factor head mounted displays and near light field displays are discussed. Such devices may include a display to present elemental images, a primary lens array in an optical path between the display and a viewing zone of a user, the primary lens array to magnify elemental images to a viewing zone, and a secondary array of optical elements between the display and the primary lens array to concentrate elemental images from the display to the primary lens array.
Type:
Grant
Filed:
May 30, 2018
Date of Patent:
November 3, 2020
Assignee:
Intel Corporation
Inventors:
Joshua Ratcliff, Alexey Supikov, Santiago Alfaro, Basel Salahieh
Abstract: The present application discloses an easily installed and disassembled vehicle monitoring device, which includes a board, a signal input device, a communication module, magnetic members, and a battery for powering the signal input device and the communication module. The board, equipped with the signal input device, the communication module, and the battery, may be attached directly to the housing of a vehicle through the magnetic members; punching holes in the vehicle is thus avoided, and installation and disassembly of the vehicle monitoring device are greatly facilitated. There is no restriction on installation positions, so the monitoring scope is greatly increased and improved driving safety may be achieved. The communication module sends the information collected by the signal input device to smart terminals over a wireless network, so no complicated wiring is needed, making installation and disassembly of the monitoring device much easier while simultaneously enabling remote monitoring.
Abstract: A method and apparatus for decoding a video sequence include decoding a fixed length binary coded network abstraction layer unit (NALU) class type included in an NALU header. An NALU type in the NALU header is decoded. A picture is reconstructed, and a type of the picture is identified by a combination of the NALU class type and the NALU type.
Type:
Grant
Filed:
May 6, 2019
Date of Patent:
October 20, 2020
Assignee:
TENCENT AMERICA LLC
Inventors:
Byeongdoo Choi, Stephan Wenger, Shan Liu
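The NALU header decoding described in the abstract above can be sketched in Python. The 3-bit/5-bit field split and the class/type code points in the lookup table are hypothetical; the abstract specifies only that the class type is fixed-length binary coded and that the picture type is identified by the combination of both fields.

```python
def parse_nalu_header(header_byte):
    """Split a one-byte NALU header into a fixed-length class type and
    an NALU type (3-bit / 5-bit split is an illustrative assumption)."""
    nalu_class = header_byte >> 5    # upper 3 bits
    nalu_type = header_byte & 0x1F   # lower 5 bits
    return nalu_class, nalu_type

def picture_type(nalu_class, nalu_type):
    """Identify the picture type from the combination of class type and
    NALU type (the mapping below is hypothetical)."""
    table = {(0, 0): "IDR", (0, 1): "CRA", (1, 0): "trailing"}
    return table.get((nalu_class, nalu_type), "unknown")
```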
Abstract: A method at an image capture apparatus, the method including receiving, at the image capture apparatus, a trigger to begin image capture; based on the trigger, starting image capture for a fixed duration; and providing image capture data to a processing service.
Type:
Grant
Filed:
March 24, 2017
Date of Patent:
September 22, 2020
Assignee:
BlackBerry Limited
Inventors:
Conrad Delbert Seaman, Ryan Michael Parker, Stephen West
Abstract: A monitoring system detects an event happening to a detection target. The monitoring system includes an imaging device, storage, and a controller. The imaging device captures an image of an imaging area including the detection target to generate captured image data indicating a captured image. The storage stores therein a detection range in the captured image. The controller detects a change to the captured image in the detection range based on the captured image data. The detection range includes a detection target image exhibiting the detection target. Upon detecting a change to the captured image in the detection range, the controller changes the detection range so that the detection range encloses the detection target image.
Abstract: Methods, systems, and computer program products for digital imaging in an ambient light deficient environment are described. The methods, systems, and computer program products described include an imaging sensor with an array of pixels for sensing electromagnetic radiation, an emitter that is configured to emit a pulse of electromagnetic radiation, and a control unit that includes a processor. The control unit is in electrical communication with the imaging sensor and the emitter. The control unit is configured to synchronize the emitter and the imaging sensor so as to produce a plurality of image reference frames. The plurality of image reference frames include a luminance frame with luminance image data and a chrominance frame with chrominance data, where the plurality of image reference frames are combined to form a color image.
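The final combination step above, forming a color image from a luminance frame and a chrominance frame, can be sketched in Python. The abstract does not name a colour space; the full-range BT.601 conversion equations used here are an illustrative choice.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Combine one luminance sample with chrominance samples into an
    RGB triple using full-range BT.601 equations (an assumption)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

def combine_frames(luma_frame, chroma_frame):
    """Form a colour image from a luminance frame and a same-sized
    chrominance frame of (Cb, Cr) pairs."""
    return [
        [ycbcr_to_rgb(yv, cb, cr) for yv, (cb, cr) in zip(yrow, crow)]
        for yrow, crow in zip(luma_frame, chroma_frame)
    ]
```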
Abstract: The present disclosure relates to deblocking filtering, which may be advantageously applied for block-wise encoding and decoding of images or video signals. In particular, the present disclosure relates to an improved memory management in an automated decision on whether to apply or skip deblocking filtering for a block and to selection of the deblocking filter. The decision is performed on the basis of a segmentation of blocks in such a manner that memory usage is optimized. Preferably, the selection of appropriate deblocking filters is improved so as to reduce computational expense.
Type:
Grant
Filed:
January 24, 2019
Date of Patent:
September 15, 2020
Assignee:
SUN PATENT TRUST
Inventors:
Matthias Narroschke, Semih Esenlik, Thomas Wedi
Abstract: A method of controlling a vehicle display apparatus may include generating a user image as information on a user sitting on a seat corresponding to the display apparatus, extracting a midpoint between two eyes of the user from the user image and monitoring a position of the midpoint, and changing an output mode or an output area of the display apparatus in correspondence with the monitored position of the midpoint.
Type:
Grant
Filed:
June 1, 2018
Date of Patent:
September 15, 2020
Assignees:
Hyundai Motor Company, Kia Motors Corporation
Abstract: An observation device comprising an image sensor that forms an image of a specimen, and a processor having a focus control section and an analyzer, wherein, in a case where image data for analysis of the specimen by the analyzer is obtained when a plurality of maximum values have been generated for change in contrast evaluation value corresponding to change in the focus position, the focus control section controls the focus position to any focus position that corresponds to one of the plurality of maximum values, and the image sensor outputs image data that has been imaged.
Abstract: Improved transforms are used to encode and decode large video and image blocks. During encoding, a prediction residual block having a large size (e.g., larger than 32×32) is generated. The pixel values of the prediction residual block are transformed to produce transform coefficients. After determining that the transform coefficients exceed a threshold cardinality representative of a maximum transform block size (e.g., 32×32), a number of the transform coefficients are discarded such that a remaining number of transform coefficients does not exceed the threshold cardinality. A transform block is then generated using the remaining number. During decoding, after determining that the transform coefficients exceed the threshold cardinality, a number of new coefficients are added to the transform coefficients such that a total number of transform coefficients exceeds the threshold cardinality. The transform coefficients are then inverse transformed into a prediction residual block having a large size.
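The encoder-side truncation and decoder-side padding described above can be sketched in Python. The assumption that coefficients arrive in a scan order placing the most significant ones first, and that the added decoder-side coefficients are zeros, are illustrative choices not stated in the abstract.

```python
def encode_truncate(coefficients, max_count=32 * 32):
    """Encoder side: if the transform produces more coefficients than
    the maximum transform block size allows (32x32 in the abstract's
    example), discard the excess so the remaining number does not
    exceed the threshold cardinality."""
    if len(coefficients) > max_count:
        return coefficients[:max_count]
    return coefficients

def decode_pad(coefficients, full_count):
    """Decoder side: append new (zero) coefficients so the total again
    matches the large block's size before the inverse transform."""
    return coefficients + [0] * (full_count - len(coefficients))
```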
Abstract: A measurement device that can improve measurement accuracy and increase the number of times a measurement jig can be used. The measurement device includes a movement control section that positions the measurement jig at an instructed position in the transfer direction, an imaging control section that images the positioned measurement jig with the measurement camera and acquires image data, an image processing section that calculates an actual position of the measurement jig based on multiple measurement marks included in the image data, and an error measurement section that measures a positioning error in the transfer direction due to the driving device based on the instructed position and the actual position.