Abstract: Provided is a system which can supply appropriate information to one operator so that the one operator can grasp the remote operation mode applied to a working machine by the other operator. In a first remote operation apparatus 10, a passive mode detector 112 detects a passive mode of a first operation mechanism 111 and transmits operational mode data corresponding to the passive mode. In a second remote operation apparatus 20, an operation of an actuator 212 is controlled in accordance with the passive mode of the first operation mechanism 111 in response to the operational mode data, whereby a second operation mechanism 211 operates actively.
Type:
Grant
Filed:
December 2, 2019
Date of Patent:
May 28, 2024
Assignee:
Kobelco Construction Machinery Co., Ltd.
Abstract: A display apparatus includes an encoder which compresses first image data to generate second image data, a decoder which recovers the first image data to generate third image data, and a display panel which displays an image, in which the encoder separates the first image data into a plurality of sub-color data, and generates a low frequency component of minor sub-color data among the plurality of sub-color data and the remaining sub-color data as the second image data, and the decoder generates recovery minor sub-color data corresponding to the minor sub-color data by using the low frequency component of the minor sub-color data and the high frequency component of one of the remaining sub-color data in the second image data, and generates the recovery minor sub-color data and the remaining sub-color data as the third image data.
Abstract: An encoder capable of properly handling an image to be encoded or decoded includes processing circuitry and memory connected to the processing circuitry. Using the memory, the processing circuitry: obtains parameters including at least one of (i) one or more parameters related to a first process for correcting distortion in an image captured with a wide angle lens and (ii) one or more parameters related to a second process for stitching a plurality of images; generates an encoded image by encoding a current image to be processed that is based on the image or the plurality of images; and writes the parameters into a bitstream including the encoded image.
Type:
Grant
Filed:
July 12, 2021
Date of Patent:
May 14, 2024
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors:
Chong Soon Lim, Han Boon Teo, Takahiro Nishi, Tadamasa Toma, Ru Ling Liao, Sughosh Pavan Shashidhar, Hai Wei Sun
Abstract: A conversion method for converting luminance of a video, including a luminance value in a first luminance range, to be displayed on a display apparatus includes: acquiring a first luminance signal indicating a code value obtained by quantization of the luminance value of the video; and converting the code value indicated by the acquired first luminance signal into a second luminance value determined based on a luminance range of the display apparatus, the second luminance value being compatible with a second luminance range with a maximum value smaller than a maximum value of the first luminance range and larger than 100 nit. This provides the conversion method capable of achieving further improvement.
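As a rough illustration of the conversion described above, the sketch below dequantizes a code value back to a luminance and maps it into a smaller display range. The linear quantization, the function name, and the default ranges are assumptions of this sketch; the patent's actual transfer function is not specified in the abstract.

```python
def convert_code_value(code, bit_depth=10, source_max=10000.0, display_max=1500.0):
    """Dequantize a code value to a luminance in the source range (nits),
    then clamp it into the display's smaller luminance range."""
    luminance = code / ((1 << bit_depth) - 1) * source_max
    return min(luminance, display_max)
```

Real systems would use a perceptual transfer function such as PQ rather than linear dequantization, but the clamp-to-display-range step conveys the idea of a second, smaller luminance range.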
Abstract: A method for embedding information in a video signal is described. The method comprises receiving (305) a message (30) including the information; dividing (310) the message (30) in a first message part (132) and a second message part (134); acquiring (320) a first video frame (9) and a second video frame (10) from the video signal, wherein the second video frame (10) is temporally subsequent to the first video frame (9), and the video frames (9, 10) each include a pre-set number of pixels; and determining (330) a motion map (122) associated with the second video frame (10), wherein the motion map (122) indicates a movement of each of the pixels in the second video frame (10) compared to the first video frame (9).
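The splitting and motion-map steps can be sketched in plain Python. The half-and-half split and the intensity-difference motion proxy are assumptions made for illustration; the abstract does not fix either choice.

```python
def split_message(bits):
    """Divide the message into a first and a second message part."""
    half = len(bits) // 2
    return bits[:half], bits[half:]

def motion_map(frame_a, frame_b, threshold=10):
    """Mark each pixel of the second frame whose intensity changed by more
    than `threshold` relative to the first frame (a crude movement proxy)."""
    return [[abs(b - a) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```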
Abstract: Innovations in syntax and semantics of coded picture buffer removal delay (“CPBRD”) values potentially simplify splicing operations. For example, a video encoder sets a CPBRD value for a current picture that indicates an increment value relative to a nominal coded picture buffer removal time of a preceding picture in decoding order, regardless of whether the preceding picture has a buffering period SEI message. The encoder can signal the CPBRD value according to a single-value approach in which a flag indicates how to interpret the CPBRD value, according to a two-value approach in which another CPBRD value (having a different interpretation) is also signaled, or according to a two-value approach that uses a flag and a delta value. A corresponding video decoder receives and parses the CPBRD value for the current picture. A splicing tool can perform simple concatenation operations to splice bitstreams using the CPBRD value for the current picture.
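The increment-based signaling means a decoder can recover nominal removal times by simple accumulation, which is what makes concatenation-style splicing easy. A sketch (the time units and function name are assumptions):

```python
def nominal_removal_times(initial_time, cpbrd_increments):
    """Each picture's nominal CPB removal time is the preceding picture's
    removal time plus that picture's signaled CPBRD increment."""
    times = [initial_time]
    for inc in cpbrd_increments:
        times.append(times[-1] + inc)
    return times
```

Splicing two streams then only requires continuing the accumulation across the splice point, rather than re-deriving delays from buffering period messages.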
Abstract: Apparatus comprises a data encoder configured to derive, from an array of sample values, sample range flags each indicative of whether one or more sample values of the array of sample values lie in a predetermined range of sample values, the data encoder being configured in a first encoding mode to encode the array of sample values, at least in part, by encoding the sample range flags to an output data stream; a predictor configured to predict the state of a group of the sample range flags for a given array of sample values, the group being at least a subset of the sample range flags; and a comparator configured to compare the predicted state of the group of sample range flags with the actual state of the respective sample range flags for the given array of sample values; the data encoder being configured, in response to the comparator, to encode the given array of sample values in a second encoding mode in which the encoder is configured to encode to the output data stream a predetermined number of indicato
Type:
Grant
Filed:
June 24, 2020
Date of Patent:
March 19, 2024
Assignee:
Sony Group Corporation
Inventors:
Stephen Mark Keating, Karl James Sharman, Adrian Richard Browne
Abstract: According to the disclosure of the present document, ALF parameters and/or LMCS parameters may be hierarchically signaled, thereby enabling a reduction in data volume to be signaled for video/image coding and an increase in coding efficiency.
Type:
Grant
Filed:
March 19, 2020
Date of Patent:
February 27, 2024
Assignee:
LG Electronics Inc.
Inventors:
Seethal Paluri, Jaehyun Lim, Seunghwan Kim, Jie Zhao
Abstract: A method and a system for augmenting an audio feed for inclusion of data therein suitable for determining an identity of a human assessor are provided. The method comprises: receiving the audio feed; receiving an indication of identity of the human assessor to whom the audio feed is to be transmitted, the indication of identity being representable by a unique sequence of bits; generating, based on the unique sequence of bits, an identity watermark associated with the human assessor to be included in the audio feed to generate an augmented audio feed, by modifying the audio signal to have a predetermined energy level at each of at least two different frequency levels to indicate presence of a given bit of the unique sequence of bits associated with the human assessor in the augmented audio feed; and transmitting the augmented audio feed to an electronic device associated with the human assessor.
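A toy version of the energy-based watermark: each bit selects one of two frequencies at which a low-amplitude tone is added over one segment of the feed. The frequencies, amplitude, and equal-length segmentation are illustrative assumptions, not the patent's parameters.

```python
import math

def embed_identity_bits(samples, bits, sample_rate=8000,
                        f_zero=1000.0, f_one=2000.0, amplitude=0.05):
    """Add a faint tone at f_zero (bit 0) or f_one (bit 1) over one
    equal-length segment of the audio per bit of the identity sequence."""
    out = list(samples)
    segment = len(samples) // len(bits)
    for i, bit in enumerate(bits):
        freq = f_one if bit else f_zero
        for n in range(i * segment, (i + 1) * segment):
            out[n] += amplitude * math.sin(2 * math.pi * freq * n / sample_rate)
    return out
```

A matching detector would measure the energy at the two frequencies in each segment and pick the larger one.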
Abstract: The present disclosure relates to an image processing apparatus and a method that enable decoding of encoded data of an octree in various processing orders. The octree corresponding to point cloud data is encoded after the context is initialized for each layer of the octree. Further, a breadth-first order or a depth-first order is selected as the decoding order for the encoded data of the octree corresponding to point cloud data, and the encoded data is decoded in the selected decoding order. The present disclosure can be applied to an image processing apparatus, an electronic apparatus, an image processing method, a program, or the like, for example.
Type:
Grant
Filed:
September 13, 2019
Date of Patent:
February 20, 2024
Assignee:
SONY CORPORATION
Inventors:
Koji Yano, Tsuyoshi Kato, Satoru Kuma, Ohji Nakagami
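The selectable decoding order in the octree abstract above amounts to choosing between a queue and a stack when walking the tree. A sketch with nodes as (occupancy, children) tuples; this node structure is an assumption for illustration.

```python
from collections import deque

def traverse_octree(root, order="breadth"):
    """Visit octree nodes in breadth-first or depth-first order;
    each node is an (occupancy, children) tuple."""
    visited = []
    if order == "breadth":
        queue = deque([root])
        while queue:
            occupancy, children = queue.popleft()
            visited.append(occupancy)
            queue.extend(children)
    else:
        stack = [root]
        while stack:
            occupancy, children = stack.pop()
            visited.append(occupancy)
            stack.extend(reversed(children))
    return visited
```

Per-layer context initialization is what lets either traversal consume the same encoded data, since no context state carries across layers.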
Abstract: The present disclosure provides a visual model for image analysis in material characterization, and an analysis method thereof. By collecting and labeling big data of microscopic images, the present disclosure establishes an image data set for material characterization, and uses this data set for high-throughput deep learning. It establishes a neural network model and a dynamic statistical model based on deep learning to identify and locate atomic or lattice defects, automatically mark the lattice spacing, classify and compile statistics on the true shapes of the material's microscopic particles, and quantitatively analyze the microstructural dynamics of the material.
Abstract: An encoding engine encodes a video sequence to provide optimal quality for a given bitrate. The encoding engine cuts the video sequence into a collection of shot sequences. Each shot sequence includes video frames captured from a particular capture point. The encoding engine resamples each shot sequence across a range of different resolutions, encodes each resampled sequence with a range of quality parameters, and then upsamples each encoded sequence to the original resolution of the video sequence. For each upsampled sequence, the encoding engine computes a quality metric and generates a data point that includes the quality metric and the resample resolution. The encoding engine collects all such data points and then computes the convex hull of the resultant data set. Based on all convex hulls across all shot sequences, the encoding engine determines an optimal collection of shot sequences for a range of bitrates.
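The convex-hull step above keeps only the (bitrate, quality) points that no interpolation of other encodes can beat. A minimal upper-hull sketch; representing each data point as a (bitrate, quality) tuple is an assumption of this illustration.

```python
def upper_convex_hull(points):
    """Return the upper convex hull of (bitrate, quality) points:
    the encodes that are optimal for some bitrate budget."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the chord
            # from hull[-2] to p (it is dominated by interpolation).
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

Repeating this per shot sequence and then combining hulls across shots is what lets the encoder assemble an optimal collection for each bitrate.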
Abstract: A video coding mechanism is disclosed. The mechanism includes receiving a bitstream comprising a slice and a luma mapping with chroma scaling (LMCS) adaptation parameter set (APS) including LMCS parameters. The mechanism further includes determining that the LMCS APS is referenced in data related to the slice. The mechanism further includes decoding the slice using LMCS parameters from the LMCS APS based on the reference to the LMCS APS. The mechanism further includes forwarding the slice for display as part of a decoded video sequence.
Abstract: A video coding mechanism is disclosed. The mechanism includes receiving a bitstream comprising a first adaptation parameter set (APS) network abstraction layer (NAL) unit including an adaptive loop filter (ALF) type, a second APS NAL unit including a scaling list type, a third APS NAL unit including a luma mapping with chroma scaling (LMCS) type, and a slice. The mechanism further includes obtaining ALF parameters from the first APS NAL unit, obtaining scaling list parameters from the second APS NAL unit, and LMCS parameters from the third APS NAL unit. The mechanism further includes decoding the slice using the ALF parameters, the scaling list parameters, and the LMCS parameter. The mechanism further includes forwarding the slice for display as part of a decoded video sequence.
Abstract: Methods and apparatus for processing of video are described. The processing may include video encoding, decoding, or transcoding. One example video processing method includes performing a conversion between a video including one or more pictures including one or more subpictures and a bitstream of the video. The bitstream conforms to a format rule that specifies that a subpicture cannot be a random access type of subpicture in response to the subpicture not being a leading subpicture of an intra random access point subpicture. The leading subpicture precedes the intra random access point subpicture in output order.
Abstract: To enable high quality HDR video communication, which can work by sending corresponding LDR images, potentially via established LDR video communication technologies, and which works well in practical situations, applicant has invented an HDR video decoder (600, 1100) arranged to calculate an HDR image (Im_RHDR) by applying to a received 100 nit standard dynamic range image (Im_RLDR) a set of luminance transformation functions. The functions comprise at least a coarse luminance mapping (FC), which is applied by a dynamic range optimizer (603), and a mapping of the darkest value (0) of an intermediate luma (Y′HPS), being the output of the dynamic range optimizer, to a received black offset value (Bk_off) by a range stretcher (604). The video decoder comprises a gain limiter (611, 1105) arranged to apply an alternate luminance transformation function to calculate a subset (502) of the darkest luminances of the HDR image from the corresponding darkest lumas (Y′_in) of the standard dynamic range image.
Type:
Grant
Filed:
January 30, 2023
Date of Patent:
January 30, 2024
Assignee:
Koninklijke Philips N.V.
Inventors:
Johannes Yzebrand Tichelaar, Johannes Gerardus Rijk Van Mourik, Jeroen Hubert Christoffel Jacobus Stessen, Roeland Focco Everhard Goris, Mark Jozef Willem Mertens, Patrick Luc Els Vandewalle, Rutger Nijland
Abstract: This disclosure relates to video coding and more particularly to techniques for signaling types of pictures for coded video. In particular, this disclosure describes techniques for enabling and signaling a so-called gradual random access picture. According to an aspect of the invention, a second syntax element is decoded by using a value of a first syntax element, wherein the first syntax element indicates that a picture is a random access picture having a recovery point picture and the second syntax element specifies a first picture order count value of the recovery point picture.
Abstract: An image rendering and coding method includes first sending, by a first processor, to-be-rendered data to a second processor; instructing the second processor to obtain first format data through rendering based on the to-be-rendered data, where the first format data is in first storage space of the second processor; instructing, by the first processor, the second processor to convert the first format data into second format data; and instructing the second processor to code the second format data into third format data, where a first data capacity of the third format data is less than a second data capacity of the second format data; and sending the third format data to a client.
Abstract: An electronic device for encoding a picture is described. The electronic device includes a processor and instructions stored in memory that are in electronic communication with the processor. The instructions are executable to encode a step-wise temporal sub-layer access (STSA) sample grouping. The instructions are further executable to send and/or store the STSA sample grouping.
Abstract: A method for remotely provisioning resources for running a computer application is described. The method includes: causing, using one or more processing units, an initialization of a user interactive video portion of a computer application, the computer application being executed using a remote server; determining a runtime of a static video portion of the computer application and a time required to complete initialization of the user interactive portion using information provided by the remote server; and delaying a start time of displaying the static video portion when the runtime of the static video portion is shorter than the time required to complete the initialization of the user interactive portion. A device that is capable of performing the above method and a server are also described.
Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
Abstract: Embodiments relate to a method and a device for switching media service channels, which minimize the time required to detect the first I frame for a switch channel by utilizing a caching server when switching channels in an IP-based media service, such as IPTV, and at the same time, minimize the time required for synchronization by changing time information in caching information transmitted from the caching server, such that channel switching between media service channels can be performed more quickly.
Type:
Grant
Filed:
February 8, 2021
Date of Patent:
November 14, 2023
Assignees:
SK Telecom Co., Ltd., SK Broadband Co., Ltd.
Inventors:
Dai Boong Lee, Youn Kwon Kim, Sang Ho Bae, Gun Woo Kim, Jeong Mee Moon
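Changing the time information of cached data, as described in the channel-switching abstract above, boils down to rebasing timestamps so the cached frames splice cleanly onto the live stream. A sketch; the function name and the plain-integer PTS representation are assumptions.

```python
def rebase_timestamps(cached_pts, live_join_pts):
    """Shift a cached segment's presentation timestamps so its first
    frame lines up with the point where the viewer joins the stream."""
    offset = live_join_pts - cached_pts[0]
    return [t + offset for t in cached_pts]
```

Serving the rebased cached frames immediately hides the wait for the next I frame in the live stream.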
Abstract: An approach for gathering, encoding and transmitting histogram data mitigates the need for transmission resources by compressing the gathered data in a lossless, stateless manner for transmission. A generally sparse data set benefits from an encoding mechanism based on a bit plane arrangement of the raw data. The approach organizes bit planes in a sequential manner, and then encodes values based on intervals of non-zero bit positions. By traversing a sequential string based on the bit plane, each “run” of zeroes tends to produce relatively small values, easing encoding burdens, while also accommodating larger values when necessary. A selective encoding technique invokes different encoding processes based on the magnitude of the interval, to allow use of an encoding that stores each respective value in the fewest bits. Different encoding techniques are applied based on ranges of the interval magnitude, or zero run.
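The interval coding over a bit plane can be sketched as follows: each plane is traversed as one sequential bit string and the distance from the previous set bit is emitted. The per-plane terminator value is an assumed convention of this sketch.

```python
def bitplane_intervals(values, planes=8):
    """For each bit plane of the (typically sparse) histogram counts,
    emit the interval since the previous non-zero bit position."""
    intervals = []
    for plane in range(planes):
        last = -1
        for i, v in enumerate(values):
            if (v >> plane) & 1:
                intervals.append(i - last)  # length of the zero run + 1
                last = i
        intervals.append(0)  # plane terminator (an assumed convention)
    return intervals
```

On sparse histograms most planes are empty or nearly so, so the emitted intervals stay small, which is what the selective variable-length encoding then exploits.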
Abstract: In various examples, at least partial control of a vehicle may be transferred to a control system remote from the vehicle. Sensor data may be received from a sensor(s) of the vehicle and the sensor data may be encoded to generate encoded sensor data. The encoded sensor data may be transmitted to the control system for display on a virtual reality headset of the control system. Control data may be received by the vehicle and from the control system that may be representative of a control input(s) from the control system, and actuation by an actuation component(s) of the vehicle may be caused based on the control input.
Type:
Grant
Filed:
July 19, 2021
Date of Patent:
October 17, 2023
Assignee:
NVIDIA Corporation
Inventors:
Jen-Hsun Huang, Prajakta Gudadhe, Justin Ebert, Dane Johnston
Abstract: A better compromise between encoding complexity and achievable rate distortion ratio, and/or a better rate distortion ratio, is achieved by using multitree subdivisioning not only to subdivide a continuous area, namely the sample array, into leaf regions, but also by using the intermediate regions to share coding parameters among the corresponding collocated leaf blocks. By this measure, coding procedures performed locally in tiles (leaf regions) may be associated with coding parameters individually without having to explicitly transmit the whole set of coding parameters for each leaf region separately. Rather, similarities may be effectively exploited by using the multitree subdivision.
Type:
Grant
Filed:
March 29, 2018
Date of Patent:
October 10, 2023
Assignee:
GE Video Compression, LLC
Inventors:
Philipp Helle, Detlev Marpe, Simon Oudin, Thomas Wiegand
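Sharing coding parameters via intermediate regions, as in the abstract above, can be pictured as inheritance down the subdivision tree: a leaf without its own parameters uses the nearest parameterized ancestor's. A sketch; the (params, children) node shape is an assumption of this illustration.

```python
def leaf_parameters(node, inherited=None):
    """Walk a subdivision tree of (params_or_None, children) nodes;
    each leaf gets its own parameters if present, otherwise the
    parameters shared by its nearest parameterized ancestor."""
    params, children = node
    effective = params if params is not None else inherited
    if not children:
        return [effective]
    result = []
    for child in children:
        result.extend(leaf_parameters(child, effective))
    return result
```

Signaling one parameter set at an intermediate node thus covers every collocated leaf beneath it, which is the transmission saving the abstract describes.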
Abstract: In various examples, at least partial control of a vehicle may be transferred to a control system remote from the vehicle. Sensor data may be received from a sensor(s) of the vehicle and the sensor data may be encoded to generate encoded sensor data. The encoded sensor data may be transmitted to the control system for display on a virtual reality headset of the control system. Control data may be received by the vehicle and from the control system that may be representative of a control input(s) from the control system, and actuation by an actuation component(s) of the vehicle may be caused based on the control input.
Type:
Grant
Filed:
July 19, 2021
Date of Patent:
October 3, 2023
Assignee:
NVIDIA Corporation
Inventors:
Jen-Hsun Huang, Prajakta Gudadhe, Justin Ebert, Dane Johnston
Abstract: Devices of a control system may communicate with each other via a network. The control devices may be configured to form the network by joining the network and each attempt to attach to another device on the network. Attachment may be performed using one or more link quality thresholds. For example, the control device may measure background values and the link quality threshold may represent an Nth percentile value of the recorded background values measured at the control device. During a router optimization mode, the control devices may measure and store a communication quality metrics that may be used to assign the role of router devices and/or the role of leader device to control devices on the network.
Abstract: There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of a viewpoint corresponding to a predetermined display image generation method and depth image data without depending upon the viewpoint upon image pickup. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to a predetermined display image generation method and depth image data indicative of a position of each of pixels in a depthwise direction of the image pickup object. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit. A transmission unit transmits the two-dimensional image data and the depth image data encoded by the encoding unit. The present disclosure can be applied, for example, to an encoding apparatus and so forth.
Abstract: A layered video coding method is provided that selects data to upsample from a base layer (BL) to provide to an enhancement layer (EL) to improve coding efficiency. The method determines a filter used to compute an up-sampled value for a first layer of a video, wherein the filter has a set of coefficient values assigned to it. The up-sampled value is determined by applying the set of coefficient values to a plurality of sample values. The method then outputs the up-sampled value for use in coding an enhancement layer (EL) of a higher resolution than the first layer. The up-sampled values may be for the 6/16 and −6/16 phase offsets.
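Applying such a filter amounts to a weighted sum of neighboring base-layer samples, with the coefficient set carrying the fractional phase. A sketch with edge clamping; the example coefficients and the clamping rule are illustrative, not the patent's filter.

```python
def upsample_at(samples, pos, coeffs):
    """Interpolate a value near integer position `pos` by applying an
    odd-length coefficient set whose values encode the phase offset."""
    half = len(coeffs) // 2
    acc = 0.0
    for k, c in enumerate(coeffs):
        idx = min(max(pos - half + k, 0), len(samples) - 1)  # clamp at edges
        acc += c * samples[idx]
    return acc
```

A unit-gain coefficient set leaves a constant signal unchanged, which is a quick sanity check for any candidate filter.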
Abstract: Innovations in syntax and semantics of coded picture buffer removal delay (“CPBRD”) values potentially simplify splicing operations. For example, a video encoder sets a CPBRD value for a current picture that indicates an increment value relative to a nominal coded picture buffer removal time of a preceding picture in decoding order, regardless of whether the preceding picture has a buffering period SEI message. The encoder can signal the CPBRD value according to a single-value approach in which a flag indicates how to interpret the CPBRD value, according to a two-value approach in which another CPBRD value (having a different interpretation) is also signaled, or according to a two-value approach that uses a flag and a delta value. A corresponding video decoder receives and parses the CPBRD value for the current picture. A splicing tool can perform simple concatenation operations to splice bitstreams using the CPBRD value for the current picture.
Abstract: A video decoding method includes: decoding a part of a bitstream to generate a decoded frame, and parsing at least one syntax element from the bitstream. The decoded frame is a projection-based frame that includes projection faces packed in a cube-based projection layout. At least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection. The at least one syntax element is indicative of a guard band configuration of the projection-based frame.
Abstract: The present application provides an image processing method and an image processing system. The image processing method includes: obtaining a first image matrix; generating a first classified image matrix, wherein the first classified image matrix includes a plurality of parts corresponding to a plurality of classifications; obtaining a plurality of weightings, for a first image process, corresponding to the plurality of parts of the first classified image matrix, and generating a first weighting matrix accordingly; and performing the first image process upon the first image matrix according to the first weighting matrix to generate a first processed image matrix.
Abstract: When a temporally compressed video stream is decoded and subsequently re-encoded, quality is typically lost. The quality loss may be mitigated using information about how the source video stream was encoded during the re-encoding process. According to some aspects of the disclosure, this mitigation of quality loss can be facilitated by decoders that output such information and encoders that receive such information. These decoders and encoders may be separate devices. The functionality of these decoders and encoders may also be combined in a single device, such as a transcoding device. An example of the information that may be used during re-encoding is whether each portion of the original stream was intra-coded or non-intra-coded.
Abstract: A method includes receiving a bit stream; determining, using the bit stream and for a current frame, whether the current frame is available to be used as a reference frame; setting, in response to determining that the current frame is available to be used as a reference frame, a variable characterizing that an adaptive resolution management mode is disallowed; and reconstructing pixel data of the current frame, wherein the adaptive resolution management mode is disallowed. Related apparatus, systems, techniques and articles are also described.
Type:
Grant
Filed:
September 3, 2020
Date of Patent:
March 21, 2023
Assignee:
OP Solutions, LLC
Inventors:
Borivoje Furht, Hari Kalva, Velibor Adzic
Abstract: To enable high quality HDR video communication, which can work by sending corresponding LDR images, potentially via established LDR video communication technologies, and which works well in practical situations, applicant has invented an HDR video decoder (600, 1100) arranged to calculate an HDR image (Im_RHDR) by applying to a received 100 nit standard dynamic range image (Im_RLDR) a set of luminance transformation functions. The functions comprise at least a coarse luminance mapping (FC), which is applied by a dynamic range optimizer (603), and a mapping of the darkest value (0) of an intermediate luma (Y′HPS), being the output of the dynamic range optimizer, to a received black offset value (Bk_off) by a range stretcher (604). The video decoder comprises a gain limiter (611, 1105) arranged to apply an alternate luminance transformation function to calculate a subset (502) of the darkest luminances of the HDR image from the corresponding darkest lumas (Y′_in) of the standard dynamic range image.
Type:
Grant
Filed:
October 2, 2020
Date of Patent:
February 28, 2023
Assignee:
Koninklijke Philips N.V.
Inventors:
Johannes Yzebrand Tichelaar, Johannes Gerardus Rijk Van Mourik, Jeroen Hubert Christoffel Jacobus Stessen, Roeland Focco Everhard Goris, Mark Jozef Willem Mertens, Patrick Luc Els Vandewalle, Rutger Nijland
Abstract: A video encoding/decoding method and apparatus according to the present disclosure may obtain a transform coefficient of a current block, obtain an inverse-quantized transform coefficient by performing inverse-quantization on the transform coefficient based on a quantization-related parameter of an adaptation parameter set, and reconstruct a residual block of the current block based on the inverse-quantized transform coefficient.
Abstract: A method to be performed by a receiving apparatus for decoding an encoded bitstream representing a sequence of pictures of a video stream is provided. In the method, capabilities relating to level of decoding parallelism for the decoder are identified, a parameter indicative of the decoder's capabilities relating to level of decoding parallelism is kept, and for a set of levels of decoding parallelism, information relating to HEVC profile and HEVC level that the decoder is capable of decoding is kept. A method for encoding a bitstream representing a sequence of pictures of a video stream is also provided. In the method, a parameter is received from a receiving apparatus that is to decode the encoded bitstream.
Type:
Grant
Filed:
September 18, 2020
Date of Patent:
January 31, 2023
Assignee:
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors:
Jonatan Samuelsson, Bo Burman, Rickard Sjöberg, Magnus Westerlund
Abstract: A video decoding method according to the present invention may comprise: a step for determining whether to divide a current block into a plurality of sub-blocks; a step for determining an intra prediction mode for the current block; and a step for performing intra prediction for each sub-block on the basis of the intra prediction mode, when the current block is divided into the plurality of sub-blocks.
Abstract: There is included a method and apparatus comprising computer code configured to cause a processor or processors to perform acquiring an input bitstream comprising metadata and video data, decoding the video data, determining whether the metadata comprises at least one flag signaling at least one component of a picture size of at least one picture of the video data, and signaling, in a case where it is determined that the metadata comprises the at least one flag, a display device to display the at least one picture from the video data according to the at least one flag.
Type:
Grant
Filed:
October 5, 2020
Date of Patent:
January 24, 2023
Assignee:
TENCENT AMERICA LLC
Inventors:
Byeongdoo Choi, Stephan Wenger, Shan Liu
Abstract: When a temporally compressed video stream is decoded and subsequently re-encoded, quality is typically lost. The quality loss may be mitigated using information about how the source video stream was encoded during the re-encoding process. According to some aspects of the disclosure, this mitigation of quality loss can be facilitated by decoders that output such information and encoders that receive such information. These decoders and encoders may be separate devices. The functionality of these decoders and encoders may also be combined in a single device, such as a transcoding device. An example of the information that may be used during re-encoding is whether each portion of the original stream was intra-coded or non-intra-coded.
Abstract: A computer-implemented method can include receiving a first signal corresponding to a first flow of acoustic energy, applying a transform to the received first signal using at least a first amplitude-independent window size at a first frequency and a second amplitude-independent window size at a second frequency, the second amplitude-independent window size improving a temporal response at the second frequency, wherein the second frequency is subject to amplitude reduction due to a resonance phenomenon associated with the first frequency, and storing a first encoded signal, the first encoded signal based on applying the transform to the received first signal.
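The frequency-dependent window idea can be illustrated with a single-bin windowed DFT evaluated with two different window lengths. The sample rate, probe frequencies, and window lengths below are arbitrary demonstration values, and the single-bin correlation stands in for whatever transform the claim covers.

```python
# Sketch: analyse a first frequency with a long window and a second
# frequency with a shorter window, the shorter window giving a faster
# temporal response at that frequency.
import math

def windowed_dft_bin(signal, start, win_len, freq, sample_rate):
    """Magnitude of one DFT bin over signal[start:start+win_len]."""
    re = im = 0.0
    for n in range(win_len):
        x = signal[start + n]
        ang = 2.0 * math.pi * freq * n / sample_rate
        re += x * math.cos(ang)
        im -= x * math.sin(ang)
    return math.hypot(re, im) / win_len

sr = 8000
sig = [math.sin(2 * math.pi * 440 * n / sr) for n in range(4096)]
a1 = windowed_dft_bin(sig, 0, 1000, 440, sr)  # long window at f1
a2 = windowed_dft_bin(sig, 0, 200, 880, sr)   # short window at f2
print(round(a1, 3))  # ≈ 0.5 (the 440 Hz tone)
```

The window sizes are fixed per frequency rather than derived from signal amplitude, matching the "amplitude-independent" wording of the abstract.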
Abstract: There is provided a method for encoding and decoding a signal. The input signal (1000) is processed by at least converting the input signal (1000) from a high-dynamic range—HDR-signal to a standard dynamic range—SDR-signal, to produce a first processed signal. The first processed signal is encoded by a first encoding module (1004) to generate a first encoded signal (1012). The first encoded signal (1012) is decoded to generate a first decoded signal. The first decoded signal is processed together with the first processed signal by a second encoding module (1006) to generate a second encoded signal (1014). The second encoded signal (1014) is decoded and the result is combined with the first decoded signal (1012) to generate a second decoded signal. The second decoded signal is processed at least by converting the second decoded signal from a SDR signal to a HDR signal, to produce a second processed signal.
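A minimal numeric sketch of this two-layer flow, under stated assumptions: the clip-based tone map, the identity `from_sdr`, and the quantisers standing in for the first and second encoding modules are all placeholders, not the method's actual components.

```python
# Sketch: base layer = encoded SDR conversion of the HDR input;
# enhancement layer = encoded residual between the SDR signal and the
# decoded base layer; reconstruction combines both, then maps back.

def to_sdr(x):   return [min(v, 1.0) for v in x]            # crude HDR->SDR
def from_sdr(x): return x                                    # placeholder inverse
def encode(x, step): return [round(v / step) for v in x]     # toy lossy encoder
def decode(q, step): return [v * step for v in q]

hdr = [0.23, 0.87, 3.5]
sdr = to_sdr(hdr)                          # first processed signal
base = encode(sdr, 0.1)                    # first encoded signal
base_rec = decode(base, 0.1)               # first decoded signal
residual = [a - b for a, b in zip(sdr, base_rec)]
enh = encode(residual, 0.01)               # second encoded signal
enh_rec = decode(enh, 0.01)
second_dec = [a + b for a, b in zip(base_rec, enh_rec)]
hdr_out = from_sdr(second_dec)             # second processed signal
```

The finer quantisation step in the second encoder is what lets the enhancement layer recover detail the base layer discarded.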
Abstract: There is provided a method for processing an input signal (700). The input signal (700) is processed at least by converting the input signal (700) from a first colour space to a second colour space, to produce a first processed signal. The processed signal is encoded by a first encoding module (703) to generate a first encoded signal (710). A decoded signal is generated by decoding the first encoded signal (710). The decoded signal is processed at least by converting the decoded signal from the second colour space to the first colour space to produce a second processed signal. The second processed signal and the input signal (700) are processed by a second encoding module (707) to generate a second encoded signal (720).
Abstract: A device capable of providing more flexible services includes an acquisition unit that acquires a first stream; a transformer that transforms a first picture included in the first stream into a second picture, the first picture being irreversibly encoded and the second picture being reversibly encoded; and an output unit that outputs a second stream including the second picture.
Abstract: A control signal transmission circuit and a control signal receiving circuit for an audio/video interface are provided. The control signal transmission circuit includes an audio/video interface encoder, a signal packaging circuit and a data allocator. The audio/video interface encoder is configured to receive an audio packet and supports a user-defined packet format. The signal packaging circuit is configured to receive a first control signal and package the first control signal into a control data packet according to the user-defined packet format. The data allocator is configured to receive a video data and a second control signal and to mix the second control signal and the video data to generate a mixed data packet. The audio/video interface encoder packages the control data packet, the mixed data packet and the audio packet according to an audio/video transmission protocol to generate an audio/video and control data stream.
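The packaging step can be sketched as follows. The field layout (2-byte type code, 2-byte length, payload) and the `CTRL_PACKET_TYPE` value are invented for illustration; the patent only requires *some* user-defined packet format.

```python
# Sketch: a signal packaging circuit wrapping a control signal into a
# user-defined packet, and a toy data allocator mixing a second control
# signal with video data.
import struct

CTRL_PACKET_TYPE = 0x7F00  # assumed user-defined type code

def package_control(control_bytes):
    header = struct.pack(">HH", CTRL_PACKET_TYPE, len(control_bytes))
    return header + control_bytes

def mix(video_bytes, control2_bytes):
    # Toy "data allocator": prepend the packaged second control signal.
    return package_control(control2_bytes) + video_bytes

pkt = package_control(b"\x01\x02")
print(len(pkt))  # 6
```

Downstream, the audio/video interface encoder would multiplex such control packets with the mixed data packet and audio packets under the transmission protocol.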
Abstract: The present invention relates to a decoding method for a bit stream that supports a plurality of layers. The decoding method may include receiving a video parameter set that includes information on the plurality of layers, and parsing the video parameter set to obtain information on the layers in the bit stream.
Type:
Grant
Filed:
February 12, 2021
Date of Patent:
November 1, 2022
Assignee:
Electronics and Telecommunications Research Institute
Inventors:
Jung Won Kang, Ha Hyun Lee, Jin Soo Choi, Jin Woong Kim
Abstract: A method and system for encoding a stereoscopic image pair is disclosed. Groups of pixels are analyzed to determine the depth of each pixel group. The number of bits per pixel used to encode each pixel group is selected based on the depth of that pixel group. Therefore, images of objects closer to the camera pair, which appear closer to the viewer, are encoded with a larger number of bits per pixel than objects perceived to be farther from the viewer. The number of bits per pixel may also be increased based on a number of objects depicted or motion detected. The size of prediction blocks used to encode image portions may also be determined based on an angular distance of an image portion relative to the center of the frame. Therefore, smaller prediction blocks may be used to encode image portions closer to the center of the frame.
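The depth-driven bit allocation can be sketched as a simple lookup. The depth thresholds, bit depths, and the object-count and motion modifiers are assumptions chosen for demonstration; only the direction (closer means more bits) comes from the abstract.

```python
# Sketch: select bits-per-pixel for a pixel group from its depth,
# with optional increases for object count and detected motion.

def bits_per_pixel(depth, n_objects=0, motion=False):
    """Closer pixel groups (smaller depth) get more bits per pixel."""
    if depth < 1.0:
        bpp = 8
    elif depth < 5.0:
        bpp = 6
    else:
        bpp = 4
    if n_objects > 3:   # busy regions get extra precision
        bpp += 1
    if motion:          # moving content gets extra precision
        bpp += 1
    return bpp

print(bits_per_pixel(0.5))                 # 8 (near the viewer)
print(bits_per_pixel(10.0, motion=True))   # 5 (far, but moving)
```

The analogous prediction-block sizing would use angular distance from the frame centre as the input instead of depth.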
Abstract: A computing device generating an enhanced reality environment may include, in an example, a processor to execute the enhanced reality environment on the computing device and emulate hardware associated with a real-world mobile computing device within the enhanced reality environment; a network adapter to communicatively couple the computing device to the real-world mobile computing device; and an enhanced reality data capture module to capture data defining the enhanced reality environment and deliver the data to the mobile computing device to be processed by an app associated with the real-world mobile computing device.
Type:
Grant
Filed:
October 9, 2018
Date of Patent:
September 20, 2022
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: An apparatus is provided for decoding last position information indicating a horizontal position and a vertical position of a last non-zero coefficient in a predetermined order within a current block to be decoded, the current block being included in a picture and including a plurality of coefficients. The apparatus includes one or more processors, a communication unit, and storage coupled to the one or more processors and the communication unit. The communication unit is configured to transmit a request for a bitstream to an external system, and receive the bitstream from the external system. The one or more processors are configured to obtain the bitstream, perform first arithmetic decoding, perform second arithmetic decoding, derive a horizontal component of the last position information, and derive a vertical component of the last position information. A system for decoding and a displaying method are also provided.
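Deriving a position component from two separately decoded parts can be sketched as below. The prefix/suffix binarisation mirrors common arithmetic-coded last-position schemes (as in HEVC-style codecs), but this exact mapping is an assumption rather than the claimed apparatus's definition.

```python
# Sketch: combine a decoded prefix (coarse position, from the first
# arithmetic decoding) with a decoded suffix (offset, from the second)
# into one component of the last non-zero coefficient position.

def derive_component(prefix, suffix):
    """Small positions are coded directly; larger ones use a
    group base plus a fixed-length suffix offset."""
    if prefix <= 3:
        return prefix
    group_len = (prefix >> 1) - 1
    base = (2 + (prefix & 1)) << group_len
    return base + suffix

x = derive_component(6, 3)  # horizontal component
y = derive_component(2, 0)  # vertical component
print((x, y))  # (11, 2)
```

The horizontal and vertical components are derived independently by the same rule, which is why the abstract lists two arithmetic decoding steps followed by two derivations.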