Patents by Inventor Alican Nalci

Alican Nalci has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11973983
    Abstract: An example method of decoding video data includes receiving one or more syntax elements of the video data indicative of whether a first type of coding scheme or a second type of coding scheme is applied to residual values of a block of video data coded with transform skip, wherein the residual values are indicative of a difference between the block and a prediction block, and wherein, in transform skip, the residual values are not transformed from a sample domain to a frequency domain. The method includes determining a type of coding scheme to apply to the residual values based on the one or more syntax elements, determining the residual values based on the determined type of coding scheme, and reconstructing the block based on the determined residual values and the prediction block.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: April 30, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Marta Karczewicz, Muhammed Zeyd Coban, Alican Nalci, Hilmi Enes Egilmez
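    For illustration, a minimal Python sketch of the scheme selection and sample-domain reconstruction described in this abstract; the flag name and both residual decoders are hypothetical stand-ins, not the claimed coding schemes.

      # The flag selects which residual coding scheme interprets the coded values;
      # both decoders below are trivial placeholders for the two claimed schemes.
      def decode_residuals_regular(coded_values):
          return [row[:] for row in coded_values]

      def decode_residuals_transform_skip(coded_values):
          return [row[:] for row in coded_values]   # placeholder behavior

      def reconstruct_ts_block(residual_coding_flag, coded_values, prediction):
          decode = (decode_residuals_transform_skip if residual_coding_flag
                    else decode_residuals_regular)
          residuals = decode(coded_values)
          # Transform skip: residuals are already in the sample domain, so the
          # block is reconstructed directly as prediction + residual.
          return [[p + r for p, r in zip(prow, rrow)]
                  for prow, rrow in zip(prediction, residuals)]

      print(reconstruct_ts_block(0, [[1, -2], [0, 3]], [[128, 130], [126, 129]]))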
  • Publication number: 20240129472
    Abstract: Improved lossless entropy coding techniques for coding of image data include selecting a context for entropy coding based on an ordered scan path of possible context locations. A symbol for a current location within a source image may be entropy coded based on a context of prior encoded symbols of other locations within source images, where the context is selected based on an ordered scan path enumerating a series of potential context locations within one or more source images. To select a context, a predetermined number of prior symbols may be selected by qualifying or disqualifying locations in the scan path, and then the current symbol may be encoded with a context based on prior symbols corresponding to the first qualifying context locations in the order of the scan path.
    Type: Application
    Filed: September 18, 2023
    Publication date: April 18, 2024
    Inventors: Yeqing WU, Yunfei ZHENG, Alican NALCI, Yixin DU, Hilmi Enes EGILMEZ, Guoxin JIN, Alexandros TOURAPIS, Jun XIN, Hsi-Jung WU
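    A minimal Python sketch of context selection along an ordered scan path, assuming a hypothetical qualification rule (a candidate location qualifies if it lies in bounds and has already been decoded); the names and the scan order are placeholders.

      def select_context(scan_path, decoded_symbols, width, height, num_needed=2):
          """Walk the ordered scan path and keep the first num_needed qualifying
          locations; the symbols found there form the entropy-coding context."""
          context = []
          for (x, y) in scan_path:
              qualifies = 0 <= x < width and 0 <= y < height and (x, y) in decoded_symbols
              if qualifies:
                  context.append(decoded_symbols[(x, y)])
                  if len(context) == num_needed:
                      break
          return tuple(context)   # e.g., used to index a probability model

      decoded = {(0, 0): 3, (1, 0): 1, (0, 1): 0}       # symbols decoded so far
      path = [(1, 1), (0, 1), (1, 0), (0, 0)]           # hypothetical scan order
      print(select_context(path, decoded, width=4, height=4))   # -> (0, 1)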
  • Patent number: 11949870
    Abstract: An example method includes determining a color component of a unit of video data; determining, based at least on the color component, a context for context-adaptive binary arithmetic coding (CABAC) a syntax element that specifies a value of a low-frequency non-separable transform (LFNST) index for the unit of video data; CABAC decoding, based on the determined context and via a syntax structure for the unit of video data, the syntax element that specifies the value of the LFNST index for the unit of video data; and inverse-transforming, based on a transform indicated by the value of the LFNST index, transform coefficients of the unit of video data.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: April 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Alican Nalci, Hilmi Enes Egilmez, Vadim Seregin, Muhammed Zeyd Coban, Marta Karczewicz
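    A minimal Python sketch of choosing a CABAC context for the LFNST index from the color component; the context indices, the two-bin binarization, and the toy bin decoder are assumptions for illustration, not the standardized derivation.

      LUMA, CHROMA = 0, 1

      class ToyBinDecoder:
          """Trivial stand-in for a CABAC engine: returns bins from a preset list."""
          def __init__(self, bins):
              self.bins = list(bins)
          def decode_bin(self, context_index):   # the context index is illustrative only
              return self.bins.pop(0)
          def decode_bypass(self):
              return self.bins.pop(0)

      def lfnst_context(color_component, separate_chroma_tree=False):
          # e.g. one context for luma / single tree, another for a separate chroma tree
          return 1 if (color_component == CHROMA and separate_chroma_tree) else 0

      def decode_lfnst_index(decoder, color_component, separate_chroma_tree=False):
          ctx = lfnst_context(color_component, separate_chroma_tree)
          if not decoder.decode_bin(ctx):        # first bin: is LFNST applied at all?
              return 0
          return 2 if decoder.decode_bypass() else 1   # second bin selects the kernel

      print(decode_lfnst_index(ToyBinDecoder([1, 0]), LUMA))   # -> 1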
  • Publication number: 20240073438
    Abstract: Techniques are disclosed for improved video coding with virtual reference frames. A motion vector for prediction of a pixel block from a reference may be constrained based on the reference. In an aspect, if the reference is a temporally interpolated virtual reference frame whose corresponding time is close to the time of the current pixel block, the motion vector for prediction may be constrained in magnitude and/or precision. In another aspect, a bitstream syntax for encoding the constrained motion vector may also be constrained. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 29, 2024
    Inventors: Yeqing WU, Yunfei ZHENG, Guoxin JIN, Yixin DU, Alican NALCI, Hilmi Enes EGILMEZ, Jun XIN, Hsi-Jung WU
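    A minimal Python sketch of the constraint idea: when the reference is a temporally interpolated virtual frame close in time to the current block, the motion vector is clamped in magnitude and coarsened in precision. The thresholds and the rounding rule are illustrative assumptions.

      def constrain_mv(mv_x, mv_y, reference_is_virtual, time_delta,
                       max_abs=64, coarse_step=4):
          if not (reference_is_virtual and abs(time_delta) <= 1):
              return mv_x, mv_y                          # ordinary references: unconstrained

          def clamp_and_coarsen(v):
              v = max(-max_abs, min(max_abs, v))         # constrain magnitude
              return int(v / coarse_step) * coarse_step  # constrain precision

          return clamp_and_coarsen(mv_x), clamp_and_coarsen(mv_y)

      print(constrain_mv(157, -9, reference_is_virtual=True, time_delta=0))  # -> (64, -8)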
  • Publication number: 20240048776
    Abstract: Disclosed is a method that includes receiving an image frame having a plurality of coded blocks, determining a prediction unit (PU) from the plurality of coded blocks, determining one or more motion compensation units arranged in an array within the PU, and applying a filter to one or more boundaries of the one or more motion compensation units. Also disclosed is a method that includes receiving a reference frame that includes a reference block, determining a timing for deblocking a current block, performing motion compensation on the reference frame to obtain a predicted frame that includes a predicted block, performing reconstruction on the predicted frame to obtain a reconstructed frame that includes a reconstructed PU, and applying, at the timing for deblocking the current block, a deblocking filter based on one or more parameters to the reference block, the predicted block, or the reconstructed PU.
    Type: Application
    Filed: September 29, 2022
    Publication date: February 8, 2024
    Inventors: Yixin Du, Alexandros Tourapis, Alican Nalci, Guoxin Jin, Hilmi Enes Egilmez, Hsi-Jung Wu, Jun Xin, Yeqing Wu, Yunfei Zheng
  • Publication number: 20240040151
    Abstract: Techniques are described for express and implied signaling of transform mode selections in video coding. Information derived from coefficient samples in a given transform unit (TU) or prediction unit (PU) may constrain or modify signaling of certain syntax elements at the coding block (CB), TU, or PU levels. For instance, based on the spatial locations of decoded coefficients, the spatial patterns of coefficients, or the correlation with the coefficients in neighboring blocks, various syntax elements such as the transform type and related flags/indices, secondary transform modes/flags/indices, a residual coding mode, intra and inter prediction modes, and scanning order may be disabled or constrained. In another case, if the coefficient samples match a desired spatial pattern or have other desired properties, then a default transform type, a default secondary transform type, a default intra and inter prediction mode, or other block-level modes may be inferred at the decoder side.
    Type: Application
    Filed: May 4, 2023
    Publication date: February 1, 2024
    Inventors: Alican Nalci, Yunfei Zheng, Hilmi E. Egilmez, Yeqing WU, Yixin Du, Alexis Tourapis, Jun Xin, Hsi-Jung Wu
  • Publication number: 20240040120
    Abstract: Video coders and decoders perform transform coding and decoding on blocks of video content according to an adaptively selected transform type. The transform types are organized into a hierarchy of transform sets, where each transform set includes a respective number of transforms and each higher-level transform set includes the transforms of each lower-level transform set within the hierarchy. The video coders and video decoders may exchange signaling that establishes a transform set context from which the transform set selected for coding given block(s) may be identified. The video coders and video decoders may also exchange signaling that establishes a transform decoding context from which the transform selected from the identified transform set for decoding the transform unit may be identified. The block(s) may be coded and decoded by the selected transform.
    Type: Application
    Filed: July 25, 2023
    Publication date: February 1, 2024
    Inventors: Hilmi Enes EGILMEZ, Yunfei ZHENG, Alican NALCI, Yeqing WU, Yixin DU, Guoxin JIN, Alexandros TOURAPIS, Jun XIN, Hsi-Jung WU
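    A minimal Python sketch of a transform-set hierarchy in which each higher-level set contains every lower-level set, together with the two-step selection (set, then transform). The transform names and set sizes are placeholders, not the codec's actual sets.

      # Each higher-level set is a superset of every lower-level set.
      TRANSFORM_SETS = [
          ["DCT2"],                                # level 0
          ["DCT2", "DST7"],                        # level 1 contains level 0
          ["DCT2", "DST7", "DCT8", "IDENTITY"],    # level 2 contains level 1
      ]

      def select_transform(set_level, transform_index):
          # set_level comes from the signaled transform-set context; transform_index
          # comes from the transform decoding context signaled for the block.
          return TRANSFORM_SETS[set_level][transform_index]

      print(select_transform(2, 1))   # -> "DST7"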
  • Publication number: 20240040124
    Abstract: A flexible coefficient coding (FCC) approach is presented. In a first aspect, spatial sub-regions are defined over a transform unit (TU) or a prediction unit (PU). These sub-regions organize the coefficient samples residing inside a TU or a PU into variable coefficient groups (VCGs). Each VCG corresponds to a sub-region inside a larger TU or PU. The shape of VCGs or the boundaries between different VCGs may be irregular, determined based on the relative distance of coefficient samples with respect to each other. Alternatively, the VCG regions may be defined according to scan ordering within a TU. Each VCG can encode either 1) a different number of symbols for a given syntax element, or 2) a different number of syntax elements within the same TU or PU. Whether to code more symbols or more syntax elements may depend on the type of arithmetic coding engine used in a particular coding specification. For multi-symbol arithmetic coding (MS-AC), a VCG may encode a different number of symbols for a syntax element.
    Type: Application
    Filed: July 25, 2023
    Publication date: February 1, 2024
    Inventors: Alican NALCI, Yunfei ZHENG, Hilmi Enes EGILMEZ, Yeqing WU, Yixin DU, Alexandros TOURAPIS, Jun XIN, Hsi-Jung WU, Arash VOSOUGHI, Dzung T. HOANG
  • Publication number: 20230412844
    Abstract: A video decoder determines, based on a block size of a current block and a low-frequency non-separable transform (LFNST) syntax element, a zero-out pattern of normatively defined zero-coefficients. The LFNST syntax element is signaled at a transform unit (TU) level. Additionally, the video decoder determines transform coefficients of the current block. The transform coefficients of the current block include transform coefficients in an LFNST region of the current block and transform coefficients outside the LFNST region of the current block. As part of determining the transform coefficients of the current block, the video decoder applies an inverse LFNST to determine values of one or more transform coefficients in the LFNST region of the current block. The video decoder also determines that transform coefficients of the current block in a region of the current block defined by the zero-out pattern are equal to 0.
    Type: Application
    Filed: May 22, 2023
    Publication date: December 21, 2023
    Inventors: Alican Nalci, Hilmi Enes Egilmez, Vadim Seregin, Muhammed Zeyd Coban, Marta Karczewicz
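    A minimal Python sketch of deriving a zero-out pattern from the block size and the LFNST index, so that only the LFNST region carries decoded coefficients; the region sizes used here are illustrative assumptions.

      def zero_out_mask(width, height, lfnst_index):
          """Return a width x height mask that is True where coefficients are
          normatively zero (everything outside the LFNST region)."""
          if lfnst_index == 0:                           # LFNST not applied: nothing zeroed
              return [[False] * width for _ in range(height)]
          region = 4 if min(width, height) <= 8 else 8   # hypothetical region size
          return [[not (x < region and y < region) for x in range(width)]
                  for y in range(height)]

      mask = zero_out_mask(16, 16, lfnst_index=1)
      print(sum(v for row in mask for v in row), "coefficients inferred to be zero")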
  • Publication number: 20230300341
    Abstract: Techniques are disclosed for generating virtual reference frames that may be used for prediction of input video frames. The virtual reference frames may be derived from already-coded reference frames and thereby incur reduced signaling overhead. Moreover, signaling of virtual reference frames may be avoided until an encoder selects the virtual reference frame as a prediction reference for a current frame. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
    Type: Application
    Filed: January 20, 2023
    Publication date: September 21, 2023
    Inventors: Yeqing WU, Yunfei ZHENG, Alexandros TOURAPIS, Alican NALCI, Yixin DU, Hilmi Enes EGILMEZ, Albert E. KEINATH, Jun XIN, Hsi-Jung WU
  • Patent number: 11695960
    Abstract: A video decoder determines, based on a block size of a current block and a low-frequency non-separable transform (LFNST) syntax element, a zero-out pattern of normatively defined zero-coefficients. The LFNST syntax element is signaled at a transform unit (TU) level. Additionally, the video decoder determines transform coefficients of the current block. The transform coefficients of the current block include transform coefficients in an LFNST region of the current block and transform coefficients outside the LFNST region of the current block. As part of determining the transform coefficients of the current block, the video decoder applies an inverse LFNST to determine values of one or more transform coefficients in the LFNST region of the current block. The video decoder also determines that transform coefficients of the current block in a region of the current block defined by the zero-out pattern are equal to 0.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: July 4, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Alican Nalci, Hilmi Enes Egilmez, Vadim Seregin, Muhammed Zeyd Coban, Marta Karczewicz
  • Publication number: 20230199226
    Abstract: An example device includes memory and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to receive a first slice header syntax element for a slice of the video data and determine a first value for the first slice header syntax element, the first value being indicative of whether dependent quantization is enabled. The one or more processors are configured to receive a second slice header syntax element for the slice of the video data and determine a second value for the second slice header syntax element, the second value being indicative of whether sign data hiding is enabled. The one or more processors are configured to determine whether transform skip residual coding is disabled for the slice based on the first value and the second value and decode the slice based on the determinations.
    Type: Application
    Filed: February 14, 2023
    Publication date: June 22, 2023
    Inventors: Alican Nalci, Marta Karczewicz, Muhammed Zeyd Coban
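    A minimal Python sketch of the slice-level gating: the two slice header values determine whether transform skip residual coding is disabled. The syntax-element names and the exact combination rule (consult a third flag only when both tools are off) are assumptions for illustration.

      def ts_residual_coding_disabled(slice_header):
          dq  = bool(slice_header.get("dep_quant_used_flag", 0))          # hypothetical keys
          sdh = bool(slice_header.get("sign_data_hiding_used_flag", 0))
          if dq or sdh:
              return False     # inferred: transform skip residual coding stays enabled
          # Only when both tools are off is an explicit disabling flag consulted.
          return bool(slice_header.get("ts_residual_coding_disabled_flag", 0))

      print(ts_residual_coding_disabled({"dep_quant_used_flag": 1}))               # False
      print(ts_residual_coding_disabled({"ts_residual_coding_disabled_flag": 1}))  # True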
  • Publication number: 20230188738
    Abstract: In an example method, a decoder obtains a data stream representing video content. The video content is partitioned into one or more logical units, and each of the logical units is partitioned into one or more respective logical sub-units. The decoder determines that the data stream includes first data indicating that a first logical unit has been encoded according to a flexible skip coding scheme. In response, the decoder determines a first set of decoding parameters based on the first data, and decodes each of the logical sub-units of the first logical unit according to the first set of decoding parameters.
    Type: Application
    Filed: December 6, 2022
    Publication date: June 15, 2023
    Inventors: Alican Nalci, Alexandros Tourapis, Hilmi Enes Egilmez, Hsi-Jung Wu, Jun Xin, Yeqing Wu, Yixin Du, Yunfei Zheng
  • Publication number: 20230143147
    Abstract: A cross-component based filtering system is disclosed for video coders and decoders. The filtering system may include a filter having an input for a filter offset and an input for samples reconstructed from coded video data representing a native component of source video on which the filter operates. The offset may be generated at least in part from a sample classifier that classifies samples reconstructed from coded video data representing a color component of the source video orthogonal to the native component according to sample intensity.
    Type: Application
    Filed: November 2, 2022
    Publication date: May 11, 2023
    Inventors: Yixin DU, Alexandros TOURAPIS, Yunfei ZHENG, Jun XIN, Alican NALCI, Mei T. GUO, Yeqing WU, Hsi-Jung WU
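    A minimal Python sketch of a cross-component offset: the co-located sample of the orthogonal component is classified by intensity, and the resulting band selects an offset added to the reconstructed sample being filtered. The band count and offset values are illustrative.

      def classify_by_intensity(cross_sample, num_bands=4, max_value=255):
          return min(num_bands - 1, cross_sample * num_bands // (max_value + 1))

      def cross_component_filter(reconstructed, cross_component, offsets_per_band):
          # For each sample being filtered, the co-located sample of the orthogonal
          # component picks an intensity band, which selects the offset to add.
          return [[rec + offsets_per_band[classify_by_intensity(c)]
                   for rec, c in zip(rec_row, cross_row)]
                  for rec_row, cross_row in zip(reconstructed, cross_component)]

      offsets = [-2, -1, 1, 2]                    # hypothetical signaled offsets
      chroma  = [[120, 121], [119, 122]]          # samples being filtered
      luma    = [[30, 200], [90, 160]]            # co-located orthogonal component
      print(cross_component_filter(chroma, luma, offsets))   # -> [[118, 123], [118, 123]]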
  • Publication number: 20230142771
    Abstract: A filtering system for video coders and decoders is disclosed that includes a feature detector having an input for samples reconstructed from coded video data representing a color component of source video, and having an output for data identifying a feature recognized therefrom; an offset calculator having an input for the feature identification data from the feature detector and having an output for a filter offset; and a filter having an input for the filter offset from the offset calculator and an input for the reconstructed samples, and having an output for filtered samples. The filtering system is expected to improve operations of video coder/decoder filtering systems by selecting filtering offsets from analysis of recovered video data in the same color plane as the samples that will be filtered.
    Type: Application
    Filed: November 2, 2022
    Publication date: May 11, 2023
    Inventors: Yixin DU, Alexandros TOURAPIS, Yunfei ZHENG, Jun XIN, Mukta S. Gore, Alican NALCI, Mei T. GUO, Yeqing WU, Hsi-Jung WU
  • Patent number: 11638036
    Abstract: An example device includes memory and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to receive a first slice header syntax element for a slice of the video data and determine a first value for the first slice header syntax element, the first value being indicative of whether dependent quantization is enabled. The one or more processors are configured to receive a second slice header syntax element for the slice of the video data and determine a second value for the second slice header syntax element, the second value being indicative of whether sign data hiding is enabled. The one or more processors are configured to determine whether transform skip residual coding is disabled for the slice based on the first value and the second value and decode the slice based on the determinations.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: April 25, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Alican Nalci, Marta Karczewicz, Muhammed Zeyd Coban
  • Publication number: 20230096567
    Abstract: Improved neural-network-based image and video coding techniques are presented, including hybrid techniques that combine tools of a host codec with neural-network-based tools. In these improved techniques, the host coding tools may include conventional video coding standards such as H.266 (VVC). In an aspect, source frames may be partitioned and either host or neural-network-based tools may be selected per partition. Coding parameter decisions for a partition may be constrained based on the partitioning and coding tool selection. Rate control for host and neural network tools may be combined. Multi-stage processing of neural network output may use a checkerboard prediction pattern.
    Type: Application
    Filed: September 23, 2022
    Publication date: March 30, 2023
    Inventors: Alican NALCI, Alexandros TOURAPIS, Hsi-Jung WU, Jiefu ZHAI, Jingteng XUE, Jun XIN, Mei GUO, Xingyu ZHANG, Yeqing WU, Yunfei ZHENG, Jean Begaint
  • Patent number: 11582491
    Abstract: An example video codec includes memory configured to store the video data and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to determine that a current mode of coding a current block of the video data is a single tree partitioning mode. Based on the current mode being the single tree partitioning mode, the one or more processors are configured to refrain from determining whether there is a non-DC coefficient for a chroma component of a transform unit (TU) for the current block and refrain from coding a low-frequency non-separable transformation (LFNST) index in response to the refraining of the determination of whether there is the non-DC coefficient. The one or more processors are configured to code the current block in the single tree partitioning mode with LFNST disabled.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: February 14, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Hilmi Enes Egilmez, Alican Nalci, Vadim Seregin, Marta Karczewicz
  • Publication number: 20220360814
    Abstract: An encoder or decoder can perform enhanced motion vector prediction by receiving an input block of data for encoding or decoding and accessing stored motion information for at least one other block of data. Based on the stored motion information, the encoder or decoder can generate a list of one or more motion vector predictor candidates for the input block in accordance with an adaptive list construction order. The encoder or decoder can predict a motion vector for the input block based on at least one of the one or more motion vector predictor candidates.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 10, 2022
    Inventors: Yeqing Wu, Alexandros Tourapis, Yunfei Zheng, Hsi-Jung Wu, Jun Xin, Albert E. Keinath, Mei Guo, Alican Nalci
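    A minimal Python sketch of building a motion vector predictor candidate list with an adaptive construction order; the adaptation rule (fall back to the temporal candidate first when no spatial neighbor is available) and the duplicate pruning are assumptions for illustration.

      def build_mvp_list(spatial_neighbors, temporal_candidate, max_candidates=2):
          order = spatial_neighbors + [temporal_candidate]
          if not any(mv is not None for mv in spatial_neighbors):
              order = [temporal_candidate] + spatial_neighbors   # adapted order
          candidates = []
          for mv in order:
              if mv is not None and mv not in candidates:        # prune duplicates
                  candidates.append(mv)
              if len(candidates) == max_candidates:
                  break
          return candidates

      def predict_mv(candidates, signaled_index=0):
          return candidates[signaled_index] if candidates else (0, 0)

      mvp_list = build_mvp_list([(4, -2), None, (4, -2)], temporal_candidate=(0, 1))
      print(predict_mv(mvp_list))   # -> (4, -2)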
  • Patent number: 11457229
    Abstract: An example device includes memory configured to store video data and one or more processors implemented in circuitry and coupled to the memory. The one or more processors determine whether a chroma block of the video data is encoded using dual tree partitioning. The one or more processors determine whether transform skip mode for the chroma block is enabled. The one or more processors, based on the chroma block being encoded using dual tree partitioning and transform skip mode being enabled for the chroma block, infer a value of a low-frequency non-separable transform (LFNST) index for the chroma block.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: September 27, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Hilmi Enes Egilmez, Alican Nalci, Muhammed Zeyd Coban, Marta Karczewicz
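    A minimal Python sketch of the inference rule in this abstract: when a chroma block uses dual tree partitioning and transform skip is enabled for it, the LFNST index is not parsed from the bitstream but inferred (assumed here to be 0, i.e. LFNST off).

      def lfnst_index_for_chroma(dual_tree, transform_skip_enabled, parse_from_bitstream):
          if dual_tree and transform_skip_enabled:
              return 0                              # inferred; nothing is parsed
          return parse_from_bitstream()

      # The parsing callback is never invoked when the value is inferred.
      print(lfnst_index_for_chroma(True, True, lambda: 2))    # -> 0
      print(lfnst_index_for_chroma(False, True, lambda: 2))   # -> 2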