Patents by Inventor Nannan Zou

Nannan Zou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12323607
    Abstract: In example embodiments, an apparatus, a method, and a computer program product are provided. An example apparatus includes processing circuitry; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processing circuitry, cause the apparatus at least to: overfit a neural network on each media item, from a batch of media items, for a number of iterations to obtain an overfitted neural network model for each media item; evaluate the overfitted neural network model on each media item to obtain evaluation errors; and update parameters of the neural network based on the evaluation errors.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: June 3, 2025
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricrì, Hamed Rezazadegan Tavakoli, Honglei Zhang, Nannan Zou
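The overfit-evaluate-update loop in the abstract above can be illustrated with a toy one-parameter model. Everything in this sketch (the scalar model y = w*x, the learning rates, and the Reptile-style meta-update) is an illustrative assumption, not the patented design:

```python
# Hypothetical sketch: overfit a copy of the model on each media item,
# evaluate the overfitted copy on that item, then update the shared base
# parameters from the per-item results.

def overfit(w, item, steps=5, lr=0.1):
    """Overfit the weight on a single (x, y) item by gradient descent."""
    x, y = item
    for _ in range(steps):
        w -= lr * 2 * (w * x - y) * x   # gradient of squared error w.r.t. w
    return w

def evaluate(w, item):
    """Evaluation error of the overfitted model on the item."""
    x, y = item
    return (w * x - y) ** 2

def meta_update(w_base, batch, meta_lr=0.5):
    """Update base parameters using each item's overfitted model."""
    for item in batch:
        w_over = overfit(w_base, item)
        _err = evaluate(w_over, item)          # evaluation error (could weight the step)
        w_base += meta_lr * (w_over - w_base)  # move base toward overfitted model
    return w_base

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # items all consistent with w = 2
w = 0.0
for _ in range(20):
    w = meta_update(w, batch)
```

After a few meta-iterations the base weight settles near 2, the value from which every item is easiest to overfit.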
  • Patent number: 12321870
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product. An apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: perform an overfitting operation, at an encoder side, to obtain an overfitted probability model, wherein overfitting comprises one or more training operations applied to a probability model, wherein one or more parameters of the probability model are trained; use the overfitted probability model to provide probability estimates to a lossless codec or a substantially lossless codec for encoding data or a portion of the data; and signal information to a decoder on whether to perform the overfitting operation at the decoder side.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: June 3, 2025
    Assignee: Nokia Technologies Oy
    Inventors: Nannan Zou, Francesco Cricrì, Honglei Zhang, Hamed Rezazadegan Tavakoli, Jani Lainema, Miska Matias Hannuksela
  • Publication number: 20240289590
    Abstract: Various embodiments provide a method, an apparatus, and a computer program product. The method comprises: defining an attention block comprising: a set of initial neural network layers, wherein each layer is caused to process an output of a previous layer, and wherein a first layer processes an input of a dense split attention block; core attention blocks that process one or more outputs of the set of initial neural network layers; a concatenation block for concatenating one or more outputs of the core attention blocks and at least one intermediate output of the set of initial neural network layers; one or more final neural network layers that process at least the output of the concatenation block; and a summation block caused to sum an output of the final neural network layers and an input to the attention block; and providing an output of the summation block as a final output of the attention block.
    Type: Application
    Filed: June 16, 2022
    Publication date: August 29, 2024
    Inventors: Francesco CRICRÌ, Nannan ZOU, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI
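The wiring of the attention block described above can be traced with stand-in functions. Only the data flow (initial layers in sequence, core attention on their outputs, concatenation with an intermediate feature, final layer, residual summation) mirrors the abstract; every layer body below is an arbitrary toy function:

```python
# Toy data-flow sketch of the described attention block; the element-wise
# "layers" are placeholders, not learned networks.

def attention_block(x):
    # set of initial layers: each processes the previous layer's output
    h1 = [v * 2.0 for v in x]            # initial layer 1 (processes block input)
    h2 = [v + 1.0 for v in h1]           # initial layer 2 (processes h1)

    # core attention blocks on the initial layers' outputs (stand-ins)
    a1 = [v * 0.5 for v in h1]
    a2 = [v * 0.5 for v in h2]

    # concatenation block: attention outputs plus an intermediate output (h1)
    cat = a1 + a2 + h1

    # final layer: toy mixing that folds the concatenation back to input width
    n = len(x)
    final = [sum(cat[i::n]) / (len(cat) // n) for i in range(n)]

    # summation block: residual connection with the block input
    return [f + v for f, v in zip(final, x)]

out = attention_block([1.0, 2.0])
```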
  • Publication number: 20240265240
    Abstract: An example apparatus includes at least one processor; and at least one non-transitory memory comprising computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: learn importance of one or more parameters by using a training dataset; define one or more masks for indicating the importance of the one or more parameters for model finetuning; share at least one mask of the one or more masks with at least one of an encoder or a decoder; finetune at least one parameter of the one or more parameters based at least on the at least one mask; and send or signal one or more weight updates corresponding to the at least one parameter in a bitstream to the decoder.
    Type: Application
    Filed: June 17, 2022
    Publication date: August 8, 2024
    Inventors: Honglei ZHANG, Francesco CRICRÌ, Ramin GHAZNAVI YOUVALARI, Hamed REZAZADEGAN TAVAKOLI, Nannan ZOU, Vinod Kumar MALAMAL VADAKITAL, Miska Matias HANNUKSELA, Yat Hong LAM, Jani LAINEMA, Emre Baris AKSU
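The mask-driven finetuning flow can be sketched end to end on a tiny linear model: score importance on a training set, build a binary mask over the most important parameters, finetune only those, and report only the masked weight updates. The gradient-magnitude importance measure and all numbers are toy assumptions:

```python
# Hypothetical sketch of importance-masked finetuning with masked weight
# updates signalled in the bitstream.

def importance(weights, data):
    """Importance score per weight: mean |gradient| of squared error."""
    scores = [0.0] * len(weights)
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        for i, xi in enumerate(x):
            scores[i] += abs(2 * (pred - y) * xi)
    return [s / len(data) for s in scores]

def make_mask(scores, keep):
    """Binary mask marking the `keep` highest-importance parameters."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:keep]
    return [1 if i in top else 0 for i in range(len(scores))]

def finetune_masked(weights, mask, data, lr=0.05, steps=50):
    """Gradient-descend only the masked parameters; return the weight updates."""
    w = list(weights)
    for _ in range(steps):
        for x, y in data:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            for i, xi in enumerate(x):
                if mask[i]:
                    w[i] -= lr * 2 * (pred - y) * xi
    # only the masked updates would be sent to the decoder
    return [w[i] - weights[i] if mask[i] else 0.0 for i in range(len(w))]

data = [([1.0, 0.1], 3.0), ([2.0, 0.1], 6.0)]  # target driven mostly by dim 0
weights = [1.0, 1.0]
mask = make_mask(importance(weights, data), keep=1)
updates = finetune_masked(weights, mask, data)
```

Because the second input dimension barely affects the loss, the mask keeps only the first parameter, and the signalled update for the masked-out parameter is exactly zero.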
  • Publication number: 20240267543
    Abstract: An example method includes: receiving a target frame and one or more reference frames; extracting a first feature map from a first predicted target frame predicted from a first reference frame, and a second feature map from a second predicted frame predicted from a second target frame, wherein the first predicted target frame is a backward predicted target frame and the second predicted target frame is a forward predicted target frame; generating a refined residual feature based at least on the first feature map, the second feature map, and a third feature map extracted from a feature decoder net module or circuit; generating a frame residual based at least on the refined residual feature; and generating an output reconstructed frame based at least on the frame residual and an average frame, wherein the average frame represents an average of the first predicted target frame and the second predicted target frame.
    Type: Application
    Filed: January 29, 2024
    Publication date: August 8, 2024
    Inventors: Nannan ZOU, Francesco CRICRÌ, Honglei ZHANG
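The bi-prediction pipeline in this abstract reduces to a small data flow: features from the backward- and forward-predicted frames plus a decoder-side feature map produce a refined residual, which is added to the average of the two predictions. Every "network" below is a trivial stand-in chosen only to make the flow executable:

```python
# Toy sketch of the bi-directional reconstruction path; feature extractor
# and residual-refinement net are placeholder functions.

def extract_features(frame):
    """Stand-in feature extractor (the abstract uses learned feature nets)."""
    return [p * 0.5 for p in frame]

def refine_residual(f_bwd, f_fwd, f_dec):
    """Stand-in residual-refinement net fusing the three feature maps."""
    return [0.25 * a + 0.25 * b + d for a, b, d in zip(f_bwd, f_fwd, f_dec)]

def reconstruct(backward_pred, forward_pred, decoder_features):
    f_bwd = extract_features(backward_pred)   # features of backward prediction
    f_fwd = extract_features(forward_pred)    # features of forward prediction
    residual = refine_residual(f_bwd, f_fwd, decoder_features)
    # average frame of the two predicted target frames
    average = [(a + b) / 2 for a, b in zip(backward_pred, forward_pred)]
    return [avg + r for avg, r in zip(average, residual)]

frame = reconstruct([2.0, 4.0], [4.0, 6.0], [0.0, 0.0])
```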
  • Publication number: 20240249514
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: train or finetune one or more additional parameters of at least one neural network (NN) or a portion of the at least one NN, wherein the one or more additional parameters comprise one or more scaling parameters; and encode or decode one or more media elements based on the at least one neural network or a portion of the at least one NN comprising the trained or finetuned one or more additional parameters.
    Type: Application
    Filed: May 13, 2022
    Publication date: July 25, 2024
    Inventors: Jani LAINEMA, Francesco CRICRÌ, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI, Yat Hong LAM, Miska Matias HANNUKSELA, Nannan ZOU
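Finetuning only *additional* scaling parameters while the base weights stay frozen can be shown on a one-layer toy model. The squared-error objective, learning rate, and per-weight scales are illustrative choices, not the patented codec:

```python
# Minimal sketch: learnable multiplicative scales on top of frozen weights.

def forward(x, weights, scales):
    """Frozen weights modulated by learnable per-weight scales."""
    return sum(s * w * xi for s, w, xi in zip(scales, weights, x))

def finetune_scales(weights, data, lr=0.05, steps=100):
    """Gradient descent on the scales only; `weights` are never touched."""
    scales = [1.0] * len(weights)      # additional parameters, initialised neutral
    for _ in range(steps):
        for x, y in data:
            err = forward(x, weights, scales) - y
            for i, (w, xi) in enumerate(zip(weights, x)):
                scales[i] -= lr * 2 * err * w * xi   # gradient w.r.t. scale only
    return scales

weights = [1.0, 1.0]                                  # frozen, "pretrained"
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0)]
scales = finetune_scales(weights, data)
```

Since only the small scale vector changes, only those few values would need to be trained and signalled, not the full weight set.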
  • Publication number: 20240146938
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product for end-to-end learned predictive coding of media frames. An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode one or more media frames for at least one neural network; wherein an inter-frame codec is applied to at least one media frame of the one or more media frames; and wherein a first decoded reference frame and a second decoded reference frame refer to reference frames for the at least one media frame.
    Type: Application
    Filed: March 9, 2022
    Publication date: May 2, 2024
    Inventors: Nannan ZOU, Honglei ZHANG, Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI, Ramin GHAZNAVI YOUVALARI
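Inter-frame coding against two decoded reference frames can be sketched with the classic predict/quantise/reconstruct split. The averaging predictor and uniform quantiser below are generic stand-ins for the learned components in the abstract:

```python
# Illustrative two-reference inter-frame codec sketch.

STEP = 0.5  # quantisation step for the residual (illustrative)

def predict(ref1, ref2):
    """Predict the current frame from the two decoded reference frames."""
    return [(a + b) / 2 for a, b in zip(ref1, ref2)]

def encode(frame, ref1, ref2):
    """Return the quantised prediction residual to be transmitted."""
    pred = predict(ref1, ref2)
    return [round((p - q) / STEP) for p, q in zip(frame, pred)]

def decode(residual_q, ref1, ref2):
    """Reconstruct the frame from the residual and the same references."""
    pred = predict(ref1, ref2)
    return [q * STEP + p for q, p in zip(residual_q, pred)]

refs = ([0.0, 2.0], [2.0, 4.0])
recon = decode(encode([1.2, 3.4], *refs), *refs)
```

The decoder uses exactly the same references and predictor as the encoder, so reconstruction error is bounded by half the quantisation step.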
  • Publication number: 20230269387
    Abstract: In example embodiments, an apparatus, a method, and a computer program product are provided. An example apparatus includes processing circuitry; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processing circuitry, cause the apparatus at least to: overfit a neural network on each media item, from a batch of media items, for a number of iterations to obtain an overfitted neural network model for each media item; evaluate the overfitted neural network model on each media item to obtain evaluation errors; and update parameters of the neural network based on the evaluation errors.
    Type: Application
    Filed: June 11, 2021
    Publication date: August 24, 2023
    Inventors: Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI, Honglei ZHANG, Nannan ZOU
  • Publication number: 20230196072
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product. An example apparatus includes at least one processor; and at least one non-transitory memory comprising computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: determine a subset of parameters to overfit from a set of candidate parameters (OPs) of a decoder-side neural network to be overfitted; wherein the subset of parameters to overfit is smaller than the set of candidate parameters to be overfitted; and overfit the determined subset of parameters.
    Type: Application
    Filed: December 16, 2021
    Publication date: June 22, 2023
    Inventors: Nannan ZOU, Francesco CRICRÌ, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI, Jani LAINEMA, Miska Matias HANNUKSELA
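Selecting a strictly smaller subset of the candidate overfittable parameters and overfitting only that subset can be demonstrated on a three-parameter toy model. Gradient magnitude on the content being coded is one plausible selection criterion, used here as an assumption, not necessarily the patented one:

```python
# Toy sketch: pick the most sensitive subset of candidate parameters (OPs)
# and overfit only those on the content.

def loss(params, x, y):
    return (sum(p * xi for p, xi in zip(params, x)) - y) ** 2

def grad(params, x, y):
    err = sum(p * xi for p, xi in zip(params, x)) - y
    return [2 * err * xi for xi in x]

def select_subset(params, x, y, k):
    """Keep the k parameters with the largest gradient magnitude."""
    g = grad(params, x, y)
    return sorted(range(len(params)), key=lambda i: -abs(g[i]))[:k]

def overfit_subset(params, subset, x, y, lr=0.05, steps=100):
    """Gradient-descend only the selected parameters."""
    p = list(params)
    for _ in range(steps):
        g = grad(p, x, y)
        for i in subset:
            p[i] -= lr * g[i]
    return p

params = [1.0, 1.0, 1.0]                     # candidate parameters (OPs)
x, y = [2.0, 0.1, 0.1], 5.0                  # content to overfit on
subset = select_subset(params, x, y, k=1)    # strictly smaller than candidates
tuned = overfit_subset(params, subset, x, y)
```

Overfitting one well-chosen parameter drives the content loss to essentially zero, so fewer parameters (and fewer signalled updates) suffice.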
  • Publication number: 20230169372
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product. An apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: perform an overfitting operation, at an encoder side, to obtain an overfitted probability model, wherein overfitting comprises one or more training operations applied to a probability model, wherein one or more parameters of the probability model are trained; use the overfitted probability model to provide probability estimates to a lossless codec or a substantially lossless codec for encoding data or a portion of the data; and signal information to a decoder on whether to perform the overfitting operation at the decoder side.
    Type: Application
    Filed: December 1, 2021
    Publication date: June 1, 2023
    Inventors: Nannan ZOU, Francesco CRICRÌ, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI, Jani LAINEMA, Miska Matias HANNUKSELA
  • Patent number: 11558628
    Abstract: An apparatus includes circuitry configured to: partition an input tensor into one or more block tensors; partition at least one of the block tensors into one or more continuation bands, the one or more continuation bands being associated with a caching counter having a value; store the one or more continuation bands in a cache managed using a cache manager; retrieve, prior to a convolution or pooling operation on a current block tensor, the one or more continuation bands of a previous block tensor from the cache that are adjacent to a current block tensor; concatenate the retrieved continuation bands with the current block tensor; apply the convolution or pooling operation on the current block tensor after the concatenation; decrease the respective caching counter value of the retrieved continuation bands; and clear the continuation bands from the cache when its respective caching counter reaches a value of zero.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: January 17, 2023
    Assignee: Nokia Technologies Oy
    Inventors: Honglei Zhang, Francesco Cricrì, Hamed Rezazadegan Tavakoli, Jani Lainema, Emre Aksu, Nannan Zou
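The continuation-band caching scheme can be illustrated in 1-D: split the signal into blocks, cache each block's trailing samples with a use counter, prepend the cached band to the next block before convolving, and evict the band when its counter reaches zero. The real patent works on multi-dimensional block tensors; this sketch keeps only the caching logic, and the kernel and block sizes are arbitrary:

```python
# Simplified 1-D sketch of block-wise convolution with cached
# "continuation bands" managed by per-band use counters.

KERNEL = [0.25, 0.5, 0.25]   # valid convolution: each block needs 2 extra samples

def conv_valid(x):
    """Valid (no-padding) 1-D convolution with KERNEL."""
    k = len(KERNEL)
    return [sum(KERNEL[j] * x[i + j] for j in range(k)) for i in range(len(x) - k + 1)]

def blockwise_conv(signal, block_size):
    cache = {}                         # band id -> (band samples, remaining uses)
    out, prev_id = [], None
    for start in range(0, len(signal), block_size):
        block = list(signal[start:start + block_size])
        if prev_id in cache:           # retrieve the previous block's band
            band, count = cache[prev_id]
            block = band + block       # concatenate band with current block
            count -= 1                 # decrease the band's caching counter
            if count == 0:
                del cache[prev_id]     # clear the band once no longer needed
            else:
                cache[prev_id] = (band, count)
        out += conv_valid(block)       # convolve the concatenated block
        prev_id = start                # cache this block's trailing band
        cache[prev_id] = (block[-(len(KERNEL) - 1):], 1)
    return out

signal = [float(i) for i in range(8)]
blocked = blockwise_conv(signal, 4)
```

Processing block by block with cached bands reproduces the convolution of the whole signal, which is the point: the full input tensor never has to be in memory at once.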
  • Publication number: 20220191524
    Abstract: An apparatus includes circuitry configured to: partition an input tensor into one or more block tensors; partition at least one of the block tensors into one or more continuation bands, the one or more continuation bands being associated with a caching counter having a value; store the one or more continuation bands in a cache managed using a cache manager; retrieve, prior to a convolution or pooling operation on a current block tensor, the one or more continuation bands of a previous block tensor from the cache that are adjacent to a current block tensor; concatenate the retrieved continuation bands with the current block tensor; apply the convolution or pooling operation on the current block tensor after the concatenation; decrease the respective caching counter value of the retrieved continuation bands; and clear the continuation bands from the cache when its respective caching counter reaches a value of zero.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 16, 2022
    Inventors: Honglei ZHANG, Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI, Jani LAINEMA, Emre AKSU, Nannan ZOU