Patents by Inventor Hamed Rezazadegan Tavakoli

Hamed Rezazadegan Tavakoli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250147753
    Abstract: In accordance with example embodiments of the invention there are at least a method and an apparatus to execute a machine learning inference loop of a currently deployed or stored at least one machine learning model, wherein the currently deployed or stored at least one machine learning model is identified based on a manifest file received from a communication network; based on determined factors, requesting from the communication network a model update for use with the currently deployed or stored at least one machine learning model; based on the request, receiving information from the communication network comprising the model update; and based on the information, performing a model update to update the currently deployed or stored at least one machine learning model. A hedged code sketch of this update loop follows the entry.
    Type: Application
    Filed: November 6, 2024
    Publication date: May 8, 2025
    Inventors: Serhan Gul, Homayun Afrabandpey, Saba Ahsan, Hamed Rezazadegan Tavakoli, Igor Danilo Diego Curcio, Gazi Karam Illahi
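    Code sketch (editorial): a minimal Python sketch of the manifest-driven update loop summarized in the abstract above. The names Manifest, needs_update, request_model_update and apply_update are illustrative assumptions, not interfaces defined in the application; the "determined factors" are stood in for by a simple score check.
      from dataclasses import dataclass
      from typing import Callable, Dict

      @dataclass
      class Manifest:          # identifies the currently deployed/stored model (assumed structure)
          model_id: str
          version: int

      def apply_update(model, info):
          # Toy "model update": wrap the old model with a corrective bias received from the network.
          return lambda x, m=model, b=info["bias"]: m(x) + b

      def inference_loop(model, manifest: Manifest,
                         needs_update: Callable[[float], bool],
                         request_model_update: Callable[[Manifest], Dict]):
          """Run inference; when the determined factors trigger it, fetch and apply a model update."""
          for sample in [0.1, 0.5, 0.9]:                       # stand-in input stream
              score = model(sample)
              if needs_update(score):                          # "determined factors"
                  info = request_model_update(manifest)        # ask the communication network
                  model = apply_update(model, info)            # update the deployed model
                  manifest.version = info["version"]
          return model

      if __name__ == "__main__":
          updated = inference_loop(
              model=lambda x: 2.0 * x,
              manifest=Manifest(model_id="m0", version=1),
              needs_update=lambda score: score < 0.5,
              request_model_update=lambda man: {"version": man.version + 1, "bias": 0.1},
          )
          print(updated(0.3))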
  • Publication number: 20250113079
    Abstract: A method is provided for defining a metadata box of a neural network representation (NNR) item data, wherein the NNR item data comprises an NNR bitstream; and defining an association between the NNR item data and an NNR configuration by using a configuration item property, wherein the NNR configuration item property comprises information about stored NNR item data. Corresponding apparatuses and computer program products are also provided.
    Type: Application
    Filed: December 12, 2024
    Publication date: April 3, 2025
    Inventors: Emre AKSU, Miska HANNUKSELA, Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI
  • Patent number: 12242969
    Abstract: An apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: estimate an importance of parameters of a neural network based on a graph diffusion process over at least one layer of the neural network; determine the parameters of the neural network that are suitable for pruning or sparsification; remove neurons of the neural network to prune or sparsify the neural network; and provide at least one syntax element for signaling the pruned or sparsified neural network over a communication channel, wherein the at least one syntax element comprises at least one neural network representation syntax element. A hedged code sketch of this pruning idea follows the entry.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: March 4, 2025
    Assignee: Nokia Technologies Oy
    Inventors: Honglei Zhang, Francesco Cricri, Hamed Rezazadegan Tavakoli, Joachim Wabnig, Iraj Saniee, Miska Matias Hannuksela, Emre Aksu
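    Code sketch (editorial): a hedged numpy sketch of the pruning idea in the abstract above: diffuse an importance score over a graph built from one layer's weights, then zero out the least important output neurons. The diffusion rule, the keep ratio and the function names are assumptions for illustration; the actual procedure and the NNR signaling syntax are defined in the patent.
      import numpy as np

      def neuron_importance(weight: np.ndarray, steps: int = 5, alpha: float = 0.85) -> np.ndarray:
          """Diffuse importance over a layer graph built from |W| (shape: out x in); an assumed rule, not the patented one."""
          a = np.abs(weight)
          a = a / (a.sum(axis=1, keepdims=True) + 1e-12)       # row-normalized transition weights
          score = np.ones(weight.shape[0]) / weight.shape[0]
          for _ in range(steps):                               # power-iteration style diffusion
              score = alpha * a @ (a.T @ score) + (1 - alpha) / weight.shape[0]
          return score

      def prune_neurons(weight: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
          """Zero out the output neurons whose diffused importance is lowest."""
          score = neuron_importance(weight)
          k = max(1, int(keep_ratio * weight.shape[0]))
          keep = np.argsort(score)[-k:]
          pruned = np.zeros_like(weight)
          pruned[keep] = weight[keep]
          return pruned

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          w = rng.normal(size=(8, 16))
          print(np.count_nonzero(prune_neurons(w).any(axis=1)), "of 8 neurons kept")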
  • Patent number: 12219204
    Abstract: A method is provided for defining a metadata box of a neural network representation (NNR) item data, wherein the NNR item data comprises an NNR bitstream; and defining an association between the NNR item data and an NNR configuration by using a configuration item property, wherein the NNR configuration item property comprises information about stored NNR item data. Corresponding apparatuses and computer program products are also provided.
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: February 4, 2025
    Assignee: Nokia Technologies Oy
    Inventors: Emre Aksu, Miska Hannuksela, Francesco Cricrì, Hamed Rezazadegan Tavakoli
  • Patent number: 12170779
    Abstract: Example embodiments provide a system for training a data coding pipeline including a feature extractor neural network, an encoder neural network, and a decoder neural network configured to reconstruct input data based on encoded features. A plurality of losses corresponding to different tasks may be determined for the coding pipeline. Tasks may be performed based on an output of the coding pipeline. A weight update may be determined for at least a subset of the coding pipeline based on the plurality of losses. The weight update may be configured to reduce a number of iterations for fine-tuning the coding pipeline for one of the tasks. This enables faster adaptation of the coding pipeline for one of the tasks after deployment of the coding pipeline. Apparatuses, methods, and computer programs are disclosed. A hedged code sketch of such a multi-task weight update follows the entry.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: December 17, 2024
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Nam Le, Hamed Rezazadegan Tavakoli, Honglei Zhang, Miska Matias Hannuksela, Emre Baris Aksu
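    Code sketch (editorial): a hedged numpy sketch of a weight update driven by several task losses so that later per-task fine-tuning needs fewer iterations, in the spirit of the abstract above. This is a Reptile-style meta-update on a toy linear model; the coding pipeline, its losses and its actual update rule in the patent may differ.
      import numpy as np

      def task_loss_grad(theta: np.ndarray, x: np.ndarray, target: np.ndarray):
          """Squared-error loss of a linear stand-in model and its gradient for one task."""
          err = x @ theta - target
          return float(np.mean(err ** 2)), 2 * x.T @ err / len(x)

      def multi_task_update(theta, tasks, inner_steps=3, inner_lr=0.1, meta_lr=0.5):
          """Pull theta toward the per-task adapted weights, reducing later fine-tuning effort."""
          adapted = []
          for x, target in tasks:
              t = theta.copy()
              for _ in range(inner_steps):                     # short per-task adaptation
                  _, g = task_loss_grad(t, x, target)
                  t -= inner_lr * g
              adapted.append(t)
          return theta + meta_lr * (np.mean(adapted, axis=0) - theta)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.normal(size=(32, 4))
          tasks = [(x, x @ rng.normal(size=4)) for _ in range(3)]   # three toy "tasks"
          theta = np.zeros(4)
          for _ in range(20):
              theta = multi_task_update(theta, tasks)
          print(round(task_loss_grad(theta, *tasks[0])[0], 4))      # loss on task 0 after the meta-updates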
  • Publication number: 20240357104
    Abstract: Various embodiments describe an apparatus, a method, and a computer program product. An example apparatus includes at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to perform: encoding an input picture by using a first encoder or first encoding parameters; encoding the input picture by using a second encoder or second encoding parameters; generating a first reconstructed picture based on the encoding of the input picture by using the first encoder or the first encoding parameters; and generating a second reconstructed picture based on the encoding of the input picture by using the second encoder or the second encoding parameters.
    Type: Application
    Filed: April 19, 2024
    Publication date: October 24, 2024
    Inventors: Honglei ZHANG, Francesco CRICRÌ, Alireza AMINLOU, Miska Matias HANNUKSELA, Nam Hai LE, Jukka Ilari AHONEN, Hamed REZAZADEGAN TAVAKOLI
  • Patent number: 12113974
    Abstract: A method, an apparatus, and a computer program product are provided. An example method includes defining an enhancement message comprising at least one of the following: an identifying number for identifying a post-processing filter; a mode identity (idc) field used for indicating association of a post-processing filter with the identifying number; a flag for specifying that the enhancement message is used for a current layer; and a payload byte comprising a bitstream; and using the enhancement message for at least one of specifying a neural network that is used as a post-processing filter or cancelling a use of a previous post-processing filter with the same identifying number. A hedged sketch of these message fields follows the entry.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: October 8, 2024
    Assignee: Nokia Technologies Oy
    Inventors: Miska Matias Hannuksela, Emre Baris Aksu, Francesco Cricrì, Hamed Rezazadegan Tavakoli
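    Code sketch (editorial): a hedged Python sketch that models the listed message fields as a plain data structure and shows the specify-or-cancel behaviour; the field names and the mode_idc convention used here are illustrative assumptions, not the standardized SEI syntax.
      from dataclasses import dataclass
      from typing import Dict, Optional

      @dataclass
      class PostFilterMessage:
          filter_id: int              # identifying number of the post-processing filter
          mode_idc: int               # mode identity field associating a filter with the id
          current_layer_flag: bool    # message applies to the current layer
          payload: bytes              # payload bytes carrying a bitstream

      def apply_message(active: Dict[int, bytes], msg: PostFilterMessage) -> Optional[bytes]:
          """Either specify a neural post-filter or cancel a previous one with the same identifying number."""
          if msg.mode_idc == 0:                # assumed convention: mode 0 cancels the previous filter
              active.pop(msg.filter_id, None)
              return None
          active[msg.filter_id] = msg.payload  # specify/replace the filter for this id
          return msg.payload

      if __name__ == "__main__":
          filters: Dict[int, bytes] = {}
          apply_message(filters, PostFilterMessage(7, 1, True, b"\x00\x01"))
          apply_message(filters, PostFilterMessage(7, 0, True, b""))
          print(filters)                       # {} -> the filter with id 7 was cancelled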
  • Publication number: 20240314362
    Abstract: A method includes receiving auxiliary information and/or at least one auxiliary feature, the at least one auxiliary feature being based on the auxiliary information; wherein the auxiliary information comprises information available within a decoder, or the auxiliary information comprises a bitstream output from data encoded using an encoder; receiving decoded data generated using the decoder; and generating filtered data with at least one filter using the auxiliary information and/or the at least one auxiliary feature via applying the filter to the decoded data; wherein the at least one filter comprises a learned filter; wherein the filtered data is configured to be used for at least one machine task performed using a model. A hedged code sketch of such a learned filter follows the entry.
    Type: Application
    Filed: June 21, 2022
    Publication date: September 19, 2024
    Inventors: Ramin GHAZNAVI YOUVALARI, Honglei ZHANG, Francesco CRICRÌ, Nam Hai LE, Miska Matias HANNUKSELA, Hamed REZAZADEGAN TAVAKOLI, Jukka Ilari AHONEN
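    Code sketch (editorial): a hedged numpy sketch of filtering decoded data with a filter that also consumes an auxiliary feature (here a constant QP-like map); the per-pixel linear filter stands in for the learned filter and its weights are placeholders, not trained values.
      import numpy as np

      def learned_filter(decoded: np.ndarray, aux: np.ndarray, weights: np.ndarray, bias: float) -> np.ndarray:
          """Apply a per-pixel linear filter to [decoded, auxiliary] channels (stand-in for the learned filter)."""
          stacked = np.stack([decoded, aux], axis=-1)          # H x W x 2
          return stacked @ weights + bias                      # H x W filtered output

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          decoded = rng.uniform(size=(4, 4))   # stand-in decoded frame
          aux = np.full((4, 4), 0.3)           # stand-in auxiliary information/feature
          w = np.array([0.9, 0.1])             # placeholder "learned" weights
          filtered = learned_filter(decoded, aux, w, bias=0.01)
          print(filtered.shape)                # (4, 4): would feed a downstream machine task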
  • Publication number: 20240306986
    Abstract: An example apparatus includes: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to perform: receiving a media bitstream comprising one or more media units and at least a first enhancement information message, wherein the first enhancement information message comprises or identifies a neural network, and wherein the received media bitstream comprises a third enhancement information message; decoding the one or more media units; and using the neural network to enhance or filter the decoded one or more media units within a temporal scope determined with the third enhancement information message, a spatial scope determined with the third enhancement information message, or a spatio-temporal scope determined with the third enhancement information message.
    Type: Application
    Filed: May 29, 2024
    Publication date: September 19, 2024
    Inventors: Hamed REZAZADEGAN TAVAKOLI, Francesco CRICRÌ, Emre Baris AKSU, Miska Matias HANNUKSELA
  • Publication number: 20240303486
    Abstract: An apparatus may be configured to: process at least one input with an efficient neural network; determine at least one performance criterion for the efficient neural network; and activate online learning for the efficient neural network based, at least partially, on the at least one performance criterion. An apparatus may be configured to: receive, from an efficient neural network, at least one video frame or at least one feature; determine at least one inference result based, at least partially, on the at least one video frame or the at least one feature; and transmit, to the efficient neural network, the at least one inference result. A hedged code sketch of the online-learning criterion follows the entry.
    Type: Application
    Filed: February 20, 2024
    Publication date: September 12, 2024
    Inventors: Hamed Rezazadegan Tavakoli, Amirhossein Hassankhani, Esa Rahtu
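    Code sketch (editorial): a hedged Python sketch of gating online learning for the efficient network by a performance criterion, here agreement with inference results received from a second apparatus; the threshold and the comparison are illustrative assumptions.
      def maybe_activate_online_learning(efficient_predict, remote_results, frames, agree_threshold=0.8):
          """Compare the efficient model against received inference results; decide whether to activate online learning."""
          agree = sum(efficient_predict(f) == r for f, r in zip(frames, remote_results))
          ratio = agree / max(1, len(frames))
          # Performance criterion: activate online learning when agreement with the remote results drops.
          # When active, the efficient network would then be updated from the received inference results.
          return ratio < agree_threshold, ratio

      if __name__ == "__main__":
          frames = [0, 1, 2, 3]                            # stand-in video frames
          remote = [0, 1, 1, 1]                            # inference results transmitted back to the device
          active, ratio = maybe_activate_online_learning(lambda f: f % 2, remote, frames)
          print(active, ratio)                             # True 0.75 -> online learning activated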
  • Publication number: 20240289590
    Abstract: Various embodiments provide a method, an apparatus, and a computer program product. The method comprises: defining an attention block comprising: a set of initial neural network layers, wherein each layer is caused to process an output of a previous layer, and wherein a first layer processes an input of a dense split attention block; core attention blocks that process one or more outputs of the set of initial neural network layers; a concatenation block for concatenating one or more outputs of the core attention blocks and at least one intermediate output of the set of initial neural network layers; one or more final neural network layers that process at least the output of the concatenation block; and a summation block caused to sum an output of the final neural network layers and an input to the attention block; and providing an output of the summation block as a final output of the attention block. A hedged code sketch of this block wiring follows the entry.
    Type: Application
    Filed: June 16, 2022
    Publication date: August 29, 2024
    Inventors: Francesco CRICRÌ, Nannan ZOU, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI
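    Code sketch (editorial): a hedged numpy sketch of the block wiring described above: two initial layers, core attention on an initial-layer output, concatenation with an intermediate output, a final layer, and a residual sum with the block input. The layer sizes, the tanh activations and the dot-product attention are assumptions, not the claimed architecture.
      import numpy as np

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def core_attention(x):
          """Plain dot-product self-attention over tokens (N x C); stand-in for the core attention blocks."""
          return softmax(x @ x.T / np.sqrt(x.shape[1])) @ x

      def attention_block(x, w_init, w_final):
          h1 = np.tanh(x @ w_init[0])              # first initial layer (processes the block input)
          h2 = np.tanh(h1 @ w_init[1])             # next initial layer (processes the previous output)
          att = core_attention(h2)                 # core attention on an initial-layer output
          cat = np.concatenate([att, h1], axis=1)  # concatenate attention output + intermediate output
          out = cat @ w_final                      # final neural network layer
          return out + x                           # summation block: residual with the block input

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n, c = 6, 8
          x = rng.normal(size=(n, c))
          w_init = [rng.normal(size=(c, c)) * 0.1, rng.normal(size=(c, c)) * 0.1]
          w_final = rng.normal(size=(2 * c, c)) * 0.1
          print(attention_block(x, w_init, w_final).shape)   # (6, 8): same shape as the input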
  • Publication number: 20240265240
    Abstract: An example apparatus includes at least one processor; and at least one non-transitory memory comprising computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: learn importance of one or more parameters by using a training dataset; define one or more masks for indicating the importance of the one or more parameters for a model finetuning; share at least one mask of the one or more masks with at least one of an encoder or a decoder; finetune at least one parameter of the one or more parameters based at least on the at least one mask; and send or signal one or more weight updates corresponding to the at least one parameter in a bitstream to the decoder. A hedged code sketch of this mask-guided finetuning follows the entry.
    Type: Application
    Filed: June 17, 2022
    Publication date: August 8, 2024
    Inventors: Honglei ZHANG, Francesco CRICRÌ, Ramin GHAZNAVI YOUVALARI, Hamed REZAZADEGAN TAVAKOLI, Nannan ZOU, Vinod Kumar MALAMAL VADAKITAL, Miska Matias HANNUKSELA, Yat Hong LAM, Jani LAINEMA, Emre Baris AKSU
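    Code sketch (editorial): a hedged numpy sketch of mask-guided fine-tuning as summarized above: score parameter importance on a training set, keep the top fraction as a mask shared with the decoder, fine-tune only those parameters, and signal only the masked weight update. Scoring by mean |gradient| and the keep ratio are assumptions for illustration.
      import numpy as np

      def importance_mask(grads: np.ndarray, keep_ratio: float = 0.2) -> np.ndarray:
          """Mark the parameters with the largest average |gradient|; this mask would be shared with the decoder."""
          score = np.mean(np.abs(grads), axis=0)
          return score >= np.quantile(score, 1.0 - keep_ratio)

      def finetune_and_signal(weights, grads, mask, lr=0.05):
          """Fine-tune only the masked parameters and return the sparse weight update to transmit."""
          step = -lr * np.mean(grads, axis=0)
          update = np.where(mask, step, 0.0)               # weight update restricted to the mask
          return weights + update, update[mask]            # new weights, values actually signalled

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          w = rng.normal(size=100)
          g = rng.normal(size=(16, 100))                   # stand-in per-sample gradients
          mask = importance_mask(g)
          w_new, payload = finetune_and_signal(w, g, mask)
          print(int(mask.sum()), "parameters fine-tuned;", payload.size, "update values sent")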
  • Publication number: 20240249514
    Abstract: Various embodiments provide an apparatus, a method, and a computer program product. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: train or finetune one or more additional parameters of at least one neural network (NN) or a portion of the at least one NN, wherein the one or more additional parameters comprise one or more scaling parameters; and encode or decode one or more media elements based on the at least one neural network or a portion of the at least one NN comprising the trained or finetuned one or more additional parameters. A hedged code sketch of finetuning such scaling parameters follows the entry.
    Type: Application
    Filed: May 13, 2022
    Publication date: July 25, 2024
    Inventors: Jani LAINEMA, Francesco CRICRÌ, Honglei ZHANG, Hamed REZAZADEGAN TAVAKOLI, Yat Hong LAM, Miska Matias HANNUKSELA, Nannan ZOU
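    Code sketch (editorial): a hedged numpy sketch of training only additional multiplicative scaling parameters on top of a frozen layer, as in the abstract above; the frozen linear layer, the squared-error loss and the learning rate are placeholder assumptions, not the filing's codec components.
      import numpy as np

      def scaled_layer(x, frozen_w, scale):
          """Frozen linear layer whose output channels are modulated by the learnable scaling parameters."""
          return (x @ frozen_w) * scale

      def finetune_scales(x, target, frozen_w, steps=300, lr=0.02):
          scale = np.ones(frozen_w.shape[1])               # the only parameters being trained
          base = x @ frozen_w                              # frozen-layer output (never updated)
          for _ in range(steps):
              grad = 2 * np.mean((base * scale - target) * base, axis=0)   # d(MSE)/d(scale), per channel
              scale -= lr * grad
          return scale

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.normal(size=(64, 8))
          frozen_w = rng.normal(size=(8, 4))
          target = (x @ frozen_w) * np.array([0.5, 1.5, 1.0, 2.0])   # scales the sketch should recover
          print(np.round(finetune_scales(x, target, frozen_w), 2))   # ~[0.5 1.5 1.  2. ]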
  • Patent number: 12036036
    Abstract: An example method includes receiving a media bitstream comprising one or more media units and a first enhancement information message, wherein the first enhancement information message comprises at least two independently parsable structures, a first independently parsable structure comprising information about at least one purpose of one or more neural networks (NNs) to be applied to the one or more media units, and a second independently parsable structure comprising or identifying one or more neural networks; decoding the one or more media units; and using the one or more neural networks to enhance or filter one or more frames of the decoded one or more media units, based on the at least one purpose. Corresponding apparatuses and computer program products are also provided.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: July 16, 2024
    Assignee: Nokia Technologies Oy
    Inventors: Hamed Rezazadegan Tavakoli, Francesco Cricrì, Emre Baris Aksu, Miska Matias Hannuksela
  • Publication number: 20240223762
    Abstract: The embodiments relate to an apparatus and a method for encoding and decoding.
    Type: Application
    Filed: February 9, 2022
    Publication date: July 4, 2024
    Inventors: Honglei ZHANG, Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI
  • Patent number: 12022129
    Abstract: In example embodiments, an apparatus, a method, and a computer program product are provided. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a serialized bitstream comprises one or more of the at least one information units.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: June 25, 2024
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricrì, Miska Matias Hannuksela, Emre Baris Aksu, Hamed Rezazadegan Tavakoli
  • Publication number: 20240202507
    Abstract: An apparatus, with a corresponding method and computer program product, is provided. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: train or finetune at least one neural network (NN) based at least on a temporal persistence scope; and encode or decode one or more media frames based at least on the trained or finetuned at least one neural network. A further apparatus, with a corresponding method and computer program product, is also provided.
    Type: Application
    Filed: April 15, 2022
    Publication date: June 20, 2024
    Inventors: Francesco CRICRÌ, Jani LAINEMA, Ramin GHAZNAVI YOUVALARI, Honglei ZHANG, Yat Hong LAM, Maria Claudia SANTAMARIA GOMEZ, Hamed REZAZADEGAN TAVAKOLI, Miska Matias HANNUKSELA
  • Publication number: 20240195969
    Abstract: An example apparatus, method, and computer program product are provided. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units, and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.
    Type: Application
    Filed: April 7, 2022
    Publication date: June 13, 2024
    Inventors: Hamed REZAZADEGAN TAVAKOLI, Francesco CRICRÌ, Emre Baris AKSU, Miska Matias HANNUKSELA
  • Publication number: 20240195433
    Abstract: The embodiments relate to a method for encoding two or more tensors. The method comprises processing the two or more tensors having respective dimensions so that said two or more tensors have the same number of dimensions (510); identifying which axis of each individual tensor is swappable to result in concatenable tensors around an axis of concatenation (520); reshaping the tensors so that the dimensions are modified based on the swapped axis (530); concatenating the tensors around the axis of concatenation to result in a concatenated tensor (540); compressing the concatenated tensor (550); generating syntax structures for carrying concatenation and axis swapping information (560); and generating a bitstream by combining the syntax structures and the compressed concatenated tensor (570). The embodiments also relate to a method for decoding, and to apparatuses for implementing the methods. A hedged code sketch of the encoding steps follows the entry.
    Type: Application
    Filed: April 4, 2022
    Publication date: June 13, 2024
    Inventors: Emre Baris AKSU, Miska Matias HANNUKSELA, Hamed REZAZADEGAN TAVAKOLI, Francesco CRICRÌ
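    Code sketch (editorial): a hedged numpy sketch of the encoding steps above: equalize the number of dimensions, swap axes so the tensors become concatenable around one axis, concatenate, "compress" (here just a float16 cast), and keep the concatenation/axis-swap metadata needed to undo it. The metadata layout is an assumption, not the syntax structures of the filing.
      import numpy as np

      def concat_tensors(tensors, concat_axis=0):
          rank = max(t.ndim for t in tensors)
          prepared, meta = [], []
          for t in tensors:
              pad = rank - t.ndim
              t = t.reshape((1,) * pad + t.shape)              # same number of dimensions for every tensor
              ref_shape = prepared[0].shape if prepared else t.shape
              perm = list(range(rank))
              for ax in range(rank):
                  # Find one axis swap that makes this tensor concatenable with the first one.
                  if ax != concat_axis and t.shape[ax] != ref_shape[ax]:
                      swap = t.shape.index(ref_shape[ax])
                      perm[ax], perm[swap] = perm[swap], perm[ax]
                      t = np.transpose(t, perm)                # reshape based on the swapped axes
                      break
              prepared.append(t)
              meta.append({"pad": pad, "perm": perm})          # axis-swapping info for the syntax structures
          blob = np.concatenate(prepared, axis=concat_axis).astype(np.float16)  # concatenate + "compress"
          return blob, {"axis": concat_axis, "per_tensor": meta}   # bitstream = compressed blob + metadata

      if __name__ == "__main__":
          a = np.ones((2, 3, 4))
          b = np.ones((5, 2, 4))        # concatenable with a along axis 1 once its first two axes are swapped
          blob, syntax = concat_tensors([a, b], concat_axis=1)
          print(blob.shape, syntax["per_tensor"][1]["perm"])   # (2, 8, 4) [1, 0, 2]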
  • Publication number: 20240146938
    Abstract: Various embodiments provide an apparatus, a method and a computer program product for end-to-end learned predictive coding of media frames. An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode one or more media frames for at least one neural network; wherein an inter-frame codec is applied to at least one media frame of the one or more media frames; and wherein a first decoded reference frame and a second decoded reference frame refer to reference frames for the at least one media frame. A hedged code sketch of such inter-frame coding follows the entry.
    Type: Application
    Filed: March 9, 2022
    Publication date: May 2, 2024
    Inventors: Nannan ZOU, Honglei ZHANG, Francesco CRICRÌ, Hamed REZAZADEGAN TAVAKOLI, Ramin GHAZNAVI YOUVALARI
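    Code sketch (editorial): a hedged numpy sketch of coding a media frame from two decoded reference frames: predict the current frame from the references, quantize the prediction residual, and reconstruct on the decoder side. The linear extrapolation and the uniform quantizer stand in for the learned inter-frame codec of the filing.
      import numpy as np

      def predict(ref1, ref2):
          """Motion-free prediction from the two decoded reference frames (linear extrapolation)."""
          return 2.0 * ref2 - ref1

      def encode_frame(frame, ref1, ref2, step=0.05):
          residual = frame - predict(ref1, ref2)
          return np.round(residual / step).astype(np.int16)    # stand-in "bitstream" for this frame

      def decode_frame(symbols, ref1, ref2, step=0.05):
          return predict(ref1, ref2) + symbols.astype(np.float32) * step

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          ref1 = rng.uniform(size=(8, 8)).astype(np.float32)         # first decoded reference frame
          ref2 = ref1 + np.float32(0.02)                             # second decoded reference frame
          frame = ref1 + 0.04 + rng.normal(scale=0.01, size=(8, 8))  # current frame to code
          rec = decode_frame(encode_frame(frame, ref1, ref2), ref1, ref2)
          print(bool(np.abs(rec - frame).max() <= 0.025))            # True: within half a quantization step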