Patents by Inventor Yat Hong LAM

Yat Hong LAM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11556796
    Abstract: A method, apparatus, and computer program product are provided for training a neural network, or providing a pre-trained neural network, such that the weight updates are compressible using at least a weight-update compression loss function and/or a task loss function. The weight-update compression loss function can comprise a weight-update vector defined as the latest weight vector minus the initial weight vector before training. A pre-trained neural network can be compressed by pruning one or more small-valued weights. The training of the neural network can take the compressibility of the network into account, for instance using a compression loss function such as a task loss and/or a weight-update compression loss. The compressed neural network can be applied within the decoding loop at the encoder side or in a post-processing stage, as well as at the decoder side. (A minimal training-loop sketch of this objective follows this entry.)
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: January 17, 2023
    Assignee: Nokia Technologies Oy
    Inventors: Caglar Aytekin, Francesco Cricri, Yat Hong Lam
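The abstract above combines a task loss with a weight-update compression loss and prunes small-valued weights. The sketch below is a minimal PyTorch interpretation, assuming an L1 penalty on the weight-update vector as the compression term; the library choice, function names, and parameters are illustrative assumptions, not taken from the patent.

```python
# A minimal sketch (assumed PyTorch; names are hypothetical) of training with a
# combined task loss and weight-update compression loss. The weight-update
# vector is the latest weights minus the initial weights captured before
# training; an L1 penalty on it is used here as one plausible compressibility
# proxy, and small-valued weights can be pruned afterwards.
import torch


def weight_update_compression_loss(model, initial_params):
    # Sum of |w_latest - w_initial| over all parameters: encourages a sparse,
    # hence more compressible, weight update.
    return sum((p - p0).abs().sum()
               for p, p0 in zip(model.parameters(), initial_params))


def train_step(model, batch, targets, initial_params, optimizer,
               task_criterion, lam=1e-4):
    optimizer.zero_grad()
    task_loss = task_criterion(model(batch), targets)
    wu_loss = weight_update_compression_loss(model, initial_params)
    loss = task_loss + lam * wu_loss  # combined objective from the abstract
    loss.backward()
    optimizer.step()
    return loss.item()


def prune_small_weights(model, threshold=1e-3):
    # Compress a (pre-)trained network by zeroing small-valued weights.
    with torch.no_grad():
        for p in model.parameters():
            p[p.abs() < threshold] = 0.0
```

Here `initial_params` would be captured once before training, e.g. `initial_params = [p.detach().clone() for p in model.parameters()]`. The L1 form of the compression loss and the pruning threshold are assumptions; the abstract only states that a weight-update compression loss and pruning of small-valued weights may be used.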
  • Patent number: 11341688
    Abstract: Optimization of a neural network, for example in a video codec at the decoder side, may be guided to limit overfitting. The encoder may encode video(s) with different qualities for different frames in the video. Low-quality frames may be used as both input and ground truth during optimization. High-quality frames may be used to optimize the neural network so that higher-quality versions of lower-quality inputs may be predicted. The neural network may be trained to make such predictions by making a prediction based on a constructed low-quality input for which the corresponding high-quality version is known, comparing the prediction to the high-quality version, and fine-tuning the neural network to improve its ability to predict a high-quality version of a low-quality input. To limit overfitting, the neural network may be trained, concurrently or in an alternating fashion, with low-quality input for which a higher-quality version is known. (A minimal fine-tuning sketch follows this entry.)
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: May 24, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Alireza Zare, Francesco Cricri, Yat Hong Lam, Miska Matias Hannuksela, Jani Olavi Lainema
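The abstract above describes fine-tuning a decoder-side enhancement network on high-quality frames while limiting overfitting by also using low-quality frames as both input and ground truth. The sketch below is a minimal PyTorch interpretation assuming an alternating schedule, MSE as the comparison, and a hypothetical `degrade` helper that constructs a low-quality version of a high-quality frame; none of these specifics are prescribed by the patent.

```python
# A minimal sketch (assumed PyTorch; names are hypothetical) of the alternating
# fine-tuning idea: high-quality frames supervise enhancement of re-degraded
# inputs, while low-quality frames act as both input and ground truth to limit
# overfitting to the few high-quality frames.
import torch
import torch.nn.functional as F


def fine_tune(enhancer, hq_frames, lq_frames, degrade, steps=200, lr=1e-4):
    # enhancer: post-filter network to be fine-tuned for this video
    # degrade:  assumed helper that builds a low-quality version of an HQ frame
    #           (e.g. by re-encoding it at a lower quality)
    opt = torch.optim.Adam(enhancer.parameters(), lr=lr)
    for step in range(steps):
        if step % 2 == 0:
            # Enhancement objective: predict the known high-quality frame
            # from a constructed low-quality input.
            hq = hq_frames[(step // 2) % len(hq_frames)]
            loss = F.mse_loss(enhancer(degrade(hq)), hq)
        else:
            # Regularizing objective: the low-quality frame is both input
            # and ground truth.
            lq = lq_frames[(step // 2) % len(lq_frames)]
            loss = F.mse_loss(enhancer(lq), lq)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return enhancer
```

The strict even/odd alternation and MSE loss are one possible reading of "concurrently or in an alternating fashion"; a weighted sum of the two objectives in a single step would be an equally valid interpretation.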
  • Publication number: 20210104076
    Abstract: Optimization of a neural network, for example in a video codec at the decoder side, may be guided to limit overfitting. The encoder may encode video(s) with different qualities for different frames in the video. Low-quality frames may be used as both input and ground truth during optimization. High-quality frames may be used to optimize the neural network so that higher-quality versions of lower-quality inputs may be predicted. The neural network may be trained to make such predictions by making a prediction based on a constructed low-quality input for which the corresponding high-quality version is known, comparing the prediction to the high-quality version, and fine-tuning the neural network to improve its ability to predict a high-quality version of a low-quality input. To limit overfitting, the neural network may be trained, concurrently or in an alternating fashion, with low-quality input for which a higher-quality version is known.
    Type: Application
    Filed: September 30, 2020
    Publication date: April 8, 2021
    Inventors: Alireza Zare, Francesco Cricri, Yat Hong Lam, Miska Matias Hannuksela, Jani Olavi Lainema
  • Publication number: 20200311551
    Abstract: A method, apparatus, and computer program product are provided for training a neural network, or providing a pre-trained neural network, such that the weight updates are compressible using at least a weight-update compression loss function and/or a task loss function. The weight-update compression loss function can comprise a weight-update vector defined as the latest weight vector minus the initial weight vector before training. A pre-trained neural network can be compressed by pruning one or more small-valued weights. The training of the neural network can take the compressibility of the network into account, for instance using a compression loss function such as a task loss and/or a weight-update compression loss. The compressed neural network can be applied within the decoding loop at the encoder side or in a post-processing stage, as well as at the decoder side.
    Type: Application
    Filed: March 24, 2020
    Publication date: October 1, 2020
    Inventors: Caglar Aytekin, Francesco Cricri, Yat Hong Lam