Patents by Inventor Vahid PARTOVI NIA

Vahid PARTOVI NIA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104342
    Abstract: Methods, systems and computer readable media using hardware-efficient bit-shift operations for computing the output of a low-bit neural network layer. A dense shift inner product operator (or dense shift IPO) using bit shifting in place of multiplication replaces the inner product operator that is conventionally used to compute the output of a neural network layer. Dense shift neural networks may have weights encoded using a low-bit dense shift encoding. A dedicated neural network accelerator is designed to compute the output of a dense shift neural network layer using dense shift IPOs. A Sign-Sparse-Shift (S3) training technique trains a low-bit neural network using dense shift IPOs or other bit shift operations in computing its outputs.
    Type: Application
    Filed: November 28, 2023
    Publication date: March 28, 2024
    Inventors: Xinlin LI, Vahid PARTOVI NIA
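The core idea here is mechanical enough to illustrate: when each weight is constrained to ±2^k, every multiply in an inner product collapses to a sign flip and a bit shift. Below is a minimal NumPy sketch of that reduction; the function name and the (sign, shift) encoding are illustrative assumptions, not the patented dense shift format.

```python
import numpy as np

def dense_shift_inner_product(x, signs, shifts):
    """Inner product where each weight is signs[i] * 2**shifts[i], so every
    multiply reduces to a bit shift plus a sign flip (illustrative encoding)."""
    acc = np.int64(0)
    for xi, s, k in zip(x, signs, shifts):
        acc += s * (np.int64(xi) << k)     # bit shift replaces multiplication
    return int(acc)

x = np.array([3, -1, 4, 2], dtype=np.int32)
signs = np.array([1, -1, 1, 1])
shifts = np.array([0, 2, 1, 3])            # weights: +1, -4, +2, +8
assert dense_shift_inner_product(x, signs, shifts) == int(x @ (signs * (1 << shifts)))
```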
  • Patent number: 11922609
    Abstract: End to end differentiable machine vision systems, training methods, and processor-readable media are disclosed. A differentiable image signal processor (ISP) is disclosed that can be trained, using machine learning techniques, to adapt raw images received from a new sensor into adapted images of the same type (i.e., in the same visual domain) as the images previously used to train a perception module, without fine-tuning the perception module itself. The differentiable ISP may include functional blocks for performing specific image enhancement operations using a relatively small number of learned parameters corresponding to meaningful characteristics of the image enhancement operations.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: March 5, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ali Mosleh, Vahid Partovi Nia
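As a rough illustration of the "few interpretable parameters" idea, here is a toy differentiable ISP block in PyTorch with only white-balance gains and a gamma exponent as learned parameters; the module and its parameterization are assumptions for illustration, not the patented design.

```python
import torch
import torch.nn as nn

class TinyDifferentiableISP(nn.Module):
    """Toy ISP block: white-balance gains and a tone-curve exponent are the
    only learned parameters, and both stay human-interpretable."""
    def __init__(self):
        super().__init__()
        self.wb_gains = nn.Parameter(torch.ones(3))          # per-channel gains
        self.gamma = nn.Parameter(torch.tensor(1.0 / 2.2))   # tone-curve exponent

    def forward(self, raw_rgb):              # (N, 3, H, W), values in [0, 1]
        x = raw_rgb * self.wb_gains.view(1, 3, 1, 1)
        return x.clamp(min=1e-6) ** self.gamma   # differentiable gamma correction

isp = TinyDifferentiableISP()
out = isp(torch.rand(1, 3, 8, 8))  # gradients from a frozen perception module
                                   # could flow back into wb_gains and gamma
```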
  • Publication number: 20240070221
    Abstract: Methods and systems for generating an integer neural network are described. The method includes receiving an input vector comprising a plurality of input values. The plurality of input values are represented using a desired number of bits. The input vector is multiplied by a weight vector, and the products are summed to obtain a first value. The first value is quantized and applied to a piecewise linear activation function to obtain a second value. The piecewise linear activation function is a set of linear functions that collectively approximate a nonlinear activation function. The second value is quantized to generate the output of the neuron in the integer neural network.
    Type: Application
    Filed: November 6, 2023
    Publication date: February 29, 2024
    Inventors: Eyyüb Hachmie SARI, Vanessa COURVILLE, Mohan LIU, Vahid PARTOVI NIA
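The neuron described above is easy to sketch: quantize the accumulated product, apply a piecewise linear surrogate of a nonlinear activation, then quantize again. In this NumPy sketch the hard sigmoid, bit widths, and scaling choices are all illustrative assumptions.

```python
import numpy as np

def quantize(v, bits):
    """Round and clip to a signed `bits`-bit integer range (illustrative)."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(np.rint(v), lo, hi).astype(np.int64)

def hard_sigmoid(z):
    """Three linear pieces that collectively approximate the sigmoid."""
    return np.clip(0.25 * z + 0.5, 0.0, 1.0)

def integer_neuron(x, w, acc_bits=16, out_bits=8):
    first = quantize(np.dot(x.astype(np.int64), w.astype(np.int64)), acc_bits)
    second = hard_sigmoid(first / 128.0)          # assumed activation scaling
    return quantize(second * 127, out_bits)       # quantized neuron output

x = np.array([12, -7, 33], dtype=np.int8)
w = np.array([2, 5, -1], dtype=np.int8)
print(integer_neuron(x, w))                       # integer in [-128, 127]
```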
  • Publication number: 20230376769
    Abstract: Systems and methods for training a machine learning model. The methods comprise receiving a plurality of first data points, each of the first data points being represented in a floating-point representation. The methods further comprise converting the plurality of first data points into a corresponding plurality of second data points, each represented in a dynamic fixed-point representation. The plurality of second data points may include: for each second data point, the sign component of the corresponding first data point; for each second data point, a dynamic fixed-point mantissa component; and one or more shared fraction components. At least two of the second data points share a value of a shared fraction component of the one or more shared fraction components. The methods further comprise performing integer computations during training of the machine learning model using the second data points.
    Type: Application
    Filed: May 18, 2022
    Publication date: November 23, 2023
    Inventors: Seyed Alireza GHAFFARI, Eyyüb HACHMIE SARI, Vahid PARTOVI NIA
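A small sketch of the representation this abstract describes: a block of floats shares one scale (the shared component), while each value keeps its own sign and integer mantissa, so training arithmetic can run on the integer mantissas. The block granularity and rounding below are assumptions.

```python
import numpy as np

def to_dynamic_fixed_point(values, mantissa_bits=8):
    """One block of floats -> per-value sign+mantissa integers plus one shared
    scale (the shared component). Block size and rounding are assumptions."""
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(values)) + 1e-12)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.rint(values / scale).astype(np.int32)
    return mantissas, scale

vals = np.array([0.75, -1.5, 0.1875])
m, scale = to_dynamic_fixed_point(vals)
print(m, m * scale)   # integer training math can operate on `m` directly
```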
  • Publication number: 20230306255
    Abstract: Training a neural network, including applying a quantization function to a set of real-valued weights to generate quantized weights scaled to fall within a respective quantization range that is symmetrically centered at zero and comprises a defined number of uniform quantization levels corresponding to integer multiples of a scaling factor. A cost is computed based on alignments of the quantized weights with the quantization levels. The real-valued weights and the scaling factor are adjusted with an objective of reducing the computed cost in one or more following training iterations. When performing a plurality of training iterations, a smoothness of the quantization function is incrementally reduced over multiple training iterations. Alignment of quantized weights with quantization levels and decreasing smoothness of the quantization function can result in a trained neural network that can perform accurate predictions using relatively few computational resources.
    Type: Application
    Filed: March 22, 2022
    Publication date: September 28, 2023
    Inventors: Ella CHARLAIX, Vanessa COURVILLE, Vahid Partovi Nia
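One way to picture a quantization function with tunable smoothness is a tanh staircase whose temperature is annealed over training; the exact function below is an assumption for illustration, not the patented one.

```python
import torch

def soft_quantize(w, scale, temperature):
    """Tanh staircase over integer multiples of `scale` (assumed form).
    High temperature gives a nearly linear, smooth map; annealing it toward
    zero makes the map approach hard rounding to the quantization levels."""
    u = w / scale
    n = torch.floor(u)
    f = u - n                                   # position within a cell, in [0, 1)
    t = torch.tensor(float(temperature))
    step = 0.5 * (1 + torch.tanh((f - 0.5) / t) / torch.tanh(0.5 / t))
    return scale * (n + step)

w = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
for t in (1.0, 0.3, 0.05):                      # incrementally reduce smoothness
    q = soft_quantize(w, scale=0.5, temperature=t)
    q.sum().backward()                          # gradients flow through the quantizer
```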
  • Patent number: 11615342
    Abstract: Methods and systems are described for training a machine learning (ML) model to predict the gain of a target channel of a multi-channel amplifier device. An ML model may be pre-trained using an existing set of training objects. The trained ML model can then be used to suggest further useful training objects to be labelled, improving the performance of the ML model so that it predicts more accurate target channel gains given the on/off values of the channel inputs.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: March 28, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ali Vahdat, Vahid Partovi Nia
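The active-learning loop sketched below conveys the flavor of "suggest the next object to label": an ensemble's prediction spread over candidate channel on/off configurations picks the most informative one. The model choice, acquisition rule, and stand-in gain labels are all assumptions, not the patented procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pool = rng.integers(0, 2, size=(200, 8)).astype(float)  # candidate on/off inputs
X, y = pool[:10], pool[:10].sum(axis=1) + rng.normal(0, 0.1, 10)  # stand-in gains

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
per_tree = np.stack([tree.predict(pool) for tree in model.estimators_])
query = int(np.argmax(per_tree.std(axis=0)))            # most uncertain candidate
print("next configuration to measure and label:", pool[query])
```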
  • Patent number: 11586833
    Abstract: A method and machine translation system for bi-directional translation of textual sequences between a first language and a second language are described. The machine translation system includes a first autoencoder configured to receive a vector representation of a first textual sequence in the first language and encode the vector representation of the first textual sequence into a first sentence embedding. The machine translation system also includes a sum-product network (SPN) configured to receive the first sentence embedding and generate a second sentence embedding by maximizing a first conditional probability of the second sentence embedding given the first sentence embedding, and a second autoencoder that receives the second sentence embedding, the second autoencoder being trained to decode the second sentence embedding into a vector representation of a second textual sequence in the second language.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: February 21, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Mehdi Rezagholizadeh, Vahid Partovi Nia, Md Akmal Haidar, Pascal Poupart
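At a high level the pipeline composes three stages: encode language-1 text to an embedding, map it to a language-2 embedding, decode. The PyTorch sketch below shows only that composition; a plain MLP stands in for the sum-product network, and untrained linear modules stand in for the trained autoencoders.

```python
import torch
import torch.nn as nn

dim = 128
encode_l1 = nn.Sequential(nn.Linear(300, dim), nn.Tanh())    # autoencoder encoder (stand-in)
spn_stand_in = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                             nn.Linear(dim, dim))            # in place of the SPN
decode_l2 = nn.Linear(dim, 300)                              # autoencoder decoder (stand-in)

sentence_l1 = torch.randn(1, 300)          # vector representation, first language
embedding_l1 = encode_l1(sentence_l1)      # first sentence embedding
embedding_l2 = spn_stand_in(embedding_l1)  # most probable second embedding given the first
sentence_l2 = decode_l2(embedding_l2)      # vector representation, second language
```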
  • Publication number: 20220374689
    Abstract: Systems and methods for computing a neural network layer of a neural network are described. A squared Euclidean distance is computed between the input vector and the weight vector of the neural network layer, replacing computation of the inner product. Methods for quantization of the squared Euclidean computation are also described. Methods for training the neural network using homotopy training are also described.
    Type: Application
    Filed: May 11, 2021
    Publication date: November 24, 2022
    Inventors: Xinlin LI, Mariana Oliveira PRAZERES, Adam Morrison OBERMAN, Vahid PARTOVI NIA
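The replacement itself is a one-liner per pair: score each input against each weight vector by negative squared Euclidean distance instead of an inner product. The expansion used below (||x - w||² = ||x||² - 2x·w + ||w||²) keeps the computation a matrix multiply plus norms; the quantization and homotopy-training methods are omitted.

```python
import numpy as np

def euclidean_layer(X, W):
    """Score inputs (rows of X) against weight vectors (rows of W) by negative
    squared Euclidean distance instead of an inner product."""
    x2 = (X ** 2).sum(axis=1, keepdims=True)   # (N, 1)
    w2 = (W ** 2).sum(axis=1)                  # (M,)
    return -(x2 - 2.0 * X @ W.T + w2)          # (N, M); larger means closer

X = np.random.randn(4, 16)                     # four input vectors
W = np.random.randn(8, 16)                     # eight "neurons"
print(euclidean_layer(X, W).shape)             # (4, 8)
```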
  • Publication number: 20220366226
    Abstract: Methods and systems for compressing a neural network which performs an inference task, and for performing computations of a Kronecker layer of a Kronecker neural network, are described. A batch of data samples is obtained from a training dataset. The input data of the data samples are inputted into a trained neural network to forward propagate the input data through the trained neural network and generate neural network predictions for the input data. Further, the input data are inputted into a Kronecker neural network to forward propagate the input data through the Kronecker neural network and generate Kronecker neural network predictions for the input data. Afterwards, two losses are computed: a knowledge distillation loss and a loss for the Kronecker layer. The knowledge distillation loss is based on outputs generated by a layer of the neural network and a corresponding Kronecker layer of the Kronecker neural network.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 17, 2022
    Inventors: Marziehsadat TAHAEI, Ali GHODSI, Mehdi REZAGHOLIZADEH, Vahid PARTOVI NIA
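Two ingredients from this abstract are easy to sketch in PyTorch: a weight built as a Kronecker product of small factors, and a distillation loss between the original layer's output and the Kronecker layer's output. The shapes and the choice of MSE as the distillation loss are assumptions.

```python
import torch
import torch.nn.functional as F

A = torch.randn(8, 16, requires_grad=True)       # small trainable factors...
B = torch.randn(32, 48, requires_grad=True)
W_kron = torch.kron(A, B)                        # ...stand in for a 256 x 768 weight

x = torch.randn(4, 768)                          # batch of inputs to the layer
student_out = x @ W_kron.T                       # Kronecker layer output
teacher_out = torch.randn(4, 256)                # corresponding original-layer output
kd_loss = F.mse_loss(student_out, teacher_out)   # knowledge-distillation term
kd_loss.backward()                               # trains only the small factors A and B
```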
  • Publication number: 20220301123
    Abstract: End to end differentiable machine vision systems, training methods, and processor-readable media are disclosed. A differentiable image signal processor (ISP) is disclosed that can be trained, using machine learning techniques, to adapt raw images received from a new sensor into adapted images of the same type (i.e., in the same visual domain) as the images previously used to train a perception module, without fine-tuning the perception module itself. The differentiable ISP may include functional blocks for performing specific image enhancement operations using a relatively small number of learned parameters corresponding to meaningful characteristics of the image enhancement operations.
    Type: Application
    Filed: March 17, 2021
    Publication date: September 22, 2022
    Inventors: Ali MOSLEH, Vahid PARTOVI NIA
  • Publication number: 20210390269
    Abstract: A method and machine translation system for bi-directional translation of textual sequences between a first language and a second language are described. The machine translation system includes a first autoencoder configured to receive a vector representation of a first textual sequence in the first language and encode the vector representation of the first textual sequence into a first sentence embedding. The machine translation system also includes a sum-product network (SPN) configured to receive the first sentence embedding and generate a second sentence embedding by maximizing a first conditional probability of the second sentence embedding given the first sentence embedding, and a second autoencoder that receives the second sentence embedding, the second autoencoder being trained to decode the second sentence embedding into a vector representation of a second textual sequence in the second language.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 16, 2021
    Inventors: Mehdi REZAGHOLIZADEH, Vahid PARTOVI NIA, Md Akmal HAIDAR, Pascal POUPART
  • Publication number: 20210390386
    Abstract: A computational block configured to perform an inference task by applying a plurality of low-resource computing operations to a binary input feature tensor to generate an integer feature tensor that is equivalent to an output of multiplication and accumulation operations performed in respect of a ternary weight tensor and the binary input feature tensor, and by performing a comparison operation between the generated integer feature tensor and a comparison threshold to generate a binary output feature tensor.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 16, 2021
    Inventors: Xinlin LI, Vahid PARTOVI NIA
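A scalar NumPy sketch of the equivalence claimed above: with binary inputs and ternary weights, the multiply-accumulate reduces to two masked sums, and the activation reduces to one integer comparison. The {0, 1} input encoding and the threshold value are assumptions.

```python
import numpy as np

x = np.random.randint(0, 2, size=64)               # binary input feature vector
w = np.random.choice([-1, 0, 1], size=64)          # ternary weight vector
acc = int(x[w == 1].sum() - x[w == -1].sum())      # integer value, no multiplications
threshold = 3                                      # assumed, e.g. folded from batch norm
binary_out = int(acc > threshold)                  # comparison replaces the activation
assert acc == int(x @ w)                           # equivalent to multiply-accumulate
```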
  • Publication number: 20210089925
    Abstract: A method and processing unit for training a neural network to selectively quantize weights of a filter of the neural network as either binary weights or ternary weights. A plurality of training iterations are performed that each comprise: quantizing a set of real-valued weights of a filter to generate a corresponding set of quantized weights; generating an output feature tensor based on matrix multiplication of an input feature tensor and the set of quantized weights; computing, based on the output feature tensor, a loss based on a regularization function that is configured to move the loss towards a minimum value when either: (i) the quantized weights move towards binary weights, or (ii) the quantized weights move towards ternary weights; computing a gradient with an objective of minimizing the loss; and updating the real-valued weights based on the computed gradient.
    Type: Application
    Filed: September 24, 2020
    Publication date: March 25, 2021
    Inventors: Vahid PARTOVI NIA, Ryan RAZANI
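A polynomial penalty gives the flavor of such a regularization function: the product below vanishes exactly on the ternary levels {-α, 0, +α}, of which the binary levels {-α, +α} are a subset, so moving weights toward either structure lowers the loss. The exact form is an assumption, not the patent's function.

```python
import torch

def bin_tern_regularizer(w, alpha):
    """Vanishes exactly when every weight lies on {-alpha, 0, +alpha}; binary
    weights {-alpha, +alpha} are a subset, so either structure minimizes it.
    (Assumed polynomial form, not the patent's exact function.)"""
    return ((w ** 2) * (w - alpha) ** 2 * (w + alpha) ** 2).mean()

w = torch.randn(64, requires_grad=True)
loss = bin_tern_regularizer(w, alpha=1.0)
loss.backward()   # gradient pulls real-valued weights toward the quantized levels
```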
  • Publication number: 20210073643
    Abstract: A method and system for pruning a neural network (NN) block of a neural network during training, wherein the NN block comprises: a convolution operation configured to convolve an input feature map with a plurality of filters, each filter including a plurality of weights, to generate a plurality of filter outputs each corresponding to a respective filter; an activation operation configured to generate, for each of the filter outputs, a respective non-linearized output; a scaling operation configured to scale the non-linearized output generated in respect of each filter by multiplying the non-linearized output with a mask function and a respective scaling factor that corresponds to the filter.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 11, 2021
    Inventors: Vahid PARTOVI NIA, Ramchalam Kinattinkara RAMAKRISHNAN, Eyyüb Hachmie SARI
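A sketch of the scaling operation in PyTorch: each filter's non-linearized output is multiplied by a mask function times a learned per-filter scaling factor, so filters whose scale collapses toward zero can be pruned after training. The threshold and the hard 0/1 mask are assumptions.

```python
import torch
import torch.nn as nn

class PrunableScaling(nn.Module):
    """Scaling operation sketch: per-filter learned scale gated by a 0/1 mask
    function. Threshold and gating rule are assumptions for illustration."""
    def __init__(self, num_filters, threshold=0.05):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_filters))
        self.threshold = threshold

    def forward(self, x):                   # x: (N, C, H, W), already non-linearized
        mask = (self.scale.abs() > self.threshold).float()  # masked filters get no gradient here
        return x * (mask * self.scale).view(1, -1, 1, 1)

layer = PrunableScaling(num_filters=16)
out = layer(torch.relu(torch.randn(2, 16, 8, 8)))  # filters with tiny scales are zeroed
```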
  • Publication number: 20200097818
    Abstract: A method of training a neural network (NN) block for a neural network, including: performing a first quantization operation on a real-valued feature map tensor to generate a corresponding binary feature map tensor; performing a second quantization operation on a real-valued weight tensor to generate a corresponding binary weight tensor; convolving the binary feature map tensor with the binary weight tensor to generate a convolved output; scaling the convolved output with a scaling factor to generate a scaled output, wherein the scaled output is equal to an estimated weight tensor convolved with the binary feature map tensor, the estimated weight tensor corresponding to a product of the binary weight tensor and the scaling factor; calculating a loss function, the loss function including a regularization function configured to train the scaling factor so that the estimated weight tensor is guided towards the real-valued weight tensor; and updating the real-valued weight tensor and scaling factor based on the calculated loss function.
    Type: Application
    Filed: September 25, 2019
    Publication date: March 26, 2020
    Inventors: Xinlin LI, Sajad DARABI, Mouloud BELBAHRI, Vahid PARTOVI NIA
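The scaling identity at the heart of this abstract is easy to verify numerically: scaling the binary output by α is the same as computing with the estimated weight tensor α·sign(w). The sketch below checks this for a dot product; α = mean(|w|) is one common choice of scaling factor, assumed here.

```python
import torch

w = torch.randn(16)                      # real-valued weight tensor
x_b = torch.sign(torch.randn(16))        # binary feature map (quantized input)
w_b = torch.sign(w)                      # binary weight tensor
scale = w.abs().mean()                   # assumed choice of scaling factor
scaled_out = scale * torch.dot(x_b, w_b) # binary compute, then scale
w_est = scale * w_b                      # estimated weight tensor
assert torch.isclose(scaled_out, torch.dot(x_b, w_est))
```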
  • Publication number: 20200057961
    Abstract: Methods and systems are described for training a machine learning (ML) model to predict the gain of a target channel of a multi-channel amplifier device. An ML model may be pre-trained using an existing set of training objects. The trained ML model can then be used to suggest further useful training objects to be labelled, improving the performance of the ML model so that it predicts more accurate target channel gains given the on/off values of the channel inputs.
    Type: Application
    Filed: August 14, 2019
    Publication date: February 20, 2020
    Inventors: Ali VAHDAT, Vahid PARTOVI NIA