Patents by Inventor Andrew Z. LUO

Andrew Z. LUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11651192
    Abstract: Systems and processes for training and compressing a convolutional neural network model include the use of quantization and layer fusion. Quantized training data is passed through a convolutional layer of a neural network model to generate convolutional results during a first iteration of training the neural network model. The convolutional results are passed through a batch normalization layer of the neural network model to update normalization parameters of the batch normalization layer. The convolutional layer is fused with the batch normalization layer to generate a first fused layer, and the fused parameters of the fused layer are quantized. The quantized training data is passed through the fused layer using the quantized fused parameters to generate output data, which may be quantized for a subsequent layer in the training iteration.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: May 16, 2023
    Assignee: Apple Inc.
    Inventors: James C. Gabriel, Mohammad Rastegari, Hessam Bagherinezhad, Saman Naderiparizi, Anish Prabhu, Sophie Lebrecht, Jonathan Gelsey, Sayyed Karen Khatamifard, Andrew L. Chronister, David Bakin, Andrew Z. Luo
  • Publication number: 20200257960
    Abstract: Systems and processes for training and compressing a convolutional neural network model include the use of quantization and layer fusion. Quantized training data is passed through a convolutional layer of a neural network model to generate convolutional results during a first iteration of training the neural network model. The convolutional results are passed through a batch normalization layer of the neural network model to update normalization parameters of the batch normalization layer. The convolutional layer is fused with the batch normalization layer to generate a first fused layer, and the fused parameters of the fused layer are quantized. The quantized training data is passed through the fused layer using the quantized fused parameters to generate output data, which may be quantized for a subsequent layer in the training iteration.
    Type: Application
    Filed: February 11, 2020
    Publication date: August 13, 2020
    Inventors: James C. GABRIEL, Mohammad RASTEGARI, Hessam BAGHERINEZHAD, Saman NADERIPARIZI, Anish PRABHU, Sophie LEBRECHT, Jonathan GELSEY, Sayyed Karen KHATAMIFARD, Andrew L. CHRONISTER, David BAKIN, Andrew Z. LUO
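The conv/batch-norm fusion and quantization steps described in the abstracts above can be illustrated with a minimal NumPy sketch. This assumes the standard folding of batch-norm statistics into the preceding convolution (per-channel scale gamma / sqrt(var + eps)) and symmetric uniform "fake" quantization; the function names, shapes, and bit width are illustrative choices, not details taken from the patent itself.

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv layer.

    W: (out_ch, in_ch, kh, kw) conv weights; b: (out_ch,) conv bias.
    gamma, beta, mean, var: (out_ch,) batch-norm parameters.
    Returns fused weights and bias such that
    conv(x, W_fused, b_fused) == BN(conv(x, W, b)).
    """
    scale = gamma / np.sqrt(var + eps)            # per-output-channel scale
    W_fused = W * scale[:, None, None, None]      # rescale each output filter
    b_fused = beta + (b - mean) * scale           # shift the bias accordingly
    return W_fused, b_fused

def quantize(x, num_bits=8):
    """Symmetric uniform quantization to signed integers, returned as
    de-quantized floats (fake quantization, as used during training)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax if np.any(x) else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale
```

A training iteration in the spirit of the abstract would then quantize the fused parameters with `quantize(W_fused)` and `quantize(b_fused)` before passing the quantized input through the fused layer, quantizing the output again for the next layer.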