Patents by Inventor Mostafa El-Khamy

Mostafa El-Khamy has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11451242
    Abstract: A method and apparatus for variable rate compression with a conditional autoencoder is herein provided. According to one embodiment, a method includes training a conditional autoencoder using a Lagrange multiplier and training a neural network that includes the conditional autoencoder with mixed quantization bin sizes.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: September 20, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
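The abstract above combines a rate-distortion Lagrangian with quantization bin sizes that vary during training. A minimal sketch of both pieces, assuming a generic latent vector rather than the patented conditional-autoencoder architecture:

```python
import numpy as np

def rd_loss(distortion, rate, lam):
    # Rate-distortion Lagrangian L = D + lambda * R; training across a
    # range of lambda values lets one conditional model cover many rates.
    return distortion + lam * rate

def quantize(latent, bin_size):
    # Uniform quantization; "mixed quantization bin sizes" is sketched
    # here as simply varying bin_size across training steps (assumption).
    return np.round(latent / bin_size) * bin_size

latent = np.array([0.26, -1.3, 0.9])
coarse = quantize(latent, bin_size=0.5)    # [0.5, -1.5, 1.0]
fine = quantize(latent, bin_size=0.25)     # [0.25, -1.25, 1.0]
loss = rd_loss(distortion=4.0, rate=0.5, lam=2.0)
```

A larger Lagrange multiplier penalizes rate more, steering training toward lower-bitrate operating points.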
  • Publication number: 20220293120
    Abstract: A system for performing echo cancellation includes: a processor configured to: receive a far-end signal; record a microphone signal including: a near-end signal; and an echo signal corresponding to the far-end signal; extract far-end features from the far-end signal; extract microphone features from the microphone signal; compute estimated near-end features by supplying the microphone features and the far-end features to an acoustic echo cancellation module including a recurrent neural network including: an encoder including a plurality of gated recurrent units; and a decoder including a plurality of gated recurrent units; compute an estimated near-end signal from the estimated near-end features; and transmit the estimated near-end signal to the far-end device. The recurrent neural network may include a contextual attention module; and the recurrent neural network may take, as input, a plurality of error features computed based on the far-end features, the microphone features, and acoustic path parameters.
    Type: Application
    Filed: May 27, 2022
    Publication date: September 15, 2022
    Inventors: Amin Fazeli, Mostafa El-Khamy, Jungwon Lee
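The signal model behind this abstract (and the granted version below) can be sketched without the neural network: the microphone mixes the near-end signal with an echo of the far-end signal, and cancellation recovers the near-end part. The echo path here is a toy assumption, standing in for the GRU-based estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
far_end = rng.standard_normal(100)
near_end = rng.standard_normal(100)

# Toy echo path: attenuated, 5-sample-delayed copy of the far-end signal
echo = 0.6 * np.concatenate([np.zeros(5), far_end[:-5]])
microphone = near_end + echo

# An ideal canceller subtracts the echo; the patented GRU encoder-decoder
# instead estimates near-end *features* and decodes a near-end signal.
estimated_near_end = microphone - echo
```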
  • Patent number: 11429805
    Abstract: A computer vision (CV) training system, includes: a supervised learning system to estimate a supervision output from one or more input images according to a target CV application, and to determine a supervised loss according to the supervision output and a ground-truth of the supervision output; an unsupervised learning system to determine an unsupervised loss according to the supervision output and the one or more input images; a weakly supervised learning system to determine a weakly supervised loss according to the supervision output and a weak label corresponding to the one or more input images; and a joint optimizer to concurrently optimize the supervised loss, the unsupervised loss, and the weakly supervised loss.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: August 30, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Haoyu Ren, Mostafa El-Khamy, Jungwon Lee, Aman Raj
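The joint optimizer described above concurrently minimizes three losses; a common way to do that is a single weighted sum. The weights below are assumptions, not values from the patent:

```python
def joint_loss(l_sup, l_unsup, l_weak, w_unsup=0.5, w_weak=0.5):
    # One scalar objective over the supervised, unsupervised, and
    # weakly supervised losses, optimized by a single optimizer.
    return l_sup + w_unsup * l_unsup + w_weak * l_weak

total = joint_loss(l_sup=1.0, l_unsup=0.4, l_weak=0.2)
```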
  • Patent number: 11423312
    Abstract: A method and system for constructing a convolutional neural network (CNN) model are herein disclosed. The method includes regularizing spatial domain weights, providing quantization of the spatial domain weights, pruning small or zero weights in a spatial domain, fine-tuning a quantization codebook, compressing a quantization output from the quantization codebook, and decompressing the spatial domain weights and using either sparse spatial domain convolution or sparse Winograd convolution after pruning Winograd-domain weights.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: August 23, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
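The prune-then-quantize portion of this pipeline can be sketched on a flat weight vector. The threshold and codebook values are illustrative assumptions:

```python
import numpy as np

def prune(weights, threshold):
    # Zero out small-magnitude spatial-domain weights
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_with_codebook(weights, codebook):
    # Snap each surviving weight to its nearest codebook entry; the
    # patent additionally fine-tunes and compresses this codebook.
    codebook = np.asarray(codebook)
    idx = np.argmin(np.abs(weights[..., None] - codebook), axis=-1)
    return codebook[idx]

w = prune(np.array([0.02, -0.9, 0.51, -0.03, 1.1]), threshold=0.05)
q = quantize_with_codebook(w, [-1.0, 0.0, 0.5, 1.0])
```

After quantization the sparse, low-entropy weights compress well and support sparse convolution.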
  • Patent number: 11393487
    Abstract: A system for performing echo cancellation includes: a processor configured to: receive a far-end signal; record a microphone signal including: a near-end signal; and an echo signal corresponding to the far-end signal; extract far-end features from the far-end signal; extract microphone features from the microphone signal; compute estimated near-end features by supplying the microphone features and the far-end features to an acoustic echo cancellation module including a recurrent neural network including: an encoder including a plurality of gated recurrent units; and a decoder including a plurality of gated recurrent units; compute an estimated near-end signal from the estimated near-end features; and transmit the estimated near-end signal to the far-end device. The recurrent neural network may include a contextual attention module; and the recurrent neural network may take, as input, a plurality of error features computed based on the far-end features, the microphone features, and acoustic path parameters.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: July 19, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Amin Fazeli, Mostafa El-Khamy, Jungwon Lee
  • Patent number: 11354577
    Abstract: Apparatuses and methods of manufacturing same, systems, and methods are described. In one aspect, a method includes generating a convolutional neural network (CNN) by training a CNN having three or more convolutional layers, and performing cascade training on the trained CNN. The cascade training includes an iterative process of one or more stages, in which each stage includes inserting a residual block (ResBlock) including at least two additional convolutional layers and training the CNN with the inserted ResBlock.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: June 7, 2022
    Inventors: Haoyu Ren, Mostafa El-Khamy, Jungwon Lee
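The cascade-training loop above grows the network one residual block per stage. A toy forward pass with scalar "1x1 convolutions" standing in for real conv layers (training itself is omitted):

```python
import numpy as np

def resblock(x, w1, w2):
    # Residual block: two (here scalar) conv layers plus an identity skip
    return x + w2 * (w1 * x)

def cascade_forward(x, stages):
    # Each cascade-training stage appended one more ResBlock; `stages`
    # holds the (w1, w2) pairs inserted so far (values are assumptions).
    for w1, w2 in stages:
        x = resblock(x, w1, w2)
    return x

y = cascade_forward(np.array([1.0, 2.0]), stages=[(0.5, 0.2), (1.0, 0.1)])
```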
  • Publication number: 20220138633
    Abstract: An electronic device and method for performing class-incremental learning are provided. The method includes designating a pre-trained first model for at least one past data class as a first teacher; training a second model; designating the trained second model as a second teacher; performing dual-teacher information distillation by maximizing mutual information at intermediate layers of the first teacher and second teacher; and transferring the information to a combined student model.
    Type: Application
    Filed: May 11, 2021
    Publication date: May 5, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
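The dual-teacher step can be sketched as a student loss with terms for both the old-class teacher and the new-class teacher. The patent maximizes mutual information at intermediate layers; plain MSE on outputs is used here as a simplification:

```python
import numpy as np

def dual_teacher_loss(student, teacher_old, teacher_new):
    # Distill knowledge from both teachers into one combined student
    return (np.mean((student - teacher_old) ** 2)
            + np.mean((student - teacher_new) ** 2))

loss = dual_teacher_loss(np.array([0.5, 0.5]),
                         np.array([1.0, 0.0]),
                         np.array([0.0, 1.0]))
```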
  • Patent number: 11321609
    Abstract: Apparatuses and methods of manufacturing same, systems, and methods for performing network parameter quantization in deep neural networks are described. In one aspect, diagonals of a second-order partial derivative matrix (a Hessian matrix) of a loss function of network parameters of a neural network are determined and then used to weight (Hessian-weighting) the network parameters as part of quantizing the network parameters. In another aspect, the neural network is trained using first and second moment estimates of gradients of the network parameters and then the second moment estimates are used to weight the network parameters as part of quantizing the network parameters. In yet another aspect, network parameter quantization is performed by using an entropy-constrained scalar quantization (ECSQ) iterative algorithm. In yet another aspect, network parameter quantization is performed by quantizing the network parameters of all layers of a deep neural network together at once.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: May 3, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
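The Hessian-weighting idea has a closed form for the value shared by a cluster of weights: minimizing the Hessian-weighted squared error gives the weighted mean. A minimal sketch:

```python
import numpy as np

def hessian_weighted_centroid(w, h):
    # Shared value q minimizing sum_i h_i * (w_i - q)^2 over a cluster:
    # the Hessian-diagonal-weighted mean of the cluster's weights.
    return np.sum(h * w) / np.sum(h)

# A weight with larger curvature (h) pulls the centroid toward itself
c = hessian_weighted_centroid(np.array([1.0, 2.0, 3.0]),
                              np.array([1.0, 1.0, 2.0]))
```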
  • Publication number: 20220093116
    Abstract: A method and system for providing Gaussian weighted self-attention for speech enhancement are herein provided. According to one embodiment, the method includes receiving an input noise signal, generating a score matrix based on the received input noise signal, and applying a Gaussian weighted function to the generated score matrix.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: JaeYoung KIM, Mostafa EL-KHAMY, Jungwon LEE
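The Gaussian weighting described here (and in the granted version below) multiplies the attention score matrix by a Gaussian of the distance between time positions, biasing each frame toward its neighbors. A sketch, with sigma as an assumed parameter:

```python
import numpy as np

def gaussian_weighted_attention(q, k, v, sigma=2.0):
    # Self-attention whose score matrix is weighted by a Gaussian of the
    # distance |i - j| between positions (illustrative, not the patent's
    # exact formulation).
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(t)
    gauss = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    weighted = scores * gauss
    e = np.exp(weighted - weighted.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)   # rows sum to one
    return attn @ v

rng = np.random.default_rng(0)
frames = rng.standard_normal((5, 4))          # 5 time steps, 4 features
enhanced = gaussian_weighted_attention(frames, frames, frames)
```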
  • Publication number: 20220092383
    Abstract: A method and system are provided. The method includes topologically sorting layers of a neural network, selecting a quantization process that utilizes a quantization of a previous layer, and determining, with the selected quantization process, a quantization mode of one layer in the neural network based on the quantization of a previous layer.
    Type: Application
    Filed: February 10, 2021
    Publication date: March 24, 2022
    Inventors: Fatih CAKIR, Mostafa EL-KHAMY, Jungwon LEE
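Topologically sorting the layers guarantees that each layer's quantization decision can see its predecessors' decisions. The propagation rule below is hypothetical; only the ordering step reflects the abstract:

```python
from graphlib import TopologicalSorter

# Toy layer DAG: each layer maps to the set of layers it depends on
graph = {"conv2": {"conv1"}, "conv3": {"conv1"}, "fc": {"conv2", "conv3"}}
order = list(TopologicalSorter(graph).static_order())  # conv1 first, fc last

# Hypothetical rule: a layer becomes int8 once all predecessors are int8
modes = {}
for layer in order:
    preds = graph.get(layer, set())
    modes[layer] = "int8" if all(modes[p] == "int8" for p in preds) else "float32"
```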
  • Publication number: 20220083861
    Abstract: Methods and apparatuses for deep learning training are provided which include receiving a candidate unit for classification. The candidate unit including an intersection area between a ground-truth bounding box and a detection box. The candidate unit is classified by assigning a label that is a probability value that a given feature is observed in the intersection area. Deep learning training is performed using the assigned label of the classified candidate unit.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 17, 2022
    Inventors: Xianzhi DU, Mostafa EL-KHAMY, Jungwon LEE
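The soft-label idea here (and in the granted version below) assigns a probability in [0, 1] derived from box overlap instead of a hard 0/1 class label. Intersection-over-union is used below as one common overlap measure; the patent defines the label via the intersection area:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns overlap ratio in [0, 1],
    # usable directly as a soft label for deep learning training.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection box half-overlapping the ground truth gets ~0.33, not 0 or 1
label = iou((0, 0, 2, 2), (1, 0, 3, 2))
```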
  • Publication number: 20220083855
    Abstract: A method for training a generator, by a generator training system including a processor and memory, includes: extracting training statistical characteristics from a batch normalization layer of a pre-trained model, the training statistical characteristics including a training mean μ and a training variance σ²; initializing a generator configured with generator parameters; generating a batch of synthetic data using the generator; supplying the batch of synthetic data to the pre-trained model; measuring statistical characteristics of activations at the batch normalization layer and at the output of the pre-trained model in response to the batch of synthetic data, the statistical characteristics including a measured mean μ̂ and a measured variance σ̂²; computing a training loss in accordance with a loss function L based on μ, σ², μ̂, and σ̂²; and iteratively updating the generator parameters in accordance with the training loss until …
    Type: Application
    Filed: November 12, 2020
    Publication date: March 17, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
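The statistics-matching loss above pushes the generator's synthetic-batch statistics toward the batch-norm statistics stored in the pre-trained model. A sketch, with squared error as one reasonable choice of discrepancy:

```python
import numpy as np

def bn_matching_loss(mu, var, mu_hat, var_hat):
    # Penalize mismatch between measured activation statistics
    # (mu_hat, var_hat) and stored batch-norm statistics (mu, var)
    return np.mean((mu_hat - mu) ** 2) + np.mean((var_hat - var) ** 2)

loss = bn_matching_loss(mu=np.array([0.0, 1.0]), var=np.array([1.0, 1.0]),
                        mu_hat=np.array([0.5, 1.0]), var_hat=np.array([1.0, 2.0]))
```

Iterating generator updates against this loss yields synthetic data whose activations look statistically like the original training data, without accessing that data.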
  • Patent number: 11270187
    Abstract: A method is provided. The method includes selecting a neural network model, wherein the neural network model includes a plurality of layers, and wherein each of the plurality of layers includes weights and activations; modifying the neural network model by inserting a plurality of quantization layers within the neural network model; associating a cost function with the modified neural network model, wherein the cost function includes a first coefficient corresponding to a first regularization term, and wherein an initial value of the first coefficient is pre-defined; and training the modified neural network model to generate quantized weights for a layer by increasing the first coefficient until all weights are quantized and the first coefficient satisfies a pre-defined threshold, further including optimizing a weight scaling factor for the quantized weights and an activation scaling factor for quantized activations, and wherein the quantized weights are quantized using the optimized weight scaling factor.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: March 8, 2022
    Inventors: Yoo Jin Choi, Mostafa El-Khamy, Jungwon Lee
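The regularization term whose coefficient grows during training can be sketched as a penalty pulling each weight toward its nearest quantization level; the grid below is an assumed codebook:

```python
import numpy as np

def quantization_regularizer(weights, grid):
    # Squared distance of each weight to its nearest quantization level;
    # increasing this term's coefficient gradually snaps weights to the grid
    grid = np.asarray(grid)
    d = np.min(np.abs(weights[:, None] - grid), axis=1)
    return np.sum(d ** 2)

penalty = quantization_regularizer(np.array([0.1, 0.6, -0.4]), [-0.5, 0.0, 0.5])
# Fully quantized weights incur zero penalty, the training end state
zero = quantization_regularizer(np.array([0.5, -0.5, 0.0]), [-0.5, 0.0, 0.5])
```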
  • Publication number: 20220067582
    Abstract: Methods and apparatuses are provided for continual few-shot learning. A model for a base task is generated with base classification weights for base classes of the base task. A series of novel tasks is sequentially received. Upon receiving each novel task in the series of novel tasks, the model is updated with novel classification weights for novel classes of the respective novel task. The novel classification weights are generated by a weight generator based on one or more of the base classification weights and, when one or more other novel tasks in the series are previously received, one or more other novel classification weights for novel classes of the one or more other novel tasks. Additionally, for each novel task, a first set of samples of the respective novel task are classified into the novel classes using the updated model.
    Type: Application
    Filed: January 22, 2021
    Publication date: March 3, 2022
    Inventors: Yoo Jin CHOI, Mostafa El-Khamy, Sijia Wang, Jungwon Lee
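One common form of the weight generator described above produces a novel class's classification weight from its feature prototype plus an attention-weighted mix of base-class weights. The scheme below is a generic few-shot sketch; its details are assumptions, not the patent's construction:

```python
import numpy as np

def generate_novel_weights(base_weights, novel_prototypes):
    # Attention of each novel prototype over base classification weights
    sims = novel_prototypes @ base_weights.T
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    return novel_prototypes + attn @ base_weights

base = np.eye(3)                      # 3 base classes, 3-dim features
novel = np.array([[1.0, 0.0, 0.0]])   # one novel class prototype
w_novel = generate_novel_weights(base, novel)
```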
  • Publication number: 20220058507
    Abstract: Methods and devices are provided for performing federated learning. A global model is distributed from a server to a plurality of client devices. At each of the plurality of client devices: model inversion is performed on the global model to generate synthetic data; the global model is trained on an augmented dataset of collected data and the synthetic data to generate a respective client model; and the respective client model is transmitted to the server. At the server: client models are received from the plurality of client devices, where each client model is received from a respective client device of the plurality of client devices; model inversion is performed on each client model to generate a synthetic dataset; the client models are averaged to generate an averaged model; and the averaged model is trained using the synthetic dataset to generate an updated model.
    Type: Application
    Filed: February 19, 2021
    Publication date: February 24, 2022
    Inventors: Mostafa El-Khamy, Weituo Hao, Jungwon Lee
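The server-side averaging step is a FedAvg-style element-wise mean of client parameters; the patent then fine-tunes that average on synthetic data obtained by model inversion (not shown here):

```python
import numpy as np

def federated_average(client_params):
    # Element-wise mean of client model parameters received at the server
    return np.mean(np.stack(client_params), axis=0)

avg = federated_average([np.array([1.0, 2.0]), np.array([3.0, 6.0])])
# -> [2.0, 4.0]
```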
  • Publication number: 20220051426
    Abstract: Method and systems are provided for robust disparity estimation based on cost-volume attention. A method includes extracting first feature maps from left images captured by a first camera; extracting second feature maps from right images captured by a second camera; calculating a matching cost based on a comparison of the first and second feature maps to generate a cost volume; generating an attention-aware cost volume from the generated cost volume; and aggregating the attention-aware cost volume to generate an output disparity.
    Type: Application
    Filed: December 22, 2020
    Publication date: February 17, 2022
    Inventors: Mostafa EL-KHAMY, Jungwon LEE, Haoyu REN
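The cost-volume step above compares left features against right features shifted by each candidate disparity. A minimal sketch with an L1 matching cost (the cost choice, and the patented attention over the volume, are not specified here):

```python
import numpy as np

def cost_volume(left_feat, right_feat, max_disp):
    # vol[d, y, x] = matching cost of assigning disparity d to pixel (y, x)
    h, w = left_feat.shape
    vol = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        vol[d, :, d:] = np.abs(left_feat[:, d:] - right_feat[:, :w - d])
    return vol

rng = np.random.default_rng(1)
right = rng.standard_normal((4, 8))
left = np.roll(right, 1, axis=1)       # synthetic pair with disparity 1
disparity = cost_volume(left, right, max_disp=3).argmin(axis=0)
```

Aggregating the (attention-weighted) volume and taking the lowest-cost disparity per pixel yields the output disparity map.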
  • Publication number: 20220004827
    Abstract: A method and system for training a neural network are provided. The method includes receiving an input image, selecting at least one data augmentation method from a pool of data augmentation methods, generating an augmented image by applying the selected at least one data augmentation method to the input image, and generating a mixed image from the input image and the augmented image.
    Type: Application
    Filed: April 27, 2021
    Publication date: January 6, 2022
    Inventors: Qingfeng LIU, Mostafa EL-KHAMY, Jungwon LEE, Behnam Babagholami MOHAMADABADI
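The mixing step above blends the input image with its augmented version; a convex, mixup-style combination is one natural reading. The blend ratio and the flip augmentation below are illustrative choices:

```python
import numpy as np

def mix_images(image, augmented, alpha=0.5):
    # Convex blend of the input image and its augmented version
    return alpha * image + (1 - alpha) * augmented

image = np.arange(4.0).reshape(2, 2)
augmented = image[:, ::-1]            # one pick from the augmentation pool
mixed = mix_images(image, augmented)
```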
  • Publication number: 20210406647
    Abstract: A convolutional neural network (CNN) system for generating a classification for an input image is presented. The CNN system comprises circuitry running on clock cycles and configured to compute a product of two received values, and at least one non-transitory computer-readable medium that stores instructions for the circuitry to derive a feature map based on at least the input image; puncture at least one selection among the feature map and a kernel by setting the value of an element at an index of the at least one selection to zero and cyclic shifting a puncture pattern to achieve a 1/d reduction in number of clock cycles, where d is an integer and the puncture interval value is greater than 1. The feature map is convolved with the kernel to generate an output, and a classification of the input image is generated based on the output.
    Type: Application
    Filed: September 13, 2021
    Publication date: December 30, 2021
    Inventors: Mostafa El-Khamy, Yoo Jin Choi, Jungwon Lee
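Puncturing zeroes elements under a regular, cyclically shiftable pattern so the corresponding multiply cycles can be skipped in hardware. A software sketch of the masking only (the clock-cycle savings happen in the circuitry):

```python
import numpy as np

def puncture(feature_map, interval, shift=0):
    # Zero every `interval`-th element of the flattened map under a
    # cyclically shifted mask, removing 1/interval of the multiplies
    flat = feature_map.astype(float).ravel()
    flat[(np.arange(flat.size) + shift) % interval == 0] = 0.0
    return flat.reshape(feature_map.shape)

fm = np.ones((2, 4))
p0 = puncture(fm, interval=4, shift=0)  # zeros at flat indices 0 and 4
p1 = puncture(fm, interval=4, shift=1)  # same pattern, cyclically shifted
```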
  • Patent number: 11205120
    Abstract: Apparatuses and methods of manufacturing same, systems, and methods for training deep learning machines are described. In one aspect, candidate units, such as detection bounding boxes in images or phones of an input audio feature, are classified using soft labelling, where at least one label has a range of possible values between 0 and 1 based, in the case of images, on the overlap of a detection bounding box and one or more ground-truth bounding boxes for one or more classes.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: December 21, 2021
    Inventors: Xianzhi Du, Mostafa El-Khamy, Jungwon Lee
  • Patent number: 11195541
    Abstract: A method and system for providing Gaussian weighted self-attention for speech enhancement are herein provided. According to one embodiment, the method includes receiving an input noise signal, generating a score matrix based on the received input noise signal, and applying a Gaussian weighted function to the generated score matrix.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: December 7, 2021
    Inventors: JaeYoung Kim, Mostafa El-Khamy, Jungwon Lee