Patents by Inventor Fahad Sarfraz

Fahad Sarfraz has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135169
    Abstract: A computer-implemented method that encourages sparse coding in deep neural networks and mimics the interplay of multiple memory systems for maintaining a balance between stability and plasticity. To this end, the method includes a multi-memory experience replay mechanism that employs sparse coding. Activation sparsity is enforced along with a complementary dropout mechanism, which encourages the model to activate similar neurons for semantically similar inputs while reducing the overlap with activation patterns of semantically dissimilar inputs. The semantic dropout provides an efficient mechanism for balancing reusability and interference of features depending on the similarity of classes across tasks. Furthermore, the method includes the step of maintaining an additional long-term semantic memory that aggregates the information encoded in the synaptic weights of the working memory.
    Type: Application
    Filed: December 29, 2022
    Publication date: April 25, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
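    Sketch: A minimal PyTorch sketch of two mechanisms this abstract names: top-k
    (k-WTA-style) activation sparsity and a long-term semantic memory maintained
    as an exponential moving average (EMA) of the working model's weights. All
    class names, sizes, and the decay value are illustrative assumptions, not
    the patent's.

      import copy
      import torch
      import torch.nn as nn

      def k_sparse(x: torch.Tensor, k: int) -> torch.Tensor:
          # Keep the k largest activations per sample, zero the rest.
          mask = torch.zeros_like(x)
          mask.scatter_(1, torch.topk(x, k, dim=1).indices, 1.0)
          return x * mask

      class WorkingModel(nn.Module):
          def __init__(self, in_dim=784, hidden=256, classes=10, k=32):
              super().__init__()
              self.fc1 = nn.Linear(in_dim, hidden)
              self.fc2 = nn.Linear(hidden, classes)
              self.k = k

          def forward(self, x):
              h = k_sparse(torch.relu(self.fc1(x)), self.k)  # sparse code
              return self.fc2(h)

      working = WorkingModel()
      semantic = copy.deepcopy(working)  # long-term semantic memory

      @torch.no_grad()
      def update_semantic_memory(decay: float = 0.999) -> None:
          # Aggregate the information encoded in the working model's synaptic
          # weights into the long-term semantic memory via an EMA.
          for p_sem, p_work in zip(semantic.parameters(), working.parameters()):
              p_sem.mul_(decay).add_(p_work, alpha=1 - decay)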
  • Publication number: 20240119280
    Abstract: A computer-implemented method that maintains a memory of errors along the training trajectory and adjusts the contribution of each sample towards learning based on how far it is from the mean statistics of the error memory. The method may include the step of maintaining an additional semantic memory, called a stable model, which gradually aggregates the knowledge encoded in the weights of the working model. The stable model is utilized to select the low-loss samples from the current task for populating the error memory. The different components of the method complement each other to effectively reduce the drift in representations at the task boundary and enable consolidation of information across tasks.
    Type: Application
    Filed: January 20, 2023
    Publication date: April 11, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
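    Sketch: A minimal sketch of the loss-weighting idea, assuming the error
    memory stores recent per-sample losses and each sample's contribution is
    scaled by its distance from the memory's mean statistics. The buffer size
    and the exponential weighting function are assumptions, not the patent's.

      from collections import deque

      import torch

      error_memory = deque(maxlen=5000)  # recent per-sample training losses

      def error_weighted_loss(per_sample_loss: torch.Tensor) -> torch.Tensor:
          # Down-weight samples whose loss is far from the memory's mean.
          if len(error_memory) >= 100:
              stats = torch.tensor(list(error_memory))
              mean, std = stats.mean(), stats.std().clamp_min(1e-8)
              z = (per_sample_loss.detach() - mean).abs() / std
              weights = torch.exp(-z)
          else:  # not enough history yet: fall back to uniform weights
              weights = torch.ones_like(per_sample_loss)
          error_memory.extend(per_sample_loss.detach().tolist())
          return (weights * per_sample_loss).mean()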
  • Publication number: 20240046102
    Abstract: A computer-implemented method for general continual learning (CL) in artificial neural networks that provides a biologically plausible framework incorporating different mechanisms inspired by the brain. The underlying model comprises separate populations of exclusively excitatory and exclusively inhibitory neurons in each layer, adhering to Dale's principle, and the excitatory neurons (mimicking pyramidal cells) are augmented with dendrite-like structures for context-dependent processing of information. The dendritic segments process an additional context signal encoding task information and subsequently modulate the feedforward activity of the excitatory neurons. Additionally, the method provides an efficient mechanism for controlling the sparsity of activations using k-WTA (k-Winners-Take-All) activations and a heterogeneous dropout mechanism that encourages the model to use a different set of neurons for each task.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 8, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
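    Sketch: A minimal sketch of a layer with exclusively excitatory and
    exclusively inhibitory populations (Dale's principle) and a dendrite-like
    context gate on the excitatory units; the k-WTA sparsity the abstract also
    mentions matches the k_sparse helper in the first sketch above. The
    sign-constraint trick and the sigmoid gate are common implementation
    choices assumed here, not taken from the patent.

      import torch
      import torch.nn as nn

      class DalePopulationLayer(nn.Module):
          def __init__(self, in_dim, n_exc, n_inh, ctx_dim):
              super().__init__()
              self.n_exc = n_exc
              self.w_in = nn.Linear(in_dim, n_exc + n_inh)
              # Dendrite-like segments: a context signal encoding task
              # information gates the excitatory (pyramidal-like) units.
              self.dendrites = nn.Linear(ctx_dim, n_exc)
              self.w_out = nn.Parameter(
                  torch.randn(n_exc + n_inh, n_exc + n_inh) * 0.01)

          def forward(self, x, context):
              h = torch.relu(self.w_in(x))
              exc, inh = h[:, :self.n_exc], h[:, self.n_exc:]
              # Context-dependent modulation of feedforward activity.
              exc = exc * torch.sigmoid(self.dendrites(context))
              # Dale's principle: rows leaving excitatory units stay
              # non-negative, rows leaving inhibitory units stay non-positive.
              w = torch.cat([self.w_out[:self.n_exc].abs(),
                             -self.w_out[self.n_exc:].abs()], dim=0)
              return torch.cat([exc, inh], dim=1) @ w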
  • Publication number: 20230281451
    Abstract: A computer-implemented method of synaptic consolidation for training a neural network using an episodic memory and a semantic memory, by using a Fisher information matrix for estimating the importance of each synapse in the network to previous tasks of the neural network; evaluating the Fisher information matrix on the episodic memory using the semantic memory; adjusting the importance estimate such that the functional integrity of the filters in the convolutional layers is maintained, whereby the importance of each filter is given by the mean importance of its parameters; using the weights of the semantic memory as the anchor parameters for constraining an update of the synapses of the network based on the adjusted importance estimate; updating the semantic memory and Fisher information matrix stochastically using an exponential moving average; and interleaving samples from a current task with samples from the episodic memory for performing the training.
    Type: Application
    Filed: March 3, 2022
    Publication date: September 7, 2023
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
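    Sketch: A minimal sketch of the consolidation step: a diagonal Fisher
    estimate computed on episodic-memory samples through the semantic memory,
    filter-level averaging for convolutional weights, and a quadratic penalty
    anchored at the semantic memory's weights. Function names, the penalty
    form, and the strength lam are assumptions.

      import torch
      import torch.nn.functional as F

      def diagonal_fisher(semantic_model, mem_x, mem_y):
          # Per-synapse importance, estimated from squared gradients of the
          # loss on the episodic memory, evaluated on the semantic memory.
          semantic_model.zero_grad()
          F.cross_entropy(semantic_model(mem_x), mem_y).backward()
          fisher = {n: p.grad.detach() ** 2
                    for n, p in semantic_model.named_parameters()
                    if p.grad is not None}
          # Maintain filter integrity: give every parameter of a conv filter
          # the mean importance of that filter's parameters.
          for n, f in fisher.items():
              if f.dim() == 4:  # conv weight: [out_ch, in_ch, kH, kW]
                  fisher[n] = (f.mean(dim=(1, 2, 3), keepdim=True)
                               .expand_as(f).clone())
          return fisher

      def consolidation_penalty(working_model, anchor, fisher, lam=100.0):
          # Constrain synaptic updates toward the semantic-memory anchor
          # weights, scaled by the adjusted importance estimate.
          return lam * sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                           for n, p in working_model.named_parameters()
                           if n in fisher)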
  • Publication number: 20230076893
    Abstract: Embodiments of the disclosure provide methods and systems for an artificial intelligence method of making predictions from a sequence of images. The method may include receiving the sequence of images acquired at different time points. The method may further include applying a stable model to process the sequence of images to make the predictions. The stable model is trained along with a working model and a plastic model. The training enforces a consistency among the working model, the stable model, and the plastic model. The working model is trained using a loss function including a cross-entropy loss on a union of a training batch and memory exemplars and a consistency loss on the memory exemplars.
    Type: Application
    Filed: September 8, 2021
    Publication date: March 9, 2023
    Applicant: NavInfo Europe B.V.
    Inventors: Elahe Arani, Fahad Sarfraz, Bahram Zonooz
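    Sketch: A minimal sketch of one training step as the abstract describes it:
    cross-entropy on the union of the current batch and memory exemplars, plus
    a consistency loss on the exemplars. The abstract does not say how the
    stable and plastic models are updated or aggregated; the averaged target
    and the two-rate EMA below, including the decay values, are assumptions.

      import torch
      import torch.nn.functional as F

      def train_step(working, stable, plastic, opt,
                     batch_x, batch_y, mem_x, mem_y):
          # Cross-entropy on the union of the batch and memory exemplars.
          x = torch.cat([batch_x, mem_x])
          y = torch.cat([batch_y, mem_y])
          ce = F.cross_entropy(working(x), y)
          # Consistency loss on the memory exemplars only.
          with torch.no_grad():
              target = (stable(mem_x) + plastic(mem_x)) / 2  # assumed
          consistency = F.mse_loss(working(mem_x), target)
          opt.zero_grad()
          (ce + consistency).backward()
          opt.step()
          with torch.no_grad():  # EMA updates at two rates (assumed)
              for p_s, p_p, p_w in zip(stable.parameters(),
                                       plastic.parameters(),
                                       working.parameters()):
                  p_s.mul_(0.999).add_(p_w, alpha=0.001)  # slow: stable
                  p_p.mul_(0.99).add_(p_w, alpha=0.01)    # fast: plastic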
  • Publication number: 20220044116
    Abstract: A computer-implemented method of training a computer-implemented deep neural network with a dataset with annotated labels, wherein at least two models are concurrently trained collaboratively, and wherein each model is trained with a supervised learning loss and, in addition, a mimicry loss, wherein the supervised learning loss relates to learning from environmental cues and supervision from the mimicry loss relates to imitation in cultural learning.
    Type: Application
    Filed: July 21, 2021
    Publication date: February 10, 2022
    Inventors: Elahe Arani, Fahad Sarfraz, Bahram Zonooz
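    Sketch: A minimal sketch of two models trained collaboratively, each with a
    supervised loss plus a mimicry loss, assuming the mimicry term is a KL
    divergence between softened predictions (a common choice; the patent may
    define it differently). The temperature T and mixing weight alpha are
    illustrative.

      import torch
      import torch.nn.functional as F

      def mimicry_loss(student_logits, teacher_logits, T=2.0):
          # KL between softened distributions, scaled by T^2 as is customary.
          return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                          F.softmax(teacher_logits / T, dim=1),
                          reduction="batchmean") * T * T

      def collaborative_step(model_a, model_b, opt_a, opt_b, x, y, alpha=0.5):
          logits_a, logits_b = model_a(x), model_b(x)
          # Supervised loss: learning from environmental cues (the labels);
          # mimicry loss: imitation of the peer model, as in cultural learning.
          loss_a = (F.cross_entropy(logits_a, y)
                    + alpha * mimicry_loss(logits_a, logits_b.detach()))
          loss_b = (F.cross_entropy(logits_b, y)
                    + alpha * mimicry_loss(logits_b, logits_a.detach()))
          opt_a.zero_grad(); loss_a.backward(); opt_a.step()
          opt_b.zero_grad(); loss_b.backward(); opt_b.step()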
  • Publication number: 20210166123
    Abstract: A method for training a robust deep neural network model in collaboration with a standard model in a minimax game in a closed learning loop. The method encourages the robust and standard models to align their feature spaces by utilizing the task-specific decision boundaries and to explore the input space more broadly. The supervision from the standard model acts as a noise-free reference for regularizing the robust model. This effectively adds a prior on the learned representations that encourages the model to learn semantically relevant features that are less susceptible to off-manifold perturbations introduced by adversarial attacks. The adversarial examples are generated by identifying regions in the input space where the discrepancy between the robust and standard models is maximum within the perturbation bound. In the subsequent step, the discrepancy between the robust and standard models is minimized in addition to optimizing them on their respective tasks.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 3, 2021
    Inventors: Bahram Zonooz, Fahad Sarfraz, Elahe Arani
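    Sketch: A minimal sketch of the two alternating steps: craft a perturbation
    inside an L-infinity ball that maximizes the robust/standard discrepancy,
    then train both models on their tasks while minimizing that discrepancy.
    The step sizes, bound, KL discrepancy measure, and single shared optimizer
    are assumptions; inputs are assumed to lie in [0, 1].

      import torch
      import torch.nn.functional as F

      def discrepancy(robust_logits, standard_logits):
          return F.kl_div(F.log_softmax(robust_logits, dim=1),
                          F.softmax(standard_logits, dim=1),
                          reduction="batchmean")

      def make_adversarial(robust, standard, x, eps=8/255, step=2/255, iters=7):
          # Projected gradient ascent on the robust/standard discrepancy
          # within the perturbation bound.
          x_adv = x.clone().detach()
          for _ in range(iters):
              x_adv.requires_grad_(True)
              loss = discrepancy(robust(x_adv), standard(x_adv))
              grad = torch.autograd.grad(loss, x_adv)[0]
              with torch.no_grad():
                  x_adv = x_adv + step * grad.sign()              # ascent step
                  x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project
                  x_adv = x_adv.clamp(0.0, 1.0)
          return x_adv.detach()

      def train_step(robust, standard, opt, x, y, beta=1.0):
          x_adv = make_adversarial(robust, standard, x)
          # Optimize each model on its task while minimizing the discrepancy.
          loss = (F.cross_entropy(robust(x_adv), y)     # robust model's task
                  + F.cross_entropy(standard(x), y)     # standard model's task
                  + beta * discrepancy(robust(x_adv), standard(x_adv)))
          opt.zero_grad(); loss.backward(); opt.step()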