Patents by Inventor Elahe Arani

Elahe Arani has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135722
    Abstract: A computer-implemented method that provides a novel shape-aware few-shot learning (FSL) framework, referred to as LSFSL. In addition to the inductive biases associated with deep learning models, the method introduces a meaningful shape bias, capturing the human behavior of recognizing objects by their shape. The shape information is distilled to address the texture bias of CNN-based models. During training, the model has two branches: an RIN branch, a network that takes color images (preferably RGB) as input, and an SIN branch, a network that takes shape-semantic input. Each branch comprises a CNN backbone followed by a fully connected layer performing classification. The RIN and SIN branches receive the RGB input image and a shape-information-enhanced RGB input image, respectively.
    Type: Application
    Filed: February 7, 2023
    Publication date: April 25, 2024
    Applicant: NavInfo Europe B.V.
    Inventors: Deepan Chakravarthi Padmanabhan, Shruthi Gowda, Elahe Arani, Bahram Zonooz
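    Code sketch: a minimal PyTorch rendering of the two-branch training setup described in the abstract above; all class and layer names are assumptions for illustration, not taken from the patent.
      import torch
      import torch.nn as nn

      class TwoBranchFSL(nn.Module):
          """Hypothetical sketch: an RGB (RIN) branch and a shape (SIN) branch,
          each a CNN backbone followed by a fully connected classifier."""
          def __init__(self, num_classes: int = 5):
              super().__init__()
              def backbone():
                  return nn.Sequential(
                      nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
              self.rin, self.sin = backbone(), backbone()
              self.rin_fc = nn.Linear(32, num_classes)
              self.sin_fc = nn.Linear(32, num_classes)

          def forward(self, rgb, shape_enhanced_rgb):
              return self.rin_fc(self.rin(rgb)), self.sin_fc(self.sin(shape_enhanced_rgb))

      model = TwoBranchFSL()
      x = torch.randn(2, 3, 32, 32)
      logits_rin, logits_sin = model(x, x)  # in practice the SIN input is shape-enhanced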
  • Publication number: 20240135170
    Abstract: A computer-implemented method for continual learning of multiple tasks sequentially using a deep neural network, the method comprising: providing a plurality of task-attention modules; processing sensory inputs using the deep neural network to build a first representation space of fixed capacity (a common representation space); and admitting only task-relevant information from the first representation space into a second, different representation space (a global workspace) using the plurality of task-attention modules, wherein each task-attention module of the plurality is specialized towards a different task.
    Type: Application
    Filed: January 3, 2023
    Publication date: April 25, 2024
    Inventors: Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz
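    Code sketch: a minimal PyTorch illustration of per-task attention gating a fixed-capacity common representation into a smaller global workspace, as described above; the module names and dimensions are assumptions.
      import torch
      import torch.nn as nn

      class TaskAttention(nn.Module):
          """Hypothetical per-task gate: admits only task-relevant features from
          the common representation space into the global workspace."""
          def __init__(self, rep_dim: int = 128, workspace_dim: int = 32):
              super().__init__()
              self.gate = nn.Sequential(nn.Linear(rep_dim, rep_dim), nn.Sigmoid())
              self.project = nn.Linear(rep_dim, workspace_dim)

          def forward(self, rep):
              return self.project(self.gate(rep) * rep)

      encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # common space
      task_attention = nn.ModuleList(TaskAttention() for _ in range(3))   # one per task

      x = torch.randn(4, 3, 32, 32)
      rep = encoder(x)                    # fixed-capacity common representation
      workspace = task_attention[0](rep)  # task 0 admits its relevant information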
  • Publication number: 20240135169
    Abstract: A computer-implemented method that encourages sparse coding in deep neural networks and mimics the interplay of multiple memory systems for maintaining a balance between stability and plasticity. To this end, the method includes a multi-memory experience replay mechanism that employs sparse coding. Activation sparsity is enforced along with a complementary dropout mechanism, which encourages the model to activate similar neurons for semantically similar inputs while reducing the overlap with activation patterns of semantically dissimilar inputs. The semantic dropout provides an efficient mechanism for balancing reusability and interference of features depending on the similarity of classes across tasks. Furthermore, the method includes the step of maintaining an additional long-term semantic memory that aggregates the information encoded in the synaptic weights of the working memory.
    Type: Application
    Filed: December 29, 2022
    Publication date: April 25, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
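    Code sketch: one way to render two of the ingredients above (activation sparsity and a long-term semantic memory aggregating working-memory weights); treating the aggregation as an exponential moving average is an assumption.
      import torch
      import torch.nn as nn

      def k_sparse(act, k):
          """Enforce activation sparsity: keep the k strongest activations per sample."""
          topk = torch.topk(act, k, dim=1).indices
          mask = torch.zeros_like(act).scatter_(1, topk, 1.0)
          return act * mask

      working = nn.Linear(64, 64)    # working memory (updated by gradient descent)
      semantic = nn.Linear(64, 64)   # long-term semantic memory

      @torch.no_grad()
      def update_semantic_memory(decay: float = 0.999):
          """Aggregate the working model's synaptic weights into the semantic memory."""
          for s, w in zip(semantic.parameters(), working.parameters()):
              s.mul_(decay).add_(w, alpha=1 - decay)

      h = k_sparse(torch.relu(working(torch.randn(8, 64))), k=16)
      update_semantic_memory()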
  • Publication number: 20240127066
    Abstract: A computer-implemented method for improving generalization when training deep neural networks in online settings. The method includes a general learning paradigm for sequential data, referred to as Learn, Unlearn, RElearn (LURE): a dynamic re-initialization method that addresses the problem of generalization of parameterized networks on sequential data by selectively retaining the task-specific connections according to an importance criterion and re-randomizing the less important parameters at each mega batch of training. This selective forgetting retains previous information while improving generalization to unseen samples.
    Type: Application
    Filed: January 30, 2023
    Publication date: April 18, 2024
    Inventors: Vijaya Raghavan Thiruvengadathan Ramkumar, Elahe Arani, Bahram Zonooz
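    Code sketch: a hypothetical LURE-style re-initialization step; using weight magnitude as the importance criterion is an assumption made for illustration.
      import torch
      import torch.nn as nn

      @torch.no_grad()
      def relearn_reinit(model: nn.Module, keep_ratio: float = 0.7):
          """Retain the most important connections (here: largest magnitude) and
          re-randomize the less important parameters at a mega-batch boundary."""
          for p in model.parameters():
              if p.dim() < 2:
                  continue
              k = int(p.numel() * keep_ratio)
              threshold = p.abs().flatten().kthvalue(p.numel() - k).values
              fresh = torch.empty_like(p)
              nn.init.kaiming_uniform_(fresh)
              p.copy_(torch.where(p.abs() > threshold, p, fresh))

      net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
      relearn_reinit(net)  # call after each mega batch of sequential training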
  • Publication number: 20240119280
    Abstract: A computer-implemented method that maintains a memory of errors along the training trajectory and adjusts the contribution of each sample towards learning based on how far it is from the mean statistics of the error memory. The method may include the step of maintaining an additional semantic memory, called a stable model, which gradually aggregates the knowledge encoded in the weights of the working model. The stable model is utilized to select the low-loss samples from the current task for populating the error memory. The different components of the method complement each other to effectively reduce the drift in representations at the task boundary and enable consolidation of information across tasks.
    Type: Application
    Filed: January 20, 2023
    Publication date: April 11, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
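    Code sketch: a hypothetical error memory that down-weights samples far from the memory's mean loss statistics; the exponential weighting rule is an assumption.
      import torch
      from collections import deque

      class ErrorMemory:
          """Track per-sample losses along the training trajectory and weight
          each new sample by its distance from the memory's mean statistics."""
          def __init__(self, capacity: int = 1000):
              self.losses = deque(maxlen=capacity)

          def weights(self, batch_losses, sharpness: float = 1.0):
              if len(self.losses) < 2:
                  return torch.ones_like(batch_losses)
              mem = torch.tensor(list(self.losses))
              z = (batch_losses - mem.mean()) / (mem.std() + 1e-8)
              return torch.exp(-sharpness * z.clamp(min=0.0))  # damp outlier losses

          def update(self, batch_losses):
              self.losses.extend(batch_losses.detach().tolist())

      memory = ErrorMemory()
      losses = torch.rand(8)                                   # per-sample losses
      weighted_loss = (memory.weights(losses) * losses).mean()
      memory.update(losses)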
  • Publication number: 20240119304
    Abstract: A computer-implemented method including the step of formulating a continual learning algorithm with both element similarity and relational similarity between the stable and plastic models in a dual-memory setup with rehearsal. While the method uses only two memories to simplify the analysis of the impact of relational similarity, it can be trivially extended to more than two memories. Specifically, the plastic model learns on the data stream as well as on memory samples, while the stable model maintains an exponential moving average of the plastic model, resulting in a more generalizable model. Simultaneously, to mitigate forgetting and to enable forward transfer, the stable model distills instance-wise and relational knowledge to the plastic model on memory samples. Instance-wise knowledge distillation maintains element similarities, while the relational similarity loss maintains relational similarities.
    Type: Application
    Filed: March 8, 2023
    Publication date: April 11, 2024
    Inventors: Arnav Varma, Elahe Arani, Bahram Zonooz
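    Code sketch: the two distillation terms described above, rendered in PyTorch; the specific loss functions (MSE on features and on pairwise cosine similarities) are assumptions.
      import torch
      import torch.nn.functional as F

      def relational_similarity(feats):
          """Pairwise cosine similarities between the samples in a batch."""
          f = F.normalize(feats, dim=1)
          return f @ f.t()

      def dual_memory_distillation(plastic_feats, stable_feats):
          """Instance-wise loss maintains element similarities; relational loss
          maintains the pairwise structure between memory samples."""
          instance_loss = F.mse_loss(plastic_feats, stable_feats)
          relational_loss = F.mse_loss(relational_similarity(plastic_feats),
                                       relational_similarity(stable_feats))
          return instance_loss + relational_loss

      plastic = torch.randn(8, 64, requires_grad=True)  # plastic model features
      stable = torch.randn(8, 64)                       # EMA (stable model) features
      loss = dual_memory_distillation(plastic, stable)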
  • Patent number: 11948272
    Abstract: A computer-implemented method to improve scale consistency and/or scale awareness in a model of self-supervised depth and ego-motion prediction neural networks processing a video stream of monocular images, wherein complementary GPS coordinates synchronized with the images are used to calculate a GPS-to-scale loss that enforces scale consistency and/or awareness on the monocular self-supervised ego-motion and depth estimation. The relative weight assigned to the GPS-to-scale loss increases exponentially as training progresses. The depth and ego-motion prediction neural networks are trained using an appearance-based photometric loss between real and synthesized target images, as well as a smoothness loss on the depth predictions.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: April 2, 2024
    Assignee: NavInfo Europe B.V.
    Inventors: Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
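    Code sketch: one plausible form of the GPS-to-scale loss and its exponentially increasing weight; the exact schedule and distance penalty are assumptions, not taken from the patent.
      import math
      import torch

      def gps_to_scale_weight(step: int, total_steps: int, max_weight: float = 1.0):
          """Relative weight that grows exponentially as training progresses."""
          return max_weight * (math.exp(step / total_steps) - 1) / (math.e - 1)

      def gps_to_scale_loss(pred_translation, gps_displacement):
          """Penalize mismatch between the predicted ego-motion magnitude and the
          metric displacement between synchronized GPS fixes."""
          return (pred_translation.norm(dim=-1) - gps_displacement).abs().mean()

      t = torch.randn(4, 3)        # predicted inter-frame translations
      d = torch.rand(4) * 2.0      # metric GPS displacements (meters)
      loss = gps_to_scale_weight(500, 10000) * gps_to_scale_loss(t, d)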
  • Publication number: 20240104373
    Abstract: A computer-implemented method for continual task learning in an artificial cognitive architecture that includes a first neural network module for encoding explicit knowledge representations, a second neural network module for encoding implicit knowledge representations, and a memory buffer. A visual data stream is provided to the architecture, and visual data samples from said visual data stream are stored in the memory buffer. Both visual data samples of the visual data stream and visual data samples from the memory buffer are processed using the first neural network module for learning explicit knowledge representations, and likewise using the second neural network module for learning implicit knowledge representations.
    Type: Application
    Filed: December 29, 2022
    Publication date: March 28, 2024
    Inventors: Shruthi Gowda, Bahram Zonooz, Elahe Arani
  • Publication number: 20240054603
    Abstract: A computer-implemented method for mimicking saccadic eye movements in artificial neural networks, proposing a bio-inspired transformer architecture, called Foveated Dynamic Transformers, that dynamically selects multiscale tokens with respect to the input image. Foveation and fixation modules are introduced into the vision transformer block to exploit mechanisms inspired by the human visual system. To mimic foveation in the human visual system, the foveation module processes the input token to generate multiscale queries, keys, and values, transforming the token at several scales with increasing window sizes. To simulate eye movements, the method uses dynamic networks: a dynamic fixation module generates a fixation map corresponding to each token in each transformer block, and tokens that are not at the fixation point are not processed.
    Type: Application
    Filed: September 1, 2022
    Publication date: February 15, 2024
    Inventors: Ibrahim Batuhan Akkaya, Elahe Arani, Bahram Zonooz
  • Publication number: 20240054337
    Abstract: A computer-implemented method for continual task learning in a training framework. The method includes: providing a first deep neural network (Φw) including a first function (Gw) and a second function (Fw) which are nested; providing a second deep neural network (Φs) including a third function (Fs) as a counterpart to the second nested function (Fw); feeding input images to the first neural network (Φw), such as through a filter and/or via patch embedding; generating representations of task samples using the first function (Gw); providing a memory (Dm) for storing at least some of the generated representations of task samples and/or having pre-stored task representations; providing the generated and memory-stored representations of task samples to the second function (Fw); and providing memory-stored representations of task samples to the third function (Fs).
    Type: Application
    Filed: September 2, 2022
    Publication date: February 15, 2024
    Inventors: Kishaan Jeeveswaran, Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz
  • Publication number: 20240046102
    Abstract: A computer-implemented method for general continual learning (CL) in artificial neural networks that provides a biologically plausible framework incorporating different mechanisms inspired by the brain. The underlying model comprises separate populations of exclusively excitatory and exclusively inhibitory neurons in each layer, adhering to Dale's principle, and the excitatory neurons (mimicking pyramidal cells) are augmented with dendrite-like structures for context-dependent processing of information. The dendritic segments process an additional context signal encoding task information and subsequently modulate the feedforward activity of the excitatory neuron. Additionally, the method provides an efficient mechanism for controlling the sparsity in activations using k-WTA (k-Winners-Take-All) activations and a heterogeneous dropout mechanism that encourages the model to use a different set of neurons for each task.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 8, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
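    Code sketch: a toy layer enforcing Dale's principle by fixing each input unit's sign, combined with k-WTA sparsity; the sign-mask construction is an assumption for illustration.
      import torch
      import torch.nn as nn

      class DaleLinear(nn.Module):
          """Each presynaptic unit is exclusively excitatory or inhibitory:
          a non-negative magnitude matrix is signed by a fixed mask."""
          def __init__(self, in_features, out_features, excitatory_fraction=0.8):
              super().__init__()
              self.magnitude = nn.Parameter(torch.rand(out_features, in_features) * 0.1)
              sign = torch.ones(in_features)
              sign[int(in_features * excitatory_fraction):] = -1.0  # inhibitory units
              self.register_buffer("sign", sign)

          def forward(self, x):
              return x @ (self.magnitude.abs() * self.sign).t()

      def k_wta(act, k):
          """k-Winners-Take-All: zero all but the k largest activations per sample."""
          mask = torch.zeros_like(act).scatter_(1, torch.topk(act, k, dim=1).indices, 1.0)
          return act * mask

      out = k_wta(torch.relu(DaleLinear(64, 32)(torch.randn(8, 64))), k=8)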
  • Publication number: 20240037455
    Abstract: A computer-implemented method for multi-task structural learning in an artificial neural network in which both the architecture and its parameters are learned simultaneously. The method utilizes two neural operators, namely neuron creation and neuron removal, to aid structural learning. The method creates excess neurons by starting from a disparate network for each task. As training progresses, corresponding task neurons in a layer pave the way for a specialized group neuron, leading to a structural change. In the task learning phase of training, different neurons specialize in different tasks. In the interleaved structural learning phase, locally similar task neurons transfer their knowledge to a newly created group neuron before being removed. Training is completed with a final fine-tuning phase in which only the multi-task loss is used.
    Type: Application
    Filed: August 24, 2022
    Publication date: February 1, 2024
    Inventors: Naresh Kumar Gurulingan, Elahe Arani, Bahram Zonooz
  • Publication number: 20240028885
    Abstract: A computer-implemented method of self-supervised learning for deep neural networks including the steps of: providing input images (x); extracting implicit shape information from the input images; and performing self-supervised learning on at least one deep neural network (f) based on the provided input images (x) and the extracted implicit shape information, enabling said at least one deep neural network (f) to classify and/or detect objects within other input images.
    Type: Application
    Filed: August 24, 2022
    Publication date: January 25, 2024
    Inventors: Shruthi Gowda, Bahram Zonooz, Elahe Arani
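    Code sketch: the patent does not commit to a specific extractor, so as one plausible proxy for implicit shape information this computes a per-channel Sobel edge magnitude.
      import torch
      import torch.nn.functional as F

      def sobel_edges(images):
          """Depthwise Sobel filtering as a stand-in for shape extraction."""
          kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
          ky = kx.t().contiguous()
          c = images.shape[1]
          kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
          ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
          gx = F.conv2d(images, kx, padding=1, groups=c)
          gy = F.conv2d(images, ky, padding=1, groups=c)
          return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

      x = torch.rand(2, 3, 32, 32)   # input images (x)
      shape_info = sobel_edges(x)    # fed to self-supervised training alongside x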
  • Patent number: 11847802
    Abstract: Systems arranged to implement methods for positioning a semantic landmark in an image from the real world during continuous motion of the monocular camera providing said image, using image information from the camera in combination with GPS information. The camera parameters are unknown a priori and are estimated in a self-calibration step; in a subsequent step, positioning of the landmarks is completed using one of camera ego-motion and depth estimation.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: December 19, 2023
    Assignee: NavInfo Europe B.V.
    Inventors: Hemang Chawla, Matti Jukola, Terence Brouns, Elahe Arani, Bahram Zonooz
  • Publication number: 20230401825
    Abstract: A computer-implemented method for processing images in deep neural networks by: breaking an input sample into a plurality of non-overlapping patches; converting said patches into a plurality of patch-tokens; processing said patch-tokens in at least one transformer block comprising a multi-head self-attention block; providing a multi-scale feature module block in the at least one transformer block; using said multi-scale feature module block for extracting features corresponding to a plurality of scales by applying a plurality of kernels having different window sizes; concatenating said features in the multi-scale feature module block; providing a plurality of hierarchically arranged convolution layers in the multi-scale feature module block; and processing said features in said hierarchically arranged convolution layers for generating at least three multiscale tokens containing multiscale information.
    Type: Application
    Filed: June 29, 2022
    Publication date: December 14, 2023
    Inventors: Ibrahim Batuhan Akkaya, Senthilkumar Sockalingam Kathiresan, Elahe Arani, Bahram Zonooz
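    Code sketch: a minimal multi-scale feature module following the steps in the abstract above; the kernel sizes and the depth of the convolution hierarchy are assumptions.
      import torch
      import torch.nn as nn

      class MultiScaleFeatureModule(nn.Module):
          """Extract features with kernels of different window sizes, concatenate
          them, then refine through hierarchically arranged convolutions."""
          def __init__(self, dim: int = 64):
              super().__init__()
              self.branches = nn.ModuleList(
                  nn.Conv2d(dim, dim, k, padding=k // 2) for k in (3, 5, 7))
              self.hierarchy = nn.Sequential(
                  nn.Conv2d(3 * dim, dim, 1), nn.ReLU(),
                  nn.Conv2d(dim, dim, 3, padding=1))

          def forward(self, x):
              multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
              return self.hierarchy(multiscale)

      tokens = torch.randn(2, 64, 14, 14)  # patch-tokens reshaped to a feature map
      out = MultiScaleFeatureModule()(tokens)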
  • Patent number: 11842532
    Abstract: A semantic segmentation architecture comprising an asymmetric encoder-decoder structure, wherein the architecture further comprises an adapter for linking different stages of the encoder and the decoder. The adapter amalgamates information from both the encoder and the decoder, preserving and refining information between multiple levels of the encoder and decoder. In this way, the adapter aggregates features from different levels and intermediates between encoder and decoder.
    Type: Grant
    Filed: October 21, 2022
    Date of Patent: December 12, 2023
    Assignee: NavInfo Europe B.V.
    Inventors: Elahe Arani, Shabbir Marzban, Andrei Pata, Bahram Zonooz
  • Publication number: 20230385644
    Abstract: A computer-implemented method for general continual learning combines rehearsal-based methods with dynamic modularity and compositionality. Concretely, the method aims at achieving three objectives: a dynamic, sparse, and compositional response to inputs; competent application performance; and reduced catastrophic forgetting. The proposed method can work without knowledge of task identities at test time, does not rely on task boundaries, and has bounded memory even when training on longer sequences.
    Type: Application
    Filed: June 29, 2022
    Publication date: November 30, 2023
    Inventors: Arnav Varma, Elahe Arani, Bahram Zonooz
  • Publication number: 20230289977
    Abstract: A computer-implemented network for executing a self-supervised scene change detection method in which image pairs (T0, T1) from different time instances are subjected to random photometric transformations to obtain two pairs of augmented images (T0 → T0′, T0″; T1 → T1′, T1″), which are passed into an encoder (fθ) and a projection head (gϕ) to provide corresponding feature representations.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Inventors: Vijaya Raghavan Thiruvengadathan Ramkumar, Bahram Zonooz, Elahe Arani
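    Code sketch: the augmentation-and-encoding step described above; the tiny encoder, projection head, and brightness/contrast jitter are placeholders for whatever the patented system actually uses.
      import torch
      import torch.nn as nn

      encoder = nn.Sequential(                      # f_theta (assumed small CNN)
          nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten())
      projection = nn.Sequential(                   # g_phi
          nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 8))

      def photometric(img):
          """Random photometric transformation (here: brightness/contrast jitter)."""
          gain = 0.8 + 0.4 * torch.rand(1)
          bias = 0.1 * torch.randn(1)
          return (gain * img + bias).clamp(0.0, 1.0)

      t0, t1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)  # image pair
      views = [projection(encoder(photometric(t))) for t in (t0, t0, t1, t1)]
      # the four feature representations feed a contrastive/consistency objective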
  • Publication number: 20230281978
    Abstract: A computer-implemented method to distill an inductive bias in a deep neural network operating on image data, the deep neural network comprising a standard network that receives original images from the image data and an inductive-bias network that receives shape data of the images. A bias alignment is performed on the standard network and the inductive-bias network in feature space and decision space to enable the networks to learn both local texture information and global shape information, producing high-level, generic representations.
    Type: Application
    Filed: March 3, 2022
    Publication date: September 7, 2023
    Inventors: Shruthi Gowda, Bahram Zonooz, Elahe Arani
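    Code sketch: one way to express the bias alignment described above, with an MSE term in feature space and a softened KL term in decision space; both loss choices are assumptions.
      import torch
      import torch.nn.functional as F

      def bias_alignment_loss(std_feats, ib_feats, std_logits, ib_logits, T=2.0):
          """Align the standard and inductive-bias networks in feature space and
          in decision space (KL divergence between temperature-softened logits)."""
          feature_loss = F.mse_loss(std_feats, ib_feats)
          decision_loss = F.kl_div(F.log_softmax(std_logits / T, dim=1),
                                   F.softmax(ib_logits / T, dim=1),
                                   reduction="batchmean") * T * T
          return feature_loss + decision_loss

      std_f, ib_f = torch.randn(8, 64, requires_grad=True), torch.randn(8, 64)
      std_l, ib_l = torch.randn(8, 10, requires_grad=True), torch.randn(8, 10)
      loss = bias_alignment_loss(std_f, ib_f, std_l, ib_l)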
  • Publication number: 20230281985
    Abstract: A deep-learning framework for multi-task learning that finds a sharing scheme for representations in the decoder that best curbs task interference while benefiting from complementary information sharing. The deep-learning based computer-implemented method for multi-task learning includes the step of progressively fusing decoders by grouping tasks stage-by-stage based on a pairwise similarity matrix between learned representations of different task decoders.
    Type: Application
    Filed: March 3, 2022
    Publication date: September 7, 2023
    Inventors: Naresh Kumar Gurulingan, Elahe Arani, Bahram Zonooz