Patents by Inventor Bahram Zonooz

Bahram Zonooz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12147502
    Abstract: A computer implemented network for executing a self-supervised scene change detection method, wherein at least one image pair with images captured at different instances of time is processed to detect structural changes caused by an appearance or disappearance of an object in the image pair, and wherein a self-supervised pretraining method is employed that utilizes an unlabelled image pair or pairs to learn representations for scene change detection, and wherein the aligned image pair is subjected to a differencing based self-supervised pre-training method to maximize a correlation between changed regions in the images which provide the structural changes that occur in the image pairs.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: November 19, 2024
    Assignee: Navinfo Europe B.V.
    Inventors: Elahe Arani, Vijaya Raghavan Thiruvengadathan Ramkumar, Bahram Zonooz
  • Publication number: 20240330673
    Abstract: A computer-implemented method for training a continual learning artificial neural network model, for sequential tasks, comprising an encoder and two classifiers. The method involves training the model on a plurality of sequential tasks, with visual data received from a vehicle-mounted camera becoming increasingly available over time, wherein during each task, the model is presented with task-specific samples of the data and corresponding labels drawn from a distribution.
    Type: Application
    Filed: March 15, 2023
    Publication date: October 3, 2024
    Inventors: Kishaan Jeeveswaran, Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz
  • Publication number: 20240331367
    Abstract: A computer implemented method for continual learning of a plurality of tasks using a machine learning model comprising a deep neural network and a preferably compact shared task-attention module. Each task is associated with a mutually different learnable task-specific token, including: rehearsing learned knowledge in the deep neural network to prevent forgetting of previous tasks; transforming latent representations of the shared task-attention module towards a task distribution, using the learnable task-specific tokens, so that memory and computational use is limited, for example to a substantially insignificant level; and using the task-specific tokens to reduce task interference and facilitate within-task and task-id prediction by the deep neural network.
    Type: Application
    Filed: March 31, 2023
    Publication date: October 3, 2024
    Inventors: Prashant Shivaram Bhat, Bharath Renjith, Elahe Arani, Bahram Zonooz
  • Publication number: 20240296321
    Abstract: A computer-implemented method for continual learning in deep neural networks that introduces robust inductive biases by intertwining implicit regularization, using a projection head through auxiliary contrastive representation learning, and explicit consistency regularization on the soft targets using exponential moving average. To further leverage the global relationship between representations learned, the method of the current invention comprises a regularization strategy of guiding the classifier towards the activation correlations in the unit hypersphere of the projection head. These implicit and explicit regularizations encourage the model to learn generalizable representations, thereby reducing task interference and catastrophic forgetting.
    Type: Application
    Filed: February 28, 2023
    Publication date: September 5, 2024
    Inventors: Prashant Shivaram Bhat, Bharath Renjith, Elahe Arani, Bahram Zonooz
  • Patent number: 12062188
    Abstract: A computer implemented network for executing a self-supervised scene change detection method in which image pairs (T0, T1) from different time instances are subjected to random photometric transformations to obtain two pairs of augmented images (T0 → T0′, T0″; T1 → T1′, T1″), which augmented images are passed into an encoder (fθ) and a projection head (gφ) to provide corresponding feature representations.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: August 13, 2024
    Assignee: NavInfo Europe B.V.
    Inventors: Vijaya Raghavan Thiruvengadathan Ramkumar, Bahram Zonooz, Elahe Arani
  • Patent number: 12061094
    Abstract: An AI based change detection system for executing a method to detect changes in geo-tagged videos to update HD maps, the method employing a neural network of modular components including a keyframe extraction module for processing two or more videos relating to separate traversals of an area of interest to which the HD map which is to be updated relates, a deep neural network module processing output of the keyframe extraction module, a change detection module processing output of the deep neural network module, and an auxiliary computations module which is designed to aid the change detection module.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: August 13, 2024
    Assignee: NavInfo Europe B.V.
    Inventors: Haris Iqbal, Shruthi Gowda, Ahmed Badar, Terence Brouns, Arnav Varma, Elahe Arani, Bahram Zonooz
  • Publication number: 20240265684
    Abstract: A computer-implemented method for the detection and recognition of objects in unlabeled image data using an automated labelling architecture. The method includes the steps of: proposing bounding boxes in every image of the unlabeled image data using a task-specific and/or related-task pretrained object detection model and a Bounding Box Sampler module; filtering said bounding boxes for positive object instances; assigning to said filtered bounding boxes a class label using a Few-Shot Classification module; and modifying filtered bounding boxes based on additional class-wise attention output from the Few-Shot Classification module.
    Type: Application
    Filed: February 2, 2023
    Publication date: August 8, 2024
    Inventors: Haris Iqbal, Elahe Arani, Bahram Zonooz
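The proposal-filtering step in this entry typically relies on box overlap. As an illustrative sketch only (not the claimed implementation), a standard intersection-over-union criterion for filtering proposed bounding boxes looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, a common
    criterion for filtering and deduplicating proposed bounding boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```
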
  • Publication number: 20240135169
    Abstract: A computer-implemented method that encourages sparse coding in deep neural networks and mimics the interplay of multiple memory systems for maintaining a balance between stability and plasticity. To this end, the method includes a multi-memory experience replay mechanism that employs sparse coding. Activation sparsity is enforced along with a complementary dropout mechanism, which encourages the model to activate similar neurons for semantically similar inputs while reducing the overlap with activation patterns of semantically dissimilar inputs. The semantic dropout provides an efficient mechanism for balancing reusability and interference of features depending on the similarity of classes across tasks. Furthermore, the method includes the step of maintaining an additional long-term semantic memory that aggregates the information encoded in the synaptic weights of the working memory.
    Type: Application
    Filed: December 29, 2022
    Publication date: April 25, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
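The semantic dropout this abstract describes can be sketched in miniature: units that have fired often for other (dissimilar) classes are dropped with higher probability. The linear drop schedule and parameter names below are assumptions for illustration, not the patented mechanism:

```python
import random

def semantic_dropout(activations, usage_counts, drop_scale=0.1, rng=random):
    """Drop each unit with probability growing with how often it has fired
    for semantically dissimilar inputs, encouraging distinct neurons per
    class while allowing reuse for similar classes."""
    out = []
    for a, c in zip(activations, usage_counts):
        p_drop = min(1.0, drop_scale * c)  # heavily used units are dropped more
        out.append(0.0 if rng.random() < p_drop else a)
    return out
```
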
  • Publication number: 20240135722
    Abstract: A computer-implemented method that provides a novel shape aware FSL framework, referred to as LSFSL. In addition to the inductive biases associated with deep learning models, the method of the current invention introduces meaningful shape bias. The method of the current invention comprises the step of capturing the human behavior of recognizing objects by utilizing shape information. The shape information is distilled to address the texture bias of CNN-based models. During training, the model has two branches: RIN-branch, network with colored images as input, preferably RGB images, and SIN-branch, network with shape semantic-based input. Each branch incorporates a CNN backbone followed by a fully connected layer performing classification. RIN-branch and SIN-branch receive the RGB input image and shape information enhanced RGB input image, respectively.
    Type: Application
    Filed: February 7, 2023
    Publication date: April 25, 2024
    Applicant: NavInfo Europe B.V.
    Inventors: Deepan Chakravarthi Padmanabhan, Shruthi Gowda, Elahe Arani, Bahram Zonooz
  • Publication number: 20240135170
    Abstract: A computer-implemented method for continual learning of multiple tasks sequentially using a deep neural network, wherein the method comprises providing a plurality of task-attention modules, wherein the method comprises: processing sensory inputs using the deep neural network to build a first representation space of fixed capacity for representations (common representation space); admitting only task-relevant information from said first representation space into a second representation space (global workspace) different from the first representation space using said plurality of task-attention modules, and wherein each task-attention module of the plurality of task-attention modules is specialized towards a different task.
    Type: Application
    Filed: January 3, 2023
    Publication date: April 25, 2024
    Inventors: Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz
  • Publication number: 20240127066
    Abstract: A computer-implemented method for improving generalization in training deep neural networks in online settings. The method includes a general learning paradigm for sequential data that is referred to as Learn, Unlearn, RElearn (LURE), a dynamic re-initialization method to address the above-mentioned larger problem of generalization of parameterized networks on sequential data by selectively retaining the task-specific connections through the important criteria and re-randomizing the less important parameters at each mega batch of training. The method of selectively forgetting retains previous information all the while improving generalization to unseen samples.
    Type: Application
    Filed: January 30, 2023
    Publication date: April 18, 2024
    Inventors: Vijaya Raghavan Thiruvengadathan Ramkumar, Elahe Arani, Bahram Zonooz
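The LURE re-initialization step described above can be sketched as follows. Using weight magnitude as the importance criterion is an assumption here (the abstract leaves the criterion open), and the names are illustrative:

```python
import random

def lure_reinit(weights, keep_fraction=0.5, rng=None, init_scale=0.1):
    """Retain the highest-magnitude weights (a simple importance criterion,
    assumed for illustration) and re-randomize the less important rest,
    as in the Learn, Unlearn, RElearn (LURE) paradigm."""
    rng = rng or random.Random(0)
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    out, kept = [], 0
    for w in weights:
        if abs(w) >= threshold and kept < k:
            out.append(w)          # retain task-specific connection
            kept += 1
        else:
            out.append(rng.uniform(-init_scale, init_scale))  # re-randomize
    return out
```
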
  • Publication number: 20240119304
    Abstract: A computer-implemented method including the step of formulating a continual learning algorithm with both element similarity as well as relational similarity between the stable and plastic model in a dual-memory setup with rehearsal. While the method includes the step of using only two memories to simplify the analysis of impact of relational similarity, the method can be trivially extended to more than two memories. Specifically, the plastic model learns on the data stream as well as on memory samples, while the stable model maintains an exponentially moving average of the plastic model, resulting in a more generalizable model. Simultaneously, to mitigate forgetting and to enable forward transfer, the stable model distills instance-wise and relational knowledge to the plastic model on memory samples. Instance-wise knowledge distillation maintains element similarities, while relational similarity loss maintains relational similarities.
    Type: Application
    Filed: March 8, 2023
    Publication date: April 11, 2024
    Inventors: Arnav Varma, Elahe Arani, Bahram Zonooz
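The stable/plastic interplay in this entry reduces to two small operations: an exponential moving average of the plastic model's weights, and an instance-wise distillation loss on memory samples. A minimal sketch (parameter names assumed for illustration):

```python
def ema_update(stable, plastic, decay=0.999):
    """Stable model = exponential moving average of the plastic model's
    weights, yielding a slower, more generalizable model."""
    return [decay * s + (1.0 - decay) * p for s, p in zip(stable, plastic)]

def instance_distill_loss(stable_out, plastic_out):
    """Instance-wise distillation: mean squared difference between stable
    and plastic model outputs on memory samples, preserving element
    similarities to mitigate forgetting."""
    n = len(stable_out)
    return sum((s - p) ** 2 for s, p in zip(stable_out, plastic_out)) / n
```
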
  • Publication number: 20240119280
    Abstract: A computer-implemented method that maintains a memory of errors along the training trajectory and adjusts the contribution of each sample towards learning based on how far it is from the mean statistics of the error memory. The method may include the step of maintaining an additional semantic memory, called a stable model, which gradually aggregates the knowledge encoded in the weights of the working model. The stable model is utilized to select the low-loss samples from the current task for populating the error memory. The different components of the method complement each other to effectively reduce the drift in representations at the task boundary and enable consolidation of information across the tasks.
    Type: Application
    Filed: January 20, 2023
    Publication date: April 11, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
  • Patent number: 11948272
    Abstract: A computer-implemented method to improve scale consistency and/or scale awareness in a model of self-supervised depth and ego-motion prediction neural networks processing a video stream of monocular images, wherein complementary GPS coordinates synchronized with the images are used to calculate a GPS to scale loss to enforce the scale-consistency and/or -awareness on the monocular self-supervised ego-motion and depth estimation. A relative weight assigned to the GPS to scale loss exponentially increases as training progresses. The depth and ego-motion prediction neural networks are trained using an appearance-based photometric loss between real and synthesized target images, as well as a smoothness loss on the depth predictions.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: April 2, 2024
    Assignee: NAVINFO EUROPE B.V.
    Inventors: Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
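Two pieces of this entry lend themselves to a sketch: a GPS-to-scale loss comparing the predicted translation magnitude against the GPS-measured displacement, and a loss weight that rises exponentially over training. The exact schedule shape and constants are assumptions for illustration:

```python
import math

def gps_to_scale_loss(pred_translation, gps_distance):
    """Penalize mismatch between the predicted ego-motion translation
    magnitude and the GPS-measured displacement between frames,
    enforcing scale consistency/awareness (illustrative form)."""
    norm = math.sqrt(sum(t * t for t in pred_translation))
    return abs(norm - gps_distance)

def gps_scale_weight(step, total_steps, w_max=1.0, growth=5.0):
    """Relative weight of the GPS-to-scale loss, increasing exponentially
    from zero to w_max as training progresses (schedule is an assumption)."""
    return w_max * (math.exp(growth * step / total_steps) - 1.0) / (math.exp(growth) - 1.0)
```
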
  • Publication number: 20240104373
    Abstract: A computer-implemented method for continual task learning in an artificial cognitive architecture that includes a first neural network module for encoding explicit knowledge representations, a second neural network module for encoding implicit knowledge representations, and a memory buffer. A visual data stream is provided to the architecture. Visual data samples are stored from said visual data stream in the memory buffer. Both visual data samples of the visual data stream and visual data samples from the memory buffer are processed using the first neural network module for learning explicit knowledge representations. Both samples of said visual data stream and visual data samples from the memory buffer are processed using the second neural network module for learning implicit knowledge representations.
    Type: Application
    Filed: December 29, 2022
    Publication date: March 28, 2024
    Inventors: Shruthi Gowda, Bahram Zonooz, Elahe Arani
  • Publication number: 20240054337
    Abstract: A computer-implemented method for continual task learning in a training framework. The method includes: providing a first deep neural network (θw) including a first function (Gw) and a second function (Fw) which are nested; providing a second deep neural network (θs) including a third function (Fs) as a counterpart to the second nested function (Fw); feeding input images to the first neural network (θw), such as through a filter and/or via patch embedding; generating representations of task samples using the first function (Gw); providing a memory (Dm) for storing at least some of the generated representations of task samples and/or having pre-stored task representations; providing the generated and memory-stored representations of task samples to the second function (Fw); and providing memory-stored representations of task samples to the third function (Fs).
    Type: Application
    Filed: September 2, 2022
    Publication date: February 15, 2024
    Inventors: Kishaan Jeeveswaran, Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz
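The bounded memory (Dm) in this entry can be sketched with reservoir sampling, which keeps a uniform subset of the representations seen so far. The sampling rule is an assumption; the abstract only specifies a memory for storing representations:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory (Dm) holding a uniform random sample of the
    representations seen so far, via reservoir sampling (an assumed
    policy for illustration)."""

    def __init__(self, capacity, rng=None):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = rng or random.Random(0)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)  # fill until capacity
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item  # replace uniformly at random
```
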
  • Publication number: 20240054603
    Abstract: A computer-implemented method for mimicking saccadic eye movements in artificial neural networks, wherein said method proposes a bioinspired transformer architecture called Foveated Dynamic Transformers that dynamically selects multiscale tokens with respect to the input image. The foveation and the fixation modules are introduced to the vision transformer block to exploit human visual system inspired mechanisms. To mimic foveation in human visual system, the input token is processed to generate multiscale queries, keys, and values using the foveation module. Next, the foveation module transforms the token in several scales with increasing windows size. To simulate eye movements, the method of the invention comprises the step of using dynamic networks. The dynamic fixation module generates a fixation map corresponding to each token in each transformer block. Tokens that are not at the fixation point are not processed.
    Type: Application
    Filed: September 1, 2022
    Publication date: February 15, 2024
    Inventors: Ibrahim Batuhan Akkaya, Elahe Arani, Bahram Zonooz
  • Publication number: 20240046102
    Abstract: A computer-implemented method for general continual learning (CL) in artificial neural networks that provides a biologically plausible framework for continual learning incorporating different mechanisms inspired by the brain. The underlying model comprises separate populations of exclusively excitatory and exclusively inhibitory neurons in each layer, which adheres to Dale's principle, and the excitatory neurons (mimicking pyramidal cells) are augmented with dendrite-like structures for context-dependent processing of information. The dendritic segments process an additional context signal encoding task information and subsequently modulate the feedforward activity of the excitatory neuron. Additionally, it provides an efficient mechanism for controlling the sparsity in activations using k-WTA (k-Winners-Take-All) activations and a heterogeneous dropout mechanism that encourages the model to use a different set of neurons for each task.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 8, 2024
    Inventors: Fahad Sarfraz, Elahe Arani, Bahram Zonooz
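The k-WTA activation named in this abstract has a compact definition: keep the k largest activations in a layer and zero the rest. A minimal sketch (tie-breaking and names are illustrative choices):

```python
def k_wta(activations, k):
    """k-Winners-Take-All: keep only the k largest activations, zeroing
    the rest, which enforces a fixed activation sparsity per layer."""
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    out, kept = [], 0
    for a in activations:
        if a >= threshold and kept < k:
            out.append(a)   # winner: passes through
            kept += 1
        else:
            out.append(0.0)  # loser: silenced
    return out
```
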
  • Publication number: 20240037455
    Abstract: A computer-implemented method for multi-task structural learning in artificial neural networks in which both the architecture and its parameters are learned simultaneously. The method utilizes two neural operators, namely, neuron creation and neuron removal, to aid in structural learning. The method creates excess neurons by starting from a disparate network for each task. Through the progress of training, corresponding task neurons in a layer pave the way for a specialized group neuron, leading to a structural change. In the task learning phase of training, different neurons specialize in different tasks. In the interleaved structural learning phase, locally similar task neurons, before being removed, transfer their knowledge to a newly created group neuron. The training is completed with a final fine-tuning phase where only the multi-task loss is used.
    Type: Application
    Filed: August 24, 2022
    Publication date: February 1, 2024
    Inventors: Naresh Kumar Gurulingan, Elahe Arani, Bahram Zonooz
  • Publication number: 20240028885
    Abstract: A computer-implemented method of self-supervised learning for deep neural networks including the steps of: providing input images (x); extracting implicit shape information from the input images; and performing self-supervised learning on at least one deep neural network (f) based on the provided input images (x) and the extracted implicit shape information, enabling said at least one deep neural network (f) to classify and/or detect objects within other input images.
    Type: Application
    Filed: August 24, 2022
    Publication date: January 25, 2024
    Inventors: Shruthi Gowda, Bahram Zonooz, Elahe Arani