Patents by Inventor Erik Kruus

Erik Kruus is named as an inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046606
    Abstract: Methods and systems for temporal action localization include processing a video stream to identify an action and a start time and a stop time for the action using a neural network model that separately processes information of appearance and motion modalities from the video stream using transformer branches that include a self-attention and a cross-attention between the appearance and motion modalities. An action is performed responsive to the identified action.
    Type: Application
    Filed: August 1, 2023
    Publication date: February 8, 2024
    Inventors: Kai Li, Renqiang Min, Deep Patel, Erik Kruus, Xin Hu
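The method above attends over appearance and motion features in separate transformer branches that also attend to each other. Below is a minimal, hypothetical PyTorch sketch of such a two-branch block with self-attention and cross-attention; the module names, feature dimensions, head counts, and prediction heads are illustrative assumptions, not taken from the filing.

```python
# Hypothetical sketch: two modality branches (appearance / motion) with self-attention
# within each branch and cross-attention between branches, plus simple heads for the
# action class and start/stop offsets. Shapes and names are illustrative only.
import torch
import torch.nn as nn

class TwoModalityBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_app = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_mot = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_app = nn.MultiheadAttention(dim, heads, batch_first=True)  # appearance attends to motion
        self.cross_mot = nn.MultiheadAttention(dim, heads, batch_first=True)  # motion attends to appearance

    def forward(self, app, mot):
        # Self-attention within each modality.
        app = app + self.self_app(app, app, app)[0]
        mot = mot + self.self_mot(mot, mot, mot)[0]
        # Cross-attention between the modalities.
        app = app + self.cross_app(app, mot, mot)[0]
        mot = mot + self.cross_mot(mot, app, app)[0]
        return app, mot

class LocalizationHead(nn.Module):
    """Predicts an action class plus (start, stop) offsets for every time step."""
    def __init__(self, dim=256, num_actions=20):
        super().__init__()
        self.block = TwoModalityBlock(dim)
        self.cls = nn.Linear(2 * dim, num_actions)
        self.boundaries = nn.Linear(2 * dim, 2)   # (start, stop) regression

    def forward(self, app_feats, mot_feats):
        app, mot = self.block(app_feats, mot_feats)
        fused = torch.cat([app, mot], dim=-1)
        return self.cls(fused), self.boundaries(fused)

# Example: a batch of 2 clips, 64 time steps, 256-dim appearance and motion features.
app = torch.randn(2, 64, 256)
mot = torch.randn(2, 64, 256)
logits, bounds = LocalizationHead()(app, mot)
print(logits.shape, bounds.shape)  # torch.Size([2, 64, 20]) torch.Size([2, 64, 2])
```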
  • Publication number: 20230315783
    Abstract: A classification apparatus according to the present disclosure includes an input unit configured to receive an operation performed by a user, an extraction unit configured to extract moving image data by using a predetermined rule, a display control unit configured to display an icon corresponding to the extracted moving image data on a screen of a display unit, a movement detection unit configured to detect a movement of the icon on the screen caused by the operation performed by the user, and a specifying unit configured to specify a classification of the moving image data corresponding to the icon based on a position of the icon on the screen.
    Type: Application
    Filed: March 1, 2023
    Publication date: October 5, 2023
    Applicant: NEC Corporation
    Inventors: Asako FUJII, Iain MELVIN, Yuki CHIBA, Masayuki SAKATA, Erik KRUUS, Chris WHITE
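The classification in this apparatus is driven by where the user drops an icon on the screen. A toy sketch of that position-to-label mapping follows; the region names and coordinates are made up for illustration and are not taken from the filing.

```python
# Hypothetical sketch: the class assigned to a clip is derived from the screen region
# where its icon was dropped. Regions and coordinates are illustrative assumptions.
REGIONS = {
    "keep":    (0, 0, 400, 600),      # (x_min, y_min, x_max, y_max)
    "discard": (400, 0, 800, 600),
}

def classify_by_position(x, y):
    """Return the label of the screen region containing the icon position."""
    for label, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return label
    return "unclassified"

print(classify_by_position(120, 300))  # keep
print(classify_by_position(650, 100))  # discard
```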
  • Publication number: 20230129568
    Abstract: Systems and methods for predicting T-Cell receptor (TCR)-peptide interaction, including training a deep learning model for the prediction of TCR-peptide interaction by determining a multiple sequence alignment (MSA) for TCR-peptide pair sequences from a dataset of TCR-peptide pair sequences using a sequence analyzer, building TCR structures and peptide structures using the MSA and corresponding structures from a Protein Data Bank (PDB) using a MODELLER, and generating an extended TCR-peptide training dataset based on docking energy scores determined by docking peptides to TCRs using physical modeling based on the TCR structures and peptide structures built using the MODELLER. TCR-peptide pairs are classified and labeled as positive or negative pairs using pseudo-labels based on the docking energy scores, and the deep learning model is iteratively retrained based on the extended TCR-peptide training dataset and the pseudo-labels until convergence.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 27, 2023
    Inventors: Renqiang Min, Hans Peter Graf, Erik Kruus, Yiren Jian
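The training loop above bootstraps pseudo-labels for unlabeled TCR-peptide pairs from docking energy scores and retrains until the labels stabilize. A minimal, hypothetical sketch of that loop follows; the features, the energy threshold, and the simple scikit-learn classifier are stand-ins for the sequence-derived inputs and deep model in the filing.

```python
# Hypothetical sketch: docking energy scores turn unlabeled TCR-peptide pairs into
# positive/negative pseudo-labels, and a classifier is retrained on the extended set
# until the pseudo-labels stop changing (the convergence criterion assumed here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 16))
y_labeled = rng.integers(0, 2, size=100)
X_unlabeled = rng.normal(size=(200, 16))
docking_energy = rng.normal(loc=-5.0, scale=2.0, size=200)  # lower energy = stronger binding

# Initial pseudo-labels from an (assumed) docking-energy threshold.
pseudo = (docking_energy < -5.0).astype(int)

model = LogisticRegression(max_iter=1000)
for rounds in range(1, 21):
    X = np.vstack([X_labeled, X_unlabeled])
    y = np.concatenate([y_labeled, pseudo])
    model.fit(X, y)
    # Refine pseudo-labels with the current model and stop once they are stable.
    new_pseudo = model.predict(X_unlabeled)
    if np.array_equal(new_pseudo, pseudo):
        break
    pseudo = new_pseudo

print("training rounds used:", rounds)
```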
  • Patent number: 11356334
    Abstract: A method is provided for sparse communication in a parallel machine learning environment. The method includes determining a fixed communication cost for a sparse graph to be computed. The sparse graph is (i) determined from a communication graph that includes all the machines in a target cluster of the environment, and (ii) represents a communication network for the target cluster having (a) an overall spectral gap greater than or equal to a minimum threshold, and (b) certain information dispersal properties such that an intermediate output from a given node disperses to all other nodes of the sparse graph in lowest number of time steps given other possible node connections. The method further includes computing the sparse graph, based on the communication graph and the fixed communication cost. The method also includes initiating a propagation of the intermediate output in the parallel machine learning environment using a topology of the sparse graph.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: June 7, 2022
    Inventors: Asim Kadav, Erik Kruus
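The key quantity above is the spectral gap of a sparse communication graph chosen under a fixed communication cost (peers per node). A small, hypothetical NumPy sketch of scoring candidate sparse graphs by spectral gap follows; the graph generator, node count, degree budget, and threshold are illustrative assumptions, not the construction used in the patent.

```python
# Hypothetical sketch: among random sparse graphs with a fixed per-node degree budget
# (the "fixed communication cost"), pick the one whose mixing matrix has the largest
# spectral gap and check it against a minimum threshold.
import numpy as np

def spectral_gap(adj):
    """Gap = 1 - |second largest eigenvalue| of the row-normalized mixing matrix."""
    W = adj / adj.sum(axis=1, keepdims=True)
    eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eigs[1]

def random_sparse_graph(n, k, rng):
    """Crude symmetric graph: each node links to about k random peers (plus a self-loop)."""
    adj = np.eye(n)
    for i in range(n):
        peers = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        adj[i, peers] = adj[peers, i] = 1.0
    return adj

rng = np.random.default_rng(0)
n, degree_budget, min_gap = 16, 3, 0.2          # fixed communication cost: ~3 peers per node
best = max((random_sparse_graph(n, degree_budget, rng) for _ in range(50)), key=spectral_gap)
gap = spectral_gap(best)
print("best spectral gap:", round(gap, 3), "meets threshold:", gap >= min_gap)
```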
  • Patent number: 11354935
    Abstract: A computer-implemented method for emulating an object recognizer includes receiving testing image data, and emulating, by employing a first object recognizer, a second object recognizer. Emulating the second object recognizer includes using the first object recognizer to perform object recognition on a testing object from the testing image data to generate data, the data including a feature representation for the testing object, and classifying the testing object based on the feature representation and a machine learning model configured to predict whether the testing object would be recognized by a second object recognizer. The method further includes triggering an action to be performed based on the classification.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: June 7, 2022
    Inventors: Biplob Debnath, Erik Kruus, Murugan Sankaradas, Srimat Chakradhar
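The idea above is to let a cheap first recognizer predict whether a costly second recognizer would accept an object, and to act on that prediction. Below is a toy sketch with synthetic features and labels; the feature extractor, the model choice, and the triggered actions are assumptions, not taken from the filing.

```python
# Hypothetical sketch: a lightweight first recognizer produces a feature
# representation, and a binary model trained on it predicts whether the heavier
# second recognizer would recognize the object, so the expensive call can be skipped.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def cheap_features(image_batch):
    """Stand-in for the first recognizer's feature representation."""
    return image_batch.reshape(len(image_batch), -1).mean(axis=1, keepdims=True)

# Offline: label features by whether the second recognizer recognized the object.
train_images = rng.normal(size=(500, 8, 8))
recognized_by_second = (cheap_features(train_images).ravel() > 0).astype(int)  # synthetic labels
emulator = RandomForestClassifier(n_estimators=50).fit(
    cheap_features(train_images), recognized_by_second)

# Online: classify a new object and trigger an action based on the classification.
test_image = rng.normal(size=(1, 8, 8))
if emulator.predict(cheap_features(test_image))[0]:
    print("would be recognized -> forward to the second recognizer")
else:
    print("would not be recognized -> drop frame")
```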
  • Patent number: 11086814
    Abstract: Systems and methods for building a distributed learning framework, including generating a sparse communication network graph with a high overall spectral gap. The generating includes computing model parameters in distributed shared memory of a cluster of a plurality of worker nodes; determining a spectral gap of an adjacency matrix for the cluster using a stochastic reduce convergence analysis, wherein a spectral reduce is performed using a sparse reduce graph with a highest possible spectral gap value for a given network bandwidth capability; and optimizing the communication graph by iteratively performing the computing and determining until a threshold condition is reached. Each of the plurality of worker nodes is controlled using tunable approximation based on available bandwidth in a network in accordance with the generated sparse communication network graph.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: August 10, 2021
    Inventors: Asim Kadav, Erik Kruus
  • Publication number: 20210089924
    Abstract: Aspects of the present disclosure describe improving neural network robustness through neighborhood preserving layers and learning weighted-average neighbor embeddings.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Erik KRUUS, Christopher MALON, Bingyuan LIU
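A weighted-average neighbor embedding, as named in this abstract, replaces an input's embedding with an average over nearby reference embeddings so that small perturbations are smoothed out. A minimal, hypothetical NumPy sketch follows; the reference set, neighborhood size, and temperature are illustrative assumptions.

```python
# Hypothetical sketch: an input's embedding is replaced by a distance-weighted average
# of its k nearest reference embeddings, so nearby (e.g. slightly perturbed) inputs
# map to nearly identical embeddings.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 32))          # embeddings of (clean) reference points

def neighbor_embedding(x, k=10, temperature=1.0):
    """Softmax-weighted average of the k nearest reference embeddings."""
    d = np.linalg.norm(reference - x, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] / temperature)
    w /= w.sum()
    return w @ reference[idx]

query = rng.normal(size=32)
perturbed = query + 0.05 * rng.normal(size=32)   # small perturbation of the same input
print(np.linalg.norm(neighbor_embedding(query) - neighbor_embedding(perturbed)))
print(np.linalg.norm(query - perturbed))
```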
  • Publication number: 20200293758
    Abstract: A computer-implemented method for emulating an object recognizer includes receiving testing image data, and emulating, by employing a first object recognizer, a second object recognizer. Emulating the second object recognizer includes using the first object recognizer to perform object recognition on a testing object from the testing image data to generate data, the data including a feature representation for the testing object, and classifying the testing object based on the feature representation and a machine learning model configured to predict whether the testing object would be recognized by a second object recognizer. The method further includes triggering an action to be performed based on the classification.
    Type: Application
    Filed: March 5, 2020
    Publication date: September 17, 2020
    Inventors: Biplob Debnath, Erik Kruus, Murugan Sankaradas, Srimat Chakradhar
  • Patent number: 10740212
    Abstract: Systems and methods for implementing content-level anomaly detection for devices having limited memory are provided. At least one log content model is generated based on training log content of training logs obtained from one or more sources associated with the computer system. The at least one log content model is transformed into at least one modified log content model to limit memory usage. Anomaly detection is performed for testing log content of testing logs obtained from one or more sources associated with the computer system based on the at least one modified log content model. In response to the anomaly detection identifying one or more anomalies associated with the testing log content, the one or more anomalies are output.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: August 11, 2020
    Assignee: NEC Corporation
    Inventors: Biplob Debnath, Hui Zhang, Erik Kruus
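The pipeline above learns a content model from training logs, shrinks it to respect a memory budget, and then flags test log values that fall outside the shrunken model. A toy sketch of those three steps follows; the log fields, the top-N pruning rule, and the data are assumptions, not taken from the filing.

```python
# Hypothetical sketch: (1) learn per-field value frequencies from training logs,
# (2) transform the model into a bounded-size version by keeping only the top-N
# values per field, (3) flag test log content whose values fall outside the model.
from collections import Counter

TRAIN_LOGS = [
    {"status": "200", "user": "alice"},
    {"status": "200", "user": "bob"},
    {"status": "404", "user": "alice"},
] * 100

# 1) Full content model: value frequencies per field.
model = {}
for log in TRAIN_LOGS:
    for field, value in log.items():
        model.setdefault(field, Counter())[value] += 1

# 2) Modified (memory-limited) model: keep only the top-N values per field.
TOP_N = 2
small_model = {f: {v for v, _ in c.most_common(TOP_N)} for f, c in model.items()}

# 3) Anomaly detection on testing log content.
def anomalies(log):
    return [(f, v) for f, v in log.items() if v not in small_model.get(f, set())]

print(anomalies({"status": "500", "user": "alice"}))  # [('status', '500')]
print(anomalies({"status": "200", "user": "bob"}))    # []
```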
  • Publication number: 20200250304
    Abstract: Systems and methods for detecting adversarial examples are provided. The method includes generating encoder direct output by projecting, via an encoder, input data items to a low-dimensional embedding vector of reduced dimensionality with respect to the one or more input data items to form a low-dimensional embedding space. The method includes regularizing the low-dimensional embedding space via a training procedure such that the input data items produce embedding space vectors whose global distribution is expected to follow a simple prior distribution. The method also includes identifying whether each of the input data items is an adversarial or unnatural input. The method further includes classifying, during the training procedure, those input data items which have not been identified as adversarial or unnatural into one of multiple classes.
    Type: Application
    Filed: January 31, 2020
    Publication date: August 6, 2020
    Inventors: Erik Kruus, Renqiang Min, Yao Li
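Once the embedding space is regularized toward a simple prior, inputs whose embeddings are unlikely under that prior can be flagged before classification. Below is a minimal, hypothetical sketch of that screening step with a standard normal prior; the encoder here is an untrained stand-in and the rejection threshold is assumed.

```python
# Hypothetical sketch: embeddings that are unlikely under the (assumed) standard
# normal prior of the regularized embedding space are flagged as adversarial or
# unnatural; the rest are passed on to the classifier.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 16)) / np.sqrt(784)     # stand-in for a trained encoder

def embed(x):
    return x @ W

def log_prior(z):
    """Log-density of a standard normal prior, up to an additive constant."""
    return -0.5 * np.sum(z ** 2)

THRESHOLD = -40.0                                  # assumed rejection threshold

def screen(x):
    z = embed(x)
    if log_prior(z) < THRESHOLD:
        return "flagged as adversarial/unnatural"
    return "passed to the classifier"

natural = rng.normal(size=784)
out_of_distribution = 25.0 * rng.normal(size=784)  # embeds far from the prior
print(screen(natural), "|", screen(out_of_distribution))
```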
  • Patent number: 10402234
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: September 3, 2019
    Assignee: NEC CORPORATION
    Inventors: Asim Kadav, Erik Kruus
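In the scheme above, a worker advances to the next iteration only after every peer it sent data to has acknowledged consuming it. A small, hypothetical sketch of that per-receiver notification/acknowledgement handshake follows, using in-memory queues as the transport; the worker count and payload are illustrative.

```python
# Hypothetical sketch: each sender notifies every receiver it sends data to, each
# receiver acknowledges after consuming the data, and a worker may only continue to
# the next iteration once all of its pending acknowledgements have arrived.
import queue

class Worker:
    def __init__(self, wid, peers):
        self.wid, self.peers = wid, peers
        self.inbox = queue.Queue()
        self.pending_acks = set()

    def send_updates(self, workers, payload):
        for p in self.peers:
            workers[p].inbox.put(("data", self.wid, payload))       # per-receiver notification
            self.pending_acks.add(p)

    def consume_and_ack(self, workers):
        while not self.inbox.empty():
            kind, sender, _payload = self.inbox.get()
            if kind == "data":
                workers[sender].inbox.put(("ack", self.wid, None))  # per-receiver acknowledgement
            else:
                self.pending_acks.discard(sender)

    def may_advance(self):
        """A worker continues to the next iteration only when all receivers have acked."""
        return not self.pending_acks

workers = {i: Worker(i, peers=[j for j in range(3) if j != i]) for i in range(3)}
for w in workers.values():
    w.send_updates(workers, payload={"grad": 0.1})
for w in workers.values():
    w.consume_and_ack(workers)      # consume data, emit acknowledgements
for w in workers.values():
    w.consume_and_ack(workers)      # consume the remaining acknowledgements
print([w.may_advance() for w in workers.values()])   # [True, True, True]
```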
  • Patent number: 10402235
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative distributed machine learning process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative distributed machine learning process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: September 3, 2019
    Assignee: NEC CORPORATION
    Inventors: Asim Kadav, Erik Kruus
  • Patent number: 10291485
    Abstract: A network device, system, and method are provided. The network device includes a processor. The processor is configured to store a local estimate and a dual variable maintaining an accumulated subgradient for the network device. The processor is further configured to collect values of the dual variable of neighboring network devices. The processor is also configured to form a convex combination with equal weight from the collected dual variable of neighboring network devices. The processor is additionally configured to add a most recent local subgradient for the network device, scaled by a scaling factor, to the convex combination to obtain an updated dual variable. The processor is further configured to update the local estimate by projecting the updated dual variable to a primal space.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: May 14, 2019
    Assignee: NEC Corporation
    Inventors: Asim Kadav, Renqiang Min, Erik Kruus, Cun Mu
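The per-node update described above combines three steps: an equal-weight average of the neighbors' dual variables, addition of the scaled local subgradient, and projection back to the primal space. A minimal, hypothetical NumPy sketch of one such step follows; the scaling factor and the feasible set used for the projection (a unit ball) are illustrative assumptions.

```python
# Hypothetical sketch of one node's update: equal-weight convex combination of the
# collected neighbor dual variables, plus the scaled local subgradient, followed by a
# projection of the dual variable to the primal space (here: the unit ball).
import numpy as np

def node_update(neighbor_duals, local_subgradient, scale=0.1):
    # Equal-weight convex combination of the neighbors' dual variables.
    z = np.mean(neighbor_duals, axis=0)
    # Add the most recent local subgradient, scaled by the scaling factor.
    z = z + scale * local_subgradient
    # Project the updated dual variable to obtain the new local (primal) estimate.
    x = z / max(1.0, np.linalg.norm(z))
    return z, x

rng = np.random.default_rng(0)
neighbor_duals = [rng.normal(size=5) for _ in range(3)]
z_new, x_new = node_update(neighbor_duals, local_subgradient=rng.normal(size=5))
print(z_new.round(3), x_new.round(3))
```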
  • Publication number: 20180349250
    Abstract: Systems and methods for implementing content-level anomaly detection for devices having limited memory are provided. At least one log content model is generated based on training log content of training logs obtained from one or more sources associated with the computer system. The at least one log content model is transformed into at least one modified log content model to limit memory usage. Anomaly detection is performed for testing log content of testing logs obtained from one or more sources associated with the computer system based on the at least one modified log content model. In response to the anomaly detection identifying one or more anomalies associated with the testing log content, the one or more anomalies are output.
    Type: Application
    Filed: May 3, 2018
    Publication date: December 6, 2018
    Inventors: Biplob Debnath, Hui Zhang, Erik Kruus
  • Publication number: 20180262402
    Abstract: A method is provided for sparse communication in a parallel machine learning environment. The method includes determining a fixed communication cost for a sparse graph to be computed. The sparse graph is (i) determined from a communication graph that includes all the machines in a target cluster of the environment, and (ii) represents a communication network for the target cluster having (a) an overall spectral gap greater than or equal to a minimum threshold, and (b) certain information dispersal properties such that an intermediate output from a given node disperses to all other nodes of the sparse graph in lowest number of time steps given other possible node connections. The method further includes computing the sparse graph, based on the communication graph and the fixed communication cost. The method also includes initiating a propagation of the intermediate output in the parallel machine learning environment using a topology of the sparse graph.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 13, 2018
    Inventors: Asim Kadav, Erik Kruus
  • Publication number: 20180260256
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative distributed machine learning process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative distributed machine learning process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 13, 2018
    Inventors: Asim Kadav, Erik Kruus
  • Patent number: 9984337
    Abstract: Systems and methods are disclosed for providing distributed learning over a plurality of parallel machine network nodes by allocating a per-sender receive queue at every machine network node and performing distributed in-memory training; and training each unit replica and maintaining multiple copies of the unit replica being trained, wherein all unit replicas train, receive unit updates and merge in parallel in a peer-to-peer fashion, wherein each receiving machine network node merges updates at later point in time without interruption and wherein the propagating and synchronizing unit replica updates are lockless and asynchronous.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: May 29, 2018
    Assignee: NEC Corporation
    Inventors: Asim Kadav, Erik Kruus, Hao Li
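The design above gives every worker one receive queue per sender, so peers can push model updates without contending with each other, and each worker merges whatever has arrived whenever it chooses. A toy, hypothetical sketch of that pattern follows; the model size, worker count, and the simple averaging merge rule are assumptions, not taken from the filing.

```python
# Hypothetical sketch: per-sender receive queues plus asynchronous, non-blocking
# merging. Each replica pushes its parameters to every peer's dedicated queue and
# later folds in whatever updates have arrived, without waiting on the senders.
import queue
import numpy as np

N_WORKERS, DIM = 4, 8

class Replica:
    def __init__(self, wid):
        self.wid = wid
        self.params = np.zeros(DIM)
        # One receive queue per sender, so writers never contend with each other.
        self.from_peer = {p: queue.Queue() for p in range(N_WORKERS) if p != wid}

    def push_to_peers(self, replicas):
        for p, r in replicas.items():
            if p != self.wid:
                r.from_peer[self.wid].put(self.params.copy())

    def merge_available(self):
        """Merge whatever updates have already arrived; never waits on senders."""
        for q in self.from_peer.values():
            while not q.empty():
                self.params = 0.5 * (self.params + q.get())

replicas = {i: Replica(i) for i in range(N_WORKERS)}
for r in replicas.values():
    r.params += np.random.default_rng(r.wid).normal(size=DIM)   # stand-in local training step
    r.push_to_peers(replicas)
for r in replicas.values():
    r.merge_available()
print(replicas[0].params.round(2))
```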
  • Publication number: 20170300830
    Abstract: Systems and methods for building a distributed learning framework, including generating a sparse communication network graph with a high overall spectral gap. The generating includes computing model parameters in distributed shared memory of a cluster of a plurality of worker nodes; determining a spectral gap of an adjacency matrix for the cluster using a stochastic reduce convergence analysis, wherein a spectral reduce is performed using a sparse reduce graph with a highest possible spectral gap value for a given network bandwidth capability; and optimizing the communication graph by iteratively performing the computing and determining until a threshold condition is reached. Each of the plurality of worker nodes is controlled using tunable approximation based on available bandwidth in a network in accordance with the generated sparse communication network graph.
    Type: Application
    Filed: April 17, 2017
    Publication date: October 19, 2017
    Inventors: Asim Kadav, Erik Kruus
  • Publication number: 20170300356
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Application
    Filed: April 6, 2017
    Publication date: October 19, 2017
    Inventors: Asim Kadav, Erik Kruus
  • Publication number: 20170111234
    Abstract: A network device, system, and method are provided. The network device includes a processor. The processor is configured to store a local estimate and a dual variable maintaining an accumulated subgradient for the network device. The processor is further configured to collect values of the dual variable of neighboring network devices. The processor is also configured to form a convex combination with equal weight from the collected dual variable of neighboring network devices. The processor is additionally configured to add a most recent local subgradient for the network device, scaled by a scaling factor, to the convex combination to obtain an updated dual variable. The processor is further configured to update the local estimate by projecting the updated dual variable to a primal space.
    Type: Application
    Filed: October 18, 2016
    Publication date: April 20, 2017
    Inventors: Asim Kadav, Renqiang Min, Erik Kruus, Cun Mu