Patents by Inventor Maksims Volkovs

Maksims Volkovs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995121
    Abstract: An image retrieval system receives an image for which to identify relevant images from an image repository. Relevant images may be of the same environment or object and share features and other characteristics. Images in the repository are represented in an image retrieval graph by a set of image nodes connected by edges to other related image nodes, with edge weights representing the similarity of the nodes to each other. Based on the received image, the image retrieval system identifies an image in the image retrieval graph and alternately explores and traverses (also termed “exploits”) the image nodes using the edge weights. In the exploration step, image nodes in an exploration set are evaluated to identify connected nodes that are added to a traversal set of image nodes. In the traversal step, the relevant nodes in the traversal set are added to the exploration set and to a query result set.
    Type: Grant
    Filed: June 29, 2023
    Date of Patent: May 28, 2024
    Assignee: The Toronto-Dominion Bank
    Inventors: Maksims Volkovs, Cheng Chang, Guangwei Yu, Chundi Liu
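As a rough illustration of the alternating explore/traverse loop described in the entry above, the following Python sketch walks a toy image graph. The graph structure, the edge-weight threshold, the stopping rule, and all names are assumptions made for illustration, not the patented implementation.

```python
# Hypothetical sketch of an alternating explore/traverse retrieval loop.
# graph: {node: {neighbor: edge_weight}}; start: node matched to the query image.
def retrieve(graph, start, edge_threshold=0.5, max_results=10):
    exploration = {start}      # nodes whose neighbours will be inspected next
    results = [start]          # query result set, in discovery order
    visited = {start}

    while exploration and len(results) < max_results:
        # Exploration step: inspect neighbours of the exploration set and keep
        # those whose edge weight clears the threshold.
        traversal = set()
        for node in exploration:
            for neighbor, weight in graph[node].items():
                if neighbor not in visited and weight >= edge_threshold:
                    traversal.add(neighbor)
                    visited.add(neighbor)
        # Traversal ("exploit") step: promote the relevant nodes to the result
        # set and use them as the next exploration frontier.
        results.extend(sorted(traversal))
        exploration = traversal
    return results[:max_results]

# Toy image-retrieval graph keyed by image id.
graph = {
    "a": {"b": 0.9, "c": 0.4},
    "b": {"a": 0.9, "d": 0.7},
    "c": {"a": 0.4},
    "d": {"b": 0.7},
}
print(retrieve(graph, "a"))    # ['a', 'b', 'd']
```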
  • Publication number: 20240152763
    Abstract: The proposed model is a Variational Autoencoder having a learnable prior that is parametrized with a Tensor Train (VAE-TTLP). The VAE-TTLP can be used to generate new objects, such as molecules, that have specific properties and, in the case of molecules, specific biological activity. Because the prior is parametrized with the Tensor Train, the VAE-TTLP can be trained on data in which one or more properties of an object are omitted and still generate objects with the desired properties.
    Type: Application
    Filed: December 13, 2023
    Publication date: May 9, 2024
    Inventors: Aleksandr Aliper, Aleksandrs Zavoronkovs, Alexander Zhebrak, Daniil Polykovskiy, Maksim Kuznetsov, Yan Ivanenkov, Mark Veselov, Vladimir Aladinskiy, Evgeny Putin, Yuriy Volkov, Arip Asadulaev
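For orientation only, here is a minimal PyTorch sketch of a variational autoencoder with a learnable prior. It substitutes a learnable Gaussian-mixture prior for the Tensor Train parametrization named in the abstract, and the layer sizes and names are assumptions, so it should be read as a conceptual stand-in rather than the VAE-TTLP itself.

```python
# Conceptual stand-in for a VAE with a learnable prior (NOT the tensor-train
# prior of the abstract; a Gaussian mixture is used here instead).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAELearnedPrior(nn.Module):
    def __init__(self, x_dim=128, z_dim=16, k=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
        # Learnable prior parameters: mixture weights, means, log-variances.
        self.w = nn.Parameter(torch.zeros(k))
        self.mu = nn.Parameter(0.1 * torch.randn(k, z_dim))
        self.logvar = nn.Parameter(torch.zeros(k, z_dim))

    def log_prior(self, z):                        # log p(z) under the learnable mixture
        d = z.unsqueeze(1)                         # (batch, 1, z_dim)
        comp = -0.5 * (((d - self.mu) ** 2) / self.logvar.exp()
                       + self.logvar + math.log(2 * math.pi))
        return torch.logsumexp(comp.sum(-1) + F.log_softmax(self.w, 0), dim=1)

    def forward(self, x):                          # returns the negative ELBO
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = F.mse_loss(self.dec(z), x, reduction="none").sum(-1)
        log_q = (-0.5 * (((z - mu) ** 2) / logvar.exp() + logvar
                         + math.log(2 * math.pi))).sum(-1)
        return (recon + log_q - self.log_prior(z)).mean()

loss = VAELearnedPrior()(torch.randn(4, 128))      # one training-loss evaluation
```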
  • Publication number: 20240127036
    Abstract: To improve processing of the multi-event time-series data, information about each event type is aggregated for a group of time bins, such that an event bin embedding represents the occurring events of that type in the time bin. The event bin embedding may be based on an aggregated event value summarizing the values of that event type in the bin and a count of those events. The event bin embeddings across event types and time bins may be combined with an embedding for static data about the data instance and a representation token for input to an encoder. The encoder may apply an event-focused sublayer and a time-focused sublayer that attend to respective dimensions of the encoder. The model may be initially trained with self-supervised learning with time and event masking and then fine-tuned for particular applications.
    Type: Application
    Filed: September 21, 2023
    Publication date: April 18, 2024
    Inventors: Saba Zuberi, Maksims Volkovs, Aslesha Pokhrel, Alexander Jacob Labach
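The event-binning step described in the entry above can be illustrated with a small NumPy sketch. The bin width, the linear projection standing in for the event bin embedding, and the variable names are assumptions made for the example.

```python
# Illustrative aggregation of multi-event time-series data into event bins:
# each (event type, time bin) pair is summarized by an aggregated value and a count.
import numpy as np

def bin_events(events, n_types, n_bins, bin_width):
    """events: iterable of (event_type, time, value) tuples."""
    summary = np.zeros((n_types, n_bins, 2))
    for etype, t, value in events:
        b = min(int(t // bin_width), n_bins - 1)
        summary[etype, b, 0] += value      # aggregated event value for the bin
        summary[etype, b, 1] += 1          # count of events of this type in the bin
    return summary

events = [(0, 0.5, 2.0), (0, 0.7, 1.0), (1, 3.2, 4.0)]
summary = bin_events(events, n_types=2, n_bins=4, bin_width=1.0)

# A linear projection stands in for the event bin embedding of each (type, bin).
W = np.random.default_rng(0).normal(size=(2, 8))
event_bin_embeddings = summary @ W          # shape (n_types, n_bins, 8)
print(event_bin_embeddings.shape)
```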
  • Publication number: 20240119346
    Abstract: There is provided a computer-implemented method, system, and device for automatically generating a machine learning model for forecasting a likelihood of compromise in one or more transaction devices and subsequently triggering an action on one or more related computing devices based on a potentially compromised transaction device.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 11, 2024
    Inventors: Cheng Chang, Himanshu Rai, Yifan Wang, Mohsen Raza, Gabriel Kabo Tsang, Maksims Volkovs
  • Publication number: 20240020534
    Abstract: A non-autoregressive transformer model is improved to maintain output quality while reducing a number of iterative applications of the model by training parameters of a student model based on a teacher model. The teacher model is applied several iterations to a masked output and a student model is applied one iteration, such that the respective output token predictions for the masked positions can be compared and a loss propagated to the student. The loss may be based on token distributions rather than the specific output tokens alone, and may additionally consider hidden state losses. The teacher model may also be updated for use in further training based on the updated model, for example, by updating its parameters as a moving average.
    Type: Application
    Filed: June 6, 2023
    Publication date: January 18, 2024
    Inventors: Juan Felipe Perez Vallejo, Maksims Volkovs, Sajad Norouzi, Rasa Hosseinzadeh
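The teacher/student comparison on masked positions described in the entry above reads naturally as code; the PyTorch sketch below is one hedged interpretation, with the mask token id, the number of teacher iterations, the toy models, and the choice of KL divergence all being assumptions rather than details from the application.

```python
# Teacher refines a masked output over several iterations; the student runs once;
# a distribution-level loss is taken on the masked positions.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, tokens, mask, n_teacher_iters=4):
    """tokens: (B, T) token ids; mask: (B, T) bool marking positions to predict."""
    with torch.no_grad():
        teacher_in = tokens.masked_fill(mask, 0)          # 0 = assumed [MASK] id
        for _ in range(n_teacher_iters):                  # iterative refinement
            teacher_logits = teacher(teacher_in)          # (B, T, V)
            teacher_in = torch.where(mask, teacher_logits.argmax(-1), tokens)
    student_logits = student(tokens.masked_fill(mask, 0)) # single student pass
    # KL between token distributions at the masked positions only.
    t = F.log_softmax(teacher_logits[mask], dim=-1)
    s = F.log_softmax(student_logits[mask], dim=-1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean")

# Toy "models": an embedding followed by a vocabulary projection.
V, D = 100, 32
def make_model():
    emb, out = torch.nn.Embedding(V, D), torch.nn.Linear(D, V)
    return lambda ids: out(emb(ids))

loss = distill_step(make_model(), make_model(),
                    tokens=torch.randint(0, V, (2, 10)),
                    mask=torch.rand(2, 10) < 0.3)
```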
  • Publication number: 20230401252
    Abstract: An image retrieval system receives an image for which to identify relevant images from an image repository. Relevant images may be of the same environment or object and share features and other characteristics. Images in the repository are represented in an image retrieval graph by a set of image nodes connected by edges to other related image nodes, with edge weights representing the similarity of the nodes to each other. Based on the received image, the image retrieval system identifies an image in the image retrieval graph and alternately explores and traverses (also termed “exploits”) the image nodes using the edge weights. In the exploration step, image nodes in an exploration set are evaluated to identify connected nodes that are added to a traversal set of image nodes. In the traversal step, the relevant nodes in the traversal set are added to the exploration set and to a query result set.
    Type: Application
    Filed: June 29, 2023
    Publication date: December 14, 2023
    Inventors: Maksims Volkovs, Cheng Chang, Guangwei Yu, Chundi Liu
  • Patent number: 11809486
    Abstract: A content retrieval system uses a graph neural network architecture to determine images relevant to an image designated in a query. The graph neural network learns a new descriptor space that can be used to map images in the repository to image descriptors and the query image to a query descriptor. The image descriptors characterize the images in the repository as vectors in the descriptor space, and the query descriptor characterizes the query image as a vector in the descriptor space. The content retrieval system obtains the query result by identifying a set of relevant images whose image descriptors have a similarity with the query descriptor above a similarity threshold.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: November 7, 2023
    Assignee: The Toronto-Dominion Bank
    Inventors: Chundi Liu, Guangwei Yu, Maksims Volkovs
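The final retrieval step described in the entry above, comparing descriptors against a similarity threshold, is sketched below with NumPy. Random vectors stand in for the graph-neural-network descriptors, and the threshold value is an arbitrary assumption.

```python
# Descriptor-space retrieval sketch: random vectors stand in for the learned
# image descriptors and the query descriptor.
import numpy as np

rng = np.random.default_rng(0)
image_descriptors = rng.normal(size=(1000, 128))   # one vector per repository image
query_descriptor = rng.normal(size=128)

# Cosine similarity between the query descriptor and every image descriptor.
img_norm = image_descriptors / np.linalg.norm(image_descriptors, axis=1, keepdims=True)
q_norm = query_descriptor / np.linalg.norm(query_descriptor)
similarity = img_norm @ q_norm

threshold = 0.2                                    # assumed similarity threshold
order = np.argsort(-similarity)
relevant = order[similarity[order] > threshold]    # ranked image ids above the threshold
print(relevant[:10])
```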
  • Publication number: 20230351753
    Abstract: A text-video recommendation model determines relevance of a text to a video in a text-video pair (e.g., as a relevance score) with a text embedding and a text-conditioned video embedding. The text-conditioned video embedding is a representation of the video used for evaluating the relevance of the video to the text, where the representation itself is a function of the text it is evaluated for. As such, the input text may be used to weigh or attend to different frames of the video in determining the text-conditioned video embedding. The representation of the video may thus differ for different input texts for comparison. The text-conditioned video embedding may be determined in various ways, such as from the set of frames most similar to the input text (the top-k frames) or from an attention function based on query, key, and value projections.
    Type: Application
    Filed: August 24, 2022
    Publication date: November 2, 2023
    Inventors: Satya Krishna Gorti, Junwei Ma, Guangwei Yu, Maksims Volkovs, Keyvan Golestan Irani, Noël Vouitsis
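The top-k variant of the text-conditioned video embedding mentioned in the entry above can be sketched in a few lines of NumPy. The embedding dimensions, the value of k, and mean pooling over the selected frames are assumptions made for illustration.

```python
# Sketch of a text-conditioned video embedding built from the k frames most
# similar to the text; the relevance score compares it back to the text.
import numpy as np

def relevance(text_emb, frame_embs, k=4):
    sims = frame_embs @ text_emb                  # similarity of each frame to the text
    topk = np.argsort(-sims)[:k]                  # indices of the top-k frames
    video_emb = frame_embs[topk].mean(axis=0)     # text-conditioned video embedding
    return float(text_emb @ video_emb)            # relevance score for the pair

rng = np.random.default_rng(1)
print(relevance(rng.normal(size=64), rng.normal(size=(32, 64))))
```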
  • Patent number: 11748400
    Abstract: An image retrieval system receives an image for which to identify relevant images from an image repository. Relevant images may be of the same environment or object and share features and other characteristics. Images in the repository are represented in an image retrieval graph by a set of image nodes connected by edges to other related image nodes, with edge weights representing the similarity of the nodes to each other. Based on the received image, the image retrieval system identifies an image in the image retrieval graph and alternately explores and traverses (also termed “exploits”) the image nodes using the edge weights. In the exploration step, image nodes in an exploration set are evaluated to identify connected nodes that are added to a traversal set of image nodes. In the traversal step, the relevant nodes in the traversal set are added to the exploration set and to a query result set.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: September 5, 2023
    Assignee: The Toronto-Dominion Bank
    Inventors: Maksims Volkovs, Cheng Chang, Guangwei Yu, Chundi Liu
  • Publication number: 20230267367
    Abstract: A recommendation system generates item recommendations for a user based on the distance between a user embedding and item embeddings. To train the item and user embeddings, the recommendation system uses user-item pairs as training data and focuses on difficult items based on the positive and negative items with respect to individual users in the training set. In training, the weight of an individual user-item pair in affecting the user and item embeddings may be determined based on the distance between that pair's user embedding and item embedding, as well as on the comparative distance for other items of the same type for that user and the distance of user-item pairs for other users, which may regulate the distances across types and across the training batch.
    Type: Application
    Filed: October 19, 2022
    Publication date: August 24, 2023
    Inventors: Maksims Volkovs, Zhaoyue Cheng, Juan Felipe Perez Vallejo, Jianing Sun, Zhaolin Gao
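A hedged NumPy sketch of distance-based pair weighting follows. The use of a softmax over distances, the temperature, and the per-type normalization are assumptions chosen to illustrate the idea of emphasizing difficult items, not the weighting defined in the application.

```python
# Pairs that are "harder" -- positives far from the user, negatives close to
# it -- receive larger training weights within the batch.
import numpy as np

def pair_weights(user_emb, item_embs, is_positive, temperature=1.0):
    dist = np.linalg.norm(item_embs - user_emb, axis=1)
    hardness = np.where(is_positive, dist, -dist) / temperature
    weights = np.exp(hardness - hardness.max())
    # Normalize within each pair type so items are compared against
    # other items of the same type for this user.
    for group in (is_positive, ~is_positive):
        weights[group] /= weights[group].sum()
    return weights

rng = np.random.default_rng(2)
w = pair_weights(rng.normal(size=8), rng.normal(size=(6, 8)),
                 is_positive=np.array([True, True, False, False, False, False]))
print(w.round(3))
```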
  • Publication number: 20230252301
    Abstract: An online system trains a transformer architecture with an initialization method that allows the transformer architecture to be trained without normalization layers or learning rate warmup, resulting in significant improvements in computational efficiency for transformer architectures. Specifically, an attention block included in an encoder or a decoder of the transformer architecture generates the set of attention representations by applying a key matrix to the input key, a query matrix to the input query, and a value matrix to the input value to generate an output, and then applying an output matrix to the output to generate the set of attention representations. The initialization method may be performed by scaling the parameters of the value matrix and the output matrix with a factor that is inverse to the number of encoders or the number of decoders.
    Type: Application
    Filed: April 19, 2023
    Publication date: August 10, 2023
    Inventors: Maksims Volkovs, Xiao Shi Huang, Juan Felipe Perez Vallejo
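The scaling of the value and output matrices described in the entry above can be shown with PyTorch's stock multi-head attention module, as in the sketch below. The exact scaling factor, here a power of the layer count, is an assumption; the abstract only says the factor is inverse to the number of encoders or decoders.

```python
# Scale the value and output projections of an attention block by a factor
# that shrinks as the number of layers grows (factor chosen for illustration).
import torch
import torch.nn as nn

def scale_attention_init(attn: nn.MultiheadAttention, num_layers: int, embed_dim: int):
    scale = num_layers ** -0.25          # assumed layer-dependent factor
    with torch.no_grad():
        # in_proj_weight stacks the query, key, and value matrices row-wise,
        # so the last embed_dim rows are the value matrix.
        attn.in_proj_weight[2 * embed_dim:].mul_(scale)
        attn.out_proj.weight.mul_(scale)

attn = nn.MultiheadAttention(embed_dim=64, num_heads=8)
scale_attention_init(attn, num_layers=12, embed_dim=64)
```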
  • Publication number: 20230244962
    Abstract: A model evaluation system evaluates the effect of a feature value at a particular time in a time-series data record on predictions made by a time-series model. The time-series model may make predictions with black-box parameters that can impede explainability of the relationship between predictions for a data record and the values of the data record. To determine the relative importance of a feature occurring at a time and evaluated at an evaluation time, the model predictions are determined on the unmasked data record at the evaluation time and on the data record with feature values masked within a window between the time and the evaluation time, permitting comparison of the evaluation with the features and without the features. In addition, the contribution at the initial time in the window may be determined by comparing the score with another score determined by masking the values except for the initial time.
    Type: Application
    Filed: September 30, 2022
    Publication date: August 3, 2023
    Inventors: Maksims Volkovs, Kin Kwan Leung, Saba Zuberi, Jonathan Anders James Smith, Clayton James Rooke
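The windowed masking comparison described in the entry above is easy to express directly; the sketch below uses a toy scoring function and a zero mask value, both assumptions, to show how the contribution of a feature over a window can be measured.

```python
# Importance of a feature occurring at time t, evaluated at time t_eval, as the
# change in the model score when that feature is masked between t and t_eval.
import numpy as np

def importance(model, record, feature, t, t_eval, mask_value=0.0):
    """record: (T, F) time-series data record for one data instance."""
    masked = record.copy()
    masked[t:t_eval + 1, feature] = mask_value          # mask the window
    return model(record[:t_eval + 1]) - model(masked[:t_eval + 1])

model = lambda x: float(x.sum())                        # toy stand-in for the time-series model
record = np.arange(12.0).reshape(4, 3)                  # 4 time steps, 3 features
print(importance(model, record, feature=1, t=1, t_eval=3))
```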
  • Patent number: 11663488
    Abstract: An online system trains a transformer architecture with an initialization method that allows the transformer architecture to be trained without normalization layers or learning rate warmup, resulting in significant improvements in computational efficiency for transformer architectures. Specifically, an attention block included in an encoder or a decoder of the transformer architecture generates the set of attention representations by applying a key matrix to the input key, a query matrix to the input query, and a value matrix to the input value to generate an output, and then applying an output matrix to the output to generate the set of attention representations. The initialization method may be performed by scaling the parameters of the value matrix and the output matrix with a factor that is inverse to the number of encoders or the number of decoders.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: May 30, 2023
    Assignee: THE TORONTO-DOMINION BANK
    Inventors: Maksims Volkovs, Xiao Shi Huang, Juan Felipe Perez Vallejo
  • Publication number: 20230153461
    Abstract: A model training system protects data leakage of private data in a federated learning environment by training a private model in conjunction with a proxy model. The proxy model is trained with protections for the private data and may be shared with other participants. Proxy models from other participants are used to train the private model, enabling the private model to benefit from parameters based on other models’ private data without privacy leakage. The proxy model may be trained with a differentially private algorithm that quantifies a privacy cost for the proxy model, enabling a participant to measure the potential exposure of private data and drop out. Iterations may include training the proxy and private models and then mixing the proxy models with other participants. The mixing may include updating and applying a bias to account for the weights of other participants in the received proxy models.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 18, 2023
    Inventors: Shivam Kalra, Jesse Cole Cresswell, Junfeng Wen, Maksims Volkovs, Hamid R. Tizhoosh
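One round of the proxy-model mixing described in the entry above can be pictured with plain arrays, as in the sketch below. The averaging rule, the step size pulling the private model toward the mixed proxy, and the omission of the differential-privacy mechanism are all simplifying assumptions.

```python
# Only proxy parameters leave a participant; the private model stays local.
import numpy as np

def mix_proxies(own_proxy, received_proxies):
    """Average the local proxy weights with proxies received from other participants."""
    return np.stack([own_proxy, *received_proxies]).mean(axis=0)

rng = np.random.default_rng(3)
own_proxy = rng.normal(size=10)                     # this participant's proxy weights
others = [rng.normal(size=10) for _ in range(3)]    # proxies shared by other participants
mixed = mix_proxies(own_proxy, others)

# The private model is nudged toward the mixed proxy (assumed update rule),
# so it benefits from other participants' data without ever seeing it.
private = rng.normal(size=10)
private += 0.1 * (mixed - private)
print(private.round(3))
```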
  • Publication number: 20230131935
    Abstract: An object detection model and a relationship prediction model are jointly trained with parameters that may be updated through a joint backbone. The object detection model predicts object locations based on keypoint detection, such as a heatmap local peak, enabling disambiguation of objects. The relationship prediction model may predict a relationship between detected objects and be trained with a joint loss with the object detection model. The loss may include terms for object connectedness and model confidence, enabling training to focus first on highly connected objects and later on lower-confidence items.
    Type: Application
    Filed: October 19, 2022
    Publication date: April 27, 2023
    Inventors: Maksims Volkovs, Cheng Chang, Guangwei Yu, Himanshu Rai, Yichao Lu
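The keypoint step in the entry above, taking object centres as local peaks of a heatmap, is sketched below in NumPy. The 3x3 neighbourhood, the peak threshold, and the random heatmap are assumptions used only to make the example self-contained.

```python
# Object centres as local peaks of a predicted keypoint heatmap.
import numpy as np

def local_peaks(heatmap, threshold=0.8):
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # A cell is a peak if it is the maximum of its own 3x3 neighbourhood.
    neighborhoods = np.stack([padded[i:i + heatmap.shape[0], j:j + heatmap.shape[1]]
                              for i in range(3) for j in range(3)])
    is_peak = (heatmap == neighborhoods.max(axis=0)) & (heatmap > threshold)
    return np.argwhere(is_peak)                 # (row, col) locations of detected objects

rng = np.random.default_rng(4)
print(local_peaks(rng.random((16, 16))))
```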
  • Publication number: 20230119108
    Abstract: An autoencoder model includes an encoder portion and a decoder portion. The encoder encodes an input token sequence to an input sequence representation that is decoded by the decoder to generate an output token sequence. The autoencoder model may decode multiple output tokens in parallel, such that the decoder may be applied iteratively. The decoder may receive an output estimate from a prior iteration to predict output tokens. To improve positional representation and reduce positional errors and repetitive tokens, the autoencoder may include a trained layer for combining token embeddings with positional encodings. In addition, the model may be trained with a corrective loss based on output predictions when the model receives a masked input as the output estimate.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 20, 2023
    Inventors: Maksims Volkovs, Juan Felipe Perez Vallejo, Xiao Shi Huang
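The trained layer that combines token embeddings with positional encodings, mentioned in the entry above, could take many forms; the PyTorch sketch below shows one plausible version, a linear layer over the concatenated embeddings, which is an assumption rather than the layer described in the application.

```python
# One possible learned combination of token embeddings and positional
# encodings: concatenate the two and mix them with a trained linear layer.
import torch
import torch.nn as nn

class TokenPositionMixer(nn.Module):
    def __init__(self, vocab_size, max_len, dim):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)
        self.mix = nn.Linear(2 * dim, dim)     # trained combining layer

    def forward(self, ids):                    # ids: (batch, seq_len)
        tok = self.tok(ids)
        pos = self.pos(torch.arange(ids.size(1), device=ids.device)).expand_as(tok)
        return self.mix(torch.cat([tok, pos], dim=-1))

mixed = TokenPositionMixer(vocab_size=1000, max_len=128, dim=64)(torch.randint(0, 1000, (2, 16)))
print(mixed.shape)                             # torch.Size([2, 16, 64])
```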
  • Publication number: 20230103753
    Abstract: The disclosed embodiments include computer-implemented processes that generate adaptive textual explanations of output using trained artificial intelligence processes. For example, an apparatus may generate an input dataset based on elements of first interaction data associated with a first temporal interval, and based on an application of a trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of an event during a second temporal interval. Further, and based on an application of a trained explainability process to the input dataset, the apparatus may generate an element of textual content that characterizes an outcome associated with the predicted likelihood of the occurrence of the event, where the element of textual content is associated with a feature value of the input dataset. The apparatus may also transmit a portion of the output data and the element of textual content to a computing system.
    Type: Application
    Filed: November 23, 2021
    Publication date: April 6, 2023
    Inventors: Yaqiao Luo, Jesse Cole Cresswell, Kin Kwan Leung, Kai Wang, Aiyeh Ashari Ghomi, Caitlin Messick, Lu Shu, Barum Rho, Maksims Volkovs, Paige Elyse Dickie
  • Publication number: 20220414145
    Abstract: A content retrieval system uses a graph neural network architecture to determine images relevant to an image designated in a query. The graph neural network learns a new descriptor space that can be used to map images in the repository to image descriptors and the query image to a query descriptor. The image descriptors characterize the images in the repository as vectors in the descriptor space, and the query descriptor characterizes the query image as a vector in the descriptor space. The content retrieval system obtains the query result by identifying a set of relevant images associated with image descriptors having above a similarity threshold with the query descriptor.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Inventors: Chundi Liu, Guangwei Yu, Maksims Volkovs
  • Publication number: 20220343422
    Abstract: In some examples, computer-implemented systems and processes facilitate a prediction of occurrences of future events using trained artificial intelligence processes and normalized feature data. For instance, an apparatus may generate an input dataset based on elements of interaction data that characterize an occurrence of a first event during a first temporal interval, and that include at least one element of normalized data. Based on an application of a trained artificial intelligence process to the input dataset, the apparatus may generate output data representative of a predicted likelihood of an occurrence of a second event during a second temporal interval. The apparatus may also transmit at least a portion of the output data to a computing system, which may perform operations consistent with the portion of the output data.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 27, 2022
    Inventors: Saba Zuberi, Shrinu Kushagra, Callum Iain Mair, Steven Robert Rombough, Farnush Farhadi Hassan Kiadeh, Maksims Volkovs, Tomi Johan Poutanen
  • Publication number: 20220335718
    Abstract: A video localization system localizes actions in videos based on a classification model and an actionness model. The classification model is trained to make predictions of which segments of a video depict an action and to classify the actions in the segments. The actionness model predicts whether any action is occurring in each segment, rather than predicting a particular type of action. This reduces the likelihood that the video localization system over-relies on contextual information in localizing actions in video. Furthermore, the classification model and the actionness model are trained based on weakly-labeled data, thereby reducing the cost and time required to generate training data for the video localization system.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 20, 2022
    Inventors: Junwei Ma, Satya Krishna Gorti, Maksims Volkovs, Guangwei Yu
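As a closing illustration, the sketch below combines per-segment class scores with actionness scores in the way the entry above suggests: segments whose actionness is low are down-weighted before localization. The random scores, the multiplicative gating, and the detection threshold are assumptions.

```python
# Gate per-segment class scores with actionness before localizing actions.
import numpy as np

rng = np.random.default_rng(5)
n_segments, n_classes = 20, 5
class_scores = rng.random((n_segments, n_classes))   # from the classification model
actionness = rng.random(n_segments)                   # from the actionness model

combined = class_scores * actionness[:, None]         # suppress context-only segments
detections = np.argwhere(combined > 0.7)              # (segment, class) pairs localized
print(detections)
```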