Patents by Inventor Jesse Cole

Jesse Cole has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12373598
    Abstract: A model evaluation system evaluates the extent to which privacy-aware training processes affect the direction of training gradients for groups. A modified differential-privacy (“DP”) training process provides per-sample gradient adjustments with parameters that may be adaptively modified for different data batches. Per-sample gradients are modified with respect to a reference bound and a clipping bound. A scaling factor may be determined for each per-sample gradient based on the higher of the reference bound or a magnitude of the per-sample gradient. Per-sample gradients may then be adjusted based on a ratio of the clipping bound to the scaling factor. A relative privacy cost between groups may be determined as excess training risk based on a difference in group gradient direction relative to an unadjusted batch gradient and the adjusted batch gradient according to the privacy-aware training.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: July 29, 2025
    Assignee: The Toronto-Dominion Bank
    Inventors: Jesse Cole Cresswell, Atiyeh Ashari Ghomi, Yaqiao Luo, Maria Esipova
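The gradient adjustment this abstract describes can be sketched in a few lines. The scaling factor and rescaling step follow the abstract directly; the cosine-based direction comparison is an illustrative choice for measuring the group-direction difference, not necessarily the patented measure, and all function names are ours:

```python
import numpy as np

def adjust_per_sample_gradients(grads, reference_bound, clipping_bound):
    """Per the abstract: the scaling factor for each per-sample gradient is
    the larger of the reference bound and the gradient's magnitude, and the
    gradient is rescaled by the ratio clipping_bound / scale."""
    adjusted = []
    for g in grads:
        scale = max(reference_bound, np.linalg.norm(g))
        adjusted.append(g * (clipping_bound / scale))
    return np.stack(adjusted)

def direction_gap(group_grads, batch_grad, adjusted_batch_grad):
    """Excess-risk proxy: how a group's mean gradient direction relates to the
    unadjusted vs. adjusted batch gradient (cosine similarity is an
    illustrative agreement measure)."""
    g = np.mean(group_grads, axis=0)
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(g, batch_grad) - cos(g, adjusted_batch_grad)
```

Note that when the reference bound equals the clipping bound, the adjustment reduces to standard DP-SGD clipping, which never rescales gradients already inside the bound.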
  • Publication number: 20250173619
    Abstract: Multi-modal models learn a joint latent space for relating data points across different modalities. To more effectively learn multi-modal models with reduced training requirements and greater benefit from limited multi-modal training data, a multi-modal model may be trained with fixed or pre-trained unimodal encoders that generate data representations in respective latent spaces. The multi-modal model is trained to learn a shared latent space while fixing the unimodal encoders, enabling training without storing the unimodal encoders in memory. Limited multi-modal data may also be augmented by generating synthetic data between commonly-labeled pairs in the respective modality's latent spaces. The effect of data diversity can also be determined by generating a diverse data set with respect to the data points in latent space, enabling measurement of performance of the multi-modal model on limited training data.
    Type: Application
    Filed: November 22, 2024
    Publication date: May 29, 2025
    Inventors: Jesse Cole Cresswell, Noel Vouitsis, Zhaoyan Liu, Satya Krishna Gorti, Valentin Victor Villecroze, Guangwei Yu, Gabriel Loaiza Ganem, Maksims Volkovs
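The augmentation step in this abstract, generating synthetic data between commonly-labeled pairs in each modality's latent space, can be sketched as follows. Linear interpolation between the pair's embeddings is an assumption on our part (the filing only says synthetic data is generated "between" the pairs), and the function names are illustrative:

```python
import numpy as np

def interpolate_pair(z1, z2, num_synthetic=3):
    """Synthetic embeddings on the segment between two commonly-labeled
    samples in one modality's latent space (interior points only)."""
    alphas = np.linspace(0.0, 1.0, num_synthetic + 2)[1:-1]
    return np.array([(1 - a) * z1 + a * z2 for a in alphas])

def augment_multimodal(pair_a, pair_b, num_synthetic=3):
    """Matched synthetic points for a commonly-labeled pair, generated
    independently in each modality's (frozen-encoder) latent space."""
    return (interpolate_pair(*pair_a, num_synthetic),
            interpolate_pair(*pair_b, num_synthetic))
```

Because the unimodal encoders are fixed, these embeddings can be precomputed once, so the augmentation never needs the encoders in memory during multi-modal training.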
  • Publication number: 20250173618
    Abstract: Multi-modal models learn a joint latent space for relating data points across different modalities. To more effectively learn multi-modal models with reduced training requirements and greater benefit from limited multi-modal training data, a multi-modal model may be trained with fixed or pre-trained unimodal encoders that generate data representations in respective latent spaces. The multi-modal model is trained to learn a shared latent space while fixing the unimodal encoders, enabling training without storing the unimodal encoders in memory. Limited multi-modal data may also be augmented by generating synthetic data between commonly-labeled pairs in the respective modality's latent spaces. The effect of data diversity can also be determined by generating a diverse data set with respect to the data points in latent space, enabling measurement of performance of the multi-modal model on limited training data.
    Type: Application
    Filed: November 22, 2024
    Publication date: May 29, 2025
    Inventors: Jesse Cole Cresswell, Noel Vouitsis, Zhaoyan Liu, Satya Krishna Gorti, Valentin Victor Villecroze, Guangwei Yu, Gabriel Loaiza Ganem, Maksims Volkovs
  • Publication number: 20250165866
    Abstract: A computer model is monitored during operation to evaluate performance of the model with respect to different groups evaluated by the model. Performance for each group is evaluated to determine an inter-group performance metric describing how model predictions differ across groups. A threshold for excess inter-group performance differences can be calibrated using withheld training data or out-of-time data to provide a statistical guarantee for detecting meaningful variation in inter-group performance metric differences. When the inter-group performance metric exceeds the threshold, the computer model may be considered to deviate from expected behavior and the monitoring can act to correct its operation, for example, by modifying actions that may otherwise occur due to model predictions or by initiating model retraining.
    Type: Application
    Filed: November 15, 2024
    Publication date: May 22, 2025
    Inventors: Jesse Cole Cresswell, George Frazer Stein, Farnush Farhadi Hassan Kiadeh, Zhaoyan Liu, Ji Xin
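The monitoring scheme above can be sketched in two steps: compute an inter-group metric, then compare it against a threshold calibrated on withheld or out-of-time data. Per-group accuracy and a max-minus-min gap are illustrative choices of metric (the filing does not specify them), and the names are ours:

```python
import numpy as np

def inter_group_gap(y_true, y_pred, groups):
    """Inter-group performance metric: spread between the best- and
    worst-performing group, using per-group accuracy as the group metric."""
    accs = [float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)]
    return max(accs) - min(accs)

def calibrate_threshold(withheld_gaps, alpha=0.05):
    """Calibrate the excess-gap threshold as a high quantile of gaps observed
    on withheld/out-of-time data, so exceeding it flags deviation at roughly
    the alpha level."""
    return float(np.quantile(withheld_gaps, 1.0 - alpha))
```

In operation, a gap above `calibrate_threshold(...)` would trigger the corrective actions the abstract mentions, such as suppressing downstream actions or initiating retraining.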
  • Publication number: 20250124240
    Abstract: The disclosed embodiments include computer-implemented processes that generate adaptive textual explanations of output using trained artificial intelligence processes. For example, an apparatus may generate an input dataset based on elements of first interaction data associated with a first temporal interval, and based on an application of a trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of an event during a second temporal interval. Further, and based on an application of a trained explainability process to the input dataset, the apparatus may generate an element of textual content that characterizes an outcome associated with the predicted likelihood of the occurrence of the event, where the element of textual content is associated with a feature value of the input dataset. The apparatus may also transmit a portion of the output data and the element of textual content to a computing system.
    Type: Application
    Filed: December 20, 2024
    Publication date: April 17, 2025
    Inventors: Yaqiao LUO, Jesse Cole CRESSWELL, Kin Kwan LEUNG, Kai WANG, Atiyeh Ashari GHOMI, Caitlin MESSICK, Lu SHU, Barum RHO, Maksims VOLKOVS, Paige Elyse DICKIE
  • Publication number: 20250103961
    Abstract: Generative models are used to determine whether a data sample is in-distribution or out-of-distribution with respect to a training data set. To address potential errors in generative models that attribute high likelihoods to known out-of-distribution data samples, in addition to the likelihood for a data sample, the local intrinsic dimensionality is also evaluated for the data sample. A data sample is determined to belong to the distribution of the training data when the data sample has both sufficient likelihood and sufficient local intrinsic dimensionality in its region of the generative model. Different actions may then be determined for the data sample with respect to a data application model based on whether the data sample is in- or out-of-distribution.
    Type: Application
    Filed: September 23, 2024
    Publication date: March 27, 2025
    Inventors: Jesse Cole Cresswell, Brendan Leigh Ross, Gabriel Loaiza Ganem, Anthony Lawrence Caterini, Hamidreza Kamkari
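The two-signal decision rule above is simple to state in code. The Levina–Bickel maximum-likelihood estimator is one common way to estimate local intrinsic dimensionality from nearest-neighbor distances; the filing does not mandate this particular estimator, and the thresholds are assumed to be calibrated elsewhere:

```python
import numpy as np

def lid_mle(neighbor_dists):
    """Levina-Bickel MLE of local intrinsic dimensionality from a sample's
    nearest-neighbor distances (an illustrative estimator)."""
    d = np.sort(np.asarray(neighbor_dists, dtype=float))
    return -(len(d) - 1) / float(np.sum(np.log(d[:-1] / d[-1])))

def is_in_distribution(log_likelihood, lid, ll_threshold, lid_threshold):
    """Per the abstract: in-distribution only when the sample has both
    sufficient likelihood and sufficient local intrinsic dimensionality."""
    return log_likelihood >= ll_threshold and lid >= lid_threshold
```

The second signal is what catches the known failure mode: out-of-distribution samples that a generative model scores with high likelihood tend to sit in regions of low local intrinsic dimensionality, so they fail the joint test even though they pass the likelihood test alone.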
  • Patent number: 12217011
    Abstract: The disclosed embodiments include computer-implemented processes that generate adaptive textual explanations of output using trained artificial intelligence processes. For example, an apparatus may generate an input dataset based on elements of first interaction data associated with a first temporal interval, and based on an application of a trained artificial intelligence process to the input dataset, generate output data representative of a predicted likelihood of an occurrence of an event during a second temporal interval. Further, and based on an application of a trained explainability process to the input dataset, the apparatus may generate an element of textual content that characterizes an outcome associated with the predicted likelihood of the occurrence of the event, where the element of textual content is associated with a feature value of the input dataset. The apparatus may also transmit a portion of the output data and the element of textual content to a computing system.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: February 4, 2025
    Assignee: The Toronto-Dominion Bank
    Inventors: Yaqiao Luo, Jesse Cole Cresswell, Kin Kwan Leung, Kai Wang, Atiyeh Ashari Ghomi, Caitlin Messick, Lu Shu, Barum Rho, Maksims Volkovs, Paige Elyse Dickie
  • Publication number: 20250014954
    Abstract: Hybrid cores including adhesion promotion layers and related methods are disclosed. An example substrate core for an integrated circuit disclosed herein includes a frame including an interior edge, a glass panel including an exterior edge, and an adhesion promotion layer disposed between the interior edge and the exterior edge.
    Type: Application
    Filed: June 27, 2024
    Publication date: January 9, 2025
    Inventors: Soham Agarwal, Gang Duan, Benjamin Duong, Darko Grujicic, Kari Hernandez, Lei Jin, Jesse Cole Jones, Zheng Kang, Shayan Kaviani, Yi Li, Sandrine Lteif, Pratyush Mishra, Mahdi Mohammadighaleni, Pratyasha Mohapatra, Logan Myers, Suresh Tanaji Narute, Srinivas Venkata Ramanuja Pietambaram, Umesh Prasad, Rengarajan Shanmugam, Elham Tavakoli, Marcel Arlan Wall, Yekan Wang, Ehsan Zamani
  • Publication number: 20250006609
    Abstract: Systems, apparatus, articles of manufacture, and methods for package substrates with stacks of glass layers having different coefficients of thermal expansion are disclosed. An example package substrate includes: a first glass layer including a first through glass via extending therethrough, the first glass layer having a first coefficient of thermal expansion (CTE); and a second glass layer including a second through glass via extending therethrough, the second glass layer having a second CTE different from the first CTE, the first through glass via electrically coupled to the second through glass via.
    Type: Application
    Filed: September 12, 2024
    Publication date: January 2, 2025
    Applicant: Intel Corporation
    Inventors: Gang Duan, Ibrahim El Khatib, Jesse Cole Jones, Yi Li, Minglu Liu, Robin Shea McRee, Srinivas Venkata Ramanuja Pietambaram, Praveen Sreeramagiri
  • Publication number: 20240419978
    Abstract: A variety of generative models are trained on a reference data set. The generative models are evaluated by candidate metrics to determine the relative rankings of the models as judged by each candidate metric. The rankings generated by the metrics are compared with a simulated human evaluation of the generated results, and the candidate metrics that most align with the human evaluation may then be used to automatically evaluate subsequent generative models. The candidate metrics may include various types of encoding models trained for non-generative purposes, such that selecting a candidate metric may represent selecting an encoding model that performs well on the generative data.
    Type: Application
    Filed: June 10, 2024
    Publication date: December 19, 2024
    Inventors: George Frazer Stein, Jesse Cole Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Victor Villecroze, Zhaoyan Liu, Anthony Lawrence Caterini, Joseph Eric Timothy Taylor, Gabriel Loaiza Ganem
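The metric-selection loop above amounts to picking the candidate whose model ranking agrees best with the human ranking. Spearman rank correlation is an illustrative agreement measure, not necessarily the one used in the application, and the names are ours:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling; fine for strict rankings)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def select_metric(candidate_rankings, human_ranking):
    """Pick the candidate metric whose model ranking best agrees with the
    (simulated) human ranking of the same models."""
    return max(candidate_rankings,
               key=lambda m: spearman(candidate_rankings[m], human_ranking))
```

Once selected, the winning metric can score new generative models automatically, standing in for the human evaluation it was validated against.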
  • Publication number: 20240412078
    Abstract: The disclosed embodiments include computer-implemented systems and processes that dynamically monitor variations in process explainability within a distributed computing environment. For example, an apparatus may obtain first and second explainability data associated with corresponding first and second temporal intervals, and based on the first and second explainability data, determine a value of a metric that characterizes a variation in the explainability of a machine-learning process between the first and second temporal intervals. When the metric value is inconsistent with an exception criterion, the apparatus may obtain at least one additional value of the metric associated with a third temporal interval, and when the at least one additional metric value is inconsistent with the exception criterion, the apparatus may perform operations that modify at least one of (i) a value of a process parameter of the machine-learning process or (ii) a composition of an input dataset of the machine-learning process.
    Type: Application
    Filed: June 9, 2023
    Publication date: December 12, 2024
    Inventors: Mahdi GHELICHI, Talieh TABATABAEI, Julie R. MELANSON, Jesse Cole CRESSWELL
  • Publication number: 20240330772
    Abstract: A classification model is calibrated with a conformal threshold to determine a known error rate for classifications. Rather than using the model outputs directly, the classification model outputs are processed into a conformal score that is compared with a conformal threshold for determining whether a data sample is a member of a class. When the number of classes for the data sample that pass the conformal threshold for inclusion is one, an action associated with the class can confidently be applied with a known error rate. When the number of classes is zero or more than one, it may indicate sufficient uncertainty in the model prediction, and the data sample may be escalated to another decision mechanism, such as manual review or a more complex classification model.
    Type: Application
    Filed: March 27, 2024
    Publication date: October 3, 2024
    Inventors: Jesse Cole Cresswell, Noël Vouitsis, Yi Sui
  • Publication number: 20240303551
    Abstract: The disclosed embodiments include computer-implemented apparatuses and processes that facilitate a real-time prediction of future events using trained artificial-intelligence processes and inferred ground-truth labelling in multiple data populations. For example, an apparatus may receive application data characterizing an exchange of data from a device, and based on an application of an artificial-intelligence process to an input dataset that includes at least a portion of the application data, the apparatus may generate, in real time, output data indicative of a likelihood of an occurrence of at least one targeted event associated with the data exchange during a future temporal interval. The artificial-intelligence process may be trained using datasets associated with inferred ground-truth labels and multiple data populations, and the apparatus may transmit at least a portion of the output data to the device for presentation within a digital interface.
    Type: Application
    Filed: April 25, 2023
    Publication date: September 12, 2024
    Inventors: He LI, Jesse Cole CRESSWELL, Jean-Christophe BOUÉTTÉ, Mahdi GHELICHI, Zhiyi CHEN, George Frazer STEIN, Peter STARSZYK, Xiaochen ZHANG
  • Publication number: 20240281808
    Abstract: The disclosed embodiments include computer-implemented apparatuses and processes that facilitate a real-time pre-approval of data exchanges using trained artificial intelligence processes. For example, an apparatus may receive, from a device, application data characterizing an application for an exchange of data involving one or more applicants, and may generate an input dataset based on at least a portion of the application data and on interaction data characterizing the one or more applicants. Further, and based on an application of a trained artificial intelligence process to the input dataset, the apparatus may generate, in real-time, elements of output data indicative of a predicted pre-approval of the application for the data exchange involving the one or more applicants, and may transmit the elements of output data to the device for presentation within a digital interface.
    Type: Application
    Filed: April 24, 2023
    Publication date: August 22, 2024
    Inventors: Noël VOUITSIS, Jesse Cole CRESSWELL, Xiaochen ZHANG, Yi SUI, Peter STARSZYK, Devinder KUMAR, Darren Andrew PEPPER, Jean-Christophe BOUËTTÉ, Omar SABBAGH
  • Publication number: 20230385443
    Abstract: A model evaluation system evaluates the extent to which privacy-aware training processes affect the direction of training gradients for groups. A modified differential-privacy (“DP”) training process provides per-sample gradient adjustments with parameters that may be adaptively modified for different data batches. Per-sample gradients are modified with respect to a reference bound and a clipping bound. A scaling factor may be determined for each per-sample gradient based on the higher of the reference bound or a magnitude of the per-sample gradient. Per-sample gradients may then be adjusted based on a ratio of the clipping bound to the scaling factor. A relative privacy cost between groups may be determined as excess training risk based on a difference in group gradient direction relative to an unadjusted batch gradient and the adjusted batch gradient according to the privacy-aware training.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Inventors: Jesse Cole Cresswell, Atiyeh Ashari Ghomi, Yaqiao Luo, Maria Esipova
  • Publication number: 20230385444
    Abstract: A model evaluation system evaluates the extent to which privacy-aware training processes affect the direction of training gradients for groups. A modified differential-privacy (“DP”) training process provides per-sample gradient adjustments with parameters that may be adaptively modified for different data batches. Per-sample gradients are modified with respect to a reference bound and a clipping bound. A scaling factor may be determined for each per-sample gradient based on the higher of the reference bound or a magnitude of the per-sample gradient. Per-sample gradients may then be adjusted based on a ratio of the clipping bound to the scaling factor. A relative privacy cost between groups may be determined as excess training risk based on a difference in group gradient direction relative to an unadjusted batch gradient and the adjusted batch gradient according to the privacy-aware training.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Applicant: THE TORONTO-DOMINION BANK
    Inventors: Jesse Cole Cresswell, Atiyeh Ashari Ghomi, Yaqiao Luo, Maria Esipova
  • Publication number: 20230386190
    Abstract: A computer model is trained to account for data samples in a high-dimensional space as lying on different manifolds, rather than on a single manifold representing the data set, accounting for the data set as a whole as a union of manifolds. Different data samples that may be expected to belong to the same underlying manifold are determined by grouping the data. For generative models, a generative model may be trained that includes a sub-model for each group trained on that group's data samples, such that each sub-model can account for the manifold of that group. The overall generative model includes information describing the frequency to sample from each sub-model to correctly represent the data set as a whole in sampling. Multi-class classification models may also use the grouping to improve classification accuracy by weighing group data samples according to the estimated latent dimensionality of the group.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Inventors: Jesse Cole Cresswell, Brendan Leigh Ross, Anthony Lawrence Caterini, Gabriel Loaiza Ganem, Bradley Craig Anderson Brown
  • Publication number: 20230385694
    Abstract: Model training systems collaborate on model training without revealing respective private data sets. For each private data set, a set of client weights is learned over a set of computer models that are also learned during training. Inference for a particular private data set is determined as a mixture of the computer model parameters according to the client weights. During training, at each iteration, the client weights are updated in one step based on how well sampled models represent the private data set. In another step, gradients are determined for each sampled model and may be weighed according to the client weight for that model, relatively increasing the gradient contribution of a private data set for model parameters that correspond more highly to that private data set.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Inventors: Jesse Cole Cresswell, Brendan Leigh Ross, Ka Ho Yenson Lau, Junfeng Wen, Yi Sui
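The two alternating steps in this abstract, mixing model parameters by client weight and re-estimating the weights from model fit, can be sketched as below. The softmax-over-negative-loss update is an illustrative rule (the filing describes the update only at a high level), and the names are ours:

```python
import numpy as np

def mix_parameters(model_params, client_weights):
    """A client's personalized parameters: a mixture of the shared models'
    parameters, weighted by that client's learned weights."""
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, model_params))

def update_client_weights(model_losses, temperature=1.0):
    """One weight-update step: upweight models that fit the client's private
    data better (softmax over negative loss, an illustrative choice)."""
    z = np.exp(-np.asarray(model_losses, dtype=float) / temperature)
    return z / z.sum()
```

In the gradient step, each model's gradient on a client's data would then be scaled by that client's weight for the model, so a private data set mainly shapes the models that already represent it well.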
  • Publication number: 20230385693
    Abstract: Probability density modeling, such as for generative modeling, for data on a manifold of a high-dimensional space is performed with an implicitly-defined manifold, such that the set of points belonging to the manifold is the zero set of a manifold-defining function. An energy function is trained that, evaluated on the manifold, describes a probability density for the manifold. As such, the relevant portions of the energy function are "filtered through" the defined manifold for training and in application. The combined energy function and manifold-defining function provide an "energy-based implicit manifold" that can more effectively model probability densities of a manifold in the high-dimensional space. As the manifold-defining function and the energy function are defined across the high-dimensional space, they may more effectively learn geometries and avoid distortions due to change in dimension that occur for models that model the manifold in a lower-dimensional space.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 30, 2023
    Applicant: THE TORONTO-DOMINION BANK
    Inventors: Jesse Cole Cresswell, Brendan Leigh Ross, Anthony Lawrence Caterini, Gabriel Loaiza Ganem
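The construction in this abstract can be summarized in symbols (the notation below is ours, not the filing's): a manifold-defining function $F$ gives the manifold implicitly as its zero set, and a separately trained energy function $E$ induces a density supported on that set.

```latex
% Implicitly-defined manifold: the zero set of F : R^D -> R^{D-d}
\mathcal{M} = \{\, x \in \mathbb{R}^D : F(x) = 0 \,\}

% Energy-based density "filtered through" the manifold: the energy
% E : R^D -> R is normalized only over points of M
p(x) = \frac{e^{-E(x)}}{\int_{\mathcal{M}} e^{-E(y)}\, d\mathcal{H}(y)},
\qquad x \in \mathcal{M},
```

where $d\mathcal{H}$ denotes the volume measure on $\mathcal{M}$. Because both $F$ and $E$ are defined on all of $\mathbb{R}^D$, no lower-dimensional coordinate chart is needed, which is the source of the claimed robustness to change-of-dimension distortions.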
  • Publication number: 20230244917
    Abstract: To effectively learn a probability density from a data set in a high-dimensional space without manifold overfitting, a computer model first learns an autoencoder model that can transform data from a high-dimensional space to a low-dimensional space, and then learns a probability density model that may be effectively learned with maximum-likelihood. By separating these components, different types of models can be employed for each portion (e.g., manifold learning and density learning) and permits effective modeling of high-dimensional data sets that lie along a manifold representable with fewer dimensions, thus effectively learning both the density and the manifold and permitting effective data generation and density estimation.
    Type: Application
    Filed: December 16, 2022
    Publication date: August 3, 2023
    Inventors: Gabriel Loaiza Ganem, Brendan Leigh Ross, Jesse Cole Cresswell, Anthony Lawrence Caterini
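The two-step recipe in this final abstract, first learn a dimension-reducing map, then fit a maximum-likelihood density on the latent codes, can be sketched with simple stand-ins. PCA plays the role of the autoencoder and a single Gaussian plays the role of the density model; both are illustrative simplifications of the filing's learned components, and the names are ours:

```python
import numpy as np

def fit_two_step(X, latent_dim):
    """Step 1 (manifold learning): a linear autoencoder via PCA.
    Step 2 (density learning): maximum-likelihood Gaussian on latent codes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:latent_dim].T                     # encode: z = (x - mu) @ V
    Z = (X - mu) @ V
    z_mean = Z.mean(axis=0)
    z_cov = np.cov(Z, rowvar=False) if latent_dim > 1 else np.var(Z)
    return mu, V, z_mean, z_cov

def sample(mu, V, z_mean, z_cov, n, rng=None):
    """Generate data: sample in the low-dimensional space, then decode."""
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal(np.atleast_1d(z_mean), np.atleast_2d(z_cov), size=n)
    return mu + z @ V.T
```

The separation is the point: the density model only ever sees low-dimensional codes, so maximum-likelihood training is well-posed there even when the original data concentrates on a low-dimensional manifold, which is exactly the manifold-overfitting situation the abstract describes.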