Patents by Inventor Dhruv Khattar

Dhruv Khattar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11829880
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a trained neural network with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from the neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force weights of a copy neural network away from weights of an original neural network without decreasing prediction accuracy, to ensure that the decision boundaries learned are different.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: November 28, 2023
    Assignee: Adobe Inc.
    Inventors: Mayank Singh, Nupur Kumari, Dhruv Khattar, Balaji Krishnamurthy, Abhishek Sinha
  • Publication number: 20230107574
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a trained neural network with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from the neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force weights of a copy neural network away from weights of an original neural network without decreasing prediction accuracy, to ensure that the decision boundaries learned are different.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 6, 2023
    Inventors: Mayank Singh, Nupur Kumari, Dhruv Khattar, Balaji Krishnamurthy, Abhishek Sinha
  • Patent number: 11481617
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a trained neural network with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from the neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force weights of a copy neural network away from weights of an original neural network without decreasing prediction accuracy, to ensure that the decision boundaries learned are different.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 25, 2022
    Assignee: Adobe Inc.
    Inventors: Mayank Singh, Nupur Kumari, Dhruv Khattar, Balaji Krishnamurthy, Abhishek Sinha
  • Publication number: 20200234110
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a trained neural network with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from the neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force weights of a copy neural network away from weights of an original neural network without decreasing prediction accuracy, to ensure that the decision boundaries learned are different.
    Type: Application
    Filed: January 22, 2019
    Publication date: July 23, 2020
    Inventors: Mayank Singh, Nupur Kumari, Dhruv Khattar, Balaji Krishnamurthy, Abhishek Sinha
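
The abstracts above describe two ideas: a per-neuron dropout probability distribution (rather than a single uniform rate) and a cyclic learning rate routine. The sketch below is a minimal, hypothetical illustration of both, not the patented method itself: the specific scheme of dropping high-activation neurons more often, the `base_rate` parameter, and the triangular cycle shape are all assumptions for illustration.

```python
import numpy as np

def dynamic_dropout_mask(activations, base_rate=0.5, rng=None):
    """Sample a keep/drop mask from a per-neuron dropout distribution.

    Hypothetical scheme: neurons with larger mean absolute activation are
    assigned a higher dropout probability (up to base_rate), nudging the
    remaining neurons to learn distinguishable features.
    activations: array of shape (batch, num_neurons).
    Returns (mask of shape (num_neurons,), per-neuron drop probabilities).
    """
    rng = rng or np.random.default_rng()
    importance = np.abs(activations).mean(axis=0)
    drop_probs = base_rate * importance / (importance.max() + 1e-8)
    mask = (rng.random(drop_probs.size) >= drop_probs).astype(float)
    return mask, drop_probs

def cyclic_lr(step, base_lr=1e-4, max_lr=1e-2, cycle_len=100):
    """Triangular cyclic learning rate: ramps base_lr -> max_lr -> base_lr
    over each cycle of cycle_len steps."""
    pos = (step % cycle_len) / cycle_len          # position in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)              # 0 -> 1 -> 0 over one cycle
    return base_lr + (max_lr - base_lr) * tri

# Usage: mask one hidden layer's activations, schedule the copy network's LR.
acts = np.random.default_rng(0).normal(size=(32, 10))
mask, probs = dynamic_dropout_mask(acts)
dropped_acts = acts * mask                         # zero out dropped neurons
lrs = [cyclic_lr(t) for t in range(200)]           # two full LR cycles
```

In training, the copy network would be updated with `cyclic_lr(step)` so its weights are periodically pushed away from the original network's, while `dynamic_dropout_mask` replaces a fixed dropout rate during the original network's training.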