Patents by Inventor Seyedalireza Yektamaram

Seyedalireza Yektamaram has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11727274
    Abstract: A computer trains a neural network. A neural network is executed with a weight vector to compute a gradient vector using a batch of observation vectors. Eigenvalues are computed from a Hessian approximation matrix. A regularization parameter value is computed using the gradient vector, the eigenvalues, and a step-size value. A search direction vector is computed using the eigenvalues, the gradient vector, the Hessian approximation matrix, and the regularization parameter value. A reduction ratio value is computed. Based on the reduction ratio value, an updated weight vector is computed from the weight vector, a predefined learning rate value, and either the search direction vector or the gradient vector, and an updated Hessian approximation matrix is computed from the Hessian approximation matrix, the predefined learning rate value, and either the search direction vector or the gradient vector. The step-size value is updated using the search direction vector. (A Python sketch of this loop follows this entry.)
    Type: Grant
    Filed: August 17, 2022
    Date of Patent: August 15, 2023
    Assignee: SAS Institute Inc.
    Inventors: Jarad Forristal, Joshua David Griffin, Seyedalireza Yektamaram, Wenwen Zhou
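    A minimal Python sketch of one such step, assuming a dense symmetric Hessian approximation B and caller-supplied loss and gradient functions; the regularizer sizing, acceptance thresholds, and step-size update below are illustrative stand-ins, and the Hessian-approximation update from the abstract is omitted:

      import numpy as np

      def regularized_step(w, loss_fn, grad_fn, B, step_size, lr=1.0):
          # Gradient of the batch loss at the current weights.
          g = grad_fn(w)
          # Eigen-decompose the symmetric Hessian approximation.
          eigvals, eigvecs = np.linalg.eigh(B)
          # Regularization parameter from the gradient, the spectrum, and the
          # step size (one plausible sizing; the patent's formula differs).
          sigma = max(0.0, np.linalg.norm(g) / step_size - eigvals[0]) + 1e-8
          # Search direction d solves (B + sigma * I) d = -g in the eigenbasis.
          d = -eigvecs @ ((eigvecs.T @ g) / (eigvals + sigma))
          # Reduction ratio: actual vs. model-predicted decrease in the loss.
          predicted = -(g @ d + 0.5 * d @ (B @ d))
          actual = loss_fn(w) - loss_fn(w + lr * d)
          rho = actual / predicted if predicted > 0 else 0.0
          # Trustworthy model: take the curvature step; otherwise fall back to
          # the gradient, mirroring the either/or choice in the abstract.
          w_new = w + (lr * d if rho > 0.25 else -lr * g)
          # Grow the step size on good agreement, shrink it otherwise.
          step_size *= 2.0 if rho > 0.75 else 0.5
          return w_new, step_size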
  • Patent number: 11635988
    Abstract: A computing device determines an optimal number of threads for a computer task. A computing task is executed in a computing environment under each task configuration in a plurality of task configurations to determine an execution runtime value for each configuration. An optimal number of threads value is determined for each set of task configurations having common values for a task parameter value, a dataset indicator, and a hardware indicator. The optimal number of threads value is an extremum value of an execution parameter value as a function of a number of threads value. A dataset parameter value is determined for a dataset. A hardware parameter value is determined as a characteristic of each distinct executing computing device in the computing environment. The optimal number of threads value for each set of task configurations is stored in a performance dataset in association with the common values. (A Python sketch of this benchmarking follows this entry.)
    Type: Grant
    Filed: August 19, 2022
    Date of Patent: April 25, 2023
    Assignee: SAS Institute Inc.
    Inventors: Yan Gao, Joshua David Griffin, Yu-Min Lin, Yan Xu, Seyedalireza Yektamaram, Amod Anil Ankulkar, Aishwarya Sharma, Girish Vinayak Kolapkar, Kiran Devidas Bhole, Kushawah Yogender Singh, Jorge Manuel Gomes da Silva
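    A compact Python sketch of the benchmarking loop described above, assuming a caller-supplied run_task(config, n_threads) and illustrative config keys; using minimum wall-clock runtime as the extremum is also an assumption:

      import time
      from collections import defaultdict

      def benchmark_thread_counts(run_task, thread_counts, configs):
          # Runtimes grouped by (task parameter, dataset, hardware).
          runtimes = defaultdict(dict)
          for cfg in configs:
              key = (cfg["task_param"], cfg["dataset_id"], cfg["hardware_id"])
              for n in thread_counts:
                  start = time.perf_counter()
                  run_task(cfg, n)
                  runtimes[key][n] = time.perf_counter() - start
          # Optimal number of threads per group: the extremum (here, the
          # minimum runtime), stored keyed by the group's common values.
          return {key: min(times, key=times.get)
                  for key, times in runtimes.items()}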
  • Patent number: 10949747
    Abstract: A computer trains a neural network model. (A) A mini-batch of observation vectors is randomly selected from a plurality of observation vectors. (B) A forward and backward propagation of a neural network is executed to compute a gradient vector and a weight vector. (C) A search direction vector is computed. (D) A step size value is computed. (E) An updated weight vector is computed. (F) Based on a predefined progress check frequency value, second observation vectors are randomly selected, a progress check objective function value is computed given the weight vector, the step size value, the search direction vector, and the second observation vectors, and, based on an accuracy test, the mini-batch size value is updated. (G) (A) to (F) are repeated until a convergence parameter value indicates training of the neural network is complete. The weight vector for a next iteration is the computed updated weight vector. (A Python sketch follows this entry.)
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 16, 2021
    Assignee: SAS Institute Inc.
    Inventors: Majid Jahani, Joshua David Griffin, Seyedalireza Yektamaram, Wenwen Zhou
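    A Python skeleton of the (A)-(G) loop above, assuming loss_grad(w, batch) returns a loss value and a gradient; the steepest-descent direction, fixed learning rate, and batch-doubling rule are simplifications of the patent's steps (C)-(F):

      import numpy as np

      def train(loss_grad, X, batch_size, lr=0.1, check_every=10,
                tol=1e-4, max_iters=1000, seed=0):
          rng = np.random.default_rng(seed)
          w = np.zeros(X.shape[1])
          best = np.inf
          for it in range(max_iters):
              batch = X[rng.choice(len(X), size=batch_size, replace=False)]  # (A)
              _, g = loss_grad(w, batch)                                     # (B)
              d = -g              # (C) placeholder for the patent's direction
              w = w + lr * d      # (D)-(E) fixed step size for simplicity
              if (it + 1) % check_every == 0:                                # (F)
                  check = X[rng.choice(len(X), size=batch_size, replace=False)]
                  check_loss, _ = loss_grad(w, check)
                  if check_loss > best:   # failed progress check: grow batch
                      batch_size = min(2 * batch_size, len(X))
                  else:
                      best = check_loss
              if np.linalg.norm(g) < tol:                                    # (G)
                  return w
          return w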
  • Patent number: 10776721
    Abstract: Machine-learning models (MLMs) can be configured more rapidly using some examples described herein. For example, an MLM can be configured by executing an iterative process, where each iteration includes a series of operations. The series of operations can include determining a current weight value for the current iteration, determining a current gradient direction for the current iteration based on the current weight value, and determining a current learning rate for the current iteration based on the current gradient direction. The operations can also include determining a current multistage momentum value for the current iteration. A next weight value for the next iteration can then be determined based on (i) the current weight value, (ii) the current gradient direction, (iii) the current learning rate, and (iv) the current multistage momentum value. In some examples, the next weight value may also be determined based on a predefined, preset learning rate. (A Python sketch of this iteration follows this entry.)
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: September 15, 2020
    Assignee: SAS Institute Inc.
    Inventors: Rui Shi, Seyedalireza Yektamaram, Yan Xu
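    A minimal Python sketch of this iteration, assuming a stage schedule of (momentum, iteration-count) pairs and a gradient-dependent learning-rate scaling; both are illustrative assumptions rather than the patent's exact rules:

      import numpy as np

      def multistage_momentum(grad_fn, w0, stages=((0.5, 100), (0.9, 100)),
                              base_lr=0.1):
          w = np.asarray(w0, dtype=float)
          velocity = np.zeros_like(w)
          for beta, n_iters in stages:    # each stage uses its own momentum
              for _ in range(n_iters):
                  g = grad_fn(w)                            # current gradient direction
                  lr = base_lr / (1.0 + np.linalg.norm(g))  # rate from the gradient
                  velocity = beta * velocity - lr * g       # momentum value
                  w = w + velocity                          # next weight value
          return w

    As a toy check, multistage_momentum(lambda w: 2 * w, [3.0]) drives the one-dimensional quadratic w**2 toward its minimum at zero.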
  • Patent number: 10769528
    Abstract: A computer trains a neural network model. (B) A neural network is executed to compute a post-iteration gradient vector and a current iteration weight vector. (C) A search direction vector is computed using a Hessian approximation matrix and the post-iteration gradient vector. (D) A step size value is initialized. (E) An objective function value is computed that indicates an error measure of the executed neural network. (F) When the computed objective function value is greater than an upper bound value, the step size value is updated using a predefined backtracking factor value. The upper bound value is computed as a sliding average of previous upper bound values, the number of which is given by a predefined upper bound updating interval value. (G) (E) and (F) are repeated until the computed objective function value is not greater than the upper bound value. (H) An updated weight vector is computed to describe a trained neural network model. (A Python sketch of this line search follows this entry.)
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: September 8, 2020
    Assignee: SAS Institute Inc.
    Inventors: Ben-hao Wang, Joshua David Griffin, Seyedalireza Yektamaram, Yan Xu
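    A Python sketch of steps (D)-(G) above, assuming a sliding average over recent objective values standing in for the previous upper bounds; the window size and backtracking factor are illustrative choices:

      from collections import deque

      def nonmonotone_backtracking(loss_fn, w, d, step=1.0, backtrack=0.5,
                                   history=None, window=10, max_tries=30):
          if history is None:
              history = deque([loss_fn(w)], maxlen=window)
          # (E) Upper bound: sliding average of the stored history.
          upper_bound = sum(history) / len(history)
          # (G) Repeat the test until the objective falls below the bound.
          for _ in range(max_tries):
              if loss_fn(w + step * d) <= upper_bound:
                  break
              step *= backtrack      # (F) shrink by the backtracking factor
          history.append(loss_fn(w + step * d))
          # (H) Return the updated weight vector, step, and bound history.
          return w + step * d, step, history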