Patents by Inventor Dmitry Golovashkin

Dmitry Golovashkin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11775833
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list comprises the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: October 3, 2023
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
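The edge/vertex storage scheme in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: edges live in one array, vertices in another, and each edge is threaded onto two weight-ordered intrusive linked lists (one per source vertex, one per target vertex) via "next input" / "next output" indices. All names are assumptions.

```python
NIL = -1  # sentinel index marking the end of a linked list

class Edge:
    def __init__(self, weight, src, dst):
        self.weight = weight
        self.src = src            # index of the input (source) vertex
        self.dst = dst            # index of the output (target) vertex
        self.next_input = NIL     # next edge in the source vertex's list
        self.next_output = NIL    # next edge in the target vertex's list

class Vertex:
    def __init__(self):
        self.first_input = NIL    # head of the weight-ordered outgoing-edge list
        self.first_output = NIL   # head of the weight-ordered incoming-edge list

def insert_edge(edges, vertices, weight, src, dst):
    """Append an edge and splice it into both lists in descending weight order."""
    e = len(edges)
    edges.append(Edge(weight, src, dst))
    # insert into the source vertex's input list, keeping heavier edges first
    prev, cur = NIL, vertices[src].first_input
    while cur != NIL and edges[cur].weight >= weight:
        prev, cur = cur, edges[cur].next_input
    edges[e].next_input = cur
    if prev == NIL:
        vertices[src].first_input = e
    else:
        edges[prev].next_input = e
    # insert into the target vertex's output list the same way
    prev, cur = NIL, vertices[dst].first_output
    while cur != NIL and edges[cur].weight >= weight:
        prev, cur = cur, edges[cur].next_output
    edges[e].next_output = cur
    if prev == NIL:
        vertices[dst].first_output = e
    else:
        edges[prev].next_output = e
    return e
```

Keeping the lists ordered by weight makes sparsification cheap: pruning the lightest edges of a vertex only walks the tail of its list.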
  • Patent number: 11615309
    Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the output range of a neuron. From each neural network of a pool of trained neural networks, a group of neurons with a higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. Tunnels generated from each neural network in a pool are merged to form another neural network. The new network may then be trained.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: March 28, 2023
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
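The integrality-based selection described in the abstract above can be sketched as follows. The concrete scoring formula and the top-k cutoff here are illustrative assumptions, not the patented definitions: for sigmoid neurons with output range (0, 1), a neuron is scored by how close its outputs sit to the borders 0 and 1, averaged over a batch.

```python
import numpy as np

def integrality(outputs):
    """Mean closeness of each neuron's outputs to the {0, 1} border.

    outputs: (n_samples, n_neurons) array of sigmoid activations.
    Returns one score per neuron in [0.5, 1]; higher means more integral.
    """
    return np.maximum(outputs, 1.0 - outputs).mean(axis=0)

def select_tunnel(hidden_outputs, keep_fraction=0.5):
    """Indices of the most integral hidden neurons to keep in the tunnel.

    Input and output neurons are always kept per the abstract; this helper
    only ranks the hidden neurons.
    """
    scores = integrality(hidden_outputs)
    k = max(1, int(keep_fraction * hidden_outputs.shape[1]))
    return np.argsort(scores)[::-1][:k]
```

A neuron that outputs 0.99 and 0.02 over a batch scores near 1 and is kept; one hovering around 0.5 scores near 0.5 and is dropped from the tunnel.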
  • Publication number: 20200272904
    Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the output range of a neuron. From each neural network of a pool of trained neural networks, a group of neurons with a higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. Tunnels generated from each neural network in a pool are merged to form another neural network. The new network may then be trained.
    Type: Application
    Filed: February 27, 2019
    Publication date: August 27, 2020
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
  • Publication number: 20200034713
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list comprises the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Application
    Filed: October 3, 2019
    Publication date: January 30, 2020
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 10467528
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list comprises the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: November 5, 2019
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 10068170
    Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model. The model manager may then minimize an approximate global error in the artificial neural network model at least in part by causing evaluation of a mixed integer linear program that determines weights between artificial neurons in the artificial neural network model. The mixed integer linear program accounts for piecewise linear activation functions for artificial neurons in the artificial neural network model. The mixed integer linear program comprises a functional expression of a difference between actual data and modeled data, and a set of one or more constraints that reference variables in the functional expression.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: September 4, 2018
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Patrick Aboyoun
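The abstract above says the mixed integer linear program accounts for piecewise linear activation functions. One standard way a piecewise-linear activation enters a MILP is a big-M encoding with a binary indicator; the sketch below uses that encoding for the ReLU y = max(0, x) as an illustrative assumption, since the abstract does not spell out the exact constraint set.

```python
M = 100.0  # big-M constant; must bound |x| over all feasible pre-activations

def relu_constraints_hold(x, y, z):
    """Check the four linear constraints that, with binary z, force y = max(0, x)."""
    return (
        y >= x                      # y >= x
        and y >= 0                  # y >= 0
        and y <= x + M * (1 - z)    # when z = 1: y <= x, so y = x
        and y <= M * z              # when z = 0: y <= 0, so y = 0
    )

def relu_witness(x):
    """The (y, z) assignment satisfying the constraints for a given x."""
    return (max(0.0, x), 1 if x > 0 else 0)
```

With these constraints embedded per neuron, the network's error minimization becomes linear in the weights and the indicator variables, which is what lets an off-the-shelf MILP solver evaluate it.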
  • Patent number: 9990303
    Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: June 5, 2018
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
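The copy-and-relocate scheme in the abstract above can be simulated in miniature. This is a hypothetical model, with a dict standing in for the original address space and a list standing in for the additional memory segment; all names are assumptions. Slot 0 of the segment records the base address, as the abstract describes, and every pointer is rewritten to its target's new address.

```python
def copy_graph(heap, roots, base):
    """Copy a directed object graph into one contiguous segment.

    heap:  {address: {'ptrs': [addresses], ...}} -- the original space,
           where objects are scattered (non-contiguous).
    roots: addresses to start traversal from.
    Returns (segment, new_addr): segment[0] holds the recorded base address,
    and new_addr maps each old address to its relocated address.
    """
    # assign each reachable object a slot at base+1, base+2, ...
    order, seen, stack = [], set(), list(roots)
    while stack:
        a = stack.pop()
        if a in seen:
            continue
        seen.add(a)
        order.append(a)
        stack.extend(heap[a]['ptrs'])
    new_addr = {old: base + 1 + i for i, old in enumerate(order)}
    segment = [base]  # record the base address within the segment
    for old in order:
        obj = dict(heap[old])
        obj['ptrs'] = [new_addr[p] for p in obj['ptrs']]  # rewrite each pointer
        segment.append(obj)
    return segment, new_addr
```

Because the rewritten pointers are all relative to one recorded base, the segment's contents can later be handed to another address space and fixed up with a single offset.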
  • Patent number: 9870342
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: January 16, 2018
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
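The column-partitioned Hessian computation in the abstract above can be sketched with NumPy and SciPy. Treating the Hessian as the Gram matrix XᵀX and using a 0.2 density threshold are assumptions for the sketch; the point is the block assembly, where products over dense columns use dense multiplication and products over sparse columns use sparse multiplication.

```python
import numpy as np
from scipy import sparse

def partitioned_gram(X, density_threshold=0.2):
    """Compute H = Xt X by partitioning columns of X by density.

    Dense columns are stored densely; sparse columns go into CSC format.
    Returns (H, perm) where perm is the column order used in H.
    """
    density = (X != 0).mean(axis=0)
    dense_cols = np.where(density > density_threshold)[0]
    sparse_cols = np.where(density <= density_threshold)[0]
    D = X[:, dense_cols]                      # dense matrix format
    S = sparse.csc_matrix(X[:, sparse_cols])  # sparse matrix format
    nd, n = len(dense_cols), len(dense_cols) + len(sparse_cols)
    H = np.empty((n, n))
    dd = D.T @ D              # dense x dense block
    sd = S.T @ D              # sparse x dense block (returns a dense array)
    ss = (S.T @ S).toarray()  # sparse x sparse block
    H[:nd, :nd] = dd
    H[nd:, :nd] = sd
    H[:nd, nd:] = sd.T
    H[nd:, nd:] = ss
    perm = np.concatenate([dense_cols, sparse_cols])
    return H, perm
```

The result equals the plain dense Gram matrix after reordering the columns, but avoids materializing the sparse columns densely.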
  • Publication number: 20170344488
    Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
    Type: Application
    Filed: August 21, 2017
    Publication date: November 30, 2017
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170286365
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Application
    Filed: June 8, 2017
    Publication date: October 5, 2017
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9740626
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: August 22, 2017
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
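The relocation rule in the abstract above can be sketched in a few lines. This is an illustrative model with assumed names: when a saved segment is mapped at a new base address, every in-segment pointer is shifted by (current base − original base), while pointers to objects that already exist elsewhere (an "equivalent object") are redirected outside the segment instead.

```python
def relocate(segment_objects, original_base, current_base, equivalents=None):
    """Rewrite all pointers in a reloaded memory segment.

    segment_objects: {old_address: [pointer addresses]} -- the segment's
        object graph as saved at original_base.
    equivalents: {old_address: external address of an equivalent object}.
    Returns {new_address: [rewritten pointer addresses]}.
    """
    equivalents = equivalents or {}
    delta = current_base - original_base

    def rewrite(ptr):
        # prefer an equivalent object outside the segment; otherwise shift
        return equivalents.get(ptr, ptr + delta)

    return {addr + delta: [rewrite(p) for p in ptrs]
            for addr, ptrs in segment_objects.items()}
```

Sharing equivalents this way lets multiple processes map the same saved structure without duplicating objects they already hold.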
  • Patent number: 9715481
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: July 25, 2017
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9594692
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: March 14, 2017
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170046270
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170046614
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list comprises the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Publication number: 20150378962
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Application
    Filed: March 9, 2015
    Publication date: December 31, 2015
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9047566
    Abstract: According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized first vector value of a first vector and an overall optimized second vector value of a second vector are determined based on the target data and the overall objective function. The first vector comprises skip-layer weights for the skip-layer connections and output neuron biases, whereas the second vector comprises non-skip-layer weights for the non-skip-layer connections.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: June 2, 2015
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Patrick Aboyoun, Vaishnavi Sashikanth
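The model shape in the abstract above can be sketched as follows. This is a hypothetical one-hidden-layer network whose output combines a linear (skip-layer) term from the inputs with a non-skip term through the hidden neurons, scored by squared error plus a regularization penalty on the non-skip weights only; the architecture details and the split of the penalty are assumptions.

```python
import numpy as np

def predict(X, u, W, v):
    """u: skip-layer weights plus output bias (the first vector).
    W, v: non-skip-layer weights, input->hidden and hidden->output."""
    skip = X @ u[:-1] + u[-1]   # linear regression part + output neuron bias
    hidden = np.tanh(X @ W)     # hidden neurons via non-skip-layer connections
    return skip + hidden @ v

def objective(X, y, u, W, v, lam=0.1):
    """Squared error plus a regularization term on non-skip weights only."""
    err = predict(X, u, W, v) - y
    return float(err @ err + lam * (np.sum(W ** 2) + v @ v))
```

With the non-skip weights held at zero the model reduces to plain linear regression, which is why the skip-layer vector can be optimized separately from the second vector.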
  • Publication number: 20150088795
    Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model. The model manager may then minimize an approximate global error in the artificial neural network model at least in part by causing evaluation of a mixed integer linear program that determines weights between artificial neurons in the artificial neural network model. The mixed integer linear program accounts for piecewise linear activation functions for artificial neurons in the artificial neural network model. The mixed integer linear program comprises a functional expression of a difference between actual data and modeled data, and a set of one or more constraints that reference variables in the functional expression.
    Type: Application
    Filed: September 22, 2014
    Publication date: March 26, 2015
    Inventors: Dmitry Golovashkin, Patrick Aboyoun
  • Publication number: 20140279771
    Abstract: According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized first vector value of a first vector and an overall optimized second vector value of a second vector are determined based on the target data and the overall objective function. The first vector comprises skip-layer weights for the skip-layer connections and output neuron biases, whereas the second vector comprises non-skip-layer weights for the non-skip-layer connections.
    Type: Application
    Filed: March 12, 2013
    Publication date: September 18, 2014
    Applicant: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Patrick Aboyoun, Vaishnavi Sashikanth