Patents by Inventor Dmitry Golovashkin
Dmitry Golovashkin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11775833
Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list holds the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list holds the next output index of an edge.
Type: Grant
Filed: October 3, 2019
Date of Patent: October 3, 2023
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
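The edge-storage scheme this abstract describes can be sketched in a few lines. The following Python is an illustrative sketch, not the patented implementation: edges live in a flat array, and each edge carries a "next" index that threads it into a per-vertex linked list kept ordered by weight (all names such as `Edge` and `insert_by_weight` are invented for the example).

```python
# Sketch: flat edge array with intrusive, weight-ordered linked lists.
class Edge:
    def __init__(self, weight, source, target):
        self.weight = weight
        self.source = source      # index of source vertex
        self.target = target      # index of target vertex
        self.next_input = -1      # index of next edge in the input list
        self.next_output = -1     # index of next edge in the output list

def insert_by_weight(edges, heads, edge_idx, vertex, link_attr):
    """Insert edge `edge_idx` into the linked list of `vertex`,
    keeping the list sorted by descending weight."""
    e = edges[edge_idx]
    prev, cur = None, heads[vertex]
    while cur != -1 and edges[cur].weight >= e.weight:
        prev, cur = cur, getattr(edges[cur], link_attr)
    setattr(e, link_attr, cur)
    if prev is None:
        heads[vertex] = edge_idx
    else:
        setattr(edges[prev], link_attr, edge_idx)

# Tiny graph: 2 vertices, 3 parallel edges from vertex 0 to vertex 1.
edges = [Edge(0.5, 0, 1), Edge(0.9, 0, 1), Edge(0.1, 0, 1)]
out_heads = [-1, -1]   # head edge index of each vertex's output list
for i in range(len(edges)):
    insert_by_weight(edges, out_heads, i, edges[i].source, "next_output")

# Walk vertex 0's output list: edges appear in descending weight order.
order, cur = [], out_heads[0]
while cur != -1:
    order.append(edges[cur].weight)
    cur = edges[cur].next_output
print(order)  # [0.9, 0.5, 0.1]
```

Because each list stays weight-ordered, the lowest-weight edges sit at the tail, which is convenient for a sparsification pass that prunes the lightest edges without rescanning the whole array.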
-
Patent number: 11615309
Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the neuron's output range. From each neural network in a pool of trained neural networks, a group of neurons with higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. The tunnels generated from the networks in the pool are merged to form another neural network. The new network may then be trained.
Type: Grant
Filed: February 27, 2019
Date of Patent: March 28, 2023
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
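The selection step can be illustrated with a small sketch. This assumes sigmoid-style activations with output range (0, 1); the scoring formula, the activation data, and the neuron names are all invented for the example and are not taken from the patent:

```python
# Sketch: score each hidden neuron by how close its outputs lie to the
# borders of a (0, 1) output range, then keep the top-k hidden neurons
# together with all input and output neurons to form a "tunnel".

def integrality(outputs):
    """Mean closeness to the nearest border of the (0, 1) range."""
    return sum(1.0 - min(y, 1.0 - y) / 0.5 for y in outputs) / len(outputs)

# Hidden-neuron activations over a small batch of inputs (assumed data).
hidden_outputs = {
    "h1": [0.02, 0.97, 0.99],   # saturated outputs: high integrality
    "h2": [0.48, 0.55, 0.51],   # outputs near 0.5: low integrality
    "h3": [0.10, 0.88, 0.95],
}

scores = {name: integrality(ys) for name, ys in hidden_outputs.items()}
k = 2
tunnel_hidden = sorted(scores, key=scores.get, reverse=True)[:k]
# The tunnel keeps every input/output neuron plus the selected hidden ones.
tunnel = ["in1", "in2"] + tunnel_hidden + ["out1"]
print(sorted(tunnel_hidden))  # ['h1', 'h3']
```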
-
Publication number: 20200272904
Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the neuron's output range. From each neural network in a pool of trained neural networks, a group of neurons with higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. The tunnels generated from the networks in the pool are merged to form another neural network. The new network may then be trained.
Type: Application
Filed: February 27, 2019
Publication date: August 27, 2020
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
-
Publication number: 20200034713
Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list holds the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list holds the next output index of an edge.
Type: Application
Filed: October 3, 2019
Publication date: January 30, 2020
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Patent number: 10467528
Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list holds the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list holds the next output index of an edge.
Type: Grant
Filed: August 11, 2015
Date of Patent: November 5, 2019
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Patent number: 10068170
Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model. The model manager may then minimize an approximate global error in the artificial neural network model at least in part by causing evaluation of a mixed integer linear program that determines weights between artificial neurons in the artificial neural network model. The mixed integer linear program accounts for piecewise linear activation functions for artificial neurons in the artificial neural network model. The mixed integer linear program comprises a functional expression of a difference between actual data and modeled data, and a set of one or more constraints that reference variables in the functional expression.
Type: Grant
Filed: September 22, 2014
Date of Patent: September 4, 2018
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Patrick Aboyoun
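Two ingredients such a formulation relies on can be sketched without a solver. This is a hedged illustration, not the patented program: a piecewise-linear "hard sigmoid" activation, which a mixed integer linear program can encode exactly with binary variables, and the standard trick of splitting an absolute error |y − t| into two nonnegative parts so the objective becomes linear. All numbers and names below are invented for the example.

```python
# Sketch: piecewise-linear activation plus linearized absolute error.
def hard_sigmoid(x):
    """Piecewise linear: 0 for x <= -1, 0.5*x + 0.5 in between, 1 for x >= 1."""
    return max(0.0, min(1.0, 0.5 * x + 0.5))

def split_error(modeled, actual):
    """Express |modeled - actual| as e_pos + e_neg with e_pos, e_neg >= 0,
    the usual linearization of an absolute-value objective."""
    diff = modeled - actual
    return (diff, 0.0) if diff >= 0 else (0.0, -diff)

# One neuron, one training example (illustrative weight, bias, input).
w, b, x, target = 0.8, 0.1, 1.5, 1.0
y = hard_sigmoid(w * x + b)          # 0.5*1.3 + 0.5 = 1.15, clipped to 1.0
e_pos, e_neg = split_error(y, target)
print(y, e_pos + e_neg)              # 1.0 0.0
```

In an actual MILP, each linear piece of the activation would be selected by a binary variable and big-M constraints, and the solver would minimize the sum of the `e_pos + e_neg` terms over all observations.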
-
Patent number: 9990303
Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
Type: Grant
Filed: August 21, 2017
Date of Patent: June 5, 2018
Assignee: Oracle International Corporation
Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
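The copy-and-rewrite step can be illustrated with a toy model. This sketch is invented for illustration (addresses are plain integers, objects are dicts, and `pack_graph` is a made-up name); it only mirrors the idea of packing a scattered object graph into one contiguous segment whose base address is recorded inside it:

```python
# Sketch: copy a directed object graph into a contiguous segment and
# rewrite every pointer to the target object's new address.
def pack_graph(objects, base_address):
    """objects: {orig_addr: {"data": ..., "ptrs": [orig_addr, ...]}}.
    Returns a contiguous 'segment' keyed by new addresses."""
    # Assign each object a new address, in order, starting at the base.
    new_addr = {orig: base_address + i for i, orig in enumerate(sorted(objects))}
    segment = {"base": base_address}   # base address recorded in the segment
    for orig, obj in objects.items():
        segment[new_addr[orig]] = {
            "data": obj["data"],
            "ptrs": [new_addr[t] for t in obj["ptrs"]],  # rewrite pointers
        }
    return segment

# Three non-contiguous objects; 500 points to 900, 900 points to 123.
graph = {
    123: {"data": "a", "ptrs": []},
    500: {"data": "b", "ptrs": [900]},
    900: {"data": "c", "ptrs": [123]},
}
seg = pack_graph(graph, base_address=1000)
print(seg[1001]["ptrs"])  # object formerly at 500 now points to 1002
```

Recording the base inside the segment is what lets a receiver in another address space rebase all pointers later, since the shift to apply is just the difference between the old and new base addresses.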
-
Patent number: 9870342
Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
Type: Grant
Filed: June 8, 2017
Date of Patent: January 16, 2018
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
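The column-partitioning idea can be shown with a toy pure-Python sketch. Here the Gram matrix AᵀA stands in for the Hessian's component parts (for many models the Hessian is built from such products), the 0.5 density threshold is invented, and a real system would dispatch each block to a dense or sparse multiplication kernel:

```python
# Sketch: partition columns by density, then compute the blocks of A^T A
# per partition so each block could use the appropriate kernel.
def density(col):
    return sum(1 for v in col if v != 0.0) / len(col)

def gram(cols_left, cols_right):
    """Compute the block [left^T right] as a list of rows."""
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in cols_right]
            for ci in cols_left]

# Input matrix as a list of columns; two dense columns, one sparse.
cols = [[1.0, 2.0, 3.0], [0.0, 0.0, 4.0], [1.0, 1.0, 1.0]]
dense = [c for c in cols if density(c) > 0.5]
sparse = [c for c in cols if density(c) <= 0.5]

# Component parts that combine into the full A^T A (up to column order):
dd = gram(dense, dense)     # dense x dense block
ds = gram(dense, sparse)    # dense x sparse block
ss = gram(sparse, sparse)   # sparse x sparse block
print(dd, ds, ss)
```

Since AᵀA is symmetric, the sparse-by-dense block is just the transpose of `ds`, so only three of the four blocks need computing.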
-
Publication number: 20170344488
Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
Type: Application
Filed: August 21, 2017
Publication date: November 30, 2017
Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
-
Publication number: 20170286365
Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
Type: Application
Filed: June 8, 2017
Publication date: October 5, 2017
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Patent number: 9740626
Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
Type: Grant
Filed: August 11, 2015
Date of Patent: August 22, 2017
Assignee: Oracle International Corporation
Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
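The two pointer-fixing cases in this abstract can be sketched directly. This is an illustrative toy, not the patented implementation (addresses are plain integers and `rebase` is a made-up name): a segment built at one base address is mapped at another, so each internal pointer is shifted by the difference, while pointers to objects that another segment already holds are redirected there instead:

```python
# Sketch: rebase pointers after a segment moves, reusing equivalent
# objects that already exist in a different memory segment.
def rebase(pointers, original_base, current_base, equivalents):
    """pointers: {field_name: original_address}.
    equivalents: {original_address: address in another segment}."""
    delta = current_base - original_base
    fixed = {}
    for field, addr in pointers.items():
        if addr in equivalents:
            fixed[field] = equivalents[addr]   # reuse the shared object
        else:
            fixed[field] = addr + delta        # shift into current segment
    return fixed

ptrs = {"left": 1004, "right": 1010}
# The object originally at 1010 already exists in another segment at 77.
out = rebase(ptrs, original_base=1000, current_base=5000,
             equivalents={1010: 77})
print(out)  # {'left': 5004, 'right': 77}
```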
-
Patent number: 9715481
Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
Type: Grant
Filed: March 9, 2015
Date of Patent: July 25, 2017
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Patent number: 9594692
Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
Type: Grant
Filed: August 11, 2015
Date of Patent: March 14, 2017
Assignee: Oracle International Corporation
Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
-
Publication number: 20170046270
Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
Type: Application
Filed: August 11, 2015
Publication date: February 16, 2017
Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
-
Publication number: 20170046614
Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in one array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list holds the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list holds the next output index of an edge.
Type: Application
Filed: August 11, 2015
Publication date: February 16, 2017
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Publication number: 20150378962
Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
Type: Application
Filed: March 9, 2015
Publication date: December 31, 2015
Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
-
Patent number: 9047566
Abstract: According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized first vector value of a first vector and an overall optimized second vector value of a second vector are determined based on the target data and the overall objective function. The first vector comprises skip-layer weights for the skip-layer connections and output neuron biases, whereas the second vector comprises non-skip-layer weights for the non-skip-layer connections.
Type: Grant
Filed: March 12, 2013
Date of Patent: June 2, 2015
Assignee: Oracle International Corporation
Inventors: Dmitry Golovashkin, Patrick Aboyoun, Vaishnavi Sashikanth
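The shape of such an overall objective can be sketched for a one-hidden-neuron network with one skip-layer connection. This is a toy illustration, not the patented method: the prediction sums a linear (skip-layer) part, a hidden-layer part, and an output bias, and the objective adds an L2 regularization term; the sigmoid activation, the penalty form, and all numbers are assumptions of the example.

```python
# Sketch: combined objective = sum-of-squared-errors + regularization,
# for a network whose prediction has a skip-layer (linear) component.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, skip_w, out_bias, hidden_w, hidden_out_w):
    linear_part = skip_w * x                           # skip-layer connection
    hidden_part = hidden_out_w * sigmoid(hidden_w * x) # non-skip-layer path
    return linear_part + hidden_part + out_bias

def objective(data, skip_w, out_bias, hidden_w, hidden_out_w, lam=0.01):
    sse = sum((predict(x, skip_w, out_bias, hidden_w, hidden_out_w) - t) ** 2
              for x, t in data)
    reg = lam * (skip_w ** 2 + hidden_w ** 2 + hidden_out_w ** 2)
    return sse + reg

data = [(0.0, 0.0), (1.0, 1.0)]
# A purely linear fit matches this data exactly, so only the penalty remains.
print(objective(data, skip_w=1.0, out_bias=0.0, hidden_w=0.5, hidden_out_w=0.0))
```

In the patent's terms, `skip_w` and `out_bias` would belong to the first vector and `hidden_w`/`hidden_out_w` to the second, and the two vectors are optimized against this overall objective.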
-
Publication number: 20150088795
Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model. The model manager may then minimize an approximate global error in the artificial neural network model at least in part by causing evaluation of a mixed integer linear program that determines weights between artificial neurons in the artificial neural network model. The mixed integer linear program accounts for piecewise linear activation functions for artificial neurons in the artificial neural network model. The mixed integer linear program comprises a functional expression of a difference between actual data and modeled data, and a set of one or more constraints that reference variables in the functional expression.
Type: Application
Filed: September 22, 2014
Publication date: March 26, 2015
Inventors: Dmitry Golovashkin, Patrick Aboyoun
-
Publication number: 20140279771
Abstract: According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized first vector value of a first vector and an overall optimized second vector value of a second vector are determined based on the target data and the overall objective function. The first vector comprises skip-layer weights for the skip-layer connections and output neuron biases, whereas the second vector comprises non-skip-layer weights for the non-skip-layer connections.
Type: Application
Filed: March 12, 2013
Publication date: September 18, 2014
Applicant: Oracle International Corporation
Inventors: Dmitry Golovashkin, Patrick Aboyoun, Vaishnavi Sashikanth