Patents by Inventor Uladzislau Sharanhovich

Uladzislau Sharanhovich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11775833
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in an array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list has the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: October 3, 2023
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
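
The edge/vertex storage the abstract describes can be sketched in Python. This is a hypothetical illustration, not the patented implementation: edges live in one array, vertices in another, and each edge is spliced into weight-ordered input and output linked lists whose links are edge indices. The class and method names are invented for the example.

```python
class Edge:
    def __init__(self, weight, src, dst):
        self.weight = weight
        self.src = src        # output index: vertex this edge leaves
        self.dst = dst        # input index: vertex this edge enters
        self.next_out = -1    # link: next edge index in src's output list
        self.next_in = -1     # link: next edge index in dst's input list

class SparseGraph:
    def __init__(self, num_vertices):
        self.edges = []                       # edge array
        self.out_head = [-1] * num_vertices   # per-vertex output list heads
        self.in_head = [-1] * num_vertices    # per-vertex input list heads

    def _sorted_insert(self, heads, vertex, idx, link):
        # Walk the vertex's list until the new edge's weight dominates,
        # then splice the new edge index in, keeping descending order.
        w = self.edges[idx].weight
        prev, cur = -1, heads[vertex]
        while cur != -1 and self.edges[cur].weight >= w:
            prev, cur = cur, getattr(self.edges[cur], link)
        setattr(self.edges[idx], link, cur)
        if prev == -1:
            heads[vertex] = idx
        else:
            setattr(self.edges[prev], link, idx)

    def add_edge(self, weight, src, dst):
        idx = len(self.edges)
        self.edges.append(Edge(weight, src, dst))
        self._sorted_insert(self.out_head, src, idx, "next_out")
        self._sorted_insert(self.in_head, dst, idx, "next_in")
```

Keeping both lists ordered by weight makes sparsification cheap: dropping the lightest edges amounts to truncating list tails rather than re-sorting the edge array.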
  • Patent number: 11615309
    Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the output range of a neuron. From each neural network of a pool of trained neural networks, a group of neurons with a higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. Tunnels generated from each neural network in a pool are merged to form another neural network. The new network may then be trained.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: March 28, 2023
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
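
The tunnel-selection step can be sketched as follows. The abstract does not fix a formula for integrality, so the scoring below (mean closeness of a neuron's activations to the border of its output range) and the `keep_fraction` parameter are illustrative assumptions; input and output neurons are always kept, so only hidden neurons are ranked.

```python
import numpy as np

def integrality(activations, lo=0.0, hi=1.0):
    # Score in [0, 1]: 1 means every output sits on the range border
    # (fully saturated neuron), 0 means every output sits at the midpoint.
    # This particular formula is a hypothetical scoring choice.
    dist_to_border = np.minimum(activations - lo, hi - activations)
    return 1.0 - 2.0 * dist_to_border.mean() / (hi - lo)

def select_tunnel(hidden_activations, keep_fraction=0.5):
    # Rank hidden neurons by integrality and keep the top fraction;
    # the returned indices (plus all input/output neurons) form the tunnel.
    scores = [integrality(a) for a in hidden_activations]
    k = max(1, int(len(scores) * keep_fraction))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])
```

Tunnels chosen this way from each network in the pool would then be merged into a new network for further training, per the abstract.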
  • Publication number: 20220284245
    Abstract: The disclosed embodiments relate to a system that improves operation of a monitored system. During a training mode, the system uses a training data set comprising labeled data points received from the monitored system to train a support vector machine (SVM) model to detect one or more conditions-of-interest. While training the SVM model, the system makes approximations to reduce computing costs, wherein the approximations involve stochastically discarding points from the training data set based on an inverse distance to a separating hyperplane for the SVM model. Next, during a surveillance mode, the system uses the trained SVM model to detect the one or more conditions-of-interest based on monitored data points received from the monitored system. When one or more conditions-of-interest are detected, the system performs an action to improve operation of the monitored system.
    Type: Application
    Filed: March 3, 2021
    Publication date: September 8, 2022
    Applicant: Oracle International Corporation
    Inventors: Dmitry V. Golovashkin, Mark F. Hornick, Marcos R. Arancibia Coddou, Uladzislau Sharanhovich
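
The stochastic-discard step might look roughly like this. The abstract only says discarding is based on an inverse distance to the separating hyperplane, so the exact `keep_prob` formula, the `scale` parameter, and the assumption that `w`, `b` come from an interim SVM fit are all illustrative: points far from the hyperplane are unlikely to become support vectors, so they are dropped with high probability.

```python
import numpy as np

def stochastic_prune(X, y, w, b, rng, scale=1.0):
    # Distance of each point to the hyperplane w.x + b = 0.
    dist = np.abs(X @ w + b) / np.linalg.norm(w)
    # Keep probability proportional to inverse distance (capped at 1),
    # so near-hyperplane points always survive and far points rarely do.
    keep_prob = np.minimum(1.0, scale / np.maximum(dist, 1e-12))
    mask = rng.random(len(X)) < keep_prob
    return X[mask], y[mask]
```

The pruned set would then be used for the next, cheaper training pass; repeating fit-then-prune rounds is one plausible way to realize the cost reduction the abstract describes.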
  • Publication number: 20200272904
    Abstract: In an artificial neural network, integrality refers to the degree to which a neuron generates, for a given set of inputs, outputs that are near the border of the output range of a neuron. From each neural network of a pool of trained neural networks, a group of neurons with a higher integrality is selected to form a neural network tunnel (“tunnel”). The tunnel must include all input neurons and output neurons from the neural network, and some of the hidden neurons. Tunnels generated from each neural network in a pool are merged to form another neural network. The new network may then be trained.
    Type: Application
    Filed: February 27, 2019
    Publication date: August 27, 2020
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Brian Vosburgh, Denis B. Mukhin
  • Publication number: 20200034713
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in an array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list has the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Application
    Filed: October 3, 2019
    Publication date: January 30, 2020
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 10467528
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in an array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list has the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: November 5, 2019
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9990303
    Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: June 5, 2018
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
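
The copy-and-rewrite step can be sketched in plain Python, using a list-backed "segment" and integer offsets in place of raw memory addresses (all names here are invented for the illustration): each object is copied into the new segment at a recorded address, then every pointer is rewritten to the target's new address.

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.targets = []   # object pointers: references to other Obj instances

def pack(graph_objs, base_address=0):
    # First pass: copy each object into the segment and record the
    # address it lands at (base + offset), keyed by the original object.
    new_addr = {}
    segment = {"base": base_address, "objects": []}
    for off, obj in enumerate(graph_objs):
        new_addr[id(obj)] = base_address + off
        segment["objects"].append({"name": obj.name, "targets": []})
    # Second pass: rewrite every pointer to the target's new address,
    # so the segment is self-contained and position-describable.
    for obj, copy in zip(graph_objs, segment["objects"]):
        copy["targets"] = [new_addr[id(t)] for t in obj.targets]
    return segment
```

Because the base address is recorded inside the segment, a consumer in another address space can recover each object's location from base plus offset, which is what lets the packed graph be shared.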
  • Patent number: 9870342
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: January 16, 2018
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
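
The column-partitioned Hessian computation can be sketched as follows. For brevity both blocks below use dense NumPy multiplies; a real implementation would hold the sparse block in a compressed sparse format and use sparse kernels, as the abstract describes. The density threshold is an illustrative parameter.

```python
import numpy as np

def hessian_by_density(X, density_threshold=0.5):
    # Partition columns by their fraction of nonzeros.
    n, p = X.shape
    density = (X != 0).mean(axis=0)
    dense_cols = np.where(density > density_threshold)[0]
    sparse_cols = np.where(density <= density_threshold)[0]
    D, S = X[:, dense_cols], X[:, sparse_cols]
    # Compute the Gram/Hessian blocks separately, then scatter them back
    # into the original column order: H = X^T X block by block.
    H = np.empty((p, p))
    H[np.ix_(dense_cols, dense_cols)] = D.T @ D      # dense x dense
    H[np.ix_(sparse_cols, sparse_cols)] = S.T @ S    # sparse x sparse
    cross = D.T @ S                                  # mixed block
    H[np.ix_(dense_cols, sparse_cols)] = cross
    H[np.ix_(sparse_cols, dense_cols)] = cross.T
    return H
```

The payoff is that each block can use the multiplication kernel suited to its storage format, instead of forcing the whole matrix into one format.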
  • Publication number: 20170344488
    Abstract: Techniques herein are for sharing data structures. In embodiments, a computer obtains a directed object graph (DOG) containing objects and pointers interconnecting the objects. Each object pointer (OP) resides in a source object and comprises a memory address (MA) of a target object (TO). An original address space (OAS) contains the MA of the TO. The objects are not contiguous within the OAS. The DOG resides in original memory segment(s). The computer obtains an additional memory segment (AMS) beginning at a base address. The computer records the base address within the AMS. For each object in the DOG, the computer copies the object into the AMS at a respective address. For each OP in the DOG having the object as the TO of the MA of the OP, the computer replaces the MA of the OP with the respective address. AMS contents are provided in another address space.
    Type: Application
    Filed: August 21, 2017
    Publication date: November 30, 2017
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170286365
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Application
    Filed: June 8, 2017
    Publication date: October 5, 2017
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9740626
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: August 22, 2017
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
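
The relocation logic in the abstract can be sketched in Python with integer offsets standing in for raw addresses (the `segment` layout and function names are invented for the example): pointers to objects with an equivalent outside the segment are redirected to that equivalent, and all remaining pointers are shifted by the difference between the current and original base addresses.

```python
def relocate(segment, current_base, equivalents=None):
    # segment: dict with "base" (original base address) and "objects",
    # each object holding absolute "targets" addresses inside the segment.
    # equivalents: optional map old_address -> address of an equivalent
    # object that does NOT reside in this segment.
    equivalents = equivalents or {}
    delta = current_base - segment["base"]
    for obj in segment["objects"]:
        obj["targets"] = [
            # Prefer an existing equivalent object; otherwise rebase the
            # pointer by the base-address difference.
            equivalents.get(t, t + delta)
            for t in obj["targets"]
        ]
    segment["base"] = current_base
    return segment
```

This is what lets a shared segment be mapped at a different base address in each consuming process without re-serializing the graph.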
  • Patent number: 9715481
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: July 25, 2017
    Assignee: Oracle International Corporation
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Patent number: 9594692
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: March 14, 2017
    Assignee: Oracle International Corporation
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170046270
    Abstract: Techniques herein are for sharing data structures between processes. A method involves obtaining a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer. The graph comprises object pointers and objects. For each particular object, determine whether a different memory segment contains an equivalent object that is equivalent to the particular object. If the equivalent object exists, for each object pointer having the particular object as its target object, replace the memory address of the object pointer with a memory address of the equivalent object that does not reside in the current memory segment. Otherwise, for each object pointer having the particular object as its target object, increment the memory address of the object pointer by an amount that is a difference between the current base address and the original base address.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Uladzislau Sharanhovich, Anand Srinivasan, Dmitry Golovashkin, Vaishnavi Sashikanth
  • Publication number: 20170046614
    Abstract: Techniques herein train a multilayer perceptron, sparsify edges of a graph such as the perceptron, and store edges and vertices of the graph. Each edge has a weight. A computer sparsifies perceptron edges. The computer performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix. Based on that Hessian, the computer performs quasi-Newton perceptron optimization. The computer repeats this until convergence. The computer stores edges in an array and vertices in another array. Each edge has a weight and input and output indices. Each vertex has input and output indices. The computer inserts each edge into an input linked list based on its weight. Each link of the input linked list has the next input index of an edge. The computer inserts each edge into an output linked list based on its weight. Each link of the output linked list comprises the next output index of an edge.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth
  • Publication number: 20150378962
    Abstract: According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication.
    Type: Application
    Filed: March 9, 2015
    Publication date: December 31, 2015
    Inventors: Dmitry Golovashkin, Uladzislau Sharanhovich, Vaishnavi Sashikanth