Patents by Inventor Ulrich Finkler

Ulrich Finkler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11888698
    Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: January 30, 2024
    Assignee: Utopus Insights, Inc.
    Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
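
As a rough illustration of the abstract above, the hypothetical Python sketch below (a toy, not the patented method; all names are invented) stores each repeating component pattern once as a shared lower-level definition and combines that hierarchy with time-varying readings to represent the network at a given time:

```python
# Toy hierarchical network model: repeated component patterns become
# shared "definitions"; a snapshot at time t joins the hierarchy with
# time-varying operational values. All names are hypothetical.
from collections import Counter

def find_repeating_patterns(components):
    """Map each component pattern that occurs more than once to a
    shared lower-level definition id."""
    counts = Counter(tuple(sorted(c["parts"])) for c in components)
    return {pattern: f"def_{i}" for i, (pattern, n)
            in enumerate(counts.items()) if n > 1}

def snapshot(components, definitions, readings, t):
    """Represent the network at time t: each component refers to its
    shared definition (if any) plus its operational values at t."""
    state = []
    for c in components:
        key = tuple(sorted(c["parts"]))
        state.append({"id": c["id"],
                      "definition": definitions.get(key, key),
                      "values": readings.get((c["id"], t), {})})
    return state

components = [{"id": "feeder1", "parts": ["switch", "transformer"]},
              {"id": "feeder2", "parts": ["switch", "transformer"]},
              {"id": "feeder3", "parts": ["capacitor"]}]
readings = {("feeder1", 0): {"load_kw": 120.0}}
print(snapshot(components, find_repeating_patterns(components), readings, t=0))
```
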
  • Patent number: 11521067
    Abstract: Various embodiments are provided for decentralized distributed deep learning by one or more processors in a computing system. Asynchronous distributed training of one or more machine learning models may be performed by generating a list of neighbor nodes for each node in a plurality of nodes and creating a first thread for continuous communication according to a weight management operation and a second thread for continuous computation of a gradient for each node. One or more variables are shared between the first thread and the second thread.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Wei Zhang, Li Zhang, Ulrich Finkler, Minsik Cho, David Kung
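
The two-thread pattern this abstract describes can be sketched as follows; the neighbor exchange and the gradient computation are mocked, and the shared variable and lock are illustrative assumptions rather than the patented implementation:

```python
# One thread continuously mixes weights with neighbors while another
# continuously computes gradients; both share state under a lock.
import random
import threading
import time

weights = [0.0]              # variable shared between the two threads
lock = threading.Lock()
stop = threading.Event()

def communication_thread(neighbor_weights):
    """Continuously average local weights with (mocked) neighbor weights."""
    while not stop.is_set():
        w = random.choice(neighbor_weights)   # stand-in for a network fetch
        with lock:
            weights[0] = 0.5 * (weights[0] + w)
        time.sleep(0.01)

def computation_thread():
    """Continuously compute a (mocked) gradient and apply it."""
    while not stop.is_set():
        grad = random.uniform(-1, 1)          # stand-in for backpropagation
        with lock:
            weights[0] -= 0.1 * grad
        time.sleep(0.01)

threads = [threading.Thread(target=communication_thread, args=([1.0, 2.0],)),
           threading.Thread(target=computation_thread)]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
print("final weight:", weights[0])
```
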
  • Patent number: 11501160
    Abstract: In deep learning, and in particular for data compression for allreduce, a gradient may be compressed for synchronization in data-parallel deep neural network training by sharing a consensus vector among a plurality of nodes to ensure identical indexing in each of the nodes prior to performing sparse encoding.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Wei Zhang, Ulrich Finkler
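
The consensus-vector step can be illustrated in a few lines: each node proposes its largest gradient positions, the union becomes one shared index list, and only those positions are reduced, so every node's sparse encoding is indexed identically. The sketch below is a hypothetical illustration, not the patented encoding:

```python
# Nodes agree on one consensus index vector before sparse encoding so
# values can be summed without exchanging per-node index lists.
import numpy as np

def top_k_indices(grad, k):
    return set(np.argsort(np.abs(grad))[-k:])

def consensus_indices(node_grads, k):
    """Union of every node's top-k proposals, shared by all nodes."""
    idx = set()
    for g in node_grads:
        idx |= top_k_indices(g, k)
    return sorted(idx)

def allreduce_sparse(node_grads, idx):
    """Average only the agreed positions; identical indexing means the
    reduction exchanges values only."""
    summed = np.zeros(len(idx))
    for g in node_grads:
        summed += g[idx]
    return idx, summed / len(node_grads)

node_grads = [np.random.randn(10) for _ in range(4)]
idx = consensus_indices(node_grads, k=3)
print(allreduce_sparse(node_grads, idx))
```
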
  • Publication number: 20220294700
    Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
    Type: Application
    Filed: May 31, 2022
    Publication date: September 15, 2022
    Applicant: Utopus Insights, Inc.
    Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
  • Patent number: 11349720
    Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: May 31, 2022
    Assignee: Utopus Insights, Inc.
    Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
  • Patent number: 11093827
    Abstract: Using a processor and a memory at a worker machine, a gradient vector is computed corresponding to a set of weights associated with a set of nodes of a neural network instance being trained in the worker machine. In an ISA vector corresponding to the gradient vector, an ISA instruction is constructed corresponding to a gradient in a set of gradients in the gradient vector, wherein a data transmission of the ISA instruction is smaller than a data transmission of the gradient. The ISA vector is transmitted from the worker machine to a parameter server in response to one iteration of training of the neural network instance; it is transmitted instead of the gradient vector to reduce the amount of data sent from the worker machine to the parameter server for that iteration.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: August 17, 2021
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler
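
As a rough illustration of sending instructions instead of raw gradients, the sketch below packs each gradient into a single byte (a sign bit plus a coarse magnitude bucket) that a parameter server could decode approximately. The one-byte format is an assumption made for illustration; it is not the patented instruction set:

```python
# 1 byte per gradient on the wire instead of a 4-byte float.
def encode(grad, buckets=(1e-4, 1e-3, 1e-2, 1e-1)):
    """Pack one gradient into one byte: bit 7 = sign, bits 0-6 = bucket."""
    sign = 0x80 if grad < 0 else 0
    bucket = sum(abs(grad) >= b for b in buckets)   # 0 .. len(buckets)
    return sign | bucket

def decode(byte, buckets=(1e-4, 1e-3, 1e-2, 1e-1)):
    """Approximate reconstruction on the parameter-server side."""
    sign = -1.0 if byte & 0x80 else 1.0
    values = (0.0,) + buckets
    return sign * values[byte & 0x7F]

grads = [0.042, -0.0007, 0.31]
wire = bytes(encode(g) for g in grads)
print(len(wire), "bytes:", [decode(b) for b in wire])
```
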
  • Patent number: 11093438
    Abstract: Embodiments are provided for pipelining multi-directional reduction by one or more processors in a computing system. One or more reduce-scatter operations and one or more all-gather operations may be assigned to each of a plurality of independent networks. The one or more reduce-scatter operations and the one or more all-gather operations may be sequentially executed in each of the plurality of independent networks according to a serialized execution order and a defined time period.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 17, 2021
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich Finkler, David Kung
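
A toy model of the pipelining: each chunk's reduce-scatter and all-gather steps are assigned round-robin to independent networks (modeled here as queues) and drained in a fixed serialized order, so the two phases overlap across networks. The scheduling policy shown is an assumption, not the patented one:

```python
# Assign reduce-scatter / all-gather operations to independent
# networks and execute them in a serialized order per network.
from collections import deque

NUM_NETWORKS = 2
networks = [deque() for _ in range(NUM_NETWORKS)]

for i in range(4):                        # four gradient chunks
    net = networks[i % NUM_NETWORKS]
    net.append(("reduce_scatter", f"chunk{i}"))
    net.append(("all_gather", f"chunk{i}"))

for step in range(max(len(n) for n in networks)):
    for k, net in enumerate(networks):    # networks run independently
        if net:
            op, chunk = net.popleft()     # serialized within a network
            print(f"t={step} network={k}: {op}({chunk})")
```
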
  • Publication number: 20210149918
    Abstract: Techniques regarding intelligent data pools are provided. Embodiments described herein can include a system comprising a memory that can store computer executable components. The system can also comprise a processor that can execute the computer executable components stored in the memory. The computer executable components can comprise a data pool component that performs a semantic analysis of data access patterns across a distributed computing network to partition file system objects independently of a directory structure and into groups with defined temporary access restrictions. The computer executable components can also comprise: a directory component that organizes data into the directory structure by defining sectors on a node of the distributed computing network into an address section; and a partition component that separates metadata from the data of the directory structure and partitions the metadata into the groups within a continuous virtual memory section based on the data access patterns.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Inventor: Ulrich Finkler
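
A loose sketch of the partitioning idea follows: file system objects are grouped by observed co-access patterns rather than by directory, and each group's metadata is kept together in its own contiguous region. The co-access counting below stands in for the semantic analysis; all names are hypothetical:

```python
# Partition file objects by who accesses them, independent of the
# directory structure, then keep each group's metadata contiguous.
from collections import defaultdict

access_log = [                      # (client, path): raw access pattern
    ("jobA", "/data/x.bin"), ("jobA", "/logs/x.meta"),
    ("jobB", "/data/y.bin"), ("jobA", "/data/x.bin"),
    ("jobB", "/tmp/y.idx"),
]

groups = defaultdict(set)
for client, path in access_log:
    groups[client].add(path)        # grouping ignores directories entirely

# Metadata is separated from the data and laid out per group
# (one contiguous list per group stands in for a memory section).
metadata_regions = {client: [{"path": p, "restricted": True}
                             for p in sorted(paths)]
                    for client, paths in groups.items()}
print(metadata_regions)
```
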
  • Publication number: 20210150351
    Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
    Type: Application
    Filed: December 21, 2020
    Publication date: May 20, 2021
    Applicant: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler
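
The server-side fitting step might look like the hypothetical sketch below: a small set of register values is chosen to approximate the overall gradient, each server instruction is simply an index into those registers, and applying the addressed register values completes the weight update. The quantile-based fit is an illustrative assumption:

```python
import numpy as np

def fit_registers(gradients, num_registers=4):
    """Choose register values that approximate the gradient distribution
    (quantiles here; the patent's optimization is not specified)."""
    return np.quantile(gradients, np.linspace(0, 1, num_registers))

def build_server_isa(gradients, registers):
    """Instruction x = index of the register closest to gradient x."""
    return np.abs(gradients[:, None] - registers[None, :]).argmin(axis=1)

def apply_update(weights, isa, registers, lr=0.1):
    """One training iteration using register values addressed by the ISA."""
    return weights - lr * registers[isa]

overall_grad = np.random.randn(8)    # reduced from the workers' vectors
regs = fit_registers(overall_grad)
isa = build_server_isa(overall_grad, regs)
print(apply_update(np.zeros(8), isa, regs))
```
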
  • Patent number: 10977552
    Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: April 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler
  • Patent number: 10922606
    Abstract: A method for executing multi-directional reduction algorithms includes: identifying a set of nodes, where each node includes at least one data element; creating a set of partitions comprising one or more data elements from at least two nodes arranged in a single direction with respect to the positioning of the set of nodes; executing a reduction algorithm on the data elements within the created set of partitions; creating an additional set of partitions comprising one or more data elements from at least two nodes arranged in a different direction with respect to the positioning of the set of nodes; executing a reduction algorithm on the data elements within the created additional set of partitions; and providing a set of reduced results corresponding to the at least one data element.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: February 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler, David S. Kung, Li Zhang
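
The directional structure is easiest to see on a grid of nodes: reduce over partitions taken in one direction first, then over partitions in the other direction. The NumPy sketch below is a sequential stand-in for the distributed algorithm:

```python
import numpy as np

# A 2 x 3 grid of nodes, each holding a vector of 4 data elements.
grid = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

# Direction 1: partition by row and reduce (sum) across each row.
row_reduced = grid.sum(axis=1)       # shape (2, 4): one result per row

# Direction 2: partition those results by column position and reduce.
total = row_reduced.sum(axis=0)      # shape (4,): fully reduced results

assert np.allclose(total, grid.sum(axis=(0, 1)))
print(total)
```
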
  • Patent number: 10831738
    Abstract: Apparatuses and methods for sorting a data set are described. A data storage is divided into a plurality of buckets, each associated with a respective key value. A plurality of stripes is identified in each bucket, and a plurality of data stripe sets is defined, each having one stripe within each respective bucket. A first and a second in-place partial bucket radix sort are performed on data items contained within the first and second data stripe sets, respectively, using an initial radix. Incorrectly sorted data items in the first bucket are grouped by a first processor, and those in the second bucket by a second processor, into a respective incorrect data item group within each bucket. A radix sort using the initial radix is then performed on the items within each incorrect data item group, producing a first-level sorted output.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rajesh Bordawekar, Daniel Brand, Minsik Cho, Ulrich Finkler, Ruchir Puri
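
A loose, sequential toy of the two-phase structure follows: stripes of the input are bucketed by an initial radix, then any misplaced items would be regrouped and re-placed with the same radix. Unlike the patented algorithm, this sketch is out-of-place and single-threaded:

```python
def top_digit(x):
    """Initial radix for two-digit keys: the tens digit."""
    return x // 10

data = [73, 12, 95, 41, 8, 66, 23, 87]
stripes = [data[:4], data[4:]]            # two stripes, two "processors"
buckets = [[] for _ in range(10)]

# Phase 1: each stripe buckets its own items by the initial radix.
for stripe in stripes:
    for x in stripe:
        buckets[top_digit(x)].append(x)

# Phase 2: regroup items that landed in the wrong bucket and re-place
# them by the same radix (a stand-in for the patent's grouping step;
# in this out-of-place toy nothing is ever misplaced).
misplaced = [x for b, items in enumerate(buckets)
             for x in items if top_digit(x) != b]
for x in misplaced:
    buckets[top_digit(x)].append(x)

first_level = [x for b in buckets for x in b]   # first-level sorted output
print(first_level)
```
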
  • Publication number: 20200311539
    Abstract: Various embodiments are provided for compression for allreduce in deep learning by one or more processors in a computing system. A gradient may be compressed for synchronization in data-parallel deep neural network training for allreduce by sharing a consensus vector among a plurality of nodes to ensure identical indexing in each of the nodes prior to performing sparse encoding.
    Type: Application
    Filed: March 28, 2019
    Publication date: October 1, 2020
    Applicant: International Business Machines Corporation
    Inventors: Minsik Cho, Wei Zhang, Ulrich Finkler
  • Publication number: 20200304376
    Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
    Type: Application
    Filed: February 25, 2020
    Publication date: September 24, 2020
    Applicant: Utopus Insights, Inc.
    Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
  • Publication number: 20200218689
    Abstract: Embodiments are provided for pipelining multi-directional reduction by one or more processors in a computing system. One or more reduce-scatter operations and one or more all-gather operations may be assigned to each of a plurality of independent networks. The one or more reduce-scatter operations and the one or more all-gather operations may be sequentially executed in each of the plurality of independent networks according to a serialized execution order and a defined time period.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 9, 2020
    Applicant: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich Finkler, David Kung
  • Publication number: 20200175370
    Abstract: Various embodiments are provided for decentralized distributed deep learning by one or more processors in a computing system. Asynchronous distributed training of one or more machine learning models may be performed by generating a list of neighbor nodes for each node in a plurality of nodes and creating a first thread for continuous communication according to a weight management operation and a second thread for continuous computation of a gradient for each node. One or more variables are shared between the first thread and the second thread.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Applicant: International Business Machines Corporation
    Inventors: Wei Zhang, Li Zhang, Ulrich Finkler, Minsik Cho, David Kung
  • Patent number: 10574533
    Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: February 25, 2020
    Assignee: Utopus Insights, Inc.
    Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
  • Patent number: 10516767
    Abstract: A method of presenting data over a Web service interface includes: establishing, by a first computer process, a persistent transmission control protocol (TCP) network connection between the first computer process and a second computer process; dynamically allocating, by the second computer process, memory in response to receipt of static data over the persistent TCP network connection from the first computer process; updating, by the second computer process, the memory in response to receipt of dynamic data received over the persistent TCP network connection from the first computer process; and enabling, by the second computer process, a Web server to access the updated data for presentation by the Web service interface. The static data identifies a given entity and the dynamic data includes metric data provided for the entity.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: December 24, 2019
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Amith Singhee, Steven Hirsch, Ashok Pon Kumar Sree Prakash, Ulrich A. Finkler, David O. Melville, Scott M. Mansfield
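
The data path in this abstract can be mimicked with two endpoints sharing one persistent TCP connection: a static message makes the receiver allocate a record for the entity, and each dynamic message updates that record's metrics in place. The JSON-lines wire format below is a hypothetical choice, and the step of exposing the table through a Web server is omitted:

```python
import json
import socket
import threading

store = {}                                # entity id -> latest metric data

def receiver(server_sock):
    conn, _ = server_sock.accept()        # one persistent TCP connection
    with conn, conn.makefile("r") as f:
        for line in f:
            msg = json.loads(line)
            if msg["kind"] == "static":   # static data: allocate memory
                store[msg["entity"]] = {}
            else:                         # dynamic data: update in place
                store[msg["entity"]].update(msg["metrics"])

srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
t = threading.Thread(target=receiver, args=(srv,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b'{"kind": "static", "entity": "pump7"}\n')
    c.sendall(b'{"kind": "dynamic", "entity": "pump7",'
              b' "metrics": {"rpm": 1450}}\n')

t.join()
srv.close()
print(store)                              # {'pump7': {'rpm': 1450}}
```
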
  • Publication number: 20190087723
    Abstract: Using a processor and a memory at a worker machine, a gradient vector is computed corresponding to a set of weights associated with a set of nodes of a neural network instance being trained in the worker machine. In an ISA vector corresponding to the gradient vector, an ISA instruction is constructed corresponding to a gradient in a set of gradients in the gradient vector, wherein a data transmission of the ISA instruction is smaller than a data transmission of the gradient. The ISA vector is transmitted from the worker machine to a parameter server in response to one iteration of training of the neural network instance; it is transmitted instead of the gradient vector to reduce the amount of data sent from the worker machine to the parameter server for that iteration.
    Type: Application
    Filed: September 20, 2017
    Publication date: March 21, 2019
    Applicant: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler
  • Publication number: 20190087722
    Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
    Type: Application
    Filed: September 20, 2017
    Publication date: March 21, 2019
    Applicant: International Business Machines Corporation
    Inventors: Minsik Cho, Ulrich A. Finkler