Patents by Inventor Ulrich A. Finkler
Ulrich A. Finkler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240267297
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Application
Filed: December 6, 2023
Publication date: August 8, 2024
Applicant: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
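The repeating-pattern step above can be sketched as follows. This is a minimal illustration, not the patented representation: the component signatures, the `def_…` names, and the tuple data model are all hypothetical stand-ins.

```python
from collections import Counter

def find_repeating_patterns(components, min_count=2):
    """Count identical component signatures so that repeats can be
    factored out into a single shared lower-level definition."""
    counts = Counter(components)
    return {sig: n for sig, n in counts.items() if n >= min_count}

def build_hierarchy(components):
    """Replace each repeated signature with a reference to one shared
    definition, yielding a hierarchical (definitions + instances) form."""
    patterns = find_repeating_patterns(components)
    definitions = {sig: f"def_{i}" for i, sig in enumerate(sorted(patterns))}
    instances = [definitions.get(c, c) for c in components]
    return definitions, instances

# Hypothetical network components as (type, port count) signatures.
components = [("transformer", 2), ("breaker", 1), ("transformer", 2),
              ("transformer", 2), ("breaker", 1), ("meter", 1)]
defs, inst = build_hierarchy(components)
```

Only the two repeated signatures become shared definitions; the one-off `meter` stays a flat instance.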
-
Patent number: 12039439
Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
Type: Grant
Filed: December 21, 2020
Date of Patent: July 16, 2024
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich A. Finkler
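The core idea — sending a compact instruction per gradient plus a small set of shared register values instead of full-precision gradients — can be sketched roughly as below. The encoding (nearest-register index) and the update rule are illustrative assumptions, not the patent's actual opcode scheme.

```python
def quantize_to_isa(gradients, registers):
    """Encode each gradient as the index of the nearest shared register
    value -- a stand-in for the patent's compact ISA instructions."""
    return [min(range(len(registers)),
                key=lambda i: abs(registers[i] - g)) for g in gradients]

def apply_isa_update(weights, isa_vector, registers, lr=0.1):
    """Decode each instruction back to an approximate gradient and take
    one SGD-style step (hypothetical update rule)."""
    return [w - lr * registers[op] for w, op in zip(weights, isa_vector)]

registers = [-1.0, 0.0, 1.0]          # few shared values vs. one float per gradient
grads = [0.9, -1.1, 0.05, 1.2]
isa = quantize_to_isa(grads, registers)          # one small index per gradient
new_w = apply_isa_update([0.0] * 4, isa, registers)
```

Each transmitted instruction here fits in two bits, versus 32 bits for a float gradient, which is the kind of bandwidth saving the abstract describes.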
-
Patent number: 11888698
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Grant
Filed: May 31, 2022
Date of Patent: January 30, 2024
Assignee: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
-
Patent number: 11521067
Abstract: Various embodiments are provided for decentralized distributed deep learning by one or more processors in a computing system. Asynchronous distributed training of one or more machine learning models may be performed by generating a list of neighbor nodes for each node in a plurality of nodes and creating a first thread for continuous communication according to a weight management operation and a second thread for continuous computation of a gradient for each node. One or more variables are shared between the first thread and the second thread.
Type: Grant
Filed: November 30, 2018
Date of Patent: December 6, 2022
Assignee: International Business Machines Corporation
Inventors: Wei Zhang, Li Zhang, Ulrich Finkler, Minsik Cho, David Kung
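The two-thread pattern the abstract describes — one thread continuously averaging weights with neighbors, another continuously applying gradients, sharing state between them — might look roughly like this sketch. The class, method names, and gossip-averaging rule are assumptions for illustration only.

```python
import random
import threading

class AsyncWorker:
    """One node: a computation thread would call compute_step repeatedly
    while a communication thread calls gossip_step; both share
    self.weight under a lock (the shared variable from the abstract)."""
    def __init__(self, weight, neighbors):
        self.weight = weight
        self.neighbors = neighbors        # list of peer AsyncWorker objects
        self.lock = threading.Lock()
        self.steps = 0

    def compute_step(self, grad, lr=0.1):
        with self.lock:                   # computation thread's critical section
            self.weight -= lr * grad
            self.steps += 1

    def gossip_step(self):
        peer = random.choice(self.neighbors)
        with self.lock:                   # communication thread's critical section
            self.weight = (self.weight + peer.weight) / 2

a = AsyncWorker(1.0, [])
b = AsyncWorker(3.0, [a])
a.neighbors.append(b)
b.gossip_step()       # b averages its weight with a's
a.compute_step(1.0)   # a applies a locally computed gradient
```

In a real decentralized run each node would launch both loops as daemon threads; serializing them here keeps the example deterministic.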
-
Patent number: 11501160
Abstract: In deep learning, and in particular for allreduce data compression, a gradient may be compressed for synchronization in data-parallel deep neural network training by sharing a consensus vector between each node in a plurality of nodes to ensure identical indexing in each of the plurality of nodes prior to performing sparse encoding.
Type: Grant
Filed: March 28, 2019
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Wei Zhang, Ulrich Finkler
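The role of the consensus vector — getting every node to sparse-encode against the same index set before the reduce — can be sketched as follows. Building the consensus as the union of per-node top-k indices is an assumption here; the patent's actual construction may differ.

```python
def topk_indices(vec, k):
    """Indices of the k largest-magnitude entries of a gradient vector."""
    return set(sorted(range(len(vec)), key=lambda i: -abs(vec[i]))[:k])

def consensus_allreduce(node_grads, k):
    """All nodes first agree on one index set (the consensus vector), so
    every node's sparse-encoded message uses identical indexing; only
    then are the selected values reduced."""
    consensus = set()
    for g in node_grads:                  # union of every node's local top-k
        consensus |= topk_indices(g, k)
    idx = sorted(consensus)
    # Each node transmits only the values at `idx`; allreduce sums them.
    reduced = [sum(g[i] for g in node_grads) for i in idx]
    return idx, reduced

grads = [[0.9, 0.0, 0.1, -0.8],
         [0.1, 1.2, 0.0, -0.7]]
idx, reduced = consensus_allreduce(grads, k=2)
```

Without the shared index set, node 0 would encode positions {0, 3} and node 1 positions {1, 3}, and the sparse messages could not be summed element-for-element.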
-
Publication number: 20220294700
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Application
Filed: May 31, 2022
Publication date: September 15, 2022
Applicant: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
-
Patent number: 11349720
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Grant
Filed: February 25, 2020
Date of Patent: May 31, 2022
Assignee: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
-
Patent number: 11093438
Abstract: Embodiments for pipelining multi-directional reduction by one or more processors in a computing system. One or more reduce scatter operations and one or more all-gather operations may be assigned to each of a plurality of independent networks. The one or more reduce scatter operations and the one or more all-gather operations may be sequentially executed in each of the plurality of independent networks according to a serialized execution order and a defined time period.
Type: Grant
Filed: January 7, 2019
Date of Patent: August 17, 2021
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich Finkler, David Kung
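The two building blocks named in the abstract compose into an allreduce: a reduce-scatter leaves each node owning one fully reduced chunk, and an all-gather then distributes every chunk to every node. The sketch below models this with plain lists and runs the two "independent networks" sequentially; the pipelining and scheduling are exactly what the patent adds on top.

```python
def reduce_scatter(node_bufs):
    """Each of N nodes ends up owning the fully reduced i-th chunk."""
    n = len(node_bufs)
    return [sum(buf[i] for buf in node_bufs) for i in range(n)]

def all_gather(owned_chunks):
    """Every node receives every owned chunk, so all nodes hold the result."""
    return [list(owned_chunks) for _ in owned_chunks]

# Hypothetical setup: two independent networks each carry half of a
# gradient for two nodes, so in the patented scheme the reduce-scatter
# on one network can overlap the all-gather on the other.
net_a = [[1, 2], [3, 4]]   # node0 / node1 halves routed over network A
net_b = [[5, 6], [7, 8]]   # node0 / node1 halves routed over network B
result_a = all_gather(reduce_scatter(net_a))
result_b = all_gather(reduce_scatter(net_b))
```

After both phases, every node holds the element-wise sum of all nodes' buffers, which is the allreduce result.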
-
Patent number: 11093827
Abstract: Using a processor and a memory at a worker machine, a gradient vector is computed corresponding to a set of weights associated with a set of nodes of a neural network instance being trained in the worker machine. In an ISA vector corresponding to the gradient vector, an ISA instruction is constructed corresponding to a gradient in a set of gradients in the gradient vector, wherein a data transmission of the ISA instruction is smaller than a data transmission of the gradient. The ISA vector is transmitted from the worker machine to a parameter server, the ISA vector being responsive to one iteration of a training of the neural network instance, the ISA vector being transmitted instead of the gradient vector to reduce an amount of data transmitted from the worker machine to the parameter server for the one iteration of the training.
Type: Grant
Filed: September 20, 2017
Date of Patent: August 17, 2021
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich A. Finkler
-
Publication number: 20210149918
Abstract: Techniques regarding intelligent data pools are provided. Embodiments described herein can include a system comprising a memory that can store computer executable components. The system can also comprise a processor that can execute the computer executable components stored in the memory. The computer executable components can comprise a data pool component that performs a semantic analysis of data access patterns across a distributed computing network to partition file system objects independently of a directory structure and into groups with defined temporary access restrictions. The computer executable components can also comprise: a directory component that organizes data into the directory structure by defining sectors on a node of the distributed computing network into an address section; and a partition component that separates metadata from the data of the directory structure and partitions the metadata into the groups within a continuous virtual memory section based on the data access patterns.
Type: Application
Filed: November 15, 2019
Publication date: May 20, 2021
Inventor: Ulrich Finkler
-
Publication number: 20210150351
Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
Type: Application
Filed: December 21, 2020
Publication date: May 20, 2021
Applicant: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich A. Finkler
-
Patent number: 10977552
Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
Type: Grant
Filed: September 20, 2017
Date of Patent: April 13, 2021
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich A. Finkler
-
Patent number: 10922606
Abstract: A method for executing multi-directional reduction algorithms includes identifying a set of nodes, wherein a node includes at least one data element, creating a set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a single direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created set of partitions, creating an additional set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a different direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created additional set of partitions, and providing a set of reduced results corresponding to the at least one data element.
Type: Grant
Filed: June 13, 2017
Date of Patent: February 16, 2021
Assignee: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich A. Finkler, David S. Kung, Li Zhang
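The directional structure of that method can be illustrated on a 2D node arrangement: partition along rows and reduce, then partition the row partials along the other direction and reduce again. The grid layout and sum reduction are illustrative assumptions; the patent covers the general partitioning scheme, not this specific operator.

```python
def multi_directional_reduce(grid):
    """Reduce a 2D arrangement of node values in two directional passes:
    pass 1 reduces each row (one direction), pass 2 reduces the row
    partials along the other direction."""
    row_partials = [sum(row) for row in grid]   # pass 1: row-direction partitions
    total = sum(row_partials)                   # pass 2: column-direction partition
    return row_partials, total

# Hypothetical 3x3 grid of per-node data elements.
grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
partials, total = multi_directional_reduce(grid)
```

Splitting the reduction by direction is what lets each pass run in parallel across independent rows (or columns) of nodes.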
-
Patent number: 10831738
Abstract: Apparatuses and methods for sorting a data set. A data storage is divided into a plurality of buckets that is each associated with a respective key value. A plurality of stripes is identified in each bucket. A plurality of data stripe sets is defined that has one stripe within each respective bucket. A first and a second in-place partial bucket radix sort are performed on data items contained within the first and second data stripe sets, respectively, using an initial radix. Incorrectly sorted data items in the first bucket are grouped by a first processor and incorrectly sorted data items in the second bucket are grouped by a second processor into a respective incorrect data item group within each bucket. A radix sort is then performed using the initial radix on the items within the respective incorrect data item group. A first level sorted output is produced.
Type: Grant
Filed: December 22, 2017
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Rajesh Bordawekar, Daniel Brand, Minsik Cho, Ulrich Finkler, Ruchir Puri
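For orientation, a plain sequential LSD radix sort for non-negative integers is sketched below; the patent's contribution is running the bucket passes in parallel over stripes and then regrouping and re-sorting the few items each stripe-local pass mis-placed, which this sketch does not attempt.

```python
def radix_sort(items, bits=4):
    """Least-significant-digit radix sort over `bits`-wide digits
    (non-negative integers only)."""
    base = 1 << bits
    shift = 0
    while max(items, default=0) >> shift:       # more digits remain
        buckets = [[] for _ in range(base)]
        for x in items:
            buckets[(x >> shift) & (base - 1)].append(x)
        items = [x for bucket in buckets for x in bucket]
        shift += bits
    return items

data = [170, 45, 75, 90, 2, 24, 802, 66]
out = radix_sort(data)
```

Because each pass is a stable bucket scatter, keys already agreeing on higher digits keep their relative order, which is what makes digit-by-digit sorting correct.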
-
Publication number: 20200311539
Abstract: Various embodiments are provided for compression for allreduce in deep learning by one or more processors in a computing system. A gradient may be compressed for synchronization in a data parallel deep neural network training for allreduce by sharing a consensus vector between each node in a plurality of nodes to ensure identical indexing in each of the plurality of nodes prior to performing sparse encoding.
Type: Application
Filed: March 28, 2019
Publication date: October 1, 2020
Applicant: International Business Machines Corporation
Inventors: Minsik Cho, Wei Zhang, Ulrich Finkler
-
Publication number: 20200304376
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Application
Filed: February 25, 2020
Publication date: September 24, 2020
Applicant: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
-
Publication number: 20200218689
Abstract: Embodiments for pipelining multi-directional reduction by one or more processors in a computing system. One or more reduce scatter operations and one or more all-gather operations may be assigned to each of a plurality of independent networks. The one or more reduce scatter operations and the one or more all-gather operations may be sequentially executed in each of the plurality of independent networks according to a serialized execution order and a defined time period.
Type: Application
Filed: January 7, 2019
Publication date: July 9, 2020
Applicant: International Business Machines Corporation
Inventors: Minsik Cho, Ulrich Finkler, David Kung
-
Publication number: 20200175370
Abstract: Various embodiments are provided for decentralized distributed deep learning by one or more processors in a computing system. Asynchronous distributed training of one or more machine learning models may be performed by generating a list of neighbor nodes for each node in a plurality of nodes and creating a first thread for continuous communication according to a weight management operation and a second thread for continuous computation of a gradient for each node. One or more variables are shared between the first thread and the second thread.
Type: Application
Filed: November 30, 2018
Publication date: June 4, 2020
Applicant: International Business Machines Corporation
Inventors: Wei Zhang, Li Zhang, Ulrich Finkler, Minsik Cho, David Kung
-
Patent number: 10574533
Abstract: A method, system, and computer program product to manage a network comprising a plurality of interconnected components are described. The method includes obtaining a set of all the components that are part of the network over time, and identifying one or more repeating patterns of components among the set of all the components as corresponding lower-level definitions to generate a hierarchical set of all the components. The method also includes obtaining time-varying information regarding topology and operational values within the network, and creating a representation of the network at a set of times based on the hierarchical set of all the components and the time-varying information.
Type: Grant
Filed: January 30, 2018
Date of Patent: February 25, 2020
Assignee: Utopus Insights, Inc.
Inventors: Ulrich A. Finkler, Fook-Luen Heng, Steven N. Hirsch, Mark A. Lavin, Jun Mei Qu, Amith Singhee, Wei Wu
-
Patent number: 10516767
Abstract: A method of presenting data over a Web service interface includes: establishing, by a first computer process, a persistent transmission control protocol (TCP) network connection between the first computer process and a second computer process; dynamically allocating, by the second computer process, memory in response to receipt of static data over the persistent TCP network connection from the first computer process; updating, by the second computer process, the memory in response to receipt of dynamic data received over the persistent TCP network connection from the first computer process; and enabling, by the second computer process, a Web server to access the updated data for presentation by the Web service interface. The static data identifies a given entity and the dynamic data includes metric data provided for the entity.
Type: Grant
Filed: April 18, 2016
Date of Patent: December 24, 2019
Assignee: GLOBALFOUNDRIES Inc.
Inventors: Amith Singhee, Steven Hirsch, Ashok Pon Kumar Sree Prakash, Ulrich A. Finkler, David O. Melville, Scott M. Mansfield
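The static/dynamic split in that abstract — allocate once per entity on a static record, then apply small metric updates in place over the same persistent connection — can be sketched as the receiving process alone. The message format, entity names, and `MetricStore` class are hypothetical; the TCP transport is elided.

```python
class MetricStore:
    """Sketch of the second computer process: allocate on a 'static'
    entity record, update in place on each 'dynamic' metric record
    arriving over one persistent connection."""
    def __init__(self):
        self.entities = {}

    def handle(self, msg):
        kind, entity_id, payload = msg
        if kind == "static":              # identifies the entity; allocate once
            self.entities[entity_id] = {"meta": payload, "metrics": {}}
        elif kind == "dynamic":           # frequent small metric updates
            self.entities[entity_id]["metrics"].update(payload)

store = MetricStore()
store.handle(("static", "turbine-7", {"site": "A"}))
store.handle(("dynamic", "turbine-7", {"rpm": 12.5}))
store.handle(("dynamic", "turbine-7", {"rpm": 13.1, "temp": 55}))
```

A Web server fronting `store.entities` would then always serve the latest metrics without re-sending the static entity description on every update.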