Patents by Inventor Dionysios Diamantopoulos

Dionysios Diamantopoulos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11907828
    Abstract: A field programmable gate array (FPGA) may be used for inference of a trained deep neural network (DNN). The trained DNN may comprise a set of parameters, and the FPGA may have a first precision configuration defining first number representations of the set of parameters. The FPGA may determine different precision configurations of the trained DNN. One of these precision configurations may define second number representations of a subset of the set of parameters. For each of the determined precision configurations, a bitstream file may be provided. The bitstream files may be stored so that the FPGA may be programmed using one of the stored bitstream files for inference of the trained DNN.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Mitra Purandare, Dionysios Diamantopoulos, Raphael Polig
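
    The abstract above describes keeping one pre-generated bitstream per precision configuration and programming the FPGA with whichever one fits the task. A minimal sketch of that selection step, with entirely hypothetical configuration names, accuracies, and file names:

    ```python
    # Illustrative sketch (not from the patent text): choosing among
    # pre-generated FPGA bitstreams, one per precision configuration.
    # All names and numbers below are hypothetical.

    # precision config -> (bits per parameter, estimated accuracy, bitstream file)
    BITSTREAMS = {
        "fp32": (32, 0.912, "dnn_fp32.bit"),
        "int8": (8,  0.905, "dnn_int8.bit"),
        "int4": (4,  0.871, "dnn_int4.bit"),
    }

    def pick_bitstream(min_accuracy: float) -> str:
        """Return the lowest-precision bitstream meeting an accuracy floor."""
        candidates = [(bits, f) for bits, acc, f in BITSTREAMS.values()
                      if acc >= min_accuracy]
        if not candidates:
            raise ValueError("no precision configuration meets the accuracy floor")
        return min(candidates)[1]   # fewest bits among valid configurations

    print(pick_bitstream(0.90))  # int8 meets the floor with fewer bits than fp32
    ```

    The point of storing multiple bitstreams is that reprogramming replaces regenerating: switching precision becomes a lookup plus a device reconfiguration rather than a new synthesis run.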
  • Patent number: 11863385
    Abstract: The invention is notably directed to a method, computer program product, and computer system for running software inside containers. The method relies on a computerized system that includes a composable disaggregated infrastructure, in addition to general-purpose hardware. The computerized system is configured to dynamically allocate computerized resources, which include both general resources and specialized resources. The former are enabled by the general-purpose hardware, while the latter are enabled by specialized network-attached hardware components of the composable disaggregated infrastructure. The method maintains a table capturing specializations of the specialized network-attached hardware components. At runtime, software is run inside each container by executing corresponding functions.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: January 2, 2024
    Assignee: International Business Machines Corporation
    Inventors: Dionysios Diamantopoulos, Burkhard Ringlein, Francois Abel
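
    The "table capturing specializations" in the abstract above can be pictured as a lookup from function name to a network-attached specialized resource, with a fall-back to general-purpose hardware. A hypothetical sketch (the resource URIs and function names are invented for illustration):

    ```python
    # Hypothetical sketch of the specialization table: map functions to
    # network-attached specialized hardware, falling back to general-purpose
    # resources when no specialization is registered.

    SPECIALIZATIONS = {
        "matmul":  "fpga://10.0.0.5/matmul_v2",    # network-attached FPGA kernel
        "encrypt": "asic://10.0.0.9/aes_engine",   # network-attached crypto ASIC
    }

    def resolve(function_name: str) -> str:
        """Return the resource a container should execute this function on."""
        return SPECIALIZATIONS.get(function_name, "cpu://local")

    print(resolve("matmul"))  # dispatched to the FPGA specialization
    print(resolve("sort"))    # no specialization -> general-purpose CPU
    ```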
  • Patent number: 11783200
    Abstract: A field-programmable gate array and a method to implement an artificial neural network. A trained model of the neural network, in which weights are defined in a floating-point format, is processed to quantize each set of weights to a respective reduced-precision format in dependence on the effect of quantization on the accuracy of the model. For each set of weights, a partitioning scheme is defined for a set of block memories of the apparatus such that a plurality k of those weights can be stored in each addressable location of the set of memories, wherein k differs for different sets of weights. The apparatus can be programmed to implement the neural network such that the weights in each set are persistently stored in a set of block memories partitioned according to the partitioning scheme for that set of weights.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 10, 2023
    Assignee: International Business Machines Corporation
    Inventors: Dionysios Diamantopoulos, Heiner Giefers, Christoph Hagleitner
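
    The partitioning idea above, k reduced-precision weights per addressable block-memory location, can be sketched as bit-packing. This assumes a 36-bit memory word (a common FPGA block-RAM width, not stated in the abstract); k then follows from the per-set weight width:

    ```python
    # Illustrative sketch: pack k reduced-precision weights into one
    # block-memory word. A 36-bit word width is assumed; k = 36 // bits
    # differs per weight set, as in the abstract.

    WORD_BITS = 36

    def pack_weights(weights, bits):
        """Pack integer weights, `bits` wide each, into WORD_BITS-wide words."""
        k = WORD_BITS // bits            # weights per addressable location
        words = []
        for i in range(0, len(weights), k):
            word = 0
            for j, w in enumerate(weights[i:i + k]):
                word |= (w & ((1 << bits) - 1)) << (j * bits)
            words.append(word)
        return k, words

    k, words = pack_weights([1, 2, 3, 4, 5], bits=9)  # 9-bit weights -> k = 4
    print(k, len(words))  # 4 weights per word, 2 words for 5 weights
    ```

    Because k varies per weight set, a set quantized to fewer bits packs more weights per word, which is what makes per-set reduced precision pay off in on-chip memory capacity.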
  • Publication number: 20230239209
    Abstract: The invention is notably directed to a method, computer program product, and computer system for running software inside containers. The method relies on a computerized system that includes a composable disaggregated infrastructure, in addition to general-purpose hardware. The computerized system is configured to dynamically allocate computerized resources, which include both general resources and specialized resources. The former are enabled by the general-purpose hardware, while the latter are enabled by specialized network-attached hardware components of the composable disaggregated infrastructure. The method maintains a table capturing specializations of the specialized network-attached hardware components. At runtime, software is run inside each container by executing corresponding functions.
    Type: Application
    Filed: January 21, 2022
    Publication date: July 27, 2023
    Inventors: Dionysios Diamantopoulos, Burkhard Ringlein, Francois Abel
  • Patent number: 11630696
    Abstract: The present disclosure relates to a messaging method for a hardware acceleration system. The method includes determining exchange message types to be exchanged with a hardware accelerator in accordance with an application performed by the hardware acceleration system. The exchange message types indicate the number and types of the variables of the messages. The method also includes selecting schemas from a schema database; each schema indicates a precision representation of the variables of the messages associated with it. The selected schemas correspond to the determined exchange message types. Further, the method includes configuring a serial interface of the hardware accelerator in accordance with the selected schemas, to enable a message exchange including the messages.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: April 18, 2023
    Assignee: International Business Machines Corporation
    Inventors: Dionysios Diamantopoulos, Mitra Purandare, Burkhard Ringlein, Christoph Hagleitner
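
    The schema-driven messaging described above can be pictured as a lookup of a wire format per message type, fixing the number, type, and precision of its variables. A minimal sketch using Python's `struct` formats as stand-in schemas (the message types and layouts are hypothetical):

    ```python
    import struct

    # Hypothetical sketch of schema-driven serialization for an accelerator
    # link: each schema fixes the number, type, and precision of a message's
    # variables; the serial interface is configured per selected schema.

    SCHEMA_DB = {
        "sensor_msg": "<hh",  # two 16-bit ints (reduced precision, hypothetical)
        "result_msg": "<f",   # one 32-bit float
    }

    def serialize(msg_type: str, *values) -> bytes:
        """Serialize values according to the schema chosen for this message type."""
        return struct.pack(SCHEMA_DB[msg_type], *values)

    payload = serialize("sensor_msg", 100, -7)
    print(len(payload))  # 4 bytes: two 16-bit variables
    ```

    Selecting a reduced-precision schema shrinks every message on the link, which is the payoff of configuring the serial interface per application rather than using one fixed format.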
  • Patent number: 11175957
    Abstract: The present disclosure relates to a hardware accelerator for executing a computation task composed of a set of operations. The hardware accelerator comprises a controller and a set of computation units. Each computation unit of the set of computation units is configured to receive input data of an operation of the set of operations and to perform the operation, wherein the input data is represented with a distinct bit length associated with each computation unit. The controller is configured to receive the input data represented with a certain bit length of the bit lengths and to select one of the set of computation units that can deliver a valid result and that is associated with a bit length smaller than or equal to the certain bit length.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: November 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Dionysios Diamantopoulos, Florian Michael Scheidegger, Adelmo Cristiano Innocenza Malossi, Christoph Hagleitner, Konstantinos Bekas
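
    The controller's selection rule above, pick a computation unit whose bit length does not exceed the input's bit length and which can still deliver a valid result, can be sketched for non-negative integer operands. The unit bit lengths and the "narrowest valid unit" tie-break are assumptions for illustration:

    ```python
    # Hypothetical sketch of the controller's selection rule: among units
    # whose bit length does not exceed the input's bit length and which can
    # still deliver a valid result, pick one (here: the narrowest).

    UNIT_BIT_LENGTHS = [8, 16, 32]  # assumed computation-unit precisions

    def needed_bits(value: int) -> int:
        """Minimum bits to represent a non-negative integer operand."""
        return max(1, value.bit_length())

    def select_unit(value: int, input_bits: int) -> int:
        """Pick the smallest unit <= input_bits that still fits the operand."""
        valid = [b for b in UNIT_BIT_LENGTHS
                 if b <= input_bits and needed_bits(value) <= b]
        if not valid:
            raise ValueError("no unit can deliver a valid result")
        return min(valid)

    print(select_unit(200, 32))    # 200 needs 8 bits -> the 8-bit unit suffices
    print(select_unit(70000, 32))  # needs 17 bits -> only the 32-bit unit fits
    ```

    Choosing the narrowest unit that still yields a valid result is what lets the accelerator trade unused precision for lower energy and area per operation.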
  • Publication number: 20210303352
    Abstract: The present disclosure relates to a messaging method for a hardware acceleration system. The method includes determining exchange message types to be exchanged with a hardware accelerator in accordance with an application performed by the hardware acceleration system. The exchange message types indicate the number and types of the variables of the messages. The method also includes selecting schemas from a schema database; each schema indicates a precision representation of the variables of the messages associated with it. The selected schemas correspond to the determined exchange message types. Further, the method includes configuring a serial interface of the hardware accelerator in accordance with the selected schemas, to enable a message exchange including the messages.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventors: Dionysios Diamantopoulos, Mitra Purandare, Burkhard Ringlein, Christoph Hagleitner
  • Publication number: 20210064975
    Abstract: A field programmable gate array (FPGA) may be used for inference of a trained deep neural network (DNN). The trained DNN may comprise a set of parameters, and the FPGA may have a first precision configuration defining first number representations of the set of parameters. The FPGA may determine different precision configurations of the trained DNN. One of these precision configurations may define second number representations of a subset of the set of parameters. For each of the determined precision configurations, a bitstream file may be provided. The bitstream files may be stored so that the FPGA may be programmed using one of the stored bitstream files for inference of the trained DNN.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 4, 2021
    Inventors: Mitra Purandare, Dionysios Diamantopoulos, Raphael Polig
  • Publication number: 20200257986
    Abstract: A field-programmable gate array and a method to implement an artificial neural network. A trained model of the neural network, in which weights are defined in a floating-point format, is processed to quantize each set of weights to a respective reduced-precision format in dependence on the effect of quantization on the accuracy of the model. For each set of weights, a partitioning scheme is defined for a set of block memories of the apparatus such that a plurality k of those weights can be stored in each addressable location of the set of memories, wherein k differs for different sets of weights. The apparatus can be programmed to implement the neural network such that the weights in each set are persistently stored in a set of block memories partitioned according to the partitioning scheme for that set of weights.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 13, 2020
    Inventors: Dionysios Diamantopoulos, Heiner Giefers, Christoph Hagleitner