Patents by Inventor Jason A. Viehland

Jason A. Viehland has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
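
Brief, hypothetical code sketches illustrating the main techniques described in the abstracts below appear after the listing; they are reconstructions for orientation only, not the patented implementations.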

  • Patent number: 11119766
    Abstract: Provided are techniques for a hardware accelerator with locally stored macros. A plurality of macros are stored in a lookup memory of a hardware accelerator. In response to receiving an operation code, the operation code is mapped to one or more macros of the plurality of macros, wherein each of the one or more macros includes micro-instructions. Each of the micro-instructions of the one or more macros is routed to a function block of a plurality of function blocks. Each of the micro-instructions is processed with the plurality of function blocks. Data from the processing of each of the micro-instructions is stored in an accelerator memory of the hardware accelerator. The data is moved from the accelerator memory to a host memory.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: September 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael J. Healy, Jason A. Viehland, Jeffrey H. Derby, Diana L. Orf
  • Patent number: 10915477
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products are provided for processing events, including efficiently processing interrupt service requests for peripheral devices, such as hardware accelerators, utilized in parallel processing. For each core engine of a peripheral device, the peripheral device detects whether one or more interrupt signals have been generated. Information associated with the one or more interrupt signals is stored in one or more registers of peripheral device memory for each core engine. The information is aggregated and stored in a vector of registers in the peripheral device memory, and the aggregated information is written to memory associated with a CPU to enable CPU processing of interrupt requests from each core engine of the peripheral device.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Chachi Ching, John A. Flanders, Michael J. Healy, Kevin J. Twilliger, Jason A. Viehland
  • Patent number: 10831713
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Publication number: 20200183686
    Abstract: Provided are techniques for a hardware accelerator with locally stored macros. A plurality of macros are stored in a lookup memory of a hardware accelerator. In response to receiving an operation code, the operation code is mapped to one or more macros of the plurality of macros, wherein each of the one or more macros includes micro-instructions. Each of the micro-instructions of the one or more macros is routed to a function block of a plurality of function blocks. Each of the micro-instructions is processed with the plurality of function blocks. Data from the processing of each of the micro-instructions is stored in an accelerator memory of the hardware accelerator. The data is moved from the accelerator memory to a host memory.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 11, 2020
    Inventors: Michael J. Healy, Jason A. Viehland, Jeffrey H. Derby, Diana L. Orf
  • Publication number: 20190317910
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products are provided for processing events, including efficiently processing interrupt service requests for peripheral devices, such as hardware accelerators, utilized in parallel processing. For each core engine of a peripheral device, the peripheral device detects whether one or more interrupt signals have been generated. Information associated with the one or more interrupt signals is stored in one or more registers of peripheral device memory for each core engine. The information is aggregated and stored in a vector of registers in the peripheral device memory, and the aggregated information is written to memory associated with a CPU to enable CPU processing of interrupt requests from each core engine of the peripheral device.
    Type: Application
    Filed: June 25, 2019
    Publication date: October 17, 2019
    Inventors: Chachi Ching, John A. Flanders, Michael J. Healy, Kevin J. Twilliger, Jason A. Viehland
  • Patent number: 10387343
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products are provided for processing events, including efficiently processing interrupt service requests for peripheral devices, such as hardware accelerators, utilized in parallel processing. For each core engine of a peripheral device, the peripheral device detects whether one or more interrupt signals have been generated. Information associated with the one or more interrupt signals is stored in one or more registers of peripheral device memory for each core engine. The information is aggregated and stored in a vector of registers in the peripheral device memory, and the aggregated information is written to memory associated with a CPU to enable CPU processing of interrupt requests from each core engine of the peripheral device.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: August 20, 2019
    Assignee: International Business Machines Corporation
    Inventors: Chachi Ching, John A. Flanders, Michael J. Healy, Kevin J. Twilliger, Jason A. Viehland
  • Publication number: 20180052863
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Application
    Filed: October 24, 2017
    Publication date: February 22, 2018
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Patent number: 9858285
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: January 2, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Patent number: 9836473
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Grant
    Filed: October 3, 2014
    Date of Patent: December 5, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Patent number: 9684681
    Abstract: Database processing using columns to present decompressed column data to a processing unit without changing the underlying row-based database architecture. For some embodiments, a database accelerator is used to efficiently process the columns of a database and output tuples to a processing unit's memory, such that the columns can be quickly processed (with the advantages of a column-based architecture) to create tuples of requested data, but without having to depart from a row-based architecture at the processing unit level or having decompressed data scattered throughout the processing unit's memory.
    Type: Grant
    Filed: May 7, 2015
    Date of Patent: June 20, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Viehland, John S. Yates, Jr.
  • Publication number: 20160299858
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products are provided for processing events, including efficiently processing interrupt service requests for peripheral devices, such as hardware accelerators, utilized in parallel processing. For each core engine of a peripheral device, the peripheral device detects whether one or more interrupt signals have been generated. Information associated with the one or more interrupt signals is stored in one or more registers of peripheral device memory for each core engine. The information is aggregated and stored in a vector of registers in the peripheral device memory, and the aggregated information is written to memory associated with a CPU to enable CPU processing of interrupt requests from each core engine of the peripheral device.
    Type: Application
    Filed: April 7, 2015
    Publication date: October 13, 2016
    Inventors: Chachi Ching, John A. Flanders, Michael J. Healy, Kevin J. Twilliger, Jason A. Viehland
  • Publication number: 20160098420
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Application
    Filed: May 4, 2015
    Publication date: April 7, 2016
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Publication number: 20160098439
    Abstract: According to embodiments of the present invention, machines, systems, methods and computer program products for hardware acceleration are presented. A plurality of computational nodes for processing data is provided, each node performing a corresponding operation for data received at that node. A metric module is used to determine a compression benefit metric pertaining to performance of the corresponding operations of one or more computational nodes with recompressed data. An accelerator module recompresses data for processing by the one or more computational nodes based on the compression benefit metric indicating a benefit gained by using the recompressed data. A distribution function may be used to distribute data among a plurality of nodes.
    Type: Application
    Filed: October 3, 2014
    Publication date: April 7, 2016
    Inventors: Garth A. Dickie, Michael Sporer, Jason A. Viehland
  • Publication number: 20150234871
    Abstract: Database processing using columns to present decompressed column data to a processing unit without changing the underlying row-based database architecture. For some embodiments, a database accelerator is used to efficiently process the columns of a database and output tuples to a processing unit's memory, such that the columns can be quickly processed (with the advantages of a column-based architecture) to create tuples of requested data, but without having to depart from a row-based architecture at the processing unit level or having decompressed data scattered throughout the processing unit's memory.
    Type: Application
    Filed: May 7, 2015
    Publication date: August 20, 2015
    Applicant: International Business Machines Corporation
    Inventors: Jason A. Viehland, John S. Yates, Jr.
  • Patent number: 9087095
    Abstract: Database processing using columns to present decompressed column data to a processing unit without changing the underlying row-based database architecture. For some embodiments, a database accelerator is used to efficiently process the columns of a database and output tuples to a processing unit's memory, such that the columns can be quickly processed (with the advantages of a column-based architecture) to create tuples of requested data, but without having to depart from a row-based architecture at the processing unit level or having decompressed data scattered throughout the processing unit's memory.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: July 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Jason A. Viehland, John S. Yates, Jr.
  • Publication number: 20130346428
    Abstract: Database processing using columns to present decompressed column data to a processing unit without changing the underlying row-based database architecture. For some embodiments, a database accelerator is used to efficiently process the columns of a database and output tuples to a processing unit's memory, such that the columns can be quickly processed (with the advantages of a column-based architecture) to create tuples of requested data, but without having to depart from a row-based architecture at the processing unit level or having decompressed data scattered throughout the processing unit's memory.
    Type: Application
    Filed: June 21, 2012
    Publication date: December 26, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Viehland, John S. Yates, Jr.
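
Illustrative sketch for patent 11119766 (and publication 20200183686): a minimal, hypothetical model of mapping an operation code to locally stored macros, routing each macro's micro-instructions to function blocks, staging results in accelerator memory, and moving them to host memory. The opcode table, function blocks, and all names are invented for illustration; this is not the patented hardware design.

    # Hypothetical lookup memory: opcode -> list of macros; each macro is a list of
    # (function_block, micro_instruction) pairs.
    LOOKUP_MEMORY = {
        0x01: [
            [("load", "fetch block 0"), ("compare", "greater-than 100")],
            [("store", "write matches")],
        ],
    }

    # Hypothetical function blocks, each consuming one micro-instruction.
    FUNCTION_BLOCKS = {
        "load":    lambda instr: f"loaded[{instr}]",
        "compare": lambda instr: f"compared[{instr}]",
        "store":   lambda instr: f"stored[{instr}]",
    }

    def run_opcode(opcode, accelerator_memory, host_memory):
        # Map the received operation code to its locally stored macros.
        for macro in LOOKUP_MEMORY[opcode]:
            # Route each micro-instruction to a function block and keep the
            # result in accelerator memory.
            for block, micro_instruction in macro:
                accelerator_memory.append(FUNCTION_BLOCKS[block](micro_instruction))
        # Move the data from accelerator memory to host memory.
        host_memory.extend(accelerator_memory)
        accelerator_memory.clear()

    accel_mem, host_mem = [], []
    run_opcode(0x01, accel_mem, host_mem)
    print(host_mem)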
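
Illustrative sketch for patents 10915477 and 10387343 (and their published applications): a minimal, hypothetical model of capturing per-core-engine interrupt information in device registers, aggregating it into a vector of registers, and writing the aggregate to CPU-visible memory for servicing. Register layouts and names are assumptions, not the patented design.

    NUM_ENGINES = 4

    # Per-engine device registers: bit i set means interrupt source i fired.
    engine_registers = [0] * NUM_ENGINES

    # Vector of registers in peripheral-device memory holding aggregated information.
    aggregate_vector = [0] * NUM_ENGINES

    # Memory associated with the CPU; a host interrupt handler would read this.
    cpu_memory = {"pending": [0] * NUM_ENGINES}

    def raise_interrupt(engine, source_bit):
        # The device records which interrupt signal was generated for this core engine.
        engine_registers[engine] |= 1 << source_bit

    def aggregate_and_publish():
        # Aggregate each engine's register into the vector, clear the per-engine
        # register, and write the vector to CPU memory so the CPU can service
        # all core engines in one pass.
        for engine in range(NUM_ENGINES):
            aggregate_vector[engine] |= engine_registers[engine]
            engine_registers[engine] = 0
        cpu_memory["pending"] = list(aggregate_vector)

    raise_interrupt(engine=1, source_bit=0)
    raise_interrupt(engine=3, source_bit=2)
    aggregate_and_publish()
    print(cpu_memory["pending"])  # [0, 1, 0, 4]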
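
Illustrative sketch for patents 10831713, 9858285, and 9836473: a minimal, hypothetical reading of the compression benefit metric, where data is recompressed before a node's operation only if the estimated saving outweighs a fixed recompression cost, and a simple round-robin stands in for the distribution function. The metric, cost model, and helper names are invented for illustration.

    import zlib

    RECOMPRESSION_COST = 64  # assumed fixed overhead, in bytes

    def compression_benefit_metric(data: bytes) -> int:
        # Toy metric: bytes saved by recompressing minus a fixed recompression cost.
        return (len(data) - len(zlib.compress(data, 9))) - RECOMPRESSION_COST

    def process_at_node(data: bytes) -> bytes:
        # Recompress only when the metric indicates a benefit; the node's
        # corresponding operation (elided) would then run on the data.
        if compression_benefit_metric(data) > 0:
            data = zlib.compress(data, 9)
        return data

    def distribute(data_items, num_nodes):
        # Simple distribution function: assign items to nodes round-robin.
        buckets = [[] for _ in range(num_nodes)]
        for i, item in enumerate(data_items):
            buckets[i % num_nodes].append(process_at_node(item))
        return buckets

    result = distribute([b"abc" * 1000, b"xyz" * 10], num_nodes=2)
    print([[len(d) for d in bucket] for bucket in result])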
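
Illustrative sketch for patents 9684681 and 9087095 (and their published applications): a minimal, hypothetical model of decompressing only the requested columns and stitching them into row tuples destined for the processing unit's memory, so the host keeps a row-based view of the data. The storage format and helper names are invented for illustration, not the patented accelerator.

    import json
    import zlib

    def compress_column(values):
        # Hypothetical columnar storage: each column is JSON-encoded and compressed.
        return zlib.compress(json.dumps(values).encode())

    compressed_columns = {
        "id":    compress_column([1, 2, 3]),
        "name":  compress_column(["ann", "bob", "cat"]),
        "score": compress_column([9.5, 7.0, 8.2]),
    }

    def columns_to_tuples(requested_columns):
        # "Accelerator" step: decompress only the requested columns ...
        decompressed = [json.loads(zlib.decompress(compressed_columns[name]))
                        for name in requested_columns]
        # ... and emit row tuples for the processing unit's memory.
        return list(zip(*decompressed))

    processing_unit_memory = columns_to_tuples(["id", "score"])
    print(processing_unit_memory)  # [(1, 9.5), (2, 7.0), (3, 8.2)]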