Patents Assigned to IP Reservoir, LLC
  • Patent number: 11789965
    Abstract: Various methods and apparatuses are described for performing high speed format translations of incoming data, where the incoming data is arranged in a delimited data format. As an example, the data in the delimited data format can be translated to a structured format such as a fixed field format using pipelined operations. A reconfigurable logic device can be used in exemplary embodiments as a platform for the format translation.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: October 17, 2023
    Assignee: IP Reservoir, LLC
    Inventors: Michael John Henrichs, Joseph M. Lancaster, Roger Dean Chamberlain, Jason R. White, Kevin Brian Sprague, Terry Tidwell
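
For illustration only (not taken from the patent above), the following is a minimal software sketch of the kind of delimited-to-fixed-field translation described in the abstract of patent 11789965. The delimiter, field widths, and padding rule are assumptions, and the patent itself targets pipelined hardware rather than Python.

```python
# Illustrative software sketch of delimited-to-fixed-field translation.
# The patent targets pipelined hardware; this is only a functional analogy.
# The field widths, delimiter, and padding policy are assumptions.

FIELD_WIDTHS = [10, 8, 12]   # hypothetical fixed-field layout
DELIMITER = ","

def to_fixed_field(record: str) -> str:
    """Translate one delimited record into a fixed-field record."""
    fields = record.split(DELIMITER)
    out = []
    for value, width in zip(fields, FIELD_WIDTHS):
        out.append(value[:width].ljust(width))  # truncate or pad to width
    return "".join(out)

if __name__ == "__main__":
    for line in ["IBM,142.25,2020-04-13", "MSFT,178.6,2020-04-13"]:
        print(repr(to_fixed_field(line)))
```
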
  • Patent number: 11677417
    Abstract: Disclosed herein are methods and systems for hardware-accelerating various data processing operations in a rule-based decision-making system such as a business rules engine, an event stream processor, and a complex event stream processor. Preferably, incoming data streams are checked against a plurality of rule conditions. The data processing operations that are hardware-accelerated include rule condition check operations, filtering operations, and path merging operations. The rule condition check operations generate rule condition check results for the processed data streams, wherein the rule condition check results are indicative of any rule conditions which have been satisfied by the data streams. The generation of such results with a low degree of latency provides enterprises with the ability to perform timely decision-making based on the data present in received data streams.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 13, 2023
    Assignee: IP Reservoir, LLC
    Inventors: Ronald S. Indeck, David Mark Indeck, Naveen Singla, Jason R. White
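
As a rough illustration of the rule condition checking described in the abstract of patent 11677417, here is a plain-software sketch; the rule names, conditions, and event fields are invented for the example, and the patent's point is performing these checks in hardware at low latency.

```python
# Illustrative sketch of rule condition checking over an event stream.
# The rules and event fields below are made up for the example.

rules = {
    "large_order": lambda e: e.get("qty", 0) > 1000,
    "price_spike": lambda e: e.get("price", 0.0) > 150.0,
}

def check_rules(event: dict) -> list:
    """Return the names of all rule conditions satisfied by the event."""
    return [name for name, cond in rules.items() if cond(event)]

if __name__ == "__main__":
    stream = [
        {"symbol": "ABC", "qty": 1500, "price": 101.0},
        {"symbol": "XYZ", "qty": 200,  "price": 155.5},
    ]
    for event in stream:
        print(event["symbol"], check_rules(event))
```
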
  • Patent number: 11449538
    Abstract: Disclosed herein are methods and systems for integrating an enterprise's structured and unstructured data to provide users and enterprise applications with efficient and intelligent access to that data. In accordance with exemplary embodiments, the generation of feature vectors about unstructured data can be hardware-accelerated by processing streaming unstructured data through a reconfigurable logic device, a graphics processor unit (GPU), or chip multi-processor (CMP) to determine features that can aid clustering of similar data objects.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: September 20, 2022
    Assignee: IP Reservoir, LLC
    Inventors: Ronald S. Indeck, David Mark Indeck, Naveen Singla, David E. Taylor
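
The following sketch illustrates, in ordinary software, the idea in patent 11449538 of generating feature vectors from unstructured data so that similar objects can be clustered. The hashing-trick featurization and the vector length are assumptions, not the patent's method.

```python
# Illustrative sketch: turning unstructured text into fixed-length feature
# vectors that a clustering algorithm could consume. The hashing scheme and
# dimensionality are assumptions for the example.

import hashlib

DIM = 16  # hypothetical feature-vector length

def feature_vector(text: str) -> list:
    """Hash each token into one of DIM buckets and count occurrences."""
    vec = [0] * DIM
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1
    return vec

if __name__ == "__main__":
    docs = ["fast stream processing of market data",
            "stream processing of unstructured text data"]
    for d in docs:
        print(feature_vector(d))
```
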
  • Patent number: 11416778
    Abstract: A feature extractor for a convolutional neural network (CNN) is disclosed, wherein the feature extractor is deployed on a member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processing unit (GPU), and (3) a chip multi-processor (CMP). A processing pipeline can be implemented on the member, where the processing pipeline implements a plurality of convolutional layers for the CNN, wherein each of a plurality of the convolutional layers comprises (1) a convolution stage that convolves first data with second data if activated and (2) a sub-sampling stage that performs a member of the group consisting of (i) a max pooling operation, (ii) an averaging operation, and (iii) a sampling operation on data received thereby if activated. The processing pipeline can be controllable with respect to which of the convolution stages are activated/deactivated and which of the sub-sampling stages are activated/deactivated when processing streaming data through the processing pipeline.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: August 16, 2022
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Ronald S. Indeck
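
To make the controllable convolution/sub-sampling pipeline of patent 11416778 concrete, here is a 1-D, list-based software sketch. The kernels, pooling size, and activation flags are assumptions; the patent deploys such a pipeline on reconfigurable logic, a GPU, or a CMP, not in Python.

```python
# Illustrative sketch of a controllable convolution/sub-sampling pipeline
# using 1-D data and plain Python lists. Stage activation flags mirror the
# activate/deactivate idea in the abstract; kernels and sizes are assumptions.

def convolve(data, kernel):
    n = len(kernel)
    return [sum(data[i + j] * kernel[j] for j in range(n))
            for i in range(len(data) - n + 1)]

def max_pool(data, size=2):
    return [max(data[i:i + size]) for i in range(0, len(data) - size + 1, size)]

class Stage:
    def __init__(self, kernel, pool=True, active=True):
        self.kernel, self.pool, self.active = kernel, pool, active

    def __call__(self, data):
        if not self.active:          # deactivated stage: pass data through
            return data
        out = convolve(data, self.kernel)
        return max_pool(out) if self.pool else out

if __name__ == "__main__":
    pipeline = [Stage([1, 0, -1]), Stage([0.5, 0.5], active=False)]
    x = [3, 1, 4, 1, 5, 9, 2, 6]
    for stage in pipeline:
        x = stage(x)
    print(x)
```
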
  • Patent number: 11275594
    Abstract: A system is disclosed that comprises a field programmable gate array (FPGA), a network interface, and hardware description code, wherein the hardware description code is compilable into a plurality of bit configuration files for loading onto the FPGA, wherein each bit configuration file defines a pipelined processing operation for a hardware template. The FPGA comprises configurable hardware logic, and the FPGA can be accessible over a network via the network interface for commanding the FPGA to load a bit configuration file from among the bit configuration files onto the FPGA to thereby configure hardware logic on the FPGA to perform the pipelined processing operation defined by the loaded bit configuration file, and wherein the FPGA is configured to (1) receive streaming data and (2) process the streaming data through the configured hardware logic to perform the pipelined processing operation defined by the loaded bit configuration file on the streaming data.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: March 15, 2022
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Mark Allen Franklin, Ronald S. Indeck, Ron K. Cytron, Sharath R. Cholleti
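
The control flow described in patent 11275594 (remotely commanding which configuration is loaded, then streaming data through it) can be sketched in software as follows. The configuration names and operations below stand in for bit configuration files and are purely hypothetical.

```python
# Illustrative sketch of the control flow in the abstract: a command selects
# one of several precompiled configurations, and subsequent streaming data is
# processed by whichever configuration is currently loaded. The names and
# operations are stand-ins, not actual bit configuration files.

CONFIGURATIONS = {
    "uppercase": lambda rec: rec.upper(),
    "reverse":   lambda rec: rec[::-1],
}

class Device:
    def __init__(self):
        self.operation = None

    def load(self, name):
        """Analogy for loading a bit configuration file onto the FPGA."""
        self.operation = CONFIGURATIONS[name]

    def process(self, stream):
        return [self.operation(rec) for rec in stream]

if __name__ == "__main__":
    dev = Device()
    dev.load("uppercase")
    print(dev.process(["abc", "def"]))
```
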
  • Patent number: 10965317
    Abstract: Disclosed herein are methods and systems for hardware-accelerating various data processing operations in a rule-based decision-making system such as a business rules engine, an event stream processor, and a complex event stream processor. Preferably, incoming data streams are checked against a plurality of rule conditions. The data processing operations that are hardware-accelerated include rule condition check operations, filtering operations, and path merging operations. The rule condition check operations generate rule condition check results for the processed data streams, wherein the rule condition check results are indicative of any rule conditions which have been satisfied by the data streams. The generation of such results with a low degree of latency provides enterprises with the ability to perform timely decision-making based on the data present in received data streams.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 30, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Ronald S. Indeck, David Mark Indeck, Naveen Singla, Jason R. White
  • Patent number: 10963962
    Abstract: Various techniques are disclosed for offloading the processing of data packets that contain financial market data. For example, incoming data packets can be processed through an offload processor to generate a new stream of outgoing data packets that organize financial market data in a manner different than the incoming data packets. Furthermore, in an exemplary embodiment, the offloaded processing can be resident in an intelligent switch, such as an intelligent switch upstream or downstream from an electronic trading platform.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: March 30, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Scott Parsons, David E. Taylor, Ronald S. Indeck
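
A minimal software analogy of the offload-style repacketization described in patent 10963962: incoming packets carrying interleaved per-symbol messages are regrouped into a new outgoing stream organized by symbol. The packet structure and the grouping rule are assumptions.

```python
# Illustrative sketch of offload-style repacketization: incoming packets carry
# interleaved per-symbol messages, and the offload step emits a new stream of
# packets grouped by symbol. Packet structure and grouping rule are assumptions.

from collections import defaultdict

def regroup(incoming_packets):
    """Yield outgoing packets, one per symbol, from mixed incoming packets."""
    by_symbol = defaultdict(list)
    for packet in incoming_packets:
        for msg in packet:
            by_symbol[msg["symbol"]].append(msg)
    for symbol, msgs in by_symbol.items():
        yield {"symbol": symbol, "messages": msgs}

if __name__ == "__main__":
    packets = [
        [{"symbol": "ABC", "px": 10.0}, {"symbol": "XYZ", "px": 20.0}],
        [{"symbol": "ABC", "px": 10.1}],
    ]
    for out in regroup(packets):
        print(out)
```
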
  • Patent number: 10949442
    Abstract: Various methods and apparatuses are described for performing high speed format translations of incoming data, where the incoming data is arranged in a delimited data format. As an example, the data in the delimited data format can be translated to a mapped variable field format using pipelined operations. A reconfigurable logic device can be used in exemplary embodiments as a platform for the format translation.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 16, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Michael John Henrichs, Joseph M. Lancaster, Roger Dean Chamberlain, Jason R. White, Kevin Brian Sprague, Terry Tidwell
  • Patent number: 10942943
    Abstract: Improved computer technology is disclosed for enabling high performance stream processing on data such as complex, hierarchical data. In an example embodiment, a dynamic field schema specifies a dynamic field format for expressing the incoming data. An incoming data stream is then translated according to the dynamic field schema into an outgoing data stream in the dynamic field format. Stream processing, including field-specific stream processing, can then be performed on the outgoing data stream.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: March 9, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Louis Kelly Thomas, Joseph Marion Lancaster
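
The following sketch gives a software flavor of the schema-driven translation described in patent 10942943: an incoming record is translated according to a field schema, after which field-specific processing can run. The schema, record format, and per-field representation here are assumptions, not the patent's dynamic field format.

```python
# Illustrative sketch of schema-driven translation into a per-field stream on
# which field-specific processing can run. The schema, record format, and
# field representation are assumptions for illustration.

SCHEMA = [("symbol", str), ("price", float), ("qty", int)]  # hypothetical

def translate(record: str):
    """Translate one delimited record into (name, typed value) fields."""
    values = record.split(",")
    return [(name, cast(raw)) for (name, cast), raw in zip(SCHEMA, values)]

def process_field(name, value):
    # Field-specific processing: e.g., only prices get scaled to integer cents.
    return int(value * 100) if name == "price" else value

if __name__ == "__main__":
    for rec in ["ABC,10.25,300", "XYZ,20.50,150"]:
        fields = translate(rec)
        print([(n, process_field(n, v)) for n, v in fields])
```
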
  • Patent number: 10929152
    Abstract: A system is disclosed that comprises a field programmable gate array (FPGA), a network interface, and a plurality of hardware templates. The FPGA comprises configurable hardware logic, and the hardware templates define a plurality of different pipelined processing operations. The FPGA can be accessible over a network via the network interface for commanding the FPGA to load a hardware template from among the hardware templates onto the FPGA to thereby configure hardware logic on the FPGA to perform the pipelined processing operation defined by the loaded hardware template, and wherein the FPGA is configured to (1) receive streaming data and (2) process the streaming data through the configured hardware logic to perform the pipelined processing operation defined by the loaded hardware template on the streaming data.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: February 23, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Mark Allen Franklin, Ronald S. Indeck, Ron K. Cytron, Sharath R. Cholleti
  • Patent number: 10929930
    Abstract: A variety of embodiments for hardware-accelerating the processing of financial market depth data are disclosed. A coprocessor, which may be resident in a ticker plant, can be configured to update order books based on financial market depth data at extremely low latency. Such a coprocessor can also be configured to enrich a stream of limit order events pertaining to financial instruments with data from a plurality of updated order books.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: February 23, 2021
    Assignee: IP Reservoir, LLC
    Inventors: David E. Taylor, Scott Parsons, Jeremy Walter Whatley, Richard Bradley, Kwame Gyang, Michael DeWulf
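
As an illustration of the order book maintenance described in patent 10929930, here is a simple software order book driven by depth updates. The message fields and book structure are assumptions; the patent's coprocessor performs this in hardware at far lower latency.

```python
# Illustrative sketch of maintaining a limit order book from depth updates.
# Message fields and book structure are assumptions for the example.

class OrderBook:
    def __init__(self):
        self.bids = {}   # price -> aggregate size
        self.asks = {}

    def apply(self, update):
        side = self.bids if update["side"] == "bid" else self.asks
        if update["size"] == 0:
            side.pop(update["price"], None)   # price level removed
        else:
            side[update["price"]] = update["size"]

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

if __name__ == "__main__":
    book = OrderBook()
    for u in [{"side": "bid", "price": 99.5, "size": 200},
              {"side": "ask", "price": 100.0, "size": 150},
              {"side": "bid", "price": 99.5, "size": 0}]:
        book.apply(u)
    print(book.best_bid(), book.best_ask())
```
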
  • Patent number: 10909623
    Abstract: A method and apparatus use hardware logic deployed on a reconfigurable logic device to process a stream of financial information at hardware speeds. The hardware logic can be configured to perform data reduction operations on the financial information stream. Examples of such data reduction operations include data processing operations to compute a latest stock price, a minimum stock price, and a maximum stock price.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: February 2, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Ronald S. Indeck, Ron Kaplan Cytron, Mark Allen Franklin, Roger D. Chamberlain
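
The data reduction operations named in patent 10909623 (latest, minimum, and maximum price) are easy to sketch in software; the tick format below is an assumption, and the patent performs the reduction in reconfigurable hardware.

```python
# Illustrative sketch of the data reduction operations named in the abstract:
# latest, minimum, and maximum price per symbol over a stream of ticks.

def reduce_ticks(ticks):
    summary = {}   # symbol -> {"last", "min", "max"}
    for symbol, price in ticks:
        s = summary.setdefault(symbol, {"last": price, "min": price, "max": price})
        s["last"] = price
        s["min"] = min(s["min"], price)
        s["max"] = max(s["max"], price)
    return summary

if __name__ == "__main__":
    ticks = [("ABC", 10.0), ("ABC", 10.4), ("XYZ", 5.1), ("ABC", 9.8)]
    print(reduce_ticks(ticks))
```
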
  • Patent number: 10902013
    Abstract: Various methods and apparatuses are described for performing high speed translations of data. In an example embodiment, record layout detection can be performed for data. In another example embodiment, data pivoting prior to field-specific data processing can be performed.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 26, 2021
    Assignee: IP Reservoir, LLC
    Inventors: Joseph M. Lancaster, Kevin Brian Sprague
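
A naive software sketch of record layout detection in the spirit of patent 10902013: guess the delimiter and field count from a sample of records. The candidate delimiters and the scoring rule are assumptions, and the patent covers far more general detection (as well as data pivoting) than this.

```python
# Illustrative sketch of naive record layout detection: guess the delimiter
# and field count from a sample of records. Candidate delimiters and the
# scoring rule are assumptions.

CANDIDATES = [",", "\t", "|", ";"]

def detect_layout(sample_records):
    best = None
    for delim in CANDIDATES:
        counts = [rec.count(delim) for rec in sample_records]
        # Prefer a delimiter that appears, consistently, in every record.
        if min(counts) > 0 and len(set(counts)) == 1:
            if best is None or counts[0] > best[1]:
                best = (delim, counts[0])
    if best is None:
        return None
    return {"delimiter": best[0], "fields": best[1] + 1}

if __name__ == "__main__":
    sample = ["ABC|10.25|300", "XYZ|20.50|150", "QRS|7.75|90"]
    print(detect_layout(sample))
```
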
  • Patent number: 10872078
    Abstract: Various techniques are disclosed for offloading the processing of data packets. For example, incoming data packets can be processed through an offload processor to generate a new stream of outgoing data packets that organize data from the data packets in a manner different than the incoming data packets. Furthermore, in an exemplary embodiment, the offloaded processing can be resident in an intelligent switch, such as an intelligent switch upstream or downstream from an electronic trading platform.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: December 22, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Scott Parsons, David E. Taylor, Ronald S. Indeck
  • Patent number: 10846624
    Abstract: A multi-functional data processing pipeline for use with machine learning is disclosed. The multi-functional pipeline may comprise a plurality of pipelined data processing engines configured to perform processing operations, and the pipelined data processing engines can include correlation logic. The multi-functional pipeline can be configured to controllably activate or deactivate each of the pipelined data processing engines in the pipeline in response to control instructions and thereby define a function for the pipeline, each pipeline function being the combined functionality of each activated pipelined data processing engine in the pipeline. In example embodiments, such pipelines can be used to accelerate convolutional layers in machine-learning technology such as convolutional neural networks.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: November 24, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Ronald S. Indeck
  • Patent number: 10817945
    Abstract: Systems and methods are disclosed for routing of streaming data as between multiple compute resources. For example, the system may comprise a processor, a field programmable gate array (FPGA), a shared memory that is shared by a user space of an operating system for the processor and the FPGA, a network protocol stack, and driver code for execution by the processor. The driver code can be configured to (1) copy the streaming data received by the network protocol stack into the shared memory, (2) facilitate DMA transfers of the streaming data from the shared memory into the FPGA for processing thereby, (3) receive a stream of processed data from the FPGA, and (4) deliver the received processed data to the network protocol stack for delivery to one or more data consumers.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: October 27, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Scott Parsons, David E. Taylor, David Vincent Schuehler, Mark A. Franklin, Roger D. Chamberlain
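
The routing pattern described in patent 10817945 can be caricatured in software as a staging area between a network-facing producer and an accelerator, with results delivered back to consumers. Here a queue stands in for the shared memory and a worker thread stands in for the FPGA; this is an analogy, not the patented mechanism.

```python
# Illustrative software analogy of the routing pattern in the abstract: data
# arriving from a network stack is staged, handed to an accelerator for
# processing, and the results are delivered back to consumers. The queue and
# worker thread are stand-ins for the shared memory and the FPGA.

import queue
import threading

to_accel = queue.Queue()     # stands in for the shared-memory staging area
from_accel = queue.Queue()

def accelerator():
    while True:
        item = to_accel.get()
        if item is None:                 # sentinel: no more data
            break
        from_accel.put(item.upper())     # placeholder "processing"
    from_accel.put(None)

if __name__ == "__main__":
    worker = threading.Thread(target=accelerator)
    worker.start()
    for data in ["pkt-1", "pkt-2", "pkt-3"]:   # data copied in from the stack
        to_accel.put(data)
    to_accel.put(None)
    while (result := from_accel.get()) is not None:
        print("deliver to consumer:", result)
    worker.join()
```
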
  • Patent number: 10719334
    Abstract: Methods and systems are disclosed where an FPGA offloads a plurality of processing tasks from a processor. The FPGA can process streaming data received via a network interface, and the FPGA can be controllable in response to control instructions received from the processor. The FPGA comprises resident hardware logic for a plurality of data processing engines that are combinable as a processing pipeline within the FPGA. In response to the control instructions, the FPGA can control which of the data processing engines are activated and which of the data processing engines are deactivated to selectively tap into the streaming data to perform pipelined processing operations on the streaming data via the activated data processing engines. The deactivated data processing engines remain on the FPGA and provide a pass through path for the streaming data whereby the deactivated data processing engines do not perform processing operations on streaming data received thereby.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: July 21, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Mark Allen Franklin, Ronald S. Indeck, Ron K. Cytron, Sharath R. Cholleti
  • Patent number: 10650452
    Abstract: Various techniques are disclosed for offloading the processing of data packets. For example, incoming data packets can be processed through an offload processor to generate a new stream of outgoing data packets that organize data from the data packets in a manner different than the incoming data packets. Furthermore, in an exemplary embodiment, the offloaded processing can be resident in an intelligent switch, such as an intelligent switch upstream or downstream from an electronic trading platform.
    Type: Grant
    Filed: March 3, 2014
    Date of Patent: May 12, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Scott Parsons, David E. Taylor, Ronald S. Indeck
  • Patent number: 10572824
    Abstract: A multi-functional data processing pipeline is disclosed, where the pipeline comprises a plurality of pipelined data processing engines configured to perform processing operations. The multi-functional pipeline can be configured to controllably activate or deactivate each of the pipelined data processing engines in the pipeline in response to control instructions and thereby define a function for the pipeline, each pipeline function being the combined functionality of each activated pipelined data processing engine in the pipeline. In example embodiments, the pipelined data processing engines can include correlation logic, and such pipelines can be used to accelerate convolutional layers in machine-learning technology such as convolutional neural networks.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: February 25, 2020
    Assignee: IP Reservoir, LLC
    Inventors: Roger D. Chamberlain, Mark Allen Franklin, Ronald S. Indeck, Ron K. Cytron
  • Patent number: 10504184
    Abstract: Systems and methods are disclosed for fast track routing of streaming data as between multiple compute resources. For example, the system may comprise a first processor, a second processor, a shared memory that is mapped into a kernel and user space of an operating system for the processor, a network protocol stack, and driver code for execution within the kernel space of the operating system while the operating system is in the kernel mode. The driver code can be configured to (1) maintain a kernel level interface into the network protocol stack, (2) copy the streaming data from the network protocol stack into the shared memory, wherein the copy operation is performed by the driver code without the operating system transitioning to the user mode, and (3) facilitate DMA transfers of data from the shared memory into the second processor for processing thereby.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: December 10, 2019
    Assignee: IP Reservoir, LLC
    Inventors: Scott Parsons, David E. Taylor, David Vincent Schuehler, Mark A. Franklin, Roger D. Chamberlain