Method and Apparatus for Managing Orders in Financial Markets
An integrated order management engine is disclosed that reduces the latency associated with managing multiple orders to buy or sell a plurality of financial instruments. Also disclosed is an integrated trading platform that provides low latency communications between various platform components. Such an integrated trading platform may include a trading strategy offload engine.
This patent application is a continuation of U.S. patent application Ser. No. 17/872,226, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Jul. 25, 2022, now U.S. Pat. No. ______, which is a continuation of U.S. patent application Ser. No. 16/044,614, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Jul. 25, 2018, now U.S. Pat. No. 11,397,985, which is a divisional of U.S. patent application Ser. No. 13/316,332, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2011, now U.S. Pat. No. 10,037,568, which claims priority to provisional patent application 61/421,545, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2010, the entire disclosures of each of which are incorporated herein by reference.
This patent application is related to PCT patent application PCT/US2011/064269, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2011, and published as WO Publication WO2012/079041, the entire disclosure of which is incorporated herein by reference.
This patent application is also related to U.S. Pat. Nos. 7,840,482, 7,921,046, and 7,954,114 as well as the following published patent applications: U.S. Pat. App. Pub. 2007/0174841, U.S. Pat. App. Pub. 2007/0294157, U.S. Pat. App. Pub. 2008/0243675, U.S. Pat. App. Pub. 2009/0182683, U.S. Pat. App. Pub. 2009/0287628, U.S. Pat. App. Pub. 2011/0040701, U.S. Pat. App. Pub. 2011/0178911, U.S. Pat. App. Pub. 2011/0178912, U.S. Pat. App. Pub. 2011/0178917, U.S. Pat. App. Pub. 2011/0178918, U.S. Pat. App. Pub. 2011/0178919, U.S. Pat. App. Pub. 2011/0178957, U.S. Pat. App. Pub. 2011/0179050, U.S. Pat. App. Pub. 2011/0184844, and WO Pub. WO 2010/077829, the entire disclosures of each of which are incorporated herein by reference.
INTRODUCTION
Dark Pools, like traditional financial exchanges, serve the function of matching up buyers and sellers, but do not provide full visibility into the available liquidity and pricing information. Dark Pools may be operated by financial exchanges, investment banks, or other financial institutions. Dark Pools are rapidly becoming a key market center for electronic trading activity, with a substantial proportion of transactions occurring in dark pools relative to public markets.
In order to facilitate the development of trading applications that leverage real-time data from multiple market centers (and their concomitant feeds), trading platforms typically normalize data and perform common data processing/enrichment functions in ticker plants, as described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829.
Trading strategies consume normalized market data, make decisions to place buy/sell orders, and pass those orders on to an order management system. Note that those orders may provide guidance to the order management system on where to route the order (e.g. whether or not it should be routed to a dark pool), how long the order should be exposed in the market before canceling it (if it is not executed), and other conditions governing the management of the order in the marketplace.
An Order Management System (OMS) (which can also be referred to as an Execution Management System (EMS)) is responsible for managing orders from one or more trading applications. Note that the OMS/EMS may be responsible for managing orders from multiple trading entities. These entities may be competing trading groups within the same investment bank. These entities may also be independent financial institutions that are accessing the market through a common prime services broker or trading infrastructure provider.
The function of the OMS/EMS is to enter orders into a market. Prior to entering an order into a market, the OMS may first perform a series of checks in order to deem the order “valid” for placement. These checks can include:
- Individual account and risk profile
  - Order quantity, instant and cumulative
  - Quantity-price product, instant and cumulative
  - Cumulative net value on position
  - Percent away from last tick and/or open
  - Position limits, margins
  - Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
- Corporate account and risk profile
  - Order quantity, instant and cumulative
  - Quantity-price product, instant and cumulative
  - Cumulative net value on position
  - Percent away from last tick and/or open
  - Position limits, margins
  - Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
- Corporate “restricted list” of symbols
- Regulatory
  - Short sale restrictions
  - Halted instruments
  - Tick rules
  - Trade through
It can be noted that these checks are driven by account, risk, and regulatory data accessible by the OMS, as well as a view of the current state of the markets provided via normalized market data from a ticker plant.
It can also be noted that the OMS/EMS typically is used to manage order placement into multiple markets, including dark pools. Once an order is declared to be appropriate (i.e., “valid”), one of the primary functions of the OMS/EMS is to select the destination for each incoming order. Note that the OMS/EMS may also choose to sub-divide the order into smaller orders that may be routed to the same or different markets. The OMS/EMS makes routing decisions based on the current state of the markets provided via normalized market data from a ticker plant, as well as routing parameters input to the OMS/EMS. Routing parameters may be scoped on a per-account or corporate basis. These parameters may include:
- Per-market fee and rebate structure
- Account fee and rebate structure
- Per-market outstanding limit
- Market access latency (continuously updated estimate of intra-exchange latency)
- Routing strategy
  - Best net execution price (including transaction fees, maker/taker models, etc.)
  - Lowest fee
  - Inter-market Sweep Order (ISO) to all markets
  - Market preference on order
- Order split rules
  - Range of markets
  - Max size per market
  - Price delta limit from current price of each market
Once the OMS has decided where and how to route an order, it may then attempt to optimize the order and the communication channel over which it transmits orders to a given market (order entry optimization). For example, orders with a higher probability of getting filled (matched) may be placed prior to orders with a lower probability of getting filled, or orders meeting certain criteria, such as order types or specific financial instruments, may have a higher probability of being filled by utilizing one communication channel rather than another. The order entry optimization may also incorporate the current view of the market (from the normalized market data) as well as the current estimate of intra-market latency for the given market.
One or more order validation software components are deployed on one or more servers 202. Each order validation software component requires a market data interface to the messaging bus. The interface allows the validation software component to request the necessary market data to perform validation on incoming orders. Similarly, the order validation software components listen for new incoming orders from trading strategies on the order entry bus. Note that the latency of market data delivery and the bandwidth available on the market data bus affect the quality and quantity, respectively, of data used by the order validation software component. Furthermore, the distribution of order validation software components across multiple servers 202 segments validation decisions. As a result, the previously described validation decisions are either performed on a limited view of data, which introduces risk, or delayed until data from disparate components can be compiled in order to build a comprehensive view of risk. Such delays may reduce or eliminate market opportunities that depend on a fast response to trading opportunities.
Orders that pass the validation checks are forwarded to one or more routing strategy software components that perform order placement into multiple markets, as previously described. Like the order validation software components, each routing strategy software component requires a market data interface to the market data messaging bus through which it receives current pricing information. The routing strategy software components typically require a price-aggregated view of the book for the instruments for which they are routing new orders. These book views may be cached locally in the routing strategy software components or requested via the market data interface. The latency associated with these book views directly affects the quality of the data used by the routing strategy software components to make order routing decisions. Delayed data may cause a routing strategy software component to make a decision that results in a missed trading opportunity or a trading loss. Once a routing strategy software component makes a routing decision, the order along with its handling instructions and destination market is forwarded on to the order entry bus.
Typically, output orders from the routing strategy software components are directly passed to one or more FIX engine software components that implement the order-entry interface to one or more markets. The FIX engine software components pass outgoing orders to the markets and pass incoming order responses from the markets to the order entry bus. The latency induced by another transition over a messaging bus and the FIX engine processing represents an additive contribution to the total latency of the OMS/EMS.
Optionally, an OMS/EMS may include one or more order entry optimization software components. As previously described, these software components impose a priority ordering on the orders passed on to the markets. When included in the OMS/EMS, the software components receive orders from the routing strategy software components via the order entry bus, perform their priority queuing operation, and pass orders destined for the market to the appropriate FIX engine software components via the order entry messaging bus. As with the FIX engine software components, the latency induced by another transition over a messaging bus and the order entry optimization processing represents an additive contribution to the total latency of the OMS/EMS.
Thus, distributing OMS/EMS components across multiple systems results in added complexity and latency, which introduces regulatory risk and limits the opportunity to capitalize on latency-sensitive trading opportunities. Furthermore, the overhead of inter-component communication may limit the quantity of data available to components to perform their tasks. This may introduce additional regulatory risk and may further limit trading opportunities.
As a solution to these technical problems of complexity and latency, the inventors disclose a variety of embodiments whereby tight integration is provided between system components to thereby dramatically improve latency and reduce communication complexity.
For example, the inventors disclose an apparatus comprising a processor configured as an order management engine, the order management engine configured to (1) process a plurality of orders relating to a plurality of financial instruments based on a plurality of inputs, and (2) integrate at least two members of the group consisting of an order validation operation, a routing strategy operation, a position blotter operation, and an order entry optimization to thereby process the orders.
As another example, the inventors disclose a method comprising (1) processing, by a processor configured as an order management engine, a plurality of orders relating to a plurality of financial instruments based on a plurality of inputs, wherein the processing comprises performing at least two members of the group consisting of an order validation operation, a routing strategy operation, a position blotter operation, and an order entry optimization via integrated components of the order management engine.
As still another example, the inventors disclose an apparatus comprising a trading platform, the trading platform configured to receive and process streaming financial market data, the trading platform comprising at least two members of the group consisting of (1) a ticker plant engine, (2) a trading strategy engine, and (3) an order management engine, each integrated within the trading platform.
As another example, the inventors disclose a method comprising receiving and processing, by a trading platform, streaming financial market data, the trading platform comprising at least two members of the group consisting of (1) a ticker plant engine, (2) a trading strategy engine, and (3) an order management engine, each integrated within the trading platform.
The inventors also disclose an apparatus comprising a trading platform, the trading platform configured to receive and process streaming financial market data, the trading platform comprising a host system, and a trading strategy engine, wherein the trading strategy engine is configured to offload from the host system at least a portion of a trading strategy with respect to one or more financial instruments and one or more financial markets.
Further still, the inventors disclose a method comprising (1) receiving and processing, by a trading platform, streaming financial market data, the trading platform comprising a host system and a trading strategy engine, and (2) the trading strategy engine offloading from the host system at least a portion of a trading strategy with respect to one or more financial instruments and one or more financial markets.
These and other features and advantages of the present invention will be understood by those having ordinary skill in the art upon review of the description and figures hereinafter.
Order Management Engine
As shown in FIG. 3, an exemplary order management engine (OME) 300 comprises a mapping component 302, an order validation component 304, a routing strategy component 306, an order entry optimization component 308, a market view component 310, a latency monitor component 312, and a position blotter update component 314.
The OME can ingest a stream of orders 324 originating from one or more trading strategies of one or more trading entities. Preferably, those trading strategies are accelerated and hosted on the integrated trading platform described below in the Integrated Trading Platform section.
The mapping component 302 resolves a unique identifier for the financial instrument used by the OME to track per-instrument state. Preferably this key is an index number that allows instrument state to be directly indexed using the number. The mapping component also resolves the unique instrument identifier required for order entry into the markets. Preferably, the mapping component also resolves the instrument identifier required to retrieve the current pricing information from the market view component. As described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675, the mapping is preferably accomplished by using a hash table implementation to minimize the number of memory accesses required to perform the mapping. Similarly, the mapping component resolves a unique identifier for the individual and corporate risk profile records.
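By way of a non-limiting illustration, the following C++ sketch shows one way such a mapping stage could be organized in software. The class and method names (e.g., MappingComponent, instrument_key) are hypothetical and are used here only for exposition; the disclosure itself contemplates a hash-table implementation such as that of the above-referenced U.S. Pat. App. Pub. 2008/0243675.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical sketch: resolve symbols/accounts to dense integer keys so that
// downstream components can index per-instrument and per-account state arrays
// directly with a single memory access.
class MappingComponent {
public:
    // Returns the dense index for a symbol, assigning a new one on first use.
    uint32_t instrument_key(const std::string& symbol) {
        return instrument_ids_
            .try_emplace(symbol, static_cast<uint32_t>(instrument_ids_.size()))
            .first->second;
    }

    // Lookup-only variant used on the critical path once keys are seeded.
    std::optional<uint32_t> find_instrument_key(const std::string& symbol) const {
        auto it = instrument_ids_.find(symbol);
        if (it == instrument_ids_.end()) return std::nullopt;
        return it->second;
    }

    uint32_t account_key(uint64_t account_number) {
        return account_ids_
            .try_emplace(account_number, static_cast<uint32_t>(account_ids_.size()))
            .first->second;
    }

private:
    std::unordered_map<std::string, uint32_t> instrument_ids_;
    std::unordered_map<uint64_t, uint32_t> account_ids_;
};
```

Because the resolved keys are dense index numbers, downstream components can hold per-instrument and per-account records in flat arrays and retrieve them with a single indexed read.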
In order to seed the order validation checks, the mapping component also initiates the retrieval of relevant validation information associated with the order from one or more of the following sources:
-
- Individual account and risk profile record cache 316
- Corporate account and risk profile record cache 318
- Regulatory record cache 320
Preferably, each of the caches is stored in high-speed memory directly attached to the device hosting the mapping component. Such local memory may be initialized from a centralized database, via the operational parameters 322 interface shown in FIG. 3, during maintenance windows when trading is not occurring. The individual account and risk profile is retrieved by using the unique identifier mapped from the individual account number from the incoming order. The corporate account and risk profile is retrieved by using the unique identifier mapped from the corporate account number from the incoming order. The regulatory record is retrieved using the unique instrument identifier mapped from the instrument key as previously described. While the mapping component initiates the retrievals, the read results from the caches are passed to downstream components: order validation, routing strategy, and order entry optimization. In doing so, the mapping component pre-fetches the necessary records for downstream computations, thus masking the latency of the record retrieval from the caches.
Similarly, the mapping component initiates the retrieval of current pricing information for the financial instrument by passing the mapped instrument identifier to the market view component 310.
The market view component can ingest normalized market data 326 from a logically upstream ticker plant. Examples of ticker plants that can be employed for this purpose are the ticker plant engines described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829. The market view component provides a current view of the markets to other components within the OME. Typically, the view of the market is provided as regional and composite price-aggregated book views for each financial instrument, such as those described in the above-referenced and incorporated WO Pub. WO 2010/077829. In the preferred embodiment, the market view component provides a current pricing record to downstream OME components that includes a snapshot of current liquidity in the form of a limited-depth price-aggregated composite book, liquidity statistics, and trade statistics.
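As a non-limiting sketch of what a composite, price-aggregated book view entails, the following C++ fragment merges per-market (regional) bid books into a single limited-depth composite view. The data structures and the function name are hypothetical and are intended only to illustrate the price-aggregation concept.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

// Hypothetical sketch of building a composite, price-aggregated bid book by
// merging per-market (regional) books: sizes at the same price are summed
// across venues, and the result is ordered from best (highest) bid downward.
struct Level { int64_t price; uint64_t size; };          // price in ticks
using RegionalBook = std::vector<Level>;                 // one venue's bids

std::vector<Level> composite_bids(const std::vector<RegionalBook>& venues,
                                  std::size_t max_depth) {
    std::map<int64_t, uint64_t, std::greater<int64_t>> agg;  // best bid first
    for (const RegionalBook& book : venues)
        for (const Level& lvl : book)
            agg[lvl.price] += lvl.size;

    std::vector<Level> out;
    for (const auto& [price, size] : agg) {
        if (out.size() == max_depth) break;              // limited-depth view
        out.push_back({price, size});
    }
    return out;
}
```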
In addition to ingesting normalized market data, the market view component has the ability to update those regional and composite book views based on order entry confirmation and order fill reports received from the markets. This information from the order entry interfaces of financial markets is processed by the position blotter component. The position blotter updates the view of current outstanding positions in the market and makes this view available to the market view component, as well as other OME components. Updates to the view of outstanding positions may allow the current view of the market to be updated prior to the concomitant updates being received via the upstream ticker plant that consumes the exchanges' market data feeds. In order to prevent redundant updates to the books, the market view component can maintain a cache 328 of updates triggered by the order entry responses. When a concomitant market data update is received, it must be omitted or adjusted by the amount of liquidity added/removed by the order entry response event.
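The de-duplication behavior described above can be sketched as follows. This is a hypothetical, simplified software illustration (the class name BookUpdateDedup and its interface are invented for this example): a liquidity change learned from an order entry response is applied to the book immediately and remembered, and the later market data update covering the same event is netted against the remembered adjustment so the book is not updated twice.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical sketch of the de-duplication cache: fills reported on the
// order-entry channel are applied to the book immediately and remembered, so
// the later market-data update for the same liquidity can be netted out.
class BookUpdateDedup {
public:
    // Called when an order-entry response (e.g. a fill) changes liquidity at
    // a given (instrument, price) before the exchange feed reflects it.
    void record_early_adjustment(uint64_t key, int64_t size_delta) {
        pending_[key] += size_delta;
    }

    // Called for each market-data update at the same (instrument, price);
    // returns the delta that should actually be applied to the book after
    // netting out what was already applied from the order-entry side.
    int64_t adjust_feed_delta(uint64_t key, int64_t feed_delta) {
        auto it = pending_.find(key);
        if (it == pending_.end()) return feed_delta;
        int64_t netted = feed_delta - it->second;
        pending_.erase(it);
        return netted;
    }

private:
    std::unordered_map<uint64_t, int64_t> pending_;  // key = instrument/price hash
};
```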
Similar to the retrieval of necessary regulatory and account records, the retrieval of the financial instrument record from the market view component masks the latency of record retrieval for downstream components.
It should also be noted that optionally, the market view component 310 can itself be a ticker plant engine that ingests financial market data to produce normalized financial market data for consumption by the order validation component.
The order validation component 304 maintains independent input buffers for incoming orders, the regulatory and account records, and the market data records. The buffers provide a synchronization mechanism whereby the order validation component initiates its computations for a new order when all necessary record information is available. The order validation component contains a plurality of rule engines that perform a set of checks as described in the Introduction. Thus the rule engines can instantiate various rules and validate orders (or groups of orders) against those rules. Such rules may be derived from any or all of the following validation rules discussed above (although it should be understood that other validation rules may be desired by a practitioner):
- Individual account and risk profile
  - Order quantity, instant and cumulative
  - Quantity-price product, instant and cumulative
  - Cumulative net value on position
  - Percent away from last tick and/or open
  - Position limits, margins
  - Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
- Corporate account and risk profile
  - Order quantity, instant and cumulative
  - Quantity-price product, instant and cumulative
  - Cumulative net value on position
  - Percent away from last tick and/or open
  - Position limits, margins
  - Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
- Corporate “restricted list” of symbols
- Regulatory
  - Short sale restrictions
  - Halted instruments
  - Tick rules
  - Trade through
An example of a rules engine that can be employed toward this end is disclosed in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0287628. Note that the set of rule engines may leverage data parallelism (multiple copies of identical rule engines) and functional parallelism (pipeline of function-specific rule engines) to achieve the desired throughput and latency for the order validation component.
The specific set of checks is dictated by the validation information associated with the order (that was retrieved during the order mapping step). If all checks pass, the order is declared valid and passed on to the routing strategy component. Note that the order validation component may update validation records and write them back to the appropriate record cache; e.g., the current and cumulative statistics on positions for a given account may be updated. Examples of validation rules include the following:
- Regulatory: IF the instrument is currently under a short-sale restriction AND the order is an offer to sell that represents a short sale, THEN reject the order.
- Regulatory: IF the instrument is currently under a volatility trading pause on the NASDAQ market, THEN modify the order to restrict routing to the NASDAQ market.
- Regulatory: IF the instrument is on the restricted stocks list in the corporate account record (because the bank is involved in a merger deal with the company), THEN reject the order.
- Individual: IF the notional value of the order to buy a derivatives contract is greater than the credit line available to the individual trading account, THEN reject the order.
- Corporate: IF the aggregate notional value of all outstanding orders for the bank exceeds the defined threshold in the corporate record, THEN reject the order.
The combinatorial rules are typically more straightforward, as a reject result from any of the individual rule checks results in a reject decision for the order. The number of independent rule engines provisioned in the order validation component can be determined by the throughput requirement for the component and an analysis of the complexity of rule checks that must be performed.
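A minimal software sketch of this combinatorial evaluation is shown below, assuming (hypothetically) that each rule engine is modeled as a predicate over the order and its pre-fetched records; the structs and rule bodies are illustrative only and track a subset of the checks listed above.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical sketch of combinatorial validation: each rule engine is a
// predicate over the order plus its pre-fetched records; a reject from any
// rule rejects the order, mirroring the IF/THEN examples above.
struct Order { uint32_t instrument; bool is_short_sale; double notional; };
struct RegulatoryRecord { bool short_sale_restricted; bool halted; };
struct AccountRecord { double available_credit; bool restricted_symbol; };

struct ValidationContext {
    Order order;
    RegulatoryRecord regulatory;
    AccountRecord account;
};

using Rule = std::function<bool(const ValidationContext&)>;  // true = pass

bool validate(const ValidationContext& ctx, const std::vector<Rule>& rules) {
    for (const Rule& rule : rules)
        if (!rule(ctx)) return false;   // any failed check rejects the order
    return true;
}

// Example rule set corresponding to a subset of the checks listed above.
std::vector<Rule> example_rules() {
    return {
        [](const ValidationContext& c) {                       // short-sale rule
            return !(c.regulatory.short_sale_restricted && c.order.is_short_sale);
        },
        [](const ValidationContext& c) {                       // halted instrument
            return !c.regulatory.halted;
        },
        [](const ValidationContext& c) {                       // restricted list
            return !c.account.restricted_symbol;
        },
        [](const ValidationContext& c) {                       // credit check
            return c.order.notional <= c.account.available_credit;
        },
    };
}
```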
Modified and accepted orders are forwarded to the routing strategy component 306, along with their concomitant records, via a dedicated interconnect. This allows the routing strategy component to immediately begin processing the order. The routing strategy component determines whether a valid order is to be partitioned and where the order (or each order partition) is to be routed. Similar to the order validation component, the routing strategy component utilizes a plurality of rules engines such as those described in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0287628 to make these decisions (and may also employ a parallelization strategy). The decisions are driven by routing parameters contained in the individual account, corporate account, and regulatory records, as well as data from the market view component and the position blotter component. The rules implement the types of routing strategies outlined in the Introduction. Once a routing decision is completed by the rules engines, the order (or order partitions) are passed on to the order entry optimization component 308 with directives on where and how to enter the order (or order partitions) into the market. Note that an order may be entered into a market with a wide variety of parameters that direct the exchange (or dark pool) on how the order may be matched. The routing strategy component also updates the position blotter component to reflect a new position in the market.
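As a non-limiting illustration of one such routing strategy, the following C++ sketch carves a buy order across venues by best net execution price (displayed price plus a per-venue take fee) subject to a per-market size cap. The types, the fee model, and the function name are hypothetical simplifications, not a definitive implementation of the routing strategy component.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of a "best net execution price" routing split for a buy
// order: venues are ranked by displayed ask plus per-venue take fee, and the
// order is carved across venues subject to a per-market size cap.
struct VenueQuote { std::string market; double ask; uint64_t size; double take_fee; };
struct ChildOrder { std::string market; uint64_t qty; };

std::vector<ChildOrder> route_buy(uint64_t qty, std::vector<VenueQuote> quotes,
                                  uint64_t max_per_market) {
    std::sort(quotes.begin(), quotes.end(),
              [](const VenueQuote& a, const VenueQuote& b) {
                  return a.ask + a.take_fee < b.ask + b.take_fee;  // best net price
              });
    std::vector<ChildOrder> children;
    for (const VenueQuote& q : quotes) {
        if (qty == 0) break;
        uint64_t take = std::min({qty, q.size, max_per_market});
        if (take > 0) children.push_back({q.market, take});
        qty -= take;
    }
    return children;   // any residual quantity could be posted or re-routed
}
```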
The latency monitor component 312 utilizes data from outgoing order events 332 and incoming order response events 334 to maintain a set of statistics for each channel to each market. The latency statistics may include estimates of intra-exchange latency based on measurements of the round-trip time (RTT) from transmitting a new order on a channel to receiving a response event (either an order accept, reject, or fill notification). The statistics may include the last measurement as well as the average, minimum, and maximum for a defined time window (e.g. a moving average). The latency statistics may also be further refined to include statistics on a per-instrument/per-order-type basis for each channel. Such measurements can be performed by recording a timestamp for the transmission of an order entry event, timestamping each order entry response event, identifying the order entry event that corresponds to the response event, and then computing the difference in timestamps.
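The timestamp-matching approach can be sketched as follows; this is a hypothetical, host-software illustration (the class name LatencyMonitor and the window size are invented for this example), with one instance maintained per channel and, optionally, per instrument/order-type combination.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <numeric>
#include <unordered_map>

// Hypothetical sketch of per-channel round-trip-time tracking: an outgoing
// order's send time is recorded by order id, and the matching response event
// produces an RTT sample kept in a bounded window for last/min/max/average.
class LatencyMonitor {
    using Clock = std::chrono::steady_clock;
public:
    void on_order_sent(uint64_t order_id) { sent_[order_id] = Clock::now(); }

    void on_response(uint64_t order_id) {
        auto it = sent_.find(order_id);
        if (it == sent_.end()) return;                       // unmatched response
        auto rtt_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                          Clock::now() - it->second).count();
        sent_.erase(it);
        window_.push_back(rtt_ns);
        if (window_.size() > kWindow) window_.pop_front();   // sliding window
    }

    int64_t last_ns() const { return window_.empty() ? 0 : window_.back(); }
    int64_t min_ns() const {
        return window_.empty() ? 0 : *std::min_element(window_.begin(), window_.end());
    }
    int64_t max_ns() const {
        return window_.empty() ? 0 : *std::max_element(window_.begin(), window_.end());
    }
    double avg_ns() const {
        return window_.empty() ? 0.0
             : std::accumulate(window_.begin(), window_.end(), 0.0) / window_.size();
    }

private:
    static constexpr std::size_t kWindow = 1024;             // samples retained
    std::unordered_map<uint64_t, Clock::time_point> sent_;
    std::deque<int64_t> window_;
};
```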
The order entry optimization component 308 optimizes the sequence in which orders are transmitted to a given market. Furthermore, the component may select the appropriate communication channel to the market if multiple channels are available. The order entry optimization component utilizes the directives from the routing strategy component, as well as current estimates of intra-exchange latency computed for each independent channel to that market. The latency estimates for each instrument and order type combination may also be incorporated. As shown in FIG. 6, a computation subcomponent 600 can score the available channels for each order and place the orders in queues 604 for transmission.
A FIX encoder subcomponent 606 then services the queues 604 to generate the outgoing orders 332 in accordance with the selected channels and other optimizations.
An exemplary computation subcomponent 600 can score order channels as a simple weighted sum of antecedents: sum(W[i] * A[i]), where W[i] is a user-specified weight and A[i] is an antecedent value. Exemplary antecedents include:
-
- Estimated intra-exchange latency for the channel, instrument, order-type combination
- Number of outstanding orders on the channel by instrument
- Number of outstanding orders on the channel by aggregate number
- Price delta of order price to current best bid and best offer on target market
- Liquidity depth, defined to be the total size available between best bid/ask price and order price
A score antecedent selection subcomponent 610 can be employed by the computation subcomponent 600 to select which data from the buffers is to be used for antecedent values.
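A minimal sketch of this weighted-sum scoring is given below; the channel structure, the weight vector, and the lower-is-better convention are assumptions of this illustration rather than requirements of the disclosure.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Sketch of the weighted-sum channel score described above: each candidate
// channel has a vector of antecedent values A[i] (e.g. estimated latency,
// outstanding orders, price delta, liquidity depth), and the score is
// sum(W[i] * A[i]) for user-specified weights W[i]. Treating a lower score
// as better is an assumption of this sketch.
struct Channel {
    int id;
    std::vector<double> antecedents;   // A[i], same ordering as the weights
};

int select_channel(const std::vector<Channel>& channels,
                   const std::vector<double>& weights) {
    int best_id = -1;
    double best_score = std::numeric_limits<double>::infinity();
    for (const Channel& ch : channels) {
        double score = 0.0;
        for (std::size_t i = 0; i < weights.size() && i < ch.antecedents.size(); ++i)
            score += weights[i] * ch.antecedents[i];     // sum(W[i] * A[i])
        if (score < best_score) { best_score = score; best_id = ch.id; }
    }
    return best_id;   // -1 if no channels were supplied
}
```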
As indicated above, the subcomponents of the order entry optimization component 308 are shown in FIG. 6.
The position blotter update component 314 processes order entry response messages 334 from the various markets. The response messages notify the OME of which orders were placed, executed, cancelled, rejected, etc. The position blotter provides updates to the market view component when orders are placed so that the views of the market can be updated with less latency than receiving the update via the market data feed from the market center. Through a dedicated interconnect between the position blotter update component and the market view component, such updates can be passed with minimal overhead. Thus, when the OME 300 receives confirmation that an order has been placed from a destination market, the OME is able to modify its internal view of the state of the market to include the placed order. This provides the OME with a current view of the market, before the change is reported on the public market data feed. This latency advantage in the market view may then be leveraged by the OME and any trading strategies with access to such data.
The position blotter also tracks the current set of outstanding positions that the OME is managing. The component allows the order validation component and routing strategy component to incorporate a view of the outstanding positions when making validation and routing decisions.
The OME may be implemented on a high-performance computational platform, such as an offload engine or the like. Examples of a suitable computational platform for the OME include a reconfigurable logic device (e.g., a field programmable gate array (FPGA) or other programmable logic device (PLD)), a graphics processor unit (GPU), and a chip multi-processor (CMP). However, it should be understood that the OME could also be deployed on one or more general purpose processors (GPPs) or other appropriately programmed processors if desired. It should also be understood that the OME may be partitioned across multiple reconfigurable logic devices (or multiple GPUs, CMPs, etc. if desired).
As used herein, the term “general-purpose processor” (or GPP) refers to a hardware device having a fixed form and whose functionality is variable, wherein this variable functionality is defined by fetching instructions and executing those instructions, of which a conventional central processing unit (CPU) is a common example. Exemplary embodiments of GPPs include an Intel Xeon processor and an AMD Opteron processor. As used herein, the term “reconfigurable logic” refers to any logic technology whose form and function can be significantly altered (i.e., reconfigured) in the field post-manufacture. This is to be contrasted with a GPP, whose function can change post-manufacture, but whose form is fixed at manufacture. Furthermore, as used herein, the term “software” refers to data processing functionality that is deployed on a GPP or other processing devices, wherein software cannot be used to change or define the form of the device on which it is loaded, while the term “firmware”, as used herein, refers to data processing functionality that is deployed on reconfigurable logic or other processing devices, wherein firmware may be used to change or define the form of the device on which it is loaded.
Thus, in embodiments where one or more components of the OME is implemented in reconfigurable logic such as an FPGA, hardware logic will be present on the device that permits fine-grained parallelism with respect to the different operations that such components perform, thereby providing such a component with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP.
Further, the OME may be hosted in a dedicated system with computer communications links providing the interfaces to the normalized market data, order entry interfaces of markets, and order flow from trading strategies. In a preferred embodiment, the OME is hosted in an integrated system where the full trading platform is hosted.
Integrated Trading Platform
Hosting the trading platform components together in a single, integrated system provides several benefits:
- Reduced overall latency from market data receipt to order entry. Such an overall latency reduction can arise from lowered communication latency between components and lowered latency of component processing time by offloading to acceleration engines (e.g., reconfigurable logic).
- Reduced space/power requirements for deploying a trading platform. This can be especially important for co-location in exchange datacenters.
- Increased available bandwidth for data sharing among the trading platform components. This provides for tighter integration between components and allows components to make decisions based on additional data, thereby widening the scope of possible strategies and allowing for more complex and comprehensive processing.
The amount of general-purpose computing resources available in a single host system is fundamentally limited. This implies that pure software implementations of the trading platform or trading platform components will provide lower capacity and poorer latency performance relative to systems that leverage hardware-accelerated designs. In order to achieve a higher level of performance in a single system, trading platform components are preferably offloaded to engines that do not consume general purpose computing resources and that leverage fine-grained parallelism.
Thus, as shown in FIG. 7, an exemplary integrated trading platform comprises a hardware sub-system 718 and a software sub-system 720. The hardware sub-system 718 can host one or more ticker plant engines 702, one or more strategy offload engines 704, and one or more order management engines 300, while the software sub-system 720 can host trading strategy applications 712 and other software components executing on the host's general-purpose processing resources.
The ticker plant engine(s) 702 can normalize market data 714 from disparate feeds for presentation to consuming applications (including consuming applications that are resident in the software sub-system 720). Examples of a suitable ticker plant engine 702 are the ticker plant engines described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829, which can leverage the parallelism provided by reconfigurable logic devices to provide dramatic acceleration over conventional ticker plants. Furthermore, as shown in FIG. 7, the ticker plant engine(s) 702 can deliver normalized market data directly to the other engines in the hardware sub-system 718 over a peer-to-peer hardware interconnect, and can write normalized market data to shared system memory for consumption by software applications.
Writing normalized market data to shared (system) memory allows multiple trading applications to view the current state of the market by simply issuing reads to the memory locations associated with the financial instruments of interest. This reduces the latency of data delivery to the trading applications by eliminating the need to receive and parse messages to extract data fields.
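A hypothetical software-side sketch of such a consumer is shown below, assuming a POSIX shared-memory region (the segment name "/marketdata", the record layout, and the sequence-counter scheme for detecting in-progress writes are all inventions of this example, not details of the disclosure). The application reads the record for an instrument of interest by its dense instrument key.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>

// Hypothetical record layout written by the ticker plant into shared memory;
// an even sequence value means the record is stable, an odd value means an
// update is in progress (a simple torn-read guard assumed for this sketch).
struct alignas(64) InstrumentRecord {
    std::atomic<uint64_t> seq;
    int64_t best_bid, best_ask;    // prices in ticks
    uint64_t bid_size, ask_size;
};

int main() {
    int fd = shm_open("/marketdata", O_RDONLY, 0);   // hypothetical segment name
    if (fd < 0) { std::perror("shm_open"); return 1; }
    const std::size_t kRecords = 1 << 20;            // assumed region capacity
    void* addr = mmap(nullptr, kRecords * sizeof(InstrumentRecord),
                      PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { std::perror("mmap"); return 1; }
    auto* records = static_cast<const InstrumentRecord*>(addr);

    const uint32_t instrument_key = 42;              // index assigned by the mapper
    int64_t bid = 0, ask = 0;
    uint64_t s1 = 0, s2 = 0;
    do {                                             // retry until a stable read
        s1 = records[instrument_key].seq.load(std::memory_order_acquire);
        bid = records[instrument_key].best_bid;
        ask = records[instrument_key].best_ask;
        s2 = records[instrument_key].seq.load(std::memory_order_acquire);
    } while (s1 != s2 || (s1 & 1));

    std::printf("bid=%lld ask=%lld\n", (long long)bid, (long long)ask);
    return 0;
}
```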
An exemplary embodiment of a peer-to-peer hardware interconnect is a PCI Express bus where endpoint devices are each assigned a portion of the addressable memory space. A Base Address Register (BAR) defines the address space assigned to a given device on the bus. If device A issues a write operation to an address within the BAR space associated with device B, data can be transferred directly from device A to device B without involving system software or utilizing host memory. A wide variety of protocols may be developed with this basic capability. Multiple BARs may be employed by a device to implement control structures. For example, specific BARs may be used to maintain read and write pointers for the implementation of a ring buffer or queue for data transfers between devices.
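The read/write-pointer protocol mentioned above can be modeled in software as a single-producer/single-consumer ring buffer, as in the following sketch. In an actual PCI Express realization the pointers would reside in BAR-mapped registers and the slots in device or shared memory; here ordinary atomics stand in for those registers purely to illustrate the control flow, and the class name SpscRing is hypothetical.

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <optional>

// Hypothetical software model of the ring-buffer protocol sketched above: a
// producer device advances a write pointer and a consumer device advances a
// read pointer; the pointers never pass each other, so no locking is needed.
template <typename T, std::size_t N>
class SpscRing {
public:
    bool push(const T& item) {                       // producer side
        uint64_t w = write_.load(std::memory_order_relaxed);
        if (w - read_.load(std::memory_order_acquire) == N) return false;  // full
        slots_[w % N] = item;
        write_.store(w + 1, std::memory_order_release);
        return true;
    }

    std::optional<T> pop() {                         // consumer side
        uint64_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire)) return std::nullopt;  // empty
        T item = slots_[r % N];
        read_.store(r + 1, std::memory_order_release);
        return item;
    }

private:
    std::array<T, N> slots_{};
    std::atomic<uint64_t> write_{0};
    std::atomic<uint64_t> read_{0};
};
```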
Strategy offload engines 704 may also be hosted in the integrated system. Moreover, such strategy offload engines 704 can be resident in the hardware sub-system 718, as shown in FIG. 7.
Note that a hardware-to-software interconnect channel 710 provides for low-latency, high-bandwidth communication between software and hardware components. An example of a suitable interconnect channel in this regard is described in the above-referenced and incorporated U.S. Pat. App. Pub. 2007/0174841. This facilitates the partitioning of trading strategies across general purpose processing and reconfigurable logic resources. Thus, the strategy offload engines 704 can also interact with the trading strategy applications 712 within the software sub-system of the host through the hardware-software channel 710, where a trading strategy application 712 can offload certain tasks to the hardware-accelerated strategy offload engine 704 for reduced latency processing.
The functions of a traditional OMS/EMS that are not performance-critical (e.g. are not performed on every order) may be hosted on general-purpose processing resources in the system if desired (although a practitioner may want to deploy all functions on high performance resources such as reconfigurable logic devices). These functions may include modification of routing parameters, modification of risk profiles, statistics gathering and monitoring. The software components of the OMS/EMS utilize the same hardware-to-software interconnection channel to communicate with the OME(s), update cached records, etc.
As noted above in connection with the OME, examples of a suitable computational platform for one or more of the engines 702, 704, and 300 include a reconfigurable logic device (e.g., a field programmable gate array (FPGA) or other programmable logic device (PLD)), a graphics processor unit (GPU), and a chip multi-processor (CMP). However, it should be understood that one or more of the engines 702, 704, and 300 could also be deployed on one or more general purpose processors (GPPs) or other appropriately programmed processors if desired for parallel execution within the host. It should also be understood that the engines 702, 704, and 300 may be partitioned across multiple reconfigurable logic devices (or multiple GPUs, CMPs, etc. if desired).
Thus, in embodiments where one or more engines within the hardware sub-system 718 is implemented in reconfigurable logic such as an FPGA, hardware logic will be present on the platform that permits fine-grained parallelism with respect to the different operations that such engines perform, thereby providing such an engine with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP.
While the present invention has been described above in relation to its preferred embodiments, various modifications may be made thereto that still fall within the invention's scope as will be recognizable upon review of the teachings herein. As such, the full scope of the present invention is to be defined solely by the appended claims and their legal equivalents.
Claims
1. A system comprising:
- a trading platform, the trading platform configured to receive and process streaming financial market data, the trading platform comprising:
- a host system, the host system comprising a host processor and host memory;
- a ticker plant engine, wherein the ticker plant engine is deployed on (1) a reconfigurable logic device, (2) a graphics processor unit (GPU), and/or (3) a chip multi-processor (CMP);
- an order management engine, wherein the order management engine is deployed on (1) a reconfigurable logic device, (2) a GPU, and/or (3) a CMP; and
- a peer-to-peer hardware interconnect configured to interconnect the ticker plant engine and the order management engine; and
- wherein the ticker plant engine is configured to communicate normalized financial market data to the order management engine via the peer-to-peer hardware interconnect without using the host processor and without using the host memory.
2. The system of claim 1 wherein the ticker plant engine is configured to communicate the normalized financial market data to the order management engine by writing the normalized financial market data to a shared memory via the peer-to-peer hardware interconnect without using the host processor and without using the host memory, wherein the shared memory is shared between the ticker plant engine and the order management engine.
3. The system of claim 1 wherein the ticker plant engine is deployed on a first computational platform, wherein the first computational platform comprises a reconfigurable logic device, GPU, and/or CMP; and
- wherein the order management engine is deployed on a second computational platform, wherein the second computational platform comprises a reconfigurable logic device, GPU, and/or CMP.
4. The system of claim 1 wherein the trading platform further comprises:
- a trading strategy offload engine, wherein the trading strategy offload engine is deployed on (1) a reconfigurable logic device, (2) a GPU, and/or (3) a CMP;
- wherein the peer-to-peer hardware interconnect is further configured to interconnect the ticker plant engine and the trading strategy offload engine; and
- wherein the ticker plant engine is configured to communicate normalized financial market data to the trading strategy offload engine by writing the normalized financial market data to a shared memory with the trading strategy offload engine via the peer-to-peer hardware interconnect without using the host processor and without using the host memory.
5. The system of claim 4 wherein the host processor is configured to execute a trading strategy via a software application, the trading platform further comprising:
- a shared memory between the ticker plant engine and the trading strategy software application; and
- a hardware-software interconnect channel configured to interconnect the trading strategy software application and the trading strategy offload engine;
- wherein the ticker plant engine is further configured to write normalized financial market data to the shared memory between the ticker plant engine and the trading strategy software application;
- wherein the trading strategy software application is configured to (1) read the normalized financial market data from the shared memory between the ticker plant engine and the trading strategy software application, (2) offload a portion of the trading strategy to the trading strategy offload engine via the hardware-software interconnect channel, and (3) execute the trading strategy based on the read normalized financial market data and an interaction with the trading strategy offload engine via the hardware-software interconnect channel.
6. The system of claim 4 wherein the ticker plant engine is deployed on a first computational platform, wherein the first computational platform comprises a reconfigurable logic device, GPU, and/or CMP;
- wherein the order management engine is deployed on a second computational platform, wherein the second computational platform comprises a reconfigurable logic device, GPU, and/or CMP; and
- wherein the trading strategy offload engine is deployed on a third computational platform, wherein the third computational platform comprises a reconfigurable logic device, GPU, and/or CMP.
7. The system of claim 1 wherein the ticker plant engine and the order management engine are each offloaded from the host system and deployed on one or more field programmable gate arrays (FPGAs).
8. The system of claim 7 wherein the ticker plant engine and the order management engine are deployed on different FPGAs.
9. The system of claim 1 wherein the order management engine is configured to (1) process a plurality of orders relating to a plurality of financial instruments based on a plurality of inputs, and (2) integrate at least two members of the group consisting of an order validation operation, a routing strategy operation, a position blotter operation, and an order entry optimization to thereby process the orders.
10. The system of claim 9 wherein the order management engine comprises a market view component, the market view component configured to ingest the normalized financial market data and provide a current market view to other components within the order management engine, the current market view comprising a current view of pricing and liquidity in one or more financial markets for one or more financial instruments.
11. The system of claim 10 wherein the market view component is further configured to generate the current market view from an input comprising financial market data relating to the one or more financial instruments.
12. The system of claim 10 wherein the market view component is further configured to generate a current market view that includes a pricing and liquidity statistics view from an input comprising financial market data relating to the one or more financial instruments.
13. The system of claim 10 wherein the market view component is further configured to generate a current market view that includes a last trade pricing view from an input comprising financial market data relating to the one or more financial instruments.
14. The system of claim 10 wherein the market view component is further configured to generate a current market view that includes a last trade statistics view from an input comprising financial market data relating to the one or more financial instruments.
15. The system of claim 10 wherein the order management engine comprises a memory configured to store the current market view.
16. The system of claim 10 wherein the current market view comprises a current, composite view of pricing and liquidity across a plurality of financial markets for one or more financial instruments.
17. The system of claim 1 wherein the host processor is configured to execute a trading strategy via a software application, the trading platform further comprising:
- a shared memory between the ticker plant engine and the trading strategy software application;
- wherein the ticker plant engine is further configured to write normalized financial market data to the shared memory between the ticker plant engine and the trading strategy software application; and
- wherein the trading strategy software application is configured to (1) read the normalized financial market data from the shared memory between the ticker plant engine and the trading strategy software application and (2) execute the trading strategy based on the read normalized financial market data.
18. The system of claim 1 wherein the order management engine is configured to track order states on a per-instrument basis using instrument keys that directly index the order states and uniquely identify the financial instruments associated with the orders, wherein the instrument keys comprise index numbers assigned by the ticker plant engine.
19. The system of claim 1 wherein the ticker plant engine and/or the order management engine are deployed on one or more GPUs.
20. The system of claim 1 wherein the ticker plant engine and/or the order management engine are deployed on one or more CMPs.
Type: Application
Filed: Oct 2, 2023
Publication Date: Feb 1, 2024
Inventors: David Taylor (St. Louis, MO), Scott Parsons (St. Charles, MO)
Application Number: 18/375,728