SYSTEM AND METHOD FOR EVALUATING SECURITY TRADING TRANSACTION COSTS

A system and method for comparing investment transaction costs of institution peers includes a database and a processor coupled to a network. The processor may be configured to receive, via the network, security transaction data of investment institutions, which includes data for traded securities, transaction order sizes, execution prices, peer identities and timestamps. The processor is further capable of grouping transaction data into groups of orders, calculating order costs and environmental factors for each order, and calculating a peer's average order cost within each group. The data are stored in the database so that they may be retrieved and displayed.

Description
RELATED CASES

This application is a Continuation of and claims priority to U.S. patent application Ser. No. 12/691,451 filed Jan. 21, 2010, which is a Continuation-In-Part of and claims priority to U.S. patent application Ser. No. 12/471,185 filed on May 22, 2009, which is a Continuation of and claims priority to U.S. patent application Ser. No. 10/674,432, filed Oct. 1, 2003, now U.S. Pat. No. 7,539,636, which claims priority to provisional patent application No. 60/464,962 filed on Apr. 24, 2003, the entire contents of each of which are incorporated herein by reference.

BACKGROUND OF THE INVENTION

The performance of an investment is strongly related to execution costs related to the investment. Often with trading securities, transaction costs may be large enough to substantially reduce or even eliminate the return of an investment strategy. Therefore, achieving the most efficient order execution is a top priority for investment management firms around the globe. Moreover, the recent demand of some legislators and fund shareholder advocates of greater disclosure of commissions and other trading costs makes their importance even more pronounced (see, for example, Teitelbaum [14]). Therefore, understanding the determinants of transaction costs and measuring and estimating them are imperative. For further discussion see, for example, Domowitz, Glen and Madhavan [5] and Schwartz and Steil [13].

Traditionally, there appear to be two different approaches for estimating trading costs. The first approach is purely analytical and emphasizes mathematical/statistical models to forecast transaction costs. Typically, these models are based on theoretical factors/determinants of transaction costs and take into account, for instance, trade size and side, stock-specific characteristics (e.g., market cap, average daily trading volume, price, volatility, spread, bid/ask size, etc.), market and stock-specific momentum, trading strategy, and the type of the order (market, limit, cross, etc.).

The modeling is focused primarily on price impact and, sometimes, opportunity cost. For example, Chan and Lakonishok [4] report that institutional trading impact and trading cost are related to firm capitalization, relative decision size, identity of the management firm behind the trade and the degree of demand for immediacy. Keim and Madhavan [9] focus on institutional style and its impact on their trading costs. They show that trading costs increase with trading difficulty and depend on factors like investment styles, order submission strategies and exchange listing. Breen, Hodrick and Korajczyk [2] define price impact as the relative change in a firm's stock price associated with its observed net trading volume. They study the relation between this measure of price impact and a set of predetermined firm characteristics. Typically, some of these factors are then selected and implemented in mathematical or econometrical models that provide transaction cost estimates depending on different trade characteristics and investment style. ITG ACE® (Agency Cost Estimator), described in [7] is an example of an econometric/mathematical model that is based on such theoretical determinants. It measures execution costs using the implementation shortfall approach discussed in Perold [12]. See also [15] and [16] for other examples of this type of model.

While the first approach implicitly assumes that past execution costs do not entirely reflect future costs, the second approach is specifically based on this principle. In the second approach, the focus is exclusively on the analysis of actual execution data, and resulting costs are used primarily for post-trade analysis. Typically, executions are subdivided into order groups or scenarios, and then costs in each scenario are computed.

The present invention incorporates ideas of both approaches above to provide an improved method for computing transaction costs.

SUMMARY OF THE INVENTION

According to the present invention, there is provided a method for creating a peer group database. The method includes computer implemented steps of receiving, via an electronic network, security transaction data of a plurality of investment institutions, the transaction data including identities of traded securities, transaction order sizes, execution prices, peer identities and timestamps and grouping, using a computer processor, the transaction data into groups of orders. The method further includes calculating, using the computer processor, an order cost and at least one environmental factor for each order of the groups of orders, calculating each peer's average order cost within each group of orders in which the peer has one or more orders, and storing the calculated data in a database.

In some embodiments, the one or more environmental factors are Momentum, Volatility, and/or Liquidity. Peer average order costs may be value-weighted or observation weighted. According to one embodiment, at least some of the security transaction data is received from an institutional trader's order management system.

In some embodiments, each peer is a like entity such as an investment institution, an investment manager, or an investment trader. According to some embodiments, the method further includes a step of computing a peer's aggregate cost performance based on z-scores for the peer's average order costs. According to some embodiments, the method may further include a step of computing a peer rank and/or a step of generating a report for displaying on a display device, where the report includes a rank and an aggregate cost performance of a peer.

In an embodiment of the present invention, there is a peer group investment transaction cost comparison system having a database and a processor coupled to a network. The processor may be configured to receive, via the network, security transaction data of investment institutions, the transaction data including identities of traded securities, transaction order sizes, execution prices, identities of peers and timestamps. The processor may be further configured to group the transaction data into groups of orders, calculate an order cost and at least one environmental factor for each order in the groups of orders, calculate each peer's average order cost within each group of orders in which the peer has one or more orders, and store said data in the database. In some embodiments, at least some of the security transaction data is transmitted to the processor from an institutional trader's order management system coupled to the network.

In some embodiments, an overall average cost, a situational average cost, and one or more average environmental factors may be computed for each peer investment. In an embodiment, the system may generate a report for display on a display device, the report having a selected investment institution's average cost, situational average cost, and one or more average environmental factors. The report may further include data representing peer investment institutions' overall average costs, situational average costs, and one or more average environmental factors.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described in detail with reference to the following drawings, in which like features are represented by common reference numbers and in which:

FIG. 1A shows exemplary value ranges and abbreviated category codes for cost factors according to an embodiment of the present invention;

FIG. 1B illustrates exemplary situational groups of orders;

FIG. 2 shows exemplary ranges and values for the cost factors shown in Table 1 of FIG. 1A;

FIG. 3 shows average trading costs for various categories and benchmarks of the sample shown in FIG. 2;

FIG. 4 shows order based dollar and equally weighted average trading costs for various categories and benchmarks of the sample shown in FIG. 2;

FIGS. 5-6 are graphs which compare median cost estimates obtained through different regression techniques;

FIG. 7 is a graph comparing 25th-percentile estimates obtained for different regression techniques;

FIGS. 8-10 are graphs which compare estimated and realized cost percentile versus trade sizes;

FIG. 11 is a graph showing estimated and realized cost percentiles versus the momentum factor;

FIGS. 12-14 are graphs which compare the estimated cumulative distribution function versus its empirical counterpart;

FIG. 15 is a graph comparing the estimated cumulative distribution function with its empirical counterpart;

FIG. 16 is a block diagram of an exemplary system for estimating transaction costs according to an embodiment of the present invention;

FIG. 17 is a screen shot of an exemplary page of an exemplary client interface;

FIG. 18 illustrates exemplary groupings of orders;

FIG. 19 illustrates overall firm scores computed according to embodiments of the present invention;

FIG. 20 illustrates an exemplary Firm Summary Report according to an embodiment of the present invention;

FIGS. 21 and 22 illustrate detailed reports according to an embodiment of the present invention;

FIG. 23 illustrates an exemplary “Momentum Vs ADV” detail report;

FIG. 24 illustrates an exemplary Momentum report for selected traders;

FIG. 25 illustrates an exemplary performance report for selected traders by Side, Volatility and Market Cap;

FIG. 26 illustrates an exemplary performance report for selected managers by Region; and

FIG. 27 illustrates an exemplary method of creating a peer database.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention provides a novel system and method for estimating financial transaction costs associated with trading securities, and comparing institutional performance among peer institutions. Transactional data from various peer institutions is collected and analyzed on a periodic basis to create comprehensive data relating to transactions, orders and executions. Data may be subject to validation, quality filtration, and canonicalization. In one embodiment, data may be collected from Order Management Systems of institutional investors using ITG's Extract Manager and pre-processed using an approach such as ITG's Unified Data Model (UDM). The data can be manipulated and presented to a peer institution so that it can benchmark its performance against its competitors. Costs are measured by comparing the costs of a trade or order by an institution to one or more benchmarks, and then comparing costs between institutions for similar stocks under similar situations. Implementation shortfall costs may also be used.

Embodiments of the present invention help institutional investors to manage their trading costs more efficiently by ranking the performance of asset managers relative to other peer group participants. Embodiments of the present invention stimulate institutional investors to enhance their analytical environment using the most efficient trading execution tools (e.g., POSIT®, TriAct®, ITG SmartServer®, etc.) as well as advanced trading analytical products (e.g., TCA®, ITG Opt®, ITG ACE®, ResRisk™, etc.).

Orders may be block orders of securities requiring the buying or selling of a minimum number of shares of a security, such as one thousand or ten thousand shares. Also, orders such as Portfolio Manager orders may be used.

Embodiments of the present invention include systems and methods for providing security transaction costs. Methodologies are described first, followed by exemplary embodiments of systems for implementing the same. One skilled in the art will readily comprehend that the invention is not limited to the embodiments described herein, nor is it limited to specific programming techniques, software or hardware.

A framework with two different clusterization approaches is provided: single executions and orders. Trades submitted by the same institution with the same order identifier, side and stock are assumed to belong to the same order.

To build the cost estimates, the transaction cost of each trade or order/trading decision may be estimated against a number of benchmarks. Though the true costs to an institutional trader may include costs such as commission costs, the administrative costs of working an order, as well as the opportunity costs of missed trades, the benchmark approach focuses primarily on costs represented by price impact. This price impact can be explained as the deviation of the executed price from an unperturbed price that would prevail had the trade not occurred.

The following values may be employed as benchmarks for estimating transaction costs:

CT−1—the closing price of the stock on the day prior to the day of execution for executions (or on the day prior to the trading decision for orders);
VT—the volume-weighted average price (VWAP) across all trades during the first day of the trade execution for executions (or during the first trading day of the period over which the decision was executed for orders);
CT+1—the closing price of the stock on the first day after execution for executions (or on the first day after the last fill of the decision for orders);
CT+20—the closing price of the stock on the 20th day after execution for executions (or on the 20th day after the first trading day of the period over which the decision was executed for orders);
OT—the open price of the stock on the day of execution for executions (or on the day of trading decision for orders);
MT—the prevailing midquote of the stock prior to execution time for executions (or prior to time of trading decision for orders);
PWP—the participation weighted price, the average price for an order that accesses a specific portion of actual market volume from the order's actual beginning time stamp until sufficient market volumes occur to complete the order based on a participation rate; and
Nt—the next price (or Open if outside of market hours) recorded in the marketplace after the specified time stamp of the transactional event.

Benchmark CT−1 is described more fully in Perold [12]. Benchmark VT is described in detail by Berkowitz, Logue, and Noser [1]. The benchmark MT is, probably, the purest form of unperturbed price that one could choose as opposed to CT−1, for example, because it does not depend on other trades that occur between closing and time of execution. All three benchmarks (CT−1, VT and MT) are widely used in practice both for cost measurement and trading performance evaluation, and will be understood by one of ordinary skill in the art. Although the benchmark VWAP is widely used, it is generally not considered to be appropriate for evaluation of large order executions, because it can be “gamed” by avoiding trading late in the day if prices appear to be worse than the VWAP price. See, for instance, Madhavan [11] for more details and Lert [10] for analysis of differences between various cost measurement methods.

Benchmark PWP offers a base-line from which to measure potential savings or “value-add” of an actively managed order. PWP has, in effect, a variable, but calculated, time horizon, capturing price movement during the order's execution horizon—a predominant explanatory factor in trading costs. Thus, PWP is a “reasonable” price given prevailing market conditions for an order for a particular stock from its trading inception. An adjustment variable to PWP is a participation rate which, in some embodiments employing PWP, may be selected to be a rate appropriate for a particular firm, portfolio manager, fund, portfolio strategy, cost sensitivity and trading capabilities. For example, a 15% participation rate may be assumed and denoted PWP15.
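As an illustration only, the following sketch computes a participation-weighted price under one common reading of the definition above: the volume-weighted average market price from the order's start until enough market volume has printed for the order to complete at the chosen participation rate. The function name and data layout are hypothetical, not part of any embodiment.

```python
def participation_weighted_price(order_shares: int, participation_rate: float,
                                 market_trades: list):
    """Hypothetical PWP helper: volume-weighted average market price from the
    order's start until cumulative market volume reaches order_shares / rate,
    i.e., until the order could have completed at the given participation rate."""
    needed = order_shares / participation_rate
    value = 0.0
    taken = 0.0
    for price, volume in market_trades:        # (price, volume) prints after the order's start
        take = min(volume, needed - taken)
        value += price * take
        taken += take
        if taken >= needed:
            break
    return value / taken if taken else None    # None if no market volume printed


# A 15,000-share order at 15% participation (PWP15) needs 100,000 shares of market volume.
trades = [(20.00, 40_000), (20.10, 30_000), (20.20, 50_000)]
print(participation_weighted_price(15_000, 0.15, trades))   # ≈ 20.09
```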

Transaction costs can be calculated in basis points (bps) according to the formula:

\frac{\hat{p} - p_b}{p_b} \times \delta \times 10{,}000;  Eq. (1)

where p̂ is the actual execution price, pb is the benchmark price and δ is set to 1 or −1 for a sell or buy order, respectively. Positive trading costs indicate outperformance, which means that the trading decision resulted in profit.
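A minimal sketch of the Eq. (1) calculation follows; the helper name and the sample prices are hypothetical.

```python
def cost_bps(exec_price: float, benchmark_price: float, side: str) -> float:
    """Transaction cost in basis points per Eq. (1): positive values indicate
    outperformance.  delta is +1 for sells and -1 for buys."""
    delta = 1.0 if side.lower() == "sell" else -1.0
    return (exec_price - benchmark_price) / benchmark_price * delta * 10_000


# A buy filled at 25.10 against a prior-close benchmark of 25.00 costs about -40 bps;
# the same fill on a sell would show about +40 bps of outperformance.
print(cost_bps(25.10, 25.00, "buy"))    # ≈ -40
print(cost_bps(25.10, 25.00, "sell"))   # ≈ +40
```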

In one embodiment, transaction costs may be based upon the Implementation Shortfall (IS) methodology. In this embodiment, IS cost may be calculated as in Eq. (1) where pb is Nt.

To compare transaction costs of one peer institution against the costs of other peer institutions under similar circumstances, cost estimates and IS costs for median and other percentiles for each comparison framework are built into a database, or other storage means, called a Peer Group Database (PGD). A graphical user interface is preferably provided to allow users to view relative peer performance by both traditional measures, as well as trade characteristics. Soft copy reports and analyses are delivered as well. More precisely, trading costs of executions/orders can be grouped by a number of market and stock-specific cost factors and dimensions, such as type, market capitalization, side, market, size (represented by a percentage of average daily trading volume), volatility and short-term momentum. These factors, in various combinations and value ranges, may define thousands of scenarios or “order groups.” In practice however, any given peer institution is likely to have transactions in a subset of such scenarios. Exemplary values and ranges of exemplary cost factors and situational dimensions are presented in FIG. 1A.

The cost factors listed above have a significant impact on transaction costs, but numerous other factors are contemplated to be used, for instance, broker type (alternate broker, full-service broker, research broker, etc.), order type (market, limit, cross), liquidity and the inverse of dollar price of a stock (see e.g., Werner [17] or Chakravarty, Panchapagesan and Wood [3]).

Additionally, orders may be grouped according to region (e.g., U.S., U.K., Asia, Europe, etc.), and Security Type (e.g., ADR, Equity, GDR, Other, Preferred, REIT, etc.).

Referring to FIG. 1A, the factor Type may be divided into Growth or Value stocks based on the methodology used by Russell 3000® in its indices (Russell is a registered trademark of the Frank Russell Company). Micro-cap stocks are defined as stocks that are neither Growth nor Value stocks and have a market capitalization lower than a predetermined threshold. Note that, by construction, it may happen that a stock belongs to both Growth and Value categories.

The factor Market Capitalization classifies stocks into market capitalization groups. In one embodiment, for executions, the Market Capitalization is always based on the closing stock price CT−1 of the day prior to execution. For orders, the Market Capitalization is based on the closing stock price CT−1, but on the day prior to trading decision. The threshold for Small cap stocks is 1.5 billion dollars. The threshold for Mid cap stocks is 10 billion dollars. In another embodiment, Market Capitalization comprises three groups: Small cap being under 5 billion, Large cap being over 5 billion, and ADRs and GDRs for which capitalization is not considered.

The factor Side comprises two categories: Buy and Sell. Preferably, no distinction is made between normal sells and short sells.

For U.S. applications, the factor Market subdivides stocks in two categories: Listed and over-the-counter (OTC) stocks. However, for other international applications, the Market factor can be subdivided into any number of categories.

The Size factor captures the (total) trade size of an execution (order). Size is measured relative to the average daily share volume (ADV), which is defined as the median daily dollar volume of the latest twenty-one trading days divided by the closing stock price of the day prior to execution, for executions, and the day prior to the trading decision for orders.

In one embodiment, the factor short-term Momentum is measured over the last two days prior to execution. This Momentum measures the price evolution of a stock within the last two trading days as a fraction of absolute price changes. Specifically,

M = \left(Q_n - Q_0\right) \Big/ \left( \sum_{i=1}^{n} \left| Q_i - Q_{i-1} \right| \right),  Eq. (2)

where Q0 and Qn are the midpoints of the first and last valid primary quotes of the most recent two trading days and Qi, 0<i<n, is the midpoint of the i'th valid primary quote occurring immediately prior to each valid primary trade of the most recent two trading days. Succinctly, a valid primary quote or trade is a quote or trade of a stock that occurred under regular market conditions on the stock's primary exchange.

In another embodiment, a Momentum in basis points may be computed based upon Participation Weighted Price. In particular,

M_{bp} = \frac{N_t - \mathrm{PWP}_{15}}{N_t} \times \delta \times 10{,}000.  Eq. (2A)

In this embodiment, PWP is used to project the horizon for the order execution and the degree of market movement for the specific stock during that period.

When grouping orders according to Momentum, orders may be categorized as having very adverse, adverse, neutral, favorable or very favorable momentum if M falls within, for example, the ranges ≦−0.1; (−0.1, −0.02]; (−0.02, 0.02]; (0.02, 0.1]; and >0.1, respectively. Additional categories and ranges are possible based on M and Mbp.
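The momentum measure of Eq. (2) and the exemplary categorization above can be sketched as follows; the helper names and the sample midquotes are hypothetical, and the category boundaries are the exemplary ranges just listed.

```python
def momentum(midpoints: list) -> float:
    """Short-term momentum M per Eq. (2): net midquote change over the last two
    trading days divided by the sum of absolute midquote changes."""
    total_abs = sum(abs(b - a) for a, b in zip(midpoints, midpoints[1:]))
    if total_abs == 0:
        return 0.0
    return (midpoints[-1] - midpoints[0]) / total_abs


def momentum_category(m: float) -> str:
    """Map M to the exemplary categories and ranges listed above."""
    if m <= -0.1:
        return "very adverse"
    if m <= -0.02:
        return "adverse"
    if m <= 0.02:
        return "neutral"
    if m <= 0.1:
        return "favorable"
    return "very favorable"


quotes = [20.00, 20.05, 19.95, 20.10, 20.20]   # hypothetical valid-quote midpoints
m = momentum(quotes)                            # 0.20 / 0.40 = 0.5
print(m, momentum_category(m))                  # ≈ 0.5, "very favorable"
```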

The factor Volatility may be computed using the formula

\frac{1}{n-1} \sum_{i=2}^{n} \frac{\left( \ln\left(P_i / P_{i-1}\right) \right)^2}{t_i - t_{i-1}},  Eq. (2B)

where n is the total number of valid primary trades for that day, and Pi and ti are, respectively, the price and time (in seconds) of the i'th valid primary trade within adjusted trading hours. When grouping orders according to Volatility, orders may be categorized with relative terms such as high, medium and low volatility. In an exemplary embodiment, volatility less than 2% is considered Low and volatility greater than 2% is considered High. Adjusted market hours take into consideration transaction volumes that may derive from pre-opening or post-closing auctions, together with transactions ("prints") that may carry a delayed time stamp, such as "market-on-close" activity.
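A minimal sketch of the Eq. (2B) computation and the exemplary Low/High split is shown below. The helper names are hypothetical, the timestamps are assumed strictly increasing, and any scaling between the raw Eq. (2B) value and the 2% threshold is left unspecified, as it is in the text.

```python
import math

def intraday_volatility(prices: list, times: list) -> float:
    """Volatility per Eq. (2B): mean squared log return per unit time over the
    day's valid primary trades.  `times` are in seconds and are assumed to be
    strictly increasing; no square root or annualization is applied here."""
    n = len(prices)
    if n < 2:
        return 0.0
    total = 0.0
    for i in range(1, n):
        total += math.log(prices[i] / prices[i - 1]) ** 2 / (times[i] - times[i - 1])
    return total / (n - 1)


def volatility_category(vol_pct: float) -> str:
    """Exemplary two-way split: volatility below 2% is Low, otherwise High.
    How the raw Eq. (2B) value is scaled to a percentage is not specified here."""
    return "Low" if vol_pct < 0.02 else "High"
```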

Liquidity generally refers to a relative demand for liquidity or an order as a percentage of volume. In one embodiment, Liquidity may be computed based on the ITG ACE® pre-trade cost model with a neutral strategy using filled shares. Aspects of this model are disclosed in co-owned U.S. Patent Application Publications 2003/0233306 and US 2009/0125448, which are hereby incorporated herein by reference.

In some embodiments, the categories of each factor are preferably restricted to be used with other categories as follows: Type categories Value and Growth can be selected only with factors Market Capitalization and Side, and Type category Micro-cap can be selected only with the factor Side.

For scenarios that do not use the factors Size and Momentum, empirical distributions can be natural estimates for peer cost distributions. However, this is not true for the other cases. Intuitively, cost estimates should be consistent and close to each other for close values of size and momentum. In other words, the ranks of realized costs for two very similar scenarios should not differ very much.

Embodiments of the present invention may provide robust and consistent peer costs and cost estimates for any choice and combination of factors: Market Capitalization, Side, Market, Size, Liquidity, Volatility, Type, Security Type and Momentum.

In one aspect, the methodology of the present invention provides estimates for cost percentiles for any values of Size and Momentum M from [0, ∞) and [−1, 1], respectively. Therefore, the methodology provides much more flexibility than actually needed when values of Size and Momentum are subdivided into different groups, and can be applied even if the choice of the ranges for Size and Momentum is different from the ones shown above.

As illustrated by exemplary groupings of orders in FIG. 1B, orders may be grouped so that orders in a group share a number of common characteristics defining a scenario or situation. FIG. 1B illustrates examples of only two such scenarios; embodiments of the present invention may allow for thousands of scenarios. FIG. 1B illustrates that the orders of 1,000 shares of ITG, 2,000 shares of CPX and 1,000 shares of PSUM each had sizes of 0 to 1%, were traded under neutral momentum and low volatility, and were for Small cap securities. The orders of 15 m shares of IR, 12 m shares of DOW and 13 m shares of STX had common factors of order size >500%, were for Large cap securities and were traded under adverse momentum and high volatility. All of the orders shown were buy-side, traded in the U.S. and were for equity securities.
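A grouping of this kind can be sketched with a simple composite key; the field names and the size boundaries below are illustrative assumptions rather than the schema of any particular embodiment.

```python
from collections import defaultdict

def size_category(pct_adv: float) -> str:
    # Illustrative size buckets expressed as a percentage of ADV.
    if pct_adv <= 1:
        return "0-1%"
    if pct_adv <= 25:
        return "1-25%"
    if pct_adv <= 500:
        return "25-500%"
    return ">500%"

def bucket_key(order: dict) -> tuple:
    # A scenario ("bucket") is the combination of the order's factor categories.
    return (order["region"], order["security_type"], order["side"],
            order["market_cap"], size_category(order["pct_adv"]),
            order["momentum_cat"], order["volatility_cat"])

def group_orders(orders: list) -> dict:
    buckets = defaultdict(list)
    for order in orders:
        buckets[bucket_key(order)].append(order)
    return buckets

orders = [
    {"symbol": "ITG", "region": "US", "security_type": "Equity", "side": "Buy",
     "market_cap": "Small", "pct_adv": 0.8, "momentum_cat": "neutral", "volatility_cat": "Low"},
    {"symbol": "CPX", "region": "US", "security_type": "Equity", "side": "Buy",
     "market_cap": "Small", "pct_adv": 0.5, "momentum_cat": "neutral", "volatility_cat": "Low"},
]
for key, members in group_orders(orders).items():
    print(key, [o["symbol"] for o in members])   # both orders land in the same bucket
```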

An embodiment of the present invention is described next by way of example. Estimation methodology is based on US execution data from January 2002 to December 2002 submitted by users of TCA®. In this sample, the institutional trades represented 91 firms. All institutions together accounted for 14.6 million trades, 82.7 billion shares and 2,067 billion total dollar value. The trades were clusterized into 6.4 million orders; an average order consisted of 2.3 executions.

FIG. 2 shows descriptive statistics for the entire sample and its sub-samples based on the categories for each cost factor. The table presents the following information: the number of executions, the number of orders, the number of shares traded, the number of stocks (identified by unique CUSIPs) traded and total dollar volume. Statistics for factor Type show that the subdivision between Growth and Value stocks was quite even. Only a minority of executions and orders belongs to the Micro-cap category, although it contains the largest number of stocks. The subdivision with respect to market capitalization seems to be justified: executions and orders are evenly distributed among the three groups. As shown, the number of Large cap stocks was the lowest but their total dollar value was the highest, while Small cap stocks were in the majority; the dollar volumes of buy and sell orders are approximately the same. Interestingly, the average size of sell orders is larger than the average size of buy orders. The overwhelming majority of executions and orders belong to the smallest size group, i.e., less than or equal to 1% of ADV. This raises another challenge in building reasonable and robust cost estimates for the entire framework, including large trades and orders. Finally, the statistics for the momentum subdivision show that the majority of values of the momentum are close to zero. Moreover, negative values for momentum seem to outnumber the positive ones, which is expected due to the overall market trend for the period.

Analysis of Average Realized Transactions

FIGS. 3-4 present average transaction costs for different factors, benchmarks and clusterization types. For each scenario, two average costs are provided. The first value is the dollar weighted average trading cost, whereas the number in parenthesis indicates the equally weighted average trading cost. Note that dollar weighted and equally weighted averages are very different in most of the cases. By construction, the dollar weighted average depends mostly on a few large trades/orders only. In cases of symmetric distributions, the equally weighted average is identical to the median. From this perspective, the equally weighted averages can be more appropriate to analyze characteristics of peer group cost distributions.

FIG. 3 shows average transaction costs for executions for various categories of factors for six of the benchmarks. Regarding the average trading costs for benchmarks CT−1, OT and MT, growth stocks appear to have slightly higher average trading costs than Value stocks; by definition, Micro-cap stocks are very illiquid and thus encounter much higher average transaction costs. It is apparent from the values in FIG. 3 that trading costs are inversely related to Market Capitalization, and listed stocks have lower average costs than OTC stocks, supposedly, due to the fact that OTC stocks are, in general, more volatile. It can be observed that, on average, sell trades have positive costs while buys appear to have negative average costs. This observation holds for benchmarks CT−1 and OT, which is very likely due to the overall negative market movement within the selected period. This assumption is confirmed by the reversed signs of average costs for sells and buys for post trade benchmarks CT+1 and CT+20. As expected, average trading cost decreases as trade size increases. No specific pattern could be found for the average trading costs in different momentum categories.

For benchmark VT, it is observed that most of the average costs are concentrated around zero for all categories that have been studied. The highest absolute value of average costs is 17 b.p. Average costs for Growth and Value stocks are close, while costs for Micro-cap stocks are significantly negative. Similarly to the previous benchmarks, average trading costs seem to be inversely related to market capitalization and OTC stocks appear to have higher average costs than Listed stocks. However, in contrast to pre-trade benchmarks, there is little difference between average costs for buys and sells (at least for the dollar weighted averages), which is likely due to the fact that, by construction, the VWAP benchmark is set for the day and is not affected by price movement within each day. Average cost behavior for Size and Momentum factors for VT is similar to the case of pre-trade benchmarks.

Post-trade benchmarks CT+1 and CT+20 yield quite different results. Benchmark CT+20 provides average costs that fluctuate substantially, for example, both dollar and equally weighted average costs have inverse signs for the same categories in some cases. Basically, the benchmark CT+20 does not seem to indicate any meaningful measure for price impact. Benchmark CT+1 provides average costs that have the reversed behavior of the pre-trade benchmark CT−1. Costs overall are mostly positive, which indicates that on average, peer institutions have strong performance with respect to this benchmark. Micro-cap stocks have the highest positive costs and executions of OTC stocks outperform those of Listed stocks.

The analysis shows that average realized transaction costs of the exemplary data set are in line with empirical results presented by other researchers (see, for instance, Chakravarty, Panchapagesan and Wood [3]). The results strongly confirm that measuring costs with respect to different benchmarks affects performance evaluation significantly. In light of this fact, it seems to be a challenge to build a methodology that can be efficiently applied for all benchmarks discussed above.

FIG. 4 displays analogous results for orders.

Peer cost percentiles can be estimated for all benchmarks, clusterization types and possible choices of scenarios, assuming that at least one of the factors Size and Momentum has been selected. More precisely, the main result is to derive estimates of cost percentiles:


X_i = \mathrm{CostPercentile}_{\mathrm{MarketCap}=y_1,\,\mathrm{Side}=y_2,\,\mathrm{Market}=y_3,\,\mathrm{Size}=y_4,\,\mathrm{Momentum}=y_5}(i),  Eq. (3)

where y=(y1, y2, y3, y4, y5) are arbitrary values for factors Market Capitalization, Side, Market, Size and Momentum, i ∈ [0, 100], and costs are measured relative to one of the six benchmarks discussed above.

Before estimating Xi in Eq. (3), one must note that, first, while the factors Market Capitalization, Side and Market are discrete, Size and Momentum can have any values from [0, ∞) and [−1, 1], respectively. Consequently, Eq. (3) consists of an infinite number of functions and thus, an infinite number of estimates have to be derived. Second, a pure empirical approach might not be practical in all cases. Subdividing factors Size and Momentum into different groups and computing the empirical distribution for each scenario may lead to inconsistency and instability. As a result, performance of costs realized from two very similar scenarios may be ranked very differently, which may be confusing for users. Third, it is preferred to have a methodology that provides robust estimates and that works for both clusterization types and the six benchmarks CT−1, VT, CT+1, CT+20, OT and MT. This requirement is important since various benchmarks (for instance, VT and CT−1) have very different properties.

In provisional application No. 60/464,962, an ordinary least squares (OLS) method is described for providing estimates. Estimates may be provided not only for the mean or median, but also for the 25th, 40th, 60th and 75th cost percentiles. Instead of regressing all the cost percentiles in the comparison framework directly on the (total) trade size and momentum values, the comparison framework may be subdivided into different groups depending on the Momentum and Size of the executions (orders). Then, for each group, the 25th, 40th, 50th (median), 60th and 75th cost percentiles are determined, as well as the equally weighted average values of momentum and (total) trade size.

Similar to the simple OLS approach, based on research conducted, all five percentiles are assumed to depend linearly on functions ƒ and g of size and momentum, or, specifically,


X_i = \alpha_i + \beta_i f(S) + \gamma_i g(M) + \varepsilon_i, \quad i = 25, 40, 50, 60 \text{ or } 75.  Eq. (4)

Moreover, based on empirical research, it is assumed that ƒ is positive, monotonically increasing, with ƒ(0)=0, and g is either


g(x)=x or g(x)=|x|^v, for some v>0.

A possible choice for ƒ is ƒ(x)=x^μ, for x>0 and some μ>0.

In order to have a rough estimate for the whole peer cost distribution of a scenario, the percentiles between 25 and 75 can be computed by linear interpolation. Since transaction cost distributions are heavy-tailed, percentiles below 25 and above 75 are derived assuming Pareto type of distributions.
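The interpolation step is straightforward; a small sketch with hypothetical percentile values is shown below (the tails below the 25th and above the 75th percentile are handled separately, as described later).

```python
import numpy as np

# Estimated percentiles for one scenario (hypothetical cost values, in bps).
known_pctl = [25, 40, 50, 60, 75]
known_cost = [-55.0, -30.0, -18.0, -5.0, 20.0]

# Percentiles strictly between 25 and 75 by linear interpolation; percentiles
# below 25 and above 75 come from the Pareto-type tail model described later.
inner = np.interp(np.arange(26, 75), known_pctl, known_cost)
print(inner[:3])   # interpolated 26th-28th percentile cost estimates
```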

Different regression estimation techniques can be chosen to estimate the regression parameters (αi, βi, γi) in Eq. (4) by regressing the cost percentiles Xi on average values of momentum and size. Groups without a sufficient number of observations are preferably excluded from the regression in order to reduce noise as much as possible and ensure stability of the estimates. The present embodiment focuses on the following three regression techniques: (a) ordinary least squares (OLS), (b) weighted least squares (WLS) with respect to OLS residuals (WLS1), and (c) WLS with respect to observations in each subdivision (WLS2).

The WLS1 approach is an enhancement of the OLS approach and comprises two steps: first, OLS regression is conducted and the residuals of the regression are determined; and second, the parameters are reestimated by weighting the observations with the inverse of their squared residuals. In order to avoid abnormal weighting, inverses of the squared residuals are truncated by the value (Σ_{i=1}^{n} e_i^2)^{-1}.

Estimates become more robust due to the weighting. Moreover, based on research, squared residuals are generally the highest for large groups with large trade and order sizes. Weighting by the residuals increases the importance of cost percentiles for groups with smaller sizes. This is desirable since executions (and orders) with small (total) trade sizes are in the majority as pointed out above.
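A minimal numerical sketch of the WLS1 idea is given below, assuming group-level data in which each observation is one (Size, Momentum) group and the response is that group's empirical cost percentile. The function name, sample values and the optional weight cap are illustrative; the embodiment above additionally truncates the inverse squared residuals as described.

```python
import numpy as np

def wls1_fit(X, y, weight_cap=None):
    """Two-step WLS1 sketch: (1) fit OLS, (2) re-fit with each observation
    weighted by the inverse of its squared OLS residual.  A cap on the weights
    may be supplied to avoid abnormal weighting, as described above."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)          # step 1: OLS
    resid = y - X @ beta_ols
    w = 1.0 / np.maximum(resid ** 2, 1e-12)                   # step 2 weights
    if weight_cap is not None:
        w = np.minimum(w, weight_cap)
    sw = np.sqrt(w)
    beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta_wls

# Hypothetical group-level data: each row is one (Size, Momentum) group and y
# holds that group's empirical cost percentile (here the median, in bps).
f_S = np.array([0.01, 0.05, 0.10, 0.50, 1.00, 2.00])      # f(S) = S
g_M = np.array([0.00, -0.01, 0.02, -0.05, 0.03, -0.10])   # g(M) = M
y = np.array([-5.0, -12.0, -20.0, -55.0, -90.0, -160.0])
X = np.column_stack([np.ones_like(f_S), f_S, g_M])         # columns: alpha, beta, gamma
print(wls1_fit(X, y))
```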

Method WLS2 weights the importance of each group in a different way. Instead of weighting by the OLS residuals, WLS2 takes into consideration the amount of observed data in each subdivision and thus weights by the number of observations in each group. The problem with this method is that the number of observations might vary dramatically from group to group according to the data. The approach might yield reasonable results for some scenarios (usually for small trade sizes and momentum values close to zero) but provide bad estimates overall.

The present embodiment has the advantage that it provides more information about the whole peer cost distribution. Moreover, it filters out outliers in a natural way by taking medians (and other percentiles) in each group. However, it should be noted that there is no theoretical justification for how to subdivide groups optimally, and regressing percentiles on the average size and momentum is only an approximation.

FIGS. 5-7 provide comparison of the results for these three regression techniques. In each figure, the empirical percentiles are annotated by points.

FIG. 5 compares median cost estimates obtained by OLS, WLS1 and WLS2 with empirical median costs. The dots denote the empirical medians. The solid line indicates the estimated median costs using the regression technique WLS1. The two dotted lines show median cost estimates for OLS and WLS2. All estimates have been derived using regression Eq. (4) for all executions in our data sample with ƒ(x)=x and g(x)=0. Costs are measured relative to benchmark CT−1. Empirical percentiles have been regressed on the groups' average size and momentum values. The chart illustrates that all regression methods provide good estimates.

FIG. 6 compares median cost estimates obtained by OLS, WLS1 and WLS2 with empirical median costs. The dots denote the empirical medians. The solid line indicates the estimated median costs using the regression technique WLS1. The two dotted lines show median cost estimates for OLS and WLS2. All estimates have been derived using regression Eq. (4) for all executions of Large cap stocks in the data sample with ƒ(x)=x and g(x)=0. Costs are measured relative to benchmark CT−1. Instead of taking all executions into account, estimates have been derived for executions of Large cap stocks only. The functions ƒ and g have been chosen linear again. Median cost estimates using OLS and WLS1 still do not differ considerably (WLS1 seems to yield slightly better results); however, method WLS2 provides unreasonable estimates for large trade sizes.

FIG. 7 compares 25th-percentile estimates obtained by OLS, WLS1 and WLS2 with empirical 25th-percentiles of costs. The dots denote the empirical 25th-percentiles of costs. The solid line indicates the estimated 25th-percentile using the regression technique WLS1. The two dotted lines show 25th-percentile estimates for OLS and WLS2. All estimates have been derived using regression Eq. (4) for all executions in our data sample. ƒ and g have been selected according to Eq. (7). Costs are measured relative to benchmark MT. FIG. 7 shows 25th-percentile estimates for executions and benchmark MT using all data. By construction, WLS2 yields the best results for small trade sizes, but underperforms the other two techniques when trade sizes increase.

FIGS. 5-7 are typical examples of the overall performance of the techniques of the present invention. WLS1 is the most appropriate method for estimation of the five cost percentiles overall and provides consistent and robust estimates for all groups, for both executions and orders, and all benchmarks.

Regression Constraints

Special attention should be paid to the fact that, without assuming any constraints on the regression parameters: αi, βi and γi, i=25, 40, 50, 60 and 75, it could occur that for some pair (S, M),


X_i = \alpha_i + \beta_i f(S) + \gamma_i g(M) < \alpha_j + \beta_j f(S) + \gamma_j g(M) = X_j, \quad \text{for } i > j,  Eq. (5)

which is counterintuitive.

To avoid such situations, constraints have to be assigned to the regression parameters. The constraints depend on the choice of benchmark and of function g.

Accordingly, there are three restrictions for each scenario, benchmark and clusterization type. The first constraint suggests that for all cases, condition (5) should not hold for (S, M)=(0, 0). In other words, αi≧αj for i>j is assumed.

The second restriction takes into consideration that dispersion of costs should increase or decrease as size increases depending on the benchmark and clusterization type. Precisely, for i>j,

    • βi≦βj for benchmark VT and clusterization type "executions";
    • βi≧βj otherwise.

The last constraint depends on the choice of the function g and on the type of benchmark. Typically, it is a technical condition on the parameters γi that ensures that (5) doesn't happen.

Finally, if any of these constraints is violated, the regression parameters (αi, βi, γi) are adjusted relative to the median parameters (α50, β50, γ50). This approach guarantees that the medians, as the most important percentile estimates, have no regression constraints, and thus remain unaffected by possible adjustments.

Selection of ƒ and g

For each benchmark and clusterization type, several choices of the functions ƒ and g are considered in regression Eq. (4). The linear functions ƒ(x)=x and g(x)=x provide good results for all benchmarks, except for MT. Performance was measured via the average value of R2 for the regressions and the number of adjustments that had to be applied due to the regression constraints. Average R2 of all possible scenarios was around 0.55 for the test set, and parameters had to be adjusted in approximately 30% of cases. The methodology had the best performance for benchmark CT+20 and executions with average R2=0.62, and the worst performance for the benchmark MT and executions with average R2=0.45. It is assumed that the good performance for CT+20 has the following explanation. As already mentioned above, benchmark CT+20 is just a measure for general price movement and noise in the 20-day period. From this point of view, empirical cost percentiles for CT+20 might depend very little on the underlying trades or orders, and thus the dependence on momentum and size values of the stocks traded will be weak as well. As a consequence, βi and γi in Eq. (4) can be set to 0 so that Eq. (4) is transformed into


X_i = \alpha_i + \varepsilon_i, \quad i = 25, 40, 50, 60 \text{ or } 75.  Eq. (6)

The poor performance of MT can be explained by the completely different behavior of its cost percentiles. The prevailing midquote benchmark is, probably, the purest benchmark that can mimic the unperturbed price. For small trade sizes, execution prices are naturally bounded by the bid and ask quotes of a stock and thus, by definition, costs with respect to the prevailing midquotes are bounded as well. As a result, all five cost percentiles must lie very close to each other, which, unfortunately, results in the violation of the regression constraints. Through empirical studies, it was determined that the functions


f(x) = f_1(f_2(x)) \quad \text{and} \quad g(x) = |x|^{3/4},  Eq. (7)

where

f_1(x) = x^{1/10} \quad \text{and} \quad f_2(x) = \begin{cases} x^4 / 0.02^3, & x \le 0.02 \\ x, & x > 0.02 \end{cases}  Eq. (8)

in regression Eq. (4) for benchmark MT yield the most satisfactory results.

The function ƒ2 transforms sizes of less than 2% of ADV into even smaller values. The transformation has the desired effect that percentile cost estimates of small trade sizes do not differ significantly. ƒ1 and g model the overall non-linear behavior of Xi in the variables S and M, respectively.
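For reference, Eqs. (7) and (8) translate directly into code; this is a literal transcription for nonnegative sizes, with values of at most 2% of ADV (expressed as the fraction 0.02) compressed by ƒ2 before ƒ1 is applied.

```python
def f2(x: float) -> float:
    # Eq. (8): compresses sizes of at most 2% of ADV; continuous at x = 0.02.
    return x ** 4 / 0.02 ** 3 if x <= 0.02 else x

def f1(x: float) -> float:
    # Eq. (8).
    return x ** (1 / 10)

def f(x: float) -> float:
    # Eq. (7): composition used for benchmark MT.
    return f1(f2(x))

def g(x: float) -> float:
    # Eq. (7): |x| raised to the 3/4 power.
    return abs(x) ** 0.75
```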

FIGS. 8-10 illustrate typical plots for estimated and realized cost percentiles versus trade sizes for the benchmarks CT+20, VT and MT. FIG. 8 shows estimated and realized cost percentiles versus trade sizes. The estimates are based on all executions that had momentum values within the range (−0.02, 0.02). All estimates have been derived using regression technique WLS1. ƒ and g have been selected as ƒ(x)=x and g(x)=x. Costs are measured relative to benchmark CT+20.

FIG. 9 displays estimated and realized cost percentiles versus trade sizes. The estimates are based on all executions of Large cap stocks that had momentum M values within the range (−0.02, 0.02). All estimates have been derived using regression technique WLS1. ƒ and g have been selected as ƒ(x)=x and g(x)=x. Costs are measured relative to benchmark VT.

FIG. 10 shows estimated and realized cost percentiles versus trade sizes. The estimates are based on all executions that had momentum values within the range (−0.02, 0.02). All estimates have been derived using regression technique WLS1. ƒ and g have been selected according to Eq. (7). Costs are measured relative to benchmark MT. In FIGS. 8 and 10, the estimates are based on all executions with Momentum M values within the range (−0.02, 0.02).

FIG. 9 contains cost percentiles for all Large cap stocks and executions with Momentum M values within the range (−0.02, 0.02). As discussed above, the Figures show different behavior of cost percentiles for various benchmarks. Note that the scale on the y-axis varies considerably from benchmark to benchmark. Peer cost distributions for benchmark CT+20 are generally flat and heavy-tailed, and the form of the distribution does not change drastically as the trade size increases. This is different for the benchmarks VT and MT. In both cases, the standard deviations of peer cost distributions change considerably as trade sizes increase (for MT it increases, for VT it decreases).

FIG. 11 displays estimated and realized cost percentiles versus momentum for the benchmark MT. FIG. 11 illustrates that the cost percentiles depend on the variable Momentum in a non-linear fashion. Executions with high absolute values of short-term momentum appear to be more costly. The estimates are based on all executions. All estimates have been derived using regression technique WLS1. ƒ and g have been selected as ƒ(x)=x and g(x)=|x|^{3/4}.

Modeling the Tails of Peer Cost Distributions

It is well-known that empirical cost distributions are generally asymmetric and heavy-tailed. The asymmetry has been incorporated in the two-step methodology of the present invention by using five independent regression equations for the estimation of the 25th-, 40th-, 50th-, 60th- and 75th-percentiles. The heavy tails of the peer cost distributions can be modeled by Pareto distributions that are commonly used in extreme value theory (see, e.g., Embrechts, Klueppelberg and Mikosch [6]). The modeling of the left tail of a peer cost distribution is described below in terms of the distribution function F; the right tail can be modeled in a similar way.

Assuming a Pareto-type distribution tail behavior, the left tail of F is modeled as


F(x) = c\,(X_{25} + z - x)^{-\kappa}, \quad \text{for } x \le X_{25},  Eq. (9)

where c, z and κ, are positive constants determined from conditions:


0.25=F(X25),  (i)


0.15/(X40−X25)=F′(X25), and  (ii)


0.0001=F(−10,000).  (iii)

Condition (i) follows directly from the definition of X25 and Eq. (9), condition (ii) guarantees that the peer cost distribution function F is smooth at X25, and condition (iii) assumes that all peer cost distributions must have virtually finite ranges. The functional form of Eq. (9) never actually reaches 0, but condition (iii) makes costs below −10,000 basis points practically impossible.

Conditions (i), (ii) and (iii) define the left tail of the distribution function F uniquely and percentiles X1, . . . , X24 can be derived.
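A numerical sketch of how conditions (i)-(iii) pin down the left tail is given below. It assumes the tail form and the slope reading of condition (ii) as reconstructed above; eliminating c and z leaves a single equation in κ that can be bracketed and solved. The percentile inputs are hypothetical.

```python
import math
from scipy.optimize import brentq

def fit_left_tail(x25: float, x40: float):
    """Solve for (c, z, kappa) in F(x) = c * (x25 + z - x) ** (-kappa), x <= x25,
    from conditions (i)-(iii).  Eliminating c via (i) and z via (ii) leaves a
    single equation in kappa, solved by bracketing."""
    delta = x40 - x25

    def z_of(kappa: float) -> float:
        # Condition (ii): F'(x25) = 0.25 * kappa / z must equal 0.15 / delta.
        return (0.25 / 0.15) * kappa * delta

    def tail_eq(kappa: float) -> float:
        # Condition (iii): 0.25 * (z / (x25 + z + 10000)) ** kappa = 0.0001.
        z = z_of(kappa)
        return kappa * math.log(z / (x25 + z + 10_000)) - math.log(0.0004)

    kappa = brentq(tail_eq, 1e-6, 100.0)      # bracket suited to bp-scale inputs
    z = z_of(kappa)
    c = 0.25 * z ** kappa                     # condition (i): F(x25) = 0.25
    return c, z, kappa

# Hypothetical 25th and 40th percentile costs (in basis points) for one scenario.
c, z, kappa = fit_left_tail(x25=-50.0, x40=-20.0)
F = lambda x: c * (-50.0 + z - x) ** (-kappa)
print(round(F(-50.0), 4), round(F(-10_000.0), 6))   # 0.25 and 0.0001 by construction
```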

Since actual transaction costs are extremely noisy and heavy-tailed, a robust method to build peer group cost distributions is required. The present invention provides a methodology that estimates peer cost percentiles for six different benchmarks, two different clusterization types and all possible choices of scenarios. In the present embodiment, trading costs can be grouped by the factors Type, Market Capitalization, Side, Market, Size and Short-term Momentum. While the first four factors have discrete values as input, it may be assumed that the factors Size and Momentum M can have any values between [0, ∞) and [−1, 1], respectively.

The two-step approach provides smooth and robust estimates for all scenarios corresponding to any values of the numerical factors Size and Momentum. If Size and Momentum are subdivided into discrete groups S1, . . . , Sm and M1, . . . , Mn; m, n≧1, respectively, the procedure for estimating peer cost distributions remains similar to the continuous case. For any partition (Sj, Mk), 1≦j≦m and 1≦k≦n, compute the average Size and Momentum (S, M) for the partition and determine the five percentiles X25, . . . , X75 by inserting (S, M) in Eq. (4). All other percentile computations are identical to the continuous case.

The present embodiment filters out outliers in a natural way. Moreover, in contrast to a simple OLS regression, the two-step approach yields percentile estimates for the whole peer cost distribution. There is no theoretical justification for how to subdivide Momentum and Size groups optimally in the first step of the methodology. Regressing percentiles on the average Size and Momentum is an approximation only.

To measure performance of the two-step approach for an arbitrary scenario y=(y1, y2, y3, y4, y5) for Market Capitalization, Side, Market, Size and Momentum, one can compare the theoretical distributions with the corresponding empirical peer cost distributions (for y4 and y5 one can choose intervals [y4−Δy4, y4+Δy4] and [y5−Δy5, y5+Δy5]). Comparing the theoretical with the empirical distributions provides an idea of how well the methodology works. Empirical studies performed by the present inventors have shown that in most cases estimated peer cost distributions are very close to the actual distributions. Percentile estimates of scenarios with very flat distributions appear to be less reliable. In particular, peer cost estimates for benchmark CT+20 might differ significantly from the empirical peer cost characteristics.

FIGS. 12-15 illustrate four examples of theoretical and empirical cumulative peer cost distributions for different scenarios and benchmarks. The scenarios are abbreviated by X_Y_Z where the character X stands for the corresponding category Market Capitalization, Y stands for the category Side and Z represents the category Market, assuming the codes presented in FIG. 1A. The solid black line denotes the empirical cumulative distribution function in each figure. All estimated cumulative distribution functions have been derived using the two-step approach with WLS1. The functions ƒ and g in Eq. (4) have been selected as indicated above. The selected Size and Momentum values are specified by two intervals. Estimated cost percentiles have been built using the point in the center of each of these intervals.

FIG. 12 compares the estimated cumulative distribution function with the empirical counterpart. The distributions have been built using all executions that belong to Listed stocks (scenario A_A_N), have 40-50% of ADV trade sizes and values for short-term momentum between −0.05 and −0.03. Cost has been measured relative to CT−1. The estimated percentiles have been derived using the two-step approach with WLS1, and the functions ƒ and g in Eq. (4) have been chosen as ƒ(x)=x and g(x)=x. The figure shows that the distributions are concentrated around the median. Some discrepancies can be observed around the 25th- and 75th-percentiles. The discrepancies might have appeared because the constraints in Eq. (4) were not satisfied and thus the parameters (α25, β25, γ25) and (α75, β75, γ75) had to be adjusted. Another, simpler explanation might be that the scenario has only a restricted number of empirical observations. As a consequence, the empirical cumulative distribution might not be robust enough for comparison.

FIG. 13 presents the comparison for all executions that belong to Mid cap, Listed stocks with trade sizes between 0.4% and 0.6% of ADV and short-term momentum values around 0. Note that this is a scenario to which a lot of observations belong. Therefore, it can be expected that the empirical cumulative distribution function is robust. The plot shows that both cumulative distribution functions almost coincide. A similarly good performance can be observed in FIG. 14. In this figure, both distributions have been created using benchmark CT+1 and scenario S_S_N with y4=0.14 and y5=0, i.e., sell trades belonging to Small cap stocks with trade sizes around 14% of ADV and short-term momentum around 0. FIG. 15 illustrates the comparison for benchmark CT+20 and scenario S_A_Q with trade sizes around 1% of ADV and momentum values around −0.1. The chart again demonstrates an extraordinarily good fit for percentiles within the 25th- to 75th-percentile range. However, in contrast to the other figures, the empirical and estimated cumulative distribution functions do not coincide in the tails. A possible reason might be that the assumptions made for the tail behavior, discussed above, are not always applicable for benchmark CT+20. In particular, costs below −10,000 and above 10,000 b.p., respectively, may regularly occur and thus the threshold value 0.0001 is probably too low.

The presented charts can be viewed as a representative sample for assessing performance of the two-step approach. The methodology provides consistent cost percentile estimates across the selection of benchmarks, clusterization types and scenarios. By construction, estimates of the median are the most accurate, while percentiles in the tails are based on modeling assumptions and, therefore, can potentially differ from actual percentiles. One could suggest estimating more percentiles in equation (4). However, increasing the number of percentiles that are estimated by a regression equation has a significant drawback: the more regressions one adds to equation (4), the more adjustments and estimation errors can occur. The current method provides the most accurate percentile estimates around the center of the distribution as well as good percentile estimates overall.

Implementation Shortfall and Situational Ranking

In a further embodiment of the present invention, costs and rankings are based on Implementation Shortfall (IS) cost. In this approach, peers may be ranked within each group of orders, or "buckets," based on their weighted average implementation shortfall costs in basis points. Order weighting may be based on total value, e.g., total dollar value traded. Alternatively, performance may be evaluated using observation weighting. Thus, as illustrated in the exemplary order groupings of FIG. 18, the implementation shortfall cost of all orders meeting the characteristics of a particular order group, or "bucket," is computed. For example, Order 1 is shown to have a weighted average IS cost of 10 in the group of orders having size 0 to 1%, Small cap, neutral momentum and low volatility. A score, further discussed below, may be computed for each order.

Overall peer scores may be calculated by weighting each of the peer's individual order trade-group scores by that order's percent contribution to the peer's total volume. As illustrated in FIG. 19, Firm 1 may have, for example, five orders in the database, orders 3, 14, 21, 22 and 23, where the orders represent 20%, 5%, 25%, 40% and 10%, respectively, of Firm 1's total dollar volume. Applying the equation:

\mathrm{FirmOverallScore} = \sum_{\mathrm{Groups,\,Orders}} \mathrm{OrderScore} \times \mathrm{Order\,\%\,of\,FirmTotalVolume},  Eq. (10A)

Firm 1's overall score would be computed as 61.45. A peer's “normalized,” raw performance may be computed in basis points according to equation (10), below:


CP_{bp} = \mathrm{Score} \times \mathrm{Stdev}  Eq. (10),

where Stdev is as defined in Eq. (14) below and Score is

\mathrm{Score} = \frac{\sum_{\mathrm{buckets}} Z_{avg} \times ISCost_D}{\sum_{\mathrm{buckets}} ISCost_D}.  Eq. (11)

CPbp is a representation of “normalized,” raw IS costs and an index of dispersion around the mean of all peers' rankings. Score represents an accumulation of the peer's average z-scores across buckets based upon the proportion of the turnover market value of each bucket compared to total peer turnover. A peer's average z-score in a particular bucket, Zavg, may be computed as

Z_{avg} = \frac{\sum_{\mathrm{PeerOrdersInBucket}} \left( Z_{Order} \times ISCost_D \right)}{\sum_{\mathrm{PeerOrdersInBucket}} ISCost_D},  Eq. (12)

thus taking the proportional score from each order based upon the market value of turnover. The score of an order, ZOrder, is computed according to Eq. (17) below and the market value (generally represented in U.S. dollars) of the peer's turnover in the bucket, ISCostD, may be computed as

ISCost_D = \sum_{\mathrm{PeerOrdersInBucket}} \left( N_t \times \mathrm{Shares} \right),  Eq. (13)

where Nt is the benchmark for the order (in U.S. dollars) and Shares is the number of shares traded for the order.

In order to create the “index of dispersion” for the consolidated ranking of Eq. (10), a weighted standard deviation of each bucket (expressed in basis points), Stdev, is accumulated:

\text{Stdev} = \sum_{\text{buckets}} StdISCost_{bp} \times ISCost_D,   Eq. (14)

where StdISCostbp, the standard deviation of the per-order costs in the bucket (in basis points), is computed as:

StdISCost_{bp} = \text{stddev}\!\left( \frac{ISCost_N \times 10000}{ISCost_D} \right),   Eq. (15)

where ISCostN represents an order's implementation shortfall cost in a foreign exchange neutralized denomination (generally U.S. dollars):

ISCost_N = \sum_{\text{Executions}} \text{ExecShares} \times \left( N_t - \text{ExecPrice} \right).   Eq. (16)

The performance rank of each order, ZOrder, may be computed from the difference between the order's IS cost and the bucket mean (both in basis points), expressed in standard deviations (i.e., as a z-score):

Z_{Order} = \frac{\left( ISCost_N \times 10000 \right) / ISCost_D - ISCostMean_{bp}}{StdISCost_{bp}},   Eq. (17)

where the mean of the costs in the bucket, ISCostMeanbp, is computed according to:

ISCostMean_{bp} = \frac{\sum_{\text{PeerOrdersInBucket}} ISCost_N \times 10000 / ISCost_D}{\text{count}(*)}.   Eq. (18)
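Taken together, Eqs. (11) through (18) define a path from execution-level records to the consolidated CPbp value of Eq. (10). The following sketch is one way that computation could be coded for a single peer; the hypothetical orders, data layout and helper names are assumptions for illustration only. The sketch takes the bucket mean and standard deviation over all orders in a bucket, consistent with the description accompanying FIG. 20, and keeps the per-order and bucket-level uses of ISCostD as separate variables.

```python
# Hedged sketch of Eqs. (11)-(18): from execution-level records to a peer's
# Score (Eq. (11)) and normalized cost CP_bp (Eq. (10)).  Data layout and
# helper names are illustrative assumptions.  The disclosure reuses ISCost_D
# for per-order dollars (Eqs. (12), (15), (17), (18)) and for the peer's
# bucket-level turnover (Eqs. (11), (13), (14)); both uses are separated here.
from statistics import pstdev

# Hypothetical input: each bucket maps to a list of orders, each order being
# (executions, benchmark price N_t, shares, peer id); executions are
# (exec_shares, exec_price) pairs.
buckets = {
    "small cap / 0-1%": [
        ([(1000, 10.02), (500, 10.05)], 10.00, 1500, "Firm 1"),
        ([(2000, 9.98)],                10.00, 2000, "Firm 2"),
        ([(800, 10.10)],                10.00,  800, "Firm 1"),
    ],
    "small cap / 1-5%": [
        ([(5000, 20.30)],                20.00, 5000, "Firm 1"),
        ([(4000, 19.90), (1000, 20.00)], 20.00, 5000, "Firm 2"),
    ],
}

def is_cost_n(executions, n_t):
    """Eq. (16): FX-neutral IS cost of one order, summed over its executions."""
    return sum(shares * (n_t - price) for shares, price in executions)

def order_dollars(n_t, shares):
    """Per-order turnover N_t x Shares (the per-order use of ISCost_D)."""
    return n_t * shares

def cost_bp(order):
    """Per-order IS cost in basis points."""
    executions, n_t, shares, _peer = order
    return is_cost_n(executions, n_t) * 10000 / order_dollars(n_t, shares)

peer = "Firm 1"
score_num = score_den = stdev = 0.0
for orders in buckets.values():
    bps = [cost_bp(o) for o in orders]
    mean_bp = sum(bps) / len(bps)                 # Eq. (18): bucket mean (bps)
    std_bp = pstdev(bps)                          # Eq. (15): bucket std dev (bps)
    peer_orders = [o for o in orders if o[3] == peer]
    if not peer_orders or std_bp == 0:
        continue                                  # peer absent, or degenerate bucket
    z_num = z_den = 0.0
    for o in peer_orders:
        dollars = order_dollars(o[1], o[2])
        z_order = (cost_bp(o) - mean_bp) / std_bp # Eq. (17)
        z_num += z_order * dollars
        z_den += dollars
    z_avg = z_num / z_den                         # Eq. (12)
    bucket_dollars = z_den                        # Eq. (13): peer turnover in the bucket
    score_num += z_avg * bucket_dollars
    score_den += bucket_dollars
    stdev += std_bp * bucket_dollars              # Eq. (14)

score = score_num / score_den                     # Eq. (11)
cp_bp = score * stdev                             # Eq. (10)
print(f"{peer}: Score = {score:.3f}, CP_bp = {cp_bp:,.1f}")
```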

In another aspect of the present invention, a peer's costs may be ranked and reported relative to its peers in terms of Actual Cost and Relative Performance, as well as the environmental conditions, such as Liquidity and Momentum, under which trades were actually conducted. Peer costs, performance, and environmental conditions are indicated by percentile-delimited distributions. Each percentile represents an equal number of peers and a varying range of costs. With reference to the sample report illustrated in FIG. 20, the overall cost distribution for "Actual IS Cost" may represent all peers' costs in the database for the time period, in this example, the second quarter of 2009.

FIG. 20 further illustrates Firm 4's actual IS costs, situational performance and environmental conditions, compared to its peers as indicated by way of percentile bands. As shown under the “Actual IS Cost” column in FIG. 20, Firm 4's mean raw IS cost, computed according to Eq. (10B), is illustrated as (58) basis points, representing a value-weighted average IS cost for all of Firm 4's orders in the reporting period. No adjustment or handicap is made to the raw IS cost. Relative to the raw IS costs of its peers in the database, Firm 4 was ranked in the third quartile.

As further illustrated in the sample report shown in FIG. 20, a firm's performance may be compared to that of its peers in the context of similar trading circumstances. This “Performance” cost and “Situational Ranking,” as shown in FIG. 20 under the column so named, represent the firm's costs and ranking relative to the costs of a group of orders traded under similar circumstances. It represents the market-value weighted performance of the firm's ranking within the situational groups in which the firm actually traded. Thus, FIG. 20 illustrates, in terms of performance within the trading groups within which Firm 4 actually traded, that Firm 4 performed better than most, having a Situational Ranking cost of 41 basis points, placing Firm 4 in the top quartile.

In order to combine a firm's IS costs from the various situational groups while minimizing the adverse effect of groups having very wide cost distributions, a z-score, such as Zavg shown in Eq. (12), is computed for each firm in each group. This value represents the distance, in standard deviations, of a firm's value weighted mean IS cost in the group from the mean IS cost in that group.

A summary report may also provide information on the environment and market conditions in which a firm traded. For example, as further illustrated in FIG. 20, a firm's value weighted average liquidity and momentum environments for its transactions may be shown relative to the average liquidity and momentum environments of its peers. Firm 4 is shown to have traded, on average, in high liquidity and adverse momentum environments. Liquidity may be determined according to the ITG ACE® pre-trade cost model as discussed above and Momentum Mbp may be computed according to equation (2A) above.

A Data Summary section may further provide information regarding the coverage of a firm's transactions in the database. For example, FIG. 20 illustrates that Firm 4's turnover for the period was $1,108 MM, of which about 92% was analyzed to arrive at the reported costs.

“Drill-down” detailed reports, as illustrated in FIGS. 21-26, may also be created using the database. Such detailed reports illustrate firms', traders' and managers' cost performance in various situations or dimensions compared to that of their peers in like situations.

For example, a firm's performance may be compared against its peers in sell-side only transactions and buy-side only transactions. FIG. 21 illustrates such a “Side” analysis report for Firm 4. As illustrated in FIG. 21, Firm 4's “Actual IS Cost” for both buy-side and sell-side transactions was below median at (81) and (38) basis points, respectively. However, when compared to other firms trading in similar situations, Firm 4's “Situational Ranking” was in the third percentile for buy-side transactions (48% of its transactions in the database) and in the 28th percentile for sell-side transactions (52% of the transactions in the database).

A firm's performance may also be compared against its peers in various strata of order sizes as illustrated in FIG. 22 for exemplary Firm 4. As indicated by the figures in the “Rank” column of the “Performance” table and the triangles in the “Performance” graph, order size does not strongly correlate with Firm 4's cost performance as compared to Firm 4's peers trading in similar circumstances for the reporting period. Other detailed reports for a firm may include performance according to trading region, market cap, security type and various conditions of volatility and momentum.

According to another aspect of the invention, multi-dimensional detailed reports may be prepared from the database illustrating a firm's performance in multiple strata of, for example, two dimensions such as momentum and average daily volume. FIG. 23 illustrates an exemplary "Momentum vs. ADV" report for Firm 4, showing the "situational value add" of Firm 4 in various momentum situations (e.g., where there is very adverse momentum as shown in column A) and/or in various average daily volume situations (e.g., where the average daily volume is less than 10% as shown in row A). FIG. 23 further illustrates a graphical indication of % Total Value traded under the stated conditions. For example, the pie slice graphic in row A, column B indicates that the total value of orders traded by Firm 4 under Adverse momentum and less than 10% average daily volume was more than a quarter of Firm 4's total value traded during the reporting period. Under these conditions, Firm 4's performance, when compared to its peers, was in the 22nd percentile. Other multi-dimensional detailed reports are possible, for example, Momentum vs. Liquidity.

In another aspect of the invention, the collected transaction data further contain an indication of the particular manager or trader responsible for an order. In this way, each dimensional report can be broken down by trader or by manager.

For example, reports may also be generated to show the cost performance of selected traders in various trading scenarios. As illustrated in FIG. 24, an exemplary cost performance of Traders A-G is reported in nine Momentum scenarios ranging from very adverse, “Adv −−−,” to very favorable, “Fav +++.” As can be seen in FIG. 24, only Trader C managed to perform above median under adverse momentum, landing in the 46th percentile as indicated in column D; all of the other traders' cost performance percentiles were below median under adverse momentum (columns A-D). Additional trader-oriented reports for showing cost performance in various trading scenarios may be generated from the database. For example, performance relative to peers may be illustrated for various strata of order sizes and in various regions. As illustrated in FIG. 25, cost performance may be shown according to Side, Volatility and Market Cap as well. Trader-oriented reports may also indicate the value traded by each trader as a percentage of the total traded by all of the traders. For example, as indicated by the value 21% in the column “% Total Value” in FIGS. 24 and 25, Trader C traded 21% of the total value traded by Traders A-G.

As with reports for traders, reports may also be generated to show the cost performance of selected managers in various trading scenarios. For example, as illustrated in FIG. 26, the cost performance of Managers A-E is reported in eight trading regions. Manager D is shown to have managed 21% of the total value of Managers A-E and to have an overall rank in the 23rd percentile compared to peers managing trades under similar circumstances for the reporting period. Manager D is further shown to have performed in the 12th percentile in Japanese and U.S. trading. Other manager-oriented reports are possible. For example, manager performance relative to peers may be illustrated for various momentum, side, volatility, and market cap scenarios.

Percentile rankings for a manager or trader compared to the other managers or traders in the peer database for the reporting period may be depicted relative to Order Size, Momentum, Region, Side, Volatility and Market Cap. In this aspect, the peer group consists of traders or of managers rather than of investment institutions.

One skilled in the art will understand that the above methodologies may be implemented in any number of ways. For example, referring to FIG. 16, a system 100 for estimating transaction costs for peer institutions can include a processor unit 102 and a PGD database 104, coupled with a network 106, such as the Internet. Institutional traders use various client systems for performing securities transactions. For example, a trader may use a client interface 108 to trade on NASDAQ 200.

Tools can be used to collect trade data. For example, ITG markets a product called TCA® (transaction cost analysis), which can collect and analyze transaction data. This tool may be used to collect transaction data and download the data to PGD database 104. Costs may be calculated in real time as transaction data become available, or the data can be collected and processed later in batches. The data may be separated or organized according to cost factors, such as Size, Type, etc. Data may be subject to validation, quality filtration, and canonicalization. In one embodiment, data may be collected from the Order Management Systems of institutional investors using ITG's Extract Manager and pre-processed using an approach such as ITG's Unified Data Model (UDM).
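Validation and quality filtration might, for example, discard records with missing or implausible fields before any cost computation; the sketch below is a generic illustration and does not reflect the actual rules of the products named above.

```python
# Generic sketch of a validation / quality-filtration step applied before cost
# calculation.  Field names and rules are illustrative assumptions only.
def passes_quality_filters(record):
    """Keep only records with positive size and price and the required fields."""
    return (
        record.get("shares", 0) > 0
        and record.get("exec_price", 0) > 0
        and bool(record.get("symbol"))
        and record.get("timestamp") is not None
    )

raw = [
    {"symbol": "XYZ", "shares": 1000, "exec_price": 10.02, "timestamp": "2009-04-01T09:31:00"},
    {"symbol": "XYZ", "shares": 0, "exec_price": 10.05, "timestamp": "2009-04-01T09:32:00"},
]
clean = [r for r in raw if passes_quality_filters(r)]
print(f"kept {len(clean)} of {len(raw)} records")
```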

Periodically, such as weekly, monthly or quarterly, the computations and statistical analysis described above are performed on the transaction data to generate costs for each institution under each scenario. Data may be grouped according to size and momentum, and percentiles may be regressed using linear interpolation and/or computed using other techniques described above. The data can be presented to a user in any number of ways.
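Where percentiles are computed by interpolation, one generic possibility is to place a peer's average cost within the sorted distribution of all peers' averages for the group and interpolate linearly between the bracketing observations. The sketch below is an illustration under that assumption, not the specific regression or interpolation procedure used above, and the cost figures are hypothetical.

```python
# Generic sketch: place one peer's average cost within the distribution of all
# peers' average costs for the same group, interpolating linearly between the
# sorted observations.  Illustrative only; not the product's actual procedure.
from bisect import bisect_left

def percentile_rank(peer_cost, all_costs):
    """Return the peer's percentile (0-100) within all_costs, interpolated."""
    xs = sorted(all_costs)
    if peer_cost <= xs[0]:
        return 0.0
    if peer_cost >= xs[-1]:
        return 100.0
    i = bisect_left(xs, peer_cost)
    frac = (peer_cost - xs[i - 1]) / (xs[i] - xs[i - 1])
    return 100.0 * (i - 1 + frac) / (len(xs) - 1)

group_costs = [12.0, 18.5, 22.0, 30.0, 41.0, 55.0, 73.0]  # hypothetical peer averages (bps)
print(f"A peer at 25 bps falls near the {percentile_rank(25.0, group_costs):.0f}th percentile")
```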

Accordingly, processor unit 102 may be appropriately outfitted with software and hardware to perform the processes described above, and configured to communicate with database 104 as necessary. One skilled in the art will understand that the system may be programmed using a number of conventional programming techniques and may be implemented in a number of configurations, including centralized or distributed architectures.

Peer investment institutions may access the PGD via a client interface. An exemplary display is shown in FIG. 17. Display 300 shows a peer institution's performance for a particular benchmark relative to the entire cost distribution. The X-axis is Size, by percentile, and the Y-axis is cost relative to the benchmark, in basis points. The cost distribution can be represented by bars, or in any other graphical fashion, to show the peer institution's estimated costs with reference to the entire PGD. For example, graph 300 shows the current peer's performance as being good, relative to the entire PGD, for transaction sizes of less than 1%, 1% to 5%, 5% to 10%, 25% to 50% and for transaction sizes over 50%. This particular institution performs poorly for transaction sizes of 10% to 25%. This is merely an example of one way that meaningful results can be presented graphically, and one having ordinary skill in the art will readily recognize that once costs are calculated for a particular institution, groups and percentiles, there are many ways to present the results, either graphically or otherwise, in a meaningful fashion.

With reference to FIG. 27, an exemplary method of creating a peer group database is disclosed. In step S10, an appropriately programmed computer, such as processor unit 102 or another processing device, receives security transaction data. The data may comprise transaction data including identities of traded securities, transaction order sizes, execution prices, peer identities and timestamps. In step S20, the data are grouped into orders and, in step S30, costs and environmental factors are computed for each order. Environmental factors may be, for example, momentum, volatility, and/or liquidity. In step S40, orders are grouped into groups of orders having common situational dimensions. In step S50, scores and ranks are computed for each peer in each situational group. In step S60, overall peer rankings may be computed. In step S70, some or all of the transaction data, order data and computed data are stored, for example, in a database such as database 104. Step S10 may optionally be preceded by a step S05 of performing a data extraction process on one or more peer Order Management Systems. A step S80 of displaying selected data may follow step S70. Optionally, displaying selected data may be performed at other stages of the method.
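As a structural illustration of steps S20 and S40, the following sketch groups hypothetical execution records into orders and then assigns orders to situational buckets by size band and momentum label. The field names, the ADV figure and the bucket boundaries are illustrative assumptions.

```python
# Hedged sketch of steps S20 and S40 of FIG. 27: group fills into orders, then
# orders into situational buckets.  Field names and boundaries are assumed.
from collections import defaultdict

fills = [  # hypothetical execution records (output of step S10)
    {"peer": "Firm 1", "order_id": "A1", "symbol": "XYZ", "shares": 1000, "exec_price": 10.02},
    {"peer": "Firm 1", "order_id": "A1", "symbol": "XYZ", "shares": 500, "exec_price": 10.05},
    {"peer": "Firm 2", "order_id": "B7", "symbol": "XYZ", "shares": 2000, "exec_price": 9.98},
]

# S20: group fills into orders, keyed by peer and order id.
orders = defaultdict(list)
for f in fills:
    orders[(f["peer"], f["order_id"])].append(f)

# S40: assign each order to a situational bucket.  Order size is expressed as
# a fraction of average daily volume (ADV); the bands follow the size strata
# mentioned above, and the momentum label is assumed precomputed per order.
def size_band(pct_adv):
    bounds = [(0.01, "0-1%"), (0.05, "1-5%"), (0.10, "5-10%"), (0.25, "10-25%"), (0.50, "25-50%")]
    return next((label for limit, label in bounds if pct_adv < limit), ">50%")

def bucket_key(order_fills, adv, momentum_label):
    shares = sum(f["shares"] for f in order_fills)
    return (size_band(shares / adv), momentum_label)

buckets = defaultdict(list)
for key, order_fills in orders.items():
    buckets[bucket_key(order_fills, adv=100_000, momentum_label="neutral")].append(key)

print(dict(buckets))
```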

Thus, the present invention has been fully described with reference to the drawing figures. Although the invention has been described based upon these preferred embodiments, certain modifications, variations, and alternative constructions would be apparent to those of skill in the art while remaining within the spirit and scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.

REFERENCES

The following documents were referenced above throughout the present disclosure by author and [number]. The entire contents of each of the following publications are incorporated herein by reference for the purpose referenced above:

  • [1] Berkowitz, S., Logue, D. and Noser, E. (1988), “The total cost of transactions on the NYSE,” Journal of Finance 43.n1 (March 1988), 97-112;
  • [2] Breen, W. J., Hodrick, L. S. and Korajczyk, R. A. (2002), “Predicting equity liquidity,” Management Science 48.4 (April 2002), 470-483;
  • [3] Chakravarty, S., Panchapagesan, V. and Wood R. A. (2002), “Has decimalization hurt institutional investors? An investigation into trading costs and order routing practices of buy-side institutions,” a copy of which is available in the image file wrapper of U.S. patent application Ser. No. 10/674,432, now U.S. Pat. No. 7,539,636, as copied from http://www.nber.org/˜confer/2002/micro02/wood.pdf;
  • [4] Chan, L. K. C. and Lakonishok, J. (1995), “The behavior of stock prices around institutional trades,” Journal of Finance 50.n4 (September 1995), 1147-1174;
  • [5] Domowitz, I., Glen, J. and Madhavan, A. (2001), “Global equity trading costs,” a copy of which is available in the image file wrapper of U.S. patent application Ser. No. 10/674,432, now U.S. Pat. No. 7,539,636, as copied from http://www.itginc.com/research/whitepapers/domowitz/globaleqcost.pdf;
  • [6] Embrechts, P., Klueppelberg, C. and Mikosch, T. (1997), “Modeling Extremal Events for Insurance and Finance,” Springer, Heidelberg;
  • [7] ITG Inc. (2003) ACE™—Agency Cost Estimator, ITG Financial Engineering;
  • [8] Keim, D. B. and Madhavan, A. (1996), “The upstairs market for large-block transactions: analysis and measurement of price effects,” Review of Financial Studies 9.1 (Spring 1996), 1-36;
  • [9] Keim, D. B. and Madhavan, A. (1997), “Transaction costs and investment style: an inter-exchange analysis of institutional equity trades,” Journal of Financial Economics 46.n3 (December 1997), 265-292;
  • [10] Lert, P. (2001), "Methods of measuring transaction costs," Investment Guides, Spring 2001, 44-48;
  • [11] Madhavan, A. (2002), "VWAP strategies," Investment Guides, Spring 2002, 32-39;
  • [12] Perold, A. (1988), “The implementation shortfall: paper versus reality,” Journal of Portfolio Management 14.3 (1988), 4-9;
  • [13] Schwartz, R. A. and Steil, B. (2002), “Controlling institutional trading costs: we have met the enemy and it is us,” Journal of Portfolio Management 28.3 (Spring 2002), 39-49;
  • [14] Teitelbaum, R. (2003), “Know a fund's cost? Look deeper,” The New York Times, Feb. 9, 2003;
  • [15] “Transaction Costs—A Cutting-Edge Guide to Best Execution” (2001) Investment Guides, Spring 2001, edited by Brian R. Bruce, Institutional Investor Inc;
  • [16] “Transaction Performance—The Changing Face of Trading” (2002), Investment Guides, Spring 2002, edited by Brian R. Bruce, Institutional Investor Inc.; and
  • [17] Werner, I. M. (2000), "NYSE execution costs," a copy of which is available in the image file wrapper of U.S. patent application Ser. No. 10/674,432, now U.S. Pat. No. 7,539,636, as copied from http://www.rufrice.edu/˜jgsfss/Werner.pdf.

Claims

1. A method for creating a peer group database, said method comprising the computer implemented steps of:

receiving, via an electronic network, security transaction data of a plurality of investment institutions, said transaction data including identities of traded securities, transaction order sizes, execution prices, peer identities and timestamps;
grouping, using a computer processor, said transaction data into groups of orders;
calculating, using said computer processor, an order cost and at least one environmental factor for each order of said groups of orders;
calculating each peer's average order cost within each group of orders in which the peer has one or more orders; and
storing said calculated data in a database.

2. The method as recited in claim 1, wherein the order cost is calculated relative to a metric selected from the group of metrics comprising:

a closing price CT−1 of the security on a day prior to the day of the execution of the corresponding order;
a volume-weighted average price VWAP across all trades for the security during the day of execution of the corresponding order;
a closing price CT+1 of the security on the first day after the day of execution of the corresponding order;
a closing price CT+20 of the security on the 20th day after the day of execution of the corresponding order;
an open price OT of the security on the day of execution of the corresponding order;
a prevailing midquote MT of the security prior to the execution time of the corresponding order;
a Participation Weighted Price PWP of the security based on a pre-determined participation factor; and
an Implementation Shortfall cost Nt of the security based on the next composite tick price or next composite open price.

3. The method as recited in claim 1, wherein said costs are calculated in real-time as transactions are executed.

4. The method as recited in claim 1, wherein average costs are computed only for orders executed or initiated in a pre-determined time period.

5. The method as recited in claim 1, wherein the at least one environmental factor is selected from the group consisting of Momentum, Volatility, and Liquidity.

6. The method according to claim 1, wherein peer average order costs are value-weighted.

7. The method according to claim 1, wherein peer average order costs are observation weighted.

8. The method according to claim 1, wherein at least some of the security transaction data is received from an institutional trader's order management system.

9. The method according to claim 1, wherein each peer is a like entity, said like entity selected from the group of like entities consisting of an investment institution, an investment manager, and an investment trader.

10. The method according to claim 1, further including a step of computing a peer's aggregate cost performance based on z-scores for the peer's average order costs.

11. The method according to claim 10, further including a step of computing a peer rank.

12. The method according to claim 11, further including a step of generating a report for displaying on a display device, wherein the report includes a rank and an aggregate cost performance of a peer.

13. A peer group investment transaction cost comparison system, said system comprising:

a database; and
a processor coupled to a network, said processor configured to: receive, via the network, security transaction data of a plurality of investment institutions, said transaction data including identities of traded securities, transaction order sizes, execution prices, identities of peers and timestamps; group said transaction data into groups of orders; calculate an order cost and at least one environmental factor for each order of said groups of orders; calculate each peer's average order cost within each group of orders in which the peer has one or more orders; and store said data in the database.

14. The system according to claim 13, wherein at least some of the security transaction data is transmitted to the processor from an institutional trader's order management system coupled to the network.

15. The system according to claim 13, wherein the processor is further configured to:

calculate for each peer investment institution an overall average cost, a situational average cost, and one or more average environmental factors; and
generate a report for display on a display device, said report comprising a selected investment institution's average cost, situational average cost, and one or more average environmental factors and data representing peer investment institutions' overall average costs, situational average costs, and one or more average environmental factors.
Patent History
Publication number: 20110196773
Type: Application
Filed: Sep 7, 2010
Publication Date: Aug 11, 2011
Applicant: ITG SOFTWARE SOLUTIONS, INC. (Culver City, CA)
Inventors: Jon FATICA (Culver City, CA), Michael Williams (Culver City, CA), David Turner (Culver City, CA), Kevin O'connor (Culver City, CA), Joseph Emanuelli (Upper Montclair, NJ), Milan P. Borkovec (Boston, MA), Ananth Madhavan (New York, NY), Artem V. Asriev (Winchester, MA), Kumar Giritharan , Thomas Strande
Application Number: 12/876,929
Classifications
Current U.S. Class: Trading, Matching, Or Bidding (705/37)
International Classification: G06Q 40/00 (20060101);