By Amir Khwaja, Clarus
Originally published on Clarus Financial Technology blog
Compression list trading volumes have continued their upward trend this year, and in this article I will look at what the data shows, both in terms of volumes and SEF market share.
ON SEF COMPRESSION LISTS BY MONTH
Let's start with an SDRView Res chart of monthly gross notionals in G4 currencies (5 Jan to 20 Mar).
A rising trend, with each month higher than the last
February was $180 billion
March with a week left to report, is > $200 billion
Of which USD is 75%, JPY 20%, EUR 5% and GBP < 0.1%
Two caveats: first, our identification of compression packages from the SDR trades requires a number of assumptions, so it is not 100% accurate; second, block trade rules mean that notionals for these are capped below their actual size. This means the actual volumes for USD will be higher, but the trend holds true.
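The capping effect can be shown with a toy calculation. The cap value below is hypothetical; actual CFTC block-trade caps vary by currency and tenor.

```python
# Toy illustration of why capped notionals understate gross notional.
CAP = 170  # hypothetical dissemination cap, $ millions

trades = [50, 120, 400, 900]  # actual trade notionals, $ millions

reported = sum(min(t, CAP) for t in trades)  # what the SDR disseminates
actual = sum(trades)                         # true gross notional

print(reported, actual)  # 510 1470 -> reported understates actual
```

Any trade above the cap shows up at the cap, so the larger the typical trade, the bigger the understatement.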
Update: For JPY, a drill-down into the trades shows that these are single-period trades with future dates executed at the same time; so rather than Swaps, they look like FRAs, perhaps executed as part of an FRA reset/match exercise but reported incorrectly as Swaps. (More on this later.)
Looking at trade counts, we see:
So a substantial number of trades, meaning a good number of compression lists are being traded, each with between 2 and 90 trades. See my earlier blog, Swap Compression and Compaction on TrueEx and Tradeweb SEFs, for the rationale for these.
Interestingly if we now show volumes excluding Compression for the same period, we see:
From which it is obvious that Compression Lists represent a growing share of On SEF trading activity.
It will be interesting to see the final percentage for March.
OFF SEF COMPRESSION LISTS BY MONTH
Let's now look at the same analysis for Off SEF, first for Cleared Swaps.
Which shows that similar amounts of these compression lists are being traded Off SEF, perhaps not surprising as there is no mandatory reason for compressions to be On SEF. So we assume these are being transacted directly between clients and dealers, dealer to dealer, or brokered Off SEF by IDBs.
Second, if we just select Uncleared, or Bilateral, Swaps, we see:
Also substantial volumes, but the assumption that these are actually compression lists and not some other activity, such as backloading, is a weak one, as the whole point of compression lists is for CCPs to net opposite cleared deals, a rationale that does not exist for bilateral deals.
In general, we can say that as much compression list activity is Off SEF as On SEF, which is surprising, as we would expect the superior automation of On SEF to make it much more compelling than Off SEF.
Time will tell and we will keep an eye on the Off SEF activity.
ON SEF BY WEEK
A chart of weekly On SEF volumes shows very clearly the positive upward trend.
With USD showing steady high volumes in each of the past five weeks, while JPY and EUR volumes are more up and down.
We know that these On SEF compression lists are traded on Tradeweb, TrueEx and BSEF.
So let's look at each of these in turn.
For BSEF, as only BSEF reports to BSDR, we can use SDRView to look at compression trade activity on BSDR alone.
The USD $10b in March can be compared to the $150b of USD (75% of $200b) we see from the first chart in this article, meaning that $10b out of $150b, or 7%, of the USD compression trading we see on SDR so far in March is on BSEF.
However, before settling on the $10b, can we estimate how much larger the real number would be without the cap rule for block trades? Luckily, in this case we can quantify this exactly, as BSEF separates out its compression trade activity in its daily SEF Reports.
Using SEFView we can drill-down to that daily data in March and sort the list to get Compression trades:
Summing these we get a gross notional of $21.4 billion. So our actual figure is twice the SDR figure. We will need to use this rule of thumb when we look at the next two SEFs.
Now, both TrueEx and Tradeweb compression lists are reported to the DTCC SDR, so we have no obvious way to separate out which comes from which. However, TrueEx does report its PCT volumes, and using SEFView we can see the daily reported figures between 1-20 March.
And the same but by Currency.
In USD TrueEx has $102 billion
Interestingly CAD is $544 million
And CHF, EUR, NOK, SEK have small amounts
Nothing in JPY or GBP
Comparing the $102 billion in USD on TrueEx with the $21 billion on BSEF, we can say that TrueEx has almost five times the volume of BSEF in the period 1-20 March.
Tradeweb does not break out its compression volumes in its daily SEF reports, so we cannot use SEFView to get a figure for these.
This leaves us with SDRView and the problem of capped trades.
First, let's start with EUR, JPY and GBP, where SDR has $11b, $38b and $240m respectively in the 1-20 March period. As there is nothing on BSEF in these currencies and just $34m in EUR on TrueEx, we can say that all of the volume in EUR, JPY and GBP is on Tradeweb.
Update: However looking at TW SEF volume by Currency we see that only $350m is reported in JPY. This confirms our earlier conjecture that the $38b of JPY is not compression activity.
Second, let's look at USD. What information do we have?
$150 billion is reported to SDR
But we know this is understated due to capped notionals
$10 billion of this is from BSEF
$21 billion is the actual (uncapped) BSEF figure
$102 billion is the actual (uncapped) TrueEx figure
We do not know what the capped TrueEx figure is
We do not know the capped or actual Tradeweb figures
So we need to make an informed guesstimate.
If we assume the BSEF 1:2 ratio of capped to uncapped holds for TrueEx and Tradeweb, we would estimate that TrueEx accounts for $51b of the $150b in SDR, which leaves $89 billion of the $150b attributable to Tradeweb, making the actual Tradeweb amount $178 billion.
Given that TW reported $422 billion in the period 1-20 March, this makes compression trading 42% of its volume, which strikes me as a little higher than the 30% we have observed in the past. If we used a 1.75 multiplier instead of 2 for TrueEx and Tradeweb, we would get Tradeweb volume of $143b and a 34% compression-to-trades volume ratio for Tradeweb.
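The allocation arithmetic can be sketched as a quick calculation, in $ billions. The capped-to-uncapped multipliers are assumptions carried over from the BSEF observation, not reported data.

```python
# Back-of-envelope attribution of SDR USD compression volume, $ billions.
SDR_USD_TOTAL = 150.0  # capped USD compression notional seen on SDR
BSEF_CAPPED = 10.0     # BSEF portion of the SDR figure
TRUEX_ACTUAL = 102.0   # TrueEx actual (uncapped) figure from its SEF reports

def tradeweb_actual(multiplier):
    """Infer TrueEx's capped figure, attribute the SDR remainder to
    Tradeweb, then gross it up by the same assumed multiplier."""
    truex_capped = TRUEX_ACTUAL / multiplier
    tradeweb_capped = SDR_USD_TOTAL - BSEF_CAPPED - truex_capped
    return tradeweb_capped * multiplier

print(round(tradeweb_actual(2.0)))   # 178
print(round(tradeweb_actual(1.75)))  # 143
```

The whole estimate hinges on the multiplier, which is why the resulting market shares are quoted as ranges rather than point figures.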
So we can guesstimate current market share in USD compression list trading as:
Tradeweb with 54% to 59%
TrueEx with 34% to 38%
BSEF with 7% to 8%
On SEF Compression list trading has increased month on month in 2015.
USD represents the majority of this activity.
Compression list trading is also a growing percentage of USD On SEF Swap trading (up from 7% to 12%).
Off SEF Compression list trading is as large if not larger than On SEF.
Weekly volumes show a strong pickup from Feb 16 onwards.
Bloomberg reported $21b in USD between 1-20 March, making its USD share 7% to 8%.
TrueEx has some CAD, CHF, EUR, NOK and SEK, but USD is the main currency, with $102b in USD between 1-20 March, a share of 34% to 38%.
Tradeweb has significant volume in USD, JPY, EUR & GBP, with $143b-$178b in USD between 1-20 March, a share of 54% to 59%.
By Amir Khwaja, Clarus
Originally published on TABB Forum
The advent of Swap Execution Facilities and Swap Data Repositories means that it is now possible to implement effective trade surveillance in the OTC swaps market. US SDR public dissemination means that most trades are available within a few minutes of execution; and as the market is characterized by large trades in low frequency, this is a perfectly adequate time frame for real-time surveillance.
The advent of Swap Execution Facilities (SEFs) and Swap Data Repositories (SDRs) means that it is now possible to implement effective trade surveillance in the OTC swaps market, and to do so in an analogous manner to the futures market.
Swap Dealers or Major Swap Participants can and should compare their executed trades with the trades reported in the market, a role that usually falls to a surveillance manager in the compliance department.
In this article I will look at post-trade surveillance, both historical and real-time.
USD Swaps on March 9, 2015
Let’s start with the trades reported to US SDRs on March 9, focusing just on USD Fixed Float Swaps:
Which shows that out of the 1,553 trades reported to DTCC and BSDR, 884, or 57%, were on-SEF cleared.
So is it as simple as starting with these and identifying which are our trades and comparing their prices and notionals with the rest?
Unfortunately, no – for a few reasons:
Unlike futures, there is no single instrument identifier (e.g., EDH15); rather, there are 40+ fields that we need to use to determine what specific type of trade it is.
Even when looking only at what SDRs term “Price Forming transactions,” we need to separate out package trades, such as Compressions, Curves and Butterflies.
Cancel and Correct transactions need to be accounted for.
We now enrich the SDR public dissemination data, using SDRView Res, with additional fields to address precisely these issues. In the drill-down we show the enriched fields below:
SDR source, one of DTCC or BBG.
Subtype of the Swap, e.g., Spot for the standard trade or IMM or MAC or Fwd for others.
MAT as true or false to signify Made Available to Trade.
Package as Compression, Curve or Butterfly.
Package ID to link the trade legs of a package.
Tenor to identify 5Y trades rather than having to use the maturity date.
Forward Term to identify IMM, MAC and Fwd Start term.
DV01 as a useful risk measure.
All of which make our task much easier.
For example, if we know we have executed some vanilla 5Y Swaps on March 9, we can now extract just these from SDRView and compare the execution times, prices and notionals of our trades against the market at large:
What does the data enrichment do to our population of 1,553 trades? The table below shows precisely this:
So only 423 out of the 884 trades that are on-SEF are standard spot starting vanilla trades in MAT tenors.
These are the ones we should first focus on, as it is likely we will find comparable trades to our own.
IMM and MAC, with 47 and 69, are also interesting, assuming we trade these.
Curves and Butterfly with 112 and 63 legs (56 and 21 packages) are also interesting for comparisons.
But after these types, a simple price comparison is not practical or possible.
For Forwards or Non-Standard trades, we are unlikely to find comparable terms to our own trades and consequently cannot do a meaningful price or size comparison. This means the only course of action is to calculate a fair price for these from the standard trades – which is a process we are currently working on, using the common NPV measure.
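The comparison described above can be sketched in a few lines. This is illustrative only: the field names and the tolerance are hypothetical, not the actual SDRView export schema or any recommended threshold.

```python
# Toy off-market check: compare our 5Y vanilla trades against the day's
# median market price for the same bucket, excluding non-spot subtypes
# and package legs, as the article suggests.
from statistics import median

market = [  # hypothetical enriched market trades for the day (prices in %)
    {"subtype": "Spot", "tenor": "5Y", "package": None,    "price": 1.855},
    {"subtype": "Spot", "tenor": "5Y", "package": None,    "price": 1.860},
    {"subtype": "Spot", "tenor": "5Y", "package": None,    "price": 1.858},
    {"subtype": "IMM",  "tenor": "5Y", "package": None,    "price": 1.900},
    {"subtype": "Spot", "tenor": "5Y", "package": "Curve", "price": 1.840},
]
our_trades = [{"tenor": "5Y", "price": 1.857}, {"tenor": "5Y", "price": 1.930}]

# Keep only standard spot-starting vanilla trades
comparable = [t["price"] for t in market
              if t["subtype"] == "Spot" and t["package"] is None]
mid = median(comparable)

TOLERANCE = 0.02  # 2bp band; an arbitrary illustrative threshold
alerts = [t for t in our_trades if abs(t["price"] - mid) > TOLERANCE]
print(alerts)  # only the 1.930 trade is flagged for review
```

In practice the comparison window would also be bounded by execution time, and flagged trades would go to a surveillance manager for review rather than being treated as violations.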
An interesting view is one of gross notional by Swap types and Packages for the same March 9 date:
Historical and Real Time
Surveillance should be performed daily on all of today's trades: historically, looking for patterns of behavior, and in real time, with alerts for possible off-market transactions.
US SDR public dissemination means that most trades are available within a few minutes of execution; and as the market is characterized by large trades in low frequency, this is a perfectly adequate time frame for real-time surveillance.
There are many uses of the SDR public dissemination data.
One of the most interesting is Trade Surveillance.
However, this is not easily done on the raw SDR data.
Clarus’s SDRView now provides enriched data. This serves to isolate packages and differentiate between types of Swaps, which makes trade surveillance practical and possible.
Swaps trading firms should now be able to improve their existing surveillance process – another benefit resulting from the markets’ investment in real-time trade reporting.
By Miles Reucroft, Thomas Murray
Originally published on TABB Forum
Regulatory equivalence means that European banks clearing their trades via CCPs in equivalent jurisdictions can do so without the need for added compliance burdens. But ESMA has not yet judged the US standards to be equivalent to its own. As a result, the cost of clearing at US institutions for European banks could be cripplingly expensive.
The European regulator, ESMA (the European Securities and Markets Authority), has so far deemed the rules and their regulatory outcome around central clearing to be equivalent to its own standards in Japan, Singapore, Australia and Hong Kong, with the frameworks in Canada, Mexico and India expected to follow shortly. The big name missing from this list is the US.
Equivalence is important since it means that European banks clearing their trades via CCPs (central counterparty clearinghouses) in equivalent jurisdictions can do so without the need for added compliance burdens, since their domestic regulator (ESMA) recognizes the jurisdiction in which they are trading as being up to an equivalent standard as regards the regulatory framework. For example, a European bank clearing trades at a Japanese CCP can do so within a similarly robust safety framework as it does at an approved European CCP – according to ESMA’s view of the matter, at any rate.
European banks and market participants can also clear their trades at CCPs in those equivalent jurisdictions without being subjected to increased capital requirements on their balance sheets to margin the trades. Margin, both initial and variation, is posted at the CCPs to cover potential losses on positions.
Europe has linked its regulatory framework concerning central clearing to the new capital requirements laid out in CRD IV, meant to reflect Basel III in appropriate circumstances. To “de-risk” the system, these requirements greatly encourage banks to put their OTC trades through CCPs, since this is seen as the safest way to operate – central clearing of OTC contracts was one of the 2009 G20 responses to the global financial crisis, with a CCP acting as a buyer to every seller and a seller to every buyer. The implementation of globally harmonized regulatory frameworks around these systemically important infrastructures has not been smooth, with a battle having been fought over extraterritoriality, the manner in which one jurisdiction can impose its rules upon another. FATCA is an example of this emanating from Washington.
If banks are using Qualified CCPs (QCCPs), those approved by ESMA in Europe, then the banks can subject their trades to a 2 percent risk-weighted capital charge. If they are clearing through a CCP that is not recognised by ESMA, then this capital charge leaps up. This is to reflect the greater perceived risk exposure generated via clearing at a non-QCCP. The QCCP title is handed down by a CCP’s local regulator, so for the purposes of equivalence with Europe, the regulatory framework in which a CCP operates still needs to be approved by ESMA.
CME Group has publicly estimated that this could result in capital charges as much as 30 times higher for banks not using QCCPs. It is, therefore, very important for US clearing houses to be recognized in Europe, since if they are not, the cost of clearing at US institutions for European banks could be cripplingly expensive and they will walk away.
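A stylized calculation shows how such a gap can arise. The 2% QCCP risk weight comes from the discussion above; the non-QCCP weight below is a hypothetical value chosen to produce roughly the 30x gap CME describes, not a quoted regulatory figure.

```python
# Stylized Basel-style capital charge: capital = exposure * risk weight * 8%.
CAPITAL_RATIO = 0.08           # 8% capital on risk-weighted assets
exposure = 1_000_000_000       # $1bn trade exposure to the CCP

qccp_capital = exposure * 0.02 * CAPITAL_RATIO      # QCCP: 2% risk weight
non_qccp_capital = exposure * 0.60 * CAPITAL_RATIO  # hypothetical 60% weight

print(int(qccp_capital))                        # 1600000, i.e. $1.6m
print(round(non_qccp_capital / qccp_capital))   # 30
```

Under these assumptions the same $1bn of cleared exposure ties up $1.6m of capital at a QCCP versus $48m at a non-QCCP, which is the economics behind the "walk away" scenario.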
The implementation of CRD IV in Europe already has been pushed back twice as a result of this delay in judgement on equivalence, most recently to June 15, 2015. ESMA and its US regulatory equivalent, the Commodity Futures Trading Commission (CFTC), have been discussing the best path to equivalence for two years now, with some hope of it being reached soon.
At the FIA conference in Boca Raton last week, Timothy Massad, chairman of the CFTC, said that his organization has agreed to a lot of the changes requested by Europe in order to find harmonisation – a partial compromise – although he added that they “would not be making significant changes.”
The major point of difference between the two regulators is around margin requirements at CCPs, with Europe having imposed a tougher margining framework than the US. It therefore follows that European banks will need to hold increased capital on their balance sheets against trades conducted at non-QCCPs in order to mitigate the failure of that CCP, something deemed by ESMA to be more likely than at those CCPs it recognizes.
By Michael Maiello, Capital Ideas
Originally published on TABB Forum
The multitrillion-dollar market for credit default swaps came under withering criticism during the 2007-10 financial crisis, and Warren Buffett famously deemed them “financial weapons of mass destruction.” But the CDS market may be improving transparency in the stock and bond markets, and more CDSs may lead to healthier and more robust markets.
The multitrillion-dollar (notional) market for credit default swaps (CDSs) came under withering criticism during the 2007-10 financial crisis. Warren Buffett famously deemed them “financial weapons of mass destruction,” and others compared them to taking out fire insurance on a neighbor’s home.
But the CDS market may be improving transparency in the stock and bond markets. Research suggests that hyper-informed CDS traders force company managers to disclose some of the negative news that only banks are privy to.
CDS contracts are financial agreements that protect their buyers from default risk in exchange for a stream of payments known as the “CDS spread.” The owner of the CDS contract is compensated for negative credit events such as a downgrade or default, according to the terms of the contract. If CDS buyers and sellers believe that a negative credit event is likely, the spread that a buyer must pay to purchase the contract grows larger.
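The link between perceived default risk and the spread can be illustrated with the standard "credit triangle" rule of thumb. This is a simplification (flat hazard rate, no discounting), used here only to show the direction of the relationship the paragraph describes.

```python
# Credit triangle approximation: for a par CDS, the running spread roughly
# equals the expected annual loss to the protection seller:
#   spread ~= (1 - recovery_rate) * annual_default_probability
def approx_cds_spread_bp(annual_default_prob, recovery_rate=0.4):
    """Approximate CDS spread in basis points; 40% recovery is a
    conventional illustrative assumption."""
    return (1 - recovery_rate) * annual_default_prob * 10_000

print(round(approx_cds_spread_bp(0.02)))  # 120 bp at 2% default probability
print(round(approx_cds_spread_bp(0.05)))  # 300 bp at 5%: riskier, wider
```

So when CDS traders mark up the likelihood of a negative credit event, the spread widens mechanically, which is exactly the signal the research discussed below relies on.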
The financial institutions that issue CDSs are often lenders to the underlying companies and, as such, have significant insight into the results of operations, balance-sheet quality, and the covenants attached to any outstanding debt. The CDS market is lightly regulated, and trades are generally conducted “over the counter,” in private negotiations between dealers. The securities have not been subject to the same insider-trading laws that govern stock purchases so, as in the commodities-futures markets, what would be considered insider trading in equities has been generally acceptable in CDS markets. The 2010 Dodd-Frank Act did make the CDS market subject to some insider-trading rules, but implementing those rules poses some serious challenges.
Because of the information advantage enjoyed by CDS-market participants, CDS prices generally lead stock and bond prices, so if a CDS spread widens it can signal future bad news for outstanding bonds and equities.
This can put pressure on corporate managers, who have strong incentives to delay revealing bad news. A company in danger of breaching a debt covenant would not have to reveal that to either stockholders or bondholders unless the covenant were actually breached, and it may delay mentioning the situation before mandatory reporting deadlines.
But the presence of a liquid CDS market makes delaying tactics more difficult to employ, argue Chicago Booth’s Regina Wittenberg-Moerman, Singapore Management University’s Jae B. Kim, University of Minnesota’s Pervin Shroff, and University of Toronto’s Dushyantkumar Vyas. Buyers and sellers of a company’s CDS contract are more apt to know how likely the company is to default, and will price that risk accordingly. CDS prices are also available to participants in the stock and bond markets.
The researchers find that companies with liquid CDS contracts are more likely to give earnings forecasts and issue press releases, both forms of disclosure where management has great latitude. They are 14 percent more likely to give earnings forecasts and 1 percent more likely to issue bad-news press releases. While the latter increase may sound modest, given the scarcity of such releases, that represents 15.8 percent of the total of such releases in a typical year.
“Our findings suggest that informed trading by lenders in the CDS market results in a positive externality for capital markets by eliciting enhanced voluntary disclosures, thus contributing to a richer information environment,” conclude the researchers.
Further, they cite previous work that details how “higher disclosure quality leads to more liquid equity trading due to reduced information asymmetry.” It may well be that more CDSs will lead to healthier and more robust capital markets in the future.
By Shagun Bali and Alexander Tabb, TABB Group
Originally published on TABB Forum
The Consolidated Audit Trail is one of those unique initiatives that the capital markets community agrees is important. The details, however, are reason for concern. Who will pay for the CAT and how remains the No. 1 question. But implementation timelines, data challenges, and how to incorporate options market data all remain critical challenges.
The Consolidated Audit Trail is one of those unique initiatives that the capital markets community agrees is important. When completed, it will be the single largest repository of financial services data in the world. The CAT will house more than 30 petabytes of data, connect to approximately 2,000 separate data providers, and enable regulators to reconstruct market activity at any point in time.
According to TABB Group interviews with 100 financial institutions from the buy and sell sides, the majority of the industry agrees that the CAT is necessary; everyone recognizes the benefits. But what concerns market participants are the details.
When questioned about the importance of the CAT, more than three-quarters of respondents said they viewed the CAT as an “important” or “critically important” element that contributes directly to the health and well-being of the US markets (see Exhibit 1, below).
Exhibit 1: Market Perceptions Regarding the Importance of the CAT
However, the uncertainty in the process – which includes funding, implementation timelines, data challenges, and new participants from the options markets – has the community concerned. Cost and funding of the CAT are surely giving the community sleepless nights. First, the direct and indirect costs associated with the CAT are a concern for the broker-dealers, as they recognize that all funding avenues lead directly to their doorstep. In addition, they are deeply concerned over the fact that not only do they have to pay for the development and upkeep of the CAT, they also will need to cover the costs of FINRA’s Order Audit Trail System (OATS) and the SEC’s Electronic Blue Sheet (EBS) requests for at least five years after the CAT is started.
Yes, the CAT is necessary. But the big question among many on the sell side remains, “Can the industry afford it?” The broker-dealers need from the SROs a solution that will lighten their burden of investment and that will reassure them that the industry can indeed afford it.
In addition, elements related to data – gathering, storing, and data usage and governance – need heightened attention. To win the wholehearted support of the industry, the SROs need to put out clear strategies that address these challenges. The B/Ds understand the requirements, but ultimately are still uncomfortable with the idea of supplying this type of data with so many ambiguities left unanswered. The SROs need to address these issues head on and need to develop a solution that takes the concerns seriously – especially when it comes to data security and governance.
Furthermore, both TABB Group and the community recognize that the real wild card in the CAT process is the options market. Previously under-appreciated, adding options to the CAT mix greatly increases the complexity of the endeavor. A naiveté has been replaced with a clear understanding that incorporating the options markets into the CAT is no small feat and that whoever is tasked with this undertaking needs to understand all of the implications involved.
The same can be said of the need to solve the data storage and governance questions, as well as the security and control issues associated with the program. Unfortunately, the CAT is a unique project, one whose size and scale are unmatched within the institutional capital markets. This means that the SROs do not have the luxury of learning from other people’s mistakes. They have to figure this all out in advance, with everyone looking over their shoulders and trying to influence the outcome.
Getting this right is critical for the success of the project. Market confidence is so fragile that authorities cannot afford to make mistakes in such harsh market conditions, when volumes are low and each participant is struggling with its bottom line numbers. Success is only possible if the SROs prioritize these key elements of the CAT and select a bidder that can deliver against all of the challenges. The entire onus and responsibility of the CAT’s success lies on the SROs’ ability to work out the problems and choose a solution that is in the best interest of the markets, and not their own.
By Kevin Foley, AQUA
Originally published on TABB Forum
There’s plenty of demand for block liquidity. The problem is supply. Reg ATS, which was intended to broaden market transparency, instead eviscerated it, in the process leaving price discovery behind. And without price discovery, there’s no economic incentive for liquidity suppliers.
Natural block liquidity is increasingly harder to come by. Traders tell us they would prefer a market in which they could access the larger orders only other naturals can trade, away from the reaches of high-frequency trading. Larger trades reduce information leakage, adverse price movement and cost.
People are often surprised when I argue there’s no shortage of demand among buy-side traders to trade in larger size, but it’s true. I can’t tell you how often I hear the complaint, “I want to trade blocks; it’s everybody else who doesn’t.” Otherwise, they would be trading blocks, right?
Well, no. When traders are all saying they want to trade blocks but can’t, the problem isn’t one of demand. There’s plenty of demand for block liquidity. The problem is supply.
So what happened to the supply of natural block liquidity? No big secret. Look no further than the incredible rise of indexing and ETFs over the past decade. As much as 30% or more of the outstanding float in most US equities is in the hands of indexers, closet-indexers, indexed mutual funds, ETFs, smart betas and the like. What does this group have in common? They don’t trade anymore. They used to, but not anymore. They are investors who concluded their returns were being eaten up by fees and commissions. They decided they were tired of paying for the privilege of providing their liquidity to the market.
After the indexers left, much of what remained of natural block liquidity was lost to algorithmic trading. Traditionally, buy-side traders endeavored to supply liquidity to each other. But it’s hard for naturals to find each other when they’re both masquerading as something else. Buy-side traders use algos to hide their intentions from the HFT firms. It’s an open question whether these algos are in fact fooling the likes of Tradebot and Virtu. But they do a good job of concealing the buy-side traders from each other.
If demand is undiminished and supply has fallen off, Econ 101 says price should be the mechanism that restores market equilibrium. Just as unsatisfied demand for a stock will produce higher stock prices, we would expect unsatisfied demand for a stock’s liquidity to produce thicker prices – wider markets for larger sizes. Wider markets would provide an incentive for new suppliers of liquidity to enter the market, earning profits while satisfying demand and restoring equilibrium.
Why isn’t this happening?
Reg ATS is the reason why. Following on the heels of the 1997 Order Handling Rules implementation, the SEC’s Reg ATS ended the practice of limited public display of order information. Prior to that, Instinet’s popular “I-Only” feature (for “institutions only”) dominated institutional trading in Nasdaq stocks. Large orders could be displayed at various price points to other buy-side firms and not be visible to anyone else. With Reg ATS, the SEC essentially said, if the buy side wants to show orders to each other, they have to show them to everybody else too. Retail investors, that’s who regulators had in mind. But in practice, it includes the likes of Tradebot and Virtu, too.
And because buy-side traders understandably didn’t want to show their hands to HFT firms, they turned away from displaying orders. This fear of signaling drove buy-side trading into the dark. And let’s be honest about the results – there is no pre-trade transparency anymore. Not in quantities or lengths of time that have any meaning to institutional investors. The bids and offers you see are a fleeting picture of small orders already executed or canceled, vanishing even before they reach your retinas.
During that timeframe, the broker-sponsored ATS rose to prominence to control costs, and also because a successful crossing pool could raise a broker’s prestige. But that doesn’t explain why broker-sponsored ATSs are all dark. Undoubtedly, brokers would broadcast at least some ATS market data to their buy-side clients if they could, just as Instinet once did. Reg ATS is the reason they don’t. It’s not permitted. Reg ATS was intended to broaden market transparency but instead eviscerated it.
In the process, price discovery was left behind. Trading in the dark can only succeed at or close to the midpoint. Try any other price and you get nothing but near-misses. That’s why the midpoint became the price of choice when pre-trade transparency ended.
And here’s the problem with the midpoint: It provides no incentive for new suppliers of liquidity to enter the market.
It’s as simple as that. No transparency, no price discovery. And no price discovery, no new supply. Why should anyone go out of their way to supply liquidity at the midpoint? Never gonna happen. It’s not for lack of demand that block trades are harder to come by. Without price discovery, there’s no economic incentive for suppliers to fulfill their function.
It doesn’t have to be that way. That’s why our mission at Aqua is to restore price discovery. Price discovery provides the incentive for new supply of block liquidity to enter the market. At Aqua we deploy Reg ATS-compliant technology with a little ingenuity to facilitate a buy-side-only display at price points that create an incentive to display.
And there are two great potential sources of block liquidity out there waiting for a reason to come back – the indexers and the algorithms. Price discovery provides the framework for these two big former suppliers of liquidity to return on terms that are attractive to them.
By Rebecca Healey, TABB Group
Originally published on TABB Forum
The UK Financial Conduct Authority has continued its relentless pressure on asset managers to come clean about the hidden fees and charges in dealing commissions. But while the transparency resulting from unbundling research and execution may benefit the industry in the long term, changing the payment-for-research structure will guarantee continued consolidation of the asset management industry, creating new challenges for investors.
Opacity and confusion over financial services costs and fees have convinced some regulators and market participants that the use of dealing commissions to purchase research is fatally flawed. In fact, buy-side trading desks have been slowly realigning internal processes to meet regulators’ requirements to unbundle the cost of research from the cost of execution. While the speed and depth of change may not exactly be in line with the FCA’s expectations, 58% of European firms now have a research budget in place, and 55% intend to switch to execution-only commissions when the budget has been reached.
And unbundling no longer is the domain of just the UK in Europe. The reality is that dealing commissions is yet another area of financial services where the global industry is undergoing painful metamorphosis. Constrained resources and greater accountability already have created demand for an improved understanding of costs versus profitability. The sell side is becoming more selective of what it provides, while the buy side has become more discerning about what it chooses to consume and how to pay for it. Seventy-nine percent of participants in the annual EU equity trading survey now use Commission Sharing Agreements (CSAs) to manage this process.
Rather than this standing as a resounding endorsement of the FCA’s lead in pushing for greater unbundling, however, greater confusion is setting in. Not every European regulator has interpreted the latest ESMA text in the same manner as the FCA, and European MEPs are of the view that research payments were not part of ESMA’s remit. As a result, 30% of all firms are now halting the rollout of unbundling processes until there is greater clarity as to whether CSAs will be admissible in the final text from the European Commission.
An Inconvenient Truth
Paying for research from the bottom line would, on the face of it, appear to be a cleaner and more efficient payment structure. Using a bundled commission model, fund managers may or may not receive best execution if they automatically route orders to favored providers of research. Cross subsidization among funds and firms may lead to end investors paying for services they did not use. However, the alternative unbundled model is not without its complications.
If small and large asset managers are to pay equally for research – that is, for research to be paid for via a flat fee – smaller asset managers will be worse off, as the cost of paying directly for research will have a disproportionate impact on their P&L; larger asset managers have greater scale with which to absorb research costs. If the research is subsidized in favor of smaller asset managers, however, this would unfairly penalize larger asset managers.
Further, use of CSAs may penalize existing clients of the funds; if research is purchased earlier in the year and then the firm switches to execution-only commissions for the remainder of the year, any new clients in effect receive the research for “free.” However, if CSAs are no longer admissible as per the FCA’s recent indications, 100% of smaller asset managers believe they will be negatively impacted; 79% of mid-tier and 75% of larger asset managers also believe smaller firms will be negatively impacted. In this competitive environment, where few can afford to increase their fees, should market forces then dictate the outcome?
Small European brokers also are likely to suffer. A decrease in consumed research will lead to a decline in investment in research provisions, which will lead to a fall in revenue, which will in turn make the provision of research an expense few can afford. Unattractive sectors may also suffer. We have already witnessed the widespread closure of small-cap execution desks. Few global investment banks will be motivated to carry out research on SME firms given the lack of profitability; if research is not produced, funds will also be less likely to invest in these firms. The contra argument is that this opens up the market to competition from more bespoke research providers and sector specialists facilitating a flight to quality.
But it is not only about the research itself. Other important considerations remain. If research payments are unbundled from commissions, the buy side may have to foot a potential Value Added Tax (VAT) bill, negating any potential savings from supposed misspent funds. And complex management of commission allocation payments to end funds will require in-depth technological solutions to solve new administrative headaches.
Keep CSAs in any form and it will still be too easy for less well-intentioned asset managers to fake unbundling, and that is why the nuclear option of asset managers paying for all research from the bottom line is being proposed. It is easy to spot what is wrong with the current system; but it is far harder to determine what would be a good alternative to establish who pays for what and how.
Without an explicit link to the underlying end client, it is difficult to envisage how firms can avoid passive funds paying for active managers’ research. Should there be a distinction? Does passive research truly cost more or less than active research? And if you are paying for research “ex ante,” how do you know it is “good” research in advance? You can only really know after the event. Yet drilling down to the individual client level will saddle firms with huge administration costs from Research Payment Accounts (RPAs) across literally hundreds of clients.
While the need for greater transparency is one side of the debate, eliminating the ability of firms to compete is the other. The provision of research already is undergoing subtle shifts that, like a snowball, have the potential to turn into an avalanche. As the sell side starts to cut off clients from access to research, difficult decisions will become more commonplace. If research providers are ranked in the Top 5, but are only being paid Top 10 rates, competitive questions will have to be asked.
As research content shifts from PDF documents to the Internet and becomes more accessible and searchable, asset managers are likely to pick and choose niche services and providers as and when they require them. Fintech will deliver new solutions that will revolutionize how research is delivered and consumed in a similar manner to how Google and Apple have transformed how information is accessed. As we become more reliant on technology for efficiency and cost control, the added value will increasingly come from the skill of the analysts’ interpretation.
While larger asset managers are able to make the requisite investment in technology, smaller boutique players will need to rely on third-party relationships and technology to bridge the gap.
A Third Way
The use of dealing commissions does not have to be fatally flawed. Mandated CSAs, budgets and robust payment structures could provide the necessary transparency and increased competition in the research market that the regulators seek. There are those that believe CSAs can never work, given the lack of industry progress so far. There may be holes in the wall, but surely it is better to identify and fix the holes than smash the wall down in its entirety?
Transparency may benefit the industry overall in the long term; but changing the payment-for-research structure will guarantee one thing: continued consolidation of the asset management industry. Unwittingly, this will create new alternative challenges. Larger order sizes will create growing liquidity issues as fund managers retrench their portfolio coverage to an ever decreasing circle of assets. As instruments become harder to trade and harder to source, firms will increasingly have to turn to technology to deliver best execution.
The industry now needs to step up and demonstrate to the regulators that commissions can be managed fairly – or put up with being regulated into submission. The time for standing on the sidelines is over. With the asset management industry continuing to consolidate and operate on a global basis, these changes will resonate beyond Europe, as firms adopt common systems globally to reduce business complexity.
By Shagun Bali and Alexander Tabb, TABB Group
Originally published on TABB Forum
The complexity of our market structure and underlying technologies surpasses our current ability to monitor, analyze and reconstruct market events. If the US wants to maintain its predominant position as a global finance center, the SEC and the SROs need the ability to proactively review and analyze events that occur within the markets as a whole.
Today’s market structure is not just complex and fragmented, it is dynamic, with trading activity shifting across multiple exchanges, asset classes and hyper-connected marketplaces. Though equities, OTC equities, options and futures all contribute to market events, each is an independent market with its own ecosystem and regulatory infrastructure. But while each asset class is unique unto itself, they are inextricably linked. Unfortunately, the complexity of our market structure and the underlying technologies surpasses our current ability to monitor, analyze and reconstruct the events that shape our economic destiny. Recognizing these gaps, the SEC mandated the Self-Regulatory Organizations (SROs) to develop the CAT NMS Plan and propose the Consolidated Audit Trail, which would create a unified system to enable market reconstruction and analysis.
The need for the CAT is made somewhat self-evident by the markets’ inability to reconstruct some of the near-catastrophic events that have occurred in the past few years. The Flash Crash and the Madoff scandal, for example, seriously undermined investors’ confidence in the US markets. While the markets have been able to regain much of their swagger since, another such event with similar outcomes and indeterminate causes could be disastrous. The mere fact that neither the SEC nor the SROs were able to accurately reconstruct the events that led up to these disasters is unacceptable in today’s data-centric world. If the US wants to maintain its predominant position as the leading global economic center, the SEC and the SROs need the ability to proactively review and analyze events that occur within the markets as a whole.
The CAT will be the “go-to” system for regulators and exchanges to examine and analyze market activity in its totality. While it would not directly prevent future flash crashes from occurring, it would indirectly prevent these and other potentially disastrous events by enabling rapid reconstruction and analysis of market events, which in turn would protect the markets as a whole. The CAT initiative is the SEC’s main tool in its strategy to become more proactive and preventative and to take a more comprehensive and timely approach to market events.
Though initiated by the SEC, the CAT now is in the hands of the SROs. The SROs have downsized the bidding list and are in the process of selecting the CAT processor and finalizing the technical details. In July 2014, the SROs announced the final short-list of CAT bidders that included the following firms/consortiums:
AxiomSL & Computer Sciences Corporation (CSC)
CATPRO Consortium: HP, Booz Allen, Buckley Sandler, J. Streicher Analytics
EPAM Systems & Broadridge
SunGard Data Systems Inc. & Google
Thesys Technologies, LLC
There is no doubt that a program such as the CAT is what the industry needs. But the SEC does not have the budget or the political support in Congress to take on a project this large and/or complicated. As a result, the SEC put forward Rule 613, placing the CAT squarely in the laps of the SROs. By letting an industry consortium take over responsibility for the CAT, the SEC has been able to advance its needs for a sophisticated analytical tool without imposing a new bureaucracy on the markets that would require taxpayer dollars. However, the phrase “Too many cooks spoil the stew” comes to mind when summarizing the CAT process at present.
For their part, the SROs have been working with the broker-dealers, since they are the ones that are going to pay for the CAT. But each participant within the community has its own views concerning the CAT. It would appear that getting everyone in sync is surely as daunting as building the CAT itself. Consequently, the continually moving goalposts and additional requests for more information from vendors, along with the exemptive relief request filed in January 2015, mean the process still has a long way to go before it is finished.
Though the community at large is supportive of the initiative, there is a considerable amount of mistrust over the lack of transparency in the process. The biggest questions still unanswered include who exactly is going to foot the bill through the development phase, and how broker-dealers are going to pay for the CAT. Complying with the OATS reporting system has been painful enough for the industry; market participants do not want to go bankrupt with the implementation of the CAT.
From TABB Group’s perspective, the CAT is a necessary tool for the 21st Century. The SROs and exchanges need timely access to a more robust and effective cross-market order and execution audit trail. However, the SROs need to tighten up the process and set definite targets against which they can deliver in a timely fashion.
In theory, this should not pose a problem; but in reality, the SROs are not a unified group, and as such, they bring their own challenges to the table – which in turn makes the task even more daunting. The current challenge with the CAT program is that nobody wants to take responsibility for this massive undertaking. Though the SROs did not initiate the CAT, if this project does not deliver what it promises, the SROs will be first in a long line of participants that will take the blame for its failure.
The CAT is the largest data undertaking ever proposed for the US securities market. Clearly, there is a lot at stake here. A lot of time and effort have gone into getting the market to approve and support this critical initiative. Now it’s in the hands of the 10 SROs to make sure that if Humpty Dumpty falls off that wall, we can accurately reconstruct what occurred and ensure it doesn’t happen again.
By Shagun Bali, TABB Group
Originally published on TABB Forum
Intensifying regulatory demands are forcing financial institutions to streamline application portfolios, standardize and normalize data more effectively, and ensure the rapid integration and sharing of data across systems. But many banks simply do not have the systems or control of data they need to make complex analytics an integral part of workflow.
The institutional capital markets are dealing with the aftermath of one of the most aggressive periods of regulatory intervention since the Great Depression. Not only have new governance, risk and compliance (GRC) rules forced institutions to manage more data than ever, they are forcing many of these institutions to do so in increasingly shorter timeframes.
Across the board, regulators want more and more information from financial institutions and, consequently, institutions and banks are mandated to store, manage, analyze, and produce (whenever required) data needed to address regulatory challenges/requirements and mitigate risk. And the pressure to get it right is dramatically increasing, as fines are becoming increasingly harsh.
Anyone who doesn’t believe these new rules are changing the way that banks think about their businesses need only look at the front page of virtually any major newspaper’s financial section. Just this past week, 10 banks were fined for commodities trading practices, and one global bank announced it is returning $100 billion in customers’ deposits because new capital rules make just holding this money problematic.
In response to many of the challenges, senior industry members gathered this past week for a Cognizant-sponsored roundtable discussion on GRC, hosted by TABB Group CEO Larry Tabb. Reflecting the urgency of the industry response, it was a full house, with 20 very senior industry professionals eager to share ideas and discuss the impact and complexity of new GRC rules, including macro-prudential rules such as Basel III, Volcker, EMIR and MiFID2, as well as traditional regulatory initiatives such as the Consolidated Audit Trail (CAT) and the FINRA proposed Comprehensive Automated Risk Data System, or CARDS.
GRC regulations have made it imperative for all functions and businesses within financial institutions to share information more readily and rapidly in order to meet regulatory requirements and support and improve investment decision-making as a whole. For the effective aggregation of data, institutions need to streamline their application portfolios, standardize and normalize data more effectively, and ensure the rapid integration and sharing of data across systems. However, the consensus among attendees was that many banks simply do not have the systems or control of data they need to make complex analytics an integral part of workflow or enable information to be passed back and forth within different functions.
Rather, most institutions have monolithic legacy and proprietary IT systems that operate as silos and impede the 360-degree view needed to gather and aggregate enterprise information. Aggregating data from various businesses and functions in a central repository or data warehouse is almost impossible in today’s world, as the typical financial institution can have as many as 1,100 data elements from more than 60 different systems and data formats ranging from one-year-old standards to 30-year-old standards.
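The scale of that aggregation problem is easy to see in miniature. Below is a minimal sketch of the normalization step: two records from invented legacy schemas (every field name, record and date convention here is a hypothetical illustration, not any bank's actual format) are mapped onto one common shape before loading a central repository.

```python
from datetime import date, datetime

# Hypothetical source records from two legacy systems, each with its own
# field names and date conventions -- illustrative only.
system_a = {"trade_id": "A-1001", "notional": "25000000", "trade_date": "2015-05-18"}
system_b = {"TradeRef": "B-77", "NotionalAmt": 10_000_000, "TradeDt": "18/05/2015"}

def normalize_a(rec):
    """Map system A's schema onto a common record."""
    return {
        "id": rec["trade_id"],
        "notional": float(rec["notional"]),
        "trade_date": datetime.strptime(rec["trade_date"], "%Y-%m-%d").date(),
    }

def normalize_b(rec):
    """Map system B's schema onto the same common record."""
    return {
        "id": rec["TradeRef"],
        "notional": float(rec["NotionalAmt"]),
        "trade_date": datetime.strptime(rec["TradeDt"], "%d/%m/%Y").date(),
    }

# Aggregate into one repository-ready list regardless of source format.
warehouse = [normalize_a(system_a), normalize_b(system_b)]
assert all(isinstance(r["trade_date"], date) for r in warehouse)
```

Trivial for two systems; multiplied across 60-plus systems and 1,100 data elements, each needing its own mapping function kept current as upstream formats change, it becomes the enterprise-scale problem the roundtable described.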
In addition, institutions have to place a great deal of emphasis on the quality of the data that drives new reporting and risk analytics, as poor-quality inputs could cause banks to produce problematic stress-test results. And the importance of getting this right can’t be downplayed, as getting it wrong means very harsh scrutiny from management, regulators and investors.
On the other hand, new regulations are pushing banks to streamline internal operations and upgrade systems. As banks evolve their compliance strategies to collaborate across business units, they wish to move toward simplified workflows and automated solutions that are flexible enough to keep pace with changing regulatory demands and integrated to maintain a comprehensive view of the business. With the right investments, banks will be in better shape to achieve not only compliance but also higher standards of efficiency and risk management in the future. And that makes regulation an opportunity as well as a challenge, roundtable participants agreed.
Rome wasn’t built in a day; nor will all the IT problems of financial firms be solved overnight. Yes, financial institutions are struggling with the implementation of these new regulations; however, TABB Group advises institutions to reach out and seek help and advice from the vendors – they understand the IT challenges and can offer expert advice going forward. Today, institutions are very averse to working with vendors on their GRC deployments; we suggest this needs to change.
The bottom line is that more than ever, institutions need sophisticated systems and tools to assess and manage data of all classes, from the start of every trade or order, and to incorporate analytics and regulatory constraints into all their workflows. Those that make the real-time analysis and exchange of data part of their everyday workflow across the front, middle and back office functions will be primed for both compliance and profitable growth.
By George Bollenbacher, Capital Markets Advisors
Originally published on TABB Forum
All of the world’s swap regulators recognize that reporting is a mess. And while the SEC’s final rule on Swap Data Repositories does not mandate SDRs monitor reporting data quality, there are signs that such monitoring may be in the offing. But don’t bet on the SEC getting the rules right.
In my last article, I reviewed the SEC’s final and proposed rules on transaction reporting by market participants (“Missed Opportunity: The SEC Finally Weighs in on Swaps Reporting – Part 1”). In this article I will look at the final rule on SDRs, and make some observations on the effectiveness of current and future reporting regimes. The SEC’s final SDR rule is entitled “Security-Based Swap Data Repository Registration, Duties, and Core Principles” and runs some 468 pages. Don’t worry – you don’t have to read them all; just go to page 424 to find the beginning of the rule text. The bulk of the rule is in §240.13n, which is divided into 12 subsections:
240.13n-1 Registration of security-based swap data repository.
240.13n-2 Withdrawal from registration; revocation and cancellation.
240.13n-3 Registration of successor to registered security-based swap data repository.
240.13n-4 Duties and core principles of security-based swap data repository.
240.13n-5 Data collection and maintenance.
240.13n-6 Automated systems.
240.13n-7 Recordkeeping of security-based swap data repository.
240.13n-8 Reports to be provided to the Commission.
240.13n-9 Privacy requirements of security-based swap data repository.
240.13n-10 Disclosure requirements of security-based swap data repository.
240.13n-11 Chief compliance officer of security-based swap data repository; compliance reports and financial reports.
240.13n-12 Exemption from requirements governing security-based swap data repositories for certain non-U.S. persons.
The Boring Stuff
As we can see from the list above, the first three sections of the rule pertain to registration as an SDR, or, as the SEC abbreviates it, SBSDR (except that the regulator very seldom abbreviates it). Given that SDRs have been functioning in the US for more than a year, it would be astonishing if the SEC had significantly different registration requirements from the CFTC’s, and it doesn’t. So 13n-1 through 13n-3, 13n-6 through 13n-8, and 13n-11 are pretty much as expected.
Items of Interest
In light of the recognized problems with reporting accuracy, the following wording in 13n-4 bears examination:
(b) Duties. To be registered, and maintain registration, as a security-based swap data repository, a security-based swap data repository shall:
(7) At such time and in such manner as may be directed by the Commission, establish automated systems for monitoring, screening, and analyzing security-based swap data;
13n-5 has similar wording:
(i) Every security-based swap data repository shall establish, maintain, and enforce written policies and procedures reasonably designed for the reporting of complete and accurate transaction data to the security-based swap data repository and shall accept all transaction data that is reported in accordance with such policies and procedures.
So far, no regulator has placed any responsibility on the SDRs to monitor data quality, nor laid out any guidelines for doing so. However, there are signs that such monitoring may be in the offing, and this language lays that responsibility squarely on the SBSDR. How extensive the monitoring might be, how the regulators would verify that it was being done, and what the penalties would be for failing in this function aren’t covered here. And, since 13n-4 is the only place in the rule text where the term “monitoring” is used, it isn’t covered anywhere else in the rule or, as it turns out, in the preamble.
The Current State of Affairs
In January 2014, the CFTC issued a proposed rule called “Review of Swap Data Recordkeeping and Reporting Requirements.” The comment period ended May 27, 2014. I haven’t been able to find any comment letters on this proposal on the CFTC’s website, nor any final rule on this subject.
So how accurate is swaps reporting today? I took a look at a snapshot of the most liquid swaps category, rates, from the DTCC SDR site and posted it below. The questionable items are in red.
Just to help us read the table, the first item is a new, uncleared ZAR three-month forward rate agreement beginning 5/18 and ending 8/18. The notional amount appears to be ZAR 1,000,000,000, and the rate is 6.12%. With that as background, let’s look at some of the anomalies.
Item 4 is a new two-year USD basis swap beginning 9/21/2016. A basis swap is normally between two different floating rates, but the underlying assets in this transaction appear to be the same (USD-LIBOR-BBA). I’m not sure what a basis swap between the same rates would be, unless it is between two different term rates, like 1-year and 5-year. However, if that’s true, the report doesn’t tell us, so we are in the dark as to what this trade really is.
Item 7 is a new 8-year Euro-denominated fixed-fixed above the block threshold (that’s what the plus at the end of the notional means), which appears to have gone unreported for two weeks. There is a delay in reporting block trades, but it isn’t two weeks. One of the monitoring functions the regulators might implement would be any trade where the difference between the execution and reporting timestamps is greater than the rule allows.
Item 13 is a new 12-year Euro-denominated fixed-floating swap that appears to be above the block threshold of €110,000,000. What is interesting here is that the 12-year Euro rate at the time was about 0.4%, not 0.824%. If there is no other parameter on this trade, it looks to be significantly off the market, unless there was a large credit risk component.
Item 15 is … what, exactly? It’s a new trade in some exotic that went unreported for 5 days, with no price given, apparently. Since the notional looks like 5,000,000,000 Chilean pesos, or about US$8,000,000, perhaps we don’t need to worry too much about what it really is; but exotics of this size denominated in dollars should cause us to ask just what kind of swap was done, and how much risk it entails.
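Anomalies like these lend themselves to exactly the automated "monitoring, screening, and analyzing" the rule contemplates. The sketch below screens a handful of toy trade reports loosely modeled on the items above; every field name, value and threshold is an assumption for illustration, not the actual SDR schema or any rule's real reporting deadline.

```python
from datetime import datetime, timedelta

# Toy trade reports loosely modeled on the anomalies discussed above;
# field names, values and thresholds are assumptions, not the SDR's schema.
trades = [
    {"id": 7, "executed": datetime(2015, 5, 4, 10, 0), "reported": datetime(2015, 5, 18, 10, 0),
     "leg1_index": "EUR-FIXED", "leg2_index": None, "price": 1.1},
    {"id": 4, "executed": datetime(2015, 5, 18, 9, 0), "reported": datetime(2015, 5, 18, 9, 1),
     "leg1_index": "USD-LIBOR-BBA", "leg2_index": "USD-LIBOR-BBA", "price": 0.2},
    {"id": 15, "executed": datetime(2015, 5, 13, 9, 0), "reported": datetime(2015, 5, 18, 9, 0),
     "leg1_index": "CLP-EXOTIC", "leg2_index": None, "price": None},
]

MAX_REPORTING_DELAY = timedelta(minutes=15)  # assumed deadline, for illustration only

def screen(trade):
    """Return a list of data-quality flags for one trade report."""
    flags = []
    # Flag any report filed later than the assumed deadline allows.
    if trade["reported"] - trade["executed"] > MAX_REPORTING_DELAY:
        flags.append("late report")
    # A basis swap between two identical floating indices makes no sense.
    if trade["leg1_index"] and trade["leg1_index"] == trade["leg2_index"]:
        flags.append("basis swap between identical indices")
    # Every report should carry a price.
    if trade["price"] is None:
        flags.append("missing price")
    return flags

for t in trades:
    issues = screen(t)
    if issues:
        print(f"Item {t['id']}: {', '.join(issues)}")
```

A production screen would add an off-market-rate check against a curve, but even checks this simple would have caught most of the items flagged in red above.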
All of the world’s swap regulators recognize that reporting is a mess. For example, here’s an excerpt from ESMA’s annual report:
“In order to improve the data quality from different perspectives, ESMA put in place a plan which includes 1) measures to be implemented by the TRs and 2) measures to be implemented by the reporting entities. The first ones were/will be adopted and monitored by ESMA. The second ones are under the responsibility of NCAs. This plan was complemented by regulatory actions related to the on-going provision of guidance on reporting, as well as the elaboration of a proposal for the update of the technical standards on reporting, leveraging on the lessons learnt so far by ESMA and the NCAs.” (emphasis added)
However, it is hard to find any mention of such a plan in ESMA’s 2015 work programme.
We might have expected that the SEC, the latest to the swaps reporting party, would have taken pains to get it right and perhaps lead the way to a better world. Since some of its rulemaking is still in the proposal stage, we might still see the regulator get it right. But I wouldn’t bet on it.