Learning Curve: the growing need to normalise trade data
As banks struggle with tracking brokerage costs, they need to work out ways to properly manage cost transparency by normalising their trade data.
By Kerril Burke, CEO of Meritsoft
For too long now, business heads have lacked the tools and information needed to work out the best brokers to use. As a consequence, banks have been paying inflated sums in brokerage fees, which in some cases have exceeded tens of millions of dollars. In these cost-conscious times, many of the biggest financial institutions have found themselves in this situation for two reasons.
First, after years of seeing the cost of doing business with brokers as a necessary evil, banks have realised they need a better grasp of their cost base. Today, a desk head's profit and loss (P&L) is coming under intense scrutiny, with the business pushing to move from gross revenues to net revenues wherever possible. As a result, the days of ignoring these costs (contra revenue) are long gone.
The numbers that banks can recoup from broker fees are now in the tens if not hundreds of millions — which is a huge percentage of any desk's P&L.
The second issue centres on cost transparency, which is being fuelled by regulation. Any bank wanting to pass its brokerage costs on to end clients has to know exactly what those costs are, or face the wrath of regulators. The problem is that many banks, due to the highly complex nature of the trades they deal in, simply can't get a handle on what many of their costs are.
Take calculating the brokerage on OTC derivative contracts. On an FX option, for example, there is a great deal to consider: the currency pair itself, the term to maturity, the trade type (strategy or structured trades), the execution method (voice or electronic), not to mention the actual FX rate the broker will use to bill the brokerage.
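To make the point concrete, the factors above can be sketched as a simple fee lookup. The rate card values, tenor buckets and function names below are entirely hypothetical illustrations, not real broker rates or any vendor's actual methodology; the point is simply how many dimensions feed into a single brokerage figure.

```python
# Hypothetical rate card: (currency pair, tenor bucket, execution method)
# -> fee in USD per million notional. All values are illustrative only.
RATE_CARD = {
    ("EURUSD", "short", "electronic"): 4.0,
    ("EURUSD", "short", "voice"): 7.0,
    ("EURUSD", "long", "voice"): 10.0,
}

def tenor_bucket(days_to_maturity: int) -> str:
    """Bucket the term to maturity; the 90-day threshold is illustrative."""
    return "short" if days_to_maturity <= 90 else "long"

def brokerage(pair: str, days_to_maturity: int, method: str,
              notional: float, billing_fx_rate: float = 1.0) -> float:
    """Fee per million notional, converted at the broker's billing FX rate."""
    rate = RATE_CARD[(pair, tenor_bucket(days_to_maturity), method)]
    return notional / 1_000_000 * rate * billing_fx_rate

# e.g. a 60-day, voice-executed EURUSD option on 25m notional
fee = brokerage("EURUSD", 60, "voice", 25_000_000)  # 25 x 7.0 = 175.0 USD
```

Even this toy version needs five inputs per trade; if any one of them is missing downstream of execution, the fee cannot be reconstructed.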
The trouble is that because the data needed to calculate such a convoluted transaction is not held at the trade execution level, banks have no way of passing the cost on. After all, fees vary significantly depending on how a trade is executed and settled. As a case in point, a bank may receive a bill from its broker covering a plethora of derivative trades.
How on earth does the bank in question know which trade belongs to which client? It's nigh on impossible for a bank to say to a regulator: “This £2,000 is for this interest rate swap settled at Eurex for this asset manager." This is all due to a huge disconnect between the transaction carried out and the billing.
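Closing that disconnect is, at heart, a reconciliation problem: billed line items have to be matched back to the trades that generated them. The sketch below is a minimal illustration of that idea, assuming (hypothetically) that the bill and the trade records share a few common fields; real broker bills are far messier, and all field names and figures here are invented.

```python
# Illustrative reconciliation: attribute billed fees to client trades by
# joining on fields both sides happen to share. Field names are hypothetical.
bill = [
    {"product": "IRS", "venue": "Eurex", "trade_date": "2019-03-01",
     "notional": 10_000_000, "fee_gbp": 2000.0},
]
trades = [
    {"trade_id": "T1", "client": "AssetMgrA", "product": "IRS",
     "venue": "Eurex", "trade_date": "2019-03-01", "notional": 10_000_000},
    {"trade_id": "T2", "client": "AssetMgrB", "product": "FXO",
     "venue": "OTC", "trade_date": "2019-03-01", "notional": 5_000_000},
]

KEY = ("product", "venue", "trade_date", "notional")

def attribute_fees(bill, trades):
    """Map each billed line to matching trades; flag lines with no match."""
    index = {}
    for t in trades:
        index.setdefault(tuple(t[k] for k in KEY), []).append(t)
    matched, unmatched = [], []
    for line in bill:
        hits = index.get(tuple(line[k] for k in KEY), [])
        if hits:
            # Split the fee evenly across matching trades (a naive rule).
            matched.extend({"trade_id": t["trade_id"], "client": t["client"],
                            "fee_gbp": line["fee_gbp"] / len(hits)}
                           for t in hits)
        else:
            unmatched.append(line)
    return matched, unmatched
```

In practice the join keys rarely line up this cleanly, which is precisely why the article argues the underlying data needs to be captured and normalised first.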
And therein lies the nub of the problem. The majority of front office trading systems do not carry all the data right the way through to the people beavering away in the back office, so a lot of important information gets mislaid along the way. Nor is ripping out and replacing decades-old back office systems, which cost huge sums to build, a realistic fix.
Instead, in order to accurately calculate how much banks should be paying their brokers, more and more firms are starting to explore ways to capture and normalise their data. This way, it is much easier for them to work out where they are paying more, and where they are paying less. On top of this, they can also figure out if they have been executing at an extortionate rate at a certain venue.
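What capturing and normalising might look like in miniature: map each desk's differently shaped records into one common schema, then aggregate cost per million of notional by venue so that an expensive venue stands out. The record layouts, venue names and figures below are all invented for illustration.

```python
from collections import defaultdict

# Two desks reporting the same economics in different shapes (hypothetical).
raw = [
    {"src": "desk_a", "Venue": "VenueX", "Notional": "10000000", "Fee": "900"},
    {"src": "desk_b", "exch": "VenueY", "ntl": 5_000_000, "brokerage": 200.0},
]

def normalise(rec):
    """Map a source-specific record into one common schema."""
    if rec["src"] == "desk_a":
        return {"venue": rec["Venue"], "notional": float(rec["Notional"]),
                "fee": float(rec["Fee"])}
    return {"venue": rec["exch"], "notional": float(rec["ntl"]),
            "fee": float(rec["brokerage"])}

def cost_per_million(records):
    """Aggregate fee per million notional by venue, to flag outliers."""
    totals = defaultdict(lambda: [0.0, 0.0])  # venue -> [fee, notional]
    for r in map(normalise, records):
        totals[r["venue"]][0] += r["fee"]
        totals[r["venue"]][1] += r["notional"]
    return {v: fee / (ntl / 1_000_000) for v, (fee, ntl) in totals.items()}

print(cost_per_million(raw))  # VenueX costs 90.0 per million vs 40.0 at VenueY
```

Once every trade sits in one schema, the "where are we overpaying" question becomes a simple aggregation rather than a forensic exercise.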
Ultimately, if banks are going to thrive in these bottom-line conscious times, identifying pivotal issues surrounding brokerage spend is one major way to close the gaps in their cost base.