
News and views

Learn about the latest thoughts and perspectives shared by Velocimetrics’ leadership on company developments and emerging industry trends

Blog: Assessing market data quality – The risky business of focusing on the channel and ignoring the data traversing it  

Shimrit Or, Senior Professional Services Consultant, Velocimetrics

I often find it helpful to think of market data quality assessment in the context of a highway. You have the actual road made of concrete or asphalt (that’s the channel). Then you have the market data traffic – the cars and trucks, inside of which are the people and cargo travelling along the road. I strongly believe that if you really want to accurately assess the quality of the market data your firm is producing, delivering or consuming, you need to look at both the road and what’s being transported on it. Here’s why:

Monitoring the quality of the road is a well-practiced way of assessing market data. It tends to focus on the network, looking at the health of the channels and feeds rather than at anything going on within them. So it’s about determining, for example:

  • Is the traffic moving along the road, or is it fully blocked?
  • Is the traffic flowing at a fast rate in all lanes? When similar data is being received on two different network channels, is each performing the way you’d expect it to?
  • Are the exit ramps jammed? For instance, can all of the data’s consumers handle the volume of market data being sent to them, or are they experiencing problems?

There are many solutions available, some old, some new – software or hardware based – that allow firms to do this kind of monitoring. However, my concern is that it only really touches on the quality of the road, and ignoring the traffic itself can prove to be a hazardous oversight.
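To make the “road” side of this a little more concrete, here is a minimal sketch in Python of the kind of channel-level checks such tools tend to perform – counting sequence-number gaps per channel and comparing the volume arriving on two redundant channels. The class and function names are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch of channel-level ("road") monitoring: sequence-gap
# detection per channel and a simple A/B volume comparison.
from dataclasses import dataclass


@dataclass
class ChannelHealth:
    name: str
    expected_seq: int = 0   # next sequence number we expect to see
    gaps: int = 0           # total sequence numbers skipped so far
    packets: int = 0        # total packets observed

    def on_packet(self, seq: int) -> None:
        """Record one packet; count a gap if sequence numbers were skipped."""
        if self.expected_seq and seq > self.expected_seq:
            self.gaps += seq - self.expected_seq
        self.expected_seq = seq + 1
        self.packets += 1


def channels_in_step(feed_a: ChannelHealth, feed_b: ChannelHealth,
                     tolerance: float = 0.05) -> bool:
    """Return True if two redundant channels are receiving a similar volume."""
    total = max(feed_a.packets, feed_b.packets, 1)
    return abs(feed_a.packets - feed_b.packets) / total <= tolerance
```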

Let’s say the road is looking pretty great. Data is flowing from A to B without any issues – nothing is being dropped and there aren’t any bursts. By only looking at the road I might easily conclude that we’re all good. However, something out of the ordinary may be happening to the data itself, and if I’m assessing only the quality of the road I could be completely oblivious to the problem – and that’s a risk.

Let’s say the channel is ticking along just fine, but an individual instrument within that feed has stopped ticking, or a symbol is still ticking but only on one side, with lots of sells and no buys. Continue trading on this data and losses can ensue. By assessing just the channel you could easily miss these types of things.
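As a rough illustration of the kind of per-symbol check this calls for, the sketch below flags a symbol that has been ticking on only one side within a recent window. The function name, side labels and threshold are illustrative assumptions, not part of any real feed handler.

```python
# Hypothetical per-symbol check: flag a symbol that is updating on only
# one side of the book (e.g. lots of sells and no buys) in the current window.
from collections import Counter


def one_sided(tick_sides, min_ticks: int = 20) -> bool:
    """tick_sides: iterable of 'buy'/'sell' labels seen in the current window."""
    counts = Counter(tick_sides)
    total = counts["buy"] + counts["sell"]
    if total < min_ticks:
        return False               # too little activity to draw a conclusion
    return counts["buy"] == 0 or counts["sell"] == 0


# e.g. one_sided(["sell"] * 30) -> True: plenty of sells, no buys at all
```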

Another good indicator that something isn’t quite right is the rate at which a symbol is ticking. If a particular symbol usually ticks 10 times a second but all of a sudden that’s increased to 50 times a second, this could signal a problem either with that specific symbol or with the whole exchange. Conversely, it could be that the tick rate rapidly declines. To identify issues like this you need to take a closer look at the traffic. Who are the passengers travelling inside those cars on the road, and is everything OK with them? In effect, what is actually going on with the data traversing the channel?
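A minimal sketch of such a tick-rate check might look like the following; the baseline rate, the threshold factor and the function name are all illustrative assumptions rather than a prescribed method.

```python
# Hypothetical tick-rate deviation check: compare the rate observed in the
# last second against the symbol's typical rate and flag big moves either way.
from typing import Optional


def tick_rate_alert(symbol: str, ticks_last_second: int,
                    typical_rate: float, factor: float = 3.0) -> Optional[str]:
    """Return an alert message if the rate surges or collapses versus baseline."""
    if typical_rate <= 0:
        return None
    if ticks_last_second >= typical_rate * factor:
        return f"{symbol}: tick rate surged to {ticks_last_second}/s (typical {typical_rate}/s)"
    if ticks_last_second <= typical_rate / factor:
        return f"{symbol}: tick rate collapsed to {ticks_last_second}/s (typical {typical_rate}/s)"
    return None


# e.g. tick_rate_alert("XYZ", 50, 10.0) flags the jump from 10 to 50 ticks a second
```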

It’s by assessing both the road and the traffic that you’re going to gain the type of actionable insight that delivers real value: the kind that enables the trading losses, SLA breaches and regulatory breaches that can stem from consuming or distributing poor-quality market data to be avoided entirely, or at the very least swiftly brought under control.

The business value this type of actionable insight can deliver will vary depending on whether a firm is using the data to trade itself or simply passing the data on to clients, either in its raw form or after it’s been normalized and aggregated into a new feed that’s sent on. Either way, insight into the quality of this data allows firms to minimize trading losses, or to provide clients with added value that enables them to remain competitive in the market.

It’s the value market data presents to your firm that should dictate where you focus when assessing its quality, and what actions you should take based on the insights delivered.

If a firm trading on a particular market is alerted to the fact that the data they’re basing trading decisions on is incorrect or delayed, gaining this insight (and gaining it fast) means action can be taken before it’s too late. Alternatively, a firm able to quickly detect problems with the data being passed on to clients can immediately contact those affected. It’s about proactive control.

There are different actions that could be taken depending on whether you’re assessing the road or the traffic. If the road is bad you can fix the road: you could increase the bandwidth (in effect, build a wider road), update your network infrastructure, or remedy issues identified with switches and routers – in other words, fix the things that might be causing market data packets to drop at a very technical level. If, instead, you’ve been alerted to issues with the traffic itself, you can react to those directly. You could reissue subscriptions, either manually or automatically, or in more severe cases determine whether you should halt trading on a symbol or market to prevent what could otherwise evolve into a major loss.
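As a rough sketch of how traffic-level alerts might be routed to these kinds of actions, the snippet below maps an alert either to a re-subscription or to an escalation towards a trading halt. The function names and alert structure are hypothetical placeholders, not Velocimetrics’ API or any exchange workflow.

```python
# Hypothetical routing of traffic-level alerts to remediation actions.

def reissue_subscription(symbol: str) -> None:
    # Placeholder for the real resubscribe call to a feed handler.
    print(f"re-subscribing to {symbol}")


def request_trading_halt(symbol: str) -> None:
    # Placeholder for an escalation workflow (e.g. notifying the trading desk).
    print(f"requesting trading halt on {symbol}")


def handle_alert(alert: dict) -> None:
    """Route a traffic-level alert to an action based on its kind and severity."""
    if alert["kind"] == "stale_symbol":
        reissue_subscription(alert["symbol"])
    elif alert["kind"] == "one_sided_book" and alert.get("severity") == "critical":
        request_trading_halt(alert["symbol"])


# e.g. handle_alert({"kind": "stale_symbol", "symbol": "XYZ"})
```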

So, as I hope I’ve explained, I believe both types of market data quality assessment have their value. Choose to ignore one in favor of the other and you never know what might be hovering in your blind spot.

If you’re interested in learning about how Velocimetrics’ technology can be used to assess market data quality please visit: www.velocimetrics.com/products/market-data-quality/

3 Comments

  • Great topic. It’s a very important determination to make when offering a service to application developers and support teams. In the past I’ve termed the two as Data Flow Monitoring (DFM) and Data Quality Monitoring (DQM). DFM itself can be a very complex process and adds a lot of value when implemented correctly. In my opinion DQM is best included in application logic rather than being provided as a separate service as there are so many permutations many of which an application may not care about.

    • Hi Denny,

Great to meet you and thank you for your comment. In my experience DQM is always likely to be included in application logic, such as in the feed handlers etc., and it provides great value. I do feel, though, that external DQM is no replacement for that; I see it as more of a critical addition. For one, it is a “second set of eyes”, external to the system, developed very differently, and often receiving data differently. This means it may be aware of issues the application isn’t. Also, when there are multiple applications involved, some may not be easily accessible or simple to change in a way that allows these measures to be added quickly (as in the case of off-the-shelf feed handlers, for example). Taking external measures may work around these issues and provide oversight.

I also think that maintaining historical information collected during this process can be extremely valuable. It’s different from the historical record of the data itself; what we’re talking about here are historical records of data quality. These can later be analyzed to spot trends and associated events.

• Hi Shimrit, valid points, and as a data and infra services provider I definitely see the value. I just think it’s an uphill battle to get application developers to buy into the proposition, as they will need to understand DQM logic and specifications and build these into trading applications. There’s obviously value in doing this, but does it outweigh the cost in terms of application performance, application maintenance, etc.?
