Guest Contributor: Adam Devine, VP Product Marketing & Strategic Partnerships, WorkFusion
Whether you provide a corporate actions feed or integrate one at an institution, you know that the bar for data accuracy, speed, and efficiency keeps rising. If you’re a provider, you know that the lake from which you fish relevant news has turned into an ocean that spans continents and languages, fed by a growing number of sources and formats that neither headcount nor technology can efficiently keep up with. Straight-through processing (STP) is still a unicorn.
These are the 7 reasons why:
- Automation is picky and noisy. There is no single platform that can extract data from any source in any format in any language; saying so is a little like pointing out that unicorns don’t exist. Data providers are generally forced to make do with decent automation point solutions for each type of format and muddle through irrelevant extracted data and exceptions.
- Traditional scrapers and harvesters break when sources change, and repairing them causes downtime. They’ve become more durable over the last three years, but changes in source formats still throw off even the most “intelligent” scrapers and harvesters; a sketch of why appears after this list. Just as printer companies make their money on ink cartridges, traditional extraction vendors make their money on the IT projects born of broken algorithms.
- Off-the-shelf web scrapers and harvesters require IT to configure and IT to repair. There is never enough IT staff (or budget) to solve every business problem as quickly and efficiently as business teams and customers would like. Web scrapers and harvesters in particular are vital tools for corporate actions teams, but they aren’t set up for plug-and-play business use. Implementing them is an IT engagement, and fixing them when source formats inevitably change is often an even bigger IT engagement and expense.
- Most custom automation is too big an investment for too little ROI. Using machine learning to automate proprietary, in-house workflows has been the information industry’s dream for years, but it’s too expensive and time-consuming to realize for all but the very biggest and most profitable data products. Stock automation plus human exceptions management remains the default for the majority of workflows.
- Data specialist resources are squandered on extraction and exceptions work. Data specialists, whether they’re in-house or outsourced, are skilled and expensive, and using them for repetitive work that can’t be efficiently automated or for exceptions that automation can’t handle isn’t the best allocation of resources.
- Budgets for adding data specialist headcount are declining. That makes it increasingly difficult to process the growing number of exceptions as fallible automation churns through ever-larger volumes of source data.
- Outsourcing is becoming more expensive, which is increasing the cost of human-powered data extraction. Offshore labor rates are rising 10-15% each year. The moderate savings you achieved in the first year or two of your contract will start looking dusty by the third or fourth.
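To make the scraper fragility above concrete, here is a minimal Python sketch of the kind of selector-based extraction these tools perform. Everything in it is a hypothetical stand-in (the URL, the CSS classes, the column order come from no real source); the point is how much of the logic is welded to one publisher’s current page layout.

```python
# Hypothetical sketch of a selector-based corporate actions scraper.
# The URL, CSS classes, and column order are illustrative assumptions.
import requests
from bs4 import BeautifulSoup

ANNOUNCEMENTS_URL = "https://example.com/corporate-actions"  # placeholder

def scrape_actions():
    html = requests.get(ANNOUNCEMENTS_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    actions = []
    # The logic is welded to today's page structure: a table with the
    # class "ca-table" and columns in one fixed order.
    for row in soup.select("table.ca-table tr.ca-row"):
        cells = [td.get_text(strip=True) for td in row.select("td")]
        actions.append({
            "security": cells[0],    # misaligns if columns are reordered
            "event_type": cells[1],
            "ex_date": cells[2],
        })
    return actions

if __name__ == "__main__":
    records = scrape_actions()
    # If the publisher renames "ca-table", select() quietly matches nothing
    # and the feed degrades silently until someone notices.
    print(f"extracted {len(records)} actions")
```

The moment the publisher renames a class or inserts a column, code like this returns zero records or misaligned fields, and the fix is an IT ticket, not a business-user adjustment.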
What’s the solution?
A tunnel of problems isn’t all that useful without a light at the end of it. The light is crowd computing, a method of work that pairs crowdsourced workers and data specialists with machine learning to train and maintain extraction algorithms. It can be used for any source in any format in any language, and it works 24/7 without downtime. Crowd computing is an efficient path to automating all but the most analytical of corporate actions data work, and chances are excellent that at least one of your competitors is already using a crowd computing platform for data extraction. If this sounds too good to be true, it’s because the future of corporate actions has only just arrived. It’ll take a little time before it becomes everyone’s present.
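The paragraph above doesn’t spell out how a crowd computing platform works internally, but the human-in-the-loop pattern it describes can be sketched. In the Python below, every specific is an illustrative assumption rather than any vendor’s API: the confidence threshold, the model choice, and the queue are stand-ins for the idea that a model auto-processes what it’s sure of, routes the rest to human workers as exceptions, and retrains on their answers.

```python
# Illustrative human-in-the-loop sketch of the crowd computing pattern.
# The threshold, model, and queue are assumptions, not any vendor's API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for straight-through processing

# Seed model trained on whatever labeled announcements exist today.
seed_texts = ["dividend of $0.25 per share declared",
              "2-for-1 stock split announced"]
seed_labels = ["dividend", "split"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(seed_texts, seed_labels)

def route(announcement: str, crowd_queue: list) -> str | None:
    """Auto-classify confident cases; queue the rest for human review."""
    probs = model.predict_proba([announcement])[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return model.classes_[probs.argmax()]   # straight-through
    crowd_queue.append(announcement)            # exception -> human worker
    return None

def retrain(human_labeled: list[tuple[str, str]]) -> None:
    """Fold human answers back into training so the model handles more next time."""
    texts, labels = zip(*human_labeled)
    model.fit(list(seed_texts) + list(texts), list(seed_labels) + list(labels))
```

That feedback loop is the contrast with the static scrapers sketched earlier: when sources change, the new exceptions flow to people whose answers retrain the model, so extraction keeps up instead of waiting on an IT repair.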