A fairly new buzzword, “Big Data” generally describes data sets so large that they present many businesses with major management challenges. The sheer volume makes Big Data difficult for IT managers to capture, store and analyze, and often too large to process via traditional IT infrastructures. A typical dataset in this category exceeds 30 petabytes; a petabyte is a unit of information equal to one quadrillion bytes of data.
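For a sense of scale, the back-of-the-envelope arithmetic behind those figures is sketched below in Python, purely for illustration (the 30-petabyte threshold is the article’s example; the decimal definition of a petabyte, 10^15 bytes, is assumed):

```python
# Illustrative arithmetic only: expressing 30 petabytes in bytes and bits.
PETABYTE = 10 ** 15          # one petabyte = one quadrillion bytes (decimal definition)

dataset_pb = 30              # the example threshold cited in the article
dataset_bytes = dataset_pb * PETABYTE
dataset_bits = dataset_bytes * 8

print(f"{dataset_pb} PB = {dataset_bytes:,} bytes = {dataset_bits:,} bits")
# 30 PB = 30,000,000,000,000,000 bytes = 240,000,000,000,000,000 bits
```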
Securities firms, for example, rely on huge amounts of data for risk management systems that track many metrics, and so are in need of ever-better methods of managing the volumes a risk management platform requires.
Big Data also refers to the complex variety of tools, strategies and procedures for handling such large amounts of data. The issue is of particular concern in business analytics, the process of evaluating large amounts of disparate data, and it primarily affects IT professionals. For this reason, the term is also rendered as “Enterprise Big Data” or “Big Data Analytics.”
IT managers generally use the “Three V’s” to define and tackle Big Data: volume, variety and velocity. Excessive volume creates storage issues. The variety of data formats, both structured and unstructured, makes it challenging to sort and process the data. Velocity, the speed at which data is received and must be processed, is yet another IT management challenge.
About 80% of Big Data is said to be unstructured, meaning it has not been identified, organized and stored in a database. Such a high percentage of unstructured data can be costly to a company in storage terms. Structured data, by contrast, has already been organized and stored in a database.
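To make the distinction concrete, the sketch below (hypothetical names and values, Python used only for illustration) contrasts a structured record, which maps directly onto database columns, with the kind of unstructured text that must first be identified and organized before it can be stored and queried:

```python
import re

# Structured data: fields are already named and typed, so the record
# maps straight onto database columns and can be queried immediately.
structured_trade = {
    "trade_id": "T-10482",   # hypothetical values for illustration
    "symbol": "XYZ",
    "quantity": 500,
    "price": 101.25,
}

# Unstructured data: a free-text note with no predefined schema.
# It must first be parsed into fields before it can be stored in a
# database and analyzed.
unstructured_note = (
    "Client called re: order T-10482, wants to increase size if XYZ "
    "dips below 100 before the close."
)

# A naive illustration of the extra work unstructured data demands:
# even finding the trade reference requires text processing.
match = re.search(r"T-\d+", unstructured_note)
print("Trade reference found in note:", match.group() if match else "none")
```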
For securities firms, much of the drive to better process Big Data stems from the U.S. government’s push for new regulatory reporting requirements.