Visual Analytics, especially the exploration of data, requires a scalable and flexible data backend. It is not uncommon that gigabytes, maybe even terabytes, of data need to be queried for a specific analytics task. Furthermore, the more context is available around log data, the more expressive the data gets and the deeper the insights that can be discovered in it. How can we gather all that context and combine it with both network-based and host-based data? What are the data access requirements? How can we run data mining algorithms, such as clustering, across all of the data? What kind of data store do we need for that? Do we need a search engine as a backend? Or a columnar data store?
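To make the clustering question a bit more concrete, here is a minimal sketch of what "clustering across the data" might look like once per-host features have been pulled out of such a backend. The feature names and toy values are purely hypothetical, and scikit-learn's KMeans simply stands in for whatever algorithm a real pipeline would use at scale.

```python
# Minimal sketch: cluster per-host traffic features derived from flow logs.
# Column names and values are hypothetical placeholders, not a real dataset.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Pre-aggregated per-host features, e.g. exported from a columnar store.
flows = pd.DataFrame({
    "bytes_out": [1200, 98000, 1500, 102000, 900],
    "bytes_in": [800, 4000, 950, 3900, 700],
    "distinct_dest_ports": [3, 40, 2, 45, 4],
})

# Scale features so no single dimension dominates the distance metric.
scaled = StandardScaler().fit_transform(flows)

# Group hosts into behavioral clusters; small or unusual clusters are
# candidates for closer investigation.
flows["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(flows)
```

In practice the interesting part is not the algorithm call itself but getting the features out of terabytes of raw logs quickly, which is exactly where the choice of backend matters.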
I recently wrote a paper on the topic of the security data lake, a concept for a data backend that enables a variety of processing and access use-cases. A short introduction to the topic is available as well.
Maybe at a later point in time, I will address the topic of data science techniques and workflows for making all that big data actionable. How do you take a terabyte of data and find actual insights? Just dropping the data into a network graph visualization is not going to help; you need quite a bit more than that. But again, more on this later.
If you want to learn more about how to visualize and analyze terabytes of data, attend the Visual Analytics Workshop at BlackHat 2015 in Las Vegas.
Again, here is where you can download the paper.