I am quite frustrated with a lot of the research papers and tools that get published. In many cases you can just tell that the authors and developers of these tools have good intentions, but unfortunately little or no domain knowledge.
One example was a recent paper I read about some visualization tool. The authors were talking about occlusion and how filtering can help address that problem. Absolutely. I could not agree more. However, the context was security alarms. They proposed that one of the most effective ways to deal with occlusion was to filter based on alarm priority and only show a certain alarm level. Well, why would I need a visualization tool for that? I can use grep to do so. And if you are going to visualize only the highest priority alerts (or any single priority level, for that matter), you are losing context. It might have been enormously important to see those level 10 alerts in context with all the level 1 alerts. That's why you want to use visualization: to see relationships. The relationships among level 10 alerts alone are limited, and most likely there won't be many of them anyway!
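Just to make the point concrete: if all you want is a single priority level, grep (or a few lines of script) already does the job. Here is a minimal sketch, assuming a hypothetical log format where each alarm line carries a field like priority=10; the file name and field name are made up for illustration.

```python
# Minimal sketch: filtering alarms by priority needs no visualization tool.
# Assumes (hypothetically) that each alarm line contains a field like
# "priority=10"; the file name "alarms.log" is made up for illustration.
import re
import sys

PRIORITY_RE = re.compile(r"priority=(\d+)")

def filter_by_priority(lines, level):
    """Yield only the alarm lines whose priority equals the given level."""
    for line in lines:
        match = PRIORITY_RE.search(line)
        if match and int(match.group(1)) == level:
            yield line

if __name__ == "__main__":
    # Equivalent in spirit to: grep "priority=10" alarms.log
    with open("alarms.log") as f:
        for alarm in filter_by_priority(f, 10):
            sys.stdout.write(alarm)
```

The point is not the code; it is that a filter like this throws away exactly the context a visualization is supposed to preserve.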
The second point I want to get across about visualization (or, in general, security research) papers is the use of the wrong data to verify and justify a tool's usefulness. Simulated data feeds, artificially generated user behavior, and the like are just a really, really bad way of testing, or at least justifying, why a tool is well suited for finding important and relevant events. And if you are going to compute metrics on top of that data, such as recall and precision, you are just in the wrong profession. Get that tool on a real network where people are trying to solve real problems!
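For what it's worth, here is a minimal sketch of how precision and recall are typically computed over a set of alerts. The alert labels below are invented for illustration; the point is that both numbers are defined entirely relative to the "ground truth" set, so if that ground truth is simulated, the scores say nothing about how the tool behaves on a real network.

```python
def precision_recall(flagged, relevant):
    """Compute precision and recall for a set of flagged alerts
    against a set of truly relevant ones (the ground truth)."""
    true_positives = len(flagged & relevant)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the "ground truth" here is made up, which is
# exactly the problem with metrics computed over simulated data.
flagged = {"alert-1", "alert-2", "alert-3"}
relevant = {"alert-2", "alert-3", "alert-7"}
print(precision_recall(flagged, relevant))  # (0.666..., 0.666...)
```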