This is where you can start discussions around security visualization topics.

NOTE: If you want to submit an image, post it in the graph exchange library!

You might also want to consider posting your question or comment on the SecViz Mailinglist!

Discussion Entries

Security Visualization - State of 2010 and 2011 Predictions

At the recent SANS Incident Response and Log Management Summit, I was part of a panel on security visualization. As an introduction, I presented the attached slides on security visualization trends and where we are today.
I looked at four areas of security visualization: Data, Cloud, Tools, and Security. I started by looking at the log maturity scale that I developed a while ago. Barely any of the companies present could place themselves to the right of the correlation point. It's sad, but probably everyone expected it. We have a long way to go with log analysis!


It's very simple. If you don't have the data, you cannot visualize it. A lot of companies are still struggling to collect the necessary data. In some cases, the data is not even available because applications do not generate it. This is where data analysis and security people have to start voicing their needs to the application owners and developers in order to get the data they need generated. In addition, developers and security people have to communicate more to learn from each other. Ideally, it is not even the security folks who visualize and analyze the application logs, but the application people themselves. Just a thought!
What we will see next year is that the Big Data movement is going to enable us to crunch more and bigger data sets. Hopefully 2011 will also give us an interoperability standard that is going to ease log analysis.


What does the cloud have to do with security visualization? Well, it has to do with processing power and with application development. Applications generate logs, and logs are used for security visualization. Cloud services are new pieces of software that are being developed. We have a chance here to build visibility into those applications, an opportunity to educate these developers to apply logging in the right way.
Next year we will see a lot of companies rolling their own log analysis systems based on big data technology, such as Hadoop. We have seen a number of companies doing this already in 2010: Facebook, LinkedIn, Netflix, Zynga, etc. Traditional log management solutions just don't scale to these companies' needs. This will continue next year.
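To make the "roll your own log analysis on big data" idea concrete, here is a minimal sketch in the style of a Hadoop Streaming mapper/reducer pair, counting failed logins per source IP. The sshd-style log format and the sample lines are assumptions for illustration, not any company's actual pipeline.

```python
import re
from collections import defaultdict

# Hypothetical sshd-style log format; real deployments would feed these
# functions via Hadoop Streaming (mapper and reducer on stdin/stdout).
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def mapper(lines):
    """Map phase: emit (source_ip, 1) for every failed-login line."""
    for line in lines:
        match = FAILED.search(line)
        if match:
            yield match.group(1), 1

def reducer(pairs):
    """Reduce phase: sum the counts per source IP."""
    totals = defaultdict(int)
    for ip, count in pairs:
        totals[ip] += count
    return dict(totals)

sample_log = [
    "sshd[901]: Failed password for root from 10.0.0.5 port 4711 ssh2",
    "sshd[902]: Accepted password for bob from 10.0.0.9 port 4712 ssh2",
    "sshd[903]: Failed password for admin from 10.0.0.5 port 4713 ssh2",
]
print(reducer(mapper(sample_log)))  # one count per attacking source IP
```

The point of the sketch is the shape of the computation: map and reduce are independent, stateless phases, which is exactly why this style of analysis scales out across a cluster when log volume outgrows a single box.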


By tools I mean security visualization tools. We are absolutely nowhere with this. There are a couple of simple tools out there, but there is no tool that really does what we need: brushing, linked views, support for large data sets, ease of use, context, etc.
Next year won't really change anything in this area. What we will see is that more and more tools are built on the Web. The cloud movement is partly responsible for this push, but so is the broad adoption of HTML5 with all of its goodness (e.g., WebSockets, Canvas). We will see advances in the social space with regard to visualization tools, and security people will continue utilizing those tools to analyze security data. It's not ideal, because these tools are not meant for this, but hey, better than nothing! Maybe this will help create awareness and surface some interesting use cases for security visualization.


What will we see in security visualization? Well, as we saw earlier, we don't have the data. What that means is that we haven't really had a chance to learn how to visualize that data. And because we didn't have that chance, we don't really understand our data. Read that again. I think this is an important point!
Next year will give us more bad security visualization examples. And I am lumping product displays into this. Have you looked at your tool lately? During the SANS summit, I had a chance to look at some of the vendors' dashboards. They are horrible: 3D charts, no legends, bad choices of colors, non-actionable dashboards, etc. Note to log management vendors: I offer a security visualization class. You might want to consider taking it! But back on topic. Visualization, just like security, will stay an afterthought. It gets added when everything else is already in place. We know how that generally turns out.

I know, I am painting a gloomy picture. Hopefully 2011 will have some surprises for us!

Equilibrium Networks free/open-source software release

Equilibrium Networks' free/open-source visual network traffic monitoring software is now available for download at http://www.eqnets.com. A video of our enterprise system in action and technical documents detailing our approaches to traffic analysis, real-time interactive visualization and alerting are also available at our website.

Besides a zero-cost download option, we are also offering Linux-oriented installation media and an enterprise version of our system with premium features such as configurable automatic alerting, nonlinear replay, and a 3D traffic display.

Discounts—including installation media for a nominal shipping and handling fee—are available to institutional researchers or in exchange for extensions to our platform.

The software can run in its entirety on a dedicated x86 workstation with four or more cores and a network tap, though our system also supports distributed hardware configurations. An average graphics card is sufficient to operate the visualization engine.

Log Visualization in the Cloud - Webinar

On August 19th, at 10am PST I will be giving a Webinar on the topic of visualization. You can register and watch the Webinar right here:

A BrightTALK Channel

Cloud-based Log Analysis and Visualization

I gave a talk at RMLL 2010, a French free software conference. The title, Cloud-based Log Analysis and Visualization, already gives the content away. But just in case, here is the abstract for the talk:

Cloud computing has changed the way businesses operate, the way businesses make money, and the way businesses have to protect their assets and information. More and more software applications are moving into the cloud. People are running their proxies in the cloud, and soon you will be collecting your logs in the cloud. You shouldn't have to deal with log collection and log management. You should be able to focus your time on getting value out of the logs: on log analysis and visualization.

In this presentation we will explore how we can leverage the cloud to build security visualization tools. We will discuss some common visualization libraries and have a look at how they can be deployed to solve security problems. We will see how easy it is to quickly stand up such an application. To close the presentation, we will look at a number of security visualization examples that show how security data benefits from visual representations. For example, how can network traffic, firewall data, or IDS data be visualized effectively?

Monitoring / Visualisation Stations, & relevance of layer 4 traffic

Opinions sought from those working in the relevant areas. I handed this document in as part of a degree project in security visualisation and monitoring, and the feedback was that the network and monitoring station(s) are not realistic, and that I should have focused only on port 80 and layer 7 traffic, as layer 4 is supposedly no longer relevant. The link provided below is only part of the document; I presume it's the part they had issues with. I wasn't actually intending to focus on web traffic, which was made clear in the document anyway (though I did point out to them that with the likes of Rumint's packet contents visualiser, it is certainly viable to match captures against malware signature databases, but that aspect wasn't the focus of the project).
I don't expect it says anything that people working in those areas will be unaware of, and the general intention was to address what would be required for a monitoring station / network, which includes visualisation software, that would work in real-time as well as offline analysis and traffic capture.
The grouping into 'objectives' is just part of how the work has to be presented to comply with guidelines. Cheers for input, I know you're probably busy.


nb - the last part is probably wrong about ad-hoc IPs; I can't remember exactly right now how they are handed out. They probably aren't always dynamic, especially now that fixed-IP SIMs are more common.

EDV - Event Data Visualization

Afterglow has been on my list of 'neat tools' for quite some time. Thankfully, last month I finally had a bit of spare time to really play with it.

The result was EDV: http://www.pintumbler.org/code/edv

See the page for more info. Keep in mind, this is BETA!

It currently supports Snort (Sguil DB format). However, even the untrained eye can easily modify it for a straight Snort setup, or anything else you can query from MySQL. Once you have your sources defined, it takes care of the rest.
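Since EDV sits on top of AfterGlow, the core of "anything you can MySQL query" boils down to turning query rows into AfterGlow's three-column CSV input (source, event, target). Here is a minimal sketch; the tuples stand in for a MySQL result set, and the column order (src_ip, signature, dst_ip) is my assumption, not the actual Sguil schema.

```python
import csv
import io

def rows_to_afterglow(rows):
    """Render (src_ip, signature, dst_ip) tuples as AfterGlow CSV text.

    AfterGlow treats the columns as source node, event node, and
    target node of the link graph.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerows(rows)
    return out.getvalue()

# Stand-in for a MySQL query result (e.g. fetched with a DB-API cursor).
alerts = [
    ("192.168.1.10", "ET SCAN Nmap", "10.0.0.1"),
    ("192.168.1.10", "ET SCAN Nmap", "10.0.0.2"),
]
print(rows_to_afterglow(alerts))  # pipe this into afterglow.pl
```

In a real setup you would replace the hard-coded list with a cursor fetch and pipe the CSV into afterglow.pl to render the link graph.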

The tool is static (controlled by configs and cron) for now, but I do plan on adding a query tab to the web page so that you can do on-the-fly queries. That's low priority for now. I have been focusing on two parsers that log directly to MySQL: one parses syslog output from a Barracuda spam firewall, and the other URL info captured by URLSnarf. These will be my next additions.

Comments and suggestions welcome.


Interesting patterns World of Warcraft

It's been a pretty quiet day today, but I noticed an interesting pattern emerge. I hadn't seen it before, which is really strange considering I work at a college. I'm using the Sphere of Influence 3.0 summary window and timeline, fed from a Cisco ASA.
In pattern "C" I show the normal allowed network traffic (the horizontal "bars" of traffic are a P2P program not associated with WoW). This shows traffic both into and out of the college. I noticed the pattern and highlighted it, which showed me the organization. If you know anything about World of Warcraft, you'll recognize that the organization was Blizzard. I filtered all traffic to and from organizations with the word "blizzard" in them. As you can see from pattern "A", it clearly shows a World of Warcraft traffic pattern; the client updating itself is the easiest pattern to spot. I also filtered the traffic in the pattern "B" denied window. The traffic being denied is port 3724: voice. The timeline (pattern "C") assured me that traffic was indeed seen on 3724 (the WoW port). Although I was tempted to put in a QoS statement to slowly grind that machine to a crawl, I opted for the easier solution. It came from a library computer, so it was just a simple matter of visiting the library, removing the software from a machine that had somehow been unfrozen, freezing the machine again, and updating a few rule sets.
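The same hunt can be sketched programmatically over exported firewall records: keep anything whose organization lookup matches a keyword, or whose port is 3724, the classic World of Warcraft port. The dict layout below is a hypothetical simplification of an ASA log export, not Cisco's actual format.

```python
WOW_PORT = 3724  # World of Warcraft game/voice port

def hunt(records, keyword="blizzard"):
    """Return records whose org matches the keyword or whose port is
    the WoW port. Record fields (src, port, org) are assumed names."""
    return [
        rec for rec in records
        if keyword in rec.get("org", "").lower()
        or rec.get("port") == WOW_PORT
    ]

sample = [
    {"src": "10.1.2.3", "port": 3724, "org": "Blizzard Entertainment"},
    {"src": "10.1.2.4", "port": 80,   "org": "Example CDN"},
]
print(hunt(sample))
```

Filtering on both the organization name and the port, as in the anecdote above, is more robust than either alone: the org lookup catches traffic on unexpected ports, and the port filter catches traffic where the org lookup fails.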

Patterns always interest me, just thought I'd share this one with you all.

SOI URL's added

We added a URL visual to the PIX/ASA, so now we collect the URLs. This helps when monitoring a system: you not only see the connection, as in the old way, but now you see the URLs too. As usual you can filter it to look for particular organizations or countries, but using the keyword search you can also hunt for anything in the URL. That's useful when hunting C2 traffic for infections.


Visualisation hardware & software

This is a snippet of a report written for an honours project I'm doing on security visualisation. Just some ideas I want to punt out there, because it'd be nice to see them take off, and in case they've gone unnoticed because they sit in different topic areas.

Visualisation software for security can be used to display graphical information about data being captured in real time, and also for offline analysis. The difference between visualisation applications and the monitoring software of the previous objective is in the presentation of the data, although both kinds can and do make use of the more familiar graphs, such as line graphs, bar charts, pie charts, and flow charts.
In general, information visualisation is a way to gain insight into complex datasets and textual information in a condensed and understandable way.
Consequently, evaluating a tool's effectiveness means taking into account knowledge from multiple disciplines concerned with visual systems. Successful visualisation tools take into account user interface design, human-computer interaction, the psychology of human perception, and machine pattern recognition, and are as much borne from the design side of art as they are about presenting quantified data.
To some extent this kind of information visualisation is quite new, and at its current stage it is viewable as an overall discipline at a time before its emergence as a distinct one. At the same time, the areas that will feature heavily in its development are burgeoning in somewhat unnoticeable ways. Consider, for example, the prevalence of touchscreen mobile communications devices, whose interfaces are so intuitive and easy to pick up that many people only need a general idea of how the interface works (such as another graphic showing it in use) to be able to use it correctly. It feels natural to press buttons with symbolic and pictorial representations of functions, to go to the next page with a sweeping motion, to zoom in and out for more precise datasets or larger overviews using hardware or onscreen rollbars and sliders, or to manipulate the onscreen display by tilting the device itself.
The world wide web itself was designed from the outset as a distributed hypertext system. This sounds obvious, as it is well known what the H in HTML stands for, but the framework itself is another example of a new idea (though clearly built on cross-indexing, as used in libraries) that people find easy to accept without really noticing it. The extra data conveyed within a document by a tag, navigation made easier with anchors, the hypertext links themselves that let a keyword, when clicked, jump to another document with further related information, the use of tabbed graphical browsers: these web basics are so integrated for the user precisely because they use intuitive interface design.
The same ease of information access is also why it is so frustrating for the user when the desktop or interface becomes slowed down and cluttered with unwanted elements. Aside from being relevant to the overall objectives of this project (spam and other malware and adware are certainly cumbersome additions to any user experience), this gives very good design hints about what to include and what to leave out of a graphical console.

To some extent the development of information visualisation has been impeded because the hardware is either too expensive, too bulky, or simply not available yet, and therefore unable to keep up with the code requirements of the applications or the amount of data needing to be accessed, sorted through, and processed. As previously mentioned, clustering is definitely a viable solution to many of the problems slowing down development. Parallel computing and information visualisation station design are very complementary, as the latter greatly benefits from incorporating the former. This is easily understood by merely counting the number of nodes being monitored in a given network, and considering that the monitoring station has to capture, make sense of (to various degrees), possibly interpret and present, and certainly store or produce hard copies in real time, for all of those nodes combined.
Video game hardware and onscreen interfaces, and music visualisers, are another two areas where a lot of progress has already been made that can be directly lifted and incorporated into information visualisation.

Like the light pens and graphics tablets long used in artistic and photo-editing digital applications, devices that offer remote pointing to manipulate onscreen elements are very useful to someone sat far back from multiple monitors, where interaction is required but their field of vision has to take in all the displays.
There are other existing solutions here also, particularly in the field of wearables, such as being able to fit large display formats inside regular sized glasses, and using one-handed small footprint keypad controllers.
Again, other existing areas have already taken multifunction keypad concepts onboard – gaming and video editing decks being prime examples. These allow complex functions to be executed with a key press, by assigning the desired functions as hotkey shortcuts.
Onscreen GUI menus in games offer the user at-a-glance statistics and information as well as easy access to point-of-view changes, and commonly offer the same information on teammates and enemies. It is easy to see how this could be utilised in realtime security monitoring: to track multiple connections and see data on them continually updated, monitor a colleague's progress, and shift emphasis between varying datasets without having to minimise or close any displays.
Online and network gaming configurations themselves have to deal with multiple users changing the game elements on a constant basis, and must update the changes and present them to all users in a synchronised way, so everyone is interacting with the same scenario. This is for now more successful in some places than others, purely because of latencies and the haphazard manner in which packets may traverse the internet, and also of course because of the user's own hardware and the features offered by their ISP and the associated telecoms infrastructure. However, the framework itself is available, and in a LAN environment it can be demonstrated to work very well.
Graphics cards have also developed greatly in recent years, to the extent that what once would have required a dedicated visualisation station can now be done on a home PC with one to four graphics cards. GPU/CPU hybrid systems are already in the Top 500 supercomputer listings, and the main hardware chip vendors have been focusing a lot of attention on GPU development.
Music visualiser applications can also be adapted to match their visuals to network or other data events instead of audio events. This is a very promising area, as baselining can be used to produce a background pattern or visual of the network's behaviour, so that any fluctuations are readily noticeable even to someone who knows nothing about network data itself.
Use of colour and shading types is also very relevant, and comes out of areas like topography. Many current security and network visualisation tools let the user alter the colouring of data elements to suit themselves. This is another important consideration for a user interface, and from a security point of view it is a welcome feature, as user view customisation makes it potentially less obvious to an intruder what the data represents. Of course, collating and sharing data between authorised users means there has to be a way to easily combine differing views, which can be done with mapping and parsing.

REQUEST for Hilbert Curves

I was just looking for some examples of IPv4 Hilbert curves and realized there were none in the image gallery. Does anyone have examples of IPv4 space visualizations of that sort? They are also called IPv4 heatmaps. I have never generated any of them myself, and I didn't just want to post a screenshot of someone else's images.
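For anyone wanting to generate one of these, the layout behind IPv4 heatmaps is the Hilbert curve: numerically adjacent networks land in spatially adjacent cells, so contiguous address blocks show up as compact regions. Below is a sketch using the standard iterative Hilbert decode, placing each /8 (first octet) on a 16x16 grid; the grid size and the `slash8_cell` helper are my choices for illustration.

```python
def d2xy(order, d):
    """Convert a distance d along a Hilbert curve of the given order
    into (x, y) coordinates on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def slash8_cell(ip):
    """Map an IPv4 address to its /8 network's cell on a 16x16 grid
    (order 4: 256 cells, one per possible first octet)."""
    first_octet = int(ip.split(".")[0])
    return d2xy(4, first_octet)

print(slash8_cell("10.20.30.40"))
```

The same `d2xy` routine scales to finer maps: order 12 gives a 4096x4096 grid with one pixel per /24, which is roughly what the ipv4-heatmap style of image uses; colour each cell by traffic volume or allocation status and the heatmap falls out.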