This is where you can start discussions around security visualization topics.
NOTE: If you want to submit an image, post it in the graph exchange library!
You might also want to consider posting your question or comment on the SecViz Mailinglist!
This year's IEEE VAST Challenge features two mini-challenges that particularly appeal to the SecViz community. These challenges are open to participation by individuals and teams in industry, government, and academia. Creative approaches to visual analytics are encouraged.
Mini-Challenge 2 tests your skills in visual design. The fictitious Big Enterprise is searching for a design for their future situation awareness display. The company's intrepid network operations team will use this display to understand the health, security, and performance of their entire computer network. This challenge is also very different from previous VAST Challenges, because there is no data to process and no questions to answer. Instead, the challenge is to show off your design talents by producing a creative new design for situation awareness. Please visit http://www.vacommunity.org/VASTchallenge2013MC2 for more information.
Mini-Challenge 3 focuses on unusual happenings on the computer network of a marketing company. Can you identify what looks amiss on the network using the network flow and network health data provided? And can you ask the right questions to help you piece together the timeline of events? Two weeks of data will be released for this challenge. Week 1 data is now available. Please visit http://www.vacommunity.org/VASTchallenge2013MC3 for more details.
For more information, please contact email@example.com
Big data and security intelligence are the two hot topics in security for 2013. We are collecting more and more information, not only from the infrastructure but increasingly also directly from our applications. This vast amount of data gets increasingly hard to understand. Terms like MapReduce, Hadoop, MongoDB, etc. are part of many discussions. But what are those technologies? And what do they have to do with security intelligence? We will see that none of these technologies are sufficient in our quest to defend our networks and information. Data visualization is the only approach that scales to the ever-changing threat landscape and infrastructure configurations. Using big data visualization techniques, you can gain a far deeper understanding of what's happening on your network right now. You can uncover hidden patterns in data, identify emerging vulnerabilities and attacks, and respond decisively with countermeasures that are far more likely to succeed than conventional methods. Attendees will learn about log analysis, big data, information visualization, and data sources for IT security, and will learn how to generate visual representations of IT data. The training is filled with hands-on exercises utilizing the DAVIX live CD.
Log Management and SIEM
Raffael Marty is one of the world's most recognized authorities on security data analytics. The author of Applied Security Visualization and creator of the open source DAVIX analytics platform, Raffy is the founder and CEO of PixlCloud, a next-generation data visualization application for big data. With a track record at companies including IBM Research and ArcSight, Raffy is thoroughly familiar with established practices and emerging trends in data analytics. He has served as Chief Security Strategist with Splunk and was a co-founder of Loggly, a cloud-based log management solution. For more than 12 years, Raffy has helped Fortune 500 companies defend themselves against sophisticated adversaries and has trained organizations around the world in the art of data visualization for security. Practicing Zen has become an important part of Raffy's life.
In conjunction with the 2013 IEEE International Conferences on Intelligence and Security Informatics (ISI), we present a special topics workshop on:
Evaluating Security Visualizations in Supporting Analytical Reasoning & Decision Making in Cybersecurity
As the potential for visualizations in cybersecurity analysis becomes ever more apparent, evaluating these visualizations becomes more imperative than ever to supporting the cybersecurity mission. As technology and big data continue to grow rapidly, so does the deployment of insufficiently evaluated cybersecurity visualizations that claim to be aligned with how analysts think and perceive data. Before organizations can intelligently incorporate visualization into their cybersecurity analysis process, they must be prepared to pose tailored sets of questions that relate directly to the particular objectives of the cyber analyst. This workshop addresses these gaps with the intent of bringing together experts from a variety of disciplines relevant to evaluating cybersecurity visualizations in their ability to support analytic reasoning and decision making in cybersecurity.
We welcome paper submissions on the following or related topics:
Empowering the Human Analysts
Methods and techniques for evaluating the impact cybersecurity visualizations have on enabling the perceptual and cognitive processes required for intelligent decision making.
Addressing current deficiencies in cybersecurity analysis
Methods and techniques for measuring the impact cybersecurity visualization tools have on addressing current deficiencies that still exist in cybersecurity analysis such as exploration and prediction.
The Unique Nature of Cybersecurity Visualization
Identifying aspects that are specific to cybersecurity visualization, and identifying relevant contributions from current research in the broader fields of information visualization and scientific visualization, and from visualizations in other domains.
Workshop papers due: March 31, 2013
Notices of acceptance and comments provided to authors: April 12, 2013
Camera ready paper submitted: April 29, 2013
Submission file formats are PDF and Microsoft Word. Required Word/LaTeX templates (IEEE two-column format) can be found on IEEE's Publications web pages. Submissions can be long (6,000 words, 6 pages max) or short (3,000 words, 3 pages max). Papers must be in English and submitted by email to Lisa Coote at Lisa.Coote@innovative-analytics.com. The accepted workshop papers will be published by IEEE Press in formal proceedings. Authors who wish to present a poster and/or demo may submit a 1-page extended abstract, which, if selected, will appear in the conference proceedings.
Conference content will be submitted for inclusion into IEEE Xplore as well as other Abstracting and Indexing (A&I) databases. The selected IEEE ISI 2013 best papers will be invited for contribution to the Springer Security Informatics Journal.
Kevin O'Connell, Innovative Analytics & Training
Lisa Coote, Innovative Analytics & Training
Raffael Marty, PixlCloud
Tomas Budavari, Johns Hopkins University
Antonio Sanfilippo, Pacific Northwest National Laboratory
John T. Langton, VisiTrend LLC
Claudio Silva, NYU Polytechnic
Bernice Rogowitz, Visual Perspectives Consulting
Cullen Jackson, APTIMA
Enrico Bertini, NYU Polytechnic
John Goodall, Oak Ridge National Laboratory
The 10th International Symposium on Visualization for Cyber Security (VizSec) is a forum that brings together researchers and practitioners from academia, government, and industry to address the needs of the cyber security community through new and insightful visualization and analysis techniques. VizSec will provide an excellent venue for fostering greater exchange and new collaborations on a broad range of security- and privacy-related topics. Accepted papers will appear in the ACM Digital Library as part of the ACM International Conference Proceedings Series.
Important research problems often lie at the intersection of disparate domains. Our focus is to explore effective, scalable visual interfaces for security domains, where visualization may provide a distinct benefit, including computer forensics, reverse engineering, insider threat detection, cryptography, privacy, preventing 'user assisted' attacks, compliance management, wireless security, secure coding, and penetration testing in addition to traditional network security. Human time and attention are precious resources. We are particularly interested in visualization and interaction techniques that effectively capture human analyst insights so that further processing may be handled by machines, freeing the analyst for other tasks. For example, a malware analyst might use a visualization system to analyze a new piece of malicious software and then facilitate generating a signature for future machine processing. When appropriate, research that incorporates multiple data sources, such as network packet captures, firewall rule sets and logs, DNS logs, web server logs, and/or intrusion detection system logs, is particularly desirable.
See http://www.vizsec.org/ for additional information.
There are a couple of seats open for next week's security visualization workshop in Dubai. The training will be held Friday and Saturday, November 9th and 10th.
The topics range from data sources and log processing to plenty of eye-catching visualizations, plus a great module on big data. The signup link contains all the information you need.
Hope to see you in Dubai next week!
A week ago, VizSec 2012 took place in Seattle. I had the honor of presenting the keynote, which I used as an opportunity to talk about the state of the security visualization space. Here is the video of the talk.
This is a quick outline of the talk:
Following are the slides from the talk. Unfortunately, my video recording of the VizSec keynote failed. However, I presented at Microsoft the same week and was able to record my talk there. Same slides.
The past couple of months have been pretty free of SPAM in the secviz feed, after I implemented a moderator queue for all content. This seems to work pretty well. However, the system doesn't let me enable a moderator queue for images in the image gallery. That's why you have seen a few SPAM images in the feed (for example this morning).
I took another step to prevent this. When signing up as a user, you will have to be approved from now on. This seems to be the only way for me to prevent SPAM once and for all. I hope I'll be able to distinguish real users from spammers upon signup. I'll figure that one out.
Looking forward to seeing your posts here!
AfterGlow Cloud has evolved into another release, with many improvements added to the initial version. With GSoC 2012 approaching its end, we've covered all the additional features we planned for the second phase of development, post mid-term. Building on the initial version, this post will run you through the general features and the additional improvements covered.
A live demo of this release can be found here: http://andromeda.ayrus.net:8080/
Data sources: In addition to the initial method of uploading an AfterGlow-compatible CSV file, the application now supports two new ways of visualizing your data. You can now upload your logs straight from the source and have them parsed (to a CSV file) and then rendered on the fly. Additionally, AfterGlow Cloud is now integrated with Loggly.com's API. Loggly is a service used to collect log data for monitoring and analysis. With an account at Loggly, you can now search and import your logs straight from Loggly and have them visualized. Your authorization for the application to access your Loggly account remains on our end for about a hundred days, beyond which you'll have to re-authorize the application. You can, however, revoke access to the application at any time. Both of these new additions require you to specify a parsing scheme, which is covered below.
Log parsing: Logs you upload directly or from your Loggly account have to be parsed before they can be visualized. For this to happen, a regular expression has to be provided which groups two or three columns of data from each line in your log (if you're using two columns, you'll have to check "Two Node Mode"). You can either specify a custom expression or select one of the 'predefined' expressions. When providing a custom regular expression to parse your log, you're given an opt-in choice to save your expression as 'predefined' for other users to use.
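As a rough illustration of what such a grouping expression does, here is a minimal Python sketch that turns Apache Common Log Format lines into a two-column CSV. The pattern and file names are illustrative, not the application's actual predefined expressions:

```python
import csv
import re

# Illustrative pattern for an Apache Common Log Format line: group 1 captures the
# client IP, group 2 the response size in bytes. Not necessarily one of AfterGlow
# Cloud's predefined expressions -- just an example of a grouping regex.
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+)')

def log_to_csv(log_path, csv_path):
    """Turn matching log lines into a two-column (source, target) CSV."""
    with open(log_path) as log_file, open(csv_path, "w", newline="") as csv_file:
        writer = csv.writer(csv_file)
        for line in log_file:
            match = LOG_PATTERN.match(line)
            if match:
                # Two Node Mode: column 1 = client IP, column 2 = request size
                writer.writerow([match.group(1), match.group(2)])

# log_to_csv("access.log", "access.csv")
```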
Settings tab: These define general settings for the way you want your final graph to be rendered. For example, "Print Node Count" prints the frequency at which each node occurs in the uploaded data beside the node's label, while "Text Label Colour" lets you choose the colour of the text on each node. Every field throughout the application has a "?" help link appended; hovering over it shows a small tip about what the field does.
Advanced settings tab: These settings go a little beyond the general settings. For example, "Source fan-out threshold" sets a lower limit on the number of edges originating from each source node. If one or more source nodes don't have the required (threshold) number of edges originating from them, they're omitted from the graph. The same idea applies to "Omit threshold for each node", but with the threshold applying to the frequency at which each node occurs throughout the data.
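As an illustration of what such a threshold does (not AfterGlow's actual implementation), the filtering logic could look roughly like this in Python:

```python
from collections import defaultdict

def apply_fanout_threshold(edges, threshold):
    """Keep only edges whose source node fans out to at least `threshold` targets.

    `edges` is a list of (source, target) tuples parsed from the CSV. Purely an
    illustration of the "Source fan-out threshold" idea.
    """
    fanout = defaultdict(set)
    for source, target in edges:
        fanout[source].add(target)
    return [(s, t) for s, t in edges if len(fanout[s]) >= threshold]

# With a threshold of 2, sources that fan out to only one target are omitted.
edges = [("10.0.0.1", "80"), ("10.0.0.1", "443"), ("10.0.0.2", "80")]
print(apply_fanout_threshold(edges, 2))  # [('10.0.0.1', '80'), ('10.0.0.1', '443')]
```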
Configurations tab: These settings fine-tune your graph and often bring out interesting patterns, useful for visualization. Each fieldset in this tab provides a way to assign properties across the nodes in the graph. For example, the 'port' option under clustering provides a way to cluster all the nodes above a specific bound for the port they represent: a value of "2000" would mark and cluster (group) together all the nodes representing a port higher than 2000. 'Number of occurrences' in the Size fieldset makes the size of each node proportional to the frequency at which it appears throughout the log; a frequently occurring node appears large, and vice versa. This helps you notice interesting patterns. If you're very familiar with the way configurations work in AfterGlow, you also have the option of specifying them manually using the "Manual" option: you can simply paste from a configuration file or write your configuration by hand in a textbox. AfterGlow Cloud also saves your configuration every time you render a graph. When you choose to render a graph again, you can simply use your 'last used configuration' and it will import the same configuration file you used last time. Alternatively, you can 'import' the last used configuration into manual mode and fine-tune it further by hand.
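To make the two ideas above (clustering by port and sizing by frequency) concrete, here is a small, purely hypothetical Python sketch of the logic; AfterGlow itself expresses these rules in its own configuration syntax:

```python
from collections import Counter

def cluster_and_size(edges, port_threshold=2000):
    """Cluster target nodes whose port number exceeds `port_threshold` and derive
    a relative size for every node from its frequency in the data.
    Hypothetical helper, only meant to illustrate the two configuration ideas."""
    counts = Counter(node for edge in edges for node in edge)
    clusters = {
        target: f"ports > {port_threshold}"
        for _, target in edges
        if target.isdigit() and int(target) > port_threshold
    }
    sizes = dict(counts)  # node -> number of occurrences (drives node size)
    return clusters, sizes

edges = [("10.0.0.1", "3389"), ("10.0.0.2", "8080"), ("10.0.0.2", "80")]
print(cluster_and_size(edges))
# ({'3389': 'ports > 2000', '8080': 'ports > 2000'},
#  {'10.0.0.1': 1, '3389': 1, '10.0.0.2': 2, '8080': 1, '80': 1})
```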
Rendering engines: The application now supports GraphViz's dot and sfdp engines in addition to neato. If you're unsure about these, more information can be found here.
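All three are GraphViz layout programs that consume the same DOT input and differ only in layout algorithm. Assuming GraphViz is installed locally, a quick way to compare them on a toy graph is:

```python
import subprocess

# A tiny graph in DOT, the format all three GraphViz engines consume.
DOT_GRAPH = 'digraph g { "10.0.0.1" -> "80"; "10.0.0.1" -> "443"; "10.0.0.2" -> "80"; }'

# neato: force-directed layout; dot: hierarchical layout; sfdp: multiscale
# force-directed layout suited to large graphs.
for engine in ("neato", "dot", "sfdp"):
    subprocess.run([engine, "-Tpng", "-o", f"layout_{engine}.png"],
                   input=DOT_GRAPH.encode(), check=True)
```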
Gallery: You can submit the graphs you render with the application (along with some details about them) to a public gallery for other users to view.
As an example (a very rudimentary one, actually), here's how you might parse a typical Apache log. For this example we'll use a very small portion of the log (attached), parse the client IP and the size of each request from it, and try to render it.
We first point to the demo access log file from Apache, and since we're uploading a log directly from the source, we select the "Log" option to have it parsed to a CSV (compatible with AfterGlow) and then rendered. We've also checked the "Two node mode" box since we'll only be extracting two columns (IP/size) from our data. For the parser, we use a predefined regular expression which extracts the client IP and request size from an Apache log using the Common Log Format:
On the settings end, for some eye candy, we define an edge length of 1.5 (the length of an edge between two nodes) and set the text label colour to white:
Finally, on the configurations end, we add three colour configuration settings. All source nodes (client IPs) will be coloured with a shade of green. Target nodes (size nodes with a value of more than 2000 -- in this context that translates to more than 2000 bytes) will be coloured red. All other target nodes will be coloured with a shade of orange. It's important to note here that configurations are read line by line, so the line ordering matters. This configuration shows a really simple relation between how many bytes each client has requested per request, but it specifically marks requests of more than 2000 bytes in red (say you wanted to visualize the 'heavy' requests):
The resulting graph from these settings looks like:
You can see from the (really simple) example above some of the 'heavy' requests (we're classifying 'heavy' as >2000 bytes for the sake of the example) from different clients.
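To make the rule ordering concrete, here is the same colouring logic expressed as a small Python sketch; Python is purely illustrative here, since the actual rules are written in AfterGlow's own configuration syntax:

```python
def source_colour(client_ip):
    # Rule 1: every source node (client IP) gets a shade of green.
    return "green"

def target_colour(request_size):
    # Rules are evaluated in order; the first one that matches wins,
    # which is why the line ordering in the configuration matters.
    if int(request_size) > 2000:   # Rule 2: 'heavy' requests
        return "red"
    return "orange"                # Rule 3: everything else

print(source_colour("192.168.0.5"), target_colour("4096"))  # green red
print(source_colour("192.168.0.7"), target_colour("512"))   # green orange
```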
From the development perspective, AfterGlow Cloud can now be deployed to a production-like environment. The README for the application (and the demo above) covers deploying it on Apache using mod_wsgi. If you wish to run your own instance of the application, you can clone the source from the repository. A detailed README (pertaining to a machine running Ubuntu) is also available to help you set up. It walks through all the steps required to set up the environment and the application from scratch (to the point that you can get it running on a fresh Ubuntu install without hassle). The codebase has also been documented in detail, should you wish to fork and play with it.
This release marks the next version of AfterGlow Cloud. Please report any bugs or comments you have using the contact form on the demo :)
With the mid-term milestone of GSoC 2012 reached, we're happy to announce the first version release of AfterGlow Cloud. After a lot of discussion and review, the project seems to be in a good position for an initial release. The project is essentially based on AfterGlow, a security visualization tool that generates visual graphs from the data you upload. AfterGlow is originally command-line based; the aim of this project, in general, is to bring the tool and its options to the cloud -- so as to provide a neat interface for on-the-fly visualizations.
Live demos of the project are currently available at:
This release covers all the basic features discussed and agreed upon initially. You can upload any comma-separated file (only CSV files) as your log source to visualize it. The current version doesn't cover parsers for exporting logs from different sources (for example, tcpdump) into CSV -- but this is a future addition, likely in the next release. To get a feel for what the application is capable of, you can try uploading the sample "firewall.csv" file (in the attachments). This sample file contains some rules (pass, block) over different source and destination nodes. Getting any sense of what's exactly going on is difficult by merely inspecting the CSV file -- this is where AfterGlow comes in.
Labels "Settings" and "Advanced Settings" cover some rendering settings you might want to choose or override for better customization. For example, "Print Node Count" would append the number of times each source/destination node occurs in the log file provided -- this gives a sense of the frequency of the nodes. Similarly, "Text Label Colour" provides the option to override the default black colour of text on the graph (You can hover over the "?" next to any input for a description of what they exactly mean).
Configurations are used to further customize the rendering of the graph; for example, you might want to colour a set of source/destination nodes red if their IP is '68.xx.xx.xx'. Each of these configuration lines brings about another layer of visualization. For example, you'd probably want the size of each node on the graph to be proportional to the frequency at which it appears throughout the log (the configuration under 'Node Sizes' - 'Predefined - Number of Occurrences'). You can remove a line or change the ordering (the ordering of configurations matters) once it is added. A detailed guide to the different configuration options available will be added later.
A sample configuration file is added as an attachment (sample.properties). If you'd like to try this out with the sample "firewall.csv" data file, you can choose "Manual" under the configurations and paste the contents of the file (instead of manually feeding in every line). The application also provides the feature of "saving" your settings. All changes you make in the "Settings" and "Advanced Settings" panes are stored in a cookie (for four days) if the save feature is checked. AfterGlow Cloud populates your settings every time you visit the application with an active cookie.
Here's what a rendered graph looks like:
Original CSV data:
Graph rendered by AfterGlow on the above data:
The source for the entire project rests in the GitHub repository. If you choose to run your own local install of the project, detailed instructions are provided in the README. The instructions and requirements listed in the README cater to Ubuntu and use Django's development runserver (instructions for a production-like environment with Apache will be added later).
With this release, we've started to list the possible features and additions that could be brought to the next version of AfterGlow Cloud (an API, parsers to convert data from tcpdump etc. into CSV files, among others). There's still a lot to be covered and added, so please let us know if you'd like to suggest new features for the project, report a bug, or share general comments (a feedback form will soon be added to the current demos)!