Discussions

This is where you can start discussions around security visualization topics.

NOTE: If you want to submit an image, post it in the graph exchange library!

You might also want to consider posting your question or comment on the SecViz mailing list!

Discussion Entries


VizSec 2013 - Paper Deadline Extended, Poster Deadline Announced

The 10th Visualization for Cyber Security (VizSec) will be held in Atlanta, GA, USA on October 14, 2013 in conjunction with IEEE VIS. VizSec brings together researchers and practitioners in information visualization and security to address the specific needs of the cyber security community through new and insightful visualization techniques.

The paper deadline has been extended to July 22, 2013 at 5:00pm PDT. Full papers offering novel contributions in security visualization are solicited. Papers may present techniques, applications, practical experience, theory, analysis, or experiments and evaluations. We encourage papers on technologies and methods that promise to improve cyber security practices, including, but not limited to:

  • Situational awareness / understanding
  • Incident handling including triage, exploration, correlation, and response
  • Computer forensics
  • Recording and reporting results of investigation
  • Reverse engineering and malware analysis
  • Multiple data source analysis
  • Analyzing information requirements for computer network defense
  • Evaluation / User testing of VizSec systems
  • Criteria for assessing the effectiveness of cyber security visualizations (whether from a security goal perspective or a human factors perspective)
  • Modeling system and network behavior
  • Modeling attacker and defender behavior
  • Studying risk and impact of cyber attacks
  • Predicting future attacks or targets
  • Security metrics and education
  • Software security
  • Mobile application security
  • Social networking privacy and security
  • Cyber intelligence
  • Human factors in cyber security

We are also soliciting posters. Poster submissions may showcase late-breaking results, work in progress, preliminary results, or visual representations relevant to the VizSec community. Accepted poster abstracts will be made available on this website. Poster submissions are due August 23, 2013 at 5:00pm PDT.

See vizsec.org for the full Call for Papers and additional details.

VAST Challenge 2013 Now Available

This year's IEEE VAST Challenge features two mini-challenges that particularly appeal to the SecViz community. These challenges are open to participation by individuals and teams in industry, government, and academia. Creative approaches to visual analytics are encouraged.

Mini-Challenge 2 tests your skills in visual design. The fictitious Big Enterprise is searching for a design for their future situation awareness display. The company's intrepid network operations team will use this display to understand the health, security, and performance of their entire computer network. This challenge is also very different from previous VAST Challenges, because there is no data to process and no questions to answer. Instead, the challenge is to show off your design talents by producing a creative new design for situation awareness. Please visit http://www.vacommunity.org/VASTchallenge2013MC2 for more information.

Mini-Challenge 3 focuses on unusual happenings on the computer network of a marketing company. Can you identify what looks amiss on the network using the network flow and network health data provided? And can you ask the right questions to help you piece together the timeline of events? Two weeks of data will be released for this challenge. Week 1 data is now available. Please visit http://www.vacommunity.org/VASTchallenge2013MC3 for more details.

For more information, please contact vast_challenge@ieeevis.org.

Visual Analytics Workshop With World's Leading Security Visualization Expert


VISUAL ANALYTICS – DELIVERING ACTIONABLE SECURITY INTELLIGENCE


BlackHat Las Vegas


Only a few seats left!
Dates: JULY 27-28 & 29-30
Location: Las Vegas, USA
SIGN UP NOW

OVERVIEW

Big data and security intelligence are the two hot topics in security for 2013. We are collecting more and more information, not just from the infrastructure but increasingly also directly from our applications. This vast amount of data gets harder and harder to understand. Terms like MapReduce, Hadoop, and MongoDB are part of many discussions. But what are those technologies? And what do they have to do with security intelligence? We will see that none of these technologies is sufficient in our quest to defend our networks and information. Data visualization is the only approach that scales to the ever-changing threat landscape and infrastructure configurations. Using big data visualization techniques, you can gain a far deeper understanding of what's happening on your network right now. You can uncover hidden patterns in the data, identify emerging vulnerabilities and attacks, and respond decisively with countermeasures that are far more likely to succeed than conventional methods. Attendees will learn about log analysis, big data, information visualization, and data sources for IT security, and they will learn how to generate visual representations of IT data. The training is filled with hands-on exercises utilizing the DAVIX live CD.

SYLLABUS

Log Analysis

  • Data sources
  • Data Analysis and Visualization Linux (DAVIX)
  • Log data processing

Log Management and SIEM

  • Log management and SIEM overview
  • Application logging guidelines
  • Logging as a service
  • Big data technologies

Visualization

  • Information visualization history
  • Visualization theory
  • Data visualization tools and libraries
  • Visualization resources

Security Visualization

  • Perimeter threat use-cases
  • Network flow data
  • Firewall data
  • IDS/IPS data
  • Proxy data
  • User activity
  • Host-based data analysis


TRAINER

Raffael Marty is one of the world's most recognized authorities on security data analytics. The author of Applied Security Visualization and creator of the open source DAVIX analytics platform, Raffy is the founder and CEO of PixlCloud, a next-generation data visualization application for big data. With a track record at companies including IBM Research and ArcSight, Raffy is thoroughly familiar with established practices and emerging trends in data analytics. He has served as Chief Security Strategist with Splunk and was a co-founder of Loggly, a cloud-based log management solution. For more than 12 years, Raffy has helped Fortune 500 companies defend themselves against sophisticated adversaries and has trained organizations around the world in the art of data visualization for security. Practicing Zen has become an important part of Raffy's life.

SIGN UP

Evaluating Security Visualizations in Supporting Analytical Reasoning & Decision Making in Cybersecurity

In conjunction with the 2013 IEEE International Conference on Intelligence and Security Informatics (ISI), we present a special topics workshop on:

Evaluating Security Visualizations in Supporting Analytical Reasoning & Decision Making in Cybersecurity

Workshop Description
As the potential of visualization for cybersecurity analysis becomes ever more apparent, evaluating these visualizations becomes more important than ever to supporting the cybersecurity mission. As technology and big data continue to grow rapidly, so does the deployment of insufficiently evaluated cybersecurity visualizations that claim to align with how analysts think and perceive data. Before organizations can intelligently incorporate visualization into their cybersecurity analysis process, they must be prepared to pose tailored questions that relate directly to the particular objectives of the cyber analyst. This workshop addresses these gaps with the intent of bringing together experts from a variety of disciplines relevant to evaluating cybersecurity visualizations in their ability to support analytic reasoning and decision making in cybersecurity.

Paper Topics
We welcome paper submissions on the following or related topics:

Empowering the Human Analysts
Methods and techniques for evaluating the impact cybersecurity visualizations have on enabling the perceptual and cognitive processes required for intelligent decision making.

Addressing current deficiencies in cybersecurity analysis
Methods and techniques for measuring the impact cybersecurity visualization tools have on addressing deficiencies that still exist in cybersecurity analysis, such as exploration and prediction.

The Unique Nature of Cybersecurity Visualization
Identifying aspects that are specific to cybersecurity visualization, as well as relevant contributions from current research in the broader fields of information visualization and scientific visualization, and from visualizations in other domains.

Important Dates

Workshop papers due: March 31, 2013
Notices of acceptance and comments provided to authors: April 12, 2013
Camera ready paper submitted: April 29, 2013

Website: http://www.isiconference2013.org/pgs/workshop-on-cybersecurity-visualizations.php

Paper Submission:
Submission file formats are PDF and Microsoft Word. Required Word/LaTeX templates (IEEE two-column format) can be found on IEEE's Publications web pages. Submissions can be long (6,000 words, 6 pages max) or short (3,000 words, 3 pages max). Papers in English must be submitted by email to Lisa Coote at Lisa.Coote@innovative-analytics.com. The accepted workshop papers will be published by IEEE Press in formal proceedings. Authors who wish to present a poster and/or demo may submit a 1-page extended abstract, which, if selected, will appear in the conference proceedings.

Conference content will be submitted for inclusion in IEEE Xplore as well as other Abstracting and Indexing (A&I) databases. The selected IEEE ISI 2013 best papers will be invited for contribution to the Springer Security Informatics Journal.
Organizing Committee:

Kevin O'Connell, Innovative Analytics & Training
Lisa Coote, Innovative Analytics & Training

Program Committee:

Raffael Marty, PixlCloud
Tomas Budavari, Johns Hopkins University
Antonio Sanfilippo, Pacific Northwest National Laboratory
John T. Langton, VisiTrend LLC
Claudio Silva, NYU Polytechnic
Bernice Rogowitz, Visual Perspectives Consulting
Cullen Jackson, APTIMA
Enrico Bertini, NYU Polytechnic
John Goodall, Oak Ridge National Laboratory

VizSec 2013

VizSec 2013 will be held in Atlanta, Georgia on October 14, 2013 in conjunction with IEEE VIS. Paper submissions are due July 8, 2013 and poster abstracts are due August 23, 2013.

The 10th International Symposium on Visualization for Cyber Security (VizSec) is a forum that brings together researchers and practitioners from academia, government, and industry to address the needs of the cyber security community through new and insightful visualization and analysis techniques. VizSec will provide an excellent venue for fostering greater exchange and new collaborations on a broad range of security- and privacy-related topics. Accepted papers will appear in the ACM Digital Library as part of the ACM International Conference Proceedings Series.

Important research problems often lie at the intersection of disparate domains. Our focus is to explore effective, scalable visual interfaces for security domains, where visualization may provide a distinct benefit, including computer forensics, reverse engineering, insider threat detection, cryptography, privacy, preventing 'user assisted' attacks, compliance management, wireless security, secure coding, and penetration testing in addition to traditional network security. Human time and attention are precious resources. We are particularly interested in visualization and interaction techniques that effectively capture human analyst insights so that further processing may be handled by machines, freeing the analyst for other tasks. For example, a malware analyst might use a visualization system to analyze a new piece of malicious software and then facilitate generating a signature for future machine processing. When appropriate, research that incorporates multiple data sources, such as network packet captures, firewall rule sets and logs, DNS logs, web server logs, and/or intrusion detection system logs, is particularly desirable.

See http://www.vizsec.org/ for additional information.

Security Visualization Events

In December I'll be presenting on security intelligence and the interplay of visualization and data mining.

I wrote a blog post that introduces the Palo Alto talk a little bit. It's about Supercharging Visualization with Data Mining. Check it out, and make sure you RSVP for the event tomorrow.

Security Visualization Training in Dubai

There are a couple of seats open for next week's security visualization workshop in Dubai. The training is held Friday and Saturday, November 9th and 10th.

The topics range from data sources to log processing to a lot of eye-catching visualizations, plus a great module on big data. The signup link contains all the information you need.

Hope to see you in Dubai next week!

VizSec 2012 - Keynote

A week ago, VizSec 2012 took place in Seattle. I had the honor of presenting the keynote, which I used as an opportunity to talk about the state of the security visualization space. Here is the video of the talk.

This is a quick outline of the talk:

  • Security visualization - The most exciting field
  • The vision - This section talks about some of the challenges we have in security visualization and what I would like to see in a security visualization application. Well, some of what I would like to see; there are parts I left out that I hope to deliver through PixlCloud in the not-too-distant future.
  • Why is security visualization so hard? I talk about a few reasons why we have such a hard time visualizing security data. One of the issues is that we are different; security visualization is different from all the other fields out there. We have problems and data that no other area deals with. We have a lot of IP addresses, for example, or port numbers. If we try to work with other domain experts, for example from the data mining space, they don't understand our data well enough to build good algorithms. One very common problem is 'distance functions': they are incredibly hard to define, and because our data is mostly categorical rather than numerical, that presents a significant problem. I also see port numbers being treated as continuous variables, which is just plain wrong (a short illustration follows after this outline).
  • Security analysts - I provide a somewhat provocative view of security analysts. There is no defined way of analyzing security data, and therefore every analyst does his or her work differently. If we try to build a tool for any one of them, the next one might not be able to use it at all.
  • Visualizing big data - I offer a bit of an answer on how to visualize a large amount of data. It all comes back to Ben Shneiderman and his information-seeking mantra.
  • Data mining - I have been looking into data mining a lot lately. I am trying to define what the right interplay between data mining and visualization is. Neither discipline alone will solve our problems; together, however, they can unlock a lot of insights. But don't be fooled: data mining is very hard to get right.
  • Moving forward - I quickly outline what's going on out there. Visualization contests seem to be gaining popularity. I close with my challenge to everyone to solve the many problems that we still face. If you are a researcher, have a look at this slide and help us solve some of the problems.

The slides from the talk are below. Unfortunately, my video recording from the VizSec keynote failed. I was presenting at Microsoft the same week, however, and was able to record my talk there. Same slides.
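To illustrate the point above about port numbers: a numeric (continuous) treatment implies that nearby port numbers are similar, which is meaningless for ports. Here is a minimal, purely illustrative sketch of the difference (the functions and values are made up for illustration):

    # Illustration only: ports are labels (categorical), not quantities.
    def numeric_distance(a, b):
        # Treats ports as continuous values -- misleading for security data.
        return abs(a - b)

    def categorical_distance(a, b):
        # Treats ports as categories: either the same service or not.
        return 0 if a == b else 1

    print(numeric_distance(80, 81))       # 1: "close", yet port 81 has nothing to do with HTTP
    print(numeric_distance(80, 443))      # 363: "far", yet both are web traffic
    print(categorical_distance(80, 443))  # 1: simply different labels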

More SPAM

The past couple of months have been pretty free of spam in the SecViz feed, after I implemented a moderation queue for all content. This seems to work pretty well. However, the system doesn't let me enable a moderation queue for images in the image gallery. That's why you have seen a few spam images in the feed (for example this morning).
I took another step to prevent this: from now on, new user signups have to be approved. This seems to be the only way for me to prevent spam once and for all. I hope I'll be able to distinguish real users from spammers upon signup; I'll figure that one out.

Looking forward to seeing your posts here!

AfterGlow Cloud: Second release

AfterGlow Cloud has evolved into another release, with many improvements added to the initial version. With GSoC 2012 approaching its end, we've covered all the additional features we planned for the second phase of development, post mid-term. Building on the initial version, this post will run you through the general features and the additional improvements covered.

A live demo of this release can be found here: http://andromeda.ayrus.net:8080/

Data sources: In addition to the initial method of uploading an AfterGlow-compatible CSV file, the application now supports two new ways of visualizing your data. You can now upload your logs straight from the source and have them parsed (to a CSV file) and rendered on the fly. Additionally, AfterGlow Cloud is now integrated with Loggly.com's API. Loggly is a service used to collect log data for monitoring and analysis. With an account at Loggly, you can search and import your logs straight from Loggly and have them visualized. Your authorization for the application to access your Loggly account remains on our end for about a hundred days, beyond which you'll have to re-authenticate. You can, however, revoke access to the application at any time. Both of these new additions require you to specify a parsing scheme, which is covered below.

Log parsing: Logs you upload directly or from your Loggly account have to be parsed before they can be visualized. For this to happen, a regular expression has to be provided which extracts two or three columns of data from each line in your log (if you're using two columns, you'll have to check "Two Node Mode"). You can either specify a custom expression or select one of the 'predefined' expressions. When providing a custom regular expression to parse your log, you're given an opt-in choice to save your expression as 'predefined' for other users to use.
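As a rough sketch of what such a parsing scheme does behind the scenes (the pattern, field choice, and sample lines below are made up for illustration; the application performs the equivalent step server-side):

    import csv
    import re
    import sys

    # Hypothetical two-column scheme: capture group 1 becomes the source node,
    # capture group 2 becomes the target node in the resulting AfterGlow CSV.
    pattern = re.compile(r'^(\d+\.\d+\.\d+\.\d+)\s+\S+\s+(\d+)')

    writer = csv.writer(sys.stdout)
    for line in ["10.0.0.5 GET 2326", "10.0.0.9 GET 512"]:
        match = pattern.match(line)
        if match:
            writer.writerow(match.groups())  # e.g. 10.0.0.5,2326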

Settings tab: These define general settings for the way you want your final graph to be rendered. For example, "Print Node Count" prints, next to each node's label, the frequency at which that node occurs in the uploaded data, while "Text Label Colour" lets you choose the colour of the text on each node. Every field in the application has a "?" help link appended to it; hovering over this link shows a small tip explaining what the field does.

Advanced settings tab: These settings go a little beyond the general settings. For example, "Source fan out threshold" sets a lower limit on the number of edges originating from each source node. If a source node doesn't have the required (threshold) number of edges originating from it, it is omitted from the graph. The same idea applies to "Omit threshold for each node", except that there the threshold applies to the frequency at which each node occurs throughout the data.

Configurations tab: These settings fine-tune your graph and often bring out interesting patterns, useful for visualization. Each fieldset in this tab provides a way to identify properties across the nodes in the graph. For example, the 'port' option under clustering provides a way to cluster all the nodes above a specific bound for the port they represent: giving a value of 2000 would mark and cluster (group) together all the nodes representing a port higher than 2000. 'Number of occurrences' in the Size fieldset makes the size of each node proportional to the frequency at which it appears throughout the log; a node with a high frequency would appear large, and vice versa. This helps you notice interesting patterns. If you're very familiar with the way configurations work in AfterGlow, you also have the option of specifying them manually using the "Manual" option: you can simply cut and paste from a configuration file, or write your configuration by hand in a textbox. AfterGlow Cloud also saves your configuration every time you render a graph. When you choose to render a graph again, you can simply use your 'last used configuration' and it will import the same configuration you used last time. Alternatively, you can import the last used configuration into manual mode and fine-tune it further by hand.
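For those curious what such a hand-written configuration might look like, here is a minimal sketch assuming AfterGlow's standard property syntax ($fields[1] refers to the second CSV column in this example; exact variable names may differ slightly between AfterGlow versions):

    # Cluster (group) all target nodes representing a port higher than 2000
    cluster.target=">2000" if ($fields[1]>2000)

    # Size nodes by the number of times they occur in the data
    maxnodesize=1
    size.source=$sourceCount{$sourceName}
    size.target=$targetCount{$targetName}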

Rendering engines: The application now supports dot and sfdp in addition to neato from GraphViz. More information about these, if you're unsure, can be found here.
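Roughly speaking, this is what the application does on the command line behind the scenes (a sketch using the standalone afterglow.pl; file names are placeholders, and -t enables two-node mode):

    # afterglow.pl turns the CSV into a DOT graph description;
    # any GraphViz layout engine (neato, dot, sfdp) can then render it.
    cat data.csv | perl afterglow.pl -t -c color.properties | neato -Tpng -o graph.png
    cat data.csv | perl afterglow.pl -t -c color.properties | sfdp -Tpng -o graph.png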

Gallery: You can submit the graphs you render with the application to a public gallery (along with some details about them) for other users to view.

As an example (a very rudimentary one, actually), here's how you might parse a typical Apache log. For this example we'll use a very small portion of the log (attached), parse the client IP and the request size out of it, and try to render it.

We first point to the demo access log file from Apache, and since we're uploading a log directly from the source, we have to select the "Log" option to have it parsed to a CSV (compatible with AfterGlow) and then rendered. We've also checked the "Two node mode" box, since we'll only be extracting two columns (IP/size) from our data. For the parser, we use a predefined regular expression that extracts the client IP and request size from an Apache log in the Common Log Format:
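The exact predefined expression in the application may differ, but for the Common Log Format (host ident authuser [date] "request" status bytes) it would look roughly like this, with the two capture groups becoming the two CSV columns (client IP and response size):

    ^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+)

Lines whose size field is "-" (no bytes returned) would simply not match such an expression and would be skipped.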

On the settings end, for some eye candy, we define an edge length of 1.5 (the length of an edge between two nodes) and set the text label colour to white:

Finally, on the configurations end, we add three colour configuration settings. All source nodes (client IPs) will be coloured with a shade of green. Target nodes (size nodes with a value of more than 2000, which in this context means more than 2000 bytes) will be coloured red. All other target nodes will be coloured with a shade of orange. It's important to note here that configurations are read line by line, hence the line ordering matters. This type of configuration shows a really simple relation between how many bytes each client requested in each request, but it specifically marks requests of more than 2000 bytes in red (say you wanted to visualize the 'heavy' requests):
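In AfterGlow's property syntax, the equivalent hand-written configuration would look roughly like this (a sketch; rules are evaluated in order and the first match wins, which is why the catch-all orange line comes last):

    color.source="green"
    color.target="red" if ($fields[1]>2000)
    color.target="orange"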

The resulting graph from these settings looks like this:

From the (really simple) example above, you can see some of the 'heavy' requests (we're classifying 'heavy' as more than 2000 bytes for the sake of the example) coming from different clients.

From the development perspective, AfterGlow Cloud can now be deployed to a production-like environment. The application (and the demo above) is deployed on Apache using mod_wsgi, and the README covers this setup. If you wish to run your own instance of the application, you can clone the source from the repository. A detailed README (pertaining to a machine running Ubuntu) is also available to help you set it up. The README walks through the complete steps required to set up the environment and the application from scratch (to the point that you can get it running on a fresh Ubuntu install without a hassle). The codebase has also been documented in detail, should you wish to fork and play with it.
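For orientation, a mod_wsgi deployment of this kind typically boils down to an Apache virtual host along these lines (a purely hypothetical sketch with made-up paths and server name; the project's README has the authoritative, Ubuntu-specific steps):

    # Hypothetical Apache 2.4 virtual host for a mod_wsgi deployment.
    <VirtualHost *:80>
        ServerName afterglow.example.com
        WSGIScriptAlias / /srv/afterglow-cloud/wsgi.py
        <Directory /srv/afterglow-cloud>
            Require all granted
        </Directory>
    </VirtualHost>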

This release marks the next version of AfterGlow Cloud. Please report any bugs or leave any comments you have using the contact form on the demo :)