2
Evolution of the threat
“Cyber” has been co-opted by intelligence services and organized crime – it’s no longer just spambots (see the arrests of mules, etc. in Europe).
The low-level stuff hasn’t gone down – see Conficker: 6.4 million computer systems, 230 countries, 18 million+ CPUs – but we need to focus on the threats that matter.
Objectives are becoming much more abstracted from the initial capability. True for both foreign intelligence & organized crime. Players: authors, commanders, developers, etc.
The good news is that for these threats, the objectives – and organizations – are large, complex, and ripe for intelligence targeting.
Problem statement is: *
3
Intelligence spectrum
The spectrum: Blacklists – Net Recon – C2 – Developer fingerprints – TTP – Social nets – “DIGINT” – Physical / HUMINT, running from Nearly Useless (the MD5 checksum of a single malware sample) to Nearly Impossible (the SSN & missile coordinates of the attacker).
Blacklists and A/V are no longer valuable. They still need to be maintained, but they are not effective against new threats. Push visibility to the right and we have a chance again. Obviously we’d like to always have 100% attribution, but if we can’t get that, there is a sweet spot for analysis.
If Stuxnet is any indication, these threats are not going to be command-and-control based: they’re autonomous. They go out, execute, do whatever their job is, then get the data out. There’s no C2 or network infrastructure to monitor. It’s more intelligence gathering: what will it be coming to do, where will it be going – understanding “threats” at a much higher level than C2. Processing large amounts of intelligence & cyber data. Track threat evolution.
4
The World Health Organization
Responder metaphor: one doctor vs. the World Health Organization.
We’re still in the “one doctor” metaphor – one person looking at symptoms and trying to find the root cause of a single incident. We need to look for root causes across incidents; that is intelligence.
SOC workflows are top-down *and* bottom-up – there’s a necessity for both styles of analysis in cyber. The doctor feeds the strategic view.
5
Intelligence spectrum
The spectrum again: Blacklists – Net Recon – C2 – Developer fingerprints – TTP – Social nets – “DIGINT” – Physical / HUMINT (Nearly Useless to Nearly Impossible).
Working across it takes different kinds of analysis, knowledge management, and a collaborative environment – going from the strategic to the tactical and back again.
There’s been an over-focus on network traffic because it’s easy to collect and easy to analyze – dozens of companies in the area have their solution, IDSes, etc. – but because network traffic is so spoofable, it’s become valueless.
What are the challenges to working with the whole spectrum of data? There’s too much of it – where do you start? We don’t have a data shortage. How do you structure analysis? How do you connect across INTs? How do you generate understanding?
So where do you start? We start with developer fingerprints.
6
Intelligence spectrum
Developer fingerprints – Blacklists – Net Recon – C2 – TTP – Social nets – “DIGINT” – Physical / HUMINT (Nearly Useless to Nearly Impossible).
7
What are fingerprints? The OS loader view: even if the malware looks different on disk, TMC can tell whether it’s related once the malware runs.
What are developer fingerprints? “Forensic toolmarks” – code idioms, commonality, frequency of library or algorithm function use, ordering, environmental variables…
The frequency & commonality of those markers within malware sets, compared from one specimen to another, can bring correlations to the fore. By pulling these out of running malware samples, we can get past the failures of A/V and whitelisting.
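A minimal sketch of that correlation step, assuming toolmarks have already been extracted into per-sample feature sets (the sample names and marks below are hypothetical, not from the deck):

```python
# Hedged sketch: compare malware samples by the overlap of their extracted
# developer toolmarks (code idioms, library/algorithm usage, PDB paths, ...).
# The extraction step itself is out of scope; these sets are illustrative only.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two toolmark sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

samples = {
    "sample_A": {"pdb:e:\\gh0st\\server", "api_order:CreateService->StartService", "xor_loop_idiom"},
    "sample_B": {"pdb:e:\\gh0st\\server", "api_order:CreateService->StartService", "rc4_keyschedule"},
    "sample_C": {"upx_stub", "http_beacon_5min"},
}

# Pairwise comparison: high overlap suggests a shared developer or code base,
# even when on-disk hashes and packers differ.
names = sorted(samples)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(f"{x} vs {y}: {jaccard(samples[x], samples[y]):.2f}")
```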
8
Blacklists vs. attribution-derived toolmarks: Algorithms, Hooks, Protocol, Install.
Developer toolmarks (Algorithms, Hooks, Protocol, Install) on one side; signatures and intrusion detection (DNS names, IPs, Checksums) on the other.
The higher-level toolmarks are much more difficult to modify. Attackers change IPs – most of them focus on those low-level indicators because that is what gets them blocked. In fact, signatures right now can look for IPs and checksums, and IDS can look at protocols and DNS. But the toolmarks – hooks, install processes, and algorithms – are much more difficult to change: that requires expensive developers and a period of months or years to build a new solution. E.g., a good encryption algorithm – folks are not going to redevelop that for another piece of malware.
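To make that cost-to-change idea concrete, here is a hedged sketch that weights shared indicators by how durable they are; the categories and weight values are illustrative assumptions, not figures from the deck:

```python
# Illustrative weights: easy-to-rotate indicators score low, expensive-to-change
# developer toolmarks score high. These numbers are assumptions for the sketch only.
DURABILITY = {
    "ip": 1, "checksum": 1, "dns": 2, "protocol": 4,
    "install": 5, "hook": 6, "algorithm": 8,
}

def durable_overlap(a: dict, b: dict) -> int:
    """Sum durability weights over indicator values shared by both samples.

    `a` and `b` map an indicator category to the set of observed values."""
    score = 0
    for category, weight in DURABILITY.items():
        shared = a.get(category, set()) & b.get(category, set())
        score += weight * len(shared)
    return score

sample_a = {"ip": {"203.0.113.7"}, "algorithm": {"custom_rc4_variant"}, "hook": {"SSDT_NtQueryDirectoryFile"}}
sample_b = {"ip": {"198.51.100.9"}, "algorithm": {"custom_rc4_variant"}, "hook": {"SSDT_NtQueryDirectoryFile"}}

# The IPs differ (cheap to rotate), but the shared algorithm and hook dominate.
print(durable_overlap(sample_a, sample_b))  # -> 14
```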
9
Remember gh0stnet? Just to give you one example of how this might look in practice: this is what gh0stnet looks like to the person running it.
10
Toolmark view
Rootkit: e:\gh0st\server\sys\i386\RESSDT.pdb
e:\job\gh0st\Release\Loader.pdb
GUI (MFC): Cgh0stView, Cgh0stDoc – e:\job\gh0st\Release\gh0st.pdb
C:\gh0st3.6_src\HACKER\i386\HACKE.pdb
\gh0st3.6_src\Server\sys\i386\CHENQI.pdb
Development paths… but we see it like this. This is the “Digital DNA” concept, drawing from lots of different sources. It is one of the ways we were able to associate variants of malware: they all had specific markers inside the code, such as development paths. References to gh0st were found throughout all of the malware specimens we collected as part of that investigation. This is a pretty obvious example; there are more complicated and subtle examples as well.
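A hedged sketch of how development-path toolmarks like the ones above can be pulled out of a pile of samples – a simplified, illustrative extractor that scans raw bytes for embedded PDB debug paths (not the TMC’s actual implementation; the sample directory layout is assumed):

```python
import re
from collections import Counter
from pathlib import Path

# PDB debug paths are often left behind in compiled binaries and make cheap
# toolmarks. This regex is a simplification; a real extractor would parse the
# PE debug directory instead of scanning raw bytes.
PDB_RE = re.compile(rb"[A-Za-z]:\\[\x20-\x7E]{2,200}?\.pdb", re.IGNORECASE)

def pdb_paths(path: Path) -> set:
    """Return the set of PDB path strings embedded in one file."""
    return set(PDB_RE.findall(path.read_bytes()))

def shared_path_report(sample_dir: str) -> Counter:
    """Count how many samples in a directory contain each PDB path."""
    counts = Counter()
    for p in Path(sample_dir).glob("*.bin"):   # hypothetical sample layout
        for mark in pdb_paths(p):
            counts[mark] += 1
    return counts

if __name__ == "__main__":
    # Paths recurring across many "different" binaries (e.g. anything containing
    # b"gh0st") are exactly the kind of correlation the slide points at.
    for mark, n in shared_path_report("./samples").most_common(20):
        print(n, mark.decode(errors="replace"))
```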
11
Integrated workflow: having a visual map of threats.
Integrated with the complete incident-handling process – the strategic threat process (ArcSight, Centaur…). You need context to understand the threat (vs. the incident). Usable by folks less technical than malware analysts. Can help develop a response. Bridge to demo.
Right now the process is: the malware shop develops and writes textual reports on threats. But that does nothing for continuity of knowledge about the threats themselves across an organization. Not everyone is a high-priced malware analyst. Using an integrated framework we can connect to the front-line incident handlers. Incident handlers aren’t going to want access to the TMC itself; they just need to put their incident in the context of a larger threat.
12
Incident responders – bottom up: start with one sample (see the sketch after this list).
Bring in TMC reports.
Tag one thing of interest – this adds to the global knowledge base.
Search around to find related objects.
Validate correlations.
Assign to a threat grouping.
Share the graph?
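A minimal, illustrative sketch of that bottom-up loop – tag a toolmark on one sample, then “search around” a shared knowledge base for other samples carrying the same mark. The in-memory store and all names here are stand-ins, not the actual TMC interface:

```python
from collections import defaultdict

# Stand-in for the shared knowledge base: toolmark -> samples known to carry it.
knowledge_base = defaultdict(set)

def tag(sample: str, toolmark: str) -> None:
    """An incident responder tags something of interest on one sample."""
    knowledge_base[toolmark].add(sample)

def search_around(sample: str) -> dict:
    """Find other samples that share any toolmark with this one."""
    related = {}
    for mark, members in knowledge_base.items():
        if sample in members and len(members) > 1:
            related[mark] = members - {sample}
    return related

# Bottom-up flow: start with one incident's sample, tag what the TMC report
# shows, then see what the rest of the organization has already connected.
tag("incident_42_sample", "pdb:e:\\gh0st\\server")
tag("older_case_sample", "pdb:e:\\gh0st\\server")
tag("incident_42_sample", "hook:SSDT")

for mark, others in search_around("incident_42_sample").items():
    print(f"{mark} also seen in: {', '.join(sorted(others))}")
```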
13
Threat analysts: start small; in miniature – differences; build clusters (a clustering sketch follows).
Explain large vs. small threats. Clip some social-network data live! … then switch to Object Explorer to show high-level analysis.
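As a hedged illustration of the “build clusters” step – not the deck’s actual tooling – samples can be grouped by toolmark similarity with off-the-shelf hierarchical clustering; the toolmark sets and the 0.7 cut threshold are made up for the example:

```python
from itertools import combinations

from scipy.cluster.hierarchy import fcluster, linkage

# Toy toolmark sets per sample; in practice these come from dynamic analysis.
samples = {
    "a": {"gh0st_pdb", "ssdt_hook", "rc4_variant"},
    "b": {"gh0st_pdb", "ssdt_hook", "upx"},
    "c": {"irc_beacon", "upx"},
    "d": {"irc_beacon", "keylog_idiom"},
}
names = sorted(samples)

def jaccard_distance(x: set, y: set) -> float:
    """1 - Jaccard similarity: 0 means identical toolmarks, 1 means disjoint."""
    return 1.0 - len(x & y) / len(x | y)

# Condensed pairwise distance vector in the order combinations() yields,
# which is the layout scipy's linkage() expects.
dists = [jaccard_distance(samples[i], samples[j]) for i, j in combinations(names, 2)]

tree = linkage(dists, method="average")
labels = fcluster(tree, t=0.7, criterion="distance")

for name, label in zip(names, labels):
    print(name, "-> cluster", label)
```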
14
Threat analysts – Object Explorer:
Thousands to hundreds of thousands of objects, differentiated in time and in properties. This feeds back into the bottom-up analysis.
15
Fusion = Attribution. A fusion methodology integrating data from:
Network-based technical collection
Toolmarks
Open-source data
Key informant interviews
Field investigations
Hacker forums
Visual analysis
Collective memory of prior attacks
It still requires skilled analysts – but it’s not magic.
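A hedged sketch of what that fusion can mean mechanically: evidence items from different collection sources are attached to one threat entity, so attribution rests on the weight of corroborating sources rather than on any single indicator. The source names, entity name, and scoring rule are illustrative assumptions, not the deck’s method:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str      # e.g. "toolmarks", "hacker_forums", "field_investigation"
    detail: str

@dataclass
class ThreatEntity:
    name: str
    evidence: list = field(default_factory=list)

    def add(self, source: str, detail: str) -> None:
        self.evidence.append(Evidence(source, detail))

    def corroboration(self) -> int:
        """Count distinct collection sources pointing at this entity."""
        return len({e.source for e in self.evidence})

threat = ThreatEntity("gh0st operator cluster")
threat.add("toolmarks", "shared gh0st development paths across specimens")
threat.add("network_collection", "overlapping staging infrastructure")
threat.add("hacker_forums", "handle reusing the same code snippets")

# More independent sources agreeing -> stronger, though still analyst-judged, attribution.
print(threat.corroboration(), "independent sources")
```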