Before I dive into the meat of this post, let me offer some friendly advice. I'm sitting at the food court adjoining the south conference center here at the Mandalay Bay, and I just saw an attendee plug his device directly into one of those convenient USB sockets provided for charging your gadgets. I guess it's probably safe, all things considered, but on principle, please don't do this. Not here. Not now.
Where was I? At the food court. Speaking of food, Dan Geer's keynote Wednesday morning was chock-full of food for thought. He offered nothing less than a set of proposals intended to radically reset the balance between safety and order on the Internet, noting that its very nature prevents us from having much of both. I want you to go watch his speech for yourself, but I will provide a quick list of the ten areas he discussed. Do watch the speech, as my gross oversimplifications will fail to do justice to Geer's ideas.
1) Enforce CDC-style mandatory reporting rules for cybersecurity failures exceeding a (yet-to-be-negotiated) severity threshold.
2) Net neutrality: Geer contended that if an ISP makes itself privy to the content, source, or destination of the traffic flowing over its network, it necessarily makes itself responsible (and liable) for what it learns about this traffic. Or ISPs can be common (i.e., neutral) carriers. But not, he claimed, both.
Welcome to Black Hat 2014. Are you ready? Ready for the sessions? Ready for the parties? Ready for the unspeakable, stifling, death-from-above heat? Ready for the late-night alcohol-fueled conversations about what’s next in our industry?
I am. I’m ready. Well, I’m almost ready. It’s been a while since I’ve seen a wave of security community predictions, so let’s get a fresh one going. And I’ll back it up: a $1,000 Best Buy gift card goes to the author of one randomly selected prediction.
Here’s how to enter:
Follow @FidSecSys and tweet your predictions for the next twelve months using the #SecurityFuturist hashtag by 5:00 p.m. PT on August 4, 2014. Don’t have a Twitter account? That’s weird. I guess you could leave a comment below with your name and contact info.
Your tweet may be an authentic prediction, and it may be snark, because, well, this is the Internet. If you like, aim for a perfect exemplar of Poe’s Law. Feel free to submit as many predictions as you want but each one has to be unique.
We’ll randomly select one of the predictions at 5:00 p.m. PT on August 4 and announce the lucky winner on Twitter at 9:00 a.m. PT on August 5.
We’ll have the prize ready at our Black Hat booth (#347) but if you’re not attending this year we can ship it out to you following the show.
Here’s my entry (regrettably I’m not eligible):
My information security technology gizmo will fix your #APT problems immediately, completely, and forever. #SecurityFuturist
That’s it. Travel safe to Vegas, be safe while you’re there, and come visit our booth to say hello, chat about the threat landscape, steal a t-shirt and maybe offer your predictions in person.
In part 1 of this series I discussed the use of digital forensics as a legitimate science for expert witnesses in court. In this installment I will introduce some research to help drive positive change.
I believe using scientific methods, which the Supreme Court of the United States (SCOTUS) held to be the minimum requirement in its Daubert decision, can lead to statistically grounded, fairer expert answers rather than the typical three conclusions of "Yes," "No," or "That's just how computers work." Introducing the scientific method and statistically based expert answers leaves room for a weight between a probability of 0 and 1. A judge or lay jury can use the weight from that statistical outcome to reach a more informed decision when comparing it to the weights from other tests in the case. It is well known that other forensic sciences, such as fingerprinting, drug testing, forensic pathology, and DNA testing, use probabilities, so it would not be a great leap to apply them to our newer science. Furthermore, in a number of instances those methodologies even use a rate of error, which is discussed in the Daubert decision. I have yet to personally see anyone introduce numerical statistics on the record for digital forensics, let alone a rate of error. That needs to change.
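To make the idea of a weight between 0 and 1 concrete, here is a minimal sketch of how independent pieces of forensic evidence could be combined via Bayes' rule in odds form to produce a single posterior probability a judge or jury could weigh. This is purely illustrative and not from the article: the prior and the likelihood ratios are invented numbers, and real forensic likelihood ratios would have to come from validated studies with known error rates.

```python
def posterior_probability(prior, likelihood_ratios):
    """Bayes' rule in odds form: posterior odds = prior odds * LR1 * LR2 * ...

    Each likelihood ratio is P(observation | hypothesis) divided by
    P(observation | alternative), assuming the observations are independent.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical case: a neutral 0.5 prior, then two test results that are
# 4x and 2.5x more likely under the hypothesis than under the alternative.
p = posterior_probability(prior=0.5, likelihood_ratios=[4.0, 2.5])
print(round(p, 3))  # 0.909
```

The point of the odds form is that each test contributes an explicit, reviewable multiplier instead of a bare "yes" or "no," which is exactly the kind of weight Daubert-style scrutiny can interrogate.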
This example is a very common question clients ask me on nearly every project, especially in data loss prevention scenarios, and it desperately needs to be researched using scientific methods. I wanted to know if I could detect whether computer media was wiped. I would also like to know, if possible, what program was used to wipe the data. Imagine a situation where an internal employee "went bad," copied information to a USB memory stick, and walked out with the company's crown jewels. The data could be worth billions of dollars, and the victim company would most likely litigate. If the same USB memory stick were turned over for examination, would it be possible to detect data wiping? You cannot see what programs were installed on the computer(s) the memory stick was plugged into (because the computer(s) were not produced); therefore you do not know whether a wiping program was present. There are many wiping tools available on the Internet, and it would be nearly impossible to download every one and write a signature for each. That is akin to writing static signatures for every piece of malware, which is a nightmare.
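One way the wiping question could be framed statistically rather than with per-tool signatures is to measure sector-level byte statistics across a raw image: wiping tools tend to leave long runs of constant bytes (zero- or pattern-fill) or uniformly random data, while a normally used file system produces a mix. This is a hypothetical sketch, not a method from the article; the `sector_stats` function and the entropy threshold are my own illustrative choices, and a defensible version would need the thresholds and error rates established experimentally.

```python
import math
from collections import Counter

SECTOR = 512  # bytes; a typical logical sector size

def sector_stats(data, sector_size=SECTOR):
    """Classify each sector of a raw image as constant-fill, high-entropy,
    or mixed, and return the fraction of sectors in each class."""
    constant = high_entropy = mixed = 0
    for off in range(0, len(data) - sector_size + 1, sector_size):
        sector = data[off:off + sector_size]
        counts = Counter(sector)
        if len(counts) == 1:            # every byte identical: pattern-fill
            constant += 1
            continue
        entropy = -sum((c / sector_size) * math.log2(c / sector_size)
                       for c in counts.values())
        if entropy > 7.2:               # hypothetical threshold: random-fill
            high_entropy += 1
        else:
            mixed += 1
    total = constant + high_entropy + mixed
    return {"constant": constant / total,
            "high_entropy": high_entropy / total,
            "mixed": mixed / total}

# A zero-filled region scores as entirely constant-fill:
print(sector_stats(b"\x00" * 4096))
# {'constant': 1.0, 'high_entropy': 0.0, 'mixed': 0.0}
```

The attraction of an approach like this is that its output is a measurable proportion, the kind of number to which a rate of error could eventually be attached, rather than a binary match against a signature for one specific wiping tool.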
It’s 4 AM, and the alarm reminds me that I’ve been awake for an hour or more after grabbing a couple hours of shuteye sometime after midnight. The realization of a 45-minute drive, with a stop at the office to pick up equipment, breaks my intense contemplation of what it was like to have regular hours. I meet my fellow investigator at the office and we head to the train station. Some eight hours prior, we learned that a company had been hit with a very nasty malware attack sometime in the previous few days. The story was familiar, and my compadre and I could relate the basic details without even hearing them.
A number of employees received a phishing email designed to look like benign financial communications. The emails contained attachments purporting to be financial documents and were designed to look like they arrived from an innocuous sender. The emails were sophisticated and convincing enough to pass a quick skeptical glance, and several employees opened the attachments. As it turned out, "executed" was probably a better description of what was done with the attachments. One of the employees was using a workstation vulnerable to the malware attachments, and the malware did its job, which resulted in a large number of important documents being rendered inaccessible. So the client decided to bring in outside consultants, which brought me to rolling out of the rack at 0400.
At 6:30 AM, the sun is finally up as the train rolls out of the station. My partner and I luck out and get a decent workspace at the front of a car. We decide to make the most of the ride by going over what we know about the incident. Luckily, my MiFi works relatively well on the train. The client’s description of the incident provides plenty of fodder for research. As we discuss the details, I have to keep the investigator in me at bay. My instinct is to drop into the client site and deep dive into technical details in an attempt to fully understand the malicious activity and associated malware. Instead, the more practical approach is to determine what questions to ask and prepare to listen to what the client describes, and perhaps more importantly, listen for the outcome the client desires. This preparation time is crucial as it is nigh on impossible for my team to anticipate every type of incident and have a checklist or process guide to follow in every case. This particular incident involved crimeware.