Sunday 11 May 2014

Some basic thoughts on the Target Data Breach

Hi All, I happened to be given a copy of "A 'Kill Chain' Analysis of the 2013 Target Data Breach" (a majority staff report prepared for Chairman Rockefeller, March 26, 2014), published by the United States Senate Commerce Committee. It can be found here.

Some basic notes:

  • Target was PCI-DSS compliant prior to the breach.
  • The initial breach was of a third-party vendor.
  • The attackers installed malware on Target's systems.
  • They were able to pivot around the network.
  • They were able to copy large amounts of data via FTP.

My basic thoughts on failings

The Target breach has been covered to death, from every angle. My thoughts here are those of someone not so experienced, thinking about how it could have been done differently.

A third-party vendor was breached first, giving the attackers their initial access to Target's network:

  • There is little Target could have done to stop attackers from using publicly available information to craft phishing attacks against their vendors. I am sure nothing could be legally forced, other than simply asking vendors to have a process for removing publicly available information that is not required. The article explains Target could have done this, but with so many vendors, how would that be enforced, and how could it be checked that it was carried out? Would contracts stating that such a thing had been done have been enough?

  • The article explains that Target could have enforced a security process on third parties, which I am sure PCI-DSS compliance requires; by that I mean the vendors could have been made more secure. How could one manage so many vendors to a baseline of security? Could they have forced each vendor to be PCI-DSS compliant?

  • Security awareness training is recommended, with the suggestion that it could have stopped the attack at this stage. My argument is that phishing attacks are becoming so advanced these days that there is often no way to tell whether an email is legitimate. If it is a work email demanding some action, with the implication that the employee will be in trouble otherwise, the burden falls on the employee to prove it is not legitimate. Training therefore has a limited impact.

  • It is said the vendor used the free version of Malwarebytes antivirus. I am wondering whether it would have been feasible to have a basic security checklist covering items such as software licensing, or evidence of using a DLP or IPS, for example. This could be used as an onboarding process for each vendor.

  • The article states that Target could have required two-factor authentication. This is a great idea. Again, it could be part of the onboarding process. I imagine this would have to be managed by the vendor, as Target managing it for every vendor would be death by overhead. Target could simply require two-factor authentication and leave the delivery method to the vendor. This would give vendors the flexibility to choose SMS to a phone, an RSA token, or other methods.
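To make the two-factor idea concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme that authenticator apps and RSA-style tokens are built on, per RFC 6238. This is purely illustrative and not anything from the Target report:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset chosen by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The vendor's server would compare the code the user submits against `totp(shared_secret)` (allowing a step or two of clock drift), so Target only has to mandate the control, not operate it.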

Installed malware on Target systems:

  • Target staff failed to act on alerts generated by their security software. It is also stated that Target staff could have viewed these as false positives, possibly being overwhelmed with alerts. I read about this all the time: security software being misconfigured or not used to its full potential. There is no point spending budget on something and not learning how to use it. How is one to confirm whether their software is sending too many false positives? Is there a balance? A quality process of some sort should be put in place, following best practice for each product; many pieces of software have tuning guides.

  • The article states Target could have paid greater attention to industry updates on the RAM-scraping malware used in the breach. This is true, but there are so many updates and so many articles to read; how is one to keep up to date on the latest attacks? Perhaps there needs to be (as is always said) more sharing of attack statistics between companies, or a central Security Operations Centre for, say, the payment card industry. That may work: everyone puts in a lump sum of money each year to support the SOC, and the information learned is shared between all members. Or perhaps the way security alerts are delivered by government could be improved; I can imagine industry being bombarded with alerts.
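Going back to the alert-fatigue point above, one crude first pass is to split alerts into "review" and "tune the rule" buckets by how often each signature fires. The `signature` field and the threshold here are illustrative assumptions on my part, not anything from the report:

```python
from collections import Counter

def triage(alerts, noisy_threshold=50):
    """Split alerts into a 'review' list and a 'tune' list.

    Assumption: a signature firing more than `noisy_threshold` times in one
    batch is more likely a rule that needs tuning than `noisy_threshold`
    separate incidents worth individual human attention.
    """
    counts = Counter(a["signature"] for a in alerts)
    review, tune = [], []
    for a in alerts:
        (tune if counts[a["signature"]] > noisy_threshold else review).append(a)
    return review, tune
```

Anything in the `tune` bucket still gets looked at, just as a rule-quality problem rather than one incident at a time.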

Were able to pivot around the network:

  • The article advises that Target could have used whitelisting, a technique where only approved processes are allowed to run on a machine. In theory this is great, but if I were an attacker I would assume any new executable dropped on the file system would be blocked, so I would attempt to use a currently running process, or at the very least migrate into one to look around. This can be done very easily with the Metasploit Framework's Meterpreter `migrate` command, which allows easy process migration (though of course you must be on the box in the first place). As such, whitelisting does not work as well as it sounds. It is a great idea; just be aware of its downsides. The Defensive Security Podcast gives a great explanation here.

  • I do not think prevention is the most important control at this stage; I believe detection should be the main focus. If someone is on your network, there is a plethora of opportunities for them to escalate privileges or pivot; a simple default password missed in audits or an old firewall rule left in place could be the culprit. Having something in place to detect such attempts, or unusual behaviour generally, would be hugely valuable.
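To make the whitelisting idea above concrete, a minimal hash-based allowlist check might look like the sketch below: a binary may run only if its SHA-256 digest is on an approved list. The list contents are placeholders, and, as discussed above, this does nothing against an attacker who migrates into an already-running approved process:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests of approved binaries.
# (This example entry is the digest of an empty file, used as a placeholder.)
APPROVED = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_approved(binary_path):
    """Return True only if the binary's SHA-256 digest is on the allowlist."""
    digest = hashlib.sha256(Path(binary_path).read_bytes()).hexdigest()
    return digest in APPROVED
```

A real product enforces this in the kernel at process creation, but the decision logic is the same: default-deny by hash, which is exactly why attackers prefer living inside processes that are already approved.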

Were able to copy large amounts of data via FTP:

  • I did read once that someone claimed the attackers copied 70 GB via FTP. Whether that is true or not I do not know, but it is a large amount of data and should have been noticeable. Anyhow, the article says that one of the servers the data was transferred to, over plain-text FTP, was located in Russia. It also claims this could have been prevented by whitelisting FTP servers. This is a great idea, but what about an attacker spoofing an allowed FTP server? I would have thought the attackers would try that. If whitelisting alone is not strong enough because they can pretend to be an allowed FTP server, a limit on the amount of data that can be sent over a given period could work, or a warning sent to the security team that a high amount of data had been transferred in x amount of time, flagged as abnormal behaviour.
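That volume-over-time idea could be sketched as a simple sliding-window egress monitor. The byte budget and window size below are illustrative assumptions, not tuned values from the report:

```python
import time
from collections import defaultdict, deque

class EgressMonitor:
    """Flag hosts that send more than a byte budget within a sliding window.

    Hypothetical sketch: in practice the records would come from flow logs
    (e.g. NetFlow) rather than being fed in by hand.
    """

    def __init__(self, limit_bytes, window_seconds):
        self.limit = limit_bytes
        self.window = window_seconds
        self.events = defaultdict(deque)  # host -> deque of (timestamp, bytes)

    def record(self, host, nbytes, now=None):
        """Record a transfer; return True if the host now exceeds its budget."""
        now = time.time() if now is None else now
        q = self.events[host]
        q.append((now, nbytes))
        # Drop events that have aged out of the sliding window.
        while q and q[0][0] < now - self.window:
            q.popleft()
        return sum(n for _, n in q) > self.limit
```

Even a crude monitor like this would turn "70 GB left the building" from something discovered after the fact into an alert while the transfer is still running.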

Hopefully that is a good read.

Just some food for thought as I learn to blog about current issues and trends.

Cheers,

Haydn
