Five Theses on Security Protocols

Inspired by recent discussion, these are my theses, which I hereby nail upon the virtual church door:

1 If you can do an online check for the validity of a key, there is no need for a long-lived signed certificate, since you could simply ask a database in real time whether the holder of the key is authorized to perform some action. The signed certificate is completely superfluous.

If you can’t do an online check, you have no practical form of revocation, so a long-lived signed certificate is unacceptable anyway.
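
To make thesis 1 concrete, here is a minimal sketch (my own illustration, not part of Perry's original post) of such an online check; the service endpoint and field names are hypothetical:

    import json
    import urllib.parse
    import urllib.request

    AUTHZ_URL = "https://authz.example.org/check"   # hypothetical authorization service

    def is_authorized(key_fingerprint, action):
        # Ask the database at the moment of use whether the holder of this key
        # may perform this action.  No long-lived certificate is consulted;
        # "revocation" is simply a row update in the database.
        query = urllib.parse.urlencode({"key": key_fingerprint, "action": action})
        with urllib.request.urlopen(AUTHZ_URL + "?" + query) as response:
            return json.load(response).get("authorized", False)

If such a query is possible at the time of the action, the answer is always current, which is exactly why the signed certificate adds nothing.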

2 A third party attestation, e.g. any certificate issued by any modern CA, is worth exactly as much as the maximum liability of the third party for mistakes. If the third party has no liability for mistakes, the certification is worth exactly nothing. All commercial CAs disclaim all liability.

An organization needs to authenticate and authorize its own users; it cannot ask some other organization with no actual liability to perform this function on its behalf. A bank has to know its own customers, the customers have to know their own bank. A company needs to know on its own that someone is allowed to reboot a machine or access a database.

3 Any security system that demands that users be “educated”, i.e. which requires that users make complicated security decisions during the course of routine work, is doomed to fail.

For example, any system which requires that users actively make sure throughout a transaction that they are giving their credentials to the correct counterparty and not to a thief who could reuse them cannot be relied on.

A perfect system is one in which no user can perform an action that gives away their own credentials, and in which no action can be authorized on a user’s behalf without their participation and knowledge. No system can be perfect, but that is the ideal to be sought after.

4 As a partial corollary to 3, but one that needs stating on its own: If “false alarms” are routine, all alarms, including real ones, will be ignored. Any security system that produces warnings that need to be routinely ignored during the course of everyday work, and which can then be dismissed by a simple user action, has trained its users to be victims.

For example, the failure of a cryptographic authentication check should be rare, and should nearly always actually mean that something bad has happened, like an attempt to compromise security, and should never, ever, ever result in a user being told “oh, ignore that warning”, and should not even provide a simple UI that permits the warning to be ignored should someone advise the user to do so.

If a system produces too many false alarms to permit routine work to happen without an “ignore warning” button, the system is worthless anyway.
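
A minimal sketch of how I read thesis 4 in code (again my own illustration, with made-up names): a failed integrity check raises an error and stops the transaction, and there is deliberately no flag or button through which the failure can be waved away:

    import hashlib
    import hmac

    class AuthenticationFailure(Exception):
        # Raised when a message fails its integrity check; treat it as a real incident.
        pass

    def verify_message(key, message, received_mac):
        # key, message and received_mac are bytes.
        expected = hmac.new(key, message, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, received_mac):
            # Fail closed: no warning dialog, no "ignore" parameter, no override.
            raise AuthenticationFailure("message failed its MAC check")
        return message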

5 Also related to 3, but important in its own right: to quote Ian Grigg:

*** There should be one mode, and it should be secure. ***

There must not be a confusing combination of secure and insecure modes, requiring the user to actively pay attention to whether the system is secure, and to make constant active configuration choices to enforce security. There should be only one, secure mode.

The more knobs a system has, the less secure it is. It is trivial to design a system sufficiently complicated that even experts, let alone naive users, cannot figure out what the configuration means. The best systems should have virtually no knobs at all.

In the real world, bugs will be discovered in protocols, hash functions and crypto algorithms will be broken, etc., and it will be necessary to design protocols so that, subject to avoiding downgrade attacks, newer and more secure modes can and will be used as they are deployed to fix such problems. Even then, however, the user should not have to make a decision to use the newer, more secure mode; it should simply happen.
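
A sketch of that last point (mine, heavily simplified): both sides advertise the versions they implement, the newest common one is chosen automatically, and the choice never reaches the user:

    SUPPORTED_VERSIONS = (3, 2)   # newest first; version 1 has been retired outright

    def negotiate_version(peer_versions):
        # Pick the newest version both sides support; there is no
        # "use legacy mode" switch for the user to flip.
        for version in SUPPORTED_VERSIONS:
            if version in peer_versions:
                return version
        raise ConnectionError("no mutually supported protocol version")

For this to resist downgrade attacks, the negotiated value must itself be authenticated later in the handshake (signed or MACed along with the rest of the transcript); otherwise an attacker who rewrites the advertised version lists in transit can force the oldest surviving mode.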

Perry

Perry E. Metzger perry@

Posted on the cryptography mailing list on Jul 31, 2010

AthCon begins

“A beginning is the time for taking the most delicate care that the balances are correct” –Frank Herbert’s Dune.

AthCon begins today. Since it is the first AthCon, it really begins today. It is a non-product, non-vendor-biased conference aiming to present the best research and cutting-edge exploitation techniques from the field’s leading experts. I feel extremely privileged to have been invited to participate in the (first) PC of such an effort. However, due to the 24-hour strike of the public transportation workers and the law of unexpected consequences that always finds an opportunity to emerge, I will not be able to attend the event. I was really looking forward to watching:

  • “OWASP Top 10 – 2010: Towards a secure Software Development Lifecycle” by Konstantinos Papapanagiotou
  • “Context-Keyed Payload Encoding: Fighting the Next Generation of IDS” by Dimitrios Glynos and
  • “BNF (Backus-Naur Form) based blackbox fuzzing” by Chariton Karamitas

Maybe these kind souls will email me their presentations.

Good luck, AthCon, and have a nice journey. See you next year and every year!

Q: ITU botnet mitigation toolkit?

This was sitting in my drafts folder for quite some time. It seems that the ITU has (had?) an effort to create a botnet mitigation toolkit. As the web page says:

The first draft of the background material for the project was made available in December 2007 with pilot tests planned in a number of ITU Member States in 2008 and 2009.

It is 2010 now, so does anyone have any more information on the toolkit’s progress?

Q: What is an Incident?

Despite the toxicity that certain meetings carry, I’ve decided to try and make the most out of them. In a meeting that I attended the other day the question arose:

– What is an Incident?

So how does one define a security incident? The easy way out is “an incident is when I say it is”. Would you readily classify every policy violation as an incident? Do automated ssh scans count as incidents? Or do we care only about the interesting ones?

How do you decide that something qualifies as an incident?

Winning as a CISO

“Winning as a CISO” (Chief Information Security Officer) is the second book I have bought from the ISACA bookstore†. The book’s opening phrase is “If performing vulnerability assessments, configuring firewalls and performing network forensics makes you happy then becoming Chief Information Security Officer may not be the right career choice for you”. That may be true, and has been stated in other fields too‡, but it does not mean the book has nothing to offer security professionals who are not on the CISO career path.

In fact this is a book on understanding corporate management, and not only for security people, but for other techies too! What this book tries to put into the reader’s mind is the simple fact that anything you do in a sufficiently large (or bureaucratic) organization is a service that you sell inside the organization. For your service to sell, hard work alone is not enough; hard work is fine, but it can only get you so far. People who understand the organizational dynamics and politics are the ones who can both increase their budget and advance their careers. In his “Time Management for System Administrators” Tom Limoncelli mentions the martyr complex that many sysadmins seem to suffer from. The martyr complex is the result both of the lack of automation of routine stuff that devours our time unproductively, and of failing to communicate effectively what it is exactly that we do*. Well guess what: it is not their job to try and understand what we do; it is ours. And in the security arena it is even harder, because the “security guys” are the ones who block other people’s fun work for obscure reasons contained in dusty policy tomes.

“Are you the type of person that can stand up to superiors without being afraid of risking employment status? Will you stand up for an employee who acted with reason and responsibility but erred nonetheless?” This is a question that the author asks of anyone considering a CISO career path. Well, I have stood up to management (though not without personal loss), and my managers have stood up for me when I made errors. In fact one of them argues that “the only person that never errs is the one that never does any actual work”. This is the kind of management that wins your team’s heart (any team, not just the security team).

Now, I understand that the book belongs to a class that, as my friend XLA puts it, describes “an ideal corporation, in an ideal country where everyone eats ice-cream”, but nevertheless it is the mode of thinking that matters. Do not let the daily tactical stuff distract you from your target (your strategy, if you like). That, and the realization that although hard work pays, it pays better when you invest in marketing it. I cannot say that the book taught me anything I did not already know. But it is not always necessary for people to learn about such things from experience alone.


[†] – The first one being Nigrini‘s book on Benford’s Law.

[‡] – “As long as technology is your thing, plan to die reading manuals”

[*] – “You do a lot of work, but not many people understand the work you do” from the opening of the speech from the Estonian Ministry of Communications representative at RIPE-54.

The need for discipline

A major point of David Greer‘s talk at AIFS was the hyper-connectedness of people. Most computing professionals are already hyper-connected, and most connected people will be within less than five years. Hyper-connectedness here means that people use a lot of different devices to connect to the Internet, to their home computer and to their workplace, accessing resources and doing whatever they want to do remotely through these facilities. They have many interfaces to Cyberspace.

So now the attack vector expands: “you or your child uses your home computer to share information through social networks or email and through this process may infect the computer with a virus. You then could use this computer to “work from home” and indirectly infect a work related file or through network connections, infect your corporate workstation”. Interestingly (inspired by a friend who advocates “people get hacked and not machines”) I had blogged about such a possibility back in 2007.

@gkoutep has been telling me for quite some time now that we are to expect “single target” attacks. The need for discipline, for those of us who use different devices to connect to the networks we manage and/or the Internet, is more than pressing: shall we connect to our corporate network using a friend’s computer in case of an emergency? Although most systems now boot from USB drives (which avoids the possibility of an infected host system), what about our friend’s home network? Will “proper procedures” for exceptions be followed, or should one wait until sitting in front of a better-controlled terminal?

While in the “old days” we could temporarily relax some restrictions in favor of convenience, friendship (being friends with the BOFH could result in exceptions) or emergency, this is no more. (Digital) trust is not what it used to be (or what we believed we could get away with when bending the rules).

We live in a hyper-connected world aiming to facilitate everybody’s daily stuff, but will the need for discipline and caution lead system administrators (and other computing professionals) to start de-hyper-connecting?

report on AIFS

Thanks to a draw run by the Greek Chapter of the ISACA, I got to attend the 3rd Athens International Forum on SecurITy (AIFS). This was the first time in years that I used a benefit offered by the Greek Chapter, which makes me regret that I had not taken more advantage of my membership previously. I guess I’ll have to make some time and fix this though.

AIFS – day 1

Since I did not pay for the event, I tried hard to attend most of the presentations and keep some notes on the talks that I liked. David Greer gave an interesting presentation on Security Strategies, the various definitions of Cyberspace (depending on one’s point of view), and how, in the cyberspace battlefield, technology is an equalizer (in contrast to kinetic / traditional warfare), covering everything from the Internet to power grids to the automated and interconnected devices that find their way into our houses and then present possible leverage for an attacker to use.

Spiros Liolis gave a fantastic and provocative presentation on how we people in the IT security sector are chasing a “chimera”: (1) no one is really in charge, (2) there is policy confusion, (3) information classification is problematic, (4) people think that technology will solve everything, (5) BOFHs treat users like cattle, and how you treat people bites back, and (6) management is not really involved in the process. He also posed a very interesting question for those organizations that move to the “cloud”:

– Who audits the cloud?

Dr. Stefan Frei gave a presentation on the dynamics of (in)security, in which he analyzed over 30,000 vulnerabilities reported in the CVE, in a way that makes one think “Why didn’t I think of that?”. Well, as in all things, it does not matter who thought of what and when, but who actually did something. And Stefan Frei did (and promised to continue doing) an extraordinary job. For example, ten years ago the top-10 vendors were responsible for half of the vulnerabilities; today they are responsible for 20%. The insecurity gap can be more frightening: an exploit can be in the wild for 200 days (and more) prior to disclosure. I am looking forward to newer versions of this report to see how this trend evolves.

Lucas Cardholm gave a presentation on the cost-benefit analysis of information security. It was accompanied by a whitepaper which I suppose will show up on the Ernst & Young web site sometime. For the impatient, a similar presentation describing the same methodology, using high-school math, is available here [pdf].
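
Until the whitepaper appears, here is my back-of-the-envelope version of the usual arithmetic (the generic ALE/ROSI approach; I am not claiming this is Cardholm’s exact methodology, and all the figures are invented):

    single_loss_expectancy = 200_000   # estimated cost of one breach
    rate_before = 0.5                  # expected breaches per year without the control
    rate_after = 0.25                  # expected breaches per year with the control
    control_cost = 30_000              # yearly cost of the control

    ale_before = single_loss_expectancy * rate_before   # 100,000 per year
    ale_after = single_loss_expectancy * rate_after     #  50,000 per year
    rosi = (ale_before - ale_after - control_cost) / control_cost
    print("Return on the security investment: {:.0%}".format(rosi))   # about 67%

High-school math indeed, and that is the point: the hard part is agreeing on the estimates, not the formula.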

AIFS – day 2

George Simos talked about “All you need to know about ISO27001 in 30 minutes”, and he managed to do it in less. One can spend hours, even days, talking about ISO27001. In his 30-minute talk, however, he managed to get the message through: ISO27001 is not a technical document, it is a management document. It is about Information and therefore above and beyond IT. When implementing it, one should be careful to implement only the controls that are needed, and to have repeated assessments that produce comparable results.

George Raikos (from the ISACA Athens Chapter) gave a quick review of the newest set of standards from ISACA, RiskIT (which is based on COBIT). A variation of the presentation he gave is available here [ppt].

Joshua Leewarner gave a complete account of social engineering: why it works, the psychological factors it exploits, how it is carried out, and examples of audits he has performed that included social engineering, with and without the use of technology. From him I learned about PhishMe, a really interesting service if you want to “test drive” and educate your users.

Matthew Pemble talked about corporate and personal privacy on Facebook (and other social networking sites). Of particular interest to me was his reply to a question about what an employee’s time on Facebook costs a company:

– I do not know; how much is the cost of a smoker employee versus a non-smoker?

[ A friend faced a similar situation recently. His new CEO instructed that Facebook and YouTube be blocked, and that he make sure the administrators actually did it. I told him that blocking sites does not lead to a productivity boost. People will find other ways to procrastinate, and, what is even funnier, you will not know what those ways are and you will be forced to find out. ]

Matthew also pointed out that from a company perspective an employee’s Facebook activity might be relatively harmless (and therefore preferred) when compared to P2P, surfing porn, etc.

I think Daniel J Blander gave one of the two best speeches that I attended. He is one of the few speakers in IT I have heard who clearly knows the difference between the social network and the social network platform. IT people tend to think they are the same, because they did not think much about networking before the rise of the platforms. Which, at least for Greece, is weird, given that “meso” and “vysma” (Greek slang for having connections and pull) describe getting stuff done exactly because you know a key person, or someone who knows a key person. Some advice that came out of this presentation: join, use and understand social networks and platforms. The “instant” distribution of the medium should always be on your mind, plus the fact that the network never forgets. As an example he mentioned that one of the oldest posts he had made still lives on, and it was about firewalls. Indeed.

Was it worth it?

I have to admit that had I not been lucky in the draw, it would have been difficult for me to attend. I am glad that I attended, though, and now I am looking forward to the next AIFS. I regret that I did not sit around during the breaks and lunches, but work was within walking distance and there was stuff that needed attending to.

What I did not like: I was really surprised that no one in the Security Metrics session mentioned the securitymetrics mailing list or the LinkedIn group. I was also surprised that one of the speakers recommended COPS and SATAN. Today? Only three people in the audience knew about Koobface. People doing security policies and processes should not be so detached from the “running code” reality. Your work translates to running code too.

I also did not like presenters who exceeded their time slot. If you wonder why the best presenters always finish on time, it is easy: they are the best because they do not lose track of time. Rehearse your presentation. Think of when you are in the audience: how long can you stay focused on a presentation? That is how long your presentation must last, for there is nothing that guarantees you are a better or more captivating speaker. If you plan on saying that 70%-80% of breaches come from insiders, please cite raw data and not another analyst’s report. If you plan to join the crowd that devotes a slide or two to the Russia vs. Georgia cyberwar, please find something new, or a novel way to say what we already know. If your presentation is a survey (you know when it is), try to have at least one slide with something new. Do not spend half of your time slot telling the audience about your past achievements. If you are that good, either we know you already or we will look you up thanks to the amazing presentation that you will give (you did not).

Did I learn anything with regard to security? Depending on who asks the question, I am tempted to answer no. However, this was neither a trade show nor an academic conference (which I like most, because of the running code and/or the theory). Did I learn useful things about my job? Of course I did! When trying to persuade management about an investment, do not talk about the cost of failure (for they will risk running with the legacy or unpatched system); tell them instead about the cost of success. Executives love four-color slides and “traffic light” coloring (green, amber, red) when identifying risk. You have to learn to frame the question correctly. You are allowed to make guesstimates on numbers, because that is what the other departments do too. Always try to show the positives when asking for budget. Identify all the stakeholders and get them on your side. Learn to use Annex A of ISO27001. And a whole lot of details that connected “dots” in my mind (“ah, so that’s how it is done” flashes).

It has become clear to me that security people need to read about Cybernetics. And emergence. Yes, that includes you, the system administrator, too. You too have to understand the dynamics of the organization you work for, not just the network. Awareness training was preached all around, but I have established before why I believe it does not work. If you do not want to pay for ISO27001, use the ISF standards.

I am changing the way I think about these things, and AIFS helped a lot (that, or I am getting older). Do not forget: conferences are about networking and the exchange of ideas, and I missed the “hallway track”.

That being said, I look forward to AIFS 2011.

PS: Angelos, thanks for the coffee.

on security policies

While reading “Rule Based Analysis of Computer Security” I stumbled upon the following phrase:

All the desired operations should be allowed, and all the undesired operations should be disallowed

Many times we focus so much on the latter part (disallowing) that we force users to circumvent obstacles in order to share or access information, and they end up working in ways that grant more access than is actually required. Then trouble, friction between admins and users, and exceptions emerge.
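
A toy illustration of reading the phrase the other way around (an allowlist with a default deny; the roles and resources are of course made up):

    ALLOWED = {
        ("analysts", "read", "sales-reports"),
        ("analysts", "write", "scratch-share"),
        ("operators", "reboot", "web-frontends"),
    }

    def is_allowed(role, operation, resource):
        # Default deny: only the (role, operation, resource) triples that were
        # deliberately listed are permitted; everything else is refused.
        return (role, operation, resource) in ALLOWED

The point of the quoted phrase is that the allowlist must be kept broad enough to cover all the desired operations; otherwise users will route around it and we are back to the friction described above.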

Open Systems Security – an Architectural Framework

“In the old days”, when security information was scarce, many of us began shaping our security mentality (be it white, gray or black) by reading “Improving the Security of Your Site by Breaking Into It” and the Computer Security FAQ, and by running tools like iss and Crack. I think it was there that I first read about Arto Karila’s PhD thesis. Even though it is an OSI-based document, it helped me understand basic concepts. However, there were two problems with the document:

  1. It was hard to find, and
  2. It was in a weird PostScript format that even modern versions of ghostscript refuse to display.

With the help of a friend I managed to transform it to PDF and upload it to Scribd: Open Systems Security – an Architectural Framework

Of historical value mostly.

Update 2013/04/13: Now available at https://github.com/a-yiorgos/karila

Users circumvent control

“People don’t want to be disciplined and structured when writing programs. They are ingenious in finding ways to circumvent any kind of externally imposed control.”*

While the above quote was written about people who wrote macros in spreadsheets instead of writing code†, the second sentence of the quote can equally be applied to any userbase that is required to follow a security policy. Because first and foremost users want to get their job done (even if they believe that part of their job is playing Solitaire).


[*] – When is a picture a program?

[†] – And one could add that this article predicts the emergence of the species of key punchers who thought they were programmers because they wrote lines in Visual Basic. For the article continues: “No sooner had we invented structured programming and software engineering than someone countered with the invention of spreadsheets. Spreadsheet users were not programmers, and what these users were doing was not programming. So naturally the rules didn’t apply to them- right?”