
Jan 4, 2013

Step-up authentication as-a-service for SURFnet

Two-factor authentication used to be the domain of secret services and the military. The enterprise and consumer e-Banking and e-Government domains have since embraced two-factor (or: step-up) authentication. More recently social network sites such as Facebook and Google have started offering two-factor to protect their (free) services. Federations of higher education and research (operated by the NRENs) are still largely basing their authentication on username/password.

A parallel development is that service providers in federations for higher education and research are starting to offer services that deal with highly sensitive information, for instance privacy-sensitive administrative, research, or medical data. This is a consequence both of the success story of federations and of the "move to the cloud", where traditional in-house applications like accounting systems are increasingly being outsourced to cloud providers. Because of the sensitive nature of the data in such systems, stronger forms of authentication are necessary.

NRENs such as SURFnet have noticed these trends, and the discussion of how best to approach two-factor within a federated setting is now in full swing. An Identity Provider (IdP) within a federation is ultimately responsible for providing the identity of its users. This includes authentication, and the IdP can of course make authentication as strong as it wishes. The case for two-factor in a true federation is, however, significantly more complex than rolling out two-factor in a situation where Identity Provider and Service Provider coincide (such as in e-Banking or the enterprise), as information about the Level of Assurance is shared with and interpreted by Service Providers in the federation. Introducing multi-factor authentication within a federation is really only sensible if registration (enrollment of an authentication token) procedures also warrant a strictly higher level of assurance.

Novay, in close collaboration with SURFnet, has made an initial design for a service to assist Identity Providers in the introduction of two-factor authentication solutions that can be used across the SURFconext federation. The report describing the design is available for download from the SURFnet website. The report describes both the technical (architecture, standards) and the procedural (registration, logging, de-registration) challenges.

Architecture
The two main architectural challenges to focus on are:

  • the best location for a multi-factor-authentication service, such that it can support multiple Identity Providers;
  • which standards (and what choices within those standards) should be used for uniformly signaling the level of assurance from Identity Providers to Service Providers.

As for the location of the service: in a SAML-based hub-and-spoke federation (such as SURFconext) it makes sense to implement (the initial version of) the service as a transparent proxy on the Service Provider-bound exit of the hub (as shown above). This separates the service from the hub. It also means that Identity Providers and Service Providers can remain unchanged, except for Service Providers that need to deal with higher levels of authentication. The paper builds a case for this simple architecture.

As for the standards to use: there are many level-of-assurance frameworks. To name just a few: NIST SP 800-63 for the US and STORK for the EU. The best option, at this moment, would be to standardize on the upcoming ISO/IEC 29115 standard, which will unify some of these frameworks. SAML 2.0 has had support for signaling details about the authentication process (and related processes) since its inception, in the form of the so-called Authentication Context. This concept, however, leaves a lot of implementation freedom (and therefore interpretation freedom) for Identity Providers and Service Providers. Attempts to merge the Authentication Context concept with ISO 29115-style levels of assurance are relatively recent, and also appear in other authentication protocols such as OpenID Connect. The paper gives recommendations on how best to apply these standards.
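
To make the signaling question concrete, here is a minimal Python sketch of what the proxy at the hub's exit could do: map the achieved level of assurance to a SAML AuthnContextClassRef and decide whether step-up is needed. Apart from the standard SAML password class, the class-reference URIs and function names are illustrative assumptions, not the choices made in the report.

```python
# Minimal sketch of LoA signaling at the SP-bound exit of the hub.
# The example.org URIs below are hypothetical placeholders, not the
# values chosen in the SURFnet/Novay report.
LOA_TO_AUTHN_CONTEXT = {
    1: "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport",
    2: "http://example.org/assurance/loa2",
    3: "http://example.org/assurance/loa3",
    4: "http://example.org/assurance/loa4",
}


def authn_context_for(achieved_loa: int) -> str:
    """The AuthnContextClassRef the proxy puts in the assertion it
    forwards to the Service Provider."""
    return LOA_TO_AUTHN_CONTEXT[achieved_loa]


def requires_step_up(requested_loa: int, loa_from_idp: int) -> bool:
    """True if the proxy must trigger a second-factor authentication
    before releasing the assertion to the Service Provider."""
    return requested_loa > loa_from_idp


if __name__ == "__main__":
    # An SP handling medical data asks for level 3; the IdP only did
    # username/password (level 1), so the proxy performs step-up first.
    if requires_step_up(requested_loa=3, loa_from_idp=1):
        print("perform second factor, then assert:", authn_context_for(3))
```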

Registration process
The level of assurance of an authentication token is not only determined by characteristics of the token itself, but also by the process by which the token is bound to an individual user by the Identity Provider. The paper recommends appointing a Registration Authority within the institute of higher education or research and making that person responsible for binding authentication tokens to users (staff, students) of the institute. The paper gives precise guidelines for setting up a Registration Authority. The most important recommendation is that individual users should visit the Registration Authority in person for face-to-face binding to the authentication token. The user should bring the token and a valid passport or identity card. The Registration Authority will check that these match with the user and with the attributes as issued by the Identity Provider. The Registration Authority also oversees an authentication attempt (with the second factor only) to make sure the user actually controls the token.

The registration process is supported by an online service hosted by the federation operator. The service contains both a self-service user interface for end-users (so that most of the administrative process can be dealt with before the user actually visits the Registration Authority) and a user interface for the Registration Authority. The paper shows mock-ups for both user interfaces.
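
As an illustration of the registration life cycle sketched above (self-service pre-registration, face-to-face vetting, activation, de-registration), here is a minimal Python sketch; the state names and checks are my own shorthand, not taken from the paper.

```python
from enum import Enum, auto


class TokenState(Enum):
    """Illustrative life cycle of a second-factor registration; the state
    names are assumptions, not taken from the SURFnet/Novay report."""
    REQUESTED = auto()   # user pre-registered the token via self-service
    VETTED = auto()      # RA checked ID, attributes, and control of the token
    ACTIVE = auto()      # token usable for step-up authentication
    REVOKED = auto()     # de-registered (lost, stolen, user left)


def vet(state: TokenState, id_matches: bool, attributes_match: bool,
        test_authentication_ok: bool) -> TokenState:
    """Face-to-face step performed by the Registration Authority."""
    if state is not TokenState.REQUESTED:
        raise ValueError("token is not awaiting vetting")
    if id_matches and attributes_match and test_authentication_ok:
        return TokenState.VETTED
    return state  # RA rejects; the user stays in the self-service phase


# Example: a successful face-to-face session moves the token to VETTED,
# after which the service can activate it.
print(vet(TokenState.REQUESTED, True, True, True))
```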

It is highly likely that the service proposed in the paper has broader applications beyond the boundaries of SURFconext. The architecture was described with portability in mind, so that the service can be re-used in other federations. Although the paper makes some concrete choices (to make it relatively easy to actually start building such a service), these choices are explicitly documented in the paper.

Acknowledgements
The authors would like to acknowledge Ruud Kosman of Novay for designing the mock-ups and other colleagues from SURFnet and Novay for reviewing early drafts of the paper.

Feb 9, 2012

Context-enhanced Authorization

Context information can make authorization management more flexible and more secure. Knowing when and where users are, and what they are up to, helps in determining which access rules to apply. We recently did a project with Rabobank and IBM in which we asked (and answered) questions such as:
  • What authorization related use-cases could benefit from context information?
  • Which context-sources are relevant, mature enough, and secure enough to be used today (or in the very near future)?
  • How to deal with the (lack of) quality and authenticity of context?
  • How does context information interact with authorization standards such as XACML and today's implementations of those standards? (See my previous posts for more technical details on the hands-on XACML work that we did in that project.)


The main lessons learned (the answers to the above questions) are:
  • Typical use-cases can be found in the area of the mobile workforce ("nomadic working", etc.). As organizations introduce these new ways of working, traditional security policies that are only based on (authenticated) identity and static roles and entitlements are too strict and too coarse-grained. Context can make a difference here and allows finer-grained access so that, for example, medium level security tasks can be performed from home if the context allows this.
  • A model for context-information can be constructed around different context-types, some traditional (location, time, ...), some more exotic (physiological, mental, social, ...). The above use-cases can already be addressed with the more traditional context-sources: location, time, proximity, device id, network id. These basic context-sources are readily available, and are under control of the organization.
  • The easiest way to deal with authenticity and quality of context is to rely on trusted context-sources that are under control of the organization.
  • Externalization of authorization, as promoted by the Attribute Based Access Control (ABAC) paradigm (and facilitated by standards such as XACML), works well in practice when combined with context information. In a demonstrator (see video above) we showed that adding context to authorization policies managed by Tivoli Security Policy Manager (an IBM XACML product) comes down to adding a policy information point; a minimal sketch of such a context-fed rule follows below this list. Relying applications only need to understand XACML in order to become context-enabled.
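
To make that last point concrete, here is a minimal Python sketch of a context-fed access rule: a stand-in policy information point collects attributes from trusted, organization-controlled sources, and a simple rule decides whether a medium-sensitivity task may be performed from home. The attribute names, sources, and the rule itself are illustrative assumptions, not the actual Rabobank/IBM policies or the TSPM API.

```python
from datetime import datetime

# Sketch of a context-aware access rule fed by a policy information
# point (PIP). Attribute names, sources, and thresholds are illustrative.

def context_pip(device_id: str) -> dict:
    """Stand-in for a PIP that queries trusted, organization-controlled
    context sources (network, device management, time)."""
    # In a real deployment these values come from live sources; here they
    # are hard-coded placeholders.
    known_devices = {"laptop-0042": {"managed": True, "network": "home-vpn"}}
    device = known_devices.get(device_id, {"managed": False, "network": "unknown"})
    return {
        "device_managed": device["managed"],
        "network": device["network"],
        "hour": datetime.now().hour,
    }


def permit_medium_sensitivity_task(ctx: dict) -> bool:
    """Example rule: medium-level tasks may be performed from home if the
    device is managed, the connection comes in over a trusted network,
    and it is within extended office hours."""
    return (ctx["device_managed"]
            and ctx["network"] in ("corporate", "home-vpn")
            and 7 <= ctx["hour"] <= 22)


print(permit_medium_sensitivity_task(context_pip("laptop-0042")))
```
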
Obviously, there are questions left for future research. How to deal with privacy issues is one of them. Complexity of policies and other scalability and performance issues form another. Want to read more? Go check out the project page or read the white paper.

Sep 17, 2010

SMS text authentication for patient access to Dutch electronic health record


The encryption algorithm A5/1 used in GSM has been suspect since at least 1994 (when the algorithm leaked). Nohl's talk at 26C3 (December 2009) demonstrated that a practical attack would soon become possible. And all of a sudden people started to get nervous in 2010.

As a follow-up to their report for the Dutch Ministry of Health, Radboud University and PricewaterhouseCoopers recently published a risk assessment focusing on GSM-based SMS text authentication as a factor to strengthen DigiD, the Dutch citizen-to-government authentication solution.

SMS text authentication is already used in DigiD level 2, but the binding of a user's subscriber number to their DigiD is rather weak: anyone with access to the mailbox at the user's registered home address (the so-called GBA address) can bind a new mobile phone to the user's existing DigiD account (and subsequently order a password reset, completely hijacking the account). The original report by RU, PwC and TILT recommended strengthening this binding process so that a patient would have to prove possession of a subscriber number to a government representative face-to-face. The strengthened DigiD (known as EPD-DigiD) can then be used by patients to access their electronic health record in a standard SMS OTP authentication scenario (during a session the user has an extra factor with a separate network connection to the provider).
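
For reference, the server side of such an SMS OTP step is conceptually simple; the Python sketch below (generate a short-lived random code, send it over the GSM channel, verify it over the Web channel) is a generic illustration, not DigiD's actual implementation.

```python
import secrets
import time

# Generic server-side SMS OTP sketch; not DigiD's actual implementation.
OTP_LIFETIME_SECONDS = 300
_pending = {}  # session id -> (code, time issued)


def start_sms_authentication(session_id: str, send_sms) -> None:
    code = f"{secrets.randbelow(10**6):06d}"     # 6-digit one-time code
    _pending[session_id] = (code, time.time())
    send_sms(code)                               # delivered over the GSM channel


def verify(session_id: str, submitted_code: str) -> bool:
    code, issued_at = _pending.pop(session_id, (None, 0.0))
    if code is None or time.time() - issued_at > OTP_LIFETIME_SECONDS:
        return False
    return secrets.compare_digest(code, submitted_code)


# Example run: the lambda stands in for an SMS gateway.
start_sms_authentication("session-1", send_sms=lambda c: print("SMS code:", c))
print(verify("session-1", "000000"))   # False unless the codes match
```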

The conclusion of the RU/PwC risk analysis is that although breaking A5/1 leaves SMS authentication relatively secure (the risk of actual abuse is not that high), the perceived lack of security in public opinion and the non-compliance with security standards may be damaging to the reputation of the government. The solution is not secure enough to allow patients to access their health records at this point in time.

What I don't get is the proposed solution: a conversion table (on paper) sent to each user over regular snail mail (how secure is that?). The user uses this table to manually translate the code that was sent in an SMS message before entering it in the browser's form. This appears not to add an extra factor: an attacker that can eavesdrop on the Web channel and the GSM channel will soon learn the mapping. Also from a user experience perspective that sounds horrible.

An alternative approach would be to install a SIM toolkit applet on the SIM which performs the translation automatically for the user. Rather than a static table per user one could even use a key (with a decent cipher; I'm sure the current generation of SIMs in the field supports AES or at least 3DES) and have real security. Sort of a light-weight-Mobile-PKI-without-the-PKI solution.
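
A minimal sketch of that idea: both the SIM applet and the server derive a short response from the code sent by SMS using a shared per-user AES key. The construction (AES-ECB over a padded code, truncated to six digits) and the use of the pyca/cryptography package are my own choices for illustration; they are not part of the RU/PwC proposal.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Illustrative replacement for the per-user paper table: a per-user AES key
# shared between the SIM applet and the server; both sides compute the same
# short response from the code received via SMS.

def derive_response(key: bytes, sms_code: str) -> str:
    block = sms_code.encode().ljust(16, b"\x00")          # single 16-byte block
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(block) + encryptor.finalize()
    return f"{int.from_bytes(ciphertext[:4], 'big') % 10**6:06d}"


key = os.urandom(16)                     # provisioned to the SIM at registration
print(derive_response(key, "483921"))    # what the applet shows and the server expects
```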

Feb 10, 2010

Community generated trust

I like CAcert.org. The basic premise of this CA is that trust is a community effort: the "by the people, for the people" kind of stuff. A social network for security geeks. Trust in derived identities (not identities of persons but identity of domain names or of Web servers) can then in principle be based on community generated trust so that steep yearly prices for server certificates can be avoided. We all benefit (except if you run a commercial CA, of course).

I created my CAcert account ages ago, but only recently undertook some action to get my identity assured by the community. Here's how it works:
  • You create an account with the service and register one or more email addresses.
  • The service checks possession of each email address by sending a challenge link to click.
  • You can also register domain names (where you typically host Web servers) with the service, possession of the domain is checked in a similar way.
  • As a user you now have 0 points:
    • You can have the service issue email certificates for your email addresses (for sending encrypted or signed emails, or for client side TLS authentication).
    • You can have the service issue Web server certificates for your domains (for server side TLS authentication, i.e. HTTPS).
    • Issued certificates (based on a CSR that you generate) are valid for 6 months and contain only basic information (not your full name, for instance).
  • Once you have over 50 points, newly issued certificates will be valid for 2 years and can contain your full name.
  • Once you have over 100 points, you can also have the service issue code signing certificates and you become a so-called assurer (after you take the official online exam).
  • Certificates are signed by the service's root private key and can be checked using the service's root certificate (at the time of writing that certificate is valid until 2033). Currently viewers of your TLS secured Web site will have to manually insert the root certificate into their browser's trust store. The ambition of CAcert is to have the service's root certificate included in Mozilla's trust store distributed with Firefox.
How do you get more points? You will need to find an assurer (another user with over 100 points) and meet with him or her face-to-face. The assurer will check your passport (or driver's license or similar photo ID) according to certain guidelines and fill out a paper form which you need to sign. Depending on the experience of the assurer, he or she can give you 10 to 35 points at most. The form is kept by the assurer for seven years and then destroyed. The service's Web site has a database that can be queried to find assurers near your location. I used this mechanism over the last couple of weeks to find some friendly people in Twente willing to check my identity (thanks Peter, Ashwin, Tom, Alex & Stephan).
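
The point thresholds described above can be summarized in a few lines of Python; this is just a restatement of the rules as I understand them from the list, not CAcert's actual policy code.

```python
def cacert_privileges(points: int) -> dict:
    """Privileges as a function of assurance points, as described above
    (a simplified restatement, not CAcert's actual policy code)."""
    return {
        "cert_validity_months": 24 if points > 50 else 6,
        "name_in_certificate": points > 50,
        "code_signing": points > 100,
        "may_become_assurer": points > 100,   # after passing the online exam
    }


# Three assurances of 35 points each are enough to pass the 100-point mark.
print(cacert_privileges(3 * 35))
```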

So how trustworthy is all of this, really? The foundation behind CAcert is a non-profit organization supported by other non-profits. They seem serious about their infrastructure's security. The server-side software is open source, and although it is written in PHP and Perl, it can be inspected by security researchers. For cryptography the implementation relies on OpenSSL. There's a whole community effort to train assurers in recognizing authentic government-issued IDs. That all sounds pretty trustworthy (except maybe for the use of OpenSSL, which is written by monkeys ;) ).

Let's say I want a fake identity assured (i.e., a freshly generated free-mail account with a fake name and date of birth with 100 points). How difficult is that? I'll assume that until now all other users have been honest and have been perfectly assured based on government-issued IDs. I'll need to find n evil assurers (at most ten). Those evil assurers should be willing to falsely assure my fake identity. Do those n assurers need to be n different people? Maybe not: creating ten different accounts under my real name is possible (the service should be available to users who happen to have the same name and date of birth as an existing user). I could get those ten accounts assured by at most (n * (n + 1)) / 2 honest assurers so that each account gets 100 points. I then use those ten accounts to give my fake account 100 points. Better yet, I create ten fake accounts this way and give each of those 100 points so that I no longer need my ten original accounts (which are all in my real name, better delete those now).

How to remedy this? There seems to be an audit program in place, where assurers are asked to contact other assurers to sanity-check past assurances. Eventually my fraudulent accounts will be discovered and traced back to my real identity (the ten accounts in my real name that were assured by honest assurers). I could then be held to the community agreement which I agreed to when I signed up for the service. The combination of government issued ID, face-to-face meetings, community vigilance, and legal agreements actually forms a pretty good deterrent security control against the described attack. In the end what CAcert is doing is not so different from what the commercial CAs are doing.

Update 2010/02/17: Looks like this same meme was recently discussed on the CAcert mailing list.

Nov 18, 2009

Variable Road Pricing

We seem to be getting variable road pricing over here in the Netherlands. Which generates a lot of discussion, of course. The Dutch Ministry of Transport has a nice high-level overview including a diagram with some interfaces of the system:


I haven't made a detailed security analysis of this system, obviously. But couldn't one simply block the incoming GPS signal (say, using a GPS jammer)? Better yet, why not relay the signal from a stationary GPS receiver at home to your on-board unit?

Oct 27, 2009

RSA Conference Europe 2009

I attended RSA Conference Europe 2009 in London the other week, where I gave a presentation on something I blogged about before (combining ePassports and Information Card, a project sponsored by NLnet). My talk was scheduled for the very last slot on the very last day, which means I had plenty of time to go and listen to the other talks. Some of my impressions are below.



I checked out the booths of the conference's sponsors and noticed a relatively large number of authentication factor vendors (G&D, Kobil, smspasscode.com) and of course the big guys (RSA Security, Microsoft, Qualys, CA).

As for the presentations, there were at least 4 different tracks, and all talks had catchy titles. Very difficult to choose from. There were a lot of "securing the cloud" talks. I've heard people claim that 'cloud==deperimeterization'. Others claim that 'cloud==virtualization', and yet others claim that 'cloud==SaaS', and even 'cloud==social networks'. Most of the talks dealt with managing the risks of enterprise cloud computing (sharing resources is risky, you'll need good SLA contracts for that). I especially liked the Collateral Hacking panel session which focused on the risk presented by totally unrelated parties you happen to share services with.

There were a few hacking-presentations. I really enjoyed Björn Brolin and Marcus Murray's Breaking the Windows driver signing model. Great live reversing demo. Bottom line: Running an anti-virus suite with badly engineered (yet Microsoft signed) kernel drivers can actually render your PC less secure from malware.

Talking about anti-virus software vendors: both McAfee's Anthony Bettini and Kaspersky Lab's Stefan Tanase focused their presentations on threats from social networks (personalized spam, Twitter-based C&C, targeted attacks based on synchronization between personal and enterprise information). Anthony had the best sound-bites IMHO: 'open-sourcing one's life', 'keep your enemies closer'. Stefan showed a glimpse of crawler-based technology that Kaspersky's R&D team in Romania is working on.

More targeted social network threats came from Brian Honan, who introduced the audience to some of the tools of the trade, notably pipl.com and Maltego. Interestingly, in Ireland anyone can request anyone else's birth certificate (apparently for reasons of genealogical research), and the only thing needed to request a driver's license or passport in Ireland is a birth certificate.

Microsoft's keynote was delivered by Amy Barzdukas. She made some valid points about the perception of privacy and security by the average computer user. The FUD (initially directed at Google: Chrome's auto-completing address bar will send packets to Google, OMG, better stick with IE8) was a little too much for my taste. They're going to make it more difficult to download and install third-party software through IE because of the fake virus scanner scams.

The keynote by special agent Mularski of the FBI and Andy Auld of SOCA about the Russian Business Network was so secret that I cannot blog about it. The keynote by Dave Hansen of CA on content-aware extensions of RBAC was pretty interesting and included another secret agent.

Andrew Nash of PayPal gave an insightful presentation on the consumer identity bootstrap problem. After explaining the clever big bang/steady state analogy, he showed just how big the problem is. What's the most important feature an Identity Provider should offer to its users? Right. Anonymity. The other PayPal presentation was by Hadi Nahari, who put forward some requirements (or rather, desirements) for identity in mobile computing. It appears that PayPal is trying to get some of these ideas into the Global Platform specifications.

Ira Winkler went on a one-hour rant over the use of the term information warfare. Funny stuff, except for the one Estonian guy in the audience.

Oct 1, 2009

Mobile PKI


Mobile PKI, also known as Wireless PKI (and a lot of other names such as Mobile Secure Signature Service, Secure Signature Creation Device, ...) is a technology which allows users to place electronic signatures with their cell phone. This can be used for applications that run on the phone, but also for applications that run on other platforms (the user's computer connected to the Internet, for instance). One could use this, for example, as an authentication mechanism at a relying party. In the latter scenario your phone is a "something-you-have" token which provides extra security as an attacker would have to manipulate two separate channels to mount an attack. Before placing a signature, the cell phone will ask the user for his or her PIN.
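
To give an idea of the relying-party side of that authentication scenario, here is a minimal Python sketch: generate a challenge, have it signed on the phone (via the operator's signature service, which is not modeled here), and verify the returned signature against the user's certificate. The use of RSA with SHA-256 and of the pyca/cryptography package are assumptions for illustration; the relevant standards allow other schemes.

```python
import os
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

# Relying-party side of Mobile PKI authentication: issue a challenge over
# one channel, receive the signature produced on the SIM over the other,
# then verify it against the user's certificate. RSA/SHA-256 is assumed.

def new_challenge() -> bytes:
    return os.urandom(20)


def verify_mobile_signature(cert_pem: bytes, challenge: bytes,
                            signature: bytes) -> bool:
    certificate = x509.load_pem_x509_certificate(cert_pem)
    try:
        certificate.public_key().verify(signature, challenge,
                                        padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False
```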

The SIM card inside the cell phone plays a central role in Mobile PKI. Actually, the obvious way to implement Mobile PKI is through a so-called SIM Application Toolkit (SAT) applet installed on the SIM card. SAT has some really cool features that make things easy, both for the mobile operator and for the user:
  • They can be installed over the air (OTA) to an already deployed SIM by the mobile operator, without disturbing the user
  • They can add extra (basic menu-based) features to the GUI
  • They can react to events such as selection of menus by the user or incoming SMSs sent by the mobile operator
This makes Mobile PKI a pretty secure solution:
  • The application resides on a tamper resistant smart card
  • Most handset manufacturers will make sure that there's a trusted path from the phone's keyboard to SAT applications (the malware problem seems to still be small for the mobile platform)
  • The separate channel advantage was already mentioned above
It's also more user-friendly than other authentication solutions such as smart cards, PKI tokens, and one-time-password SMSs:
  • The PIN is the same for each and every transaction
  • There's no need to install software on the user's PC
  • There's no need to read and type challenges or responses
  • Most users will not forget or leave their cell phone unattended, and most will notice and report a missing or stolen phone
Mobile PKI was standardized by ETSI around 2002/2003, and Common Criteria protection profiles for Secure Signature Creation Devices have existed since 2001. So the technology is pretty old. It has found its way to end-customers in some countries, most notably Turkey and, more recently, the Nordic countries (in Finland you can apparently even add your government-issued eID to a SIM card). Most of the SIM manufacturers and technology providers offer Mobile PKI as an option to their customers (the mobile operators). I wonder why this hasn't caught on here in the Netherlands.

Aug 13, 2009

Anti-skimming measures

Someone glued small pieces of metal to the PIN entry pad at the POS of my local self-service gas station. It must have been one of the good guys, because it says "veiligheidsstrip" (Dutch for "security strip") at the bottom.

Certainly raises security awareness amongst customers...

(Until they get used to it and the bad guys manage to produce mini cameras that look like small pieces of metal.)

Mar 31, 2009

Security in the workspace - Part 3


It seems that we will have to learn to live and work in a de-perimeterized world. Acceptance of the problem is often the first step towards a solution. So, what alternatives to perimeter defense are there? And what is the impact of these alternatives on the future workspace and vice versa? Below are some thoughts. I hesitate to call these conclusions. Please consider these as starting points for a discussion.
  • Defense in depth is the complete opposite of perimeter defense (when considering the location where controls are implemented). This security principle advises to apply multiple layers of security controls, so that if one layer fails other layers take over.
    • Unfortunately, complete defense in depth is increasingly expensive, as it is difficult to maintain and too many layers of security get in the way. (Is there a usability vs. security trade-off? I'm not sure. But usability is probably not helped by adding multiple layers of security.)

  • Most experts see a shift from perimeter defense (and other location based defenses) to data oriented security. (Perhaps that should be information oriented security?)
    • Because of the multiple contexts in which employees now process data, this requires some sort of watermarking of sensitive and valuable data. If, for example, lost information can be traced back to the employees responsible for that information, then those employees can be held accountable for the loss. But wasn't DRM declared dead?
    • Moreover, data oriented security makes valuation of information necessary: relative sensitivity and value to the organization should be made explicit. Valuation of assets should be done anyway (as part of information risk management), but that doesn't mean that it is easy, cheap or common practice today!
    • Related to the above point: information should be stored and processed with a clear goal in mind (for reasons of Governance, Regulations, Compliance). This is at least as difficult as valuation.

  • Accountability (the other A-word) may be an alternative to access control. Access control is somewhat problematic in the absence of a perimeter after all. Access control is expensive in the future workspace since employees join and leave the organization on a more regular basis (access credentials management is costly). Accountability certainly seems to be more compatible with the greater responsibility given to employees as part of the future workspace trends.

  • Identity management is necessary, as accountability usually means a great deal of logging (of actions of employees). Logging obviously requires the capability to distinguish between employees (try holding individuals accountable for their actions when you can't tell them apart). However, since we left the perimeter behind us, we can't rely on the classical identity management process which involves provisioning, authentication, and authorization.
    • The provisioning problem could be overcome if we could use an established identity provider's infrastructure which extends beyond the bounds of the organization. The existing identity provider (I'm thinking of national governments) has the infrastructure for issuing authentication means to individuals already in place. If such a global identity provider is not (yet) possible, federated identity management and user-centric identity management may be alternatives in the meantime.
    • Authentication has to be done in a decentralized fashion (in the absence of a perimeter with checkpoints) and preferably as often as possible, yet also as unobtrusively as possible. Perhaps context information could help here?
    • Authorization, on the other hand, is better done centrally, so as to achieve consistent rules which are easy to manage. Well-defined roles could be useful here.
Other points? Leave a comment!

Feb 17, 2009

Security in the workspace - Part 2

The word de-perimeterization is used by security experts both to describe a problem and a solution. The problem is clear: when we rely on perimeter defense, a disappearing perimeter is problematic. The solution says that instead of fighting de-perimeterization, by trying to rebuild parts of the perimeter, we should admit that perimeters will be gone soon and implement our security measures on a different level.

What is causing the problem? Here are three major factors which seem to drive de-perimeterization:
  • Networked Business: Suppliers, customers, and service providers all work with the organization on a much finer-grained level than they used to. This is the result of specialization. An example is outsourcing: it can be very cost-effective to outsource certain tasks to more specialized organizations. Outsourcing requires so-called service level agreements: contracts between the organization and the service provider about the quality of the services rendered. Security should be a part of such agreements, as these parties operate within the perimeter.
  • Governance, Regulations, Compliance: Organizations need to comply with more and more external laws and regulations. Often these call for more transparency towards shareholders, governments and the general public. This means that these parties need to pass the perimeter.
  • Insider Threats: Employees are not the loyal workers they once were. Maybe most of them still are, yet some of them will try to gain access to your most valuable assets for personal gain. If you cannot trust your own employees, who operate within the perimeter, then you might as well get rid of the perimeter.
It is clear that each of these factors impacts the perimeter. Are there more?

The de-perimeterization factors are closely related to trends typically attributed to Future Workspaces. The difference is in the perspective. When I think of securing an organization, I tend to take the perspective of the organization. When I try to imagine what the workspace of the future will look like I tend to take the perspective of employees. We identify the following trends:
  • Relation to employer (or, perhaps, loyalty to the organization)
    • Employees no longer work for one employer for 40 years but switch jobs regularly.
    • Employees work for different employers at the same time (I used to work here and here at the same time, which rarely led to conflicts of interest).
    • The professional social network of most employees is bigger than it used to be, extending well beyond the organization's borders.
  • Responsibilities
    • Employees are given greater responsibility in representing the organization.
    • Organizations are less hierarchically managed.
    • Employees (are encouraged to) write about their professional lives in blogs.
  • Collaboration
    • Not every organization has experts in every field. Organizations are aware of external experts (thanks to openness of other organizations) and encourage employees to collaborate with them.
  • Work in different contexts
    • Employees can work from home.
    • Employees (especially knowledge workers) travel much more and work while in transit (using mobile devices).
    • Employees work (while outsourced) at a client's site.
    • Employees work irregular hours.
    • Employees work shorter hours; some colleagues may almost never meet in person.
At the very least we can claim that the Future Workspace trends reinforce the de-perimeterization factors. The de-perimeterization problem is made bigger and more urgent for organizations to deal with. In fact, many of the security incidents that organizations are faced with can be explained in terms of security controls which are part of the old perimeter defense interacting with employees' new found freedom.

In part 3 I will look at ways forward in the de-perimeterized future workspace.

Feb 10, 2009

Security in the workspace - Part 1


The workspace is changing. What will mostly be different is the relationship between employees and the organizations they work for. I’m interested in the consequences these changes have for the administration of information security in organizations.

Information security incidents have become part of our lives during the last couple of years. Popular media regularly report on incidents ranging from lost pen drives filled with privacy-sensitive data to financial fraud by employees costing financial organizations billions. The increase in reported incidents not only shows that security incidents are on the rise but also indicates a change (yes we can!) in how organizations respond to incidents. Reputation and trust are increasingly important concepts in today's business world, and organizations need to find ways to deal with security problems.

The openness that organizations are showing lately, to customers, to employees, to other organizations, and to the general public, is interesting. From a security perspective openness is a double-edged sword: on the one hand, openness means granting access to parties which may not be trusted yet. This clearly complicates security administration. On the other hand, openness also stands for transparency and open standards, which simplify matters. And simple things are easier to secure.

Security researchers who study organizational security associate the new found openness in organizations with de-perimeterization. De-perimeterization means that the perimeters of organizations are disappearing. This is problematic because most security strategies pay a lot of attention to perimeter defense: Concentrate your efforts on the perimeter and the rest of the organization is secure.

Is perimeter defense a bad strategy? Thousands of huddling Emperor penguins can’t be wrong, can they? And if you’ve ever played the board game Risk you know that the best strategy to defend a continent is to move all your armies to the border countries.

In part 2 we will have a closer look at de-perimeterization and see how it interacts with future workspaces.

Feb 1, 2009

A "Game-Theoretic" Analysis of De-perimeterization


De-perimeterization is a word which (despite being impossible to pronounce or spell correctly) is used more and more in discussions about the security of organizations. Studying the effects of the disappearing perimeter in practice is difficult because organizations are complex and it is difficult to measure the quality of newly deployed security measures. Instead, let's describe some of the issues of de-perimeterization here using an analogy with the well-known board game Risk.

In Risk players occupy countries by placing armies on them. Given a configuration of the board where every player has a number of countries with armies, players can attack countries owned by other players from a neighboring country. If all armies of the defending player are completely defeated then that country is conquered and the attacker can place a number of armies on it.

Although luck is certainly a factor (the game uses no less than five dice), the general rule is that the more armies you bring to a fight, the bigger the odds that the country will (still) be yours at the end of the attack. After a successful attack, a great number of armies can be moved onto the newly conquered country. Armies can also be moved between neighboring countries owned by the same player when not attacking, but the number of such movements per turn is limited. Playing Risk demonstrates that logistics is one of the most difficult parts of administering security.
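
To back up the claim that more armies means better odds, here is a quick Monte Carlo sketch of a single battle round under the standard Risk rules (attacker rolls up to three dice, defender up to two, highest dice are compared pairwise, ties go to the defender). It is only an illustration of the odds, nothing beyond the standard rules.

```python
import random

def battle_round(attacker_dice: int, defender_dice: int) -> tuple[int, int]:
    """One Risk battle round: highest dice compared pairwise, ties favour
    the defender. Returns (attacker losses, defender losses)."""
    a = sorted((random.randint(1, 6) for _ in range(attacker_dice)), reverse=True)
    d = sorted((random.randint(1, 6) for _ in range(defender_dice)), reverse=True)
    attacker_losses = defender_losses = 0
    for atk, dfn in zip(a, d):
        if atk > dfn:
            defender_losses += 1
        else:
            attacker_losses += 1
    return attacker_losses, defender_losses


def defender_loss_fraction(attacker_dice: int, defender_dice: int,
                           trials: int = 100_000) -> float:
    """Fraction of all lost armies that are the defender's."""
    d_total = grand_total = 0
    for _ in range(trials):
        a_loss, d_loss = battle_round(attacker_dice, defender_dice)
        d_total += d_loss
        grand_total += a_loss + d_loss
    return d_total / grand_total


# More attacker dice (i.e. more armies on the attacking country) shift the
# losses towards the defender.
for dice in (1, 2, 3):
    print(dice, "attacker dice vs 2 defender dice:",
          round(defender_loss_fraction(dice, 2), 3))
```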

Countries are organized in six continents. Continents are a lot like organizations: they contain assets (countries, armies) and they have a perimeter. A player receives bonus armies at the start of every turn in which a continent was completely owned by that player and was successfully defended.

Countries on the border of a continent form the perimeter of that continent. Perimeter countries need special attention because enemies need to first travel through perimeter countries before they can attack an inner country. Recall that if an attacker occupies any country of a continent held by a player, then the defender will not get their bonus at the beginning of their next turn. For the defender, moving most armies to the border countries therefore seems a good strategy. We will call this strategy Perimeter Defense.

At first, Perimeter Defense seems like a good idea. All players are each other’s enemies, after all. In practice, however, what happens is that players form temporary alliances so as to effectively attack a common enemy. The common enemy is typically the player with the most armies. This means, for example, that the members of an alliance agree to follow a certain attack strategy and agree not to attack each other for a number of turns so that they can keep borders between alliance-owned continents minimally manned. The armies no longer needed to defend alliance-owned borders can be better used to attack the common enemy with greater force.

But there are far more complex forms of cooperation possible within an alliance. A pattern that is often seen is that one player in the alliance allows another player to move troops over territory owned by the first player. The first player creates a corridor of countries occupied with only 1 army on them. The countries in the corridor are easily conquered by the second player when he attacks them with a great number of armies. Since moving armies during an attack is free, this allows a player to move a great number of troops towards the common enemy’s border, circumventing the per-turn troop movement limits. The second player also leaves only 1 army on the countries in the corridor, allowing the first player to easily recover the original countries of his continent later on.

So what are the alternatives to perimeter defense? It is tempting to think of Defense in Depth as the complete opposite of Perimeter Defense. In the Risk analogy, naive Defense in Depth means equally distributing one's armies over every country of a continent, both inner and border countries. Obviously this means that it becomes easier for a single enemy to occupy a border country (which means the defender won't get the bonus armies). Yet at least the continent is more difficult for attackers to conquer completely. It very much depends on the situation (the agenda of other players, alliance agreements) whether Defense in Depth is a good strategy.

Defense in Depth also makes it more difficult to move armies to specific places, for example to allow a fellow alliance member to move troops across your continent. Yet, if one doesn't completely trust the other players in the alliance, a certain degree of Defense in Depth is actually a good thing. After all, when alliance members are moving troops through our corridors they should not be tempted too much to occupy our complete continent while they're at it.

The real world, consisting of real organizations, is in many aspects much more complex than this simple board game world, if only because the goals of organizations are much more complex than simply 'winning the game'. Still, real organizations also deal with security strategies. Two organizations will work together if it is of benefit to both of them (although usually not to mount an attack on the security perimeter of some competitor). At the same time, organizations need to restrict access to their assets from outsiders as much as possible.

The problem is not that the perimeter is disappearing. The problem is that it is continually changing. The quality of a security strategy depends greatly on external forces such as the goals of other organizations. That these external forces change dynamically makes things even more complex.

Perimeter Defense and Defense in Depth are still good concepts to use when defining a mixed security strategy but much more important seems to be the ability to quickly change strategy. If security controls are resilient rather than brittle (see Schneier’s book Beyond Fear for an explanation of these concepts) then they can easily be used as part of a dynamically configurable perimeter.

(Thanks to Tim, Marcella, Victor, Suzana, Dragan, and Georgi for playing numerous games of Risk. Disclaimer: The author lost most of these games.)