Open Source Security
Welcome to Planet LTC

by George Wilson, IBM Linux Technology Center

I was recently reading through the NIST “Draft Guide to Security for Full Virtualization Technologies” (SP 800-125 draft). It discusses various considerations relating to hypervisor security. One section that particularly struck me was the comparison of bare metal vs hosted hypervisors. These are also known as Type I and Type II hypervisors, respectively. The document states that choosing between them is a critical security decision. That started me wondering whether it is actually true that Type I hypervisors offer superior security to Type II hypervisors. While a Type I hypervisor may have a small kernel, it relies on and trusts an entire OS instance in the resource-owning partition (Dom0 in Xen parlance) for device access. So while it might at first blush appear that a Type I hypervisor has a much smaller TCB than a Type II, the TCB is really just in a different place. Given imperfect knowledge of the implementations and similar size, complexity, and maturity, it would seem that Type I and Type II hypervisors would in general offer similar security. I can’t find any solid evidence to the contrary. I’d love to hear from someone who can clarify why the Type I vs Type II distinction is in any way a major factor in hypervisor security analysis.

by Rajiv Andrade, Linux Technology Center

Since the foundation of the Trusted Computing Group, previously named the Trusted Computing Platform Alliance, the pillars required to address many of today’s security challenges have been heavily developed.

The Trusted Platform Module (TPM) and the TCG Software Stack (TSS) are two of these. Now that we have the required enablement in our hands, the next expected step is to develop the detailed, implementable use cases that were originally envisioned when the Trusted Computing initiative began.

The use case presented in this newly published Blueprint exploits the integrity measurement capability that the TPM provides. Rather than using a passphrase as an authorization token, it describes how to use a machine’s integrity to authorize access to sensitive files, by means of a key sealed to those integrity parameters.

The parameters include the loaded kernel image, the bootloader and its configuration file, and the BIOS. Thus, if a different (potentially compromised) kernel image is loaded, those sensitive files won’t be accessible. It’s also worth mentioning that the bootloader used is able to measure critical system files as well (e.g. the libraries in /lib), making a rootkit’s job even harder.
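The sealing idea can be sketched in a few lines of plain Python. This is a toy model of the semantics only (a real TPM keeps the secret internal and the PCR update rule uses its own hash conventions); the boot-component names are made up for illustration:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Mimic a TPM PCR extend: new_pcr = H(old_pcr || H(measurement))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def pcr_from_boot_chain(components):
    """Fold a list of boot components (BIOS, bootloader, kernel...) into one PCR value."""
    pcr = b"\x00" * 20  # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

def seal(secret: bytes, pcr: bytes) -> tuple:
    """'Seal' a secret to a PCR value. In this toy model we just record the
    expected PCR; a real TPM holds the secret and releases it only on a match."""
    return (pcr, secret)

def unseal(blob: tuple, current_pcr: bytes) -> bytes:
    expected_pcr, secret = blob
    if current_pcr != expected_pcr:
        raise PermissionError("PCR mismatch: platform state changed, secret withheld")
    return secret

good_chain = [b"BIOS", b"bootloader", b"kernel-good"]
blob = seal(b"filesystem key", pcr_from_boot_chain(good_chain))

# Same boot chain: the key is released.
assert unseal(blob, pcr_from_boot_chain(good_chain)) == b"filesystem key"

# A different kernel image yields a different PCR, so unsealing fails.
tampered = [b"BIOS", b"bootloader", b"kernel-evil"]
try:
    unseal(blob, pcr_from_boot_chain(tampered))
except PermissionError as e:
    print("unseal refused:", e)
```

The key point is that the secret is bound to the whole measured chain: changing any one component changes every PCR value extended after it.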

The next step is to attest a machine’s integrity using the Integrity Measurement Architecture (IMA) logs, which contain a list of measurements of all files accessed by the root user during runtime.
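For the curious, the IMA measurement list is exposed as plain text (typically under /sys/kernel/security/ima/ascii_runtime_measurements) and is easy to parse. The sketch below assumes the ima-ng record layout (PCR index, template hash, template name, file data hash, path); the hash values in the sample line are placeholders, not real measurements:

```python
def parse_ima_line(line: str) -> dict:
    """Split one IMA measurement record into its fields.
    Assumed layout (ima-ng template): PCR, template hash, template name,
    file data hash as alg:hex, file path (which may contain spaces)."""
    pcr, template_hash, template, filedata_hash, path = line.split(maxsplit=4)
    alg, hexdigest = filedata_hash.split(":", 1)
    return {"pcr": int(pcr), "template": template,
            "alg": alg, "digest": hexdigest, "path": path}

# Illustrative record: the digests here are synthetic placeholders.
sample = "10 " + "1f" * 20 + " ima-ng sha256:" + "ab" * 32 + " /usr/bin/bash"
entry = parse_ima_line(sample)
print(entry["pcr"], entry["alg"], entry["path"])
```

An attestation service would walk every such record, replay the extends into an expected PCR value, and compare it against a signed TPM quote.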

Check it out at:

By Bryan Jacobson, Linux Technology Center.

While Virtualization offers many benefits, there can also be increased security risks. For example, consider a system running two hundred virtual images. All two hundred images are at risk if a flaw in the hypervisor (or configuration) allows any virtual guest to “break out” into the host environment and affect other virtual guests.

sVirt is a project to improve the security of Linux virtualization. sVirt applies the Mandatory Access Control (MAC) features of SELinux to strengthen the isolation between virtual images. sVirt works with KVM/QEMU and other Linux virtualization systems where the virtual image runs as a Linux user space process.
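Concretely, libvirt can assign each guest a unique SELinux category pair so that one compromised QEMU process cannot touch another guest’s image files. A minimal sketch of the per-domain configuration follows; the category values shown in the comment are illustrative, since in dynamic mode libvirt picks them itself:

```xml
<!-- Inside a libvirt domain definition -->
<seclabel type='dynamic' model='selinux' relabel='yes'/>

<!-- What libvirt effectively assigns at runtime (illustrative values):
     process label:  system_u:system_r:svirt_t:s0:c123,c456
     image label:    system_u:object_r:svirt_image_t:s0:c123,c456
     A guest confined to c123,c456 cannot open images labeled with a
     different category pair, even if normal file permissions allow it. -->
```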

sVirt is a community project, with founding authors from Red Hat: Daniel Berrange, James Morris, and Dan Walsh. sVirt is integrated with libvirt.

One of my favorite sVirt use cases is: “Strongly isolating desktop applications by running them in separately labeled VMs (e.g. online banking in one VM and World of Warcraft in another; opening untrusted office documents in an isolated VM for view/print only).” (From the 8/11/2008 sVirt project announcement at

The project announcement also identifies an excellent design goal: “Initially, sVirt should “just work” as a means to isolate VMs, with minimal administrative interaction. e.g. an option is added to virt-manager which allows a VM to be designated as “isolated”, and from then on, it is automatically run in a separate security context, with policy etc. being generated and managed by libvirt.”.

You can find a 48 minute video of James Morris’s February 2009 presentation on sVirt at

Slides from that presentation are at:

by Klaus Heinrich Kiwi, LTC Security team

The openCryptoki project, a PKCS#11 provider for Linux with support for software and hardware tokens, has released new versions of both the openCryptoki code itself and its associated library, libica.

  • Libica-2 is a major cleanup over previous versions. It has a new API and supports software fallback (via OpenSSL) when no crypto hardware is present. The current version (2.0.2) has bug fixes and improved code examples.
  • openCryptoki 2.3.0 includes support for Libica-2 and has a number of bug fixes and minor improvements.

openCryptoki is the most common way that PKCS#11-enabled applications (including Java JCE applications) can exploit cryptographic hardware in a Linux environment.

The Trusted Computing Group (TCG) Trusted Platform Module (TPM) specification v1.2 is now officially ISO/IEC standard 11889. The TCG has published a press release commemorating the event and the TCG president Scott Rotondo has written a blog entry on the importance of this accomplishment.

Congratulations and thanks to the TCG members who made this possible!

By Bryan Jacobson, Linux Technology Center.

Tyler Hicks (from our team) recently attended the 5/25-29 Ubuntu Developers Summit for Karmic Koala in Barcelona, Spain.

Some of Tyler’s observations on Security topics:

  • There are quite a few eCryptfs users out there and they are generally happy with the version shipped in Jaunty. Most were using the encrypted home feature, but some wanted more flexibility and had custom setups.
  • eCryptfs encrypted swap is on the roadmap for Karmic.
  • Michael Rooney has been working on graphical applications to complement some of the eCryptfs userspace tools that are currently bound to the command line.
  • Tyler held an eCryptfs roadmap talk about future eCryptfs features: eCryptfs on top of popular network filesystems, improved key management, and a call for someone interested in completing the eCryptfs GPG key module.

Some general observations from Tyler:

  • Ubuntu would like to be the premier guest available in Amazon EC2.
  • Ubuntu users will soon have a daily build of the virtualization stack available, which is a big win for both the upstream developers and the users.
  • Dustin Kirkland gave a talk on leveraging the cloud for data center power savings.
  • The Ubuntu kernel team committed to removing non-upstream kernel code that no one is using anymore.

See the whole story on Tyler’s blog at:

By Debora Velarde, IBM Linux Technology Center

Someone recently pointed me to a study on the Open Source Trusted Computing Software Stack which was sponsored by The German Federal Office for Information Security (BSI). The study titled “Introduction and Analysis of the Open Source TCG Software Stack TrouSerS and Tools in its Environment” was performed by Sirrix AG security technologies. The study is available in English from the BSI web site. Since the study was published on the BSI site a year ago, some of the information is a little outdated. But it is still a good read for anyone trying to understand the different components that make up the Trusted Computing Software Stack and the relationship between the different components.

The study covers many of the components that I was already familiar with: TrustedGRUB [1], GRUB-IMA [2], the Linux TPM Device Driver [3], TrouSerS [4], TPM Tools [5], and the OpenSSL TPM Engine [6]. However, the study also covered some items that I hadn’t known about prior to reading it: the Open Secure LOader (OSLO) and the TPM Manager. OSLO is a security-enhanced bootloader that uses the Dynamic Root of Trust for Measurement [7]. TPM Manager is a graphical user interface for managing the TPM which Sirrix AG helped to develop [8]. One item the study does not cover is Hal Finney’s Privacy CA, which Emily blogged about back in January of 2008. For each component included, the study provides an overview, some install and configuration information, and an analysis of the quality of the implementation. The quality analysis includes details such as implementation language, lines of code, whether the code is well commented, and available documentation and support such as mailing lists.

In the “Compliance and Interoperability” chapter, the study takes a look at each of the components, focusing on their compliance with the different specifications. Next, the study includes results from testing the components’ interoperability with SELinux [9], the Xen hypervisor [10], and the Turaya security kernel [11]. If you’ve never heard of the Turaya security kernel, you’re not alone. Information about Turaya is available on the Sirrix AG web site.

In the final chapter, the study draws some conclusions about the Open Source Trusted Computing Software Stack. It states that “the most important building blocks” are “available and robust enough to be used in a wide variety of security-critical services and applications”. It goes on to note, however, that there is currently no application that actually takes advantage of this trusted computing technology. The study also concludes that the results from the interoperability testing with SELinux, Xen, and Turaya are “high enough to realize TC-enabled applications on top of them.” Finally, the study closes by discussing some open issues, including suggestions for improvement.

Related Links:
[1] TrustedGRUB:
[2] GRUB-IMA:
[3] Linux TPM Device Driver: now part of the Linux kernel
[4] TrouSerS:
[5] TPM Tools:
[6] OpenSSL TPM Engine:
[7] Open Secure LOader:
[8] TPM Manager:
[9] SELinux:
[10] Xen:
[11] Turaya:

By Bryan Jacobson. As always, the following are my personal opinions.


“Product X”

I recently heard about an authentication product; let’s call it “Product X”. According to their website:

Product X . . . implements the equivalent of a “one-time pad” system – the most secure communication possible.

Product X uses applied physics to defeat all known Internet authentication threats.

Sounds good, maybe too good.  Can we trust it?


Cryptographic Snake Oil


Serge Hallyn introduced me to the term “cryptographic snake oil”, which is explained at


Good cryptography is an excellent and necessary tool for almost anyone. Many good cryptographic products are available commercially, as shareware, or free. However, there are also extremely bad cryptographic products which not only fail to provide security, but also contribute to the many misconceptions and misunderstandings surrounding cryptography and security.


Why “snake oil”? The term is used in many fields to denote something sold without consideration of its quality or its ability to fulfill its vendor’s claims. This term originally applied to elixirs sold in traveling medicine shows. The salesmen would claim their elixir would cure just about any ailment that a potential customer could have. Listening to the claims made by some crypto vendors, “snake oil” is a surprisingly apt name.


The snake-oil-faq is a fun website with a lot of information.  Regarding “one-time-pads” it says: 

A vendor might claim the system uses a one-time-pad (OTP), which is provably unbreakable.


Snake oil vendors will try to capitalize on the known strength of an OTP. But it is important to understand that any variation in the implementation means that it is not an OTP and has nowhere near the security of an OTP.

 What are One-time-pads, and why are they “unbreakable”?

A one-time-pad is a key as long as the message. Each byte of the OTP is generated with an unpredictable random process.

The sender and receiver each need a copy of the OTP and must ensure no one else has a copy. The OTP should be physically exchanged, not transmitted.

Each byte of the OTP is only used once – so there is no “statistical pattern” that an adversary could use to crack the message.  (More info is at:

The unbreakability of one-time-pads rests on three factors:

1. Every byte in the OTP is generated by a truly random (unpredictable) process.

2. Every byte in the OTP is used only once.

3. The sender and recipient ensure that no one else could have a copy of the pad.

When these are true, the OTP is unbreakable – there is no vulnerability that can be exploited.
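The mechanics are simple enough to show in a few lines of Python. This is a minimal sketch, not a production cipher: the message and pad below are my own examples, and the last stanza illustrates *why* a correctly used OTP is unbreakable (any same-length plaintext is consistent with the ciphertext under some pad):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(plaintext), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(plaintext, pad))

# Decryption is the same XOR applied again.
otp_decrypt = otp_encrypt

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # one truly random byte per message byte

ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message

# For ANY other plaintext of the same length there exists a pad that
# "decrypts" the ciphertext to it, so the ciphertext alone reveals
# nothing but the message length.
other = b"defend at dusk"
fake_pad = bytes(c ^ m for c, m in zip(ciphertext, other))
assert otp_decrypt(ciphertext, fake_pad) == other
```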


How Product X works (I think)

Note: This is not a comprehensive evaluation of “Product X”, but rather my personal quick comparison of the  information on their website to One-time-pads.  Their website does not have a complete technical description, so I’ve made some assumptions that could be inaccurate.

 If I understand correctly, “Product X” works like this:

 – “Product X” uses a USB device and some software to provide secure authentication (login) from the user’s client system to a remote server.

– The user supplies a User ID and a Password on the client system.

– The User ID is sent to the server software, which selects an “index” that is sent back to the client.

– The “index” and secure information in the USB device create a “one-time password”, claimed to be equivalent to a One-time-pad.

– The “one-time password” is used to securely transmit the User’s password to the server.


Is “Product X” the equivalent of a one-time-pad?

 Let’s look at the factors that make one-time-pads unbreakable:

1. Every byte in the OTP is unpredictable.

I will assume they got this right. You can use a hardware random number generator, or several other techniques.

2. Every byte in the OTP is used only once.

I don’t think this is the case. I believe the “index” sent back from the server works with the USB device to “randomly” select a pad. If enough logins happen, eventually pads will get re-used.

The Snake Oil website says:

OTPs are seriously vulnerable if you ever reuse a pad. For instance, the NSA’s VENONA project [4], without the benefit of computer assistance, managed to decrypt a series of KGB messages encrypted with faulty pads. It doesn’t take much work to crack a reused pad.
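The danger of reuse is easy to demonstrate. When two messages share a pad, XORing the two ciphertexts cancels the pad completely, leaving only the XOR of the two plaintexts, with no key material in the way. A minimal sketch (messages are my own examples):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(16)
p1 = b"meet me at noon "
p2 = b"send more money "

c1 = xor(p1, pad)
c2 = xor(p2, pad)

# The pad cancels out: c1 XOR c2 == p1 XOR p2.
assert xor(c1, c2) == xor(p1, p2)

# An attacker who knows (or guesses) one plaintext recovers the other outright.
recovered_p2 = xor(xor(c1, c2), p1)
assert recovered_p2 == p2
```

In practice attackers don’t even need a full known plaintext; statistical techniques on p1 XOR p2 (which is what VENONA exploited) are usually enough.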

How soon are pads reused?  The “Product X” website mentions “billions”, but doesn’t give specifics.

3. The sender and recipient ensure that no one else could have a copy of the pad.

I don’t think this is the case.  I believe all users share the same set of pads (otherwise the remote server would need a huge amount of per-user data).

However, I believe the role of the USB device is to scramble the pad selection on a per-user basis. I think security experts agree that a device like this (assuming it is well implemented), holding a physically secure secret, provides significant security advantages.

So, the strength of “Product X” is based on:

– Could an adversary detect re-use of a pad?

– Could an adversary subvert the secret in the USB device?

This is the point of the “Snake Oil” FAQ.  The strength of “Product X” is based on its own implementation details – not the “unbreakable” strength of one-time-pads.


I hope users of “Product X” also understand that it  *ONLY* provides special security for the authentication step (the communication of the password).   It does not help with the rest of the communication between the client and the server.


Since One-time-pads are so dang secure, why aren’t they used for everything?

OTPs have two important limitations:

– They must not be reused, and need to have as many bytes as the messages they are encoding.  This is not practical if you’ve got gigabytes going back and forth every day.

– There must be some other secure mechanism to get the pad from one party to the other.  That’s hard to do if you’re communicating with someone you’ve never met before (common on the web).


The Snake Oil FAQ lists many other things to watch out for, such as:

  • Secret Algorithms
  • Revolutionary Breakthroughs
  • Experienced Security Experts, Rave Reviews, and Other Useless Certificates

Intel has done a study on the costs associated with a stolen or lost laptop. One of the most interesting aspects of the study is that they were able to quantify the amount a company saves per lost laptop when the confidential data on it is encrypted.

I’d recommend using eCryptfs if you are running Linux on your laptop.

by Klaus Heinrich Kiwi, IBM LTC Security Team.

In the Information Security world, authentication and authorization are orthogonal concepts:

  • Authentication refers to the act of correctly identifying a user or other entity, e.g. making sure a user is who they claim to be. This is often done by associating passwords or keys with user accounts.
  • Authorization refers to the act of granting certain users access to certain services or resources, e.g. allowing the user john_doe to read the file /foo/bar. This is usually done by mapping users and groups to resources through the use of permissions.

Kerberos is a network authentication protocol aimed at providing secure and reliable authentication semantics over an insecure (open) network. In a nutshell, it relies on symmetric-key cryptography and on a trusted third party to provide mutual authentication between two entities (called principals in Kerberos nomenclature). This means that in a scenario where a user is authenticated to a network service, not only can the service be sure of the user’s identity, but the user can also be sure that they are communicating with the right server. All of this is done without exposing cleartext passwords or keys on the network.
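The core trick can be sketched in plain Python. This is a drastically simplified model, not the real protocol: the toy SHA-256 keystream cipher stands in for Kerberos’s AES encryption types, and nonces, ticket lifetimes, and the TGS exchange are all omitted. It shows only the central idea, that a trusted third party (the KDC) hands the same session key to both sides under their respective long-term keys:

```python
import hashlib, hmac, os, time

def keystream_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter keystream.
    Illustrative only; real Kerberos uses standardized encryption types."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))  # self-inverse

# The KDC knows every principal's long-term key.
kdc_db = {"alice": os.urandom(32), "fileserver": os.urandom(32)}

def kdc_issue_ticket(client: str, service: str):
    """KDC invents a fresh session key and returns it twice: once encrypted
    for the client, once inside a ticket encrypted for the service."""
    session_key = os.urandom(32)
    for_client = keystream_crypt(kdc_db[client], session_key)
    ticket = keystream_crypt(kdc_db[service], session_key)
    return for_client, ticket

# --- protocol run ---
for_client, ticket = kdc_issue_ticket("alice", "fileserver")
session_key = keystream_crypt(kdc_db["alice"], for_client)  # alice's copy

# Client -> service: ticket plus an authenticator proving she holds the key.
ts = str(int(time.time())).encode()
authenticator = hmac.new(session_key, ts, hashlib.sha256).digest()

# The service opens the ticket with its own long-term key and checks the proof.
svc_session_key = keystream_crypt(kdc_db["fileserver"], ticket)
assert hmac.compare_digest(
    hmac.new(svc_session_key, ts, hashlib.sha256).digest(), authenticator)

# Service -> client: a reply proving it, too, could open the ticket (mutual auth).
reply = hmac.new(svc_session_key, ts + b"+1", hashlib.sha256).digest()
assert hmac.compare_digest(
    hmac.new(session_key, ts + b"+1", hashlib.sha256).digest(), reply)
print("mutual authentication succeeded; no long-term key crossed the wire")
```

Note that only ciphertexts and HMACs travel between the parties; the long-term keys never leave the client, the service, or the KDC.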

The Kerberos Protocol is a standard (RFC 4120) with different implementations such as Microsoft’s Active Directory, Heimdal, the AFS kaserver and the Open Source MIT-Kerberos implementation.

LDAP, on the other hand, is an information retrieval protocol for accessing special-purpose databases called Directories. Directories are usually optimized for reading (queries) as opposed to writing operations (inserts), thus they are often used in write-once, read-many scenarios. This optimization aspect, together with the hierarchical way objects are organized in the database, makes LDAP an ideal choice for performing the mapping operations an authorization system needs.
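That hierarchy is visible in a small LDIF sketch; the names and attributes below are illustrative, not taken from any particular deployment:

```ldif
# The organization sits at the top of the tree...
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example Corp
dc: example

# ...users hang off an organizational unit beneath it...
dn: ou=people,dc=example,dc=com
objectClass: organizationalUnit
ou: people

# ...and each entry's distinguished name (dn) encodes its path from the root.
dn: uid=john_doe,ou=people,dc=example,dc=com
objectClass: posixAccount
uid: john_doe
cn: John Doe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/john_doe
```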

LDAP is also a standard (RFC 4510, RFC 3494 among others) with numerous implementations such as the Open Source OpenLDAP and the IBM Tivoli Directory Server, aimed at enterprise use.

Since release 1.6 of the Open Source MIT Kerberos (krb5) implementation, it is possible to combine the powerful authentication aspects of the Kerberos protocol with the reliability and scalability provided by LDAP authorization. This feature is included in recent enterprise distributions such as the Red Hat Enterprise Linux 5 series and Novell SUSE Linux Enterprise Server 11 and later, giving those platforms the possibility to benefit from combining the Open Source MIT Kerberos implementation with the enterprise features of IBM Tivoli Directory Server.
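Wiring krb5 to an LDAP back end is largely a matter of KDC configuration. A hedged sketch of the relevant kdc.conf stanzas follows; the realm, DNs, server address, and file path are illustrative placeholders:

```ini
[realms]
    EXAMPLE.COM = {
        database_module = ldap_backend
    }

[dbdefaults]
    # Subtree where the KDC stores its principal entries.
    ldap_kerberos_container_dn = cn=krbContainer,dc=example,dc=com

[dbmodules]
    ldap_backend = {
        db_library = kldap
        ldap_servers = ldaps://ldap.example.com
        # Identity the KDC binds as, plus its stashed credentials.
        ldap_kdc_dn = cn=kdc-service,dc=example,dc=com
        ldap_service_password_file = /etc/krb5kdc/service.keyfile
    }
```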

It was with the intention of demonstrating how the above scenario can be achieved that I wrote a Blueprint covering the subject of using MIT Kerberos with an IBM Tivoli Directory Server backend.

Blueprints are documents describing a detailed plan of action for a specific task involving IBM hardware or technology. They provide a step-by-step description showing the exact actions needed to perform a certain task. Those steps are written with the expertise of the software engineers who actually work on development, and are also tested for correctness inside IBM labs – an IBM-branded HOWTO.

Besides the above Blueprint, please check out the other publications I’ve authored or co-authored, including the Enterprise Multiplatform Auditing Redbook and my Logical Volume Management developerWorks article.

And as always, feedback is greatly appreciated!

Klaus Kiwi