- Authentication refers to the act of correctly identifying a user or other entity, e.g. making sure a user is who they claim to be. This is often done by associating passwords or keys with user accounts.
- Authorization refers to the act of granting certain users access to certain services or resources, e.g. allowing the user john_doe to read the file /foo/bar. This is usually done by mapping users and groups to resources through the use of permissions.
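A minimal Python sketch can make the orthogonality concrete (the user names, password table, and permission map below are invented for illustration, not any real system's API):

```python
import hashlib

# Toy credential store: username -> SHA-256 of the password.
# (Illustrative only; a real system would use a salted, slow hash.)
CREDENTIALS = {"john_doe": hashlib.sha256(b"s3cret").hexdigest()}

# Toy permission map: username -> set of files that user may read.
PERMISSIONS = {"john_doe": {"/foo/bar"}}

def authenticate(user, password):
    """Authentication: is this user who they claim to be?"""
    return CREDENTIALS.get(user) == hashlib.sha256(password).hexdigest()

def authorize(user, resource):
    """Authorization: may this (already authenticated) user access the resource?"""
    return resource in PERMISSIONS.get(user, set())
```

Note that the two checks are independent: a user can authenticate successfully and still be denied access to a particular file.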
- The Fedora Weekly News Issue 114 (dated Dec. 31, 2007) describes three “SELinux Rants” along with the response from the Fedora community. Choice quote: “…suggested that rather than blame SELinux for complexity it was better to realize that it was describing the complex interactions between different pieces of software.” Personally, I disagree with this sentiment. I think that our tools should abstract away some of the complexity rather than reflecting the complexity up to the user. I understand that details get lost during abstraction which can be detrimental to security, but if there cannot be some level of secure abstraction, then the tool is not going to be usable by the average user/administrator. Thanks to Oisin Feeley for this excellent synopsis of the threads.
I was given 3-5 minutes today to express my thoughts on the future of Linux security. I was the token non-kernel developer in the room, so I wanted to bring a different perspective - security from the point of view of application developers, of users, and focused on recent events. I’m not sure what I said, but here is what I wrote to prepare for the session.
Kudos to Elena Reshetova for doing such a great job leading the panel discussion and keeping the discussion flowing! Kudos also to panel members Dmitry Vyukov, Christian Brauner, Nayna Jain, and Andrew Lutomirski who I count among my heroes for all that they do for Linux security.
A recent article from Wired: “Hacker Eva Galperin Has a Plan to Eradicate Stalkerware” about Eva Galperin’s efforts to get antivirus vendors to appropriately label the spyware used by perpetrators of Intimate Partner Violence (IPV) caught my eye and made me wonder whether the Mobile matrix of the MITRE ATT&CK framework contains all of the techniques used by this malware.
Twitter was buzzing. SwiftOnSecurity posted a series of tweets praising MITRE ATT&CK: “If I had MITRE ATT&CK when I started… you have no idea the value of this.” She was right, I had no idea the value, but I wanted to learn, so I headed over to the MITRE ATT&CK website to check it out. I took a look at the MITRE ATT&CK matrix, clicked on a few of the links, and immediately relegated it to the backlog of things that I want to look at later, but which somehow never seem to make it to the top of my TODO list. This article is meant to save you from that same fate by showing an example of applying the MITRE ATT&CK framework to recent threat intelligence to get you past the first hurdle of learning the framework.
Security Week published my latest opinion piece about Developing Below the Security Poverty Line. I love the visceral impact of Wendy Nather’s phrase “security poverty line”. I wish we were all above the poverty line, using effective SDLC processes, but it sadly isn’t the case yet as the Black Duck survey vividly shows.
David Wheeler and I promoted the CII Best Practices Badge on FLOSS Weekly with Randal Schwartz and Guillermo Amaral. It was a fun show to do despite my aversion to video. And by the end of the day, we already had an issue posted by someone who watched the show, so it is definitely reaching the right audience. I’ve been a fan of FLOSS Weekly since I first heard about OpenROV on the show.
Gunnar interviewed Dr. David A. Wheeler and me about the CII Best Practices Badge program for an episode of The Dave and Gunnar Show called “Badge of Open Source Honor“. With a little editing, it even turned into something that I can listen to without cringing. 🙂 Thanks, Gunnar!
SecurityWeek just published my latest article “No Exit: The Case for Moving Security Information Front and Center“. “No Exit” is a reference to Sartre‘s existential play where three people wind up locked in a room together for eternity driving each other crazy. They are in hell. They represent developers, QA, and security people for the purposes of this article (or three random devops guys, your choice).
Security Week has published an article that I wrote called, “Establishing Correspondence Between an Application and its Source Code; How Combining Two Completely Separate Open Source Projects Can Make Us All More Secure“. I would love to see this concept come to fruition. IBM Research has had a long-term vision for enabling this type of integrity; years after I first heard about it, it still astounds me how far ahead of their time they were and how durable their vision has been. The Debian Reproducible Builds project likewise amazes me because the leaders fearlessly took on a huge mountain of work and are making it happen. The glue piece is still missing. Someone will need to stand up and be willing to sign the file hashes with a recognizable and valuable key, but we are inching closer to having the technology to ensure the integrity of the delivery chain between code and executable process. Yeah, yeah, yeah, there is still the problem of trusting the compiler and realistically being able to audit the source code, but solutions to the former problem have been posited and tools and techniques exist to deal with the latter (if you care enough to do it). We are inching closer.
Uber pulled out of Corpus Christi, Texas a couple of weeks ago. They are threatening to pull out of Austin if the vote in May goes against them. Venture capitalists are saying that Austin’s city council is “too hostile” and anti-tech because of the desire to regulate tech-enhanced old business the way that traditional old business is regulated. If you somehow haven’t heard, the debate in Austin (and elsewhere) is about whether security practices around hiring Uber/Lyft drivers should be the same as security practices around hiring taxi drivers. Effectively, Uber and Lyft are using their market clout to weaken security practices around only their own taxi services. Whenever cities don’t go along, they pull out and let the resulting market backlash force the city governments into weakening security. To do anything else is “anti-tech”.
I wrote a blog post for Linux.com on how to approach dynamic analysis of large projects. The tldr is to use afl if you can. If you can’t, then you will probably have to write your own tools.
After a lengthy hiatus during which I focused on building secure things on top of open source and with open source rather than on building actual open source, I’m back to focusing on open source security in my day job. I hope that will give me more time to focus on things that I would be willing to discuss here, on my blog. I also hope that I will be able to discipline myself and focus on technical topics, such as my most popular and wildly outdated post on maximum password length from eons ago. But I fear that I will never be able to wean myself entirely from the “someone is wrong on the Internet!” type of post, because they are fun, cathartic, and easy to write.
Don’t be jealous, but I now have the best job in security. Make no mistake, I still speak for myself and not for my employer.
USA Today has two eye-popping stories on the NSA crypto capabilities. The first story is entitled “Why NSA’s decrypting is OK” in their mobile app and The Case Supporting the NSA’s PRISM decrypting in their online version. The title already gives an idea of the slant that the article will take. The article starts with a bold statement “A consensus is gelling that the NSA — in using brute-force password hacking techniques, cracking into Virtual Private Networks and Secure Sockets Layer services and taking steps to weaken certain inherently weak encryption protocols – is simply doing what the NSA has always done, and was, in fact, created to do: keep the U.S. competitive in the spy-vs-spy world.” The article never defends this assertion and it is wildly at odds with the consensus that I see gelling on Facebook and on the technical cryptography mailing lists which I browse. To give the author the benefit of the doubt, I could be convinced that this is a consensus of NSA mouthpieces.
Apropos of nothing, this squiggled my funny bone this morning: Pew Research reports that there is a glass ceiling for female white collar criminals. It sounds like they are doing it wrong: “More than half of all women (56%) did not personally profit from the fraud”. Some backbone is needed: “Still others said they knowingly committed illegal acts simply because they were instructed to do so by a superior”. Sigh. They couldn’t at least ask for a candy bar? I heard the story on NPR this morning during my commute.
The hack of iOS devices by a malicious charger is one of the most interesting stories from Black Hat this week. Pretty amazing that the chargers have this much power yet are not authenticated via a solution such as ORIGA from Infineon. (I do not now and have never worked for Infineon. I’m sure that there are many more solutions like this from other companies, this is just one at hand that would serve to fix this vulnerability without giving up any of the functionality. Whether or not a charger needs that functionality is another kettle of worms.)
University of Texas demonstrates GPS signal spoofing quite dramatically, by sending a private yacht off course and thus “hijacking” it.
Another source with an ad wall and less technical detail but with the following key quote:
These consumer spoofing devices, the sale of which has been banned in the U.S., can still be legally purchased in the UK, and are available for as cheap as $78 (£50).
And, of course, North Korea has already experimented with the technology, reportedly blocking GPS signal in South Korea on several occasions. One such attack launched in 2012 affected 1,016 aircraft and 254 ships.
Article from May 2013 from Azimuth Security on Exploiting Samsung Galaxy S4 secure boot.
Examining the check_sig() function in more detail revealed that aboot uses the open-source mincrypt implementation of RSA for signature validation. The bootloader uses an RSA-2048 public key contained in aboot to decrypt a signature contained in the boot image itself, and compares the resulting plaintext against the SHA1 hash of the boot image. Since any modifications to the boot image would result in a different SHA1 hash, it is not possible to generate a valid signed boot image without breaking RSA-2048, generating a specific SHA1 collision, or obtaining Samsung’s private signing key.
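The verification flow described above can be sketched in Python. To be clear, this is a simplification of what mincrypt's check_sig() actually does, not its real API: real aboot signatures use PKCS#1 v1.5 padding, while this sketch simply takes the low 20 bytes of the recovered plaintext as the embedded digest.

```python
import hashlib

def check_sig(boot_image: bytes, signature: bytes, e: int, n: int) -> bool:
    """Simplified sketch of an aboot-style RSA signature check.

    'Decrypt' the signature with the RSA public key (e, n) and compare
    the embedded digest against the SHA-1 hash of the boot image.
    """
    sig_int = int.from_bytes(signature, "big")
    plain = pow(sig_int, e, n)  # RSA "decrypt" with the public exponent
    recovered = plain.to_bytes((n.bit_length() + 7) // 8, "big")[-20:]
    return recovered == hashlib.sha1(boot_image).digest()
```

Any modification to the image changes its SHA-1 hash, so forging a valid image requires breaking RSA, finding a targeted SHA-1 collision, or obtaining the private key, exactly as the quoted analysis says.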
The Trusted Computing Group has released a draft version of the new Trusted Platform Module specification for public review and comment: TPM 2.0. Five years+ in development, the spec contains a lot of new material to allow for hash and algorithm agility and enhanced authorization support. (Details of what is included in this new version can be found in the FAQ.) Comments can be submitted to a mailing address created especially for this review which can be found on the first page of each part of the specification. Weighing in at 1,397 pages, you better get started now, if you want to have any chance of completing your review before TPM 3.0 comes out. That reminds me… I have some work that I have to go do.
My first experience with Gnome 3 is that it frowns at me for not living up to its expectations.
Time reports that Bin Laden’s computer contains a “mother lode of intel“. The article ends with the question: “The official posed the same question that’s likely on plenty of other people’s minds: ‘Can you imagine what’s on Osama bin Laden’s hard drive?’”
by George Wilson, IBM Linux Technology Center
I was recently reading through the NIST “Draft Guide to Security for Full Virtualization Technologies” (SP 800-125 draft) [http://csrc.nist.gov/publications/drafts/800-125/Draft-SP800-125.pdf]. It discusses various considerations relating to hypervisor security. One section that particularly struck me was the comparison of bare metal vs hosted hypervisors. These are also known as Type I and Type II hypervisors, respectively. The document states that choosing between them is a critical security decision. That started me wondering if it is actually true that Type I hypervisors offer superior security to Type II hypervisors. While a Type I hypervisor may have a small kernel, it relies on and trusts an entire OS instance in the resource-owning partition (Dom0 in Xen parlance) for device access.
by Rajiv Andrade, Linux Technology Center
Since the foundation of the Trusted Computing Group, previously named the Trusted Computing Platform Alliance, the pillars required to meet most of today’s security challenges have been under heavy development.
The Trusted Platform Module and the Trusted Software Stack are two of these. Now that we have the required enablement in our hands, the next step is to develop the detailed, implementable use cases that were originally envisioned when the Trusted Computing Initiative was started.
By Bryan Jacobson, Linux Technology Center.
While Virtualization offers many benefits, there can also be increased security risks. For example, consider a system running two hundred virtual images. All two hundred images are at risk if a flaw in the hypervisor (or configuration) allows any virtual guest to “break out” into the host environment and affect other virtual guests.
Steve Hanna has written an excellent cloud security overview article A Security Analysis of Cloud Computing which talks about how trusted computing can help solve some of the cloud security problems.
Here are seven links that are worth the time that it takes to read them if you are interested in systems security.
The September 2009 edition of the Communications of the ACM had a fascinating article called Spamalytics: An Empirical Analysis of Spam Marketing Conversion. Aside from the catchy title, this article is well worth a read. You will definitely understand more about spam after doing so. Given how much fun the authors must have had doing the background research for this article, it seems a shame to quibble with it, but there were a couple of things that set my teeth on edge so I’ll do so anyway. Besides, it gives me a reason to point out this article which really is a fun read. With that said, here are the things about the article that affected me like nails on a chalkboard.
By Bryan Jacobson, Linux Technology Center.
Tyler Hicks (from our team) recently attended the 5/25-29 Ubuntu Developers Summit for Karmic Koala in Barcelona, Spain.
AMTU 1.07 has just been released on AMTU’s SourceForge home. This release incorporates a patch from Joy Latten to add IPv6 interfaces to the list of interfaces probed to test networking devices. It also contains a small fix to the memory separation routine.
By Debora Velarde, IBM Linux Technology Center
Someone recently pointed me to a study on the Open Source Trusted Computing Software Stack which was sponsored by The German Federal Office for Information Security (BSI). The study titled “Introduction and Analysis of the Open Source TCG Software Stack TrouSerS and Tools in its Environment” was performed by Sirrix AG security technologies. The study is available in English from the BSI web site. Since the study was published on the BSI site a year ago, some of the information is a little outdated. But it is still a good read for anyone trying to understand the different components that make up the Trusted Computing Software Stack and the relationship between the different components.
The study covers many of the components that I was already familiar with: TrustedGRUB, GRUB-IMA, the Linux TPM Device Driver, TrouSerS, TPM Tools, and the OpenSSL TPM Engine. However, the study also covered some items that I hadn’t known about prior to reading it: the Open Secure Loader (OSLO) and the TPM Manager. OSLO is a security-enhanced bootloader that uses the Dynamic Root of Trust for Measurement. TPM Manager is a graphical user interface for managing the TPM, which Sirrix AG helped to develop. One item the study does not cover is Hal Finney’s Privacy CA, which Emily blogged about back in January of 2008. For each component included in the study, it provides an overview, some install and configuration information, and an analysis of the quality of the implementation. The quality analysis includes details such as implementation language, lines of code, whether the code is well commented, and available documentation and support such as mailing lists.
By: Bryan Jacobson (firstname.lastname@example.org) As always, the following are my personal opinions.
Intel has done a study on the costs associated with a stolen or lost laptop. One of the most interesting aspects of the study is that they were able to quantify how much a company saves when the confidential data on the lost laptop is encrypted. The grand total is
by Klaus Heinrich Kiwi, IBM LTC Security Team.
In the Information Security world, authentication and authorization are orthogonal concepts:
Malware (malicious software), not virus, is the general term for software that is designed to behave badly. Malware encompasses the full range of viruses (boot sector, stealth, polymorphic, multipartite, self-garbling), worms, trojan horses, logic bombs, rootkits, etc. As you can see from the list above, malware comes in many shapes and sizes. We previously talked about viruses, so let’s briefly address some of the other forms of malware.
by George Wilson «email@example.com», IBM Linux Technology Center
Operating system security features are notoriously difficult to explain. Folks who work on security have their own specialized vocabulary, which serves well to communicate concisely with other members of our community. However, it can be difficult to translate concepts into everyday language. Have you ever tried talking about SELinux to those who have never been exposed to MAC? You have to provide a large amount of background material simply to describe what SELinux is, let alone what interesting things can be done with it.
In brief, some cool links:
By Bryan Jacobson, Linux Technology Center, IBM ( firstname.lastname@example.org).
As a special treat, some members of IBM’s Linux Technology Center security team have agreed to be guest bloggers for the Open Source Security Blog. You can expect to hear interesting, insightful, educational and just plain fun ideas on eCryptfs, labeled IPSec, trusted computing, PKCS#11, and general Linux security topics. I’m happy to announce the following lineup of guest bloggers coming soon!
Currently, the best source of information on eCryptfs performance is by Phoronix Global using the Phoronix Test Suite. The Phoronix Test Suite is included in Ubuntu 9.04 Jaunty, and the results for eCryptfs in the Jaunty beta are posted on the Phoronix website. The results are surprisingly good for the compilation and encoding tests. The IOzone write test shows some pain.
During my lab admin days as an undergrad, people used to come into the computer lab with virus infected 5 1/4″ diskettes and (inadvertently) try to infect the lab machines. (It doesn’t feel like it was THAT long ago!) Next, viruses were commonly spread attached to email. More recently, viruses have been propagated through music sharing. All of these infection vectors have one thing in common – a proper virus requires a host to carry the malicious code. Colloquially many people have become used to calling all malware viruses, but this is not correct terminology and I do believe that it is important to be pedantic on this point.
I ran my previous blog entry past a co-worker and he said, in effect, all you are saying is that you don’t think that anti-virus is necessary on Linux. What about all those rants out there from people who believe so strongly that they need anti-virus that they are convinced they got a virus that anti-virus software could have prevented?
The question about Linux security that is most frequently asked of me is
What anti-virus software do you recommend for Linux?
While looking at SSL/TLS in a little more detail, I noticed that many websites default to RC4, which Firefox characterizes as “High-grade Encryption” (Tools->Page Info, General and Security Tabs) but which Wikipedia characterizes with “RC4 has weaknesses that argue against its use in new systems”. RC4 is used because it is much faster than AES. (Web servers can drive 15-20% more traffic with RC4 (128) than with 3DES (EDE). Based on actual results, but YMMV.) Example websites using RC4 include my credit union, a well known online savings account provider, and my 401K provider. Algorithm negotiation is built into the TLS protocol, so you can tweak your Firefox configuration so that your browser no longer offers the RC4 cipher suites. To change your Firefox configuration, surf to about:config and promise to be careful. Search on rc4:
security.ssl2.rc4_128 default boolean false
security.ssl2.rc4_40 default boolean false
security.ssl3.ecdh_ecdsa_rc4_128_sha default boolean true
security.ssl3.ecdh_rsa_rc4_128_sha default boolean true
security.ssl3.ecdhe_ecdsa_rc4_128_sha default boolean true
security.ssl3.ecdhe_rsa_rc4_128_sha default boolean true
security.ssl3.rsa_1024_rc4_56_sha default boolean false
security.ssl3.rsa_rc4_128_md5 default boolean true
security.ssl3.rsa_rc4_40_md5 default boolean true
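The same pruning can be expressed outside the browser. As a sketch, Python’s ssl module can enumerate the cipher suites a client context would offer and drop the RC4-based ones, mimicking flipping those prefs to false (modern OpenSSL builds have already removed RC4, in which case the filter is a no-op):

```python
import ssl

# Build a default client-side TLS context and inspect its cipher list.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
offered = [c["name"] for c in ctx.get_ciphers()]

# Mimic setting every rc4 pref to false: drop any RC4-based suite.
without_rc4 = [name for name in offered if "RC4" not in name.upper()]
print(f"{len(offered) - len(without_rc4)} RC4 suites removed")
```

A context trimmed this way can then be used with `ctx.wrap_socket()` so that RC4 is never offered during negotiation.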
Thinking about the future and browsing Wired, I got distracted by two articles by Bruce Schneier. The first describes my favorite security concept ever: “turtles all the way down”! “The World Wide Web sits on top of a turtle, and then below that is an older turtle, and that sits on the older turtle. You don’t have to feel fretful about that situation — because it’s turtles all the way down. Now, we don’t have to think about it in that particular way. The word ‘turtles’ makes it sound absurd and scary, like a myth or a confidence trick.” OK, so Bruce is using it to describe an architecture here and not a security concept, but this may be my favorite turtle quote of all time. It achieves the heretofore unachievable: invoking the thought of turtles as scary!
Mike Halcrow has written a paper on Installing and configuring eCryptfs with a trusted platform module (TPM) key. This paper is available on IBM Systems Information Center along with a bunch of other step-by-step guides.
This paper describes how to use a TPM key directly with eCryptfs. It demonstrates the flexibility of eCryptfs’ pluggable key module framework. Since the TPM wasn’t designed to do bulk encryption, if you actually set eCryptfs up this way, you’ll get pretty low performance, but it is an interesting exercise nonetheless and if you have small bits of information that you want strongly protected, this does provide one good option. I hear that Mike is working on replicating this experiment with a wrappered key which should provide much better performance but requires a little additional code.
In addition to showing how to integrate the TPM with eCryptfs, this paper also contains step-by-step descriptions of ancillary operations, like how to enable encrypted swap in Red Hat Enterprise Linux 5.2 and how to get your TPM up and operational. This side content alone makes the paper useful.
I’ve been writing this blog for just over a year now. The year started out very strong, with some of my favorite posts coming early on. As my core job responsibilities moved beyond security, writing a security-focused blog has become more difficult, and I have posted much less frequently over the past several months.
My colleagues have written a comprehensive step-by-step guide to enabling disk encryption in your choice of RHEL 5.2 or SLES 10 SP2. This is pretty much as easy as it gets. If you have questions or comments about the paper, they also have an online forum for security discussions. I suggest the PDF version which packages the whole (short) paper up into a single, easily consumable whole.
Red Hat Enterprise Linux 5.2 was released today. That is significant news in and of itself, but I am especially excited because it contains Technology Previews of eCryptfs, TrouSerS, and tpm-tools! As Technology Previews, they are not yet supported for production use, but this is the first step to allow for experimentation and time for ripening. I’m happy to see Red Hat’s continued dedication to security. If you try these packages out in RHEL, I’d love to hear of any successes or problems that you encounter.
Fedora Weekly News continues to be a(n unexpectedly) great source for security content. I’ve recently been cleaning up the backlog of my email and have discovered nuggets of valuable information such as
In a major validation of the FLASK architecture, the OpenSolaris community has created a new project called Flexible Mandatory Access Control (fmac) to adapt the FLASK architecture to OpenSolaris. (FLASK is the architecture that forms the basis for SELinux.) Stephen Smalley will be one of the community leads. OSNews picked up the email thread today with some interesting comments.
One of the cool new features included in Red Hat Enterprise Linux 5 was VFS polyinstantiation. This work was in support of the Multi Level Security configuration. It allows files to exist in a directory at different security classifications. The subset of files visible to the user depends on the user’s clearance. There is an excellent description of the functionality in both section 4.1.2 of Extending Linux for Multi-Level Security by Klaus Weidner, George Wilson and Loula Salem, as well as Russell Coker’s article Polyinstantiation of directories in an SELinux system.
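As an illustration of how this is typically wired up, polyinstantiated directories are configured through pam_namespace in /etc/security/namespace.conf. The entries below follow the examples shipped with that module’s documentation; the exact paths, methods, and exempted users on your system may differ:

```
# polydir     instance_prefix       method  exempted_users
/tmp          /tmp-inst/            level   root,adm
/var/tmp      /var/tmp/tmp-inst/    level   root,adm
$HOME         $HOME/$USER.inst/     level
```

With the level method, each user sees an instance of the directory keyed to the MLS level of the accessing process, which is what makes the Multi Level Security configuration described above possible.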
Ed Felten this week released some research on defeating disk encryption by recovering keys from DRAM. His blog entry mentioned by name Bitlocker, FileVault and dm-crypt as implementations which can be defeated in this way. Some 70+ articles appeared over the next 24 hours discussing the attack. Of course, we all immediately pinged Mike Halcrow to hear his thoughts on the issue. Between this article and the one a few weeks ago “Encryption could make you more vulnerable”, he just isn’t feeling the love, so he sat down and pounded out his own blog response. In light of news stories such as these, it is well worth keeping in mind that a key motivator for server encryption is to ease disposition of obsolete hardware. It is just too easy to do it the wrong way if you don’t employ encryption.
Roy Fielding finally quit the OpenSolaris community today, see his resignation letter. The kettle finally boiled over and the realization came to many (but not all) that Sun is publishing their Solaris code for marketing purposes rather than creating an independent, community-led, open source project with the ability to make real decisions.
It’s been a little while since I have written in the blog. I’m still experimenting with how often to post to balance out the drivel with the interesting and the original. I have to say that I was a little surprised at how well received the “Best Security News Stories” line has been, so I will keep that up. If a story makes me want to run down the halls and tell my co-workers, I’ll post it here instead.
Russell Coker is running a security blogging contest in conjunction with LCA 2008. Only people who have never been employed to work on security, have their own blogs, and who write positive blog entries on a security topic are eligible. He’s looking for commercial sponsors and offering cash prizes. This looks like a very cool contest that will hopefully have the nice side effect of garnering complete coverage of all of the security topics at the conference for those of us who are not there. Thanks, Russell!
LinuxSecurity.com is running a fascinating retrospective on the Top 10 SELinux stories of 2007. It makes for fascinating reading and shows some of the issues around SELinux (complexity – #1 and #8), some of the progress that was made in 2007 (secure networking – #4, setools – #6), and some of the critical benefits of using SELinux (SELinux protection of Samba – #2 and #10). The top stories were chosen based on the number of hits they generated. I think that it is too bad that a story about a wiki (#3) beat out any story on the first Common Criteria certification of SELinux (not present on the list at all). The importance of the Common Criteria certification of SELinux in RHEL 5 is that it makes it more easily adoptable by the U.S. government which in turn makes Linux in general more easily adoptable by the government.
A longstanding limitation of doing remote attestation between “strangers” has been eased through some experimental work that Hal Finney recently announced on the TrouSerS user’s list. Hal has announced that he has created a Privacy CA at PrivacyCA.com. Question 2.1 of the TrouSerS FAQ contains a graphic showing the prerequisite pieces for doing remote attestation. Hal has filled in the Privacy CA and notes that Infineon does supply the Endorsement Credential. He also provides a “test and debug mode” so that users of other TPMs can still experiment with the service without the guarantee that they are using real TPMs. Up to now, attestation keys had to be exchanged via sneaker net (manual exchange and verification before attestation was possible) to enable machines to do remote attestation. Hal’s announcement represents a great leap forward in the usefulness of TPMs.
Oh boy, I thought I had quibbles with the news story on the Coverity announcement yesterday and today someone points out the worst piece of yellow journalism that I have seen in quite some time: Open Source Code Contains Security Holes. First the title is atrocious and this quote “the popular open source backup and recovery software running on half a million servers, were all found to have dozens or hundreds of security exposures and quality defects” may (have) be(en) accurate, but without context sounds worse than it really is. The truth, as George Wilson said, is that this is an article along the lines “And in other news, fire is hot and water is wet.” I personally consider this irresponsible journalism. They had to willfully ignore older stories based on information from Coverity and Carnegie Mellon such as Open Scrutiny of Open Source Code which contains the nugget “The average defect rate of the open source applications was 0.434 bugs per 1000 lines of code. This compares with an average defect rate of 20 to 30 bugs per 1000 lines of code for commercial software, according to Carnegie Mellon University’s CyLab Sustainable Computing Consortium.” This is simply yellow journalism whose primary intention is to drive traffic and raise the ire of open source fans! Harrumph! Outrageous!
Coverity has announced “Rung 2” and that 11 open source projects have achieved “Rung 2”. This means that they have resolved all Rung 1 defects found by the latest release of Coverity Prevent. There is news coverage at news.com: 11 open-source projects certified as secure which claims that the projects “have been certified as free of security defects”. The 11 projects with bragging rights are Amanda, NTP, OpenPAM, OpenVPN, Overdose, Perl, PHP, Postfix, Python, Samba, and TCL. The Coverity announcement itself says “resolved all of the defects identified at Rung 1”. Looking at the Rung 2 page, it appears to me that there are uninspected defects remaining at Rung 2 which may or may not represent actual defects (and/or actual security flaws), so I’m not sure that the news article’s claim is justified. I also would quibble with the use of the word “certified” which is at risk of becoming overused and rendered meaningless when applied in this context. Despite my quibbles with the news story, Coverity has done us all a major service by exercising their excellent source scanning tools on hundreds of open source projects and reporting the results in a controlled fashion. The 11 projects: Amanda, NTP, OpenPAM, OpenVPN, Overdose, Perl, PHP, Postfix, Python, Samba, and TCL, have done themselves proud by grinding through the reports and fixing defects found. Thanks to Homeland Security for sponsoring this effort, I appreciate this use of taxpayer money. Congratulations and a hearty Thanks! to Coverity and Amanda, NTP, OpenPAM, OpenVPN, Overdose, Perl, PHP, Postfix, Python, Samba, and TCL!
TruTV (formerly CourtTV) has created a new show on security testing called
Tiger Team. You can view the first episode online at the TruTV video website. Their “Share” feature yielded this link, but these links don’t tend to stay fresh long, so to find it click on New, then look down through the listings for Tiger Team (on page two as of Jan. 2). This show has been widely reported as an IT show, but the first episode is about pen testing a car dealership. Only one person on the team specializes in computer security; another specializes in social engineering. It shows them dumpster diving, social engineering, breaking in after dark (“daring late night break in”), casing the dealership, etc. Choice quote: “If there is any other team in the world who does what we do, hands down we are the best”. Don’t expect to learn anything from it, but it is highly amusing in the breathless reality-show kind of way and vividly demonstrates the security mindset.
The Trusted Computing Group has launched a new group blog. The actual bloggers haven’t yet been announced, but presuming that they will include some people who are already actively writing about Trusted Computing (say Steve Hanna, Marion Weber, Dave Challener, perhaps) it will be a blog worthy of attention.
When my daughter saw the OLPC, her face lit up. “What is that?” She immediately wanted to play with it. At 3.25 years old, she is well below the targeted age range, but she still loved the look and feel of it. She enjoyed the paint program, although it is still a little challenging for her. She really got into the picture books at the OLPC library. And she was totally thrilled by the Recorder. I got a great clip of her singing her ABCs. She also really got into TamTamMini and had great fun making noise. She is a great stress tester because her approach is to hit all of the buttons and see what happens. This has caused some interesting desktop configurations under KDE and Gnome. For the most part, Sugar took everything she threw at it and shrugged it off, but she was able to break TamTamMini by typing random characters in the Activity name field. It didn’t actually crash; it just stopped making music (noise).
Yay! The OLPC XO laptop arrived today. My husband called me at work to let me know that it is here. It is awesome, of course.
The box that the OLPC XO Laptop was shipped in:
Current and former co-workers Kent Yoder, Dave Challener, Ryan Catherman, Dave Safford, and Leendert van Doorn have written a book called
A Practical Guide to Trusted Computing. It’s now available for pre-order on Amazon and will be available on Jan. 7, 2008. The authors have been instrumental in the creation of the TCG specs and key open source software; for example, Dave led the TSS Working Group for years and Leendert was on the Board of Directors. I reviewed an early copy of the book almost exactly a year ago. My favorite parts of the version that I read were the chapters on TSS, along with the sample code showing how to use the TSS API, and the chapter on use cases for Trusted Computing (for the sheer fun of it). I think that it definitely lives up to its billing as a practical guide, and it provides a complete grounding in the concepts of trust, attestation, measurement, etc. that are foundational to Trusted Computing. It is very readable, and it is a faster read and shorter than it seems because of the reference information included. I haven’t yet seen the final version of the book, but I’m eagerly awaiting my copy from Amazon. Congratulations to the authors for sticking through the long haul and providing such a useful book!
My co-worker Serge Hallyn was in town the other day, so he popped by to tell us about file capabilities. I think that file capabilities are the missing link for making capabilities useful, and I’m tremendously excited that they will soon be generally available. File capabilities are a feature that allows a system administrator to add specific capabilities to an executable (stored in extended attributes, set using
setcap). This in turn means that if the necessary capabilities exist, then executables no longer have to be setuid root. Rather than having daemons start as root and drop privileges, if the proper file capabilities are set, they can just start as their regular user. The canonical example is ping. It is currently setuid root, but it only needs the cap_net_raw capability. Using file capabilities, you can remove the setuid bit and add the cap_net_raw capability, and you decrease the chance that ping can be used to subvert your system. Chris Friedhoff has an excellent page which describes how to use file capabilities in more interesting ways, for example with X and Samba.
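As a quick sketch of the ping example (hedged: the binary path varies by distribution, the commands require root, the filesystem must support extended attributes, and the libcap tools must be installed; working on a copy avoids touching the real binary):

```shell
# Work on a copy so the system ping is untouched (path varies by distro).
cp /bin/ping /tmp/ping-caps

# Drop the setuid-root bit.
chmod u-s /tmp/ping-caps

# Grant only the capability ping actually needs: raw network sockets.
# "+ep" marks the capability effective and permitted.
setcap cap_net_raw+ep /tmp/ping-caps

# Inspect the file's capability set.
getcap /tmp/ping-caps
```

After this, /tmp/ping-caps can open raw sockets when run by an ordinary user, without ever holding full root privilege.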
If you want to try out some of the Trusted Computing features but don’t want to add them to your running system, check out this version of Knoppix that Japan’s National Institute of Advanced Industrial Science and Technology (AIST) produced with IBM Tokyo Research Lab. It includes Grub-IMA, Linux-IMA, TrouSerS, tpm-tools and TPM Manager (by rub.de). More features are still being developed. Thanks to Seiji Munetoh for pointing this out to me. I downloaded it and tried it on my T42p and it is very clean and slick.
The NSA has published their Guide to the Secure Configuration of Red Hat Enterprise Linux 5. This is an excellent document that describes best practices for securing a Linux system – tailored to Red Hat Enterprise Linux 5. It starts with best practices, such as encrypting transmitted data and minimizing installed software. It then follows up with exact configuration recommendations, for example, the exact configuration option to prevent root from logging in directly via ssh. They do a pretty good job describing the rationale for the changes that they recommend (“The root user should never be allowed to login directly over a network, as this both reduces auditable information about who ran privileged commands on the system and allows direct attack attempts on root’s password.”). If you are responsible for the security of any Linux system (whether as a developer or an administrator), I highly recommend taking a look at this document and thinking twice about any decision that you make that runs counter to these recommendations.
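For instance, the recommendation quoted above maps to a single directive in /etc/ssh/sshd_config (shown as a configuration fragment; sshd must be reloaded after editing):

```shell
# /etc/ssh/sshd_config
# Refuse direct root logins over SSH; administrators log in as themselves
# and escalate with su or sudo, preserving an audit trail of who did what.
PermitRootLogin no
```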
Steve Hanna has written an excellent introductory article on Network Access Control (NAC) discussing the motivations for implementing NAC and how Trusted Computing can help further secure NAC. Trusted Computing works well here because while the endpoint can still lie, it gets noticed that the endpoint is lying even if the exact lie is not known. The lie is detected because the measurement log no longer matches the signed quote of the PCR values. IBM Research wrote an excellent paper in 2004 describing attestation in detail as implemented on a Linux system: The Role of TPM in Enterprise Security.
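The verifier's core check can be sketched in a few lines of shell (a simulation only: a real quote is signed by the TPM, and the measurement values below are made up; sha1sum and xxd stand in for the TPM's extend operation):

```shell
#!/bin/sh
# Replay a measurement log by simulating the TPM PCR extend operation:
#   PCR_new = SHA1( PCR_old || measurement )
# A verifier recomputes this over the reported log and compares the result
# with the PCR value in the TPM's signed quote; any mismatch means the log
# was altered or truncated -- the endpoint lied, even if we can't say how.
pcr=0000000000000000000000000000000000000000   # PCRs start as 20 zero bytes

for meas in \
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
    bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb   # hypothetical measurements
do
    pcr=$(printf '%s%s' "$pcr" "$meas" | xxd -r -p | sha1sum | cut -d' ' -f1)
done

echo "replayed PCR: $pcr"   # compare against the quoted PCR value
```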
IBM has announced plans to contribute to the Mifos open source microfinance software project. Microfinanciers loan small sums of money to the extremely poor to help them get businesses off the ground, benefiting not only the person who receives the loan but the entire community. Kiva, while not affiliated with Mifos to my knowledge, is one of the best known players in this space. It is a microfinance loan aggregator where individuals can loan small sums of money to projects that they select. The Mifos community seems to be quite well established and extremely active.
The European Network and Information Security Agency (ENISA) highlights Trusted Computing in the current issue of the ENISA Quarterly. There are four articles on Trusted Computing – one of which compares TC to automobile airbags. There is an interesting article on Trusted Computing from a European perspective, which covers the workshop by the same name held in Germany earlier this year. Another article touches on the OpenTC project’s goal of providing European citizens “informational self-determination” in a secure context. Also noteworthy is the call for papers for Trust 2008.
This combination of stories makes me crazy:
So, the One Laptop Per Child Get One, Give One program started this week and I ordered one for my kids. I can’t wait to get it, try it out, and see what my kids will do with it. I downloaded the ISO earlier this year and tried it out, and it seems pretty awesome. My secret hope is that early education (pre-k) software will really take off on Linux once more of these have been distributed.
According to HP Backs Red Hat in Government Biz Bid, “Lillestolen said, however, that HP has gone further than Big Blue by certifying a wider range of hardware.” Hopefully, this is just a mistake in the reporting and HP isn’t actually making such outrageous claims. As you can see in the Validation Report, HP tested on
As a security practitioner, you’ve got to love it when your company comes out with a line like “Security is our brand” and the press eats it up. Of course, security has always been our brand and, on the Open Source side, we have done some significant things to prove it. I’m speaking here of our multi-million dollar investment over the course of many years to Common Criteria certify Red Hat and Novell SUSE. We started out at EAL2, with the security functionality defined in our Security Target against the pre-existing security functionality in SLES 8. We got that evaluation done within 6 months when everybody was saying that it couldn’t be done – ‘Common Criteria certification takes years’, ‘open source can’t be certified’, they said. From that groundwork, we marched up the value chain to LSPP/RBACPP/CAPP at EAL4+ with RHEL5, when still (after 6 successful evaluations at progressive levels) people were saying that it couldn’t be done (although much more subtly now) – “The lack of this protection might prevent another evaluation target from passing this evaluation.”
This press item has been picked up all over the place – IBM announces an initiative to invest $1.5B in security development and marketing for 2008. This is seriously cool.
RSA London is going on this week, and professional blogger David Lacey is blogging that not much interesting is going on, but that he was very excited to meet Steve Hanna. Steve says that 2008 is going to be the year that Trusted Computing breaks out. I hope he is right! Gartner’s 2006 hype cycle still has Trusted Computing sliding into the trough. But the saddest testament to the slow uptake of Trusted Computing is that Gartner uses it as an example technology to explain two different factors that can cause a technology to have a “Long Fuse” (that is, to spend more time than average in the Trough of Disillusionment). I am starting to see some signs that the deep trough the technology has been in for the past couple of years is coming to an end (more on this later), and Steve’s optimism is heartening.
Maker Faire was in Austin this past weekend and it was awesome! It was busy but not packed, so it was quite pleasant, and there is a reasonable chance that we might see it again next year. (Please, please, please Maker Faire organizers, come back again soon!)
One of the most exciting new features in Fedora 8 will be eCryptfs. I downloaded the latest test release of Fedora 8 just to try it out. Mike Halcrow has done a terrific job writing the code, getting it and its design documents reviewed, and letting people know about eCryptfs. He has written two OLS papers and one Linux Journal article about the design and implementation of eCryptfs. The nice thing about eCryptfs is that it provides per-file encryption rather than requiring that the entire block device be encrypted, as dm-crypt does. This means that if you back up the encrypted content and one file changes, an incremental backup picks up only that one file, rather than the complete encrypted file system blob.
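A minimal illustration, assuming root, the ecryptfs kernel module, and ecryptfs-utils are available (the key and cipher options elided here are prompted for interactively; the path is made up):

```shell
# Mount an eCryptfs view over a directory; files written through the mount
# are stored encrypted, one ciphertext file per plaintext file, in the
# lower directory.
mount -t ecryptfs /home/alice/private /home/alice/private

# Because each file is its own ciphertext, changing one file dirties only
# that file's encrypted counterpart -- an incremental backup of the lower
# directory picks up just the changed file, not one big encrypted blob.
```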
If you are interested in security and security metrics, I highly recommend reading Dan Geer’s chart deck on “Measuring Security”. It weighs in at a hefty 426 pages, but it made me laugh out loud in parts and go hmmm in others. Highlights include p. 108 on “Decision Making”, which says “*Rational decisions are not enough, *Need to also allow for your preferences”. I really like the model for “Tracking Performance” that he shows for selected security software on pages 154-156, but caution still needs to be applied, and meta-information about the numbers is important for full understanding – did the product undergo extensive review one year? Are the CVEs equivalent to each other in severity? Etc. Well worth a read and on my list for more comprehensive study.
LWN also weighed in (last week) with an article on Smack and the LSM debate. It should exit from subscriber-only status soon. (If you are reading this blog, you should subscribe to LWN; it is the best Linux information that money can buy. Plus, they seem to really get security, and they offer a single site that rolls up all of the security updates for each distro daily.)
Matt Bishop’s text
Computer Security: Art and Science is an excellent introduction to the field of computer security. Chapter 13 covers the computer security
Design Principles originally laid out in a 1975 paper by Saltzer and Schroeder,
"The Protection of Information in Computer Systems". This is the foundational lore of computer security. The design principles are economy of mechanism, fail-safe defaults, complete mediation, open design, separation of privilege, least privilege, least common mechanism, and psychological acceptability.
Linus Torvalds has stated rather firmly that the LSM hooks will stay in the Linux kernel. I’m often asked by people who are vaguely aware of the issue (why AppArmor isn’t upstream, why customers can’t easily run Dazuko, why IMA isn’t upstream) why Linus hasn’t expressed his opinion on this strongly before. This is at least the third time that he has spoken clearly and decisively on this issue (Kernel Summit 2001, when he originally proposed the LSM interface; Kernel Summit 2006, when he definitively stated that the LSM interface was staying in; and now “Hell f*cking NO! You security people are insane.”). I wish that I could believe that this will be final, but I don’t hold out much hope that the situation will actually improve as far as customers’ ability to choose to run a particular LSM or the upstream adoption of more LSMs. The strong anti-LSM stance of a few outspoken and brilliant Linux community members has caused a lot of projects to not even bother proposing their LSMs for inclusion. That is a real shame, because the careful and critical review LSMs receive when proposed for inclusion is especially valuable for security. There is real customer demand and interest for choice in this space and, my spin on Linus’s lack-of-metrics rant, no real reason to deny them this choice.
Moments after I posted the previous entry, I received notification that Ulrich Drepper has now published his new proposal to add sha256 and sha512 to crypt. It includes a proposal to lengthen the salt to 16 characters and to allow the number of rounds to be optionally specified in the salt string. According to the specification, “The maximum length of a password string is therefore (excluding final NUL byte in the C representation):
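You can see the resulting format on a system whose tools support the $6$ (sha512crypt) scheme; for example, with OpenSSL 1.1.1 or later (the password and salt below are arbitrary):

```shell
# Produce an SHA-512 crypt hash. The output has the form
#   $6$<salt>$<hash>
# and, per the proposal, the salt may be up to 16 characters, with the
# round count optionally embedded: $6$rounds=N$<salt>$<hash>.
openssl passwd -6 -salt saltstring secret
```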
Within the past several months, an amazing number of people have asked me about the maximum length of passwords and user names in Red Hat Enterprise Linux and SUSE Linux Enterprise Server. Every one of them says that they have searched for this data and are not able to find it. I’ve searched around too and find it in bits and pieces, but not as a consolidated whole. It is funny, though (as in things that make you go hmmm), that Solar Designer has written an updated man page for crypt which contains almost all of this data; it is included in the glibc source RPM for SLES but not installed.
How do you know that you live in an obscure part of the Net? It took 27 days to get my first 3 spam comments. 🙂
The Register has a story, Root-locked Linux for the Masses, that I’m finding almost unreadable because, in the first sentence, the project originator has overloaded both the acronym TCP and the term Trusted Computing. I find it supremely amusing that someone would start a new project called the Trusted Computing Project, which has absolutely nothing to do with Trusted Computing – talk about starting from a deficit.
OSSI submitted OpenSSL 0.9.8 for FIPS 140-2 validation. They have a press release on their home page. They successfully completed a validation of OpenSSL 0.9.7j earlier this year. This is an important validation of the cryptographic strength of OpenSSL. Red Hat has also validated NSS at FIPS 140-2 level 2. Since FISMA was enacted, government agencies are required by law to use FIPS 140-2 validated encryption whenever cryptography is used to protect the security of the system. FISMA explicitly eliminates waivers so these FIPS 140-2 validations are critical for Linux adoption by government agencies.
I just added a link to Alan Robertson’s new blog, Managing Computers. I’m glad that Alan is starting a blog because whenever I talk to him, I always learn something new and walk away with a new insight or koan to ponder. I hope that in addition to blogging about managing computers, Alan blogs about managing communities, because during my limited and short-lived involvement in the Linux-HA community I found it wonderfully refreshing how nice people were to each other in that community.
Gerrit is blogging the Linux Kernel Summit this week and his blog entries are well worth reading, if just for the use of the word kerfuffle. Seriously, there is good stuff there – Andrew Morton on Linux Kernel Quality is especially interesting to me. I had heard that Andrew was tossing around the idea of requiring test cases for patch submissions. That would greatly increase test code coverage and reduce regressions, but based on the discussion in Gerrit’s blog posting, it looks like it would have been dismissed out of hand for requiring too much additional work, if it had even been brought up at the kernel summit. A related topic was brought up during the Documentation session, with a proposal to pull LTP tests into Linus’ git tree.
Off the topic of quality, this cracked me up: “The running joke was that long explanations of x86 functionality requested by the s390 people was usually ended with the comment “oh, I understand now, we have an instruction that does that” ;)”
I love this coverage of the Kernel Summit, along with LWN’s coverage, it is better than actually being there!
Joe Herrick has written a very interesting article about security in a virtualized environment: Virtualization security heats up in which he states that 43% of the readers of iTnews have completely disregarded the issue. This article reviews current thoughts about virtualization security and puts virtualization security best practices into print: “Defense in depth and proper virtual machine layout and design, including not mixing VMs with different security postures and requirements on the same host system, are crucial.” (A more detailed list is included later in the article.) This article also carries the meme that open source is more secure: “The million-dollar question here: Is it safer to rely on the open source community to vet and test Xen, or are VMware and other vendors of proprietary hypervisors the best path to secure hosts?” An interesting bit of news that I somehow completely missed: “XenEnterprise has endured the pokes and prods of the open source community, earning a Common Criteria Level 5 rating.” Wow! Congratulations to them! Buried in the middle of the article is how Trusted Computing can help secure this environment. It was nice to see the topic discussed without all of the usual political rhetoric and just the technical features analyzed for their enterprise applicability. Very interesting and educational article. Thanks!
It is amazing to me to see such ignorance about Common Criteria displayed in an article in a magazine targeted to the very government agencies that require it. Common Criteria has loads of critics, but is it getting a bum rap? In the print version, the comment “You are not testing the product at all. You are testing the paperwork.” is called out prominently with the picture of the person who allegedly said it. All levels of Common Criteria require functional verification of the security features and EAL3 and higher require both positive and negative testing of the security controls. You can see the thousands of tests that were used for the recent certification of Red Hat Enterprise Linux 5 at the LTP download site: RHEL5 LSPP Tests. That is just the most egregious comment in the article. Far more reasonable is the claim, “The evidence so far suggests that it is a waste of time and resources.”
It is amusing to see the juxtaposition of the following stories:
Walmart now selling DRM free music
Two New Surveys Show Increased Consumer Acceptance of DRM.
Walmart’s move heralds the death of DRM just in time!