It is amazing to me to see such ignorance about Common Criteria displayed in an article in a magazine targeted at the very government agencies that require it. Common Criteria has loads of critics, but is it getting a bum rap? In the print version, the comment “You are not testing the product at all. You are testing the paperwork.” is called out prominently with the picture of the person who allegedly said it. In fact, all levels of Common Criteria require functional verification of the security features, and EAL3 and higher require both positive and negative testing of the security controls. You can see the thousands of tests that were used for the recent certification of Red Hat Enterprise Linux 5 at the LTP download site: RHEL5 LSPP Tests. That is just the most egregious comment in the article. Far more reasonable is the claim, “The evidence so far suggests that it is a waste of time and resources.”

I do feel the tug of this argument and agree with it to a limited degree. The intangible value of Common Criteria is that products evaluated at the same level against the same protection profile provide a comparable degree of security functionality. This means that the purchaser doesn’t have to do all of the due diligence work themselves, if (and only if) the security functionality they need is included in the protection profile. The most common criticism of Common Criteria is that the level of security functionality most people need is not in any commonly used protection profile.

A little more tangible are the bugs found and fixed during the Common Criteria evaluations. Several kernel bugs and PAM bugs were fixed during the course of the Linux evaluations. I am glad that those bugs have been found and fixed. The small number of bugs in this category validates the open source methodology and the high quality of the Linux codebase. But I don’t believe that the handful of bugs in this category justifies the high cost of Common Criteria certification.

Most tangible are the new security features that have been adopted and are now seeing widespread use. The audit subsystem was created primarily in response to Common Criteria requirements. There was resistance to adopting an audit subsystem until the Common Criteria requirements (along with demand from other enterprise customers) made the need clear. The LSPP evaluation brought in labeled networking, polyinstantiation, and other security features that might still be languishing if not for the evaluation requirements. As these features become more widely understood, they will see more widespread usage.
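For readers who haven’t used the audit subsystem, a couple of example rules give a sense of what it provides. These are illustrative rules of my own, not drawn from the evaluation test suites; they use the rule syntax accepted in an /etc/audit/audit.rules file:

```
# Watch /etc/shadow for writes and attribute changes; tag matching
# events with the key "identity" so they can be searched for later.
-w /etc/shadow -p wa -k identity

# Record every exit from the setuid system call on 64-bit systems,
# tagged with the key "priv-change".
-a always,exit -F arch=b64 -S setuid -k priv-change
```

Events matching a key can later be pulled out of the audit log with `ausearch -k identity`.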

Adding together these three types of tangible and intangible benefits, I do believe that the Common Criteria evaluations of Linux have been worth their cost.

Another part of the article that made me scream with pain (and why):
“One of the reasons for the focus on the process rather than the code is that the evaluation process is intended for proprietary code, which developers generally keep secret.” This is simply wrong: the evaluation labs have full access to the code they are evaluating, whether it is proprietary or open.

A reasonable point:
“…disconnect between industry and NIAP has resulted in an awkward evaluation process that ensures that security products are well into their life cycles, if not obsolete, by the time they can be evaluated…” It has taken most vendors a long time to complete evaluations, and it is difficult to balance starting an evaluation before the product is finished against the time needed to complete it after the product has shipped. Microsoft took three years to complete its evaluation of W2K. But the EAL4+ evaluation of Red Hat Enterprise Linux 5 completed only about three months after RHEL5 was made available.

And some very good points:
“Under the scheme, everyone accepts one lab’s results … results cannot be easily confirmed.”
“… there is no feedback loop for measuring and refining the Common Criteria process.” NIAP did poll companies that had completed Common Criteria evaluations a year or so ago, but it is hard to see any impact that the comments may have had.

Only one person interviewed for the article had actually taken a product through evaluation, and he didn’t really have much to say. The article would have been stronger and more accurate if the authors had pulled in more participants, but it was certainly entertaining and engaging as written.