Andy Webber's Home
IT Security is a very large field. There is already a lot of useful information on the web and in books, so I'm not going to try to repeat any of it here - have a look in any search engine. What I am going to do is publish, or link to, resources of mine that are not readily available or not widely known. Most of these resources are associated with security assessment through evaluation and testing.
Crypto systems may be assessed under FIPS 140 or the Evaluation Criteria (ITSEC, Common Criteria, etc). In 1999, the CACR held a conference on criteria (the 4th CACR Conference). I presented a paper on the ITSEC and also noted the UK approach to crypto assessment (CESG Assisted Products Scheme - CAPS). At this presentation, I predicted that the UK would start to adopt FIPS 140, at least for certain classes of crypto. The Abstract and the full presentation (with whitepaper) are available for download.
At the 2nd CMVP conference in Washington DC in March 2002, CESG, the UK technical authority on IT Security, announced that a new Protective Marking, "PRIVATE", is due to be introduced. At present, a large amount of information that is not critical to national security is marked RESTRICTED in order to afford it suitable protection. However, the level of protection is often more than is strictly necessary; for example, RESTRICTED material that requires encryption must be encrypted with CAPS approved Baseline products. For interactions between Government and citizens, the requirement for CAPS approved products (of which there are few) is a limiting factor. It is therefore proposed that, for the new PRIVATE marking, FIPS 140 certified products will provide suitable encryption. The FIPS 140 list of certified products is far more extensive and contains commercial off the shelf (COTS) products, whereas the CAPS list is almost exclusively Government off the shelf (GOTS).
I worked in evaluation through an interesting period. I started out working to the CESG evaluation criteria ("UK levels"), then helped with assessing the DTI's replacement ("L levels"). This effort evolved into the development of the European ITSEC ("E levels"). These then contributed, together with the Canadian Criteria and the TCSEC (Orange Book), to the Common Criteria. Through all stages there have been continuing efforts for mutual recognition of the results of evaluations between countries. Much of the work on gaining acceptance has been at the technical level. One of the major contributing factors to the current wide mutual recognition of evaluations was the TMach project.
TMach was a 'trusted' Orange Book B level version of the Mach message-based micro-kernel operating system. TMach was being developed by Trusted Information Systems Inc with funding from DARPA. DARPA also funded evaluations of TMach against the ITSEC in the UK and in Germany (as two separate evaluations) in order to gain direct visibility of the ITSEC evaluation process in action. In addition, the evaluations were observed by, amongst others, the Canadian, French and Swedish evaluation oversight bodies.
The DARPA funding of TMach and the evaluations was cancelled before the work was complete. The work was nonetheless valuable, not least for giving a jump start to the Common Criteria. The summary report published by NIST is available as NISTIR 6068, "Report on the TMach Experiment".
Security of community developed and 3rd-party wiki plug-ins, published as part of ACM WikiSym 2008