Saturday, May 21, 2016

One more try to skin a cat


I have mentioned several times before that the proper and relatively simple way to bring trust back to PKI would be to replace the existing tree-like hierarchy of X.509 certificates with something that ensures that no single CA (Certificate Authority) breach can compromise thousands of certificates and millions of TLS connections and signed documents.

An obvious and logical step would be to counter-sign every CA certificate with one or two other certificate authorities. This is exactly what Microsoft has been doing for years with code-signing certificates used for kernel-mode signing.

There is a different approach available as well, though: splitting the private key (of CA certificates, as I understand it) among several parties. This is the approach offered by the Apache Milagro project.
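
To show what such splitting buys you, here is a minimal sketch in Python of the simplest possible n-of-n scheme, an XOR split: each authority holds one share, and all shares are needed to reconstruct the key. Milagro's actual scheme is pairing-based and far more sophisticated; this sketch only illustrates the principle that a single compromised authority learns nothing.

    # Minimal n-of-n key splitting via XOR shares. Milagro uses
    # pairing-based cryptography; this only illustrates the principle
    # that any single share, on its own, reveals nothing about the key.
    import secrets

    def split_key(key: bytes, n: int) -> list:
        """Split 'key' into n shares; all n are required to recombine."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for share in shares:
            last = bytes(a ^ b for a, b in zip(last, share))
        return shares + [last]

    def combine(shares: list) -> bytes:
        """XOR all shares back together to recover the key."""
        key = shares[0]
        for share in shares[1:]:
            key = bytes(a ^ b for a, b in zip(key, share))
        return key

    ca_key = secrets.token_bytes(32)      # stand-in for CA private key material
    shares = split_key(ca_key, 3)         # one share per authority
    assert combine(shares) == ca_key      # all three together recover it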

At first sight the project seems to offer great ideas and great solutions. That impression lasts until you start looking at the practical applicability and openness of the project and of the formats and protocols behind it. The makers didn't go for standard, RFC-defined protocols, nor did they open their own protocols or submit them for standardization.

Instead, the project offers a library for a handful of programming languages and platforms. This is the kind of approach that corrodes the industry. As is often said, open source != free: an open-source project does not imply an open standard that the community can implement. On the contrary, what we are seeing is a vendor trying to push its proprietary solutions to the market by declaring them open source.

I strongly believe that the developer community should not use such a solution: it is more of a Trojan horse than an open standard that improves the existing corpus of industry-adopted PKI-related standards.

Friday, April 22, 2016

A magic bull.t

As a provider of solutions related to security and data protection, we are often asked for some magic software that will let software developers and their customers protect data from copying.
Moreover, such software is expected to work on a platform that was designed to be open, modular and, to a large extent, hackable (and no, it's not Linux).

The problem with such requirements is that they contradict each other. An open platform means it is relatively easy to get into another process's memory space (not that this is trivial, but it is very much possible) and copy bits and bytes from it. Protection means preventing exactly such operations.

Furthermore, copying information is the cornerstone of computer engineering and of the information sciences. Any use of information that you can imagine exposes it to the outside world, and once exposed, it can be transmitted further and thus copied. This means that information is unusable without the freedom to copy it. The question of whether information exists at all if it cannot be observed should be left to philosophers; we are more interested in the practical aspects of this property of information.

Copying is possible at any stage of the information's lifecycle, from the moment it is placed on a medium beyond your control up to the moment it disappears into black holes billions of years from now.

Turning to practical matters, such as copying a document that you give to someone: you cannot prevent the information from being copied. You can only make it unusable (by encoding it in some way) and take measures that restrict the user from decoding it in an uncontrolled manner.
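
To make that model concrete, here is a minimal sketch in Python (using the third-party cryptography package): the document travels encrypted and can be copied freely in that form; the only control point is who receives the key and when decryption happens. The helper name and the sample data are mine, not taken from any particular product.

    # "Encode, then control the decoding": the ciphertext may be copied
    # at will; control exists only around key delivery and decryption.
    # Requires the "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    document = b"Internal: Q3 acquisition plans"
    ciphertext = Fernet(key).encrypt(document)   # safe to distribute

    def render_for_authorized_user(token: bytes, key: bytes) -> None:
        """Illustrative control point: only authorized code gets the key."""
        plaintext = Fernet(key).decrypt(token)
        # The moment it is displayed, it can be copied again:
        # clipboard, screenshot, camera, memory dump...
        print(plaintext.decode())

    render_for_authorized_user(ciphertext, key)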

What does this mean when applied to the documents and data that you distribute? Unfortunately, not a very bright future for your company's classified secrets. Once a document is decoded, it is used in one way or another. It is shown on the display or printed by the legitimate user. It can also be captured by a hacker, by a Trojan application, or by a spy sitting in the opposite building with a powerful electromagnetic antenna, listening to the emissions of the computer display. Displayed or printed information can obviously be copied. Information decoded in memory can be copied in decoded form to some other medium. And the data of a document opened in an office suite can easily be transferred to another application via the system clipboard.

Such copying is easy to perform on general-purpose systems like Windows. Would a closed system protect your information from misuse and abuse? In theory, it could. In practice, though, preventing information leakage in serious organizations takes much more than a closed system: it usually involves restricted access to rooms, no windows in those rooms, proactive measures such as sweeping the electrical wiring and air-conditioning systems for spying devices, and more. It is doubtful that you could force the recipients of your data to take such measures.

So what is this all about? Is there no solution? If you search for "DRM" or "digital rights management" in a search engine, you'll get millions of links and dozens of software solutions. Yet they (at least most of them) do not claim absolute protection; they promise to make stealing the information much harder. This is doable, and the question is merely how hard any particular protection is to crack.

We (our company) don't offer DRM solutions. We sell [licenses for] components that can be used to build such solutions, and those components do make copying information harder. But we also recognize the shortcomings of most general approaches. If you let your file be opened in MS Office, the user can copy the data to the clipboard or simply save the file elsewhere, and neither our components nor the DRM solution built on them can effectively counteract this without severely restricting the user's ability to work with the data.

To make your life even harder, let me remind you that information carved in stone can also be copied, using a simple tool called a camera. So if the information is so valuable that the risk of copying cannot be tolerated, don't disclose it. Or take other, non-technical measures to secure your information and your position: NDAs, license agreements and the like can be a better defense than the most sophisticated DRM software.

Saturday, March 26, 2016

Why Cloud security is impossible

Yesterday I came across a rather old but still relevant (increasingly relevant, I'd say) article that explains why in-browser cryptography isn't going to work.

The article is "What's wrong with in-browser cryptography". It is written from a security researcher's point of view and, as such, focuses on the flaws of the in-browser cryptography model. However, the out-of-browser model (where the cryptography is still initiated by server-side code) is flawed just as seriously, if not more so. It doesn't matter where the cryptography code is located: some JavaScript in the web page, a browser plugin, or the WebCrypto API offered by the browser itself. As long as the web page can be changed dynamically, the user cannot be sure that the information is really encrypted and that it is sent to the intended recipient. SSL/TLS does NOT ensure such security, contrary to what most users think.

The article makes a good point: with desktop software you can always capture the code that was executed and analyze it to find out what is being done and where the data is sent. The code is also signed, proving (to some extent) that it is authentic and has not been altered. With web pages, such analysis and protection are not possible, because the pages can easily be altered by the server or by third-party actors (such as browser plugins).
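
As a rough illustration of what that signature buys you, here is a minimal sketch in Python (using the cryptography package) of verifying a detached RSA signature over an executable against the publisher's public key. Real code signing (Authenticode and the like) wraps this primitive in certificates and chain validation; the file names here are purely illustrative.

    # Verifying a detached signature over a captured binary. Real code
    # signing adds certificates and trust-chain checks on top of this.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    with open("publisher_pubkey.pem", "rb") as f:   # illustrative paths
        public_key = serialization.load_pem_public_key(f.read())
    with open("app.exe", "rb") as f:
        code = f.read()
    with open("app.exe.sig", "rb") as f:
        signature = f.read()

    try:
        public_key.verify(signature, code,
                          padding.PKCS1v15(), hashes.SHA256())
        print("Signature OK: the captured code is what the publisher signed.")
    except InvalidSignature:
        print("The code was altered after signing.")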

Browser applet technologies like Flash and Java could solve the problem by applying code signing to applets. Unfortunately, in its desire for openness (which in most situations conflicts with security), the industry has rejected the "closed" solutions that applets offer in favor of ever-changing HTML, which makes the security of the user's data a distant dream.

One solution would be the development of signed scripts, which could be signed with certificates just as applets are; a sketch of how such a check might look follows below. Unfortunately, I haven't seen any widespread attempts to introduce "secure JavaScript" or "secure web pages". If you have come across any, please let me know.
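
Purely as a thought experiment, here is what a loader for such hypothetical signed scripts might look like, sketched in Python for readability. The ".sig" companion file, the pinned publisher key and the whole mechanism are my own invention for illustration; no browser implements anything like this.

    # Thought experiment: execute a remote script only if a detached
    # signature, made with the publisher's certificate key, verifies.
    # The ".sig" convention and the pinned key are invented here.
    import urllib.request
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def fetch_verified_script(url: str, pinned_key_pem: bytes) -> bytes:
        script = urllib.request.urlopen(url).read()
        signature = urllib.request.urlopen(url + ".sig").read()
        key = serialization.load_pem_public_key(pinned_key_pem)
        try:
            key.verify(signature, script,
                       padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            raise RuntimeError("Script altered in transit; refusing to run it.")
        return script   # only now would the page be allowed to execute it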

Friday, December 18, 2015

PKI reform starts. Kind of.

Microsoft has lost faith in more than 20 CAs.

Still, this is a partial measure. A chain is only as strong as its weakest link. If we have a web server certificate signed by CA X, whose own CA certificate is issued and signed by trusted root CA R, then X can be the weakest link, and no Microsoft measure will prevent that link from being broken. This is exactly what happened in previous incidents, when sub-CAs (like X here) issued certificates in violation of PKI rules and practices.

The solution? A web of trust. This would require certain modifications to PKI, but requiring the end-entity certificate to be signed by at least two CAs would eliminate most issues related to wrongdoing by sub-CAs. Look: if you are an attacker and you hijack CA X, that alone is of little use; you would also need to hijack CA Y and/or CA Z. This is possible, but much more complicated, and it raises the risk that your attack will be exposed.
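
A minimal sketch of the idea in Python, using the cryptography package: a certificate's to-be-signed bytes are accepted only if signatures from at least two distinct CAs verify. To be clear, X.509 has no such dual-signature mechanism today; the data layout and function here are invented for illustration.

    # Accept a certificate only if at least two independent CAs signed
    # the same to-be-signed (TBS) bytes. X.509 offers no such mechanism
    # today; this is a conceptual illustration only.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def verify_dual_signed(tbs_bytes, signatures, ca_public_keys):
        """signatures: list of (ca_name, sig); need >= 2 valid, distinct CAs."""
        valid_cas = set()
        for ca_name, signature in signatures:
            key = ca_public_keys.get(ca_name)
            if key is None:
                continue                      # unknown CA, ignore
            try:
                key.verify(signature, tbs_bytes,
                           padding.PKCS1v15(), hashes.SHA256())
                valid_cas.add(ca_name)
            except InvalidSignature:
                pass
        # Compromising a single CA is no longer enough:
        return len(valid_cas) >= 2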

In general, there is much that can be borrowed from OpenPGP. The CA (issuer) can still be present in the certificate, but other extensions, such as subsignatures or counter-certificates, could be included as well, and that would significantly raise the level of protection.

Wednesday, December 16, 2015

A "fatal flaw" which is neither fatal nor a flaw.

An article in SC Magazine talks about "security flaws" in the Kerberos protocol. But what are those flaws about?

If we dig deeper, the only substantive phrase in the article suggests that "if the attacker knows the user's secret key, he can replay authentication without the need for the user's password". This is not actually a flaw. If the attacker somehow got hold of the user's secret key, the user and the network are already in trouble, because it means the attacker has already exploited some flaw elsewhere.
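
For context: in Kerberos the user's long-term key is derived directly from the password and a salt, so holding the key is by definition at least as powerful as holding the password. Below is a simplified sketch in Python, loosely modeled on the string-to-key function of RFC 3962 but not byte-for-byte accurate; the parameter values are illustrative.

    # Simplified Kerberos-style string-to-key derivation. Loosely based
    # on RFC 3962 (PBKDF2 over password and salt); parameter values are
    # illustrative, not the exact Kerberos ones.
    import hashlib

    def string_to_key(password: str, realm: str, principal: str) -> bytes:
        salt = (realm + principal).encode()   # Kerberos salts with realm + name
        return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 4096, 16)

    # The long-term key is a pure function of the password. Whoever holds
    # the key can authenticate; asking for the password adds nothing.
    key = string_to_key("hunter2", "EXAMPLE.COM", "alice")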

Now, Kerberos' shortcomings and disadvantages have been known for years, and the one under discussion is no exception. This is why Kerberos is no longer recommended and is being replaced by modern protocols such as SAML and OAuth, even on intranets.

To sum up, digging up old flaws and bringing them back into the sunlight is an easy way to establish yourself as a security researcher, but you still need to look at the root causes. Protecting credentials and using multi-factor authentication are what separate good security from bad.