Sunday, October 9, 2016

Ignorance as a key to malware

A very disturbing study, described in detail here, indicates that security warnings are not effective because, in the worst case, people simply do not "trust" the warnings. That is, they trust a phishing website more than they trust the respected developer of their web browser. Ok ...

I think that's the same reason people go for Trump in these elections.

Wednesday, October 5, 2016

The flipside of cross-signing

The Chinese certificate authority WoSign used shady practices when issuing certificates. To make things worse, it acquired the Israeli company StartCom (the operator of StartSSL) and supposedly made it use WoSign's infrastructure. And that infrastructure was either misconfigured or intentionally abused; we can only guess now.

Now Apple has removed WoSign's root certificate from its trusted roots. However, the WoSign CA certificate is cross-signed by two other CAs, and those cross-signatures keep the WoSign CA trusted. Without explicitly blocking this CA certificate, neither Apple nor any other software vendor can effectively prevent abuse of the PKI infrastructure when cross-signing is used.

I have been a proponent of the approach in which certificates are signed by more than one CA; this makes the system harder to compromise. But cross-signatures must be validated using logical AND, not logical OR: one must require all trust paths to remain valid, rather than rely on any single signature, as in the case above.
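
To make the distinction concrete, here is a minimal sketch in Python, with hypothetical verifier callables standing in for full chain validation (not any vendor's actual implementation):

```python
# Minimal sketch of OR- vs AND-validation of trust paths.
# Each verifier is a hypothetical callable that builds and checks a chain
# from the certificate to one particular trust anchor.
from typing import Callable, Iterable

Verifier = Callable[[bytes], bool]

def is_trusted_today(cert: bytes, verifiers: Iterable[Verifier]) -> bool:
    """Current practice: a single valid chain is enough (logical OR)."""
    return any(verify(cert) for verify in verifiers)

def is_trusted_strict(cert: bytes, verifiers: Iterable[Verifier]) -> bool:
    """Proposed policy: every cross-signature must still verify (logical AND),
    so distrusting any one signer is enough to distrust the certificate."""
    return all(verify(cert) for verify in verifiers)
```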

I wonder how many security breaches must happen before the industry starts moving toward requiring more than one valid signature on each certificate (or at least on CA certificates).

And I am afraid that unless such a change is codified in standards, we'll see vendors going in all directions, implementing incompatible, if not mutually exclusive, approaches to strengthening PKI.

Full story here.

Wednesday, September 14, 2016

The inhuman factor

It would be a funny anecdote if it didn't reflect how severely humans lag behind the development of technology.

The website of the US Social Security Administration attempted to introduce two-factor authentication. The idea was abandoned after it turned out that only about 35% of the target audience knew how to read SMS messages on their phones.

Well, the target audience is people over 65, so no wonder. But these same people, who can't manage reading a couple of digits off a screen and typing them back, are still allowed to vote and define the future of the younger generations.

Frankly, we'd call such people technically illiterate: they have learned to read and write, but never learned to understand technology.

For these people, only biometric authentication such as FIDO UAF would probably work. And that is yet another investment, one that most retired people can't afford. A Catch-22 of sorts ...

Friday, August 19, 2016

A password as a lever

A nice story about how passwords changed the life of the author.

And you know what? It's a great idea. Just try it. 

Tuesday, August 9, 2016

How open-source kills innovation

Terrifying stories about laptops with thousands of confidential data records being lost come every other week. The best solution to this problem (apart from not carrying the laptop or the data) is to encrypt the data at rest, either with whole-disk encryption (not always feasible or even supported) or by creating virtual encrypted disks.
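
For illustration, here is a minimal sketch of file-level encryption at rest in Python, using the cryptography package. Real virtual-disk products work at the block-device/driver level, so this only shows the underlying idea, not how PGPDisk or TrueCrypt are built:

```python
# Toy illustration of data-at-rest encryption for a single file.
# Requires: pip install cryptography. The filename is a placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: derived from a passphrase, stored safely
fernet = Fernet(key)

with open("customer_records.db", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.db.enc", "wb") as f:
    f.write(ciphertext)       # only the encrypted copy travels on the laptop

# With the key, the data can be recovered later:
# plaintext = fernet.decrypt(ciphertext)
```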

The niche of virtual encrypted disk software was initially occupied by PGPDisk (first from PGP Corporation, now from Symantec, which purchased PGP Corporation). Later an open-source alternative, TrueCrypt, appeared. Several commercial attempts were made to compete with TrueCrypt, but those alternatives never became popular, because why pay when you can get something for free?

Ok ... And then day X came: TrueCrypt was declared insecure and abandoned by its developers. Not a problem for open source, you might say, since anyone can fork the code, plug the security holes and release an update. Yes, to some extent, except for the lack of one important factor: motivation. Maintaining somebody else's software is not much fun, especially when it is a badly designed kernel-mode driver (which is the core of TrueCrypt). And when the work is done for free, there is always something more important on the to-do list, as you can imagine.

Several groups have attempted to fork TrueCrypt (CipherShed and VeraCrypt are just two names), but they have largely failed. Neither CipherShed nor VeraCrypt has a good track record of frequent releases and bug-fix updates. Bugs remain numerous, and support is not provided (see "motivation" above). We call that DoA.

Now, we (the company I have worked at all my life) have the products (kernel-mode drivers, encryption modules) that would let us create such software relatively easily. But we never did, precisely because we would have to compete with open source. TrueCrypt has effectively blocked this market for us. And we are not alone: I know of at least several other attempts to build disk-level encryption solutions ("virtual encrypted disks"), and none of them succeeded, for the same reason.

Ok, but do we have a chance now that TrueCrypt is gone? Well, no. TrueCrypt is dead but not gone, and neither is VeraCrypt. While that buggy open-source code is still available, people will prefer living with bugs to licensing a maintained product for a fee. And this is true for both personal and business users (the latter are also run by people who are used to asking why they should pay for what seems to be free).

Well, I would happily say "good luck" to the world of socialism and open source if we at least had some solution to the problem of how to create an encrypted disk.

Suggestions, anyone?

Saturday, May 21, 2016

One more try to skin a cat


I have mentioned several times before that a proper and relatively simple way to bring back trust in PKI would be to replace the existing tree-like hierarchy of X.509 certificates with something that ensures that no single CA (Certificate Authority) breach can compromise thousands of certificates and millions of TLS connections and signed documents.

Obviously, cross-signing all CA certificates with one or two additional certificate authorities would be a logical step. And this is exactly what Microsoft has been doing with code-signing certificates for kernel-mode signing for years.

There is a different approach available as well: splitting the private key (of CA certificates, as I understand it) among several CAs. This approach is offered by the Apache Milagro project.
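
Milagro's actual scheme is pairing-based and more sophisticated, but the general idea of splitting a key so that no single party holds it can be sketched with simple n-of-n XOR sharing (a toy Python illustration, not the project's protocol):

```python
# Toy n-of-n secret sharing: the key is recoverable only when ALL shares
# are combined; any proper subset is indistinguishable from random noise.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share completes the XOR
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)   # stand-in for a CA private key
shares = split_key(key, 3)      # one share per authority
assert combine(shares) == key
```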

At first sight, the project seems to offer great ideas and great solutions. Well, until you start looking at the practical applicability and openness of the project and of the formats and protocols behind it. The makers didn't go for standard RFC-defined protocols, nor did they open their protocols or submit them for standardization.

Instead, the project offers a library for a handful of programming languages and platforms. This is the kind of approach that corrodes the industry. As is often said, open source != free. An open-source project does not imply an open standard that the community can implement. On the contrary, we are seeing a vendor trying to push its proprietary solutions to the market by declaring them open source.

I strongly believe that the developer community should not use such a solution, as it is more of a Trojan horse than an open standard that improves the existing corpus of industry-adopted PKI-related standards.

Friday, April 22, 2016

A magic bull.t

As a provider of solutions related to security and data protection, we are often asked for some magic software that will let software developers and their customers protect data from copying. Moreover, such software should work on a platform that was designed to be open, modular and, to a large extent, hackable (and no, it's not Linux).

The problem with such requirements is that they contradict each other. An open platform means it is easy to get into another process's memory space (not that this is trivial, but it is very much possible) and copy bits and bytes from it. Protection means preventing exactly such operations.

Furthermore, copying information is the cornerstone of computer engineering and the information sciences. Any use of information that you can imagine exposes that information to the outside world, and once exposed, it can be transmitted further, and thus copied. This means that information is unusable without the freedom to copy it. The question of whether information exists at all if it cannot be observed should be left to philosophers; we are more interested in the practical aspects of this property of information.

Copying is possible at any stage of the information's lifecycle, from the moment it is placed on a medium beyond your control to the moment it disappears into a black hole billions of years from now.

Speaking of practical matters, such as copying a document that you give to someone: you cannot prevent the copying of information. You can only make the information unusable (by encoding it in some way) and take measures to restrict the user from decoding it in an uncontrolled manner.
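
As a toy sketch of that "encode, then control the decoding" idea (hypothetical names, Python's cryptography package; real DRM adds hardware binding, obfuscation and much more):

```python
# Toy model of DRM: distribute ciphertext, gate the decryption step.
# Requires: pip install cryptography.
from cryptography.fernet import Fernet

def package_document(plaintext: bytes) -> tuple[bytes, bytes]:
    """Give the ciphertext to the user; keep the key on a licensing server."""
    key = Fernet.generate_key()
    return Fernet(key).encrypt(plaintext), key

def open_document(ciphertext: bytes, key: bytes, license_valid: bool) -> bytes:
    if not license_valid:                   # the only gate the vendor controls
        raise PermissionError("no valid license")
    return Fernet(key).decrypt(ciphertext)  # once decoded, copying is possible again
```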

What does this mean for the documents and data that you distribute? Unfortunately, not a very bright future for your company's classified secrets. Once a document is decoded, it is used in one way or another: it is shown on a display or printed by the legitimate user. It can also be captured by a hacker, by a trojan application, or by a spy sitting in the building across the street with a powerful electromagnetic antenna, listening for the emissions of your computer display. Displayed or printed information can obviously be copied. Information decoded in memory can be copied, in decoded form, to other media. And the contents of a document opened in an office suite can easily be transferred to another application via the system clipboard.

All of this copying is easy on general-purpose systems like Windows. Would a closed system protect your information from misuse and abuse? In theory, it could. In practice, though, preventing information leakage in serious organizations takes much more than a closed system: it usually involves restricted access to rooms, rooms without windows, and proactive measures such as scanning the electrical network, the air-conditioning system and so on for spying devices. It is doubtful that you could force the recipients of your data to take such measures.

So what is the bottom line? Is there no solution? If you search for "DRM" or "digital rights management", you'll get millions of links and dozens of software solutions. Yet they (at least most of them) don't claim absolute protection; they promise to make stealing information much harder. This is doable, and the real question is how hard any particular protection is to crack.

We (our company) don't offer DRM solutions. We sell [licenses for] components that can be used to build such solutions, and those components do make copying information harder. But we also recognize the shortcomings of most general approaches. If you let your file be opened in MS Office, the user can copy the data to the clipboard or simply save the file elsewhere. Neither our components nor the DRM solution itself can effectively counteract this without severely restricting the user's ability to work with the data.

To make your life harder still, let me remind you that information carved in stone can also be copied with a simple tool called a camera. So if the information is so valuable that the risk of copying cannot be tolerated, don't disclose it. Or take other, non-technical measures to secure your information and your position: NDAs, license agreements and the like can be a better defense than the most sophisticated DRM software.

Saturday, March 26, 2016

Why Cloud security is impossible

Yesterday I came across a rather old but still relevant (increasingly relevant, I'd say) article that explains why in-browser cryptography isn't going to work.

The article is "What's wrong with in-browser cryptography". It is written from a security researcher's point of view, and as such focuses on the flaws of the in-browser cryptography model. However, the out-of-browser model (initiated by server-side code) is flawed just as seriously, if not more so. It does not matter where the cryptographic code lives: in some JavaScript on the web page, in a browser plugin, or in the WebCrypto API offered by the browser itself. As long as the web page can be changed dynamically, the user cannot be sure that the information is really encrypted and that it is sent to the intended recipient. SSL/TLS does NOT ensure such security, contrary to what most users think.

The article makes a good point: with desktop software you can always capture the code that was executed and analyze it to find out what is being done and where the data is sent. And the code is signed, proving (to some extent) that it is authentic and has not been altered. With web pages, such analysis and protection are not possible, because pages can easily be altered by the server or by third-party actors (such as browser plugins).

Browser applets like Flash and Java could solve the problem by applying code signing to applets. Unfortunately, in its desire for openness (which in most situations contradicts security), the industry has rejected the "closed" solutions that applets offer in favor of ever-changing HTML, which makes the security of users' data a distant dream.

One solution would be to develop signed scripts, which could be signed with certificates much as applets are. Unfortunately, I haven't seen any widespread attempts to introduce "secure JavaScript" or "secure web pages". If you come across one, please let me know.
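
For what it's worth, the mechanics of such signed scripts are simple. Here is a toy Python sketch using Ed25519 from the cryptography package (a hypothetical flow: browsers offer nothing like this today, and Subresource Integrity pins hashes rather than publisher keys):

```python
# Toy sketch of "signed scripts": verify a script against a pinned publisher
# key before running it. Requires: pip install cryptography.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the site developer
public_key = signing_key.public_key()        # pinned on the client side

script = b"console.log('hello');"
signature = signing_key.sign(script)         # published alongside the script

try:
    public_key.verify(signature, script)     # check before executing
    print("script is authentic; safe to run")
except InvalidSignature:
    print("script was altered; refuse to run")
```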