Friday, December 18, 2015

PKI reform starts. Kind of.

Microsoft has lost faith in more than 20 CAs.

Still, this is a partial measure. A chain is only as strong as its weakest link. If a web server certificate is signed by CA X, whose CA certificate is in turn issued and signed by trusted root CA R, then X can be the weakest link, and no Microsoft measure will prevent that link from breaking. This is exactly what happened in previous incidents, when sub-CAs (like that X) issued certificates in violation of PKI rules and practices.

The solution? A web of trust. This would require certain modifications to PKI, but requiring the end-entity certificate to be signed by at least two CAs would eliminate most issues related to wrongdoing by sub-CAs. Look: if you are an attacker and you hijack CA X, there is little use in it; you would also need to hijack CA Y and/or CA Z. That is possible, but much more complicated, and it raises the risks of your attack considerably.

In general, there's much that can be borrowed from OpenPGP. The CA (Issuer) can still be present in the certificate, but other extensions, such as subsignatures or counter-certificates, could be included, and that would significantly increase the protection level.
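The "at least two CAs" rule can be illustrated with a toy verifier. This is a sketch only: the CA names are hypothetical, and HMAC stands in for the asymmetric signatures (RSA/ECDSA) that real CAs would apply to the certificate's to-be-signed data.

```python
# Toy illustration (not real PKI code): a certificate carries signatures
# from several CAs, and verification demands at least two valid ones.
import hashlib
import hmac

# Each toy "CA" signs with an HMAC key for brevity; real CAs use
# asymmetric signatures over the certificate's TBS structure.
CA_KEYS = {"CA-X": b"key-x", "CA-Y": b"key-y", "CA-Z": b"key-z"}

def sign(ca: str, tbs: bytes) -> bytes:
    return hmac.new(CA_KEYS[ca], tbs, hashlib.sha256).digest()

def verify_multi(tbs: bytes, sigs: dict, min_valid: int = 2) -> bool:
    """Accept the certificate only if at least `min_valid` CAs vouch for it."""
    valid = sum(
        1 for ca, sig in sigs.items()
        if ca in CA_KEYS and hmac.compare_digest(sign(ca, tbs), sig)
    )
    return valid >= min_valid

tbs = b"CN=www.example.com,validity=2016"
good = {"CA-X": sign("CA-X", tbs), "CA-Y": sign("CA-Y", tbs)}
# An attacker who compromised only CA-X can forge just one signature:
forged = {"CA-X": sign("CA-X", tbs), "CA-Y": b"\x00" * 32}
print(verify_multi(tbs, good))    # True
print(verify_multi(tbs, forged))  # False
```

The point is in the last two lines: compromising a single sub-CA is no longer enough to mint an acceptable certificate.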

Wednesday, December 16, 2015

A "fatal flaw" which is neither fatal, nor a flaw.

The article in SC Magazine talks about "security flaws" in the Kerberos protocol. But what are those flaws about?

If we dig deeper, the only relevant phrase in the article suggests that "if the attacker knows the user's secret key, he can replay authentication without needing the user's password". Actually, this is not a flaw. If the attacker somehow got hold of the user's secret key, the user and the network are already in trouble, because it means the attacker has already found flaws elsewhere.
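The point is easy to see in a simplified model (this is a sketch, not the real Kerberos wire protocol; the key-derivation function is a stand-in for Kerberos string-to-key): the client's long-term key is derived from the password, so anyone holding the derived key can authenticate without ever learning the password.

```python
# Simplified model: AS-REQ pre-authentication is (roughly) a keyed proof
# over a timestamp, where the key is derived from the password. Whoever
# holds the derived key can produce a valid authenticator without the
# password itself, which is why a stolen key means game over anyway.
import hashlib
import hmac
import time

def derive_key(password: str) -> bytes:
    # Stand-in for Kerberos string-to-key (e.g. the PBKDF2-based one in RFC 3962)
    return hashlib.sha256(password.encode()).digest()

def make_authenticator(key: bytes, timestamp: int) -> bytes:
    return hmac.new(key, str(timestamp).encode(), hashlib.sha256).digest()

def kdc_check(key: bytes, timestamp: int, auth: bytes) -> bool:
    return hmac.compare_digest(make_authenticator(key, timestamp), auth)

user_key = derive_key("correct horse battery staple")
now = int(time.time())

# A stolen *key* works just as well as the password it was derived from:
stolen_key = user_key
print(kdc_check(user_key, now, make_authenticator(stolen_key, now)))  # True
```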

Now, Kerberos' shortcomings and disadvantages have been known for years, and the one discussed was known as well. This is why Kerberos is no longer recommended and is being replaced by modern protocols like SAML and OAuth, even on intranets.

To sum it up, digging up old flaws and bringing them back into the sunlight is an easy way to establish yourself as a security researcher, but you still need to look at the roots. Protecting credentials and using multifactor authentication are what separate good security from bad.


Saturday, November 28, 2015

HTTPs as it could be

Google has reinvented the HTTP wheel and called it HTTP/2. This comprehensive article about HTTP/2 describes how the web will benefit from the new protocol.

The problem with HTTP/2, as with most of what Google does, is that it was designed by coders, not by system architects. The protocol severely lacks internal clarity and integrity. The server has to behave in many (at least four) completely different ways depending on what it supports and what the client requests. It's like trying to combine a truck wheel with a bicycle wheel.
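The "at least four ways" can be enumerated directly from RFC 7540's connection-setup rules. A toy dispatcher (illustrative only, not a real server) makes the branching explicit:

```python
# Toy dispatcher showing the distinct connection-setup paths an
# HTTP/2-capable server has to handle under RFC 7540's negotiation rules.
from typing import Optional

def choose_mode(tls: bool, alpn: Optional[str], upgrade: Optional[str],
                preface: bool) -> str:
    if tls and alpn == "h2":
        return "HTTP/2 over TLS (ALPN)"             # RFC 7540, section 3.3
    if not tls and preface:
        return "HTTP/2 cleartext, prior knowledge"  # section 3.4
    if not tls and upgrade == "h2c":
        return "HTTP/1.1 upgraded to h2c"           # section 3.2
    return "plain HTTP/1.1 fallback"

print(choose_mode(True, "h2", None, False))
print(choose_mode(False, None, None, True))
print(choose_mode(False, None, "h2c", False))
print(choose_mode(False, None, None, False))
```

Four different entry paths into what is nominally one protocol, each with its own handshake semantics.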

The authors are definitely writers, not readers. There already exists the SSH family of protocols, which does something very similar to what HTTP/2 does, and SSH has a complicated but logical internal structure. The only thing missing from SSH (not exactly missing, but unused) is X.509 certificates and/or OpenPGP keys: while both are in theory supported as authentication methods, almost no real-world software implements them (our SecureBlackbox supports OpenPGP and importing keys from X.509 certificates). Meanwhile, HTTP/2 is a combination of the old HTTP, a new protocol (completely unrelated to HTTP) with fallback to HTTP, and more. Probably the authors of HTTP/2 are adepts of the Pastafarian church.

The authors could easily have learned how to design the multiplexing scheme properly, but, as said, they are likely not readers.

Hitting him with nails

PKI is having a hard time, as more and more human mistakes are revealed that undermine PKI's position.

In fact, PKI is not about technology, it's about people's actions. People forget, or don't care, to guard their property (private keys, in this case). This is common; people are negligent. A study several years ago revealed that a large share (forty-something percent, if memory serves) of office workers would share their passwords for a chocolate bar. Why would they invest time and resources into guarding someone else's secrets if they don't guard their own?

Tuesday, November 24, 2015

Another nail in the coffin

of PKI as we know it. Dell has introduced a huge security hole into its devices.

And more complete coverage, together with remedies, can be found here.

Wednesday, November 4, 2015

Invest in your own security first

Iboss Cybersecurity raised $35 million from Goldman Sachs' Private Capital Investing group, the article tells us.

At the same time, Goldman Sachs has deployed an SSH/SFTP server for their corporate operations and built it on an outdated version of an open-source SSH server library. Moreover, they've implemented the server badly, in a way that is incompatible with the vast majority of SSH client implementations. They have probably saved a couple of thousand dollars by choosing an in-house (or, maybe even worse, outsourced to overseas junior developers) implementation based on outdated open source, instead of paying for a supported commercial solution without such nasty bugs. At the same time, they found $35 million to invest in third-party something. Good job, security boys.

Thursday, October 22, 2015

On hitting nails with a microscope

The newly published RFC introduces probably the most contradictory extension yet, and is by itself one of the most pointless RFCs adopted in the last 20 years.

The address of the RFC is https://www.rfc-editor.org/rfc/rfc7685.txt and it defines the padding extension, whose only function is to insert zero bytes into the ClientHello packet of the TLS protocol. What's the purpose, you might ask? The purpose is to work around bugs in some implementation(s) that are confused by certain lengths of the ClientHello packet.

You've got it right. Instead of fixing the bugs (or pushing the developers to fix them), they invent extensions that make other developers complicate their software just to work around those bugs.
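The extension itself is trivially small, which only underlines the absurdity. Per RFC 7685, it is the assigned extension type 21, a two-byte length, and that many zero bytes:

```python
# Building the TLS padding extension from RFC 7685: extension type 21,
# a two-byte big-endian length, then that many zero bytes. Its only job
# is to push the ClientHello past the sizes that confuse buggy stacks.
import struct

PADDING_EXT_TYPE = 21  # assigned in RFC 7685

def padding_extension(n_zero_bytes: int) -> bytes:
    return struct.pack("!HH", PADDING_EXT_TYPE, n_zero_bytes) + b"\x00" * n_zero_bytes

ext = padding_extension(4)
print(ext.hex())  # 0015000400000000
```

An entire standards-track document for a run of zeros.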

Tolerance is acceptable toward people of a different race, origin, or group. Tolerance to bugs is unacceptable. Tolerance to idiocy is not acceptable either.

Saturday, October 10, 2015

On reinventing the wheel

Google did it again, and there's a kind of hype around this new wheel.

Protocol Buffers are a slightly simplified form of ASN.1 notation, which has been in use for decades. They even mention BER (Basic Encoding Rules, an encoding of ASN.1) but hide the "ASN.1" name.
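The family resemblance is easy to show with two toy encoders (illustrative sketches, covering non-negative integers only): a protobuf-style varint field and an ASN.1 DER INTEGER, both encoding the value 300. The structural idea is the same in both: a tag or key, then length or continuation information, then the content.

```python
# Side-by-side toy encoders: a protobuf varint field vs. an ASN.1 DER
# INTEGER, both for the value 300 (non-negative values only).
def protobuf_varint_field(field_no: int, value: int) -> bytes:
    key = bytes([(field_no << 3) | 0])         # field key: number + wire type 0 (varint)
    out = bytearray()
    while value > 0x7F:
        out.append((value & 0x7F) | 0x80)      # 7 data bits + continuation bit
        value >>= 7
    out.append(value)
    return key + bytes(out)

def der_integer(value: int) -> bytes:
    content = value.to_bytes((value.bit_length() + 8) // 8 or 1, "big")
    return bytes([0x02, len(content)]) + content  # tag INTEGER, length, value

print(protobuf_varint_field(1, 300).hex())  # 08ac02
print(der_integer(300).hex())               # 0202012c
```

Different bit packing, same tag-length-value philosophy that ASN.1 standardized decades earlier.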

This is what happens when imported eastern developers are writers and not readers.

Sometimes I regret that standards and protocols are not patented. One should be able to prosecute those "reinventors" for plagiarism (taking an industry-adopted standard, hiding its name, and claiming it as a new protocol or something).

Tuesday, August 18, 2015

On how not to do things right

Recently one of the users of our SecureBlackbox product reported that the SMTP client component can't log in to GMail, failing with a strange message: "Someone just tried to sign in to your Google Account [my email] from an app that doesn't meet modern security standards".

The message itself doesn't make much sense, as it neither explains the problem ("doesn't meet standards" is not an explanation) nor suggests a solution.

A forum search took me to an answer in the Google knowledge base, which doesn't add much to the error message's explanation either. While it offers a partial solution for the owner of the mailbox, it does not explain what exactly happens.

Finally, another forum post led me to what could be an explanation, although it's only a hint. It turns out that Google has implemented OAuth2 in protocols like SMTP and IMAP. And here lies a huge problem.

OAuth2 is a web-based protocol which in many cases involves a web browser and user interaction. This makes fully automated operation nearly impossible and also significantly complicates the implementation of any client.
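For context, the mechanism Google documents for this is SASL XOAUTH2: the client first has to obtain an OAuth2 access token (which is the part that typically involves a browser and user interaction), and only then can it build the actual SMTP/IMAP login string. A sketch, with the address and token being placeholder values:

```python
# Google's SASL XOAUTH2 initial response, as documented for GMail
# SMTP/IMAP: "user=<addr>\x01auth=Bearer <token>\x01\x01", base64-encoded.
# Obtaining the access token is the hard part: it comes from a prior
# OAuth2 flow, which in many cases requires a browser and a human.
import base64

def xoauth2_string(user: str, access_token: str) -> str:
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode()).decode()

# Hypothetical placeholder values for illustration only:
resp = xoauth2_string("someuser@example.com", "ya29.token-placeholder")
# The SMTP client would then send: AUTH XOAUTH2 <resp>
print(resp)
```

The one-line encoding is trivial; the token acquisition it depends on is what breaks unattended clients.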

There exist plenty of authentication schemes which avoid password transfer and/or allow third-party servers to be used for authentication. Google seems to have chosen the least appropriate variant instead.

Google is known for non-standard technical and business solutions, and they mostly work to users' and Google's own benefit. But some solutions seem not merely untested, but unable to pass any sanity check.

Friday, February 6, 2015

Come as you are

Any authorization (and to some extent authentication) is based on one or more of three elephants (and a turtle): "what you know", "what you have" and "what you are".

All three components have been used since prehistoric times. Passwords, keys on a keyring, secret signs or marks on the skin (including tattoos): these are widely used examples of the three types of authentication.

In the digital age, "what you know" is something that is extremely easy to disclose. Passwords are hard to remember and easy to steal. While still in use, they are now complemented by other factors to form multifactor authentication.

The article on CIO has an excellent overview of methods and technologies to authenticate you based on what you are and, to some extent, on what you have. Not only the body parts themselves are inspected, but also the way they function. Heartbeat and brain waves seem to be the most advanced authentication sources today.

Yet it remains unclear how the freshness of the data can be ensured. A computer system receives authentication data from the person by digitizing it and comparing it to stored patterns. Potentially, the data can be intercepted in transit and replayed later for false authentication.

And even worse, fingerprints and iris pictures can be captured from a distance using powerful cameras and then misused.

The only countermeasure I can think of right now is a challenge-response mechanism that measures how the person reacts to certain stimuli, such as a particular light-flash pattern (when inspecting the iris) or a math problem the user has to solve (when capturing brain waves).
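The cryptographic skeleton of such a scheme is ordinary challenge-response. A minimal sketch (the "biometric template" here is a stand-in secret; real systems would derive it from the fresh measurement itself): the verifier issues a random nonce per attempt, the response binds the secret to that nonce, and a recorded response is useless against the next challenge.

```python
# Minimal nonce-based challenge-response sketch: a fresh random challenge
# per attempt means a captured response cannot be replayed later.
import hashlib
import hmac
import os

# Stand-in for a secret derived from the biometric template:
SECRET = hashlib.sha256(b"biometric-template-stand-in").digest()

def issue_challenge() -> bytes:
    return os.urandom(16)              # fresh nonce per authentication

def respond(secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

c1 = issue_challenge()
r1 = respond(SECRET, c1)
print(verify(SECRET, c1, r1))   # True: fresh round trip succeeds
c2 = issue_challenge()
print(verify(SECRET, c2, r1))   # False: replaying r1 against a new challenge fails
```

The hard part for biometrics is not this skeleton but making the stimulus-response measurement itself unforgeable.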

Saturday, January 10, 2015

Working in public places? Think again.


For decades, remote capture of data (first from people talking or TV sets, then from working computers) has been an effective method of political and business espionage. We have seen methods to capture sounds, CRT emissions, keyboard clicks, etc. Tablets and low-power notebooks give much less information to the outside world, yet the spies don't give up and keep trying to capture even tiny bits of electronic emissions, hoping to grab your passwords or even more valuable information.

Though I am a bit skeptical about the real-world use of these attacks, you still need to be careful when working with confidential information in public. It is also worth mentioning that rubber-hose cryptanalysis remains effective: if you have logged into your banking account and the attacker knows you have a fortune in the bank, it makes sense for him to simply grab your notebook and run.