How (not) to do GDPR

I prepared a few simple recommendations for you.

1. Do not rush to break your software and business, unless you are deep into advertising or social profiling, or your IT processes involve dumping every piece of data you have onto an unprotected file server. If you process personal data for a legitimate business purpose, in a reasonable manner, it is safe to assume you can stand your ground under any GDPR-related scrutiny.

I know a retail company that ceased CCTV recording in all their warehouses. All hell broke loose, even before any theft: the company became completely unable to find misplaced items. Why? Because there was a guy «responsible» for GDPR compliance who had the authority to handle it that way. When anyone opposed him, his answer was simple: the fines are huge, the risks are high, and is your advice to do otherwise backed by any willingness to cover possible non-compliance costs out of your own pocket? Ah, you don't have enough money anyway, so get lost.

What this guy was essentially doing was covering his own ass at the company's INSANELY HUGE expense. Don't do that. GDPR is not about ruining your business (unless your business is very questionable already).

2. Do not buy shit. GDPR fearmongering is a goldmine for people selling «compliance solutions». But the truth is, there is no «compliance solution» you can buy. Under this hot GDPR sauce you would only buy things you neither need nor want. Less creative «solution providers» will sell you some firewall-antivirus-encryption stuff at twice the regular price. More creative ones will sell you data intelligence and audit tools of the most expensive variety, which won't help you at all. GDPR is not about buying anything (unless you neglected your basics before).

3. Do not pay for any certification. There is currently no official GDPR certification scheme, which literally means any certification you get is irrelevant to GDPR. The idea that «there must be some checklists and papers, so it is worth getting your ISO 27001 or whatever until more specific requirements are in place, because ISO 27001 is the default way to prove to everyone you are a good citizen» sounds appealing, but its appeal lies in deceptive psychological comfort and nothing more.

Of course, it is not all that simple. Some minor organisational effort is still required, as described by countless howtos: remove everything you do not actually need (and stop gathering it «just in case»), get consent when appropriate, and keep track of what you do, both for yourself and for the data subjects.
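The «keep track of what you do» part needs no fancy tooling at all. Here is a minimal sketch of a processing-activity record, loosely modeled on the GDPR Article 30 «records of processing activities»; the field names and the sample entry are my own illustration, not an official schema:

```python
from dataclasses import dataclass, asdict
import json

# One processing activity. Fields loosely follow Article 30 GDPR;
# names and structure are illustrative, not a compliance template.
@dataclass
class ProcessingRecord:
    activity: str          # what you do with the data
    purpose: str           # why you do it (legitimate business purpose)
    legal_basis: str       # consent, contract, legitimate interest, ...
    data_categories: list  # which personal data are involved
    retention: str         # how long you keep it, and why
    recipients: list       # who receives the data

records = [
    ProcessingRecord(
        activity="CCTV recording in warehouses",
        purpose="theft prevention and locating misplaced items",
        legal_basis="legitimate interest",
        data_categories=["video footage of staff"],
        retention="30 days, then overwritten",
        recipients=["internal security team"],
    ),
]

# A plain JSON dump of this list is already a useful artifact to show
# both to yourself and to a data subject asking what you do.
print(json.dumps([asdict(r) for r in records], indent=2))
```

The point is not the code, it is the discipline: if you cannot fill in the «purpose» and «retention» fields for an activity, that activity is exactly the «just in case» data gathering you should stop.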

KRACK: no big deal either

Either your vital communications are already end-to-end encrypted, or you have more reasons to worry than just KRACK.

  • Endpoints are movable. A communication channel that ran over a direct patch cord yesterday may go around half of the internet tomorrow: someone decides to move one of the endpoints to the cloud, to a different location, or wherever. And if you ever use your laptop or smartphone on public wifi, the attack surface never changed for you at all.

  • You cannot reliably protect all endpoints on an Ethernet-like network 100% of the time. Chances are, someone is already sniffing you from a compromised device; the probability of that is much higher than of anyone getting through the (relatively) short KRACK vulnerability window.

  • Do you watch your wired infrastructure closely enough? Are you sure that not just every network socket, but every centimetre of your network cabling is under control? Really? If the TV screen or printer in your public conference room is connected to the office network without 802.1x and VLAN separation, KRACK is not your issue.

On the doorsteps of ivory tower: encryption for a "demanding" customer

Recently I took a somewhat deeper-than-intended dive into the wonderful world of so-called “secure communications” (don’t ask me why; maybe I will tell you eventually). No, not Signal or Protonmail, nor Tox or OTR. I mean paid (and rather expensive) services and devices you have probably never heard of (and had every reason not to). Do names like Myntex, Encrochat, Skyecc, Ennetcom ring a bell? They probably don't, and that's how it should be, unless they fuck something up spectacularly enough to hit the newspaper headlines (some of them really did).

Three lessons should be learned

FIRST, while experts are discussing technical peculiarities, John Q. Public is not interested in all that technobabble. This attitude constitutes a security issue in its own right, but at least it is well known, and we know what we need to do: educate the customer about several basic concepts that are intuitive and easy for a non-technical person (OPSEC, attack surface, threat models, supply chain security, the encryption key life cycle, etc.), and then leave everything «more technical» to a trustworthy independent audit.

Right? NO. Those people are not interested AT ALL (technobabble included), and they treat your aforementioned audit with the same amount of interest. Your educational initiative goes the same way, since the entire syllabus you call «the very basics every human being must understand» fits comfortably into the «technobabble» category of the customer's world view. For them, «military-grade security» is just as convincing as «we had a public independent review»: both are little more than white noise, and the former still beats the latter. Let alone the popular opinion about audits: «You could have compromised your security by allowing god-knows-who to look into the implementation details! That was careless!»

SECOND, since “business” customers do not really care about technology, you cannot demonstrate the trustworthiness of your solution through its technological correctness. There is no common ground, no scientific consensus; no expert is trusted, everything is «my word against your word», and no audit is reliable (which is yet another reason nobody is interested in audits).

For your customers, the very notion of «trust» implies interpersonal relations. They cannot trust anything but people. A piece of software being trusted? Or, better still, trusted for one particular property? Those notions are not welcome in a businessman's brain. That may not be entirely a detriment: at the end of the day, we cannot eliminate the «human factor» from software as long as humans write it (with all the backdoors and easter eggs).

Trust, as your customers understand it, is all about loyalty. Trust, as you understand it, is an expression of your knowledge of the software's capabilities. Perhaps someone should stop abusing the word, and I suggest we stick with the older meaning. Get yourself a new word! On the other hand, the traditional loyalty-driven interpretation of trust leads to horrible decisions in the context of infosec: a catastrophic clusterfuck of any magnitude is easily forgivable as long as it is caused by mere negligence rather than sabotage. «Yeah, people make mistakes, but they did their best, right? They TRIED!»

THIRD, trust issues with people lead those customers into miserable situations: they know people no better than they know technology, yet for no good reason they feel more confident in that area. Running a successful business (especially a risky one, if you know what I mean) reinforces the confirmation bias of «knowing people». One day you make a lot of money, and the next you get scammed by a Nigerian prince, a Russian bride, or fake crypto.

I guess I should write a separate essay about liability shifts and self-preservation mechanisms that sometimes fail in unexpected ways for unexpected people, but not now.

The greatest problem with "public" schools

...is that they are NOT public.

Do you, dear public, pay for those schools?
You do… but you pay exactly «for» them, not «to» them. The schools actually receive money from the govt, NOT from you, and you have no control over how that money is distributed. Once the money is given to the schools, it doesn't bear your scent anymore: at that moment it is «the govt's money». The govt decides who gets the money, and for that money a school has to appease the govt, NOT you. These «public» schools are really the govt's schools.

Americans seem to have forgotten the old Russian proverb:
He who dines the girl gets to dance with her.

"Security Management" "Maturity" "Model"

A few days ago I tweeted this picture:

RSA model for security management "maturity"
with a comment: guess what's wrong with this picture (hint: EVERYTHING).

Not everyone got the joke, so I think it deserves an explanation (sorry).


At first glance it makes some sense and reflects a quite common real-world situation: first you start with some «one size fits all», «common sense» security (antivirus, firewall, vulnerability scanner, whatever). Then you get requirements (mostly compliance-driven), then you do risk analysis, and then, voila, you get really good and start talking business objectives. Right?

Wrong.

It is a maturity level model, which means each level is a foundation for the next one and cannot be skipped. Does it work this way? No.

Actually, you make business-driven decisions all the time, from the very beginning. They are not a result; they are the foundation. You may do it inefficiently, but you still do it. The same goes for risk analysis: it may be ad hoc, depending on the size of your business and your insight into how things work, but beyond a certain mid-size level you simply cannot stick to a «checkbox mentality»; you need to prioritize. Only then do checklists and compliance requirements come in, as part of your business risks.
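That ad hoc prioritization can be as crude as a back-of-the-napkin likelihood-times-impact score and still beat checkbox thinking. A minimal sketch (all names and numbers are invented for illustration):

```python
# Ad hoc risk prioritization: rank risks by a crude expected-loss proxy
# (likelihood x impact). The entries below are purely illustrative.
risks = [
    ("ransomware via phishing",        0.6, 9),   # (name, likelihood, impact 1-10)
    ("missing firewall rule review",   0.3, 4),
    ("no AV on kiosk machines",        0.8, 2),
    ("AD compromise via stale admin",  0.4, 10),
]

# Highest score first: this IS a business-driven decision, made long
# before any formal «maturity level» is reached.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"{likelihood * impact:5.1f}  {name}")
```

Notice how the result reorders the list against gut feeling: the compliance-flavored items (firewall reviews, antivirus checkboxes) fall to the bottom, while the business-threatening scenarios rise to the top.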

The picture is upside-down and plain wrong. I understand they need to sell RSA Archer somewhere in there, and that's why they see it this way, but that is no excuse for inverting reality.

On The Public Discourse

The trouble with all serious social troubles is that they do not allow for the prolix, bloated discussion that normal people value so much. Muslims want us dead. Hitlary committed high treason. Douchebank is a fraud. Credit cards are not secure. 2+2==4: there is no room for a discussion!!! Here are some prooflinks, case closed; the public is bored and ignores the issue in question.

On the other hand, the lack of evidence, the absence of a solid research method, and the absurdity of the subject open the gates for creativity, rhetoric, demagoguery, and entertainment of all sorts. One may write volumes on Bigfoot, UFOs, ghosts, gods, multiculturalism, oppression, patriarchy, microaggressions. And I assure you, those volumes will sell magnificently: people love talking much more than thinking.

On hypocrisy and spyware

I said it earlier this century: “state-sponsored malware/spyware developers ARE de facto blackhats”.

There is no «legitimate» third side to receive zero-days. Either you give priority to the software vendor (and contribute to the defensive side) or you do not, and thus contribute to the bad guys. Yes, bad.

Not that I blame vulnerability researchers for being immoral. I am a free market advocate: if a software vendor is not willing to pay a competitive price for vulnerability information, it certainly deserves the consequences. I just hate the hypocrites who fail to admit the obvious fact that they are no different from blackhats, because the «we sell to government and law enforcement only» clause makes no real difference.

But, wait!



They ARE different.

The ideal black market for zero-day exploits is free and open to anyone, including software vendors searching for exploits in their own software. You, as a seller, do not want to sell your exploit to the vendor of the vulnerable software, because you are interested in the exploit's longevity. But on such a black market there is no way for you to know whether a buyer works for the vendor (directly or indirectly).

Contrary to that, the real market (thoroughly regulated by the government) completely rigs the game to the detriment of software vendors. First, a software vendor is explicitly banned from participation (by that very «we sell only to law enforcement» clause): no legitimate purchases for a vendor, tough luck. Second, the market is open to trusted brokers who make huge profits from their government approvals (see the HBGary leak to find out how hard some people try to get a foot in the door, with quite limited success).

Needless to say, the newly proposed so-called «cyber arms regulations» only worsen the situation, making the free black market for zero-day exploits illegal in favor of government-approved brokers.

So they are not «just» blackhats. They are the most vicious breed. They have found the perfect exploit: the government itself. They use regulations to defeat their victims.

A smartphone is a computer (actually, it is not, but it should be)



There is a simple remedy to many information security woes about smartphones.

And it is simple. And extremely unpopular: vendors and operators definitely won't like it.

Here it is: turn the smartphone into a computer. No, not like now. Really.

A computer does not run «firmware» bundled by a «vendor» and «certified for use». It runs an operating system, supplementary components like libraries and device drivers, and applications, both system and user ones.

And there are updates. When there is a bug, an update is issued, not by the computer vendor, but by the OS or other software vendor. Meanwhile, the «firmware» that the FCC et al. should actually care about is the tiny thing running inside the baseband module, which you as a user probably never think about at all.

I've seen people argue that this would break things: due to device fragmentation, people would get broken updates, brick their phones, and overload repair centers. Come on. Have you never seen a bundled OTA firmware update do exactly that? It is actually safer if an update is granular and does not touch things it does not need to.

But you won't ever see an unfixed remote code execution bug linger for years, or even forever, just because your phone vendor decided it is no longer necessary to support your model.

I want my smartphone to be a real computer: with an OS, applications, and no unremovable bloatware burned in by the vendor or (worse) the MNO. Do you?

UPDATE: and surely initiatives like this will get the middle finger they deserve, and no questions could be raised. You may run anything you want on your COMPUTER.

One more lesson to repeat from HackingTeam breach

(this is a copy of my old LinkedIn blog post; I saved it here because LinkedIn sucks big time as a blogging platform)

The full story is here:
pastebin.com/raw/0SNSvyjJ
and it is worth reading for all InfoSec professionals and amateurs: a perfect, outstanding example of an «old school» hack described step by step.

It also provides a classic example of another issue that is often overlooked, or rather intentionally ignored: starting from a certain (rather small) organization size and complexity, a sophisticated attacker WILL compromise your Active Directory. There is no «if» in this sentence: it is inevitable. I've seen many pen tests and many advanced attacks by malicious actors, and ALL, I mean it, ALL of them ended like that.

That leads us to an obvious yet controversial conclusion: certain valuable resources are better kept OFF the domain. This means cutting away a whole branch of the attack graph: no SSO, no access from domain-joined admin workstations, no access recovery via domain-based email, no backups on AD-enabled storage, whatever. Which raises some completely different issues, but that's the deal.

Can you manage this? Can you live with this?