KRACK: no big deal either

Either your vital communications are already end-to-end encrypted, or you have more reasons to worry than just KRACK.

  • Endpoints move. A communication once performed over a direct patch cord link may go around half of the internet the next day: someone decides to move one of the endpoints to the cloud, to a different location, or elsewhere. And if you ever use your laptop or smartphone on public wifi, the attack surface never changed for you at all.

  • You cannot reliably protect all endpoints on an Ethernet-like network 100% of the time. Chances are, someone is sniffing you from a compromised device with a much higher probability than anyone could get through the (relatively) short KRACK vulnerability window.

  • Do you watch your wired infrastructure closely enough? Are you sure that not just every network socket, but every centimetre of your network cabling is under control? Really? If the TV screen or printer in a public conference room is connected to the office network without 802.1x and VLAN separation, KRACK is not your biggest issue.

On the doorstep of the ivory tower: encryption for a "demanding" customer

Recently I took a somewhat deeper-than-intended dive into the wonderful world of so-called “secure communications” (don’t ask me why, maybe I will tell you eventually). No, not Signal or Protonmail, nor Tox or OTR. I mean paid (and rather expensive) services and devices you have probably never heard of (and had every reason not to). Do names like Myntex, Encrochat, Skyecc, Ennetcom ring a bell? Probably they do not, and that is how it should be, unless they fuck something up spectacularly enough to hit the newspaper headlines (some of them really did).

Three lessons should be learned

FIRST, while experts are discussing technical peculiarities, John Q. Public is not interested in all that technobabble. This attitude constitutes a security issue in its own right, but at least it is well known, and we know what we need to do: educate the customer about several basic concepts that are intuitive and easy for a non-technical person — OPSEC, attack surface, threat models, supply chain security, encryption key life cycle and so on. And then we leave everything «more technical» to a trustworthy independent audit.

Right? NO. Those people are not interested AT ALL (technobabble included), and they treat your aforementioned audit with the same amount of interest. Your educational initiative goes the same way, since the entire syllabus you call «the very basics every human being must understand» fits comfortably into the «technobabble» category in the customer's world view. For them, «military grade security» is just as convincing as «we had a public independent review»: little more than white noise, and the former still beats the latter. Let alone the popular opinion about audits: «You could have compromised your security by allowing god knows who to look into the implementation details! That was careless!»

SECOND, since “business” customers do not really care about technology, you cannot demonstrate the trustworthiness of your solution through its technological correctness. There is no common ground, no scientific consensus, no expert is trusted, everything is «my word vs your word», no audit is reliable (and that’s yet another reason nobody is interested in audits).

For your customers the very notion of «trust» implies interpersonal relations. They cannot trust anything but people. A piece of software being trusted? Or better still, trusted for one particular property? Those notions are not welcome in a businessman's brain. However, that may not be a detriment: at the end of the day we cannot eliminate the «human factor» from software as long as humans write it (with all the backdoors and easter eggs). Trust, as your customers understand it, is all about loyalty. Trust, as you understand it, is an expression of your knowledge of the software's capabilities. Perhaps someone should stop abusing the word, and I suggest sticking to the older meaning. Get yourself a new word! On the other hand, the traditional loyalty-driven interpretation of trust leads to horrible decisions in the context of infosec: a catastrophic clusterfuck of any magnitude is easily forgivable as long as it is caused by mere negligence rather than sabotage. «Yeah, people make mistakes, but they did their best, yes? They TRIED!»

THIRD, trust issues with people lead those customers into miserable situations: they know people no better than they know technology, yet for no good reason they feel more confident in that area. Running a successful business (especially a risky one, if you know what I mean) reinforces the confirmation bias about knowing people. First you make a lot of money, and the next day you get scammed by a Nigerian prince, a Russian bride or a fake crypto scheme.

I guess I should write a separate essay about liability shifting and self-preservation mechanisms that sometimes fail in unexpected ways for unexpected people, but not now.

On positive impact of ransomware on information security

I truly hate that I need to write this. And I feel really sorry for those who were forced to learn it the hard way, but don't tell me you weren't warned years in advance. However.


— The end of compliance-driven security is now official. Petya is not impressed by your ISO27K certificate. Nor does it give a flying fsck about your recent audit performed by a Big4 company.
— Make prevention great again (in the detection-dominated world we live in now)! Too busy playing with your all-new AI-driven deep learning UEBA box? Oops, your homework comes first. Get patched, enable SMB signing, check your account privileges and do the other boring stuff, and then you may play.

Did I say BCP and business process maturity? Forget that, I was kidding, hahaha. That's for grown-ups.

Any sales pitch mentioning WannaCry is a scam.

snake oil
To suffer significant damage from WannaCry, you need to craft a redundant clusterfuck of FIVE SIMULTANEOUSLY MET conditions:

  1. Failure to learn from previous cases (remember Conficker? It was a pretty similar thing)
  2. Workflow process failure (why do you need those file shares at all?)
  3. Basic business continuity management process failure (where are your backups?)
  4. Patch management process failure (how do you miss an almost two-month-old critical patch?)
  5. Basic threat intelligence and situational awareness failure (not like in «use a fancy IPS with IoC feed and dashboard with world map on it», more like «read several top security-related articles in non-technical media at least weekly»)

And after you have won this bingo, you expect you can BUY something that will defeat such an ultimate ability to screw up? Duh.
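The "SIMULTANEOUSLY MET" part is the point, and it can be sketched in a few lines. A toy model (all probabilities below are invented for illustration, not taken from any survey): even if each individual process failure is fairly likely on its own, the chance of holding all five at once shrinks multiplicatively.

```python
# Toy model: the five WannaCry preconditions as independent failures.
# The numbers are made up; only the multiplicative structure matters.

failures = {
    "no lessons learned from Conficker-era worms": 0.5,
    "open writable file shares nobody questions":  0.4,
    "no usable backups":                           0.3,
    "critical patch missed for ~2 months":         0.3,
    "no basic situational awareness":              0.4,
}

def bingo_probability(probs):
    """Probability that every independent failure occurs simultaneously."""
    p = 1.0
    for v in probs:
        p *= v
    return p

p = bingo_probability(failures.values())
print(f"full bingo: {p:.3%}")  # product of the five rates above: 0.720%
```

Under these assumed rates the full bingo comes out below one percent, which is exactly why an organization that did hit all five should look at its processes, not at its shopping list.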

"Security Management" "Maturity" "Model"

A few days ago I tweeted this picture:

RSA model for security management "maturity"
with a comment: guess what's wrong with this picture (hint: EVERYTHING).

Not everyone got the joke, so I think it deserves an explanation (sorry).


At first glance it makes some sense and reflects a rather common real-world situation: first you start with some «one size fits all», «common sense» security (antivirus, firewall, vulnerability scanner, whatever). Then you get requirements (mostly compliance-driven), then you do risk analysis, and then voila, you get really good and start talking business objectives. Right?

Wrong.

It is a maturity level model, which means each level is a foundation for the next one and cannot be skipped. Does it work this way? No.

Actually, you make business-driven decisions all the time, from the very beginning. They are not a result, they are a foundation. You may do it in an inefficient way, but you still do it. The same goes for risk analysis: it may be ad hoc, again, depending on the size of your business and your insight into how things work, but from a certain mid-sized level you simply cannot stick to the «checkbox mentality», you need to prioritize. Only then do checklists and compliance requirements come in, as part of your business risks.

The picture is upside-down and plain wrong. I understand they need to sell RSA Archer at some point there, and that's why they see it this way, but that does not constitute an excuse for inverting reality.

"One Brand of Firewall"

Gartner sent me an ad for a rather disturbing report ( www.gartner.com/imagesrv/media-products/pdf/fortinet/fortinet-1-3315BQ3.pdf ) which advocates using «one firewall brand» to reduce complexity.

Sorry, guys, one brand of WHAT?

There is no such thing as a «general purpose firewall» that fits all. It is a mythical device (and this myth has been supported by Gartner for years).
What you call a «firewall» is actually one of three (or more) things:

1) A border/datacenter segmentation device. Think high throughput, ASICs, fault tolerance and basic IPS capabilities.
2) An «office» firewall. Think moderate throughput, egress filtering, in-depth protocol inspection, IAM integration and logging capabilities.
3) WAF. Enough said, WAF is completely different beast, having almost nothing in common with any of those.

Ah, and a VPN server. It is not a firewall (though it should have basic firewall capabilities) and does not fall into any of those categories.
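The mismatch between the three roles above can be made concrete. A minimal sketch (the role names and requirement fields are mine, invented for illustration): model each role's requirement profile as key/value pairs and intersect them; whatever survives the intersection is all that a single «one brand» box could honestly claim to optimize for.

```python
# Illustrative requirement profiles for the three "firewall" roles.
# Field names and values are invented; the point is how little overlaps.

FIREWALL_ROLES = {
    "border/datacenter": {"throughput": "high", "hardware": "ASIC",
                          "inspection": "basic IPS", "fault_tolerance": True},
    "office":            {"throughput": "moderate", "hardware": "general",
                          "inspection": "deep protocol + IAM + logging",
                          "fault_tolerance": False},
    "WAF":               {"throughput": "moderate", "hardware": "general",
                          "inspection": "HTTP application layer",
                          "fault_tolerance": False},
}

def common_requirements(roles):
    """Return the requirement key/value pairs shared by ALL given roles."""
    items = [set(r.items()) for r in roles.values()]
    return dict(set.intersection(*items))

# The intersection across all three roles is empty: there is no single
# requirement profile for a "general purpose firewall" to be built around.
print(common_requirements(FIREWALL_ROLES))
```

With these profiles the printed intersection is `{}`: the office firewall and the WAF share some generic traits, but the border device disagrees with them on every axis, which is the whole argument against «one firewall brand».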

Dear Gartner, have you ever tried to market a pipe-wrench-hair-dryer? You should, you have a talent for that.

On hypocrisy and spyware

I said it earlier this century: "state-sponsored malware/spyware developers ARE de facto blackhats".

There is no «legitimate» third party to receive zero days. Either you give priority to your software vendor (and contribute to the defensive side), or you do not, and you contribute to the bad guys. Yes, bad.

Not that I blame vulnerability researchers for being immoral. I am a free market advocate: if a software vendor is not willing to pay a competitive price for vulnerability information, it certainly deserves the consequences. I just hate hypocrites who fail to admit the obvious fact that they are no different from blackhats — because the «we sell to government and law enforcement only» clause makes no real difference.

But, wait!



They ARE different.

The ideal black market for zero-day exploits is free and open to anyone, including software vendors searching for exploits in their own software. You, as a seller, do not want to sell your exploit to the vendor of the vulnerable software, because you are interested in the exploit's longevity. But on a truly black market there is no way for you to know whether a buyer works for the vendor (directly or indirectly).

Contrary to that, the real market (thoroughly regulated by the government) completely rigs the game to the detriment of software vendors. First, a software vendor is explicitly banned from participation (by that very «we sell only to law enforcement» clause): no legitimate purchases for a vendor, tough luck. Second, it is open to trusted brokers who make huge profits from the fact that they hold government approvals (see the HBGary leak to find out how hard some people try to set foot there, with quite limited success).

Needless to say, the newly proposed so-called «cyber arms regulations» only worsen the situation, making the free black market for zero-day exploits illegal in favor of government-approved brokers.
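The market argument above can be stated as a toy model (all numbers invented): on an open market every buyer might secretly be vendor-affiliated, so the seller's expected exploit longevity carries a discount; a regulated market that bans vendors removes that discount entirely, which is exactly the rigging described.

```python
# Toy model of exploit longevity on open vs vendor-banned markets.
# base_days and the vendor-buyer probability are illustrative guesses.

def expected_longevity(base_days, p_vendor_buyer):
    """Expected useful life of a sold exploit: if the buyer turns out to
    be vendor-affiliated (probability p_vendor_buyer), the bug is patched
    and longevity collapses to ~0; otherwise it lives base_days."""
    return base_days * (1.0 - p_vendor_buyer)

# Open black market: any buyer may quietly work for the vendor.
open_market = expected_longevity(365, p_vendor_buyer=0.25)

# Regulated market: vendors are banned outright, the discount vanishes.
regulated_market = expected_longevity(365, p_vendor_buyer=0.0)

# Regulation raises the seller's expected payoff, favoring approved brokers.
assert regulated_market > open_market
print(open_market, regulated_market)  # 273.75 365.0
```

Nothing here depends on the exact numbers: any nonzero chance of selling to the vendor lowers expected longevity, so banning the vendor always benefits the seller and the broker, never the defenders.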

So they are not «just» blackhats. They are the most vicious breed. They have found the perfect exploit: the government. They use regulations to defeat their victims.

A smartphone is a computer (actually it is not, but it should be)



There is a simple remedy to many information security woes about smartphones.

And it is simple. And extremely unpopular. Vendors and operators definitely won't like it.

Here it is: turn the smartphone into a computer. No, not the way it is now. For real.

A computer does not run «firmware» bundled by the «vendor» and «certified for use». It runs an operating system, supplementary components like libraries and device drivers, and applications, both system and user ones.

And there are updates. When there is a bug, an update is issued, not by the computer vendor, but by the OS or other software vendor. Meanwhile, the «firmware» that the FCC et al. should care about is the tiny thing that runs inside the baseband module, which you, the user, probably never think of at all.

I've seen people arguing that it would break things: due to device fragmentation people would get broken updates, brick their phones and overload repair centers. Come on. Have you never seen a bundled OTA firmware update doing exactly that? It is actually safer if the update is granular and does not touch things it does not need to.

But you will never again see an unfixed remote code execution bug stay around for years, or even forever, because your phone vendor decided it is no longer necessary to support your model.

I want my smartphone to be a real computer. With an OS, applications, and no unremovable bloatware burned in by the vendor or (worse) the MNO. Do you?

UPDATE: and surely initiatives like this one would get the middle finger they deserve, and no such questions could even be raised. You may run anything you want on your COMPUTER.

One more lesson to repeat from HackingTeam breach

(this is a copy of my old LinkedIn blog post; I saved it here because LinkedIn sucks big time as a blogging platform)

The full story is here:
pastebin.com/raw/0SNSvyjJ
and it is worth reading for all InfoSec professionals and amateurs: a perfect, outstanding example of an «old school» hack, described step by step.

It also provides us with a classic example of another issue that is often overlooked, or rather intentionally ignored: starting from a certain (rather small) organization size and complexity, a sophisticated attacker WILL compromise your Active Directory. There is no «if» in this sentence: it is inevitable. I've seen many pen tests and many advanced attacks by malicious entities — ALL, I mean it, ALL of them ended like that.

That leads us to an obvious yet controversial conclusion: certain valuable resources are better kept OFF the domain. This means cutting away a whole branch of the attack graph: no SSO, no access from domain-joined admin workstations, no access recovery via domain-based email, no backups on AD-enabled storage, whatever. Which raises some completely different issues, but there it is.
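The "cut the whole branch of the attack graph" idea is easy to demonstrate with plain graph reachability. A minimal sketch (the graph and node names are invented for illustration; edges mean "compromise of A yields a path to B"):

```python
from collections import deque

def reachable(graph, start, target):
    """Breadth-first search: can an attacker starting at `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical attack graph: once AD falls, every AD-derived path opens up.
attack_graph = {
    "phished workstation": ["active directory"],
    "active directory":    ["admin workstation", "domain email", "ad storage"],
    "admin workstation":   ["crown jewels"],  # SSO from a domain-joined admin box
    "domain email":        ["crown jewels"],  # access recovery via domain mail
    "ad storage":          ["crown jewels"],  # backups on AD-enabled storage
}
assert reachable(attack_graph, "phished workstation", "crown jewels")

# Take the asset off the domain: every AD-derived edge to it vanishes at once.
off_domain = {k: [v for v in vs if v != "crown jewels"]
              for k, vs in attack_graph.items()}
assert not reachable(off_domain, "phished workstation", "crown jewels")
```

One change, three severed paths: that is what removing an entire branch, rather than hardening individual edges, buys you.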

Can you manage this? Can you live with this?

"We are not responsible"



More than 90 percent of corporate executives said they cannot read a cybersecurity report and are not prepared to handle a major attack, according to a new survey.

More distressing is that 40 percent of executives said they don't feel responsible for the repercussions of hackings, said Dave Damato, chief security officer at Tanium, which commissioned the survey with the Nasdaq.


Seriously. They are «not responsible»! Who is, then? Those guys are paid enormous amounts of money for being MANAGERS. A manager is a person who is responsible — including for solving problems he/she might not truly comprehend, and that's ok. I do not expect them to really know a thing or two about IT or security. An executive should understand business risks; that's enough. If there is a business risk that an executive does not understand and is not willing to understand, he/she should consider getting another job — perhaps McDonald's could offer them an intern position?

Those people say they are utterly incompetent — and they say it in public and get away with it. And everyone thinks that is ok.