KRACK: no big deal either

Either your vital communications are already end-to-end encrypted, or you have more to worry about than just KRACK.

  • Endpoints are movable. A communication that ran over a direct patch cord yesterday may go around half of the internet tomorrow: someone decides to move one of the endpoints to the cloud, to a different location, or elsewhere. And if you ever use your laptop or smartphone on public wifi, KRACK never changed your attack surface at all.

  • You cannot reliably protect all endpoints on an Ethernet-like network 100% of the time. Chances are, someone is far more likely to be sniffing you from a compromised device than to squeeze through the (relatively) short KRACK vulnerability window.

  • Do you watch your wired infrastructure closely enough? Are you sure that not just every network socket, but every centimetre of your network cabling is under control? Really? If the TV screen or printer in a public conference room is connected to the office network without 802.1X and VLAN separation, KRACK is the least of your issues.

On the doorsteps of ivory tower: encryption for a "demanding" customer

Recently I took a somewhat deeper-than-intended dive into the wonderful world of so-called “secure communications” (don’t ask me why, maybe I will tell you eventually). No, not Signal or Protonmail, nor Tox or OTR. I mean paid (and rather expensive) services and devices you have probably never heard of (and had every reason not to). Do names like Myntex, Encrochat, Skyecc, Ennetcom ring a bell? Probably not, and that is how it should be, unless they fuck something up spectacularly enough to hit the newspaper headlines (some of them really did).

Three lessons should be learned

FIRST, while experts are discussing technical peculiarities, John Q. Public is not interested in all that technobabble. This attitude constitutes a security issue in its own right, but at least it is well-known and we know what to do: educate the customer about several concepts that are basic, intuitive and accessible to a non-technical person — OPSEC, attack surface, threat models, supply chain security, encryption key life cycle etc. Then we leave everything «more technical» to a trustworthy independent audit.

Right? NO. Those people are not interested AT ALL (technobabble included), and they treat your aforementioned audit with the same amount of interest. Your educational initiative goes the same way, since the entire syllabus you call «the very basics every human being must understand» fits comfortably into the «technobabble» category of the customer's world view. For them «military grade security» is just as convincing as «we had a public independent review» — little more than white noise, and the former still carries more weight than the latter. Let alone the popular opinion about audits: «You could compromise your security by allowing god-knows-who to look into the implementation details! That was careless!»

SECOND, since “business” customers do not really care about technology, you cannot demonstrate the trustworthiness of your solution through its technical correctness. There is no common ground, no scientific consensus, no expert is trusted, everything is «my word vs your word», no audit is reliable (and that’s yet another reason nobody is interested in audits).

For your customers the very notion of «trust» implies interpersonal relations. They cannot trust anything but people. A piece of software being trusted? Or better still, trusted for one particular property? Such notions are not welcome in a businessman's brain. That may not be entirely a detriment, though. At the end of the day we cannot eliminate the «human factor» from software as long as humans write it (with all the backdoors and Easter eggs). Trust (as your customers understand it) is all about loyalty. Trust (as you understand it) is an expression of your knowledge of the software's capabilities. Perhaps someone should stop abusing the word, and I suggest sticking to the older meaning. Get yourself a new word! On the other hand, the traditional loyalty-driven interpretation of trust leads to horrible decisions in the context of infosec. A catastrophic clusterfuck of any magnitude is easily forgivable as long as it is caused by mere negligence as opposed to sabotage. «Yeah, people make mistakes, but they did their best, yes? They TRIED!»

THIRD, trust issues with people lead those customers into miserable situations: they know people no better than they know technology, yet for some reason they feel more confident in that area. Running a successful business (especially a risky one, if you know what I mean) reinforces the confirmation bias about knowing people. First you make a lot of money, and the next day you get scammed by a Nigerian prince, a Russian bride or a fake cryptocurrency.

I guess I should write a separate essay about liability shift and self-preservation mechanisms that sometimes fail in unexpected ways for unexpected people, but not now.

On positive impact of ransomware on information security

I truly hate that I need to write this. And I feel really sorry for those who were forced to learn it the hard way, but don't tell me you weren't warned years in advance. However.


— The end of compliance-driven security is now official. Petya is not impressed with your ISO27K certificate. Nor does it give a flying fsck about your recent audit performed by a Big4 company.
— Make prevention great again (in the detection-dominated world we live in now)! Too busy playing with your all-new AI-driven deep learning UEBA box? Oops, your homework comes first. Get patched, enable SMB signing, check your account privileges and do the other boring stuff, and then you may play.

Did I say BCP and business process maturity? Forget that, I was kidding, hahaha. That's for grown-ups.

Any sales pitch mentioning WannaCry is a scam.

snake oil
To suffer a significant damage from WannaCry, you need to craft a redundant clusterfuck of FIVE SIMULTANEOUSLY MET conditions:

  1. Failure to learn from previous cases (remember Conficker? It was pretty much the same thing)
  2. Workflow process failure (why do you need those file shares at all?)
  3. Basic business continuity management process failure (where are your backups?)
  4. Patch management process failure (how do you miss an almost two-month-old critical patch?)
  5. Basic threat intelligence and situational awareness failure (not like in «use a fancy IPS with IoC feed and dashboard with world map on it», more like «read several top security-related articles in non-technical media at least weekly»)

And after winning that bingo, you expect you can BUY something that will defeat such an ultimate ability to screw up? Duh.

The greatest problem with "public" schools

...is that they are NOT public.

Do you, dear public, pay for those schools?
You do… but you pay exactly «for» them, not «to» them. The schools actually receive money from the govt, NOT from you, and you have no control over how that money is distributed. Once the money is handed to the schools it doesn't bear your scent anymore — at that moment it is the «govt's money». The govt decides who gets the money, and for that money a school has to appease the govt, NOT you. These «public» schools are in fact the govt's schools.

Americans seem to forget the old Russian proverb:
He who dines the girl gets to dance her (roughly: he who pays the piper calls the tune).

An Open Letter to Mr. Thunderf00t, the YouTube Physicist-in-Chief for Debunking Bad Science

Dear Mr. Thunderf00t, you have recently published a series of videos about melting gold in strange contraptions (or, one might say, «stupid setups»). This series culminated in the episode called «Will Burning Diamond Melt Gold?». I quote:
Gold melts at 1064 C, Diamond burns at 2700 C — this should be enough to melt gold, will a diamond melt gold?
Then you put a ~0.25 g diamond on a 1 g gold coin, ignited the diamond in a pure oxygen atmosphere and waited for it to burn a hole in the coin. The diamond happily burned to ashes and the coin remained intact.

This «failure» left you and your audience confused:
How so?! It burnt so HOOOOOOOT! and melted nothing...

Spoiler alert: ENERGY TRANSFER.

Given that a few days earlier you had successfully melted a bead of gold placed in a cavity inside a burning graphite block (what a surprise that this contraption worked!), your confusion is legitimately cringeworthy.

FUCKING SHAME!!!

I want you to understand the magnitude of this shame. Mr. Thunderf00t is not merely an official scientist, as many imbeciles are; he has a real discovery in his portfolio, an achievement that even Stephen Hawking's portfolio lacks. Mr. Thunderf00t is a real scientist — not a cosmologist or something — he knows his science and he is capable of conducting meaningful experiments. A man of this qualification was led astray by the notion of temperature. So much astray that laymen of ancient Egypt would laugh at his «gold melting» contraptions, so obviously do they run against the most basic common-sense understanding of thermodynamics available to humans for the last 10 000 years… 20 000? Once again, pay attention: a credible scientist forgot to calculate the energy balance of his experiment before burning real diamonds.
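For the curious, that energy balance takes three lines of Python. This is a back-of-envelope sketch using approximate textbook values (carbon combustion enthalpy ~32.8 kJ/g, gold specific heat ~0.129 J/(g·K), gold heat of fusion ~63.7 J/g, starting from room temperature), so treat the numbers as order-of-magnitude only:

```python
# Back-of-envelope energy balance for the burning-diamond experiment.
# Assumed approximate textbook values (not from the video itself):
#   combustion of carbon: ~32.8 kJ/g (393.5 kJ/mol / 12 g/mol)
#   specific heat of gold: ~0.129 J/(g*K); heat of fusion: ~63.7 J/g
#   gold melts at 1064 C; assume a ~25 C start

DIAMOND_G = 0.25
GOLD_G = 1.0

e_released = DIAMOND_G * 32_800                    # J given off by the burning diamond
e_needed = GOLD_G * (0.129 * (1064 - 25) + 63.7)   # J to heat the coin and melt it

print(f"released by combustion: {e_released:.0f} J")
print(f"needed to melt the coin: {e_needed:.0f} J")
print(f"ratio: {e_released / e_needed:.0f}x")
```

The combustion releases roughly forty times the energy needed to melt the coin, so the total energy was never the problem; the problem is how little of it is actually transferred into the gold rather than radiated and convected away.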

I therefore propose to remove the notion of temperature from the middle school physics curriculum, for it is overwhelmingly confusing and marginally useful.



P.S.
make a funny experiment:
calculate «the temperature» of a 10 GeV proton, call it X (note the number of zeroes in the result),
and then ask a credentialed physicist: will a proton heated up to X degrees Celsius melt a hole in a thin gold foil?
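For those who want to play along, here is the naive conversion worked through in Python. It deliberately uses the simple kinetic-theory relation E = (3/2)kT, i.e. exactly the thoughtless «energy equals temperature» translation the exercise pokes fun at:

```python
# "Temperature" of a single 10 GeV proton, via the naive E = (3/2) k T.
K_BOLTZMANN = 1.380649e-23   # J/K
EV_TO_J = 1.602176634e-19    # J per eV

e_kinetic = 10e9 * EV_TO_J            # 10 GeV expressed in joules
t_kelvin = 2 * e_kinetic / (3 * K_BOLTZMANN)

print(f"kinetic energy: {e_kinetic:.3e} J")      # ~1.6e-9 J, i.e. about 1.6 nJ
print(f'"temperature":  {t_kelvin:.3e} K')       # ~7.7e13 K -- count the zeroes
```

A single proton «at» tens of trillions of kelvin carries less than two nanojoules, which is why it will not melt a hole in anything: temperature without an energy budget tells you nothing.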

"Security Management" "Maturity" "Model"

A few days ago I tweeted this picture:

RSA model for security management "maturity"
with a comment: guess what's wrong with this picture (hint: EVERYTHING).

Not everyone got the joke, so I think it deserves an explanation (sorry).


At first glance it makes some sense and reflects a quite common real-world situation: first you start with some «one size fits all», «common sense» security (antivirus, firewall, vulnerability scanner, whatever). Then you get requirements (mostly compliance-driven), then you do risk analysis, and then, voila, you get really good and start talking business objectives. Right?

Wrong.

It is a maturity level model, which means each level is a foundation for the next one and cannot be skipped. Does it work this way? No.

Actually you make business-driven decisions all the time, from the very beginning. They are not a result, they are the foundation. You may do it inefficiently, but you still do it. Same with risk analysis: it may be ad hoc, depending on the size of your business and your insight into how things work, but from some mid-size level up you simply cannot stick to the «checkbox mentality», you need to prioritize. Only then do checklists and compliance requirements come in, as part of your business risks.

The picture is upside-down and plain wrong. I understand they need to sell RSA Archer at some point, and that's why they see it this way, but that does not excuse inverting reality.

"One Brand of Firewall"

Gartner sent me an ad for a quite disturbing report ( www.gartner.com/imagesrv/media-products/pdf/fortinet/fortinet-1-3315BQ3.pdf ) which advocates using «one firewall brand» to reduce complexity.

Sorry, guys, one brand of WHAT?

There is no such thing as a «general purpose firewall» that fits everyone. It is a mythical device (and the myth was supported by Gartner for years).
What you call a «firewall» is actually one of three (or more) things:

1) A border/datacenter segmentation device. Think high throughput, ASICs, fault tolerance and basic IPS capabilities.
2) An «office» firewall. Think moderate throughput, egress filtering, in-depth protocol inspection, IAM integration and logging capabilities.
3) WAF. Enough said, WAF is completely different beast, having almost nothing in common with any of those.

Ah, and a VPN server. It is not a firewall (though it should have basic firewall capabilities) and does not fall into any of those categories.

Dear Gartner, have you ever tried to market a pipe-wrench-hair-dryer? You should, you have a talent for that.