A hacker’s guide to not getting hacked

Assume you are a technically savvy person who knows the basics. You never install random crap from the internet. A typical phishing email makes you laugh; you almost pity the people who can be fooled by scams that crude. Your phone screen is locked by default, and you use a password manager.

Still, in multiple ways (some of them inevitable, many of them obscure and cryptic) you depend on things (software, data, hardware) that you neither own nor control locally (unless you are extremely paranoid and have accepted a huge operational overhead). And I do not mean the continuously bleeding “digital trail” of metadata and behavioural tracking. Let’s talk about the risk of being really hacked.

Understand (and prioritize) your assets


You obviously give top priority to your bank accounts and launch codes, but some other things might be more costly to lose than you imagine at first glance.
«There is nothing secret there, and I have all that information elsewhere» might, in case of a loss event, leave you spending a whole weekend handpicking it from… from said «elsewhere» and sorting everything into the precise OCDish order it used to be in. Some other things may be of sentimental value and of little interest to anyone else, yet you certainly do not want everyone to see them (and you are likely underestimating the importance of your feelings until said «everyone» actually sees those files). You had better archive all conversations with your exes instead of leaving them hanging in your chat history (which will not help much unless the other parties do the same). Ah, and if you own some cryptocurrency, I hope you keep it in a local wallet; otherwise you do not really own it.

Understand (and prioritize) your attack surface


A computer is not that unsafe unless you physically lose it (with no disk encryption in place) or forgot (you did not, right?) to remove Adobe Flash from your system. If you have adopted the good habit of handling MS Office and PDF documents with web-based services (as long as the content of said documents is not secret), you further improve the overall security of your local system.

Thus, your chances of getting hacked via a 0day exploit (unless you really do store actual launch codes on your computer) are negligible.

Portable devices, meaning iOS and Android… Really, you cannot be serious. Android malware? The kind that requires your consent, like, three times to run (ok, or just once if your system is five years old and full of unpatched security bugs)? Again, your worst-case scenario always implies someone getting physical possession of your device.

Third-party services (I want to avoid the word “cloud” because the worst offenders are not even cloud, see below) are your only real concern. They are something completely outside of your control. That is not necessarily bad, but they may be vastly distributed, prone to breaches and to accidental social engineering attacks, and all of those factors are unpredictable. To add insult to injury, they often actively encourage, just a little short of enforcing, “questionable” security practices. So the rest of the essay will be dedicated to them.

Build dependency graphs and rebuild them wisely



Here is where the hell begins. The common consensus dictates that the general public is totally incapable of managing their own access credentials without either losing them or leaking them to a malicious entity. Consequently, everything revolves around "recovery methods", which make your access credentials tightly interdependent and susceptible to cascade failure unless you take special care of that. Let's do a simple dissection.
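To make the dependency graph idea concrete, here is a minimal sketch (the account names and the `recovers` mapping are made up for illustration) that models accounts and their recovery paths as a directed graph and computes what a single compromised phone number cascades into:

```python
# A toy model of credential dependencies: an edge A -> B means
# "compromising A lets an attacker recover (take over) B".
from collections import deque

recovers = {
    "phone_number": ["main_email", "facebook", "bank"],  # SMS recovery codes
    "main_email":   ["facebook", "booking", "password_manager_backup"],
    "facebook":     ["booking"],                          # social login
}

def cascade(start):
    """Everything reachable (i.e. takeover-able) from a single compromise."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in recovers.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(cascade("phone_number"))
# -> main_email, facebook, bank, booking, password_manager_backup (order varies)
```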

Worst of all is your phone number used for verification, because a phone number is something you do not actually own in any way. Even the SIM card is legally the property of the cellular operator, which provided it to you on lease (just like your bank card). You do not own the number, and you do not even own the network authentication key (try to reprogram or extract it!). As a practical consequence, not to mention slightly more complicated SS7 attacks, any cellular operator employee may issue a replacement SIM card on your behalf to anyone who looks credible enough to them.

The funniest part is that it is the sole, ultimately trusted method for many services, including some banks, instant messengers and, yes, Facebook. On Facebook, you cannot stop it from offering SMS recovery codes unless you remove the phone number from the system completely. Google is a bit less persistent; it just keeps whining about how insecure it is not to have an SMS recovery option (UPD: no, you need to remove the phone number there, too).

Then there are email accounts. You may have multiple ones, and you may consider different threat models that would convince you either to stay on Gmail (at least it is really hard to divert anything in transit there) or to run your own mail server (think BGP hijacks, domain hijacks and all that nasty stuff; email is generally not designed to be very secure). Anyway, once one of your critical email accounts gets compromised, things can get very nasty. It is one more «recovery method» you often cannot simply turn off (and in most cases, you do not really want to).

Then there are social network logins. Aside from the direct problems you would have, many services will let you in on an email/phone match without any additional checks. Say, if you log into Booking.com with Facebook, you do not provide any authorization on the Booking.com side to recognize your Facebook login: you just let Facebook share your email and phone number, and you are instantly connected.

Security questions should, without any doubt, be nominated as one of humanity’s stupidest inventions. Just imagine how attractive it is to give up access to your account to any person who happens to find out some trivial fact about you, especially one of those facts that sit in a public database. Yet you can suppress your disgust and treat them as ordinary recovery codes: make sure you use a decent random string generator. You may instead use answers that are really private, but those are non-reusable, and you should keep that in mind.
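As a minimal sketch of the «random string generator» approach (the question labels are just examples; the generated answers go straight into your password manager):

```python
# Generate random, unguessable "answers" to security questions and store
# them in your password manager; never answer the questions truthfully.
import secrets

questions = ["Mother's maiden name?", "First pet's name?", "City of birth?"]

for q in questions:
    # ~96 bits of entropy, URL-safe, easy to copy-paste
    print(f"{q} -> {secrets.token_urlsafe(12)}")
```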

To minimize dependencies and maximize security, you need to turn off all unnecessary recovery options (especially phone numbers), turn on a second factor (one-time passwords, definitely not SMS!), and generate recovery codes and store them in a safe place. Offline and on a hardcopy, preferably.
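For reference, the one-time passwords in question are plain RFC 6238 TOTP: the whole second factor is an HMAC over a shared secret and the current time, with no SMS and no carrier anywhere in the loop. A minimal sketch (the base32 secret below is a made-up example; real ones come from the service's enrollment QR code):

```python
# Minimal RFC 6238 TOTP generator.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one
```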

None of this guarantees anything. The security models of most services are incredibly stupid, because they are ad hoc and lack consistency from the ground up. I designed a better one for a customer a while ago; it was as simple as this:

* every authentication factor that is not under the direct control of the user is considered insecure
* no combination of insecure authentication factors may ever be treated as secure
* recovery procedures that rely on insecure factors should be implemented with extreme caution.
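A toy rendering of those three rules (the factor names and their classification below are illustrative, not taken from the original design):

```python
# Rule 1: anything not under the user's direct control is insecure.
INSECURE = {"sms", "phone_call", "security_question", "recovery_email"}
SECURE = {"password", "totp", "hardware_key", "printed_recovery_code"}

def is_strong(factors):
    # Rule 2: no combination of insecure factors ever adds up to "secure".
    return bool(factors & SECURE)

def allow_recovery(factors):
    # Rule 3: recovery that relies on insecure factors alone demands extreme
    # caution; this sketch simply refuses it.
    return is_strong(factors)

print(is_strong({"sms", "security_question"}))  # False: still insecure
print(is_strong({"sms", "totp"}))               # True
```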

Is it hard to understand or hard to implement? No. Does it impair user experience? Unlikely. Yet it does not scale well enough for services with billions of customers, unfortunately for them. Therefore, when you accept Google's or Facebook's policy, you are solving their problem at your own expense.

Have a contingency plan


There are no silver bullets, and my advice is not a panacea. Anything can fail. Be prepared.

Some useful links:
On second factor: ithipster.com/blog/unorthodox/34.html
On NIST warning not to use SMS: www.theregister.co.uk/2016/12/06/2fa_missed_warning/
On cascade failures in authentication: https://www.cs.uic.edu/~polakis/papers/sso-usenix18.pdf

Self-signed TLS certificates are not evil, nor are they "broken"

One more hopeless rant I have been engaged in for, like, the last 20 years or more. What is totally broken, however, is the UX decisions that mark them «evil». Self-signed TLS certificates possess no more intrinsic evil than your beloved ssh and gpg keys.

The intent behind ostracising self-signed certificates is noble: everybody should do… should be forced to do things the one proper way: for an intranet you should deploy your own private root CA and distribute its certificate to all the clients; for the internet there are affordable solutions like letsencrypt to save you from the calamities of certificate management and from huge expenses.

Yet it is nothing but wishful thinking and thus rotten to the root. Every, I repeat, every single company network I have ever seen (with a few exceptions that qualify as hobbyist projects) had tons and tons of self-signed stuff, regardless of whether it had an internal CA or not. Most of it was just «temporary», yet you know that the most permanent things are the «temporary» ones. Smaller companies simply do not have an internal CA at all.

(I intentionally leave out the question of whether it is always a good idea to get a «trusted third party» involved, and, moreover, to grant infinite, total trust to a vast and vague crowd of «third parties», for now at least.)

We need to accept this situation, understand it and adapt accordingly. The rather narrow vulnerability window for a self-signed certificate exists only at the very moment you make the «initial» contact with the resource (or when the certificate changes, which is quite an uncommon scenario). If no evil actor intervened at that moment, you are safe from then on; there are numerous scenarios where there is actually no need for any trusted third party at all. It works the same way with ssh (unless you have deployed that complex set of scripts, you know). Oh, no. You would be safe, if those stupid, broken UIs did not complicate things, distracting you with a never-ending flood of pointless warnings and thereby effectively concealing the actual attack when it happens.
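For illustration, here is a minimal trust-on-first-use sketch in the spirit of ssh's known_hosts (the pin file name and the host are made up, and a real client would verify the certificate inside the TLS handshake rather than fetching it separately):

```python
# Trust-on-first-use for a self-signed certificate: pin the fingerprint
# on first contact, then only complain if it ever changes.
import hashlib, json, ssl
from pathlib import Path

PINS = Path("known_certs.json")  # hypothetical local pin store

def check_pin(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    fingerprint = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
    pins = json.loads(PINS.read_text()) if PINS.exists() else {}
    if host not in pins:
        pins[host] = fingerprint              # first contact: remember it
        PINS.write_text(json.dumps(pins, indent=2))
        return True
    return pins[host] == fingerprint          # later contacts: must match

print(check_pin("intranet.example.local"))    # placeholder host
```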

For now, however, we see a lot of totally unprotected resources susceptible to MITM attacks because, you know, self-signed certificates were proclaimed evil and you should not use them anyway, mkay? And that is why nearly all small/home office wireless environments are contaminated with the hideously misdesigned WPA2-PSK (at least until WPA3 arrives): WPA2-Enterprise requires complicated «certificate management», you cannot just say «remember this access point» on the client, and consequently no vendor bothers to build a RADIUS server into the wireless controller.

Every time I add a self-signed certificate to Safari I get a scary dialog about «changing my trust settings», which always makes me wonder: did I just add a site certificate to the trust store, or did I just grant full permissions to Honest Achmed's root CA? With the current workflow it is hard to tell.

SSL: welcome to digital GULAG

Let's take a look at the seemingly innocent practice of OCSP stapling. Basically, it is a certification that your certificate is valid, with this certificate of your certificate's validity being issued by your certificate authority and bundled with your original certificate. Sounds perfect! If only we could certify the validity of this second certificate too, with a third certificate issued by the same authority, that would be enough. Certainly three certificates are enough for everyone. Right?

This practice stems from OCSP (an Internet protocol used for obtaining the revocation status of a certificate), which is not nearly as funny, and far from «innocent»:

The original OCSP implementation can introduce a significant cost for the certificate authorities (CA) because it requires them to provide responses to every client of a given certificate in real time. For example, when a certificate is issued to a high traffic website, the servers of CAs are likely to be hit by enormous volumes of OCSP requests querying the validity of the certificate.

Do you see what this quote implies? That FOR EACH SSL CONNECTION YOU MUST ASK THE AUTHORITY'S PERMISSION. The certificate authority is now an authority that decides whether to allow or refuse your SSL connections. In real time. You no longer decide to connect to a host of your choice; that decision is moving to some authorities.

Let that sink in.
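To make the per-connection round trip concrete, here is a rough sketch of what a plain (non-stapled) client-side OCSP check looks like with the `cryptography` package; the certificate files and the responder URL are placeholders, and in reality the responder URL is taken from the certificate's AIA extension:

```python
# Before trusting the connection, ask the CA's responder, in real time,
# whether this particular certificate is still allowed.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("ca.pem", "rb").read())

request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
)

response = requests.post(
    "http://ocsp.example-ca.invalid",  # placeholder OCSP responder
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)

# GOOD, REVOKED or UNKNOWN; no (or negative) answer means no connection.
print(ocsp.load_der_ocsp_response(response.content).certificate_status)
```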

====
P.S. Certificate revocation (outside SSL) is not that dangerous or absurd. It was initially designed to work OFFLINE, i.e. all certificates, requests and answers are strictly timestamped, which makes revocation lists valuable and transferable; all of this was designed for post-factum verification of documents and the like.

"Security Management" "Maturity" "Model"

A few days ago I tweeted this picture:

[picture: RSA model for security management "maturity"]
with a comment: guess what's wrong with this picture (hint: EVERYTHING).

Not everyone got the joke, so I think it deserves an explanation (sorry).


At first glance it makes some sense and reflects a quite common real-world situation: first you start with some «one size fits all», «common sense» security (antivirus, firewall, vulnerability scanner, whatever). Then you get requirements (mostly compliance-driven), then you do risk analysis, and then, voila, you get really good and start talking business objectives. Right?

Wrong.

It is a maturity level model, which means each level is a foundation for the next one and cannot be skipped. Does it work this way? No.

Actually, you make business-driven decisions all the time, from the very beginning. They are not a result, they are a foundation. You may do it in an inefficient way, but you still do it. The same goes for risk analysis: it may be ad hoc, again depending on the size of your business and your insight into how things work, but past a certain mid-size level you simply cannot stick to the «checkbox mentality», you need to prioritize. Only then do checklists and compliance requirements come in, as part of your business risks.

The picture is all upside-down and plain wrong. I understand they need to sell RSA Archer at some point in there, and that is why they see it this way, but that does not constitute an excuse for inverting reality.

An open letter to Mr. John Kelly, the Homeland Security Secretary

Dear Mr. Kelly,
do you realize that you lose the ability to attribute a suspect's social media account to said suspect the moment you obtain the password to said account?
Once you own the password, the account is attributed to YOU, shithead, thus rendering all your claims about the suspect's alleged activity on the account completely worthless.

Resign immediately! You know _NOTHING_ about security or elementary logic; you are utterly unqualified for the position of Homeland Security Secretary.

"One Brand of Firewall"

Gartner sent me an ad for a quite disturbing report ( www.gartner.com/imagesrv/media-products/pdf/fortinet/fortinet-1-3315BQ3.pdf ) which advocates using «one firewall brand» to reduce complexity.

Sorry, guys, one brand of WHAT?

There is no such thing as a «general purpose firewall» that fits all. It is a mythical device (and this myth has been supported by Gartner for years).
What you call a «firewall» is actually one of three (or more) things:

1) A border/datacenter segmentation device. Think high throughput, ASICs, fault tolerance and basic IPS capabilities.
2) An «office» firewall. Think moderate throughput, egress filtering, in-depth protocol inspection, IAM integration and logging capabilities.
3) A WAF. Enough said; a WAF is a completely different beast, having almost nothing in common with either of the above.

Ah, and a VPN server. It is not a firewall (though it should have basic firewall capabilities) and does not fall into any of those categories.

Dear Gartner, have you ever tried to market a pipe-wrench-hair-dryer? You should, you have a talent for that.

A smartphone is a computer (actually, it is not, but it should be)



There is a simple remedy for many of the information security woes around smartphones.

And it is simple. And extremely unpopular. Vendors and operators definitely won't like it.

Here it is: turn the smartphone into a computer. No, not like it is now. Really.

A computer does not run «firmware» bundled by a «vendor» and «certified for use». It runs an operating system, supplementary components like libraries and device drivers, and applications, both system and users'.

And there are updates. When there is a bug, an update is issued, not by the computer vendor but by the OS or other software vendor. Meanwhile, the «firmware» that the FCC et al. should care about is the tiny thing running inside the broadband module, which you, the user, probably never think about at all.

I've seen people arguing that this would break things: due to device fragmentation people will get broken updates, brick their phones and overload repair centers. Come on. Have you never seen a bundled OTA firmware update do exactly that? It is actually safer if the update is granular and does not touch things it does not need to.

And you will no longer see an unfixed remote code execution bug sit there for years, or even forever, just because your phone vendor decided it is no longer necessary to support that model.

I want my smartphone to be a real computer. With an OS, applications, and no unremovable bloatware burned in by the vendor or (worse) the MNO. Do you?

UPDATE: and surely initiatives like this will get the middle finger they deserve, and no questions could even be raised: you may run anything you want on your COMPUTER.

One more lesson to repeat from the HackingTeam breach

(this is a copy of my old LinkedIn blog post; I saved it here because LinkedIn sucks big time as a blogging platform)

The full story is here:
pastebin.com/raw/0SNSvyjJ
and it is worth reading for all InfoSec professionals and amateurs: a perfect, outstanding example of an «old school» hack, described step by step.

It also provides a classic example of another issue that is often overlooked, or rather intentionally ignored: starting from a certain (rather small) organization size and complexity, a sophisticated attacker WILL compromise your Active Directory. There is no «if» in this sentence: it is inevitable. I've seen many pen tests and many advanced attacks by malicious entities, and ALL, I mean it, ALL of them ended like that.

That leads us to an obvious yet controversial conclusion: certain valuable resources are better kept OFF the domain. This cuts away a whole branch of the attack graph: no SSO, no access from domain-joined admin workstations, no access recovery via domain-based email, no backups on AD-enabled storage, whatever. It raises some completely different issues, but that's it.

Can you manage this? Can you live with this?