GMOs And Passwords

Before you embark on an experiment investigating the effects of some quality of a subject, it is best to make sure beforehand that the quality in question actually belongs to your subject.

We colloquially say «a red pencil» as if it were not a question whether a pencil can be red. Indeed, it can. In this particular case our «intuition» coincides with physical reality. We can create an experiment that demonstrates that any colour can be a quality of a pencil. We can clearly define «red» as a specific feature of the light spectrum, and we can unambiguously link such a spectrum to each pencil. We can see (experimentally) that some pencils share this quality, while some do not. Even if the dividing line between these sets is fuzzy, we now have a CHARACTERISTIC PROPERTY of a «red pencil»: all red pencils share this property, and all non-red ones lack it. Facing a pencil, we can (experimentally) determine whether it is red (and to what extent).

It is perfectly legitimate for anyone to call a pencil «red» or otherwise tag a pencil with a colour, because of the physics, not because the language allows it. Language is equally suitable for describing reality and nonsense. We can still call a pencil «aggressive», but it does not make physical sense: aggressiveness cannot be observed in pencils. There are many qualities applicable to pencils, and there are many qualities inapplicable to them. Some qualities are plainly inapplicable to some objects; this fact is so basic that it is often forgotten.

Now, I give you two grains of wheat, one is «GMO» and another isn't.
Can you conceive an experiment that tells me which is which?

Maybe it is time to take one step back and determine whether «GMO» is a quality of an organism at all? Is there any CHARACTERISTIC PROPERTY of a «GM organism», something that all «GM» subjects share while none of the rest have? Please, define this property for me. ...or simply ask yourself, every time you are looking for the magical label on the food package: what is this characteristic property I am looking for?

Now, as you have yelled all your suggestions at me, think carefully which of them is actually a property of an organism. Not a single one. All that you have come up with are qualities of a production process, or a design process, or something even earlier. None of those can be observed in a grain of wheat.

Observing a car, can you tell, for example, the difference between a car that was sketched with an HB pencil and one sketched with a 2B pencil during its development stage? In the case of a car you would not claim that all qualities of the design phase are inherited by the product. You may consider me foolish for even suggesting this possibility. It is too obvious to you that a car and a car production process are two wildly different objects. OK, then. What makes you claim that the «GM» property of an organism design process is also a quality of the resulting organism? Hopefully you are not going to claim that organisms and their production processes are the same object.

However, you may legitimately conjecture that this particular property somehow translates from the design process to the organism. This is why I gave you these two grains of wheat. Take them and prove your conjecture. Show me the CHARACTERISTIC PROPERTY of «GMO».

I know you are wondering what all this nonsense has to do with passwords.
Well, this is all about information entropy, which you happily assign to your passwords without even a glimpse of doubt: IS IT REALLY A QUALITY OF A PASSWORD??? CAN I CREATE A CHARACTERISTIC RELATION THAT MAPS PASSWORDS ONTO REAL NUMBERS AND IS A FUNCTION???
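To make the point concrete, here is a minimal sketch (the numbers are illustrative assumptions): entropy is a property of the generating process, not of the string it happens to emit, so the very same string can carry wildly different amounts of it.

```go
package main

import (
	"fmt"
	"math"
)

// processEntropy returns the Shannon entropy, in bits, of a process
// that picks uniformly among n equally likely outcomes. Note the
// argument: a process, never a string.
func processEntropy(n float64) float64 {
	return math.Log2(n)
}

func main() {
	// The same string "2017" can come out of two different processes:
	pin := processEntropy(10000) // a uniformly random 4-digit PIN
	year := processEntropy(1)    // "just use the current year"

	fmt.Printf("%q as a random PIN:     %.1f bits\n", "2017", pin)  // 13.3 bits
	fmt.Printf("%q as the current year: %.1f bits\n", "2017", year) // 0.0 bits
	// Same string, different entropy: no function from passwords to
	// real numbers can capture this.
}
```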

"One Brand of Firewall"

Gartner sent me an ad for a quite disturbing report ( ) which advocates using «one firewall brand» to reduce complexity.

Sorry, guys, one brand of WHAT?

There is no such thing as a «general purpose firewall» that fits all. It is a mythical device (and this myth has been supported by Gartner for years).
What you call a «firewall» is actually one of three (or more) things:

1) A border/datacenter segmentation device. Think high throughput, ASICs, fault tolerance and basic IPS capabilities.
2) An «office» firewall. Think moderate throughput, egress filtering, in-depth protocol inspection, IAM integration and logging capabilities.
3) A WAF. Enough said: a WAF is a completely different beast, having almost nothing in common with either of those.

Ah, and a VPN server. It is not a firewall (though it should have basic firewall capabilities) and does not fall into any of those categories either.

Dear Gartner, have you ever tried to market a pipe-wrench-hair-dryer? You should, you have a talent for that.

One more lesson to repeat from HackingTeam breach

(this is a copy of my old LinkedIn blog post; I saved it here because LinkedIn sucks big time as a blogging platform)

The full story is here:
and it is worth reading for all InfoSec professionals and amateurs: a perfect, outstanding example of an «old school» hack, described step by step.

It also provides a classic example of another issue that is often overlooked, or rather intentionally ignored: beyond a certain (rather small) organization size and complexity, a sophisticated attacker WILL compromise your Active Directory. There is no «if» in this sentence: it is inevitable. I've seen many pen tests and many advanced attacks by malicious entities, and ALL, I mean it, ALL of them ended like that.

That leads us to an obvious, yet controversial conclusion: certain valuable resources are better kept OFF the domain. This means cutting away a whole branch of the attack graph: no SSO, no access from domain-joined admin workstations, no access recovery via domain-based email, no backups on AD-enabled storage, whatever. Which raises some completely different issues, but that's it.

Can you manage this? Can you live with this?

On positive impact of ransomware on information security

I truly hate that I need to write this. And I feel really sorry for those who were forced to learn it the hard way, but don't tell me you weren't warned years in advance. However.

— The end of compliance-driven security is now official. Petya is not impressed with your ISO27K certificate. Nor does it give a flying fsck about your recent audit performed by a Big4 company.
— Make prevention great again (in the detection-dominated world we live in now)! Too busy playing with your all-new AI-driven deep-learning UEBA box? Oops, your homework goes first. Get patched, enable SMB signing, check your account privileges and do all the other boring stuff, and then you may play.

Did I say BCP and business process maturity? Forget that, I was kidding, hahaha. That's for grown-ups.

Any sales pitch mentioning WannaCry is a scam.

To suffer significant damage from WannaCry, you need to craft a redundant clusterfuck of FIVE SIMULTANEOUSLY MET conditions:

  1. Failure to learn from previous cases (remember Conficker? It was pretty much the same thing)
  2. Workflow process failure (why do you need those file shares at all?)
  3. Basic business continuity management process failure (where are your backups?)
  4. Patch management process failure (missing an almost two-month-old critical patch?)
  5. Basic threat intelligence and situational awareness failure (not like «use a fancy IPS with an IoC feed and a dashboard with a world map on it», more like «read several top security-related articles in non-technical media at least weekly»)

And after winning this bingo, you expect you can BUY something that will defeat such an ultimate ability to screw up? Duh.

A hacker’s guide to not get hacked

Assume you are a technically savvy person who knows the basics. You never install random crap from the internet. A typical phishing email makes you laugh, you almost pity the mankind which can be fooled by scammers as silly as those. Your phone screen is locked by default, and you use a password manager.

In multiple ways (sometimes inevitable, oftentimes obscure and cryptic) you depend on things (software, data, hardware) you neither own nor control locally (unless you are extremely paranoid and have accepted a huge operational overhead). And I do not mean the continuously bleeding “digital trail” of metadata and behavioural tracking. Let’s talk about the risk of being really hacked.

Understand (and prioritize) your assets

You obviously give the top priority to your bank accounts and launch codes, but some other things might be more expensive than you imagine at first glance.
«There is nothing secret there and I have all that information elsewhere» might, in case of a loss event, lead you to spending a whole weekend handpicking it from… from said «elsewhere», and sorting everything in the precise OCDish way it used to be. Some other things may be of sentimental value and of little interest to anyone but you, yet you certainly do not want everyone to see them (and you are likely underestimating the importance of your feelings, until said «everyone» sees those files). You had better archive all conversations with your exes instead of leaving them hanging in your chat history (which would not help much unless the other parties do the same). Ah, and if you own some cryptocurrency, I hope you keep it in a local wallet; otherwise you do not really own it.

Understand (and prioritize) your attack surface.

A computer is not that unsafe unless you physically lose it (while having no disk encryption in place) or forget (you did not, right?) to remove Adobe Flash from your system. If you adopt the good habit of handling MS Office and PDF documents with web-based services (as long as the content of said documents is not secret), you further improve the overall security of your local system.

Thus, your chances of getting hacked via a 0day exploit (unless you really do store actual launch codes on your computer) are negligible.

Portable devices, I mean iOS and Android… Really, you cannot be serious. Android malware? Which requires your consent, like, three times to run (OK, or just once if your system is five years old with unpatched security bugs)? Again, your worst-case scenario always implies someone getting physical possession of your device.

Third-party services (I want to avoid the word “cloud” because the worst things aren’t even cloud, see below) are your only real concern. They are something completely outside of your control. That is not necessarily bad, but they may be vastly distributed, prone to breaches, accidents and social engineering attacks, and all those factors are unpredictable. To add insult to injury, they often actively encourage, just a little bit short of enforcing, “questionable” security practices. So the rest of the essay will be dedicated to them.

Build dependency graphs and rebuild them wisely

Here the hell begins. Common consensus dictates that the general public is totally incapable of managing their own access credentials without either losing access or leaking it to a malicious entity. Thus, everything revolves around «recovery methods», which make your access credentials tightly interdependent and susceptible to cascade failure unless you take special care. Let's do a simple dissection.

Worst of all is your phone number used for verification, because a phone number is something you do not actually own in any way. Even the SIM card is legally the property of the cellular operator, which leased it to you (just like your bank card is). You do not own the number, and you do not even own the network authentication key (try to reprogram or extract it!). As a practical consequence, not to mention the slightly more complicated SS7 attacks, any cellular operator employee may issue a replacement SIM card on your behalf to anyone who looks credible enough to them.

The funniest part is that it is the sole ultimate trusted method for many services, including some banks, instant messengers and, of course, Facebook. On Facebook, you cannot stop it from offering SMS recovery codes unless you remove the phone number from the system completely. Google is a bit less persistent; it just keeps whining about how insecure it is not to have an SMS recovery option (UPD: no, you need to remove the phone number there, too).

Then there are email accounts. You may have multiple ones, and you may consider different threat models, which would either convince you to stay on Gmail (at least it is really hard to divert anything in transit there) or to run your own mail server (think BGP hijacks, domain hijacks and all that nasty stuff; email was generally not designed to be very secure). Anyway, once one of your critical email accounts gets compromised, things can get very nasty. It is one more «recovery method» you often cannot simply turn off (and in most cases, you do not really want to).

Then there are social network logins. Not to mention the direct problems you would have, many services will let you in on an email/phone match without additional checks. Say, if you log into some service with Facebook, you do not provide any authorization on the service's side to recognize your Facebook login: you just let Facebook share your email and phone number, and you are instantly connected.

Security questions should, without any doubt, be nominated as one of humanity’s stupidest inventions. Just imagine how attractive it would be to give up access to your account to any person who happens to find out some trivial fact about you, especially one of those facts that can be found in a public database. Yet you can suppress your disgust and treat them as ordinary recovery codes. Make sure you use a decent random string generator. You may use answers that are really private, but those are non-reusable, and you should keep that in mind.
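If you do treat security questions as recovery codes, a random answer generator is all you need. A minimal sketch using Go's crypto/rand (the alphabet and the length of 20 are arbitrary choices of mine):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// randomAnswer generates a cryptographically random alphanumeric string
// to be used as a "security answer" and stored in a password manager,
// treating the question as just another recovery code.
func randomAnswer(length int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	out := make([]byte, length)
	for i := range out {
		// pick one alphabet index uniformly at random
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		out[i] = alphabet[n.Int64()]
	}
	return string(out), nil
}

func main() {
	a, err := randomAnswer(20)
	if err != nil {
		panic(err)
	}
	fmt.Println("Mother's maiden name:", a)
}
```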

To minimize dependencies and maximize security, you need to turn off all unnecessary recovery options (especially phone numbers), turn on a second factor (one-time passwords, definitely not SMS!), generate recovery codes and store them in a safe place. Offline and on hardcopy, preferably.

It does not guarantee anything. The security models of most services are incredibly stupid, because they are ad hoc and lack consistency from the ground up. I designed a better one for a customer a while ago; it was as simple as this:

* every authentication factor that is not under the direct control of the user is considered insecure
* no combination of insecure authentication factors may be ever treated as secure
* recovery procedures that rely on insecure factors should be implemented with extreme caution.
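One possible reading of the first two rules as code (a sketch; the factor classifications in main are my illustrative assumptions, not a complete model):

```go
package main

import "fmt"

// Factor models an authentication factor. UserControlled encodes rule 1:
// a factor not under the direct control of the user is considered insecure.
type Factor struct {
	Name           string
	UserControlled bool
}

// secureCombination encodes rule 2: no combination of insecure factors
// may ever be treated as secure, so at least one user-controlled factor
// is required before a combination can count as secure.
func secureCombination(factors []Factor) bool {
	for _, f := range factors {
		if f.UserControlled {
			return true
		}
	}
	return false
}

func main() {
	sms := Factor{"SMS code", false}    // the operator owns your number
	email := Factor{"email link", false} // hosted by a third party
	totp := Factor{"TOTP app", true}    // the secret lives on your device

	fmt.Println(secureCombination([]Factor{sms, email})) // false
	fmt.Println(secureCombination([]Factor{totp}))       // true
}
```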

Is it hard to understand or hard to implement? No. Does it impair user experience? Unlikely. Yet it does not scale well enough for services with billions of customers, unfortunately for them. Therefore, when you accept Google's or Facebook's policy, you are solving their problem at your own expense.

Have a contingency plan

There are no silver bullets, and my advice is no panacea. Anything can fail. Be prepared.

Some useful links:
On second factor:
On NIST warning not to use SMS:
On cascade failures in authentication:

Self-signed TLS certificates are not evil, nor are they "broken"

One more hopeless rant I have been engaged in for, like, the last 20 years or more. What is totally broken, however, is the UX decisions that mark them «evil». Self-signed TLS certificates possess no more intrinsic evil than your beloved ssh and gpg keys.

The intent behind ostracising self-signed certificates is noble: everybody should do… should be forced to do things the one proper way: for an intranet you should deploy your own private root CA and distribute its certificate to all the clients; for the internet there are affordable solutions like letsencrypt to save you from the calamities of certificate management and huge expenses.

Yet it is nothing but wishful thinking and thus is rotten to the root. Every, I repeat, every single company network I have ever seen (with a few exceptions that qualify as hobbyist projects) had tons and tons of self-signed stuff, regardless of whether it had an internal CA or not. Most of it was just «temporary», yet you know that the most permanent things are those «temporary» ones. Smaller companies simply do not have an internal CA at all.

(I intentionally leave out the question of whether it is always a good idea to get a «trusted third party» involved, and, moreover, to grant infinite total trust to a vast and vague crowd of «third parties», for now at least.)

We need to accept this situation, understand it and adapt accordingly. The pretty narrow vulnerability window of a self-signed certificate exists only at the very moment you engage in the «initial» contact with the resource (or when the certificate changes, which is quite an uncommon scenario). If no evil actor intervened at that moment, you are safe from then on; there are numerous scenarios where there is actually no need for any trusted third party. It works the same way with ssh (unless you deployed that complex set of scripts, you know). Oh, no. You would be safe, if those stupid broken UIs did not complicate things, distracting you with a never-ending flood of pointless warnings, effectively concealing the actual attack when it happens.

For now, however, we see a lot of totally unprotected resources susceptible to MITM attacks, because, you know, self-signed certificates were proclaimed evil and you should not use them anyway, mkay? And that is why nearly all small/home office wireless environments are contaminated with the hideously misdesigned WPA2-PSK (at least until WPA3 arrives from the horizon): WPA2-Enterprise requires complicated «certificate management», you cannot just say «remember this access point» on the client, and no vendor bothers to ship a built-in RADIUS server on the wireless controller.

Every time I add a self-signed certificate to Safari, I get a scary dialog about «changing my trust settings», which always makes me doubt: did I just add a site certificate to the trust store? Or was it Honest Achmed's root CA that I just granted full permissions? With the current workflow it is hard to tell.

Generators in GO

A generator, simply put, is a function that returns an iterator. It is useful when you do not need to load all the data into memory before processing, when you cannot receive all the data at once, or when you generate it on the fly. Some languages have a special keyword, yield, which returns a value and suspends the function until the next value is requested from the iterator.

Let’s create a generator.

def gen(steps):
	for i in range(0, steps):
		yield i * i
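For comparison, Go has no yield keyword, but the same lazy production can be sketched with a goroutine and a channel (one common idiom, not the only one):

```go
package main

import "fmt"

// gen mimics the Python generator above: it returns a channel that
// yields i*i for i in [0, steps), produced lazily by a goroutine.
func gen(steps int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch) // closing the channel ends the range loop below
		for i := 0; i < steps; i++ {
			ch <- i * i
		}
	}()
	return ch
}

func main() {
	for v := range gen(4) {
		fmt.Println(v) // 0, 1, 4, 9
	}
}
```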

The yield will generate something like this
Read more →

Testing in GO

When the creators of Go were designing the language, they cared about the developers who would use it in real work. One of the most important parts of development is testing, and they made it very convenient and easy to use, since they use it themselves.

To create a test, it is enough to declare a ".go" file with the "_test" suffix in its name. This file will be ignored by the compiler when you build your application. When you run the tests, «go test» compiles the "_test.go" files of each module into an independent test binary and runs the test cases one by one.

Let's write a simple Go test

First of all, we need something to test.

// fact.go
package fact

// Fact returns the factorial of v (the recursion bottoms out at v < 2).
func Fact(v int) int {
	switch {
	case v < 2:
		return 1
	case v == 2:
		return 2
	default:
		return v * Fact(v-1)
	}
}

Read more →

RTB on fingers

RTB (Real-Time Bidding) is a mechanism for monetizing traffic at a higher price, or for selling traffic that the ad network cannot monetize itself. It can be compared to a big auction where one ad network is the auctioneer and all the others are participants. The auctioneer announces a lot, some participants place bids, and the bid with the highest price wins the lot.
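The auction mechanic above can be sketched in a few lines of Go (network names and prices are made up; real RTB exchanges add timeouts, floors and price rules on top):

```go
package main

import "fmt"

// Bid is one participant's offer for the lot (the ad impression).
type Bid struct {
	Network string
	Price   float64
}

// winner picks the highest bid, as in the first-price auction described
// above: the auctioneer announces the lot and the highest price wins.
func winner(bids []Bid) (Bid, bool) {
	if len(bids) == 0 {
		return Bid{}, false // nobody bid, the lot is unsold
	}
	best := bids[0]
	for _, b := range bids[1:] {
		if b.Price > best.Price {
			best = b
		}
	}
	return best, true
}

func main() {
	bids := []Bid{{"netA", 0.8}, {"netB", 1.2}, {"netC", 0.5}}
	if w, ok := winner(bids); ok {
		fmt.Printf("lot goes to %s at %.2f\n", w.Network, w.Price) // netB at 1.20
	}
}
```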


Read more →