On hypocrisy and spyware

I said it earlier this century: "state-sponsored malware/spyware developers ARE de facto blackhats".

There is no «legitimate» third side to receive zero days. Either you give priority to your software vendor (and contribute to the defensive side), or you do not and contribute to the bad guys. Yes, bad.

Not that I blame vulnerability researchers for being immoral. I am a free market advocate: if a software vendor is not willing to pay a competitive price for vulnerability information, it certainly deserves the consequences. I just hate hypocrites who fail to admit the obvious fact that they are no different from blackhats — because the «we sell to government and law enforcement only» clause makes no real difference.

But, wait!



They ARE different.

The ideal black market for zero day exploits is free and open to anyone, including software vendors searching for exploits in their own software. You, as a seller, do not want to sell your exploit to the vendor of the vulnerable software, because you are interested in the exploit's longevity. But on the black market there is no way for you to know whether a buyer works for the vendor (directly or indirectly).

Contrary to that, the real market (thoroughly regulated by the government) completely rigs the game to the detriment of software vendors. First, a software vendor is explicitly banned from participation (by that very «we sell only to law enforcement» clause): no legitimate purchases for a vendor, tough luck. Second, the market is open to trusted brokers who make huge profits from the fact that they have government approval (see the HBGary leak to find out how hard some people try to get a foot in the door, with quite limited success).

Needless to say, the newly proposed so-called «cyber arms regulations» only worsen the situation, making the free black market for zero day exploits illegal in favor of government-approved brokers.

So they are not «just» blackhats. They are the most vicious breed. They have found a perfect exploit for the government. They use regulations to defeat their victims.

A smartphone is a computer (actually, it is not, but it should be)



There is a simple remedy to many information security woes about smartphones.

And it is simple. And extremely unpopular. Vendors and operators definitely won't like it.

Here it is: turn the smartphone into a computer. No, not like now. Really.

A computer does not run «firmware» bundled by the «vendor» and «certified for use». It runs an operating system, supplementary components such as libraries and device drivers, and applications, both system and user.

And there are updates. When there is a bug, an update is issued, not by the computer vendor, but by the OS or other software vendor. Meanwhile, the «firmware» that the FCC et al. should care about is the tiny thing running inside the baseband module, which you as a user probably never think about at all.

I've seen people argue that it would break things: due to device fragmentation, people will get broken updates, brick their phones, and overload repair centers. Come on. Have you never seen a bundled OTA firmware update do exactly that? An update is actually safer if it is granular and does not touch things it does not need to.

But you will never again see an unfixed remote code execution bug linger for years, or even forever if your phone vendor decides it is no longer necessary to support your model.

I want my smartphone to be a real computer. With an OS, applications, and no unremovable bloatware burned in by the vendor or (worse) the MNO. Do you?

UPDATE: and surely initiatives like this will get the middle finger they deserve, and no questions can be raised. You may run anything you want on your COMPUTER.

"We are not responsible"



More than 90 percent of corporate executives said they cannot read a cybersecurity report and are not prepared to handle a major attack, according to a new survey.

More distressing is that 40 percent of executives said they don't feel responsible for the repercussions of hackings, said Dave Damato, chief security officer at Tanium, which commissioned the survey with the Nasdaq.


Seriously. They are «not responsible»! Who is, then? These people are paid enormous amounts of money for being MANAGERS. A manager is a person who is responsible — including for solving problems he or she might not truly comprehend, and that's OK. I do not expect them to really know a thing or two about IT or security. An executive should understand business risks; that is enough. If there is a business risk that an executive does not understand and is not willing to, he or she should consider getting another job; perhaps McDonald's could offer them an intern position?

These people say they are utterly incompetent — and they say it in public and get away with it. And everyone thinks it is OK.

I still fail to understand why ransomware is such a big deal



I've seen a lot of companies where it is not — not necessarily big corporations with huge IT staffs. There is simply no reason to keep anything of significant value on a workstation (and quite a few reasons to keep it on a file share), and it is not a huge complication to live that way.

I'd be more worried about the fact that if you've got ransomware (or any malware at all), it means you have been compromised. You are just lucky that the attacker was not sophisticated enough to take any other advantage of the situation (in a way that would be even more harmful to you): maintaining covert access for an indefinite amount of time and silently ruining your business in a way you wouldn't even be able to identify before it's too late.

So it is not about desktop backups, or antivirus, or advanced anti-APT self-guided silver bullets. It is about you.

Some thoughts on enterprise risk management, security awareness and stuff



It all started as a Facebook discussion. A colleague of mine witnessed an impressive talk at a conference: a representative of a penetration testing company claimed he could hack any company in one hour. He was challenged to do it, and here is how he did:

With a simple search of social networks and the company's website, he profiled the target company and obtained the contact details of a salesperson. Then he crafted a simple trojan executable (not really tailored at this point, just a generic one), encrypted the archive, and sent it to that person; then he called by phone, pretending he had an urgent business proposal, and mentioned the email he had just sent.

The salesperson replied: «I cannot open the documents, my antivirus does not allow me to». «Strange, which one?» «(some name)» «OK, I will send you a new archive, it should work». And it did (this time it was a better-crafted trojan).

Yes, simple as that.

Could it be thwarted with proper training?

Yes. And no.

You may expect some vigilance from a person who understands the risks.
But what are the risks, and could training help to understand them?

From the salesperson's perspective, chances are it is just a technical issue. A salesperson estimates the probability of that as, say, 90% (we may discuss his reasoning later).

“If I manage to close the deal, circumventing the procedures that do not allow me to open these documents, I get, say, a $30K bonus. If I do not, I get nothing.
There is a 10% chance that a malicious hacker is trying to steal data from the company. If the hacker succeeds, and I get the blame, I will be fired and lose, say, $50K in total consequences.”

Given that our salesman has decent experience and has learned some basic probability theory, it is totally acceptable for him to ignore the danger; this is a reasonably profitable strategy that incurs no extra cost. Add some internal competition among salespeople and you can easily see that he will play this lottery again and again.
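A minimal sketch of that expected-value reasoning, using the hypothetical figures above (90% odds of a genuine deal, a $30K bonus, a $50K personal downside):

```python
def expected_payoff(p_legit, gain, loss):
    """Probability-weighted outcome of opening the attachment:
    with probability p_legit it is a genuine deal worth `gain`,
    otherwise it is an attack that costs `loss`."""
    return p_legit * gain - (1 - p_legit) * loss

# Salesperson's personal stakes: open the archive vs. ignore it.
open_it = expected_payoff(0.9, 30_000, 50_000)  # roughly +$22,000
ignore_it = 0.0                                  # no deal, no blame, no bonus
# Positive expectation: ignoring the danger is the "rational" personal choice.
```

With these numbers, opening the attachment is worth about $22K in expectation, so unless the perceived probability of attack or the personal penalty rises dramatically, training alone will not flip the decision.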

That's how single-parameter optimisation works. One cannot simply turn the «money-seeking zombie» mode off.

Let's talk about someone a bit higher in the corporate food chain, or even at the top of it — the CEO, CFO, VP of sales, etc.

The perspective changes drastically. If the contract is secured, the company gets $1M. If a large-scale network breach occurs, sensitive data are leaked, or something similar happens, the company loses $15M. And that person's bonus is affected accordingly.

The balance is all different now (even if we assume probabilities to be the same).
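The same sketch at the company level, keeping the assumed 90/10 odds but swapping in the corporate stakes ($1M contract vs. $15M breach):

```python
def expected_payoff(p_legit, gain, loss):
    """Probability-weighted outcome: gain with probability p_legit, loss otherwise."""
    return p_legit * gain - (1 - p_legit) * loss

# Company-level stakes under the same assumed 90/10 odds.
company = expected_payoff(0.9, 1_000_000, 15_000_000)  # roughly -$600,000
# Negative expectation: what was rational for the salesperson is a losing bet for the firm.
```

The sign flips purely because of the stakes, not the probabilities, which is exactly why the executive and the salesperson optimise in opposite directions.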

Who is our CISO (or whoever is in charge of the data security) working for? The answer is obvious.

But there are caveats, as usual.

The first caveat is that if, say, our worst-case loss is estimated to be low and the associated damage benign, then the do-nothing strategy of risk acceptance (as bad as it sounds) is a business-justified course of action.
If you dislike this choice, you may try to spend some resources to decrease the probability and the impact, but don't expect the business side to be very cooperative. It is still a lot of money, but not enough to let you interfere with any revenue-generating processes.

And the second caveat is more serious: all our risk estimates are produced by the business risk management process, which is an enigma to us, a black box. Either it works, or we blindly assume it works because it is «someone else's problem».

If business risk management is ad hoc, does not exist in your organisation, or is non-functional, it gets substituted with “information security risk management”, whose most prominent «information sources» are «FBI/CSI reports», «SEC-mandated leak disclosures», and «industry analysis reports» — the highest-grade nonsense, zero relevance guaranteed.

It is better than nothing to base a guess on, but a blatant attempt to sell our qualitative estimation as quantitative data is a pure hoax.

However, chances are there is no risk management at all in your company, not even a dysfunctional one.

I think most people in the industry know that, but most are afraid to tell the truth aloud.

If you do not know your business environment, your probability estimate is pointless.
If you do not know the real business impact of a breach, your loss estimate is baseless.

Multiply these to get nonsense squared.

But you need to “justify” your security choices anyway. Scaremongering sounds like a decent plan now?

The Root Of All Evil



For a million years you have been trained to reason about the physical world as perceived by your senses. You evolved to search for patterns and to assume animal agency by default (simply because the cost of a mistake is lower with that assumption). Then came computer programs… They are invisible to your senses, they do not follow any patterns, they can make computers appear animate, they can disguise themselves as a reasonable actor or fool your senses otherwise. And on top of it all, they do not obey the laws of physics, the laws that your brain perceives as unbreakable for any agent in the visible world. This is a disaster for your neolithic brain.