Kronos malware indictment highlights the risk of trust

Some years ago, I found that several of my employees were criminals. As I fired them and turned the evidence I had over to law enforcement, I realized that I had another problem: These ex-employees knew my company, my network, my strategies, my defenses and, especially, my weaknesses.

When I was later attacked, I suspected that they were the perps because of stylistic quirks of the attack, as well as the intimate knowledge required to damage me in the way they did. I turned that evidence over to law enforcement as well.

Some of these never-indicted ex-employees are now part of the industry I was in at that time. I’ve sat next to some of them in summit meetings, and I might even have had a few of them as customers.

I keep sitting by the river, waiting for bodies to float by. I am a patient man.

This thread of events came to mind today, when Marcus Hutchins was indicted by a federal grand jury for taking an active role in the Kronos banking Trojan horse in 2014 and 2015. On one hand, indictment and conviction are worlds apart. And in this case—and in the United States at this time—Hutchins is innocent until proved guilty.


But on the other hand, he was the hero of the moment a few months ago, when he single-handedly stopped the WannaCry ransomware attack, simply by registering an Internet domain name that this malicious software used as an off-switch.

Hutchins was thereafter invited to speak at several security conferences of industry insiders, which is how I met him. He was also invited to join at least one electronic community of insiders of which I am a member. And join he did.

If guilt is proved, we ought to worry about who Hutchins’ friends and associates are, and how much of the daily rantings and information sharing to which he’s been party has by now been shared with other criminals.

Right now, there is only an indictment, and it could all blow over. Given that Hutchins released some demonstration malware as open-source software in 2014, this indictment might all be some grand misunderstanding. But in light of the kind of dual agency suspected here, it’s not too soon to consider the risks of information sharing.

The good guys—those of us who recognize and respect property rights, and who defend those rights online—have to be able to work together to have a motivating impact. We need to expose defense methods and sometimes dangerous nonpublic observations in order to receive or provide the cooperation our relevance and success require. That cooperation, in turn, relies on trust, and trust does not scale well, as potentially shown by today’s indictment of Hutchins.

A human is able to innately reach a “trust state” with at most a few dozen people. After that, concepts, rather than perceptions and gut feelings, are necessary to grow a circle of trust. But even with strong rules for trust, including vouching and vetting, very few humans can even conceptually trust a group of more than about 75 other humans.

Someone whose only trust indicator is that everyone in the group has been vetted by some handful of others won’t necessarily behave trustfully by, say, sharing sensitive information, or even exposing that sensitive information exists by asking a question of the whole group.

These limitations are among the structural defects in digital defense I keep ranting about and that I hope to speak about at the SANS Institute’s Cyber Threat Intelligence Summit in January. Bad guys don’t have to trust, or even know the real identity of, other members of an attack team to succeed and prosper.

In a sense, there’s a limit to how much damage Hutchins could have done to the trust community he and I were both members of until his indictment today. It has thousands of members, and truly sensitive information is rarely exposed in a group that large. By the nature of its population size, it’s not a trustworthy and trustful group. However, in spite of the paranoia-level caution we use when interacting with one another, we expose a lot of information in mostly meaningless bite-size chunks.

Noting that data science has taught all of us quite a lot about deanonymization and various kinds of re-correlation, it’s safe to say that if Hutchins actually is a bad guy, he might have weakened the powers of Internet security goodness by sharing with other bad guys even just the character and social structure of the industry insider community of which he has been a member.

Since the usual damage cost after such an injury is dominated by overreaction and counterreaction, I fear that we’ll soon see even more paranoia, less information shared, and less cooperation. That would hurt us all more than it could possibly help us. So, I urge calm.

The only obvious constructive lesson for all of us thus far is, don’t be careless or blithe in vouching for others. If you haven’t seen or even communicated with somebody for a year or longer, maybe you should let others with more recent experience than yours vouch for the solidity of that somebody’s character.

I have seen a bandwagon effect from time to time, probably because vouching or inviting or otherwise helping to grow an industry insider community is an action we can take. And so often in this business, there is no action any of us can take. Greater self-evaluation and self-reflection may be warranted here. Creating and maintaining trust must—must!—be hard work.
