The sharp flutter of momentary panic that stabs at you when a website link arrives in your inbox or by text message may not (yet) have a specific term in the Diagnostic and Statistical Manual of Mental Disorders, but that doesn’t mean you’re wrong to worry about whether that link is safe to click on.
Certainly, it’s not hard to imagine a feeling of dread in the aftermath of the 2016 U.S. presidential election campaign, when John Podesta’s email account was notoriously phished by now-indicted Russian hackers.
The incredibly short answer to the question of whether you should click on a suspicious link is, not surprisingly, “No,” says Gabriel Weinberg, CEO and founder of privacy-focused search engine DuckDuckGo. But whether the attack is a generic phishing attempt or a targeted spear-phishing attack, sometimes it seems important to click on a fishy-looking link anyway. Studies have shown for more than two decades that links are, for many people, psychologically compelling bells that must be rung.
Search engines struggle to keep malicious links from consumers because the scammers “frequently” change them to confound traditional blacklisting techniques, Weinberg says.
“The bad actors are constantly creating new domains, new scams, and otherwise subjugate efforts to stop them. The lists get out of date,” he says. “It always will be an arms race.”
A 2018 study by the U.S. Department of Commerce’s National Institute of Standards and Technology found that phishers were more likely to succeed when crafting messages to fit the specific job responsibilities of their targets. A 2016 study at Friedrich-Alexander University in Germany found that half of the 1,700 students who received a simulated phishing email clicked on the link inside, even though 78 percent of the students “knew” the risks, the researchers said.
In a similar Columbia University study, conducted in 2012, despite warnings to targets about malicious links between multiple rounds of simulated phishing attacks, at least one target clicked on one of the links in question in each of the first three rounds.
The list of phishing studies appears almost to be without end. And as Podesta found out, hackers can be extremely clever in crafting emails not only designed for a specific target, but to look compelling and authentic. Once a hacker has convinced a target to click on a surreptitiously malicious link, he or she may not even need to get the target to do anything else: In some cases, the attack requires no user interaction. So-called drive-by attacks recently became a common technique in cryptojacking, which uses victims’ computers and phones to mine cryptocurrencies like Bitcoin and Ethereum.
As endless phishing studies show, the first and most important lesson in stopping a phishing attempt is to be skeptical of links: in emails, in text messages, in anything that appears to be a personal, private communication. If you’re sent an invoice for a product you didn’t order, for example, don’t click on anything in the email; manually search for the vendor’s website to confirm its legitimacy, then call.
Phishing is not the only way to receive a malicious website link. Friends, family, and colleagues often pass them on as part of a chain of jokes or otherwise ostensibly important content. Links can also come to you in a shortened form, truncated by a legitimate third-party service such as Bit.ly, TinyURL, Goo.gl (which Google plans to discontinue in March), or Twitter’s T.co.
Sometimes targets can tell just by looking at the link whether it’s malicious, such as when there are percent signs or other symbols in the URL. Hovering a mouse over a link, or using a URL unshortener, can also sometimes reveal the destination URL.
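The red flags above can be checked programmatically. The sketch below, a set of illustrative heuristics rather than an authoritative or exhaustive test, flags some of the URL traits the article mentions (percent signs and other suspicious symbols), plus a few other commonly cited phishing tells; a clean result does not mean a link is safe.

```python
from urllib.parse import urlparse
import re

def suspicious_url_signs(url: str) -> list[str]:
    """Return a list of heuristic red flags found in a URL (illustrative only)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Percent-encoding can disguise what a link really points to.
    if "%" in url:
        flags.append("percent-encoded characters")
    # An "@" in the authority section makes everything left of it a decoy.
    if "@" in parsed.netloc:
        flags.append("userinfo (@) in URL")
    # A raw IP address instead of a domain name is a common phishing trait.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("raw IP address host")
    # Punycode (xn--) hostnames can impersonate familiar brand names.
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode hostname")
    # Deep subdomain nesting (paypal.com.evil.example.com) often imitates a brand.
    if host.count(".") >= 4:
        flags.append("unusually deep subdomain nesting")
    return flags
```

For example, `suspicious_url_signs("http://paypal.com.login.verify.example.com/%70ay")` flags both the percent-encoding and the subdomain nesting, while a plain `https://example.com/` comes back clean.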
Another, often faster way to check (especially when the link in question doesn’t come with a customer service department to call, as an invoice might) is to highlight or right-click the link, copy it, and paste it into a link safety verification service. Many computer security software companies offer link safety-checking services for free on their websites, with no additional software to download.
URL Void runs user-submitted links through 39 link safety checkers at once, including services run by big names in computer security and some specific security subgenres, such as the cryptocurrency-focused BadBitcoin and the phishing-focused PhishTank. URL Void shows which services rated the URL as safe or unsafe, and offers links back to each service, the URL’s IP address, and even the latitude and longitude of the server behind that address. If even one of them rates a site as unsafe, it’s probably best to skip it.
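That “one flag is enough” rule can be stated as a tiny aggregation function. The checker names and verdict strings below are hypothetical stand-ins, not real API responses from URL Void or any other service:

```python
def aggregate_verdict(checker_results: dict[str, str]) -> str:
    """Combine per-checker verdicts ('safe' or 'unsafe') into one recommendation.

    Follows the conservative rule described in the article: a single
    'unsafe' rating is reason enough to skip the link.
    """
    flagged = [name for name, verdict in checker_results.items() if verdict == "unsafe"]
    if flagged:
        return f"skip it ({len(flagged)} of {len(checker_results)} checkers flagged it)"
    # No flags is still not a guarantee: blacklists lag behind new scam domains.
    return "no checker flagged it (not a guarantee of safety)"
```

Note the asymmetry in the design: one negative verdict dominates any number of positives, because, as Weinberg points out, blacklists constantly lag behind newly created scam domains.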
However, manually checking links is another opportunity to introduce human error. While new research explores automatic detection of malicious URLs, the NIST report recommends a more holistic approach.
Kristen Greene, a human factors researcher at NIST, recommends a three-pronged approach that combines training consumers and employees to be aware of attacks, machine learning so technology can be not just reactive but preventative, and providing tools to make it easier for users to report phishing attacks and malicious links.
The context of a message containing a malicious link is a “critical factor” in determining how likely the target is to click on it. “The more the context of the message seems relevant to a person’s life or job responsibilities, the harder it is for them to recognize it as a phishing attack,” she says in a public-service video on NIST’s research.