How Facebook blocks bad code from the outset

If you have a smartphone, you’re no stranger to the constant flow of app updates designed to enhance your experience and security. Some, such as Instagram’s recent update, are particularly noticeable, but many more affect functions that run in the background. These updates come with new code that the apps’ creators must vet for vulnerabilities, such as flaws that could inadvertently expose users’ personal information.

“Making sure an app is secure is as important as making sure it works,” says Karen Sittig, a software engineer at Facebook.

Auditing the underlying code for apps such as Facebook, Instagram, and Messenger requires time and good managerial judgment, balancing security with the level and pace of innovation, Facebook security representative Melanie Ensign says.

“We can’t be one of those security teams that says, ‘no you can’t do this,’ because our company has very ambitious goals for the products that they’re building,” she says. “The security team is responsible for enabling that innovation in a safe and secure way.”

Facebook’s answer for secure innovation is an automated bug detector—essentially a spell-check mechanism to avoid reusing code that contains any known security vulnerabilities. It points out specific areas that need attention. And if it identifies a vulnerability, it requires the developer to fix or replace the code before proceeding.
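The article doesn’t describe the detector’s internals, so the following is only a minimal sketch of the idea: a checker that matches source lines against a list of known-vulnerable patterns and exits nonzero so the change can be blocked until it’s fixed. The rule list, the file name `scan.py`, and the regex-based matching are all hypothetical illustrations, not Facebook’s actual tooling; production detectors typically analyze parsed code, not raw text.

```python
#!/usr/bin/env python3
"""Toy "spell-check for code" (hypothetical sketch, saved as tools/scan.py):
flag lines matching known-vulnerable patterns and exit nonzero so the
change can be blocked until the code is fixed or replaced."""
import re
import sys

# Hypothetical rule list: regex for a risky pattern -> why it's risky.
KNOWN_VULNERABLE_PATTERNS = [
    (re.compile(r"\beval\s*\("), "eval() on untrusted input allows code injection"),
    (re.compile(r"\bpickle\.loads\s*\("), "unpickling untrusted data can execute code"),
    (re.compile(r"\bexecute\s*\(\s*[\"'].*%s.*[\"']\s*%"), "string-formatted SQL invites injection"),
]

def scan(path):
    """Return a list of findings ("file:line: reason") for one source file."""
    findings = []
    with open(path, encoding="utf-8") as source:
        for lineno, line in enumerate(source, start=1):
            for pattern, reason in KNOWN_VULNERABLE_PATTERNS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    all_findings = [finding for path in sys.argv[1:] for finding in scan(path)]
    for finding in all_findings:
        print(finding)
    # Nonzero exit signals "fix or replace the code before proceeding."
    sys.exit(1 if all_findings else 0)
```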

This method, Ensign says, enables Facebook’s software developers to write and release safe code about five times per day. Among the vulnerabilities the bug detector checks for are those that security researchers have identified and brought to the company’s attention via its bug bounty program.

“One of the big challenges in software development is that security can be kind of a whack-a-mole problem,” says Jonathan Aldrich, associate professor and director of the software-engineering Ph.D. program at Carnegie Mellon University. There is a wide variety of software vulnerabilities to check for, and new bugs crop up constantly.

Companies typically address bugs with a combination of approaches, Aldrich says. In addition to teaching software developers to avoid common security pitfalls, they often have security teams comb over new code before each public release.

But “humans aren’t perfect,” Aldrich adds. When software developers push to meet deadlines or simply aren’t aware that particular configurations of code could leave users vulnerable, they can make mistakes.
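One classic example of such a configuration (hypothetical, not drawn from Facebook’s code) is building a database query by pasting user input directly into the SQL string. It’s an easy line to write under deadline pressure, and exactly the kind of pattern an automated detector can flag:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # VULNERABLE: interpolating input into the SQL string lets an attacker
    # smuggle in extra SQL, e.g. name = "x' OR '1'='1" returns every row.
    return conn.execute("SELECT * FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # SAFE: a parameterized query keeps the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```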

“A lot of times, people who are good programmers don’t actually learn good security practices because it’s not how people are trained,” says Frank Wang, a computer-security Ph.D. student at the Massachusetts Institute of Technology and co-founder of The Cybersecurity Factory, a summer program for security startups.

To complement traditional security-team examinations, many companies, young and old, use some form of automated bug detector, Aldrich and Wang say. Software giant Microsoft, among others, has used automated code analysis for a decade. Facebook’s particular method of bug detection, Aldrich says, is somewhat novel because the system prevents developers from even writing code containing known vulnerabilities.
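One common way to enforce that kind of gate, sketched here as a generic workflow rather than Facebook’s actual setup, is to run the scanner from a version-control hook so flagged code can’t even be committed. This assumes the hypothetical `tools/scan.py` checker sketched earlier:

```python
#!/usr/bin/env python3
# Hypothetical git pre-commit hook: run the scanner over the files staged
# for this commit and refuse the commit if anything is flagged.
import subprocess
import sys

# Ask git which files are staged (added, copied, or modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

python_files = [path for path in staged if path.endswith(".py")]
if python_files:
    result = subprocess.run([sys.executable, "tools/scan.py", *python_files])
    if result.returncode != 0:
        print("Commit blocked: fix or replace the flagged code, then retry.")
        sys.exit(1)
```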

But as effective as automated bug detectors might become, they are no replacement for humans, Aldrich and Wang say. In-house security auditors can detect patterns of software vulnerabilities that machines might have trouble discerning, and independent security researchers often find issues that make it past all the in-house security checks.

There are “so many ways things go wrong,” Wang says, and “only a few ways things can go right.”
