Disclosures about personal prescription drug use aren’t just taking place behind the closed doors of doctors’ offices and rehab facilities. They’re also happening on public forums.

“About to be cracked on adderall to survive today,” one person tweeted, according to a research paper co-authored by data scientists at Arizona State and Regis universities. “I’m just gonna shower and overdose on Seroquel so I’ll sleep until morning,” another person tweeted.

To help organizations better target anti-drug-abuse campaigns, researchers like those at ASU and Regis are using algorithms to scan for and analyze posts like these. By mining publicly available Twitter posts, they say they can identify general groups of medication users, as well as geographic regions with high concentrations of abusers.

The accuracy of the algorithms in identifying tweets showing signs of abuse “is more than good enough to make predictions about populations,” says Abeed Sarker, ASU research scholar and the paper’s lead author.

To be sure, the tweets aren’t proof of medication abuse—they could simply be sarcastic jokes or attempts to impress peers—but they nevertheless indicate areas of potential concern.

“[T]here’s no baseline privacy protections, when it comes to social media.” — Claire Gartland, consumer protection counsel, Electronic Privacy Information Center

“We can recognize patterns” in social-media posts, says Graciela Gonzalez, associate professor of medical informatics at ASU, to help tackle prescription drug abuse trends “before they get too bad.” Data gleaned from social media could, for example, help a public-health campaign targeting abuse of a particular drug narrow its focus to regions showing greater concentrations of abuse, making the campaign more efficient and effective.

The purpose of the research isn’t to draw a particular conclusion, Gonzalez says, but to help validate preliminary hypotheses and identify areas for further study.

Twitter, meds, and privacy

Privacy advocates are concerned that not everyone conducting this type of social-network analysis has such virtuous intentions.

“Your health records can have a host of impacts across your life,” says Claire Gartland, consumer protection counsel at the Electronic Privacy Information Center, from health insurance costs to employment opportunities. Alongside medical practices and research facilities, organizations such as data brokerages are regularly harvesting (and selling) any medical data they can easily access, and “there’s no baseline privacy protections, when it comes to social media.”

The federal Health Insurance Portability and Accountability Act, passed in 1996 to protect the privacy of consumers’ health data, applies only to information handled by health care providers, health plans, and health care clearinghouses. State laws intended to supplement HIPAA similarly don’t cover publicly available information, including posts to Twitter or groups joined on Facebook.

Any legal protections that do apply to such data depend on who is using it and for what purpose. If, for example, a credit-reporting agency incorporated it into a background check, the Fair Credit Reporting Act would require the agency to notify the consumer. And if the agency’s use of the data was later found to disproportionately impact a specific group, such as a race or age bracket, the Equal Credit Opportunity Act might deem it discriminatory.

This hasn’t happened yet, but concerns were raised last year, when Facebook applied for a patent suggesting that banks could analyze someone’s social network to help determine if that person qualifies for a loan.

Concerns also abound regarding employers. Denying someone a position based in part on assumptions gleaned from public data about her health “is definitely unexpected and certainly an invasion,” Gartland says, but likely legal.

Gartland and Gonzalez say consumers need to be mindful of what they publicly reveal online, from the messages they post to the subjects or groups they “like,” follow, or join.

“This is not clinical data,” Gonzalez says. And although it might feel private to the consumer, “this is not something that is, by nature, private.”

Gartland says organizations collecting and using medical data, even if publicly gleaned, should also take steps to protect it. That includes storing it properly, determining the length of time it will be kept, and potentially anonymizing or de-identifying it.

Gonzalez noted that Twitter usernames were stripped from her study’s data set. And she cautioned against prohibiting the collection and analysis of social-networking data for research purposes.

Doing so, she said, would “block out the people who are going to do good with this information.”