Data spills in the age of AI

This week's tech headlines offer a compendium of data-loss nightmares:

  • The personal data of members of Congress was potentially exposed after hackers broke into a D.C. health-insurance system.

  • Police asked an Ohio businessman for video from his Ring doorbell camera, then issued a warrant for footage from more than 20 other cameras at his home and business.

  • Chinese-owned TikTok faces the threat of a ban over fears that the user data it collects could be fed to Beijing.

  • And users are sharing their mental health woes with OpenAI's ChatGPT, giving no thought to confidentiality.

What's happening: Congress' long-running inability to pass a comprehensive privacy law has left online personal information vulnerable to being mined, hoarded and poached.

Why it matters: Virtually every major technology today opens data vulnerabilities that can wreak havoc.

  • "Data privacy" may sound like an abstraction to much of the U.S. public, but our national failure to set privacy rules can have very concrete consequences.

Zoom out: Legal experts and privacy advocates have long warned of the dangers of the U.S.'s failure to bring privacy law into the 21st century.

  • It means that government authorities have a freer hand to seize digital information as evidence.

  • Private companies are freer to gather and resell the personal information of their customers and users.

  • In both public and private sectors, the absence of tough rules governing data handling makes every breach and hack potentially more damaging.

What's next: The frenzy over generative AI is adding a whole new dimension of worry.

  • AI experts fear that chatbots like ChatGPT, trained on vast troves of internet text, are already seeded with an unknowable volume of personal data.

  • On its own, that's little different from what's available on Google or any other search engine today.

  • The difference is that ChatGPT and similar programs are capable of "remembering" and reusing information users share with them in unpredictable ways.

  • That means that details from any legal document, medical report, financial calculation or other input that someone shares with these systems might turn up again — accurately or erroneously — in answers to someone else's query, with no indication of the original source.

Our thought bubble: Every time you type at ChatGPT, consider that you might be sharing secrets with a thing that has an impossibly vast memory — and doesn't have a clue what a secret even is.

Between the lines: There may well be ways to equip generative AI systems with guardrails to protect against this kind of unintended sharing.

  • But right now developers have little incentive to build them, and the rest of us have no visibility into what data the systems are holding onto. (One rough sketch of what such a guardrail might look like follows below.)
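
The article doesn't say how such guardrails would work in practice. As one illustrative sketch, and not a description of how OpenAI or anyone else actually does it, a client-side filter could redact obvious personal identifiers from a prompt before it ever reaches a chatbot. The PII_PATTERNS table and redact_pii function below are hypothetical names, and simple regex matching catches only the most obvious identifiers; real PII detection is far harder.

    import re

    # Hypothetical client-side guardrail: strip obvious personal identifiers
    # from a prompt before it is sent to a chatbot API. Regex matching is a
    # crude stand-in for real PII detection, which is much harder.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "US_PHONE": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_pii(prompt: str) -> str:
        """Replace each PII match with a labeled placeholder."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = "My SSN is 123-45-6789; email me at jane@example.com."
        print(redact_pii(raw))
        # -> My SSN is [REDACTED SSN]; email me at [REDACTED EMAIL].

Even a filter like this only covers what users send in; it says nothing about what a model retains or how it reuses that data, which is the visibility problem described above.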

The bottom line: The faster technology advances and the more central it becomes in our lives, the more we'll miss having a good privacy law.

Reposted from Axios
By Scott Rosenberg, Axios News

Khalil Thomas

Khalil Thomas is a Health Equity expert and President of TRCG, a boutique Digital Health consulting group that leverages regulatory compliance expertise to bring solutions to market, manage algorithm bias, and improve quality for an expanded patient demographic. He specializes in topics at the intersection of AI, Health Tech, and Health Equity, highlighting pathways for innovation-enabled equity.
