The Scorched Hand Test

There’s a photograph that haunts the history of radiology: hands, fingertips gone, skin scarred and leathery. Early X-ray workers, around 1900, who paid the price for a technology that would eventually save millions of lives.

I keep thinking about that image because it feels relevant to where we are with AI right now. Not in a doom-and-gloom way, but in a “let’s learn from history” way.

The pioneers weren’t reckless

Here’s what’s interesting about those early radiologists: they weren’t mad scientists ignoring obvious dangers. Many of them noticed burns and skin irritation pretty quickly. But think about what they were dealing with. You could suddenly see through flesh. Reveal hidden fractures. Guide surgery in ways that seemed like magic.

Of course they kept going. The potential was too extraordinary to pause over some skin irritation. They thought they could manage the risks as they learned. And honestly, they weren’t entirely wrong about that instinct. They just underestimated how much there was to learn.

The thing is, we eventually figured it out. Lead aprons, dosage limits, safety protocols, proper training. Today, medical imaging is both incredibly powerful and remarkably safe. The technology that once scorched hands now prevents countless deaths. We didn’t abandon the technology. We got better at using it.

We’re in a similar moment with AI

With AI, we’re seeing some early signals that remind me of those first radiologists. Biased outputs that amplify existing inequalities, people becoming overreliant on systems they don’t understand, legitimate anxieties about job displacement, students who struggle to think through problems without assistance.

But we’re also seeing things that would have seemed impossible just a few years ago. Scientific breakthroughs happening at unprecedented speed. Accessibility tools that give voice to people who couldn’t communicate before. Creative collaboration between humans and machines that produces genuinely novel art and ideas. Personalized education that adapts to how each person actually learns.

The difference is we have the X-ray example now. We have the pattern. We know what the early warning signs look like, and we know that the answer isn’t to stop pushing forward. It’s to get better at managing the risks while we develop the technology.

Building safety from the start

What gives me optimism is that some of the most exciting AI developments today aren’t just about raw capability. They’re about alignment, interpretability, and human collaboration. People are designing safety into the systems, not trying to bolt it on afterward.

We’re seeing research into AI systems that can explain their reasoning, that are transparent about their limitations, that actively seek human input when they’re uncertain. We’re developing frameworks for testing these systems before they’re deployed at scale. We’re having conversations about responsible development that weren’t happening in the early days of social media or search engines.

It’s not perfect. We’re still figuring it out. But there’s something different about this technological moment. Maybe it’s because we’ve been through these cycles before with the internet, with social media, with mobile technology. Maybe it’s because the potential consequences feel more significant. Or maybe it’s just that we’re getting better at learning from the patterns.

The real test

The scorched hand isn’t a warning to stop. It’s a reminder that we can do better than our predecessors. We can keep the extraordinary promise while learning from their burns. We can be both ambitious and careful, innovative and responsible.

The real test isn’t whether we’ll encounter problems with AI. We will. The test is whether we’ll respond to the early warning signs thoughtfully, before the damage becomes irreversible. Whether we’ll build the lead aprons while we’re still discovering what radiation can do.

I think we can. The conversation is happening. The research is advancing. The awareness is there. We’re not starting from scratch this time. We have the pattern, and we have the choice to break it in the right direction.