Discussion about this post

James Rubinowitz

The reckoning vs. judgment distinction might be the most useful framework I've come across for thinking about AI in high-stakes professional settings. I practice law and teach a course on AI and litigation at a law school here in New York, and this framing lands harder when you live in a world where the final decision isn't called a "calculation." It's called a judgment. That's not a coincidence. What we do in courtrooms requires exactly what Smith was describing: registering context, caring about what's actually at stake for real people, navigating all the messy stuff behind that door Grover opens.

What worries me is that I'm already watching Smith's first fear play out in legal practice. Firms are throwing reckoning tools at problems that demand genuine judgment, and nobody has a conceptual map for knowing when they've crossed that line. The fallout isn't abstract. It's hallucinated case citations that attorneys don't bother to check, botched filings, and clients failed by the lawyers who were supposed to protect them.

This piece should be mandatory reading for anyone building or deploying AI where the consequences can't be undone. Really grateful you wrote this, Melanie. It crystallized something I've been trying to articulate to my students and other lawyers alike.

David

Really lovely tribute. I enjoyed how you fit his work into this larger conversation about AI's evolution, and how the "soft" characteristics that make us human are truly the most challenging things to reason about and create systematically.
