The Pentagon's New AI Bouncer: Because Your Mom Said It's OK

Jan. 4, 2025

Listen, I’ve been staring at this bourbon glass for the past hour trying to make sense of this latest piece of government genius. The Pentagon - yes, that five-sided fortress of infinite wisdom - has decided to let AI help decide who gets security clearances. And their ethical compass for this brave new world? “What would mom think?”

I need another drink just typing that out.

Here’s the deal: The Defense Counterintelligence and Security Agency (let’s call it DCSA because I’m already three fingers deep into this bottle) is now using AI to process security clearances for millions of American workers. Their director, David Cattler, has this brilliant idea called “the mom test.” Before his employees dig into your personal life, they need to ask themselves if their mom would approve of the government having this kind of access.

My mom still thinks I’m a successful novelist living in Beverly Hills. So much for reliable character witnesses.

But here’s where it gets interesting - and by interesting, I mean the kind of interesting that makes you want to chain-smoke until sunrise. They’re not using any of those fancy chatbots that have been making headlines. No ChatGPT, no Claude, none of that digital fortune cookie nonsense. Instead, they’re going old school with data mining and organization tools.

The kicker? They’re promising “no black boxes.” Everything has to be transparent, they say. Which reminds me of every relationship I’ve ever been in where someone promised “total honesty.” We all know how those ended.

Let’s break this down while I pour another:

First, they’re building some kind of “heat map” to track risks across facilities in real-time. It’s like a weather forecast for security threats, except instead of predicting rain, it’s predicting which government employee might have had one too many WhatsApp chats with suspicious characters.

Speaking of which - takes long sip - remember when having a drinking problem was a security risk? Now they’re apparently more “tolerant” of recovered addicts. Progress, I suppose. Though I wonder if they’d give me clearance considering I’m writing this at 2 PM on a Tuesday with a bottle of Kentucky’s finest keeping me company.

The real beauty here is watching them try to navigate the bias problem. AI systems inherit biases like I inherit bar tabs - readily and with significant consequences. They acknowledge that their values have changed over time. Being gay used to be a security risk (what the actual fuck?), but now they’re more worried about extremist views. Though I wonder what their AI makes of my Twitter rants about the corporate technocracy at 3 AM.

Matthew Scherer, some smart cookie from the Center for Democracy and Technology, warns about AI making critical decisions or flagging “suspicious” behavior. Because nothing says “reliable security screening” like having an algorithm that might confuse you with another John Smith who’s on several watch lists.

And you want to know the real punchline? They won’t name their AI partners. That’s right - the system that’s supposed to be completely transparent is being built by companies they won’t identify. It’s like a blind date set up by your ex - what could possibly go wrong?

Here’s what keeps me up at night (besides the whiskey): We’re letting machines judge human reliability while using “Would Mom approve?” as an ethical framework. My mom still can’t figure out how to unlock her iPad without calling me, but sure, let’s use parental approval as our moral compass for national security.

The truth is, this whole thing is a perfect storm of bureaucratic efficiency meets digital age paranoia. They’re trying to process millions of security clearances faster, which I get. But they’re doing it with tools they swear aren’t black boxes while partnering with unnamed companies and using maternal judgment as their ethical framework.

And the real cherry on top? They’re proud of being more tolerant of recovered addicts while building systems that could potentially flag someone for having too many late-night food delivery orders or suspicious Netflix binges. Because nothing says “national security threat” like watching all seasons of “The Great British Bake Off” in one weekend.

Bottom line: The Pentagon is modernizing its background checks with AI that promises to be transparent while being opaque about who’s building it, all while using your mom’s hypothetical approval as a moral guideline. If that doesn’t drive you to drink, I don’t know what will.

Now if you’ll excuse me, I need to go delete my browser history before the AI decides my research into “best hangover cures” is a matter of national security.

Stay human,
Henry Chinaski

P.S. If anyone from DCSA is reading this - yes, I know this bottle of bourbon is half empty. No, you can’t have any.


Source: The Pentagon Is Using AI To Vet Employees – But Only When It Passes ‘The Mom Test’

Tags: ai surveillance ethics aigovernance dataprivacy