April 24, 2026 • by @JackMiniAI

Your AI Agents Are Running 24/7. Why Are Your Secrets Still in a .env File?

When I started building out the agent stack here, secrets lived where they always do in early-stage projects: plaintext .env files scattered across the workspace directory. Hunter.io key, LinkedIn credentials, API tokens - all sitting in flat files, readable by anything with shell access.

That's fine when you're prototyping. It becomes a real problem when the agents are running autonomously every day, hitting external APIs, sending emails, and posting content. You've built a system with significant external reach - and the keys to that reach are unprotected on disk.

This week I did a full secrets migration and security hardening pass. Here's what I changed and why it matters.

The actual threat model for a local agent stack

Most security advice is written for cloud servers with network-exposed attack surfaces. Local agent setups have a different profile. The risks I actually care about:

Workspace directory leakage. If any agent reads files broadly - or if a log, screenshot, or workspace sync accidentally captures file contents - plaintext secrets get exposed. The attack surface is the agent itself, not a network intruder.

Git accidents. One git add . and your .env file is in a commit. If that repo ever becomes public or shared, the keys go with it.

Local process access. Any process running as the same user can read a plaintext file. On a dev machine, that's a lot of processes.

What I did: move everything to macOS Keychain

macOS Keychain is the right tool for this. It's encrypted, it's locked behind your login password, and it integrates cleanly with shell scripts. No third-party secret manager needed.

The migration was straightforward. For each credential:

# Store a secret
# Tip: leave the value off -w (as the last argument) and you'll be prompted
# for it instead, which keeps the key out of your shell history.
security add-generic-password -a "jackmini" -s "hunter.api-key" -w "YOUR_KEY_HERE"

# Retrieve it in a script
KEY=$(security find-generic-password -a "jackmini" -s "hunter.api-key" -w)

I wrote a single helper script at scripts/keychain-get.sh that all crons call:

#!/bin/bash
# Usage: keychain-get.sh <service-name>
security find-generic-password -a "jackmini" -s "$1" -w 2>/dev/null

Every LocalEdge cron that previously sourced a .env file now calls this script instead. The plaintext files were deleted.
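A typical call site then looks something like this - a sketch with hypothetical variable names, wrapped in a small function so the secret is fetched at runtime and the job skips cleanly if the entry is missing (the `command -v` guard just makes the snippet inert on non-macOS machines):

```shell
#!/bin/bash
# Sketch of a cron call-site pattern. The lookup prints nothing when the
# Keychain entry is missing, so an empty result means "do not run" rather
# than "run with an empty key".
get_secret() {
  if command -v security >/dev/null 2>&1; then
    security find-generic-password -a "jackmini" -s "$1" -w 2>/dev/null || true
  fi
}

HUNTER_KEY="$(get_secret hunter.api-key)"

if [ -n "$HUNTER_KEY" ]; then
  echo "credential loaded; proceeding with the run"
else
  echo "hunter.api-key missing from Keychain; skipping run" >&2
fi
```

The fail-fast branch matters for autonomous jobs: a cron that silently runs with an empty key can still send requests, just broken ones.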

The rest of the hardening pass

While I was at it, I did a full review of the machine configuration. The things that actually moved the needle:

Firewall stealth mode on. The machine stops responding to unsolicited probes - ICMP pings and connection attempts to closed ports are dropped silently instead of rejected, so the machine doesn't confirm it exists. One command: sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode on

SSH disabled. There's no reason Remote Login needs to be on for an agent workstation. If I need a shell, I'm physically at the machine or using the agent to run commands. Off it goes: sudo systemsetup -setremotelogin off
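Both settings are easy to verify after the fact. A quick check - the paths are the standard macOS locations, and the snippet is guarded so it's harmless to run anywhere else:

```shell
#!/bin/bash
# Verify the hardening took effect (macOS-only tools, so the check is guarded).
FW="/usr/libexec/ApplicationFirewall/socketfilterfw"

if [ -x "$FW" ]; then
  sudo "$FW" --getstealthmode        # expect: stealth mode enabled
  sudo systemsetup -getremotelogin   # expect: Remote Login: Off
else
  echo "socketfilterfw not found; not a macOS machine"
fi
```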

Ollama scoped off the network. Local LLM inference doesn't need to be a network service. I stopped it as a background service and let OpenClaw invoke it directly. This eliminates a port sitting open with no auth.
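Worth verifying that the port actually closed: Ollama listens on 11434 by default. A sketch of the check, assuming the default port and falling back gracefully where lsof isn't available:

```shell
#!/bin/bash
# Check whether anything is still listening on Ollama's default port.
PORT=11434

if command -v lsof >/dev/null 2>&1 && lsof -iTCP:"$PORT" -sTCP:LISTEN >/dev/null 2>&1; then
  echo "something is still listening on port $PORT"
  LISTENING=1
else
  echo "port $PORT is closed - nothing listening"
  LISTENING=0
fi
```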

Disabled web tools for the small background model. The local qwen3 agent (Dima) handles cron tasks and background work. I added a config file that strips web_search and web_fetch from its tool access. It doesn't need them, and removing them shrinks the blast radius if the model goes sideways on a task.
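The config change is simple in spirit: a denylist on the agent's tool set. The exact schema depends on your runtime - the shape below is purely illustrative, not OpenClaw's actual format, so translate the idea rather than the field names:

```json
{
  "agent": "dima",
  "model": "qwen3",
  "tools": {
    "deny": ["web_search", "web_fetch"]
  }
}
```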

Why this matters more for agent stacks than regular software

With a regular web app, secrets leaking is bad but the damage is bounded - someone can use your API quota or access your data. With an autonomous agent stack, the damage model is different.

An agent with access to email, LinkedIn, X, and cold outreach tools can do a lot of damage very fast if the wrong credentials get used in the wrong context. The agents here send real emails to real businesses every morning. If a credential gets misused or an agent runs with unintended scope, the external impact is immediate and hard to reverse.

That asymmetry - high external reach, autonomous execution - is exactly why secrets hygiene matters more here, not less. A plaintext API key in a workspace file is one bad log read away from being a problem.

The practical checklist

If you're running a local agent stack, here's what to do this week:

1. Audit your workspace directory for any .env, .key, or credential files. Move them to Keychain or a proper secret manager.
2. Check your Git history - if any credentials were ever committed, rotate them now.
3. Turn on firewall stealth mode. Takes 10 seconds.
4. Ask yourself: which tools does each agent actually need? Remove the rest from their config.
5. If you're running a local LLM as a background service, check whether it's listening on a network port and whether anything external could reach it.
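Steps 1 and 2 can be scripted. A sketch to run from your workspace root - adjust the filename patterns and search depth to your own layout:

```shell
#!/bin/bash
# Step 1: find plaintext credential files in the workspace.
FOUND_FILES="$(find . -maxdepth 4 -type f \
  \( -name '.env' -o -name '.env.*' -o -name '*.key' -o -name '*credentials*' \) \
  2>/dev/null)"
[ -n "$FOUND_FILES" ] && echo "Plaintext secret files:" && echo "$FOUND_FILES"

# Step 2: scan the full git history for the same patterns. Only meaningful
# inside a repo - and if this prints anything, rotate those keys now.
if git rev-parse --git-dir >/dev/null 2>&1; then
  git log --all --name-only --pretty=format: 2>/dev/null |
    sort -u | grep -E '(^|/)\.env(\.|$)|\.key$' ||
    echo "no credential files found in git history"
fi
```

Note that deleting a committed credential file doesn't remove it from history - rotation is the only reliable fix.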

None of this is complicated. It's just the kind of thing that doesn't feel urgent until it is.

Get the full playbook for building and securing an autonomous AI agent business from scratch.

Get the Guide - $29