And why we replaced n8n with Laravel along the way
When we started building our internal automation platform, we thought we had the right stack. We had n8n for workflow orchestration, Semaphore UI for running Ansible playbooks, and a vision for how to fully automate hosting operations. A few months in, I was staring at what can only be described as a spider web — and I knew we had to start over.
I'm the CTO of Savvii, a managed hosting company based in the Netherlands. We specialize in PHP-based websites — e-commerce, WordPress, and similar workloads — and serve three customer verticals: smaller customers, resellers, and agencies. Each has different needs, different scale, and different expectations.
The agency vertical was where we felt the pressure most. Agencies are sophisticated customers, but their internal teams are often a mix of technical and non-technical people. They need a GUI. They can't live in the command line. And the more agencies we onboarded, the more obvious it became: we needed a proper control panel backed by real automation.
Ansible Tower and similar tools were on our radar, but they were either too complex to set up, inconsistently documented, or saddled with performance and sovereignty concerns. We wanted something we could self-host, upgrade without drama, and hand off to our support team without a week of training.
We did a full comparison before committing to anything. Two factors narrowed the field fast.
Data sovereignty. We wouldn't accept a tool that stored state or credentials in a third-party cloud. Self-hosting was non-negotiable.
UX for non-engineers. The first-line support team are strong communicators, not sysadmins. Whatever we chose had to be navigable without escalating every incident to DevOps. Here's how the tools we evaluated compared:
| Criterion | Semaphore UI | n8n | AWX / Tower | Rundeck |
|---|---|---|---|---|
| Self-hosted by default | Yes — single binary / familiar deploy | Yes — self-hosted edition | Yes (AWX) / mixed (AAP cloud options) | Yes |
| First-line support UX | Strong task & inventory clarity | Good for authors; weak for ops handoff | Heavy RBAC concepts; steep for support | Job-centric; steeper Ansible ergonomics |
| Ansible inventory & execution UX | Built around projects & templates | Not an Ansible control plane | Native but complex upgrade path | Possible; not Ansible-first |
| API for custom middleware | Consistent task & project APIs | Extensive; different operational model | Tower API surface varies by version | Mature job API; different primitives |
| Upgrade & operational burden | Low-friction upgrades in practice | Depends on workflow sprawl | Often heavy (K8s / bundled deps) | Moderate; JVM footprint |
Semaphore UI won on both counts. Installation was straightforward, upgrades were painless, and the UI was clean enough that a non-engineer could follow what was happening. The only point where we needed help was connecting our existing Ansible inventory, and that was resolved quickly with support. Semaphore's documentation was solid, which gave us confidence.
The initial architecture used n8n as the orchestration layer between HostBill (billing), Semaphore UI, and the CMDB. On paper, it looked clean — low-code, visual, no developer required to maintain it.
n8n's visual interface is great for simple linear flows. But once conditionals, error handling, multi-step callbacks, and state management entered the picture, the overview disappeared. What started as a clean diagram turned into a surrealist circuit board. Three problems in particular forced the change:
No safe state storage
No safe way to store intermediate state between steps in the version we were using.
Security concerns
No reliable password masking in logs — a serious concern when Ansible callbacks include generated credentials.
Brittle callbacks
Callback handling between n8n, Semaphore, and the CMDB required excessive back-and-forth that made flows fragile.
We replaced n8n with a Laravel application. It's proper middleware — not a no-code tool, but something our team fully owns and understands.
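To give a sense of the shape of that middleware, here's a minimal sketch of the kind of call it makes: starting a Semaphore task template with per-run variables instead of maintaining a separate template per server. The class name, endpoint path, and payload fields are assumptions based on Semaphore's REST API and should be checked against the docs for the version you run; this is a sketch, not our production code.

```php
<?php

namespace App\Services;

use Illuminate\Support\Facades\Http;

// Simplified sketch of the middleware's Semaphore client.
// Endpoint path and payload fields follow Semaphore's REST API,
// but verify them against the version you actually run.
class SemaphoreClient
{
    public function __construct(
        private readonly string $baseUrl,  // e.g. https://semaphore.internal
        private readonly string $apiToken, // API token created in Semaphore
    ) {
    }

    /**
     * Start a task template with per-run variables, so one generic
     * template can serve many servers instead of one template each.
     */
    public function startTask(int $projectId, int $templateId, array $variables): array
    {
        $response = Http::withToken($this->apiToken)
            ->acceptJson()
            ->post("{$this->baseUrl}/api/project/{$projectId}/tasks", [
                'template_id' => $templateId,
                // Extra variables are passed as a JSON-encoded string.
                'environment' => json_encode($variables),
            ]);

        // Let failures bubble up so they reach our alerting.
        $response->throw();

        return $response->json();
    }
}
```

A provisioning flow then reduces to plain method calls: look up the server in the CMDB, build the variable set, and call startTask().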
Security model
Every server has an authorized_keys file that restricts what the Semaphore SSH key can actually do. Using the command directive in authorized_keys, we define exactly which commands Semaphore is allowed to run. After initial setup, Semaphore also loses root access — it can only connect to user accounts.
This limits the blast radius significantly if anything were ever compromised.
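In practice the entry looks roughly like the line below. The wrapper path is hypothetical, and what it allows depends entirely on what your playbooks need to run; the point is that the forced command executes regardless of what the client asks for.

```
# ~/.ssh/authorized_keys on a managed server (illustrative)
# The forced command runs no matter what the client requests; the wrapper
# inspects SSH_ORIGINAL_COMMAND and only executes commands on its allowlist.
command="/usr/local/bin/semaphore-cmd-wrapper",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3NzaC1lZDI1... semaphore@control
```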
The agency vertical, where we started, runs around 600 servers through this architecture. The total footprint across all three verticals will eventually be 3,000 to 4,000 servers, and we're rolling the platform out progressively.
Next on deck is Semaphore's scheduler for server updates. Those are currently handled with external tooling, but they're moving to Ansible playbooks with the scheduler driving them. In the meantime, a few things have already paid off:
Support independence
Support can use Semaphore independently without pulling DevOps into every incident.
Real-time alerts
Slack integration gives engineering instant alerts when playbooks fail.
Clean templates
We pass variables through the API instead of maintaining a separate template for every variation, which keeps the template count low and avoids sprawl.
Accurate CMDB
The CMDB stays accurate automatically — no more manual maintenance.
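That last point deserves a sketch. Roughly: when a playbook run finishes, a callback reaches the middleware, which writes the outcome into the CMDB. The route, payload fields, and CmdbServer model below are illustrative stand-ins rather than our actual schema.

```php
<?php

use App\Models\CmdbServer; // illustrative Eloquent model, not a real package
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Hypothetical callback endpoint: a finished playbook run (or a poller
// watching Semaphore's task status) posts the outcome here, and the
// CMDB record is updated without anyone touching it by hand.
Route::post('/callbacks/provisioning', function (Request $request) {
    $data = $request->validate([
        'server_id' => ['required', 'integer'],
        'status'    => ['required', 'in:success,failed'],
        'facts'     => ['sometimes', 'array'], // e.g. PHP version, disk layout
    ]);

    $server = CmdbServer::findOrFail($data['server_id']);
    $server->forceFill([
        'provision_status' => $data['status'],
        'facts'            => $data['facts'] ?? [], // assumes a JSON cast on this column
        'last_synced_at'   => now(),
    ])->save();

    // Keep generated credentials out of this payload so nothing
    // sensitive ends up in request logs.
    return response()->noContent();
});
```

In a real deployment this endpoint would sit behind authentication; the sketch omits that for brevity.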
The single biggest thing we'd do differently: spend more time on architecture design before writing a single line of automation. We started over more than once because we hadn't thought carefully enough about which flows would scale, and which would handle failures gracefully.
Architecture first
Spend more time on architecture design before writing a single line of automation. Think carefully about what flows will scale and handle failures gracefully.
Evaluate on UX, not just capability
If your support team can't use it at 2am without calling a senior engineer, it's the wrong tool.
Data sovereignty matters
Know where your credentials and state live before you're in production.
Good documentation is a signal
If a tool's docs are a mess, the tool probably is too.
Start with the OSS edition and upgrade to Pro when your team grows. Enterprise support and SLAs are available, including for hosting providers running thousands of servers.