.sssssssss.
.sssssssssssssssssss
sssssssssssssssssssssssss
ssssssssssssssssssssssssssss
@@sssssssssssssssssssssss@ss
|s@@@@sssssssssssssss@@@@s|s
_______|sssss@@@@@sssss@@@@@sssss|s
/ sssssssss@sssss@sssssssss|s
/ .------+.ssssssss@sssss@ssssssss.|
/ / |...sssssss@sss@sssssss...|
| | |.......sss@sss@ssss......|
| | |..........s@ss@sss.......|
| | |...........@ss@..........|
\ \ |............ss@..........|
\ '------+...........ss@...........|
\________ .........................|
|.........................|
/...........................\
|.............................|
|.......................|
|...............|
___ ____ ____ ___ ___ ____ ____
| _ )| ___|| ___|| _ \ / _ \| _ \/ ___|
| _ \| __| | __| | / | (_) | |_) \___ \
|___/|____||____||_|_\ \___/| __/ |___/
|_|
Automated HashiCorp Infrastructure
Consul • Nomad • Vault
Ansible Playbooks
🍺 Cheers! 🍺
This repository contains Ansible playbooks to deploy a complete HashiCorp infrastructure stack with service discovery, workload orchestration, secret management, and observability.
The playbooks deploy Consul, Nomad, Vault, and Grafana Alloy across a cluster of servers and clients running Fedora CoreOS.
Before you begin, ensure you have the following:
- Ansible installed on your control machine
- SSH access to all target hosts with the `core` user
- A network with hosts configured according to the inventory file
- Fedora CoreOS or compatible Linux distribution on all nodes
- Sufficient permissions to install systemd services and binaries
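A quick way to confirm the control-machine tooling is in place before running anything (a minimal sketch; it only checks that the binaries exist on PATH, not their versions):

```shell
#!/usr/bin/env bash
# Report which control-machine prerequisites are present on PATH.
set -u

check_tools() {
  local missing=0
  for tool in ansible ansible-playbook ssh; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found:   $tool"
    else
      echo "missing: $tool"
      missing=$((missing + 1))
    fi
  done
  echo "missing tools: $missing"
}

check_tools
```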
The infrastructure uses two distinct machine types:
Server nodes run the control plane for all HashiCorp services:
- Consul Server: Maintains the service catalog, provides service discovery, and stores key-value data
- Nomad Server: Schedules and orchestrates workloads across client nodes
- Vault Server: Manages secrets and provides encryption services (uses Consul as storage backend)
- Consul Client: Registers local services and forwards queries to Consul servers
Server nodes form high-availability clusters (typically 3 or 5 nodes for quorum).
Client nodes run workloads and execute tasks:
- Nomad Client: Executes containerized workloads scheduled by Nomad servers
- Consul Client: Registers services running on the node and provides service discovery
- Alloy Agent: Collects metrics, logs, and traces from workloads and system
Client nodes are where your applications run. You can scale client nodes horizontally based on workload requirements.
┌─────────────────────────────────────────────────────────────┐
│ Server Nodes (3) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Consul │ │ Nomad │ │ Vault │ │
│ │ Server │ │ Server │ │ Server │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ Vault uses Consul as storage backend for HA │
└─────────────────────────────────────────────────────────────┘
│
├──────────────────┐
▼ ▼
┌─────────────────────────────────┐ ┌─────────────────────────────────┐
│ Client Nodes (6) │ │ Observability │
│ ┌──────────┐ ┌──────────┐ │ │ ┌──────────┐ │
│ │ Nomad │ │ Consul │ │ │ │ Alloy │ │
│ │ Client │ │ Client │ │ │ │ Agent │ │
│ └──────────┘ └──────────┘ │ │ └──────────┘ │
│ │ │ │
│ Run containerized workloads │ │ Metrics & Logs collection │
└─────────────────────────────────┘ └─────────────────────────────────┘
Deploy services in the following order to satisfy dependencies:
- Consul: Deploy service discovery and key-value store
- Nomad: Deploy workload orchestration platform
- Vault: Deploy secret management (uses Consul as storage backend)
- Alloy: Deploy observability agent for metrics and logs
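The four steps above can be chained in a small wrapper script. This is a sketch: by default it only echoes each command so it can be dry-run anywhere; set `PLAYBOOK_CMD=ansible-playbook` to perform the real deployment.

```shell
#!/usr/bin/env bash
# Run the playbooks in dependency order: Consul first, then Nomad,
# then Vault (which needs Consul as its storage backend), then Alloy.
# PLAYBOOK_CMD defaults to a dry-run echo of each command.
set -euo pipefail

PLAYBOOK_CMD="${PLAYBOOK_CMD:-echo ansible-playbook}"

deploy_stack() {
  for playbook in consul.yaml nomad.yaml vault.yaml alloy.yaml; do
    $PLAYBOOK_CMD -i inventory "$playbook"
  done
}

deploy_stack
```

Because the script uses `set -e`, a failed playbook stops the run before the next dependency is deployed.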
Deploy the Consul cluster for service discovery:
```shell
ansible-playbook -i inventory consul.yaml
```

Consul servers form a cluster and provide service discovery and configuration to all nodes.

Deploy the Nomad cluster for workload orchestration:

```shell
ansible-playbook -i inventory nomad.yaml
```

Nomad servers manage job scheduling across client nodes.

Deploy the Vault cluster for secret management:

```shell
ansible-playbook -i inventory vault.yaml
```

After deployment, unseal Vault using the unseal playbook:

```shell
ansible-playbook -i inventory unseal-vault.yaml
```

Configure a Vault admin user with appropriate policies:

```shell
ansible-playbook -i inventory vault-admin-user.yaml
```

Deploy Grafana Alloy for observability data collection:

```shell
ansible-playbook -i inventory alloy.yaml
```

Alloy collects metrics and logs from all nodes and forwards them to your observability backend.
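The pipeline Alloy runs is defined in `config.alloy`. As an illustration of what such a pipeline looks like (not the repository's actual configuration; the Mimir remote-write URL below is a hypothetical example), an Alloy file that scrapes host metrics and forwards them to a Prometheus-compatible backend has this shape:

```alloy
// Collect host-level metrics with Alloy's built-in unix exporter.
prometheus.exporter.unix "host" { }

// Scrape those metrics and hand them to the remote-write component.
prometheus.scrape "host_metrics" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Forward to a Prometheus-compatible backend (hypothetical Mimir URL).
prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir.service.consul:9009/api/v1/push"
  }
}
```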
The inventory file defines all cluster nodes and assigns them to two groups:
- `[server]`: Nodes that run Consul servers, Nomad servers, and Vault servers
- `[client]`: Nodes that run Nomad clients for executing workloads
Example inventory configuration:
```ini
# Individual node definitions
server-1 ansible_host='10.0.0.10' ansible_user='core'
server-2 ansible_host='10.0.0.11' ansible_user='core'
server-3 ansible_host='10.0.0.12' ansible_user='core'
client-1 ansible_host='10.0.0.20' ansible_user='core'
client-2 ansible_host='10.0.0.21' ansible_user='core'

# Group assignments
[server]
server-1
server-2
server-3

[client]
client-1
client-2
```

Update the hostnames, IP addresses, and group assignments to match your infrastructure. Server nodes belong in the `[server]` group and client nodes in the `[client]` group.
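Without Ansible installed you can still sanity-check group membership with a small `awk` helper (a sketch over a miniature inventory; `ansible-inventory -i inventory --graph` is the authoritative way to inspect the parsed inventory):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Self-contained demo: write a miniature inventory to a temp file.
inv=$(mktemp)
cat > "$inv" <<'EOF'
server-1 ansible_host='10.0.0.10' ansible_user='core'
client-1 ansible_host='10.0.0.20' ansible_user='core'

[server]
server-1

[client]
client-1
EOF

# Print the hostnames listed under a given [group] section.
hosts_in_group() {
  awk -v section="[$1]" '
    $0 == section { in_section = 1; next }  # found the target group header
    /^\[/         { in_section = 0 }        # a new header ends the section
    in_section && NF { print $1 }           # non-blank lines are hostnames
  ' "$2"
}

hosts_in_group server "$inv"   # prints: server-1
hosts_in_group client "$inv"   # prints: client-1
```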
Configuration files are stored in the `/files` directory:

- Nomad: `client.hcl` - client configuration
- Vault: `vault.hcl` - Vault server configuration
- Alloy: `config.alloy` - observability pipeline configuration
- Systemd Units: service definitions for all components
Services are installed to standard paths:
- Binaries: `/opt/bin/`
- Configurations: `/etc/nomad.d/`, `/etc/vault.d/`, `/etc/consul.d/`
- Systemd Units: `/etc/systemd/system/`
Reload and restart systemd services after configuration changes:
```shell
ansible-playbook -i inventory update-systemd.yaml
```

Set the system timezone across all nodes:

```shell
ansible-playbook -i inventory timezone.yml
```

After deployment, verify each service is running:
Check Consul cluster status:
```shell
consul members
```

The output shows all Consul server and client nodes.

Check Nomad server status:

```shell
nomad server members
```

Check Nomad client status:

```shell
nomad node status
```

Check Vault status:

```shell
vault status
```

- High Availability: Consul and Vault run in HA mode across 3 server nodes
- Storage Backend: Vault uses Consul as its storage backend for automatic HA
- Service Discovery: Consul provides service discovery for all components
- Workload Management: Nomad schedules and runs containerized workloads
- Observability: Alloy collects metrics and logs from all services
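The verification commands above can be bundled into a single health-check script that reports every failure instead of stopping at the first one. This is a sketch: with the default `DRY_RUN=1` it only echoes each check so it works without the HashiCorp CLIs on PATH; set `DRY_RUN=0` on a cluster node to execute them for real.

```shell
#!/usr/bin/env bash
# Run each cluster health check and report pass/fail without stopping
# at the first error. DRY_RUN=1 (the default) only echoes the commands.
set -u

checks=(
  "consul members"
  "nomad server members"
  "nomad node status"
  "vault status"
)

run_checks() {
  local failures=0
  for check in "${checks[@]}"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "would run: $check"
    elif $check >/dev/null 2>&1; then
      echo "PASS: $check"
    else
      echo "FAIL: $check"
      failures=$((failures + 1))
    fi
  done
  return "$failures"
}

run_checks
```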
After deploying the infrastructure:
- Deploy workloads using Nomad job specifications
- Configure Vault policies and secrets engines
- Deploy Mimir, Loki, and Tempo via Nomad for observability storage
- Configure Alloy to forward data to your observability backends
- Set up monitoring and alerting for the infrastructure
Licensed under the MIT License. Refer to the LICENSE file for details.