Compare commits: 6eef96b302...main (19 commits)

Commits:
0a3171b9bc, 3068a5a8fb, ef652fac20, 22c1b534ab, 9cb90a8020, d9181515bb,
c3905ed144, 5fb50ab4b2, 2909d6e16c, 0aed818be5, fbdeec93ce, 44626101de,
c1d6f13275, 282e98e90a, 9573cbfcad, 48aec11d8c, a1da69ac98, 7aa16f3207,
fe3f1749c5
@@ -13,6 +13,8 @@ skip_list:
  - fqcn-builtins
  - no-handler
  - var-naming
  - no-changed-when
  - risky-shell-pipe

# Enforce certain rules that are not enabled by default.
enable_list:
8  .gitattributes  vendored  Normal file

@@ -0,0 +1,8 @@
vars/group_vars/proxmox/secrets_vm.yml diff=ansible-vault merge=binary
vars/group_vars/all/secrets.yml diff=ansible-vault merge=binary
vars/group_vars/docker/secrets.yml diff=ansible-vault merge=binary
vars/group_vars/k3s/secrets.yml diff=ansible-vault merge=binary
vars/group_vars/k3s/secrets_token.yml diff=ansible-vault merge=binary
vars/group_vars/kubernetes/secrets.yml diff=ansible-vault merge=binary
vars/group_vars/proxmox/secrets.yml diff=ansible-vault merge=binary
vars/group_vars/proxmox/secrets_vm.yml diff=ansible-vault merge=binary
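The `diff=ansible-vault` attribute only becomes useful once git also knows the matching textconv filter, which the README further down sets with `git config --global diff.ansible-vault.textconv "ansible-vault view"`. As a minimal sketch (not part of this diff), the same setting could be applied from a playbook; the task below uses `community.general.git_config` and is illustrative only.

```yaml
# Illustrative sketch, not part of this change set: register the
# "ansible-vault" textconv driver that the .gitattributes entries above refer
# to, so `git diff` can show decrypted vault content.
- name: Configure git textconv for ansible-vault files
  community.general.git_config:
    name: diff.ansible-vault.textconv
    scope: global
    value: ansible-vault view
```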
@@ -10,7 +10,7 @@ repos:
     hooks:
       - id: ansible-galaxy-install
         name: Install ansible-galaxy collections
-        entry: ansible-galaxy collection install -r requirements.yml
+        entry: ansible-galaxy collection install -r requirements.yaml
         language: system
         pass_filenames: false
         always_run: true

@@ -18,6 +18,6 @@ repos:
     rev: v6.22.2
     hooks:
       - id: ansible-lint
-        files: \.(yaml|yml)$
+        files: \.(yaml)$
         additional_dependencies:
           - ansible-core==2.15.8
140  README.md
@@ -3,90 +3,80 @@

**I do not recommend this project being used for one's own infrastructure, as
this project is heavily attuned to my specific host/network setup**

The Ansible Project to provision fresh Debian VMs for my Proxmox instances.
This Ansible project automates the setup of a K3s Kubernetes cluster on Proxmox VE. It also includes playbooks for configuring Docker hosts, load balancers, and other services.

## Configuration
## Repository Structure

The configuration of this project is done via files in the `./vars` directory.
The inventory is composed of `.ini` files in the `./vars` directory. Each `.ini` file represents an inventory and can be used with the `-i` flag when running playbooks.
The repository is organized into the following main directories:

The variables for the hosts and groups are defined in the `./vars/group_vars` directory. The structure of this directory is as follows:
- `playbooks/`: Contains the main Ansible playbooks for different setup scenarios.
- `roles/`: Contains the Ansible roles that are used by the playbooks.
- `vars/`: Contains variable files, including group-specific variables.

```
vars/
├── group_vars/
│   ├── all/
│   │   ├── secrets.yml
│   │   └── vars.yml
│   ├── <group_name>/
│   │   ├── *.yml
├── docker.ini
├── k3s.ini
├── kubernetes.ini
├── proxmox.ini
└── vps.ini
```

## Playbooks

The `all` group contains variables that are common to all hosts. Every other directory in `group_vars` corresponds to a group defined in the inventory files and contains variables specific to that group.
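For illustration, such a group_vars file might look like the following. This is a hypothetical sketch, not content from this diff: `k3s_primary_server_ip` is a variable the `k3s_server` role actually references later in this change set, while the values shown and the second variable name are invented placeholders.

```yaml
# Hypothetical contents of vars/group_vars/k3s/vars.yml -- illustrative only.
# k3s_primary_server_ip is referenced by the k3s_server role; the value and
# the k3s_version entry are placeholders, not taken from the repository.
k3s_primary_server_ip: 192.168.20.21
k3s_version: v1.29.4+k3s1
```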
The following playbooks are available:

## Run Playbook
- `proxmox.yml`: Provisions VMs and containers on Proxmox VE.
- `k3s-servers.yml`: Sets up the K3s master nodes.
- `k3s-agents.yml`: Sets up the K3s agent nodes.
- `k3s-loadbalancer.yml`: Configures a load balancer for the K3s cluster.
- `k3s-storage.yml`: Configures storage for the K3s cluster.
- `docker.yml`: Sets up Docker hosts and their load balancer.
- `docker-host.yml`: Configures the Docker hosts.
- `docker-lb.yml`: Configures a load balancer for Docker services.
- `kubernetes_setup.yml`: A meta-playbook for setting up the entire Kubernetes cluster.

To run a playbook, you need to specify the inventory file and the playbook file. For example, to run the `k3s-servers.yml` playbook with the `k3s.ini` inventory, you can use the following command:
## Roles

The following roles are defined:

- `common`: Common configuration tasks for all nodes.
- `proxmox`: Manages Proxmox VE, including VM and container creation.
- `k3s_server`: Installs and configures K3s master nodes.
- `k3s_agent`: Installs and configures K3s agent nodes.
- `k3s_loadbalancer`: Configures an Nginx-based load balancer for the K3s cluster.
- `k3s_storage`: Configures storage solutions for Kubernetes.
- `docker_host`: Installs and configures Docker.
- `kubernetes_argocd`: Deploys Argo CD to the Kubernetes cluster.
- `node_exporter`: Installs the Prometheus Node Exporter for monitoring.
- `reverse_proxy`: Configures a Caddy-based reverse proxy.

## Usage

1. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ansible-galaxy install -r requirements.yml
   ```

2. **Configure variables:**

   - Create an inventory file (e.g., `vars/k3s.ini`).
   - Adjust variables in `vars/group_vars/` to match your environment.

3. **Run playbooks:**

   ```bash
   # To provision VMs on Proxmox
   ansible-playbook -i vars/proxmox.ini playbooks/proxmox.yml

   # To set up the K3s cluster
   ansible-playbook -i vars/k3s.ini playbooks/kubernetes_setup.yml
   ```

## Notes

### Vault Git Diff

This repo has a `.gitattributes` which points at the repo's ansible-vault files.
These can be temporarily decrypted for `git diff` by adding the following in conjunction with the `.gitattributes`:

```sh
ansible-playbook -i vars/k3s.ini playbooks/k3s-servers.yml
# https://stackoverflow.com/questions/29937195/how-to-diff-ansible-vault-changes
git config --global diff.ansible-vault.textconv "ansible-vault view"
```

## After successful k3s installation
## Disclaimer

To access our Kubernetes cluster from our host machine (to work on it via Flux and such), we need to manually copy a k3s config from one of our server nodes to our host machine.
Then we need to install `kubectl` on our host machine, and optionally `kubectx` if we're already managing other Kubernetes instances.
Then we replace the localhost address inside the config with the IP of our load balancer.
Finally we'll need to set the `KUBECONFIG` variable.

```sh
mkdir ~/.kube/
scp k3s-server00:/etc/rancher/k3s/k3s.yaml ~/.kube/config
chown $USER ~/.kube/config
sed -i "s/127.0.0.1/192.168.20.22/" ~/.kube/config
export KUBECONFIG=~/.kube/config
```

Install Flux and continue in the Flux repository.

## Longhorn Nodes

To create Longhorn nodes from existing Kubernetes nodes we want to increase
their storage capacity. Since we're using VMs for our k3s nodes, we can
resize the root disk of the VMs in the Proxmox GUI.

Then we have to resize the partitions inside the VM so the root partition
uses the newly available space.
With an LVM-based root partition we can do the following:

```sh
# Create a new partition from the free space.
sudo fdisk /dev/sda
# In fdisk: n, accept the defaults (Enter several times), then w to write the table.
# Create a LVM volume on the new partition
sudo pvcreate /dev/sda3
sudo vgextend k3s-vg /dev/sda3
# Use the newly available storage in the root volume
sudo lvresize -l +100%FREE -r /dev/k3s-vg/root
```

## Cloud Init VMs

```sh
# On Hypervisor Host
qm resize <vmid> scsi0 +32G
# On VM
sudo fdisk -l /dev/sda # To check
echo 1 | sudo tee /sys/class/block/sda/device/rescan
sudo fdisk -l /dev/sda # To check
# growpart comes from cloud-guest-utils:
# sudo apt-get install cloud-guest-utils
sudo growpart /dev/sda 1
```

This project is highly customized for the author's specific environment. Using it without modification is not recommended.
@@ -14,7 +14,7 @@ vault_password_file=/media/veracrypt1/scripts/ansible_vault.sh

# (list) Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these.
# This affects vars_files, include_vars, inventory and vars plugins among others.
-yaml_valid_extensions=.yml
+yaml_valid_extensions=.yaml

# (boolean) Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host
host_key_checking=False
69  blog.md  Normal file
@@ -0,0 +1,69 @@
---
title: "Automating My Homelab: From Bare Metal to Kubernetes with Ansible"
date: 2025-07-27
author: "TuDatTr"
tags: ["Ansible", "Proxmox", "Kubernetes", "K3s", "IaC", "Homelab"]
---

## The Homelab: Repeatable, Automated, and Documented

For many tech enthusiasts, a homelab is a playground for learning, experimenting, and self-hosting services. But as the complexity grows, so does the management overhead. Manually setting up virtual machines, configuring networks, and deploying applications becomes a tedious and error-prone process. This led me to build my homelab as Infrastructure as Code (IaC) with Ansible.

This blog post walks you through my Ansible project, which automates the entire lifecycle of my homelab—from provisioning VMs on Proxmox to deploying a production-ready K3s Kubernetes cluster.

## Why Ansible?

When I decided to automate my infrastructure, I considered several tools. I chose Ansible for its simplicity, agentless architecture, and gentle learning curve. Writing playbooks in YAML felt declarative and intuitive, and the vast collection of community-supported modules meant I wouldn't have to reinvent the wheel.

## The Architecture: A Multi-Layered Approach

My Ansible project is designed to be modular and scalable, with a clear separation of concerns. It's built around a collection of roles, each responsible for a specific component of the infrastructure.

### Layer 1: Proxmox Provisioning

The foundation of my homelab is Proxmox VE. The `proxmox` role is the first step in the automation pipeline. It handles:

- **VM and Container Creation:** Using a simple YAML definition in my `vars` files, I can specify the number of VMs and containers to create, their resources (CPU, memory, disk), and their base operating system images (see the sketch after this list).
- **Cloud-Init Integration:** For VMs, I leverage Cloud-Init to perform initial setup, such as setting the hostname, creating users, and injecting the SSH keys Ansible uses to connect.
- **Hardware Passthrough:** The role also configures hardware passthrough for devices like Intel Quick Sync for video transcoding in my media server.
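As a rough illustration of such a definition (not taken from this repository), the `vms` list the role consumes might look like this. The field names `name`, `vmid`, and `node` do appear in the role's tasks in this diff; the host name, ID, node name, and resource fields below are invented placeholders.

```yaml
# Hypothetical vars entry -- illustrative only. The proxmox role loops over
# "vms" and reads vm.name, vm.vmid and vm.node; the cores/memory/disk fields
# are assumptions, not confirmed by this diff.
vms:
  - name: k3s-server00
    vmid: 301
    node: pve01
    cores: 4
    memory: 8192
    disk_gb: 64
```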
### Layer 2: The K3s Kubernetes Cluster

With the base VMs ready, the next step is to build the Kubernetes cluster. I chose K3s for its lightweight footprint and ease of installation. The setup is divided into several roles:

- `k3s_server`: This role bootstraps the first master node and then adds additional master nodes to create a highly available control plane.
- `k3s_agent`: This role joins the worker nodes to the cluster.
- `k3s_loadbalancer`: A dedicated VM running Nginx is set up to act as a load balancer for the K3s API server, ensuring a stable endpoint for `kubectl` and other clients (a minimal sketch of the group split follows this list).
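To make the server/agent split concrete, a play could gate the roles on group membership roughly as below. The `k3s` host group and the `groups["k3s_server"]` check are taken from this diff; the `k3s_agent` group name and the play as a whole are my own sketch, not the project's actual playbook.

```yaml
# Sketch only -- not the repository's playbook. The "k3s_agent" group name is assumed.
- name: Roll out K3s servers and agents
  hosts: k3s
  gather_facts: true
  roles:
    - role: k3s_server
      when: inventory_hostname in groups["k3s_server"]
    - role: k3s_agent
      when: inventory_hostname in groups["k3s_agent"]
```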
### Layer 3: Applications and Services

Once the Kubernetes cluster is up and running, it's time to deploy applications. My project includes roles for:

- `docker_host`: For services that are better suited to run in a traditional Docker environment, this role sets up and configures Docker hosts.
- `kubernetes_argocd`: I use Argo CD for GitOps-based continuous delivery. This role deploys Argo CD to the cluster and configures it to sync with my application repositories (a sketch of such an Application manifest follows this list).
- `reverse_proxy`: Caddy is my reverse proxy of choice, and this role automates its installation and configuration, including obtaining SSL certificates from Let's Encrypt.
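For context, the kind of manifest the `kubernetes_argocd` role's templates (e.g., `root_application.yaml.j2`) would render looks roughly like the following. The repository URL, path, and application name here are placeholders I made up, not values rendered from the role's actual templates.

```yaml
# Rough shape of an Argo CD Application -- illustrative only.
# repoURL, path, and the application name are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/homelab/gitops.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```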
## Putting It All Together: The Power of Playbooks

The playbooks in the `playbooks/` directory tie everything together. For example, the `kubernetes_setup.yml` playbook runs all the necessary roles in the correct order to bring up the entire Kubernetes cluster from scratch.

```yaml
# playbooks/kubernetes_setup.yml
---
- name: Set up Kubernetes Cluster
  hosts: all
  gather_facts: true
  roles:
    - role: k3s_server
    - role: k3s_agent
    - role: k3s_loadbalancer
    - role: kubernetes_argocd
```

## Final Thoughts and Future Plans

This Ansible project has transformed my homelab from a collection of manually configured machines into a fully automated and reproducible environment. I can now tear down and rebuild my entire infrastructure with a single command, which gives me the confidence to experiment without fear of breaking things.

While the project is highly tailored to my specific needs, I hope this overview provides some inspiration for your own automation journey. The principles of IaC and the power of tools like Ansible can be applied to any environment, big or small.

What's next? I plan to explore more advanced Kubernetes concepts, such as Cilium for networking and policy, and integrate more of my self-hosted services into the GitOps workflow with Argo CD. The homelab is never truly "finished," and that's what makes it so much fun.
@@ -3,9 +3,9 @@
   hosts: docker_host
   gather_facts: true
   roles:
-    - role: common
-      tags:
-        - common
+    # - role: common
+    #   tags:
+    #     - common
     - role: docker_host
       tags:
         - docker_host
5  playbooks/docker.yaml  Normal file

@@ -0,0 +1,5 @@
---
- name: Setup Docker Hosts
  ansible.builtin.import_playbook: docker-host.yaml
- name: Setup Docker load balancer
  ansible.builtin.import_playbook: docker-lb.yaml
@@ -1,5 +0,0 @@
---
- name: Setup Docker Hosts
  ansible.builtin.import_playbook: docker-host.yml
- name: Setup Docker load balancer
  ansible.builtin.import_playbook: docker-lb.yml
@@ -3,10 +3,10 @@
   hosts: k3s
   gather_facts: true
   roles:
-    # - role: common
-    #   tags:
-    #     - common
-    #   when: inventory_hostname in groups["k3s_server"]
+    - role: common
+      tags:
+        - common
+      when: inventory_hostname in groups["k3s_server"]
     - role: k3s_server
       tags:
         - k3s_server
6  playbooks/proxmox-k3s-add-agent.yaml  Normal file

@@ -0,0 +1,6 @@
---
- name: Create new VM(s)
  ansible.builtin.import_playbook: proxmox.yaml

- name: Provision VM
  ansible.builtin.import_playbook: k3s-agents.yaml
@@ -79,12 +79,13 @@
     path: ~/.config/nvim
   register: nvim_config

-- name: Clone LazyVim starter to Neovim config directory
+- name: Clone personal Neovim config directory
   ansible.builtin.git:
-    repo: https://github.com/LazyVim/starter
+    repo: https://codeberg.org/tudattr/nvim
     dest: ~/.config/nvim
     clone: true
     update: false
+    version: 1.0.0
   when: not nvim_config.stat.exists

 - name: Remove .git directory from Neovim config
13  roles/common/tasks/main.yaml  Normal file

@@ -0,0 +1,13 @@
---
- name: Configure Time
  ansible.builtin.include_tasks: time.yaml
- name: Configure Packages
  ansible.builtin.include_tasks: packages.yaml
- name: Configure Hostname
  ansible.builtin.include_tasks: hostname.yaml
- name: Configure Extra-Packages
  ansible.builtin.include_tasks: extra_packages.yaml
- name: Configure Bash
  ansible.builtin.include_tasks: bash.yaml
- name: Configure SSH
  ansible.builtin.include_tasks: sshd.yaml
@@ -1,13 +0,0 @@
---
- name: Configure Time
  ansible.builtin.include_tasks: time.yml
- name: Configure Packages
  ansible.builtin.include_tasks: packages.yml
- name: Configure Hostname
  ansible.builtin.include_tasks: hostname.yml
- name: Configure Extra-Packages
  ansible.builtin.include_tasks: extra_packages.yml
- name: Configure Bash
  ansible.builtin.include_tasks: bash.yml
- name: Configure SSH
  ansible.builtin.include_tasks: sshd.yml
@@ -14,3 +14,5 @@ common_packages:
  - fd-find
  - ripgrep
  - nfs-common
  - open-iscsi
  - parted
@@ -5,7 +5,6 @@
    state: directory
    mode: "0755"
  loop:
    - /media/docker
    - /media/series
    - /media/movies
    - /media/songs
@@ -38,4 +37,5 @@
    - /media/series
    - /media/movies
    - /media/songs
    - /media/downloads
  become: true
21  roles/docker_host/tasks/main.yaml  Normal file

@@ -0,0 +1,21 @@
---
- name: Setup VM
  ansible.builtin.include_tasks: 10_setup.yaml

- name: Install docker
  ansible.builtin.include_tasks: 20_installation.yaml

- name: Setup user and group for docker
  ansible.builtin.include_tasks: 30_user_group_setup.yaml

- name: Setup directory structure for docker
  ansible.builtin.include_tasks: 40_directory_setup.yaml

# - name: Deploy configs
#   ansible.builtin.include_tasks: 50_provision.yaml

- name: Deploy docker compose
  ansible.builtin.include_tasks: 60_deploy_compose.yaml

- name: Publish metrics
  ansible.builtin.include_tasks: 70_export.yaml
@@ -1,21 +0,0 @@
---
- name: Setup VM
  ansible.builtin.include_tasks: 10_setup.yml

- name: Install docker
  ansible.builtin.include_tasks: 20_installation.yml

- name: Setup user and group for docker
  ansible.builtin.include_tasks: 30_user_group_setup.yml

- name: Setup directory structure for docker
  ansible.builtin.include_tasks: 40_directory_setup.yml

- name: Deploy configs
  ansible.builtin.include_tasks: 50_provision.yml

- name: Deploy docker compose
  ansible.builtin.include_tasks: 60_deploy_compose.yml

- name: Publish metrics
  ansible.builtin.include_tasks: 70_export.yml
@@ -1,7 +1,5 @@
docker_host_package_common_dependencies:
  - nfs-common
  - firmware-misc-nonfree
  - linux-image-amd64

apt_lock_files:
  - /var/lib/dpkg/lock
3  roles/k3s_agent/tasks/main.yaml  Normal file

@@ -0,0 +1,3 @@
---
- name: Install k3s agent
  include_tasks: installation.yaml
@@ -1,3 +0,0 @@
---
- name: Install k3s agent
  include_tasks: installation.yml
@@ -1,9 +1,9 @@
 ---
 - name: Installation
-  ansible.builtin.include_tasks: installation.yml
+  ansible.builtin.include_tasks: installation.yaml

 - name: Configure
-  ansible.builtin.include_tasks: configuration.yml
+  ansible.builtin.include_tasks: configuration.yaml

 - name: Setup DNS on Netcup
   community.general.netcup_dns:
@@ -14,16 +14,16 @@
   register: k3s_status

 - name: Install primary k3s server
-  include_tasks: primary_installation.yml
+  include_tasks: primary_installation.yaml
   when: ansible_default_ipv4.address == k3s_primary_server_ip

 - name: Get token from primary k3s server
-  include_tasks: pull_token.yml
+  include_tasks: pull_token.yaml

 - name: Install secondary k3s servers
-  include_tasks: secondary_installation.yml
+  include_tasks: secondary_installation.yaml
   when: ansible_default_ipv4.address != k3s_primary_server_ip

 - name: Set kubeconfig on localhost
-  include_tasks: create_kubeconfig.yml
+  include_tasks: create_kubeconfig.yaml
   when: ansible_default_ipv4.address == k3s_primary_server_ip
@@ -1 +1 @@
-k3s_server_token_vault_file: ../vars/group_vars/k3s/secrets_token.yml
+k3s_server_token_vault_file: ../vars/group_vars/k3s/secrets_token.yaml
5  roles/k3s_storage/tasks/main.yaml  Normal file

@@ -0,0 +1,5 @@
---
- name: Install dependencies
  ansible.builtin.include_tasks: requirements.yaml
- name: Install k3s
  ansible.builtin.include_tasks: installation.yaml
@@ -1,5 +0,0 @@
---
- name: Install dependencies
  ansible.builtin.include_tasks: requirements.yml
- name: Install k3s
  ansible.builtin.include_tasks: installation.yml
@@ -33,7 +33,7 @@

 - name: Apply ArgoCD Ingress
   kubernetes.core.k8s:
-    definition: "{{ lookup('ansible.builtin.template', 'ingress.yml.j2') | from_yaml }}"
+    definition: "{{ lookup('ansible.builtin.template', 'ingress.yaml.j2') | from_yaml }}"
     state: present
     namespace: "{{ argocd_namespace }}"
   register: apply_manifests

@@ -53,7 +53,7 @@

 - name: Apply ArgoCD repository
   kubernetes.core.k8s:
-    definition: "{{ lookup('ansible.builtin.template', 'repository.yml.j2') | from_yaml }}"
+    definition: "{{ lookup('ansible.builtin.template', 'repository.yaml.j2') | from_yaml }}"
     state: present
     namespace: "{{ argocd_namespace }}"
   register: apply_manifests

@@ -63,7 +63,7 @@

 - name: Apply ArgoCD Root Application
   kubernetes.core.k8s:
-    definition: "{{ lookup('ansible.builtin.template', 'root_application.yml.j2') | from_yaml }}"
+    definition: "{{ lookup('ansible.builtin.template', 'root_application.yaml.j2') | from_yaml }}"
     state: present
     namespace: "{{ argocd_namespace }}"
   register: apply_manifests
6  roles/node_exporter/tasks/main.yaml  Normal file

@@ -0,0 +1,6 @@
- name: Get Version
  ansible.builtin.include_tasks: get_version.yaml
- name: Install
  ansible.builtin.include_tasks: install.yaml
- name: Setup Service
  ansible.builtin.include_tasks: systemd.yaml
@@ -1,6 +0,0 @@
- name: Get Version
  ansible.builtin.include_tasks: get_version.yml
- name: Install
  ansible.builtin.include_tasks: install.yml
- name: Setup Service
  ansible.builtin.include_tasks: systemd.yml
@@ -2,11 +2,6 @@

 This role facilitates the management of Proxmox VE resources, including virtual machines (VMs) and LXC containers. It automates the setup of Proxmox nodes and the creation, configuration, and destruction of guests.

-## Requirements
-
-- `community.general.proxmox_vm_info`
-- `community.general.proxmox_kvm`
-
 ## Role Variables

 | Variable | Description | Default Value |
@@ -1,11 +1,10 @@
 #!/bin/bash

 # Configuration
-VM_ID=303
-TARGET_IP="192.168.20.36" # Replace with the IP of your VM
+VM_ID=$1
+TARGET_IP=$2
 PORT=22
 CHECK_INTERVAL=300 # 5 minutes in seconds
-LOG_FILE="/var/log/vm_monitor.log"
+LOG_FILE="/var/log/vm_monitor_${VM_ID}.log"

 # Function to log messages
 log_message() {

@@ -65,19 +64,12 @@ restart_vm() {
   log_message "VM $VM_ID has been restarted."
 }

-# Main loop
-log_message "Starting monitoring of VM $VM_ID on port $PORT..."
-log_message "Press Ctrl+C to exit."
+# Main execution
+# log_message "Starting monitoring of VM $VM_ID on port $PORT..."

-while true; do
-  # Check if port 22 is open
-  if ! check_port; then
-    restart_vm
-  else
-    log_message "Port $PORT is reachable. VM is running normally."
-  fi
-
-  # Wait for the next check
-  log_message "Sleeping for $CHECK_INTERVAL seconds..."
-  sleep $CHECK_INTERVAL
-done
+# Check if port 22 is open
+if ! check_port; then
+  restart_vm
+# else
+#   log_message "Port $PORT is reachable. VM is running normally."
+fi
8  roles/proxmox/tasks/00_setup_machines.yaml  Normal file

@@ -0,0 +1,8 @@
---
- name: Prepare Localhost
  ansible.builtin.include_tasks: ./01_setup_localhost.yaml
  when: is_localhost

- name: Prepare Localhost
  ansible.builtin.include_tasks: ./05_setup_node.yaml
  when: is_proxmox_node
@@ -1,8 +0,0 @@
---
- name: Prepare Localhost
  ansible.builtin.include_tasks: ./01_setup_localhost.yml
  when: is_localhost

- name: Prepare Localhost
  ansible.builtin.include_tasks: ./05_setup_node.yml
  when: is_proxmox_node
@@ -7,4 +7,4 @@
   loop: "{{ proxmox_node_dependencies }}"

 - name: Ensure Hardware Acceleration on node
-  ansible.builtin.include_tasks: 06_hardware_acceleration.yml
+  ansible.builtin.include_tasks: 06_hardware_acceleration.yaml
@@ -23,6 +23,7 @@
       vfio_virqfd
     create: true
     backup: true
+    mode: 644
   register: vfio_result

 - name: Update initramfs
@@ -6,7 +6,7 @@
     mode: "0600"

 - name: Update Vault data
-  ansible.builtin.include_tasks: 15_create_secret.yml
+  ansible.builtin.include_tasks: 15_create_secret.yaml
   loop: "{{ vms | map(attribute='name') }}"
   loop_control:
     loop_var: "vm_name"
@@ -1,7 +1,6 @@
---
- name: Decrypt vm vault file
  ansible.builtin.shell: cd ../; ansible-vault decrypt "./playbooks/{{ proxmox_vault_file }}"
  ignore_errors: true
  no_log: true

- name: Load existing vault content

@@ -43,5 +42,4 @@

- name: Encrypt vm vault file
  ansible.builtin.shell: cd ../; ansible-vault encrypt "./playbooks/{{ proxmox_vault_file }}"
  ignore_errors: true
  no_log: true
@@ -1,6 +1,6 @@
 ---
 - name: Download Cloud Init Isos
-  ansible.builtin.include_tasks: 42_download_isos.yml
+  ansible.builtin.include_tasks: 42_download_isos.yaml
   loop: "{{ proxmox_cloud_init_images | dict2items | map(attribute='value') }}"
   loop_control:
     loop_var: distro
@@ -5,13 +5,13 @@
     name: vm_secrets

 # - name: Destroy vms (Only during rapid testing)
-#   ansible.builtin.include_tasks: 54_destroy_vm.yml
+#   ansible.builtin.include_tasks: 54_destroy_vm.yaml
 #   loop: "{{ vms }}"
 #   loop_control:
 #     loop_var: "vm"

 - name: Create vms
-  ansible.builtin.include_tasks: 55_create_vm.yml
+  ansible.builtin.include_tasks: 55_create_vm.yaml
   loop: "{{ vms }}"
   loop_control:
     loop_var: "vm"
@@ -1,6 +1,6 @@
 ---
 - name: Gather info about VM
-  community.general.proxmox_vm_info:
+  community.proxmox.proxmox_vm_info:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -9,7 +9,7 @@
   register: vm_info

 - name: Stop VM
-  community.general.proxmox_kvm:
+  community.proxmox.proxmox_kvm:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -21,7 +21,7 @@
   when: vm_info.proxmox_vms | length > 0

 - name: Destroy VM
-  community.general.proxmox_kvm:
+  community.proxmox.proxmox_kvm:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -1,6 +1,6 @@
 ---
 - name: Create VM
-  community.general.proxmox_kvm:
+  community.proxmox.proxmox_kvm:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -27,5 +27,5 @@
   register: proxmox_deploy_info

 - name: Provision created VM
-  ansible.builtin.include_tasks: 56_provision_new_vm.yml
+  ansible.builtin.include_tasks: 56_provision_new_vm.yaml
   when: proxmox_deploy_info.changed
@@ -17,6 +17,7 @@
   ansible.builtin.shell: |
     qm set {{ vm.vmid }} --scsi0 {{ proxmox_storage }}:{{ vm.vmid }}/vm-{{ vm.vmid }}-disk-0.raw --ide2 {{ proxmox_storage }}:cloudinit --boot order=scsi0
   delegate_to: "{{ vm.node }}"
+  changed_when: true

 - name: Resize scsi0 disk if needed
   ansible.builtin.shell: |
|
||||
delegate_to: "{{ vm.node }}"
|
||||
|
||||
- name: Start VM
|
||||
community.general.proxmox_kvm:
|
||||
community.proxmox.proxmox_kvm:
|
||||
api_user: "{{ proxmox_api_user }}@pam"
|
||||
api_token_id: "{{ proxmox_api_token_id }}"
|
||||
api_token_secret: "{{ proxmox_api_token_secret }}"
|
||||
@@ -34,14 +35,14 @@
     state: started

 - name: Retry stopping VM
-  ansible.builtin.include_tasks: ./57_stop_and_verify_vm.yml
+  ansible.builtin.include_tasks: ./57_stop_and_verify_vm.yaml

 - name: Pause for 5 seconds for api
   ansible.builtin.pause:
     seconds: 5

 - name: Start VM
-  community.general.proxmox_kvm:
+  community.proxmox.proxmox_kvm:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -86,3 +87,25 @@
 #     create: true
 #     state: present
 #   delegate_to: localhost
+
+
+- name: Copy VM check script to node
+  ansible.builtin.copy:
+    src: check_proxmox_vm.sh
+    dest: /usr/local/bin/check_proxmox_vm.sh
+    mode: '0755'
+  delegate_to: "{{ vm.node }}"
+
+- name: Create PATH entry for crontab
+  ansible.builtin.cron:
+    name: PATH
+    env: true
+    job: /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+  delegate_to: "{{ vm.node }}"
+
+- name: Schedule VM check script
+  ansible.builtin.cron:
+    name: "Check VM {{ vm.name }}"
+    job: "/usr/local/bin/check_proxmox_vm.sh {{ vm.vmid }} {{ vm_found_ip }}"
+    minute: "*/5"
+  delegate_to: "{{ vm.node }}"
@@ -5,7 +5,7 @@
     retry_count: "{{ 0 if retry_count is undefined else retry_count | int + 1 }}"

 - name: Stop VM
-  community.general.proxmox_kvm:
+  community.proxmox.proxmox_kvm:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -16,7 +16,7 @@
     force: true

 - name: Wait until VM is fully stopped
-  community.general.proxmox_vm_info:
+  community.proxmox.proxmox_vm_info:
     api_user: "{{ proxmox_api_user }}@pam"
     api_token_id: "{{ proxmox_api_token_id }}"
     api_token_secret: "{{ proxmox_api_token_secret }}"
@@ -36,4 +36,4 @@
     seconds: 5

 - name: "Failed to stop VM - Retrying..."
-  include_tasks: ./57_stop_and_verify_vm.yml
+  include_tasks: ./57_stop_and_verify_vm.yaml
@@ -5,7 +5,7 @@
     name: vm_secrets

 - name: Create vms
-  ansible.builtin.include_tasks: 65_create_container.yml
+  ansible.builtin.include_tasks: 65_create_container.yaml
   loop: "{{ lxcs }}"
   loop_control:
     loop_var: "container"
19  roles/proxmox/tasks/main.yaml  Normal file

@@ -0,0 +1,19 @@
---
- name: Prepare Machines
  ansible.builtin.include_tasks: 00_setup_machines.yaml

- name: Create VM vault
  ansible.builtin.include_tasks: 10_create_secrets.yaml
  when: is_localhost

- name: Prime node for VM
  ansible.builtin.include_tasks: 40_prepare_vm_creation.yaml
  when: is_proxmox_node

- name: Create VMs
  ansible.builtin.include_tasks: 50_create_vms.yaml
  when: is_localhost

- name: Create LXC containers
  ansible.builtin.include_tasks: 60_create_containers.yaml
  when: is_localhost
@@ -1,19 +0,0 @@
---
- name: Prepare Machines
  ansible.builtin.include_tasks: 00_setup_machines.yml

- name: Create VM vault
  ansible.builtin.include_tasks: 10_create_secrets.yml
  when: is_localhost

- name: Prime node for VM
  ansible.builtin.include_tasks: 40_prepare_vm_creation.yml
  when: is_proxmox_node

- name: Create VMs
  ansible.builtin.include_tasks: 50_create_vms.yml
  when: is_localhost

- name: Create LXC containers
  ansible.builtin.include_tasks: 60_create_containers.yml
  when: is_localhost
@@ -3,7 +3,7 @@ proxmox_creator: ansible

 proxmox_storage: proxmox

-proxmox_vault_file: ../vars/group_vars/proxmox/secrets_vm.yml
+proxmox_vault_file: ../vars/group_vars/proxmox/secrets_vm.yaml
 proxmox_secrets_prefix: secrets_vm
 proxmox_cloud_init_images:
   debian:
@@ -25,7 +25,7 @@
   become: true

 - name: Build Custom Caddy with netcup
-  ansible.builtin.command: xcaddy build --with github.com/caddy-dns/netcup
+  ansible.builtin.command: xcaddy build --with github.com/caddy-dns/cloudflare
   environment:
     PATH: "{{ ansible_env.PATH }}:/usr/local/go/bin"
   register: xcaddy_build
14  roles/reverse_proxy/tasks/50_netcup_dns.yaml  Normal file

@@ -0,0 +1,14 @@
---
# - name: Setup DNS on Netcup
#   community.general.netcup_dns:
#     api_key: "{{ netcup_api_key }}"
#     api_password: "{{ netcup_api_password }}"
#     customer_id: "{{ netcup_customer_id }}"
#     domain: "{{ domain }}"
#     name: "{{ service.name }}"
#     type: "A"
#     value: "{{ hostvars['docker-lb'].ansible_default_ipv4.address }}"
#   loop: "{{ services }}"
#   loop_control:
#     loop_var: service
#   delegate_to: localhost
Some files were not shown because too many files have changed in this diff.