TuDatTr IaC

The Ansible project to provision fresh Debian VMs for my Proxmox instances. I do not recommend using this project for one's own infrastructure, as it is heavily attuned to my specific host/network setup. Some values are hard-coded, such as the public key in both ./scripts/debian_seed.sh and ./group_vars/all/vars.yml.
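For illustration, the key is stored in a plain variable along these lines in ./group_vars/all/vars.yml; the variable name and key below are placeholders, not the actual values from this repository:

# group_vars/all/vars.yml (placeholder): the public key that gets deployed to new VMs
ssh_public_key: "ssh-ed25519 AAAA... user@host"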

Prerequisites

Improvable Variables

  • group_vars/k3s/vars.yml:
    • k3s.server.ips: Take the list of IPs from the host_vars k3s_server*.yml files instead of maintaining it by hand (see the sketch below).
    • k3s_db_connection_string: Embed this variable in the k3s.db dictionary. Currently this causes a variable loop.
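A possible way to derive the server IPs from the inventory instead of hard-coding them is sketched below; the group name k3s_server and the use of ansible_host in the host_vars are assumptions about the inventory layout:

# group_vars/k3s/vars.yml (sketch): build the server IP list from the inventory
# instead of maintaining it by hand.
k3s_server_ips: "{{ groups['k3s_server'] | map('extract', hostvars, 'ansible_host') | list }}"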

Run Playbook

To run a first playbook and test the setup, the following command can be executed:

ansible-playbook -i production -J k3s-servers.yml

This will run the ./k3s-servers.yml playbook and execute its roles.

After successful k3s installation

To access the Kubernetes cluster from our host machine (e.g. to work on it via Flux), we need to manually copy the k3s config from one of the server nodes to the host machine. Then we install kubectl on the host machine, and optionally kubectx if we're already managing other Kubernetes instances. Next we replace the localhost address inside the config with the IP of our load balancer, and finally we set the KUBECONFIG variable:

# Fetch the k3s config from a server node and point it at the load balancer
mkdir -p ~/.kube/
scp k3s-server00:/etc/rancher/k3s/k3s.yaml ~/.kube/config
chown $USER ~/.kube/config
sed -i "s/127.0.0.1/192.168.20.22/" ~/.kube/config
export KUBECONFIG=~/.kube/config
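To verify that the config works, listing the cluster nodes should now succeed (node names and status will depend on your setup):

kubectl get nodes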

Install flux and continue in the flux repository.
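Before bootstrapping, a quick sanity check of the cluster against Flux's prerequisites can be done with the Flux CLI, assuming it is already installed:

flux check --pre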

Longhorn Nodes

To create Longhorn nodes from existing Kubernetes nodes, we want to increase their storage capacity. Since we're using VMs for our k3s nodes, we can resize the root disk of the VMs in the Proxmox GUI.
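The same resize can also be done from the Proxmox host via the CLI; the VM ID (100), disk name (scsi0), and size below are examples and need to be adjusted to the actual VM:

# On the Proxmox host: grow the VM's disk by 50G
qm resize 100 scsi0 +50G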

Then we have to resize the partitions inside the VM so that the root partition uses the newly available space. With an LVM-based root partition, this can be done as follows:

# Create a new partition from the free space
# (inside fdisk: "n" for a new partition, accept the defaults, then "w" to write).
sudo fdisk /dev/sda
# Turn the new partition into an LVM physical volume and add it to the volume group
sudo pvcreate /dev/sda3
sudo vgextend k3s-vg /dev/sda3
# Use the newly available storage in the root volume
sudo lvresize --extents +100%FREE --resizefs /dev/k3s-vg/root
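Afterwards the extra space should be visible in the volume group and in the root filesystem; a quick check (volume group name as above):

# Confirm the volume group and the root filesystem picked up the new space
sudo vgs k3s-vg
df -h /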