# TuDatTr IaC
The Ansible project to provision fresh Debian VMs for my Proxmox instances. I do not recommend using this project for your own infrastructure, as it is heavily attuned to my specific host/network setup. Some values are hard-coded, such as the public key in both `./scripts/debian_seed.sh` and `./group_vars/all/vars.yml`.
## Prerequisites

- `secrets.yml` in the root directory of this repository. A skeleton file can be found at `./secrets.yml.skeleton`.
- IP configuration of the hosts as in `./host_vars/*`.
- A `~/.ssh/config` entry for each of the hosts used.
- `passlib` installed for your operating system; it is needed to hash passwords ad hoc.
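For example, a minimal `~/.ssh/config` entry might look like the following sketch (the IP, user, and key path are placeholders for illustration, not values from this repository):

```
Host k3s-server00
    HostName 192.168.20.21
    User debian
    IdentityFile ~/.ssh/id_ed25519
```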
## Improvable Variables

`group_vars/k3s/vars.yml`:

- `k3s.server.ips`: Take the list of IPs from `host_vars/k3s_server*.yml`.
- `k3s_db_connection_string`: Embed this variable in the `k3s.db`-directory. Currently causes a loop.
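As a sketch of the first point, the server IP list could be built from the inventory instead of being hard-coded; the group name and variable names below are assumptions for illustration, not taken from this repository:

```yaml
# group_vars/k3s/vars.yml (hypothetical sketch)
# Collect ansible_host from every host in an assumed "k3s_server" inventory group.
k3s_server_ips: "{{ groups['k3s_server'] | map('extract', hostvars, 'ansible_host') | list }}"
```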
## Run Playbook

To run a first playbook and test the setup, the following command can be executed:

```sh
ansible-playbook -i production -J k3s-servers.yml
```

This will run the `./k3s-servers.yml` playbook and execute its roles (`-J` prompts for the vault password to decrypt `secrets.yml`).
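The `production` argument is an Ansible inventory file. A minimal INI-style sketch of such an inventory (group names and IPs are illustrative assumptions) could look like:

```ini
[k3s_server]
k3s-server00 ansible_host=192.168.20.21

[k3s_agent]
k3s-agent00 ansible_host=192.168.20.31
```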
## After successful k3s installation

To access our Kubernetes cluster from our host machine and work on it via flux and such, we need to manually copy a k3s config from one of our server nodes to our host machine. Then we need to install `kubectl` on our host machine and, optionally, `kubectx` if we're already managing other Kubernetes instances. Then we replace the localhost address inside the config with the IP of our load balancer. Finally, we'll need to set the `KUBECONFIG` variable.

```sh
mkdir ~/.kube/
scp k3s-server00:/etc/rancher/k3s/k3s.yaml ~/.kube/config
chown $USER ~/.kube/config
sed -i "s/127.0.0.1/192.168.20.22/" ~/.kube/config
export KUBECONFIG=~/.kube/config
```

Install flux and continue in the flux repository.
## Longhorn Nodes

To create Longhorn nodes from existing Kubernetes nodes, we want to increase their storage capacity. Since we're using VMs for our k3s nodes, we can resize the root disk of the VMs in the Proxmox GUI. Then we have to resize the partitions inside the VM so that the root partition uses the newly available space. With an LVM-based root partition, we can do the following:

```sh
# Create a new partition from the free space
# (in fdisk: "n", accept the defaults, then "w" to write).
sudo fdisk /dev/sda
# Create an LVM physical volume on the new partition and add it to the volume group.
sudo pvcreate /dev/sda3
sudo vgextend k3s-vg /dev/sda3
# Grow the root logical volume (and, via -r, its filesystem) into the new space.
sudo lvresize -l +100%FREE -r /dev/k3s-vg/root
```
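A quick way to confirm that the resize took effect is to check the root filesystem's reported size afterwards with a read-only command:

```shell
# Show the root filesystem's size; it should now include the added space.
df -h /
```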