**I do not recommend this project being used for one's own infrastructure, as
it is heavily attuned to my specific host/network setup.**

This Ansible project provisions fresh Debian VMs on my Proxmox VE instances and
automates the setup of a K3s Kubernetes cluster on them. It also includes
playbooks for configuring Docker hosts, load balancers, and other services.

## Repository Structure

The repository is organized into the following main directories:

- `playbooks/`: Contains the main Ansible playbooks for different setup scenarios.
- `roles/`: Contains the Ansible roles that are used by the playbooks.
- `vars/`: Contains the inventories and variable files, including group-specific variables.

The configuration of this project is done via files in the `./vars` directory.
The inventory is composed of `.ini` files; each `.ini` file represents an
inventory and can be passed with the `-i` flag when running playbooks. The
variables for the hosts and groups are defined in the `./vars/group_vars`
directory: the `all` group contains variables that are common to all hosts,
while each other directory in `group_vars` corresponds to a group defined in
the inventory files and contains variables specific to that group. The
structure of the `./vars` directory is as follows:

```
vars/
├── group_vars/
│   ├── all/
│   │   ├── secrets.yml
│   │   └── vars.yml
│   └── <group_name>/
│       └── *.yml
├── docker.ini
├── k3s.ini
├── kubernetes.ini
├── proxmox.ini
└── vps.ini
```
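
As a quick sanity check, you can let Ansible parse an inventory and print the
resulting group/host layout. These are standard `ansible-inventory` commands,
not something this repository ships; `vars/k3s.ini` is just one of the
inventories listed above, and the host name in the second command is only an
example that may not exist in your inventory.

```sh
# Show which hosts end up in which groups for the k3s inventory
ansible-inventory -i vars/k3s.ini --graph

# Show the variables Ansible resolves for a single host
ansible-inventory -i vars/k3s.ini --host k3s-server00
```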
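
`secrets.yml` is where sensitive values live. Assuming it is encrypted with
Ansible Vault (the README does not say so, treat this as a common convention
rather than a fact about this repository), it can be handled like this:

```sh
# View or edit the encrypted secrets (prompts for the vault password)
ansible-vault view vars/group_vars/all/secrets.yml
ansible-vault edit vars/group_vars/all/secrets.yml

# Provide the vault password when running a playbook
ansible-playbook -i vars/k3s.ini playbooks/k3s-servers.yml --ask-vault-pass
```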

## Playbooks

The following playbooks are available:

- `proxmox.yml`: Provisions VMs and containers on Proxmox VE.
- `k3s-servers.yml`: Sets up the K3s master nodes.
- `k3s-agents.yml`: Sets up the K3s agent nodes.
- `k3s-loadbalancer.yml`: Configures a load balancer for the K3s cluster.
- `k3s-storage.yml`: Configures storage for the K3s cluster.
- `docker.yml`: Sets up the Docker hosts and their load balancer.
- `docker-host.yml`: Configures the Docker hosts.
- `docker-lb.yml`: Configures a load balancer for Docker services.
- `kubernetes_setup.yml`: A meta-playbook for setting up the entire Kubernetes cluster.

## Run Playbook

To run a playbook, specify both an inventory file and a playbook file. For
example, to run the `k3s-servers.yml` playbook with the `k3s.ini` inventory:

```sh
ansible-playbook -i vars/k3s.ini playbooks/k3s-servers.yml
```

## Roles

The following roles are defined:

- `common`: Common configuration tasks for all nodes.
- `proxmox`: Manages Proxmox VE, including VM and container creation.
- `k3s_server`: Installs and configures K3s master nodes.
- `k3s_agent`: Installs and configures K3s agent nodes.
- `k3s_loadbalancer`: Configures an Nginx-based load balancer for the K3s cluster.
- `k3s_storage`: Configures storage solutions for Kubernetes.
- `docker_host`: Installs and configures Docker.
- `kubernetes_argocd`: Deploys Argo CD to the Kubernetes cluster.
- `node_exporter`: Installs the Prometheus Node Exporter for monitoring.
- `reverse_proxy`: Configures a Caddy-based reverse proxy.

## Usage

1. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ansible-galaxy install -r requirements.yml
   ```

2. **Configure variables:**

   - Create an inventory file (e.g., `vars/k3s.ini`).
   - Adjust variables in `vars/group_vars/` to match your environment.

3. **Run playbooks:**

   ```bash
   # To provision VMs on Proxmox
   ansible-playbook -i vars/proxmox.ini playbooks/proxmox.yml

   # To set up the K3s cluster
   ansible-playbook -i vars/k3s.ini playbooks/kubernetes_setup.yml
   ```
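
Before touching real machines, it can help to do a dry run first. `--check`,
`--diff`, and `--limit` are standard `ansible-playbook` flags; not every role or
module behaves meaningfully in check mode, and the group name below is only an
example, so treat this as a best-effort preview.

```sh
# Preview what the playbook would change without applying anything
ansible-playbook -i vars/k3s.ini playbooks/kubernetes_setup.yml --check --diff

# Limit a run to a subset of hosts (host/group names depend on your inventory)
ansible-playbook -i vars/k3s.ini playbooks/k3s-agents.yml --limit k3s-agents
```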

## After successful k3s installation

To work on the Kubernetes cluster from a host machine (via Flux and similar
tools), manually copy the k3s config from one of the server nodes to the host
machine. Install `kubectl` on the host, and optionally `kubectx` if you are
already managing other Kubernetes instances. Then replace the localhost address
inside the config with the IP of the load balancer, and finally set the
`KUBECONFIG` variable:

```sh
mkdir ~/.kube/
scp k3s-server00:/etc/rancher/k3s/k3s.yaml ~/.kube/config
chown $USER ~/.kube/config
sed -i "s/127.0.0.1/192.168.20.22/" ~/.kube/config
export KUBECONFIG=~/.kube/config
```
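
If everything worked, `kubectl` should now reach the cluster through the load
balancer. A quick check, using standard `kubectl` commands:

```sh
# Confirm the server and agent nodes are visible and Ready
kubectl get nodes -o wide
kubectl cluster-info
```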

Install Flux and continue in the Flux repository.
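
Assuming the Flux CLI is installed on the host, you can verify that the cluster
meets Flux's prerequisites before moving on; the actual bootstrap steps live in
the separate Flux repository and are not covered here.

```sh
# Verify prerequisites for installing Flux against the current KUBECONFIG context
flux check --pre
```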

## Longhorn Nodes

To create Longhorn nodes from existing Kubernetes nodes, we want to increase
their storage capacity. Since the k3s nodes are VMs, the root disk of each VM
can be resized in the Proxmox GUI.

Afterwards, the partitions inside the VM have to be resized so the root
partition uses the newly available space. With an LVM-based root partition this
looks as follows:

```sh
# Create a new partition from the free space.
sudo fdisk /dev/sda
# In fdisk: "n" to create a new partition, accept the defaults, then "w" to write.

# Turn the new partition into an LVM physical volume and add it to the volume group.
sudo pvcreate /dev/sda3
sudo vgextend k3s-vg /dev/sda3

# Use the newly available storage in the root volume (and resize its filesystem).
sudo lvresize -l +100%FREE -r /dev/k3s-vg/root
```
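
A short verification that the extra space actually arrived, using standard
tools (nothing here is specific to this setup apart from the volume names shown
above):

```sh
# The new partition and the enlarged volume group should show up here
lsblk
sudo vgs
# The root filesystem should now report the larger size
df -h /
```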

## Cloud Init VMs

For cloud-init based VMs, extend the virtual disk on the hypervisor, rescan it
inside the VM, and grow the root partition:

```sh
# On the hypervisor host
qm resize <vmid> scsi0 +32G

# On the VM
sudo fdisk -l /dev/sda    # check the current size
echo 1 | sudo tee /sys/class/block/sda/device/rescan
sudo fdisk -l /dev/sda    # the new size should be visible now
# growpart is part of cloud-guest-utils:
# sudo apt-get install cloud-guest-utils
sudo growpart /dev/sda 1
```
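
Note that `growpart` only grows the partition. Depending on the image,
cloud-init may grow the filesystem automatically on the next boot; otherwise it
has to be resized by hand, with the exact command depending on the filesystem,
for example:

```sh
# ext4 root filesystem
sudo resize2fs /dev/sda1

# XFS root filesystem (resized via its mountpoint)
sudo xfs_growfs /
```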

## Disclaimer

This project is highly customized for the author's specific environment. Using it without modification is not recommended.