parent 50abbf933c
commit 51a49d003d

README.md
@@ -1,227 +1,33 @@
 # TuDatTr IaC
 
-## User
-
-It is expected that a user with sudo privileges exists on the target; for me the user's name is "tudattr".
-You can add such a user with the following command: `useradd -m -g sudo -s /bin/bash tudattr`
-Don't forget to set a password for the new user with `passwd tudattr`.
-
-## sudo
-
-Install sudo on the target machine; with Debian that is:
-
-```sh
-su root
-apt install sudo
-usermod -a -G sudo tudattr
-```
-
+**I do not recommend this project being used for one's own infrastructure, as
+this project is heavily attuned to my specific host/network setup.**
+
+The Ansible project to provision fresh Debian VMs for my Proxmox instances.
+Some values are hard-coded, such as the public key, both in
+[./scripts/debian_seed.sh](./scripts/debian_seed.sh) and [./group_vars/all/vars.yml](./group_vars/all/vars.yml).
+
+## Prerequisites
+
+- [secrets.yml](secrets.yml) in the root directory of this repository.
+  A skeleton file can be found at [./secrets.yml.skeleton](./secrets.yml.skeleton).
+- IP configuration of the hosts, as in [./host_vars/\*](./host_vars/*)
+- A [~/.ssh/config](~/.ssh/config) entry for each of the hosts used.
+- `passlib` installed for your operating system; needed to hash passwords ad hoc.
+
+## Improvable Variables
+
+- `group_vars/k3s/vars.yml`:
+  - `k3s.server.ips`: take the list of IPs from the `k3s_server*.yml` host_vars.
+  - `k3s_db_connection_string`: embed this variable in the `k3s.db.` dictionary.
+    Currently causes a loop.
+
+## Run Playbook
+
+To run a first playbook and test the setup, execute the following command:
+
+```sh
+ansible-playbook -i production -J k3s-servers.yml
+```
+
+This will run the [./k3s-servers.yml](./k3s-servers.yml) playbook and execute
+its roles.
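A note on the `passlib` prerequisite above: Ansible's `password_hash` filter relies on passlib on the control node to hash passwords ad hoc. A minimal illustrative sketch (the variable name is hypothetical, not from this repository):

```yaml
# Illustrative only: hashing an ad-hoc password with the password_hash filter,
# which requires passlib on the machine running ansible-playbook.
user_password: "{{ 'changeme' | password_hash('sha512') }}"
```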
-## Backups
-
-Backups for aya01 and raspberry are in a Backblaze B2 bucket, encrypted client-side by rclone.
-But first of all we need to create the buckets and provide Ansible with the needed information.
-
-First we need to create an API key for Backblaze; it consists of an ID and a key.
-We use rclone to sync to Backblaze, and we can encrypt the data with rclone before sending it.
-To do this we need two remotes:
-
-- b2
-- crypt
-
-on each device that should be backed up.
-
-We create these by running `rclone config` and creating one [remote] b2 config and a [secret] crypt config. The crypt config should have two passwords that we store in our secrets file.
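For context on the removed section above: the per-host wiring of those two crypt passwords (and the B2 API credentials) lived in the host_vars deleted later in this commit, e.g. for aya01:

```yaml
# From the removed host_vars/aya01.yml further down in this diff (under the host: key).
host:
  backblaze:
    account: "{{ vault.aya01.backblaze.account }}"
    key: "{{ vault.aya01.backblaze.key }}"
    remote: "remote:aya01-tudattr-dev"
    password: "{{ vault.aya01.rclone.password }}"
    password2: "{{ vault.aya01.rclone.password2 }}"
```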
-
-## Vault
-
-- Create the vault with: `ansible-vault create secrets.yml`
-- Create an entry in the vault with: `ansible-vault edit secrets.yml`
-- Add the following entries: TODO
-
-## Docker
-
-To add a new Docker container to the docker role, add the following, replacing `service` with the name of your service:
-
-- Add the relevant vars to `group_vars/all/vars.yaml`:
-
-```yaml
-service:
-  host: "service"
-  ports:
-    http: "19999"
-  volumes:
-    config: "{{ docker_dir }}/service/" # config folder or your dir
-    data: "{{ docker_data_dir }}/service/" # data folder or your dir (only works on aya01)
-```
-
-- Create the necessary directories for the service in the docker role `roles/docker/tasks/service.yaml`:
-
-```yaml
-- name: Create service dirs
-  file:
-    path: "{{ item }}"
-    owner: 1000
-    group: 1000
-    mode: '775'
-    state: directory
-  loop:
-    - "{{ service.volumes.config }}"
-    - "{{ service.volumes.data }}"
-
-# optional:
-# - name: Place service config
-#   template:
-#     owner: 1000
-#     mode: '660'
-#     src: "templates/hostname/service/service.yml"
-#     dest: "{{ prm_config }}/service.yml"
-```
-
-- Include the new tasks in `roles/docker/tasks/hostname_compose.yaml`:
-
-```yaml
-- include_tasks: service.yaml
-  tags:
-    - service
-```
-
-- Add the new service to the compose file `roles/docker/templates/hostname/compose.yaml`:
-
-```yaml
-service:
-  image: service/service
-  container_name: service
-  hostname: service
-  networks:
-    - net
-  ports:
-    - "{{service_port}}:19999"
-  restart: unless-stopped
-  volumes:
-    - "{{service_config}}:/etc/service"
-    - "{{service_lib}}:/var/lib/service"
-    - "{{service_cache}}:/var/cache/service"
-```
-
-## Server
-
-- Install Debian (debian-11.5.0-amd64-netinst.iso) on the remote system
-- Create a user (tudattr)
-- Get the IP of the remote system (192.168.20.11)
-- Create an ssh-config entry:
-
-```config
-Host aya01
-    HostName 192.168.20.11
-    Port 22
-    User tudattr
-    IdentityFile /mnt/veracrypt1/genesis
-```
-
-- Copy the public key to the remote system:
-  `ssh-copy-id -i /mnt/veracrypt1/genesis.pub aya01`
-- Add this host to the Ansible inventory
-- Install sudo on the remote
-- Add the user to the sudo group with `usermod -a -G sudo tudattr` (use `su --login`; without a login shell the path is not loaded correctly, see [here](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918754))
-- Set the time correctly when getting the following error:
-
-```sh
-Release file for http://security.debian.org/debian-security/dists/bullseye-security/InRelease is not valid yet (invalid for another 12h 46min 9s). Updates for this repository will not be applied.
-```
-
-by doing the following on the remote system (example):
-
-```sh
-sudo systemctl stop ntp.service
-sudo ntpd -gq
-sudo systemctl start ntp.service
-```
-
-### zoneminder
-
-- Enable authentication in (Options->System)
-- Create a new camera:
-  - General>Name: BirdCam
-  - General>Source Type: Ffmpeg
-  - General>Function: Modect
-  - Source>Source Path: `rtsp://user:pw@ip:554/cam/mpeg4`
-- Change the default admin password
-- Create users
-
-## RaspberryPi
-
-- Install Raspbian Lite (2022-09-22-raspios-bullseye-arm64-lite.img) on the Pi
-- Get the IP of the remote system (192.168.20.11)
-- Create an ssh-config entry:
-
-```config
-Host pi
-    HostName 192.168.20.11
-    Port 22
-    User tudattr
-    IdentityFile /mnt/veracrypt1/genesis
-```
-
-- Enable ssh on the Pi
-- Copy the public key to the Pi
-- Change the password of the user on the Pi
-- Execute `ansible-playbook -i production --ask-vault-pass --extra-vars '@secrets.yml' pi.yml`
-
-## Mikrotik
-
-- Create an RSA key on your device and name it mikrotik_rsa
-- On the MikroTik, run: `/user/ssh-keys/import public-key-file=mikrotik_rsa.pub user=tudattr`
-- Create an ssh-config entry:
-
-```config
-Host mikrotik
-    HostName 192.168.70.1
-    Port 2200
-    User tudattr
-    IdentityFile /mnt/veracrypt1/mikrotik_rsa
-```
-
-### wireguard
-
-Thanks to [this guide](https://www.medo64.com/2022/04/wireguard-on-mikrotik-routeros-7/).
-
-Quick code:
-
-```
-# add wireguard interface
-interface/wireguard/add listen-port=51820 name=wg1
-# get public key
-interface/wireguard/print
-$ > public-key: <mikrotik_public_key>
-# add network/ip for wireguard interface
-ip/address/add address=192.168.200.1/24 network=192.168.200.0 interface=wg1
-# add firewall rule for wireguard (maybe specify to be from pppoe-wan)
-/ip/firewall/filter/add chain=input protocol=udp dst-port=51820 action=accept
-# routing for wg1 clients and rest of the network
-> <insert forward for routing between wg1 and other networks>
-# enable internet for wg1 clients (may have to add to enable internet list)
-/ip/firewall/nat/add chain=srcnat src-address=192.168.200.0/24 out-interface=pppoe-wan action=masquerade
-```
-
-Add a peer:
-
-```
-/interface/wireguard/peers/add interface=wg1 allowed-address=<untaken_ipv4>/24 public-key="<client_public_key>"
-```
-
-Key generation on Arch Linux: `wg genkey | (umask 0077 && tee wireguard.key) | wg pubkey > peer_A.pub`
-
-WireGuard config on Arch Linux at `/etc/wireguard/wg0.conf`:
-
-```
-[Interface]
-PrivateKey = <client_private_key>
-Address = 192.168.200.250/24
-
-[Peer]
-PublicKey = <mikrotik_public_key>
-Endpoint = tudattr.dev:51820
-AllowedIPs = 0.0.0.0/0
-```
-
-Used IPv4 addresses:
-
-- tudattr: 192.168.200.250
-- livei: 192.168.200.240
-
-#### notes
-
-- wireguard->add
-  - name: wg_tunnel01
-  - listen port: 51820
-  - [save]
-- wireguard->peers->add
-  - interface: wg_tunnel01
-  - endpoint port: 51820
-  - allowed address: ::/0
-  - psk: <password>
-  - persistent keepalive: 25
-- ip->address->address list->add
-  - address: 192.168.200.1/24
-  - network: 192.168.200.0
-  - interface: wg_tunnel01
-
-## troubleshooting
-
-### Docker networking problem
-
-`docker system prune -a`
-
-### Time problems (NTP service: n/a)
-
-`systemctl status systemd-timesyncd.service`
-
-When not available:
-
-`sudo apt install systemd-timesyncd/stable`
-
-### Syncthing inotify
-
-`echo "fs.inotify.max_user_watches=204800" | sudo tee -a /etc/sysctl.conf`
-
-https://forum.cloudron.io/topic/7163/how-to-increase-inotify-limit-for-syncthing/2

@@ -0,0 +1,10 @@
+---
+- name: Run the common role on k3s
+  hosts: k3s
+  gather_facts: yes
+  vars_files:
+    - secrets.yml
+  roles:
+    - role: common
+      tags:
+        - common

@@ -0,0 +1,16 @@
+---
+- name: Set up Servers
+  hosts: db
+  gather_facts: yes
+  vars_files:
+    - secrets.yml
+  roles:
+    - role: common
+      tags:
+        - common
+    - role: postgres
+      tags:
+        - postgres
+    - role: node_exporter
+      tags:
+        - node_exporter

@@ -4,7 +4,6 @@
 user: tudattr
 timezone: Europe/Berlin
-rclone_config: "/root/.config/rclone/"
 puid: "1000"
 pgid: "1000"
 pk_path: "/mnt/veracrypt1/genesis"

@@ -0,0 +1,19 @@
+db:
+  default_user:
+    password: "{{ vault.k3s.postgres.default_user.password }}"
+  name: "k3s"
+  user: "k3s"
+  password: "{{ vault.k3s.db.password }}"
+
+k3s:
+  server:
+    ips:
+      - 192.168.20.21
+      - 192.168.20.24
+  loadbalancer:
+    ips: 192.168.20.22
+  db:
+    ip: 192.168.20.23
+    default_port: "5432"
+
+k3s_db_connection_string: "postgres://{{ db.user }}:{{ db.password }}@{{ k3s.db.ip }}:{{ k3s.db.default_port }}/{{ db.name }}"
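For reference, the line above renders (with the password pulled from the vault) to roughly:

```yaml
# Illustrative rendered form; <db-password> stands in for the vault value.
k3s_db_connection_string: "postgres://k3s:<db-password>@192.168.20.23:5432/k3s"
```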

@@ -1,53 +0,0 @@
-ansible_user: "{{ user }}"
-ansible_host: 192.168.20.12
-ansible_port: 22
-ansible_ssh_private_key_file: '{{ pk_path }}'
-ansible_become_pass: '{{ vault.aya01.sudo }}'
-
-host:
-  hostname: "aya01"
-  ip: "{{ ansible_host }}"
-  backblaze:
-    account: "{{ vault.aya01.backblaze.account }}"
-    key: "{{ vault.aya01.backblaze.key }}"
-    remote: "remote:aya01-tudattr-dev"
-    password: "{{ vault.aya01.rclone.password }}"
-    password2: "{{ vault.aya01.rclone.password2 }}"
-    paths:
-      - "{{ docker_compose_dir }}"
-      - "{{ docker_dir }}"
-  fstab:
-    - name: "config"
-      path: "/opt"
-      type: "ext4"
-      uuid: "cad60133-dd84-4a2a-8db4-2881c608addf"
-    - name: "media0"
-      path: "/mnt/media0"
-      type: "ext4"
-      uuid: "c4c724ec-4fe3-4665-adf4-acd31d6b7f95"
-    - name: "media1"
-      path: "/mnt/media1"
-      type: "ext4"
-      uuid: "8d66d395-1e35-4f5a-a5a7-d181d6642ebf"
-  mergerfs:
-    - name: "media"
-      path: "/media"
-      branches:
-        - "/mnt/media0"
-        - "/mnt/media1"
-      opts:
-        - "use_ino"
-        - "allow_other"
-        - "cache.files=partial"
-        - "dropcacheonclose=true"
-        - "category.create=mfs"
-      type: "fuse.mergerfs"
-  samba:
-    password: "{{ vault.aya01.samba.password }}"
-  paperless:
-    db:
-      password: "{{ vault.aya01.paperless.db.password }}"
-  gitea:
-    runner:
-      token: "{{ vault.aya01.gitea.runner.token }}"
-      name: "aya01"

@@ -0,0 +1,9 @@
+---
+ansible_user: "{{ user }}"
+ansible_host: 192.168.20.22
+ansible_port: 22
+ansible_ssh_private_key_file: "{{ pk_path }}"
+ansible_become_pass: "{{ vault.k3s.loadbalancer.sudo }}"
+host:
+  hostname: "k3s-loadbalancer"
+  ip: "{{ ansible_host }}"

@@ -0,0 +1,9 @@
+---
+ansible_user: "{{ user }}"
+ansible_host: 192.168.20.23
+ansible_port: 22
+ansible_ssh_private_key_file: "{{ pk_path }}"
+ansible_become_pass: "{{ vault.k3s.postgres.sudo }}"
+host:
+  hostname: "k3s-postgres"
+  ip: "{{ ansible_host }}"

@@ -1,9 +1,9 @@
 ---
 ansible_user: "{{ user }}"
 ansible_host: 192.168.20.21
 ansible_port: 22
 ansible_ssh_private_key_file: "{{ pk_path }}"
-ansible_become_pass: "{{ vault.k3s-server.sudo }}"
+ansible_become_pass: "{{ vault.k3s.server00.sudo }}"
 
 host:
-  hostname: "k3s.server"
+  hostname: "k3s-server00"
   ip: "{{ ansible_host }}"

@@ -1,9 +1,10 @@
 ---
 ansible_user: "{{ user }}"
-ansible_host: 192.168.20.12
+ansible_host: 192.168.20.24
 ansible_port: 22
 ansible_ssh_private_key_file: "{{ pk_path }}"
-ansible_become_pass: "{{ vault.aya01.sudo }}"
+ansible_become_pass: "{{ vault.k3s.server01.sudo }}"
 
 host:
-  hostname: "k3s.server"
+  hostname: "k3s-server01"
   ip: "{{ ansible_host }}"

@@ -1,14 +1,13 @@
 ---
 - name: Set up Servers
-  hosts: aya01
+  hosts: k3s_server
   gather_facts: yes
+  vars_files:
+    - secrets.yml
   roles:
     - role: common
       tags:
         - common
-    - role: k3s-server
-      tags:
-        - k3s-server
     - role: node_exporter
       tags:
         - node_exporter

@@ -0,0 +1,16 @@
+---
+- name: Set up Servers
+  hosts: loadbalancer
+  gather_facts: yes
+  vars_files:
+    - secrets.yml
+  roles:
+    - role: common
+      tags:
+        - common
+    - role: loadbalancer
+      tags:
+        - loadbalancer
+    - role: node_exporter
+      tags:
+        - node_exporter

production
@@ -2,10 +2,26 @@
 mii
 
 [k3s]
-k3s.server
+k3s-server00
+k3s-server01
+k3s-postgres
+k3s-loadbalancer
+
+[k3s_server]
+k3s-server00
+k3s-server01
 
 [vm]
-k3s.server
+k3s-server00
+k3s-server01
+k3s-postgres
+k3s-loadbalancer
 
-[controller]
-genesis
+[db]
+k3s-postgres
+
+[loadbalancer]
+k3s-loadbalancer
+
+[vm:vars]
+ansible_ssh_common_args='-o ProxyCommand="ssh -p 22 -W %h:%p -q aya01"'
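The `[vm:vars]` entry above tunnels every connection to the VMs through aya01 as a jump host. A rough `~/.ssh/config` equivalent, assuming the `aya01` host entry described in the old README (a sketch, not part of the commit):

```config
Host k3s-server00 k3s-server01 k3s-postgres k3s-loadbalancer
    ProxyJump aya01
```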

@@ -51,6 +51,3 @@ if ! shopt -oq posix; then
     . /etc/bash_completion
   fi
 fi
-
-
-. "$HOME/.cargo/env"

@@ -0,0 +1,6 @@
+---
+- name: Restart sshd
+  service:
+    name: sshd
+    state: restarted
+  become: yes

@@ -1,10 +1,9 @@
 ---
 - name: Copy .bashrc
   template:
-    src: templates/common/bash/bashrc.j2
+    src: files/bash/bashrc
     dest: "/home/{{ user }}/.bashrc"
     owner: "{{ user }}"
     group: "{{ user }}"
     mode: 0644
   become: yes
-  register: sshd
@@ -0,0 +1,14 @@
+---
+- name: Set a hostname
+  ansible.builtin.hostname:
+    name: "{{ host.hostname }}"
+  become: true
+
+- name: Update /etc/hosts to reflect the new hostname
+  lineinfile:
+    path: /etc/hosts
+    regexp: '^127\.0\.1\.1'
+    line: "127.0.1.1 {{ host.hostname }}"
+    state: present
+    backup: yes
+  become: true

@@ -1,5 +1,6 @@
 ---
 - include_tasks: time.yml
-- include_tasks: essential.yml
+- include_tasks: hostname.yml
+- include_tasks: packages.yml
 - include_tasks: bash.yml
 - include_tasks: sshd.yml
@@ -1,11 +1,12 @@
 ---
 - name: Copy sshd_config
   template:
-    src: templates/common/ssh/sshd_config
+    src: templates/ssh/sshd_config
     dest: /etc/ssh/sshd_config
     mode: 0644
+  notify:
+    - Restart sshd
   become: yes
-  register: sshd
 
 - name: Copy pubkey
   copy:
@@ -14,10 +15,3 @@
     owner: "{{ user }}"
     group: "{{ user }}"
     mode: "644"
-
-- name: Restart sshd
-  service:
-    name: "sshd"
-    state: "restarted"
-  become: yes
-  when: sshd.changed

@@ -0,0 +1,6 @@
+---
+- name: Restart k3s
+  service:
+    name: k3s
+    state: restarted
+  become: yes

@@ -0,0 +1,6 @@
+---
+- name: Install k3s
+  shell: "curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san {{ k3s.loadbalancer.ip }}"
+  environment:
+    K3S_DATASTORE_ENDPOINT: "{{ k3s_db_connection_string }}"
+  become: true
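The installer task above (the pipeline requires the `shell` module rather than `command`) reruns the get.k3s.io script on every play. A possible idempotency guard, an assumption rather than part of the commit; `/usr/local/bin/k3s` is where the script installs the binary by default:

```yaml
- name: Install k3s (skipped once the binary exists)
  shell: "curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san {{ k3s.loadbalancer.ip }}"
  args:
    creates: /usr/local/bin/k3s # hypothetical guard; remove to force reinstall
  environment:
    K3S_DATASTORE_ENDPOINT: "{{ k3s_db_connection_string }}"
  become: true
```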

@@ -0,0 +1,2 @@
+---
+- include_tasks: installation.yml

@@ -0,0 +1,6 @@
+---
+- name: Restart nginx
+  systemd:
+    name: nginx
+    state: restarted
+  become: true
@@ -0,0 +1,20 @@
+---
+- name: Template the nginx config file with dynamic upstreams
+  template:
+    src: templates/nginx.conf.j2
+    dest: "{{ nginx_config_path }}"
+    owner: root
+    group: root
+    mode: "0644"
+  become: true
+  notify:
+    - Restart nginx
+  vars:
+    k3s_server_ips: "{{ k3s.server.ips }}"
+
+- name: Enable nginx
+  systemd:
+    name: nginx
+    daemon_reload: true
+    enabled: true
+  become: true
@@ -0,0 +1,12 @@
+---
+- name: Update apt cache
+  apt:
+    update_cache: yes
+  become: true
+
+- name: Install Nginx
+  apt:
+    name:
+      - nginx-full
+    state: present
+  become: true

@@ -0,0 +1,3 @@
+---
+- include_tasks: installation.yml
+- include_tasks: configuration.yml

@@ -0,0 +1,16 @@
+include /etc/nginx/modules-enabled/*.conf;
+
+events {}
+
+stream {
+  upstream k3s_servers {
+    {% for ip in k3s_server_ips %}
+    server {{ ip }}:6443;
+    {% endfor %}
+  }
+
+  server {
+    listen 6443;
+    proxy_pass k3s_servers;
+  }
+}
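Rendered with the `k3s.server.ips` values from group_vars/k3s/vars.yml, the template above should produce roughly this nginx.conf (an illustrative rendering, not a file in the commit):

```
include /etc/nginx/modules-enabled/*.conf;

events {}

stream {
  upstream k3s_servers {
    server 192.168.20.21:6443;
    server 192.168.20.24:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
```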

@@ -0,0 +1 @@
+nginx_config_path: "/etc/nginx/nginx.conf"

@@ -0,0 +1,6 @@
+---
+- name: Restart node_exporter
+  service:
+    name: node_exporter
+    state: restarted
+  become: true
@@ -2,17 +2,17 @@
 - name: Determine latest GitHub release (local)
   delegate_to: localhost
   uri:
-    url: "https://api.github.com/repos/prometheus/node_exporter/releases/{{ node_exporter.version }}"
+    url: "https://api.github.com/repos/prometheus/node_exporter/releases/{{ version }}"
     body_format: json
   register: _github_release
   until: _github_release.status == 200
   retries: 3
 
-- name: Set node_exporter_version
+- name: Set version
   set_fact:
-    node_exporter_version: "{{ _github_release.json.tag_name | regex_replace('^v?([0-9\\.]+)$', '\\1') }}"
+    version: "{{ _github_release.json.tag_name | regex_replace('^v?([0-9\\.]+)$', '\\1') }}"
 
-- name: Set node_exporter.download_url
+- name: Set download_url
   set_fact:
-    node_exporter_download_url: "https://github.com/prometheus/node_exporter/releases/download/v{{ node_exporter_version }}/node_exporter-{{ node_exporter_version }}.linux-{{ go_arch }}.tar.gz"
+    download_url: "https://github.com/prometheus/node_exporter/releases/download/v{{ version }}/node_exporter-{{ version }}.linux-{{ go_arch }}.tar.gz"
@@ -1,15 +1,15 @@
 ---
-- name: Download/Extract "{{ node_exporter_download_url }}"
+- name: Download/Extract "{{ download_url }}"
   unarchive:
-    src: "{{ node_exporter_download_url }}"
+    src: "{{ download_url }}"
    dest: /tmp/
     remote_src: true
     mode: 755
 
 - name: Move node_exporter into path
   copy:
-    src: "/tmp/node_exporter-{{ node_exporter_version }}.linux-{{ go_arch }}/node_exporter"
+    src: "/tmp/node_exporter-{{ version }}.linux-{{ go_arch }}/node_exporter"
-    dest: "{{ node_exporter.bin_path }}"
+    dest: "{{ bin_path }}"
     mode: 755
     remote_src: true
   become: true
@@ -26,6 +26,4 @@
     src: node_exporter.service.j2
     dest: /etc/systemd/system/node_exporter.service
     mode: 0644
-  register: node_exporter_service
   become: true
@@ -1,9 +1,10 @@
 ---
 - name: Ensure node_exporter is running and enabled at boot.
   service:
-    daemon_reload: true
     name: node_exporter
-    state: restarted
+    state: started
+    daemon_reload: true
     enabled: true
-  when: node_exporter_service is changed
+  notify:
+    - Restart node_exporter
   become: true
@@ -4,7 +4,7 @@ Description=NodeExporter
 [Service]
 TimeoutStartSec=0
 User=node_exporter
-ExecStart={{ node_exporter.bin_path }} --web.listen-address={{ host.ip }}:{{ node_exporter.port }} {{ node_exporter.options }}
+ExecStart={{ bin_path }} --web.listen-address={{ host.ip }}:{{ bind_port }} {{ options }}
 
 [Install]
 WantedBy=multi-user.target
@@ -6,3 +6,9 @@ go_arch_map:
   armv6l: "armv6"
 
 go_arch: "{{ go_arch_map[ansible_architecture] | default(ansible_architecture) }}"
+
+bind_port: 9100
+version: "latest"
+serve: "localhost"
+options: ""
+bin_path: "/usr/local/bin/node_exporter"
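These are role defaults, so a host or group can override them; an illustrative sketch with hypothetical values (not part of the commit):

```yaml
# e.g. in host_vars or group_vars; --collector.systemd is a real node_exporter flag.
bind_port: 9101
options: "--collector.systemd"
```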

@@ -0,0 +1,6 @@
+---
+- name: Restart postgres
+  systemd:
+    name: postgresql
+    state: restarted
+  become: true

@@ -0,0 +1,10 @@
+---
+- name: Update apt cache
+  apt:
+    update_cache: yes
+  become: true
+
+- name: Install ansible dependencies
+  apt:
+    name: "{{ ansible_dependencies }}"
+  become: true

@@ -0,0 +1,49 @@
+---
+- name: "Create postgres user: {{ db.user }}"
+  community.postgresql.postgresql_user:
+    state: present
+    name: "{{ db.user }}"
+    password: "{{ db.password }}"
+  become: true
+  become_user: "{{ db.default_user.user }}"
+  vars:
+    ansible_remote_temp: "/tmp/"
+
+- name: "Create database: {{ db.name }}"
+  community.postgresql.postgresql_db:
+    state: present
+    name: "{{ db.name }}"
+    encoding: UTF8
+    lc_collate: "en_US.UTF-8"
+    lc_ctype: "en_US.UTF-8"
+  become: yes
+  become_user: postgres
+  vars:
+    ansible_remote_temp: "/tmp/"
+
+- name: "Grant {{ db.user }} user access to db {{ db.name }}"
+  postgresql_privs:
+    type: database
+    database: "{{ db.name }}"
+    roles: "{{ db.user }}"
+    grant_option: no
+    privs: all
+  become: yes
+  become_user: postgres
+  vars:
+    ansible_remote_temp: "/tmp/"
+
+- name: "Allow md5 connection for the {{ db.user }} user"
+  postgresql_pg_hba:
+    dest: "~/15/main/pg_hba.conf"
+    contype: host
+    databases: all
+    method: md5
+    users: "{{ db.user }}"
+    create: true
+  become: yes
+  become_user: postgres
+  notify:
+    - Restart postgres
+  vars:
+    ansible_remote_temp: "/tmp/"
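A way to verify the resulting role and database before pointing k3s at them; a sketch under the same vars, not part of the commit:

```yaml
- name: Check that the new database accepts password logins over the network
  community.postgresql.postgresql_ping:
    login_host: "{{ k3s.db.ip }}"
    login_user: "{{ db.user }}"
    login_password: "{{ db.password }}"
    db: "{{ db.name }}"
```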

@@ -0,0 +1,14 @@
+---
+- name: Install postgres
+  apt:
+    name: "{{ postgres_packages }}"
+    state: present
+  become: true
+  register: postgres_install
+
+- name: Start and enable the service
+  systemd:
+    name: postgresql
+    state: started
+    enabled: true
+  become: true

@@ -0,0 +1,4 @@
+---
+- include_tasks: ansible_deps.yml
+- include_tasks: installation.yml
+- include_tasks: configuration.yml

@@ -0,0 +1,21 @@
+############################################
+############### CHANGE THESE ###############
+############################################
+db:
+  default_user:
+    user: "postgres"
+  name: "database"
+  user: "user"
+  password: "password"
+
+############################################
+# Don't change these (probably)
+ansible_dependencies:
+  - python3-pip
+  - python3-psycopg
+  - python3-pexpect
+  - acl
+
+postgres_packages:
+  - postgresql
+  - postgresql-client
@@ -1,16 +0,0 @@
----
-- name: Copy "{{ wg_config }}"
-  template:
-    src: "{{ wg_config }}"
-    dest: "{{ wg_remote_config }}"
-    owner: "root"
-    group: "root"
-    mode: "0600"
-  become: true
-
-- name: Start wireguard
-  service:
-    name: "{{ wg_service }}"
-    state: started
-    enabled: yes
-  become: true
@@ -1,20 +0,0 @@
----
-- name: Update and upgrade packages
-  apt:
-    update_cache: true
-    upgrade: true
-    autoremove: true
-  become: true
-
-- name: Install WireGuard dependencies
-  apt:
-    name: "{{ wg_deps }}"
-    state: present
-  become: true
-
-- name: Create resolvconf symlink, Debian bug #939904
-  file:
-    src: /usr/bin/resolvectl
-    dest: /usr/local/bin/resolvconf
-    state: link
-  become: true
@@ -1,2 +0,0 @@
-- include_tasks: install.yml
-- include_tasks: config.yml
@@ -1,9 +0,0 @@
-[Interface]
-PrivateKey = {{ vault_wg_pk }}
-Address = {{ wg_ip }}
-DNS = {{ wg_dns }}
-
-[Peer]
-PublicKey = {{ wg_pubkey }}
-Endpoint = {{ wg_endpoint }}
-AllowedIPs = {{ wg_allowed_ips }}
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+ansible-vault view secrets.yml | sed "s/: \w\+$/: ......../g" >>secrets.yml.skeleton
@@ -0,0 +1,4 @@
+#!/bin/bash
+
+ssh $1 'mkdir -p .ssh && echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqc9fnzfCz8fQDFzla+D8PBhvaMmFu2aF+TYkkZRxl9 tuan@genesis-2022-01-20" >> .ssh/authorized_keys'
+ssh $1 'su root -c "apt update && apt install sudo && /usr/sbin/usermod -a -G sudo tudattr"'
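The seed script above takes the ssh host as its first argument; a hypothetical invocation for a freshly installed VM (the host name is an example):

```sh
./scripts/debian_seed.sh k3s-server00
```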

@@ -0,0 +1,4 @@
+vault:
+  k3s:
+    server:
+      sudo: ........
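The skeleton above shows only `vault.k3s.server.sudo`; judging from the `vault.*` variables referenced elsewhere in this commit, the full secrets.yml presumably contains at least:

```yaml
vault:
  k3s:
    server00:
      sudo: ........
    server01:
      sudo: ........
    loadbalancer:
      sudo: ........
    postgres:
      sudo: ........
      default_user:
        password: ........
    db:
      password: ........
```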