TuDatTr IaC
User
A user with sudo privileges is expected to exist on the target; in my case the user's name is "tudattr".
You can add such a user with useradd -m -g sudo -s /bin/bash tudattr
Don't forget to set a password for the new user with passwd tudattr
sudo
Install sudo on the target machine; on Debian this is done with:
su root
apt install sudo
usermod -a -G sudo tudattr
Backups
Backups for aya01 and the Raspberry Pi go to a Backblaze B2 bucket and are encrypted client-side by rclone. First of all we need to create the bucket and provide Ansible with the needed information.
We start by creating an API key for Backblaze, which consists of an ID and a key. rclone is used to sync to Backblaze and encrypts the data before uploading it. For this we need two rclone remotes on each device that should be backed up:
- a b2 remote
- a crypt remote layered on top of it
We create these by running rclone config
and setting up one [remote] b2 config and one [secret] crypt config. The crypt config needs two passwords, which we store in our secrets file.
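As a rough sketch, the resulting rclone config might look like this (remote names follow the [remote]/[secret] naming above; the bucket name, key ID, and the obscured passwords are placeholders that rclone config fills in):

```ini
# ~/.config/rclone/rclone.conf -- sketch, actual values come from `rclone config`
[remote]
type = b2
account = <backblaze_key_id>
key = <backblaze_application_key>

[secret]
type = crypt
# the crypt remote wraps a path inside the b2 remote
remote = remote:<bucket-name>/<host>
password = <obscured_password_1>
password2 = <obscured_password_2>
```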
Vault
- Create vault with:
ansible-vault create secrets.yml
- Add or edit entries in the vault with:
ansible-vault edit secrets.yml
- Add the following entries: TODO
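The concrete variable list is still a TODO; as a purely hypothetical example, the two rclone crypt passwords from the Backups section could be stored like this (the variable names are assumptions, not the ones the roles actually use):

```yaml
# secrets.yml, edited via `ansible-vault edit secrets.yml`
# hypothetical names -- adjust to whatever the roles expect
rclone_crypt_password: "<first crypt password>"
rclone_crypt_password2: "<second crypt password>"
```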
Docker
To add a new Docker container to the docker role, add the following, replacing service with the name of your service:
- Add the relevant vars to group_vars/all/vars.yaml:
    service:
      host: "service"
      ports:
        http: "19999"
      volumes:
        config: "{{ docker_dir }}/service/" # config folder or your dir
        data: "{{ docker_data_dir }}/service/" # data folder or your dir (only works on aya01)
- Create the necessary directories for the service in the docker role, in roles/docker/tasks/service.yaml:
    - name: Create service dirs
      file:
        path: "{{ item }}"
        owner: 1000
        group: 1000
        mode: '775'
        state: directory
      loop:
        - "{{ service.volumes.config }}"
        - "{{ service.volumes.data }}"
    # optional:
    # - name: Place service config
    #   template:
    #     owner: 1000
    #     mode: '660'
    #     src: "templates/hostname/service/service.yml"
    #     dest: "{{ prm_config }}/service.yml"
- Include the new tasks in roles/docker/tasks/hostname_compose.yaml:
    - include_tasks: service.yaml
      tags:
        - service
- Add the new service to the compose template roles/docker/templates/hostname/compose.yaml:
    service:
      image: service/service
      container_name: service
      hostname: service
      networks:
        - net
      ports:
        - "{{service_port}}:19999"
      restart: unless-stopped
      volumes:
        - "{{service_config}}:/etc/service"
        - "{{service_lib}}:/var/lib/service"
        - "{{service_cache}}:/var/cache/service"
Server
- Install Debian (debian-11.5.0-amd64-netinst.iso) on remote system
- Create user (tudattr)
- Get IP of remote system (192.168.20.11)
- Create ssh-config entry
Host aya01
HostName 192.168.20.11
Port 22
User tudattr
IdentityFile /mnt/veracrypt1/genesis
- copy public key to remote system
ssh-copy-id -i /mnt/veracrypt1/genesis.pub aya01
- Add this host to the Ansible inventory (a sketch follows this list)
- Install sudo on remote
- add the user to the sudo group: switch to root with su --login (without --login the PATH will not be loaded correctly), then run usermod -a -G sudo tudattr
- set the time correctly if you get the following error:
Release file for http://security.debian.org/debian-security/dists/bullseye-security/InRelease is not valid yet (invalid for another 12h 46min 9s). Updates for this repository will not be applied.
Fix it by running the following on the remote system (example):
sudo systemctl stop ntp.service
sudo ntpd -gq
sudo systemctl start ntp.service
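For the inventory step above, an INI-style production file could look roughly like this (group names and layout are assumptions; the host aliases resolve through the ssh-config entries):

```ini
# production -- sketch of an INI-style inventory
[server]
aya01

[raspberry]
pi
```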
zoneminder
- Enable authentication in (Options->System)
- Create new Camera:
- General>Name: BirdCam
- General>Source Type: Ffmpeg
- General>Function: Modect
- Source>Source Path:
rtsp://user:pw@ip:554/cam/mpeg4
- Change default admin password
- Create users
RaspberryPi
- Install raspbian lite (2022-09-22-raspios-bullseye-arm64-lite.img) on pi
- Get IP of remote system (192.168.20.11)
- Create ssh-config entry
Host pi
HostName 192.168.20.11
Port 22
User tudattr
IdentityFile /mnt/veracrypt1/genesis
- enable ssh on pi (one way is sketched after this list)
- copy public key to pi
- change the password of the user on the pi
- execute
ansible-playbook -i production --ask-vault-pass --extra-vars '@secrets.yml' pi.yml
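For the "enable ssh" step, a common headless approach is to create an empty file named ssh on the boot partition before first boot, then push the key once the Pi is reachable (the boot-partition mount point is a placeholder):

```sh
# enable the SSH server on first boot
touch /path/to/boot-partition/ssh
# once the Pi is up, copy the public key over
ssh-copy-id -i /mnt/veracrypt1/genesis.pub pi
```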
Mikrotik
- Create an RSA key on your device and name it mikrotik_rsa (see the sketch after this list)
- On mikrotik run:
/user/ssh-keys/import public-key-file=mikrotik_rsa.pub user=tudattr
- Create ssh-config entry:
Host mikrotik
HostName 192.168.70.1
Port 2200
User tudattr
IdentityFile /mnt/veracrypt1/mikrotik_rsa
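A sketch of the key-creation and upload steps (the scp target path on the router is an assumption; until the key is imported, the copy falls back to password authentication):

```sh
# generate the key pair on your device
ssh-keygen -t rsa -b 4096 -f /mnt/veracrypt1/mikrotik_rsa
# upload the public key to the router, then import it there as shown above
scp -P 2200 /mnt/veracrypt1/mikrotik_rsa.pub tudattr@192.168.70.1:mikrotik_rsa.pub
```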
wireguard
thanks to the MikroTik quick guide
# add wireguard interface
interface/wireguard/add listen-port=51820 name=wg1
# get public key
interface/wireguard/print
$ > public-key: <mikrotik_public_key>
# add network/ip for wireguard interface
ip/address/add address=192.168.200.1/24 network=192.168.200.0 interface=wg1
# add firewall rule for wireguard (maybe specify to be from pppoe-wan)
/ip/firewall/filter/add chain=input protocol=udp dst-port=51820 action=accept
# routing for wg1 clients and rest of the network
> <insert forward for routing between wg1 and other networks>
# enable internet for wg1 clients (may have to add them to the "enable internet" list)
/ip/firewall/nat/add chain=srcnat src-address=192.168.200.0/24 out-interface=pppoe-wan action=masquerade
# add peer
/interface/wireguard/peers/add interface=wg1 allowed-address=<untaken_ipv4>/24 public-key="<client_public_key>"
Key generation on Arch Linux: wg genkey | (umask 0077 && tee wireguard.key) | wg pubkey > peer_A.pub
WireGuard config on Arch Linux at /etc/wireguard/wg0.conf:
[Interface]
PrivateKey = <client_private_key>
Address = 192.168.200.250/24
[Peer]
PublicKey = <mikrotik public key>
Endpoint = tudattr.dev:51820
AllowedIPs = 0.0.0.0/0
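To bring the tunnel up on the Arch Linux client and check that the handshake works (wg0 matches the config file name above):

```sh
# start the tunnel defined in /etc/wireguard/wg0.conf
sudo wg-quick up wg0
# show handshake and transfer counters
sudo wg show
# optionally start it at boot
sudo systemctl enable wg-quick@wg0
```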
used IPv4 addresses:
- tudattr: 192.168.200.250
- livei: 192.168.200.240
notes
- wireguard->add name: wg_tunnel01 listen port: 51820 [save]
- wireguard->peers->add interface: wg_tunnel01 endpoint port: 51820 allowed address: ::/0 psk: persistent keepalive: 25
- ip->address->address list->add address:192.168.200.1/24 network: 192.168.200.0 interface: wg_tunnel01
troubleshooting
Docker networking problem
docker system prune -a
Time problems (NTP service: n/a)
Check systemctl status systemd-timesyncd.service; if the unit is not available, install it with sudo apt install systemd-timesyncd/stable
Syncthing inotify
echo "fs.inotify.max_user_watches=204800" | sudo tee -a /etc/sysctl.conf https://forum.cloudron.io/topic/7163/how-to-increase-inotify-limit-for-syncthing/2