
TuDatTr IaC

User

A user with sudo privileges is expected on the target; in my case the user is named "tudattr". You can add such a user with:

useradd -m -g sudo -s /bin/bash tudattr

Don't forget to set a password for the new user with passwd tudattr.

sudo

Install sudo on the target machine; on Debian this is:

su root
apt install sudo
usermod -a -G sudo tudattr

Backups

Backups for aya01 and the Raspberry Pi live in a Backblaze B2 bucket and are encrypted client-side by rclone. But first of all we need to create the buckets and provide Ansible with the needed information.

First we need to create an API key for Backblaze; it consists of an ID and a key. We use rclone to sync to Backblaze, and we can encrypt the data with rclone before sending it. To do this we need two rclone configs on each device that should be backed up:

  • b2
  • crypt

We create these by running rclone config and creating one [remote] b2 config and one [secret] crypt config. The crypt config should have two passwords that we store in our secrets file. A non-interactive sketch is shown below.
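The same setup can be scripted instead of going through the interactive rclone config dialog. A minimal sketch; the bucket name, path, and passwords are placeholders, and rclone expects the crypt passwords in obscured form (hence rclone obscure):

# [remote] b2 config: account = key ID, key = application key
rclone config create b2 b2 account=<key_id> key=<application_key>
# [secret] crypt config wrapping a path inside the b2 bucket
rclone config create secret crypt \
    remote=b2:<bucket>/backup \
    password=$(rclone obscure '<password1>') \
    password2=$(rclone obscure '<password2>')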


Vault

  • Create the vault with: ansible-vault create secrets.yml
  • Create an entry in the vault with: ansible-vault edit secrets.yml
  • Add the following entries: TODO
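Entries in secrets.yml are plain YAML and, once the vault is passed to a playbook, are referenced like any other Ansible variable. A minimal sketch; the variable name is hypothetical and not taken from this repo:

# in secrets.yml (hypothetical entry)
b2_account_key: "changeme"

# referenced in a task or template like any other variable:
#   "{{ b2_account_key }}"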

Docker

To add a new docker container to the docker role, add the following, replacing service with the name of your service:

  • Add relevant vars to group_vars/all/vars.yaml:
service:
  host: "service"
  ports:
    http: "19999"
  volumes:
    config: "{{ docker_dir }}/service/" # config folder or your dir
    data: "{{ docker_data_dir }}/service/" # data folder or your dir (only works on aya01)
  • Create the necessary directories for service in the docker role at roles/docker/tasks/service.yaml:
- name: Create service dirs
  file:
    path: "{{ item }}"
    owner: 1000
    group: 1000
    mode: '775'
    state: directory
  loop:
    - "{{ service.volumes.config }}"
    - "{{ service.volumes.data }}"

# optional:
# - name: Place service config
#   template:
#     owner: 1000
#     mode: '660'
#     src: "templates/hostname/service/service.yml"
#     dest: "{{ service.volumes.config }}/service.yml"
  • Include the new tasks in roles/docker/tasks/hostname_compose.yaml:
- include_tasks: service.yaml
  tags:
    - service
  • Add the new service to the compose template at roles/docker/templates/hostname/compose.yaml, referencing the vars defined above:
  service:
    image: service/service
    container_name: service
    hostname: service
    networks:
      - net
    ports:
      - "{{ service.ports.http }}:19999"
    restart: unless-stopped
    volumes:
      - "{{ service.volumes.config }}:/etc/service"
      - "{{ service.volumes.data }}:/var/lib/service"

Server

  • Install Debian (debian-11.5.0-amd64-netinst.iso) on remote system
  • Create user (tudattr)
  • Get IP of remote system (192.168.20.11)
  • Create ssh-config entry
    Host aya01
      HostName 192.168.20.11
      Port 22
      User tudattr
      IdentityFile /mnt/veracrypt1/genesis
    
    • Copy the public key to the remote system: ssh-copy-id -i /mnt/veracrypt1/genesis.pub aya01
  • Add this host to the Ansible inventory (see the sketch after this list)
  • Install sudo on the remote
  • Add the user to the sudo group: become root with su --login (without --login the PATH will not be loaded correctly), then run usermod -a -G sudo tudattr
  • Set the time correctly if you get the following error:
Release file for http://security.debian.org/debian-security/dists/bullseye-security/InRelease is not valid yet (invalid for another 12h 46min 9s). Updates for this repository will not be applied.

This can be done on the remote system like so (example):

sudo systemctl stop ntp.service
sudo ntpd -gq
sudo systemctl start ntp.service
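The inventory entry mentioned above might look like the following. A minimal sketch; the group name is an assumption and not taken from the actual production file:

# in the production inventory (hypothetical group name)
[server]
aya01 ansible_user=tudattr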

zoneminder

  • Enable authentication in (Options->System)
  • Create a new camera:
    • General>Name: BirdCam
    • General>Source Type: Ffmpeg
    • General>Function: Modect
    • Source>Source Path: rtsp://user:pw@ip:554/cam/mpeg4
  • Change default admin password
  • Create users

RaspberryPi

  • Install raspbian lite (2022-09-22-raspios-bullseye-arm64-lite.img) on pi
  • Get IP of remote system (192.168.20.11)
  • Create ssh-config entry
Host pi
     HostName 192.168.20.11
     Port 22
     User tudattr
     IdentityFile /mnt/veracrypt1/genesis
  • Enable ssh on the Pi (see the sketch after this list)
  • Copy the public key to the Pi
  • Change the password of the user on the Pi
  • Execute ansible-playbook -i production --ask-vault-pass --extra-vars '@secrets.yml' pi.yml
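On Raspberry Pi OS, ssh can be enabled headlessly by placing an empty file named ssh in the boot partition before first boot. A sketch, assuming the SD card's boot partition is mounted at /mnt/boot:

# an empty file named "ssh" in the boot partition enables the SSH server
touch /mnt/boot/ssh
# after first boot, copy the key over (same pattern as for aya01)
ssh-copy-id -i /mnt/veracrypt1/genesis.pub pi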

Mikrotik

  • Create an RSA key on your device and name it mikrotik_rsa (see the sketch after this list)
  • On mikrotik run: /user/ssh-keys/import public-key-file=mikrotik_rsa.pub user=tudattr
  • Create ssh-config entry:
Host mikrotik
     HostName 192.168.70.1
     Port 2200
     User tudattr
     IdentityFile /mnt/veracrypt1/mikrotik_rsa
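Generating the key pair might look like this (a sketch; the key size is an assumption, the path matches the ssh-config entry above):

ssh-keygen -t rsa -b 4096 -f /mnt/veracrypt1/mikrotik_rsa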

wireguard

Thanks to MikroTik's quick-start code:

# add wireguard interface
interface/wireguard/add listen-port=51820 name=wg1
# get public key
interface/wireguard/print
$ > public-key: <mikrotik_public_key>
# add network/ip for wireguard interface
ip/address/add address=192.168.200.1/24 network=192.168.200.0 interface=wg1
# add firewall rule for wireguard (maybe specify to be from pppoe-wan)
/ip/firewall/filter/add chain=input protocol=udp dst-port=51820 action=accept 
# routing for wg1 clients and rest of the network
> <insert forward for routing between wg1 and other networks>
# enable internet for wg1 clients (may have to add to an enable-internet list)
/ip/firewall/nat/add chain=srcnat src-address=192.168.200.0/24 out-interface=pppoe-wan action=masquerade

add peer

/interface/wireguard/peers/add interface=wg1 allowed-address=<untaken_ipv4>/24 public-key="<client_public_key>"

Key generation on Arch Linux:

wg genkey | (umask 0077 && tee wireguard.key) | wg pubkey > peer_A.pub

WireGuard config on Arch Linux at /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = <client_private_key>
Address = 192.168.200.250/24

[Peer]
PublicKey = <mikrotik public key>
Endpoint = tudattr.dev:51820
AllowedIPs = 0.0.0.0/0
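The tunnel can then be brought up with wg-quick from wireguard-tools, assuming the config above is saved as wg0.conf:

sudo wg-quick up wg0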

Used IPv4 addresses:

  • tudattr: 192.168.200.250
  • livei: 192.168.200.240

notes

  • wireguard->add name: wg_tunnel01 listen port: 51820 [save]
  • wireguard->peers->add interface: wg_tunnel01 endpoint port: 51820 allowed address: ::/0 psk: persistent keepalive: 25
  • ip->address->address list->add address:192.168.200.1/24 network: 192.168.200.0 interface: wg_tunnel01

troubleshooting

Docker networking problem

docker system prune -a

Time problems (NTP service: n/a)

Check systemctl status systemd-timesyncd.service; if the service is not available, install it with sudo apt install systemd-timesyncd/stable.

Syncthing inotify

echo "fs.inotify.max_user_watches=204800" | sudo tee -a /etc/sysctl.conf https://forum.cloudron.io/topic/7163/how-to-increase-inotify-limit-for-syncthing/2