Compare commits

...

23 Commits

Author SHA1 Message Date
Tuan-Dat Tran 711dc58f2e fix(docker/jellyfin): Moved jellyfin config to local machine due to error with sqlite dbs used for config
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-11-15 14:09:31 +01:00
Tuan-Dat Tran 5aaf3eef53 chore(inventory): add host-specific configuration files and update production inventory for proxmox hosts
- Add individual `host_vars` YAML files for new proxmox hosts (`aya01`, `inko`, `lulu`):
  - Set SSH and Ansible connection variables, including `ansible_user`, `ansible_host`, `ansible_port`, and `ansible_ssh_private_key_file`
  - Configure `ansible_become_pass` with respective vault entries for sudo access
  - Define host-specific metadata, including hostname and IP address

- Update `production` inventory:
  - Add new `[proxmox]` group and include `aya01`, `inko`, and `lulu` for proxmox-related automation

These additions streamline Ansible's management of proxmox hosts, centralizing their configuration and enabling easier host-specific variable access for deployment and management tasks.

Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-11-13 23:55:22 +01:00
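The host_vars layout this commit describes looks roughly like the following sketch; the variable values here are placeholders and assumptions (the IP matches the network diagram below), not values taken from the repository:

```yaml
# host_vars/aya01.yml (hypothetical sketch)
ansible_user: tudattr
ansible_host: 192.168.20.12
ansible_port: 22
ansible_ssh_private_key_file: ~/.ssh/id_ed25519
ansible_become_pass: "{{ vault_aya01_become_pass }}"   # hypothetical vault entry name
hostname: aya01
ip: 192.168.20.12
```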
Tuan-Dat Tran 33253e934d feat(docker): add Calibre Web service to Docker Compose configuration
- Add Calibre Web container configuration to `docker-compose.yaml`
  - Use `lscr.io/linuxserver/calibre-web:latest` image
  - Configure environment variables (PUID, PGID, TZ, DOCKER_MODS)
  - Set up volumes for persistent storage of Calibre configuration and books
  - Expose port 8084 to access the Calibre Web UI
  - Implement automatic restart policy (`unless-stopped`)

This commit introduces the Calibre Web service to the Docker Compose setup, enabling users to run a Calibre library management and e-book reader web service in a Docker container.

Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-11-11 01:04:30 +01:00
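The service described by this commit would look roughly like the sketch below in `docker-compose.yaml`; the image, environment variable names, port 8084, and restart policy come from the commit message, while the volume host paths, variable values, and the container-side port 8083 (linuxserver.io's default for calibre-web) are assumptions:

```yaml
calibre-web:
  image: lscr.io/linuxserver/calibre-web:latest
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Europe/Berlin
    - DOCKER_MODS=linuxserver/mods:universal-calibre  # value is an assumption
  volumes:
    - /opt/docker/calibre-web/config:/config   # persistent Calibre configuration
    - /media/books:/books                      # book library
  ports:
    - "8084:8083"
  restart: unless-stopped
```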
Tuan-Dat Tran 4db26b56da feat(ansible): add Docker host configuration with NFS mounts and utility packages
- Introduce Docker host configuration playbooks in `docker_host` role
  - Install Docker and Docker Compose via apt repository
  - Configure Docker user, group, and required directories (`/opt/docker`, `/media`)
  - Add NFS mounts for Docker data, series, movies, and songs directories
- Add extra utility packages (`bat`, `ripgrep`, `fd-find`, `screen`, `eza`, `neovim`)
- Set up and manage `bash_aliases` for user-friendly command replacements (`batcat`, `nvim`, `eza`)
- Enhance `/group_vars` and `/host_vars` for Docker-related settings and secure access
- Add `docker-host00` and `docker-host01` entries to production and staging inventories

Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-11-10 21:37:22 +01:00
Tuan-Dat Tran ce0411cdb0 fixed taint
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-13 22:56:59 +02:00
Tuan-Dat Tran 28d946cae5 Add noexecute taint on longhorn
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-13 21:49:10 +02:00
Tuan-Dat Tran 5d0f56ce38 linting
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 11:31:26 +02:00
Tuan-Dat Tran 0c1a8a95f2 add postgres exporter
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 11:17:03 +02:00
Tuan-Dat Tran 05c35a546a added installation of reqs for longhorn
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 05:20:35 +02:00
Tuan-Dat Tran d16cc0db06 Added notes for longhorn nodes
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 04:40:16 +02:00
Tuan-Dat Tran 2ae0f4863e update vault skeleton
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 04:14:01 +02:00
Tuan-Dat Tran 7d58de98d9 Added storage nodes for k3s
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-08 04:13:38 +02:00
Tuan-Dat Tran 92e4b3bb27 Add k3s-server02
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-07 20:56:12 +02:00
Tuan-Dat Tran ed980f816f prod and staging for tls in loadbalancer
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-04 00:00:02 +02:00
Tuan-Dat Tran c0e81ee277 Added script etc for ssl on lb
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-10-03 17:38:08 +02:00
Tuan-Dat Tran a09448985c Added https lb for lb
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-30 20:06:27 +02:00
Tuan-Dat Tran 95afa201e3 Fixed host forwarding for subdomain reverse proxy
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-30 10:53:18 +02:00
Tuan-Dat Tran 000375c7ba adjust name for upstream in lb
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-30 10:46:19 +02:00
Tuan-Dat Tran 2cc4fd0be0 Added http lb for lb
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-30 07:51:33 +02:00
Tuan-Dat Tran 8fb4eaf610 Added k3s agents
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-20 16:57:59 +02:00
Tuan-Dat Tran 3aa56be025 Full k3s server installation done
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-20 15:01:33 +02:00
Tuan-Dat Tran 51a49d003d Finished lb and db
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-19 23:10:00 +02:00
Tuan-Dat Tran 50abbf933c First step towards rewrite
Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>
2024-09-17 23:44:20 +02:00
182 changed files with 1829 additions and 62395 deletions


@@ -1,207 +0,0 @@
<mxfile host="app.diagrams.net" modified="2023-11-05T13:55:54.105Z" agent="Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/119.0" etag="qKRITLw66apjhZnPW2mG" version="21.6.2" pages="2">
<diagram id="JSIfkQgaAO27B-iO4uI6" name="Homelab Overview">
<mxGraphModel dx="2924" dy="1194" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" math="0" shadow="0">
<root>
<mxCell id="0" />
<mxCell id="1" parent="0" />
<mxCell id="z4CzeoHyWsNDpYlZFiTu-54" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0;exitY=1;exitDx=0;exitDy=0;entryX=0.5;entryY=0;entryDx=0;entryDy=0;" edge="1" parent="1" source="z4CzeoHyWsNDpYlZFiTu-73" target="z4CzeoHyWsNDpYlZFiTu-27">
<mxGeometry relative="1" as="geometry">
<mxPoint x="-500" y="530" as="targetPoint" />
<Array as="points">
<mxPoint x="10" y="320" />
<mxPoint x="-515" y="320" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-66" value="192.168.20.1/24" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="z4CzeoHyWsNDpYlZFiTu-54">
<mxGeometry x="-0.3363" y="1" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-55" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.25;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1" source="z4CzeoHyWsNDpYlZFiTu-73" target="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry relative="1" as="geometry">
<mxPoint x="180" y="290" as="sourcePoint" />
<Array as="points">
<mxPoint x="105" y="360" />
<mxPoint x="-20" y="360" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-65" value="192.168.30.1/24" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="z4CzeoHyWsNDpYlZFiTu-55">
<mxGeometry x="-0.1082" y="1" relative="1" as="geometry">
<mxPoint x="52" as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-56" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;exitX=0.75;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1" source="z4CzeoHyWsNDpYlZFiTu-73" target="z4CzeoHyWsNDpYlZFiTu-41">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="295" y="360" />
<mxPoint x="420" y="360" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-67" value="192.168.40.1/24" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="z4CzeoHyWsNDpYlZFiTu-56">
<mxGeometry x="-0.1475" y="-2" relative="1" as="geometry">
<mxPoint x="-33" as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-57" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1" source="z4CzeoHyWsNDpYlZFiTu-73" target="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="390" y="320" />
<mxPoint x="820" y="320" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-68" value="192.168.50.1/24" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="z4CzeoHyWsNDpYlZFiTu-57">
<mxGeometry x="-0.2384" y="-3" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-27" value="Homelab VLAN20" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="-750" y="600" width="470" height="400" as="geometry">
<mxRectangle x="-750" y="600" width="140" height="30" as="alternateBounds" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-90" value="&lt;div&gt;aya01.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.12&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server_storage;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-27">
<mxGeometry x="20" y="40" width="105" height="105" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-19" value="&lt;div&gt;pi.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.11&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-27">
<mxGeometry x="250" y="40" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-17" value="&lt;div&gt;inko.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.14&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-27">
<mxGeometry x="140" y="40" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-20" value="&lt;div&gt;naruto.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.13&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-27">
<mxGeometry x="360" y="40" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-35" value="User VLAN30" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="-200" y="600" width="360" height="400" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-28" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.tablet;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry x="50" y="50" width="100" height="70" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-8" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.pc;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry x="100" y="140" width="100" height="70" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-33" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.mobile;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry x="250" y="70" width="50" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-36" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.video_projector;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry x="220" y="210" width="100" height="35" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-46" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.laptop;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-35">
<mxGeometry x="50" y="260" width="100" height="55" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-39" value="IoT VLAN50" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="680" y="600" width="280" height="460" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-52" value="&lt;div&gt;Brother MFC-L2710DW&lt;/div&gt;&lt;div&gt;192.168.50.219&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.copier;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry x="20" y="35" width="100" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-51" value="&lt;div&gt;Brother QL-820NWB&lt;/div&gt;&lt;div&gt;192.168.50.218&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.copier;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry x="150" y="35" width="100" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-60" value="Lightbulbs" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.comm_link;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry x="50" y="190" width="40" height="80" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-62" value="Shelly Power Outlet" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.comm_link;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry x="180" y="190" width="40" height="80" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-81" value="BirbCam" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.security_camera;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-39">
<mxGeometry x="30" y="330" width="100" height="75" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-53" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="z4CzeoHyWsNDpYlZFiTu-40" target="z4CzeoHyWsNDpYlZFiTu-73">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-69" value="192.168.200.1/32" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="z4CzeoHyWsNDpYlZFiTu-53">
<mxGeometry x="-0.3672" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-40" value="netcup VPS" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="-290" y="40" width="150" height="220" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-38" value="&lt;div&gt;mii.seyshiro.de&lt;/div&gt;&lt;div&gt;tudattr.dev&lt;br&gt;&lt;/div&gt;&lt;div&gt;192.168.200.2&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.proxy_server;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-40">
<mxGeometry x="20" y="50" width="105" height="105" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-41" value="Guest VLAN40" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="600" width="360" height="280" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-44" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.mobile;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-41">
<mxGeometry x="250" y="70" width="50" height="100" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-47" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.tablet;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-41">
<mxGeometry x="40" y="50" width="100" height="70" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-48" value="" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.laptop;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-41">
<mxGeometry x="90" y="160" width="100" height="55" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-73" value="&lt;div&gt;Network Backbone&amp;nbsp;&lt;/div&gt;&lt;div&gt;(Management VLAN 70)&lt;/div&gt;" style="swimlane;whiteSpace=wrap;html=1;startSize=40;" vertex="1" parent="1">
<mxGeometry x="10" y="40" width="380" height="220" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-10" value="&lt;div&gt;Mikrotik CRS 326&lt;/div&gt;&lt;div&gt;192.168.70.1&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.router;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-73">
<mxGeometry x="60" y="85" width="100" height="30" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-70" value="&lt;div&gt;TP-Link EAP 225&lt;/div&gt;&lt;div&gt;192.168.70.250&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.wireless_modem;" vertex="1" parent="z4CzeoHyWsNDpYlZFiTu-73">
<mxGeometry x="260" y="57.5" width="100" height="85" as="geometry" />
</mxCell>
<mxCell id="z4CzeoHyWsNDpYlZFiTu-71" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;endArrow=none;endFill=0;" edge="1" parent="z4CzeoHyWsNDpYlZFiTu-73" source="z4CzeoHyWsNDpYlZFiTu-10" target="z4CzeoHyWsNDpYlZFiTu-70">
<mxGeometry relative="1" as="geometry">
<mxPoint x="30" y="142.5" as="sourcePoint" />
</mxGeometry>
</mxCell>
</root>
</mxGraphModel>
</diagram>
<diagram id="2pU-qBdMS-FfD6IS7qYU" name="VLAN View">
<mxGraphModel dx="2440" dy="1405" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" math="0" shadow="0">
<root>
<mxCell id="0" />
<mxCell id="1" parent="0" />
<mxCell id="7z5INb6uvPQJT5LWZGVQ-28" value="netcup VPS" style="swimlane;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="480" y="20" width="150" height="220" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-29" value="&lt;div&gt;mii.seyshiro.de&lt;/div&gt;&lt;div&gt;tudattr.dev&lt;br&gt;&lt;/div&gt;&lt;div&gt;192.168.200.2&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.proxy_server;" vertex="1" parent="7z5INb6uvPQJT5LWZGVQ-28">
<mxGeometry x="20" y="50" width="105" height="105" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-34" value="&lt;div&gt;Network Backbone&amp;nbsp;&lt;/div&gt;&lt;div&gt;(Management VLAN 70)&lt;/div&gt;" style="swimlane;whiteSpace=wrap;html=1;startSize=40;" vertex="1" parent="1">
<mxGeometry x="780" y="20" width="380" height="220" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-36" value="&lt;div&gt;TP-Link EAP 225&lt;/div&gt;&lt;div&gt;192.168.70.250&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.wireless_modem;" vertex="1" parent="7z5INb6uvPQJT5LWZGVQ-34">
<mxGeometry x="260" y="57.5" width="100" height="85" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-35" value="&lt;div&gt;Mikrotik CRS 326&lt;/div&gt;&lt;div&gt;192.168.70.1&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.router;" vertex="1" parent="7z5INb6uvPQJT5LWZGVQ-34">
<mxGeometry x="60" y="100" width="100" height="30" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-13" value="&lt;div&gt;naruto.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.13&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="1">
<mxGeometry x="420" y="370" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-11" value="&lt;div&gt;pi.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.11&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="1">
<mxGeometry x="310" y="370" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-12" value="&lt;div&gt;inko.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.14&lt;br&gt;&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server;" vertex="1" parent="1">
<mxGeometry x="200" y="370" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-10" value="&lt;div&gt;aya01.seyshiro.de&lt;/div&gt;&lt;div&gt;192.168.20.12&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.server_storage;" vertex="1" parent="1">
<mxGeometry x="80" y="370" width="105" height="105" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-21" value="&lt;div&gt;Brother MFC-L2710DW&lt;/div&gt;&lt;div&gt;192.168.50.219&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.copier;" vertex="1" parent="1">
<mxGeometry x="1330" y="160" width="100" height="100" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-22" value="&lt;div&gt;Brother QL-820NWB&lt;/div&gt;&lt;div&gt;192.168.50.218&lt;/div&gt;" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.copier;" vertex="1" parent="1">
<mxGeometry x="1460" y="160" width="100" height="100" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-23" value="Lightbulbs" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.comm_link;" vertex="1" parent="1">
<mxGeometry x="1360" y="315" width="40" height="80" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-24" value="Shelly Power Outlet" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.comm_link;" vertex="1" parent="1">
<mxGeometry x="1490" y="315" width="40" height="80" as="geometry" />
</mxCell>
<mxCell id="7z5INb6uvPQJT5LWZGVQ-25" value="BirbCam" style="fontColor=#0066CC;verticalAlign=top;verticalLabelPosition=bottom;labelPosition=center;align=center;html=1;outlineConnect=0;fillColor=#CCCCCC;strokeColor=#6881B3;gradientColor=none;gradientDirection=north;strokeWidth=2;shape=mxgraph.networks.security_camera;" vertex="1" parent="1">
<mxGeometry x="1340" y="455" width="100" height="75" as="geometry" />
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary file not shown.

README.md

@@ -1,227 +1,74 @@
# TuDatTr IaC
**I do not recommend using this project for one's own infrastructure, as it is
heavily attuned to my specific host/network setup.**

This is the Ansible project to provision fresh Debian VMs for my Proxmox instances.
Some values are hard-coded, such as the public key in both
[./scripts/debian_seed.sh](./scripts/debian_seed.sh) and [./group_vars/all/vars.yml](./group_vars/all/vars.yml).

## User

It is expected that a user with sudo privileges exists on the target; for me the user's name is "tudattr".
You can add such a user with `useradd -m -g sudo -s /bin/bash tudattr`.
Don't forget to set a password for the new user with `passwd tudattr`.

## sudo

Install sudo on the target machine. With Debian this is:

```sh
su root
apt install sudo
usermod -a -G sudo tudattr
```

## Prerequisites

- [secrets.yml](secrets.yml) in the root directory of this repository.
  A skeleton file can be found at [./secrets.yml.skeleton](./secrets.yml.skeleton).
- IP configuration of the hosts as in [./host_vars/\*](./host_vars/*).
- A [~/.ssh/config](~/.ssh/config) entry for each of the hosts used.
- `passlib` installed for your operating system; it is needed to hash passwords ad hoc.

## Improvable Variables

- `group_vars/k3s/vars.yml`:
  - `k3s.server.ips`: Take the list of IPs from the `k3s_server*.yml` host_vars files.
  - `k3s_db_connection_string`: Embed this variable in the `k3s.db.` directory.
    Currently causes a loop.

## Run Playbook

To run a first playbook and test the setup, the following command can be executed:

```sh
ansible-playbook -i production -J k3s-servers.yml
```
This will run the [./k3s-servers.yml](./k3s-servers.yml) playbook and execute
its roles.

## After successful k3s installation

To access our Kubernetes cluster from our host machine, e.g. to work on it via
flux, we need to manually copy the k3s config from one of our server nodes to
our host machine. Then we install `kubectl` on the host machine, and optionally
`kubectx` if we're already managing other Kubernetes instances. Then we replace
the localhost address inside the config with the IP of our load balancer.
Finally we need to set the `KUBECONFIG` variable.

## Backups

Backups for aya01 and the Raspberry Pi go to a Backblaze B2 bucket, which is
encrypted on the client side by rclone. First of all we need to create the
buckets and provide Ansible with the needed information.

We need to create an API key for Backblaze, which consists of an ID and a key.
We use rclone to sync to Backblaze, and rclone can encrypt the data before
sending it to Backblaze. To do this we need two configs on each device that
should be backed up:

- a [remote] b2 config
- a [secret] crypt config

We create these by running `rclone config`. The crypt config should have two
passwords, which we store in our secrets file.
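The resulting rclone configuration (typically `~/.config/rclone/rclone.conf`) then looks roughly like the sketch below; the remote names, bucket name, and placeholder credentials are assumptions, and the passwords are stored obscured by `rclone config`:

```config
[b2]
type = b2
account = <application key ID>
key = <application key>

[secret]
type = crypt
remote = b2:my-backup-bucket
password = <obscured password>
password2 = <obscured salt password>
```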
## Vault
- Create the vault with `ansible-vault create secrets.yml`
- Edit entries in the vault with `ansible-vault edit secrets.yml`
- Add the following entries: TODO
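The concrete entry names are still TODO above; purely as a hypothetical illustration of the mechanics, an entry inside the vault is plain YAML and can be referenced from vars files like any other variable:

```yaml
# Inside secrets.yml (encrypted at rest; entry name is hypothetical)
vault_tudattr_become_pass: "example-password"
```

A vars file could then set `ansible_become_pass: "{{ vault_tudattr_become_pass }}"`, matching the pattern used for the proxmox hosts in the commits above.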
## Docker
To add a new docker container to the docker role, add the following, replacing `service` with the name of your service:
- Add the relevant vars to `group_vars/all/vars.yaml`:
```yaml
service:
  host: "service"
  ports:
    http: "19999"
  volumes:
    config: "{{ docker_dir }}/service/" # config folder or your dir
    data: "{{ docker_data_dir }}/service/" # data folder or your dir (only works on aya01)
```
- Create the necessary directories for the service in the docker role, `roles/docker/tasks/service.yaml`:
```yaml
- name: Create service dirs
  file:
    path: "{{ item }}"
    owner: 1000
    group: 1000
    mode: '775'
    state: directory
  loop:
    - "{{ service.volumes.config }}"
    - "{{ service.volumes.data }}"
# optional:
# - name: Place service config
#   template:
#     owner: 1000
#     mode: '660'
#     src: "templates/hostname/service/service.yml"
#     dest: "{{ prm_config }}/service.yml"
```
- Include the new tasks in `roles/docker/tasks/hostname_compose.yaml`:
```yaml
- include_tasks: service.yaml
  tags:
    - service
```
- Add the new service to the compose template `roles/docker/templates/hostname/compose.yaml`:
```yaml
service:
  image: service/service
  container_name: service
  hostname: service
  networks:
    - net
  ports:
    - "{{ service.ports.http }}:19999"
  restart: unless-stopped
  volumes:
    - "{{ service.volumes.config }}:/etc/service"
    - "{{ service.volumes.data }}:/var/lib/service"
```
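The `Create service dirs` task above can be mimicked locally to preview the resulting layout; this is a sketch with stand-in paths (the real values come from `docker_dir` and `docker_data_dir` in the group vars):

```shell
# Stand-ins for {{ docker_dir }} and {{ docker_data_dir }}
docker_dir=/tmp/example-docker
docker_data_dir=/tmp/example-docker-data
# install -d mirrors the file module with state: directory and mode 775
install -d -m 775 "$docker_dir/service" "$docker_data_dir/service"
ls -ld "$docker_dir/service" "$docker_data_dir/service"
```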
## Server
- Install Debian (debian-11.5.0-amd64-netinst.iso) on remote system
- Create user (tudattr)
- Get IP of remote system (192.168.20.11)
- Create ssh-config entry
```config
Host aya01
  HostName 192.168.20.11
  Port 22
  User tudattr
  IdentityFile /mnt/veracrypt1/genesis
```
- Copy the public key to the remote system:
  `ssh-copy-id -i /mnt/veracrypt1/genesis.pub aya01`
- Add this host to the ansible inventory
- Install sudo on the remote system
- Add the user to the sudo group: switch to root with `su --login` (without `--login` the PATH will not be loaded correctly, see [here](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918754)), then run `usermod -a -G sudo tudattr`
- Set the time correctly when getting the following error:
```sh
Release file for http://security.debian.org/debian-security/dists/bullseye-security/InRelease is not valid yet (invalid for another 12h 46min 9s). Updates for this repository will not be applied.
```
  by running on the remote system (example):
```sh
sudo systemctl stop ntp.service
sudo ntpd -gq
sudo systemctl start ntp.service
```
- Install flux and continue in the flux repository.
## Longhorn Nodes
To create Longhorn nodes from existing Kubernetes nodes we want to increase
their storage capacity. Since we're using VMs for our k3s nodes, we can
resize the root disk of the VMs in the Proxmox GUI.
Then we have to resize the partitions inside the VM so that the root partition
uses the newly available space.
With an LVM-based root partition we can do the following:
```sh
# Create a new partition from the free space.
sudo fdisk /dev/sda
# n > accept the defaults (press Enter 5x) > w
# Create an LVM physical volume on the new partition
sudo pvcreate /dev/sda3
# Extend the volume group with the new physical volume
sudo vgextend k3s-vg /dev/sda3
# Grow the root logical volume and filesystem into the newly available space
sudo lvresize -l +100%FREE -r /dev/k3s-vg/root
```
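Afterwards we can verify that the volume group and the root filesystem picked up the new space (read-only checks; the exact output depends on the VM):

```sh
sudo pvs /dev/sda3
sudo vgs k3s-vg
df -h /
```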
### zoneminder
- Enable authentication in (Options->System)
- Create a new camera:
  - General>Name: BirdCam
  - General>Source Type: Ffmpeg
  - General>Function: Modect
  - Source>Source Path: `rtsp://user:pw@ip:554/cam/mpeg4`
- Change default admin password
- Create users
## RaspberryPi
- Install raspbian lite (2022-09-22-raspios-bullseye-arm64-lite.img) on pi
- Get IP of remote system (192.168.20.11)
- Create ssh-config entry
```config
Host pi
HostName 192.168.20.11
Port 22
User tudattr
IdentityFile /mnt/veracrypt1/genesis
```
- Enable ssh on the pi
- Copy the public key to the pi
- Change the user password on the pi
- Execute `ansible-playbook -i production --ask-vault-pass --extra-vars '@secrets.yml' pi.yml`
## Mikrotik
- Create an RSA key on your device and name it `mikrotik_rsa` (e.g. `ssh-keygen -t rsa -b 4096 -f mikrotik_rsa`)
- On mikrotik run: `/user/ssh-keys/import public-key-file=mikrotik_rsa.pub user=tudattr`
- Create ssh-config entry:
```config
Host mikrotik
HostName 192.168.70.1
Port 2200
User tudattr
IdentityFile /mnt/veracrypt1/mikrotik_rsa
```
### wireguard
Thanks to [mikrotik](https://www.medo64.com/2022/04/wireguard-on-mikrotik-routeros-7/).
Quick code:
```
# add wireguard interface
interface/wireguard/add listen-port=51820 name=wg1
# get public key
interface/wireguard/print
$ > public-key: <mikrotik_public_key>
# add network/ip for wireguard interface
ip/address/add address=192.168.200.1/24 network=192.168.200.0 interface=wg1
# add firewall rule for wireguard (maybe specify to be from pppoe-wan)
/ip/firewall/filter/add chain=input protocol=udp dst-port=51820 action=accept
# routing for wg1 clients and rest of the network
> <insert forward for routing between wg1 and other networks>
# enable internet for wg1 clients (may have to add the wg1 subnet to the internet address list)
/ip/firewall/nat/add chain=srcnat src-address=192.168.200.0/24 out-interface=pppoe-wan action=masquerade
```
add peer
```
/interface/wireguard/peers/add interface=wg1 allowed-address=<untaken_ipv4>/24 public-key="<client_public_key>"
```
Key generation on Arch Linux: `wg genkey | (umask 0077 && tee wireguard.key) | wg pubkey > peer_A.pub`
WireGuard config on Arch Linux at `/etc/wireguard/wg0.conf`:
```
[Interface]
PrivateKey = <client_private_key>
Address = 192.168.200.250/24
[Peer]
PublicKey = <mikrotik public key>
Endpoint = tudattr.dev:51820
AllowedIPs = 0.0.0.0/0
```
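To bring the tunnel up on the client, `wg-quick` can be used; this mirrors the `wg-quick@wg0.service` unit referenced elsewhere in this repo:

```sh
sudo wg-quick up wg0
# or enable it persistently across reboots:
sudo systemctl enable --now wg-quick@wg0.service
```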
Used IPv4 addresses:
- tudattr: 192.168.200.250
- livei: 192.168.200.240
#### notes
- wireguard->add
  - name: wg_tunnel01
  - listen port: 51820
  - [save]
- wireguard->peers->add
  - interface: wg_tunnel01
  - endpoint port: 51820
  - allowed address: ::/0
  - psk: <password>
  - persistent keepalive: 25
- ip->address->address list->add
  - address: 192.168.200.1/24
  - network: 192.168.200.0
  - interface: wg_tunnel01
## Troubleshooting
### Docker networking problem
`docker system prune -a`
### Time problems (NTP service: n/a)
`systemctl status systemd-timesyncd.service`
If the service is not available:
`sudo apt install systemd-timesyncd/stable`
### Syncthing inotify
`echo "fs.inotify.max_user_watches=204800" | sudo tee -a /etc/sysctl.conf`
https://forum.cloudron.io/topic/7163/how-to-increase-inotify-limit-for-syncthing/2


@ -1,29 +0,0 @@
---
- name: Set up Servers
hosts: aya01
gather_facts: yes
roles:
- role: common
tags:
- common
- role: samba
tags:
- samba
# - role: power_management
# tags:
# - power_management
- role: backblaze
tags:
- backblaze
- role: node_exporter
tags:
- node_exporter
- role: snmp_exporter
tags:
- snmp_exporter
- role: smart_exporter
tags:
- smart_exporter
- role: docker
tags:
- docker

common-k3s.yml Normal file

@ -0,0 +1,10 @@
---
- name: Run the common role on k3s
hosts: k3s
gather_facts: yes
vars_files:
- secrets.yml
roles:
- role: common
tags:
- common


@ -1,20 +1,19 @@
---
- name: Set up Servers
hosts: mii
hosts: db
gather_facts: yes
vars_files:
- secrets.yml
roles:
- role: common
tags:
- common
- role: backblaze
- role: postgres
tags:
- backblaze
- postgres
- role: node_exporter
tags:
- node_exporter
- role: docker
- role: postgres_exporter
tags:
- docker
- role: wireguard
tags:
- wireguard
- postgres_exporter

docker-host.yml Normal file

@ -0,0 +1,13 @@
---
- name: Set up Servers
hosts: docker_host
gather_facts: yes
vars_files:
- secrets.yml
roles:
- role: common
tags:
- common
- role: docker_host
tags:
- docker_host


@ -1,545 +1,35 @@
#
# Essential
#
user: tudattr
timezone: Europe/Berlin
rclone_config: "/root/.config/rclone/"
puid: "1000"
pgid: "1000"
pk_path: "/mnt/veracrypt1/genesis"
pubkey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqc9fnzfCz8fQDFzla+D8PBhvaMmFu2aF+TYkkZRxl9 tuan@genesis-2022-01-20"
local_domain: tudattr.dev
local_subdomains: "local"
remote_domain: tudattr.dev
remote_subdomains: "www,plex,status,tautulli"
backup_domain: seyshiro.de
backup_subdomains: "hass,qbit,zm,"
#
# aya01
#
aya01_host: "aya01"
aya01_ip: "192.168.20.12"
#
# mii
#
mii_host: "mii"
mii_ip: "192.168.200.2"
#
# naruto
#
naruto_host: "naruto"
naruto_ip: "192.168.20.13"
#
# pi
#
pi_host: "pi"
pi_ip: "192.168.20.11"
#
# inko
#
inko_host: "inko"
inko_ip: "192.168.20.14"
#
# Used to download for git releases
#
go_arch_map:
i386: '386'
x86_64: 'amd64'
aarch64: 'arm64'
armv7l: 'armv7'
armv6l: 'armv6'
go_arch: "{{ go_arch_map[ansible_architecture] | default(ansible_architecture) }}"
#
# aya01 - Disks
#
fstab_entries:
- name: "config"
path: "/opt"
type: "ext4"
uuid: "cad60133-dd84-4a2a-8db4-2881c608addf"
- name: "media0"
path: "/mnt/media0"
type: "ext4"
uuid: "c4c724ec-4fe3-4665-adf4-acd31d6b7f95"
- name: "media1"
path: "/mnt/media1"
type: "ext4"
uuid: "8d66d395-1e35-4f5a-a5a7-d181d6642ebf"
mergerfs_entries:
- name: "media"
path: "/media"
branches:
- "/mnt/media0"
- "/mnt/media1"
opts:
- "use_ino"
- "allow_other"
- "cache.files=partial"
- "dropcacheonclose=true"
- "category.create=mfs"
type: "fuse.mergerfs"
public_domain: tudattr.dev
internal_domain: seyshiro.de
#
# Packages
#
common_packages:
- sudo
- build-essential
- curl
- git
- iperf3
- git
- smartmontools
- vim
- curl
- tree
- neovim
- rsync
- smartmontools
- sudo
- systemd-timesyncd
- neofetch
- build-essential
- btrfs-progs
#
# Docker
#
docker_repo_url: https://download.docker.com/linux
docker_apt_gpg_key: "{{ docker_repo_url }}/{{ ansible_distribution | lower }}/gpg"
docker_apt_release_channel: stable
docker_apt_arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"
docker_apt_repository: "deb [arch={{ docker_apt_arch }}] {{ docker_repo_url }}/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} {{ docker_apt_release_channel }}"
docker_network: "172.16.69.0/24"
docker_compose_dir: /opt/docker/compose
docker_dir: /opt/docker/config
docker_data_dir: /media/docker/data # only available on aya01
- tree
- screen
- bat
- fd-find
- ripgrep
mysql_user: user
#
# ZoneMinder
#
zoneminder_host: "zm"
zoneminder_port: "8081"
zoneminder_network: "172.16.42.0/24"
zoneminder_root: "{{ docker_dir }}/zm"
zoneminder_config: "{{ zoneminder_root }}/config"
zoneminder_log: "{{ zoneminder_root}}/log"
zoneminder_db: "{{ zoneminder_root}}/db"
zoneminder_data: "{{ docker_data_dir }}/zm/data"
#
# Syncthing
#
syncthing_host: "syncthing"
syncthing_port: "8384"
syncthing_data: "{{docker_data_dir}}/syncthing/"
#
# Softserve
#
softserve_data: "{{docker_dir}}/softserve/data"
#
# cupsd
#
cupsd_host: "cupsd"
cupsd_port: "631"
cupsd_config: "{{ docker_dir }}/cupsd/"
#
# Uptime Kuma
#
kuma_host: "status"
kuma_port: "3001"
kuma_config: "{{ docker_dir }}/kuma/"
#
# Traefik
#
traefik:
host: "traefik"
admin:
port: "8080"
config: "{{ docker_dir }}/traefik/etc-traefik/"
data: "{{ docker_dir }}/traefik/var-log/"
letsencrypt: "{{ docker_dir }}/traefik/letsencrypt/"
user:
web: "80"
websecure: "443"
#
# DynDns Updater
#
ddns_host: "ddns"
ddns_port: "8000"
ddns_data: "{{ docker_dir }}/ddns-updater/data/"
#
# Home Assistant
#
ha_host: "hass"
ha_port: "8123"
ha_config: "{{ docker_dir }}/home-assistant/config/"
#
# pihole
#
pihole_host: "pihole"
pihole_port: "8089"
pihole_config: "{{ docker_dir }}/pihole/etc-pihole/"
pihole_dnsmasq: "{{ docker_dir }}/pihole/etc-dnsmasq.d/"
#
# backblaze
#
# Directories that will be backupped to backblaze
# MOVED TO HOSTVARS
# backblaze_paths:
# aya01:
# - "{{ docker_compose_dir }}"
# - "{{ docker_dir }}"
# pi:
# - "{{ docker_compose_dir }}"
# - "{{ docker_dir }}"
#
# samba
#
samba:
dependencies:
- "samba"
- "smbclient"
- "cifs-utils"
user: "smbuser"
group: "smbshare"
config: "templates/smb.conf"
shares:
media:
name: "media"
path: "/media"
paperless:
name: "paperless"
path: "{{ paperless.data.consume }}"
#
# netdata
#
netdata_port: "19999"
netdata_config: "{{ docker_dir }}/netdata/"
netdata_lib: "{{ docker_data_dir }}/netdata/lib/"
netdata_cache: "{{ docker_data_dir }}/netdata/cache"
#
# Plex
#
plex_host: "plex"
# plex_ip: "172.16.69.12"
plex_port: "32400"
plex_config: "{{docker_data_dir}}/{{ plex_host }}/config"
plex_tv: "/media/series"
plex_movies: "/media/movies"
plex_music: "/media/songs"
#
# WireGuard
#
wg_config: "templates/wg0.conf"
wg_remote_config: "/etc/wireguard/wg0.conf"
wg_service: "wg-quick@wg0.service"
wg_deps: "wireguard"
wg_ip: "192.168.200.2"
wg_pubkey: "+LaPESyBF6Sb1lqkk4UcestFpXNaKYyyX99tkqwLQhU="
wg_endpoint: "{{ local_subdomains }}.{{ local_domain }}:51820"
wg_allowed_ips: "192.168.20.0/24,192.168.200.1/32"
wg_dns: "{{ aya01_ip }},{{ pi_ip }},1.1.1.1"
arr_downloads: "{{ docker_data_dir }}/arr_downloads"
#
# Sonarr
#
sonarr_port: "8989"
sonarr_host: "sonarr"
sonarr_config: "{{ docker_dir }}/{{ sonarr_host }}/config"
sonarr_media: "{{ plex_tv }}"
sonarr_downloads: "{{ arr_downloads }}/{{ sonarr_host }}"
#
# Radarr
#
radarr_port: "7878"
radarr_host: "radarr"
radarr_config: "{{ docker_dir }}/{{ radarr_host }}/config"
radarr_media: "{{ plex_movies }}"
radarr_downloads: "{{ arr_downloads }}/{{ radarr_host }}"
#
# Lidarr
#
lidarr_port: "8686"
lidarr_host: "lidarr"
lidarr_config: "{{ docker_dir }}/{{ lidarr_host }}/config"
lidarr_media: "{{ plex_music }}"
lidarr_downloads: "{{ arr_downloads }}/{{ lidarr_host }}"
#
# Prowlarr
#
prowlarr_port: "9696"
prowlarr_host: "prowlarr"
prowlarr_config: "{{ docker_dir }}/{{ prowlarr_host }}/config"
#
# bin
#
bin_port: "6162"
bin_host: "bin"
bin_upload: "{{ docker_data_dir }}/{{bin_host}}/upload"
#
# qbittorrentvpn
#
qbit_port: "8082"
qbit_host: "qbit"
qbit_config: "templates/aya01/qbittorrentvpn/config"
qbit_remote_config: "{{ docker_dir }}/{{ qbit_host }}/config"
qbit_downloads: "{{ arr_downloads }}"
qbit_type: "openvpn"
qbit_ssl: "no"
qbit_lan: "192.168.20.0/24, 192.168.30.0/24, {{ docker_network }}"
qbit_dns: "{{ aya01_ip }}, {{ pi_ip }}, 1.1.1.1"
#
# qbittorrentvpn - torrentleech
#
torrentleech_port: "8083"
torrentleech_host: "torrentleech"
torrentleech_remote_config: "{{ docker_dir }}/{{ torrentleech_host }}/config"
#
# Home Assistant
#
hass_port: ""
hass_host: "hass"
#
# Tautulli
#
tautulli_port: "8181"
tautulli_host: "tautulli"
tautulli_config: "{{ docker_dir }}/{{ tautulli_host }}/config"
#
# Code Server
#
code_port: "8443"
code_host: "code"
code_config: "{{ docker_dir }}/{{ code_host }}/config"
#
# GlueTun
#
gluetun_port: ""
gluetun_host: "gluetun"
gluetun_country: "Hungary"
gluetun_config: "{{ docker_dir }}/{{ gluetun_host }}/config"
#
# NodeExporter
#
node_exporter:
port: 9100
host: 'node'
version: 'latest'
serve: 'localhost'
options: ''
bin_path: /usr/local/bin/node_exporter
#
# Prometheus
#
prometheus_puid: "65534"
prometheus_pgid: "65534"
prometheus_host: "prometheus"
prometheus_data: "{{docker_data_dir}}/prometheus/"
prometheus_config: "{{docker_dir}}/prometheus/"
prometheus_port: "9090"
#
# Grafana
#
grafana_host: "grafana"
grafana_port: "3000"
grafana_data: "{{docker_data_dir}}/grafana/"
grafana_config: "{{docker_dir}}/grafana/config/"
grafana_logs: "{{docker_dir}}/grafana/logs/"
grafana_puid: "472"
grafana_pgid: "472"
#
# SNMP Exporter
#
snmp_exporter_port: "9116"
snmp_exporter_target: "192.168.20.1"
snmp_exporter_config: "{{ docker_dir }}/snmp_exporter/"
snmp_exporter_host: "snmp_exporter"
#
# SMART Exporter
#
smart_exporter:
port: 9633
version: 'latest'
options: '--web.listen-address=9633'
bin_path: /usr/local/bin/smart_exporter
#
# Stirling-pdf
#
stirling:
host: "stirling"
dns: "pdf"
port: 8084
#
# nginx proxy manager
#
nginx:
host: "nginx"
endpoints:
http: 80
https: 443
admin: 8080
paths:
letsencrypt: "{{docker_dir}}/nginx/letsencrypt"
data: "{{docker_dir}}/nginx/data"
#
# Jellyfin
#
jellyfin:
host: "jellyfin"
port: "8096"
config: "{{docker_dir}}/jellyfin/config"
cache: "{{docker_dir}}/jellyfin/cache"
media:
tv: "{{ plex_tv }}"
movies: "{{ plex_movies }}"
music: "{{ plex_music }}"
#
# paperless-ngx
#
paperless:
host: "paperless"
port: "8000"
data:
data: "{{ docker_dir }}/paperless/data/data"
media: "{{ docker_dir }}/paperless/data/media"
export: "{{ docker_dir }}/paperless/data/export"
consume: "{{ docker_dir }}/paperless/data/consume"
db:
host: "paperless-sqlite"
db: "paperless"
user: "paperless"
password: "{{ host.paperless.db.password }}"
data: "{{ docker_dir }}/paperless/db/data"
redis:
host: "paperless-redis"
data: "{{ docker_dir }}/paperless/redis/data"
#
# Homarr
#
homarr:
host: "homarr"
volumes:
configs: "{{docker_dir}}/homarr/configs"
data: "{{ docker_data_dir }}/homarr/data/"
icons: "{{docker_dir}}/homarr/icons"
#
# gitea
#
gitea:
host: "git"
url: "https://git.tudattr.dev"
volumes:
data: "{{ docker_data_dir }}/gitea/data"
config: "{{ docker_dir }}/gitea/config"
ports:
http: "3000"
ssh: "2222"
runner:
host: "gitea-runner-{{ host.hostname }}"
token: "{{ host.gitea.runner.token }}"
name: "{{ host.hostname }}"
volumes:
data: "{{ docker_data_dir }}/gitea/runner/data/"
config: "{{ docker_dir }}/gitea/runner/config/"
config_file: "{{ docker_dir }}/gitea/runner/config/config.yml"
#
# Jellyseer
#
jellyseer:
host: "jellyseer"
ports:
http: "5055"
volumes:
config: "{{ docker_dir }}/jellyseer/config"
arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"


@ -0,0 +1,4 @@
docker:
url: "https://download.docker.com/linux"
apt_release_channel: "stable"
dirs: "/opt/docker"

group_vars/k3s/vars.yml Normal file

@ -0,0 +1,28 @@
db:
default_user:
user: "postgres"
name: "k3s"
user: "k3s"
password: "{{ vault.k3s.postgres.db.password }}"
listen_address: "{{ k3s.db.ip }}"
k3s:
net: "192.168.20.0/24"
server:
ips:
- 192.168.20.21
- 192.168.20.24
- 192.168.20.30
loadbalancer:
ip: 192.168.20.22
default_port: 6443
db:
ip: 192.168.20.23
default_port: "5432"
agent:
ips:
- 192.168.20.25
- 192.168.20.26
- 192.168.20.27
k3s_db_connection_string: "postgres://{{ db.user }}:{{ db.password }}@{{ k3s.db.ip }}:{{ k3s.db.default_port }}/{{ db.name }}"


@ -1,53 +1,10 @@
ansible_user: "{{ user }}"
---
ansible_user: "root"
ansible_host: 192.168.20.12
ansible_port: 22
ansible_ssh_private_key_file: '{{ pk_path }}'
ansible_become_pass: '{{ vault.aya01.sudo }}'
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.pve.aya01.root.sudo }}"
host:
hostname: "aya01"
ip: "{{ ansible_host }}"
backblaze:
account: "{{ vault.aya01.backblaze.account }}"
key: "{{ vault.aya01.backblaze.key }}"
remote: "remote:aya01-tudattr-dev"
password: "{{ vault.aya01.rclone.password }}"
password2: "{{ vault.aya01.rclone.password2 }}"
paths:
- "{{ docker_compose_dir }}"
- "{{ docker_dir }}"
fstab:
- name: "config"
path: "/opt"
type: "ext4"
uuid: "cad60133-dd84-4a2a-8db4-2881c608addf"
- name: "media0"
path: "/mnt/media0"
type: "ext4"
uuid: "c4c724ec-4fe3-4665-adf4-acd31d6b7f95"
- name: "media1"
path: "/mnt/media1"
type: "ext4"
uuid: "8d66d395-1e35-4f5a-a5a7-d181d6642ebf"
mergerfs:
- name: "media"
path: "/media"
branches:
- "/mnt/media0"
- "/mnt/media1"
opts:
- "use_ino"
- "allow_other"
- "cache.files=partial"
- "dropcacheonclose=true"
- "category.create=mfs"
type: "fuse.mergerfs"
samba:
password: "{{ vault.aya01.samba.password }}"
paperless:
db:
password: "{{ vault.aya01.paperless.db.password }}"
gitea:
runner:
token: "{{ vault.aya01.gitea.runner.token }}"
name: "aya01"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.34
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.docker.host00.sudo }}"
host:
hostname: "docker-host00"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.35
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.docker.host01.sudo }}"
host:
hostname: "docker-host01"
ip: "{{ ansible_host }}"


@ -1,10 +1,10 @@
ansible_user: "{{ user }}"
---
ansible_user: "root"
ansible_host: 192.168.20.14
ansible_port: 22
ansible_ssh_private_key_file: '{{ pk_path }}'
ansible_become_pass: '{{ vault.inko.sudo }}'
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.pve.inko.root.sudo }}"
host:
hostname: "inko"
ip: "{{ ansible_host }}"
fstab:
mergerfs:

host_vars/k3s-agent00.yml Normal file

@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.25
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.agent00.sudo }}"
host:
hostname: "k3s-agent00"
ip: "{{ ansible_host }}"

host_vars/k3s-agent01.yml Normal file

@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.26
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.agent01.sudo }}"
host:
hostname: "k3s-agent01"
ip: "{{ ansible_host }}"

host_vars/k3s-agent02.yml Normal file

@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.27
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.agent02.sudo }}"
host:
hostname: "k3s-agent02"
ip: "{{ ansible_host }}"


@ -0,0 +1,9 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.22
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.loadbalancer.sudo }}"
host:
hostname: "k3s-loadbalancer"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.32
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.longhorn00.sudo }}"
host:
hostname: "k3s-longhorn00"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.33
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.longhorn01.sudo }}"
host:
hostname: "k3s-longhorn01"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.31
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.longhorn02.sudo }}"
host:
hostname: "k3s-longhorn02"
ip: "{{ ansible_host }}"


@ -0,0 +1,9 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.23
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.postgres.sudo }}"
host:
hostname: "k3s-postgres"
ip: "{{ ansible_host }}"


@ -0,0 +1,9 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.21
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.server00.sudo }}"
host:
hostname: "k3s-server00"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.24
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.server01.sudo }}"
host:
hostname: "k3s-server01"
ip: "{{ ansible_host }}"


@ -0,0 +1,10 @@
---
ansible_user: "{{ user }}"
ansible_host: 192.168.20.30
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.k3s.server02.sudo }}"
host:
hostname: "k3s-server02"
ip: "{{ ansible_host }}"

host_vars/lulu.yml Normal file

@ -0,0 +1,10 @@
---
ansible_user: "root"
ansible_host: 192.168.20.28
ansible_port: 22
ansible_ssh_private_key_file: "{{ pk_path }}"
ansible_become_pass: "{{ vault.pve.lulu.root.sudo }}"
host:
hostname: "lulu"
ip: "{{ ansible_host }}"


@ -1,20 +0,0 @@
ansible_user: "{{ user }}"
ansible_host: 202.61.207.139
ansible_port: 22
ansible_ssh_private_key_file: '{{ pk_path }}'
ansible_become_pass: '{{ vault.mii.sudo }}'
host:
hostname: "mii"
ip: "192.168.200.2"
backblaze:
account: "{{ vault.mii.backblaze.account }}"
key: "{{ vault.mii.backblaze.key }}"
remote: "remote:mii-tudattr-dev"
password: "{{ vault.mii.rclone.password }}"
password2: "{{ vault.mii.rclone.password2 }}"
paths:
- "{{ docker_compose_dir }}"
- "{{ docker_dir }}"
fstab:
mergerfs:


@ -1,23 +0,0 @@
ansible_user: "{{ user }}"
ansible_host: 192.168.20.13
ansible_port: 22
ansible_ssh_private_key_file: '{{ pk_path }}'
ansible_become_pass: '{{ vault.naruto.sudo }}'
host:
hostname: "naruto"
ip: "{{ ansible_host }}"
backblaze:
account: "{{ vault.naruto.backblaze.account }}"
key: "{{ vault.naruto.backblaze.key }}"
remote: "remote:naruto-tudattr-dev"
password: "{{ vault.naruto.rclone.password }}"
password2: "{{ vault.naruto.rclone.password2 }}"
paths:
- "{{ docker_compose_dir }}"
- "{{ docker_dir }}"
fstab:
mergerfs:
gitea:
runner:
token: "{{ vault.naruto.gitea.runner.token }}"


@ -1,23 +0,0 @@
ansible_user: "{{ user }}"
ansible_host: 192.168.20.11
ansible_port: 22
ansible_ssh_private_key_file: '{{ pk_path }}'
ansible_become_pass: '{{ vault.pi.sudo }}'
host:
hostname: "pi"
ip: "{{ ansible_host }}"
backblaze:
account: "{{ vault.pi.backblaze.account }}"
key: "{{ vault.pi.backblaze.key }}"
remote: "remote:pi-tudattr-dev"
password: "{{ vault.pi.rclone.password }}"
password2: "{{ vault.pi.rclone.password2 }}"
paths:
- "{{ docker_compose_dir }}"
- "{{ docker_dir }}"
fstab:
mergerfs:
gitea:
runner:
token: "{{ vault.pi.gitea.runner.token }}"

k3s-agents.yml Normal file

@ -0,0 +1,31 @@
- name: Set up Agents
hosts: k3s_nodes
gather_facts: yes
vars_files:
- secrets.yml
pre_tasks:
- name: Get K3s token from the first server
when: host.ip == k3s.server.ips[0] and inventory_hostname in groups["k3s_server"]
slurp:
src: /var/lib/rancher/k3s/server/node-token
register: k3s_token
become: true
- name: Set fact on k3s.server.ips[0]
when: host.ip == k3s.server.ips[0] and inventory_hostname in groups["k3s_server"]
set_fact: k3s_token="{{ k3s_token['content'] | b64decode | trim }}"
roles:
- role: common
when: inventory_hostname in groups["k3s_agent"]
tags:
- common
- role: k3s_agent
when: inventory_hostname in groups["k3s_agent"]
k3s_token: "{{ hostvars[(hostvars | dict2items | map(attribute='value') | map('dict2items') | map('selectattr', 'key', 'match', 'host') | map('selectattr', 'value.ip', 'match', k3s.server.ips[0] ) | select() | first | items2dict).host.hostname].k3s_token }}"
tags:
- k3s_agent
- role: node_exporter
when: inventory_hostname in groups["k3s_agent"]
tags:
- node_exporter


@ -1,14 +1,16 @@
---
- name: Set up Servers
hosts: inko
hosts: k3s_server
gather_facts: yes
vars_files:
- secrets.yml
roles:
- role: common
tags:
- common
- role: power_management
- role: k3s_server
tags:
- power_management
- k3s_server
- role: node_exporter
tags:
- node_exporter

k3s-storage.yml Normal file

@ -0,0 +1,31 @@
- name: Set up storage
hosts: k3s_nodes
gather_facts: yes
vars_files:
- secrets.yml
pre_tasks:
- name: Get K3s token from the first server
when: host.ip == k3s.server.ips[0] and inventory_hostname in groups["k3s_server"]
slurp:
src: /var/lib/rancher/k3s/server/node-token
register: k3s_token
become: true
- name: Set fact on k3s.server.ips[0]
when: host.ip == k3s.server.ips[0] and inventory_hostname in groups["k3s_server"]
set_fact: k3s_token="{{ k3s_token['content'] | b64decode | trim }}"
roles:
- role: common
when: inventory_hostname in groups["k3s_storage"]
tags:
- common
- role: k3s_storage
when: inventory_hostname in groups["k3s_storage"]
k3s_token: "{{ hostvars[(hostvars | dict2items | map(attribute='value') | map('dict2items') | map('selectattr', 'key', 'match', 'host') | map('selectattr', 'value.ip', 'match', k3s.server.ips[0] ) | select() | first | items2dict).host.hostname].k3s_token }}"
tags:
- k3s_storage
- role: node_exporter
when: inventory_hostname in groups["k3s_storage"]
tags:
- node_exporter


@ -1,17 +1,16 @@
---
- name: Set up Servers
hosts: naruto
hosts: loadbalancer
gather_facts: yes
vars_files:
- secrets.yml
roles:
- role: common
tags:
- common
- role: samba
- role: loadbalancer
tags:
- samba
- loadbalancer
- role: node_exporter
tags:
- node_exporter
- role: smart_exporter
tags:
- smart_exporter

pi.yml

@ -1,17 +0,0 @@
---
- name: Set up Raspberry Pis
hosts: pi
gather_facts: yes
roles:
- role: common
tags:
- common
- role: backblaze
tags:
- backblaze
- role: node_exporter
tags:
- node_exporter
- role: docker
tags:
- docker


@ -1,9 +1,75 @@
[server]
aya01
[raspberry]
pi
naruto
[vps]
mii
[k3s]
k3s-postgres
k3s-loadbalancer
k3s-server00
k3s-server01
k3s-server02
k3s-agent00
k3s-agent01
k3s-agent02
k3s-longhorn00
k3s-longhorn01
k3s-longhorn02
[k3s_server]
k3s-server00
k3s-server01
k3s-server02
[k3s_agent]
k3s-agent00
k3s-agent01
k3s-agent02
[k3s_storage]
k3s-longhorn00
k3s-longhorn01
k3s-longhorn02
[vm]
k3s-agent00
k3s-agent01
k3s-agent02
k3s-server00
k3s-server01
k3s-server02
k3s-postgres
k3s-loadbalancer
k3s-longhorn00
k3s-longhorn01
k3s-longhorn02
docker-host00
[k3s_nodes]
k3s-server00
k3s-server01
k3s-server02
k3s-agent00
k3s-agent01
k3s-agent02
k3s-longhorn00
k3s-longhorn01
k3s-longhorn02
[db]
k3s-postgres
[loadbalancer]
k3s-loadbalancer
[vm:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -p 22 -W %h:%p -q aya01"'
[docker]
docker-host00
[docker_host]
docker-host00
[proxmox]
aya01
lulu
inko


@ -1,24 +0,0 @@
---
- name: Shut down docker
systemd:
name: docker
state: stopped
become: true
# - name: Backing up for "{{ inventory_hostname }}"
# shell:
# cmd: "rclone sync {{ item }} secret:{{ item }} --transfers 16"
# loop: "{{ host.backblaze.paths }}"
# become: true
- name: Backing up for "{{ inventory_hostname }}"
shell:
cmd: "rclone sync {{ item }} secret:{{ item }} --skip-links"
loop: "{{ host.backblaze.paths }}"
become: true
- name: Restart docker
systemd:
name: docker
state: started
become: true


@ -1,18 +0,0 @@
---
- name: Create rclone config folder
file:
path: "{{ rclone_config }}"
owner: '0'
group: '0'
mode: '700'
state: directory
become: true
- name: Copy "rclone.conf"
template:
src: "rclone.conf.j2"
dest: "{{ rclone_config }}/rclone.conf"
owner: '0'
group: '0'
mode: '400'
become: true


@ -1,13 +0,0 @@
---
- name: Update and upgrade packages
apt:
update_cache: true
upgrade: true
autoremove: true
become: true
- name: Install rclone
apt:
name: "rclone"
state: present
become: true


@ -1,5 +0,0 @@
---
- include_tasks: install.yml
- include_tasks: config.yml
- include_tasks: backup.yml


@ -1,10 +0,0 @@
[remote]
type = b2
account = {{ host.backblaze.account }}
key = {{ host.backblaze.key }}
[secret]
type = crypt
remote = {{ host.backblaze.remote }}
password = {{ host.backblaze.password }}
password2 = {{ host.backblaze.password2 }}


@ -0,0 +1,4 @@
alias cat=batcat
alias vim=nvim
alias fd=fdfind
alias ls=eza


@ -1,7 +1,7 @@
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
case $- in
*i*) ;;
*) return;;
*i*) ;;
*) return ;;
esac
HISTCONTROL=ignoreboth
shopt -s histappend
@ -9,39 +9,38 @@ HISTSIZE=1000
HISTFILESIZE=2000
shopt -s checkwinsize
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
debian_chroot=$(cat /etc/debian_chroot)
fi
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
xterm-color | *-256color) color_prompt=yes ;;
esac
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
color_prompt=yes
else
color_prompt=
fi
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
xterm* | rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*) ;;
esac
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
fi
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
. ~/.bash_aliases
fi
if ! shopt -oq posix; then
@ -51,6 +50,3 @@ if ! shopt -oq posix; then
. /etc/bash_completion
fi
fi
. "$HOME/.cargo/env"


@ -0,0 +1,6 @@
---
- name: Restart sshd
service:
name: sshd
state: restarted
become: yes


@ -1,10 +1,12 @@
---
- name: Copy .bashrc
template:
src: templates/common/bash/bashrc.j2
dest: "/home/{{ user }}/.bashrc"
- name: Copy bash-configs
ansible.builtin.template:
src: "files/bash/{{ item }}"
dest: "/home/{{ user }}/.{{ item }}"
owner: "{{ user }}"
group: "{{ user }}"
mode: 0644
become: yes
register: sshd
mode: "644"
loop:
- bashrc
- bash_aliases
become: true


@ -1,13 +0,0 @@
---
- name: Update and upgrade packages
apt:
update_cache: yes
upgrade: yes
autoremove: yes
become: yes
- name: Install extra packages
apt:
name: "{{ common_packages }}"
state: present
become: yes


@ -0,0 +1,95 @@
---
- name: Ensure /etc/apt/keyrings directory exists
ansible.builtin.file:
path: /etc/apt/keyrings
state: directory
mode: "0755"
become: true
- name: Download and save Gierens repository GPG key
ansible.builtin.get_url:
url: https://raw.githubusercontent.com/eza-community/eza/main/deb.asc
dest: /etc/apt/keyrings/gierens.asc
mode: "0644"
register: gpg_key_result
become: true
- name: Add Gierens repository to apt sources
ansible.builtin.apt_repository:
repo: "deb [signed-by=/etc/apt/keyrings/gierens.asc] http://deb.gierens.de stable main"
state: present
update_cache: true
become: true
- name: Install eza package
ansible.builtin.apt:
name: eza
state: present
become: true
- name: Install bottom package
ansible.builtin.apt:
deb: https://github.com/ClementTsang/bottom/releases/download/0.9.6/bottom_0.9.6_amd64.deb
state: present
become: true
- name: Check if Neovim is already installed
ansible.builtin.command: "which nvim"
register: neovim_installed
changed_when: false
ignore_errors: true
- name: Download Neovim AppImage
ansible.builtin.get_url:
url: https://github.com/neovim/neovim/releases/download/v0.10.0/nvim.appimage
dest: /tmp/nvim.appimage
mode: "0755"
when: neovim_installed.rc != 0
register: download_result
- name: Extract Neovim AppImage
ansible.builtin.command:
cmd: "./nvim.appimage --appimage-extract"
chdir: /tmp
when: download_result.changed
register: extract_result
- name: Copy extracted Neovim files to /usr
ansible.builtin.copy:
src: /tmp/squashfs-root/usr/
dest: /usr/
remote_src: true
mode: "0755"
become: true
when: extract_result.changed
- name: Clean up extracted Neovim files
ansible.builtin.file:
path: /tmp/squashfs-root
state: absent
when: extract_result.changed
- name: Remove Neovim AppImage
ansible.builtin.file:
path: /tmp/nvim.appimage
state: absent
when: download_result.changed
- name: Check if Neovim config directory already exists
ansible.builtin.stat:
path: ~/.config/nvim
register: nvim_config
- name: Clone LazyVim starter to Neovim config directory
ansible.builtin.git:
repo: https://github.com/LazyVim/starter
dest: ~/.config/nvim
clone: true
update: false
when: not nvim_config.stat.exists
- name: Remove .git directory from Neovim config
ansible.builtin.file:
path: ~/.config/nvim/.git
state: absent
when: not nvim_config.stat.exists
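
The tasks above only fetch the AppImage when `nvim` is absent. A hedged shell sketch of that idempotence guard (the download/extract steps are shown as comments, since they need network access and root; paths mirror the playbook's `/tmp` locations):

```shell
# Mirror of the playbook's `neovim_installed.rc != 0` guard: decide whether
# an install is needed based on whether nvim is already on PATH.
if command -v nvim >/dev/null 2>&1; then
  need_install=no
else
  need_install=yes
fi
echo "need_install=$need_install"
# When need_install=yes, the tasks above effectively run:
#   curl -Lo /tmp/nvim.appimage \
#     https://github.com/neovim/neovim/releases/download/v0.10.0/nvim.appimage
#   chmod 0755 /tmp/nvim.appimage
#   (cd /tmp && ./nvim.appimage --appimage-extract)   # unpacks to /tmp/squashfs-root
#   sudo cp -r /tmp/squashfs-root/usr/ /usr/
#   rm -rf /tmp/squashfs-root /tmp/nvim.appimage
```

Running the check before and after an install is a quick way to confirm the `when:` conditions skip the download on a second playbook run.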

View File

@ -1,42 +0,0 @@
---
- name: Install dependencies
apt:
name: "mergerfs"
state: present
become: yes
- name: Create mount folders
file:
path: "{{ item.path }}"
state: directory
loop: "{{ host.fstab if host.fstab is iterable else []}}"
become: true
- name: Create fstab entries
mount:
src: "UUID={{ item.uuid }}"
path: "{{ item.path }}"
fstype: "{{ item.type }}"
state: present
backup: true
loop: "{{ host.fstab if host.fstab is iterable else []}}"
become: true
register: fstab
- name: Create/mount mergerfs
mount:
src: "{{ item.branches | join(':') }}"
path: "{{ item.path }}"
fstype: "{{ item.type }}"
opts: "{{ item.opts | join(',') }}"
state: present
backup: true
become: true
loop: "{{ host.mergerfs if host.mergerfs is iterable else []}}"
register: fstab
- name: Mount all disks
command: mount -a
become: true
when: fstab.changed

View File

@ -0,0 +1,14 @@
---
- name: Set a hostname
ansible.builtin.hostname:
name: "{{ host.hostname }}"
become: true
- name: Update /etc/hosts to reflect the new hostname
ansible.builtin.lineinfile:
path: /etc/hosts
regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ host.hostname }}"
state: present
backup: true
become: true
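
A minimal shell equivalent of the `lineinfile` task above — replace the `127.0.1.1` entry if present, append it otherwise. It operates on a temp file and a sample hostname (`aya01`) purely for illustration, not on the real `/etc/hosts`:

```shell
# Sketch of what lineinfile does with regexp + line + state: present.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
new_hostname=aya01
if grep -q '^127\.0\.1\.1' "$hosts"; then
  # Entry exists: rewrite it in place (GNU sed, as on the Debian-based hosts).
  sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $new_hostname/" "$hosts"
else
  # No entry: append one.
  echo "127.0.1.1 $new_hostname" >> "$hosts"
fi
cat "$hosts"
```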

View File

@ -1,6 +1,13 @@
---
- name: Configure Time
  ansible.builtin.include_tasks: time.yml
- name: Configure Hostname
  ansible.builtin.include_tasks: hostname.yml
- name: Configure Packages
  ansible.builtin.include_tasks: packages.yml
- name: Configure Extra-Packages
  ansible.builtin.include_tasks: extra_packages.yml
- name: Configure Bash
  ansible.builtin.include_tasks: bash.yml
- name: Configure SSH
  ansible.builtin.include_tasks: sshd.yml

View File

@ -0,0 +1,13 @@
---
- name: Update and upgrade packages
ansible.builtin.apt:
update_cache: true
upgrade: true
autoremove: true
become: true
- name: Install base packages
ansible.builtin.apt:
name: "{{ common_packages }}"
state: present
become: true

View File

@ -1,23 +1,17 @@
---
- name: Copy sshd_config
  ansible.builtin.template:
    src: templates/ssh/sshd_config
    dest: /etc/ssh/sshd_config
    mode: "644"
  notify:
    - Restart sshd
  become: true
- name: Copy pubkey
  ansible.builtin.copy:
    content: "{{ pubkey }}"
    dest: "/home/{{ user }}/.ssh/authorized_keys"
    owner: "{{ user }}"
    group: "{{ user }}"
    mode: "644"

View File

@ -1,124 +0,0 @@
# $OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Include /etc/ssh/sshd_config.d/*.conf
Protocol 2
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
MaxAuthTries 3
#MaxSessions 10
PubkeyAuthentication yes
# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
AllowAgentForwarding no
AllowTcpForwarding no
#GatewayPorts no
X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
TCPKeepAlive no
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
ClientAliveCountMax 2
UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server

View File

@ -0,0 +1,18 @@
Include /etc/ssh/sshd_config.d/*.conf
Protocol 2
PermitRootLogin no
MaxAuthTries 3
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM yes
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no
PrintMotd no
TCPKeepAlive no
ClientAliveCountMax 2
UseDNS yes
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
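
Before deploying a rendered copy of this template, a quick grep for the hardening directives catches templating mistakes; on the host itself, `sshd -t` remains the authoritative syntax check. A sketch against an inline sample standing in for the rendered file (the sample content is an assumption for illustration):

```shell
# Grep a sample rendered sshd_config for the key hardening directives.
# A real deployment would validate with `sshd -t -f "$cfg"` instead.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Protocol 2
PermitRootLogin no
MaxAuthTries 3
PasswordAuthentication no
PermitEmptyPasswords no
X11Forwarding no
EOF
missing=0
for opt in 'PermitRootLogin no' 'PasswordAuthentication no' 'X11Forwarding no'; do
  grep -qx "$opt" "$cfg" || { echo "missing: $opt"; missing=1; }
done
echo "missing=$missing"
```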

View File

@ -1,96 +0,0 @@
---
# - include_tasks: zoneminder.yml
# tags:
# - zoneminder
- include_tasks: pihole.yml
tags:
- pihole
- include_tasks: syncthing.yml
tags:
- syncthing
# - include_tasks: softserve.yml
# tags:
# - softserve
- include_tasks: cupsd.yml
tags:
- cupsd
- include_tasks: kuma.yml
tags:
- kuma
# - include_tasks: traefik.yml
# tags:
# - traefik
- include_tasks: plex.yml
tags:
- plex
- include_tasks: ddns.yml
tags:
- ddns
- include_tasks: homeassistant.yml
tags:
- homeassistant
- include_tasks: tautulli.yml
tags:
- tautulli
- include_tasks: sonarr.yml
tags:
- sonarr
- include_tasks: radarr.yml
tags:
- radarr
- include_tasks: lidarr.yml
tags:
- lidarr
- include_tasks: prowlarr.yml
tags:
- prowlarr
- include_tasks: bin.yml
tags:
- bin
- include_tasks: gluetun.yml
tags:
- gluetun
- include_tasks: qbit.yml
tags:
- qbit
- include_tasks: qbit_private.yml
tags:
- qbit_priv
- include_tasks: prometheus.yml
tags:
- prometheus
- include_tasks: grafana.yml
tags:
- grafana
- include_tasks: jellyfin.yml
tags:
- jellyfin
- include_tasks: gitea.yml
tags:
- gitea
- include_tasks: gitea-runner.yml
tags:
- gitea-runner

View File

@ -1,9 +0,0 @@
---
- name: Create bin-config directory
file:
path: "{{ bin_upload }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes

View File

@ -1,19 +0,0 @@
---
- name: Create cupsd-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ cupsd_config }}"
become: true
- name: Copy cupsd config
template:
owner: "{{ puid }}"
src: "templates/aya01/cupsd/cupsd.conf"
dest: "{{ cupsd_config }}/cupsd.conf"
mode: '660'
become: true

View File

@ -1,16 +0,0 @@
---
- name: Create ddns-config directory
file:
path: "{{ docker_dir }}/ddns-updater/data/"
owner: 1000
group: 1000
mode: '700'
state: directory
- name: Copy ddns-config
template:
owner: 1000
src: "templates/{{host.hostname}}/ddns-updater/data/config.json"
dest: "{{ docker_dir }}/ddns-updater/data/config.json"
mode: '400'

View File

@ -1,11 +0,0 @@
---
- name: Create gitea-runner directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ gitea.runner.volumes.data }}"

View File

@ -1,12 +0,0 @@
---
- name: Create gitea directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ gitea.volumes.data }}"
- "{{ gitea.volumes.config }}"

View File

@ -1,11 +0,0 @@
---
- name: Create gitlab-runner directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ gitlab.runner.volumes.config }}"

View File

@ -1,14 +0,0 @@
---
- name: Create gitlab-config
file:
path: "{{ item }}"
owner: "{{ gitlab.puid }}"
group: "{{ gitlab.pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ gitlab.paths.config }}"
- "{{ gitlab.paths.logs }}"
- "{{ gitlab.paths.data }}"

View File

@ -1,11 +0,0 @@
---
- name: Create gluetun-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '775'
state: directory
loop:
- "{{ gluetun_config}}"
become: true

View File

@ -1,22 +0,0 @@
---
- name: Create grafana data directory
file:
path: "{{ item }}"
owner: "{{ grafana_puid }}"
group: "{{ grafana_pgid }}"
mode: '755'
state: directory
loop:
- "{{ grafana_data }}"
- "{{ grafana_config }}"
become: true
- name: Copy grafana config
template:
owner: "{{ grafana_puid }}"
group: "{{ grafana_pgid }}"
src: "templates/aya01/grafana/etc-grafana/grafana.ini.j2"
dest: "{{ grafana_config }}/grafana.ini"
mode: '644'
become: true

View File

@ -1,8 +0,0 @@
---
- name: Create homeassistant-config directory
file:
path: "{{ ha_config }}"
mode: '755'
state: directory
become: true

View File

@ -1,30 +0,0 @@
---
- name: Create zoneminder user
user:
name: zm
uid: 911
shell: /bin/false
become: true
- name: Create Zoneminder config directory
file:
path: "{{ item }}"
owner: 911
group: 911
mode: '700'
state: directory
loop:
- "{{ zoneminder_config }}"
become: true
- name: Create Zoneminder data directory
file:
path: "{{ item }}"
owner: 911
group: 911
mode: '755'
state: directory
loop:
- "{{ zoneminder_data }}"
become: true

View File

@ -1,31 +0,0 @@
---
- name: Create jellyfin-config directory
file:
path: "{{ jellyfin.config }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
- name: Create jellyfin-cache directory
file:
path: "{{ jellyfin.cache }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
- name: Create jellyfin media directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ jellyfin.media.tv }}"
- "{{ jellyfin.media.movies }}"
- "{{ jellyfin.media.music }}"

View File

@ -1,11 +0,0 @@
---
- name: Create kuma-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ kuma_config }}"
become: true

View File

@ -1,13 +0,0 @@
---
- name: Create lidarr directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ lidarr_config }}"
- "{{ lidarr_media }}"
- "{{ lidarr_downloads }}"

View File

@ -1,24 +0,0 @@
---
- include_tasks: install.yml
- include_tasks: user_group_setup.yml
- name: Copy the compose file
template:
src: templates/{{ inventory_hostname }}/compose.yaml
dest: "{{ docker_compose_dir }}/compose.yaml"
register: compose
- include_tasks: "{{ inventory_hostname }}_compose.yml"
tags:
- reload_compose
- name: Update docker Images
shell:
cmd: "docker compose pull"
chdir: "{{ docker_compose_dir }}"
- name: Rebuilding docker images
shell:
cmd: "docker compose up -d --build"
chdir: "{{ docker_compose_dir }}"

View File

@ -1,5 +0,0 @@
---
- include_tasks: nginx-proxy-manager.yml
tags:
- nginx

View File

@ -1,13 +0,0 @@
---
- include_tasks: nginx-proxy-manager.yml
tags:
- nginx
- include_tasks: pihole.yml
tags:
- pihole
- include_tasks: gitea-runner.yml
tags:
- gitea-runner

View File

@ -1,14 +0,0 @@
---
- name: Create netdata dirs
file:
path: "{{ item }}"
owner: 1000
group: 1000
mode: '777'
state: directory
loop:
- "{{ netdata_config }}"
- "{{ netdata_cache }}"
- "{{ netdata_lib }}"
become: true

View File

@ -1,13 +0,0 @@
---
- name: Create nginx-data directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ nginx.paths.letsencrypt }}"
- "{{ nginx.paths.data }}"
become: yes

View File

@ -1,14 +0,0 @@
---
- include_tasks: nginx-proxy-manager.yml
tags:
- nginx
- include_tasks: pihole.yml
tags:
- pihole
- include_tasks: gitea-runner.yml
tags:
- gitea-runner

View File

@ -1,14 +0,0 @@
---
- name: Create pihole-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ docker_dir }}/pihole/etc-pihole/"
- "{{ docker_dir }}/pihole/etc-dnsmasq.d/"
become: true

View File

@ -1,22 +0,0 @@
---
- name: Create plex-config directory
file:
path: "{{ plex_config }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
- name: Create plex media directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ plex_tv }}"
- "{{ plex_movies }}"
- "{{ plex_music }}"

View File

@ -1,21 +0,0 @@
---
- name: Create prometheus dirs
file:
path: "{{ item }}"
owner: "{{ prometheus_puid }}"
group: "{{ prometheus_pgid }}"
mode: '755'
state: directory
loop:
- "{{ prometheus_config }}"
- "{{ prometheus_data }}"
become: true
- name: Place prometheus config
template:
owner: "{{ prometheus_puid }}"
group: "{{ prometheus_pgid}}"
src: "templates/aya01/prometheus/prometheus.yml.j2"
dest: "{{ prometheus_config }}/prometheus.yml"
mode: '644'
become: true

View File

@ -1,11 +0,0 @@
---
- name: Create prowlarr directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ prowlarr_config }}"

View File

@ -1,12 +0,0 @@
---
- name: Create qbit-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '775'
state: directory
loop:
- "{{ qbit_remote_config }}"
- "{{ qbit_downloads }}"
become: true

View File

@ -1,12 +0,0 @@
---
- name: Create qbit_torrentleech-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '775'
state: directory
loop:
- "{{ torrentleech_remote_config }}"
- "{{ qbit_downloads }}"
become: true

View File

@ -1,13 +0,0 @@
---
- name: Create radarr directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ radarr_config }}"
- "{{ radarr_media }}"
- "{{ radarr_downloads }}"

View File

@ -1,12 +0,0 @@
---
- name: Create soft-serve directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ softserve_data }}"
become: true

View File

@ -1,13 +0,0 @@
---
- name: Create sonarr directories
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes
loop:
- "{{ sonarr_config }}"
- "{{ sonarr_media }}"
- "{{ sonarr_downloads }}"

View File

@ -1,20 +0,0 @@
---
- name: Create swag-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
state: directory
loop:
- "{{ swag_config }}"
- name: Copy site-confs
template:
owner: "{{ puid }}"
group: "{{ pgid }}"
src: "{{ item }}"
dest: "{{ swag_remote_site_confs }}"
mode: '664'
loop: "{{ swag_site_confs }}"
become: true

View File

@ -1,18 +0,0 @@
---
- name: Create syncthing directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
loop:
- "{{ syncthing_data }}"
become: true
- name: Resolve inotify error for syncthing
template:
src: "templates/aya01/syncthing/syncthing.conf"
dest: "/etc/sysctl.d/syncthing.conf"
mode: "660"
become: true

View File

@ -1,9 +0,0 @@
---
- name: Create tautulli-config directory
file:
path: "{{ tautulli_config }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
mode: '755'
state: directory
become: yes

View File

@ -1,18 +0,0 @@
---
- name: Create traefik-config directory
file:
path: "{{ item }}"
owner: "{{ puid }}"
group: "{{ pgid }}"
state: directory
loop:
- "{{ docker_dir }}/traefik/etc-traefik/"
- "{{ docker_dir }}/traefik/var-log/"
- name: Copy traefik-config
template:
owner: 1000
src: "templates/common/traefik/etc-traefik/traefik.yml"
dest: "{{ traefik.config }}"
mode: '400'

View File

@ -1,25 +0,0 @@
---
- name: Ensure group "docker" exists
group:
name: docker
state: present
become: yes
- name: Append the group "docker" to "{{ user }}" groups
ansible.builtin.user:
name: "{{ user }}"
shell: /bin/bash
groups: docker
append: yes
become: yes
- name: Make sure that the docker folders exists
ansible.builtin.file:
path: "{{ item }}"
owner: "{{ user }}"
group: "{{ user }}"
state: directory
loop:
- "{{docker_compose_dir}}"
- "{{docker_dir}}"
become: yes

View File

@ -1,30 +0,0 @@
---
- name: Create zoneminder user
user:
name: zm
uid: '911'
shell: /bin/false
become: true
- name: Create Zoneminder config directory
file:
path: "{{ item }}"
owner: '911'
group: '911'
mode: '755'
state: directory
loop:
- "{{ zoneminder_config }}"
become: true
- name: Create Zoneminder data directory
file:
path: "{{ item }}"
owner: '911'
group: '911'
mode: '755'
state: directory
loop:
- "{{ zoneminder_data }}"
become: true

View File

@ -1,518 +0,0 @@
version: '3'
services:
nginx:
container_name: "{{nginx.host}}"
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
networks:
net: {}
ports:
- '{{nginx.endpoints.http}}:80'
- '{{nginx.endpoints.https}}:443'
- '{{nginx.endpoints.admin}}:81'
volumes:
- "{{nginx.paths.data}}:/data"
- "{{nginx.paths.letsencrypt}}:/etc/letsencrypt"
- '/var/run/docker.sock:/var/run/docker.sock'
pihole:
container_name: pihole
image: pihole/pihole:latest
restart: unless-stopped
depends_on:
- nginx
networks:
- net
ports:
- "53:53/tcp"
- "53:53/udp"
volumes:
- "/etc/localtime:/etc/localtime:ro"
- "{{ pihole_config }}:/etc/pihole/"
- "{{ pihole_dnsmasq }}:/etc/dnsmasq.d/"
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{timezone}}
- "WEBPASSWORD={{ vault_aya01_pihole_password }}"
- "ServerIP={{ host.ip }}"
- "INTERFACE=eth0"
- "DNS1=1.1.1.1"
- "DNS1=1.0.0.1"
dns:
- 127.0.0.1
- 1.1.1.1
cap_add:
- NET_ADMIN
syncthing:
image: syncthing/syncthing
container_name: syncthing
restart: unless-stopped
depends_on:
- pihole
networks:
- net
ports:
- 22000:22000/tcp # TCP file transfers
- 22000:22000/udp # QUIC file transfers
- 21027:21027/udp # Receive local discovery broadcasts
volumes:
- "{{syncthing_data}}:/var/syncthing"
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{timezone}}
hostname: syncthing
cupsd:
container_name: cupsd
image: olbat/cupsd
restart: unless-stopped
depends_on:
- pihole
networks:
- net
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{timezone}}
volumes:
- /var/run/dbus:/var/run/dbus
- "{{cupsd_config}}:/etc/cups"
kuma:
container_name: kuma
image: louislam/uptime-kuma:1
restart: unless-stopped
depends_on:
- pihole
networks:
- net
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{timezone}}
ports:
- "{{kuma_port}}:3001"
volumes:
- "{{ kuma_config }}:/app/data"
plex:
image: lscr.io/linuxserver/plex:latest
container_name: plex
restart: unless-stopped
depends_on:
- pihole
networks:
- net
devices:
- /dev/dri:/dev/dri
ports:
- "{{ plex_port }}:32400"
- "1900:1900"
- "3005:3005"
- "5353:5353"
- "32410:32410"
- "8324:8324"
- "32412:32412"
- "32469:32469"
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{timezone}}
- VERSION=docker
volumes:
- "{{ plex_config }}:/config"
- "{{ plex_tv }}:/tv:ro"
- "{{ plex_movies }}:/movies:ro"
- "{{ plex_music }}:/music:ro"
sonarr:
image: lscr.io/linuxserver/sonarr:latest
container_name: sonarr
restart: unless-stopped
depends_on:
- prowlarr
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
volumes:
- {{ sonarr_config }}:/config
- {{ sonarr_media }}:/tv #optional
- {{ sonarr_downloads }}:/downloads #optional
radarr:
image: lscr.io/linuxserver/radarr:latest
container_name: radarr
restart: unless-stopped
depends_on:
- prowlarr
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
volumes:
- {{ radarr_config }}:/config
- {{ radarr_media }}:/movies #optional
- {{ radarr_downloads }}:/downloads #optional
lidarr:
image: lscr.io/linuxserver/lidarr:latest
container_name: lidarr
restart: unless-stopped
depends_on:
- prowlarr
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
volumes:
- {{ lidarr_config }}:/config
- {{ lidarr_media }}:/music #optional
- {{ lidarr_downloads }}:/downloads #optional
prowlarr:
image: lscr.io/linuxserver/prowlarr:latest
container_name: prowlarr
restart: unless-stopped
depends_on:
- pihole
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
volumes:
- {{ prowlarr_config }}:/config
pastebin:
image: wantguns/bin
container_name: pastebin
restart: unless-stopped
depends_on:
- pihole
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
- ROCKET_PORT={{ bin_port }}
- HOST_URL={{ bin_host }}.{{ aya01_host }}.{{ local_domain }}
volumes:
- {{ bin_upload }}:/app/upload
tautulli:
image: lscr.io/linuxserver/tautulli:latest
container_name: tautulli
restart: unless-stopped
depends_on:
- plex
networks:
- net
environment:
- PUID={{ puid }}
- PGID={{ pgid}}
- TZ={{ timezone }}
ports:
- "{{ tautulli_port }}:8181"
volumes:
- {{ tautulli_config}}:/config
{{ gluetun_host }}:
image: qmcgaw/gluetun
container_name: {{ gluetun_host }}
restart: unless-stopped
networks:
- net
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun:/dev/net/tun
volumes:
- {{ gluetun_config }}:/gluetun
environment:
- PUID={{puid}}
- PGID={{pgid}}
- TZ={{ timezone }}
- VPN_SERVICE_PROVIDER=protonvpn
- UPDATER_VPN_SERVICE_PROVIDERS=protonvpn
- UPDATER_PERIOD=24h
- SERVER_COUNTRIES={{ gluetun_country }}
- OPENVPN_USER={{ vault_qbit_vpn_user }}+pmp
- OPENVPN_PASSWORD={{ vault_qbit_vpn_password }}
{{ torrentleech_host }}:
image: qbittorrentofficial/qbittorrent-nox
container_name: {{ torrentleech_host }}
restart: unless-stopped
depends_on:
- gluetun
- sonarr
- radarr
- lidarr
network_mode: "container:{{ gluetun_host }}"
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
- QBT_EULA="accept"
- QBT_WEBUI_PORT="{{ torrentleech_port }}"
volumes:
- {{ torrentleech_remote_config }}:/config
- {{ qbit_downloads }}:/downloads
{{qbit_host}}:
image: qbittorrentofficial/qbittorrent-nox
container_name: {{ qbit_host }}
restart: unless-stopped
depends_on:
- gluetun
- sonarr
- radarr
- lidarr
network_mode: "container:{{ gluetun_host }}"
environment:
- PUID={{ puid }}
- PGID={{ pgid }}
- TZ={{ timezone }}
- QBT_EULA="accept"
- QBT_WEBUI_PORT="{{ qbit_port }}"
volumes:
- {{ qbit_remote_config }}:/config
- {{ qbit_downloads }}:/downloads
{{ prometheus_host }}:
image: prom/prometheus
container_name: {{ prometheus_host }}
restart: unless-stopped
depends_on:
- pihole
networks:
- net
environment:
- PUID={{ prometheus_puid }}
- PGID={{ prometheus_pgid}}
- TZ={{ timezone }}
volumes:
- {{ prometheus_config }}:/etc/prometheus/
- prometheus_data:/prometheus/
{{ grafana_host }}:
image: grafana/grafana-oss
container_name: {{ grafana_host }}
restart: unless-stopped
user: "0:0"
depends_on:
- {{ prometheus_host }}
networks:
- net
environment:
- PUID={{ grafana_puid }}
- PGID={{ grafana_pgid }}
- TZ={{ timezone }}
volumes:
- {{ grafana_data }}:/var/lib/grafana/
- {{ grafana_config }}:/etc/grafana/
ddns-updater:
container_name: ddns-updater
image: "ghcr.io/qdm12/ddns-updater"
restart: unless-stopped
depends_on:
- pihole
networks:
net: {}
volumes:
- "{{ ddns_data }}:/updater/data/"
homeassistant:
container_name: homeassistant
image: "ghcr.io/home-assistant/home-assistant:stable"
restart: unless-stopped
depends_on:
- pihole
networks:
net: {}
volumes:
- "/etc/localtime:/etc/localtime:ro"
- "{{ ha_config }}:/config/"
privileged: true
ports:
- "{{ ha_port }}:8123"
- 4357:4357
- 5683:5683
- 5683:5683/udp
{{stirling.host}}:
container_name: {{stirling.host}}
image: frooodle/s-pdf:latest
restart: unless-stopped
depends_on:
- pihole
networks:
net: {}
{{ jellyfin.host }}:
container_name: {{ jellyfin.host }}
image: jellyfin/jellyfin
restart: 'unless-stopped'
depends_on:
- pihole
networks:
net: {}
devices:
- /dev/dri:/dev/dri
volumes:
- {{ jellyfin.config }}:/config
- {{ jellyfin.cache }}:/cache
- {{ jellyfin.media.tv }}:/tv:ro
- {{ jellyfin.media.movies }}:/movies:ro
- {{ jellyfin.media.music }}:/music:ro
ports:
- "{{ jellyfin.port }}:{{ jellyfin.port }}"
broker:
container_name: {{ paperless.redis.host }}
image: docker.io/library/redis:7
restart: unless-stopped
depends_on:
- pihole
networks:
- net
volumes:
- {{paperless.redis.data}}:/data
db:
container_name: {{ paperless.db.host }}
image: docker.io/library/postgres:15
restart: unless-stopped
depends_on:
- pihole
networks:
- net
volumes:
- {{paperless.db.data}}:/var/lib/postgresql/data
environment:
POSTGRES_DB: {{ paperless.db.db }}
POSTGRES_USER: {{ paperless.db.user }}
POSTGRES_PASSWORD: {{ paperless.db.password }}
paperless:
container_name: {{ paperless.host }}
image: ghcr.io/paperless-ngx/paperless-ngx:latest
restart: unless-stopped
depends_on:
- db
- broker
networks:
- net
healthcheck:
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:{{ paperless.port }}"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- {{ paperless.data.data }}:/usr/src/paperless/data
- {{ paperless.data.media }}:/usr/src/paperless/media
- {{ paperless.data.export }}:/usr/src/paperless/export
- {{ paperless.data.consume }}:/usr/src/paperless/consume
environment:
- "PAPERLESS_REDIS=redis://broker:6379"
- "PAPERLESS_DBHOST=db"
- "PAPERLESS_DBUSER={{paperless.db.user}}"
- "PAPERLESS_DBPASS={{paperless.db.password}}"
- "USERMAP_UID={{ puid }}"
- "USERMAP_GID={{ pgid}}"
- "PAPERLESS_URL=https://{{paperless.host}}.{{ host.hostname }}.{{ backup_domain }}"
- "PAPERLESS_TIME_ZONE={{ timezone }}"
- "PAPERLESS_OCR_LANGUAGE=deu"
{{ homarr.host }}:
container_name: {{ homarr.host }}
image: ghcr.io/ajnart/homarr:latest
restart: unless-stopped
depends_on:
- pihole
networks:
- net
volumes:
- {{ homarr.volumes.configs }}:/app/data/configs
- {{ homarr.volumes.data }}:/data
- {{ homarr.volumes.icons }}:/app/public/icons
{{ gitea.host }}:
container_name: {{ gitea.host }}
image: gitea/gitea:1.20.5-rootless
restart: unless-stopped
depends_on:
- pihole
networks:
- net
volumes:
- {{ gitea.volumes.data }}:/var/lib/gitea
- {{ gitea.volumes.config }}:/etc/gitea
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "{{ gitea.ports.http }}:3000"
- "{{ gitea.ports.ssh }}:2222"
{{ gitea.runner.host }}:
container_name: {{ gitea.runner.host }}
image: gitea/act_runner:nightly
restart: unless-stopped
depends_on:
- {{ gitea.host }}
networks:
- net
volumes:
- "{{ gitea.runner.config_file }}:/config.yaml"
- "{{ gitea.runner.volumes.data }}:/data"
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
- "GITEA_INSTANCE_URL={{ gitea.url }}"
- "GITEA_RUNNER_REGISTRATION_TOKEN={{ gitea.runner.token }}"
- "GITEA_RUNNER_NAME: {{ gitea.runner.name }}"
- "CONFIG_FILE: /config.yaml"
{{ jellyseer.host }}:
container_name: {{ jellyseer.host }}
image: fallenbagel/jellyseerr:latest
restart: unless-stopped
environment:
- LOG_LEVEL=info
- TZ={{ timezone }}
depends_on:
- {{ jellyfin.host }}
networks:
- net
volumes:
- {{ jellyseer.volumes.config }}:/app/config
networks:
zoneminder:
driver: bridge
ipam:
driver: default
config:
- subnet: {{ zoneminder_network }}
net:
driver: bridge
ipam:
driver: default
config:
- subnet: {{ docker_network }}
volumes:
prometheus_data: {}

View File

@ -1,196 +0,0 @@
#
# Configuration file for the CUPS scheduler. See "man cupsd.conf" for a
# complete description of this file.
#
# Log general information in error_log - change "warn" to "debug"
# for troubleshooting...
LogLevel warn
PageLogFormat
ServerAlias *
# Specifies the maximum size of the log files before they are rotated. The value "0" disables log rotation.
MaxLogSize 0
# Default error policy for printers
ErrorPolicy retry-job
# Allow remote access
Listen *:631
# Show shared printers on the local network.
Browsing Yes
BrowseLocalProtocols dnssd
# Default authentication type, when authentication is required...
DefaultAuthType Basic
DefaultEncryption IfRequested
# Web interface setting...
WebInterface Yes
# Timeout after cupsd exits if idle (applied only if cupsd runs on-demand - with -l)
IdleExitTimeout 60
# Restrict access to the server...
<Location />
Order allow,deny
Allow all
</Location>
# Restrict access to the admin pages...
<Location /admin>
Order allow,deny
Allow all
</Location>
# Restrict access to configuration files...
<Location /admin/conf>
AuthType Default
Require user @SYSTEM
Order allow,deny
Allow all
</Location>
# Restrict access to log files...
<Location /admin/log>
AuthType Default
Require user @SYSTEM
Order allow,deny
Allow all
</Location>
# Set the default printer/job policies...
<Policy default>
# Job/subscription privacy...
JobPrivateAccess default
JobPrivateValues default
SubscriptionPrivateAccess default
SubscriptionPrivateValues default
# Job-related operations must be done by the owner or an administrator...
<Limit Create-Job Print-Job Print-URI Validate-Job>
Order deny,allow
</Limit>
<Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
# All administration operations require an administrator to authenticate...
<Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# All printer operations require a printer operator to authenticate...
<Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# Only the owner or an administrator can cancel or authenticate a job...
<Limit Cancel-Job CUPS-Authenticate-Job>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
<Limit All>
Order deny,allow
</Limit>
</Policy>
# Set the authenticated printer/job policies...
<Policy authenticated>
# Job/subscription privacy...
JobPrivateAccess default
JobPrivateValues default
SubscriptionPrivateAccess default
SubscriptionPrivateValues default
# Job-related operations must be done by the owner or an administrator...
<Limit Create-Job Print-Job Print-URI Validate-Job>
AuthType Default
Order deny,allow
</Limit>
<Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
AuthType Default
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
# All administration operations require an administrator to authenticate...
<Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# All printer operations require a printer operator to authenticate...
<Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# Only the owner or an administrator can cancel or authenticate a job...
<Limit Cancel-Job CUPS-Authenticate-Job>
AuthType Default
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
<Limit All>
Order deny,allow
</Limit>
</Policy>
# Set the kerberized printer/job policies...
<Policy kerberos>
# Job/subscription privacy...
JobPrivateAccess default
JobPrivateValues default
SubscriptionPrivateAccess default
SubscriptionPrivateValues default
# Job-related operations must be done by the owner or an administrator...
<Limit Create-Job Print-Job Print-URI Validate-Job>
AuthType Negotiate
Order deny,allow
</Limit>
<Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
AuthType Negotiate
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
# All administration operations require an administrator to authenticate...
<Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# All printer operations require a printer operator to authenticate...
<Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
AuthType Default
Require user @SYSTEM
Order deny,allow
</Limit>
# Only the owner or an administrator can cancel or authenticate a job...
<Limit Cancel-Job CUPS-Authenticate-Job>
AuthType Negotiate
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>
<Limit All>
Order deny,allow
</Limit>
</Policy>
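The `<Location /admin>` block above leaves the web admin pages open to any host (`Allow all`), consistent with the `Listen *:631` remote-access setting. For reference, a tighter variant using the standard cupsd.conf `@LOCAL` name class (a sketch, not part of this config) would look like:

```
<Location /admin>
  Order allow,deny
  Allow @LOCAL
</Location>
```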


@@ -1,11 +0,0 @@
{
  "settings": [
    {
      "provider": "namecheap",
      "domain": "{{ local_domain }}",
      "host": "{{ local_subdomains }}",
      "password": "{{ vault_ddns_local_password }}",
      "provider_ip": true
    }
  ]
}

File diff suppressed because it is too large.


@@ -1,18 +0,0 @@
devices:
  - name: mikrotik
    address: "{{ e_mikrotik_ip }}"
    user: "{{ prm_user }}"
    password: "{{ vault_prm_user_password }}"

features:
  bgp: false
  dhcp: true
  dhcpv6: true
  dhcpl: true
  routes: true
  pools: true
  optics: true


@@ -1,46 +0,0 @@
# Sample config for Prometheus.
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: '{{ user }}'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'node'
    scrape_interval: 10s
    scrape_timeout: 10s
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['{{ aya01_ip }}:{{node_exporter.port}}']
      - targets: ['{{ mii_ip }}:{{node_exporter.port}}']
      - targets: ['{{ pi_ip }}:{{node_exporter.port}}']
      - targets: ['{{ naruto_ip }}:{{node_exporter.port}}']
      - targets: ['{{ inko_ip }}:{{node_exporter.port}}']

  - job_name: 'mikrotik'
    static_configs:
      - targets:
          - {{ snmp_exporter_target }}
    metrics_path: /snmp
    params:
      module: [mikrotik]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: {{ aya01_ip }}:{{ snmp_exporter_port }} # The SNMP exporter's real hostname:port.

  - job_name: 'SMART'
    static_configs:
      - targets: ['{{ aya01_ip }}:{{smart_exporter.port}}']
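The `relabel_configs` in the mikrotik job implement the usual SNMP-exporter indirection: the logical scrape target becomes a `?target=` query parameter, while the actual HTTP request is redirected to the exporter. A minimal Python sketch of those three rules applied in order (the exporter address `192.168.1.10:9116` and target `192.0.2.1` are made-up stand-ins for the templated `{{ aya01_ip }}:{{ snmp_exporter_port }}` and `{{ snmp_exporter_target }}`):

```python
SNMP_EXPORTER = "192.168.1.10:9116"  # stand-in for the templated exporter address

def relabel(labels):
    """Apply the three relabel rules from the 'mikrotik' job, in order."""
    labels = dict(labels)
    # 1. source_labels: [__address__] -> target_label: __param_target
    labels["__param_target"] = labels["__address__"]
    # 2. source_labels: [__param_target] -> target_label: instance
    labels["instance"] = labels["__param_target"]
    # 3. fixed replacement -> target_label: __address__
    labels["__address__"] = SNMP_EXPORTER
    return labels

# Prometheus ends up scraping the exporter at /snmp?module=mikrotik&target=<device>
print(relabel({"__address__": "192.0.2.1"}))
```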


@@ -1 +0,0 @@
fs.inotify.max_user_watches=204800


@@ -1,36 +0,0 @@
## traefik.yml

# Entry Points
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

# Docker configuration backend
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedbydefault: "false"

# API and dashboard configuration
api:
  insecure: true
  dashboard: true

log:
  filePath: "/var/log/traefik.log"

accessLog:
  filePath: "/var/log/access.log"

certificatesResolvers:
  myresolver:
    acme:
      email: "me+cert@tudattr.dev"
      storage: "/letsencrypt/acme.json"
      dnsChallenge:
        provider: "namecheap"

metrics:
  prometheus:
    entrypoint: "traefik"
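Because the Docker provider sets `exposedbydefault` to false, containers must opt in to routing via labels. A hedged example of how a container would attach to the `websecure` entry point and the `myresolver` ACME resolver (the service name `whoami` and hostname are illustrative, not from this repository):

```yaml
# docker-compose labels for a hypothetical service; only "websecure"
# and "myresolver" come from the traefik.yml above.
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
  - "traefik.http.routers.whoami.entrypoints=websecure"
  - "traefik.http.routers.whoami.tls.certresolver=myresolver"
```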


@@ -1,25 +0,0 @@
version: '3'

services:
  nginx:
    container_name: "{{nginx.host}}"
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    networks:
      net: {}
    ports:
      - '{{nginx.endpoints.http}}:80'
      - '{{nginx.endpoints.https}}:443'
      - '{{nginx.endpoints.admin}}:81'
    volumes:
      - "{{nginx.paths.data}}:/data"
      - "{{nginx.paths.letsencrypt}}:/etc/letsencrypt"
      - '/var/run/docker.sock:/var/run/docker.sock'

networks:
  net:
    driver: bridge
    ipam:
      # driver: default
      config:
        - subnet: 172.16.69.0/24
          gateway: 172.16.69.1
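The `ipam` block pins the bridge network to a fixed subnet so container addresses stay predictable. A small sanity check (plain Python stdlib, nothing Docker-specific) that the declared gateway actually falls inside that subnet:

```python
import ipaddress

subnet = ipaddress.ip_network("172.16.69.0/24")
gateway = ipaddress.ip_address("172.16.69.1")

assert gateway in subnet      # the gateway must live inside the subnet
print(subnet.num_addresses)   # a /24 holds 256 addresses
```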

Some files were not shown because too many files have changed in this diff.