Compare commits

...

246 Commits

SHA1 Message Date
dcb0e83d6c qbt: tune 2026-03-02 23:33:55 -05:00
57080293bb zfs: tuning 2026-03-02 23:33:15 -05:00
b653debbae qbt: disable UPnP 2026-03-02 20:01:54 -05:00
dfe5392543 qbt: delete incomplete subvolume + tweaks 2026-03-02 14:22:17 -05:00
9f949f13d1 minecraft: update mods + add modernfix + debugify 2026-02-28 02:30:19 -05:00
59080fe1b3 fmt 2026-02-28 02:25:38 -05:00
12fca8840d update 2026-02-28 01:57:42 -05:00
49f06fc26c arr-init: extract to standalone flake repo 2026-02-27 15:39:19 -05:00
2c0811cfe9 minecraft: make more responsive 2026-02-27 00:04:07 -05:00
9692fe5f08 update 2026-02-25 19:01:56 -05:00
c142b5d045 ntfy-alerts: suppress notifications for sanoid 2026-02-25 02:10:37 -05:00
16c84fdcb6 zfs: fix sanoid dataset name for jellyfin cache 2026-02-24 21:21:24 -05:00
196f06e41f flake: expose tests as checks output 2026-02-24 14:51:11 -05:00
8013435d99 ntfy-alerts: init 2026-02-24 14:44:00 -05:00
28e3090c72 matrix: update 2026-02-24 13:05:00 -05:00
a22c5b30fe update 2026-02-24 13:04:55 -05:00
c2908f594c matrix: update 2026-02-23 15:24:31 -05:00
9df3f3cae9 update 2026-02-23 15:13:47 -05:00
ea75dad5ba matrix: update 2026-02-21 22:47:33 -05:00
1e25d86d44 update 2026-02-21 22:19:36 -05:00
23475927a1 qbt: increase ConnectionSpeed 2026-02-20 15:27:56 -05:00
fe4040bf3b matrix: update 2026-02-20 15:25:44 -05:00
d91b651152 formating 2026-02-20 15:19:46 -05:00
0a3f93c98d qbt: fix permissions 2026-02-20 14:12:13 -05:00
304ad7f308 qbt: tweak 2026-02-20 11:05:53 -05:00
4fe33b9b32 update 2026-02-19 23:15:26 -05:00
0a0c14993d qbt: Coalesce Read Write 2026-02-19 23:14:27 -05:00
155ebbafcd qbittorrent: enable queueing and AutoTMM 2026-02-19 23:12:18 -05:00
2fed80cdb2 firewall: trust wg-br interface 2026-02-19 23:12:18 -05:00
318908d8ca arr-init: add module for API-based configuration 2026-02-19 23:12:16 -05:00
c35a65e1bf recyclarr: init 2026-02-19 19:12:19 -05:00
af3a3d738e jellyseerr: init 2026-02-19 19:12:19 -05:00
879a3278ee bazarr: init 2026-02-19 19:12:19 -05:00
89d939d37f radarr: init 2026-02-19 19:12:19 -05:00
c290671b52 sonarr: init 2026-02-19 19:12:19 -05:00
ba09476295 prowlarr: init 2026-02-19 19:12:01 -05:00
9b715ba110 qbt: GlobalMaxRatio 6.0 -> 7.0 2026-02-17 22:56:48 -05:00
f6628b9302 jellyfin-qbittorrent-monitor: add stream headroom 2026-02-17 14:31:34 -05:00
7484a11535 jellyfin-qbittorrent-monitor: fix upload 2026-02-17 14:00:05 -05:00
d46ccc8245 update 2026-02-17 00:27:56 -05:00
1988f1a28d minecraft: update mods 2026-02-16 21:57:49 -05:00
9a9ecc6556 jellyfin-qbittorrent-monitor: dynamic bandwidth management 2026-02-15 23:33:45 -05:00
cf3e876f27 matrix: update 2026-02-15 11:51:38 -05:00
935ca6361b update 2026-02-14 22:50:45 -05:00
aa219dcfff matrix: update 2026-02-14 22:49:50 -05:00
62a91a8615 fmt 2026-02-13 15:26:27 -05:00
c01b2336a7 matrix: fix elementx calls
Applies patch from: https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1370
That I am working on. Also updates version to latest (at this time) git
2026-02-13 15:26:17 -05:00
f5abfd5bf6 fix(no-rgb): handle transient hardware unavailability during deploy 2026-02-12 18:48:41 -05:00
82add97a80 feat(tmpfiles): defer per-service file permissions to reduce boot time 2026-02-12 18:48:29 -05:00
84cbe82cb0 update 2026-02-12 12:45:28 -05:00
4e9e3f627b matrix: setup livekit
Needed for element X calls.
2026-02-11 22:14:12 -05:00
9cc63fcfb8 impermanence: fix /etc permissions after re-deploy 2026-02-11 15:41:30 -05:00
35f0c08ee2 ntfy: fix directory 2026-02-10 18:47:17 -05:00
0f1e249127 ntfy 2026-02-10 17:39:01 -05:00
f3e972b3a4 matrix: fix registration 2026-02-10 14:49:58 -05:00
e28f8a70df matrix: add coturn 2026-02-10 14:49:50 -05:00
f27068a974 matrix: fix private folder 2026-02-10 14:22:53 -05:00
795c5b3d41 Revert "matrix: disable"
This reverts commit a887edf510.
2026-02-10 14:08:43 -05:00
a887edf510 matrix: disable 2026-02-10 13:55:45 -05:00
4f71f61c4b matrix: fix continuwuity module 2026-02-10 13:54:22 -05:00
3187130cd3 update 2026-02-10 12:56:12 -05:00
11ab6de305 re-add matrix 2026-02-10 12:49:56 -05:00
b67416a74b syncthing: add grayjay backups 2026-02-06 14:43:08 -05:00
954e124b49 potentially fix fail2ban 2026-02-05 15:11:17 -05:00
a7d6018592 update 2026-02-05 01:33:55 -05:00
37fdf13a3f update 2026-02-03 12:25:24 -05:00
8176376f48 update 2026-02-01 21:30:50 -05:00
58c804ea41 update 2026-01-30 00:43:28 -05:00
a61fedb015 fail2ban: ignoreip from local network 2026-01-27 18:51:08 -05:00
2183ea8363 update 2026-01-26 23:09:22 -05:00
27ffe38ed3 xmrig: 12 threads 2026-01-26 17:51:16 -05:00
a0e6b8428e xmrig: 1gb pages 2026-01-26 14:25:25 -05:00
0b01fc3f28 xmrig 2026-01-26 14:15:27 -05:00
016520c579 update 2026-01-23 12:56:54 -05:00
47cc12f4ed cleanup 2026-01-23 00:29:24 -05:00
a766e67fec cleanup minecraft test 2026-01-22 22:40:40 -05:00
fdb1b559bc wg: don't hardcode namespaceAddress 2026-01-22 14:56:36 -05:00
3026897113 Revert "minecraft: fail2ban"
This reverts commit a23b3d8c5f.
2026-01-22 14:25:52 -05:00
a23b3d8c5f minecraft: fail2ban 2026-01-21 20:21:23 -05:00
4bf05f8b51 hostPlatform -> targetPlatform 2026-01-21 15:25:25 -05:00
d15ec9fe0b fix squaremap 2026-01-21 14:26:39 -05:00
89627e1299 update 2026-01-20 23:08:55 -05:00
897f9b2642 flake: impermanence nixpkgs follow nixpkgs 2026-01-20 23:08:41 -05:00
f87e395225 jellyfin-qbittorrent-monitor: don't use mock qbittorrent 2026-01-20 23:05:15 -05:00
9770e6d667 jellyfin-qbittorrent-monitor: fix mock qbittorrent 2026-01-20 22:38:18 -05:00
8ed67464d0 fmt 2026-01-20 19:48:20 -05:00
da6b4d1915 tests: fix all fail2ban NixOS VM tests
- Add explicit iptables banaction in security.nix for test compatibility
- Force IPv4 in all curl requests to prevent IPv4/IPv6 mismatch issues
- Fix caddy test: use basic_auth directive (not basicauth)
- Override service ports in tests to match direct connections (not via Caddy)
- Vaultwarden: override ROCKET_ADDRESS and ROCKET_LOG for external access
- Immich: increase VM memory to 4GB for stability
- Jellyfin: create placeholder log file and reload fail2ban after startup
- Add tests.nix entries for all 6 fail2ban tests

All tests now pass: ssh, caddy, gitea, vaultwarden, immich, jellyfin
2026-01-20 18:41:01 -05:00
f2ef562724 fail2ban: implement for jellyfin 2026-01-20 14:46:49 -05:00
d9236152aa fail2ban: implement for immich 2026-01-20 14:39:38 -05:00
ba45743ea0 fail2ban: implement for gitea 2026-01-20 14:39:29 -05:00
0214621a58 fail2ban: implement for bitwarden 2026-01-20 14:39:23 -05:00
aa2c61dcd3 fail2ban: implement for caddy basic auth 2026-01-20 14:35:20 -05:00
b550e495c8 nit: move fail2ban to security module 2026-01-20 14:11:15 -05:00
5ad5aff5e8 ssh: add fail2ban 2026-01-20 14:05:02 -05:00
d9a1a01f7f jellyfin-qbittorrent-monitor: handle qbittorrent going down state 2026-01-19 02:42:18 -05:00
eb5d0bb093 security things 2026-01-18 02:36:00 -05:00
c6b39a98cd update 2026-01-18 01:03:18 -05:00
11cacffe7d update 2026-01-15 14:01:27 -05:00
4881780186 monero: move back to hdds 2026-01-15 13:51:25 -05:00
f83e1170af syncthing 2026-01-13 16:55:19 -05:00
a93c789278 jellyfin-qbittorrent-monitor: don't mock out jellyfin for testing 2026-01-13 14:15:11 -05:00
df1d983b63 rework qbittorrent jellyfin monitor test 2026-01-13 13:41:23 -05:00
de89e70a05 impermanence: fix /etc/zfs cache 2026-01-13 13:13:49 -05:00
56fe61011a impermanence: fix persistant ssh host keys 2026-01-13 13:10:19 -05:00
528782ae32 update 2026-01-13 12:39:29 -05:00
8e32b73985 update webpage 2026-01-12 20:08:03 -05:00
b5a63da11e fix pkgs.system deprecation 2026-01-12 15:28:38 -05:00
aeab0a6f5b nixfmt-rfc-style -> nixfmt-tree 2026-01-12 15:23:28 -05:00
28623c3d97 update 2026-01-12 13:07:25 -05:00
513e426f89 nit: cleanup imports 2026-01-09 12:52:16 -05:00
aaef39d31a ytbn: use own nixpkgs 2026-01-08 21:50:48 -05:00
5138c2da80 impermanence: fix home directory declaration 2026-01-08 21:47:22 -05:00
6557a81167 update 2026-01-08 21:46:01 -05:00
68f1f6bbc4 cleanup flake deps 2026-01-08 06:24:58 -05:00
1048f261d4 vaapiVdpau -> libva-vdpau-driver 2026-01-08 06:17:48 -05:00
16d3050eb8 fully remove llama-cpp 2026-01-08 05:41:10 -05:00
d4172a5886 25.05 -> 25.11 2025-12-30 16:38:30 -05:00
a549b01111 organize 2025-12-28 15:49:18 -05:00
b5d2e3188d update 2025-12-20 01:17:09 -05:00
4e76882106 Revert "wg.conf: us-mia-wg-002 -> us-mia-wg-001"
This reverts commit 507ee6d57a.
2025-12-18 01:31:19 -05:00
507ee6d57a wg.conf: us-mia-wg-002 -> us-mia-wg-001
There are issues with mullvad's us-mia-wg-002 node
I emailed then about it. For now, moving to
us-mia-wg-001.
2025-12-17 23:44:20 -05:00
afa8981d91 update 2025-12-16 02:37:12 -05:00
6c617ef56b minecraft: update to 1.21.11 2025-12-13 21:35:37 -05:00
c7d884aca0 list-usb-drives: remove (never worked) 2025-12-13 02:24:46 -05:00
74d0620334 ssh: fix ssh_host_key perms 2025-12-12 21:18:51 -05:00
a5112e322e ssh: move to seperate file 2025-12-12 21:09:39 -05:00
5ae54b8981 update 2025-12-12 15:53:53 -05:00
ca4d0c414f monero: move to ssds 2025-12-08 23:19:25 -05:00
66b9c6472e Pin lanzaboote version to fix upstream issue
See: https://github.com/nix-community/lanzaboote/issues/518
2025-12-08 22:20:37 -05:00
e22558ac06 update 2025-12-08 22:11:30 -05:00
ea9cb09550 update 2025-12-05 23:55:58 -05:00
e8b4bc6b81 nix: add gc 2025-12-05 23:22:11 -05:00
3386fd9716 update 2025-12-05 14:13:40 -05:00
1950bcf6f6 update 2025-12-04 18:26:06 -05:00
32eac71ba0 update 2025-12-03 20:33:34 -05:00
78c92f1ae7 update 2025-12-03 18:19:57 -05:00
4a12643817 graphing-calculator: init 2025-12-03 14:10:50 -05:00
3914a29e0c persistent: streamline installation process with persistent.tar 2025-12-02 00:56:44 -05:00
7897d44bfd update + senior project website 2025-12-01 10:53:11 -05:00
a428a7163c update 2025-11-29 23:35:46 -05:00
fc39655e01 update 2025-11-25 13:08:08 -05:00
2656b8db19 zfs: expand testing to include a failing multi case 2025-11-24 16:19:25 -05:00
089fac3623 update (again) 2025-11-24 13:18:35 -05:00
039fa960f3 update + senior project website 2025-11-24 11:38:40 -05:00
31a9feb98c update 2025-11-21 12:06:43 -05:00
05520cc177 minecraft: update lithium 2025-11-21 12:06:35 -05:00
670430a223 secrets: delete old file 2025-11-20 21:59:09 -05:00
bc55d4203f install: cleanup key and secrets handling 2025-11-20 21:02:33 -05:00
8d420ea86b update 2025-11-20 19:12:40 -05:00
0c4baab0ef move to generic /services 2025-11-20 16:57:38 -05:00
363bff8c40 fix: disable serial-getty
keeps spamming dmesg with stupid messages.
2025-11-20 16:34:47 -05:00
223910744a zfs: fix qbittorrent 2025-11-20 16:30:37 -05:00
ae5189b6c6 zfs: HEAVILY REFACTOR subvolume handling 2025-11-20 16:10:35 -05:00
dd9042ae95 add monero service 2025-11-20 00:57:02 -05:00
86753581f1 update 2025-11-19 11:13:59 -05:00
39418b1bb3 update 2025-11-18 13:12:09 -05:00
90fb711115 zfs: fix zfs escaped spaces test 2025-11-17 10:37:45 -05:00
3408aab609 update 2025-11-17 08:30:36 -05:00
f514d5f653 enable kmscon 2025-11-14 12:11:13 -05:00
935252d8c3 update 2025-11-14 12:03:45 -05:00
ba6f47dde9 jellyfin-qbittorrent-monitor: fix jellyfin api key file perms 2025-11-13 02:43:42 -05:00
097b89a14a Revert "openrgb: override mbedtls_2 with mbedtls"
This reverts commit b1b9a3755f.
2025-11-11 00:46:39 -05:00
50d70e8569 update 2025-11-11 00:12:49 -05:00
3f89ee0147 update 2025-11-09 14:35:35 -05:00
98b2490840 update 2025-11-07 13:14:51 -05:00
65f903c20b update 2025-11-07 02:07:01 -05:00
acc4677982 minecraft: update mods 2025-11-07 00:42:01 -05:00
a528317e08 set gpu module to "xe"
Possibly could fix i915 driver issues I'm having
with my arc a380?

Panic:
```
Unexpected send: action=0x1000
WARNING: CPU: 7 PID: 62977 at drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c:844 intel_guc_ct_send+0x67a/0x7d0 [i915]
Modules linked in: bluetooth ecdh_generic ecc crc16 xt_nat nft_chain_nat nf_nat veth wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel msr af_packet mei_hdcp mei_pxp cfg80211 mei_gsc mei_me rfkill mei xe snd_hda_codec_hdmi edac_mce_amd edac_core intel_rapl_msr amd_atl intel_rapl_common snd_hda_intel crct10dif_pclmul polyval_clmulni polyval_generic snd_intel_dspcfg ghash_clmulni_intel snd_intel_sdw_acpi r8169 sha512_ssse3 sha256_ssse3 snd_hda_codec sha1_ssse3 snd_hda_core aesni_intel drm_gpuvm snd_hwdep gf128mul drm_exec realtek gpu_sched snd_pcm crypto_simd drm_suballoc_helper cryptd mdio_devres drm_ttm_helper wmi_bmof snd_timer of_mdio snd fixed_phy soundcore fwnode_mdio rapl sp5100_tco libphy watchdog xt_conntrack input_leds joydev led_class evdev mac_hid tiny_power_button rtc_cmos nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 gpio_amdpt onboard_usb_dev gpio_generic xt_tcpudp button uas ipt_rpfilter xt_pkttype nft_compat nf_tables libcrc32c crc32c_generic crc32c_intel sch_fq_codel ee1004 i2c_piix4 i2c_smbus i2c_dev atkbd libps2 serio vivaldi_fmap loop cpufreq_powersave tun tap macvlan bridge stp llc zenpower(O) kvm_amd ccp rng_core kvm irqbypass fuse efi_pstore configfs nfnetlink dmi_sysfs ip_tables x_tables nls_iso8859_1 nls_cp437 vfat f2fs fat dm_snapshot dm_bufio hid_generic dm_mod dax crc32_generic lz4hc_compress sd_mod usbhid hid usb_storage lz4_compress i915 ahci i2c_algo_bit drm_buddy libahci video libata ttm intel_gtt nvme scsi_mod drm_display_helper nvme_core xhci_pci nvme_auth cec xhci_hcd crc32_pclmul scsi_common wmi zfs(PO) spl(O) efivarfs autofs4
CPU: 7 UID: 995 PID: 62977 Comm: av:hevc:df0 Tainted: P        W  O       6.12.50-hardened1 #1-NixOS
Tainted: [P]=PROPRIETARY_MODULE, [W]=WARN, [O]=OOT_MODULE
Hardware name: To Be Filled By O.E.M. B550M Pro4/B550M Pro4, BIOS P3.40 01/18/2024
RIP: 0010:intel_guc_ct_send+0x67a/0x7d0 [i915]
Code: 87 d0 06 00 00 3c 01 0f 87 d7 2f 17 00 a8 01 0f 85 42 ff ff ff 90 48 8b 44 24 18 48 c7 c7 50 43 34 c1 8b 30 e8 07 bf 89 d7 90 <0f> 0b 90 90 e9 24 ff ff ff 48 8b 7c 24 20 e8 63 45 5d d8 48 8d 7c
RSP: 0018:ffffd57551c770c0 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff8e019b6b8508 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffd57551c77158 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff8e01a5680b80
R13: ffff8e019b6b82a0 R14: ffff8e018b0c7004 R15: ffff8e019b6b82a0
FS:  00006b26fb1596c0(0000) GS:ffff8e107ef80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00006b26cc2b8438 CR3: 00000003c0dde000 CR4: 0000000000f50ef0
PKRU: 55555554
Call Trace:
 <TASK>
 ? srso_alias_return_thunk+0x5/0xfbef5
 __guc_add_request+0xd2/0x2c0 [i915]
 guc_submit_request+0x1bc/0x210 [i915]
 submit_notify+0xfd/0x150 [i915]
 __i915_sw_fence_complete+0x3a/0x210 [i915]
 __i915_request_queue+0x51/0x70 [i915]
 i915_request_add+0x64/0xe0 [i915]
 intel_context_migrate_copy+0x39e/0xac0 [i915]
 __i915_ttm_move+0x821/0xa00 [i915]
 i915_ttm_move+0x348/0x470 [i915]
 ? unmap_mapping_range+0x85/0x150
 ttm_bo_handle_move_mem+0xe1/0x1d0 [ttm]
 ttm_bo_validate+0xde/0x190 [ttm]
 ? srso_alias_return_thunk+0x5/0xfbef5
 __i915_ttm_get_pages+0x9f/0x1b0 [i915]
 i915_ttm_get_pages+0xca/0x180 [i915]
 ? srso_alias_return_thunk+0x5/0xfbef5
 ? srso_alias_return_thunk+0x5/0xfbef5
 __i915_gem_object_get_pages+0x3a/0x50 [i915]
 i915_vma_pin_ww+0x718/0x9c0 [i915]
 eb_validate_vmas+0x192/0xaa0 [i915]
 ? srso_alias_return_thunk+0x5/0xfbef5
 i915_gem_do_execbuffer+0xfc9/0x2890 [i915]
 i915_gem_execbuffer2_ioctl+0x16b/0x290 [i915]
 ? __pfx_i915_gem_execbuffer2_ioctl+0x10/0x10 [i915]
 drm_ioctl_kernel+0xb8/0x110
 drm_ioctl+0x2c6/0x550
 ? __pfx_i915_gem_execbuffer2_ioctl+0x10/0x10 [i915]
 __x64_sys_ioctl+0x9c/0xe0
 do_syscall_64+0xd5/0x210
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x6b270a88cf0f
Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 28 48 8b 44 24 18 64 48 2b 04 25 28 00 00
RSP: 002b:00006b26fb13f560 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00006b26cc2b4540 RCX: 00006b270a88cf0f
RDX: 00006b26fb13f650 RSI: 00000000c0406469 RDI: 0000000000000003
RBP: 00006b26fb13f650 R08: 0000000000000000 R09: 0000000000000000
R10: 00006b26cc263d10 R11: 0000000000000246 R12: 00000000c0406469
R13: 0000000000000003 R14: 000000003c959460 R15: 000000003c948860
 </TASK>
---[ end trace 0000000000000000 ]---
```
2025-11-06 16:41:51 -05:00
94a98349c5 update 2025-11-05 13:20:42 -05:00
ad5ff98841 update 2025-11-05 00:42:23 -05:00
312db92676 update 2025-11-03 13:21:05 -05:00
1fb72c2674 update 2025-11-02 19:50:05 -05:00
de8bec0353 update 2025-10-31 14:50:49 -04:00
0364bd5aeb update 2025-10-30 14:48:00 -04:00
83b3f4de85 secureboot fixes I think 2025-10-30 00:23:32 -04:00
e2ba51580b networking: temporarily use 192 address 2025-10-29 22:05:12 -04:00
ee628b296c update senior project website 2025-10-27 15:05:03 -04:00
0128b4c104 update 2025-10-27 00:54:00 -04:00
a910a30c01 minecraft: speedup test 2025-10-26 21:49:00 -04:00
376ea182cb minecraft: update mods 2025-10-26 15:50:35 -04:00
6ecd228a58 jellyfin-qbittorrent-monitor: nit with test 2025-10-24 18:14:40 -04:00
f7c2c441ac minecraft: fix nix test 2025-10-24 14:40:28 -04:00
a455d592b4 zfs_ensure_mounted: cleanup test 2025-10-24 13:46:22 -04:00
8aabd1466e jellyfin-qbittorrent-monitor: cleanup 2025-10-24 13:16:40 -04:00
f40f9748a4 jellyfin-qbittorrent-monitor: improve testing infra 2025-10-24 12:35:15 -04:00
73379efe40 update 2025-10-24 10:06:51 -04:00
e9c1df44e8 jellyfin-qbittorrent-monitor: write proper test 2025-10-24 00:12:42 -04:00
b1b9a3755f openrgb: override mbedtls_2 with mbedtls 2025-10-23 16:47:37 -04:00
eedf2fa8ed update 2025-10-23 15:40:47 -04:00
f58fd08e43 jellyfin-qbittorrent-monitor: only count external networks 2025-10-21 23:39:44 -04:00
fb98627a58 update 2025-10-21 21:07:02 -04:00
d0f16a3e93 update 2025-10-19 18:03:14 -04:00
c8cc19b698 llama.cpp: disable 2025-10-19 17:48:17 -04:00
386cf266d5 fix jellyfin api key 2025-10-18 03:10:49 -04:00
46bb9734b7 disable flakes (not needed) 2025-10-18 00:27:42 -04:00
1c904907d6 split up no-rgb and secureboot 2025-10-18 00:27:37 -04:00
1ae9fc29bd fix caddy_auth perms 2025-10-17 23:27:19 -04:00
e8aafda386 fix various agenix things 2025-10-17 23:13:25 -04:00
1ddcccd1c2 use filesystems logic 2025-10-17 22:55:02 -04:00
dd18bd1e6d remove various references to ${username} 2025-10-17 22:34:52 -04:00
31b4d7e80d remove service that doesn't exist 2025-10-17 22:34:35 -04:00
003cf474ff fix script 2025-10-17 22:28:02 -04:00
f9515dd160 claude'd better security things 2025-10-17 20:30:56 -04:00
9e35448f04 zfs_ensure_mounted: cleanup echo grep pattern 2025-10-17 17:25:30 -04:00
852ec18c7b update 2025-10-17 12:03:11 -04:00
b8759218ec llama.cpp: ngl 8-> 12 2025-10-16 20:16:03 -04:00
ea5996dc9e qbt: TimeoutStopSec = 10 2025-10-16 18:44:54 -04:00
d96035120f llama.cpp: reenable + Apriel-1.5-15b-Thinker 2025-10-16 18:44:05 -04:00
3811193739 zfs_ensure_mounted: cleanup sed awk call 2025-10-14 23:00:55 -04:00
d7d84848bb update 2025-10-14 22:32:03 -04:00
85fa1bb3ab update 2025-10-14 02:42:01 -04:00
a3d54e82d1 qbt: improve tracker list parsing 2025-10-14 02:37:20 -04:00
f80e1cf7c7 update 2025-10-11 02:50:18 -04:00
ef44aebe20 minecraft: disable scalable lux 2025-10-10 12:59:39 -04:00
a600a0936e update 2025-10-10 12:35:49 -04:00
e53b510e9b minecraft: update to 1.21.10 2025-10-10 12:35:15 -04:00
8da3470934 minecraft: fix caddy user group 2025-10-09 11:51:26 -04:00
f6c0178421 qbt: use trackerlist repo instead of managing my own trackerlist 2025-10-08 12:29:38 -04:00
7956d18daf minecraft 10gb -> 4gb 2025-10-07 14:12:50 -04:00
83a639a20e impermanence 2025-10-07 14:12:47 -04:00
a4bf2a0ea9 llama.cpp: testing 2025-10-06 01:42:37 -04:00
03729c90c1 update 2025-10-05 16:11:22 -04:00
c0eb03f30e update 2025-10-04 20:05:21 -04:00
7ed529128d minecraft: update lithium 2025-10-03 14:05:56 -04:00
c83d34108e Revert "llama-cpp: re-enable"
This reverts commit e98a23934a.
2025-10-02 22:26:30 -04:00
72d950007b llama-cpp: fix postPatch phase 2025-10-02 22:26:25 -04:00
e98a23934a llama-cpp: re-enable 2025-10-02 21:30:02 -04:00
a75f34e113 llama-cpp: change model 2025-10-02 21:29:43 -04:00
eff5b3b8aa update 2025-10-01 23:45:37 -04:00
c986abb9d3 update 2025-09-30 12:23:30 -04:00
b1d92c3825 senior project website: update 2025-09-28 22:28:54 -04:00
5b3332dd7f senior project website: update 2025-09-28 22:20:10 -04:00
13341094d4 update 2025-09-26 23:48:04 -04:00
2f0fb9b2c0 update 2025-09-23 20:01:48 -04:00
abecf4a723 qbt: disable port forwarding 2025-09-23 11:05:31 -04:00
d2b6348085 update 2025-09-21 01:11:58 -04:00
8ed2b9e80c update 2025-09-19 19:51:13 -04:00
0c7e0e0b67 minecraft: add disconnect-packet-fix and packet-fixer 2025-09-19 14:17:57 -04:00
70f8b99dec update 2025-09-19 00:21:19 -04:00
7db414efe1 update 2025-09-17 10:05:24 -04:00
c9068a8b50 update 2025-09-15 12:30:48 -04:00
13a0344db0 jellyfin-monitor: cleanup 2025-09-15 12:30:28 -04:00
1aec911e72 jellyfin-monitor: only print active streams on change 2025-09-12 14:57:53 -04:00
9d8b8ad33f jellyfin-monitor: only trigger for video 2025-09-12 11:34:58 -04:00
0e90fff70d jellyfin-monitor: remove datetime 2025-09-12 10:12:39 -04:00
3f08fb4729 remove nload 2025-09-12 01:53:32 -04:00
86 changed files with 4552 additions and 988 deletions

.gitattributes

@@ -1 +1,3 @@
 secrets/** filter=git-crypt diff=git-crypt
+usb-secrets/usb-secrets-key* filter=git-crypt diff=git-crypt


@@ -11,8 +11,15 @@
 }:
 {
   imports = [
-    ./hardware.nix
-    ./zfs.nix
+    ./modules/hardware.nix
+    ./modules/zfs.nix
+    ./modules/impermanence.nix
+    ./modules/usb-secrets.nix
+    ./modules/age-secrets.nix
+    ./modules/secureboot.nix
+    ./modules/no-rgb.nix
+    ./modules/security.nix
+    ./modules/ntfy-alerts.nix
     ./services/postgresql.nix
     ./services/jellyfin.nix
@@ -23,22 +30,45 @@
     ./services/wg.nix
     ./services/qbittorrent.nix
+    ./services/jellyfin-qbittorrent-monitor.nix
     ./services/bitmagnet.nix
-    # ./services/matrix.nix
-    # ./services/owntracks.nix
-    ./services/soulseek.nix
-    # ./services/llama-cpp.nix
+    ./services/arr/prowlarr.nix
+    ./services/arr/sonarr.nix
+    ./services/arr/radarr.nix
+    ./services/arr/bazarr.nix
+    ./services/arr/jellyseerr.nix
+    ./services/arr/recyclarr.nix
+    ./services/arr/init.nix
+    ./services/soulseek.nix
     ./services/ups.nix
     ./services/bitwarden.nix
+    ./services/matrix.nix
+    ./services/coturn.nix
+    ./services/livekit.nix
+    ./services/monero.nix
+    ./services/xmrig.nix
     # KEEP UNTIL 2028
     ./services/caddy_senior_project.nix
+    ./services/graphing-calculator.nix
+    ./services/ssh.nix
+    ./services/syncthing.nix
+    ./services/ntfy.nix
+    ./services/ntfy-alerts.nix
   ];
+  services.kmscon.enable = true;
   systemd.targets = {
     sleep.enable = false;
     suspend.enable = false;
@@ -46,6 +76,9 @@
     hybrid-sleep.enable = false;
   };
+  # Disable serial getty on ttyS0 to prevent dmesg warnings
+  systemd.services."serial-getty@ttyS0".enable = false;
   # srvos enables vim, i don't want to use vim, disable it here:
   programs.vim = {
     defaultEditor = false;
@@ -74,15 +107,16 @@
     # optimize the store
     optimise.automatic = true;
-    # enable flakes!
-    settings = {
-      experimental-features = [
-        "nix-command"
-        "flakes"
-      ];
-    };
+    # garbage collection
+    gc = {
+      automatic = true;
+      dates = "weekly";
+      options = "--delete-older-than 7d";
+    };
   };
+  hardware.intelgpu.driver = "xe";
   boot = {
     # 6.12 LTS until 2026
     kernelPackages = pkgs.linuxPackages_6_12_hardened;
@@ -97,30 +131,44 @@
     initrd = {
       compressor = "zstd";
+      supportedFilesystems = [ "f2fs" ];
     };
-    loader.systemd-boot.enable = lib.mkForce false;
-    lanzaboote = {
-      enable = true;
-      # needed to be in `/etc/secureboot` for sbctl to work
-      pkiBundle = "/etc/secureboot";
+    # BBR congestion control handles variable-latency VPN connections much
+    # better than CUBIC by probing bandwidth continuously rather than
+    # reacting to packet loss.
+    kernelModules = [ "tcp_bbr" ];
+    kernel.sysctl = {
+      # Use BBR + fair queuing for smooth throughput through the WireGuard VPN
+      "net.core.default_qdisc" = "fq";
+      "net.ipv4.tcp_congestion_control" = "bbr";
+      # Disable slow-start after idle: prevents TCP from resetting window
+      # size on each burst cycle (the primary cause of the 0 -> 40 MB/s spikes)
+      "net.ipv4.tcp_slow_start_after_idle" = 0;
+      # Larger socket buffers to accommodate the VPN bandwidth-delay product
+      # (22ms RTT * target throughput). Current 2.5MB max is too small.
+      "net.core.rmem_max" = 16777216;
+      "net.core.wmem_max" = 16777216;
+      "net.ipv4.tcp_rmem" = "4096 87380 16777216";
+      "net.ipv4.tcp_wmem" = "4096 65536 16777216";
+      # Higher backlog for the large number of concurrent torrent connections
+      "net.core.netdev_max_backlog" = 5000;
+      # Faster cleanup of dead connections from torrent peer churn
+      "net.ipv4.tcp_fin_timeout" = 15; # default 60
+      "net.ipv4.tcp_tw_reuse" = 1;
+      # Minecraft server optimizations
+      # Disable autogroup for better scheduling of game server threads
+      "kernel.sched_autogroup_enabled" = 0;
+      # Huge pages for Minecraft JVM (4000MB heap / 2MB per page + ~200 overhead)
+      "vm.nr_hugepages" = 2200;
     };
   };
-  system.activationScripts = {
-    # extract all my secureboot keys
-    # TODO! awful secrets management, it's globally readable in /nix/store
-    "secureboot-keys".text = ''
-      #!/bin/sh
-      rm -fr ${config.boot.lanzaboote.pkiBundle} || true
-      mkdir -p ${config.boot.lanzaboote.pkiBundle}
-      ${pkgs.gnutar}/bin/tar xf ${./secrets/secureboot.tar} -C ${config.boot.lanzaboote.pkiBundle}
-      chown -R root:wheel ${config.boot.lanzaboote.pkiBundle}
-      chmod -R 500 ${config.boot.lanzaboote.pkiBundle}
-    '';
-  };
   environment.etc = {
     "issue".text = "";
   };
@@ -128,23 +176,10 @@
   # Set your time zone.
   time.timeZone = "America/New_York";
-  # Enable the OpenSSH daemon.
-  services.openssh = {
-    enable = true;
-    settings = {
-      AllowUsers = [
-        username
-        "root"
-      ];
-      PasswordAuthentication = false;
-      PermitRootLogin = "yes"; # for deploying configs
-    };
-  };
   hardware.graphics = {
     enable = true;
     extraPackages = with pkgs; [
-      vaapiVdpau
+      libva-vdpau-driver
       intel-compute-runtime # OpenCL filter support (hardware tonemapping and subtitle burn-in)
       vpl-gpu-rt # QSV on 11th gen or newer
     ];
@@ -183,7 +218,6 @@
     lsof
     reflac
-    list-usb-drives
     pfetch-rs
@@ -193,48 +227,6 @@
     libatasmart
   ];
-  systemd.services.no-rgb =
-    let
-      no-rgb = (
-        pkgs.writeShellApplication {
-          name = "no-rgb";
-          runtimeInputs = with pkgs; [
-            openrgb
-            coreutils
-            gnugrep
-          ];
-          text = ''
-            #!/bin/sh
-            set -e
-            NUM_DEVICES=$(openrgb --noautoconnect --list-devices | grep -cE '^[0-9]+: ')
-            for i in $(seq 0 $((NUM_DEVICES - 1))); do
-              openrgb --noautoconnect --device "$i" --mode direct --color 000000
-            done
-          '';
-        }
-      );
-    in
-    {
-      description = "disable rgb";
-      serviceConfig = {
-        ExecStart = lib.getExe no-rgb;
-        Type = "oneshot";
-      };
-      wantedBy = [ "multi-user.target" ];
-    };
-  services.hardware.openrgb = {
-    enable = true;
-    package = pkgs.openrgb-with-all-plugins;
-    motherboard = "amd";
-  };
-  services.udev.packages = [ pkgs.openrgb-with-all-plugins ];
-  hardware.i2c.enable = true;
   networking = {
     nameservers = [
       "1.1.1.1"
@@ -244,13 +236,15 @@
     hostName = hostname;
     hostId = "0f712d56";
     firewall.enable = true;
+    firewall.trustedInterfaces = [ "wg-br" ];
     useDHCP = false;
     enableIPv6 = false;
     interfaces.${eth_interface} = {
       ipv4.addresses = [
         {
-          address = "10.1.1.102";
+          address = "192.168.1.50";
+          # address = "10.1.1.102";
           prefixLength = 24;
         }
       ];
@@ -262,7 +256,8 @@
       ];
     };
     defaultGateway = {
-      address = "10.1.1.1";
+      #address = "10.1.1.1";
+      address = "192.168.1.1";
       interface = eth_interface;
     };
     # TODO! fix this
@@ -282,20 +277,9 @@
       "render"
       service_configs.media_group
     ];
-    # TODO! use proper secrets management
-    # hashedPasswordFile = builtins.toString ./secrets/hashedPass;
-    openssh.authorizedKeys.keys = [
-      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH" # laptop
-      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJjT5QZ3zRDb+V6Em20EYpSEgPW5e/U+06uQGJdraxi" # desktop
-    ];
+    hashedPasswordFile = config.age.secrets.hashedPass.path;
   };
-  # used for deploying configs to server
-  users.users.root.openssh.authorizedKeys.keys =
-    config.users.users.${username}.openssh.authorizedKeys.keys;
   # https://nixos.wiki/wiki/Fish#Setting_fish_as_your_shell
   programs.fish.enable = true;
   programs.bash = {
@@ -339,7 +323,7 @@
   # };
   # systemd.tmpfiles.rules = [
-  #   "d /tank/music 775 ${username} users"
+  #   "Z /tank/music 775 ${username} users"
   # ];
   system.stateVersion = "24.11";


@@ -1,4 +1,9 @@
+{ inputs, ... }:
 {
+  imports = [
+    inputs.disko.nixosModules.disko
+  ];
   disko.devices = {
     disk = {
       main = {
@@ -15,17 +20,40 @@
                 mountpoint = "/boot";
               };
             };
-            root = {
+            persistent = {
+              size = "20G";
+              content = {
+                type = "filesystem";
+                format = "f2fs";
+                mountpoint = "/persistent";
+              };
+            };
+            nix = {
               size = "100%";
               content = {
                 type = "filesystem";
                 format = "f2fs";
-                mountpoint = "/";
+                mountpoint = "/nix";
               };
             };
           };
         };
       };
     };
+    nodev = {
+      "/" = {
+        fsType = "tmpfs";
+        mountOptions = [
+          "defaults"
+          "size=2G"
+          "mode=755"
+        ];
+      };
+    };
   };
+  fileSystems."/persistent".neededForBoot = true;
+  fileSystems."/nix".neededForBoot = true;
 }

flake.lock

@@ -1,12 +1,57 @@
 {
   "nodes": {
+    "agenix": {
+      "inputs": {
+        "darwin": [],
+        "home-manager": [
+          "home-manager"
+        ],
+        "nixpkgs": [
+          "nixpkgs"
+        ],
+        "systems": "systems"
+      },
+      "locked": {
+        "lastModified": 1770165109,
+        "narHash": "sha256-9VnK6Oqai65puVJ4WYtCTvlJeXxMzAp/69HhQuTdl/I=",
+        "owner": "ryantm",
+        "repo": "agenix",
+        "rev": "b027ee29d959fda4b60b57566d64c98a202e0feb",
+        "type": "github"
+      },
+      "original": {
+        "owner": "ryantm",
+        "repo": "agenix",
+        "type": "github"
+      }
+    },
+    "arr-init": {
+      "inputs": {
+        "nixpkgs": [
+          "nixpkgs"
+        ]
+      },
+      "locked": {
+        "lastModified": 1772249948,
+        "narHash": "sha256-v68tO12mTCET68eZG583U+OlBL4f6kAoHS9iKA/xLzQ=",
+        "ref": "refs/heads/main",
+        "rev": "d21eb9f5b0a30bb487de7c0afbbbaf19324eaa49",
+        "revCount": 1,
+        "type": "git",
+        "url": "ssh://gitea@git.gardling.com/titaniumtown/arr-init"
+      },
+      "original": {
+        "type": "git",
+        "url": "ssh://gitea@git.gardling.com/titaniumtown/arr-init"
+      }
+    },
     "crane": {
       "locked": {
-        "lastModified": 1754269165,
-        "narHash": "sha256-0tcS8FHd4QjbCVoxN9jI+PjHgA4vc/IjkUSp+N3zy0U=",
+        "lastModified": 1771796463,
+        "narHash": "sha256-9bCDuUzpwJXcHMQYMS1yNuzYMmKO/CCwCexpjWOl62I=",
         "owner": "ipetkov",
         "repo": "crane",
-        "rev": "444e81206df3f7d92780680e45858e31d2f07a08",
+        "rev": "3d3de3313e263e04894f284ac18177bd26169bad",
         "type": "github"
       },
       "original": {
@@ -24,11 +69,11 @@
         "utils": "utils"
       },
       "locked": {
-        "lastModified": 1756719547,
-        "narHash": "sha256-N9gBKUmjwRKPxAafXEk1EGadfk2qDZPBQp4vXWPHINQ=",
+        "lastModified": 1770019181,
+        "narHash": "sha256-hwsYgDnby50JNVpTRYlF3UR/Rrpt01OrxVuryF40CFY=",
         "owner": "serokell",
         "repo": "deploy-rs",
-        "rev": "125ae9e3ecf62fb2c0fd4f2d894eb971f1ecaed2",
+        "rev": "77c906c0ba56aabdbc72041bf9111b565cdd6171",
         "type": "github"
       },
       "original": {
@@ -44,11 +89,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1757508292, "lastModified": 1771881364,
"narHash": "sha256-7lVWL5bC6xBIMWWDal41LlGAG+9u2zUorqo3QCUL4p4=", "narHash": "sha256-A5uE/hMium5of/QGC6JwF5TGoDAfpNtW00T0s9u/PN8=",
"owner": "nix-community", "owner": "nix-community",
"repo": "disko", "repo": "disko",
"rev": "146f45bee02b8bd88812cfce6ffc0f933788875a", "rev": "a4cb7bf73f264d40560ba527f9280469f1f081c6",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -76,15 +121,15 @@
"flake-compat_2": { "flake-compat_2": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1747046372, "lastModified": 1767039857,
"narHash": "sha256-CIVLLkVgvHYbgI2UpXvIIBJ12HWgX+fjA8Xf8PUmqCY=", "narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "edolstra", "owner": "NixOS",
"repo": "flake-compat", "repo": "flake-compat",
"rev": "9100a0f413b0c601e0533d1d94ffd501ce2e7885", "rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "edolstra", "owner": "NixOS",
"repo": "flake-compat", "repo": "flake-compat",
"type": "github" "type": "github"
} }
@@ -105,48 +150,9 @@
"type": "github" "type": "github"
} }
}, },
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"lanzaboote",
"nixpkgs"
]
},
"locked": {
"lastModified": 1754091436,
"narHash": "sha256-XKqDMN1/Qj1DKivQvscI4vmHfDfvYR2pfuFOJiCeewM=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "67df8c627c2c39c41dbec76a1f201929929ab0bd",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_2": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1730504689,
"narHash": "sha256-hgmguH29K2fvs9szpq2r3pz2/8cJd2LPS+b4tfNFCwE=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "506278e768c2a08bec68eb62932193e341f55c90",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": { "flake-utils": {
"inputs": { "inputs": {
"systems": "systems_2" "systems": "systems_4"
}, },
"locked": { "locked": {
"lastModified": 1731533236, "lastModified": 1731533236,
@@ -166,7 +172,7 @@
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
"lanzaboote", "lanzaboote",
"pre-commit-hooks-nix", "pre-commit",
"nixpkgs" "nixpkgs"
] ]
}, },
@@ -191,37 +197,77 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1756679287, "lastModified": 1772020340,
"narHash": "sha256-Xd1vOeY9ccDf5VtVK12yM0FS6qqvfUop8UQlxEB+gTQ=", "narHash": "sha256-aqBl3GNpCadMoJ/hVkWTijM1Aeilc278MjM+LA3jK6g=",
"owner": "nix-community", "owner": "nix-community",
"repo": "home-manager", "repo": "home-manager",
"rev": "07fc025fe10487dd80f2ec694f1cd790e752d0e8", "rev": "36e38ca0d9afe4c55405fdf22179a5212243eecc",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "nix-community", "owner": "nix-community",
"ref": "release-25.05", "ref": "release-25.11",
"repo": "home-manager", "repo": "home-manager",
"type": "github" "type": "github"
} }
}, },
"home-manager_2": {
"inputs": {
"nixpkgs": [
"impermanence",
"nixpkgs"
]
},
"locked": {
"lastModified": 1768598210,
"narHash": "sha256-kkgA32s/f4jaa4UG+2f8C225Qvclxnqs76mf8zvTVPg=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "c47b2cc64a629f8e075de52e4742de688f930dc6",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"impermanence": {
"inputs": {
"home-manager": "home-manager_2",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769548169,
"narHash": "sha256-03+JxvzmfwRu+5JafM0DLbxgHttOQZkUtDWBmeUkN8Y=",
"owner": "nix-community",
"repo": "impermanence",
"rev": "7b1d382faf603b6d264f58627330f9faa5cba149",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "impermanence",
"type": "github"
}
},
"lanzaboote": { "lanzaboote": {
"inputs": { "inputs": {
"crane": "crane", "crane": "crane",
"flake-compat": "flake-compat_2",
"flake-parts": "flake-parts",
"nixpkgs": [ "nixpkgs": [
"nixpkgs" "nixpkgs"
], ],
"pre-commit-hooks-nix": "pre-commit-hooks-nix", "pre-commit": "pre-commit",
"rust-overlay": "rust-overlay" "rust-overlay": "rust-overlay"
}, },
"locked": { "locked": {
"lastModified": 1756744479, "lastModified": 1772216104,
"narHash": "sha256-EyZXusK/wRD3V9vDh00W2Re3Eg8UQ+LjVBQrrH9dq1U=", "narHash": "sha256-1TnGN26vnCEQk5m4AavJZxGZTb/6aZyphemRPRwFUfs=",
"owner": "nix-community", "owner": "nix-community",
"repo": "lanzaboote", "repo": "lanzaboote",
"rev": "747b7912f49e2885090c83364d88cf853a020ac1", "rev": "dbe5112de965bbbbff9f0729a9789c20a65ab047",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -230,41 +276,20 @@
"type": "github" "type": "github"
} }
}, },
"llamacpp": {
"inputs": {
"flake-parts": "flake-parts_2",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1757618398,
"narHash": "sha256-BlGooRYcF96P356VQZi7SkGEW0Lo8TzxeYZt5CNmLew=",
"owner": "ggml-org",
"repo": "llama.cpp",
"rev": "0e6ff0046f4a2983b2c77950aa75960fe4b4f0e2",
"type": "github"
},
"original": {
"owner": "ggml-org",
"repo": "llama.cpp",
"type": "github"
}
},
"nix-minecraft": { "nix-minecraft": {
"inputs": { "inputs": {
"flake-compat": "flake-compat_3", "flake-compat": "flake-compat_3",
"flake-utils": "flake-utils",
"nixpkgs": [ "nixpkgs": [
"nixpkgs" "nixpkgs"
] ],
"systems": "systems_3"
}, },
"locked": { "locked": {
"lastModified": 1757555667, "lastModified": 1772160153,
"narHash": "sha256-09403AZgH/TR1bpilDm8yJucZ2hYcZm8bzY3t8NgPJQ=", "narHash": "sha256-lk5IxQzY9ZeeEyjKNT7P6dFnlRpQgkus4Ekc/+slypY=",
"owner": "Infinidoge", "owner": "Infinidoge",
"repo": "nix-minecraft", "repo": "nix-minecraft",
"rev": "d6d19d54dcec2a6afac3b9442643dd18e8b0566d", "rev": "deca3fb710b502ba10cd5cdc8f66c2cc184b92df",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -275,11 +300,11 @@
}, },
"nixos-hardware": { "nixos-hardware": {
"locked": { "locked": {
"lastModified": 1757103352, "lastModified": 1771969195,
"narHash": "sha256-PtT7ix43ss8PONJ1VJw3f6t2yAoGH+q462Sn8lrmWmk=", "narHash": "sha256-qwcDBtrRvJbrrnv1lf/pREQi8t2hWZxVAyeMo7/E9sw=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixos-hardware", "repo": "nixos-hardware",
"rev": "11b2a10c7be726321bb854403fdeec391e798bf0", "rev": "41c6b421bdc301b2624486e11905c9af7b8ec68e",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -291,38 +316,39 @@
}, },
"nixpkgs": { "nixpkgs": {
"locked": { "locked": {
"lastModified": 1757545623, "lastModified": 1772047000,
"narHash": "sha256-mCxPABZ6jRjUQx3bPP4vjA68ETbPLNz9V2pk9tO7pRQ=", "narHash": "sha256-7DaQVv4R97cii/Qdfy4tmDZMB2xxtyIvNGSwXBBhSmo=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "8cd5ce828d5d1d16feff37340171a98fc3bf6526", "rev": "1267bb4920d0fc06ea916734c11b0bf004bbe17e",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "NixOS", "owner": "NixOS",
"ref": "nixos-25.05", "ref": "nixos-25.11",
"repo": "nixpkgs", "repo": "nixpkgs",
"type": "github" "type": "github"
} }
}, },
"nixpkgs-lib": { "nixpkgs_2": {
"locked": { "locked": {
"lastModified": 1730504152, "lastModified": 1764517877,
"narHash": "sha256-lXvH/vOfb4aGYyvFmZK/HlsNsr/0CVWlwYvo2rxJk3s=", "narHash": "sha256-pp3uT4hHijIC8JUK5MEqeAWmParJrgBVzHLNfJDZxg4=",
"type": "tarball", "owner": "NixOS",
"url": "https://github.com/NixOS/nixpkgs/archive/cc2f28000298e1269cea6612cd06ec9979dd5d7f.tar.gz" "repo": "nixpkgs",
"rev": "2d293cbfa5a793b4c50d17c05ef9e385b90edf6c",
"type": "github"
}, },
"original": { "original": {
"type": "tarball", "owner": "NixOS",
"url": "https://github.com/NixOS/nixpkgs/archive/cc2f28000298e1269cea6612cd06ec9979dd5d7f.tar.gz" "ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
} }
}, },
"pre-commit-hooks-nix": { "pre-commit": {
"inputs": { "inputs": {
"flake-compat": [ "flake-compat": "flake-compat_2",
"lanzaboote",
"flake-compat"
],
"gitignore": "gitignore", "gitignore": "gitignore",
"nixpkgs": [ "nixpkgs": [
"lanzaboote", "lanzaboote",
@@ -330,11 +356,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1750779888, "lastModified": 1771858127,
"narHash": "sha256-wibppH3g/E2lxU43ZQHC5yA/7kIKLGxVEnsnVK1BtRg=", "narHash": "sha256-Gtre9YoYl3n25tJH2AoSdjuwcqij5CPxL3U3xysYD08=",
"owner": "cachix", "owner": "cachix",
"repo": "pre-commit-hooks.nix", "repo": "pre-commit-hooks.nix",
"rev": "16ec914f6fb6f599ce988427d9d94efddf25fe6d", "rev": "49bbbfc218bf3856dfa631cead3b052d78248b83",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -345,18 +371,22 @@
}, },
"root": { "root": {
"inputs": { "inputs": {
"agenix": "agenix",
"arr-init": "arr-init",
"deploy-rs": "deploy-rs", "deploy-rs": "deploy-rs",
"disko": "disko", "disko": "disko",
"home-manager": "home-manager", "home-manager": "home-manager",
"impermanence": "impermanence",
"lanzaboote": "lanzaboote", "lanzaboote": "lanzaboote",
"llamacpp": "llamacpp",
"nix-minecraft": "nix-minecraft", "nix-minecraft": "nix-minecraft",
"nixos-hardware": "nixos-hardware", "nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs", "nixpkgs": "nixpkgs",
"senior_project-website": "senior_project-website", "senior_project-website": "senior_project-website",
"srvos": "srvos", "srvos": "srvos",
"trackerlist": "trackerlist",
"vpn-confinement": "vpn-confinement", "vpn-confinement": "vpn-confinement",
"website": "website" "website": "website",
"ytbn-graphing-software": "ytbn-graphing-software"
} }
}, },
"rust-overlay": { "rust-overlay": {
@@ -367,11 +397,32 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1754189623, "lastModified": 1771988922,
"narHash": "sha256-fstu5eb30UYwsxow0aQqkzxNxGn80UZjyehQVNVHuBk=", "narHash": "sha256-Fc6FHXtfEkLtuVJzd0B6tFYMhmcPLuxr90rWfb/2jtQ=",
"owner": "oxalica", "owner": "oxalica",
"repo": "rust-overlay", "repo": "rust-overlay",
"rev": "c582ff7f0d8a7ea689ae836dfb1773f1814f472a", "rev": "f4443dc3f0b6c5e6b77d923156943ce816d1fcb9",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"rust-overlay_2": {
"inputs": {
"nixpkgs": [
"ytbn-graphing-software",
"nixpkgs"
]
},
"locked": {
"lastModified": 1764729618,
"narHash": "sha256-z4RA80HCWv2los1KD346c+PwNPzMl79qgl7bCVgz8X0=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "52764074a85145d5001bf0aa30cb71936e9ad5b8",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -383,11 +434,11 @@
"senior_project-website": { "senior_project-website": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1756857133, "lastModified": 1771869552,
"narHash": "sha256-L9uRmF8ybAfMIKwAqrXfd7f1ICqBEu6tBxeWjo4xqRc=", "narHash": "sha256-veaVrRWCSy7HYAAjUFLw8HASKcj+3f0W+sCwS3QiaM4=",
"owner": "Titaniumtown", "owner": "Titaniumtown",
"repo": "senior-project-website", "repo": "senior-project-website",
"rev": "410207b70a26784226fb5ecd9b31b725904d3abd", "rev": "28a2b93492dac877dce0b38f078eacf74fce26e7",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -403,11 +454,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1757552363, "lastModified": 1772071250,
"narHash": "sha256-4dtGagSfwMabRi59g7E8T6FcdghNizLbR4PwU1g8lDI=", "narHash": "sha256-LDWvJDR1J8xE8TBJjzWnOA0oVP/l9xBFC4npQPJDHN4=",
"owner": "nix-community", "owner": "nix-community",
"repo": "srvos", "repo": "srvos",
"rev": "ec58f16bdb57cf3a17bba79f687945dca1703c64", "rev": "5cd73bcf984b72d8046e1175d13753de255adfb9",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -446,9 +497,55 @@
"type": "github" "type": "github"
} }
}, },
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_4": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"trackerlist": {
"flake": false,
"locked": {
"lastModified": 1772233783,
"narHash": "sha256-2jPUBKpPuT4dCXwVFuZvTH3QyURixsfJZD7Zqs0atPY=",
"owner": "ngosang",
"repo": "trackerslist",
"rev": "85c4f103f130b070a192343c334f50c2f56b61a9",
"type": "github"
},
"original": {
"owner": "ngosang",
"repo": "trackerslist",
"type": "github"
}
},
"utils": { "utils": {
"inputs": { "inputs": {
"systems": "systems" "systems": "systems_2"
}, },
"locked": { "locked": {
"lastModified": 1731533236, "lastModified": 1731533236,
@@ -466,11 +563,11 @@
}, },
"vpn-confinement": { "vpn-confinement": {
"locked": { "locked": {
"lastModified": 1749672087, "lastModified": 1767604552,
"narHash": "sha256-j8LG0s0QcvNkZZLcItl78lvTZemvsScir0dG3Ii4B1c=", "narHash": "sha256-FddhMxnc99KYOZ/S3YNqtDSoxisIhVtJ7L4s8XD2u0A=",
"owner": "Maroka-chan", "owner": "Maroka-chan",
"repo": "VPN-Confinement", "repo": "VPN-Confinement",
"rev": "880b3bd2c864dce4f6afc79f6580ca699294c011", "rev": "a6b2da727853886876fd1081d6bb2880752937f3",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -482,11 +579,11 @@
"website": { "website": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1756870892, "lastModified": 1768266466,
"narHash": "sha256-jPcADOXoREf92sA/wHX9tXfuRRDpwIba4NIqz/Gl8Kg=", "narHash": "sha256-d4dZzEcIKuq4DhNtXczaflpRifAtcOgNr45W2Bexnps=",
"ref": "refs/heads/main", "ref": "refs/heads/main",
"rev": "890c7633d469862a54b9c5993050b1f287242f2d", "rev": "06011a27456b3b9f983ef1aa142b5773bcb52b6e",
"revCount": 20, "revCount": 23,
"type": "git", "type": "git",
"url": "https://git.gardling.com/titaniumtown/website" "url": "https://git.gardling.com/titaniumtown/website"
}, },
@@ -494,6 +591,26 @@
"type": "git", "type": "git",
"url": "https://git.gardling.com/titaniumtown/website" "url": "https://git.gardling.com/titaniumtown/website"
} }
},
"ytbn-graphing-software": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs_2",
"rust-overlay": "rust-overlay_2"
},
"locked": {
"lastModified": 1765615270,
"narHash": "sha256-12C6LccKRe5ys0iRd+ob+BliswUSmqOKWhMTI8fNpr0=",
"ref": "refs/heads/main",
"rev": "ac6265eae734363f95909df9a3739bf6360fa721",
"revCount": 1130,
"type": "git",
"url": "https://git.gardling.com/titaniumtown/YTBN-Graphing-Software"
},
"original": {
"type": "git",
"url": "https://git.gardling.com/titaniumtown/YTBN-Graphing-Software"
}
} }
}, },
"root": "root", "root": "root",

flake.nix

@@ -2,7 +2,7 @@
description = "Flake for server muffin"; description = "Flake for server muffin";
inputs = { inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05"; nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
lanzaboote = { lanzaboote = {
url = "github:nix-community/lanzaboote"; url = "github:nix-community/lanzaboote";
@@ -19,7 +19,7 @@
vpn-confinement.url = "github:Maroka-chan/VPN-Confinement"; vpn-confinement.url = "github:Maroka-chan/VPN-Confinement";
home-manager = { home-manager = {
url = "github:nix-community/home-manager/release-25.05"; url = "github:nix-community/home-manager/release-25.11";
inputs.nixpkgs.follows = "nixpkgs"; inputs.nixpkgs.follows = "nixpkgs";
}; };
@@ -28,11 +28,6 @@
inputs.nixpkgs.follows = "nixpkgs"; inputs.nixpkgs.follows = "nixpkgs";
}; };
llamacpp = {
url = "github:ggml-org/llama.cpp";
inputs.nixpkgs.follows = "nixpkgs";
};
srvos = { srvos = {
url = "github:nix-community/srvos"; url = "github:nix-community/srvos";
inputs.nixpkgs.follows = "nixpkgs"; inputs.nixpkgs.follows = "nixpkgs";
@@ -43,6 +38,18 @@
inputs.nixpkgs.follows = "nixpkgs"; inputs.nixpkgs.follows = "nixpkgs";
}; };
impermanence = {
url = "github:nix-community/impermanence";
inputs.nixpkgs.follows = "nixpkgs";
};
agenix = {
url = "github:ryantm/agenix";
inputs.nixpkgs.follows = "nixpkgs";
inputs.home-manager.follows = "home-manager";
inputs.darwin.follows = "";
};
senior_project-website = { senior_project-website = {
url = "github:Titaniumtown/senior-project-website"; url = "github:Titaniumtown/senior-project-website";
flake = false; flake = false;
@@ -52,6 +59,20 @@
url = "git+https://git.gardling.com/titaniumtown/website"; url = "git+https://git.gardling.com/titaniumtown/website";
flake = false; flake = false;
}; };
trackerlist = {
url = "github:ngosang/trackerslist";
flake = false;
};
ytbn-graphing-software = {
url = "git+https://git.gardling.com/titaniumtown/YTBN-Graphing-Software";
};
arr-init = {
url = "git+ssh://gitea@git.gardling.com/titaniumtown/arr-init";
inputs.nixpkgs.follows = "nixpkgs";
};
}; };
outputs = outputs =
@@ -66,6 +87,8 @@
disko, disko,
srvos, srvos,
deploy-rs, deploy-rs,
impermanence,
arr-init,
... ...
}@inputs: }@inputs:
let let
@@ -78,29 +101,45 @@
zpool_ssds = "tank"; zpool_ssds = "tank";
zpool_hdds = "hdds"; zpool_hdds = "hdds";
torrents_path = "/torrents"; torrents_path = "/torrents";
services_dir = "/${zpool_ssds}/services"; services_dir = "/services";
music_dir = "/${zpool_ssds}/music"; music_dir = "/${zpool_ssds}/music";
media_group = "media"; media_group = "media";
cpu_arch = "znver3";
ports = { ports = {
http = 80;
https = 443; https = 443;
jellyfin = 8096; # no services.jellyfin option for this jellyfin = 8096; # no services.jellyfin option for this
torrent = 6011; torrent = 6011;
bitmagnet = 3333; bitmagnet = 3333;
owntracks = 3825;
gitea = 2283; gitea = 2283;
immich = 2284; immich = 2284;
soulseek_web = 5030; soulseek_web = 5030;
soulseek_listen = 50300; soulseek_listen = 50300;
llama_cpp = 8991; llama_cpp = 8991;
vaultwarden = 8222; vaultwarden = 8222;
syncthing_gui = 8384;
syncthing_protocol = 22000;
syncthing_discovery = 21027;
minecraft = 25565;
matrix = 6167;
matrix_federation = 8448;
coturn = 3478;
coturn_tls = 5349;
ntfy = 2586;
livekit = 7880;
lk_jwt = 8081;
prowlarr = 9696;
sonarr = 8989;
radarr = 7878;
bazarr = 6767;
jellyseerr = 5055;
}; };
https = { https = {
certs = services_dir + "/http_certs"; certs = services_dir + "/http_certs";
domain = "gardling.com"; domain = "gardling.com";
wg_ip = "192.168.15.1";
matrix_hostname = "matrix.${service_configs.https.domain}";
}; };
gitea = { gitea = {
@@ -132,10 +171,6 @@
cacheDir = services_dir + "/jellyfin_cache"; cacheDir = services_dir + "/jellyfin_cache";
}; };
owntracks = {
data_dir = services_dir + "/owntracks";
};
slskd = rec { slskd = rec {
base = "/var/lib/slskd"; base = "/var/lib/slskd";
downloads = base + "/downloads"; downloads = base + "/downloads";
@@ -145,17 +180,73 @@
vaultwarden = { vaultwarden = {
path = "/var/lib/vaultwarden"; path = "/var/lib/vaultwarden";
}; };
monero = {
dataDir = services_dir + "/monero";
};
matrix = {
dataDir = "/var/lib/continuwuity";
domain = "matrix.${https.domain}";
};
ntfy = {
domain = "ntfy.${https.domain}";
};
livekit = {
domain = "livekit.${https.domain}";
};
syncthing = {
dataDir = services_dir + "/syncthing";
signalBackupDir = "/${zpool_ssds}/bak/signal";
grayjayBackupDir = "/${zpool_ssds}/bak/grayjay";
};
prowlarr = {
dataDir = services_dir + "/prowlarr";
};
sonarr = {
dataDir = services_dir + "/sonarr";
};
radarr = {
dataDir = services_dir + "/radarr";
};
bazarr = {
dataDir = services_dir + "/bazarr";
};
jellyseerr = {
configDir = services_dir + "/jellyseerr";
};
recyclarr = {
dataDir = services_dir + "/recyclarr";
};
media = {
moviesDir = torrents_path + "/media/movies";
tvDir = torrents_path + "/media/tv";
};
}; };
pkgs = import nixpkgs { pkgs = import nixpkgs {
inherit system; inherit system;
hostPlatform = system; targetPlatform = system;
buildPlatform = builtins.currentSystem; buildPlatform = builtins.currentSystem;
}; };
lib = import ./lib.nix { inherit inputs pkgs; }; lib = import ./modules/lib.nix { inherit inputs pkgs service_configs; };
testSuite = import ./tests/tests.nix {
inherit pkgs lib inputs;
config = self.nixosConfigurations.muffin.config;
};
in in
{ {
formatter.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.nixfmt-rfc-style; formatter.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.nixfmt-tree;
nixosConfigurations.${hostname} = lib.nixosSystem { nixosConfigurations.${hostname} = lib.nixosSystem {
inherit system; inherit system;
specialArgs = { specialArgs = {
@@ -193,22 +284,24 @@
srvos.nixosModules.mixins-terminfo srvos.nixosModules.mixins-terminfo
./disk-config.nix ./disk-config.nix
disko.nixosModules.disko
./configuration.nix ./configuration.nix
vpn-confinement.nixosModules.default
# get nix-minecraft working!
nix-minecraft.nixosModules.minecraft-servers
{ {
nixpkgs.overlays = [ nixpkgs.overlays = [
nix-minecraft.overlay nix-minecraft.overlay
(import ./overlays.nix) (import ./modules/overlays.nix)
]; ];
nixpkgs.config.allowUnfreePredicate =
pkg:
builtins.elem (nixpkgs.lib.getName pkg) [
"minecraft-server"
];
} }
lanzaboote.nixosModules.lanzaboote lanzaboote.nixosModules.lanzaboote
arr-init.nixosModules.default
home-manager.nixosModules.home-manager home-manager.nixosModules.home-manager
( (
{ {
@@ -216,7 +309,7 @@
... ...
}: }:
{ {
home-manager.users.${username} = import ./home.nix; home-manager.users.${username} = import ./modules/home.nix;
} }
) )
] ]
@@ -237,24 +330,19 @@
}; };
}; };
packages.${system} = checks.${system} = testSuite;
let
testSuite = import ./tests/tests.nix { packages.${system} = {
inherit pkgs lib inputs; tests = pkgs.linkFarm "all-tests" (
config = self.nixosConfigurations.muffin.config; pkgs.lib.mapAttrsToList (name: test: {
}; name = name;
in path = test;
{ }) testSuite
tests = pkgs.linkFarm "all-tests" ( );
pkgs.lib.mapAttrsToList (name: test: { }
name = name; // (pkgs.lib.mapAttrs' (name: test: {
path = test; name = "test-${name}";
}) testSuite value = test;
); }) testSuite);
}
// (pkgs.lib.mapAttrs' (name: test: {
name = "test-${name}";
value = test;
}) testSuite);
}; };
} }
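The `service_configs` attrset above is threaded into every module via `specialArgs`, so ports and paths are defined once and read everywhere. A hedged sketch of a consuming module (the `jelly.` hostname and the Caddy wiring are illustrative assumptions, not taken from this diff):

```nix
# Hypothetical consumer: service_configs arrives through specialArgs,
# so any module in the `modules` list can take it as an argument.
{ config, pkgs, service_configs, ... }:
{
  # Hypothetical vhost name; the port comes from the shared attrset.
  services.caddy.virtualHosts."jelly.${service_configs.https.domain}" = {
    extraConfig = ''
      reverse_proxy localhost:${toString service_configs.ports.jellyfin}
    '';
  };
}
```

Centralizing ports this way lets firewall rules, reverse-proxy config, and the services themselves stay in sync from a single definition.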

lib.nix

@@ -1,103 +0,0 @@
{
inputs,
pkgs,
...
}:
inputs.nixpkgs.lib.extend (
final: prev:
let
lib = prev;
in
{
serviceMountDeps =
serviceName: dirs:
{ pkgs, ... }:
{
systemd.services."${serviceName}_mounts" = {
wants = [ "zfs.target" ];
before = [ "${serviceName}.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${lib.getExe pkgs.ensureZfsMounts} ${lib.strings.concatStringsSep " " dirs}";
};
};
systemd.services.${serviceName} = {
wants = [ "${serviceName}_mounts.service" ];
after = [ "${serviceName}_mounts.service" ];
requires = [ "${serviceName}_mounts.service" ];
};
};
# stolen from: https://stackoverflow.com/a/42398526
optimizeWithFlags =
pkg: flags:
lib.overrideDerivation pkg (
old:
let
newflags = lib.foldl' (acc: x: "${acc} ${x}") "" flags;
oldflags = if (lib.hasAttr "NIX_CFLAGS_COMPILE" old) then "${old.NIX_CFLAGS_COMPILE}" else "";
in
{
NIX_CFLAGS_COMPILE = "${oldflags} ${newflags}";
# stdenv = pkgs.clang19Stdenv;
}
);
optimizePackage =
pkg:
final.optimizeWithFlags pkg [
"-O3"
"-march=znver3"
"-mtune=znver3"
];
vpnNamespaceOpenPort =
port: service:
{ ... }:
{
vpnNamespaces.wg = {
portMappings = [
{
from = port;
to = port;
}
];
openVPNPorts = [
{
port = port;
protocol = "both";
}
];
};
systemd.services.${service}.vpnConfinement = {
enable = true;
vpnNamespace = "wg";
};
};
serviceDependZpool =
serviceName: zpool:
{ config, ... }:
{
config = lib.mkIf (zpool != "") {
systemd.services.${serviceName} = {
wants = [ "zfs-import-${zpool}.service" ];
after = [ "zfs-import-${zpool}.service" ];
requires = [ "zfs-import-${zpool}.service" ];
};
# assert that the pool is even enabled
assertions = [
{
assertion = builtins.elem zpool config.boot.zfs.extraPools;
message = "${zpool} is not enabled in `boot.zfs.extraPools`";
}
];
};
};
}
)

modules/age-secrets.nix (new file)

@@ -0,0 +1,84 @@
{
config,
lib,
pkgs,
inputs,
...
}:
{
imports = [
inputs.agenix.nixosModules.default
];
# Configure all agenix secrets
age.secrets = {
# ZFS encryption key
zfs-key = {
file = ../secrets/zfs-key.age;
mode = "0400";
owner = "root";
group = "root";
};
# Secureboot keys archive
secureboot-tar = {
file = ../secrets/secureboot.tar.age;
mode = "0400";
owner = "root";
group = "root";
};
# System passwords
hashedPass = {
file = ../secrets/hashedPass.age;
mode = "0400";
owner = "root";
group = "root";
};
# Service authentication
caddy_auth = {
file = ../secrets/caddy_auth.age;
mode = "0400";
owner = "caddy";
group = "caddy";
};
jellyfin-api-key = {
file = ../secrets/jellyfin-api-key.age;
mode = "0400";
owner = "root";
group = "root";
};
slskd_env = {
file = ../secrets/slskd_env.age;
mode = "0400";
owner = "root";
group = "root";
};
# Network configuration
wg0-conf = {
file = ../secrets/wg0.conf.age;
mode = "0400";
owner = "root";
group = "root";
};
# ntfy-alerts secrets
ntfy-alerts-topic = {
file = ../secrets/ntfy-alerts-topic.age;
mode = "0400";
owner = "root";
group = "root";
};
ntfy-alerts-token = {
file = ../secrets/ntfy-alerts-token.age;
mode = "0400";
owner = "root";
group = "root";
};
};
}
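Agenix decrypts each declared secret at activation time and exposes it as a file path (by default under `/run/agenix`); other modules consume the path, never the plaintext. A minimal consumer sketch — the `slskd` wiring is an assumption for illustration, not taken from this diff:

```nix
{ config, ... }:
{
  # The decrypted file is available at config.age.secrets.<name>.path.
  # Passing the path (rather than the contents) keeps the secret out of
  # the world-readable Nix store.
  services.slskd.environmentFile = config.age.secrets.slskd_env.path;
}
```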


@@ -1,6 +1,5 @@
{ {
pkgs, pkgs,
username,
lib, lib,
... ...
}: }:

modules/impermanence.nix (new file)

@@ -0,0 +1,70 @@
{
config,
lib,
pkgs,
username,
service_configs,
inputs,
...
}:
{
imports = [
inputs.impermanence.nixosModules.impermanence
];
environment.persistence."/persistent" = {
hideMounts = true;
directories = [
"/var/log"
"/var/lib/systemd/coredump"
"/var/lib/nixos"
"/var/lib/systemd/timers"
# ZFS cache directory - persisting the directory instead of the file
# avoids "device busy" errors when ZFS atomically updates the cache
"/etc/zfs"
];
files = [
# Machine ID
"/etc/machine-id"
];
users.${username} = {
files = [
".local/share/fish/fish_history"
];
};
users.root = {
files = [
".local/share/fish/fish_history"
];
};
};
# Store SSH host keys directly in /persistent to survive tmpfs root wipes.
# This is more reliable than bind mounts for service-generated files.
services.openssh.hostKeys = [
{
path = "/persistent/etc/ssh/ssh_host_ed25519_key";
type = "ed25519";
}
{
path = "/persistent/etc/ssh/ssh_host_rsa_key";
type = "rsa";
bits = 4096;
}
];
# Enforce root ownership on /persistent/etc. The impermanence activation
# script copies ownership from /persistent/etc to /etc via
# `chown --reference`. If /persistent/etc ever gets non-root ownership,
# sshd StrictModes rejects /etc/ssh/authorized_keys.d/root and root SSH
# breaks while non-root users still work.
# Use "z" (set ownership, non-recursive) not "d" (create only, no-op on existing).
systemd.tmpfiles.rules = [
"z /persistent/etc 0755 root root"
];
}

modules/lib.nix (new file)

@@ -0,0 +1,184 @@
{
inputs,
pkgs,
service_configs,
...
}:
inputs.nixpkgs.lib.extend (
final: prev:
let
lib = prev;
in
{
# stolen from: https://stackoverflow.com/a/42398526
optimizeWithFlags =
pkg: flags:
lib.overrideDerivation pkg (
old:
let
newflags = lib.foldl' (acc: x: "${acc} ${x}") "" flags;
oldflags = if (lib.hasAttr "NIX_CFLAGS_COMPILE" old) then "${old.NIX_CFLAGS_COMPILE}" else "";
in
{
NIX_CFLAGS_COMPILE = "${oldflags} ${newflags}";
# stdenv = pkgs.clang19Stdenv;
}
);
optimizePackage =
pkg:
final.optimizeWithFlags pkg [
"-O3"
"-march=${service_configs.cpu_arch}"
"-mtune=${service_configs.cpu_arch}"
];
vpnNamespaceOpenPort =
port: service:
{ ... }:
{
vpnNamespaces.wg = {
portMappings = [
{
from = port;
to = port;
}
];
openVPNPorts = [
{
port = port;
protocol = "both";
}
];
};
systemd.services.${service}.vpnConfinement = {
enable = true;
vpnNamespace = "wg";
};
};
serviceMountWithZpool =
serviceName: zpool: dirs:
{ pkgs, config, ... }:
{
systemd.services."${serviceName}-mounts" = {
wants = [ "zfs.target" ] ++ lib.optionals (zpool != "") [ "zfs-import-${zpool}.service" ];
after = lib.optionals (zpool != "") [ "zfs-import-${zpool}.service" ];
before = [ "${serviceName}.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = [
(lib.getExe (
pkgs.writeShellApplication {
name = "ensure-zfs-mounts-with-pool-${serviceName}-${zpool}";
runtimeInputs = with pkgs; [
gawk
coreutils
config.boot.zfs.package
];
text = ''
set -euo pipefail
echo "Ensuring ZFS mounts for service: ${serviceName} (pool: ${zpool})"
echo "Directories: ${lib.strings.concatStringsSep ", " dirs}"
# Validate mounts exist (ensureZfsMounts already has proper PATH)
${lib.getExe pkgs.ensureZfsMounts} ${lib.strings.concatStringsSep " " dirs}
# Additional runtime check: verify paths are on correct zpool
${lib.optionalString (zpool != "") ''
echo "Verifying ZFS mountpoints are on pool '${zpool}'..."
if ! zfs_list_output=$(zfs list -H -o name,mountpoint 2>&1); then
echo "ERROR: Failed to query ZFS datasets: $zfs_list_output" >&2
exit 1
fi
# shellcheck disable=SC2043
for target in ${lib.strings.concatStringsSep " " dirs}; do
echo "Checking: $target"
# Find dataset that has this mountpoint
dataset=$(echo "$zfs_list_output" | awk -v target="$target" '$2 == target {print $1; exit}')
if [ -z "$dataset" ]; then
echo "ERROR: No ZFS dataset found for mountpoint: $target" >&2
exit 1
fi
# Extract pool name from dataset (first part before /)
actual_pool=$(echo "$dataset" | cut -d'/' -f1)
if [ "$actual_pool" != "${zpool}" ]; then
echo "ERROR: ZFS pool mismatch for $target" >&2
echo " Expected pool: ${zpool}" >&2
echo " Actual pool: $actual_pool" >&2
echo " Dataset: $dataset" >&2
exit 1
fi
echo "$target is on $dataset (pool: $actual_pool)"
done
echo "All paths verified successfully on pool '${zpool}'"
''}
echo "Mount validation completed for ${serviceName} (pool: ${zpool})"
'';
}
))
];
};
};
systemd.services.${serviceName} = {
wants = [
"${serviceName}-mounts.service"
];
after = [
"${serviceName}-mounts.service"
];
requires = [
"${serviceName}-mounts.service"
];
};
# assert that the pool is even enabled
#assertions = lib.optionals (zpool != "") [
# {
# assertion = builtins.elem zpool config.boot.zfs.extraPools;
# message = "${zpool} is not enabled in `boot.zfs.extraPools`";
# }
#];
};
serviceFilePerms =
serviceName: tmpfilesRules:
{ pkgs, ... }:
let
confFile = pkgs.writeText "${serviceName}-file-perms.conf" (
lib.concatStringsSep "\n" tmpfilesRules
);
in
{
systemd.services."${serviceName}-file-perms" = {
after = [ "${serviceName}-mounts.service" ];
before = [ "${serviceName}.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.systemd}/bin/systemd-tmpfiles --create ${confFile}";
};
};
systemd.services.${serviceName} = {
wants = [ "${serviceName}-file-perms.service" ];
after = [ "${serviceName}-file-perms.service" ];
};
};
}
)
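The helpers above live on the extended `lib`, and each one returns a NixOS module, so a host configuration composes them via `imports`. A hedged usage sketch — the service names, pool, and directory are illustrative:

```nix
{ lib, ... }:
{
  imports = [
    # Gate jellyfin.service on its ZFS dataset being mounted from pool "tank"
    (lib.serviceMountWithZpool "jellyfin" "tank" [ "/services/jellyfin" ])
    # Map and open the torrent port inside the "wg" VPN namespace
    (lib.vpnNamespaceOpenPort 6011 "qbittorrent")
  ];
}
```

Because each helper emits ordinary `systemd.services.*` options, the generated `-mounts` and `-file-perms` units show up in `systemctl list-dependencies <service>` like any hand-written unit.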

modules/no-rgb.nix (new file)

@@ -0,0 +1,66 @@
{
config,
lib,
pkgs,
...
}:
{
systemd.services.no-rgb =
let
no-rgb = (
pkgs.writeShellApplication {
name = "no-rgb";
runtimeInputs = with pkgs; [
openrgb
coreutils
gnugrep
];
text = ''
# Retry loop to wait for hardware to be ready
NUM_DEVICES=0
for attempt in 1 2 3 4 5; do
DEVICE_LIST=$(openrgb --noautoconnect --list-devices 2>/dev/null) || DEVICE_LIST=""
NUM_DEVICES=$(echo "$DEVICE_LIST" | grep -cE '^[0-9]+: ') || NUM_DEVICES=0
if [ "$NUM_DEVICES" -gt 0 ]; then
break
fi
if [ "$attempt" -lt 5 ]; then
sleep 2
fi
done
# If no devices found after retries, exit gracefully
if [ "$NUM_DEVICES" -eq 0 ]; then
exit 0
fi
# Disable RGB on each device
for i in $(seq 0 $((NUM_DEVICES - 1))); do
openrgb --noautoconnect --device "$i" --mode direct --color 000000 || true
done
'';
}
);
in
{
description = "disable rgb";
after = [ "systemd-udev-settle.service" ];
serviceConfig = {
ExecStart = lib.getExe no-rgb;
Type = "oneshot";
Restart = "on-failure";
RestartSec = 5;
};
wantedBy = [ "multi-user.target" ];
};
services.hardware.openrgb = {
enable = true;
package = pkgs.openrgb-with-all-plugins;
motherboard = "amd";
};
services.udev.packages = [ pkgs.openrgb-with-all-plugins ];
hardware.i2c.enable = true;
}

modules/ntfy-alerts.nix (new file)

@@ -0,0 +1,132 @@
{
config,
lib,
pkgs,
...
}:
let
cfg = config.services.ntfyAlerts;
curl = "${pkgs.curl}/bin/curl";
hostname = config.networking.hostName;
# Build the curl auth args as a proper bash array fragment
authCurlArgs =
if cfg.tokenFile != null then
''
if [ -f "${cfg.tokenFile}" ]; then
TOKEN=$(cat "${cfg.tokenFile}" 2>/dev/null || echo "")
if [ -n "$TOKEN" ]; then
AUTH_ARGS=(-H "Authorization: Bearer $TOKEN")
fi
fi
''
else
"";
# Systemd failure alert script
systemdAlertScript = pkgs.writeShellScript "ntfy-systemd-alert" ''
set -euo pipefail
UNIT_NAME="$1"
SERVER_URL="${cfg.serverUrl}"
TOPIC=$(cat "${cfg.topicFile}" 2>/dev/null | tr -d '[:space:]')
if [ -z "$TOPIC" ]; then
echo "ERROR: Could not read topic from ${cfg.topicFile}"
exit 1
fi
# Get journal output for context
JOURNAL_OUTPUT=$(${pkgs.systemd}/bin/journalctl -u "$UNIT_NAME" -n 15 --no-pager 2>/dev/null || echo "No journal output available")
# Build auth args
AUTH_ARGS=()
${authCurlArgs}
# Send notification
${curl} -sf --max-time 15 -X POST \
"$SERVER_URL/$TOPIC" \
-H "Title: [${hostname}] Service failed: $UNIT_NAME" \
-H "Priority: high" \
-H "Tags: warning" \
"''${AUTH_ARGS[@]}" \
-d "$JOURNAL_OUTPUT" || true
'';
in
{
options.services.ntfyAlerts = {
enable = lib.mkEnableOption "ntfy push notifications for system alerts";
serverUrl = lib.mkOption {
type = lib.types.str;
description = "The ntfy server URL (e.g. https://ntfy.example.com)";
example = "https://ntfy.example.com";
};
topicFile = lib.mkOption {
type = lib.types.path;
description = "Path to a file containing the ntfy topic name to publish alerts to.";
example = "/run/agenix/ntfy-alerts-topic";
};
tokenFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = ''
Path to a file containing the ntfy auth token.
If set, uses Authorization: Bearer header for authentication.
'';
example = "/run/secrets/ntfy-token";
};
};
config = lib.mkIf cfg.enable {
# Per-service OnFailure for monitored services
systemd.services = {
"ntfy-alert@" = {
description = "Send ntfy notification for failed service %i";
unitConfig.OnFailure = lib.mkForce "";
serviceConfig = {
Type = "oneshot";
ExecStart = "${systemdAlertScript} %i";
TimeoutSec = 30;
};
};
# TODO: sanoid's ExecStartPre runs `zfs allow` which blocks on TXG sync;
# on the hdds pool (slow spinning disks + large async frees) this causes
# 30+ minute hangs and guaranteed timeouts. Suppress until we fix sanoid
# to run as root without `zfs allow`. See: nixpkgs#72060, openzfs/zfs#14180
"sanoid".unitConfig.OnFailure = lib.mkForce "";
};
# Global OnFailure drop-in for all services
systemd.packages = [
(pkgs.writeTextDir "etc/systemd/system/service.d/onfailure.conf" ''
[Unit]
OnFailure=ntfy-alert@%p.service
'')
# Sanoid-specific drop-in to override the global OnFailure (see TODO above)
(pkgs.writeTextDir "etc/systemd/system/sanoid.service.d/onfailure.conf" ''
[Unit]
OnFailure=
'')
];
# ZED (ZFS Event Daemon) ntfy notification settings
services.zfs.zed = {
enableMail = false;
settings = {
ZED_NTFY_URL = cfg.serverUrl;
ZED_NTFY_TOPIC = "$(cat ${cfg.topicFile} | tr -d '[:space:]')";
ZED_NTFY_ACCESS_TOKEN = lib.mkIf (cfg.tokenFile != null) "$(cat ${cfg.tokenFile})";
ZED_NOTIFY_VERBOSE = true;
};
};
};
}
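The alert script always sends Title/Priority/Tags headers and only adds the Bearer header when a tokenFile is configured. A sketch of that header assembly (names and values here are illustrative, not from the repo's secrets):

```python
def ntfy_headers(hostname, unit, token=None):
    # Mirrors the headers ntfy-systemd-alert passes to curl
    headers = {
        "Title": f"[{hostname}] Service failed: {unit}",
        "Priority": "high",
        "Tags": "warning",
    }
    if token:  # only present when services.ntfyAlerts.tokenFile is set
        headers["Authorization"] = f"Bearer {token}"
    return headers

print(ntfy_headers("muffin", "gitea.service", token="tk_abc")["Title"])
# [muffin] Service failed: gitea.service
```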


@@ -15,11 +15,11 @@ final: prev: {
        exit 1
      fi
-     MOUNTED=$(zfs list -o mountpoint,mounted -H | awk '$NF == "yes" {$NF=""; print $0}' | sed 's/[[:space:]]*$//')
+     MOUNTED=$(zfs list -o mountpoint,mounted -H | awk '$NF == "yes" {NF--; print}')
      MISSING=""
      for target in "$@"; do
-       if ! echo "$MOUNTED" | grep -Fxq "$target"; then
+       if ! grep -Fxq "$target" <<< "$MOUNTED"; then
          MISSING="$MISSING $target"
        fi
      done
@@ -43,16 +43,4 @@ final: prev: {
      }
    );
  };
-  list-usb-drives = prev.writeShellApplication {
-    name = "list-usb-drives";
-    runtimeInputs = with prev; [
-      findutils
-      coreutils
-    ];
-    text = ''
-      find "/dev/disk/by-id" -name "usb*" -not -name "*-part[0-9]" -printf "%f\n" | sed 's/^usb\-//g' | sed 's/\-[0-9]*\:/ /g' | column -t --table-columns=DRIVE,BAY | sort -n -k 2
-    '';
-  };
 }

modules/secureboot.nix Normal file

@@ -0,0 +1,41 @@
{
config,
lib,
pkgs,
...
}:
{
boot = {
loader.systemd-boot.enable = lib.mkForce false;
lanzaboote = {
enable = true;
# must live in `/etc/secureboot` for sbctl to work
pkiBundle = "/etc/secureboot";
};
};
system.activationScripts = {
# extract secureboot keys from agenix-decrypted tar
"secureboot-keys" = {
deps = [ "agenix" ];
text = ''
#!/bin/sh
# Check if keys already exist (e.g., from disko-install)
if [[ -d ${config.boot.lanzaboote.pkiBundle} && -f ${config.boot.lanzaboote.pkiBundle}/db.key ]]; then
echo "Secureboot keys already present, skipping extraction"
chown -R root:wheel ${config.boot.lanzaboote.pkiBundle}
chmod -R 500 ${config.boot.lanzaboote.pkiBundle}
else
echo "Extracting secureboot keys from agenix"
rm -fr ${config.boot.lanzaboote.pkiBundle} || true
mkdir -p ${config.boot.lanzaboote.pkiBundle}
${pkgs.gnutar}/bin/tar xf ${config.age.secrets.secureboot-tar.path} -C ${config.boot.lanzaboote.pkiBundle}
chown -R root:wheel ${config.boot.lanzaboote.pkiBundle}
chmod -R 500 ${config.boot.lanzaboote.pkiBundle}
fi
'';
};
};
}

modules/security.nix Normal file

@@ -0,0 +1,37 @@
{
config,
lib,
pkgs,
...
}:
{
# memory allocator
# BREAKS REDIS-IMMICH
# environment.memoryAllocator.provider = "graphene-hardened";
# disable coredumps
systemd.coredump.enable = false;
services = {
dbus.implementation = "broker";
/*
logrotate.enable = true;
journald = {
storage = "volatile"; # Store logs in memory
upload.enable = false; # Disable remote log upload (the default)
extraConfig = ''
SystemMaxUse=500M
SystemMaxFileSize=50M
'';
};
*/
};
services.fail2ban = {
enable = true;
# Use iptables actions for compatibility
banaction = "iptables-multiport";
banaction-allports = "iptables-allports";
};
}

modules/usb-secrets.nix Normal file

@@ -0,0 +1,22 @@
{
config,
lib,
pkgs,
...
}:
{
# Mount USB secrets drive via fileSystems
fileSystems."/mnt/usb-secrets" = {
device = "/dev/disk/by-label/SECRETS";
fsType = "vfat";
options = [
"ro"
"uid=root"
"gid=root"
"umask=377"
];
neededForBoot = true;
};
age.identityPaths = [ "/mnt/usb-secrets/usb-secrets-key" ];
}
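The `umask=377` mount option is doing the permission work here: vfat has no native permission bits, so each entry's mode is derived as `0777 & ~umask`. A quick check of that arithmetic:

```python
# vfat derives entry modes from the mount-time umask: mode = 0777 & ~umask
umask = 0o377
mode = 0o777 & ~umask
print(oct(mode))  # 0o400
```

Combined with `uid=root`/`gid=root`, every file on the secrets drive ends up read-only for root and invisible to everyone else.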


@@ -1,4 +1,5 @@
 {
+  config,
   service_configs,
   pkgs,
   ...
@@ -10,13 +11,14 @@ let
 in
 {
   system.activationScripts = {
-    # TODO! replace with proper secrets management
+    # Copy decrypted ZFS key from agenix to expected location
+    # /etc is on tmpfs due to impermanence, so no persistent storage risk
     "zfs-key".text = ''
       #!/bin/sh
-      rm -fr ${zfs-key} || true
-      cp ${./secrets/zfs-key} ${zfs-key}
-      chmod 0500 ${zfs-key}
-      chown root:wheel ${zfs-key}
+      rm -f ${zfs-key} || true
+      cp ${config.age.secrets.zfs-key.path} ${zfs-key}
+      chmod 0400 ${zfs-key}
+      chown root:root ${zfs-key}
     '';
   };
@@ -25,13 +27,17 @@ in
   boot.kernelParams =
     let
-      gb = 20;
+      gb = 32;
       mb = gb * 1000;
       kb = mb * 1000;
       b = kb * 1000;
     in
     [
       "zfs.zfs_arc_max=${builtins.toString b}"
+      "zfs.zfs_txg_timeout=30" # longer TXG open time = larger sequential writes = better HDD throughput
+      "zfs.zfs_dirty_data_max=8589934592" # 8GB dirty data buffer (default 4GB) for USB HDD write smoothing
+      "zfs.zfs_delay_min_dirty_percent=80" # delay write throttling until 80% dirty (default 60%)
+      "zfs.zfs_vdev_async_write_max_active=30" # more concurrent async writes to vdevs (default 10)
     ];
   boot.supportedFilesystems = [ "zfs" ];
@@ -62,7 +68,7 @@ in
       yearly = 0;
     };
-    datasets."${service_configs.zpool_ssds}/services/jellyfin_cache" = {
+    datasets."${service_configs.zpool_ssds}/services/jellyfin/cache" = {
       recursive = true;
       autoprune = true;
       autosnap = true;
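The `gb`/`mb`/`kb`/`b` ladder computes the ARC cap in decimal bytes (powers of 1000, not 1024), so `gb = 32` yields 32 GB rather than 32 GiB. The arithmetic, spelled out:

```python
# Same ladder as the Nix let-block: decimal SI multiples
gb = 32
b = gb * 1000 * 1000 * 1000
print(f"zfs.zfs_arc_max={b}")  # zfs.zfs_arc_max=32000000000

# For comparison, 32 GiB (binary multiples) would be larger:
print(32 * 1024**3)  # 34359738368
```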

scripts/install.sh Executable file

@@ -0,0 +1,88 @@
#!/usr/bin/env bash
set -euo pipefail
DISK="${1:-}"
FLAKE_DIR="$(dirname "$(realpath "$0")")"
if [[ -z "$DISK" ]]; then
echo "Usage: $0 <disk_device>"
echo "Example: $0 /dev/nvme0n1"
echo " $0 /dev/sda"
exit 1
fi
if [[ ! -b "$DISK" ]]; then
echo "Error: $DISK is not a block device"
exit 1
fi
echo "Installing NixOS to $DISK using flake at $FLAKE_DIR"
# Create temporary directories
mkdir -p /tmp/secureboot
mkdir -p /tmp/persistent
# Function to cleanup on exit
cleanup() {
echo "Cleaning up..."
rm -rf /tmp/secureboot 2>/dev/null || true
rm -rf /tmp/persistent 2>/dev/null || true
}
trap cleanup EXIT
# Decrypt secureboot keys using the key in the repo
echo "Decrypting secureboot keys..."
if [[ ! -f "$FLAKE_DIR/usb-secrets/usb-secrets-key" ]]; then
echo "Error: usb-secrets-key not found at $FLAKE_DIR/usb-secrets/usb-secrets-key"
exit 1
fi
nix-shell -p age --run "age -d -i '$FLAKE_DIR/usb-secrets/usb-secrets-key' '$FLAKE_DIR/secrets/secureboot.tar.age'" | \
tar -x -C /tmp/secureboot
echo "Secureboot keys extracted"
# Extract persistent partition secrets
echo "Extracting persistent partition contents..."
if [[ -f "$FLAKE_DIR/secrets/persistent.tar" ]]; then
tar -xf "$FLAKE_DIR/secrets/persistent.tar" -C /tmp/persistent
echo "Persistent partition contents extracted"
else
echo "Warning: persistent.tar not found, skipping persistent secrets"
fi
# Check if disko-install is available
if ! command -v disko-install >/dev/null 2>&1; then
echo "Running disko-install via nix..."
DISKO_INSTALL="nix run github:nix-community/disko#disko-install --"
else
DISKO_INSTALL="disko-install"
fi
echo "Running disko-install to partition, format, and install NixOS..."
# Build the extra-files arguments
EXTRA_FILES_ARGS=(
--extra-files /tmp/secureboot /etc/secureboot
--extra-files "$FLAKE_DIR/usb-secrets/usb-secrets-key" /mnt/usb-secrets/usb-secrets-key
)
# Add each top-level item from persistent separately to avoid nesting
# cp -ar creates /dst/src when copying directories, so we need to copy each item
#
# disko-install copies (rather than moves) the extra-files sources, so per-item staging is safe here
if [[ -d /tmp/persistent ]] && [[ -n "$(ls -A /tmp/persistent 2>/dev/null)" ]]; then
for item in /tmp/persistent/*; do
if [[ -e "$item" ]]; then
basename=$(basename "$item")
EXTRA_FILES_ARGS+=(--extra-files "$item" "/persistent/$basename")
fi
done
fi
# Run disko-install with secureboot keys available
sudo $DISKO_INSTALL \
--mode format \
--flake "$FLAKE_DIR#muffin" \
--disk main "$DISK" \
"${EXTRA_FILES_ARGS[@]}"

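The per-item loop in install.sh exists because recursively copying a directory into an existing destination nests it one level deep (`/dst/src/...`). A small Python sketch of the same layout concern, using throwaway temp paths that stand in for `/tmp/persistent` and `/persistent`:

```python
import shutil
import tempfile
from pathlib import Path

staging = Path(tempfile.mkdtemp())  # stands in for /tmp/persistent
(staging / "var").mkdir()
(staging / "var" / "zfs-key").write_text("dummy")

target = Path(tempfile.mkdtemp())   # stands in for /persistent

# Copy each top-level item separately, as the install script does, so we
# end up with target/var rather than target/<staging-name>/var.
for item in staging.iterdir():
    dest = target / item.name
    if item.is_dir():
        shutil.copytree(item, dest)
    else:
        shutil.copy2(item, dest)

print((target / "var" / "zfs-key").read_text())  # dummy
```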
BIN secrets/caddy_auth.age Normal file
BIN secrets/hashedPass.age Normal file
BIN secrets/livekit_keys Normal file
BIN secrets/persistent.tar Normal file
BIN secrets/secureboot.tar.age Normal file
BIN secrets/slskd_env.age Normal file
BIN secrets/wg0.conf.age Normal file
BIN secrets/xmrig-wallet Normal file
BIN secrets/zfs-key.age Normal file
(several existing binary secrets also changed; contents not shown)

services/arr/bazarr.nix Normal file

@@ -0,0 +1,34 @@
{
pkgs,
config,
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "bazarr" service_configs.zpool_ssds [
service_configs.bazarr.dataDir
])
(lib.serviceMountWithZpool "bazarr" service_configs.zpool_hdds [
service_configs.torrents_path
])
(lib.serviceFilePerms "bazarr" [
"Z ${service_configs.bazarr.dataDir} 0700 ${config.services.bazarr.user} ${config.services.bazarr.group}"
])
];
services.bazarr = {
enable = true;
listenPort = service_configs.ports.bazarr;
};
services.caddy.virtualHosts."bazarr.${service_configs.https.domain}".extraConfig = ''
import ${config.age.secrets.caddy_auth.path}
reverse_proxy :${builtins.toString service_configs.ports.bazarr}
'';
users.users.${config.services.bazarr.user}.extraGroups = [
service_configs.media_group
];
}

services/arr/init.nix Normal file

@@ -0,0 +1,115 @@
{ config, service_configs, ... }:
{
services.arrInit = {
prowlarr = {
enable = true;
serviceName = "prowlarr";
port = service_configs.ports.prowlarr;
dataDir = service_configs.prowlarr.dataDir;
apiVersion = "v1";
networkNamespacePath = "/run/netns/wg";
syncedApps = [
{
name = "Sonarr";
implementation = "Sonarr";
configContract = "SonarrSettings";
prowlarrUrl = "http://localhost:${builtins.toString service_configs.ports.prowlarr}";
baseUrl = "http://${config.vpnNamespaces.wg.bridgeAddress}:${builtins.toString service_configs.ports.sonarr}";
apiKeyFrom = "${service_configs.sonarr.dataDir}/config.xml";
syncCategories = [
5000
5010
5020
5030
5040
5045
5050
5090
];
serviceName = "sonarr";
}
{
name = "Radarr";
implementation = "Radarr";
configContract = "RadarrSettings";
prowlarrUrl = "http://localhost:${builtins.toString service_configs.ports.prowlarr}";
baseUrl = "http://${config.vpnNamespaces.wg.bridgeAddress}:${builtins.toString service_configs.ports.radarr}";
apiKeyFrom = "${service_configs.radarr.dataDir}/config.xml";
syncCategories = [
2000
2010
2020
2030
2040
2045
2050
2060
2070
2080
];
serviceName = "radarr";
}
];
};
sonarr = {
enable = true;
serviceName = "sonarr";
port = service_configs.ports.sonarr;
dataDir = service_configs.sonarr.dataDir;
rootFolders = [ service_configs.media.tvDir ];
downloadClients = [
{
name = "qBittorrent";
implementation = "QBittorrent";
configContract = "QBittorrentSettings";
fields = {
host = config.vpnNamespaces.wg.namespaceAddress;
port = service_configs.ports.torrent;
useSsl = false;
tvCategory = "tvshows";
};
}
];
};
radarr = {
enable = true;
serviceName = "radarr";
port = service_configs.ports.radarr;
dataDir = service_configs.radarr.dataDir;
rootFolders = [ service_configs.media.moviesDir ];
downloadClients = [
{
name = "qBittorrent";
implementation = "QBittorrent";
configContract = "QBittorrentSettings";
fields = {
host = config.vpnNamespaces.wg.namespaceAddress;
port = service_configs.ports.torrent;
useSsl = false;
movieCategory = "movies";
};
}
];
};
};
services.bazarrInit = {
enable = true;
dataDir = "/var/lib/bazarr";
port = service_configs.ports.bazarr;
sonarr = {
enable = true;
dataDir = service_configs.sonarr.dataDir;
port = service_configs.ports.sonarr;
serviceName = "sonarr";
};
radarr = {
enable = true;
dataDir = service_configs.radarr.dataDir;
port = service_configs.ports.radarr;
serviceName = "radarr";
};
};
}


@@ -0,0 +1,43 @@
{
pkgs,
config,
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "jellyseerr" service_configs.zpool_ssds [
service_configs.jellyseerr.configDir
])
(lib.serviceFilePerms "jellyseerr" [
"Z ${service_configs.jellyseerr.configDir} 0700 jellyseerr jellyseerr"
])
];
services.jellyseerr = {
enable = true;
port = service_configs.ports.jellyseerr;
configDir = service_configs.jellyseerr.configDir;
};
systemd.services.jellyseerr.serviceConfig = {
DynamicUser = lib.mkForce false;
User = "jellyseerr";
Group = "jellyseerr";
ReadWritePaths = [ service_configs.jellyseerr.configDir ];
};
users.users.jellyseerr = {
isSystemUser = true;
group = "jellyseerr";
home = service_configs.jellyseerr.configDir;
};
users.groups.jellyseerr = { };
services.caddy.virtualHosts."jellyseerr.${service_configs.https.domain}".extraConfig = ''
# import ${config.age.secrets.caddy_auth.path}
reverse_proxy :${builtins.toString service_configs.ports.jellyseerr}
'';
}

services/arr/prowlarr.nix Normal file

@@ -0,0 +1,30 @@
{
pkgs,
service_configs,
config,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "prowlarr" service_configs.zpool_ssds [
service_configs.prowlarr.dataDir
])
(lib.vpnNamespaceOpenPort service_configs.ports.prowlarr "prowlarr")
];
services.prowlarr = {
enable = true;
dataDir = service_configs.prowlarr.dataDir;
settings.server.port = service_configs.ports.prowlarr;
};
systemd.services.prowlarr.serviceConfig = {
ExecStartPre = "+${pkgs.coreutils}/bin/chown -R prowlarr /var/lib/prowlarr";
};
services.caddy.virtualHosts."prowlarr.${service_configs.https.domain}".extraConfig = ''
import ${config.age.secrets.caddy_auth.path}
reverse_proxy ${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.prowlarr}
'';
}

services/arr/radarr.nix Normal file

@@ -0,0 +1,36 @@
{
pkgs,
config,
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "radarr" service_configs.zpool_ssds [
service_configs.radarr.dataDir
])
(lib.serviceMountWithZpool "radarr" service_configs.zpool_hdds [
service_configs.torrents_path
])
(lib.serviceFilePerms "radarr" [
"Z ${service_configs.radarr.dataDir} 0700 ${config.services.radarr.user} ${config.services.radarr.group}"
])
];
services.radarr = {
enable = true;
dataDir = service_configs.radarr.dataDir;
settings.server.port = service_configs.ports.radarr;
settings.update.mechanism = "external";
};
services.caddy.virtualHosts."radarr.${service_configs.https.domain}".extraConfig = ''
import ${config.age.secrets.caddy_auth.path}
reverse_proxy :${builtins.toString service_configs.ports.radarr}
'';
users.users.${config.services.radarr.user}.extraGroups = [
service_configs.media_group
];
}

services/arr/recyclarr.nix Normal file

@@ -0,0 +1,202 @@
{
pkgs,
config,
service_configs,
lib,
...
}:
let
radarrConfig = "${service_configs.radarr.dataDir}/config.xml";
sonarrConfig = "${service_configs.sonarr.dataDir}/config.xml";
appDataDir = "${service_configs.recyclarr.dataDir}/data";
# Runs as root (via + prefix) to read API keys, writes secrets.yml for recyclarr
generateSecrets = pkgs.writeShellScript "recyclarr-generate-secrets" ''
RADARR_KEY=$(${pkgs.gnugrep}/bin/grep -oP '(?<=<ApiKey>)[^<]+' ${radarrConfig})
SONARR_KEY=$(${pkgs.gnugrep}/bin/grep -oP '(?<=<ApiKey>)[^<]+' ${sonarrConfig})
cat > ${appDataDir}/secrets.yml <<EOF
movies_api_key: $RADARR_KEY
series_api_key: $SONARR_KEY
EOF
chown recyclarr:recyclarr ${appDataDir}/secrets.yml
chmod 600 ${appDataDir}/secrets.yml
'';
in
{
imports = [
(lib.serviceMountWithZpool "recyclarr" service_configs.zpool_ssds [
service_configs.recyclarr.dataDir
])
];
systemd.tmpfiles.rules = [
"d ${service_configs.recyclarr.dataDir} 0755 recyclarr recyclarr -"
"d ${appDataDir} 0755 recyclarr recyclarr -"
];
services.recyclarr = {
enable = true;
command = "sync";
schedule = "daily";
user = "recyclarr";
group = "recyclarr";
configuration = {
radarr.movies = {
base_url = "http://localhost:${builtins.toString service_configs.ports.radarr}";
include = [
{ template = "radarr-quality-definition-movie"; }
{ template = "radarr-quality-profile-remux-web-2160p"; }
{ template = "radarr-custom-formats-remux-web-2160p"; }
];
quality_profiles = [
{
name = "Remux + WEB 2160p";
upgrade = {
allowed = true;
until_quality = "Remux-2160p";
};
qualities = [
{ name = "Remux-2160p"; }
{
name = "WEB 2160p";
qualities = [
"WEBDL-2160p"
"WEBRip-2160p"
];
}
{ name = "Remux-1080p"; }
{ name = "Bluray-1080p"; }
{
name = "WEB 1080p";
qualities = [
"WEBDL-1080p"
"WEBRip-1080p"
];
}
{ name = "HDTV-1080p"; }
];
}
];
custom_formats = [
# Upscaled
{
trash_ids = [ "bfd8eb01832d646a0a89c4deb46f8564" ];
assign_scores_to = [
{
name = "Remux + WEB 2160p";
score = -10000;
}
];
}
# x265 (HD) - override template -10000 penalty
{
trash_ids = [ "dc98083864ea246d05a42df0d05f81cc" ];
assign_scores_to = [
{
name = "Remux + WEB 2160p";
score = 0;
}
];
}
# x265 (no HDR/DV) - override template -10000 penalty
{
trash_ids = [ "839bea857ed2c0a8e084f3cbdbd65ecb" ];
assign_scores_to = [
{
name = "Remux + WEB 2160p";
score = 0;
}
];
}
];
};
sonarr.series = {
base_url = "http://localhost:${builtins.toString service_configs.ports.sonarr}";
include = [
{ template = "sonarr-quality-definition-series"; }
{ template = "sonarr-v4-quality-profile-web-2160p"; }
{ template = "sonarr-v4-custom-formats-web-2160p"; }
];
quality_profiles = [
{
name = "WEB-2160p";
upgrade = {
allowed = true;
until_quality = "WEB 2160p";
};
qualities = [
{
name = "WEB 2160p";
qualities = [
"WEBDL-2160p"
"WEBRip-2160p"
];
}
{ name = "Bluray-1080p Remux"; }
{ name = "Bluray-1080p"; }
{
name = "WEB 1080p";
qualities = [
"WEBDL-1080p"
"WEBRip-1080p"
];
}
{ name = "HDTV-1080p"; }
];
}
];
custom_formats = [
# Upscaled
{
trash_ids = [ "23297a736ca77c0fc8e70f8edd7ee56c" ];
assign_scores_to = [
{
name = "WEB-2160p";
score = -10000;
}
];
}
# x265 (HD) - override template -10000 penalty
{
trash_ids = [ "47435ece6b99a0b477caf360e79ba0bb" ];
assign_scores_to = [
{
name = "WEB-2160p";
score = 0;
}
];
}
# x265 (no HDR/DV) - override template -10000 penalty
{
trash_ids = [ "9b64dff695c2115facf1b6ea59c9bd07" ];
assign_scores_to = [
{
name = "WEB-2160p";
score = 0;
}
];
}
];
};
};
};
# Add secrets generation before recyclarr runs
systemd.services.recyclarr = {
after = [
"network-online.target"
"radarr.service"
"sonarr.service"
];
wants = [ "network-online.target" ];
serviceConfig.ExecStartPre = "+${generateSecrets}";
};
}
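The secrets generator scrapes each app's API key out of its config.xml with a lookbehind grep. The same extraction in Python, run against a made-up config snippet:

```python
import re

# Hypothetical *arr config.xml fragment, for illustration only
config_xml = "<Config><ApiKey>abc123def456</ApiKey><Port>7878</Port></Config>"

# Equivalent of: grep -oP '(?<=<ApiKey>)[^<]+'
m = re.search(r"<ApiKey>([^<]+)</ApiKey>", config_xml)
api_key = m.group(1) if m else None
print(api_key)  # abc123def456
```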

services/arr/sonarr.nix Normal file

@@ -0,0 +1,42 @@
{
pkgs,
config,
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "sonarr" service_configs.zpool_ssds [
service_configs.sonarr.dataDir
])
(lib.serviceMountWithZpool "sonarr" service_configs.zpool_hdds [
service_configs.torrents_path
])
(lib.serviceFilePerms "sonarr" [
"Z ${service_configs.sonarr.dataDir} 0700 ${config.services.sonarr.user} ${config.services.sonarr.group}"
])
];
systemd.tmpfiles.rules = [
"d /torrents/media 2775 root ${service_configs.media_group} -"
"d ${service_configs.media.tvDir} 2775 root ${service_configs.media_group} -"
"d ${service_configs.media.moviesDir} 2775 root ${service_configs.media_group} -"
];
services.sonarr = {
enable = true;
dataDir = service_configs.sonarr.dataDir;
settings.server.port = service_configs.ports.sonarr;
settings.update.mechanism = "external";
};
services.caddy.virtualHosts."sonarr.${service_configs.https.domain}".extraConfig = ''
import ${config.age.secrets.caddy_auth.path}
reverse_proxy :${builtins.toString service_configs.ports.sonarr}
'';
users.users.${config.services.sonarr.user}.extraGroups = [
service_configs.media_group
];
}


@@ -25,7 +25,7 @@
   };
   services.caddy.virtualHosts."bitmagnet.${service_configs.https.domain}".extraConfig = ''
-    ${builtins.readFile ../secrets/caddy_auth}
-    reverse_proxy ${service_configs.https.wg_ip}:${builtins.toString service_configs.ports.bitmagnet}
+    import ${config.age.secrets.caddy_auth.path}
+    reverse_proxy ${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.bitmagnet}
   '';
 }


@@ -7,16 +7,18 @@
 }:
 {
   imports = [
-    (lib.serviceMountDeps "vaultwarden" [
+    (lib.serviceMountWithZpool "vaultwarden" service_configs.zpool_ssds [
       service_configs.vaultwarden.path
       config.services.vaultwarden.backupDir
     ])
-    (lib.serviceMountDeps "backup-vaultwarden" [
+    (lib.serviceMountWithZpool "backup-vaultwarden" service_configs.zpool_ssds [
       service_configs.vaultwarden.path
       config.services.vaultwarden.backupDir
     ])
-    (lib.serviceDependZpool "vaultwarden" service_configs.zpool_ssds)
-    (lib.serviceDependZpool "backup-vaultwarden" service_configs.zpool_ssds)
+    (lib.serviceFilePerms "vaultwarden" [
+      "Z ${service_configs.vaultwarden.path} 0700 vaultwarden vaultwarden"
+      "Z ${config.services.vaultwarden.backupDir} 0700 vaultwarden vaultwarden"
+    ])
   ];
   services.vaultwarden = {
@@ -41,8 +43,18 @@
     }
   '';
-  systemd.tmpfiles.rules = [
-    "d ${service_configs.vaultwarden.path} 0700 vaultwarden vaultwarden"
-    "d ${config.services.vaultwarden.backupDir} 0700 vaultwarden vaultwarden"
-  ];
+  # Protect Vaultwarden login from brute force attacks
+  services.fail2ban.jails.vaultwarden = {
+    enabled = true;
+    settings = {
+      backend = "systemd";
+      port = "http,https";
+      # defaults: maxretry=5, findtime=10m, bantime=10m
+    };
+    filter.Definition = {
+      failregex = ''^.*Username or password is incorrect\. Try again\. IP: <HOST>\..*$'';
+      ignoreregex = "";
+      journalmatch = "_SYSTEMD_UNIT=vaultwarden.service";
+    };
+  };
 }


@@ -1,7 +1,6 @@
 {
   config,
   service_configs,
-  username,
   pkgs,
   lib,
   inputs,
@@ -45,10 +44,9 @@ let
 in
 {
   imports = [
-    (lib.serviceMountDeps "caddy" [
+    (lib.serviceMountWithZpool "caddy" service_configs.zpool_ssds [
       config.services.caddy.dataDir
     ])
-    (lib.serviceDependZpool "caddy" service_configs.zpool_ssds)
   ];
   services.caddy = {
@@ -76,14 +74,34 @@ in
     service_configs.ports.https
     # http (but really acmeCA challenges)
-    80
+    service_configs.ports.http
   ];
   networking.firewall.allowedUDPPorts = [
     service_configs.ports.https
   ];
-  users.users.${username}.extraGroups = [
-    config.services.caddy.group
-  ];
+  # Protect Caddy basic auth endpoints from brute force attacks
+  services.fail2ban.jails.caddy-auth = {
+    enabled = true;
+    settings = {
+      backend = "auto";
+      port = "http,https";
+      logpath = "/var/log/caddy/access-*.log";
+      # defaults: maxretry=5, findtime=10m, bantime=10m
+      # Ignore local network IPs - NAT hairpinning causes all LAN traffic to
+      # appear from the router IP (192.168.1.1). Banning it blocks all internal access.
+      ignoreip = "127.0.0.1/8 ::1 192.168.1.0/24";
+    };
+    filter.Definition = {
+      # Only match 401s where an Authorization header was actually sent.
+      # Without this, the normal HTTP Basic Auth challenge-response flow
+      # (browser probes without credentials, gets 401, then resends with
+      # credentials) counts every page visit as a "failure."
+      failregex = ''^.*"remote_ip":"<HOST>".*"Authorization":\["REDACTED"\].*"status":401.*$'';
+      ignoreregex = "";
+      datepattern = ''"ts":{Epoch}\.'';
+    };
+  };
 }
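The caddy-auth failregex is easy to sanity-check offline. fail2ban expands `<HOST>` to its own host-matching group; the sketch below substitutes a simplified stand-in and tests it against made-up Caddy JSON access-log lines (not real logs from this host):

```python
import re

# Simplified stand-in for fail2ban's <HOST> expansion
failregex = r'^.*"remote_ip":"(?P<host>[0-9.]+)".*"Authorization":\["REDACTED"\].*"status":401.*$'

# Fabricated log lines for illustration
with_auth = '{"ts":1700000000.123,"request":{"remote_ip":"203.0.113.5","headers":{"Authorization":["REDACTED"]}},"status":401}'
no_auth = '{"ts":1700000000.456,"request":{"remote_ip":"203.0.113.5","headers":{}},"status":401}'

print(bool(re.match(failregex, with_auth)))  # True  - credentials were sent and rejected
print(bool(re.match(failregex, no_auth)))    # False - plain 401 challenge, not a failure
```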

services/coturn.nix Normal file

@@ -0,0 +1,59 @@
{
config,
lib,
service_configs,
...
}:
{
services.coturn = {
enable = true;
realm = service_configs.https.domain;
use-auth-secret = true;
static-auth-secret = lib.strings.trim (builtins.readFile ../secrets/coturn_static_auth_secret);
listening-port = service_configs.ports.coturn;
tls-listening-port = service_configs.ports.coturn_tls;
no-cli = true;
# recommended security settings from Synapse's coturn docs
extraConfig = ''
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
denied-peer-ip=0.0.0.0-0.255.255.255
denied-peer-ip=100.64.0.0-100.127.255.255
denied-peer-ip=169.254.0.0-169.254.255.255
denied-peer-ip=192.0.0.0-192.0.0.255
denied-peer-ip=198.18.0.0-198.19.255.255
denied-peer-ip=198.51.100.0-198.51.100.255
denied-peer-ip=203.0.113.0-203.0.113.255
denied-peer-ip=240.0.0.0-255.255.255.255
denied-peer-ip=::1
denied-peer-ip=64:ff9b::-64:ff9b::ffff:ffff
denied-peer-ip=::ffff:0.0.0.0-::ffff:255.255.255.255
denied-peer-ip=100::-100::ffff:ffff:ffff:ffff
denied-peer-ip=2001::-2001:1ff:ffff:ffff:ffff:ffff:ffff:ffff
denied-peer-ip=2002::-2002:ffff:ffff:ffff:ffff:ffff:ffff:ffff
denied-peer-ip=fc00::-fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
denied-peer-ip=fe80::-febf:ffff:ffff:ffff:ffff:ffff:ffff:ffff
'';
};
# coturn needs these ports open
networking.firewall = {
allowedTCPPorts = [
service_configs.ports.coturn
service_configs.ports.coturn_tls
];
allowedUDPPorts = [
service_configs.ports.coturn
service_configs.ports.coturn_tls
];
# relay port range
allowedUDPPortRanges = [
{
from = config.services.coturn.min-port;
to = config.services.coturn.max-port;
}
];
};
}
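The denied-peer-ip list blocks coturn from relaying into private, link-local, and other special-purpose ranges. A sketch of the same check with Python's `ipaddress`, using a small subset of the ranges for illustration:

```python
import ipaddress

# A few of the denied peer ranges from the config above (not the full list)
denied = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("169.254.0.0/16"),
]

def is_denied(peer: str) -> bool:
    ip = ipaddress.ip_address(peer)
    return any(ip in net for net in denied)

print(is_denied("192.168.1.50"))  # True  - LAN peer, relay refused
print(is_denied("203.0.113.7"))   # False - public address, relay allowed
```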


@@ -3,13 +3,14 @@
   lib,
   config,
   service_configs,
-  username,
   ...
 }:
 {
   imports = [
-    (lib.serviceMountDeps "gitea" [ config.services.gitea.stateDir ])
-    (lib.serviceDependZpool "gitea" service_configs.zpool_ssds)
+    (lib.serviceMountWithZpool "gitea" service_configs.zpool_ssds [ config.services.gitea.stateDir ])
+    (lib.serviceFilePerms "gitea" [
+      "Z ${config.services.gitea.stateDir} 0700 ${config.services.gitea.user} ${config.services.gitea.group}"
+    ])
   ];
   services.gitea = {
@@ -43,11 +44,6 @@
     reverse_proxy :${builtins.toString config.services.gitea.settings.server.HTTP_PORT}
   '';
-  systemd.tmpfiles.rules = [
-    # 0700 for ssh permission reasons
-    "d ${config.services.gitea.stateDir} 0700 ${config.services.gitea.user} ${config.services.gitea.group}"
-  ];
   services.postgresql = {
     ensureDatabases = [ config.services.gitea.user ];
     ensureUsers = [
@@ -61,7 +57,18 @@
   services.openssh.settings.AllowUsers = [ config.services.gitea.user ];
-  users.users.${username}.extraGroups = [
-    config.services.gitea.group
-  ];
+  # Protect Gitea login from brute force attacks
+  services.fail2ban.jails.gitea = {
+    enabled = true;
+    settings = {
+      backend = "systemd";
+      port = "http,https";
+      # defaults: maxretry=5, findtime=10m, bantime=10m
+    };
+    filter.Definition = {
+      failregex = "^.*Failed authentication attempt for .* from <HOST>:.*$";
+      ignoreregex = "";
+      journalmatch = "_SYSTEMD_UNIT=gitea.service";
+    };
+  };
 }


@@ -0,0 +1,16 @@
{
service_configs,
inputs,
pkgs,
...
}:
let
graphing-calculator =
inputs.ytbn-graphing-software.packages.${pkgs.stdenv.targetPlatform.system}.web;
in
{
services.caddy.virtualHosts."graphing.${service_configs.https.domain}".extraConfig = ''
root * ${graphing-calculator}
file_server browse
'';
}


@@ -2,16 +2,20 @@
   service_configs,
   pkgs,
   config,
-  username,
   lib,
   ...
 }:
 {
   imports = [
-    (lib.serviceMountDeps "immich-server" [ config.services.immich.mediaLocation ])
-    (lib.serviceMountDeps "immich-machine-learning" [ config.services.immich.mediaLocation ])
-    (lib.serviceDependZpool "immich-server" service_configs.zpool_ssds)
-    (lib.serviceDependZpool "immich-machine-learning" service_configs.zpool_ssds)
+    (lib.serviceMountWithZpool "immich-server" service_configs.zpool_ssds [
+      config.services.immich.mediaLocation
+    ])
+    (lib.serviceMountWithZpool "immich-machine-learning" service_configs.zpool_ssds [
+      config.services.immich.mediaLocation
+    ])
+    (lib.serviceFilePerms "immich-server" [
+      "Z ${config.services.immich.mediaLocation} 0770 ${config.services.immich.user} ${config.services.immich.group}"
+    ])
   ];
   services.immich = {
@@ -29,10 +33,6 @@
     reverse_proxy :${builtins.toString config.services.immich.port}
   '';
-  systemd.tmpfiles.rules = [
-    "d ${config.services.immich.mediaLocation} 0770 ${config.services.immich.user} ${config.services.immich.group}"
-  ];
   environment.systemPackages = with pkgs; [
     immich-go
   ];
@@ -42,7 +42,18 @@
     "render"
   ];
-  users.users.${username}.extraGroups = [
-    config.services.immich.group
-  ];
+  # Protect Immich login from brute force attacks
+  services.fail2ban.jails.immich = {
+    enabled = true;
+    settings = {
+      backend = "systemd";
+      port = "http,https";
+      # defaults: maxretry=5, findtime=10m, bantime=10m
+    };
+    filter.Definition = {
+      failregex = "^.*Failed login attempt for user .* from ip address <HOST>.*$";
+      ignoreregex = "";
+      journalmatch = "_SYSTEMD_UNIT=immich-server.service";
+    };
+  };
 }


@@ -0,0 +1,57 @@
{
pkgs,
service_configs,
config,
...
}:
{
systemd.services."jellyfin-qbittorrent-monitor" = {
description = "Monitor Jellyfin streaming and control qBittorrent rate limits";
after = [
"network.target"
"jellyfin.service"
"qbittorrent.service"
];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = pkgs.writeShellScript "jellyfin-monitor-start" ''
export JELLYFIN_API_KEY=$(cat $CREDENTIALS_DIRECTORY/jellyfin-api-key)
exec ${
pkgs.python3.withPackages (ps: with ps; [ requests ])
}/bin/python ${./jellyfin-qbittorrent-monitor.py}
'';
Restart = "always";
RestartSec = "10s";
# Security hardening
DynamicUser = true;
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
MemoryDenyWriteExecute = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
RemoveIPC = true;
# Load credentials from agenix secrets
LoadCredential = "jellyfin-api-key:${config.age.secrets.jellyfin-api-key.path}";
};
environment = {
JELLYFIN_URL = "http://localhost:${builtins.toString service_configs.ports.jellyfin}";
QBITTORRENT_URL = "http://${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString service_configs.ports.torrent}";
CHECK_INTERVAL = "30";
# Bandwidth budget configuration
TOTAL_BANDWIDTH_BUDGET = "30000000"; # 30 Mbps in bits per second
SERVICE_BUFFER = "5000000"; # 5 Mbps reserved for other services (bps)
DEFAULT_STREAM_BITRATE = "10000000"; # 10 Mbps fallback when bitrate unknown (bps)
MIN_TORRENT_SPEED = "100"; # KB/s - below this, pause torrents instead
STREAM_BITRATE_HEADROOM = "1.1"; # multiplier per stream for bitrate fluctuations
};
};
}
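The environment block above fixes a shared bandwidth budget. The arithmetic the monitor applies to it can be sketched as follows; the constants mirror the defaults above, while the helper name is illustrative, not taken from the service's code.

```python
# Sketch of the bandwidth arithmetic implied by the environment above.
TOTAL_BANDWIDTH_BUDGET = 30_000_000  # bps
SERVICE_BUFFER = 5_000_000           # bps reserved for other services
STREAM_BITRATE_HEADROOM = 1.1        # per-stream fluctuation margin
MIN_TORRENT_SPEED = 100              # KB/s; below this, pause instead

def remaining_torrent_kbs(stream_bitrates_bps: list[int]) -> float:
    """Bandwidth left for qBittorrent after streams and the service buffer."""
    streaming = sum(int(b * STREAM_BITRATE_HEADROOM) for b in stream_bitrates_bps)
    remaining_bps = TOTAL_BANDWIDTH_BUDGET - SERVICE_BUFFER - streaming
    return max(0, remaining_bps) / 8 / 1024  # bits/s -> KB/s

# One 10 Mbps stream: (30M - 5M - 11M) / 8 / 1024 ≈ 1709 KB/s
kbs = remaining_torrent_kbs([10_000_000])
print(round(kbs), kbs >= MIN_TORRENT_SPEED)
```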


@@ -3,10 +3,10 @@
 import requests
 import time
 import logging
-from datetime import datetime
 import sys
 import signal
 import json
+import ipaddress
 
 logging.basicConfig(
     level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
@@ -14,6 +14,12 @@ logging.basicConfig(
 )
 logger = logging.getLogger(__name__)
 
 
+class ServiceUnavailable(Exception):
+    """Raised when a monitored service is temporarily unavailable."""
+
+    pass
+
+
 class JellyfinQBittorrentMonitor:
     def __init__(
         self,
@@ -21,62 +27,112 @@ class JellyfinQBittorrentMonitor:
         qbittorrent_url="http://localhost:8080",
         check_interval=30,
         jellyfin_api_key=None,
+        streaming_start_delay=10,
+        streaming_stop_delay=60,
+        total_bandwidth_budget=30000000,
+        service_buffer=5000000,
+        default_stream_bitrate=10000000,
+        min_torrent_speed=100,
+        stream_bitrate_headroom=1.1,
     ):
         self.jellyfin_url = jellyfin_url
         self.qbittorrent_url = qbittorrent_url
         self.check_interval = check_interval
         self.jellyfin_api_key = jellyfin_api_key
+        self.total_bandwidth_budget = total_bandwidth_budget
+        self.service_buffer = service_buffer
+        self.default_stream_bitrate = default_stream_bitrate
+        self.min_torrent_speed = min_torrent_speed
+        self.stream_bitrate_headroom = stream_bitrate_headroom
         self.last_streaming_state = None
-        self.throttle_active = False
+        self.current_state = "unlimited"
+        self.torrents_paused = False
+        self.last_alt_limits = None
         self.running = True
         self.session = requests.Session()  # Use session for cookies
+        self.last_active_streams = []
 
         # Hysteresis settings to prevent rapid switching
-        self.streaming_start_delay = 10
-        self.streaming_stop_delay = 60
+        self.streaming_start_delay = streaming_start_delay
+        self.streaming_stop_delay = streaming_stop_delay
         self.last_state_change = 0
 
+        # Local network ranges (RFC 1918 private networks + localhost)
+        self.local_networks = [
+            ipaddress.ip_network("10.0.0.0/8"),
+            ipaddress.ip_network("172.16.0.0/12"),
+            ipaddress.ip_network("192.168.0.0/16"),
+            ipaddress.ip_network("127.0.0.0/8"),
+            ipaddress.ip_network("::1/128"),  # IPv6 localhost
+            ipaddress.ip_network("fe80::/10"),  # IPv6 link-local
+        ]
+
+    def is_local_ip(self, ip_address: str) -> bool:
+        """Check if an IP address is from a local network"""
+        try:
+            ip = ipaddress.ip_address(ip_address)
+            return any(ip in network for network in self.local_networks)
+        except ValueError:
+            logger.warning(f"Invalid IP address format: {ip_address}")
+            return True  # Treat invalid IPs as local for safety
+
     def signal_handler(self, signum, frame):
         logger.info("Received shutdown signal, cleaning up...")
         self.running = False
         self.restore_normal_limits()
         sys.exit(0)
 
-    def check_jellyfin_sessions(self):
-        """Check if anyone is actively streaming from Jellyfin"""
-        try:
-            headers = {}
-            if self.jellyfin_api_key:
-                headers["X-Emby-Token"] = self.jellyfin_api_key
+    def check_jellyfin_sessions(self) -> list[dict]:
+        headers = (
+            {"X-Emby-Token": self.jellyfin_api_key} if self.jellyfin_api_key else {}
+        )
+        try:
             response = requests.get(
                 f"{self.jellyfin_url}/Sessions", headers=headers, timeout=10
             )
             response.raise_for_status()
-            sessions = response.json()
-
-            # Count active streaming sessions
-            active_streams = []
-            for session in sessions:
-                if (
-                    "NowPlayingItem" in session
-                    and session.get("PlayState", {}).get("IsPaused", True) == False
-                ):
-                    item = session["NowPlayingItem"]
-                    user = session.get("UserName", "Unknown")
-                    active_streams.append(f"{user}: {item.get('Name', 'Unknown')}")
-
-            return active_streams
         except requests.exceptions.RequestException as e:
             logger.error(f"Failed to check Jellyfin sessions: {e}")
-            return []
+            raise ServiceUnavailable(f"Jellyfin unavailable: {e}") from e
+        try:
+            sessions = response.json()
         except json.JSONDecodeError as e:
             logger.error(f"Failed to parse Jellyfin response: {e}")
-            return []
+            raise ServiceUnavailable(f"Jellyfin returned invalid JSON: {e}") from e
 
-    def check_qbittorrent_alternate_limits(self):
-        """Check if alternate speed limits are currently enabled"""
+        active_streams = []
+        for session in sessions:
+            if (
+                "NowPlayingItem" in session
+                and not session.get("PlayState", {}).get("IsPaused", True)
+                and not self.is_local_ip(session.get("RemoteEndPoint", ""))
+            ):
+                item = session["NowPlayingItem"]
+                item_type = item.get("Type", "").lower()
+                if item_type in ["movie", "episode", "video"]:
+                    user = session.get("UserName", "Unknown")
+                    stream_name = f"{user}: {item.get('Name', 'Unknown')}"
+                    if session.get("TranscodingInfo") and session[
+                        "TranscodingInfo"
+                    ].get("Bitrate"):
+                        bitrate = session["TranscodingInfo"]["Bitrate"]
+                    elif item.get("Bitrate"):
+                        bitrate = item["Bitrate"]
+                    elif item.get("MediaSources", [{}])[0].get("Bitrate"):
+                        bitrate = item["MediaSources"][0]["Bitrate"]
+                    else:
+                        bitrate = self.default_stream_bitrate
+                    bitrate = min(int(bitrate), 100_000_000)
+                    # Add headroom to account for bitrate fluctuations
+                    bitrate = int(bitrate * self.stream_bitrate_headroom)
+                    active_streams.append({"name": stream_name, "bitrate_bps": bitrate})
+        return active_streams
+
+    def check_qbittorrent_alternate_limits(self) -> bool:
         try:
             response = self.session.get(
                 f"{self.qbittorrent_url}/api/v2/transfer/speedLimitsMode", timeout=10
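The bitrate fallback chain introduced in `check_jellyfin_sessions` above can be shown standalone. A sketch under the hunk's own constants; the session dict below is illustrative, shaped like Jellyfin's `/Sessions` payload rather than copied from one.

```python
# Prefer the transcode bitrate, then the item's, then the first media
# source's, else a configured default; cap at 100 Mbps and add headroom.
DEFAULT_STREAM_BITRATE = 10_000_000  # bps
HEADROOM = 1.1

def estimate_bitrate(session: dict) -> int:
    item = session.get("NowPlayingItem", {})
    bitrate = (
        (session.get("TranscodingInfo") or {}).get("Bitrate")
        or item.get("Bitrate")
        or item.get("MediaSources", [{}])[0].get("Bitrate")
        or DEFAULT_STREAM_BITRATE
    )
    bitrate = min(int(bitrate), 100_000_000)
    return int(bitrate * HEADROOM)

# Direct-play session with an 8 Mbps media source:
s = {"NowPlayingItem": {"MediaSources": [{"Bitrate": 8_000_000}]}}
print(estimate_bitrate(s))  # 8800000
```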
@@ -87,38 +143,20 @@ class JellyfinQBittorrentMonitor:
                 logger.warning(
                     f"SpeedLimitsMode endpoint returned HTTP {response.status_code}"
                 )
+                raise ServiceUnavailable(
+                    f"qBittorrent returned HTTP {response.status_code}"
+                )
         except requests.exceptions.RequestException as e:
             logger.error(f"SpeedLimitsMode endpoint failed: {e}")
-        except Exception as e:
-            logger.error(f"Failed to parse speedLimitsMode response: {e}")
+            raise ServiceUnavailable(f"qBittorrent unavailable: {e}") from e
 
-        # Fallback: try transfer info endpoint
-        try:
-            response = self.session.get(
-                f"{self.qbittorrent_url}/api/v2/transfer/info", timeout=10
-            )
-            if response.status_code == 200:
-                data = response.json()
-                if "use_alt_speed_limits" in data:
-                    return data["use_alt_speed_limits"]
-        except Exception as e:
-            logger.error(f"Transfer info fallback failed: {e}")
-
-        logger.warning(
-            "Could not determine qBittorrent alternate limits status, using tracked state"
-        )
-        return self.throttle_active
-
-    def toggle_qbittorrent_limits(self, enable_throttle):
-        """Toggle qBittorrent alternate speed limits"""
+    def use_alt_limits(self, enable: bool) -> None:
+        action = "enabled" if enable else "disabled"
         try:
             current_throttle = self.check_qbittorrent_alternate_limits()
-            if current_throttle == enable_throttle:
-                action = "enabled" if enable_throttle else "disabled"
-                logger.info(
+            if current_throttle == enable:
+                logger.debug(
                     f"Alternate speed limits already {action}, no action needed"
                 )
                 return
@@ -128,32 +166,95 @@ class JellyfinQBittorrentMonitor:
                 timeout=10,
             )
             response.raise_for_status()
-            self.throttle_active = enable_throttle
 
-            # Verify the change took effect
             new_state = self.check_qbittorrent_alternate_limits()
-            if new_state == enable_throttle:
-                action = "enabled" if enable_throttle else "disabled"
-                logger.info(f"✓ Successfully {action} alternate speed limits")
+            if new_state == enable:
+                logger.info(f"Alternate speed limits {action}")
             else:
                 logger.warning(
-                    f"Toggle may have failed: expected {enable_throttle}, got {new_state}"
+                    f"Toggle may have failed: expected {enable}, got {new_state}"
                 )
+        except ServiceUnavailable:
+            logger.warning(
+                f"qBittorrent unavailable, cannot {action} alternate speed limits"
+            )
         except requests.exceptions.RequestException as e:
-            action = "enable" if enable_throttle else "disable"
             logger.error(f"Failed to {action} alternate speed limits: {e}")
-        except Exception as e:
-            logger.error(f"Failed to toggle qBittorrent limits: {e}")
 
-    def restore_normal_limits(self):
-        """Ensure normal speed limits are restored on shutdown"""
-        if self.throttle_active:
-            logger.info("Restoring normal speed limits before shutdown...")
-            self.toggle_qbittorrent_limits(False)
+    def pause_all_torrents(self) -> None:
+        try:
+            response = self.session.post(
+                f"{self.qbittorrent_url}/api/v2/torrents/stop",
+                data={"hashes": "all"},
+                timeout=10,
+            )
+            response.raise_for_status()
+        except requests.exceptions.RequestException as e:
+            logger.error(f"Failed to pause torrents: {e}")
+
+    def resume_all_torrents(self) -> None:
+        try:
+            response = self.session.post(
+                f"{self.qbittorrent_url}/api/v2/torrents/start",
+                data={"hashes": "all"},
+                timeout=10,
+            )
+            response.raise_for_status()
+        except requests.exceptions.RequestException as e:
+            logger.error(f"Failed to resume torrents: {e}")
+
+    def set_alt_speed_limits(self, dl_kbs: float, ul_kbs: float) -> None:
+        try:
+            payload = {
+                "alt_dl_limit": int(dl_kbs * 1024),
+                "alt_up_limit": int(ul_kbs * 1024),
+            }
+            response = self.session.post(
+                f"{self.qbittorrent_url}/api/v2/app/setPreferences",
+                data={"json": json.dumps(payload)},
+                timeout=10,
+            )
+            response.raise_for_status()
+            self.last_alt_limits = (dl_kbs, ul_kbs)
+        except requests.exceptions.RequestException as e:
+            logger.error(f"Failed to set alternate speed limits: {e}")
+
+    def restore_normal_limits(self) -> None:
+        if self.torrents_paused:
+            logger.info("Resuming all torrents before shutdown...")
+            self.resume_all_torrents()
+            self.torrents_paused = False
+        if self.current_state != "unlimited":
+            logger.info("Restoring normal speed limits before shutdown...")
+            self.use_alt_limits(False)
+            self.current_state = "unlimited"
 
-    def should_change_state(self, new_streaming_state):
+    def sync_qbittorrent_state(self) -> None:
+        try:
+            if self.current_state == "unlimited":
+                actual_state = self.check_qbittorrent_alternate_limits()
+                if actual_state:
+                    logger.warning(
+                        "qBittorrent state mismatch detected: expected alt speed OFF, got ON. Re-syncing..."
+                    )
+                    self.use_alt_limits(False)
+            elif self.current_state == "throttled":
+                if self.last_alt_limits:
+                    self.set_alt_speed_limits(*self.last_alt_limits)
+                actual_state = self.check_qbittorrent_alternate_limits()
+                if not actual_state:
+                    logger.warning(
+                        "qBittorrent state mismatch detected: expected alt speed ON, got OFF. Re-syncing..."
+                    )
+                    self.use_alt_limits(True)
+            elif self.current_state == "paused":
+                self.pause_all_torrents()
+                self.torrents_paused = True
+        except ServiceUnavailable:
+            pass
+
+    def should_change_state(self, new_streaming_state: bool) -> bool:
         """Apply hysteresis to prevent rapid state changes"""
 
         now = time.time()
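The `setPreferences` call added in this hunk sends alternate limits as a form field named `json`, with values in bytes per second. A sketch of just the payload construction, so the unit conversion can be checked without a running qBittorrent; the helper name is illustrative.

```python
import json

# qBittorrent's /api/v2/app/setPreferences expects a form field "json"
# whose value is a JSON object; alternate limits are in bytes/s.
def alt_limit_payload(dl_kbs: float, ul_kbs: float) -> dict:
    prefs = {"alt_dl_limit": int(dl_kbs * 1024), "alt_up_limit": int(ul_kbs * 1024)}
    return {"json": json.dumps(prefs)}

payload = alt_limit_payload(1709, 1709)
print(payload["json"])  # {"alt_dl_limit": 1750016, "alt_up_limit": 1750016}
```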
@@ -162,7 +263,6 @@ class JellyfinQBittorrentMonitor:
 
         time_since_change = now - self.last_state_change
 
-        # Start throttling (streaming started)
         if new_streaming_state and not self.last_streaming_state:
             if time_since_change >= self.streaming_start_delay:
                 self.last_state_change = now
@@ -170,10 +270,9 @@ class JellyfinQBittorrentMonitor:
             else:
                 remaining = self.streaming_start_delay - time_since_change
                 logger.info(
-                    f"Streaming started - waiting {remaining:.1f}s before enabling throttling"
+                    f"Streaming started - waiting {remaining:.1f}s before enforcing limits"
                 )
 
-        # Stop throttling (streaming stopped)
         elif not new_streaming_state and self.last_streaming_state:
             if time_since_change >= self.streaming_stop_delay:
                 self.last_state_change = now
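The asymmetric delays in `should_change_state` (fast to throttle, slow to release) are classic hysteresis. A toy illustration of the rule, not the monitor's exact code; the 10s/60s defaults mirror the ones above.

```python
# A state flip is accepted only after the condition has held long enough
# since the last accepted change: 10s to start limiting, 60s to stop.
class Hysteresis:
    def __init__(self, start_delay=10, stop_delay=60):
        self.start_delay = start_delay
        self.stop_delay = stop_delay
        self.state = False        # "streaming" state currently in force
        self.last_change = 0.0

    def update(self, streaming: bool, now: float) -> bool:
        delay = self.start_delay if streaming else self.stop_delay
        if streaming != self.state and now - self.last_change >= delay:
            self.state = streaming
            self.last_change = now
        return self.state

h = Hysteresis()
print(h.update(True, 5))    # only 5s elapsed -> still False
print(h.update(True, 12))   # 12s elapsed -> flips to True
print(h.update(False, 40))  # only 28s quiet -> stays True
print(h.update(False, 80))  # 68s since last change -> False
```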
@@ -181,41 +280,120 @@ class JellyfinQBittorrentMonitor:
             else:
                 remaining = self.streaming_stop_delay - time_since_change
                 logger.info(
-                    f"Streaming stopped - waiting {remaining:.1f}s before disabling throttling"
+                    f"Streaming stopped - waiting {remaining:.1f}s before restoring unlimited mode"
                 )
 
         return False
 
     def run(self):
-        """Main monitoring loop"""
         logger.info("Starting Jellyfin-qBittorrent monitor")
         logger.info(f"Jellyfin URL: {self.jellyfin_url}")
         logger.info(f"qBittorrent URL: {self.qbittorrent_url}")
         logger.info(f"Check interval: {self.check_interval}s")
+        logger.info(f"Streaming start delay: {self.streaming_start_delay}s")
+        logger.info(f"Streaming stop delay: {self.streaming_stop_delay}s")
+        logger.info(f"Total bandwidth budget: {self.total_bandwidth_budget} bps")
+        logger.info(f"Service buffer: {self.service_buffer} bps")
+        logger.info(f"Default stream bitrate: {self.default_stream_bitrate} bps")
+        logger.info(f"Minimum torrent speed: {self.min_torrent_speed} KB/s")
+        logger.info(f"Stream bitrate headroom: {self.stream_bitrate_headroom}x")
 
-        # Set up signal handlers
         signal.signal(signal.SIGINT, self.signal_handler)
         signal.signal(signal.SIGTERM, self.signal_handler)
 
         while self.running:
             try:
-                # Check for active streaming
-                active_streams = self.check_jellyfin_sessions()
+                self.sync_qbittorrent_state()
+
+                try:
+                    active_streams = self.check_jellyfin_sessions()
+                except ServiceUnavailable:
+                    logger.warning("Jellyfin unavailable, maintaining current state")
+                    time.sleep(self.check_interval)
+                    continue
+
                 streaming_active = len(active_streams) > 0
 
-                # Log current status
-                if streaming_active:
-                    logger.info(
-                        f"Active streams ({len(active_streams)}): {', '.join(active_streams)}"
-                    )
-                elif len(active_streams) == 0 and self.last_streaming_state:
-                    logger.info("No active streaming sessions")
+                if active_streams:
+                    for stream in active_streams:
+                        logger.debug(
+                            f"Active stream: {stream['name']} ({stream['bitrate_bps']} bps)"
+                        )
+
+                if active_streams != self.last_active_streams:
+                    if streaming_active:
+                        stream_names = ", ".join(
+                            stream["name"] for stream in active_streams
+                        )
+                        logger.info(
+                            f"Active streams ({len(active_streams)}): {stream_names}"
+                        )
+                    elif len(active_streams) == 0 and self.last_streaming_state:
+                        logger.info("No active streaming sessions")
 
-                # Apply hysteresis and change state if needed
                 if self.should_change_state(streaming_active):
                     self.last_streaming_state = streaming_active
-                    self.toggle_qbittorrent_limits(streaming_active)
+
+                streaming_state = bool(self.last_streaming_state)
+                total_streaming_bps = sum(
+                    stream["bitrate_bps"] for stream in active_streams
+                )
+                remaining_bps = (
+                    self.total_bandwidth_budget
+                    - self.service_buffer
+                    - total_streaming_bps
+                )
+                remaining_kbs = max(0, remaining_bps) / 8 / 1024
+
+                if not streaming_state:
+                    desired_state = "unlimited"
+                elif streaming_active:
+                    if remaining_kbs >= self.min_torrent_speed:
+                        desired_state = "throttled"
+                    else:
+                        desired_state = "paused"
+                else:
+                    desired_state = self.current_state
+
+                if desired_state != self.current_state:
+                    if desired_state == "unlimited":
+                        action = "resume torrents, disable alt speed"
+                    elif desired_state == "throttled":
+                        action = (
+                            "set alt limits "
+                            f"dl={int(remaining_kbs)}KB/s ul={int(remaining_kbs)}KB/s, enable alt speed"
+                        )
+                    else:
+                        action = "pause torrents"
+                    logger.info(
+                        "State change %s -> %s | streams=%d total_bps=%d remaining_bps=%d action=%s",
+                        self.current_state,
+                        desired_state,
+                        len(active_streams),
+                        total_streaming_bps,
+                        remaining_bps,
+                        action,
+                    )
+                    if desired_state == "unlimited":
+                        if self.torrents_paused:
+                            self.resume_all_torrents()
+                            self.torrents_paused = False
+                        self.use_alt_limits(False)
+                    elif desired_state == "throttled":
+                        if self.torrents_paused:
+                            self.resume_all_torrents()
+                            self.torrents_paused = False
+                        self.set_alt_speed_limits(remaining_kbs, remaining_kbs)
+                        self.use_alt_limits(True)
+                    else:
+                        if not self.torrents_paused:
+                            self.pause_all_torrents()
+                            self.torrents_paused = True
+                    self.current_state = desired_state
+
+                self.last_active_streams = active_streams
 
                 time.sleep(self.check_interval)
             except KeyboardInterrupt:
@@ -236,12 +414,26 @@ if __name__ == "__main__":
     qbittorrent_url = os.getenv("QBITTORRENT_URL", "http://localhost:8080")
     check_interval = int(os.getenv("CHECK_INTERVAL", "30"))
     jellyfin_api_key = os.getenv("JELLYFIN_API_KEY")
+    streaming_start_delay = int(os.getenv("STREAMING_START_DELAY", "10"))
+    streaming_stop_delay = int(os.getenv("STREAMING_STOP_DELAY", "60"))
+    total_bandwidth_budget = int(os.getenv("TOTAL_BANDWIDTH_BUDGET", "30000000"))
+    service_buffer = int(os.getenv("SERVICE_BUFFER", "5000000"))
+    default_stream_bitrate = int(os.getenv("DEFAULT_STREAM_BITRATE", "10000000"))
+    min_torrent_speed = int(os.getenv("MIN_TORRENT_SPEED", "100"))
+    stream_bitrate_headroom = float(os.getenv("STREAM_BITRATE_HEADROOM", "1.1"))
 
     monitor = JellyfinQBittorrentMonitor(
         jellyfin_url=jellyfin_url,
         qbittorrent_url=qbittorrent_url,
         check_interval=check_interval,
         jellyfin_api_key=jellyfin_api_key,
+        streaming_start_delay=streaming_start_delay,
+        streaming_stop_delay=streaming_stop_delay,
+        total_bandwidth_budget=total_bandwidth_budget,
+        service_buffer=service_buffer,
+        default_stream_bitrate=default_stream_bitrate,
+        min_torrent_speed=min_torrent_speed,
+        stream_bitrate_headroom=stream_bitrate_headroom,
     )
     monitor.run()
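Taken together, the loop above implements a three-state policy: unlimited when nobody is streaming, throttled while enough budget remains, paused otherwise. The core decision reduces to a pure function; a sketch using the defaults from the file, not the class itself.

```python
# unlimited / throttled / paused, as in the monitor's run() loop.
def decide_state(streaming: bool, remaining_kbs: float,
                 min_torrent_speed: float = 100) -> str:
    if not streaming:
        return "unlimited"
    return "throttled" if remaining_kbs >= min_torrent_speed else "paused"

print(decide_state(False, 0))      # unlimited
print(decide_state(True, 1709))    # throttled
print(decide_state(True, 50))      # paused
```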


@@ -2,17 +2,19 @@
   pkgs,
   config,
   service_configs,
-  username,
   lib,
   ...
 }:
 {
   imports = [
-    (lib.serviceMountDeps "jellyfin" [
+    (lib.serviceMountWithZpool "jellyfin" service_configs.zpool_ssds [
       config.services.jellyfin.dataDir
       config.services.jellyfin.cacheDir
     ])
-    (lib.serviceDependZpool "jellyfin" service_configs.zpool_ssds)
+    (lib.serviceFilePerms "jellyfin" [
+      "Z ${config.services.jellyfin.dataDir} 0700 ${config.services.jellyfin.user} ${config.services.jellyfin.group}"
+      "Z ${config.services.jellyfin.cacheDir} 0700 ${config.services.jellyfin.user} ${config.services.jellyfin.group}"
+    ])
   ];
   services.jellyfin = {
@@ -25,24 +27,34 @@
   };
   services.caddy.virtualHosts."jellyfin.${service_configs.https.domain}".extraConfig = ''
-    reverse_proxy :${builtins.toString service_configs.ports.jellyfin}
+    reverse_proxy :${builtins.toString service_configs.ports.jellyfin} {
+      header_up X-Real-IP {remote_host}
+      header_up X-Forwarded-For {remote_host}
+      header_up X-Forwarded-Proto {scheme}
+    }
     request_body {
       max_size 4096MB
     }
   '';
-  systemd.tmpfiles.rules = [
-    "d ${config.services.jellyfin.dataDir} 0700 ${config.services.jellyfin.user} ${config.services.jellyfin.group}"
-    "d ${config.services.jellyfin.cacheDir} 0700 ${config.services.jellyfin.user} ${config.services.jellyfin.group}"
-  ];
   users.users.${config.services.jellyfin.user}.extraGroups = [
     "video"
     "render"
     service_configs.media_group
   ];
-  users.users.${username}.extraGroups = [
-    config.services.jellyfin.group
-  ];
+  # Protect Jellyfin login from brute force attacks
+  services.fail2ban.jails.jellyfin = {
+    enabled = true;
+    settings = {
+      backend = "auto";
+      port = "http,https";
+      logpath = "${config.services.jellyfin.dataDir}/log/log_*.log";
+      # defaults: maxretry=5, findtime=10m, bantime=10m
+    };
+    filter.Definition = {
+      failregex = ''^.*Authentication request for .* has been denied \(IP: "<ADDR>"\)\..*$'';
+      ignoreregex = "";
+    };
+  };
 }
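The `header_up` lines matter here: without a forwarded client IP, Jellyfin would log the proxy's address and the jail would ban the wrong host. The Jellyfin filter's regex can be sanity-checked the same way as the Immich one; fail2ban expands `<ADDR>` itself, and the sample line is only shaped like Jellyfin's denied-authentication log entry.

```python
import re

# Substitute a simple address group for fail2ban's <ADDR> token to test locally.
failregex = r'^.*Authentication request for .* has been denied \(IP: "<ADDR>"\)\..*$'
pattern = re.compile(failregex.replace("<ADDR>", r"(?P<addr>[0-9a-fA-F.:]+)"))

# Hypothetical log line in the expected shape.
line = 'Authentication request for bob has been denied (IP: "203.0.113.7").'
m = pattern.match(line)
print(m.group("addr") if m else "no match")  # 203.0.113.7
```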

services/livekit.nix (new file, 53 lines)

@@ -0,0 +1,53 @@
{
service_configs,
...
}:
let
keyFile = ../secrets/livekit_keys;
ports = service_configs.ports;
in
{
services.livekit = {
enable = true;
inherit keyFile;
openFirewall = true;
settings = {
port = ports.livekit;
bind_addresses = [ "127.0.0.1" ];
rtc = {
port_range_start = 50100;
port_range_end = 50200;
use_external_ip = true;
};
# Disable LiveKit's built-in TURN; coturn is already running
turn = {
enabled = false;
};
logging = {
level = "info";
};
};
};
services.lk-jwt-service = {
enable = true;
inherit keyFile;
livekitUrl = "wss://${service_configs.livekit.domain}";
port = ports.lk_jwt;
};
services.caddy.virtualHosts."${service_configs.livekit.domain}".extraConfig = ''
@jwt path /sfu/get /healthz
handle @jwt {
reverse_proxy :${builtins.toString ports.lk_jwt}
}
handle {
reverse_proxy :${builtins.toString ports.livekit}
}
'';
}
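The Caddy block above splits one hostname across two upstreams: the `@jwt` path matcher sends the JWT helper's two endpoints to lk-jwt-service and everything else to LiveKit. The routing rule reduces to a small lookup; port numbers below are illustrative stand-ins for `service_configs.ports`.

```python
# Paths handled by lk-jwt-service; all other paths go to LiveKit itself.
LK_JWT_PATHS = {"/sfu/get", "/healthz"}

def upstream_port(path: str, lk_jwt: int = 8080, livekit: int = 7880) -> int:
    # Hypothetical port values; the real ones come from service_configs.ports.
    return lk_jwt if path in LK_JWT_PATHS else livekit

print(upstream_port("/sfu/get"))  # 8080
print(upstream_port("/rtc"))      # 7880
```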


@@ -1,35 +0,0 @@
{
pkgs,
service_configs,
config,
inputs,
lib,
...
}:
{
services.llama-cpp = {
enable = true;
model = builtins.toString (
pkgs.fetchurl {
url = "https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-mxfp4.gguf";
sha256 = "52f57ab7d3df3ba9173827c1c6832e73375553a846f3e32b49f1ae2daad688d4";
}
);
port = service_configs.ports.llama_cpp;
host = "0.0.0.0";
# vulkan broken: https://github.com/ggml-org/llama.cpp/issues/13801
package = (lib.optimizePackage inputs.llamacpp.packages.${pkgs.system}.default);
extraFlags = [
# "-ngl"
# "9999"
];
};
# have to do this in order to get vulkan to work
systemd.services.llama-cpp.serviceConfig.DynamicUser = lib.mkForce false;
services.caddy.virtualHosts."llm.${service_configs.https.domain}".extraConfig = ''
${builtins.readFile ../secrets/caddy_auth}
reverse_proxy :${builtins.toString config.services.llama-cpp.port}
'';
}


@@ -1,39 +1,53 @@
 {
-  pkgs,
   config,
+  pkgs,
   service_configs,
   lib,
   ...
 }:
+let
+  package =
+    let
+      src = pkgs.fetchFromGitea {
+        domain = "forgejo.ellis.link";
+        owner = "continuwuation";
+        repo = "continuwuity";
+        rev = "052c4dfa2165fdc4839fed95b71446120273cf23";
+        hash = "sha256-kQV4glRrKczoJpn9QIMgB5ac+saZQjSZPel+9K9Ykcs=";
+      };
+    in
+    pkgs.matrix-continuwuity.overrideAttrs (old: {
+      inherit src;
+      cargoDeps = pkgs.rustPlatform.fetchCargoVendor {
+        inherit src;
+        name = "${old.pname}-vendor";
+        hash = "sha256-vlOXQL8wwEGFX+w0G/eIeHW3J1UDzhJ501kYhAghDV8=";
+      };
+      patches = (old.patches or [ ]) ++ [
+      ];
+    });
+in
 {
-  services.matrix-conduit.settings.global.registration_token =
-    builtins.readFile ../secrets/matrix_reg_token;
+  imports = [
+    (lib.serviceMountWithZpool "continuwuity" service_configs.zpool_ssds [
+      "/var/lib/private/continuwuity"
+    ])
+    (lib.serviceFilePerms "continuwuity" [
+      "Z /var/lib/private/continuwuity 0770 ${config.services.matrix-continuwuity.user} ${config.services.matrix-continuwuity.group}"
+    ])
+  ];
 
-  services.caddy.virtualHosts.${service_configs.https.domain}.extraConfig = lib.mkBefore ''
-    header /.well-known/matrix/* Content-Type application/json
-    header /.well-known/matrix/* Access-Control-Allow-Origin *
-    respond /.well-known/matrix/server `{"m.server": "${service_configs.https.matrix_hostname}:${service_configs.ports.https}"}`
-    respond /.well-known/matrix/client `{"m.server":{"base_url":"https://${service_configs.https.matrix_hostname}"},"m.homeserver":{"base_url":"https://${service_configs.https.matrix_hostname}"},"org.matrix.msc3575.proxy":{"base_url":"https://${config.services.matrix-conduit.settings.global.server_name}"}}`
-  '';
-  services.caddy.virtualHosts."${service_configs.https.matrix_hostname}".extraConfig = ''
-    reverse_proxy :${builtins.toString config.services.matrix-conduit.settings.global.port}
-  '';
-  # Exact duplicate
-  services.caddy.virtualHosts."${service_configs.https.matrix_hostname}:8448".extraConfig =
-    config.services.caddy.virtualHosts."${config.services.matrix-conduit.settings.global.server_name
-    }".extraConfig;
-  services.matrix-conduit = {
+  services.matrix-continuwuity = {
     enable = true;
-    package = pkgs.conduwuit;
+    inherit package;
     settings.global = {
-      port = 6167;
+      port = [ service_configs.ports.matrix ];
       server_name = service_configs.https.domain;
-      database_backend = "rocksdb";
       allow_registration = true;
+      registration_token = lib.strings.trim (builtins.readFile ../secrets/matrix_reg_token);
       new_user_displayname_suffix = "";
@@ -44,22 +58,42 @@
         "envs.net"
       ];
-      # without this, conduit fails to start
-      address = "0.0.0.0";
+      address = [
+        "0.0.0.0"
+      ];
+
+      # TURN server config (coturn)
+      turn_secret = config.services.coturn.static-auth-secret;
+      turn_uris = [
+        "turn:${service_configs.https.domain}?transport=udp"
+        "turn:${service_configs.https.domain}?transport=tcp"
+      ];
+      turn_ttl = 86400;
     };
   };
-  systemd.tmpfiles.rules = [
-    "d /var/lib/private/matrix-conduit 0770 conduit conduit"
-  ];
+  services.caddy.virtualHosts.${service_configs.https.domain}.extraConfig = lib.mkBefore ''
+    header /.well-known/matrix/* Content-Type application/json
+    header /.well-known/matrix/* Access-Control-Allow-Origin *
+    respond /.well-known/matrix/server `{"m.server": "${service_configs.matrix.domain}:${builtins.toString service_configs.ports.https}"}`
+    respond /.well-known/matrix/client `{"m.server":{"base_url":"https://${service_configs.matrix.domain}"},"m.homeserver":{"base_url":"https://${service_configs.matrix.domain}"},"org.matrix.msc3575.proxy":{"base_url":"https://${config.services.matrix-continuwuity.settings.global.server_name}"},"org.matrix.msc4143.rtc_foci":[{"type":"livekit","livekit_service_url":"https://${service_configs.livekit.domain}"}]}`
+  '';
+  services.caddy.virtualHosts."${service_configs.matrix.domain}".extraConfig = ''
+    reverse_proxy :${builtins.toString service_configs.ports.matrix}
+  '';
+  # Exact duplicate for federation port
+  services.caddy.virtualHosts."${service_configs.matrix.domain}:${builtins.toString service_configs.ports.matrix_federation}".extraConfig =
+    config.services.caddy.virtualHosts."${service_configs.matrix.domain}".extraConfig;
 
   # for federation
   networking.firewall.allowedTCPPorts = [
-    8448
+    service_configs.ports.matrix_federation
   ];
   # for federation
   networking.firewall.allowedUDPPorts = [
-    8448
+    service_configs.ports.matrix_federation
   ];
 }
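The `respond /.well-known/matrix/client` rule above is what Matrix clients fetch to discover the homeserver and, via MSC4143, the LiveKit SFU for Element Call. A simplified sketch of that document and the keys clients read from it; the domains are placeholders standing in for the `service_configs` values.

```python
import json

# Simplified .well-known/matrix/client document, as served by the Caddy rule.
doc = json.loads(
    '{"m.homeserver":{"base_url":"https://matrix.example.com"},'
    '"org.matrix.msc3575.proxy":{"base_url":"https://example.com"},'
    '"org.matrix.msc4143.rtc_foci":[{"type":"livekit",'
    '"livekit_service_url":"https://livekit.example.com"}]}'
)

# Clients discover the homeserver and the call SFU from these keys:
print(doc["m.homeserver"]["base_url"])                               # https://matrix.example.com
print(doc["org.matrix.msc4143.rtc_foci"][0]["livekit_service_url"])  # https://livekit.example.com
```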


@@ -2,31 +2,25 @@
pkgs, pkgs,
service_configs, service_configs,
lib, lib,
username,
config, config,
inputs,
... ...
}: }:
{ {
imports = [ imports = [
(lib.serviceMountDeps "minecraft-server-${service_configs.minecraft.server_name}" [ (lib.serviceMountWithZpool "minecraft-server-${service_configs.minecraft.server_name}"
"${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}" service_configs.zpool_ssds
[
"${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}"
]
)
inputs.nix-minecraft.nixosModules.minecraft-servers
(lib.serviceFilePerms "minecraft-server-${service_configs.minecraft.server_name}" [
"Z ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name} 700 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
"Z ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}/squaremap/web 750 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
]) ])
(lib.serviceDependZpool "minecraft-server-${service_configs.minecraft.server_name}" service_configs.zpool_ssds)
]; ];
environment.systemPackages = [
(pkgs.writeScriptBin "mc-console" ''
#!/bin/sh
${pkgs.tmux}/bin/tmux -S /run/minecraft/${service_configs.minecraft.server_name}.sock attach
'')
];
nixpkgs.config.allowUnfreePredicate =
pkg:
builtins.elem (lib.getName pkg) [
"minecraft-server"
];
services.minecraft-servers = { services.minecraft-servers = {
enable = true; enable = true;
eula = true; eula = true;
@@ -35,16 +29,47 @@
servers.${service_configs.minecraft.server_name} = { servers.${service_configs.minecraft.server_name} = {
enable = true; enable = true;
package = pkgs.fabricServers.fabric-1_21_8; package = pkgs.fabricServers.fabric-1_21_11;
jvmOpts = jvmOpts =
let let
heap_size = "10000M"; heap_size = "4000M";
in in
"-Xmx${heap_size} -Xms${heap_size} -XX:+UseZGC -XX:+ZGenerational"; lib.concatStringsSep " " [
# Memory
"-Xmx${heap_size}"
"-Xms${heap_size}"
# GC
"-XX:+UseZGC"
"-XX:+ZGenerational"
# Base JVM optimizations (brucethemoose/Minecraft-Performance-Flags-Benchmarks)
"-XX:+UnlockExperimentalVMOptions"
"-XX:+UnlockDiagnosticVMOptions"
"-XX:+AlwaysActAsServerClassMachine"
"-XX:+AlwaysPreTouch"
"-XX:+DisableExplicitGC"
"-XX:+UseNUMA"
"-XX:+PerfDisableSharedMem"
"-XX:+UseFastUnorderedTimeStamps"
"-XX:+UseCriticalJavaThreadPriority"
"-XX:ThreadPriorityPolicy=1"
"-XX:AllocatePrefetchStyle=3"
"-XX:-DontCompileHugeMethods"
"-XX:MaxNodeLimit=240000"
"-XX:NodeLimitFudgeFactor=8000"
"-XX:ReservedCodeCacheSize=400M"
"-XX:NonNMethodCodeHeapSize=12M"
"-XX:ProfiledCodeHeapSize=194M"
"-XX:NonProfiledCodeHeapSize=194M"
"-XX:NmethodSweepActivity=1"
"-XX:+UseVectorCmov"
# Large pages (requires vm.nr_hugepages sysctl)
"-XX:+UseLargePages"
"-XX:LargePageSizeInBytes=2m"
];
serverProperties = {
- server-port = 25565;
+ server-port = service_configs.ports.minecraft;
enforce-whitelist = true;
gamemode = "survival";
white-list = true;
@@ -63,60 +88,85 @@
with pkgs;
builtins.attrValues {
FabricApi = fetchurl {
- url = "https://cdn.modrinth.com/data/P7dR8mSH/versions/zhzhM2yQ/fabric-api-0.130.0%2B1.21.8.jar";
+ url = "https://cdn.modrinth.com/data/P7dR8mSH/versions/i5tSkVBH/fabric-api-0.141.3%2B1.21.11.jar";
- sha512 = "27399d629d3fb955c8fc1e5e86cacb9b124814bb97ee7fe283336b0e28f5eb9ae31619814ab4aef70c5beea908d2a1ed5a8dd6b8641a53ecd50375d50067f061";
+ sha512 = "c20c017e23d6d2774690d0dd774cec84c16bfac5461da2d9345a1cd95eee495b1954333c421e3d1c66186284d24a433f6b0cced8021f62e0bfa617d2384d0471";
};
FerriteCore = fetchurl {
- url = "https://cdn.modrinth.com/data/uXXizFIs/versions/CtMpt7Jr/ferritecore-8.0.0-fabric.jar";
+ url = "https://cdn.modrinth.com/data/uXXizFIs/versions/Ii0gP3D8/ferritecore-8.2.0-fabric.jar";
- sha512 = "131b82d1d366f0966435bfcb38c362d604d68ecf30c106d31a6261bfc868ca3a82425bb3faebaa2e5ea17d8eed5c92843810eb2df4790f2f8b1e6c1bdc9b7745";
+ sha512 = "3210926a82eb32efd9bcebabe2f6c053daf5c4337eebc6d5bacba96d283510afbde646e7e195751de795ec70a2ea44fef77cb54bf22c8e57bb832d6217418869";
};
Lithium = fetchurl {
- url = "https://cdn.modrinth.com/data/gvQqBUqZ/versions/pDfTqezk/lithium-fabric-0.18.0%2Bmc1.21.8.jar";
+ url = "https://cdn.modrinth.com/data/gvQqBUqZ/versions/qvNsoO3l/lithium-fabric-0.21.3%2Bmc1.21.11.jar";
- sha512 = "6c69950760f48ef88f0c5871e61029b59af03ab5ed9b002b6a470d7adfdf26f0b875dcd360b664e897291002530981c20e0b2890fb889f29ecdaa007f885100f";
+ sha512 = "2883739303f0bb602d3797cc601ed86ce6833e5ec313ddce675f3d6af3ee6a40b9b0a06dafe39d308d919669325e95c0aafd08d78c97acd976efde899c7810fd";
};
NoChatReports = fetchurl {
- url = "https://cdn.modrinth.com/data/qQyHxfxd/versions/LhwpK0O6/NoChatReports-FABRIC-1.21.7-v2.14.0.jar";
+ url = "https://cdn.modrinth.com/data/qQyHxfxd/versions/rhykGstm/NoChatReports-FABRIC-1.21.11-v2.18.0.jar";
- sha512 = "6e93c822e606ad12cb650801be1b3f39fcd2fef64a9bb905f357eb01a28451afddb3a6cadb39c112463519df0a07b9ff374d39223e9bf189aee7e7182077a7ae";
+ sha512 = "d2c35cc8d624616f441665aff67c0e366e4101dba243bad25ed3518170942c1a3c1a477b28805cd1a36c44513693b1c55e76bea627d3fced13927a3d67022ccc";
};
squaremap = fetchurl {
- url = "https://cdn.modrinth.com/data/PFb7ZqK6/versions/V9xWIMui/squaremap-fabric-mc1.21.8-1.3.8.jar";
+ url = "https://cdn.modrinth.com/data/PFb7ZqK6/versions/BW8lMXBi/squaremap-fabric-mc1.21.11-1.3.12.jar";
- sha512 = "ed32aca04ef0ad6d46549f9309a342624b64857296515037e5531611d43a7f5d4a6b97f6495f76d2ecfdfac9e4f0bf8a66c938c379cdddae59c8a7f2fe0c03f4";
+ sha512 = "f62eb791a3f5812eb174565d318f2e6925353f846ef8ac56b4e595f481494e0c281f26b9e9fcfdefa855093c96b735b12f67ee17c07c2477aa7a3439238670d9";
};
scalablelux = fetchurl {
- url = "https://cdn.modrinth.com/data/Ps1zyz6x/versions/PQLHDg2Q/ScalableLux-0.1.5%2Bfabric.e4acdcb-all.jar";
+ url = "https://cdn.modrinth.com/data/Ps1zyz6x/versions/PV9KcrYQ/ScalableLux-0.1.6%2Bfabric.c25518a-all.jar";
- sha512 = "ec8fabc3bf991fbcbe064c1e97ded3e70f145a87e436056241cbb1e14c57ea9f59ef312f24c205160ccbda43f693e05d652b7f19aa71f730caec3bb5f7f7820a";
+ sha512 = "729515c1e75cf8d9cd704f12b3487ddb9664cf9928e7b85b12289c8fbbc7ed82d0211e1851375cbd5b385820b4fedbc3f617038fff5e30b302047b0937042ae7";
};
c2me = fetchurl {
- url = "https://cdn.modrinth.com/data/VSNURh3q/versions/RzzXyBlx/c2me-fabric-mc1.21.8-0.3.4%2Brc.1.0.jar";
+ url = "https://cdn.modrinth.com/data/VSNURh3q/versions/QdLiMUjx/c2me-fabric-mc1.21.11-0.3.7%2Balpha.0.7.jar";
- sha512 = "4addc9ccbc66b547c96152c7fafcaccde47eefa62b0e99a31f7b4ee5844ac738f2557909bd74e1f755ff4835ce13e8ff6c556f8ebda276370912f50ebd054e3a";
+ sha512 = "f9543febe2d649a82acd6d5b66189b6a3d820cf24aa503ba493fdb3bbd4e52e30912c4c763fe50006f9a46947ae8cd737d420838c61b93429542573ed67f958e";
};
krypton = fetchurl {
- url = "https://cdn.modrinth.com/data/fQEb0iXm/versions/neW85eWt/krypton-0.2.9.jar";
+ url = "https://cdn.modrinth.com/data/fQEb0iXm/versions/O9LmWYR7/krypton-0.2.10.jar";
- sha512 = "2e2304b1b17ecf95783aee92e26e54c9bfad325c7dfcd14deebf9891266eb2933db00ff77885caa083faa96f09c551eb56f93cf73b357789cb31edad4939ffeb";
+ sha512 = "4dcd7228d1890ddfc78c99ff284b45f9cf40aae77ef6359308e26d06fa0d938365255696af4cc12d524c46c4886cdcd19268c165a2bf0a2835202fe857da5cab";
};
spark = fetchurl {
url = "https://cdn.modrinth.com/data/l6YH9Als/versions/3KCl7Vx0/spark-1.10.142-fabric.jar";
sha512 = "95b7e4f2416e20abf9d9df41fcbce04f28ebf0aa086374742652789a88642dd6820c8884ab240334555345b49c39f7d0caf23d521cec9516991ef43ba24758af";
};
better-fabric-console = fetchurl {
- url = "https://cdn.modrinth.com/data/Y8o1j1Sf/versions/DMBZUPjK/better-fabric-console-mc1.21.8-1.2.5.jar";
+ url = "https://cdn.modrinth.com/data/Y8o1j1Sf/versions/6aIKl5wy/better-fabric-console-mc1.21.11-1.2.9.jar";
- sha512 = "d0de1aec66add0158e5a97424a21fc4bd0d26c54457d1bf15cd19e60939ed5d8b4dc4120a6aeec00925723b7dc431a9b84f60ad96d56a9e50620ef34b091cae6";
+ sha512 = "427247dafd99df202ee10b4bf60ffcbbecbabfadb01c167097ffb5b85670edb811f4d061c2551be816295cbbc6b8ec5ec464c14a6ff41912ef1f6c57b038d320";
};
disconnect-packet-fix = fetchurl {
url = "https://cdn.modrinth.com/data/rd9rKuJT/versions/Gv74xveQ/disconnect-packet-fix-fabric-2.0.0.jar";
sha512 = "1fd6f09a41ce36284e1a8e9def53f3f6834d7201e69e54e24933be56445ba569fbc26278f28300d36926ba92db6f4f9c0ae245d23576aaa790530345587316db";
};
packet-fixer = fetchurl {
url = "https://cdn.modrinth.com/data/c7m1mi73/versions/CUh1DWeO/packetfixer-fabric-3.3.4-1.21.11.jar";
sha512 = "33331b16cb40c5e6fbaade3cacc26f3a0e8fa5805a7186f94d7366a0e14dbeee9de2d2e8c76fa71f5e9dd24eb1c261667c35447e32570ea965ca0f154fdfba0a";
};
# fork of Modernfix for 1.21.11 (upstream will support 26.1)
modernfix = fetchurl {
url = "https://cdn.modrinth.com/data/TjSm1wrD/versions/JwSO8JCN/modernfix-5.25.2-build.4.jar";
sha512 = "0d65c05ac0475408c58ef54215714e6301113101bf98bfe4bb2ba949fbfddd98225ac4e2093a5f9206a9e01ba80a931424b237bdfa3b6e178c741ca6f7f8c6a3";
};
debugify = fetchurl {
url = "https://cdn.modrinth.com/data/QwxR6Gcd/versions/8Q49lnaU/debugify-1.21.11%2B1.0.jar";
sha512 = "04d82dd33f44ced37045f1f9a54ad4eacd70861ff74a8800f2d2df358579e6cb0ea86a34b0086b3e87026b1a0691dd6594b4fdc49f89106466eea840518beb03";
};
}
);
};
};
};
systemd.services.minecraft-server-main = {
serviceConfig = {
Nice = -5;
IOSchedulingPriority = 0;
LimitMEMLOCK = "infinity"; # Required for large pages
};
};
services.caddy.virtualHosts = lib.mkIf (config.services.caddy.enable) {
"map.${service_configs.https.domain}".extraConfig = ''
root * ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}/squaremap/web
@@ -127,12 +177,13 @@
users.users = lib.mkIf (config.services.caddy.enable) {
${config.services.caddy.user}.extraGroups = [
# for `map.gardling.com`
- "minecraft"
+ config.services.minecraft-servers.group
];
};
systemd.tmpfiles.rules = [
- "d ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name} 700 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
- "d ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}/squaremap/web 750 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
+ # Allow caddy (in minecraft group) to traverse to squaremap/web for map.gardling.com
+ "z ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name} 710 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
"z ${service_configs.minecraft.parent_dir}/${service_configs.minecraft.server_name}/squaremap 710 ${config.services.minecraft-servers.user} ${config.services.minecraft-servers.group}"
];
}
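A note on the `-XX:+UseLargePages` flag in the jvmOpts above: it only takes effect if the kernel actually has 2 MiB huge pages reserved (hence the "requires vm.nr_hugepages sysctl" comment). A hypothetical sketch of the host-side reservation, not taken from this repo — the page count is an assumption sized from the 4000M heap:

```nix
# Hypothetical companion config for -XX:+UseLargePages, not this repo's actual
# setting. 4000M heap / 2M per page = 2000 pages; the extra ~300 pages are a
# guessed margin for the 400M ReservedCodeCacheSize and metaspace.
{
  boot.kernel.sysctl."vm.nr_hugepages" = 2300;
}
```

If the reservation is missing, the JVM silently falls back to regular pages (the unit's `LimitMEMLOCK = "infinity"` alone is not enough).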

services/monero.nix Normal file

@@ -0,0 +1,23 @@
{
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "monero" service_configs.zpool_hdds [
service_configs.monero.dataDir
])
(lib.serviceFilePerms "monero" [
"Z ${service_configs.monero.dataDir} 0700 monero monero"
])
];
services.monero = {
enable = true;
dataDir = service_configs.monero.dataDir;
rpc = {
restricted = true;
};
};
}
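`restricted = true` disables wallet- and admin-level RPC methods so monerod can be exposed with less risk. A hedged sketch of binding the restricted RPC explicitly — option names follow the NixOS `services.monero` module, but the address and the conventional port 18081 are assumptions, not values from this repo:

```nix
# Sketch only: explicit bind for the restricted RPC (address/port assumed).
{
  services.monero.rpc = {
    restricted = true;     # hide wallet-affecting and admin endpoints
    address = "127.0.0.1"; # keep it off public interfaces; proxy if needed
    port = 18081;          # monerod's conventional RPC port
  };
}
```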

services/ntfy-alerts.nix Normal file

@@ -0,0 +1,10 @@
{ config, service_configs, ... }:
{
services.ntfyAlerts = {
enable = true;
serverUrl = "https://${service_configs.ntfy.domain}";
topicFile = config.age.secrets.ntfy-alerts-topic.path;
tokenFile = config.age.secrets.ntfy-alerts-token.path;
};
}

services/ntfy.nix Normal file

@@ -0,0 +1,34 @@
{
config,
service_configs,
lib,
...
}:
{
imports = [
(lib.serviceMountWithZpool "ntfy-sh" service_configs.zpool_ssds [
"/var/lib/private/ntfy-sh"
])
(lib.serviceFilePerms "ntfy-sh" [
"Z /var/lib/private/ntfy-sh 0700 ${config.services.ntfy-sh.user} ${config.services.ntfy-sh.group}"
])
];
services.ntfy-sh = {
enable = true;
settings = {
base-url = "https://${service_configs.ntfy.domain}";
listen-http = "127.0.0.1:${builtins.toString service_configs.ports.ntfy}";
behind-proxy = true;
auth-default-access = "deny-all";
enable-login = true;
enable-signup = false;
};
};
services.caddy.virtualHosts."${service_configs.ntfy.domain}".extraConfig = ''
reverse_proxy :${builtins.toString service_configs.ports.ntfy}
'';
}


@@ -1,46 +0,0 @@
{
pkgs,
service_configs,
username,
...
}:
let
owntracks_pkg = pkgs.owntracks-recorder.overrideAttrs (old: {
installPhase = old.installPhase + ''
mkdir -p $out/usr/share/ot-recorder
cp -R docroot/* $out/usr/share/ot-recorder'';
});
in
{
users.groups.owntracks = { };
users.users.owntracks = {
isNormalUser = true;
group = "owntracks";
};
systemd.services.owntracks = {
enable = true;
description = "Store and access data published by OwnTracks apps";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "owntracks";
Group = "owntracks";
WorkingDirectory = "${owntracks_pkg}";
ExecStart = "${owntracks_pkg}/bin/ot-recorder -S ${service_configs.owntracks.data_dir} --doc-root usr/share/ot-recorder --http-port ${builtins.toString service_configs.ports.owntracks} --port 0";
};
};
systemd.tmpfiles.rules = [
"d ${service_configs.owntracks.data_dir} 0770 owntracks owntracks"
];
services.caddy.virtualHosts."owntracks.${service_configs.https.domain}".extraConfig = ''
${builtins.readFile ../secrets/owntracks_caddy_auth}
reverse_proxy :${builtins.toString service_configs.ports.owntracks}
'';
users.users.${username}.extraGroups = [
"owntracks"
];
}


@@ -1,15 +1,18 @@
{
pkgs,
config,
username,
service_configs,
lib,
...
}:
{
imports = [
- (lib.serviceMountDeps "postgresql" [ config.services.postgresql.dataDir ])
- (lib.serviceDependZpool "postgresql" service_configs.zpool_ssds)
+ (lib.serviceMountWithZpool "postgresql" service_configs.zpool_ssds [
+ config.services.postgresql.dataDir
+ ])
(lib.serviceFilePerms "postgresql" [
"Z ${config.services.postgresql.dataDir} 0700 postgres postgres"
])
];
services.postgresql = {
@@ -18,12 +21,4 @@
dataDir = service_configs.postgres.dataDir;
};
- systemd.tmpfiles.rules = [
- # postgresql requires 0700
- "d ${config.services.postgresql.dataDir} 0700 postgresql postgresql"
- ];
- users.users.${username}.extraGroups = [
- "postgresql"
- ];
}
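This hunk replaces `serviceMountDeps` + `serviceDependZpool` + ad-hoc `tmpfiles.rules` with the combined `serviceMountWithZpool`/`serviceFilePerms` helpers. Their real definitions live elsewhere in this repo; purely to illustrate the pattern, a minimal `serviceFilePerms` could look like this (the body is an assumption, not the repo's actual helper):

```nix
# Hypothetical reconstruction of the serviceFilePerms helper -- the real one
# is defined elsewhere in this flake. It takes a unit name and a list of
# tmpfiles rules, and orders the unit after tmpfiles setup so the recursive
# "Z" chown/chmod rules have run before the service starts.
serviceFilePerms = name: rules: {
  systemd.tmpfiles.rules = rules;
  systemd.services.${name} = {
    after = [ "systemd-tmpfiles-setup.service" ];
    wants = [ "systemd-tmpfiles-setup.service" ];
  };
};
```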


@@ -2,25 +2,37 @@
pkgs,
config,
service_configs,
username,
lib,
inputs,
...
}:
{
imports = [
- (lib.serviceMountDeps "qbittorrent" [
+ (lib.serviceMountWithZpool "qbittorrent" service_configs.zpool_hdds [
service_configs.torrents_path
- config.services.qbittorrent.serverConfig.Preferences.Downloads.TempPath
+ ])
(lib.serviceMountWithZpool "qbittorrent" service_configs.zpool_ssds [
"${config.services.qbittorrent.profileDir}/qBittorrent"
])
(lib.vpnNamespaceOpenPort config.services.qbittorrent.webuiPort "qbittorrent")
- (lib.serviceDependZpool "qbittorrent" service_configs.zpool_hdds)
+ (lib.serviceFilePerms "qbittorrent" [
# 0770: group (media) needs write to delete files during upgrades —
# Radarr/Sonarr must unlink the old file before placing the new one.
"Z ${config.services.qbittorrent.serverConfig.Preferences.Downloads.SavePath} 0770 ${config.services.qbittorrent.user} ${service_configs.media_group}"
"Z ${config.services.qbittorrent.serverConfig.Preferences.Downloads.TempPath} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
"Z ${config.services.qbittorrent.profileDir} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
])
];
services.qbittorrent = {
enable = true;
webuiPort = service_configs.ports.torrent;
profileDir = "/var/lib/qBittorrent";
# Set the service group to 'media' so the systemd unit runs with media as
# the primary GID. Linux assigns new file ownership from the process's GID
# (set by systemd's Group= directive), not from /etc/passwd. Without this,
# downloads land as qbittorrent:qbittorrent (0700), blocking Radarr/Sonarr.
group = service_configs.media_group;
serverConfig.LegalNotice.Accepted = true;
@@ -41,175 +53,89 @@
serverConfig.BitTorrent = {
Session = {
- MaxConnectionsPerTorrent = 10;
+ MaxConnectionsPerTorrent = 100;
- MaxUploadsPerTorrent = 10;
+ MaxUploadsPerTorrent = 15;
MaxConnections = -1;
MaxUploads = -1;
- MaxActiveCheckingTorrents = 5;
+ MaxActiveCheckingTorrents = 2; # reduce disk pressure from concurrent hash checks
# queueing
- QueueingSystemEnabled = false;
+ QueueingSystemEnabled = true;
- MaxActiveDownloads = 2; # num of torrents that can download at the same time
+ MaxActiveDownloads = 15;
- MaxActiveUploads = 20;
+ MaxActiveUploads = -1;
MaxActiveTorrents = -1;
IgnoreSlowTorrentsForQueueing = true;
GlobalUPSpeedLimit = 0;
- GlobalDLSpeedLimit = 0;
+ GlobalDLSpeedLimit = 10000;
# Alternate speed limits for when Jellyfin is streaming
AlternativeGlobalUPSpeedLimit = 500; # 500 KB/s when throttled
AlternativeGlobalDLSpeedLimit = 800; # 800 KB/s when throttled
IncludeOverheadInLimits = true;
- GlobalMaxRatio = 6.0;
+ GlobalMaxRatio = 7.0;
AddTrackersEnabled = true;
- AdditionalTrackers = (
+ AdditionalTrackers = lib.concatStringsSep "\\n" (
- lib.concatStringsSep "\\n" [
+ lib.lists.filter (x: x != "") (
- "http://0123456789nonexistent.com:80/announce"
+ lib.strings.splitString "\n" (builtins.readFile "${inputs.trackerlist}/trackers_all.txt")
- "http://0d.kebhana.mx:443/announce"
+ )
"http://1337.abcvg.info:80/announce"
"http://bittorrent-tracker.e-n-c-r-y-p-t.net:1337/announce"
"http://bt1.xxxxbt.cc:6969/announce"
"http://bt.poletracker.org:2710/announce"
"http://buny.uk:6969/announce"
"http://finbytes.org:80/announce.php"
"http://highteahop.top:6960/announce"
"http://home.yxgz.club:6969/announce"
"http://open.tracker.cl:1337/announce"
"http://open.trackerlist.xyz:80/announce"
"http://p4p.arenabg.com:1337/announce"
"http://public.tracker.vraphim.com:6969/announce"
"http://region.nl1.privex.cc:6969/announce"
"http://retracker.spark-rostov.ru:80/announce"
"http://seeders-paradise.org:80/announce"
"http://servandroidkino.ru:80/announce"
"http://share.hkg-fansub.info:80/announce.php"
"http://shubt.net:2710/announce"
"https://sparkle.ghostchu-services.top:443/announce"
"https://tracker.bt4g.com:443/announce"
"https://tracker.expli.top:443/announce"
"https://tracker.gcrenwp.top:443/announce"
"https://tracker.ghostchu-services.top:443/announce"
"https://tracker.leechshield.link:443/announce"
"https://tracker.moeblog.cn:443/announce"
"https://tracker.pmman.tech:443/announce"
"https://tracker.yemekyedim.com:443/announce"
"https://tracker.zhuqiy.top:443/announce"
"https://tr.zukizuki.org:443/announce"
"http://taciturn-shadow.spb.ru:6969/announce"
"http://t.jaekr.sh:6969/announce"
"http://t.overflow.biz:6969/announce"
"http://tracker1.bt.moack.co.kr:80/announce"
"http://tracker1.itzmx.com:8080/announce"
"http://tracker.23794.top:6969/announce"
"http://tracker2.dler.org:80/announce"
"http://tracker810.xyz:11450/announce"
"http://tracker.bittor.pw:1337/announce"
"http://tracker.bt4g.com:2095/announce"
"http://tracker.bt-hash.com:80/announce"
"http://tracker.bz:80/announce"
"http://tracker.corpscorp.online:80/announce"
"http://tracker.darkness.services:6969/announce"
"http://tracker.dler.com:6969/announce"
"http://tracker.dler.org:6969/announce"
"http://tracker.dmcomic.org:2710/announce"
"http://tracker.files.fm:6969/announce"
"http://tracker.ghostchu-services.top:80/announce"
"http://tracker.ipv6tracker.org:80/announce"
"http://tracker.lintk.me:2710/announce"
"http://tracker.moxing.party:6969/announce"
"http://tracker.mywaifu.best:6969/announce"
"http://tracker.opentrackr.org:1337/announce"
"http://tracker.qu.ax:6969/announce"
"http://tracker.renfei.net:8080/announce"
"http://tracker.sbsub.com:2710/announce"
"http://tracker.vanitycore.co:6969/announce"
"http://tracker.waaa.moe:6969/announce"
"http://tracker.xiaoduola.xyz:6969/announce"
"http://tracker.zhuqiy.top:80/announce"
"http://tr.kxmp.cf:80/announce"
"http://wepzone.net:6969/announce"
"http://www.genesis-sp.org:2710/announce"
"http://www.torrentsnipe.info:2701/announce"
"udp://1c.premierzal.ru:6969/announce"
"udp://bandito.byterunner.io:6969/announce"
"udp://bittorrent-tracker.e-n-c-r-y-p-t.net:1337/announce"
"udp://bt.ktrackers.com:6666/announce"
"udp://concen.org:6969/announce"
"udp://d40969.acod.regrucolo.ru:6969/announce"
"udp://discord.heihachi.pw:6969/announce"
"udp://evan.im:6969/announce"
"udp://exodus.desync.com:6969/announce"
"udp://explodie.org:6969/announce"
"udp://inferno.demonoid.is:3391/announce"
"udp://ipv4announce.sktorrent.eu:6969/announce"
"udp://ipv4.rer.lol:2710/announce"
"udp://isk.richardsw.club:6969/announce"
"udp://leet-tracker.moe:1337/announce"
"udp://martin-gebhardt.eu:25/announce"
"udp://ns-1.x-fins.com:6969/announce"
"udp://open.demonii.com:1337"
"udp://open.demonii.com:1337/announce"
"udp://open.dstud.io:6969/announce"
"udp://open.free-tracker.ga:6969/announce"
"udp://open.stealth.si:80/announce"
"udp://open.tracker.cl:1337/announce"
"udp://opentracker.io:6969/announce"
"udp://p4p.arenabg.com:1337/announce"
"udp://public.tracker.vraphim.com:6969/announce"
"udp://retracker01-msk-virt.corbina.net:80/announce"
"udp://retracker.lanta.me:2710/announce"
"udp://t.overflow.biz:6969/announce"
"udp://tr4ck3r.duckdns.org:6969/announce"
"udp://tracker2.dler.org:80/announce"
"udp://tracker.bittor.pw:1337/announce"
"udp://tracker.dler.com:6969/announce"
"udp://tracker.dler.org:6969/announce"
"udp://tracker.filemail.com:6969/announce"
"udp://tracker.fnix.net:6969/announce"
"udp://tracker.gigantino.net:6969/announce"
"udp://tracker.gmi.gd:6969/announce"
"udp://tracker.ololosh.space:6969/announce"
"udp://tracker.openbittorrent.com:80"
"udp://tracker.opentrackr.org:1337/announce"
"udp://tracker.srv00.com:6969/announce"
"udp://tracker.therarbg.to:6969/announce"
"udp://tracker.tiny-vps.com:6969/announce"
"udp://tracker.torrent.eu.org:451/announce"
"udp://tracker.torrust-demo.com:6969/announce"
"udp://tracker.tryhackx.org:6969/announce"
"udp://ttk2.nbaonlineservice.com:6969/announce"
"udp://wepzone.net:6969/announce"
- ]
);
AnnounceToAllTrackers = true;
# idk why it also has to be specified here too?
inherit (config.services.qbittorrent.serverConfig.Preferences.Downloads) TempPath;
TempPathEnabled = true;
- # how many connections per sec
- ConnectionSpeed = 300;
+ ConnectionSpeed = 100;
SaveResumeDataInterval = 300; # save resume data every 5 min (default 60s)
ResumeDataStorageType = "SQLite"; # SQLite is more efficient than legacy per-file .fastresume storage
# Automatic Torrent Management: use category save paths for new torrents
DisableAutoTMMByDefault = false;
DisableAutoTMMTriggers.CategorySavePathChanged = false;
DisableAutoTMMTriggers.DefaultSavePathChanged = false;
ChokingAlgorithm = "RateBased";
PieceExtentAffinity = true;
SuggestMode = true;
CoalesceReadWrite = true;
# max_queued_disk_bytes: the max bytes waiting in the disk I/O queue.
# When this limit is reached, peer connections stop reading from their
# sockets until the disk thread catches up -- causing the spike-then-zero
# pattern. Default is 1MB; high_performance_seed() uses 7MB.
# 64MB is above the preset but justified for slow raidz1 HDD random writes
# where ZFS txg commits cause periodic I/O stalls.
DiskQueueSize = 67108864; # 64MB
# === Network buffer tuning (from libtorrent high_performance_seed preset) ===
# "always stuff at least 1 MiB down each peer pipe, to quickly ramp up send rates"
SendBufferLowWatermark = 1024; # 1MB (KiB) -- matches high_performance_seed
# "of 500 ms, and a send rate of 4 MB/s, the upper limit should be 2 MB"
SendBufferWatermark = 3072; # 3MB (KiB) -- matches high_performance_seed
# "put 1.5 seconds worth of data in the send buffer"
SendBufferWatermarkFactor = 150; # percent -- matches high_performance_seed
};
Network = {
# traffic is routed through a vpn, we don't need
# port forwarding
PortForwardingEnabled = false;
};
Session.UseUPnP = false;
};
};
- systemd.tmpfiles.rules = [
- "d ${config.services.qbittorrent.serverConfig.Preferences.Downloads.SavePath} 0750 ${config.services.qbittorrent.user} ${service_configs.media_group}"
- "d ${config.services.qbittorrent.serverConfig.Preferences.Downloads.TempPath} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
- "d ${config.services.qbittorrent.profileDir} 0700 ${config.services.qbittorrent.user} ${config.services.qbittorrent.group}"
- ];
+ systemd.services.qbittorrent.serviceConfig.TimeoutStopSec = lib.mkForce 10;
services.caddy.virtualHosts."torrent.${service_configs.https.domain}".extraConfig = ''
- ${builtins.readFile ../secrets/caddy_auth}
+ import ${config.age.secrets.caddy_auth.path}
- reverse_proxy ${service_configs.https.wg_ip}:${builtins.toString config.services.qbittorrent.webuiPort}
+ reverse_proxy ${config.vpnNamespaces.wg.namespaceAddress}:${builtins.toString config.services.qbittorrent.webuiPort}
'';
users.users.${config.services.qbittorrent.user}.extraGroups = [
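The new `AdditionalTrackers` expression reads a tracker list from a flake input, splits it into lines, drops the blank separator lines, and joins the entries with a literal `\n` (the separator qBittorrent expects in its .conf). The same pipeline isolated as a standalone sketch — `raw` is a stand-in for `builtins.readFile "${inputs.trackerlist}/trackers_all.txt"`:

```nix
# Standalone sketch of the AdditionalTrackers transform above (sample data,
# not the real tracker list).
let
  lib = (import <nixpkgs> { }).lib;
  raw = "udp://a:1/announce\n\nudp://b:2/announce\n";
  # splitString yields empty strings for blank lines; filter drops them
  entries = lib.lists.filter (x: x != "") (lib.strings.splitString "\n" raw);
in
# "\\n" in Nix is a literal backslash + n, qBittorrent's list separator
lib.concatStringsSep "\\n" entries
```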


@@ -11,13 +11,17 @@ let
in
{
imports = [
- (lib.serviceMountDeps "slskd" [
+ (lib.serviceMountWithZpool "slskd" "" [
service_configs.slskd.base
service_configs.slskd.downloads
service_configs.slskd.incomplete
])
- (lib.serviceDependZpool "slskd" service_configs.zpool_ssds)
- (lib.serviceDependZpool "slskd" service_configs.zpool_hdds)
+ (lib.serviceFilePerms "slskd" [
+ "Z ${service_configs.music_dir} 0750 ${username} music"
"Z ${service_configs.slskd.base} 0750 ${config.services.slskd.user} ${config.services.slskd.group}"
"Z ${service_configs.slskd.downloads} 0750 ${config.services.slskd.user} music"
"Z ${service_configs.slskd.incomplete} 0750 ${config.services.slskd.user} music"
])
];
users.groups."music" = { };
@@ -26,7 +30,7 @@ in
"skskd_env".text = ''
#!/bin/sh
rm -fr ${slskd_env} || true
- cp ${../secrets/slskd_env} ${slskd_env}
+ cp ${config.age.secrets.slskd_env.path} ${slskd_env}
chmod 0500 ${slskd_env}
chown ${config.services.slskd.user}:${config.services.slskd.group} ${slskd_env}
'';
@@ -67,13 +71,6 @@ in
users.users.${config.services.jellyfin.user}.extraGroups = [ "music" ];
users.users.${username}.extraGroups = [ "music" ];
- systemd.tmpfiles.rules = [
- "d ${service_configs.music_dir} 0750 ${username} music"
- "d ${service_configs.slskd.base} 0750 ${config.services.slskd.user} ${config.services.slskd.group}"
- "d ${service_configs.slskd.downloads} 0750 ${config.services.slskd.user} music"
- "d ${service_configs.slskd.incomplete} 0750 ${config.services.slskd.user} music"
- ];
# doesn't work with auth????
services.caddy.virtualHosts."soulseek.${service_configs.https.domain}".extraConfig = ''
reverse_proxy :${builtins.toString config.services.slskd.settings.web.port}

services/ssh.nix Normal file

@@ -0,0 +1,35 @@
{
config,
lib,
pkgs,
username,
...
}:
{
# Enable the OpenSSH daemon.
services.openssh = {
enable = true;
settings = {
AllowUsers = [
username
"root"
];
PasswordAuthentication = false;
PermitRootLogin = "yes"; # for deploying configs
};
};
systemd.tmpfiles.rules = [
"Z /etc/ssh 755 root root"
"Z /etc/ssh/ssh_host_* 600 root root"
];
users.users.${username}.openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO4jL6gYOunUlUtPvGdML0cpbKSsPNqQ1jit4E7U1RyH" # laptop
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJjT5QZ3zRDb+V6Em20EYpSEgPW5e/U+06uQGJdraxi" # desktop
];
# used for deploying configs to server
users.users.root.openssh.authorizedKeys.keys =
config.users.users.${username}.openssh.authorizedKeys.keys;
}

services/syncthing.nix Normal file

@@ -0,0 +1,54 @@
{
config,
lib,
pkgs,
service_configs,
...
}:
{
imports = [
(lib.serviceMountWithZpool "syncthing" service_configs.zpool_ssds [
service_configs.syncthing.dataDir
service_configs.syncthing.signalBackupDir
service_configs.syncthing.grayjayBackupDir
])
(lib.serviceFilePerms "syncthing" [
"Z ${service_configs.syncthing.dataDir} 0750 ${config.services.syncthing.user} ${config.services.syncthing.group}"
"Z ${service_configs.syncthing.signalBackupDir} 0750 ${config.services.syncthing.user} ${config.services.syncthing.group}"
"Z ${service_configs.syncthing.grayjayBackupDir} 0750 ${config.services.syncthing.user} ${config.services.syncthing.group}"
])
];
services.syncthing = {
enable = true;
dataDir = service_configs.syncthing.dataDir;
guiAddress = "127.0.0.1:${toString service_configs.ports.syncthing_gui}";
overrideDevices = false;
overrideFolders = false;
settings = {
gui = {
insecureSkipHostcheck = true; # Allow access via reverse proxy
};
options = {
urAccepted = 1; # enable usage reporting
relaysEnabled = true;
};
};
};
# Open firewall ports for syncthing protocol
networking.firewall = {
allowedTCPPorts = [ service_configs.ports.syncthing_protocol ];
allowedUDPPorts = [ service_configs.ports.syncthing_discovery ];
};
services.caddy.virtualHosts."syncthing.${service_configs.https.domain}".extraConfig = ''
import ${config.age.secrets.caddy_auth.path}
reverse_proxy :${toString service_configs.ports.syncthing_gui}
'';
}
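`overrideDevices`/`overrideFolders = false` above leaves device and folder management to the Syncthing GUI. For contrast, a hedged sketch of the fully declarative alternative — option shapes follow the NixOS `services.syncthing` module, while the device ID, folder name, and path are placeholders:

```nix
# Sketch of declarative device/folder management (placeholder IDs and paths).
{
  services.syncthing = {
    overrideDevices = true; # Nix config wins; GUI edits are reverted on switch
    overrideFolders = true;
    settings = {
      devices.laptop.id = "DEVICE-ID-PLACEHOLDER";
      folders."notes" = {
        path = "/data/sync/notes";
        devices = [ "laptop" ]; # share this folder with the device above
      };
    };
  };
}
```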


@@ -1,62 +1,20 @@
{
pkgs,
- service_configs,
+ config,
- eth_interface,
+ inputs,
...
}:
{
imports = [
inputs.vpn-confinement.nixosModules.default
];
# network namespace that is proxied through mullvad
vpnNamespaces.wg = {
enable = true;
- wireguardConfigFile = ../secrets/wg0.conf;
+ wireguardConfigFile = config.age.secrets.wg0-conf.path;
accessibleFrom = [
# "192.168.0.0/24"
];
};
environment.systemPackages = with pkgs; [
# used to monitor bandwidth usage
nload
];
systemd.services."jellyfin-qbittorrent-monitor" = {
description = "Monitor Jellyfin streaming and control qBittorrent rate limits";
after = [
"network.target"
"jellyfin.service"
"qbittorrent.service"
];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = pkgs.writeShellScript "jellyfin-monitor-start" ''
export JELLYFIN_API_KEY=$(cat ${../secrets/jellyfin-api-key})
exec ${
pkgs.python3.withPackages (ps: with ps; [ requests ])
}/bin/python ${./jellyfin-qbittorrent-monitor.py}
'';
Restart = "always";
RestartSec = "10s";
# Security hardening
DynamicUser = true;
NoNewPrivileges = true;
ProtectSystem = "strict";
ProtectHome = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
MemoryDenyWriteExecute = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
RemoveIPC = true;
};
environment = {
JELLYFIN_URL = "http://localhost:${builtins.toString service_configs.ports.jellyfin}";
QBITTORRENT_URL = "http://${service_configs.https.wg_ip}:${builtins.toString service_configs.ports.torrent}";
CHECK_INTERVAL = "30";
};
};
}

services/xmrig.nix Normal file

@@ -0,0 +1,63 @@
{
config,
lib,
pkgs,
hostname,
...
}:
let
walletAddress = lib.strings.trim (builtins.readFile ../secrets/xmrig-wallet);
threadCount = 12;
in
{
services.xmrig = {
enable = true;
package = pkgs.xmrig;
settings = {
autosave = true;
cpu = {
enabled = true;
huge-pages = true;
hw-aes = true;
rx = lib.range 0 (threadCount - 1);
};
randomx = {
"1gb-pages" = true;
};
opencl = false;
cuda = false;
pools = [
{
url = "gulf.moneroocean.stream:20128";
user = walletAddress;
pass = hostname + "~rx/0";
keepalive = true;
tls = true;
}
];
};
};
systemd.services.xmrig.serviceConfig = {
Nice = 19;
CPUSchedulingPolicy = "idle";
IOSchedulingClass = "idle";
};
# Stop mining on UPS battery to conserve power
services.apcupsd.hooks = lib.mkIf config.services.apcupsd.enable {
onbattery = "systemctl stop xmrig";
offbattery = "systemctl start xmrig";
};
# Reserve 1GB huge pages for RandomX (dataset is ~2GB)
boot.kernelParams = [
"hugepagesz=1G"
"hugepages=3"
];
}
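The `hugepages=3` reservation is sized around RandomX's memory layout: the full mining dataset is a little over 2 GiB, plus a 256 MiB cache and per-thread scratchpads, so two 1 GiB pages fall just short. A back-of-envelope check — the figures are approximations from RandomX's documented sizes, not values from this repo:

```nix
# Rough sizing check for hugepages=3 (all sizes approximate).
let
  datasetMiB = 2080;      # RandomX dataset: slightly over 2 GiB in fast mode
  cacheMiB = 256;         # RandomX cache
  scratchpadMiB = 12 * 2; # ~2 MiB scratchpad per mining thread, 12 threads
in
# ~2.3 GiB total -> two 1 GiB pages are not enough, three cover it with room
(datasetMiB + cacheMiB + scratchpadMiB) / 1024.0
```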

tests/fail2ban-caddy.nix Normal file

@@ -0,0 +1,124 @@
{
config,
lib,
pkgs,
...
}:
pkgs.testers.runNixOSTest {
name = "fail2ban-caddy";
nodes = {
server =
{
config,
pkgs,
lib,
...
}:
{
imports = [
../modules/security.nix
];
# Set up Caddy with basic auth (minimal config, no production stuff)
# Using bcrypt hash generated with: caddy hash-password --plaintext testpass
services.caddy = {
enable = true;
virtualHosts.":80".extraConfig = ''
log {
output file /var/log/caddy/access-server.log
format json
}
basic_auth {
testuser $2a$14$XqaQlGTdmofswciqrLlMz.rv0/jiGQq8aU.fP6mh6gCGiLf6Cl3.a
}
respond "Authenticated!" 200
'';
};
# Add the fail2ban jail for caddy-auth (same as in services/caddy.nix)
services.fail2ban.jails.caddy-auth = {
enabled = true;
settings = {
backend = "auto";
port = "http,https";
logpath = "/var/log/caddy/access-*.log";
maxretry = 3; # Lower for testing
};
filter.Definition = {
# Only match 401s where an Authorization header was actually sent
failregex = ''^.*"remote_ip":"<HOST>".*"Authorization":\["REDACTED"\].*"status":401.*$'';
ignoreregex = "";
datepattern = ''"ts":{Epoch}\.'';
};
};
# Create log directory and initial log file so fail2ban can start
systemd.tmpfiles.rules = [
"d /var/log/caddy 755 caddy caddy"
"f /var/log/caddy/access-server.log 644 caddy caddy"
];
networking.firewall.allowedTCPPorts = [ 80 ];
};
client = {
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import time
import re
start_all()
server.wait_for_unit("caddy.service")
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(80)
time.sleep(2)
with subtest("Verify caddy-auth jail is active"):
status = server.succeed("fail2ban-client status")
assert "caddy-auth" in status, f"caddy-auth jail not found in: {status}"
with subtest("Verify correct password works"):
# Use -4 to force IPv4 for consistency
result = client.succeed("curl -4 -s -u testuser:testpass http://server/")
print(f"Curl result: {result}")
assert "Authenticated" in result, f"Auth should succeed: {result}"
with subtest("Unauthenticated requests (browser probes) should not trigger ban"):
# Simulate browser probe requests - no Authorization header sent
# This is the normal HTTP Basic Auth challenge-response flow:
# browser sends request without credentials, gets 401, then resends with credentials
for i in range(5):
client.execute("curl -4 -s http://server/ || true")
time.sleep(0.5)
time.sleep(3)
status = server.succeed("fail2ban-client status caddy-auth")
print(f"caddy-auth jail status after unauthenticated requests: {status}")
match = re.search(r"Currently banned:\s*(\d+)", status)
banned = int(match.group(1)) if match else 0
assert banned == 0, f"Unauthenticated 401s should NOT trigger ban, but {banned} IPs were banned: {status}"
with subtest("Generate failed basic auth attempts (wrong password)"):
# Use -4 to force IPv4 for consistent IP tracking
# These send an Authorization header with wrong credentials
for i in range(4):
client.execute("curl -4 -s -u testuser:wrongpass http://server/ || true")
time.sleep(1)
with subtest("Verify IP is banned after wrong password attempts"):
time.sleep(5)
status = server.succeed("fail2ban-client status caddy-auth")
print(f"caddy-auth jail status: {status}")
# Check that at least 1 IP is banned
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("curl -4 -s --max-time 3 http://server/ 2>&1")[0]
assert exit_code != 0, "Connection should be blocked"
'';
}
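The key property this test exercises is that the `caddy-auth` failregex only fires on 401s that actually carried an Authorization header (which Caddy logs as `"REDACTED"`), so plain browser challenge probes never count toward a ban. A quick `re` demo against illustrative log lines — fail2ban's real `<HOST>` expansion is more elaborate than the simplified group used here:

```python
# Demo of the caddy-auth failregex: matches 401s with an Authorization
# header, ignores unauthenticated probes. <HOST> is simplified to a
# named group; fail2ban substitutes a richer IP-matching pattern.
import re

failregex = r'^.*"remote_ip":"<HOST>".*"Authorization":\["REDACTED"\].*"status":401.*$'
pattern = re.compile(failregex.replace("<HOST>", r'(?P<host>[^"]+)'))

# Illustrative log lines, not captured from a real Caddy run
bad_creds = '{"ts":1.0,"request":{"remote_ip":"203.0.113.10","headers":{"Authorization":["REDACTED"]}},"status":401}'
probe = '{"ts":1.0,"request":{"remote_ip":"203.0.113.10","headers":{}},"status":401}'

m = pattern.search(bad_creds)
print(m.group("host") if m else None)  # → 203.0.113.10
print(pattern.search(probe))           # → None
```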

123
tests/fail2ban-gitea.nix Normal file
View File

@@ -0,0 +1,123 @@
{
config,
lib,
pkgs,
...
}:
let
testServiceConfigs = {
zpool_ssds = "";
gitea = {
dir = "/var/lib/gitea";
domain = "git.test.local";
};
postgres = {
socket = "/run/postgresql";
};
ports = {
gitea = 3000;
};
};
testLib = lib.extend (
final: prev: {
serviceMountWithZpool =
serviceName: zpool: dirs:
{ ... }:
{ };
serviceFilePerms = serviceName: tmpfilesRules: { ... }: { };
}
);
giteaModule =
{ config, pkgs, ... }:
{
imports = [
(import ../services/gitea.nix {
inherit config pkgs;
lib = testLib;
service_configs = testServiceConfigs;
})
];
};
in
pkgs.testers.runNixOSTest {
name = "fail2ban-gitea";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
../modules/security.nix
giteaModule
];
# Enable postgres for gitea
services.postgresql.enable = true;
# Disable ZFS mount dependency
systemd.services."gitea-mounts".enable = lib.mkForce false;
systemd.services.gitea = {
wants = lib.mkForce [ ];
after = lib.mkForce [ "postgresql.service" ];
requires = lib.mkForce [ ];
};
# Override for faster testing and correct port
services.fail2ban.jails.gitea.settings = {
maxretry = lib.mkForce 3;
# In test, we connect directly to Gitea port, not via Caddy
port = lib.mkForce "3000";
};
networking.firewall.allowedTCPPorts = [ 3000 ];
};
client = {
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import time
import re
start_all()
server.wait_for_unit("postgresql.service")
server.wait_for_unit("gitea.service")
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(3000)
time.sleep(3)
with subtest("Verify gitea jail is active"):
status = server.succeed("fail2ban-client status")
assert "gitea" in status, f"gitea jail not found in: {status}"
with subtest("Generate failed login attempts"):
# Use -4 to force IPv4 for consistent IP tracking
for i in range(4):
client.execute(
"curl -4 -s -X POST http://server:3000/user/login -d 'user_name=baduser&password=badpass' || true"
)
time.sleep(0.5)
with subtest("Verify IP is banned"):
time.sleep(3)
status = server.succeed("fail2ban-client status gitea")
print(f"gitea jail status: {status}")
# Check that at least 1 IP is banned
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("curl -4 -s --max-time 3 http://server:3000/ 2>&1")[0]
assert exit_code != 0, "Connection should be blocked"
'';
}
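All of these fail2ban tests scrape `fail2ban-client status <jail>` with the same `Currently banned` regex. A self-contained demo of that parsing, against illustrative status output (not captured from a real run):

```python
# Parse the banned-IP count out of fail2ban-client status output,
# exactly as the test scripts above do.
import re

sample_status = """Status for the jail: gitea
|- Filter
|  |- Currently failed: 0
|  `- Total failed:     4
`- Actions
   |- Currently banned: 1
   |- Total banned:     1
   `- Banned IP list:   192.168.1.2
"""

def banned_count(status: str) -> int:
    match = re.search(r"Currently banned:\s*(\d+)", status)
    return int(match.group(1)) if match else 0

print(banned_count(sample_status))  # → 1
```

Note the regex is specific enough not to trip on the adjacent `Currently failed` line.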

135
tests/fail2ban-immich.nix Normal file
View File

@@ -0,0 +1,135 @@
{
config,
lib,
pkgs,
...
}:
let
testServiceConfigs = {
zpool_ssds = "";
https = {
domain = "test.local";
};
ports = {
immich = 2283;
};
immich = {
dir = "/var/lib/immich";
};
};
testLib = lib.extend (
final: prev: {
serviceMountWithZpool =
serviceName: zpool: dirs:
{ ... }:
{ };
serviceFilePerms = serviceName: tmpfilesRules: { ... }: { };
}
);
immichModule =
{ config, pkgs, ... }:
{
imports = [
(import ../services/immich.nix {
inherit config pkgs;
lib = testLib;
service_configs = testServiceConfigs;
})
];
};
in
pkgs.testers.runNixOSTest {
name = "fail2ban-immich";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
../modules/security.nix
immichModule
];
# Immich needs postgres
services.postgresql.enable = true;
# Let immich create its own DB for testing
services.immich.database.createDB = lib.mkForce true;
# Disable ZFS mount dependencies
systemd.services."immich-server-mounts".enable = lib.mkForce false;
systemd.services."immich-machine-learning-mounts".enable = lib.mkForce false;
systemd.services.immich-server = {
wants = lib.mkForce [ ];
after = lib.mkForce [ "postgresql.service" ];
requires = lib.mkForce [ ];
};
systemd.services.immich-machine-learning = {
wants = lib.mkForce [ ];
after = lib.mkForce [ ];
requires = lib.mkForce [ ];
};
# Override for faster testing and correct port
services.fail2ban.jails.immich.settings = {
maxretry = lib.mkForce 3;
# In test, we connect directly to Immich port, not via Caddy
port = lib.mkForce "2283";
};
networking.firewall.allowedTCPPorts = [ 2283 ];
# Immich needs more resources
virtualisation.diskSize = 4 * 1024;
virtualisation.memorySize = 4 * 1024; # 4GB RAM for Immich
};
client = {
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import time
import re
start_all()
server.wait_for_unit("postgresql.service")
server.wait_for_unit("immich-server.service", timeout=120)
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(2283, timeout=60)
time.sleep(3)
with subtest("Verify immich jail is active"):
status = server.succeed("fail2ban-client status")
assert "immich" in status, f"immich jail not found in: {status}"
with subtest("Generate failed login attempts"):
# Use -4 to force IPv4 for consistent IP tracking
for i in range(4):
client.execute(
"curl -4 -s -X POST http://server:2283/api/auth/login -H 'Content-Type: application/json' -d '{\"email\":\"bad@user.com\",\"password\":\"badpass\"}' || true"
)
time.sleep(0.5)
with subtest("Verify IP is banned"):
time.sleep(3)
status = server.succeed("fail2ban-client status immich")
print(f"immich jail status: {status}")
# Check that at least 1 IP is banned
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("curl -4 -s --max-time 3 http://server:2283/ 2>&1")[0]
assert exit_code != 0, "Connection should be blocked"
'';
}

147
tests/fail2ban-jellyfin.nix Normal file
View File

@@ -0,0 +1,147 @@
{
config,
lib,
pkgs,
...
}:
let
testServiceConfigs = {
zpool_ssds = "";
https = {
domain = "test.local";
};
ports = {
jellyfin = 8096;
};
jellyfin = {
dataDir = "/var/lib/jellyfin";
cacheDir = "/var/cache/jellyfin";
};
media_group = "media";
};
testLib = lib.extend (
final: prev: {
serviceMountWithZpool =
serviceName: zpool: dirs:
{ ... }:
{ };
serviceFilePerms = serviceName: tmpfilesRules: { ... }: { };
optimizePackage = pkg: pkg; # No-op for testing
}
);
jellyfinModule =
{ config, pkgs, ... }:
{
imports = [
(import ../services/jellyfin.nix {
inherit config pkgs;
lib = testLib;
service_configs = testServiceConfigs;
})
];
};
in
pkgs.testers.runNixOSTest {
name = "fail2ban-jellyfin";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
../modules/security.nix
jellyfinModule
];
# Create the media group
users.groups.media = { };
# Disable ZFS mount dependency
systemd.services."jellyfin-mounts".enable = lib.mkForce false;
systemd.services.jellyfin = {
wants = lib.mkForce [ ];
after = lib.mkForce [ ];
requires = lib.mkForce [ ];
};
# Override for faster testing and correct port
services.fail2ban.jails.jellyfin.settings = {
maxretry = lib.mkForce 3;
# In test, we connect directly to Jellyfin port, not via Caddy
port = lib.mkForce "8096";
};
# Create log directory and placeholder log file for fail2ban
# Jellyfin logs to files, not systemd journal
systemd.tmpfiles.rules = [
"d /var/lib/jellyfin/log 0755 jellyfin jellyfin"
"f /var/lib/jellyfin/log/log_placeholder.log 0644 jellyfin jellyfin"
];
# Make fail2ban start after Jellyfin
systemd.services.fail2ban = {
wants = [ "jellyfin.service" ];
after = [ "jellyfin.service" ];
};
# Give jellyfin more disk space and memory
virtualisation.diskSize = 3 * 1024;
virtualisation.memorySize = 2 * 1024;
};
client = {
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import time
import re
start_all()
server.wait_for_unit("jellyfin.service")
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(8096)
server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
time.sleep(2)
# Wait for Jellyfin to create real log files and reload fail2ban
server.wait_until_succeeds("ls /var/lib/jellyfin/log/log_2*.log", timeout=30)
server.succeed("fail2ban-client reload jellyfin")
with subtest("Verify jellyfin jail is active"):
status = server.succeed("fail2ban-client status")
assert "jellyfin" in status, f"jellyfin jail not found in: {status}"
with subtest("Generate failed login attempts"):
# Use -4 to force IPv4 for consistent IP tracking
for i in range(4):
client.execute("""
curl -4 -s -X POST http://server:8096/Users/authenticatebyname \
-H 'Content-Type: application/json' \
-H 'X-Emby-Authorization: MediaBrowser Client="test", Device="test", DeviceId="test", Version="1.0"' \
-d '{"Username":"baduser","Pw":"badpass"}' || true
""")
time.sleep(0.5)
with subtest("Verify IP is banned"):
time.sleep(3)
status = server.succeed("fail2ban-client status jellyfin")
print(f"jellyfin jail status: {status}")
# Check that at least 1 IP is banned
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("curl -4 -s --max-time 3 http://server:8096/ 2>&1")[0]
assert exit_code != 0, "Connection should be blocked"
'';
}

104
tests/fail2ban-ssh.nix Normal file
View File

@@ -0,0 +1,104 @@
{
config,
lib,
pkgs,
...
}:
let
testServiceConfigs = {
zpool_ssds = "";
zpool_hdds = "";
};
securityModule = import ../modules/security.nix;
sshModule =
{
config,
lib,
pkgs,
...
}:
{
imports = [
(import ../services/ssh.nix {
inherit config lib pkgs;
username = "testuser";
})
];
};
in
pkgs.testers.runNixOSTest {
name = "fail2ban-ssh";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
securityModule
sshModule
];
# Override for testing - enable password auth
services.openssh.settings.PasswordAuthentication = lib.mkForce true;
users.users.testuser = {
isNormalUser = true;
password = "correctpassword";
};
networking.firewall.allowedTCPPorts = [ 22 ];
};
client = {
environment.systemPackages = with pkgs; [
sshpass
openssh
];
};
};
testScript = ''
import time
start_all()
server.wait_for_unit("sshd.service")
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(22)
time.sleep(2)
with subtest("Verify sshd jail is active"):
status = server.succeed("fail2ban-client status")
assert "sshd" in status, f"sshd jail not found in: {status}"
with subtest("Generate failed SSH login attempts"):
# Use -4 to force IPv4, timeout and NumberOfPasswordPrompts=1 to ensure quick failure
# maxretry is 3 in our config, so 4 attempts should trigger a ban
for i in range(4):
client.execute(
"timeout 5 sshpass -p 'wrongpassword' ssh -4 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=3 -o NumberOfPasswordPrompts=1 testuser@server echo test 2>/dev/null || true"
)
time.sleep(1)
with subtest("Verify IP is banned"):
# Wait for fail2ban to process the logs and apply the ban
time.sleep(5)
status = server.succeed("fail2ban-client status sshd")
print(f"sshd jail status: {status}")
# Check that at least 1 IP is banned
import re
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("timeout 3 nc -4 -z -w 2 server 22")[0]
assert exit_code != 0, "Connection should be blocked for banned IP"
'';
}

137
tests/fail2ban-vaultwarden.nix Normal file
View File

@@ -0,0 +1,137 @@
{
config,
lib,
pkgs,
...
}:
let
testServiceConfigs = {
zpool_ssds = "";
https = {
domain = "test.local";
};
ports = {
vaultwarden = 8222;
};
vaultwarden = {
path = "/var/lib/vaultwarden";
};
};
testLib = lib.extend (
final: prev: {
serviceMountWithZpool =
serviceName: zpool: dirs:
{ ... }:
{ };
serviceFilePerms = serviceName: tmpfilesRules: { ... }: { };
}
);
vaultwardenModule =
{ config, pkgs, ... }:
{
imports = [
(import ../services/bitwarden.nix {
inherit config pkgs;
lib = testLib;
service_configs = testServiceConfigs;
})
];
};
in
pkgs.testers.runNixOSTest {
name = "fail2ban-vaultwarden";
nodes = {
server =
{
config,
lib,
pkgs,
...
}:
{
imports = [
../modules/security.nix
vaultwardenModule
];
# Disable ZFS mount dependencies
systemd.services."vaultwarden-mounts".enable = lib.mkForce false;
systemd.services."backup-vaultwarden-mounts".enable = lib.mkForce false;
systemd.services.vaultwarden = {
wants = lib.mkForce [ ];
after = lib.mkForce [ ];
requires = lib.mkForce [ ];
};
systemd.services.backup-vaultwarden = {
wants = lib.mkForce [ ];
after = lib.mkForce [ ];
requires = lib.mkForce [ ];
};
# Override Vaultwarden settings for testing
# - Listen on all interfaces (not just localhost)
# - Enable logging at info level to capture failed login attempts
services.vaultwarden.config = {
ROCKET_ADDRESS = lib.mkForce "0.0.0.0";
ROCKET_LOG = lib.mkForce "info";
};
# Override for faster testing and correct port
services.fail2ban.jails.vaultwarden.settings = {
maxretry = lib.mkForce 3;
# In test, we connect directly to Vaultwarden port, not via Caddy
port = lib.mkForce "8222";
};
networking.firewall.allowedTCPPorts = [ 8222 ];
};
client = {
environment.systemPackages = [ pkgs.curl ];
};
};
testScript = ''
import time
import re
start_all()
server.wait_for_unit("vaultwarden.service")
server.wait_for_unit("fail2ban.service")
server.wait_for_open_port(8222)
time.sleep(2)
with subtest("Verify vaultwarden jail is active"):
status = server.succeed("fail2ban-client status")
assert "vaultwarden" in status, f"vaultwarden jail not found in: {status}"
with subtest("Generate failed login attempts"):
# Use -4 to force IPv4 for consistent IP tracking
for i in range(4):
client.execute("""
curl -4 -s -X POST 'http://server:8222/identity/connect/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Bitwarden-Client-Name: web' \
-H 'Bitwarden-Client-Version: 2024.1.0' \
-d 'grant_type=password&username=bad@user.com&password=badpass&scope=api+offline_access&client_id=web&deviceType=10&deviceIdentifier=test&deviceName=test' \
|| true
""")
time.sleep(0.5)
with subtest("Verify IP is banned"):
time.sleep(3)
status = server.succeed("fail2ban-client status vaultwarden")
print(f"vaultwarden jail status: {status}")
# Check that at least 1 IP is banned
match = re.search(r"Currently banned:\s*(\d+)", status)
assert match and int(match.group(1)) >= 1, f"Expected at least 1 banned IP, got: {status}"
with subtest("Verify banned client cannot connect"):
# Use -4 to test with same IP that was banned
exit_code = client.execute("curl -4 -s --max-time 3 http://server:8222/ 2>&1")[0]
assert exit_code != 0, "Connection should be blocked"
'';
}

53
tests/file-perms.nix Normal file
View File

@@ -0,0 +1,53 @@
{
config,
lib,
pkgs,
...
}:
let
testPkgs = pkgs.appendOverlays [ (import ../modules/overlays.nix) ];
in
testPkgs.testers.runNixOSTest {
name = "file-perms";
nodes.machine =
{ pkgs, ... }:
{
imports = [
(lib.serviceFilePerms "test-service" [
"Z /tmp/test-perms-dir 0750 nobody nogroup"
])
];
systemd.services."test-service" = {
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = lib.getExe pkgs.bash;
};
};
};
testScript = ''
start_all()
machine.wait_for_unit("multi-user.target")
# Create test directory with wrong permissions
machine.succeed("mkdir -p /tmp/test-perms-dir")
machine.succeed("chown root:root /tmp/test-perms-dir")
machine.succeed("chmod 700 /tmp/test-perms-dir")
# Start service -- this should pull in test-service-file-perms
machine.succeed("systemctl start test-service")
# Verify file-perms service ran and is active
machine.succeed("systemctl is-active test-service-file-perms.service")
# Verify permissions were fixed by tmpfiles
result = machine.succeed("stat -c '%U:%G' /tmp/test-perms-dir").strip()
assert result == "nobody:nogroup", f"Expected nobody:nogroup, got {result}"
result = machine.succeed("stat -c '%a' /tmp/test-perms-dir").strip()
assert result == "750", f"Expected 750, got {result}"
'';
}
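The assertions above shell out to `stat -c '%a'` and `stat -c '%U:%G'`; the same mode check can be expressed directly with `os.stat`. A sketch covering the mode half only (changing ownership to `nobody:nogroup`, as the tmpfiles `Z` rule does, requires root):

```python
# Verify a directory's permission bits the way the test does, but via
# os.stat instead of shelling out to stat(1).
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    os.chmod(d, 0o750)  # the mode the Z rule is expected to enforce
    mode = stat.S_IMODE(os.stat(d).st_mode)
    print(format(mode, "o"))  # → 750
```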

583
tests/jellyfin-qbittorrent-monitor.nix Normal file
View File

@@ -0,0 +1,583 @@
{
lib,
pkgs,
inputs,
...
}:
let
payloads = {
auth = pkgs.writeText "auth.json" (builtins.toJSON { Username = "jellyfin"; });
empty = pkgs.writeText "empty.json" (builtins.toJSON { });
};
in
pkgs.testers.runNixOSTest {
name = "jellyfin-qbittorrent-monitor";
nodes = {
server =
{ ... }:
{
imports = [
inputs.vpn-confinement.nixosModules.default
];
services.jellyfin.enable = true;
# Real qBittorrent service
services.qbittorrent = {
enable = true;
webuiPort = 8080;
openFirewall = true;
serverConfig.LegalNotice.Accepted = true;
serverConfig.Preferences = {
WebUI = {
# Disable authentication for testing
AuthSubnetWhitelist = "0.0.0.0/0,::/0";
AuthSubnetWhitelistEnabled = true;
LocalHostAuth = false;
};
Downloads = {
SavePath = "/var/lib/qbittorrent/downloads";
TempPath = "/var/lib/qbittorrent/incomplete";
};
};
serverConfig.BitTorrent.Session = {
# Normal speed - unlimited
GlobalUPSpeedLimit = 0;
GlobalDLSpeedLimit = 0;
# Alternate speed limits for when Jellyfin is streaming
AlternativeGlobalUPSpeedLimit = 100;
AlternativeGlobalDLSpeedLimit = 100;
};
};
environment.systemPackages = with pkgs; [
curl
ffmpeg
];
virtualisation.diskSize = 3 * 1024;
networking.firewall.allowedTCPPorts = [
8096
8080
];
networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
{
address = "192.168.1.1";
prefixLength = 24;
}
];
networking.interfaces.eth1.ipv4.routes = [
{
address = "203.0.113.0";
prefixLength = 24;
}
];
# Create directories for qBittorrent
systemd.tmpfiles.rules = [
"d /var/lib/qbittorrent/downloads 0755 qbittorrent qbittorrent"
"d /var/lib/qbittorrent/incomplete 0755 qbittorrent qbittorrent"
];
};
# Public test IP (RFC 5737 TEST-NET-3) so Jellyfin sees it as external
client = {
environment.systemPackages = [ pkgs.curl ];
networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
{
address = "203.0.113.10";
prefixLength = 24;
}
];
networking.interfaces.eth1.ipv4.routes = [
{
address = "192.168.1.0";
prefixLength = 24;
}
];
};
};
testScript = ''
import json
import time
from urllib.parse import urlencode
auth_header = 'MediaBrowser Client="NixOS Test", DeviceId="test-1337", Device="TestDevice", Version="1.0"'
def api_get(path, token=None):
header = auth_header + (f", Token={token}" if token else "")
return f"curl -sf 'http://server:8096{path}' -H 'X-Emby-Authorization:{header}'"
def api_post(path, json_file=None, token=None):
header = auth_header + (f", Token={token}" if token else "")
if json_file:
return f"curl -sf -X POST 'http://server:8096{path}' -d '@{json_file}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{header}'"
return f"curl -sf -X POST 'http://server:8096{path}' -H 'X-Emby-Authorization:{header}'"
def is_throttled():
return server.succeed("curl -s http://localhost:8080/api/v2/transfer/speedLimitsMode").strip() == "1"
def get_alt_dl_limit():
prefs = json.loads(server.succeed("curl -s http://localhost:8080/api/v2/app/preferences"))
return prefs["alt_dl_limit"]
def get_alt_up_limit():
prefs = json.loads(server.succeed("curl -s http://localhost:8080/api/v2/app/preferences"))
return prefs["alt_up_limit"]
def are_torrents_paused():
torrents = json.loads(server.succeed("curl -s 'http://localhost:8080/api/v2/torrents/info'"))
if not torrents:
return False
return all(t["state"].startswith("stopped") for t in torrents)
movie_id: str = ""
media_source_id: str = ""
start_all()
server.wait_for_unit("jellyfin.service")
server.wait_for_open_port(8096)
server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
server.wait_for_unit("qbittorrent.service")
server.wait_for_open_port(8080)
# Wait for qBittorrent WebUI to be responsive
server.wait_until_succeeds("curl -sf http://localhost:8080/api/v2/app/version", timeout=30)
with subtest("Complete Jellyfin setup wizard"):
server.wait_until_succeeds(api_get("/Startup/Configuration"))
server.succeed(api_get("/Startup/FirstUser"))
server.succeed(api_post("/Startup/Complete"))
with subtest("Authenticate and get token"):
auth_result = json.loads(server.succeed(api_post("/Users/AuthenticateByName", "${payloads.auth}")))
token = auth_result["AccessToken"]
user_id = auth_result["User"]["Id"]
with subtest("Create test video library"):
tempdir = server.succeed("mktemp -d -p /var/lib/jellyfin").strip()
server.succeed(f"chmod 755 '{tempdir}'")
server.succeed(f"ffmpeg -f lavfi -i testsrc2=duration=5 '{tempdir}/Test Movie (2024) [1080p].mkv'")
add_folder_query = urlencode({
"name": "Test Library",
"collectionType": "Movies",
"paths": tempdir,
"refreshLibrary": "true",
})
server.succeed(api_post(f"/Library/VirtualFolders?{add_folder_query}", "${payloads.empty}", token))
def is_library_ready(_):
folders = json.loads(server.succeed(api_get("/Library/VirtualFolders", token)))
return all(f.get("RefreshStatus") == "Idle" for f in folders)
retry(is_library_ready, timeout=60)
def get_movie(_):
global movie_id, media_source_id
items = json.loads(server.succeed(api_get(f"/Users/{user_id}/Items?IncludeItemTypes=Movie&Recursive=true", token)))
if items["TotalRecordCount"] > 0:
movie_id = items["Items"][0]["Id"]
item_info = json.loads(server.succeed(api_get(f"/Users/{user_id}/Items/{movie_id}", token)))
media_source_id = item_info["MediaSources"][0]["Id"]
return True
return False
retry(get_movie, timeout=60)
with subtest("Start monitor service"):
python = "${pkgs.python3.withPackages (ps: [ ps.requests ])}/bin/python"
monitor = "${../services/jellyfin-qbittorrent-monitor.py}"
server.succeed(f"""
systemd-run --unit=monitor-test \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
{python} {monitor}
""")
time.sleep(2)
assert not is_throttled(), "Should start unthrottled"
client_auth = 'MediaBrowser Client="External Client", DeviceId="external-9999", Device="ExternalDevice", Version="1.0"'
client_auth2 = 'MediaBrowser Client="External Client 2", DeviceId="external-8888", Device="ExternalDevice2", Version="1.0"'
server_ip = "192.168.1.1"
with subtest("Client authenticates from external network"):
auth_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}'"
client_auth_result = json.loads(client.succeed(auth_cmd))
client_token = client_auth_result["AccessToken"]
with subtest("Second client authenticates from external network"):
auth_cmd2 = f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth2}'"
client_auth_result2 = json.loads(client.succeed(auth_cmd2))
client_token2 = client_auth_result2["AccessToken"]
with subtest("External video playback triggers throttling"):
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-1",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(2)
assert is_throttled(), "Should throttle for external video playback"
with subtest("Pausing disables throttling"):
playback_progress = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-1",
"IsPaused": True,
"PositionTicks": 10000000,
}
progress_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Progress' -d '{json.dumps(playback_progress)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(progress_cmd)
time.sleep(2)
assert not is_throttled(), "Should unthrottle when paused"
with subtest("Resuming re-enables throttling"):
playback_progress["IsPaused"] = False
playback_progress["PositionTicks"] = 20000000
progress_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Progress' -d '{json.dumps(playback_progress)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(progress_cmd)
time.sleep(2)
assert is_throttled(), "Should re-throttle when resumed"
with subtest("Stopping playback disables throttling"):
playback_stop = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-1",
"PositionTicks": 50000000,
}
stop_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(playback_stop)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(stop_cmd)
time.sleep(2)
assert not is_throttled(), "Should unthrottle when playback stops"
with subtest("Single stream sets proportional alt speed limits"):
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-proportional",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(3)
assert is_throttled(), "Should be in alt speed mode during streaming"
dl_limit = get_alt_dl_limit()
ul_limit = get_alt_up_limit()
# Both upload and download should get remaining bandwidth (proportional)
assert dl_limit > 0, f"Download limit should be > 0, got {dl_limit}"
assert ul_limit == dl_limit, f"Upload limit ({ul_limit}) should equal download limit ({dl_limit})"
# Stop playback
playback_stop = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-proportional",
"PositionTicks": 50000000,
}
stop_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(playback_stop)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(stop_cmd)
time.sleep(3)
with subtest("Multiple streams reduce available bandwidth"):
# Start first stream
playback1 = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-multi-1",
"CanSeek": True,
"IsPaused": False,
}
start_cmd1 = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback1)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd1)
time.sleep(3)
single_dl_limit = get_alt_dl_limit()
# Start second stream with different client identity
playback2 = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-multi-2",
"CanSeek": True,
"IsPaused": False,
}
start_cmd2 = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback2)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth2}, Token={client_token2}'"
client.succeed(start_cmd2)
time.sleep(3)
dual_dl_limit = get_alt_dl_limit()
# Two streams should leave less bandwidth than one stream
assert dual_dl_limit < single_dl_limit, f"Two streams ({dual_dl_limit}) should have lower limit than one ({single_dl_limit})"
# Stop both streams
stop1 = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-multi-1",
"PositionTicks": 50000000,
}
stop_cmd1 = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(stop1)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(stop_cmd1)
stop2 = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-multi-2",
"PositionTicks": 50000000,
}
stop_cmd2 = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(stop2)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth2}, Token={client_token2}'"
client.succeed(stop_cmd2)
time.sleep(3)
with subtest("Budget exhaustion pauses all torrents"):
# Stop current monitor
server.succeed("systemctl stop monitor-test || true")
time.sleep(1)
# Add a dummy torrent so we can check pause state
server.succeed("curl -sf -X POST 'http://localhost:8080/api/v2/torrents/add' -d 'urls=magnet:?xt=urn:btih:0000000000000000000000000000000000000001%26dn=test-torrent'")
time.sleep(2)
# Start monitor with impossibly low budget
server.succeed(f"""
systemd-run --unit=monitor-exhaust \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=1000 \
--setenv=SERVICE_BUFFER=500 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
{python} {monitor}
""")
time.sleep(2)
# Start a stream - this will exceed the tiny budget
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-exhaust",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(3)
assert are_torrents_paused(), "Torrents should be paused when budget is exhausted"
with subtest("Recovery from pause restores unlimited"):
# Stop the stream
playback_stop = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-exhaust",
"PositionTicks": 50000000,
}
stop_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(playback_stop)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(stop_cmd)
time.sleep(3)
assert not is_throttled(), "Should return to unlimited after streams stop"
assert not are_torrents_paused(), "Torrents should be resumed after streams stop"
# Clean up: stop exhaust monitor, restart normal monitor
server.succeed("systemctl stop monitor-exhaust || true")
time.sleep(1)
server.succeed(f"""
systemd-run --unit=monitor-test \
--setenv=JELLYFIN_URL=http://localhost:8096 \
--setenv=JELLYFIN_API_KEY={token} \
--setenv=QBITTORRENT_URL=http://localhost:8080 \
--setenv=CHECK_INTERVAL=1 \
--setenv=STREAMING_START_DELAY=1 \
--setenv=STREAMING_STOP_DELAY=1 \
--setenv=TOTAL_BANDWIDTH_BUDGET=50000000 \
--setenv=SERVICE_BUFFER=2000000 \
--setenv=DEFAULT_STREAM_BITRATE=10000000 \
--setenv=MIN_TORRENT_SPEED=100 \
{python} {monitor}
""")
time.sleep(2)
with subtest("Local playback does NOT trigger throttling"):
local_auth = 'MediaBrowser Client="Local Client", DeviceId="local-1111", Device="LocalDevice", Version="1.0"'
local_auth_result = json.loads(server.succeed(
f"curl -sf -X POST 'http://localhost:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}'"
))
local_token = local_auth_result["AccessToken"]
local_playback = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-local",
"CanSeek": True,
"IsPaused": False,
}
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'")
time.sleep(2)
assert not is_throttled(), "Should NOT throttle for local playback"
local_playback["PositionTicks"] = 50000000
server.succeed(f"curl -sf -X POST 'http://localhost:8096/Sessions/Playing/Stopped' -d '{json.dumps(local_playback)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{local_auth}, Token={local_token}'")
# === SERVICE RESTART TESTS ===
with subtest("qBittorrent restart during throttled state re-applies throttling"):
# Start external playback to trigger throttling
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-restart-1",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(2)
assert is_throttled(), "Should be throttled before qBittorrent restart"
# Restart qBittorrent (this resets alt_speed to its config default - disabled)
server.succeed("systemctl restart qbittorrent.service")
server.wait_for_unit("qbittorrent.service")
server.wait_for_open_port(8080)
server.wait_until_succeeds("curl -sf http://localhost:8080/api/v2/app/version", timeout=30)
# qBittorrent restarted - alt_speed is now False (default on startup)
# The monitor should detect this and re-apply throttling
time.sleep(3) # Give monitor time to detect and re-apply
assert is_throttled(), "Monitor should re-apply throttling after qBittorrent restart"
# Stop playback to clean up
playback_stop = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-restart-1",
"PositionTicks": 50000000,
}
stop_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing/Stopped' -d '{json.dumps(playback_stop)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(stop_cmd)
time.sleep(2)
with subtest("qBittorrent restart during unthrottled state stays unthrottled"):
# Verify we're unthrottled (no active streams)
assert not is_throttled(), "Should be unthrottled before test"
# Restart qBittorrent
server.succeed("systemctl restart qbittorrent.service")
server.wait_for_unit("qbittorrent.service")
server.wait_for_open_port(8080)
server.wait_until_succeeds("curl -sf http://localhost:8080/api/v2/app/version", timeout=30)
# Give monitor time to check state
time.sleep(3)
assert not is_throttled(), "Should remain unthrottled after qBittorrent restart with no streams"
with subtest("Jellyfin restart during throttled state maintains throttling"):
# Start external playback to trigger throttling
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-restart-2",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(2)
assert is_throttled(), "Should be throttled before Jellyfin restart"
# Restart Jellyfin
server.succeed("systemctl restart jellyfin.service")
server.wait_for_unit("jellyfin.service")
server.wait_for_open_port(8096)
server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
# During Jellyfin restart, monitor can't reach Jellyfin
# After restart, sessions are cleared - monitor should eventually unthrottle
# But during the unavailability window, throttling should be maintained (fail-safe)
time.sleep(3)
# Re-authenticate (old token invalid after restart)
client_auth_result = json.loads(client.succeed(
f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}'"
))
client_token = client_auth_result["AccessToken"]
client_auth_result2 = json.loads(client.succeed(
f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth2}'"
))
client_token2 = client_auth_result2["AccessToken"]
# No active streams after Jellyfin restart, should eventually unthrottle
time.sleep(3)
assert not is_throttled(), "Should unthrottle after Jellyfin restart clears sessions"
with subtest("Monitor recovers after Jellyfin temporary unavailability"):
# Re-authenticate with fresh token
client_auth_result = json.loads(client.succeed(
f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}'"
))
client_token = client_auth_result["AccessToken"]
client_auth_result2 = json.loads(client.succeed(
f"curl -sf -X POST 'http://{server_ip}:8096/Users/AuthenticateByName' -d '@${payloads.auth}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth2}'"
))
client_token2 = client_auth_result2["AccessToken"]
# Start playback
playback_start = {
"ItemId": movie_id,
"MediaSourceId": media_source_id,
"PlaySessionId": "test-play-session-restart-3",
"CanSeek": True,
"IsPaused": False,
}
start_cmd = f"curl -sf -X POST 'http://{server_ip}:8096/Sessions/Playing' -d '{json.dumps(playback_start)}' -H 'Content-Type:application/json' -H 'X-Emby-Authorization:{client_auth}, Token={client_token}'"
client.succeed(start_cmd)
time.sleep(2)
assert is_throttled(), "Should be throttled"
# Stop Jellyfin briefly (simulating temporary unavailability)
server.succeed("systemctl stop jellyfin.service")
time.sleep(2)
# During unavailability, throttle state should be maintained (fail-safe)
assert is_throttled(), "Should maintain throttle during Jellyfin unavailability"
# Bring Jellyfin back
server.succeed("systemctl start jellyfin.service")
server.wait_for_unit("jellyfin.service")
server.wait_for_open_port(8096)
server.wait_until_succeeds("curl -sf http://localhost:8096/health | grep -q Healthy", timeout=60)
# After Jellyfin comes back, sessions are gone - should unthrottle
time.sleep(3)
assert not is_throttled(), "Should unthrottle after Jellyfin returns with no sessions"
'';
}
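The assertions above rely on two helpers, `is_throttled()` and `are_torrents_paused()`, defined earlier in the test (outside this excerpt). A minimal sketch of the decision logic such helpers would apply to qBittorrent Web API responses; the endpoint paths in the comments and the exact state names are assumptions, not taken from this repo:

```python
# Hypothetical sketch: the real helpers presumably query the qBittorrent Web API
# (GET /api/v2/transfer/speedLimitsMode and GET /api/v2/torrents/info) and then
# apply logic along these lines to the responses.

def is_throttled(speed_limits_mode: str) -> bool:
    # qBittorrent's speedLimitsMode endpoint returns "1" when the
    # alternative (throttled) speed limits are active, "0" otherwise.
    return speed_limits_mode.strip() == "1"

def are_torrents_paused(torrents: list) -> bool:
    # Accept both older ("paused*") and newer ("stopped*") qBittorrent state names.
    paused_states = {"pausedDL", "pausedUP", "stoppedDL", "stoppedUP"}
    return bool(torrents) and all(t["state"] in paused_states for t in torrents)
```

In the test these would be wrapped around `server.succeed("curl ...")` calls; the sketch keeps only the pure decision logic so it is easy to check in isolation.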

@@ -6,57 +6,52 @@
 ...
 }:
 let
+  testServiceConfigs = {
+    minecraft = {
+      server_name = "main";
+      parent_dir = "/var/lib/minecraft";
+    };
+    https = {
+      domain = "test.local";
+    };
+    ports = {
+      minecraft = 25565;
+    };
+    zpool_ssds = "";
+  };
   # Create pkgs with nix-minecraft overlay and unfree packages allowed
   testPkgs = import inputs.nixpkgs {
-    system = pkgs.system;
+    system = pkgs.stdenv.targetPlatform.system;
     config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [ "minecraft-server" ];
     overlays = [
       inputs.nix-minecraft.overlay
-      (import ../overlays.nix)
+      (import ../modules/overlays.nix)
     ];
   };
-  # Create a wrapper module that imports the actual minecraft service
-  minecraftService =
-    { config, ... }:
-    {
-      imports = [
-        (import ../services/minecraft.nix {
-          inherit lib config;
-          pkgs = testPkgs;
-          service_configs = {
-            minecraft = {
-              server_name = "main";
-              parent_dir = "/var/lib/minecraft";
-            };
-            https = {
-              domain = "test.local";
-            };
-            zpool_ssds = "";
-          };
-          username = "testuser";
-        })
-      ];
-      # Override nixpkgs config to prevent conflicts in test environment
-      nixpkgs.config = lib.mkForce {
-        allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [ "minecraft-server" ];
-      };
-    };
 in
 testPkgs.testers.runNixOSTest {
   name = "minecraft server startup test";
+  node.specialArgs = {
+    inherit inputs lib;
+    service_configs = testServiceConfigs;
+    username = "testuser";
+  };
   nodes.machine =
-    { ... }:
+    { lib, ... }:
     {
       imports = [
-        inputs.nix-minecraft.nixosModules.minecraft-servers
-        minecraftService
+        ../services/minecraft.nix
       ];
       # Enable caddy service (required by minecraft service)
       services.caddy.enable = true;
+      # Enable networking for the test (needed for minecraft mods to download mappings)
+      networking.dhcpcd.enable = true;
       # Disable the ZFS mount dependency service in test environment
       systemd.services."minecraft-server-main_mounts".enable = lib.mkForce false;
@@ -65,6 +60,10 @@ testPkgs.testers.runNixOSTest {
         wants = lib.mkForce [ ];
         after = lib.mkForce [ ];
         requires = lib.mkForce [ ];
+        serviceConfig = {
+          Nice = lib.mkForce 0;
+          LimitMEMLOCK = lib.mkForce "infinity";
+        };
       };
       # Test-specific overrides only - reduce memory for testing
@@ -85,9 +84,15 @@ testPkgs.testers.runNixOSTest {
     # Wait for minecraft service to be available
     machine.wait_for_unit("minecraft-server-main.service")
-    machine.sleep(20)
-    # Check that the service is active and not failed
-    machine.succeed("systemctl is-active minecraft-server-main.service")
+    # Wait up to 60 seconds for the server to complete startup
+    with machine.nested("Waiting for minecraft server startup completion"):
+        try:
+            machine.wait_until_succeeds(
+                "grep -Eq '\\[[0-9]+:[0-9]+:[0-9]+\\] \\[Server thread/INFO\\]: Done \\([0-9]+\\.[0-9]+s\\)! For help, type \"help\"' /var/lib/minecraft/main/logs/latest.log",
+                timeout=120
+            )
+        except Exception:
+            print(machine.succeed("cat /var/lib/minecraft/main/logs/latest.log"))
+            raise
   '';
 }

tests/ntfy-alerts.nix (new file, 174 lines)
@@ -0,0 +1,174 @@
{
  config,
  lib,
  pkgs,
  ...
}:
let
  testPkgs = pkgs.appendOverlays [ (import ../modules/overlays.nix) ];
in
testPkgs.testers.runNixOSTest {
  name = "ntfy-alerts";
  nodes.machine =
    { pkgs, ... }:
    {
      imports = [
        ../modules/ntfy-alerts.nix
      ];
      system.stateVersion = config.system.stateVersion;
      virtualisation.memorySize = 2048;
      environment.systemPackages = with pkgs; [
        curl
        jq
      ];
      # Create test topic file
      systemd.tmpfiles.rules = [
        "f /run/ntfy-test-topic 0644 root root - test-alerts"
      ];
      # Mock ntfy server that records POST requests
      systemd.services.mock-ntfy =
        let
          mockNtfyScript = pkgs.writeScript "mock-ntfy.py" ''
            import json
            import os
            from http.server import HTTPServer, BaseHTTPRequestHandler
            from datetime import datetime

            REQUESTS_FILE = "/tmp/ntfy-requests.json"

            class MockNtfy(BaseHTTPRequestHandler):
                def _respond(self, code=200, body=b"Ok"):
                    self.send_response(code)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body if isinstance(body, bytes) else body.encode())

                def do_GET(self):
                    self._respond()

                def do_POST(self):
                    content_length = int(self.headers.get("Content-Length", 0))
                    body = self.rfile.read(content_length).decode() if content_length > 0 else ""
                    request_data = {
                        "timestamp": datetime.now().isoformat(),
                        "path": self.path,
                        "headers": dict(self.headers),
                        "body": body,
                    }
                    # Load existing requests or start new list
                    requests = []
                    if os.path.exists(REQUESTS_FILE):
                        try:
                            with open(REQUESTS_FILE, "r") as f:
                                requests = json.load(f)
                        except:
                            requests = []
                    requests.append(request_data)
                    with open(REQUESTS_FILE, "w") as f:
                        json.dump(requests, f, indent=2)
                    self._respond()

                def log_message(self, format, *args):
                    pass

            HTTPServer(("0.0.0.0", 8080), MockNtfy).serve_forever()
          '';
        in
        {
          description = "Mock ntfy server";
          wantedBy = [ "multi-user.target" ];
          before = [ "ntfy-alert@test-fail.service" ];
          serviceConfig = {
            ExecStart = "${pkgs.python3}/bin/python3 ${mockNtfyScript}";
            Type = "simple";
          };
        };
      # Test service that will fail
      systemd.services.test-fail = {
        description = "Test service that fails";
        serviceConfig = {
          Type = "oneshot";
          ExecStart = "${pkgs.coreutils}/bin/false";
        };
      };
      # Configure ntfy-alerts to use mock server
      services.ntfyAlerts = {
        enable = true;
        serverUrl = "http://localhost:8080";
        topicFile = "/run/ntfy-test-topic";
      };
    };
  testScript = ''
    import json
    import time

    start_all()
    # Wait for mock ntfy server to be ready
    machine.wait_for_unit("mock-ntfy.service")
    machine.wait_until_succeeds("curl -sf http://localhost:8080/", timeout=30)
    # Verify the ntfy-alert@ template service exists
    machine.succeed("systemctl list-unit-files | grep ntfy-alert@")
    # Verify the global OnFailure drop-in is configured
    machine.succeed("cat /etc/systemd/system/service.d/onfailure.conf | grep -q 'OnFailure=ntfy-alert@%p.service'")
    # Trigger the test-fail service
    machine.succeed("systemctl start test-fail.service || true")
    # Wait a moment for the failure notification to be sent
    time.sleep(2)
    # Verify the ntfy-alert@test-fail service ran
    machine.succeed("systemctl is-active ntfy-alert@test-fail.service || systemctl is-failed ntfy-alert@test-fail.service || true")
    # Check that the mock server received a POST request
    machine.wait_until_succeeds("test -f /tmp/ntfy-requests.json", timeout=30)
    # Verify the request content
    result = machine.succeed("cat /tmp/ntfy-requests.json")
    requests = json.loads(result)
    assert len(requests) >= 1, f"Expected at least 1 request, got {len(requests)}"
    # Check the first request
    req = requests[0]
    assert "/test-alerts" in req["path"], f"Expected path to contain /test-alerts, got {req['path']}"
    assert "Title" in req["headers"], "Expected Title header"
    assert "test-fail" in req["headers"]["Title"], f"Expected Title to contain 'test-fail', got {req['headers']['Title']}"
    assert req["headers"]["Priority"] == "high", f"Expected Priority 'high', got {req['headers'].get('Priority')}"
    assert req["headers"]["Tags"] == "warning", f"Expected Tags 'warning', got {req['headers'].get('Tags')}"
    print(f"Received notification: Title={req['headers']['Title']}, Body={req['body'][:100]}...")
    # Idempotency test: trigger failure again
    machine.succeed("rm /tmp/ntfy-requests.json")
    machine.succeed("systemctl reset-failed test-fail.service || true")
    machine.succeed("systemctl start test-fail.service || true")
    time.sleep(2)
    # Verify another notification was sent
    machine.wait_until_succeeds("test -f /tmp/ntfy-requests.json", timeout=30)
    result = machine.succeed("cat /tmp/ntfy-requests.json")
    requests = json.loads(result)
    assert len(requests) >= 1, f"Expected at least 1 request after second failure, got {len(requests)}"
    print("All tests passed!")
  '';
}
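The assertions above pin down the shape of the notification the `ntfy-alert@` template publishes: a POST to `<server>/<topic>` with `Title`, `Priority`, and `Tags` headers. A sketch of how such a request could be composed; the exact title and body format is an assumption (the test only checks that the title contains the unit name):

```python
def build_ntfy_notification(server_url: str, topic: str, unit: str, status_output: str) -> dict:
    """Compose an ntfy publish request for a failed systemd unit (hypothetical format)."""
    return {
        # ntfy publishes via a plain POST to <server>/<topic>
        "url": f"{server_url}/{topic}",
        "headers": {
            "Title": f"Service failed: {unit}",  # assumed wording; test only requires the unit name
            "Priority": "high",
            "Tags": "warning",
        },
        # e.g. the output of `systemctl status <unit>` captured by the alert unit
        "body": status_output,
    }
```

The real module presumably shells out to `curl` with `-H` flags for these headers; keeping the request construction separate like this makes it easy to assert on without a live server.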

@@ -11,4 +11,17 @@ in
   zfsTest = handleTest ./zfs.nix;
   testTest = handleTest ./testTest.nix;
   minecraftTest = handleTest ./minecraft.nix;
+  jellyfinQbittorrentMonitorTest = handleTest ./jellyfin-qbittorrent-monitor.nix;
+  filePermsTest = handleTest ./file-perms.nix;
+  # fail2ban tests
+  fail2banSshTest = handleTest ./fail2ban-ssh.nix;
+  fail2banCaddyTest = handleTest ./fail2ban-caddy.nix;
+  fail2banGiteaTest = handleTest ./fail2ban-gitea.nix;
+  fail2banVaultwardenTest = handleTest ./fail2ban-vaultwarden.nix;
+  fail2banImmichTest = handleTest ./fail2ban-immich.nix;
+  fail2banJellyfinTest = handleTest ./fail2ban-jellyfin.nix;
+  # ntfy alerts test
+  ntfyAlertsTest = handleTest ./ntfy-alerts.nix;
 }

@@ -7,58 +7,78 @@
 }:
 let
   # Create pkgs with ensureZfsMounts overlay
-  testPkgs = import inputs.nixpkgs {
-    system = pkgs.system;
-    overlays = [ (import ../overlays.nix) ];
-  };
+  testPkgs = pkgs.appendOverlays [ (import ../modules/overlays.nix) ];
 in
 testPkgs.testers.runNixOSTest {
-  name = "zfs folder dependency and mounting test";
+  name = "zfs test";
   nodes.machine =
     { pkgs, ... }:
     {
       imports = [
-        (lib.serviceMountDeps "foobar" [ "/mnt/foobar_data" ])
-        (lib.serviceMountDeps "foobarSadge" [
-          "/mnt/foobar_data"
-          "/mnt/does_not_exist_lol"
-        ])
+        # Test valid paths within zpool
+        (lib.serviceMountWithZpool "test-service" "rpool" [ "/mnt/rpool_data" ])
+        # Test service with paths outside zpool (should fail assertion)
+        (lib.serviceMountWithZpool "invalid-service" "rpool2" [ "/mnt/rpool_data" ])
+        # Test multi-command logic: service with multiple serviceMountWithZpool calls
+        (lib.serviceMountWithZpool "multi-service" "rpool" [ "/mnt/rpool_data" ])
+        (lib.serviceMountWithZpool "multi-service" "rpool2" [ "/mnt/rpool2_data" ])
+        # Test multi-command logic: service with multiple serviceMountWithZpool calls
+        # BUT this one should fail as `/mnt/rpool_moar_data` is not on rpool2
+        (lib.serviceMountWithZpool "multi-service-fail" "rpool" [ "/mnt/rpool_data" ])
+        (lib.serviceMountWithZpool "multi-service-fail" "rpool2" [ "/mnt/rpool_moar_data" ])
       ];
       virtualisation = {
         emptyDiskImages = [
           4096
+          4096
         ];
-        useBootLoader = true;
-        useEFIBoot = true;
+        # Add this to avoid ZFS hanging issues
+        additionalPaths = [ pkgs.zfs ];
       };
-      boot.loader.systemd-boot.enable = true;
-      boot.loader.timeout = 0;
-      boot.loader.efi.canTouchEfiVariables = true;
       networking.hostId = "deadbeef";
       boot.kernelPackages = config.boot.kernelPackages;
       boot.zfs.package = config.boot.zfs.package;
       boot.supportedFilesystems = [ "zfs" ];
-      environment.systemPackages = [
-        pkgs.parted
-        pkgs.ensureZfsMounts
+      environment.systemPackages = with pkgs; [
+        parted
+        ensureZfsMounts
       ];
-      systemd.services.foobar = {
+      systemd.services."test-service" = {
         serviceConfig = {
           Type = "oneshot";
           RemainAfterExit = true;
-          ExecStart = "${lib.getExe pkgs.bash} -c \"true\"";
+          ExecStart = lib.getExe pkgs.bash;
         };
       };
-      systemd.services.foobarSadge = {
+      systemd.services."invalid-service" = {
         serviceConfig = {
           Type = "oneshot";
           RemainAfterExit = true;
-          ExecStart = "${lib.getExe pkgs.bash} -c \"true\"";
+          ExecStart = lib.getExe pkgs.bash;
+        };
+      };
+      systemd.services."multi-service" = {
+        serviceConfig = {
+          Type = "oneshot";
+          RemainAfterExit = true;
+          ExecStart = lib.getExe pkgs.bash;
+        };
+      };
+      systemd.services."multi-service-fail" = {
+        serviceConfig = {
+          Type = "oneshot";
+          RemainAfterExit = true;
+          ExecStart = lib.getExe pkgs.bash;
         };
       };
     };
@@ -66,37 +86,68 @@ testPkgs.testers.runNixOSTest {
   testScript = ''
     start_all()
     machine.wait_for_unit("multi-user.target")
+    # Setup ZFS pool
     machine.succeed(
       "parted --script /dev/vdb mklabel msdos",
       "parted --script /dev/vdb -- mkpart primary 1024M -1s",
+      "zpool create rpool /dev/vdb1"
     )
-    machine.fail("zfsEnsureMounted")
-    machine.fail("zfsEnsureMounted /mnt/test_mountpoint")
-    machine.succeed("zpool create rpool /dev/vdb1")
-    machine.succeed("zfs create -o mountpoint=/mnt/test_mountpoint rpool/test")
-    machine.succeed("zfsEnsureMounted /mnt/test_mountpoint")
-    machine.fail("zfsEnsureMounted /mnt/does_not_exist_lol")
-    machine.fail("zfsEnsureMounted /mnt/test_mountpoint /mnt/does_not_exist_lol")
-    machine.succeed("zfs create -o mountpoint=/mnt/test_mountpoint_dos rpool/test2")
-    machine.succeed("zfsEnsureMounted /mnt/test_mountpoint /mnt/test_mountpoint_dos")
-    machine.succeed("zfs create -o mountpoint='/mnt/test path with spaces' rpool/test3")
-    machine.succeed("zfsEnsureMounted '/mnt/test path with spaces'")
-    # machine.succeed("zfsEnsureMounted /mnt/test\\ escaped\\ spaces") # TODO! fix escaped spaces
-    machine.succeed("zfsEnsureMounted /mnt/test_mountpoint '/mnt/test path with spaces' /mnt/test_mountpoint_dos")
-    machine.succeed("zfs create -o mountpoint=/mnt/foobar_data rpool/foobar")
-    machine.succeed("systemctl start foobar")
-    machine.fail("systemctl start foobarSadge")
+    # Setup ZFS pool 2
+    machine.succeed(
+      "parted --script /dev/vdc mklabel msdos",
+      "parted --script /dev/vdc -- mkpart primary 1024M -1s",
+      "zpool create rpool2 /dev/vdc1"
+    )
+    machine.succeed("zfs create -o mountpoint=/mnt/rpool_data rpool/data")
+    machine.succeed("zfs create -o mountpoint=/mnt/rpool2_data rpool2/data")
+    machine.succeed("zfs create -o mountpoint=/mnt/rpool_moar_data rpool/moar_data")
+    # Test that valid service starts successfully
+    machine.succeed("systemctl start test-service")
+    # Manually test our validation logic by checking the debug output
+    zfs_output = machine.succeed("zfs list -H -o name,mountpoint")
+    print("ZFS LIST OUTPUT:")
+    print(zfs_output)
+    dataset = machine.succeed("zfs list -H -o name,mountpoint | awk '/\\/mnt\\/rpool_data/ { print $1 }'")
+    print("DATASET FOR /mnt/rpool_data:")
+    print(dataset)
+    # Test that invalid-service mount service fails validation
+    machine.fail("systemctl start invalid-service.service")
+    # Check the journal for our detailed validation error message
+    journal_output = machine.succeed("journalctl -u invalid-service-mounts.service --no-pager")
+    print("JOURNAL OUTPUT:")
+    print(journal_output)
+    # Verify our validation error is in the journal using Python string matching
+    assert "ERROR: ZFS pool mismatch for /mnt/rpool_data" in journal_output
+    assert "Expected pool: rpool2" in journal_output
+    assert "Actual pool: rpool" in journal_output
+    # Test that multi-service-fail mount service fails validation
+    machine.fail("systemctl start multi-service-fail.service")
+    # Check the journal for our detailed validation error message
+    journal_output = machine.succeed("journalctl -u multi-service-fail-mounts.service --no-pager")
+    print("JOURNAL OUTPUT:")
+    print(journal_output)
+    # Verify our validation error is in the journal using Python string matching
+    assert "ERROR: ZFS pool mismatch for /mnt/rpool_moar_data" in journal_output, "no zfs pool mismatch found (1)"
+    assert "Expected pool: rpool2" in journal_output, "no zfs pool mismatch found (2)"
+    assert "Actual pool: rpool" in journal_output, "no zfs pool mismatch found (3)"
+    machine.succeed("systemctl start multi-service")
+    machine.succeed("systemctl is-active multi-service-mounts.service")
   '';
 }
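The journal assertions above check for a pool-mismatch error emitted by the generated `*-mounts` unit. Under the assumption that the validation parses `zfs list -H -o name,mountpoint` output (as the test's own `awk` probe suggests), the check can be sketched as follows; the function names and exact error wording beyond the asserted strings are hypothetical:

```python
def dataset_for_mountpoint(zfs_list: str, mountpoint: str):
    # `zfs list -H` emits tab-separated <dataset-name>\t<mountpoint> lines
    for line in zfs_list.splitlines():
        name, mp = line.split("\t")
        if mp == mountpoint:
            return name
    return None

def check_pool(zfs_list: str, expected_pool: str, mountpoint: str):
    """Return an error string on mismatch, or None if the mountpoint lives on expected_pool."""
    dataset = dataset_for_mountpoint(zfs_list, mountpoint)
    if dataset is None:
        return f"ERROR: no ZFS dataset mounted at {mountpoint}"
    actual_pool = dataset.split("/")[0]  # the pool is the first path component of the dataset name
    if actual_pool != expected_pool:
        return (f"ERROR: ZFS pool mismatch for {mountpoint}\n"
                f"  Expected pool: {expected_pool}\n"
                f"  Actual pool: {actual_pool}")
    return None
```

With the datasets created in the test, `check_pool(listing, "rpool2", "/mnt/rpool_data")` would produce exactly the kind of mismatch message the journal assertions look for.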

usb-secrets/setup-usb.sh (new executable file, 44 lines)
@@ -0,0 +1,44 @@
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p parted dosfstools
set -euo pipefail

SCRIPT_DIR="$(dirname "$(realpath "$0")")"
# Default to empty so the usage message prints instead of tripping `set -u`
USB_DEVICE="${1:-}"

if [[ -z "$USB_DEVICE" ]]; then
  echo "Usage: $0 <usb_device>"
  echo "Example: $0 /dev/sdb"
  exit 1
fi

if [[ ! -b "$USB_DEVICE" ]]; then
  echo "Error: $USB_DEVICE is not a block device"
  exit 1
fi

# The key sits next to this script (usb-secrets/usb-secrets-key)
if [[ ! -f "$SCRIPT_DIR/usb-secrets-key" ]]; then
  echo "Error: usb-secrets-key not found at $SCRIPT_DIR/usb-secrets-key"
  exit 1
fi

echo "WARNING: This will completely wipe $USB_DEVICE"
echo "Press Ctrl+C to abort, or Enter to continue..."
read -r

echo "Creating partition and formatting as FAT32..."
parted -s "$USB_DEVICE" mklabel msdos
parted -s "$USB_DEVICE" mkpart primary fat32 0% 100%
parted -s "$USB_DEVICE" set 1 boot on

USB_PARTITION="${USB_DEVICE}1"
mkfs.fat -F 32 -n "SECRETS" "$USB_PARTITION"

echo "Copying key to USB..."
MOUNT_POINT=$(mktemp -d)
trap 'umount "$MOUNT_POINT" 2>/dev/null || true; rmdir "$MOUNT_POINT"' EXIT
mount "$USB_PARTITION" "$MOUNT_POINT"
cp "$SCRIPT_DIR/usb-secrets-key" "$MOUNT_POINT/"
umount "$MOUNT_POINT"

echo "USB setup complete! Label: SECRETS"
echo "Create multiple backup USB keys for redundancy."

usb-secrets/usb-secrets-key (binary file, not shown)