• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • I think you already have a kill-switch (of sorts) in place with the two-Wireguard-container setup, since your clients lose internet access (except to the local network, since there’s a separate route for that on the Wireguard “server” container) if any of the following happens:

    • “Client” container is spun down
    • The Wireguard interface inside the “client” container is spun down (you can try this out by execing wg-quick down wg0 inside the container)
    • or even if the interface is up but the VPN connection is down (try changing the endpoint IP to a random one instead of the correct one provided by your VPN service provider)

    I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself got restarted/updated.
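
    If you want to sanity-check that kill-switch behavior yourself, a quick test could look something like this (sketch only; “wireguard-client” is just a placeholder for whatever your “client” container is called):

    # bring the tunnel down inside the "client" container
    docker exec -it wireguard-client wg-quick down wg0
    # then, from a device connected to the "server" container, this should time out
    curl --max-time 5 https://ifconfig.me
    # bring it back up afterwards
    docker exec -it wireguard-client wg-quick up wg0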

    But anyway, I went ahead and messed around on a VPS with the Wireguard+Gluetun approach and got it working. I’m using the latest versions of the linuxserver.io Wireguard container and Gluetun at the time of writing. There are two things missing in the Gluetun firewall configuration you posted:

    • A MASQUERADE rule on the tunnel, meaning the tun0 interface.
    • Gluetun is configured to drop all FORWARD packets (filter table) by default. You’ll have to change that chain policy to ACCEPT. Again, I’m not a networking expert, so I’m not sure whether this compromises the kill-switch in any way that’s relevant to the desired setup/behavior. You could potentially set a more restrictive rule that only allows traffic coming in from <wireguard_container_IP> (a rough sketch follows this list), but I’ll leave that up to you. You’ll also need to figure out the best way to persist the rules through container restarts (one option is sketched after the iptables commands further down).
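
    For reference, that more restrictive variant could look roughly like this inside the Gluetun container (untested sketch; 172.22.0.5 is the “server” container’s address from the compose file below, and the default DROP policy on the FORWARD chain stays in place):

    # only forward traffic for the Wireguard "server" container
    iptables -A FORWARD -s 172.22.0.5 -j ACCEPT
    # and let the replies back through
    iptables -A FORWARD -d 172.22.0.5 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT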

    First, here’s the docker compose setup I used:

    networks:
      wghomenet:
        name: wghomenet
        ipam:
          config:
            - subnet: 172.22.0.0/24
              gateway: 172.22.0.1
    
    services:
      gluetun:
        image: qmcgaw/gluetun
        container_name: gluetun
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun:/dev/net/tun
        ports:
          - 8888:8888/tcp # HTTP proxy
          - 8388:8388/tcp # Shadowsocks
          - 8388:8388/udp # Shadowsocks
        volumes:
          - ./config:/gluetun
        environment:
          - VPN_SERVICE_PROVIDER=<your stuff here>
          - VPN_TYPE=wireguard
          # - WIREGUARD_PRIVATE_KEY=<your stuff here>
          # - WIREGUARD_PRESHARED_KEY=<your stuff here>
          # - WIREGUARD_ADDRESSES=<your stuff here>
          # - SERVER_COUNTRIES=<your stuff here>
          # Timezone for accurate log times
          - TZ=<your stuff here>
          # Server list updater
          # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
          - UPDATER_PERIOD=24h
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
        networks:
          wghomenet:
            ipv4_address: 172.22.0.101
    
      wireguard-server:
        image: lscr.io/linuxserver/wireguard
        container_name: wireguard-server
        cap_add:
          - NET_ADMIN
        environment:
          - PUID=1000
          - PGID=1001
          - TZ=<your stuff here>
          - INTERNAL_SUBNET=10.13.13.0
          - PEERS=chromebook
        volumes:
          - ./config/wg-server:/config
          - /lib/modules:/lib/modules #optional
        restart: always
        ports:
          - 51820:51820/udp
        networks:
          wghomenet:
            ipv4_address: 172.22.0.5
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
    

    You already have your “server” container properly configured. Now for Gluetun: I exec into the container with docker exec -it gluetun sh. Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE. And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT.

    Note on the last command: in my case I used iptables-legacy because all the existing rules were already defined there (iptables gives you a warning when that’s the case), but your container may differ. I saw different behavior on the testing container I spun up on the VPS compared to the one I have running on my homelab.
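
    Since those rules don’t survive a container restart, one option (just a sketch, not the only way) is to keep them in a small script on the host and re-run it whenever the Gluetun container has been recreated or updated:

    #!/bin/sh
    # Re-apply the two extra rules inside the Gluetun container.
    # "gluetun" is the container name from the compose file above; adjust if yours differs.
    # Swap in iptables-legacy if that's where your container's rules live (see the note above).
    # Only run this against a freshly (re)created container, otherwise the MASQUERADE rule gets appended twice.
    docker exec gluetun iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
    docker exec gluetun iptables -t filter -P FORWARD ACCEPT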

    Good luck, and let me know if you run into any issues!

    EDIT: The rules look like this afterwards:

    Output of iptables-legacy -vL -t filter:

    Chain INPUT (policy DROP 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    10710  788K ACCEPT     all  --  lo     any     anywhere             anywhere
    16698   14M ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
        1    40 ACCEPT     all  --  eth0   any     anywhere             172.22.0.0/24
    
    # note the ACCEPT policy here
    Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain OUTPUT (policy DROP 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    10710  788K ACCEPT     all  --  any    lo      anywhere             anywhere
    13394 1518K ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
        0     0 ACCEPT     all  --  any    eth0    dac4b9c06987         172.22.0.0/24
        1   176 ACCEPT     udp  --  any    eth0    anywhere             connected-by.global-layer.com  udp dpt:1637
      916 55072 ACCEPT     all  --  any    tun0    anywhere             anywhere
    

    And the output of iptables -vL -t nat:

    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DOCKER_OUTPUT  all  --  any    any     anywhere             127.0.0.11
    
    # note the MASQUERADE rule here
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DOCKER_POSTROUTING  all  --  any    any     anywhere             127.0.0.11
      312 18936 MASQUERADE  all  --  any    tun+    anywhere             anywhere
    
    Chain DOCKER_OUTPUT (1 references)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DNAT       tcp  --  any    any     anywhere             127.0.0.11           tcp dpt:domain to:127.0.0.11:39905
        0     0 DNAT       udp  --  any    any     anywhere             127.0.0.11           udp dpt:domain to:127.0.0.11:56734
    
    Chain DOCKER_POSTROUTING (1 references)
     pkts bytes target     prot opt in     out     source               destination
        0     0 SNAT       tcp  --  any    any     127.0.0.11           anywhere             tcp spt:39905 to::53
        0     0 SNAT       udp  --  any    any     127.0.0.11           anywhere             udp spt:56734 to::53
    
    

  • Gluetun likely doesn’t have the proper firewall rules in place to enable this sort of traffic routing, simply because it’s made for another use case (using the container’s network stack directly with network_mode: "service:gluetun").

    First, try to get this setup working with two vanilla Wireguard containers (instead of Wireguard + Gluetun). If that works, you’ll know that your Wireguard “server” container is set up properly. Then replace the second container, the one acting as a VPN client, with Gluetun and run tcpdump again. You likely need to add a POSTROUTING MASQUERADE rule in the nat table.
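
    For the tcpdump part, something along these lines should show whether the forwarded packets actually leave through the tunnel (sketch only; this assumes tcpdump is available inside the client container, and the tunnel interface is tun0 in Gluetun or wg0 in a plain Wireguard container):

    # traffic arriving from the docker network (the "server" container)
    tcpdump -ni eth0 udp or icmp
    # traffic leaving through the VPN tunnel
    tcpdump -ni tun0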

    Here’s my own working setup for reference.

    Wireguard “server” container:

    [Interface]
    Address = <address>
    ListenPort = 51820
    PrivateKey = <privateKey>
    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostUp = wg set wg0 fwmark 51820
    PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.101 table 51820
    PostUp = ip -4 rule add not fwmark 51820 table 51820
    PostUp = ip -4 rule add table main suppress_prefixlength 0
    PostUp = ip route add 192.168.16.0/24 via 172.22.0.1
    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del 192.168.16.0/24 via 172.22.0.1
    
    #peer configurations (clients) go here
    

    and the Wireguard VPN client that I route traffic through:

    # Based on my VPN provider's configuration + additional firewall rules to route traffic correctly
    [Interface]
    PrivateKey = <key>
    Address = <address>
    DNS = 192.168.16.81 # local Adguard
    PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE #Route traffic coming in from outside the container (host/other container)
    PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
    
    [Peer]
    PublicKey = <key>
    AllowedIPs = 0.0.0.0/0
    Endpoint = <endpoint_IP>:51820
    

    Note the NAT MASQUERADE rule.
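
    If you want to confirm that rule is actually active once the interface is up, something like this inside the client container should list it (sketch; the output will also include Docker’s own rules):

    iptables -t nat -S POSTROUTING
    # expected to include: -A POSTROUTING -o wg+ -j MASQUERADE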











  • Well yes, any E-Ink device should be able to open a PDF, but PadMu gives you the ability to sync two devices so you can place them next to each other and display two pages at once. I think it has additional features specifically for working with sheet music, like an infra-red sensor for turning pages by waving your hands in front of the device. I know the Gvido has that (Edit: But the PadMu actually doesn’t; it’s all software enhancements and the dual display mode).

    This review showcases the side-by-side display (double mode) feature at around 4:20. Can Onyx devices do that? I haven’t checked, but my guess is no.






  • Disco Elysium was full of such moments for me. Here’s one:

    You spend a lot of time in the game basically talking to yourself and your inner voices, and one of those voices is Volition. If you put enough points into it, it’ll chime in when you’re having an identity crisis or struggling to keep yourself together, and it’ll try to cheer you up and keep you going. At the end of Day 1, you, an amnesiac cop, stand on a balcony in an impoverished district, reflecting on the day’s events and trying to make sense of the reality you’ve woken up into with barely any of your memories intact. If you pass a Volition check, it’ll say the following line:

    “No. This is somewhere to be. This is all you have, but it’s still something. Streets and sodium lights. The sky, the world. You’re still alive.”

    This line in combination with the somewhat retro Euro setting, the faint lighting, and the sombre-yet-somewhat-upbeat music was very powerful. The image it painted was quite relatable for me. I just sat there for a minute staring at the scene and soaking it all in. Even though this is a predominantly text-based game with barely any cinematics/animations, I felt a level of immersion I had rarely, if ever, experienced before.

    Oh, look at that. Someone actually made a Volition compilation. 😀 This video will give you a better idea of what I’m describing (minor spoiler warning!): https://www.youtube.com/watch?v=ENSAbyGlij0




  • In response to your update: try specifying, in the docker compose file, the user that’s supposed to own the mapped directories. Then make sure the UID and GID you use match an existing user on the new system you’re testing the backup on.

    First, you need to get the ID of the user you want to run the container as. For a user called foo, run id foo and note down the UID and GID.
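
    For example (the actual numbers and group list will differ on your system):

    $ id foo
    uid=1000(foo) gid=1000(foo) groups=1000(foo),27(sudo)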

    Then in your compose file, modify the db_recipes service definition and set the UID and GID of the user that should own the mapped volumes:

      db_recipes:
        restart: always
        image: postgres:15-alpine
        user: "1000:1000" #Replace this with the corresponding UID and GID of your user
        volumes:
          - ./postgresql:/var/lib/postgresql/data
        env_file:
          - ./.env
    

    Recreate the container using docker compose up -d (don’t just restart it; you need to load the new config from the compose file). Then inspect the postgresql directory using ls -l to check whether it’s actually owned by the user with UID 1000 and the group with GID 1000. This should solve the issue you’re having with the backup program: it’s probably unable to copy that particular directory because the directory is owned by root:root and you’re not running the backup as root (don’t do that; it would sidestep the real problem rather than help you address it).
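
    Roughly, that check looks like this (the service name comes from the compose snippet above; ls -ln shows numeric IDs, which is handy when the UID has no matching username on the machine):

    docker compose up -d db_recipes
    ls -lnd ./postgresql
    # the owner/group columns should now show 1000 1000 (or whatever you set under "user:")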

    Now, when it comes to copying this to another machine: as already mentioned, you could use something that preserves permissions, like rsync, but for learning purposes I’d do it manually as you did before, even if that risks messing things up again. On the new machine, repeat this process: find the UID and GID of the current non-root user (or whatever user you want to run your containers as), make sure that UID and GID are set in the compose files, then inspect the directories to confirm they have the correct ownership. If the compose file isn’t honoring the user setting, or if the ownership doesn’t match the UID and GID you set for whatever reason, you can also use chown -R UID:GID ./postgresql to change ownership (replace UID:GID with the actual IDs), but that might get overwritten if you don’t also specify it properly in the compose file, so only do it for testing purposes.

    Edit: I also highly recommend using the CLI (terminal) instead of a GUI for this sort of thing. In my experience, GUIs aren’t always designed to give you all the information you need and can actually make things more difficult.