Tag: technology

  • Part 2 – FoundryVTT, Google Cloud, and Ansible


    Now for the fun part: writing Ansible to manage it all.


    Provisioning

    The first playbook should create the project and instance, then do some simple configuration. Part of this is done on localhost and the rest on the remote host; using one file with two plays in it allows me to add the new instance to the inventory for the second play.
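
    The playbooks in this post reference a number of variables (project_id, machine_type, and so on) without defining them inline. As a reference, a hypothetical vars file with placeholder values, names inferred from the tasks below, might look like this:

    YAML
    # group_vars/all.yml -- hypothetical; every value here is a placeholder
    project_id: my-foundry-project
    instance_name: foundry-vtt
    machine_type: e2-micro
    zone: us-central1-a
    region: us-central1
    image_family: projects/debian-cloud/global/images/family/debian-12
    network_tag: foundry
    ansible_user: ansible
    ansible_ssh_private_key_file: ~/.ssh/gcp_foundry
    billing_account: XXXXXX-XXXXXX-XXXXXX
    budget_name: foundry-budget
    budget_limit: 10
    topic_name: budget-alerts
    domain_name: foundry.example.com
    admin_email: admin@example.com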

    These tasks require the shell module to run the gcloud commands that create the project and enable the Compute Engine API.

    YAML
    ---
    - name: 01 - Provision Project and e2-micro Instance
      hosts: localhost
      connection: local
      tasks:
        - name: Ensure GCP Project exists
          ansible.builtin.shell: |
            gcloud projects describe {{ project_id }} || gcloud projects create {{ project_id }}
          register: project_creation
          changed_when: "'Created' in project_creation.stderr"

        - name: Enable Compute Engine API
          ansible.builtin.shell: |
            gcloud services enable compute.googleapis.com --project={{ project_id }}
          changed_when: false

    We will need an SSH key to connect to the instance once created. Since I use separate SSH keys for each host, I'm going to go ahead and create one here; I'll add the passphrase later manually.

        - name: Ensure local .ssh directory exists
          ansible.builtin.file:
            path: ~/.ssh
            state: directory
            mode: '0700'
    
        - name: Check if local SSH key already exists
          ansible.builtin.stat:
            path: "{{ ansible_ssh_private_key_file }}"
          register: _ssh_key_file
    
        - name: Generate SSH Key for GCP (Passphrase-less)
          ansible.builtin.shell: |
            if [ ! -f {{ ansible_ssh_private_key_file }} ]; then
              ssh-keygen -t ed25519 -f {{ ansible_ssh_private_key_file }} -C "{{ ansible_user }}" -N ""
            fi
          when: _ssh_key_file.stat is defined and not _ssh_key_file.stat.exists
    
        - name: Get current GCP project metadata
          ansible.builtin.shell: |
            gcloud compute project-info describe --project={{ project_id }} --format="value(commonInstanceMetadata.items.ssh-keys)"
          register: _gcp_metadata
          changed_when: false
    
        - name: Upload SSH Key to GCP Project Metadata
          ansible.builtin.shell: |
            PUB_KEY=$(cat {{ ansible_ssh_private_key_file }}.pub)
            gcloud compute project-info add-metadata --project={{ project_id }} \
              --metadata=ssh-keys="{{ ansible_user }}:$PUB_KEY"
          when: lookup('ansible.builtin.file', ansible_ssh_private_key_file + '.pub') not in _gcp_metadata.stdout
          changed_when: true

    Then we have to create the instance; for this we can use the google.cloud collection. I'm printing out the IP, as I will add that to my domain. This is also where we add the instance to the inventory for the next play.

    YAML
        - name: Create Foundry Compute Instance
          google.cloud.gcp_compute_instance:
            name: "{{ instance_name }}"
            machine_type: "{{ machine_type }}"
            zone: "{{ zone }}"
            project: "{{ project_id }}"
            auth_kind: application
            disks:
              - auto_delete: true
                boot: true
                initialize_params:
                  source_image: "{{ image_family }}"
                  disk_size_gb: 30
            network_interfaces:
              - access_configs:
                  - name: "External NAT"
                    type: "ONE_TO_ONE_NAT"
            tags:
              items:
                - "{{ network_tag }}"
            state: present
          register: instance  # This captures the returned object

        - name: Inspect the instance information
          ansible.builtin.debug:
            var: instance
            verbosity: 1

        - name: Add new instance to inventory
          ansible.builtin.add_host:
            name: "{{ instance.networkInterfaces[0].accessConfigs[0].natIP }}"
            groups: foundry_servers

        - name: Print the public IP address
          ansible.builtin.debug:
            msg: "The public IP address is {{ instance.networkInterfaces[0].accessConfigs[0].natIP }}"

    Because these are micro servers, I'm adding a swap file to the instance, as it will not have enough memory for FoundryVTT otherwise.

    YAML
    ---
    - name: Post-Provisioning OS Tuning
      hosts: foundry_servers
      gather_facts: true
      tasks:
        - name: Wait for SSH to become available
          ansible.builtin.wait_for_connection:
            timeout: 120

        - name: Check if swapfile exists
          ansible.builtin.stat:
            path: /swapfile
          register: _swapfile
          become: true

        - name: Create swapfile
          become: true
          when: _swapfile.stat is defined and not _swapfile.stat.exists
          block:
            - name: Create 2GB Swap File for e2-micro stability
              ansible.builtin.command: fallocate -l 2G /swapfile
              changed_when: true

            - name: Set permissions on swapfile
              ansible.builtin.file:
                path: /swapfile
                owner: root
                group: root
                mode: '0600'

            - name: Format swapfile
              ansible.builtin.command: mkswap /swapfile
              changed_when: true

        - name: Add swapfile
          become: true
          block:
            - name: Update fstab file
              ansible.posix.mount:
                path: none
                src: /swapfile
                fstype: swap
                opts: sw
                passno: 0
                dump: 0
                state: present

            - name: Check if swapfile is already on
              ansible.builtin.command: swapon --show
              register: _swap_check
              changed_when: false
              failed_when: false

            - name: Activate swap
              ansible.builtin.command: swapon /swapfile
              register: _swapon
              when: _swap_check.stdout is defined and '/swapfile' not in _swap_check.stdout
              changed_when: _swapon.rc is defined and _swapon.rc == 0

    Billing Kill Switch

    I don’t want unexpected charges, so these are the steps to create the billing kill switch.

    YAML
    ---
    - name: 02 - Deploy Billing Kill Switch
      hosts: localhost
      connection: local
      tasks:
        - name: Create Pub/Sub Topic
          google.cloud.gcp_pubsub_topic:
            name: "{{ topic_name }}"
            project: "{{ project_id }}"
            auth_kind: application
            state: present

        - name: Ensure Billing Identity is Initialized
          ansible.builtin.shell: |
            gcloud beta services identity create \
              --service=billing.googleapis.com \
              --project={{ project_id }}
          register: identity_result
          failed_when:
            - identity_result.rc != 0
            - "'PERMISSION_DENIED' not in identity_result.stderr"
          changed_when: "'Created' in identity_result.stderr"

        - name: Link Budget to Pub/Sub Topic
          ansible.builtin.shell: |
            BUDGET_ID=$(gcloud billing budgets list --billing-account={{ billing_account }} --format="value(name)" --filter="displayName={{ budget_name }}")
            gcloud billing budgets update $BUDGET_ID --notifications-rule-pubsub-topic=projects/{{ project_id }}/topics/{{ topic_name }}
          register: budget_update
          changed_when: "'Updated' in budget_update.stderr"

        - name: Deploy Cloud Function (Gen 2)
          ansible.builtin.shell: |
            gcloud functions deploy stop-resources-function \
              --gen2 \
              --runtime=python311 \
              --region={{ region }} \
              --entry-point=stop_billing_limit \
              --trigger-topic={{ topic_name }} \
              --source=./safety_net_code
          register: function_deploy
          changed_when: true

        - name: Output Deployment Status
          ansible.builtin.debug:
            msg: "Safety Net Deployed to {{ region }}. Budget '{{ budget_name }}' is now monitoring for a kill-switch trigger."
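
    The contents of ./safety_net_code aren't shown above. As a reference, here is a minimal sketch of what a main.py with the stop_billing_limit entry point could look like, following Google's documented "cap billing" pattern of detaching the billing account (the names and environment handling here are assumptions; requirements.txt would need functions-framework and google-api-python-client):

    Python
    # safety_net_code/main.py -- hypothetical sketch, not the exact code
    import base64
    import json
    import os

    import functions_framework
    from googleapiclient import discovery

    # Gen 2 functions don't set GCP_PROJECT automatically; set it at deploy
    # time or hardcode your project ID here.
    PROJECT_ID = os.environ.get("GCP_PROJECT", "my-foundry-project")
    PROJECT_NAME = f"projects/{PROJECT_ID}"


    @functions_framework.cloud_event
    def stop_billing_limit(cloud_event):
        # Budget notifications arrive as base64-encoded JSON in the Pub/Sub message
        payload = json.loads(base64.b64decode(cloud_event.data["message"]["data"]))
        if payload["costAmount"] <= payload["budgetAmount"]:
            print("Cost is under budget; nothing to do.")
            return

        # Detaching the billing account stops all billable usage in the project
        billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
        billing.projects().updateBillingInfo(
            name=PROJECT_NAME, body={"billingAccountName": ""}
        ).execute()
        print(f"Billing disabled for {PROJECT_ID}.")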

    Firewalls

    Now I will need port 80 open, but I want to limit port 443 to specific IP addresses. Ultimately, I will configure the server to respond with some sort of error on port 80. I also want to be able to enable and disable IP addresses with this playbook so I can add and remove players.

    So, this playbook takes a target IP and a tag, and can be run as:

    ansible-playbook -e target_ip=#.#.#.# --tags=add_player
    ansible-playbook -e target_ip=#.#.#.# --tags=remove_player
    ansible-playbook --tags=setup_infra

    YAML
    ---
    - name: 03 - Managed Foundry Firewall (Restricted 443)
      hosts: localhost
      connection: local
      pre_tasks:
        - name: Validate target_ip for whitelisting tasks
          ansible.builtin.assert:
            that:
              - target_ip is defined
              - target_ip | length > 0
            fail_msg: "ERROR: You must provide target_ip (e.g., -e 'target_ip=1.2.3.4') for this tag."
          tags: [add_player, remove_player]
      tasks:
        - name: Ensure Port 80 is open to the world
          google.cloud.gcp_compute_firewall:
            name: "allow-http-global"
            project: "{{ project_id }}"
            auth_kind: application
            allowed:
              - ip_protocol: tcp
                ports: ["80"]
            source_ranges: ["0.0.0.0/0"]
            target_tags: ["{{ network_tag }}"]
            state: present
          tags: [always, setup_infra]

        - name: Fetch current 443 whitelist from GCP
          ansible.builtin.shell: |
            gcloud compute firewall-rules describe allow-foundry-https \
              --project={{ project_id }} --format="value(sourceRanges)"
          register: current_fw_raw
          ignore_errors: true
          changed_when: false
          tags: [always]

        - name: Parse current IPs into a list
          ansible.builtin.set_fact:
            current_ips: "{{ current_fw_raw.stdout.split(',') if current_fw_raw.rc == 0 else [] }}"
          tags: [always]

        - name: Add Player IP to Whitelist
          google.cloud.gcp_compute_firewall:
            name: "allow-foundry-https"
            project: "{{ project_id }}"
            auth_kind: application
            allowed:
              - ip_protocol: tcp
                ports: ["443"]
            # Append new IP and ensure no duplicates
            source_ranges: "{{ (current_ips + [target_ip + '/32']) | unique | list }}"
            target_tags: ["{{ network_tag }}"]
            state: present
          tags: [add_player]

        - name: Remove Player IP from Whitelist
          google.cloud.gcp_compute_firewall:
            name: "allow-foundry-https"
            project: "{{ project_id }}"
            auth_kind: application
            allowed:
              - ip_protocol: tcp
                ports: ["443"]
            # Filter out the specific IP
            source_ranges: "{{ current_ips | reject('equalto', target_ip + '/32') | list }}"
            target_tags: ["{{ network_tag }}"]
            state: present
          when: current_ips | length > 1  # Safety: don't delete the last IP or the rule fails
          tags: [remove_player]

    NGINX Proxy and SSL

    Finally, we need to install NGINX and Certbot, request the SSL certificate, and configure NGINX.

    YAML
    ---
    - name: Deploy Nginx and SSL
      hosts: foundry_servers
      become: true
      tasks:
        - name: Install Nginx and Certbot
          ansible.builtin.apt:
            name:
              - nginx
              - certbot
              - python3-certbot-nginx
            state: present
            update_cache: yes

        - name: Ensure Nginx is stopped for Standalone Certbot
          ansible.builtin.systemd:
            name: nginx
            state: stopped

        - name: Request SSL Certificate
          ansible.builtin.shell: |
            certbot certonly --standalone --non-interactive --agree-tos \
              -m {{ admin_email }} \
              -d {{ domain_name }} \
              --pre-hook "systemctl stop foundryvtt 2>/dev/null || true" \
              --post-hook "systemctl start foundryvtt 2>/dev/null || true"
          register: cert_result

        - name: Create Nginx Configuration for Foundry
          ansible.builtin.template:
            src: templates/foundry_nginx.conf.j2
            dest: /etc/nginx/sites-available/foundry
          notify: Restart Nginx

        - name: Enable Foundry Site
          ansible.builtin.file:
            src: /etc/nginx/sites-available/foundry
            dest: /etc/nginx/sites-enabled/foundry
            state: link

        - name: Remove Default Nginx Site
          ansible.builtin.file:
            path: /etc/nginx/sites-enabled/default
            state: absent
          notify: Restart Nginx
      handlers:
        - name: Restart Nginx
          ansible.builtin.systemd:
            name: nginx
            state: restarted
            enabled: yes
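
    The template itself isn't shown here. A minimal sketch of what templates/foundry_nginx.conf.j2 might contain, assuming Foundry listens on its default port 30000 (FoundryVTT relies on websockets, so the Upgrade headers matter):

    NGINX
    # templates/foundry_nginx.conf.j2 -- hypothetical sketch
    server {
        listen 443 ssl;
        server_name {{ domain_name }};

        ssl_certificate /etc/letsencrypt/live/{{ domain_name }}/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/{{ domain_name }}/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:30000;
            proxy_http_version 1.1;
            # Required for Foundry's websocket traffic
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }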

    Bonus – Verify everything

    YAML
    ---
    - name: Foundry Infrastructure and Safety Audit
      hosts: localhost
      connection: local
      tasks:
        - name: Verify budget link to pubsub
          ansible.builtin.shell: |
            gcloud billing budgets list --billing-account={{ billing_account }} --format="json" | \
              jq -r '.[] | select(.displayName=="{{ budget_name }}") | .notificationsRule.pubsubTopic'
          register: budget_link
          failed_when: "topic_name not in budget_link.stdout"
          changed_when: false

        - name: Verify function compute admin permissions
          ansible.builtin.shell: |
            gcloud projects get-iam-policy {{ project_id }} \
              --flatten="bindings[].members" \
              --filter="bindings.role:roles/compute.admin" \
              --format="value(bindings.members)"
          register: iam_policy
          failed_when:
            - "project_id not in iam_policy.stdout"

        - name: Check if safety function is active
          ansible.builtin.shell: |
            gcloud functions describe stop-resources-function \
              --region={{ region }} --gen2 --format="value(state)"
          register: function_state
          failed_when: "'ACTIVE' not in function_state.stdout"

        - name: Validate network split status
          block:
            - name: Assert port 80 is globally open
              ansible.builtin.wait_for:
                host: "{{ domain_name }}"
                port: 80
                timeout: 5

            - name: Assert port 443 is whitelisted for current IP
              ansible.builtin.wait_for:
                host: "{{ domain_name }}"
                port: 443
                timeout: 5
          rescue:
            - name: Report network block
              ansible.builtin.debug:
                msg: "Connectivity check failed. Ensure target_ip is whitelisted."

        - name: Perform safety net heartbeat test
          ansible.builtin.shell: |
            gcloud pubsub topics publish {{ topic_name }} \
              --message='{"costAmount": 0.01, "budgetAmount": {{ budget_limit }}}'
          changed_when: false
  • Joplin Server with Podman and Quadlets (2025 Edit)


    Prepare Environment

    The /tmp folder needs to be mounted on tmpfs (or ramfs…)

    sudo systemctl enable --now tmp.mount
    

    Open port in software firewall

    sudo firewall-cmd --permanent --add-port 22300/tcp
    sudo firewall-cmd --reload
    

    Create joplin user and add subgid and subuid values. The range size below is probably not really necessary…

    sudo useradd -m -c "Joplin Container User" joplin
    sudo usermod --add-subuids 100000-165536 --add-subgids 100000-165536 joplin
    

    Create central storage for sync data (adjust for your environment)

    sudo mkdir -p /appdata/joplin
    sudo chown -R joplin:joplin /appdata/joplin
    sudo chmod 2777 /appdata/joplin
    sudo semanage fcontext -a -t container_file_t "/appdata/joplin(/.*)?"
    sudo restorecon -Rv /appdata/joplin/
    

    Log in as the joplin user. Note that you cannot use su here because the following steps require a proper login session. Alternately you can use machinectl:

    sudo machinectl shell joplin@
    mkdir -p ~/.config/containers/systemd  # For Quadlet files
    mkdir -p ~/cvols/postgres              # For database
    
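    If you want the containers to start at boot without this user ever logging in, you will likely also need lingering enabled for the account (an extra step I'm assuming here; run as a privileged user):

    sudo loginctl enable-linger joplin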

    Set up a reverse proxy (optional) if you don’t want to expose the port

    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    
    location /joplin/ {
        proxy_redirect off;
        rewrite ^/joplin/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:22300/joplin;
    }
    

    Create Podman secrets via environment variables

    export POSTGRES_PASSWORD='blah'
    export POSTGRES_USER='blah'
    export MAILER_AUTH_PASSWORD='blah'
    export MAILER_AUTH_USER='blah'
    podman secret create mailer_auth_password --env MAILER_AUTH_PASSWORD
    podman secret create mailer_auth_user --env MAILER_AUTH_USER
    podman secret create postgres_password --env POSTGRES_PASSWORD
    podman secret create postgres_user --env POSTGRES_USER
    

    Or create Podman secrets using echo

    echo -n 'blah' | podman secret create mailer_auth_password -
    echo -n 'blah' | podman secret create mailer_auth_user -
    echo -n 'blah' | podman secret create postgres_password -
    echo -n 'blah' | podman secret create postgres_user -
    

    Either of these methods runs the risk of your password ending up in your shell history. Either clear your history when done, configure your history to ignore echo and export lines, or ignore lines starting with a space and preface all such commands with a space.
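
    For example, for bash, a minimal sketch would be:

    # In ~/.bashrc: skip lines starting with a space (and duplicates)
    export HISTCONTROL=ignoreboth
    # Or skip echo/export commands entirely
    export HISTIGNORE='echo *:export *'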

    Quadlet Setup

    Create three files in your ~/.config/containers/systemd folder

    The jsync.network file contents (alter to suit your needs)

    # jsync.network
    [Network]
    Subnet=192.168.30.0/24
    Gateway=192.168.30.1
    Label=app=joplin
    

    The jsync_app.container file (adjust for your environment, per what you created above).
    Note: replace myserver and smtp_server with your server name and your SMTP server name, respectively.

    # jsync_app.container
    [Unit]
    Requires=jsync_db.service
    After=jsync_db.service
    
    [Container]
    Environment=APP_PORT=22300
    Environment=APP_BASE_URL='http://myserver/joplin'
    Environment=DB_CLIENT=pg
    Environment=POSTGRES_DATABASE='joplin'
    Environment=POSTGRES_PORT=5432
    Environment=POSTGRES_HOST='myserver'
    Environment=MAILER_ENABLED=1
    Environment=MAILER_HOST='smtp_server'
    Environment=MAILER_PORT=587
    Environment=MAILER_SECURITY='starttls'
    Environment=MAILER_NOREPLY_NAME='Joplin'
    Environment=MAILER_NOREPLY_EMAIL='noreply@localhost'
    Environment=STORAGE_DRIVER='Type=Filesystem; Path=/sync_data'
    Environment=STORAGE_DRIVER_FALLBACK='Type=Database; Mode=ReadAndClear'
    Image=docker.io/joplin/server:latest
    PublishPort=22300:22300
    Volume=/appdata/joplin:/sync_data:z
    Network=jsync.network
    Secret=postgres_password,type=env,target=POSTGRES_PASSWORD
    Secret=mailer_auth_password,type=env,target=MAILER_AUTH_PASSWORD
    Secret=mailer_auth_user,type=env,target=MAILER_AUTH_USER
    Secret=postgres_user,type=env,target=POSTGRES_USER
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=multi-user.target default.target
    

    The jsync_db.container file (adjust for your environment, per what you created above)

    # jsync_db.container
    [Container]
    Environment=POSTGRES_DB='joplin'
    Image=docker.io/postgres:16
    PublishPort=5432:5432
    Volume=/home/joplin/cvols/postgres:/var/lib/postgresql/data:z
    Secret=postgres_password,type=env,target=POSTGRES_PASSWORD
    Secret=postgres_user,type=env,target=POSTGRES_USER
    Network=jsync.network
    
    [Service]
    Restart=always
    

    Now reload systemd and start the service

    systemctl --user daemon-reload
    systemctl --user start jsync_app.service
    
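    You can then verify that both containers came up (jsync_db.service is pulled in by the Requires= dependency):

    systemctl --user status jsync_app.service jsync_db.service
    podman ps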

    If you get an invalid origin error and are running SELinux, you may need to:

    sudo setsebool -P httpd_can_network_connect true