Self Hosting

Documentation and Notes on my Self Hosting videos.

Docker Containers

Chapter for Docker specific containers/services


AdGuard - Docker Setup

Installing AdGuard

Docker Command

docker run --name adguardhome \
    --restart unless-stopped \
    -v /home/techdox/elzim-docker/adguard/workdir:/opt/adguardhome/work \
    -v /home/techdox/elzim-docker/adguard/confdir:/opt/adguardhome/conf \
    -p 192.168.68.110:53:53/tcp -p 192.168.68.110:53:53/udp \
    -p 80:80/tcp -p 443:443/tcp -p 443:443/udp -p 3000:3000/tcp \
    -p 853:853/tcp \
    -p 784:784/udp -p 853:853/udp -p 8853:8853/udp \
    -p 5443:5443/tcp -p 5443:5443/udp \
    -d adguard/adguardhome

Note - On line 5 of the command above I bind DNS to a hardcoded IP address of my server because I was having a conflict with a broadcast address. Change the IP address and the volume directories to match your own server.
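Before binding AdGuard to port 53, it's worth checking whether something on the host already holds it (on many distros systemd-resolved runs a stub listener on 127.0.0.53:53). A small sketch, assuming the ss tool is available:

```shell
# Returns success if any listening socket matches the given port.
port_in_use() {
  ss -lntu 2>/dev/null | grep -q ":$1 "
}

if port_in_use 53; then
  echo "port 53 is already in use - bind AdGuard to a specific host IP or free the port"
else
  echo "port 53 looks free"
fi
```

If port 53 is taken by systemd-resolved, binding AdGuard to the server's own IP (as in the command above) is one way around the conflict.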

Docker Hub Link - https://hub.docker.com/r/adguard/adguardhome


Nextcloud - Docker Setup

Requirements

Steps to follow

  1. Create a folder using the command mkdir nextcloud
  2. Use the command ls to verify the folder was created
  3. Move inside the nextcloud folder using the command cd nextcloud
  4. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml
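The folder steps above can be run as one sequence in a terminal (these are just the guide's own commands chained together; pwd is an extra check showing where you are):

```shell
mkdir -p nextcloud   # step 1: create the folder (-p: no error if it already exists)
ls | grep nextcloud  # step 2: confirm the folder was created
cd nextcloud         # step 3: move inside it
pwd                  # prints the full path of the folder you are now in
```

From here, sudo nano docker-compose.yaml opens the editor for the compose file.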

Docker Compose Command

version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb:10.6
    restart: always
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<DELETE THIS AND ENTER PASSWORD HERE>
      - MYSQL_PASSWORD=<DELETE THIS AND ENTER PASSWORD HERE>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=<DELETE THIS AND ENTER PASSWORD HERE>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

 

Above is the code required to deploy Nextcloud; it will stand up two containers, one running the MariaDB database and one running Nextcloud itself.

  1. Copy and paste the code above into the docker-compose.yaml file.

  2. Make sure you don't change the DATABASE or USER values (these are on lines 17 and 18 as well as lines 31 and 32). If you do change them, they must still match across both pairs of lines.

  3. When you stand these containers up, Nextcloud should connect to the database automatically. If you are asked for the database details during the Nextcloud setup, something went wrong.

  4. Update the password values on lines 15, 16 and 30; changing these is compulsory.

  5. You can technically leave lines 13 and 28, which set the storage locations for the MySQL and Nextcloud containers, but it's best practice to have these stored somewhere dedicated.

  6. Once you have the compose file set up, run 'sudo docker-compose up -d'

  7. The containers will be created and started.

  8. In a browser visit server-local-ip-address:8080
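As a side note, docker-compose can also read values from a .env file kept next to docker-compose.yaml, which keeps the passwords out of the compose file itself. A sketch of the convention (the variable names here are my own, not part of the guide):

```yaml
# .env file (same folder as docker-compose.yaml):
#   DB_ROOT_PASSWORD=pick-something-strong
#   DB_PASSWORD=pick-something-else
#
# docker-compose.yaml then references them like this:
services:
  db:
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${DB_PASSWORD}
  app:
    environment:
      - MYSQL_PASSWORD=${DB_PASSWORD}
```

This also makes it harder to accidentally share passwords if you ever publish your compose file.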



JellyFin - Docker Setup (page is a work in progress)

This guide looks like a really useful starting point to help write the rest of this JellyFin Docker Setup guide - https://cyberhost.uk/how-to-self-host-jellyfin/

Requirements

Steps to follow

  1. Create a folder using the command mkdir jellyfin
  2. Use the command ls to verify the folder was created
  3. Move inside the jellyfin folder using the command cd jellyfin
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/jellyfin printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

Docker Compose Command

---
version: "2.1"
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Pacific/Auckland
    volumes:
      - /path/to/library:/config
      - /path/to/tvseries:/data/tvshows
      - /path/to/movies:/data/movies
    ports:
      - 8096:8096
      - 8920:8920 #optional
      - 7359:7359/udp #optional
      - 1900:1900/udp #optional
    restart: unless-stopped

Above is the code required to deploy JellyFin.

  1. Copy and paste the code above into the docker-compose.yaml file.

  2. Next we want to edit lines 12, 13 and 14 in the docker-compose.yaml file, which currently read
           - /path/to/library:/config
           - /path/to/tvseries:/data/tvshows
           - /path/to/movies:/data/movies

  3. Update lines 12, 13 and 14 so they read (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update them with):
           - /home/[YOUR-USER-NAME]/jellyfin:/config
           - /home/[YOUR-USER-NAME]/jellyfin:/data/tvshows
           - /home/[YOUR-USER-NAME]/jellyfin:/data/movies

  4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

  5. Press Y then return on your keyboard to save everything

  6. Once you have the compose file setup, run 'sudo docker-compose up -d'

  7. The containers will be created and started.

  8. In a browser visit server-local-ip-address:8096


Watchtower - Image updater

Introduction

Watchtower is a Docker container management tool that automates the process of updating running Docker containers. It constantly monitors your Docker images and automatically pulls updated versions from the Docker registry, stops the existing containers, and starts the new ones with the latest versions. This documentation provides an overview of Watchtower, including its description, image build commands, and useful tips and tricks.

Description

Watchtower simplifies the management of Docker containers by streamlining the update process. It eliminates the need for manual intervention, allowing you to focus on other tasks while ensuring that your containers are always up to date with the latest versions available. Watchtower is an open-source project and offers a reliable solution for automating container updates in your Docker environment.

Code Snippet

version: '3'
services:
  watchtower:
    image: containrrr/watchtower
    command:
      - --interval=12h
      - --cleanup=true
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
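If there are containers you would rather update by hand (a database, say), the Watchtower documentation describes a per-container opt-out label. A sketch of how it would look on another service (the service name here is just an example):

```yaml
services:
  mariadb:
    image: mariadb:10.6
    labels:
      # Watchtower skips containers carrying this label set to false
      - com.centurylinklabs.watchtower.enable=false
```

Everything without the label keeps being updated on the 12-hour interval set above.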



Backup Docker Volumes (Nextcloud Example)

Description:

In this guide, you will learn how to securely backup Docker volumes using Nextcloud as an example. Docker volumes play a critical role in persisting data within containerized applications, and backing them up is essential for data safety and recovery. Follow these step-by-step instructions to confidently create backups of Nextcloud's Docker volumes, including the data and configuration directories. Safeguard your valuable data today!

Backup Steps:

Backup Nextcloud Database:

docker run --rm --volumes-from nextcloud-db-1 -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar -C /var/lib/mysql .

Backup Nextcloud Front End:

docker run --rm --volumes-from nextcloud-app-1 -v $(pwd):/backup ubuntu tar cvf /backup/backupdata.tar -C /var/www/html/data .

docker run --rm --volumes-from nextcloud-app-1 -v $(pwd):/backup ubuntu tar cvf /backup/backupconfig.tar -C /var/www/html/config .
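It's worth checking an archive is readable before trusting it. The snippet below demonstrates the same tar pattern on a scratch directory (the /tmp paths are illustrative only):

```shell
# Create a scratch directory standing in for a container volume.
mkdir -p /tmp/voldemo/data
echo "hello" > /tmp/voldemo/data/file.txt

# Same pattern as the backup commands above: archive the directory contents.
tar cf /tmp/voldemo/backup.tar -C /tmp/voldemo/data .

# List the archive contents to verify the backup captured the files.
tar tf /tmp/voldemo/backup.tar
```

Running tar tf against your real backup.tar should likewise list the Nextcloud files you expect to see.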

Restore Steps:

Restore Nextcloud Database:

 

docker run --rm --volumes-from nextcloud-db-1 -v $(pwd):/backup ubuntu bash -c "cd /var/lib/mysql && tar xvf /backup/backup.tar"

Restore Nextcloud Frontend:

docker run --rm --volumes-from nextcloud-app-1 -v $(pwd):/backup ubuntu bash -c "cd /var/www/html/data && tar xvf /backup/backupdata.tar"

docker run --rm --volumes-from nextcloud-app-1 -v $(pwd):/backup ubuntu bash -c "cd /var/www/html/config && tar xvf /backup/backupconfig.tar"

Ensure that the paths and container names match your specific setup. By following these instructions, you'll be able to back up and restore Docker volumes, ensuring the safety and integrity of your Nextcloud data.


FreshRSS - Page is a work in progress

Requirements

Steps to follow

  1. Create a folder using the command mkdir freshrss
  2. Use the command ls to verify the folder was created
  3. Move inside the freshrss folder using the command cd freshrss
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/freshrss printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

    Docker Compose Command

    version: "2.1"
    services:
      freshrss:
        image: lscr.io/linuxserver/freshrss:latest
        container_name: freshrss
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - /path/to/data:/config
        ports:
          - 80:80
        restart: unless-stopped

    Above is the code required to deploy FreshRSS.

    1. Copy and paste the code above into the docker-compose.yaml file.

    2. Next we want to edit line 11 in the docker-compose.yaml file, which currently reads
             /path/to/data:/config

    3. Update line 11 so it reads (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update it with):
           /home/[YOUR-USER-NAME]/freshrss:/config
           /home/[YOUR-USER-NAME]/freshrss:/config

    4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

    5. Press Y then return on your keyboard to save everything

    6. Once you have the compose file setup, run 'sudo docker-compose up -d'

    7. The containers will be created and started.


FreshRSS Setup and Install

  1. In a browser visit server-local-ip-address:80
  2. Select Language: choose English and then click 'submit' or 'go to next step'
  3. Type of database: choose SQLite
  4. Fill in username and password details. For the authentication method choose webform/javascript


Logitech Media Server - Page is a work in progress

The docker-compose file is from here - https://hub.docker.com/r/lmscommunity/logitechmediaserver

Requirements

Steps to follow

  1. Create a folder using the command mkdir lms
  2. Use the command ls to verify the folder was created
  3. Move inside the lms folder using the command cd lms
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/lms printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

Docker Compose Command

version: '3'
services:
  lms:
    container_name: lms
    image: lmscommunity/logitechmediaserver
    volumes:
      - /<somewhere>:/config:rw
      - /<somewhere>:/music:ro
      - /<somewhere>:/playlist:rw
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    ports:
      - 9000:9000/tcp
      - 9090:9090/tcp
      - 3483:3483/tcp
      - 3483:3483/udp
    restart: always

Above is the code required to deploy Logitech Media Server.

  1. Copy and paste the code above into the docker-compose.yaml file.

  2. Next we want to edit lines 7, 8 and 9 in the docker-compose.yaml file, which currently read
          - /<somewhere>:/config:rw
          - /<somewhere>:/music:ro
          - /<somewhere>:/playlist:rw
  3. Update lines 7, 8 and 9 so they read (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update with)
          - /home/[YOUR-USERNAME]/lms:/config:rw
          - /home/[YOUR-USERNAME]/lms:/music:ro
          - /home/[YOUR-USERNAME]/lms:/playlist:rw
  4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

  5. Press Y then return on your keyboard to save everything

  6. Once you have the compose file setup, run 'sudo docker-compose up -d'

  7. The containers will be created and started.

  8. In a browser visit server-local-ip-address:9000

Grocy - Page is a work in progress

Requirements

Steps to follow

  1. Create a folder using the command mkdir grocy
  2. Use the command ls to verify the folder was created
  3. Move inside the grocy folder using the command cd grocy
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/grocy printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

    Docker Compose Command

    version: "2.1"
    services:
      grocy:
        image: lscr.io/linuxserver/grocy:latest
        container_name: grocy
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - /path/to/data:/config
        ports:
          - 9283:80
        restart: unless-stopped

    Above is the code required to deploy Grocy.


    1. Copy and paste the code above into the docker-compose.yaml file.

    2. Next we want to edit line 11 in the docker-compose.yaml file, which currently reads
             /path/to/data:/config

    3. Update line 11 so it reads (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update it with):
           /home/[YOUR-USER-NAME]/grocy:/config

    4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

    5. Press Y then return on your keyboard to save everything

    6. Once you have the compose file setup, run 'sudo docker-compose up -d'

    7. The containers will be created and started.

Grocy Setup and Install

  1. In a browser visit server-local-ip-address:9283
  2. The default login is admin / admin. It's a good idea to change this after logging in; Grocy has user management in the web interface where passwords can be changed.

PairDrop - Page is a work in progress

Requirements

Steps to follow

  1. Create a folder using the command mkdir pairdrop
  2. Use the command ls to verify the folder was created
  3. Move inside the pairdrop folder using the command cd pairdrop
  4. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

    Docker Compose Command

version: "2.1"
services:
  pairdrop:
    image: lscr.io/linuxserver/pairdrop:latest
    container_name: pairdrop
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - RATE_LIMIT=false #optional
      - WS_FALLBACK=false #optional
      - RTC_CONFIG= #optional
      - DEBUG_MODE=false #optional
    ports:
      - 3000:3000
    restart: unless-stopped

Above is the code required to deploy PairDrop.

  1. Copy and paste the code above into the docker-compose.yaml file; there is no need to edit it further.

  2. Press control+x on your keyboard to exit editing mode.

  3. Press Y then return on your keyboard to save everything.

  4. Once you have the compose file set up, run 'sudo docker-compose up -d'

  5. The containers will be created and started.

  6. In a browser visit server-local-ip-address:3000

Syncthing - Page is a work in progress

Requirements

Steps to follow

  1. Create a folder using the command mkdir syncthing
  2. Use the command ls to verify the folder was created
  3. Move inside the syncthing folder using the command cd syncthing
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/syncthing printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

    Docker Compose Command

    version: "2.1"
    services:
      syncthing:
        image: lscr.io/linuxserver/syncthing:latest
        container_name: syncthing
        hostname: syncthing #optional
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - /path/to/appdata/config:/config
          - /path/to/data1:/data1
          - /path/to/data2:/data2
        ports:
          - 8384:8384
          - 22000:22000/tcp
          - 22000:22000/udp
          - 21027:21027/udp
        restart: unless-stopped

    Above is the code required to deploy Syncthing.

    1. Copy and paste the code above into the docker-compose.yaml file.

    2. Next we want to edit lines 12, 13 and 14 in the docker-compose.yaml file, which currently read
            - /path/to/appdata/config:/config
            - /path/to/data1:/data1
            - /path/to/data2:/data2

    3. Update lines 12, 13 and 14 so they read (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update them with):
             - /home/[YOUR-USER-NAME]/syncthing:/config
             - /home/[YOUR-USER-NAME]/syncthing:/data1
             - /home/[YOUR-USER-NAME]/syncthing:/data2

    4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

    5. Press Y then return on your keyboard to save everything

    6. Once you have the compose file setup, run 'sudo docker-compose up -d'

    7. The containers will be created and started.

    8. In a browser visit server-local-ip-address:8384
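Step 3 points /config, /data1 and /data2 at the same folder, which works but mixes Syncthing's own configuration in with your synced files. A sketch using subfolders instead (create them with mkdir first; the layout is just a suggestion):

```yaml
volumes:
  - /home/[YOUR-USER-NAME]/syncthing/config:/config
  - /home/[YOUR-USER-NAME]/syncthing/data1:/data1
  - /home/[YOUR-USER-NAME]/syncthing/data2:/data2
```

Keeping /config separate also makes it easier to back up Syncthing's settings on their own.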

Wireguard - Docker Setup

This is a Docker Compose file written in the version "3" format. It defines two services, wireguard and wireguard-ui, which set up and manage a WireGuard VPN server along with a web-based user interface for configuration.

Services

wireguard

wireguard-ui

Note: Make sure to replace the placeholder values for environment variables (e.g., SENDGRID_API_KEY, EMAIL_FROM_ADDRESS, etc.) with actual values suitable for your deployment.

This Docker Compose file provides a convenient way to set up and manage both the WireGuard VPN server and the WireGuard UI using containers. By running docker-compose up, the services will be started and can be accessed from the specified ports and configurations.

version: "3"

services:

  wireguard:
    image: linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
    volumes:
      - ./config:/config
    ports:
      - "5000:5000"
      - "51820:51820/udp"

  wireguard-ui:
    image: ngoduykhanh/wireguard-ui:latest
    container_name: wireguard-ui
    depends_on:
      - wireguard
    cap_add:
      - NET_ADMIN
    network_mode: service:wireguard
    environment:
      - SENDGRID_API_KEY
      - EMAIL_FROM_ADDRESS
      - EMAIL_FROM_NAME
      - SESSION_SECRET
      - WGUI_USERNAME=admin
      - WGUI_PASSWORD=password
      - WG_CONF_TEMPLATE
      - WGUI_MANAGE_START=true
      - WGUI_MANAGE_RESTART=true
    logging:
      driver: json-file
      options:
        max-size: 50m
    volumes:
      - ./db:/app/db
      - ./config:/etc/wireguard

Post Up

iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Post Down

iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

The "Post Up" command and the "Post Down" command are used in the configuration of WireGuard to set up and tear down network routing rules for the WireGuard interface.

The "Post Up" command performs the following actions:

  1. It adds a rule to the FORWARD chain of the iptables firewall to accept incoming traffic on the WireGuard interface (wg0). This allows packets to be forwarded between the WireGuard network and other networks.
  2. It adds a rule to the POSTROUTING chain of the iptables NAT (Network Address Translation) table to perform MASQUERADE on outgoing packets from the WireGuard interface (wg0) before they are sent out through the eth0 interface. MASQUERADE modifies the source IP address of the packets to match the IP address of the eth0 interface, allowing the response packets to be correctly routed back to the WireGuard network.

The "Post Down" command reverses the actions performed by the "Post Up" command:

  1. It deletes the rule from the FORWARD chain of the iptables firewall that accepts incoming traffic on the WireGuard interface (wg0).
  2. It deletes the rule from the POSTROUTING chain of the iptables NAT table that performs MASQUERADE on outgoing packets from the WireGuard interface (wg0).

These commands are typically used when configuring a WireGuard VPN server in scenarios where Network Address Translation (NAT) is involved, such as when the server is behind a router performing NAT.
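For reference, these commands normally sit in the [Interface] section of the server's WireGuard config (with this compose setup that would be a file like ./config/wg0.conf). The address and key below are placeholders, and eth0 assumes that is your server's LAN interface:

```ini
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
```

If your server's outbound interface is not eth0, check it with ip route and adjust the rules to match.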


FocalBoard - Docker Setup

Requirements

Steps to follow

  1. Create a folder using the command mkdir focalboard
  2. Use the command ls to verify the folder was created
  3. Move inside the focalboard folder using the command cd focalboard
  4. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

Docker Compose Command

 

version: "3.3"
services:
  focalboard:
    ports:
      - 8000:8000
    restart: always
    image: mattermost/focalboard

Above is the code required to deploy FocalBoard.

  1. Copy and paste the code above into the docker-compose.yaml file.

  2. There is no need to edit or change the code any further once it has been pasted into the docker-compose.yaml file.

  3. Press control+x on your keyboard to exit editing mode.

  4. Press Y then return on your keyboard to save everything.

  5. Once you have the compose file set up, run 'sudo docker-compose up -d'

FocalBoard Setup and install

  1. In a browser visit server-local-ip-address:8000
  2. Click on the link "or create an account if you don't have one"

  3. Enter an email address - you can use a fake one here if you don't want to add a real one; test@test.com works fine
  4. Enter a username and password

Podgrab - Docker setup

Requirements

Steps to follow

  1. Create a folder using the command mkdir podgrab
  2. Use the command ls to verify the folder was created
  3. Move inside the podgrab folder using the command cd podgrab
  4. Next, check what directory you are in using the command pwd
  5. After running pwd you should see something like /home/[YOUR-USER-NAME]/podgrab printed out. Remember this; you will need it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

Docker Compose Command

version: "2.1"
services:
  podgrab:
    image: akhilrex/podgrab
    container_name: podgrab
    environment:
      - CHECK_FREQUENCY=240
     # - PASSWORD=password     ## Uncomment to enable basic authentication, username = podgrab
    volumes:
      - /path/to/config:/config
      - /path/to/data:/assets
    ports:
      - 8095:8080
    restart: unless-stopped

Above is the code required to deploy Podgrab.

  1. Copy and paste the code above into the docker-compose.yaml file.

  2. Next we want to edit lines 10 and 11 in the docker-compose.yaml file, which currently read
           - /path/to/config:/config
           - /path/to/data:/assets

  3. Update lines 10 and 11 so they read (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update them with):
           - /home/[YOUR-USER-NAME]/podgrab:/config
           - /home/[YOUR-USER-NAME]/podgrab:/assets

  4. Once everything is copied and pasted in and edited press control+x (at the same time) on your keyboard to exit editing mode.

  5. Press Y then return on your keyboard to save everything

  6. Once you have the compose file setup, run 'sudo docker-compose up -d'

  7. The containers will be created and started.

  8. In a browser visit server-local-ip-address:8095

gPodder - Work in progress

Requirements

Steps to follow

  1.  Create a folder using the command - mkdir gpodder 
  2. Use the command ls to verify the folder was created
  3. Move inside the gpodder folder using the command cd gpodder
  4. Next, check what directory you are in using the command pwd
  5.  After running pwd you should see something like /home/[YOUR-USER-NAME]/gpodder printed out. Remember this you will need to know it when updating the docker compose file.
  6. Create a docker-compose.yaml file by running the following command sudo nano docker-compose.yaml

Docker Compose Command

---
version: "2.1"
services:
  gpodder:
    image: xthursdayx/gpodder-docker
    container_name: gPodder
    environment:
      - PUID=99
      - PGID=100
      - TZ=America/New_York
      - PASSWORD= #optional
    volumes:
      - /path/to/config:/config
      - /path/to/downloads:/downloads
    ports:
      - 3004:3000
    restart: unless-stopped

Above is the code required to deploy gPodder.

  1. Copy and paste the code above into the docker-compose.yaml file.
  2. You need to change line 10 to the correct timezone - you can select your correct timezone from this timezone database
  3. Next we want to edit lines 13 and 14 in the docker-compose.yaml file, which currently read
           - /path/to/config:/config
           - /path/to/downloads:/downloads

  4. Update lines 13 and 14 so they read (refer to instruction 5 above if this doesn't make sense or you aren't sure what to update them with):
           - /home/[YOUR-USER-NAME]/gpodder:/config
           - /home/[YOUR-USER-NAME]/gpodder:/downloads

  5. Once everything is edited, press control+x on your keyboard to exit editing mode, then press Y and return to save.

  6. Once you have the compose file set up, run 'sudo docker-compose up -d'

  7. In a browser visit server-local-ip-address:3004


    If you have FileBrowser installed you can upload an .opml file to import your podcast subscriptions into gPodder. You must upload the file into the gpodder folder that you created at the beginning of this tutorial.


Uptime Kuma

Uptime Kuma - Overview

Uptime Kuma is a powerful open-source tool for monitoring the availability and performance of your websites and services. With Uptime Kuma, you can easily track the uptime and response times of various endpoints. It provides a user-friendly web interface to view and analyze monitoring data.

This Docker Compose file sets up the Uptime Kuma service using the specified Docker image. It ensures that the service restarts automatically in case of failures. Port 3010 on the host is mapped to port 3001 inside the container, allowing you to access the Uptime Kuma web interface.

The use of Docker volumes ensures that data and configuration for Uptime Kuma are stored persistently, even if the container is recreated or updated. Additionally, mounting the Docker socket from the host into the container enables Uptime Kuma to monitor other Docker containers running on the same host, providing comprehensive monitoring capabilities.

Docker Compose

version: '3'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3010:3001"
    volumes:
      - uptime-kuma:/app/data
      - /var/run/docker.sock:/var/run/docker.sock


volumes:
  uptime-kuma:


Deploying the Compose

docker-compose up -d

You can run this command in the directory where your Docker Compose file (containing the Uptime Kuma service definition) is located. It will start the Uptime Kuma service as a detached background process, allowing you to continue using your terminal for other tasks without the need to keep the container's output visible.

Make sure you have Docker and Docker Compose installed on your system before running this command.


Memos

Memos is your gateway to superior note-taking, powered by Docker for reliability and simplicity. With Memos, you can effortlessly organize your thoughts, tasks, and ideas while ensuring your data stays secure and accessible.

This Docker Compose file sets up the Memos service using the latest Docker image. By mapping port 5230 on your host to port 5230 in the container, Memos becomes conveniently accessible through your web browser.

The bind-mounted volume plays a crucial role in preserving your Memos data and configuration. Your notes and settings are stored in the ~/.memos folder on the host, unaffected by container updates or restarts.

Docker Compose File

version: "3.0"
services:
  memos:
    image: neosmemo/memos:latest
    container_name: memos
    volumes:
      - ~/.memos/:/var/opt/memos
    ports:
      - 5230:5230

Deploying the Compose

To start your Memos instance with Docker Compose, execute the following command:

docker-compose up -d

Run this command within the directory containing your Docker Compose file. It will initiate Memos as a detached background service, granting you the freedom to use your terminal for other tasks while keeping your Memos instance running smoothly.

Make sure you have Docker and Docker Compose installed on your system before running this command. Enjoy elevated note-taking with Memos and Docker!
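Since everything Memos stores lives in the bind-mounted ~/.memos folder, a backup is a single tar command, following the same pattern as the Nextcloud backup page (the backup filename here is just an example):

```shell
# Create the data folder if it doesn't exist yet, then archive its contents.
mkdir -p ~/.memos
tar czf ~/memos-backup.tar.gz -C ~/.memos .
ls -lh ~/memos-backup.tar.gz   # confirm the archive was written
```

Restoring is the reverse: extract the archive back into ~/.memos before starting the container.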

 


Excalidraw

Excalidraw Compose and Documentation

Excalidraw is your go-to tool for creating and documenting your diagrams and visualizations effortlessly. This Docker Compose setup ensures a reliable and straightforward experience with Excalidraw. Whether you're sketching diagrams or documenting concepts, Excalidraw and Docker have you covered.

Docker Compose File

version: "3.8"

services:
  excalidraw:
    container_name: excalidraw
    image: excalidraw/excalidraw:latest
    ports:
      - "3030:80"
    restart: on-failure

Container Name: Set the container's name to "excalidraw" for easy reference and management.

Image: Specifies the Docker image "excalidraw/excalidraw:latest" to be used for Excalidraw.

Ports: Maps port 3030 on the host to port 80 in the container, allowing you to access Excalidraw through your web browser.

Restart: Configured to restart the container on failure, ensuring continuous availability.

Deploying the Compose

To start your Excalidraw instance with Docker Compose, follow these steps:

  1. Ensure you have Docker and Docker Compose installed on your system.

  2. Create a docker-compose.yml file with the provided content.

  3. Navigate to the directory containing your docker-compose.yml file.

  4. Execute the following command:

    docker-compose up -d

    This will initiate Excalidraw as a detached background service, allowing you to use your terminal for other tasks while keeping Excalidraw running smoothly.

Enjoy creating and documenting with Excalidraw and Docker!


Filebrowser


Filebrowser - Easy File Management with Docker



version: '3'
services:
  filebrowser:
    image: filebrowser/filebrowser:s6
    container_name: filebrowser
    volumes:
      - /home:/srv #Change to match your directory
      - /home/techdox/docker/filebrowser/filebrowser.db:/database/filebrowser.db #Change to match your directory
      - /home/techdox/docker/filebrowser/settings.json:/config/settings.json #Change to match your directory
    environment:
      - PUID=1000 # Replace with the output of 'id -u' on your host
      - PGID=1000 # Replace with the output of 'id -g' on your host
    ports:
      - 8095:80 #Change the port if needed

Explanation:

This Docker Compose file sets up the Filebrowser service using the specified Docker image. It ensures that the service restarts automatically in case of failures. Port 8095 on the host is mapped to port 80 inside the container, allowing you to access Filebrowser's web interface.

The use of Docker volumes ensures that data and configuration for Filebrowser are stored persistently, even if the container is recreated or updated.

Image: This specifies the Docker image to use for the Filebrowser service. In this case, it uses the "filebrowser/filebrowser:s6" image.

Container Name: Sets the name of the Docker container to "filebrowser." This is useful for referencing and managing the container.

Volumes:

  • /home:/srv: Mounts the host's "/home" directory to "/srv" inside the container.
  • /home/techdox/docker/filebrowser/filebrowser.db:/database/filebrowser.db: Mounts the Filebrowser database file for persistent storage.
  • /home/techdox/docker/filebrowser/settings.json:/config/settings.json: Mounts the Filebrowser settings file for configuration.
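One gotcha worth noting: if a bind-mounted file does not exist when the container first starts, Docker creates an empty directory with that name instead, and Filebrowser will fail to start. A minimal sketch of pre-creating the files, using $HOME in place of the example paths above (adjust to match your compose file):

```shell
# Pre-create the database and settings files so Docker mounts
# them as files rather than creating empty directories.
FB_DIR="$HOME/docker/filebrowser"   # adjust to match the paths in your compose file
mkdir -p "$FB_DIR"
touch "$FB_DIR/filebrowser.db" "$FB_DIR/settings.json"
```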

Environment:

  • PUID: The user ID the container runs as. Use the value printed by id -u on your host.
  • PGID: The group ID the container runs as. Use the value printed by id -g on your host.

Ports: Maps port 8095 on the host to port 80 inside the container, allowing access to Filebrowser's web interface.

Deploying the Compose:

docker-compose up -d

You can run this command in the directory where your Docker Compose file (containing the Filebrowser service definition) is located. It will start the Filebrowser service as a detached background process. Ensure you have Docker and Docker Compose installed on your system before running this command.

Docker Containers

Grafana

Grafana Overview

This Docker Compose file sets up the Grafana service using the specified Docker image. It includes configuration for automatic restarts under certain conditions and port mapping to access the Grafana interface.

Docker volumes are utilized to ensure persistent storage of Grafana's data and configurations. This setup facilitates easy monitoring and visualization of data from various sources, making it a powerful tool for data analysis and observability.

Grafana Docker Compose

version: "3.8"
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    ports:
     - '3000:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
volumes:
  grafana-storage: {}

Explanation of Key Components:

Image: Specifies the Docker image "grafana/grafana" to be used for Grafana.

Container Name: Sets the container's name to "grafana" for easy reference and management.

Restart: "unless-stopped" restarts the container automatically unless it is stopped manually.

Ports: Maps port 3000 on the host to port 3000 in the container, where the Grafana web interface listens.

Volumes: The named volume "grafana-storage" keeps dashboards, users, and settings persistent across container recreations.

Deploying the Compose:

docker-compose up -d

Run this command in the directory containing your docker-compose.yml file. Ensure Docker and Docker Compose are installed on your system before running it. This setup provides a robust and flexible approach to data visualization and monitoring with Grafana.
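After the container starts, Grafana exposes a health endpoint you can use to confirm the service is up before logging in. A quick check, assuming the default 3000:3000 mapping above:

```shell
# Grafana's health endpoint returns a small JSON document,
# including "database": "ok", once the service is ready.
curl -s http://localhost:3000/api/health | head -n 10
```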


Docker Containers

Prometheus

Prometheus Overview

Prometheus is an open-source monitoring and alerting toolkit widely used for its simplicity and effectiveness in monitoring various types of environments. It collects and stores metrics as time series data, allowing for powerful querying, alerting, and visualization capabilities.

Prometheus Docker Compose:

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
    restart: unless-stopped
    volumes:
      - ./prometheus:/etc/prometheus
      - prom_data:/prometheus
volumes:
  prom_data:

Prometheus Configuration (prometheus.yml):

global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: []
      scheme: http
      timeout: 10s
      api_version: v1
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
        - localhost:9090
  - job_name: elzim # Change to whatever you like
    static_configs:
      - targets: ['192.168.68.109:9100'] # Change this to your server's IP

This file defines the operational aspects of Prometheus, including:

  • Global settings: how often targets are scraped and rules are evaluated (every 15 seconds here).
  • Alerting: an Alertmanager section, currently with no targets configured.
  • Scrape configs: the jobs Prometheus collects metrics from, in this case itself on localhost:9090 and a node exporter job.

This configuration tells Prometheus to scrape metrics from the node exporter running at 192.168.68.109 on port 9100.
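Before (re)starting the stack, the configuration file can be validated with promtool, which is bundled inside the prom/prometheus image. A sketch, assuming the ./prometheus bind mount used in the compose file above:

```shell
# Validate prometheus.yml without starting the server.
# promtool ships in the prom/prometheus image; override the
# entrypoint to run it instead of the Prometheus server.
docker run --rm -v "$(pwd)/prometheus:/etc/prometheus" \
  --entrypoint promtool prom/prometheus \
  check config /etc/prometheus/prometheus.yml
```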

This setup allows Prometheus to scrape metrics from its own instance as well as from the specified node exporter, providing comprehensive monitoring capabilities. The specified configuration and volume mappings ensure that your Prometheus settings are preserved and applied correctly.

This setup provides a robust and flexible solution for monitoring your infrastructure and applications, leveraging the power of Prometheus in a containerized environment.

Docker Containers

LinkStack

Setting Up Linkstack with Docker Compose

Linkstack is a web application designed for managing and organizing web links. It provides an intuitive interface for storing, categorizing, and accessing various web resources efficiently.

Docker Compose Configuration for Linkstack

To set up Linkstack in a Docker environment, you can use the following Docker Compose configuration. This will create an isolated environment for Linkstack, ensuring it runs consistently across different setups.

Docker Compose File (docker-compose.yml)

version: '3.8'

services:
  linkstack:
    image: linkstackorg/linkstack
    container_name: linkstack
    hostname: linkstack
    environment:
      #HTTP_SERVER_NAME: "www.example.xyz"
      #HTTPS_SERVER_NAME: "www.example.xyz"
      SERVER_ADMIN: "admin@example.xyz"
      TZ: "Pacific/Auckland"
      PHP_MEMORY_LIMIT: "512M"
      UPLOAD_MAX_FILESIZE: "8M"
    ports:
      - "8099:80"
      - "8443:443"
    restart: unless-stopped
    volumes:
      - "linkstack:/htdocs"
volumes:
  linkstack:

Key Components of the Configuration:

Image: Uses the "linkstackorg/linkstack" image.

Environment: Sets the server admin email, timezone, PHP memory limit, and maximum upload size. The commented-out HTTP_SERVER_NAME and HTTPS_SERVER_NAME variables are for custom domains.

Ports: Maps host ports 8099 (HTTP) and 8443 (HTTPS) to ports 80 and 443 in the container.

Restart: "unless-stopped" keeps the container running unless you stop it manually.

Volumes: The named volume "linkstack" persists the application data stored in /htdocs.

Deploying Linkstack

  1. Create a docker-compose.yml file with the above content.
  2. In the directory containing the file, run the command docker-compose up -d to start Linkstack as a detached background process.
  3. Once the container is running, Linkstack can be accessed via http://<host-ip>:8099 or https://<host-ip>:8443.

Note: You may configure the HTTP_SERVER_NAME and HTTPS_SERVER_NAME environment variables with your domain names if you plan to use Linkstack with custom domains.

Accessing Linkstack: After deployment, Linkstack is accessible through the specified ports. You can start organizing and managing your web links using its web interface.
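A quick way to confirm both mapped ports are answering, assuming the 8099/8443 mappings above. Note the -k flag on the HTTPS check: the container serves a self-signed certificate by default, which curl would otherwise reject:

```shell
# HTTP should answer directly on the mapped port
curl -sI http://localhost:8099 | head -n 1

# HTTPS uses a self-signed certificate, so skip verification with -k
curl -skI https://localhost:8443 | head -n 1
```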


Docker Containers

Duplicati - Docker Setup

Setting Up Duplicati with Docker Compose

Introduction to Duplicati: Duplicati is a free and open-source backup software that allows you to securely store backups online in various standard protocols and services. It's known for its versatility and ease of use, providing features like encryption, compression, and scheduling.

Docker Compose Configuration for Duplicati:

The following Docker Compose configuration will help you set up Duplicati in a Docker environment. This ensures a consistent and isolated setup for your backup needs.

Docker Compose File (docker-compose.yml):

version: "2.1"
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    environment:
      - PUID=0
      - PGID=0
      - TZ=Etc/UTC
      - CLI_ARGS= #optional
    volumes:
      - ./config:/config
      - ./backups:/backups
      - /:/source
    ports:
      - 8200:8200
    restart: unless-stopped

Key Components of the Configuration:

Image: Uses the "lscr.io/linuxserver/duplicati:latest" image.

Environment: PUID and PGID are set to 0 (root) so Duplicati can read everything under /source; TZ sets the timezone, and CLI_ARGS is optional.

Volumes: ./config stores Duplicati's configuration, ./backups can serve as a local backup destination, and the host's root filesystem is mounted at /source so it can be backed up.

Ports: Maps port 8200 on the host to the Duplicati web interface.

Restart: "unless-stopped" keeps the container running unless you stop it manually.

Deploying Duplicati:

  1. Save the above Docker Compose configuration in a docker-compose.yml file.
  2. Run docker-compose up -d in the directory containing this file to start Duplicati in detached mode.
  3. Once running, access the Duplicati web interface via http://<host-ip>:8200.

Configuring and Using Duplicati: After deployment, you can configure backup jobs, schedules, and destinations through the Duplicati web interface. Ensure to properly set up encryption and choose a reliable backup destination to secure your data.


Kubernetes Services

Chapter for Kubernetes Services

Kubernetes Services

Portainer - Deploying MicroK8s

How to set up MicroK8s using Portainer

Useful Linux and Docker commands to know

https://www.composerize.com/ - converts docker run commands into a docker-compose.yaml file.

Useful to know

Docker volumes explained in 6 minutes - https://yewtu.be/watch?v=p2PH_YPCsis


Useful Linux Commands

Remove file
rm <file name>

Remove a directory (folder)
rm -r <directory name>

Useful Docker Commands

Remove everything created in a yaml file
Navigate to the directory where your yaml file is
Then run sudo docker compose down
Next run sudo docker volume ls - this will show any volumes
If there are volumes showing, run sudo docker compose down -v to remove them as well

Otherwise you can run docker container ls -a to list all your containers, find the container/s you want to remove, and run docker container rm <container name>
If it has volumes you can remove those as well: run docker volume ls, find the volume you want to remove, and run docker volume rm <volume name>

Storage Solutions

Guides for setting up self-hosted storage solutions.

Storage Solutions

GlusterFS Setup

Requirements

GlusterFS is only supported on 64-bit systems, so make sure the host machine running GlusterFS and any machines utilising the share are also running 64-bit systems.

This guide was created for Ubuntu 22.04 (Jammy)

Steps to follow

Run the following commands on all systems that will be utilising the share, as well as the host system.

Adding Hosts to /etc/hosts

We want to make sure our machines can talk to each other via their host names, this can be done by editing and adding the following.

Edit /etc/hosts

sudo nano /etc/hosts

Add the IP addresses and hostnames of the machines that will be using GlusterFS. This is the example from my GlusterFS host machine; remember to add these entries on all your nodes that will be using GlusterFS.

127.0.0.1 localhost
127.0.1.1 elzim

192.168.68.109  elzim
192.168.68.105  pi4lab01
192.168.68.114  pi4lab02

Installing GlusterFS

Setup the GlusterFS repository. At the time of writing this, GlusterFS-10 is the latest release.

sudo add-apt-repository ppa:gluster/glusterfs-10

Run an apt update to update the repositories.

sudo apt update

Install GlusterFS

sudo apt install glusterfs-server -y

Start and enable GlusterFS

sudo systemctl start glusterd
sudo systemctl enable glusterd

Peering the Nodes

This command is only run on the host machine

Before running the peering command, make sure you run it under sudo via 

sudo -s

The following command will peer all the nodes to the GlusterFS Pool, this is using the hostnames for my environment, make sure to change this to suit yours.

gluster peer probe pi4lab01; gluster peer probe pi4lab02;

Running the below command will show your hosts now connected to the pool.

sudo gluster pool list


Creating the Gluster Volume

Let's create the directory that will be used for the GlusterFS volume - This is run on all nodes
Note: You can name "volumes" to anything you like.

sudo mkdir -p /gluster/volumes

Now we can create the volume across the Gluster pool - This is run just on the host

sudo gluster volume create staging-gfs replica 3 elzim:/gluster/volumes pi4lab01:/gluster/volumes pi4lab02:/gluster/volumes force

Start the volume by running the below command

sudo gluster volume start staging-gfs

To ensure the volume automatically mounts on reboot or other circumstances, add an entry for it to /etc/fstab on all machines that use the share.
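The volume can be mounted at boot with an /etc/fstab entry pointing at the local Gluster daemon. A sketch, assuming the staging-gfs volume and the /mnt mount point used in this guide (run on every machine):

```shell
# Append an fstab entry so staging-gfs mounts at /mnt on boot.
# _netdev delays the mount until the network is available.
echo 'localhost:/staging-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab

# Mount it now without rebooting
sudo mount.glusterfs localhost:/staging-gfs /mnt
```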

To verify that the GlusterFS volume is successfully mounted, run the command: df -h.

localhost:/staging-gfs                 15G  4.8G  9.1G  35% /mnt

Files created in the /mnt directory will now show up in the /gluster/volumes directory on every machine.

Logging

Documentation on logging, such as Grafana and Node Exporter

Logging

Setting Up Node Exporter

Download Node Exporter

Begin by downloading Node Exporter using the wget command:

wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz

Note: Ensure you are using the latest version of Node Exporter and the correct architecture build for your server. The provided link is for amd64. For the latest releases, check here - https://github.com/prometheus/node_exporter/releases

Extract the Contents

After downloading, extract the contents with the following command:

tar xvf node_exporter-1.7.0.linux-amd64.tar.gz

Move the Node Exporter Binary

Change into the extracted directory and copy the node_exporter binary to /usr/local/bin:

cd node_exporter-1.7.0.linux-amd64
sudo cp node_exporter /usr/local/bin

Then, move back out and clean up by removing the downloaded tar file and its directory:

cd ..
rm -rf node_exporter-1.7.0.linux-amd64 node_exporter-1.7.0.linux-amd64.tar.gz

Create a Node Exporter User

Create a dedicated user for running Node Exporter:

sudo useradd --no-create-home --shell /bin/false node_exporter

Assign ownership of the node_exporter binary to this user:

sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

Configure the Service

To ensure Node Exporter automatically starts on server reboot, configure the systemd service:

sudo nano /etc/systemd/system/node_exporter.service

Then, paste the following configuration:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Save and exit the editor.

Enable and Start the Service

Reload the systemd daemon:

sudo systemctl daemon-reload

Enable the Node Exporter service:

sudo systemctl enable node_exporter

Start the service:

sudo systemctl start node_exporter

To confirm the service is running properly, check its status:

sudo systemctl status node_exporter.service
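With the service running, Node Exporter serves plain-text metrics on its default port 9100. A quick check from the server itself:

```shell
# Metric names such as node_cpu_seconds_total should appear
# near the top of the scraped output.
curl -s http://localhost:9100/metrics | head -n 15
```

This is the same endpoint the Prometheus scrape job configured earlier will pull from.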