+4
Planned

Docker Documentation Additions & Suggestions

Rudhra 6 months ago updated by ovidiu 4 months ago 10

I believe a lot of people can benefit from these additions.

  1. A Docker Compose example with: 
    1. Caddy: the most modern and by far the easiest-to-configure HTTPS reverse proxy,
      with an A+ security rating (nothing has to be installed on the host!).
    2. A fully working OnlyOffice Document Server example.

    Everything with a $ should be customized to your installation (or defined in an .env file with the desired value). 

    version: "2.3"
    services:
      ## __________________
      caddy:
        container_name: caddy-proxy
        image: lucaslorentz/caddy-docker-proxy:ci-alpine
        restart: always
        networks:
          - web-proxy
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - $DOCKERDIR/caddy/caddy_data:/data
          - $DOCKERDIR/caddy/config:/config
        labels:
          caddy.email: $EMAIL
        ports:
          - 80:80
          - 443:443
      ## __________________
      filerun:
        image: afian/filerun
        container_name: filerun
        restart: always
        networks:
          - web-proxy
          - filerun
        environment:
          FR_DB_HOST: filerun-db
          FR_DB_PORT: 3306
          FR_DB_NAME: filerundb
          FR_DB_USER: $USERNAME
          FR_DB_PASS: $PW_INT
          APACHE_RUN_USER: $USERNAME
          APACHE_RUN_USER_ID: $PUID
          APACHE_RUN_GROUP: $USERNAME
          APACHE_RUN_GROUP_ID: $PGID
        depends_on:
          - filerun-db
          - filerun-tika
          - filerun-search
        volumes:
          - $DOCKERDIR/filerun/html:/var/www/html
          - $DATAPOOL/Users:/user-files
        labels:
          caddy: files.$DOMAIN, drive.$DOMAIN
          caddy.reverse_proxy: "{{upstreams 80}}"
          caddy.reverse_proxy.header_up: "Host files.$DOMAIN"
          # Required extra headers
          caddy.file_server: "" # required for fileservers
          caddy.encode: gzip # required for fileservers
          caddy.header.Strict-Transport-Security: '"max-age=31536000;"' # Recommended security hardening for fileservers
          caddy.header.X-XSS-Protection: '"1; mode=block;"' # Recommended security hardening for fileservers
          caddy.header.X-Content-Type-Options: "nosniff" # Seems required to open files in OnlyOffice
          caddy.header.X-Frame-Options: "SAMEORIGIN" # Seems required to open files in OnlyOffice
      ## __________________
      filerun-db:
        image: mariadb:10.1
        container_name: filerun-db
        restart: always
        networks:
          - filerun
        environment:
          MYSQL_ROOT_PASSWORD: $PW_INT
          MYSQL_USER: $USERNAME # must match FR_DB_USER above
          MYSQL_PASSWORD: $PW_INT
          MYSQL_DATABASE: filerundb
        volumes:
          - $DOCKERDIR/filerun/db:/var/lib/mysql
      ## __________________
      filerun-tika:
        image: logicalspark/docker-tikaserver
        container_name: filerun-tika
        restart: always
        networks:
          - filerun
      ## __________________
      filerun-search:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
        container_name: filerun-search
        restart: always
        networks:
          - filerun
        environment:
          cluster.name: docker-cluster
          bootstrap.memory_lock: 'true'
          ES_JAVA_OPTS: '-Xms512m -Xmx512m'
        ulimits:
          memlock:
            soft: -1
            hard: -1
        mem_limit: 1g
        volumes:
          - $DOCKERDIR/filerun/esearch:/usr/share/elasticsearch/data
      ## __________________ OnlyOffice Document Server [Cloud/Office]
      onlyoffice:
        image: onlyoffice/documentserver
        container_name: onlyoffice
        stdin_open: 'true'
        restart: always
        networks:
          - web-proxy
        tty: 'true'
        volumes:
          - $DOCKERDIR/onlyoffice/data:/var/www/onlyoffice/Data
          - $DOCKERDIR/onlyoffice/log:/var/log/onlyoffice
          - $DOCKERDIR/onlyoffice/database:/var/lib/postgresql
          - /usr/share/fonts:/usr/share/fonts
        dns: 9.9.9.9
        environment:
          JWT_ENABLED: 'true'
          JWT_SECRET: $ONLYOFFICEJWT
        labels:
          caddy: office.$DOMAIN
          caddy.reverse_proxy: "{{upstreams 80}}"
          # Required extra headers
          caddy.file_server: ""
          caddy.encode: gzip
          caddy.header.X-Content-Type-Options: "nosniff"

    OnlyOffice: run this once after the container has started, to process the host fonts mapped above:

    docker exec onlyoffice /usr/bin/documentserver-generate-allfonts.sh

    2. Required ElasticSearch preparations on a Debian/Ubuntu host, before running the container. I think this is absolutely essential for this part of the guide: https://docs.filerun.com/file_indexing

    #!/bin/bash
    # ElasticSearch ~ requirements
    # ---------------------------------------------
    # Create the data folder and set permissions
    sudo mkdir -p $HOME/docker/filerun/esearch
    sudo chown -R $USER:$USER $HOME/docker/filerun/esearch
    sudo chmod 777 $HOME/docker/filerun/esearch
    
    # IMPORTANT! Should be the same user:group as the owner of the personal data you access via FileRun!
    sudo mkdir -p $HOME/docker/html
    sudo chown -R $USER:$USER $HOME/docker/html
    sudo chmod 755 $HOME/docker/html
    
    # Raise the OS virtual memory allocation, as the default is too low for ElasticSearch
    sudo sysctl -w vm.max_map_count=262144
    
    # Make the change permanent
    sudo sh -c "echo 'vm.max_map_count=262144' >> /etc/sysctl.conf"
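
    A quick sanity check after running the script above (a sketch; `filerun-search` is the container name from the compose example, adjust if you renamed it):

```shell
# Check the live kernel value; it should print: vm.max_map_count = 262144
sysctl vm.max_map_count

# If ElasticSearch still exits at startup, its log will name the failed
# bootstrap check (e.g. "max virtual memory areas ... is too low"):
docker logs filerun-search --tail 20
```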

    3. Command line tools that should be run regularly (nightly).

    With the commands below, the user does not need to be inside the container and can simply use cron on the host, making the schedule completely independent of the container: when the container is recreated, it still works. Saves a lot of time!

    I think at least a single example showing how to run these from the host, instead of letting users do this within the container, would be a really good addition to this part of the documentation:

    https://docs.filerun.com/command_line_tools

    #!/bin/bash
    # FileRun nightly tasks
    # Note: no -it flags here; cron provides no TTY, so interactive mode would fail
    docker exec -w /var/www/html/cron filerun php empty_trash.php -days 30
    docker exec -w /var/www/html/cron filerun php paths_cleanup.php
    docker exec -w /var/www/html/cron filerun php metadata_index.php
    docker exec -w /var/www/html/cron filerun php make_thumbs.php
    docker exec -w /var/www/html/cron filerun php process_search_index_queue.php
    docker exec -w /var/www/html/cron filerun php index_filenames.php true
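
    To schedule the script above nightly, a crontab entry on the host is enough. This is a sketch: the path /opt/filerun-nightly.sh is only an example, adjust it to wherever you saved the script.

```shell
# Save the script, make it executable, then register it with cron:
#   sudo chmod +x /opt/filerun-nightly.sh
#   crontab -e
# and add a line like this to run it every night at 03:00,
# appending output to a log file:
0 3 * * * /opt/filerun-nightly.sh >> /var/log/filerun-nightly.log 2>&1
```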


    Thank you very much Rudhra for this post!

    I get the following error when attempting to do a docker-compose up

    ERROR: Service "caddy" uses an undefined network "web-proxy"

    I tried adding it manually and it still fails.  I tried pasting this into portainer and it also complains about the networks.

    Thanks,

    Hi tims, I am not able to edit my post anymore. What was missing is this part at the end of the compose file: 

    networks:
      web-proxy:
        driver: bridge
      filerun:
        driver: bridge

    Docker is all about isolation/containerization, and that includes networking. As you can see, a network has been defined for each service, but it is not strictly necessary. You can also decide to remove "networks: web-proxy" and "networks: filerun" from your compose. In that case the services will still use a Docker bridge network, but instead of a separate one they will use the default, which is probably shared by all your other Docker services as well.

    I like to group the Docker services that belong together in one network bridge (I believe Docker recommends this / it is supposed to be good practice).
    Since the Filerun container needs to be exposed via https, it is also in the web-proxy network, which is the network that contains the Caddy service as well as other (non-Filerun related) containers that I expose via https.
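
    To see which networks Compose actually created and which containers joined them, the standard Docker commands help. A sketch: Compose prefixes network names with your project name, written here as the placeholder <project>.

```shell
# List bridge networks; Compose names them <project>_web-proxy, <project>_filerun
docker network ls --filter driver=bridge

# Show which containers are attached to the proxy network
docker network inspect <project>_web-proxy
```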

    As an example you can see my full compose file here. It is identical to the above, plus all the other services I run (and their networks): https://github.com/zilexa/Homeserver/blob/master/docker/docker-compose.yml

    Well, it starts now, but I can't seem to get to it from the outside. Verified port forwarding is working. Have all 3 CNAMEs created, pointed at the correct IP address, and pinged them to verify. Getting an HTTP2 protocol error. I normally use Nginx Proxy Manager; what would I need to change to make that work instead of Caddy?

    Thanks,

    Also, this is a completely fresh install of Ubuntu Server with the latest docker and docker-compose installed. Nothing else is running; NPM runs on one of my other Ubuntu servers. I didn't want anything to occupy any critical ports.

    It could depend on your domain provider. Not all domain providers are the same. I use Porkbun, which adheres to normal standards. I know someone who uses GoDaddy and needed to activate the GoDaddy plugin.

    Anyway, this is off-topic. Best is to check the log of your Caddy container, understand the process, and seek help in the Caddy forum.

    I use Cloudflare for DNS. I run the Ubuntu servers locally on a VMware server 6.5. Tried Ubuntu 20.04 and 21.04; both are not working. Is this for an older version of Ubuntu, maybe?

    The above is a generic, absolute minimum compose. You will have to adjust it to your provider.

    I have put a lot of effort into figuring everything out, as I was a total noob. You clearly haven't even googled Caddy + Cloudflare. Otherwise you would have seen this:
    https://caddyserver.com/docs/caddyfile/directives/tls#examples

    Now all you need is to figure out the syntax of caddy-docker-proxy, for which you can use its GitHub page and there are multiple Cloudflare examples in the closed issues.


    Caddy.community forum also has plenty of examples and documentation, no need to continue the discussion here.

    I tried the OnlyOffice example posted here. If I visit https://office.mydomain.tld I get the welcome text, but if I visit https://office.mydomain.tld/healthcheck/ I get "false" in the browser, and the onlyoffice container logs say:

    [2021-05-29T13:32:09.587] [ERROR] nodeJS - healthCheck error
    error: password authentication failed for user "onlyoffice"
        at Connection.parseE (/snapshot/server/build/server/DocService/node_modules/pg/lib/connection.js:614:13)
        at Connection.parseMessage (/snapshot/server/build/server/DocService/node_modules/pg/lib/connection.js:413:19)
        at Socket. (/snapshot/server/build/server/DocService/node_modules/pg/lib/connection.js:129:22)
        at Socket.emit (events.js:198:13)
        at addChunk (_stream_readable.js:288:12)
        at readableAddChunk (_stream_readable.js:269:11)
        at Socket.Readable.push (_stream_readable.js:224:10)
        at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)

    Any ideas? Google didn't help :-/

    I "deleted" the containers and started from scratch, and everything went smoothly the second time, for whatever reason.