openLuup on Synology via Docker

Whilst I don’t have a solution to run openLuup on Docker, I do have a way to run openLuup very easily on a Synology NAS. All you need is a reasonably powerful NAS and an installation of phpVirtualBox, which can be found here: phpVirtualBox download | SourceForge.net. You can then import the ready-made VirtualBox image from Cudanet and update from there.

I have had this running for over a year on a self-built Xpenology NAS with 16 GB RAM.

cheers,

Jacques

I have it running in a Docker container. When I go to http://dockerip:80 I get “forbidden”. Should a website be displayed, or should I do anything else before ALTUI is displayed? :-[

403 Forbidden
nginx/1.6.2

Edit… following the manual I get:
http://:3480/data_request?id=altui
(this takes about one minute and requires, of course, an internet connection.) If you want
to watch it working you can tail the log with tail -f LuaUPnP.log
This should load a functional ALTUI interface which should run when you access the URL
http://:3480/data_request?id=lr_ALTUI_Handler&command=home#
You will immediately be prompted to upgrade to the latest version (you don’t have to;
again, it takes about a minute and will restart openLuup.) You should be able to exercise
all the ALTUI functionality through menus, etc., and simply see a single device (ALTUI) in
the system.

But I only see:
AltUI, amg0, Waiting Initial Data

and that never changes…

Is anyone running openLuup in a VM on a late-model Synology NAS, such as the DS218+ (which has a virtual machine manager built-in)?

I’m trying to decide whether this rabbit hole is worth my going down (since I at least have experience installing Ubuntu in all kinds of places).

  • Libra

So you found an old thread … :slight_smile:

And yes, I’m running openLuup on my DS412+, Dockerized. I use (shameless plug) my own Docker image for it, vwout/openluup. Read more about it in openLuup on Docker (Hub).

It works out of the box: create a container and off you go. In that case you may, however, lose data (like your configuration and installed plugins) when upgrading the image. Synology unfortunately does not allow you to create volumes for persistence of data, at least not via the GUI. With console access (SSH) this is of course possible.
The easiest approach in that case is to use docker-compose. If you have root access, you can use the following docker-compose.yml file:

version: '2.3'

services:
  openluup:
    image: vwout/openluup
    ports:
      - "3480:3480"
    restart: unless-stopped
    volumes:
      - type: volume
        source: cmh-ludl
        target: /etc/cmh-ludl/

volumes:
  cmh-ludl:
    name: openluup-env
    labels:
      org.label-schema.description: "openLuup environment with plugins and userdata"
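For reference, a sketch of how the compose file might be deployed over SSH (assuming docker-compose is installed on the NAS and the file above is saved as docker-compose.yml in the current directory):

```shell
# Start the stack in the background; the named volume openluup-env
# is created automatically on first run.
sudo docker-compose up -d

# Check the container status and follow the openLuup logs.
sudo docker-compose ps
sudo docker-compose logs -f openluup
```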

I am trying to get this running on a Synology 212+. I had it working while testing a couple of weeks ago, then deleted everything and started over, and now I can’t get it to work again. The Docker image runs from the GUI and I can connect to the console. I am trying to install the Alternate UI: I type “latest” in the update box and click update, wait for 5 minutes, reload Luup, and hard-refresh the browser. Under version it says GitHub.latest. When I try to access it I get the message: No handler for data_request?id=lr_ALTUI_Handler.
Is this how it is supposed to install? Any ideas what I’m doing wrong?

If you started openLuup from a fresh install, I think you are running into the issue addressed in the recent topic New install on Raspperry Pi, AltUi not loading. The root cause is described by the author in this post: New install on Raspperry Pi, AltUi not loading - #4 by akbooer - openLuup - Ezlo Community, which suggests using the development branch.

My openluup Docker image uses the master branch of openLuup, which means it does not contain the fix yet.
@akbooer: any chance that the GitHub fix in the AltAppStore will be in master soon (e.g. commit 2020.04.16 ‘master release candidate’)?

Yes, very soon. Working on the final tweaks to a 5th Anniversary edition, but it’s not clear to me why you’re not already using the development version.


Hi @vwout, @akbooer and everyone else!

I have successfully gotten openLuup to run on my Synology NAS.
I tried the CLI way, just following the Docker sections “Running” and “Persisting configuration”, and now I have openLuup running with persistence! :slight_smile:

However, I would like to transfer my current Vera Reactor backup file to my new openLuup Reactor, and I can’t find where I should put the file in order to restore it according to Rigpapa’s instructions (Backup & Restore - Reactor).

So… my question is basically: what should I do to upload my Reactor backup file to a place where I can access it from Docker/openLuup/ALTUI? Where do I find this openluup-env volume in the file system of my NAS?

Thanks for all the effort you have put into this, both @vwout, @akbooer and Rigpapa! I’m really impressed by how fast the communication is between openLuup and the Vera using the VeraBridge!

Thanks in advance for the help!
Tim


I’m glad that things are working well for you, having got openLuup up and running.

I am no Reactor expert, but @rigpapa is the author.

I do note the following in the link you sent:

The Restore button restores the last configuration backup (stored as reactor-config-backup.json in /etc/cmh-ludl/ on Vera, or in the directory in which Reactor is installed on openLuup).

and that seems pretty explicit, but OTOH, I also know nothing about Docker! So let’s wait and see who turns up with some real help…

If memory serves, you use docker cp from outside the container. The question is, what directory path inside the container would be the target? I don’t know; that’s a question for the container creator. You could also docker attach to the container and wander around until you find the container’s plugin directory (running find / -name L_Reactor.lua should expose it pretty quickly).

I’m not aware of a file uploader in ALTUI, so… paging @amg0… is there one? Can there be? @akbooer, does your new built-in UI have one? This would really be the most consistent and user-friendly approach, IMO.

Hi!

Thanks for the quick answers! Yeah, my plan is to do a Reactor backup, download it to my computer, edit the JSON to add the 10 or 100 to the device IDs, upload it to Docker/openLuup/wherever Reactor saves its backup files, and restore the backup, basically migrating my Reactor logic. The Reactor part should work… or at least without changing the device IDs, which would be my plan B.

Thanks!

Don’t edit the JSON. There’s no need. There’s a “Device Repair” tool that will appear in the Tools tab of your ReactorSensor when a broken device is detected in the configuration. That should work much better/easier.

Thanks for the suggestions @rigpapa! Yesterday was the first time I played with Docker, and openLuup for that matter, so your suggestions sound interesting. I’ll wait for @vwout, who built the container, to get back about the storage location and any suggestions. It would in any case be nice to be able to back up the config/plugin files and restore them if you need/want to upgrade the Docker image/container at some point. But if that could somehow be done via the GUI, that would be really nifty.

Oooh, awesome! Big thanks for that! :smiley:

It could easily. There’s nothing to stop you right now from pasting into a Lua Test string and writing it somewhere.

File writing comes with the risk of over-writing something important, though…

The replies by @akbooer and @rigpapa are correct.
The Docker image has openLuup installed in /etc/cmh-ludl/, so that is where the backup should be placed.

The easiest way is to use docker cp <path to backup json on host> <openluup-container-id>:/etc/cmh-ludl/.

For this reason, and for easy plugin development, I’m using a bind to /etc/cmh-lu/ (see docker-openluup/docker-compose.yml at master · vwout/docker-openluup · GitHub), so you can easily exchange files between the host and the container. In the case of the Reactor backup, you would only have to copy the file to /etc/cmh-ludl/ using an interactive console - unless Reactor also checks /etc/cmh-lu/ for the backup JSON (I don’t know whether that works).

Reactor expects the backup file to be in the same location as L_Reactor.lua, wherever that may be.

I used docker export relaxed_pascal > openluup.tar to get a dump of the filesystem of the running container. I wanted to see where I would find the Reactor files, but searching through the whole .tar file has only given me Reactor references in L_VeraBridge.lua and devices.lua, both under the path etc/cmh-ludl/openluup/. I could not find the file ‘L_Reactor.lua’, nor the Reactor backup file, in the archive.

I also tried modifying some Reactor logic (adding a blank sensor) and exported the file system again, but the size of the tar archive is identical. This leads me to believe that the files are stored in my Docker volumes. Running docker volume ls gives me:

local               openluup-backups
local               openluup-env
local               openluup-logs

Can I somehow access them / copy to them / see what is in them, @vwout? Or should I start from scratch and do something different? I currently run the following to start the container:

sudo docker run -d -v openluup-env:/etc/cmh-ludl/ -v openluup-logs:/etc/cmh-ludl/logs/ -v openluup-backups:/etc/cmh-ludl/backup/ -p 3480:3480 vwout/openluup:alpine

but I have also tried the following:

sudo docker run -d -v /volume1/docker/openluup-env:/etc/cmh-ludl/ -v /volume1/docker/openluup-logs:/etc/cmh-ludl/logs/ -v /volume1/docker/openluup-backups:/etc/cmh-ludl/backup/ -p 3480:3480 vwout/openluup:alpine

This command should bind the host filesystem folders to the Docker container’s openLuup folders, if I understand these things correctly, but it doesn’t work. The container just starts and then stops…

EDIT: Just a quick update; I went ahead and ran sudo docker cp reactor-config-backup.json a22b6b7e32bf:/etc/cmh-ludl/. Reactor in openLuup now sees the new backup file and I can restore all of my logic. Happiness! Still feels like magic, and I’m not sure how much of a pain it will be if the Docker environment/container or NAS crashes one day, but for now it works.

Is there any way I could do the reverse, i.e. copy EVERYTHING from a22b6b7e32bf:/etc/cmh-ludl/ to a folder on the NAS? Basically a manual backup that I could copy back in the future if I need to?

Thanks again for any input and/or suggestions!


In your Docker setup you are using volumes to preserve data. The volume openluup-env is mounted at /etc/cmh-ludl/. This means the files stored in /etc/cmh-ludl are kept in the volume openluup-env and not in the container created from the image vwout/openluup:alpine.
This is why e.g. L_Reactor.lua is not in the tar file created by docker export: the export command does not export the contents of volumes.

The reason you do see some Lua files in /etc/cmh-ludl/ in the tar file is that my image contains openLuup. Upon mounting a new, empty volume, Docker copies the files at the mount destination into the volume. Files that are added later, e.g. when you install the Reactor plugin, are stored on the volume and not in the container, nor (of course) do they modify the source image.

This also explains why using binds (-v /volume1/docker/openluup-env:/etc/cmh-ludl/) prevents openLuup from starting properly. Unless you manually copied files to /volume1/docker/openluup-env, this folder is empty and does not contain openLuup.
In contrast to new (empty) volumes, Docker does not copy the files from the destination directory into the bound filesystem.
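If you do want a bind for the openLuup environment itself, one workaround (a sketch, not from the thread; the container ID a22b6b7e32bf and the /volume1/docker paths are taken from the examples above and may differ on your system) is to seed the host folder from a running volume-based container first, then re-create the container with the bind:

```shell
# Copy openLuup's files out of the running container into the host
# folder, so the bind mount is not empty (Docker does not populate
# bind mounts the way it populates new named volumes).
sudo mkdir -p /volume1/docker/openluup-env
sudo docker cp a22b6b7e32bf:/etc/cmh-ludl/. /volume1/docker/openluup-env/

# Remove the old container and start a new one using the bind mount.
sudo docker rm -f a22b6b7e32bf
sudo docker run -d \
  -v /volume1/docker/openluup-env:/etc/cmh-ludl/ \
  -p 3480:3480 vwout/openluup:alpine
```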

To see which files are in the volume openluup-env, mount it and run ls. You can mount the volume to any container, but you can also start an interactive shell inside the openluup container that you started from vwout/openluup:alpine, e.g. using docker exec -it <id-or-name of openluup container> /bin/sh.
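For instance, both approaches might look like this (a sketch; “openluup” stands in for your container’s actual name or ID):

```shell
# Option 1: run a shell command inside the running openLuup container
# and list the mounted volume's contents.
sudo docker exec -it openluup /bin/sh -c "ls -la /etc/cmh-ludl/"

# Option 2: mount the named volume read-only into a throwaway
# Alpine container, just to inspect it.
sudo docker run --rm -v openluup-env:/mnt:ro alpine ls -la /mnt
```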

To back up the contents of openluup-env, you could also mount the volume to a secondary container. An alternative is to mix mounts and binds: use a volume mount for the openLuup environment and binds for logs and backups:

sudo docker run -d -v openluup-env:/etc/cmh-ludl/ -v /volume1/docker/openluup-logs:/etc/cmh-ludl/logs/ -v /volume1/docker/openluup-backups:/etc/cmh-ludl/backup/ -p 3480:3480 vwout/openluup:alpine

(or add arbitrary other mounts to other locations)

When you now run the following command, the created tarball will be stored on your filesystem in /volume1/docker/openluup-backups.

docker exec <id-or-name of openluup container> tar -C /etc/cmh-ludl/ -czf /etc/cmh-ludl/backup/openluup-env.tar.gz .

Oh, this explains a lot! I never had any openLuup files in the directory; I thought it would be populated from the image/container.

Your run command mixing mounts and binds works great. However, when I tried running the “backup” part I didn’t get the files:

sudo docker exec c49c8a6d1af1 tar -C /etc/cmh-ludl/ -czf /etc/cmh-ludl/backup/openluup-env.tar.gz
tar: empty archive

As a side note: I can now see files in the /volume1/docker/openluup-logs folder, but nothing in the /volume1/docker/openluup-backups folder for some reason. Perhaps that’s just because no backups exist at this point…?
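The “tar: empty archive” message is most likely because the command is missing a source argument: -C /etc/cmh-ludl/ only changes the working directory, and nothing tells tar what to archive. Appending . (the current directory) should fix it, e.g. sudo docker exec c49c8a6d1af1 tar -C /etc/cmh-ludl/ -czf /etc/cmh-ludl/backup/openluup-env.tar.gz . A minimal local sketch of the same tar behaviour, no Docker required:

```shell
#!/bin/sh
# Demonstrate that "tar -C <dir>" needs an explicit source ("." here):
# without it, tar has nothing to archive. All paths are throwaway.
set -e
DEMO=$(mktemp -d)
echo "hello" > "$DEMO/file.txt"

# With the trailing "." the directory contents end up in the archive.
tar -C "$DEMO" -czf "$DEMO.tar.gz" .

# Listing the archive shows file.txt.
tar -tzf "$DEMO.tar.gz"
```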