Hardware: What are you using to run openHAB?

I think most folks using the micro platforms are using them as Runtime only. In this mode, they're mounting the remote FS and editing on their conventional machines.
On the subject of remote editing and updating, Dropbox is another option:

[url=https://github.com/openhab/openhab/wiki/Dropbox-IO]https://github.com/openhab/openhab/wiki/Dropbox-IO[/url]

@Electron Employer: Looks like the Cubieboard4 only consumes 12.5 Watts :slight_smile: Your electrons are working overtime at 30W.

I recently started on openHAB and have been testing on a full server. It's almost production-ready now, so I moved it to a Raspberry Pi 2.

The RPi 2 is usable, but too slow for my taste; rules which took milliseconds to process on the full server now take 2-3 seconds.

This is probably because our environment is fairly large. We have about 70 devices total with approximately 40 rules running a large variety of functions and bindings (Pushover, Nest, DSC, Plex, Astro, System Monitor, some custom stuff). Persistence is being handled by MySQL on an external HDD, but I've also tested against RRD4J, which was even worse in terms of performance.

I feel thereā€™s probably room for optimization, but Iā€™m going to also experiment with some slightly better hardware (Odroid XU4) to see what the difference might be.

@silencery,
I'd be interested in knowing more details about your config, especially when it comes to the performance elements (e.g. [tt]vmstat 5[/tt], or similar).

I've not run on an RPi2 (yet), but I've seen a number of people have issues with the IOPS achievable via the SD card interface on those devices.

That said, I have ~1000 items in my setup, most with RRD-based persistence. The bulk of these are coming/proxied from my MiOS/UI5 system (Alarm, ZWave etc), but ~150 are coming from @watou's native Nest Binding, and there are ~100 coming from the native Harmony, Astro, Weather and MQTT Bindings. Most of the time, this system is idle... very idle.

I see bursts on my ODroid C1 system when its RRDs are written, as well as when some of my custom Rules (for Energy calcs) kick in, but they're not impacting Scene timing (I have Debug-level logging enabled, and I write timing data for each Rule executed).

If you go the ODroid route, it would be worthwhile picking up their eMMC module, and the battery for the on-board RTC. I replaced the U1 SD card with one of these, and the difference is noticeable (and the U1 MicroSD card is also noticeably faster than a regular Class 10 MicroSD card).

The XU4 looks like a nice module.

1000 items? Wow :slight_smile:

That's impressively huge, AND you have debug logging on? Your IO must be taking a decent hit. That's very interesting info to share; it sounds like I do indeed have a lot of room for optimization.

Yes, I've been concerned about storage bottlenecking the RPi2, so I moved the root FS off the SD card onto a conventional HDD. Results are still the same, so it seems there's something else causing the performance hiccups. I've also taken care to watch the openHAB output to fix anything that may have been causing errors or warnings.

Here's a paste of the vmstat output. Everything seems to be OK. It's not even hitting the swap:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 344088 134824 283388    0    0     0     4  434  297  2  0 98  0
 0  0      0 344236 134824 283396    0    0     0    22  620  486  1  1 98  0
 1  0      0 343820 134828 283396    0    0     0     9  466  333  2  0 98  0
 0  0      0 344216 134828 283404    0    0     0     4  647  500  2  1 98  0
 0  0      0 343712 134828 283416    0    0     0     4  468  323  3  0 96  0
 0  0      0 344020 134828 283424    0    0     0     4  618  482  2  0 98  0
 0  0      0 343924 134828 283424    0    0     0     4  450  307  2  0 98  0
 0  0      0 344176 134828 283432    0    0     0     8  622  484  1  1 98  0
 0  0      0 344044 134828 283432    0    0     0    19  445  320  2  0 98  0
 4  0      0 344288 134828 283440    0    0     0     5  585  441  2  1 97  0
 0  0      0 343884 134832 283436    0    0     0     4  516  374  2  0 98  0
 3  0      0 343872 134832 283448    0    0     0     4  450  307  2  0 98  0
 0  0      0 344068 134832 283456    0    0     0    14  773  590  5  1 95  0
 1  0      0 343712 134832 283456    0    0     0     7  444  309  1  0 98  0

I should say the slow triggers are not consistent; sometimes things happen quickly, sometimes they take forever. I thought it was just an issue of the objects not being cached in memory yet, but the slowness happens even if the server has been running for a while. I'll need to find some time to trace it.

The sluggishness comes up in various areas, but one example is a rule I set up to dim the lights when a movie is playing on the Plex client. The associated actions are pretty simple: evaluate whether it's day or night, send a notification to Plex via JSON, and dim the lights if a movie is playing during the evening. When I fire the trigger (start or resume a movie), this action can take anywhere from near-instantaneous to 5-6 seconds to complete. Odd.
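For reference, the rough shape of that Rule is something like the following sketch (the item names here are invented for illustration; my actual items and the Plex/notification plumbing differ):

[code]rule "Dim lights for movie"
when
    // Hypothetical item fed by the Plex binding
    Item Plex_PlayState changed to "PLAY"
then
    // Only dim during the evening (e.g. an Astro-driven night switch)
    if (Night_Time.state == ON) {
        sendCommand(LivingRoom_Dimmer, 10)
    }
end
[/code]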

By the way, your suggestion for adding locks on shared objects in the other thread was awesome. It improved consistency quite a bit. Now I just gotta focus on performance.

Thanks!

@silencery

Actually it's not that much; a Vera Device expands to ~10-15 Items in openHAB, so the #'s rack up quickly :wink:

Here's what mine looks like when the system is just ticking along:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 101368 140900 285164    0    0     0  1664 9501 3803  4  4 88  3  0
 0  0      0 101496 140900 285164    0    0     0    21 8349  547  0  0 99  0  0
 0  0      0 101504 140900 285164    0    0     0     1 8307  451  0  0 100  0  0
 0  0      0 101472 140900 285164    0    0     0     0 8312  447  0  0 100  0  0
 0  0      0 101472 140900 285164    0    0     0    31 8313  463  0  0 100  0  0
 0  0      0 101440 140900 285164    0    0     0     2 8323  461  0  0 100  0  0
 0  0      0 101496 140900 285164    0    0     0     0 8334  463  1  0 99  0  0
 0  0      0 101000 140900 285176    0    0     0     0 8340  468  5  0 95  0  0
 0  0      0 101224 140900 285176    0    0     0     3 8337  487  1  0 99  0  0
 0  0      0 101232 140900 285176    0    0     0     0 8322  468  0  0 100  0  0
 0  0      0 101200 140900 285176    0    0     0     0 8345  490  0  0 100  0  0
 0  0      0 101200 140900 285180    0    0     0     3 8315  461  0  0 99  0  0
 0  0      0 100772 140908 285180    0    0     0  1671 9385 3484  4  5 88  4  0
 0  0      0 100860 140908 285180    0    0     0    25 8333  510  0  0 99  1  0
 0  0      0 100836 140908 285180    0    0     0     5 8310  454  0  0 100  0  0
 0  0      0 100836 140908 285180    0    0     0     0 8460  811  0  0 99  0  0

Of course, during the busy times it looks a lot different. The largish figures under Blocks Out (bo) are the RRD sync, which is set to write each minute, in addition to each change. Looking at the Wait (wa) stat, it only has a small impact on the system during that time.
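For context, that write schedule comes from a persistence strategy along these lines (a sketch of an rrd4j.persist file; the wildcard is illustrative, not my exact item list):

[code]Strategies {
    // Quartz cron expression: flush once a minute
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    // Persist everything on change, plus the once-a-minute sync above
    * : strategy = everyChange, everyMinute
}
[/code]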

For my logging, I have DEBUG enabled with both the local FS and SYSLOG as outputs. The latter is another RPi where I centralize all my device logs, and it's on an attached USB drive where I offline/archive logs for months (Router, openHAB, Mac, etc). In the worst case, I could turn off openHAB's DEBUG logging and I'd still have a copy on the other machine.

Your #'s aren't anything to worry about, so I'd expect the blockage to be somewhere else. It would be worthwhile double-checking the locks you've added to ensure they're tightly scoped around "just" the bit that needs the consistency lock. I'd also look at those Rules and create distinct locks for the bits that need them, just in case there's any unneeded sharing going on.
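To illustrate the scoping point, here's a minimal sketch of the pattern (lock and item names are made up for the example): the lock is taken only around the shared-state section, and released in a finally block so a failing Rule can't leave it held.

[code]import java.util.concurrent.locks.ReentrantLock

// One lock per shared resource, not one global lock for everything
val ReentrantLock sceneLock = new ReentrantLock()

rule "Update shared scene state"
when
    Item Some_Trigger received command
then
    // Work that doesn't touch shared state stays outside the lock

    sceneLock.lock()
    try {
        // Only the section that needs consistency runs under the lock
        postUpdate(Scene_State, "active")
    } finally {
        sceneLock.unlock()
    }
end
[/code]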

Slow Rule executions will occur the first time each Rule is executed after startup (or after a Rule has been edited, causing a reload). These timings are often large, even in fast-CPU environments, and don't seem to be avoidable, but they're [largely] a one-time cost - unless you're hot-editing Rules frequently.

It may also be that your vmstat #'s are quite different during Scene execution, so you may want to create a script to gather the logs over a longer period so they can be looked at in more detail.

[quote]I've also taken care to watch the openHAB output to fix anything that may have been causing errors or warnings.[/quote]
Definitely good practice... especially after you've just written any new Rules ;)

I put timing hooks into quite a few of my scenes, which allows me to go back and see if there are long-running things "after the fact".

The typical form of these is:

[code]rule "..."
when
    ...
then
    var t = now

    ...

    var long x = now.getMillis - t.getMillis
    logInfo("eagle", "PERF Pull-Data-from-Eagle elapsed: " + String::valueOf(x) + "ms")
end
[/code]

These have come in VERY handy for all sorts of issues.

The other thing I changed a lot of is my use of "[tt]Item ... received update ...[/tt]" in Rules. I've converted almost all of these to use "[tt]Item ... changed[/tt]" or "[tt]Item ... changed to ...[/tt]".

I used to put all the logic in the Rule itself, and going this route has helped a bunch (as the system is now doing the comparisons, instead of the Rule itself).
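As a sketch of that conversion (with a hypothetical item name), the first trigger fires on every update and forces the Rule to do its own comparison, while the second only fires on an actual state transition:

[code]// Before: fires on every update, even when the value hasn't changed,
// so the Rule body has to do the comparison itself
rule "Door opened (update)"
when
    Item Front_Door received update
then
    if (Front_Door.state == OPEN) {
        logInfo("door", "Front door opened")
    }
end

// After: the event bus does the comparison; the Rule only runs
// on a real transition to OPEN
rule "Door opened (changed)"
when
    Item Front_Door changed to OPEN
then
    logInfo("door", "Front door opened")
end
[/code]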

Running:

Hardware:

  • ASRock Q1900-ITX - http://www.asrock.com/mb/Intel/Q1900-ITX/
  • 8 GB of RAM
  • 2x 120 GB Kingston SSD drives (I know, but it's a prototype for now)
  • Z-Wave bindings: Vera Lite, 2x Vera Edge, Aeon USB stick (all Z-Wave devices have a direct connection to a Vera, so everything is instant, and large commands are pretty fast)

Software:

  • Proxmox 4.3
  • ZFS mirror on the SSD drives
  • one container each for: MySQL, MQTT, Apache+PHP+Node.js for custom interfaces, Nagios, an email forwarder, Node-RED, openHAB, and others

Usage at idle: 1.4 GB of RAM + 0.4 GB for Proxmox, 1% IO delay

Will expand this into a high-availability 3-node Proxmox cluster, with only one node required to stay up for automation and remote control to work, once I get the hang of openHAB.

That's one heck of a config. Have you worked out what's eating the RAM? OH isn't usually all that big...

He's also running MySQL, MQTT, Apache+PHP+Node.js for custom interfaces, Nagios, an email forwarder and Node-RED apart from OH, which goes a long way towards the memory usage numbers.

Yeah, I saw that, but was interested in which component was hogging the memory. There are a bunch of folks using OH and MQTT together on small-footprint HW... so I'm guessing Apache, but a breakdown would be handy.

My guess would be both MySQL and the Apache PHP combination.

I am running it on my iMac with a 3.5 GHz CPU, SSD drive and 32 GB RAM :D. I just installed the Hue plugin. It may take me a few weeks to set everything up :frowning:

I'm running it on my i7 Mac mini Server with Sierra. I only just started, so I haven't got to grips with it yet.

My Mini Server was already on to run various things such as CCTV software and the HomeKit and Alexa bridge software, so I decided I'd give openHAB 2 a go.

Running it flawlessly on an RPi2 together with an MQTT broker, Apache web server and MySQL database server.
I'll add openLuup to it soon.

Why would you run openHAB and openLuup?

To check how much can be done with one hardware system (the RPi). I need openHAB today to interface with Nikobus.