Throttled! (high change rate)

I have a couple of sensors based (partially) on a geofence.
When I pass a certain point while driving, they trigger some operations.
I haven’t had many opportunities to test these sensors yet, but so far they have worked quite well.
But today I got the message “Throttled! (high change rate)” on two of my ReactorSensors, and I don’t really know what to do with it, nor how to clear it from the sensors (a reset, probably?)

What does it mean, and why did it appear? Both sensors have simple logic: one checks a switch status and the house mode at the point of entering the geofence; the other checks only a switch status.
I don’t get which part of the checked devices would have a “high change rate”.

Reactor does rate-limiting on device updates and overall (ReactorSensor) state changes, to prevent a runaway device or an accidental loop in logic configured within or between ReactorSensors from overwhelming the system.

The “high change rate” warning means that your ReactorSensor (overall) was toggling between tripped and untripped states at too high a rate. This is controlled by the MaxChangeRate state variable on the ReactorSensor, which is the number of state changes allowed per minute before throttling kicks in. The default is an admittedly conservative 5. Review of the Logic Summary’s event history is usually sufficient to establish what condition(s) is (are) driving the high change rate. Loops are also possible, so be on the lookout for those. If that review doesn’t reveal any problems and it’s possible and reasonable for the conditions to be changing that frequently or more, you can safely set the MaxChangeRate value higher.

There is also a possible “high update rate” warning, which means that the devices watched by your ReactorSensor are changing with high frequency. This is controlled by the MaxUpdateRate state variable on the RS, again in events per minute, and the default is 30.
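If the event history shows the changes are legitimate and you just want more headroom, you can raise either limit, roughly like this from Lua (a sketch only; the service ID and device number here are placeholders for illustration, substitute your own ReactorSensor’s):

    -- Sketch: raise the throttling limits on a ReactorSensor.
    -- Device number 123 is a placeholder; use your ReactorSensor's device number.
    local RS_SID = "urn:toggledbits-com:serviceId:ReactorSensor"  -- assumed service ID
    local devNum = 123

    -- Allow up to 10 tripped/untripped changes per minute (default 5)
    luup.variable_set( RS_SID, "MaxChangeRate", "10", devNum )

    -- Allow up to 60 watched-device updates per minute (default 30)
    luup.variable_set( RS_SID, "MaxUpdateRate", "60", devNum )

You can also edit the same state variables from the device’s Advanced &gt; Variables tab in the UI.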

Both forms of throttling are self-resetting, which is to say, when the condition settles down, the RS will resume normal operation.

Interesting, because both those sensors are still showing this message, even though any changes should have stopped at least 5 hours ago (I left the geofence, and the switch state and house mode didn’t change).

I tried to check the logs, but I can’t see where the problem is. It’s probably the geofence, as the last part of the log is this statement:
“03/31/19 15:34:04 devicewatch: name=Reactor, var=IsHome, device=599”
repeated 8 times with the same date and time.

OK, can you go to the Tools tab, and go through the “log snippet” process there, and PM me that and a Logic Summary. I’ll take a look. The expected change frequency for IsHome is once per minute, so that looks weird.

Edit: well, wait, it would be unusual unless you have 8 ReactorSensors with geofence conditions, in which case it would be expected. In any case, let’s dig in and see.

Hi,

Make sure you do not have the test tools options checked. That caused this for me.

Cheers Rene

Rene, your problem is a bit different, but is fixed in the 2.4 release now available.

OK, I PM’ed you the log snippet and Logic Summary.
I don’t have 8 sensors based on geofence: 5 in total, 4 based on geofence.
8 is 4 multiplied by 2, but I have no clue whether that is a coincidence or not.
In the meantime the “Throttled” message has disappeared.

OK. That all makes sense. The event log actually shows two sets of 4 messages with two different timestamps, not 8 with the same one, so the 4 messages reconcile with the number of ReactorSensors using geofencing.

I found the issue that causes the repetition of those messages. That in itself should not cause the rate-limiting to trip. While the first message will cause an update/eval to be queued, the remaining messages will do nothing, as the request is already queued at that point.
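(For the curious, that deduplication works roughly like the sketch below. This is just an illustration of the idea, not Reactor’s actual code, and the function names are made up.)

    -- Illustrative sketch only, not Reactor's implementation: repeated watch
    -- callbacks don't pile up work, because an evaluation is queued only if
    -- one isn't already pending for that sensor.
    local pendingEval = {}   -- sensors that already have an evaluation queued
    local evalQueue = {}     -- sensors waiting to be evaluated

    local function requestEval( sensorId )
        if not pendingEval[ sensorId ] then
            pendingEval[ sensorId ] = true
            table.insert( evalQueue, sensorId )
        end
        -- Further requests while one is already pending do nothing.
    end

    local function runQueuedEvals()
        while #evalQueue > 0 do
            local sensorId = table.remove( evalQueue, 1 )
            pendingEval[ sensorId ] = nil
            -- evaluateSensor( sensorId )  -- hypothetical: run the sensor's condition logic
        end
    end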

In any case, I’ve posted the fix to the GitHub stable version of L_Reactor.lua. If you wish, you can download that file and push it up to your system.

Otherwise, everything logged looks normal and good. If the rate-limit trip happens again, try to capture your Logic Summary and just grab the whole Vera log file (let me know and I’ll give you a separate URL where you can upload the large file).

OK, I’ll upload the fix and see (the next opportunity to test is on Friday).
How can I get the whole Vera log file?

Best way is to use SCP (via *nix command line scp command, WinSCP on Windows, etc.) to grab /var/log/cmh/LuaUPnP.log.

Second best is to fetch http://your-vera-ip/cgi-bin/cmh/log.sh?Device=LuaUPnP in a browser, then copy-pasta to a plain text editor (not Word/LibreOffice, etc., but NotePad++, notepad.exe, BBEdit, etc.).

Most of the time I have only remote access to the controller :frowning:
Maybe it is possible via AltUI?

Ah… OK. Plan C, then… on the Tools tab of your ReactorSensors is a Troubleshooting section you’re probably already familiar with. If you follow the process there to get a log snippet, that might work–it only extracts a recent 1000 lines of the log file starting from when the sensor restarts, so the challenge is to get the problem that causes the throttling to happen in that window. Full and complete log is better, but in a pinch, this often provides enough.

OK, let’s see how it behaves after your fix; if there is still a problem, I’ll try to get the snippet as soon as possible.
These two sensors are set to trigger when I’m leaving the location where the controller is, but I have two others that work when I’m going there (so far no issues with those two, but I set all of them up very recently). If those two misbehave as well, then I’ll have the opportunity to download the log directly from the Vera.

Interesting…
It seems to be OK now; at least no “Throttled!” warning appeared today.
But I still have issues with the geofence-based ReactorSensors.
For example, one did only half of its programmed job when it tripped, and it still shows the geofence condition as true, even though I’m about 30 km from the geofence area (which is correctly reported by the app: I was notified that I left the geofence the sensor was set for).
I’ve checked another sensor based on the same geofence, and it also reports that I’m in the geofence, which is not true.

The Logic Summary reports the content of the geofence data as well as the condition states, so that would be handy.

I’ve PM-ed it to you.
In the status section of the sensor I have:
Geofence — (in) true as of 21:28:06

In the Logic Summary, at the beginning, there is info that two geofences have status “in”: the one where I’m sitting now (Home), and the one which was set to trigger the sensor.

That would be a reflection of Vera’s data, so for some reason, the exit state for the other geofence never appeared in user_data. I’ve seen that happen. Unfortunately, there’s not really anything I can do about that. I think one of the possible shortcomings of Vera’s current geofence implementation is that it doesn’t seem to reconcile against itself; that is, it seems to handle each geotag individually, entry and exit, so if an exit is missed, a later entry into another geofence doesn’t cause it to recheck all of the others and catch up with (resend, or send late) the missed exit.

There were supposed to be significant improvements to geofencing in the latest beta, but the release notes merely talked about removing geofences for deleted users, so I am unconvinced there will be much improvement (@Sorin et al. have not responded to my specific question).
Personally, I’m not risking a beta on my only system.

Cheers

C

I know from my own testing of this during development that there are a lot of factors that go into successful geofencing, and many of them are out of Vera’s control. For example, there are settings on the phone in Android that affect the performance of the app, and the user can unwittingly apply these at any time. Power-saving mode can cause the app to be put to sleep when in the background, or reduce the frequency and accuracy of location services reports; turning off mobile data or applying a data cap, airplane mode, restricting network usage on apps running in the background, rebooting the phone (esp. while in motion), etc., can all impede the operation of the app and cause geofencing failures.

And then there’s Vera’s cloud, and even your own Internet connection, standing in between.

In my view, the dependency on an app on a mobile device connected through a cloud is inherently fragile. But, there really aren’t a lot of other ways to do it, either, so we kind of have to take the good with the bad. I’ve also learned that Vera’s (Android Beta) mobile app does a pretty good job for its part, when it’s allowed to work. Since making some of the foregoing discoveries and adjusting my settings for the app carefully, I’ve had few problems, but I also work from home, so I don’t get frequent opportunities to test as a natural consequence of my lifestyle.

I’m confident enough with the current Beta firmware that I’m going to install it on my home system this weekend. I can’t test it well enough outside of that environment to be truly useful to Vera, anyway. Wish me luck!

But seriously, I’ve been running the Beta Android app on my phone for a while now, and it does a pretty good job on the current released firmware (7.0.27). But you have to mind those phone settings. I think the combination of improvements in the mobile apps, improvements in Vera’s cloud infrastructure, and improvements in firmware, whatever they may be, will yield progressively better results. Never perfect, for sure, but demonstrably good and very usable.