Reactor on Altui/openluup, variable updates condition

Well, here’s your first problem.

I can confirm that the AltUI button for Home → Night works on a basic openLuup installation, after the statutory 30-second delay. It then stays in that mode.

So you must have something else going on.

What options do you have set on VeraBridge for the house mode? If it is controlled by the remote Vera, then changing it will get overwritten.

Also, a useful debugging tool is the log, which might clarify the order of events?

No, you don’t understand. The switching of house modes worked fine, but I think Reactor puts the house mode back to “Home”. And that is because the condition is being triggered from the openLuup side: the sl_CentralScene variable is apparently being updated by openLuup somehow. This shouldn’t happen, because the Hank device is never used.

OK, I haven’t been following this thread previously, but @rigpapa asked for a change in openLuup, which I made.

Indeed, I don’t understand at all your configuration: Hank / Vera / openLuup / …, which is connected to which, and how. Some description of that may help.

It’s really difficult to debug without a proper understanding of the setup, plus, I know nothing about Reactor internals. However, I’m happy to continue to try and resolve your issue along with @rigpapa, but I do need full information in order to do it, if it’s an openLuup issue. Of course, finding out where the problem lies is the first step.

AK

It definitely looks like sl_CentralScene got bumped at 09:47:44; that’s clear to see in the events. The run of the button 1 activity that precedes it has truncated history, so we can’t see what started it. But at least for 09:47:44 we can confirm that the variable was rewritten and the callback triggered.

My best guess here to start is that the lu_status inquiry made by the VeraBridge got an update for sl_CentralScene when it shouldn’t have. Since Vera doesn’t generally send false positives, the other way this can happen in the context of openLuup and VeraBridge is if the status polling of the Vera requests a full update rather than a delta. In this case, it would get sl_CentralScene from the Vera, and write it, even though it hasn’t changed on the Vera.
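To make the delta-vs-full distinction concrete, here is a hypothetical sketch (made-up function name, not actual VeraBridge source). A delta poll passes the DataVersion/LoadTime tokens from the previous response back to Vera, so only states that changed since then are returned; a full poll omits the tokens and gets everything back, which is what rewrites the sl_* variables locally even though nothing changed on the Vera.

```lua
-- Hypothetical sketch (not VeraBridge source) of the difference between a
-- delta poll and a full poll of Vera's status request. A delta poll passes
-- the DataVersion/LoadTime tokens from the previous response, so Vera
-- returns only states that changed since then. A full poll omits the
-- tokens and gets every state back, so every sl_* variable is rewritten
-- locally even though nothing changed on the Vera.
local function build_status_url(base, last)
  if last and last.DataVersion and last.LoadTime then
    -- delta: only changed states come back
    return string.format("%s/data_request?id=status&DataVersion=%d&LoadTime=%d",
      base, last.DataVersion, last.LoadTime)
  end
  -- full: every state comes back, triggering spurious local writes
  return base .. "/data_request?id=status"
end
```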

@akbooer, I think the VeraBridge code bears this out… it looks like it does a full-list query every 20 requests. I guess I didn’t see this in my testing because I didn’t wait long enough, and the Vera I was testing with is very, very quiet (the Hank is its only ZWave device currently, for example). I definitely can see the value of periodic resets (I do this myself in both my automated test tool and my home-grown dashboard UI), but perhaps the frequency could be made a system option/parameter, with 0 meaning no periodic resets?

Yes indeed. So that is, in fact, an undesired side-effect of the recently made change. This is beginning to look like an ugly fix. I could add another exception for the sl_* variables on periodic updates, but then I’m bound to miss a legitimate change.

Needs a bit of thought.

As a side note, an easy fix would be available here, were it not for one of my biggest complaints about the “status” request/response… the response does not contain the timestamp of the states it returns. If it did, @akbooer would be able to simply examine the last known timestamp for each state he maintains against that provided by the status response, and determine if the variable should be (re)set or not. It would make resyncing a much tidier process.
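A minimal sketch of what that tidier resync could look like, assuming per-state timestamps were available in the status response (they are not on Vera today; function name hypothetical):

```lua
-- Hypothetical sketch: how resync could work if the status response carried
-- per-state timestamps (it doesn't on Vera today). A state is (re)written
-- only when the reported timestamp is newer than the locally held one, so
-- a full refresh would no longer cause spurious rewrites of unchanged
-- sl_* variables.
local function should_rewrite(local_ts, reported_ts)
  return reported_ts > (local_ts or 0)
end
```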

Yes, certainly the case.

I haven’t looked at the details of the logic here that the OP is trying to fix, but is there not a work-around based on my comment in response to your original PM on this…?

“…which is why I always watched the timestamp of scene controllers rather than the scene activated!!”

I’m not quite following your comment… which timestamp (what state variable) are we meant to be looking at, and given that you don’t have the timestamps from the original Vera device states, how would that be different?

The “updates” condition in Reactor actually uses the timestamp to determine if a written variable is a no-change rewrite, so my sense is I’m likely already doing what you suggest, but I want to make sure…

		if op == "update" then
			-- State variable written, possibly same value, watch has been called.
			-- Refetch value to get timestamp
			_,vv = luup.variable_get( cond.service or "", cond.variable or "", devnum )
			D("evaluateCondition() service state update op, timestamp=%1, prior=%2, isRestart=%3",
				vv, cs.lastvalue, sst.isRestart)
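(Side note for readers: the second return value of luup.variable_get is the variable’s last-write timestamp, which is why vv above is logged as the timestamp. A simplified, standalone illustration of the same idea, with hypothetical names, not actual Reactor source:)

```lua
-- Simplified illustration (hypothetical names, not Reactor source) of
-- telling a genuine change apart from a same-value rewrite. A watch has
-- fired, so we know the variable was written; the value tells us whether
-- it changed, and the timestamp tells us whether it was merely rewritten.
local function classify_write(old_val, old_ts, new_val, new_ts)
  if new_val ~= old_val then
    return "changed"        -- value differs: a real change
  elseif new_ts ~= old_ts then
    return "rewritten"      -- same value, fresh write timestamp
  end
  return "unchanged"        -- no write observed
end
```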

Sorry for being unclear.

I’m suggesting a workaround not for Reactor, but for the logic it implements in this specific case.

I’m probably not understanding the original issue, I thought that there is a remote controller which can trigger the same scene sequentially, but you couldn't tell, because the activated scene number didn’t change. The LastSceneTime variable (if there is one) should change every time and then you can look and see which scene was activated.

Any clearer, or is this not the original issue?

What you are suggesting is what we are already doing. The problem is that the timestamps in openLuup are not reliable because (a) you don’t actually have the original timestamps from the Vera, which are what we need; you create them when you set the variables’ values, and (b) you set the variables’ values when you receive a status update for the variables from Vera, and a full update means the rewritable variables (the sl_ group) will be written in openLuup when Vera has not actually written them.

It’s not Vera timestamps which I am proposing to examine. Most of the scene controllers I have maintain their own variable as to when a scene is triggered…?

That is sl_CentralScene in this case. There is no timestamp other than the timestamp on that state variable as maintained by Luup.

Edit: for clarity, here is a list of all the changes that occur to the Hank Luup device when buttons are pressed. I press all four buttons one after the other here:

Watching #429 Hank Four Btn; waiting for changes in device states...
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 1
urn:micasaverde-com:serviceId:HaDevice1 / BatteryDate = 1576161389
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 3
urn:micasaverde-com:serviceId:HaDevice1 / BatteryDate = 1576161394
urn:micasaverde-com:serviceId:HaDevice1 / BatteryDate = 1576161394
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 7
urn:micasaverde-com:serviceId:HaDevice1 / BatteryDate = 1576161399
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 7
urn:micasaverde-com:serviceId:HaDevice1 / BatteryDate = 1576161399
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 5
urn:micasaverde-com:serviceId:SceneController1 / sl_CentralScene = 5

Development version v19.12.12 implements this change.

Variable in question is CheckAllEveryNth.

  • default value is 20 (used to be 10 for synch polling, 20 for asynch)
  • 0 = don’t do it
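For anyone reading along, the behaviour of that parameter can be sketched like this (illustrative only, not the actual openLuup source):

```lua
-- Illustrative sketch (not the actual openLuup source) of the
-- CheckAllEveryNth behaviour described above: every Nth poll requests a
-- full status update instead of a delta; 0 disables the periodic full pass.
local function want_full_update(poll_count, every_nth)
  if every_nth == 0 then
    return false            -- 0 = don't do it
  end
  return poll_count % every_nth == 0
end
```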

Good luck.

I think that’s what it’s going to take in this case, and maybe more than that.

Unfortunately, I think at this point it may all be a non-starter. The shortcoming here is not in openLuup; it is in Vera’s implementation of its status/lu_status response, which does not include timestamps for the states reported. In fact, those timestamps do not seem to be reported in any useful bulk response, such as the user_data request. Without access to these timestamps, openLuup has no choice but to make its own. Even with all of @akbooer’s efforts to bring the behaviors of openLuup into sync with Vera Luup in his 19.12.11 release, and now the addition of a flag for the 19.12.12 release, it remains the case that, from time to time, something is going to restart/reboot, and when it does, a full update from status/lu_status will occur, and that will cause spurious triggering of the “updates” condition on the openLuup side.

The workaround for this is to handle scene controller logic on the Vera side, since it is entirely dependent on Vera’s (unpublished) timestamps, for this device (Hank four-button scene controller) in particular. The openLuup side can then have separate logic which looks at the results of the Vera logic and implements whatever activities, executed on the openLuup side, are necessary. To be clear, I am saying that a ReactorSensor needs to be implemented on the Vera side to do the detection of button presses, and a separate ReactorSensor implemented on the openLuup side can examine those logic groups in the first ReactorSensor to determine when it needs to run its activities on openLuup.

OK, I think I’m following. Long story short: the Hank controller is not suitable for openLuup at this point.

Thanks for your workaround. Not what I was looking for, of course, because I want all my logic in one place (openLuup in my case), but I don’t seem to have much of a choice.

Thanks @rigpapa and @akbooer for your effort!

Well, sorry about that, although I must say that I’m surprised. I am anyway thinking of getting one myself, so there’s some hope.

Could you, perhaps, just post a snapshot of all the device variables? The openLuup > device > variables page for that device is perhaps the easiest format for this. Thanks so much.

AK

Sure, see attached.


The only other thing I could suggest is that the Hank implementation seems to reliably update the battery level with every button message, so perhaps changing the “updates” condition from sl_CentralScene to BatteryLevel would bear some fruit. It’s kludgy, but it just might work for this particular device.

Yes, I noticed that in your log, but given some repeated times and dubious interleaving, I thought the same. Perhaps you were a bit quick with the button presses (sub-second)?

That could be debounced with Reactor’s “delay reset” option. I was pretty slow pushing the buttons. Given how long the ZWave comms take anyway, a one- or two-second (or even 5) debounce isn’t likely to have a big negative effect on the user experience (if you’re pushing buttons that fast on a ZWave scene controller, you’re gonna have a bad time).
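As a rough illustration of the debounce idea (not Reactor’s actual “delay reset” implementation, which operates on condition state rather than raw events; names are hypothetical):

```lua
-- Rough illustration of debouncing (not Reactor's actual "delay reset"
-- implementation): a trigger arriving within `window` seconds of the last
-- accepted one is suppressed, absorbing sub-second repeat messages.
local function make_debouncer(window)
  local last = -math.huge   -- so the very first trigger is always accepted
  return function(now)
    if now - last >= window then
      last = now
      return true           -- accept this trigger
    end
    return false            -- too soon after the previous one; suppress
  end
end
```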