API documentation - Pre-Alpha - For Ezlo controllers

Hi,

First of all, I would like to make our position clear:
We want to make our API as open and as sophisticated as possible. Our vision is to give our community access to our internal capabilities and work with them closely to improve these capabilities.

We’re releasing the pre-alpha version of our API for the Ezlo Platform.
For now, you can see the documentation in the attached PDF:
HUB_API_doc.pdf (492.4 KB)
We’re currently working on making the API documentation available via a website - stay tuned.

Please post here EVERYTHING you want to see in our APIs. We mean EVERYTHING. Please don’t hold back. We will do our best to deliver you the most sophisticated API capabilities.
Thank you,

Ioana - Product Manager

EDIT:
Here is the draft of the Core Events API: Core Events - eZLO API.pdf (124.4 KB). Implementation will be available in the next Linux firmware release (we will update this post once it is done).

11 Likes

Ahh, now I can do something with the new firmware! Thank you!

First question, on the use of mDNS (aka Bonjour) discovery: what happens when you have multiple hubs resolving as ezlo on the network?
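For anyone who wants to poke at this, here is a minimal browse sketch using the python-zeroconf package; the "_ezlo._tcp.local." service type is my guess rather than anything confirmed by the doc. If each hub advertises under its own instance name, several should be able to coexist and all show up:

import time
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

# List every hub that answers the mDNS browse. The service type
# "_ezlo._tcp.local." is an assumption - adjust it to whatever the
# hub actually advertises.
class HubListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found {name} at {info.parsed_addresses()} port {info.port}")

    def remove_service(self, zc, type_, name):
        print(f"lost {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ezlo._tcp.local.", HubListener())
time.sleep(5)  # give all hubs on the LAN a moment to answer
zc.close()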

1 Like

Again, super!

C

1 Like

Another question: if this was not going to be released publicly, why did you have a 60-page PDF ready to post?
Genuinely curious, not having a pop.

C

@Catman we literally finished formatting this document 10 minutes ago and were ready to post. :upside_down_face:

1 Like

…and a great read it is too!

C

I am not a developer, or a good coder for that matter. However, the Vera local HTTP API, as it is simple to use, has enabled me to do some exciting things with Vera over the years.

I like the Vera API as it is now, where I can send simple one-line HTTP commands to Vera and have it control a device or run a scene.

Personally I’d be happy if that same simple API function remained for dummies like myself.

I am sure the other more advanced users and developers will chime in with their own thoughts about the new API.

1 Like

OK, so it looks like if it’s WebSocket-only to connect, then we might not be able to do what we can now and send simple one-line HTTP commands to Vera to have it do something?

This would be a barrier and a big loss for me and others who have been doing this with one-line commands.

Run a Scene:

http://192.168.1.100/port_3480/data_request?id=lu_action&serviceId=urn:micasaverde-com:serviceId:HomeAutomationGateway1&action=RunScene&SceneNum=14

Turn a device On:

http://192.168.1.100/port_3480/data_request?id=lu_action&output_format=xml&DeviceNum=58&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=1

Turn a device Off:

http://192.168.1.100/port_3480/data_request?id=lu_action&output_format=xml&DeviceNum=58&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=0

Will such a thing still be possible in the new firmware / local API?
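For reference, this is all my integrations really do today - plain HTTP GETs that are trivial to script. A Python equivalent of the three commands above (the IP, scene and device numbers are from my own setup):

import requests

VERA = "http://192.168.1.100/port_3480/data_request"

def run_scene(scene_num):
    # Same as the RunScene URL above, just with the query built for us
    requests.get(VERA, params={
        "id": "lu_action",
        "serviceId": "urn:micasaverde-com:serviceId:HomeAutomationGateway1",
        "action": "RunScene",
        "SceneNum": scene_num,
    })

def set_switch(device_num, on):
    # Same as the SetTarget on/off URLs above
    requests.get(VERA, params={
        "id": "lu_action",
        "output_format": "xml",
        "DeviceNum": device_num,
        "serviceId": "urn:upnp-org:serviceId:SwitchPower1",
        "action": "SetTarget",
        "newTargetValue": 1 if on else 0,
    })

run_scene(14)
set_switch(58, True)
set_switch(58, False)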

Should be:

{
  "method": "hub.item.value.set",
  "id": "ID",
  "params": {
    "_id": "switchDB1FCA84",
    "value": true
  }
}

As an example, I think.

You should be able to concatenate this into a single line.
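Something like this rough sketch, assuming the websocket-client package and a plain local WebSocket on the hub (the port and any auth handshake are my guesses - check the PDF):

import json
import websocket  # the websocket-client package

# Hub address and port are assumptions for illustration
ws = websocket.create_connection("ws://192.168.1.100:17000")
ws.send(json.dumps({
    "method": "hub.item.value.set",
    "id": "ID",
    "params": {"_id": "switchDB1FCA84", "value": True},
}))
print(ws.recv())  # the reply should echo the same "id"
ws.close()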

C

At a high level, nothing in this API so far suggests to me a mechanism for building plugins. I can see how this could be used to facilitate building an alternate UI for the system, but as an API for plugins, I’m not seeing it.

1 Like

It’s very important that I can send/post a single-line HTTP command to Vera; if not, a lot of my integrations with other systems will be broken in the future.

I have other mobile apps that can send commands to Vera for controlling devices and scenes.

I use the Java HA Bridge software (Philips Hue Bridge Emulator) to integrate Harmony Hub / Remotes with Vera, for the HA buttons on the remote handsets and also commands on the Elite LCD screen. Java HA Bridge sends one-line HTTP commands to Vera.

I’ve used some scripts on my Kodi HTPCs to send commands to Vera to control stuff.

I extensively use single-line HTTP commands from the Imperihome mobile app to send commands to Vera and the Harmony plug-in for Vera, for certain controls over my AV equipment that is IR-based and doesn’t support direct LAN IP control.

Blue Iris camera recording software sends HTTP commands to Vera to control virtual motion sensor devices when IP cameras detect motion on their PIR.

Basically, if I can’t send single-line HTTP commands to Vera as we can now, I am going to have big problems in the future.

Likewise in reverse: as we can now, we need to be able to send HTTP commands out from Vera to other devices and services, like IFTTT Webhooks and the Imperihome app’s HTTP server, and anything else that accepts HTTP control commands.

I take every small step in the right direction as positive… Indeed, this does not provide anything about the logic engine or coding protocol, only device and stack control. It’s not very helpful for plugin writing, but… you could now create a bridge for openLuup :face_with_hand_over_mouth: and reuse all our existing plugins!

1 Like

You are correct, it is not in this API document. When we release the Ezlo LUA API document, you will be able to see the beginnings of that.

We pushed that as a requirement to our development team. Thank you for the feedback; please keep it coming.

3 Likes

That’s great, thank you!

I believe this was the original WIKI about sending HTTP commands to Vera.

http://wiki.micasaverde.com/index.php/Luup_Requests

In the Java HA Bridge software, I can also use a “Dim” command like this to control the dimming level of a light from a button on my Harmony remote control handset.

SetLoadLevelTarget&newLoadlevelTarget=${intensity.percent}

http://192.168.1.100/port_3480/data_request?id=action&output_format=json&DeviceNum=140&serviceId=urn:upnp-org:serviceId:Dimming1&action=SetLoadLevelTarget&newLoadlevelTarget=${intensity.percent}
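The same dim call is easy to parameterise in a script, substituting a level where the HA Bridge puts ${intensity.percent}:

import requests

def set_dim(device_num, percent):
    # Mirrors the SetLoadLevelTarget URL above
    requests.get("http://192.168.1.100/port_3480/data_request", params={
        "id": "action",
        "output_format": "json",
        "DeviceNum": device_num,
        "serviceId": "urn:upnp-org:serviceId:Dimming1",
        "action": "SetLoadLevelTarget",
        "newLoadlevelTarget": percent,
    })

set_dim(140, 50)  # dim device 140 to 50%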

Although I wouldn’t say no to native Vera integration with Harmony hub and its “Home Control” functions.

Philips Hue Bridge and Samsung SmartThings for example are natively supported by the MyHarmony software.

There you can add those systems (and some others, I think) into Harmony, and then control devices and scenes using the dedicated “Home Control” buttons on the Harmony remote control handsets and also on the LCD screen of the Elite Harmony remote.

The only way I could achieve this with Vera was to use the Java HA Bridge software running on my file server, which emulates a Philips Hue Bridge; the Harmony software is connected to that instead.

We then use HTTP commands in the Java HA Bridge to control devices and scenes within Vera via the Harmony remote etc.

Thanks! I hope the same payloads could be sent via HTTP and the same responses got back, in order to simplify things.

All that said, a bridge from WebSockets to/from HTTP or MQTT should be easily doable; see the sketch below.
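As a rough illustration of how small such a shim could be, here is a sketch with the aiohttp package; the hub WebSocket endpoint and the lack of authentication are assumptions, and the message shape just follows the hub.item.value.set example earlier in the thread:

import uuid

import aiohttp
from aiohttp import web

HUB_WS = "ws://192.168.1.100:17000"  # assumed hub endpoint

async def set_item(request):
    # Accept the familiar one-line HTTP GET...
    item_id = request.query["item"]
    value = request.query.get("value", "true") == "true"
    # ...and forward the equivalent JSON over the hub's WebSocket
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect(HUB_WS) as ws:
            await ws.send_json({
                "method": "hub.item.value.set",
                "id": str(uuid.uuid4()),
                "params": {"_id": item_id, "value": value},
            })
            reply = await ws.receive_json()
    return web.json_response(reply)

app = web.Application()
app.add_routes([web.get("/set", set_item)])
# e.g. curl "http://localhost:8080/set?item=switchDB1FCA84&value=true"
web.run_app(app)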

Not much can be added, but looking at this doc, this sounds promising and really multi-threaded :grin:

Thank you for the API specification. This is good; here is some quick feedback:

Great points:

  • an API is published
  • JSON payloads
  • an Event based model is defined

Axes of improvement:

  • it seems a bit partial; we need different kinds of API: a) a server-side API through which plugins can operate with the platform (a server-to-server API), b) a client-to-server API for alternative UIs or plugins’ UIs, c) a 3rd-party integration API, which could be server-to-server

  • the API is very structuring: it will drive the presentation layers, the features, even the UI. I would spend a lot more time on getting the API right before jumping into UI development

  • the API technology should be chosen so that it enables multiple client device types; it should be pervasive enough to be the same API available across multiple kinds of device (desktop, phone, tablet), so HTTP and cross-browser-supported APIs are typically a good choice

  • from a quick initial read, I find the API a little bit complex. A nice, elegant HTTP API could be RESTful with JSON payloads, and adopting a strong, consistent RESTful approach would help developers assimilate the complexity thanks to standardization. A good RESTful standard would mean having a data model with objects and collections, using an object-driven semantic (as opposed to action-oriented, like create_device or get_list_of_rooms), with a unique id per object and a normalized URL approach like the following (see the sketch at the end of this post):

    • https://fqdn/path/collection/(id) to work with an object of that collection, depending on the HTTP method verb (GET for read, PUT/POST for create/update, DELETE for delete), with the JSON payload carrying the actual parameters
  • collections could be: root, controllers, networks, rooms, devices, services, plugins, users, authorized external integrations (OAuth client ids), etc.
    External integrations (the voice assistants, IFTTT) could be exposed as another type of controller

  • the API needs to offer the ability to query metadata about device capabilities: for instance, an ability to determine dynamically, based on the device type, what actions it supports and with what parameters and types. Vera was indirectly doing so with its UPnP standard, using XML files; it was cumbersome, but the functionality was somehow there - it is possible to enumerate devices, determine which actions are supported, then dynamically construct a UI to trigger those actions

  • we need the HTTP API to be CORS-enabled, and the API should be thought out from day one to be multi-controller, so we can control several controllers on the network and uniquely identify controller_id + device_id as a globally unique id (same for rooms, etc.)

  • we need a much easier remote access method, with either proper OAuth2 or JWT tokens for security

  • WebSocket events are good for a push model (server to client), but on some restricted devices an alternate way, based on a PULL model, could be offered in HTTP to read from a message queue

  • I would not call “id” the parameter used for matching a request with a response; “id” typically identifies an object. The request/response match could be named something like “context” or “callback data”

  • I would state from day 1 that any timestamp/date is in ISO format, yyyy-mm-ddThh:mm:ss.uuuuZ, in GMT; the UI would be in charge of translating to the user’s local timezone

  • responses to very short actions should be synchronous, but responses to medium or long actions could be asynchronous, with the event-based model carrying the actual result

  • events: it is very important to allow easy filtering by the reader of events, so events must have mandatory fields (type for the event type, source for the event source/sender); then the data fields can be event-type specific

Hope that helps. If you go in that direction, I am happy to participate in a beta program (providing you consider Europe for radio protocols).
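To make the RESTful bullet concrete, here is an illustrative sketch of what such a normalized surface could look like from a client; every name here (base URL, collections, fields) is made up for the example, not a proposal of the actual API:

# Object-driven URLs instead of action-oriented calls:
#   GET    /api/v1/rooms       -> list rooms
#   POST   /api/v1/rooms       -> create a room
#   GET    /api/v1/devices/58  -> read device 58
#   PUT    /api/v1/devices/58  -> update device 58 (JSON body = parameters)
#   DELETE /api/v1/scenes/14   -> delete scene 14
import requests

BASE = "https://hub.example/api/v1"  # hypothetical fqdn/path

# turn device 58 on: update the object rather than invoke an action
requests.put(f"{BASE}/devices/58", json={"value": True})

# enumerate rooms
rooms = requests.get(f"{BASE}/rooms").json()
print(rooms)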

7 Likes

I find it somewhat reassuring that there is a full set of boolean operators (and/or/not), some timing functionality, and a modicum of validation with some informative error messages.

Compared to Luup’s error messages of the “no message, but all scenes stored after this one are also corrupted” variety, this is quite an improvement.

I am not a web API person or automation developer, so I don’t have any other opinions worth sharing at this point.

1 Like

amg0,
Thanks for your feedback; it would be great to receive more feedback from you.

the API technology should be chosen so that it enables multiple client device types; it should be pervasive enough to be the same API available across multiple kinds of device (desktop, phone, tablet), so HTTP and cross-browser-supported APIs are typically a good choice

Support for HTTP is already on our roadmap.

the API needs to offer the ability to query metadata about device capabilities: for instance, an ability to determine dynamically, based on the device type, what actions it supports and with what parameters and types. Vera was indirectly doing so with its UPnP standard, using XML files; it was cumbersome, but the functionality was somehow there - it is possible to enumerate devices, determine which actions are supported, then dynamically construct a UI to trigger those actions

In our API, items are abilities; this will be covered in the next version of the API documentation.

WebSocket events are good for a push model (server to client), but on some restricted devices an alternate way, based on a PULL model, could be offered in HTTP to read from a message queue

In order to have great performance, it’s very important to have good point-to-point communication and to overcome the hurdles put forward by HTTP. We also like full-duplex communication and the possibility of real-time responses from the hub. All the WebSocket handshakes can be inspected in the browser with the embedded developer tools, so you can use it easily.
Great performance, speed and stability of communication are the main goals.
But again, HTTP is already on our roadmap.

I would not call “id” the parameter used for matching a request with a response; “id” typically identifies an object. The request/response match could be named something like “context” or “callback data”

I think that mostly depends on preference.
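Whatever the field is called, the client-side pattern is the same: tag each request with a unique value and pair replies by the echo. A rough sketch (websocket-client package; the hub URL is assumed):

import json
import uuid
import websocket

ws = websocket.create_connection("ws://192.168.1.100:17000")
pending = {}  # request id -> method, for matching replies

def call(method, params):
    req_id = str(uuid.uuid4())
    pending[req_id] = method
    ws.send(json.dumps({"method": method, "id": req_id, "params": params}))

call("hub.item.value.set", {"_id": "switchDB1FCA84", "value": True})
msg = json.loads(ws.recv())
if msg.get("id") in pending:   # a reply to one of our requests
    print("reply for", pending.pop(msg["id"]), msg)
else:                          # an unsolicited broadcast/event
    print("event", msg)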

I would state from day 1 that any timestamp/date is in ISO format, yyyy-mm-ddThh:mm:ss.uuuuZ, in GMT; the UI would be in charge of translating to the user’s local timezone

We are running scenes locally on the hub, including sunrise and sunset scenes.
It’s mandatory for the hub to know the region.

Responses to very short actions should be synchronous, but responses to medium or long actions could be asynchronous, with the event-based model carrying the actual result

Most of our APIs are synchronous.

events: it is very important to allow easy filtering by the reader of events, so events must have mandatory fields (type for the event type, source for the event source/sender); then the data fields can be event-type specific

It is implemented: we have events with mandatory fields; a detailed description will be available in the next version of the API documentation.

Waiting for more feedback from you.

3 Likes