`flows.json` development workflow and version control through Git

Hi forum, I have been following the VanPi project for a while and I am finally about to start tinkering with my relay board. I am still waiting for a Raspberry Pi to arrive in the mail, so I haven’t been able to try the system itself yet.

One thing I noticed though is that, with the current setup, custom mods to the open-source software don’t appear to be straightforward, because there is no sane version-control workflow (please correct me if I’m wrong).

Since I know I will want to modify flows myself, but also be able to keep up with updates coming from the main repo, this is the ideal workflow I envision:

  • I want to fork the main repo so I can build custom functionality on it through Node-RED
  • I want to be able to occasionally open a PR to the original repo, in case I develop anything that might be of interest to the rest of the community
  • I want to be able to merge changes from the main repo, in order to not miss out on updates

I did some reading (I’m a complete newbie to Node-RED, Raspberry Pi and everything else) and found out that the standard solution for this is Node-RED’s “Projects” feature.

So I set out and built a little wrapper that should allow me to do what I described above. I packaged it into a small repo with a README detailing how to set things up.

I’m honestly not sure if this is anything new or if everyone is just doing it on their own. I couldn’t find clear guidelines in this forum or in the docs, so I thought I’d share what I did.

The next steps would be to somehow parametrize the init and update scripts, so they would pull files from one’s fork rather than the main repo - the reference to which is currently hardcoded. I’ll get to that once I have the hardware those scripts execute on.

Happy for any feedback / suggestion.

Hey,
You’re right, a sane version control system really is something the VanPi project lacks for now.
Node-RED Projects came up in the forums before and it seems to be the way to go; I just haven’t had a chance to tinker with it yet. Introducing Projects also involves a not-so-straightforward update, which will probably mean that some manual intervention is needed.
That being said, I delayed it for the time being, and we will probably introduce Projects at a stage where a change in hardware (upcoming products, for example) will require new software anyway, meaning users get a completely fresh start in terms of Node-RED, if that makes any sense :smiley:

Another possible approach I thought about is node-red-contrib-flow-manager. This module saves each flow in a separate .json file instead of keeping one big flows.json.
We could then compare each flow against the new, updated set of flows and let users choose whether or not to update specific flows. This approach may require quite some scripting though, to visually display the details and provide some kind of GUI, so that every user, even “non-programmers”, can understand what is going on.
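To make the per-flow idea concrete, here is a rough sketch (the directory layout and flow names are invented; flow-manager’s actual on-disk format may differ):

```shell
# Hypothetical sketch: with one .json file per flow (the layout
# node-red-contrib-flow-manager produces), an update can be reviewed
# flow by flow instead of as one monolithic flows.json diff.
mkdir -p demo/current demo/updated
printf '{"id":"tab1","label":"Heating","nodes":3}\n' > demo/current/heating.json
printf '{"id":"tab1","label":"Heating","nodes":4}\n' > demo/updated/heating.json
printf '{"id":"tab2","label":"Lights","nodes":2}\n'  > demo/current/lights.json
printf '{"id":"tab2","label":"Lights","nodes":2}\n'  > demo/updated/lights.json

# Report only the flows that actually changed; a user could then decide
# per flow whether to take the update.
for f in demo/updated/*.json; do
  name=$(basename "$f")
  cmp -s "demo/current/$name" "$f" || echo "changed: $name"
done
# → changed: heating.json
```

A GUI could sit on top of exactly this kind of per-file comparison.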

For now, what most people do is keep their individual flows/nodes in a separate flow and write down in detail what they changed, then export and re-import them after an update. I know this is far from optimal, as it means extra work when updating, but it’s good to see someone actually tackling this problem.

By the way, at some point I introduced a function which starts a second NR instance with a backup flows.json file. It can be found in the update tab. Every time you press the update button, the current flows are backed up; the backup can then be used to start that second instance on the next free port, so the two can be compared and flows can be exported from one instance and imported into the other.
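The port-probing part of that idea can be sketched roughly like this (my guess at the mechanism, not the actual VanPi code; the paths are assumptions, and the launch line is commented out so the sketch runs without Node-RED installed):

```shell
# Hypothetical sketch of the "second instance" mechanism: back up the
# current flows, probe for the next free TCP port above the main
# instance's, and start a comparison instance there.
FLOWS=${FLOWS:-$HOME/.node-red/flows.json}
BACKUP=/tmp/flows.backup.json
# cp "$FLOWS" "$BACKUP"                  # taken right before an update

port=1881                                # 1880 is Node-RED's default
while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
  port=$((port + 1))                     # in use, try the next one
done
echo "next free port: $port"

# node-red --port "$port" "$BACKUP"      # side-by-side comparison copy
```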

Hey Vincent :wave:,

Thanks for the detailed answer! I don’t think it’s necessary to disrupt the existing flows or to develop anything custom (UIs and such) in order to provide version control to the project.

I have been playing around with the setup I described here, which is available in the linked repo, and it actually works absolutely fine for the requirements I listed.

Node-RED Projects already use Git under the hood, and they provide a nice UI to the user whenever there are conflicts. If a user keeps their customizations away from the originally provided flows (e.g. by creating new tabs), merges just go through without conflicts arising.

For example, right now I have a couple of branches on my forked repo on which I am testing different (and separate) things. I can switch between them and merge changes from one into the other, so the same would be possible when a change occurs upstream, in the main repo.
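To make that concrete, here is a self-contained sketch of the merge-from-upstream part, using throwaway local repos in place of the main repo and the fork (all names and file contents are invented; the “new tab” is modelled as a separate file):

```shell
# Hypothetical sketch: two local repos stand in for the main VanPi repo
# ("upstream") and a user's fork. Because the customization lives in its
# own file, merging an upstream update produces no conflict.
set -e
tmp=$(mktemp -d)
cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q upstream
cd upstream
echo '{"stock":"flows"}' > flows.json
git add flows.json
git commit -qm "stock flows"
cd ..

git clone -q "$tmp/upstream" fork
cd fork
echo '{"my":"custom tab"}' > my-tab.json     # customization in a new file
git add my-tab.json
git commit -qm "my customizations"

cd "$tmp/upstream"
echo '{"stock":"flows","v":2}' > flows.json  # upstream releases an update
git commit -qam "upstream update"

cd "$tmp/fork"
git -c pull.rebase=false pull -q             # merge, no conflict
grep '"v":2' flows.json                      # the update arrived...
ls my-tab.json                               # ...and my work is intact
```

The Projects UI drives the same plumbing; this is just the command-line view of it.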

This makes keeping up with updates much easier and less error-prone, without copy-pasting things around manually or relying on local backups. It also keeps my personal work in Git, so I don’t risk losing it if my hardware catches fire or my laptop is stolen.

When it comes to installing my forked version on the RPi, I assume I would need to go through the “manual” process and change the init and update scripts that are currently provided, so they would point to my repo or my local folder.

These scripts are currently the main issue - as far as I can see - since they fetch files from hardcoded URLs pointing to the main repo. If those could be reworked to either build from their own local folder or to receive a Git ref through an environment variable, it would be possible for anyone to branch off and develop / build / deploy independently.
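As a sketch of what that rework could look like (entirely my invention: the variable names are made up and the default URL is a placeholder, not the real VanPi repo):

```shell
# Hypothetical sketch of a parametrized update script: the repo URL and
# Git ref come from environment variables, falling back to the main
# repo. The default URL below is a placeholder.
VANPI_REPO=${VANPI_REPO:-https://github.com/example/vanpi-core.git}
VANPI_REF=${VANPI_REF:-main}

echo "would fetch from: $VANPI_REPO @ $VANPI_REF"
# A real script would then continue with something like:
# git clone --depth 1 --branch "$VANPI_REF" "$VANPI_REPO" /tmp/vanpi-src
```

Someone on a fork could then run e.g. `VANPI_REPO=https://github.com/<user>/vanpi-core.git VANPI_REF=my-branch ./update.sh`, while the default path keeps working unchanged for everyone else.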

To sum it up, in an ideal world this would be the dev workflow:

  • Fork the main repo and customize :white_check_mark:
  • Pull and merge changes from upstream when an update happens :white_check_mark:
  • Implement the full Git workflow of branching / merging / reverting changes :white_check_mark:
  • Run build scripts that would create an image out of the local / forked repo :no_entry:

Still new to the project so of course I might be missing bits of info, but this is the state as far as I can see right now. Cheers and keep up the great work :+1: