Build ghdl-yosys-plugin as a module #58
Hi, thanks for the warning that you might deprecate this method of building the plugin. As per the discussion in #4, yosys dynamic plugins aren't supported on Windows, which is the main reason it's done this way. The other complication is that building it as a dynamic lib would then typically result in dependencies on other dynamic libs, which is what I've been trying to avoid so far... At some point I tried linking static libs into a dynamic lib and found that, at least on Debian/Ubuntu, they are typically not built with -fPIC. So really, for now I'm not sure what a good alternative is. I'm happy to maintain the patch etc. downstream if you'd like to clean it up from your repo at least - unless there is also specific code needed to support a static plugin?
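For context, this is roughly what goes wrong (illustrative only, not this repo's actual build; `plugin.o` and `libfoo` are made-up names):

```sh
# Hypothetical example: plugin.o was compiled with -fPIC, but libfoo.a (a
# distro-provided static library) was not. Linking it into a shared plugin
# then typically fails with a linker error along the lines of:
#   relocation R_X86_64_32 against `.rodata' can not be used when making a
#   shared object; recompile with -fPIC
g++ -shared -o ghdl.so plugin.o /usr/lib/x86_64-linux-gnu/libfoo.a

# It only links cleanly if the dependency is rebuilt with -fPIC or pulled in
# as a shared library instead, which reintroduces exactly the runtime
# dependencies the static build is trying to avoid:
g++ -shared -o ghdl.so plugin.o -lfoo
```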
First off, the patch is going nowhere in the short term. As long as there are use cases which can only be solved through it, I'm pretty sure that Tristan will keep it "active"; that alone is enough for considering NOT deprecating it. My main concern is that building the plugin in Yosys is not tested in CI (https://github.com/ghdl/ghdl-yosys-plugin/actions); only the usage as a module is tested.
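For reference, a minimal sketch of the module flow that the plugin's CI does exercise (commands approximate the plugin's README at the time; exact targets and paths may differ):

```sh
# Build ghdl-yosys-plugin as a standalone module and load it at runtime.
git clone https://github.com/ghdl/ghdl-yosys-plugin
cd ghdl-yosys-plugin
make                          # builds ghdl.so against yosys-config and ghdl on PATH
make install                  # copies the module into Yosys' plugin directory
yosys -m ghdl -p 'help ghdl'  # load the module and check the command is registered
```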
My understanding of compilers is quite limited. From a user's perspective, I am aware of the following cases:
I wonder if it's possible to build ghdl-yosys-plugin statically.
You might need to tell it explicitly. For building GHDL itself, there is a
I would strongly recommend not to do so. The main benefit of this project is that everything is built statically, which theoretically allows distributing just three main packages (Windows, Linux and macOS). That is extremely valuable for most new users, who can test examples (from ghdl, fomu, symbiflow, etc.) without dealing with installing or understanding how all the tools fit together. For most use cases it works very well already (see im-tomu/fomu-toolchain#20). Conversely, if the libs were dynamic, the Linux package would probably no longer be compatible with multiple distributions (see im-tomu/fomu-toolchain#16 indeed).

For a dynamic approach, I believe that hdl/containers (moved from ghdl/docker) is a better solution. That is, provide a single env (Debian Buster or Ubuntu 20.04), build all the tools there, and provide a package to users. Users of that distribution can then use it on their host; others can use containers, which work on Linux (docker or podman), Windows (docker or WSL) or macOS (docker).

In fact, I envision this project and hdl/containers as complementary approaches that might share multiple build scripts in the near future. The static build would be the 'nightly' approach for most use cases, and containers would be the 'nightly' approach for users on the bleeding edge of advanced features. An alternative to containers would be Snap/Flatpak/AppImage, but I don't find any of those more appealing.
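As an example of the container route, something like the following gives the same yosys+ghdl flow without installing anything on the host (the image name/tag is whatever ghdl/docker, later hdl/containers, publishes, so treat it as a placeholder):

```sh
# Run synthesis from the current directory using a prebuilt container image.
docker run --rm -t -v "$PWD":/src -w /src ghdl/synth:beta \
  yosys -m ghdl -p 'ghdl --std=08 top.vhd -e top; synth_ice40 -json top.json'
```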
Given the context, I think that the current approach is good enough. I don't think you need to maintain the patch downstream. However, it'd be very interesting if you could help with keeping it up to date. That is, let's add the patch to CI in ghdl-yosys-plugin, so we get faster feedback when breaking changes are added. At the same time, please report the bugs/issues you find there, instead of fixing them here. Hence, I'm not asking you to maintain the patch, but to ensure that this project is not significantly decoupled, if that makes sense. On my side, I will think about how to best communicate these differences to users. Regarding the canonical CLI call, see ghdl/ghdl#1336.
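A rough idea of what such a CI job in ghdl-yosys-plugin could look like (the patch name/location and the Yosys checkout layout are assumptions; the point is only that the patched in-tree build gets compiled and smoke-tested on every change):

```sh
# Hypothetical CI step: apply the downstream "build as part of Yosys" patch
# and verify that the builtin ghdl frontend still compiles and registers.
git clone https://github.com/YosysHQ/yosys
cd yosys
git apply ../patches/yosys-builtin-ghdl.patch   # patch path is an assumption
make -j"$(nproc)"
./yosys -p 'help ghdl'                          # fails if the frontend broke
```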
@edbordin, alpin3/ulx3s illustrates what I mean by fpga-toolchain and hdl/containers being complementary projects. @kost's solution is for ULX3S only, and the structure of the repo is not designed for generalisation/scaling. Yet, it provides essentially the same set of tools either as static packages or container images. /cc @mithro
Sorry, I thought you were looking at this from the perspective of a GHDL maintainer! I see now you're looking at it in the context of fomu-toolchain.
To elaborate on what I was talking about, it's relatively easy to turn on
I have also been wondering if something like this would make things easier, since it would mean the build scripts become much simpler. It would solve my issues around iverilog/VPI (#34). The main problem I see with it is that it only solves the problem on Linux. It looks like Snap supports multiple entrypoints via "aliases", which was another concern I had about AppImage.
I have been meaning to at least make a note of this in the README here, as a couple of users have already come asking about it. I really should get around to that... Although it would not be a clean solution, we could add a small patch to yosys to ignore
I ain't a maintainer of GHDL 😉 . Although I am the most active contributor, my contributions account for 1-2% of ghdl/ghdl and 5-10% of the org. Therefore, Tristan is the only authoritative voice, especially regarding language support, simulation and synthesis. I know mostly about cosim, CI, running on ARM/Android, packaging, documentation, etc. Nonetheless, I'm looking at this as a contributor to GHDL indeed. It is frustrating for me that VHDL seems to be ignored and/or dismissed in some open source hardware camps. I believe that's unfair and very misleading for newcomers, especially the ones in Europe. Fomu is the only board I know of which is given away for free at many conferences and hacker meetings. Hence, it is strategically important that users have a wider view of the ecosystem when they read Fomu's workshop. At the same time, I do have a Fomu, and I want to use it for testing VHDL and mixed-HDL designs using GHDL: im-tomu/fomu-workshop#334 (using containers), im-tomu/fomu-workshop#338 (using fpga-toolchain).
In the context of GHDL, this was discussed in ghdl/ghdl#640 (comment). That is a different use case that I didn't want to bring up yet, but it is also related to fpie/fpic. It's about co-simulation of VHDL and Python; more precisely, executing Python functions from VHDL. See https://umarcor.github.io/ghdl-cosim/vhdl202x/ and https://umarcor.github.io/ghdl-cosim/vhdl202x/use-cases.html#executing-arbitrary-python-callbacks-from-vhdl. I have not tried co-simulation with the GHDL in fpga-toolchain yet. On Windows, MSYS2 packages allow it already, and containers can be used on any platform; see https://github.com/ghdl/ghdl-cosim/runs/1238489252?check_suite_focus=true. Hence, it is not critical for fpga-toolchain to support it, but I think it'd come in very handy.
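For a flavour of why the PIC/PIE settings matter there, the basic foreign-subprogram (VHPIDIRECT) flow from the ghdl-cosim docs looks roughly like this (file and unit names are placeholders; an LLVM/GCC backend of GHDL is assumed):

```sh
# Compile the C side providing the foreign subprograms, then let GHDL link it
# in at elaboration time. When the design is instead built into a shared
# library (e.g. to be loaded from Python), all of these objects need -fPIC.
gcc -c caux.c -o caux.o
ghdl -a tb.vhd
ghdl -e -Wl,caux.o tb
./tb
```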
I think that trading optimisations for portability (static builds) is sensible. However, those pthreads issues might be more difficult to deal with...
It seems that the issue with iverilog/VPI (#34) might also show up when trying to co-simulate with GHDL... My conviction is that the solution for Windows is using MSYS2. MSYS2 is included in GitHub Actions and it will soon be the default
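To make the Windows point concrete: once GHDL, Yosys and the plugin module are available in an MSYS2 MINGW64 environment (how exactly they are installed is out of scope here, so treat that as an assumption), loading the plugin dynamically is the same one-liner as on Linux:

```sh
# Inside an MSYS2 MINGW64 shell, with yosys, ghdl and the ghdl plugin module installed:
yosys -m ghdl -p 'ghdl --std=08 top.vhd -e top; write_verilog top.v'
```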
OTOH, build scripts for appimage/snap/flatpak are likely very similar to scripts for GNU/Linux containers.
See ghdl/ghdl-yosys-plugin#74 and YosysHQ/yosys#1640. Even if
I'd propose:
BTW, just so we don't forget about it: the version reported by
Recap:
Therefore, I'm closing this issue.
Building ghdl-yosys-plugin as part of Yosys is not recommended and might become deprecated: https://github.com/ghdl/ghdl-yosys-plugin#build-as-part-of-yosys-not-recommended. Instead, it is suggested to build it as an object and load it in Yosys as a plugin.
However, in this repository ghdl-yosys-plugin is built as part of Yosys (https://github.com/open-tool-forge/fpga-toolchain/blob/main/scripts/compile_yosys.sh#L28). Moreover, Yosys is built without plugin support (https://github.com/open-tool-forge/fpga-toolchain/blob/main/scripts/compile_yosys.sh#L55).
Is this because of some limitation when building Yosys statically?
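A quick way to see the difference between the two setups described above (exact output wording may vary between Yosys versions):

```sh
# If the ghdl frontend was compiled into Yosys, the command is always there:
yosys -p 'help ghdl'

# If Yosys was instead built with plugin support (ENABLE_PLUGINS in its
# Makefile), the module is loaded at runtime; a build without plugin support
# rejects the -m option:
yosys -m ghdl -p 'help ghdl'
```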