Question about HDLmake

Hey all,
During the meeting last week I learned about HDLmake. I wanted to give it a try and came across some questions.
Hopefully one of you can help me out.

  • How are constraint files handled with HDLmake? I did not find any special option for them in the wiki, so I assumed they are also added via the files = [ ] option.
    But doing so led to an error that xdc and tcl files are not parseable. Did I do something wrong, or are XDC/TCL constraints not (yet) supported?

  • What is the library variable actually for? I thought that if I add library = "foo" to the Manifest of one of my modules, all source files of that module are placed inside the library “foo”, but from my tests it seems that this variable is not really used by HDLmake. At least, this library is not mentioned anywhere in the generated Makefile. I expected to find something like

set_property "library" "foo" [ get_files -of $src_fileset $module_srcs ]

for the source files of this module.

  • Some of my files use VHDL-2008. Vivado only knows “verilog” or “vhdl” as target language for the project itself, so you have to mark each file individually as VHDL-2008 by running:

set_property "enable_vhdl_2008" "1" [ current_project ]
add_files -fileset $src_fileset $vhdl_sources
set_property "file_type" "VHDL 2008" [ get_files -of $src_fileset $vhdl_sources ]

Did I miss any flag/setting for VHDL 2008 or is it not (yet) supported?

A bit of a different topic: for CI/CD I actually evaluated the non-project workflow for Vivado. If you use project mode, first of all a lot of directories and files are generated that are not really needed (at least in my projects so far). Secondly, if you start a synthesis run, Vivado generates scripts and then starts additional Vivado instances in batch mode to execute those scripts.
In the non-project workflow no additional files need to be generated and only one instance of Vivado is running. In theory there are even more advantages, like additional directives for each step that are not available in project mode, and the possibility to repeat certain steps iteratively with different directives for “better” optimization.
For the non-project workflow, you add files with

read_verilog [-sv] $verilog_sources
read_vhdl [-vhdl2008] $vhdl_sources
read_xdc $constraints

As HDLmake is written in Python, I was wondering if it is possible to call parts of it from a Python script in order to get the lists of Verilog and VHDL source files, and if somebody could point me in the right direction on how to do so.
It kind of feels wrong to me to use a system call and do something like

verilog_sources = subprocess.check_output('hdlmake list-files | grep -E "\.v$"', shell=True)

to get the list of verilog files for example.
I would like to write a Python script which uses hdlmake to read all the Manifests and generate lists of Verilog and VHDL sources (and constraints, if possible), just like hdlmake itself does when generating the Makefile.
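For illustration, here is roughly what I have in mind as an interim step (a sketch only: it assumes hdlmake list-files prints one path per line, and the function names are my own):

```python
import subprocess


def filter_sources(listing, extensions):
    """Keep only the paths ending in one of the given extensions.

    The listing is assumed to be one path per line (the format that
    `hdlmake list-files` seems to print)."""
    files = [line.strip() for line in listing.splitlines() if line.strip()]
    return [f for f in files if f.endswith(tuple(extensions))]


def list_sources(extensions, cmd=("hdlmake", "list-files")):
    """Run hdlmake's file listing and filter it in Python instead of
    piping the output through grep."""
    return filter_sources(subprocess.check_output(cmd, text=True), extensions)


# Hypothetical usage:
# verilog_sources = list_sources((".v", ".sv"))
# vhdl_sources    = list_sources((".vhd", ".vhdl"))
```

This at least avoids the shell pipeline, but calling into hdlmake directly would still be nicer.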

Cheers,
Florian

Hi @florian, I suggest you have a look at Hog (cern.ch/hog).
We designed it to address exactly these issues, and the CI is already implemented for you.
It works in project mode; if you’re interested, I can explain why we decided to do it that way.

I feel like I’m advertising Hog on this forum, but on the other hand, solving these problems is exactly what drove me to invent it… Let me know what you think!

Hello,

For the library and xdc/tcl issues, you should use the develop branch; that should fix them. (Yes, I have to merge that branch into master.)

For VHDL-2008, I don’t think there is an option to set it. It shouldn’t be difficult to add a variable to globally set the VHDL and Verilog versions.

For the non-project mode, have a look at:
https://gitlab.cern.ch/be-cem-edl/chronos/wr2rf-vme/-/blob/master/hdl/syn/wr2rf_vme/wr2rf_vme.tcl?ref_type=heads
In short, you use hdlmake to generate files.tcl (see build.sh), and then you can source it in your TCL workflow file. That’s where HDLmake is flexible.

For direct use from Python, yes, it’s possible in theory. Integration with other frameworks like VUnit has been tried, but that’s not yet documented.
Tell me if you want to investigate this integration further; I hope I can find some examples.

Cheers,
Tristan.

Hi,
thanks for the replies.

@tgingold.
is the generated files.tcl really compatible with non-project workflow?
It uses the add_files command, while to my knowledge in the non-project workflow you need to use read_verilog and read_vhdl instead (as there is no project and no “filesets” to add files to).
Unfortunately I cannot open the link to gitlab.cern.ch as I do not have a CERN account.

@gonnella does Hog support conditionals like HDLmake?
E.g. my current firmware supports two different versions of our front-end chip, which differ a lot in pins and also in the modules used.
Also, for lab tests of the front end we use KC705 evaluation boards and the 1GbE Ethernet PHY via GMII for UDP, while in the experiment it’s a custom board with SFP+ (via the GTs of the FPGA) and a different protocol.
With HDLmake, the Manifests are more or less simple Python scripts and I can use if/then/else statements to select specific modules based on a configuration in the top manifest.

As for project-mode vs non-project-mode: For the Lab setups we are using UDP for the communication of the FPGAs with our PCs. As we have multiple setups (but everything is one large LAN), each board needs a different MAC address.
I tried using the DNA_ID primitive to generate unique MACs, but some of our boards share the same DNA_ID, so this was not possible.
The only way I came up with is storing the 24-bit OUA part of the MAC inside the USR_ACCESS register of the FPGA.
In project mode, changing the value of this register requires redoing the synthesis and implementation runs. In non-project mode I can do all steps up to and including route_design, store the generated netlist, and then set this register and generate the bitstreams. While synthesis, optimization, placing and routing take some hours, setting the register and generating the bitstreams for our 6 setups takes about a minute. So the non-project workflow saves me a lot of time at the moment.
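The re-stamping step could be sketched like this (a hedged sketch: the checkpoint name and the per-board values are placeholders; BITSTREAM.CONFIG.USR_ACCESS is the Vivado property behind the USR_ACCESS register):

```tcl
# Sketch: re-stamp USR_ACCESS per board from the stored routed netlist.
# Checkpoint name and per-board OUA values below are made up.
open_checkpoint wr2rf_routed.dcp

# One bitstream per lab setup, each with its own USR_ACCESS value.
foreach {board oua} {setup0 0x00AA0001 setup1 0x00AA0002} {
    set_property BITSTREAM.CONFIG.USR_ACCESS $oua [current_design]
    write_bitstream -force ${board}.bit
}
```

Bitstream properties are applied at write_bitstream time, so the placed-and-routed netlist itself never has to be touched.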

The focus of Hog is reproducibility and simplicity of use (use Vivado normally).
So in Hog, one project corresponds to one binary file, because in Vivado, if you open a project and build it without touching anything, it will produce only one binary file.

However, if you have similar projects that differ just in some constraints or some values, you can use recursive list files to separate the common part from the parts that are specific to each single design.

So in this sense it is perfectly equivalent to a conditional statement, but it’s easier for a developer who doesn’t know the project to understand it quickly (which is another thing that is valued in Hog).

The point you make about non-project mode is a good one: it’s much faster if you do it like that. The problem is that it’s not reproducible: someone who clones your repo will have to follow exactly what you have done, store the generated netlist, etc., so reproducibility cannot be guaranteed.
This said, it could be interesting to support something like that during the development phase of a design, to speed up trial and error, and then fix it and make it reproducible. I’ll think about it.

Your point about reproducibility is the reason I thought about writing my own Python script that uses parts of hdlmake (plus a Tcl script that goes through the steps of the build process).
This way, you execute the script and also get reproducible results.

If you look closely, Vivado does exactly the same. If you create a project, Vivado generates shell and Tcl scripts. The shell script is executed in a virtual shell and starts another instance of Vivado in batch mode, which sources the Tcl script to actually run a non-project build based on your project’s settings and stores the netlist of the last step.

Therefore, if you provide a script for your workflow, the non-project and project workflows do not differ in terms of reproducibility, from my point of view.

Agreed, Vivado just works in one way; project mode is just a wrapper around several steps run in batch mode. On the other hand, most FPGA engineers use Vivado in project mode, clicking in the GUI and pressing the button to generate the bitstream, and I personally never found anything I couldn’t do in project mode.

Yes, add_files works in non-project mode. Here is the beginning of the file I mentioned:

set projDir [file dirname [info script]]

set_param general.maxThreads 8
get_param general.maxThreads

# Xilinx speed grades: 1,2,3: 1 = slowest, 3 = fastest
set speed   2
set kintex7 xc7k160tfbg676-${speed}
set device  ${kintex7}

set top     wr2rf_vme

# Check hdlmake has generated file dependencies
if {![file exists files.tcl]} {
    puts "File: files.tcl not found, please check hdlmake has generated the file dependencies."
    exit 1
}

source files.tcl

# constraint files
set swap_sfp false
if {$swap_sfp eq "true"} {
    read_xdc $projDir/${top}_sfp_swap.xdc
} else {
    read_xdc $projDir/${top}_sfp.xdc
}
read_xdc $projDir/${top}.xdc
read_xdc $projDir/wrcore.xdc
read_xdc $projDir/gencores_constraints.xdc

set start_time [clock seconds]

#synth_design  -rtl -top ${top} -part ${device} > ${top}_synth.log
synth_design -top ${top} -part ${device} -generic g_hwbld_date=${start_time} > ${top}_synth.log
write_checkpoint -force ${top}_synth

I gave it a try, and indeed it worked. Thanks @tgingold
Now that I figured out how to use HDLmake for my project, I’ll have a look into cheby to manage my registers.

@gonnella Just to clarify, I did not want to say anything against the project workflow. For me, it just feels wrong to use project mode for CI/CD scripts, because the runner has to manage at least two Vivado instances, one of which only generates scripts and waits. That is why I wrote this script myself.
I do, however, use project mode for simulations and tests during development (e.g. a quick check that synth/impl runs are successful before pushing changes).

Great!
Tell me if you want to investigate the use of HDLmake as a python module.

Actually, as we do it in Hog, Vivado is called one extra time just to create the project!

However, the choice is not motivated by optimisation, but rather by having the CI do exactly the same thing that you do when you work locally with the GUI and push the button. (In any case, with a big project, the time you waste is a couple of minutes over several hours.)

@tgingold
I tried using hdlmake the “normal” way and creating a project with it, but I encountered a problem.
During the synthesize target, Vivado seems to hang:

/usr/local/bin/vivado -mode tcl -source synthesize.tcl
Setup Xilinx licences
Sarting Vivado 2021.1

****** Vivado v2021.1.1 (64-bit)
  **** SW Build 3286242 on Wed Jul 28 13:09:46 MDT 2021
  **** IP Build 3279568 on Wed Jul 28 16:48:48 MDT 2021
    ** Copyright 1986-2021 Xilinx, Inc. All Rights Reserved.

Sourcing tcl script '/opt/xilinx/Vivado/2021.1/scripts/Vivado_init.tcl'
source synthesize.tcl
# open_project pndLmd_mupix11_kc705_gmii_udp.xpr
WARNING: [filemgmt 56-3] Default IP Output Path : Could not find the directory '/data/jollyj/florian/fpgadev/lmd-fee-kc705/pndLmd_mupix11_kc705_gmii_udp.gen/sources_1'.
Scanning sources...
Finished scanning sources
# set_property steps.synth_design.args.flatten_hierarchy rebuilt [get_runs synth_1]
# set_property steps.synth_design.args.gated_clock_conversion off [get_runs synth_1]
# set_property steps.synth_design.args.bufg 12 [get_runs synth_1]
# set_property -name {steps.synth_design.args.more options} -value {-fanout_limit {10000}} -objects [get_runs synth_1]
# set_property steps.synth_design.args.directive Default [get_runs synth_1]
# set_property steps.synth_design.args.fsm_extraction auto [get_runs synth_1]
# set_property steps.synth_design.args.resource_sharing auto [get_runs synth_1]
# set_property steps.synth_design.args.control_set_opt_threshold auto [get_runs synth_1]
# set_property steps.synth_design.args.shreg_min_size 5 [get_runs synth_1]
# set_property steps.synth_design.args.max_bram {-1} [get_runs synth_1]
# set_property steps.synth_design.args.max_dsp {-1} [get_runs synth_1]
# set_property steps.synth_design.args.cascade_dsp auto [get_runs synth_1]
# set_property -name {steps.synth_design.args.more options} -value {-verbose} -objects [get_runs synth_1]
# reset_run synth_1
# launch_runs synth_1
[Fri Jul  5 09:14:10 2024] Launched synth_1...
Run output will be captured here: /data/jollyj/florian/fpgadev/lmd-fee-kc705/pndLmd_mupix11_kc705_gmii_udp.runs/synth_1/runme.log
# wait_on_run synth_1
[Fri Jul  5 09:14:10 2024] Waiting for synth_1 to finish...

*** Running vivado
    with args -log lmd_fee.vds -m64 -product Vivado -mode batch -messageDb vivado.pb -notrace -source lmd_fee.tcl


****** Vivado v2021.1.1 (64-bit)
  **** SW Build 3286242 on Wed Jul 28 13:09:46 MDT 2021
  **** IP Build 3279568 on Wed Jul 28 16:48:48 MDT 2021
    ** Copyright 1986-2021 Xilinx, Inc. All Rights Reserved.

Sourcing tcl script '/opt/xilinx/Vivado/2021.1/scripts/Vivado_init.tcl'
source lmd_fee.tcl -notrace
INFO: [IP_Flow 19-234] Refreshing IP repositories

Nothing more has happened for several minutes. My project does not use any IP cores (only 2 git submodules). Any idea what could cause this issue?

When I only run make project and start the Vivado GUI to open the project, I have the same issue. The GUI gets stuck while opening the project.

Regarding using hdlmake inside my own Python script: I managed to figure it out!
I actually also wrote my own Vivado tool object to generate a Makefile for the non-project workflow.

Hi,

no idea. Vivado is executing lmd_fee.tcl, which is not an hdlmake file.
Is that expected?

Tristan.

lmd_fee.tcl is the script that Vivado generates when launch_runs is called.
The launch_runs command in Vivado generates <top_module>.tcl and starts another Vivado instance in batch mode to source this script and perform the actual build steps of that run.

So that part is expected.

I notice the same behavior when just running make project and then opening it with the Vivado GUI.
Vivado tries to refresh the IP repositories and doesn’t finish.

It seems to be related to this line

echo set_property "ip_repo_paths" "ip-cores" [current_fileset] >> $@

in the Makefile. It is generated because I defined fetchto in the top Manifest.py.
Without fetchto, hdlmake is not able to check whether my git submodules are fetched and thus stops with an error.

After removing this line from the Makefile (in the project.tcl target) and then running make clean; make project, the project can be opened.

But I found another problem. One of my git submodules uses Tcl scripts as constraint files. Those cannot be added via the files variable inside Manifest.py, as hdlmake wants to source them immediately, but they only work while a design is open. So instead I added a Tcl script to the files variable that adds the Tcl constraint files to the constr_1 fileset of the project.

hdlmake not only sources Tcl files but also adds them to the project’s fileset, so the script gets sourced during the synthesis and implementation runs as well.
The synthesis run is executed from within the <project>.runs/synth_1/ directory, so Vivado cannot find the files that my Tcl script tries to add and stops with an error.

Is there any particular reason hdlmake sources Tcl scripts in addition to adding them to the project’s source fileset?
I guess for most scripts it would suffice to source them without adding them?

For now I solved this issue by removing my Tcl script from the files variable and instead using

syn_pre_project_cmd = "sed -i '/source files.tcl/a source tcl/add_tcl_constraints.tcl' project.tcl"

to source my script right after sourcing the files.tcl when creating the project.