A gentle introduction to cocotb
In this tutorial, we look at cocotb, an increasingly popular verification framework that lets you control signals directly from Python!
Verification of digital logic has always been a painful area to get started with. It has a high barrier to entry in the form of involved and intricate frameworks like UVM, which everyone seems to despise, and heavy tools that cost crazy amounts to acquire. Additionally, there is so little information available about these tools and frameworks out in the open that too many people simply get intimidated by the task.
What I have realized during my time as a digital design engineer is that an easy-to-use verification methodology that lets you write and run simple tests very quickly can significantly boost the quality of your code and greatly reduce the time it takes to make your modules robust and error-free.
In my search for quick and easy ways to write automated testbenches for my Verilog modules, I developed this method for testing code. Despite the power it gave me, it still required more effort than I would have liked, because you still need to write an HDL testbench that interfaces with the module for you. That is good enough when you have a simple 'give input, take output' kind of module, but it can get very messy when complicated things like bus transactions are involved.
My search for the perfect verification framework led me to cocotb, which is essentially a Python framework that automatically interfaces with several HDL simulators (such as Icarus Verilog, ModelSim, and QuestaSim) and allows you to drive the signals in your design directly from Python itself. With this, your entire testbench can be written in Python, and automation and randomization are easily taken care of, thus boosting your productivity.
Probably the strongest point of cocotb is that it allows you to manipulate the signals inside your module from a 'normal' language, i.e. a non-HDL. This is a gift, because when you have designs that implement complex algorithms and computation structures in hardware, an HDL-based testbench would require you to write the golden model of this complex algorithm in an HDL, and I think you can see why that can be a nightmare. It is going to take so much time just to be sure that your golden model is perfect, let alone test the DUT. Python, on the other hand, has probably the largest collection of libraries and functions implementing a plethora of algorithms in efficient and robust ways. Moreover, these libraries are vetted and constantly scrutinized by a hyperactive software-dev community that is much, much larger than the one we have for digital design.
In this article, we'll take a look at what cocotb is all about and why so many people are excited about it. We'll write some good automated testbenches to understand the cocotb way of thinking. In the next article, I'll explore the most powerful and more involved features of cocotb that give you the ability to approach the level of coverage that methodologies like UVM and formal verification can achieve.
So let's dig into this!
First, a little bit of Python:
If you're like me, you spend most of your time doing digital design with HDLs, or working on static timing analysis, or maybe directly on hardware. But you also have a scripting language that you commonly use to automate the repetitive stuff, or to do something simple but laborious like parsing reports from tools. And chances are this scripting language of your choice is Python, given its simplicity and power. It could also be something like Tcl or Perl, or quite commonly Bash itself, but more often than not you've used Python at some time or another.
In such a case, you probably only know the basic elements of the Python language and heavily borrow (read: copy and paste) from Stack Overflow whenever you need to write a script. Cocotb, however, uses some fancy features of the Python language that are usually unseen in regular software code. While it is not important for you (as a digital design / hardware / verification engineer) to know the exact details of all these features, it helps to expand your mental model of Python so that you can write more powerful and imaginative test scenarios in every testbench you write using cocotb.
Here's some terminology that you need to get used to -
coroutines - These are used for cooperative multitasking, where processes voluntarily yield (give away) control periodically (or when idle) in order to allow other processes to run. Coroutines are usually declared using the 'async def' keywords, which tell the interpreter that this is an asynchronous function. This scheme is widely used in cocotb to model the inherent parallelism of hardware.
NOTE: The same can be done with a @cocotb.coroutine decorator, but you are advised not to use it as it is deprecated. I'm putting this note here so that any legacy code doesn't confuse you.
async functions - They cannot be called directly; they either have to be awaited or passed to an event loop (a program that manages asynchrony and schedules async functions). To await a function means to pause the current function and let the awaited function progress by scheduling it in the event loop.
NOTE: Don't use the yield keyword, it's deprecated. Use await.
e.g. await Timer(10, 'ns') means: pause the current coroutine and let simulation time (accessed via the Timer trigger here) move forward by 10 ns. Once that is done, the current coroutine resumes execution.
decorators - A decorator is just a function that takes another function as an argument, adds some kind of functionality, and then returns another function.
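As a quick illustration, here is a minimal sketch of a decorator (the names here are invented purely for this example):

```python
def log_calls(func):
    # Wraps 'func' and prints a message before every call
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)  # prints "calling add", then returns 5
```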
Some commonly used decorators in cocotb are
- @cocotb.test - This marks the coroutine as a cocotb test to be run. It also adds functionality like reporting, logging, timeouts, and expected-fail flags to the coroutine without the user having to write any of these features explicitly. Marking a function as a test using this decorator is enough for cocotb to automatically pick up and run the test.
- @cocotb.coroutine (outdated) - Marks the function as a coroutine and adds some generic logging capabilities to it. I have included this here to help you understand older cocotb code, but nowadays you can directly use 'async def' functions instead of @cocotb.coroutine.
generators - A type of Python function that executes in steps, producing one value each time it is called, instead of processing an entire set of data at once. This is a very useful feature of Python that lets us write efficient code that does not waste resources. To visualize a generator, think of a numbering stamp that increments its count each time you press it.
Generators can be used to mimic hardware by creating infinite data generators, i.e. as long as the clock is running, the generator will output some data each time it is called.
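For instance, a minimal sketch of such an infinite stimulus generator might look like this (the random payload is purely for illustration):

```python
import random

def random_words(width=16):
    # Infinite generator: yields a fresh random value each time next() is called
    while True:
        yield random.getrandbits(width)

stim = random_words()
print(next(stim))  # one random 16-bit value
print(next(stim))  # another one, generated only on demand
```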
Some cocotb specific keywords:
dut - a handle to the top-level module instance, built into cocotb and available to every test.
trigger - something that indicates a condition on which action can be taken; the simulator is usually paused or resumed based on triggers, e.g. Timer, RisingEdge.
result - a reporting mechanism, e.g. TestFailure, an assertion, etc.
Scoreboard - The Scoreboard is a built-in cocotb class that is used to compare the actual outputs to expected outputs. Monitors are added to the scoreboard for the actual outputs, and the expected outputs can be either a simple list, or a function that provides a transaction.
Testfactory - A provision in cocotb that enables us to randomize the test stimulus by varying all the possible 'test knobs' in all the possible permutations. This, I believe, is one of the most powerful features of cocotb, since it saves lots of time that would otherwise be needed to write individual tests for each possible configuration. However, it does not let us modify the parameter declarations and conditional compilation flags in our Verilog module, but there is a way around that, as we'll see further on.
Logging - A facility used to generate meaningful and helpful logs and messages that aid in debugging later.
monitors - built-in cocotb classes that can observe certain signals of a particular interface and enable scoreboarding, logging, and other features on those signals.
drivers - Input generating functions that can continuously create input stimulus in the required format.
The setup:
Cocotb works on both Linux and Windows. In my case, I'm using the development version of cocotb, built directly from source, on Windows, in tandem with Icarus Verilog for Windows. This also comes with the GTKWave waveform viewer, which makes it effortless to check the waveforms without having to use some bulky IDE.
NOTE: If you're using Windows, it is advised to use the Anaconda environment manager. Otherwise your life could become miserable. More installation instructions can be found in the official documentation.
There are a few elements required for a cocotb testbench:
- The Makefile
- The test module (python file)
- The HDL files
I like the fact that cocotb uses a Make-based flow. It is really good practice and encourages code reuse by making it easier to call modules from all over your filesystem instead of making a different version of every module inside every project. It also helps when you have different versions of modules with similar names and functionalities, since the path to the Verilog files is specified in the Makefile. This might look like overkill for hobbyists just trying to get their little project done, but most companies use some form of setup that mimics this flow, so it is a useful habit to develop.
Another good practice is to maintain a proper folder structure to separate your HDL source files from your test files which may include HDL testbenches, python files or the waveform/log files generated during simulation runs. Here is what I'm using:
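Roughly, it is a layout along these lines (the exact file names here are assumptions based on the description above):

```
project/
├── hdl/               # HDL source files
│   └── mac_manual.v
└── tests/             # testbench and simulation outputs
    ├── Makefile
    └── test_mac.py
```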
The HDL:
If you're coming here directly and need some motivation for the module under test, it is a fixed-point MAC (Multiply and Accumulate) unit that I designed to add fixed-point arithmetic capabilities to a convolutional neural network project of mine. That article can be found here.
In a previous article, as mentioned above, I tested this module using Python to generate inputs and run the simulator via OS commands. Here is the top module for reference:
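The original 62-line listing isn't reproduced here; a minimal sketch of such a parameterized fixed-point MAC could look like the following (the port names, control signals, and the exact truncation of the product are assumptions and may differ from the real module):

```verilog
module mac_manual #(
    parameter N = 16,   // total bits per operand
    parameter Q = 12    // fractional bits among them
)(
    input  wire                 clk,
    input  wire                 ce,     // clock enable
    input  wire                 sclr,   // synchronous clear
    input  wire signed [N-1:0]  a,
    input  wire signed [N-1:0]  b,
    input  wire signed [N-1:0]  c,
    output reg  signed [N-1:0]  p       // p = a*b + c in fixed point
);

    // Full-precision product, later truncated back to N bits with Q fractional bits
    wire signed [2*N-1:0] mult = a * b;

    always @(posedge clk) begin
        if (sclr)
            p <= {N{1'b0}};
        else if (ce)
            p <= mult[N-1+Q:Q] + c;
    end

endmodule
```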
The Makefile:
Here is how the Makefile looks for our testbench:
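Again, a minimal sketch rather than the exact file; the source path, the module names, and the Icarus parameter-override flags below are assumptions:

```make
SIM ?= icarus
TOPLEVEL_LANG ?= verilog

# Path to the HDL sources, relative to this tests/ folder
VERILOG_SOURCES += $(PWD)/../hdl/mac_manual.v

# Override the module parameters on the iverilog command line
COMPILE_ARGS += -P mac_manual.N=16 -P mac_manual.Q=12

# Name of the top-level HDL module
TOPLEVEL = mac_manual

# Name of the Python test file (without the .py extension)
MODULE = test_mac

# Pull in cocotb's standard simulation rules
include $(shell cocotb-config --makefiles)/Makefile.sim
```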
The cocotb testbench:
Cocotb does not specify how you should write a testbench. However, there are lots of examples in its official repository and they give you an idea of how to go about writing an automated testbench using cocotb.
Also, some very good projects in the opensource world have started using cocotb for their verification and that gives us starting points to work from. This is one such repository by alexforencich. We'll be taking the template from the testbenches in this project and also from the examples given in the official repository here.
Let's take a look at the testbench I wrote for the 'mac_manual' module that we tested in the last article.
Here we're writing only a basic test that creates and applies one set of inputs (a, b, c) and checks the output P against the golden value. All this purely from Python!
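The full 138-line testbench isn't reproduced here, but its core is captured by the following sketch. It uses the current cocotb API (cocotb.start_soon; older code used cocotb.fork), and the signal names, the reset/enable handshake, and the truncation in the golden model are assumptions that must match your RTL:

```python
import random

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge, Timer


def to_fixed(x, N, Q):
    """Convert a float into an N-bit fixed-point integer with Q fractional bits."""
    return int(round(x * (1 << Q))) & ((1 << N) - 1)


@cocotb.test()
async def mac_basic_test(dut):
    """Apply one set of (a, b, c) and compare p against a Python golden model."""
    N, Q = 16, 12

    # Run a 10 ns clock in the background for the whole test
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

    # Synchronous clear for one cycle
    dut.sclr.value = 1
    dut.ce.value = 0
    await RisingEdge(dut.clk)
    dut.sclr.value = 0

    # One set of random positive fixed-point operands
    A = to_fixed(random.uniform(0, 1), N, Q)
    B = to_fixed(random.uniform(0, 1), N, Q)
    C = to_fixed(random.uniform(0, 1), N, Q)
    dut.a.value = A
    dut.b.value = B
    dut.c.value = C
    dut.ce.value = 1

    # Let the result propagate through the registered output
    await RisingEdge(dut.clk)
    await Timer(1, units="ns")

    # Golden model in plain Python; the truncation (>> Q) must match the RTL
    expected = (((A * B) >> Q) + C) & ((1 << N) - 1)
    got = dut.p.value.integer
    assert got == expected, f"p = {got:#06x}, expected {expected:#06x}"
```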
NOTE: You may occasionally see the use of '<=' for signal assignment in cocotb testbenches and '=' for the rest. This might tempt you into relating it to the blocking and non-blocking assignments in Verilog, except there is no such thing in cocotb. The simulator needs to be stopped anyway to assign new values, and both operators give the same results, as discussed on the official cocotb repository.
Now we can run this testbench via the Makefile. Just cd into the tests folder and type 'make', and you should see a huge log being printed. It should also show you the result of the tests that you've written.
However, despite all this power, there's one important thing we're still missing. Take a look at the code and you'll see that we have a couple of parameters, 'N' and 'Q', representing the total number of bits used to represent each number and the number of fractional bits among them. During this test, the values of these parameters were fixed at (N,Q) = (16,12). You can still change them via the Makefile, where they are overridden by passing them as arguments to the 'iverilog' command, but that means you cannot change the parameters between tests or within a test. This can be very limiting, because checking several possible combinations of parameters is an integral part of verifying highly parameterized code (which good code usually is). We need to make sure that everything works in every configuration. Of course you can re-run the test for each configuration of parameters, but it turns out there's an even better way to automate this stuff.
In comes Cocotb-test!
Cocotb-test is another Python framework, built around pytest, which is a unit-testing framework very commonly used by Python developers. It enables us to run multiple versions of the same test, each time varying some parameter of the configuration by setting the parameters as environment variables that are picked up before each test run begins. To simplify: pytest acts like a wrapper around our cocotb testbench and sets it up with different environment variables for each test.
This way, we can give pytest all the possible values of each parameter that we'd like to vary, and it will generate tests for all the possible combinations of these parameters. Then we can easily check which combination fails.
Let's add this ability to our testbench above...
Before that, we'll need to do just one more bit of setup. Pytest uses a tox.ini file to figure out things like the directory structure and Python version, among other things. Here is the .ini file that I'm using:
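The original file isn't reproduced here; a minimal sketch might look like this (the Python version, dependencies, and paths are assumptions):

```ini
[tox]
envlist = py38
skipsdist = True

[testenv]
deps =
    pytest
    cocotb
    cocotb-test
commands = pytest

[pytest]
testpaths = tests
```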
- The directory structure now looks something like this:
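Something along these lines (again, the exact names are assumptions):

```
project/
├── hdl/
│   └── mac_manual.v
├── tests/
│   ├── Makefile
│   ├── test_mac.py
│   └── sim_build/     # created by pytest at run time
└── tox.ini
```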
- With that out of the way, let's move on to the actual testbench. First we'll need a couple of extra imports:
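Most likely along these lines (cocotb-test exposes its test runner as a 'run' function):

```python
import os

import pytest
from cocotb_test.simulator import run
```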
- Next, we'll write our meta-test using the run function we just imported...
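A sketch of such a meta-test is shown below; the file paths, module names, and parameter values are assumptions, but the overall shape follows the cocotb-test examples:

```python
@pytest.mark.parametrize("N,Q", [(16, 12), (32, 24)])
def test1(N, Q):
    run(
        verilog_sources=["../hdl/mac_manual.v"],    # HDL sources
        toplevel="mac_manual",                      # top-level HDL module
        module="test_mac",                          # cocotb test file (no .py)
        parameters={"N": N, "Q": Q},                # Verilog parameter overrides
        extra_env={"PARAM_N": str(N), "PARAM_Q": str(Q)},
        sim_build=f"sim_build/N{N}_Q{Q}",           # separate build dir per combo
    )
```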
The important thing to see here is the parameterization. In the above code:
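it is the parametrize decorator (with the values assumed in the sketch above):

```python
@pytest.mark.parametrize("N,Q", [(16, 12), (32, 24)])
```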
This is giving two valid sets of values for the pair N and Q. So pytest will detect two possible tests and run them for you; that is, the test 'test1' (defined above) will be run twice, each time with a different set of (N,Q).
If for example you had given your parameters like this:
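say, as two stacked parametrize decorators with three values each (these specific values are assumptions, chosen to be consistent with the combinations listed below):

```python
@pytest.mark.parametrize("Q", [16, 12, 8])
@pytest.mark.parametrize("N", [16, 32, 64])
```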
then pytest would have run nine tests for you, each representing one combination of the two variables N and Q ((16,16), (16,12), ...). To check this, you can access these parameters from within the tests by using os.getenv("PARAM_N") or os.getenv("PARAM_Q").
Once the tests are run, the sim_build folder (created by pytest inside the tests folder) contains the reports separately for each test in a conveniently named set of folders.
As mentioned earlier, we have only just gotten started, and there's so much more we can do to improve our verification with (and even without) cocotb and its features. In the next article, I'll be exploring other features like drivers, monitors, Testfactory, scoreboards, and CI/CD integration, among other things. Stay tuned!