Running larger benchmarks locally - now easy!



The core competence in doing science is reproducibility. Everyone who has achieved cold fusion in their kitchen knows that; so why would Software Defined Radio developers settle for anything less than a fully integrated, graphical workflow (that, of course, can easily be scripted)?

Currently available (and in a really badly documented state, to make the SDR folks feel as much at home as the cold fusion experts) is the code for all the tools you'll need to define bigger tasks and run them locally.

That includes my Measurement Toolbox Project as well as my version of the GNU Radio master branch, which exists just to add a button to the GNU Radio Companion :).

The Workflow

Now, without delving too deep into the software architecture below, the graphical workflow for running flowgraphs should look something like this:


Now, how does that work in detail?

Let's move along a very simplistic example, so we don't get confused in the process.

Developing Applications in GRC

Now, a flowgraph that we want to test with a lot of different parametrizations should run with at least one of them; so let's create such a flow graph:


As you can see, absolutely no magic happens here: We have a constant source for which we set the value based on our value variable:


The same happens for the number of samples (as limited by the head block) and the length variable.

Then there's the sink. It's a plain, old, boring, vector_sink, available everywhere. I decided to name it value_sink.
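Outside of GRC, the semantics of this toy flow graph are trivial to state in plain Python. The following sketch does not use GNU Radio at all; it just models what value_sink will contain after a run, with the same names as in the example:

```python
def run_flowgraph(value, length):
    """Model of the example flow graph: a constant source emitting
    `value`, limited to `length` samples by the head block, and
    collected in `value_sink` (here simply a Python list)."""
    value_sink = [value] * length  # head block stops the stream after `length` samples
    return value_sink

print(run_flowgraph(0.5, 5))  # → [0.5, 0.5, 0.5, 0.5, 0.5]
```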

Phew. That was actually a bit boring. So let's now have a look at what my GNU Radio branch actually introduces into the GRC:


Defining Tasks using task_frontend

Clicking that button saves the Flow Graph to a file, and runs task_frontend with that file:


What we see here is the configuration tab of the task definition user interface. The instruction "RUN_GRC" is selected: an option that embeds the source code of our GRC file into the task, and only generates a Python file when running.

Now, since we know (ok, we just assume this for the sake of this example) that we have used the Companion to create a flow graph whose Python implementation we install as a module in our out-of-tree module, we want to change that to "RUN_FG":


I've already filled in my own module's name, mtb, and the name of the Python module, extraction_test_topblock.
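With those two names, loading the top block class boils down to a dynamic import. The helper below is purely illustrative (it is not task_frontend's actual API, and the class name argument is an assumption); it just shows the mechanism:

```python
import importlib

def load_topblock(package, module, class_name):
    """Import a flow graph top block class from an installed
    out-of-tree module, e.g.
    load_topblock("mtb", "extraction_test_topblock", "extraction_test_topblock").
    Hypothetical helper for illustration, not task_frontend API."""
    mod = importlib.import_module(f"{package}.{module}")
    return getattr(mod, class_name)
```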

Now that we have the task defined as is, we should have a look at the available sinks. These are the points where we actually extract data out of the flow graph after running it. Analyzing the flow graph, the task frontend has already filled in every sink it could find.

Now, having defined only one sink in our flow graph, this is adequate. However, you might have multiple vector sinks, and if you're not planning on analyzing data from every one of them, you might just want to remove some sinks.


Ok. Now the interesting part: The parametrization tab. As you can see, it's a table of available parameters. Now, by default these all are set to "STATIC", meaning that at benchmarking time, that specific value will be set and kept constant for all runs.


However, as you can see, I've set the types to LIN_RANGE and LIST respectively. They work as follows:

  • When using LIN_RANGE, one specifies a triplet of numbers: (start, stop, number_of_steps), and at benchmarking time number_of_steps values equidistantly spread over the interval [start, stop] will be set
  • When using LIST, the supplied text will be evaluated and converted to a list, which will be stored in the task file. Valid entries are 1,10,100, things like range(100), or any other Python expression that can be evaluated and converted to a list.
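For illustration, here's a minimal sketch of how these three parameter types could expand into concrete benchmark values (the function and its naming are hypothetical, not task_frontend's actual implementation):

```python
def expand_parameter(ptype, spec):
    """Expand one parameter definition into the list of values to benchmark."""
    if ptype == "STATIC":
        return [spec]                      # one constant value for all runs
    if ptype == "LIN_RANGE":
        start, stop, n = spec              # n equidistant points over [start, stop]
        step = (stop - start) / (n - 1) if n > 1 else 0
        return [start + i * step for i in range(n)]
    if ptype == "LIST":
        return list(eval(spec))            # any Python expression yielding a list
    raise ValueError(f"unknown parameter type {ptype!r}")

print(expand_parameter("LIN_RANGE", (0.0, 1.0, 5)))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
print(expand_parameter("LIST", "range(3)"))          # → [0, 1, 2]
```

At benchmarking time, the concrete runs are then simply the Cartesian product of these per-parameter value lists.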

Now, let's run the task from the File menu!

Visualizing Results

After a while, once all 150 measurement points (3 values for length times 50 for value) have been run, the visualization user interface will appear:


Now, if we don't select a variable from the top list, we can look at the different values our parameters have. I chose the length parameter here, which, to little surprise, assumes the values of 5, 20 and 30. You can also select multiple lines, but since our value parameter is only between 0 and 1, we won't see much here.

Now, let's find out if our experiment is sane: We select value_sink, of which we know that the sink value (reduced to a single number by applying a mean) should always assume the number that is set for the flow graph parameter called value; therefore, we select value in the parameters list:


Luckily, GNU Radio still works, and the number we put in is the number we get.
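That sanity check is easy to state in code. Assuming a sink's data comes back as a list of samples (the mean helper here is my own sketch, not the tool's implementation), reducing it with a mean must reproduce the value parameter:

```python
def mean(samples):
    """Reduce a sink's sample vector to a single number."""
    return sum(samples) / len(samples)

# A constant source set to value=0.5, limited to 20 samples by the head block:
sink_data = [0.5] * 20
assert mean(sink_data) == 0.5  # the number we put in is the number we get
```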

Architecture, Networking and Scripting

As you can see, the task_frontend tool has functionality to load and store tasks as JSON; the simple reason is that you might not always want to run tasks locally, for example if you have little time but many PCs at hand, or if you want to test the same flow graph on a lot of different machines. Now, to understand how network based execution can take place, I'll explain in tomorrow's blog post how the whole system is set up, how to use it to generate versatile flow graphs, and how to use my Python classes in your own applications.
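Storing a task as JSON is nothing magical either. The field names in this round-trip sketch are made up for illustration (they are not task_frontend's actual schema), but they show why JSON is a good fit for shipping tasks to other machines:

```python
import json

# Hypothetical task structure mirroring the example above:
task = {
    "instruction": "RUN_FG",
    "module": "mtb.extraction_test_topblock",
    "sinks": ["value_sink"],
    "parameters": {
        "length": {"type": "LIST", "spec": "[5, 20, 30]"},
        "value": {"type": "LIN_RANGE", "spec": [0.0, 1.0, 50]},
    },
}

text = json.dumps(task, indent=2)  # store: a human-readable task file
restored = json.loads(text)        # load: e.g. on a remote benchmarking machine
assert restored == task            # the round trip is lossless
```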

So long,
