Midterm Evaluation Update



Hello Community!

As the midterm evaluations are upon us, it's time to share my progress (or lack thereof) with you. I wish more of my original plans had worked out, so this feels quite unsatisfactory to me; here's where I stand and what I'm planning to do about it:

Benchmarking

Whole-application benchmarks

Like gr-benchmark, the key to these is the clever use of what is available as Performance Counters from within applications. This means we should be able to read not only how much time each block has consumed in comparison to the total time, but also how much of that was spent doing actual computation, searching for tags, etc. As described, this calls for a general approach that makes it easier for block developers to measure and publish these values. Sadly, this hasn't gotten very far.
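
To make that concrete: with a GNU Radio built with performance counters enabled, the per-block pc_* getters can already be read from Python. A minimal sketch (the member-walk approach and the getter selection are my own, and the exact getter set may differ between GNU Radio versions):

```python
def report_perf_counters(tb):
    """Walk the members of a GRC-generated top_block instance and print
    performance-counter readings for every block that exposes them."""
    for name, obj in vars(tb).items():
        # pc_work_time_avg()/pc_nproduced_avg() are part of gr::block's
        # performance-counter API; they only return useful values if
        # GNU Radio was built with performance counters enabled.
        if hasattr(obj, 'pc_work_time_avg'):
            print("%-30s work_time_avg=%g nproduced_avg=%g" %
                  (name, obj.pc_work_time_avg(), obj.pc_nproduced_avg()))
```

Called after tb.run() (or while the flow graph is still running), this gives the kind of per-block numbers gr-benchmark builds on.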

Data Extraction Blocks

The remote_agent (see below) automatically extracts the vector_sink_*s from the flow graphs it runs and collects their data; this seemed so much more intuitive and non-repetitive that I have, for now, abandoned the idea of writing new extraction blocks. With the current agent structure, however, any property of a flow graph can be measured.
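
As an illustration of what the agent does there, a rough sketch of that extraction (the duck-typing check is mine; the exact wrapper class names differ between GNU Radio versions):

```python
def extract_vector_sinks(tb):
    """Collect the data of every vector_sink_* member of a GRC-generated
    top_block instance, keyed by the member name."""
    results = {}
    for name, obj in vars(tb).items():
        # Duck-typing instead of isinstance, since the Python wrappers for
        # vector_sink_* don't expose a convenient common base class.
        if type(obj).__name__.startswith('vector_sink') and hasattr(obj, 'data'):
            results[name] = list(obj.data())  # data() returns the recorded samples
    return results
```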

Infrastructure

This is where most of my time has gone so far -- I made a few bad design choices, so I'm currently in the process of rewriting most of my dispatcher infrastructure to be ZeroMQ-based instead of relying on execnet.

RPC Framework

RPC with Python is rather easy. Execnet has a function remote_exec(obj), which takes a Python source string, a method name or a module name and executes it on a remote target, after bootstrapping itself there autonomously (i.e. you only need SSH access; execnet will transfer itself and the module source code over there without your help). Sadly, I relied too much on things working remotely the way they do when I use a local gateway. There is one caveat, though: remote_exec does not guarantee that state is kept between calls. That basically means that everything you want to do has to happen in one standalone module file. Also, execnet's communication relies on channels -- which are fine, but things like "this gateway dropped out, you can't use this channel to communicate anymore" mean a lot of exception handling everywhere, and execnet's channel model does not seem to have been written with continuously running servers, especially multithreaded ones, in mind.
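
For context, the basic execnet round trip looks roughly like this (the host name and the remote snippet are placeholders); note that every remote_exec starts from a clean slate, which is exactly the statelessness problem described above:

```python
import execnet

# "user@remotehost" is a placeholder; execnet bootstraps itself over plain SSH.
gw = execnet.makegateway("ssh=user@remotehost")

# remote_exec ships this source to the remote interpreter and runs it there;
# the 'channel' object is injected on the remote side by execnet.
channel = gw.remote_exec("""
    try:
        import gnuradio
        channel.send(("ok", gnuradio.__file__))
    except ImportError as e:
        channel.send(("failed", str(e)))
""")

print(channel.receive())  # blocks until the remote side has answered
gw.exit()
```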

Since that was clearly not the direction I wanted to take, I have reduced my execnet usage to two things:

bootstrap_agent

This agent tests whether you can import gnuradio (or anything else you ask for) on the remote machine, can download and transfer the pybombs package to the remote, extract such packages to a common path, and call pybombs to install GNU Radio into a definable prefix. This prefix can later be used when setting the Python library search path and the binary and shared object paths.
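
Using such a prefix later on essentially means exporting three environment variables; a small sketch of what that amounts to (the site-packages layout is an assumption and depends on the Python version and distribution):

```python
import os

def prefix_environment(prefix, pyver="2.7"):
    """Environment entries for running flow graphs against a GNU Radio
    installed into `prefix` by pybombs. The site-packages path is an
    assumption; some distributions use dist-packages instead."""
    env = dict(os.environ)
    prepend = lambda var, path: path + os.pathsep + env.get(var, "")
    env["PATH"] = prepend("PATH", os.path.join(prefix, "bin"))
    env["LD_LIBRARY_PATH"] = prepend("LD_LIBRARY_PATH",
                                     os.path.join(prefix, "lib"))
    env["PYTHONPATH"] = prepend("PYTHONPATH",
                                os.path.join(prefix, "lib",
                                             "python" + pyver, "site-packages"))
    return env
```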

remote_agent

The actual ZeroMQ server. It has multiple ZMQ sockets, so I might have to explain a few ZMQ concepts first:

  • ZMQ sockets behave somewhat like sockets, done right. You can have different types of sockets for different types of information flow, and ZMQ will make sure that data gets where you want it
  • ZMQ Request/Reply Sockets: Like their unixoid brothers, these are point-to-point connections. Functionally, these behave like you would expect from a client (Request) or server (Reply) socket
  • ZMQ Push/Pull Sockets: These behave more like a global round-robin queue: a number of Pull sockets may each consume messages from a Push socket (each message is popped from the queue and delivered to exactly one of them), or multiple Push sockets might feed a single Pull one
  • ZMQ Publish/Subscribe Sockets: Think GNU Radio message passing. An arbitrary number of subscribers might subscribe to a publisher, and all published messages reach all subscribers (they have individual queues)
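
To make these patterns a bit more concrete, here is what creating one socket of each kind looks like with pyzmq (addresses are placeholders):

```python
import zmq

ctx = zmq.Context()

# Request/Reply: point-to-point, strictly alternating send/recv.
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5550")

# Push/Pull: a work queue; each message is delivered to exactly one puller.
push = ctx.socket(zmq.PUSH)
push.bind("tcp://*:5551")

# Publish/Subscribe: every connected subscriber sees every published message.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5552")

# The counterparts on the other end are REQ, PULL and SUB sockets that
# connect() to these addresses; a SUB socket additionally needs
# sub.setsockopt(zmq.SUBSCRIBE, b"") to receive anything.
```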

So there are three interfaces (four if you count the execnet bootstrap) in my remote_agent:

  1. Using execnet, the remote_agent gets bootstrapped and an instance of the class of the same name gets started
  2. Upon instantiation, a reply socket binds to the address received via the original execnet channel
  3. The remote agent receives commands on that reply channel; typically, these cover things like setting a unique name and setting the Python paths, but especially setting up the addresses for the dispatcher's Push socket
  4. The Pull socket (given the appropriate control socket command) connects to the dispatcher's Push socket; this way, multiple remote_agents can easily form a pool for tasks that need to be computed by only one agent (e.g. speeding up a laaarge BER curve by letting the computers in your lab run the simulation for parts of the noise power range each)
  5. The Subscribe socket attaches (on command) to the dispatcher's Publish socket. This is convenient for all-have-to-do-this tasks, like running your new trellis module on the six different computing platforms and two VMs for testing and benchmarking
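
Put together, the agent side is essentially one poll loop over those sockets. A simplified sketch (the command format, the addresses and the run_task placeholder are mine, not the actual remote_agent code):

```python
import zmq

def run_task(task):
    """Placeholder: would set up and execute the flow graph the task describes."""
    print("running task", task)

def agent_loop(control_addr):
    ctx = zmq.Context()
    control = ctx.socket(zmq.REP)       # commands from the dispatcher
    control.bind(control_addr)

    pull = ctx.socket(zmq.PULL)         # pooled tasks (round robin)
    sub = ctx.socket(zmq.SUB)           # broadcast tasks (everyone runs them)
    sub.setsockopt(zmq.SUBSCRIBE, b"")

    poller = zmq.Poller()
    for sock in (control, pull, sub):
        poller.register(sock, zmq.POLLIN)

    while True:
        for sock, _ in poller.poll():
            if sock is control:
                cmd = control.recv_json()
                if cmd.get("type") == "connect_push":
                    pull.connect(cmd["address"])     # join the task pool
                elif cmd.get("type") == "connect_pub":
                    sub.connect(cmd["address"])      # listen for broadcasts
                control.send_json({"status": "ok"})
            else:
                run_task(sock.recv_json())
```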

Task Storage

Everything is done in JSON (ZMQ even has nice wrapper functions that receive and send JSON and hand you Python objects), so tasks are stored directly in that format and can be loaded from disk.
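
Those wrappers are pyzmq's send_json()/recv_json(); the tasks on disk are then just ordinary JSON files (the file layout below is only an illustration):

```python
import json

def load_tasks(path):
    """Tasks are stored as a plain JSON list of task dictionaries."""
    with open(path) as f:
        return json.load(f)

def dispatch_tasks(push_socket, tasks):
    for task in tasks:
        push_socket.send_json(task)   # pyzmq serializes the dict on the wire
```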

Result Gathering

Results from the remote agent contain the ID of that agent; however, I have yet to find consistent ways to collect and tabulate these JSON strings/Python dictionaries.

Integration

The integration objective is to extend the GNU Radio Companion in such a way that defining benchmarking parameter sweeps becomes possible from within GRC. That means I want to be able to choose "distributed benchmark" as the generate option and get an interface for defining both the overall parameters to test and the machines that are subject to these benchmarks.

For now, I have settled on an approach that works with the default "no GUI" generate option: GRC generates a top_block subclass. By walking through the members of an instance of that class, we can find the vector_sinks and extract data from them. For parametrization, users can use the normal variable blocks, and the remote_agent translates parametrization tasks into calls to the setters of these variables prior to running the flow graph (see the sketch below). Note that this is not the desired level of integration, since it neither allows the user to define which variables should take a range of values nor assists them by offering a list of the existing variables.
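
A sketch of that parametrization step (the parameter-dict format is illustrative, not the actual task format): the agent simply calls the set_<variable>() methods GRC already generates before running the flow graph.

```python
def run_parametrized(top_block_cls, parameters):
    """Instantiate a GRC-generated top_block, apply one parameter set by
    calling the generated set_<variable>() methods, run it, and return
    the contents of its vector sinks (see the extraction sketch above)."""
    tb = top_block_cls()
    for name, value in parameters.items():
        setter = getattr(tb, "set_" + name, None)
        if setter is None:
            raise AttributeError("flow graph has no variable named %r" % name)
        setter(value)
    tb.run()
    return extract_vector_sinks(tb)
```

A BER sweep then boils down to mapping this over a list of parameter dictionaries, one per noise power value.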

From my current point of view, the most elegant way to do that is to extend the Cheetah template used to generate the Python files, adding @property decorators along with custom decorators for range definition; this is still in the "where do I integrate that into GRC without messing up the concept too badly" phase.
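
What I have in mind would look roughly like this in the generated code; the sweepable decorator is hypothetical and only meant to attach range metadata that a dispatcher could later discover:

```python
def sweepable(start, stop, step):
    """Hypothetical decorator: attach a sweep range to a generated property."""
    def mark(prop):
        prop.fget.sweep_range = (start, stop, step)
        return prop
    return mark

class ExampleTopBlock(object):
    """Stands in for a GRC-generated top_block subclass."""
    def __init__(self):
        self._noise_voltage = 0.5

    @sweepable(0.0, 2.0, 0.1)
    @property
    def noise_voltage(self):
        return self._noise_voltage

    @noise_voltage.setter
    def noise_voltage(self, value):
        self._noise_voltage = value  # a real flow graph would also
                                     # reconfigure the affected blocks

# A dispatcher (or GRC) can now list everything that may take a range:
sweep_vars = dict((name, p.fget.sweep_range)
                  for name, p in vars(ExampleTopBlock).items()
                  if isinstance(p, property) and hasattr(p.fget, 'sweep_range'))
```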
