Convolutional Radio Modulation Recognition Networks

In an arXiv pre-publication report out today, Johnathan Corgan and I study the adaptation of convolutional neural networks to the task of modulation recognition in wireless systems.   We use a relatively simple two-layer convolutional network followed by two dense layers, a much smaller network than those required for tasks such as ImageNet/ILSVRC.

[Figure: the two-convolutional-layer, two-dense-layer network architecture]
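For a concrete picture of this style of architecture, here is a minimal sketch in Keras. It is not the paper's exact code: the layer widths, the 2x128 I/Q input shape, and the 11-class output are illustrative assumptions.

from keras.models import Sequential
from keras.layers import Conv2D, Dense, Dropout, Flatten, Reshape

# Two convolutional layers followed by two dense layers; all sizes here
# are assumptions for illustration, not the exact values from the paper.
model = Sequential([
    Reshape((1, 2, 128), input_shape=(2, 128)),  # I/Q samples as a 2x128 "image"
    Conv2D(64, (1, 3), activation="relu", data_format="channels_first"),
    Dropout(0.5),
    Conv2D(16, (2, 3), activation="relu", data_format="channels_first"),
    Dropout(0.5),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(11, activation="softmax"),             # one output per modulation class
])
model.compile(loss="categorical_crossentropy", optimizer="adam")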

We demonstrate that blind time-domain feature learning can perform extremely well at the task of modulation classification, achieving very high accuracy on both clean and noisy data sets.

[Figure: confusion matrix for the convolutional network classifier]

Comparing classifier performance across a wide range of signal-to-noise ratios, we demonstrate that the network outperforms a number of more traditional expert classifiers built on zero-delay cumulant features by a large margin.

[Figure: classification accuracy vs. SNR for each classifier]
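For context on what those expert features look like, the snippet below sketches one commonly used zero-delay cumulant, the fourth-order C42 statistic, estimated from complex baseband samples. This is our illustration of the feature family, not the exact baseline feature set from the paper.

import numpy as np

def c42(x):
    # Estimate the fourth-order cumulant C42 = cum(x, x, x*, x*):
    # C42 = E[|x|^4] - |E[x^2]|^2 - 2*E[|x|^2]^2 for a zero-mean process.
    x = x - x.mean()                 # remove any DC offset first
    m20 = np.mean(x * x)             # second-order moment E[x^2]
    m21 = np.mean(np.abs(x) ** 2)    # E[|x|^2]
    m42 = np.mean(np.abs(x) ** 4)    # E[|x|^4]
    return m42 - np.abs(m20) ** 2 - 2 * m21 ** 2

Expert classifiers typically compute a handful of such moments and cumulants and feed them to a conventional classifier.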

While this is preliminary work, we think the results are exciting, and we expect many additional promising results to come from the marriage of the software radio and deep learning fields.

For much more detail on these results, please see our paper!  http://arxiv.org/abs/1602.04105

3D Printing a USRP B200 Mini Case

2016-02-10

[Edit] Download or order this model via the Shapeways link below.

USRPs are incredibly handy devices: they let us play all over the spectrum with the signal processing algorithms and software of the day.  The USRP B210 was an awesome step in practicality, requiring only USB 3 for I/O and power and minimizing the number of cables to haul around.   However, its size and chunky case options have been a source of frustration.   It's a nice thing to always keep with you, but when packing bags and conserving space, it just can't always make the cut.

Ettus Research recently changed all that with the USRP B200 Mini, a very compact version of the B200 which takes up virtually no space but frustratingly doesn't ship with a case to protect it from abuse!   Carrying around padded electrostatic wrap bags isn't particularly appealing or protective, so I set about putting together a functional case that would at least protect the device from physical abuse.

The top and bottom case renderings of the resulting design are shown below; the case is about the size of a stack of business cards. As long as a GPS-DO isn't needed, this is now pretty much the perfect compact carrying companion for GNU Radio.

[Figures: bottom and top case renderings]

The first print, on a pretty low-end 3D printer, is shown below.   After a few tweaks, fitment around the SMA plugs is very tight and the board sits snugly in the case; a bit of space was added around the USB port to allow variously sized plugs to clear it.

[Photo: first print of the case]

For scale, it is shown here next to a full-size B210 and its case.   While much of the design of the underlying board is the same, the size reduction, the more tightly fitted case, and the resulting hauling size of this device are pretty amazing!

[Photo: the B200 Mini case next to a full-size B210 and case]

The fit isn't completely perfect and could use a little more clearance in a few spots, but it shouldn't put too much tension on any overly fragile areas, and it seems like it could take quite a bit of a beating.   We'll see how long this one survives!

For anyone interested in having one of these, the STL case models have been made available for purchase or download on Shapeways at https://www.shapeways.com/shops/osh

A bit more eye candy below …

[Photos: additional views of the case, "the micro-shibu"]

Note: I would suggest using something like #4-40 thread, 3/8″ length screws for securing this; see the links below.

Black #4-40, 3/8″ Machine Screws
Black #4-40 Hex Nut


GNU Radio TensorFlow Blocks

TensorFlow is a powerful Python/numpy expression compiler which supports concurrent GPP and GPU offload of large algorithms.  It has been used largely in the machine learning community, but it has implications for the rapid and efficient implementation of numerous other algorithms in software.   For GNU Radio, it matches up wonderfully with GNU Radio's Python blocks, which pass signal processing data around as numpy ndarrays that can be handed directly to and from TensorFlow-compiled functions.   This is very similar to what I did with gr-theano, but with the caveat that TensorFlow has native complex64 support without any additional patching!  This makes it a great candidate for dropping in highly computationally complex blocks for prototyping, and for leveraging highly concurrent GPUs when there is gross data parallelism that the compiler can easily exploit.

A quick example of dropping TensorFlow into a Python block might look something like this:

import numpy
import tensorflow
from gnuradio import gr

class add(gr.sync_block):
    def __init__(self):
        gr.sync_block.__init__(self,
            name="tf_add",
            in_sig=[numpy.complex64, numpy.complex64],
            out_sig=[numpy.complex64])
        # symbolic inputs and the expression to evaluate at run time
        self.x = tensorflow.placeholder("complex64")
        self.y = tensorflow.placeholder("complex64")
        self.op = tensorflow.add(self.x, self.y)
        self.sess = tensorflow.Session()

    def work(self, input_items, output_items):
        # evaluate the compiled expression, feeding input ndarrays directly
        rv = self.sess.run([self.op], feed_dict={self.x: input_items[0],
                                                 self.y: input_items[1]})
        output_items[0][:] = rv[0]
        return len(output_items[0])

We simply define self.op as the algorithmic expression we want to compute at run time; TensorFlow compiles the kernel down to the GPP or GPU depending on available resources and handles all of the data I/O behind the scenes, while we just pass ndarrays in and out of the work function.

[Figure: GNU Radio flowgraph containing the TensorFlow add block]

Dropping this block into a new gr-tf out-of-tree module, we can rapidly plug it into a working GNU Radio flowgraph stream! Clearly there are algorithms which make a lot more sense to offload than "add_cc"; things like streaming CAF or MTI computations with lots of concurrent integration come to mind and would be pretty trivial to add.  For now this is just a proof of concept, but it seems like a great way to prototype such things in the future!
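As a quick sanity check, something like the following exercises the block in a flowgraph (a sketch which assumes the add class above is importable on the Python path):

from gnuradio import gr, blocks

# Feed two short complex streams through the TensorFlow-backed adder
tb = gr.top_block()
src0 = blocks.vector_source_c([1 + 1j, 2 + 2j, 3 + 3j])
src1 = blocks.vector_source_c([10 + 0j, 20 + 0j, 30 + 0j])
adder = add()                     # the block defined above
snk = blocks.vector_sink_c()
tb.connect(src0, (adder, 0))
tb.connect(src1, (adder, 1))
tb.connect(adder, snk)
tb.run()
print snk.data()                  # expect (11+1j), (22+2j), (33+3j)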

The module is available on github @ https://github.com/osh/gr-tf/

Simple Python Bayesian Network Inference with PyOpenPNL

The state of Python libraries for performing Bayesian graph inference is a bit frustrating.   libpgm is one of the few libraries which seems to exist, but it is quite limited in its abilities.   OpenPNL from Intel is a great C++ implementation of the Matlab Bayes Net Toolbox, but neither its C++ nor its Matlab interface is particularly convenient.   So we set about properly SWIG-wrapping OpenPNL for Python, where it can be used rapidly.   Additionally, some of the build infrastructure for OpenPNL was a bit dated and needed some cleanup to work on modern Linux systems.

Our updated modules for both of these can be found at: https://github.com/PyOpenPNL

Not all of OpenPNL has been wrapped yet, and some of the Python interface is still a little rough, but it does work.   Here we'll work through the canonical Bayes net example from Russell and Norvig, also used in the Matlab BNT docs.  The Bayes network of interest is illustrated below.

[Figure: the classic sprinkler Bayes net]

We have a simple graph with four discrete nodes, and we would like to instantiate the model, provide evidence, and infer marginal probabilities given this evidence.

We focus on the example included in the repo, which can be viewed in full here: simple_bnet.py

The syntax for defining the DAG's adjacency matrix and the conditional probability distribution types for the Bayes net reads as follows:

import numpy as np
import openpnl

nnodes = 4
# Set up the graph: the DAG adjacency matrix must be square with a zero diagonal
dag = np.zeros([nnodes, nnodes], dtype=np.int32)
dag[0, [1, 2]] = 1   # node 0 -> nodes 1 and 2
dag[1, 3] = 1        # node 1 -> node 3
dag[2, 3] = 1        # node 2 -> node 3
pGraph = openpnl.CGraph.CreateNP(dag)

# Set up the node types: four discrete nodes, two states each
types = openpnl.pnlNodeTypeVector()
types.resize(nnodes)
isDiscrete = 1
for i in range(nnodes):
    types[i].SetType(isDiscrete, 2)

# Node-to-type associations (all nodes use type entry 0 here)
nodeAssoc = openpnl.toConstIntVector([0] * nnodes)

# Make the Bayes net ...
pBNet = openpnl.CBNet.Create(nnodes, types, nodeAssoc, pGraph)

We can verify the DAG structure by plotting it with python-networkx, as shown below, and confirming that it matches our goal.
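One possible way to produce that plot (a sketch of our own; assigning Sprinkler and Rain to nodes 1 and 2 follows the usual BNT ordering and is our assumption):

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

# Build a directed graph from the adjacency matrix defined above
labels = {0: "Cloudy", 1: "Sprinkler", 2: "Rain", 3: "WetGrass"}
G = nx.DiGraph(np.argwhere(dag).tolist())
nx.draw_networkx(G, labels=labels, node_color="lightblue")
plt.show()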

[Figure: the DAG plotted with networkx]

In this case we have allocated a Bayes net with 4 nodes, each with two discrete states (T/F).   Next we assign CPDs to each of the nodes.

pBNet.AllocFactors()
# Attach a tabular CPD to each node; each table is ordered over the
# node's domain (parents first, then the node itself)
for (node, cpdvals) in [
        (0, [0.5, 0.5]),
        (1, [0.8, 0.2, 0.2, 0.8]),
        (2, [0.5, 0.9, 0.5, 0.1]),
        (3, [1, 0.1, 0.1, 0.01, 0, 0.9, 0.9, 0.99]),
        ]:
    parents = pGraph.GetParents(node)
    print "node: ", node, " parents: ", parents
    domain = list(parents) + [node]
    cCPD = openpnl.CTabularCPD.Create(pBNet.GetModelDomain(),
                                      openpnl.toConstIntVector(domain))
    cCPD.AllocMatrix(cpdvals, openpnl.matTable)
    cCPD.NormalizeCPD()
    pBNet.AttachFactor(cCPD)

Having assigned these known distributions, we have now defined the DAG and the corresponding CPDs.   We can allocate an inference engine and begin posing problems to it.

We start by entering evidence that Cloudy=False and then measure the resulting marginal of WetGrass.

# Set up the inference engine (Pearl belief propagation)
infEngine = openpnl.CPearlInfEngine.Create(pBNet)
# Problem 1: P(W | C=False)
evidence = openpnl.mkEvidence(pBNet, [0], [0])   # observe node 0 in state 0
infEngine.EnterEvidence(evidence)
infEngine.pyMarginalNodes([3], 0)                # marginalize onto node 3
infEngine.GetQueryJPD().Dump()

This produces the output value [0.788497 0.211503], shown below when plotted.

[Figure: marginal of WetGrass given Cloudy=False]

Changing the evidence to Cloudy=True, we can compute the same marginal and plot it for comparison.

# Problem 2: P(W | C=True)
evidence = openpnl.mkEvidence(pBNet, [0], [1])   # observe node 0 in state 1
infEngine.EnterEvidence(evidence)
infEngine.pyMarginalNodes([3], 0)
infEngine.GetQueryJPD().Dump()

[Figure: marginal of WetGrass given Cloudy=True]

This is an extremely simple Bayes net, but it illustrates how straightforward it now is to set up and work with such problems using the PyOpenPNL interface.   Hopefully this project will be of use to a number of people!   We'll be adding and testing Python API support largely on an as-needed basis.

Installing OpenPNL and PyOpenPNL is now relatively straightforward; I've gone through and put together some relatively sane build systems to make them usable. They can be built roughly as follows:

git clone https://github.com/PyOpenPNL/OpenPNL.git
cd OpenPNL && ./configure && make && sudo make install
git clone https://github.com/PyOpenPNL/PyOpenPNL.git
cd PyOpenPNL && sudo python setup.py build install

This is of course only scratching the surface of BN/PGM-style inference.  OpenPNL supports DBNs, GMM-based continuous distributions, and numerous CPD- and structure-learning methods as well as inference engines under the hood; much more to come soon.
