TensorFlow is a powerful Python/NumPy expression compiler that supports concurrent GPP and GPU offload of large algorithms. It has been used largely in the machine learning community, but it has implications for the rapid and efficient implementation of many other algorithms in software. For GNU Radio, it matches up wonderfully with GNU Radio's Python blocks, which pass signal processing data around as NumPy ndarrays that can be handed directly to and from TensorFlow-compiled functions. This is very similar to what I did with gr-theano, with the notable difference that TensorFlow has native complex64 support without any additional patching! This makes it a great candidate for dropping in computationally heavy blocks for prototyping, leveraging highly concurrent GPUs wherever there is gross data parallelism the compiler can easily exploit.

A quick example of dropping TensorFlow into a Python block might look something like this:

```python
import numpy
import tensorflow
from gnuradio import gr

class add(gr.sync_block):
    # Symbolic inputs for the TensorFlow graph (TF 1.x API)
    x = tensorflow.placeholder("complex64")
    y = tensorflow.placeholder("complex64")

    def __init__(self):
        gr.sync_block.__init__(self,
            name="tf_add",
            in_sig=[numpy.complex64, numpy.complex64],
            out_sig=[numpy.complex64])
        self.sess = tensorflow.Session()
        self.op = tensorflow.add(self.x, self.y)

    def work(self, input_items, output_items):
        rv = self.sess.run([self.op], feed_dict={
            self.x: input_items[0],
            self.y: input_items[1]})
        output_items[0][:] = rv[0]
        return len(rv[0])
```

We simply define self.op as the algorithmic expression we want to compute at run time. TensorFlow compiles the kernel down to the GPP or GPU depending on available resources and handles all of the data movement behind the scenes; we just pass ndarrays in and out of the work function.
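To make the control flow concrete, here is a NumPy-only sketch of the same pattern, with a plain Python callable standing in for the compiled TensorFlow graph (so it runs without TensorFlow or GNU Radio installed): the expression is defined once, then evaluated on each work() call against whatever buffers the scheduler hands in, and the return value tells the scheduler how many items were produced.

```python
import numpy

# Stand-in for the compiled TensorFlow graph: defined once up front,
# evaluated repeatedly at run time on fresh input buffers.
op = lambda x, y: x + y

def work(input_items, output_items):
    # input_items / output_items are lists of ndarrays, one per port,
    # exactly as a gr.sync_block's work() receives them.
    rv = op(input_items[0], input_items[1])
    output_items[0][:len(rv)] = rv
    return len(rv)  # number of output items produced

a = numpy.array([1 + 1j, 2 + 2j], dtype=numpy.complex64)
b = numpy.array([3 - 1j, 4 - 2j], dtype=numpy.complex64)
out = [numpy.zeros(2, dtype=numpy.complex64)]
n = work([a, b], out)  # out[0] now holds a + b
```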

Dropping this block into a new gr-tf out-of-tree module, we can rapidly plug it into a working GNU Radio flowgraph stream! Clearly there are algorithms that make a lot more sense to offload than "add_cc". Things like streaming CAF (cross ambiguity function) or MTI (moving target indication) computations, with lots of concurrent integration, come to mind and would be pretty trivial to add. For now this is just a proof of concept, but it seems like a great way to prototype such things in the future!
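As a rough illustration of why something like a CAF is a better offload candidate than add_cc, here is a hypothetical NumPy version of the inner computation (caf_surface is my own sketch, not part of gr-tf): a conjugate-multiply followed by an integration, repeated independently at every lag. Each lag is embarrassingly parallel, which is exactly the kind of expression one could hand to TensorFlow as self.op and let the compiler spread across a GPU.

```python
import numpy

def caf_surface(ref, rx, max_lag):
    """Correlate rx against a reference over a range of sample lags:
    conjugate-multiply then integrate (sum) per lag.  Every lag is an
    independent reduction, i.e. gross data parallelism."""
    n = len(ref)
    return numpy.array([
        numpy.sum(rx[lag:lag + n] * numpy.conj(ref))
        for lag in range(max_lag)
    ])

# Bury the reference signal at a known delay and recover it:
ref = numpy.array([1 + 1j, 2 + 0j, 0 + 3j, 1 + 0j], dtype=numpy.complex64)
rx = numpy.zeros(16, dtype=numpy.complex64)
rx[3:7] = ref
caf = caf_surface(ref, rx, 8)  # peak magnitude should land at lag 3
```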

The module is available on GitHub at https://github.com/osh/gr-tf/