Learning to Synchronize with Attention Models

Synchronization is often one of the most involved tasks to get right when building, testing, and deploying a radio system.  In this work, we treat synchronization as a learned attention model in a deep neural network, providing a canonical-form signal for classification.  We use the same discriminative network as in prior work and obtain slightly better classification performance.  We introduce a handful of new Keras layers to build a domain-specific set of radio transforms, supplementing the image-domain transforms described in the Spatial Transformer Networks paper.

[Figure: Radio Transformer Network (RTN) architecture]
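For concreteness, below is a minimal Keras-style sketch of this kind of attention module: a small localization network estimates phase and frequency offset parameters from the raw I/Q samples, and a custom layer derotates the signal with them before the discriminative network sees it. The layer implementation, layer sizes, sample length, and class count are illustrative assumptions, not the exact layers from the paper.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

N = 128  # samples per example (assumption)

class PhaseFreqCorrection(layers.Layer):
    """Derotate I/Q samples by an estimated phase/frequency offset."""
    def call(self, inputs):
        iq, params = inputs                       # iq: (batch, 2, N); params: (batch, 2)
        x = tf.complex(iq[:, 0, :], iq[:, 1, :])  # to complex baseband
        t = tf.cast(tf.range(tf.shape(x)[1]), tf.float32)
        phase = params[:, 0:1] + params[:, 1:2] * t[tf.newaxis, :]
        rot = tf.exp(tf.complex(tf.zeros_like(phase), -phase))
        y = x * rot                               # apply the correction
        return tf.stack([tf.math.real(y), tf.math.imag(y)], axis=1)

iq_in = keras.Input(shape=(2, N))
# Localization network: estimates (phase offset, frequency offset) per example.
h = layers.Flatten()(iq_in)
h = layers.Dense(64, activation="relu")(h)
params = layers.Dense(2, kernel_initializer="zeros")(h)  # zeros = identity transform at init
canonical = PhaseFreqCorrection()([iq_in, params])
# A discriminative network then classifies the canonicalized signal.
h = layers.Permute((2, 1))(canonical)             # to (N, 2) channels-last for Conv1D
h = layers.Conv1D(64, 3, activation="relu")(h)
h = layers.Flatten()(h)
out = layers.Dense(11, activation="softmax")(h)   # 11 modulation classes (assumption)
model = keras.Model(iq_in, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Initializing the localization output at zero means the module starts as an identity transform, so early training behaves like the plain discriminative network and the correction is learned gradually.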

Classification is perhaps not the most compelling task to apply a synchronization attention model to.  Due to the extremely low SNR of much of the dataset, good synchronization is hard to achieve on short data samples with either learned or expert synchronization metrics, and many of the learned discriminative features seem to be relatively robust to synchronization error.  We plan to revisit this attention model in future work, potentially for other sorts of tasks where it may be more beneficial.  Regardless, plotting a color-coded distribution over the density of constellation points before and after the transform on the QPSK subset of the dataset, we can see clear qualitative improvements in the orderliness of the signal structure.

[Figure: Constellation density plots before and after the transform (QPSK subset)]
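As a rough sketch of how such a density comparison can be rendered, the snippet below uses matplotlib 2D histograms; the QPSK samples are synthetic stand-ins, with a fixed phase rotation standing in for the learned transform.

```python
import numpy as np
import matplotlib.pyplot as plt

def constellation_density(ax, iq, title, bins=75):
    """Color-coded 2D histogram of complex constellation points."""
    ax.hist2d(iq.real, iq.imag, bins=bins, cmap="inferno")
    ax.set_title(title)
    ax.set_xlabel("In-phase")
    ax.set_ylabel("Quadrature")

# Stand-in data: replace with the QPSK subset before/after the transform.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=5000)
noise = 0.3 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
before = (symbols + noise) * np.exp(1j * 0.6)  # residual phase offset
after = before * np.exp(-1j * 0.6)             # after derotation

fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharex=True, sharey=True)
constellation_density(axes[0], before, "Before transform")
constellation_density(axes[1], after, "After transform")
fig.tight_layout()
plt.show()
```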

Check out the paper on arXiv for more details!

Unsupervised Radio Signal Representation Learning

We’ve just posted a brief new arXiv article (https://arxiv.org/abs/1604.07078) on learning to represent modulated radio signals using unsupervised learning.  We employ a small autoencoder network with convolutional and fully connected layers to fit a sparse signal representation with no expert knowledge or supervision.  A mean squared error reconstruction loss and regularization are used during training.

[Figure: Autoencoder network diagram]
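A minimal sketch of such an autoencoder in Keras is shown below, assuming the 2x88 input and 44-value bottleneck quoted next; the filter counts and the L1 activity penalty used to encourage sparsity are my own illustrative choices, not the exact architecture from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

inp = keras.Input(shape=(2, 88))                  # 2x88 I/Q samples
h = layers.Permute((2, 1))(inp)                   # to (88, 2) channels-last for Conv1D
h = layers.Conv1D(16, 3, padding="same", activation="relu")(h)
h = layers.Flatten()(h)
# Sparse 44-value representation; the L1 activity penalty encourages sparsity.
code = layers.Dense(44, activation="relu",
                    activity_regularizer=regularizers.l1(1e-4))(h)
h = layers.Dense(2 * 88)(code)
out = layers.Reshape((2, 88))(h)

autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")  # MSE reconstruction loss
autoencoder.summary()
```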

One noisy test set example, its compressed representation, and its reconstruction are shown below for a QPSK signal; additional details are available in the arXiv paper!  We achieve 16x compression in information density (2x88x4 -> 1x44) and 128x in storage space (2x88x32 -> 1x44)!  We’re looking forward to doing many more things with these ideas!

[Figure: Noisy QPSK test example, its compressed representation, and its reconstruction]

As a side note, since manually drawing hundreds of neural network connection lines in diagramming tools is really not fun, I’ve posted a small tool called NNPlot on GitHub which attempts to make generating high-level conceptual neural network diagrams much easier.  Hopefully someone else will find it useful some day; the network diagram above is the first example included with it.