Bit Error Rate Testing in GNU Radio

When testing modems, it's often a good idea to make sure the bit error rate (BER) of your receiver lines up with what you might expect from theory.  To this end, GNU Radio has long needed a handful of blocks which make this easy.  Test equipment often has built-in pseudo-random bit sequence (PRBS) modes which can produce long, known strings of whitened bits for this sort of testing, but we've not had handy blocks to do this nicely without manually wiring up the LFSR block, an XOR block, and something to count bit errors.
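The underlying arithmetic here is trivial, but just to make it concrete, below is a minimal numpy sketch of counting errors between an aligned PRBS reference and received bits, together with the linear-from-startup average and the kind of IIR rolling average discussed below.  The names and structure are purely illustrative, not the gr-mapper implementation.

import numpy as np

# state = {"total_errors": 0, "total_bits": 0, "ber_iir": 0.0}
def update_ber(ref_bits, rx_bits, state, alpha=0.01):
    # ref_bits / rx_bits: aligned uint8 arrays of 0/1 bits for one block
    errors = np.count_nonzero(ref_bits ^ rx_bits)

    # absolute BER, averaged linearly from startup to now
    state["total_errors"] += errors
    state["total_bits"] += len(ref_bits)
    ber_linear = state["total_errors"] / float(state["total_bits"])

    # alternative: IIR-style rolling estimate weighting recent blocks more heavily
    block_ber = errors / float(len(ref_bits))
    state["ber_iir"] = (1.0 - alpha) * state["ber_iir"] + alpha * block_ber

    return ber_linear, state["ber_iir"]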

Today I added prbs_source_b and prbs_sink_b to the gr-mapper OOT module, which provide ready-made blocks for this purpose.  An example flowgraph, "prbs_test.grc", has been added to gr-mapper and provides a QPSK loopback test of these BER calculation blocks.  For the moment the sink just prints statistics to the screen, averaged linearly from startup to the current time.  At some point these could output async messages if they needed to be incorporated into a larger test suite or some downstream logic, and if a recent rolling BER is wanted rather than an absolute BER over the entire run, some kind of IIR-based averaging could be implemented in the update.  Regardless, these blocks aren't super exciting, but they are perhaps useful tools that others can use in modem verification!  Screenshot below.

These same blocks should work equally well over the air — or with other modulations, so long as your framing/sync keeps them properly aligned!

[Image: ber_testing]

Learning to Synchronize with Attention Models

Synchronization is often one of the most involved tasks to get right when building, testing, and deploying a radio system.  In this work, we treat synchronization as a learned attention model in a deep neural network which provides a canonical-form signal for classification.  We use the same discriminative network as in prior work and obtain slightly better classification performance.  We also introduce a handful of new Keras layers implementing a domain-specific set of radio transforms, supplementing the image-oriented ones described in the Spatial Transformer Networks paper.

[Image: RTN]

Classification is perhaps not the most interesting task to apply an attention model for synchronization to.  Due to the extremely low SNR of much of the data-set, good synchronization is hard to achieve on short data samples with either learned or expert synchronization metrics, and many of the learned discriminative features seem to be relatively robust to synchronization error.  We plan to revisit this attention model in future work, potentially for other sorts of tasks where it may be more beneficial.  Regardless, plotting a color-coded density of constellation points before and after the transform on the QPSK subset of the data-set, we can definitely see some qualitative improvement in orderly signal structure.

[Image: density_plots]

Check out the paper on arXiv for more details!

3D Printing a USRP B200 Mini Case

2016-02-10

[edit] Download or Order this model here

USRPs are incredibly handy devices; they let us play all over the spectrum with the signal processing algorithms and software of the day.  The USRP B210 was an awesome step in practicality, requiring only USB 3 for I/O and power and minimizing the number of cables to haul around.   However, its size and chunky case options have been a source of frustration.   It's a nice thing to always keep with you, but when packing bags and conserving space, it just can't always make the cut.

Ettus Research recently changed all that by releasing the USRP B200 Mini, a very compact version of the B200 which takes up virtually no space but frustratingly doesn't ship with a case to protect it from abuse!   Carrying around padded electrostatic bags isn't particularly appealing or protective, so I set about putting together a functional case for the device that would at least protect it from physical abuse.

The top and bottom case renderings of the resulting design are shown below; the whole thing is about the size of a stack of business cards. As long as a GPS-DO isn't needed, this is now pretty much the perfect compact carrying companion for GNU Radio.

[Images: case_bottom, case_top]

The first print, on a pretty low-end 3D printer, is shown below.   After a few tweaks, fitment around the SMA plugs is very tight and the board otherwise sits snugly in the case; a bit of space was added around the USB port to allow various sized plugs to clear it.

[Photo: 2016-02-09 (1)]

For scale, we show it here next to a full-size B210 and its case.   While much of the design of the underlying board is the same, the size reduction, the more tightly fitting case, and the resulting hauling size of this device are pretty amazing!

[Photo: 2016-02-10 (1)]

The fit isn't completely perfect; it could use a little bit more clearance in a few spots, but it shouldn't be putting too much tension on any overly fragile areas, and it seems like it could take quite a bit of beating.   We'll see how long this one survives!

For anyone interested in having one of these, the STL case models have been made available for purchase or download on Shapeways at https://www.shapeways.com/shops/osh

A bit more eye candy below …

[Images: the micro-shibu, case3, case4, case2]

Note: I would suggest using something like #4-40 thread, 3/8″ length screws for securing this; see the links below.

Black #4-40, 3/8″ Machine Screws   Black #4-40 Hex Nut


GNU Radio TensorFlow Blocks

TensorFlow is a powerful python-numpy expression compiler which supports concurrent GPP and GPU offload of large algorithms.  It has been used largely in the machine learning community, but it has implications for the rapid and efficient implementation of numerous algorithms in software.   For GNU Radio, it matches up wonderfully with GNU Radio's python blocks, which pass signal processing data around as numpy ndarrays that can be handed directly to and from TensorFlow compiled functions.   This is very similar to what I did with gr-theano, but with the benefit that TensorFlow has native complex64 support without any additional patching!  This makes it a great candidate for dropping in highly computationally complex blocks for prototyping, and for leveraging highly concurrent GPUs when there is gross data parallelism that the compiler can easily exploit.

A quick example of dropping TensorFlow into a python block might look something like this:

import numpy
import tensorflow
from gnuradio import gr

class add(gr.sync_block):
    def __init__(self):
        gr.sync_block.__init__(self,
            name="tf_add",
            in_sig=[numpy.complex64, numpy.complex64],
            out_sig=[numpy.complex64])
        # symbolic inputs and the expression to evaluate at run time
        self.x = tensorflow.placeholder("complex64")
        self.y = tensorflow.placeholder("complex64")
        self.op = tensorflow.add(self.x, self.y)
        self.sess = tensorflow.Session()

    def work(self, input_items, output_items):
        # feed the input ndarrays straight into the compiled TensorFlow graph
        rv = self.sess.run([self.op], feed_dict={self.x: input_items[0],
                                                 self.y: input_items[1]})
        output_items[0][:] = rv[0]
        return len(rv[0])

We simply define self.op as the algorithmic expression we want to compute at run time; TensorFlow compiles the kernel down to the GPP or GPU depending on available resources and handles all of the data I/O behind the scenes, while we simply pass ndarrays in and out of the work function.
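As a quick sanity check, a block like this can be wired straight into a python flowgraph.  The little test harness below is purely illustrative and not part of any module; it just adds two short complex vectors through the TensorFlow-backed block defined above.

import numpy
from gnuradio import gr, blocks

tb = gr.top_block()
a = numpy.arange(8, dtype=numpy.complex64)
b = numpy.ones(8, dtype=numpy.complex64)

src0 = blocks.vector_source_c(a.tolist())
src1 = blocks.vector_source_c(b.tolist())
adder = add()                      # the TensorFlow-backed block defined above
snk = blocks.vector_sink_c()

tb.connect(src0, (adder, 0))
tb.connect(src1, (adder, 1))
tb.connect(adder, snk)
tb.run()
print(snk.data())                  # expect a + b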

[Image: grtfplot]

Dropping this block into a new gr-tf out-of-tree module, we can rapidly plug it into a working GNU Radio flowgraph!  Clearly there are algorithms which make a lot more sense to offload than "add_cc".  Things like streaming CAF or MTI computations with lots of concurrent integration come to mind and would be pretty trivial to add.  For now this is just a proof of concept, but it seems like a great way to prototype such things in the future!

The module is available on github @ https://github.com/osh/gr-tf/

Such Samples 2

Recently Tom Rondeau did a bunch of work to add message passing support to GNU Radio's Qt-based plotters.  This is really cool because there's no longer much need for the gr-pyqt (PyQwt-based) message plotters, other than for prototyping custom plotting.   The obvious thing to do was to update Such Samples to switch over to the more efficient, portable, stable, feature-filled, and well supported plotters from the main GNU Radio distribution.

Building the Graph

Now that the Qt Gui message plotters have complex PDU message port inputs, we can simply hook them up to the pyqt file message source and everything works as expected!   To make wandering around recordings easier, there is now an open dialog which passes messages into the message source; a drop-down for file type would be nice to add too.  Overall the flowgraph is super simple and all message based, as shown below.

[Image: ss2_graph]

Graphical Interface

Running the graph, the new plotters look quite a bit cleaner and better than the old ones!  The spectrogram/waterfall plot now supports messages as well, so we include that in addition to the time/frequency plots.   Below you can see a wideband look at the 2.4 GHz ISM band which can be easily explored and intuited in each plot dimension.

[Image: ss2]

This flowgraph is available on github @ such_samples2.grc


Burst PSK Modem with LDPC Coding in GNU Radio using Message Passing and Eventstream

Bursty wireless communications are widely used as a multi-user channel access sharing technique in the real world.   This has long been a frustration in GNU Radio, which has traditionally focused heavily on continuous stream processing blocks and applications.   More recently, the message passing additions to GNU Radio and gr-eventstream's burst schedulers have made burst radio transmission and reception much simpler and more logical to implement than was previously possible.   In this post we detail a demonstration of a burst QPSK modem with LDPC coding and a minimal packet header, implemented in C++ and Python blocks, which can be put together and run easily in the GNU Radio Companion tool (GRC).

Transmit Waveform

The transmitter waveform is comprised of two processing domains joined by an eventstream source block.   The first is a message passing domain which uses PDUs to pass around discrete "bursts" in various forms; the second is a traditional GNU Radio stream processing graph.   The eventstream source block provides the insertion of discrete-length "bursts" of samples into the continuous, unbounded stream of samples.

Transmit Waveform: Message Domain

In the message domain we start off with a "Message Strobe" block, which generates a PDU periodically, every 1000 ms; each of these triggers the creation of a random data PDU.   These random data PDUs are simply a vector of bits of random length along with an empty dictionary which could hold information about the burst.

In reality an application might implement a higher-layer MAC here, or simply use a TUNTAP or SOCKET block to allow real data PDUs to flow into the graph, but this is a convenient model for testing.
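Generating the random test PDUs themselves takes only a few lines.  A hedged sketch of such a generator, with illustrative names rather than the exact block used in the example graph, might look like:

import numpy
import pmt

def make_random_pdu(min_bits=64, max_bits=1024):
    # random payload length and random 0/1 bit vector
    nbits = numpy.random.randint(min_bits, max_bits)
    bits = numpy.random.randint(0, 2, nbits).astype(numpy.uint8)

    meta = pmt.make_dict()                        # empty metadata dictionary
    data = pmt.init_u8vector(len(bits), bits.tolist())
    return pmt.cons(meta, data)                   # (metadata, payload) PDU pair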

We pass these random bits into the framer block, which adds a length field, a header checksum, and a payload checksum; these allow us to verify correct reception and determine packet length upon receipt of the burst at the downstream receiver.

After the framer we pass the PDU through a "burst padder" block, which simply appends zero bits to the end of the frame until its length reaches an integer multiple of the uncoded forward error correction (FEC) block size.   Since we must encode entire FEC blocks at a time this is a required step; when operating without FEC it could easily be removed.   At this point we go through a randomizer, which XORs a known pseudo-random sequence onto the data to whiten the payload bits, and then pass it through the forward error correction encoder.
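Both of these steps are simple vector operations on the bit PDU.  Conceptually, with k as the uncoded FEC block length and illustrative names rather than the actual block internals:

import numpy

def pad_to_fec_block(bits, k):
    # append zero bits so the frame length is an integer multiple of k
    n_pad = (-len(bits)) % k
    return numpy.concatenate([bits, numpy.zeros(n_pad, dtype=bits.dtype)])

def whiten(bits, prbs):
    # XOR a known pseudo-random sequence onto the bits to whiten the payload;
    # applying the same XOR again at the receiver removes it
    return bits ^ prbs[:len(bits)]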

We then insert a known binary preamble sequence onto the front of each burst and pass the burst bits through a QPSK bit-to-symbol mapper, which converts the uint8_t bit vector into a complex float32 sample vector ready to be inserted into our transmission stream.
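The preamble insertion and bit-to-symbol mapping are likewise simple vector operations.  Below is a sketch of a gray-coded QPSK mapper with a prepended preamble; the constellation normalization and names are chosen for illustration rather than taken from the actual mapper block.

import numpy

# gray-coded QPSK constellation, one complex symbol per pair of bits
QPSK = (numpy.array([1+1j, -1+1j, 1-1j, -1-1j]) / numpy.sqrt(2)).astype(numpy.complex64)

def map_qpsk(payload_bits, preamble_bits):
    # prepend the known preamble, then map each pair of bits to one symbol
    # (assumes an even total number of bits)
    bits = numpy.concatenate([preamble_bits, payload_bits]).astype(numpy.uint8)
    idx = bits[0::2] * 2 + bits[1::2]     # two bits -> constellation index
    return QPSK[idx]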

The image below shows the complete transmit flowgraph described above.

[Image: psk_burst_ldpc_tx.grc flowgraph]

Transmit Waveform: Bridging Domains

The last message block is the burst scheduler, which decides when in time to schedule each burst.   In this case it is simply a slotted-ALOHA style scheduler which drops the burst in as soon as it can along some fixed slot boundary.   It sends bursts annotated with a sample-time to schedule them into the stream on to the eventstream source block, and receives asynchronous feedback from the same eventstream source block letting it know where in the stream "now" is.   Out of the eventstream source block, we get a stream of complex zero samples with events copied in at the correct offsets specified by their event times.
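The scheduling decision itself reduces to picking the first slot boundary at or after the "now" reported back by the eventstream source.  In units of samples, and with illustrative names, that is roughly:

import math

def next_slot_offset(now, slot_len, min_lead=0):
    # schedule the burst at the first slot boundary at or after "now",
    # optionally leaving some minimum lead time for upstream processing
    earliest = now + min_lead
    return int(math.ceil(earliest / float(slot_len))) * slot_len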

Transmit Waveform: Stream Domain

The stream domain for this transmitter is quite simple: we go through a throttle block to limit the speed of the outgoing sample stream, through an interpolating RRC filter to convert from 1 sample per symbol up to 2 samples per symbol for transmission, through a standard channel model block, and finally stream the resulting continuous sample stream containing bursty transmissions out to a file sink and some QtGui plotters.

The graphical display below shows the plot of a single burst's samples on the left side, and the mixed burst and off states with noise added in the standard stream plotters on the top and right.   The bottom text area shows the contents of the PDU dictionary as it travels from the burst scheduler to the eventstream source; in this case the only key-value pair is the event_time at which to schedule the burst.

[Image: tx_waveform]

Receive Waveform

The receive waveform similarly spans a message and a stream domain, bridged by using an eventstream sink block to extract discrete events from the continuous sample stream at appropriate times.

Receive Waveform: Stream Domain

The receive waveform begins with a sample stream, either from a radio or, in this case, from a stored sample file.   To simulate real-time operation, we throttle the data stream at the intended sample rate and then run it through a matched filter for the preamble.   The output of this filter is then further normalized against a local moving average in the correlator filter block, and the resulting detection metric is run into the eventstream trigger rising edge block.

This block detects whenever the detection metric rises over a certain threshold at the beginning of a burst and sends an asynchronous message to the eventstream sink block to extract a burst event beginning at that time in the sample stream.
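Taken together, the correlator filter and rising edge trigger boil down to a normalized correlation metric and a threshold crossing.  The sketch below is a simplified illustration of that idea, not the actual internals of either block:

import numpy

def detect_burst_starts(x, preamble, threshold, avg_len=512):
    # matched filter against the known preamble (correlation via convolution)
    mf = numpy.abs(numpy.convolve(x, numpy.conj(preamble[::-1]), mode="same"))

    # normalize by a local moving average of the matched filter output
    avg = numpy.convolve(mf, numpy.ones(avg_len) / avg_len, mode="same")
    metric = mf / (avg + 1e-12)

    # rising-edge threshold crossings mark candidate burst start times
    above = metric > threshold
    return numpy.flatnonzero(above[1:] & ~above[:-1]) + 1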

Receive Waveform: Bridging Domains


The eventstream sink block, when triggered by the threshold detector, schedules a stream event which fires off the PDU handler.   This is kind of a dummy handler which simply converts the extracted samples into a standard PDU format message to be handled by any message passing blocks downstream.

Below you can see the flowgraph for the receive waveform as described.

[Image: psk_burst_ldpc_rx.grc flowgraph]

Receive Waveform: Message Domain

In the message domain, we start off with a chunk of samples extracted from the continuous stream, containing the burst somewhere within it.   We have not yet done fine synchronization and we do not yet know the length of the burst, so for now we pull out an upper bound on the length of any burst's worth of samples.    This is immediately plotted as power over time and then run through a length detector block, which attempts to trim some of the noise off the end of the burst based on its power envelope fall-off / trailing edge.

At this point we have hopefully minimized the number of samples to run through our synchronization algorithm, which is of non-trivial computational complexity.   We plot the constellation in three places: before synchronization, after CFO correction, and after equalization and timing correction.   Since the synchronization block operates on a whole burst atomically, we are no longer restricted to tracking loops which take time to converge at the beginning of a burst.   Instead we can produce a maximum-likelihood estimate of the CFO over the length of the burst, compute optimal timing and equalizer taps over the length of the burst, and then apply them atomically to the entire burst's worth of samples.    The lower left-hand plot below shows the correlation peak obtained during timing synchronization within this block, with a clearly observable peak at the preamble.
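The synchronization block used here comes from gr-vt, but as an illustration of the kind of feedforward, whole-burst estimate this enables, a classic non-data-aided CFO estimator for QPSK strips the modulation by raising the signal to the fourth power and picking the resulting spectral peak.  This is only a sketch of the general technique, not the estimator actually implemented in the block:

import numpy

def estimate_qpsk_cfo(x, fs, nfft=65536):
    # raising QPSK to the 4th power removes the modulation, leaving a tone at 4*cfo
    spec = numpy.abs(numpy.fft.fft(x ** 4, nfft))
    freqs = numpy.fft.fftfreq(nfft, d=1.0 / fs)
    return freqs[numpy.argmax(spec)] / 4.0

def correct_cfo(x, fs, cfo):
    n = numpy.arange(len(x))
    return x * numpy.exp(-2j * numpy.pi * cfo * n / fs)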

Coming out of the synchronization block we have the burst's symbols lined up in the correct rotation based on the preamble.  At this point we can go through a soft demapper block, which translates all of the complex float symbols in the burst into a series of floating point soft bits in a PDU.   We can then use a "burst frame align" block, which first strips the preamble bits off the front of the burst and then trims the remaining bits to a multiple of the coded FEC block size (in this case 271 bits).   We pass these soft bit vectors through an FEC decoder block to perform LDPC decoding and output hard bits, and finally through a burst randomizer to remove the XORed whitening sequence.     The bottom middle plot shows the soft bits of a single frame before this decoding process; we can see a bi-modal distribution with clusters centered around the soft values for 0 and 1 bits over the useful portion of the frame, which is what we would hope to see.
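For gray-coded QPSK the soft demapping itself is particularly simple, since the I and Q components carry independent bits.  A sketch of the idea follows; the sign convention and scaling are illustrative and may differ from what the in-tree decoder expects:

import numpy

def qpsk_soft_bits(symbols, noise_var=1.0):
    # for gray-coded QPSK, I and Q each carry one bit; the LLR of each bit is
    # proportional to the corresponding component scaled by the noise variance
    scale = 2.0 * numpy.sqrt(2.0) / noise_var
    llrs = numpy.empty(2 * len(symbols), dtype=numpy.float32)
    llrs[0::2] = scale * numpy.real(symbols)
    llrs[1::2] = scale * numpy.imag(symbols)
    return llrs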

At this point we can pass the decoded, derandomized hard bits through a deframer, which checks the header CRC, uses the payload length field to throw away bits past the end of the frame, and checks the payload CRC to ensure none of the bits in the payload were corrupted.

Lastly, we send the resulting PDU to a "meta text output" GUI widget, which lets us look at the PDU's dictionary values for each burst in a nice, clean, easy way.    This is shown in the top right of the GUI below and contains a handful of dictionary entries which have been appended during event scheduling, synchronization, decoding, and deframing.   These include the event time, center frequency offset, number of FEC iterations needed to decode, number of frames passed and failed, and header and payload pass rates, among other things.

[Image: rx_waveform]

Future Use

The hope for this modem is largely that the message based components allow for very simple implementation of atomic burst operations, without needing to worry about book-keeping complex bursty state information within stream processing blocks.   A lot of effort has been put into developing components which are NOT tightly coupled in any way and have general applicability to many configurations of the same modem architecture.   I'm quite excited about having this available as a baseline because I think it will be a great starting point for many future projects which seek to tailor various bits of the modem stack to specific situations without having to reinvent anything monolithic.   For the moment this is not an "adaptive" modem, but many of the components are already ready for that: the demapper block can switch per burst based on a "modulation" dictionary entry, and a header deframer could easily schedule the extraction of an additional payload burst region back into the eventstream sink block if that were necessary.   I believe the architecture is already very well set up to handle the sorts of challenges we are likely to throw at it in the near future.

Waveform Information

This waveform, both with and without LDPC, can be put together using the GRC graphs in the gr-psk-burst repository, which can be installed via pybombs and is available on github at https://github.com/osh/gr-psk-burst.   It leverages a number of tools from gr-vt as well as gr-pyqt to provide transmission, reception, and plotting.

I should also acknowledge that the LDPC encoder and decoder are the work of Manu TS, originally developed as part of Google Summer of Code in an out-of-tree module (https://github.com/manuts/gr-ldpc).   This work has since been integrated into GNU Radio's gr-fec API and moved in-tree.   Additionally, much of the underlying work on burst synchronization and other burst modem blocks has been made available by Virginia Tech in the gr-vt repo, https://github.com/gr-vt, and represents the work of a handful of very talented people there.

Visualizing Radio Propagation in GNU Radio with gr-uhdgps, gr-eventstream and Google Earth

Radio propagation is a complex process in the real world; elaborate models are often used to predict expected propagation effects over known terrain, but in many cases there is no substitute for measuring ground truth.  This can now be accomplished quite easily, quickly, and cheaply using GNU Radio for a wide range of protocols.   Using gr-uhdgps (https://github.com/osh/gr-uhdgps), gr-eventstream (https://github.com/osh/gr-eventstream) and Google Earth, we demonstrate a tool to measure broadcast VHF FM radio received signal strength over a metropolitan area. In this case we use WAMU 88.5 MHz, the local NPR radio station in Washington, D.C., to demonstrate the tool and plot the results in Google Earth.

Building the GNU Radio Flowgraph

Looking at the WAMU FM broadcast channel in QtGui frequency and time sink plots is a good first sanity check.  Since FM broadcast is typically allocated a 200 kHz channel, we set up a USRP B210 to tune to 88.5 MHz with a complex sample rate of 200 ksamp/s; the expected time and frequency spectrum behavior of the FM audio signal can be seen below.

[Images: FM_WAMU_Plot, uhdgps_rssi_log.grc]

In this flowgraph we extract an event of 512 samples from the incoming complex sample stream off the UHD source block exactly every 1 second's worth of samples.  Since we throw away the rest of the samples, this is for the most part a low duty cycle, low CPU load process to run.   The trigger sample timer block allows us to trigger fixed-length events at this sample periodicity and pass them downstream to the "Handler PDU" block, which simply converts the eventstream events into normal complex float PDUs that can be handled by any message passing block.

At this point we introduce a simple python message block called "CPDU Average Power" which computes the average sample power within the event's samples, tags a "power" key and value onto the PDU's metadata structure, and passes it downstream.    The functionality of this python block is contained within the simple python/numpy handler function below, which computes the power, adds the tag, and sends the output message.  For now we throw away the PDU data field and send PMT_NIL in its place, since we don't do anything with it downstream.

# requires: import numpy, pmt  (self.k is a constant dB offset defined on the block)
def handler(self, pdu):
    data = pmt.to_python(pmt.cdr(pdu))            # event samples as a numpy array
    meta = pmt.car(pdu)                           # metadata dictionary
    data = data - numpy.mean(data)                # remove DC
    mag_sq = numpy.mean(numpy.real(data * numpy.conj(data)))  # average magnitude squared
    p = self.k + 10 * numpy.log10(mag_sq)         # power in dB plus constant offset
    meta = pmt.dict_add(meta, pmt.intern("power"), pmt.from_float(p))
    self.message_port_pub(pmt.intern("cpdus"), pmt.cons(meta, pmt.PMT_NIL))

At this point we remove the "event_buffer" item from each PDU's metadata structure using the "PDU Remove" block, since it is large and we don't really want to keep it around downstream; trying to serialize it into our JSON storage would waste a lot of space and make the message debug printouts massive.

Since the B210 is set up with an internal GPSDO board, we have both a GPS-disciplined clock and oscillator and GPS NMEA fix data available from the UHD host driver.  By using the uhdgps gps probe block, we can poll the USRP source block for the NMEA fix data every time we need to store an updated signal strength record, as shown in the flowgraph above.  Passing our CPDUs through this GPS probe block appends the NMEA data and a bunch of current USRP state information from the device which we can look at later.

Finally, we store a sequence of the JSON'ized PDUs to two files: one which has been filtered for records where "gps_locked" is "true", and one which simply stores all records regardless of GPS lock state.

Record Format

Each JSON’ized record corresponding to a single PDU from the flowgraph and a single eventstream extracted sample event, now looks like this in our output files.

We see that this structure now contains gps state information pulled from the GPS probe block, the stream tags which were added to the event/PDU by eventstream, and our power value computed by the python CPDU power block.

{
  "es::event_length": 512,
  "es::event_time": 20201024,
  "es::event_type": "sample_timer_event",
  "gain": 30.0,
  "gps_gpgga": "$GPGGA,003340.00,3852.6453,N,07707.3960,W,0,10,1.2,85.4,M,-34.7,M,,*6E",
  "gps_gprmc": "$GPRMC,003340.00,V,3852.6453,N,07707.3960,W,0.0,0.0,290315,,*3D",
  "gps_locked": "true",
  "gps_present": true,
  "gps_servo": "15-03-29 307 47943 -469.10 -3.21E-09 15 13 2 0x20",
  "gps_time": "1427589220",
  "power": -119.32362365722656,
  "ref_locked": "true",
  "rx_freq": [
    0,
    88499999.9999999
  ],
  "rx_rate": [
    0,
    199999.99937668347
  ],
  "rx_time": [
    0,
    [
      892243202,
      0.5858168473168757
    ]
  ]
}

Drive Testing

To test this graph I securely fasten a MacBook into the passenger seat, wedge a B210 between the center console and passenger seat of the Subaru test vehicle, plunk a GPS antenna and a poorly band-matched magnetic duck antenna onto the trunk of the car, and go for a drive.   After launching the application, it takes a while for the GPSDO to obtain GPS lock since it is not network assisted like many consumer devices these days.  After a few minutes of waiting and impatiently moving the GPS antenna to the roof of the car, I obtained lock and pretty much kept it for the entirety of the drive.

[Photos: seat-scaled, antenna-scaled]

Inspecting Results

After driving a thorough route around town and past the broadcast tower, I dump the JSON file through uhdgps's json_to_kml.py tool to convert the RSS measurements into a Google Earth friendly KML file, with vertical lines of height proportional to the RSS at each event location. This provides a nice, intuitive, visual way to look at and understand received signal strength effects across terrain changes, obstacles, urban canyons, and other such phenomena.   An overview of the route can be seen here.

[Image: plot_overview]
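Under the hood, that conversion is mostly a matter of pulling the $GPGGA fix and the power value out of each JSON record and turning the NMEA ddmm.mmmm fields into decimal degrees.  A rough sketch of the idea, not the actual json_to_kml.py implementation:

import json

def nmea_to_deg(field, hemi):
    # NMEA encodes latitude/longitude as ddmm.mmmm / dddmm.mmmm
    deg = int(float(field) / 100)
    minutes = float(field) - deg * 100
    val = deg + minutes / 60.0
    return -val if hemi in ("S", "W") else val

def record_to_point(line):
    rec = json.loads(line)
    gga = rec["gps_gpgga"].split(",")
    lat = nmea_to_deg(gga[2], gga[3])
    lon = nmea_to_deg(gga[4], gga[5])
    return lat, lon, rec["power"]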

A few notable spots around the route are shown below.

Driving through Rosslyn, VA, we can see large variation in received signal strength from sample point to sample point, likely due to heavy urban multi-path fading producing alternating constructive and destructive interference.

[Image: rosslyn_variance]

Driving from the George Washington Parkway north onto the US 1 Memorial Bridge on the far side of the river, we can see that going from a lower elevation, shadowed by a small hill and a handful of trees, up onto the elevated and unobstructed bridge results in a significant sustained climb in signal strength.

[Image: memorial_bridge]

Driving north on Connecticut Avenue and taking the tunnel underneath Dupont Circle, we briefly lose GPS lock, but upon reacquiring we see very low received signal strength down in the lowered, partially covered, and heavily shadowed roadway coming out of the tunnel.   To deal with this kind of loss of GPS lock we may need to implement some kind of host-based interpolation between fixes on either side of the positioning outage, but for now those samples are simply discarded.

[Image: dupont_tunnel]

There are numerous other effects that can be observed in the KML plot from the drive, which is linked below.

KML Issues

One issue that I'm still considering is how exactly to map RSSI to line height; there are two altitude modes which KML supports, and each has its own issues.   These plots use "aboveGroundLevel", which maps an elevation of 0 to the ground elevation at a specific lat/lon.  This is good because we can just use 0 as our base height and not worry about putting a line underground somewhere; however, it's annoying in that the line elevation tracks the ground contours, distorting your perception of RSSI a bit.   "absolute" doesn't have this problem, since 0 is (I think) locked at sea level, but in places farther from the coast it is annoying because low elevations wind up underground.  Ultimately it might be necessary to do some kind of aboveAverageGroundLevel for a local area, but as far as I know this is not supported in Google Earth / KML.

References

The resulting KML file can be opened in Google Earth from https://github.com/osh/kml/blob/master/WAMU_RSS.locked_1427589112.17.json.kml.   Countless hours of fun can be had zooming around, inspecting fades that occur passing various obstacles along the way, and watching signal strength variation going up and down hills in various areas.

This flowgraph is available at https://github.com/osh/gr-uhdgps/blob/master/examples/uhdgps_rssi_log.grc.   To use it, make sure you have gr-eventstream, gr-uhdgps, and the relevant python modules installed.