Sunday, January 31, 2021

Writing a custom trait deriver in Rust


Recently I have been experimenting with metaprogramming in Rust (procedural macros), to write a deserializer that parses key=value pairs into structs. I usually find Rust documentation awesome, so I was surprised when I was not able to find detailed documentation on this topic. The example in The Rust Programming Language book is very basic and only explains how to start. Then there is The Little Book of Rust Macros, which has more info, but again I could not find clear examples on writing a custom deserializer. Searching the net provided some examples, but they were not clear enough for me, and they usually used old versions of the proc-macro and quote crates that did not work with recent versions. So I decided to write the deserializer myself to figure out the puzzle.


The key=value deserializer

I am working on a project that needs to interface with wpa_supplicant. There is a wpactrl crate that handles the connection to the daemon and allows making requests and getting the results. But this crate does no parsing of the results output by wpa_supplicant, which are provided as a string with key=value formatting, one pair per line. I could have parsed every line, matching the keys I want to obtain, to then assign the corresponding struct members, but this looked like the perfect opportunity to write an automatic deserializer. So you just write the struct with the fields you want to obtain, tell the compiler to derive the key-value extraction code, and profit. Before I start, note that you can check the code I wrote in GitLab.

So, what I want is to take a string like the following (the exact keys may differ; note the last one does not exist in the destination struct):

num=42
txt="Hello"
choice2=Two
ignored=1
And automatically derive the code that parses it and writes the corresponding values to this structure (also containing an enum):
enum TestEnum {
    Def,
    One,
    Two,
}

struct Test {
    num: u32,
    num2: u8,
    txt: String,
    txt2: String,
    choice: TestEnum,
    choice2: TestEnum,
}
The keys available in the string but not available in the struct shall be ignored, and the members of the struct not available in the input string must be filled with default values. Execution of the test program must output this after filling the struct Test:
Test {
    num: 42,
    num2: 0,
    txt: "Hello",
    txt2: "",
    choice: Def,
    choice2: Two,
}

The kv-extract crate 

We already know what we want to achieve, so let's get to work! I will not be explaining how to set up the Cargo.toml files; I am sure you are familiar with them, and the example I linked above from the Rust book explains this perfectly, so if you have problems with Cargo, please read the example and check the complete code in my GitLab repository.
First we have to create the crate for the key-value deserializer. I have named it kv-extract. This crate just defines the trait with the function to deserialize data, taking the input string and returning the filled structure:
pub trait KvExtract {
    fn kv_extract(input: &str) -> Self;
}
For technical reasons, Rust 2018 requires the code implementing derive macros to be located in its own crate (this restriction might be lifted in the future), so we are done with kv-extract. That was a short crate!
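Before writing any macro code, it helps to see what the derived implementation must look like. Below is a hypothetical hand-written implementation of the trait for a single-field struct (the trait is redefined locally and the Wifi/ssid names are made up, so the example is self-contained):

```rust
// Hand-written sketch of what #[derive(KvExtract)] should generate for a
// one-field struct. The trait is redefined here so the snippet compiles on
// its own; "ssid" is just an illustrative field name.
pub trait KvExtract {
    fn kv_extract(input: &str) -> Self;
}

#[derive(Default, Debug)]
struct Wifi {
    ssid: String,
}

impl KvExtract for Wifi {
    fn kv_extract(input: &str) -> Wifi {
        // Start from default values, then overwrite fields found in the input
        let mut result = Wifi::default();
        for line in input.lines() {
            if let Some(value) = line.strip_prefix("ssid=") {
                result.ssid = value.replace("\"", "");
            }
        }
        result
    }
}

fn main() {
    let wifi = Wifi::kv_extract("ssid=home\nunknown=ignored");
    assert_eq!(wifi.ssid, "home"); // unknown keys are silently skipped
    println!("{:?}", wifi);
}
```

The derive macro's job is to emit an impl like this one automatically, once per field of the annotated struct.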

The kv-extract-derive crate

Time to create the kv-extract-derive crate with the derive macro code. The convention is to create the derive macro crate inside the crate we are deriving code for (so we create kv-extract-derive inside the kv-extract crate).

For the derive code, the basic program flow is as follows:
  1. We use proc_macro to assist in the code derivation process when the user writes the #[derive(KvExtract)] attribute over a struct.
  2. Using syn crate, we generate the compiler abstract syntax tree (AST) corresponding to the struct we want to derive the deserializer code for.
  3. We iterate over the AST data to extract the tokens useful for code generation (in this case, the struct name and the struct members).
  4. Finally we use the quote! macro to generate code tokens that use the data extracted from the AST to build the derived sources.

Extracting the abstract syntax tree

Our entry point in the derive module is defined using the proc_macro_derive attribute. The first step is easy: we obtain the AST corresponding to the structure and pass it down to the function implementing the derive code, which will have to return the result as a proc_macro::TokenStream:
use proc_macro;
use quote::quote;
use syn::{ Data, Field, Fields, punctuated::Punctuated, token::Comma };

const PARSE_ERR_MSG: &str = "#[derive(KvExtract)]: struct parsing failed. Is this a struct?";

#[proc_macro_derive(KvExtract)]
pub fn kv_extract_derive(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
    let ast = syn::parse(input).unwrap();

    impl_kv_extract(&ast)
}

The impl_kv_extract function will have to extract the following data:
  1. The name of the struct we are deriving.
  2. A vector with the data for each structure field.

I found no documentation about the AST and how to traverse it. If you know where to find it, please let me know. Fortunately I was able to figure out the puzzle by printing the AST debug info (this requires enabling the full features of the syn crate). First we get a reference to the structure name stored in ast.ident. To reach the structure fields, we first have to destructure ast.data as data_struct, then destructure data_struct.fields as fields_named, and then we have the fields in fields_named.named, which we assign to the fields variable through a reference:

fn impl_kv_extract(ast: &syn::DeriveInput) -> proc_macro::TokenStream {
    let name = &ast.ident;

    let fields = if let Data::Struct(ref data_struct) = ast.data {
        if let Fields::Named(ref fields_named) = data_struct.fields {
            &fields_named.named
        } else {
            panic!("{}", PARSE_ERR_MSG)
        }
    } else {
        panic!("{}", PARSE_ERR_MSG)
    };
    // [...]

Generating code

Now the fun is about to begin: we have to start generating code. The general idea behind code generation in the form of a proc_macro::TokenStream is to enclose the code snippets we want to build inside a quote! macro. This macro will help us with two tasks:

  1. Convert the enclosed code snippets into a proc_macro2::TokenStream.
  2. Expand tokens (related to the fields we just collected) to build code.

Note that this macro returns the proc_macro2::TokenStream type, which is different from proc_macro::TokenStream, but converting from the former to the latter is as simple as invoking the into() method.

We expand the code using two quote! blocks. The first one, which we will see later, is run once for each struct member to initialize it: each resulting code snippet, in the form of a proc_macro2::TokenStream, is added to a vector. This vector is returned by the kv_tokens() function into the tokens variable. The second quote! block generates the skeleton of the derived code and expands the tokens variable to complete this skeleton. The resulting derived code is returned as a proc_macro::TokenStream using the into() method:

    let tokens = kv_tokens(fields);

    let gen = quote! {
        fn kv_split(text: &str) -> Vec<(String, String)> {
            text.lines()
                .map(|line| line.splitn(2, "=").collect::<Vec<_>>())
                .filter(|elem| elem.len() == 2)
                .map(|elem| (elem[0].to_string(), elem[1].replace("\"", "")))
                .collect()
        }

        impl KvExtract for #name {
            fn kv_extract(input: &str) -> #name {
                let kv_in = kv_split(input);
                let mut result = #name::default();
                #(#tokens)*
                result
            }
        }
    };
    gen.into()
}



In the code above, inside the quote! block, we generate the kv_split() function, which returns a vector of tuples in the form of (key, value) pairs, obtained from the input string. Then we generate the implementation of the KvExtract trait for the structure (referenced using #name).
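Since kv_split() is ordinary Rust that just happens to be emitted by the macro, it can also be compiled and tested on its own. Here is a standalone sketch with the same line-splitting logic:

```rust
// Standalone version of the kv_split() helper emitted by the quote! block:
// splits each "key=value" line into a (key, value) tuple, drops malformed
// lines, and strips double quotes from values.
fn kv_split(text: &str) -> Vec<(String, String)> {
    text.lines()
        .map(|line| line.splitn(2, "=").collect::<Vec<_>>())
        .filter(|elem| elem.len() == 2)
        .map(|elem| (elem[0].to_string(), elem[1].replace("\"", "")))
        .collect()
}

fn main() {
    let pairs = kv_split("num=42\ntxt=\"Hello\"\nmalformed line");
    assert_eq!(pairs.len(), 2); // the line with no '=' was filtered out
    assert_eq!(pairs[0], ("num".to_string(), "42".to_string()));
    assert_eq!(pairs[1], ("txt".to_string(), "Hello".to_string()));
}
```

Note that splitn(2, "=") splits only on the first '=', so a value containing '=' characters is preserved whole.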

The trait implementation first obtains the key-value pairs from the input string and then creates the result variable with default values (so every member of the struct will need to implement the Default trait, or the code will not compile). Then we expand the tokens vector with the code assigning the struct members, using the #(#tokens)* syntax, to finally return the result.

The only thing we are still missing is how the tokens vector is generated in the kv_tokens() function:

fn kv_tokens(fields: &Punctuated<Field, Comma>) -> Vec<proc_macro2::TokenStream> {
    let mut tokens = Vec::new();

    for field in fields {
        let member = &field.ident;

        tokens.push(
            quote! {
                kv_in.iter().filter(|(key, _)| key == stringify!(#member))
                    .for_each(|(_, value)| {
                        if let Ok(data) = value.parse() {
                            result.#member = data;
                        }
                    });
            }
        );
    }

    tokens
}

The code above adds a block of code, in the form of a proc_macro2::TokenStream, to the tokens vector for each struct member. This code block takes a specific struct member and iterates over the key-value tuples obtained from the input string, to see if any of them matches. When a key matches, its corresponding value is converted using the parse() string method and assigned to the struct member that matched. The parse() method requires the FromStr trait to be implemented for the returned datatype, so if we use custom enums or structs as struct members, we will have to implement the trait ourselves (in addition to the Default one, as explained earlier). But if we place inside the struct a type already implementing the Default and FromStr traits (for example the MacAddress struct from the eui48 crate), it will be beautifully deserialized without us having to write a single new line of code. Nice!
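The two pieces of standard-library machinery the generated snippet relies on, parse() (backed by FromStr) and stringify!, can be exercised outside the macro. A small sketch, with a made-up Choice enum standing in for any custom field type:

```rust
use std::str::FromStr;

// A minimal enum implementing FromStr, standing in for a custom field type.
#[derive(Debug, PartialEq)]
enum Choice { Def, One }

impl FromStr for Choice {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "Def" => Ok(Choice::Def),
            "One" => Ok(Choice::One),
            other => Err(format!("\"{}\" does not match Choice", other)),
        }
    }
}

fn main() {
    // parse() infers the target type from the assignment, for built-ins...
    let num: u32 = "42".parse().unwrap();
    assert_eq!(num, 42);
    // ...and for any custom type implementing FromStr:
    let choice: Choice = "One".parse().unwrap();
    assert_eq!(choice, Choice::One);
    // stringify! turns a token into its source text, which is how the
    // generated code matches input keys against struct member names:
    assert_eq!(stringify!(num2), "num2");
}
```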

Testing the derive macro

The only thing remaining is to test this works with the following program:

use kv_extract::KvExtract;
use kv_extract_derive::KvExtract;
use std::str::FromStr;

#[derive(Debug)]
enum TestEnum {
    Def,
    One,
    Two,
}

impl Default for TestEnum {
    fn default() -> TestEnum {
        TestEnum::Def
    }
}
impl FromStr for TestEnum {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "Def" => Ok(TestEnum::Def),
            "One" => Ok(TestEnum::One),
            "Two" => Ok(TestEnum::Two),
            unknown => Err(format!("\"{}\" does not match TestEnum", unknown))
        }
    }
}

#[derive(KvExtract, Default, Debug)]
struct Test {
    num: u32,
    num2: u8,
    txt: String,
    txt2: String,
    choice: TestEnum,
    choice2: TestEnum,
}

fn main() {
    let data = "num=42\n\
                txt=\"Hello\"\n\
                choice2=Two\n\
                ignored=1";

    println!("{:#?}", Test::kv_extract(data));
}

We had to provide our own implementations of the FromStr and Default traits, but everything works great: execution outputs the exact same data we expected.

I hope you enjoyed this entry. I had a hard time writing it because I found documentation a bit scarce, but derive macros sure are powerful and beautiful!

Saturday, August 22, 2020

Using Android without Google services in 2020

We all know Android is a mobile operating system used by Google as a data collection platform, mainly for ad delivery. It should hardly come as a surprise if I tell you Google is mining every bit of your personal data on any average Android phone. But Android has a very important characteristic that makes it much more interesting than iOS for privacy-minded people: its roots are buried in Open Source grounds.

Android Open Source Project

The Android base is known as AOSP (Android Open Source Project). Every Android version is developed by Google, and then the sources are released to the world, for any developer to browse and tinker with. Anyone with the appropriate knowledge can download the AOSP sources and (with patience and a powerful computer) build them. Add the corresponding drivers for your phone, and you will have an image you can install and use (assuming your phone has the bootloader unlocked) that has no Google data collection services inside. There is a very active community around AOSP that will do the hard work of collecting the sources and building them for you, and will also add some more interesting features to the base. These AOSP compilations with added features are called "Android distributions" (or Android distros for short). The most well-known one is "Lineage OS", but there are a lot more you can try. You can download ready-to-install images of these distros for a lot of phones and tablets.

The Google Apps

So far, (almost) everything is fine: you can install on your phone an Open Source OS that is not sucking your data and sending it to Google. But there is a catch: AOSP does not include the "Google Apps" (or GApps). So you will have no Play Store, no YouTube, no Maps, no Gmail, etc. If you are a privacy-minded person, maybe you did not want these apps anyway, but there is one thing that is troublesome even if you do not want Google applications: you will lack the Google core services, so any application trying to use them (for example any application trying to use Firebase to send a notification to the phone, or trying to use the Maps API to send your location) will lack functionality or completely fail and crash.

Google Apps are closed source. Some people are confused and think GApps are Open Source, maybe because of the OpenGapps project. This project lets you download different GApps bundles, ready to install on your AOSP-based device. But you should not be fooled by the name of the project: Google Apps are 100% proprietary and we have no control over what they do. If you install them on your AOSP-based device, there will be little difference regarding the data collection performed by Google with respect to that performed on any Android phone you can buy on the market.

So if you install LineageOS on your phone (without the Google Apps), it will not collect your data and will work mostly like any other Android phone. But a lot of applications will refuse to work, because they will try to use services your phone lacks, or will directly detect that you lack the Play Store and refuse to start.

What can you do if you want a "Google-free" phone, but need some applications that refuse to work without the Google services? Here is where the microG project comes to the rescue.

microG

microG is a free (as in freedom) re-implementation of the Google core services and libraries. It implements, to a varying degree of completeness, the services provided by the proprietary Google Apps core, and also spoofs the Play Store so any app requiring it to be installed can pass this check. For example, using microG, application calls to the Maps API will succeed, but instead of using the Google Maps services under the hood, they will be using Mapbox (an Open Source mapping system). Isn't this cool?

Unfortunately, microG is far from perfect. First of all, it is far from complete. Some APIs are completely missing, and most of them are only partially implemented. You can check the implementation status here. The table looks discouraging, but in my experience, most applications I tried are completely usable.

Another problem when using microG is that even though the services implementation running on your device is free, the underlying services running in the cloud sometimes are not. If you need the device to log into Google services to use things like Firebase messaging or SafetyNet, you will need to connect to Google machines. microG will strip identifying data other than the account name, but your device will not be 100% Google-free anymore.

Finally, installing microG can be troublesome. It needs to be installed as a system application, and requires the underlying distro to support "signature spoofing", in order for the microG Play Store application to convince other apps that it is the one coming directly from Google. Out-of-the-box LineageOS builds do not support signature spoofing. There was a heated discussion about this topic, but the LineageOS devs decided not to implement it, so something must be done here.

LineageOS for microG

If you want to get rid of Google services on your device, my advice is to install "LineageOS for microG". It is a fork of the official LineageOS distro, but with microG preinstalled (including the signature spoofing support that official LineageOS lacks). As a nice bonus, this distro also includes F-Droid preinstalled, a substitute for the Play Store that has only Free and Open Source Software (FOSS). First you have to check that your device is supported. Then, for the installation, instructions will vary depending on your device. You will have to check places like XDA-Developers to get them, or even better, find a friend that can do the work for you.

Once you have LineageOS for microG (or any other AOSP-based distro with microG) on your device, you will have regained total control of it. You can connect to Google servers only if you really need them, and in my experience you will not need to connect at all if you are ready to make some sacrifices. Also, no matter what you do, some applications will refuse to work, so you will have to search for a substitute (F-Droid is a great tool for this) or just live without them, that is, if you are resolved not to use Google services anymore. I use LineageOS for microG on a daily basis and for me it is pretty usable. I will try documenting some of the things I use that work, and some of the things I lost in this adventure.

What works

Unless otherwise stated, all the applications I comment on here either come preinstalled or are downloaded directly from F-Droid. If you need to download an application from the Play Store, there are several methods that work (setting up a local repo, using browser plugins, some special apps, etc.).

Where's my bloat?

  • "Standard" smartphone tools (calls, SMS, clock and alarms, taking photographs, videos, gallery, etc.) work perfectly out of the box. You will have some added bonuses you will hardly find on any phone you can buy in a shop, such as the ability to record calls.
  • Contacts and Calendar synchronization: I use them with Nextcloud (via the DAVx5 app) and they work perfectly.
  • E-mail: The included email application works great. In addition to implementing the standard e-mail protocols (SMTP, POP3), it can interact with Exchange (Outlook) and also with Gmail accounts. The only drawback is that if you use two-factor authentication, you will need to set up a password for this application. But this is a one-time hassle.
  • Messaging: Telegram works perfectly and does not require connecting to Google services; this is the app I would recommend for messaging. There is an old Telegram version on F-Droid, but if you want a recent one, you will need to get it from the Play Store or elsewhere. Some years ago I used WhatsApp and it also worked, but Firebase/Google Cloud Messaging was needed to get real-time notifications. Also, WhatsApp is closed source and has ties to Facebook, so if you are still using it, my advice is to uninstall it ASAP. Of course, WhatsApp can only be downloaded from the Play Store.
  • Mapping: I use OsmAnd+. It is not as great as Google Maps, but it has many features and is a pretty good substitute.
  • Browsers: Firefox works perfectly. I have not tried Chrome, but I think it also works great. Chrome is not available in F-Droid. Neither is Firefox, but in this case you can download from F-Droid a Firefox updater app that will fetch the most recent version for you.
  • Cloud synchronization (files and others): All the Nextcloud apps I have tried work perfectly (Nextcloud, Bookmarks, Notes and Deck).
  • TOTP password generation: andOTP works great and has a lot of options to securely store your OTP configurations.
  • YouTube: I have not tried the official app; maybe it works. But I use NewPipe to watch YouTube videos. It works great most of the time and also has some added features, like downloading videos or playing them in a popup window. Make sure you keep it up to date, because it sometimes breaks (when Google makes changes to YouTube internals) and the developers are quick to fix it.
  • Twitter: I have not tried the official client, I use Twidere app. It is lightweight and full featured, I really like it. The only thing I lack in this app is that it does not yet support Twitter polls. Other than this, it is just superb.
  • Reddit: I use Slide app, and it works great. 
  • Games: All the games I tested from the Humble Store worked perfectly without exception. I have tried at least Monument Valley, Plants vs Zombies, Canabalt HD, Super Hexagon, World of Goo...
  • Emulators: I have only tried RetroArch and it works perfectly. Unfortunately it is not available in F-Droid (even though it is Open Source).
  • Multimedia playback: just install VLC. It can play any file you are able to throw at it. If you have a media center based on Kodi, both the official Kodi app and the Kore app work perfectly (I personally prefer the latter). I also use TVHGuide to watch DTT TV broadcasts streamed directly from my media center (which has a DTT decoder and Tvheadend software).
  • Spotify: the official Spotify app works perfectly. You will not find it in F-Droid, as it is closed source.
  • QR scanning: SecScanQR works perfectly and also supports standard linear barcodes.
  • Passbook handling: PassAndroid works perfectly.
  • Podcasting: just install AntennaPod.
  • PDF readers: PDF Viewer Plus works great.
  • Dictionary: WordReference (not in F-Droid) works great. I have yet to try some of the Open Source dictionary apps in F-Droid.
  • Rooting: If you need root permissions, Magisk works like a charm. It is not in F-Droid (you have to flash it from the recovery). Say goodbye to passing SafetyNet checks if you are going to install any root application, including Magisk.
  • Backups: I use TitaniumBackup and it works perfectly. Even the license works without the Google Apps (you have to ask the developer for a license key file). This app is proprietary and as such it is not in F-Droid.
  • Networking: OpenVPN works perfectly.
  • Swype-style keyboard: the AOSP keyboard already supports swype-style typing. Unfortunately, for this to work it requires a closed source library. You can grab this library from the OpenGapps package and manually flash it for this function to work; otherwise you will lack this function.

What doesn't work

Here are the things I found not working, and some more comments about things I have not tested but I suspect will not work.

  • BBVA España: this is a Spanish banking app. It warns that the device OS has been modified and then crashes.

Others I have not tested, but which I think have a good chance of not working:

  • Other banking apps. I suppose there are many chances of them not working, or at least forcing you to connect to Google (and not have root) to pass SafetyNet checks.
  • Applications requiring DRM media playback (like Netflix). I have not tested but I would be surprised if they work.
  • Android Auto. API has not been implemented, so do not expect it to work.
  • Some games using the Games API (but I had none to test).


For enhanced privacy, it is possible to use an Android phone lacking Google services. Install LineageOS for microG (or any other Android distro, and then microG) and most apps should just work. Unfortunately, a few of them will fail with this setup, so if you can find alternatives, great. Otherwise, you have to balance what you prefer: the enhanced privacy or the app you cannot use.

Wednesday, May 6, 2020

MegaWiFi, and why you should code for it

It's been a long time since I wrote here about MegaWiFi. To date, there are still no games released using it, so you might think this project is long dead, right?

Well, if you do, you are wrong! Progress has been slow (my spare time is limited), but a lot of things have happened:

  • The MegaWiFi API has greatly matured (it is currently at version 1.3), and I spent a lot of time making it easy to use. I have created an implementation of pseudo-threads and timers, allowing the API to be used in both synchronous and asynchronous flavors. It has support for TCP, UDP, SNTP, HTTP/HTTPS, etc.
  • I also spent a lot of time working on API documentation. All API functions are pretty well documented, and I have provided properly explained examples for most common use cases.
  • There was an initial effort a long time ago to add MegaWiFi support to the great Blastem! emulator. Unfortunately this was halted prior to completion. But recently I have resumed work on this, and now MegaWiFi support in Blastem! is very usable.
  • There is a game finished and ready for release (WorldCup 1985 by Pocket Lucho) with MegaWiFi support! It should be released soon by 1985alternativo.
Megadrive/PC cross play, what a time to be alive!
  • There is at least another game with MegaWiFi support in early development, again by Pocket Lucho.
  • I added support to use MegaWiFi with the popular SGDK development kit, with detailed instructions and a precompiled toolchain for Windows, ready to use.
  • I wrote a Gaming Channel implementation, allowing devs to publish games online and users to download them directly to the cart. This shows the platform is rock-solid stable, allowing games to be flawlessly downloaded and played.
Steam is something from the past!

  • And there are many, many more features I will not bring up here now, to avoid making this post too long.
So, to sum it all up: if you like coding for these vintage devices, you should consider starting to code a WiFi-enabled Megadrive game NOW! The reasons?
  1. Hardware and Firmware are mature.
  2. API is mature, well documented and easy to use.
  3. You can code your way or use SGDK for an easy kickstart.
  4. There are great development tools readily available, and emulator support, including an integrated debugger with breakpoints, variable watches, etc. Yes, Blastem! is great!
  5. You should be able to publish the games in physical cartridge format, make them available as digital downloads directly to the Megadrive, or play them via emulator.
  6. Everything is Open Source Software and Hardware: the cart and programmer schematics, the firmware for the WiFi module and programmer, the API for Megadrive, the PC tool for programming, the WiFi bootloader... Go to my previous post to reach the repos.
  7. Megadrive is cool, and WiFi on the Megadrive is even cooler!

Friday, March 24, 2017


Past year, I created a cartridge for the mythical 16-bit console SEGA Genesis/Megadrive. But it was not a boring standard 32 Mbit cartridge, it had an interesting feature: WiFi connectivity.

MegaWiFi cartridge, plugged into a MegaWiFi programmer

To achieve the WiFi connectivity, I have added an ESP8266 wireless module, and a small UART chip, used as a bridge between the ESP8266 and the parallel port of the 68000 CPU inside the Genesis/Megadrive.

Although adding an ESP8266 to almost anything is usually a trivial task, I took a lot of effort trying to make this cart as easy to use as possible. This required:
As you can see if you browse the repositories, it is a considerable amount of work. Hardware is under CERN OHL 1.2 license, and software uses a mixture of GPLv3 and BSD licenses.

Although still not finished, with this cart you can currently use your Genesis/Megadrive to scan for access points and join them, create TCP sockets, send and receive data through them, generate random numbers, write and read to/from the internal 32 Mbit flash memory of the WiFi module (ideal for DLC contents ;-), synchronize the date/time to NTP servers, etc.

Echo test between a MegaDrive and a PC

I'm also currently writing a WiFi bootloader, to ease game testing (allowing a game ROM to be uploaded and flashed through WiFi). I hope this makes the platform attractive enough for developers. So if you like developing for old systems, and are brave enough, give it a try!

And remember: Genesis does what Nintendon't

Wednesday, December 28, 2016


I finally got some time to upload MOJO-NES to GitHub. MOJO-NES is a NES cartridge with no mapper support, so it can hold up to 32 KiB (PRG) + 8 KiB (CHR) ROMs. It was designed to host projects from the infamous The Mojon Twins (hence the name) and 1985alternativo.

As far as I know, it has already been used for the following NES titles:
  • Sir Ababol
  • Sgt. Helmet Training Day
  • Jet Paco
  • Alter Ego
  • Super Uwol
Not bad for such a simple cart!

Schematic and PCB have been designed using the awesome and free (as in freedom) KiCad. You can find the design files here. If you want to make your own cart, be warned that the PCB must be 1.2 mm thick (PCBs are usually 1.6 mm thick). You will also need to flash the ATtiny microcontroller with the avrciczz firmware to defeat the lockout chip (the CIC) inside NES consoles.

Since I completed this cart, I have also designed two more NES cartridges (the last one supports an extended version of MMC3, but is still WiP), so stay tuned for more NES cart awesomeness!

Sunday, December 22, 2013

Balsamo Reloaded

A whole lot of time ago, I started a homebrew project called Balsamo: a gadget to block unwanted calls on standard PSTN lines. The first revision of Balsamo (Rev.A) featured an FSK decoder and an implementation of the CID protocols needed to obtain the caller number (and also to get the date and time). The caller number was printed on the LCD, and if it was in a blacklist (stored in the microcontroller's internal flash memory), Balsamo picked up the call and hung up a second later. It worked well, but I wanted to make a lot more improvements. Unfortunately I didn't have the time to work on this project until recently. I have made a new PCB (Rev.B) and added a lot of improvements. Here comes Balsamo Reloaded (or just Balsamo Rev.B):

The heart of the system is the same: a dsPIC30F6014, a 2x16 LCD, some analog chips (amplifiers and a linear regulator) and some more discrete components. The RS-232 serial port connector and the level translator have been removed (I used them only for debugging), but a lot more things have been added. Balsamo's features are:

  • Caller ID (CID) decoding. Decoding is done entirely inside the dsPIC. No external CID decoder has been used.
  • Capability to blacklist/whitelist calls, based on caller's number and on whether the caller number is private/hidden.
  • Both blacklist/whitelist modes are supported. Also private/hidden calls can be configured to be allowed or blocked.
  • FAT formatted microSD card support. The microSD card holds the configuration file (that includes the blacklist/whitelist), the RAW audio files played when a call is rejected, and the call log file.
  • Now when a call is rejected, an audio message is played, to inform the caller about why the call has been rejected.
  • Two different audio messages can be played to the caller: one for blocked numbers (blacklisted or not in the whitelist) and the other for blocked private/hidden calls.
  • Logs to microSD card all the calls, and the action performed for each of them (ALLOW/BLOCK).
  • Clock function. The user has only to set the year. All the other date/time parameters are automatically set each time a call is received (they are extracted from the CID data).
  • Simple user interface with a 2x16 LCD, 4 LEDs and 5 pushbuttons (only 4 of them are used so far: up, down, enter, esc). The user interface is complete and easy to use: you can add/remove numbers to the blacklist/whitelist, enable/disable the call filter, browse the list, browse recent calls, add recent calling numbers to the list, etc.
  • Low power design: the system drains 4.4 mA while idle (most of the time, while waiting for a call) and around 20 mA when active. The design is entirely 3.3 V, but uses a low drop-out input regulator to be able to use a wide input voltage range.
  • Backup battery capability: the PCB allows for a battery to be used along with a wall AC adapter. If there is a power fail, the backup battery allows the system to continue working until power is restored.
The new PCB has been designed using the GPL-licensed gEDA suite. Schematics have been drawn with gschem and the PCB has been routed with PCB. This is the first time I have seriously used PCB, and the learning curve is really steep, but I'm pretty pleased with the results. I also tested KiCad and Eagle (warning: the latter is not free), but I find PCB more professional (and also harder to learn). I have used its experimental (and buggy) toporouter and I really like the way it draws tracks, at any angle, with curved corners, minimizing track length. Too bad it needs a lot of polish. For the track-to-pin connections, I have used the Teardrops plugin by DJ Delorie.

You will not find any digital chips in the design, other than the dsPIC and the LCD. The dsPIC does almost all the work: there is no external ADC, DAC, SD/FAT controller, CID decoder, etc. Only a dsPIC, some analog chips and discrete parts. The FSK signal is sampled using the internal 12-bit ADC of the dsPIC, and audio messages are played using a 32 kHz PWM signal. The SD card FAT filesystem is handled by Elm Chan's impressive FatFs software implementation.

Another addition to the Rev.B PCB is an audio amplifier that can drive a small speaker. The amplifier has been tested and works, but it is still unused. It might be used in the future to play acoustic notifications when a call is blocked, and to implement answering machine capabilities (to play recorded messages). There are also plenty of pins available to hack and extend the board functionality, including two UARTs, an SPI bus and an I2C bus, but I don't think I'll need them in the future.

Now this is where a video of the gadget working would be nice. Unfortunately I don't want to mess right now with video editing for removing real telephone numbers and that kind of stuff, so we will have to forget about it (at least until I get the time and motivation).

You can find a lot more information about this project, along with the source code, design files (schematics and PCB), GERBER files, etc. on my Balsamo GitHub repository. Everything is GPLv3+ licensed, so feel free to grab and modify it as you wish.

Happy hacking!

Sunday, November 25, 2012

The complete tutorial for Stellaris LaunchPad development with GNU/Linux (III)

We have set up the toolchain and built the StellarisWare libraries and the lm4flash tool. We are also able to debug programs using gdb + openocd. But building and debugging projects from the command line isn't fun, is it? No problem. In this chapter we will create an Eclipse project, and we will be able to build the sources, flash them and debug with a few mouse clicks.

Installing Eclipse

You have to install Eclipse + CDT (C/C++ Development Tooling). If you are using Ubuntu (or any other Debian based distro), use this command:
sudo apt-get install eclipse-cdt
If like me, you are using Arch Linux, try this one:
sudo pacman -Sy eclipse-cdt
And that's all for the installation. This tutorial has been written using Eclipse version 4.2.1. If you are using another version, there might be some differences, but you should be able to configure everything anyway. If the text in a screenshot is not legible, click the image to view it full size.

Let's create a new project with the files from the template we built in the previous chapter.

Creating the project

  1. Launch Eclipse. You'll be asked to select a directory for the workspace. Select the src/stellaris/projects directory, under your home (/home/jalon on my PC):
  2. Create a new project. Click File/New/Project...:
  3. Select C Project (under C/C++) and click Next >:

  4. In the Project name text box, write "template". In the Project type tree, select Executable/Empty Project. Then select Cross GCC Toolchain and click Next >:
  5. Now click Advanced Settings:
  6. Select C/C++ Build/Settings in the tree. In the Configuration combo box, select [ All configurations ]. Make sure you keep [ All configurations ] selected for all the following steps, until number 14. In the Tool Settings tab, in the Cross Settings section, write "arm-none-eabi-" into the Prefix text box:
  7. Click Symbols under Cross GCC Compiler. Add the following symbols: PART_LM4F120H5QR, ARM_MATH_CM4 and TARGET_IS_BLIZZARD_RA1:
  8. Jump to the Includes section and add the path to the StellarisWare libraries. It should be the src/stellaris/stellarisware directory, under your home:
  9. In the Miscellaneous section, in the Other flags: text box, you should see "-c -fmessage-length=0". To these two flags, add all of these: "-mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -ffunction-sections -fdata-sections".
  10. It's time to add the StellarisWare driver library. Go to the Cross GCC Linker / Libraries section, add "driver-cm4f" to the Libraries (-l) list, and "src/stellaris/stellarisware/driverlib/gcc-cm4f" prefixed by your home to the Library search path (-L) list:
  11. In the Miscellaneous section, add the following Linker flags: "-Wl,--static,--gc-sections,-T../LM4F.ld -mthumb -mcpu=cortex-m4":
  12. Go to the Build Steps tab, and in the Command text box inside the Post-build steps frame, type "arm-none-eabi-objcopy -O binary ${ProjName}.elf ${ProjName}.bin". Then in the Description: text box below, type "Generate binary file from elf file":
  13. Switch to the Build Artifact tab and add ".elf" to the "${ProjName}" text inside the Artifact name: text box. The resulting string should be "${ProjName}.elf". When finished, click OK:
  14. That was a long configuration, but when you click Finish, the project will be ready. You might need to advance to the next step before the Finish button becomes enabled. If that's the case, click the Next > button to advance to the next step, enter "arm-none-eabi-" in the Cross compiler prefix text box, and finally click Finish. If Eclipse asks you whether it should open the C/C++ perspective, say yes. Also, if it's still opened, close the Welcome tab.
  15. You should see the Eclipse layout for an empty project. We will not use the Java perspective, so right click it and then click Close:
  16. It's time to start adding files to the project. We will use the template project by Scompo, which we downloaded in the previous chapter. Let's copy the source files. Go to the terminal and type:
cd ~/src/stellaris/stellaris-launchpad-template-gcc
cp LM4F.ld LM4F_startup.c main.c ../projects/template
  17. Files are automatically added to the project once you put them in the project folder. If you don't see the files in the Project Explorer, just right click the template project and then click Refresh. If the project tree is collapsed, also make sure to expand it.
  18. The source files should appear in the project tree. Everything is set to start using Eclipse for coding. I'll not explain how to use Eclipse; I'll only say you can open a file by double clicking it in the Project Explorer, and you can build the project (and select the configuration to build) using the hammer button. Try it; the project should build without a problem. If something goes wrong, right click the project name in the Project Explorer, then click Properties, and repeat configuration steps 6 to 14.
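To make the GUI settings above less of a black box, here is a command-line sketch of the build Eclipse ends up performing with this configuration (steps 6 to 12). The paths are the assumed locations from the previous chapters; Eclipse runs the equivalent commands from the Release or Debug directory:

```shell
# Sketch of the build configured in Eclipse (paths are assumptions from
# the previous chapters; adjust them to your setup).
SW="$HOME/src/stellaris/stellarisware"    # StellarisWare root
CFLAGS="-c -fmessage-length=0 -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -ffunction-sections -fdata-sections"
DEFS="-DPART_LM4F120H5QR -DARM_MATH_CM4 -DTARGET_IS_BLIZZARD_RA1"
LDFLAGS="-Wl,--static,--gc-sections,-TLM4F.ld -mthumb -mcpu=cortex-m4"

# Only build if the cross toolchain is actually installed.
if command -v arm-none-eabi-gcc >/dev/null 2>&1; then
    arm-none-eabi-gcc $CFLAGS $DEFS -I"$SW" main.c -o main.o
    arm-none-eabi-gcc $CFLAGS $DEFS -I"$SW" LM4F_startup.c -o LM4F_startup.o
    arm-none-eabi-gcc $LDFLAGS main.o LM4F_startup.o \
        -L"$SW/driverlib/gcc-cm4f" -ldriver-cm4f -o template.elf
    # Same post-build step as configured in step 12:
    arm-none-eabi-objcopy -O binary template.elf template.bin
fi
```

Note the linker script is referenced as LM4F.ld here (current directory), while Eclipse uses -T../LM4F.ld because it builds from inside the Release/Debug subdirectory.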

Flashing programs

You have built a program using Eclipse and you want to test it, but you refuse to flash it using a boring terminal. Today it's your lucky day, I have the solution to your problem. You can configure Eclipse to launch lm4flash and flash your program.

  1. Click Run/External Tools/External Tools Configurations...:
  2. Right click Program, then click New:
  3. Change Name for example to "Release flash", Location to the place where lm4flash is (we installed it to sat/bin/lm4flash under your home), Working Directory to "${workspace_loc:/template/Release}" (the Release directory of your project) and Arguments to "template.bin" (the binary file we want to flash):
  4. Switch to the Common tab and enable External Tools in the Display in favorites menu frame. Then click Apply and finally click Close:
  5. And that's all. To flash the binary generated in the Release configuration, just pop the External Tools menu and click Release flash:
Each time you flash a program, a message similar to "Found ICDI device with serial: XXXXXXXX" should appear in the Eclipse Console tab. This confirms lm4flash was called, found the MCU and flashed the program. I don't know why, but it looks like the first time I try to flash a program, this message doesn't appear, and lm4flash appears to be blocked. If this happens to you, go to the Eclipse Console tab and terminate lm4flash (click the button with the red rectangle). Try flashing again, and from now on it should work.
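If you ever do need to fall back to the terminal, the external tool configured above boils down to a single command. A minimal sketch, assuming the install paths from the previous chapters:

```shell
# Equivalent of the "Release flash" external tool configured in Eclipse.
# Paths are assumptions from the previous chapters; adjust to your setup.
LM4FLASH="$HOME/sat/bin/lm4flash"
BIN="$HOME/src/stellaris/projects/template/Release/template.bin"

if [ -x "$LM4FLASH" ] && [ -f "$BIN" ]; then
    # On success, lm4flash prints "Found ICDI device with serial: ..."
    "$LM4FLASH" "$BIN"
else
    echo "lm4flash or template.bin not found; build and install first" >&2
fi
```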


Debugging

The main reason I have to embrace Eclipse (or other similar IDEs, like Code::Blocks) is its wonderful integrated debugger. If you don't like command line debugging with gdb, you'll love Eclipse once you set up the debugger. Let's get to it.
  1. First we have to add another External Tool, to launch openocd. Repeat steps 1 and 2 in the previous subchapter (Flashing programs), to add a new program.
  2. Change Name to "openocd", Location to your home directory plus "src/stellaris/openocd-bin/openocd", Working Directory to your home directory plus "src/stellaris/openocd-bin" and Arguments to "--file LM4F120XL.cfg". Then click Apply and finally click Close:
  3. Now we have to configure gdb. Click Run/Debug Configurations...:
  4. Right click GDB Hardware Debugging, then click New. James Kemp pointed out to me that some Eclipse installations lack the GDB Hardware Debugging options. If that's the case in your setup, you'll first have to install GDB Hardware Debugging using the Help / Install New Software dialog.
  5. Change Name to "gdb", C/C++ Application to "Debug/template.elf" and Project to "template":
  6. Switch to the Debugger tab. Then change GDB Command to "arm-none-eabi-gdb", and uncheck Use remote target:
  7. Now go to the Startup tab. Uncheck the Reset and Delay (seconds) and Halt checkmarks. In the Initialization Commands text box enter two lines: "target extended-remote :3333" and "monitor reset halt". In the Run Commands text box enter "monitor reset init". Then click Apply and finally click Close. WARNING: If you had the problem with gdb/openocd explained in the troubleshooting section of the previous chapter, you will also have to copy the "target.xml" file you used to the project directory, and add the line "set tdesc filename target.xml" to the Initialization Commands. This added line must be the first one in the list.
  8. We could start debugging right now, by launching first openocd and then gdb from the Eclipse menus. But we can make Eclipse launch both programs with a single menu action. Click Run/Debug Configurations...:
  9. Right click Launch Group, then click New:
  10. Change Name to "Debug", then click the Add... button:
  11. Change Launch Mode to "run", select "openocd" and click OK:
  12. Click Add... again. The same window will pop up. Now select "gdb" and click OK.
  13. Go to the Common tab. Add a checkmark to Debug in the Display in favorites menu frame. Then click Apply and Close:
  14. It took us some time, but I swear everything is configured now. No more configuration steps from now on. To start a debug session, pop the Debug menu, and then click Debug. If Eclipse asks you if you want to switch to the Debug perspective, say yes. I have found that if I use the lm4flash tool before debugging, openocd doesn't start properly until I unplug the LaunchPad from the USB port and plug it in again, so if the debug session doesn't start, try unplugging and plugging the LaunchPad again.
Here you can see the debug layout. In the Debug window you can see the launched applications: openocd and gdb, and also the Debug launch group. Above the Debug window, you can find the buttons for controlling the program execution (continue, stop, step into, step over, etc.). You can set breakpoints, watch variables, registers and memory, you have a disassembler, etc. Really cool, isn't it?
To stop the debug session, I'd recommend clicking the Debug launch group, then the Terminate button (the one with the red square), and then the Remove all Terminated Launches button (the one with the two grey crosses, to the upper right of the Debug window). If you want to continue coding, it's also a good idea to switch back to the C/C++ perspective.
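For reference, the launch group configured above is roughly equivalent to this terminal session, using the same openocd location and the same gdb initialization commands entered in the Startup tab (paths are assumptions from the previous chapters):

```shell
# Command-line equivalent of the "Debug" launch group (sketch).
# Paths are assumptions from the previous chapters; adjust to your setup.
OOCD_DIR="$HOME/src/stellaris/openocd-bin"
ELF="$HOME/src/stellaris/projects/template/Debug/template.elf"

if command -v arm-none-eabi-gdb >/dev/null 2>&1 && [ -f "$ELF" ]; then
    # Start openocd in the background from its own directory,
    # just like the "openocd" external tool does.
    ( cd "$OOCD_DIR" && ./openocd --file LM4F120XL.cfg ) &
    OOCD_PID=$!
    sleep 2    # give openocd time to open the gdb server on port 3333

    # Attach gdb with the same commands entered in the Eclipse Startup tab.
    arm-none-eabi-gdb "$ELF" \
        -ex "target extended-remote :3333" \
        -ex "monitor reset halt" \
        -ex "monitor reset init"

    kill "$OOCD_PID"
fi
```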

That's all! It was a looooooooooooong entry! I hope you enjoy coding with Eclipse as much as I do. In the next chapter, I'll show you how to build the CMSIS DSPLib, a powerful library for signal processing and other CPU intensive math algorithms.

Happy hacking and stay tuned!