Side By Side Diffs in a Terminal

Today I set up side by side colored diffs for Mercurial. This may not seem like a big deal, but there were a few problems I encountered:

  • Most solutions online point to using the extdiff extension – this doesn’t work too great with hg qdiff.
  • Side by side diffs require more screen real estate, but when I’m not viewing a diff, I want my terminal window to stay in its corner on my screen, at its usual 92×35.

My final solution involves using aliases, Xterm control sequences to resize my window, and cdiff.

Installing cdiff was easy using pip. Once that was done, I set up my aliases in ~/.hgrc:

[alias]
ddiff = diff
qqdiff = qdiff
diff = !printf '\e[9;1t'; $HG ddiff $@ | cdiff -s -w 0; printf '\e[9;0t'
qdiff = !printf '\e[9;1t'; $HG qqdiff $@ | cdiff -s -w 0; printf '\e[9;0t'

The first two lines "back up" the original diff and qdiff commands; the last two redefine diff and qdiff to pipe their output through cdiff!

Before diffing, the aliases printf the Xterm control sequence to maximize the window, and then restore the window after diffing.
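
If you want to try the resizing on its own, the two control sequences from the aliases can be printed directly:

$ printf '\e[9;1t'   # maximize the window
$ printf '\e[9;0t'   # restore it to its previous size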

The -s flag makes cdiff do a side-by-side diff, and -w 0 makes it use all the available real estate.

That’s it! I’ve been using it all day and absolutely love it, so I thought I’d share.

Cheers!

Raspberry Pi as an OpenVPN Gateway/Router

Over the last week, I got myself a VPS on DigitalOcean and have been playing around with it. Something I’ve wanted to do for a while is to set up a VPN tunnel for myself, and I finally did it.

I decided to write a blog post on my setup. I have a Raspberry Pi set up as a router on my Wi-Fi network, and it sends all traffic over the VPN. I’m not going to get into the reasoning for why I’m using something versus something else for fear of getting into rants in what’s going to be a long post anyway.

The Server

I got myself a "droplet" on DigitalOcean with 512MB of RAM, a 20GB SSD, and Ubuntu 14.10 x64. I uploaded my pubkey on creation of the droplet, so it automatically set up ssh to work with it. If you choose not to, it will email you the default root password. I recommend disabling root login and setting up pubkey authentication immediately.

The first thing I did was create a new user account for myself and grant it sudo access. Then I enabled ssh on an additional port (just in case) and disabled password authentication. Finally, I took a “snapshot” of the basic setup as a backup.
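
Roughly, that boils down to a few lines in /etc/ssh/sshd_config, followed by an ssh restart; something like this (the extra port number is just an example):

# /etc/ssh/sshd_config
Port 22
Port 2222                   # an extra port, "just in case"
PermitRootLogin no          # disable root login
PasswordAuthentication no   # pubkey auth only

$ sudo service ssh restart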

Installing OpenVPN

I followed the instructions here to set up the OpenVPN server. Make sure you get the right deb file for your OS – the one in the post is for Ubuntu 12.x. OpenVPN offers an auto-login config profile – I grabbed this from the web UI so my Raspberry Pi could connect without me having to type in a password every time.
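
In short, the install comes down to grabbing the OpenVPN Access Server .deb for your Ubuntu version and installing it with dpkg (the URL and filename below are placeholders):

$ wget <URL of the openvpn-as .deb for your Ubuntu version>
$ sudo dpkg -i openvpn-as-*.deb
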
That’s it! Now for the client side.

The Wi-Fi Router

My Wi-Fi router is set up in IP sharing mode. This means that traffic from all the devices in my room will appear to my dorm's router as coming from the same IP, and I have a local network on the 192.168.1.0/24 subnet.

The Raspberry Pi as an OpenVPN Client

The distro I'm running is Raspbian "wheezy" from September 2013. I'm using this because the image was already available on the campus FTP server. Setting up OpenVPN is easy:

$ sudo apt-get install openvpn

After that, I copied over the auto-login config file:

$ scp /path/to/client.ovpn pi@<pi's ip address>:/tmp/client.ovpn
$ ssh pi@<pi's ip address>
$ sudo mv /tmp/client.ovpn /etc/openvpn/client.conf

Now to start the client and test if it’s working:

$ sudo service openvpn restart
$ curl ifconfig.me

The output should be the VPS’s public IP – that means everything is working. If it’s not, keep curl’ing a few times – it might take a few seconds to take effect.
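
You can also check that the tunnel interface actually came up (with OpenVPN it's usually a tun device):

$ ifconfig tun0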

Finally, I added the following line in the OpenVPN config file to bypass the VPN for intranet IPs:

route 10.0.0.0 255.0.0.0 192.168.1.1

That will bypass the VPN for any connections to the 10.0.0.0/8 subnet (192.168.1.1 is my Wi-Fi router’s local IP).

The Raspberry Pi as a Router

I wanted the Raspberry Pi to serve as a gateway and DHCP server for my Wi-Fi network. To achieve this, first it needed a static IP. I edited /etc/network/interfaces for this:

auto eth0
iface eth0 inet static
address 192.168.1.11
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1 # Wi-Fi router IP
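
For the new address to take effect, bring the interface down and up again, or just reboot the Pi:

$ sudo ifdown eth0 && sudo ifup eth0   # do this from a local console; it will drop an ssh session running over eth0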

Then, I needed to allow NAT:

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

To make this rule persist (from https://wiki.debian.org/iptables):

$ sudo sh -c "iptables-save > /etc/iptables.up.rules"

To restore the rules after a reboot, create this file:

$ sudo nano /etc/network/if-pre-up.d/iptables

Add these lines to it:

 #!/bin/sh
 /sbin/iptables-restore < /etc/iptables.up.rules

The file needs to be executable so change the permissions:

$ sudo chmod +x /etc/network/if-pre-up.d/iptables
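
One more thing the Pi needs before it can route traffic for other machines: IPv4 forwarding has to be enabled in the kernel (it usually isn't by default):

$ sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

To make that persist across reboots, uncomment the net.ipv4.ip_forward=1 line in /etc/sysctl.conf.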

Now, I was able to connect any client to the Wi-Fi network and browse through the VPN using the Pi (192.168.1.11) as the gateway!

The Raspberry Pi as a DHCP Server

Finally, I wanted devices to automatically use the Raspberry Pi as the gateway without any “advanced” manual configuration. To do this, I installed dnsmasq:

$ sudo apt-get install dnsmasq

And edited the config file (/etc/dnsmasq.conf) to set the DHCP IP range:

interface=eth0
dhcp-range=192.168.1.2,192.168.1.254,255.255.255.0,12h #start,end,mask,lease time
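
After editing the config, restart dnsmasq so it picks up the changes:

$ sudo service dnsmasq restart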

All I had to do then was disable my Wi-Fi router's DHCP server and voilà: any device connected to my Wi-Fi would automatically go through the Pi, and hence the VPN connection.

Making DC++ Work in Active Mode

DC++ is widely used for file sharing on campus. Behind a firewall or router (like in my setup), I could only use DC in passive mode – which limits my search results greatly. To make active mode work, I set up my Raspberry Pi as a virtual DMZ station on my Wi-Fi router. This makes the router redirect all inbound packets to the Raspberry Pi. After that, it was a matter of setting up port forwarding.

First, I added this line to /etc/dnsmasq.conf to give my Macbook a hostname (nhnt11-mbp) and static IP with an infinite lease time:

dhcp-host=<macbook's mac address>,nhnt11-mbp,192.168.1.12,infinite

Then, I made my Raspberry Pi forward port 1412 (TCP and UDP) to my Macbook:

$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1412 -j DNAT --to-destination 192.168.1.12:1412
$ sudo iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1412 -j DNAT --to-destination 192.168.1.12:1412
$ sudo sh -c "iptables-save > /etc/iptables.up.rules"
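
To double-check that the forwarding rules are in place:

$ sudo iptables -t nat -L PREROUTING -n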

And that was it! My room is now fully VPN’d.

Automator App to Connect Pi to VPN

As a bonus, I decided to make a small Automator app to run a shell script that reconnects the Raspberry Pi to the VPN and displays a notification when the connection is good to go. The content of the shell script is as follows; you can figure out Automator yourself 😉

#!/bin/bash
# Restart the OpenVPN client on the Pi...
ssh pi@192.168.1.11 sudo service openvpn restart
# ...then poll until our public IP is the VPS's (-s keeps curl quiet).
IP=`curl -s ifconfig.me`
while [ "$IP" != "<VPS's public IP>" ]; do
    sleep 1
    IP=`curl -s ifconfig.me`
done
echo "Connected!"

That’s it! Cheers!

Workflows (Part 2)

Last time, I talked about effective window management and my personal practices. In this post, I want to talk about what you can do to improve your experience in a terminal window.

Note: This post focuses on Unix shells (i.e. bash, mainly) and applies to Linux/OS X users. If you’re using a Windows command prompt, you’ll have to use Google to find alternatives. I’m afraid I haven’t used a Windows prompt enough to give advice. Also, I’m going to be a tad bit lax with terminology for the sake of readability.

Part 2: Terminal Tricks

Where to begin? There are a ton of things you can do to make your terminal more usable.

Learn your basic commands.

"Know your basics" sounds like obvious advice, but sometimes there's a "basic" command that you've just never heard of. It happens. Lifehacker has a great article covering some essentials.

Learn more about your shell, and what exactly is going on when you enter commands.

It’s not essential that you know exactly what you’re doing when you’re running terminal commands (yeah, I said it), but knowing some background on what’s going on when you type a command and press enter can be useful. Google is your friend, and the difference between a shell and a terminal/console is explained well in this StackExchange answer.

You should at least know what bash is, and learn about standard input/output.

Discover man pages.

If you’re ever clueless about how to use a command, or why it isn’t working as expected, take a look at its manual page by typing `man <command name>`. These pages provide detailed explanations of the program’s syntax and features.

Use tab completion!

When you’re typing a directory path or anything in your PATH variable (Google if you don’t know what that is), bash can autocomplete it for you if you press tab. This will save you loads of time – if you’ve ever seen someone typing at mach 4 on a terminal, chances are they’re just making good use of tab completion.

Discover the .bash_profile/.bashrc files.

These files are scripts that you can place in your home directory, and bash will source them when it starts. This means that you can add commands in this file, and they will be run in order whenever you open your terminal. This is great for aliases and such (see below).

Use aliases for common tasks.

If you find yourself typing a not-so-short command more often than you’d like, aliases are your friend. An alias lets you map a (short) command to another (longer) one. Here’s an example:

alias ..='cd ..'

Now, you’ll be able to just type “..” to go up one directory. Put this alias in your .bash_profile to have it set by default.

Remember to use single quotes here! Single and double quotes do different things: if your alias contains a sub-expression and you use double quotes, the sub-expression is evaluated only once, when the alias is set (i.e. it isn't escaped). If you use single quotes, the sub-expression is stored as-is and evaluated every time you run the alias. (Thanks to clokep for correcting an earlier, completely wrong version of this explanation.)

What's a sub-expression? You can include the output of any bash command in another command using this syntax:

$ foo $(bar)

There are plenty of uses for this. For example, many daemon programs (=programs that run in the background, like servers) create a “PID file” somewhere that contains the process id. You can then kill the daemon using something like this:

$ kill $(cat /path/to/pid-file)

In many cases this can be substituted by piping output (I talk about this later in the post), but it’s a nice trick to know. Anyway, back to aliases…


There are a lot of great aliases out there, search around and set up your own!
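
A couple of common ones, purely as illustrations (pick ones that match what you actually type often):

alias ll='ls -lah'               # long, human-readable listing, including hidden files
alias grep='grep --color=auto'   # highlight matches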

You can have shortcuts for folders too, not just commands (though these aren’t called “aliases”). Just put something like this in your .bash_profile:

export DEVDIR="$HOME/Dev"   # use $HOME rather than ~, since ~ isn't expanded inside quotes

Now whenever you want to switch to your Dev directory, just type in `cd $DEVDIR` (maybe set up an alias for this?). Presto. This is obviously more useful for longer paths.

The reason you need a $ symbol is that you're actually setting a bash variable, and this is how bash variables are referenced (by prefixing a $).

Colourize all the things.

Coloured output makes a huge difference when you’re trying to view directory contents or look at diffs! It’s easy to enable colours – just insert the following snippet in your .bash_profile:

# Pretty colors!
export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
# Pretty colors in less
alias less='less -R'

This sets up the colour scheme I use; you may want to customize it to your liking. Search around for how to do this; I don't really remember the syntax for LSCOLORS myself. 😉

Note: Lines in bash scripts that start with a # symbol are comments. See below for more info about `less`.

Learn how to use output formatting/parsing programs.

The output of any command you run can be piped to another, meaning that it will be fed as input to the second program. The syntax for this is below:

$ foo | bar

This will take the output of `foo` and give it to `bar`. This opens a realm of possibilities to format and parse output. Here are some useful programs you can use:

  • grep – This tool lets you filter the output of a program and display only lines matching a pattern. For example:
    $ ls | grep foo

    This will show you all files and folders in your working directory which contain “foo”. To make grep even more useful, you can use regular expressions (Google is your friend) for powerful pattern matching.

  • less – This lets you scroll through long output rather than dumping it all at once.
  • pbcopy – An incredibly useful tool on OS X that takes the output of a command and puts it in the clipboard for your pasting convenience. Linux users, see this.
  • sed, cut, awk – Powerful parsing tools that are very useful for scripting. Refer to their man pages for more info; there's a small example combining a few of these right after this list.
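
Here's the small example promised above. It chains a few of these together (plus sort, uniq, and head, which are also worth knowing); "firefox" is just a stand-in process name:

$ ps aux | grep firefox                                               # find running processes matching "firefox"
$ history | awk '{print $2}' | sort | uniq -c | sort -rn | head -10   # your ten most-used commands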

You should also learn how to use > and >> to write output to files:

$ foo > bar.txt # Write the output of foo to bar.txt, overwriting any existing content.
$ foo >> bar.txt # Append the output of foo to bar.txt.

To infinity and beyond!

The stuff above is nowhere near exhaustive. There’s always another neat trick you don’t know about, waiting to be discovered. Here are a few things you should check out:

  • Keyboard shortcuts – bash has a load of nifty shortcuts waiting to be discovered. OS X users: see what happens when you Option+click. You’re welcome.
  • Try a shell other than bash. I use zsh – it’s got superior tab completion and oh-my-zsh makes it easy to customize themes and put useful info in your prompt (I particularly like that I can see version control info, like the current branch, right in my prompt).
  • Learn how to do basic on-the-fly file editing from a shell – `touch` (to quickly create empty files) and `nano` (a simple terminal editor) are tools I find useful.
  • You can quickly do basic process management from a terminal: `ps aux` lists current processes, `kill` kills processes by id, and `killall` kills processes by name. Look at their man pages for more uses.
  • Write scripts to automate compiling/running your project. There are plenty of resources on the internet to teach yourself how to write bash scripts; see the tiny example after this list.
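
As an example of that last point, here's a tiny, hypothetical build-and-run script for a one-file C program (main.c and myprog are made-up names); save it, chmod +x it, and you never have to retype the compile command:

#!/bin/bash
# Compile main.c with warnings enabled; only run the result if compilation succeeded.
gcc -Wall -o myprog main.c && ./myprog "$@"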

That’s all for now! Stay tuned for more posts.

Workflows (Part 1)

A topic I’ve been thinking about a lot lately is workflow, and for once I decided to gather my thoughts in a blog post.

The reason the topic has been stuck in my head is that I'm seeing so many people trying to get work done without first figuring out a workflow that speeds up the "in-between" work (like compiling, looking something up, switching between windows, and so on). There are a lot of things you can do to speed this stuff up, and this post will cover some of the tools/tricks I use personally. Note that I use a Macbook Pro along with an external display, so you may have to adapt these tips or find alternatives that work with your setup.

Note: Changing your workflow to a theoretically more productive one might actually make you slower – having a workflow that is consistent and works for you is more important than trying to incorporate every trick in the book!

Part 1: Window Management

I can't stress enough how important it is to manage your windows. Let's say you're working on something programming-related. At the very least, you'll probably be using a code editor, a web browser, a terminal, and maybe an IM client. Here are some things you can do to manage it all:

Get good window management software.

On a Mac, this means making good use of Mission Control and hot corners! It's incredibly convenient to be able to see all your windows or show the desktop by quickly zipping your mouse to a corner of the screen. You should also make good use of desktop Spaces, though I've found I don't have a fixed way of using these.

One feature found in Windows that I really miss on OS X is Aero Snap. Not to worry though, BetterTouchTool is the answer! This app has killer window management features and allows powerful customization of keyboard shortcuts and trackpad gestures. I highly recommend you get it now if you don’t have it already.

I’m sure Linux users have their own alternatives for the stuff mentioned above, just search around! 😉

Organize windows effectively around your desktop so that you can see as much as you can at once, without everything getting too cluttered.

When I'm not around my external display, this usually means that I run my Macbook at a higher resolution ("looks like 1680×1050"). I have my code editor occupying one half of my screen, and a browser window occupying the other half. My terminal and IM windows take up roughly 1/6th of the screen each, are positioned at the corners, and will overlap the browser when they're focused. The reasoning is simple: I need to be able to see my code all the time, but likely only need one of the browser/terminal/IM windows at once.

When I've got an external display, I make my code editor take up the whole laptop screen (which I keep at native resolution, by the way), but use a split view so I can see multiple files (or even two views of the same file) at once. My browser window takes 66% of the width of my external screen. A terminal window and an IM window take the top and bottom halves respectively of the remaining space. It may seem tedious to have to rearrange windows every time I reconnect my display, but BetterTouchTool makes it really easy: I have it configured to make a window occupy 66% of the screen when it's dragged all the way to the left edge, and similar settings for the corners. This is what it looks like:

[Screenshot: my desktop layout with the external display connected]

By the way, as you can see, I have my display set to extend my primary one, not mirror it (translation: the two screens show different things, and I can move windows around between them). I’ve noticed a lot of people don’t know that this is even possible – please know that it is, and it’s great! Mirroring is useless in my opinion, except maybe if you’re connected to a projector or something, and even then… well.

Miscellaneous

All the programs you use have customization options to help you make them work with your workflow. Go through the settings available for the apps you most often use and figure out what’s most usable for you!

For example, almost everyone who uses a PC interacts with files a LOT, and there's plenty you can do to make working with them easier. Most file managers (Windows Explorer, or Finder on a Mac) use icon view by default; I find that column/list view is way better for a few reasons:

  • You can clearly see the names of all the files/folders you’re looking at. Icons are pretty, but they don’t really give you much information other than the type of the file – this is true when the icons are small too, so you’re not losing out on anything.
  • In column view, you can see what’s in the parent directories as well! This is great for navigation, for example when you’re manually trying to find a file in a maze of subfolders.
  • Also in column view (on a Mac), the last column shows you useful info about the highlighted file that you can peek at quickly. List view also shows you info in columns, but I prefer the navigational benefits of column view.
  • Once you’re using list or column view, the number of files you can see at once greatly increases – meaning you can keep your window smaller and use the extra real estate for something else.

I may have to write a separate post about file management; there's a lot of scope for improving productivity there! That's it for now though; I hope all of this info is useful to someone. Cheers!

 

I’m alive!

So obviously, it’s been ages since my last blog post. Here’s a GSoC update:

  • First off, GSoC '14 is over (it ended a couple of weeks ago, actually)! Thanks to aleth and the #instantbird team for everything over the summer.
  • Log indexing still hasn’t landed. It’s mostly waiting for me to look at the gloda changes for split log files (bug 1025522).
  • My WIP for infinite scrollback has reached a stage where prepending works, along with most message bubble features (unread ruler, message grouping). Unfortunately it remains a WIP.

In other news, I’ve been invited to the Thunderbird summit in Toronto! I’m excited to meet the Instantbird team and look forward to a weekend of hacking. We plan to make progress on WebRTC video calling among other things – I personally hope to finish up my GSoC log indexing WIPs (I’ll be on planes for ~40 hours ;)).

I hope to blog more frequently in the future, but let’s see :]. Cheers!

GSoC ’14 Progress

It’s been way too long since my last blog post. Progress since then:

  • Async logging has finally landed after a long period of tree closures and bustages. If you're running a recent nightly (nightlies are only available for OS X and Windows at the moment), your logs will be written asynchronously.
  • Log indexing requires keeping individual log files from growing too large. We decided on the scenarios under which a log file should be split, and a patch for this is awaiting review in bug 1025522.
  • Florian gave me feedback on my log indexing WIP. I've addressed a number of issues he pointed out and some bugs I found myself, and uploaded a new patch to bug 955014. I hope this can land soon after review iterations, and am excited for easily searchable logs (finally!).

I’m now working on infinite scroll:

  • First step is to add the ability to prepend a message instead of appending.
  • After that’s done, I’ll look into how messages are added to the UI when a conversation is restored from hold. Currently they’re added oldest to newest. This needs to be reversed – add the newest message first, then prepend the older ones.
  • The above allows for showing only the latest few messages, and keeping the rest in memory – these can be prepended as the user scrolls up – setting the stage for true infinite scroll.
  • Finally, fetch messages from the logs and prepend these as the user scrolls further. This step is of course a lot more complicated than I just described; I'll be blogging about it as I get to it.

It’s worth mentioning that in between blog posts, midterm evaluations happened, and I passed – thank you aleth and the Instantbird team!

I should also acknowledge that I’m behind schedule and need to work faster if I want to do justice to the second half of my proposal.

Until next time!

GSoC ’14 Progress

Another week has passed and midterm evaluations are around the corner!
I've pushed the async log patch to the try server and uncovered a bug with the tests, but haven't been able to fully debug it due to tree bustages and the weekend keeping me busy with other stuff.
While awaiting try builds and so on, I worked on log indexing, and wrote the basic UI for the log viewer (a big search bar at the top). Log searching works, and is fast!

In other news, I got to meet my fellow Instantbird GSoC students Saurabh and Mayank today! Writing this on the way back, in fact.

That’s all for now, I think. Cheers!

GSoC ’14 Progress

Here’s what I’ve done over the last week:

  • Review iterations for the async logs bug.
  • Wrote a log-indexing module that sweeps the logs and indexes them, and runs a basic benchmark on search. I was able to query single words at roughly 10ms per ~1k results.
  • Had some discussions on the implementation of log indexing; these are summarized here.
  • Wrote a patch to move the log sweeping code currently residing in the stats service to logger.js, to be accessed via the logs service. (bug 1025464)

The async logs patch is nearly r+ (as is the patch that moves the log sweeping code). After it is, I’ll be pushing to try and making sure tests pass, and it’ll be good to go. As detailed in the etherpad, I’ll then be making use of the log sweeping code to index all logs, and add a method to query the index. The log viewer UI will be updated (likely an awesometab-esque search bar at the top), and there’ll be easily searchable logs for everyone :)

GSoC ’14 Progress

It’s been over a week since my last post, and my recollection of work done is a bit hazy :(. Here’s a quick summary though:

  • Tests! I've written pretty comprehensive tests for writing and reading of log files (including grouped-by-day reading). I learned about Assert.jsm – something so new that my tests broke one fine day because of ongoing changes in m-c (I expected it to be stable because do_check_* had already been marked deprecated on MDN).
  • Bug fixing. Writing tests uncovered several intricacies that have now been taken care of. One discovery that hasn't been addressed yet is that if queued file operations exist during AsyncShutdown, they fail because stuff like JSON (and even dump) gets removed from the scope. A simple (pending) solution is to yield on all of them when the "prpl-quit" notification is fired.
  • Thunderbird! I got a Thunderbird build running and tested the new API changes. The UI is properly listing and displaying logs. Gloda indexing has also been updated: I had to implement yet another promise queue to make sure we read only one conversation at a time (and not load all of them into memory). Debugging was a hassle and took ages: Components.utils.reportError is disabled from chrome code by default in Thunderbird! I searched around for hours to see why my code wasn’t being called when indeed it was – just that the reportError call was never firing.
  • In the middle of all this, I took a day to play with the convbrowser code to see if I could manipulate the way messages are added and so on. I succeeded in getting messages to be added in reverse, but bubble grouping wasn't working and my code got messy enough that I reverted everything. It was a good experiment though, a kind of warm-up exercise for the real task that looms ahead 😉
  • I also spotted and fixed a couple of minor miscellaneous bugs.

While the async-logs patches undergo review cycles, I plan to either play more with the convbrowser and churn out something concrete (my first goal is to make context messages load from the bottom up), or to start messing with databases – for example, testing storage/retrieval performance and disk usage with full-text search enabled if we store all messages in one. Maybe both!

Until next time! Hopefully the next progress update won’t take as long as this one did.

GSoC ’14 Progress: File I/O Performance and Promise Chains

In my last post I included a to-do list of sorts that I expected to complete before writing the next one. None of the items on the list have been crossed off, but the progress over the last couple of days calls for another post, so here goes.

First off: file I/O strategies. I had a few enlightening discussions on #perf with Yoric and avih about the strategy I proposed for log writing and file I/O in general. The strategy – having an OS.File instance open and repeatedly appending to it using write() – was deemed feasible, but then I started thinking about a problem Florian hinted at – what happens when we try to read a log file that's currently open for writing (and possibly during a pending write)?

I talked to Yoric about possible race conditions between reads and writes, and it turns out this isn’t a problem because OS.File does I/O on a single thread. However, he warned me that opening a file for reading while it was already open for writing might fail on Windows (in general, opening a file twice concurrently).

As a solution to this, I proposed that, instead of keeping the file open, we open it, write to it, and close it immediately whenever we need to append a message. Not keeping the file open means that we don’t have to worry about opening it twice simultaneously, but now I had to worry about overhead added from opening and closing the file every time. What would the impact be if, for example, 50 conversations were open and each of them had up to 30 incoming messages per second? Would the overhead added by opening/closing visibly impact performance in this kind of a situation?

I asked about this on #perf again, and this time avih responded with some valuable insight. He explained that OSes cache opens and closes (and even seeks) so that successively opening a file would cause negligible overhead. This was of course only considering OS level file handling, not counting overhead caused by the OS.File implementation.

Now that I was confident that opening/closing the file every time wasn't totally insane, I wrote a small benchmark to compare performance between the two strategies for appending a string to a file 1000 times. I ran it on my MacBook's SSD, a FAT32 USB 2 flash drive, and an HFS+ partition on my USB 3 hard drive. The results were similar: opening/closing the file every time was about 3-4 times slower than keeping it open (absolute values were between 0.5-1.5 seconds keeping it open, and 1.5-5 seconds opening/closing every time).

However, that was for 1000 consecutive writes – not likely in a realistic scenario, and even so, decent enough to go unnoticed by a user. As avih put it, “optimization is great, but if applied where it’s not really needed, then it just adds code and complexities overheads, but giving you nothing meaningful in return”. Of course, Florian might have something to say about it when he’s back 😉

With the strategy decided, I set about adapting the code accordingly, and realized it was still possible for a read to be called on a file during a pending write. I needed a queue system to ensure all operations on a given file happened one after another. Since all OS.File operations are represented by promises, I decided to map each file path to the promise for the ongoing operation on it. Then to queue an operation on a file, do the operation in the existing promise’s then. Here’s some code to make that clear:

let gFilePromises = new Map();

function queueFileOperation(aPath, aOperation) {
  // If there's no promise existing for the
  // given path already, set it to a
  // dummy pre-resolved promise.
  if (!gFilePromises.has(aPath))
    gFilePromises.set(aPath, Promise.resolve());

  let promise = gFilePromises.get(aPath).then(aOperation);
  gFilePromises.set(aPath, promise);
  return promise;
}

Now whenever I have to do any file operation, I just do |queueFileOperation(path, () => OS.File.foo(path, bar, …));| and presto! An async file I/O queue.

An interesting side effect of the above code snippet is that once a path is added to the map, it’s never removed (=memory leak). This is solved by a slight modification:

function queueFileOperation(aPath, aOperation) {
[...]
  let promise = gFilePromises.get(aPath).then(aOperation);
  gFilePromises.set(aPath, promise);
  promise.then(() => {
    // If no further operations have been
    // queued, remove the reference from the map.
    if (gFilePromises.get(aPath) == promise)
      gFilePromises.delete(aPath);
  });
  return promise;
}

And that’s about it! Long post, but it was a great learning experience for me and I figured it deserved one.

Cheers!