# Shape matching on road maps

Geogrid is a Rust library that approximately quantizes map data onto a two-dimensional grid. Such a representation is useful for visualizing or applying image-processing techniques to local geographic data; in my case, I developed the library to match arbitrary shapes against a city’s road map.

Given some input data (for example, from GeoJSON), the grid computes the latitude/longitude bounds and derives the size of each grid cell by dividing the bounds into the requested number of rows and columns. The data structure stores the bounds along with the resolution, in meters per row/column, to translate between real-world coordinates and grid coordinates. Since OpenStreetMap defines road data as lists of map nodes meant to be connected by line segments, the from_roads grid constructor also marks the cells along the line between consecutive road nodes. A future feature would be to filter the roads used, optionally excluding types such as railroads and subway tracks.
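The coordinate translation is the heart of the structure; here is a minimal stand-alone sketch in Python (the names and the simple linear mapping are my own illustration, not geogrid's API):

```python
# Illustrative sketch of grid coordinate translation, assuming a plain
# linear mapping from a lat/lon bounding box to rows/columns.

def make_grid_mapper(lat_min, lat_max, lon_min, lon_max, rows, cols):
    """Return a function mapping (lat, lon) to a (row, col) grid cell."""
    lat_step = (lat_max - lat_min) / rows   # degrees of latitude per row
    lon_step = (lon_max - lon_min) / cols   # degrees of longitude per column

    def to_cell(lat, lon):
        row = min(int((lat_max - lat) / lat_step), rows - 1)  # row 0 at the top
        col = min(int((lon - lon_min) / lon_step), cols - 1)
        return row, col

    return to_cell

# A hypothetical bounding box around part of a city, on a 600x600 grid.
to_cell = make_grid_mapper(29.70, 29.76, -95.42, -95.36, 600, 600)
print(to_cell(29.73, -95.39))  # a point near the center of the box
```

A real implementation would also store the inverse mapping and the meters-per-cell resolution, so grid coordinates can be translated back to real-world positions.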

Having constructed such a grid, Geoshaper finds closest matches to arbitrary shapes using Chamfer matching. First, I compute the distance transform of the road grid, as illustrated in the following figure.

The distance transform computes, at each location, the distance to the closest nonzero point (in the grid, road pixels are 1 and everything else is 0). Chamfer matching then filters this distance transform with a binary shape mask and finds the minimum point. This minimum point corresponds to the location on the map that minimizes the total distance from the provided shape to the roads.
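The whole pipeline fits in a small pure-Python sketch on a toy grid; this mirrors the idea, not geogrid's actual implementation (a BFS gives the distance transform under 4-connected distance):

```python
# Sketch of chamfer matching on a tiny road grid.
from collections import deque

def distance_transform(grid):
    """BFS from every road cell; dist[r][c] = steps to the nearest road cell."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def chamfer_match(dist, shape):
    """Slide a binary shape mask over the distance transform and return
    (score, offset) for the placement with the smallest total distance."""
    rows, cols = len(dist), len(dist[0])
    srows, scols = len(shape), len(shape[0])
    best = None
    for r in range(rows - srows + 1):
        for c in range(cols - scols + 1):
            score = sum(dist[r + i][c + j]
                        for i in range(srows) for j in range(scols)
                        if shape[i][j])
            if best is None or score < best[0]:
                best = (score, (r, c))
    return best

road = [[0, 0, 0, 0],
        [1, 1, 1, 1],   # a horizontal road
        [0, 0, 0, 0]]
shape = [[1, 1]]        # a short horizontal segment to match
print(chamfer_match(distance_transform(road), shape))  # (0, (1, 0)): lies on the road
```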

The reader may try out some cities in Geoshaper, which packages this algorithm in a web interface for drawing shapes and displaying results. While the server is very slow, the code is available on GitHub, and it includes a GPU-accelerated version that shows a tremendous speedup on a capable processor.

# Edge detection and drawing

This post will serve as a walkthrough of a couple of edge detection methods and an outline of an implementation in JavaScript (see demo).

### Basics

Consider an image $m$ pixels high by $n$ pixels wide. For our purposes, we will map the image to grayscale values in the range [0, 255] and represent this data as a matrix $M$. Now let’s consider how to do a relatively simple operation such as blurring the image. Intuitively, blurring a pixel means smearing its neighbors on top of it; mathematically, this amounts to taking a weighted average of the pixel and its neighbors. This leads us to the idea of two-dimensional discrete matrix convolution, a form of mathematical convolution in which two functions are combined to produce a third.

The matrix convolution process takes an input matrix and a kernel matrix and produces an output matrix in which each entry is a weighted sum (with weights defined by the kernel) of the corresponding input entry’s neighborhood. For example, to blur an image, we may apply convolution with the kernel

$$\frac{1}{9}\begin{bmatrix}1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1\end{bmatrix}.$$

This kernel maps each entry to the average of itself and its eight immediate neighbors, effectively blurring the image; this method is called a box blur. We normalize the kernel by dividing by 9 to make sure the output has the same relative magnitude as the input (i.e. the picture does not become brighter or darker).
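A direct (if slow) sketch of this process, in Python rather than the demo's JavaScript; out-of-bounds neighbors are treated as 0, which is one of several common border policies:

```python
# Direct 2D convolution sketch. For symmetric kernels like the box blur,
# this correlation form is identical to true (flipped-kernel) convolution.

def convolve(image, kernel):
    rows, cols = len(image), len(image[0])
    k = len(kernel) // 2  # kernel radius (kernel assumed square, odd-sized)
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for i in range(-k, k + 1):
                for j in range(-k, k + 1):
                    rr, cc = r + i, c + j
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += image[rr][cc] * kernel[i + k][j + k]
            out[r][c] = acc
    return out

box = [[1 / 9] * 3 for _ in range(3)]  # the normalized box-blur kernel
image = [[0, 0, 0],
         [0, 90, 0],
         [0, 0, 0]]
blurred = convolve(image, box)
print(blurred[1][1])  # the bright pixel is averaged with its neighbors: 10.0
```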

Here we delve into the details behind the process. Consider following along with a sample image.

### Sobel Operator

The star of the show in this section is the Sobel operator, which is defined as a pair of convolution kernels that estimate the image’s horizontal and vertical gradients. The kernel for the horizontal gradient, $G_x$, is defined as

$$G_x = \begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}.$$

Note that the entries on the left side are negative and the entries on the right are positive, so when we apply this operator to an entry, we are finding the difference between its left and right neighborhoods, which is the definition of the horizontal gradient. Of course, this approach is only an approximation: we are attempting to construct a continuous gradient of a discrete function (the pixel values). We construct an analogous matrix $G_y$ to estimate the vertical gradient.

Now we apply matrix convolution to $M$ with both $G_x$ and $G_y$ to produce $C_x$ and $C_y$. Since we have gradient values in the horizontal and vertical directions, we can find the overall direction of the gradient at each point. We encode these directions as angles, computing $G_{ij} = \arctan\frac{C_{y_{ij}}}{C_{x_{ij}}}$, so each entry in $G$ is the angle of the gradient in radians. By adding together the absolute values of $C_x$ and $C_y$, we arrive at $C$, a matrix that holds the (approximate) magnitude of the gradient at each point. We return $G$ and $C$ as the results of applying the Sobel operator.
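As a concrete sketch (again in Python rather than JavaScript), here is the Sobel pair evaluated at a single interior pixel; I use `atan2` in place of the plain arctangent so a zero $C_x$ does not divide by zero:

```python
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_at(image, r, c):
    """Gradient (magnitude, angle) at an interior pixel, using |Cx| + |Cy|
    as the magnitude approximation from the text."""
    cx = sum(image[r + i - 1][c + j - 1] * GX[i][j]
             for i in range(3) for j in range(3))
    cy = sum(image[r + i - 1][c + j - 1] * GY[i][j]
             for i in range(3) for j in range(3))
    return abs(cx) + abs(cy), math.atan2(cy, cx)

# A vertical edge: dark on the left, bright on the right.
image = [[0, 0, 255, 255]] * 3
mag, angle = sobel_at(image, 1, 2)
print(mag, angle)  # 1020 0.0: a strong gradient pointing across the edge
```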

### Nonmaximum suppression

If we only apply the Sobel operator, we may find that the edges are not very fine; we would prefer our edges to be as precise as possible. Nonmaximum suppression is a technique to thin edges by suppressing pixels that are not maximal along the edge gradient. Recall that the direction of the gradient $G$ produced by the Sobel operator corresponds to the direction of greatest change in pixel value. This implies that the gradient points across edges, and therefore the actual edge direction is orthogonal to the gradient’s orientation.

Armed with this observation, the process of nonmaximum suppression is simple to explain. We visit each entry and check whether its magnitude is greater than that of its two neighbors along the direction of the gradient. If not, we suppress the entry by setting its value to 0. Because the entry has only eight neighbors, and the gradient direction rarely points exactly at one of them, we can interpolate between neighbors in the gradient direction to obtain a more precise comparison. Now let $S$ be the suppressed matrix.
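A Python sketch of the suppression loop; it quantizes the angle to the nearest of four directions instead of interpolating, which is the simpler variant of the comparison described above:

```python
import math

def nonmax_suppress(mag, angle):
    """Zero out pixels that are not local maxima along the gradient direction.
    Angles are quantized to the nearest of four directions; the interpolation
    refinement is omitted for brevity."""
    rows, cols = len(mag), len(mag[0])
    out = [[0] * cols for _ in range(rows)]
    steps = ((0, 1), (1, 1), (1, 0), (1, -1))  # 0, 45, 90, 135 degrees
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            deg = math.degrees(angle[r][c]) % 180      # fold into [0, 180)
            dr, dc = steps[int(round(deg / 45)) % 4]
            if mag[r][c] >= mag[r + dr][c + dc] and mag[r][c] >= mag[r - dr][c - dc]:
                out[r][c] = mag[r][c]
    return out

# A thick vertical edge: the middle column has the strongest gradient.
mag = [[1, 3, 1]] * 3
angle = [[0.0, 0.0, 0.0]] * 3  # gradient points horizontally, across the edge
thinned = nonmax_suppress(mag, angle)
print(thinned[1])  # [0, 3, 0]: only the ridge of the edge survives
```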

### Hysteresis

What follows is a procedure called hysteresis, where the results are refined by keeping only what we consider ‘‘real’’ edges and suppressing the rest. As input, the hysteresis function takes an image matrix and two threshold values $h$ and $l$, both in the range [0, 255], where $h$ is the high threshold and $l$ is the low threshold. Supplied with these inputs, hysteresis visits every pixel and decides whether to keep it based on a couple of criteria:

1. If the value of the pixel is greater than $h$, then it is real and we keep it.
2. If the value of the pixel is greater than $l$ and it is connected to a real pixel along a path of pixels where each has value greater than $l$, then we mark it as real and keep it.

To implement these criteria, we can iterate over the matrix and, at each pixel greater than $h$, perform a recursive search along all neighbors greater than $l$, marking every visited pixel as real along the way. At the end of the traversal, we have our set of real pixels and can suppress the rest to obtain the final output.
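That search can be sketched in Python with an explicit stack (avoiding deep recursion on long edges); the names and the tiny test grid are my own:

```python
def hysteresis(mag, high, low):
    """Keep pixels above `high`, plus any pixel above `low` connected to
    one of them through pixels above `low`; suppress everything else."""
    rows, cols = len(mag), len(mag[0])
    keep = [[False] * cols for _ in range(rows)]
    # Seed with every strong ("real") pixel.
    stack = [(r, c) for r in range(rows) for c in range(cols) if mag[r][c] > high]
    for r, c in stack:
        keep[r][c] = True
    # Flood outward through weak-but-connected pixels (8-connectivity).
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and not keep[nr][nc] and mag[nr][nc] > low):
                    keep[nr][nc] = True
                    stack.append((nr, nc))
    return [[mag[r][c] if keep[r][c] else 0 for c in range(cols)]
            for r in range(rows)]

mag = [[200, 80, 80, 10, 10],
       [10, 10, 10, 10, 90]]
result = hysteresis(mag, high=150, low=50)
print(result)
# the weak 80s survive via the strong 200; the isolated 90 is dropped
```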

### Canny method

Well, this section will be brief. The Canny method simply applies these steps in order.

1. Gaussian blur
2. Sobel operator
3. Nonmaximum suppression
4. Hysteresis
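The only step not yet covered is the first: a Gaussian blur is just a convolution with a normalized Gaussian kernel, and building one is short enough to sketch here (the radius/sigma parameters are my own illustration):

```python
import math

def gaussian_kernel(radius, sigma):
    """Build a (2*radius+1)-square Gaussian kernel whose weights sum to 1."""
    kernel = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
               for j in range(-radius, radius + 1)]
              for i in range(-radius, radius + 1)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

k = gaussian_kernel(radius=1, sigma=1.0)
print(round(sum(sum(row) for row in k), 6))  # 1.0: normalized, so brightness is preserved
```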

On the demo page, the Automate process button executes precisely this process.

# Making a checklist app for Android

### The idea

“I’ll remember that,” you tell yourself as your friend tells you his birthday, or your professor makes note of a theorem that will be on the final exam. Possibly it’s just me, but nine out of ten times I tell myself that I won’t forget something, I will not be able to recall it within a week.

And when I realize that I forget something, I’ll tell myself that next time something I want to remember happens, I will write it down. The problem? I’ll forget my forgetting. Indeed, I think that the main reason forgetfulness is such a difficult yet universal problem is that we simply forget the experience of forgetting.

So I decided to write a phone app to help me remember the things I want to remember. But more on that later. Since I hadn’t written any substantial app before, I decided to start with something basic to understand the process: a simple todo app.

### Getting started

Google has recently stopped actively developing the Eclipse ADT, deciding that Android Studio is the way forward. After downloading and installing the package (via the AUR, for example), you’re ready to roll. To make a simple checklist todo app, we’ll need just a couple of components to handle the different parts of the application:

1. An interface for adding, modifying, and marking items on a particular list.
2. A way to switch from one list to another, or to create a new list.

For this particular example, we’ll use a navigation bar to implement the second functionality, and a regular old ListView to do the first.

Conveniently, Android Studio will do half the work for us if we choose to create a new project with a Navigation Drawer Activity. Right off the bat, we can run the application on a connected Android device or emulator; here’s how that looks on my Android 4.4 Moto X (you can browse the code here). As you can see, a simple structure has been given to us, and all we have to do is fill out the functionality. First, we’ll add a ListView to the main fragment’s layout (fragment_main_todo.xml) using the design window in the IDE:

### Digression: Activities vs. Fragments

You may be wondering what a fragment layout is, and how it’s different from an activity. If you’re not wondering or already know, feel free to skip to the next part.

At a high level, an activity was originally a single screen in an application, meaning that only one activity can have focus at a time. However, as tablets entered the market and more people began using them, Android’s developers realized that this idea of discrete activities was restrictive: something that would take two screens on a phone may make better use of a tablet by showing both side by side. The typical example is a news reader, with a list of articles next to a view of the selected article.

This issue is what fragments address. Fragments represent not an entire screen, but a single behavior in an application. Activities manage fragments, and fragments can be reused across different activities. For example, if you set up a pull-out navigation bar as a fragment, all the activities in your application can easily add it.

### Adapters and Lists

ListViews are very complicated widgets (watch World of ListView). They are best at displaying large amounts of repetitive data. To turn data into Views that can be displayed, they take an Adapter. Why call it an adapter? Because it adapts a potentially very large data set to the small number of views that fit on the screen at once. ListView is kind enough to handle most of the details, such as measuring views to fit the screen and recycling views to keep the UI snappy. All we have to do to make an Adapter is implement the getView method. To keep things simple, we’ll store the todo items in a List and subclass the built-in ArrayAdapter, which turns an array or list of objects into TextView widgets. We’ll still be using TextViews, but each of our items has two states, completed and incomplete, which we’ll want to reflect in the views. Here’s the code:

Nice and simple. Note that TodoItem is simply a wrapper for Tuple<String, Boolean> holding the item’s name and completion status. markComplete and markIncomplete are small methods that change the style of the passed-in View to distinguish the two states.

Here’s the final app:

The code’s on GitHub; feel free to look at it and take anything that’s useful. I might post a follow-up that goes into other details, such as the lifecycle graph.

# apodjs and ngpodjs

NASA’s Astronomy Picture of the Day (APOD) and National Geographic’s Photo of the Day each provide a stunning photo every day. I present a couple of Node.js scripts, apodjs and ngpodjs, that interface with these sites to make it easy to find or download pictures from them. I personally use these tools to set my wallpaper to something different each day, and I’d like to share them in case anyone else is looking for something similar. apodjs is slightly more featureful, providing access to the entire APOD catalog since June 16, 1995, and allowing the display of picture explanations. National Geographic does not appear to provide simple access to past pictures, so ngpodjs is limited to today’s picture. Otherwise, the two programs share the same simple interface.

### What I do

In my openbox/autostart file, I keep the following line:

```shell
ngpodjs --download=$HOME/Desktop/ | xargs nitrogen --set-zoom-fill &
```

Each time I log in to my computer, the script checks the National Geographic Photo of the Day. If there’s a new picture, it downloads it to my Desktop folder and prints the full path to its location, which I send along to nitrogen to set my desktop background (however, this method would work with any program that can set the desktop background). When I want something different, I have an alias stored which will set the desktop to a random APOD from this decade:

```shell
alias nasa-desktop="apodjs --type random --date=020101 --download=$HOME/Desktop/ | xargs nitrogen --set-zoom-fill"
```

# Quine

### Quine? Quoi?

A quine is a computer program that, when run, prints out its own source code. Programs such as `print(open(__file__).read())` that read their own source from disk violate the spirit of the problem; rather, the program must somehow compute its source when it is run. One cool fact about quines stems from Kleene’s recursion theorem. I’ll use Wikipedia’s explanation here:

> Quines are possible in any Turing complete programming language, as a direct consequence of Kleene’s recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language.

### “For Amusement”

And who doesn’t like amusement? Let’s make a quine in Rust, the new programming language being developed by Mozilla and the community. If you’ve never written a quine before, you may think the task is trivial since the problem is so simple. However, once you start writing it, you may quickly stumble into the problem of self-recursion. For example (in Python), you may start with `print("print(print(...))")` and realize that your source code will always have one more print than you are actually printing.

One way around this block is to split the program into two parts: a “data” section and a “code” section, such that at runtime the code transforms the data into a textual representation of the whole program. That probably sounds a little abstract, so let’s begin with a simple example:
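A minimal illustration of the data/code split, sketched here in Python: the string `s` is the data, and the single `print` is the code, formatting the data into itself.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run on their own, these two lines print themselves verbatim: `%r` substitutes the string's own repr into the first line, and `%%` escapes to a literal `%` so the second line reproduces the `print` call exactly.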

Now, we can extend this idea to a full program, this time in Rust. See the code at main.rs, or run it for yourself on the playpen. The idea is similar to that illustrated in the example above, where an array of numbers represents the code. This code first prints out the data, then prints out the code itself by using the data. I take advantage of the built-in slice Show implementation to save some manual string building. The data is an array of offsets between the ASCII value of each character and that of the preceding character, with an initial value of 0. We simply add the values to an accumulator and print as we go, with the following code:

### Bonus: the “error code” quine

Another way to make a quine is to take advantage of the compiler’s error printing. The idea is that you write a program that generates a particular error matching itself. Depending on the kind of error being generated by the compiler, you can apply an iterative process to make this program:

1. Make a program that does not compile.
2. Make the compiler’s error output the contents of the program.
3. Repeat 2 until the contents and the error match.

Applying this process in the playpen, here’s the result:

Running this code will produce itself as a compiler error message.

# Hello, World!

### About me and about this blog

I’m Peter Elmers, person at Rice University and fan of many things open source.

This blog is hosted on GitHub Pages.

My online footprint.