22.7.15

Sharing family photos in the post-Snowden era

When the Snowden revelations hit in 2013, my trust in digital life crumbled within days. As an immediate result, I stopped posting on Facebook, switched from a handful of passwords to a mental password algorithm, started encrypting everything and became much more careful about leaking private data onto the internet. My phone is in airplane mode about half of the time now, purely out of worry about blanket location tracking. I'm living in a jungle of paranoia and caution, and the terrain should be well known to every other geek who was blindsided by the unveiled power of the secret services of this world.

But there are some occasions when this jungle needs to be clear-cut, because people need to cross it quickly and conveniently. Of course, I'm talking about the dear family: family members who either lack the technical skills to hide in the jungle, or aren't involved enough with the digital world to try.

For these people, we need solutions that push the old heuristic - that security and convenience are polar opposites - to its very limits. We need services that look and feel as simple as Facebook, but actually work completely differently.

Keeping this background in mind, I needed to share the photos of my wedding online. Traditionally, I'd upload them to Flickr and be done with it. But of course, with this approach I could be pretty certain that the NSA would perform metadata analysis and face recognition on these pictures, and add yet another confirmed location data point to the database of each of my guests. Today, my first impulse would be to hoard these pictures on an encrypted hard drive, and never even attach that to an internet-connected PC. Obviously, this approach wouldn't be too compatible with my family's expectations.

So, I set off to work out a reasonable compromise.

Usually, I'd distribute USB sticks to everybody who's interested in the pictures. In this case, with 40+ guests, this strategy wouldn't scale at all - neither with my workload nor with my budget.

Maybe I could distribute an encrypted archive? But zip encryption is notoriously weak, and decrypting an archive is orders of magnitude more complicated and less convenient than visiting a web gallery (plus, this complexity is multiplied for mobile users).

My next thought went to the Raspberry Pi 2+ that had been gathering dust since my last project was finished. If the hardware was under my control, these photos would be immune to casual data gathering routines. And if some branch of my government really wanted these pictures, they couldn't get them with a gag order - they'd have to enter my apartment with a search warrant. I've wanted to set up a personal server for a long time already, and the RPi would be ideal for home use - not too underpowered, but unobtrusive, very energy efficient and perfectly silent.

But which software should run on this machine? Obviously, I needed a web server and a web gallery. I'm no professional web developer, so I needed something that worked out of the box. I fired up Raspbian and tried my first software combination: lighttpd with the optional PHP plugin, together with Photoshow Gallery, seemed like a reasonable choice.

First attempt: lighttpd + Photoshow Gallery with Docker
Its documentation said that it absolutely needed Docker. This was my first encounter with Docker, but judging from the website, it seemed quite useful: develop once, deploy everywhere, don't contaminate your system with useless dependencies. Just a minor caveat: "everywhere" apparently doesn't include the Raspberry Pi. At least not out of the box - the Linux binaries were for the x86-64 architecture only. After a quick search, I found a [port for Raspbian](http://blog.hypriot.com/downloads/). The website strongly hinted at its full firmware image, so I wasn't too surprised when the Debian port didn't work with Raspbian. I re-flashed the SD card with the full firmware, Hypriot, which apparently came in the "Arch" flavor of Linux distros. I quickly found out that some essential stuff was missing from this firmware: notably, wifi driver support. When this unfamiliar firmware then even rejected my first approach ("pacman": not a supported command), I was already repulsed. But when I read that I'd need to recompile the kernel to get wifi support, I decided that I was clearly out of my depth and abandoned Docker (after all, it was only a dependency of a PHP web gallery).
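In hindsight, the architecture mismatch is easy to check up front, before downloading any prebuilt binaries. A minimal sketch using Python's standard library:

```python
# Check the machine architecture before fetching prebuilt binaries.
# A Raspberry Pi 2 reports an ARM machine type ("armv7l"), while
# Docker's stock Linux binaries at the time targeted x86-64 only.
import platform

machine = platform.machine()
if machine.startswith("arm") or machine == "aarch64":
    print(f"ARM board ({machine}): stock x86-64 binaries won't run here")
else:
    print(f"{machine}: prebuilt x86-64 binaries should be fine")
```

Thirty seconds with a check like this would have saved me an evening of re-flashing SD cards.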

Second attempt: lighttpd + Photoshow Gallery without Docker
In stark contrast to the documentation, Photoshow Gallery worked like a charm without Docker. There were a few permission hiccups and the documentation could be quite a bit more extensive, but I got it running and it looked quite OK. Not more than the bare minimum of customizability, but reasonable for a home project. The only thing that bothered me was that the thumbnails were all just scaled, not cropped - which caused ugly thick white borders between rows of images whenever the occasional vertical image came along. But then I loaded the page a second time, and realized that it started generating thumbnails all over again. No thumbnail caching was a deal-breaker for my image set: it took over 50 seconds to load every single time. I was baffled, since I had configured a thumbnail folder - but apparently, this software deletes its entries immediately, and I haven't found a setting to convince it otherwise. Already halfway into another issue (Photoshow Gallery only supports WebM video, and ffmpeg support for WebM is really bad on OS X), I abandoned Photoshow Gallery and searched for something I could debug, and possibly even extend, if it didn't work.
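The missing behaviour is simple to describe: a cached thumbnail should only be regenerated when it's missing or older than its source image. A minimal sketch of that check (the function name is mine, not Photoshow Gallery's):

```python
import os

def thumbnail_is_stale(source_path: str, thumb_path: str) -> bool:
    """Return True if the thumbnail is missing or older than the source."""
    if not os.path.exists(thumb_path):
        return True
    # Compare modification times: only regenerate when the source
    # image has changed since the thumbnail was last rendered.
    return os.path.getmtime(source_path) > os.path.getmtime(thumb_path)
```

With a check like this in place, the second page load would serve the cached files instead of re-rendering the whole set for 50+ seconds.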

Third attempt: lighttpd + PhotoFloat
PhotoFloat was written in Python (which I'm fluent in), looked nice, promised a ton of performance and seemed easy enough to install. So I threw out Photoshow Gallery and installed PhotoFloat instead. It ran quite nicely, and once I decided to render the thumbnails on my laptop, I didn't even have to wait 30+ minutes for the weak ARM chip to finish the task. During the thumbnail creation, I proceeded to the next step: password protection. PhotoFloat promised user authentication with just two configuration files, and although that process seemed simple enough, it simply didn't work. After a quick dive into the source code, it turned out that the authentication was provided by an internal add-on, written in Flask (a web-oriented Python framework). I had worked with Flask once before, and had a hunch that it would conflict with my existing webserver (lighttpd). The PhotoFloat documentation mentioned a configuration file for nginx, which turned out to be another webserver. And it seemed as if the existing configuration would only work in conjunction with a package called uWSGI. I'm still not sure what it actually does, but I tried installing it anyway.
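For what it's worth, my rough understanding since then: uWSGI is an application server that sits between nginx and the Python code - nginx forwards requests to it over the uwsgi protocol, and uWSGI runs the Flask app. A hypothetical sketch of the nginx side (the endpoint path and port are placeholders, not PhotoFloat's actual layout):

```
# Hypothetical nginx snippet: hand requests for the auth endpoint
# over to a uWSGI process listening on a local socket.
location /auth {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3031;
}
```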

Fourth attempt: nginx + PhotoFloat + uWSGI for authentication
As you might have guessed, this attempt was not successful. After a few desperate attempts to split the PhotoFloat project into different folders, as implied by some parts of its documentation, I gave up on uWSGI and tried to get PhotoFloat running without authentication first.

Fifth and successful attempt: nginx + PhotoFloat
Getting PhotoFloat to work completely took me a surprising amount of time, because I didn't know what I was doing wrong. Everything showed up perfectly, the makefiles had nothing left to do, there were no errors, I managed to reload all the JavaScript - but still, the pictures were very obviously not displayed in the correct order. After a somewhat lengthy dive into the source code, and a small debug session in PyCharm, I found my suspicions confirmed: the built-in sort routine didn't actually do anything. A quick and dirty fix later, I had a running web gallery, and it performed very well - even on 3G.
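I won't reproduce the exact bug here, but a classic way a Python sort routine "doesn't actually do anything" is confusing list.sort() with sorted(): list.sort() sorts in place and returns None, so assigning its result silently throws the list away. A sketch with made-up sample data:

```python
photos = [{"name": "b.jpg", "date": "2015-07-21"},
          {"name": "a.jpg", "date": "2015-07-20"}]

# Buggy pattern: list.sort() returns None, so this loses the data.
# photos = photos.sort(key=lambda p: p["date"])

# Working version: sorted() returns a new, ordered list.
photos = sorted(photos, key=lambda p: p["date"])
print([p["name"] for p in photos])  # ['a.jpg', 'b.jpg']
```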

But there was still no authentication on this little server. With gloomy visions of hordes of Chinese hackers invading their easy target, I set out one final time to find a secure solution. For that, I needed to enable HTTPS, so that my guests' authentication info wouldn't go over the line in plain text. Apparently, you need an SSL certificate to do that. After a tiny bit of research, I realized why so many websites have self-signed certificates: these things need to be bought, or more accurately, rented. Letsencrypt, which set out to solve these issues, hadn't launched yet. So I ordered a free certificate (with a 1-year expiry date, which should be more than enough for my purpose) from startssl.com and prepared for six hours of waiting for my account confirmation. With product descriptions like this, literally listing the feature "gives you the gold padlock", it was hard to take the process very seriously. Even to an outsider like me, it seems that SSL certificates are more security theater than an actual chain of trust.
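Wiring a certificate into nginx only takes a handful of lines. A minimal sketch - the hostname and file paths here are placeholders, not my actual setup:

```
server {
    listen 443 ssl;
    server_name gallery.example.no-ip.org;           # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/gallery.crt;  # certificate file
    ssl_certificate_key /etc/nginx/ssl/gallery.key;  # private key

    root /var/www/photofloat;                        # placeholder web root
}
```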

Now I just needed some method of limiting access to my guests. With an official nginx tutorial on the first results page for "nginx access restriction", setting this up was very straightforward. I noticed that it was an nginx "plus" tutorial, but decided to try it anyway - I didn't want to switch webservers again.

Meanwhile, the SSL certificate from StartSSL was ready (earlier than expected), and during the setup process, the site cheerfully noted that I needed to be the domain owner; no subdomains were supported. This was awkward. My little setup was using no-ip.org, and I hadn't exactly planned to buy a domain for this project. But apparently this was how it worked. No-IP.org was eager to help with my dilemma, and offered me the low price of just $69.98 (per year) for a cheap'n'cheerful certificate to secure my subdomain.
So... maybe self-signed certificates weren't that bad after all. But first, finishing access control. nginx didn't flinch when I rebooted with the added access restrictions, so maybe I didn't need to change servers after all. I set up a single user with a single password (converted to htpasswd format with this tool) and hoped that my guests could stomach the additional difficulty. After testing for success, I performed a reverse test: I typed in the absolute address of an image while not authenticated, expecting to get rejected. I didn't. Ugh.
A couple of Stack Overflow posts later, I finally fixed my mistakes: I moved "auth_basic" into the server-level nginx configuration (previously, I had put it at the location level), and left "deny all; satisfy any;" out completely. Hotlinking was still possible now, but only with correct authentication. Good enough for today. To be continued...
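Put together, the working layout looks roughly like this (hostname and paths are placeholders): auth_basic at the server level covers every location, so even direct image URLs require credentials.

```
server {
    listen 443 ssl;
    server_name gallery.example.no-ip.org;           # placeholder

    ssl_certificate     /etc/nginx/ssl/gallery.crt;
    ssl_certificate_key /etc/nginx/ssl/gallery.key;

    # Server-level basic auth applies to all locations below,
    # so hotlinked image URLs are also protected.
    auth_basic           "Wedding photos";
    auth_basic_user_file /etc/nginx/.htpasswd;

    root /var/www/photofloat;
}
```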
