by R.I. Pienaar | Jan 21, 2015 | Code
I host a local Docker registry and used to just run it on port 5000 over plain HTTP. I wanted to put it behind SSL on port 443, and it was annoying enough that I thought I'd write it up.
I start my registry pretty much as per the docs:
% docker run --restart=always -d -p 5000:5000 -v /srv/docker-registry:/tmp/registry --name registry registry
This starts it, ensures it stays running, makes it listen on port 5000 and uses a directory on my host for file storage so I can remove and upgrade the registry without issues.
The problem with this is that there's no SSL, so you have to configure the docker daemon specifically with:
docker -d --insecure-registry registry.devco.net:5000
At first I thought fronting it with Apache would be as easy as:
<VirtualHost *:443>
ServerName registry.devco.net
ServerAdmin webmaster@devco.net
SSLEngine On
SSLCertificateFile /etc/httpd/conf.d/ssl/registry.devco.net.cert
SSLCertificateKeyFile /etc/httpd/conf.d/ssl/registry.devco.net.key
SSLCertificateChainFile /etc/httpd/conf.d/ssl/registry.devco.net.chain
ErrorLog /srv/www/registry.devco.net/logs/error_log
CustomLog /srv/www/registry.devco.net/logs/access_log common
ProxyPass / http://0.0.0.0:5000/
ProxyPassReverse / http://0.0.0.0:5000/
</VirtualHost>
This worked at a basic level, but as soon as I tried to push to the registry I got errors: after the initial handshake the docker daemon would be instructed by the registry to connect to http://0.0.0.0:5000/, which then fails.
Some digging into the registry code revealed that it uses the Host header of the request to build an X-Docker-Endpoints header in its replies to the initial handshake, and future requests from the docker daemon use the endpoints advertised there for communication.
By default Apache does not keep the Host header in the proxy request, so I had to add ProxyPreserveHost on to the vhost. After that it was all good: no more insecure registries or having to specify ugly ports in my image tags.
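To see why that matters, here is a simplified illustration (a sketch, not the actual registry code) of how the registry derives the endpoints it advertises from the incoming Host header:

```ruby
# Simplified sketch only, NOT the real registry implementation: the
# registry builds the X-Docker-Endpoints header it advertises from
# the Host header of the request it receives.
def handshake_response(host_header)
  { "X-Docker-Endpoints" => host_header }
end

# Without ProxyPreserveHost, Apache rewrites Host to the backend
# address, so the daemon is told to push to an unreachable endpoint:
handshake_response("0.0.0.0:5000")
# => {"X-Docker-Endpoints"=>"0.0.0.0:5000"}

# With ProxyPreserveHost on, the public name survives the proxy hop:
handshake_response("registry.devco.net")
# => {"X-Docker-Endpoints"=>"registry.devco.net"}
```

Since the daemon uses the advertised endpoints for all subsequent requests, whatever Host the proxy forwards is what the daemon will try to talk to.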
So the final vhost looks like:
<VirtualHost *:443>
ServerName registry.devco.net
ServerAdmin webmaster@devco.net
SSLEngine On
SSLCertificateFile /etc/httpd/conf.d/ssl/registry.devco.net.cert
SSLCertificateKeyFile /etc/httpd/conf.d/ssl/registry.devco.net.key
SSLCertificateChainFile /etc/httpd/conf.d/ssl/registry.devco.net.chain
ErrorLog /srv/www/registry.devco.net/logs/error_log
CustomLog /srv/www/registry.devco.net/logs/access_log common
ProxyPreserveHost on
ProxyPass / http://127.0.0.1:5000/
ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>
I also made sure it uses localhost for the port 5000 traffic, so I can now start my registry like this, ensuring the port is not exposed on any internet-facing interfaces:
% docker run --restart=always -d -p localhost:5000:5000 -v /srv/docker-registry:/tmp/registry --name registry registry
by R.I. Pienaar | Jan 18, 2015 | Code
Last weekend I finally got to 1.0.0 of my travel map software; this week, in between other things, I made a few improvements:
- Support for 3rd party tile sets like OpenStreetMap, MapQuest, Watercolor, Toner, Dark Matter and Positron. These let you customise your look a bit; the Demo Site has them all enabled.
- Map sets are supported, I use this to track my Travel Wishlist vs Places I’ve been.
- Rather than listing every individual YAML file used to define a set, you can now just point at a directory and everything in it will get loaded.
- You can designate a single yaml file as writable, the geocoder can then save points to disk directly without you having to do any YAML things.
- The geocoder renders better on mobile devices and supports geocoding based on your current position, making it easy to add points on the go.
- Lots of UX improvements to the geocoder
It seems like a huge amount of work but it was all quite small additions, mainly done in an hour or so after work.
by R.I. Pienaar | Jan 15, 2015 | Uncategorized
I’ve been with Linode almost since their day one, and I’ve never had reason to complain. Over the years they have upgraded my various machines for free, I’ve had machines with nearly 1000 days of uptime with them, their control panel is great and so is their support. They have a mature set of value-added services around the core, like load balancers, backups etc. I’ve recommended them to tens of businesses and friends who are all hosted there. In total, over the years, I’ve probably had or been involved in over a thousand Linode VMs.
This is changing though, I recently moved 4 machines to their London datacenter and they have all been locking up randomly. You get helpful notices saying something like “Our administrators have detected an issue affecting the physical hardware your Linode resides on.” and on pushing the matter I got:
I apologize for the amount of hardware issues that you have had to deal with lately. After viewing your account, their have been quite a few hardware issues in the past few months. Unfortunately, we cannot easily predict when hardware issues may occur, but I can assure you that our administrators do everything possible to address and eliminate the issues as they do come up.
If you do continue to have issues on this host, we would be happy to migrate your Linode to a new host in order to see if that alleviates the issues going forward.
Which is something I can understand: yes, hardware fails randomly, unpredictably etc. I’m a systems guy; we’ve all been on the wrong end of this stick. But here’s the thing: in the more than 10 years I’ve been with Linode and had customers there, this is something that happened very infrequently, and my recent experience is completely off the scale bad. It’s clear there’s a problem and something has to be done. You expect your ISP to do something and to be transparent about it.
I have other machines at Linode London that were not all moved there on the same day and they are fine, while all the machines I recently moved there on the same day have this problem. I can’t comment on how Linode allocates VMs to hosts, but it seems to me there might be a bad batch of hardware or something along those lines. This is all fine, bad things happen – it’s not like Linode manufactures the hardware – and I don’t mind, that’s just realistic. What I do mind is the vague non-answers to the problem. I can move all my machines around and play Russian roulette till it works, or Linode can own up to having a problem, properly investigate and do something about it while being transparent with their customers.
Their community support team reached out almost a month ago, after I said something on Twitter, with “I’ve been chatting with our team about the hardware issues you’ve experienced these last few months trying to get more information, and will follow up with you as soon as I have something for you”. I replied saying I am moving machines one by one as soon as they fail, but never heard back again. So I can’t really continue to support them in the face of this.
When my Yum mirror and Git repo failed recently I decided it’s time to try Digital Ocean since that seems to be what all the hipsters are on about. After a few weeks I’ve decided they are not for me.
- Their service is pretty barebones, which is fine in general – and I was warned about this on Twitter. But they do not even provide local resolvers; the machines are set up to use the Google resolvers out of the box. This is just not OK at all. Support confirmed they don’t and said they would pass on my feedback. Yes, I can run a local cache on the machine, but why should every one of thousands of VMs need this extra overhead in terms of config, monitoring, management, security etc. when the ISP can provide reliable resolvers like every other ISP?
- Their London IP addresses at some point had incorrect contact details, or were assigned to a different DC or something, so GeoIP databases have them in the US, which makes all sorts of things not work well. The IP whois seems fine now, but it will take time to get reflected in all the GeoIP databases – quite annoying.
- Their support system does not seem to send emails. I assume I just missed a checkbox somewhere in their dashboard, because it seems inconceivable that people sit and poll the web UI while waiting for feedback from support.
On the email thing – my anti-spam could have killed them as well, I guess. I did not investigate too deeply because, once the resolver situation became clear, it seemed like wasted effort: that issue was the nail in the coffin regardless.
Technically the machine was fine – it was fast, connectivity was good, IPv6 reliable etc. But for the reasons above I am trying someone else. BigV.io contacted me to try them, so I am giving that a go and will see how it looks.
by R.I. Pienaar | Jan 10, 2015 | Code
As mentioned in my previous two posts, I’ve been working on rebuilding my travel tracker app. It’s now reached something I am happy to call version 1.0.0, so this post introduces it.
I’ve been tracking major travels, day trips etc. since 1999 and plotting them on maps using various tools like the defunct Xerox PARC Map Viewer, XPlanet and eventually a PHP-based app I wrote to draw them on Google Maps. Over the years I’ve also looked at various services to use instead so I don’t have to keep doing this myself, but they all die, change business focus or hold data ransom, so I am still fairly happy doing this myself.
The latest iteration of this can be seen at travels.devco.net. It’s a Ruby app that you can host on the free tier at Heroku quite easily. Features wise version 1.0.0 has:
- Responsive design that works on mobile and PC
- A menu of pre-defined views so you can draw attention to a certain area of the map
- Points can be categorized by type of visit, like places you've lived, visited or transited through, each with their own icon.
- Points can have urls, text, images and dates associated with them
- Point clustering that combines many points into one when zoomed out with extensive configuration options
- Several sets of colored icons for point types and clusters. Ability to add your own.
- A web based tool to help you visually construct the YAML snippets needed using search
- Optional authentication around the geocoder
- Google Analytics support
- Export to KML for viewing in tools like Google Earth
- Full control over the Google Map like enabling or disabling the street view options
It’s important to note the intended use isn’t something like a private Foursquare or Facebook checkin service, it’s not about tracking every coffee shop. Instead it’s for tracking major city or attraction level places you’ve been to. I’m contemplating adding a mobile app to make it easier to log visits while you’re out and about but it won’t become a checkin type service.
I specifically do not use a database or anything like that; it’s just YAML files that you can check into GitHub, easily back up and hopefully never lose. Data longevity is the most important aspect for me, so the input format is simple and easy to convert to others like JSON or KML. This also means I do not currently let the app write into any data files where it’s hosted: I do not want to have to figure out the mechanics of not losing a YAML file that exists nowhere but on a web server. Though I am planning to enable writing to an incoming YAML file as mentioned above.
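To illustrate how little work such a conversion needs, here is a rough Ruby sketch turning one point into a KML Placemark. The field names are illustrative and the real travlrmap exporter is more complete:

```ruby
require 'yaml'

# Rough sketch only: the real travlrmap KML exporter is more complete
# and the field names here are illustrative, not authoritative.
def point_to_kml(point)
  # KML wants coordinates in lon,lat order
  <<~KML
    <Placemark>
      <name>#{point["title"]}</name>
      <description>#{point["comment"]}</description>
      <Point><coordinates>#{point["lon"]},#{point["lat"]}</coordinates></Point>
    </Placemark>
  KML
end

point = YAML.load(<<~YAML)
  title: Helsinki
  comment: Business trip in 2005
  lat: 60.1333
  lon: 25
YAML

puts point_to_kml(point)
```

Because the on-disk format is just a hash per point, converters like this stay trivially small, which is the whole point of keeping the data as plain YAML.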
Getting going with your own is really easy. Open up a free Heroku account and set up a free app with one dyno. Clone the demo site into your own GitHub and push to Heroku. That’s it, you should have your own up and running with placeholder content, ready to start receiving your own points, which you can create using the included geocoder. You can also host it on any Ruby app server like Passenger without any modification from the Heroku version.
The code is on GitHub at ripienaar/travlrmap under Apache 2. Docs for using it and configuration references are on its dedicated gh-pages page.
by R.I. Pienaar | Jan 5, 2015 | Code
In a previous post I showed that I am using a KML file as input into GMaps.js to put a bunch of points on a map for my travels site. This worked great, but I really want to do some marker clustering, since too many points looks pretty bad, as can be seen below.
I’d much rather do some clustering and expand out to multiple points when you zoom in like here:
Turns out there are a few libraries for this already. I started out with one called Marker Clusterer but ended up with an improved version of it called Marker Clusterer Plus. And GMaps.js supports cluster libraries natively, so this should be easy, right?
Turns out Google Maps loads KML files as a layer over the map, and the points are just not accessible to any libraries, so the cluster libraries do nothing. OK, so back to drawing points with my own code.
I added an endpoint to the app that emits my points as JSON:
[
{"type":"visit",
"country":"Finland",
"title":"Helsinki",
"lat":60.1333,
"popup_html":"<p>\n<font size=\"+2\">Helsinki</font>\n<hr>\nBusiness trip in 2005<br /><br />\n\n</p>\n",
"comment":"Business trip in 2005",
"lon":25,
"icon":"/markers/marker-RED-REGULAR.png"}
]
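A rough sketch of the kind of Ruby that could back such an endpoint might look like this. The field names match the sample above, but the popup template and data loading in the real app are more involved:

```ruby
require 'json'

# Illustrative sketch only; the real travlrmap code behind the JSON
# endpoint handles icons, map sets and popup templates properly.
def point_to_marker(point)
  {
    "type"    => point["type"],
    "country" => point["country"],
    "title"   => point["title"],
    "lat"     => point["lat"],
    "lon"     => point["lon"],
    "comment" => point["comment"],
    "icon"    => point["icon"],
    # heavily simplified popup template
    "popup_html" => "<p><font size=\"+2\">#{point["title"]}</font><hr>#{point["comment"]}</p>"
  }
end

point = {
  "type"    => "visit",
  "country" => "Finland",
  "title"   => "Helsinki",
  "lat"     => 60.1333,
  "lon"     => 25,
  "comment" => "Business trip in 2005",
  "icon"    => "/markers/marker-RED-REGULAR.png"
}

puts JSON.generate([point_to_marker(point)])
```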
Now adding all the points and getting them clustered is pretty easy:
<script type="text/javascript">
var map;

function addPoints(data) {
  var markers_data = [];

  if (data.length > 0) {
    for (var i = 0; i < data.length; i++) {
      markers_data.push({
        lat: data[i].lat,
        lng: data[i].lon,
        title: data[i].title,
        icon: data[i].icon,
        infoWindow: {
          content: data[i].popup_html
        }
      });
    }
  }

  map.addMarkers(markers_data);
}

$(document).ready(function(){
  map = new GMaps({
    div: '#main_map',
    zoom: 15,
    lat: 0,
    lng: 20,
    markerClusterer: function(map) {
      var options = {
        gridSize: 40
      };

      return new MarkerClusterer(map, [], options);
    }
  });

  var points = $.getJSON("/points/json");
  points.done(addPoints);
});
</script>
This is pretty simple: the GMaps() object takes a markerClusterer option that expects an instance of the clusterer. I fetch the JSON data and each row gets added as a point; then it all just happens automagically. Marker Clusterer Plus takes a ton of options that let you specify custom icons and grid sizes, tweak when clustering kicks in, etc. Here I am just setting gridSize to show how that’s done. In this example I have custom icons used for the clustering; I might blog about that later once I’ve figured out how to get them to behave perfectly.
You can see this in action on my travel site. As an aside I’ve taken a bit of time to document how the Sinatra app works and put together a demo deployable to Heroku that should give people hints on how to get going if anyone wants to make a map of their own.