Ansible: Misleading dopy errors

Ansible works pretty well with DigitalOcean, and where it does not, it is mostly the fault of dopy, not DigitalOcean’s or Ansible’s.1 I am using macOS, and out of convenience I installed Ansible via Homebrew. When trying certain actions, for example destroying a Droplet, I got this error message:

dopy >= 0.2.3 required for this module

This was weird, as I was sure I had installed dopy via pip (and pip via Homebrew). After double-checking and some digging I found the issue to be with the Ansible installation. To fix the issue at hand, one can either add

localhost ansible_connection=local ansible_python_interpreter=python

to the hosts file, or install dopy==0.3.7a with the pip that belongs to the Homebrew Ansible installation under /usr/local/Cellar/ansible/.
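The reason the error is misleading: Ansible modules typically guard the import in a try/except, so a dopy that is installed for a *different* interpreter produces the same message as a missing or outdated dopy. A quick sanity check (a sketch; whether `python3` or `python` is the right binary depends on your system, and the message texts are illustrative):

```shell
# Check whether the Python on your PATH can import dopy at all.
# If this fails while `pip show dopy` succeeds, Ansible is most likely
# running a different interpreter (e.g. one bundled by Homebrew).
python3 - <<'EOF'
try:
    import dopy
    print("dopy is visible to this interpreter")
except ImportError:
    print("dopy is NOT visible to this interpreter")
EOF
```

If the interpreter on your PATH sees dopy but Ansible still complains, that points at the interpreter mismatch described above.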

Let’s Encrypt – Beta Impressions

The Let’s Encrypt Beta has finally started. I registered a couple of weeks ago and the domains I use regularly got white-listed.

Certificate information screenshot

Just a few impressions so far:

  • The client is still pretty basic; it comes with a little wrapper that builds a virtual environment for all the required Python modules (which is very nice and comfortable)
  • The plugin to automatically configure Apache is still in alpha
  • The plugin to automatically configure Nginx is buggy and seems to be pre-alpha (I think it is currently not delivered/used at all)
  • Don’t manually mess with /etc/letsencrypt, as in never ever!

It is already comfortable to use — if you compare it to the manual process you had to undergo before. Once it is finished and all the bugs are ironed out, this thing will kick ass.

The certificates are already deployed on all my major sites; now I just have some maintenance work to do (removing unsafe ciphers, etc.). I started with my blog, and the SSL Labs test looks pretty good.

SSL lab test results

I will try to do more with it in the upcoming days and weeks, but between work and university I currently don’t have that much time for personal projects.

If you are not part of the beta program but want to support the Let’s Encrypt initiative and go bug hunting, or simply want to see how it works, just grab the client from GitHub and use the testing infrastructure they provide (the testing CA is called “Happy Hacker CA”). News and announcements about the beta can be found here; there are also configuration examples for Nginx and Apache.
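For those who want to try the testing infrastructure, an invocation looks roughly like this (a sketch: the staging URL below is the one in use at the time of writing, and example.com is a placeholder; check the project documentation for the current endpoint and available plugins):

```shell
# Request a test certificate from the staging CA ("Happy Hacker CA");
# certificates issued this way are NOT trusted by browsers.
./letsencrypt-auto certonly --standalone \
    --server https://acme-staging.api.letsencrypt.org/directory \
    -d example.com
```

The --standalone plugin spins up its own temporary web server for the domain validation, so port 80/443 must be free while it runs.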

Last but not least, Kenn White published a little script suite on GitHub that downloads the official client and runs it to generate a certificate. It helps a lot to run the client on older Linux distributions or AWS instances, but on newer distributions it is a bit redundant in my opinion. Everything is in the early stages, and as the Let’s Encrypt initiative matures, I am sure the scripts will grow and be a great resource in the future! Kenn mentions other available clients on the page, so make sure to check them out.

Running your own mail server in 2015

Running your own mail server always was a pain in the ass1. Ever since I started to become involved with IT, operating your own mail server has been a lot of work. You always had to tweak something here or update something there, because you wanted it to be safe and secure. In my early days it was even possible to use DynDNS and run your mail server behind a dial-up modem or ISDN line. Things started to change even then, but it was still possible.

Nowadays this is neither recommended nor possible, as all major email service providers are blocking dial-up IP ranges. It became a necessity with all the zombies2 out there trying to push their spam to the masses. Even though it got a lot easier to have your own server “on the Internet”, with a public non-dial-up IP, it also became even harder to operate your own mail server. Cloud providers like Amazon Web Services or DigitalOcean make it easy for you to deploy your own virtual private server within minutes. Unfortunately, it is as easy for the spammers and scammers as it is for you. This means many email service providers started to block whole IP ranges from cloud providers or automatically assign a higher “score” to email originating from these IPs, which makes it disappear more often into the “digital abyss” of the junk or spam folder. Most of the time this can only be remedied by using a 3rd-party provider like Mailgun, which specialises in email services and takes very good care of the reputation of its servers and IPs.

But that is not all. If you are like me and have several domains with even more mailboxes, aliases and domain aliases, the configuration can get quite complicated. You might want to have a backend database and, to keep maintenance as low as possible, some sort of configuration utility (usually a web interface of some kind). If you decide against a 3rd-party provider, you will have to configure SPF and DKIM yourself. This configuration requires extra attention, but without it many of the larger email providers will put your email almost directly into the junk folder. Besides all this, the system also has to be kept up-to-date at all times, and logs need to be watched3 for stray emails, break-in attempts and spammers who try a DoS or even a DDoS attack. You can see that operating your own mail server is not a good idea if you are inexperienced or cannot spare the time for all the design decisions, configuration and debugging involved.
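To give an idea of what the SPF and DKIM part involves, here is a sketch of the two DNS TXT records (domain, selector and key are placeholders; the public key comes out of whatever tool generates your DKIM key pair, e.g. opendkim-genkey):

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"
```

The SPF record says “only my MX hosts may send mail for this domain”; the DKIM record publishes the public key under the selector (here “mail”) that your signing software on the mail server references.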

Currently I am using Dovecot, Postfix, a MySQL database as backend and PostfixAdmin for administration. But I am looking for another solution that requires less time. I tried Google Apps for Business, which was not for me, and also several other solutions. At the moment I am trying to get into the Amazon WorkMail preview, which could be exactly what I am looking for. DigitalOcean recently wrote an article about Mail-in-a-Box, which I want to take a closer look at as well.

Of course there are other important topics I have not covered in this post, for example data security and surveillance. But that is a subject for another article, which will describe my thoughts on email in general and what implications the different solutions can have.

1. Maybe not always, I heard the early days were pretty sweet [↩](#fnref:1 “return to article”)
2. Malware infested desktop hosts [↩](#fnref:2 “return to article”)
3. Usually with the help of special tools like Pflogsumm, rrdtool and/or mailgraph [↩](#fnref:3 “return to article”)

EDNS UDP packet size

A couple of weeks ago I set up a local BIND in a CentOS 6.5 VM to have an internal DNS server for my VMs to use. After creating several local zone files and successful initial tests, everything worked fine. Some domains had high query times, and SSH logins sometimes took a bit longer than expected, but I did not have enough time to investigate further.

Today I had some free time on my hands and decided to revisit the issue. The DNS server works fine, but whenever the cache is empty, a DNS query takes up to 3098ms. Once the result is in cache, everything works as expected. To get a better overview I started by enabling debug logging in named.conf:

logging {
    channel default_debug {
        file "data/";
        severity debug;
        print-time yes;
        print-severity yes;
        print-category yes;
    };
    category default {
        default_debug;
    };
};

After a restart of the DNS service I tested several domains and found this suspicious log entry:

05-Oct-2014 11:24:28.014 edns-disabled: debug 1: success resolving '' (in ''?) after reducing the advertised EDNS UDP packet size to 512 octets

I then remembered something a friend of mine told me a couple of months ago. Hetzner uses several firewall and anti-DDoS techniques to prevent attacks. One of these techniques is blocking UDP packets greater than a specific size (which is not publicly revealed).

I found a knowledgebase article and a way to test whether this is a problem with my local DNS server or the remote one. It looks like Hetzner is blocking UDP packets greater than 1440 or sometimes 2200 bytes; sometimes I get even lower values. Pretty inconsistent results.
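The test boils down to asking for a large response while advertising different EDNS UDP buffer sizes, which dig lets you set directly (a sketch; the resolver address below is a placeholder and DNSKEY for “org.” is just a convenient large answer):

```shell
# Advertise a large EDNS buffer: if big UDP packets are filtered,
# this query times out or falls back to TCP...
dig +bufsize=4096 +dnssec DNSKEY org. @198.51.100.53

# ...while the same query with a 512-byte buffer still succeeds.
dig +bufsize=512 +dnssec DNSKEY org. @198.51.100.53
```

If only the small-buffer variant works reliably, the path (not your BIND) is dropping the large UDP responses.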

To be on the safe side, I edited the named.conf and set the values for “edns-udp-size” and “max-udp-size” to 512 bytes:

edns-udp-size           512;
max-udp-size            512;

This seems to work for me now. I also contacted Hetzner and applied for the UDP whitelist.

PAM & NSS LDAP Authentication

The number of servers and services I administrate and manage is rising slowly but steadily. I started with one little VPS and now I have 8 virtual machines, not counting the three test environments. Clearly, a centralized authentication platform was needed.

Since I am using Ubuntu (12.04) for my servers, this post will describe how to install and configure LDAP client authentication on Ubuntu systems.

389 – LDAP server

First thing to do is to install the 389 LDAP Directory Server from the Ubuntu package repositories:

apt-get install 389-admin 389-admin-console 389-console 389-ds 389-ds-base 389-ds-base-libs 389-ds-console

To complete the 389 Directory Server installation and configuration, run the setup script that comes with the packages:

setup-ds-admin.pl

Answer the questions asked during the setup. I would recommend taking some time to read and understand the setup questions properly.

After the 389 DS is properly installed, configured and running, the LDAP user and group need to be created and imported into the DS. To achieve this, create the following base.ldif file with your favourite editor:

## An example entry to add a user to LDAP
dn: cn=aaronson,ou=people,dc=mydomain,dc=com
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: posixAccount
cn: aaronson
sn: Aronson
givenName: Aaron
uid: aaronson
userPassword: my-secret-password
mail: [email protected]
telephonenumber: 1234567890
l: Town
postalcode: 12345
uidNumber: 5001
gidNumber: 5001
homeDirectory: /export/home/aaronson
loginShell: /bin/bash
## An example entry to add a group to LDAP (same name as the user)
dn: cn=aaronson,ou=groups,dc=mydomain,dc=com
objectclass: top
objectclass: posixGroup
cn: aaronson
gidnumber: 5001
memberuid: aaronson

Then use ldapadd to add the user to the 389 LDAP server:

ldapadd -x -f base.ldif -D "cn=Directory Manager" -w mysecretpassword
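To verify the import worked, a simple search for the new entry should return it (a sketch; this assumes the server runs on localhost and allows anonymous reads, otherwise bind with -D/-w as above):

```shell
# Look up the freshly imported user below the example base DN
ldapsearch -x -H ldap://localhost -b "dc=mydomain,dc=com" "(uid=aaronson)" cn uid homeDirectory
```

If the entry comes back with the expected uidNumber/homeDirectory, the directory side is done.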

NFS Server

After importing the LDIF file into the DS server, we need to prepare the NFS server so that it shares the users’ home directories, which can then be (auto)mounted during login.

Edit the ‘/etc/exports’ file to export the share where the home directories will be created:


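The actual export line did not survive here; assuming the home directories live under /export/home as in the LDIF above and your clients sit in 10.0.0.0/24 (network and mount options are placeholders to adjust to your setup), a typical entry would look like:

```
/export/home    10.0.0.0/24(rw,sync,no_subtree_check)
```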
After restarting the NFS server, you should be good to go. Test your configuration by manually mounting the share on your servers. If it works, you can go ahead and edit the PAM files. If not, consult the NFS manpage (especially if you have permission issues; this can be tricky sometimes).

Also edit the file ‘/etc/pam.d/common-session’ and add the following entries at the end of the file:

# end of pam-auth-update config
session    required    pam_mkhomedir.so    skel=/etc/skel    umask=0022

This makes sure that the user’s home directory is created during login if it does not exist. This will only happen when you log in to the file server for the first time. This is a somewhat lazy workaround on my part, as I will explain at the end of this post.

Client LDAP configuration

Now the clients need to be configured. Install the following packages on your designated LDAP client:

apt-get install auth-client-config ldap-auth-client ldap-auth-config libnss-ldap libpam-ldap

You will be asked a variety of questions similar to those asked when you were installing the server components:

LDAP server Uniform Resource Identifier: ldap://**LDAP-server-IP-address**
(change the initial string from "ldapi:///" to "ldap://" before inputting your server's information)

Distinguished name of the search base: dc=mydomain,dc=com (as in our example)

LDAP version to use: 3

Make local root Database admin: Yes

Does the LDAP database require login? No

LDAP account for root: uid=admin,ou=Administrators,ou=TopologyManagement,o=netscaperoot

LDAP root account password: secret-ldap-admin-password

vim /etc/ldap/ldap.conf

BASE        ou=people,dc=mydomain,dc=com
URI         ldap://
DEREF       never
# TLS certificates (needed for GnuTLS)
TLS_CACERT  /etc/ssl/certs/ca-certificates.crt

vim /etc/nsswitch.conf

The three lines we are interested in are the “passwd”, “group”, and “shadow” definitions; changing them tells NSS to consult LDAP in addition to the local files when users and groups are looked up. Modify them to look like this:

passwd: ldap compat
group:  ldap compat
shadow: ldap compat

Client Automount configuration

Install autofs packages:

apt-get install autofs-ldap autofs

Add the following lines to the autofs configuration files, starting with the master file:

vim /etc/auto.master

# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#/misc    /etc/auto.misc
# NOTE: mounts done from a hosts map will be mounted with the
#    "nosuid" and "nodev" options unless the "suid" and "dev"
#    options are explicitly given.
#/net    -hosts
# Include central master map if it can be found using
# nsswitch sources.
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.

/export/home    /etc/auto.home

Then edit the autofs config file for the home directories ‘/etc/auto.home’:

*    -fstype=nfs,soft,intr,rsize=8192,wsize=8192,nosuid,tcp
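Note that an auto.home entry also needs a location field naming the NFS server and exported path; with a placeholder hostname, a complete entry would read:

```
*    -fstype=nfs,soft,intr,rsize=8192,wsize=8192,nosuid,tcp    nfs-server.mydomain.com:/export/home/&
```

The `*` wildcard matches any directory name requested under /export/home, and the `&` on the right is replaced with that same key, so each user gets their own directory mounted.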

The last step is the creation of the user’s home directory on the NFS share so that it can be (auto-)mounted every time an LDAP user logs in. You can either write scripts or a web UI for this, or do the first login on the file server (remember the NFS server configuration above).

After all these configuration steps you should be able to log in to every host with your LDAP credentials.