Ethernet offloads

Recently, I experienced strange symptoms at a customer network: very short connection outages that resulted in only partially completed database queries, observed by a competent admin. While doing some further research, the problem sounded familiar, tho under slightly different circumstances: last time, at another site, pretty large downloads were being interrupted, w/ content malware scanning on the list of potential culprits, but soon the customer site's network hardware itself was identified as playing a crucial role. Messages sporadically seen in syslog were e.g.:

e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None


MASTER conntrack-tools: no dedicated links available!

Last time, it was a Gbit NIC using the e1000e kernel module connected to a D-Link Gbit switch; this time it was the same NIC connected to a Cisco and an HP Gbit switch.

However, after having a closer look at what kind of network negotiation took place and checking my notes from last time, implementing

ethtool -K eth0 gso off gro off tso off

seems to have fully resolved the issue(s) mentioned above so far. What this does is disable

  • generic segmentation
  • generic receive and
  • tcp segmentation 

offloads @ the NIC itself. On the other hand, maybe this is only an issue w/ HQ switch hardware that triggers the NIC's behaviour in the first place.

Either way, the above command eliminates the problematic aspects of all this.
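To check which offloads are currently active and to make the change survive a reboot, something along these lines should work (a minimal sketch assuming Debian-style ifupdown; interface name and paths may differ on your system):

# show the current offload settings
ethtool -k eth0 | grep -E 'segmentation|generic-receive'

# persist the change, e.g. in /etc/network/interfaces:
#   iface eth0 inet static
#       ...
#       post-up /sbin/ethtool -K eth0 gso off gro off tso off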

Roundcube 1.2.0: Enigma PGP

The rollout of Roundcube 1.2.0 went smoothly; what follows is a short howto to get the PGP plugin up and running for users new to e-mail encryption.

At first, one has to set a username under Settings => Identity. Then, under Settings => Encryption, at least the upper 4 options should be selected.

Afterwards, Settings => PGP Keys should be selected and a new key be generated using the + symbol at the bottom. It is rather important to choose a strong password, as this will also protect the keyfile itself on the server. For each user identity we want to be able to communicate w/ in an encrypted manner, the relevant public key has to be Imported, and we should send every user our Exported public key, preferably over a completely different, secure channel, e.g. Jabber + OTR. (Hint: The Import and Export buttons are found in the upper left corner of the UI in the Settings => PGP Keys context).

If you wonder how you can export an existing public key suitable for the Import mentioned above, you can type e.g.:

gpg --armor --export 800e21f5 > 800e21f5_key.asc
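If unsure which key ID to export, listing the keys in the local keyring shows the available IDs (the ID above is of course just an example):

gpg --list-keys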

All the initial tests went smoothly; if any problems should arise, updates will be posted here.

NWS0: H/W Upgrade


As we all know, IOWAIT is the Professor Moriarty of good performance. So, I decided to replace the oldest, least powerful rootserver w/ a brand-new one. The following gives a rough overview of the steps involved in conducting a non-parallel upgrade of a productive system, which (at least in my case) succeeded perfectly.

By using virtualization, we gain a modularization of the overall functionality. However, the rootserver itself needs to be a stable, scalable (as in having enough RAM and CPU power) and ideally secure platform. In my case, that also involved server hardening, VPN, a robust firewall ruleset, GPG and monitoring of e.g. netflow and system logs as well as libvirt for better VM maintenance.

The first thing on the todo list is planning the productive VM backup, which must be taken right before the actual upgrade so that all data is current. A while ago, I used to convert the RAW images to QCOW2 first (also as some sort of disk integrity check), then encrypted the resulting file using gpg. However, since the I/O of the old rootserver was rather a pain, I just skipped the conversion and directly encrypted the RAW images (gpg also compresses data), which has proven to work out very well.
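For reference, a minimal sketch of that step; the filenames are hypothetical, and since gpg compresses by default, no separate compression pass is needed:

# symmetrically encrypt (and implicitly compress) a VM disk image
gpg --symmetric --cipher-algo AES256 -o vm1-backup.img.gpg /var/lib/libvirt/images/vm1.img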

What you should ask yourself now is:

  • How big is every VM (and how much data is actually in it)
  • How long does it take to create a backup of it
  • How long does it take to transfer the backup
  • Result: How long does it take for all VMs to be safely backed up
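For the transfer question, a quick back-of-the-envelope calculation helps; assuming, purely as an example, ~ 50 GB of effective backup data and a 100 Mbit/s uplink (roughly 12 MB/s):

echo $((50 * 1024 / 12))    # ≈ 4266 seconds, i.e. well over an hour for the transfer alone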

As we are already using git annex for all of our backups thanks to the tip from rpw, we of coz use this to actually store the backups redundantly @ multiple other root servers. All the configuration files are also redundantly backed up so that we are able to conduct a fast rollout.

So, after having done the steps mentioned above, the old server can be securely deleted. Wiping is most secure, but not really an option b/c we are in a hurry to have a productive system again. Reinstalling the old server 4 times w/ a fresh linux works, and the fact that most sensitive data was already stored using GPG allows us to take less care regarding secure data deletion.

The data center involved worked really fast, and it took less than 30 minutes for them to disconnect the old and connect the new server in their rack, even tho it was around 2 A.M.

When the fresh system was connected, the configuration rollout could take place. After restoring the backups via git-annex (hint: git annex reinit $OLDUID really made sense here!), or at first via scp for the productive VMs only, we should take the time to adjust the CPU feature set of the VMs, e.g. using virt-manager, to meet the new specs and to get rid of some ugly kvm warnings.
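If virt-manager is not at hand, the same adjustment can be done by editing the domain XML directly; a minimal sketch, assuming libvirt's virsh and a VM hypothetically named vm1 (host-model simply exposes the new host's CPU features to the guest):

virsh edit vm1
# inside the <domain> definition, set for example:
#   <cpu mode='host-model'/>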

Last but not least, here are the times needed for the actual migration steps in my case:

  • Backup of 5 productive VMs and configs + transfer: ~ 3 hrs
  • Implementation of configs @ new server: ~ 30 minutes
  • Transfer and extraction of productive VMs: ~ 1.5 hrs
  • “Production Gap” caused by the whole migration: ~ 5 hrs
  • Transfer of all git-annex data (430GB): ~ 3 hrs

In most cases, good planning and realistic preparation (which clearly involves experience) is really a must, and if you ask me, it is also a lot of fun, especially when the new server has ~ 4 times the capacity and performance of the old one while being cheaper at the same time.

AVS3: Productive

Finally, I implemented the Debian/GNU 8.0 based AVS3 Mail VM running qmail @ two production servers. Surprisingly, this initially produced quite a few issues, which I will describe below.

Certificate Issues

If you are using a mail client such as Thunderbird, the Diffie-Hellman parameters used for the IMAP TLS connection must be at least 1024 bits in size. However, it is better to create a 2048-bit version by prepending

export DH_BITS=2048

to the actual creation command, e.g. mkdhparams.
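The size of the freshly generated parameters can then be verified with openssl; the file path below is an assumption and depends on where the Courier installation keeps its dhparams:

openssl dhparam -in /etc/courier/dhparams.pem -text -noout | head -n 1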

RBL Issues

When we initially connected to the AVS3 via SMTP, there was always a delay of 10-30 seconds. Suspecting that it had to do w/ some lookup mechanism, I had a look at the qmail-smtpd run file; the relevant line there is


and should not contain any servers that are unresponsive or unavailable. After having removed the culprit server, all subsequent connections went smoothly w/o any delay.
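For illustration only (the actual line is omitted above): such a line typically wraps qmail-smtpd in rblsmtpd, and every -r argument is a DNSBL that gets queried on each connection, so a single dead server stalls the SMTP greeting. Hostnames here are purely hypothetical:

# hypothetical excerpt of the qmail-smtpd run file
exec rblsmtpd -r zen.spamhaus.org -r rbl.dead-example.org qmail-smtpd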

Since I was able to simply migrate all existing maildir (hail to this greatest of all formats) data from the AVS2 into the AVS3 (after having created the relevant domains and accounts), all mails were available exactly as before.

Best Practice

As an important step to eliminate backscatter, John M. Simpson’s jgreylist is also implemented to avoid sending hundreds of unnecessary mails per week (and also getting our graphs poisoned).

In general, it is recommended for all networksec users to send mail via 465/tcp, thus enforcing SSL/TLS and SMTP AUTH.
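Whether the submission port actually speaks TLS can be checked quickly from any client machine; the hostname is hypothetical:

openssl s_client -connect mail.example.org:465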

In the end, besides having to accept the new (still self-signed) certificates for incoming and outgoing connections to the AVS3, there is nothing to be done on the client side, except perhaps closely watching for spam mails, as the filter has to be trained initially during the first week of deployment w/ greylisting turned off.

Howto: Boost your old Laptop


Many “old” laptops from around 2003 seem to have been produced using quality components, as most of them still function properly. However, bottlenecks most certainly appear: lack of RAM and sluggish HD performance. The old enemy, IOWAIT, jumps on the stage. Also, the BIOS does not support booting from USB, and burning a CD is rather environmentally unfriendly and really not necessary.

However, after having thought about these issues for a bit, a pretty fast, nice and simple rollout including a complete O/S reinstall and full data migration turned out to be a matter of only minutes.

In my case, the laptop was manufactured in 2003, equipped w/ a 1.5GHz Celeron, 512MB RAM and an 80GB IDE HDD, running Ubuntu 14.04 LTS.


Requirements of this howto:

  • old 32bit laptop
  • other laptop running KVM
  • ext. mSATA SSD chassis
  • Converter IDE 44 Pin > mSATA with 2.5″ Frame
  • ext. 2.5″ IDE HDD chassis


Installation of O/S

After having inserted the mSATA SSD into the external chassis, we can connect that to the laptop running KVM via USB and directly install linux onto the mSATA SSD:

sudo kvm -hda /dev/sdb -cdrom ubuntu-14.04-desktop-i386.iso -m 1024

This is only a matter of a few minutes and allows us to install the i386 architecture onto the mSATA SSD even tho the host laptop’s arch is x86_64.
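Before pointing kvm at /dev/sdb, it is worth double-checking which device node the external chassis actually got, so the wrong disk is not overwritten:

lsblk -o NAME,SIZE,MODEL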


Booting up the “refurbished” Laptop

Now, remove the old IDE HD from the old laptop, put the mSATA SSD into the converter and implant it where the old IDE HD was located. You will see that the bootup speed is boosted by a factor of 5 to 10.


Migrating the old Data

Put the old HDD into the ext. 2.5″ IDE chassis, connect that via USB to the freshly booted computer, and simply copy your home directory (including hidden directories, e.g. for firefox or thunderbird data); in my case this ran at ~ 23MB/s, which means less than a minute per GB of data.
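rsync is handy for this, as it preserves permissions, includes the hidden dot-directories and can simply be re-run if the copy gets interrupted; mount point and username below are hypothetical:

rsync -avh /media/old-hdd/home/d1g/ /home/d1g/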


Some I/O Background

The performance of the old laptop is really astonishing, as one problem in particular is eliminated: read/write access of the O/S to the swap partition no longer hinders the actual execution flow, and subsequently, deadlock situations mostly vanish; copying some files while at the same time opening a new terminal to copy configs and install additional software is now no longer a problem.

Santa brings GRSEC Repo


Thanks to this tweet from ioerror, we now all know about the Debian/GNU GRSEC repository kindly provided by Yves-Alexis Perez.

GRSEC is a bunch of really cool security extensions and hardening features for the linux kernel by Brad Spengler.

Installing this on a wheezy lab VM is pretty straightforward and involves the following steps:

1. Add the sources, then download the repository key:

root@NWS3-D7-BUILD:/home/d1g# apt-key adv --keyserver --recv-keys 0x71ef0ba8
gpg: requesting key 71EF0BA8 from hkp server
gpg: key 71EF0BA8: "Yves-Alexis Perez <>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1

2. Update package sources:

apt-get update && apt-get upgrade

3. Install the grsec kernel:

apt-get install linux-image-3.2.0-4-grsec-amd64 linux-headers-3.2.0-4-grsec-amd64 linux-grsec-base

4. Reboot, continue to use the system normally, observe anomalies and report them back:

Linux NWS3-D7-BUILD 3.2.0-4-grsec-amd64 #1 SMP Debian 3.2.68-1~grsec1 x86_64 GNU/Linux
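A quick way to confirm that the grsec kernel is actually active (besides uname) is to look for the grsecurity sysctl tree; which entries exist depends on the kernel config:

sysctl -a 2>/dev/null | grep '^kernel.grsecurity' | head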

Happy testing and merry xmaz 🙂

Backup 2.0


We are currently on a maintenance and backup spree, and trouble arose when dealing w/ the gluster distributed-replicated setup involving four rootservers. After ranting a bit about the incompatibility of the gluster versions in e.g. wheezy and jessie, we received a tip (thanks to rpw) and decided to start using git-annex from now on.

Note: To run a really fresh version (5.2), using the neurodebian repo is a good option.

Create Backup Data

It is ideal to have a spare b0x to host all the initial backup data and to function as a place to sort and clean up stuff. As w/ every backup, it is best practice to always keep the local copies until we really have a fully functioning, redundant backup. So, in our case, we get most of the backup data via rsync from two other hosts, and some more via scp. It is also a good idea to create a listing of all included files for later comparison, e.g. by doing

tree -ah | tee -a TREE

Initialize local repo

When we have all the data we want in place, we can continue by

git init
Initialized empty Git repository in /home/d1g/ANNEX/.git/
git annex init "NWS4"
init NWS4 ok
(recording state in git...)
git annex direct

Now, files should be added:

git annex add .    # takes quite some time to create the initial checksums
git commit -m "Initial git-annex NWSx REPO"

Initialize remote repo

On all of the remote nodes, the preparation should be something like

mkdir ANNEX
cd ANNEX
git init
git annex init "NWS3"
git annex direct

Add remote repo

git remote add nws4 ssh://nws4/home/d1g/ANNEX

Synchronize data

Now we can start to synchronize the data between all involved repos, first copying from the remote location:

git annex sync --content

or, if we want to be more specific:

git annex copy VMB2015 --from nws4

Keep data up to date

git annex watch

If annex watch is not running, and one host now adds a new file via

git annex add .
git annex sync --content

we can use nws4 to “spread” it to all the other hosts automatically (after having added them all as remotes) by issuing the exact same command there.
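To check where a given file has actually ended up, git-annex can list all repositories known to hold a copy (the filename is just the example from above):

git annex whereis VMB2015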

For common things that could possibly go wrong, this page may contain some useful information.



OC client update


After having upgraded the owncloud server, older clients are no longer compatible. If you are running a version of Ubuntu or Debian/GNU Linux, upgrading is straightforward. For Ubuntu 14.04 LTS, it is as simple as:

sudo apt-key add - < Release.key
sudo sh -c "echo 'deb /' >> /etc/apt/sources.list.d/owncloud-client.list"
sudo apt-get update
sudo apt-get install owncloud-client

The current version of the owncloud-client is 2.0.1. Server Admins might also want to read this.

Malware Deobfuscation


Recently, a customer’s outdated WordPress installation started misbehaving by sending out lots of typical spam mails.


1. Correlation

At first, some correlation of what exactly was going on was required. Tailing the webserver’s logfiles together w/ running ngrep showed a clear connection: when a certain URL is called, a new spike in the mail queue happens. Also, the ngrep data reveals an interesting string:


This looks like some base64 to me. Decoding this results in


which potentially looks like some C&C data. Facts so far: an attacker uploaded a php file and regularly calls that file to send out spam mails.
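For reference, decoding such a captured string on the shell is a one-liner; the string below is a stand-in, not the actual payload:

echo 'c29tZSBjYXB0dXJlZCBzdHJpbmc=' | base64 -d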


2. Code Analysis

When checking out the php code of the malicious file (only two scanners @ virustotal detect it, as CPR17F2.Webshell and PHP.Packed.11 respectively), it becomes clear that the code is obfuscated not only by base64, but also in some proprietary way. Most of the file’s content consists of lines like:


Function names are random, and the code appears in obfuscated and encoded form. However, at the beginning of the file we get some hints, and at the end there is something like a key scheme for all this:

$felhrwy = Array('1'=>'m', '0'=>'g', '3'=>'Y', '2'=>'B', '5'=>'y', '4'=>'M', '7'=>'l', '6'=>'Q', '9'=>'b', '8'=>'7', 'A'=>'A', 'C'=>'2', 'B'=>'v', 'E'=>'j', 'D'=>'w', 'G'=>'0', 'F'=>'F', 'I'=>'O', 'H'=>'H', 'K'=>'k', 'J'=>'1', 'M'=>'T', 'L'=>'U', 'O'=>'x', 'N'=>'q', 'Q'=>'C', 'P'=>'R', 'S'=>'N', 'R'=>'o', 'U'=>'6', 'T'=>'4', 'W'=>'P', 'V'=>'K', 'Y'=>'X', 'X'=>'G', 'Z'=>'8', 'a'=>'p', 'c'=>'S', 'b'=>'n', 'e'=>'L', 'd'=>'3', 'g'=>'E', 'f'=>'r', 'i'=>'i', 'h'=>'I', 'k'=>'5', 'j'=>'t', 'm'=>'h', 'l'=>'z', 'o'=>'9', 'n'=>'e', 'q'=>'f', 'p'=>'Z', 's'=>'s', 'r'=>'u', 'u'=>'W', 't'=>'c', 'w'=>'a', 'v'=>'V', 'y'=>'d', 'x'=>'D', 'z'=>'J');


3. Code Deobfuscation – partial only

Okay, we got that list, so we can use tools like sed to transform the data according to the author’s own rules. After having compiled a list that looks like


and so on, a small shellscript (that I call intentionally)  is needed to do what we want, containing:

j=0
k=1
for i in `cat ARRAY`
do
    echo $i $j $k
    sed $i FILE$j > FILE$k
    j=`expr $j + 1`
    k=`expr $k + 1`
done

Later on, it becomes clear that this is not the quick and dirty way, as that would rather have been

sed -f ARRAY $1 > $2

Output in both cases becomes a lot clearer, but is still heavily obfuscated, and fiddling around w/ all the text manipulation utilities is a very abstract thing for sure. Also, the techniques used so far do not deal w/ things like CRLF or “\r\n” and so on.


4. Code Deobfuscation – SUCCESS

It looks a lot more reasonable to use the code that the attacker already gave us to deobfuscate and decode the whole php file. So we take a closer look at the very last function call that does all that:

eval(xlvgapr($wufa, $felhrwy));?>

In short, this runs the deobfuscated and decoded code directly on the machine the file is executed on. All we gotta do is not run the code, but print it, so all it takes is

print(xlvgapr($wufa, $felhrwy));?>

and a command like

php MODIFIED_inc.php > DECODED.php

The resulting DECODED.php file is ~ 108kb in size (vs. ~ 152kb originally) and seems to heavily borrow code from phpmailer. What we have now is the source code of some sort of complete mail-sending framework, featuring things like DKIM as well.