Smartphone Hardening

A smartphone like the Samsung S4, bought only a few years ago, will most probably run Android 4.4.x “KitKat” (or 5.x if upgraded), as this is the stock ROM it shipped with right after market introduction. New devices are still sold for ~ 150€ running Android 5.x “Lollipop”, which is nearly equally old. I already flashed CyanogenMod 11 back then to get more control over the device along w/ root access, which enabled me to configure netfilter and install VPN S/W.

But if you follow the Android OS version history it becomes immediately clear that – as the ppl at LineageOS state – 7 out of 10 people run outdated operating systems on their phones. This is a matter of upgrading your device, and that is exactly what I just did. It involved testing lots of different ROMs and Android versions, which I’m going to skip in this post.

The steps to upgrade an S4 LTE (official release date May 2013) from 4.4.x to a fairly current and rooted 8.1 “Oreo” are as follows, if you reduce them to the minimum and exclude all the time spent on testing and research:

Step 1: Use heimdall to flash the TWRP recovery system onto the device. This can simply be done from the command line after you put the phone into Download Mode (by pressing VolumeDown + Home + Power while turning the phone on):

sudo heimdall flash --verbose --RECOVERY recovery.img

Step 2: Use heimdall to flash an updated baseband firmware containing an updated kernel and phone/modem related firmware. I prefer the GUI for that step as it gives a far better overview of what we are doing.

This is not as hard as it looks: after you have downloaded the .tar file, extract it to a temp folder and see which files it contains. Afterwards, use heimdall to download the device’s partition information table (PIT). The next thing to do is to select the PIT file, then hit the “Add” button and select each partition and its corresponding file from the folder you extracted the .tar file to, select “No Reboot” and “Resume” and finally hit the start button.
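
For reference, the command line equivalent of fetching the PIT is a one-liner (the output filename is arbitrary):

sudo heimdall download-pit --output device.pit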

Step 3: Flash a new ROM onto the device via TWRP. Boot the device by pressing VolumeUp + Home + Power to enter its recovery mode. From there, select the relevant files in the right order (and compare their checksums), which in my case were:

lineage-15.1-20180915-UNOFFICIAL-ks01ltexx.zip d3213c4895e2565ee3a7f3dd0d47aedcbe9f621eb8f89f9c51351d92573ae5dd
addonsu-15.1-arm-signed.zip  b5cc465abb3d9b7ad0177e74693e1bbd085775fd38808f640be537e8dcd1a3e8
open_gapps-arm-8.1-nano-20181013.zip  e544ad0aea8702d73f2b2451e42c83cb96157881ce7879dcdea11e2bb4835718

It appears to me that it is easily possible – even by means of only using freely available S/W – to update all those horribly insecure smartphones out there, and it is far easier to achieve than back in the days. So – I ask myself – why is there no public service offered by the shop you bought your phone at that enables non-technical ppl to get this done, eradicating that bad thing called planned obsolescence?

Addendum: Upgraded a SM-T585 tablet (2016) from stock Android 6.0 to LineageOS 15.1 / Android 8.1 as well (search for “sm-t585” or “gtaxllte” for the relevant TWRP and LineageOS images):

sudo heimdall flash --verbose --RECOVERY recovery.img
Initialising connection...
Detecting device...
      Manufacturer: "SAMSUNG"
           Product: "Gadget Serial"
...
100%
RECOVERY upload successful
Ending session...
Rebooting device...
Releasing device interface...

It is interesting to note that this time the device itself does not really get identified. Last but not least: do not forget to create and redundantly store backups of the device(s) when finished w/ configuration et al.

Addendum 2: Doing the same for an S4 mini LTE a.k.a. GT-I9195i a.k.a. serranovelte (official release date June 2015) running stock Android 4.4.4. TWRP is already flashed; it is important to note that heimdall v1.4.2 – as for the two previous devices – has to be built from source to really work:

git clone https://gitlab.com/BenjaminDobell/Heimdall.git
cd Heimdall
cmake . && make && sudo make install

Remember to install the dependencies (like libusb-dev, libqt5 etc.) mentioned in the cmake warnings / errors; then it builds w/o error and flashes the device successfully. Flashing the LineageOS 15.1 image now is only a matter of copying a ZIP to SD or USB-OTG and booting the device into recovery.
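
Regarding the dependencies mentioned above: on a Debian-based build host, installing the usual suspects beforehand might look s/t like this (a rough sketch; exact package names may differ per release):

sudo apt-get install build-essential cmake zlib1g-dev libusb-1.0-0-dev qtbase5-dev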

L3 Hardening: GWx DDoS Mitigation

In the newer ages of the internet, denial-of-service attacks (DoS), their distributed variants (DDoS) and the newest reflected species (DrDoS/rDDoS) took, take and will take place increasingly often. To explain this very quickly: a Denial-of-Service (DoS) attack takes place when a single host attacks another host over the network. Distributed Denial of Service (DDoS) means that lots of often geographically dispersed aggressor hosts conduct the attack (trinoo, tfn2k and stacheldraht were famous tools for that purpose in 2001). As you can imagine, the first DDoS attacks were pretty spectacular b/c of the bandwidth achieved. Afterwards, more sophisticated Layer 7 (application layer) attacks were developed, then reflected attacks, and finally amplification came into play (see Wikipedia for more details).

All these attacks are not only proof of weaknesses and/or design errors in the underlying internet protocols or the network service daemons implementing them. They also depict their potential power, as such attacks can be an equally handy and efficient tool for governmental entities and/or their military executive branches that have an interest in e.g. wreaking havoc on a country’s essential infrastructure. On an even more sophisticated level, such attacks may be conducted as part of a larger operation w/ the intent to ultimately spoof, intercept or take over certain communications to and from target host(s) or network(s). The much hyped term “cyberwar” comes to mind, accompanied by a bitter taste of being instrumented by the military-industrial complex to justify questionable regulations and defense budget extensions to “make the internet a safer place”.

Basic Mitigation Theory

If we look into nature, we see that e.g. a river is able to transport certain amounts of water, but when a flood happens b/c of heavy rainfall (a.k.a. a distributed denial of service taking place), the original riverbed will be too small to carry all the water, which ultimately finds its own ways, forming and rearranging the surrounding landscape and whatever lies in its path. Now, if we look at that on a larger scale, a single river is most of the time only one vein of a certain area’s water transportation system, and if floods happen more often, new smaller rivers might be formed to fulfill the need for larger overall capacity. The more rivers there are, the more water can and eventually will be transported w/o the harsh effects of the previous flood. So, a more complex and dynamic river system is potentially able to fully compensate the initial problem. This split-up principle can also be applied during the mitigation of a large-scale DoS, DDoS or DrDoS/rDDoS attack, subsequently described at a basic technical level.

Technical GWx Principle

Each of the GWx systems is configured to forward and/or proxy packets for given services to the real IP of the productive server. This can be achieved by implementing packet filter or routing rules on incoming layer 3 IP traffic, or by configurations that implement a dedicated proxy / load balancer on the application layer.
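
As a rough layer 3 sketch (all addresses are placeholders, not taken from a real setup), the forwarding part on a GWx box could look like this, assuming the productive server sits at 203.0.113.10:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 203.0.113.10:443
iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 443 -j MASQUERADE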

If a network or host has, or itself acts as, a single gateway, it can be flooded as soon as the amount of data reaches its maximum bandwidth capacity. So, a 1 Gbit DDoS attack will most probably fully saturate and thus take down a system connected via a single 1 Gbit link @ GW0. But if we implement a second, geographically distant GW1 w/ the same link speed and use round robin DNS to evenly spread the requests to both gateways, a 1 Gbit attack can no longer fully saturate the bandwidth, as each of the GWx systems will only receive its 50% share of it. A GWx cluster consisting of 3 systems reduces that to even shares of 33:33:33 percent, 4 systems to 25:25:25:25 and so on: x systems = overall bandwidth/x per system. You see that this approach comes w/ built-in scalability and is ready to be expanded in realtime just by adding more GWx to the cluster and its underlying round robin DNS configuration.
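
A minimal round robin DNS sketch for a two-gateway cluster (zone file syntax, placeholder addresses) could look like:

; both records are handed out in rotating order, spreading requests across GW0 and GW1
www   300   IN   A   203.0.113.10
www   300   IN   A   198.51.100.20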

GWx Hardening

As each and every GWx system will be directly exposed to attack traffic, it should be hardened thoroughly on host and network level. To name only a few options: implement basic packet filter rules to drop known-bad, unneeded traffic; add more sophisticated measures like blocking, delaying or rate-limiting hosts that send too many requests in a certain timespan; or deploy a mechanism to filter out brute-force attacks against certain services or webpages.
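
As a hedged example – the parameters are purely illustrative – netfilter’s recent module can be used to drop hosts that open too many new connections to a service within a certain timespan:

iptables -A INPUT -p tcp --dport 443 -m state --state NEW -m recent --set --name HTTPSLIMIT
iptables -A INPUT -p tcp --dport 443 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTPSLIMIT -j DROP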

Extended host and network monitoring also makes sense here, but may heavily depend on your research capabilities or your intention to analyze and further develop your mitigative skillset. Security is a process, and should neither be seen as, nor advertised and marketed as a snake-oilish product.

Last but not least, it is of course crucial to keep the real IP secret and to also deploy packet filtering there, allowing inbound traffic to the protected services only from the GWx boxes.
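
A minimal packet filter sketch for the protected backend itself (again w/ placeholder GWx addresses) could be:

iptables -A INPUT -p tcp --dport 443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -s 198.51.100.20 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP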

Practical Insights and Perspectives

Having dealt w/ 30+ large scale attacks (that means at least hundreds of megabit up to a few gigabit) in the last two years alone, I observed that they all shared the following characteristics (4x GWx, 2 providers, 4 DCs):

  • overall attack duration mostly only a few minutes
  • usually shifted by a few minutes
  • maximum + overall attack bandwidth limited
  • attacker unable to fully disrupt GWx protected services ever since

As DDoS attacks and certain questionable mitigation techniques (as opposed to lotek, simple, functional and achievable ones) have recently become a lucrative business model, the “customer” (or rather attacker) most probably pays for a certain package that seems to limit him to one target IP at a time and of course to a limited bandwidth. Staying rather stealthy over the long term also seems to be a plausible demand for the DDoS provider on the one hand as well as its “customer” on the other, so the average attack will take place mostly during high-load periods and last rather short but occur often, so that fully legitimate clients get really frustrated.

Generally speaking, and if we leave out the fact that core network providers are also able to filter e.g. using BGP, one efficient way to mitigate DoS, DDoS and DrDoS/rDDoS attacks is to form a cyberarmy of GWx machines, geographically spread all over the world and using different providers and physical datacenters – a technique similarly deployed by the circumventive/anti-censorship Tor network. But the GWx cyberarmy – in contrast to botnets – does not have to consist of hundreds or thousands of machines; we only need a handful of high bandwidth servers, ideally carefully chosen dedicated root servers, optionally already DDoS protected in their own network.

It could also make sense to have a variable list of GWx systems that could change IPs or even providers every few months (e.g. if the monitoring shows that certain gateways are attacked more often and w/ more bandwidth). In the end, the efficiency of network offense as well as network defense heavily depends on the skillset and creativity of the red and the blue team respectively. Variability and flexibility have always been and always will be an essential part on the road to success, be it for natural species or for survival in a clearly overhyped but nonetheless unambiguously fought cyberwar. From my personal experience, and if you generally look into the successful spread of lots of things, be it historically relevant inventions or open source software, simplicity is often the key element of consecutive efficiency and widespread usage.

L7 Hardening: Anti-Bruteforce

No matter which services you run – it will not take long until somebody or something starts bruteforcing them. Instead of manually constructing a network-based mechanism like netfilter string matching combined w/ ipt_recent, it probably makes sense to use what we already have and which does the same: fail2ban.

So, as a simple example, let’s deal w/ WordPress login bruteforcing. If we look into the server logfiles, the relevant entries will contain:

"POST /wp-login.php HTTP/1.1"

So now simply extend fail2ban to include that by first creating

/etc/fail2ban/filter.d/wp-auth.conf

and filling that w/ the following if it fits your site’s structure:

[Definition]
 failregex = ^<HOST> .* "POST /wp-login.php
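
Before activating the jail, the filter can be tested against the live logfile w/ fail2ban-regex, e.g.:

fail2ban-regex /var/www/log/error.log /etc/fail2ban/filter.d/wp-auth.conf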

Now just add the new configuration to the (hopefully) already existing

/etc/fail2ban/jail.local

by adding

[wp-auth]
 enabled = true
 filter = wp-auth
 action = iptables-multiport[name=wp-auth, port="http,https"]
 logpath = /var/www/log/error.log
 bantime = 1200
 maxretry = 5

If implemented properly, we just need to restart fail2ban, and it should mention the new jail in its log:

2017-10-12 17:39:16,554 fail2ban.jail [7910]: INFO Creating new jail 'wp-auth'
2017-10-12 17:39:16,554 fail2ban.jail [7910]: INFO Jail 'wp-auth' uses pyinotify
2017-10-12 17:39:16,593 fail2ban.jail [7910]: INFO Jail 'wp-auth' started

If the underlying principles are well understood, protecting other – not necessarily web-based – services should not be a hard task, as the sketch below shows.
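
As an example, protecting SSH w/ the stock sshd filter that ships w/ fail2ban boils down to another short stanza in jail.local (logpath and limits are just a sketch):

[sshd]
 enabled = true
 filter = sshd
 action = iptables-multiport[name=sshd, port="ssh"]
 logpath = /var/log/auth.log
 bantime = 1200
 maxretry = 5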

Basic IP Recon

Out of curiosity, it might be quite interesting to find out where the logged WordPress login bruteforce attacks (in my case, about 150 in only a few hours) originate from. So, let’s first write a very basic script to extract the relevant data from our fail2ban.log:

#!/bin/bash
# print all fail2ban log entries for the wp-auth jail (bans as well as unbans)
grep 'WARNING \[wp-auth\]' /var/log/fail2ban.log
exit 0

This will print out all the bans as well as the unbans, which take place 20 minutes later if the configuration is left in its default state. Now, let’s process that data a bit more to first reduce it to the relevant content, eliminate duplicate entries, and finally try to look up the IP addresses involved. A simple approach could look like:

sudo ./WPBF.sh | grep Ban | cut -d " " -f 7 | sort | uniq | nslookup | grep "name ="

and gives us quite some valid information. Mostly originating from .ru and .cn, perhaps some .jp and .tr, this is quite the usual background noise, of which only 4 entries look a bit uncommon:

178.135.28.218.in-addr.arpa name = pc0.zz.ha.cn.
213.171.28.218.in-addr.arpa name = pc0.zz.ha.cn.
99.76.28.218.in-addr.arpa name = pc0.zz.ha.cn.
189.224.30.60.in-addr.arpa name = no-data.

Let’s check out the WHOIS information for each of them:

inetnum: 218.28.135.176 - 218.28.135.191
netname: HA-ZZ-USAT-LTD
country: CN
descr: Henan University Science And Technology Limited Company,
descr: No 7 Dongqing Road,
descr: Zhengzhou City,
descr: Henan Province.

Okay, a university network. Like back in the old days 🙂 The next one:

inetnum: 218.28.171.208 - 218.28.171.215
netname: HA-ZZ-HUANJBHJ-GOV
country: CN
descr: HUANJBHJ Gov,
descr: SSDDYEBH,
descr: ZhengZhou City,
descr: Henan Provice.

Hmm, the government….and the last one

inetnum: 218.28.0.0 - 218.29.255.255
netname: UNICOM-HA
country: CN
descr: China Unicom Henan province network
descr: China Unicom

is, at least in my experience, seen very often in portscans, spam, or bruteforce attacks. Now to the last highlighted one:

inetnum: 60.30.224.0 - 60.30.239.255
netname: CNPC-TJ
country: CN
descr: CNPC Dagang Oilfield Communication Corporation

Never heard of something like this before – it could be interesting to find out whether that is really some kind of measuring device or a “normal” PC. Since it has 1723/tcp open, I suspect the former. Also, the question always remains: are these attacks really originating from these addresses, or are they just backdoored boxes?

If we do a quick portscan, all four IPs got one thing in common:

9999/tcp open abyss syn-ack

and a quick search reveals that it might be a remote access trojan called “The Prayer”.

To check all hosts for that backdoor, we can simply do s/t like

for i in `sudo ./WPBF.sh | grep Ban | cut -d " " -f 7 | sort | uniq ` ; do nmap -p 9999 $i --host-timeout=1 | grep open -B 3; done

L7 Hardening: Security Headers

There are quite some directives at hand that can be added to your webserver configuration to achieve hardening against many attacks. Most websites – even those that really should care – do not, and thus receive a grade F when being checked by schd.io.

It is pretty straightforward to change that completely. In nginx 1.6.2, just edit the site’s config file and insert:

add_header Strict-Transport-Security "max-age=31536000";
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Xss-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";

The equivalent in nginx 1.8+ would be:

add_header Strict-Transport-Security "max-age=31536000";
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Xss-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;

This already gives us a grade C, but there is another powerful mechanism: the Content Security Policy (CSP) restricts the abilities of the browser to those predefined by you, most importantly by only allowing certain servers to serve certain elements of the site’s content in the first place.

So let’s take a closer look at this basic rule:

add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' ; style-src 'self' 'unsafe-inline' ";

This is restrictive and only works for static websites that do not pull images, scripts or fonts from any other sources.

As in most if not all cases when dealing w/ security, all this also involves the well known, eternal conflict: security vs. usability.

For example, webmail applications and underlying plugins often include inline javascript and thus need slightly fewer restrictions, expressed by

add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' ";

WordPress for example would require restrictions similar to

add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com data:";

The most efficient way to implement a valid CSP for your website and/or application is to open the developer tools (F12) in the browser of your choice and check the console for relevant violation messages, while at the same time adjusting the parameters of your CSP to fit the actual operational needs.
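
During that testing phase, nginx can also announce the candidate policy in report-only mode, so violations show up in the console without actually breaking the site (a sketch for nginx 1.8+):

add_header Content-Security-Policy-Report-Only "default-src 'self'; img-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'" always;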

AVSx: Hardening the SPAM Perimeter

In the course of a recent AVSx rollout, we had the opportunity to mitigate the serious SPAM problem of a customer. This included analyzing the specific situation, considering different approaches to eliminate or at least minimize the known issues, and also involved a detailed measurement of the spam statistics over time, ultimately leading to a short whitepaper which is yet to be published. One important element of our first approach is to not depend too much on external (or internal) servers at runtime, so that the solution is pretty much self-contained and works autonomously.

Pre-Rollout Situation

The customer receives thousands of spam mails per day, many of which get forwarded to the internal mailserver. Since no catchall account is defined on that internal mailserver (the sub-contractor says that this is not possible), non-delivery reports (NDR) are sent out. And lots of them. This has also led to the problem that the customer itself even got blacklisted in recent months, thanks to this well-known backscatter problem. Also, there was heavy usage of black- and whitelists, which may have influenced the overall situation in a negative way as well, since that badly interferes w/ Bayes filtering and the overall learning process if not handled carefully.

Improvement v1: Static LDAP

There are multiple ways to reduce the amount of bad e-mail, but the most common overall practice is to make use of spamassassin and clamav. Having these tools at hand together w/ a good mailserver S/W like postfix, things can be tuned in a very efficient way.

Postfix hardening already brings quite some mechanisms. Using block- and blacklists is another approach, but it should not be used from the start, because the spamfilter might not get trained if nearly no mail reaches the server itself. Imagine a non-trained spamfilter the moment the blacklist becomes unreachable. In my eyes, it is crucial to closely survey the spam occurrence and scores as a basis for statistics. This again serves as a base for the definition of the SPAM tag- and kill-levels, which should normally be set to 4.5 and 16 respectively. When analyzing the score distribution, we can identify a peak which should ideally be not too far left of the kill-level, or, on a well trained system, even right of the kill-level, which means that most mails already get discarded.
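
If – and this is an assumption, since the exact glue is not covered here – amavisd-new sits between postfix and spamassassin, those two levels would typically be defined like this:

# assumption: amavisd-new is the glue between postfix and spamassassin
$sa_tag2_level_deflt = 4.5;   # mail at or above this score gets tagged as spam
$sa_kill_level_deflt = 16;    # mail at or above this score triggers the evasive action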

But what about all the NDRs? We can query the internal LDAP server to get the information we need. So, at first, installing ldapvi makes sense. Then, if we have valid credentials, we can simply do

ldapvi -b "ou=xxx, dc=xxx, dc=xx" -h v.w.x.y -D you@yourdomain > ALLDATA.ldap.txt

and then extract a list of valid e-mail addresses from the dump by doing

grep "yourdomain" ALLDATA.ldap.txt | grep mail | cut -d ":" -f 2 | cut -d " " -f 2 | tee -a /etc/postfix/relay_recipients

This list of e-mail addresses is the base for the definition of a relay_recipients file; postfix will then accept mail only if the corresponding address is also found in the LDAP directory.

This eliminates the complete problematic NDR situation previously found, b/c the internal mailserver normally never has to report that mail was sent to a user w/o a corresponding mail: entry in the LDAP directory dump. In a static LDAP scenario, an NDR would still be produced if an account gets removed or the internal mailserver itself has problems. One could query the LDAP DB once a month and recreate the relay_recipients file from that, but in general I guess manually changing/adding/removing users in the file makes more sense – and again, we do not want to depend too heavily on other servers.

To use the relay_recipients file, the following parameter should be set in the postfix config

d1g@isp:~$ sudo postconf relay_recipient_maps
relay_recipient_maps = hash:/etc/postfix/relay_recipients

The maintenance of the valid recipients – if needed at all – could optionally be done by the local admin if we have a (cron) script in place that rebuilds the DB regularly by doing e.g.

postmap -v /etc/postfix/relay_recipients
postmap: name_mask: all
postmap: inet_addr_local: configured 2 IPv4 addresses
postmap: inet_addr_local: configured 2 IPv6 addresses
postmap: set_eugid: euid 1000 egid 1000
postmap: open hash relay_recipients
postmap: Compiled against Berkeley DB: 5.3.28?
postmap: Run-time linked against Berkeley DB: 5.3.28?

In this static LDAP setup, the mailserver accepts mail only for previously defined valid users, and simply by this already rejects a very high percentage of all spam mails, especially all those that produced the problematic NDRs.

Improvement v2: Dynamic Lookups

While solely relying on dynamic lookups might get you into trouble as described above, doing a dynamic lookup only in cases where the recipient is not found statically makes sense, and also propagates any change made in the LDAP directory immediately to the outside world. In order to enable the dynamic lookup feature, we have to install postfix-ldap first and then define

relay_recipient_maps = hash:/etc/postfix/relay_recipients, ldap:/etc/postfix/ldap-aliases.cf

The tricky part is the ldap-aliases.cf file itself, as in our case it had to contain

server_host = x.x.x.x
search_base = ou=xxx, dc=xxx, dc=xx
version = 3
timeout = 10
leaf_result_attribute = mail
bind_dn = user@domain
bind_pw = userpassword
query_filter = (mail=%s) 
result_attribute = mail, addressToForward

Afterwards, restart postfix, and/or optionally test the setup by doing

postmap -vq user@domain ldap:/etc/postfix/ldap-aliases.cf

So, we now have both a static and dynamic mechanism in place, which makes the system rather failsafe and ready for immediate LDAP directory change propagation.

Last but not least: keep in mind – if a valid user is listed in LDAP, but the corresponding mailbox is not available for whatever reason on the local mailserver, non-delivery reports (NDR) will still be sent out!

Improvement v3: Query Proxy Addresses

In some cases, the ldap query has to be adjusted to the given scenario:

server_host = x.x.x.x
search_base = ou=xxx, dc=xxx, dc=xx
version = 3
timeout = 10
leaf_result_attribute = mail
bind_dn = user@domain
bind_pw = userpassword
query_filter = (proxyAddresses=smtp:%s) 
result_attribute = mail, addressToForward

After a restart of postfix, the mechanism works as intended.

OpenPGP Key Recreation and Revocation

Despite nearly having forgotten to blog about it, the time has come to get myself a stronger OpenPGP keypair. But what about the folks I already established a secure connection with using the old key 0x800e21f5, and what about the rest of the internet? It’s not as complicated as one might think.

1. Key Creation

Key creation is very simple if you use GnuPG on Linux:

0x220b:~$ gpg --gen-key

You can leave the default options (RSA/RSA, 4096 bit, never expires) until it comes to name, e-mail and comment, where you should fill in the personal data associated w/ the use of the key. In most cases, one e-mail address is not enough, but you can simply add another one like this:

0x220b:~$ gpg --edit-key 6C71D217
gpg> showpref
[ultimate] (1). Peter Ohm (NetworkSEC/NWSEC) <p.ohm@networksec.de>
 Cipher: AES256, AES192, AES, CAST5, 3DES
 Digest: SHA256, SHA1, SHA384, SHA512, SHA224
 Compression: ZLIB, BZIP2, ZIP, Uncompressed
 Features: MDC, Keyserver no-modify
gpg> adduid

Now enter the other e-mail address and a relevant comment if you wish. So, we now have a fresh key – but what about the old one(s)? At first, we should use them to sign the new one:

0x220b:~$ gpg --default-key 800e21f5 --sign-key 6C71D217
0x220b:~$ gpg --default-key 7BB7A759 --sign-key 6C71D217

and then finally give everybody access to our new public key by:

0x220b:~$ gpg --keyserver pgp.mit.edu --send-key 6C71D217

2. Key Revocation

Okay, now everybody needs to be able to see that the old keys are not used any longer. This can easily be achieved by first creating a revocation certificate for each of them, then importing it into our own keyring and finally pushing the revoked keys to the keyservers. Let’s do it w/ a small shell script and gpg2:

#!/bin/bash
# for each old key: create a revocation certificate, import it, publish the revoked key
for i in 7bb7a759 800e21f5
do
 gpg2 --output revoke.asc --gen-revoke $i
 gpg2 --import revoke.asc
 gpg2 --keyserver pgp.mit.edu --send-keys $i
 rm revoke.asc   # avoid gpg2's overwrite prompt on the next iteration
done

3. More Key Distribution

I also recommend sending your new public key to everybody you have already set up an encrypted communications channel with, as they will be the only ones possibly still using the old key material (most OpenPGP clients refuse to use revoked keys for encryption), and it is especially them who need to be informed about any changes.

So, if anybody interested in establishing a secure communications channel did not yet get your new public key, all that remains for them to do is:

gpg --keyserver pgp.mit.edu --recv-keys 6C71D217
gpg --keyserver pgp.mit.edu --refresh-keys

…and don’t forget to attach your own public key if it’s a first-time contact.
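
Exporting the public key to an ASCII-armored file for attaching it is a one-liner (the filename is arbitrary):

gpg --armor --export 6C71D217 > pubkey_6C71D217.asc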

TLS Hardening: Postfix

Dealing w/ TLS in postfix is straightforward, but there are too many options to list them all. As a prerequisite, a researcher may want to be able to look at TLS information in more detail – in the logs of the server as well as in the headers of the mail itself.

This can be achieved by setting

smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
smtp_tls_loglevel = 1

in /etc/postfix/main.cf. Furthermore, if not already set

smtpd_use_tls = yes
smtp_use_tls = yes
smtpd_tls_security_level = may
lmtp_tls_mandatory_ciphers = high
smtp_tls_mandatory_ciphers = high
smtpd_tls_mandatory_ciphers = high
smtp_tls_security_level = may
smtpd_tls_mandatory_exclude_ciphers = MD5, DES, ADH, RC4, PSK, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = MD5, DES, ADH, RC4, PSK, SRP, 3DES, eNULL
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
smtpd_tls_protocols = !SSLv2, !SSLv3

also makes sense, but still gives us a C+ rating. Folks at Qualys recommend

tls_export_cipherlist = aNULL:-aNULL:ALL:+RC4:@STRENGTH
tls_high_cipherlist = aNULL:-aNULL:ALL:!EXPORT:!LOW:!MEDIUM:+RC4:@STRENGTH
tls_legacy_public_key_fingerprints = no
tls_low_cipherlist = aNULL:-aNULL:ALL:!EXPORT:+RC4:@STRENGTH
tls_medium_cipherlist = aNULL:-aNULL:ALL:!EXPORT:!LOW:+RC4:@STRENGTH
tls_null_cipherlist = eNULL:!aNULL

but this also gives us C+. Doing more research, we finally get a B – still w/ a self-signed cert – by using

tls_export_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_high_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_legacy_public_key_fingerprints = no
tls_low_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_medium_cipherlist = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!MEDIUM:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
tls_null_cipherlist = eNULL:!aNULL

Together w/ a letsencrypt cert, we finally receive an A grade and full PCI-DSS compliance. Mission accomplished!
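
For the sake of completeness, pointing postfix at the letsencrypt material is a two-liner in main.cf (assuming certbot’s default path layout and a placeholder domain):

smtpd_tls_cert_file = /etc/letsencrypt/live/example.org/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/example.org/privkey.pem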

P.S.: If you are interested in a complete TLSv1.2 cipherlist, just issue

openssl ciphers TLSv1.2
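
And to see what a given remote MTA actually negotiates, a quick manual probe also helps (the hostname is a placeholder):

openssl s_client -connect mail.example.org:25 -starttls smtp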

Addendum: If you are on the lookout for governments deploying weak encryption, then this is for you:

May 5 01:44:08 isp postfix/smtpd[24611]: connect from correo.palmira.gov.co[190.144.251.105]
May 5 01:44:09 isp postfix/smtpd[24611]: SSL_accept error from correo.palmira.gov.co[190.144.251.105]: -1
May 5 01:44:09 isp postfix/smtpd[24611]: warning: TLS library problem: error:1408A0C1:SSL routines:SSL3_GET_CLIENT_HELLO:no shared cipher:s3_srvr.c:1440:
May 5 01:44:09 isp postfix/smtpd[24611]: lost connection after STARTTLS from correo.palmira.gov.co[190.144.251.105]
May 5 01:44:09 isp postfix/smtpd[24611]: disconnect from correo.palmira.gov.co[190.144.251.105]

A small snippet to look for more of that kind:

#!/bin/bash
#
# See which clients try to connect w/ old and insecure SSL
#
grep "accept error" $1 | cut -d ":" -f 4 | sort | uniq
exit 0

Backup 2.0

Since we are currently on a maintenance and backup spree, trouble arose when dealing w/ the gluster distributed-replicated setup involving four rootservers. After ranting a bit about the incompatibility of gluster between e.g. wheezy and jessie, we received a tip (thanks rpw!) and decided to start using git-annex from now on.

Note: To run a really fresh version (5.2), using the neurodebian repo is a good option.

Create Backup Data

As with every backup, it is ideal to have a spare b0x to host all the initial backup data and to function as an initial place to sort and clean up stuff. It is also best practice to always keep local backups until we really have a fully functioning, redundant backup. So, in our case, we get most of the backup data via rsync from two other hosts, and some more via scp. It is also a good idea to create a listing of all included files for later comparison, e.g. by doing

tree -ah | tee -a TREE

Initialize local repo

When we have all the data we want in place, we can continue by

git init
Initialized empty Git repository in /home/d1g/ANNEX/.git/
git annex init "NWS4"
init NWS4 ok
(recording state in git...)
git annex direct

Now, files should be added:

git annex add .   # takes quite some time to create the initial checksums
git commit -m "Initial git-annex NWSx REPO"

Initialize remote repo

On all of the remote nodes, the preparation should be something like

mkdir ANNEX
cd ANNEX
git init
git annex init "NWS3"
git annex direct

Add remote repo

git remote add nws4 ssh://nws4/home/d1g/ANNEX

Synchronize data

Now we can start to synchronize the data in between all involved repos, first copying from the remote location:

git annex sync --content

or, if we want to be more specific:

git annex copy VMB2015 --from nws4

Keep data up to date

git annex watch

If annex watch is not running, and if one host now adds a new file via

git annex add .
git annex sync --content

we can use nws4 to “spread” it to all the other hosts automatically – after having added them all as remotes on nws4 – by issuing the exact same commands there, as sketched below.
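
A minimal sketch of that on nws4 (assuming an additional node called nws3, set up as shown above):

git remote add nws3 ssh://nws3/home/d1g/ANNEX
git annex sync --content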

For common things that could possibly go wrong, this page may contain some useful information.

Malware Deobfuscation

If a PHP based website installation like WordPress starts misbehaving e.g. by sending out lots of typical spam mails, some quick analysis w/ a simple and manual deobfuscation approach may make sense.

1. Correlation

At first, some correlation of what exactly was going on was required. Tailing the webserver’s logfiles together w/ running ngrep showed a clear connection: whenever a certain URL is called, a new spike in the mailqueue happens. Also, the ngrep data reveals an interesting string:

YToxOntzOjE6InIiO2E6NDp7czoxOiJ0IjtpOjE7czoxOiJlIjtpOjA7czoxOiJnIjtpOjMwO3M6MToiYiI7aTowO319

This looks like some base64 to me. Decoding this results in

a:1:{s:1:"r";a:4:{s:1:"t";i:1;s:1:"e";i:0;s:1:"g";i:30;s:1:"b";i:0;}

which potentially looks like some C&C data. The facts so far: an attacker uploads a PHP file and regularly calls that file to send out spam mails.

2. Code Analysis

When checking out the PHP code of the malicious file – only two scanners @ virustotal detect it, as CPR17F2.Webshell and PHP.Packed.11 respectively – it becomes clear that the code is obfuscated not only w/ base64, but also in some proprietary way. Most of the file’s content consists of lines like:

'aAwukaYdS7yQ0b9uFTYCvTpuSJyX7B97oGwuJ7z5D04QK8QK2lpYPqyX7jpvoswuJayQ0DV'.

Function names are random, and the code is stuffed into obfuscated and encoded strings. However, at the beginning of the file we get some hints, and at the end there is something like a key scheme for all this:

$felhrwy = Array('1'=>'m', '0'=>'g', '3'=>'Y', '2'=>'B', '5'=>'y', '4'=>'M', '7'=>'l', '6'=>'Q', '9'=>'b', '8'=>'7', 'A'=>'A', 'C'=>'2', 'B'=>'v', 'E'=>'j', 'D'=>'w', 'G'=>'0', 'F'=>'F', 'I'=>'O', 'H'=>'H', 'K'=>'k', 'J'=>'1', 'M'=>'T', 'L'=>'U', 'O'=>'x', 'N'=>'q', 'Q'=>'C', 'P'=>'R', 'S'=>'N', 'R'=>'o', 'U'=>'6', 'T'=>'4', 'W'=>'P', 'V'=>'K', 'Y'=>'X', 'X'=>'G', 'Z'=>'8', 'a'=>'p', 'c'=>'S', 'b'=>'n', 'e'=>'L', 'd'=>'3', 'g'=>'E', 'f'=>'r', 'i'=>'i', 'h'=>'I', 'k'=>'5', 'j'=>'t', 'm'=>'h', 'l'=>'z', 'o'=>'9', 'n'=>'e', 'q'=>'f', 'p'=>'Z', 's'=>'s', 'r'=>'u', 'u'=>'W', 't'=>'c', 'w'=>'a', 'v'=>'V', 'y'=>'d', 'x'=>'D', 'z'=>'J');

3. Code Deobfuscation – partial only

Okay, we got that list, so we can use tools like sed to change the data according to the rules of the author. After having compiled a list of substitutions that looks like

s/1/m/g

and so on, a small shellscript (that I call BRAINFUCK.sh intentionally) is needed to do what we want, containing:

#!/bin/bash
# apply each sed substitution from ARRAY in sequence: FILE1 -> FILE2 -> FILE3 ...
j=1
k=2
for i in `cat ARRAY`
 do
  echo $i $j $k
  sed $i FILE$j > FILE$k
  j=`expr $j + 1`
  k=`expr $k + 1`
 done

Later on, it becomes clear that this is not the quick and dirty way, as that would rather have been

sed -f ARRAY $1 > $2

The output in both cases becomes a lot clearer, but is still heavily obfuscated, and fiddling around w/ all the text manipulation utilities is a very abstract thing for sure. Also, the techniques used so far do not deal w/ things like CRLF or “\r\n” and so on.

4. Code Deobfuscation – SUCCESS

It looks a lot more reasonable to use the code that the attacker already gave us to deobfuscate and decode the whole PHP file. So we take a closer look at the very last function that does all that:

eval(xlvgapr($wufa, $felhrwy));?>

In short, this runs the deobfuscated and decoded code directly on the machine the file is executed on. All we gotta do is not run the code, but print it, so all it takes is

print(xlvgapr($wufa, $felhrwy));?>

and a command like

php MODIFIED_inc.php > DECODED.php

The resulting DECODED.php file is ~ 108kb in size (vs. ~ 152kb originally) and seems to heavily borrow code from phpmailer. What we have now is the source code of a more or less complete mail-sending framework, featuring things like DKIM as well.

FreeBSD as KVM guest

FreeBSD is a great O/S in itself, but let’s just run it as a KVM-hypervised instance for testing purposes, as described in the following quick introduction.

1. Base Install

At first, we create an image by e.g.

qemu-img create -f raw NWS3-FBSD10 64G

Using VMM, the VM can be set up easily; choosing linux/wheezy as the O/S will activate the virtio drivers, which is quite important. For performance reasons, the VM should be set to use 4GB of RAM (also to enable zfs preloading) and 4 CPU cores.
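
If you prefer the command line over VMM, a roughly equivalent virt-install invocation might look like this (an untested sketch; disk path and ISO name are assumptions):

virt-install --name NWS3-FBSD10 --ram 4096 --vcpus 4 \
 --disk path=/var/lib/libvirt/images/NWS3-FBSD10,format=raw,bus=virtio \
 --network bridge=br0,model=virtio \
 --cdrom /var/lib/libvirt/images/FreeBSD-10.3-RELEASE-amd64-disc1.iso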

FreeBSD will be installed onto the vtblk device, and it will use vtnet as a 10Gb ethernet adapter. Benchmarking via iperf shows that it comes quite close to the maximum:

[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 9.68 GBytes 8.32 Gbits/sec

2. S/W Installation

After having installed the O/S, a couple of tweaks have to be made to really be able to run and use the KVM guest. First, I always want my tmux, so I build it via:

cd /usr/ports/sysutils/tmux; setenv BATCH yes; time make install

Installing other S/W from binary packages is as simple as e.g.

pkg install pidgin-otr firefox xorg nmap ettercap trafshow ngrep

3. Building a VESA Kernel for Xorg

The window manager should start up after reboot, and we still need a VESA kernel for that, so we edit /etc/rc.conf and add:

gdm_enable="YES"
gnome_enable="YES"
allscreens_flags="MODE_280"

Now it’s time to build a new kernel w/ VESA enabled:

cd /usr/src/sys/amd64/conf
cp GENERIC VESAKERN

Edit VESAKERN and add:

options VESA
options SC_PIXEL_MODE

Then build and install the kernel:

cd /usr/src
time make buildkernel KERNCONF=VESAKERN
time make installkernel KERNCONF=VESAKERN

Gnome needs procfs, so we add the following line to /etc/fstab:

proc /proc procfs rw 0 0

4. Further Tweaks

Now, it is time to halt the VM and do a backup if you wish. I recommend using qemu-img convert (64GB => 8GB) and gpg (8GB => 4.5GB) to accomplish that.
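
A sketch of that conversion and encryption step (filenames as used above):

qemu-img convert -O qcow2 -c NWS3-FBSD10 NWS3-FBSD10.qcow2
gpg -c NWS3-FBSD10.qcow2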

To make the mouse work, it is essential to do the following in VMM:

Add USB Mouse, Remove Tablet, Set VMVGA Graphics

Done! Overall time should be ~ 1hr