Own automatic phishing list and Tor-to-Tor tunneling

Getting your own malware lists

Many extensions like uBlock will provide you with curated lists of malicious URLs and domains. This is easy to use and will stop most bad things from happening. But what if you want updates of your own that don't have to trickle down through the curators and then wait for the periodic list refresh? Is there a way to get recent bad domains if you're not the operator of a large network with many users, and don't run a huge honeypot project?

Yes! Because you can become a large network operator in just a few hours. There are thousands of people who will be happy to send you their traffic. And even though they seem to want better anonymity, in general they don't seem very security conscious. I'm talking about the Tor network, of course, where anyone can become an exit node operator and get a random, possibly very biased, very small chunk of global web traffic.

The setup

What I'm after are the domains, not the URLs at this point. And only those dedicated to tricking users, not the hijacked ones that just happen to host malware. That means I'm really after the DNS requests that users make. Unfortunately Tor tunnels only TCP traffic, not UDP, so there's no way to advertise DNS alone as the exit traffic. Fortunately it does tunnel DNS requests using a Tor-specific channel, and it sends them to the same exit node the rest of the traffic goes to. This is great, because it means I can advertise a port 80 exit and get all the related DNS requests at the same time.

This can be done by adding the following to the exit policy in torrc.

ExitPolicy accept *:80

Now both the DNS requests and the HTTP traffic will be sent to you. The number of requests you receive depends only on the advertised bandwidth. In practice, if you advertise a 1 Mbyte/s link, you can expect ~4000 DNS requests per hour. Not a huge amount, but it is a decent sample.
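
For reference, a minimal torrc sketch of such an exit could look roughly like this - the nickname and contact lines are placeholders, and the final reject keeps every other port closed:

Nickname myexitnode
ContactInfo you@example.com
ORPort 9001
ExitRelay 1
ExitPolicy accept *:80
ExitPolicy reject *:*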

The safety

Now what if you want to reproduce this without actually being responsible for other people's traffic? The Tor project's information is pretty carefree in telling everyone that it's great to run a Tor exit. In practice, though, it can result in your internet access or your server being cancelled, and in potential LEA interactions. So when collecting the data, I didn't really want to send anyone's traffic from my own connection.

To work around that, I decided to send all the received traffic through a safe network instead: one that would still get it to the destination, but would encrypt all the data and not expose the source. Specifically, through the Tor network itself!

The official Tor client doesn’t really allow you to split the exit and the onion router traffic to different interfaces, but that can be fairly easily fixed in the source. With that in place, you can split the exit traffic to a dummy interface - one which is isolated by default.

modprobe dummy
ip addr add 10.0.0.1 dev dummy0

echo "OutboundBindAddressExit 10.0.0.1" >> /etc/tor/torrc

The final part is to redirect all the relevant DNS requests and all the traffic back to the Tor network.

echo "nameserver (own resolver)" >> /etc/tor/resolv.conf
echo "ServerDNSResolvConfFile /etc/tor/resolv.conf" >> /etc/tor/torrc

iptables -t nat -A OUTPUT -s 10.0.0.1/32 -p tcp -m tcp -j DNAT --to-destination 10.0.0.1:9999
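
For the redirect to do anything, something has to listen on 10.0.0.1:9999 and push the connections back into the Tor network. One way to do that - a sketch, not necessarily the exact setup used here - is a transparent-proxy port on the Tor client side:

echo "TransPort 10.0.0.1:9999" >> /etc/tor/torrc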

Quality of the data

Now that I can capture all the data, how can I tell that it's a representative sample of real internet traffic? I can't, not easily anyway. One way is just browsing the list for known names, and more specifically looking at histograms of the usual features. The top looks like this:

311 category.auctions.yahoo.co.jp.
256 ((censored))
189 www.google.com.
185 clients1.google.com.
176 www.google-analytics.com.
173 closedsearch.auctions.yahoo.co.jp.
142 wing-auctions.c.yimg.jp.
135 google.com.
116 fonts.googleapis.com.
110 ocsp.digicert.com.
....

Looks like someone didn't have DNS caching turned on for the Yahoo auctions, or there was some serious interest in scanning them. But otherwise it checks out - with the added irony of Google Analytics being a popular domain accessed from the Tor network.

What about the long tail?

# awk '{print $9}' < domains.list | sort | uniq -c | sort -nr | grep -c ' 1 '
25344
# wc -l domains.list
45468 domains.list

Around half of the domains were resolved only once. Pretty similar to my home network. I'm not going to pretend this is the best possible source of data, but for a proof of concept it's good enough. It's likely to be noisy (cached vs. not cached entries), it will miss a lot of known phishing domains due to adblockers (that's actually good - I'm only going to see what's not filtered yet), and it will have a different profile than the open internet (a number of people on Tor will filter a lot of traffic).

Phishing domains, finally

With all that data, I started by just selecting things that stood out, and then formed a few rules for easy classification of new traffic. The rules are not supposed to be 100% correct; they just add points, like spam filters do.

Whitelist

To quickly reject false positives, I whitelisted some domains. This removes them completely as trusted (from the phishing perspective anyway). Google, Apple, Yahoo, Amazon, and the same names under known country TLDs are dropped.
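
As a sketch, the check can be a simple suffix match against a trusted set; the names below are illustrative, the real list is longer and includes the country-TLD variants:

WHITELIST = {"google.com", "apple.com", "yahoo.co.jp", "amazon.com"}

def is_whitelisted(domain):
    labels = domain.rstrip(".").split(".")
    # trusted if any suffix of the name is an exact whitelist entry
    return any(".".join(labels[i:]) in WHITELIST for i in range(len(labels)))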

Long labels

What really stands out is when a domain consists of more than 3-4 labels. Standard pages usually try to avoid that.

# awk '{print $9}' < domains.list | tr -d 'a-z0-9-_:' | sort | uniq -c
31 .
14239 ..
24255 ...
4896 ....
1534 .....
86 ......
344 .......
11 ........
12 .........
9 ..........
14 ...........
30 ............
6 .............
1 ........................

With winners looking like this:

www.apple.com.(random).(random).(random).(random).(random).(random).isecure.com.my

So let's give 2 points to each domain that is more than 3 labels long.
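
As a sketch in Python, assuming the score is just a running per-domain total that the other rules also add to:

def score_label_count(domain):
    # 2 points for a domain with more than 3 labels (trailing dot ignored)
    labels = domain.rstrip(".").split(".")
    return 2 if len(labels) > 3 else 0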

Hidden TLDs

From the previous example we can see the name tries to impersonate apple.com. I could go with the company list again, but instead I'll check for popular TLDs that appear in the middle of the name: 3 points for each TLD found outside the last 2 labels.
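
A sketch of that check, with a deliberately small TLD list standing in for the real one:

POPULAR_TLDS = {"com", "net", "org", "jp", "my"}  # illustrative subset

def score_hidden_tlds(domain):
    labels = domain.rstrip(".").split(".")
    # 3 points for every popular TLD that shows up outside the last two labels
    return 3 * sum(1 for label in labels[:-2] if label in POPULAR_TLDS)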

L33t sp33k

Another pattern spotted in the entries from the previous list is that some domains use misspellings to avoid easy discovery. For example:

paypai.com-resolvedlimited.com

So the next rule is to replace every "1" and "i" with "l", and every "0" with "o" (none of the known company names happen to include "i"). If a company name then appears, that domain gets 10 points.
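
A sketch of the substitution check; the company list is again just an illustrative subset:

COMPANIES = ("paypal", "google", "apple", "amazon")  # illustrative subset

def score_leet(domain):
    # undo the common substitutions: "1" and "i" become "l", "0" becomes "o"
    normalized = domain.translate(str.maketrans("1i0", "llo"))
    return 10 if any(name in normalized for name in COMPANIES) else 0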

Garbage labels

Finally, some labels are themselves either too long or random. The length is easy: 1 point for each character over 16. The randomness is harder. For the purpose of this exercise, I'll just assume that random means not similar to English-like words - or specifically, that a label contains 2-tuples of letters not found in the local system dictionary. The list of tuples is filtered a bit: by default there are 588 of them, but if you include only those appearing in at least 0.1% of words, it's only 372.

This allows picking out labels like "qkjv4fgxgcrds", which are not very long but are not a real word in any language either. On the other hand it penalises punycode, so domains using it need special treatment.
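
A sketch of both parts, assuming the allowed pairs are extracted once from /usr/share/dict/words and skipping the 0.1% frequency filter; the point value for an unknown pair is my assumption, since the exact number isn't spelled out above:

def load_bigrams(path="/usr/share/dict/words"):
    # collect the 2-letter pairs that occur anywhere in dictionary words
    allowed = set()
    for word in open(path):
        w = word.strip().lower()
        allowed.update(w[i:i+2] for i in range(len(w) - 1) if w[i:i+2].isalpha())
    return allowed

def score_garbage(label, allowed_bigrams):
    score = max(0, len(label) - 16)  # 1 point per character over 16
    pairs = (label[i:i+2] for i in range(len(label) - 1))
    # assumed: 1 point for every letter pair never seen in dictionary words
    score += sum(1 for p in pairs if p.isalpha() and p not in allowed_bigrams)
    return score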

Actual entries and usefulness of the list

In the end I caught a lot of domains which can be easily identified as malicious just from the name. The top scorer, with 158 points, was this ridiculous one:

www1.royalbank.com.ouxjgztigncfxjnczofxvxbpyvdvfs.ouxjgztigncfxjnczofxvxbpyvdvfs.ouxjgztigncfxjnczofxvxbpyvdvfs.ouxjgztigncfxjnczofxvxbpyvdvfs.ouxjgztigncfxjnczofxvxbpyvdvfs.ouxjgztigncfxjnczofxvxbpyvdvfs.youngchristianleadersscholarshipfund.com

But a number of better-crafted domains were also spotted, like:

www.paypal-support-update-your-account-information-security.come4movies.com

In the end, with a managed whitelist, this way of collecting domains is not bad. I managed to find hundreds of phishing domains in a day, although most of them disappeared after a few hours. Some patterns were added to my domain blocker just in case, although I don't think there's anything in there that I wouldn't normally spot anyway. I'm sure that people running honeypots as their day job saw the same URLs, but I'd have hours or days of delay before getting the information from them.

If I were running an actual network edge for nontechnical people and didn't have a better data source, it could be an interesting source (off the main network, of course). But for personal use it's unlikely to be useful. Extracting the rules used here and inserting them into the local DNS resolver would likely be a much easier solution.

Was it useful? BTC: 182DVfre4E7WNk3Qakc4aK7bh4fch51hTY
While you're here, why not check out my project Phishtrack, which will notify you about domains with names similar to your business. Learn about phishing campaigns early.
