I’ve mentioned before how much time I spend investigating spam. It’s allowed me to observe some pretty interesting, sometimes amusing, often annoying, criminal behavior. I also enjoy analyzing Twitter spam, and have built a pretty interesting collection of spammer examples. One of the most common things I see on Twitter, though, is spammers using shortlinks to try to pull off their crime.
Shortlinks (goo.gl, bit.ly, etc.) have made sharing links handy, especially on character-limited communication mediums such as Twitter. Though you don’t see them as often in e-mail spam, shortlinks are a critical component of Twitter spam, and often of Facebook spam as well: they not only fit the text limitations, but a URL like (http://bit.ly/kiZs18) appears much more benign than a URL such as (http://www.somelamedomainyouveneverseen.info).
While they can be useful, shortlinks can also be incredibly evil. Yeah, I know, you’ve heard you should always be careful what you click on (most people aren’t), and perhaps you run anti-malware that investigates links before you navigate to them (most people don’t, and personally, I question the efficacy of most software that purports to do this). But I believe the worst kind of malicious shortlink is a smart malicious shortlink.
All shortlinks work the exact same way – simplistically, when you request their URL, they provide another URL back as the location your browser should redirect to (technically, they return an HTTP 301 or 302 response telling the browser that the document you requested has moved).
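To make those mechanics concrete, here’s a minimal sketch in Python of the response a shortener hands back. The short-code table and destination URL are entirely made up for illustration:

```python
# Minimal sketch of a shortener's core: map a short code to a destination
# and answer with an HTTP redirect. This table is purely hypothetical.
SHORT_DB = {
    "kiZs18": "http://www.example.com/some-article",  # made-up destination
}

def redirect_response(code):
    """Build the raw HTTP response a shortener would send for a short code."""
    dest = SHORT_DB.get(code)
    if dest is None:
        return "HTTP/1.1 404 Not Found\r\n\r\n"
    # "The document you requested has moved" -- the browser follows Location.
    return f"HTTP/1.1 301 Moved Permanently\r\nLocation: {dest}\r\n\r\n"
```

That’s the whole trick: the shortener never serves content itself, it just points your browser somewhere else.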
So what do I mean by a smart shortlink? Most shortlinks are simple – you request navigation with the short URL, and one, perhaps two, levels of indirection later, you’re at the actual document you want. So when you click (http://bit.ly/kiZs18), it navigates through Google’s Feedburner (feedproxy.google.com – this is another problem I’ll talk about another day) and finally to www.wired.com. But smart shortlinks, which are inherently malicious, lie to you if you get too close while you’re investigating them.
Surely you can think of a location where the local police have established a constant patrol for speeders (a “speed trap”)? Say one you’ve driven by for months and months, and 50% or more of the time there was a highway patrol car there? What do you do? You likely a) got ticketed once, or b) become conscious of your speed every time you drive by, to avoid getting one. Grifters on the Internet are no different. They also like to avoid getting caught.
There aren’t speed traps on the Internet, sure. But there are services that “unshorten” URLs, such as unshort.me or unshorten.com, so that you can see where you’re going before you click that short URL. To the bad guys, these are speed traps. Many times, I have tested URLs through unshorteners and seen the final URL come back as “google.com”, “youtube.com”, or similarly generic, benign URLs (usually without any further path information, such as a specific page, which makes them look even more suspect). But if I pasted that same URL into a browser (inside a victim virtual machine), it would navigate to an actually hostile URL.
How does it work? Grifters put a teeny bit of logic into their redirection code: if they recognize the source of the HTTP request as an unshortener (or, I can only imagine, most anti-malware link checkers), they lie about the destination. If the request comes from anywhere else, they assume it’s someone to hit with their grift, and serve up the hostile URL.
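That cloaking logic fits in a few lines. Here’s a sketch of what I imagine the server side looks like – the checker list, decoy, and hostile URL are all illustrative stand-ins, not anything observed verbatim:

```python
# Sketch of a "smart" malicious redirector: lie to known link checkers,
# serve the real (hostile) destination to everyone else.
KNOWN_CHECKERS = {"unshort.me", "unshorten.com"}   # hypothetical detection list
BENIGN_DECOY = "http://www.google.com/"            # what the speed traps see
HOSTILE_DEST = "http://evil.example/payload"       # what victims see

def pick_destination(source_host, user_agent):
    # Request looks like it came from a speed trap? Point it somewhere harmless.
    if source_host in KNOWN_CHECKERS or "unshort" in user_agent.lower():
        return BENIGN_DECOY
    # Otherwise, assume it's a victim and serve the real destination.
    return HOSTILE_DEST
```

Note how cheap this is for the attacker: one set lookup and a substring check per request, and every known checker sees a clean destination forever after.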
I don’t believe any URL unshortener can defeat this at the present time without developing pretty explicit countermeasures – once the bad guys spot a common source of destination-URL sniffing, they’ll flag it in their redirection logic and lie to it. The only safe solutions are to use special software that sniffs the link from the client itself but doesn’t complete the navigation (a special script or app); to navigate from an isolated VM or a machine you’re willing to risk losing; or to send all requests through Amazon EC2 or perhaps Windows Azure, whose address ranges are so expansive that it becomes hard for the bad guys to blockade them completely without undermining the effectiveness of their crime. Though Twitter’s t.co “unlinker” is supposed to help keep you safer, I’m not sure if, or how, they have protected against this kind of explicit attack vector.
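The “sniff the link from the client itself” approach amounts to following Location headers yourself, one hop at a time, and never fetching or rendering the final page. A sketch, written against a pluggable fetch_location function (a real one would issue a HEAD request with Python’s http.client and read the Location header; here it’s abstracted so the chain-walking logic stands alone):

```python
def unshorten(url, fetch_location, max_hops=10):
    """Walk a redirect chain by reading only Location headers.

    fetch_location(url) returns the Location header for one request, or
    None when the response is no longer a redirect. Because we never
    download or render the final document, hostile content never executes.
    max_hops guards against deliberate redirect loops.
    """
    chain = [url]
    for _ in range(max_hops):
        nxt = fetch_location(chain[-1])
        if nxt is None:
            break
        chain.append(nxt)
    return chain
```

Run this from your own machine and you get the same hop-by-hop chain an unshortener would show you – minus the trust in a third party the bad guys may already have flagged.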