People often forget that the Internet wasn’t created in a dramatic flash of omnipotence by an all-knowing God, but rather patched together by thousands of squabbling humans over decades. A piece here, a piece there. Lift that server, tote that fibre. Let’s call the process, oh, ‘evolution’.
Over the weekend two events highlighted this ramshackle nature, and highlighted yet another risk of secret ISP-level Internet censorship. If you did a Google search between 1.30am and 2.25am Sunday AEST, the message “This site may harm your computer” accompanied every single search result. Huh? Surely not every single website is infected with malware? Of course not. It was a mistake.
Google gets its list of possibly-infected websites from StopBadware.org, a US- and UK-based non-profit. Due to a “human error”, the list of bad websites supplied to Google contained a line with a single slash character “/”. Unfortunately, every website address contains a slash, so every website was marked as dangerous.
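To see why one stray slash was so catastrophic, here’s a minimal sketch in Python of the kind of substring matching a flagging system might perform. (The actual Google/StopBadware matching logic isn’t public; this is purely illustrative.)

```python
# Minimal sketch of matching a bad-URL list against search results.
# The real matching logic isn't public; this illustrates the failure mode.

blacklist = [
    "badsite.example/malware.exe",
    "/",  # the accidental entry: a single slash
]

def is_flagged(url: str) -> bool:
    # Flag the URL if any blacklist entry appears anywhere within it.
    return any(entry in url for entry in blacklist)

# Every URL contains a slash, so every result gets flagged.
for url in ["http://crikey.com.au/", "http://example.com/page"]:
    print(url, "->", "This site may harm your computer" if is_flagged(url) else "OK")
```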
To Google’s credit, they found and fixed the problem in under an hour. On a weekend. Impressive. Of course, people could ignore the warning and click through, and we could still use any of the other search engines like Microsoft Live or Yahoo.
But if a similar mistake happened to our fancy-pants ISP-level Internet filters, it wouldn’t just be a warning. It could shut down the entire Internet across Australia for hours. Alarmist? Hear me out.
Conroy’s Rabbit-Proof Firewall has an externally-supplied list too: the ACMA blacklist. But unlike Google’s list, the ACMA blacklist will presumably be encrypted to preserve its secrecy. Otherwise, several thousand ISP systems administrators could get their hands on the filth list. The filters which are about to be trialled in ISPs come in three main flavours.
The simplest requires all traffic to be routed through the black-box computers running the filter software. That slows everything down, so my guess is it won’t be chosen except for the very smallest ISPs.
The other two use a split approach. First, all traffic is checked against a list of potentially-bad Internet addresses. Most is legit, and passes straight through with negligible slowdown. Only a small proportion of traffic is routed to the computer running the detailed filtering system.
(For the hypergeeks, BGP tells the ISP’s core routers which traffic to route to the filter box. That box does DPI to determine if the HTTP request contains a bad URL, or whatever is appropriate for non-web traffic, and it responds in one of two ways. In one, it denies the connection. In the other, it fires three TCP RST packets to each end of the connection to kill it. Serious Network Engineers are welcome to pick apart this overly-simplistic explanation in the comments.)
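(And for those who want to see it, here’s roughly what the RST-injection half looks like, sketched in Python with the Scapy packet library. Everything here is a placeholder: a real filter box derives the addresses, ports and sequence numbers from the live connection it’s killing, and does the work at wire speed, not in a script.)

```python
# Illustrative sketch of RST injection, the second blocking method
# described above. Addresses, ports and sequence numbers are placeholders.
from scapy.all import IP, TCP, send  # pip install scapy; run as root

def kill_connection(client_ip, client_port, server_ip, server_port,
                    client_seq, server_seq):
    # Spoof a reset to the client, pretending to be the server...
    rst_to_client = IP(src=server_ip, dst=client_ip) / TCP(
        sport=server_port, dport=client_port, flags="R", seq=server_seq)
    # ...and one to the server, pretending to be the client.
    rst_to_server = IP(src=client_ip, dst=server_ip) / TCP(
        sport=client_port, dport=server_port, flags="R", seq=client_seq)
    # Fire three of each, as per the description above.
    for _ in range(3):
        send(rst_to_client, verbose=False)
        send(rst_to_server, verbose=False)
```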
But what happens if the list is wrong? What if the routers send all traffic to the filter box, which is then overloaded? What if the filter box starts blocking all traffic?
Unlike the Google glitch, this system would be blocking the traffic itself. And here’s the irony: systems engineers couldn’t connect to the routers to reset them, because all the traffic would be blocked. That means systems engineers across Australia would have to physically visit every router to reset it. They’d also need a new ACMA blacklist. Where do we get one of those on a Sunday?
Maybe I’m paranoid. So I ran this past A Very Experienced ICT Security Specialist Who Cannot Be Named.
“Broadly speaking, you’re right. The network would have to be extremely well designed to recover from an error like that, and that’s most unlikely,” he said.
So how well are our networks designed? Try these two examples.
On 24 February last year, Pakistan tried to block the evil YouTube using the very BGP technique I described — and they killed YouTube globally.
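The mechanics were simple: routers prefer the most specific route, and Pakistan Telecom announced a route for YouTube’s addresses that was more specific than YouTube’s own, so routers worldwide sent YouTube’s traffic to Pakistan. A toy illustration of the longest-prefix-match rule in Python (real BGP is vastly more complicated, but the preference rule is the same):

```python
# Toy longest-prefix-match, the rule at the heart of the Pakistan/YouTube
# incident: the most specific route always wins.
from ipaddress import ip_address, ip_network

routes = {
    ip_network("208.65.152.0/22"): "YouTube (legitimate)",
    ip_network("208.65.153.0/24"): "Pakistan Telecom (hijack)",
}

def best_route(dest: str) -> str:
    matches = [n for n in routes if ip_address(dest) in n]
    # The longest prefix (largest prefixlen) wins.
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("208.65.153.238"))  # -> Pakistan Telecom (hijack)
```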
And only yesterday, swathes of Victoria and Tasmania were without the Internet for hours because power failed at a data centre. There were standby power generators, but, well, best-laid plans, etc.

I don’t think they can keep the list all that secret. I’m sure the Government will make the ISPs sign confidentiality agreements, but it will be leaked eventually. And who cares, really? There are many commercially developed URL blacklists out there, all able to be downloaded without issue or encryption. I think the government only wants to keep the list under wraps until the filtering is in place, then they will release it. At that stage, it won’t matter, as requests to the dodgy URLs/IPs will be blocked!
As for blocking addresses using BGP routing: unlikely. Modifying route tables on routers to block sites is like using a sledgehammer to crack open a nut. Too dangerous and unnecessary. The Pakistan guys cited in the example didn’t have a filtering system in place, so tried to be a bit cute. Standard ACL modifications would have been a better option. You only change route tables on routers if absolutely necessary, and a smart admin always leaves themselves a back door (for the very reason that it is always the admin who has to get in the car at 2am to go onsite and fix the issue).
Come on guys, talk about dramatics.
Systems engineers will always be able to communicate with and configure the routers (unless the router is without power).
Yes, routers are usually configured using a browser. But they never use port 80 (the HTTP port). Usually there is a vendor-allocated system port that allows the comms engineer to log in and view/edit the configuration of the router remotely.
So even if the blacklist was stopping all traffic on port 80 (HTTP) and port 443 (the HTTPS port), the admin could still fix the router remotely by using port 1567 (or equivalent).
As for the blacklist being encrypted: unlikely. If you think checking the blacklist every time someone tries to connect to a URL is going to be slow, it will be really slow if you add a decryption step. Plus, you would have to give the decryption key to the ISP admins anyway so they can configure the router to decrypt the list.
There are lots of reasons why the filtering should not go ahead, but these aren’t them. Personally, I just think the fact that the filter can be bypassed using an overseas proxy (a remote computer which acts as a relay to grab content, which any 15-year-old kid knows how to configure in IE) makes the whole exercise a bit of a waste of money. The blacklist could try to block these proxies as well, but there are thousands of them and they will never get them all. It makes it harder to track the real criminals on the web too, since these proxies hide the true IP address of the bad guy’s computer.
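To show how little effort the bypass takes, here’s a hypothetical sketch in Python using the third-party requests library. The proxy address is made up; real open proxies are a web search away.

```python
# How trivially a filter is bypassed via an overseas proxy.
import requests  # pip install requests

proxies = {"http": "http://203.0.113.10:3128"}  # hypothetical overseas proxy

# The ISP's filter sees only a connection to the proxy, not to the
# blacklisted site the proxy fetches on our behalf.
response = requests.get("http://blocked.example/", proxies=proxies)
print(response.status_code)
```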
Thank you for your excellent reporting on these issues, Stilgherrian.
Scott, why would the blacklist only apply to ports 80 and 443? HTTP servers can be run on almost any port, and surely the evildoers will think to host their forbidden content on, say, port 81…
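A quick sketch of how little effort that takes, using nothing but Python’s standard library (ports below 1024 need root on Unix; the choice of 81 is just to make the point):

```python
# An HTTP server on port 81 instead of 80. A port-80-only filter
# never sees this traffic.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 81), SimpleHTTPRequestHandler)
server.serve_forever()
```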
Scott, if we’re talking about the mandatory Tier 1 of the planned internet filtering, then commercial blacklists are not what it’s about. Senator Conroy has, at least in the current incarnation of his proposal, stated that the filter is being set up to block the ACMA’s blacklist. This is and will continue to be secret (at least in theory).
The second tier, which is opt-in, opt-out or whatever Conroy says at the time, which seems to depend on which way the winds of criticism are blowing that week, might well be a commercially-available list — but we simply don’t know, because the entire process is backwards. Instead of having a clear policy and saying “block this”, we have random tests of whatever gets put forward, and presumably we’ll see a policy built from that which will be whatever is easiest to sell politically at the time.
Does anyone think this is a sensible way to run anything?
This entire internet filtering “policy” is an episode of “The Hollowmen” writ large, but with poorer editing.
CH raises a good point, too. Fast-flux techniques can flip the hidden nasties around the internet in minutes. ACMA’s bureaucracy will take… how long to catch up?
The joke is that politicians, who can barely use a computer at all, are trying to decide how to block internet traffic and arguing with network engineers with 20 years’ experience. Look up “Dunning-Kruger effect” some time: this is a classic example.
The important point you highlight is that we still had a viable fall-back position on the weekend, and could survive the system failure. If the intended censorship/filter system gave each citizen the final say on how they used it (the ability to enable or disable it, say), then we’d survive some types of failure there too. Some people actually believe that we can design and build a 100% reliable censorship/filter system that won’t ever malfunction. That sounds like more faith-based reality (to quote a recent Crikey article).