Satellites and possibilities

The announcement

Many news services have already reported on the SpaceX plan to launch a fleet of satellites providing internet access; see Ars, for example, or the more detailed discussion at reddit. In short, SpaceX is planning to set up a constellation of over 4k satellites that provide two-way internet access to ground radios. Since the satellites don’t exist yet and the closest real project we can compare them with is the relatively slow Iridium network, most of the comments I’ve read concentrate on the questions you’d ask about a broadband / fiber network. And I think they largely miss the big picture.

Questions asked

A lot of questions are about standard things we care about now:

  • What’s the available bandwidth? (23Gbps per satellite)
  • What’s the total bandwidth of the project? (depends on ground ISPs)
  • What’s the latency of the communication? (first hop around/under 20ms)
  • How well is it going to work in cloudy weather? (pretty well - atmosphere and water don’t affect Ku band / 12-18GHz that much)

And those questions are relevant. It’s great that access speeds will be pretty much uniform around the world - no more dialup because you live in the middle of nowhere, or because the telco exchange has run out of available ports.

But those questions assume the current model of accessing the internet. And soon that’s going to be soooooo 2016.

Questions that we should ask

A global satellite internet used all over the world gives us opportunities that weren’t available before. Multicast and caching become possible like never before - not multicast as it’s specified and deployed right now, but the general idea. And the latency may actually end up lower than what we’re used to today.

Multicast

We don’t have to be stuck with the current 1:1 connection model. It’s convenient in its simplicity, but it just doesn’t scale that well. Did you watch all of the House of Cards episodes within the first weekend after release? So did ~670,000 others. That means all 13 episodes were copied ~670,000 times and transferred to the viewers, every single time, all the way from the nearest Netflix data store. It didn’t just take Netflix’s bandwidth. Your local ISP’s network probably made an “Ugh…” sound a few times - just like your neighbours trying to play low-latency games at the same time. Most consumer networks are oversubscribed, and you can see it when bandwidth drops in the evenings.

But that doesn’t have to be the case. We’re not that unique. A lot of what we do is pretty common and likely fairly area/language-specific. If you’re watching that new cat video on Facebook, or reading that article in NYT, you can be almost certain that someone within 100km of you is doing the same thing right now. If the data is literally broadcast, it doesn’t matter who requested it if we can all get it at the same time. Multicasting would stop the data duplication and actually free up some bandwidth for unique requests.
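
To make the idea concrete, here’s a minimal sketch of the receiver side in TypeScript, using Node’s dgram module to join a multicast group. The group address, port, and “popular content channel” framing are made up for illustration - the real system would use radio transmissions rather than UDP over IP, the point is only the one-to-many shape of it.

```typescript
// Minimal sketch: subscribe to a hypothetical "popular content" channel.
// The group address and port are invented for illustration.
import * as dgram from 'node:dgram';

const POPULAR_CONTENT_GROUP = '239.1.2.3'; // hypothetical multicast group
const PORT = 5007;

const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

socket.on('message', (chunk, rinfo) => {
  // Every subscriber in the footprint receives this same packet,
  // but the sender only had to transmit it once.
  console.log(`received ${chunk.length} bytes of shared content from ${rinfo.address}`);
});

socket.bind(PORT, () => socket.addMembership(POPULAR_CONTENT_GROUP));
```

However many receivers join the group, the sender transmits each packet exactly once - that’s the whole trick.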

Opportunistic caching

Let’s go further. You don’t have to start receiving data only after you request it - you can do it earlier. There’s a limited number of transmitters on the satellite, so even if the data is ready to be beamed to you, sometimes you’ll just have to wait. But let’s assume some transmissions are marked as popular. If your receiver sees them while waiting for something else, why not grab them anyway and save them to a local disk? It doesn’t matter that you don’t even know what they are at the moment. It could be the new popular article. It could be a funny video. It could be the header of a popular page that everyone will visit today. The ISP knows it’s popular, so save it, even to a cheap but large spinning disk. If the disk dies, it was just a cache anyway.

Using that free time to cache whatever data is broadcast anyway can result in effectively 0 latency. It doesn’t matter that the resource you wanted to reach is 800ms away in total and hammered by broadband users. You already have it on the disk attached to your radio. And the entity hosting the original content is happy because they don’t have to serve it again.
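
As a rough sketch of that “grab it anyway” behaviour (the popularity flag and the framing of broadcast chunks are assumptions here, not part of any announced design): anything flagged popular gets hashed and dropped into a local cache keyed by that hash, so a later request for the same content can be answered straight from disk.

```typescript
// Sketch only: opportunistically cache popular broadcast chunks by content hash.
import { createHash } from 'node:crypto';
import { writeFile } from 'node:fs/promises';
import * as path from 'node:path';

const CACHE_DIR = '/var/cache/sat-radio'; // hypothetical cache directory

async function onBroadcastChunk(chunk: Buffer, markedPopular: boolean): Promise<void> {
  if (!markedPopular) return; // only keep data the ISP flagged as popular

  // Name the file after its content hash, so any later request for that
  // hash can be answered from disk regardless of who originally served it.
  const hash = createHash('sha256').update(chunk).digest('hex');
  await writeFile(path.join(CACHE_DIR, hash), chunk);
}
```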

Technology

The common internet we’re using today is not built on the tools this new system needs. We access specific names, like http://example.com/blah, and expect to do the whole journey - resolve the DNS name, connect to the host, make the request, receive the bytes in one continuous stream. It’s been around forever and it’s starting to show its age. To use the features described above, we need to make two changes:

  • stop caring who sends the data
  • start accessing content rather than names

This is already possible in some specific cases. For example, BitTorrent requests a specific content hash, and you can get the data from anyone. It’s a hash of the content itself, so you can always verify it, regardless of the source.
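
The verification step is trivial to express. BitTorrent itself uses its own piece-hash scheme, but the idea looks roughly like this sketch (using SHA-256 for illustration):

```typescript
import { createHash } from 'node:crypto';

// If the hash of the bytes matches the hash we asked for, it doesn't matter
// whether they came from the origin, a neighbour, or a local cache.
function matchesRequestedHash(blob: Buffer, requestedSha256Hex: string): boolean {
  const actual = createHash('sha256').update(blob).digest('hex');
  return actual === requestedSha256Hex;
}
```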

IPFS tries to make similar technology available everywhere. For example, parts of a website can be retrieved using js-ipfs from any IPFS node. That node could be the cache attached to the radio in your house.
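
A rough sketch of what that retrieval can look like, assuming the ipfs-core flavour of js-ipfs (the CID below is a placeholder, not a real resource):

```typescript
// Fetch a resource by content identifier; any node that has it can serve it.
import { create } from 'ipfs-core';

async function fetchByCid(cid: string): Promise<Buffer> {
  const node = await create();               // start an in-process IPFS node
  const parts: Uint8Array[] = [];
  for await (const chunk of node.cat(cid)) { // stream the content from wherever it lives
    parts.push(chunk);
  }
  return Buffer.concat(parts);
}

// Usage (placeholder CID):
// const bytes = await fetchByCid('QmPlaceholderCid');
```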

While the change is pretty slow, the foundations of this technology are making their way into standard browsers as well. Resources requested by a website can be annotated with their content hash using Subresource Integrity. In practice this means the browser can be allowed to request a file by that hash - wherever it happens to live. This ties in nicely with protocols like BitTorrent and IPFS.
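
For reference, the integrity value is just a base64-encoded digest with an algorithm prefix; computing one for a file takes a few lines (a sketch, assuming a Node environment):

```typescript
import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

// Produce the value that goes into an integrity="..." attribute,
// e.g. <script src="helpers.js" integrity="sha384-..." crossorigin="anonymous">.
async function sriValue(file: string): Promise<string> {
  const digest = createHash('sha384').update(await readFile(file)).digest('base64');
  return `sha384-${digest}`;
}
```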

Security

If you’ve been paying attention to internet developments lately, you may be asking: “That’s completely incompatible with HTTPS - why did we do that push for HTTPS everywhere then?” And you’d be right in some cases.

HTTPS provides authentication of the parties, encryption of traffic, privacy, and integrity. Sometimes you’ll want all of those for every element of a service. Sometimes it’s not that critical. For example, when contacting your bank, you may want to encrypt all the user-specific content, but since that content embeds “and also download the file with hash 0xDEADBEEF that I call helpers.js, which isn’t user-specific”, you may choose to fetch that part in public. (Yes, multiple caveats apply here; I’m simplifying.) Since the integrity is still verified, encryption and authentication of that one request may not matter.
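
Today’s browsers can already express that split with the Fetch API’s integrity option: the page arrives over HTTPS and names the file by hash, while the bytes themselves can come from any host or cache, because anything that doesn’t match the hash is rejected. (The URL and digest below are placeholders, not anyone’s real assets.)

```typescript
// Sketch: fetch a non-user-specific file by hash; the source doesn't matter.
async function loadHelpers(): Promise<string> {
  const response = await fetch('https://cdn.example/helpers.js', {
    integrity: 'sha384-<base64 digest of the expected helpers.js>',
  });
  return response.text();
}
```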

In another case we’ve got a Netflix movie, which is going to be protected by some form of DRM, so encryption is largely redundant, authentication is available via other requests, and privacy can be handled via other channels. You could easily request the DRM key over a standard HTTPS request, but then play the files that were broadcast over your area a few minutes ago.

There’s still the question of whether this tradeoff matters to you in each case, and how to differentiate between them. But that’s a large topic we won’t solve straight away. The important change to the model is that requests and responses don’t have to be easily relatable anymore. You could make a secret request for data which is then broadcast without identifying you. Or use things downloaded by other users without any communication at all. This is a large-scale Outernet.

Summary

I, for one, welcome our new broadcast internet protocols.

Was it useful? BTC: 182DVfre4E7WNk3Qakc4aK7bh4fch51hTY
While you're here, why not check out my project Phishtrack which will notify you about domains with names similar to your business. Learn about phishing campaigns early.
