Everyone has been fascinated by the MIT $100 laptop project, what with the radical prospect of disseminating computers throughout the developing world's classrooms (sweet version), small businesses (brutally pragmatic version), or terrorist cells (brutally cynical version). Some thought it was genius, others a distraction...and some of us realised with a degree of depression that its specifications were rather more impressive than those of our office computers. There is one problem, though, that I don't think anyone has really dealt with.
That is to say, a cut-down PC is of very limited use in the role that is suggested. Apart from offering an introduction to programming and maths (the biggest application its designers were thinking of), most of the really interesting uses for it depend on Internet access. Otherwise, whatever content isn't user-generated (the Wikimedia Foundation's free curriculum project springs to mind, as do dictionaries, maps, and such) and all software will have to be distributed to all those computers on physical storage media...and as the whole point is to set them free to swim through society, it's doubtful whether they will retain a supply link to whoever provides this stuff for long.
The lapster does include a Wi-Fi (IEEE 802.11b/g) radio, but this is not really a solution. Wi-Fi is a nice technology for places with a good fixed-line or microwave infrastructure; it is not a telecommunications replacement. Internet access via Wi-Fi is always restricted to the access point's radio range, and the access point itself has to sit at the end of a fibre or DSL line. This is as good as useless in this context.
MIT hopes Wi-Fi's other mode, peer-to-peer rather than access point networking, will provide the answer. This is OK as far as linking the computers in a class together goes, but no farther. Using it for wide-area networking relies on what is known as mesh networking, in which one user passes on traffic from another to the next user until either the destination or the backbone network is reached. Essentially, the users act both as end-points and as routers. This is nice, and geeks (especially academic and lefty geeks) love it because they see it as a way of escape from the grip of big telcos and even ISPs into the pure, fresh skies of free connectivity.
The trouble arrives, though, if everyone, absolutely everyone, isn't meshed in. Theoretically, if all the users are part of the meshnet, any user is routable from any other without leaving it. But, of course, everybody isn't. For a mesh network to work, there must be an unbroken chain of users, all online and within range of each other, from you to every other user. If there's a gap, the users on the other side of it form their own private internetwork and you can't reach them.
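The gap problem is just graph reachability. Here's a minimal sketch, with an entirely hypothetical five-laptop mesh: A, B, and C can hear each other in a chain, while D and E sit on the far side of a gap with no node bridging it.

```python
from collections import deque

def reachable(adj, start):
    """Breadth-first search: the set of nodes routable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Hypothetical mesh: edges mean "within radio range of each other".
mesh = {
    "A": ["B"], "B": ["A", "C"], "C": ["B"],
    "D": ["E"], "E": ["D"],
}

print(reachable(mesh, "A"))  # {'A', 'B', 'C'} -- D and E might as well not exist
```

However clever the routing protocol, no software can conjure a path across a stretch of desert wider than one radio hop.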
That would be no trouble if people were evenly distributed across the Earth's surface, but we aren't. There are deserts, oceans, and mountain ranges around, most of which are considerably larger than the theoretical maximum range of a Wi-Fi connection. Not just that, there are large areas of the world where the density of population is sufficiently low to put our laptops out of touch with each other and the wider world. The other problem with mesh networking is the so-called n+1 problem, which arises once we pragmatically accept that limitation and hook our mesh network up to the Internet. The computer nearest the backbone, the first (or last, depending on how you look at it) hop, must carry the total bandwidth required by all the others, all the time. The closer you get to that point, the heavier the load, and the more critical the link's reliability. If that one fails, you have no Internet access. You may talk, however, among yourselves.
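The arithmetic of that last hop is worth spelling out. Take a hypothetical chain of five meshed laptops, each wanting 50 kb/s of Internet traffic (both figures are illustrative, not OLPC specs): each hop carries its own traffic plus everyone's behind it.

```python
# Hypothetical chain of five meshed laptops, each wanting 50 kb/s of
# Internet traffic; hop 1 sits next to the backbone and relays for the rest.
PER_USER_KBPS = 50
CHAIN_LENGTH = 5

def load_at_hop(hop, chain_length=CHAIN_LENGTH, per_user=PER_USER_KBPS):
    """Traffic a given hop must carry: its own plus everyone's behind it."""
    return (chain_length - hop + 1) * per_user

for hop in range(1, CHAIN_LENGTH + 1):
    print(f"hop {hop} carries {load_at_hop(hop)} kb/s")
# hop 1 carries 250 kb/s -- the entire mesh's demand converges on it.
```

Scale the chain up and the first hop's load grows linearly with the user count, while its failure still takes everyone offline.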
If the mesh is of any size, that last link must be at the very least a T-1/E-1, too. Try obtaining one of them in, say, the provincial Ivory Coast...at best it will be seriously expensive, and at worst impossible. Mesh networking is a cool idea if you're on the MIT campus with plenty of other users and bandwidth to burn. It's also not such a bad idea if you have a longer-range radio link (we'll come back to this).
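To put a rough number on "any size": a T-1 runs at 1.544 Mb/s and an E-1 at 2.048 Mb/s, so at the same illustrative 50 kb/s per laptop (my assumption, not a spec), the gateway link fills up fast.

```python
# A T-1 carries 1.544 Mb/s, an E-1 2.048 Mb/s; assume each laptop wants
# 50 kb/s of Internet traffic (an illustrative figure, not a measured one).
PER_USER_KBPS = 50
T1_KBPS = 1544
E1_KBPS = 2048

print(T1_KBPS // PER_USER_KBPS, "users saturate a T-1")   # 30
print(E1_KBPS // PER_USER_KBPS, "users saturate an E-1")  # 40
```

Thirty-odd users per T-1 is not a large school, let alone a town.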
My point, then. Whilst all this was going on, the GSM Association, the mobile network operators' club, announced that Motorola had won its latest Emerging Market Handset Initiative contract, this time for a mobile phone priced below $30. Now, mobile telephone networks have been spreading in Africa and Asia with a speed that regularly surprises the people who build them. It's one of the industry's conscience salves of choice. While European and North American operators have struggled to come up with a working mobile payments system, African ones invented a function to transfer airtime credit by SMS, which meant that a new and highly accessible, secure, and instant payments system suddenly appeared (and, arguably, a new currency).
A couple of months ago, GrameenPhone, an arm of the Grameen Bank of Bangladesh (you know, the darling of the World Bank) switched on the EDGE (EGPRS) upgrade to its network, pushing peak data transfer rates up to 200 kb/s. Within a week, 100,000 subscribers had upgraded to the new system. It's not 3G (although it's not far off the speeds the first 3G networks achieved in practice), and certainly not post-3G speed, but it is Internet access at a speed comparable to most US fixed Internet connections, in the jungles of Bangladesh. This is a well-tried, carrier-grade, mass-production technology that is already there, or on its way, in many of the places these lapsters are going.
So where is the $10 datacard for the $100 laptop? Why doesn't the thing already have an embedded GPRS radio? Duh. Less optimistically, though, one thing neither the GSMA, the CDMA Development Group, nor MIT has tackled is the other end of the link: the $1000 base station and the $3000 switch. There is no Emerging Market Base Station Initiative - yet. What might perhaps change that would be success with the mobile version of WiMax, which will at some future date be standardised as IEEE 802.16e, once the WiMax Forum decides how it works. Motorola's "pre-standard" (read: non-standard) WiMax base station drinks only 10 W of electricity and is about the width of The Guardian in length and the width of my notebook across. Samsung (who invented most of it as a proprietary tech called WiBro) claim to have tested theirs at speeds of 1-3 Mbps from moving vehicles.
Most of the claims (70 Mbps over 30 miles!) you may have heard for WiMax are crap, except perhaps for highly managed point-to-point links, but if it can do that on 10 W, we can easily drive the base station with a Rutland 913 wind turbine and some batteries, which means no fixed infrastructure at all. One of the first applications of the "fixed wireless", 802.16d, version (which is already standardised) may be to provide backhaul for the cellular systems.
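The back-of-the-envelope power budget bears this out. Here's a rough sketch with assumed figures (a 48-hour windless spell, a 12 V battery bank, a 50% depth-of-discharge limit for lead-acid cells - none of these are Motorola's or Marlec's numbers):

```python
# Rough power budget (illustrative assumptions, not measured specs):
# keep a 10 W base station running through a 48-hour windless spell
# off a 12 V battery bank, discharging lead-acid cells no deeper than 50%.
LOAD_W = 10
BACKUP_HOURS = 48
BATTERY_V = 12
USABLE_FRACTION = 0.5

energy_wh = LOAD_W * BACKUP_HOURS                      # 480 Wh to ride out the calm
capacity_ah = energy_wh / BATTERY_V / USABLE_FRACTION  # bank size needed
print(f"{energy_wh} Wh -> {capacity_ah:.0f} Ah battery bank")  # 80 Ah
```

An 80 Ah bank is a couple of car batteries - which is rather the point: at 10 W, off-grid operation is a solved problem.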
But, before WiMax gets its act together, the cellular systems are already unwiring the places the $100 laptop was intended for, and there's no suitable radio on the thing. Or is the plan to encourage users to hack a mobile phone together with the computer?