For the sake of argument, let’s say that over the next four years most non-stationary devices that can be Internet-enabled are made so — and that most of those devices have embedded GPS and telemetry. Yes, phones, PDAs, cars, trucks, laptops, motorcycles, and cameras, but a host of other devices too, ranging from the sinister to the sublime.
So, privacy issues cheerfully aside, how many location-aware, network-ready devices are we talking about? Initially, probably approaching a billion devices, including the 500 million cellphones sold each year, the 20 million cars and trucks sold annually, and a host of other industrial devices and gewgaws.
Well, there is going to be a data explosion from the bottom up. Why? Because assuming all those devices are constantly sending GPS and (likely) telemetry data, the data will flow inexorably outward. Continuous telemetry (at 10 Hz) and GPS (at 1 Hz) requires roughly 1 KB/s (or, to put it another way, one 512 MB SD card can hold about 160 hours of such data).
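As a rough sanity check, the per-device rate and the SD-card figure hang together. The per-sample sizes below are illustrative assumptions (no particular device is specified here), chosen so the total lands near one kilobyte per second:

```python
# Back-of-the-envelope check of the per-device data rate.
# Sample sizes are assumptions for illustration, not real device specs.
TELEMETRY_HZ = 10
TELEMETRY_BYTES = 80      # assumed bytes per telemetry sample
GPS_HZ = 1
GPS_BYTES = 200           # assumed bytes per GPS fix

rate_bytes_per_sec = TELEMETRY_HZ * TELEMETRY_BYTES + GPS_HZ * GPS_BYTES
# 10 * 80 + 1 * 200 = 1000 bytes/s, i.e. about 1 KB/s

sd_card_bytes = 512 * 1024 * 1024   # one 512 MB card
hours = sd_card_bytes / rate_bytes_per_sec / 3600
print(f"{rate_bytes_per_sec} bytes/s, ~{hours:.0f} hours per 512 MB card")
```

At exactly 1 KB/s the card holds about 149 hours; a slightly leaner data format gets you to the ballpark figure of 160 hours.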
That may sound teensy, but location-awareness across so many devices adds up: 200 million devices at 1 KB/s apiece comes to 200 GB/s (roughly 1.6 Tbit/s) of aggregate bandwidth. It is a large load on networks, one that could, in aggregate, soak up considerable otherwise-unused bandwidth as devices continually ping out location and direction. It is also going to require off-device storage, as most of these devices have no capacity for logging all the data — but keeping the information will be crucial as people search through and find new optima in these large, location-enabled datasets.
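The aggregate figure is easy to check in a few lines, reading the per-device rate as roughly one kilobyte per second (an assumed round number from the estimate above):

```python
# Aggregate bandwidth if 200 million devices each emit ~1 KB/s.
DEVICES = 200_000_000
PER_DEVICE_BYTES_PER_SEC = 1000   # assumed ~1 KB/s per device

agg_bytes_per_sec = DEVICES * PER_DEVICE_BYTES_PER_SEC
agg_gb_per_sec = agg_bytes_per_sec / 1e9          # gigabytes per second
agg_tbit_per_sec = agg_bytes_per_sec * 8 / 1e12   # terabits per second
print(f"{agg_gb_per_sec:.0f} GB/s ~ {agg_tbit_per_sec:.1f} Tbit/s")
```

That is 200 GB/s, or about 1.6 terabits per second, flowing continuously from devices to the network.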
Pardon the musing, but as I was registering for Where 2.0 I got to thinking about some of the infrastructural implications of marching forward with location-enabled devices.