I’ve been thinking about how the economics of cloud computing infrastructure will shape the way people write applications.
Most cloud infrastructure providers offer virtual servers as a slice of some larger physical server; Amazon EC2, GoGrid, Joyent, Terremark Enterprise Cloud, etc. all follow this model. This is in contrast to the abstracted cloud platforms like Google App Engine or Mosso, which provide arbitrary, unsliced amounts of compute.
The virtual server providers typically provide thin slices — often single cores with 1 to 2 GB of RAM. EC2’s largest available slices are 4 virtual cores plus 15 GB, or 8 virtual cores plus 7 GB, for about $720/month. Joyent’s largest slice is 8 cores with 32 GB, for about $3300/month (including some data transfer). But on the scale of today’s servers, these aren’t very thick slices of compute, and the prices don’t scale linearly — thin slices are much cheaper than thick slices for the same aggregate amount of compute.
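To make the nonlinearity concrete, here’s a quick per-core-month comparison using the prices quoted above. The $72/month figure for a hypothetical 1-core thin slice is an assumption for illustration only, not a price from any provider’s rate card:

```python
# Per-core monthly cost for a few slice sizes.
# EC2 and Joyent figures are the ones quoted in the text;
# the 1-core "thin slice" price is a hypothetical for illustration.
slices = {
    "thin slice (1 core, hypothetical)": (1, 72.0),
    "EC2 largest (8 cores, 7 GB)": (8, 720.0),
    "Joyent largest (8 cores, 32 GB)": (8, 3300.0),
}

for name, (cores, monthly_price) in slices.items():
    per_core = monthly_price / cores
    print(f"{name}: ${per_core:.2f} per core-month")
```

Even at these rough numbers, the thick 32 GB slice costs several times more per core than the thinner offerings — which is exactly the pressure pushing applications toward thin-slice designs.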
The abstracted platforms are oriented around thin-slice compute as well, at least from the perspective of desired application behavior. You can see this in the limitations imposed by Google App Engine: they don’t want you working with large blobs of data, nor consuming significant chunks of compute.
Now, in that context, contemplate this Intel article: “Kx – Software Which Uses Every Available Core”. In brief, Kx is a real-time database company; they process extremely large datasets, in-memory, parallelized across multiple cores. Their primary customers are financial services companies, who use it to do quantitative analysis on market data. It’s the kind of software whose efficiency increases with the thickness of the available slice of compute.
In the article, Intel laments the lack of software that truly takes advantage of multi-core architectures. But cloud economics are going to push people away from thick-sliced compute — away from apps that are most efficient when given more cores and more RAM. Cloud economics push people toward thin slices, and therefore toward applications whose performance does not suffer notably when the app gets shuffled from core to core (which hurts cache performance) or when it is limited to a small number of cores. So chances are that Intel is not going to get its wish.