A few thoughts on CPUs and concurrency
Kyle sent me this article on why concurrent programming is going to be the next big thing. It makes some interesting and valid points about how CPU speed gains have been trailing off.
http://www.gotw.ca/publications/concurrency-ddj.htm
A few random thoughts:
1) I mostly agree, but see point 3.
2) While this may be a shock to C++ desktop application developers, this isn’t really news to web application programmers. Sure, there’s a question of scale, but fundamentally, parallelizing an application to run on 20 webservers at once is very similar to the problem of parallelizing an application to run on 20 CPU cores at once. I think web application programmers are often seen as less sophisticated than desktop application programmers, but they’ve got an edge in making the paradigm shift here. They’re already used to dealing with race conditions, serialized bottlenecks, and asynchronous event queues, at least in some form.
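To make the race-condition point concrete, here's a minimal sketch (in Python, purely for illustration; the original article is about C++) of the kind of shared-state hazard both web and multicore programmers face: two threads doing a read-modify-write on one counter. The lock is what makes it correct.

```python
# Two threads increment a shared counter. The read-modify-write in
# "counter += 1" is not atomic, so without the lock the threads can
# interleave and silently lose updates.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # remove this lock and the final count may come up short
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock held, always 200000
```

The same pattern shows up on a web farm as two app servers updating one database row; only the spelling of the lock changes.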
3) In my experience, it’s been a while since most applications have been CPU-bound. Memory, network, disk, and even video latency have been the limiting factors more recently. I find that when people say “the machine will just get faster to compensate for my inefficient algorithm”, they mean the whole architecture, not just the CPU. So it goes.
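A toy illustration of point 3 (a Python sketch, with the I/O latency simulated by a sleep; the 50 ms figure is an arbitrary stand-in for a disk seek or network round trip, not a measurement): when wall-clock time is dominated by waiting, a faster CPU barely moves the needle.

```python
# Simulate a request handler that waits on I/O and then does a little
# computation. Doubling CPU speed would shrink only the sum() portion,
# which is a small fraction of the total elapsed time.
import time

def handle_request():
    start = time.perf_counter()
    time.sleep(0.05)               # simulated disk/network latency (50 ms)
    result = sum(range(100_000))   # the CPU-bound part
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = handle_request()
print(f"waited at least 50 ms out of {elapsed * 1000:.1f} ms total")
```

Run it and the CPU-bound portion is a rounding error next to the simulated latency, which is the sense in which "the machine getting faster" has to mean the whole architecture.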
4) This article barely touches on the impact of 64-bit architectures, other than to say “cache size offsets any performance gain”. That seems to deserve a lot more discussion than the glossing-over it’s gotten here, particularly as I’ve been hearing lots of rumors about AMD’s advances toward reducing cache memory size. I dug up this extremely detailed article on AMD’s 64-bit architectures, which I haven’t had time to read fully, but which you might find interesting or mind-numbingly boring: