This article is getting a lot of attention lately:
http://www.gotw.ca/publications/concurrency-ddj.htm
It argues that clock speeds have stopped climbing, so programmers must learn concurrency techniques in order to squeeze more performance out of future multi-core chips.
The article likens the coming concurrency shift to the ObjectOrientedProgramming revolution; I would like to see more evidence that ObjectOrientedProgramming allows for more complex software in the first place. The article also claims that very few developers know how to do concurrency correctly. In fact, anybody who uses and understands basic database transactions and client/server techniques has been "doing" concurrency, and often doing it well, because the database abstracts away many of those details (a sketch of this follows below). I see no reason to limit the discussion to just "RAM-based" solutions; database concepts can and do extend to RAM.
-- top
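To make the "let the database do it" point concrete, here is a minimal sketch (in Python with SQLite, purely as an illustration; the table and file names are made up): several threads hammer the same account balance, and the database's transactions serialize the writes, so the application code never takes an explicit application-level lock.

```python
# Sketch only: five threads each make 100 deposits to the same account.
# The database's transaction machinery serializes the writers; the
# application never rolls its own locking.
import os
import sqlite3
import tempfile
import threading

DB_PATH = os.path.join(tempfile.mkdtemp(), "demo.db")

# One-time setup: a single account with a zero balance.
with sqlite3.connect(DB_PATH) as con:
    con.execute("PRAGMA journal_mode=WAL")  # lets readers coexist with the single writer
    con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    con.execute("INSERT INTO accounts VALUES (1, 0)")

def deposit(n_times):
    # Each worker gets its own connection; the timeout makes it wait for
    # the write lock instead of failing while another writer is active.
    con = sqlite3.connect(DB_PATH, timeout=30, isolation_level=None)
    for _ in range(n_times):
        con.execute("BEGIN IMMEDIATE")  # take the write lock up front
        con.execute("UPDATE accounts SET balance = balance + 1 WHERE id = 1")
        con.execute("COMMIT")
    con.close()

threads = [threading.Thread(target=deposit, args=(100,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with sqlite3.connect(DB_PATH) as con:
    # All 500 deposits are there, with no application-level locks in sight.
    print(con.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])
```

Whether it is SQLite, a client/server RDBMS, or a transactional in-memory store underneath, the application-level picture is the same: declare the transaction boundaries and let the engine coordinate the writers.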
Somebody else suggested that the OperatingSystem can do it. If an OS is running multiple applications, it can simply divide them among multiple processors; the program doesn't have to know or care. Similarly, a web server can dispatch each incoming request to a different processor, as sketched below; it just has to coordinate any shared state. Maybe the article is talking about the systems level rather than the application level.
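As a sketch of that "let the OperatingSystem spread the work around" idea (again in Python, with made-up request/handler names): each "request" below is handled by its own worker process, the OS decides which core runs which process, and the only concurrency the program handles explicitly is the shared served-requests counter, which is guarded by a lock.

```python
# Sketch only: one worker process per incoming "request". The operating
# system schedules the processes across the available cores; the program
# only has to coordinate the one piece of shared state (the counter).
from multiprocessing import Lock, Process, Value

def handle_request(request_id, served, lock):
    # Stand-in for the real work of serving one request (CPU-bound here).
    sum(i * i for i in range(200_000))
    with lock:                 # explicit coordination of the shared state
        served.value += 1

def main():
    served = Value("i", 0)     # counter living in shared memory
    lock = Lock()
    workers = [Process(target=handle_request, args=(rid, served, lock))
               for rid in range(8)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"served {served.value} requests")

if __name__ == "__main__":
    main()
```

A real web server would pool the workers rather than start one per request, but the division of labour is the same: the OS handles the scheduling, and the programmer only writes coordination code where state is genuinely shared.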
See also: EndOfMooresLaw