There was an article in Game Developer magazine (www.gamasutra.com) a while back on how rendering was done in the game Black & White. If I remember correctly, they rendered the scene three times with RenderToTexture at different texture resolutions and then composited those three renderings into one. This way you get a fuzzy image in the back, a moderately crisp one in the middle, and a sharp one close to the camera.
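Just to illustrate the idea (this is my own sketch, not their actual code): the composite step amounts to picking, per pixel, between the sharp near layer, the mid layer, and the blurry far layer based on scene depth, with a small crossfade at the band edges so the transitions don't pop. All the parameter names here (`near_end`, `far_start`, `fade`) are hypothetical.

```c
#include <stdio.h>

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

/* Per-pixel composite of three renderings -- hypothetical float intensity
   values sampled from the three textures: 'sharp' (full-res, near),
   'mid', and 'blurry' (low-res, far).  The pixel's depth selects a band,
   with a linear crossfade of width 'fade' at each band edge. */
float composite(float sharp, float mid, float blurry, float depth,
                float near_end, float far_start, float fade)
{
    if (depth < near_end - fade)
        return sharp;
    if (depth < near_end)                     /* near -> mid crossfade */
        return lerp(sharp, mid, (depth - (near_end - fade)) / fade);
    if (depth < far_start)
        return mid;
    if (depth < far_start + fade)             /* mid -> far crossfade */
        return lerp(mid, blurry, (depth - far_start) / fade);
    return blurry;
}
```

In a real renderer you'd do this blend on the GPU while drawing three screen-aligned quads, but the per-pixel logic is the same.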
My concern with this technique is that RenderToTexture behaves unpredictably across graphics cards and used to be SLOOOW on some. With Radeon 9800s and FX5600s there shouldn't be any problems, but it might be a gamble with older ones. If you want to avoid that risk, going with ello's recommendation is probably better: jitter the camera position slightly, render the scene full-screen once per jitter, and blend the passes together with varying alpha transparencies. There is an introduction to this in the OpenGL Red Book.
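The key detail in the Red Book's jittered-camera trick is that when you shift the eye by some offset, you also shear the view frustum so that points on the focal plane land on the same screen position in every pass; everything off the focal plane drifts between passes and blurs out when the passes are blended. Here's a minimal sketch of just that frustum math (no OpenGL calls; the struct and parameter names are my own):

```c
#include <stdio.h>

typedef struct { double left, right, bottom, top; } Frustum;

/* Given the unjittered near-plane window (left..right, bottom..top at
   distance 'znear'), an eye-space camera offset (dx, dy), and the
   distance 'focus' to the plane that should stay sharp, return the
   sheared window.  Shifting the window by -offset * znear / focus
   cancels the camera shift exactly for points at depth 'focus'. */
Frustum jitter_frustum(double left, double right, double bottom, double top,
                       double znear, double focus, double dx, double dy)
{
    Frustum f;
    f.left   = left   - dx * znear / focus;
    f.right  = right  - dx * znear / focus;
    f.bottom = bottom - dy * znear / focus;
    f.top    = top    - dy * znear / focus;
    return f;
}
```

Each pass then translates the camera by (dx, dy), loads the sheared frustum, renders, and blends the result into the framebuffer (or accumulation buffer) with weight 1/N for N passes.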