I’ve just posted a comment over on Mashable* that might be of interest (Link). And because I think industrial designers might be especially interested in some of the specifics, I’ll post the example I used in that comment here as well:
Imagine the near future. I’m a toy maker with a rapid-manufacturing system (aka a “fabber”), and I’m looking for new products to fab and sell. I don’t have my own designs and I don’t want to pay for any, so I play an online MMORPG that’s streaming to my PC. When I see something I think would sell, I rip the 3D data from the stream, which – using today’s methodologies – would likely be a low-rez 3D mesh. It isn’t nearly as detailed as what I see on the screen, but that’s no problem. I search my cache for the 2D normal map image (or similar) that the game engine uses to visually enhance the low-quality mesh. That normal map was generated from a much higher-quality, production-grade model, so it can be used in reverse to add that detail back onto the captured low-rez mesh (typically with a command like “Edit > Convert > Use Displacement Map”). The result is that I now have a high-rez 3D model – much like the original – which I can further manipulate for fabrication and sale.
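For the technically curious, here’s roughly what that “use the map in reverse” step amounts to. This is a minimal sketch, assuming the normal map has already been converted into a scalar height (displacement) map, and using a made-up `displace_mesh` helper on a toy flat grid rather than any real game asset:

```python
import numpy as np

def displace_mesh(vertices, normals, uvs, height_map, scale=1.0):
    """Push each vertex out along its normal by the height sampled at its UV.

    vertices:   (N, 3) vertex positions of the low-rez mesh
    normals:    (N, 3) unit normals
    uvs:        (N, 2) texture coordinates in [0, 1]
    height_map: (H, W) grayscale displacement image with values in [0, 1]
    """
    h, w = height_map.shape
    # Nearest-neighbour sample of the height map at each vertex's UV.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    heights = height_map[py, px]
    return vertices + normals * heights[:, None] * scale

# Toy stand-in for a captured low-rez mesh: a flat 4x4 grid in the XY
# plane with all normals pointing up (+Z).
n = 4
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
vertices = np.stack([xs.ravel(), ys.ravel(), np.zeros(n * n)], axis=1)
normals = np.tile([0.0, 0.0, 1.0], (n * n, 1))
uvs = vertices[:, :2]

# Synthetic height map standing in for one derived from the normal map:
# a single bump in the middle of the texture.
yy, xx = np.mgrid[0:64, 0:64] / 63.0
height_map = np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.05)

detailed = displace_mesh(vertices, normals, uvs, height_map, scale=0.2)
print(detailed.shape)  # (16, 3) – same mesh, now with a raised bump
```

In a real tool the mesh would first be subdivided so there are enough vertices to carry the detail, and the sampling would be interpolated rather than nearest-neighbour, but the principle is the same: the 2D map that was only meant to fake detail on screen ends up driving real geometry.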
Who does this screw? Well, if that happened today, it’d be the game developer who gets the shaft. But if virtual worlds continue down the path of Linden Lab’s Second Life, where users create the content, it might be some average guy working as a janitor to feed his family, staying up all night designing these things and dreaming of a better life… just like his musician friends.
And of course this goes beyond simple toys. Someday someone out there will laser-scan their prize gun and post the 3D data on the net. Then again, it might be easier to simply check the cache of a service bureau’s rapid-prototyping system for those files.