I recall reading some time ago that a team at Purdue University was working on a project to simulate the September 11 attacks, and I'd occasionally see a blog headline about their effort. It's been getting some renewed attention these past couple of weeks, but it was only yesterday that I bothered to watch the video people have been discussing. I suppose my lack of interest stemmed from having seen plenty of simulations, so this one wasn't particularly intriguing to me.
That’s now changed.
–
About a month ago I was trading emails with Ben Mahan of MTek. You might recall an earlier post of mine discussing their niche, rapid manufacturing-based business (reLink). During the exchange he mentioned something that should have occurred to me but never did. I normally don’t post excerpts from emails, but with Ben’s approval I thought this worth sharing:
The RP machines have the capability to “blend” materials from one type to another. These transitions can be multidimensional shapes, and not just gradients. Designers have few if any options to calculate the benefits of this. So capitalizing on the RP machine transitioning technology is being held up by the inability to effectively calculate these very complex transitions. This is how it was explained at an RP seminar I attended last year. Using Mechanica, or even Cosmos, there are no provisions (at least I’m not aware of) for performing a stress analysis on a “blended” part.
I know about blending materials and I’ve done finite element analysis, but it had never occurred to me how problematic this particular computational analysis would be.
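To make the difficulty concrete, here's a minimal sketch of why analyzing a "blended" part is awkward: every point in the gradient effectively has its own material properties, so a stress analysis can't assign one modulus to the whole body. This is only an illustration using a simple rule-of-mixtures estimate along a linear gradient; the function names and material values are my own assumptions, not anything from the RP vendors or the FEA packages mentioned above.

```python
# Hypothetical illustration: local stiffness along a linearly "blended"
# part, estimated with a rule of mixtures. Real analysis of graded
# materials is far more involved; this only shows why properties must
# vary point by point instead of being one constant for the whole part.

def blend_fraction(x, length):
    """Volume fraction of material B at position x along a linear gradient."""
    return min(max(x / length, 0.0), 1.0)

def local_modulus(x, length, e_a, e_b):
    """Rule-of-mixtures estimate of Young's modulus (GPa) at position x."""
    f = blend_fraction(x, length)
    return (1.0 - f) * e_a + f * e_b

if __name__ == "__main__":
    # Example: a 100 mm part blending from a 2 GPa material to a 10 GPa one.
    for x in (0.0, 25.0, 50.0, 75.0, 100.0):
        print(f"x = {x:5.1f} mm -> E ~ {local_modulus(x, 100.0, 2.0, 10.0):.1f} GPa")
```

A mesh-based solver would need something like this evaluated per element, which is exactly the capability Ben notes is missing from the tools he's seen.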
Since then I’ve only seen one mention of simulating composites: an entry on Desktop Engineering about some new version of “injection simulation software for composites manufacturing” (Link). Yeah, I should look more, but I suspect the advances are still kinda slow. For now.
–
So while watching the above video, what struck me most was how it was seemingly forced to treat everything as discrete components: the plane, the fuel, the support structures, the concrete, the glass, and so on. Everything on its own layer. The plane itself likely contained some composite materials or non-homogeneous sections, but I assume they made educated guesses and logical assumptions to simplify the calculations and focused their attention on the big picture.
That doesn’t mean the result isn’t compelling or useful; it’s doubtless quite accurate. But it does highlight that as computing power increases and applications are written to leverage it, simulations like the one above will eventually seem as unrealistic to our children’s sensibilities as Dire Straits’ old Money For Nothing music video looks to today’s generation. We won’t need layers. Or 95 hours of computation time to output 1/10th of a second of simulation.
I’m not talking about the visual elements here, but accurate physical simulation that starts to approach the visual fidelity of what we can do now. The ability to do it all at the same time – in real time – at amazing levels of realism.
–
A while back I was going to post about water. I’d read something somewhere about how complex it was to simulate, and the next day I happened to watch a video highlighting the visual effects for the upcoming videogame “BioShock”. Go watch the water effects video (Link – one of many on YouTube) and then imagine it was more than just a visual effect. Imagine it was a simulation where the visual image represented the fidelity of all the other physical property calculations; properties of the kind a CAD system lets a person include in a 3D model, and for which software intended for composite material analysis finds a different but related virtual use.
When we have that kind of computational power, what will virtual worlds look like? What will products start looking like? And what kinds of cross-overs will be commonplace?
{Update:
Wish I’d seen this entry on Next Generation earlier. From “Bringing Water to Life” (Link):
Blade Interactive’s new toy, a fluid dynamics engine that accurately models water and its physical effects on objects… The engine’s creator, Huw Lloyd, has wanted to implement proper fluid dynamics in a game since studying flows of gas in space for his PhD in astrophysics. “Water hasn’t been done very well in games, it’s just a flat plane,” he says, explaining that this is no cut-down simulation. “This is proper fluid dynamics, with all the emergent behaviour in eddies and turbulence.”
…
The system works by calculating a flow field – the speed and direction of flow at any point in space. This flow field affects all objects and characters in that space, as well as graphical representations of bubbles and foam, so it truly becomes part of the game environment.
…
So bullets will cut Saving Private Ryan-style trails of bubbles into the water and leave clouds of blood that the system will naturally diffuse, and the acoustics of rooms will alter as water levels change. With the knowledge that the water is the star of Hydrophobia, care is being given to making sure the water will look as good as it behaves, with it reflecting and refracting accurately, being murky or clear, creating caustics – light reflections on walls and ceilings – and leaving damp evidence of its passing.
Excellent.
Would love to contribute some 3D models to this thing.
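The flow-field idea in the excerpt above can be sketched very simply: a velocity defined at every point in space, sampled to push objects along. This is only a toy illustration of the concept, assuming a fixed swirling field and basic Euler integration; a real engine like the one described would be solving actual fluid dynamics rather than sampling a hand-written field.

```python
# Toy sketch of a "flow field": velocity (speed and direction) defined
# at every point in space, which is then used to carry objects along.
# The field here is a fixed vortex around the origin; a real fluid
# engine would compute this field from the fluid equations instead.

import math

def flow_velocity(x, y):
    """Toy velocity field: a vortex swirling around the origin."""
    return (-y, x)  # velocity is perpendicular to the position vector

def advect(pos, dt, steps):
    """Carry a particle through the field with simple Euler integration."""
    x, y = pos
    for _ in range(steps):
        vx, vy = flow_velocity(x, y)
        x += vx * dt
        y += vy * dt
    return x, y

if __name__ == "__main__":
    # A particle starting at (1, 0) gets swept around the vortex.
    x, y = advect((1.0, 0.0), dt=0.01, steps=100)
    print(f"particle swept to ({x:.2f}, {y:.2f})")
```

The same sampled field could drive bubbles, foam, debris, or characters, which is what makes the approach feel like "part of the game environment" rather than a surface effect.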
{Image Copyright © 2007 Blade Interactive Studios; note that the above image brightness/contrast altered from the original.}