Having played with the OGLE tool for a few days (version 1b), I’ve learned some things that might be helpful to those hoping to have their customized Second Life avatars or objects fabricated. My own goal is to take the avatar geometry and convert it to “solid” CAD data, but the process I’m using to get there might be useful even if your goal is to just send out an .obj or .stl polymesh file.
First off, be aware that the captured videostream data most likely isn’t “clean”… even if it looks good. Search for duplicate triangles and polymeshes, since there will probably be plenty. After deleting any duplicates, select and “merge” all the remaining polygon edges and vertices, since there will almost certainly be multiples of those as well.
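If you’d rather script that cleanup than hunt for duplicates by hand, something along these lines should do it. I did the work in my modeling package, so treat this as a rough sketch only; it assumes the Python trimesh and numpy libraries and a made-up capture file name.

import numpy as np
import trimesh

# Load the raw OGLE capture (hypothetical file name).
mesh = trimesh.load("ogle_capture.obj", process=False)

# Merge coincident vertices so shared edges actually share indices.
mesh.merge_vertices()

# Drop duplicate triangles: two faces are duplicates if they reference
# the same three vertices, regardless of winding order.
key = np.sort(mesh.faces, axis=1)
_, first = np.unique(key, axis=0, return_index=True)
mesh = trimesh.Trimesh(vertices=mesh.vertices,
                       faces=mesh.faces[np.sort(first)],
                       process=False)

mesh.export("ogle_capture_clean.obj")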
Secondly, for a Second Life avatar, there are some areas on the head that probably need attention. The variability of the base mesh (aka “Ruth”) can yield some really tortured areas. So before doing anything else I isolated the head from both the torso and the hair, then tweaked a few vertices, mostly around the eyes, to make some later steps easier. I deleted the triangle strips for the eyelashes and fixed the mouth, deleting the “teeth” and cleaning up the area in general. I didn’t mess with the ears, although they probably needed some attention as well.
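For what it’s worth, since the capture arrives as disconnected pieces, the same sort of script can separate them into connected components instead of lassoing the head by hand; figuring out which piece is which is still up to you. Again a sketch assuming trimesh and placeholder file names.

import trimesh

mesh = trimesh.load("ogle_capture_clean.obj", process=False)

# Each disconnected piece (head, torso, eyes, hair, etc.) comes back as
# its own mesh; export them so they can be fixed up individually.
pieces = mesh.split(only_watertight=False)
for i, piece in enumerate(pieces):
    piece.export(f"piece_{i:02d}.obj")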
With those areas handled, I addressed the openings at the eyes and the base of the neck, closing them with additional triangles so the head became a sealed volume. I did the same with the eyeballs. With everything a closed volume, I could then do a boolean operation and join it all together. Not everyone likes booleans and that’s fine; this is just how I did it because I didn’t feel like pushing polys.
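Here’s a scripted take on the same idea, again assuming trimesh; the boolean step additionally needs an external engine (e.g. Blender or OpenSCAD) installed, the file names are placeholders, and automatic hole filling only catches small, simple openings, so larger gaps like the neck still want hand-built triangles.

import trimesh

# Hypothetical pieces, already patched up by hand where needed.
head = trimesh.load("head_patched.obj", process=False)
eye_l = trimesh.load("eye_left.obj", process=False)
eye_r = trimesh.load("eye_right.obj", process=False)

for part in (head, eye_l, eye_r):
    part.fill_holes()                  # only fills small, simple openings
    if not part.is_watertight:
        print("still has open edges; patch these by hand")

# Union the closed volumes into one solid shell.
solid = trimesh.boolean.union([head, eye_l, eye_r])
solid.export("head_solid.obj")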
That done, I combined the head, the torso and the legs into one mesh. The hair, unfortunately, varies according to your individual avatar settings and consequently may have, as mine appears to have, both an enclosed volume and sections that extend from edges. That’s a mess to deal with, so I didn’t join the hair to the rest of the form. In the above image you can see the finished head with the unmodified/unattached hair in place. Note how rough the mesh still is.
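Stitching the pieces back into a single mesh is the easy part and can also be scripted; a minimal sketch, once more assuming trimesh and placeholder file names:

import trimesh

parts = [trimesh.load(name, process=False)
         for name in ("head_solid.obj", "torso.obj", "legs.obj")]

# Concatenate into one mesh, then merge the coincident vertices along
# the seams so the pieces actually share edges.
body = trimesh.util.concatenate(parts)
body.merge_vertices()
body.export("body_combined.obj")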
Because I’m hoping to go solid, I converted the body (minus the hair, of course) to Sub-D geometry, and from there to NURBS surfaces. The final stage of this conversion took some time, and the resulting file is an unwieldy 140 MB.
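The Sub-D and NURBS work happened inside my modeling package, not in code, but the file-size blow-up is easy to demonstrate: every subdivision pass splits each triangle into four. A rough illustration, again assuming trimesh (plain midpoint subdivision only; the actual Sub-D smoothing and the NURBS fit have to happen in a surfacing tool):

import trimesh

body = trimesh.load("body_combined.obj", process=False)
print(len(body.faces), "faces before subdivision")

# Each pass quadruples the triangle count, which is why the surfaced
# file balloons so quickly.
for level in range(2):
    body = body.subdivide()
    print(len(body.faces), "faces after pass", level + 1)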
For this effort the surface file was then imported into an older/lighter version of Pro/E. Although the geometry came in relatively well, there were still a number of gaps on import. You can see the gaps; they appear as the yellow areas around the eyes (here’s a Link to a big image). I’ve already “zipped” a few of them but, to be honest, this will take some significant effort. Additionally, since each avatar can be unique and present different problems, doing such a conversion on a regular basis probably isn’t practical. So at this point I might try one or two other options, but I’m not hopeful a quick solution will be found. We’ll see.
Nice work Sven. I’m keeping my eye on this blog for your other solutions.
That mesh looks nasty, btw. I’m glad I don’t have to zip those gaps. Yikes.
Prims: no problems anymore. These meshes on the other hand…
I’ve only played with the 2D aspect of this, but bringing in what (small) bits I learned playing with Poser: does the avatar have a name? Would it be possible, maybe, to just programmatically extract the “good” data and ignore the bad?
-Andy
(aka Jarod Godel)
When captured, the avatar not only has no name, it’s in pieces. Now, anything can be programmed, I suppose. But under the circumstances, these morphable, tortured meshes are rarely “good” to start with. So the issue comes down to both time and whether the effort is worthwhile, and I seriously doubt this comes anywhere near being worth either.
Going “solid” isn’t mandatory. I just wanted more options. I’m also more interested in the prim data anyway and that’s a non-issue now.