Next Generation Product Development Tools, Part 15

In Part 10, I claimed that “the future of CAD is cooperative”. This might seem obvious, but one look at the industry’s current difficulties reveals the magnitude of the non-cooperative problem. It’s an inconvenient fact of life for too many of us, and the reason I spent so much time examining the ongoing shift in the CAD industry – a shift arguably driven by efforts to circumvent artificial barriers and by the market reactions those efforts provoked. Having spent a fair portion of that time discussing two of the three roadblocks – data portability and proprietary formats – I now want to briefly touch on the third: extensible semantics.

Before I start, however, let me preface this section by stating that I know only enough about this topic to be aware that I don’t know very much. Assuming you know less than I do, let me share what I think I know and invite those with expertise to comment.

The Future of CAD is Semantic

By “semantics”, we mean the elements of meaning – and thus understanding – within communication. And 3D models are nothing more than a different form of communication, one based on mathematics and coordinate systems.

If I create a house using a CAD application, almost anyone could identify the elements by sight: walls, windows, doors, etc. Suppose each of those elements were tagged appropriately – “wall”, “window”, or “door” – such that the system also understood their function. Furthermore, imagine those 3D representations behaved within the context of their meaning because the system understood what they represented. For example, attempting to put a 3D thing tagged “window” on a floor might raise a system alert: “Are you sure you want a window to the dirt underneath?” The system understands that windows belong on perimeter walls, not on floors.
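To make the idea concrete, here is a minimal sketch of how such a placement rule might be expressed in code. Everything here – the `Element` class, the `PLACEMENT_RULES` table, the `check_placement` function – is hypothetical, not any real CAD API; it only illustrates the shape of a semantic check.

```python
# Hypothetical sketch of semantic placement rules in a CAD system.
from dataclasses import dataclass

@dataclass
class Element:
    tag: str        # semantic tag: "wall", "window", "door", "floor", ...
    host_tag: str   # tag of the element it is being placed on

# Each semantic tag lists the host tags on which it makes sense.
PLACEMENT_RULES = {
    "window": {"perimeter_wall"},
    "door":   {"perimeter_wall", "interior_wall"},
}

def check_placement(element: Element):
    """Return a warning string if the placement violates a rule, else None."""
    allowed = PLACEMENT_RULES.get(element.tag)
    if allowed is not None and element.host_tag not in allowed:
        return f"Are you sure you want a {element.tag} on a {element.host_tag}?"
    return None  # placement is semantically valid, or the tag has no rule

print(check_placement(Element("window", "floor")))          # warns
print(check_placement(Element("window", "perimeter_wall")))  # no warning
```

The point isn’t the code itself but that the rule lives in the tag, not the geometry: the system objects to a window on a floor without knowing anything about the window’s shape.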

A good example of how a 3D modeler might use semantics is the soon-to-be-released videogame Spore, where different creature parts – feet, hands, mouths, eyes, and so on – understand what they are (most likely by virtue of the category from which they are chosen). You can see the system in action in the following video:

Notice how the drag-and-drop “eye” knows it’s an eye; the “leg” knows it’s a leg. The resulting creature doesn’t try to see with its foot or walk using its eyes. That may sound stupid, but let’s not forget that computers are stupid: they only know what we tell them. And my very expensive, high-end CAD application isn’t even as smart as this relatively cheap videogame.

Now watch the following video and imagine such functionality were built into our CAD applications; imagine those “foreign elements” both understood their own function and were tagged such that another CAD system understood that function as well:

Let me provide an example. Imagine that the reinforced boss in the above video (the first bit of geometry selected and moved from one CAD application to the other) understands its proper function and, immediately upon placement, searches the parent model for the necessary complementary geometry – perhaps nothing more than a protruding post on another component, or a hole cut in a mating part through which a screw is inserted. Finding no complementary geometry, the boss flashes red and sounds an alert: “Boss #2039 has no suitably functional complementary geometry.” At that point the designer could make the changes shown in the video – changes that might never have been made without the element alerting the designer to the potential problem.
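The search-and-alert behavior described above can be sketched in a few lines. Again, the data shapes and function names (`find_complement`, `on_placement`, the `mates_with` field) are entirely hypothetical stand-ins for whatever a real semantic kernel would use:

```python
# Hypothetical sketch: a dropped component searches the parent model
# for complementary geometry and raises an alert if none is found.

def find_complement(assembly, feature):
    """Return the parts whose features can mate with `feature`."""
    return [part for part in assembly
            if feature["mates_with"] in part["features"]]

def on_placement(assembly, feature):
    """Called when a semantic feature is dropped into an assembly."""
    if not find_complement(assembly, feature):
        return f"{feature['id']} has no suitably functional complementary geometry."
    return None  # a mating feature exists somewhere in the model

boss = {"id": "Boss #2039", "mates_with": "screw_hole"}
parts = [{"name": "cover", "features": ["snap_fit"]}]  # no screw hole anywhere

print(on_placement(parts, boss))  # alerts, as in the scenario above
```

Add a part carrying a `"screw_hole"` feature to the assembly and the alert goes away – the boss has found its complement.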

A boss is relatively simple, so imagine scaling this kind of intelligence up to more complex components. Imagine instead dragging and dropping an entire plumbing system onto a virtual house, or an automatic transmission into a virtual car. Imagine designing a car in CAD and then going to the websites of different engine and transmission suppliers where 3D models of their products could be downloaded and dropped into that virtual car just like the legs on that Spore creature. This is the potential power of semantics.

The Future of CAD is Extensible

By “extensible”, we of course mean the ability to extend. Consider the Internet’s markup languages. HTML has pre-defined elements; you can’t just create your own. On the other hand, XML, the Extensible Markup Language, allows anyone to define their own elements.

The Extensible Markup Language (XML) is a general-purpose specification for creating custom markup languages. It is classified as an extensible language because it allows its users to define their own elements.

Combining these two ideas, “extensible semantics” would allow the definition of new semantic types in the same way that XML allows new element types.

For example, imagine someone designing a house with whatever pre-defined semantic tags come with the CAD application being used. Now imagine the design incorporates “interactive tiles” (i.e. when someone steps on a tile, it reacts by glowing, making a sound, triggering an alarm, or whatever). But the CAD application, for whatever reason, has no tag for such a component – even though you can go out and buy them today. With an extensible system, the designer could create a custom tag as necessary. This would ensure that the full functionality of the component is communicated across systems, and that the virtual interactive tiles are properly triggered during simulation just as they would be when deployed in the tangible world.
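In the spirit of XML’s user-defined elements, an extensible semantic system might expose a registry where designers define new tags alongside the built-in ones. The sketch below is purely illustrative – `define_tag`, `SEMANTIC_TAGS`, and the `on_step` behavior are made-up names, not any vendor’s API:

```python
# Hypothetical sketch of an extensible semantic-tag registry:
# the application ships built-in tags, and a designer can register
# new ones with simulation behavior attached.

SEMANTIC_TAGS = {}  # tag name -> properties and behaviors

def define_tag(name, **properties):
    """Register a new semantic type, XML-style: users define their own."""
    SEMANTIC_TAGS[name] = properties

# Built-in tags shipped with the (imaginary) CAD application.
define_tag("wall", load_bearing=True)
define_tag("window", hosts=("perimeter_wall",))

# Designer-defined tag for a component the vendor never anticipated.
define_tag("interactive_tile",
           hosts=("floor",),
           on_step=lambda: "glow")  # behavior invoked during simulation

tile = SEMANTIC_TAGS["interactive_tile"]
print(tile["on_step"]())  # the simulator can now trigger the tile
```

Because the custom tag carries its own behavior, a simulator – or another CAD system reading the model – needs no special knowledge of interactive tiles to handle them correctly.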

Even more compelling is when designers can invent entirely new components and assemble them in lock-block fashion with existing elements. This would be the equivalent of my being able to model something entirely new outside of Spore’s 3D modeler yet still be able to drop it onto a Spore creature and have the system recognize it as having the functionality with which I’ve tagged it (i.e. I could tag my new model as “legs” even though they’re actually wheels, and the Spore engine would attempt to use them as legs – or rather, as a means of locomotion).
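The key move in that scenario is that the engine dispatches on the tag, not the geometry. A toy sketch of tag-based dispatch (all names hypothetical, loosely inspired by the Spore example rather than taken from it):

```python
# Hypothetical sketch: an engine grants function based on semantic
# tag alone. Geometry tagged "legs" provides locomotion even if its
# shape is actually a set of wheels.

BEHAVIORS = {
    "legs":  "locomotion",
    "eye":   "vision",
    "mouth": "eating",
}

def attach(creature, part_tag):
    """Grant the creature whatever function the part's tag declares."""
    function = BEHAVIORS.get(part_tag, "decoration")  # unknown tags are inert
    creature.setdefault("functions", set()).add(function)
    return creature

creature = {}
attach(creature, "legs")   # the geometry could be wheels; the tag says "legs"
print(creature["functions"])  # the creature can now move
```

This is exactly why the tagging matters more than the modeling: once my wheels are tagged “legs”, every downstream system treats them as a means of locomotion.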

With all three barriers removed, engineering concepts could be generated much more quickly, and with greater confidence, than they are today. Such tools could potentially usher in an innovation Renaissance. We might even witness advancements on par with the widespread adoption of part standardization in the late 19th and early 20th centuries: the birth of whole new industries and the emergence of unique business models alongside them. However, I doubt we’ll see such interoperable, extensible, semantics-based systems any time soon. Short-term returns rule, and there are plenty of legitimate and not-so-legitimate problems with which to contend.

5 thoughts on “Next Generation Product Development Tools, Part 15”

  1. … this is a great series – I’m referencing it extensively in an upcoming publication, which among other things has kept me quiet in the blogosphere lately – hopefully you’ll find something of value in these rapid-fire comments … @Spore – without in any way detracting from the immense talent of Will Wright, it should be noted that game makers have the luxury of bending the laws of physics to the needs of the game, which is a large part of why we haven’t seen the Sims as a generic platform – there are interviews where Wright talks about this in some detail

    … in the context of where the industry currently sits along its evolutionary path, the three roadblocks you articulate do loom large. However, zooming out a bit reveals that they are symptoms of a more fundamental problem … the real key here is simulation, and although the idea put forth 22 years ago by Autodesk’s John Walker to make CAD the Heart of Computer Science was thwarted by the decoupling of hardware from its software roots, trends towards massively parallel, reconfigurable hw combined with the reinvention of programming may bring us back to the future faster than most think possible …

  2. Appreciate the reply. Always good to hear from you.

    Regarding Spore, while what you say is true, I’d play devil’s advocate and say that engineering also bends some laws … when doing so falls within reason (e.g. ignoring compressible flow effects for low subsonic flight). That said, you make a valid point, and the degree to which Wright is bending laws is undoubtedly problematic. However, the intent wasn’t to suggest Spore is at that level; only that it points to where CAD could and should be in terms of high-level usability.

    Regarding the “more fundamental problem”, I actually struggled with what I consider to be the Problem and had an additional paragraph at the end touching upon it. But that’s a can of worms I didn’t want to open here, because the issue I perceive isn’t whether we are capable, but whether we are willing.

    As to putting CAD – or rather the mindset that accompanies the goal of “bottling reality” – at the heart of things, I saw a glimmer of that light a few years ago when I first gave serious thought to the underlying issues I was encountering during a conversation I had (mentioned here – Link). The basic thought seems very much in line with what Walker seems to be saying. Better late than never, eh?

    And thanks for the links. I’ll have to give them additional attention.

  3. >ignoring compressible flow effects …
    quite true and we even fiddle with constants to the extent that Mother Nature allows, so you’re right about it being a matter of degree. I also agree that Spore is pointing in the right direction. Some interesting synergy exists between the “procedural generation” techniques Wright is using, direct-model/synchronous technology and the hw & sw approaches I linked to – they’re all dynamic approaches that avoid making hard-to-undo decisions until the last possible moment. Conceptually this really isn’t a radical notion – “synchronous” technology in automatic transmissions is used every day. Imagine an alternate universe where techno-priests win over investors and pundits with “benchmarks” which clearly “prove” that unsynchronized, manual transmissions are faster, more powerful and generally more efficient. Many investors, pundits and consumers are too embarrassed to admit that they were uncomfortable double-clutching through a dozen gears – let alone that they didn’t understand why they had to in the first place. Then MicroSync comes on the scene with a semi-synchronized manual transmission that works with limited horsepower engines … it’s always better late than never, but it would be even better if institutions of higher learning and pundits would turn on the lights :-)

  4. synergy exists between the “procedural generation” techniques Wright is using, direct-model/synchronous technology and the hw & sw approaches I linked to

    I’d not made quite the same connection, but I understand what you’re saying. And enjoyed your transmission analogy. Nice.

  5. Pingback: reBang weblog
