Spatial Pixel

Making Design Computable

How software and other media can influence thinking, and how designers can begin to control it.

When designers engage a design problem, they often use multiple tools and media to describe and develop their design objects before actually building those objects (or having them built). Their thinking must be "mediated," that is, translated from the language of one medium into the language of another, which, depending on the tools, can burden design work in an unproductive way.

In general, several layers of mediation exist:

  1. The "medium" used to represent the design object. We don't usually have an opportunity to work directly with the final object. Painters and sculptors do, but designers of objects like buildings, websites, and cars must use drawings or software to make representations and models first, then operate on those representations. Book designers may use Adobe InDesign, which uses a book metaphor, but also employs additional concepts like "master pages" that must be learned.
  2. The properties of the design object itself. For book designers, this means the properties of, well, books, meaning pages, paper types, binding, grid, etc. For architects, this means walls, column grids, structural systems, and so on. Each of these systems has its own language, rules, and properties.
  3. The design domain. For book design, this means readability, information organization, meaning, style, etc. For architecture, it includes the functions of buildings and their spatial or experiential qualities, form, organization, etc. This is where we want to be making design decisions.

We can think of each of these points as employing a different semantic model. The term "semantic model" comes from computer science and roughly refers to the organization of, the relations between, and the language used to describe the entities in a system. Each such model has its own rules, which are typically foreign to those of other models, and contracts must be written to coordinate them. Similarly, when we use any tool, we must adapt to its usage model, a proprietary and artificial means of manipulation necessary to leverage its benefits. The problem explored here is the substantial disconnect between the semantics of tools (or media) and those of designed objects.
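To make the layered semantic models concrete, here is a minimal sketch in Python (all class names and rules are hypothetical, invented for illustration): one class speaks the medium's language of master pages, the other speaks the book designer's language of information hierarchy, and a "contract" function coordinates the two.

```python
# Hypothetical sketch: two semantic models and the "contract" between them.

# The medium's model: InDesign-like concepts (assumed names, for illustration).
class MasterPage:
    def __init__(self, columns, margin_mm):
        self.columns = columns
        self.margin_mm = margin_mm

# The design domain's model: what a book designer actually reasons about.
class Chapter:
    def __init__(self, title, hierarchy_depth):
        self.title = title
        self.hierarchy_depth = hierarchy_depth  # levels of headings, sidebars, etc.

# The contract: a translation the designer must maintain, by hand or by code,
# every time they move between the two models.
def master_page_for(chapter):
    # An invented rule: deeper information hierarchies get more columns.
    columns = 1 if chapter.hierarchy_depth <= 1 else 2
    return MasterPage(columns=columns, margin_mm=20)

page = master_page_for(Chapter("Typography", hierarchy_depth=3))
print(page.columns)  # 2
```

The point of the sketch is the translation step itself: neither model can express the other's concerns, so the `master_page_for` contract is where the mediation, and its cognitive cost, lives.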

Example: Web Design Software

Even web designers experience a disconnect between the software developers build for them and the tools they would actually need to achieve their goals. In an insightful post and extensive thread on his blog, Jason Santa Maria argues for a tool that supports the web designer's thinking and workflow: one with accurate typography, dynamic styles, intelligent page handling, and Photoshop-like raster editing, and one that purposefully disregards the "direct-to-production" workflow most web software supports. Generating HTML and CSS is not conducive to designing a website; in fact, it hinders that thinking. If web designers think in terms of interaction, sequencing, typography, layout, information hierarchy, and so on, then their primary tools should support that thinking.

Example: Paper Drawing and CAD Software

CAD and 3D modeling software do not mimic the semantics of architecture or buildings.

When a designer uses a constructed drawing method like 2-point perspective to represent a building, she isn't thinking about the building while producing the drawing. She thinks about lines, line weights, geometry, parallelism, perpendicularity, intersections, vanishing points, station points, picture planes, and so on. At a higher level, a drawing involves portrayal, outline, emphasis, clarity, etc. Only when the representation begins to emerge from the collection of construction lines can she begin to perceive the image of a building.

When we use a software application instead, we continually negotiate with an app's interface and data model to manipulate yet another model, the one that represents the building. The point is that when we use software, we never manipulate the object directly, nor even the object's model; we can only manipulate the interface model of the tool (clicks, selections, points, etc.), which is typically completely foreign to the language of the thing we're designing.

Switching Modes Imposes an Unproductive Cognitive Burden

Once the digital representation appears on screen, we are freed from the mental burden of imagining, and we can begin a process of perception and interpretation. We engage the building's semantic model, which consists of structure, columns, windows, stairs, floorplates, etc., with the hope of controlling a building's architecture, i.e. form, order, space, experience. The problem lies in having to switch mentally between these incompatible modes of thinking.

The medium's representation always exposes a partial, biased picture. A perspective projection of a building is not a building, nor does it even express an authentic picture of the building. A medium will only emphasize what it is tailored to emphasize.

Perspective projection represents space as converging towards vanishing points. Given this emphasis, spaces designed through perspective projection (a rare occurrence in the digital age) tend to be more linear, being conceived from imaginary, "privileged" positions.

Similarly, axonometry tends to yield spaces that are cubic in nature, designed from a "rational," third-person point of view. These projections maintain true measures regardless of the angles used, though curvatures are more difficult to draw. More importantly, our design decisions are shaped by the media we employ.

Designers Have Lost Control Over Tools

Today, commercial design software companies purposefully engineer their products to serve the broadest possible audience. Robert McNeel & Associates' Rhino 3D is a prime example: it is marketed to everyone from nautical engineers to jewelry designers. Universality necessitates generic functionality. What do these customers have in common? Geometry.

Designers have lost control over their tools. Whereas perspective and axonometry were simultaneously discoveries and inventions, designers have mostly relegated the production of tools to software companies and developers. Thus 3D modeling software speaks a language of digital geometry (control vertices, lofting, etc.), not a language of buildings or real-world objects. To be most conducive to design thinking, a wall should behave like a wall, not like a NURBS surface or a triangle mesh.
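As a sketch of what wall-like behavior might mean, here is a hypothetical Python model (the class and its methods are assumptions for illustration, not an existing API) in which a wall exposes architectural operations rather than geometric ones:

```python
# Hypothetical sketch: a wall that speaks the language of buildings,
# not of control vertices. All names and units are illustrative assumptions.
class Wall:
    def __init__(self, length, height, thickness):
        self.length = length        # meters
        self.height = height
        self.thickness = thickness
        self.openings = []          # (width, height) of doors and windows

    def add_window(self, width, height):
        # A domain-level operation: no trimming of NURBS surfaces,
        # no re-meshing; the wall itself knows what a window is.
        self.openings.append((width, height))

    def net_area(self):
        # Gross face area minus openings, a quantity architects reason about.
        gross = self.length * self.height
        return gross - sum(w * h for w, h in self.openings)

wall = Wall(length=6.0, height=3.0, thickness=0.3)
wall.add_window(1.0, 1.5)
print(wall.net_area())  # 16.5
```

The contrast with a modeling package is the vocabulary: `add_window` and `net_area` belong to the building's semantic model, whereas a generic tool would offer only surface-trimming and area-measuring commands that apply equally to hulls and jewelry.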

Medium and Creativity

One form of creativity comes from the struggle between medium and designer, either from constraint (attempting to describe something beyond the comfort zone of the medium), or from serendipity (a mistake where the designer loses control for a moment, or from the medium revealing something that couldn't be seen in another medium).

The guiding questions are: Is this a beneficial disconnect between action in the tool and effect in the model? Is there a better mechanism to express design intent? What if we had a tool that mimicked our design language more closely?

Designers can benefit from developing tools that support their design voices. With platforms like Processing, we can now invent our own media. We can also invent models and systems of models that morph and behave over time, that simulate real-world environments or activate design concepts.

Designers can build better systems than medium-based tools.

The generic functionality of commercial software comes not from the nature of code itself, but from market demand. Flexibility must be built into the software, which can be difficult to design, expensive to implement, and unsustainable as a business strategy. But code is language and thus inherently flexible. Because of this, designers have an opportunity to build tools and models that more closely reflect the semantics of design objects and concepts.

Instead of being separated from our design concepts by keyboard and mouse gestures, by NURBS surfaces and triangle meshes, and then by modes of representation, we can build customized software tools, models, and systems that exhibit richer, more meaningful behaviors using techniques from design computation. We can create semantic design models, which abstract the elements of a system and their behaviors, and enable a kind of straight-from-idea-to-object process. These models will behave like design objects, but will be built in such a way that systems of behavior and concepts can be applied to them fluidly.

Imagine being able to think in terms of #3 above, the language of the design domain. Books could be designed more abstractly by describing layout ideas against an information system, and the model would decide where to "break the grid," and so on.
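A toy version of that idea, with entirely assumed element roles and grid rules, might look like this in Python: the designer declares the information system, and the model decides each element's span on the grid.

```python
# Hypothetical sketch: abstract layout intent, with the model deciding
# where to "break the grid". Roles and rules are invented for illustration.
GRID_COLUMNS = 3

def layout(elements):
    """Assign each (role, text) element a column span based on its place
    in the information hierarchy, rather than by manual placement."""
    plan = []
    for role, text in elements:
        if role == "headline":
            span = GRID_COLUMNS   # headlines break the grid: full width
        elif role == "pull_quote":
            span = 2              # emphasis earns extra width
        else:
            span = 1              # body copy stays on the grid
        plan.append((text, span))
    return plan

book = [
    ("headline", "Making Design Computable"),
    ("body", "When designers engage a design problem..."),
    ("pull_quote", "A wall should behave like a wall."),
]
for text, span in layout(book):
    print(f"{span} col(s): {text}")
```

The designer here never positions a text frame; she states what each element *is*, and the grid-breaking decision is a behavior of the model that can be revised in one place.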

Moving Ahead

Partnerships between designers and creative programmers are needed to implement this. The goals are that:

  1. programmers and designers can develop a coding "language" that makes it easier for designers to create and manipulate new models, and
  2. designers can write high-level descriptions of design propositions in interactive and semantically rich forms.

I'm Allan William Martin, a product manager, computational designer, and software engineer in New York City. I work at Pivotal on Cloud Foundry, a cloud-native application platform. I've taught at the Yale School of Architecture, New York Institute of Technology, and General Assembly.
