This is the first of this term’s lectures from CoDe 1110 - Computational Design Theory 1. It’s my take on how we got here: a broad survey of the origins of what we’ll be talking about for the rest of the term.

Below is a lightly edited transcript:


Okay, so we’re going to talk about this thing which people call architecture’s “digital turn”, but I don’t think you can really get into that without a whole bunch of context.

I wanted to go back over the…

I had this in last week’s [lecture] and I thought that it would be good to keep asking this question; you keep asking this question of yourself. Why are you here? What’s the point of being here? Are you just here to, like, please your parents, or is this something that is of value to the world? So I just want you to keep asking that question and coming back to: what’s the point of this? What’s the value in it? Whenever you come up with some ideas on that, let’s talk about it.

Really this kind of computing in architecture is a smaller part of “what is technology in architecture?”, and so the big question that surrounds everything is “what is a technology?”

I think the easy way to think about it is to split the world, much like they did in the—I don’t know, 16th or 17th century—into two things: everything that is art and everything that is nature. Art is anything made by people, anything that is artifice or artificial, and so the world of art contains all of technology, all of human endeavour in general. It’s difficult to keep track of that because of this Arthur C. Clarke quote: “any sufficiently advanced technology is indistinguishable from magic”. The idea that if something works really well you have no idea how it works; it just does the job and you think, well, maybe it’s magic. If you were to show an iPhone to an 1850s pilgrim they would probably burn you for being a witch. But then we have a sort of similar problem. I don’t know if this is an official quote because I think I made it up, but I might have made it up after hearing it and then forgetting it: the idea that “any sufficiently traditional or well-known technology is indistinguishable from nature”.

People talk a lot about what is “natural”, particularly if you look at things like what American conservatives consider to be natural, that sort of stuff about the idea of wearing clothes or modesty or something like that. It’s important to remember that everything from poking a stick into an anthill to language, clothes, shoes, candles, maps and writing, all of those are technologies; they are essentially what set us, as humans, apart from most of the rest of the animal kingdom. Let’s give that a second: if we’re looking at “technology”, that could be a lot of things, and so this lecture is a few separate ideas that we can then tie together at the end. It’s also kind of a history; it’s just not a history that I’m going to tell in a single line, it’s a history I’m going to tell in a series of little examples. These are ideas that come up in a lot of the readings and so I thought I’d have a go at a few.

The first one is “material computation”. The OG material computation was: build something, see if it falls down, see if it leaks; if it doesn’t, you’ve computed that it works and you can keep going. Cathedrals would traditionally take up to, like, a thousand years to build. It’s not unusual for a cathedral to take over 500 years to build. So if you are saying a generation is roughly 20 years, you’re talking about 25 generations of people who’ve passed this down from generation to generation as the thing that they do, and it’s knowledge about whether you can build this insane flying buttress on Notre-Dame. These things would have just fallen down all the time until they figured out exactly how to build one that didn’t fall down anymore. Then we get onto stuff like this—you’re going to see this project a lot over the next three years—this is the Sagrada Família in Barcelona by Gaudí.

Did anyone go to the talk the other day, or watch it online, where someone was talking about this? Hopefully it’s recorded somewhere and you can dig it out.

What he did was more actively compute these load paths. He was interested in making these shapes as efficient as possible, in terms of getting the load to pass through the structure into the ground under gravity. He realized that you could do this by turning the model upside down and using gravity to pull it into position, and then turning the picture of the model back up again and building that. You can see this inverted model here, the hanging chain model, which you then flip over and you get the structure: the compression structure. He was using gravity to compute—in an analog way—the shape of the building that was going to come out of it. People say “oh well, this is very natural” or “this is kind of unnatural” depending on how they think about it, but the shape of that is dictated by the forces of nature, in the sense that it’s a scientific concept, and you can see the diagrams of how this all pans out, because these little weights hanging on here are different weights, and sometimes he has sheets in there to represent walls rather than buttresses, that kind of stuff.
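To make that “computing with gravity” idea concrete, here is a minimal sketch, in Python, of a hanging-chain relaxation. It is not Gaudí’s method, just an illustration under invented numbers: point masses joined by stiff links settle under gravity into a catenary, and flipping the result upside down gives you the pure-compression arch.

```python
# A minimal hanging-chain sketch (illustrative numbers only): point masses joined
# by near-rigid links relax under gravity into a catenary; invert y to read the
# result as a pure-compression arch, which is the trick of the upside-down model.

N = 21                 # number of nodes in the chain
SPAN = 15.0            # horizontal distance between the two fixed supports
REST = 1.0             # rest length of each link (the chain is longer than the span)
GRAVITY = -0.05        # downward nudge applied each relaxation step

nodes = [[i * SPAN / (N - 1), 0.0] for i in range(N)]
fixed = {0, N - 1}     # the two supports are pinned

for _ in range(5000):  # crude iterative relaxation
    for i in range(N):
        if i not in fixed:
            nodes[i][1] += GRAVITY                      # gravity step
    for i in range(N - 1):                              # constraint step: restore link lengths
        (x1, y1), (x2, y2) = nodes[i], nodes[i + 1]
        dx, dy = x2 - x1, y2 - y1
        dist = (dx * dx + dy * dy) ** 0.5
        correction = (dist - REST) / dist / 2.0
        if i not in fixed:
            nodes[i][0] += dx * correction
            nodes[i][1] += dy * correction
        if (i + 1) not in fixed:
            nodes[i + 1][0] -= dx * correction
            nodes[i + 1][1] -= dy * correction

arch = [(round(x, 2), round(-y, 2)) for x, y in nodes]  # flip tension into compression
print(arch)
```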

Then you can come right up to much more modern stuff, and this is an early [Frank] Gehry model for a Maggie’s Centre in Scotland, where he’s using the fact that this special thick paper he uses doesn’t stretch at all, so any shape that he can make with it he knows is going to be singly-curved and could be made out of relatively simple materials. Whereas if you used another model-making technique you wouldn’t be able to guarantee that it could be built as easily. So people—particularly, like, The Simpsons—like to say that Gehry was just screwing up paper and seeing what happened, but he’s using the paper to produce some kind of approximation of a thing that’s buildable; he knows it’s buildable because the material won’t let him do other things.

That leads into this idea that you’ve got single curvature; that’s a mathematical concept. There are all these mathematical concepts in designing buildings and it starts with the Greeks—well, it possibly starts with the Greeks, it might have been earlier, we just don’t know!

Temple design seems like a relatively simple thing. It’s just like a birthday cake with some sticks around it, but they were very sophisticated in the way that they understood optics. If something is dead straight it will look like it bows when you look at it across a long distance, so the whole of the temple was domed slightly and all of the columns rake in ever so slightly to correct for the fact that you’ve got this perspective thing going on. By building this way it made everything look a little bit more magnificent when you were standing there. Which is kind of the idea that if Zeus is looking at your temple you want it to be the most magnificent temple to Zeus you can get, otherwise he’ll transform into a swan and steal your wife or something like that. So it’s important to understand this stuff, and the way that they would bow those shapes in order to give you the impression that it’s actually a much more imposing building than it really is.

These mathematical concepts started to come into a more general understanding in the Renaissance, when people invented perspective drawing and its use as a way to think through problems. It’s not just representing things that already exist; it’s the idea that you can represent ideas and use perspective to get you there. Then in the few hundred years after that, much in the same way that now, if you make a really cool Grasshopper package, you’re kind of a little celebrity for a bit, the tool and instrument makers were making these incredibly fancy devices. This is an “architectonic sector”, which swings out and gives you different perspective proportions. (I’ve actually got the book here if you want to have a look at some of those things.) These guys were basically the rock stars of their time: as they created new tools they allowed new types of drawing, and so there was an interaction, a discussion, between the toolmaker and the drawer who wanted to do new and cooler things. We’ll see this discussion—this interaction between technology and imagination—coming up over and over again.

In the Carpo reading, does anyone remember that Olivetti adding machine that he was playing with? This idea that the machine gave up and couldn’t do it? We’ve been trying to put intelligence into machines for a really long time.

This is Talos, who is one of Hephaestus’s creations. He created this giant robot, full of ichor (which is god-blood, basically), that would walk incredibly fast around this big island twice a day, and this is Jason and the Argonauts running away from him. One of the Argonauts, very cleverly, pulled the plug out of his heel and all of his, essentially, hydraulic fluid fell out and he wasn’t able to defend the island anymore.

But the idea that machines could replicate things has been around for a super long time. This is a scene from R.U.R., Rossum’s Universal Robots [1921], which was the play where the word “robot” first got used in this sense; it comes from “robota”, the Slavic root word for work.

So this is a “worker”: this is an undressed Olivetti adding machine. Inside it we’ve got all these levers that are doing work. You press, like, eight times eight and it will go and end up with 64. There it is with its clothes on; it’s quite an elegant little machine. The idea that you can get computers to do things is interesting and relatively recent, but the idea that machines could do intelligent things is very old, and so the question of whether the machine is thinking is something that people still fight over.

There was a fight last week over whether neural networks were conscious, and one of the guys in London, Murray Shanahan, said that “[machine] neural networks are a little bit conscious in the same way that a field of wheat is a little bit pasta”.

So where did these things come from? This is Ada Lovelace, who some of you might have come across. She often gets called the “first computer programmer”, which I think is an uncharitable way of describing her. She is the first theoretical computer scientist, because she not only worked out the logic of how the machine works, she also worked out the philosophical implications of what you could do with that. She worked with Babbage, and Babbage designed this machine. He never actually built a working version of the fully functional “difference engine” shown here, which was rebuilt in the London Science Museum. There’s a close-up of it. What would happen is you would put in a calculation by setting the dials, then you’d turn the handle a bunch of times, and eventually an answer would pop out. If you really want to understand how these work there are a bunch of really interesting videos.

Just a little side note: these don’t work in binary. The idea that computers are all binary is actually quite a recent thing; it was only really settled in the last 50 or 60 years. Before that there were computers running on five-level logic, ten-level logic, that kind of thing. They just became a little bit unwieldy and difficult to use. So now, when you think about ones and zeros, they’re not actually ones and zeros, they’re highs and lows in voltage: anything above about three and a half volts is a one, anything below about one volt is a zero, and they just bounce up and down. And that ends up taking us all the way into Bletchley Park.
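As an aside on what “turning the handle” actually does: a difference engine tabulates polynomials using nothing but repeated addition, via the method of finite differences. Here is a tiny sketch of that idea in Python; the polynomial and its values are arbitrary examples, not anything from Babbage’s tables.

```python
# Method of finite differences: tabulate a polynomial using only addition,
# which is essentially what a difference engine mechanises.
# Example polynomial (arbitrary): f(x) = 2x^2 + 3x + 5

def f(x):
    return 2 * x * x + 3 * x + 5

# "Set the dials": the starting value, first difference and constant second difference
value = f(0)                 # 5
d1 = f(1) - f(0)             # 5
d2 = (f(2) - f(1)) - d1      # 4, constant for a quadratic

for x in range(10):
    print(x, value)          # matches f(x) exactly, with no multiplication at all
    value += d1              # "turn the handle": cascade the additions
    d1 += d2
```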

Which is a historic manor house in the English countryside, but it was also the place where Alan Turing and all of the code breakers of British Intelligence were working. If you search for Bletchley Park, Google does this nice thing where it encodes the web page and then decodes it for you.

At this point, these women are “computers”. Until Turing comes up with the stuff we’re going to talk about in a second, these women were what you would point to when you said “computer”. They had the job—in much the same way as the women in the movie Hidden Figures—of doing the grunt work of the calculations. We don’t, and might never, know the full extent of their contributions because they’ve been largely written out of history. In this picture we see two women operating one of the more advanced machines later in the war; they were often very heavily involved in actually doing the computing and doing the logic, whereas they were written into the history as assistants, or as just getting cups of tea, that kind of thing. A lot of this is to do with how the history was recorded rather than the way that history actually was.

The key thing that came out of this era was the work that Church and Turing did, kind of separately, on the idea of the Universal Turing Machine. This is an important concept: the Turing machine is not a thing, it’s an idea, a thought experiment. Given an infinite tape, you could mix data and instructions on the tape and the machine would decide what to do depending on what was presented to it. That was the key breakthrough in this idea of how computers were going to work. Around that time all kinds of interesting stuff was happening: [John] Von Neumann came up with the Von Neumann architecture, which is how we understand computers to work now: storage, memory, processor, input and output. Together, basically, that is the Von Neumann architecture. With a computing machine that is able to implement a Universal Turing Machine (so you’ve got a hard drive or a paper tape or whatever and it’s doing stuff), you can compute any computable thing, although it may take an almost infinite amount of time. (This idea of almost infinite time is something we’ll probably get into next term if you do my programming course.)
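To make the “mix data and instructions on a tape” idea a bit less abstract, here is a minimal Turing-machine simulator in Python. The machine and its example program (a unary incrementer) are purely illustrative, nothing of Turing’s.

```python
# A minimal Turing machine: a tape, a head, a state, and a rule table.
# The example rules just append one '1' to a run of '1's (a unary incrementer);
# both the rules and the tape contents are illustrative, not historical.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        if head >= len(tape):
            tape.append(blank)                  # the tape is "infinite" on demand
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head = max(head + (1 if move == "R" else -1), 0)
    return "".join(tape).strip(blank)

# (current state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),        # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),         # write one more 1, then stop
}

print(run(rules, "111"))                        # -> 1111
```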

Anyway, this takes us up into the 20th century. They were doing that in the 30s and 40s. After the war there was this great sense of enthusiasm: the computers had kind of saved us, we were able to do stuff in the war because we had computers, so what could we do with them afterwards? There were all these people who were optimistic about what we could do with computers, and a lot of them were working on stuff. I think Nicole will talk about this a bit in second year; she has a much more in-depth view into the work of Archigram and Cedric Price and those sorts of people, who were inspired by ideas of computers but not actually really using them or particularly understanding them.

We do get into the work of people like Christopher Alexander, who is largely shunned in architecture because people don’t like being told what to do, but who is treated as something of a god in computer science because he came up with this concept of “patterns”: the idea of iterating on a pattern to make it better and then composing patterns into bigger things. This is some math, some data, and some beautiful diagrams from his book Notes on the Synthesis of Form. All of this stuff is in the lecture, so you’ll be able to look at it later. He’s talking about problem decomposition and how you can think about design from a logical perspective. But if we look at that page full of data, all the numbers, that’s very small data; he’s talking about a set of relationships between problem types, not actually data about the problem itself, and these ideas are still very early on.

You’ve also got people like Phil Steadman, who, while we think of him as an OG person, is still working; he’s still one of the most fearsome PhD supervisors you can get if you want to do an architectural computing PhD. (If you can get Phil Steadman you’ve done very well.) He was doing things like enumerating all of the possible rectangular layouts of plans and then trying to understand concepts like: can you represent the arrangement of buildings through graphs? You can see if two building plans are similar because they have a similar graph representation. This room is connected to that room, which is connected to that room, which is connected to those two, and so actually we use this graph representation idea all the time for all kinds of things in the nerd world. Most regular architect people don’t really know about it, so it gets very interesting.
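Here is a rough sketch of that graph idea in Python, using the networkx library. The rooms and connections are made up: two plans with different names and different geometry can still share the same connectivity structure, which you can check with graph isomorphism.

```python
# Representing a floor plan as a graph of room adjacencies (made-up example),
# then checking whether two plans share the same connectivity structure.
import networkx as nx

plan_a = nx.Graph()
plan_a.add_edges_from([
    ("hall", "kitchen"),
    ("hall", "living"),
    ("living", "bedroom"),
    ("living", "bathroom"),
])

plan_b = nx.Graph()
plan_b.add_edges_from([
    ("entry", "study"),
    ("entry", "lounge"),
    ("lounge", "bed1"),
    ("lounge", "wc"),
])

# True: same topology, different room names and (presumably) different geometry
print(nx.is_isomorphic(plan_a, plan_b))
```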

This is William Grey Walter, a robotics guy; he invented these two little Roomba-like things. They’re kind of hard to see, but if you look them up they’re called Elmer and Elsie, after two dancers of the era, and they didn’t really do much. They just saw the light on the other one and followed it, but because they were quite slow at processing where they were, by the time one robot had moved towards the light the other one was no longer there, so they had this chaotic dance around each other. That’s one actually docking itself; turns out Roombas do that now, and Elmer and Elsie, possibly some of the first autonomous robots, were doing that too.

They were doing this little dance around each other, and this was one of the early pieces of the cybernetics world: the idea that systems interact in interesting ways and have time sequences to them. Cybernetics then moves into this idea of interactions between systems and the idea that systems talk to each other, so a lot of people say that cybernetics is the study of conversations. If I’m interacting with a bicycle I know that I can’t just lean in one direction or I’ll fall off; I have to interact with the bicycle system, so me and the bicycle, two separate systems, become one system in a super-system, a larger system.
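Here is a tiny illustrative simulation of that lagged-feedback dance in Python. It is not Grey Walter’s circuitry, just a sketch with invented numbers: each “tortoise” steers towards where it last saw the other, several steps ago, and because the observations are stale and the step size is fixed, they close in and then keep overshooting and dancing around each other instead of settling.

```python
# Two agents each steer towards a stale observation of the other's position.
# The sensing lag and fixed step size keep them from ever settling; every
# number here is arbitrary, this is just an illustration of feedback with delay.

LAG = 10           # how many steps out of date each agent's view is
SPEED = 0.05

a, b = [0.0, 0.0], [5.0, 3.0]
history_a = [list(a) for _ in range(LAG)]
history_b = [list(b) for _ in range(LAG)]

for step in range(300):
    seen_b = history_b[0]            # a's out-of-date view of b
    seen_a = history_a[0]            # b's out-of-date view of a
    for agent, target in ((a, seen_b), (b, seen_a)):
        dx, dy = target[0] - agent[0], target[1] - agent[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0
        agent[0] += SPEED * dx / dist
        agent[1] += SPEED * dy / dist
    history_a = history_a[1:] + [list(a)]
    history_b = history_b[1:] + [list(b)]
    if step % 100 == 0:
        print(step, [round(v, 2) for v in a], [round(v, 2) for v in b])
```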

When do we get into some real computation in architecture?

One of the claimants to the first architecture project that used any kind of computational system is the work Paul Coates and John Frazer did when they were at the AA. They used a computational system, but not a computer, because even though they were rich kids they didn’t have the millions and millions of pounds needed to build a computer. So they were doing design projects based on the outputs of a random number generator, which in their case was a plastic roulette wheel that they got, basically, in a cracker. Paul wrote one of the readings that you just read, and John Frazer was head of school at QUT until very recently, so these people go on to do interesting things because they started off doing interesting things! A lot of these things end up being kind of silly, and they’re kind of metaphorical computer uses; when you actually start getting into the meat of things, especially with the computers that were available at the time, there’s not a lot you could do. Everyone had these grand aspirations of doing complete enumerations of all possible rectangles, or automatic layouts, all that sort of stuff, and actually you just can’t do that kind of thing generally at all, and even less so with the kind of compute power that they had on a DEC-10 or a PDP or something like that in the late 70s and early 80s.

So a lot of the things that people actually came up with were really interesting low-compute systems that produced what we call “emergent behaviour”: the idea that by giving a set of very simple rules you can come up with a much more complex system by iterating it lots of times. Craig Reynolds came up with this system called Boids, which is a New Jersey accent for “birds”, where they have a couple of rules. Steer to avoid, so you don’t crash into anything; steer for cohesion, which is where you stay near the other things; and match your speed to your surroundings. With those three rules (I think those are the three rules) you end up with the behaviour that you see in this [slide]: these incredible shapes that look just like birds flocking. When you look at a flock of starlings, you can simulate that very similarly using very little code, less than a page of code, but huge amounts of compute power, because it’s crunching through and every one of them is doing all sorts of vector calculations. (You should be able to program something very similar to that before too long.)
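To give a sense of how little code that takes, here is a stripped-down boids step in Python. The rule weights, radii and speed limit are invented for illustration; the canonical names for the three rules are separation, cohesion and alignment.

```python
# A stripped-down boids step: separation, cohesion, alignment.
# All weights, radii and the flock itself are arbitrary illustration values.
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, view=15.0, too_close=4.0):
    for b in boids:
        sep_x = sep_y = coh_x = coh_y = ali_x = ali_y = 0.0
        n = 0
        for o in boids:
            if o is b:
                continue
            dx, dy = o.x - b.x, o.y - b.y
            d2 = dx * dx + dy * dy
            if d2 < view * view:
                n += 1
                coh_x += o.x                      # cohesion: pull towards neighbours
                coh_y += o.y
                ali_x += o.vx                     # alignment: match their velocity
                ali_y += o.vy
                if d2 < too_close * too_close:
                    sep_x -= dx                   # separation: push away if crowded
                    sep_y -= dy
        if n:
            b.vx += 0.01 * (coh_x / n - b.x) + 0.05 * (ali_x / n - b.vx) + 0.1 * sep_x
            b.vy += 0.01 * (coh_y / n - b.y) + 0.05 * (ali_y / n - b.vy) + 0.1 * sep_y
    for b in boids:
        speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
        if speed > 2.0:                           # crude speed limit
            b.vx, b.vy = 2.0 * b.vx / speed, 2.0 * b.vy / speed
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(50)]
for _ in range(200):
    step(flock)
```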

The other kind of classic finding was this “beady ring” idea: that there were certain fundamental urban design principles in older cities. A lot of the time they were looking at places in Yemen and things like that, where they’d find these structures that they could simulate on their not-very-good computers, and so those end up being really interesting.

The thing that then comes out is that we have a lot of fairy-tale ideas about how an algorithm will work, and a lot of that is to do with how we want it to work. So when Richard Dawkins—in his period of being a biologist rather than a religious leader—got around to doing these things which he called Biomorphs, he was using genetic algorithms to produce these sorts of shapes. At this stage he’s selecting for “interestingness”, so he’s saying “this is the best one”, “this is the most interesting one”. But he also tried to do trees that would grow upwards. He asked, how can I evolve the tallest tree? The rule set he came up with was: grow tall, that’s good, and don’t use too much material, because too many sticks is bad, right? So after a few generations of running this tree design algorithm, can anyone have a punt at what he ended up with? What’s the tree that uses the least material and grows the tallest? What shape is it?

It’s basically a straight stick! The tallest tree with the least material is a telegraph pole: just go straight up. Dawkins was completely surprised by this. [An] otherwise intelligent person who understands the system, but he had these romantic ideas about how the algorithm was going to work. You’re going to find this a lot when you start programming yourselves: you’re like “but it’s supposed to do that”. Well, where does it say that in the code? It doesn’t say that in the code, and one of the things that this really teaches you, and has taught society in general, is that the beliefs that you have don’t matter; it’s what’s written on the page.
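You can reproduce that surprise in a few lines. Here is an illustrative toy genetic algorithm in Python, nothing to do with Dawkins’ actual Biomorph code: each “tree” is a list of sticks with an angle from vertical and a length, fitness rewards the height reached and penalises the material used, and the population duly converges on a straight vertical stick.

```python
# A toy genetic algorithm in the spirit of the tallest-tree story (illustrative
# only). A "tree" is a chain of sticks, each with an angle from vertical and a
# length; fitness rewards height and penalises total material.
import math
import random

SEGMENTS = 8
POP = 60
GENERATIONS = 200

def random_tree():
    return [(random.uniform(-math.pi / 2, math.pi / 2), random.uniform(0.1, 1.0))
            for _ in range(SEGMENTS)]

def fitness(tree):
    height = sum(length * math.cos(angle) for angle, length in tree)  # vertical rise
    material = sum(length for _, length in tree)                      # total stick used
    return height - 0.5 * material                                    # tall good, material costly

def mutate(tree):
    child = list(tree)
    i = random.randrange(len(child))
    angle, length = child[i]
    angle += random.gauss(0, 0.2)
    length = min(1.0, max(0.1, length + random.gauss(0, 0.1)))
    child[i] = (angle, length)
    return child

population = [random_tree() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 4]                        # keep the fittest quarter
    population = parents + [mutate(random.choice(parents))  # refill with mutated copies
                            for _ in range(POP - len(parents))]

best = max(population, key=fitness)
# angles drift towards 0 and lengths towards the maximum: a straight vertical stick
print([(round(a, 2), round(l, 2)) for a, l in best])
```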

This is all very well and good, and it’s kind of theoretical; it takes us into a bunch of interesting questions, but how is it going to pan out in architecture, or in any real design field? This is a book I brought with me; it’s a great book. I think Amazon has a print-on-demand copy but there are no real copies in the world; I got it from a library that was throwing it out. It’s from ’73 and they’re talking about all the cool things that you can do with computers in architecture, and at some point in a later section it’s like “oh, and also maybe we can do graphics”. If we look at the table of contents, none of those chapters are about drawing pictures; they’re all about solving design problems. In real building design there are often going to be questions like: how much outdoor space do I need as a ratio of indoor space? How many toilets as a ratio of the number of people? Once you’ve stacked enough of those ratios together they start to have quite complex interrelations: you can have so many people inside if it’s so big, but you also need to have that much outside space, and you also need to have that much parking, and those things end up being very difficult to solve all together (there’s a little sketch of this below). So a lot of the early work in computers was asking “can we solve these real design problems?”, “can we simulate the environmental properties of this building?”, that kind of thing.

The last line in that book is “the architectural profession as a body, has been slow to recognize the computer’s worth and has not yet fully realized the potential of this new tool.” If you were to go into a practice right now you would see a bunch of people just sitting around drawing pictures and not really “realizing the potential of this new tool”. That was 50 years ago and not enough has happened! One of the interesting things is that all of you, as you come out of this course, can change this and do something about this problem.
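Here is the tiny sketch of what “stacking ratios” looks like as a problem, in Python. Every rule and number is invented, not from the book or from any real regulation: the point is just that once a few coupled ratios interact, finding the biggest scheme that satisfies all of them is already a small search problem rather than a single calculation.

```python
# A made-up bundle of interacting design ratios (none of these numbers are real
# regulations): find the largest occupancy that still satisfies every rule on a
# fixed site.

SITE_AREA = 2000.0                           # m2 of land available (invented)

def feasible(people):
    indoor = people * 10.0                   # 10 m2 of indoor space per person
    outdoor = indoor * 0.3                   # outdoor space as a ratio of indoor
    parking = (people // 4) * 12.5           # one 12.5 m2 bay per 4 people
    toilets = (people // 20 + 1) * 3.0       # one 3 m2 toilet per 20 people
    return indoor + outdoor + parking + toilets <= SITE_AREA

best = max(p for p in range(1, 500) if feasible(p))
print("maximum occupancy on this site:", best)
```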

The problem really comes from this distinction: computer-aided design at the beginning was this idea that you could use a computer to help you design. That very quickly got changed to computer-aided drawing, when people realized that they could sell computer programs to help you draw things much more easily than they could sell computer programs to help you change the way that you think so that you can design better. The key offender in this area is Autodesk’s AutoCAD. This [slide] is AutoCAD 2.5 (I don’t know why they had half versions in those days), and this was in DOS and they’re drawing, I don’t know, a region or something like that. The big selling point of AutoCAD was that you didn’t have to change the way you thought; you just “computerized” your existing process. The benefits were that you could print out the same drawing over and over again, you didn’t have to take it through the ammonia copy room, and you could make small edits to it. There was no help with “design”, no concept of design added into this. This was just drawing, and that persisted unmolested until, like, 2004-ish, the early 2000s.

Then we meet up with these people who had been doing a bunch of work in weird places, like inside architectural firms: there were people working at SOM developing their own design tools, there were people working in shipyards, there were people working at Foster and Partners, and they’d all come up with their own custom tools, things like Excel plugins and all that sort of stuff, to end up with things that you could design with. (I’m pretty sure that’s Achim standing there wearing his, like, traditional Achim Menges outfit.) That ended up becoming a piece of software, published to the public as Generative Components, which for five or six years was the coolest thing you could get your hands on. You had to go to a workshop to be allowed to have a copy of it.

Each one of these nodes—have you started using Grasshopper yet? Do you have your hand up?—each one of these nodes is similar to a Grasshopper node in that data is flowing in and then data is flowing out, and what that data is, is over here. That gives you all of these sorts of things. It was a bit crusty and it crashed a lot, but it paved the way for this idea that you could create design tools as a normal person; you didn’t have to be a software developer to make that work.

Then around that time we went through this kind of terrible time which I’m going to call the Deleuze patch, where the theory wankers—don’t write that down—the people who, similar to the people in the 70s who were doing stuff inspired by the idea of computers, were writing a load of stuff inspired by the ideas of what was possible with computers. But the problem we had, and this is conjecture, is that those people in the 70s who were doing this stuff with the metaphorical idea of computers were working with an entirely new idea that you couldn’t go out and test, but they were at least designing, doing speculative projects, whereas the writers were largely either doing speculative writing and no projects, or they were doing projects that weren’t very difficult and then writing about them in a way that made them sound difficult. That’s the kind of open question that I want you to think about when you’re reading these things, because some of the readings are written by people who are kind of hardcore and did things and were doing interesting stuff, and some of them were written by people who just kind of needed to write to live.

The next thing that came along after Generative Components was Grasshopper. Grasshopper’s early name was Explicit History, which is now the name of something else in Rhino, but Explicit History was actually a really nice name because the whole point of it was that it captured your design process. The process required to make that towery thing is captured by the Grasshopper connection graph on the left, which means that at some point you can go along and change the decisions you made. You’re like “oh, I want to have seven sides, not six”, and then it can all play forwards. It doesn’t strictly capture the design thinking, because it doesn’t have a history of that, whereas Generative Components did; but that was the main thing that caused Generative Components to crash, so I think that’s why they took it out. Now we’ve got this kind of platform which allows people to build plug-ins on top of it, and the platformization of Grasshopper has been the thing that’s made it so successful.
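A crude way to picture that “captured process” idea in plain Python (this is obviously not how Grasshopper works internally, just the concept): the script is the recorded history, the parameters are the dials, and changing six sides to seven replays every downstream step.

```python
# The recorded design process as a function of its parameters: change a number
# and the whole chain of downstream geometry is regenerated. Purely illustrative.
import math

def tower(sides=6, floors=20, radius=5.0, twist_per_floor=4.0):
    """Return a list of floor outlines: each floor is a slightly rotated polygon."""
    geometry = []
    for floor in range(floors):
        rotation = math.radians(twist_per_floor * floor)
        outline = [(radius * math.cos(2 * math.pi * i / sides + rotation),
                    radius * math.sin(2 * math.pi * i / sides + rotation),
                    floor * 3.5)                      # 3.5 m floor-to-floor, invented
                   for i in range(sides)]
        geometry.append(outline)
    return geometry

hexagonal = tower(sides=6)
heptagonal = tower(sides=7)   # "seven sides, not six": the whole history plays forwards
print(len(hexagonal[0]), len(heptagonal[0]))   # 6 7
```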

The big question now is: what’s next? I think that we’ve got to the point where geometry creation, geometry manipulation, all that sort of stuff is pretty well understood, and most of the fancy stuff is up here: we’re dealing with one-off towers, pavilions, all that sort of stuff is in this zone here. Then down here you’ve got really basic stuff, like putting an extension on someone’s house or putting a porch door on, that kind of thing. In terms of the work, there’s this chunky middle section, what people have called the “fat middle” of really quite straightforward design tasks. Like “how do I lay out a car park?”, “how would I put this building in if it was a pretty simple building?”, and these tools like Giraffe or TestFit have actually come an incredibly long way in not very long at all. I think we’re going to see a lot more work done along these lines, which are “how do I design essentially generic, boring stuff and make a tremendous profit from it, because I can just crank through this and see which of 500 options gives me the most profit”.
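The “crank through 500 options” part really is as unglamorous as it sounds. Here is a deliberately boring sketch in Python; every number and rule is invented, and real feasibility tools obviously do far more than this, but the shape of the loop is the point: generate options, score them, keep the best.

```python
# "Crank through 500 options and keep the most profitable one": a deliberately
# boring sketch. Every number and rule here is invented for illustration.
import random

def profit(floors, units_per_floor):
    build_cost = 250_000 * floors * units_per_floor      # invented cost per unit
    revenue = 400_000 * floors * units_per_floor         # invented sale price per unit
    planning_penalty = 20_000_000 if floors > 12 else 0  # made-up height rule
    return revenue - build_cost - planning_penalty

options = [(random.randint(3, 20), random.randint(2, 10)) for _ in range(500)]
best = max(options, key=lambda o: profit(*o))
print("best option:", best, "profit:", profit(*best))
```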

That’s one end of the boring scale, and it gives you a lot to play with. It lets legislators understand what the implications of their laws are, it lets developers understand what’s going on, and it lets the public understand the implications of a developer doing something. So some of the work that Hank is doing with Giraffe is partially about generating things, but it’s also partially about allowing the public to interact with proposals, to say “well, this doesn’t look so good like this”, or “from my house’s perspective I get shaded at this time”.

The other thing that I think we’re going to see a lot more of: in the feminist literature around domestic spaces, they write a lot about how the introduction of white goods like washing machines and tumble dryers didn’t give people way more time, it just meant that people had much higher standards of cleanliness. Nicole talks a lot about this; she’s done a lot of work on it. As computer tools get better and better we don’t tend to just do the same project faster, we tend to do more work. Partially because architects have some sort of pathological desire to never see their children, and partially because clients demand it.

One of the things we’re going to see a lot more of is simulation of the performance of a building. We already do a lot of solar performance analysis, like “does this apartment get enough sun?”, that kind of thing, but we’re going to start to see human occupation analysis. One of my, like, hobby-horse topics is that you can get the highest award in New South Wales architecture [the Sulman Medal] without the building ever having been occupied. The idea that you can get an award for a really great building that no one’s ever been in seems horrifying to me. The idea, for me, would be that in the future we want to see buildings getting awards five or ten years after they’re built, for being really great buildings to live in. You don’t give films an Oscar because you like the poster; you give a film an Oscar because you watched it and you thought it was good. Hopefully we’ll see a bit more of that, and you can simulate the people moving around the building and say “oh well, these people can’t get from here to here in time to escape because of fires”, or “people seem to bunch around here, that’s no good”, or “people feel sad here because there’s not enough light”, and whatever else you can imagine; at some point we’re going to be able to simulate that.

Then the third thing I think is going to be a big deal is a much longer, ongoing interaction with buildings. This [slide] is from Estimote’s new Space Time OS. It’s so new, it’s like a month old I think, but the idea is that as you track assets around the building, those assets could be furniture, they could be people, you can understand how the building works in real time and change it. There’s lots of things you can do with that!

There’s a bunch of references there; they’re unreadably small, but you can download it and have a look at it.

That’s about it for this lecture; it’s a kind of potted history of a very odd angle on architectural computing. The readings you did last week took completely different views on this, and so I think there are some very interesting ways of thinking about how those different views come together to create a kind of super-view in your minds, because you can discount or encourage different parts of those views to work together.