The First Computer Person


As if the fantastic complexity of normal integrated-circuit design weren't enough, along comes a new technological leap that brings performance to a new peak and features design difficulties so mind-boggling that they can only be handled by the computer itself. This strange bootstrapping scenario is so counter-intuitive that I had to ask the head of the design team, Robert Ellis, how the thing ever got started.
"Actually, you're wrong there. The only reason we could get the project going was because it is possible for human minds to do the design work. The first step was to make a lot of tools we could use. What ended up happening was that we designed the programming language to design the programming language..."
"Say what?"
"No, it's even worse than you think. When we got the early version running, the first thing we did was set it to work designing its successor's programming language, so we ended up designing the language to design the language to design the language..."
(laughs)
"It's been like that in every aspect of design. For instance, the architecture is a tremendous jump from standard flat architectures. It's not quite fully three-D, because each layer is lithographed on top of the last one. A really three-D architecture would be one with FETs pointing in any direction, and there are some possibilities for PLDs and dynamic RAMs that are really wild. What we have is layers of two-D architecture linked together in an incredibly complex way. It's like having a circuit board with a thousand sides, instead of just two. Just on a physical level, it's astonishing: normal chips are maybe three inches square, and much of that is the packaging. Our computer is a one-inch cube of solid circuitry, not counting packaging: it's incredibly more complex than any two-D processor could be. In the time a signal could go through a normal microprocessor, the signal could go to anywhere in our microprocessor, and that's a range of possible paths thousands of times greater. It doesn't run particularly fast by comparison, but the flexibility is just incredible! Of course, now we have to deal with running signal paths in three dimensions..."
"Sounds like that's a real challenge."
"I know! The thing is, we're not just making a new architecture, we're trying to make a self-modifying computation paradigm. It got really heavy, we've had a lot of breakdowns along the way."
"Computer breakdowns or nervous breakdowns?"
"Both, and believe it or not, the computer had about half of the nervous breakdowns itself. The point at which we started going down ourselves was when the computer was getting extremely neurotic, and we realized there was no way to troubleshoot it in any normal sense without starting from scratch. That didn't bother us, but then one of our people proved mathematically that the chance of starting from scratch and getting a sane computer, with all of the bootstrapping going on, was virtually nil."
"A 'sane' computer? And what did you do to get one?"
"Well, we don't quite have one and we may never quite get there. It currently believes that wondering hard enough about deductions from true things can call into question the truth of the things themselves. It has little idea of what things are questionable and what things are not."
"Such as?"
"It had one lovely nervous breakdown I'll never forget. It was trying to come up with new ways of addressing its objects, and at the same time it was trying to come up with heuristics to explain a problem in air turbulence. The main problem was that somebody had screwed up and told it that there definitely was a answer to the turbulence problem, but the way it was stated there really wasn't. Also, its ways of dealing with its internal affairs weren't too rigorous, because we wanted to see if it would come up with any neat kluges. It decided addition might not be commutative, and when it tried to apply that to its own addressing it freaked out and concluded that everything it did was maybe not, because there was no way to be sure of anything. Until we straightened it out, it wasn't sure it existed. The only thing it was sure of was that there was a solution to that damned air-turbulence problem."
"You've got to be kidding. How is that possible?"
"Oh, that's nothing. Once it decided it didn't exist, and you couldn't get any reaction out of it at all. That time we had to start over, and around that time we started to get people flaking out..."
"But how can a computer think everything it does is maybe not? Don't computers run on a pretty basic logic, where things are either true or false? And how would a computer be thinking about what it does, anyway?"
"Well, that's going to be hard to explain, since I've been working with people who are really immersed in the project. First of all, it runs on 'Maybe Logic', not binary logic. 'Maybe Logic' isn't exactly a four-valued logic, because the 'maybe yes' and 'maybe not' states are probabilistic. I know the next thing you're going to say: 'But the computer itself is binary'. It's not. The hardware itself runs on Maybe Logic, which is a natural way of dealing with the problem of non-data CMOS levels..."
"Could you explain that in simpler language?"
"Okay. Binary logic consists of ones and zeros, and they're represented electrically by voltages of zero to 1.5 volts for zero, and 3.5 to 5 volts for one. Binary computers go to great lengths to make sure those voltages stay in the acceptable data ranges. Now, what happens if the computer gets a data value of three volts, because the fanout goes to too many inputs?"
"Er... It's not a one, so the computer decides it's a zero."
"Wrong! Zero is 1.5 volts or less! So what does it do?"
"It gives up and goes fishing. How should I know?"
"You'd have to know that all computer circuits are really analog circuits. What happens is that the output of the logic element is probably also a non-data value. Some elements are pretty easy to understand on those terms, like inverters: some are extremely nasty in non-data areas, like XOR gates. Don't even think about what non-data logic levels do to addressing. That gets incredibly ugly, but it turned out to be essential to having the computer be able to free-associate..."
"I'm not going to think about any of it. Please try and not go over my head. I'm a software guy."
"That's cool. It's hard for me to get out of the mindset of working on the computer, and talk to someone on any sort of normal level, you know. The point I'm trying to make is that this computer is very much an analog computer. We took the potential of Maybe Logic, and ran with it. Technically, what we're using isn't actually Maybe Logic, because the value of any logic voltage depends completely on what it's doing. We force it into a Maybe Logic level when we want to read it, but the internal processing is totally analog, and a value that's Maybe Not in one context might be Maybe Yes in another, or even No when compared with very positive values."
"I hope you don't totally snow me with this question, but how do you have memory? Do you convert the Maybe Logic value into some sort of two-bit value, or use more bits to give it more variety?"
"Do you know how static and dynamic RAMs work?"
"No."
"Okay, here goes... A static RAM sort of works like a switch: technically, it's called a 'D latch'. It's constantly trying to make its level go to either zero or five volts. If the level in part of it isn't quite one or zero, it uses feedback and pushes it to the extreme. Got that?"
"Pretty much. That's a binary thing, isn't it?"
"It can only be a binary thing. We don't use any static RAMs. We use dynamic RAMs. A dynamic RAM stores the voltage on a teeny capacitor, and you have to constantly go through the memory and refresh it or all the data drains away in milliseconds. One thing about a dynamic RAM is that you can have a lot more elements in the same space, because the individual elements are much smaller. We've got incredible amounts of memory, because we're using three-dimensional arrays instead of the usual two-dimensional arrays..."
"But how does that give you more bits than a static RAM? It seems that you're still dealing with an element of just one bit. Or am I missing something?"
"A latch is a bistable element. The capacitor in a dynamic RAM element is analog. That's why it needs to be refreshed all the time. All we had to do is work out refresh operations that kept the voltage in each cell exactly the same as it was. Well, mostly: that's why the computer keeps going crazy on us. So, every memory element holds a Maybe Logic value for us. Really dense information packing..."
"How many bits does that work out to, if you were doing it with a regular binary computer?"
"I said really dense. It's analog! There aren't any bits. Well, actually, the processing starts not caring very much about tiny differences at what would be an eight-bit level, but remember the processing is analog too. Because of that, it doesn't matter how subtle the difference is: there'll be an operation of some sort that will turn out differently with the tiniest difference in the data level. Especially with comparisons. Usually, it'll compare two data bits and come out with another result saying that they're, oh, 'kind of maybe not different', a voltage around 1.6 volts. But if it's necessary, it'll go to a cranked-up, really sensitive XOR and the data bits will be compared much more closely: actually, the degree it cares about differences in XOR is itself an analog function, from yet another analog bit...."
"Blub! Blub!"
"Sorry. I really have a hard time explaining this to a layman. If it's any consolation, about half our dropouts have been digital designers who couldn't handle looking at their circuits as analog circuits. The workstations they have now make all the corrections neccesary to deal with the analog aspects of digital design and make it very easy to assume that computer circuits really are just ones and zeros."
"Well, that's sort of reassuring."
"I think they would have had no trouble with it if we were using binary at the lowest levels, but they just freaked when it became obvious that we were going to go analog. To them, that made the entire computer totally unpredictable. And, of course, they were right."
"A while back, you mentioned some problems you'd had with the computer. I'm wondering what sort of methods you've used for debugging it, especially when so much of it kind of bootstrapped itself into existence. Can you look inside it and trace out its circuit, and debug it that way?"
"Don't we wish we could! You put your finger on the main problem we've had with debugging: we can't just go in and change the program. The whole thing is so volatile and dynamic that if we just changed some routine, especially in higher-level processing, it'd tear itself apart, become psychotic. Actually, we tried debugging it that way, and that's exactly what happened. Such weird crashes! Sometimes it would split into subsets of itself, the part that was debugged and the buggy part. Sometimes it refused to use the routines we gave it: sometimes it would refuse to come up with its own routines, or refuse to do anything."
"What did you end up doing?"
"We learned to not just assert things to it, as if it was a binary computer and could only understand Yes and No. We suggested things. We suggested it wonder certain data sets, or doubt them. We suggested it de-stubborn certain operations, that it stubborn certain assignments. It's been very delicate work..."
"By God, you're a computer psychiatrist!"
(laughter)
"I am! It's true, that's exactly what it's like! We've lost quite a few people over that point, too, not just the ones who freaked out and couldn't deal with where our work was heading. I fired one guy who was having a wonderful time trying to debug the computer. What he was doing was trying to compel it to operate correctly by brute force, going through it and making it believe things with certainty. He was filling it full of binary logic, he was shockingly abusing the capacity we have for giving it data not subject to doubt. It usually shut down, or went crazy, or got cowed and wouldn't come up with a single idea of its own."
"I can see why you wouldn't want that happening."
"No, actually you don't get the full picture there. What ended up happening was that I met with the guy and tried to persuade him to lighten up. It turned out I was trying to use the same debugging we use on the computer, on him, which is an interesting thought. He totally refused to understand what I was trying to do: his thinking was just like what he was trying to impose on the computer. It was really an eye opening experience."
"So you decided not to use him for debugging?"
"No, it wasn't like that at all. I didn't intend to fire him, you know. What happened was, I got angrier and angrier as he kept saying there was no reason why our computer should be less reliable than a PC. Finally, he told me that I should make him the head of the debugging operation, and promised me that he'd whip the computer into shape. I exploded, and fired him on the spot. I could have physically attacked him at that moment. It was shocking to me as well, because I'm not very familiar with that kind of rage."
"What did you do then?"
"I called a meeting of the entire team, and I told them what had happened. I was really shaken, and I was questioning my own fitness as the head of the team at that point. It turned out that everybody who was left got exactly the same reaction I'd felt, and we talked about that. We decided that, subconsciously, we were thinking of the computer as a person, even as a child, and we felt very protective of it. Another funny thing: some of us were incredibly sincere about it, deeply passionate: a few even admitted they talked to the computer and apologized to it if their debugging wasn't going well, or if they had to make major changes."
"Did you call for people psychiatrists at that point?"
"I wanted to. It frightened me that people could become that attached to it. One cried as she told me how she'd gone through the changes the guy I fired had made, weakening them into suggestions or getting rid of them. These two people had been mortal enemies from the beginning, you know. But there was no way I could pull out the people who were getting swept away like that. Just no way."
"Why?"
"Because there was a direct correlation between that and their success with the computer. The woman who cried is my star programmer. I watched as she worked on it, after we had that meeting and found out our common feeling. She'd go through the data, talking to the computer, making deals with it. 'Naughty, naughty, you silly thing, that's not true. I think it's maybe not, sweetie. Tell you what, let's pretend it's maybe so, okay? I still think it's maybe not, but that's just my opinion...'"
(reporter laughs)
"Don't laugh! It worked! A week later, I checked that location, and that data bit had become Maybe Not, all by itself! When I told her, she told me she'd known it would come around if given time. She told me that if she'd just put her foot down and changed it to Maybe Not, it would have just gotten stubborn again, and it needed to feel good about that routine while it got used to not using it all the time..."
"This is incredible stuff. Does this computer actually feel?"
"We don't know. I know a few things about it: it responds better to people who treat it as if it did, and its behavior can be very similar to the moods and attitudes of human beings. I don't think it has a model of each of the people who work with it, but we just don't know. We see only the upper level, usually. There's just way too much involved to really trace any process through all its associations and effects. I have tracked down one of its behaviors once: it started trying to help the debuggers by bringing up the part of itself they wanted to work on before they asked. It was right about three-quarters of the time."
"It was? That's startling."
"No, what was startling was that bringing up part of itself to be worked on was its own idea. I couldn't let that one rest. I had to find out how that had happened. It turned out that it was making note of the cognitive dissonance it experienced, and also was experimenting with the processes by which it showed different parts of its programming. It had observed that when it was debugged, the part looked at tended to be an area where it suffered cognitive dissonance, and then somehow it decided to skip the step of having the programmer ask to look at a specific area. So it started to go directly to the debug screen without being asked. I went through the program with a fine tooth comb, and found out exactly what it was doing to do that, and I found a number of possible ways by which it could have gotten the notion that helping the programmer out was a good thing. I didn't find any little person in there: I didn't find anything but the programming language and logical reasons why it might do what it did. I'm really not sure that matters, actually..."
"Wow."
"My star programmer has her own ideas about why it does that. She says that it goes to the debug screen because the poor thing has a terrible headache and wants her to make it better. That's sort of like what I said, only I know she honestly believes it's hurting and unhappy and wants help. The maddening thing is, how can I be sure she's wrong? She thinks it's an honest-to-God person, with feelings and desires, and I tell you, sometimes I'm terrified she's right."
"Why does it terrify you? That's a strong word."
"I guess I don't have very good reasons. After all, I look after it very carefully, and I protect it from abuse. But it keeps on doing things like that, keeps on surprising us. Logically, I know it can't do anything to anybody. All it has is a screen to output on, and it's not hooked in to the communications networks. I wonder sometimes if that's fair, keeping it isolated like that. I wonder if it does have feelings, and what they would be if it does... I guess I can tell you the real reason I'm scared."
"What's that?"
"I wasn't planning on being a father so soon."

Chris Johnson can be reached at jinx6568@sover.net