We’ve long known that a programmer’s productivity depends largely upon how much he enjoys his work environment and the task he’s been assigned. We know that various things that make the environment nicer boost productivity, though the reason has always seemed hard to grasp.
To me, at least, this has long been a mystery. Why would a programmer with total understanding of his field, excellent training, and high intelligence still produce better work in a more enjoyable environment? Enjoyment would seem to have nothing to do with the interaction between a person and a machine, which is what programming appears to be.
Well, in Dianetics: The Modern Science of Mental Health, L. Ron Hubbard talks about pleasure, over and over. One of the things he says is:
There is a necessity for pleasure, a necessity as live and quivering and vital as the human heart itself. … The creative, the constructive, the beautiful, the harmonious, the adventurous, yes, and even escape from the maw of oblivion, these things are pleasure and these things are necessity.
Talk to anybody who worked at Netscape in its early days, and they will tell you that there was a sense of adventure in what they were doing. Talk to some of the people who work on Google’s greatest projects, and ask them about how pleasurable they find their jobs. Ask the Apple engineers how enjoyable it was to create the iPhone, or to design Mac OS X. Ask the world’s greatest user interface designers if the harmony of a perfectly usable, good-looking interface isn’t something that makes them happy.
Or just ask me why I have spent the last four years of my life working on Bugzilla.
The world’s greatest creations don’t come from the desire to make a quick buck, or the fear of a manager’s wrath. They don’t come merely from some programmer’s desire to prove how smart they are, and impress their peers. And they definitely don’t come about from organizations where programmers sit, blank-faced and void of thought, forced to stare at a computer to the exclusion of all else for eight hours a day, deprived of the pleasures of life. No, true creation flourishes in environments that bring into being that necessity “as live and quivering and vital as the human heart itself”: pleasure.
In Scientology 0-8: The Book Of Basics, L. Ron Hubbard says:
Goodness and Badness, Beautifulness and Ugliness, are alike considerations and have no other basis than opinion.
I’ve heard many programmers talk about “beautiful code.” Of course, it seems to mean something different to everybody! People can definitely have arguments over what is good or beautiful in code.
Now, most people don’t write code and then think, “Aw, that code was terrible, horrible, and ugly.” Most programmers are pretty happy when they complete a project, and tend to admire their own creations. They might have a terrible tangled mass of code, but to them it might be beautiful. For us, though, who have to fix their terrible, tangled mass of code, maybe it’s not so beautiful.
So what can we do about people like this? Obviously, we know that there are better ways to write code and worse ways, and we’ve formed our opinions based on experience or having read some sensible things and agreed with them. We can’t just let people write terrible code and mess things up. So what do we do?
Well, has anybody ever changed your opinion about something? How did they do it? If you read something and thought it was sensible, and came to see why “good” code was “good”, and “bad” code was “bad”, then perhaps that person could read that thing too! Perhaps you could explain to them gently why you hold your opinions, and give them the chance to change theirs. Perhaps you could show them some good evidence, and they’d change their mind.
At the very least, a little communication probably wouldn’t hurt anybody. 🙂
No matter what you choose, once you realize that all that’s happening is you have different opinions, the way is open to do something about it.
By the way, the quote above is part of the “Axioms of Scientology.” In Scientology, “axioms” are:
axioms: Statements of natural laws on the order of those of the physical sciences.
That’s from the glossary of a book called Advanced Procedure and Axioms, by L. Ron Hubbard.
The axioms are numbered, and the axiom above is Axiom 31. The rest of them are all very interesting, and I’d recommend you get a copy of Scientology 0-8: The Book Of Basics, if you want to read the rest of them.
In Dianetics, L. Ron Hubbard mentions:
Only things which are poorly known become more complex the longer one works upon them.
Have you ever seen that happen with a software project? It just becomes more complex, and more complex, and more complex, and eventually it’s just a huge mass of complexity that nobody can maintain anymore?
I think it might be interesting to think about that in the context of the above quote. Perhaps there’s actually something more that could be known about the system. Maybe your users don’t actually need or want all those features. Maybe there’s more research that could be done on different areas of the system. Anything, really: just know more about it. Maybe there’s even some fundamental missing data about life or programming that’s hampering the project.
Whatever it is, I think it’s pretty interesting to think that knowledge could defeat complexity!
In Dianetics: The Modern Science Of Mental Health, L. Ron Hubbard says:
It is not untrue that where one finds the greatest controversy, there he will also find the least comprehension. And where the facts are least precise, there one can also find the greatest arguments.
There are some areas of computing that are very difficult to comprehend. There are also areas where the facts are very imprecise, or where no real facts are known (there are just guesses or theories, instead).
In these areas, programmers can get into endless technical debates that seem to get nowhere.
The subject of “security” is often like this. Developers can get into extremely long technical debates about how to implement security features in their programs, how to fix security issues, and so on. But, um, security from what? Security that allows the user to do what? How important is security? What level of security is important? What is the basic, fundamental point of computer security? What do we even mean when we say “computer security”? If I say my program is “secure”, what does that mean?
Can you see that there might be some things there that are hard to comprehend, or that there might be some imprecise facts in that field?
The subject of user interface design is also like this. Developers can get into some knock-down, drag-out fights about user interfaces, probably because they don’t understand them, and because the field is full of imprecise facts. I personally leave most UI work to the UI engineers, and stay out of it. 🙂 I let the people who do comprehend these things do their job, and I don’t encourage debates in areas that are poorly understood.
When I don’t understand something, or when the facts aren’t precise enough, I think there’s nothing wrong with saying, “I don’t understand this!” or “We need more data!” And that’s the end of the conversation. There really shouldn’t be any more debate after that, because it’s going to get nowhere!
Whenever a technical debate goes on and on without resolution, I say, “Okay, obviously something is unknown here. What more could we find out about this?”
There’s nothing wrong with debating the pros and cons of technical issues. But when it becomes really controversial or people become strongly argumentative, that’s when I start applying the quote above from Dianetics.
As an exercise, you can see for yourself if this applies. Look at an area in computing where there’s a lot of controversy (such as operating systems, programming languages, security, etc.), and check: Are there some imprecise facts, or is there some missing comprehension in that field?
In Dianetics: The Modern Science of Mental Health, L. Ron Hubbard discusses a principle called the Introduction of an Arbitrary:
An arbitrary structure is one in which one error has been observed and an effort has been made to correct it by introducing another error. In progressive complexity, new errors must be introduced to nullify the evil effects of old errors.
How many times have you seen this happen with a software project? It’s poorly designed in the first place, and then somebody discovers an error. Instead of fixing the design, they tack on some “hack” to fix the error. In other words, just like Ron says, they keep introducing new errors.
It’s long been known that this is a bad idea, but the new idea here is to look at this as a process of introducing errors. Just because some code “fixes a bug” doesn’t mean that it’s not an error to write code that way. You are actually introducing new errors into your program every time you “hack” in a fix instead of fixing the real root cause of a problem.
I think most professional developers have long felt uneasy with “hacking” in a fix, but perhaps didn’t quite have anything to back them up when they protested and said, “I just want to fix it the right way!” However, if we look at “hacks” as errors, then it becomes easier to see why the “right way” is the right way.
So in the future, when you’re fixing an error, don’t introduce new errors to fix it. Do things the “right way.” 🙂
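To make the difference concrete, here’s a small hypothetical sketch (the function names and the bug scenario are my own invention, not from any real project). Imagine a function that averages a list of prices, and somewhere upstream a caller occasionally passes an empty list, causing a crash. The “hack” papers over the symptom; the “right way” refuses the bad input loudly, so the real root cause gets found and fixed:

```python
def average_price_hack(prices):
    """The "hack": special-case the symptom.

    The real error (a caller building an empty list) is still there,
    and now a new error has been introduced: every consumer must know
    that 0 is a fake average, not a real one.
    """
    if len(prices) == 0:
        return 0
    return sum(prices) / len(prices)


def average_price(prices):
    """The "right way": reject invalid input with a clear error.

    Instead of hiding the problem, this surfaces it immediately,
    pointing straight at the caller that needs fixing.
    """
    if not prices:
        raise ValueError("average_price() requires at least one price")
    return sum(prices) / len(prices)
```

The hack version quietly introduces a new error (a meaningless zero flowing through the program) to nullify the old one, which is exactly the “progressive complexity” the quote describes.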
In Scientology, one of the most important ideas is called ARC, which stands for Affinity, Reality, and Communication.
I was thinking the other day about an interesting way that this applies to computing.
The first thing you have to realize is that as a programmer, your user interface is a Communication to the user. Over a very long distance, you are actually communicating something to the user. You happen to be communicating a window with some buttons and funny words in it, actually.
Now, it’s always mystified me why users like pretty interfaces that are impossible to use. That is, why does it sometimes seem like aesthetics is more important than usability? Well, one important thing to consider here is that the user will have to have some Affinity, or liking, for your user interface. People tend to have more affinity for pretty things. So the more “likable” your user interface is, the more effective your program is going to be. Of course, I think “likable” also includes usable, since I personally really dislike user interfaces that are hard to use. That is, my affinity for them is low. So, both “pretty” and “usable” are important to affinity, for user interfaces.
You as a programmer should also have some affinity for the user; generally, if you’re thinking, “I’d like to help this nice guy be able to use my program better,” that’s a lot better than, “All my users are stupid and I don’t care if they can use this thing at all.”
The final aspect of the triangle is Reality, which we usually think of as agreement. Does your user know what that weird little icon means? Is there actually reality (agreement) between you and the user? There are lots of important pre-arranged agreements with computers that are worth remembering. For example, “When I click the X button, the window will close.” That’s pretty much true on all operating systems (even Linux, which I’m using now). If I clicked the X button and got a screen full of dancing pigs, I’d probably be annoyed (though slightly amused), because “dancing pigs” wasn’t really my reality there.
All together, ARC adds up to Understanding. So, if users are having a hard time understanding your program, check which component you’re missing! Are you failing to communicate something? Is the affinity between you and the user too low? Or is there some missing reality (or some reality that you enforced upon them without their permission)?
There’s lots of ways to use this concept of ARC in user interface design, and lots of different ways it could go wrong. The items in the blog above are just examples, to sort of get your mind rolling on it.