Believing that process improvement will improve the product is like believing that spending more time looking at the map before a trip will prevent you from running into a traffic jam along the way. Process improvement creates paint-by-numbers; it creates fast food. Which is fine if that's your goal, but don't expect anything other than paint-by-numbers or fast food. Otherwise, we're treading into Einstein's definition of insanity: "Doing the same thing over and over and expecting different results."
If you feel you must improve your process, plan your process for change. There are too many uncontrollable variables to have any expectation of a successful delivery with a single waterfall method. So plan your process for more frequent changes, more frequent reviews, and more frequent releases. Hmm, this sounds like agile (without a capital A).
Tuesday, June 24, 2008
Tuesday, June 17, 2008
Just Because You Can, Doesn't Mean You Should
A note on the Agile adage "Do the Simplest Thing That Could Possibly Work" (DTSTTCPW). DTSTTCPW does not mean "Do the Easiest Thing," or, put another way, "Just Because You Can, Doesn't Mean You Should." (Hey look, a title!) There's a fine line between those two ideas, because the Simplest Thing for one person isn't necessarily the Simplest Thing for everyone else involved. Reductio ad absurdum example: suppose someone keeps the requirements in a Word doc on their desktop. Anyone who wants to know what they have to do has to go look at that one person's desktop. Now that's surely the Simplest Thing for that one person, but it is definitely not the Simplest Thing for everyone.
So, as usual, we have a problem. How do we figure out the Simplest Thing when it comes to organization and communication? Here are some guidelines:
- if you have to search for it, it's not in the right place
- if you have to use different tools, you don't have the right tool
- if you have to ask, it's not documented well enough
- if you have to change the process to get something done, the process needs to change
A documentation system based on a file system just isn't effective anymore. Sure, it works, much like any outdated technology still works; there's just a much better way to do it.
Tuesday, June 10, 2008
The Simpsons as a Software Model

Like most things in life, The Simpsons show us the way. Or, rather in this case, the way NOT to do something. In "Oh Brother, Where Art Thou?" Homer gets to request all the features he wants in his dream car; in effect, he gets to design the car himself. The result is "The Homer," a car only Homer could love. That's what happens if our customers get all the features they ask for.
Our end product ends up looking like that: lots of features "stuck on" with very little concern for the overall design. Sure, it will work, and come out as a finished product, but the true art of software comes from taking all those features and still producing a cohesive, solid design. But, like most designs that are "customer-driven," it is way over budget (at least it's not behind schedule as well). Now, if you're in a market where you can continue to charge the customer to make up the budget, that's a feature, and probably a goal. However, in the real world (and the Simpsons' world as well), an ugly, poorly designed product won't sell.
So how do we get from Homer's vision (which, incidentally, he was quite pleased with) to a mass-market-friendly one that includes all the features the customer requested? One simple step that seems to be largely overlooked is Saying No. I can't believe I even have to write this down, but if the customer makes an outrageous request, saying no (and explaining what we should do instead) will cut off the largest potential budget and schedule over-runs. For example, I'm supporting a custom data synchronization engine on my current project. Now, we are using a database that supports replication, but that feature is not available to us. If we'd just said "No" (and for the record, I said "No," but my managers didn't care, because they didn't want to say "No" to our customer), we would be able to use the built-in replication. But we didn't. C'est la vie.
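For scale, built-in replication in a typical database is mostly a configuration exercise. As a hedged sketch, assuming MySQL (the post never names the actual database, and `projectdb` is a made-up schema name), master/slave replication looks roughly like this:

```ini
; my.cnf on the master -- illustrative MySQL settings, not the
; project's actual (unnamed) database
[mysqld]
server-id       = 1
log-bin         = mysql-bin
binlog-do-db    = projectdb     ; hypothetical database name

; my.cnf on the slave
[mysqld]
server-id       = 2
relay-log       = mysql-relay
replicate-do-db = projectdb
```

Two short config fragments, plus a `CHANGE MASTER TO` statement on the slave, versus writing and supporting a hand-rolled synchronization engine: that's the cost of not saying "No."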
Thursday, June 05, 2008
A Software Fable
Gather 'round, kids. It's story time. There are so few fables written nowadays, and fewer still apply to software (if you read between the lines). So here goes.
Once upon a time, there were two groups of engineers out for a walk. They came upon a creek, and, being engineers, decided to dam the creek. The first group set to work immediately measuring the depth and width of the creek, calculating flow rates and how high the water would rise once dammed. The second group looked at the creek and started throwing rocks into it. The first group, having taken all their measurements, set off to carve a rock big enough to block the creek. They left the second group still throwing rocks into the creek.
Time passed, and the first group came back with their huge rock. But unfortunately for them, the rain had turned the creek into a raging river, and their rock was still not big enough! Or it would have been a raging river, if it were flowing. To their surprise, they saw a dam built out of the rocks that the second group had been throwing! The first group was amazed that something so big had been built from so many little rocks. They went and asked the second group how they did it. The second group said, "Some of us looked for more rocks to put in a pile, and the rest of us threw the rocks from the pile onto the dam. So while you were measuring and calculating, we were already solving the problem. If anything goes wrong with the dam, we just need to throw more rocks. If it rains more, we just need to throw more rocks. You need to carve a whole new rock."
Many morals apply to this story: do the simplest thing that could possibly work, don't over-engineer the solution. But the one I like is this: if you leave engineers in the woods for too long, this kind of thing is bound to happen.
Monday, June 02, 2008
Ghost Town
No, not the blog. I know I missed a week due to The Big Release deadline on Friday. I do this as a hobby, and comment about my real job. So if I miss a week, my loyal readers (of which, as far as I know, there aren't any) will just have to go without.
That business aside, I started wondering about the New Developer Problem, or more specifically, the worst-case version of it. "What if everyone here quit/disappeared/died/otherwise didn't work here any more?" Maybe I'm on too much of an Indiana Jones train of thought lately, but I see it like an archeology expedition. Some guys in fedoras pry open the door to our dusty office and see a bunch of dusty cubes. They sit down and fire up the boxes. Would they
- be able to figure out what we were working on?
- be able to continue on with what we were doing?
The second task is much more daunting. Even if Dr. Jones was able to figure out what we were making, the first question would be "Where is it?" followed by "How do I make it work?" and "How do I fix it if it breaks?" Can he find the answers to those questions buried in all the process-mandated documentation? Maybe it's possible, but in my experience, it's unlikely. Most of my projects have been treated as one big project, rather than a large number of small projects. So all the documentation was (is) created with respect to the whole project, and didn't reflect the individual pieces of the system. That's like trying to understand the internet by reading the HTML and HTTP specs. Yes, you'll understand what it does, but how it does it is totally ignored.
Now imagine Dr. Jones's expedition into an open-source-development-model software facility (noun chain ahoy!). He walks in, clicks the browser icon on the desktop (hopefully the only icon there), and the browser goes to the project home page. Now he can see the project roadmap, view open bugs, read documentation, download code, and start making changes and fixes. With that kind of product-based organization, you don't need to be an Indiana Jones. Even Marcus Brody could find his way around that project, and he got lost in his own museum (supposedly).
Most of this problem comes from each group involved doing the easiest thing for them: testers create Word docs, systems engineers create Rational UML, software engineers create code. But all those things add up to documentation focused on whatever each group finds important. The documentation needs to be correlated to the end product, and should be stored in a common place that can show that correlation, i.e., a project website. Sure, software can be built in many ways, but you are at risk if your project depends on "project experts" or "domain specialists." If there are areas that aren't immediately understandable, that's an area for documentation improvement or refactoring.
Wednesday, May 14, 2008
Setting the Bar Low
I work in a company that is primarily concerned with doing just enough to get the next contract. That is demotivating for any employee who wants to keep up with technology, or branch out into a new area of development. The problem with that mindset is that it is difficult to convince anyone that any change is a good idea. You will be met with answers such as "It worked fine (we delivered something on time), why change?" and, more simply, "That's the way we do it."
That's a perfectly fine mindset if you are working on an assembly line. But, as I've stated repeatedly, treating software as simple production is to do it a disservice. More importantly, the company will lose its top developers, who quickly tire of doing the same thing over and over. But this is not change for the sake of change; rather, it is change for product improvement: doing it better, and/or faster. Take the old vi argument. Sure, you can develop software in vi, but I can do it much faster with NetBeans or Eclipse. But the metrics are already skewed toward the development rate with vi, so now I've got free time on my hands. What to do with it? I'd like to work on something else, investigate something new, or otherwise improve the product in development, deployment, organization, or maintenance. But I'm told I can't, because "There's no budget" or "You're not authorized." So we keep on happily cranking out buggy whips (no pun intended).
Einstein's old adage "Insanity is doing the same thing over and over and expecting different results" is on full display in a large company. The part that goes unnoticed is that the company wants to keep talented developers, but doesn't realize we're really just doing the same thing over and over. What's really needed is to throw out the existing schedule and truly challenge the developers. Don't give me three months to do a task; give me three weeks, or three days. You will get a radically different solution, simply by being prevented from solving the problem the same old way.
Monday, April 21, 2008
K.I.S.S Applies to Process As Well
K.I.S.S. (Keep It Simple, Stupid, not the '70s band) is a tenet of agile software development. But it should apply to all facets of development, not just writing code. If you've got to go to one server to get requirements, another to view design documents, a third to build your code, and a fourth to view problem reports, odds are one or more of those systems will be out of date. "One server to rule them all" is a much more maintainable approach, and ultimately more useful for everyone involved.
There are a number of great sites organized like this, but SourceForge is probably the largest. One project page contains all the necessary elements for the lifecycle of the project: code, documentation, support forums, bug/feature tracking, and release downloads. A user, a developer, and a manager would all access the same site. This is, as always, by necessity. In a distributed development environment, every developer has to have all the necessary information available to them.
Tuesday, April 01, 2008
The Problem with the Office
No, not Microsoft Office. Not the TV show. I mean the place where you work. You know, your cube. The problem isn't exactly with the office, but rather with having office-mates around. Specifically, the problem is the availability of those people. You know, Bob the database guy, Phil the systems guy, or Jill the tester. If you have a question, you go ask them. If they have a question, they come ask you. What's the problem with that, you ask? The problem is that the resolution to that question is only known to you two. The problem is that if anyone else wants to know what the answer is, they have to ask one of you. That's job security, to some people.
When an open-source developer has a question, or has to make a decision, there's usually no one there to ask. So the developer has to go to the project mailing list, or forum, or website, and ask the question of the group. Then anyone else can see the answer. If anyone thinks the question is important, it can be added to the documentation, FAQ, wiki, or whatever. That way, the documentation represents what anyone really needs to know about the project.
This problem is reflected in the "learning curve" for business projects. Some companies try to overcome this shortcoming by mentoring, shadowing, or other methods of effectively "picking someone's brain." But all they are doing is ingraining the mindset that some person knows the answer and will give it to you if you ask. This is a dangerous process, because it creates many single points of failure. What if Bob the database guy gets hit by a bus? Who else knows how to run the database? If he'd written it down, then someone else would be able to take over. Overcoming the job-security mentality is a tough task. But, assuming that no one will work at the same job for their whole career, a repository of information is absolutely necessary.
Your coworkers may be great people, but stick to talking to them about non-work issues. If it's about the project, create a record of it. If you really like process, create a process to review all the wiki entries, problem reports, and mailing-list threads for possible inclusion into the formal documentation. But, above all else, write it down!
Monday, March 10, 2008
Government Software, An Oxymoron
Or "Software by Earl Scheib: I Can Build that System for $49.99"
I can't wait for Google to start a Federal Systems group. Until they (or someone else with their deep pockets and development methodology) get involved, the rest of us will continue with the status quo. It is, unfortunately, not in our company's best interest to improve the way we develop software. In fact, poor software process is encouraged, because it allows budget and schedule over-runs, otherwise known as Cost Plus and Follow-On Work.
Both sides are at fault for this, but ultimately the blame has to fall on the developer side. The client is only asking for what they want, as they always do. The problem lies with the development staff not explaining to the client that they are making unreasonable requests. Furthermore, allowing the client to dictate the technology used to create the system will most likely not produce the optimal architecture for the problem. Which leads to more Follow-On Work. Perfect: the client gets what they asked for, a convoluted, inefficient system, and we get more money to fix our mistakes later!
Honestly, I can't wait for Google's first meeting with govvies, where they say "You'll get your software when it's ready" and watch multiple generals' heads explode. I don't have any idea if Google is crazy enough to enter this arena, but if they do, they will dominate it. Why? Because the existing competition is some combination of lazy and inept. Or, rather, we are happy doing whatever crazy job the customer thinks up next.
I can't believe that the Form-A-Committee-to-Investigate-A-Commission world of the government works as closely as it does with the Long-Haired-Free-Thought world of software development. I know they've had in-house staffs before outsourcing the work, and I think that's pretty good evidence that they don't understand how to do it. But rather than say "Build me my system and give it to me when it's ready," they still want to have control over the process, yet have success at the end. Remember Einstein's definition of insanity?
So why Google? They've got the money to make it work, but more importantly they've got the influence to break the model. Get the government's hands off the software, except for deciding what they want and making sure it works, and you get a better product. Come to think of it, "Get government's hands off... and you get a better product" is a more succinct explanation anyway.
Monday, March 03, 2008
Keep It Small
One of the largest (no pun intended) problems with building government software is too much knowledge at project inception. For the most part, we are rebuilding or adding onto an existing system. Therefore, we think we know where we want the end product to be, and think we know all the problems we will encounter along the way. As Murphy's adage suggests, that is a recipe for disaster, or at least schedule slip. By not building the system as a progression, but instead taking it as one monolithic task, the environment for innovation is diminished (to put it as nicely as possible). Like a used car: if the system works perfectly now, why are we rebuilding it?
The fallacy that "We built this before, so we know what it does" should never be applied to a system redesign. The problem with it is that it does not address all the other things done under the covers to create the end result. The existing system should not be used as a model for the new one, but rather the functionality the existing system attempts to provide should be recreated in the new one. Or more simply put: look at what it does, not how it does it.
This is the problem XP calls Big Design Up Front. Simply put, taking too big a bite results in a system that's too complex to be understood, tested, or extended. Yes, it sounds like the paradox "Can God Create a Rock So Big He Can't Move It?" which, when applied to software, quickly becomes "Can People Create a System So Big They Can't Understand It?" I'm sure you'll agree the answer is yes, and very quickly. In fact, the greater task is preventing that from happening.
So how do we resolve the problem of Big Design Up Front? Stop thinking about it. Take the known pieces of the system, design them such that they can produce query-able, re-usable output (XML Web Services or EJBs in this era), and build it. Define the architecture and the data flows, but keep your hands off the system components. There's enough to do to define or redefine what the users are doing, or better, what they want to do.
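One way to picture "query-able, re-usable output" is a component that publishes its state in a form any consumer can query without touching its internals. This is a hedged sketch with hypothetical names (`InventoryComponent`, `query_count`); the post doesn't prescribe an implementation, and in the 2008 setting the output would more likely be an XML web service or EJB than a local function:

```python
import xml.etree.ElementTree as ET

class InventoryComponent:
    """A small, discrete system piece that publishes query-able output."""
    def __init__(self):
        self._items = {"widgets": 12, "gadgets": 7}   # hypothetical data

    def to_xml(self) -> str:
        """Serialize current state so other components can reuse it."""
        root = ET.Element("inventory")
        for name, count in sorted(self._items.items()):
            ET.SubElement(root, "item", name=name, count=str(count))
        return ET.tostring(root, encoding="unicode")

def query_count(xml_doc: str, item: str) -> int:
    """A consumer that needs only the published XML, not the component."""
    root = ET.fromstring(xml_doc)
    node = root.find(f"item[@name='{item}']")
    return int(node.get("count")) if node is not None else 0

doc = InventoryComponent().to_xml()
print(query_count(doc, "widgets"))  # -> 12
```

The design point is the seam: the consumer depends only on the published XML, so the component behind it can be rebuilt without redefining the architecture or the data flows.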
I propose the Rule of One: if the problem can't be defined, described, and agreed upon in one hour, it is too complex. Most people would groan at a meeting that is scheduled for more than one hour. Therefore, one hour is quite enough time to organize an approach and discuss the issues with that approach. That quickly leads into The Simplest Thing That Could Possibly Work. Additional issues can be addressed in other iterations, but to get the ball rolling requires a quick, basic action.
Friday, February 08, 2008
Happy Birthday, Open Source Software
Bruce Perens wrote an article noting that today is the 10th anniversary of the Open Source Initiative. As discussed on Slashdot, this doesn't necessarily mean ten years of open, collaborative software development, but rather ten years of promoting open software as a viable option to closed, proprietary software.
Today's post is not to discuss the pros and cons of open versus closed software development (Lord knows there's enough of that without me getting involved; see Slashdot again). I'm much more interested in open source development as a model for general software development, and what it offers that closed development does not. Specifically, how can the open-source model create so much (MySQL, Firefox, OpenOffice, Apache, etc.) with so little (development staff, management, time, money, etc.)?
Knowing that the development staff won't always be available forces open source to make a number of decisions about project organization. Because there's no full-time staff, all the assignments have to be handed out in a way that is verifiable, discrete, and workable in isolation. Every developer knows where their piece fits into the application, and all the application information is available and up-to-date. The project has to be self-contained, easy to set up, and able to document its progress. See Mozilla.org's development site for a great model of how to keep the staff informed simply.
Knowing that the staff won't be available requires the use of a software version control system that allows disconnected operation. While not crucial, this model allows the developer to work in isolation until they are ready to submit the changes for approval. This allows the product to be more stable for longer periods of time while development is ongoing, and requires the use of a branching/merging policy to handle bugfixes and new feature development.
Because the development staff may change at any time, code reviews are given a higher priority. There's little room for the attitude that "He did good work last time, so it can slide." Everything has to be reviewed, documented and tested long before hitting the main baseline. Again, this keeps the stable release as stable as possible and removes many opportunities for error.
Open software development follows the model "Give them the tools they need and get out of the way." Innovation and creativity can flow freely in an environment like this, at lower cost and with higher productivity. But corporate development is stuck in a Catch-22: managers won't go for a "radical" change like this, since it would put them out of a job. So even though we lowly developers know there is a better way, we won't be allowed to use it. And that's very sad.
Tuesday, February 05, 2008
Making Programming Boring
Back to software stuff, a day late again.
My company is working very hard to make programming boring. Maybe boring is too strong of a word, but certainly mundane, simple, and straightforward. But, in the usual double-whammy of bad idea and impossibility, this is a move in the wrong direction.
The problem lies in identifying and defining the task. If the task is so tightly defined that no innovation can take place, then the project is bound to stagnate. While there are circumstances where a tight definition of requirements is both desired and necessary, most situations would dictate that your development staff knows more than you do about the domain of the problem to be solved. Joy's Law states that "Most intelligent people work somewhere else," which is really a corollary to "50% of the people are below average." That simple fact can be used to argue that both the designer and programmer are average at best. But it also says that there may be someone on the staff who does "think outside the box," which is only possible if the box is not closed on all sides.
Programming is not brick-laying, or assembly-line work. People may mistake the "Cathedral and Bazaar" analogy as indicating that the work must either be done by skilled craftsmen or by "workers." Software is neither a blank canvas nor a paint-by-number. It is somewhere in between, and each absolute end is used very infrequently. The innovative work done by a company like Google was either designed or created by a few, very smart people and then passed to less-smart (again, invoking Joy's Law) people to implement or use and expand upon.
Innovation is what created the software technology we have today. Trying to over-engineer the design and implementation to the point of stifling innovation will lead first to a loss of talent, who don't want to work where their ideas aren't recognized, and ultimately to a loss of market share to companies that can innovate and create software more quickly, at lower cost, while taking better advantage of existing technologies. If all you know how to make is widgets, and all your people know about is making widgets, then you will be oblivious to anything that solves your problem better than a widget.
Monday, January 21, 2008
There's No Such Thing as Bottom-Up Design
Yes, I'm stealing "There Ain't No Such Thing As A Free Lunch" from Robert A. Heinlein as the title this week. It's catchier than "Bottom-Up Design is Really Stupid and Hard to Do Right." I'm not saying that it is impossible, but only because I believe that nothing is impossible given enough time, talent, and organization. But Bottom-Up design is much more difficult to manage than Top-Down design, for a number of reasons:
- Management of many incomplete services is much more time-consuming than management of a few complete services. The ramp-up for staffing to begin defining interfaces and their interactions is much steeper than working slowly and getting the first part working correctly before beginning step two. This ramp-up smacks right into the Mythical Man Month formula that group communication equals n(n − 1) / 2, where n = the number of developers. But by working slowly, the group can reach consensus on one point much more quickly than by making decisions on many unknowns. As a result, far more time than necessary is spent monitoring the progress of many partially functioning services.
- Bottom-Up implies Top-Down, but not vice-versa. This one should be obvious enough to be the nail in the coffin of Bottom-Up design on its own. Any class/function needed in a Bottom-Up design should only be created to fulfill a need. But what defines the need? That's right, some sort of Top-Down analysis. But then, why perform a perfunctory analysis and stop there? If you need to declare the situation in which you need the function, it makes sense to continue defining the situation to an implementable solution, rather than defining ALL possible situations with little effort given to the interactions of those situations.
- Bottom-up design fails another Brooks truism: create a working prototype and "grow" the software iteratively. By trying to define everything that needs to be done at once, the developer may miss out on opportunities to refactor the design. Creating a working prototype allows the developers to focus on what IS known, rather than spending time and effort trying to define everything that isn't known yet.
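The communication overhead in the first point is easy to make concrete. A minimal sketch of Brooks's formula, showing how coordination paths grow quadratically with team size:

```python
def communication_paths(n: int) -> int:
    """Brooks's Mythical Man Month formula: the number of pairwise
    communication channels among n developers is n(n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the team far more than doubles the coordination burden.
for team_size in (3, 5, 10, 20):
    print(team_size, communication_paths(team_size))
# 3 people have 3 channels; 20 people have 190.
```

This is why reaching consensus on one small, concrete piece at a time beats debating many unknowns across a large ramped-up staff.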
Monday, January 07, 2008
LOC is a Pointless Metric
Measuring programming progress by lines of code is like measuring aircraft building progress by weight. (A quip commonly attributed to Bill Gates.)
If the head of a major company holds that opinion, what could that imply? To me, it means that you are tracking something that has no direct bearing on the end product. It's related to the product, no doubt about it, but it's a function of other factors, most notably functions or use cases, depending on if you are developing bottom-up or top-down.
The pointlessness of LOC as a metric is directly related to its nebulous nature. What makes a line depends on how you measure it. Is a line truly one line in the source, or is it more like a statement? But what defines a statement? A semicolon in Java? Anything between braces? Something else? When building in assembly, each line of code actually did something to the computer. But how do you count an external library function call? It's an over-simplification to think that a line takes x amount of time to create and test, since each line is almost totally unique from the one preceding it.
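To see how slippery the definition is, here's a toy sketch that counts the same Java-style snippet three plausible ways and gets different answers each time:

```python
# The same snippet counted three ways gives three answers,
# illustrating that "a line of code" is purely conventional.
source = """int x = 0; int y = 1;
if (x < y) {
    x++;
}
"""

# Way 1: physical source lines.
physical_lines = len(source.splitlines())

# Way 2: semicolon-terminated statements (two share the first line).
statements = source.count(";")

# Way 3: non-blank lines, excluding lone braces.
non_brace_lines = sum(
    1 for line in source.splitlines()
    if line.strip() and line.strip() not in ("{", "}")
)

print(physical_lines, statements, non_brace_lines)  # 4 3 3
```

Three conventions, three different "sizes" for the same code, before we even get to generated lines or library calls.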
A LOC ain't what it used to be. How do you count lines when some of those lines are generated? And then, when your application gets refactored (you do refactor software, don't you?) and your LOC goes down, then what? And there's time spent choosing a code-generation tool and learning its syntax, time that can't be accounted for until the tool is chosen and integrated.
LOC is considered useful for the very reasons that make it useless. You are tracking and managing one of the details of the project as if it has a bearing on the progress of the project. Imagine if my productivity were measured by the feet I walked per day. Fewer feet = more time at my desk = more productivity. Simple, measurable, but totally inaccurate. Simply because a LOC cannot be defined other than by convention, any use of it as a metric is debatable at best.
So what instead? Simply, bid per high-level requirement. Each requirement costs x dollars to implement, test, and maintain. It's not a perfect system, but it would save time and money during analysis by not pretending to perform an estimate on anything other than the functionality that has been requested.
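The arithmetic of that bidding model is simple. A minimal sketch, with entirely hypothetical requirement names and dollar figures:

```python
# Hypothetical per-requirement bid: each high-level requirement
# carries one cost covering implementation, test, and maintenance.
requirements = {
    "user login": 12_000,
    "report export": 8_000,
    "audit trail": 15_000,
}

total_bid = sum(requirements.values())
print(f"Total bid: ${total_bid:,}")  # Total bid: $35,000
```

Scope changes then become a line item (add or drop a requirement) rather than a renegotiation of an opaque LOC-based estimate.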
Monday, December 31, 2007
How Burned Out Are You?
When it comes to career burnout, I've been a student for a couple of years already. During that time, I've been able to reflect on my career and learn some useful things about burning out. There are many factors affecting burnout, the simplest being working too much and not having any time left for yourself. But it all boils down to a few simple questions:
- Am I as excited about my future here as I was when I started?
- Does the situation look likely to improve in the time I'm willing to wait?
- What aspects of this job are the most frustrating?
- How would I like to change them?
I like to look at burnout not as a yes/no proposition, but as a series of levels. There are many different aspects of work which you may be tired of. Let's look at my case as an example.
- Customer: NGA. Typical government agency full of bureaucracy and political agendas. Not the best environment to develop software in.
- Project: GeoScout. Multi-company effort to overhaul NGA's hardware and software. The problem is that no one understands what it does now, so it can't effectively be rebuilt. Plus the development is hosted on a virtual server halfway across the country on a shaky network. Need I say more?
- Company: Lockheed Martin. We keep trying to make everything like making rockets, and you know what? It doesn't work very well. But our main job is to convince our customers that we are doing a good job, so they keep hiring us. Joy's Law is definitely in effect here, but no one seems to notice. "We Never Forget Who We're Working For." Yeah, not me.
- Location: Colorado. Can't say too much bad about Colorado that can be said about any other big town. So there's one aspect of my career that I can probably keep. Although Ireland is looking very nice this time of year, or I'll have to buy a snowblower.
- Industry: Software Development. Well, to be fair, government software development. Most of my complaints are with the way we're told/forced to do our jobs, when there are obviously better ways to do development out there.
Tuesday, December 18, 2007
UML and Paralysis by Analysis
Taking a break from Government complaining and back to work.
First point: UML is no silver bullet. It is only as good as the people producing the diagrams and the people reading them. If the diagrams are produced by someone who doesn't understand the problem, how can they specify how to solve it? And if they need to be specified down to the implementation level, what does that say about your developers? UML works better when used in line with Brooks's original hypothesis that software should be iteratively grown. Build the parts that you can understand, and add (and rebuild) new parts once you have the framework done.
Second point: UML is usually done to death. It's similar to designing a house down to the cut lengths of each board. The additional time and effort in designing to that level is lost when an error is made. However, the error won't be determined until elaboration or construction. So what's the point in trying to design the interaction between all the components, none of which exist yet? UML should only be created to describe the problem to be solved to the level that a developer can understand it and ask questions. Using it to describe the entire application creates a waterfall model, instead of continually refining the model as the application evolves.
Sunday, October 21, 2007
On Software and Reluctance to Change
Even in a large, professional, closed-source development office, open source software is held in high regard by software developers. This is countered by management's distaste for it on the grounds that "We have no one to hold accountable if it fails." But why does open source have this (well-earned) reputation?
Open source is even more evolutionary than closed source. Closed source has the advantage of being developed by a paid staff. Open source developers have a much freer choice of projects to support, and the very few projects that are well organized, well planned, and well executed will survive. Anybody can start a new web server project, but very, very few will be better than Apache. In an environment without such direct competition for developer resources, a closed source solution can flounder on for years, and the developers may not even know that there is a better way! Think of it like those fish that have evolved to live in caves without daylight. Those fish are the proverbial "big fish in a little pond." Put them out in the sunlight and they'd be killed! But they do fill their niche quite well.
Open source has to have the "Wow" factor. As an outgrowth of the first point, no developer would want to work maintenance on an end-of-life project if they didn't have to. But closed source doesn't necessarily have that constraint. By tying the customer into a set of technologies, the closed source project makes changing from that vendor that much more difficult. Especially in this era of the DMCA, trying to figure out what a vendor did with your data may end up in court! There is some of that in the open source world, but once you can see the source, reverse engineering the data becomes that much easier.
Open source has a long history of being dismissed as a "hobby." This is the battleship mentality of large, closed-source software developments. What they fail to realize is that all those CMMI certifications have no correlation with producing a stable, complete product, and that those hobby programmers actually have to keep stricter controls over feature development, release schedules, and so on. By its nature, open source development depends on a central core of "committers" to control all the distributed effort. Getting volunteer help to understand a job quickly and to perform it to specification is absolutely critical to the success of the project. All the development processes are driven by that point. So reassigning a task to anyone who wants to work on it must be possible with a minimum of effort from both the developer and the committers.
If closed source is like a supercomputer, capable of mind-boggling levels of effort, then open source has the capability to be like SETI@home, reaching levels of performance never before imagined!