Now, these pagers would have been totally useless because we (a) wrote really good code, so there just weren't many defects for users to deal with, and (b) the only consistent problems users faced were network related, not program related. So we would have spent our time tracking down networking people. My immediate thought was that the users could do that directly by -- drum roll please -- giving the network people the pagers.
So I looked my boss in the eye and said, very deliberately, "When I and my programmers write such bad code that we need to carry pagers, **I** will find another line of work." And then I said nothing more, I just looked at him.
Wow, that is arrogant. On your side, not the boss's.
To be fair: pager duty would require additional compensation, but no one writes flawless code, and if you want systems without downtime, pagers are the only way that has been shown to work.
No, we didn't write flawless code. But we damn well knew what code was absolutely essential to work correctly, and we made sure it did. Problems in 99.9% of the remaining code could wait until morning to deal with. It was the nature of the business we were in. The network people, on the 0.1% of occasions when we actually needed to come in off hours, could call us and we would come in. In the six years I was there, no one ever had to call us in...
The boss's plan would have had us called 100% of the time, and us calling the network people to come in 98% of the time. The other 1.9% would be "we'll fix the software in the morning," and the remaining 0.1% would have us coming in right away. That was just plain bass-ackward, and that's why I refused.
PS -- I had to put only about 2% of our manpower on fixing defects in our software. The rest went to new software development, either greenfield work or enhancements to existing code because the business wanted more functionality or laws or contracts had changed. I've read that industry averages for fixing software are in the 50-80% range.
I had really good people working for me, we had good processes, we got specifications from really good analysts who worked with really savvy business people. I've written a lot of conference papers and technical articles over the years to teach people good techniques to improve software quality and software development speed. I used to present 1-3 papers at 5-6 software conferences a year and have been an editor or contributing editor for 4 different technical magazines.
For example, I was presented with a specifications problem in the business. Corporate HQ had tried to solve it with their software, and the resulting product required four full-time programmers just to get the data representing cargo containers "unstuck" so users could continue to move the containers around and record what they were doing. They showed me the 12 SQL statements they used to verify what the users were allowed to do. Each was over a dozen printed sheets of paper long. I'm damn good at SQL, but it would take me a couple of days to work out what one statement was doing, and by that time my brain was full. And there were 11 more statements to go thru.
I took the same problem and in a few weeks had developed a technique to analyze the requirements, then used it to produce the specs. I designed and built a specifications database to hold a series of decision tables, plus a business-rule-based engine that would populate the tables from simple business rules that a user could easily verify for correctness.

In the process of the analysis, I identified 14 variables and 25 events that yielded 122.5 quadrillion combinations of pre-event states, events, and post-event states. It took fewer than two hundred business rules to weed those down to less than a thousand allowable combinations. Adding additional variables into the mix was simple and quick to do, so when we identified a rule that needed an extra variable we could incorporate it within an hour, including re-validating our rules against the presence of the new variable. My initial "best educated guess" had identified 5 variables and 25 events, so I went thru the new-variable drill another 9 times.

A business rule consisted of a business-English statement and a SQL where-clause fragment tied to the variables and events (and their respective values) pertinent to the rule. The where clause needed to be written such that we could say, "This rule is broken when...". Example rule: "We do not sell containers we do not own." This rule is broken when: Ownership != 'our company' AND event = 'Sell'. A script would run thru the event database and mark all combinations that broke the rule as invalid.
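The approach above can be sketched in miniature. This is an illustrative reconstruction, not the original system: the variables, events, and rules here are toy stand-ins, and it uses Python's built-in sqlite3 where the original presumably used a production database.

```python
# Sketch of the decision-table technique: enumerate every combination of
# pre-state / event / post-state, then mark as invalid any combination
# that breaks a business rule expressed as a "broken when" WHERE fragment.
import itertools
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE combo (
    pre_ownership TEXT, event TEXT, post_ownership TEXT,
    valid INTEGER DEFAULT 1)""")

# Toy state space: one variable (ownership) on each side, three events.
ownership = ["our company", "other company"]
events = ["Sell", "Buy", "Move"]
for pre, ev, post in itertools.product(ownership, events, ownership):
    conn.execute(
        "INSERT INTO combo (pre_ownership, event, post_ownership) VALUES (?,?,?)",
        (pre, ev, post))

# Each rule pairs a business-English statement with a SQL fragment phrased
# as "this rule is broken when ...".
rules = [
    ("We do not sell containers we do not own.",
     "pre_ownership != 'our company' AND event = 'Sell'"),
    ("Selling a container transfers ownership away from us.",
     "event = 'Sell' AND post_ownership = 'our company'"),
]

# Mark every combination that breaks any rule as invalid.
for statement, broken_when in rules:
    conn.execute(f"UPDATE combo SET valid = 0 WHERE {broken_when}")

total = conn.execute("SELECT COUNT(*) FROM combo").fetchone()[0]
allowed = conn.execute("SELECT COUNT(*) FROM combo WHERE valid = 1").fetchone()[0]
print(total, allowed)  # 12 combinations, 9 still allowed
```

The same shape scales to the real case: the full cross product is generated mechanically, and a couple of hundred rules prune quadrillions of combinations down to the allowable few.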
Advantages to doing it this way were seriously important:
A generic routine could be used to determine whether a pre-event state, event, and post-event state were a valid combination. Much easier to code and test than covering all those combinations with individual if-then statements.
If someone wanted to change a rule, I could run a query to determine which combinations would be newly forbidden by the new rule and which would be newly allowed by the new rule. That way, we could get useful feedback so the users could refine the rule before we put it into production. "What? No, we can't let that happen! Let's modify the rule so..."
Not only that, but if a user (or developer) wanted to know why they weren't allowed to take an action they wanted to take, we could give them a list of all pertinent business rules.
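The rule-change feedback loop described above can be shown with a small sketch. The schema and rules here are assumptions for illustration: since both the old and proposed rules are just WHERE fragments, two queries report exactly which combinations a change would newly forbid or newly allow.

```python
# Sketch: preview the effect of a proposed rule change before production.
# "Newly forbidden" = broken by the new rule but not the old one;
# "newly allowed" = broken by the old rule but not the new one.
import itertools
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE combo (ownership TEXT, event TEXT)")
for own, ev in itertools.product(["our company", "other company"],
                                 ["Sell", "Buy", "Lease"]):
    conn.execute("INSERT INTO combo VALUES (?, ?)", (own, ev))

old_rule = "ownership != 'our company' AND event = 'Sell'"
new_rule = "ownership != 'our company' AND event IN ('Sell', 'Lease')"

newly_forbidden = conn.execute(
    f"SELECT * FROM combo WHERE ({new_rule}) AND NOT ({old_rule})").fetchall()
newly_allowed = conn.execute(
    f"SELECT * FROM combo WHERE ({old_rule}) AND NOT ({new_rule})").fetchall()

print(newly_forbidden)  # [('other company', 'Lease')]
print(newly_allowed)    # []
```

Handing the users that diff is what enables the "What? No, we can't let that happen!" conversation before the rule goes live.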
One of the key data entry screen languages we worked in allowed us to load up the source code into a structured database. I wrote scripts to verify standards compliance or to modify code components to meet new standards. Other scripts would write reference manuals for the data entry screens.
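A standards-compliance script over code stored in a database might look like the following. The schema and the naming standard are invented for illustration, the original screen language and its conventions aren't specified in the text.

```python
# Sketch: once source code lives in a structured database, a standards
# check is just a query plus a pattern. Here we flag data-entry fields
# whose names don't follow an assumed 'fld_' naming convention.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE screen_field (screen TEXT, field TEXT)")
conn.executemany("INSERT INTO screen_field VALUES (?, ?)",
                 [("CNTR01", "fld_owner"), ("CNTR01", "Owner2"),
                  ("CNTR02", "fld_status")])

violations = [(s, f)
              for s, f in conn.execute("SELECT screen, field FROM screen_field")
              if not re.match(r"fld_[a-z0-9_]+$", f)]
print(violations)  # [('CNTR01', 'Owner2')]
```

The same query-over-code idea drives the other scripts mentioned: auto-fixing components to meet a new standard, or walking the screen definitions to emit reference manuals.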
In later years, I would write scripts to read a database design and produce prototype business specifications for the data maintenance screens and reports. I would write other scripts that would read the database design and identify likely business rules and record them as candidate rules in a rule database. If the rule was approved, other scripts would write the database enforcement code with either fully working code or a stub marked with an @ToDo marker and the specifications the code should meet. Data entry screens knew how to read the business rule database so a generic routine could tell the user what rules were being violated or just which ones applied to the data. All of these techniques removed human labor and the vagaries of human error from large portions of the system. So, instead of spending our time finding and fixing random defects in simple to intermediate code, we could spend much more attention on the key parts of a system that really needed to be correct.
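The stub-generation step can be sketched as follows. The trigger template, table name, and rule texts are illustrative assumptions; the point is the branch between emitting working enforcement code and emitting an @ToDo stub that carries its own specification.

```python
# Sketch: generate database enforcement code from an approved rule.
# If the rule's "broken when" fragment is known, emit a working SQLite
# trigger; otherwise emit a stub with an @ToDo marker plus the spec.

def enforcement_stub(rule_id, statement, broken_when=None):
    if broken_when:
        # SQLite idiom: SELECT RAISE(ABORT, ...) WHERE <violation condition>
        check = (f"SELECT RAISE(ABORT, 'Rule {rule_id} violated')\n"
                 f"    WHERE {broken_when};")
    else:
        check = f"-- @ToDo: enforce rule {rule_id}: {statement}"
    return (f"CREATE TRIGGER enforce_rule_{rule_id}\n"
            f"BEFORE INSERT ON container_event\n"
            f"BEGIN\n"
            f"    -- {statement}\n"
            f"    {check}\n"
            f"END;")

working = enforcement_stub(
    7, "We do not sell containers we do not own.",
    "NEW.ownership != 'our company' AND NEW.event = 'Sell'")
todo = enforcement_stub(
    8, "Containers under customs hold may not move.")
print(todo)
```

Generated code of this kind is boring and uniform by design, which is exactly what lets human attention go to the parts of the system that really need to be correct.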
So no, I don't think that I was being arrogant. :)