I came across some relatively cheap noise canceling headphones on eBay, some Sony MDR-NC11A. I don’t necessarily want to do an in-depth review of them, but figured I’d mention a couple of things. On the plus side, I got them at a reasonable price, and with them I seem to be able to keep the volume lower on my MP3 player. On the minus side, there are a few things. The small box containing the noise canceling electronics is a bit of a pain; it has to be clipped on somewhere or the headphones start to get pulled out of the ear. I’m sure I’ll go through batteries quicker than I should by forgetting to turn off the noise canceling circuit. Lastly, when the circuit is on it adds some noticeable noise to the output, sounding something like tape hiss. Reducing the low to midrange rumble of train wheels and forced air may be a reasonable tradeoff for a small amount of full range noise, but it’s not a free lunch.
November 6, 2006
November 5, 2006
The other day I was listening to a podcast of the Penn Jillette radio show. It was an episode with guests Bob Saget and Tom Bergeron. At about 36:00 minutes in, Penn starts telling a story about his good friend Rob Pike. Now, I’ve heard of Rob Pike. I’m impressed that Penn knows him; I own books Pike has written, and in many other ways his is a name worth dropping. On the other hand, I can understand why the contexts where I have heard of Pike wouldn’t be ones that Saget would come across. When Jillette says “he’s a scientist at Google”, Saget’s initial reaction is “A scientist? Isn’t Google just a search button?” (to which Penn replies “Yes, but someone has to make all of the stuff behind the button work.”) Even so, Saget loved vamping on roles he thought would sound as silly as a scientist at Google (“A nuclear physicist at Yahoo?” “A heart surgeon for myspace?”)
I don’t blame Bob Saget for not knowing about Rob Pike or the size of Google R&D. I just find it a bit odd that my view of the world is so skewed that it wouldn’t occur to me that it would surprise people.
In Why Software Sucks, Chris Stewart comes up with a statistic that 80% of projects fail. I’m guessing he is misquoting a Standish Group report, whose figure is more accurately quoted in Best Practices for software development projects. It says “over 80% of projects are unsuccessful either because they are over budget, late, missing function, or a combination.” I don’t have the $99 for the most current version of their report, but the 1994 version of The CHAOS Report is online on their website.
From what I’ve read in places like IT Myths (#5 Most IT Projects Fail), the Standish Group’s sole purpose is to study and report on the success and failure rates of corporate IT. Their criteria for success are pretty high. From what I can see, higher than for any sort of project initiative I’ve seen at any company, whether the project involves IT or not. I don’t think I’ve gotten a better success rate from home improvement contractors, electricians, or plumbers working on my house. I’m not saying Standish’s criteria are wrong or should be more lax. People making business decisions should know this sort of information. (If nothing else, don’t get so wrapped up in the dream of what things will be like at the project’s success that you ignore the fact that it might fail.)
It might be that his leading off with an incorrect figure to prove his point colored my view of the rest of his essay. Or maybe he is as naive as I think he is.
Yes, agile approaches to development are great when you can engage the “customer” fully. On the other hand, there are drawbacks. What if there isn’t one person who can speak for all the stakeholders? (Either the boss doesn’t know enough about what her workers do to give the right answer, or she nominates one employee to speak to what they need, and that one employee’s view winds up being different from the rest’s.) What about the overhead of small iterations? Each iteration has to have some sort of design and QA phase. In many environments, developers would be better off getting more feedback sooner, and being able to keep closer to what is wanted or needed (similar to what the book The Pragmatic Programmer describes as “tracer bullets”). On the other hand, if an organization is really set up to get it done in one shot, then it could probably save time and money by dispensing with the overhead of intermediate iterations.
Stewart complains about needless abstractions that make a project harder to understand for people coming into it, and that are of no immediate benefit to the customer. The counterpoint is the needful abstractions that keep a software project flexible enough to allow these customer-driven changes to be implemented quickly and easily. No, the customer doesn’t care whether what you’ve built has all of the database related functions in a data access abstraction layer or SQL scattered throughout the project. He will care if you can’t make schema changes quickly enough to accommodate the changes in his business or product line.
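As a rough illustration of that tradeoff (a hypothetical sketch of my own, not anything from Stewart’s essay; the table, class, and method names are all invented), keeping the SQL behind one data access layer means a schema change touches one module instead of every call site:

```python
# Hypothetical data access layer: all SQL for the customers table
# lives in one class, so callers never see SQL strings. If the schema
# changes, only this class changes.
import sqlite3


class CustomerStore:
    """Owns every query against the (invented) customers table."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customers "
            "(id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        # Returns the new row's id.
        cur = self.conn.execute(
            "INSERT INTO customers (name) VALUES (?)", (name,)
        )
        self.conn.commit()
        return cur.lastrowid

    def find_by_name(self, name):
        # Returns an (id, name) tuple, or None if not found.
        return self.conn.execute(
            "SELECT id, name FROM customers WHERE name = ?", (name,)
        ).fetchone()


# Usage: the calling code works with methods, not SQL.
store = CustomerStore(sqlite3.connect(":memory:"))
cid = store.add("Acme")
row = store.find_by_name("Acme")
```

If the customers table later gains columns or gets split across tables, the scattered-SQL version forces a hunt through the whole codebase; this version confines the change to one class.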
Simple software is quick and easy to develop and modify, so software should be as simple as it can be given what it needs to do. But most software isn’t complex based on the whims of the developer. Most software is complex because the problem it is trying to solve is complex.
Decisions between agile development and detailed analysis, abstract and flexible design or simple and concrete, being able to deliver quickly or being able to maintain for 5+ years, are all engineering tradeoffs. Which tradeoffs to choose depends on the project and customer at hand. Someone who says there is only one solution is probably going to fail badly when they try to apply the same cookie cutter solution to a different problem and environment.