Prestige in software


There is a question I like to ask software developers: “What would you like to be doing in 10 years?”

There are almost no developers who answer that with “writing software”.

It is actually so bad that people who have been in software for 10 years feel like they should start moving into management. Not because they want to, but because they feel they should. It is not for economic reasons, as a good software developer likely makes as much money as a manager, and not for time reasons either, as managers don’t tend to work fewer hours. So why do people want to leave a field that offers evolving work (you won’t really get bored, as things always change), a good market (yes, there have been dips) and the opportunity to be creative?

I’m sure that there are many reasons, but one that interests me is prestige.

Let’s consider some other jobs; count how many you would rate above “Software Developer” in social prestige:

  • Firefighter
  • Real-estate agent
  • Scientist
  • Actor
  • Teacher
  • Banker
  • Doctor
  • Accountant
  • Military officer
  • Entertainer
  • Nurse
  • Stockbroker
  • Police officer
  • Journalist
  • Priest / Minister / Cleric
  • Union Leader
  • Farmer
  • Business executive
  • Engineer
  • Athlete

If you ended up with more than 10, you believe that software developer falls in the bottom 10 of these 20 jobs (http://www.usnews.com/usnews/biztech/articles/070802/2prestige.htm). Yes, I butchered some statistics there, but you get the idea.

So if software developer has such a crap reputation, why would anyone want to stick with it? Yes, the money is great, but at some point respect would be nice to have as well.

This will not change very fast, but for the people that stick with it there will be a great paycheck for putting up with being looked down upon.

 

 

Posted in Uncategorized | Leave a comment

BindingList Extensions


When working with binding lists you often have a scenario where you have to update the list you have with the list from the database. If you want to make sure that any selections the user has in the UI are retained, you have to make sure that matching items are updated in place rather than removed and re-added.

 

These extensions are intended to accomplish that task.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ComponentModel;

namespace Foo
{
    public static class BindingListExtensions
    {
        public static void QuietMerge<T>(this BindingList<T> source, IEnumerable<T> update, IEqualityComparer<T> Matcher)
        {
            // suppress change notifications during the merge, then raise a single reset
            source.RaiseListChangedEvents = false;
            source.Merge(update, Matcher);
            source.RaiseListChangedEvents = true;
            source.ResetBindings();
        }

        public static void Merge<T>(this BindingList<T> source, IEnumerable<T> update, IEqualityComparer<T> Matcher)
        {
            var targetlist = new List<T>(source);
            targetlist.Reverse();

            // trim the list: remove items that no longer appear in the update
            var removes = targetlist.Except(update, Matcher).ToList();
            foreach (T w in removes)
                source.Remove(w);

            // update the surviving items with the data from the matching update entries
            foreach (var w in targetlist.Except(removes).Join(update, x => x, x => x, (x, y) => new { existing = x, updated = y }, Matcher))
            {
                w.existing.MergeDataByReflection(w.updated);
            }

            // add new entries.
            targetlist = new List<T>(source);
            foreach (T w in update.Except(targetlist, Matcher))
            {
                source.Add(w);
            }
        }

        public static void MergeDataByReflection<T>(this T merged, T original)
        {
            merged.MergeDataByReflection(original, new List<string>());
        }

        public static void MergeDataByReflection<T>(this T merged, T original, IEnumerable<string> ignoreList)
        {
            // copy every writable property value from the original onto the merged instance
            var propsToSet = typeof(T).GetProperties().Where(x => !ignoreList.Contains(x.Name));
            foreach (var prop in propsToSet.Where(x => x.CanWrite))
            {
                prop.SetValue(merged, prop.GetValue(original, null), null);
            }
        }
    }
}
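
The reflection-based copy can also be useful on its own. As a quick, hypothetical illustration (the Customer type and its properties are made up for this example and are not part of the extensions above):

using System;
using System.Collections.Generic;

namespace Foo
{
    // Hypothetical type, only here to illustrate MergeDataByReflection.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string City { get; set; }
    }

    public static class ReflectionMergeExample
    {
        public static void Run()
        {
            var existing = new Customer { Id = 1, Name = "Old name", City = "Old city" };
            var fresh = new Customer { Id = 1, Name = "New name", City = "New city" };

            // Copy every writable property from 'fresh' onto 'existing',
            // except the ones named in the ignore list.
            existing.MergeDataByReflection(fresh, new List<string> { "Id" });

            Console.WriteLine(existing.Name); // prints "New name"
        }
    }
}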

And in case you don’t want to write a full comparer class every time, here is something that lets you build one from lambdas:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Foo
{
    public class EqualityComparer<T> : IEqualityComparer<T>
    {
        public EqualityComparer(CommonDelegates.Test<T> test, CommonDelegates.HashCode<T> hashcode)
        {
            Test = test;
            Hashcode = hashcode;
        }

        readonly CommonDelegates.Test<T> Test;
        readonly CommonDelegates.HashCode<T> Hashcode;

        public bool Equals(T x, T y)
        {
            return Test.Invoke(x, y);
        }

        public int GetHashCode(T obj)
        {
            return Hashcode.Invoke(obj);
        }
    }

    public static class CommonDelegates
    {
        public delegate bool Test<T>(T x, T y);

        public delegate int HashCode<T>(T x);
    }
}
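
To tie the two pieces together, here is a rough usage sketch; the Person type, its properties and the choice to match rows on Id are assumptions made up for this example:

using System.Collections.Generic;
using System.ComponentModel;

namespace Foo
{
    // Hypothetical DTO, only here to illustrate the extensions above.
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class MergeExample
    {
        public static void Refresh(BindingList<Person> boundList, IEnumerable<Person> fromDatabase)
        {
            // Rows with the same Id count as the same row; their data gets updated in place.
            var byId = new EqualityComparer<Person>(
                (x, y) => x.Id == y.Id,
                p => p.Id.GetHashCode());

            // Removes rows missing from the update, copies data onto the surviving rows,
            // adds new rows, and raises a single ListChanged reset at the end.
            boundList.QuietMerge(fromDatabase, byId);
        }
    }
}

Because the surviving rows are never removed and re-added, whatever the user had selected in the bound UI stays selected.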

Posted in C#, Syntax | Leave a comment

Bad UX in Airplanes


So we have been watching Air Crash Investigation (http://en.wikipedia.org/wiki/Mayday_(TV_series)), and it is very interesting to see the in-depth analysis of all the factors that cause airplane crashes. One common theme that arises is that bad UX actually contributes to a lot of airplane crashes; it is usually not the cause on its own, but it sure helps make bad things worse.

For instance, on a ground controller’s display there is a 7-character field next to each plane that shows the expected altitude and the broadcast altitude of the plane. So in the case of this episode, the display “360-370” meant that the plane was reporting that it flew at 370 and was supposed to be at 360. If the broadcast stopped, the display would read “360Z370”, in which case the last broadcast was 370 and the expected was 360. No alarms, no color change, just a character in the middle of a 7-character string implying that half of the remaining digits are garbage. Now that on its own is not something that will crash a plane, as there is very little reason to turn the broadcast system off. http://en.wikipedia.org/wiki/2006–2007_Brazilian_aviation_crisis

Unless you put the button behind the pilot’s footrest.

And you display an error message next to the button; presumably below the foot. No alarm, nothing else.

And then you add another plane and you have many dead people spread out over the jungle. And after that there is now an alarm if this system turns off.

Another case had to do with a bad airspeed sensor; apparently an insect had built a nest in it. http://en.wikipedia.org/wiki/Birgenair_Flight_301

The system created contradictory warnings, and lacked some warnings that would have been expected. For instance, the plane went from an overspeed warning into stall warnings, and managed to confuse the pilots to their deaths. A “conflicting airspeed readings” warning and a training scenario for it are now standard.

Another case had a UX failure on the maintenance side. There were two very similar-looking modules that gauge fuel reserves; they could actually be swapped between two different aircraft models and would still turn on and give readings. The problem was that the readings would be off: with the wrong module in the wrong plane the tanks would display as half full rather than empty. Again, this is something that is easy to fix by putting a key into the socket so that the modules are no longer interchangeable. http://en.wikipedia.org/wiki/Tuninter_Flight_1153

All of these issues were fixed, and they won’t happen again. That is part of the reason that planes really don’t go down very often these days. Well, they don’t go down very often in the first world, where liability has made that unacceptable.

It is interesting to see how these small UX issues end up contributing to these disasters, and the only reason they were discovered is because people died. The death of a person is something that has a near infinite cost associated with it in western civilization and therefore causes a very large and detailed process to get down to root causes and implement changes. Especially when it is the death of 100 people at once.

Looking at the small issues at the root of these airline crashes, and at the small UX changes made to prevent them, really drives home how much the world must lose every day to lousy UX. It is a great show to watch to learn to analyze problems, and it really gives you a deeper appreciation for UX.

UX is hardly the root cause in any of these, but if it had been better these events would have never made the news, and no one would have died.

Posted in Uncategorized | Leave a comment

Harnessing confusion


So I get confused pretty easily, and I like that.

Most of the time I get confused because my brain has a hard time storing and sorting the information I’m provided with. The reason for that (as best I understand it) is that my brain tends to store things as expressions rather than data: it does not remember details outside of the system, but remembers the system and the exceptions where my expression does not hold true. This means that when I receive contradictory information I get confused pretty fast and identify it as noise; a more subtle contradiction, though, creates an annoyance. These annoyances can bug me for days or even years (my brain is exceptionally lazy when it comes to storing exceptions).

Yesterday I had one of those experiences;

After listening to a nice recruiter talk for a while about this amazing customer-focused company (I was told about how they did what they did, rather than what they did), I was told that they wanted me to write a short program as part of the interview process. I love those things; coding katas are always fun and I’m always interested in doing them. So that piqued my interest, and I asked for the questions to be sent over so I could complete them.

So it was not a coding kata, it was… something different.

So, leaving the 14-word problem statement out, these were the instructions:

Response instructions:

· Please provide 2 solutions, one implemented in Java, C# or C++ and the other solution described in paragraph form. One approach should optimize for runtime. The other should optimize for space. It is your choice as to which one is implemented and which one is described.

· For each solution, state the assumptions, the average runtime complexity and space complexity (memory usage) and worst-case runtime complexity and space complexity if they differ from average.

 

Important Details:

· The response should include 3 files: One README.txt with the explanation of the approaches and runtime and space calculation. One source code file (e.g. .java, .cs or .cpp) with the solution and one source code file with the unit tests demonstrating the solution.

· Code should be production ready – clearly written, runnable and include documentation.

· Unit tests should cover several examples that demonstrate the code working correctly on common use cases and any relevant degenerate cases.

· Code should not include any UI components, any facility for taking input at the console or any extraneous code not relevant to the solution.

Errr… OK. That was not what I expected. I was hoping for something like this: http://codingdojo.org/cgi-bin/wiki.pl?KataCatalogue; if nothing else, those problem statements are complete. Just the basic problem statement here already had a number of issues that could only be resolved with more information about the exact problem. Or by just making assumptions.

Optimizing based on assumptions is something I dislike; a lot.

Creating an optimized solution that is “production ready” is a really bad idea altogether. Not to mention that “production ready” means something completely different to every person.

So I wrote up the questions I had from the first reading and sent them off.

No response.

During that time I built out a little test harness, and ended up with an even larger set of questions.

No response.

So I sent in what I had built, along with a list of ~10 extra pieces of data that I’d really like to have before committing to any solution.

And I was left with this annoyance in the back of my mind.

So today I worked it out: I was asked by a company touted as being all about the customer to write software that required me to make 10 assumptions stacked on top of each other, ready to be slapped into production.

And to do so without talking to the customer. Their selection criteria for candidates actually evaluate the candidate’s ability to work in a manner counter to the stated company goal.

It is days like these where I like that I get confused easily, and it makes me smile when I figure out why statements annoy me.

Writing good questions is very hard to do; it is arguably an order of magnitude harder than answering them. But that is another topic.

Posted in Software Engineering, Uncategorized | Leave a comment

History in the study of software


One thing that I have enjoyed in the working world, but have had precious little of during my software education is history.

In many fields the focus is either on teaching the formulas or the history. Mechanical engineering focuses heavily on formulas; all the history has been distilled into concise representations. In other fields the focus is on learning from historical examples; economics is a good example of this, where stories are used to illustrate principles.

In software I received neither. Even in my further education, looking at training through IASA or other certifications, there does not seem to be a focus on learning from the past. There is a drive to try and turn software into formulas, and there are people who approach creating those formulas from past data (http://www.construx.com/File.ashx?cid=2868), but there is no focus on storytelling about projects past.

There are some natural reasons for this. We assume that anything done 10 years ago could not possibly be applicable to current-day technologies; in a field that believes technology moves faster than in any other field, it is hard to explain that history has value. It is also hard to actually find the information: sure, there are some (in)famous projects, but compared to the sheer number of development efforts undertaken it is hard to imagine that anyone would even be aware of 1% of them. And lastly, a failure has to be amazingly spectacular before anyone will ever hear of it. In general the only failures that are heard of are those that in some way manage to get into production or have a lot of early promotion, such as Daikatana (http://en.wikipedia.org/wiki/Daikatana) and Duke Nukem Forever (http://en.wikipedia.org/wiki/Development_of_Duke_Nukem_Forever); there is very little information on business efforts that have gone horribly awry.

The only historical development stories that I read in college were in The Mythical Man-Month (http://en.wikipedia.org/wiki/The_Mythical_Man-Month), which had some tales from IBM mainframe development: projects that went over budget, were late, and were brought up to illustrate a principle as to why that happened. But that was all.

Maybe software development would be better off if there were a bigger focus on storytelling, a focus on learning about the great and small efforts that happened in the past. This could help students develop a repertoire of stories to tell around the water cooler and to find parallels between their challenges and those of the past. It may seem like a somewhat trivial skill to develop, but it ends up being essential in execution.

Storytelling can help teach other people the nature of problems; the ability to abstract an issue from your current reality into a tale allows people to see the problem in isolation. When people can see the problem, and recognize it in their own reality, they will be receptive to solutions. For example, in college you will learn about unit testing; in the professional world you will notice that adoption of this practice is… not very good. If your company has a quality issue, then you know that unit testing is something that can alleviate quality problems. But what you have not learned is how to move towards it, which exact types of problems can be attacked this way, or how to offer people a real-world example of this process working and/or failing.

The history of software development might prove a more useful guide to the future than is assumed, and there might be value in starting to compile these stories into a body of knowledge. Once that exists, there can be a generation that learns from the past by studying the past. Jumping past history straight to extrapolated formulas makes the results ring hollow and hinders adoption.

Posted in Uncategorized | Leave a comment

Copying hierarchical data in SQL server


I ran into a problem recently that required me to traverse a hierarchy and create a deep copy of the data in the tables. Expecting this to be a solved problem I turned to Google and was met by cursors. Many, many cursors.

 

I’m one of those people who was taught never to use a cursor, and as a side effect I don’t really have much experience with them. So I went to see if I could do it as a set operation, as nesting cursors tends to end in tears (or so I’ve been taught). After some mulling I came up with a way to do it without a cursor.

Declare @Parrent TABLE( ID int PRIMARY KEY IDENTITY, Value nvarchar(50))
Declare @Child TABLE( ID int PRIMARY KEY IDENTITY, ParrentID int, Value nvarchar(50))

insert into @Parrent (Value) Values ('foo'),('bar'),('bob')
insert into @Child (ParrentID, Value) Values (1,'foo-1'),(1,'foo-2'),(2,'bar-1'),(2,'bar-2'),(3,'bob')

declare @parrentToCopy table (ID int) -- you can make this a collection of parent IDs to copy
insert into @parrentToCopy values (2)

-- before the copy
select * from @Parrent p inner join @Child c on p.ID = c.ParrentID order by p.ID asc, c.ID asc

DECLARE @Ids TABLE( nID INT);

-- copy the selected parent rows, capturing the newly generated IDs
INSERT INTO @Parrent (Value)
OUTPUT INSERTED.ID
INTO @Ids
SELECT
		Value
FROM @Parrent p
inner join @parrentToCopy pc on pc.ID=p.ID
ORDER BY p.ID ASC

-- copy the child rows, mapping each old parent ID to its new ID by row number
INSERT INTO @Child (ParrentID, Value)
SELECT
       nID
       ,Value
FROM @Child c
inner join (select ID, ROW_NUMBER() OVER (ORDER BY ID ASC) AS 'RowNumber' from @parrentToCopy) o ON o.ID = c.ParrentID
inner join (select nID, ROW_NUMBER() OVER (ORDER BY nID ASC) AS 'RowNumber' from @Ids) n ON o.RowNumber = n.RowNumber

-- after the copy
select * from @Parrent p inner join @Child c on p.ID = c.ParrentID order by p.ID asc, c.ID asc

This method is flat, no nesting, and it can be extended to deeper hierarchies. To handle a self-referential hierarchy (where parents can have parents) you would need two passes: one to find all the nodes, and a second to update the references. The crux is in the mapping of old IDs to new IDs:

inner join (select ID, ROW_NUMBER() OVER (ORDER BY ID ASC) AS 'RowNumber' from @parrentToCopy) o ON o.ID = c.ParrentID
inner join (select nID, ROW_NUMBER() OVER (ORDER BY nID ASC) AS 'RowNumber' from @Ids) n ON o.RowNumber = n.RowNumber

This breaks the problem open and gives you a guide to the next step without losing information. If your IDs are GUIDs, or otherwise non-sequential, then you would have to add an identity column to

DECLARE @Ids TABLE( nID INT);

That column would then provide the ordering used to determine the row numbers.

Posted in SQL, Syntax, Uncategorized | Leave a comment

Evolving towards agile


I have always understood agile as being a beast of synergy.

If you followed all the principles of the methodology, they would feed off each other and create a singularity of productivity. Fail on the implementation of some of the aspects and you end up in the well, losing productivity. This means that implementing agile is a gamble for the business, since no one wants to end up in the well in an attempt to catch the high.

It is very unlikely that this is true.

Singularities in any system are rare, and even though productivity and methodologies make for a complex domain, it is unlikely that such a singularity exists, let alone that we could find it. So chances are that there is a path to agile made up of individual steps, where each and every step improves productivity in itself. So what order should those steps take to allow for an evolution towards agile?

We know that the transition towards agile has some pitfalls and that adoption is not always successful. The reason adoption often fails is that some of the steps taken make fundamentally no sense on their own. An example is starting out by going to the business and saying that you want to make the scope of releases flexible if they want to keep the dates fixed. Sure, it is part of the package, and sure, some people will get this to work as a first step; but for many companies this is too big an impact on the organization to warrant the change. And the implementation loses steam or fails completely.

So order likely matters. And possibly there is an order that is maybe slower but less risky.

So let’s consider some of the core technologies that Agile is based upon, and the questions they raise for the business:

  • Customer integration
    • Are developers fit to talk to customers (social skills, etiquette)
    • Do the developers speak the language of the customers
  • Business integration
    • Does development and the business speak the same language
    • Is there mutual respect
  • Self-organizing teams
    • Is there enough contact between the teams to allow for organization
    • Does the organization have the capabilities to support this
  • Retrospectives
    • Is there a culture of constructive criticism
  • Short iterations
    • What are the current commitments of the development team
  • Dynamic prioritization of tasks
    • What is the current workflow associated with prioritization
  • Unit of work segregation
    • Does development share the same language
    • Is there a tracking mechanism
  • Visualize project status
    • Is there space
    • Is there interest

*(I had to create my own list as I did not find suitable information elsewhere. I’m sure there are technologies that people use that I have missed, but these were the core ones that I have seen in implementations.)

Over the next set of posts I’ll be shaping an evolutionary path for putting these technologies in place: an 8-step program towards agile that gains momentum along the way.

Posted in Uncategorized | Leave a comment

Decisions in an information vacuum


The Buy vs. Build decision process is very interesting to me as it is a small, common process that often leads to the undesired outcome.

I covered the secondary costs that often block this process in an earlier post: https://bashamer.wordpress.com/2011/04/14/institutionalizing-%e2%80%9cwinging-it%e2%80%9d/

In this new post I’m trying to see if there is a smaller solution that could have the desired outcome.

The decision process at some point has to come down to “if (Cost < Benefit) Buy();”. Before, I focused on the secondary costs that are added to the process by the policies and procedures that exist in organizations. Changing those procedures would itself carry a high cost, and there is no clear and visible benefit to doing so. So the procedures won’t change.

So let’s look if we can make this evaluation easier by attacking the Benefit part of the equation.

The benefit of buying is that you don’t have to build it. Or in other words, it is the hours saved on initial development plus the hours saved on maintenance. For the hours saved on maintenance you can probably take 10% of the initial estimate per year for server components, 20% for components that interact with consumer software (Office, for example) and 30% for web components, due to the speed of evolution. I imagine mobile will be in the 20-30% range as well. All of these numbers are a bit fuzzy, so apply your own experience.

So our function for benefit now looks like “HoursSaved * (1 + (YearlyMaintenancePercentage * ProductLifeExpectancyInYears) / 100)”. This is neat, but the units don’t work out: we are comparing developer hours to dollars, so we have to add a conversion factor.

Do you know your conversion factor?

Do you know what the company considers the cost of an hour of your time? I’ve only known it at one company, and it surprised me. It was about 3x what I thought it was; it was 6x what I was being paid.

Let’s say companies made this information public: you get your magic factor and learn that an hour of development costs the company $175 (probably in the ballpark once you add QA, defect incidence rate, management, utilities, etc.).

When you can all agree that cutting a task in half saves $10,000, how easy would it be to build up the momentum to put in a request to buy a tool that accomplishes that reduction?
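
To make that concrete, here is a minimal sketch of the benefit side of the equation in C#. The 30% maintenance rate for web components and the $175 fully loaded hour come from the paragraphs above; the hours saved, product lifetime and license cost are made-up numbers for illustration:

using System;

namespace Foo
{
    public static class BuyVsBuildEstimator
    {
        // HoursSaved * (1 + (YearlyMaintenancePercentage * ProductLifeExpectancy) / 100),
        // converted to dollars with the fully loaded cost of a developer hour.
        public static decimal Benefit(
            decimal hoursSaved,
            decimal yearlyMaintenancePercentage,
            decimal productLifeExpectancyInYears,
            decimal costPerDeveloperHour)
        {
            var totalHoursSaved = hoursSaved * (1 + (yearlyMaintenancePercentage * productLifeExpectancyInYears) / 100);
            return totalHoursSaved * costPerDeveloperHour;
        }

        public static void Main()
        {
            // A web component (30% yearly maintenance) at $175 per developer hour;
            // the 40 hours saved, 3 year lifetime and $5,000 license cost are hypothetical.
            var benefit = Benefit(hoursSaved: 40, yearlyMaintenancePercentage: 30, productLifeExpectancyInYears: 3, costPerDeveloperHour: 175);
            var cost = 5000m;

            if (cost < benefit)
                Console.WriteLine($"Buy: benefit of {benefit:C0} exceeds cost of {cost:C0}");
        }
    }
}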

By making the benefit side of the equation easy to compute you can determine the opportunity. Knowing the opportunity is there will lead people to explore alternate solutions.

 

Posted in Uncategorized | Leave a comment

Institutionalizing “winging it”


The following is an attempt at improving third-party product adoption in software. Third-party software can often lead to great savings, but it is often hard to get an actual piece of software onto a project in a reasonable amount of time. Part of this may have to do with the process rather than with the developers who are to use it.

In many companies there are policies and practices that you are to abide by. One of the areas where these policies are strong is when it comes to spending money. In one company I worked for there had to be contracts in place, there had to be master service contracts negotiated, and I recall spending an hour on a conference call to create a clause related to the mini-bar for any possible future consultancy services. These were both billion-dollar companies, and the meeting had 2 technical resources, ~4 lawyers, and some management and sales people. The topic ate about an hour of call time as well as hours of prep time. Assuming a low (but round) number of 100 USD per hour per person, this clause cost the two companies roughly 1,000 USD. I’m not sure what a full mini-bar runs, but I’d be willing to take the chance that we would never have made our money back on that clause. The total cost over the first 2 years of the product we were interested in was less than 5,000 USD.

This seems like an extreme example, but it happens often in smaller instances: long meetings about cheap products, or recreating a cheap product rather than going through the hassle of trying to buy one. The reason these things happen is a lack of measurement of non-monetary resources, and when things are not measured they have no value. So people take the seemingly rational approach of not buying off-the-shelf products and instead reinvent the wheel.

Trying to add measurements to this process so that the costs are tied to time would be extremely laborious and would require relatively large changes in the business. Going down that road does not really resolve anything.

So ideally we want to minimize the cost of the decisions. Although there are many processes to speed up decisions, there is one that is not often mentioned: making decisions in hindsight is very fast and accurate. This seems impossible.

Most purchasing decisions are a predictive process. Predictions are often inaccurate, and making them more accurate requires a lot of analysis. The hindsight scenario is a reactive process, where information is much cheaper to acquire but where nothing can be prevented.

For some decisions you do want a predictive process: big capital expenses (datacenters) or high-risk expenses (people). But many decisions are not like that (three licenses of a 100-dollar piece of software). A reactive approach is not always a good path either, so if you want to institutionalize one you do need some limits.

A simplistic model of a reactive approach is to offer people a budget per year of X (let’s say 1,000 per developer) and allow them to reclaim this money by asking forgiveness afterwards. In effect a developer or group of developers can decide to buy a license in an afternoon, and if it works, and they can show the results, they can go to the budget process and say “we want to be reimbursed 580 for a product that has shown these benefits (and savings)”. There is no risk, and getting an accurate decision is cheap. People who make good decisions get to make more decisions; people who make bad decisions run out of money.

If the decision process gets streamlined, then hopefully third-party tools become more attractive and people will spend less time re-inventing the wheel.

Posted in Uncategorized | 1 Comment

Making it tangible


Tangible things are easy to care about; this is because we are human. We like things that we can hear, see and feel. Even in religion you will see all of these senses engaged: idols, symbols, incense and liturgy are common across religions and provide an anchor for people to focus their energy on.

In software we have not really learned this lesson. Defect prevention, for instance, is trying to prevent something from existing in something that doesn’t exist yet. Trying to rally people around defect prevention is doomed to fail because people can’t anchor their energy on anything. Some people will be able to force themselves and implement practices on sheer force of will, but even they spend enough effort willing themselves into new habits that the opportunity cost should not be ignored.

For me this has been very hard to come to terms with; I like the abstract, can enjoy it and care about it deeply. But I also realize that it can become incredibly hard to force yourself to stay motivated on a bad project. Part of this is because there is nothing to focus your energy on; these projects often have no physical aspects. One thing I find myself doing is creating to-do lists, so that I have small things to focus on and check off. In effect I made the work tangible because that made it easier for me. I realize this now, but only recently.

Some of the development practices that actually are getting traction also have these physical aspects. In agile, for instance, you have the board that holds the tasks, and the physical act of moving a task is quite gratifying. Kanban has the cards that are walked through the company, and the act of handing one to someone else really brings home that something was completed so something else could start. Even in the least agile implementation of agile you will most often see the physical artifacts survive, whereas the more abstract elements get warped or forgotten.

The physical is very powerful, and it is worth using that power to drive your project. Try to give a project a physical identity and meaning; something as seemingly silly as creating a project logo or mascot can actually help people focus on the project more easily. They can keep it on their desk as a physical reminder of why they are doing this and who they are doing it for. Rewarding people with small tokens that can be displayed also makes things more real, especially in an age where money has become abstract and, along with that, has lost some of its power as a reward.

You could, for instance, reward people with Lego™ bricks. Again, this sounds silly; but it is something that people can use, display on their desk, touch, feel and rearrange when motivation is lacking, and it lets them realize that they could build something new if they only had a few more bricks.

It is hard to measure these effects, but if you go looking around you will see that a lot of people have symbols at their desk: a picture of their kid, a gift from a client, a token from their school, hobby or favorite team. All of these symbols and idols allow people to channel their energy more easily and are valuable for that.

Try to get your project on people’s minds, and start by getting it onto their desks.

Posted in Software Engineering | Leave a comment