By Walter Lindsay, Director of Solution Architecture, Liaison Technologies
“Sadly, it is … small-scale mass cheating, not the high-profile cases, that is most corrosive to society.” – Dan Ariely, The Wall Street Journal
Have you ever interviewed someone who claimed to be an expert in something but could not answer basic questions? Do you then find yourself figuring out whether the person is ignorant, lying, or perhaps is an expert in a sense you hadn’t considered? The third can be intriguing; the first is a strong warning; but a lie discredits the candidate.
Unfortunately, data integration projects can be done in ways that force those who come later to ask similar questions.
Let’s illustrate this with an example SOA project. At the start, we have a process model and an existing back-end application. Let’s say that business analysts create spreadsheets to document the application data formats. A data architect generates an XML Schema for the operations, and documents their usage in still more spreadsheets. The spreadsheets are carefully saved. And as the project progresses, the implementation slightly diverges from the spreadsheets. After testing, the implementation looks perfect, and so it is deployed. Then the specifications (spreadsheets) and implementation source code are all carefully stored.
At some point in the future, the service needs to be extended. The original team members are unavailable, so a new person is tasked to make the changes. He or she reviews the requirements for this new little project, reviews the specifications (spreadsheets) from the previous team, and then starts making changes and running tests. At some point, it may become apparent that the specifications and the implementation diverge – forcing the new person to decide whether he or she has found a bug in the implementation, whether the specifications are out of date, and whether the divergence matters.
Specifications of this kind are complex enough that some number of errors in the spreadsheets is likely.
Now, imagine that the project team was under time pressure, or someone didn’t want to stay late to update the spreadsheets after a design change, or someone had been sloppy in writing a spreadsheet and figured nobody would notice if they didn’t mention the mistakes.
Dan Ariely’s article “Why We Lie” describes the result of research done over a decade. Ariely writes, “What we have found, in a nutshell: Everybody has the capacity to be dishonest, and almost everybody cheats—just by a little. Except for a few outliers at the top and bottom, the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society.”
Ariely then describes experiments which indicate that a person who knows he or she is likely to get caught will be less likely to cheat.
So, what kind of cheating in integration projects will lead to problems later? How can one detect those cases? And is detection easy enough to be worth it?
To sketch an answer, let’s look at some characteristics of projects, and at what follows after projects end.
First, if the implementation is its own documentation, the original specifications provide helpful historical perspective, but nothing more. That is, if the implementation is accessible enough – for instance, by being exposed through a repository in a way non-experts can understand and search – then the implementation is its own authoritative documentation.
Second, there are places where projects intentionally expose data or provide operations. The other parts are considered internal to the implementation. The exposed (external) parts may touch other systems, so if one can verify that the external parts of a project behave as expected, then the overall system is likely to behave as expected. Expected inputs produce expected outputs. QA testing often achieves this.
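The idea that expected inputs should produce expected outputs at the exposed boundary can be sketched as a small black-box contract test. This is a minimal illustration, not Liaison tooling; the operation name and the test cases are hypothetical.

```python
# A minimal sketch of black-box contract testing for an exposed operation.
# The real operation would call the deployed service; this stand-in just
# shows the shape of the check.

def lookup_customer(request: dict) -> dict:
    """Hypothetical stand-in for the exposed operation under test."""
    return {"id": request["id"], "status": "active"}

# Each case pairs an expected input with the output the specification promises.
CONTRACT_CASES = [
    ({"id": "42"}, {"id": "42", "status": "active"}),
    ({"id": "7"}, {"id": "7", "status": "active"}),
]

def run_contract_tests(operation, cases):
    """Return the list of (request, expected, actual) triples that failed."""
    failures = []
    for request, expected in cases:
        actual = operation(request)
        if actual != expected:
            failures.append((request, expected, actual))
    return failures

# An empty failure list means the external behavior matches the specification.
print(run_contract_tests(lookup_customer, CONTRACT_CASES))  # prints []
```

Because the test exercises only the exposed boundary, it stays valid even when internal details of the implementation change.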
Third, ensuring that parts of a system don’t receive unexpected inputs – and that, if they do, they don’t in turn produce unexpected outputs – allows the new person we spoke of earlier to make changes rapidly in the future. One form of cheating is allowing team members to generate inaccurate outputs (which are the inputs to something else) that happen not to cause a problem in the current project. Thus, strategic governance checks from outside the project team keep the truth fabric and agility of the organization strong. If project teams know they will get caught cheating, honest people will be more honest.
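An outside-the-team governance check of this sort can be as simple as rejecting any message whose fields drift from the agreed interface, even if today’s consumer happens to tolerate the drift. The sketch below uses plain field-set comparison rather than full XML Schema validation, and the field names are hypothetical.

```python
# A sketch of a governance gate: flag messages whose fields diverge from the
# agreed interface, regardless of whether the current consumer tolerates them.

AGREED_FIELDS = {"order_id", "customer_id", "amount"}  # hypothetical interface

def governance_check(message: dict) -> list:
    """Return a list of violations; an empty list means the message conforms."""
    violations = []
    missing = AGREED_FIELDS - message.keys()
    extra = message.keys() - AGREED_FIELDS
    if missing:
        violations.append(f"missing fields: {sorted(missing)}")
    if extra:
        violations.append(f"undocumented fields: {sorted(extra)}")
    return violations

# A message that "works" downstream today but quietly adds an undocumented field:
print(governance_check(
    {"order_id": "A1", "customer_id": "C9", "amount": 10, "rush": True}
))  # prints ["undocumented fields: ['rush']"]
```

Running a check like this outside the project team is what gives it teeth: the team knows the divergence will be caught even when no current consumer complains.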
“While ethics lectures and training seem to have little to no effect on people, reminders of morality—right at the point where people are making a decision—appear to have an outsize effect on behavior.” – Dan Ariely
OK, so strategic governance checks reduce “corrosive” cheating. And tooling that supports a “single source of authority” model reduces both inadvertent errors and “corrosive” cheating. With a little thought we would likely find other ways to apply Ariely’s insights.
“All of this means that, although it is obviously important to pay attention to flagrant misbehaviors, it is probably even more important to discourage the small and more ubiquitous forms of dishonesty—the misbehavior that affects all of us, as both perpetrators and victims. This is especially true given what we know about the contagious nature of cheating and the way that small transgressions can grease the psychological skids to larger ones.” — Dan Ariely
IT that provides agility for a business requires a high level of discipline and honesty. Ariely’s article describes experiments in which the conditions under which a person works affect his or her honesty, which may provide insight into how you can help keep people honest in your workplace. Similarly, checking on Liaison’s cloud and data integration governance capabilities may help you solve data integration challenges, or long-term challenges in making your organization agile.
And I will leave you with this thought: In writing an article about cheating – a built-in reminder of morality right at the point when I was making decisions – did I cheat while writing it?