Documentation, before or after? (What if scenarios)

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced; all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A,” forty pounds a “B,” and so on. Those being graded on “quality,” however, needed to produce only one pot—albeit a perfect one—to get an “A.” Well, come grading time and a curious fact emerged: the works of the highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work—and learning from their mistakes—the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
David Bayles and Ted Orland
in their book, “Art and Fear: Observations on the Perils (and Rewards) of Artmaking”.

Let’s say you’ve bought a commercial off-the-shelf (COTS) OSS that nobody in your organisation has deep expertise with. It is incredibly configurable, but your budget doesn’t allow you to ask the vendor to customise it for you (i.e., to have developers write customised code for you). Do you write extensive documentation before or after implementation?

I’d hazard a guess that most projects create documents such as specifications, requirements analyses, interface specifications, test cases and so on before implementation. There are so many “what if?” scenarios for your inexperienced team to ponder: how the tools might respond to certain configurations, how people will interact with the tools and processes, what downstream risks will appear, and more. You have to get on top of them early, right?

Maybe… But what if your team just started doing? Dabbling, trying, tweaking, learning, failing, improving. Would they return results similar to those of the potters in Bayles and Orland’s book? Once they’ve reached an optimised state through many trials and errors, does it make sense to then document the experience for future users to learn from? That documentation is future-facing, having relevance beyond the point of user acceptance, rather than backward-facing (a checklist to hold the implementers to account prior to user acceptance, but with little future value).

For more, see “Thinking, Talking, Documenting, Doing.”


2 Responses

  1. The OSS industry could do more to encourage a hacker ethos. Right now, it’s difficult or impossible to get trial software to tinker with, or to access all the APIs of a system that you’ve purchased. Some OSS vendors will charge a fortune for APIs; some will explicitly restrict you from accessing data at the RDBMS level. OK, some of these limits exist for a reason. As a product manager, I don’t want users circumventing an API to edit data directly in tables (the list of reasons is too long for this comment box). But vendors could set some boundaries for acceptable ‘hacking’, or create more open, low-level APIs (as the trend 10 years ago was to sell high-level SOA-style APIs only).

  2. Hi James,
    Valid points indeed.
    I was looking at it more from the perspective of configuring data sets within a DEV/TEST (sandpit) environment that a vendor can create straight out of the box. It might not be a vendor’s preferred option, but if milestone payments incentivise this model, or if it’s the difference between winning and losing a contract, then most vendors will build a sandpit environment for you.
    The sandpit helps with configuring and refining the user experience (eg naming conventions, processes, etc). Once the users are happy with how the application(s) are working for them, you have a refined and stable set of data on which to build your interfaces. As you indicate, APIs and integration are very important, but they would be a secondary focus from the sandpit.

    From my own experience, I learn a product much faster if you give me a blank sandpit and let me manually simulate scenarios within it rather than reading documentation (user guides, spec sheets, interface specifications, etc). Note that I’m thinking of inventory management, order management, service catalogs, outside plant / GIS, etc here.

    Your point about getting data into the OSS is also highly valid for tools like alarm/fault and performance management, which rely on data streams coming across an interface. I’ve always found that vendors are able to simulate these data feeds for the purpose of demonstrating their products, so I tend to dabble with these rather than hacking APIs (initially at least 🙂 ).

    Thanks for your insights, James.
