Using Planguage for specifying requirements

For specifying performance requirements, we use Planguage, a 'language' proposed by Tom Gilb to concisely yet clearly describe requirements. Here we show an example with some basic elements of this method. Check whether you can understand the example below almost immediately.

RQ27: Speed of Luggage Handling at Airport
Scale: Time between <arrival of airplane> and first luggage on belt
Meter: <measure arrival of airplane>, <measure arrival of first luggage on belt>, calculate difference

First we provide a unique reference, 'RQ27' in this case, and a description. Instead of RQ27 one can of course also use a more descriptive unique name, like LuggageHandlingSpeed. Instead of putting the reference and description on one line, some people prefer to use two separate lines with keywords like Reference and Description (or Gist).
The Scale describes what we measure, in this case 'time' between two clearly marked moments.
The fuzzy brackets <> indicate that we acknowledge that <arrival of airplane> is not yet clear enough: is it the touchdown of the aircraft (as Ryanair touts: "Tatatata... another on-time arrival", even if the plane still has to roll to the gate for 10 minutes), or is it the moment of standstill at the gate? Before the requirement can be 'used', the fuzziness has to be resolved, but at this stage we don't want to spend time on this yet.
The Meter describes how we measure on the Scale. The fuzzy brackets indicate that we still have to elaborate what <arrival of airplane> and <arrival of first luggage on belt> mean exactly.
This description can already trigger some questions. For example: Is <arrival of first luggage> appropriate? After all, the luggage handler can take just one bag out of the plane, transport it quickly to the belt and clock the time. Perhaps <arrival of last luggage on belt> is more appropriate? After all, the relevant Stakeholder is the passenger waiting to get his luggage as quickly as possible.
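As an illustration (not part of Planguage itself), the Meter above could be automated roughly like this. It assumes the fuzzy terms have been resolved and that both events are logged as ISO 8601 timestamps; the function name and the chosen resolutions are my own:

```python
from datetime import datetime

def luggage_handling_time(airplane_arrival: str, first_luggage_on_belt: str) -> float:
    """Meter for RQ27: minutes between <arrival of airplane> and
    <arrival of first luggage on belt>. Here we assume the fuzzy terms
    have been resolved to 'standstill at the gate' and 'first bag on
    the belt', both logged as ISO 8601 timestamps."""
    arrival = datetime.fromisoformat(airplane_arrival)
    luggage = datetime.fromisoformat(first_luggage_on_belt)
    return (luggage - arrival).total_seconds() / 60

print(luggage_handling_time("2015-02-19T14:03:00", "2015-02-19T14:11:30"))  # 8.5
```

Note that writing even this small sketch forces the fuzziness into the open: the code cannot be finished until <arrival of airplane> is pinned down.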

You can add more keywords as needed, for example: Stakeholders.

Benchmarks define the playing field. Some examples are:

Past: 2 min [minimum, 2014], 8 min [average, 2014], 83 min [max, 2014]
Current: < 4 min [competitor y, Jan 2015] ← <who said this?>, <Survey Feb2014>
Record: 57 sec [competitor x, Jan 2012]
Wish: < 2 min [2017Q3] ← CEO, 19 Feb 2015, <document ...>

Note the [square brackets], indicating attributes: conditions of when (or where) the specified level applies. Note also the left arrow ←, indicating the source of the information. The fact that the Record here doesn't show a source indicates that we don't even know who said it: it may be just a wild rumour. If that is the case, it may be even better to specify '← wild rumour', to indicate that we know we don't know the source.
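For readers who like to keep such specifications machine-readable, a benchmark level with its [attributes] and ← source could be captured in a small record. The field names are my own, not Planguage keywords:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Level:
    keyword: str                  # e.g. "Past", "Record", "Wish"
    minutes: float                # the level on the Scale, here in minutes
    attributes: Tuple[str, ...]   # the [qualifiers]: when/where the level applies
    source: Optional[str]         # the <- source; None means we don't know who said it

record = Level("Record", 57 / 60, ("competitor x", "Jan 2012"), None)
wish = Level("Wish", 2.0, ("2017Q3",), "CEO, 19 Feb 2015")
```

Making the source field explicit has the same effect as the '← wild rumour' trick above: an empty source is immediately visible.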

Then we describe the Requirements, with at least a Tolerable and a Goal value:

Tolerable: < 10 min [99%, Q4] ← SLA
Tolerable: < 15 min [100%, Q4, Schiphol] ← SLA
Goal: < 15 min [99%, Q2], < 10 min [99%, Q3], < 5 min [99%, Q4] ← marketing

The power of specifying requirements in this fashion is that it greatly stimulates communication about, and understanding of, the requirements. We often see that perceived initial requirements quickly change into other, more appropriate requirements, reducing the risk that we start working on a great solution for the wrong problem. During development, the architecture and design try to cover the set of usually conflicting requirements as well as possible. Having a range between Tolerable and Goal leaves room for intelligent compromises.
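The Tolerable/Goal range can then drive a simple status check during development. This classification scheme is my own sketch of the idea, assuming 'smaller is better' on this Scale:

```python
def status(measured: float, tolerable: float, goal: float) -> str:
    """Classify a measured level against a Tolerable and a Goal limit.
    On this Scale smaller is better, so tolerable >= goal."""
    if measured > tolerable:
        return "failing"     # beyond the Tolerable limit: not acceptable
    if measured > goal:
        return "tolerable"   # safe, but there is still value in improving
    return "goal met"        # further improvement would be Gold Plating

print(status(12.0, tolerable=10.0, goal=5.0))  # failing
print(status(7.0, tolerable=10.0, goal=5.0))   # tolerable
print(status(4.0, tolerable=10.0, goal=5.0))   # goal met
```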

I like Tom Gilb's statement about the use of numerical values:
"The fact that we can set numeric objectives, and track them, is powerful; but in fact is not the main point. The main purpose of quantification is to force us to think deeply, and debate exactly, what we mean; so that others, later, cannot fail to understand us".

Evolutionary steps

Engineers are trained to achieve planned results by design (Figure 1). Sometimes, however, we reach a goal by improving different parts of the system, one step at a time. Many developers are used to trying to accomplish as much as possible in one step. In Evo, we select the smallest possible step. If this step later turns out not to be the right one, we have to redo as little as possible. And the step that takes the least time leaves us the most time for whatever we still have to do.

Figure 2 shows how we are safe after one delivery step (better than Tolerable: at least we don't fail). After two more deliveries we reach the Goal value, indicating that we have achieved our goal and that we gain nothing by continuing. Hence we stop. This way, we mitigate the risk of Gold Plating: doing more than needed.

In real projects, we have to cope with several requirements at the same time. In one Evolutionary Delivery step, we work on Requirement 1 (Figure 3: step 1), getting past the Tolerable level. In the next DeliveryCycle, we had better first get another requirement beyond the Tolerable level (step 2). In some cases, an improvement of one performance attribute may adversely affect another, which we may then improve in the next step (step 3). In similar fashion we deliver step by step until all requirements are at the Goal level, or until the budget in time or money is depleted.
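The step-selection policy described above, first getting every requirement past Tolerable and only then pushing toward Goal, can be sketched as a simple priority rule. The data layout and requirement names are hypothetical:

```python
from typing import Dict, Optional, Tuple

def next_requirement(levels: Dict[str, Tuple[float, float, float]]) -> Optional[str]:
    """Pick which requirement to work on in the next DeliveryCycle.
    levels maps a requirement name to (measured, tolerable, goal),
    where smaller is better. Requirements still failing their Tolerable
    limit come first; otherwise pick the one furthest from its Goal.
    None means all goals are met: continuing would be Gold Plating."""
    failing = [r for r, (m, t, g) in levels.items() if m > t]
    if failing:
        # most urgent: the requirement furthest beyond its Tolerable limit
        return max(failing, key=lambda r: levels[r][0] - levels[r][1])
    improvable = [r for r, (m, t, g) in levels.items() if m > g]
    if improvable:
        return max(improvable, key=lambda r: levels[r][0] - levels[r][2])
    return None

reqs = {"RQ27": (7.0, 10.0, 5.0), "RQ31": (12.0, 10.0, 8.0)}
print(next_requirement(reqs))  # RQ31: still beyond its Tolerable limit
```

A real project would of course also weigh effort and inter-requirement effects when choosing a step; this only captures the Tolerable-first ordering.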

In Evo we never overrun the budgets. As soon as we are beyond the 'safe' Tolerable levels for all requirements, we can basically stop at any time, for instance if the customer decides that time-to-market is more important than further improvements. The Evo Requirements Engineering Approach addresses the risks of delivering the wrong things, or delivering at the wrong time.

Once someone told me that the requirements should be 'SMART'.

Well, Planguage requirements are:

S - Specific: the Scale makes it specific
M - Measurable: the Meter makes it measurable; some call this 'testable'
A - Attainable: the Benchmarks deal with attainability
R - Realisable: the Requirements deal with realisability
T - Timely (some say: Traceable): the [attributes] deal with timeliness, and the ← sources deal with traceability

Use Document Inspections to check whether the resulting requirement description really is unambiguous, clear, etc., for the intended readership (define the intended readership); see e.g. the 16-page Inspection Manual, page 14: "Generic engineering specification rules".